Docker and Self-Hosting: My First Steps

"It works on my machine" – every developer has said this at least once. When I first encountered Docker, I didn't understand why containerization was such a big deal. Applications are applications, right? Just install them and run them. But as I ventured into self-hosting various services on my VPS, I quickly discovered why Docker has revolutionized how we deploy software. This is my journey from Docker skeptic to container enthusiast.

My Pre-Docker Struggles

Before Docker, setting up services on my VPS was a nightmare. Here's what a typical installation looked like:

The Old Way: Dependency Hell

```bash
# Trying to install a Node.js app
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt-get install -y nodejs

# App needs specific Node version... uninstall and try again
sudo apt-get remove nodejs
curl -fsSL https://deb.nodesource.com/setup_14.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install app dependencies
npm install

# Error: needs Python 2.7 for node-gyp
sudo apt-get install python2.7

# Error: needs build tools
sudo apt-get install build-essential

# Finally runs... but breaks my Python 3 scripts
# Now I need to manage multiple Python versions

# Install another app that needs Node 18...
# Everything breaks!
```

Each application had its own requirements, and they often conflicted. I was constantly worried about breaking existing services when installing new ones. Virtual environments helped with Python, but what about system-level dependencies? That's when I decided to give Docker a serious try.

Docker Basics: Understanding Containers

The concept that clicked for me was this: Docker containers are like lightweight, portable computers within your computer. Each container has its own filesystem, processes, and network, but shares the host's kernel. Here's my first successful Docker experience:

My First Docker Container

```bash
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add myself to docker group (avoid sudo)
sudo usermod -aG docker $USER
newgrp docker

# Run my first container
docker run hello-world

# The magic moment - running a web server
docker run -d -p 8080:80 nginx

# Check if it's running
docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS
a1b2c3d4e5f6   nginx   "/docker-entrypoint.…"   5 seconds ago   Up 4 seconds

# Visit http://localhost:8080 - it works!

# Stop and remove
docker stop a1b2c3d4e5f6
docker rm a1b2c3d4e5f6

# Or in one command
docker rm -f a1b2c3d4e5f6
```

The simplicity was mind-blowing. No installation, no configuration, no dependency conflicts. Just docker run nginx and I had a working web server. But this was just the beginning.

Docker Images vs Containers

Understanding the difference between images and containers was crucial:

  • Image: A blueprint or template (like a class in programming)
  • Container: A running instance of an image (like an object)

Working with Images and Containers

```bash
# List images
docker images

# Pull an image without running it
docker pull ubuntu:22.04

# Run a container from an image
docker run -it ubuntu:22.04 bash

# Inside the container
root@container:/# apt update
root@container:/# apt install curl
root@container:/# exit

# Container stops when we exit
docker ps -a  # Shows stopped containers

# Create an image from our modified container
docker commit <container-id> my-ubuntu-with-curl

# Now we can run containers with curl pre-installed
docker run -it my-ubuntu-with-curl curl --version
```
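
Seeing what docker commit actually stored helped the image/container distinction sink in: docker history lists an image's layers, and the committed filesystem diff sits on top of the base image's layers.

```bash
# The committed layer appears on top of the base Ubuntu layers
docker history my-ubuntu-with-curl

# Compare with the unmodified base image
docker history ubuntu:22.04
```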

Dockerfile: Automating Image Creation

Creating images manually with docker commit wasn't scalable. Dockerfiles changed everything:

My First Dockerfile

```dockerfile
# Start from a base image
FROM node:16-alpine

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy application code
COPY . .

# Expose port
EXPOSE 3000

# Command to run the app
CMD ["npm", "start"]
```
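
One caveat I ran into: COPY . . happily drags node_modules and .git into the image. A .dockerignore file next to the Dockerfile (same matching syntax as .gitignore) keeps the build context lean; the entries below are a typical sketch for a Node project.

```
# .dockerignore - keep these out of the build context
node_modules
npm-debug.log
.git
.env
```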

Building and running this was simple:

Building Custom Images

```bash
# Build the image
docker build -t my-node-app .

# Run container from our custom image
docker run -d -p 3000:3000 my-node-app

# View logs
docker logs <container-id>

# Execute commands in running container
docker exec -it <container-id> sh
```
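
A habit I only picked up later, but worth mentioning here: docker build accepts an explicit version tag, and one image can carry several tags. The version numbers below are just examples.

```bash
# Build with an explicit version instead of the implicit :latest
docker build -t my-node-app:1.0.0 .

# The same image can carry additional tags
docker tag my-node-app:1.0.0 my-node-app:stable

# Run the pinned version
docker run -d -p 3000:3000 my-node-app:1.0.0
```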

Docker Compose: Managing Multi-Container Applications

As I started self-hosting more complex applications, managing individual containers became unwieldy. Docker Compose was the solution:

My First Docker Compose Setup

```yaml
version: '3.8'

services:
  # WordPress site
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: secret
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress_data:/var/www/html
    depends_on:
      - db
    restart: unless-stopped

  # MySQL database
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: rootsecret
    volumes:
      - db_data:/var/lib/mysql
    restart: unless-stopped

  # phpMyAdmin for database management
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8081:80"
    environment:
      PMA_HOST: db
      PMA_USER: root
      PMA_PASSWORD: rootsecret
    depends_on:
      - db
    restart: unless-stopped

volumes:
  wordpress_data:
  db_data:
```

With one command, I could spin up an entire WordPress stack:

Docker Compose Commands

```bash
# Start all services
docker-compose up -d
# (newer Docker installs ship Compose as a plugin: docker compose up -d)

# View logs
docker-compose logs -f

# Stop all services
docker-compose down

# Stop and remove volumes (careful!)
docker-compose down -v

# Update images and restart
docker-compose pull
docker-compose up -d

# Scale a service
docker-compose up -d --scale wordpress=3
```

My Self-Hosting Journey

With Docker knowledge in hand, I embarked on a self-hosting spree. Here are some services I successfully deployed:

1. Nextcloud - Personal Cloud Storage

Nextcloud with Docker

```yaml
version: '3'

services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - 8082:80
    volumes:
      - nextcloud_data:/var/www/html
      - ./data:/var/www/html/data
    environment:
      - MYSQL_PASSWORD=nextcloud
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=nextcloud_db
    restart: unless-stopped

  nextcloud_db:
    image: mariadb:10.5
    volumes:
      - nextcloud_db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
      - MYSQL_PASSWORD=nextcloud
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped

volumes:
  nextcloud_data:
  nextcloud_db:
```

2. Portainer - Docker Management UI

Portainer for Easy Docker Management

```bash
# Portainer makes Docker management visual
docker volume create portainer_data

docker run -d \
  -p 9000:9000 \
  -p 8000:8000 \
  --name portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

# Access at http://localhost:9000
# Now I can manage containers through a web UI!
```

3. GitLab - Self-Hosted Git

GitLab CE with Docker

```yaml
version: '3.6'

services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: 'gitlab.local'
    ports:
      - '8083:80'
      - '8443:443'
      - '2222:22'
    volumes:
      - gitlab_config:/etc/gitlab
      - gitlab_logs:/var/log/gitlab
      - gitlab_data:/var/opt/gitlab
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.local:8083'
        gitlab_rails['gitlab_shell_ssh_port'] = 2222
        # Reduce memory usage for small VPS
        postgresql['shared_buffers'] = "256MB"
        postgresql['max_worker_processes'] = 2
        sidekiq['max_concurrency'] = 9
        prometheus_monitoring['enable'] = false
    restart: unless-stopped

volumes:
  gitlab_config:
  gitlab_logs:
  gitlab_data:
```

Docker Networking: Connecting Containers

Understanding Docker networking was crucial for complex setups:

Docker Networking Concepts

```bash
# Default bridge network
docker network ls

# Create custom network
docker network create myapp-network

# Run containers on the same network
docker run -d --name webapp --network myapp-network nginx
docker run -d --name database --network myapp-network \
  -e MYSQL_ROOT_PASSWORD=secret mysql:8
# (the mysql image refuses to start without a root password env)

# Containers can communicate by name
# (the nginx image may not ship ping; getent hosts database also proves DNS works)
docker exec webapp ping database

# Inspect network
docker network inspect myapp-network

# Connect existing container to network
docker network connect myapp-network existing-container

# Port mapping for external access
docker run -d -p 8080:80 --name web nginx
# 8080 = host port, 80 = container port
```

Volume Management: Persistent Data

One of my early mistakes was not understanding volumes, leading to data loss when I removed containers:

Managing Persistent Data

```bash
# Named volumes (preferred)
docker volume create myapp_data
docker run -v myapp_data:/app/data myapp

# Bind mounts (for development)
docker run -v $(pwd)/src:/app/src myapp

# Anonymous volumes (avoid these)
docker run -v /app/data myapp

# List volumes
docker volume ls

# Inspect volume
docker volume inspect myapp_data

# Clean up unused volumes
docker volume prune

# Backup a volume
docker run --rm \
  -v myapp_data:/source \
  -v $(pwd):/backup \
  alpine tar -czf /backup/myapp_backup.tar.gz -C /source .

# Restore a volume
docker run --rm \
  -v myapp_data:/target \
  -v $(pwd):/backup \
  alpine tar -xzf /backup/myapp_backup.tar.gz -C /target
```
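
Once I trusted that backup pattern, I wrapped it in a small script and ran it from cron, so backups stopped depending on my memory. A minimal sketch, assuming a named volume and a destination directory passed as arguments:

```bash
#!/usr/bin/env sh
# backup-volume.sh - archive a named Docker volume with a timestamp
# Usage: ./backup-volume.sh myapp_data /backups
set -eu

VOLUME="$1"
DEST="$2"
STAMP="$(date +%Y%m%d-%H%M%S)"

# Mount the volume read-only and tar it into the destination directory
docker run --rm \
  -v "${VOLUME}:/source:ro" \
  -v "${DEST}:/backup" \
  alpine tar -czf "/backup/${VOLUME}-${STAMP}.tar.gz" -C /source .
```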

Resource Management: Not Crashing My VPS

With limited VPS resources (2GB RAM), I learned to constrain containers:

Docker Resource Limits

```yaml
version: '3.8'

services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          cpus: '0.5'      # Half a CPU
          memory: 512M     # 512 MB RAM
        reservations:
          cpus: '0.25'
          memory: 256M
    restart: unless-stopped
    # Note: classic docker-compose only honors deploy.resources
    # outside Swarm when run with the --compatibility flag

# Equivalent limits with docker run:
# docker run -d \
#   --memory="512m" \
#   --cpus="0.5" \
#   --restart=unless-stopped \
#   myapp:latest
```

Security Considerations

As I deployed more services, security became paramount:

Docker Security Best Practices

```bash
# Don't run containers as root
# In Dockerfile:
USER node

# Use specific image versions (not :latest in production)
# In Dockerfile:
FROM node:16.20.0-alpine

# Scan images for vulnerabilities
# (newer Docker versions replace docker scan with Docker Scout)
docker scan myapp:latest

# Use secrets for sensitive data (requires Swarm mode)
docker secret create db_password ./password.txt
docker service create \
  --secret db_password \
  --env DB_PASSWORD_FILE=/run/secrets/db_password \
  myapp

# Network isolation
docker network create frontend
docker network create backend
# Only connect containers that need to communicate

# Read-only containers where possible
docker run --read-only \
  --tmpfs /tmp \
  --tmpfs /run \
  myapp

# Limit capabilities
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp
```
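
The USER node line works because the official Node images ship a node user. For base images that don't, you create one yourself; here's a sketch for an Alpine-based image (the app user/group names and run.sh entrypoint are placeholders of mine):

```dockerfile
FROM alpine:3.18

# Create an unprivileged user and group for the app
RUN addgroup -S app && adduser -S app -G app

WORKDIR /app
# Hand file ownership to the app user while copying
COPY --chown=app:app . .

# Drop root before the process starts
USER app
CMD ["./run.sh"]
```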

My Docker Workflow Evolution

Over time, I developed a workflow for deploying new services:

My Standard Deployment Template

```yaml
version: '3.8'

services:
  app:
    image: ${APP_IMAGE:-myapp:latest}
    container_name: ${APP_NAME:-myapp}
    restart: unless-stopped
    networks:
      - internal
      - proxy
    volumes:
      - app_data:/data
      - ./config:/config:ro
    environment:
      - NODE_ENV=production
      - DB_HOST=database
    env_file:
      - .env
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.${APP_NAME}.rule=Host(`${APP_DOMAIN}`)"
      - "traefik.http.routers.${APP_NAME}.tls=true"
      - "traefik.http.routers.${APP_NAME}.tls.certresolver=letsencrypt"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    depends_on:
      - database

  database:
    image: postgres:15-alpine
    restart: unless-stopped
    networks:
      - internal
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}

networks:
  internal:
    internal: true
  proxy:
    external: true

volumes:
  app_data:
  db_data:
```
Debugging Docker Issues

Learning to debug containerized applications was essential:

Docker Debugging Techniques

```bash
# View logs
docker logs -f container_name
docker-compose logs -f service_name

# Execute commands in running container
docker exec -it container_name sh
docker exec container_name ps aux

# Debug failed containers
docker run -it --entrypoint sh image_name

# Inspect container details
docker inspect container_name

# Check resource usage
docker stats

# Debug networking
docker exec container_name ping other_container
docker exec container_name nslookup google.com
docker port container_name

# Copy files from container
docker cp container_name:/path/to/file ./local_file

# Debug build issues
docker build --no-cache -t myapp .
docker build --progress=plain -t myapp .

# Clean up everything and start fresh
docker-compose down -v
docker system prune -a --volumes
```

Lessons Learned

My Docker journey taught me valuable lessons:

  • Start simple: Don't try to containerize everything at once
  • Use official images: They're usually well-maintained and documented
  • Pin versions: Never use :latest in production
  • One process per container: Follow the Unix philosophy
  • Logs to stdout: Let Docker handle log management
  • Health checks are crucial: They prevent cascading failures (see the sketch after this list)
  • Backup volumes: Containers are ephemeral, data isn't
  • Monitor resources: Containers can consume more than expected
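
On health checks specifically: once a healthcheck is defined, Docker tracks its state, which makes "is it actually up?" scriptable. The container name myapp below is a placeholder.

```bash
# Health shows up in the status column ("healthy", "unhealthy", "starting")
docker ps --format 'table {{.Names}}\t{{.Status}}'

# Query one container's health state directly
docker inspect --format '{{.State.Health.Status}}' myapp

# Dump the recent probe results as JSON
docker inspect --format '{{json .State.Health}}' myapp
```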

My Current Self-Hosted Stack

Today, my VPS runs a carefully curated stack of services:

  • Traefik: Reverse proxy with automatic SSL
  • Portainer: Docker management UI
  • Nextcloud: Personal cloud storage
  • Vaultwarden: Password manager (Bitwarden compatible)
  • Uptime Kuma: Service monitoring
  • Grafana + Prometheus: Metrics and dashboards
  • GitLab CE: Code repository and CI/CD
  • PostgreSQL: Database for various services
  • Redis: Caching and session storage

Dockerizing Network Services

One of my most interesting Docker projects was containerizing various networking tools and VPN-related services. This taught me a lot about Docker networking and security.

Running Headscale in Docker

Headscale with Docker Compose

```yaml
version: '3.8'

services:
  headscale:
    image: headscale/headscale:latest
    container_name: headscale
    volumes:
      - ./config:/etc/headscale
      - headscale_data:/var/lib/headscale
    ports:
      - "8085:8080"
    environment:
      - TZ=Europe/Berlin
    command: headscale serve
    restart: unless-stopped
    networks:
      - headscale_net

  # Headscale UI (unofficial but helpful)
  headscale-ui:
    image: ghcr.io/gurucomputing/headscale-ui:latest
    container_name: headscale-ui
    ports:
      - "8086:80"
    environment:
      - HEADSCALE_URL=http://headscale:8080
    depends_on:
      - headscale
    restart: unless-stopped
    networks:
      - headscale_net

  # PostgreSQL for Headscale (better than SQLite)
  # (Headscale only uses it if the database settings in
  # ./config/config.yaml point at this service)
  postgres:
    image: postgres:15-alpine
    container_name: headscale_db
    environment:
      - POSTGRES_USER=headscale
      - POSTGRES_PASSWORD=secure_password
      - POSTGRES_DB=headscale
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - headscale_net

networks:
  headscale_net:
    driver: bridge

volumes:
  headscale_data:
  postgres_data:
```
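
With the stack up, day-to-day administration happens through the headscale CLI inside the container. A sketch of the first steps; note that subcommand names have shifted between Headscale releases (older versions manage "namespaces" where newer ones use "users"), so check headscale --help for your version. The homelab user name is a placeholder.

```bash
# Create a user to own your devices
docker exec headscale headscale users create homelab

# Generate a pre-auth key so a device can register non-interactively
docker exec headscale headscale preauthkeys create --user homelab --expiration 1h

# List registered machines
docker exec headscale headscale nodes list
```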

Docker Network Isolation for Security

I learned to use Docker networks to isolate services properly:

Secure Docker Networking

```bash
# Create isolated networks
docker network create --internal database_net
docker network create public_net
docker network create management_net

# Run services with proper isolation
# Database - internal only
# (the postgres image refuses to start without a password)
docker run -d \
  --name postgres \
  --network database_net \
  -e POSTGRES_PASSWORD=secret \
  postgres:15

# API service - can reach database and public
docker run -d \
  --name api \
  --network public_net \
  myapi:latest

# Connect API to database network
docker network connect database_net api

# Management tools - separate network
docker run -d \
  --name portainer \
  --network management_net \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer-ce

# Inspect network connections
docker network inspect database_net
docker port api

# Security: Database has no public exposure!
```

Challenges with Containerized Network Services

Running network services in Docker brought unique challenges:

Network Services Docker Challenges

```yaml
# (Illustrative fragments - each service belongs under a services: key)

# Problem 1: UDP services (like WireGuard)
# Docker's NAT can interfere with UDP
wireguard:
  image: linuxserver/wireguard
  cap_add:
    - NET_ADMIN
    - SYS_MODULE
  environment:
    - PUID=1000
    - PGID=1000
  volumes:
    - ./config:/config
    - /lib/modules:/lib/modules:ro
  ports:
    - "51820:51820/udp"  # UDP port mapping
  sysctls:
    - net.ipv4.conf.all.src_valid_mark=1
    - net.ipv4.ip_forward=1
  restart: unless-stopped

# Problem 2: Container needs host networking
# Some services need direct network access
derp_server:
  image: tailscale/derper
  network_mode: host  # Full host network access
  environment:
    - DERPER_HOSTNAME=derp.example.com
    - DERPER_VERIFY_CLIENTS=true
  volumes:
    - ./certs:/certs:ro
  restart: unless-stopped

# Problem 3: Privileged containers
# Network tools often need elevated privileges
network_monitor:
  image: netdata/netdata
  cap_add:
    - SYS_PTRACE
  security_opt:
    - apparmor:unconfined
  volumes:
    - /proc:/host/proc:ro
    - /sys:/host/sys:ro
    - /var/run/docker.sock:/var/run/docker.sock:ro
```

My Network Services Stack

Here's the complete network services stack I ended up running:

Complete Network Services Stack

```yaml
version: '3.8'

services:
  # Traefik - Reverse proxy with automatic SSL
  traefik:
    image: traefik:v2.10
    container_name: traefik
    security_opt:
      - no-new-privileges:true
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"  # Dashboard
    volumes:
      - ./traefik.yml:/traefik.yml:ro
      - ./acme.json:/acme.json
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`traefik.local`)"
      - "traefik.http.routers.dashboard.service=api@internal"
    restart: unless-stopped

  # Pi-hole - Network-wide ad blocking
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    hostname: pihole
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8081:80/tcp"
    environment:
      - TZ=Europe/Berlin
      - WEBPASSWORD=secure_password
      - PIHOLE_DNS_=1.1.1.1;1.0.0.1
    volumes:
      - pihole_data:/etc/pihole
      - dnsmasq_data:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

  # WireGuard - VPN server
  wireguard:
    image: linuxserver/wireguard
    container_name: wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - SERVERURL=vpn.example.com
      - SERVERPORT=51820
      - PEERS=laptop,phone,tablet
      - PEERDNS=auto
    volumes:
      - wireguard_data:/config
      - /lib/modules:/lib/modules:ro
    ports:
      - "51820:51820/udp"
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv4.ip_forward=1
    restart: unless-stopped

  # Nginx Proxy Manager - Easy reverse proxy
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx_proxy_manager
    ports:
      - "8082:80"
      - "8443:443"
      - "8083:81"  # Admin interface
    volumes:
      - npm_data:/data
      - npm_letsencrypt:/etc/letsencrypt
    restart: unless-stopped

volumes:
  pihole_data:
  dnsmasq_data:
  wireguard_data:
  npm_data:
  npm_letsencrypt:
```
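
With this many services publishing ports, collisions became a real risk, so before adding anything new I check what's already bound:

```bash
# Published ports per container
docker ps --format 'table {{.Names}}\t{{.Ports}}'

# Cross-check what is actually listening on the host
sudo ss -tulpn
```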

Monitoring Containerized Services

Monitoring became crucial with so many services:

Monitoring Docker Services

```yaml
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    ports:
      - "9090:9090"
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - GF_USERS_ALLOW_SIGN_UP=false
    ports:
      - "3000:3000"
    restart: unless-stopped

  # Docker metrics exporter
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8084:8080"
    restart: unless-stopped

  # Uptime monitoring
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    volumes:
      - uptime_data:/app/data
    ports:
      - "3001:3001"
    restart: unless-stopped

volumes:
  prometheus_data:
  grafana_data:
  uptime_data:
```
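
The compose file mounts a ./prometheus.yml that tells Prometheus what to scrape. A minimal sketch that scrapes Prometheus itself plus the cAdvisor service defined above (the job names are my own):

```yaml
# prometheus.yml - minimal scrape configuration
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
```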
