"It works on my machine" – every developer has said this at least once. When I first encountered Docker, I didn't understand why containerization was such a big deal. Applications are applications, right? Just install them and run them. But as I ventured into self-hosting various services on my VPS, I quickly discovered why Docker has revolutionized how we deploy software. This is my journey from Docker skeptic to container enthusiast.
My Pre-Docker Struggles
Before Docker, setting up services on my VPS was a nightmare. Here's what a typical installation looked like:
The Old Way: Dependency Hell

```bash
# Trying to install a Node.js app
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt-get install -y nodejs

# App needs specific Node version... uninstall and try again
sudo apt-get remove nodejs
curl -fsSL https://deb.nodesource.com/setup_14.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install app dependencies
npm install

# Error: needs Python 2.7 for node-gyp
sudo apt-get install python2.7

# Error: needs build tools
sudo apt-get install build-essential

# Finally runs... but breaks my Python 3 scripts
# Now I need to manage multiple Python versions

# Install another app that needs Node 18...
# Everything breaks!
```
Each application had its own requirements, and they often conflicted. I was constantly worried about breaking existing services when installing new ones. Virtual environments helped with Python, but what about system-level dependencies? That's when I decided to give Docker a serious try.
Docker Basics: Understanding Containers
The concept that clicked for me was this: Docker containers are like lightweight, portable computers within your computer. Each container has its own filesystem, processes, and network, but shares the host's kernel. Here's my first successful Docker experience:
My First Docker Container

```bash
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add myself to the docker group (avoid sudo)
sudo usermod -aG docker $USER
newgrp docker

# Run my first container
docker run hello-world

# The magic moment - running a web server
docker run -d -p 8080:80 nginx

# Check if it's running
docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS
a1b2c3d4e5f6   nginx   "/docker-entrypoint.…"   5 seconds ago   Up 4 seconds

# Visit http://localhost:8080 - it works!

# Stop and remove
docker stop a1b2c3d4e5f6
docker rm a1b2c3d4e5f6

# Or in one command
docker rm -f a1b2c3d4e5f6
```
The simplicity was mind-blowing. No installation, no configuration, no dependency conflicts. Just docker run nginx and I had a working web server. But this was just the beginning.
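One detail from that mental model is easy to verify: containers really do share the host's kernel. A quick check (assuming Docker is installed as above):

```bash
# Kernel version as seen inside an Alpine container...
docker run --rm alpine uname -r

# ...matches the host exactly - same kernel, different userland
uname -r
```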
Docker Images vs Containers
Understanding the difference between images and containers was crucial:
Image: A blueprint or template (like a class in programming)
Container: A running instance of an image (like an object)
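The analogy clicked for me once I ran two containers from the same image - one blueprint, independent instances:

```bash
# One image, two independent containers
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Each has its own filesystem, processes, and network identity
docker ps --filter name=web

# Clean up
docker rm -f web1 web2
```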
Working with Images and Containersbash
1# List images2docker images
34# Pull an image without running it5docker pull ubuntu:22.04
67# Run a container from an image8docker run -it ubuntu:22.04 bash910# Inside the container11 root@container:/# apt update12 root@container:/# apt install curl13 root@container:/# exit1415# Container stops when we exit16dockerps -a # Shows stopped containers1718# Create an image from our modified container19docker commit <container-id> my-ubuntu-with-curl
2021# Now we can run containers with curl pre-installed22docker run -it my-ubuntu-with-curl curl --version
Committing containers works in a pinch, but the reproducible way to bake in dependencies is a Dockerfile:

```dockerfile
# Start from a base image
FROM node:16-alpine

# Set working directory
WORKDIR /app

# Copy package files first, so the npm install layer stays
# cached when only application code changes
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy application code
COPY . .

# Expose port
EXPOSE 3000

# Command to run the app
CMD ["npm", "start"]
```
Building and running this was simple:
Building Custom Images

```bash
# Build the image
docker build -t my-node-app .

# Run container from our custom image
docker run -d -p 3000:3000 my-node-app

# View logs
docker logs <container-id>

# Execute commands in running container
docker exec -it <container-id> sh
```
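One thing the Dockerfile above glosses over: `COPY . .` copies everything in the build context, including node_modules and .git, unless a .dockerignore excludes them. A minimal one I now add to every project:

```bash
# Keep the build context small; docker build respects .dockerignore
cat > .dockerignore <<'EOF'
node_modules
.git
*.log
EOF
```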
As I started self-hosting more complex applications, managing individual containers became unwieldy. Docker Compose was the solution:
My First Docker Compose Setupyaml
1version:'3.8'23services:4# WordPress site5wordpress:6image: wordpress:latest
7ports:8-"8080:80"9environment:10WORDPRESS_DB_HOST: db
11WORDPRESS_DB_USER: wordpress
12WORDPRESS_DB_PASSWORD: secret
13WORDPRESS_DB_NAME: wordpress
14volumes:15- wordpress_data:/var/www/html
16depends_on:17- db
18restart: unless-stopped
1920# MySQL database21db:22image: mysql:8.023environment:24MYSQL_DATABASE: wordpress
25MYSQL_USER: wordpress
26MYSQL_PASSWORD: secret
27MYSQL_ROOT_PASSWORD: rootsecret
28volumes:29- db_data:/var/lib/mysql
30restart: unless-stopped
3132# phpMyAdmin for database management33phpmyadmin:34image: phpmyadmin/phpmyadmin
35ports:36-"8081:80"37environment:38PMA_HOST: db
39PMA_USER: root
40PMA_PASSWORD: rootsecret
41depends_on:42- db
43restart: unless-stopped
4445volumes:46wordpress_data:47 db_data:
With one command, I could spin up an entire WordPress stack:
Docker Compose Commands

```bash
# Start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop all services
docker-compose down

# Stop and remove volumes (careful!)
docker-compose down -v

# Update images and restart
docker-compose pull
docker-compose up -d

# Scale a service (note: the fixed "8080:80" host port above would
# conflict - only one replica can bind the host port)
docker-compose up -d --scale wordpress=3
```
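A side note for newer setups: recent Docker releases ship Compose v2 as a CLI plugin, so the same commands also work as subcommands of docker itself:

```bash
# Compose v2 plugin syntax (space instead of hyphen)
docker compose up -d
docker compose logs -f
docker compose down
```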
My Self-Hosting Journey
With Docker knowledge in hand, I embarked on a self-hosting spree. Here are some services I successfully deployed:
```bash
# Portainer makes Docker management visual
docker volume create portainer_data

docker run -d -p 9000:9000 -p 8000:8000 \
  --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

# Access at http://localhost:9000
# Now I can manage containers through a web UI!
```
Understanding Docker networking was crucial for complex setups:
Docker Networking Conceptsbash
1# Default bridge network2docker network ls34# Create custom network5docker network create myapp-network
67# Run containers on the same network8docker run -d --name webapp --network myapp-network nginx
9docker run -d --name database --network myapp-network mysql:8
1011# Containers can communicate by name12dockerexec webapp ping database
1314# Inspect network15docker network inspect myapp-network
1617# Connect existing container to network18docker network connect myapp-network existing-container
1920# Port mapping for external access21docker run -d -p 8080:80 --name web nginx
22# 8080 = host port, 80 = container port
Volume Management: Persistent Data
One of my early mistakes was not understanding volumes, leading to data loss when I removed containers:
Managing Persistent Databash
1# Named volumes (preferred)2docker volume create myapp_data
3docker run -v myapp_data:/app/data myapp
45# Bind mounts (for development)6docker run -v $(pwd)/src:/app/src myapp
78# Anonymous volumes (avoid these)9docker run -v /app/data myapp
1011# List volumes12docker volume ls1314# Inspect volume15docker volume inspect myapp_data
1617# Clean up unused volumes18docker volume prune
1920# Backup a volume21docker run --rm -v myapp_data:/source -v $(pwd):/backup alpine tar -czf /backup/myapp_backup.tar.gz -C /source .2223# Restore a volume24docker run --rm -v myapp_data:/target -v $(pwd):/backup alpine tar -xzf /backup/myapp_backup.tar.gz -C /target
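Since a backup is just one docker run, I eventually put it on a schedule. A sketch of a cron entry (paths are placeholders; note that % must be escaped in crontabs):

```bash
# /etc/cron.d/volume-backup - nightly at 03:00
0 3 * * * root docker run --rm -v myapp_data:/source -v /root/backups:/backup alpine tar -czf /backup/myapp_$(date +\%F).tar.gz -C /source .
```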
Resource Management: Not Crashing My VPS
With limited VPS resources (2GB RAM), I learned to constrain containers:
Docker Resource Limitsyaml
1version:'3.8'23services:4app:5image: myapp:latest
6deploy:7resources:8limits:9cpus:'0.5'# Half a CPU10memory: 512M # 512 MB RAM11reservations:12cpus:'0.25'13memory: 256M
14restart: unless-stopped
1516# For docker run17# docker run -d # --memory="512m" # --cpus="0.5" # --restart=unless-stopped # myapp:latest
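To verify that the limits actually bite, docker stats shows live per-container usage:

```bash
# Live CPU/memory usage - the memory limit column should read 512MiB
docker stats --no-stream
```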
Security Considerations
As I deployed more services, security became paramount:
Docker Security Best Practicesbash
1# Don't run containers as root2# In Dockerfile:3USERnode45# Use specific image versions (not :latest in production)6 FROM node:16.20.0-alpine
78# Scan images for vulnerabilities9docker scan myapp:latest
1011# Use secrets for sensitive data12docker secret create db_password ./password.txt
13dockerservice create --secret db_password --env DB_PASSWORD_FILE=/run/secrets/db_password myapp
1415# Network isolation16docker network create frontend
17docker network create backend
18# Only connect containers that need to communicate1920# Read-only containers where possible21docker run --read-only --tmpfs /tmp --tmpfs /run myapp
2223# Limit capabilities24docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp
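One more flag I later added to most of my docker run commands; it stops processes inside the container from gaining privileges through setuid binaries:

```bash
# Block privilege escalation inside the container
docker run --security-opt no-new-privileges:true myapp
```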
My Docker Workflow Evolution
Over time, I developed a workflow for deploying new services:
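In rough strokes, it looks like this (a sketch reconstructing the steps from the techniques covered above; paths and names are placeholders):

```bash
# 1. One directory per service, one compose file per directory
mkdir -p ~/stacks/myservice && cd ~/stacks/myservice

# 2. Write docker-compose.yml: pinned image tags, named volumes,
#    resource limits, restart: unless-stopped

# 3. Bring it up and watch the logs before walking away
docker-compose up -d
docker-compose logs -f

# 4. Schedule volume backups (see the tar commands above)
```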
One of my most interesting Docker projects was containerizing various networking tools and VPN-related services. This taught me a lot about Docker networking and security.
I learned to use Docker networks to isolate services properly:
Secure Docker Networkingbash
1# Create isolated networks2docker network create --internal database_net
3docker network create public_net
4docker network create management_net
56# Run services with proper isolation7# Database - internal only8docker run -d --name postgres --network database_net postgres:15
910# API service - can reach database and public11docker run -d --name api --network public_net myapi:latest
1213# Connect API to database network14docker network connect database_net api
1516# Management tools - separate network17docker run -d --name portainer --network management_net -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce
1819# Inspect network connections20docker network inspect database_net
21docker port api
2223# Security: Database has no public exposure!
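A quick way to confirm the isolation actually works: --internal networks have no route to the outside world, so an outbound request from database_net should fail while the same request from public_net succeeds:

```bash
# From the internal network: blocked
docker run --rm --network database_net alpine wget -T 3 -qO- http://example.com || echo "blocked, as intended"

# From the public network: reachable
docker run --rm --network public_net alpine wget -T 3 -qO- http://example.com >/dev/null && echo "reachable"
```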
Challenges with Containerized Network Services
Running network services in Docker brought unique challenges:
Network Services Docker Challengesyaml
1# Problem 1: UDP services (like WireGuard)2# Docker's NAT can interfere with UDP3wireguard:4image: linuxserver/wireguard
5cap_add:6- NET_ADMIN
7- SYS_MODULE
8environment:9- PUID=1000
10- PGID=1000
11volumes:12- ./config:/config
13- /lib/modules:/lib/modules:ro
14ports:15-"51820:51820/udp"# UDP port mapping16sysctls:17- net.ipv4.conf.all.src_valid_mark=1
18- net.ipv4.ip_forward=1
19restart: unless-stopped
2021# Problem 2: Container needs host networking22# Some services need direct network access23derp_server:24image: tailscale/derper
25network_mode: host # Full host network access26environment:27- DERPER_HOSTNAME=derp.example.com
28- DERPER_VERIFY_CLIENTS=true
29volumes:30- ./certs:/certs:ro
31restart: unless-stopped
3233# Problem 3: Privileged containers34# Network tools often need elevated privileges35network_monitor:36image: netdata/netdata
37cap_add:38- SYS_PTRACE
39security_opt:40- apparmor:unconfined
41volumes:42- /proc:/host/proc:ro
43- /sys:/host/sys:ro
44- /var/run/docker.sock:/var/run/docker.sock:ro
My Network Services Stack
The complete file I ran isn't reproduced here, but a condensed sketch of the stack - the services above tied together in one compose file - looked like this (domains, paths, and credentials are placeholders):
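```yaml
version: '3.8'

services:
  # VPN entry point (UDP port mapped through Docker's NAT)
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    volumes:
      - ./wireguard:/config
      - /lib/modules:/lib/modules:ro
    ports:
      - "51820:51820/udp"
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped

  # Tailscale DERP relay (needs direct host networking)
  derper:
    image: tailscale/derper
    network_mode: host
    environment:
      - DERPER_HOSTNAME=derp.example.com   # placeholder domain
    volumes:
      - ./certs:/certs:ro
    restart: unless-stopped

  # Monitoring for everything above
  netdata:
    image: netdata/netdata
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    ports:
      - "19999:19999"
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
```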