Docker Fundamentals: Images, Containers & Deployment (Complete Guide)
The Problem You're Solving
Your app works on your laptop but crashes in production:
You: "It works on my machine!"
Production: Java version 8 vs 11, Python 3.8 vs 3.10, 0.8GB RAM vs 16GB
Result: Memory leaks, version incompatibilities, race conditions
Docker eliminates this:
You: Package app + exact dependencies in a container
Production: Run same container - works exactly the same
Result: 100% consistency, 10x faster deployment, zero environment issues
That difference = days debugging production vs immediate deployments.
Docker mastery appears in 26% of DevOps/backend interviews and is essential for modern deployment.
What is Docker?
Docker packages your application + dependencies into a container - a lightweight, isolated environment:
# Dockerfile (recipe)
FROM python:3.10
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
# Build image
docker build -t myapp:latest .
# Run container
docker run myapp:latest
# Your app runs with exact Python 3.10, exact dependencies
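For reference, here is one minimal app.py such a Dockerfile could run. This is a hypothetical example using only the Python standard library (no web framework assumed):

```python
# app.py - minimal HTTP service (hypothetical example for the Dockerfile above)
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond 200 with a plain-text body on every path
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from the container\n")

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the port is reachable from outside the container
    HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
```

Binding to 0.0.0.0 (not 127.0.0.1) matters inside containers: otherwise the published port is unreachable from the host.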
Images vs Containers
Images (Blueprint)
- Definition: Immutable template/blueprint
- Storage: On disk
- Size: Usually 100MB-1GB
- Usage: Used to create containers
# List images
docker images
# REPOSITORY TAG IMAGE ID
# myapp latest abc123def456
# python 3.10 def456ghi789
Containers (Running Instance)
- Definition: Running instance of an image
- Storage: Writable layer on top of the image (ephemeral - lost when the container is removed)
- Size: Only the data written (copy-on-write)
- Usage: Actual application execution
# List running containers
docker ps
# CONTAINER ID IMAGE STATUS
# xyz789abc123 myapp:latest Up 2 hours
# Start container
docker run myapp:latest
# Stop container
docker stop xyz789abc123
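A few more lifecycle commands worth knowing; this sketch names the container explicitly so you don't have to copy IDs (the name `web` is arbitrary):

```shell
# Run detached, with a name you choose
docker run -d --name web myapp:latest
# Follow the container's logs
docker logs -f web
# Open a shell inside the running container
docker exec -it web /bin/sh
# Stop and remove it
docker stop web
docker rm web
```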
Dockerfile: Building Images
Basic Dockerfile Structure
# 1. Choose base image (operating system + runtime)
FROM ubuntu:20.04
# 2. Set working directory
WORKDIR /app
# 3. Copy files from host to container
COPY . .
# 4. Install dependencies (Ubuntu's pip package is python3-pip)
RUN apt-get update && apt-get install -y python3 python3-pip
# 5. Install Python dependencies
RUN pip3 install -r requirements.txt
# 6. Expose port (documentation only)
EXPOSE 8000
# 7. Define startup command
CMD ["python3", "app.py"]
Multi-Stage Build (Optimizing Size)
# Stage 1: Build
FROM python:3.10 as builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user -r requirements.txt
# Stage 2: Runtime (only copies essentials)
FROM python:3.10-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY app.py .
ENV PATH=/root/.local/bin:$PATH
CMD ["python", "app.py"]
Result: 900MB → 150MB (6x smaller!)
Docker Compose: Multi-Container Apps
Problem: Running Multiple Services
# ❌ Manual - Start each service separately
docker run -d postgres:13
docker run -d redis:7
docker run -d myapp:latest
# ✅ Docker Compose - Start all with one command
docker-compose up
docker-compose.yml
version: '3.8'

services:
  # Web application
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://db:5432/myapp
    depends_on:
      - db
      - cache

  # PostgreSQL database
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  # Redis cache
  cache:
    image: redis:7
    ports:
      - "6379:6379"

volumes:
  postgres_data:
# Start all services
docker-compose up
# Stop all services
docker-compose down
# View logs
docker-compose logs -f app
Volumes: Persistent Data
Problem: Data Loss on Container Restart
# ❌ Without volumes - data is lost
docker run postgres:13
# (Create data in database)
docker stop container_id
# Data is gone!
# ✅ With volumes - data persists
docker run -v postgres_data:/var/lib/postgresql/data postgres:13
# (Create data in database)
docker stop container_id
docker start container_id
# Data still there!
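To see what Docker actually created, the `docker volume` subcommands inspect and manage volumes directly:

```shell
# List all volumes Docker manages
docker volume ls
# Show where the volume's data lives on the host
docker volume inspect postgres_data
# Remove a volume once no container uses it (the data is deleted!)
docker volume rm postgres_data
```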
Volume Types
services:
  app:
    # 1. Named volume (managed by Docker)
    volumes:
      - app_data:/app/data

  db:
    # 2. Bind mount (host directory)
    volumes:
      - /host/path:/container/path

  cache:
    # 3. Anonymous volume (no name; removed with docker rm -v)
    volumes:
      - /tmp/cache  # No source specified

volumes:
  app_data:  # Define named volume
Networking: Container Communication
services:
  web:
    build: .
    networks:
      - mynetwork
    environment:
      API_URL: http://api:3000

  api:
    image: api:latest
    networks:
      - mynetwork
    expose:
      - 3000

networks:
  mynetwork:  # Containers on this network reach each other by service name
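Inside the app, the service name works like any hostname. A hypothetical sketch of reading the target URL from the environment, with a local fallback so the code also runs outside Docker:

```python
import os

def api_base_url(default="http://localhost:3000"):
    # On the compose network, API_URL is set to http://api:3000;
    # outside Docker the fallback keeps local runs working.
    return os.environ.get("API_URL", default)
```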
Best Practices
Best Practice 1: Use Specific Base Image Versions
# ❌ BAD - Latest might break your app
FROM python:latest
# ✅ GOOD - Reproducible builds
FROM python:3.10.11
Best Practice 2: Layer Optimization
# ❌ SLOW - Invalidates cache on every change
FROM ubuntu:20.04
COPY . /app
RUN apt-get update && apt-get install -y build-essential
# ✅ FAST - Leverages layer caching
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y build-essential
COPY . /app
Best Practice 3: Minimal Base Images
# ❌ BIG - 1.2GB (full Ubuntu)
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
# ✅ SMALLER - ~900MB (official Python image)
FROM python:3.10
# ✅ TINY - ~50MB (Alpine-based)
FROM python:3.10-alpine
Best Practice 4: Health Checks
FROM python:3.10
COPY app.py /app.py
HEALTHCHECK --interval=10s --timeout=5s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000')"
CMD ["python", "app.py"]
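The check can also live in a small standalone script. A hypothetical healthcheck.py using only the standard library (so the image needs no extra dependency for health checks):

```python
# healthcheck.py - exit 0 if the service answers, non-zero otherwise
import sys
import urllib.request

def is_healthy(url, timeout=5):
    # Any 2xx response counts as healthy; connection errors and
    # timeouts (all OSError subclasses) count as unhealthy.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

if __name__ == "__main__":
    sys.exit(0 if is_healthy("http://localhost:8000/") else 1)
```

In the Dockerfile this would be wired up as `HEALTHCHECK CMD python healthcheck.py`.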
Best Practice 5: .dockerignore File
# .dockerignore (like .gitignore)
node_modules
.git
.env
*.log
__pycache__
.pytest_cache
Common Mistakes
❌ Mistake 1: Running as Root
# WRONG - Security risk
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
CMD ["python3", "app.py"]
# Runs as root!
# CORRECT - Create non-root user
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
RUN useradd -m appuser
USER appuser
CMD ["python3", "app.py"]
❌ Mistake 2: Large Images
# WRONG - 1.2GB image
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y build-essential python3
# CORRECT - Use the official Python image (~900MB)
FROM python:3.10
# EVEN BETTER - Use alpine (~50MB)
FROM python:3.10-alpine
❌ Mistake 3: Not Using .dockerignore
# Without .dockerignore - copies everything
COPY . /app
# Copies: node_modules (500MB), .git, logs, temp files
# With .dockerignore - only necessary files
COPY . /app
# Copies: source code only (5MB)
FAQ: Docker Mastery
Q1: Docker vs Virtual Machine?
A: A container shares the host kernel and is lightweight; a VM runs a full guest OS.
Docker Container:
- Uses host OS kernel
- Only packs application + dependencies
- Size: 100-500MB
- Startup: milliseconds
Virtual Machine:
- Includes full OS + kernel
- Heavier isolation
- Size: 5-15GB
- Startup: seconds to minutes
Q2: How do I pass secrets to containers?
A: Use environment variables with secret managers.
# WRONG - Hardcoded secrets
ENV DATABASE_PASSWORD=secret123
# BETTER - Use docker-compose with .env
docker-compose up # Reads from .env file
# BEST - Use secret manager
# (Docker Swarm, Kubernetes, or Vault)
docker-compose.yml:
services:
app:
environment:
DATABASE_PASSWORD: ${DB_PASSWORD} # From .env or command line
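On the application side, a common pattern is to fail fast at startup when a required secret was not injected, rather than silently connecting with an empty password later. A hypothetical sketch:

```python
import os

def require_env(name):
    # Fail at startup if a required secret is missing,
    # instead of discovering it at the first DB connection.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value
```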
Q3: Interview Question: Optimize a slow Docker build.
A: Here's the strategy:
# BEFORE: 5 minutes to build
FROM ubuntu:20.04
COPY . /app
WORKDIR /app
RUN apt-get update && apt-get install -y build-essential python3 python3-pip
RUN pip3 install -r requirements.txt
RUN python3 -m pytest # Slow!
CMD ["python3", "app.py"]
# AFTER: 30 seconds to build
FROM python:3.10-alpine
# Install dependencies first (cached separately)
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy source code
COPY app.py .
# Remove tests from image
# RUN python -m pytest (run in CI/CD instead)
CMD ["python", "app.py"]
Interview insight: "I'd profile with docker build --progress=plain, identify slow layers, and reorganize to maximize cache hits."
Q4: How do containers access external services?
A: Network modes or port mapping.
# Map port
docker run -p 8000:8000 myapp # Host:8000 → Container:8000
# Connect to host services
docker run --network host myapp # Container uses host network
# Custom network
docker network create mynet
docker run --network mynet myapp
docker run --network mynet postgres # Can reach as 'postgres'
Conclusion
Docker fundamentals:
- Images - Immutable blueprints
- Containers - Running instances
- Dockerfile - Define how to build
- Docker Compose - Orchestrate multiple services
- Volumes - Persist data
- Networks - Container communication
Master Docker and you'll deploy faster, more reliably, and with zero environment surprises.