Dockerfile, Compose, Multi-stage Builds, Networking, Volumes — containerization.
# ── Node.js production Dockerfile ──
FROM node:20-alpine AS base
WORKDIR /app
# Install all dependencies first (layer caching; the build step needs devDependencies)
COPY package.json package-lock.json ./
RUN npm ci
# Copy source, build, then drop devDependencies
COPY . .
RUN npm run build && npm prune --omit=dev
# Production image
FROM node:20-alpine AS production
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -g 1001 -S appgroup && \
adduser -S appuser -u 1001 -G appgroup
COPY --from=base --chown=appuser:appgroup /app/dist ./dist
COPY --from=base --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=base --chown=appuser:appgroup /app/package.json ./
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]

# ── Python / FastAPI Dockerfile ──
FROM python:3.12-slim AS builder
WORKDIR /build
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential libpq-dev && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt
FROM python:3.12-slim AS runtime
WORKDIR /app
RUN groupadd -r appuser && useradd -r -m -g appuser appuser
COPY --from=builder --chown=appuser:appuser /root/.local /home/appuser/.local
# Copy application source from the build context (the builder stage holds only the installed packages)
COPY --chown=appuser:appuser . .
ENV PATH="/home/appuser/.local/bin:$PATH" \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1
USER appuser
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=5s --start-period=5s --retries=3 \
CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

# .dockerignore — exclude from build context
.git
.gitignore
node_modules
npm-debug.log
dist
coverage
.env
.env.*
*.md
.dockerignore
Dockerfile
docker-compose*.yml
__pycache__
*.pyc
.venv
venv
*.egg-info

| Directive | Description |
|---|---|
| FROM | Base image (first instruction, required) |
| RUN | Execute shell commands during build |
| COPY | Copy files from host to image |
| ADD | Like COPY + auto-extract tar, remote URLs |
| WORKDIR | Set working directory for RUN/CMD/COPY |
| CMD | Default command (executable form or shell) |
| ENTRYPOINT | Main executable (hard to override) |
| ENV | Set environment variables |
| ARG | Build-time variables (not in runtime) |
| EXPOSE | Document port (metadata only) |
| LABEL | Key-value metadata for the image |
| VOLUME | Create mount point in the image |
| USER | Set UID/username for RUN/CMD/ENTRYPOINT |
| HEALTHCHECK | Container health monitoring command |
| SHELL | Default shell for RUN instructions |
| STOPSIGNAL | Signal to stop the container |
# ── ARG vs ENV ──
# ARG: build-time only, NOT available at runtime
ARG APP_VERSION=1.0.0
ARG NODE_VERSION=20
# ENV: available at both build-time AND runtime
ENV NODE_ENV=production
ENV APP_VERSION=${APP_VERSION}
# ARG from base image can be reused
FROM node:${NODE_VERSION}-alpine
# Multi-ARG targets
ARG TARGETPLATFORM
RUN echo "Building for $TARGETPLATFORM"
# ⚠️ Don't pass secrets via ARG
ARG SECRET_KEY
RUN echo "Building with key..." # value is recoverable from image history!
# Use multi-stage builds or BuildKit secret mounts to prevent leaks

# ── Labels and metadata ──
LABEL maintainer="dev@example.com" \
org.opencontainers.image.title="my-app" \
org.opencontainers.image.version="1.0.0" \
org.opencontainers.image.description="Production web app" \
org.opencontainers.image.source="https://github.com/example/app" \
org.opencontainers.image.licenses="MIT"
# SHELL directive (note: -c must come last, since RUN commands are appended after it)
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# STOPSIGNAL
STOPSIGNAL SIGTERM
# Combined CMD + ENTRYPOINT pattern
ENTRYPOINT ["python", "-m", "gunicorn"]
CMD ["app:app", "--bind", "0.0.0.0:8000", "--workers", "4"]

💡 COPY package*.json . before COPY . . ensures dependency installs are cached even when source code changes. Use --mount=type=cache with BuildKit for pip/npm caching.

⚠️ Avoid COPY . . before installing dependencies — it sends your entire project (including .env files) to the daemon. Always use a .dockerignore and copy deps first.

# ── Container Lifecycle ──
docker run -d --name myapp -p 8080:3000 -e NODE_ENV=production myimage:latest
docker run -it --rm alpine sh # interactive shell, auto-remove
docker run --init --restart=unless-stopped myapp # proper PID 1, auto-restart
docker run -d --memory=512m --cpus=1.5 myapp # resource limits
docker run -d --network=my-net --network-alias=api myapp
docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Status}}\t{{.Ports}}"
docker ps --filter "status=running" --filter "name=my-*"
docker stop myapp && docker rm myapp # stop and remove
docker rm -f $(docker ps -aq) # remove ALL containers
docker start -ai myapp # start with attach + interactive
docker restart myapp
docker pause myapp && docker unpause myapp
docker kill myapp # SIGKILL (immediate)

# ── Image Management ──
docker pull nginx:alpine
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
docker images -f "dangling=true" # images with no tag
docker tag myapp:latest registry.example.com/myapp:v1.2.0
docker push registry.example.com/myapp:v1.2.0
docker rmi myimage:latest # remove image
docker rmi $(docker images -f "dangling=true" -q) # prune dangling
# ── Build ──
docker build -t myapp:latest .
docker build -t myapp:v1.2.0 -f Dockerfile.prod .
docker build --build-arg APP_VERSION=1.2.0 -t myapp:latest .
docker build --no-cache -t myapp:latest . # force rebuild all layers
docker build --target builder -t myapp:builder . # build specific stage
docker buildx build --platform linux/amd64,linux/arm64 -t myapp .

# ── Exec, Logs & Debug ──
docker exec -it myapp sh # interactive shell
docker exec -u root myapp apt update # exec as different user
docker exec myapp cat /etc/hosts # run single command
docker exec -it myapp top # monitor processes
docker logs -f myapp # follow logs
docker logs --since 30m myapp # last 30 minutes
docker logs --tail 100 myapp # last 100 lines
docker logs -f --timestamps myapp # with timestamps
docker logs myapp 2>&1 | grep "ERROR" # filter errors
# ── Inspect ──
docker inspect myapp # full JSON metadata
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myapp
docker inspect --format '{{.Config.Env}}' myapp
docker inspect --format '{{json .Mounts}}' myapp | python3 -m json.tool

# ── System Cleanup ──
docker system df # disk usage summary
docker system prune # remove unused data
docker system prune -a --volumes # aggressive cleanup
docker system prune --filter "until=24h" # prune older than 24h
docker image prune -a # remove all unused images
docker container prune # remove stopped containers
docker volume prune # remove unused volumes
docker builder prune # remove build cache
# ── Compose Commands ──
docker compose up -d # start all services
docker compose up -d --build # rebuild & start
docker compose up -d --scale worker=3 # scale service
docker compose down # stop & remove containers
docker compose down -v # also remove volumes
docker compose logs -f api # follow service logs
docker compose ps # list service containers
docker compose exec api sh # exec into service
docker compose run --rm api pytest # one-off command
docker compose build --parallel # build all in parallel
docker compose top # running processes
docker compose config # validate & view config

| Flag | Description |
|---|---|
| -d | Detach — run in background |
| -p HOST:CONTAINER | Port mapping |
| -e KEY=VALUE | Environment variable |
| -v HOST:CONTAINER | Volume / bind mount |
| --name NAME | Assign container name |
| --network NET | Connect to network |
| --rm | Auto-remove on exit |
| -it | Interactive + pseudo-TTY |
| --restart POLICY | no, always, on-failure, unless-stopped |
| --memory / --cpus | Resource limits |
| --env-file .env | Load env from file |
| --init | Use tini as PID 1 (signal handling) |
| --read-only | Read-only filesystem |
| --cap-drop ALL | Drop all Linux capabilities |

| Placeholder | Output |
|---|---|
| {{.ID}} | Container ID (short) |
| {{.Names}} | Container name |
| {{.Image}} | Image name |
| {{.Status}} | Up/Exited with duration |
| {{.Ports}} | Port mappings |
| {{.Size}} | Virtual + on-disk size |
| {{.Networks}} | Attached networks |
| {{.Labels}} | Container labels |
| {{.CreatedAt}} | Creation timestamp |
💡 Use --init when running Node.js or Python containers as PID 1. Without it, zombie processes accumulate and SIGTERM may go unhandled, because most app runtimes neither reap orphaned children nor install signal handlers; tini (used by --init) does both.

# ── Full-stack Compose (web + api + db + redis) ──
services:
# ── Frontend (Next.js) ──
web:
build:
context: ./frontend
dockerfile: Dockerfile
target: production
args:
NEXT_PUBLIC_API_URL: http://localhost:3001
ports:
- "3000:3000"
environment:
- NEXT_PUBLIC_API_URL=http://api:3001
depends_on:
api:
condition: service_healthy
restart: unless-stopped
networks:
- frontend
# ── Backend API ──
api:
build:
context: ./backend
dockerfile: Dockerfile
args:
APP_VERSION: "2.1.0"
ports:
- "3001:3001"
environment:
- DATABASE_URL=postgresql://appuser:secretpass@db:5432/appdb
- REDIS_URL=redis://redis:6379/0
- JWT_SECRET=${JWT_SECRET}
- NODE_ENV=production
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3001/health"]
interval: 15s
timeout: 5s
start_period: 10s
retries: 3
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
restart: unless-stopped
networks:
- frontend
- backend
# ── PostgreSQL ──
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: appuser
POSTGRES_PASSWORD: secretpass
POSTGRES_DB: appdb
volumes:
- pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
networks:
- backend
# ── Redis ──
redis:
image: redis:7-alpine
command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
volumes:
- redisdata:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 3
restart: unless-stopped
networks:
- backend
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true # no external access
volumes:
pgdata:
driver: local
redisdata:
    driver: local

# ── Development override (auto-loaded) ──
# compose.override.yaml is automatically merged with compose.yaml
services:
web:
build:
target: development
volumes:
- ./frontend/src:/app/src
- ./frontend/public:/app/public
environment:
- NEXT_PUBLIC_API_URL=http://localhost:3001
- NODE_ENV=development
command: npm run dev
api:
build:
target: development
volumes:
- ./backend/src:/app/src
environment:
- NODE_ENV=development
- DATABASE_URL=postgresql://appuser:secretpass@db:5432/appdb
command: npm run dev
db:
ports:
      - "5432:5432"

# ── Production Compose with deploy & secrets ──
services:
api:
image: registry.example.com/api:v2.1.0
deploy:
replicas: 3
update_config:
parallelism: 1
delay: 30s
order: start-first
failure_action: rollback
rollback_config:
parallelism: 1
delay: 10s
resources:
limits:
cpus: "1.0"
memory: 512M
reservations:
cpus: "0.5"
memory: 256M
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 5
worker:
image: registry.example.com/api:v2.1.0
command: ["node", "dist/worker.js"]
deploy:
replicas: 2
profiles:
- workers
# ── External secrets ──
secrets:
db_password:
file: ./secrets/db_password.txt
jwt_secret:
    external: true

| Key | Type | Description |
|---|---|---|
| build | object | Build config (context, dockerfile, args, target) |
| image | string | Pre-built image to use |
| ports | list | HOST:CONTAINER port mappings |
| volumes | list | HOST:CONTAINER mount paths |
| environment | list/map | Environment variables |
| env_file | list | Load .env files |
| depends_on | map | Service dependencies (with conditions) |
| networks | list | Networks to join |
| healthcheck | object | Container health monitoring |
| restart | string | Restart policy |
| deploy | object | Replicas, resources, update config |
| configs | list | External configuration files |
| secrets | list | Sensitive data mounting |
| profiles | list | Optional service groups |
| entrypoint | list | Override entrypoint |
| command | list | Override default command |
# ── Environment & variable interpolation ──
# compose.yaml
services:
api:
environment:
# Direct values
- NODE_ENV=production
# Variable interpolation (resolved at compose up)
- APP_VERSION=${APP_VERSION:-1.0.0}
# From .env file or shell
- DATABASE_URL=${DATABASE_URL}
# Required (fails if missing)
- JWT_SECRET=${JWT_SECRET:?JWT_SECRET is required}
# .env file (auto-loaded by compose)
APP_VERSION=2.1.0
DATABASE_URL=postgresql://user:pass@db:5432/appdb
JWT_SECRET=my-super-secret-key
# Compose variable precedence (highest to lowest):
# 1. shell environment variables
# 2. environment attribute in compose.yaml
# 3. env_file in compose.yaml
# 4. .env file

💡 Use depends_on with conditions instead of just listing service names. condition: service_healthy waits for the dependency to pass its healthcheck before starting. Available conditions: service_started, service_healthy, service_completed_successfully.

⚠️ Keep your .env file in .gitignore. Docker Compose supports secrets: with file-based or external providers.

# ── Network Management ──
docker network create my-network # default bridge
docker network create -d bridge --subnet 172.20.0.0/16 my-net
docker network create -d overlay my-overlay # for Swarm
docker network create --internal my-internal-net # no external access
docker network ls
docker network inspect my-network
docker network rm my-network
# ── Connect containers to networks ──
docker run -d --name api --network my-net my-api
docker network connect my-net my-container # add existing
docker network disconnect my-net my-container # remove from net
docker run -d --name api \
--network my-net --network-alias api-service my-api # alias
# ── Default networks ──
docker network create --driver bridge app-frontend
docker network create --driver bridge app-backend --internal

# ── Compose networking ──
services:
api:
image: my-api:latest
networks:
frontend:
aliases:
- api.internal
backend:
aliases:
- api.backend
ports:
- "8080:8080" # accessible from host
db:
image: postgres:16-alpine
networks:
- backend # no port mapping = no host access (secure)
cache:
image: redis:7-alpine
networks:
- backend
# External network: connect to pre-existing Docker network
proxy:
image: nginx:alpine
networks:
- frontend
- external_net # must exist outside Compose
networks:
frontend:
driver: bridge
driver_opts:
com.docker.network.bridge.host_binding_ipv4: "127.0.0.1"
backend:
driver: bridge
internal: true # no internet access, only inter-container
external_net:
external: true # created outside Compose
    name: existing-network

| Driver | Scope | Use Case | Container Discovery |
|---|---|---|---|
| bridge | Single host | Default, dev/testing | DNS by name |
| host | Single host | Max performance, no isolation | N/A (shares host) |
| none | Single host | No networking at all | N/A |
| overlay | Multi-host | Swarm / Kubernetes-like | Built-in DNS |
| macvlan | Single host | Assign physical MAC/IP | External DNS |
| ipvlan | Single host | Share host IP, no MAC | External DNS |

| Directive | Host Access | Purpose |
|---|---|---|
| -p 8080:3000 | Yes | Publish to host port 8080 |
| -p 127.0.0.1:5432:5432 | Localhost only | Secure: loopback only |
| -p 8080-8090:80-90 | Range | Port range mapping |
| EXPOSE 3000 | No | Documentation / inter-container |
| EXPOSE 3000/udp | No | UDP protocol (default TCP) |
| --expose 3000 | No | CLI equivalent of EXPOSE |
# ── DNS resolution in Docker networks ──
# Docker has a built-in DNS server (127.0.0.11)
# Containers resolve each other by SERVICE name
services:
api:
image: my-api
networks:
app-net:
# Custom DNS for this container
dns:
- 8.8.8.8
- 8.8.4.4
dns_search:
- mycompany.local
# ── DNS round-robin load balancing ──
web:
deploy:
replicas: 3
networks:
- app-net
# Other containers reach any replica at: http://web:3000
# ── Extra hosts (like /etc/hosts) ──
worker:
image: my-worker
extra_hosts:
- "host.docker.internal:host-gateway" # access host machine
- "api.local:172.20.0.5"
networks:
      - app-net

# ── Host network mode ──
docker run -d --network host myapp # shares host network stack
# App listens on host ports directly (no -p needed)
# ⚠️ No network isolation — port conflicts possible
# ── None network ──
docker run -d --network none myapp # completely isolated
# ── Inspect network details ──
docker network inspect my-net --format '
Name: {{.Name}}
Driver: {{.Driver}}
Subnet: {{(index .IPAM.Config 0).Subnet}}
Gateway: {{(index .IPAM.Config 0).Gateway}}
Containers: {{len .Containers}}
'
# ── macvlan: container gets its own IP on physical network ──
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
--ip-range=192.168.1.128/25 \
-o parent=eth0 macvlan-net
docker run -d --network macvlan-net --ip 192.168.1.200 myapp

⚠️ --link is deprecated. Use host.docker.internal (with extra_hosts) to access the host machine from inside a container.

# ── Volume types in Compose ──
services:
db:
image: postgres:16-alpine
# 1. Named volume (managed by Docker)
volumes:
- pgdata:/var/lib/postgresql/data
web:
image: nginx:alpine
# 2. Bind mount (host path mapped into container)
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- ./static:/usr/share/nginx/html:ro
cache:
image: redis:7-alpine
# 3. tmpfs (in-memory, fast, non-persistent)
tmpfs:
- /tmp:rw,noexec,size=100m
app:
image: my-app:latest
volumes:
- type: volume
source: app-data
target: /app/data
- type: bind
source: ./config
target: /app/config
read_only: true
- type: tmpfs
target: /app/tmp
tmpfs:
size: 50m
volumes:
pgdata:
driver: local
driver_opts:
type: none
o: bind
device: /data/postgres # or use named volume without device
app-data:
    driver: local

# ── Volume Management ──
docker volume create my-volume
docker volume create --driver local --opt type=nfs --opt o=addr=192.168.1.1,rw --opt device=:/data nfs-volume
docker volume ls
docker volume inspect my-volume
docker volume rm my-volume
docker volume rm $(docker volume ls -q -f "dangling=true")
docker volume prune
# ── Backup a volume ──
docker run --rm -v my-volume:/data -v $(pwd):/backup alpine \
tar czf /backup/my-volume-backup.tar.gz /data
# ── Restore a volume ──
docker run --rm -v my-volume:/data -v $(pwd):/backup alpine \
tar xzf /backup/my-volume-backup.tar.gz -C /
# ── Copy data between host and container ──
docker cp ./local-file.txt mycontainer:/app/file.txt
docker cp mycontainer:/app/output.txt ./output.txt

| Type | Location | Persistence | Performance | Use Case |
|---|---|---|---|---|
| Named Volume | /var/lib/docker/volumes/ | Yes | Fast | Database data, stateful apps |
| Bind Mount | Any host path | Yes | Moderate | Config files, dev source code |
| tmpfs | Container memory | No | Fastest | Temp files, session data |

| Flag | Description |
|---|---|
| :ro | Read-only mount |
| :rw | Read-write (default) |
| :z | SELinux shared label |
| :Z | SELinux private label |
| :cached | OS-level caching (bind mount) |
| :consistent | Full consistency (default) |
| :delegated | Container view is authoritative |
| noexec | Prevent executing binaries (tmpfs) |
| nosuid | Ignore SUID bit (tmpfs) |
# ── Sharing data between containers ──
services:
# Shared named volume
uploader:
image: my-uploader
volumes:
- uploads:/data/uploads
processor:
image: my-processor
volumes:
- uploads:/data/uploads # same volume, reads uploads
# Volume from another container (deprecated, use named volumes)
# Instead of: volumes_from: ["uploader"], use shared named volumes
volumes:
uploads:
driver: local
# ── Volume permissions ──
# Problem: container runs as non-root but volume is owned by root
# Solution: use named volume with init container
services:
  app:
    image: my-app
    user: "1000:1000"
    volumes:
      - appdata:/app/data
    depends_on:
      volume-init:
        condition: service_completed_successfully
# Init container to set permissions
volume-init:
image: alpine
entrypoint: ["sh", "-c", "chown -R 1000:1000 /app/data"]
volumes:
- appdata:/app/data
volumes:
  appdata:

💡 Use bind mounts only for development or configuration files.

# ── Multi-stage Go application ──
# Stage 1: Build
FROM golang:1.22-alpine AS builder
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build -ldflags="-s -w" -o /app/server ./cmd/server
# Stage 2: Runtime (scratch = zero overhead)
FROM scratch AS runtime
# Import CA certificates for HTTPS
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy binary from builder
COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]

# ── Multi-stage Next.js application ──
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build
# Stage 3: Production runner
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production \
NEXT_TELEMETRY_DISABLED=1
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 nextjs
# Copy only necessary files
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]

# ── Multi-stage Java (Spring Boot) ──
FROM eclipse-temurin:21-jdk-alpine AS builder
WORKDIR /build
# Copy the Maven wrapper together with the build definition
COPY mvnw pom.xml ./
COPY .mvn ./.mvn
COPY src ./src
RUN ./mvnw -B -DskipTests package
# Intermediate stage: unpack the layered JAR (Spring Boot 3.2+)
FROM eclipse-temurin:21-jre-alpine AS extractor
WORKDIR /app
COPY --from=builder /build/target/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract

FROM eclipse-temurin:21-jre-alpine AS runtime
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Layers: dependencies change less often than application code
COPY --from=extractor --chown=appuser:appgroup /app/dependencies/ ./
COPY --from=extractor --chown=appuser:appgroup /app/spring-boot-loader/ ./
COPY --from=extractor --chown=appuser:appgroup /app/snapshot-dependencies/ ./
COPY --from=extractor --chown=appuser:appgroup /app/application/ ./
USER appuser
EXPOSE 8080
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]

| Syntax | Description |
|---|---|
| COPY --from=0 | Copy from first stage (index 0) |
| COPY --from=builder | Copy from named stage |
| COPY --from=nginx:alpine | Copy from external image |
| COPY --from=builder /app /app | Specific paths |
| COPY --from=builder --chown=1000:1000 | Set ownership |
| COPY --from=builder --chmod=755 | Set permissions |
# ── BuildKit caching patterns ──
# syntax=docker/dockerfile:1
# Python with pip cache
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r requirements.txt
# Node with npm cache
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci
# Go with module cache
FROM golang:1.22-alpine
WORKDIR /src
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
go mod download
# Secret mounting (never stored in image)
FROM node:20-alpine
RUN --mount=type=secret,id=github_token \
echo "Using token..." && \
    npm config set //npm.pkg.github.com/:_authToken $(cat /run/secrets/github_token)

💡 Cache mounts (--mount=type=cache) are the single biggest build performance improvement. They persist package caches (pip, npm, Go modules) across builds without bloating image layers. Enable with DOCKER_BUILDKIT=1.

💡 Use RUN --mount=type=secret,id=mysecret with BuildKit to mount secrets only during build time. They are never persisted in any layer and cannot be extracted from the image.

# ── Production-grade Dockerfile ──
# syntax=docker/dockerfile:1
FROM node:20-alpine AS base
# Build-time variables (not in final image)
ARG NODE_ENV=production
# Set environment early (affects RUN commands)
ENV NODE_ENV=${NODE_ENV}
# Security: pin specific digest
FROM node:20-alpine@sha256:abc123 AS builder
WORKDIR /app
# Install dependencies with cache (the build step needs devDependencies; prune after)
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
# Copy, build, then drop devDependencies
COPY . .
RUN npm run build && npm prune --omit=dev
# ── Minimal runtime image ──
FROM gcr.io/distroless/nodejs20-debian12 AS production
# Non-root user (distroless has 'nonroot' built-in)
USER nonroot
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
EXPOSE 3000
# Health check — distroless has no shell or wget, so use node itself (exec form)
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD ["/nodejs/bin/node", "-e", "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"]
# Distroless nodejs images already use node as ENTRYPOINT, so pass only the script
CMD ["dist/server.js"]

# ── Security scanning ──
# Scan image for vulnerabilities
docker scout cves myimage:latest
docker scout recommendations myimage:latest
# Trivy (popular open-source scanner)
trivy image myimage:latest
trivy image --severity HIGH,CRITICAL myimage:latest
trivy image --ignore-unfixed myimage:latest
# Docker Bench Security
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
docker/docker-bench-security
# ── Resource limits in Compose ──
# deploy.resources (Compose v2 / Swarm)
deploy:
resources:
limits:
cpus: "1.0"
memory: 512M
pids: 100
reservations:
cpus: "0.25"
memory: 128M
# Docker CLI resource limits
docker run -d --memory=512m --memory-swap=1g --cpus=1.5 --pids-limit=100 myapp

| Practice | Impact | How |
|---|---|---|
| Combine RUN commands | Fewer layers | Use && or \ |
| Copy deps first | Better cache | COPY package*.json before COPY . |
| Multi-stage builds | Smaller image | Separate build and runtime |
| Distroless/alpine | Minimal attack surface | FROM gcr.io/distroless/... |
| Use .dockerignore | Smaller context | Exclude node_modules, .git, etc. |
| Pin image digests | Reproducible | FROM image@sha256:... |
| Minimize installed packages | Smaller layers | apt-get install --no-install-recommends |
| Clean up in same layer | No leftover size | rm -rf /var/lib/apt/lists in same RUN |
| Practice | Why |
|---|---|
| Run as non-root | Prevents host escalation |
| Use distroless images | No shell, minimal attack surface |
| Never run as --privileged | Full host access = game over |
| Scan with Trivy/Scout | Find CVEs before deployment |
| Drop all capabilities | Least privilege principle |
| Use read-only FS | Prevent runtime modification |
| Set resource limits | Prevent DoS and resource exhaustion |
| Pin image digests | Reproducible, verified builds |
| Sign images | Verify image authenticity (cosign) |
| Never embed secrets | Use secrets or env injection |
# ── Logging configuration ──
services:
api:
image: my-api:latest
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
tag: "{{.Name}}/{{.ID}}"
# ── Fluentd / Loki logging driver ──
api-prod:
image: my-api:latest
logging:
driver: fluentd
options:
fluentd-address: "localhost:24224"
tag: "docker.{{.Name}}"
# ── Restrict capabilities ──
api-secure:
image: my-api:latest
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE # bind to ports < 1024
security_opt:
- no-new-privileges:true
read_only: true
tmpfs:
- /tmp:noexec,size=100m
      - /var/run:noexec,size=10m

# ── CI/CD: GitHub Actions ──
name: Build and Push Docker Image
on:
push:
branches: [main]
tags: ["v*"]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: |
myuser/myapp:latest
myuser/myapp:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
          platforms: linux/amd64,linux/arm64

💡 Use distroless images (gcr.io/distroless/nodejs20, gcr.io/distroless/python3) for production. They contain only your app and runtime dependencies — no shell, no package manager, no attack surface. If you need a shell for debugging, keep a debug tag with alpine-based images.

⚠️ Never use --privileged in production. This gives the container full access to the host (all devices, all capabilities, escape SELinux/AppArmor). Use specific --cap-add flags instead for the minimum privileges needed.

Docker is a platform for building, shipping, and running applications in containers. Containers share the host OS kernel but have isolated filesystems, network stacks, and process trees.
| Aspect | Docker Container | Virtual Machine |
|---|---|---|
| Kernel | Shared with host | Own guest kernel |
| Startup | Seconds | Minutes |
| Size | MBs (lean) | GBs (full OS) |
| Isolation | Process-level | Hardware-level |
| Performance | Near-native | Overhead from hypervisor |
| Density | Many per host | Few per host |
A Docker image is a read-only template with instructions for creating a container. Images consist of layers — each RUN, COPY, and ADD instruction creates a new layer. Layers are cached and shared across images. When you run a container, Docker adds a thin read-write layer on top of the image layers.
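To make the layering concrete, here is a minimal sketch (image and file names are illustrative); each instruction below produces one layer that docker history will list:

```dockerfile
# Layer 1: base image layers
FROM alpine:3.19
# Layer 2: created by RUN; cached until this line or an earlier one changes
RUN apk add --no-cache curl
# Layer 3: created by COPY; invalidated whenever app.sh changes
COPY app.sh /usr/local/bin/app.sh
# CMD only updates image metadata; it adds no filesystem layer
CMD ["sh", "/usr/local/bin/app.sh"]
```

Run docker history <image> to see each layer's size and the instruction that created it.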
# ENTRYPOINT: defines the executable (hard to override)
ENTRYPOINT ["python"]
# CMD: provides default arguments to ENTRYPOINT (easy to override)
CMD ["app.py"]
# Running: docker run myimage => python app.py
# Running: docker run myimage script.py => python script.py
# Shell form (runs via /bin/sh -c):
ENTRYPOINT python app.py # cannot append args
# Exec form (runs directly, preferred):
ENTRYPOINT ["python", "app.py"] # can append args

Rule of thumb: Use ENTRYPOINT for the fixed executable and CMD for default arguments. Users can override CMD at runtime: docker run myimage --debug.
Docker provides built-in DNS resolution on custom networks. When containers are on the same network, they can reach each other by service name (e.g., http://db:5432). The bridge driver creates a virtual network with NAT for single-host communication. The overlay driver spans multiple hosts for Swarm deployments. Containers use the embedded DNS server at 127.0.0.11 for service discovery.
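As a minimal sketch of name-based discovery (service and image names are hypothetical), both services join the same user-defined bridge network, so the hostname db resolves to the database container:

```yaml
services:
  api:
    image: my-api:latest          # hypothetical image
    networks: [app-net]
    environment:
      # "db" is resolved by Docker's embedded DNS server at 127.0.0.11
      - DATABASE_URL=postgresql://appuser:secret@db:5432/appdb
  db:
    image: postgres:16-alpine
    networks: [app-net]
networks:
  app-net:
    driver: bridge
```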
Volumes persist data beyond the container lifecycle. When a container is removed, its writable layer is deleted, but volume data survives. Named volumes are managed by Docker and are the recommended approach. Bind mounts link a host path directly — useful for development but have portability and permission issues.
Multi-stage builds allow you to use multiple FROM statements in a single Dockerfile. Each FROM starts a new build stage. You can selectively copy artifacts (COPY --from=stage) from earlier stages into the final image, discarding build tools and intermediate files. This dramatically reduces image size — e.g., a Go binary built with the full toolchain can run in a scratch image under 20MB instead of 800MB+.
# 1. Check container status
docker ps -a | grep myapp
docker logs myapp --tail 50
# 2. Inspect exit code
docker inspect myapp --format '{{.State.ExitCode}}'
# 3. Interactive shell to investigate
docker run -it --entrypoint sh myimage
# 4. Override command
docker run -it --rm myimage sh -c "ls -la /app && cat config.json"
# 5. Check resource usage
docker stats myapp
# 6. Inspect network
docker inspect myapp --format '{{json .NetworkSettings.Networks}}'
# 7. Healthcheck logs
docker inspect --format '{{json .State.Health}}' myapp

Docker Compose is a tool for defining and running multi-container applications. You write a compose.yaml file that declares services, networks, volumes, and their relationships. Compose handles dependency ordering, networking, and consistent environments. Use it for local development (with compose.override.yaml for dev settings), CI/CD testing, and small production deployments. For large-scale production, consider Kubernetes or Docker Swarm.
An image is a static, read-only template that defines everything needed to run an application (code, runtime, system tools, libraries, settings). A container is a running instance of an image — it adds a thin read-write layer on top of the image layers. One image can create many containers. Think of it as: image = class, container = instance. Images are immutable; containers have mutable state but are ephemeral by design.
Containers are ephemeral — their writable layer is deleted on removal. To persist data, Docker provides three mechanisms: named volumes (managed by Docker, stored in /var/lib/docker/volumes/), bind mounts (map any host directory into the container), and tmpfs mounts (in-memory, non-persistent). Named volumes are the recommended default because Docker manages their lifecycle, they are portable, and they support volume drivers for cloud storage.