Docker Complete Guide 2026: Containerize Node.js, Python, and Next.js Apps
Docker 2026: Containerize Everything
Docker made "works on my machine" largely a thing of the past: containers are now the default unit of deployment for most production systems. This guide covers production-grade Docker patterns.
- Multi-Stage Build for Node.js
- Next.js Production Docker
- Python / FastAPI Docker
- Docker Compose for Local Development
- Secrets Management
- .dockerignore (Critical!)
- Container Registry and CI/CD
- Key Docker Commands
Multi-Stage Build for Node.js
Single-stage builds ship dev dependencies and build tools alongside your app, often producing images several times larger than necessary. Multi-stage builds keep only the runtime artifacts:
# Dockerfile — Multi-stage Node.js
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./
# Dependencies stage
FROM base AS deps
RUN npm ci --omit=dev # production deps only (--only=production is deprecated)
# Build stage
FROM base AS builder
RUN npm ci # Install all deps including dev
COPY . .
RUN npm run build
# Production stage — minimal image
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
# Security: run as non-root
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nodeuser
# Copy only what's needed
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./package.json
USER nodeuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/index.js"]
Result: a single-stage image of this app weighs in around ~150MB, while the multi-stage production image is ~85MB (about 43% smaller).
Next.js Production Docker
# Dockerfile for Next.js
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./
FROM base AS deps
RUN npm ci
FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build Next.js with output: 'standalone'
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Next.js standalone output includes everything needed
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
// next.config.js — required for standalone output
module.exports = {
output: 'standalone',
}
Python / FastAPI Docker
# Dockerfile for Python FastAPI
FROM python:3.12-slim AS base
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
curl \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy and install Python deps first (layer caching)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy source code
COPY src/ ./src/
# Non-root user
RUN useradd -m -u 1001 appuser && chown -R appuser /app
USER appuser
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8000/health || exit 1
CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "2"]
Docker Compose for Local Development
# docker-compose.yml — the top-level `version` key is obsolete in Compose v2 and can be dropped
services:
app:
build:
context: .
target: builder # Use builder stage for hot reload
ports:
- "3000:3000"
environment:
- NODE_ENV=development
- DATABASE_URL=postgresql://postgres:password@db:5432/myapp
- REDIS_URL=redis://redis:6379
volumes:
- .:/app # Hot reload
- /app/node_modules # Exclude node_modules
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
command: npm run dev
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: myapp
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis_data:/data
nginx:
image: nginx:alpine
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- app
volumes:
postgres_data:
redis_data:
Secrets Management
# DON'T do this:
ENV API_KEY=secret123 # Visible in docker history!
# DO this instead:
# 1. Use --secret in build (BuildKit)
RUN --mount=type=secret,id=api_key \
  API_KEY=$(cat /run/secrets/api_key) npm run build
# Build with:
# DOCKER_BUILDKIT=1 docker build --secret id=api_key,src=./secrets/api_key .
# 2. Pass at runtime (best for production)
# docker run -e API_KEY=$API_KEY myapp
# 3. Use Docker Swarm secrets or Kubernetes secrets
# docker-compose with secrets (no `version` key needed in Compose v2)
services:
app:
environment:
- DATABASE_URL_FILE=/run/secrets/db_url
secrets:
- db_url
secrets:
db_url:
file: ./secrets/db_url.txt
.dockerignore (Critical!)
# .dockerignore — prevents copying large/sensitive files
node_modules
.next
dist
.git
.gitignore
README.md
*.env*
*.log
.DS_Store
coverage
.github
tests
__tests__
*.test.ts
*.spec.ts
Dockerfile*
docker-compose*
Container Registry and CI/CD
# .github/workflows/docker.yml
name: Build and Push Docker Image
on:
push:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: |
username/myapp:latest
username/myapp:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: linux/amd64,linux/arm64
Key Docker Commands
# Build
docker build -t myapp:latest .
docker build --target production -t myapp:prod .
# Run
docker run -p 3000:3000 -e DATABASE_URL=$DATABASE_URL myapp:latest
docker run -d --name myapp --restart=unless-stopped myapp:latest
# Inspect
docker logs myapp -f --tail=100
docker exec -it myapp sh
docker stats myapp
# Compose
docker compose up -d --build
docker compose logs -f app
docker compose down -v # Also remove volumes
# Clean up
docker system prune -af --volumes # Remove everything unused
Docker plus Docker Compose is a solid baseline for most production deployments. Learn these patterns once and they work anywhere containers run.