Kubernetes Complete Guide 2026: Deploy, Scale, and Manage Containers
Kubernetes 2026: The Container Orchestrator
Kubernetes (K8s) is the de facto standard for running containers in production. Once you're coordinating more than a handful of services (rolling updates, service discovery, autoscaling), you'll generally want K8s or a managed equivalent like AWS ECS.
- Core Concepts
- Deployment
- Service
- Ingress with TLS
- Secrets and ConfigMaps
- Horizontal Pod Autoscaler
- Kubernetes Commands
- Managed Kubernetes (Cloud)
- Namespace Organization
Core Concepts
Pod → One or more containers that share network/storage
Deployment → Manages replica sets, rolling updates, rollbacks
Service → Stable DNS/IP for accessing pods
Ingress → HTTP routing rules (like nginx reverse proxy)
ConfigMap → Non-sensitive configuration
Secret → Sensitive configuration (passwords, tokens)
HPA → Horizontal Pod Autoscaler
Namespace → Logical isolation within a cluster
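Everything else in this guide builds on the Pod. You rarely create one directly (a Deployment does it for you), but it helps to see the smallest deployable unit on its own. A minimal sketch; the name and image here are illustrative, not from the deployment below:

```yaml
# pod.yaml: the smallest deployable unit
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: app
      image: nginx:1.27
      ports:
        - containerPort: 80
```

A bare Pod is not restarted on another node if its node dies, which is exactly why the Deployment below wraps Pods in a replica-managing controller.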
Deployment
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webcoderspeed-app
  namespace: production
  labels:
    app: webcoderspeed
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webcoderspeed
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # One extra pod during update
      maxUnavailable: 0  # Never drop below the desired replica count
  template:
    metadata:
      labels:
        app: webcoderspeed
    spec:
      containers:
        - name: app
          image: ghcr.io/webcoderspeed/app:v1.2.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: redis-url
          # Health checks
          livenessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 20
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /api/health/ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
          # Resource requests and limits
          resources:
            requests:
              cpu: '100m'      # 0.1 CPU
              memory: '256Mi'
            limits:
              cpu: '500m'      # 0.5 CPU
              memory: '512Mi'
      # Spread pods across nodes
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: webcoderspeed
Service
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webcoderspeed-service
  namespace: production
spec:
  selector:
    app: webcoderspeed
  ports:
    - name: http
      port: 80
      targetPort: 3000
      protocol: TCP
  type: ClusterIP  # Internal only; the Ingress handles external traffic
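Inside the cluster the Service gets a predictable DNS name, `<service>.<namespace>.svc.cluster.local`. A quick smoke test from a throwaway pod (assumes the deployment above is running and its `/api/health` endpoint exists):

```shell
# Resolve and hit the Service from inside the cluster
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -n production -- \
  curl -s http://webcoderspeed-service.production.svc.cluster.local/api/health
```

Pods in the same namespace can use the short name `webcoderspeed-service`; the full name is only needed across namespaces.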
Ingress with TLS
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webcoderspeed-ingress
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rps: "100"  # Rate limit: requests/second per client IP
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  ingressClassName: nginx  # Replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
    - hosts:
        - webcoderspeed.com
        - www.webcoderspeed.com
      secretName: webcoderspeed-tls
  rules:
    - host: webcoderspeed.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webcoderspeed-service
                port:
                  number: 80
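This manifest only references cert-manager and ingress-nginx; both must already be installed in the cluster. Assuming they are, you can check that the Ingress got an address and the certificate was issued:

```shell
kubectl get ingress webcoderspeed-ingress -n production      # ADDRESS column should fill in
kubectl get certificate -n production                        # cert-manager resource; READY should be True
kubectl describe ingress webcoderspeed-ingress -n production # events reveal cert or backend issues
curl -I https://webcoderspeed.com                            # expect a 200 once DNS points at the load balancer
```

If the certificate stays not-ready, `kubectl describe challenge -n production` shows where the ACME challenge is stuck.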
Secrets and ConfigMaps
# k8s/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: production
type: Opaque
stringData:  # Plaintext here; the API server stores it base64-encoded under .data
  database-url: "postgresql://user:password@postgres:5432/db"
  redis-url: "redis://redis:6379"
  jwt-secret: "your-256-bit-secret-here"
---
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
  FEATURE_AI: "true"
# Create the secret imperatively with kubectl instead (don't commit secrets to git!)
kubectl create secret generic app-secrets \
  --from-literal=database-url="postgresql://..." \
  --from-literal=jwt-secret="$(openssl rand -hex 32)" \
  --namespace=production
# Or use the External Secrets Operator with AWS Secrets Manager / GCP Secret Manager
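Keep in mind that Secret values are base64-encoded, not encrypted; anyone who can read the object can reverse the encoding. A quick sketch of the round trip, using the `redis-url` value from the manifest above:

```shell
# stringData lets you write plaintext; Kubernetes stores it base64-encoded in .data
printf '%s' 'redis://redis:6379' | base64
# → cmVkaXM6Ly9yZWRpczo2Mzc5

# Decoding goes the other way
printf '%s' 'cmVkaXM6Ly9yZWRpczo2Mzc5' | base64 -d
# → redis://redis:6379
```

To read a live value back: `kubectl get secret app-secrets -n production -o jsonpath='{.data.database-url}' | base64 -d`. Since this is only encoding, restrict Secret access with RBAC and consider enabling encryption at rest.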
Horizontal Pod Autoscaler
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webcoderspeed-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webcoderspeed-app
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # Scale out when avg CPU > 70% of requests
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30   # Scale up quickly
      policies:
        - type: Pods
          value: 2
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300  # Scale down slowly
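Note that `Utilization` targets are measured against each container's `resources.requests`, which is why the Deployment sets them, and resource metrics require metrics-server to be installed. Once applied, you can watch the autoscaler react:

```shell
kubectl get hpa webcoderspeed-hpa -n production --watch   # live target vs. current utilization
kubectl top pods -n production                            # per-pod usage (needs metrics-server)
kubectl describe hpa webcoderspeed-hpa -n production      # scaling events and conditions
```

If the TARGETS column shows `<unknown>`, metrics-server is missing or the pods lack resource requests.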
Kubernetes Commands
# Apply/update resources
kubectl apply -f k8s/
# Check status
kubectl get pods -n production
kubectl get deployments -n production
kubectl describe pod webcoderspeed-app-xxxxx -n production
# Logs
kubectl logs -f deployment/webcoderspeed-app -n production
kubectl logs webcoderspeed-app-xxxxx --previous -n production # Previous (crashed) container; --previous can't be combined with -f
# Execute command in pod
kubectl exec -it webcoderspeed-app-xxxxx -n production -- sh
kubectl exec -it webcoderspeed-app-xxxxx -n production -- sh -c 'psql "$DATABASE_URL"' # Single quotes so the pod's env var is used, not your local shell's
# Scale manually
kubectl scale deployment webcoderspeed-app --replicas=5 -n production
# Rolling update
kubectl set image deployment/webcoderspeed-app app=ghcr.io/webcoderspeed/app:v1.3.0 -n production
kubectl rollout status deployment/webcoderspeed-app -n production
# Rollback
kubectl rollout undo deployment/webcoderspeed-app -n production
# Port forwarding (debugging)
kubectl port-forward svc/webcoderspeed-service 3000:80 -n production
Managed Kubernetes (Cloud)
# AWS EKS
eksctl create cluster --name my-cluster --region us-east-1 --nodes 3
# Google GKE
gcloud container clusters create my-cluster --num-nodes=3 --zone=us-central1-a
# Azure AKS
az aks create -g myResourceGroup -n myAKSCluster --node-count 3
# All three give you a managed control plane; you only manage the worker nodes
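After creation, each provider CLI has a companion command that writes the cluster's credentials into your kubeconfig (some, like eksctl and gcloud, do this automatically on create; running it again is harmless). Cluster names here match the examples above:

```shell
# Write/refresh kubeconfig entries for each cluster
aws eks update-kubeconfig --name my-cluster --region us-east-1
gcloud container clusters get-credentials my-cluster --zone us-central1-a
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

kubectl get nodes  # Verify kubectl is pointed at the right cluster
```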
Namespace Organization
# Separate environments with namespaces
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
# Deploy the same manifests to different environments.
# Note: -n only applies to manifests that omit metadata.namespace;
# kubectl rejects a manifest whose hardcoded namespace conflicts with -n
kubectl apply -f k8s/ -n staging
kubectl apply -f k8s/ -n production
# Context switching between clusters
kubectl config use-context prod-cluster
kubectl config use-context staging-cluster
Kubernetes has a steep learning curve but pays off at scale. For most startups, managed platforms like Vercel, Fly.io, or Railway provide K8s power without the operational overhead.