Kubernetes NetworkPolicies — Zero-Trust Networking Between Pods

Author: Sanjeev Sharma (@webcoderspeed1)
Introduction
By default, Kubernetes allows any pod to communicate with any other pod in the cluster. This open-network posture violates zero-trust security principles. NetworkPolicies act as firewalls between pods, controlling ingress (inbound) and egress (outbound) traffic based on labels, namespaces, and IP blocks. Note that the policies are enforced by the CNI plugin: on a CNI without policy support, they are silently ignored. With a plugin like Cilium, they enforce Layer 3/4 policies and can extend to Layer 7 (application-layer) rules. This post covers the theory and practice of NetworkPolicies in production deployments.
- Default-Deny Ingress and Egress
- Allowlisting by Label Selector
- Namespace Isolation
- Egress to External IPs
- NetworkPolicy Design Patterns
- Testing NetworkPolicy with netshoot
- Cilium for L7 Policies
- Common Gotchas
- Checklist
- Conclusion
Default-Deny Ingress and Egress
The foundation of zero-trust networking: deny all traffic by default, then explicitly allow what's needed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
This blocks all ingress and egress for all pods in the namespace. Nothing works until you add allow rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress: []
This denies only ingress; pods can initiate outbound connections. For a typical web application tier:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
      tier: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: production
          podSelector:
            matchLabels:
              app: web
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
This allows traffic to api pods (port 8080) only from web pods in the same namespace.
Allowlisting by Label Selector
Use pod labels to define allowed communication paths. This is more flexible than IP-based rules.
apiVersion: v1
kind: Pod
metadata:
  name: database
  namespace: production
  labels:
    app: postgres
    tier: database
    access: restricted
spec:
  containers:
    - name: postgres
      image: postgres:15
---
apiVersion: v1
kind: Pod
metadata:
  name: api-server
  namespace: production
  labels:
    app: api
    tier: backend
    access: service
spec:
  containers:
    - name: app
      image: my-api:v1
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
              access: service
      ports:
        - protocol: TCP
          port: 5432
Only pods labeled app: api and access: service can reach PostgreSQL on port 5432. Other pods are blocked, even if they know the endpoint.
Namespace Isolation
Isolate namespaces from each other using namespace-level selectors:
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    isolation: strict
    env: prod
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    isolation: medium
    env: staging
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              isolation: strict
              env: prod
Pods in the production namespace now accept ingress only from namespaces labeled isolation: strict and env: prod, i.e., production itself. Staging cannot reach production.
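Strict namespace isolation tends to break cluster-wide services such as Prometheus scraping. Because NetworkPolicies are additive (a connection is allowed if any policy selecting the pod allows it), you can layer an exception on top of the policy above. A sketch, assuming a hypothetical monitoring namespace labeled name: monitoring and a metrics port of 9090:

```yaml
# Sketch: additionally admit scrape traffic from an assumed "monitoring" namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 9090  # assumed metrics port
```

Both policies apply simultaneously; the union of their allow rules is what the pods accept.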
Egress to External IPs
Control which pods can reach external networks. This prevents data exfiltration and lateral movement.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-external-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    # Allow DNS (required for name resolution)
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # Allow traffic to external HTTPS APIs
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32 # Block the cloud metadata service
              - 10.0.0.0/8         # Block internal networks
      ports:
        - protocol: TCP
          port: 443
    # Allow traffic to database pods
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
This allows the API:
- DNS queries (UDP/TCP port 53) to any namespace
- HTTPS outbound (TCP port 443) to public networks, blocking internal IPs and the metadata endpoint
- Database connections (TCP port 5432) to PostgreSQL pods
Critical: Always allow DNS egress (UDP 53, TCP 53), or pods cannot resolve domain names.
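Rather than opening port 53 toward every namespace, you can scope DNS egress to the cluster DNS pods themselves. A tighter sketch, assuming CoreDNS runs in kube-system with the standard k8s-app: kube-dns label (and Kubernetes 1.21+ for the built-in namespace name label):

```yaml
# Sketch: restrict DNS egress to the cluster DNS pods only.
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
        podSelector:
          matchLabels:
            k8s-app: kube-dns  # standard CoreDNS label; verify in your cluster
    ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53
```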
NetworkPolicy Design Patterns
Pattern 1: Allow-listed ingress (deny other)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
Only ingress-nginx can reach frontend pods.
Pattern 2: Ephemeral clients (batch jobs)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: batch-jobs-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      workload: batch
  policyTypes:
    - Egress
  egress:
    # DNS, so external hostnames resolve
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
    # External HTTPS (an ipBlock, not a namespaceSelector, is needed for traffic leaving the cluster)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
    # Internal Redis cache
    - to:
        - podSelector:
            matchLabels:
              app: cache
      ports:
        - protocol: TCP
          port: 6379
Batch jobs can resolve DNS, reach external HTTPS endpoints, and connect to the internal Redis cache, and nothing else.
Pattern 3: Multi-tier application
---
# Tier 1: Ingress to frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
---
# Tier 2: Frontend to backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
---
# Tier 3: Backend to database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-to-database
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
Traffic flows frontend → backend → database. No cross-cutting (e.g., frontend cannot reach database directly).
Testing NetworkPolicy with netshoot
Deploy netshoot for testing connectivity:
apiVersion: v1
kind: Pod
metadata:
  name: netshoot
  namespace: production
spec:
  serviceAccountName: netshoot # assumes this ServiceAccount exists; omit to use the default
  containers:
    - name: netshoot
      image: nicolaka/netshoot:latest
      command: ["sleep", "3600"]
Test connectivity:
# From netshoot pod, test if you can reach an API pod
kubectl exec -it netshoot -n production -- nc -zv api-pod.production.svc.cluster.local 8080
# Test DNS resolution
kubectl exec -it netshoot -n production -- nslookup google.com
# Test egress to external IP
kubectl exec -it netshoot -n production -- curl -I https://example.com
# Capture traffic with tcpdump
kubectl exec -it netshoot -n production -- tcpdump -i eth0 -w traffic.pcap
Expected results:
- Connection succeeded: the policy allows the traffic
- Connection timed out: the policy is dropping packets (check the application with kubectl logs <pod>, and inspect your CNI agent's logs for drop events)
- Connection refused: the traffic is allowed, but nothing is listening on that port
Cilium for L7 Policies
Cilium enforces policies at Layer 7 (HTTP, gRPC, DNS). Standard Kubernetes NetworkPolicies only support Layer 3/4.
helm repo add cilium https://helm.cilium.io
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set cni.chainingMode=aws-cni # chaining shown for EKS with the AWS VPC CNI; omit on other platforms
Create a Cilium NetworkPolicy to restrict HTTP methods:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-only
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/api/.*"
This allows only GET requests to /api/* endpoints; POST, PUT, DELETE are denied.
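Cilium can also filter egress by DNS name via toFQDNs, which plain NetworkPolicies cannot express. A sketch, where the api.example.com destination and the app: api label are placeholders; the DNS rule is required so Cilium can observe lookups and map the allowed names to IPs:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: egress-to-fqdn
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api
  egress:
    # Let Cilium proxy DNS so it learns which IPs back the allowed names
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Allow HTTPS only to this FQDN (placeholder domain)
    - toFQDNs:
        - matchName: "api.example.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```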
Common Gotchas
Gotcha 1: Forgetting DNS egress
If pods can't resolve domain names, you've forgotten to allow DNS egress:
# WRONG: Blocks all egress
spec:
  policyTypes:
    - Egress
  egress: []

# CORRECT: Allow DNS, then other egress
spec:
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
Gotcha 2: Namespace selector matching
Namespace selectors require the namespace to have matching labels:
# This policy references the staging namespace
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            name: staging
Ensure the namespace has the label:
kubectl label namespace staging name=staging
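On Kubernetes 1.21+, every namespace automatically carries the immutable label kubernetes.io/metadata.name: <namespace>, so you can match on it instead of maintaining custom labels:

```yaml
# Matches the staging namespace via the built-in, immutable name label (Kubernetes 1.21+)
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: staging
```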
Gotcha 3: Empty selectors (allow all)
An empty podSelector inside a from clause matches every pod in the policy's namespace, effectively allowing all in-namespace traffic:
# WRONG: allows every pod in the namespace
ingress:
  - from:
      - podSelector: {}

# CORRECT: allow only specific pods
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: trusted
Checklist
- Default-deny ingress and egress policy deployed per namespace
- DNS egress (port 53 UDP/TCP) explicitly allowed
- All pod-to-pod communication paths defined in NetworkPolicies
- External IP access restricted by egress policies
- Namespace labels applied for namespace-scoped policies
- Cross-namespace communication policies tested
- NetworkPolicy changes validated with netshoot before production
- Cilium or similar L7 policy system evaluated for HTTP-level controls
- Monitoring alerts for denied connections (via Cilium metrics)
- Runbook for diagnosing NetworkPolicy connectivity issues
Conclusion
NetworkPolicies are the foundation of zero-trust networking in Kubernetes. Start with default-deny rules, use labels to define allowed communication paths, and test thoroughly with netshoot. For sophisticated controls (HTTP method restrictions, DNS filtering), deploy Cilium. Regular audits of policies ensure they remain aligned with application architecture. This systematic approach prevents compromised pods from moving laterally and limits blast radius during security incidents.