The Default-Allow Problem
By default, every pod in a Kubernetes cluster can communicate with every other pod across all namespaces. This flat network model simplifies initial development but creates a significant security risk in production. A compromised pod can reach databases, internal APIs, and services in other namespaces without restriction.
Network Policies implement zero-trust networking at the pod level, defining explicit ingress and egress rules based on labels, namespaces, and CIDR blocks. This guide covers practical policy patterns for microservices architectures, drawing on our security and compliance practice.
Prerequisites
Network Policies require a CNI plugin that supports them. Not all CNI plugins enforce policies:
- Calico — Full NetworkPolicy support plus extended Calico-specific policies
- Cilium — eBPF-based enforcement with L7 visibility
- Weave Net — Basic NetworkPolicy support
- Flannel — Does NOT support NetworkPolicy (common mistake)
Verify that your CNI enforces policies before relying on them for security: the API server accepts NetworkPolicy objects even when the CNI silently ignores them.
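One way to confirm enforcement is to apply a deny-all policy in a scratch namespace and check that a test pod loses connectivity. A minimal sketch (the netpol-test namespace and the policy name are assumptions for this test, not part of the setup above):

```yaml
# Verify CNI enforcement:
#   kubectl create ns netpol-test
#   kubectl apply -f deny-probe.yaml
#   kubectl run probe -n netpol-test --rm -it --image=busybox -- \
#     wget -qO- --timeout=3 http://kubernetes.default.svc
# If the wget still succeeds, your CNI is not enforcing NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-probe        # hypothetical name for this test policy
  namespace: netpol-test  # scratch namespace, assumed created for the test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```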
Default Deny All Traffic
Start by denying all traffic in a namespace, then selectively allow what is needed:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
An empty podSelector matches all pods in the namespace. With no ingress or egress rules defined, all traffic is blocked. This is the foundation of zero-trust networking.
Allow DNS Resolution
After applying default-deny, pods cannot resolve DNS. Allow egress to the kube-dns service:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Service-to-Service Policies
Define explicit communication paths between microservices:
API Gateway to Backend Services
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gateway-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: user-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway
      ports:
        - protocol: TCP
          port: 8080
This policy allows only pods labeled app: api-gateway to reach app: user-api on port 8080. All other ingress to user-api is denied.
Database Access Restriction
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-access
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgresql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
Only pods with the tier: backend label can connect to PostgreSQL. Frontend pods, job runners without the label, and any compromised pods cannot reach the database directly.
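Because these policies match on labels, the tier: backend label must be present on the pod template, not just on the Deployment object, or the selector will never match. A hedged sketch (the order-service name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service           # placeholder name
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
        tier: backend           # this label is what database-access matches on
    spec:
      containers:
        - name: order-service
          image: example.com/order-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
```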
Namespace Isolation
In multi-tenant clusters, isolate namespaces from each other while allowing specific cross-namespace communication:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              purpose: monitoring
          podSelector:
            matchLabels:
              app: prometheus
      ports:
        - protocol: TCP
          port: 9090
        - protocol: TCP
          port: 9091
This allows Prometheus in the monitoring namespace to scrape metrics from all pods in production, without opening access for other pods in the monitoring namespace.
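Note that namespaceSelector matches labels on the Namespace object itself, so the monitoring namespace must actually carry the purpose: monitoring label for the policy to take effect. For example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    purpose: monitoring   # matched by the policy's namespaceSelector
```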
Egress Filtering
Control what external resources pods can access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payment-service
  policyTypes:
    - Egress
  egress:
    # Allow access to internal database
    - to:
        - podSelector:
            matchLabels:
              app: postgresql
      ports:
        - protocol: TCP
          port: 5432
    # Allow access to Stripe API
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
The payment service can reach its database and external HTTPS endpoints (for Stripe API calls) but nothing else. In a production environment, you would further restrict the CIDR to Stripe's published IP ranges.
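As a sketch of that tightening, the wide-open ipBlock can at minimum carry an except list that carves out the cloud metadata endpoint and internal networks (the CIDRs below are standard private ranges, not Stripe's published ranges):

```yaml
# Tightened version of the HTTPS egress rule; swap in Stripe's published
# ranges for the cidr once you have them
- to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
          - 169.254.169.254/32   # cloud metadata endpoint
          - 10.0.0.0/8           # internal pod/node networks
          - 172.16.0.0/12
          - 192.168.0.0/16
  ports:
    - protocol: TCP
      port: 443
```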
Policy Visualization
Understanding the cumulative effect of multiple policies is challenging. Use tools to visualize the network topology:
# Pipe policies into a policy viewer (Kubevious CLI shown here)
kubectl get networkpolicies -n production -o yaml | npx @kubevious/cli network-policy-viewer
# Or use Cilium's Hubble for real-time flow visualization
hubble observe --namespace production --verdict DROPPED
Testing Network Policies
Validate policies before deploying to production:
# Deploy a test pod
kubectl run test-pod --rm -it --image=busybox --namespace=production -- sh
# Test connectivity from inside the pod
wget -qO- --timeout=3 http://user-api:8080/health
# Should succeed if policy allows it
wget -qO- --timeout=3 http://postgresql:5432
# Should timeout/fail if policy blocks it
# Test cross-namespace access
wget -qO- --timeout=3 http://user-api.production.svc:8080/health
# From a pod in a different namespace — should be blocked
Common Pitfalls
Policies Are Additive, Not Ordered
Network Policies have no priority or ordering. If any policy selecting a pod allows a connection, the connection is allowed; there is no explicit deny rule in the core API. The deny-all baseline combined with specific allow policies is what creates the desired restriction.
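For instance, if the allow-gateway-to-api policy permits port 8080 and a second, hypothetical policy permits tier: internal pods on port 9090, user-api accepts both; each policy independently widens what is allowed:

```yaml
# Hypothetical second policy; its allowances are unioned with
# allow-gateway-to-api, never ranked against it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internal-metrics   # illustrative name
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: user-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: internal
      ports:
        - protocol: TCP
          port: 9090
```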
Health Check Traffic
Kubelet health checks originate from the node, not from a pod. Some CNI plugins require explicit allowances for node-to-pod health check traffic:
ingress:
  - from:
      - ipBlock:
          cidr: 10.0.0.0/8  # Node CIDR range
    ports:
      - protocol: TCP
        port: 8080
Stateful Connections
Network Policies are stateful — if egress to an external service is allowed, the response traffic is automatically permitted. You do not need separate ingress rules for return traffic.
Best Practices
- Start with default-deny in every namespace
- Label pods consistently — network policies depend entirely on label selectors
- Document every policy with annotations explaining the business reason
- Test policies in staging with real traffic before production deployment
- Monitor dropped packets using your CNI's flow logging capabilities
- Review policies quarterly as service dependencies evolve
- Integrate network policy manifests into your infrastructure as code repository
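As an example of the documentation practice, the rationale can live directly in the manifest via annotations (the annotation keys and ticket reference below are an illustrative convention, not a Kubernetes standard):

```yaml
metadata:
  name: allow-gateway-to-api
  namespace: production
  annotations:
    # Hypothetical annotation keys; pick a prefix your team owns
    policy.example.com/reason: "Only the gateway may call user-api"
    policy.example.com/owner: "platform-team"
```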
Network Policies are a fundamental building block of Kubernetes security. Combined with RBAC, pod security standards, and secrets management, they create defense-in-depth that limits the blast radius of any security incident.