📋 Understanding Kubernetes Networking
🎯 Why Services Matter
The Problem: Pods are ephemeral - they come and go with changing IPs.
The Solution: Services provide stable endpoints and load balancing.
Key Benefit: Decouple consumers from providers with service discovery.
☎️ Real-World Analogy
Think of Services as a company phone system:
- ☎️ Service = Main company phone number
- 📞 Endpoints = Individual employee extensions
- 🔄 Load Balancer = Call distribution system
- 📝 DNS = Company phone directory
- 🚪 Ingress = Reception desk routing external calls
Kubernetes Networking Model
Cluster Network Architecture
External Client → Internet → LoadBalancer (34.102.136.180) → NodePort (Node:30080) → ClusterIP (10.96.0.1) → Pod (10.244.1.5)
(Example addresses; traffic flows from the external client through each Service layer down to a Pod.)
💡 Kubernetes Network Principles
- Every Pod gets its own IP: No NAT between pods
- Containers in a Pod share network: Communicate via localhost
- All Pods can communicate: No NAT required across nodes
- Services get stable IPs: Virtual IPs that don't change
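A quick way to see the flat pod network in action; the pod names here are illustrative, and this assumes a running cluster:
# Start a target pod and a client pod
kubectl run target --image=nginx --restart=Never
kubectl run pinger --image=busybox --restart=Never -- sleep 3600
# Grab the target's Pod IP and reach it directly from the client: no NAT involved
TARGET_IP=$(kubectl get pod target -o jsonpath='{.status.podIP}')
kubectl exec pinger -- wget -qO- "$TARGET_IP"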
🔌 Service Types
🔒 ClusterIP
Default type. Exposes service on a cluster-internal IP. Only reachable from within the cluster.
Use Case: Internal microservices, databases
🚪 NodePort
Exposes service on each node's IP at a static port (30000-32767). Accessible from outside.
Use Case: Development, simple external access
⚖️ LoadBalancer
Exposes service externally using cloud provider's load balancer. Gets external IP.
Use Case: Production apps on cloud
🔗 ExternalName
Maps the service to an external DNS name. No proxying, just a DNS CNAME record.
Use Case: External databases, APIs (example below, after LoadBalancer)
ClusterIP Service Example
clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: default
spec:
  type: ClusterIP            # Default type
  selector:
    app: backend
    tier: api
  ports:
  - name: http
    protocol: TCP
    port: 80                 # Service port
    targetPort: 8080         # Container port
  - name: metrics
    protocol: TCP
    port: 9090
    targetPort: metrics      # Named port
  sessionAffinity: ClientIP  # Sticky sessions
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
NodePort Service Example
nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - name: http
    port: 80          # Service port
    targetPort: 3000  # Container port
    nodePort: 30080   # Node port (30000-32767)
    protocol: TCP
LoadBalancer Service Example
loadbalancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    # AWS annotations
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  loadBalancerSourceRanges:  # Restrict access
  - 10.0.0.0/8
  - 172.16.0.0/12
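ExternalName Service Example
The one service type above without an example; a minimal sketch, where db.example.com is an illustrative hostname:
externalname-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com  # DNS CNAME target; no proxying happens
# Pods resolve external-db as a CNAME for db.example.com,
# so the external database looks like an in-cluster service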
Headless Service (No Cluster IP)
headless-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: database-headless
spec:
  clusterIP: None  # Headless service
  selector:
    app: cassandra
  ports:
  - name: cql
    port: 9042
    targetPort: 9042
# DNS returns all Pod IPs, not a single service IP
# Used for StatefulSets and direct pod communication
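To see the difference, a quick check, assuming pods labeled app=cassandra are actually running behind the service above:
# nslookup against a headless service returns one A record per backing pod
kubectl run dns-check --image=busybox:1.28 -it --rm --restart=Never -- nslookup database-headless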
Service Commands
Service Management Commands
# Create service from YAML
kubectl apply -f service.yaml
# Expose deployment as service
kubectl expose deployment nginx --port=80 --target-port=8080 --type=ClusterIP
# Get services
kubectl get services
kubectl get svc -o wide
# Describe service
kubectl describe service my-service
# Get endpoints
kubectl get endpoints my-service
# Test service from inside cluster
kubectl run test-pod --image=busybox -it --rm -- wget -O- my-service
# Port forward to access service locally
kubectl port-forward service/my-service 8080:80
# Get service in YAML format
kubectl get service my-service -o yaml
🔍 DNS & Service Discovery
Kubernetes DNS Resolution
How DNS Works in Kubernetes
1. Service Creation: when you create a Service, Kubernetes DNS creates a DNS record for it
2. DNS Format: <service-name>.<namespace>.svc.cluster.local
3. Pod DNS Query: pods query CoreDNS to resolve service names to IPs
4. IP Resolution: CoreDNS returns the ClusterIP of the service
DNS Examples
DNS Resolution Examples
# Full DNS name
my-service.default.svc.cluster.local
# Within same namespace
my-service
# Cross namespace
my-service.other-namespace
# Service subdomain
my-service.other-namespace.svc
# Pod DNS (for StatefulSets)
pod-0.my-service.default.svc.cluster.local
# SRV records for ports
_http._tcp.my-service.default.svc.cluster.local
Testing DNS Resolution
DNS Testing Commands
# Run DNS test pod
kubectl run dns-test --image=busybox:1.28 -it --rm --restart=Never -- sh
# Inside the pod, test DNS resolution
nslookup my-service
nslookup my-service.default.svc.cluster.local
nslookup kubernetes.default
# Test with dig (if available)
kubectl run dig-test --image=tutum/dnsutils -it --rm --restart=Never -- sh
dig my-service.default.svc.cluster.local
# Check CoreDNS logs
kubectl logs -n kube-system -l k8s-app=kube-dns
# Get CoreDNS config
kubectl get configmap coredns -n kube-system -o yaml
Custom DNS Configuration
pod-with-dns-config.yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  dnsPolicy: "None"  # Custom DNS settings
  dnsConfig:
    nameservers:
    - 8.8.8.8
    - 8.8.4.4
    searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    options:
    - name: ndots
      value: "2"
    - name: edns0
  containers:
  - name: app
    image: nginx
💡 DNS Policies
- ClusterFirst: Default. Queries for cluster names go to CoreDNS; everything else is forwarded to the node's upstream resolver
- Default: Inherit the node's DNS settings (despite the name, this is not the default policy)
- ClusterFirstWithHostNet: For pods running with hostNetwork: true
- None: Ignore cluster DNS entirely; requires a dnsConfig block
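A common trip-up is the third policy: a pod with hostNetwork: true falls back to the node's resolver unless you set it explicitly. A minimal sketch (the pod name is illustrative):
hostnetwork-dns-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-pod
spec:
  hostNetwork: true                    # Pod shares the node's network namespace
  dnsPolicy: ClusterFirstWithHostNet   # Keep resolving *.svc.cluster.local via CoreDNS
  containers:
  - name: app
    image: nginx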
🌐 Advanced Networking
Ingress Resource
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
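A quick way to exercise the rules, assuming an NGINX ingress controller is installed; <LB-IP> is a placeholder for the controller's external address:
# Check that the Ingress was admitted and picked up an address
kubectl get ingress app-ingress
# Hit a route without touching real DNS
curl -k https://app.example.com/api --resolve app.example.com:443:<LB-IP>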
Service Mesh Integration
Service with Istio Annotations
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
  annotations:
    # Istio traffic management
    traffic.sidecar.istio.io/includeInboundPorts: "9080"
    traffic.sidecar.istio.io/excludeOutboundPorts: "15090,15021"
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
Multi-Port Services
multi-port-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-port-service
spec:
  selector:
    app: multi-app
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8443
    protocol: TCP
  - name: metrics
    port: 9090
    targetPort: 9090
    protocol: TCP
  - name: grpc
    port: 50051
    targetPort: 50051
    protocol: TCP
EndpointSlices
📝 EndpointSlices vs Endpoints
EndpointSlices replaced the legacy Endpoints API as the way Kubernetes tracks network endpoints:
- Scalability: Better for large numbers of endpoints
- Performance: Reduced API server load
- Topology: Support for topology-aware routing
- Dual-stack: Better IPv4/IPv6 support
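To inspect them, assuming a Service named my-service exists; the label below is set automatically by the control plane, and <slice-name> is a placeholder:
# List the slices backing a Service
kubectl get endpointslices -l kubernetes.io/service-name=my-service
# Inspect addresses, conditions, and topology hints
kubectl describe endpointslice <slice-name>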
🛡️ Network Policies
Network Policy Rules
- INGRESS: allow traffic FROM specific pods/namespaces TO this pod
- EGRESS: allow traffic FROM this pod TO specific destinations
Default Deny All Policy
deny-all-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}  # Apply to all pods in namespace
  policyTypes:
  - Ingress
  - Egress
Allow Specific Traffic
web-db-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-to-db
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Separate list items are OR'd: pods labeled app=web in this namespace,
    # OR any pod in a namespace labeled name=production
    - podSelector:
        matchLabels:
          app: web
    - namespaceSelector:
        matchLabels:
          name: production
    ports:
    - protocol: TCP
      port: 5432
Egress Control
egress-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-dns
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53  # DNS
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32  # Block metadata service
    ports:
    - protocol: TCP
      port: 443
⚠️ Network Policy Pitfalls
- CNI Support: Not all CNI plugins support NetworkPolicies
- Default Allow: Without policies, all traffic is allowed
- No Deny Rules: Policies are additive, can't explicitly deny
- DNS Access: Remember to allow DNS (port 53) for name resolution
💻 Practice Exercises
Exercise 1: Multi-Tier Application
Objective: Set up services for a 3-tier application
- Create frontend service (LoadBalancer)
- Create backend API service (ClusterIP)
- Create database service (ClusterIP, headless)
- Verify connectivity between tiers
💡 Solution
# Frontend Service (LoadBalancer)
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    tier: frontend
  ports:
  - port: 80
    targetPort: 3000
---
# Backend API Service (ClusterIP)
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: ClusterIP
  selector:
    tier: backend
  ports:
  - port: 8080
    targetPort: 8080
---
# Database Service (Headless)
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  clusterIP: None
  selector:
    tier: database
  ports:
  - port: 5432
    targetPort: 5432
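The last objective asks you to verify connectivity; one way to spot-check it, assuming the backing deployments are running:
# Spot-check connectivity from a throwaway client pod
kubectl run probe --image=busybox -it --rm --restart=Never -- sh
# Inside the pod:
wget -qO- backend-api:8080     # frontend -> backend path
nslookup database              # headless service resolves to the database pod IPs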
Exercise 2: Network Policy Implementation
Objective: Secure your application with network policies
- Create default deny-all policy
- Allow frontend to backend communication
- Allow backend to database communication
- Allow egress to external APIs on port 443
💡 Solution
# Default deny all
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
# Frontend to Backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      tier: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - port: 8080
---
# Backend to Database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-allow-backend
spec:
  podSelector:
    matchLabels:
      tier: database
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - port: 5432
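The fourth objective (egress to external APIs on port 443) is not covered above; a sketch that scopes it to the backend tier, which is an assumption, and re-allows DNS since the deny-all blocks it:
---
# Egress to external HTTPS APIs (tier: backend is an assumed scope)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-egress-https
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
  # DNS must be re-allowed once egress is default-denied
  - ports:
    - protocol: UDP
      port: 53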
Exercise 3: Service Discovery Testing
Objective: Test DNS and service discovery
- Create a service
- Test DNS resolution from a pod
- Access service using different DNS formats
- Verify load balancing across endpoints
💡 Solution
# Create test deployment and service
kubectl create deployment test-app --image=nginx --replicas=3
kubectl expose deployment test-app --port=80
# Run test pod
kubectl run test-client --image=busybox -it --rm -- sh
# Inside the pod, test DNS
nslookup test-app
wget -O- test-app
wget -O- test-app.default
wget -O- test-app.default.svc.cluster.local
# Check endpoints
kubectl get endpoints test-app
# Test load balancing: default nginx replicas serve identical pages, so drive
# traffic from a one-shot pod and check which replicas served it via their logs
kubectl run lb-test --image=busybox --restart=Never --rm -it -- sh -c 'for i in $(seq 10); do wget -qO- test-app > /dev/null; done'
kubectl logs -l app=test-app --prefix | grep "GET /"
🚀 Challenge: Complete Network Setup
Create a production-ready network configuration:
- Multi-tier application with proper service types
- Ingress controller for external access
- Network policies for security
- Custom DNS configuration
- Service mesh integration (optional)