## 🏗️ Foundational Patterns
Core patterns that form the building blocks of cloud-native applications in Kubernetes.
### Health Probe Pattern
Ensure containers are healthy and ready to serve traffic using liveness, readiness, and startup probes.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  template:
    spec:
      containers:
        - name: app
          image: myapp:v1
          ports:
            - containerPort: 8080
          # Liveness probe - restarts the container if unhealthy
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          # Readiness probe - removes the pod from Service endpoints if not ready
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          # Startup probe - for slow-starting containers
          startupProbe:
            httpGet:
              path: /startup
              port: 8080
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 30  # 30 * 10s = 5 minutes max startup time
```
- Use readiness probes to keep traffic away from pods that are not yet ready to serve
- Keep liveness probes simple and fast; an expensive check can itself destabilize the pod
- Use startup probes for applications with long initialization, so liveness checks don't kill them mid-boot
- Don't share one endpoint between liveness and readiness; they answer different questions (see the sketch below)
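That last point is easier to see in code. Below is a minimal Go sketch (hypothetical handler logic; the database ping stands in for any hard dependency) of separate liveness and readiness endpoints:

```go
package main

import (
	"context"
	"database/sql"
	"log"
	"net/http"
	"time"
)

// db is assumed to be initialized during startup (driver setup omitted).
var db *sql.DB

func main() {
	// Liveness: cheap and dependency-free. If this handler responds,
	// the process is alive; restarting it would not fix a database outage.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness: verifies the pod can do useful work right now. Failing
	// here removes the pod from Service endpoints without restarting it.
	http.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
		defer cancel()
		if err := db.PingContext(ctx); err != nil {
			http.Error(w, "database unavailable", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

If the two checks shared one endpoint, a database outage would trigger restarts that cannot help, instead of simply draining traffic.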
### Resource Management Pattern
Define resource requests and limits to ensure proper scheduling and prevent resource starvation.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource-aware-app
spec:
  template:
    spec:
      containers:
        - name: app
          image: myapp:v1
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
              ephemeral-storage: "1Gi"
            limits:
              memory: "512Mi"
              cpu: "500m"
              ephemeral-storage: "2Gi"
          # QoS class: Burstable (requests are set but lower than limits)
          # Other QoS classes:
          # - Guaranteed: requests == limits for every container and resource
          # - BestEffort: no requests or limits at all
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: production
spec:
  hard:
    requests.cpu: "100"
    requests.memory: 200Gi
    limits.cpu: "200"
    limits.memory: 400Gi
    persistentvolumeclaims: "10"
    services.loadbalancers: "2"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-limit-range
  namespace: production
spec:
  limits:
    - type: Container
      max:
        cpu: "2"
        memory: "2Gi"
      min:
        cpu: "100m"
        memory: "128Mi"
      default:
        cpu: "500m"
        memory: "512Mi"
      defaultRequest:
        cpu: "250m"
        memory: "256Mi"
```
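For contrast, a pod only reaches the Guaranteed QoS class when every container sets requests equal to limits. A minimal fragment (illustrative values):

```yaml
# Guaranteed QoS: requests and limits present and equal for every resource
resources:
  requests:
    cpu: "500m"
    memory: "512Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```

Guaranteed pods are the last candidates for eviction under node memory pressure, which makes this shape a sensible default for critical workloads.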
### Declarative Deployment Pattern
Use declarative configurations and GitOps practices for reproducible deployments.
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
patchesStrategicMerge:  # deprecated in Kustomize v5; newer versions use `patches`
  - deployment-patch.yaml
configMapGenerator:
  - name: app-config
    envs:
      - config.env
images:
  - name: myapp
    newTag: v2.1.0
replicas:
  - name: web-app
    count: 5
commonLabels:
  app.kubernetes.io/name: myapp
  app.kubernetes.io/version: v2.1.0
  app.kubernetes.io/managed-by: kustomize
```
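The `deployment-patch.yaml` file referenced above is not shown in the original. As an illustration (hypothetical override values), a strategic-merge patch only needs enough fields to identify the target resource and the values to change; containers are matched by name:

```yaml
# deployment-patch.yaml (hypothetical overlay patch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: LOG_LEVEL
              value: "warn"  # production-only override
```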
## 🚀 Init Container Pattern
Use init containers to prepare the environment before main containers start.
Typical init sequence: database migration → download assets → application starts.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-init
spec:
  template:
    spec:
      initContainers:
        # Wait for the database to accept connections
        - name: wait-for-db
          image: busybox:1.35
          command: ['sh', '-c']
          args:
            - |
              until nc -z postgres-service 5432; do
                echo "Waiting for database..."
                sleep 2
              done
              echo "Database is ready!"
        # Run migrations
        - name: db-migration
          image: myapp:v1
          command: ['python', 'manage.py', 'migrate']
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: url
        # Download configuration into a shared volume
        - name: fetch-config
          image: busybox:1.35
          command: ['wget']
          args: ['-O', '/shared/config.yaml', 'http://config-server/config']
          volumeMounts:
            - name: shared-data
              mountPath: /shared
      containers:
        - name: app
          image: myapp:v1
          volumeMounts:
            - name: shared-data
              mountPath: /app/config
      volumes:
        - name: shared-data
          emptyDir: {}
```
## 🚗 Sidecar Pattern
Extend and enhance the main container without changing it by deploying helper containers alongside.
In a sidecar pod, the main container holds the business logic while companion containers handle logging, monitoring, or proxying.
### Example: Logging Sidecar
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-logging-sidecar
spec:
  template:
    spec:
      containers:
        # Main application container
        - name: app
          image: myapp:v1
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: logs
              mountPath: /var/log/app
        # Logging sidecar
        - name: log-forwarder
          image: fluentd:latest  # pin a specific tag in production
          volumeMounts:
            - name: logs
              mountPath: /var/log/app
            - name: fluentd-config
              mountPath: /fluentd/etc
          env:
            - name: ELASTICSEARCH_HOST
              value: "elasticsearch.logging.svc.cluster.local"
      volumes:
        - name: logs
          emptyDir: {}
        - name: fluentd-config
          configMap:
            name: fluentd-config
```
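The `fluentd-config` ConfigMap referenced above could look roughly like this. A sketch only: the stock `fluentd` image does not bundle the Elasticsearch output plugin, so an image with `fluent-plugin-elasticsearch` installed is assumed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    # Tail the application log files from the shared volume
    <source>
      @type tail
      path /var/log/app/*.log
      pos_file /tmp/app.log.pos
      tag app.logs
      <parse>
        @type none
      </parse>
    </source>
    # Forward everything to Elasticsearch
    <match app.**>
      @type elasticsearch
      host "#{ENV['ELASTICSEARCH_HOST']}"
      port 9200
      logstash_format true
    </match>
```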
### Example: Service Mesh Sidecar (Envoy)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-envoy-sidecar
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: app
          image: myapp:v1
          ports:
            - containerPort: 8080
        # Envoy sidecar (typically injected automatically by Istio;
        # shown explicitly here for illustration)
        - name: istio-proxy
          image: istio/proxyv2:1.16.0
          args:
            - proxy
            - sidecar
            - --configPath=/etc/istio/proxy
            - --serviceCluster=myapp
          env:
            - name: PILOT_AGENT_ADDR
              value: istio-pilot.istio-system:15010
          volumeMounts:
            - name: istio-certs
              mountPath: /etc/certs
          securityContext:
            runAsUser: 1337
```
## 🌐 Ambassador Pattern
Use a proxy container to handle external communication on behalf of the main container.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-ambassador
spec:
  template:
    spec:
      containers:
        # Main application - talks only to localhost
        - name: app
          image: myapp:v1
          env:
            - name: DATABASE_HOST
              value: localhost  # Connect through the ambassador
            - name: DATABASE_PORT
              value: "5432"
        # Ambassador container - handles database connections
        - name: db-proxy
          image: cloud-sql-proxy:latest
          command:
            - /cloud_sql_proxy
            - -instances=project:region:instance=tcp:5432
            - -credential_file=/secrets/cloudsql/key.json
          volumeMounts:
            - name: cloudsql-creds
              mountPath: /secrets/cloudsql
              readOnly: true
        # Another ambassador for caching
        - name: cache-proxy
          image: twemproxy:latest
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: twemproxy-config
              mountPath: /etc/twemproxy
      volumes:
        - name: cloudsql-creds
          secret:
            secretName: cloudsql-key
        - name: twemproxy-config
          configMap:
            name: twemproxy-config
```
- Database connection pooling and proxying
- Rate limiting and circuit breaking
- Authentication and authorization
- Request/response transformation
- Service discovery and load balancing
## 🔌 Adapter Pattern
Transform or normalize the output of the main container to work with external systems.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-adapter
spec:
  template:
    spec:
      containers:
        # Legacy application with a custom metrics format
        - name: legacy-app
          image: legacy-app:v1
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: metrics
              mountPath: /metrics
        # Adapter converts the metrics to Prometheus format
        - name: metrics-adapter
          image: prometheus-adapter:latest
          ports:
            - containerPort: 9090
              name: metrics
          volumeMounts:
            - name: metrics
              mountPath: /input-metrics
          command:
            - /adapter
            - --input-format=custom
            - --output-format=prometheus
            - --input-path=/input-metrics
            - --port=9090
      volumes:
        - name: metrics
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: legacy-app-metrics
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
spec:
  selector:
    app: legacy-app
  ports:
    - name: metrics
      port: 9090
      targetPort: 9090
```
## 🔗 Multi-Container Patterns Comparison
| Pattern | Purpose | Communication | Example Use Cases |
|---|---|---|---|
| Sidecar | Extend functionality | Shared volumes, localhost | Logging, monitoring, security |
| Ambassador | Proxy external services | Network proxy | Database proxy, API gateway |
| Adapter | Standardize interfaces | Data transformation | Metrics conversion, log formatting |
| Init Container | Setup and initialization | Sequential execution | Database migration, configuration |
## 🎯 Batch Job Pattern
Run finite workloads to completion using Jobs and CronJobs.
```yaml
# Parallel batch processing with a work queue
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-processor
spec:
  parallelism: 5
  completions: 100
  backoffLimit: 3
  activeDeadlineSeconds: 3600
  template:
    spec:
      containers:
        - name: worker
          image: batch-processor:v1
          env:
            - name: QUEUE_URL
              value: "redis://redis:6379/0"
          command:
            - python
            - worker.py
          args:
            - --queue=tasks
            - --timeout=300
      restartPolicy: OnFailure
---
# Scheduled batch job
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: report-generator
              image: reporter:v1
              command: ["python", "generate_report.py"]
              env:
                - name: REPORT_TYPE
                  value: "daily"
                - name: OUTPUT_BUCKET
                  value: "s3://reports/daily/"
          restartPolicy: OnFailure
```
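When the work items can be enumerated up front, an Indexed Job (stable since Kubernetes 1.24) avoids the external queue entirely: the Job controller injects a `JOB_COMPLETION_INDEX` environment variable into each pod, which selects its own slice of the work. A minimal sketch; the `--shard`/`--total-shards` flags are hypothetical worker arguments:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-processor
spec:
  completionMode: Indexed  # pods receive indexes 0..completions-1
  completions: 100
  parallelism: 5
  template:
    spec:
      containers:
        - name: worker
          image: batch-processor:v1
          # JOB_COMPLETION_INDEX is set automatically by the Job controller
          command: ["sh", "-c", "python worker.py --shard=$JOB_COMPLETION_INDEX --total-shards=100"]
      restartPolicy: OnFailure
```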
## ⚡ Circuit Breaker Pattern
Prevent cascading failures by failing fast when downstream services are unavailable.
Circuit breaker states: Closed (normal operation) → Open (failing fast) → Half-Open (testing recovery).
```yaml
# Istio DestinationRule with circuit breaking
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: circuit-breaker
spec:
  host: myservice
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 100
        http2MaxRequests: 100
        maxRequestsPerConnection: 2
    outlierDetection:
      consecutive5xxErrors: 5  # replaces the deprecated consecutiveErrors field
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
      minHealthPercent: 30
      splitExternalLocalOriginErrors: true
---
# Application-level circuit breaker configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: circuit-breaker-config
data:
  config.yaml: |
    circuitBreaker:
      requestVolumeThreshold: 20
      sleepWindow: 5000            # ms to stay open before probing again
      errorThresholdPercentage: 50
      timeout: 3000                # ms per request
    fallback:
      enabled: true
      response: |
        {
          "status": "service temporarily unavailable",
          "fallback": true
        }
```
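To make the ConfigMap's fields concrete, here is a minimal, illustrative Go circuit breaker covering the three states above. This is a sketch, not the author's implementation: it trips on consecutive failures rather than an error percentage, and omits request-volume bookkeeping:

```go
package breaker

import (
	"errors"
	"sync"
	"time"
)

type State int

const (
	Closed   State = iota // normal operation
	Open                  // failing fast
	HalfOpen              // testing recovery
)

type CircuitBreaker struct {
	mu          sync.Mutex
	state       State
	failures    int
	threshold   int           // trip after this many consecutive failures
	sleepWindow time.Duration // how long to stay Open before probing
	openedAt    time.Time
}

var ErrOpen = errors.New("circuit open: failing fast")

func (cb *CircuitBreaker) Call(fn func() error) error {
	cb.mu.Lock()
	if cb.state == Open {
		if time.Since(cb.openedAt) < cb.sleepWindow {
			cb.mu.Unlock()
			return ErrOpen // fail fast without touching the backend
		}
		cb.state = HalfOpen // sleep window elapsed: allow one probe request
	}
	cb.mu.Unlock()

	err := fn()

	cb.mu.Lock()
	defer cb.mu.Unlock()
	if err != nil {
		cb.failures++
		if cb.state == HalfOpen || cb.failures >= cb.threshold {
			cb.state = Open // trip (or re-trip) the breaker
			cb.openedAt = time.Now()
		}
		return err
	}
	cb.failures = 0
	cb.state = Closed // a success closes the breaker again
	return nil
}
```

The fail-fast branch is the whole point: while the breaker is Open, callers get an immediate error instead of piling timed-out requests onto a struggling downstream service.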
## 👑 Leader Election Pattern
Ensure only one instance performs certain operations using distributed coordination.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: singleton-processor
spec:
  replicas: 3  # Multiple replicas for HA; only the leader does the work
  template:
    spec:
      serviceAccountName: leader-election
      containers:
        - name: app
          image: singleton-app:v1
          env:
            - name: ELECTION_NAME
              value: "singleton-processor-leader"
            - name: ELECTION_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          command:
            - /app
          args:
            - --leader-elect=true
            - --leader-elect-lease-duration=15s
            - --leader-elect-renew-deadline=10s
            - --leader-elect-retry-period=2s
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: leader-election
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-election
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-election
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-election
subjects:
  - kind: ServiceAccount
    name: leader-election
```
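On the application side, flags like these usually imply a client-go election loop. One common way to wire it up, condensed into a sketch using `k8s.io/client-go/tools/leaderelection` (the env vars from the manifest supply the lease name, namespace, and identity):

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The lease identity is the pod name, so each replica is distinct.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      os.Getenv("ELECTION_NAME"),
			Namespace: os.Getenv("ELECTION_NAMESPACE"),
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second, // matches --leader-elect-lease-duration
		RenewDeadline:   10 * time.Second, // matches --leader-elect-renew-deadline
		RetryPeriod:     2 * time.Second,  // matches --leader-elect-retry-period
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting singleton work")
				// ... run the work only the leader should do ...
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; stopping")
			},
		},
	})
}
```

All replicas run this loop; only the current lease holder executes `OnStartedLeading`, and another replica takes over automatically if the leader dies or fails to renew.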
🔄 Retry & Backoff Pattern
Handle transient failures gracefully with exponential backoff.
```yaml
# Service mesh retry configuration
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: retry-policy
spec:
  hosts:
    - myservice
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: myservice
      retries:  # the VirtualService field is `retries`, not `retry`
        attempts: 3
        perTryTimeout: 30s
        retryOn: 5xx,reset,connect-failure,refused-stream
        retryRemoteLocalities: true
      timeout: 90s
---
# Job with backoff configuration
apiVersion: batch/v1
kind: Job
metadata:
  name: retry-job
spec:
  backoffLimit: 6  # Max retries, with exponential backoff between attempts
  template:
    spec:
      containers:
        - name: worker
          image: worker:v1
          env:
            - name: RETRY_CONFIG
              value: |
                {
                  "maxAttempts": 5,
                  "initialInterval": 1000,
                  "maxInterval": 30000,
                  "multiplier": 2,
                  "maxElapsedTime": 300000
                }
      restartPolicy: OnFailure
```
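The `RETRY_CONFIG` values map directly onto a simple exponential-backoff loop. A sketch under those assumptions (intervals in milliseconds to match the config; jitter omitted for brevity, though production retries should add it):

```go
package retry

import "time"

type Config struct {
	MaxAttempts     int           // "maxAttempts": 5
	InitialInterval time.Duration // "initialInterval": 1000 ms
	MaxInterval     time.Duration // "maxInterval": 30000 ms
	Multiplier      float64       // "multiplier": 2
	MaxElapsedTime  time.Duration // "maxElapsedTime": 300000 ms
}

// Do retries fn with exponential backoff until it succeeds,
// attempts are exhausted, or the elapsed-time budget is spent.
func Do(cfg Config, fn func() error) error {
	start := time.Now()
	interval := cfg.InitialInterval

	var err error
	for attempt := 1; attempt <= cfg.MaxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if time.Since(start)+interval > cfg.MaxElapsedTime {
			break // sleeping again would blow the overall budget; give up
		}
		time.Sleep(interval)

		// Grow the interval geometrically, capped at MaxInterval.
		interval = time.Duration(float64(interval) * cfg.Multiplier)
		if interval > cfg.MaxInterval {
			interval = cfg.MaxInterval
		}
	}
	return err
}
```

With the config above, the waits would be roughly 1s, 2s, 4s, 8s between the five attempts, never exceeding the 30s cap or the 5-minute total budget.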
## ⚙️ Configuration Management Patterns
Best practices for managing application configuration in Kubernetes.
### Immutable Configuration Pattern
Use immutable ConfigMaps and Secrets with versioning for safer updates.
```yaml
# Immutable ConfigMap with a version suffix
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2
immutable: true  # Cannot be modified after creation
data:
  app.properties: |
    server.port=8080
    database.pool.size=10
    cache.ttl=3600
  feature-flags.json: |
    {
      "newFeature": true,
      "betaFeature": false
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
        - name: app
          image: myapp:v2
          envFrom:
            - configMapRef:
                name: app-config-v2  # Reference the specific version
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          configMap:
            name: app-config-v2
            items:
              - key: feature-flags.json
                path: features.json
```
### Secret Management Pattern
Securely manage sensitive data using Kubernetes Secrets and external secret stores.
```yaml
# External Secrets Operator
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.example.com:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "demo"
          serviceAccountRef:
            name: "vault-auth"
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
spec:
  refreshInterval: 15m
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: database-secret
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: database/credentials
        property: username
    - secretKey: password
      remoteRef:
        key: database/credentials
        property: password
---
# Sealed Secrets for GitOps
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: database-secret
spec:
  encryptedData:
    username: AgXZO...encrypted...data
    password: AgBXN...encrypted...data
  template:
    type: Opaque
    metadata:
      labels:
        app: myapp
```
### Hot Reload Configuration Pattern
Enable dynamic configuration updates without pod restarts.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hot-reload-app
spec:
  template:
    metadata:
      annotations:
        # Helm template expression: forces a rollout when the ConfigMap content changes
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      containers:
        - name: app
          image: myapp:v1
          volumeMounts:
            - name: config
              mountPath: /config
          env:
            - name: CONFIG_RELOAD_ENABLED
              value: "true"
            - name: CONFIG_PATH
              value: "/config"
        # Config reloader sidecar
        - name: config-reloader
          image: jimmidyson/configmap-reload:v0.5.0
          args:
            - --volume-dir=/config
            - --webhook-url=http://localhost:8080/reload
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          configMap:
            name: app-config
```
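The sidecar POSTs to `http://localhost:8080/reload` whenever files in the mounted volume change, so the application only needs a small handler that re-reads its config. A hedged sketch: `loadConfig` and the `app.properties` file name are hypothetical, depending on the keys in `app-config`:

```go
package main

import (
	"log"
	"net/http"
	"os"
	"sync/atomic"
)

// config holds the most recently loaded configuration bytes;
// request handlers elsewhere would read it via config.Load().
var config atomic.Value

// loadConfig is a hypothetical helper: re-read the mounted file.
func loadConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	config.Store(data)
	return nil
}

func main() {
	path := os.Getenv("CONFIG_PATH") + "/app.properties"
	if err := loadConfig(path); err != nil {
		log.Fatal(err)
	}

	// Called by the configmap-reload sidecar when the volume changes.
	http.HandleFunc("/reload", func(w http.ResponseWriter, r *http.Request) {
		if err := loadConfig(path); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		log.Println("configuration reloaded")
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```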
## 🔄 GitOps Pattern
Declarative deployment using Git as the single source of truth.
```yaml
# ArgoCD Application
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-config
    targetRevision: HEAD
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: false
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
---
# Flux GitRepository
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: myapp
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/k8s-config
  ref:
    branch: main
  secretRef:
    name: github-auth
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: myapp
  namespace: flux-system
spec:
  interval: 10m
  path: "./overlays/production"
  prune: true
  sourceRef:
    kind: GitRepository
    name: myapp
  validation: client
  postBuild:
    substitute:
      cluster_name: production
      region: us-west-2
```
## 🚫 Common Anti-Patterns to Avoid
### ❌ Anti-Pattern: Using the Latest Tag
Problem: The 'latest' tag is mutable and can lead to inconsistent deployments.
```yaml
# DON'T DO THIS
spec:
  containers:
    - name: app
      image: myapp:latest  # Unpredictable!
```
```yaml
# DO THIS INSTEAD
spec:
  containers:
    - name: app
      image: myapp:v1.2.3  # Specific version
      # OR pin by digest for full immutability:
      # image: myapp@sha256:abc123...
```
### ❌ Anti-Pattern: Hardcoding Configuration
Problem: Embedding environment-specific values in container images.
```dockerfile
# DON'T DO THIS (in a Dockerfile)
ENV DATABASE_HOST=prod-db.example.com
ENV API_KEY=sk-1234567890
```
```yaml
# DO THIS INSTEAD
env:
  - name: DATABASE_HOST
    valueFrom:
      configMapKeyRef:
        name: db-config
        key: host
  - name: API_KEY
    valueFrom:
      secretKeyRef:
        name: api-secrets
        key: key
```
### ❌ Anti-Pattern: Not Setting Resource Limits
Problem: Pods can consume unlimited resources, causing node instability.
```yaml
# DON'T DO THIS
spec:
  containers:
    - name: app
      image: myapp:v1
      # No resource constraints!
```
```yaml
# DO THIS INSTEAD
spec:
  containers:
    - name: app
      image: myapp:v1
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
          cpu: "500m"
```
### ❌ Anti-Pattern: Running as Root
Problem: Containers run as root unless configured otherwise, which makes any container escape or code-execution bug far more damaging.
```yaml
# DON'T DO THIS
spec:
  containers:
    - name: app
      image: myapp:v1
      # Runs as root by default
```
```yaml
# DO THIS INSTEAD
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: app
      image: myapp:v1
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
```
### ❌ Anti-Pattern: Naked Pods
Problem: Bare pods created without a controller are never rescheduled or replaced when they fail or their node dies: there is no self-healing.
```yaml
# DON'T DO THIS
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: myapp:v1
```
```yaml
# DO THIS INSTEAD
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: myapp:v1
```