Kubernetes Security

Secure your clusters with RBAC, network policies, and security best practices. Learn defense-in-depth strategies for production Kubernetes environments.

👤 Role-Based Access Control (RBAC)

Control who can access what resources in your Kubernetes cluster using fine-grained permissions.

• Subjects: Users, Groups, or ServiceAccounts that need access
• Roles/ClusterRoles: Define what actions can be performed on which resources
• Bindings: Link subjects to roles, granting the defined permissions
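
Before creating new roles, it is often useful to see what a subject can already do. kubectl's built-in authorization check covers this (the namespace and resources here are illustrative):

Bash
# Can the current user perform an action?
kubectl auth can-i create deployments -n production
kubectl auth can-i '*' '*'          # effectively: am I cluster-admin?

# List everything the current user may do in a namespace
kubectl auth can-i --list -n production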

Creating a Role

YAML
# Namespace-scoped Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/status"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list"]
---
# Cluster-wide ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
  # Restrict to specific resource names
  resourceNames: ["app-secret", "db-secret"]
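
The same objects can be scaffolded imperatively; this sketch covers the first rule of each manifest above (add --dry-run=client -o yaml to print the YAML instead of creating the object):

Bash
# Namespace-scoped Role for reading pods and their subresources
kubectl create role pod-reader \
  --verb=get,list,watch \
  --resource=pods,pods/log,pods/status \
  -n production

# ClusterRole limited to two named Secrets
kubectl create clusterrole secret-reader \
  --verb=get,list \
  --resource=secrets \
  --resource-name=app-secret \
  --resource-name=db-secret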

Creating RoleBindings

YAML
# Bind Role to User
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
# Bind ClusterRole to ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: ServiceAccount
  name: secret-manager
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
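
Impersonation is a quick way to confirm the bindings behave as intended (it requires impersonate permissions, which cluster admins normally have):

Bash
# User bound via RoleBinding
kubectl auth can-i list pods -n production --as jane          # expected: yes
kubectl auth can-i delete pods -n production --as jane        # expected: no

# Group binding (--as-group must be combined with --as; the user name is a placeholder)
kubectl auth can-i get pods -n production --as any-user --as-group developers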

ServiceAccount with RBAC

YAML
# Create ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-deployer
  namespace: production
---
# Create Role with deployment permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: deployment-manager
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
# Bind Role to ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: app-deployer
  namespace: production
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
---
# Use ServiceAccount in Pod
apiVersion: v1
kind: Pod
metadata:
  name: deployer-pod
  namespace: production
spec:
  serviceAccountName: app-deployer
  containers:
  - name: kubectl
    image: bitnami/kubectl:latest
    command: ["sleep", "3600"]
💡 Pro Tip: Use the principle of least privilege. Grant only the minimum permissions required for a task.
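
One way to confirm the ServiceAccount really follows least privilege is to impersonate it and probe for both allowed and forbidden actions:

Bash
kubectl auth can-i create deployments -n production \
  --as system:serviceaccount:production:app-deployer    # expected: yes
kubectl auth can-i delete secrets -n production \
  --as system:serviceaccount:production:app-deployer    # expected: no
kubectl auth can-i --list -n production \
  --as system:serviceaccount:production:app-deployer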

🔍 RBAC Best Practices

• 👨‍💻 Developers: Read pods, logs, deployments
• 🚀 CI/CD: Deploy, update, rollback
• 📊 Monitoring: Read all resources, metrics
• 🔧 Admin: Full cluster access

Common RBAC Patterns

YAML
# Read-only access to namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-viewer
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "services", "configmaps", "endpoints", "persistentvolumeclaims", "events"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["get", "list", "watch"]
---
# Developer access with limited delete
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/exec", "pods/portforward"]
  verbs: ["*"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
  # No delete permission
- apiGroups: [""]
  resources: ["secrets", "configmaps"]
  verbs: ["get", "list"]
  # Read-only for sensitive data
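
These shared roles tend to drift over time, so audit who holds them periodically (a sketch; the last example assumes jq is installed):

Bash
# Review bindings across the cluster
kubectl get rolebindings,clusterrolebindings --all-namespaces -o wide

# Inspect the rules behind a role
kubectl describe clusterrole developer

# Find ClusterRoleBindings that reference a given ClusterRole
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(.roleRef.name=="developer") | .metadata.name'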

🛡️ Network Policies

Control traffic flow at the IP address or port level using Kubernetes NetworkPolicies.

⚠️ Important: Network policies require a CNI plugin that supports them (Calico, Cilium, Weave Net). They won't work with basic networking like kubenet.

Default Deny All Traffic

YAML
# Deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}  # Apply to all pods in namespace
  policyTypes:
  - Ingress
---
# Deny all egress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress: []  # No allowed egress rules
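
A throwaway pod is an easy way to confirm the deny-all policies are active ("backend" is a placeholder Service; note that the egress policy also blocks DNS lookups until an allow rule is added):

Bash
kubectl run netpol-test --rm -it --restart=Never -n production \
  --image=busybox:1.36 -- \
  wget -qO- --timeout=2 http://backend:8080 || echo "blocked as expected"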

Allow Specific Traffic

YAML
# Allow frontend to backend communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-netpol
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
      tier: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    # Allow from frontend pods
    - podSelector:
        matchLabels:
          app: frontend
    # Allow from specific namespace
    - namespaceSelector:
        matchLabels:
          name: monitoring
      podSelector:
        matchLabels:
          app: prometheus
    # Allow from specific IP ranges
    - ipBlock:
        cidr: 10.0.0.0/8
        except:
        - 10.0.1.0/24
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 8443
  egress:
  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system  # label set automatically on every namespace (v1.22+)
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
  # Allow to database
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
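
To verify the allow rules, run test pods with and without the frontend label ("backend" is assumed to resolve to the backend Service on port 8080, and egress from the test pods must not be blocked by another policy):

Bash
# A pod labeled app=frontend should get through...
kubectl run allowed-test --rm -it --restart=Never -n production \
  --labels=app=frontend --image=busybox:1.36 -- \
  wget -qO- --timeout=3 http://backend:8080

# ...while an unlabeled pod should be rejected by the same policy
kubectl run denied-test --rm -it --restart=Never -n production \
  --image=busybox:1.36 -- \
  wget -qO- --timeout=3 http://backend:8080 || echo "blocked as expected"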

Advanced Network Policies

YAML
# Multi-tier application network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-tier-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow from ingress controller
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
      podSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
  egress:
  # Allow to API tier
  - to:
    - podSelector:
        matchLabels:
          tier: api
    ports:
    - protocol: TCP
      port: 8080
  # Allow external HTTPS
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
  # Allow DNS (kube-dns runs in kube-system, so a namespaceSelector is required;
  # a bare podSelector would only match pods in this policy's own namespace)
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
Network Policy Patterns:
  • Zero Trust: Default deny all, explicitly allow required traffic
  • Microsegmentation: Isolate different application tiers
  • Namespace Isolation: Prevent cross-namespace communication (see the sketch after this list)
  • Egress Control: Restrict outbound connections to approved endpoints
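
The namespace-isolation pattern, for example, is a single policy whose "from" clause contains only an empty podSelector, which matches every pod in the policy's own namespace and nothing outside it (a minimal sketch):

Bash
kubectl apply -n production -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
spec:
  podSelector: {}        # applies to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # ...but only from pods in this same namespace
EOF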

🌐 Service Mesh Security

YAML
# Istio PeerAuthentication for mTLS
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT  # Enforce mTLS for all traffic
---
# Authorization Policy
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: frontend-authz
  namespace: production
spec:
  selector:
    matchLabels:
      app: frontend
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/production/sa/backend"]
    to:
    - operation:
        methods: ["GET", "POST"]
        paths: ["/api/*"]
  - from:
    - source:
        namespaces: ["monitoring"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/metrics"]

🔧 Pod Security Context

Configure security settings at the pod and container level to minimize attack surface.

YAML
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  # Pod-level security context
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"
    seccompProfile:
      type: RuntimeDefault
    supplementalGroups: [4000]
  
  containers:
  - name: app
    image: myapp:v1
    # Container-level security context
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
      runAsNonRoot: true
      runAsUser: 1000
      seLinuxOptions:
        level: "s0:c123,c456"
      seccompProfile:
        type: Localhost
        localhostProfile: "profiles/audit.json"
    
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: var-cache
      mountPath: /var/cache
  
  volumes:
  # Writable volumes for read-only root filesystem
  - name: tmp
    emptyDir: {}
  - name: var-cache
    emptyDir: {}
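
Once the pod is running, a couple of exec probes confirm the security context took effect (assumes the image ships id and touch):

Bash
kubectl exec secure-pod -- id               # should report uid=1000, gid=3000
kubectl exec secure-pod -- touch /probe     # should fail: read-only root filesystem
kubectl exec secure-pod -- touch /tmp/probe # should succeed: writable emptyDir volume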

Pod Security Standards

1. Privileged: Unrestricted policy, providing the widest possible permissions
2. Baseline: Minimally restrictive policy, prevents known privilege escalations
3. Restricted: Heavily restricted policy, following current Pod hardening best practices

Pod Security Admission

YAML
# Namespace labels for Pod Security Standards
apiVersion: v1
kind: Namespace
metadata:
  name: secure-namespace
  labels:
    # Enforce restricted standard
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    
    # Audit baseline violations
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/audit-version: latest
    
    # Warn on policy violations
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
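
The same labels can be managed with kubectl, and a server-side dry run previews which existing pods would violate a standard before you enforce it:

Bash
# Preview violations without changing anything
kubectl label --dry-run=server --overwrite namespace secure-namespace \
  pod-security.kubernetes.io/enforce=restricted

# Apply or adjust the labels on a live namespace
kubectl label --overwrite namespace secure-namespace \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted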

📋 Pod Security Policies (Deprecated)

⚠️ Note: PodSecurityPolicy was deprecated in Kubernetes v1.21 and removed in v1.25. Use Pod Security Standards (enforced by the built-in Pod Security Admission controller) or an external policy engine instead.

Alternative: OPA Gatekeeper

YAML
# OPA Gatekeeper ConstraintTemplate
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredsecuritycontext
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredSecurityContext
      validation:
        openAPIV3Schema:
          type: object
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredsecuritycontext
      
      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not container.securityContext.runAsNonRoot
        msg := "Container must run as non-root user"
      }
      
      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not container.securityContext.allowPrivilegeEscalation == false
        msg := "Container must not allow privilege escalation"
      }
      
      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not container.securityContext.readOnlyRootFilesystem
        msg := "Container must have read-only root filesystem"
      }
---
# Apply constraint
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredSecurityContext
metadata:
  name: must-have-security-context
spec:
  match:
    kinds:
    - apiGroups: ["apps"]
      kinds: ["Deployment", "StatefulSet", "DaemonSet"]
    namespaces: ["production"]

🔐 Secrets Management

Securely store and manage sensitive information like passwords, tokens, and keys.

Creating and Using Secrets

Bash
# Create secret from literal values
kubectl create secret generic db-credentials \
  --from-literal=username=dbuser \
  --from-literal=password='S3cur3P@ssw0rd!'

# Create secret from files
kubectl create secret generic ssl-certs \
  --from-file=tls.crt=/path/to/tls.crt \
  --from-file=tls.key=/path/to/tls.key

# Create TLS secret
kubectl create secret tls tls-secret \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
YAML
# Secret manifest (base64 encoded)
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  api-key: YXBpLWtleS12YWx1ZQ==  # base64 encoded
  db-password: cGFzc3dvcmQxMjM=
stringData:  # Plain text (will be encoded automatically)
  config.yaml: |
    database:
      host: postgres
      port: 5432
---
# Using secrets in pods
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: myapp:v1
    # Mount as environment variables
    env:
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: api-key
    envFrom:
    - secretRef:
        name: app-secrets
    # Mount as files
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: app-secrets
      defaultMode: 0400  # Read-only for owner
      items:
      - key: config.yaml
        path: config.yaml
        mode: 0400
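
Keep in mind that Secret data is only base64-encoded, not encrypted, so read access is itself sensitive. Decoding a value for debugging is a one-liner (the jq example assumes jq is installed):

Bash
# Decode a single key
kubectl get secret app-secrets -o jsonpath='{.data.api-key}' | base64 -d; echo

# List the keys stored in a secret
kubectl get secret app-secrets -o json | jq -r '.data | keys[]'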

External Secrets Management

YAML
# HashiCorp Vault with Secrets Store CSI Driver
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-database
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault.vault:8200"
    roleName: "database"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/database"
        secretKey: "password"
      - objectName: "db-username"
        secretPath: "secret/data/database"
        secretKey: "username"
---
# Use with CSI volume
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  serviceAccountName: app
  containers:
  - name: app
    image: myapp:v1
    volumeMounts:
    - name: secrets-store
      mountPath: "/mnt/secrets"
      readOnly: true
  volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: vault-database
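
Assuming the Secrets Store CSI driver and its Vault provider are already installed (not shown here), the mount can be verified once the pod starts (ls and cat must exist in the application image):

Bash
kubectl get secretproviderclass vault-database
kubectl exec app-pod -- ls /mnt/secrets
kubectl exec app-pod -- cat /mnt/secrets/db-password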

Encrypting Secrets at Rest

YAML
# EncryptionConfiguration for API server
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
    - secrets
    providers:
    # AES-GCM with random nonce
    - aesgcm:
        keys:
        - name: key1
          secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
    # AES-CBC with PKCS#7 padding
    - aescbc:
        keys:
        - name: key2
          secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
    # Identity provider (no encryption)
    - identity: {}
---
# API server configuration
# --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
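
Two helpers usually accompany this configuration: generating a random 32-byte key, and rewriting existing Secrets so that data stored before encryption was enabled gets re-encrypted:

Bash
# Generate a key for the aesgcm/aescbc providers
head -c 32 /dev/urandom | base64

# After restarting the API server with --encryption-provider-config,
# rewrite all secrets so they are stored encrypted in etcd
kubectl get secrets --all-namespaces -o json | kubectl replace -f -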

Secrets Best Practices

• 🔒 Enable encryption at rest: Configure etcd encryption for secrets
• 🔄 Rotate secrets regularly: Implement automated secret rotation
• 🚫 Never commit secrets: Use sealed secrets or external secret stores
• 📊 Audit secret access: Monitor and log secret usage

🔍 Security Scanning

Identify and fix vulnerabilities in container images, Kubernetes manifests, and running workloads.

Image Scanning

Bash
# Scan a container image for vulnerabilities with Trivy
trivy image nginx:latest

# Scan with Anchore Grype
grype nginx:latest

# Docker-native scanning with Snyk (deprecated; newer Docker releases ship Docker Scout instead)
docker scan nginx:latest
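
In CI pipelines, scanners are typically configured to fail the build above a severity threshold; with Trivy that looks roughly like this (the image name is a placeholder):

Bash
# Fail (exit code 1) when HIGH or CRITICAL vulnerabilities are found
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:v1

# Trivy can also scan a local directory, including IaC and Kubernetes manifests
trivy fs --severity HIGH,CRITICAL .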

Admission Controller for Image Scanning

YAML
# OPA Gatekeeper policy for image validation
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedimages
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedImages
      validation:
        openAPIV3Schema:
          type: object
          properties:
            allowedRegistries:
              type: array
              items:
                type: string
            bannedImages:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sallowedimages
      
      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not image_from_allowed_registry(container.image)
        msg := sprintf("Container image %v is not from allowed registry", [container.image])
      }
      
      # Helper rule: Rego's built-in is startswith(); iterating inside the
      # helper keeps the negation above safe
      image_from_allowed_registry(image) {
        startswith(image, input.parameters.allowedRegistries[_])
      }
      
      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        image := container.image
        banned := input.parameters.bannedImages[_]
        contains(image, banned)
        msg := sprintf("Container image %v is banned", [image])
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedImages
metadata:
  name: must-use-approved-registry
spec:
  match:
    kinds:
    - apiGroups: ["apps", ""]
      kinds: ["Deployment", "StatefulSet", "DaemonSet", "Pod"]
  parameters:
    allowedRegistries:
    - "gcr.io/my-org/"
    - "docker.io/mycompany/"
    bannedImages:
    - "latest"
    - "alpha"

Runtime Security with Falco

YAML
# Falco rules for runtime security
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint/exec
  condition: >
    spawned_process and container
    and shell_procs and proc.name in (shell_binaries)
    and not container.image.repository in (allowed_images)
  output: >
    Shell opened in container (user=%user.name container_id=%container.id 
    container_name=%container.name image=%container.image.repository:%container.image.tag
    shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)
  priority: WARNING
  tags: [container, shell]

- rule: Write below etc
  desc: an attempt to write to any file below /etc
  condition: >
    write and etc_dir and not proc.name in (shadowutils_binaries)
    and not (container and proc.name in (known_binaries))
  output: >
    File below /etc opened for writing (user=%user.name command=%proc.cmdline
    file=%fd.name container_id=%container.id image=%container.image.repository)
  priority: ERROR
  tags: [filesystem, mitre_persistence]

- rule: Outbound Connection to C2 Servers
  desc: Detect outbound connection to command & control servers
  condition: >
    outbound and not (fd.sip in (allowed_outbound_ips))
    and fd.sport >= 30000
  output: >
    Outbound connection to unknown server (command=%proc.cmdline 
    connection=%fd.name container_id=%container.id)
  priority: WARNING
  tags: [network]
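
To check that the rules actually fire, trigger one deliberately and look for the alert (the deployment name, and Falco's namespace and labels, depend on how it was installed):

Bash
# Opens a shell in a container, which should match "Terminal shell in container"
kubectl exec -it deploy/myapp -n production -- /bin/sh -c 'id'

# Look for the alert in Falco's logs (namespace and label are typical Helm defaults)
kubectl logs -n falco -l app.kubernetes.io/name=falco | grep "Shell opened"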

Kubernetes Manifest Scanning

Bash
# Scan Kubernetes manifests with Kubesec
kubesec scan deployment.yaml

# Scan with Polaris
polaris audit --audit-path ./manifests/

# Scan with Checkov
checkov -f deployment.yaml --framework kubernetes

# Scan with KubeLinter
kube-linter lint manifests/
Security Scanning Tools:
  • Trivy: Comprehensive vulnerability scanner
  • Falco: Runtime security monitoring
  • KubeLinter: Static analysis of Kubernetes YAML
  • Kubesec: Security risk analysis for manifests
  • Polaris: Best practices validation
  • OPA Gatekeeper: Policy enforcement

📝 Audit Logging

Track and monitor all API server activities for security and compliance.

Audit Policy Configuration

YAML
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Don't log requests to these paths
  - level: None
    nonResourceURLs:
    - /healthz*
    - /metrics
    - /swagger*
    
  # Log metadata for all requests in RequestReceived stage
  - level: Metadata
    omitStages:
    - RequestReceived
    
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["pods", "pods/status"]
    namespaces: ["production", "staging"]
    
  # Log secret and configmap access
  - level: Metadata
    resources:
    - group: ""
      resources: ["secrets", "configmaps"]
    
  # Log full request/response for sensitive operations
  - level: RequestResponse
    verbs: ["delete", "deletecollection"]
    
  # Log authentication failures
  - level: Metadata
    users: ["system:anonymous"]
    verbs: ["get", "list", "watch"]
    
  # Log everything from specific users
  - level: RequestResponse
    users: ["admin@example.com"]
    
  # Detailed logging for RBAC changes
  - level: RequestResponse
    resources:
    - group: "rbac.authorization.k8s.io"
      resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    
  # Default level for everything else
  - level: Metadata
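
The policy only takes effect once the API server is started with audit flags pointing at it (a sketch of the common flags; the paths are examples, and on kubeadm clusters they go into the kube-apiserver static pod manifest):

Bash
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=10 \
  --audit-log-maxsize=100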

Processing Audit Logs

# Fluentd configuration for audit logs (Fluentd config syntax)
<source>
  @type tail
  path /var/log/kubernetes/audit.log
  pos_file /var/log/kubernetes/audit.log.pos
  tag kubernetes.audit
  <parse>
    @type json
    time_key timestamp
    time_format %Y-%m-%dT%H:%M:%S.%N%z
  </parse>
</source>

# Keep only requests that returned a 4xx or 5xx status code
<filter kubernetes.audit>
  @type grep
  <regexp>
    key $.responseStatus.code
    pattern /^(4|5)\d{2}$/
  </regexp>
</filter>

# Ship the filtered audit events to Elasticsearch
<match kubernetes.audit>
  @type elasticsearch
  host elasticsearch.monitoring
  port 9200
  index_name kubernetes-audit
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.audit
    flush_interval 5s
  </buffer>
</match>

Compliance Scanning

Bash
# CIS Kubernetes Benchmark with kube-bench
kube-bench run --targets master,node,etcd,policies

# Example output parsing
kube-bench run --json | jq '.tests[] | select(.results[].status=="FAIL")'

# Compliance checking with Polaris
polaris audit --set-exit-code-on-danger --severity error

Security Monitoring Checklist

1. API Server Audit Logs: Track all API requests, authentication, and authorization decisions
2. Runtime Monitoring: Detect anomalous container behavior with Falco or Sysdig
3. Network Traffic Analysis: Monitor network flows and detect unusual patterns
4. Image Vulnerability Scanning: Continuous scanning of running container images
5. Compliance Validation: Regular CIS benchmark and policy compliance checks

🚨 Security Event Response:
  1. Isolate affected workloads using NetworkPolicies (see the quarantine sketch below)
  2. Collect audit logs and runtime events
  3. Analyze attack vector and impact
  4. Patch vulnerabilities and update policies
  5. Document incident and update runbooks
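
Step 1 can be sketched as a label-driven quarantine: tag the compromised pod, then apply a NetworkPolicy that selects the label and allows no traffic in either direction (the pod name is a placeholder):

Bash
kubectl label pod compromised-pod -n production quarantine=true

kubectl apply -n production -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine
spec:
  podSelector:
    matchLabels:
      quarantine: "true"
  policyTypes:
  - Ingress
  - Egress
EOF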