Kubernetes with Pulumi



Why Kubernetes with Pulumi?

The Problem: YAML manifests for Kubernetes are verbose, lack type checking, and make it difficult to share logic between resources or create abstractions.

The Solution: Pulumi lets you define Kubernetes resources in TypeScript, Python, or Go with full IDE support, loops, conditionals, and component reuse.

Real Impact: Teams adopting Pulumi for Kubernetes typically report substantially less configuration code and catch misconfigurations at compile time instead of deploy time.

Real-World Analogy

Think of Pulumi for Kubernetes as upgrading from handwritten letters to a word processor:

  • YAML manifests = Handwritten letters (precise but tedious, error-prone)
  • Pulumi programs = Word processor with templates, spell-check, and auto-complete
  • Component Resources = Reusable document templates
  • Helm Charts in Pulumi = Importing existing templates and customizing them programmatically

Key Kubernetes + Pulumi Concepts

Typed Resources

Every K8s resource has full TypeScript types. Catch misconfigurations in your editor before deploying.

Server-Side Apply

Pulumi uses server-side apply by default, handling field ownership and conflicts automatically.

Await Logic

Pulumi waits for resources to become ready (Deployments rolled out, Services have endpoints) before proceeding.
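When that readiness wait is undesirable for a particular resource (for example, a long-running Job), the provider supports a skip annotation. A minimal sketch; the resource name and image are illustrative:

```typescript
import * as k8s from "@pulumi/kubernetes";

// The "pulumi.com/skipAwait" annotation tells the provider to mark this
// resource ready immediately instead of waiting for it to complete.
const job = new k8s.batch.v1.Job("migration", {
    metadata: {
        annotations: { "pulumi.com/skipAwait": "true" },
    },
    spec: {
        template: {
            spec: {
                containers: [{ name: "migrate", image: "myapp/migrate:latest" }], // placeholder image
                restartPolicy: "Never",
            },
        },
    },
});
```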

Helm Integration

Deploy Helm charts with full programmatic control over values, transformations, and post-render customization.

Setting Up the K8s Provider

setup.sh
# Install the Kubernetes provider
npm install @pulumi/kubernetes

# By default, uses ~/.kube/config
# Or configure a specific kubeconfig
pulumi config set kubernetes:kubeconfig /path/to/kubeconfig
[Diagram: Pulumi deploying to a Kubernetes cluster -- a Pulumi program (index.ts defining a Deployment, Service, Helm chart, and ConfigMap) flows through `pulumi up` into a cluster with namespace "app" containing a Deployment (3 pods), a LoadBalancer Service, an Ingress, a Helm release (nginx-ingress chart v4.8.0), a ConfigMap "app-config", and a Secret "db-creds".]

Deploying Workloads

Creating a Deployment

deployment.ts
import * as k8s from "@pulumi/kubernetes";

// Create a namespace
const ns = new k8s.core.v1.Namespace("app-ns", {
    metadata: { name: "my-app" },
});

// Create a Deployment
const appLabels = { app: "nginx" };
const deployment = new k8s.apps.v1.Deployment("nginx-deploy", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        replicas: 3,
        selector: { matchLabels: appLabels },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "nginx",
                    image: "nginx:1.25",
                    ports: [{ containerPort: 80 }],
                    resources: {
                        requests: { cpu: "100m", memory: "128Mi" },
                        limits: { cpu: "250m", memory: "256Mi" },
                    },
                }],
            },
        },
    },
});

export const deploymentName = deployment.metadata.name;
Output
$ pulumi up
     Type                             Name           Status
 +   kubernetes:core/v1:Namespace     app-ns         created
 +   kubernetes:apps/v1:Deployment    nginx-deploy   created

Outputs:
    deploymentName: "nginx-deploy-abc123"
Key Takeaway: Pulumi's Kubernetes provider maps directly to the Kubernetes API. Every K8s resource type is available as a Pulumi resource with full TypeScript/Python type checking -- catching YAML errors at compile time.

Services & Ingress

Exposing with Services

service.ts
// Create a LoadBalancer Service
const service = new k8s.core.v1.Service("nginx-svc", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        type: "LoadBalancer",
        selector: appLabels,
        ports: [{ port: 80, targetPort: 80 }],
    },
});

export const serviceIp = service.status.loadBalancer.ingress[0].ip;

// Create an Ingress resource
const ingress = new k8s.networking.v1.Ingress("app-ingress", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        // The "kubernetes.io/ingress.class" annotation is deprecated;
        // use spec.ingressClassName instead.
        ingressClassName: "nginx",
        rules: [{
            host: "app.example.com",
            http: {
                paths: [{
                    path: "/",
                    pathType: "Prefix",
                    backend: {
                        service: {
                            name: service.metadata.name,
                            port: { number: 80 },
                        },
                    },
                }],
            },
        }],
    },
});
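Downstream stacks and scripts usually want a ready-made URL rather than a bare IP. Outputs can be combined with pulumi.interpolate; a sketch building on the service defined above:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Combine Outputs into a single exported URL. The value resolves once the
// cloud load balancer has been provisioned and its IP is known.
export const appUrl = pulumi.interpolate`http://${service.status.loadBalancer.ingress[0].ip}`;
```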

ConfigMaps & Secrets

config.ts
import * as pulumi from "@pulumi/pulumi";

// ConfigMap from data
const configMap = new k8s.core.v1.ConfigMap("app-config", {
    metadata: { namespace: ns.metadata.name },
    data: {
        "APP_ENV": pulumi.getStack(),
        "LOG_LEVEL": "info",
        "nginx.conf": `server {
    listen 80;
    location / { proxy_pass http://backend:8080; }
}`,
    },
});

// Secret with Pulumi secret values
const config = new pulumi.Config();
const secret = new k8s.core.v1.Secret("db-creds", {
    metadata: { namespace: ns.metadata.name },
    stringData: {
        "DB_PASSWORD": config.requireSecret("dbPassword"),
        "DB_HOST": "db.internal:5432",
    },
});
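Wiring these into a pod means referencing the generated resource names, which Pulumi tracks as dependencies automatically. A sketch of the container spec only, reusing the configMap and secret above; the image is a placeholder:

```typescript
// Inside a Deployment's container spec: load every ConfigMap key and
// Secret key as environment variables via envFrom.
const container = {
    name: "app",
    image: "myapp:latest", // placeholder image
    envFrom: [
        { configMapRef: { name: configMap.metadata.name } },
        { secretRef: { name: secret.metadata.name } },
    ],
};
```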

Common Mistake

Wrong: Putting secrets in ConfigMaps: new k8s.core.v1.ConfigMap("config", { data: { dbPassword: "hunter2" } })

Why it fails: ConfigMaps are stored as plain text in etcd. Anyone with cluster access can read them. Secrets are base64-encoded and can be encrypted at rest.

Instead: Use k8s.core.v1.Secret for sensitive data, and combine with Pulumi's pulumi.secret() to encrypt values in state.
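Setting the value with `pulumi config set --secret dbPassword` keeps it encrypted in stack config, and `config.requireSecret` (as in the snippet above) returns it as a secret Output. Plaintext values can also be wrapped explicitly; a minimal sketch with an illustrative value:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Wrap a plaintext value so Pulumi encrypts it in state and masks it in
// console output and logs.
const apiKey = pulumi.secret("sk-example-not-a-real-key"); // illustrative value
```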

Helm Charts with Pulumi

helm.ts
// Deploy a Helm chart from a repository
const nginx = new k8s.helm.v3.Release("ingress-nginx", {
    chart: "ingress-nginx",
    version: "4.8.3",
    namespace: "ingress-system",
    createNamespace: true,
    repositoryOpts: {
        repo: "https://kubernetes.github.io/ingress-nginx",
    },
    values: {
        controller: {
            replicaCount: 2,
            service: { type: "LoadBalancer" },
            metrics: { enabled: true },
        },
    },
});

// Deploy cert-manager for TLS
const certManager = new k8s.helm.v3.Release("cert-manager", {
    chart: "cert-manager",
    version: "1.13.2",
    namespace: "cert-manager",
    createNamespace: true,
    repositoryOpts: {
        repo: "https://charts.jetstack.io",
    },
    values: {
        installCRDs: true,
    },
});
Key Takeaway: Pulumi's Helm support lets you deploy charts with type-safe value overrides. Use Chart for standard Helm charts and Release for more control over the Helm lifecycle (install, upgrade, rollback).
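For comparison with the Release resources above, a minimal Chart sketch; the chart version and repo values are illustrative:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Chart renders the Helm templates and registers every rendered resource
// with Pulumi individually -- no Helm release object is stored in-cluster.
const redis = new k8s.helm.v3.Chart("redis", {
    chart: "redis",
    version: "18.4.0", // illustrative chart version
    fetchOpts: { repo: "https://charts.bitnami.com/bitnami" },
    values: { architecture: "standalone" },
});
```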
Deep Dive: Pulumi vs Raw YAML for Kubernetes

Raw YAML manifests are error-prone: no type checking, no IDE autocomplete, and manual string templating via Helm. Pulumi provides compile-time validation, real programming language constructs (loops, conditionals, functions), and automatic dependency tracking between resources. For example, creating 10 nearly-identical deployments requires 10 YAML files or complex Helm templates, but in Pulumi it's a simple for loop. Trade-off: teams must know TypeScript/Python, and Pulumi adds a state management layer that raw kubectl apply doesn't need.
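The loop case mentioned above might look like this; service names and image registry are illustrative:

```typescript
import * as k8s from "@pulumi/kubernetes";

// One typed definition stamped out per service -- the YAML equivalent
// would be a stack of near-identical manifests.
const services = ["auth", "billing", "catalog"]; // illustrative names
for (const name of services) {
    const labels = { app: name };
    new k8s.apps.v1.Deployment(`${name}-deploy`, {
        spec: {
            replicas: 2,
            selector: { matchLabels: labels },
            template: {
                metadata: { labels },
                spec: {
                    containers: [{ name, image: `registry.example.com/${name}:latest` }],
                },
            },
        },
    });
}
```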

Quick Reference

Resource       Pulumi Class                Key Properties
Namespace      k8s.core.v1.Namespace       metadata.name, metadata.labels
Deployment     k8s.apps.v1.Deployment      spec.replicas, spec.template
Service        k8s.core.v1.Service         spec.type, spec.selector, spec.ports
Ingress        k8s.networking.v1.Ingress   spec.rules, metadata.annotations
ConfigMap      k8s.core.v1.ConfigMap       data, binaryData
Secret         k8s.core.v1.Secret          stringData, type
Helm Release   k8s.helm.v3.Release         chart, version, values, repositoryOpts

Provider Configuration Tips

  • Default provider uses ~/.kube/config current context
  • Use explicit providers for multi-cluster deployments
  • Pass kubeconfig from cloud provider outputs (EKS, GKE, AKS) for seamless integration
  • Server-side apply is the default in recent provider versions; set enableServerSideApply explicitly only when pinning older versions or opting back into client-side behavior
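A sketch of an explicit provider for multi-cluster deployments, assuming a kubeconfig exported by a separate infrastructure stack; the stack reference name and output name are illustrative:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Pull kubeconfig from another stack's outputs (e.g. an EKS cluster stack).
const infra = new pulumi.StackReference("my-org/infra/prod"); // illustrative name
const provider = new k8s.Provider("prod-cluster", {
    kubeconfig: infra.getOutput("kubeconfig"), // assumed stack output
});

// Target that specific cluster by passing the provider explicitly.
new k8s.core.v1.Namespace("apps", {}, { provider });
```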