K8s Provider Setup
Why Kubernetes with Pulumi?
The Problem: YAML manifests for Kubernetes are verbose, lack type checking, and make it difficult to share logic between resources or create abstractions.
The Solution: Pulumi lets you define Kubernetes resources in TypeScript, Python, or Go with full IDE support, loops, conditionals, and component reuse.
Real Impact: Teams using Pulumi for Kubernetes report up to 70% fewer lines of configuration and catch misconfigurations at compile time instead of at deploy time.
Real-World Analogy
Think of Pulumi for Kubernetes as upgrading from handwritten letters to a word processor:
- YAML manifests = Handwritten letters (precise but tedious and error-prone)
- Pulumi programs = Word processor with templates, spell-check, and auto-complete
- Component Resources = Reusable document templates
- Helm Charts in Pulumi = Importing existing templates and customizing them programmatically
Key Kubernetes + Pulumi Concepts
Typed Resources
Every K8s resource has full TypeScript types, so misconfigurations surface in your editor before you deploy (see the sketch below).
Server-Side Apply
Pulumi uses server-side apply by default, handling field ownership and conflicts automatically.
Await Logic
Pulumi waits for resources to become ready (Deployments rolled out, Services have endpoints) before proceeding.
Helm Integration
Deploy Helm charts with full programmatic control over values, transformations, and post-render customization.
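To make the typed-resources point concrete, here is a minimal sketch; the commented-out lines show mistakes the TypeScript compiler rejects before anything reaches the cluster:

import * as k8s from "@pulumi/kubernetes";

const demo = new k8s.apps.v1.Deployment("demo", {
    spec: {
        // replicas: "three",  // compile error: string is not assignable to number
        // selektor: { ... },  // compile error: unknown property 'selektor'
        replicas: 3,
        selector: { matchLabels: { app: "demo" } },
        template: {
            metadata: { labels: { app: "demo" } },
            spec: { containers: [{ name: "demo", image: "nginx:1.25" }] },
        },
    },
});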
Setting Up the K8s Provider
# Install the Kubernetes provider
npm install @pulumi/kubernetes
# By default, uses ~/.kube/config
# Or configure a specific kubeconfig
pulumi config set kubernetes:kubeconfig /path/to/kubeconfig
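Beyond the default kubeconfig, you can instantiate an explicit provider and pass it per resource; a minimal sketch, with a placeholder kubeconfig path and a hypothetical context name:

import * as k8s from "@pulumi/kubernetes";

// An explicit provider pinned to a specific kubeconfig and context
const staging = new k8s.Provider("staging", {
    kubeconfig: "/path/to/staging-kubeconfig", // placeholder path
    context: "staging-admin",                  // hypothetical context name
});

// Resources opt in to this provider instead of the ambient one
const stagingNs = new k8s.core.v1.Namespace("staging-ns", {}, { provider: staging });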
Deploying Workloads
Creating a Deployment
import * as k8s from "@pulumi/kubernetes";

// Create a namespace
const ns = new k8s.core.v1.Namespace("app-ns", {
    metadata: { name: "my-app" },
});

// Create a Deployment
const appLabels = { app: "nginx" };
const deployment = new k8s.apps.v1.Deployment("nginx-deploy", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        replicas: 3,
        selector: { matchLabels: appLabels },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "nginx",
                    image: "nginx:1.25",
                    ports: [{ containerPort: 80 }],
                    resources: {
                        requests: { cpu: "100m", memory: "128Mi" },
                        limits: { cpu: "250m", memory: "256Mi" },
                    },
                }],
            },
        },
    },
});
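The preview below also shows stack outputs; a minimal sketch of the exports that would produce them (names chosen to match the output):

// Stack outputs surfaced by `pulumi up`
export const deploymentName = deployment.metadata.name;
export const replicas = deployment.spec.replicas;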
$ pulumi up
     Type                            Name          Status
 +   kubernetes:core/v1:Namespace    app-ns        created
 +   kubernetes:apps/v1:Deployment   nginx-deploy  created

Outputs:
    deploymentName: "nginx-deploy-abc123"
    replicas      : 3
Services & Ingress
Exposing with Services
// Create a LoadBalancer Service
const service = new k8s.core.v1.Service("nginx-svc", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        type: "LoadBalancer",
        selector: appLabels,
        ports: [{ port: 80, targetPort: 80 }],
    },
});

export const serviceIp = service.status.loadBalancer.ingress[0].ip;
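One caveat: some clusters (AWS ELBs, for example) expose the load balancer as a hostname rather than an IP, so ingress[0].ip comes back undefined. A hedged variant that handles both:

// Fall back to the hostname when the cluster reports no IP (e.g. AWS ELB)
export const serviceAddress = service.status.loadBalancer.ingress[0].apply(
    (ing) => ing?.ip ?? ing?.hostname,
);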
// Create an Ingress resource
const ingress = new k8s.networking.v1.Ingress("app-ingress", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        // ingressClassName replaces the deprecated kubernetes.io/ingress.class annotation
        ingressClassName: "nginx",
        rules: [{
            host: "app.example.com",
            http: {
                paths: [{
                    path: "/",
                    pathType: "Prefix",
                    backend: {
                        service: {
                            name: service.metadata.name,
                            port: { number: 80 },
                        },
                    },
                }],
            },
        }],
    },
});
ConfigMaps & Secrets
import * as pulumi from "@pulumi/pulumi";

// ConfigMap from data
const configMap = new k8s.core.v1.ConfigMap("app-config", {
    metadata: { namespace: ns.metadata.name },
    data: {
        "APP_ENV": pulumi.getStack(),
        "LOG_LEVEL": "info",
        "nginx.conf": `server {
  listen 80;
  location / { proxy_pass http://backend:8080; }
}`,
    },
});

// Secret with Pulumi secret values
const config = new pulumi.Config();
const secret = new k8s.core.v1.Secret("db-creds", {
    metadata: { namespace: ns.metadata.name },
    stringData: {
        "DB_PASSWORD": config.requireSecret("dbPassword"),
        "DB_HOST": "db.internal:5432",
    },
});
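To consume these from a workload, reference the generated resource names rather than hard-coding strings. A sketch of a container wired up via envFrom (this Deployment is illustrative, not part of the program above):

// Inject the ConfigMap and Secret as environment variables
const app = new k8s.apps.v1.Deployment("env-demo", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        replicas: 1,
        selector: { matchLabels: { app: "env-demo" } },
        template: {
            metadata: { labels: { app: "env-demo" } },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.25",
                    envFrom: [
                        { configMapRef: { name: configMap.metadata.name } },
                        { secretRef: { name: secret.metadata.name } },
                    ],
                }],
            },
        },
    },
});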
Common Mistake
Wrong: Putting secrets in ConfigMaps: new k8s.core.v1.ConfigMap("config", { data: { dbPassword: "hunter2" } })
Why it fails: ConfigMaps are stored as plain text in etcd, and anyone with read access to the namespace can view them. Secrets are only base64-encoded (encoding, not encryption), but they get separate RBAC treatment and can be encrypted at rest.
Instead: Use k8s.core.v1.Secret for sensitive data, and combine with Pulumi's pulumi.secret() to encrypt values in state.
Helm Charts with Pulumi
// Deploy a Helm chart from a repository
const nginx = new k8s.helm.v3.Release("ingress-nginx", {
    chart: "ingress-nginx",
    version: "4.8.3",
    namespace: "ingress-system",
    createNamespace: true,
    repositoryOpts: {
        repo: "https://kubernetes.github.io/ingress-nginx",
    },
    values: {
        controller: {
            replicaCount: 2,
            service: { type: "LoadBalancer" },
            metrics: { enabled: true },
        },
    },
});

// Deploy cert-manager for TLS
const certManager = new k8s.helm.v3.Release("cert-manager", {
    chart: "cert-manager",
    version: "v1.13.2",
    namespace: "cert-manager",
    createNamespace: true,
    repositoryOpts: {
        repo: "https://charts.jetstack.io",
    },
    values: {
        installCRDs: true,
    },
});
Use Chart when you want the chart's manifests rendered client-side and managed as individual Pulumi resources, and Release for full control over the native Helm lifecycle (install, upgrade, rollback).
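For comparison, the same controller deployed with Chart; a minimal sketch:

const nginxChart = new k8s.helm.v3.Chart("ingress-nginx-chart", {
    chart: "ingress-nginx",
    version: "4.8.3",
    namespace: "ingress-system",
    fetchOpts: { repo: "https://kubernetes.github.io/ingress-nginx" },
    values: { controller: { replicaCount: 2 } },
});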
Deep Dive: Pulumi vs Raw YAML for Kubernetes
Raw YAML manifests are error-prone: no type checking, no IDE autocomplete, and only string-based templating via Helm. Pulumi provides compile-time validation, real programming-language constructs (loops, conditionals, functions), and automatic dependency tracking between resources. For example, creating 10 nearly-identical deployments requires 10 YAML files or complex Helm templates, but in Pulumi it's a simple for loop, as sketched below. The trade-off: teams must know TypeScript or Python, and Pulumi adds a state management layer that raw kubectl apply doesn't need.
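To make the for-loop claim concrete, a sketch (the service names and image registry are illustrative):

// N nearly-identical Deployments from one loop
const serviceNames = ["api", "web", "worker"]; // illustrative names
for (const name of serviceNames) {
    new k8s.apps.v1.Deployment(`${name}-deploy`, {
        metadata: { namespace: ns.metadata.name },
        spec: {
            replicas: 2,
            selector: { matchLabels: { app: name } },
            template: {
                metadata: { labels: { app: name } },
                spec: {
                    containers: [{ name, image: `registry.example.com/${name}:latest` }],
                },
            },
        },
    });
}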
Quick Reference
| Resource | Pulumi Class | Key Properties |
|---|---|---|
| Namespace | k8s.core.v1.Namespace | metadata.name, metadata.labels |
| Deployment | k8s.apps.v1.Deployment | spec.replicas, spec.template |
| Service | k8s.core.v1.Service | spec.type, spec.selector, spec.ports |
| Ingress | k8s.networking.v1.Ingress | spec.rules, metadata.annotations |
| ConfigMap | k8s.core.v1.ConfigMap | data, binaryData |
| Secret | k8s.core.v1.Secret | stringData, type |
| Helm Release | k8s.helm.v3.Release | chart, version, values, repositoryOpts |
Provider Configuration Tips
- The default provider uses the current context in ~/.kube/config
- Use explicit providers for multi-cluster deployments
- Pass kubeconfig from cloud provider outputs (EKS, GKE, AKS) for seamless integration, as sketched below
- Server-side apply (enableServerSideApply: true) is the default in recent provider versions; keep it enabled for safer field management
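A sketch of the cloud-output pattern for EKS, assuming the @pulumi/eks package (the cluster itself is illustrative):

import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Create (or look up) an EKS cluster, then feed its kubeconfig straight
// into an explicit Kubernetes provider
const cluster = new eks.Cluster("demo"); // illustrative cluster
const eksProvider = new k8s.Provider("eks", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// Every resource created with this provider targets the EKS cluster
new k8s.core.v1.Namespace("apps", {}, { provider: eksProvider });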