GCP Infrastructure with Pulumi


GCP Provider Setup

Why GCP with Pulumi?

The Problem: Google Cloud's Deployment Manager uses YAML/Jinja templates that lack type safety and are difficult to test or refactor.

The Solution: Pulumi provides first-class GCP support with auto-generated types from the Google Cloud API, giving you complete coverage and type safety.

Real Impact: Because Pulumi's GCP Native provider is generated directly from Google's API, new Google Cloud resources and properties become available in Pulumi as soon as Google publishes them, keeping your IaC up to date.

Real-World Analogy

Think of GCP with Pulumi as a smart factory automation system:

  • Projects = Factory buildings that isolate different product lines
  • VPC Networks = Internal conveyor belt systems connecting machines
  • Compute Engine = Heavy-duty machines on the factory floor
  • Cloud Functions = Automated sensors that trigger actions on demand
  • Cloud Storage = Warehouses that store raw materials and finished goods

Key GCP Concepts in Pulumi

Project-Based Organization

GCP organizes resources into projects. Configure the project and region in Pulumi config for consistent deployments.
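Those values land in the stack's config file. A minimal example (file name corresponds to a "dev" stack; the project ID is a placeholder):

```yaml
# Pulumi.dev.yaml -- per-stack configuration
config:
  gcp:project: my-gcp-project-id
  gcp:region: us-central1
```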

GCP Native Provider

Auto-generated from Google's API discovery documents, providing 100% resource coverage with accurate types.

Service Account Auth

Use service accounts with JSON key files or workload identity for secure authentication in CI/CD pipelines.

Labels Strategy

Apply consistent labels across all GCP resources for cost allocation, environment tracking, and team ownership.
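One common approach, sketched below with illustrative names: a small helper that normalizes values into valid GCP label format and is spread into every resource's labels.

```typescript
// Sketch of a shared labels helper. GCP label keys and values may only
// contain lowercase letters, digits, underscores, and hyphens.
export function commonLabels(stack: string, team: string): Record<string, string> {
    const sanitize = (v: string) => v.toLowerCase().replace(/[^a-z0-9_-]/g, "-");
    return {
        environment: sanitize(stack),
        team: sanitize(team),
        managed_by: "pulumi",
    };
}

// Usage in a Pulumi program (illustrative):
// new gcp.storage.Bucket("data", {
//     ...,
//     labels: commonLabels(pulumi.getStack(), "Platform Team"),
// });
```

Centralizing labels this way means cost-allocation and ownership reports stay consistent even as teams add resources.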

Installing the GCP Provider

setup.sh
# Create a new Pulumi GCP project
pulumi new gcp-typescript

# Install the GCP provider
npm install @pulumi/gcp

# Configure project and region
pulumi config set gcp:project my-gcp-project-id
pulumi config set gcp:region us-central1

# Authenticate with gcloud CLI
gcloud auth application-default login
Output
$ pulumi up
Previewing update:
  + gcp:compute:Network           vpc-main          create
  + gcp:compute:Subnetwork        subnet-us-east    create
  + gcp:compute:Firewall          allow-http        create

Resources:
    + 3 to create

Outputs:
    networkId: "projects/my-project/global/networks/vpc-main"
    subnetCidr: "10.0.1.0/24"
Key Takeaway: GCP uses a project-based resource hierarchy. Always specify the project explicitly in your Pulumi provider configuration rather than relying on the default project from gcloud CLI -- this prevents accidental deployments to the wrong project.
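One way to enforce this, sketched with placeholder names: keep the provider settings in one place and fail fast when the project is unset (the gcp.Provider wiring is shown as comments).

```typescript
// Pin the target project explicitly rather than inheriting the gcloud
// default. Project and region values here are placeholders.
export const gcpSettings = {
    project: "my-gcp-project-id",
    region: "us-central1",
};

// In a Pulumi program you would pass these to an explicit provider:
// const provider = new gcp.Provider("pinned", gcpSettings);
// new gcp.compute.Network("vpc-main", { ... }, { provider });

// Guard that fails fast if the project was left unset or blank.
export function requireProject(project: string | undefined): string {
    if (!project || project.trim() === "") {
        throw new Error("gcp:project must be set explicitly");
    }
    return project;
}
```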

VPC & Networking

GCP Architecture with Pulumi
[Architecture diagram] GCP Project "my-project" contains a custom-mode VPC Network with two subnets: us-central1 (10.0.1.0/24) hosting a GCE instance and a Cloud Function, and us-east1 (10.0.2.0/24) hosting a GKE cluster and Cloud SQL. Alongside the VPC sit Cloud Storage buckets (GCS), a Firestore NoSQL database, and Secret Manager for secrets and config. An HTTPS Cloud Load Balancer fronts the network, and firewall rules are applied across the VPC.

Creating VPC and Subnets

network.ts
import * as gcp from "@pulumi/gcp";

// Create a custom VPC network
export const network = new gcp.compute.Network("app-network", {
    autoCreateSubnetworks: false,
    description: "Main application VPC",
});

// Create subnets in different regions
export const subnetCentral = new gcp.compute.Subnetwork("central-subnet", {
    network: network.id,
    ipCidrRange: "10.0.1.0/24",
    region: "us-central1",
    privateIpGoogleAccess: true,
});

const subnetEast = new gcp.compute.Subnetwork("east-subnet", {
    network: network.id,
    ipCidrRange: "10.0.2.0/24",
    region: "us-east1",
});

// Firewall rule for HTTP traffic
const httpFirewall = new gcp.compute.Firewall("allow-http", {
    network: network.id,
    allows: [{ protocol: "tcp", ports: ["80", "443"] }],
    sourceRanges: ["0.0.0.0/0"],
    targetTags: ["web-server"],
});

// Cloud Router for NAT
const router = new gcp.compute.Router("nat-router", {
    network: network.id,
    region: "us-central1",
});

const nat = new gcp.compute.RouterNat("cloud-nat", {
    router: router.name,
    region: "us-central1",
    natIpAllocateOption: "AUTO_ONLY",
    sourceSubnetworkIpRangesToNat: "ALL_SUBNETWORKS_ALL_IP_RANGES",
});
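Since both subnets draw from the same private range, a quick standalone overlap check before `pulumi up` can catch CIDR mistakes early. This is an IPv4-only sketch, independent of the Pulumi program:

```typescript
// Compute the [start, end] address range of an IPv4 CIDR block.
function cidrRange(cidr: string): [number, number] {
    const [ip, prefix] = cidr.split("/");
    const base = ip.split(".").reduce((acc, octet) => acc * 256 + Number(octet), 0);
    const size = 2 ** (32 - Number(prefix));
    const start = Math.floor(base / size) * size; // align to the block boundary
    return [start, start + size - 1];
}

// True if two CIDR blocks share any addresses.
export function cidrsOverlap(a: string, b: string): boolean {
    const [aStart, aEnd] = cidrRange(a);
    const [bStart, bEnd] = cidrRange(b);
    return aStart <= bEnd && bStart <= aEnd;
}
```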

Compute (GCE & Cloud Functions)

Compute Engine Instance

compute.ts
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";
import { network, subnetCentral } from "./network";

// Create a Compute Engine instance
const instance = new gcp.compute.Instance("web-server", {
    machineType: "e2-medium",
    zone: "us-central1-a",
    bootDisk: {
        initializeParams: {
            image: "debian-cloud/debian-11",
            size: 20,
        },
    },
    networkInterfaces: [{
        network: network.id,
        subnetwork: subnetCentral.id,
        accessConfigs: [{}], // Gives external IP
    }],
    metadataStartupScript: `#!/bin/bash
apt-get update
apt-get install -y nginx
echo "Hello from Pulumi on GCP!" > /var/www/html/index.html
systemctl start nginx`,
    tags: ["web-server"],
    labels: { environment: pulumi.getStack() },
});

export const instanceIp = instance.networkInterfaces[0].accessConfigs[0].natIp;

Cloud Functions

functions.ts
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";

// Cloud Storage bucket for function source
const sourceBucket = new gcp.storage.Bucket("func-source", {
    location: "US",
    uniformBucketLevelAccess: true,
});

// Upload function source code
const sourceArchive = new gcp.storage.BucketObject("func-zip", {
    bucket: sourceBucket.name,
    source: new pulumi.asset.FileArchive("./function-source"),
});

// Create the Cloud Function
const func = new gcp.cloudfunctions.Function("api-function", {
    runtime: "nodejs18",
    entryPoint: "handler",
    sourceArchiveBucket: sourceBucket.name,
    sourceArchiveObject: sourceArchive.name,
    triggerHttp: true,
    availableMemoryMb: 256,
    environmentVariables: {
        STAGE: pulumi.getStack(),
    },
});

// Allow public invocation
const invoker = new gcp.cloudfunctions.FunctionIamMember("invoker", {
    cloudFunction: func.name,
    role: "roles/cloudfunctions.invoker",
    member: "allUsers",
});

export const functionUrl = func.httpsTriggerUrl;
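For first-generation HTTP functions, the trigger URL follows a predictable pattern. Pulumi already exports the real value above via `func.httpsTriggerUrl`; the helper below only documents the expected shape for quick sanity checks (illustrative, not part of the Pulumi program):

```typescript
// Expected URL shape for a 1st-gen HTTP Cloud Function.
export function functionUrlFor(region: string, project: string, name: string): string {
    return `https://${region}-${project}.cloudfunctions.net/${name}`;
}
```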

Common Mistake

Wrong: Using the default compute service account for GCE instances and Cloud Functions

Why it fails: The default service account has Editor permissions on the entire project. A compromised VM can access any resource, read any secret, and modify any service.

Instead: Create dedicated service accounts per service with minimum required IAM roles. Use Workload Identity for GKE pods instead of service account keys.
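The fix can be sketched as follows; resource names are illustrative, and the commented lines show how a dedicated service account wires into the resources defined earlier:

```typescript
// GCP IAM member strings use a "type:identifier" format; a helper keeps
// them consistent across bindings.
export function saMember(email: string): string {
    return `serviceAccount:${email}`;
}

// Illustrative wiring in a Pulumi program:
// const webSa = new gcp.serviceaccount.Account("web-sa", { accountId: "web-sa" });
// // Grant only what the web tier needs, e.g. read-only object access:
// new gcp.storage.BucketIAMMember("web-read", {
//     bucket: dataBucket.name,
//     role: "roles/storage.objectViewer",
//     member: webSa.email.apply(saMember),
// });
// new gcp.compute.Instance("web-server", {
//     // ...machine config as above...
//     serviceAccount: { email: webSa.email, scopes: ["cloud-platform"] },
// });
```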

Storage (GCS & Firestore)

Cloud Storage and Firestore

storage.ts
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";

// Cloud Storage bucket with lifecycle rules
export const dataBucket = new gcp.storage.Bucket("data-bucket", {
    location: "US",
    storageClass: "STANDARD",
    uniformBucketLevelAccess: true,
    versioning: { enabled: true },
    lifecycleRules: [{
        action: { type: "SetStorageClass", storageClass: "NEARLINE" },
        condition: { age: 30 },
    }, {
        action: { type: "Delete" },
        condition: { age: 365 },
    }],
    labels: { environment: pulumi.getStack() },
});

// Firestore database (Native mode)
const firestore = new gcp.firestore.Database("app-db", {
    name: "(default)",  // database ID; "(default)" is the project's default database
    locationId: "nam5", // multi-region location covering us-central
    type: "FIRESTORE_NATIVE",
});

export const bucketUrl = dataBucket.url;
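The tier-then-delete lifecycle policy above can be generated rather than hand-written, which keeps the age thresholds consistent across buckets. A sketch with illustrative names:

```typescript
// Shape of a GCS lifecycle rule as used in the bucket definition above.
interface LifecycleRule {
    action: { type: string; storageClass?: string };
    condition: { age: number };
}

// Build a "move to NEARLINE after N days, delete after M days" policy.
export function tieringRules(nearlineAfter: number, deleteAfter: number): LifecycleRule[] {
    if (deleteAfter <= nearlineAfter) {
        throw new Error("delete age must exceed tiering age");
    }
    return [
        { action: { type: "SetStorageClass", storageClass: "NEARLINE" }, condition: { age: nearlineAfter } },
        { action: { type: "Delete" }, condition: { age: deleteAfter } },
    ];
}
```

Passing `tieringRules(30, 365)` reproduces the `lifecycleRules` block shown in storage.ts.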

IAM & Service Accounts

Service Accounts and IAM Bindings

iam.ts
import * as gcp from "@pulumi/gcp";
import { dataBucket } from "./storage";

// Create a service account for the application
const appSa = new gcp.serviceaccount.Account("app-sa", {
    accountId: "app-service-account",
    displayName: "Application Service Account",
});

// Grant Storage Object Viewer role
new gcp.storage.BucketIAMMember("bucket-viewer", {
    bucket: dataBucket.name,
    role: "roles/storage.objectViewer",
    member: appSa.email.apply(e => `serviceAccount:${e}`),
});

// Grant Firestore User role at project level
new gcp.projects.IAMMember("firestore-user", {
    project: appSa.project, // project is required for project-level bindings
    role: "roles/datastore.user",
    member: appSa.email.apply(e => `serviceAccount:${e}`),
});

export const serviceAccountEmail = appSa.email;
Deep Dive: GCP Organization Policies with Pulumi

GCP Organization Policies enforce constraints across all projects in your organization. Use Pulumi to define and manage these policies as code: restrict allowed regions (data residency), enforce uniform bucket-level access, disable service account key creation, or require OS login for VMs. Define policies in a dedicated "governance" Pulumi project that runs before any infrastructure project. This ensures compliance is enforced automatically, not through manual review.
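As an illustration, such a governance program might keep its required constraint list in one place and flag gaps. The constraint names below are real GCP organization policy constraints; the checking helper itself is a hypothetical sketch:

```typescript
// Constraints this (hypothetical) governance program enforces. These are
// real GCP org policy constraint names from Google's constraint catalog.
const requiredConstraints = [
    "constraints/gcp.resourceLocations",                // data residency
    "constraints/storage.uniformBucketLevelAccess",     // IAM over ACLs
    "constraints/iam.disableServiceAccountKeyCreation", // no downloadable keys
    "constraints/compute.requireOsLogin",               // OS Login on VMs
];

// Report which required constraints a project has not yet applied.
export function missingConstraints(applied: string[]): string[] {
    return requiredConstraints.filter(c => !applied.includes(c));
}
```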

Key Takeaway: Enable GCP's uniform bucket-level access on all Cloud Storage buckets and manage permissions through IAM, not ACLs. This simplifies permission management and aligns with GCP's security best practices.

Quick Reference

GCP Resource Cheat Sheet

Each entry lists the Pulumi class and its key properties:

  • VPC Network: gcp.compute.Network (autoCreateSubnetworks, description)
  • Subnet: gcp.compute.Subnetwork (ipCidrRange, region, network)
  • GCE Instance: gcp.compute.Instance (machineType, zone, bootDisk)
  • Cloud Function: gcp.cloudfunctions.Function (runtime, entryPoint, triggerHttp)
  • Cloud Storage: gcp.storage.Bucket (location, storageClass, versioning)
  • Firestore: gcp.firestore.Database (locationId, type)
  • Service Account: gcp.serviceaccount.Account (accountId, displayName)

GCP Authentication Methods

Authentication Options

  • gcloud auth application-default login - Interactive login for development
  • Service Account Key - Set the GOOGLE_CREDENTIALS env var to the contents or path of a JSON key file
  • Workload Identity - Federate with GitHub Actions or other OIDC providers
  • GCE Metadata - Automatic when running on Compute Engine or GKE
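These options are tried in a predictable order. The sketch below is a simplified model of that resolution order, not the provider's actual implementation; the helper and field names are invented for illustration:

```typescript
// Simplified model of credential resolution for the GCP provider / ADC.
export function authMethod(opts: {
    googleCredentials?: string; // GOOGLE_CREDENTIALS: key JSON or file path
    adcKeyFile?: string;        // GOOGLE_APPLICATION_CREDENTIALS file path
    hasGcloudLogin?: boolean;   // ran `gcloud auth application-default login`
    onGce?: boolean;            // metadata server available (GCE/GKE)
}): string {
    if (opts.googleCredentials) return "service-account-key";
    if (opts.adcKeyFile) return "adc-key-file";
    if (opts.hasGcloudLogin) return "gcloud-adc";
    if (opts.onGce) return "gce-metadata";
    return "unauthenticated";
}
```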