Kubernetes on GKE — Google Cloud Guide

Sanjeev Sharma
5 min read


Google Kubernetes Engine (GKE) provides managed Kubernetes with seamless GCP integration.

Introduction

GKE is Google's managed Kubernetes service with automatic scaling, updates, and monitoring integration.

Prerequisites

# Install Google Cloud SDK
curl https://sdk.cloud.google.com | bash
exec -l $SHELL

# Initialize
gcloud init
gcloud auth login

# Install kubectl
gcloud components install kubectl

# Install gke-gcloud-auth-plugin
gcloud components install gke-gcloud-auth-plugin
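
Once these are installed, a quick sanity check confirms the toolchain is on your PATH:

```shell
# Confirm gcloud, kubectl, and the auth plugin are all installed
gcloud version
kubectl version --client
gke-gcloud-auth-plugin --version
```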

Creating GKE Clusters

Standard Cluster

# Create cluster
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type n1-standard-1

# Get credentials
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Verify
kubectl cluster-info

Production Cluster

gcloud container clusters create prod-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type n2-standard-4 \
  --enable-vertical-pod-autoscaling \
  --enable-autorepair \
  --enable-autoupgrade \
  --enable-ip-alias \
  --logging=SYSTEM,WORKLOAD \
  --monitoring=SYSTEM,WORKLOAD \
  --enable-network-policy
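
To confirm the flags took effect, describe the cluster and pull out the relevant fields (a sketch; field names follow the current gcloud output format):

```shell
# Inspect the cluster's effective logging, monitoring, and network policy config
gcloud container clusters describe prod-cluster \
  --zone us-central1-a \
  --format='value(loggingConfig,monitoringConfig,networkPolicy)'
```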

Node Pool Management

# Create a GPU node pool (K80s have been retired on GCP; T4 shown here)
gcloud container node-pools create gpu-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --accelerator type=nvidia-tesla-t4,count=1

# Enable cluster autoscaling (add --node-pool to target a specific pool)
gcloud container clusters update my-cluster \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 10 \
  --zone us-central1-a

# List node pools
gcloud container node-pools list --cluster my-cluster --zone us-central1-a

# Delete node pool
gcloud container node-pools delete default-pool \
  --cluster my-cluster --zone us-central1-a
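
GKE labels every node with the pool it belongs to, which makes it easy to see which nodes a given pool contributed:

```shell
# List the nodes that belong to a specific node pool
kubectl get nodes -l cloud.google.com/gke-nodepool=gpu-pool
```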

Networking

VPC and Subnets

# Create VPC
gcloud compute networks create my-vpc --subnet-mode=custom

# Create subnet
gcloud compute networks subnets create my-subnet \
  --network=my-vpc \
  --region=us-central1 \
  --range=10.0.0.0/20

# Create cluster on custom VPC
gcloud container clusters create my-cluster \
  --network=my-vpc \
  --subnetwork=my-subnet \
  --zone=us-central1-a

Load Balancing

# Automatic load balancer with service
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 3000
EOF

# Get load balancer IP
kubectl get service web-lb
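
The external IP is provisioned asynchronously and shows as pending at first; one way to script around that (a sketch for automation):

```shell
# Poll until the load balancer's external IP is assigned, then print it
until kubectl get service web-lb \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | grep -q .; do
  sleep 5
done
kubectl get service web-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```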

Workload Identity

Workload Identity lets pods authenticate to Google Cloud APIs as IAM service accounts, without exported key files:

# Create service account
gcloud iam service-accounts create my-app-sa

# Create Kubernetes service account
kubectl create serviceaccount my-app

# Bind Kubernetes SA to GCP SA
gcloud iam service-accounts add-iam-policy-binding \
  my-app-sa@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[default/my-app]"

# Annotate Kubernetes SA
kubectl annotate serviceaccount my-app \
  iam.gke.io/gcp-service-account=my-app-sa@PROJECT_ID.iam.gserviceaccount.com

Use in Pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app
  containers:
  - name: app
    image: my-app:1.0
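
To verify the binding works, exec into the pod and ask the metadata server which identity it sees; it should report the GCP service account's email (assumes curl is available in the image):

```shell
# From inside the pod, the metadata server reflects the mapped GCP identity
kubectl exec -it my-app -- curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
```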

Storage

Persistent Disk

# Create persistent disk
gcloud compute disks create my-disk \
  --size=100GB \
  --zone=us-central1-a

# Create a PersistentVolume backed by the disk
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcp-disk
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-disk
    fsType: ext4
EOF
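
A pod consumes the disk through a PersistentVolumeClaim bound to that PersistentVolume; a minimal sketch (the empty storageClassName targets manually created PVs rather than dynamic provisioning):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gcp-disk-claim
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
EOF
```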

Filestore (NFS)

# Create Filestore instance
gcloud filestore instances create my-filestore \
  --zone=us-central1-a \
  --tier=standard \
  --file-share name=share,capacity=1TB

# Mount in pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: nfs
      mountPath: /data
  volumes:
  - name: nfs
    nfs:
      server: 10.0.0.2  # the Filestore instance's IP address
      path: /share
EOF
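
The NFS server address above is the Filestore instance's IP, which you can look up rather than hard-coding:

```shell
# Fetch the Filestore instance IP to plug into the pod spec
gcloud filestore instances describe my-filestore \
  --zone=us-central1-a \
  --format='value(networks[0].ipAddresses[0])'
```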

Monitoring and Logging

Cloud Monitoring

# Configure logging and monitoring
# (the legacy --enable-cloud-logging/--enable-cloud-monitoring flags are deprecated)
gcloud container clusters update my-cluster \
  --logging=SYSTEM,WORKLOAD \
  --monitoring=SYSTEM

# View metrics
gcloud monitoring metrics-descriptors list

# Create alert policy
gcloud alpha monitoring policies create \
  --notification-channels=CHANNEL_ID \
  --display-name='Pod CPU Alert'

Cloud Logging

# View logs
gcloud logging read "resource.type=k8s_cluster AND resource.labels.cluster_name=my-cluster" --limit 50

# Stream logs in real time (gcloud logging read has no --stream flag)
gcloud beta logging tail "resource.type=k8s_container"
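
Cluster-level queries are usually too coarse for debugging; a container-scoped filter narrows output to a single workload (namespace and container names here are placeholders):

```shell
# Logs for one container in one namespace
gcloud logging read \
  'resource.type=k8s_container AND resource.labels.namespace_name=default AND resource.labels.container_name=app' \
  --limit 20
```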

Backup and Disaster Recovery

Backup and Restore

# Enable Backup for GKE and create a backup plan
# (the command group is currently under gcloud beta; the cluster is
# referenced by its full resource path)
gcloud beta container backup-restore backup-plans create my-backup-plan \
  --location=us-central1 \
  --cluster=projects/PROJECT_ID/locations/us-central1-a/clusters/my-cluster \
  --all-namespaces

# Create a backup
gcloud beta container backup-restore backups create my-backup \
  --location=us-central1 \
  --backup-plan=my-backup-plan

# Restore from a backup (requires a restore plan referencing the backup plan)
gcloud beta container backup-restore restores create my-restore \
  --location=us-central1 \
  --restore-plan=my-restore-plan \
  --backup=projects/PROJECT_ID/locations/us-central1/backupPlans/my-backup-plan/backups/my-backup

Cost Optimization

Committed Use Discounts

# Purchase commitment (plans are 12-month or 36-month, scoped to a region)
gcloud compute commitments create my-commitment \
  --plan=12-month \
  --region=us-central1 \
  --resources=vcpu=8,memory=32GB

Spot and Preemptible VMs

# Create cluster with preemptible nodes (--spot is the newer replacement flag)
gcloud container clusters create my-cluster \
  --preemptible \
  --num-nodes=3

# Create a preemptible node pool
gcloud container node-pools create preempt-pool \
  --cluster=my-cluster \
  --preemptible
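
Preemptible nodes carry the cloud.google.com/gke-preemptible=true label, so fault-tolerant workloads can be pinned to them with a nodeSelector; a sketch (deployment name and image are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      # Schedule only onto preemptible nodes
      nodeSelector:
        cloud.google.com/gke-preemptible: "true"
      containers:
      - name: worker
        image: my-app:1.0
EOF
```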

Production Checklist

  • Enable Cloud Logging and Monitoring
  • Configure Workload Identity
  • Set up network policies
  • Use Persistent Disks for stateful apps
  • Enable cluster autoscaling
  • Configure backup policies
  • Set pod resource requests/limits
  • Use preemptible nodes for cost savings
  • Configure ingress controller
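
The requests/limits item from the checklist looks like this on a container spec in practice (values are illustrative starting points, not recommendations):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: my-app:1.0
    resources:
      requests:        # what the scheduler reserves
        cpu: 250m
        memory: 256Mi
      limits:          # hard ceiling enforced at runtime
        cpu: 500m
        memory: 512Mi
EOF
```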

FAQ

Q: What's the difference between GKE and Kubernetes? A: GKE is Google's managed Kubernetes service. Kubernetes is the underlying orchestration platform.

Q: Should I use preemptible nodes in production? A: Preemptible nodes offer cost savings but can be interrupted. Use for fault-tolerant workloads with multiple replicas.

Q: How do I upgrade my GKE cluster? A: Use gcloud container clusters upgrade or enable automatic upgrades with --enable-autoupgrade.
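
The upgrade flow from that answer, spelled out (check available versions first; upgrade the control plane before the nodes):

```shell
# See which Kubernetes versions are available in the zone
gcloud container get-server-config --zone us-central1-a

# Upgrade the control plane, then the node pools
gcloud container clusters upgrade my-cluster --master --zone us-central1-a
gcloud container clusters upgrade my-cluster --zone us-central1-a
```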


Written by

Sanjeev Sharma

Full Stack Engineer · E-mopro