GitOps With ArgoCD — Git as the Single Source of Truth for Kubernetes Deployments
By Sanjeev Sharma (@webcoderspeed1)
Introduction
GitOps inverts deployment automation. Instead of pipelines pushing code to clusters, clusters pull desired state from Git. This post walks through ArgoCD's production setup, app-of-apps pattern for multi-environment deployments, managing secrets safely, and progressive deployment strategies that catch errors before they hit users.
- GitOps Principles
- ArgoCD Installation and Setup
- App-of-Apps Pattern
- Automated Sync vs Manual Approval
- Managing Secrets Safely (Sealed Secrets)
- Drift Detection and Reconciliation
- Rollback via Git Revert
- Progressive Delivery with Argo Rollouts
- Production GitOps Deployment Checklist
- Conclusion
GitOps Principles
GitOps rests on four pillars:
1. Declarative Configuration
# Instead of imperative commands (kubectl apply, kubectl set image),
# declare the desired state in Git
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myregistry.azurecr.io/api:v2.1.0
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 1Gi
Git is the source of truth. Nothing manual.
2. Version Control as Audit Log
# Every change has a Git commit (immutable history)
git log --oneline
d4e5f6a Upgrade api image to v2.1.0
c3d4e5f Rollback api due to memory leak
b2c3d4e Fix database connection pool sizing
a1b2c3d Add readiness probe timeout

# Rollback is: git revert <commit-hash>
# The cluster then auto-syncs to the new Git state
3. Continuous Reconciliation
 Git State           Cluster State
     ↓                    ↓
     └──────────┬─────────┘
                │
      Continuously Compare
                │
        Are they equal?
          /          \
        YES           NO
         │             │
       Idle    Sync to Git State
                  (automated)

ArgoCD continuously compares the state in Git with the running cluster. If drift is detected, it reconciles automatically.
4. Pull Model (not Push)
Traditional (Push):                        GitOps (Pull):

┌──────────┐  kubectl apply  ┌─────────┐   ┌──────────┐
│  CI/CD   │────────────────→│ Cluster │   │ Git Repo │
│ Pipeline │                 └─────────┘   └──────────┘
└──────────┘                                    ↑
                                                │ ArgoCD pulls
                                           ┌────┴────┐
                                           │ Cluster │
                                           └─────────┘
Advantages of pull:
- No CI/CD pipeline needs cluster credentials
- Cluster controls what gets deployed
- Clear separation of concerns
ArgoCD Installation and Setup
Kubernetes deployment (production-ready):
# Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# (for production, the HA manifests under manifests/ha/ are a better starting point)

# Scale out for high availability
# (note: the application controller ships as a StatefulSet, not a Deployment;
# running multiple controller replicas also requires sharding via ARGOCD_CONTROLLER_REPLICAS)
kubectl -n argocd scale deployment argocd-repo-server --replicas=3
kubectl -n argocd scale statefulset argocd-application-controller --replicas=3
kubectl -n argocd scale deployment argocd-dex-server --replicas=2
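After installation, ArgoCD stores a one-time admin password in a Secret. A quick sketch of the first login; the hostname is an assumption for your own ingress:

```shell
# Retrieve the initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d

# Log in with the CLI (replace the host with your own ingress)
argocd login argocd.company.com --username admin

# Rotate the password, then delete the initial secret
argocd account update-password
```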
Production values (Helm):
# argocd-values.yaml
# (key names follow the community argo-cd Helm chart: repoServer, controller, dex, redis, server)
repoServer:
  replicas: 3
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

controller:
  replicas: 3
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi

dex:
  replicas: 2

redis:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi

# Ingress for the web UI
server:
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - argocd.company.com
    tls:
      - secretName: argocd-tls
        hosts:
          - argocd.company.com

# OIDC authentication
configs:
  cm:
    oidc.config: |
      name: Okta
      issuer: https://company.okta.com
      clientID: $OIDC_CLIENT_ID
      clientSecret: $OIDC_CLIENT_SECRET
      requestedScopes:
        - openid
        - profile
        - email
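With the values file in place, ArgoCD can be installed from the community Helm chart (repo and chart name as published by the Argo project; the release name is arbitrary):

```shell
# Add the Argo project's Helm repository and install with our values
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd \
  --namespace argocd --create-namespace \
  -f argocd-values.yaml
```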
App-of-Apps Pattern
Manage multiple environments with a single Git sync point:
Root application (app-of-apps):
# argocd/root-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/infrastructure
    targetRevision: main
    path: apps # Points to the apps/ directory
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  # The root app syncs automatically;
  # child apps sync based on their own policy
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Directory structure:
infrastructure/
├── apps/
│   ├── app-of-apps.yaml        # Root app
│   ├── namespaces.yaml         # Create namespaces first
│   ├── production/
│   │   ├── api-app.yaml        # API service definition
│   │   ├── frontend-app.yaml
│   │   └── database-app.yaml
│   ├── staging/
│   │   ├── api-app.yaml
│   │   └── frontend-app.yaml
│   └── monitoring/
│       ├── prometheus-app.yaml
│       └── grafana-app.yaml
├── services/
│   ├── api/
│   │   ├── kustomization.yaml
│   │   ├── deployment.yaml
│   │   └── service.yaml
│   ├── frontend/
│   │   └── ...
│   └── database/
│       └── ...
└── README.md
Child application definitions:
# apps/production/api-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-prod
  namespace: argocd
spec:
  project: production
  source:
    repoURL: https://github.com/myorg/infrastructure
    targetRevision: main
    path: services/api
    kustomize:
      images:
        - myregistry.azurecr.io/api:v2.1.0 # Version controlled in Git
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  # Manual sync for production (safer): no automated block
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
  info:
    - name: Documentation
      value: 'https://wiki.company.com/api-deployment'
---
# apps/staging/api-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-staging
  namespace: argocd
spec:
  project: staging
  source:
    repoURL: https://github.com/myorg/infrastructure
    targetRevision: main
    path: services/api
    kustomize:
      images:
        - myregistry.azurecr.io/api:v2.1.0-rc1 # Different version for staging
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  # Auto-sync staging (experiment safely)
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
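In this layout, "deploying" a new version is just a commit that changes the image tag pinned in the staging Application manifest. A minimal sketch of a CI step; the file path and tag are assumptions, and sed is used for brevity (a YAML-aware tool such as yq is more robust):

```shell
# Hypothetical CI step, run from a checkout of the infrastructure repo
NEW_TAG="v2.2.0"
APP_FILE="apps/staging/api-app.yaml"

# Point the staging app at the freshly built image
sed -i "s|myregistry.azurecr.io/api:.*|myregistry.azurecr.io/api:${NEW_TAG}|" "$APP_FILE"

git commit -am "Promote api ${NEW_TAG} to staging"
git push  # ArgoCD auto-syncs staging on its next poll
```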
Automated Sync vs Manual Approval
Choose sync strategy based on risk:
Low-risk environments (staging):
syncPolicy:
  automated:
    prune: true    # Delete resources not in Git
    selfHeal: true # Correct cluster drift automatically
  syncOptions:
    - CreateNamespace=true
    - PrunePropagationPolicy=foreground
- Automatically syncs when Git changes
- Deletes resources manually added to cluster
- Self-heals if someone manually edits the cluster
High-risk environments (production):
syncPolicy:
  # No automatic sync
  syncOptions:
    - CreateNamespace=true

# Require manual approval via the CLI or UI:
# argocd app sync api-prod
Progressive sync with Argo Rollouts (next section):
syncPolicy:
  automated: {}
  syncOptions:
    - CreateNamespace=true

# Canary deployment: 10% → 50% → 100%
# Can be rolled back immediately
Managing Secrets Safely (Sealed Secrets)
Never commit secrets to Git. Use Sealed Secrets:
Installation:
# Install the sealed-secrets controller
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.18.0/controller.yaml

# Back up the sealing key pair (production: store this backup somewhere safe!)
kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/status=active -o yaml > sealing-key.yaml
Create and seal a secret:
# Create a Secret manifest locally (never applied or committed as-is)
kubectl create secret generic db-password \
  --from-literal=password='my-secret-password' \
  -n production --dry-run=client -o yaml > secret.yaml

# Seal it with kubeseal (one-way encryption using the cluster's public key)
kubeseal --format yaml < secret.yaml > password-sealed.yaml

# The sealed secret is safe to commit to Git
cat password-sealed.yaml
# apiVersion: bitnami.com/v1alpha1
# kind: SealedSecret
# metadata:
#   name: db-password
#   namespace: production
# spec:
#   encryptedData:
#     password: AgC5p8X9... (encrypted; only this cluster's controller can decrypt it)
Use sealed secrets in deployment:
# services/api/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production
spec:
  template:
    spec:
      containers:
        - name: api
          image: myregistry/api:v2.1.0
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-password
                  key: password
---
# The sealed secret lives in Git; the controller creates the real Secret
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-password
  namespace: production
spec:
  encryptedData:
    password: AgC5p8X9p8Y9q8Z0r9S1t2u3v4w5x6y7z8a9b0c1d2e3f4...
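kubeseal can also seal without direct cluster access: fetch the controller's public certificate once, then seal offline (e.g. in CI). The filenames here are assumptions:

```shell
# One-time: fetch the cluster's public sealing certificate (needs cluster access)
kubeseal --fetch-cert > pub-cert.pem

# Later: seal secrets offline using only the certificate
kubeseal --cert pub-cert.pem --format yaml < secret.yaml > password-sealed.yaml
```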
Vault integration (alternative for complex secrets):
# argocd-vault-plugin configuration (plugin/sidecar wiring omitted for brevity)
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-vault-plugin
  namespace: argocd
data:
  avp.yaml: |
    AVP_TYPE: vault
    VAULT_ADDR: https://vault.company.com
    AVP_AUTH_TYPE: k8s
    AVP_K8S_ROLE: argocd
---
# Deployment using Vault placeholders, resolved by the plugin at render time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production
  annotations:
    avp.kubernetes.io/path: "secret/data/api"
spec:
  template:
    spec:
      containers:
        - name: api
          env:
            - name: DATABASE_PASSWORD
              value: <path:secret/data/api#password> # Filled in by the plugin
Drift Detection and Reconciliation
ArgoCD continuously monitors cluster state vs Git:
Detect drift automatically:
# Check application status
argocd app get api-prod

Name:               api-prod
Project:            production
Server:             https://kubernetes.default.svc
Namespace:          production
Repo:               https://github.com/myorg/infrastructure
Target:             main
Path:               services/api
SyncPolicy:         Manual
Sync Status:        OutOfSync from main   # cluster state != Git
Health Status:      Healthy

# Running v2.0.5 (old) vs v2.1.0 in Git → OutOfSync
What causes drift:
Cause                        | Example
─────────────────────────────┼───────────────────────────────────────────
Manual kubectl edit          | kubectl set image deployment/api api=...
HPA scaling up/down          | Auto-scaled from 3 → 5 replicas
Operator modifying resources | cert-manager updating secrets
Failed sync                  | Interrupted deployment
External changes             | Cloud auto-scaling groups
Configure drift detection:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-prod
  # Alert on drift via the notifications engine (trigger and service names
  # depend on your argocd-notifications-cm configuration)
  annotations:
    notifications.argoproj.io/subscribe.on-sync-status-unknown.slack: deployments
spec:
  # Keep the last 10 synced revisions for rollback
  revisionHistoryLimit: 10
  # Ignore certain changes (don't treat them as drift)
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas # Ignore HPA scaling
    - group: ''
      kind: Service
      jsonPointers:
        - /spec/clusterIP # Ignore auto-assigned IPs

# The drift-check interval is cluster-wide, not per-app:
# set timeout.reconciliation in the argocd-cm ConfigMap (default: 180s, i.e. 3 minutes)
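Before syncing, the exact drift can be inspected from the CLI. argocd app diff exits non-zero when the live state differs from Git, which makes it usable in scripts:

```shell
# Show a line-by-line diff between Git and the live cluster state
argocd app diff api-prod

# Force a refresh of the comparison cache first if needed
argocd app get api-prod --refresh
```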
Rollback via Git Revert
The power of GitOps: rollback is just a Git operation:
# Check the history of the app definition
git log --oneline -- apps/production/api-app.yaml
d4e5f6a Upgrade api to v2.1.0 (broken)
c3d4e5f Previous working version (v2.0.5)
b2c3d4e Database migration
a1b2c3d Minor fix

# Roll back the v2.1.0 change
git revert d4e5f6a
git push

# ArgoCD detects the new commit on its next poll (or immediately via a Git webhook)
# and syncs the cluster back to v2.0.5
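ArgoCD also keeps its own deployment history, so an operator can roll back from the CLI without waiting on a Git change. Note that argocd app rollback refuses to run while automated sync is enabled, and the Git revert remains the durable fix either way:

```shell
# List previously synced revisions
argocd app history api-prod

# Roll back to a specific entry (<history-id> is the ID column from the output above)
argocd app rollback api-prod <history-id>
```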
ArgoCD retries failed syncs and reports per-resource health, but it does not roll back on a failed health check by itself; fully automated rollback is the job of Argo Rollouts (next section):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-prod
spec:
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    # Retry failed syncs with exponential backoff
    retry:
      limit: 3
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
Progressive Delivery with Argo Rollouts
Canary deployments catch errors before 100% traffic is affected:
Installation:
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/download/v1.6.0/install.yaml

# Weighted traffic shifting requires a traffic provider
# (e.g. Istio, NGINX Ingress, or AWS ALB); Istio is assumed below
Canary rollout definition:
# services/api/rollout.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api
  namespace: production
spec:
  replicas: 10
  selector:
    matchLabels:
      app: api
  # Progressive delivery strategy
  strategy:
    canary:
      steps:
        - setWeight: 10          # 10% traffic to new version
        - pause: {duration: 5m}  # Hold for 5 minutes
        - pause: {}              # Manual approval required
        - setWeight: 50          # 50% traffic if approved
        - pause: {duration: 10m}
        - setWeight: 100         # 100% traffic if healthy
      # Abort (and roll back to stable) if the analysis fails
      analysis:
        templates:
          - templateName: error-rate # AnalysisTemplate, defined separately
        startingStep: 1
      # Traffic management with Istio
      trafficRouting:
        istio:
          virtualService:
            name: api
          destinationRule:
            name: api
            canarySubsetName: canary
            stableSubsetName: stable
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myregistry/api:v2.1.0
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
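The error-rate check referenced by the canary lives in an AnalysisTemplate. A minimal sketch, assuming Prometheus is reachable at the address shown and exposes a hypothetical http_requests_total metric labelled by app and status:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate
  namespace: production
spec:
  metrics:
    - name: error-rate
      interval: 1m
      failureLimit: 3
      # Fail the canary if more than 5% of requests return 5xx
      successCondition: result[0] < 0.05
      provider:
        prometheus:
          address: http://prometheus.monitoring.svc:9090
          query: |
            sum(rate(http_requests_total{app="api",status=~"5.."}[5m]))
            /
            sum(rate(http_requests_total{app="api"}[5m]))
```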
Monitoring canary deployment:
# Watch rollout progress (requires the kubectl argo rollouts plugin)
kubectl argo rollouts get rollout api -n production --watch

# Tabular overview
kubectl argo rollouts list rollouts -n production
NAME  STRATEGY  STATUS  STEP  SET-WEIGHT  READY  DESIRED  UP-TO-DATE  AVAILABLE
api   Canary    Paused  2/6   50          10/10  10       10          10

# Inspect the analysis runs driving the canary
kubectl get analysisruns -n production

# Promote if healthy (resumes past the manual pause step)
kubectl argo rollouts promote api -n production

# Abort if issues are detected (shifts traffic back to the stable version)
kubectl argo rollouts abort api -n production
Production GitOps Deployment Checklist
# deployment/gitops-readiness.yaml
infrastructure:
  - "✓ ArgoCD deployed with 3 replicas"
  - "✓ OIDC authentication configured"
  - "✓ Git repository accessible (GitHub/GitLab)"
  - "✓ Sealed secrets controller running"

git_workflow:
  - "✓ Git branch strategy defined (main=prod, staging=rc)"
  - "✓ Pull request reviews required for production changes"
  - "✓ Signed commits enforced"
  - "✓ Git history immutable (force-push disabled)"

synchronization:
  - "✓ Staging: automatic sync enabled"
  - "✓ Production: manual sync required"
  - "✓ Drift detection: every 3 minutes"
  - "✓ Notifications: Slack/PagerDuty on OutOfSync"

security:
  - "✓ Sealed secrets in place"
  - "✓ RBAC: teams can only sync their own apps"
  - "✓ Image signatures verified"
  - "✓ Audit logs enabled"

progressive_delivery:
  - "✓ Argo Rollouts installed"
  - "✓ Canary deployments tested"
  - "✓ Automated rollback on health failure"
  - "✓ Error budgets documented"
Conclusion
GitOps transforms infrastructure from imperative commands to declarative Git-driven state. ArgoCD makes this practical on Kubernetes. The app-of-apps pattern scales to multi-environment, multi-team deployments. Sealed secrets keep sensitive data safe while keeping all infrastructure code in version control. Progressive delivery with Argo Rollouts catches issues early. The result: deployments become auditable, reproducible, and rollback-safe. Infrastructure changes leave an immutable audit trail. Your cluster is always in sync with Git, or you know immediately when it isn't.