Terraform Modules at Scale — Reusable, Versioned Infrastructure Components

Author: Sanjeev Sharma (@webcoderspeed1)

Introduction
Copy-paste infrastructure (duplicating 200 lines of HCL across five projects) doesn't scale. A Terraform module is an abstraction layer: input variables, outputs, and an implementation the caller never touches. Versioned modules in registries let teams standardize infrastructure while each project customizes via module inputs. At scale, modules eliminate toil.
- When to Create a Module vs Use a Provider Resource
- Module Structure (inputs/outputs/main/versions)
- Versioning Modules in a Private Registry
- Module Composition Patterns
- Testing Modules With Terratest
- OpenTofu (Terraform Fork) in 2026
- Terraform Provider Aliases for Multi-Account/Multi-Region
- State Management at Scale (Workspaces vs Directories)
- Terragrunt for DRY Terraform
- Migrating From Terragrunt to Native Terraform Stacks
- Checklist
- Conclusion
When to Create a Module vs Use a Provider Resource
Module-worthy patterns:
- Infrastructure composition: VPC (requires subnets, route tables, NAT gateways, security groups)
- Organizational standards: "All RDS clusters must have encryption, automated backups, and monitoring"
- Repeated patterns: Three identical microservices with different names and scale
- Compliance: PCI-DSS databases require encryption, private subnets, and audit logging
Non-module-worthy:
- Single AWS resource (EC2 instance) with no defaults or logic
- Trivial provider abstractions (using aws_s3_bucket directly is fine)
- One-off infrastructure unique to a single project
Module decision: "Would I copy this code to a second project?" If yes, it's module-worthy.
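The heuristic in practice: the second project consumes the module instead of the copied HCL. A sketch (the module path and names below are illustrative):

```hcl
# Before: ~200 lines of copied aws_rds_cluster + KMS + monitoring HCL per project.
# After: one module call per project, with the standards baked into the module.
module "db" {
  source             = "../modules/rds-aurora"
  cluster_identifier = "project-two-db"
  database_name      = "appdb"
}
```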
Module Structure (inputs/outputs/main/versions)
Standard module layout:
modules/rds-aurora/
├── main.tf          # Core resources
├── variables.tf     # Input variables
├── outputs.tf       # Output values
├── versions.tf      # Provider versions
├── README.md        # Documentation
└── examples/
    └── complete/    # Full example usage
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
variables.tf:
variable "cluster_identifier" {
  description = "RDS cluster identifier"
  type        = string
}

variable "database_name" {
  description = "Database name"
  type        = string
}

variable "instance_class" {
  description = "RDS instance class"
  type        = string
  default     = "db.r6g.large"
}

variable "backup_retention_days" {
  description = "Backup retention period"
  type        = number
  default     = 30
}

variable "enable_enhanced_monitoring" {
  description = "Enable enhanced CloudWatch monitoring"
  type        = bool
  default     = true
}

variable "tags" {
  description = "Tags to apply to resources"
  type        = map(string)
  default     = {}
}
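Input variables can also reject bad values at plan time with validation blocks. A sketch of how the instance_class variable above could be hardened (the regex is illustrative):

```hcl
variable "instance_class" {
  description = "RDS instance class"
  type        = string
  default     = "db.r6g.large"

  # Fail fast at plan time instead of at the AWS API
  validation {
    condition     = can(regex("^db\\.", var.instance_class))
    error_message = "instance_class must be an RDS class beginning with \"db.\"."
  }
}
```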
main.tf:
resource "aws_rds_cluster" "default" {
  cluster_identifier      = var.cluster_identifier
  database_name           = var.database_name
  engine                  = "aurora-postgresql"
  engine_version          = "15.2"
  master_username         = "postgres"
  master_password         = random_password.db_password.result
  backup_retention_period = var.backup_retention_days
  storage_encrypted       = true
  kms_key_id              = aws_kms_key.rds_key.arn
  skip_final_snapshot     = false
  # Keep this static: interpolating timestamp() here would change on every
  # plan and force a perpetual diff on the cluster
  final_snapshot_identifier = "${var.cluster_identifier}-final-snapshot"

  enabled_cloudwatch_logs_exports = ["postgresql"]

  tags = merge(
    var.tags,
    {
      Name   = var.cluster_identifier
      Module = "rds-aurora"
    }
  )
}

resource "aws_rds_cluster_instance" "instance" {
  count               = 2
  identifier          = "${var.cluster_identifier}-instance-${count.index + 1}"
  cluster_identifier  = aws_rds_cluster.default.id
  instance_class      = var.instance_class
  engine              = aws_rds_cluster.default.engine
  engine_version      = aws_rds_cluster.default.engine_version
  publicly_accessible = false
  # Enhanced monitoring requires a monitoring role whenever the interval is non-zero
  monitoring_interval = var.enable_enhanced_monitoring ? 60 : 0
  monitoring_role_arn = var.enable_enhanced_monitoring ? aws_iam_role.rds_monitoring[0].arn : null
  tags                = var.tags
}

resource "aws_iam_role" "rds_monitoring" {
  count = var.enable_enhanced_monitoring ? 1 : 0
  name  = "${var.cluster_identifier}-rds-monitoring"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "monitoring.rds.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "rds_monitoring" {
  count      = var.enable_enhanced_monitoring ? 1 : 0
  role       = aws_iam_role.rds_monitoring[0].name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole"
}

resource "random_password" "db_password" {
  length  = 32
  special = true
}

resource "aws_kms_key" "rds_key" {
  description             = "KMS key for RDS encryption"
  deletion_window_in_days = 10
  enable_key_rotation     = true
  tags                    = var.tags
}

resource "aws_kms_alias" "rds_key_alias" {
  name          = "alias/${var.cluster_identifier}"
  target_key_id = aws_kms_key.rds_key.key_id
}
outputs.tf:
output "cluster_endpoint" {
  description = "RDS cluster endpoint"
  value       = aws_rds_cluster.default.endpoint
}

output "reader_endpoint" {
  description = "RDS reader endpoint"
  value       = aws_rds_cluster.default.reader_endpoint
}

output "cluster_id" {
  description = "RDS cluster ID"
  value       = aws_rds_cluster.default.id
}

output "database_password_secret" {
  description = "Database password (store in Secrets Manager)"
  value       = random_password.db_password.result
  sensitive   = true
}

output "kms_key_id" {
  description = "KMS key ID for encryption"
  value       = aws_kms_key.rds_key.id
}
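Rather than handing the password to every caller through an output, the module can write it to Secrets Manager itself. A sketch using the resources already defined above (the secret name is illustrative):

```hcl
resource "aws_secretsmanager_secret" "db_password" {
  name       = "${var.cluster_identifier}-master-password"
  kms_key_id = aws_kms_key.rds_key.arn
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id     = aws_secretsmanager_secret.db_password.id
  secret_string = random_password.db_password.result
}
```

Note that even with sensitive = true, an output's value still lands in the Terraform state file in plaintext, so storing the secret server-side is the safer pattern.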
versions.tf:
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.30"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}
Versioning Modules in a Private Registry
Store versioned modules in Terraform Cloud, GitHub, or a private S3 registry:
Terraform Cloud registry (recommended):
# Authenticate to Terraform Cloud
terraform login

# Publishing: connect a VCS repo named terraform-<PROVIDER>-<NAME>
# (e.g. terraform-aws-aurora) to the private registry, then cut
# releases with semver tags (vMAJOR.MINOR.PATCH)
git tag v1.0.0
git push origin v1.0.0

# Use the published module
module "rds_aurora" {
  source  = "app.terraform.io/my-org/aurora/aws"
  version = "1.0.0"

  cluster_identifier = "prod-db"
  database_name      = "appdb"
  instance_class     = "db.r6g.xlarge"
}
Self-hosted S3 source:
# Store modules as versioned archives in S3
# (a generic module source, not a browsable registry — no version argument)
module "rds_aurora" {
  # Note the s3:: prefix with an HTTPS object URL; plain s3:// is not a valid source
  source = "s3::https://s3.amazonaws.com/terraform-modules/rds-aurora/1.0.0.tar.gz"

  cluster_identifier = "prod-db"
  database_name      = "appdb"
}
Versioning prevents surprises: upgrading from 1.0.0 to 1.1.0 is intentional, not automatic.
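Pessimistic version constraints make the upgrade policy explicit in the caller. A sketch:

```hcl
module "rds_aurora" {
  source = "app.terraform.io/my-org/aurora/aws"
  # "~> 1.0" allows any 1.x release but never 2.0.0;
  # "~> 1.0.4" would allow only 1.0.x patch releases.
  version = "~> 1.0"

  cluster_identifier = "prod-db"
  database_name      = "appdb"
}
```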
Module Composition Patterns
Layer modules for complex infrastructure:
# Tier 1: Low-level modules (components)
#   - modules/security-group
#   - modules/rds-aurora
#   - modules/elasticache

# Tier 2: Mid-level modules (systems)
#   - modules/microservice (combines app, RDS, cache)
#   - modules/data-pipeline (combines Lambda, SQS, S3)

# Tier 3: Top-level module (platform)
#   - modules/platform (combines VPC, microservices, databases)

# Usage:
module "api_service" {
  source  = "app.terraform.io/my-org/microservice/aws"
  version = "2.1.0"

  name           = "user-api"
  vpc_id         = aws_vpc.main.id
  instance_count = 3

  database_config = {
    engine         = "postgresql"
    instance_class = "db.r6g.large"
  }
}

module "worker_service" {
  source  = "app.terraform.io/my-org/microservice/aws"
  version = "2.1.0"

  name            = "background-worker"
  vpc_id          = aws_vpc.main.id
  instance_count  = 2
  enable_database = false
}
Composition eliminates duplication across similar services.
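Inside a mid-level module, composition is just nested module calls. A sketch of what the microservice module's internals might look like (all names are illustrative):

```hcl
# modules/microservice/main.tf (hypothetical internals)
module "security_group" {
  source = "../security-group"
  name   = var.name
  vpc_id = var.vpc_id
}

module "database" {
  source = "../rds-aurora"
  # count lets callers opt out via enable_database = false
  count              = var.enable_database ? 1 : 0
  cluster_identifier = "${var.name}-db"
  database_name      = replace(var.name, "-", "_")
  instance_class     = var.database_config.instance_class
}
```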
Testing Modules With Terratest
Unit test modules before publishing:
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

func TestRDSAuroraModule(t *testing.T) {
	terraformOptions := &terraform.Options{
		TerraformDir: "../examples/complete",
		Vars: map[string]interface{}{
			"cluster_identifier": "test-db",
			"database_name":      "testdb",
		},
	}

	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	clusterEndpoint := terraform.Output(t, terraformOptions, "cluster_endpoint")
	assert.NotEmpty(t, clusterEndpoint)

	clusterId := terraform.Output(t, terraformOptions, "cluster_id")
	assert.Contains(t, clusterId, "test-db")
}

func TestRDSEncryption(t *testing.T) {
	terraformOptions := &terraform.Options{
		TerraformDir: "../examples/complete",
	}

	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	// Verify KMS encryption is enabled
	kmsKeyId := terraform.Output(t, terraformOptions, "kms_key_id")
	assert.NotEmpty(t, kmsKeyId)
}
Run tests (Terratest provisions real AWS resources, so allow a long timeout):
cd test && go test -v -timeout 45m
OpenTofu (Terraform Fork) in 2026
OpenTofu (open-source Terraform fork) is production-ready:
# Install OpenTofu (drop-in Terraform replacement)
brew install opentofu
# Use instead of terraform
tofu init
tofu plan
tofu apply
Benefits:
- MPL-2.0 licensed (Terraform itself moved to the source-available BUSL in 2023)
- No Terraform Cloud lock-in (state stored in S3 or any backend of choice)
- Community-driven releases
- Module and provider registry at registry.opentofu.org
Drawback: a smaller third-party ecosystem, though existing Terraform modules and providers generally work unchanged.
Terraform Provider Aliases for Multi-Account/Multi-Region
Deploy to multiple AWS accounts and regions with provider aliases:
provider "aws" {
  alias  = "primary"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/TerraformRole"
  }
}

provider "aws" {
  alias  = "replica"
  region = "eu-west-1"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/TerraformRole"
  }
}

module "primary_db" {
  source  = "app.terraform.io/my-org/aurora/aws"
  version = "1.0.0"

  providers = { aws = aws.primary }

  cluster_identifier = "prod-db-primary"
}

module "replica_db" {
  source  = "app.terraform.io/my-org/aurora/aws"
  version = "1.0.0"

  providers = { aws = aws.replica }

  cluster_identifier = "prod-db-replica"
}
Aliases allow a single Terraform configuration (and state file) to manage resources across accounts and regions.
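When a module itself needs more than one provider configuration (for example, to create a cross-region replica internally), it declares the aliases it expects via configuration_aliases. A sketch:

```hcl
# Inside the module's versions.tf
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 5.30"
      configuration_aliases = [aws.primary, aws.replica]
    }
  }
}

# Callers must then pass both configurations:
# providers = { aws.primary = aws.primary, aws.replica = aws.replica }
```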
State Management at Scale (Workspaces vs Directories)
Workspaces (not recommended):
# Multiple states in one backend
terraform workspace new staging
terraform workspace new production
terraform apply # Applies to current workspace
Drawback: Easy to accidentally apply to wrong workspace.
Directory-per-environment (recommended):
infrastructure/
├── environments/
│   ├── staging/
│   │   ├── main.tf
│   │   ├── terraform.tfvars
│   │   └── backend.tf
│   └── production/
│       ├── main.tf
│       ├── terraform.tfvars
│       └── backend.tf
└── modules/
Each environment has isolated state:
cd environments/production
terraform apply # Applies only to production
Directory-per-environment is safer for teams.
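Each environment directory pins its own backend, so state can never leak across environments. A sketch (bucket and table names are placeholders):

```hcl
# environments/production/backend.tf
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }
}
```

The staging directory carries the same block with key = "staging/terraform.tfstate", so an apply in one directory physically cannot touch the other environment's state.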
Terragrunt for DRY Terraform
Terragrunt reduces HCL duplication:
# terragrunt.hcl
terraform {
  source = "git::https://github.com/my-org/terraform-modules.git//rds-aurora?ref=v1.0.0"
}

inputs = {
  backup_retention_days = 30
  instance_class        = "db.r6g.large"
}

remote_state {
  backend = "s3"

  config = {
    bucket         = "my-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }
}
Usage:
# Deploy all environments with one command
cd infrastructure
terragrunt run-all apply
# Respects dependencies automatically
# (Outputs from modules feed into inputs of dependent modules)
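The dependency wiring is expressed with dependency blocks, which read another unit's outputs and feed them in as inputs. A sketch (the paths and output names are illustrative):

```hcl
# environments/production/rds/terragrunt.hcl
dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.private_subnet_ids
}
```

run-all uses these blocks to build the dependency graph, applying ../vpc before this unit.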
Terragrunt enables infrastructure-as-code at massive scale.
Migrating From Terragrunt to Native Terraform Stacks
Terraform Stacks (HashiCorp's native composition layer, delivered through HCP Terraform with CLI support arriving in recent Terraform releases) declare shared infrastructure once and deploy it per environment:

# deployments.tfdeploy.hcl (Stacks syntax sketch; requires HCP Terraform)
deployment "staging" {
  inputs = {
    environment    = "staging"
    instance_count = 2
  }
}

deployment "production" {
  inputs = {
    environment    = "production"
    instance_count = 5
  }
}

The infrastructure itself lives as component blocks (module calls) in a companion stack configuration file, and plans and applies for each deployment are orchestrated through HCP Terraform rather than a plain terraform apply.
Stacks aim to replace Terragrunt for multi-environment deployments and are native to Terraform, but they are tied to HCP Terraform; Terragrunt remains the fully open-source option.
Checklist
- Identify repeatable infrastructure patterns (candidates for modules)
- Design module inputs/outputs following Terraform conventions
- Add comprehensive documentation and examples
- Test modules with Terratest before publishing
- Version modules in Terraform Cloud or private registry
- Create module composition patterns for complex systems
- Use provider aliases for multi-account/multi-region deployments
- Implement directory-per-environment state strategy
- Set up remote state with S3 backend and DynamoDB locking
- Document module versioning strategy for team
- Evaluate Terragrunt or native Terraform stacks for multi-environment orchestration
- Plan migration path to OpenTofu if avoiding Terraform Cloud
Conclusion
Terraform modules transform infrastructure from one-off scripts into reusable, versioned components. Module composition eliminates duplication across projects. Testing and versioning ensure reliability. At scale, module-driven infrastructure enables teams to standardize while remaining flexible. Combined with stacks or Terragrunt, Terraform modules form the foundation of enterprise infrastructure-as-code.