S3 with DynamoDB locking is solid, but I'd add a few operational things I've learned:
Use separate state files per environment and per service. One monolithic state file means one person locks everyone out. I structure mine like terraform/services/{service-name}/{environment}/.
Enable versioning and MFA delete on your S3 bucket. Versioning lets you roll the state file back after someone runs terraform destroy in the wrong workspace, and MFA delete keeps those older state versions from being quietly purged.
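A minimal sketch of the versioning piece in Terraform itself (resource names here are hypothetical; this assumes AWS provider v4+, where versioning is its own resource):

```hcl
# Hypothetical bucket/resource names -- adjust to your setup.
resource "aws_s3_bucket" "tfstate" {
  bucket = "my-tfstate"
}

resource "aws_s3_bucket_versioning" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id

  versioning_configuration {
    status = "Enabled"
    # mfa_delete = "Enabled" requires the bucket owner's root-account MFA,
    # so it's typically turned on once via the AWS CLI rather than managed
    # in Terraform (the provider's optional "mfa" argument exists for this,
    # but most teams keep it out of state).
  }
}
```

One caveat: the bucket holding your state usually lives in a small bootstrap configuration with local or separate state, since the backend can't manage its own bucket.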
Also enforce read-only access for most team members. Use IAM roles so people can only plan, not apply. Approvals go through CI/CD—I use GitHub Actions to run terraform plan, then require manual approval before the workflow runs apply.
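A rough sketch of the plan-only role, assuming hypothetical names (`terraform-plan`, a placeholder account ID) and the AWS-managed ReadOnlyAccess policy as a starting point:

```hcl
# Hypothetical: a role developers assume to run plan, but never apply.
resource "aws_iam_role" "plan" {
  name = "terraform-plan"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::123456789012:root" } # placeholder account
    }]
  })
}

resource "aws_iam_role_policy_attachment" "plan_readonly" {
  role       = aws_iam_role.plan.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```

One gotcha: terraform plan still acquires the state lock by default, so this role also needs write access (GetItem/PutItem/DeleteItem) on the DynamoDB lock table, or developers run plan with -lock=false. The apply role with full write permissions then lives only in CI.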
terraform {
  backend "s3" {
    bucket         = "my-tfstate"
    key            = "services/api/prod/terraform.tfstate"
    region         = "us-east-1" # adjust to your bucket's region
    encrypt        = true        # server-side encryption for state at rest
    dynamodb_table = "terraform-locks"
  }
}
The real win is making it so developers can't accidentally blow up production from their laptop.