Terraform Interview Questions

Secrets Management and Sensitive Values


Your Terraform configuration stores the RDS admin password as a plaintext variable in `terraform.tfvars`. The file is committed to git, accessible to the entire team, and the value shows up in plan output. A security audit fails. How do you properly secure this?

Use AWS Secrets Manager for secret injection:

1) Move the secret to AWS Secrets Manager, storing it as JSON so individual keys can be referenced: `aws secretsmanager create-secret --name rds/admin-password --secret-string '{"password":"complex-password"}'`.
2) Fetch the secret in Terraform: `data "aws_secretsmanager_secret_version" "rds_password" { secret_id = "rds/admin-password" }`, then set `password = jsondecode(data.aws_secretsmanager_secret_version.rds_password.secret_string)["password"]` on the `aws_db_instance`.
3) Remove the plaintext from git: delete the `terraform.tfvars` entry and add the file to `.gitignore` to prevent accidental commits.
4) Mark password variables as sensitive: `variable "admin_password" { sensitive = true }` prevents Terraform from printing the value in logs and plan output.
5) Grant IAM permissions narrowly: the CI role can read from Secrets Manager; developers cannot.
6) Rotate secrets: use a Lambda rotation function with Secrets Manager to auto-rotate every 30 days.
7) Audit: CloudTrail logs all secret access.
8) Validate: `terraform plan` should never show the password value, only `(sensitive value)`.
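The fetch-and-reference pattern above can be sketched as follows (the resource names, engine, and instance sizing are illustrative assumptions, not from the original):

```hcl
# Read the admin password from Secrets Manager instead of terraform.tfvars.
# Assumes the secret was stored as JSON with a "password" key.
data "aws_secretsmanager_secret_version" "rds_password" {
  secret_id = "rds/admin-password"
}

resource "aws_db_instance" "main" {
  identifier        = "main"        # illustrative settings
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "admin"
  password = jsondecode(
    data.aws_secretsmanager_secret_version.rds_password.secret_string
  )["password"]
}
```

Because the value enters the configuration through a data source, it never lives in a tfvars file, though it will still appear in the state file, which is why state encryption (covered below) matters too.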

Follow-up: How would you handle a case where the secret is already compromised and exposed in git history?

You're storing API keys for GitHub, Datadog, and other services in Terraform. Developers need these for provisioning resources but shouldn't have permanent access. Design a secrets management system for this multi-service setup.

Use HashiCorp Vault for centralized secret management:

1) Deploy a Vault cluster (or use the managed HCP Vault service).
2) Store secrets: `vault kv put secret/github token="ghp_xxx"` and `vault kv put secret/datadog api_key="dd_xxx"`.
3) Configure the Vault provider in Terraform: `provider "vault" { address = "https://vault.company.com" namespace = "admin" }`.
4) Fetch secrets: `data "vault_generic_secret" "github" { path = "secret/github" }`, then reference `data.vault_generic_secret.github.data["token"]`.
5) Add access control: Vault policies allow the CI role to read secrets but not modify them; developers authenticate via OIDC or MFA.
6) Rotate secrets: prefer Vault dynamic secrets engines where a service supports them, so credentials are short-lived and each plan/apply picks up a fresh value.
7) Audit: Vault logs all secret access.
8) Mind Vault load: every plan/apply re-reads referenced secrets, so scope data sources to only the secrets a configuration actually needs.
9) Document: show the team how to add new secrets to Vault.
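Steps 3 and 4 wired together might look like this sketch (the Vault address, mount paths, and the downstream provider blocks are assumptions; with a KV v2 mount the read path would include `/data/`):

```hcl
# Pull service API keys from Vault at plan time instead of tfvars.
provider "vault" {
  address = "https://vault.company.com" # assumed Vault endpoint
}

data "vault_generic_secret" "github" {
  path = "secret/github"               # KV v1-style path, as above
}

data "vault_generic_secret" "datadog" {
  path = "secret/datadog"
}

# Downstream providers consume the fetched values directly.
provider "github" {
  token = data.vault_generic_secret.github.data["token"]
}

provider "datadog" {
  api_key = data.vault_generic_secret.datadog.data["api_key"]
}
```

Nothing here is a stored credential: the only long-lived secret is the Vault token used to authenticate the provider, which is what the follow-up question is probing.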

Follow-up: How would you prevent a compromised CI token from accessing all secrets permanently?

Your Terraform state file contains sensitive values (database passwords, API keys) in plaintext. State is stored in S3, but if someone gains S3 access, all secrets are exposed. How do you protect state?

Encrypt state with KMS:

1) Create a KMS key: `aws kms create-key --description terraform-state`.
2) Configure S3 backend encryption: `backend "s3" { bucket = "tf-state" encrypt = true kms_key_id = "arn:aws:kms:us-east-1:123456789:key/abc123" }`.
3) Re-encrypt state: `terraform init -migrate-state` re-uploads the state with KMS encryption.
4) Verify: `aws s3api head-object --bucket tf-state --key prod.tfstate` and check the `ServerSideEncryption` and `SSEKMSKeyId` fields.
5) Add a KMS key policy restricting access: only the CI role and the security team may decrypt.
6) Enable versioning and MFA delete on the bucket to prevent accidental deletion.
7) Use state locking with an encrypted DynamoDB table.
8) Restrict direct S3 access: the IAM policy allows only `s3:GetObject` and `s3:PutObject`, not deletes or reads of old versions.
9) Monitor: CloudTrail logs decrypt operations; alert on suspicious access patterns.
10) For defense in depth: manage the state encryption key through Vault's KMS integration rather than a single static KMS key.
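Steps 2 and 7 as a complete backend block (the bucket, key path, region, lock-table name, and KMS ARN are placeholders):

```hcl
# S3 backend with SSE-KMS encryption and DynamoDB state locking.
terraform {
  backend "s3" {
    bucket         = "tf-state"    # assumed bucket name
    key            = "prod.tfstate"
    region         = "us-east-1"
    encrypt        = true
    kms_key_id     = "arn:aws:kms:us-east-1:123456789:key/abc123"
    dynamodb_table = "tf-locks"    # assumed lock table
  }
}
```

Backend blocks cannot use variables, so the KMS key ARN must be literal here or supplied via `-backend-config` at init time.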

Follow-up: How would you handle an accidental terraform state push that exposed secrets?

You need to pass a database password to a Lambda function that Terraform creates. The password must not appear in: git history, Terraform logs, AWS CloudTrail, function environment variables (readable via API). How do you do this?

Use Lambda environment variables that reference Secrets Manager:

1) Store the secret: `aws secretsmanager create-secret --name lambda/db-password`.
2) In Terraform, pass only the ARN: `resource "aws_lambda_function" "worker" { environment { variables = { SECRETS_MANAGER_ARN = aws_secretsmanager_secret.db_password.arn } } }`.
3) The Lambda code reads the secret at runtime (Python): `import os, json, boto3`, then `secret = json.loads(boto3.client("secretsmanager").get_secret_value(SecretId=os.environ["SECRETS_MANAGER_ARN"])["SecretString"])` and `password = secret["password"]`.
4) Grant the Lambda IAM role permission: `{ "Effect": "Allow", "Action": "secretsmanager:GetSecretValue", "Resource": "arn:..." }`.
5) The password never touches logs: Terraform only ever sees the ARN, and CloudTrail logs `GetSecretValue` calls (not the value itself).
6) Rotate the secret: Secrets Manager auto-rotation changes the password without Lambda code changes.
7) Audit: CloudTrail shows when the Lambda accessed the secret.
8) Validate: confirm via the AWS API that the password does not appear in the Lambda's environment variables.
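Step 3's runtime fetch, written out as a handler sketch. The injectable `secrets_client` parameter is an addition for local testing (not part of the original); inside Lambda the real boto3 client is used:

```python
# Lambda handler that fetches the DB password from Secrets Manager at
# runtime, so it never appears in environment variables or Terraform state.
import json
import os

def get_db_password(secrets_client):
    """Fetch and decode the password; the client is injected so tests
    can supply a fake instead of a real boto3 client."""
    arn = os.environ["SECRETS_MANAGER_ARN"]
    resp = secrets_client.get_secret_value(SecretId=arn)
    return json.loads(resp["SecretString"])["password"]

def handler(event, context, secrets_client=None):
    if secrets_client is None:
        import boto3  # real client inside Lambda; tests inject a fake
        secrets_client = boto3.client("secretsmanager")
    password = get_db_password(secrets_client)
    # ... open the database connection with `password` here ...
    return {"status": "ok"}
```

The dependency injection also answers the follow-up: a unit test passes a stub object exposing `get_secret_value`, so no real credentials are ever hardcoded.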

Follow-up: How would you test this Lambda code without hardcoding credentials?

Terraform outputs sensitive values like database password. Developers run `terraform output` and see plaintext. How do you prevent this while still allowing legitimate programmatic access?

Use sensitive output flags and role-based access:

1) Mark outputs sensitive: `output "db_password" { value = aws_db_instance.main.password sensitive = true }`.
2) `terraform output` now shows `<sensitive>` instead of the plaintext.
3) Terraform logs don't record sensitive outputs.
4) Programmatic access still works: `terraform output -raw db_password` returns the plaintext to anyone who can read the state, so restrict state access to the CI role.
5) Restrict `terraform` CLI access: IAM policies prevent developers from running Terraform against prod backends.
6) Use Terraform Cloud: the UI hides sensitive outputs by default and reveals them only to approved users.
7) For legitimate access, prefer Vault or Secrets Manager over Terraform outputs: Lambdas and apps fetch secrets at runtime.
8) Audit: track access to sensitive outputs via CloudTrail plus custom logging.
9) Document: developers should never need the raw password; all services use IAM roles or retrieve it from Secrets Manager.
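Step 1 as a complete output block (the resource address follows the RDS example earlier in this page):

```hcl
# Hide the password from `terraform output` and plan/apply logs.
# Note: `terraform output -raw db_password` still reveals it to anyone
# with read access to the state file.
output "db_password" {
  value     = aws_db_instance.main.password
  sensitive = true
}
```

`sensitive = true` is a display control, not encryption: the value is stored verbatim in the state file, which is why items 4 and 7 push real consumers toward Secrets Manager or Vault instead.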

Follow-up: How would you prevent developers from accidentally logging sensitive outputs in application code?

Your CI/CD pipeline stores AWS credentials in GitHub Secrets, and Terraform uses these for auth. A developer's GitHub token gets compromised. The attacker uses it to extract the AWS credentials from Secrets, assumes the IAM role, and destroys infrastructure. How do you prevent this?

Use temporary credentials via OIDC, with no stored secrets:

1) Remove the GitHub Secrets that store AWS credentials.
2) Configure OpenID Connect (OIDC): a GitHub Actions -> AWS trust relationship.
3) In GitHub Actions: `uses: aws-actions/configure-aws-credentials@v2` with `role-to-assume: arn:aws:iam::123456789:role/GitHubActionsRole`.
4) AWS STS issues temporary credentials valid for one hour; nothing is stored.
5) Terraform picks up the temporary credentials automatically.
6) Even if a GitHub token is compromised, the attacker cannot assume the AWS role: the role's trust policy verifies that the OIDC token was issued to the expected repository and workflow context.
7) Add IAM restrictions: the role can touch only specific Terraform resources (dev environment only, not prod).
8) Add network restrictions where feasible: for self-hosted runners in known IP ranges, a trust-policy IP condition rejects assume-role calls from elsewhere.
9) Audit: CloudTrail shows every assume-role call and which resources were accessed.
10) Rotation is automatic: credentials expire after an hour and fresh ones are issued on the next job run.
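Steps 2 and 6 expressed in Terraform might look like this sketch (the account ID, repo name, and certificate thumbprint are placeholders to verify against GitHub's current issuer):

```hcl
# GitHub Actions OIDC trust: workflows get short-lived STS credentials,
# so no AWS keys are ever stored in GitHub Secrets.
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"] # verify current value
}

resource "aws_iam_role" "github_actions" {
  name = "GitHubActionsRole"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          # Only this repo/branch may assume the role (assumed repo name).
          "token.actions.githubusercontent.com:sub" = "repo:my-org/infra:ref:refs/heads/main"
        }
      }
    }]
  })
}
```

The `sub` condition is the key control: a stolen GitHub token cannot mint an OIDC token claiming this repository and branch, so the assume-role call fails.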

Follow-up: How would you handle a situation where prod deployments still need credentials but you want to eliminate shared secrets?

You're using Terraform to provision Kubernetes secrets (database passwords, API keys). These are stored in etcd unencrypted. Cluster admin can dump etcd and read all secrets. Design secure secret handling in K8s.

Use Kubernetes secrets backed by external secret storage:

1) Don't put secret values directly into etcd via Terraform. Instead, have Terraform create a reference pointing at the external store (e.g. an External Secrets Operator `ExternalSecret` naming a Vault path), never the value itself.
2) Deploy the External Secrets Operator in the cluster: it watches the Terraform-created references and fetches the actual secrets from Vault at sync time.
3) In Terraform, use the Vault provider to create the secrets; the Kubernetes objects only reference them.
4) Enable Kubernetes encryption at rest: `kube-apiserver --encryption-provider-config` with a KMS backend, so Secrets in etcd are encrypted.
5) RBAC: restrict who can read Secret objects. Remember that base64 is an encoding, not encryption: anyone who can run `kubectl get secrets` effectively has plaintext.
6) Audit: log all secret access via Kubernetes audit logs plus Vault's audit log.
7) Rotate: the External Secrets Operator syncs Vault changes into Kubernetes automatically.
8) For GitOps workflows, consider Sealed Secrets or SOPS-encrypted manifests instead of plain Secrets.
9) Document: developers should never create Kubernetes secrets with inline values via Terraform; always go through the External Secrets Operator.
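Step 1's reference object might be sketched like this (assumes the External Secrets Operator is installed and a `SecretStore` named "vault" already exists; the names and paths are made up):

```hcl
# ExternalSecret: tells the operator to sync a Vault value into a
# Kubernetes Secret, so Terraform never handles the plaintext.
resource "kubernetes_manifest" "db_password" {
  manifest = {
    apiVersion = "external-secrets.io/v1beta1"
    kind       = "ExternalSecret"
    metadata = {
      name      = "db-password"
      namespace = "default"
    }
    spec = {
      refreshInterval = "1h"
      secretStoreRef = {
        name = "vault"          # assumed SecretStore name
        kind = "SecretStore"
      }
      target = { name = "db-password" }
      data = [{
        secretKey = "password"
        remoteRef = { key = "secret/db-password", property = "password" }
      }]
    }
  }
}
```

With this design the Terraform state contains only the Vault path, and the `refreshInterval` gives automatic rotation, which is what the follow-up about not breaking running pods hinges on.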

Follow-up: How would you ensure rotation of K8s secrets doesn't break running pods?

A developer commits a file with database credentials to git. It's caught before merge, but the commit is already in local history. How do you remove it and prevent recurrence?

Rotate first, then scrub history and add guardrails with git-secrets:

1) Rotate the compromised credentials immediately: `aws secretsmanager put-secret-value --secret-id db-password --secret-string "new-password"`.
2) Remove the file from git history: `git filter-repo --invert-paths --path sensitive-file.txt`, or with the older built-in tooling `git filter-branch --force --tree-filter 'rm -f sensitive-file.txt' HEAD`.
3) Force push: `git push --force-with-lease` (carefully; coordinate with the team, who must re-clone or rebase).
4) Clean up local references: delete filter-branch's `refs/original/*` backup refs, then `git reflog expire --expire=now --all && git gc --aggressive --prune=now`.
5) Install the git-secrets hooks: `git secrets --install && git secrets --register-aws` to detect AWS credentials at commit time.
6) Run an initial scan: `git secrets --scan-history` on all branches.
7) Pre-commit hook: `.git/hooks/pre-commit` runs git-secrets before allowing commits.
8) Add to `.gitignore`: `*.tfvars`, `*.env`, and any other files that may hold credentials.
9) Educate: explain why credentials don't belong in git and point the team to Secrets Manager.
10) Document: CONTRIBUTING.md shows the proper secret-handling patterns.
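The history-rewrite steps can be rehearsed safely in a throwaway repo. This sketch uses `git filter-branch` since it ships with git; the file and secret names are made up:

```shell
set -e
# Build a sandbox repo with a committed credentials file.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo
echo 'db_password = "hunter2"' > creds.tfvars
git add creds.tfvars && git commit -qm "oops: commit credentials"
echo ok > app.txt && git add app.txt && git commit -qm "normal work"

# Rewrite every commit, deleting the credentials file from each tree.
git filter-branch --force --tree-filter 'rm -f creds.tfvars' HEAD >/dev/null 2>&1

# Drop filter-branch's backup refs and the reflog, then garbage-collect
# so the old blobs become unreachable and are pruned.
git for-each-ref --format='%(refname)' refs/original | xargs -n 1 git update-ref -d
git reflog expire --expire=now --all
git gc --prune=now --aggressive >/dev/null 2>&1

# Verify: the secret string is gone from every reachable commit.
if git grep -q hunter2 $(git rev-list --all); then
  echo "secret still present"
else
  echo "history clean"
fi
```

The final `git grep` across `git rev-list --all` is also the answer to the follow-up: if no reachable commit contains the string, the rewrite worked (the remote still needs the force push, and forks or clones may retain copies).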

Follow-up: How would you verify the credentials are truly removed from git history?
