Terraform Interview Questions

Dynamic Blocks and for_each Patterns


You're creating a security group with rules: HTTP (80), HTTPS (443), SSH (22), and custom ports 3000, 5432. Currently using separate `resource "aws_security_group_rule"` for each. This becomes 50+ resources across modules. How do you consolidate?

Use `for_each` with the rules expressed as a map:

1) Define the rules: `variable "ingress_rules" { type = map(object({ from_port = number, to_port = number, protocol = string })) }` with defaults covering every port from the requirements: `http` (80), `https` (443), `ssh` (22), `app` (3000), and `db` (5432).
2) Create the rules: `resource "aws_security_group_rule" "ingress" { for_each = var.ingress_rules security_group_id = aws_security_group.main.id type = "ingress" from_port = each.value.from_port to_port = each.value.to_port protocol = each.value.protocol cidr_blocks = ["0.0.0.0/0"] }`.
3) Each rule gets a stable address: `aws_security_group_rule.ingress["http"]`, `ingress["https"]`, and so on.
4) To add a rule, add an entry to the map; to remove one, delete the entry.
5) Validate: `terraform plan` shows only the delta.
6) For different ports per environment, override in `terraform.tfvars`: `ingress_rules = { http = {...} staging_port = { from_port = 8080, to_port = 8080, protocol = "tcp" } }`.
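Put together, the steps above look like this in full HCL (a minimal sketch; the security group `aws_security_group.main` is assumed to be defined elsewhere in the configuration):

```hcl
variable "ingress_rules" {
  type = map(object({
    from_port = number
    to_port   = number
    protocol  = string
  }))
  default = {
    http  = { from_port = 80, to_port = 80, protocol = "tcp" }
    https = { from_port = 443, to_port = 443, protocol = "tcp" }
    ssh   = { from_port = 22, to_port = 22, protocol = "tcp" }
    app   = { from_port = 3000, to_port = 3000, protocol = "tcp" }
    db    = { from_port = 5432, to_port = 5432, protocol = "tcp" }
  }
}

resource "aws_security_group_rule" "ingress" {
  for_each          = var.ingress_rules
  security_group_id = aws_security_group.main.id
  type              = "ingress"
  from_port         = each.value.from_port
  to_port           = each.value.to_port
  protocol          = each.value.protocol
  cidr_blocks       = ["0.0.0.0/0"] # tighten in production; see the follow-up on CIDR blocks
}
```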

Follow-up: How would you handle rules with different CIDR blocks (some public, some private) using the same structure?

You have a VPC with 10 subnets. Each needs a route table with routes to internet gateway, NAT gateway, and VPC endpoints. Writing 10 separate resource blocks is tedious. Design a scalable approach.

Use `for_each` over a subnet map, keying the companion resources the same way:

1) Define the subnets: `variable "subnets" { type = map(object({ cidr_block = string, availability_zone = string })) }` with defaults like `public_1 = { cidr_block = "10.0.1.0/24", availability_zone = "us-east-1a" }`, `public_2 = { cidr_block = "10.0.2.0/24", availability_zone = "us-east-1b" }`, and so on.
2) Create the subnets: `resource "aws_subnet" "main" { for_each = var.subnets vpc_id = aws_vpc.main.id cidr_block = each.value.cidr_block availability_zone = each.value.availability_zone }`.
3) Create the route tables: `resource "aws_route_table" "main" { for_each = var.subnets vpc_id = aws_vpc.main.id }`.
4) Associate them: `resource "aws_route_table_association" "main" { for_each = var.subnets subnet_id = aws_subnet.main[each.key].id route_table_id = aws_route_table.main[each.key].id }`.
5) Add routes: `resource "aws_route" "igw" { for_each = var.subnets route_table_id = aws_route_table.main[each.key].id destination_cidr_block = "0.0.0.0/0" gateway_id = aws_internet_gateway.main.id }`.
6) All 10 subnets, with their route tables, associations, and routes, come from one resource block each.
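The full pattern, as a sketch (assuming `aws_vpc.main` and `aws_internet_gateway.main` exist elsewhere; only two of the ten subnets are shown in the default):

```hcl
variable "subnets" {
  type = map(object({
    cidr_block        = string
    availability_zone = string
  }))
  default = {
    public_1 = { cidr_block = "10.0.1.0/24", availability_zone = "us-east-1a" }
    public_2 = { cidr_block = "10.0.2.0/24", availability_zone = "us-east-1b" }
  }
}

resource "aws_subnet" "main" {
  for_each          = var.subnets
  vpc_id            = aws_vpc.main.id
  cidr_block        = each.value.cidr_block
  availability_zone = each.value.availability_zone
}

resource "aws_route_table" "main" {
  for_each = var.subnets
  vpc_id   = aws_vpc.main.id
}

# each.key links subnet, route table, and association together.
resource "aws_route_table_association" "main" {
  for_each       = var.subnets
  subnet_id      = aws_subnet.main[each.key].id
  route_table_id = aws_route_table.main[each.key].id
}

resource "aws_route" "igw" {
  for_each               = var.subnets
  route_table_id         = aws_route_table.main[each.key].id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.main.id
}
```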

Follow-up: How would you differentiate between public and private subnets in the same for_each loop?

You're provisioning 20 microservices. Each needs an IAM role, a security group, a Lambda function, and an API Gateway endpoint. Hardcoding each one means 80+ resources. Design a module-based approach with `for_each`.

Use `for_each` at the module level:

1) Define the services: `variable "services" { type = map(object({ memory = number, timeout = number, handler = string })) default = { auth = { memory = 512, timeout = 10, handler = "index.handler" } api = { memory = 1024, timeout = 30, handler = "app.main" } ... } }`.
2) Instantiate the module per service: `module "service" { for_each = var.services source = "./modules/microservice" service_name = each.key service_config = each.value environment = var.environment }`.
3) Each module instance creates the IAM role, security group, Lambda, and API Gateway endpoint.
4) Access outputs in the root module: `module.service["auth"].lambda_arn`.
5) Consolidate outputs: `output "service_endpoints" { value = { for k, v in module.service : k => v.api_endpoint } }`.
6) Adding a service: add to the map. Removing: delete from the map. Terraform handles all four resources automatically.
7) Validate: `terraform state list | grep module.service` shows all service modules.
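A root-module sketch of the pattern (the module interface (`service_name`, `service_config`, `environment`, `api_endpoint`) follows the answer above; the `./modules/microservice` internals are omitted):

```hcl
variable "environment" {
  type = string
}

variable "services" {
  type = map(object({
    memory  = number
    timeout = number
    handler = string
  }))
  default = {
    auth = { memory = 512, timeout = 10, handler = "index.handler" }
    api  = { memory = 1024, timeout = 30, handler = "app.main" }
  }
}

# One module instance per service; each creates an IAM role,
# security group, Lambda function, and API Gateway endpoint.
module "service" {
  for_each       = var.services
  source         = "./modules/microservice"
  service_name   = each.key
  service_config = each.value
  environment    = var.environment
}

output "service_endpoints" {
  value = { for k, v in module.service : k => v.api_endpoint }
}
```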

Follow-up: How would you handle services with different architectural requirements (some need databases, some don't)?

You're defining security group rules dynamically. Some rules are always needed (SSH, monitoring); others are optional based on environment (debug ports in dev only). A single static map is inflexible. Design conditional rule sets.

Combine `for_each` with `merge` and conditionals:

1) Define base rules: `locals { base_rules = { ssh = { from_port = 22, ... }, monitoring = { from_port = 9090, ... } } }`.
2) Define optional rules: `locals { debug_rules = var.environment == "dev" ? { debug = { from_port = 8000, ... } } : {} }`.
3) Merge: `locals { all_rules = merge(local.base_rules, local.debug_rules) }`.
4) Apply: `resource "aws_security_group_rule" "ingress" { for_each = local.all_rules ... }`.
5) For service-specific rules: `locals { service_rules = { auth = { additional = { from_port = 6379, ... } }, api = {} } }`.
6) In the module: `locals { final_rules = merge(local.base_rules, lookup(local.service_rules, var.service_name, {})) }`.
7) For finer control, filter inside `for_each`: `for_each = { for rule_name, rule_config in local.all_rules : rule_name => rule_config if contains(var.enabled_rules, rule_name) }`.
8) Test the logic in `terraform console` to verify the merged results.
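A sketch of the merge-and-filter pattern (assumptions: `aws_security_group.main` exists, and `var.enabled_rules` lists the rule names to keep; the optional rules are built with a `for`/`if` expression rather than a bare ternary, which sidesteps Terraform's requirement that both branches of a conditional have consistent types):

```hcl
variable "environment" {
  type = string
}

variable "enabled_rules" {
  type = list(string)
}

locals {
  base_rules = {
    ssh        = { from_port = 22, to_port = 22, protocol = "tcp" }
    monitoring = { from_port = 9090, to_port = 9090, protocol = "tcp" }
  }

  # Kept only when the environment is dev; empty map otherwise.
  debug_rules = {
    for name, rule in {
      debug = { from_port = 8000, to_port = 8000, protocol = "tcp" }
    } : name => rule if var.environment == "dev"
  }

  all_rules = merge(local.base_rules, local.debug_rules)
}

resource "aws_security_group_rule" "ingress" {
  # Filter the merged map down to explicitly enabled rules.
  for_each = {
    for name, rule in local.all_rules : name => rule
    if contains(var.enabled_rules, name)
  }
  security_group_id = aws_security_group.main.id
  type              = "ingress"
  from_port         = each.value.from_port
  to_port           = each.value.to_port
  protocol          = each.value.protocol
  cidr_blocks       = ["0.0.0.0/0"]
}
```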

Follow-up: How would you handle rules that depend on external data (e.g., team-specific ports)?

You use `dynamic` blocks to create security group ingress rules. But now you need to add egress rules without duplicating the dynamic block. The structures are similar but not identical. How do you reuse logic?

Factor the shared rule shape into one variable and reuse it:

1) Both directions share the same attributes (`from_port`, `to_port`, `protocol`, `cidr_blocks`), so define the shape once: `variable "firewall_rules" { type = object({ ingress = list(...), egress = list(...) }) }`.
2) Ingress: `dynamic "ingress" { for_each = var.firewall_rules.ingress content { from_port = ingress.value.from_port to_port = ingress.value.to_port protocol = ingress.value.protocol cidr_blocks = ingress.value.cidr_blocks } }`.
3) Egress: `dynamic "egress" { for_each = var.firewall_rules.egress content { ... } }` with the same attribute mapping.
4) To go further, extract the security group (with both dynamic blocks) into a module so the mapping logic lives in one place.
5) Or drop `dynamic` entirely in favor of separate `aws_security_group_rule` resources with `for_each`, which is often simpler than nested dynamic blocks.
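The combined-variable approach from the answer above, sketched out (the `name` and `vpc_id` arguments are illustrative):

```hcl
variable "firewall_rules" {
  type = object({
    ingress = list(object({
      from_port   = number
      to_port     = number
      protocol    = string
      cidr_blocks = list(string)
    }))
    egress = list(object({
      from_port   = number
      to_port     = number
      protocol    = string
      cidr_blocks = list(string)
    }))
  })
}

resource "aws_security_group" "main" {
  name   = "app"
  vpc_id = aws_vpc.main.id

  dynamic "ingress" {
    for_each = var.firewall_rules.ingress
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
    }
  }

  # Same shape, same mapping; only the block name changes.
  dynamic "egress" {
    for_each = var.firewall_rules.egress
    content {
      from_port   = egress.value.from_port
      to_port     = egress.value.to_port
      protocol    = egress.value.protocol
      cidr_blocks = egress.value.cidr_blocks
    }
  }
}
```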

Follow-up: When would you choose dynamic blocks over for_each?

You're using `for_each` to create 50 resources. During refactoring, you rename the map keys. Because keys are used as resource addresses, renaming causes Terraform to plan a delete/recreate for all 50 resources. How do you rename safely?

Use `terraform state mv` to remap addresses so Terraform keeps tracking the same objects under the new keys:

1) Back up state first: `terraform state pull > backup.tfstate`.
2) Rename the keys in HCL.
3) Run `terraform plan` to see the planned delete/recreate pairs.
4) For each pair, remap the old address to the new one: if key `a` became `c`, run `terraform state mv 'aws_instance.main["a"]' 'aws_instance.main["c"]'`.
5) After each move, run `terraform plan` again until it shows zero changes.
6) Document the renames in git: state moves are manual operations that leave no trace in HCL history.
7) Prevent the problem: choose stable, meaningful map keys (names or IDs) from the start so refactoring rarely forces a rename.
8) Note that merely reordering entries in a map is harmless: Terraform maps are unordered, so only key renames trigger recreation. The positional version of this problem afflicts `count` over a list, where inserting or removing an item shifts every later `count.index`; prefer `for_each` with stable keys for collections that change.
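Since Terraform 1.1, the same remapping can be declared in code with `moved` blocks, which are reviewed and applied like any other change instead of being run as one-off CLI commands (the `aws_instance.main` addresses and keys here are the hypothetical ones from the example above):

```hcl
# Tells Terraform the object previously tracked at key "a"
# is now tracked at key "c", so no destroy/recreate is planned.
moved {
  from = aws_instance.main["a"]
  to   = aws_instance.main["c"]
}
```

Once `terraform apply` has processed the move, the `moved` block can be kept for history or removed.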

Follow-up: How would you detect and warn about this reordering risk automatically?

You're using `for_each` with a complex map based on JSON loaded from file. Sometimes the JSON is invalid, causing cryptic Terraform errors. How do you validate and debug?

Add validation at multiple levels:

1) Load the JSON: `locals { services_json = jsondecode(file("${path.module}/services.json")) }`.
2) Validate the schema with a `validation` block: `variable "services_map" { type = map(object({ ... })) validation { condition = alltrue([ for k, v in var.services_map : can(regex("^[a-z0-9-]+$", k)) ]) error_message = "Service names must be lowercase alphanumeric with hyphens." } }`.
3) Debug in `terraform console`: evaluate `local.services_json` to inspect the parsed JSON.
4) Add a debug output: `output "debug_services" { value = local.services_json }`.
5) Check syntax before Terraform ever runs: `jq . services.json`.
6) Add a pre-commit hook (`jq . services.json > /dev/null`) so invalid JSON never reaches the repo.
7) For complex schemas, use an external schema validator.
8) Document the convention so the whole team knows all JSON must pass the jq check before Terraform runs.
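A sketch combining the load and the validation (the `memory`/`timeout` attributes are illustrative assumptions about the JSON schema; the key-name rule comes from the answer above):

```hcl
locals {
  # Fails fast with a jsondecode error if the file is not valid JSON.
  services_json = jsondecode(file("${path.module}/services.json"))
}

variable "services_map" {
  type = map(object({
    memory  = number
    timeout = number
  }))

  validation {
    condition = alltrue([
      for name, svc in var.services_map : can(regex("^[a-z0-9-]+$", name))
    ])
    error_message = "Service names must be lowercase alphanumeric with hyphens."
  }
}
```

Passing `local.services_json` through a typed variable (for example via a wrapper module) gives both a type check and the custom `validation` rule, with clearer errors than a failure deep inside a `for_each`.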

Follow-up: How would you handle JSON files that grow to 10,000+ entries?

You have a nested structure: `var.apps`, where each app contains services and each service has ports. Looping over it with nested `for_each` (one per level, via modules) causes a state-address explosion, and `terraform plan` becomes unreadable. How do you simplify?

Flatten the nesting to a single level:

1) Instead of looping per level, flatten: `locals { flat_services = flatten([ for app_name, app in var.apps : [ for svc_name, svc in app.services : merge(svc, { app_name = app_name, service_name = svc_name }) ] ]) }` and then build a keyed map: `locals { service_map = { for s in local.flat_services : "${s.app_name}/${s.service_name}" => s } }`.
2) Use it: `resource "aws_instance" "svc" { for_each = local.service_map ... }`.
3) State addresses become flat: `aws_instance.svc["app1/auth"]` instead of nested references.
4) Readability: `terraform state list | head -20` shows a flat list.
5) Drawback: editing the structure means regenerating the flattened map; document the convention clearly.
6) For complex hierarchies, consider splitting into multiple resources, one `for_each` per level.
7) Test: `terraform plan | grep -c "will be created"` counts resources; verify the count matches the flattened size.
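The flattening pattern in full (a sketch: the `port` attribute and `var.ami_id` are hypothetical stand-ins; note `merge()` is used because HCL has no object-spread syntax):

```hcl
variable "apps" {
  type = map(object({
    services = map(object({
      port = number
    }))
  }))
}

variable "ami_id" {
  type = string
}

locals {
  # One flat list of service objects, each carrying its own identifiers.
  flat_services = flatten([
    for app_name, app in var.apps : [
      for svc_name, svc in app.services :
      merge(svc, { app_name = app_name, service_name = svc_name })
    ]
  ])

  # Composite keys like "app1/auth" become the state addresses.
  service_map = {
    for s in local.flat_services : "${s.app_name}/${s.service_name}" => s
  }
}

resource "aws_instance" "svc" {
  for_each      = local.service_map
  ami           = var.ami_id
  instance_type = "t3.micro"
  tags          = { Name = each.key }
}
```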

Follow-up: How would you migrate existing nested for_each to flattened structure without recreating resources?
