Terraform Modules on Azure: From Building Blocks to a Full-Scale Multi-Tier Application

Learn how to build scalable, reusable Azure infrastructure using Terraform modules. From module basics to a complete multi-tier deployment, this guide walks you through real-world examples and best practices.

Introduction

Welcome to the final chapter in our Terraform on Azure series.

Over the past several guides, you’ve gone from writing your first Terraform configuration to building dynamic, environment-aware infrastructure using variables, outputs, and real-world deployment patterns. You’ve learned how to organise your code, manage state, and make your infrastructure both flexible and predictable.

Now, it’s time to take everything you’ve learned and apply it in a more structured, reusable way — using Terraform modules.

Why Modules?

As your infrastructure grows, patterns start to emerge. You may find yourself repeating the same networking setup, virtual machine block, or storage configuration across different environments or projects. Copying and pasting works for a while — until it doesn’t.

Modules solve this problem.

They let you package infrastructure logic into clean, reusable building blocks. You define something once — like a virtual network, a resource group, or an application stack — and then reuse it anywhere, with different settings for each use.

Modules help you reduce duplication, improve consistency, and simplify complex infrastructure into understandable pieces.

What This Guide Will Cover

In this article, we’ll walk through:

  • What modules are and how they work
  • How to create your own reusable modules from scratch
  • How to structure and call modules in real projects
  • How to reuse modules across environments
  • How to tap into the Terraform Registry and Azure Verified Modules
  • How to create flexible, scalable infrastructure using composition and advanced module patterns
  • And finally, how to bring it all together in a full-featured, multi-tier Azure deployment

What You’ll Build

By the end, you’ll have built a modular, production-inspired infrastructure on Azure. Using your own custom modules — and a few community ones — you’ll define networking, compute, storage, and databases in a clear, reusable architecture.

This isn’t just a hands-on project. It’s a foundation you can adapt and grow for real-world use.

Let’s dive in and explore the power of Terraform modules.

What Are Terraform Modules?

Before we build anything, let’s take a moment to really understand what a module is — and why Terraform uses them.

At its simplest, a module is just a way to group related Terraform resources together into a reusable unit. You define it once, and you can use it again — in the same project or in others — by passing in different inputs.

If you’ve ever written a function in code, it’s the same idea:

  • You give it some inputs (variables)
  • It performs a task (provisions infrastructure)
  • It gives you outputs (like resource IDs or connection strings)

You’ve Already Used One

Here’s something that might surprise you:
If you’ve been following this series, you’ve already been using a module all along — the root module.

Every Terraform project has a root module. It’s simply the collection of .tf files in your working directory — usually files like main.tf, variables.tf, and outputs.tf. When you run terraform apply, Terraform treats this directory as the root module and starts building from there.

So in a way, every Terraform configuration is a module. The difference is, now we’re going to start creating and calling our own — and that unlocks a whole new level of structure and reusability.

Why Use Modules?

So why split your code into modules?

As your infrastructure grows, you’ll notice repeated patterns:
Maybe you deploy the same virtual network structure in dev, test, and prod. Or maybe your application stack always includes the same storage account, database, and compute resources.

Instead of copying and pasting those resources into different environments or files, you can define them once inside a module and reuse them.

That gives you:

  • Cleaner configurations: your root files stay focused on high-level structure
  • Easier changes: update a module once, and it applies everywhere it’s used
  • Environment flexibility: pass in different variable values for different environments (like region, tags, or VM size)
  • More confidence: you’re using something that already works and has been tested

What a Module Looks Like

A Terraform module is just a folder that contains one or more .tf files. These files work together to define a specific piece of infrastructure — like a virtual network, a resource group, or even a full application environment.

Here’s a basic example of what a module might look like:

modules/
└── my_module/
    ├── main.tf         # Main resource definitions
    ├── variables.tf    # Input variable declarations
    └── outputs.tf      # Output value declarations

Notice anything familiar?

That’s right — it’s basically the same layout as the root module. The difference is, now you can call this module from elsewhere and pass in different values each time.

How We’ll Use Modules in This Guide

In the rest of this guide, we’ll build our own modules for real Azure resources. We’ll start small — defining a resource group — and gradually build up to more complex setups like networking, storage, and Kubernetes.

By the end, we’ll combine these modules into a full-scale, multi-tier Azure environment — cleanly composed, reusable, and easy to manage.

But first, let’s roll up our sleeves and create our very first module.

Creating Your First Terraform Module

Let’s put the concept of modules into practice with a simple, hands-on example.

We’re going to create a Terraform module that provisions an Azure Resource Group. Nothing fancy — just a clean, minimal setup that shows how modules work from the inside out.

Step 1: Project and Module Structure

Let’s start by setting up a basic project directory:

project_root/
├── main.tf              # Root configuration (will call the module)
└── modules/
    └── resource_group/  # Our custom module
        ├── main.tf
        ├── variables.tf
        └── outputs.tf

In this structure:

  • project_root is our root module — this is where we’ll run Terraform from
  • modules/resource_group is our child module — the reusable logic for creating a resource group

Step 2: Writing the Module

Inside the modules/resource_group folder, create three files.

main.tf — This defines the resource:

resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.location

  tags = var.tags
}

variables.tf — This declares the module’s inputs:

variable "rg_name" {
  type        = string
  description = "Name of the Azure Resource Group"
}

variable "location" {
  type        = string
  description = "Azure region where the resource group will be created"
}

variable "tags" {
  type        = map(string)
  description = "Optional tags to apply to the resource group"
  default     = {}
}

outputs.tf — This returns values from the module:

output "rg_id" {
  value       = azurerm_resource_group.rg.id
  description = "ID of the created resource group"
}

output "rg_location" {
  value       = azurerm_resource_group.rg.location
  description = "Azure region of the resource group"
}

At this point, we’ve created a complete Terraform module — one that’s small, focused, and reusable.

Step 3: Calling the Module from the Root

Now let’s switch back to our root module (project_root/) and create a main.tf that uses the module we just wrote.

main.tf — in your root directory:

provider "azurerm" {
  features {}
}

module "resource_group" {
  source   = "./modules/resource_group"
  rg_name  = "dev-infra-rg"
  location = "Australia East"
  tags = {
    environment = "dev"
    owner       = "team-infra"
  }
}

output "resource_group_id" {
  value = module.resource_group.rg_id
}

Here’s what’s happening:

  • The source = "./modules/resource_group" line tells Terraform where to find the module
  • We’re passing in input variables like rg_name, location, and tags
  • We expose one of the module’s outputs (the rg_id) from the root module as well

You might be wondering: “Hang on — didn’t we just talk about avoiding hard-coded values? Why are we writing values like ‘Australia East’ and ‘dev-infra-rg’ directly here?”

That’s a great question — and you’re absolutely right to notice it.

The key idea is this: we’re not hardcoding values inside the module, we’re just passing them into the module from the root configuration. This is exactly how Terraform modules are designed to work — like functions that accept input.

In real-world scenarios, we wouldn’t usually write these values directly in the main.tf file like this. Instead, we’d often:

  • Define variables at the root level
  • Use .tfvars files to provide environment-specific values (e.g. one for dev, one for prod)

For now, though, we’re keeping the example straightforward so you can clearly see:

  • How we pass values into a module
  • How the module uses those values to build infrastructure
  • How the module returns useful outputs

We’ll explore how to make these values more dynamic and environment-aware later — but right now, our focus is on understanding how to pass values into a module and get values out.
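As a preview, here's one way that root-level wiring could look. This is just a sketch; the file layout and variable names are illustrative, not part of the project we've built so far:

```hcl
# variables.tf (root)
variable "rg_name" {
  type = string
}

variable "location" {
  type = string
}

# main.tf (root)
module "resource_group" {
  source   = "./modules/resource_group"
  rg_name  = var.rg_name
  location = var.location
}
```

With a matching `dev.tfvars` (for example `rg_name = "dev-infra-rg"` and `location = "Australia East"`), running `terraform apply -var-file="dev.tfvars"` produces the same result as the hard-coded version, but the environment-specific values now live outside the configuration.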

Step 4: Initialise and Apply

From the project_root/ directory, run:

terraform init
terraform apply

Terraform will:

  1. Initialise your project
  2. Detect the module and load it
  3. Prompt you to review the changes
  4. Create the Azure Resource Group using your module

Once the apply is complete, you’ll see the output:

Outputs:

resource_group_id = "/subscriptions/xxxxxxx/resourceGroups/dev-infra-rg"

You’ve just written, reused, and deployed your first Terraform module!

Why This Matters

This small win isn’t just a one-off — it’s a pattern you can use again and again.

Want to create multiple resource groups in different regions?
Want to reuse the same logic in other environments or projects?
Just call the same module with different inputs.
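For instance, the same `resource_group` module can be instantiated twice in one configuration, each call with its own inputs (the module labels and values here are illustrative):

```hcl
module "rg_primary" {
  source   = "./modules/resource_group"
  rg_name  = "dev-primary-rg"
  location = "Australia East"
}

module "rg_secondary" {
  source   = "./modules/resource_group"
  rg_name  = "dev-secondary-rg"
  location = "Australia Southeast"
}
```

Each module block gets its own entries in state, so the two resource groups are tracked and managed independently.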

In the next section, we’ll build on this foundation by exploring how root modules and child modules relate — and why this structure makes scaling up so much easier.

Understanding Root and Child Modules

Now that you’ve created and called your first module, it’s a great time to introduce an important concept in Terraform’s architecture: the difference between root modules and child modules.

This isn’t just a technical distinction — it’s a core part of how Terraform works under the hood.

What Is the Root Module?

The root module is the starting point of every Terraform project. It’s the directory where you run Terraform commands like terraform init, terraform plan, and terraform apply.

If you’ve been following along, the folder where you created your main.tf file — the one that called your resource_group module — is your root module.

It usually contains:

  • main.tf: the primary configuration
  • variables.tf: inputs to be passed in
  • outputs.tf: final values to expose

You might think, “Hang on, that’s exactly what a module looks like…”
And you’d be right — because the root module is a module, just a special one. It’s the one Terraform starts with.

What Is a Child Module?

A child module is any module that’s called by another module.

When you created the modules/resource_group folder and called it from your root configuration using a module block, you were using a child module.

Think of it like this:

project_root/              <-- Root Module
├── main.tf                <-- Calls the module
└── modules/
    └── resource_group/    <-- Child Module
        ├── main.tf
        ├── variables.tf
        └── outputs.tf

Terraform starts at the root, reads the main.tf, and then recursively loads any child modules it finds via module blocks.

Why the Distinction Matters

Understanding the root vs child distinction helps with:

  • Variable handling: Only root modules accept values via CLI flags (-var) or .tfvars files. Child modules can only get their input from module blocks.
  • State tracking: Terraform keeps track of everything starting from the root — even if resources are defined in nested modules.
  • Debugging and structure: When something goes wrong, knowing where a module lives in the hierarchy helps you troubleshoot faster.

Visualising the Relationship

Here’s a simple mental model:

[ Root Module ]
    |
    ├── module "resource_group"
    │       └── [ Child Module #1 ]
    |
    ├── module "networking"
    │       └── [ Child Module #2 ]
    |
    └── module "compute"
            └── [ Child Module #3 ]

And guess what? Those child modules can also call their own modules — making them both children and parents. That’s where nested composition (which we’ll cover soon) comes into play.

Quick Recap

  • The root module is where you run Terraform from
  • It can call one or more child modules using module blocks
  • Child modules are just folders with .tf files and no direct CLI interaction
  • Terraform builds everything starting from the root and branching into each child

By giving your infrastructure this kind of structure — one central entry point and clearly scoped building blocks — you make it easier to manage, scale, and reason about as your projects grow.

In the next section, we’ll build on this by expanding our module into something a bit more real-world: a networking module with a virtual network and a subnet.

Let’s take your module skills up a level.

Creating a More Capable Module: Azure Networking

So far, you’ve built and called your first Terraform module — congratulations again on that win!

Now let’s take things up a notch. In this section, we’ll build a more practical, real-world module: one that provisions not just a single resource, but a virtual network and a subnet — the foundation of most Azure environments.

This gives us a chance to:

  • Combine multiple related resources inside one module
  • Work with more flexible input variables
  • Explore how modules can manage internal relationships between resources
  • Return useful outputs that future modules (like AKS or databases) will need

Let’s jump in.

Step 1: Create the Networking Module Directory

We’ll start by creating a dedicated folder for this module. From your project root:

mkdir -p modules/networking
cd modules/networking

Just like before, we’ll create three files:

  • main.tf – where the infrastructure is defined
  • variables.tf – where we declare the inputs the module will accept
  • outputs.tf – where we define what values the module returns

Step 2: Define the Module’s Resources (main.tf)

Let’s define two resources:

  • A virtual network (azurerm_virtual_network)
  • A subnet inside that virtual network (azurerm_subnet)

Here’s the full main.tf:

resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  address_space       = var.address_space
  location            = var.location
  resource_group_name = var.resource_group_name

  tags = var.tags
}

resource "azurerm_subnet" "subnet" {
  name                 = var.subnet_name
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = var.subnet_address_prefix
}

You’ll notice the subnet resource references the virtual network using:

virtual_network_name = azurerm_virtual_network.vnet.name

This shows how resources inside a module can depend on and connect to each other. Terraform automatically detects this dependency and builds them in the correct order.
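That implicit reference is usually all the ordering information Terraform needs. If a relationship ever isn't expressed through an attribute, you can declare it explicitly with `depends_on`. The following is purely a sketch, and is redundant here because `virtual_network_name` already references the VNet:

```hcl
resource "azurerm_subnet" "subnet" {
  name                 = var.subnet_name
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = var.subnet_address_prefix

  # Explicit ordering hint; unnecessary when an attribute
  # reference already creates the dependency.
  depends_on = [azurerm_virtual_network.vnet]
}
```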

Step 3: Declare Input Variables (variables.tf)

Now we’ll declare the inputs this module expects. Each of these variables controls one aspect of the virtual network or subnet configuration:

variable "vnet_name" {
  type        = string
  description = "Name of the virtual network"
}

variable "address_space" {
  type        = list(string)
  description = "Address space for the virtual network"
}

variable "location" {
  type        = string
  description = "Azure region for the virtual network"
}

variable "resource_group_name" {
  type        = string
  description = "Name of the resource group"
}

variable "subnet_name" {
  type        = string
  description = "Name of the subnet"
}

variable "subnet_address_prefix" {
  type        = list(string)
  description = "Address prefix for the subnet"
}

variable "tags" {
  type        = map(string)
  description = "Tags to assign to resources"
  default     = {}
}

Step 4: Define Useful Outputs (outputs.tf)

We’ll return the most relevant values so other parts of the infrastructure can use them later:

output "vnet_id" {
  value       = azurerm_virtual_network.vnet.id
  description = "The ID of the virtual network"
}

output "vnet_name" {
  value       = azurerm_virtual_network.vnet.name
  description = "The name of the virtual network"
}

output "subnet_id" {
  value       = azurerm_subnet.subnet.id
  description = "The ID of the subnet"
}

Step 5: Use the Module in Your Root Configuration

Let’s now use this networking module from our root module. You can either add this into the same root module you created earlier, or create a new test project for this.

Here’s a root main.tf that:

  • Uses the azurerm provider
  • Calls our networking module and passes in the required inputs
  • Exposes outputs

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "demo-networking-rg"
  location = "Australia East"
}

module "networking" {
  source              = "./modules/networking"
  vnet_name           = "demo-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_name         = "demo-subnet"
  subnet_address_prefix = ["10.0.1.0/24"]
  tags = {
    environment = "dev"
    team        = "infra"
  }
}

output "vnet_id" {
  value = module.networking.vnet_id
}

output "subnet_id" {
  value = module.networking.subnet_id
}

Quick note: You’ll notice we’re still passing variable values directly in the root module — just like we did earlier. That’s intentional.
In this section, we’re focused on showing how modules interact and pass values between each other.
We’ll explore more dynamic, environment-aware ways to pass these values very soon.

What You’ve Learned

In this section, you’ve learned how to:

  • Create a module that manages multiple related resources
  • Handle resource dependencies and relationships inside a module
  • Declare and use real-world input variables like lists and maps
  • Return useful outputs from a module for future use

You’ve now gone from building single-resource modules to constructing more capable, practical infrastructure components — the kind you’ll actually use in production.

In the next section, we’ll take this even further by exploring module composition — where modules themselves can use other modules as building blocks.

Let’s keep going.

Module Composition: Modules Inside Modules

Now that you’re familiar with how to create and use modules, let’s take it one step further and explore one of the most powerful ideas in Terraform module design:

Modules can use other modules.
This is called composition, and it’s how you can build complex systems from simple building blocks.

If this reminds you of functions calling other functions in programming, you’re exactly right — the same principle applies. You can design a module to take care of a larger task, and inside it, call other smaller modules to handle the individual pieces.

Example: Composing a Web Application Module

Let’s say we want to define a reusable module for deploying a web application. But that application depends on networking and database resources. Rather than cramming everything into one big module, we break it down like this:

project_root/
└── modules/
    ├── networking/
    ├── database/
    └── webapp/

The webapp module will compose both the networking and database modules to set up everything needed for the application to run.

Here’s what modules/webapp/main.tf might look like:

module "networking" {
  source              = "../networking"
  resource_group_name = var.resource_group_name
  vnet_name           = var.vnet_name
  address_space       = var.address_space
}

module "database" {
  source              = "../database"
  resource_group_name = var.resource_group_name
  db_name             = var.db_name
  admin_username      = var.db_admin_username
  admin_password      = var.db_admin_password
  subnet_id           = module.networking.subnet_ids["db"]
}

resource "azurerm_app_service" "webapp" {
  name                = var.app_name
  location            = var.location
  resource_group_name = var.resource_group_name
  app_service_plan_id = var.app_service_plan_id

  app_settings = {
    DB_CONNECTION = module.database.connection_string
  }
}

In this setup:

  • The webapp module delegates responsibility to the networking and database modules
  • It connects the pieces together using outputs and inputs
  • It exposes only the final result to the outside world — like a clean API
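That "clean API" typically takes the form of an `outputs.tf` inside the composed module. As a sketch (the output name is an assumption; `default_site_hostname` is a standard attribute of `azurerm_app_service`):

```hcl
# modules/webapp/outputs.tf
output "webapp_hostname" {
  value       = azurerm_app_service.webapp.default_site_hostname
  description = "Hostname of the deployed web app"
}
```

Callers of the `webapp` module only see outputs like this one; the networking and database internals stay hidden.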

Why Use Composition?

There are some real advantages to thinking in this way:

  • Separation of concerns: Each module does one thing well. If something breaks, you know where to look.
  • Reusability: The networking and database modules could be reused in other projects — they’re not tied to this specific web app.
  • Maintainability: You can evolve or swap out parts of the system without rewriting everything.
  • Clarity: The webapp module tells a story — it says, “to deploy this app, I need a network, a database, and an App Service.”

What’s Happening in the Root Module?

Once you’ve composed a higher-level module like webapp, using it in the root module is simple:

module "webapp" {
  source              = "./modules/webapp"
  resource_group_name = azurerm_resource_group.main.name
  location            = var.location
  vnet_name           = "demo-vnet"
  address_space       = ["10.0.0.0/16"]
  db_name             = "webapp-db"
  db_admin_username   = var.db_admin_username
  db_admin_password   = var.db_admin_password
  app_name            = "demo-webapp"
  app_service_plan_id = azurerm_app_service_plan.main.id
}

This keeps your root main.tf clean and focused — it simply says, “Give me a working web app with all its dependencies.”

“Wait… Isn’t This Just a Module Calling a Module?”

Great observation — yes, technically what we’re doing is calling a module from another module. So what makes this different from when our root module called the networking module earlier?

The answer comes down to how we’re using modules — and what we expect them to represent.

In the earlier example:

  • Our root module was calling a single-purpose child module (like networking) to provision one part of the infrastructure.

In this example:

  • Our webapp module is more than a simple building block — it’s a composed system, bundling multiple modules together (networking, database, app service) to deliver a higher-level unit of functionality.

Think of it like this:

| Use Case | What It Represents |
| --- | --- |
| Root module calling networking | “I want to create a virtual network.” |
| webapp module calling networking + database + resources | “I want to create a full application environment.” |

So yes — the mechanics are the same. But the intent is different.

  • In the first case, the root module is stitching things together directly.
  • In the second case, we’re starting to build layered architecture — where one module represents a complete subsystem, made from smaller parts.

This style of module design becomes extremely valuable in large systems. It lets you treat an entire web app stack as one unit, or an internal platform service as a reusable package.

Where We’re Headed Next

You’ve now seen how modules can call other modules, just like reusable functions — and how composition helps you build up infrastructure layers step by step.

Next, we’ll explore another powerful benefit of modules: reusability across different environments. You’ll see how the same module can power your dev, test, and prod setups — without rewriting anything.

Ready? Let’s go.

Module Reuse Across Environments

By now, you’ve seen how modules help us package infrastructure into neat, reusable components. But we’ve only scratched the surface of what makes this approach powerful.

Let’s explore one of the most practical benefits of using modules:

You can reuse the same module across multiple environments — like dev, test, and prod — with different settings.

Why Reuse Modules Across Environments?

In a real-world project, you rarely deploy everything just once.
You might have:

  • A dev environment for testing
  • A staging environment for pre-production validation
  • A prod environment for your live users

All of these environments often need the same kind of infrastructure, just with different configurations.

For example:

  • Dev might need one VM, Prod might need five
  • Dev might use “Australia Southeast”, Prod might use “Australia East”
  • Tagging, naming conventions, and security rules might vary

But the structure of the infrastructure — the resources being created — is often identical.

And that’s exactly where modules shine.

Modules Stay the Same. Input Values Change.

Let’s say you created a module for provisioning a virtual network. You don’t need to rewrite or duplicate that module for every environment. Instead, you reuse the same module, and pass in different values:

# Root main.tf for dev
module "networking_dev" {
  source              = "./modules/networking"
  vnet_name           = "dev-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = "Australia Southeast"
  resource_group_name = "dev-rg"
}

# Root main.tf for prod
module "networking_prod" {
  source              = "./modules/networking"
  vnet_name           = "prod-vnet"
  address_space       = ["10.10.0.0/16"]
  location            = "Australia East"
  resource_group_name = "prod-rg"
}

Notice the pattern?
We’re using the same module (./modules/networking) in both cases. The only difference is the input values.

This keeps your module clean and focused, and lets your root module adapt to different environments by simply tweaking configuration.

Organising for Environment-Based Reuse

There are a few common ways to structure this in practice. One simple approach is to use separate folders for each environment:

environments/
├── dev/
│   ├── main.tf
│   └── terraform.tfvars
├── prod/
│   ├── main.tf
│   └── terraform.tfvars
└── modules/
    └── networking/

Or, if you want to keep a single main.tf, you can inject environment-specific values using -var-file flags:

terraform apply -var-file="dev.tfvars"
terraform apply -var-file="prod.tfvars"

Inside dev.tfvars:

vnet_name           = "dev-vnet"
address_space       = ["10.0.0.0/16"]
location            = "Australia Southeast"
resource_group_name = "dev-rg"

This lets you keep one shared module, and just swap out values as needed.
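The matching `prod.tfvars` simply swaps the values (mirroring the prod example shown earlier):

```hcl
vnet_name           = "prod-vnet"
address_space       = ["10.10.0.0/16"]
location            = "Australia East"
resource_group_name = "prod-rg"
```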

Reuse in Composed Modules

You can also reuse composed modules. For example, you might have a complete webapp module — the one we built earlier — and now you want to deploy it to different environments:

module "webapp" {
  source              = "./modules/webapp"
  resource_group_name = var.resource_group_name
  location            = var.location
  vnet_name           = "${var.environment}-vnet"
  db_name             = "${var.environment}-db"
  app_name            = "${var.environment}-app"
  # ...other inputs
}

With input variables like environment = "dev" or environment = "prod" passed in from a .tfvars file, you can now use the same composed module, tailored for each deployment.
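If you take this approach, it's worth constraining the `environment` variable so a typo can't create a misnamed stack. A sketch of that declaration:

```hcl
variable "environment" {
  type        = string
  description = "Deployment environment (dev, test, or prod)"

  validation {
    condition     = contains(["dev", "test", "prod"], var.environment)
    error_message = "environment must be one of: dev, test, prod."
  }
}
```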

A Mental Model

Here’s a helpful way to think about it:

  • Your modules are reusable templates — like blueprints for infrastructure.
  • Your root module is where you say, “Let’s use this blueprint to build something.”
  • Your variable values decide what the final infrastructure looks like in each case.

Change the values, and you get a different deployment — but the blueprint stays the same.

What’s Next?

You’ve now seen how modules can scale horizontally — not just across resources, but across environments. This reusability is what makes Terraform such a strong foundation for real-world infrastructure automation.

Next up, we’ll zoom out and explore how to tap into the power of the wider Terraform ecosystem by using public modules from the Terraform Registry — including the excellent Azure Verified Modules.

Ready to supercharge your module toolbox? Let’s go!

Using Public Modules (Including Azure Verified Modules)

So far, you’ve built modules by hand — and that’s a great way to learn. But as your infrastructure needs grow, it doesn’t always make sense to reinvent the wheel every time.

The good news?
The Terraform community — including Microsoft — has already done a lot of the heavy lifting.

Let’s explore how you can use public modules to save time, stay aligned with best practices, and accelerate your Azure projects.

What Are Public Modules?

Public modules are pre-built, reusable Terraform modules published by the community and trusted organisations.

They live in the Terraform Registry, which is like a package manager for infrastructure — you search for a module, plug it into your project, and provide the inputs it needs.

Introducing Azure Verified Modules (AVM)

Microsoft publishes a curated set of production-ready modules under the Azure Verified Modules (AVM) initiative. These are:

  • Officially supported by Microsoft
  • Designed to follow Azure best practices
  • Actively maintained and versioned
  • Great for bootstrapping robust, secure environments quickly

If you’re managing infrastructure on Azure, AVMs are an excellent way to start strong with confidence.

How to Find Azure Modules

There are two main places to explore public modules for Azure:

  1. Terraform Registry – Azure namespace
    → These are official Microsoft-maintained modules

  2. Azure Verified Modules site
    → A friendlier landing page curated by the Azure team

You’ll find modules for networking, storage, compute, security, and more.

Example: Using an AVM for a Virtual Network

Let’s say you want to create a virtual network with subnets using the official AVM.

You can plug it into your Terraform configuration like this:

module "vnet" {
  source  = "Azure/avm-res-network-virtualnetwork/azurerm"
  version = "0.2.4"  # Always pin your version!

  name                = "my-vnet"
  location            = "Australia East"
  resource_group_name = "infra-rg"

  address_space = ["10.0.0.0/16"]

  subnets = {
    subnet1 = {
      name             = "frontend"
      address_prefixes = ["10.0.1.0/24"]
    }
    subnet2 = {
      name             = "backend"
      address_prefixes = ["10.0.2.0/24"]
    }
  }
}

In just a few lines, you’ve got:

  • A virtual network with a custom address space
  • Two subnets
  • Naming handled according to Azure standards

All without writing the low-level resource blocks yourself.

How This Fits Into What You’ve Learned

This might look a little different from the modules you’ve written, but it works exactly the same under the hood:

  • You provide input variables
  • The module provisions resources
  • You can access outputs just like with your own modules

You can even treat AVMs as building blocks within your own composed modules if you want to layer in custom logic or automation.

Outputs from Public Modules

Let’s say this AVM provides an output like vnet_id. You can expose it from your root module just like you did before:

output "vnet_id" {
  value = module.vnet.resource.id
}

Note: Every AVM exposes a resource output, an object that gives you access to the underlying resource's attributes. Super handy!

Best Practices When Using Public Modules

  1. Pin the version
    Always specify the version = "x.y.z" to avoid breaking changes.

  2. Read the docs
    Public modules often have required inputs or assumptions — check the documentation before using them.

  3. Test before production
    Try them out in a dev environment first to make sure they work for your use case.

  4. Customise thoughtfully
    If the module doesn’t support something you need, you can often wrap it in your own module and extend it — no need to fork it unless absolutely necessary.
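For instance, a thin wrapper module can enforce organisation-specific naming or tagging before delegating to the public module. The layout and variable names below are purely illustrative:

```hcl
# modules/standard-vnet/main.tf (hypothetical wrapper module)
module "vnet" {
  source  = "Azure/avm-res-network-virtualnetwork/azurerm"
  version = "0.2.4"

  # Enforce the org naming convention before the public module sees the name
  name                = "vnet-${var.project}-${var.environment}"
  location            = var.location
  resource_group_name = var.resource_group_name
  address_space       = var.address_space
  subnets             = var.subnets
}
```

Callers get your organisation's conventions for free, while the public module keeps doing the heavy lifting.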

Public vs. Custom Modules: When to Use What?

Use Public Modules When…                      Use Custom Modules When…
You need a standard resource setup            You have specific implementation requirements
You’re starting a project quickly             You want tighter control over every detail
You’re OK with some level of abstraction      You need full flexibility or edge-case support
You trust the publisher (e.g., Microsoft)     Your org has specific compliance or naming rules

In practice, many teams use a blend of public and custom modules depending on the task.

Wrapping Up

Public modules — especially Azure Verified Modules — are a great way to accelerate your work while benefiting from community-tested patterns and Azure-recommended practices.

Think of them as “pre-built Lego kits” — ready-made pieces you can plug into your own configurations, saving you time and helping you follow best practices.

By now, you’ve seen how powerful modules can be — whether you’re writing your own, composing them into larger patterns, or using high-quality community modules. You’ve learned how to make your Terraform code more organised, reusable, and scalable.

So, what’s next?

Now that we’ve built up a solid foundation in Terraform modules — from the ground up — it’s time to put it all into action.

In the next and final part of this series, we’ll combine everything you’ve learned into a complete, modular, production-style deployment: a multi-tier Azure application using Terraform.

Let’s roll up our sleeves and build something real.

Grand Finale Project: Building a Multi-Tier Azure Application

You’ve done the groundwork. You’ve learned what modules are, created your own, composed them, and even explored high-quality public modules. Now it’s time to bring all of that together into one cohesive, production-style deployment.

This is the grand finale of your Terraform learning journey — and a major step forward in your real-world Infrastructure as Code skills.

What We’ll Build

We’re going to provision a multi-tier Azure application using Terraform modules. This project mirrors what you might see in a real-world environment: distinct infrastructure components, clearly scoped responsibilities, clean separation of concerns, and reusable building blocks.

Here’s what your infrastructure will include:

  • Networking Layer: A virtual network with multiple subnets and a network security group
  • Compute Layer: An Azure Kubernetes Service (AKS) cluster to host containerised workloads
  • Data Layer: An Azure SQL Database provisioned securely
  • Storage Layer: An Azure Blob Storage account for unstructured data

Each component will live in its own module, and we’ll tie it all together in a root module that coordinates the full application stack.

By the end, you’ll have:

  • A modular Terraform setup you can reuse in real projects
  • A deeper understanding of how modules interact and share data
  • A complete, working Azure deployment — all created through code

Let’s get started by outlining the overall structure of our project and then dive into each module step-by-step.

Project Structure Overview

Before we write any code, let’s lay out the folder structure for our Terraform project. We’ll use a root module to orchestrate the deployment and several child modules, each focused on a specific piece of infrastructure.

Here’s what the directory tree will look like:

project_root/
├── main.tf             # Root configuration file that ties everything together
├── variables.tf        # Root-level input variables (like region, project name, etc.)
├── outputs.tf          # Root-level outputs to expose values like ACR URL, AKS name, etc.
└── modules/
    ├── networking/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── aks/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── database/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    └── storage/
        ├── main.tf
        ├── variables.tf
        └── outputs.tf

Each child module is responsible for one tier of the application:

  • The networking module sets up a virtual network, subnets, and NSG.
  • The aks module provisions the Kubernetes cluster.
  • The database module creates an Azure SQL Server + database.
  • The storage module sets up a storage account with a container.

The root module (main.tf in project_root/) ties all these modules together, passes in input values, and wires up any dependencies between them — like passing a subnet ID from the networking module to the aks module.

Now that we’ve seen the big picture, let’s start at the beginning — by setting up the resource group and networking layer.

Step 1: Creating the Azure Resource Group

In this project, we’re not putting the resource group in a separate module — we’re keeping it in the root module. This is because the resource group acts more like a central context that everything else plugs into. You could modularise it, but for this scenario, we’re keeping it simple and focused.

Here’s what it looks like.

main.tf (Root Module)

Create this in your project_root/ directory:

# Pin the provider so future releases don't change behaviour unexpectedly
# (adjust the constraint to match your environment)
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "main" {
  name     = var.resource_group_name
  location = var.location
}

This block creates the Azure Resource Group we’ll use to host all our other resources. Its name and location are passed in as variables — let’s define those next.

Step 2: Defining Input Variables

Create a variables.tf file in the root directory:

variable "project_name" {
  type        = string
  description = "A short name for the project, used for naming resources"
}

variable "resource_group_name" {
  type        = string
  description = "The name of the Azure resource group"
}

variable "location" {
  type        = string
  description = "Azure region for the deployment"
}

# These two are consumed later by the database module; declaring them up
# front keeps the root configuration complete.
variable "db_admin_username" {
  type        = string
  description = "Administrator username for the SQL server"
}

variable "db_admin_password" {
  type        = string
  description = "Administrator password for the SQL server"
  sensitive   = true
}

And a quick terraform.tfvars file (or you can pass these values via -var flags):

project_name         = "myapp"
resource_group_name  = "myapp-rg"
location             = "Australia East"
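The root configuration will also need database credentials later (var.db_admin_username and var.db_admin_password); avoid committing those to terraform.tfvars. A safer pattern is to supply them at run time:

```shell
# Non-sensitive values can go on the command line...
terraform plan -var 'project_name=myapp' -var 'location=Australia East'

# ...while secrets are read from TF_VAR_-prefixed environment variables,
# which Terraform picks up automatically.
export TF_VAR_db_admin_username='sqladmin'
export TF_VAR_db_admin_password='use-a-strong-password-here'
terraform plan
```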

Step 3: Exposing the Resource Group Details

Create an outputs.tf file to share values from this resource group with the child modules:

output "resource_group_name" {
  value       = azurerm_resource_group.main.name
  description = "Name of the created resource group"
}

output "resource_group_location" {
  value       = azurerm_resource_group.main.location
  description = "Location of the resource group"
}

Why this matters: we’ll use these outputs to feed values into our networking, AKS, database, and storage modules — keeping everything in sync and environment-aware.

What’s Next?

With the foundation in place, we’re now ready to build our networking module — complete with a virtual network, subnets, and a network security group. This will provide the shared infrastructure our compute and data layers will build on.

Step 4: Create the Networking Module

Inside your project, create a new folder:

mkdir -p modules/networking

modules/networking/main.tf

resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  address_space       = var.address_space
  location            = var.location
  resource_group_name = var.resource_group_name

  tags = var.tags
}

resource "azurerm_subnet" "subnet" {
  for_each = var.subnets

  name                 = each.key
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = each.value.address_prefixes
  service_endpoints    = each.value.service_endpoints
}

resource "azurerm_network_security_group" "nsg" {
  name                = "${var.vnet_name}-nsg"
  location            = var.location
  resource_group_name = var.resource_group_name

  security_rule {
    name                       = "AllowHTTP"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "80"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  tags = var.tags
}

# Associate the NSG with every subnet so its rules actually take effect —
# an NSG does nothing until it is attached to a subnet or NIC.
resource "azurerm_subnet_network_security_group_association" "nsg_assoc" {
  for_each = azurerm_subnet.subnet

  subnet_id                 = each.value.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}

modules/networking/variables.tf

variable "vnet_name" {
  type        = string
  description = "Name of the virtual network"
}

variable "address_space" {
  type        = list(string)
  description = "CIDR block for the virtual network"
}

variable "location" {
  type        = string
  description = "Azure region"
}

variable "resource_group_name" {
  type        = string
  description = "Name of the resource group to deploy into"
}

variable "tags" {
  type        = map(string)
  description = "Tags to apply to the resources"
  default     = {}
}

variable "subnets" {
  type = map(object({
    address_prefixes   = list(string)
    service_endpoints  = list(string)
  }))
  description = "Subnet definitions keyed by subnet name"
}

modules/networking/outputs.tf

output "vnet_id" {
  value       = azurerm_virtual_network.vnet.id
  description = "The ID of the created virtual network"
}

output "subnet_ids" {
  value       = { for k, v in azurerm_subnet.subnet : k => v.id }
  description = "Map of subnet names to their IDs"
}

output "nsg_id" {
  value       = azurerm_network_security_group.nsg.id
  description = "The ID of the network security group"
}

Step 5: Use the Networking Module in the Root

Now, back in your root main.tf, let’s add the networking module below the resource group:

module "networking" {
  source              = "./modules/networking"
  vnet_name           = "${var.project_name}-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  subnets = {
    aks = {
      address_prefixes  = ["10.0.1.0/24"]
      service_endpoints = ["Microsoft.Sql"]
    },
    db = {
      address_prefixes  = ["10.0.2.0/24"]
      service_endpoints = ["Microsoft.Sql"]
    }
  }

  tags = {
    environment = "dev"
    project     = var.project_name
  }
}

And don’t forget to expose the subnet and VNet info for other modules to use:

output "vnet_id" {
  value = module.networking.vnet_id
}

output "subnet_ids" {
  value = module.networking.subnet_ids
}

At this point, we’ve set up the base infrastructure and networking — the foundation is ready!

Next up: let’s move to the AKS module so we can provision our Kubernetes compute layer.

Step 6: Create the AKS Module

In your project folder, run:

mkdir -p modules/aks

modules/aks/main.tf

resource "azurerm_kubernetes_cluster" "aks" {
  name                = var.cluster_name
  location            = var.location
  resource_group_name = var.resource_group_name
  dns_prefix          = var.dns_prefix

  default_node_pool {
    name       = "default"
    node_count = var.node_count
    vm_size    = var.vm_size
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin = "azure"
    service_cidr   = var.service_cidr
    dns_service_ip = var.dns_service_ip
    # Note: docker_bridge_cidr was deprecated and has been removed in
    # recent azurerm releases, so it is intentionally not set here.
  }

  tags = var.tags
}

modules/aks/variables.tf

variable "cluster_name" {
  type        = string
  description = "Name of the AKS cluster"
}

variable "location" {
  type        = string
  description = "Azure region"
}

variable "resource_group_name" {
  type        = string
  description = "Name of the resource group"
}

variable "dns_prefix" {
  type        = string
  description = "DNS prefix for the AKS cluster"
}

variable "node_count" {
  type        = number
  default     = 1
  description = "Number of nodes in the default pool"
}

variable "vm_size" {
  type        = string
  default     = "Standard_D2_v2"
  description = "VM size for the AKS node pool"
}

variable "service_cidr" {
  type        = string
  description = "CIDR for the AKS service network"
}

variable "dns_service_ip" {
  type        = string
  description = "DNS service IP address (must fall within service_cidr)"
}

variable "docker_bridge_cidr" {
  type        = string
  default     = "172.17.0.1/16"
  description = "Docker bridge CIDR (deprecated in recent azurerm releases; kept for compatibility)"
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = "Tags to apply to the cluster"
}

modules/aks/outputs.tf

output "cluster_id" {
  value       = azurerm_kubernetes_cluster.aks.id
  description = "The ID of the AKS cluster"
}

output "kube_config" {
  value       = azurerm_kubernetes_cluster.aks.kube_config_raw
  description = "Raw kubeconfig to connect to the cluster"
  sensitive   = true
}

Step 7: Use the AKS Module in the Root Configuration

Now, back in the root main.tf, add this:

module "aks" {
  source              = "./modules/aks"
  cluster_name        = "${var.project_name}-aks"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  dns_prefix          = "${var.project_name}-dns"

  node_count = 2
  vm_size    = "Standard_D2_v2"

  # Keep the service CIDR outside the VNet's 10.0.0.0/16 range so the
  # cluster's virtual service network can never overlap real subnets.
  service_cidr       = "10.1.0.0/24"
  dns_service_ip     = "10.1.0.10"
  docker_bridge_cidr = "172.17.0.1/16"

  # To place the AKS nodes inside the custom VNet, you could extend the
  # module with a subnet input and pass module.networking.subnet_ids["aks"]
  # through to vnet_subnet_id on the default node pool.

  tags = {
    environment = "dev"
    project     = var.project_name
  }
}

And surface the outputs:

output "aks_cluster_id" {
  value = module.aks.cluster_id
}

output "aks_kube_config" {
  value     = module.aks.kube_config
  sensitive = true
}

At this stage, you’ve now provisioned a reusable AKS module and wired it into your infrastructure.
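Because kube_config is marked sensitive, Terraform hides it in normal output. You can still extract it explicitly and point kubectl at the new cluster:

```shell
# Write the raw kubeconfig from the sensitive output to a local file
terraform output -raw aks_kube_config > kubeconfig

# Verify connectivity (requires kubectl installed locally)
kubectl --kubeconfig ./kubeconfig get nodes
```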

In the next step, we’ll build the database module to support backend data for your apps.

Step 8: Create the Database Module

Start by creating the module directory:

mkdir -p modules/database

modules/database/main.tf

# The older azurerm_sql_server / azurerm_sql_database resources are
# deprecated; the mssql resources below are the current equivalents.
resource "azurerm_mssql_server" "sql" {
  name                         = var.server_name
  resource_group_name          = var.resource_group_name
  location                     = var.location
  version                      = "12.0"
  administrator_login          = var.admin_username
  administrator_login_password = var.admin_password
}

resource "azurerm_mssql_database" "db" {
  name      = var.db_name
  server_id = azurerm_mssql_server.sql.id
  sku_name  = var.db_sku
}

modules/database/variables.tf

variable "server_name" {
  type        = string
  description = "SQL server name (must be globally unique and lowercase)"
}

variable "resource_group_name" {
  type        = string
  description = "Azure resource group"
}

variable "location" {
  type        = string
  description = "Azure region"
}

variable "admin_username" {
  type        = string
  description = "Administrator username for SQL server"
}

variable "admin_password" {
  type        = string
  description = "Administrator password for SQL server"
  sensitive   = true
}

variable "db_name" {
  type        = string
  description = "Name of the SQL database"
}

variable "db_sku" {
  type        = string
  description = "SKU of the SQL database (e.g. Basic, S0, P1)"
  default     = "S0"
}

modules/database/outputs.tf

output "server_name" {
  value       = azurerm_mssql_server.sql.name
  description = "SQL server name"
}

output "db_name" {
  value       = azurerm_mssql_database.db.name
  description = "Database name"
}

output "connection_string" {
  value       = "Server=tcp:${azurerm_mssql_server.sql.fully_qualified_domain_name},1433;Initial Catalog=${azurerm_mssql_database.db.name};Persist Security Info=False;User ID=${var.admin_username};Password=${var.admin_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
  description = "SQL Database connection string"
  sensitive   = true
}

Step 9: Use the Database Module in the Root Configuration

Back in your root main.tf, add:

module "database" {
  source              = "./modules/database"
  server_name         = "${var.project_name}-sql"
  db_name             = "${var.project_name}-db"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  admin_username      = var.db_admin_username
  admin_password      = var.db_admin_password
}

And expose useful outputs:

output "db_connection_string" {
  value     = module.database.connection_string
  sensitive = true
}

That’s it — you now have a secure, fully modular Azure SQL backend in place!

Up next: let’s wire in some persistent file storage with a Storage Account Module.

Step 10: Create the Storage Module

First, create the module folder:

mkdir -p modules/storage

modules/storage/main.tf

resource "azurerm_storage_account" "storage" {
  name                     = var.storage_account_name
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = var.tags
}

resource "azurerm_storage_container" "container" {
  name                  = var.container_name
  storage_account_name  = azurerm_storage_account.storage.name
  container_access_type = var.container_access_type
}

modules/storage/variables.tf

variable "storage_account_name" {
  type        = string
  description = "Name of the storage account (must be globally unique)"
}

variable "resource_group_name" {
  type        = string
  description = "Resource group name"
}

variable "location" {
  type        = string
  description = "Azure region"
}

variable "container_name" {
  type        = string
  description = "Name of the blob container"
}

variable "container_access_type" {
  type        = string
  description = "Access level for the container (private, blob, or container)"
  default     = "private"
}

variable "tags" {
  type        = map(string)
  description = "Tags to apply to the storage account"
  default     = {}
}

modules/storage/outputs.tf

output "storage_account_name" {
  value       = azurerm_storage_account.storage.name
  description = "The name of the storage account"
}

output "primary_blob_endpoint" {
  value       = azurerm_storage_account.storage.primary_blob_endpoint
  description = "Primary blob endpoint URL"
}

Step 11: Use the Storage Module in the Root Configuration

In your root main.tf, add:

module "storage" {
  source               = "./modules/storage"
  # Storage account names must be globally unique: 3-24 lowercase letters and numbers
  storage_account_name = "${lower(var.project_name)}storage"
  resource_group_name  = azurerm_resource_group.main.name
  location             = azurerm_resource_group.main.location
  container_name       = "app-data"
  tags = {
    environment = "dev"
    project     = var.project_name
  }
}

And expose outputs in outputs.tf:

output "storage_account_name" {
  value = module.storage.storage_account_name
}

output "blob_endpoint" {
  value = module.storage.primary_blob_endpoint
}

Nicely done! You now have persistent storage ready to support your app — and it’s wrapped in a reusable, environment-ready module.

We’ve now built and composed four clean, independent modules: networking, compute, database, and storage — all orchestrated by a single root module.
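With everything wired up, deploying the whole stack from project_root/ is the standard three-step workflow:

```shell
terraform init    # download the azurerm provider and initialise all modules
terraform plan    # review the full set of proposed changes
terraform apply   # provision networking, AKS, SQL, and storage in one run
```

And when you're done experimenting, terraform destroy tears the whole stack down again.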

Let’s take a step back, review what we’ve built, and then talk about where to go from here.

Step 12: Wrapping It All Up

Let’s pause and appreciate what you’ve just done:

  • Created a modular, scalable Terraform project
  • Used four reusable modules for networking, compute (AKS), database (SQL), and storage
  • Passed outputs between modules to connect infrastructure components
  • Kept the root configuration clean, focused, and high-level
  • Followed best practices around input variables, tagging, and separation of concerns

And the best part? All of this is ready to be reused and adapted for future projects.

Whether you’re deploying to a dev sandbox, preparing for production, or scaling across regions — this structure is built for growth.

What’s Next?

Now that you’ve seen how modular Terraform design works in practice, it’s time to think about maintaining and scaling your project over time:

  • How do you collaborate with teammates without stepping on each other’s toes?
  • How do you manage state securely and reliably?
  • What happens when infrastructure needs evolve?
  • How do you automate deployments using CI/CD?

In the next section, we’ll cover the operational best practices that make large-scale Terraform projects successful — from team workflows to secrets management, remote state, and more.

Let’s go!

Best Practices for Working with Terraform Modules at Scale

Now that you’ve built a multi-tier application using Terraform modules, let’s step back and talk about what it takes to manage modular infrastructure in the real world — especially as your team grows and your environment becomes more complex.

These practices will help you write cleaner code, reduce risk, and set yourself up for long-term maintainability.

1. Keep Modules Focused and Purpose-Driven

Each module should do one thing well.
Avoid “mega modules” that try to do everything in one place.

For example:

  • ✅ A networking module that creates VNets and subnets
  • ✅ A compute module for VMs or AKS clusters
  • ❌ A project_infra module that does networking, compute, storage, and databases in one

This makes your modules easier to test, reuse, and reason about.

2. Use Clear and Consistent Naming Conventions

This applies across:

  • Module directory names (networking, aks, database)
  • Variable and output names (resource_group_name, not rg_name)
  • Resource names inside your modules (use a prefix like ${var.project}-vnet)

You’ll thank yourself when you revisit the code 6 months later — or when teammates start collaborating on it.

3. Document Every Module

Every module should include a README.md that answers:

  • What does this module do?
  • What variables does it expect?
  • What outputs does it produce?
  • How do I use it?

Bonus points for usage examples. A well-documented module can be picked up and used by anyone on the team — without guesswork.

4. Use Inputs and Outputs Intentionally

  • Don’t expose everything — only what’s actually needed outside the module.
  • Use description for each variable and output.
  • Use type and default wisely. Avoid making everything optional — be intentional about what’s required.

This creates a clean “interface” between modules, just like designing a good API.

5. Use Version Control and Tag Releases

If you’re storing modules in Git (which you should), tag stable versions using semantic versioning:

v1.0.0, v1.1.0, v2.0.0

This allows you to:

  • Pin specific versions when using remote modules
  • Safely evolve modules without breaking existing deployments
  • Roll back easily if something breaks
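Consumers can then reference a tagged release directly from Git (the repository URL below is illustrative):

```hcl
module "networking" {
  # The ?ref= suffix pins this call to the v1.2.0 Git tag
  source = "git::https://github.com/your-org/terraform-modules.git//networking?ref=v1.2.0"

  # ...plus the module's usual inputs (vnet_name, address_space, etc.)
}
```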

6. Store State Remotely and Enable Locking

When using Terraform with a team, always:

  • Use remote state (e.g. Azure Blob Storage)
  • Enable state locking (Azure does this automatically)
  • Separate state files by environment (dev.tfstate, prod.tfstate)

This prevents the dreaded “state file conflict” and keeps infrastructure changes predictable.
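In practice this is a backend block in your root configuration; the resource group, storage account, and container below are placeholders you create up front:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestore123" # must be globally unique
    container_name       = "tfstate"
    key                  = "dev.tfstate"     # one key per environment
  }
}
```

After adding this, run terraform init and Terraform will offer to migrate any existing local state to the new backend.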

7. Structure Your Project with Separation of Concerns

Your root configuration should feel like an orchestrator — calling well-defined modules and stitching them together.

Consider this layout:

project/
├── main.tf
├── variables.tf
├── outputs.tf
├── terraform.tfvars
└── modules/
    ├── networking/
    ├── compute/
    └── storage/

Avoid cluttering the root with too many resource definitions. If it’s not orchestration, it probably belongs in a module.

8. Secure Secrets with Environment-Aware Patterns

Never hard-code secrets into variables or tfvars files. Instead:

  • Use Azure Key Vault to store secrets securely
  • Use data "azurerm_key_vault_secret" to fetch them
  • Or pass secrets via environment variables or CI/CD pipeline variables

This keeps your infrastructure secure and environment-specific.
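For example, the SQL admin password from earlier could be fetched from an existing Key Vault instead of living in a tfvars file (vault and secret names are placeholders):

```hcl
data "azurerm_key_vault" "main" {
  name                = "myapp-kv"
  resource_group_name = "myapp-rg"
}

data "azurerm_key_vault_secret" "db_password" {
  name         = "sql-admin-password"
  key_vault_id = data.azurerm_key_vault.main.id
}

# Then feed it into the database module:
# admin_password = data.azurerm_key_vault_secret.db_password.value
```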

9. Test and Validate Your Modules

Before using a module in production:

  • Run terraform validate and terraform plan
  • Consider using tools like tflint, checkov, or even Terratest for automated testing
  • Create sandbox environments (e.g. dev) to verify changes safely

Infrastructure is code — so treat it like code. Test before you deploy.
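A quick pre-merge check for any module might look like this (tflint is installed separately):

```shell
terraform fmt -check -recursive   # formatting is consistent
terraform validate                # syntax and internal references are sound
tflint                            # catch provider-specific mistakes
terraform plan -out=tfplan        # inspect the changes before applying
```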

10. Use CI/CD for Automation

Automate your Terraform workflows:

  • Use pipelines (GitHub Actions, Azure DevOps, etc.) to validate and apply
  • Store your Terraform state securely via backends
  • Gate changes with pull requests, code reviews, and automated plans

Here’s the golden workflow:

  • Developers propose changes via PR
  • CI runs terraform plan
  • Changes are reviewed
  • CD runs terraform apply once approved

This gives you repeatability, visibility, and safety at scale.

11. Tag Everything

Use consistent tagging across your modules and resources:

tags = {
  environment = var.environment
  project     = var.project_name
  owner       = "team-devops"
}

Tags help with:

  • Cost allocation
  • Ownership tracking
  • Automation and cleanup
  • Dashboards and governance
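A common refinement is to define the shared tags once in a locals block and merge per-resource extras on top:

```hcl
locals {
  common_tags = {
    environment = var.environment
    project     = var.project_name
    owner       = "team-devops"
  }
}

resource "azurerm_storage_account" "example" {
  name                     = "examplestore123" # placeholder, must be globally unique
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # merge() layers resource-specific tags over the shared set
  tags = merge(local.common_tags, {
    purpose = "app-data"
  })
}
```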

12. Refactor and Evolve Gradually

As your infrastructure grows:

  • Refactor larger modules into smaller ones
  • Introduce input validation with validation blocks
  • Add defaults and nullable = false constraints where appropriate
  • Don’t be afraid to change — just use versioning to manage risk

Infrastructure code ages just like application code — so invest in maintaining it.
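An input validation block, for instance, rejects bad values at plan time instead of halfway through an apply:

```hcl
variable "environment" {
  type        = string
  description = "Deployment environment"

  validation {
    condition     = contains(["dev", "test", "prod"], var.environment)
    error_message = "environment must be one of: dev, test, prod."
  }
}
```

With this in place, terraform plan fails fast with a clear message if someone passes environment = "staging".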

Final Thought

Building a great Terraform setup is like designing software: start simple, modularise early, document everything, and evolve with purpose.

Modules are the backbone of reusable, scalable cloud infrastructure. Treat them with the same care and attention as you would production code — and you’ll have a system that’s easy to grow, troubleshoot, and share.

Conclusion: From Modules to Mastery

You’ve done it! 🙌

You started this guide curious about Terraform modules — maybe even a little uncertain. Now you’ve not only learned what modules are and why they matter, but you’ve also built, composed, and reused them in realistic, Azure-based scenarios.

Let’s recap the journey:

  • You began by understanding what Terraform modules are — not just as folders and files, but as a mindset for building maintainable infrastructure.
  • You created your first module and learned how to call it from a root module — getting hands-on experience with input variables and outputs.
  • You explored the concept of root vs child modules — building the mental model that ties everything together.
  • You leveled up to building more capable, realistic modules that encapsulate real-world Azure infrastructure.
  • You discovered the power of composition — using modules within modules to build higher-level components, just like software architecture.
  • You saw how to reuse modules across environments and keep your deployments clean and consistent.
  • You tapped into the Terraform community by learning how to consume public modules — including Azure Verified Modules — with confidence.
  • You tackled best practices that will set you and your team up for long-term success.
  • And finally, you brought it all together with a grand finale project: a fully modular, multi-tier Azure infrastructure, built the right way.

What This Means for You

You’ve gone from individual resources and manual repetition…
To clean, reusable patterns, layered architecture, and production-grade infrastructure.

You now know how to:

  • Build modules that encapsulate complexity
  • Compose infrastructure like LEGO bricks
  • Reuse and share your patterns across teams and projects
  • Keep your infrastructure code clean, scalable, and testable

This isn’t just about writing better Terraform — it’s about thinking like an engineer who builds systems, not scripts.

Final Words

Infrastructure as Code is not just a technical practice — it’s a shift in how we design, collaborate, and deliver on the cloud. And modules are one of the most powerful tools in your IaC toolkit.

Thanks for following along on this journey. I hope you had fun, built confidence, and saw what’s possible with Terraform and Azure.

Stay strong and happy Terraforming!