OpenTofu on Azure: Deploying Azure Infrastructure with OpenTofu

Learn how to deploy a three-tier application on Azure using OpenTofu, the open-source alternative to Terraform. This hands-on walkthrough shows how familiar Terraform workflows translate seamlessly to OpenTofu.

Introduction

If you’ve worked with Terraform before, you’ll feel right at home with OpenTofu. OpenTofu is a community-driven alternative to Terraform, created in response to HashiCorp’s shift to the Business Source License (BSL). In contrast, OpenTofu continues under the Mozilla Public License 2.0 (MPL 2.0), keeping it open-source and truly accessible to individuals and organisations alike.

So what exactly is OpenTofu? In short, it’s a drop-in replacement for Terraform, compatible with version 1.7 and earlier. It maintains the same familiar HCL syntax, supports existing Terraform workflows, and is backed by a growing ecosystem of contributors who are committed to keeping infrastructure as code open and community-governed.

Who this guide is for
This article is aimed at DevOps engineers, cloud practitioners, and infrastructure specialists who are familiar with Terraform and want to explore OpenTofu—either out of curiosity or in response to the licensing changes. If you’re completely new to Infrastructure as Code or to Terraform, consider starting with our Terraform on Azure introduction to build a solid foundation before continuing.

In this step-by-step guide, you’ll learn how to use OpenTofu to deploy a typical multi-tier application on Microsoft Azure. We’ll walk through the process of:

  • Setting up an OpenTofu project tailored for Azure,
  • Defining and deploying infrastructure components like a web server, application server, and database,
  • And configuring networking and inter-tier communication.

This article is not just a tutorial—it’s also meant to show that OpenTofu is a fully capable, production-ready alternative to Terraform. If you’re already comfortable with Terraform, you’ll find that OpenTofu offers the same maturity, syntax, and workflow, but under a truly open-source license. And if licensing concerns have pushed you to explore alternatives, OpenTofu provides a seamless path forward without sacrificing capability.

Tools We’ll Use

Before we get hands-on, let’s take a moment to align on the tools you’ll need to follow along. This guide assumes you’re already working in a cloud or DevOps context, so most of these should feel familiar.

Here’s what we’ll use:

  1. Azure CLI
    We’ll be using the Azure CLI for tasks like authenticating, retrieving IP addresses, and inspecting resources. If you don’t already have it installed, refer to the official Azure CLI installation guide.

  2. Visual Studio Code (or any HCL-compatible editor)
    You’ll need a code editor to manage your OpenTofu configuration files. VS Code is a popular choice with solid HCL support, but feel free to use whatever editor you’re comfortable with.

  3. OpenTofu
    OpenTofu is the infrastructure-as-code tool we’re using throughout this guide. If you’ve used Terraform before, OpenTofu will feel instantly familiar—it’s syntax-compatible with Terraform 1.7 and earlier and follows the same workflow (init, plan, apply, destroy), just under a truly open-source license.

    Choose the installation method that fits your platform:

    • macOS (via Homebrew):

      brew update
      brew install opentofu
      
    • Linux/macOS/Unix (Installer Script):

      curl --proto '=https' --tlsv1.2 -fsSL https://get.opentofu.org/install-opentofu.sh -o install-opentofu.sh
      chmod +x install-opentofu.sh
      ./install-opentofu.sh --install-method standalone
      rm install-opentofu.sh
      
    • Windows (PowerShell):

      Invoke-WebRequest -outfile "install-opentofu.ps1" -uri "https://get.opentofu.org/install-opentofu.ps1"
      & .\install-opentofu.ps1 -installMethod standalone
      Remove-Item install-opentofu.ps1
      

    For additional installation details or troubleshooting, refer to the official OpenTofu documentation.

Environment Setup and Baseline Checks

With our tooling in place, let’s quickly verify that everything is working as expected. These steps aren’t meant to teach you Azure CLI or terminal workflows—they’re here to make sure we’re all starting from the same baseline before we dive into the configuration.

1. Authenticate with Azure

Use the Azure CLI to log in to your Azure account:

az login

This opens a browser window for authentication. Once authenticated, your CLI session will be authorised to create and manage Azure resources. If you’ve used Terraform with the azurerm provider before, this will feel identical—OpenTofu uses the same provider under the hood.

2. Confirm OpenTofu Installation

Run the following command to confirm OpenTofu is installed and available in your shell:

tofu --version

Just like terraform version, this gives you the installed version of the tool and confirms your installation was successful. If the command isn’t recognised, revisit the installation section above.

3. Create a Working Directory

Create a directory where you’ll keep your OpenTofu configuration files:

mkdir opentofu-azure-project
cd opentofu-azure-project

We’ll use this directory throughout the guide to define, plan, and deploy infrastructure. It’s the equivalent of your usual Terraform project root.

Once these steps are complete, we’ll begin defining the infrastructure for our multi-tier application—starting with variables, provider configuration, and networking.

Multi-Tier Application Architecture Overview

Now that your environment is ready, it’s time to look at what we’re going to build.

In this hands-on project, we’ll deploy a simple but realistic three-tier architecture—a pattern widely used in both development and production environments. It consists of three key layers: a web server, an application server, and a database server. If you’ve worked on modern web apps or enterprise systems, this structure will likely feel familiar.

OpenTofu Multi-Tier App on Azure

Here’s how the architecture is structured:

  1. Web Server
    This is the public-facing entry point for users. It handles incoming HTTP requests, serves static content, and forwards dynamic requests to the application layer.

  2. Application Server
    The application server contains the business logic of your application. It processes requests passed from the web server and interacts with the database to fetch or update data. For security, this server is placed on a private subnet and is not directly accessible from the internet.

  3. Database Server
    This is where all persistent data lives—user records, application state, transactions, and so on. It responds only to internal traffic from the application server, not from public endpoints.

In many real-world setups, administrators might also access the web or app servers via SSH for configuration and troubleshooting, but general users interact only through HTTP(S).

We’ve chosen this architecture not just because it’s easy to understand, but because it reflects how real systems are often structured—making this a practical example of how OpenTofu can be used to manage production-style infrastructure.

Next, we’ll start defining the infrastructure in code. Open your project folder in VS Code (or your preferred editor), create a file named main.tf, and let’s begin configuring each layer, step by step.

Defining the Multi-Tier Infrastructure

Now that we’ve explored the architecture, let’s start defining it using OpenTofu.

In this section, we’ll walk through the core infrastructure setup—resource group, networking, virtual machines, and database—expressed in OpenTofu’s configuration language. If you’ve used Terraform before, you’ll find this process immediately familiar: we’re defining resources declaratively, using the same HCL syntax, and applying the same principles of stateful, reproducible infrastructure.

Why this matters
This isn’t just about writing configuration files—it’s about showing that OpenTofu gives you the same power and workflow you’ve come to expect from Terraform. You can define infrastructure in a way that’s modular, version-controlled, and production-ready—without changing how you work or what you already know. This hands-on project mirrors common cloud deployment patterns, making it a practical demonstration of how seamless the switch to OpenTofu can be.

We’ll break the configuration into small, focused parts so you can follow the logic step by step.

1. Define Variables and Azure Provider

We begin by defining a few reusable variables, such as the resource group name and location. Then we configure the Azure provider, which tells OpenTofu to use Microsoft Azure as the cloud platform.

variable "resource_group_name" {
  description = "The name of the resource group in which to create the resources."
  type        = string
  default     = "iaMachs_rg"
}

variable "location" {
  description = "The location/region where the resources will be created."
  type        = string
  default     = "Australia East"
}

variable "vm_size" {
  description = "The size of the Virtual Machines."
  type        = string
  default     = "Standard_B2s"
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "iaMachs_rg" {
  name     = var.resource_group_name
  location = var.location
}

Why this matters: The provider block sets up our connection to Azure, and variables make our code flexible and easier to update or reuse across environments.
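
The configuration above lets OpenTofu resolve the azurerm provider implicitly. If you prefer to pin the provider version, which this walkthrough doesn’t strictly require, a minimal sketch looks like this (the version constraint is illustrative, not something the guide depends on):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"  # illustrative constraint; pin to whatever you've tested against
    }
  }
}

OpenTofu reads the same terraform block that Terraform does, so existing configurations carry over unchanged.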

2. Define the Virtual Network and Subnets

Next, we create a virtual network to logically isolate our infrastructure. Inside it, we define two subnets—one for the web server and one for the application server.

resource "azurerm_virtual_network" "iaMachs_VNet" {
  name                = "iaMachs-vnet"
  resource_group_name = azurerm_resource_group.iaMachs_rg.name
  location            = azurerm_resource_group.iaMachs_rg.location
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "iaMachs_Web_Subnet" {
  name                 = "iaMachs-web-subnet"
  resource_group_name  = azurerm_resource_group.iaMachs_rg.name
  virtual_network_name = azurerm_virtual_network.iaMachs_VNet.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_subnet" "iaMachs_App_Subnet" {
  name                 = "iaMachs-app-subnet"
  resource_group_name  = azurerm_resource_group.iaMachs_rg.name
  virtual_network_name = azurerm_virtual_network.iaMachs_VNet.name
  address_prefixes     = ["10.0.2.0/24"]
}

Quick note: Separating workloads into subnets helps with security and network segmentation. For example, the application server shouldn’t be exposed directly to the internet, so we place it in a private subnet.
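
If you want to enforce that isolation in code rather than rely on convention, a network security group on the app subnet is the usual mechanism. The following is a minimal, hedged sketch that the rest of this guide doesn’t depend on: it admits inbound TCP traffic from the web subnet and denies other inbound traffic from the VNet.

resource "azurerm_network_security_group" "iaMachs_App_NSG" {
  name                = "iaMachs-app-nsg"
  location            = azurerm_resource_group.iaMachs_rg.location
  resource_group_name = azurerm_resource_group.iaMachs_rg.name

  # Allow traffic originating from the web subnet (10.0.1.0/24)
  security_rule {
    name                       = "allow-from-web-subnet"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "10.0.1.0/24"
    destination_address_prefix = "*"
  }

  # Deny everything else arriving from inside the VNet
  security_rule {
    name                       = "deny-other-vnet-inbound"
    priority                   = 200
    direction                  = "Inbound"
    access                     = "Deny"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "VirtualNetwork"
    destination_address_prefix = "*"
  }
}

resource "azurerm_subnet_network_security_group_association" "iaMachs_App_NSG_Assoc" {
  subnet_id                 = azurerm_subnet.iaMachs_App_Subnet.id
  network_security_group_id = azurerm_network_security_group.iaMachs_App_NSG.id
}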

3. Define the Web Server Resources

The web server is public-facing, so we’ll assign it a public IP and attach that to its network interface. Then we’ll define the virtual machine that hosts the web tier.

resource "azurerm_public_ip" "iaMachs_Web_PIP" {
  name                = "iaMachs-web-pip"
  location            = azurerm_resource_group.iaMachs_rg.location
  resource_group_name = azurerm_resource_group.iaMachs_rg.name
  allocation_method   = "Dynamic"
}

resource "azurerm_network_interface" "iaMachs_Web_NIC" {
  name                = "iaMachs-web-nic"
  location            = azurerm_resource_group.iaMachs_rg.location
  resource_group_name = azurerm_resource_group.iaMachs_rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.iaMachs_Web_Subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.iaMachs_Web_PIP.id
  }
}

resource "azurerm_virtual_machine" "iaMachs_Web_VM" {
  name                  = "iaMachs-web-vm"
  location              = azurerm_resource_group.iaMachs_rg.location
  resource_group_name   = azurerm_resource_group.iaMachs_rg.name
  network_interface_ids = [azurerm_network_interface.iaMachs_Web_NIC.id]
  vm_size               = var.vm_size

  storage_os_disk {
    name              = "web-os-disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "iaMachs-web-vm"
    admin_username = "adminuser"
    admin_password = "P@ssw0rd1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  depends_on = [azurerm_public_ip.iaMachs_Web_PIP]
}

Tip: In production, you’d typically use SSH keys instead of passwords and consider additional security like NSGs and firewalls.
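
For reference, here’s roughly how that looks with the legacy azurerm_virtual_machine resource used in this guide. It’s a hedged sketch that assumes you have a public key at ~/.ssh/id_rsa.pub; swap in your own key path.

  os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys {
      path     = "/home/adminuser/.ssh/authorized_keys"
      key_data = file("~/.ssh/id_rsa.pub")  # assumed local key path
    }
  }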

4. Define the Application Server Resources

This VM runs internally, without a public IP. It will communicate with the web server and the database, but it’s not directly accessible from outside the VNet.

resource "azurerm_network_interface" "iaMachs_App_NIC" {
  name                = "iaMachs-app-nic"
  location            = azurerm_resource_group.iaMachs_rg.location
  resource_group_name = azurerm_resource_group.iaMachs_rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.iaMachs_App_Subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "iaMachs_App_VM" {
  name                  = "iaMachs-app-vm"
  location              = azurerm_resource_group.iaMachs_rg.location
  resource_group_name   = azurerm_resource_group.iaMachs_rg.name
  network_interface_ids = [azurerm_network_interface.iaMachs_App_NIC.id]
  vm_size               = var.vm_size

  storage_os_disk {
    name              = "app-os-disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "iaMachs-app-vm"
    admin_username = "adminuser"
    admin_password = "P@ssw0rd1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

5. Define the Database Server Resources

Finally, we define an Azure SQL Server instance and a SQL database to store our application’s data. Note that the server name must be globally unique, because it becomes part of the server’s .database.windows.net hostname.

resource "azurerm_mssql_server" "iaMachs_SQL_Server" {
  name                         = "iamachs-sql-server"
  resource_group_name          = azurerm_resource_group.iaMachs_rg.name
  location                     = azurerm_resource_group.iaMachs_rg.location
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "P@ssw0rd1234!"
}

resource "azurerm_mssql_database" "iaMachs_SQL_DB" {
  name      = "iamachs-sql-db"
  server_id = azurerm_mssql_server.iaMachs_SQL_Server.id
  sku_name  = "S0"
}

Why it matters: Azure SQL Database is a managed service with its own endpoint, so it sits outside our VNet rather than on a private subnet. Keeping access limited to the application tier, rather than opening the server broadly, reinforces the principle of least privilege.
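
One hedged way to express that restriction in code is a virtual network rule that admits only the app subnet. This sketch isn’t applied in the walkthrough, and it assumes you also add the Microsoft.Sql service endpoint to the app subnet (service_endpoints = ["Microsoft.Sql"] on iaMachs_App_Subnet):

resource "azurerm_mssql_virtual_network_rule" "iaMachs_SQL_VNet_Rule" {
  name      = "allow-app-subnet"
  server_id = azurerm_mssql_server.iaMachs_SQL_Server.id
  subnet_id = azurerm_subnet.iaMachs_App_Subnet.id  # subnet needs the Microsoft.Sql service endpoint
}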

6. Define Output Variables

Before we move on to deployment, let’s expose a few useful values from our configuration—specifically the public IP of the web server and the private IP of the application server. These outputs will make it easier to test and connect to the right machines after deployment.

output "web_server_public_ip" {
  value       = azurerm_public_ip.iaMachs_Web_PIP.ip_address
  description = "Public IP address of the web server"
}

output "app_server_private_ip" {
  value       = azurerm_network_interface.iaMachs_App_NIC.private_ip_address
  description = "Private IP address of the application server"
}

Tip: Once your infrastructure is deployed, you can retrieve these values using:

tofu output web_server_public_ip
tofu output app_server_private_ip

Notice that we’re using tofu output, just like you would use terraform output in a typical Terraform workflow. The process is identical—another example of how OpenTofu maintains full compatibility with the Terraform experience you’re already familiar with.
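
One caveat worth knowing: because the public IP uses allocation_method = "Dynamic", Azure only assigns an address once the IP is attached to a running VM, so web_server_public_ip can come back empty immediately after the first apply. Refreshing state, or asking Azure directly, picks it up:

tofu refresh
tofu output web_server_public_ip

# Or query Azure directly (resource names match this configuration)
az vm show -d -g iaMachs_rg -n iaMachs-web-vm --query publicIps -o tsv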

Curious to learn more?
If you’d like to explore the full potential of variables in Infrastructure as Code—especially how they’re used in production-ready setups—we cover this in detail in our Terraform Variables Explained article from the Terraform on Azure series.

With the full infrastructure now defined in code, you’re ready to deploy it using OpenTofu. In the next section, we’ll walk through the deployment process step by step.

Deploying the Infrastructure

With the infrastructure now fully defined in code, the next step is to deploy it to Microsoft Azure using OpenTofu. This is where your .tf configuration files turn into real, running resources in the cloud.

We’ll walk through the two main steps required to initialise and apply the configuration.

1. Initialise the Project

Run the following command in your project directory:

tofu init

This command downloads the required provider plugins—in this case, the Azure provider—and sets up your local working directory for deployment.

Note: If you’ve used terraform init before, this will feel completely familiar. OpenTofu follows the same workflow—just a new command name with the same underlying behaviour.

It’s important to run this whenever you start a new OpenTofu project or introduce new providers or modules.

2. Apply the Configuration

Once the configuration has been initialised, you’re ready to apply it:

tofu apply

OpenTofu will present an execution plan, summarising the resources it’s about to create:

Plan: 10 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  OpenTofu will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Type yes when prompted to proceed.

Just like with Terraform: This command validates your configuration, shows you the plan, and waits for confirmation before provisioning. The experience is intentionally familiar—OpenTofu is meant to be a drop-in replacement, so you can keep your workflow and tooling exactly as it is.

This step may take a few minutes as Azure provisions the infrastructure. Once complete, OpenTofu will display a list of the resources created and any relevant output values you’ve configured.

Tip: If anything goes wrong, review the error messages closely. OpenTofu’s error feedback is usually helpful in identifying missing variables, authentication issues, or configuration problems.
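
If you’d rather review the plan as a separate step before anything is created, the equivalent of terraform plan works as expected: save the plan to a file, inspect it, then apply exactly that plan.

tofu plan -out=tfplan
tofu apply tfplan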

Verifying the Deployment

After deploying the infrastructure, it’s a good practice to run a few basic checks to confirm that the components were provisioned successfully and can communicate with each other as expected.

What we’re doing here is intentionally simple.
We’re not aiming to fully test a production environment or run application-level health checks. Instead, we’re tracing the flow of network connectivity through each tier of the architecture: from the public-facing web server to the internal application server, and finally to the managed database service. These checks help us verify that everything is wired up correctly.

1. Retrieve IP Addresses from Outputs

Earlier, we defined output variables to expose the key IP addresses. You can retrieve those values directly using OpenTofu:

# Get the web server's public IP
tofu output web_server_public_ip

# Get the app server's private IP
tofu output app_server_private_ip

This is the same approach you would use with Terraform. The tofu output command behaves exactly like terraform output, giving you access to useful values declared in your configuration.

2. Install Nginx on the Web Server

SSH into the web server using the public IP you retrieved above:

ssh adminuser@<web-vm-public-ip>

In a real-world setup, this might be part of an automated provisioning step, but for this demo, we’re doing it manually for simplicity.

Once connected, install Nginx:

sudo apt-get update
sudo apt-get install -y nginx
sudo systemctl enable nginx
sudo systemctl start nginx

Why Nginx?
It’s a common and lightweight web server that’s easy to install and recognise. Seeing the default “Welcome to nginx” page helps confirm that the web server is running, reachable, and serving traffic correctly.

3. Test the Web Server

Open a browser and navigate to the public IP of the web server.

If everything is working, you should see the default Nginx welcome page. That tells us:

  • The VM was provisioned correctly
  • The networking is working
  • HTTP traffic is successfully reaching the server from outside the virtual network
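
If you prefer the terminal, a quick curl against the same public IP gives an equivalent confirmation; look for the default nginx welcome page in the HTML response:

curl http://<web-vm-public-ip>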

4. Test Communication Between Tiers

Now that you’re on the web server, let’s try reaching the application server internally.

Still inside your SSH session on the web server, run:

ssh adminuser@<app-vm-private-ip>

This step helps validate subnet-level communication. Since the app server doesn’t have a public IP, we can only reach it from within the virtual network.

Note: If you try to SSH into the app server directly from your local machine using its private IP, it won’t work. That’s by design—private IPs are only routable inside the virtual network. But if you can connect from the web server, it means the internal networking setup is working as intended.

5. Verify SQL Server Connectivity

Now that we’ve confirmed internal VM communication, we’ll finish by checking connectivity to the Azure SQL Server—our managed database service.

From the application server, we’ll test whether it can reach the SQL endpoint on port 1433 (the default port for SQL Server).

Step 1: Install Telnet

Telnet is a simple tool that allows us to test TCP connectivity:

sudo apt-get install telnet

Step 2: Attempt the Connection

telnet iamachs-sql-server.database.windows.net 1433

You should see output like:

Trying 20.53.46.128...
Connected to cr12.australiaeast1-a.control.database.windows.net.
Escape character is '^]'.
Connection closed by foreign host.

What this means:
This output confirms that the SQL server is reachable on port 1433. The connection was accepted and then closed because SQL Server doesn’t actually speak Telnet—but the fact that it responded means the network path is open.

If something goes wrong:

  • A “connection refused” message likely means there’s a firewall or NSG blocking access
  • A “connection timed out” could indicate a DNS issue or that the service isn’t reachable from the current network context
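
If telnet isn’t available on your image, netcat gives an equivalent TCP check (it’s usually present on Ubuntu server images, but that’s worth verifying):

nc -zv iamachs-sql-server.database.windows.net 1433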

Once you’ve completed these tests, you’ve successfully validated the connectivity across your three-tier setup:

  • From the internet to the web server
  • From the web server to the app server
  • From the app server to the managed SQL database

In a real production environment, you’d rely on automated health checks, monitoring, and app-level testing. But as a foundational walkthrough, this setup is a clear and practical way to trace and validate the core infrastructure.

In the next section, we’ll walk through how to clean up these resources to avoid any ongoing Azure costs.

Cleaning Up Resources

Once you’re done with your deployment and testing, it’s good practice to clean up any unused resources. This helps prevent unnecessary Azure charges and keeps your environment tidy.

1. Destroy the Infrastructure with OpenTofu

Navigate to the same directory where you previously ran tofu apply, and run:

tofu destroy

OpenTofu will present a destruction plan and prompt for confirmation. Type yes to proceed. This will delete all resources defined in your configuration.

Just like terraform destroy, this command tears down the infrastructure you’ve provisioned using OpenTofu. The workflow is identical—another example of how OpenTofu maintains the same user experience, while giving you the benefits of a fully open-source, community-driven tool.

Note: This step only removes the resources OpenTofu created. It won’t affect any manually provisioned resources or unmanaged dependencies.

2. Clean Up OS Disks Manually

The azurerm_virtual_machine resource does not delete OS managed disks on destroy unless you explicitly opt in (its delete_os_disk_on_termination argument defaults to false). This is a safety mechanism to avoid accidental data loss. If you’d like to fully clean up, including these disks, follow the steps below:

a. List Managed Disks

az disk list --resource-group iaMachs_rg --output table

b. Delete Disks by Name

Identify the names of the OS disks, then delete each one individually:

az disk delete --name <disk-name> --resource-group iaMachs_rg --yes

Repeat this for each disk associated with the VMs in your deployment.

Why this matters: Azure charges for managed disks even if the associated VM is deleted. Manually removing leftover disks ensures you don’t get billed for resources you’re no longer using.
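
Alternatively, and this is an opt-in rather than something the configuration above enables, you can tell the azurerm_virtual_machine resources to remove their OS disks when the VM is destroyed, so a future tofu destroy cleans them up automatically:

# Add inside each azurerm_virtual_machine block before applying
delete_os_disk_on_termination = true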

Conclusion

OpenTofu is a powerful and open alternative to Terraform, offering the same infrastructure-as-code capabilities—without the licensing constraints. In this guide, we built and deployed a practical, three-tier application architecture on Microsoft Azure. Along the way, you saw how OpenTofu handles everything from defining resources to provisioning them in the cloud, using a clear, declarative approach.

More importantly, this walkthrough was designed to demonstrate that OpenTofu isn’t just a side project—it’s a mature, production-ready tool that slots right into the Terraform workflow you’re already familiar with. From syntax to workflow to outputs and destroy operations, the experience remains consistent—just under a truly open and community-driven license.

With OpenTofu now generally available (GA), and supported by an active and fast-growing community, it’s a compelling option for individuals and teams looking to build modern cloud infrastructure with freedom, transparency, and long-term flexibility.

Want to go further?
Now that you’ve seen how OpenTofu fits into a standard deployment workflow, try extending this setup:

  • Add another VM or tier
  • Break the code into reusable modules
  • Integrate with remote state or CI/CD pipelines (a starter backend sketch follows below)
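
As a starting point for remote state, here’s a minimal, hedged sketch of the azurerm backend. The resource group, storage account, and container names are placeholders; you’d create them first and re-run tofu init before this takes effect.

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"        # placeholder
    storage_account_name = "tfstatestorage"    # placeholder; must be globally unique
    container_name       = "tfstate"           # placeholder
    key                  = "opentofu-azure-project.tfstate"
  }
}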

And if you’re curious about how variables, modules, or state management work in Terraform (and by extension, OpenTofu), check out our Terraform on Azure series for deeper dives into real-world usage patterns.

Thanks for following along. If you have thoughts, feedback, or ideas for future walkthroughs, I’d love to hear them. Happy coding!