Terraform Remote State on Azure: Moving State Out of Your Local Folder

Introduction: When Local State Stops Being Enough

So far, we have been running Terraform from our own project folder.

That made sense while we were learning.

We could write Terraform code, run terraform init, check the plan, apply the changes, and see Terraform create resources in Azure.

Behind the scenes, Terraform also created something important in the same folder:

terraform.tfstate

That file is Terraform’s state file.

It is how Terraform remembers what it has already created.

Your Terraform configuration says what you want.

Azure contains the real resources.

The state file connects those two worlds together.

For a small learning project, local state is fine.

You are the only person running Terraform. The state file is easy to find. Everything lives close to the code, and the workflow is simple.

But real projects usually do not stay that way.

At some point, the Terraform code may live in Git.

Another engineer may need to run the same Terraform project.

A pipeline may need to run terraform plan or terraform apply.

The infrastructure may become more important.

And the state file should no longer depend on one person’s laptop.

If the state file is lost, Terraform loses track of what it manages.

If two people run Terraform separately with different local state files, they may not be working from the same view of the infrastructure.

If the state file is committed to Git by mistake, sensitive information may be exposed.

So the problem is not just where the Terraform code lives.

The problem is also where Terraform stores the truth about what it has created.

That is what remote state solves.

Instead of keeping terraform.tfstate only in the local project folder, Terraform can store state in a remote backend.

On Azure, a common choice is an Azure Storage Account with a Blob Container.

The Terraform code still lives in your project.

The infrastructure still lives in Azure.

But the state file moves to a safer shared location:

Terraform project
→ Azure Storage backend
→ Blob Container
→ terraform.tfstate

In this article, we will move from local state to remote state on Azure.

We will create a Storage Account and Blob Container for Terraform state, configure Terraform to use the Azure backend, migrate state into Azure Storage, and look at why this matters for real projects.

By the end, remote state should feel like a practical tool for one clear job:

Store Terraform state somewhere safer than one local project folder.

What Terraform Remote State Solves

Local state is simple.

Terraform creates a terraform.tfstate file in your project folder, and that file records what Terraform has created.

For a learning project, that works well.

You write the code.

You run Terraform.

Terraform updates the local state file.

Everything stays on your machine.

But as soon as the project becomes more serious, local state starts to create problems.

The first problem is that local state is tied to one folder on one machine.

If you run Terraform from your laptop, the state file lives there.

If another engineer clones the same Terraform code from Git, they get the code, but they do not automatically get the same state.

That means they may have the same Terraform configuration, but not the same understanding of what Terraform already manages.

The code might say:

Create this Resource Group.
Create this virtual network.
Create this subnet.

But the state file says:

These are the real Azure resources Terraform is already tracking.

Both matter.

Terraform needs the configuration and the state together.

The second problem is safety.

If the state file only lives locally, it can be deleted, overwritten, or left behind when someone changes machines.

It can also be committed to Git by mistake.

That matters because Terraform state can contain sensitive information. Depending on what your configuration manages, state may include resource IDs, generated values, connection strings, secrets, or other details you would not want stored casually in a repository.

The third problem is collaboration.

In a real project, Terraform may be run by more than one person, or by a CI/CD pipeline.

If two people are working from separate local state files, they are not really working on the same Terraform deployment.

They may both think they know what Terraform manages.

But their state files may tell different stories.

That is risky.

Terraform remote state solves this by moving the state file into a shared backend.

Instead of this:

Terraform project folder
└── terraform.tfstate

We move to this:

Terraform project
→ remote backend
→ shared state file

On Azure, that usually means storing the state file in Azure Blob Storage:

Terraform project
→ azurerm backend
→ Azure Storage Account
→ Blob Container
→ terraform.tfstate

Now the state no longer depends on one local folder.

The Terraform code can still live in Git.

Engineers can still work from their own machines.

Pipelines can still run Terraform from build agents.

But they all use the same remote state location.

That gives Terraform one shared view of the infrastructure.

Remote state also helps with locking.

When Terraform is making changes, it needs to update the state file. If two Terraform runs try to update the same state at the same time, the state can become inconsistent.

A remote backend can help prevent that by locking the state during an operation.

The flow looks like this:

terraform apply starts
→ Terraform locks the state
→ Terraform makes the infrastructure changes
→ Terraform updates the state
→ Terraform releases the lock

That does not mean remote state magically solves every teamwork problem.

You still need good access control.

You still need a sensible environment structure.

You still need to be careful with who can run terraform apply.

But remote state gives you the foundation.

It moves Terraform’s source of truth out of one local folder and into a safer shared place.

That is what remote state solves:

Without remote state
→ state lives on one machine
→ collaboration is risky
→ pipelines have no shared state location

With remote state
→ state lives in a shared backend
→ Terraform has one source of truth
→ teams and pipelines can work from the same state

In the next section, we will look at what we are building on Azure to support that remote state.

What We Are Building

For this article, we are going to build one thing:

A safe remote home for Terraform state in Azure.

We are not building a full application platform.

We are not deploying virtual machines, AKS, databases, or private networking.

We are only creating the Azure resources Terraform needs so it can store its state remotely.

The remote state setup will have three Azure pieces:

Resource Group
→ holds the backend storage resources

Storage Account
→ stores the state file

Blob Container
→ contains the Terraform state blob

Once those pieces exist, Terraform can use them as a backend.

The final flow will look like this:

Terraform project
→ azurerm backend
→ Azure Storage Account
→ Blob Container
→ terraform.tfstate

The Terraform project is still the folder where your .tf files live.

The Azure infrastructure you manage still lives in Azure.

The difference is that the state file will no longer live only beside your code.

Instead, Terraform will store it in Azure Blob Storage.

That gives us a cleaner split:

Terraform code
→ lives in your project folder and can be committed to Git

Azure resources
→ live in Azure

Terraform state
→ lives in an Azure Storage Account backend

To build that, we will do four things.

First, we will create a small backend storage setup in Azure:

tfstate Resource Group
tfstate Storage Account
tfstate Blob Container

Then we will add a backend configuration to Terraform:

terraform {
  backend "azurerm" {
    resource_group_name  = "..."
    storage_account_name = "..."
    container_name       = "..."
    key                  = "dev.terraform.tfstate"
  }
}

Then we will run terraform init again.

This is the point where Terraform sees the backend configuration and prepares the project to use remote state.

If there is already local state, Terraform can migrate it into the Azure backend.

Finally, we will verify that the state file now exists in the Blob Container.

The end result is simple:

Before
→ terraform.tfstate is stored locally

After
→ terraform.tfstate is stored in Azure Blob Storage

That is the target for this article.

Move Terraform state out of the local project folder and into a safer Azure backend.

Quick Recap: What Terraform State Is

Before we configure remote state, it is worth slowing down for a moment and remembering what Terraform state actually does.

Terraform state is how Terraform keeps track of the real infrastructure it manages.

Your Terraform code describes what you want:

resource "azurerm_resource_group" "rg" {
  name     = "dev-infra-rg"
  location = "Australia East"
}

Azure contains the real resource:

/subscriptions/.../resourceGroups/dev-infra-rg

Terraform state connects those two things.

It records that this Terraform resource block:

azurerm_resource_group.rg

matches this real Azure resource:

/subscriptions/.../resourceGroups/dev-infra-rg

That mapping is important.

When you run terraform plan, Terraform compares three things:

Terraform configuration
→ what your code says you want

Terraform state
→ what Terraform believes it already manages

Azure
→ what actually exists

Then Terraform works out what needs to change.

Without state, Terraform does not have a reliable memory of what it created before.

It might see your configuration and know what you want, but it would not know which real Azure resources are already connected to that configuration.

That is why state matters.

It is not just a log file.

It is not just a history file.

It is Terraform’s working record of the infrastructure it manages.

For local learning, that record can live in a local terraform.tfstate file.

For real projects, that record should live somewhere safer.

That is what we are going to configure next.

The Backend Idea

Now that we have reminded ourselves what state does, we can introduce the next Terraform concept:

backend

A backend is where Terraform stores its state.

So far, Terraform has been using the default backend: the local backend.

That means Terraform stores state in a local file:

terraform.tfstate

The local backend looks like this:

Terraform project folder
└── terraform.tfstate

That is simple, and it is exactly what we want while learning.

But now we want Terraform to store state somewhere else.

We want this:

Terraform project
→ Azure backend
→ Azure Storage Account
→ Blob Container
→ terraform.tfstate

That is where the AzureRM backend comes in.

The AzureRM backend tells Terraform to store its state in Azure Blob Storage instead of a local file.

The important thing to understand is that the backend does not change what Terraform creates.

It only changes where Terraform stores its state.

Your Terraform code still describes Azure resources.

Your Azure resources still live in Azure.

Your terraform plan and terraform apply workflow still feels familiar.

The difference is where Terraform keeps its working record.

Before
→ Terraform state lives in the local project folder

After
→ Terraform state lives in Azure Blob Storage

The backend configuration tells Terraform where that remote state should live.

It needs to know four main things:

resource group name
→ where the backend storage account exists

storage account name
→ the storage account that holds the state

container name
→ the blob container inside the storage account

key
→ the name/path of the state file inside the container

The key can be confusing at first.

In this context, key does not mean an encryption key or an Azure Key Vault secret.

It means the blob name Terraform will use for the state file.

For example:

dev.terraform.tfstate

So the backend configuration is basically saying:

Store this Terraform project’s state
inside this storage account,
inside this blob container,
using this blob name.

That is the backend idea.

Terraform still runs from your project folder.

But the state file lives in Azure.

Create the Azure Storage Backend

Before Terraform can store state in Azure, the backend storage needs to exist.

That gives us a small chicken-and-egg problem.

Terraform can use Azure Storage as a backend, but the Storage Account and Blob Container must already exist before Terraform can put state there.

There are a few ways to handle this.

You can create the backend storage with a separate bootstrap Terraform project.

You can create it through the Azure Portal.

Or you can create it with Azure CLI.

For this beginner-friendly walkthrough, we will use Azure CLI.

That keeps the focus on the backend idea, not on building another Terraform project just to hold the state storage.

We are going to create three things:

Resource Group
→ holds the backend storage resources

Storage Account
→ stores the Terraform state file

Blob Container
→ contains the state blob

Sign in to Azure

First, sign in:

az login

Then make sure you are using the right subscription:

az account show

If you need to switch subscriptions, run:

az account set --subscription "<subscription-id-or-name>"

The backend storage should be created in the Azure subscription where you want to keep Terraform state.

Set Some Values

To make the commands easier to read, set a few variables in your terminal.

If you are using Bash, run:

RESOURCE_GROUP_NAME="tfstate-rg"
LOCATION="australiaeast"
STORAGE_ACCOUNT_NAME="tfstate$RANDOM"
CONTAINER_NAME="tfstate"

The Storage Account name must be globally unique across Azure.

It also needs to be between 3 and 24 characters long, using lowercase letters and numbers only.

That is why the example uses:

tfstate$RANDOM

You can replace it with your own name, for example:

tfstateahmed001

Just make sure the name is unique.
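Before running the create command, you can sanity-check the name format locally. A small Bash sketch (the example name is hypothetical):

```shell
# Local format check: storage account names must be 3-24 characters,
# lowercase letters and numbers only.
STORAGE_ACCOUNT_NAME="tfstateahmed001"   # hypothetical example name

if [[ "$STORAGE_ACCOUNT_NAME" =~ ^[a-z0-9]{3,24}$ ]]; then
  echo "Name format looks valid: $STORAGE_ACCOUNT_NAME"
else
  echo "Invalid name format: $STORAGE_ACCOUNT_NAME" >&2
fi
```

This only checks the format. To check global uniqueness as well, az storage account check-name --name "$STORAGE_ACCOUNT_NAME" will tell you whether the name is still available.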

Create the Resource Group

Now create the Resource Group:

az group create \
  --name "$RESOURCE_GROUP_NAME" \
  --location "$LOCATION"

This Resource Group is only for the backend storage resources.

It is not the Resource Group where your application infrastructure will live.

Think of it as the place where Terraform keeps its own working record.

tfstate-rg
→ Storage Account
→ Blob Container
→ Terraform state file

Create the Storage Account

Next, create the Storage Account:

az storage account create \
  --name "$STORAGE_ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP_NAME" \
  --location "$LOCATION" \
  --sku Standard_LRS \
  --kind StorageV2

This Storage Account will hold the Terraform state file.

For this walkthrough, Standard_LRS is enough.

In a production environment, you may also care about things like network restrictions, private endpoints, soft delete, versioning, immutability, and tighter access control.

But we will keep the first version simple.

The goal here is to understand remote state first.

Create the Blob Container

Now create the Blob Container:

az storage container create \
  --name "$CONTAINER_NAME" \
  --account-name "$STORAGE_ACCOUNT_NAME"

The container is where the state blob will live.

Later, Terraform will create a blob inside this container using the name we set in the backend configuration.

For example:

dev.terraform.tfstate

At this point, the backend storage exists:

tfstate-rg
└── Storage Account
    └── Blob Container: tfstate
        └── state file will be stored here

We have not moved Terraform state yet.

We have only created the Azure location where Terraform state can live.

In the next section, we will configure Terraform to use this Storage Account and Blob Container as its backend.

Configure Terraform to Use the Azure Backend

Now that the backend storage exists in Azure, we can tell Terraform to use it.

So far, Terraform has been using the local backend by default.

That means the state file has been stored in the project folder as:

terraform.tfstate

Now we want Terraform to use the Storage Account and Blob Container we created.

To do that, we add a backend configuration to the Terraform project.

In your Terraform project, create or update a file called versions.tf.

Add this:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "<your-storage-account-name>"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
  }
}

Replace this value:

storage_account_name = "<your-storage-account-name>"

with the Storage Account name you created earlier.

For example:

storage_account_name = "tfstateahmed001"

The backend block is the important new part:

backend "azurerm" {
  resource_group_name  = "tfstate-rg"
  storage_account_name = "tfstateahmed001"
  container_name       = "tfstate"
  key                  = "dev.terraform.tfstate"
}

This tells Terraform:

Store this project’s state
in the tfstate-rg Resource Group,
inside this Storage Account,
inside the tfstate Blob Container,
using dev.terraform.tfstate as the blob name.

The key value is the name of the state file inside the container.

So this:

key = "dev.terraform.tfstate"

means Terraform will store the state as a blob called:

dev.terraform.tfstate

That name is useful because later, when you have more than one environment, you can use different state keys:

dev.terraform.tfstate
test.terraform.tfstate
prod.terraform.tfstate

For now, we are only using one state file.

But giving it a clear name now makes the pattern easier to grow later.

One important detail: backend configuration is loaded very early by Terraform.

That means backend blocks cannot use normal Terraform variables like this:

backend "azurerm" {
  storage_account_name = var.storage_account_name
}

That will not work, because Terraform must configure the backend before it evaluates variables or the rest of the configuration.

For this beginner walkthrough, we will keep the backend values written directly in the backend block.

The important thing is the idea:

Terraform starts in your project folder,
reads the backend configuration,
connects to Azure Storage,
and stores state there instead of using a local state file.

At this point, the project knows where remote state should live.

But Terraform has not moved the state yet.

To make Terraform start using this backend, we need to run terraform init again.

Run terraform init and Migrate the State

Now that the backend block exists, Terraform needs to initialise the project again.

Run this from your Terraform project folder:

terraform init

This time, terraform init does more than install the provider.

Terraform reads the backend block:

backend "azurerm" {
  resource_group_name  = "tfstate-rg"
  storage_account_name = "tfstateahmed001"
  container_name       = "tfstate"
  key                  = "dev.terraform.tfstate"
}

Then it prepares the project to use Azure Storage as the backend.

If this project already has local state, Terraform may detect that the backend has changed and ask whether you want to migrate the existing state.

That is the important moment.

Terraform is effectively saying:

I found state in the local project folder.
I found a new backend configuration.
Do you want me to copy the state into the new backend?

For this walkthrough, the answer is yes.

You want Terraform to move the existing state from the local terraform.tfstate file into Azure Blob Storage.

The flow looks like this:

Before
→ local terraform.tfstate file

terraform init
→ reads backend configuration
→ connects to Azure Storage
→ migrates state

After
→ state blob in Azure Storage

Once the migration finishes, Terraform will use the remote backend for future operations.

That means when you run:

terraform plan

or:

terraform apply

Terraform will read and update the state from Azure Blob Storage instead of relying on the local terraform.tfstate file.

The workflow still feels familiar.

You still run Terraform from the project folder.

You still edit .tf files locally.

You still use plan and apply.

But the state now lives in Azure:

Terraform project folder
→ reads backend configuration
→ uses state from Azure Blob Storage

If Terraform cannot connect to the backend, check the basics first:

Is the Resource Group name correct?
Is the Storage Account name correct?
Is the Blob Container name correct?
Are you signed in to the right Azure subscription?
Does your account have permission to read and write blobs?

Those are the most common issues at this stage.

The backend configuration has to point to the exact Azure resources you created earlier.

Once terraform init succeeds, the project is now using remote state.

Verify the State Is Now Remote

Now that terraform init has succeeded, we should confirm that Terraform is really using Azure Storage for state.

There are a few ways to check this.

The simplest way is to look inside the Storage Account.

Go to the Azure Portal and open the Storage Account you created for Terraform state.

Then go to:

Data storage
→ Containers
→ tfstate

Inside the container, you should see a blob with the name you used in the backend block.

For example:

dev.terraform.tfstate

That blob is now the Terraform state file for this project.

The flow has changed from this:

Terraform project folder
└── terraform.tfstate

To this:

Terraform project
→ Azure Storage Account
→ Blob Container
→ dev.terraform.tfstate

You can also check using Azure CLI:

az storage blob list \
  --account-name "<your-storage-account-name>" \
  --container-name "tfstate" \
  --output table

You should see a blob with the same state key from your backend configuration.

If your backend block used:

key = "dev.terraform.tfstate"

then the blob should be called:

dev.terraform.tfstate

At this point, you may still see a .terraform/ folder in your local project.

That is normal.

Terraform still keeps some local working files and backend metadata in the project directory.

But the actual state is now stored in Azure Blob Storage.

The important distinction is:

.terraform/
→ local Terraform working directory

Azure Blob Storage
→ remote Terraform state location

You can also run another plan:

terraform plan

If nothing has changed, Terraform should report that there are no changes to make.

But behind the scenes, it is now reading state from the Azure backend.

That is the main verification.

The state blob exists in Azure Storage.

Terraform can initialise successfully.

Terraform can run a plan using that backend.

The project is now using remote state.

What State Locking Means

Remote state also gives Terraform a safer way to handle updates.

When Terraform runs, it does not only read the state file.

It may also need to change it.

For example, when you run:

terraform apply

Terraform compares your configuration, the current state, and the real Azure resources. Then it creates, updates, or deletes infrastructure as needed.

After that, Terraform updates the state file so it still matches what exists.

That update matters.

If two Terraform runs try to update the same state file at the same time, the state can become inconsistent.

Imagine this:

Engineer A
→ runs terraform apply

Engineer B
→ runs terraform apply at the same time

Both runs may read the same starting state.

Both runs may try to make changes.

Both runs may try to write back to the same state file.

That is dangerous.

State locking helps prevent this.

When Terraform starts an operation that needs to update state, it locks the state first.

The flow looks like this:

terraform apply starts
→ Terraform locks the state
→ Terraform makes the infrastructure changes
→ Terraform updates the state
→ Terraform releases the lock

While the state is locked, another Terraform operation should not be able to write to the same state.

That protects the state file from overlapping updates.

This is one of the reasons remote state matters for real projects.

With a local state file, the state is tied to one machine and one folder.

With a remote backend, Terraform has a shared state location, and that shared location can be protected during changes.

The AzureRM backend uses blob leases in Azure Storage to lock the state blob while an operation is running.

So the mental model is simple:

remote state
→ gives Terraform a shared state location

state locking
→ protects that shared state during changes

State locking does not replace good process.

You still need to control who can run terraform apply.

You still need to avoid multiple pipelines applying to the same environment at the same time.

You still need separate state files for separate environments.

But locking gives Terraform an important safety mechanism.

It helps make sure that when one Terraform run is updating the state, another run does not write over it at the same time.

How This Fits with Teams and Pipelines

Remote state is not only about moving a file out of your project folder.

It changes how Terraform can be used in a real project.

When state is local, the workflow is tied to one machine:

Engineer laptop
→ Terraform code
→ local terraform.tfstate

That is fine while learning.

But teams and pipelines need something different.

They need a shared state location:

Engineer laptop
→ Terraform code
→ remote state in Azure Storage

CI/CD pipeline
→ Terraform code
→ remote state in Azure Storage

Now everyone working on that Terraform project can use the same backend.

An engineer can run terraform plan from their machine.

A pipeline can run terraform plan during a pull request.

A controlled release process can run terraform apply.

All of those operations are based on the same state location.

That is the important shift.

The code can be shared through Git.

The state can be shared through the Azure backend.

The infrastructure lives in Azure.

Git
→ stores Terraform code

Azure Storage
→ stores Terraform state

Azure
→ hosts the real infrastructure

That separation is what makes Terraform easier to use safely as a project grows.

Remote state does not mean everyone should freely run terraform apply whenever they want.

It does not remove the need for code review.

It does not replace environment separation.

It does not replace access control.

But it gives teams the foundation they need.

Without remote state, a pipeline has no reliable shared state location to work from.

Without remote state, two engineers may be looking at the same code but different state files.

Without remote state, Terraform’s view of the deployment can become tied to one person’s machine.

With remote state, the project has one shared place for Terraform’s working record.

That makes it possible to build safer workflows on top:

pull request
→ terraform fmt
→ terraform validate
→ terraform plan
→ review the plan
→ controlled terraform apply

We are not building that full pipeline in this article.

That is a separate step.

For now, the point is simpler:

Remote state is the foundation that makes team-based Terraform and pipeline-based Terraform practical.

Common Mistakes to Avoid

Remote state is simple once the pattern is clear, but there are a few mistakes worth avoiding.

The first mistake is committing Terraform state to Git.

Your Terraform code belongs in Git.

Your Terraform state does not.

State can contain information about your infrastructure that should not be casually stored in a repository. Depending on what Terraform manages, it may include resource IDs, generated values, connection details, or sensitive values.

So the rule is simple:

Commit Terraform code.
Do not commit Terraform state.

That means files like these should not be committed:

terraform.tfstate
terraform.tfstate.backup
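A .gitignore entry in the Terraform project makes this rule hard to break by accident. A minimal sketch:

```gitignore
# Never commit Terraform state or local working files
*.tfstate
*.tfstate.backup
.terraform/
```

One file is different: the dependency lock file, .terraform.lock.hcl, is normally committed, because it pins provider versions for everyone using the project.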

The second mistake is using the same state file for every environment.

Dev, test, and prod should not all write to the same state file.

If they do, Terraform may treat them as one deployment instead of separate environments.

A safer pattern is to give each environment its own state location or state key:

dev.terraform.tfstate
test.terraform.tfstate
prod.terraform.tfstate

That way, each environment has its own working record.
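One way to select the state key per environment is to leave key out of the backend block and pass it at init time with the -backend-config flag. A sketch that prints the init command each environment would use (assuming the backend block omits key):

```shell
# Print the per-environment init command.
# Each environment gets its own state blob in the same container.
for ENV in dev test prod; do
  echo "terraform init -backend-config=\"key=${ENV}.terraform.tfstate\""
done
```

Running the printed command for one environment points Terraform at that environment's own state blob, so dev, test, and prod never share a working record.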

The third mistake is deleting or casually changing the backend storage.

Once Terraform state lives in Azure Storage, that storage becomes important.

The Storage Account and Blob Container are no longer just setup resources.

They hold Terraform’s record of what it manages.

If you delete the state blob, Terraform loses that record.

If you delete the Storage Account, Terraform loses access to the backend.

So treat the backend storage as protected infrastructure.

The fourth mistake is thinking remote state solves every teamwork problem by itself.

Remote state gives Terraform a shared place to store state.

State locking helps protect that state during changes.

But you still need good process around it.

You still need to decide who can run terraform apply.

You still need code review.

You still need separate state for separate environments.

You still need sensible access control on the Storage Account.

The fifth mistake is giving too much access to the backend.

Anyone who can read the state may be able to see details about your infrastructure.

Anyone who can modify or delete the state can cause serious problems for Terraform.

So access to the backend Storage Account should be intentional.

The goal is not just to move the state file to Azure.

The goal is to move it somewhere safer.

A good remote state setup should make the state:

centralised
protected
backed by access control
separate per environment
available to the people and pipelines that need it

Remote state is a foundation.

It makes Terraform safer to use in real projects, but only if the backend is treated as important infrastructure.

The Remote State Mental Model

At this point, we have moved Terraform state out of the local project folder and into Azure Storage.

So let’s slow down and put the full mental model together.

Terraform configuration describes what you want.

Terraform configuration
→ the .tf files in your project
→ describes the infrastructure you want Terraform to manage

Azure contains the real infrastructure.

Azure
→ Resource Groups
→ networks
→ storage accounts
→ virtual machines
→ other deployed resources

Terraform state is Terraform’s working record.

Terraform state
→ remembers what Terraform already manages
→ maps Terraform resource blocks to real Azure resources

The backend decides where that state is stored.

backend
→ controls where Terraform stores state

By default, Terraform uses the local backend.

local backend
→ stores state in terraform.tfstate
→ keeps the state file in the project folder

That is fine while learning.

But for real projects, we usually want a remote backend.

azurerm backend
→ stores state in Azure Blob Storage
→ lets Terraform use a shared state location

So the full picture looks like this:

Terraform code
→ lives in your project folder or Git repository

Terraform state
→ lives in Azure Blob Storage

Azure resources
→ live in Azure

That separation is the important idea.

The code is what you want.

The infrastructure is what exists.

The state is Terraform’s record of the connection between them.

configuration
→ desired infrastructure

state
→ Terraform’s current record

Azure
→ real infrastructure

When you run:

terraform plan

Terraform compares those three things.

It asks:

What does the code say should exist?

What does the state say Terraform already manages?

What actually exists in Azure?

Then it works out what needs to change.

When you run:

terraform apply

Terraform makes the changes and updates the state.

With remote state, that update happens in Azure Storage instead of only in your local folder.

That is the mental shift:

Before remote state
→ Terraform’s working record lives beside the code

After remote state
→ Terraform’s working record lives in a shared backend

This does not make Terraform more complicated.

It makes the responsibility clearer.

Your project folder is where you write and run Terraform.

Your Azure backend is where Terraform stores its memory.

Your Azure subscription is where the real resources exist.

project folder
→ run Terraform

Azure backend
→ store Terraform state

Azure subscription
→ host the infrastructure

That is the remote state model.

Once that model makes sense, the backend configuration becomes less mysterious.

This block:

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstateahmed001"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
  }
}

is simply telling Terraform:

When you need to read or write state,
do not use a local terraform.tfstate file.

Use this Azure Storage Account,
this Blob Container,
and this state blob name.

That is all the backend is doing.

It gives Terraform a safer place to store the state file.

And that safer place becomes the foundation for real Terraform usage.

Where This Leads

We started this article with a simple problem.

Local state is fine while learning Terraform.

But real projects need something safer.

A local terraform.tfstate file depends too much on one folder and one machine.

It can be lost.

It can be committed to Git by mistake.

It makes collaboration harder.

It gives pipelines no reliable shared state location.

Remote state solves that by moving Terraform’s working record into a shared backend.

On Azure, that means:

Azure Storage Account
→ Blob Container
→ Terraform state blob

The Terraform code still lives in your project.

The infrastructure still lives in Azure.

But the state now lives in Azure Storage.

That gives us a much cleaner model:

Git
→ stores Terraform code

Azure Storage
→ stores Terraform state

Azure
→ hosts the real infrastructure

That is a major step forward.

It moves you from local, single-person Terraform toward safer real-world Terraform usage.

But remote state also introduces the next important question:

If state is now remote,
how do we manage separate environments safely?

For example, you might have:

dev
test
prod

Each environment may use the same Terraform code.

But each environment should have its own values.

Dev may use smaller resources.

Prod may use larger resources.

Dev may use one region.

Prod may use another.

Tags, names, address spaces, and sizing may all change.

And most importantly, each environment should have its own state.

You do not want dev, test, and prod all writing to the same state file.

So the next step is learning how to use the same Terraform code with different environment values and separate state files.

That leads naturally into:

terraform.tfvars files
-var-file
environment-specific values
separate backend state keys
dev.terraform.tfstate
test.terraform.tfstate
prod.terraform.tfstate

Remote state gives us the safe storage foundation.

Environment-specific configuration builds on top of that.

In the next article, we will use the same Terraform project for multiple environments.

We will keep the code reusable.

We will change the values per environment.

And we will keep each environment’s state separate.

That is where Terraform starts to feel less like a local script and more like a proper infrastructure workflow.