Exposing AKS Workloads with Application Gateway for Containers

Introduction

In the previous article, we used Application Gateway Ingress Controller to connect Kubernetes Ingress resources to the classic Azure Application Gateway model.

The flow looked like this:

Ingress rule in Kubernetes
→ AGIC watches it
→ AGIC configures Azure Application Gateway
→ Application Gateway routes external traffic
→ Kubernetes Service
→ Pod

That gave us an Azure-native ingress path while keeping the Kubernetes Ingress model familiar. We still wrote an Ingress rule. We still routed / to the frontend Service and /api to the API Service. But instead of managed NGINX receiving the external traffic, Azure Application Gateway became the external Layer 7 entry point.

That worked, but it also showed the shape of the older model.

AGIC connects Kubernetes to the classic Application Gateway service. Kubernetes describes the routing rules, AGIC watches those rules, and Application Gateway is updated to match.

Application Gateway for Containers asks a slightly different question:

What would Azure application load balancing look like if it were designed around Kubernetes-style change from the start?

That matters because Kubernetes does not sit still. Pods scale up and down. New versions roll out. Old pods disappear. Routes change. Health probes change. The edge of the application needs to keep up with that movement.

Application Gateway for Containers is Azure’s newer application load-balancing service built for that world. It is not just AGIC with a new name. It is a newer model for routing traffic to container workloads, with faster reaction to pod and route changes, support for more advanced traffic patterns like weighted traffic splitting, and support for both the familiar Ingress API and the newer Gateway API.

In this article, we are not going to jump straight into Gateway API. We will start from the Ingress model we already understand and use the same basic application shape as before:

frontend Deployment → frontend Service
api Deployment      → api Service

And the same routing idea:

/      → frontend Service
/api   → api Service

The application stays familiar. The routing idea stays familiar. What changes is the Azure load-balancing model behind it.

By the end, the flow should feel clear:

External user
→ Application Gateway for Containers
→ Kubernetes Service
→ Pod

The Kubernetes idea stays familiar. The Azure application-routing model moves forward.

Why Application Gateway for Containers?

Application Gateway Ingress Controller gave us a useful bridge between Kubernetes and Azure Application Gateway.

It let us write Kubernetes Ingress rules and use Azure Application Gateway as the external entry point. That is useful, especially when Application Gateway is already part of the Azure platform design.

But AGIC is still connecting Kubernetes to the classic Application Gateway model.

That matters because Kubernetes changes often.

A pod might be replaced during a rollout. A Deployment might scale from two replicas to five. A backend might become unhealthy and recover again. A route might be added or changed. In all of those situations, the external edge needs to keep up with what is happening inside the cluster.

If the edge is slow to reflect those changes, users can feel it. Requests might go to the wrong place. A backend might still receive traffic after it should have been removed. A new backend might be ready inside Kubernetes but not yet used at the edge.

That is the problem Application Gateway for Containers is designed to improve.

It is still an Azure application gateway in front of your workloads, but it is built more directly around container patterns: pods changing, routes changing, health probes changing, and traffic needing to move between backends quickly.

That gives us a stronger reason to care than simply “it is newer.”

Application Gateway for Containers can still work with the Ingress API, so we can begin with the model we already understand. But it also brings a more Kubernetes-focused load-balancing model, with faster updates and support for more advanced traffic patterns such as weighted traffic splitting.

For this first walkthrough, we will keep the routing simple.

We will not build a canary release or weighted traffic split yet. We will use the same / and /api paths from the previous articles so the comparison stays clean.

The point is to understand the new baseline:

Same application.
Same Services.
Same Ingress idea.
New Azure load-balancing model.

Once that baseline is clear, the more advanced features will make much more sense.

What Changes Compared with AGIC?

The easiest way to understand Application Gateway for Containers is to compare it with the model we just used.

With Application Gateway Ingress Controller, the flow looked like this:

Kubernetes Ingress
→ AGIC watches it
→ AGIC configures classic Azure Application Gateway
→ Application Gateway routes traffic

AGIC is a controller. It runs in AKS, watches Kubernetes Ingress resources, and updates Azure Application Gateway to match.

So the simple version is:

AGIC controls classic Application Gateway.

Application Gateway for Containers changes two things.

First, the Azure edge service changes. We are no longer using classic Azure Application Gateway. We are using Application Gateway for Containers, Azure’s newer application-routing service for container workloads.

Second, the controller changes. Instead of AGIC, Application Gateway for Containers uses the ALB Controller, short for Application Load Balancer Controller. This is not Azure Load Balancer, which is a separate Layer 4 TCP/UDP service.

With Application Gateway for Containers, the flow looks like this:

Kubernetes Ingress
→ ALB Controller watches it
→ ALB Controller configures Application Gateway for Containers
→ Application Gateway for Containers routes traffic

So the matching simple version is:

ALB Controller controls Application Gateway for Containers.

That is the core difference.

Both AGIC and the ALB Controller are Kubernetes controllers. Both watch Kubernetes resources. Both translate what you describe in Kubernetes into Azure application-routing configuration.

But they connect Kubernetes to different Azure services.

AGIC connects Kubernetes to classic Azure Application Gateway.

ALB Controller connects Kubernetes to Application Gateway for Containers.

That is why this article can still feel familiar.

The Ingress idea still carries across. We can still say:

/      → frontend Service
/api   → api Service

The difference is the Azure service and controller that make those rules real.

Deployment Model, Kept Simple

Application Gateway for Containers has more moving parts than the setups we used in the managed NGINX and AGIC walkthroughs.

That is because there are two sides to the setup.

There is the Kubernetes side, where the ALB Controller runs and watches routing resources.

And there is the Azure side, where Application Gateway for Containers exists and handles the external traffic.

Those two sides need to be connected.

Microsoft supports two main ways to do that. In a "bring your own" deployment, you create the Azure-side Application Gateway for Containers resources yourself, then reference them from Kubernetes. In a "managed by ALB Controller" deployment, the ALB Controller is responsible for the lifecycle of the Application Gateway for Containers resource and its child resources, based on Kubernetes resources you define.

For this first walkthrough, we will use the managed approach.

That keeps the story closer to Kubernetes. We create the controller, define the resources in the cluster, and let the ALB Controller create or configure the Azure-side Application Gateway for Containers pieces for us.

The important idea is this:

You define the routing intent in Kubernetes.
The ALB Controller watches that configuration.
Application Gateway for Containers becomes the Azure edge that handles the traffic.

This is different from AGIC, but the learning pattern is still familiar.

In the AGIC article, we checked two things:

Is the controller running in AKS?
Does the Azure edge resource exist?

We will do the same here.

First, we will deploy or enable the ALB Controller. Then we will create the Application Gateway for Containers resources using the managed model. After that, we will deploy the same frontend and API application and expose it with the Ingress pattern we already understand.

The setup has more pieces, but the goal is still simple:

External user
→ Application Gateway for Containers
→ Kubernetes Service
→ Pod

That is the path we are building.

Prepare the Azure CLI and Subscription

Before we create the AKS cluster, we need to prepare the Azure CLI and the subscription.

This part is more involved than in the managed NGINX and AGIC walkthroughs because Application Gateway for Containers uses newer AKS add-on capabilities. The setup needs three things in place:

Azure CLI extensions
Azure resource providers
AKS preview feature registrations

We only need to do this setup once for the subscription.

First, make sure you are using the right subscription:

$sub = "<your-subscription-id>"

az account set --subscription $sub

az account show `
  --query "{SubscriptionName:name, SubscriptionId:id, TenantId:tenantId, User:user.name}" `
  -o table

Install the required Azure CLI extensions:

az extension add --name alb
az extension add --name aks-preview

If they are already installed, you can update them instead:

az extension update --name alb
az extension update --name aks-preview

The alb extension gives the Azure CLI the Application Gateway for Containers commands. The aks-preview extension gives the Azure CLI access to the preview AKS add-on flags we need for this walkthrough.

Now register the required resource providers:

az provider register --subscription $sub --namespace Microsoft.ContainerService
az provider register --subscription $sub --namespace Microsoft.Network
az provider register --subscription $sub --namespace Microsoft.NetworkFunction
az provider register --subscription $sub --namespace Microsoft.ServiceNetworking

The resource providers allow the subscription to create and manage the Azure resources used by AKS, networking, and Application Gateway for Containers.

You can check their state with:

az provider show --subscription $sub --namespace Microsoft.ContainerService --query "{Provider:namespace, State:registrationState}" -o table
az provider show --subscription $sub --namespace Microsoft.Network --query "{Provider:namespace, State:registrationState}" -o table
az provider show --subscription $sub --namespace Microsoft.NetworkFunction --query "{Provider:namespace, State:registrationState}" -o table
az provider show --subscription $sub --namespace Microsoft.ServiceNetworking --query "{Provider:namespace, State:registrationState}" -o table

You want each provider to show:

Registered

Next, register the preview features required for the AKS-managed Gateway API and Application Load Balancer add-ons:

az feature register `
  --subscription $sub `
  --namespace "Microsoft.ContainerService" `
  --name "ManagedGatewayAPIPreview"

az feature register `
  --subscription $sub `
  --namespace "Microsoft.ContainerService" `
  --name "ApplicationLoadBalancerPreview"

These two features belong together for this setup.

ManagedGatewayAPIPreview enables the AKS-managed Gateway API capability that Application Gateway for Containers builds on.

ApplicationLoadBalancerPreview enables the Application Gateway for Containers ALB Controller add-on.

The walkthrough will still start with the familiar Ingress model, but the platform foundation includes both pieces.

Check that both features are registered:

az feature show `
  --subscription $sub `
  --namespace "Microsoft.ContainerService" `
  --name "ManagedGatewayAPIPreview" `
  --query "{Feature:name, State:properties.state}" `
  -o table

az feature show `
  --subscription $sub `
  --namespace "Microsoft.ContainerService" `
  --name "ApplicationLoadBalancerPreview" `
  --query "{Feature:name, State:properties.state}" `
  -o table

You want both to show:

Registered

After the preview features are registered, refresh the Microsoft.ContainerService provider registration:

az provider register --subscription $sub --namespace Microsoft.ContainerService

This last step matters. When the feature registration completes, the Azure CLI tells you to run the provider registration again so the new feature state is propagated into Microsoft.ContainerService.

At this point, the subscription is ready for the AKS cluster creation step.

The short version is:

Extensions give the Azure CLI the commands it needs.
Resource providers allow the subscription to use the required Azure services.
Preview feature registrations enable the new AKS capabilities.
The final Microsoft.ContainerService registration propagates those preview features.

Create an AKS Cluster with the ALB Controller Add-on

Now that the Azure CLI and subscription are ready, we can create the AKS cluster.

For this walkthrough, we will use the AKS add-on path. That means AKS will install the ALB Controller for us when the cluster is created. This keeps the setup cleaner than installing the controller manually with Helm.

Set a few variables:

$RESOURCE_GROUP = "rg-aks-agc-demo"
$AKS_NAME = "aks-agc-demo"
$LOCATION = "australiaeast"
$VM_SIZE = "Standard_B2s"

Create the resource group:

az group create `
  --name $RESOURCE_GROUP `
  --location $LOCATION

Create the AKS cluster with the required add-ons enabled:

az aks create `
  --resource-group $RESOURCE_GROUP `
  --name $AKS_NAME `
  --location $LOCATION `
  --node-count 2 `
  --node-vm-size $VM_SIZE `
  --network-plugin azure `
  --enable-oidc-issuer `
  --enable-workload-identity `
  --enable-gateway-api `
  --enable-application-load-balancer `
  --generate-ssh-keys

You may see a warning that --enable-application-load-balancer is in preview. That is expected. At the time of writing, the Application Gateway for Containers AKS add-on still uses preview AKS functionality.

There are a few important parts in the command.

--network-plugin azure creates the cluster using Azure CNI. For this walkthrough, that keeps the networking model aligned with the supported Application Gateway for Containers path.

--enable-oidc-issuer and --enable-workload-identity enable the identity features the add-on needs so the controller can work with Azure resources without using the older service principal model.

--enable-gateway-api enables the AKS-managed Gateway API foundation required by the Application Gateway for Containers add-on. We are still starting with the Ingress model in this walkthrough, but the platform add-on expects this foundation to be present.

--enable-application-load-balancer enables the Application Gateway for Containers ALB Controller add-on. This is the controller that watches Kubernetes routing resources and translates them into Application Gateway for Containers load-balancing configuration.

If you inspect the output of az aks create, you should see the add-ons reflected under the cluster’s ingress profile. The important pieces are that Application Load Balancer is enabled and Gateway API is installed:

ingressProfile:
  applicationLoadBalancer:
    enabled: true
  gatewayApi:
    installation: Standard

Once the cluster is ready, connect kubectl to it:

az aks get-credentials `
  --resource-group $RESOURCE_GROUP `
  --name $AKS_NAME

Now check the nodes:

kubectl get nodes

You should see the AKS nodes in a Ready state:

NAME                                STATUS   ROLES    AGE     VERSION
aks-nodepool1-29250322-vmss000000   Ready    <none>   3m48s   v1.34.4
aks-nodepool1-29250322-vmss000001   Ready    <none>   3m19s   v1.34.4

Now confirm that the ALB Controller pods are running:

kubectl get pods -n kube-system | Select-String "alb-controller"

You should see the ALB Controller pods in the Running state:

alb-controller-57c5bb7d57-8ctf5   1/1   Running   0   3m
alb-controller-57c5bb7d57-brbfl   1/1   Running   0   3m

Finally, check that the GatewayClass exists:

kubectl get gatewayclass azure-alb-external

You should see the azure-alb-external GatewayClass accepted by the ALB Controller:

NAME                 CONTROLLER                               ACCEPTED   AGE
azure-alb-external   alb.networking.azure.io/alb-controller   True       2m57s

We are not doing a full Gateway API walkthrough yet. For now, this confirms that the Gateway API foundation and the Application Gateway for Containers add-on are in place.

At this point, the Kubernetes side of the setup is ready:

Kubernetes side:
ALB Controller is running and ready to watch routing resources.

Next, we need to create the Application Gateway for Containers resources that give us the Azure edge.

Create the Application Gateway for Containers Resource

At this point, the ALB Controller is running inside AKS.

Now we need the Azure edge resource that will receive external traffic.

The ALB Controller is not the edge itself. It is the controller that watches Kubernetes resources and configures Application Gateway for Containers. In the managed deployment model, we create a Kubernetes custom resource called ApplicationLoadBalancer. That resource tells the ALB Controller to create and manage the Azure-side Application Gateway for Containers resource for this cluster.

This is not the same thing as an Ingress resource.

An Ingress describes application routing rules, such as:

/      → frontend Service
/api   → api Service

The ApplicationLoadBalancer resource describes the Azure load-balancing infrastructure those routing rules will use.

So the order is:

ApplicationLoadBalancer: create and manage the Azure edge
Ingress: define the application routing rules

First, create a namespace for the Application Gateway for Containers infrastructure objects:

kubectl create namespace alb-infra

We are creating a separate namespace because these objects are infrastructure objects, not application objects. Later, our frontend and API workloads can live in the default namespace or an application namespace. Keeping the Application Gateway for Containers infrastructure in alb-infra makes the separation clearer.

Now we need the subnet ID that Application Gateway for Containers will use for its association into the cluster network.

Because we used the AKS add-on path, Azure creates a delegated subnet called aks-appgateway for Application Gateway for Containers. We need the ID of that subnet.

Run this:

$NODE_RESOURCE_GROUP = az aks show `
  --resource-group $RESOURCE_GROUP `
  --name $AKS_NAME `
  --query nodeResourceGroup `
  --output tsv

$VNET_NAME = az network vnet list `
  --resource-group $NODE_RESOURCE_GROUP `
  --query "[0].name" `
  --output tsv

$ALB_SUBNET_ID = az network vnet subnet show `
  --resource-group $NODE_RESOURCE_GROUP `
  --vnet-name $VNET_NAME `
  --name "aks-appgateway" `
  --query id `
  --output tsv

$ALB_SUBNET_ID

The output should look similar to this:

/subscriptions/<subscription-id>/resourceGroups/<aks-managed-resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/aks-appgateway

That one value contains everything Kubernetes needs to point Application Gateway for Containers at the correct delegated subnet.
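As a quick illustration of that structure, here is how the path segments of a subnet resource ID break apart. This is a bash sketch with a made-up subscription ID and resource names, purely to show what the single value encodes:

```shell
# Illustration only: split a (fake) subnet resource ID into its path segments.
ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-demo/providers/Microsoft.Network/virtualNetworks/vnet-demo/subnets/aks-appgateway"

SUBSCRIPTION_ID=$(echo "$ID" | cut -d'/' -f3)   # which subscription owns it
RESOURCE_GROUP=$(echo "$ID" | cut -d'/' -f5)    # which resource group it lives in
VNET_NAME=$(echo "$ID" | cut -d'/' -f9)         # which virtual network
SUBNET_NAME=$(echo "$ID" | cut -d'/' -f11)      # which delegated subnet

echo "$SUBSCRIPTION_ID $RESOURCE_GROUP $VNET_NAME $SUBNET_NAME"
```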

Now create a file called application-load-balancer.yaml:

apiVersion: alb.networking.azure.io/v1
kind: ApplicationLoadBalancer
metadata:
  name: agc-demo
  namespace: alb-infra
spec:
  associations:
    - <paste-your-aks-appgateway-subnet-id-here>

Replace:

<paste-your-aks-appgateway-subnet-id-here>

with the value printed from $ALB_SUBNET_ID.

Apply the file:

kubectl apply -f application-load-balancer.yaml

You should see:

applicationloadbalancer.alb.networking.azure.io/agc-demo created

Now check the resource:

kubectl get applicationloadbalancer -n alb-infra

You should see something like this:

NAME       DEPLOYMENT   AGE
agc-demo   True         11s

This tells you that the ApplicationLoadBalancer custom resource exists and that the deployment process has started.

For more detail, describe it:

kubectl describe applicationloadbalancer agc-demo -n alb-infra

You may see conditions like this:

Type:     Accepted
Status:   True
Message:  Valid Application Gateway for Containers resource

Type:     Deployment
Status:   True
Reason:   InProgress
Message:  Application Gateway for Containers resource ... is undergoing an update.

There are two useful signals here.

Accepted=True means the Kubernetes custom resource is valid. The ALB Controller understands the ApplicationLoadBalancer object and has accepted it.

Deployment=True with Reason=InProgress means the Azure-side Application Gateway for Containers resource is still being created or updated.

That is expected. kubectl apply creates the Kubernetes custom resource immediately, but the Azure-side Application Gateway for Containers resource can take a little longer to finish creating behind the scenes.

After a little while, run the describe command again:

kubectl describe applicationloadbalancer agc-demo -n alb-infra

When the Azure-side deployment is ready, the Deployment condition should move from InProgress to Ready.

The status flow is:

Accepted=True          → Kubernetes accepted the resource
Deployment/InProgress  → Azure-side deployment is still being created or updated
Deployment/Ready       → Azure-side deployment is ready

The model now looks like this:

ApplicationLoadBalancer custom resource
→ ALB Controller watches it
→ Application Gateway for Containers is created or configured in Azure

That gives us the Azure side of the setup.

Next, we can deploy the same frontend and API application, then create the Ingress rule that routes traffic through Application Gateway for Containers.

Deploy the Sample Application

Now that the Application Gateway for Containers infrastructure is in place, we need an application for it to route traffic to.

We will use the same simple application shape as before:

frontend Deployment → frontend Service
api Deployment      → api Service

Both Services will be ClusterIP Services. They are reachable inside the cluster, but they are not directly exposed to the internet. Application Gateway for Containers will become the external entry point, and the Ingress rule will decide which Service should receive each request.

Create a file called sample-app.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: hashicorp/http-echo:1.0
          args:
            - "-text=Hello from the frontend service"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: hashicorp/http-echo:1.0
          args:
            - "-text=Hello from the API service"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 5678

This file creates four Kubernetes objects.

The frontend Deployment creates two frontend pods, and the frontend Service gives those pods a stable internal address.

The api Deployment creates two API pods, and the api Service gives those pods a stable internal address.

Both Services expose port 80, but send traffic to targetPort: 5678 on the pods. That is because the http-echo container listens on port 5678, while the Service gives us a cleaner port to route to.

Apply the file:

kubectl apply -f sample-app.yaml

Check the pods:

kubectl get pods

You should see two frontend pods and two API pods running:

NAME                        READY   STATUS    RESTARTS   AGE
api-6f7d8c9b9c-k2f9x        1/1     Running   0          20s
api-6f7d8c9b9c-n5k2m        1/1     Running   0          20s
frontend-7c8d9f4b6-m8kl9    1/1     Running   0          20s
frontend-7c8d9f4b6-p9j7r    1/1     Running   0          20s

Now check the Services:

kubectl get service

You should see both Services as ClusterIP:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
api          ClusterIP   10.0.120.15     <none>        80/TCP    20s
frontend     ClusterIP   10.0.230.44     <none>        80/TCP    20s
kubernetes   ClusterIP   10.0.0.1        <none>        443/TCP   10m

You will also see the default kubernetes Service. Kubernetes creates that automatically so workloads inside the cluster have a stable way to reach the Kubernetes API server. For this walkthrough, we can ignore it and focus on the two Services we created: frontend and api.

At this point, the application is running inside AKS, but neither Service is exposed directly to the internet.

That is exactly what we want.

External user
→ Application Gateway for Containers
→ frontend or api Service
→ matching pods

The application is ready. The next step is to create the Ingress rule that Application Gateway for Containers will use to route external traffic to these Services.

Create the Ingress Rule

Now that the application is running, we can create the Ingress rule.

The routing pattern stays the same:

/      → frontend Service
/api   → api Service

This is the important continuity from the previous articles. We are still using Kubernetes Ingress to describe the routing rule. The difference is that this time the ALB Controller reads the Ingress and configures Application Gateway for Containers.

Create a file called ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: agc-demo-ingress
  annotations:
    alb.networking.azure.io/alb-name: agc-demo
    alb.networking.azure.io/alb-namespace: alb-infra
spec:
  ingressClassName: azure-alb-external
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80

Most of this should look familiar.

We still have an Ingress object. We still have path rules. We still send traffic to Services, not directly to pods.

The first important part is the Ingress class:

ingressClassName: azure-alb-external

This tells Kubernetes that the Ingress should be handled by the ALB Controller for Application Gateway for Containers.

The second important part is the pair of annotations:

annotations:
  alb.networking.azure.io/alb-name: agc-demo
  alb.networking.azure.io/alb-namespace: alb-infra

These annotations connect the Ingress rule to the ApplicationLoadBalancer resource we created earlier.

In plain English, they say:

Use the ApplicationLoadBalancer called agc-demo in the alb-infra namespace.

Then we define the two paths.

The /api path sends traffic to the api Service:

- path: /api
  pathType: Prefix
  backend:
    service:
      name: api
      port:
        number: 80

The / path sends the rest of the traffic to the frontend Service:

- path: /
  pathType: Prefix
  backend:
    service:
      name: frontend
      port:
        number: 80

As before, /api appears before / because it is more specific. The / path is broad, so placing /api first makes the routing intention clear.
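To make the "more specific first" idea concrete, here is a toy sketch of prefix routing in bash. It is not how the gateway is implemented; it only illustrates the matching rule the two paths express:

```shell
# Toy model of the routing table: check the more specific prefix first.
route_for_path() {
  case "$1" in
    /api*) echo "api" ;;       # /api and anything under it
    /*)    echo "frontend" ;;  # everything else falls through to /
  esac
}

route_for_path "/api/orders"   # matched by the /api prefix
route_for_path "/"             # matched by the / prefix
```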

Apply the Ingress:

kubectl apply -f ingress.yaml

Now check it:

kubectl get ingress

You may not see the ADDRESS value immediately. That is normal. The Ingress object is created first, then the ALB Controller configures Application Gateway for Containers and publishes the frontend address.

At first, you might see something like this:

NAME               CLASS                HOSTS   ADDRESS   PORTS   AGE
agc-demo-ingress   azure-alb-external   *                 80      11s

If the ADDRESS column is empty, wait a little and run the command again:

kubectl get ingress

After a short time, you should see a generated Azure hostname:

NAME               CLASS                HOSTS   ADDRESS                               PORTS   AGE
agc-demo-ingress   azure-alb-external   *       hzb5acdzeca3h2e8.fz88.alb.azure.com   80      91s

With Application Gateway for Containers, the ADDRESS column usually shows a generated Azure hostname rather than a raw public IP address. That hostname belongs to the Application Gateway for Containers frontend.

At this point, the routing pieces are connected:

Ingress rule
→ ALB Controller
→ Application Gateway for Containers
→ frontend or api Service
→ matching pods

The next step is to test both routes through the generated Application Gateway for Containers hostname.

Test External Access Through Application Gateway for Containers

Now that the Ingress has an address, we can test whether Application Gateway for Containers is routing traffic correctly.

First, save the Ingress address in a variable:

$AGC_HOSTNAME = kubectl get ingress agc-demo-ingress -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"

$AGC_HOSTNAME

You should see a generated Azure hostname. It will look similar to this, but your value will be different:

hzb5acdzeca3h2e8.fz88.alb.azure.com
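What the jsonpath expression is doing is reading .status.loadBalancer.ingress[0].hostname out of the Ingress object. As a self-contained illustration, here is the same extraction simulated against a sample status document (the JSON and hostname are made up; sed stands in for jsonpath):

```shell
# A sample of the status shape kubectl reads; the hostname here is invented.
STATUS='{"status":{"loadBalancer":{"ingress":[{"hostname":"example.alb.azure.com"}]}}}'

# Pull out the hostname value, mimicking what the jsonpath expression returns.
HOSTNAME_VALUE=$(echo "$STATUS" | sed -n 's/.*"hostname":"\([^"]*\)".*/\1/p')

echo "$HOSTNAME_VALUE"
```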

Now call the root path:

curl.exe "http://$AGC_HOSTNAME/"

You should see the frontend response:

Hello from the frontend service

Now call the API path:

curl.exe "http://$AGC_HOSTNAME/api"

You should see the API response:

Hello from the API service

Both requests went to the same Application Gateway for Containers hostname.

The difference was the path:

http://<agc-hostname>/      → frontend Service
http://<agc-hostname>/api   → api Service

That is the Ingress rule working through Application Gateway for Containers.

When you called /, Application Gateway for Containers routed the request to the frontend Service. When you called /api, it routed the request to the api Service.

From there, the Services did their normal Kubernetes job. They sent the requests to one of the matching pods behind them.

So the API request followed this path:

curl http://<agc-hostname>/api
→ Application Gateway for Containers
→ api Service
→ one api pod

And the frontend request followed this path:

curl http://<agc-hostname>/
→ Application Gateway for Containers
→ frontend Service
→ one frontend pod

That is the key result.

The Kubernetes Ingress idea stayed the same: one external hostname, two internal Services, routing decided by path.

What changed is the Azure edge. In the previous article, classic Application Gateway received the traffic and AGIC kept it configured. In this article, Application Gateway for Containers receives the traffic and the ALB Controller keeps it configured from Kubernetes.

What Application Gateway for Containers Added

At this point, the application is working through Application Gateway for Containers.

The shape of the application did not change:

frontend Deployment → frontend Service
api Deployment      → api Service

The Ingress idea did not change either:

/      → frontend Service
/api   → api Service

What changed was the Azure edge behind that Ingress.

In the managed NGINX walkthrough, AKS gave us a managed NGINX ingress controller. That was the cleanest way to prove the basic Ingress flow on AKS.

In the AGIC walkthrough, we kept Kubernetes Ingress but moved the edge to classic Azure Application Gateway. AGIC watched the Ingress resources and kept Application Gateway configured.

In this walkthrough, we moved to Application Gateway for Containers. The ALB Controller watched the Kubernetes resources and configured Azure’s newer container-focused application load-balancing service.

So the progression looks like this:

Managed NGINX:
Ingress → managed NGINX ingress controller → Service → Pod

AGIC:
Ingress → AGIC → classic Azure Application Gateway → Service → Pod

Application Gateway for Containers:
Ingress → ALB Controller → Application Gateway for Containers → Service → Pod

That is the main lesson.

Application Gateway for Containers is not interesting only because it supports Ingress. If that were the whole story, it would not feel very different from what we already did.

The more important point is that it is designed for a Kubernetes world where things change often. Pods are created and removed. Deployments roll out new versions. Routes change. Health probes change. Traffic may need to shift between backends gradually, for example while a new version rolls out.

That is where Application Gateway for Containers starts to matter.

In this article, we kept the walkthrough simple. We only routed / to the frontend Service and /api to the API Service. But the model we used opens the door to more advanced traffic management later, such as weighted traffic splitting and Gateway API.
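As a preview of what that door opens, weighted traffic splitting in Gateway API is expressed as weights on backend references inside an HTTPRoute. The sketch below is standard Gateway API, not something we deployed in this walkthrough; the gateway and Service names are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-split
spec:
  parentRefs:
    - name: demo-gateway        # hypothetical Gateway name
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        # 90% of /api traffic stays on the current version
        - name: api-v1
          port: 80
          weight: 90
        # 10% is sent to the new version
        - name: api-v2
          port: 80
          weight: 10
```

Weights are relative, so 90/10 here means nine out of ten requests go to `api-v1` on average.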

For now, the important baseline is clear:

Same application.
Same Services.
Same Ingress idea.
New Azure application load-balancing model.

That gives us a clean place to stop before moving into the next layer.

Clean Up

This walkthrough creates real Azure resources, including an AKS cluster and Application Gateway for Containers resources.

When you are finished testing, delete them so you do not keep paying for resources you no longer need.

Because we created everything inside one resource group, cleanup is simple. Delete the resource group:

az group delete `
  --name $RESOURCE_GROUP `
  --yes `
  --no-wait

The --yes flag skips the confirmation prompt.

The --no-wait flag returns control to your terminal immediately while Azure deletes the resources in the background.

This removes the AKS cluster, the Application Gateway for Containers resources, and the related Azure resources created for the demo.

If you want to confirm the deletion later, run:

az group show `
  --name $RESOURCE_GROUP

If the resource group has been deleted, Azure will return an error saying it could not be found.

Where This Leads

You now have a working Application Gateway for Containers ingress path on AKS.

We started from the Ingress model we already understood:

/      → frontend Service
/api   → api Service

Then we changed the Azure application-routing model behind it.

The final flow looked like this:

External user
→ Application Gateway for Containers
→ Kubernetes Service
→ Pod

That gives us a clean baseline. Application Gateway for Containers can work with the familiar Ingress API, so we did not have to jump into a completely new routing model just to understand the service.

But Ingress is not the end of the story.

Application Gateway for Containers also supports Gateway API, which is the newer Kubernetes model for describing traffic routing. Gateway API is more expressive than Ingress, but the most important shift is how it separates responsibilities.

Instead of putting everything into one Ingress object, Gateway API separates the model into different resources:

GatewayClass → what kind of gateway exists
Gateway      → the entry point and listener
HTTPRoute    → the application route to a Service

That separation matters because platform teams and application teams often care about different things.

A platform team might own the gateway itself, the public listener, TLS settings, and the shared edge. An application team might only need to say, “send /api traffic to my API Service.”
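In Gateway API terms, that split looks roughly like the sketch below. The gateway class name `azure-alb-external` follows the name the ALB Controller registers; everything else is a placeholder:

```yaml
# Platform team: the shared entry point and listener
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: azure-alb-external   # class name assumed here
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# Application team: only the route to their own Service
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: shared-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api
          port: 80
```

The two objects can live in different namespaces and be owned by different teams, which is exactly the separation the single Ingress object could not express.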

That is where the next article goes.

We will step back from the Azure-specific walkthrough and look at Gateway API itself: what problem it solves, how GatewayClass, Gateway, and HTTPRoute fit together, and why this model is becoming important for Kubernetes traffic routing.