Exposing AKS Workloads with Application Gateway Ingress Controller
Ahmed Muhi - 14 Jul, 2025
Introduction
In the previous article, we exposed an AKS application using the application routing add-on.
That gave us a clean managed NGINX ingress path:
External user
→ Azure public entry point
→ managed NGINX ingress controller
→ Kubernetes Service
→ Pod
It was a good baseline. We created an AKS cluster, deployed two internal Services, created one Ingress rule, and proved that traffic could enter through one external address and route to different Services based on the path.
Now we are going to keep the same Kubernetes idea, but change the Azure edge.
Instead of using managed NGINX as the application entry point, we will use Azure Application Gateway.
That raises the question:
Can we keep using Kubernetes Ingress rules, but have Azure Application Gateway handle the external traffic?
That is what Application Gateway Ingress Controller does.
Application Gateway Ingress Controller, often shortened to AGIC, runs inside the AKS cluster. It watches Kubernetes Ingress resources and uses them to configure Azure Application Gateway. The Ingress rule still lives in Kubernetes, but the external Layer 7 entry point is Azure Application Gateway.
This matters because many Azure environments already use Application Gateway at the edge, especially when they want Azure-native features like TLS termination, autoscaling, zone redundancy, and Web Application Firewall integration.
The goal of this article is not to compare every ingress option. We already have a working managed NGINX baseline. Now we want to understand what changes when Application Gateway becomes the external entry point.
We will use the same basic application shape as before:
frontend Deployment → frontend Service
api Deployment → api Service
Then we will create an Ingress rule and let AGIC translate that rule into Application Gateway configuration.
By the end, the flow should feel clear:
External user
→ Azure Application Gateway
→ Kubernetes Service
→ Pod
The Kubernetes Ingress idea stays familiar. The implementation changes.
Why Application Gateway?
The previous article already gave us a working ingress path on AKS.
Managed NGINX received the external traffic, read the Ingress rule, and sent each request to the right Kubernetes Service. For learning the basic flow, that was exactly what we needed.
So why look at Application Gateway?
Because in many Azure environments, Application Gateway is already the standard application edge. It is not a Kubernetes-specific component at all. It is an Azure Layer 7 load balancer designed for HTTP and HTTPS traffic.
That means it can sit in front of your application and provide Azure-native edge capabilities, including TLS termination, autoscaling, zone redundancy, and Web Application Firewall integration.
That last one is a common reason teams care about Application Gateway. If your platform needs WAF controls at the public boundary, Application Gateway is often part of the conversation.
So the point is not:
AGIC is better than NGINX in every situation.
The point is:
Some teams want Kubernetes Ingress rules, but they also want Azure Application Gateway as the external edge.
That is the gap Application Gateway Ingress Controller fills.
It lets application teams keep using familiar Kubernetes Ingress resources, while Azure Application Gateway handles the traffic coming from outside the cluster.
The Kubernetes object still says:
Send /api traffic to the api Service.
Send / traffic to the frontend Service.
But instead of NGINX being the component that receives and routes the external traffic, Application Gateway takes that role.
AGIC is the piece that keeps both worlds connected. It watches Kubernetes, reads the Ingress rules, and updates Application Gateway so the Azure edge matches what you described in the cluster.
What Application Gateway Ingress Controller Does
Application Gateway Ingress Controller connects two worlds.
On one side, you have Kubernetes.
That is where your application lives. You create Deployments, Services, and Ingress rules. The Ingress rule describes what you want:
/api traffic should go to the api Service.
/ traffic should go to the frontend Service.
On the other side, you have Azure Application Gateway.
That is the external Layer 7 entry point. It receives HTTP and HTTPS traffic from outside the cluster and routes that traffic toward the right backend.
AGIC sits between those two worlds.
It runs inside the AKS cluster and watches Kubernetes Ingress resources. When you create or update an Ingress rule, AGIC notices the change. It then updates the Azure Application Gateway configuration so Application Gateway knows how to route the traffic.
The flow looks like this:
Ingress rule in Kubernetes
→ AGIC watches it
→ AGIC configures Azure Application Gateway
→ Application Gateway routes external traffic
That is the main idea.
This is different from the managed NGINX article.
With managed NGINX, traffic flowed through the NGINX ingress controller inside the cluster:
External user
→ managed NGINX ingress controller
→ Service
→ Pod
With AGIC, external traffic reaches Azure Application Gateway:
External user
→ Azure Application Gateway
→ Service
→ Pod
AGIC is still important, but it is not the main traffic entry point. Its job is to keep Application Gateway in sync with the Kubernetes Ingress rules.
So the Kubernetes model stays familiar:
Ingress → Service → Pod
But the Azure edge changes:
Azure Application Gateway becomes the component receiving and routing the external traffic.
That distinction is the heart of this article.
Create an AKS Cluster with AGIC Enabled
We will start by creating a new AKS cluster with Application Gateway Ingress Controller enabled as an add-on.
For this walkthrough, we will let Azure create a new Application Gateway for us. That keeps the setup focused. In a real environment, you might attach AGIC to an existing Application Gateway instead, but that adds extra networking decisions that we do not need for the first pass.
First, create a resource group:
az group create \
--name rg-aks-agic-demo \
--location australiaeast
Now create the AKS cluster with AGIC enabled:
az aks create \
--resource-group rg-aks-agic-demo \
--name aks-agic-demo \
--location australiaeast \
--node-count 2 \
--node-vm-size Standard_B2s \
--network-plugin azure \
--enable-managed-identity \
--enable-addons ingress-appgw \
--appgw-name agw-aks-agic-demo \
--appgw-subnet-cidr "10.225.0.0/16" \
--generate-ssh-keys
There are a few important pieces here.
--enable-addons ingress-appgw enables the Application Gateway Ingress Controller add-on.
--appgw-name agw-aks-agic-demo gives the new Application Gateway a name.
--appgw-subnet-cidr "10.225.0.0/16" gives Azure an address range for the Application Gateway subnet. When Azure creates the Application Gateway for us, it needs a subnet to place it in.
--network-plugin azure uses Azure CNI for the cluster networking. This keeps the setup aligned with the AGIC add-on tutorial.
--enable-managed-identity lets AKS use managed identity rather than the older service principal model.
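If you want to confirm the add-on actually registered before moving on, you can query the cluster's add-on profile. This is an optional sanity check; the addonProfiles key shown here is the one the AKS CLI currently uses for the Application Gateway add-on.

```shell
# Query the add-on profile on the cluster; prints "true" when the
# ingress-appgw add-on is enabled.
az aks show \
  --resource-group rg-aks-agic-demo \
  --name aks-agic-demo \
  --query "addonProfiles.ingressApplicationGateway.enabled" \
  --output tsv
```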
Once the cluster is created, connect kubectl to it:
az aks get-credentials \
--resource-group rg-aks-agic-demo \
--name aks-agic-demo
Now check the nodes:
kubectl get nodes
You should see two AKS nodes in the Ready state:
NAME                                STATUS   ROLES    AGE   VERSION
aks-nodepool1-12345678-vmss000000   Ready    <none>   5m    v1.xx.x
aks-nodepool1-12345678-vmss000001   Ready    <none>   5m    v1.xx.x
At this point, Azure has created two important things for us: the AKS cluster and an Application Gateway connected to it through the AGIC add-on.
Next, we should confirm both sides are really there: the controller inside Kubernetes, and the Application Gateway resource in Azure.
Confirm AGIC and Application Gateway Exist
Before we deploy the sample application, let’s confirm that Azure created the pieces we need.
There are two things to check.
First, we want to see the AGIC controller running inside the AKS cluster.
Second, we want to see the Azure Application Gateway resource that AGIC will configure.
Start with the Kubernetes side:
kubectl get pods -n kube-system
Look for a pod related to Application Gateway Ingress Controller. The exact name can vary, but you should see an AGIC pod running in the kube-system namespace.
You can also filter for the AGIC app label:
kubectl get pods -n kube-system -l app=ingress-appgw
If the add-on is running correctly, the pod should be in the Running state.
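If the pod is there and you want to see it working, its logs show the sync loop. This is an optional check; the label below is the one the AKS add-on applies to the AGIC pod:

```shell
# Tail recent AGIC logs; look for lines about applying or updating
# the Application Gateway configuration.
kubectl logs -n kube-system -l app=ingress-appgw --tail=20
```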
Now check the Ingress classes:
kubectl get ingressclass
You should see an Ingress class for Application Gateway. Depending on the AKS and AGIC version, the name may appear as something like:
azure-application-gateway
or another Application Gateway related class.
The important idea is the same as in the managed NGINX article: an Ingress class connects an Ingress rule to the controller that should handle it.
Now check the Azure side.
List the Application Gateway resources in the resource group:
az network application-gateway list \
--resource-group rg-aks-agic-demo \
--output table
You should see the Application Gateway that Azure created for this demo:
Name               ResourceGroup      Location       OperationalState
-----------------  -----------------  -------------  ------------------
agw-aks-agic-demo  rg-aks-agic-demo   australiaeast  Running
This confirms that both sides exist:
Kubernetes side: AGIC is running inside AKS.
Azure side: Application Gateway exists and is ready to receive configuration.
That is the important difference from the managed NGINX walkthrough.
In the managed NGINX article, the ingress controller itself received the external traffic. Here, Application Gateway is the external entry point, and AGIC is the controller that keeps Application Gateway configured from Kubernetes. Microsoft describes AGIC as a Kubernetes application that monitors AKS and continuously updates Application Gateway so selected services are exposed to the internet.
Now that the controller and gateway are in place, we can deploy the same sample application we used before.
Deploy the Sample Application
Now we need something for Application Gateway to route traffic to.
We will use the same simple application shape as before:
frontend Deployment → frontend Service
api Deployment → api Service
Both Services will be ClusterIP Services. They are reachable inside the cluster, but they are not directly exposed to the internet. Application Gateway will become the external entry point, and the Ingress rule will tell it which Service should receive each request.
Create a file called sample-app.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: hashicorp/http-echo:1.0
          args:
            - "-text=Hello from the frontend service"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: hashicorp/http-echo:1.0
          args:
            - "-text=Hello from the API service"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 5678
This file creates four Kubernetes objects.
The frontend Deployment creates two frontend pods, and the frontend Service gives those pods a stable internal address.
The api Deployment creates two API pods, and the api Service gives those pods a stable internal address.
Both Services expose port 80, but send traffic to targetPort: 5678 on the pods. That is because the http-echo container listens on port 5678, while the Service gives us a cleaner port to route to.
Apply the file:
kubectl apply -f sample-app.yaml
Check the pods:
kubectl get pods
You should see two frontend pods and two API pods running:
NAME                       READY   STATUS    RESTARTS   AGE
api-6f7d8c9b9c-k2f9x       1/1     Running   0          20s
api-6f7d8c9b9c-n5k2m       1/1     Running   0          20s
frontend-7c8d9f4b6-m8kl9   1/1     Running   0          20s
frontend-7c8d9f4b6-p9j7r   1/1     Running   0          20s
Now check the Services:
kubectl get service
You should see both Services as ClusterIP:
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
api          ClusterIP   10.0.120.15   <none>        80/TCP    20s
frontend     ClusterIP   10.0.230.44   <none>        80/TCP    20s
kubernetes   ClusterIP   10.0.0.1      <none>        443/TCP   10m
You will also see the default kubernetes Service. Kubernetes creates that automatically so workloads inside the cluster have a stable way to reach the Kubernetes API server. For this walkthrough, we can ignore it and focus on the two Services we created: frontend and api.
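Before putting Application Gateway in front of them, you can verify that both Services respond from inside the cluster. This is an optional check using a temporary pod; the pod name and image here are arbitrary choices:

```shell
# Start a short-lived pod, curl both Services by their DNS names,
# then let Kubernetes delete the pod on exit.
kubectl run tmp-curl --rm -it --restart=Never \
  --image=curlimages/curl -- \
  sh -c 'curl -s http://frontend/; echo; curl -s http://api/'
```

You should see both hello messages, confirming the ClusterIP Services and their pods are wired up before any ingress is involved.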
At this point, the application is running inside AKS, but neither Service is exposed directly to the internet.
That is exactly what we want.
External user
→ Azure Application Gateway
→ frontend or api Service
→ matching pods
The application is ready. The next step is to create the Ingress rule that AGIC will use to configure Application Gateway.
Create the Ingress Rule
Now that the application is running, we can create the Ingress rule.
We will use the same path-based routing pattern as before:
/ → frontend Service
/api → api Service
The difference is who handles the rule.
In the managed NGINX article, the NGINX ingress controller watched the Ingress resource and routed the traffic itself. In this article, AGIC watches the Ingress resource and uses it to configure Azure Application Gateway.
Create a file called ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: agic-demo-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
Most of this should look familiar. We still have an Ingress object. We still have path rules. We still send traffic to Services, not directly to pods.
The important AGIC-specific part is the annotation:
annotations:
  kubernetes.io/ingress.class: azure/application-gateway
This tells AGIC to observe this Ingress and translate it into Application Gateway configuration.
You may wonder why we are using this annotation instead of spec.ingressClassName. In a general Kubernetes Ingress article, spec.ingressClassName is usually the cleaner modern field. But for AGIC, Microsoft’s examples and troubleshooting guidance still commonly use the annotation. Using it here keeps the walkthrough aligned with the AGIC documentation readers are likely to see.
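For comparison, this is roughly what the same Ingress would look like using spec.ingressClassName instead of the annotation. Treat it as a sketch: support depends on your AGIC version, and you should confirm the exact class name with kubectl get ingressclass before relying on it.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: agic-demo-ingress
spec:
  # Confirm this name with: kubectl get ingressclass
  ingressClassName: azure-application-gateway
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```

We will stick with the annotation for the rest of this walkthrough.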
Then we define the two paths.
The /api path sends traffic to the api Service:
- path: /api
  pathType: Prefix
  backend:
    service:
      name: api
      port:
        number: 80
The / path sends the rest of the traffic to the frontend Service:
- path: /
  pathType: Prefix
  backend:
    service:
      name: frontend
      port:
        number: 80
As before, /api appears before / because it is more specific. The / path is broad, so placing /api first makes the routing intention clear.
Apply the Ingress:
kubectl apply -f ingress.yaml
Now check it:
kubectl get ingress
You should see something like this:
NAME                CLASS    HOSTS   ADDRESS        PORTS   AGE
agic-demo-ingress   <none>   *       20.53.100.25   80      30s
You may notice that the CLASS column shows <none>. That is expected in this example because we used the AGIC annotation instead of spec.ingressClassName. AGIC still watches this Ingress because the annotation tells it to.
The important thing to check here is the ADDRESS column. It should show the public address associated with Application Gateway.
At this point, AGIC has seen the Ingress rule and started configuring Azure Application Gateway. Once the configuration is applied, Application Gateway has the routing information it needs:
/api → api Service
/ → frontend Service
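If you are curious what AGIC actually wrote to the gateway, you can list the URL path maps it generated. Keep in mind that AGIC autogenerates the names of the rules and backend pools it creates, so expect generated names in the output:

```shell
# List the path-based routing rules AGIC created on the gateway.
# You should find entries corresponding to /api and /.
az network application-gateway url-path-map list \
  --gateway-name agw-aks-agic-demo \
  --resource-group rg-aks-agic-demo \
  --output table
```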
The next step is to test that traffic really reaches both Services through Application Gateway.
Test External Access Through Application Gateway
Now that the Ingress has been created, we can test whether Application Gateway is routing traffic correctly.
First, get the public address from the Ingress:
APPGW_IP=$(kubectl get ingress agic-demo-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Print it to confirm:
echo $APPGW_IP
Now call the root path:
curl http://$APPGW_IP/
You should see the frontend response:
Hello from the frontend service
Now call the API path:
curl http://$APPGW_IP/api
You should see the API response:
Hello from the API service
Both requests went to the same public address.
The difference was the path:
http://<app-gateway-ip>/ → frontend Service
http://<app-gateway-ip>/api → api Service
That is the Ingress rule working through Application Gateway.
When you called /, Application Gateway routed the request to the frontend Service. When you called /api, it routed the request to the api Service.
From there, the Services did their normal Kubernetes job. They sent the requests to one of the matching pods behind them.
So the API request followed this path:
curl http://<app-gateway-ip>/api
→ Azure Application Gateway
→ api Service
→ one api pod
And the frontend request followed this path:
curl http://<app-gateway-ip>/
→ Azure Application Gateway
→ frontend Service
→ one frontend pod
That is the key result.
The Kubernetes Ingress idea stayed the same. One external address, two internal Services, routing decided by path.
What changed is the edge. In the previous article, managed NGINX received the traffic. In this article, Azure Application Gateway receives the traffic, and AGIC keeps Application Gateway configured from the Ingress rule.
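One detail worth seeing for yourself: each Service's Endpoints object lists the pod IPs behind it, and with Azure CNI those pod addresses are typically what AGIC programs into the gateway's backend pool. A quick way to look:

```shell
# Show the pod IPs currently backing each Service.
kubectl get endpoints frontend api
```

The Service still decides which pods are eligible; Application Gateway just needs reachable addresses for them.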
What Changed Compared with Managed NGINX?
The application we deployed in this article is almost the same as the one from the managed NGINX walkthrough.
We used the same basic shape:
frontend Deployment → frontend Service
api Deployment → api Service
We also used the same routing idea:
/ → frontend Service
/api → api Service
The Kubernetes model did not change.
Ingress still described the routing rule. Services still gave the pods stable internal addresses. Pods still ran the application.
What changed was the component at the edge.
In the managed NGINX article, the traffic path looked like this:
External user
→ Azure public entry point
→ managed NGINX ingress controller
→ Kubernetes Service
→ Pod
The managed NGINX ingress controller was the component receiving the HTTP traffic and applying the Ingress rule.
In this article, the path looks like this:
External user
→ Azure Application Gateway
→ Kubernetes Service
→ Pod
Application Gateway is now the external Layer 7 entry point.
AGIC is still important, but it is not the main place where user traffic lands. AGIC watches the Ingress rule inside Kubernetes and updates Application Gateway so Application Gateway knows how to route requests.
That is the key distinction:
Managed NGINX:
The ingress controller receives and routes the traffic.
AGIC:
Application Gateway receives and routes the traffic.
AGIC keeps Application Gateway configured from Kubernetes.
So the Ingress idea stayed the same, but the implementation changed.
That is why this article matters. It shows that Kubernetes Ingress is not tied to one specific controller. The same kind of Ingress rule can be implemented by different controllers and different edge technologies. In the previous article, the edge was managed NGINX. In this article, the edge is Azure Application Gateway.
Clean Up
This walkthrough creates real Azure resources, including an AKS cluster and an Application Gateway. When you are finished testing, it is worth deleting them so you do not leave anything running.
Because we created everything inside one resource group, cleanup is simple. Delete the resource group:
az group delete \
--name rg-aks-agic-demo \
--yes \
--no-wait
The --yes flag skips the confirmation prompt.
The --no-wait flag returns control to your terminal immediately while Azure deletes the resources in the background.
This removes the AKS cluster, the Application Gateway, and the related Azure resources created for the demo.
If you want to confirm the deletion later, run:
az group show \
--name rg-aks-agic-demo
If the resource group has been deleted, Azure will return an error saying it could not be found.
Where This Leads
You now have an AKS ingress path backed by Azure Application Gateway.
We kept the same application shape from the managed NGINX walkthrough:
frontend Deployment → frontend Service
api Deployment → api Service
We also kept the same Ingress idea:
/ → frontend Service
/api → api Service
But the edge changed.
In the previous article, managed NGINX received the external traffic and applied the Ingress rule. In this article, Azure Application Gateway became the external Layer 7 entry point, and AGIC kept Application Gateway configured from Kubernetes.
That gives us this model:
Ingress rule in Kubernetes
→ AGIC watches it
→ AGIC configures Azure Application Gateway
→ Application Gateway routes external traffic
→ Kubernetes Service
→ Pod
That is useful when Application Gateway is already part of your Azure edge, especially when you want Azure-native capabilities such as TLS termination, Web Application Firewall integration, autoscaling, and zone redundancy.
But this is still the classic Application Gateway model connected to Kubernetes through AGIC.
Azure now has a newer application load-balancing service built specifically for container workloads: Application Gateway for Containers.
Application Gateway for Containers is not just AGIC with a new name. It introduces a newer control plane and data plane designed around Kubernetes-style workloads, with faster updates as pods and routes change, and support for more advanced traffic patterns such as weighted traffic splitting.
It also supports the familiar Ingress API we have been using, while opening the door to Gateway API, the newer Kubernetes model for traffic routing.
That is where the next article goes. We will start from the Ingress model we already understand, then use Application Gateway for Containers to see how Azure’s Kubernetes application-routing story is evolving.