Deploying Ingress on AKS: Exposing a Kubernetes App with Managed NGINX

Introduction

In a previous article, we looked at Kubernetes Ingress as a concept.

The model was simple:

External user → Ingress controller → Service → Pod

Ingress gives external HTTP and HTTPS traffic a clean way into your cluster. Instead of exposing every Service separately, you define routing rules: this hostname goes to this Service, this path goes to that Service.

That is the idea.

Now we are going to make it real on Azure Kubernetes Service.

In this article, we will create an AKS cluster, enable the application routing add-on, deploy a small application with two internal Services, and expose both through one external entry point using Ingress.

The application routing add-on gives us a managed NGINX ingress controller in AKS. That means Azure creates, configures, and manages the ingress controller for us, and our job is to provide the Kubernetes resources: Deployments, Services, and Ingress rules. Microsoft describes the application routing add-on as the recommended way to configure an Ingress controller in AKS, and notes that it provides managed NGINX ingress controllers for Kubernetes Ingress objects.

We will keep the example deliberately simple. One frontend Service. One API Service. One Ingress rule that routes traffic by path:

/      → frontend Service
/api   → api Service

By the end, the flow should feel concrete:

External user
→ Azure public entry point
→ managed NGINX ingress controller
→ Kubernetes Service
→ Pod

This gives us a clean AKS ingress baseline before we move into more advanced ingress patterns later.

AKS Ingress Options, Kept Simple

AKS gives you more than one way to handle ingress.

That can be useful, but it can also be confusing when you are just trying to learn the flow. You might see references to NGINX ingress, Application Gateway, Application Gateway for Containers, Istio ingress gateways, Gateway API, or bring-your-own ingress controllers.

Those are all real options, but they do not need to be learned at the same time.

For this article, we are using the application routing add-on.

The reason is simple: it gives us a managed NGINX ingress controller without making us install and operate the controller ourselves. We can focus on the Kubernetes pieces that matter for learning:

Deployment → Service → Ingress

The add-on handles the ingress controller. We provide the application and the routing rules.

At a high level, the options look like this:

Application routing add-on: managed NGINX ingress in AKS
Application Gateway for Containers: Azure-native application load balancing for Kubernetes
Istio ingress gateway: ingress through a service mesh gateway
Bring your own controller: install and manage something like NGINX, Traefik, HAProxy, or Cilium yourself

For a first AKS ingress walkthrough, managed NGINX through the application routing add-on is the cleanest path. It lets us prove the core idea without introducing too many moving parts.

The goal in this article is not to compare every ingress option. The goal is to build one working path end to end:

External user → managed NGINX ingress controller → Service → Pod

Once that baseline makes sense, the more advanced options become easier to understand later.

Create an AKS Cluster with Application Routing

We will start by creating a small AKS cluster with the application routing add-on enabled.

The add-on can be enabled when you create the cluster by passing the --enable-app-routing flag to az aks create. That gives the cluster a managed NGINX ingress controller that can watch Kubernetes Ingress objects. Microsoft documents this as the supported way to enable application routing on a new AKS cluster.

First, create a resource group:

az group create \
  --name rg-aks-ingress-demo \
  --location australiaeast

Now create the AKS cluster:

az aks create \
  --resource-group rg-aks-ingress-demo \
  --name aks-ingress-demo \
  --location australiaeast \
  --node-count 2 \
  --node-vm-size Standard_B2s \
  --enable-app-routing \
  --generate-ssh-keys

There are only a few important pieces here.

--node-count 2 gives us two worker nodes. This is still small enough for a demo, but it feels more like a real cluster than a single-node setup.

--node-vm-size Standard_B2s keeps the cluster lightweight.

--enable-app-routing is the important flag for this article. It enables the application routing add-on during cluster creation, so AKS can manage the NGINX ingress controller for us.
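If you already have an AKS cluster, you do not need to recreate it. The add-on can also be enabled on an existing cluster with the Azure CLI. A minimal sketch, reusing the names from this article:

az aks approuting enable \
  --resource-group rg-aks-ingress-demo \
  --name aks-ingress-demo

Either way, the end result is the same: AKS manages the NGINX ingress controller for us.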

Once the cluster is created, connect kubectl to it:

az aks get-credentials \
  --resource-group rg-aks-ingress-demo \
  --name aks-ingress-demo

Now check the nodes:

kubectl get nodes

You should see the AKS nodes in a Ready state:

NAME                                STATUS   ROLES    AGE   VERSION
aks-nodepool1-12345678-vmss000000   Ready    <none>   3m    v1.xx.x
aks-nodepool1-12345678-vmss000001   Ready    <none>   3m    v1.xx.x

At this point, we have an AKS cluster running, and the application routing add-on is enabled.

But before we create any Ingress rules, we should prove that the Ingress controller actually exists. That is the next step.

Confirm the Ingress Controller Exists

Before we deploy any application, let’s confirm that the application routing add-on created the pieces we need.

The main thing we are looking for is the managed NGINX ingress controller. This is the component that will receive external HTTP traffic and apply our Ingress rules.

Start by checking the application routing namespace:

kubectl get pods -n app-routing-system

You should see pods running for the managed NGINX ingress controller. The exact pod names can vary, but the important thing is that the pods are in the Running state.

Next, check the Ingress classes:

kubectl get ingressclass

You should see an Ingress class called:

webapprouting.kubernetes.azure.com

The application routing add-on creates this Ingress class in the cluster. When we create an Ingress object using this class, the managed NGINX ingress controller knows it should handle that Ingress. Microsoft documents this class name as the one created by the add-on.
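If you want to see which controller is registered for this class, you can read it straight from the IngressClass spec. The exact controller string you get back depends on the add-on version:

kubectl get ingressclass webapprouting.kubernetes.azure.com \
  -o jsonpath='{.spec.controller}'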

That is the connection between the rule and the controller:

IngressClass: tells Kubernetes which controller should handle the Ingress
Ingress: defines the routing rule
Ingress controller: receives traffic and applies the rule

So before we even deploy the sample app, we have confirmed the important part: the cluster has an ingress controller ready to watch Ingress resources and route traffic for us.

Now we can deploy the application that the Ingress will send traffic to.

Deploy the Sample Application

Now we need an application for the Ingress controller to route traffic to.

We will keep the setup simple:

frontend Deployment → frontend Service
api Deployment      → api Service

Both Services will be ClusterIP Services. That means they are reachable inside the cluster, but not directly exposed to the internet. This is exactly what we want, because Ingress will become the external entry point.

Create a file called sample-app.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: hashicorp/http-echo:1.0
          args:
            - "-text=Hello from the frontend service"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: hashicorp/http-echo:1.0
          args:
            - "-text=Hello from the API service"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 5678

There are four objects in this file.

The frontend Deployment creates two frontend pods. The frontend Service gives those pods a stable internal address.

The api Deployment creates two API pods. The api Service gives those pods a stable internal address.

Notice that both Services expose port 80, but both send traffic to targetPort: 5678 on their pods. That is because the http-echo container listens on port 5678, while the Service gives us a cleaner port to call.

Apply the file:

kubectl apply -f sample-app.yaml

Now check the pods:

kubectl get pods

You should see two frontend pods and two API pods running:

NAME                        READY   STATUS    RESTARTS   AGE
api-6f7d8c9b9c-k2f9x        1/1     Running   0          20s
api-6f7d8c9b9c-n5k2m        1/1     Running   0          20s
frontend-7c8d9f4b6-m8kl9    1/1     Running   0          20s
frontend-7c8d9f4b6-p9j7r    1/1     Running   0          20s

Then check the Services:

kubectl get service

You should see both Services as ClusterIP:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
api          ClusterIP   10.0.120.15     <none>        80/TCP    20s
frontend     ClusterIP   10.0.230.44     <none>        80/TCP    20s
kubernetes   ClusterIP   10.0.0.1        <none>        443/TCP   10m

You will also see a Service called kubernetes. You did not create that one. Kubernetes creates it automatically so workloads inside the cluster have a stable way to reach the Kubernetes API server. For this article, we can ignore it and focus on the two Services we created: frontend and api.
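If you want proof that the Services answer before any Ingress exists, you can call them from inside the cluster with a temporary pod. A quick sketch, assuming the default namespace and using busybox purely as a throwaway client:

kubectl run tmp-client --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://frontend

You should see the frontend message, and swapping frontend for api should return the API message. The temporary pod is removed as soon as the command finishes.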

At this point, the application exists inside the cluster, but nothing is exposed publicly yet.

That is exactly the shape we want:

External user
→ Ingress controller
→ frontend or api Service
→ matching pods

The Deployments and Services are ready. The next step is to create the Ingress rule that connects external traffic to them.

Create the Ingress Rule

Now that the application is running inside the cluster, we can create the Ingress rule.

For this demo, we will use path-based routing:

/      → frontend Service
/api   → api Service

In production, you would often use real hostnames and DNS records. For example, shop.example.com might route to one Service and api.example.com might route to another. For this walkthrough, path-based routing keeps the demo simple because we can test everything through one external IP address.
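For reference, a host-based rule only changes the rules block of the Ingress. A sketch of what that shape looks like, using a placeholder hostname that would need a real DNS record pointing at the ingress address:

rules:
  - host: api.example.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: api
              port:
                number: 80

With a rule like this, NGINX matches on the Host header of the request instead of the URL path.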

Create a file called ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80

There are a few important parts here.

First, the ingressClassName:

ingressClassName: webapprouting.kubernetes.azure.com

This tells Kubernetes that this Ingress should be handled by the managed NGINX ingress controller created by the application routing add-on.

Then we define the paths.

The first path sends /api traffic to the api Service:

- path: /api
  pathType: Prefix
  backend:
    service:
      name: api
      port:
        number: 80

The second path sends everything else under / to the frontend Service:

- path: /
  pathType: Prefix
  backend:
    service:
      name: frontend
      port:
        number: 80

We list /api before / because / is broad and can match almost everything. In practice, the NGINX ingress controller routes on the most specific matching prefix, so /api requests would reach the api Service either way, but putting the more specific path first makes the intent obvious: API traffic goes to the api Service, and everything else goes to the frontend Service.

Apply the Ingress:

kubectl apply -f ingress.yaml

Now check it:

kubectl get ingress

You should see something like this:

NAME           CLASS                                HOSTS   ADDRESS        PORTS   AGE
demo-ingress   webapprouting.kubernetes.azure.com   *       20.53.100.25   80      30s

The important part is the ADDRESS column. That is the external address Azure has made available for this Ingress.
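If the ADDRESS column is empty at first, the public address is usually still being provisioned. You can watch until it appears:

kubectl get ingress demo-ingress --watch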

At this point, traffic has a way into the cluster.

The next step is to test that both routes work through the same external entry point.

Test the Ingress

Now that the Ingress has an external address, we can test the two routes.

First, save the external address in a variable:

INGRESS_IP=$(kubectl get ingress demo-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

You can print it to confirm:

echo $INGRESS_IP

Now call the root path:

curl http://$INGRESS_IP/

You should see the frontend response:

Hello from the frontend service

Now call the API path:

curl http://$INGRESS_IP/api

You should see the API response:

Hello from the API service

Both requests went to the same external address.

The difference was the path.

http://<external-ip>/      → frontend Service
http://<external-ip>/api   → api Service

That is the Ingress rule doing its job. The managed NGINX ingress controller receives the request, looks at the path, and sends the request to the right Kubernetes Service.

From there, the Service does what Services always do. It sends the request to one of the matching pods behind it.

So the full flow looks like this:

curl http://<external-ip>/api
→ Azure public entry point
→ managed NGINX ingress controller
→ api Service
→ one api pod

And for the frontend:

curl http://<external-ip>/
→ Azure public entry point
→ managed NGINX ingress controller
→ frontend Service
→ one frontend pod

That is the key result of this walkthrough.

You have one external entry point, but more than one internal Service behind it. The Ingress rule decides where each request should go.
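If you want to see both responses side by side, a small loop works. A sketch, assuming a Bash-like shell and the INGRESS_IP variable set earlier:

for path in / /api; do
  echo "GET $path"
  curl -s "http://$INGRESS_IP$path"
done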

What AKS Created for You

At this point, the Ingress is working.

You created the application, the Services, and the Ingress rule. But you did not manually install NGINX, create an ingress controller Deployment, or wire up external traffic yourself.

That is what the application routing add-on handled for you.

The add-on creates and manages the NGINX ingress controller inside the cluster. That controller watches for Ingress resources that use the webapprouting.kubernetes.azure.com Ingress class. When it sees one, it knows it should handle the routing for it.

So when you created this Ingress:

ingressClassName: webapprouting.kubernetes.azure.com

you were connecting your routing rule to the managed ingress controller that AKS created.

There is also an Azure side to this.

For traffic to reach the ingress controller from outside the cluster, Azure needs to provide an external entry point. In this walkthrough, that is why you saw an external address on the Ingress. A request from your machine reaches that public entry point first, then flows into the managed NGINX ingress controller inside AKS.
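You can see that entry point from inside the cluster as well. The add-on keeps its controller resources in the app-routing-system namespace, and one of the Services there is of type LoadBalancer. A quick check (the exact Service name may vary between add-on versions):

kubectl get service -n app-routing-system

The LoadBalancer Service should show an EXTERNAL-IP that matches the ADDRESS you saw on the Ingress.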

The path looks like this:

External user
→ Azure public entry point
→ managed NGINX ingress controller
→ Kubernetes Service
→ Pod

The important thing is that the add-on hides most of the controller setup from you. You still need to understand the Kubernetes objects, because you are the one creating the Deployments, Services, and Ingress rules. But you do not have to install and operate the ingress controller manually for this demo.

That is why this is a good baseline for learning AKS ingress.

You get a real external path into the cluster, but the number of moving parts stays manageable.

Clean Up

This walkthrough creates real Azure resources, so it is worth cleaning them up when you are finished.

Because everything was created inside one resource group, cleanup is simple. Delete the resource group:

az group delete \
  --name rg-aks-ingress-demo \
  --yes \
  --no-wait

The --yes flag skips the confirmation prompt.

The --no-wait flag returns control to your terminal immediately while Azure deletes the resources in the background.

This removes the AKS cluster and the related Azure resources created for the demo.

If you want to confirm the deletion later, you can run:

az group show \
  --name rg-aks-ingress-demo

If the resource group has been deleted, Azure will return an error saying it could not be found.
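If you would rather keep the cluster for more experiments and only remove the demo workload, deleting the two manifests from this article is enough:

kubectl delete -f ingress.yaml -f sample-app.yaml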

Where This Leads

You now have a working AKS ingress path.

We created an AKS cluster, enabled the application routing add-on, deployed two internal Services, and used one Ingress rule to route traffic to both of them.

The final flow looked like this:

External user
→ Azure public entry point
→ managed NGINX ingress controller
→ Kubernetes Service
→ Pod

That is a strong baseline. It shows the Kubernetes Ingress model working on a real Azure-managed cluster without making us install and operate the ingress controller ourselves.

But this is only one way to expose applications on AKS.

In this article, the application routing add-on gave us managed NGINX. That kept the walkthrough simple and let us focus on the Kubernetes pieces: Deployments, Services, Ingress rules, and path-based routing.

In many Azure environments, though, the application edge is not NGINX. Teams may already use Azure Application Gateway as their Layer 7 entry point, especially when they want Azure-native features such as TLS termination, Web Application Firewall integration, autoscaling, and zone redundancy.

That raises the next question:

Can we keep using Kubernetes Ingress rules, but have Azure Application Gateway handle the external traffic?

That is what Application Gateway Ingress Controller does.

It runs inside AKS, watches Kubernetes Ingress resources, and configures Azure Application Gateway based on those rules. The Kubernetes idea stays familiar, but the external edge changes.

That is where the next article goes. We will keep the same Ingress model, but replace the managed NGINX path with Azure Application Gateway as the application edge.