Introduction
Azure offers multiple ways to run containerised applications — but choosing the right one can be harder than it seems.
Should you go serverless with Azure Container Apps? Embrace the power (and complexity) of Azure Kubernetes Service (AKS)? Stick with App Service, or run containers directly on Azure Container Instances (ACI)?
Pick wrong, and you risk overengineering, missing scaling opportunities, or burning budget on the wrong resource model. But pick right — and you’ll save time, reduce cost, and run your app with clarity and confidence.
In this guide, we’ll break down Azure’s four core container services:
- Azure Container Apps
- Azure Kubernetes Service (AKS)
- App Service
- Azure Container Instances (ACI)
You’ll learn when to use each one based on the kind of app you’re running, the traffic it handles, and how much control, flexibility, and scalability you actually need.
We’ll compare your options by:
- 🚦 Traffic protocol – HTTP, gRPC, TCP, or background jobs
- 🛠️ Environment control – Full cluster access vs. code-push simplicity
- 📈 Autoscaling – Based on HTTP traffic, queue depth, or scale-to-zero
- 💸 Cost shape – Always-on vs. event-driven or spiky workloads
And to keep things grounded, we’ll use real-world examples — from running an MQTT broker, to scaling microservices, to spinning up short-lived background jobs.
👤 Who this is for:
Developers, solution architects, and operations engineers building or modernising container workloads on Azure — whether you’re starting fresh or revisiting an existing deployment.
Let’s jump in — and make sure your next container decision is the right one, the first time.
Compare Azure’s Four Container Services
Azure gives you four main ways to run containers — but each comes with different trade-offs around control, scalability, protocol support, and operational effort. While they all run containers, what they manage — and what you have to manage — varies significantly.
Here’s a quick breakdown of each service:
What it’s best at, where it fits, and how much control it gives you.
Azure Container Apps (ACA)
The sweet spot for serverless containers
ACA is a fully managed, event-driven container platform that runs on Kubernetes behind the scenes — but abstracts away the cluster complexity. It supports microservices, APIs, background jobs, and scales down to zero when idle. With native support for HTTP, gRPC, WebSockets*, and autoscaling via KEDA triggers (e.g. HTTP load, queue length, events), ACA is a great fit for cloud-native apps without the Kubernetes overhead.
- Best for: Microservices, APIs, background jobs, event-driven tasks
- Managed: Fully (serverless)
- Scales: Automatically (including to zero)
- Control: Low to moderate (you define services, not clusters)
- Note: WebSocket support is available but has some limitations
Azure Kubernetes Service (AKS)
Kubernetes power, your control
AKS gives you a managed Kubernetes control plane, while leaving workload and node management in your hands. You choose node types, networking, ingress, policies, and scale strategies. It supports Linux and Windows containers, any traffic protocol (HTTP, TCP, UDP), sidecars, custom CRDs — the works. With that power comes operational responsibility: upgrades, scaling config, observability, and deployment patterns are yours to define.
- Best for: Complex microservice architectures, custom protocols, hybrid clusters
- Managed: Partially (Azure manages control plane; you manage nodes, workloads)
- Scales: With HPA, KEDA, and Cluster Autoscaler (requires setup)
- Control: Full Kubernetes API access
Azure App Service (Web Apps for Containers)
The fastest path to running web apps in containers
App Service is Azure’s PaaS for web workloads. It supports both code and container deployments, manages your hosting environment, TLS termination, scaling, and more. But it’s strictly web-focused — no raw TCP/UDP listeners or background job containers, and gRPC is supported only on Linux plans (via HTTP/2).
- Best for: HTTP APIs, websites, lightweight web backends
- Managed: Fully
- Scales: Automatically (rule-based), but does not scale to zero
- Control: Limited (you control the app, not the environment)
- Note: No support for background jobs or custom protocols; gRPC is limited to Linux plans (HTTP/2)
Azure Container Instances (ACI)
Just run this container – now
ACI is Azure’s simplest way to run a container. No cluster, no orchestration — just define your image and run. Billing is per second, and there’s no persistent compute unless you orchestrate it yourself. ACI is great for short-lived tasks like CI jobs, scheduled scripts, or ephemeral workers.
- Best for: One-off jobs, testing, batch processing
- Managed: Fully
- Scales: One container or group at a time (no autoscaling built-in)
- Control: Low (no orchestration, but full container isolation)
- Note: Use with orchestrators (e.g. Durable Functions, Logic Apps) for coordination
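To make “just run this container” concrete, here is a minimal sketch of an ACI container group definition. The name, region, and sample image are placeholders; it deploys with `az container create --resource-group <rg> --file aci-job.yaml`.

```yaml
# Minimal sketch of an ACI container group (placeholder name, region, image).
apiVersion: '2021-10-01'
location: westeurope
name: one-off-job
properties:
  osType: Linux
  restartPolicy: Never        # run once, then stop — per-second billing stops too
  containers:
    - name: worker
      properties:
        image: mcr.microsoft.com/azuredocs/aci-helloworld
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
```

With `restartPolicy: Never`, the container exits after its work is done and charges stop — the shape that makes ACI cheap for short-lived jobs and expensive for forgotten long-running ones.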
TL;DR — Service Fit at a Glance
Service | Best for | Scaling | Protocols | Control |
---|---|---|---|---|
ACA | Serverless microservices & events | Auto, incl. scale-to-zero | HTTP, gRPC, WebSockets* | Low |
AKS | Kubernetes power users | HPA/KEDA/Auto (config needed) | All protocols | Full |
App Service | Simple web apps & APIs | Auto (no scale-to-zero) | HTTP/S (WebSockets; gRPC on Linux) | Limited |
ACI | Short-lived scripts & tasks | Manual | Any | Minimal (no orchestration) |
Each of these services plays a distinct role. You might use more than one together — but for any given workload, choosing your primary hosting model means weighing the trade-offs:
Speed vs. control. Simplicity vs. flexibility. Cost vs. capability.
In the next section, we’ll walk through a decision-making framework that helps you map your use case to the right service — clearly, confidently, and efficiently.
How to Choose the Right Azure Container Service (Decision Framework)
Now that you’ve seen what each Azure container service offers, let’s turn that knowledge into action.
Choosing the right platform means aligning your app’s real-world needs — protocol, scaling behavior, startup time, customisation, cost — to the strengths of each service.
The following factors will shape your experience from development through production. Let’s break them down:
Traffic Protocol
What kind of network traffic does your app need to handle?
- HTTP(S), gRPC, WebSockets: Use Azure Container Apps (ACA) or App Service
- TCP (e.g., MQTT): Supported in AKS and ACI; ACA has limited TCP support
- UDP: Only AKS and ACI support UDP
- App Service: HTTP/S only — no support for WebSockets in Linux containers without workarounds
Scaling Pattern
Does your app need to handle bursts or go idle?
- ACA: Event-driven autoscaling with KEDA. Can scale to zero
- AKS: Powerful scaling via HPA/KEDA/Cluster Autoscaler — manual setup required
- App Service: Rule-based autoscaling (CPU, memory) — can’t scale to zero
- ACI: No autoscaling — manual scripting required
Startup Latency
Is fast startup important for your app?
- App Service (Always On): Warm container — zero cold start
- AKS: Pods stay warm unless scaled down
- ACA: Cold starts if scaled to zero (seconds to 10s), but can pin a warm replica
- ACI: Cold start every time — expect 30–60s
Runtime & OS Support
Does your app need a specific OS or platform?
- All support Linux containers
- AKS and ACI support Windows containers
- App Service: Supports built-in runtimes (.NET, Node, Java) and custom containers
- ACA: Linux containers only (Windows support in preview)
Customisation & Control
Do you need fine-grained control over orchestration, networking, or security?
- AKS: Full Kubernetes API — DaemonSets, Ingress, CNIs, custom networking
- ACA: Supports volumes, sidecars, Envoy-based networking — no Kubernetes API access
- App Service: Limited config — great for web, not low-level networking
- ACI: Minimal — one container or group, no orchestration
Operational Complexity
How much infra are you prepared to manage?
- App Service, ACA, ACI: Low-ops — Azure manages the infrastructure
- AKS: You manage upgrades, node health, scaling logic, observability, and security
- If your team isn’t fluent in Kubernetes, AKS comes with a steeper learning curve
Cost Efficiency
Are you optimising for steady-state or spiky workloads?
- App Service: Great for always-on workloads — fixed plan-based billing
- AKS: Efficient at scale — but idle nodes = wasted cost
- ACA / ACI: Pay-per-use — best for bursty, short-lived, or idle-tolerant workloads
  - ACI can get expensive if left running by mistake
  - ACA adds a generous free tier and flexible plans
What’s Next?
These dimensions will help you map your app’s requirements to the right Azure container service.
In the next section, we’ll take a closer look at protocol and traffic support — the first (and often most decisive) factor.
Ready to dive in?
Traffic & Protocol Support Across Azure Container Services
When choosing a container service, one of the first — and most decisive — questions is:
What kind of network traffic does your application need to handle, and which platform supports it?
Azure’s container services vary significantly in their protocol support. Whether you’re deploying a REST API, WebSocket service, gRPC backend, or a raw TCP/UDP listener, this single dimension can immediately narrow your choices — or open them up.
Here’s how the four core services compare.
App Service – HTTP/S Only for Web Workloads
App Service is purpose-built for web apps and HTTP APIs. It’s highly managed and efficient — but strictly web-facing.
- ✅ Supports:
- HTTP(S)
- WebSockets (on supported plans)
- gRPC (via HTTP/2 over TLS)
- ❌ Does not support:
- Custom TCP/UDP ports
- Background listeners or non-web protocols
⚠️ You can’t expose raw sockets or non-HTTP ports. Everything routes through the App Service front-end (IIS for Windows or NGINX for Linux).
Best for: Public websites, REST APIs, HTTP-only apps
Not for: Custom protocol listeners or anything requiring TCP/UDP beyond HTTP
Container Apps (ACA) – Web-Native with Emerging TCP Support
ACA is ideal for HTTP-based services and is adding limited TCP support.
- ✅ Supports:
- HTTP(S), gRPC, WebSockets (via Envoy)
- Custom TCP (⚠️ only with a dedicated VNet)
- ❌ Does not support:
- UDP
- TCP if deployed to the consumption plan
⚠️ TCP support requires a custom VNet environment (Dedicated Plan only)
⚠️ Limit: Maximum 5 custom TCP ports per container app
Best for: APIs, gRPC services, background jobs
Use with caution for: Lightweight TCP apps (e.g. MQTT)
Not for: UDP or services needing more than 5 TCP ports
Azure Kubernetes Service (AKS) – Full Control for Any Protocol
AKS offers complete flexibility. If Kubernetes supports it, AKS supports it.
- ✅ Supports:
- HTTP/S, WebSockets, gRPC
- Custom TCP and UDP ports
- Internal + external endpoints
- Custom networking (CNI plugins, Ingress controllers)
- ✅ You define services, routing, scaling, and everything else
Best for:
- Protocol-heavy workloads (e.g., game servers, brokers)
- Advanced ingress/routing needs
- Multi-protocol or hybrid stacks
You manage: Service definitions, Ingress, port routing, firewall rules
Azure Container Instances (ACI) – Run TCP/UDP Containers Instantly
ACI is the easiest way to run a container with raw TCP or UDP ports, with no orchestration.
- ✅ Supports:
- Arbitrary TCP/UDP ports
- Public or private IPs
- ❌ Does not support:
- Autoscaling
- Ingress controllers or load balancing
⚠️ You’ll need to manually script orchestration or scaling
⚠️ Billing is per second, but containers left running can incur cost quickly
Best for:
- One-off tasks
- TCP/UDP-based batch jobs
- Testing services
Not for:
- Long-running services
- Anything needing orchestration, autoscaling, or HA
Quick Comparison Table
Service | HTTP/S | WebSockets / gRPC | Custom TCP Ports | UDP | Ingress / Scaling Notes |
---|---|---|---|---|---|
App Service | ✔️ | ✔️ (WebSockets/gRPC) | ❌ | ❌ | Web-only. No raw ports. Autoscale, no scale to zero. |
Container Apps | ✔️ | ✔️ (gRPC/WebSockets) | ⚠️ 5 ports (VNet only) | ❌ | Scale to zero. TCP only on dedicated plans. |
AKS | ✔️ | ✔️ | ✔️ | ✔️ | Full K8s. Manual config for ingress + autoscaling. |
ACI | ✔️ | ✔️ | ✔️ | ✔️ | No autoscaling. Great for single tasks and custom ports. |
Real-World Example: Hosting an MQTT Broker
You want to host an MQTT broker, which listens on TCP port 1883 and optionally WebSockets:
- App Service? ❌ No TCP support — not possible
- Container Apps? ⚠️ Possible with TCP + VNet configuration
- ACI? ✔️ Good for a standalone, low-traffic broker
- AKS? ✔️ Best for high-availability MQTT with full scaling and protocol support
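For the AKS route, exposing TCP 1883 is a standard Kubernetes Service. A minimal sketch, assuming a broker Deployment labelled `app: mqtt-broker` (all names are placeholders):

```yaml
# Minimal sketch: expose an MQTT broker on TCP 1883 via a LoadBalancer Service.
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
spec:
  type: LoadBalancer        # public IP; use an internal load balancer to keep it private
  selector:
    app: mqtt-broker        # must match the broker Deployment's pod labels
  ports:
    - name: mqtt
      protocol: TCP
      port: 1883
      targetPort: 1883
```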
Up Next: Scaling Behavior & Hands-On Effort
Now that you know which services can handle which traffic patterns, it’s time to explore the next key factor:
How do they scale — and how hands-on do you want to be?
Let’s break it down.
How Azure Container Services Handle Autoscaling and Scale to Zero
Not all workloads scale the same.
Some run 24/7. Others idle for hours and spike suddenly. Some respond to event queues, others to traffic surges.
Choosing the right Azure container service means knowing how it scales — automatically or manually, instantly or gradually — and whether it can scale down to zero when idle to save cost.
This section breaks down how each Azure service approaches autoscaling, scale-to-zero, and operational effort.
ACA: Built-In KEDA Scaling with Scale-to-Zero
Azure Container Apps was built for dynamic workloads. It uses KEDA under the hood, letting you scale based on traffic, queues, schedules, or metrics — no manual setup needed.
- ✅ Built-in autoscaling — powered by KEDA (you don’t need to install anything)
- ✅ Scale to zero — containers shut down completely when idle
- ✅ Event-driven triggers — HTTP traffic, Azure Service Bus, CPU, memory, etc.
- ⚙️ Just define scale rules — no infrastructure to manage
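As an illustration, here is what a queue-based scale rule looks like in a Container App’s YAML definition. The queue name and secret name are placeholders:

```yaml
# Illustrative ACA scale block: scale 0–10 replicas on Service Bus queue depth.
scale:
  minReplicas: 0                   # scale to zero when the queue is empty
  maxReplicas: 10
  rules:
    - name: queue-based-scaling
      custom:
        type: azure-servicebus     # KEDA Service Bus scaler
        metadata:
          queueName: orders        # placeholder queue name
          messageCount: "20"       # target messages per replica
        auth:
          - secretRef: servicebus-connection   # placeholder secret
            triggerParameter: connection
```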
Best for: APIs with unpredictable load, background jobs, microservices triggered by events
ℹ️ ACA uses KEDA internally, but you don’t need to install or manage it — just declare your scaling needs.
AKS: Flexible Autoscaling (Manual Setup Required)
Azure Kubernetes Service gives you full control — and that includes how scaling works. But with great power comes… manual effort.
You can combine multiple tools:
- ✅ Horizontal Pod Autoscaler (HPA) — scale pods based on metrics
- ✅ KEDA (install manually) — event-driven triggers
- ✅ Cluster Autoscaler — scale node pools based on pending pods
- ⚠️ No scale-to-zero by default — requires custom KEDA config
- ❌ No built-in scaling rules — you have to configure it all
Best for: Teams with Kubernetes experience who need custom scaling logic and control over pods, nodes, and orchestration
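To give a feel for the manual setup, here is a minimal KEDA ScaledObject sketch. It assumes KEDA is already installed on the cluster (for example via the AKS KEDA add-on); the deployment, queue, and auth names are placeholders:

```yaml
# Minimal KEDA ScaledObject sketch for AKS (KEDA must be installed first).
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker-deployment    # the Deployment to scale (placeholder)
  minReplicaCount: 0           # enables scale to zero
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders      # placeholder queue name
        messageCount: "20"
      authenticationRef:
        name: servicebus-auth  # a TriggerAuthentication resource (not shown)
```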
App Service: Easy UI Rules, No Scale-to-Zero
Azure App Service provides rule-based autoscaling through the Azure portal or CLI:
- ✅ Autoscale based on CPU, memory, queue length, or time
- ⚠️ Scaling is per instance, not per container
- ❌ No scale-to-zero — one instance always running
- ⚙️ Simple rule definitions like:
Add instance when CPU > 70% for 10 mins
Remove when queue length < 10 for 5 mins
Best for: Web apps and APIs with steady or predictable usage
⚠️ App Service scaling is coarse-grained — it scales whole instances, not containers.
ACI: No Autoscaling – DIY Required
Azure Container Instances are ideal for single container runs — but there’s no native scaling.
- ❌ No autoscaling — each container group is launched manually
- ⚠️ You can simulate scaling with Azure Logic Apps, Functions, or scripts
- ❌ No replica management, load balancing, or KEDA-style triggers
Best for:
- One-off batch jobs
- Background tasks
- Event-driven containers triggered by external logic
⚠️ ACI is great for burst jobs — but scale-out and orchestration are your responsibility.
Quick Comparison: Scaling at a Glance
Service | Autoscaling? | Scale to Zero? | Scaling Level | Setup Effort |
---|---|---|---|---|
Container Apps | ✅ Built-in (KEDA) | ✅ Yes | Container replicas | 🟢 Low – declare triggers |
AKS | ✅ Manual setup (HPA/KEDA) | ⚠️ With KEDA config | Pods + node pools | 🔵 Medium – config required |
App Service | ✅ Rule-based | ❌ No | Entire app service | 🟢 Low – portal/CLI-based |
ACI | ❌ No autoscaling | ❌ N/A | Single container/group | 🔴 Manual or script-based |
Real-World Scenario: Queue-Driven Image Processor
You’re building an image processor that runs whenever a message arrives in a queue. It should scale to zero when idle and spin up only when there’s work.
- ✅ ACA: Best fit — define a Service Bus trigger, scale to zero when idle, no ops overhead
- ⚠️ AKS: Possible — install KEDA and wire your event source
- ⚠️ ACI: Works with Logic App or Function trigger, but no scale-out or replica management
- ❌ App Service: Not suitable — no scale-to-zero, not built for background tasks
Wrap-Up
Autoscaling isn’t just about handling more traffic — it’s about doing so efficiently, automatically, and without over-provisioning. It’s also about how much control (or simplicity) you want.
Next up, let’s look at another deal-breaker dimension:
Startup Latency — because no matter how fast you scale, it won’t matter if your app takes 60 seconds to wake up.
Startup Latency: Cold Starts, Warm Instances & Always-On Containers
When your app receives its first request after sitting idle, how quickly can it respond?
That’s the core of startup latency — often referred to as cold start time. Some Azure services keep your containers always running (warm), others spin them up on demand (cold), and that difference can mean seconds or even minutes of delay.
Whether this matters depends entirely on your app’s nature:
- Is it interactive or event-driven?
- Is low-latency response time critical?
- Can you afford a few seconds of spin-up time?
Let’s break down how each Azure container service behaves when idle — and what startup time you can expect.
❄️ Cold start = container isn’t running. Azure must schedule the container, pull the image, and start the process before it can handle traffic.
Azure Container Apps – Cold Start (Mitigated with minReplicas)
ACA is serverless by default — containers scale to zero when idle. The next request triggers a cold start, which includes image pull, container init, and app boot.
- Cold Start Time: ~5–30 seconds depending on image size and app logic
- Workarounds:
  - Set minReplicas: 1 to keep one instance always warm
  - Use the Dedicated Plan, which keeps your containers hot
  - Pre-warm with a scheduled ping or health-check job
✅ Cold start exists
⚙️ Avoidable with warm replicas or plan tuning
💡 Great for background jobs or infrequent workloads, less ideal for real-time APIs unless warmed
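Pinning a warm replica is a one-line change in the app’s scale configuration, sketched here with placeholder limits:

```yaml
# Illustrative ACA scale block: keep one replica warm to avoid cold starts.
scale:
  minReplicas: 1   # one instance always running — trades a small idle cost for latency
  maxReplicas: 5
```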
Azure Kubernetes Service – Always-On (Unless You Scale to Zero)
AKS pods run on always-on VM nodes (unless you scale them down). As long as your deployment is active, your app is hot and ready.
- Warm Startup: <5 seconds
- Cold Edge Cases:
- Scaling a node pool from zero adds 2–3 minutes of VM provisioning latency
- Using KEDA for pod-level scale-to-zero can introduce cold starts (~5–30s)
✅ No cold start when pods are running
⚠️ Cold start possible if node pool is scaled down
🧠 You control pod lifecycle and scheduling
Azure App Service – Warm Instances with “Always On”
App Service keeps your app warm by default — especially on paid plans with the Always On setting enabled.
- Cold Start: ~1–5 seconds (if idle and Always On disabled)
- Warm Start: Near-instant
- Cold triggers:
- Free or lower-tier plans without Always On
- Initial cold start when scaling out to a new instance
✅ Cold starts are rare with Always On
⚡ Fastest startup for continuous web workloads
💡 Ideal for user-facing APIs, websites, dashboards
Azure Container Instances – Cold Every Time
ACI always performs a cold start — every container run is a fresh spin-up.
- Startup Delay: 30–60 seconds typical
- No Always-On Mode: Each container is provisioned from scratch
- Cold Factors: Image pull, region caching, lack of orchestration
❌ No way to keep warm unless you manually prevent teardown
✅ Works fine for batch tasks, not suitable for latency-sensitive APIs
💡 You’ll need orchestration if you want real-time responsiveness
Quick Comparison: Startup Behavior
Service | Cold Start Risk | Avoidable? | Startup Time | Best For |
---|---|---|---|---|
App Service | ⚠️ Possible | ✔️ Yes – use “Always On” | ~1–5s (cold), near-zero (warm) | Web APIs, user apps needing fast response |
Container Apps | ✅ Yes (scale to zero) | ✔️ Use minReplicas: 1 | ~5–30s (cold), <1s (warm) | Background jobs, event-driven APIs (warm) |
AKS | ⚠️ In some scaling cases | ⚠️ Avoid if not scaled down | <5s (warm), 2–3m (cold VM) | Full control apps with constant traffic |
ACI | ✅ Always cold | ❌ No | 30–60s typical | Short-lived, async workloads, testing |
Real-World Scenario
You’re deploying a public-facing REST API that must respond in under 1 second — even after sitting idle for an hour.
Let’s evaluate your options:
Service | Fit | Why |
---|---|---|
App Service | ✅ Best | Always On = consistent low-latency, simplest setup |
ACA | ⚠️ Maybe | Needs minReplicas: 1 or Dedicated Plan to avoid 5–30s cold start |
AKS | ✅ Good | No cold start if pods are always running — but comes with ops overhead |
ACI | ❌ No | Always cold — not suited for latency-sensitive endpoints |
What’s Next?
Cold start is just one half of the runtime story.
Now let’s explore environment and runtime support — because not every container service supports Windows, sidecars, or custom runtimes. And choosing the right service also means making sure your app can actually run where you deploy it.
Runtime & OS Support: What Azure Container Services Can Run
When evaluating container platforms on Azure, a key question arises:
Will this service actually run my app?
Whether you’re deploying .NET, Java, Go, or a custom binary, the answer depends on:
- What runtimes and language stacks the service supports
- Whether you can bring a custom container image
- Whether the platform supports Linux or Windows containers
Let’s break it down, service by service.
Azure Container Apps – Flexible Linux Containers Only
❗ Linux containers only
❌ No support for Windows containers
ACA is a runtime-agnostic, container-native platform designed for modern microservices and event-driven workloads. You bring the image; ACA runs it — with no cluster or VM management required.
- Accepts any Linux-based container image (x86_64)
- Works with public and private registries: ACR, Docker Hub, GitHub, etc.
- No built-in runtimes — you must package your own container
- No Windows container support, no ARM64 support (as of 2025)
✅ Best for: .NET Core, Go, Node.js, Python, Java, Rust — anything in a Linux container
❌ Not suitable for legacy Windows workloads or code-first deployment without containers
💡 Use ACA when you want flexibility with low ops — and you’re happy to stick with Linux.
Azure Kubernetes Service – Full OS and Runtime Freedom
✅ Supports both Linux and Windows containers
🔧 Total flexibility — you manage what and how it runs
AKS gives you the most freedom of any Azure container service. You run your app in standard Kubernetes — Linux or Windows — with complete access to pod lifecycle, container orchestration, and runtime choices.
- Mix Linux and Windows node pools
- Run sidecars, init containers, daemonsets, GPUs, etc.
- Use any registry and any OCI-compliant image
- Ideal for advanced workloads with custom control needs
✅ Best for: .NET Framework on Windows, hybrid runtimes, ML pipelines, game servers
✅ Great for legacy, modern, and mixed environments
💡 If you can build it, AKS can run it — just be ready to manage it.
Azure App Service – Built-In Runtimes + Container Support
✅ Supports both Linux and Windows containers
⚠️ Windows containers require Premium or Isolated plans
App Service gives you two options: deploy code to a managed runtime or run a container. Either way, it’s optimised for web apps and APIs.
a) Code-Based (No container required)
- Built-in support for .NET, Java, Python, PHP, Node.js, Ruby
- Ideal for developers who just want to push code and go
b) Container-Based
- Bring your own Linux or Windows container
- Windows containers require Premium v3 or Isolated plans
- Your app must expose an HTTP port (web workloads only)
✅ Best for: Developers who want fast web/API hosting with minimal ops
❌ Not for: background jobs, custom protocol servers, or binary-only apps
💡 Great balance of simplicity and flexibility — if you stay within web boundaries.
Azure Container Instances – Any Container, Just Run It
✅ Supports both Linux and Windows containers
⚠️ One container group = one OS type
ACI is a lightweight, on-demand container runtime. It doesn’t care what’s inside your image — as long as it fits in the resource limits.
- Run any custom image (Linux or Windows)
- Great for one-off scripts, test runs, batch jobs
- No orchestration or lifecycle management — fire and forget
- Not ideal for apps that require scaling or high availability
✅ Best for: data processing, test runners, CLI tools, queue consumers
❌ Not for: long-lived services or multi-container orchestration
💡 ACI is like a cloud shell for containers — launch anything, anytime.
Quick Comparison
Service | Built-in Runtimes | Custom Containers | Windows Containers | Best Fit For |
---|---|---|---|---|
Container Apps | ❌ None | ✅ Linux only | ❌ Not supported | Microservices, APIs, event jobs (Linux) |
AKS | ❌ Bring your own | ✅ Linux & Windows | ✅ Yes | Any workload — total control |
App Service | ✅ .NET, Node, Java, etc. | ✅ Linux & Windows¹ | ⚠️ Premium v3 or higher only¹ | Web apps & APIs, low-op deployment |
ACI | ❌ None | ✅ Linux & Windows | ✅ Yes | Jobs, tests, one-off containers |
¹ Windows containers require Premium v3 or Isolated plan.
Real-World Scenario
You’re containerising a legacy .NET Framework app that only runs on Windows.
What are your options?
- ✅ AKS: Full support with Windows node pools. Great for scale, flexibility, and control.
- ✅ App Service: Works with Premium v3 or Isolated plan. Ideal for web-based workloads.
- ⚠️ ACI: Fine for dev/test or one-time jobs, but not scalable or orchestrated.
- ❌ ACA: Not supported — Linux containers only.
Up Next: Control & Complexity
Runtime support is only half the story. Next, let’s talk about how much control you really want over your container environment — and how much operational complexity you’re willing to accept in exchange.
DevOps Effort & Customisation: How Much Control Does Each Azure Service Give You?
When you choose an Azure container platform, you’re not just deciding how to run your app — you’re also deciding how much infrastructure you want to manage, and how much control you need over networking, orchestration, and runtime behaviour.
Some services give you full access to the underlying platform (like Kubernetes nodes and networking stacks), while others abstract all of that away — letting you focus purely on the app.
Let’s compare how Azure’s four container services stack up in terms of customisation power and DevOps complexity.
Azure Container Apps – Low Ops with Moderate Flexibility
✅ Best for: Teams that want to deploy containers without managing Kubernetes — but still need sidecars, secrets, and basic networking.
- Linux-only platform with no Kubernetes API access
- Supports sidecar and init containers for patterns like logging or proxies
- Mount Azure Files, use Key Vault and Managed Identity
- VNet integration, outbound IP control, and basic DNS/networking settings available
- ❌ Cannot deploy DaemonSets, install agents, or control the host
🛠 DevOps Effort: 🔵 Low–Moderate
You define container images, environment settings, and scaling rules — Azure handles everything else.
⚠️ Limitations: No orchestration layer, no pod-level security policies, no privileged workloads
Azure Kubernetes Service (AKS) – Full Control, Full Responsibility
✅ Best for: Platform teams and advanced workloads that demand deep control over runtime, orchestration, and infrastructure.
- Access to full Kubernetes API
- Supports custom CNI plugins, pod security policies, network policies, and ingress controllers
- Run DaemonSets, sidecars, init containers, privileged containers, GPU workloads
- SSH into nodes, customise autoscaling, use Azure CSI drivers, deploy third-party agents
- Monitor and secure everything via Container Insights, Azure Monitor, or custom tools
🛠 DevOps Effort: 🔴 High
You own everything from upgrades to node pools, scaling logic, and security posture.
💡 Ideal for building internal platforms or running workloads that need isolation, low-level access, or custom networking.
Azure App Service – Hands-Off Simplicity, Limited Power
✅ Best for: Teams that want to focus purely on building web apps or APIs with minimal DevOps effort.
- Offers built-in runtimes or lets you deploy custom containers
- Supports environment variables, TLS, VNets, and Managed Identity
- Multi-container support via Docker Compose (Linux only)
- ❌ No orchestration, no sidecars across containers, no pod-level control
🛠 DevOps Effort: 🟢 Very Low
You push code or container, define app settings — and that’s it.
⚠️ Limitations:
- No SSH or host access
- No support for non-web workloads
- No control over ports, daemon behaviour, or underlying OS
Azure Container Instances – Quick Launch, No Orchestration
✅ Best for: Lightweight jobs or one-off containers with no orchestration required.
- Define containers, expose ports, set environment variables — and go
- Supports VNet integration, and multiple containers per group
- ❌ No orchestration: container groups can’t coordinate with each other
- ❌ No SSH or host access
- Logs available via Azure Monitor or command line
🛠 DevOps Effort: 🟢 Very Low
Great for running a container quickly — but not for managing distributed systems.
⚠️ Limitation (Critical): You cannot build scalable, orchestrated apps with ACI alone — coordination must come from Logic Apps, Functions, or external schedulers.
Comparison Table: DevOps Control vs. Customisation
Service | Level of Control | DevOps Effort | Custom Networking & Host Access |
---|---|---|---|
Container Apps | 🟡 Moderate (sidecars, secrets) | 🔵 Low–Moderate | ⚠️ Limited (VNet, outbound IPs only) |
AKS | 🟢 Very High (full K8s access) | 🔴 High | ✅ Full control (networking, security, nodes, etc.) |
App Service | 🔴 Low (web app features only) | 🟢 Very Low | ❌ No host-level access, minimal container config |
ACI | 🔴 Low (run & forget model) | 🟢 Very Low | ⚠️ Limited (some VNet, no orchestration or grouping) |
Real-World Scenario: Can I Deploy a Sidecar Without Kubernetes?
Use Case: You want to run a sidecar container (e.g. for logging or metrics), mount secrets from Key Vault, and allow outbound VNet traffic — without managing AKS.
Service | Supports Sidecar? | Secrets + VNet? | Avoids Full Kubernetes? | Good Fit? |
---|---|---|---|---|
ACA | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
AKS | ✅ Yes | ✅ Yes | ❌ No | ✅ Yes |
App Service | ⚠️ Limited (Linux only) | ⚠️ Partial | ✅ Yes | ❌ No |
ACI | ⚠️ Yes (same group only) | ⚠️ Partial | ✅ Yes | ❌ No |
Trade-Offs in Control Come With a Cost
The more control you need, the more infrastructure you’ll manage — and the deeper your DevOps muscle must be. If you don’t need orchestration or host-level customisation, don’t overbuild.
Up next: Cost Models — because even the best-designed platform doesn’t help if it quietly drains your budget.
Cost Models Compared: How Azure Container Services Charge You
Choosing a container platform isn’t just about features — it’s about cost alignment.
Azure’s container services use very different billing models. Some charge for provisioned infrastructure, while others bill for actual usage.
Pick the wrong one for your workload shape — steady vs. spiky, always-on vs. bursty — and you could overpay by 2–3x.
Let’s compare how each service charges you, and where each is most efficient.
Azure Container Apps – Pay Only When You Use It
ACA follows a serverless, usage-based billing model, perfect for event-driven and idle-friendly workloads.
- Charged per vCPU-second, memory-second, and HTTP requests
- Free tier: 180,000 vCPU-seconds + 360,000 GiB-seconds per month
- Scales to zero, so no idle cost
- Optional Dedicated Plan for flat-rate capacity
💰 Ideal for: bursty APIs, queues, background jobs
⚠️ Watch out: for high sustained traffic, the Dedicated Plan or AKS may be more cost-effective
💡 Example cost: ~NZD $0.00002 per vCPU-second
🔗 ACA Pricing Calculator
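To see how the free grants interact with usage-based billing, here's a rough back-of-the-envelope sketch. The vCPU rate and free grants are the indicative figures from this article; the memory rate is an assumed illustrative value — always confirm against the Azure pricing calculator for your region.

```python
# Rough ACA Consumption-plan monthly cost sketch (NZD).
# VCPU_RATE and the free grants come from the figures above;
# MEM_RATE is an illustrative assumption, not an official price.

VCPU_RATE = 0.00002        # NZD per vCPU-second (indicative, from article)
MEM_RATE = 0.000003        # NZD per GiB-second (assumed for illustration)
FREE_VCPU_SECONDS = 180_000
FREE_GIB_SECONDS = 360_000

def aca_monthly_cost(vcpus: float, gib: float, active_hours: float) -> float:
    """Estimate monthly cost for a workload active `active_hours` per month."""
    active_seconds = active_hours * 3600
    vcpu_seconds = vcpus * active_seconds
    gib_seconds = gib * active_seconds
    # Free grants are deducted first; scale-to-zero means idle time is free.
    billable_vcpu = max(0, vcpu_seconds - FREE_VCPU_SECONDS)
    billable_gib = max(0, gib_seconds - FREE_GIB_SECONDS)
    return billable_vcpu * VCPU_RATE + billable_gib * MEM_RATE

# A bursty API active ~1 hour/day on 2 vCPU / 4 GiB:
print(round(aca_monthly_cost(2, 4, 30), 2))  # ≈ NZD 0.94 for the month
```

Notice how a genuinely bursty workload lands at well under a dollar a month — which is exactly why sustained, always-busy traffic is better served by the Dedicated Plan or AKS.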
Azure Kubernetes Service – You Pay for the Nodes, Always
AKS itself is free — but you pay for everything underneath:
- VMs (node pools), storage, outbound IPs, load balancers
- Control plane is free (or NZD ~$120/month for SLA-backed uptime)
- You’re billed 24/7 — even if the cluster is idle
💰 Ideal for: high-throughput, always-on workloads with consistent volume
⚠️ Watch out: underused nodes = wasted spend
💡 Example: A 3-node B4ms cluster can cost NZD $300–500+/month when idle
Azure App Service – Flat-Rate, Always-On Pricing
App Service uses a per-instance/hour pricing model tied to a selected plan:
- You pick a plan tier (Basic, Standard, Premium)
- Billed based on number of instances × hours
- Always-on billing — even if traffic is zero
- Multiple apps can share a plan to improve value
💰 Ideal for: consistent web workloads, multi-app hosting on shared plans
⚠️ Watch out: idle apps still cost you, and Windows containers require Premium v3 or higher
💡 Example: A Premium v3 plan starts at NZD ~$180/month per instance
Azure Container Instances – Pay Per Second While It Runs
ACI is the most granular usage-based model in the lineup:
- Billed per vCPU-second and memory-second
- Charges start when the container starts, stop when it exits
- No base cost or idle billing
- Slight price premium vs VMs (for convenience)
💰 Ideal for: short-lived tasks, queue processors, on-demand jobs
⚠️ Watch out: if you forget to stop a long-running container, costs add up quickly
💡 Example: NZD ~$0.0012 per vCPU-second, ~$0.0001 per GiB-second
🔗 ACI Pricing Calculator
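The per-second model is easy to sanity-check by hand. Using the indicative NZD rates above (verify current rates on the ACI pricing page), a short queue-drain job costs fractions of a dollar — but the same maths shows why a forgotten long-running container hurts.

```python
# Per-second ACI cost sketch, using the indicative NZD rates above.

VCPU_RATE = 0.0012   # NZD per vCPU-second (indicative)
MEM_RATE = 0.0001    # NZD per GiB-second (indicative)

def aci_job_cost(vcpus: float, gib: float, seconds: float) -> float:
    """Billing runs only while the container group is executing."""
    return seconds * (vcpus * VCPU_RATE + gib * MEM_RATE)

# One queue-drain job: 1 vCPU, 1.5 GiB, running for 5 minutes
print(f"NZD {aci_job_cost(1, 1.5, 300):.3f} per run")   # ~NZD 0.405

# The same container accidentally left running for a full month:
print(f"NZD {aci_job_cost(1, 1.5, 30 * 24 * 3600):,.0f}")  # several thousand NZD
```

Same billing model, wildly different outcomes — which is why ACI shines for tasks with a clear start and end, and why you should always ensure containers actually exit.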
Quick Cost Model Comparison
Service | Billing Trigger | Idle Cost? | Best Fit |
---|---|---|---|
Container Apps | Per vCPU/mem/request (usage) | ❌ None (scales to 0) | Event-driven, background jobs, APIs with burst traffic |
ACI | Per vCPU/mem-second (runtime) | ❌ None (on-run only) | One-off tasks, dev/test, burst jobs |
App Service | Per-instance/hour (plan tier) | ✅ Always-on billing | Consistent web workloads, shared app plans |
AKS | VM-based (node pool billing) | ✅ VM cost 24/7 | High-scale clusters, shared infra, platform teams |
Real-World Scenario
You’re running a background job that processes messages from a queue every few hours. You only want to pay when work is being done.
Service | Fit | Why |
---|---|---|
ACA | ✅ Best | KEDA triggers scale-up; scales to zero; billed only on use |
ACI | ✅ Good | Can be triggered on demand with Logic Apps/Functions |
App Service | ❌ Poor | Always-on billing, even when idle |
AKS | ❌ Poor | VM cost incurred 24/7, even if pods are idle |
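For the ACA option, the scale-to-zero behaviour is driven by a KEDA-backed scale rule. Here's a sketch of what that looks like in a Container Apps template using the `azure-queue` scaler — the queue name, threshold, and secret name are hypothetical placeholders for your own values.

```yaml
# Sketch: ACA scale rule driven by Azure Storage Queue depth (KEDA azure-queue scaler).
# minReplicas: 0 means you pay nothing between batches of messages.
properties:
  template:
    scale:
      minReplicas: 0
      maxReplicas: 5
      rules:
        - name: queue-based-scaling
          custom:
            type: azure-queue
            metadata:
              queueName: jobs        # hypothetical queue name
              queueLength: "5"       # target messages per replica
            auth:
              - secretRef: queue-connection   # hypothetical secret
                triggerParameter: connection
```

When the queue is empty, replicas drop to zero and billing stops; when messages arrive, KEDA scales the app back up — which is precisely the "billed only on use" behaviour that makes ACA the best fit in this scenario.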
💡 Key takeaway: Don’t just ask “what’s cheapest per hour?”
Ask “what’s cheapest across a week or month — given how your app behaves?”
What’s Next?
Cost efficiency isn’t about the lowest hourly rate — it’s about aligning billing with your workload shape.
With that in mind, let’s wrap everything up with a clear summary and recommendation, so you can confidently choose the right Azure container service for your app.
Final Summary & Recommendation: Choose the Right Azure Container Service with Confidence
You’ve explored the features, trade-offs, scaling patterns, and cost models. Now it’s time to make your choice.
Azure offers four distinct ways to run containers, each tailored to a different application shape, DevOps preference, and traffic profile. When you align your app’s protocol needs, runtime environment, scaling pattern, and cost profile to the right platform — everything works better, scales faster, and runs cheaper.
Here’s a recap of the strengths and ideal fit for each option:
Azure Container Apps (ACA)
Serverless scale, minimal ops — best for event-driven and bursty workloads
- ✅ Autoscaling (via KEDA), scale to zero
- ✅ Great for APIs, background jobs, microservices
- ✅ Built-in HTTP, gRPC, WebSocket support
- ⚠️ Linux containers only
- ❌ Not suited for custom networking or Windows workloads
💡 Choose ACA when you want a modern, cloud-native platform with dynamic scale and low operational overhead.
Azure Kubernetes Service (AKS)
Maximum control, full orchestration — best for complex systems and hybrid environments
- ✅ Linux and Windows containers, advanced networking, and full K8s API
- ✅ Ideal for custom workloads, sidecars, ingress controllers, and multi-container apps
- ✅ Deep integration with Azure services
- ❌ Higher complexity — DevOps skill required
- ❌ Pay for nodes even when idle
💡 Choose AKS when you need complete customisation, control, and scalability.
Azure App Service
Fastest path to HTTP web apps — best for developers who want simplicity
- ✅ Built-in runtimes (.NET, Java, Node.js, etc.)
- ✅ Supports containerised deployments (Linux & Windows*)
- ✅ SSL, auto-scaling, deployment slots built-in
- ❌ HTTP/S traffic only — no custom protocols or ports
- ⚠️ Windows container support requires Premium/Isolated plans
💡 Choose App Service when you’re deploying web apps and APIs and want zero infrastructure management.
Azure Container Instances (ACI)
Run containers instantly — no setup, pay-per-second
- ✅ Supports Linux and Windows containers
- ✅ Any protocol (TCP/UDP), no web-only restrictions
- ✅ Ideal for batch jobs, quick testing, one-off scripts
- ❌ No autoscaling, no orchestration
- ⚠️ Not designed for persistent or high-availability services
💡 Choose ACI when you need to run something quickly, on demand, without provisioning infrastructure.
Decision Helper: Which Azure Container Service Should You Use?
Use Case | ✅ Best Fit | ⚠️ Consider | ❌ Not Recommended |
---|---|---|---|
I’m deploying a simple HTTP API | App Service | ACA | AKS, ACI |
I want serverless containers that scale to zero | ACA | ACI (with Logic App) | App Service, AKS |
I need raw TCP/UDP or custom ports | AKS, ACI | — | ACA, App Service |
I’m building a queue-based background processor | ACA | ACI | App Service |
I’m containerising a legacy Windows app | AKS | App Service (Premium) | ACA |
I need full control over networking and orchestration | AKS | — | ACA, ACI, App Service |
I just want to run a task quickly and shut it down | ACI | ACA | AKS, App Service |
Final Thoughts: Start Simple, Scale Smart
The best choice is the one that meets your needs today — and scales with you tomorrow.
Start with the simplest platform that delivers what you need. Only step up to more complex options like AKS when you hit a real limitation.
And remember — you don’t have to pick just one:
- Use ACA for microservices and background jobs
- Add ACI for lightweight event-driven tasks
- Deploy customer-facing portals on App Service
- Use AKS when deep platform control is non-negotiable
Thanks to Azure’s integration across identity, networking, and DevOps tooling — combining services is not only possible, it’s often the best approach.
Wherever your app lands, you now have the clarity and context to choose with confidence.
Go build something awesome.