AKS Announcements Roll-up from Microsoft Ignite 2020

There were a whole lot of announcements around Azure Kubernetes Service (AKS) at Ignite 2020. I thought I’d quickly sum them all up and provide links:

Brendan Burns’ post on AKS Updates

A great summary of recent investments in AKS from Kubernetes co-creator, Brendan Burns.

Preview: AKS now available on Azure Stack HCI

AKS on Azure Stack HCI enables customers to deploy and manage containerized apps at scale on Azure Stack HCI, just as they can run AKS within Azure.

Public Preview: AKS Stop/Start Cluster

Pause an AKS cluster and pick up where you left off later at the flip of a switch, saving time and cost.
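As a quick sketch of what this looks like with the Azure CLI (cluster and resource group names below are placeholders):

```shell
# Deallocate the control plane and all node pools (placeholder names).
az aks stop --name MyAksCluster --resource-group MyResourceGroup

# Later, resume the cluster where it left off.
az aks start --name MyAksCluster --resource-group MyResourceGroup
```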

GA: Azure Policy add-on for AKS

The Azure Policy add-on for AKS allows customers to audit and enforce policies on their Kubernetes resources.
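For example, the add-on can be enabled on an existing cluster via the Azure CLI (cluster and resource group names are placeholders):

```shell
# Enable the Azure Policy add-on on an existing AKS cluster.
az aks enable-addons --addons azure-policy \
  --name MyAksCluster --resource-group MyResourceGroup
```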

Public Preview: Confidential computing nodes on Azure Kubernetes Service

Azure Kubernetes Service (AKS) supports adding DCsv2 confidential computing nodes on Intel SGX.
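At a sketch level, a confidential computing node pool is added much like any other node pool, just with a DCsv2 VM size; the names and sizes below are illustrative:

```shell
# Add a node pool backed by DCsv2 (Intel SGX) VMs - names and sizes are illustrative.
az aks nodepool add \
  --cluster-name MyAksCluster \
  --resource-group MyResourceGroup \
  --name confpool \
  --node-vm-size Standard_DC2s_v2 \
  --node-count 1
```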

GA: AKS support for new Base image Ubuntu 18.04

You can now create Node Pools using Ubuntu 18.04.

GA: Mutate default storage class

You can now use a different storage class in place of the default storage class to better fit your workload needs.
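Swapping the default is done through the standard Kubernetes is-default-class annotation; a minimal sketch with kubectl, assuming the storage class names "default" and "managed-premium":

```shell
# Demote the current default storage class...
kubectl patch storageclass default \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

# ...and promote another one (e.g. managed-premium) in its place.
kubectl patch storageclass managed-premium \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```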

Public preview: Kubernetes 1.19 support

AKS now supports Kubernetes release 1.19 in public preview. Kubernetes release 1.19 includes several new features and enhancements such as support for TLS 1.3, Ingress and seccomp feature GA, and others.

Public preview: Azure RBAC for Kubernetes Authorization

With this capability, you can now manage RBAC for AKS and its resources using Azure or native Kubernetes mechanisms. When enabled, Azure AD users will be validated exclusively by Azure RBAC while regular Kubernetes service accounts are exclusively validated by Kubernetes RBAC.
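As an illustrative sketch, once the feature is enabled, access is granted with ordinary Azure role assignments; the role name below is one of the built-in AKS RBAC roles, while the user and scope are placeholders:

```shell
# Grant a user read access to Kubernetes resources via Azure RBAC.
# The assignee and scope are placeholders; the role is a built-in AKS RBAC role.
az role assignment create \
  --role "Azure Kubernetes Service RBAC Reader" \
  --assignee user@example.com \
  --scope /subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/MyAksCluster
```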

Public Preview: Visual Studio Code extension for AKS Diagnostics and Periscope

This Visual Studio Code extension enables developers to use AKS Periscope and AKS Diagnostics in their development workflow to quickly diagnose and troubleshoot their clusters.

Enhanced protection for containers

As containers, and specifically Kubernetes, become more widely used, the Azure Defender for Kubernetes offering has been extended to include Kubernetes-level policy management, hardening and enforcement with admission control, to make sure that Kubernetes workloads are secure by default. In addition, container image scanning by Azure Defender for Container Registries will now support continuous scanning of container images to minimize the exploitability of running containers.

Learn more about Azure Defender and Azure Sentinel.

There may well have been more, and I’ll update this post as they come to hand. Hope this roll-up helps.

Head over to https://myignite.microsoft.com and watch some of the AKS content to get an even better view of the updates.


Enable AKS Azure Active Directory integration with a Managed Identity from an ARM template

When you’re deploying an Azure Kubernetes Service (AKS) cluster in Azure, it is common that you’ll want to integrate it into Azure Active Directory (AAD) to use it as an authentication provider.

The original (legacy) method for enabling this was to manually create a Service Principal and use that to grant your AKS cluster access to AAD. The problem with this approach was that you would need to manage this manually, as well as worry about rolling secrets.

More recently an improved method of integrating your AKS cluster into AAD was announced: AKS-managed Azure Active Directory integration. This method allows your AKS cluster resource provider to take over the task of integrating to AAD for you. This simplifies things significantly.
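With the Azure CLI, for instance, AKS-managed AAD integration comes down to a couple of flags at cluster creation time (cluster, group and resource group names here are placeholders):

```shell
# Create a cluster with AKS-managed AAD integration enabled.
# $ADMIN_GROUP_ID holds the object id of an AAD group for cluster admins.
az aks create \
  --name MyAksCluster \
  --resource-group MyResourceGroup \
  --enable-aad \
  --aad-admin-group-object-ids $ADMIN_GROUP_ID
```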

You can easily do this integration by running PowerShell or Bash scripts, but if you’d prefer to use an ARM template, here is what you need to know.

  1. You will need the object ID of an Azure Active Directory group to use for your cluster administrators. For example:
    $clusterAdminGroupObjectIds = (New-AzADGroup `
    -DisplayName "AksClusterAdmin" `
    -MailNickname "AksClusterAdmin").Id


    This will return the object ID of the newly created group in the variable $clusterAdminGroupObjectIds. You will need to pass this variable into your ARM template.
  2. You need to add an aadProfile block into the properties of your AKS cluster deployment definition:

    For example:
    {
      "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "clusterAdminGroupObjectIds": {
          "defaultValue": [],
          "type": "array",
          "metadata": {
            "description": "Array of Azure AD Group object Ids to use for cluster administrators."
          }
        }
      },
      "resources": [
        {
          "name": "MyAksCluster",
          "type": "Microsoft.ContainerService/managedClusters",
          "apiVersion": "2020-04-01",
          "location": "eastus",
          "identity": {
            "type": "SystemAssigned"
          },
          "properties": {
            "kubernetesVersion": "1.18.4",
            "enableRBAC": true,
            "aadProfile": {
              "managed": true,
              "adminGroupObjectIds": "[parameters('clusterAdminGroupObjectIds')]",
              "tenantId": "[subscription().tenantId]"
            }
            // Other cluster properties here...
          }
        }
      ]
    }
  3. When you deploy the ARM template (using whatever method you choose), you’ll need to pass the $clusterAdminGroupObjectIds as a parameter. For example:
    New-AzResourceGroupDeployment `
    -ResourceGroupName 'MyAksWithAad_RG' `
    -TemplateFile 'ArmTemplateAksClusterWithManagedAadIntegration.json' `
    -TemplateParameterObject @{
        clusterAdminGroupObjectIds = @( $clusterAdminGroupObjectIds )
    }

That is all you really need to get AKS-managed AAD integration going with your AKS cluster.

For a fully formed ARM template that will deploy an AKS cluster with AKS-managed AAD integration plus a whole lot more, check out this ARM template. It will deploy an AKS cluster including the following:

  • A Log Analytics workspace integrated into the cluster with Cluster Insights and Diagnostics.
  • A VNET for the cluster nodes.
  • An ACR integrated into the VNET with Private Link and Diagnostics into the Log Analytics Workspace.
  • Multiple node pools spanning availability zones:
    • A system node pool including automatic node scaler.
    • A Linux user node pool including automatic node scaler.
    • A Windows node pool including automatic node scaler and taints.

Thanks for reading.