Azure Resource Manager Templates Hands-on Lab and #GlobalAzure 2019

Recently I helped organize and present at the 2019 Global Azure Bootcamp in Auckland. The Global Azure Bootcamp is a huge event run by Azure communities throughout the world, all on the same day every year. It is an opportunity for anyone with an interest in Azure to come and learn from experts and presenters and to share their knowledge. Whether you’re new to Azure or an expert, it is well worth your time to attend these free events.

AucklandGAB2019-1

The Global Azure Bootcamp is also an awful lot of fun to be a part of and I got to meet some fantastic people!

We also got to contribute to the Global Azure Bootcamp Science lab, which was a really great way to learn Azure as well as contribute to the goal of finding potential exoplanets (how cool is that?). A global dashboard was made available where all locations could compare their contributions. The Auckland team did fantastically well given Auckland’s relatively small size: we managed to reach 8th on the team leaderboard:

AucklandGAB2019-TeamLeaderboard

Hands-On Workshop Material

As part of my session this year, I produced a hands-on workshop and presentation showing attendees the basics of using Azure Resource Manager templates, as well as some of the more advanced topics such as linked/nested templates and security.

The topics covered are:

AucklandGAB2019-ARMTemplatesWorkshop

I’ve made all of this material open and free for the community, so you can use it to run your own sessions or modify and improve it.

You can find the material in GitHub here:

https://github.com/PlagueHO/Workshop-ARM-Templates

Thanks for reading and hope to see some of you at a future Global Azure Bootcamp!

Enable CORS Support in Cosmos DB using PowerShell

Support for Cross-Origin Resource Sharing (CORS) was recently added to Cosmos DB. If you want to enable CORS on an existing Cosmos DB account or create a new Cosmos DB account with CORS enabled it is very easy to do with Azure Resource Manager (ARM) templates or the Azure Portal.

But what if you want to find out the state of the CORS setting on an account, or set it, using PowerShell? Well, look no further.

The Cosmos DB PowerShell module (version 3.0.0 and above) supports creating Cosmos DB accounts with CORS enabled as well as updating and removing the CORS headers setting on an existing account. You can also retrieve the CORS setting for an existing Cosmos DB account.

Installing the CosmosDB Module

The first thing you need to do is install the CosmosDB PowerShell module from the PowerShell Gallery by running this in a PowerShell console:

Install-Module -Name CosmosDB -MinimumVersion 3.0.0.0

ss_cosmosdbcors_installmodule

This will also install the Az.Accounts and Az.Resources Az PowerShell modules if they are not already installed on your machine. The *-CosmosDbAccount functions in the CosmosDB module depend on these modules.

Note: The CosmosDB PowerShell module and the Az PowerShell modules are completely cross-platform and support Linux, macOS and Windows. Running in either Windows PowerShell (Windows) or PowerShell Core (cross-platform) is supported.

Versions of the CosmosDB PowerShell module earlier than 3.0.0.0 use the older AzureRm/AzureRm.NetCore modules and do not support the CORS setting.
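
If you’re not sure which versions of these modules you have on your machine, a quick way to check is:

# List the installed versions of the CosmosDB and Az modules
Get-Module -Name CosmosDB, Az.Accounts, Az.Resources -ListAvailable |
    Select-Object -Property Name, Version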

Authenticating to Azure with ‘Az’

Before using the CosmosDB PowerShell module account functions to work with CORS settings, you’ll first need to authenticate to Azure using the Az PowerShell modules. If you’re planning on automating this process, you’ll want to authenticate to Azure using a Service Principal identity.
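
A minimal sketch of what that looks like (the values shown are placeholders for your own Service Principal’s application Id, key and tenant Id):

# Placeholder values - substitute your own Service Principal details
$applicationId = '<application id>'
$applicationKey = ConvertTo-SecureString -String '<application key>' -AsPlainText -Force
$credential = New-Object -TypeName 'System.Management.Automation.PSCredential' -ArgumentList ($applicationId, $applicationKey)
Connect-AzAccount -ServicePrincipal -Credential $credential -Tenant '<tenant id>'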

Side note: if you’re using this module in an Azure DevOps build/release pipeline the Azure PowerShell task will take care of the Service Principal authentication process for you:

ss_cosmosdbcors_azuredevopspowershelltask

But if you’re just doing a little bit of experimentation then you can just use an interactive authentication process.

To use the interactive authentication process just enter into your PowerShell console:

Connect-AzAccount

then follow the instructions.

ss_cosmosdbcors_authenticateaz.png

Create a Cosmos DB Account with CORS enabled

Once you have authenticated to Azure, you can use the New-CosmosDbAccount function to create a new account:

New-CosmosDbAccount `
    -Name 'dsrcosmosdbtest' `
    -ResourceGroupName 'dsrcosmosdbtest-rgp' `
    -Location 'westus' `
    -AllowedOrigin 'https://www.fabrikam.com','https://www.contoso.com'

ss_cosmosdbcors_newcosmosdbaccount

This will create a new Cosmos DB account with the name dsrcosmosdbtest in the resource group dsrcosmosdbtest-rgp in the West US location and with CORS allowed origins of https://www.fabrikam.com and https://www.contoso.com.

Important: the New-CosmosDbAccount command assumes the resource group specified in the ResourceGroupName parameter already exists and that you have contributor access to it. If the resource group doesn’t exist, you can create it using the New-AzResourceGroup function or some other method.

It will take Azure a few minutes to create the new Cosmos DB account for you.

Side note: if you want your PowerShell automation or script to be able to get on and do other tasks in the meantime, then add the -AsJob parameter to the New-CosmosDbAccount call. This will cause the function to return immediately and provide you with a Job object that you can use to periodically query the state of the Job. More information on using PowerShell Jobs can be found here.

Be aware, you won’t be able to use the Cosmos DB account until the Job is completed.
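
If you want to try this, a sketch of the pattern (reusing the account details from above) might look like:

# Start the account creation as a background Job
$job = New-CosmosDbAccount `
    -Name 'dsrcosmosdbtest' `
    -ResourceGroupName 'dsrcosmosdbtest-rgp' `
    -Location 'westus' `
    -AllowedOrigin 'https://www.fabrikam.com' `
    -AsJob

# ... do other work here, then block until the account is ready
Wait-Job -Job $job | Receive-Job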

If you look in the Azure Portal, you will find the new Cosmos DB account with the CORS allowed origin values set as per your command:

ss_cosmosdbcors_cosmosdbinportalwithcors

Get the CORS Allowed Origins on a Cosmos DB Account

Getting the current CORS Allowed Origins value on an account is easy too. Just run the following PowerShell command:

(Get-CosmosDbAccount `
    -Name 'dsrcosmosdbtest' `
    -ResourceGroupName 'dsrcosmosdbtest-rgp').Properties.Cors.AllowedOrigins

ss_cosmosdbcors_getcosmosdbcors

This will return a string containing all the CORS Allowed Origins for the Cosmos DB account dsrcosmosdbtest.

You could easily split this string into an array variable by using:

$corsAllowedOrigins = (Get-CosmosDbAccount `
    -Name 'dsrcosmosdbtest' `
    -ResourceGroupName 'dsrcosmosdbtest-rgp').Properties.Cors.AllowedOrigins -split ','

ss_cosmosdbcors_getcosmosdbcorssplit

Update the CORS Allowed Origins on an existing Cosmos DB Account

To set the CORS Allowed Origins on an existing account use the Set-CosmosDbAccount function:

Set-CosmosDbAccount `
    -Name 'dsrcosmosdbtest' `
    -ResourceGroupName 'dsrcosmosdbtest-rgp' `
    -AllowedOrigin 'http://www.mycompany.com'

ss_cosmosdbcors_setcosmosdbcors

This will take a few minutes to update, so you can use the -AsJob parameter to run it as a Job.

Remove the CORS Allowed Origins from an existing Cosmos DB Account

You can remove the CORS Allowed Origins setting by using the Set-CosmosDbAccount function and passing an empty string to the AllowedOrigin parameter:

Set-CosmosDbAccount `
    -Name 'dsrcosmosdbtest' `
    -ResourceGroupName 'dsrcosmosdbtest-rgp' `
    -AllowedOrigin ''

ss_cosmosdbcors_removecosmosdbcors

This will take a few minutes to update as well. As always, you can use the -AsJob parameter to run this as a Job.


Final Words

Hopefully, you can see it is fairly simple to automate and work with the Cosmos DB CORS Allowed Origins setting using the PowerShell Cosmos DB module.

If you have any issues or queries or would like to contribute to the PowerShell Cosmos DB module, please head over to the GitHub repository.


Disable TLS 1.0, TLS 1.1 and 3DES in Azure API Management using an ARM Template

Recently, I’ve been putting together a continuous delivery pipeline (using VSTS) for our Azure API Management service using Azure Resource Manager (ARM) templates. One of the things I needed to be able to do to secure this service properly is to disable TLS 1.0, TLS 1.1 and 3DES. This is pretty easy to do in the portal:

ss_apim_disabletls3des

However, because we only allow changes to be made via our continuous delivery pipeline (a good thing by the way), I had to make this change in the ARM template instead.

Side note: Disabling TLS 1.0, TLS 1.1 and 3DES is pretty important for keeping your system secure. But if you have an Azure Application Gateway in front of your API Management service, then you’ll also need to configure the Azure Application Gateway to disable TLS 1.0 and TLS 1.1. This is done in a slightly different way, but can also be done in an ARM Template (post a comment if you’re not sure how to do this and I’ll write another post).
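
For reference, on the Microsoft.Network/applicationGateways resource this is controlled by the sslPolicy property. A minimal sketch using one of the predefined policies (verify the policy name against the current documentation, as these change over time) looks like this:

"sslPolicy": {
    "policyType": "Predefined",
    "policyName": "AppGwSslPolicy20170401S"
}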

I found the documentation for the API Management service resource here. It shows that this can be done by setting the customProperties object in the ARM template, but the documentation isn’t completely clear.

But after a little bit of trial and error I managed to figure it out and get it working. What you need to do is add the following customProperties to the properties of the API Management service resource:

"customProperties": {
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TripleDes168": "false",
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls11": "false",
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls10": "false"
}

This is what the complete ARM template looks like:

{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "publisherEmail": {
            "type": "string",
            "minLength": 1,
            "metadata": {
                "description": "The email address of the owner of the service"
            }
        },
        "publisherName": {
            "type": "string",
            "minLength": 1,
            "metadata": {
                "description": "The name of the owner of the service"
            }
        },
        "sku": {
            "type": "string",
            "allowedValues": [
                "Developer",
                "Standard",
                "Premium"
            ],
            "defaultValue": "Developer",
            "metadata": {
                "description": "The pricing tier of this API Management service"
            }
        },
        "skuCount": {
            "type": "string",
            "allowedValues": [
                "1",
                "2"
            ],
            "defaultValue": "1",
            "metadata": {
                "description": "The instance size of this API Management service."
            }
        }
    },
    "variables": {
        "apiManagementServiceName": "[concat('apiservice', uniqueString(resourceGroup().id))]"
    },
    "resources": [
        {
            "apiVersion": "2017-03-01",
            "name": "[variables('apiManagementServiceName')]",
            "type": "Microsoft.ApiManagement/service",
            "location": "West US",
            "tags": {},
            "sku": {
                "name": "[parameters('sku')]",
                "capacity": "[parameters('skuCount')]"
            },
            "properties": {
                "publisherEmail": "[parameters('publisherEmail')]",
                "publisherName": "[parameters('publisherName')]",
                "customProperties": {
                    "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TripleDes168": "false",
                    "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls11": "false",
                    "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls10": "false"
                }
            }
        }
    ]
}

Side note: the template above is based on the Azure Quickstart Template for API Management.

Hopefully this post saves you some time if you’re looking for an example of how to do this.


Managing Users & Permissions in Cosmos DB with PowerShell

If you’re just getting started with Cosmos DB, you might not have come across users and permissions in a Cosmos DB database. However, there are certain use cases where managing users and permissions is necessary. For example, you might want to limit access to a particular resource (e.g. a collection, document or stored procedure) by user.

The most common usage scenario for users and permissions is if you’re implementing a Resource Token Broker type pattern, allowing client applications to directly access the Cosmos DB database.

Side note: The Cosmos DB implementation of users and permissions only provides authorization – it does not provide authentication. It would be up to your own implementation to manage the authentication. In most cases you’d use something like Azure Active Directory to provide an authentication layer.

But if you go hunting through the Azure Management Portal Cosmos DB data explorer (or Azure Storage Explorer) you won’t find any way to configure or even view users and permissions.

ss_cdb_cosmosdbdataexplorer

To manage users and permissions you need to use the Cosmos DB API directly or one of the SDKs.

But to make Cosmos DB users and permissions easier to manage from PowerShell, I created the Cosmos DB PowerShell module. This is an open source project hosted on GitHub. The Cosmos DB module allows you to manage much more than just users and permissions, but for this post I just wanted to start with these.

Requirements

This module works on PowerShell 5.x and PowerShell Core 6.0.0. It probably works on PowerShell 3 and 4, but I don’t have any machines running those versions to test on.

The Cosmos DB module does not have any dependencies, except if you call the New-CosmosDbContext function with the ResourceGroup parameter specified, as this will use the AzureRM PowerShell modules to read the Master Key for the connection directly from your Cosmos DB account. So I’d recommend installing the AzureRM PowerShell modules, or if you’re using PowerShell 6.0, installing the AzureRM.NetCore modules.

Installing the Module

The best way to install the Cosmos DB PowerShell module is from the PowerShell Gallery. To install it for only your user account execute this PowerShell command:

Install-Module -Name CosmosDB -Scope CurrentUser

ss_cdb_cosmosdbinstallmodulecurrentuser

Or to install it for all users on the machine (requires administrator permissions):

Install-Module -Name CosmosDB

ss_cdb_cosmosdbinstallmoduleallusers

Context Variable

Update 2018-03-06

As of Cosmos DB module v2.0.1, the connection parameter has been renamed to context and the New-CosmosDbConnection function has been renamed to New-CosmosDbContext. This was to be more in line with the naming adopted by the Azure PowerShell project. The old connection parameter and the New-CosmosDbConnection function are still available as aliases, so older scripts won’t break. But these should be changed to use the new naming if possible, as I plan to deprecate the connection versions at some point in the future.

This post was updated to specify the new naming, but screenshots still show the Connection aliases.

Before you get down to the process of working with Cosmos DB resources, you’ll need to create a context variable containing the information required to connect. This requires the following information:

  1. The Cosmos DB Account name
  2. The Cosmos DB Database name
  3. The Master Key for the account (you can have the Cosmos DB PowerShell module get this directly from your Azure account if you wish).

To create the context variable we just use the New-CosmosDbContext function:

$account = 'MyCosmosDBAccount'
$database = 'MyDatabase'
$key = ConvertTo-SecureString -String 'this is your master key, get it from the Azure portal' -AsPlainText -Force
$context = New-CosmosDbContext -Account $account -Database $database -Key $key

ss_cdb_cosmosdbnewconnection

If you do not wish to specify your master key, you can have the New-CosmosDbContext function pull your master key from the Azure Management Portal directly:

Add-AzureRmAccount
$account = 'MyCosmosDBAccount'
$database = 'MyDatabase'
$resourceGroup = 'MyCosmosDBResourceGroup'
$context = New-CosmosDbContext -Account $account -Database $database -ResourceGroup $resourceGroup

ss_cdb_cosmosdbnewconnectionviaportal

Note: This requires the AzureRM.Profile and AzureRM.Resources modules on Windows PowerShell 5.x, or AzureRM.Profile.NetCore and AzureRM.Resources.NetCore on PowerShell Core 6.0.0.

Managing Users

To add a user to the Cosmos DB database use the New-CosmosDbUser function:

New-CosmosDbUser -Context $context -Id 'daniel'

ss_cdb_cosmosdbnewuser

To get a list of users in the database:

Get-CosmosDbUser -Context $context

ss_cdb_cosmosdbgetusers

To get a specific user:

Get-CosmosDbUser -Context $context -Id 'daniel'

ss_cdb_cosmosdbgetuser

To remove a user (this will also remove all permissions assigned to the user):

Remove-CosmosDbUser -Context $context -Id 'daniel'

ss_cdb_cosmosdbremoveuser

Managing Permissions

Permissions in Cosmos DB are granted to a user for a specific resource. For example, you could grant a user access to just a single document, an entire collection or to a stored procedure.

To grant a permission you need to provide four pieces of information:

  1. The Id of the user to grant the permission to.
  2. An Id for the permission to create. This is just a string to uniquely identify the permission.
  3. The permission mode of the permission: All or Read.
  4. The Id of the resource to grant access to. This can be generated from one of the Get-CosmosDb*ResourcePath functions in the CosmosDB PowerShell module.

In the following example, we’ll grant the user daniel all access to the TestCollection:

$userId = 'daniel'
$resourcePath = Get-CosmosDbCollectionResourcePath -Database 'TestDatabase' -Id 'TestCollection'
New-CosmosDbPermission -Context $context -Id 'AccessTestCollection' -UserId $userId -PermissionMode All -Resource $resourcePath

ss_cdb_cosmosdbnewpermission

Once a permission has been granted, you can use the Get-CosmosDbPermission function to retrieve the permission and with it the Resource Token that can be used to access the resource for a limited amount of time (between 10 minutes and 5 hours).

Note: as you have the Master Key already, using the Resource Token isn’t required.

For example, to retrieve all permissions for the user with Id daniel and a resource token expiration of 600 seconds:

Get-CosmosDbPermission -Context $context -UserId 'daniel' -TokenExpiry '600' |
    Format-List -Property *

ss_cdb_cosmosdbgetpermission
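
If you want to capture the Resource Token itself, a sketch might look like this (the Token property name here is based on the output shown above, so verify it against your module version):

$permission = Get-CosmosDbPermission -Context $context -UserId 'daniel' -TokenExpiry '600'
$permission.Token # the time-limited resource token(s) for the resource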

You can, as expected, delete a permission by using the Remove-CosmosDbPermission function:

Remove-CosmosDbPermission -Context $context -UserId 'daniel' -Id 'AccessTestCollection'

ss_cdb_cosmosdbremovepermission

Final Thoughts

So this is pretty much all there is to managing users and permissions using the Cosmos DB PowerShell module. This module can also be used to manage the following Cosmos DB resources:

  • Attachments
  • Collections
  • Databases
  • Documents
  • Offers
  • Stored procedures
  • Triggers
  • User Defined Functions

You can find additional documentation and examples of how to manage these resources over in the Cosmos DB PowerShell module readme file on GitHub.

Hopefully this will help you in any Cosmos DB automation tasks you might need to implement.


Stop, Start or Restart all Web Apps in Azure using PowerShell

Here is a short (and sometimes handy) single line of PowerShell code that can be used to restart all the Azure Web Apps in a subscription:

(Get-AzureRmWebApp).GetEnumerator() | Restart-AzureRmWebApp

ss_azurecloudshell_restartallwebapps

Note: Use this with care if you’re working with production systems because this _will_ restart these Web Apps without confirming first.

This would be a handy snippet to be able to run in the Azure Cloud Shell. It could also be adjusted to perform different actions on other types of resources.
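
For example, here is a sketch of restarting only a subset of Web Apps (the ‘-staging’ name suffix is just a hypothetical naming convention):

# Restart only the Web Apps whose names end in '-staging'
(Get-AzureRmWebApp).GetEnumerator() |
    Where-Object -FilterScript { $_.Name -like '*-staging' } |
    Restart-AzureRmWebApp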

To stop all Web Apps in a subscription use:

(Get-AzureRmWebApp).GetEnumerator() | Stop-AzureRmWebApp

To start them all again:

(Get-AzureRmWebApp).GetEnumerator() | Start-AzureRmWebApp

The key part of this command is the GetEnumerator() method, because most Azure cmdlets don’t return an array of individual objects into the pipeline like typical PowerShell cmdlets. Instead, they return a System.Collections.Generic.List object, which requires a slight adjustment to the code. This technique can be used with most Azure cmdlets to allow the results to be iterated through.

ss_azurecloudshell_systemcollections

Thanks for reading.

Install Nightly Build of Azure CLI 2.0 on Windows

The Azure PowerShell cmdlets are really first class if you’re wanting to manage Azure with PowerShell. However, they don’t always support the very latest Azure components and features. For example, at the time of writing this there is no Azure PowerShell module for managing Azure Container Instances.

The solution to this is to install the Nightly Build of Azure CLI 2.0. However, on Windows the easiest way to do this is not entirely clear. So, in this post I’ll provide a PowerShell script that will:

  1. Install Python 3.x using Chocolatey
  2. Use PIP (Python package manager) to install the latest nightly build packages
  3. Update the Environment Path variable so that you can use Azure CLI 2.0.

Note: If you have the stable build of Azure CLI 2.0 installed using the MSI, then you’ll need to configure your Environment Path variable so that the az command you’d like to use by default is found first. I personally removed the stable build of Azure CLI 2.0 to make things easier.

Performing the Install

Make sure you’ve got Chocolatey installed. If you aren’t sure what Chocolatey is, it is a package management system for Windows – not unlike Apt-Get or Yum for Linux. It is free and awesome. In this process we’ll use Chocolatey to install Python for us. If you haven’t got Chocolatey installed, see this page for instructions.
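
If you need to install it, at the time of writing the documented install command (run from an elevated PowerShell console) was:

Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))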

Next, download and run this PowerShell script in a PowerShell Administrator Console:

<#
.SYNOPSIS
Install Azure CLI 2.0 Nightly Build on Windows using Chocolatey and PowerShell
#>
if (-not (Get-Command -Name Choco -ErrorAction SilentlyContinue))
{
    Throw 'Chocolatey is not installed. Please install it. See https://chocolatey.org/install for instructions.'
}

Write-Host -Object 'Installing Python 3 with Chocolatey...'
& choco @('install','python3','-y')

# Refresh the environment variables in this session (Update-SessionEnvironment is provided by Chocolatey)
Update-SessionEnvironment

$pythonScriptsPath = Join-Path -Path $ENV:APPDATA -ChildPath 'Python\Python36\Scripts'
$currentPath = [System.Environment]::GetEnvironmentVariable('Path',[System.EnvironmentVariableTarget]::User) -split ';'

# Add the Python user scripts folder to the persistent User Path if missing
if ($currentPath -notcontains $pythonScriptsPath)
{
    Write-Host -Object 'Adding Python Scripts to User Environment Path...'
    $newPath = @()
    $newPath += $currentPath
    $newPath += $pythonScriptsPath
    $newPathJoined = $newPath -join ';'
    [System.Environment]::SetEnvironmentVariable('Path',$newPathJoined,[System.EnvironmentVariableTarget]::User)
}

# Also add it to the Path of the current PowerShell session if missing
if (-not $currentPath.Contains($pythonScriptsPath))
{
    Write-Host -Object 'Adding Python Scripts to Current PowerShell session path...'
    $ENV:Path = "$($ENV:Path);$pythonScriptsPath"
}

Write-Host -Object 'Installing nightly build of Az CLI 2.0...'
& pip @('install','--no-cache-dir','--user','--upgrade','--pre','azure-cli','--extra-index-url','https://azureclinightly.blob.core.windows.net/packages')
Write-Host -Object 'Installation of nightly build of Az CLI 2.0 complete. Execute "az" to start.'

You could save the content of this script into a PS1 file and then execute it like this:

ss_azurecli_installnightlybuild

It will then download and install Python, then use PIP to install the current nightly build packages. After a few minutes the installation will complete:

ss_azurecli_installnightlybuildcompete

You can then run:

az login

to get started.

If you’re a bit new to Azure CLI 2.0, then another great way is to use Azure CLI Interactive:

az interactive

ss_azurecli_interactive

If you need to update to a newer nightly build, just run the script again and it will update your packages.

Easy as that! Now you can experiment with all the latest automation features in Azure without needing to wait for a new version of Azure CLI 2.0 or for latest Azure PowerShell cmdlets.

Edge Builds

If you want to install even more “bleeding edge” builds (built straight off the master branch on every merge to master) then you can make a small adjustment to the script above:

In the script above, change the URL of the nightly package feed passed to the pip command from:

https://azureclinightly.blob.core.windows.net/packages

To:

https://azurecliprod.blob.core.windows.net/edge

Thanks for reading!


Sonatype Nexus Containers with Persistent Storage in Azure Container Instances

On the back of yesterday’s post on running Azure Container Instance containers with persistent storage, I thought I’d try a couple of other containers with my script.

Note: I don’t actually plan on running any of these apps, I just wanted to test out the process and my scripts to identify any problems.

I tried Sonatype Nexus 2, Sonatype Nexus 3 and Jenkins. Here are the results of my tests:

Sonatype Nexus 2

Works perfectly and the container starts up quickly (under 10 seconds):

ss_aci_sonatypenexus2

I passed the following parameters to the script:

.\Install-AzureContainerInstancePersistStorage.ps1 `
    -ServicePrincipalUsername 'ce6fca5e-a22d-44b2-a75a-f3b20fcd1b16' `
    -ServicePrincipalPassword (ConvertTo-SecureString -String 'JUJfenwe89hwNNF723ibw2YBybf238ybflA=' -AsPlainText -Force) `
    -TenancyId '8871b1ba-7d3d-45f3-8ee0-bb60c0e4733e' `
    -SubscriptionName 'Visual Studio Enterprise' `
    -AppCode 'nexus' `
    -UniqueCode 'mine' `
    -ContainerImage 'sonatype/nexus:oss' `
    -ContainerPort '8081' `
    -VolumeName 'nexus' `
    -MountPoint '/sonatype-work/' `
    -Verbose

Note: The Nexus 2 server is only accessible on the path /nexus/.

Sonatype Nexus 3

Works perfectly, but takes at least a minute to become accessible after the container starts. This is normal behavior for Nexus 3 though.

ss_aci_sonatypenexus3

I passed the following parameters to the script:

.\Install-AzureContainerInstancePersistStorage.ps1 `
    -ServicePrincipalUsername 'ce6fca5e-a22d-44b2-a75a-f3b20fcd1b16' `
    -ServicePrincipalPassword (ConvertTo-SecureString -String 'JUJfenwe89hwNNF723ibw2YBybf238ybflA=' -AsPlainText -Force) `
    -TenancyId '8871b1ba-7d3d-45f3-8ee0-bb60c0e4733e' `
    -SubscriptionName 'Visual Studio Enterprise' `
    -AppCode 'nexus3' `
    -UniqueCode 'mine' `
    -ContainerImage 'sonatype/nexus3:latest' `
    -ContainerPort '8081' `
    -VolumeName 'nexus3' `
    -MountPoint '/nexus-data/' `
    -Verbose

Jenkins

Unfortunately, Jenkins does not work with a persistent storage volume from an Azure File Share. It seems to be trying to set the timestamp of the file that will contain the InitialAdminPassword, which is failing:

ss_aci_jenkins

I passed the following parameters to the script:

.\Install-AzureContainerInstancePersistStorage.ps1 `
    -ServicePrincipalUsername 'ce6fca5e-a22d-44b2-a75a-f3b20fcd1b16' `
    -ServicePrincipalPassword (ConvertTo-SecureString -String 'JUJfenwe89hwNNF723ibw2YBybf238ybflA=' -AsPlainText -Force) `
    -TenancyId '8871b1ba-7d3d-45f3-8ee0-bb60c0e4733e' `
    -SubscriptionName 'Visual Studio Enterprise' `
    -AppCode 'jenkinshome' `
    -UniqueCode 'dsr' `
    -ContainerImage 'jenkins/jenkins:lts' `
    -ContainerPort '8080' `
    -VolumeName 'jenkinshome' `
    -MountPoint '/var/jenkins_home/' `
    -Verbose

So, this is still a little bit hit and miss, but in general Azure Container Instances look like a very promising way to run different types of services in containers without a lot of overhead. With a bit of automation, this could turn out to be a cost-effective way to quickly and easily run some common services.

Persistent Storage in Azure Container Instances

Update 2018-04-26: At some point Microsoft made a change to the requirements of the ARM template creating the Azure Container Instance. It now requires the Ports to be specified within the container as well as in the container group. I have improved the ARM template to meet the current requirements.

Update 2017-08-06: I have improved the script so that it is idempotent (can be run more than once and will only create anything that is missing). The Azure Container Instance resource group can be deleted once you’ve finished with the container and then recreated again with this same script when you next need it. The storage will be preserved in the separate storage account resource group. The script can now be run with the -verbose parameter and will produce much better progress information.

Azure Container Instances (ACI) is a new resource type in Azure that allows you to quickly and easily create containers without the complexity or overhead of Azure Service Fabric, Azure Container Services or provisioning a Windows Server 2016 VM.

It allows you to quickly create containers that are billed by the second from container images stored in Docker Hub or your own Azure Container Registry (ACR). Even though this feature is still in preview, it is very easy to get up and running with it.

But this post isn’t about creating basic container instances, it is about running container instances where some of the storage must persist. This is a basic function of a container host, but if you don’t have access to the host storage then things get more difficult. That said, Azure Container Instances do support mounting Azure File Shares into the container as volumes. It is fairly easy to do, but requires quite a number of steps.

There is some provided documentation for persisting storage in a container instance, but it is quite a manual process and the example ARM templates are currently broken: there are some typos and missing properties. So this post aims to make the whole thing a lot simpler and automatable.

So in this post, I’m going to share a PowerShell function and Azure Resource Manager (ARM) template that will allow you to easily provision an Azure Container Instance with an Azure File Share mounted. The process defaults to installing a GoCD Server container (version 17.8.0 if you’re interested), but you could use it to install any other Linux Container that needs persistent storage. The script is parameterized so other containers and mount points can be specified – e.g. it should be fairly easy to use this for other servers like Sonatype Nexus or Jenkins Server.

Update 2017-08-06: I documented my findings trying out these other servers in my following blog post.

Requirements

To perform this process you will need the following:

  • PowerShell 5.0+ (PowerShell 4.0 may work, but I haven’t tested it).
  • The Azure PowerShell module installed.
  • An Application Service Principal created in Azure – see below.

Azure Service Principal

Before you start this process you will need to have created an Application Service Principal in Azure that will be used to perform the deployment. Follow the instructions on this page to create an application and then get the Service Principal from it.

You will need to record these values as they will be provided to the script later on:

  • Application Id
  • Application Key
  • Tenant Id
  • Subscription Name
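
If you’d rather script this step too, something along these lines worked with the AzureRM module of the era. Treat it as a sketch, as the parameter shapes varied between AzureRM versions:

# Sketch only: create an AD application and a matching Service Principal.
# Check Get-Help New-AzureRmADApplication for your module version first.
$password = ConvertTo-SecureString -String '<choose a strong key>' -AsPlainText -Force
$app = New-AzureRmADApplication -DisplayName 'aci-deploy' `
    -IdentifierUris 'http://aci-deploy' -Password $password
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId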

The Process

The process will perform the following tasks:

  1. The Service Principal is used to login to Azure to perform the deployment.
  2. An Azure Resource Group is created to contain an Azure Storage Account and an Azure Key Vault.
  3. An Azure Storage Account is created and an Azure File Share is created in it.
  4. An Azure Key Vault is created to store the Storage Account Key and make it accessible to the Azure Container Instance.
  5. The Service Principal is granted permission to the Azure Key Vault to read and write secrets.
  6. The Storage Account Key is added as a secret to the Azure Key Vault.
  7. The parameters are set in an ARM Template parameter file.
  8. An Azure Resource Group is created to contain the Azure Container Instance.
  9. The ARM template is deployed into this Resource Group, creating the Azure Container Instance with the Azure File Share mounted as a volume.
  10. The script waits for the container to enter the running state and reports the URL it can be reached on.

The Script

This is the content of the script:

[CmdletBinding()]
param
(
    [Parameter(Mandatory = $True)]
    [String] $ServicePrincipalUsername,

    [Parameter(Mandatory = $True)]
    [SecureString] $ServicePrincipalPassword,

    [Parameter(Mandatory = $True)]
    [String] $TenancyId,

    [Parameter(Mandatory = $True)]
    [String] $SubscriptionName,

    [String] $AppCode = 'gocd', # just a short code to identify this app

    [String] $UniqueCode = 'dsr', # a short unique code to ensure that resources are unique

    [String] $ContainerImage = 'gocd/gocd-server:v17.8.0', # the container image name and version to deploy

    [String] $ContainerPort = '8153', # The port to expose on the container

    [String] $VolumeName = 'gocd', # The name of the volume to mount

    [String] $MountPoint = '/godata/', # The mount point

    [Int] $CPU = 1, # The number of CPUs to assign to the instance

    [String] $MemoryInGB = '1.5' # The amount of memory to assign to the instance
)

$supportRGName = '{0}{1}rg' -f $UniqueCode, $AppCode
$storageAccountName = '{0}{1}storage' -f $UniqueCode, $AppCode
$storageShareName = '{0}{1}share' -f $UniqueCode, $AppCode
$keyvaultName = '{0}{1}akv' -f $UniqueCode, $AppCode
$keyvaultStorageSecretName = '{0}key' -f $storageAccountName
$aciRGName = '{0}{1}acirg' -f $UniqueCode, $AppCode
$aciName = '{0}{1}aci' -f $UniqueCode, $AppCode
$location = 'eastus'

# Login to Azure using Service Principal
Write-Verbose -Message ('Connecting to Azure Subscription "{0}" using Service Principal account "{1}"' -f $SubscriptionName, $ServicePrincipalUsername)
$servicePrincipalCredential = New-Object -TypeName 'System.Management.Automation.PSCredential' -ArgumentList ($ServicePrincipalUsername, $ServicePrincipalPassword)
$null = Add-AzureRmAccount -TenantId $TenancyId -SubscriptionName $SubscriptionName -ServicePrincipal -Credential $servicePrincipalCredential

# Create resource group for Key Vault and Storage Account
if (-not (Get-AzureRmResourceGroup -Name $supportRGName -ErrorAction SilentlyContinue))
{
    Write-Verbose -Message ('Creating Resource Group "{0}" for Storage Account and Key Vault' -f $supportRGName)
    $null = New-AzureRmResourceGroup -Name $supportRGName -Location $location
}

# Create Key Vault
if (-not (Get-AzureRmKeyVault -ResourceGroupName $supportRGName -VaultName $keyVaultName -ErrorAction SilentlyContinue))
{
    Write-Verbose -Message ('Creating Key Vault "{0}" in Resource Group "{1}"' -f $keyVaultName, $supportRGName)
    $null = New-AzureRmKeyVault -ResourceGroupName $supportRGName -VaultName $keyVaultName -Location $location -EnabledForTemplateDeployment -EnabledForDeployment
}

Write-Verbose -Message ('Setting Key Vault "{0}" access policy to enable Service Principal "{1}" to Get,List and Set secrets' -f $keyVaultName, $ServicePrincipalUsername)
$null = Set-AzureRmKeyVaultAccessPolicy -ResourceGroupName $supportRGName -VaultName $keyVaultName -ServicePrincipalName $ServicePrincipalUsername -PermissionsToSecrets get, list, set

Write-Verbose -Message ('Getting Key Vault "{0}" Id' -f $keyVaultName)
$keyvaultNameId = (Get-AzureRmKeyVault -Name $keyVaultName).ResourceId

# Create Storage Account
if (-not (Get-AzureRmStorageAccount -ResourceGroupName $supportRGName -Name $storageAccountName -ErrorAction SilentlyContinue))
{
    Write-Verbose -Message ('Creating Storage Account "{0}" in Resource Group "{1}"' -f $storageAccountName, $supportRGName)
    $null = New-AzureRmStorageAccount -ResourceGroupName $supportRGName -Name $storageAccountName -SkuName Standard_LRS -Location $location
}

Write-Verbose -Message ('Getting Storage Account "{0}" key' -f $storageAccountName)
$storageAccountKey = Get-AzureRmStorageAccountKey -ResourceGroupName $supportRGName -Name $storageAccountName
$storageConnectionString = 'DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1};' -f $storageAccountName, $storageAccountKey[0].value
$storageContext = New-AzureStorageContext -ConnectionString $storageConnectionString

if (-not (Get-AzureStorageShare -Name $storageShareName -Context $storageContext -ErrorAction SilentlyContinue))
{
    Write-Verbose -Message ('Creating Azure Storage Share "{0}" in Storage Account {1}' -f $storageShareName, $storageAccountName)
    $null = New-AzureStorageShare -Name $storageShareName -Context $storageContext
}

# Add the Storage Key to the Key Vault
Write-Verbose -Message ('Adding Storage Account "{0}" key to Key Vault "{1}"' -f $storageAccountName, $keyvaultName)
$null = Set-AzureKeyVaultSecret -VaultName $keyvaultName -Name $keyvaultStorageSecretName -SecretValue (ConvertTo-SecureString -String $storageAccountKey[0].value -AsPlainText -Force)

# Create Azure Container Instance
if (-not (Get-AzureRmResourceGroup -Name $aciRGName -ErrorAction SilentlyContinue))
{
    Write-Verbose -Message ('Creating Resource Group "{0}" for Container Group' -f $aciRGName)
    $null = New-AzureRmResourceGroup -Name $aciRGName -Location $location
}

# Generate the azure deployment parameters
$azureDeployParametersPath = (Join-Path -Path $PSScriptRoot -ChildPath 'aci-azuredeploy.parameters.json')
$azureDeployPath = (Join-Path -Path $PSScriptRoot -ChildPath 'aci-azuredeploy.json')
$azureDeployParameters = ConvertFrom-Json -InputObject (Get-Content -Path $azureDeployParametersPath -Raw)
$azureDeployParameters.parameters.containername.value = $aciName
$azureDeployParameters.parameters.containerimage.value = $ContainerImage
$azureDeployParameters.parameters.cpu.value = $CPU
$azureDeployParameters.parameters.memoryingb.value = $MemoryInGB
$azureDeployParameters.parameters.containerport.value = $ContainerPort
$azureDeployParameters.parameters.sharename.value = $storageShareName
$azureDeployParameters.parameters.storageaccountname.value = $storageAccountName
$azureDeployParameters.parameters.storageaccountkey.reference.keyVault.id = $keyvaultNameId
$azureDeployParameters.parameters.storageaccountkey.reference.secretName = $keyvaultStorageSecretName
$azureDeployParameters.parameters.volumename.value = $VolumeName
$azureDeployParameters.parameters.mountpoint.value = $MountPoint
Set-Content -Path $azureDeployParametersPath -Value (ConvertTo-Json -InputObject $azureDeployParameters -Depth 6) -Force

$deploymentName = ((Get-ChildItem -Path $azureDeployPath).BaseName + '-' + ((Get-Date).ToUniversalTime()).ToString('MMdd-HHmm'))
Write-Verbose -Message ('Deploying Container Group "{0}" to Resource Group "{1}"' -f $aciName, $aciRGName)
$null = New-AzureRmResourceGroupDeployment -Name $deploymentName `
    -ResourceGroupName $aciRGName `
    -TemplateFile $azureDeployPath `
    -TemplateParameterFile $azureDeployParametersPath `
    -Force `
    -ErrorVariable errorMessages

# Get the container info and display it
$subscriptionId = (Get-AzureRmSubscription -SubscriptionName $SubscriptionName).Id
$resourceId = ('/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.ContainerInstance/containerGroups/{2}' -f $subscriptionId, $aciRGName, $aciName)
$containerState = 'Unknown'

while ($containerState -ne 'Running')
{
    Write-Verbose -Message 'Waiting for container to enter running state'
    $containerResource = Get-AzureRmResource -ResourceId $resourceId
    $containerState = $containerResource.Properties.state
    Start-Sleep -Seconds 2
}

Write-Verbose -Message ('Container is running on http://{0}:{1}' -f $containerResource.Properties.ipAddress.ip, $containerResource.Properties.ipAddress.ports.port)

The script requires four parameters to be provided:

  • ServicePrincipalUsername – the Application Id obtained when creating the Service Principal.
  • ServicePrincipalPassword – the Application Key we got (or set) when creating the Service Principal.
  • TenancyId – The Tenancy Id we got during the Service Principal creation process.
  • SubscriptionName – the name of the subscription to install the ACI and other resources into.

There are also some optional parameters that control the container image that is used, the TCP port the container listens on and the mount point for the Azure File Share. If you don’t provide these, the defaults listed below will be used, which will create a GoCD Server.


  • AppCode – A short code to identify this application. It gets added to the resource names and resource group names. Defaults to ‘gocd’.
  • UniqueCode – this string is just used to ensure that globally unique names for the resources can be created. Defaults to ‘dsr‘.
  • ContainerImage – this is the name and version of the container image to be deployed to the ACI. Defaults to ‘gocd/gocd-server:v17.8.0‘.
  • CPU – The number of cores to assign to the container instance. Defaults to 1.
  • MemoryInGB – The amount of memory (in GB) to assign to the container instance. Defaults to 1.5.
  • ContainerPort – The port that the container listens on. Go CD Server defaults to 8153.
  • VolumeName – this is a volume name that is used to represent the volume in the ARM template. It can really be set to anything. Defaults to ‘gocd‘.
  • MountPoint – this is the folder in the Container that the Azure File Share is mounted to. Defaults to ‘/godata/‘.

ARM Template Files

There are two other files that are required for this process:

  1. ARM template – the ARM template file that will be used to install the ACI.
  2. ARM template parameters – this file will be used to pass in the settings to the ARM Template.

ARM Template

This file is called aci-azuredeploy.json and should be downloaded to the same folder as the script above.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "containername": {
            "type": "string"
        },
        "containerimage": {
            "type": "string"
        },
        "cpu": {
            "type": "int"
        },
        "memoryingb": {
            "type": "string"
        },
        "containerport": {
            "type": "string"
        },
        "sharename": {
            "type": "string"
        },
        "storageaccountname": {
            "type": "string"
        },
        "storageaccountkey": {
            "type": "securestring"
        },
        "volumename": {
            "type": "string"
        },
        "mountpoint": {
            "type": "string"
        }
    },
    "resources": [{
        "name": "[parameters('containername')]",
        "type": "Microsoft.ContainerInstance/containerGroups",
        "apiVersion": "2018-04-01",
        "location": "[resourceGroup().location]",
        "properties": {
            "containers": [{
                "name": "[parameters('containername')]",
                "properties": {
                    "image": "[parameters('containerimage')]",
                    "ports": [{
                        "port": "[parameters('containerport')]"
                    }],
                    "resources": {
                        "requests": {
                            "cpu": "[parameters('cpu')]",
                            "memoryInGb": "[parameters('memoryingb')]"
                        }
                    },
                    "volumeMounts": [{
                        "name": "[parameters('volumename')]",
                        "mountPath": "[parameters('mountpoint')]"
                    }]
                }
            }],
            "osType": "Linux",
            "ipAddress": {
                "type": "Public",
                "ports": [{
                    "protocol": "tcp",
                    "port": "[parameters('containerport')]"
                }]
            },
            "volumes": [{
                "name": "[parameters('volumename')]",
                "azureFile": {
                    "shareName": "[parameters('sharename')]",
                    "storageAccountName": "[parameters('storageaccountname')]",
                    "storageAccountKey": "[parameters('storageaccountkey')]"
                }
            }]
        }
    }]
}

ARM Template Parameters

This file is called aci-azuredeploy.parameters.json and should be downloaded to the same folder as the script above.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "containername": {
            "value": ""
        },
        "containerimage": {
            "value": ""
        },
        "cpu": {
            "value": 1
        },
        "memoryingb": {
            "value": "1.5"
        },
        "containerport": {
            "value": ""
        },
        "sharename": {
            "value": ""
        },
        "storageaccountname": {
            "value": ""
        },
        "storageaccountkey": {
            "reference": {
                "keyVault": {
                    "id": ""
                },
                "secretName": ""
            }
        },
        "volumename": {
            "value": ""
        },
        "mountpoint": {
            "value": ""
        }
    }
}

Steps

To use the script the following steps need to be followed:

  1. Download the three files above (the script and the two ARM template files) and put them into the same folder:
     ss_aci_filesrequires
  2. Open a PowerShell window.
  3. Change directory to the folder you placed the files into by executing:
     CD <folder location>
  4. Execute the script like this (passing in the variables):
     .\Install-AzureContainerInstancePersistStorage.ps1 `
         -ServicePrincipalUsername 'ce6fca5e-a22d-44b2-a75a-f3b20fcd1b16' `
         -ServicePrincipalPassword (ConvertTo-SecureString -String 'JUJfenwe89hwNNF723ibw2YBybf238ybflA=' -AsPlainText -Force) `
         -TenancyId '8871b1ba-7d3d-45f3-8ee0-bb60c0e4733e' `
         -SubscriptionName 'Visual Studio Enterprise' `
         -AppCode 'gocd' `
         -UniqueCode 'mine' `
         -ContainerImage 'gocd/gocd-server:v17.8.0' `
         -ContainerPort '8153' `
         -VolumeName 'gocd' `
         -MountPoint '/godata/' `
         -Verbose
     ss_aci_executingscript
  5. The process will then begin and may take a few minutes to complete:
     ss_aci_creategocd
     Note: I’ve changed the keys to this Service Principal and deleted this Storage Account, so using these Service Principal or Storage Account keys won’t work!
  6. Once completed, you will be able to log in to the Azure Portal and find the newly created Resource Groups:
     ss_aci_resourcegroup
  7. Open the resource group *gocdacirg and then select the container group *gocdaci:
     ss_aci_getcontainerip
  8. The IP Address of the container is displayed. You can copy this and paste it into a browser window along with the port the container exposes. In the case of Go CD it is 8153:
     ss_aci_runninggocdserver
  9. The process is now complete.

The Azure Container Instance can now be deleted and recreated at will, to reduce cost or simply upgrade to a new version. The Azure File Share will persist the data stored by the container into the mounted volume:

ss_aci_storageexplorerfileshare
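
For example, with the parameter values used in the steps above (UniqueCode ‘mine‘ and AppCode ‘gocd‘), tearing down just the container while keeping the data would look something like this (the resource group names are derived by the script, so adjust them to match yours):

# Remove only the container instance resource group; the storage account
# resource group ('minegocdrg') and its file share are left untouched
Remove-AzureRmResourceGroup -Name 'minegocdacirg' -Force
# Re-running Install-AzureContainerInstancePersistStorage.ps1 later will
# recreate the container and remount the same share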

Hopefully this process will help you implement containers with persistent storage in Azure Container Instances more easily and quickly.

Thanks for reading!

Using Azure Key Vault with PowerShell – Part 1

Azure Key Vault is used to safeguard and manage cryptographic keys, certificates and secrets used by cloud applications and services (you can still consume these on-premises though). It allows other applications, services or users in an Azure subscription to store and retrieve these cryptographic keys, certificates and secrets.

Once cryptographic keys, certificates and secrets have been stored in an Azure Key Vault, access policies can be configured to provide access to them by other users or applications.

Azure Key Vault also stores all past versions of a cryptographic key, certificate or secret when they are updated. This allows you to easily roll back if anything breaks.

This post is going to show how to:

  1. Set up an Azure Key Vault using the PowerShell Azure Module.
  2. Set administration access policies on the Azure Key Vault.
  3. Grant other users or applications access to cryptographic keys, certificates or secrets.
  4. Add, retrieve and remove a cryptographic key from the Azure Key Vault.
  5. Add, retrieve and remove a secret from the Azure Key Vault.

Requirements

Before getting started there are a few things that will be needed:

  1. An Azure account. I’m sure you’ve already got one, but if not create a free one here.
  2. The Azure PowerShell module needs to be installed. Click here for instructions on how install it.

Install the Key Vault

The first task is to customize and install the Azure Key Vault using the following PowerShell script.

# The name of the Azure subscription to install the Key Vault into
$subscriptionName = 'MySubscription'

# The name of the resource group to create to contain the Key Vault
$resourceGroupName = 'MyKeyVaultRG'

# The name of the Key Vault to install
$keyVaultName = 'MyKeyVault'

# The Azure data center to install the Key Vault to
$location = 'southcentralus'

# These are the Azure AD users that will have admin permissions to the Key Vault
$keyVaultAdminUsers = @('Joe Boggs','Jenny Biggs')

# Login to Azure
Login-AzureRMAccount

# Select the appropriate subscription
Select-AzureRmSubscription -SubscriptionName $subscriptionName

# Make sure the Key Vault provider is available
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.KeyVault

# Create the Resource Group
New-AzureRmResourceGroup -Name $resourceGroupName -Location $location

# Create the Key Vault (enabling it for Disk Encryption, Deployment and Template Deployment)
New-AzureRmKeyVault -VaultName $keyVaultName -ResourceGroupName $resourceGroupName -Location $location `
    -EnabledForDiskEncryption -EnabledForDeployment -EnabledForTemplateDeployment

# Add the Administrator policies to the Key Vault
foreach ($keyVaultAdminUser in $keyVaultAdminUsers)
{
    $userObjectId = (Get-AzureRmADUser -SearchString $keyVaultAdminUser).Id
    Set-AzureRmKeyVaultAccessPolicy -VaultName $keyVaultName -ResourceGroupName $resourceGroupName -ObjectId $userObjectId `
        -PermissionsToKeys all -PermissionsToSecrets all -PermissionsToCertificates all
}

But first, the variables in the PowerShell script need to be customized to suit your environment. The variables that need to be set are:

  • $subscriptionName – the name of the Azure subscription to install the Key Vault into.
  • $resourceGroupName – the name of the Resource Group to create to contain the Key Vault.
  • $keyVaultName – the name of the Key Vault to create.
  • $location – the Azure data center to install the Key Vault to (use Get-AzureRMLocation to get a list of available Azure data centers).
  • $keyVaultAdminUsers – an array of users that will be given administrator permissions (full control over cryptographic keys, certificates and secrets). The user names specified must match the full name of users found in the Azure AD assigned to the Azure tenancy.

ss_akv_create

It will take about 30 seconds for the Azure Key Vault to be installed. It will then show up in the Azure Subscription:

ss_akv_createcompleteportal

Assigning Permissions

Once the Azure Key Vault is set up and an administrator or two have been assigned, other access policies will usually need to be assigned to users and/or applications or service principals.

To create an access policy to allow a user to get and list cryptographic keys, certificates and secrets if you know the User Principal Name:

Set-AzureRmKeyVaultAccessPolicy -VaultName $keyVaultName -ResourceGroupName $resourceGroupName `
    -UserPrincipalName 'Joe.Boggs@contoso.com' `
    -PermissionsToCertificates list,get `
    -PermissionsToKeys list,get `
    -PermissionsToSecrets list,get

Note: the above code assumes you still have the variables set from the ‘Install the Key Vault’ section.

If you only have the full name of the user then you’ll need to look up the Object Id for the user in the Azure AD:

$userObjectId = (Get-AzureRmADUser -SearchString 'Joe Boggs').Id
Set-AzureRmKeyVaultAccessPolicy -VaultName $keyVaultName -ResourceGroupName $resourceGroupName `
    -ObjectId $userObjectId `
    -PermissionsToCertificates list,get `
    -PermissionsToKeys list,get `
    -PermissionsToSecrets list,get

Note: the above code assumes you still have the variables set from the ‘Install the Key Vault’ section.

To create an access policy to allow a service principal or application to get and list cryptographic keys if you know the Application Id (a GUID):

Set-AzureRmKeyVaultAccessPolicy -VaultName $keyVaultName -ResourceGroupName $resourceGroupName `
    -ServicePrincipalName 'e9b1bc3c-4769-4a98-9014-b315fd2adf53' `
    -PermissionsToCertificates list,get `
    -PermissionsToKeys list,get `
    -PermissionsToSecrets list,get

Note: the above code assumes you still have the variables set from the ‘Install the Key Vault’ section.

Changing the values of the PermissionsToKeys, PermissionsToCertificates and PermissionsToSecrets parameters in the cmdlets above allow different permissions to be set for each policy.

The available permissions for certificates, keys and secrets are listed in the Azure Key Vault documentation.

An access policy can be removed from users or service principals using the Remove-AzureRmKeyVaultAccessPolicy cmdlet:

Remove-AzureRmKeyVaultAccessPolicy -VaultName $keyVaultName -ResourceGroupName $resourceGroupName `
    -UserPrincipalName 'Joe.Boggs@contoso.com'

Note: the above code assumes you still have the variables set from the ‘Install the Key Vault’ section.

Working with Secrets

Secrets can be created, updated, retrieved and deleted by users or applications that have been assigned with the appropriate policy.

Creating/Updating Secrets

To create a new secret, use the Set-AzureKeyVaultSecret cmdlet:

Set-AzureKeyVaultSecret -VaultName $keyVaultName -Name 'MyAdminPassword' `
    -SecretValue (ConvertTo-SecureString -String 'P@ssword!1' -AsPlainText -Force)

Note: the above code assumes you still have the variables set from the ‘Install the Key Vault’ section.

This will create a secret called MyAdminPassword with the value P@ssword!1 in the Azure Key Vault.

The secret can be updated to a new value using the same cmdlet:

Set-AzureKeyVaultSecret -VaultName $keyVaultName -Name 'MyAdminPassword' `
    -SecretValue (ConvertTo-SecureString -String 'Sup3rS3cr3tP4ss!' -AsPlainText -Force)

Additional parameters can also be assigned to each version of a secret to control how it can be used:

  • ContentType – the type of content the secret contains (e.g. ‘txt’)
  • NotBefore – the date that the secret is valid after.
  • Expires – the date the secret is valid until.
  • Disable – marks the secret as disabled.
  • Tag – assigns tags to the secret.

For example:

Set-AzureKeyVaultSecret -VaultName $keyVaultName -Name 'MyAdminPassword' `
    -SecretValue (ConvertTo-SecureString -String 'Sup3rS3cr3tP4ss!' -AsPlainText -Force) `
    -ContentType 'txt' `
    -NotBefore ((Get-Date).ToUniversalTime()) `
    -Expires ((Get-Date).AddYears(2).ToUniversalTime()) `
    -Disable:$false `
    -Tags @{ 'Risk' = 'High'; }

ss_akv_secretupdatewithparameters

Retrieving Secrets

To retrieve the latest (current) version of a secret, use the Get-AzureKeyVaultSecret cmdlet:

$secretText = (Get-AzureKeyVaultSecret -VaultName $keyVaultName -Name 'MyAdminPassword').SecretValue

This will assign the stored secret to the variable $secretText as a SecureString. This can then be passed to any other cmdlets that require a SecureString.
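
For example, it can be combined with a user name to build a credential object (‘admin‘ here is just a placeholder user name):

# Combine a user name with the retrieved secret to form a credential
$credential = New-Object -TypeName 'System.Management.Automation.PSCredential' -ArgumentList ('admin', $secretText)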

To list all the versions of a secret, add the IncludeVersions parameter:

Get-AzureKeyVaultSecret -VaultName $keyVaultName -Name 'MyAdminPassword' -IncludeVersions

ss_akv_secretallhistory

To retrieve a specific version of a secret, use the Get-AzureKeyVaultSecret cmdlet with the Version parameter specified:

$secretText = (Get-AzureKeyVaultSecret -VaultName $keyVaultName -Name 'MyAdminPassword' -Version '02218af0521749b084bb08bd13184efb').SecretValue

Removing Secrets

Finally, to remove a secret use the Remove-AzureKeyVaultSecret cmdlet:

Remove-AzureKeyVaultSecret -VaultName $keyVaultName -Name 'MyAdminPassword' -Force

That pretty much covers managing and using secrets in Azure Key Vault using PowerShell.

Cryptographic keys and Certificates

In the next part of this series I’ll cover using Azure Key Vault to use and manage cryptographic keys and certificates. Thanks for sticking with me this far.