Deploy Sonarqube to Azure App Service Linux Containers using an Azure DevOps Pipeline

Sonarqube is a web application that development teams typically use during the application development process to continuously validate the quality of their code.

This post is not specifically about Sonarqube and how it works. It is intended to show Developers & IT Pros how to deploy a service to Azure using contemporary infrastructure as code and DevOps patterns.

The Implementation

A Sonarqube installation is made up of a web application front end backed by a database.

ss_sonarqube_architecture

Sonarqube supports many different types of databases, but I chose Azure SQL Database for the following reasons:

    1. It is a managed service, so I don’t have to worry about patching, securing and looking after SQL servers.
    2. I can scale the database performance up and down easily with code. This allows me to balance my performance requirements with the cost to run the server or even dial performance right back at times when the service is not being used.
    3. I can make use of the new Azure SQL Database serverless tier (spoiler alert: there are still SQL servers). This allows the database to be paused when it is not being accessed by the Sonarqube front end. Combined with deleting the front end every night, this further reduces the cost of running Sonarqube, so I only pay for storage when developers aren’t writing code.

ss_sonarqube_sql_server_serverless

For the front end web application I decided to use Azure Web App for Containers, running a Linux container using the official Sonarqube Docker image. Because the Sonarqube web application is stateless, it is a great candidate for being deleted and recreated from code. The benefits of using Azure Web App for Containers are:

  1. Azure Web App for Containers is a managed service, so again, no patching, securing or taking care of servers.
  2. I can scale the performance up and down and in and out from within my pipeline. This allows me to quickly and easily tune my performance/cost, even on a schedule.
  3. I can delete and rebuild my front end web application by running the pipeline in under 3 minutes. So I can completely delete my front end and save money when it is not in use (e.g. when teams aren’t developing in the middle of the night).

Architectural Considerations

The Sonarqube web application, as it has been architected, is accessible from the public internet. This might not meet your security requirements, so you might wish to change the architecture in the following ways:

  1. Put an Azure Application Gateway (a layer 7 router) in front of the service.
  2. Isolate the service from the internet in an Azure Virtual Network and make it accessible only to your development services. This may also require Azure ExpressRoute or other VPN technologies.
  3. Stop using the SQL Server administrator account for the Sonarqube front end to connect to the back end. This is not advised for a production service – instead, create a user account specifically for Sonarqube and store the password in an Azure Key Vault.

These architectural changes are beyond the scope of this post, as I wanted to keep the services simple, but the pattern defined here will work equally well with those architectures.

Techniques

Before we get into the good stuff, it is important to understand why I chose to orchestrate the deployment of these services using an Azure Pipeline.

I could have quite easily built the infrastructure manually straight into the Azure Portal or using some Azure PowerShell automation or the Azure CLI, so why do it this way?
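
For comparison, a one-off manual deployment of the same ARM template with the Az PowerShell modules might look something like the sketch below (the resource group name and file paths are purely illustrative):

# Create a resource group and deploy the ARM template into it manually
New-AzResourceGroup -Name 'sonarqube-rg' -Location 'WestUS2'
New-AzResourceGroupDeployment -ResourceGroupName 'sonarqube-rg' `
    -TemplateFile '.\infrastructure\sonarqube.json' `
    -TemplateParameterFile '.\infrastructure\sonarqube.parameters.json'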

There are a number of reasons, which I’ll list below, but in short: this is the most mature way to deploy applications and services.

ss_sonarqube_journey_of_an_Azure_professional

  1. I wanted to define my services using infrastructure as code using an Azure Resource Manager template.
  2. I wanted the infrastructure as code under version control using Azure Repos. I could have easily used GitHub here or one of a number of other Git repositories, but I’m using Azure Repos for simplicity.
  3. I wanted to be able to orchestrate the deployment of the service using a CI/CD pipeline using Azure Pipelines so that the process was secure, repeatable and auditable. I also wanted to parameterize my pipeline so that I could configure the parameters of the service (such as size of the resources and web site name) outside of version control. This would also allow me to scale the services by tweaking the parameters and simply redeploying.
  4. I wanted to use a YAML multi-stage pipeline so that the pipeline definition was stored in version control (a.k.a. pipeline as code). This also enabled me to break the process of deployment into two stages:
    • Build – publish a copy of the Azure Resource Manager templates as an artifact.
    • Deploy to Dev – deploy the resources to Azure using the artifact produced in the build.

Note: I’ve made my version of all these components public, so you can see how everything is built. You can find my Azure DevOps repository here and the Azure Pipeline definition here.

Step 1 – Create a project in Azure DevOps

First up, we need an Azure DevOps organization. You can sign up for a completely free one that will give you everything you need by going here and clicking start free. I’m going to assume you have your DevOps organization all set up.

  1. In your browser, log in to your Azure DevOps organization.
  2. Click + Create project to create a new project.
  3. Enter a Project Name and optionally a Description.
  4. Select Public if you want to allow anyone to view your project (they can’t contribute or change it). Otherwise leave it as Private to make it only visible to you.
  5. Click Create.

ss_sonarqube_createproject

You’ve now got an Azure Repo (version control), a place to create Azure Pipelines, and a whole lot of other tools, such as Azure Boards, that we’re not going to be using for this project.

Step 2 – Add ARM Template Files to the Repo

Next, we need to initialize our repository and then add the Azure Resource Manager (ARM) template files and the Azure Pipeline definition (YAML) file. We’re going to be adding all the files to the repository directly in the browser, but if you’re comfortable using Git, then I’d suggest using that.

  1. Select Repos > Files from the nav bar.
  2. Make sure Add a README is ticked and click Initialize.
  3. Click the ellipsis (…) next to the repo name and select Create a new folder.
  4. Set the Folder name to infrastructure. The name matters because the pipeline definition expects to find the ARM template files in that folder.
  5. Enter a checkin comment of “Added infrastructure folder”.
  6. Click Create.
  7. Once the folder has been created, we need to add two files to it:
    • sonarqube.json – The ARM template representing the infrastructure to deploy.
    • sonarqube.parameters.json – The ARM template default parameters.
  8. Click here to download a copy of the sonarqube.json. You can see the content of this file here.
  9. Click here to download a copy of the sonarqube.parameters.json. You can see the content of this file here.
  10. Click the ellipsis (…) next to the infrastructure folder and select Upload file(s).
  11. Click the Browse button and select the sonarqube.json and sonarqube.parameters.json files you downloaded.
  12. Set the Comment to something like “Added ARM template”.
  13. Ensure Branch name is set to master (it should be if you’re following along).
  14. Click Commit.

We’ve now got the ARM template files in the repository and under version control, so we can track any changes to them.

Note: When we created the infrastructure folder through the Azure DevOps portal a file called _PlaceHolderFile.md was automatically created. This is created because Git doesn’t allow storing empty folders. You can safely delete this file from your repo if you want.

Step 3 – Create your Multi-stage Build Pipeline

Now that we’ve got a repository, we can create our multi-stage build pipeline. This build pipeline will package the infrastructure files as an artifact and then perform a deployment. The multi-stage build pipeline is defined in a file called azure-pipelines.yml that we’ll put into the root folder of our repository.

  1. Click here to download a copy of the azure-pipelines.yml. You can see the content of this file here.
  2. Click the ellipsis (…) button next to the repository name and select Upload file(s).
  3. Click Browse and select the azure-pipelines.yml file you downloaded.
  4. Set the Comment to something like “Added Pipeline Definition”.
  5. Click Commit.
  6. Click the Set up build button.
  7. Azure Pipelines will automatically detect the azure-pipelines.yml file in the root of our repository and configure our pipeline.
  8. Click the Run button. The build will fail because we haven’t yet created the service connection called Sonarqube-Azure to allow our pipeline to deploy to Azure. We also still need to configure the parameters for the pipeline.

Note: I’ll break down the contents of the azure-pipelines.yml at the end of this post so you get a feel for how a multi-stage build pipeline can be defined.

Step 4 – Create Service Connection to Azure

For Azure Pipelines to be able to deploy to Azure (or access other external services) it needs a service connection defined. In this step we’ll configure the service connection called Sonarqube-Azure that is referred to in the azure-pipelines.yml file. I won’t go into too much detail about what happens when we create a service connection, as Azure Pipelines takes care of the details for you, but if you want to know more, read this page.

Important: This step assumes you have permissions to create service connections in the project and permissions in Azure to create a new Service Principal account with contributor permissions within the subscription. Many users won’t have this, so you might need to get a user with enough permissions to the Azure subscription to do this step for you.

  1. Click the Project settings button in your project.
  2. Click Service connections under the Pipelines section.
  3. Click New service connection.
  4. Select Azure Resource Manager.
  5. Make sure Service Principal Authentication is selected.
  6. Enter Sonarqube-Azure for the Connection name. This must be exact, otherwise it won’t match the value in the azure-pipelines.yml file.
  7. Set Scope level to Subscription.
  8. From the Subscription box, select your Azure Subscription.
  9. Make sure the Resource group box is empty.
  10. Click OK.
  11. An authorization box will pop up requesting that you authenticate with the Azure subscription you want to deploy to.
  12. Enter the account details of a user who has permissions to create a Service Principal with contributor access to the subscription selected above.

You now have a service connection to Azure that any build pipeline (including the one we created earlier) in this project can use to deploy services to Azure.

Note: You can restrict the use of this Service connection by changing the Roles on the Service connection. See this page for more information.

Step 5 – Configure Pipeline Parameters

The ARM template contains a number of parameters which allow us to configure some of the things about the Azure resources we’re going to deploy, such as the Location (data center) to deploy to, the size of the resources and the site name our Sonarqube service will be exposed on.

In the azure-pipelines.yml file we configure the parameters that are passed to the ARM template from pipeline variables. Note: There are additional ARM template parameters that are exposed (such as sqlDatabaseSkuSizeGB and sonarqubeImageVersion), but we’ll leave the configuration of those parameters as a separate exercise.

The parameters that are exposed as pipeline variables are:

  • siteName – The name of the web site. This will result in the Sonarqube web site being hosted at [siteName].azurewebsites.net.
  • sqlServerAdministratorUsername – The administrator username that will be used to administer this SQL database and that the Sonarqube front end will use to connect. Note: for a production service we should actually create another account for Sonarqube to use.
  • sqlServerAdministratorPassword – The password that will be used by Sonarqube to connect to the database.
  • servicePlanCapacity – The number of App Service plan nodes to use to run this Sonarqube service. Recommend leaving it at 1 unless you’ve got really heavy load.
  • servicePlanPricingTier – This is the App Service plan pricing tier to use for the service. I suggest S1 for testing; for systems requiring greater performance, consider S2, S3, P1V2, P2V2 or P3V2.
  • sqlDatabaseSkuName – This is the performance tier of the SQL database. There are a number of different performance options here and what you choose will depend on your load.
  • location – This is the code for the data center to deploy to. I use WestUS2, but choose whichever data center you wish.

The great thing is, you can change these variables at any time, run your pipeline again, and your infrastructure will be changed (scaled up/down/in/out) accordingly – without losing data. Note: You can’t change location or siteName after the first deployment, however.

To create your variables:

  1. Click Pipelines.
  2. Click the SonarqubeInAzure pipeline.
  3. Click the Edit button.
  4. Click the menu button (vertical ellipsis) and select Variables.
  5. Click the Add button and add the following parameters and values:
    • siteName – The globally unique name for your site. This will deploy the service to [siteName].azurewebsites.net. If this does not result in a globally unique name an error will occur during deployment.
    • sqlServerAdministratorUsername – Set to sonarqube.
    • sqlServerAdministratorPassword – Set to a strong password consisting of at least 8 characters including upper and lower case letters, numbers and symbols. Make sure you click the lock symbol to let Azure DevOps know this is a password and to treat it accordingly.
    • servicePlanCapacity – Set to 1 for now (you can always change and scale up later).
    • servicePlanPricingTier – Set to S1 for now (you can always change and scale up later).
    • sqlDatabaseSkuName – Set to GP_Gen5_2 for now (you can always change and scale up later). If you want to use the SQL Serverless database, use GP_S_Gen5_1, GP_S_Gen5_2 or GP_S_Gen5_4.
    • location – Set to WestUS2 or whatever the code is for your preferred data center.
  6. You can also click the Settable at Queue time box against any of the parameters you want to be able to set when the job is manually queued.
  7. Click the Save and Queue button and select Save.

We are now ready to deploy our service by triggering the pipeline.

Step 6 – Run the Pipeline

The most common way an Azure Pipeline is going to get triggered is by committing a change to the repository the build pipeline is linked to. But in this case we are just going to trigger a manual build:

  1. Click Pipelines.
  2. Click the SonarqubeInAzure pipeline.
  3. Click the Run pipeline button.
  4. Set any of the variables we want to change (for example if we wanted to scale up our services).
  5. Click Run.
  6. You can then watch the build and deploy stages complete.

Your pipeline should have completed and your resources will be on their way to being deployed to Azure. You can rerun this pipeline at any time with different variables to scale your services. You could even delete the front end app service completely and use this pipeline to redeploy the service again – saving lots of precious $$$.

Step 7 – Checkout your new Sonarqube Service

You can login to the Azure Portal to see the new resource group and resources that have been deployed.

  1. Open the Azure portal and log in.
  2. You will see a new resource group named [siteName]-rg.
  3. Open the [siteName]-rg.
  4. Select the Web App with the name [siteName].
  5. Click the URL.
  6. Your Sonarqube application will open after a few seconds. Note: It may take a little while to load the first time depending on the performance you configured on your SQL database.
  7. Log in to Sonarqube using the username admin and the password admin. You’ll want to change this immediately.

You are now ready to use Sonarqube in your build pipelines.

Step 8 – Scaling your Sonarqube Services

One of the purposes of this process was to enable the resources to be scaled easily and non-destructively. All we need to do is:

  1. Click Pipelines.
  2. Click the SonarqubeInAzure pipeline.
  3. Click the Run pipeline button.
  4. Set any of the variables we want to change to scale the service up/down/in/out.
  5. Click Run.
  6. You can then watch the build and deploy stages complete.

Of course, you could do a lot of this scaling with Azure Automation, which is a better idea in the long term than using your build pipeline to scale the services, because otherwise you’ll end up with hundreds of deployment records over time.

A Closer look at the Multi-stage Build Pipeline YAML

At the time of writing this post, the Multi-stage Build Pipeline YAML was relatively new and still in a preview state. This means that it is not fully documented. So, I’ll break down the file and highlight the interesting pieces:

Trigger

ss_sonarqube_yamltrigger

This section ensures the pipeline will only be triggered on changes to the master branch.

Stages

ss_sonarqube_yamlstages

This section contains the two stages: Build and Deploy. We could have as many stages as we like. For example: Build, Deploy Test, Deploy Prod.

Build Stage

ss_sonarqube_yamlbuildstage

This defines the steps to run in the build stage. It also requires that the stage execute on an Azure DevOps agent in the vs2017-win2016 pool.

Build Stage Checkout Step

ss_sonarqube_yamlbuildcheckout

This step causes the repository to be checked out onto the Azure DevOps agent.

Build Stage Publish Artifacts Step

ss_sonarqube_yamlbuildpublish

This step takes the infrastructure folder from the checked out repository and stores it as an artifact that will always be accessible as long as the build record is stored. The artifact will also be made available to the next stage (the Deploy Stage). The purpose of this step is to ensure we have an immutable artifact available that we could always use to redeploy this exact build.

Deploy Stage

ss_sonarqube_yamldeploystage

The deploy stage takes the artifact produced in the build stage and deploys it. It runs on an Azure DevOps agent in the vs2017-win2016 pool.

It also specifies that this is a deployment to an environment called “dev”. This will cause the environment to show up in the Environments section under Pipelines in Azure DevOps.

ss_sonarqube_environments.png

The strategy and runOnce settings define that this deployment should only execute once each time the pipeline is triggered.

Deploy Stage Azure Resource Group Deployment Step

ss_sonarqube_yamldeploystep

This deploy step takes the ARM template from the infrastructure artifact and deploys it to Azure using the Sonarqube-Azure service connection. It overrides the parameters (using the overrideParameters property) with build variables (e.g. $(siteName), $(servicePlanCapacity)).

But what about Azure Blueprints?

One final thing to consider: this deployment could be a great use case for implementing with Azure Blueprints. I would strongly suggest taking a look at using your build pipeline to deploy an Azure Blueprint containing the ARM template above.

Thank you very much for reading this and I hope you found it interesting.

 


Azure Resource Manager Templates Hands-on Lab and #GlobalAzure 2019

Recently I helped organize and present at the 2019 Global Azure Bootcamp in Auckland. The Global Azure Bootcamp is a huge event run by Azure communities throughout the world, all on the same day every year. It is an opportunity for anyone with an interest in Azure to come and learn from experts and presenters and share their knowledge. Whether you’re new to Azure or an expert, it is well worth your time to attend these free events.

AucklandGAB2019-1

The Global Azure Bootcamp is also an awful lot of fun to be a part of and I got to meet some fantastic people!

We also got to contribute to the Global Azure Bootcamp Science Lab, which was a really great way to learn Azure as well as contribute to the goal of finding potential exoplanets (how cool is that?). A global dashboard was made available where all locations could compare their contributions. The Auckland team did fantastically well given its comparative size: we managed to reach 8th on the team leaderboard:

AucklandGAB2019-TeamLeaderboard

Hands-On Workshop Material

As part of my session this year, I produced a Hands-on workshop and presentation showing attendees the basics of using Azure Resource Manager templates as well as some of the more advanced topics such as linked/nested templates and security.

The topics covered are:

AucklandGAB2019-ARMTemplatesWorkshop

I’ve made all of this material open and free for the community to use to run your own sessions or modify and improve.

You can find the material in GitHub here:

https://github.com/PlagueHO/Workshop-ARM-Templates

Thanks for reading and hope to see some of you at a future Global Azure Bootcamp!

Enable CORS Support in Cosmos DB using PowerShell

Support for Cross-Origin Resource Sharing (CORS) was recently added to Cosmos DB. If you want to enable CORS on an existing Cosmos DB account or create a new Cosmos DB account with CORS enabled it is very easy to do with Azure Resource Manager (ARM) templates or the Azure Portal.

But what if you’re wanting to find out the state of the CORS setting on an account or set it using PowerShell? Well, look no further.

The Cosmos DB PowerShell module (version 3.0.0 and above) supports creating Cosmos DB accounts with CORS enabled as well as updating and removing the CORS headers setting on an existing account. You can also retrieve the CORS setting for an existing Cosmos DB account.

Installing the CosmosDB Module

The first thing you need to do is install the CosmosDB PowerShell module from the PowerShell Gallery by running this in a PowerShell console:

ss_cosmosdbcors_installmodule
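
In case the screenshot is hard to read, the command is essentially the following (CORS support requires version 3.0.0 or above of the module):

Install-Module -Name CosmosDB -MinimumVersion '3.0.0' -Scope CurrentUser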

This will also install the Az.Accounts and Az.Resources Az PowerShell modules if they are not already installed on your machine. The *-CosmosDbAccount functions in the CosmosDB module are dependent on these modules.

Note: The CosmosDB PowerShell module and the Az PowerShell modules are completely cross-platform and support Linux, MacOS and Windows. Running in either Windows PowerShell (Windows) or PowerShell Core (cross-platform) is supported.

Versions of the CosmosDB PowerShell module earlier than 3.0.0.0 use the older AzureRm/AzureRm.NetCore modules and do not support the CORS setting.

Authenticating to Azure with ‘Az’

Before using the CosmosDB PowerShell module accounts functions to work with CORS settings you’ll first need to authenticate to Azure using the Az PowerShell Modules. If you’re planning on automating this process you’ll want to authenticate to Azure using a Service Principal identity.

Side note: if you’re using this module in an Azure DevOps build/release pipeline the Azure PowerShell task will take care of the Service Principal authentication process for you:

ss_cosmosdbcors_azuredevopspowershelltask

But if you’re just doing a little bit of experimentation then you can just use an interactive authentication process.

To use the interactive authentication process just enter into your PowerShell console:
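
With the Az modules installed, that is simply:

Connect-AzAccount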

then follow the instructions.

ss_cosmosdbcors_authenticateaz.png

Create a Cosmos DB Account with CORS enabled

Once you have authenticated to Azure, you can use the New-CosmosDbAccount function to create a new account:

ss_cosmosdbcors_newcosmosdbaccount

This will create a new Cosmos DB account with the name dsrcosmosdbtest in the resource group dsrcosmosdbtest-rgp in the West US location and with CORS allowed origins of https://www.fabrikam.com and https://www.contoso.com.
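
A command producing this result might look something like the following sketch (I’ve written the resource group parameter as ResourceGroupName here; check Get-Help New-CosmosDbAccount for the exact parameter names in your module version):

New-CosmosDbAccount `
    -Name 'dsrcosmosdbtest' `
    -ResourceGroupName 'dsrcosmosdbtest-rgp' `
    -Location 'westus' `
    -AllowedOrigin 'https://www.fabrikam.com', 'https://www.contoso.com'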

Important: the New-CosmosDbAccount command assumes the resource group that is specified in the ResourceGroup parameter already exists and you have contributor access to it. If the resource group doesn’t exist then you can create it using the New-AzResourceGroup function or some other method.

It will take Azure a few minutes to create the new Cosmos DB account for you.

Side note: if you want your PowerShell automation or script to be able to get on and do other tasks in the meantime, add the -AsJob parameter to the New-CosmosDbAccount call. This will cause the function to return immediately and provide you with a Job object that you can use to periodically query the state of the Job. More information on using PowerShell Jobs can be found here.

Be aware, you won’t be able to use the Cosmos DB account until the Job is completed.

If you look in the Azure Portal, you will find the new Cosmos DB account with the CORS allowed origin values set as per your command:

ss_cosmosdbcors_cosmosdbinportalwithcors

Get the CORS Allowed Origins on a Cosmos DB Account

Getting the current CORS Allowed Origins value on an account is easy too. Just run the following PowerShell command:

ss_cosmosdbcors_getcosmosdbcors
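
A sketch of that command, assuming the allowed origins are surfaced under the account’s CORS properties (the exact property path may differ between module versions):

(Get-CosmosDbAccount -Name 'dsrcosmosdbtest' -ResourceGroupName 'dsrcosmosdbtest-rgp').Properties.cors.allowedOrigins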

This will return a string containing all the CORS Allowed Origins for the Cosmos DB account dsrcosmosdbtest.

You could easily split this string into an array variable by using:

ss_cosmosdbcors_getcosmosdbcorssplit
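
Continuing from the previous sketch, something along these lines would do it:

# Split the comma-separated allowed origins string into an array
$allowedOrigins = (Get-CosmosDbAccount -Name 'dsrcosmosdbtest' -ResourceGroupName 'dsrcosmosdbtest-rgp').Properties.cors.allowedOrigins -split ','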

Update the CORS Allowed Origins on an existing Cosmos DB Account

To set the CORS Allowed Origins on an existing account use the Set-CosmosDbAccount function:

ss_cosmosdbcors_setcosmosdbcors
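
For example, a sketch of updating the allowed origins on the account created earlier (again, verify the exact parameter names with Get-Help Set-CosmosDbAccount):

Set-CosmosDbAccount `
    -Name 'dsrcosmosdbtest' `
    -ResourceGroupName 'dsrcosmosdbtest-rgp' `
    -AllowedOrigin 'https://www.fabrikam.com'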

This will take a few minutes to update. So you can use the -AsJob parameter to run this as a Job.

Remove the CORS Allowed Origins from an existing Cosmos DB Account

You can remove the CORS Allowed Origins setting by using the Set-CosmosDbAccount function and passing an empty string to the AllowedOrigin parameter:

ss_cosmosdbcors_removecosmosdbcors
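
That is, something like:

# Passing an empty string to AllowedOrigin clears the CORS setting
Set-CosmosDbAccount `
    -Name 'dsrcosmosdbtest' `
    -ResourceGroupName 'dsrcosmosdbtest-rgp' `
    -AllowedOrigin ''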

This will take a few minutes to update as well. As always, you can use the -AsJob parameter to run this as a Job.

 

Final Words

Hopefully, you can see it is fairly simple to automate and work with the Cosmos DB CORS Allowed Origins setting using the PowerShell Cosmos DB module.

If you have any issues or queries or would like to contribute to the PowerShell Cosmos DB module, please head over to the GitHub repository.

 

Use Pester to Test Azure Resource Manager Templates for Best Practices

Recently I came across the amazing Secure DevOps Kit for Azure (AzSK). This includes a really useful AzSK PowerShell module that contains cmdlets for performing different types of security scanning on Azure Resources, Subscriptions and Resource Manager Templates.

The feature of this module that I was most interested in for my current project was being able to scan ARM templates for best practice violations. The module contains several cmdlets for performing these different types of scans.

To install the module, open a PowerShell Window and run:
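
The install command is simply:

Install-Module -Name AzSK -Scope CurrentUser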

Important: At the time of writing this post, the AzSK module has dependencies on the AzureRM.Profile and other AzureRM.* PowerShell modules. As of December 2018, the AzureRM.* PowerShell modules are being renamed to Az.* (see this post). The AzureRM and Az modules cannot be installed side-by-side, so if you’ve installed the Az PowerShell modules on your system then the installation of AzSK will fail because the AzureRM modules will also be installed and a conflict will occur.

The cmdlet we’re most interested in is the Get-AzSKARMTemplateSecurityStatus. It can be used to scan one or more ARM templates or entire folders of ARM templates for best practice violations:

ss_azsk_scanning
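
For example, to scan a single template (the parameter name here is as I recall it; check Get-Help Get-AzSKARMTemplateSecurityStatus for your installed version):

Get-AzSKARMTemplateSecurityStatus -ARMTemplatePath 'D:\101-webapp-basic-windows\azuredeploy.json'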

This will scan the ARM templates and produce a CSV report in a folder Microsoft\AzSKLogs\ARMChecker within your $ENV:LOCALAPPDATA folder, then open that folder in Explorer. This isn’t ideal for automation scenarios or for use during Continuous Integration or Continuous Delivery pipelines. I’ve raised an issue with the AzSK team on GitHub to see if this can be improved.

In my case, I wanted to be able to use the PowerShell Pester module, a PowerShell testing framework, to execute tests on the output and then use the NUnit output Pester generates to publish the results into a Continuous Integration pipeline. To do that I needed to create a custom test script that would take the CSV report, count the failures at each level (High, Medium or Low) and fail if any are counted at the specified level or above.

This is what the script looks like:
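
The full script isn’t reproduced here, but a minimal sketch of the approach is below. The real AzSKARMTemplateSecurityStatus.Test script is more robust; the CSV column names and the ARMChecker log path used here are assumptions based on the report format described above:

param (
    [Parameter(Mandatory = $true)]
    [System.String] $TemplatePath,

    [ValidateSet('Low', 'Medium', 'High')]
    [System.String] $Severity = 'Medium'
)

Describe 'ARM template best practices' {
    # Run the AzSK ARM Checker scan; it writes a CSV report under $ENV:LOCALAPPDATA
    $null = Get-AzSKARMTemplateSecurityStatus -ARMTemplatePath $TemplatePath

    # Pick up the most recent ARM Checker CSV report
    $reportFolder = Join-Path -Path $ENV:LOCALAPPDATA -ChildPath 'Microsoft\AzSKLogs\ARMChecker'
    $report = Get-ChildItem -Path $reportFolder -Filter '*.csv' -Recurse |
        Sort-Object -Property LastWriteTime -Descending |
        Select-Object -First 1
    $results = Import-Csv -Path $report.FullName

    # Fail on the specified severity and anything above it
    $failOnLevels = switch ($Severity) {
        'High'   { @('High') }
        'Medium' { @('High', 'Medium') }
        'Low'    { @('High', 'Medium', 'Low') }
    }

    foreach ($level in $failOnLevels) {
        It "should have no failed checks with severity $level" {
            $failedChecks = @($results | Where-Object { $_.Severity -eq $level -and $_.Status -eq 'Failed' })
            $failedChecks.Count | Should -Be 0
        }
    }
}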

You can download the script from GitHub Gist directly or get it from the PowerShell Gallery by running:

Install-Script -Name AzSKARMTemplateSecurityStatus.Test

To use it you will need to install Pester 4.3.0 and AzSK 3.6.1 modules:
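
For example:

# SkipPublisherCheck is often needed on Windows where an older Pester version ships in-box
Install-Module -Name Pester -MinimumVersion 4.3.0 -Scope CurrentUser -SkipPublisherCheck
Install-Module -Name AzSK -MinimumVersion 3.6.1 -Scope CurrentUser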

Once that is done, you can use Invoke-Pester and pass in the TemplatePath and Severity parameters to the test script:
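
For example, assuming the test script is in the current folder:

Invoke-Pester -Script @{
    Path       = '.\AzSKARMTemplateSecurityStatus.Test.ps1'
    Parameters = @{
        TemplatePath = 'D:\101-webapp-basic-windows\azuredeploy.json'
        Severity     = 'Medium'
    }
}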

This will execute the Pester tests in the file above on the specified ARM template. The tests will fail when there are any best practice violations with the specified Severity or above. If you didn’t pass in a Severity then it will default to failing on Medium and High.

ss_azsk_invokepester

You can use the OutputFile and OutputFormat parameters to cause Pester to output an NUnit format file, which most Continuous Integration tools will happily accept and use to display the output of the tests.
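
For example:

Invoke-Pester -Script @{
    Path       = '.\AzSKARMTemplateSecurityStatus.Test.ps1'
    Parameters = @{ TemplatePath = 'D:\101-webapp-basic-windows\azuredeploy.json' }
} -OutputFile 'TEST-ARMTemplate.xml' -OutputFormat NUnitXml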

If you installed the script from the PowerShell Gallery, you can also run the tests like this:

AzSKARMTemplateSecurityStatus.Test -TemplatePath D:\101-webapp-basic-windows\azuredeploy.json

Finally, if you’re using Azure DevOps, you can also get this function as part of the Secure DevOps Kit (AzSK) CICD Extensions for Azure in the Azure DevOps Marketplace.

Whichever way you choose to consume AzSK, it is a great module and well worth including in your CI/CD pipelines to ensure your ARM templates meet best practices.

 

 

Managing Users & Permissions in Cosmos DB with PowerShell

If you’re just getting started with Cosmos DB, you might not have come across users and permissions in a Cosmos DB database. However, there are certain use cases where managing users and permissions is necessary. For example, you might want to limit access to a particular resource (e.g. a collection, document or stored procedure) on a per-user basis.

The most common usage scenario for users and permissions is if you’re implementing a Resource Token Broker type pattern, allowing client applications to directly access the Cosmos DB database.

Side note: The Cosmos DB implementation of users and permissions only provides authorization – it does not provide authentication. It would be up to your own implementation to manage the authentication. In most cases you’d use something like Azure Active Directory to provide an authentication layer.

But if you go hunting through the Azure Management Portal Cosmos DB data explorer (or Azure Storage Explorer) you won’t find any way to configure or even view users and permissions.

ss_cdb_cosmosdbdataexplorer

To manage users and permissions you need to use the Cosmos DB API directly or one of the SDKs.

But to make Cosmos DB users and permissions easier to manage from PowerShell, I created the Cosmos DB PowerShell module. This is an open source project hosted on GitHub. The Cosmos DB module allows you to manage much more than just users and permissions, but for this post I just wanted to start with these.

Requirements

This module works on PowerShell 5.x and PowerShell Core 6.0.0. It probably works on PowerShell 3 and 4, but I don’t have any machines running those versions to test on.

The Cosmos DB module does not have any dependencies, except if you call the New-CosmosDbContext function with the ResourceGroup parameter specified, as this will use the AzureRM PowerShell modules to read the Master Key for the connection directly from your Cosmos DB account. So I’d recommend installing the Azure PowerShell modules, or if you’re using PowerShell 6.0, install the AzureRM.NetCore modules.

Installing the Module

The best way to install the Cosmos DB PowerShell module is from the PowerShell Gallery. To install it for only your user account execute this PowerShell command:

Install-Module -Name CosmosDB -Scope CurrentUser

ss_cdb_cosmosdbinstallmodulecurrentuser

Or to install it for all users on the machine (requires administrator permissions):

Install-Module -Name CosmosDB

ss_cdb_cosmosdbinstallmoduleallusers

Context Variable

Update 2018-03-06

As of Cosmos DB module v2.0.1, the connection parameter has been renamed to context and the New-CosmosDbConnection function has been renamed to New-CosmosDbContext. This was to be more in line with the naming adopted by the Azure PowerShell project. The old connection parameter and the New-CosmosDbConnection function are still available as aliases, so older scripts won’t break. But these should be changed to use the new naming if possible, as I plan to deprecate the connection versions at some point in the future.

This post was updated to specify the new naming, but screenshots still show the Connection aliases.

Before you get down to the process of working with Cosmos DB resources, you’ll need to create a context variable containing the information required to connect. This requires the following information:

  1. The Cosmos DB Account name
  2. The Cosmos DB Database name
  3. The Master Key for the account (you can have the Cosmos DB PowerShell module get this directly from your Azure account if you wish).

To create the context variable we just use New-CosmosDbContext:

ss_cdb_cosmosdbnewconnection
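
A sketch of that call (the account and database names are placeholders, and the Key parameter takes a SecureString):

$primaryKey = ConvertTo-SecureString -String 'your master key here' -AsPlainText -Force
$context = New-CosmosDbContext -Account 'MyAzureCosmosDB' -Database 'MyDatabase' -Key $primaryKey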

If you do not wish to specify your master key, you can have the New-CosmosDbContext function pull your master key from the Azure Management Portal directly:

ss_cdb_cosmosdbnewconnectionviaportal
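
That call looks something like this (it requires an authenticated AzureRM session; the parameter may be named ResourceGroupName in some versions of the module):

$context = New-CosmosDbContext -Account 'MyAzureCosmosDB' -Database 'MyDatabase' -ResourceGroup 'MyCosmosDbResourceGroup'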

Note: This requires the AzureRM.Profile and AzureRM.Resources modules on Windows PowerShell 5.x, or AzureRM.Profile.NetCore and AzureRM.Resources.NetCore on PowerShell Core 6.0.0.

Managing Users

To add a user to the Cosmos DB database use the New-CosmosDbUser function:

New-CosmosDbUser -Context $context -Id 'daniel'

ss_cdb_cosmosdbnewuser

To get a list of users in the database:

Get-CosmosDbUser -Context $context

ss_cdb_cosmosdbgetusers

To get a specific user:

Get-CosmosDbUser -Context $context -Id 'daniel'

ss_cdb_cosmosdbgetuser

To remove a user (this will also remove all permissions assigned to the user):

Remove-CosmosDbUser -Context $context -Id 'daniel'

ss_cdb_cosmosdbremoveuser

Managing Permissions

Permissions in Cosmos DB are granted to a user for a specific resource. For example, you could grant a user access to just a single document, an entire collection or to a stored procedure.

To grant a permission you need to provide four pieces of information:

  1. The Id of the user to grant the permission to.
  2. An Id for the permission to create. This is just a string to uniquely identify the permission.
  3. The permission mode for the permission: All or Read.
  4. The Id of the resource to grant access to. This can be generated from one of the Get-CosmosDb*ResourcePath functions in the CosmosDB PowerShell module.

In the following example, we’ll grant the user daniel all access to the TestCollection:

ss_cdb_cosmosdbnewpermission
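
A sketch of those two steps (the database name is a placeholder; verify the exact parameter names with Get-Help for your module version):

# Build the resource path (Id) of the collection to grant access to
$collectionId = Get-CosmosDbCollectionResourcePath -Database 'MyDatabase' -Id 'TestCollection'

# Grant the user 'daniel' All access to the collection
New-CosmosDbPermission -Context $context -UserId 'daniel' -Id 'AccessTestCollection' -Resource $collectionId -PermissionMode All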

Once a permission has been granted, you can use the Get-CosmosDbPermission function to retrieve the permission and with it the Resource Token that can be used to access the resource for a limited amount of time (between 10 minutes and 5 hours).

Note: as you have the Master Key already, using the Resource Token isn’t required.

For example, to retrieve all permissions for the user with Id daniel and a resource token expiration of 600 seconds:

Get-CosmosDbPermission -Context $context -UserId 'daniel' -TokenExpiry '600' |
fl *

ss_cdb_cosmosdbgetpermission

As you’d expect, you can delete a permission by using the Remove-CosmosDbPermission function:

Remove-CosmosDbPermission -Context $context -UserId 'daniel' -Id 'AccessTestCollection'

ss_cdb_cosmosdbremovepermission

Final Thoughts

So this is pretty much all there is to managing users and permissions using the Cosmos DB PowerShell module. This module can also be used to manage the following Cosmos DB resources:

  • Attachments
  • Collections
  • Databases
  • Documents
  • Offers
  • Stored procedures
  • Triggers
  • User Defined Functions

You can find additional documentation and examples of how to manage these resources over in the Cosmos DB PowerShell module readme file on GitHub.

Hopefully this will help you in any Cosmos DB automation tasks you might need to implement.

 

Configure Azure SQL Server Automatic Tuning with PowerShell

One thing I’ve found with configuring Azure services using automation (e.g. the Azure PowerShell modules or Azure Resource Manager templates) is that the automation features are often a little behind the service’s feature set. For example, the Azure PowerShell modules may not yet implement settings for new or preview features. This can be an issue if you’re strictly deploying everything via code (e.g. infrastructure as code). But if you run into a problem like this, all is not lost, so read on for an example of how to solve it.

Azure REST APIs

One of the great things about Azure is that everything is configurable by making direct requests to the Azure REST APIs, even if it is not available in ARM templates or Azure PowerShell.

Depending on the feature/configuration, you can sometimes use the Set-AzureRmResource cmdlet to make calls to the REST APIs. But this cmdlet is limited to using an HTTP method of POST, so if you need to use PATCH you’ll need to find an alternate way to make the call.

So, what you need then is to use the Invoke-RestMethod cmdlet to create a custom call to the REST API. This is the process I needed to use to configure the Azure SQL Server Automatic Tuning settings and what I’ll show in my script below.

The Script

The following script can be executed in PowerShell (of course) and requires a number of parameters to be passed to it:

  • SubscriptionId – The subscription Id of the Azure subscription that contains the Azure SQL Server.
  • ResourceGroupName – The name of the resource group containing the SQL Server or database.
  • ServerName – The name of the Azure SQL Server to set the automatic tuning options on.
  • DatabaseName – The name of the Azure SQL Database to set the automatic tuning options on. If you pass this parameter then the automatic tuning settings are applied to the Azure SQL Database, not the server.
  • Mode – This defines where the settings for the automatic tuning are obtained from. Inherit is only valid if the DatabaseName is specified.
  • CreateIndex – Enable automatic tuning for creating an index.
  • DropIndex – Enable automatic tuning for dropping an index.
  • ForceLastGoodPlan – Enable automatic tuning for forcing the last good plan.

Requirements: You need to have installed the AzureRM.Profile PowerShell module (part of the AzureRM PowerShell modules) to use this script. The script also requires you to have logged in to your Azure subscription using Add-AzureRmAccount (as a user or Service Principal).
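
The full script is longer than makes sense to walk through line by line, but the heart of it is a PATCH request to the automaticTuning endpoint. A minimal sketch of that call is below, assuming you have already logged in with Add-AzureRmAccount; the API version and property names are taken from the preview REST API and may have changed since:

# Acquire a bearer token from the current AzureRM context (a common pattern for calling the REST API directly)
$azureContext = Get-AzureRmContext
$azureRmProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList $azureRmProfile
$token = $profileClient.AcquireAccessToken($azureContext.Subscription.TenantId).AccessToken

# Build the automatic tuning endpoint URI for the server
# (insert '/databases/<DatabaseName>' before '/automaticTuning' to target a database instead)
$baseUri = 'https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Sql/servers/{2}/automaticTuning/current?api-version=2017-03-01-preview'
$uri = $baseUri -f $SubscriptionId, $ResourceGroupName, $ServerName

# The desired automatic tuning state
$body = @{
    properties = @{
        desiredState = 'Custom'
        options      = @{
            createIndex       = @{ desiredState = 'On' }
            dropIndex         = @{ desiredState = 'On' }
            forceLastGoodPlan = @{ desiredState = 'Off' }
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri $uri -Method Patch -Headers @{ Authorization = "Bearer $token" } -Body $body -ContentType 'application/json'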

Example Usage

To apply custom automatic tuning to an Azure SQL Server:

.\Set-AzureRMSqlServerAutotuning.ps1 -SubscriptionId '<Subscription Id>' -ResourceGroupName '<Resource Group name>' -ServerName '<Azure SQL server name>' -Mode Custom -CreateIndex On -DropIndex On -ForceLastGoodPlan Off

ss_sqlserver_serverautotuning

To apply custom automatic tuning to an Azure SQL Database:

.\Set-AzureRMSqlServerAutotuning.ps1 -SubscriptionId '<Subscription Id>' -ResourceGroupName '<Resource Group name>' -ServerName '<Azure SQL server name>' -DatabaseName '<Azure SQL database name>' -Mode Custom -CreateIndex On -DropIndex On -ForceLastGoodPlan Off

ss_sqlserver_databaseautotuning

Conclusion

I’ve not yet encountered something in Azure that I can’t configure via the Azure REST APIs. This is because the Azure Management Portal uses the same APIs – so if it is available in the portal then you can do it via the Azure REST APIs. The biggest challenge is determining the body, header and methods available if the APIs are not yet documented.

If the API you need is not documented then you can raise a question in the Microsoft Azure Forums or on Stack Overflow. Failing that you can use the developer tools in your browser of choice to watch the API calls being made to the portal – I’ve had to resort to this many times, but documenting that process is something I’ll save for another day.

 

Stop, Start or Restart all Web Apps in Azure using PowerShell

Here is a short (and sometimes handy) single line of PowerShell code that can be used to restart all the Azure Web Apps in a subscription:

ss_azurecloudshell_restartallwebapps
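
The one-liner is along these lines (AzureRM cmdlets; swap in the Az equivalents if that is what you have installed):

(Get-AzureRmWebApp).GetEnumerator() | ForEach-Object { Restart-AzureRmWebApp -ResourceGroupName $_.ResourceGroup -Name $_.Name }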

Note: Use this with care if you’re working with production systems because this _will_ restart these Web Apps without confirming first.

This would be a handy snippet to be able to run in the Azure Cloud Shell. It could also be adjusted to perform different actions on other types of resources.

To stop all Web Apps in a subscription use:
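
Again assuming the AzureRM cmdlets, something like:

(Get-AzureRmWebApp).GetEnumerator() | ForEach-Object { Stop-AzureRmWebApp -ResourceGroupName $_.ResourceGroup -Name $_.Name }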

To start them all again:
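
The same pattern works with Start-AzureRmWebApp:

(Get-AzureRmWebApp).GetEnumerator() | ForEach-Object { Start-AzureRmWebApp -ResourceGroupName $_.ResourceGroup -Name $_.Name }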

The key part of this command is the GetEnumerator() method, because most Azure cmdlets don’t return an array of individual objects into the pipeline like typical PowerShell cmdlets. Instead, they return a System.Collections.Generic.List object, which requires a slight adjustment to the code. This technique can be used with most Azure cmdlets to allow the results to be iterated through.

ss_azurecloudshell_systemcollections

Thanks for reading.