Deploy Sonarqube to Azure App Service Linux Containers using an Azure DevOps Pipeline

Sonarqube is a web application that development teams typically use during the application development process to continuously validate the quality of their code.

This post is not specifically about Sonarqube and how it works. It is intended to show Developers & IT Pros how to deploy a service to Azure using contemporary infrastructure as code and DevOps patterns.

The Implementation

A Sonarqube installation is made up of a web application front end backed by a database.

[Image: Sonarqube architecture – web application front end backed by a database]

Sonarqube supports many different types of databases, but I chose Azure SQL Database for the following reasons:

    1. It is a managed service, so I don’t have to worry about patching, securing and looking after SQL servers.
    2. I can scale the database performance up and down easily with code. This allows me to balance my performance requirements with the cost to run the server or even dial performance right back at times when the service is not being used.
    3. I can make use of the new Azure SQL Database serverless tier (spoiler alert: there are still SQL servers). This allows the SQL Database to be paused when it is not being accessed by the Sonarqube front end. It can further reduce the cost of running Sonarqube by allowing me to delete the front end every night and pay only for storage costs when developers aren’t developing code.

For the front end web application I decided to use Azure Web App for Containers running a Linux container with the official Sonarqube Docker image. Because the Sonarqube web application is stateless, it is a great target for being deleted and recreated from code. The benefits of using Azure Web App for Containers are:

  1. Azure Web App for Containers is a managed service, so again, no patching, securing or taking care of servers.
  2. I can scale the performance up and down and in and out from within my pipeline. This allows me to quickly and easily tune my performance/cost, even on a schedule.
  3. I can delete and rebuild my front end web application by running the pipeline in under 3 minutes. So I can completely delete my front end and save money when it is not in use (e.g. when teams aren’t developing in the middle of the night).

Architectural Considerations

The Sonarqube web application, as it has been architected, is accessible from the public internet. This might not meet your security requirements, so you might wish to change the architecture in the following ways:

  1. Putting an Azure Application Gateway (a layer 7 router) in front of the service.
  2. Isolate the service from the internet in an Azure Virtual Network and make it accessible only to your development services. This may also require Azure ExpressRoute or other VPN technologies to be used.
  3. We are using the SQL Server administrator account for the Sonarqube front end to connect to the backend. This is not advised for a production service – instead, a user account specifically for the use of Sonarqube should be created and the password stored in an Azure Key Vault.

These architectural changes are beyond the scope of this post, as I wanted to keep the services simple, but the pattern defined here will work equally well with those architectures.

Techniques

Before we get into the good stuff, it is important to understand why I chose to orchestrate the deployment of these services using an Azure Pipeline.

I could have quite easily built the infrastructure manually straight into the Azure Portal or using some Azure PowerShell automation or the Azure CLI, so why do it this way?

There are a number of reasons, which I’ll list below, but in short: this is the most mature way to deploy applications and services.


  1. I wanted to define my services using infrastructure as code using an Azure Resource Manager template.
  2. I wanted the infrastructure as code under version control using Azure Repos. I could have easily used GitHub here or one of a number of other Git repositories, but I’m using Azure Repos for simplicity.
  3. I wanted to be able to orchestrate the deployment of the service using a CI/CD pipeline using Azure Pipelines so that the process was secure, repeatable and auditable. I also wanted to parameterize my pipeline so that I could configure the parameters of the service (such as size of the resources and web site name) outside of version control. This would also allow me to scale the services by tweaking the parameters and simply redeploying.
  4. I wanted to use a YAML multi-stage pipeline so that the pipeline definition was stored in version control (a.k.a. pipeline as code). This also enabled me to break the process of deployment into two stages:
    • Build – publish a copy of the Azure Resource Manager templates as an artifact.
    • Deploy to Dev – deploy the resources to Azure using the artifact produced in the build.

Note: I’ve made my version of all these components public, so you can see how everything is built. You can find my Azure DevOps repository here and the Azure Pipeline definition here.

Step 1 – Create a project in Azure DevOps

First up, we need an Azure DevOps organization. You can sign up for a completely free one that will give you everything you need by going here and clicking start free. I’m going to assume you have your DevOps organization all set up.

  1. In your browser, log in to your Azure DevOps organization.
  2. Click + Create project to create a new project.
  3. Enter a Project Name and optionally a Description.
  4. Select Public if you want to allow anyone to view your project (they can’t contribute or change it). Otherwise leave it as Private to make it only visible to you.
  5. Click Create.


You’ve now got an Azure Repo (version control) as well as a place to create Azure Pipelines, plus a whole lot of other tools, such as Azure Boards, that we’re not going to be using for this project.

Step 2 – Add ARM Template Files to the Repo

Next, we need to initialize our repository and then add the Azure Resource Manager (ARM) template files and the Azure Pipeline definition (YAML) file. We’re going to be adding all the files to the repository directly in the browser, but if you’re comfortable using Git, then I’d suggest using that.

  1. Select Repos > Files from the nav bar.
  2. Make sure Add a README is ticked and click Initialize.
  3. Click the ellipsis (…) next to the repo name and select Create a new folder.
  4. Set the Folder name to infrastructure. The name matters because the pipeline definition expects to find the ARM template files in that folder.
  5. Enter a checkin comment of “Added infrastructure folder”.
  6. Click Create.
  7. Once the folder has been created, we need to add two files to it:
    • sonarqube.json – The ARM template representing the infrastructure to deploy.
    • sonarqube.parameters.json – The ARM template default parameters.
  8. Click here to download a copy of the sonarqube.json. You can see the content of this file here.
  9. Click here to download a copy of the sonarqube.parameters.json. You can see the content of this file here.
  10. Click the ellipsis (…) next to the infrastructure folder and select Upload file(s).
  11. Click the Browse button and select the sonarqube.json and sonarqube.parameters.json files you downloaded.
  12. Set the Comment to something like “Added ARM template”.
  13. Ensure Branch name is set to master (it should be if you’re following along).
  14. Click Commit.

We’ve now got the ARM template files in the repository and under version control, so we can track any changes to them.

Note: When we created the infrastructure folder through the Azure DevOps portal a file called _PlaceHolderFile.md was automatically created. This is created because Git doesn’t allow storing empty folders. You can safely delete this file from your repo if you want.

Step 3 – Create your Multi-stage Build Pipeline

Now that we’ve got a repository, we can create our multi-stage build pipeline. This build pipeline will package the infrastructure files, store them as an artifact and then perform a deployment. The multi-stage build pipeline is defined in a file called azure-pipelines.yml that we’ll put into the root folder of our repository.

  1. Click here to download a copy of the azure-pipelines.yml. You can see the content of this file here.
  2. Click the ellipsis (…) button next to the repository name and select Upload file(s).
  3. Click Browse and select the azure-pipelines.yml file you downloaded.
  4. Set the Comment to something like “Added Pipeline Definition”.
  5. Click Commit.
  6. Click the Set up build button.
  7. Azure Pipelines will automatically detect the azure-pipelines.yml file in the root of our repository and configure our pipeline.
  8. Click the Run button. The build will fail because we haven’t yet created the service connection called Sonarqube-Azure to allow our pipeline to deploy to Azure. We also still need to configure the parameters for the pipeline.

Note: I’ll break down the contents of the azure-pipelines.yml at the end of this post so you get a feel for how a multi-stage build pipeline can be defined.

Step 4 – Create Service Connection to Azure

For Azure Pipelines to be able to deploy to Azure (or access other external services) it needs a service connection defined. In this step we’ll configure the service connection called Sonarqube-Azure that is referred to in the azure-pipelines.yml file. I won’t go into too much detail about what happens when we create a service connection, as Azure Pipelines takes care of the details for you, but if you want to know more, read this page.

Important: This step assumes you have permissions to create service connections in the project and permissions in Azure to create a new Service Principal account with contributor permissions within the subscription. Many users won’t have this, so you might need to get a user with enough permissions to the Azure subscription to do this step for you.

  1. Click the Project settings button in your project.
  2. Click Service connections under the Pipelines section.
  3. Click New service connection.
  4. Select Azure Resource Manager.
  5. Make sure Service Principal Authentication is selected.
  6. Enter Sonarqube-Azure for the Connection name. This must be exact, otherwise it won’t match the value in the azure-pipelines.yml file.
  7. Set Scope level to Subscription.
  8. From the Subscription box, select your Azure Subscription.
  9. Make sure the Resource group box is empty.
  10. Click OK.
  11. An authorization box will pop up requesting that you authenticate with the Azure subscription you want to deploy to.
  12. Enter the account details of a user who has permissions to create a Service Principal with contributor access to the subscription selected above.

You now have a service connection to Azure that any build pipeline (including the one we created earlier) in this project can use to deploy services to Azure.

Note: You can restrict the use of this Service connection by changing the Roles on the Service connection. See this page for more information.

Step 5 – Configure Pipeline Parameters

The ARM template contains a number of parameters which allow us to configure some of the things about the Azure resources we’re going to deploy, such as the Location (data center) to deploy to, the size of the resources and the site name our Sonarqube service will be exposed on.

In the azure-pipelines.yml file we configure the parameters that are passed to the ARM template from pipeline variables. Note: There are additional ARM template parameters that are exposed (such as sqlDatabaseSkuSizeGB and sonarqubeImageVersion), but we’ll leave the configuration of those parameters as a separate exercise.

The parameters that are exposed as pipeline variables are:

  • siteName – The name of the web site. This will result in the Sonarqube web site being hosted at [siteName].azurewebsites.net.
  • sqlServerAdministratorUsername – The administrator username that will be used to administer this SQL database and for the Sonarqube front end to connect with. Note: for a production service we should actually create another account for Sonarqube to use.
  • sqlServerAdministratorPassword – The password that will be used by Sonarqube to connect to the database.
  • servicePlanCapacity – The number of App Service plan nodes to use to run this Sonarqube service. Recommend leaving it at 1 unless you’ve got really heavy load.
  • servicePlanPricingTier – This is the App Service plan pricing tier to use for the service. I suggest S1 for testing; for systems requiring greater performance, consider S2, S3, P1V2, P2V2 or P3V2.
  • sqlDatabaseSkuName – This is the performance of the SQL Server. There are a number of different performance options here and what you choose will depend on your load.
  • location – This is the code for the data center to deploy to. I use WestUS2, but choose whichever data center you wish.

The great thing is, you can change these variables at any time and then run your pipeline again and your infrastructure will be changed (scaled up/down/in/out) accordingly – without losing data. Note: you can’t change location or siteName after the first deployment, however.

To create your variables:

  1. Click Pipelines.
  2. Click the SonarqubeInAzure pipeline.
  3. Click the Edit button.
  4. Click the menu button (vertical ellipsis) and select Variables.
  5. Click the Add button and add the following parameters and values:
    • siteName – The globally unique name for your site. This will deploy the service to [siteName].azurewebsites.net. If this does not result in a globally unique name an error will occur during deployment.
    • sqlServerAdministratorUsername – Set to sonarqube.
    • sqlServerAdministratorPassword – Set to a strong password consisting of at least 8 characters including upper and lower case, numbers and symbols. Make sure you click the lock symbol to let Azure DevOps know this is a password and to treat it accordingly.
    • servicePlanCapacity – Set to 1 for now (you can always change and scale up later).
    • servicePlanPricingTier – Set to S1 for now (you can always change and scale up later).
    • sqlDatabaseSkuName – Set to GP_Gen5_2 for now (you can always change and scale up later). If you want to use the SQL Serverless database, use GP_S_Gen5_1, GP_S_Gen5_2 or GP_S_Gen5_4.
    • location – Set to WestUS2 or whatever the code is for your preferred data center.
  6. You can also tick the Settable at Queue time box against any of the parameters you want to be able to set when the job is manually queued.
  7. Click the Save and Queue button and select Save.

We are now ready to deploy our service by triggering the pipeline.

Step 6 – Run the Pipeline

The most common way an Azure Pipeline is going to get triggered is by committing a change to the repository the build pipeline is linked to. But in this case we are just going to trigger a manual build:

  1. Click Pipelines.
  2. Click the SonarqubeInAzure pipeline.
  3. Click Run pipeline.
  4. Set any of the variables we want to change (for example if we wanted to scale up our services).
  5. Click Run.
  6. You can then watch the build and deploy stages complete.

Your pipeline should have completed and your resources will be on their way to being deployed to Azure. You can rerun this pipeline at any time with different variables to scale your services. You could even delete the front end app service completely and use this pipeline to redeploy the service again – saving lots of precious $$$.

Step 7 – Check out your new Sonarqube Service

You can log in to the Azure Portal to see the new resource group and resources that have been deployed.

  1. Open the Azure portal and log in.
  2. You will see a new resource group named [siteName]-rg.
  3. Open the [siteName]-rg.
  4. Select the Web App with the name [siteName].
  5. Click the URL.
  6. Your Sonarqube application will open after a few seconds. Note: It may take a little while to load the first time depending on the performance you configured for your SQL database.
  7. Log in to Sonarqube using the username admin and the password admin. You’ll want to change this immediately.

You are now ready to use Sonarqube in your build pipelines.

Step 8 – Scaling your Sonarqube Services

One of the purposes of this process was to enable the resources to be scaled easily and non-destructively. All we need to do is:

  1. Click Pipelines.
  2. Click the SonarqubeInAzure pipeline.
  3. Click Run pipeline.
  4. Set any of the variables we want to change to scale the service up/down/in/out.
  5. Click Run.
  6. You can then watch the build and deploy stages complete.

Of course you could do a lot of the scaling with Azure Automation, which is a better idea in the long term than using your build pipeline to scale the services because you’ll end up with hundreds of deployment records over time.
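For example, a runbook (or any script) could adjust the tiers directly with the Az PowerShell module. A minimal sketch – the resource group, plan, server and database names below are illustrative, yours are derived from the siteName variable and the ARM template:

# Scale the App Service plan running the Sonarqube front end
Set-AzAppServicePlan -ResourceGroupName 'mysonarqube-rg' -Name 'mysonarqube-plan' -Tier 'Standard' -NumberofWorkers 2

# Scale the SQL database backing Sonarqube
Set-AzSqlDatabase -ResourceGroupName 'mysonarqube-rg' -ServerName 'mysonarqube-sql' -DatabaseName 'sonarqube' -RequestedServiceObjectiveName 'GP_Gen5_4'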

A Closer look at the Multi-stage Build Pipeline YAML

At the time of writing this post, the Multi-stage Build Pipeline YAML was relatively new and still in a preview state. This means that it is not fully documented. So, I’ll break down the file and highlight the interesting pieces:

Trigger

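The screenshots of the YAML haven’t survived in this copy of the post, so each section below includes a reconstructed sketch – the azure-pipelines.yml file in the linked repository is the authoritative version. The trigger section looks something like:

trigger:
- master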

This section ensures the pipeline will only be triggered on changes to the master branch.

Stages

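A reconstructed sketch of the stage layout:

stages:
- stage: Build
  # ... build job and steps ...
- stage: Deploy
  # ... deployment job ...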

This section contains the two stages: Build and Deploy. We could have as many stages as we like. For example: Build, Deploy Test, Deploy Prod.

Build Stage

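A reconstructed sketch (the job name is illustrative):

- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: vs2017-win2016
    steps:
    # ... checkout and publish steps, covered below ...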

This defines the steps to run in the build stage. It also specifies that the stage will execute on an Azure DevOps agent from the vs2017-win2016 pool.

Build Stage Checkout Step

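A reconstructed sketch of the step:

steps:
- checkout: self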

This step causes the repository to be checked out onto the Azure DevOps agent.

Build Stage Publish Artifacts Step

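A reconstructed sketch, assuming the artifact is named infrastructure:

- publish: $(System.DefaultWorkingDirectory)/infrastructure
  artifact: infrastructure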

This step takes the infrastructure folder from the checked out repository and stores it as an artifact that will always be accessible as long as the build record is stored. The artifact will also be made available to the next stage (the Deploy Stage). The purpose of this step is to ensure we have an immutable artifact available that we could always use to redeploy this exact build.

Deploy Stage

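A reconstructed sketch (the deployment job name is illustrative):

- stage: Deploy
  jobs:
  - deployment: DeployDev
    pool:
      vmImage: vs2017-win2016
    environment: dev
    strategy:
      runOnce:
        deploy:
          steps:
          # ... the ARM deployment step, covered below ...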

The deploy stage takes the artifact produced in the build stage and deploys it. It runs on an Azure DevOps agent in the vs2017-win2016 pool.

It also specifies that this is a deployment to an environment called “dev”. This will cause the environment to show up in the environments section under pipelines in Azure DevOps.


The strategy and runOnce define that this deployment should only execute once each time the pipeline is triggered.

Deploy Stage Azure Resource Group Deployment Step

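A reconstructed sketch – the parameter list is inferred from the variables section earlier in this post:

- task: AzureResourceGroupDeployment@2
  inputs:
    azureSubscription: Sonarqube-Azure
    action: Create Or Update Resource Group
    resourceGroupName: $(siteName)-rg
    location: $(location)
    templateLocation: Linked artifact
    csmFile: $(Pipeline.Workspace)/infrastructure/sonarqube.json
    csmParametersFile: $(Pipeline.Workspace)/infrastructure/sonarqube.parameters.json
    overrideParameters: >-
      -siteName $(siteName)
      -sqlServerAdministratorUsername $(sqlServerAdministratorUsername)
      -sqlServerAdministratorPassword $(sqlServerAdministratorPassword)
      -servicePlanCapacity $(servicePlanCapacity)
      -servicePlanPricingTier $(servicePlanPricingTier)
      -sqlDatabaseSkuName $(sqlDatabaseSkuName)
    deploymentMode: Incremental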

This deploy step takes the ARM template from the infrastructure artifact and deploys it using the Sonarqube-Azure service connection. It overrides the parameters (using the overrideParameters property) with build variables (e.g. $(siteName), $(servicePlanCapacity)).

But what about Azure Blueprints?

One final thing to consider: this deployment could be a great use case for implementing with Azure Blueprints. I would strongly suggest taking a look at using your build pipeline to deploy an Azure Blueprint containing the ARM template above.

Thank you very much for reading this and I hope you found it interesting.

 

Install Docker on Windows Server 2016 using DSC

Windows Server 2016 is now GA and it contains some pretty exciting stuff. Chief among them, for me, is support for containers by way of Docker. So, one of the first things I did was start installing Windows Server 2016 VMs (Server Core and Nano Server, naturally) and installing Docker on them so I could begin experimenting with Docker Swarms and other cool stuff.

Edit: If you’re looking for a DSC configuration for setting up Docker on a Windows 10 Anniversary Edition machine, see the Windows 10 AE section below.

At first I started using the standard manual instructions provided by Docker, but this doesn’t really suit any kind of automation or infrastructure as code methodology. This of course was a good job for PowerShell Desired State Configuration (DSC).

So, what I did was put together a basic DSC config that I could load into a DSC Pull Server and build out lots of Docker nodes quickly and easily. This worked really nicely for me to build out lots of Windows Server 2016 Container hosts in very short order:


If you don’t have a DSC Pull server or you just want a simple script that you can use to quickly configure a Windows Server 2016 (Core or Core with GUI only) then read on.

Note: This script and process is really just an example of how you can configure Docker Container hosts with DSC. In a real production environment you would probably want to use a DSC Pull Server.

Get it Done

Edit: After a suggestion from Michael Friis (@friism) I have uploaded the script to the PowerShell Gallery and provided a simplified method of installation. The steps could be simplified even further into a single line, but I’ve kept them separate to show the process.

Using PowerShell Gallery

On a Windows Server 2016 Server Core or Windows Server 2016 Server Core with GUI server:

  1. Log on as a user with Local Administrator privileges.
  2. Start an Administrator PowerShell console – if you’re using Server Core just enter PowerShell at the command prompt.
  3. Install the Install-DockerOnWS2016UsingDSC.ps1 script from the PowerShell Gallery using the first command shown below this list. You may be asked to confirm installation, answer yes to any confirmations.
  4. Run the Install-DockerOnWS2016UsingDSC.ps1 script using the second command shown below this list.
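The commands for steps 3 and 4 look something like this (a sketch – check the script’s PowerShell Gallery listing for current usage):

# Step 3: install the script from the PowerShell Gallery
Install-Script -Name Install-DockerOnWS2016UsingDSC

# Step 4: run the installed script (it is placed in a folder on the PATH)
Install-DockerOnWS2016UsingDSC.ps1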

The script will run and reboot the server once. Not long after the reboot the Docker service will start up and you can get working with containers:


You’re now ready to start working with Containers.

The Older Method (without PowerShell Gallery)

On a Windows Server 2016 Server Core or Windows Server 2016 Server Core with GUI server:

  1. Log on as a user with Local Administrator privileges.
  2. Start an Administrator PowerShell console – if you’re using Server Core just enter PowerShell at the command prompt.
  3. Install the DSC Resources required for the DSC configuration using the first command shown below this list. You may be asked to confirm installation of these modules, answer yes to any confirmations.
  4. Download the Docker installation DSC script using the second command shown below this list.
  5. Run the Docker installation DSC script using the final command shown below this list.
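Here is a sketch of the commands for steps 3 to 5. The exact resource module name and the download URL aren’t preserved in this copy of the post, so treat these as placeholders and check the script itself:

# Step 3: install the DSC resource modules the configuration imports
# (xPSDesiredStateConfiguration is an assumption - check the script's
# Import-DscResource statements for the authoritative list)
Install-Module -Name xPSDesiredStateConfiguration -Force

# Step 4: download the DSC script (substitute the raw URL of
# Install-DockerOnWS2016UsingDSC.ps1 from the repository)
Invoke-WebRequest -Uri '<raw script URL>' -OutFile .\Install-DockerOnWS2016UsingDSC.ps1

# Step 5: run it
.\Install-DockerOnWS2016UsingDSC.ps1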

The script will run and reboot the server once. Not long after the reboot the Docker service will start up and you can get working with containers:


You’re now ready to start working with Containers.

What the Script Does

In case you’re interested in what the script actually contains, here are the components:

  1. Configuration ContainerHostDsc – the DSC configuration that configures the node as a Docker Container host.
  2. Configuration ConfigureLCM – the LCM meta configuration that sets Push Mode, allows the LCM to reboot the node if required and configures ApplyAndAutoCorrect mode.
  3. ConfigData – a ConfigData object that contains the list of node names to apply this DSC Configuration to – in this case LocalHost.
  4. ConfigureLCM – the call to the Configuration ConfigureLCM to compile the LCM meta configuration MOF file.
  5. Set-DscLocalConfigurationManager – this applies the compiled LCM meta configuration MOF file to LocalHost to configure the LCM.
  6. ContainerHostDsc – the call to the Configuration ContainerHostDsc to compile the DSC MOF file.
  7. Start-DSCConfiguration – this command starts the LCM applying the DSC MOF file produced by the ContainerHostDsc.
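To make the structure concrete, here is a skeletal outline of how those components fit together. This is illustrative only – the complete script linked below is the authoritative version, and the resources inside ContainerHostDsc are mostly elided:

Configuration ContainerHostDsc {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $AllNodes.NodeName {
        # Install the Containers feature; the real script also installs
        # and configures the Docker engine and service
        WindowsFeature ContainersInstall {
            Name   = 'Containers'
            Ensure = 'Present'
        }
    }
}

[DSCLocalConfigurationManager()]
Configuration ConfigureLCM {
    Node $AllNodes.NodeName {
        Settings {
            RefreshMode        = 'Push'
            RebootNodeIfNeeded = $true
            ConfigurationMode  = 'ApplyAndAutoCorrect'
        }
    }
}

# ConfigData - the list of node names to apply the configuration to
$ConfigData = @{ AllNodes = @( @{ NodeName = 'LocalHost' } ) }

# Compile and apply the LCM meta configuration
ConfigureLCM -ConfigurationData $ConfigData
Set-DscLocalConfigurationManager -Path .\ConfigureLCM -Verbose

# Compile the DSC MOF file and have the LCM apply it
ContainerHostDsc -ConfigurationData $ConfigData
Start-DscConfiguration -Path .\ContainerHostDsc -Wait -Verbose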

The complete script can be found here. Feel free to use this code in any way that makes sense to you.

What About Windows 10 AE?

If you’re looking for a DSC configuration that does the same thing for Windows 10 Anniversary edition, Ben Gelens (@bgelens) has written an awesome DSC config that will do the trick. Check it out here.

 

Happy containering!

NanoServer Container Base Image – It does Exist…Somewhere!

A really interesting video from Microsoft was just released with Mark Russinovich (CTO of Azure if you don’t already know) demonstrating Windows Server Containers. What is really interesting about this demo is that he is demonstrating containers using a Windows NanoServer Base Image:

Nano Server Containers Base Image – it does exist.

If you’ve read any of my previous posts here and here you’ll know I spent quite some time looking at this and trying to get it going with TP3. I deduced it was not possible yet without the Windows NanoServer Base Image for containers – which had not been provided by Microsoft.

Other eagle-eyed viewers will also note that he appears to be running a Nano Server container on a Full Server container host, which I didn’t think was possible. My original understanding of containers was that you could only instantiate a container using a base container image matching the OS version of the container host. E.g. you cannot instantiate a Server Core container on a NanoServer container host – I confirmed this was the case in TP3. But perhaps I misunderstood, or perhaps containers can be instantiated on “up” version container hosts but not “down” version.

Edit: Actually on further examination he is remoting into a different server that is acting as a Container Host (10.205.158.127). So I can’t assume that this remote host is a Full Server – it could well be a NanoServer. So the above paragraph isn’t relevant.

I also notice that he demos Hyper-V Containers, which as far as I am aware aren’t working on TP3. So this would indicate a more recent build than TP3.

So perhaps we’ll see this image being made available in the Windows Server 2016 TP4 release?

Detach from a Docker Container without Stopping It

Saturday morning Docker fun times (still only on Windows Server Core) – here is something I found out that might be useful. It is in the Docker documentation but it is not mentioned in the Microsoft container documentation.

Once you have attached to a Docker Container via a CMD console, typing exit at the console detaches from the container and stops it. This is not usually what I want to do. To detach from the container without stopping it, press CTRL+P followed by CTRL+Q.
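For example (the image and container names here are illustrative):

docker run -it --name demo windowsservercore cmd
# ... work inside the container, then press CTRL+P followed by CTRL+Q ...
docker ps
# the demo container is still listed as running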

The container is still running after being detached.

Note: This only applies to Docker Containers that have been attached to via docker attach or docker run. Windows Server Containers that have been connected to via Enter-PSSession can be exited using the Exit command.

Well that is enough for a Saturday morning.

Docker and Containers on Nano Server Continued

This is a continuation of my investigation of how to get Containers and also possibly the Docker engine running on Windows Server Nano 2016 TP 3. The initial investigation into this can be found here: How to use Containers on Windows Nano Server.

This post is mainly documenting the process of manually creating containers on Windows Nano Server 2016 TP3 as well as some additional details about what I have managed to find out. The documentation on Windows Server Containers from Microsoft is relatively thin at the moment (not surprising – this is very much in technical preview) and so a lot of the information here is speculation on my part. Still, it might be useful to get an idea of how things eventually will work. But of course a good deal of it could change in the near future. This information is really to help me get my head around the concepts and how it will work, but it might be useful for others.

Step 1 – Create a Nano Server Virtual Machine

Anyone who has played around with Nano Server should already be very familiar with this step. The only thing to remember is that the following packages must be included in the VHDx:

  1. Guest – All Nano Server VHDx files running as a VM should have this package. If you’re installing Nano Server onto bare metal you won’t need this.
  2. Compute – Includes the Nano Server Hyper-V components. Required because Containers use Hyper-V networking and are a form of Virtualization.
  3. OEM-Drivers – Not strictly required but I tend to include it anyway.
  4. Containers – This package provides the core of Windows Server Containers.

If you’re unfamiliar with creating a Nano Server VHDx, please see this post.

Step 2 – Configure the Container Host Networking

Any container that needs to be connected to a network (most of them usually) will need to connect to a Hyper-V Virtual Switch configured on this Container Host. There are two virtual switch types that can be configured for this purpose:

  1. NAT – This seems to be a new switch type in Windows Server 2016 that performs some kind of NAT on the connected adapters.
  2. DHCP – this is actually just a standard External switch with a connection to a physical network adapter on the Container Host.
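For reference, this is roughly what the installation script does on a full server – a sketch from memory of the TP3-era commands, so treat the parameters as assumptions:

# NAT: create a NAT switch; the script then also creates the NAT network,
# which requires the NetNat module that is missing on Nano Server
New-VMSwitch -Name 'Virtual Switch' -SwitchType NAT -NATSubnetAddress '172.16.0.0/12'

# DHCP: a standard External switch bound to a physical network adapter
New-VMSwitch -Name 'Virtual Switch' -NetAdapterName (Get-NetAdapter | Select-Object -First 1).Name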

The installation script normally performs one of the above depending on which option you select. However on Nano Server both of these processes fail:

NAT

Creating a NAT VM Switch on Nano Server actually works. But the command to create a NAT Network connection to the VM Switch fails because the NETNAT module is not available on Nano Server.

DHCP


Creating a DHCP/External VM Switch on Nano Server just fails with a cryptic error message. The same error occurs when creating a Private or Internal VM Switch, so I expect Hyper-V on Nano Server isn’t working so well (or at all). Not much point pursuing this method of networking.

Step 3 – Install a Base OS Image from a WIM File

Every container you create requires a Base OS Image. This Base OS Image contains all the operating system files and registry settings for the OS a container uses. Windows Server Containers expects to be provided with at least one Base OS Image in the form of a WIM file. You can’t create a container without one of these. At this point I am unsure if the WIM file that Windows Server Containers will use is a customized version of the WIM file provided with an OS or if it is standard.

During an installation of Windows Server Containers onto a Windows Server Core operating system, the process automatically downloads a WIM file that is used as the Base OS Image.

To install a Base OS Image from a WIM File to the Container Host using the PowerShell function:

Install-ContainerOSImage -WimPath CoreServer.wim -Verbose


This function does several things:

  1. Creates a new folder in c:\programdata\microsoft\windows\images named with the Canonical Name of the new Base OS Image.
  2. Inside the Canonical Name folder, a subfolder called files is created, into which the Base OS Image file is extracted.
  3. Another subfolder called hives is also created in the Canonical Name folder, which contains the default registry hives for the Base OS Image.
  4. Two additional files, Metadata.json and Version.wcx, are created in the Canonical Name folder that contain metadata about the image.
  5. Adds the Base OS Image to the list of Container Images that are available to create new Containers from.

I have tried using the Install.wim from the ISO, the NanoServer.wim from the ISO and the Core.wim downloaded using the Core Edition Containers install script. Also note, the INSTALL.WIM file on the TP3 ISO still refers to Windows Server 2012 R2 SERVERSTANDARDCORE (I double checked this, and you can confirm it by the Version number in the OS Image).

The Test-ContainerImage cmdlet can be used to identify “problems” with container images:

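For example, something like this should test every installed image (a sketch – check the cmdlet help for its exact parameters):

Get-ContainerImage | Test-ContainerImage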

None of the container images report any problems, which is nice to know.

Step 4 – Create a Container

This is obviously where things should start to get exciting! The next step is to create a shiny new container using one of our Base OS Images. However, if you try to create a new container at this point, a cryptic error message occurs.

I don’t know what causes this, but if you reboot your Nano Server VM the error goes away and you should be able to successfully create the container.

Unfortunately only the Base OS Image downloaded from Microsoft and used with Containers for Windows Server 2016 Core results in a valid Container. So it would seem there are some things that are done to a WIM file to make it able to be Containerized (is that a word?).

Step 5 – Start the Container

I’m not holding my breath here. When the container is started, it fails with an error.

Looking closely at the text of the error it would appear that there was a mismatch between the Container Host OS version and that of the Base OS version that the container was using. This is probably because the Container Host is a Nano Server and the Base OS that was downloaded was for a Core Server.

Next Steps

It would seem at this point we have to wait for Microsoft to provide a Base OS file for Nano Server and also fix the Virtual Switch issues with Nano Server before any further progress can be made experimenting with Containers on Nano Server.

However, it may still be possible to get the Docker Engine working under Nano Server and see if that offers any more information. So that will be what I’ll look into next.

Also, it is interesting to dig around in the files that are created when a new container is created. The container files are stored in the C:\ProgramData\Microsoft\Windows\Hyper-V\containers folder. Unfortunately the files are all binary, so we aren’t able to dig around in them to glean any other information.

Well, that is enough for today.

How To use Containers on Windows Nano Server

Edit: I wrote this article when examining containers on Windows Nano Server TP3, which wasn’t in a working state. I have not yet had a chance to fully examine containers on Windows Nano Server TP4, but when I get a spare few hours I will no doubt deep dive into it.

If you’re looking for instructions on installing and using containers on Windows Nano Server TP4, start here.

These instructions are more focused on setting up a container host on Windows Server Core TP4, but I have managed to get them working on Windows Nano Server TP4 just fine.

I do plan to document this process over the next week or so.


 

You’d be forgiven for believing that it was just a simple click of a button (or addition of a package) to get Docker Containers working on a shiny new Windows Nano Server TP3 install. That is what I thought too. But after careful examination of the available documentation I found that there isn’t much information on actually getting containers working on Nano Server. Sure, there is lots of information on running it on a full or core version of Windows Server TP3, but Nano is lacking. So, because I’m a bit obsessive, I decided I’d have a try at adapting the standard installation process.

Edit: Initially I had a bit of success, but I’ve run into some show-stopping issues that I haven’t been able to resolve (see later on in this post).

I have continued the investigation here with a much more in depth look at the issues.

tl;dr: Containers on Windows Server Nano 2016 TP3 does not work yet! The Base OS WIM file for Windows Server Nano is required, but has not been provided.

Problems with the Standard Containers Install Script

First up I grabbed a copy of this script from Microsoft which is what is used to install containers on a full Windows Server 2016 install. I took a look at it and identified the things that wouldn’t work on a Nano Server 2016 install. This is what I found:

  1. The script can optionally configure a NAT switch – this requires the NetNat PS module which isn’t available on Nano.
  2. The script will install a VM Switch – therefore the Compute package is required to be installed on the Nano Server (the Compute package contains the Hyper-V components).
  3. The script can download various files from the internet (using the alias wget). Wget and Invoke-WebRequest are not available on Nano Server – so we’ll need to download the files to another machine and pre-copy them to the Nano Server.
  4. The Expand-Archive cmdlet is used to extract the NSSM executable, but this cmdlet is not available on Nano Server either – so we’ll need to extract NSSM.exe on another machine and copy it to the server.

The Process of Installing a Container Host

The process of actually installing a Container Host in Windows Nano Server is as follows:

  1. Create a Nano Server VHDx with the packages Guest, OEM-Drivers, Compute and Containers.
  2. Create a new VM booting from the VHDx – this is our Container Host.
  3. Upload a Base OS WIM file to the Container Host that will be used to create new containers.
  4. Upload Docker.exe to c:\windows\system32\ on the Container Host.
  5. Upload NSSM.exe to c:\windows\system32\ on the Container Host – this is used to create and run the Docker Service.
  6. Run the installation script on the Container Host – this will install the networking components and configure the Docker service as well as create the container OS image.
  7. Create a Container!

In theory the Container Host is now ready to go!

What is Required to build a Nano Server Container Host

A bit of experience with PowerShell is a good help here!

So, to create a Nano Server Container Host you’ll need a few things:

  1. A machine that can run Generation 2 Hyper-V machines (Gen 1 will probably work but I’m using Gen 2) – this will host your Nano Server. This machine must also be running PowerShell 5.0 (I’m using some PS5.0 only cmdlets)!
  2. A copy of the Windows Server 2016 TP 3 ISO from here.
  3. A working folder (I used D:\Temp) where you’ll put all the scripts and other files etc.
  4. The scripts (I’ll provide them all in a zip file), but they are:
    1. New-ContainerHostNano.ps1 – this will do everything and is the only script you’ll run.
    2. Install-ContainerHostNano.ps1 – this is the script that gets automatically run on the Container Host. It is a version of the Microsoft one from here that I have adjusted to work with Nano Server.
    3. New-NanoServerVHD.ps1 – this is a script I wrote a while back to create Nano Server VHDx files (see this post for more details).
    4. Convert-WindowsImage.ps1 – this script is required by New-NanoServerVHD.ps1 and is available on Microsoft Script Center.

How Can I use all This?

I haven’t really finished implementing or testing these scripts and I am encountering a problem creating the VM Switch on the Nano Server, but if you’re interested you can get a hold of the scripts in my GitHub repository.

To use them:

  1. Create a working folder (I used d:\temp).
  2. Download the four PS1 scripts from the GitHub repository to the working folder.
  3. Download the Windows Server 2016 TP3 ISO from here and put it in the working folder.
  4. Download the Base OS Container Image from here (3.5GB download) and put it in the working folder.
  5. Edit the New-ContainerHostNano.ps1 file in the working folder and customize the variables at the top to suit your paths and such – fairly self explanatory.
  6. In an Administrative PowerShell run the New-ContainerHostNano.ps1 file.

Please note: This is a work in progress. There are definitely some bugs in it:

  1. An error is occurring when trying to create the VM Switch in DHCP mode or NAT mode.
  2. If using NAT mode the NAT module isn’t included in Nano Server so although the VM switch gets created the NAT Network adapter can’t be created.
  3. NSSM isn’t creating the Docker Service – which may just be an issue with running the PowerShell installation script remotely.

None of the above will stop containers being created though. The containers might not be able to communicate with the world via networking and the Docker management engine might not work, but in theory the containers should still work (at least that is my understanding).

The BIG Problem

Any container that you create requires a WIM file that contains the container base OS image that the container will use. Microsoft has so far only provided a base WIM file for Windows Server 2016 Core installations – they haven’t provided a container base OS image for Windows Server 2016 Nano yet. You can download the Core one from here (3.5GB download).

If you try to use the NanoServer.WIM file from the Windows Server 2016 ISO as the container base OS image you can’t even create the container at all.

I did try putting the Core WIM file downloaded above onto the Nano Server. I could then create a container OK, but an error would occur starting it up:

Nope – can’t use the Core WIM with a Nano Server Container Host!

Update 2015-10-29: There is a new video available online from Microsoft of Mark Russinovich (Azure CTO) doing a container demonstration using a Nano Server. It clearly shows that the NanoServer Base Container Image does exist. So perhaps we’ll see this in the TP4 release.

The video can be seen here.

Feel free to let me know if you can solve any of these issues! Any help is appreciated. I’ll continue to work on this and post any additional results.

PowerShell CmdLets available in the Containers Module on a Nano Server

Just a Monday morning quickie:

Here is a list of all the cmdlets available in the PowerShell containers module on a Nano Server with the containers package installed:

Containers – the next big thing!

And here is the text version:

Function        Install-ContainerOSImage                           1.0.0.0    Containers
Function        Uninstall-ContainerOSImage                         1.0.0.0    Containers
Cmdlet          Add-ContainerNetworkAdapter                        1.0.0.0    Containers
Cmdlet          Connect-ContainerNetworkAdapter                    1.0.0.0    Containers
Cmdlet          Disconnect-ContainerNetworkAdapter                 1.0.0.0    Containers
Cmdlet          Export-ContainerImage                              1.0.0.0    Containers
Cmdlet          Get-Container                                      1.0.0.0    Containers
Cmdlet          Get-ContainerHost                                  1.0.0.0    Containers
Cmdlet          Get-ContainerImage                                 1.0.0.0    Containers
Cmdlet          Get-ContainerNetworkAdapter                        1.0.0.0    Containers
Cmdlet          Import-ContainerImage                              1.0.0.0    Containers
Cmdlet          Move-ContainerImageRepository                      1.0.0.0    Containers
Cmdlet          New-Container                                      1.0.0.0    Containers
Cmdlet          New-ContainerImage                                 1.0.0.0    Containers
Cmdlet          Remove-Container                                   1.0.0.0    Containers
Cmdlet          Remove-ContainerImage                              1.0.0.0    Containers
Cmdlet          Remove-ContainerNetworkAdapter                     1.0.0.0    Containers
Cmdlet          Set-ContainerNetworkAdapter                        1.0.0.0    Containers
Cmdlet          Start-Container                                    1.0.0.0    Containers
Cmdlet          Stop-Container                                     1.0.0.0    Containers
Cmdlet          Test-ContainerImage                                1.0.0.0    Containers