
Continuously Testing your Infrastructure with OVF and Microsoft Operations Management Suite

Introduction

One of the cool new features in Windows Server 2016 is the Operation Validation Framework. The Operation Validation Framework (OVF) is an open source PowerShell module that contains:

A set of tools for executing validation of the operation of a system. It provides a way to organize and execute Pester tests which are written to validate operation (rather than limited feature tests)

One of the things I’ve been using OVF for is to continuously test parts of my infrastructure. Any time a failure occurs an error event is written to the Event Log. I then have Microsoft Operations Management Suite set up to monitor my Event Log for any errors and then alert me (by e-mail) if they occur.

Note: OVF tests are just Pester tests, so if you’ve ever written a Pester test then you’ll have no trouble at all with OVF. If you haven’t written a Pester test before, here is a great introduction. If you’re going to learn just one thing about PowerShell this year, I’d definitely suggest Pester.

The great thing about OVF is that it can test anything that PowerShell can get information about – which is practically anything. For some great examples showing how to use OVF to test your infrastructure, see this article or this article by the insightful Irwin Strachan.

In this article I’m going to show you how to set up a basic set of OVF tests to continuously test some DNS components on a server, write any failures to the Event Log and then configure Microsoft OMS to monitor the Event Log for OVF failures and alert you if something breaks.

I am calling my tests ValidateDNS which is reflected in the files and Event Log Source for the events that are created, but you can call these tests whatever you like. I’m also going to create my tests and related files as a PowerShell Module with the same name (ValidateDNS). You don’t have to do this – you could make a much simpler process, but for me it made sense because I could then publish my set of tests to my own private PowerShell Gallery or to a Sonatype Nexus OSS server.

I am also going to assume you’ve got a Microsoft Operations Management Suite account all set up. If you don’t have an OMS account, you can get a free trial one here. You should also have the OMS Windows Agent installed onto the machine that will execute your OVF tests.

Let’s Do This!

Step 1 – Installing the OperationValidation Module

The OperationValidation PowerShell module comes in-box with Windows Server 2016 (and Windows 10 AE), but must be downloaded for earlier operating systems.

To download and install the OperationValidation PowerShell module on earlier operating systems enter the following cmdlet in an Administrator PowerShell console:

[Screenshot: installing the OperationValidation module]
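The command is simply:

Install-Module -Name OperationValidation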

This will download the module from the PowerShell Gallery.

Note: to download modules from the PowerShell Gallery you’ll either need WMF 5.0 or PowerShell PackageManagement installed. I strongly recommend the former option if possible.

Step 2 – Create OVF Test Files

The next step is to create the OVF tests in a file. OVF tests work best if they are contained in a specific folder structure in the PowerShell Modules folder.

Note: you can create the OVF tests in a different location if that works for you, but it requires specifying the -testFilePath parameter when calling Invoke-OperationValidation.

    1. Create a folder in your PowerShell Modules folder (c:\program files\WindowsPowerShell\Modules) with the name of your tests. I used ValidateDNS.
    2. In the ValidateDNS folder create the following folder structure:
        • Diagnostics\
          • Simple\
          • Comprehensive\

      [Screenshot: the ValidateDNS folder structure]

    3. In the Simple folder create a file called ValidateDNS.Simple.Tests.ps1 with the contents shown in the sketch below this list.
    4. Edit the tests and create any that validate the things you want to test for.
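Here is a minimal sketch of what ValidateDNS.Simple.Tests.ps1 might contain. The specific checks and addresses are examples only – adjust them to suit your environment:

Describe 'DNS Service' {
    It 'Should be running' {
        (Get-Service -Name 'DNS').Status | Should Be 'Running'
    }
}

Describe 'DNS Forwarders' {
    It 'Should include the expected forwarder' {
        # This address is intentionally unlikely to match so that the
        # failure path can be demonstrated - replace it with your own.
        $forwarders = (Get-DnsServerForwarder).IPAddress.IPAddressToString
        $forwarders -contains '192.168.1.99' | Should Be $true
    }
}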

The OVF tests above just check some basic settings of a DNS Server and so would normally be run on a Windows DNS Server. As noted above, you could write tests for almost anything, including validating things on other systems. I have intentionally set up one of the tests to fail for demonstration purposes (a gold star for anyone who can tell which test will fail).

In a future article I’ll cover how to test components on remote machines so you can use a single central node to perform all your OVF testing.

Step 3 – Create a Module for Running the Tests

Although we could run the tests as is, the output would just end up in the console, which is not what we want here. We want any failed tests to be put into the Application Event Log.

  1. Create a file called ValidateDNS.psm1 in the ValidateDNS folder created earlier.
  2. Add the code shown in the sketch below to this ValidateDNS.psm1 file.
  3. Save the ValidateDNS.psm1 file.
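A sketch of what this module might contain is below. The event source name, event ID and message format are my own choices rather than anything required by OVF:

function Invoke-ValidateDNS {
    [CmdletBinding()]
    param ()

    # Create the Event Source used to identify our test failures in OMS
    if (-not [System.Diagnostics.EventLog]::SourceExists('ValidateDNS')) {
        New-EventLog -LogName Application -Source 'ValidateDNS'
    }

    # Execute the Simple OVF tests in this module
    $results = Invoke-OperationValidation -ModuleName ValidateDNS -TestType Simple

    # Write an Error entry to the Application Event Log for each failed test
    foreach ($failure in ($results | Where-Object { $_.Result -eq 'Failed' })) {
        Write-EventLog -LogName Application -Source 'ValidateDNS' `
            -EntryType Error -EventId 1000 -Message $failure.Name
    }
}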

The above file is a PowerShell module that makes available a single cmdlet called Invoke-ValidateDNS. We can now just run Invoke-ValidateDNS in a PowerShell console and the following tasks will be performed:

  • Create a new Event Source for the Application Event Log that we can use in OMS to identify any errors thrown by our tests.
  • Execute the OVF tests in ValidateDNS.Simple.Tests.ps1.
  • Add Error entries to the Application Event Log for each failed test.

Step 4 – Schedule the Script

In this step we will create a Scheduled Task to run the cmdlet we created in Step 3. You could use the Task Scheduler UI to do this, but this is a PowerShell blog after all, so here is a script you can run that will create the scheduled task:

[Screenshot: the script creating the scheduled task]
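A sketch of such a script is below – the task name and 60-minute repetition interval are just the values I used:

$credential = Get-Credential -Message 'Account to run the ValidateDNS task'
$action = New-ScheduledTaskAction -Execute 'PowerShell.exe' `
    -Argument '-NonInteractive -WindowStyle Hidden -Command "Invoke-ValidateDNS"'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 60) `
    -RepetitionDuration (New-TimeSpan -Days 3650)
Register-ScheduledTask -TaskName 'Validate DNS' -Action $action -Trigger $trigger `
    -User $credential.UserName -Password $credential.GetNetworkCredential().Password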

You will be prompted for the account details to run the task under, so enter valid credentials for this machine that give the task the correct access to run the tests. For example, if the tests need Local Administrator access to the machine to run correctly, then ensure the account assigned is a Local Administrator.

This will run the script every 60 minutes. You could adjust it easily to run more or less frequently if you want to. This is what the Task Scheduler UI will show:

[Screenshot: the scheduled task in the Task Scheduler UI]

Every time the tests run and a test failure occurs the Application Event Log will show:

[Screenshot: a ValidateDNS error event in the Application Event Log]

Now that we have any test failures appearing in the Event Log, we can move onto Microsoft Operations Management Suite.

Step 5 – Create a Log Search and Alert

As noted earlier, I’m assuming you have already connected the computer running your OVF tests to your OMS account as a data source:

[Screenshot: the OMS agent connected as a data source]

What we need to do now is create and save a new Log Search that will select our OVF test failures. To do this:

  1. In OMS, click the Log Search button.
  2. In the Search box enter (adjust the Source= if you used a different name for your tests in earlier steps):
    (Type=Event) (EventLevelName=error) (Source=ValidateDNS)

    [Screenshot: the OMS Log Search query]

  3. You will now be shown all the events on all computers matching these criteria:
    [Screenshot: the matching events in OMS Log Search]
    From here you could further refine your search if you want. For example, I could have added additional filters on Computer or EventId, but for me this was all I needed.
  4. Click Save to save the Log Search.
  5. In the Name enter ‘Validate DNS Events’ and in the Category enter ‘OVF’ (you can actually enter whatever works for you here):
    [Screenshot: saving the Log Search]
  6. Click Save.
  7. Click the Alert button to add a new Alert Rule.
  8. Configure the Alert Rule as follows (customizing to suit your needs):
    [Screenshot: the Alert Rule configuration]
  9. Click Save to save the Alert.

You’re now done!

The DNS Admins will now receive an e-mail whenever any of the DNS validation tests fail:

[Screenshot: the OMS alert e-mail received by the DNS Admins]

If you look down in the ParameterXML section you can even see the test that failed. So the DNS Admins can dive straight to the root of the problem.

How cool is that? Now we can feel more confident that problems will be noticed by our technical teams when they happen rather than waiting for an end user to complain.

Of course the tests above are fairly basic. They are just meant as an example of what sort of things can be done. Some of our teams have put together far more comprehensive sets of tests that validate things like ADFS tokens and SSL certificate validity.

Final Thoughts

There are a few things worth pointing out about the process above:

  1. I chose to use OVF to execute the tests but I could have just as easily used plain old Pester. If it makes more sense to you to use Pester, go right ahead.
  2. I used Microsoft OMS to centrally monitor the events, but I could just as easily have used Microsoft System Center Operations Manager (SCOM). There are many other alternatives as well. I chose OMS though because of the slick UI and mobile apps. Use what works for you.
  3. This guide is intended to show what sort of thing can be done. It isn’t intended to tell you what you must do or how you must do it. If there is something else that works better for you, then use it!

 

Although I’ve focused on the technology and methods here, if you take away one thing from this article I’d like it to be this:

Continuous Testing of your infrastructure is something that is really easy to implement and has so many benefits. It will allow you and your stakeholders to feel more confident that problems are not going unnoticed and allow them to sleep better. It will also ensure that when things do go wrong (and they always do) that the first people to notice are the people who can do something about it!

Happy infrastructure testing!

 


Install Docker on Windows Server 2016 using DSC

Windows Server 2016 is now GA and it contains some pretty exciting stuff. Chief among them for me is support for containers by way of Docker. So, one of the first things I did was start installing Windows Server 2016 VMs (Server Core and Nano Server naturally) and installing Docker on them so I could begin experimenting with Docker Swarms and other cool stuff.

Edit: If you’re looking for a DSC configuration for setting up Docker on a Windows 10 Anniversary Edition machine, see the Windows 10 AE section below.

At first I started using the standard manual instructions provided by Docker, but this doesn’t really suit any kind of automation or infrastructure as code methodology. This of course was a good job for PowerShell Desired State Configuration (DSC).

So, what I did was put together a basic DSC config that I could load into a DSC Pull Server and build out lots of Docker nodes quickly and easily. This worked really nicely for me to build out lots of Windows Server 2016 Container hosts in very short order:

[Screenshot: Docker being installed onto a node via DSC]

If you don’t have a DSC Pull server or you just want a simple script that you can use to quickly configure a Windows Server 2016 (Core or Core with GUI only) then read on.

Note: This script and process is really just an example of how you can configure Docker Container hosts with DSC. In a real production environment you would probably want to use a DSC Pull Server.

Get it Done

Edit: After a suggestion from Michael Friis (@friism) I have uploaded the script to the PowerShell Gallery and provided a simplified method of installation. The steps could be simplified even further into a single line, but I’ve kept them separate to show the process.

Using PowerShell Gallery

On a Windows Server 2016 Server Core or Windows Server 2016 Server Core with GUI server:

  1. Log on as a user with Local Administrator privileges.
  2. Start an Administrator PowerShell console – if you’re using Server Core just enter PowerShell at the command prompt:
    [Screenshot: starting PowerShell from the Server Core command prompt]
  3. Install the Install-DockerOnWS2016UsingDSC.ps1 script from the PowerShell Gallery using the first command shown below this list.

    You may be asked to confirm installation; answer yes to any confirmation prompts.
    [Screenshot: installing the script from the PowerShell Gallery]
  4. Run the Install-DockerOnWS2016UsingDSC.ps1 script using the second command shown below this list.

    [Screenshot: running the script installed from the PowerShell Gallery]
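The two commands referred to above look something like this:

# Step 3 - install the script from the PowerShell Gallery
Install-Script -Name Install-DockerOnWS2016UsingDSC

# Step 4 - run the installed script (use the full path to the installed
# scripts folder if it hasn't been added to your path)
Install-DockerOnWS2016UsingDSC.ps1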

The script will run and reboot the server once. Not long after the reboot the Docker service will start up and you can get working with containers:

[Screenshot: Docker service details after the reboot]

You’re now ready to start working with Containers.

The Older Method (without PowerShell Gallery)

On a Windows Server 2016 Server Core or Windows Server 2016 Server Core with GUI server:

  1. Log on as a user with Local Administrator privileges.
  2. Start an Administrator PowerShell console – if you’re using Server Core just enter PowerShell at the command prompt:
    [Screenshot: starting PowerShell from the Server Core command prompt]
  3. Install the DSC Resources required for the DSC configuration by executing the commands sketched below this list.

    You may be asked to confirm installation of these modules; answer yes to any confirmations.
    [Screenshot: installing the required DSC Resource modules]
  4. Download the Docker installation DSC script.

    [Screenshot: downloading the DSC script]
  5. Run the Docker installation DSC script.

    [Screenshot: running the DSC script]
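A sketch of the commands for steps 3 to 5 is below. The exact DSC Resource modules required and the download URL depend on the version of the script, so treat both as placeholders:

# Step 3 - install the DSC Resource modules the configuration needs
# (xPSDesiredStateConfiguration is shown here as an assumption)
Install-Module -Name xPSDesiredStateConfiguration

# Step 4 - download the DSC script (replace the URL with the script's actual location)
Invoke-WebRequest -Uri '<URL of Install-DockerOnWS2016UsingDSC.ps1>' `
    -OutFile "$ENV:TEMP\Install-DockerOnWS2016UsingDSC.ps1"

# Step 5 - run the script
& "$ENV:TEMP\Install-DockerOnWS2016UsingDSC.ps1"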

The script will run and reboot the server once. Not long after the reboot the Docker service will start up and you can get working with containers:

[Screenshot: Docker service details after the reboot]

You’re now ready to start working with Containers.

What the Script Does

In case you’re interested in what the script actually contains, here are the components:

  1. Configuration ContainerHostDsc – the DSC configuration that configures the node as a Docker Container host.
  2. Configuration ConfigureLCM – the LCM meta configuration that sets Push Mode, allows the LCM to reboot the node if required and configures ApplyAndAutoCorrect mode.
  3. ConfigData – a ConfigData object that contains the list of node names to apply this DSC Configuration to – in this case LocalHost.
  4. ConfigureLCM – the call to the Configuration ConfigureLCM to compile the LCM meta configuration MOF file.
  5. Set-DscLocalConfigurationManager – this applies the compiled LCM meta configuration MOF file to LocalHost to configure the LCM.
  6. ContainerHostDsc – the call to the Configuration ContainerHostDsc to compile the DSC MOF file.
  7. Start-DSCConfiguration – this command starts the LCM applying the DSC MOF file produced by the ContainerHostDsc.
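Put together, the overall shape of the script is roughly this (a skeleton only – the resources inside ContainerHostDsc are illustrative, not the complete configuration):

Configuration ContainerHostDsc {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $AllNodes.NodeName {
        # The Containers feature is the core requirement; the full script
        # also takes care of installing the Docker engine itself.
        WindowsFeature ContainersFeature {
            Name   = 'Containers'
            Ensure = 'Present'
        }
    }
}

[DSCLocalConfigurationManager()]
Configuration ConfigureLCM {
    Node $AllNodes.NodeName {
        Settings {
            RefreshMode        = 'Push'
            RebootNodeIfNeeded = $true
            ConfigurationMode  = 'ApplyAndAutoCorrect'
        }
    }
}

$configData = @{
    AllNodes = @(
        @{ NodeName = 'LocalHost' }
    )
}

# Compile and apply the LCM meta configuration, then the node configuration
ConfigureLCM -ConfigurationData $configData -OutputPath "$ENV:TEMP\ConfigureLCM"
Set-DscLocalConfigurationManager -Path "$ENV:TEMP\ConfigureLCM" -Verbose
ContainerHostDsc -ConfigurationData $configData -OutputPath "$ENV:TEMP\ContainerHostDsc"
Start-DscConfiguration -Path "$ENV:TEMP\ContainerHostDsc" -Wait -Force -Verbose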

The complete script can be found here. Feel free to use this code in anyway that makes sense to you.

What About Windows 10 AE?

If you’re looking for a DSC configuration that does the same thing for Windows 10 Anniversary edition, Ben Gelens (@bgelens) has written an awesome DSC config that will do the trick. Check it out here.

 

Happy containering!

Easily Create a Hyper-V Windows Server 2016 AD & Nano Server Lab

Introduction

One of the PowerShell Modules I’ve been working on for the last year is called LabBuilder. The goal of this module is:

To automatically build a multiple machine Hyper-V Lab environment from an XML configuration file and other optional installation scripts.

What this essentially does is allow you to easily build Lab environments using a specification file. All you need to do is provide the Hyper-V environment and the Operating System disk ISO files that will be used to build the lab. This is great for getting a Lab environment spun up for testing or training purposes.

Note: Building a new Lab can take a little while, depending on the number of VMs in the Lab as well as the number of different Operating Systems used. For example, a Lab with 10 VMs could take an hour or two to spin up, depending on your hardware.

The LabBuilder module comes with a set of sample Labs that you can build “as is” or modify for your own purpose. There are samples for simple one or two machine Labs as well as more complex scenarios such as failover clusters and two tier PKI environments. Plus, if you’re feeling adventurous you can easily create your own LabBuilder configurations from scratch or by modifying an existing LabBuilder configuration.

In this article I’ll show how to use a configuration sample that will build a lab containing the following servers:

  • 1 x Windows Server 2016 RTM Domain Controller (with DNS)
  • 1 x Windows Server 2016 RTM DHCP Server
  • 1 x Windows Server 2016 RTM Certificate Authority Server
  • 1 x Windows Server 2016 RTM Edge Node (Routing and Remote Access server)
  • 8 x Windows Server 2016 RTM Nano Servers (not yet automatically Domain Joined – but I’m working on it).

This is a great environment for experimenting with both Windows Server 2016 as well as Nano Server.

So, let’s get started.

Requirements

To follow along with this guide your Lab host (the machine that will host your Lab) will need to have the following:

Be running Windows Server 2012 R2, Windows Server 2016 or Windows 10

I strongly recommend using Windows 10 Anniversary Edition.

If you are using Windows Server 2012 R2 you will need to install WMF 5.0 or above. Although WMF 4.0 should work, I haven’t tested it.

Have enough RAM, Disk and CPU available for your Lab

Running a lot of VMs at once can be fairly taxing on your hardware. For most Sample Labs I’d recommend at least a quad core CPU, 16 GB RAM and a fast SSD with at least 10 GB per VM free (although Nano Server VMs only require about 800 MB each).

The amount of disk used is minimized by using differencing disks, but Labs can still get pretty big.

Hyper-V Enabled

If you’re using Windows 10, see this guide.

If you’re using Windows Server 2012 R2 or Windows Server 2016, you probably already know how to do this, so I won’t cover this here.

Copies of any Windows install media that is used by the Lab

In our case this is just a copy of the Windows Server 2016 Evaluation ISO. You can download this ISO from here for free.

You can use non-evaluation ISOs instead if you have access to them, but at the time of writing this the Windows Server 2016 non-evaluation ISO wasn’t yet available on my MSDN subscription.

An Internet Connection

Most Labs use DSC to configure each VM once it has been provisioned, so the ability to download any required DSC Resources from the PowerShell Gallery is required. Some sample Labs also download MSI packages and other installers that will be deployed to the Lab Virtual Machines during installation – for example RSAT is often installed onto Windows 10 Lab machines automatically.

The Process

Step 1 – Install the Module

The first thing you’ll need to do is install the LabBuilder Module. Execute this PowerShell command at an Administrator PowerShell prompt:

[Screenshot: installing the LabBuilder module]
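The command is:

Install-Module -Name LabBuilder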

Note: If you have an older version of LabBuilder installed, I’d recommend you update it to at least 0.8.3.1081 because this was the version I was using to write this guide.

Step 2 – Create the ISOs and VHDs Folders

Most labs are built using Windows Install media contained in ISO files. These are converted to VHD files that are then used by one or more Labs. We need a location to store these files.

By default all sample Labs expect these folders to be D:\ISOs and D:\VHDs. If you don’t have a D: Drive on your computer, you’ll need to adjust the LabBuilder configuration file in Step 4.

Execute the following PowerShell commands at an Administrator PowerShell prompt:

[Screenshot: creating the ISOs and VHDs folders]
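For example:

New-Item -Path 'D:\ISOs' -ItemType Directory
New-Item -Path 'D:\VHDs' -ItemType Directory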

Step 3 – Create a Folder to Contain the Lab

When building a Lab with LabBuilder it will create all VMs, VHDs and other related files in a single folder.

For all sample LabBuilder configurations, this folder defaults to a folder in C:\vm. For the sample Lab we’re building in this guide it will install the Lab into c:\vm\NANOTEST.COM. This can be changed by editing the configuration in Step 4.

Note: Make sure you have enough space on your chosen drive to store the Lab. 10GB per VM is a good rough guide to the amount of space required (although it usually works out as a lot less because of the use of differencing disks).

Execute the following PowerShell commands at an Administrator PowerShell prompt:
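For example, to create the default Lab folder:

New-Item -Path 'C:\vm' -ItemType Directory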

Step 4 – Customize the Sample Lab file

We’re going to build the Lab using the sample Lab found in the samples folder in the LabBuilder module folder. The sample we’re using is called Sample_WS2016_NanoDomain.xml. I’d suggest editing this file in an editor like Notepad++.

If you changed the paths in Step 2 or Step 3 then you’ll need to change the paths shown in this screenshot:

[Screenshot: the folder paths in the Sample_WS2016_NanoDomain.xml settings]

You may also change other items in the Settings section, but be aware that some changes (such as changing the domain name) will also need to be changed elsewhere in the file.

If you already have an External Switch configured in Hyper-V that you’d like to use for this Lab to communicate externally, then you should set the name of the switch here:

[Screenshot: the External Switch setting in the configuration file]

If you don’t already have an External Switch defined in Hyper-V then one called General Purpose External will be created for you. It will use the first Network Adapter (physical or team) that is not already assigned to an External Switch. You can control this behavior in the LabBuilder configuration file but it is beyond the scope of this guide.

Save the Sample_WS2016_NanoDomain.xml file once you’ve finished changing it.

Step 5 – Copy the Windows Media ISOs

Now that the ISOs folder is ready, you will need to copy the Windows Install media ISO files into it. In this case we need to copy in the ISO for Windows Server 2016 (an evaluation copy can be downloaded from here).

The ISO file must be named:

14393.0.160715-1616.RS1_RELEASE_SERVER_EVAL_X64FRE_EN-US.ISO

If it is named anything else then you will either need to rename it or go back to Step 4 and adjust the sample Lab configuration file.

[Screenshot: the ISOs folder contents]

Step 6 – Build the Lab

We’re now ready to build the lab from the sample configuration.

Execute the following PowerShell commands at an Administrator PowerShell prompt:
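A sketch of the commands (assuming a single installed version of LabBuilder):

$sampleConfigPath = Join-Path `
    -Path (Split-Path -Path (Get-Module -Name LabBuilder -ListAvailable).Path -Parent) `
    -ChildPath 'Samples\Sample_WS2016_NanoDomain.xml'
Install-Lab -ConfigPath $sampleConfigPath -Verbose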

This will begin the task of building out your Lab. The commands just determine the location of your LabBuilder sample file and then call the Install-Lab cmdlet. I could have specified the path to the sample file manually, and you can if you prefer.

[Screenshot: LabBuilder building the Lab]

So sit back and grab a tea or coffee (or beer), because this will take a little while.

Note: The individual virtual machines are configured using PowerShell DSC after they are first started up. This means that it might actually take some time for things like domain joins and other post configuration tasks to complete. So if you find a Lab VM hasn’t yet joined the domain, it is most likely that the DSC configuration is still being applied.

Using the Lab

Once you’ve built the Lab, you can log into the VMs like any other Hyper-V VM. Just double click the Virtual Machine and enter your login details:
[Screenshot: the Lab VMs in Hyper-V Manager]

[Screenshot: logging into the Lab domain]

For the sample Lab the Domain Administrator account password is configured as P@ssword!1. This is set in the Lab Sample configuration and you can change it if you like.

Note: Nano Server is not designed to have an interactive GUI. You interact with Nano Server via PowerShell Remoting. You’ll want to have a basic knowledge of PowerShell and PowerShell Remoting before attempting to administer Nano Servers.

Shutting Down the Lab

Once the Lab has been completely built, you can shut it down with the Stop-Lab command. You need to pass the path to the Lab Configuration file to shut it down:
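For example, reusing the configuration path from Step 6:

Stop-Lab -ConfigPath $sampleConfigPath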

The Virtual Machines in the Lab will be shut down in an order defined in the Lab Configuration file. This will ensure that the VMs are shut down in the correct order (e.g. shut down the domain controllers last).

Starting the Lab Up

If you need to start up a previously created Lab, use the Start-Lab command. You will again need to provide the path to the Lab Configuration file of the Lab you want to start up:
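For example:

Start-Lab -ConfigPath $sampleConfigPath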

The Virtual Machines in the Lab will be started up in an order defined in the Lab Configuration file. This will ensure that the VMs are started up in the correct order.

Uninstalling the Lab

If you want to completely remove a Lab, use the Uninstall-Lab command. You will again need to provide the path to the Lab Configuration file of the Lab you want to uninstall:
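For example:

Uninstall-Lab -ConfigPath $sampleConfigPath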

Note: You will be asked to confirm the removals.

Wrapping Up

This article has hopefully given you a basic understanding of how to use LabBuilder to stand up a Hyper-V Lab in relatively short order and without a lot of commands and clicks. This project is still in Beta and so there may be bugs as well as some incomplete features. If you want to raise an issue with this project (or even submit a PR), head on over to the GitHub repository.

Export a Base-64 x.509 Cert using PowerShell on Windows 7

Exporting a Base-64 Encoded x.509 certificate using PowerShell is trivial if you have the Export-Certificate cmdlet available. However, many of the nodes I work with are Windows 7 which unfortunately doesn’t include these cmdlets. Therefore I needed an alternate method of exporting these Base-64 encoded x.509 certificates from these nodes.

So I came up with this little snippet of code:
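A sketch of the approach is below – it reads the certificate’s RawData, Base-64 encodes it and wraps it in the standard PEM markers (the thumbprint and output path are placeholders for your own):

# Get the certificate to export (replace the thumbprint with your own)
$cert = Get-Item -Path 'Cert:\LocalMachine\My\<thumbprint>'

# Base-64 encode the DER bytes and wrap them in the BEGIN/END markers
$base64 = [System.Convert]::ToBase64String($cert.RawData, [System.Base64FormattingOptions]::InsertLineBreaks)
Set-Content -Path 'C:\Temp\exported.cer' `
    -Value "-----BEGIN CERTIFICATE-----`r`n$base64`r`n-----END CERTIFICATE-----"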

Hope someone finds it useful.


Tips for HQRM DSC Resources

I’ve spent a fair amount of time recently working on getting some of my DSC Resources (SystemLocaleDsc, WSManDsc, iSCSIDsc and FSRMDsc) accepted into the Microsoft DSC Community Resource Kit. Some are nearly there (SystemLocaleDsc and WSManDsc), whereas others have a way to go yet.

I’ve had one resource already accepted (xDFS) into the DSC Community Resource kit, but this was before the High Quality Resource Module (HQRM) guidelines became available. The HQRM guidelines are a set of standards that DSC modules must meet and maintain to be considered a High Quality Resource Module. Once they meet these requirements they may be eligible to have the ‘x’ moniker removed with ‘Dsc‘ being added to the name.

More information: If you want to read a bit more about the HQRM standards, you can find the HQRM Guidelines here.

Any modules being submitted for inclusion into the DSC Community Resource kit will be expected to meet the HQRM standards. The process of acceptance requires three reviewers from the Microsoft DSC team to review the module.

I thought it might be helpful to anyone else who wants to submit a DSC Resource into the DSC Community Resource kit if I shared the list of issues the reviewers found with my submissions. This might allow you to fix up your modules before the review process – which will help the reviewers out (they hate having to be critical of your code as much as you do). It also enables the submission process to go much faster.

More information: If you want to read more about the submission process, you can find the documentation here.

I’ll keep this post updated with any new issues the reviewers pick up. Feel free to ask for clarifications on the issues.

So here is my list of what I have done wrong (so far):

Missing Get-Help Documentation

Every function (public or private) within the DSC resource module must contain a standard help block containing at least a .SYNOPSIS and .PARAMETER block:

This will get rejected:

[Screenshot: a function without a help block]

This is good:

[Screenshot: a function with a compliant help block]
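For example, a function with a help block along these lines would satisfy this requirement (a sketch, not taken from any particular module):

<#
    .SYNOPSIS
        Returns the current state of the WS-Man Listener.

    .PARAMETER Transport
        The transport type of the WS-Man Listener.
#>
function Get-TargetResource
{
    [CmdletBinding()]
    [OutputType([System.Collections.Hashtable])]
    param
    (
        [Parameter(Mandatory = $true)]
        [ValidateSet('HTTP', 'HTTPS')]
        [System.String]
        $Transport
    )

    # Implementation omitted from this sketch
}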

Examples Missing Explanation

All examples in the Examples folder and the Readme.md must contain an explanation of what the example will do.

This is bad:

[Screenshot: an example without an explanation]

This is good:

[Screenshot: an example with an explanation]

Old or Incorrect Unit/Integration Test Headers

There is a standard method of unit and integration testing DSC Resources. Your DSC resources should use these methods wherever possible. Any tests should be based on the latest unit test templates and integration test templates, so make sure your tests follow the latest practices and contain the latest header.

This is probably the hardest thing to get right if you’re not paying close attention to the current DSC community best practices around testing. So feel free to ask me for help.

This is bad:

[Screenshot: a test file with an outdated header]

This is good:

[Screenshot: a test file based on the latest template]

Incorrect Capitalization of Local Variables

Local variables must start with a lower case letter. I needed to correct this on several occasions.

Note: this is for local variables. Parameter names should start with Uppercase.

This is bad:

[Screenshot: a local variable starting with an upper case letter]

This is good:

[Screenshot: a local variable starting with a lower case letter]

Spaces around = in Localization Files

In any localization files you should make sure there is a space on either side of the = sign. This greatly improves message readability.

This is bad:

[Screenshot: localization strings without spaces around the = sign]

This is good:

[Screenshot: localization strings with spaces around the = sign]

Missing Code of Conduct in Readme.md

All modules that are part of the DSC Resource Kit must contain this message in the Readme.md:

This project has adopted the Microsoft Open Source Code of Conduct.
For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

This is bad:

[Screenshot: a Readme.md without the Code of Conduct message]

This is good:

[Screenshot: a Readme.md containing the Code of Conduct message]

Missing Localization File Indent

All strings in localization files should be indented.

This is bad:

[Screenshot: localization strings without indentation]

This is good:

[Screenshot: indented localization strings]
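For example, localization file entries meeting both the spacing and indentation guidelines might look like this (the message names are illustrative):

ConvertFrom-StringData -StringData @'
    GettingListenerMessage = Getting the WS-Man Listener.
    SettingListenerMessage = Setting the WS-Man Listener.
'@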

 

Final Words

There were some other issues raised which I will also document, however I am still in discussion with the DSC team over the best methods to use to solve the issues (specifically the use of InModuleScope in unit tests).

The main thing you can do to help speed this process up and reduce the load on the reviewers, however, is to implement all the best practices and guidelines listed.

I hope this helps someone out there.


Failed to Start Docker Service on Windows 10 AE

So, pretty much the first thing I did when the Windows 10 Anniversary Edition was installed onto my primary development machine was to install the Windows Container Service and Docker on it.

I used the Windows Containers on Windows 10 Quick start guide to perform the installation. This is the same method I’d been using on my secondary development machine (running Insider Preview builds) since it was first available in build 14372.

Note: The Windows Containers on Windows 10 Quick Start guide doesn’t mention the Anniversary Edition specifically, but the method still works.

Unfortunately though, this time it didn’t work. When I attempted to start the Docker Service I received the error:

start-service : Failed to start service 'Docker Engine (docker)'.

[Screenshot: the start-service error]

So, after a bit of digging around I found the following error in the Windows Event Log in the Application logs:

[Screenshot: the Docker error in the Application Event Log]

Basically what this was telling me was that the Docker Daemon couldn’t create the new virtual network adapter that it needed – because it already existed. A quick run of Get-NetAdapter confirmed that the docker adapter “vEthernet (HNS Internal)” already existed:

[Screenshot: Get-NetAdapter showing the existing vEthernet (HNS Internal) adapter]
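To run the same check yourself:

Get-NetAdapter -Name 'vEthernet (HNS Internal)'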

So what I needed to do was uninstall this adapter so that the Docker Service could recreate it. I’m not actually aware of a command line method of doing this (except for using DevCon) so I had to resort to using Device Manager:

[Screenshot: uninstalling the adapter in Device Manager]

You’ll need to use the output of Get-NetAdapter to find the right adapter to uninstall. Once it has been uninstalled you should be able to start the service again:

[Screenshot: the Docker service starting successfully]

This time the service should start successfully. A quick call to docker ps shows that the container service is indeed working. So now I can get onto the process of pulling down the base container images.

Hopefully if anyone else runs into this problem in Windows 10 AE this will help them resolve it.



Allow PowerShell to Traverse a Secure Proxy

One of the first things I like to do when setting up my development machine in a new environment is to update PowerShell help with the Update-Help cmdlet. After that, I go and download a slew of modules from the PowerShell Gallery.

However, recently I needed to set up my development machine in an environment that is behind an internet proxy that requires authentication. This meant that a lot of PowerShell cmdlets can’t be used because they don’t have support for traversing a proxy – or at least, not one that requires authentication. Take the aforementioned Update-Help and Install-Module cmdlets – I just couldn’t do without these.

So I set about trying to find a way around this. After lots of googling and trial and error (and also getting my Active Directory account locked out on more than one occasion) I came up with a solution.

Basically, it requires using the NETSH command to configure the proxy settings and then configuring the web client with my proxy credentials (which were AD integrated):
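The commands I used looked something like this (the proxy address is a placeholder for your own):

# Configure the machine WinHTTP proxy setting
& netsh winhttp set proxy proxy-server="proxy.contoso.com:8080"

# Use the current AD credentials for the default proxy used by web requests
[System.Net.WebRequest]::DefaultWebProxy = New-Object -TypeName System.Net.WebProxy -ArgumentList 'http://proxy.contoso.com:8080'
[System.Net.WebRequest]::DefaultWebProxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials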

The code I needed to traverse the proxy could then be executed. Once it has completed the task using the proxy I would then reset it back to the default state (using settings from internet explorer):
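Resetting is a single command:

# Reset the WinHTTP proxy back to the Internet Explorer settings
& netsh winhttp import proxy source=ie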

After using this a bit I thought it would be great to turn it into a function that I could just call, passing a script block that I wanted to be able to traverse the proxy with. So I came up with this:
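A sketch of the function, using the same approach as above (the default proxy URL is a placeholder):

function Use-Proxy {
    [CmdletBinding()]
    param (
        [System.String]
        $ProxyUri = 'http://proxy.contoso.com:8080',

        [System.Management.Automation.PSCredential]
        $Credential,

        [Parameter(Mandatory = $true)]
        [System.Management.Automation.ScriptBlock]
        $ScriptBlock
    )

    # Fall back to a global credential variable, then to prompting
    if (-not $Credential) {
        if ($global:ProxyCredential) {
            $Credential = $global:ProxyCredential
        } else {
            $Credential = Get-Credential -Message 'Enter proxy credentials'
        }
    }

    try {
        & netsh winhttp set proxy proxy-server="$($ProxyUri -replace '^https?://','')"
        [System.Net.WebRequest]::DefaultWebProxy = New-Object -TypeName System.Net.WebProxy -ArgumentList $ProxyUri
        [System.Net.WebRequest]::DefaultWebProxy.Credentials = $Credential.GetNetworkCredential()
        & $ScriptBlock
    } finally {
        # Always reset the proxy, even if the script block throws
        & netsh winhttp import proxy source=ie
    }
}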

To use this script, simply save it as a PS1 file (e.g. Use-Proxy.ps1), customizing the default proxy URL if you like and then dot source the file. Once it has been dot sourced it can be called, optionally passing the URL of the proxy server and credentials to use to authenticate to it:
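For example:

. .\Use-Proxy.ps1
Use-Proxy -ScriptBlock { Update-Help }
Use-Proxy -ProxyUri 'http://proxy.contoso.com:8080' -Credential (Get-Credential) -ScriptBlock {
    Install-Module -Name Pester
}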

If you don’t pass any credentials, you will be prompted to enter them. I also added some code into this function so that you can specify a global variable containing the credentials to use to traverse the proxy. This can save on lots of typing, but might be frowned upon by your security team.

Finally, I also added the proxy reset code into the finally block of a try/catch to ensure that if the code in the script block throws an error the proxy will still be reset. In my case I also loaded this function into a PowerShell module that can be distributed to other team members.

Happy proxy traversing!