Downloading GitHub .GitIgnore templates with PowerShell

This will be a relatively short post today to get me back into the blogging rhythm. Most of my time of late has been spent working on the DSC Resource Kit, adding code coverage reporting and new xCertificate features.

So, today’s post shows how you can use some simple PowerShell code to pull down the list of .gitignore templates from GitHub and then retrieve the one I wanted. There are lots of different ways I could have done this, but I decided to use the GitHub REST API.

First up, let’s get the list of available .gitignore templates:
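Something along these lines does the trick – a minimal sketch rather than my original snippet, using GitHub’s public gitignore templates endpoint:

# Query the GitHub REST API for the list of available .gitignore templates
$templateList = Invoke-RestMethod -Uri 'https://api.github.com/gitignore/templates'

# Display the available template names
$templateList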

This will get the list of .gitignore templates into an array variable called $templateList. I could then display the list to a user:

[Screenshot: the list of .gitignore templates returned by the GitHub API]

Now, all I need to do is download the named .gitignore template to a folder:
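A sketch of that download – the Accept header asks the API for the raw template body rather than JSON, and the template name can be changed to suit:

# Download the VisualStudio template and save it as .gitignore in the current folder
Invoke-WebRequest `
    -Uri 'https://api.github.com/gitignore/templates/VisualStudio' `
    -Headers @{ Accept = 'application/vnd.github.v3.raw' } `
    -UseBasicParsing `
    -OutFile '.gitignore'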

This will download the VisualStudio .gitignore template and save it with the filename .gitignore in the current folder.

[Screenshot: downloading the VisualStudio .gitignore template]

I could have specified a different .gitignore template by changing VisualStudio in the URL to the name of any other template that appears in $templateList.

You might have noticed that I included the -UseBasicParsing parameter in the Invoke-WebRequest call. This is to ensure the cmdlet works on machines that don’t have Internet Explorer installed – e.g. Nano Server or Linux/OSX. I haven’t tried this on PowerShell running on Linux or OSX, but I can’t see any reason why it wouldn’t work on those operating systems.

The next step for this code might be to get it included as new cmdlets in Trevor Sullivan’s PSGitHub PowerShell module. You can download his module from the PowerShell Gallery if you’re not familiar with it.

Thanks for reading.

Using PFX Files in PowerShell

One of the things I’ve been working on lately is adding a new resource to the xCertificate DSC Resource module for exporting a certificate with (or without) the private key from the Windows Certificate Store as a .CER or .PFX file. The very insightful (and fellow DSC Resource maintainer) @JohanLjunggren has been giving some really great direction on this new resource.

One of these suggested features was to be able to identify if the certificate chain within a PFX file is different to the chain in the Windows Certificate Store. This is because a PFX file can contain not just a single certificate but the entire trust chain required by the certificate being exported.

Therefore, what we need to do is step through the certificates in the PFX and examine each one. It turns out this is pretty simple using the .NET class:

System.Security.Cryptography.X509Certificates.X509Certificate2Collection

So, to read the PFX into a variable called $PFX, all we need to do is this:
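A minimal sketch – the path and password values are hypothetical, and the key storage flag is just a reasonable default:

# Path to the PFX file and the plain-text password used when it was exported (both hypothetical)
$PFXPath = 'C:\Certs\Export.pfx'
$PFXPassword = 'ThePfxPassword'

# Load every certificate in the PFX into a collection
$PFX = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2Collection
$PFX.Import($PFXPath, $PFXPassword, 'DefaultKeySet')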

The $PFXPath variable is set to the path to the PFX file we’re going to read in. The $PFXPassword is a string (not SecureString) containing the password used to protect the PFX file when it was exported.

We now have all the certificates loaded into an array in the $PFX variable and can work with them like any other array:

[Screenshot: loading the PFX file into the $PFX variable]

Now that we have the $PFX array, we can identify the thumbprint of the certificate that was actually exported (as opposed to the certificates in the trust chain) by looking at the last array item:
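For example:

# The issued certificate is the last item in the collection; the items before it are the trust chain
$PFX[$PFX.Count - 1] | Format-List -Property *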

I’m piping the output to Format-List so we can see the entire X.509 certificate details.

[Screenshot: the details of the issued certificate]

In the case of the DSC Resource we’ll compare the certificate thumbprint of the last certificate in the PFX with the thumbprint of the certificate in the Windows Certificate Store that we want to export. If they’re different we will then perform another export using the Export-PfxCertificate cmdlet.

Protip: You can actually verify that the certificate and the entire trust chain are valid and not expired by calling the Verify method on the last certificate:
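Which looks something like this:

# Returns $true only if the certificate and its entire chain are trusted and not expired
$PFX[$PFX.Count - 1].Verify()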

[Screenshot: calling Verify on the issued certificate]

In the case above, the certificate I exported was actually invalid (it had expired):

[Screenshot: the expired certificate]

So we could easily use the Verify method to test a certificate’s validity before we import it into the Windows Certificate Store. But beware, the Verify method will also check that the certificate chain is trusted. To be trusted, the entire chain must have been imported into the Windows Certificate Store in the appropriate stores (e.g. the Trusted Root CA/Intermediate CA stores).

So, finally, this gives us the code required to implement the xCertificateExport resource in the DSC Resource Kit. We can now compare the certificates in a PFX file to ensure that they are the same as the certificates that have already been exported.

This information is not something that you might use every day, but hopefully it’s information that someone might find useful. So thank you for taking the time to read this.

Test Website SSL Certificates Continuously with PowerShell and Pester

One of the most common problems that our teams deal with is ensuring that SSL certificates are working correctly. We’ve all had that urgent call telling us that the web site is down or some key API or authentication function is offline – only to find out it was caused by an expired certificate.

An easy way of preventing this situation would have been to set up a task that continuously tests your SSL endpoints (internal and external web apps and sites, REST APIs etc.) and warns us if:

  • The certificate is about to expire (within x days).
  • The SSL endpoint doesn’t support safe SSL protocols (e.g. TLS 1.2).
  • The certificate isn’t using SHA-256.

This seemed like a good task for Pester (or Operation Validation Framework). So, after a bit of digging around I found this awesome blog post from Chris Duck showing how to retrieve the certificate and SSL protocol information from an SSL endpoint using PowerShell.

Chris’ post contained a PowerShell function called Test-SslProtocol that does all of the hard work of connecting to an endpoint and checking the certificate and each SSL/TLS protocol.
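I won’t reproduce Chris’ function verbatim here; the following is a condensed sketch of the approach it takes (open a TcpClient, attempt an SslStream handshake for each protocol, and record the certificate details). The exact property names on the output object are assumptions that the tests further down rely on:

function Test-SslProtocol
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true)]
        [String] $ComputerName,

        [Int] $Port = 443
    )

    $protocolStatus = [Ordered] @{
        ComputerName = $ComputerName
        Port         = $Port
    }

    foreach ($protocol in 'Ssl2', 'Ssl3', 'Tls', 'Tls11', 'Tls12')
    {
        $tcpClient = New-Object -TypeName System.Net.Sockets.TcpClient
        $tcpClient.Connect($ComputerName, $Port)
        $sslStream = New-Object -TypeName System.Net.Security.SslStream -ArgumentList $tcpClient.GetStream()

        try
        {
            # A successful handshake means the endpoint supports this protocol
            $sslStream.AuthenticateAsClient($ComputerName, $null, $protocol, $false)
            $certificate = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $sslStream.RemoteCertificate
            $protocolStatus['Certificate'] = $certificate
            $protocolStatus['SignatureAlgorithm'] = $certificate.SignatureAlgorithm.FriendlyName
            $protocolStatus[$protocol] = $true
        }
        catch
        {
            $protocolStatus[$protocol] = $false
        }
        finally
        {
            $sslStream.Dispose()
            $tcpClient.Dispose()
        }
    }

    [PSCustomObject] $protocolStatus
}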

So that was the hard part done, all I needed was to add this function to some Pester tests.

Note: If you are running these tests on an operating system older than Windows 10 or Windows Server 2016 then you will need to install the PowerShell Pester module by running this command in an Administrator PowerShell console:

Install-Module -Name Pester

So after a little bit of tinkering I ended up with a set of tests that I combined into the same file as Chris’ function from earlier. I called the file SSL.tests.ps1. I used the file extension .tests.ps1 because that is the file extension Pester looks for when it runs.

The tests are located at the bottom of the file below the Test-SslProtocol function.
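Here is roughly what those tests look like – a reconstruction rather than the exact file. The endpoint list, the 14-day threshold and the property names (Tls12, SignatureAlgorithm, Certificate) follow the Test-SslProtocol sketch above:

$WebsitesToTest = @('google.com', 'bing.com', 'yahoo.com')
$MinimumCertAgeDays = 14

Describe 'SSL endpoint certificates' {
    foreach ($site in $WebsitesToTest)
    {
        Context "$site" {
            $result = Test-SslProtocol -ComputerName $site -Port 443

            It 'should support TLS 1.2' {
                $result.Tls12 | Should Be $true
            }

            It 'should use a SHA-256 signature algorithm' {
                $result.SignatureAlgorithm | Should Be 'sha256RSA'
            }

            It "should not expire within $MinimumCertAgeDays days" {
                ($result.Certificate.NotAfter -gt (Get-Date).AddDays($MinimumCertAgeDays)) | Should Be $true
            }
        }
    }
}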

So, now to test these SSL endpoints all I need to do is run the following in a PowerShell console, with the current folder set to the folder containing my SSL.tests.ps1 file:

cd C:\SSLTests\
Invoke-Pester

This is the result:

[Screenshot: Pester output from the SSL endpoint tests]

This shows that the SSL endpoint certificates used by google.com, bing.com and yahoo.com are all valid SHA-256 certificates and aren’t going to expire within the next 14 days.

All I would then need to do is put this in a task to run every hour or so and perform some task when the tests fail:
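A minimal sketch of such a wrapper script is below (the test file path is an assumption):

# Run the SSL tests and capture the results
$results = Invoke-Pester -Script 'C:\SSLTests\SSL.tests.ps1' -PassThru -Quiet

if ($results.FailedCount -gt 0)
{
    # Take whatever action suits you here - e.g. write an event to the Windows Event Log
    Write-Warning -Message "$($results.FailedCount) SSL endpoint test(s) failed."
}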

At this point you will still need to use some mechanism to notify someone when the tests fail. One method could be to write an event into the Windows Event Log and then use Microsoft Operations Management Suite (or SCOM) to monitor for this event and send an e-mail or other alert to the appropriate administrators.

For an example showing how to use OMS to monitor custom events created by failed Pester and OVF tests, see my previous article here.

Potential Improvements

There are a number of ways you could go about improving this process, which our teams have in fact implemented. If you’re considering implementing this process then you might want to also consider them:

  1. Put the Test-SslProtocol function into a PowerShell module that you can share easily throughout your organization.
  2. Put your tests into source control and have the task clone the tests directly from source control every time they are run – this allows tests to be stored centrally and can be change tracked.
  3. Parameterize the tests so that you don’t have to hard code the endpoints to test in the script file. Parameters can be passed into Pester tests fairly easily.
  4. Use something like Jenkins, SCOM or Splunk to run the tests continuously.
  5. Run the tests in an Azure Automation account in the Cloud.

Really, the options for implementing this methodology are nearly limitless. You can engineer a solution that will work for you and your teams, using whatever tools are at your disposal.

At the end of the day, the goal here should be:

  • Reduce the risk that your internal or external applications or websites are using bad certificates.
  • Reduce the risk that an application or website will be deployed without a valid certificate (write infrastructure tests before you deploy your infrastructure – TDD for operations).
  • Reduce the risk you’ll get woken up in the middle of the night with an expired certificate.

So, in this holiday season, I hope this post helps you ensure your certificates won’t expire in the next two weeks and you won’t get called into fix a certificate problem when you should be lying on a beach in the sun (in the southern hemisphere anyway).

Have a good one!

Continuously Testing your Infrastructure with OVF and Microsoft Operations Management Suite

Introduction

One of the cool new features in Windows Server 2016 is Operation Validation Framework. Operation Validation Framework (OVF) is an (open source) PowerShell module that contains:

A set of tools for executing validation of the operation of a system. It provides a way to organize and execute Pester tests which are written to validate operation (rather than limited feature tests)

One of the things I’ve been using OVF for is to continuously test parts of my infrastructure. Any time a failure occurs an error event is written to the Event Log. I then have Microsoft Operations Management Suite set up to monitor my Event Log for any errors and then alert me (by e-mail) if they occur.

Note: OVF tests are just Pester tests, so if you’ve ever written a Pester test then you’ll have no trouble at all with OVF. If you haven’t written a Pester test before, here is a great introduction. If you’re going to just learn one thing about PowerShell this year, I’d definitely suggest Pester.

The great thing about OVF is that it can test anything that PowerShell can get information about – which is practically anything. For some great examples showing how to use OVF to test your infrastructure, see this article or this article by the insightful Irwin Strachan.

In this article I’m going to show you how to set up a basic set of OVF tests to continuously test some DNS components on a server, write any failures to the Event Log and then configure Microsoft OMS to monitor the Event Log for OVF failures and alert you if something breaks.

I am calling my tests ValidateDNS, which is reflected in the files and the Event Log Source for the events that are created, but you can call these tests whatever you like. I’m also going to create my tests and related files as a PowerShell module with the same name (ValidateDNS). You don’t have to do this – you could make a much simpler process – but for me it made sense because I could then publish my set of tests to my own private PowerShell Gallery or to a Sonatype Nexus OSS server.

I am also going to assume you’ve got a Microsoft Operations Management Suite account all set up. If you don’t have an OMS account, you can get a free trial one here. You should also have the OMS Windows Agent installed onto the machine that will execute your OVF tests.

Let’s Do This!

Step 1 – Installing the OperationValidation Module

The OperationValidation PowerShell module comes in-box with Windows Server 2016 (and Windows 10 AE), but must be downloaded for earlier operating systems.

To download and install the OperationValidation PowerShell module on earlier operating systems enter the following cmdlet in an Administrator PowerShell console:
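The command is just the following (assuming the Gallery module name is OperationValidation, as the step title suggests):

Install-Module -Name OperationValidation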

[Screenshot: installing the OperationValidation module]

This will download the module from the PowerShell Gallery.

Note: to download modules from the PowerShell Gallery you’ll either need WMF 5.0 or PowerShell PackageManagement installed. I strongly recommend the former option if possible.

Step 2 – Create OVF Test Files

The next step is to create the OVF tests in a file. OVF tests work best if they are contained in a specific folder structure in the PowerShell Modules folder.

Note: you can create the OVF tests in a different location if that works for you, but it requires specifying the -TestFilePath parameter when calling Invoke-OperationValidation.

    1. Create a folder in your PowerShell Modules folder (C:\Program Files\WindowsPowerShell\Modules) with the name of your tests. I used ValidateDNS.
    2. In the ValidateDNS folder create the following folder structure:
        • Diagnostics\
          • Simple\
          • Comprehensive\

      [Screenshot: the ValidateDNS folder structure]

    3. In the Simple folder create a file called ValidateDNS.Simple.Tests.ps1 containing your tests (a sketch is shown below this list).
    4. Edit the tests so they validate the things you want to test for.
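This is not my exact test file, but a simple sketch of what ValidateDNS.Simple.Tests.ps1 might contain – the service name, forwarder address and zone name are illustrative only, so replace them with values from your environment:

Describe 'DNS Server Validation' {
    Context 'DNS Service' {
        It 'should be running' {
            (Get-Service -Name DNS).Status | Should Be 'Running'
        }
    }

    Context 'Forwarders' {
        It 'should include the expected forwarder' {
            # Hypothetical forwarder address - replace with your own
            ((Get-DnsServerForwarder).IPAddress.IPAddressToString -contains '8.8.8.8') | Should Be $true
        }
    }

    Context 'Zones' {
        It 'should host the contoso.com zone' {
            # Hypothetical zone name - replace with your own
            (Get-DnsServerZone -Name 'contoso.com').ZoneName | Should Be 'contoso.com'
        }
    }
}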

The OVF tests above just check some basic settings of a DNS server and so would normally be run on a Windows DNS server. As noted above, you could write tests for almost anything, including validating things on other systems. I have intentionally set up one of the tests to fail for demonstration purposes (a gold star for anyone who can tell which test will fail).

In a future article I’ll cover how to test components on remote machines so you can use a single central node to perform all your OVF testing.

Step 3 – Create a Module for Running the Tests

Although we could just run the tests as-is, the output would end up in the console, which is not what we want here. We want any failed tests to be written to the Application Event Log.

  1. Create a file called ValidateDNS.psm1 in the ValidateDNS folder created earlier.
  2. Add code along the lines of the sketch shown below this list to the ValidateDNS.psm1 file.
  3. Save the ValidateDNS.psm1 file.
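Again, a sketch rather than my exact module – it assumes the event source name ValidateDNS, and you should check the Get-OperationValidation/Invoke-OperationValidation parameters and the Result property values against the version of the OperationValidation module you have installed:

function Invoke-ValidateDNS
{
    [CmdletBinding()]
    param ()

    $eventSource = 'ValidateDNS'

    # Register the event source used to identify our failures in OMS (first run only)
    if (-not [System.Diagnostics.EventLog]::SourceExists($eventSource))
    {
        New-EventLog -LogName Application -Source $eventSource
    }

    # Run the OVF tests contained in this module
    $results = Get-OperationValidation -ModuleName ValidateDNS |
        Invoke-OperationValidation

    # Write an error event for every failed test
    foreach ($failure in ($results | Where-Object -Property Result -EQ 'Failed'))
    {
        Write-EventLog `
            -LogName Application `
            -Source $eventSource `
            -EntryType Error `
            -EventId 1000 `
            -Message ('OVF test failed: {0}' -f $failure.Name)
    }
}

Export-ModuleMember -Function Invoke-ValidateDNS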

The above file is a PowerShell module that makes available a single cmdlet called Invoke-ValidateDNS. We can now just run Invoke-ValidateDNS in a PowerShell console and the following tasks will be performed:

  • Create a new Event Source for the Application Event Log that we can use in OMS to identify any errors thrown by our tests.
  • Execute the OVF tests in ValidateDNS.Simple.Tests.ps1.
  • Add Error entries to the Application Event Log for each failed test.

Step 4 – Schedule the Script

In this step we will create a scheduled task to run the cmdlet we created in Step 3. You could use the Task Scheduler UI to do this, but this is a PowerShell blog after all, so here is a script you can run that will create the scheduled task:
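A sketch using the ScheduledTasks cmdlets – the task name and the 60-minute repetition interval match what I describe below:

# Prompt for the account the task will run under
$credential = Get-Credential -Message 'Account to run the ValidateDNS task as'

# Run Invoke-ValidateDNS in a hidden PowerShell window
$action = New-ScheduledTaskAction `
    -Execute 'powershell.exe' `
    -Argument '-WindowStyle Hidden -Command "Import-Module -Name ValidateDNS; Invoke-ValidateDNS"'

# Repeat the task every 60 minutes
$trigger = New-ScheduledTaskTrigger `
    -Once `
    -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 60) `
    -RepetitionDuration ([System.TimeSpan]::MaxValue)

Register-ScheduledTask `
    -TaskName 'Validate DNS' `
    -Action $action `
    -Trigger $trigger `
    -User $credential.UserName `
    -Password $credential.GetNetworkCredential().Password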

[Screenshot: creating the scheduled task with PowerShell]

You will be prompted for the account details to run the task under, so enter valid credentials for this machine that give the task the correct access to run the tests. E.g. if the tests need Local Administrator access to the machine to run correctly, then ensure the account assigned is a Local Administrator.

This will run the script every 60 minutes. You could adjust it easily to run more or less frequently if you want to. This is what the Task Scheduler UI will show:

[Screenshot: the scheduled task in the Task Scheduler UI]

Every time the tests run and a test failure occurs, an error event will appear in the Application Event Log:

[Screenshot: an error event written by a failed OVF test]

Now that we have any test failures appearing in the Event Log, we can move onto Microsoft Operations Management Suite.

Step 5 – Create a Log Search and Alert

As noted earlier, I’m assuming you have already added the computer running your OVF tests to your OMS account as a data source:

[Screenshot: the OMS agent connected as a data source]

What we need to do now is create and save a new Log Search that will select our OVF test failures. To do this:

  1. In OMS, click the Log Search button.
  2. In the Search box enter the following (adjust the Source= value if you used a different name for your tests in the earlier steps):
    (Type=Event) (EventLevelName=error) (Source=ValidateDNS)

    [Screenshot: the OMS Log Search query]

  3. You will now be shown all the events on all computers matching these criteria:
    [Screenshot: the matching events in OMS Log Search]
    From here you could further refine your search if you want to; for example, I could have added additional filters on Computer or EventId. But for me this is all I needed.
  4. Click Save to save the Log Search.
  5. In the Name enter ‘Validate DNS Events’ and in the Category enter ‘OVF’:
    [Screenshot: saving the Log Search]
  6. You can actually enter whatever works for you here.
  7. Click Save.
  8. Click the Alert button to add a new Alert Rule.
  9. Configure the Alert Rule as follows (customizing to suit you):
    [Screenshot: configuring the Alert Rule]
  10. Click Save to save the Alert.

You’re now done!

The DNS Admins will now receive an e-mail whenever any of the DNS validation tests fail:

[Screenshot: the alert e-mail received by the DNS Admins]

If you look down in the ParameterXml section you can even see which test failed, so the DNS Admins can dive straight to the root of the problem.

How cool is that? Now we can feel more confident that problems will be noticed by our technical teams when they happen rather than waiting for an end user to complain.

Of course the tests above are fairly basic. They are just meant as an example of what sort of things can be done. Some of our teams have put together far more comprehensive sets of tests that validate things like ADFS tokens and SSL certificate validity.

Final Thoughts

There are a few things worth pointing about the process above:

  1. I chose to use OVF to execute the tests but I could have just as easily used plain old Pester. If it makes more sense to you to use Pester, go right ahead.
  2. I used Microsoft OMS to centrally monitor the events, but I could just as easily have used Microsoft System Center Operations Manager (SCOM). There are many other alternatives as well. I chose OMS, though, because of the slick UI and mobile apps. Use what works for you.
  3. This guide is intended to show what sort of thing can be done. It isn’t intended to tell you what you must do or how you must do it. If there is something else that works better for you, then use it!

 

Although I’ve focused on the technology and methods here, if you take away one thing from this article I’d like it to be this:

Continuous testing of your infrastructure is something that is really easy to implement and has so many benefits. It will allow you and your stakeholders to feel more confident that problems are not going unnoticed and allow them to sleep better. It will also ensure that when things do go wrong (and they always do), the first people to notice are the people who can do something about it!

Happy infrastructure testing!

 

Export a Base-64 x.509 Cert using PowerShell on Windows 7

Exporting a Base-64 encoded X.509 certificate using PowerShell is trivial if you have the Export-Certificate cmdlet available. However, many of the nodes I work with are running Windows 7, which unfortunately doesn’t include this cmdlet. Therefore I needed an alternative method of exporting Base-64 encoded X.509 certificates from these nodes.

So I came up with this little snippet of code:
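Something like the following – the thumbprint and output path are hypothetical placeholders:

# Thumbprint of the certificate to export (hypothetical value)
$certThumbprint = '1234567890ABCDEF1234567890ABCDEF12345678'
$cert = Get-ChildItem -Path "Cert:\LocalMachine\My\$certThumbprint"

# Export the raw certificate bytes and wrap them in a Base-64 encoded PEM-style envelope
$base64 = [System.Convert]::ToBase64String($cert.Export('Cert'), 'InsertLineBreaks')
$content = @(
    '-----BEGIN CERTIFICATE-----'
    $base64
    '-----END CERTIFICATE-----'
)
Set-Content -Path 'C:\Temp\exported.cer' -Value $content -Encoding Ascii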

Hope someone finds it useful.

Tips for HQRM DSC Resources

I’ve spent a fair amount of time recently working on getting some of my DSC Resources (SystemLocaleDsc, WSManDsc, iSCSIDsc and FSRMDsc) accepted into the Microsoft DSC Community Resource Kit. Some are nearly there (SystemLocaleDsc and WSManDsc), whereas others have a way to go yet.

I’ve had one resource already accepted into the DSC Community Resource Kit (xDFS), but this was before the High Quality Resource Module (HQRM) guidelines became available. The HQRM guidelines are a set of standards that DSC modules must meet and maintain to be considered a High Quality Resource Module. Once they meet these requirements they may be eligible to have the ‘x’ moniker removed and ‘Dsc’ added to the name.

More information: If you want to read a bit more about the HQRM standards, you can find the HQRM Guidelines here.

Any modules being submitted for inclusion into the DSC Community Resource kit will be expected to meet the HQRM standards. The process of acceptance requires three reviewers from the Microsoft DSC team to review the module.

I thought it might be helpful to anyone else wanting to submit a DSC resource to the DSC Community Resource Kit if I shared the list of issues the reviewers found with my submissions. This might allow you to fix up your modules before the review process, which will help the reviewers out (they hate having to be critical of your code as much as you do) and enable the submission process to go much faster as well.

More information: If you want to read more about the submission process, you can find the documentation here.

I’ll keep this post updated with any new issues the reviewers pick up. Feel free to ask for clarifications on the issues.

So here is my list of what I have done wrong (so far):

Missing Get-Help Documentation

Every function (public or private) within the DSC resource module must contain a standard help block containing at least a .SYNOPSIS and .PARAMETER block:

This will get rejected:

[Screenshot: a function missing the comment-based help block]

This is good:

[Screenshot: a function with a complete comment-based help block]
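As an illustration (not taken from any particular resource), an acceptable comment-based help block looks something like this:

<#
    .SYNOPSIS
        Returns the current state of the exported certificate.

    .PARAMETER Path
        The path to the file the certificate should be exported to.

    .PARAMETER Thumbprint
        The thumbprint of the certificate to export.
#>
function Get-TargetResource
{
    [CmdletBinding()]
    [OutputType([System.Collections.Hashtable])]
    param
    (
        [Parameter(Mandatory = $true)]
        [System.String]
        $Path,

        [Parameter(Mandatory = $true)]
        [System.String]
        $Thumbprint
    )

    # ... function body ...
}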

Examples Missing Explanation

All examples in the Examples folder and the Readme.md must contain an explanation of what the example will do.

This is bad:

[Screenshot: an example without an explanation]

This is good:

[Screenshot: an example with an explanation]

Old or Incorrect Unit/Integration Test Headers

There is a standard method of unit and integration testing DSC resources. Your DSC resources should use these methods wherever possible. Any tests should be based on the latest unit test and integration test templates, so make sure your tests follow the latest practices and contain the latest header.

This is probably the hardest thing to get right if you’re not paying close attention to the current DSC community best practices around testing. So feel free to ask me for help.

This is bad:

[Screenshot: a test file with an outdated header]

This is good:

[Screenshot: a test file with the current header]

Incorrect Capitalization of Local Variables

Local variables must start with a lower case letter. I needed to correct this on several occasions.

Note: this is for local variables. Parameter names should start with an uppercase letter.

This is bad:

[Screenshot: a local variable starting with an uppercase letter]

This is good:

[Screenshot: a local variable starting with a lowercase letter]

Spaces around = in Localization Files

In any localization files you should make sure there is a space on either side of the = sign. This greatly improves message readability.

This is bad:

[Screenshot: localization strings without spaces around the = sign]

This is good:

[Screenshot: localization strings with spaces around the = sign]

Missing Code of Conduct in Readme.md

All modules that are part of the DSC Resource Kit must contain this message in the Readme.md:

This project has adopted the Microsoft Open Source Code of Conduct.
For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

This is bad:

[Screenshot: Readme.md without the Code of Conduct message]

This is good:

[Screenshot: Readme.md with the Code of Conduct message]

Missing Localization file indent

All strings in localization files should be indented.

This is bad:

[Screenshot: localization strings without indentation]

This is good:

[Screenshot: localization strings with indentation]

 

Final Words

There were some other issues raised which I will also document; however, I am still in discussion with the DSC team over the best methods to use to solve them (specifically the use of InModuleScope in unit tests).

The main thing you can do to speed this process up and reduce the load on the reviewers, however, is to implement all the best practices and guidelines listed.

I hope this helps someone out there.

Allow PowerShell to Traverse a Secure Proxy

One of the first things I like to do when setting up my development machine in a new environment is to update PowerShell help with the Update-Help cmdlet. After that, I will go and download a slew of modules from the PowerShell Gallery.

However, recently I needed to set up my development machine in an environment that is behind an internet proxy that requires authentication. This meant that a lot of PowerShell cmdlets couldn’t be used because they don’t have support for traversing a proxy – or at least, not one that requires authentication. Take the aforementioned Update-Help and Install-Module cmdlets – I just couldn’t do without these.

So I set about trying to find a way around this. After lots of googling and trial and error (and getting my Active Directory account locked out on more than one occasion), I came up with a solution.

Basically, it requires using the NETSH command to configure the proxy settings and then configuring the web client with my proxy credentials (which were AD integrated):
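A sketch of that setup – the proxy address is hypothetical, and here the current user’s AD credentials are supplied from the credential cache (you could equally prompt with Get-Credential):

# Hypothetical proxy address - replace with your own
$proxyUri = 'http://proxy.contoso.com:8080'

# Point the WinHTTP proxy at the proxy server
& netsh winhttp set proxy proxy-server="$proxyUri"

# Configure the .NET web client layer (used by Update-Help, Install-Module, etc.)
# to use the proxy with the current user's AD credentials
[System.Net.WebRequest]::DefaultWebProxy = New-Object -TypeName System.Net.WebProxy -ArgumentList $proxyUri
[System.Net.WebRequest]::DefaultWebProxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials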

The code I needed to traverse the proxy could then be executed. Once it had completed the task using the proxy, I would reset the proxy back to its default state (using the settings from Internet Explorer):
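Roughly like this:

# Reset the WinHTTP proxy back to the settings configured in Internet Explorer
& netsh winhttp import proxy source=ie

# Revert the .NET web client layer to the system default proxy
[System.Net.WebRequest]::DefaultWebProxy = [System.Net.WebRequest]::GetSystemWebProxy()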

After using this a bit I thought it would be great to turn it into a function that I could just call, passing a script block that I wanted to be able to traverse the proxy with. So I came up with this:
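The following is a sketch of such a function rather than the exact code – the default proxy URI and the $global:ProxyCredential variable name are assumptions:

function Use-Proxy
{
    [CmdletBinding()]
    param
    (
        # Hypothetical default proxy - change this to suit your environment
        [System.String]
        $ProxyUri = 'http://proxy.contoso.com:8080',

        [System.Management.Automation.PSCredential]
        $Credential,

        [Parameter(Mandatory = $true)]
        [System.Management.Automation.ScriptBlock]
        $ScriptBlock
    )

    if (-not $Credential)
    {
        if ($global:ProxyCredential)
        {
            # Re-use a global credential variable if one has been set (saves typing)
            $Credential = $global:ProxyCredential
        }
        else
        {
            $Credential = Get-Credential -Message 'Enter the credentials used to authenticate to the proxy'
        }
    }

    try
    {
        # Configure WinHTTP and the .NET web client layer to use the proxy
        & netsh winhttp set proxy proxy-server="$ProxyUri"
        [System.Net.WebRequest]::DefaultWebProxy = New-Object -TypeName System.Net.WebProxy -ArgumentList $ProxyUri
        [System.Net.WebRequest]::DefaultWebProxy.Credentials = $Credential.GetNetworkCredential()

        # Run the caller's code with the proxy in place
        & $ScriptBlock
    }
    finally
    {
        # Always reset the proxy, even if the script block throws
        & netsh winhttp import proxy source=ie
        [System.Net.WebRequest]::DefaultWebProxy = [System.Net.WebRequest]::GetSystemWebProxy()
    }
}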

To use this script, simply save it as a PS1 file (e.g. Use-Proxy.ps1), customizing the default proxy URL if you like, and then dot-source the file. Once it has been dot-sourced it can be called, optionally passing the URL of the proxy server and the credentials to use to authenticate to it:
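For example:

# Dot-source the file containing the function, then wrap any proxy-bound work in Use-Proxy
. .\Use-Proxy.ps1

Use-Proxy -ScriptBlock { Update-Help }

Use-Proxy `
    -ProxyUri 'http://proxy.contoso.com:8080' `
    -Credential (Get-Credential) `
    -ScriptBlock { Install-Module -Name PSScriptAnalyzer }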

If you don’t pass any credentials, you will be prompted to enter them. I also added some code into this function so that you can specify a global variable containing the credentials to use to traverse the proxy. This can save on lots of typing, but might be frowned upon by your security team.

Finally, I also added the proxy reset code into the finally block of a try/catch to ensure that if the code in the script block throws an error the proxy will still be reset. In my case I also loaded this function into a PowerShell module that can be distributed to other team members.

Happy proxy traversing!