Stop, Start or Restart all Web Apps in Azure using PowerShell

Here is a short (and sometimes handy) single line of PowerShell code that can be used to restart all the Azure Web Apps in a subscription:
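Something along these lines (a sketch using the AzureRM cmdlets rather than the exact command from the screenshot):

# Restart every Web App in the subscription - no confirmation is requested
(Get-AzureRmWebApp).GetEnumerator() | ForEach-Object { Restart-AzureRmWebApp -ResourceGroupName $_.ResourceGroup -Name $_.Name }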

ss_azurecloudshell_restartallwebapps

Note: Use this with care if you’re working with production systems because this _will_ restart these Web Apps without confirming first.

This would be a handy snippet to be able to run in the Azure Cloud Shell. It could also be adjusted to perform different actions on other types of resources.

To stop all Web Apps in a subscription use:
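For example (again, a sketch using the same pattern):

(Get-AzureRmWebApp).GetEnumerator() | ForEach-Object { Stop-AzureRmWebApp -ResourceGroupName $_.ResourceGroup -Name $_.Name }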

To start them all again:
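Again as a sketch:

(Get-AzureRmWebApp).GetEnumerator() | ForEach-Object { Start-AzureRmWebApp -ResourceGroupName $_.ResourceGroup -Name $_.Name }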

The key part of this command is the GetEnumerator() method, because most Azure cmdlets don’t return an array of individual objects into the pipeline like typical PowerShell cmdlets do. Instead they return a single System.Collections.Generic.List object, which requires a slight adjustment to the code. The same technique can be used with most Azure cmdlets to allow their results to be iterated through.

ss_azurecloudshell_systemcollections

Thanks for reading.


Get Azure API Management Git Credentials using PowerShell

One of the many great features of Azure API Management is the fact that it has a built-in Git repository for storing the current configuration as well as publishing new configurations.

ss_apim_gitrepository

This allows you to push updated Azure API Management configurations to this internal Git repository as a new branch and then Deploy the configuration to API Management.

The internal Git repository in Azure API Management is not intended to be used for a normal development workflow. You’ll still want to develop and store your Azure API management configuration in an external Git repository such as GitHub or TFS/VSTS and then copy configuration updates to the internal Git repository in Azure API Management using some sort of automated process (e.g. Continuous Integration/Continuous Delivery could be adopted for this).

The Internal Git Repository

Accessing the internal Git repository requires generating short-lived (30 days maximum) Git credentials. This is fairly easy to do through the Azure API Management portal:

ss_apim_gitrepositorygeneratecreds

Unfortunately using the portal to get these credentials is a manual process and so would not be so good for an automated delivery process (e.g. CI/CD). You’d need to update these Git credentials in your CI/CD automation system every time they expired (every 30 days).

Get Git Credentials

A better approach to generating the Git Credentials is to use Azure PowerShell API Management cmdlets connected with a Service Principal to generate the Git credentials whenever you need them in your CI/CD pipeline.

This is not a completely straightforward process right now (which is unusual for the Azure PowerShell team), so I’ve created a simple PowerShell script that will take care of the nuts and bolts for you.

Requirements

To run this script you’ll need:

  1. PowerShell 5 (WMF 5.0) or greater.
  2. Azure PowerShell Modules installed (make sure you’ve got the latest versions – 4.0.3 at the time of writing this).

You’ll also need to supply the following parameters to the script:

  1. The Azure Subscription Id of the subscription containing the API Management instance.
  2. The name of the Resource Group where the API Management instance is installed to.
  3. The service name of the API Management instance.

You can also optionally supply which of the two internal API Management keys, primary or secondary, to use to generate the credential and also the length of time that the Git credential will be valid for (up to 30 days).

Steps

Download the Script

  1. Download the script Get-AzureRMApiManagementGitCredential.ps1.
  2. Unblock the downloaded script so that it can be run (a sketch of both steps is shown below).
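For example (the download URL here is a placeholder for wherever you keep the script):

Invoke-WebRequest -Uri 'https://example.com/scripts/Get-AzureRMApiManagementGitCredential.ps1' `
    -OutFile '.\Get-AzureRMApiManagementGitCredential.ps1' -UseBasicParsing
Unblock-File -Path '.\Get-AzureRMApiManagementGitCredential.ps1'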

Using the Script

  1. Use the Login-AzureRMAccount cmdlet to authenticate to Azure. This would normally be done using a Service Principal if using an automated process, but could be done interactively when testing.
  2. Execute the script providing the SubscriptionId, ResourceGroup and ServiceName parameters (and optionally the KeyType and ExpiryTimespan), as in the example below.
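The values here are placeholders; this requests a credential generated from the Primary key that expires in 4 hours:

.\Get-AzureRMApiManagementGitCredential.ps1 `
    -SubscriptionId '00000000-0000-0000-0000-000000000000' `
    -ResourceGroup 'MyApiManagementRG' `
    -ServiceName 'myapimanagementinstance' `
    -KeyType Primary `
    -ExpiryTimespan (New-TimeSpan -Hours 4)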

ss_apim_gitrepositoryinvoke

The script will return an object containing the properties GitUsername and GitPassword that can be provided to Git when cloning the internal Git repository.

The GitPassword is not escaped, so it cannot be used directly within a Git clone URL without first replacing any / or @ characters with %2F and %40 respectively.

In the example above I generated an internal Git Credential using the Primary Secret Key that will expire in 4 hours.

Typically you’d assign the output of this script to a variable and use the properties to generate the URL to pass into the Git Clone. For example:
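A sketch of that, escaping the password with the .NET Uri helper (the instance name and parameter values are placeholders):

$gitCredential = .\Get-AzureRMApiManagementGitCredential.ps1 `
    -SubscriptionId '00000000-0000-0000-0000-000000000000' `
    -ResourceGroup 'MyApiManagementRG' `
    -ServiceName 'myapimanagementinstance' `
    -ExpiryTimespan (New-TimeSpan -Hours 4)

# EscapeDataString takes care of the / and @ characters mentioned above
$escapedPassword = [System.Uri]::EscapeDataString($gitCredential.GitPassword)
git clone "https://$($gitCredential.GitUsername):$escapedPassword@myapimanagementinstance.scm.azure-api.net"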

ss_apim_gitrepositoryclone

Tips

  • When cloning the internal Git repository you’ll need the clone URL of the repository. This is always the name of your Azure API Management instance with scm.azure-api.net appended to it, e.g. https://myapimanagementinstance.scm.azure-api.net
  • Once you’ve uploaded a new Git branch containing a new or updated Azure API Management configuration you’ll need to use the Publish-AzureRmApiManagementTenantGitConfiguration cmdlet to tell Azure API Management to publish the configuration contained in the branch. I have not detailed this process here, but if there is interest I can cover the entire end-to-end process.
  • The Primary and Secondary Secret Keys that are used to generate the internal Git Credential can be re-generated (rolled) individually if a Git credential is compromised. However, this will invalidate all Git Credentials generated using that Secret Key.

The Script

If you wish to review the script itself, here it is:
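What follows is a condensed, illustrative sketch of the approach such a script can take rather than the verbatim script: it uses the tenant Git access cmdlet from the AzureRM.ApiManagement module, and the fixed apim Git username and the HMAC-SHA512 SAS token format are assumptions taken from the Azure API Management documentation, so check them against the current docs before relying on this.

[CmdletBinding()]
param
(
    [Parameter(Mandatory = $true)] [String] $SubscriptionId,
    [Parameter(Mandatory = $true)] [String] $ResourceGroup,
    [Parameter(Mandatory = $true)] [String] $ServiceName,
    [ValidateSet('Primary', 'Secondary')] [String] $KeyType = 'Primary',
    [TimeSpan] $ExpiryTimespan = (New-TimeSpan -Days 30)
)

# Select the subscription and create a context for the API Management instance
# (assumes Login-AzureRmAccount has already been called - see the Steps above)
$null = Select-AzureRmSubscription -SubscriptionId $SubscriptionId
$context = New-AzureRmApiManagementContext -ResourceGroupName $ResourceGroup -ServiceName $ServiceName

# Get the access keys for the internal Git repository and pick the requested one
$gitAccess = Get-AzureRmApiManagementTenantGitAccess -Context $context
$key = if ($KeyType -eq 'Primary') { $gitAccess.PrimaryKey } else { $gitAccess.SecondaryKey }

# Build a time-limited SAS token: an HMAC-SHA512 signature over "<id>`n<expiry>" using the selected key
# (token format assumed from the API Management documentation)
$expiry = ((Get-Date).ToUniversalTime() + $ExpiryTimespan).ToString('O')
$hmac = New-Object -TypeName System.Security.Cryptography.HMACSHA512 `
    -ArgumentList @(, [System.Text.Encoding]::UTF8.GetBytes($key))
$signature = [Convert]::ToBase64String(
    $hmac.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("$($gitAccess.Id)`n$expiry")))

# Return the credential in the shape described above (GitUsername/GitPassword)
[PSCustomObject] @{
    GitUsername = 'apim' # assumed fixed username for the internal Git repository
    GitPassword = 'uid={0}&ex={1}&sn={2}' -f $gitAccess.Id, $expiry, $signature
}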

So, hopefully that will be enough information to get anyone else started on building a CI/CD pipeline for deploying Azure API Management configurations.

 

Publish an Azure RM Web App using a Service Principal in PowerShell

Introduction

Deploying an Azure Web App is almost stupidly simple. If I were to list the methods and tools I’d still be typing next week. The problem with many of these tools and processes is that they do a whole lot of magic under the hood, which makes the deployment difficult to manage in source control.

I’m a big believer that all code (including deployment code) should be in the application source repository so it can be run by any tool or release pipeline – including manually by development teams. This ensures that whatever deployment process is used, it is the same no matter who or what runs it – and we end up continuously testing the deployment code and process.

So I decided to go and find out how to deploy an Azure Web App using PowerShell and a Service Principal.

Where is Publish-AzureRMWebsiteProject?

If you look through the Azure PowerShell cmdlets you’ll find a Service Management (classic) cmdlet called Publish-AzureWebsiteProject. This cmdlet looks like it should do the trick, but it isn’t suitable because it requires authentication by a user account instead of a service principal.

Only service principal accounts can be authenticated without user interaction, so using Publish-AzureWebsiteProject would only work if a development team member was able to log in interactively, which would prevent the same process being used by our automation or continuous delivery pipeline. The newer Azure Resource Manager cmdlets (*-AzureRM*) all support logging in with a service principal, but the problem is that there is no Publish-AzureRMWebsiteProject cmdlet.

So, to work around this limitation I determined I had to use Web Deploy/MSDeploy. The purpose of this post is to share the PowerShell function/code and process I used to do this. This will work with and without Web App deployment slots.

Note: in my case our teams put all deployment code into a PowerShell PSake task in the application source code repository to make it trivial for anyone to run the deployment. The continuous delivery pipeline was also able to call the exact same task to perform the deployment. There is no requirement to use PowerShell PSake – just a simple PowerShell script will do.

The Code

So, I’ll start by just pasting the function that performs the task:
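This is a condensed sketch of the approach rather than the full original function (the parameter names here are illustrative): it logs in with the Service Principal, downloads the Web Deploy publishing profile for the Web App or slot, and then calls MSDeploy.exe to push the package.

<#
    Publish-AzureRMWebappProject.ps1 - a condensed, illustrative sketch.
    Assumes $Package points at a Web Deploy (.zip) package produced by your build.
#>
[CmdletBinding()]
param
(
    [Parameter(Mandatory = $true)] [String] $SubscriptionId,
    [Parameter(Mandatory = $true)] [String] $TenantId,
    [Parameter(Mandatory = $true)] [PSCredential] $Credential,
    [Parameter(Mandatory = $true)] [String] $ResourceGroupName,
    [Parameter(Mandatory = $true)] [String] $WebAppName,
    [Parameter()] [String] $SlotName,
    [Parameter(Mandatory = $true)] [String] $Package,
    [Parameter()] [String] $MSDeployPath = "$env:ProgramFiles\IIS\Microsoft Web Deploy V3\msdeploy.exe"
)

# Log in to Azure using the Service Principal and select the target subscription
$null = Add-AzureRmAccount -ServicePrincipal -TenantId $TenantId -Credential $Credential
$null = Select-AzureRmSubscription -SubscriptionId $SubscriptionId

# Download the publishing profile for the Web App (or the deployment slot if one was specified)
$profilePath = Join-Path -Path $env:TEMP -ChildPath 'publishprofile.xml'
if ($SlotName)
{
    $null = Get-AzureRmWebAppSlotPublishingProfile -ResourceGroupName $ResourceGroupName -Name $WebAppName `
        -Slot $SlotName -OutputFile $profilePath
}
else
{
    $null = Get-AzureRmWebAppPublishingProfile -ResourceGroupName $ResourceGroupName -Name $WebAppName `
        -OutputFile $profilePath
}

# Pull the Web Deploy (MSDeploy) endpoint and credentials out of the publishing profile
$publishProfile = ([Xml] (Get-Content -Path $profilePath -Raw)).publishData.publishProfile |
    Where-Object -FilterScript { $_.publishMethod -eq 'MSDeploy' }

# Push the package to the Web App (or slot) using MSDeploy
Write-Verbose -Message "Publishing '$Package' to '$($publishProfile.publishUrl)'"
& $MSDeployPath '-verb:sync' `
    "-source:package='$Package'" `
    ("-dest:auto,ComputerName='https://{0}/msdeploy.axd?site={1}',UserName='{2}',Password='{3}',AuthType='Basic'" -f `
        $publishProfile.publishUrl, $publishProfile.msdeploySite, $publishProfile.userName, $publishProfile.userPWD)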

Just save this file as Publish-AzureRMWebappProject.ps1 and you’re ready to start publishing (almost).

Before you can use this function you’ll need to get a few things sorted:

  1. Create a Service Principal with a password to use to deploy the web app using the instructions on this page.
  2. Make sure you have got the latest version of the Azure PowerShell Modules installed (I used v4.0.0). See this page for instructions.
  3. Make sure you’ve got MSDeploy.exe installed on your computer – see this page for instructions. You can pass the path to MSDeploy.exe into the Publish-AzureRMWebappProject.ps1 using the MSDeployPath parameter.
  4. Gather the following things (there are many ways of doing that – but I’ll leave it up to you to figure out what works for you):
    1. the Subscription Id of the subscription you’ll be deploying to.
    2. the Tenant Id of the Azure Active Directory containing your Service Principal.
    3. the Application Id that was displayed to you when you created the Service Principal.
    4. the Password you assigned when you created the Service Principal.

Once you have got all this information you can call the script above like this:
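For example (the resource group, web app name, slot and package path below are placeholders):

$Credential = New-Object -TypeName System.Management.Automation.PSCredential `
    -ArgumentList $Username, (ConvertTo-SecureString -String $Password -AsPlainText -Force)

.\Publish-AzureRMWebappProject.ps1 `
    -SubscriptionId $SubscriptionId `
    -TenantId $TenantId `
    -Credential $Credential `
    -ResourceGroupName 'MyWebAppResourceGroup' `
    -WebAppName 'MyWebApp' `
    -SlotName 'offline' `
    -Package 'C:\Builds\MyWebApp.zip' `
    -Verbose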

Note: You’ll need to make sure to replace the variables $SubscriptionId, $TenantId, $Password and $Username with the values for your Azure Subscription, Tenancy and Service Principal.

When everything is done correctly this is what happens when you run it (with -Verbose enabled):

ss_webappdeploy_publishazurermwebappproject

Note: in the case above I was deploying to a staging deployment slot called offline, so the new version of my website wouldn’t have been visible in my production slot until I called the Swap-AzureRmWebAppSlot cmdlet to swap the offline slot with my production slot.

All in all, this is fairly robust and allows our development teams and our automation and continuous delivery pipeline to all use the exact same deployment code which reduces deployment failures.

If you’re interested in more details about the code/process, please feel free to ask questions.

Thanks for reading.

Change the Friendly Name of a Cert with PowerShell

While working on adding a new feature in the certificate request DSC resource, I came across this handy little trick: You can change the Friendly Name of a certificate using PowerShell.

All you need to do is identify the certificate using Get-ChildItem and then assign the new FriendlyName to it.
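For example (the subject filter and the new friendly name below are placeholders; use whatever identifies your certificate):

# Find the certificate in the local machine store and give it a new friendly name
$cert = Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object -FilterScript { $_.Subject -eq 'CN=contoso.com' }
$cert.FriendlyName = 'Contoso Web Server Certificate'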

ss_cert_changefriendlyname

ss_cert_changefriendlynamecertlm

Sometimes PowerShell still surprises me at how easy it can make things. I didn’t need to search help or the internet – just typed it in and it worked!

Downloading GitHub .GitIgnore templates with PowerShell

This will be a relatively short post today to get me back into the blogging rhythm. Most of my time of late has been spent working on the DSC Resource Kit, adding code coverage reporting and new xCertificate features.

So, today’s post shows how you can use some simple PowerShell code to pull down the list of .gitIgnore templates from GitHub and then retrieve the one I wanted. There are lots of different ways I could have done this, but I decided to use the GitHub REST API.

First up, let’s get the list of available .gitIgnore templates:
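Something like this, using the GitHub REST API gitignore templates endpoint:

$templateList = (Invoke-WebRequest -Uri 'https://api.github.com/gitignore/templates' -UseBasicParsing).Content |
    ConvertFrom-Json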

This will get the list of .gitIgnore templates into an array variable called $templateList. I could then display the list to a user:
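# The endpoint returns a simple array of template names
$templateList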

ss_ghgi_getgitignoretemplates

Now, all I need to do is to download the named .gitIgnore Template to a folder:
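A sketch of that step (the API returns the template as JSON with a source property containing the file content):

$response = Invoke-WebRequest -Uri 'https://api.github.com/gitignore/templates/VisualStudio' -UseBasicParsing
($response.Content | ConvertFrom-Json).source | Set-Content -Path '.gitignore'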

This will download the VisualStudio .gitIgnore template and save it with the filename .gitignore to the current folder.

ss_ghgi_getgitignorefile

I could have specified a different .gitIgnore template by changing the VisualStudio in the URL to another template that appears in the $templateList.

You might have noticed that I included the -UseBasicParsing parameter in the Invoke-WebRequest call. This is to ensure the cmdlet works on machines that don’t have Internet Explorer installed – e.g. Nano Server or Linux/OSX. I haven’t tried this on PowerShell running on Linux or OSX, but I can’t see any reason why it wouldn’t work on those OS’s.

The next steps for this code might be to get these included as some new cmdlets in Trevor Sullivan’s PSGitHub PowerShell Module. You can download his module from the PowerShell Gallery if you’re not familiar with it.

Thanks for reading.

Using PFX Files in PowerShell

One of the things I’ve been working on lately is adding a new resource to the xCertificate DSC Resource module for exporting a certificate with (or without) the private key from the Windows Certificate Store as a .CER or .PFX file. The very insightful (and fellow DSC Resource maintainer) @JohanLjunggren has been giving some really great direction on this new resource.

One of these suggested features was to be able to identify if the certificate chain within a PFX file is different to the chain in the Windows Certificate Store. This is because a PFX file can contain not just a single certificate but the entire trust chain required by the certificate being exported.

Therefore what we would need to do is be able to step through the certificates in the PFX and examine each one. It turns out this is pretty simple using the .NET Class:

System.Security.Cryptography.X509Certificates.X509Certificate2Collection

So, to read the PFX in to a variable called $PFX all we need to do is this:
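A sketch of loading the file into that collection class:

$PFX = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2Collection
$PFX.Import($PFXPath, $PFXPassword, [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::DefaultKeySet)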

The $PFXPath variable is set to the path to the PFX file we’re going to read in. The $PFXPassword is a string (not SecureString) containing the password used to protect the PFX file when it was exported.

We now have all the certificates loaded into an array in the $PFX variable and can work with them like any other array:

ss_readpfx_loadingthepfx

Now that we have the $PFX array, we can identify the thumbprint of the certificate that was actually exported (as opposed to the certificates in the trust chain) by looking at the last array item:
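For example:

# The last certificate in the collection is the one that was actually exported
$PFX[$PFX.Count - 1] | Format-List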

I’m piping the output to Format-List so we can see the entire X509 certificate details.

ss_readpfx_showissuedcertificate

In the case of the DSC Resource we’ll compare the certificate thumbprint of the last certificate in the PFX with the thumbprint of the certificate in the Windows Certificate Store that we’re wanting to export. If they’re different we will then perform another export using the Export-PfxCertificate cmdlet.

Protip: You can actually verify that the certificate and the entire trust chain are valid and not expired by calling the Verify method on the last certificate:
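For example:

# Returns True when the certificate chain is trusted and nothing in it has expired
$PFX[$PFX.Count - 1].Verify()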

ss_readpfx_validateissuedcertificate

In the case above, the certificate I exported was actually invalid (it had expired):

ss_readpfx_expiredcertificate

So we could easily use the Verify method to test a certificate’s validity before we import it into the Windows Certificate Store. But beware: the Verify method will check that the certificate chain is trusted. To be trusted, the entire chain must have been imported into the Windows Certificate Store in the appropriate stores (e.g. the Trusted Root CA/Intermediate CA stores).

So, finally this gives us the code required to implement the xCertificateExport Resource in the DSC Resource Kit. We can now perform a comparison of the certificates in a PFX file to ensure that they are the same as the certificates that have already been exported.

This information is not something that you might use every day, but hopefully it’s information that someone might find useful. So thank you for taking the time to read this.

Test Website SSL Certificates Continuously with PowerShell and Pester

One of the most common problems that our teams deal with is ensuring that SSL certificates are working correctly. We’ve all had that urgent call telling us that the web site is down or some key API or authentication function is offline, only to find out it was caused by an expired certificate.

An easy way of preventing this situation would be to set up a task that continuously tests your SSL endpoints (internal and external web apps and sites, REST APIs etc.) and warns us if:

  • The certificate is about to expire (within x days).
  • The SSL endpoint is not using safe SSL protocols (e.g. TLS 1.2).
  • The certificate is not using SHA-256.

This seemed like a good task for Pester (or Operation Validation Framework). So, after a bit of digging around I found this awesome blog post from Chris Duck showing how to retrieve the certificate and SSL protocol information from an SSL endpoint using PowerShell.

Chris’ post contained this PowerShell function:
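Chris’ full function is worth reading in his post; a condensed sketch of the idea (attempt a handshake with each SSL/TLS protocol and capture the remote certificate) looks roughly like this:

function Test-SslProtocol
{
    param
    (
        [Parameter(Mandatory = $true)] [String] $ComputerName,
        [Parameter()] [Int] $Port = 443
    )

    $protocolStatus = [Ordered] @{ ComputerName = $ComputerName; Port = $Port; Certificate = $null }

    foreach ($protocol in 'Ssl3', 'Tls', 'Tls11', 'Tls12')
    {
        $tcpClient = New-Object -TypeName System.Net.Sockets.TcpClient
        $sslStream = $null
        try
        {
            $tcpClient.Connect($ComputerName, $Port)
            $sslStream = New-Object -TypeName System.Net.Security.SslStream -ArgumentList $tcpClient.GetStream()
            $sslStream.AuthenticateAsClient($ComputerName, $null, [System.Security.Authentication.SslProtocols] $protocol, $false)
            # Capture the certificate presented by the endpoint
            $protocolStatus.Certificate = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 `
                -ArgumentList $sslStream.RemoteCertificate
            $protocolStatus[$protocol] = $true
        }
        catch
        {
            $protocolStatus[$protocol] = $false
        }
        finally
        {
            if ($sslStream) { $sslStream.Close() }
            $tcpClient.Close()
        }
    }

    [PSCustomObject] $protocolStatus
}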

So that was the hard part done, all I needed was to add this function to some Pester tests.

Note: If you are running these tests on an operating system older than Windows 10 or Windows Server 2016 then you will need to install the PowerShell Pester module by running this command in an Administrator PowerShell console:

Install-Module -Name Pester

So after a little bit of tinkering I ended up with a set of tests that I combined into the same file as Chris’ function from earlier. I called the file SSL.tests.ps1. I used the file extension .tests.ps1 because that is the file extension Pester looks for when it runs.

The tests are located at the bottom of the file below the Test-SslProtocol function.
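Here is a minimal sketch of what those tests might look like (the endpoints and the 14 day threshold match the example output below; it assumes the Test-SslProtocol function is defined earlier in the same file):

$webSites = @('google.com', 'bing.com', 'yahoo.com')
$minimumCertAgeDays = 14

Describe 'SSL endpoint certificates' {
    foreach ($webSite in $webSites)
    {
        Context "$webSite" {
            $result = Test-SslProtocol -ComputerName $webSite -Port 443

            It 'should support TLS 1.2' {
                $result.Tls12 | Should Be $true
            }

            It 'should use a SHA-256 based signature algorithm' {
                $result.Certificate.SignatureAlgorithm.FriendlyName | Should Match 'sha256'
            }

            It "should not expire within $minimumCertAgeDays days" {
                ($result.Certificate.NotAfter - (Get-Date)).Days | Should BeGreaterThan $minimumCertAgeDays
            }
        }
    }
}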

So, now to test these SSL endpoints all I need to do is run in a PowerShell console with the current folder set to the folder containing my SSL.tests.ps1 file:

cd C:\SSLTests\
Invoke-Pester

This is the result:

ss_testssl_pesteroutput

This shows that the SSL endpoint certificates used by google.com, bing.com and yahoo.com are all valid SHA-256 certificates and aren’t going to expire within the next 14 days.

All I would then need to do is put this into a scheduled task that runs every hour or so and performs some action when the tests fail:
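One way to do that (an illustration, not necessarily how you would wire it up in your environment) is to register a PowerShell scheduled job:

# Run the Pester tests every hour and react when any of them fail
$trigger = New-JobTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Hours 1) -RepeatIndefinitely
Register-ScheduledJob -Name 'TestSslEndpoints' -Trigger $trigger -ScriptBlock {
    $results = Invoke-Pester -Script 'C:\SSLTests\SSL.tests.ps1' -PassThru
    if ($results.FailedCount -gt 0)
    {
        # Alert someone here - e.g. write an event to the Windows Event Log (see below)
    }
}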

At this point you will still need to use some mechanism to notify someone when they fail. One method could be to write an event into the Windows Event Log and then use Microsoft Operations Management Suite (or SCOM) to monitor for this event and send an e-mail or other alert to the appropriate administrators.

For an example showing how to use OMS to monitor custom events created by failed Pester and OVF tests, see my previous article here.

Potential Improvements

There are a number of ways you could go about improving this process, which our teams have in fact implemented. If you’re considering implementing this process then you might want to also consider them:

  1. Put the Test-SslProtocol function into a PowerShell module that you can share easily throughout your organization.
  2. Put your tests into source control and have the task clone the tests directly from source control every time they are run – this allows tests to be stored centrally and can be change tracked.
  3. Parameterize the tests so that you don’t have to hard code the endpoints to test in the script file. Parameters can be passed into Pester tests fairly easily.
  4. Use something like Jenkins, SCOM or Splunk to run the tests continuously.
  5. Run the tests in an Azure Automation account in the Cloud.

Really, the options for implementing this methodology are nearly limitless. You can engineer a solution that will work for you and your teams, using whatever tools are at your disposal.

At the end of the day, the goal here should be:

  • Reduce the risk that your internal or external applications or websites are using bad certificates.
  • Reduce the risk that an application or website will be deployed without a valid certificate (write infrastructure tests before you deploy your infrastructure – TDD for operations).
  • Reduce the risk you’ll get woken up in the middle of the night with an expired certificate.

So, in this holiday season, I hope this post helps you ensure your certificates won’t expire in the next two weeks and you won’t get called into fix a certificate problem when you should be lying on a beach in the sun (in the southern hemisphere anyway).

Have a good one!