WSUS – Declining all Superseded Updates – NOW!

Just a quick snippet today. I wrote this because I didn’t want to wait 30 days for unused superseded updates on my WSUS server to be automatically declined – especially those daily “Definition Update for Windows Defender” updates.

If you’re happy waiting for these unused superseded updates to be declined after 30 days, then you can just use the following cmdlet:

Invoke-WsusServerCleanup -DeclineSupersededUpdates

However, if you don’t want to wait, you can fire off this little PowerShell script. It is just a single line of PowerShell code that declines every update that is not already declined and has at least one superseding update:

Get-WsusUpdate -Classification All -Status Any -Approval AnyExceptDeclined `
    | Where-Object { $_.Update.GetRelatedUpdates(([Microsoft.UpdateServices.Administration.UpdateRelationship]::UpdatesThatSupersedeThisUpdate)).Count -gt 0 } `
    | Deny-WsusUpdate

The command will take a few minutes to run (depending on how many updates your WSUS server has) – on my WSUS server it took about 5 minutes. Once the process has completed, you can then run the cleanup cmdlet to get rid of any obsolete content files:

Invoke-WsusServerCleanup -CleanupObsoleteUpdates -CleanupUnneededContentFiles
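
If you want the decline to happen automatically on a schedule, you could wrap the one-liner in a PowerShell scheduled job. This is just a sketch, assuming the PSScheduledJob module is available and that it runs in an elevated session on the WSUS server itself (the job name and trigger time are arbitrary):

# Register a nightly job that declines all superseded updates.
$Trigger = New-JobTrigger -Daily -At '3:00 AM'
Register-ScheduledJob -Name 'DeclineSupersededUpdates' -Trigger $Trigger -ScriptBlock {
    Get-WsusUpdate -Classification All -Status Any -Approval AnyExceptDeclined |
        Where-Object { $_.Update.GetRelatedUpdates(([Microsoft.UpdateServices.Administration.UpdateRelationship]::UpdatesThatSupersedeThisUpdate)).Count -gt 0 } |
        Deny-WsusUpdate
}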

That is about it for today!

Reporting on File, Folder and Share Permissions and how they Change

Introduction

Late last year I was asked by a friend if I could write a program that could look at the ACLs defined within a folder structure and report back how they differed from some previously recorded state. This was basically so the administrators could report on which ACLs had changed and verify that the ‘Security Elves’ hadn’t been messing about.

So after several almost complete re-writes, the end result is the ACLReportTools PowerShell module.

Overview

The intended purpose of this module is to allow an administrator to report on how ACLs for a set of paths or shares have changed since a baseline was last created.

Basically, it allows administrators to easily see what ACL changes are being made so they can keep an eye on any security issues arising. If performing SMB share comparisons, the report generation can be performed remotely (from a desktop PC, for example) and can also be run against shares on multiple computers.

The process that is normally followed using this module is:

  1. Produce a baseline ACL Report from a set of Folders or Shares (even on multiple computers).
  2. Export the baseline ACL Report as a file.
  3. … Sometime later …
  4. Import the baseline ACL Report from a stored file.
  5. Produce an ACL Difference Report comparing the imported baseline ACL Report with the current ACL state of the Folders or Shares.
  6. Optionally, export the ACL Difference report as HTML.
  7. Repeat from step 1.

The comparison is always performed recursively, scanning a specified set of folders or SMB shares. All files and folders within these locations will be scanned, but only non-inherited ACLs will be added to the ACL Reports.

Report Details

An ACL Report is a list of non-inherited ACLs for a set of Shares or Folders. It is stored as a serialized array of [ACLReportTools.Permission] objects. ACL Reports are returned by the New-ACLShareReport, New-ACLPathFileReport and Import-ACLReport cmdlets.

An ACL Difference Report is a list of all ACL differences between two ACL Reports. It is stored as a serialized array of [ACLReportTools.PermissionDiff] objects, which are returned by the Compare-ACLReports and Import-ACLDiffReport cmdlets.

ACL Reports produced for shares rather than folders differ in that the share name is provided in each [ACLReportTools.Permission] object and that the SMB Share ACL is also provided in the [ACLReportTools.Permission] array.

Important Notes

When performing a comparison, make sure the baseline report covers the same set of folders/shares you want to compare now. For example, don’t try to compare ACLs for c:\windows and c:\wwwroot – that would make no sense and would produce nonsensical output.

If shares or folders that are being compared have large numbers of non-inherited ACLs (perhaps because some junior admin doesn’t understand inheritance), then a comparison can take a long time (hours) and really hog your CPU. If this is the case, run the comparison from another machine using share mode or run it after hours – or better yet, teach junior admins about inheritance! 🙂

You should also ensure that the account being used to generate any reports has read access to all paths and all content (including recursive content) that will be reported on, and can also read the ACLs. If it can’t access them, you may get access denied warnings (although the process will continue).

NTFS Security Module

This module requires the awesome NTFS Security Module to be installed in your PowerShell modules path.

Ensure that you unblock all files in the NTFS Security Module folder before attempting to run Import-Module ACLReportTools. The ACLReportTools module automatically looks for and imports the NTFS Security Module if present. If it is missing, an error will be returned stating that the module is missing. If you receive any other errors importing ACLReportTools, it is usually because some of the NTFS Security Module files are blocked and need to be unblocked manually or with Unblock-File. You can confirm this by calling Import-Module NTFSSecurity – if any errors appear, it is most likely caused by blocked files. After unblocking the module files you may need to restart PowerShell.
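
For example, to unblock everything in one go (the module path here is an assumption – adjust it to wherever you installed NTFSSecurity):

# Unblock every file in the NTFSSecurity module folder.
Get-ChildItem -Path "$HOME\Documents\WindowsPowerShell\Modules\NTFSSecurity" -Recurse |
    Unblock-File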

Installing ACLReportTools

  1. Unzip the archive containing the ACLReportTools module into one of the PowerShell Modules folders (e.g. $Home\documents\windowspowershell\modules) – or script it with Expand-Archive as sketched after these steps.
  2. This will create a folder called ACLReportTools containing all the files required for this module.
  3. In PowerShell execute:
Import-Module ACLReportTools
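
If you’re on PowerShell 5.0 or later, the unzip step can also be scripted with Expand-Archive. A minimal sketch, assuming the downloaded zip is sitting in your Downloads folder:

# Unzip the module straight into the user modules folder (PowerShell 5.0+).
Expand-Archive -Path "$HOME\Downloads\ACLReportTools.zip" -DestinationPath "$HOME\Documents\WindowsPowerShell\Modules"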

How to Use It

The basic steps for using this module are as follows:

  1. Create a Baseline ACL Report file on a set of Folders.
  2. Compare the Baseline ACL Report file with the current ACL’s for the same set of Folders.
  3. Optionally, convert the ACL Difference Report into an HTML report file.

In the following examples, the folders e:\work and d:\profiles are used to produce an ACL Difference Report. The baseline ACL Report and the ACL Difference Report will be saved into the current user’s documents folder.

Step 1: Create a Baseline ACL Report file on a set of Folders

The first step is to create a baseline ACL Report on the folders e:\work and d:\profiles and store it in the Baseline.acl file in the current user’s documents folder:

Import-Module ACLReportTools 
New-ACLPathFileReport -Path "e:\Work","d:\Profiles" | Export-ACLReport -Path "$HOME\Documents\Baseline.acl" -Force 

Step 2: Compare the Baseline ACL Report file with the current ACLs for the same set of Folders

This step is usually performed a few days or weeks after step 1. In this step, the baseline ACL Report created in step 1 is compared with the current ACLs for the same set of folders used in step 1. The output is put into the variable $DiffReport, which can then be exported as a file using the Export-ACLDiffReport cmdlet or saved as HTML using Export-ACLPermissionDiffHTML for easier review.

Import-Module ACLReportTools
$DiffReport = Compare-ACLReports -Baseline (Import-ACLReport -Path "$HOME\Documents\Baseline.acl") -Path "e:\Work","d:\Profiles"
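
If you want to keep the difference report on disk for later review, it can be exported at this point. A sketch, assuming Export-ACLDiffReport accepts pipeline input and -Path/-Force parameters in the same way Export-ACLReport does:

# Save the ACL Difference Report to a file (the file name is just an example).
$DiffReport | Export-ACLDiffReport -Path "$HOME\Documents\DiffReport.acl" -Force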

Step 3: Convert the ACL Difference Report into an HTML Report File

Once the ACL Difference Report has been produced, it can simply be dumped straight into the pipeline or converted into an HTML file using the Export-ACLPermissionDiffHTML cmdlet. A title to appear on the HTML page can also be provided.

$DiffReport | Export-ACLPermissionDiffHtml -Title 'ACL Diff Report for e:\work and d:\profiles'

Reporting on Shares Instead of Folders

Instead of specifying a set of folders, it is also possible to specify a list of computers and/or SMB shares to pull the ACL Reports from. For example, if we wanted to report on the shares Share1 and Share2 on the computer Client, the following commands could be used for step 1 (note that New-ACLShareReport is used here instead of New-ACLPathFileReport):

Import-Module ACLReportTools
New-ACLShareReport -ComputerName Client -Include Share1,Share2 | Export-ACLReport -Path "$HOME\Documents\Baseline.acl" -Force

Then for step 2 we would use:

Import-Module ACLReportTools 
$DiffReport = Compare-ACLReports -Baseline (Import-ACLReport -Path "$HOME\Documents\Baseline.acl") -ComputerName Client -Include Share1,Share2

Step 3 would be exactly the same as in the folder scenario.

Final Word

What started as a simple script actually ended up turning into quite a large module that taught me a huge amount about PowerShell. So I hope someone else out there is also able to find a use for this, and that it helps track down some of those ‘Permission Elves’.

Install Windows Server Nano the Easy Way

All the recent talk about the new Windows Server Nano (Windows Server Core on diet pills) that is available for installation on the Windows Server 2016 Technical Preview 2 ISO got me quite interested. So, I thought I’d give it a whirl to see what all the fuss was about. Well, first up, it really is as small (568 MB with my chosen packages) and fast as Microsoft says. Second, however, it is most definitely a Tech Preview – it is missing lots of stuff and has some obvious issues.

Edit – 14 May 2016: The scripts and process on this page have now been updated to support Windows Server 2016 Technical Preview 5. The process will not work for versions earlier than Windows Server 2016 Technical Preview 5. I removed some parts of this document as they were not correct for TP5. I decided to update this blog post with TP5 information because it is still getting a lot of traffic.

Manual Install

Installing a copy of Nano Server is not quite as straightforward as mounting the ISO into a VM and booting it up – at least not yet. But if you’re still keen to give it a try, Microsoft provides some instructions on how you can install Nano into a VHDx which you can then mount as the boot disk in a Gen-1 or Gen-2 Hyper-V machine.

The manual instructions to create a VHD with Nano Server on it can be found here:

Getting Started With Nano Server

It is well worth reading this to get an idea of how Nano Server is different from regular Core Server.

Easy Install

As you can see, installing Nano Server requires quite a few steps. None of them are difficult, but I wouldn’t be much of a nerd if I didn’t convert it into a script. So after a quiet Friday night I managed to cobble something together. You can find it here:

Create a New Nano Server VHD

It is fairly straightforward to install and use:

  1. Create a working folder on your computer; in this example I used c:\Nano.
  2. Download the New-NanoServerVHD.ps1 script to the working folder.
  3. Download the Windows Server 2016 Technical Preview ISO (download here) to the working folder.
  4. Open an Administrative PowerShell window.
  5. Change directory to the working folder (cd c:\nano).
  6. Execute the following command (customizing the parameters to your needs):
.\New-NanoServerVHD.ps1 `
-ServerISO 'c:\nano\14300.1000.160324-1723.RS1_RELEASE_SVC_SERVER_OEMRET_X64FRE_EN-US.ISO' `
-DestVHD c:\nano\NanoServer01.vhdx `
-VHDFormat 'VHDx' `
-ComputerName NANOTEST01 `
-AdministratorPassword 'P@ssword!1' `
-Packages 'Storage','OEM-Drivers','Guest' `
-IPAddress '192.168.1.65'

Note: If you run this in the PowerShell ISE, a pop-up message may appear during the execution of the above command:

This error can be ignored without it causing a problem.

If this happens, just click Continue. I’m not sure why this happens in the ISE, but the script still functions fine if it occurs. I tried to get a screenshot of this but I couldn’t get it to happen.

Booting it up

Once the VHD has been created, just create a new Gen-2 (or Gen-1) Hyper-V machine and assign this VHDx as the boot disk. Start up the VM and after about a minute (for me anyway) you should be able to use PS remoting (Enter-PSSession) to connect to it and begin playing.
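
For example, connecting from the Hyper-V host might look like this – a sketch, assuming the IP address used in the build command above and a VM that is not domain joined (so it must be trusted for WS-Man first):

# Trust the new VM for WS-Man connections (run elevated; the VM isn't domain joined).
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value '192.168.1.65' -Force

# Connect using the AdministratorPassword supplied to New-NanoServerVHD.ps1.
Enter-PSSession -ComputerName '192.168.1.65' -Credential (Get-Credential -UserName 'Administrator' -Message 'Nano Server login')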

Remember, Nano Server is completely headless (which just sounds cool), so if you try to connect to it using the Hyper-V console you will only see the recovery console.

Observations

Edit – 14 May 2016: These “issues” have been resolved in more recent versions of Windows Server 2016 Nano Server.

One thing I have noted, though, is that if you watch the machine with the Hyper-V console while it is booting, it will show the nice little Windows start-up screen for up to 5 minutes – even though the machine appears to be completely booted and can be connected to. I’m not sure why this is, but I’m sure MS will sort it out.

Nano booting up – on my system it can be connected to and used even while the boot screen is showing.

A second thing I found while writing this script was that in the Unattend.xml file, the ComputerName is supposed to be set in the offlineServicing phase (according to the MS instructions). But this didn’t seem to work for me, so my script sets it in both the offlineServicing phase and the specialize phase. This actually doubles the first boot time from 5 seconds to 10 seconds, because the machine needs to reboot to apply the ComputerName.

If anyone reads this and has any ideas on how to improve the process (or if I’ve gone wrong somewhere), please let me know!

DSC Tools – Hopefully Making DSC Easier

Introduction

Desired State Configuration (DSC) is definitely one of the coolest features of WMF 4.0. This article however is not about what DSC is or how to implement it – there are already many great introductions to DSC out there. If you’re new to DSC I’d suggest you take a look at the great free e-book from PowerShell.org: The DSC Book.

I’ve been working with DSC for several months now in my lab environments and I really love it. But it can be a little bit tricky to set up, with lots of steps and things to remember (or forget in my case). It is very easy to miss a step and spend many hours trying to figure out what went wrong – I know this from experience.

I’m certain that in the future Microsoft and/or other tool providers will offer easier methods of implementing DSC, but at the moment it is still a fairly manual process. There are products like Chef (see Chef for Windows) that are going to use DSC to provide configuration of Windows operating systems (and perhaps non-Windows as well), but as of writing, these tools aren’t yet available.

So, after about the 20th time of setting up DSC, I figured it might be an idea to write some simple tools that could make my life a little easier. With that in mind, I created a PowerShell module (supporting WMF 4.0 and above) that provides some helper functions and configurations to make setting up DSC Pull Servers and configuring the Local Configuration Manager (LCM) on nodes slightly less painful.

The Module

With all of the above in mind I set about creating a module that would combine all of these steps into simple PS cmdlets that could be used without having to remember exactly how to do things such as creating an LCM configuration file for a Pull Server or something like that. This in theory would leave more time for the really fun part about DSC: creating node configuration files and resources.

For example, to set up a DSC Pull Server (in HTTP mode) there are several steps required:

  1. Download and Install the DSC Resources your configurations will use to the PS modules folder.
  2. Publish the DSC Resources to a folder in your Pull Server and create checksum files for them.
  3. Create a DSC Configuration file for your Pull Server(s).
  4. Run the DSC Configuration file for your Pull Server(s) to create MOF files.
  5. Push the DSC Configuration MOF files to the LCM on the Pull Servers.

These steps are easy to understand and perform, but if you don’t do them often you will quickly forget exactly how. I found myself referring back to long sets of instructions (found in many places online) every time I did it.

To perform the above steps using the DSCTools module simply requires the following cmdlets to be executed on the computer that will become the DSC Pull Server.

# Download the DSC Resource Kit and install it to the local DSC Pull Server
Install-DSCResourceKit

# Copy all the resources up to the local DSC Pull Server (zipped and with a checksum file).
Publish-DSCPullResources

# Install a DSC Pull Server to the local machine
Enable-DSCPullServer

The cmdlets can be executed on a computer other than the Pull Server by providing ComputerName and Credential parameters.
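
For example, to publish resources to and enable a Pull Server on a remote machine (the server name here is hypothetical, and this sketch assumes both cmdlets accept ComputerName and Credential as just described):

# Publish resources to, and enable, a DSC Pull Server on a remote computer.
Publish-DSCPullResources -ComputerName DSCPULLSVR01 -Credential (Get-Credential)
Enable-DSCPullServer -ComputerName DSCPULLSVR01 -Credential (Get-Credential)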

DSCTool CmdLet Parameters

Like most PS cmdlets, you can provide additional configuration options to them. For example, you might want your DSC Pull Server resources folder to be placed in a different location, in which case you could provide the commands with the PullServerResourcePath parameter:

# Copy all the resources up to the local DSC Pull Server (zipped and with a checksum file).
Publish-DSCPullResources -PullServerResourcePath e:\DSC\Resources

# Install a DSC Pull Server to the local machine
Enable-DSCPullServer -PullServerResourcePath e:\DSC\Resources

Most DSCTools cmdlets have many other parameters for controlling most aspects of these functions, such as the type of Pull Server to install (HTTP, HTTPS or SMB) and the location where files should be stored. If these parameters aren’t passed, the default values will be used.

Default DSCTools Module Settings

If you’re lazy (like me) and don’t want to pass the same parameters to every cmdlet, you can change the default values by overriding script variables once the module has been loaded. For example, you might always want your Pull Servers to use a configuration path of e:\DSC\Configuration. You could pass the PullServerConfigurationPath parameter to each cmdlet that needs it (which could be many), or you could just change the value of the $Script:DSCTools_DefaultPullServerConfigurationPath variable:

# Set the default location where all Pull Server cmdlets will put DSC Pull Server configuration files.
$Script:DSCTools_DefaultPullServerConfigurationPath = 'e:\DSC\Configuration'

# Copy all the resources up to the local DSC Pull Server (zipped and with a checksum file).
Publish-DSCPullResources

# Install a DSC Pull Server to the local machine
Enable-DSCPullServer

Configuring a Node

Configuring a node to use the Pull Server is also normally a multi-step process (not counting the actual creation of the node configuration file):

  1. Copy the configuration MOF file to the Pull Server.
  2. Generate a checksum file for the configuration.
  3. Configure the LCM on the node to pull its configuration from the Pull Server.
  4. Trigger the node to immediately pull its DSC configuration from the Pull Server (rather than wait 30 minutes).

This process can now be performed by just two cmdlets:

# Set up the node NODE01 to pull from the pull server on machine MYDSCSERVER.
# The MOF file for this node will be looked for in:
# $Home\Documents\NODE01.MOF
# This can be configured.
Start-DSCPullMode `
    -ComputerName 'NODE01' `
    -PullServerURL 'http://MYDSCSERVER:8080/PSDSCPullServer.svc'

# Force the node to pull its configuration from the Pull Server
Invoke-DSCCheck -ComputerName NODE01

The second cmdlet, Invoke-DSCCheck, is not even required if you’re willing to wait for the automatic DSC check to occur (every 30 minutes or so).
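
If you want to confirm what the LCM on a node actually ended up configured with (including how often it refreshes), you can query it directly with the built-in DSC cmdlet rather than DSCTools:

# Query the LCM settings on NODE01 (requires CIM/WinRM access to the node).
Get-DscLocalConfigurationManager -CimSession NODE01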

Configuring Lots of Nodes at Once

Once again, this module is all about laziness. So, why configure one node at a time when you can configure lots of them all at once? Many of the cmdlets in this module support a Nodes parameter, which takes an array of hash tables. This array of hash tables contains the definitions of the nodes that need to be set up or have other procedures performed on them (e.g. invoking a configuration check).

For example, configuring seven different nodes to use a DSC Pull Server (with different configurations, even) requires only the following code:

$Nodes = @( `
@{Name='NODE01';Guid='115929a0-61e2-41fb-a9ad-0cdcd66fc2e1';RebootIfNeeded=$true;MofFile="$PSScriptRoot\Configuration\Config_StandardSrv\NODE01.MOF"} , `
@{Name='NODE02';Guid='115929a0-61e2-41fb-a9ad-0cdcd66fc2e2';RebootIfNeeded=$true;MofFile="$PSScriptRoot\Configuration\Config_StandardSrv\NODE02.MOF"} , `
@{Name='NODE03';Guid='115929a0-61e2-41fb-a9ad-0cdcd66fc2e3';RebootIfNeeded=$true;MofFile="$PSScriptRoot\Configuration\Config_StandardSrv\NODE03.MOF"} , `
@{Name='NODE04';Guid='115929a0-61e2-41fb-a9ad-0cdcd66fc2e4';RebootIfNeeded=$true;MofFile="$PSScriptRoot\Configuration\Config_StandardSrv\NODE04.MOF"} , `
@{Name='NODE05';Guid='115929a0-61e2-41fb-a9ad-0cdcd66fc2e5';RebootIfNeeded=$true;MofFile="$PSScriptRoot\Configuration\Config_StandardSrv\NODE05.MOF"} , `
@{Name='NODE06';Guid='115929a0-61e2-41fb-a9ad-0cdcd66fc2e6';RebootIfNeeded=$true;MofFile="$PSScriptRoot\Configuration\Config_StandardSrv\NODE06.MOF"} , `
@{Name='NODE07';Guid='115929a0-61e2-41fb-a9ad-0cdcd66fc2e7';RebootIfNeeded=$true;MofFile="$PSScriptRoot\Configuration\Config_StandardSrv\NODE07.MOF"} )

Start-DSCPullMode `
-Nodes $Nodes `
-PullServerURL 'http://MYDSCSERVER:8080/PSDSCPullServer.svc'

Invoke-DSCCheck -Nodes $Nodes

The nodes array could even be populated from a CSV file to make this easier still.
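
A minimal sketch of that approach, assuming a Nodes.csv file whose Name, Guid, RebootIfNeeded and MofFile columns match the hash table keys used above:

# Build the nodes array from a CSV file instead of hard-coding it.
$Nodes = Import-Csv -Path "$PSScriptRoot\Nodes.csv" | ForEach-Object {
    @{
        Name           = $_.Name
        Guid           = $_.Guid
        RebootIfNeeded = [System.Convert]::ToBoolean($_.RebootIfNeeded)
        MofFile        = $_.MofFile
    }
}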

Where to get It

Once the new PowerShell Gallery is available (for public consumption) I’ll upload the module there. But in the meantime it is available on the Microsoft Script Center here:

DSCTools on Script Center

If you want to see some more example files (as well as sample node config files I’ve used in testing the module) you can check out the GitHub repository for the module:

DSCTools on GitHub

The GitHub repository contains additional files not available in the download from the Script Center, including detailed examples and test configuration files. It is also contained within a Visual Studio 2013 solution, so if you’re using VS2013 and the PowerShell VS add-in you can easily load it up.

Future Versions

Over the next few weeks I am planning to update the module so that some of the functions will create the node configuration MOF files by running the actual configuration PS1 files (provided in a nodes array or as parameters). This will eliminate another step when updating any node configuration files.

After creating this module it occurred to me that there might be an even better way to implement a DSC set up: using a basic DSC system XML configuration file that could define all the pull servers and nodes. This XML file could be passed to a cmdlet that could configure all the applicable servers and nodes from the content. So this is the next long-term goal I have for this module.

If anyone out there finds this module useful and has a request for additional features or finds a (shudder) bug, please let me know!