If you aren’t familiar with Pester yet, this article will give you a high-level overview of what Pester is, what you can achieve with it, and why it is a great idea to learn it and implement it in your environment. If you already know Pester, this article will give you a brief overview of how we have been implementing it in Windows Operations & Engineering at Swisscom.
In this article I’ll start by explaining the basic concepts behind Pester, and talk about its different use cases. In the second part of this post I will go through how we are using it within my team (Windows Operations & Engineering) at Swisscom. I’ll demonstrate how we use Pester to test a large framework we rely on, and how Pester helps us measure the compliance of our servers.
Pester is a unit test framework for PowerShell. It gives us the possibility to write code that automates the testing of our scripts, automatically validating (or not) that a script actually works as expected. This is a great capability that developers have been using for years already, and PowerShell finally got its own unit test framework: Pester.
The story behind Pester is very interesting: Pester was an open source community project that Microsoft forked and shipped with Windows 10. This is actually a big deal, since officially including a community project as part of the OS had never been done before by Microsoft. And we love it! 🙂
Pester is generally used to answer two main questions:
How do I ensure that the code I wrote actually works as expected?
This is generally what we refer to as ‘unit testing’. The principle is simple: you want to be sure that the code you send out works in all cases; that all the parameters have been tested; and that the script exits gracefully with a specific error message in case of an error. You also want to ensure that the script returns the values and types you expect, and that 100% (or maybe a bit less) of your code is covered by your tests. This way you can be confident that your script will work in production and won’t fail.
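To make this concrete, here is a minimal sketch of what such a unit test could look like. The function `Get-Square` is purely hypothetical, invented for illustration; the three ‘it’ blocks check the return value, the return type, and the error behavior:

```powershell
# Hypothetical function under test (for illustration only)
function Get-Square {
    param([Parameter(Mandatory)][int]$Number)
    return $Number * $Number
}

Describe 'Get-Square' {
    It 'returns the square of a positive number' {
        Get-Square -Number 4 | Should Be 16
    }
    It 'returns an integer' {
        Get-Square -Number 3 | Should BeOfType [int]
    }
    It 'throws when the parameter is not a number' {
        { Get-Square -Number 'abc' } | Should Throw
    }
}
```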
The second question would be:
How can I ensure that what I have configured using my script actually does what we intended it to do? This might sound a bit vague, but imagine a script that configures a server to implement some predefined standards: it sets registry keys, firewall rules, the WSUS server, backup settings, etc. It is a good idea to verify that the communication to external servers is working; for example, that the firewall rules that have been implemented actually opened the correct ports, and that the communication really takes place between your server and the remote resource. Opening the ports is an action that provides a service to our clients / end clients, and it makes sense to ensure that that service is actually working before we ship the server to the client.
This is what is called “Operational Validation“.
I’ll show in the next part how we implemented it at Swisscom in our Operations daily tasks.
Now that we have covered some of the fundamental concepts of unit testing, we can have a look at how we have implemented it in our team, Windows Operations & Engineering at Swisscom.
In my current team (Windows Operations & Engineering), all of the engineers use a custom-built framework to write our scripts. It helps us keep our scripts consistent throughout the team, and guarantees that the script author will have the same scripting experience (logging behavior, configuration file, outputs, etc.) on every script they write, and that the operator who executes the script (if any) will also have the same experience regardless of who wrote it.
Since several engineers depend on my framework, it is crucial that every change that I, or any of my colleagues, push to the repository is bug free. This is where we start to talk about writing unit tests with Pester.
To test our framework, we have around 50 tests that cover various aspects of it: creating a new script, executing it, checking whether the log files are generated, whether the log file contains our standard output, and so on.
The output of a Pester test will look similar to the screenshot below.
Figure 1 Pester Test results
Purple is informational, green means that the test passed, and red that it failed.
I would like to emphasize that having some red (indicating a failed test) is actually not a bad thing. It simply means that the Pester test caught something that needs to be corrected in your script, before an end user, or worse, a client, faced the error.
A simplified version of one of the tests I wrote can be found below.
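Since the original extract is described rather than shown here, this is a minimal reconstruction of what such a test could look like; the `$testFolder` path is an assumption made for illustration:

```powershell
# Example path, assumed for illustration
$testFolder = 'C:\MyFramework\Output'

Describe 'Framework folder structure' {
    It 'contains a logs subfolder' {
        # The expression is piped to 'should', which evaluates the condition
        Join-Path -Path $testFolder -ChildPath 'logs' | Should Exist
    }
}
```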
The ‘describe‘ block adds an informational layer (the purple part of the output), and the test itself is always done in an ‘it‘ block. If the ‘it‘ block is validated, the output will be green; otherwise, it will be red. The text added next to the ‘describe‘ or ‘it‘ blocks is informative, and can be anything that helps you explain which test you are calling and what it is doing.
In the above code extract, I simply check if the variable $testFolder contains a subfolder named ‘logs‘. I pipe my expression to the ‘should‘ keyword, which evaluates the test condition and either throws an error or validates my test.
Even though the Pester syntax is easy to understand, it doesn’t mean that complex tasks cannot be done with it.
In this next code extract, we do a few more checks.
It ensures that the framework we use to generate our script templates actually generates scripts in the format we expect, and that when a script is called, it generates the appropriate messages in the log file. In this particular case, I check whether two different messages are present; if only one of them is present, the overall test will fail.
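A hedged sketch of such a test might look like the following. The command `New-FrameworkScript`, the log path, and the two message strings are all hypothetical stand-ins for the real framework’s names:

```powershell
Describe 'Script template generation' {
    # Hypothetical framework command and log location, for illustration only
    $scriptPath = New-FrameworkScript -Name 'Demo'
    & $scriptPath
    $logContent = Get-Content -Path 'C:\Logs\Demo.log' -Raw

    # Both messages must be present for the overall test to pass
    It 'writes the start message to the log' {
        $logContent | Should Match 'Script started'
    }
    It 'writes the end message to the log' {
        $logContent | Should Match 'Script ended'
    }
}
```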
I use unit testing to validate blocks of code, or complete functions. Generally, I try to follow this pattern: one ‘describe’ block to test a function, and one ‘it’ block per functionality of that function that I want to test.
Operational Validation consists of writing Pester code not to test the script you wrote, but to test that the modifications or implementations your script makes actually answer your original business need. Basically, if you create a firewall rule to open a specific port, with the sole purpose of being able to access a web server and read data from a database, you want to be sure that you can actually interact with that web server and query the needed data. If you have successfully set the firewall rule and opened the right ports, but another firewall sits between your client server and your web server, or the network cable is simply unplugged, you would fail to provide the service you intended to, even though you successfully implemented that firewall rule.
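An Operational Validation test for that firewall scenario could be sketched like this. The server name and URL are assumptions for illustration; the point is that the test exercises the connection end to end rather than checking that the rule merely exists:

```powershell
Describe 'Web server connectivity' {
    It 'can reach the web server on port 443' {
        # Verifies the actual TCP connection, not just the local firewall rule
        (Test-NetConnection -ComputerName 'webserver01' -Port 443).TcpTestSucceeded |
            Should Be $true
    }
    It 'can query data from the web service' {
        # Hypothetical endpoint, assumed for illustration
        { Invoke-RestMethod -Uri 'https://webserver01/api/health' } | Should Not Throw
    }
}
```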
At Windows Operations & Engineering we use Operational Validation to ensure that our servers are compliant with our standards (security, configuration, etc.). This way, we can guarantee that the servers we provide are consistent with our standards. The same tests are used in a later step to validate that no configuration drift has occurred.
The tests consist of a number of Pester scripts that are deployed to one or more servers. A script executes the Pester tests and collects the data in a standard XML format (NUnit XML). This standard XML allows us to build automation around the results of our tests, and generate really cool reports out of it!
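Getting the results into NUnit XML is a one-liner with Invoke-Pester; the folder and file paths below are assumptions for illustration:

```powershell
# Run all tests in a folder and export the results as NUnit XML
Invoke-Pester -Script 'C:\Tests' `
              -OutputFile 'C:\Results\Server01.xml' `
              -OutputFormat NUnitXml
```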
I have written a PowerShell class that we use to generate either an individual report per server, or a global report, in different formats (docx, html, etc.).
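The real class is more elaborate, but a simplified, hypothetical sketch of the idea could look like this: parse each server’s NUnit XML result file and compute a per-server success percentage from the `total` and `failures` attributes of the NUnit 2.5 `test-results` root element:

```powershell
# Simplified, hypothetical sketch of a report-generator class (requires PowerShell 5+)
class PesterReport {
    [string]$ResultsFolder

    PesterReport([string]$Folder) {
        $this.ResultsFolder = $Folder
    }

    # Parse every NUnit XML file and return one summary object per server
    [object[]] GetSummary() {
        $summary = foreach ($file in Get-ChildItem -Path $this.ResultsFolder -Filter '*.xml') {
            [xml]$xml  = Get-Content -Path $file.FullName -Raw
            $results   = $xml.'test-results'
            $total     = [int]$results.total
            $failures  = [int]$results.failures
            [pscustomobject]@{
                Server            = $file.BaseName
                Total             = $total
                Failures          = $failures
                PercentageSuccess = [math]::Round((($total - $failures) / $total) * 100, 2)
            }
        }
        return $summary
    }
}
```

From such summary objects, rendering an HTML or docx report is then mostly a formatting exercise.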
The individual report is based on an open source project called “ReportUnit”, and you can see an example below.
Figure 2 individual test results
If needed, we can very quickly generate a report for a specific server and immediately see whether everything is as we expect it to be. But although the report looks really nice, it is not the one I use the most.
The report I use the most is the global report. It gives us an at-a-glance overview of our complete environment. The report is composed of five to six pages per server, which means it can get pretty big, so I’ll show only the most interesting parts of it.
Below you will find an extract from one of the first pages of the report. It contains a table that lists all the servers in a failure state (meaning that at least one test has failed). Using the ‘PercentageSuccess’ column, we can quickly see the average compliance rate per server.
Figure 3 Global test overview
This allows us to rapidly see where something is misconfigured. Each server has a detailed report of each test, containing the success state of the test, the time it took to execute, and a short description (as shown below).
Figure 4 Detailed test overview
Thanks to the bright colors, the failed tests are visible immediately. In the end, we have a detailed view of each failed test, with a short description of what failed, as demonstrated in the screenshot below.
Figure 5 Detail failed tests view
Following this methodology, we have a way to check the current state of our servers and identify any server that has been misconfigured (or modified) since it was staged. If this impacts a large number of servers, we see it in the global report and can then provide a fix for the whole group of servers in a timely fashion.
I would be curious to know if you have implemented Pester in your environment, and if so, how you use it. Only unit testing? Operational Validation as well? Please share your experience with us!
I hope you enjoyed reading this article as much as I enjoyed writing it, and I look forward to reading about your Pester implementations in the comment section below.
Stéphane Van Gulick
DevOps Engineer III