Unit testing MongoDB queries

When writing unit tests, it’s common to stop at the database access layer.
You might have a “dumb” data access layer that just passes stored procedure names and parameters to the SQL database, for instance.
It’s usually very hard to test this layer, since it requires the build server to have access to a SQL Server instance, whether remote or installed locally.
Plus, we’re leaving unit tests and entering integration tests here.

With a more advanced tool like Entity Framework, there are usually ways to test your complex EF queries by inserting fake data into a fake container, using tools like Test Doubles, InMemory, or Effort.

Using MongoDB, you might encounter the same problem: how do I test that my complex queries are working?

Here, for instance, I get the possible colors of a product; how do I know it works, using unit tests?
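
A sketch of the kind of query I mean, using the official C# driver (the Product class, collection name, and filter are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Driver;

[BsonIgnoreExtraElements]
public class Product
{
    public string Code { get; set; }
    public string Color { get; set; }
    public int Stock { get; set; }
}

public class ProductRepository
{
    private readonly IMongoCollection<Product> _products;

    public ProductRepository(IMongoDatabase database)
    {
        _products = database.GetCollection<Product>("products");
    }

    // The query under test: colors of the in-stock variants of a product.
    public async Task<List<string>> GetAvailableColorsAsync(string productCode)
    {
        return await _products
            .Find(p => p.Code == productCode && p.Stock > 0)
            .Project(p => p.Color)
            .ToListAsync();
    }
}
```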

MongoDB has an “inMemory” storage engine, but it’s reserved for the (paid) Enterprise edition. Fortunately, since 3.2, even the Community edition ships with a not-very-well-documented “ephemeralForTests” storage engine, which loads up an in-memory Mongo instance and does not store anything on the hard drive (but performs poorly). Exactly what we need!

Before running the data access layer tests, we will need to fire up an in-memory instance of MongoDB.
This instance will be shared by all the tests for the layer; otherwise, the test runner fires up new tests faster than the system releases its resources (file and port locks).

You will have to extract the MongoDB binaries somewhere in your source repository, and copy them beside your binaries on build.

The following wrapper provides a “Query” method that lets us access the Mongo instance through the command line, bypassing the data access layer, in order to insert test data or check insertion results.
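
A minimal sketch of such a wrapper (the binaries path, port, and database name are illustrative):

```csharp
using System;
using System.Diagnostics;

public class MongoDaemon : IDisposable
{
    public const string ConnectionString = "mongodb://localhost:27017";
    private readonly Process _process;

    public MongoDaemon()
    {
        // The ephemeralForTests engine keeps everything in memory.
        _process = Process.Start(new ProcessStartInfo
        {
            FileName = @"mongodb\mongod.exe",
            Arguments = "--storageEngine ephemeralForTests --dbpath . --port 27017",
            UseShellExecute = false,
            CreateNoWindow = true,
        });
    }

    // Runs a shell command against the instance, bypassing the data access
    // layer (used to insert test data or check insertion results).
    public string Query(string query)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = @"mongodb\mongo.exe",
            Arguments = "--quiet --eval \"" + query + "\" localhost:27017/test",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true,
        };

        using (var shell = Process.Start(startInfo))
        {
            var output = shell.StandardOutput.ReadToEnd();
            shell.WaitForExit();
            return output;
        }
    }

    public void Dispose()
    {
        if (!_process.HasExited)
        {
            _process.Kill();
        }
        _process.Dispose();
    }
}
```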

We’re using xUnit’s IClassFixture interface to fire up a MongoDaemon instance that will be shared by all the tests using it.
It means we need to clean up previously inserted test data at each run.
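
A test class using the fixture might look like this (reusing the sketches above; the collection and data are illustrative):

```csharp
using System.Threading.Tasks;
using MongoDB.Driver;
using Xunit;

public class ProductRepositoryTests : IClassFixture<MongoDaemon>
{
    private readonly MongoDaemon _daemon;

    public ProductRepositoryTests(MongoDaemon daemon)
    {
        _daemon = daemon;
        // The instance is shared between tests, so start from a clean slate.
        _daemon.Query("db.products.drop()");
    }

    [Fact]
    public async Task GetAvailableColors_returns_colors_of_in_stock_products()
    {
        _daemon.Query("db.products.insert({ Code: 'HAT', Color: 'red', Stock: 2 })");
        _daemon.Query("db.products.insert({ Code: 'HAT', Color: 'blue', Stock: 0 })");

        var database = new MongoClient(MongoDaemon.ConnectionString).GetDatabase("test");
        var repository = new ProductRepository(database);

        var colors = await repository.GetAvailableColorsAsync("HAT");

        Assert.Equal(new[] { "red" }, colors);
    }
}
```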

There you have it: a kind-of-easy way to test your Mongo data access layer.

Unit testing your Powershell scripts using Pester

In order to stabilize our middle office, we need to test it. Not that it’s buggy, but hey, testing is good.

We have a huge 500-line script that processes PDF files through a bunch of programs. We need to test a few things:

  • Every step and every piece of the code must work
  • The configuration files must be properly read
  • The proper programs must be run in the proper order
  • The processed PDFs must look like what we’re expecting

To do that, we have a lot of work to do.

There is a series of great articles on Pester on PowerShell Magazine.

Refactor your script

First, we need to extract the methods and code blocks we will test. Our script is not that hard to refactor into a bunch of functions, since it’s pretty well organized so far: it’s mostly a matter of moving independent code blocks into functions, grouped by feature and externalized.

Prefer using modules to do that: create your functions in .psm1 files. You can then just Import-Module mymodule.psm1 and use it the same way as if you were dot-sourcing your file (. .\myfile.ps1), but it allows you to do more awesome things later; for instance, you can list the available commands through Get-Command -Module mymodule, read the comment-based help of your module’s functions through Get-Help my-module-method, get tab expansion on your functions, etc.
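
For instance (the module name is illustrative):

```powershell
# Import the module, then explore it.
Import-Module .\mail.psm1

Get-Command -Module mail       # lists the functions the module exports
Get-Help Read-MailConfig       # shows the comment-based help, if any
```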

If you’re not sure how to refactor your script to extract units of work, I can’t really help you there, and you should go learn more about unit testing; there are a lot of places where you can do that.

Unit Test your extracted code

There is an awesome unit test and BDD tool called Pester. Download the latest nuget package into your scripts directory by running nuget install pester. PsGet also has a Pester package, but including the Pester files with your sources (or a way to get them like with Nuget) is very important if you intend to run your tests on a continuous integration platform, especially if you externalize them on a service like AppVeyor.

Create a _run_tests.ps1 file (or whatever your naming convention calls for), with the very simple following contents:
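
Something along these lines (the Pester path depends on the version nuget pulled):

```powershell
# _run_tests.ps1: load Pester from the nuget folder and run all tests.
Import-Module (Resolve-Path "$PSScriptRoot\Pester.*\tools\Pester.psm1")
Invoke-Pester
```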

Invoke-Pester will run all the “*.Tests.ps1” files it finds in the current directory. Unfortunately, the Pester Nuget package ships with Pester’s own tests, and it’s pretty annoying to see those thousands of unit tests in the middle of yours. You can either skip Invoke-Pester and roll your own “look for *.Tests.ps1 files except in the Pester folder” method, or (like I did) forget about nuget update pester and remove the test files from the Pester folder.

To create unit test files, you can either write them manually, or use the module’s New-Fixture command, which creates both a file to contain your functions and a test file to test them. If you’re working with modules, you will not really be able to use the full power of this command, but if you’re creating a new script, it provides you with a BDD workflow.
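
For instance:

```powershell
# Creates Clean-OldFiles.ps1 (the code) and Clean-OldFiles.Tests.ps1 (the tests).
New-Fixture -Path .\src -Name Clean-OldFiles
```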

Your test file for your module will look like this:
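
Roughly like this (Pester v3 syntax; the module and values are illustrative):

```powershell
# mail.Tests.ps1
Import-Module "$PSScriptRoot\mail.psm1" -Force

Describe "Read-MailConfig" {
    It "reads the SMTP server from the config file" {
        $config = Read-MailConfig "$PSScriptRoot\TestData\test-mail.config"
        $config.SmtpServer | Should Be "smtp.example.com"
    }
}
```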

As you can see, for now I’m using a custom config file with the expected values filled in. This is not the best way to unit test, so we’re going to use mocking.

Mock the system methods

Here is the true power of Pester and Powershell: the ability to mock system methods. Your method reads files from the disk? No problem. Just provide an alternative implementation, and you don’t have to set up a bunch of test data and config files.

My Read-MailConfig looks like this:
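
Roughly like this (the actual config format is illustrative):

```powershell
function Read-MailConfig([string] $path)
{
    # Reads an XML config file and returns its mail settings.
    $xml = [xml] (Get-Content $path)
    return @{
        SmtpServer = $xml.configuration.mail.smtpServer
        From       = $xml.configuration.mail.from
        To         = $xml.configuration.mail.to
    }
}
```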

So, there is a Get-Content call that I want to mock, controlling its return value. I can now modify my test so that the values it uses sit right next to it:
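
A sketch of the mocked version (values are illustrative):

```powershell
Describe "Read-MailConfig" {
    It "reads the SMTP server from the config content" {
        # No file on disk: Get-Content is mocked inside the module's scope.
        Mock Get-Content -ModuleName mail {
            '<configuration><mail smtpServer="smtp.example.com" from="a@example.com" to="b@example.com" /></configuration>'
        }

        $config = Read-MailConfig "irrelevant-path.config"
        $config.SmtpServer | Should Be "smtp.example.com"
    }
}
```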

Note the usage of -ModuleName mail in the Mock call: modules have their own scope (which is not the case for plain dot-sourced script files), and so need a bit more work to inject mocks.

My mail module actually sends emails through the System.Net.Mail classes, but Pester can’t mock .Net objects (note that .Net mocking frameworks can’t mock most system classes either).

To bypass that, we’re going to extract the .Net object calls into separate functions that do only that, mock this extraction, and not test the .Net method call itself:
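
Something like this (Send-Mail and Send-DotNetMail are illustrative names):

```powershell
function Send-Mail([string] $subject, [string] $body)
{
    $config = Read-MailConfig "mail.config"
    $message = New-Object -TypeName System.Net.Mail.MailMessage `
        -ArgumentList $config.From, $config.To, $subject, $body

    # The .Net call is isolated in its own function below.
    Send-DotNetMail $config.SmtpServer $message
}

function Send-DotNetMail([string] $smtpServer, $message)
{
    # Thin wrapper around the .Net call, kept dumb so we never need to test it.
    $client = New-Object -TypeName System.Net.Mail.SmtpClient -ArgumentList $smtpServer
    $client.Send($message)
}
```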

And the test:
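
```powershell
Describe "Send-Mail" {
    It "sends the message through the .Net wrapper" {
        Mock Read-MailConfig -ModuleName mail {
            @{ SmtpServer = "smtp"; From = "a@example.com"; To = "b@example.com" }
        }
        Mock Send-DotNetMail -ModuleName mail { }

        Send-Mail "a subject" "a body"

        # We only check that the wrapper was called, not the .Net behavior.
        Assert-MockCalled Send-DotNetMail -ModuleName mail -Exactly -Times 1
    }
}
```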

Use TestDrive to test file system processes

If you need to test complex file access, mocking system methods quickly becomes too hard. For instance, I need to test two functions: one that removes files older than X days, and one that removes empty folders. Using TestDrive is much simpler and more compact than mocking the Get-ChildItem cmdlet:
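
For instance, for the old-files cleanup (Remove-OldFiles is an illustrative name):

```powershell
Describe "Remove-OldFiles" {
    It "removes files older than 30 days and keeps recent ones" {
        # TestDrive: is an isolated, auto-cleaned PSDrive provided by Pester.
        $old = New-Item "TestDrive:\old.pdf" -ItemType File
        $old.LastWriteTime = (Get-Date).AddDays(-40)
        New-Item "TestDrive:\recent.pdf" -ItemType File | Out-Null

        Remove-OldFiles -Path "TestDrive:\" -Days 30

        Test-Path "TestDrive:\old.pdf"    | Should Be $false
        Test-Path "TestDrive:\recent.pdf" | Should Be $true
    }
}
```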

Unit testing custom querystring-based authorization on a WebApi controller

I needed to add a very simple authorization mechanism to my API: use a query string parameter “api_key”, so that it’s compatible with Swagger (using Swashbuckle, there is a field “api_key” in the Swagger UI) and is easily callable through Ruby On Rails.

Adapting a nice tutorial, here is what I did.

Implement the authorization filter

Create an interface for your API key “getter”:
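
Something as simple as this (names are illustrative):

```csharp
public interface IApiKeyProvider
{
    // Returns the API key that callers must provide.
    string GetApiKey();
}
```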

Implement this interface; here it’s extremely simple:
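
For instance, reading the key from the application settings (the AppSettings key name is illustrative):

```csharp
using System.Configuration;

public class ConfigurationApiKeyProvider : IApiKeyProvider
{
    public string GetApiKey()
    {
        return ConfigurationManager.AppSettings["ApiKey"];
    }
}
```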

Inject this interface through your dependency injector of choice. You don’t have to modify your controllers, which is great.

Then create a filter attribute to use this implementation:
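
A sketch of such an attribute (WebApi 1; Injection.Container is a hypothetical static wrapper around the LightInject container, used to resolve the provider):

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ApiKeyAuthorizeAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        // Resolved through the container, so tests can swap the implementation.
        var provider = Injection.Container.GetInstance<IApiKeyProvider>();

        var apiKey = actionContext.Request.RequestUri.ParseQueryString()["api_key"];

        if (apiKey == null || apiKey != provider.GetApiKey())
        {
            throw new HttpResponseException(
                actionContext.Request.CreateErrorResponse(
                    HttpStatusCode.Unauthorized, "Invalid or missing api_key."));
        }
    }
}
```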

Now you just have to add the [ApiKeyAuthorize] attribute to your controller(s), and every request will need the proper api_key query string parameter.

Test the filter

A few things to test: that your class uses this attribute, and that the attribute does what it says it does.

Test the attribute presence

Here I’m using xUnit and FluentAssertions.
It’s just a matter of listing the attributes on the class, and checking that one matching the attribute we created exists.
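
For instance (the controller name is illustrative):

```csharp
using FluentAssertions;
using Xunit;

public class OrdersControllerAttributeTests
{
    [Fact]
    public void Controller_is_protected_by_the_api_key_attribute()
    {
        typeof(OrdersController)
            .GetCustomAttributes(typeof(ApiKeyAuthorizeAttribute), inherit: true)
            .Should().NotBeEmpty();
    }
}
```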

Test the attribute inner workings

What are we testing there? That the attribute throws an HttpResponseException when the parameter is missing or its value is wrong, and that it doesn’t throw when the key matches.

Setup the tests

Using XUnit and Moq, the test setup looks like this:
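
A sketch of that setup (reusing the hypothetical InjectionSetup helper shown just below):

```csharp
using System;
using System.Web.Http;
using System.Web.Http.Controllers;
using FluentAssertions;
using Moq;
using Xunit;

public class ApiKeyAuthorizeAttributeTests
{
    private const string ValidKey = "the-secret-key";
    private readonly ApiKeyAuthorizeAttribute _attribute;
    private readonly HttpActionContext _context;

    public ApiKeyAuthorizeAttributeTests()
    {
        // Moq provides a controlled IApiKeyProvider, registered through the helper.
        var provider = new Mock<IApiKeyProvider>();
        provider.Setup(p => p.GetApiKey()).Returns(ValidKey);
        InjectionSetup.Register(provider.Object);

        _attribute = new ApiKeyAuthorizeAttribute();
        _context = ContextUtil.CreateActionContext();
    }

    // The three [Fact] methods shown further down go here.
}
```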

The ContextUtil.CreateActionContext method can be picked from the ASP.Net source. The corresponding tests can be found here.

My InjectionSetup.Register is a unit-test-specific injection helper that lets me register a specific instance instead of letting the container create one, here using LightInject:
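
Something like this (Injection.Container being the hypothetical wrapper from earlier):

```csharp
public static class InjectionSetup
{
    public static void Register(IApiKeyProvider instance)
    {
        // LightInject: bind the interface to this specific instance.
        Injection.Container.RegisterInstance<IApiKeyProvider>(instance);
    }
}
```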

Test the attribute

Still using xUnit and FluentAssertions, three simple tests check that the responses are what we expect:
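
Inside the test class set up above:

```csharp
[Fact]
public void Throws_when_the_api_key_is_missing()
{
    _context.Request.RequestUri = new Uri("http://localhost/api/orders");

    Action act = () => _attribute.OnAuthorization(_context);

    act.ShouldThrow<HttpResponseException>();
}

[Fact]
public void Throws_when_the_api_key_is_wrong()
{
    _context.Request.RequestUri = new Uri("http://localhost/api/orders?api_key=wrong");

    Action act = () => _attribute.OnAuthorization(_context);

    act.ShouldThrow<HttpResponseException>();
}

[Fact]
public void Passes_when_the_api_key_matches()
{
    _context.Request.RequestUri = new Uri("http://localhost/api/orders?api_key=" + ValidKey);

    Action act = () => _attribute.OnAuthorization(_context);

    act.ShouldNotThrow();
}
```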


Testing an Entity Framework layer in database-first mode

In your app, you may have a “query” layer, where all your Linq queries are regrouped. If you don’t, you should. Having your queries inside your controller leads to poor testing and strong coupling.

You will want to test that your queries return what they say they return. In order to do that, you need to mock your Entity Framework entities container.

There is an awesome and complete tutorial here, explaining everything.

If you’re stuck on Visual Studio 2010, you will need to do a few things. First, download and install the ADO.NET DbContext Generator code template, so that your entities use the DbSet type. Then, open your Entity Framework container, right-click inside it and select “Add a code generation element” (or something like that, my VS is not in English), and select the DbContext elements you just downloaded. You will then be able to customize the way your entities are generated (awesome tool).

In all Visual Studio versions, if you’re not using code-first like the tutorial, you will have to modify the template generation to mark your entity sets as virtual (so that they can be mocked). Open your xxx.Context.tt file, find the line with DbSet<<#=Code.Escape(entitySet.ElementType)#>> and add virtual in front of it. Then check the generated xxx.Context.cs file to be sure it’s not doing anything crazy.

While you’re at it, since you’re modifying the code templates, follow these awesome instructions to xml-document your generated entities.

Now, following the MSDN article, in your query tests (the ones returning a set of data), you will have to manually create the data to return, and “bind” it to the mock instance. Since you will have to do that in each of your query tests, don’t forget to extract it to a method:
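
A sketch of such a helper, following the MSDN article’s pattern of backing a DbSet mock with an in-memory IQueryable (the Order entity and Orders set are illustrative):

```csharp
using System.Data.Entity;
using System.Linq;
using Moq;

public static class ContextMockFactory
{
    // Builds a mocked context whose Orders set serves the given data.
    public static Mock<MyEntities> CreateWithOrders(params Order[] orders)
    {
        var data = orders.AsQueryable();

        var set = new Mock<DbSet<Order>>();
        set.As<IQueryable<Order>>().Setup(m => m.Provider).Returns(data.Provider);
        set.As<IQueryable<Order>>().Setup(m => m.Expression).Returns(data.Expression);
        set.As<IQueryable<Order>>().Setup(m => m.ElementType).Returns(data.ElementType);
        set.As<IQueryable<Order>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

        var context = new Mock<MyEntities>();
        context.Setup(c => c.Orders).Returns(set.Object);
        return context;
    }
}
```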

Faking objects with FakeItEasy

Remember that time (yesterday) when I created my own fake implementations for my tests? Man, was I crazy.

There is an awesome library called FakeItEasy, which eases up writing your fake implementations.

Now, instead of implementing your own IEntitiesContainer (or whatever your data layer injection interface is), you can tell FakeItEasy to generate a new fake of this interface, and you get a bunch of assertion methods on it thrown in for good measure.
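
For instance (OrdersController, GetOrders, and the DTO are illustrative names):

```csharp
using FakeItEasy;
using Xunit;

public class OrdersControllerFakeTests
{
    [Fact]
    public void Get_returns_orders_from_the_container()
    {
        // Generate a fake instead of hand-writing a fake class.
        var entities = A.Fake<IEntitiesContainer>();
        A.CallTo(() => entities.GetOrders())
            .Returns(new[] { new OrderDto { OrderNumber = 1 } });

        var controller = new OrdersController(entities);
        var result = controller.Get();

        // The fake records its calls, so assertions come for free.
        A.CallTo(() => entities.GetOrders()).MustHaveHappened();
    }
}
```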

It also helps you make sure you’re testing the right things. For instance, I had a test method return a list of paginated results, and went to great lengths to make sure the proper number of items was returned. After spending a bit of time trying to refactor my test (because FakeItEasy doesn’t make it easy to match specific parameters), I found out I don’t care about the number of returned results, since this is tested elsewhere.

Creating a web API using ASP.Net WebApi over a legacy database

I’m trying to create a coherent API to access a crazy legacy database, so I decided to use an Asp.Net WebApi project. Since I want to get a somewhat sane result, I’ll test everything.

First things first: create a new WebApi project. It’s a project type, located under Asp.Net MVC websites.
Don’t forget to update the project’s nuget packages ASAP, since doing it too late will make you tear your hair out over broken links and missing files.

I will not go over creating API controllers, models, etc., because there are a billion websites to help you with that, and it’s pretty straightforward. Instead, I will try to regroup in a single place all the “best practices” for dealing with a web API, applied to the problems of a legacy database. I will also not dwell on the various concepts (data transfer objects, dependency injection, etc.); you can read about them at length with a quick Google search.

Version your API

If your project has any kind of risk associated with it (which means: once in production, will breaking it make you lose money?), you should version your API. So that your website can use the latest and most awesome version, but an executable running somewhere lost and forgotten in the bowels of your middle office doesn’t suddenly (and probably silently) crash once a method returns differently-named fields.

For that, there are several methods, each with its pros and cons. You can read a nice summary (and how to include all the main methods at once in your project) here.
I decided to use URL versioning, because it’s the simplest to use and understand, and the most “visible”. Working with various contractors that use various languages and technologies, I always have to choose the least painful option.

To implement URL-based API versioning, the simplest, most painless option is attribute-based routing. Make sure you have the AttributeRouting.WebApi Nuget package, and just add attributes to your controller’s methods:
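
A sketch of what that looks like (the routes and method bodies are illustrative):

```csharp
using System.Web.Http;
using AttributeRouting.Web.Http;

public class OrdersController : ApiController
{
    [GET("v1/orders/{id}")]
    public string GetOrderV1(int id)
    {
        // v1 stays frozen for existing clients.
        return "v1: order " + id;
    }

    [GET("v2/orders/{id}")]
    public string GetOrderV2(int id)
    {
        // v2 is free to evolve without silently breaking v1 callers.
        return "v2: order " + id;
    }
}
```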

I advise you against using namespace-based versioning (as seen here), because it makes your life a living nightmare. Everything you will be using (like Swagger) will assume your API is not versioned, so you have to keep default routes, and namespace-based versioning doesn’t allow that (unless you succeed in modifying the linked class, which I didn’t try).

Create a public data layer

Publish Data Transfer Objects (DTO)

One annoying thing when you try to “publish” entities straight from Entity Framework is that you get all the foreign-key relationship stuff in your JSON results.
It also strongly couples your underlying model to your API, which may be OK, but is not awesome, especially considering that our model is crazy (still working with a legacy DB, remember?).

So, to solve these problems (and also because I am not fond of publishing the underlying data model directly), we’ll use a design pattern called Data Transfer Objects.

To implement this pattern, let’s create a Model project, create the Entity Framework entities inside, and publish them through custom and “clean” objects.

Remember that we’re dealing with a legacy DB. Our crazy fields, with their random casing, cryptic names and insane types (for instance, everything in my main table is a varchar(100), even dates, booleans and numbers), would really love a nice grooming. You’re in luck, because tools exist to make your life easier.

We will use AutoMapper to bind the entities to the custom data objects. This awesome tool will do much of the grunt work for us.

Let’s pretend that in the actual DB, we have entities looking a bit like this:
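
Something like this (the field names and values are illustrative of the actual madness):

```csharp
// Everything is a varchar(100) in the legacy table, so everything
// comes out of Entity Framework as a string.
public class Order
{
    public string ordernumber { get; set; }   // e.g. "42"
    public string ORDER_DATE { get; set; }    // e.g. "20150310"
    public string isPaid { get; set; }        // "0" or "1"
}
```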

Since having every field as a string, with random casing, is clearly insane (or lazy, or both, who knows?), we want to map it to an object like this:
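
```csharp
using System;

public class OrderDto
{
    public int OrderNumber { get; set; }
    public DateTime OrderDate { get; set; }
    public bool IsPaid { get; set; }
}
```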

To do that, we will use AutoMapper, so that we don’t have to manually write object converters. It doesn’t spare us from writing code, though; it just means that all the mapping code can be kept in a single place.

System types are best converted using system conversion, but in my case the dates have some weird things going on, so I created my own converter.
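
A sketch of the mapping configuration (AutoMapper’s static API of the time; the yyyyMMdd format is illustrative of my weird legacy dates):

```csharp
using System;
using System.Globalization;
using AutoMapper;

public static class MappingConfig
{
    public static void Configure()
    {
        // Type-level converters handle the string madness in one place.
        Mapper.CreateMap<string, int>().ConvertUsing(s => int.Parse(s.Trim()));
        Mapper.CreateMap<string, bool>().ConvertUsing(s => s.Trim() == "1");
        Mapper.CreateMap<string, DateTime>().ConvertUsing(s =>
            DateTime.ParseExact(s.Trim(), "yyyyMMdd", CultureInfo.InvariantCulture));

        // Member mapping handles the random casing and cryptic names.
        Mapper.CreateMap<Order, OrderDto>()
            .ForMember(d => d.OrderNumber, o => o.MapFrom(s => s.ordernumber))
            .ForMember(d => d.OrderDate, o => o.MapFrom(s => s.ORDER_DATE))
            .ForMember(d => d.IsPaid, o => o.MapFrom(s => s.isPaid));
    }
}
```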

Note that you can also write a custom converter to convert between Order and OrderDto. However, that makes you write mapping methods “by hand” in various places like constructors, and I like having all the casting in one place, even though it can get quite long after a while. The mapping can also be much more complicated than the above example, as seen here for instance, and AutoMapper helps a lot in those cases.

Use dependency injection to publish the DTOs

To help with the next step (writing tests), we will use dependency injection in our controllers.

In your Model project, create an IEntitiesContainer interface with all the methods to fetch the data you need.
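
For instance (the method list is illustrative):

```csharp
using System.Collections.Generic;

public interface IEntitiesContainer
{
    IEnumerable<OrderDto> GetOrders();
    OrderDto GetOrderById(int orderNumber);
}
```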

Then, still in your Model project, implement this interface in a class that will wrap around the actual Entity Framework container.
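
A sketch of that wrapper (MyEntities being the EF-generated context):

```csharp
using System.Collections.Generic;
using System.Linq;
using AutoMapper;

public class EntitiesContainer : IEntitiesContainer
{
    private readonly MyEntities _context = new MyEntities();

    public IEnumerable<OrderDto> GetOrders()
    {
        // Materialize first, then map (see the Project().To() note below).
        return _context.Orders
            .ToList()
            .Select(Mapper.Map<Order, OrderDto>);
    }

    public OrderDto GetOrderById(int orderNumber)
    {
        var key = orderNumber.ToString();
        var order = _context.Orders.FirstOrDefault(o => o.ordernumber == key);
        return order == null ? null : Mapper.Map<Order, OrderDto>(order);
    }
}
```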

Note that newing up MyEntities() directly like this is not the best approach, but solving it is well documented.

Note that we’re not using AutoMapper’s Project().To(), because it doesn’t work great with Entity Framework when using type conversion.

Now use this new layer in your controller:
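
```csharp
using System.Collections.Generic;
using System.Web.Http;

public class OrdersController : ApiController
{
    private readonly IEntitiesContainer _entities;

    public OrdersController(IEntitiesContainer entities)
    {
        _entities = entities;
    }

    public IEnumerable<OrderDto> Get()
    {
        return _entities.GetOrders();
    }

    public OrderDto Get(int id)
    {
        return _entities.GetOrderById(id);
    }
}
```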

Now we can use the controller with any implementation of IEntitiesContainer in our tests.

Test the controller

Now that we have a foundation for the project, we can finally do some yummy tests. I will use xUnit (because it’s a great tool) and FluentAssertions (because it makes tests easier to read and write).

I will not go over testing the model, since it’s pretty well documented. However, testing the WebApi controller requires a bit more work.

In order not to hit the actual DB, we’ll use the dependency injection we’ve set up. Create an implementation of IEntitiesContainer in your test project. I suggest using an awesome method that I found on a blog: create a mostly-empty implementation, for which each test provides the behavior it needs, on a case-by-case basis.
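
A sketch of that fake (names are illustrative):

```csharp
using System;
using System.Collections.Generic;

// Each test fills in only the behavior it needs.
public class FakeEntitiesContainer : IEntitiesContainer
{
    public Func<IEnumerable<OrderDto>> OnGetOrders { get; set; }
    public Func<int, OrderDto> OnGetOrderById { get; set; }

    public IEnumerable<OrderDto> GetOrders()
    {
        return OnGetOrders();
    }

    public OrderDto GetOrderById(int orderNumber)
    {
        return OnGetOrderById(orderNumber);
    }
}
```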

Then your controller test can use this fake like this:
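
```csharp
using FluentAssertions;
using Xunit;

public class OrdersControllerTests
{
    [Fact]
    public void Get_returns_the_orders_from_the_data_layer()
    {
        var fake = new FakeEntitiesContainer
        {
            OnGetOrders = () => new[] { new OrderDto { OrderNumber = 42 } }
        };
        var controller = new OrdersController(fake);
        TestsBoostrappers.SetupControllerForTests(controller);

        var result = controller.Get();

        result.Should().HaveCount(1);
    }
}
```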

The TestsBoostrappers.SetupControllerForTests() method configures the routes of the controller. If we don’t do that, the controller won’t know what to do.

One last, but very important, point of interest for your tests: in order to follow REST principles, your API should return specific data after certain actions. For instance, after an item creation, it should return the URL of the new item. Your POST method may look like this:
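
A sketch, inside the controller shown earlier (AddOrder is a hypothetical save method on the container, and the route name is illustrative):

```csharp
public HttpResponseMessage Post(OrderDto order)
{
    var created = _entities.AddOrder(order);  // hypothetical save method

    // REST says: return 201 Created plus the URL of the new resource.
    var response = Request.CreateResponse(HttpStatusCode.Created, created);
    response.Headers.Location = new Uri(
        Url.Link("DefaultApi", new { id = created.OrderNumber }));
    return response;
}
```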

This simple this.Url creates a URL based on your routes and such, but if you’re using WebApi 1 like me (because you’re stuck on .Net 4.0 in Visual Studio 2010), none of the documented solutions will work, and you will want to flip your desk. To solve that, I simply used another dependency injection to wrap the UrlHelper class, and provided my own implementation in my tests.

Document your API

Documenting your API is pretty easy using Swagger. A Nuget package wrapping the implementation for .Net is available as Swashbuckle. Add it to your API project, and you can then access the documentation through /swagger/.