Unit testing MongoDB queries

When writing unit tests, it’s common to stop at the database access layer.
You might have a “dumb” data access layer that just passes stored procedure names and parameters to the SQL database, for instance.
It’s usually very hard to test this layer, since it requires that your build server has access to a running SQL Server instance, local or not.
Besides, at that point we’re leaving unit tests and entering integration-test territory.

With a more advanced tool like Entity Framework, you can usually test your complex EF queries by inserting fake data into a fake container, using tools like Test Doubles, InMemory, or Effort.

With MongoDB, you run into the same problem: how do I test that my complex queries work?

Here, for instance, I get the possible colors of a product; how do I know it works, using unit tests?
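For illustration, here’s a minimal sketch of such a query using the 2.x C# driver; the ProductRepository and Product names are mine:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using MongoDB.Driver;

public class Product
{
    public string Id { get; set; }
    public List<string> Colors { get; set; }
}

public class ProductRepository
{
    private readonly IMongoCollection<Product> _products;

    public ProductRepository(IMongoDatabase database)
    {
        _products = database.GetCollection<Product>("products");
    }

    // The kind of query we want to cover with unit tests.
    public Task<List<string>> GetColorsAsync(string productId)
    {
        return _products
            .Find(p => p.Id == productId)
            .Project(p => p.Colors)
            .FirstOrDefaultAsync();
    }
}
```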

MongoDB has an “inMemory” storage engine, but it’s reserved for the (paid) Enterprise edition. Fortunately, since 3.2, even the Community edition ships a not-very-well-documented “ephemeralForTests” storage engine, which runs an in-memory Mongo instance and stores nothing on the hard drive (at the cost of poor performance). Exactly what we need!

Before running the data access layer tests, we will need to fire up an in-memory instance of MongoDB.
This instance will be shared by all the tests for the layer; otherwise, the test runner fires up new tests faster than the system releases resources (file and port locks).

You will have to extract the MongoDB binaries somewhere in your sources repository, and copy them beside your binaries on build.

The following wrapper provides a “Query” method that lets us access the Mongo instance through the command line, bypassing the data access layer, in order to insert test data or check insertion results.
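Here’s a sketch of such a wrapper (paths, port and member names are assumptions; adapt them to your repository layout):

```csharp
using System;
using System.Diagnostics;

public class MongoDaemon : IDisposable
{
    public const string ConnectionString = "mongodb://localhost:27017";

    private readonly Process _process;

    public MongoDaemon()
    {
        // Binaries copied beside the test assemblies on build.
        _process = Process.Start(new ProcessStartInfo
        {
            FileName = "mongodb/mongod.exe",
            Arguments = "--storageEngine ephemeralForTests --dbpath . --port 27017",
            UseShellExecute = false,
            CreateNoWindow = true
        });
    }

    // Runs a command through the mongo shell, bypassing the data access
    // layer, to insert test data or check insertion results.
    public string Query(string query)
    {
        var shell = Process.Start(new ProcessStartInfo
        {
            FileName = "mongodb/mongo.exe",
            Arguments = "--quiet --eval \"" + query + "\"",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        });
        var output = shell.StandardOutput.ReadToEnd();
        shell.WaitForExit();
        return output;
    }

    public void Dispose()
    {
        if (!_process.HasExited)
        {
            _process.Kill();
        }
        _process.Dispose();
    }
}
```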

We’re using xUnit’s IClassFixture interface to fire up a MongoDaemon instance that will be shared by all the tests using it.
This means we need to clean up previously inserted test data at each run.
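A test class using the fixture might look like this (FluentAssertions for the asserts; names reused from the sketches above):

```csharp
using System.Threading.Tasks;
using FluentAssertions;
using MongoDB.Driver;
using Xunit;

public class ProductRepositoryTests : IClassFixture<MongoDaemon>
{
    private readonly MongoDaemon _daemon;
    private readonly ProductRepository _repository;

    public ProductRepositoryTests(MongoDaemon daemon)
    {
        _daemon = daemon;

        // The fixture is shared, so drop data left over by other tests.
        _daemon.Query("db.products.drop()");

        var client = new MongoClient(MongoDaemon.ConnectionString);
        _repository = new ProductRepository(client.GetDatabase("test"));
    }

    [Fact]
    public async Task GetColors_Returns_The_Product_Colors()
    {
        _daemon.Query("db.products.insert({ _id: 'p1', Colors: ['red', 'blue'] })");

        var colors = await _repository.GetColorsAsync("p1");

        colors.Should().BeEquivalentTo("red", "blue");
    }
}
```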

There you have it: a kind-of-easy way to test your Mongo data access layer.

Start a WPF application in the notification area with Caliburn.Micro

Sometimes you’re writing a quick utility app that you want to start directly in the notification area (near the clock), without displaying anything on startup.
It’s actually very simple using Caliburn.Micro.

You have probably already customized your application bootstrapper. All you have to do is change the OnStartup method from DisplayRootViewFor<T> to displaying a custom notification icon. Here I’m using the Hardcodet.NotifyIcon.Wpf Nuget package:
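Here’s a minimal sketch of the change (the icon file name and tooltip text are placeholders):

```csharp
using System.Windows;
using Caliburn.Micro;
using Hardcodet.Wpf.TaskbarNotification;

public class AppBootstrapper : BootstrapperBase
{
    private TaskbarIcon _trayIcon;

    public AppBootstrapper()
    {
        Initialize();
    }

    protected override void OnStartup(object sender, StartupEventArgs e)
    {
        // Instead of DisplayRootViewFor<ShellViewModel>(): create the
        // notification area icon, and show no window at all.
        _trayIcon = new TaskbarIcon
        {
            Icon = new System.Drawing.Icon("app.ico"),
            ToolTipText = "My utility app"
        };
    }
}
```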

It seems obvious in retrospect, but it took me a while to find this, because I don’t modify the app bootstrapper often, so I forgot about this method.

Don’t forget to add behavior to this icon! Double-click, context menu…

I have created a Caliburn.Micro + MahApps template app if you want a starting point: https://github.com/cosmo0/Caliburn.MahApps.Metro.Template

WPF Behaviors: adding mouse-event adorners

This is part 3 of the WPF drag & drop exploration. Part 2 can be found here.

Let’s recap what we have right now: a sample application that allows us to freely drag items across an area. Since multiple items might stack on top of each other, we want the user to be sure which item will be moved. For that, we want to add a visual indication on the item. This kind of thing is called an adorner in WPF.

Now, you will see many samples of code-driven adorner implementations. But we don’t want to break the MVVM pattern, so we want the adorner to be defined in the view.

So we’ll create an Adorner that allows us to use a DataTemplate. For that, I’ll just copy some existing code:
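Here’s the gist of it, adapted from the implementations floating around (the class name is mine):

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Media;

// An adorner that renders an arbitrary DataTemplate on top of the
// adorned element.
public class TemplatedAdorner : Adorner
{
    private readonly ContentPresenter _presenter;

    public TemplatedAdorner(UIElement adornedElement, DataTemplate template)
        : base(adornedElement)
    {
        _presenter = new ContentPresenter
        {
            ContentTemplate = template,
            // Expose the adorned element's DataContext so the template
            // can bind to the item view model.
            Content = ((FrameworkElement)adornedElement).DataContext
        };
        AddVisualChild(_presenter);
    }

    protected override int VisualChildrenCount
    {
        get { return 1; }
    }

    protected override Visual GetVisualChild(int index)
    {
        return _presenter;
    }

    protected override Size MeasureOverride(Size constraint)
    {
        _presenter.Measure(constraint);
        return AdornedElement.RenderSize;
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        _presenter.Arrange(new Rect(finalSize));
        return finalSize;
    }
}
```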

We can now use this adorner in the behavior.
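Roughly like this, assuming the behavior exposes an AdornerTemplate dependency property (the member names are mine):

```csharp
// Inside the drag behavior: show the adorner on mouse-over, remove it
// when the mouse leaves.
private TemplatedAdorner _adorner;

private void ShowAdorner()
{
    var layer = AdornerLayer.GetAdornerLayer(AssociatedObject);
    if (layer == null || _adorner != null) return;

    _adorner = new TemplatedAdorner(AssociatedObject, AdornerTemplate);
    layer.Add(_adorner);
}

private void HideAdorner()
{
    if (_adorner == null) return;

    var layer = AdornerLayer.GetAdornerLayer(AssociatedObject);
    if (layer != null) layer.Remove(_adorner);
    _adorner = null;
}
```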

Now, we just need to create the adorner’s DataTemplate and wire up the events. We have to add an AdornerDecorator to the control tree, otherwise the adorner may behave unpredictably (it would be attached to the window), or not work at all.

Note that from the adorner’s DataTemplate, we set the DataContext property, so that we can bind the adorner’s properties directly to the ones of the item view model.

Now, a black border will appear on mouse-over and disappear when the mouse leaves. There are a few problems with this method, most notably that the adorner flickers while the mouse stays over it, but we’ll try to tackle them later on.

WPF Behaviors: switching to System.Windows.Interactivity.Behavior

This is part 2 of the WPF drag & drop exploration. Part 1 can be found here. Among other things, this CodeProject sample helped me much.

Yesterday I created a WPF behavior following WPF conventions for custom dependency properties. While trying to add an adorner to my item (more on that later), I noticed many samples were using System.Windows.Interactivity.Behavior<T>, which I thought was Blend-related, but apparently is not. Because it is much more concise to write and provides useful overridable methods, I decided to switch to it instead.
In addition, with the previous code, I had a problem when defining multiple dependency properties on my behavior, so I needed to find a way to solve this also.

The first thing, of course, is to inherit from Behavior<T>. You will find it in the System.Windows.Interactivity namespace, which is not part of .Net itself: you can grab it through the Blend SDK or a Nuget package, and it’s also bundled with MVVM frameworks like Caliburn.Micro or MVVM Light.

Then, we can remove the GetXxx and SetXxx methods and replace them with an actual property (that does the same thing). We can also access the behavior instance from the dependency property callback, so we don’t need the singleton instance anymore. We will, however, need to attach the mouse events to the draggable item through ICommands. Since there is no standard ICommand implementation we can use, we must create one ourselves. I have copied a “standard” implementation from CodeProject, and you will find many similar ones with a quick Google search:
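For reference, the usual implementation looks like this:

```csharp
using System;
using System.Windows.Input;

// The classic "relay" command: wraps delegates into an ICommand.
public class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}
```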

This command will be the bridge between the view and the other layers of the application.

Now, on to inheriting from Behavior<T>. You will notice that I have changed a few names here and there, because they no longer reflected the reality of the application: IDragDropHandler becomes IDraggable; the DropHandler property becomes DraggableItem. I have also replaced the mouse event names with descriptive names (OnStartDrag instead of “mouse button down”), especially since we will now be able to bind these handlers to any event.

You will notice that we don’t attach the commands to any particular item event anymore. This is because it’s handled by the view. It kind of makes sense, because a different device may have a different method of dragging. The rest of the view can be found in part 1.

The event handlers are also similar to the ones in part 1. The only difference is that they access the item through the Behavior<T> properties. For instance:
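Here’s a condensed sketch of the reworked behavior; the exact original listing differs, and the drag math is simplified:

```csharp
using System.Windows;
using System.Windows.Input;
using System.Windows.Interactivity;

// The renamed interface from part 1.
public interface IDraggable
{
    void Moved(double deltaX, double deltaY);
    void Dropped();
}

public class DragBehavior : Behavior<FrameworkElement>
{
    public static readonly DependencyProperty DraggableItemProperty =
        DependencyProperty.Register("DraggableItem", typeof(IDraggable),
            typeof(DragBehavior));

    // A real property instead of the GetXxx/SetXxx static methods.
    public IDraggable DraggableItem
    {
        get { return (IDraggable)GetValue(DraggableItemProperty); }
        set { SetValue(DraggableItemProperty, value); }
    }

    // Bound to whatever events the view chooses.
    public ICommand StartDragCommand { get; private set; }
    public ICommand DragCommand { get; private set; }
    public ICommand StopDragCommand { get; private set; }

    private bool _isDragging;
    private Point _start;

    protected override void OnAttached()
    {
        base.OnAttached();
        StartDragCommand = new RelayCommand(_ => OnStartDrag());
        DragCommand = new RelayCommand(_ => OnDrag());
        StopDragCommand = new RelayCommand(_ => OnStopDrag());
    }

    private void OnStartDrag()
    {
        _isDragging = true;
        _start = Mouse.GetPosition(AssociatedObject);
    }

    // The handlers access the item through the behavior's own instance
    // properties; no singleton needed anymore.
    private void OnDrag()
    {
        if (!_isDragging || DraggableItem == null) return;

        var position = Mouse.GetPosition(AssociatedObject);
        DraggableItem.Moved(position.X - _start.X, position.Y - _start.Y);
    }

    private void OnStopDrag()
    {
        _isDragging = false;
        if (DraggableItem != null) DraggableItem.Dropped();
    }
}
```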

That’s it! Your behavior event handlers are now linked through the view, which means better testing, and better flexibility.

WPF Drag&drop items in a canvas: communicate between behavior and ViewModel

The source code of the sample application is available at GitHub. It has been strongly inspired by Gong WPF DragDrop and has started life as a copy of another “WPF behavior lab”.

The final goal of the application is to allow a user to freely drag an item in a canvas, and that the items “snap” among themselves, so that the user can align them easily. I’m taking it as an opportunity to learn more about WPF.

The first thing I had to learn was how to create and use custom behaviors. They allow you to bind elements of the View to the ViewModel using custom properties.
In “pure” WPF (Blend has different behavior conventions), a behavior is a regular class with a few conventions to declare a custom property, called a Dependency Property. It’s pretty easy to bind a simple value (boolean, integer…), and examples are plentiful.

Now, in the view, I want to say “when the user moves this item, this method should be run”, because I want not only the item to move, but also the container to compare the item’s new position to its neighbors’. A custom property can be anything a variable can be, but not a method. To circumvent this, the behavior must be aware of the ViewModel; but obviously, we don’t want to strongly couple the behavior to the ViewModel. So:

  • The ViewModel will have to implement an interface
  • The View will be bound to this ViewModel through {Binding}
  • The Behavior will use this interface’s methods to send messages to the ViewModel

The IDragDropHandler interface is very simple. The Moved method will notify the ViewModel that the item has been moved, and the Dropped method that the mouse button has been released.
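Something like this (exact signatures assumed):

```csharp
// Implemented by the item viewmodel; the behavior only knows this
// interface, not the viewmodel itself.
public interface IDragDropHandler
{
    // Notifies the viewmodel that the item has been moved.
    void Moved(double deltaX, double deltaY);

    // Notifies the viewmodel that the mouse button has been released.
    void Dropped();
}
```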

Now, the behavior will use these methods. To do that, we need to create mouse click and move handlers and assign them to the UIElement. Because the OnXxxChanged method is static, but the click/move handlers are not, we need a way to “memorize” the handlers. We’ll do that through a singleton instance:
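A sketch of the idea (simplified; the singleton is one of the weaknesses addressed in part 2):

```csharp
using System.Windows;
using System.Windows.Input;

public class DragBehavior
{
    // Single shared instance, so the static property-changed callback
    // can reach the non-static mouse handlers.
    private static readonly DragBehavior Instance = new DragBehavior();

    private IDragDropHandler _handler;
    private bool _isDragging;
    private Point _start;

    public static readonly DependencyProperty DropHandlerProperty =
        DependencyProperty.RegisterAttached(
            "DropHandler", typeof(IDragDropHandler), typeof(DragBehavior),
            new PropertyMetadata(OnDropHandlerChanged));

    public static IDragDropHandler GetDropHandler(DependencyObject target)
    {
        return (IDragDropHandler)target.GetValue(DropHandlerProperty);
    }

    public static void SetDropHandler(DependencyObject target, IDragDropHandler value)
    {
        target.SetValue(DropHandlerProperty, value);
    }

    private static void OnDropHandlerChanged(
        DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var element = (UIElement)d;
        Instance._handler = (IDragDropHandler)e.NewValue;
        element.MouseLeftButtonDown += Instance.OnMouseDown;
        element.MouseMove += Instance.OnMouseMove;
        element.MouseLeftButtonUp += Instance.OnMouseUp;
    }

    private void OnMouseDown(object sender, MouseButtonEventArgs e)
    {
        _isDragging = true;
        _start = e.GetPosition((IInputElement)sender);
    }

    private void OnMouseMove(object sender, MouseEventArgs e)
    {
        if (!_isDragging || _handler == null) return;

        var position = e.GetPosition((IInputElement)sender);
        _handler.Moved(position.X - _start.X, position.Y - _start.Y);
    }

    private void OnMouseUp(object sender, MouseButtonEventArgs e)
    {
        _isDragging = false;
        if (_handler != null) _handler.Dropped();
    }
}
```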

Now, the behavior is notifying the ViewModel that it is being moved. Let’s handle the movement:
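On the viewmodel side, it can look like this (Caliburn.Micro’s PropertyChangedBase provides the change notification; names are mine):

```csharp
using Caliburn.Micro;

public class ItemViewModel : PropertyChangedBase, IDragDropHandler
{
    private double _x;
    private double _y;

    // X and Y are bound to the item's position in the view.
    public double X
    {
        get { return _x; }
        set { _x = value; NotifyOfPropertyChange(() => X); }
    }

    public double Y
    {
        get { return _y; }
        set { _y = value; NotifyOfPropertyChange(() => Y); }
    }

    public void Moved(double deltaX, double deltaY)
    {
        X += deltaX;
        Y += deltaY;
    }

    public void Dropped()
    {
        // Later: notify the container viewmodel (e.g. through
        // Caliburn.Micro's EventAggregator) so it can snap neighbors.
    }
}
```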

Now that we have a ViewModel that knows when it’s being moved, let’s actually allow the user to move it. First, let’s write a container for the items:
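A bare-bones container viewmodel, just enough to hold the items (a hypothetical sketch):

```csharp
using Caliburn.Micro;

public class CanvasViewModel
{
    public BindableCollection<ItemViewModel> Items { get; private set; }

    public CanvasViewModel()
    {
        Items = new BindableCollection<ItemViewModel>
        {
            new ItemViewModel { X = 10, Y = 10 },
            new ItemViewModel { X = 60, Y = 40 }
        };
    }
}
```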

Then the view (I’m removing things like styling for brevity). Note that it follows Caliburn.Micro conventions on naming (among other things), so it automatically binds some things, like the ItemsControl items through its name.

To summarize:

  • The behavior is attached to the item’s viewmodel from the view
  • The behavior memorizes the item instance through the IDragDropHandler interface, and assigns mouse events to the item
  • The mouse events call the IDragDropHandler methods
  • The item’s viewmodel implements these methods and handles coordinate changes itself (which are visually reflected through bindings in the view)
  • The item viewmodel notifies the container viewmodel of its coordinate changes through Caliburn.Micro events

Unit testing custom querystring-based authorization on a WebApi controller

I needed to add a very simple authorization mechanism to my API: a query string parameter “api_key”, so that it’s compatible with Swagger (using Swashbuckle, there is an “api_key” field in the Swagger UI) and is easily callable from Ruby on Rails.

Following and adapting a nice tutorial, here is what I did.

Implement the authorization filter

Create an interface for your API key “getter”:
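For instance (I’m calling it IApiKeyProvider here; use whatever name fits):

```csharp
public interface IApiKeyProvider
{
    string GetApiKey();
}
```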

Implement this interface; here it’s extremely simple:
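For example, reading the key from the application configuration:

```csharp
using System.Configuration;

public class ConfigApiKeyProvider : IApiKeyProvider
{
    public string GetApiKey()
    {
        return ConfigurationManager.AppSettings["ApiKey"];
    }
}
```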

Inject this interface through your dependency injector of choice. You don’t have to modify your controllers, which is great.

Then create a filter attribute to use this implementation:
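A sketch of the filter (WebApi 1-era). Injection.Container is a hypothetical app-wide LightInject container, shown further down with the test setup; adapt the resolution to your own DI wiring:

```csharp
using System.Net;
using System.Web;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;
using LightInject;

public class ApiKeyAuthorizeAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        // Attributes can't use constructor injection, hence the
        // service-locator style resolution.
        var provider = Injection.Container.GetInstance<IApiKeyProvider>();

        var query = HttpUtility.ParseQueryString(
            actionContext.Request.RequestUri.Query);

        // 401 when the parameter is missing or wrong.
        if (query["api_key"] != provider.GetApiKey())
        {
            throw new HttpResponseException(HttpStatusCode.Unauthorized);
        }
    }
}
```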

Now you just have to add the [ApiKeyAuthorize] attribute to your controller(s), and every request will need the proper api_key query string parameter.

Test the filter

A few things to test: that your class uses this attribute, and that the attribute does what it says it does.

Test the attribute presence

Here I’m using xUnit and FluentAssertions.
It’s just a matter of listing the attributes on the class, and checking that one matching the attribute we created exists.
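With a hypothetical OrdersController, that gives:

```csharp
using FluentAssertions;
using Xunit;

public class OrdersControllerAttributeTests
{
    [Fact]
    public void Controller_Should_Be_Decorated_With_ApiKeyAuthorize()
    {
        var attributes = typeof(OrdersController)
            .GetCustomAttributes(typeof(ApiKeyAuthorizeAttribute), true);

        attributes.Should().NotBeEmpty();
    }
}
```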

Test the attribute inner workings

What are we testing here? That the attribute throws an HttpResponseException when the parameter is missing or its value is wrong, and that it doesn’t throw when the value matches.

Setup the tests

Using xUnit and Moq, the test setup looks like this:
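A sketch of the setup (class and member names are mine): a Moq fake is registered through the test-specific injection, and ContextUtil builds the action context.

```csharp
using System.Web.Http.Controllers;
using Moq;

public class ApiKeyAuthorizeAttributeTests
{
    private readonly ApiKeyAuthorizeAttribute _attribute;
    private readonly HttpActionContext _actionContext;

    public ApiKeyAuthorizeAttributeTests()
    {
        // A fake key "getter" registered through the test injection.
        var provider = new Mock<IApiKeyProvider>();
        provider.Setup(p => p.GetApiKey()).Returns("valid-key");
        InjectionSetup.Register<IApiKeyProvider>(provider.Object);

        _attribute = new ApiKeyAuthorizeAttribute();

        // ContextUtil comes from the ASP.Net source, as explained below.
        _actionContext = ContextUtil.CreateActionContext();
    }
}
```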

The ContextUtil.CreateActionContext method can be picked from the ASP.Net source. The corresponding tests can be found here.

My InjectionSetup.Register is a unit-test-specific injection helper that allows using a specific instance instead of creating one, here with LightInject:
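A sketch of it, together with the hypothetical app-wide container used by the attribute above:

```csharp
using LightInject;

// App-wide container (hypothetical; the attribute resolves through it).
public static class Injection
{
    public static readonly ServiceContainer Container = new ServiceContainer();
}

// Test helper: registers a ready-made (e.g. mocked) instance instead of
// letting the container create one.
public static class InjectionSetup
{
    public static void Register<T>(T instance) where T : class
    {
        Injection.Container.RegisterInstance<T>(instance);
    }
}
```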

Test the attribute

Still using xUnit and FluentAssertions, three simple tests check that the responses are what we expect:
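These [Fact] methods go inside the test class from the setup above (Action needs System, the assertions FluentAssertions):

```csharp
[Fact]
public void Should_Throw_When_The_ApiKey_Is_Missing()
{
    _actionContext.ControllerContext.Request =
        new HttpRequestMessage(HttpMethod.Get, "http://test/api/orders");

    Action act = () => _attribute.OnAuthorization(_actionContext);

    act.ShouldThrow<HttpResponseException>();
}

[Fact]
public void Should_Throw_When_The_ApiKey_Is_Wrong()
{
    _actionContext.ControllerContext.Request = new HttpRequestMessage(
        HttpMethod.Get, "http://test/api/orders?api_key=wrong");

    Action act = () => _attribute.OnAuthorization(_actionContext);

    act.ShouldThrow<HttpResponseException>();
}

[Fact]
public void Should_Not_Throw_When_The_ApiKey_Matches()
{
    _actionContext.ControllerContext.Request = new HttpRequestMessage(
        HttpMethod.Get, "http://test/api/orders?api_key=valid-key");

    Action act = () => _attribute.OnAuthorization(_actionContext);

    act.ShouldNotThrow();
}
```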


Testing Entity Framework layer in database-first mode

In your app, you may have a “query” layer where all your Linq queries are grouped. If you don’t, you should: keeping queries inside your controllers leads to poor testing and strong coupling.

You will want to test that your queries return what they say they return. In order to do that, you need to mock your Entity Framework entities container.

There is an awesome and complete tutorial here, explaining everything.

If you’re stuck on Visual Studio 2010, you will need to do a few things: first, download and install the ADO.NET DbContext Generator code template, so that your entities use the DbSet type. Then, open your Entity Framework container, right-click inside and select “Add a code generation element” (or something like that, my VS is not in English) and select the DbContext elements you just downloaded. You will then be able to customize the way your entities are generated (awesome tool).

In all Visual Studio versions, if you’re not using code-first like the tutorial, you will have to modify the generation template to mark your entity sets as virtual (so that they can be mocked). Open your xxx.Context.tt file, find the line with DbSet<<#=Code.Escape(entitySet.ElementType)#>> and add virtual in front of it. Then check the generated xxx.Context.cs file to be sure it’s not doing crazy things.

While you’re at it, since you’re modifying the code templates, follow these awesome instructions to xml-document your generated entities.

Now, following the MSDN article, in your query tests (the ones returning a set of data), you will have to manually create the data to return, and “bind” it to the mock instance. Since you will have to do that in each of your query tests, don’t forget to extract it to a method:
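For instance, with a hypothetical Order entity and MyEntities context (this is the Moq pattern from the MSDN article):

```csharp
using System.Data.Entity;
using System.Linq;
using Moq;

public static class ContextMockHelper
{
    // Builds a mocked context whose Orders set serves the given data.
    public static Mock<MyEntities> CreateContext(IQueryable<Order> orders)
    {
        var set = new Mock<DbSet<Order>>();
        set.As<IQueryable<Order>>().Setup(m => m.Provider).Returns(orders.Provider);
        set.As<IQueryable<Order>>().Setup(m => m.Expression).Returns(orders.Expression);
        set.As<IQueryable<Order>>().Setup(m => m.ElementType).Returns(orders.ElementType);
        set.As<IQueryable<Order>>().Setup(m => m.GetEnumerator()).Returns(orders.GetEnumerator());

        var context = new Mock<MyEntities>();
        context.Setup(c => c.Orders).Returns(set.Object);
        return context;
    }
}
```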

Faking objects with FakeItEasy

Remember that time (yesterday) when I created my own fake implementations for my tests? Man, was I crazy.

There is an awesome library called FakeItEasy that makes creating fake implementations much easier.

Now, instead of implementing your own IEntitiesContainer (or whatever your data layer injection interface is), you can tell FakeItEasy to generate a fake of this interface, and you get a bunch of assertion methods thrown in for good measure.
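A minimal example (the IEntitiesContainer and OrderDto names come from my API project, described below):

```csharp
using System.Collections.Generic;
using FakeItEasy;
using FluentAssertions;
using Xunit;

public class EntitiesContainerFakeTests
{
    [Fact]
    public void GetOrders_Can_Be_Faked_And_Verified()
    {
        // One line instead of a hand-written fake class.
        var entities = A.Fake<IEntitiesContainer>();
        A.CallTo(() => entities.GetOrders())
            .Returns(new List<OrderDto> { new OrderDto() });

        var orders = entities.GetOrders();

        orders.Should().HaveCount(1);
        // The assertion methods thrown in for good measure.
        A.CallTo(() => entities.GetOrders()).MustHaveHappened();
    }
}
```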

It also helps you make sure you’re testing the right things. For instance, I had a test method return a list of paginated results, and went to great lengths to make sure the proper number of items was returned. After spending a bit of time trying to refactor my test (because FakeItEasy doesn’t make it easy to match specific parameters), I found out I don’t care about the number of returned results, since it is tested elsewhere.

Creating a web API using ASP.Net WebApi over a legacy database

I’m trying to create a coherent API over a crazy legacy database, so I decided to use an Asp.Net WebApi project. Since I want a somewhat sane result, I’ll test everything.

First things first: create a new WebApi project. It’s a project type located under Asp.Net MVC websites.
Don’t forget to update the project’s Nuget packages ASAP, since doing it too late will make you tear your hair out with broken links and missing files.

I will not go over creating API controllers, models, etc., because there are a billion websites to help you with that, and it’s pretty straightforward. Instead, I will try to regroup in a single place all the “best practices” for dealing with a web API, applied to the problems of a legacy database. I will also not dwell on the various concepts (data transfer objects, dependency injection, etc.); you can read about them at length with a quick Google search.

Version your API

If your project has any kind of risk associated with it (meaning: once in production, will breaking it make you lose money?), you should version your API, so that your website can use the latest and most awesome version, while an executable running somewhere, lost and forgotten in the bowels of your middle office, doesn’t suddenly (and probably silently) crash once a method returns differently-named fields.

For that, there are several methods, each with its pros and cons. You can read a nice summary (and how to support all the main methods at once in your project) here.
I decided to use URL versioning, because it’s the simplest to use and understand, and the most “visible”. Working with various contractors that use various languages and technologies, I always have to choose the least painful option.

To implement URL-based API versioning, the simplest, most painless option is to use attribute-based routing. Make sure you have the AttributeRouting.WebApi Nuget package, and just add attributes to your controllers’ methods:
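For example, using AttributeRouting’s GET attribute (the controller and DTO are the ones built later in this post):

```csharp
using System.Collections.Generic;
using System.Web.Http;
using AttributeRouting.Web.Http;

public class OrdersController : ApiController
{
    // Old consumers keep calling v1 while the website moves to v2.
    [GET("api/v1/orders")]
    public IEnumerable<OrderDto> GetAllV1()
    {
        return new List<OrderDto>();
    }

    [GET("api/v2/orders")]
    public IEnumerable<OrderDto> GetAllV2()
    {
        return new List<OrderDto>();
    }
}
```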

I advise you against namespace-based versioning (as seen here), because it makes your life a living nightmare: everything you use (like Swagger) will assume your API is not versioned, so you have to keep default routes, and namespace-based versioning doesn’t allow that (unless you manage to modify the linked class, which I didn’t try).

Create a public data layer

Publish Data Transfer Objects (DTO)

One annoying thing when you try to “publish” entities straight from Entity Framework is that you get all the foreign key relationship stuff in your JSON results.
It also strongly couples your underlying model with your API, which may be OK, but is not awesome, especially considering that our model is crazy (still working with a legacy DB, remember?).

So, in order to solve these problems (and also because I am not fond of publishing the underlying data model directly), there is a design pattern called Data Transfer Objects.

To implement this pattern, let’s create a Model project, create the Entity Framework entities inside, and publish them through custom and “clean” objects.

Always remember that we’re dealing with a legacy DB. Our crazy fields, with random casing, cryptic names and insane types (for instance, everything in my main table is a varchar(100), even dates, booleans and numbers), would really love a nice grooming. You’re in luck, because tools exist to make your life easier.

We will use AutoMapper to bind the entities to the custom data objects. This awesome tool will do much of the grunt work for us.

Let’s pretend that in the actual DB, we have entities looking a bit like this:
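Say, something like this (made up, but representative):

```csharp
// Everything is a varchar in the database, and the casing is random.
public class Order
{
    public string order_id { get; set; }
    public string OrderDATE { get; set; }
    public string Is_Cancelled { get; set; }
    public string totalAmount { get; set; }
}
```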

Since having all fields as strings, with random casing, is clearly insane (or lazy, or both, who knows?), we want to map it to an object like this:
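With proper names and proper types:

```csharp
using System;

public class OrderDto
{
    public int Id { get; set; }
    public DateTime Date { get; set; }
    public bool IsCancelled { get; set; }
    public decimal TotalAmount { get; set; }
}
```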

In order to do that, we will use AutoMapper so that we don’t have to manually create object converters. It doesn’t exempt us from writing code, though; it just means that all the mapping code lives in a single place.

System types are best converted using system conversion, but in my case the datetimes have some weird things going on, so I created my own converter.
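Here’s a sketch of the configuration, using the static AutoMapper API of that era; the “yyyyMMdd” date format and the “1 means true” convention are assumptions about the legacy data:

```csharp
using System;
using System.Globalization;
using AutoMapper;

public static class MappingConfig
{
    public static void Configure()
    {
        // Custom converter for the DB's weird dates.
        Mapper.CreateMap<string, DateTime>().ConvertUsing<LegacyDateConverter>();

        Mapper.CreateMap<Order, OrderDto>()
            .ForMember(d => d.Id, o => o.MapFrom(s => int.Parse(s.order_id)))
            .ForMember(d => d.Date, o => o.MapFrom(s => s.OrderDATE))
            .ForMember(d => d.IsCancelled, o => o.MapFrom(s => s.Is_Cancelled == "1"))
            .ForMember(d => d.TotalAmount, o => o.MapFrom(s => decimal.Parse(s.totalAmount)));
    }
}

public class LegacyDateConverter : ITypeConverter<string, DateTime>
{
    public DateTime Convert(ResolutionContext context)
    {
        // "yyyyMMdd" is just an example of what legacy data can look like.
        var value = (string)context.SourceValue;
        DateTime result;
        DateTime.TryParseExact(value, "yyyyMMdd", CultureInfo.InvariantCulture,
            DateTimeStyles.None, out result);
        return result;
    }
}
```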

Note that you can also write a custom converter to convert between Order and OrderDto. However, that makes you write mapping methods “by hand” in various places like constructors, and I like having all the conversion in one place, even though it can get quite long after a while. The mapping can also be much more complicated than the above example, as seen here for instance, and AutoMapper helps a lot in these cases.

Use dependency injection to publish the DTOs

In order to help with the next step (creating tests), we will use dependency injection in our controllers.

In your Model project, create an IEntitiesContainer interface with all the methods to fetch the data you need.
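Something like:

```csharp
using System.Collections.Generic;

public interface IEntitiesContainer
{
    IEnumerable<OrderDto> GetOrders();
    OrderDto GetOrder(int id);
}
```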

Then, still in your Model project, implement this interface in a class that will wrap around the actual Entity Framework container.
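Sketched out, with the mapping configured earlier:

```csharp
using System.Collections.Generic;
using System.Linq;
using AutoMapper;

public class EntitiesContainer : IEntitiesContainer
{
    public IEnumerable<OrderDto> GetOrders()
    {
        using (var context = new MyEntities())
        {
            // Materialize first, then map (see the Project().To() note below).
            return Mapper.Map<List<OrderDto>>(context.Orders.ToList());
        }
    }

    public OrderDto GetOrder(int id)
    {
        using (var context = new MyEntities())
        {
            var key = id.ToString();
            var order = context.Orders.FirstOrDefault(o => o.order_id == key);
            return order == null ? null : Mapper.Map<OrderDto>(order);
        }
    }
}
```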

Note that tying the controller to the MyEntities() implementation is not the best approach, but solving it is well documented.

Note that we’re not using AutoMapper’s Project().To(), because it doesn’t work great with Entity Framework when using type conversion.

Now use this new layer in your controller:
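For instance:

```csharp
using System.Collections.Generic;
using System.Net;
using System.Web.Http;
using AttributeRouting.Web.Http;

public class OrdersController : ApiController
{
    private readonly IEntitiesContainer _entities;

    // Default ctor kept for brevity; see the note above about not tying
    // the controller to a concrete implementation.
    public OrdersController() : this(new EntitiesContainer()) { }

    public OrdersController(IEntitiesContainer entities)
    {
        _entities = entities;
    }

    [GET("api/v1/orders")]
    public IEnumerable<OrderDto> GetAll()
    {
        return _entities.GetOrders();
    }

    [GET("api/v1/orders/{id}")]
    public OrderDto Get(int id)
    {
        var order = _entities.GetOrder(id);
        if (order == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
        return order;
    }
}
```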

Now we can use the controller with any implementation of IEntitiesContainer in our tests.

Test the controller

Now that we have a foundation for the project, we can finally do some yummy tests. I will use xUnit (because it’s a great tool) and FluentAssertions (because it makes tests easier to read and write).

I will not go over testing the model, since it’s pretty well documented. However, testing the WebApi controller requires a bit more work.

In order not to hit the actual DB, we’ll use the dependency injection we’ve set up. Create an implementation of IEntitiesContainer in your test project. I suggest an awesome method that I found on a blog: create a mostly-empty implementation, for which you provide the actual behavior on a case-by-case basis.
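The idea looks like this: every method delegates to a delegate property, and each test fills in only the one it needs.

```csharp
using System;
using System.Collections.Generic;

public class FakeEntitiesContainer : IEntitiesContainer
{
    public Func<IEnumerable<OrderDto>> GetOrdersFunc { get; set; }
    public Func<int, OrderDto> GetOrderFunc { get; set; }

    public IEnumerable<OrderDto> GetOrders()
    {
        if (GetOrdersFunc == null) throw new NotImplementedException();
        return GetOrdersFunc();
    }

    public OrderDto GetOrder(int id)
    {
        if (GetOrderFunc == null) throw new NotImplementedException();
        return GetOrderFunc(id);
    }
}
```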

Then your controller test can use this fake like this:
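For example:

```csharp
using FluentAssertions;
using Xunit;

public class OrdersControllerTests
{
    [Fact]
    public void Get_Should_Return_The_Requested_Order()
    {
        var fake = new FakeEntitiesContainer
        {
            GetOrderFunc = id => new OrderDto { Id = id }
        };
        var controller = new OrdersController(fake);
        TestsBoostrappers.SetupControllerForTests(controller);

        var result = controller.Get(42);

        result.Id.Should().Be(42);
    }
}
```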

The TestsBoostrappers.SetupControllerForTests() method configures the controller’s routes; if we don’t do that, the controller won’t know what to do.

One last, but very important, point of interest for your tests. In order to follow REST principles, your API should return specific data after certain actions. For instance, after an item creation, it should return the URL of the new item. Your POST method may look like this:
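Something like this, where “DefaultApi” and AddOrder are placeholders for your own route name and data layer method:

```csharp
// In OrdersController:
[POST("api/v1/orders")]
public HttpResponseMessage Post(OrderDto order)
{
    var created = _entities.AddOrder(order);

    // REST-friendly: 201 Created + the URL of the new item.
    var response = Request.CreateResponse(HttpStatusCode.Created, created);
    response.Headers.Location =
        new Uri(Url.Link("DefaultApi", new { id = created.Id }));
    return response;
}
```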

This simple this.Url creates a URL based on your routes and such, but if you’re using WebApi 1 like me (because you’re stuck on .Net 4.0 in Visual Studio 2010), all the documented solutions won’t work, and will make you want to flip your desk. To solve that, I simply used another dependency injection to wrap the UrlHelper class, and provided my own implementation in my tests.
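The wrapper can stay trivial; a hypothetical sketch:

```csharp
// The production implementation delegates to this.Url; tests provide a
// canned implementation instead.
public interface IUrlHelperWrapper
{
    string Link(string routeName, object routeValues);
}
```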

Document your API

Documenting your API is pretty easy using Swagger. A Nuget package wrapping the implementation for .Net is available as Swashbuckle. Add it to your API project, and you can then access the documentation through /swagger/.

Switching program from .Net 4.0 to 4.5

If, like me, you switched your .Net program from 4.0 to 4.5, you might have encountered a bunch of problems.

What worked for me is the following.

First, modify your projects to use .Net 4.5. Make sure it builds (even if you might get a bunch of errors). Save and commit.

Next, run the following command in the VS package manager console:
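If I recall the flags correctly, that would be:

```
Update-Package -Reinstall -IgnoreDependencies
```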

This should reinstall your Nuget packages without touching their dependencies.

If you still get MSB3277 errors (“Found conflicts between different versions of the same dependent assembly that could not be resolved”), follow these steps:

  • In Visual Studio, open Tools > Options > Projects and Solutions > Build and Run, and select “Detailed” for the two MSBuild output options.
  • Build your project
  • In the “Output” window, search for MSB3277. Go up a little bit and look for “A conflict exists between…”; it will tell you which assembly is in conflict.

For me, it was the System assembly… weird.

The simplest, and most annoying, solution is to remove all references and Nuget packages, then add them back.