WPF Behaviors: adding mouse-event adorners

This is part 3 of the WPF drag & drop exploration. Part 2 can be found here.

Let’s recap what we have right now. We have a sample application that allows us to freely drag items across an area. Since there might be multiple items stacked on top of each other, we want the user to be sure which item will be moved. For that, we want to add a visual indication to the item. In WPF, this kind of visual overlay is called an adorner.

Now, you will see many samples of code-driven adorner implementations. But we don’t want to break the MVVM pattern, so we want the adorner to be defined in the view.

So we’ll create an Adorner that lets us use a DataTemplate. For that, I’ll just copy some existing code:
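Here is a minimal sketch of what such an adorner can look like; the class name TemplatedAdorner and the implementation details are illustrative rather than the exact original code:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Media;

// A template-driven adorner: it hosts a ContentPresenter so any DataTemplate
// can be rendered on top of the adorned element.
public class TemplatedAdorner : Adorner
{
    private readonly ContentPresenter _presenter;

    public TemplatedAdorner(UIElement adornedElement, object content, DataTemplate template)
        : base(adornedElement)
    {
        _presenter = new ContentPresenter
        {
            Content = content,
            ContentTemplate = template
        };
        AddVisualChild(_presenter);
    }

    protected override int VisualChildrenCount
    {
        get { return 1; }
    }

    protected override Visual GetVisualChild(int index)
    {
        return _presenter;
    }

    protected override Size MeasureOverride(Size constraint)
    {
        // Give the template the same size as the adorned element.
        _presenter.Measure(AdornedElement.RenderSize);
        return AdornedElement.RenderSize;
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        _presenter.Arrange(new Rect(finalSize));
        return finalSize;
    }
}
```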

We can now use this adorner in the behavior.
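As a sketch, assuming the behavior derives from Behavior&lt;FrameworkElement&gt; (see part 2) and exposes a hypothetical HoverTemplate dependency property for the adorner’s DataTemplate, the mouse-enter/leave handling could look like this (AdornerLayer lives in System.Windows.Documents):

```csharp
// Members added to the DragBehavior class; OnMouseEnter/OnMouseLeave are
// hooked to AssociatedObject.MouseEnter / MouseLeave in OnAttached.
public static readonly DependencyProperty HoverTemplateProperty =
    DependencyProperty.Register("HoverTemplate", typeof(DataTemplate),
        typeof(DragBehavior), new PropertyMetadata(null));

public DataTemplate HoverTemplate
{
    get { return (DataTemplate)GetValue(HoverTemplateProperty); }
    set { SetValue(HoverTemplateProperty, value); }
}

private TemplatedAdorner _adorner;

private void OnMouseEnter(object sender, MouseEventArgs e)
{
    if (_adorner != null || HoverTemplate == null)
        return;

    var layer = AdornerLayer.GetAdornerLayer(AssociatedObject);
    if (layer == null)
        return;

    // Show the adorner, handing it the item view model as content.
    _adorner = new TemplatedAdorner(AssociatedObject, AssociatedObject.DataContext, HoverTemplate);
    layer.Add(_adorner);
}

private void OnMouseLeave(object sender, MouseEventArgs e)
{
    if (_adorner == null)
        return;

    AdornerLayer.GetAdornerLayer(AssociatedObject).Remove(_adorner);
    _adorner = null;
}
```

AdornerLayer.GetAdornerLayer walks up the visual tree to find the nearest adorner layer, which is where the AdornerDecorator mentioned below comes into play.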

Now we just need to create the adorner’s DataTemplate and wire up the events. We also have to add an AdornerDecorator to the control tree, otherwise the adorner may behave unpredictably (it would be attached to the window) or not work at all.

Note that from the adorner’s DataTemplate, we set the DataContext property, so that we can bind the adorner’s properties directly to those of the item view model.
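A sketch of the relevant view pieces; the template key, and exactly how the template and DataContext reach the behavior, are assumptions rather than the sample’s actual markup:

```xml
<AdornerDecorator>
    <ItemsControl x:Name="Items">
        <ItemsControl.Resources>
            <!-- Adorner template: a black border shown on mouse-over.
                 Its DataContext is the item view model, so the item's
                 properties can be bound directly. -->
            <DataTemplate x:Key="HoverAdornerTemplate">
                <Border BorderBrush="Black" BorderThickness="2" />
            </DataTemplate>
        </ItemsControl.Resources>
        <!-- Item template and behavior wiring omitted for brevity. -->
    </ItemsControl>
</AdornerDecorator>
```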

Now, on mouse-over, a black border appears, and it disappears when the mouse leaves. There are a few problems with this method (most notably, the adorner flickers while the mouse stays over it), but we’ll try to tackle them later on.

WPF Behaviors: switching to Windows.Interactivity.Behavior

This is part 2 of the WPF drag & drop exploration. Part 1 can be found here. Among other things, this CodeProject sample helped me a lot.

Yesterday I created a WPF behavior following WPF conventions for custom dependency properties. While trying to add an adorner to my item (more on that later), I noticed many samples were using System.Windows.Interactivity.Behavior<T>, which I thought was Blend-related, but apparently is not. Because it is much more concise to write and provides useful overridable methods, I decided to switch to it instead.
In addition, with the previous code, I had a problem when defining multiple dependency properties on my behavior, so I needed to solve that as well.

The first thing is of course to inherit from Behavior<T>. You will find it in the System.Windows.Interactivity namespace, which is not included in .NET. You can grab it through the Blend SDK or a NuGet package, and it also ships with MVVM frameworks like Caliburn.Micro or MVVM Light.

Then, we can remove the GetXxx and SetXxx methods and replace them with an actual property (that does the same thing). We can also access the behavior instance from the dependency property variable, so we don’t need the instance singleton anymore. We will, however, need to attach the mouse events to the draggable item through ICommands. Since there is no standard ICommand implementation that we can use, we must create one ourselves. I have copied a “standard” implementation from CodeProject, and you will find many similar implementations with a quick Google search:
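This is a sketch of such a “relay” command, close to the many implementations floating around; it is not necessarily the exact code used in the sample:

```csharp
using System;
using System.Windows.Input;

// A typical relay command: it forwards Execute/CanExecute to delegates.
public class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    // Let WPF requery CanExecute whenever it requeries other commands.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}
```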

This command will be the bridge between the view and the other layers of the application.

Now, on to inheriting from Behavior<T>. You will notice that I have changed a few names here and there, because they no longer reflected the reality of the application: IDragDropHandler becomes IDraggable; the DropHandler property becomes DraggableItem. I have also replaced the mouse event names with descriptive names (OnStartDrag instead of “mouse button down”), especially since we will now be able to bind these handlers to any event.
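Here is a sketch of what the behavior looks like after the switch. DraggableItem follows the naming above; the command names, and the assumption that IDraggable mirrors IDragDropHandler from part 1, are mine:

```csharp
using System.Windows;
using System.Windows.Input;
using System.Windows.Interactivity;

// Assumed shape of the renamed interface (mirrors IDragDropHandler from part 1).
public interface IDraggable
{
    void Moved(double horizontalChange, double verticalChange);
    void Dropped();
}

public class DragBehavior : Behavior<FrameworkElement>
{
    public static readonly DependencyProperty DraggableItemProperty =
        DependencyProperty.Register("DraggableItem", typeof(IDraggable),
            typeof(DragBehavior), new PropertyMetadata(null));

    public IDraggable DraggableItem
    {
        get { return (IDraggable)GetValue(DraggableItemProperty); }
        set { SetValue(DraggableItemProperty, value); }
    }

    // Commands exposed to the view, so any event can be bound to them.
    public ICommand StartDragCommand { get; private set; }
    public ICommand DragCommand { get; private set; }
    public ICommand DropCommand { get; private set; }

    public DragBehavior()
    {
        StartDragCommand = new RelayCommand(OnStartDrag);
        DragCommand = new RelayCommand(OnDrag);
        DropCommand = new RelayCommand(OnDrop);
    }

    // Handler implementations are sketched a bit further below.
    private void OnStartDrag(object parameter) { }
    private void OnDrag(object parameter) { }
    private void OnDrop(object parameter) { }
}
```

Because the commands live on the behavior, the view decides which events trigger them.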

You will notice that we don’t attach the commands to any particular item event anymore; this is now handled by the view. It makes sense, because a different input device may have a different way of dragging. The rest of the view can be found in part 1.

The event handlers are also similar to the ones in part 1. The only difference is that they access the item through the Behavior<T> instance’s properties. For instance:
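Continuing the sketch above (the original handlers may differ in details; this uses System.Windows, System.Windows.Input and System.Windows.Media), the item is reached through the DraggableItem property instead of a static singleton:

```csharp
private Point _previous;

private void OnStartDrag(object parameter)
{
    var reference = ReferenceElement();
    if (DraggableItem == null || reference == null)
        return;

    // Remember the starting point and capture the mouse on the element.
    _previous = Mouse.GetPosition(reference);
    AssociatedObject.CaptureMouse();
}

private void OnDrag(object parameter)
{
    var reference = ReferenceElement();
    if (DraggableItem == null || reference == null || !AssociatedObject.IsMouseCaptured)
        return;

    // Notify the view model of the movement since the last event.
    var current = Mouse.GetPosition(reference);
    DraggableItem.Moved(current.X - _previous.X, current.Y - _previous.Y);
    _previous = current;
}

private void OnDrop(object parameter)
{
    AssociatedObject.ReleaseMouseCapture();
    if (DraggableItem != null)
        DraggableItem.Dropped();
}

private IInputElement ReferenceElement()
{
    // Use the item's visual parent (the canvas) as the coordinate reference.
    return VisualTreeHelper.GetParent(AssociatedObject) as IInputElement;
}
```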

That’s it! Your behavior event handlers are now linked through the view, which means better testability and more flexibility.

WPF drag & drop items in a canvas: communicating between behavior and ViewModel

The source code of the sample application is available on GitHub. It was strongly inspired by Gong WPF DragDrop and started life as a copy of another “WPF behavior lab”.

The final goal of the application is to allow the user to freely drag items in a canvas and have the items “snap” to each other, so that the user can align them easily. I’m taking it as an opportunity to learn more about WPF.

The first thing I had to learn was how to create and use custom behaviors. They allow binding elements of the View to the ViewModel through custom properties.
In “pure” WPF (Blend has different behavior conventions), a behavior is a regular class that declares custom properties following a few conventions; such a property is called a dependency property. Binding a simple value (boolean, integer…) this way is easy, and examples are plentiful.
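For reference, here is a minimal sketch of that convention with a boolean attached property; the class and property names are hypothetical, not taken from the sample:

```csharp
using System.Windows;

public static class DragProperties
{
    public static readonly DependencyProperty IsDraggableProperty =
        DependencyProperty.RegisterAttached(
            "IsDraggable",
            typeof(bool),
            typeof(DragProperties),
            new PropertyMetadata(false, OnIsDraggableChanged));

    public static bool GetIsDraggable(DependencyObject obj)
    {
        return (bool)obj.GetValue(IsDraggableProperty);
    }

    public static void SetIsDraggable(DependencyObject obj, bool value)
    {
        obj.SetValue(IsDraggableProperty, value);
    }

    private static void OnIsDraggableChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // React to the new value here, e.g. hook or unhook mouse events.
    }
}
```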

Now, in the view, I want to say “when the user moves this item, this method should be run”, because I want not only the item to move, but also the container to compare the item’s new position with those of its neighbors. A dependency property can hold anything a variable can, but not a method. To work around this, the behavior must be aware of the ViewModel, but obviously we don’t want to couple the behavior tightly to the ViewModel. So here is the approach:

  • The ViewModel will have to implement an interface
  • The View will be bound to this ViewModel through {Binding}
  • The Behavior will use this interface’s methods to send messages to the ViewModel

The IDragDropHandler interface is very simple. The Moved method will notify the ViewModel that the item has been moved, and the Dropped method that the mouse button has been released.
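A sketch of the interface (the exact signatures in the sample may differ):

```csharp
// Implemented by the item ViewModel so the behavior can talk to it
// without knowing the concrete type.
public interface IDragDropHandler
{
    // Called while the item is being dragged, with the change in coordinates.
    void Moved(double horizontalChange, double verticalChange);

    // Called when the mouse button is released.
    void Dropped();
}
```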

Now, the behavior will use these methods. To do that, we need to create mouse click and move handlers and assign them to the UIElement. Because the OnXxxChanged callback is static, but the click/move handlers are not, we need a way to “memorize” the handlers. We’ll do that through a singleton instance:
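A sketch of that first version, with the attached property, the static changed callback and the singleton; names and details are illustrative rather than the sample’s exact code:

```csharp
using System.Windows;
using System.Windows.Input;

public class DragDropBehavior
{
    // The static changed callback routes to this singleton so the
    // instance handlers can be reused.
    public static readonly DragDropBehavior Instance = new DragDropBehavior();

    public static readonly DependencyProperty DropHandlerProperty =
        DependencyProperty.RegisterAttached(
            "DropHandler",
            typeof(IDragDropHandler),
            typeof(DragDropBehavior),
            new PropertyMetadata(null, OnDropHandlerChanged));

    public static IDragDropHandler GetDropHandler(DependencyObject obj)
    {
        return (IDragDropHandler)obj.GetValue(DropHandlerProperty);
    }

    public static void SetDropHandler(DependencyObject obj, IDragDropHandler value)
    {
        obj.SetValue(DropHandlerProperty, value);
    }

    private static void OnDropHandlerChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var element = d as UIElement;
        if (element == null)
            return;

        // The callback is static, so it hooks the singleton's instance handlers.
        // (A complete version would also unhook when the property is reset.)
        element.MouseLeftButtonDown += Instance.OnMouseDown;
        element.MouseMove += Instance.OnMouseMove;
        element.MouseLeftButtonUp += Instance.OnMouseUp;
    }

    private Point _previous;

    private void OnMouseDown(object sender, MouseButtonEventArgs e)
    {
        // Remember the starting point and capture the mouse; only one item
        // is dragged at a time, so a single field is enough here.
        _previous = e.GetPosition(null);
        ((UIElement)sender).CaptureMouse();
    }

    private void OnMouseMove(object sender, MouseEventArgs e)
    {
        var element = (UIElement)sender;
        var handler = GetDropHandler(element);
        if (handler == null || !element.IsMouseCaptured)
            return;

        // Notify the ViewModel of the movement since the last event.
        var current = e.GetPosition(null);
        handler.Moved(current.X - _previous.X, current.Y - _previous.Y);
        _previous = current;
    }

    private void OnMouseUp(object sender, MouseButtonEventArgs e)
    {
        var element = (UIElement)sender;
        element.ReleaseMouseCapture();

        var handler = GetDropHandler(element);
        if (handler != null)
            handler.Dropped();
    }
}
```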

Now, the behavior is notifying the ViewModel that it is being moved. Let’s handle the movement:
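A sketch of the item ViewModel, using Caliburn.Micro’s PropertyChangedBase; the property names are illustrative:

```csharp
using Caliburn.Micro;

public class ItemViewModel : PropertyChangedBase, IDragDropHandler
{
    private double _x;
    private double _y;

    public double X
    {
        get { return _x; }
        set { _x = value; NotifyOfPropertyChange(() => X); }
    }

    public double Y
    {
        get { return _y; }
        set { _y = value; NotifyOfPropertyChange(() => Y); }
    }

    public void Moved(double horizontalChange, double verticalChange)
    {
        // Update the coordinates; the view follows through the bindings.
        // (The real sample also publishes a Caliburn.Micro event here so
        // the container can compare positions with the neighbours.)
        X += horizontalChange;
        Y += verticalChange;
    }

    public void Dropped()
    {
        // Nothing to do in this sketch; the real sample notifies the container.
    }
}
```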

Now that we have a ViewModel that knows when it’s being moved, let’s actually allow the user to move it. First, let’s write a container for the items:
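A sketch of the container ViewModel (names are illustrative); it simply exposes the draggable items, the snapping logic comes later:

```csharp
using Caliburn.Micro;

public class CanvasViewModel : PropertyChangedBase
{
    // Named "Items" so Caliburn.Micro can bind it to the ItemsControl by convention.
    public BindableCollection<ItemViewModel> Items { get; private set; }

    public CanvasViewModel()
    {
        Items = new BindableCollection<ItemViewModel>
        {
            new ItemViewModel { X = 20, Y = 20 },
            new ItemViewModel { X = 120, Y = 80 }
        };
    }
}
```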

Then the view (I’m removing things like styling for brevity). Note that it follows Caliburn.Micro naming conventions (among other things), so some bindings happen automatically, like the ItemsControl items being bound through the control’s name.
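A sketch of that view, assuming the names from the sketches above (and an xmlns:local declaration for the behavior): the ItemsControl is named Items so Caliburn.Micro binds it by convention, a Canvas is used as the items panel, and each container is positioned through bindings to X and Y:

```xml
<ItemsControl x:Name="Items">
    <ItemsControl.ItemsPanel>
        <ItemsPanelTemplate>
            <Canvas />
        </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
    <ItemsControl.ItemContainerStyle>
        <Style TargetType="ContentPresenter">
            <Setter Property="Canvas.Left" Value="{Binding X}" />
            <Setter Property="Canvas.Top" Value="{Binding Y}" />
        </Style>
    </ItemsControl.ItemContainerStyle>
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <!-- The attached property hands the item view model to the behavior. -->
            <Border Width="80" Height="50" Background="LightSteelBlue"
                    local:DragDropBehavior.DropHandler="{Binding}" />
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>
```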

To summarize:

  • The behavior is attached to the item’s viewmodel from the view
  • The behavior memorizes the item instance through the IDragDropHandler interface, and assigns mouse events to the item
  • The mouse events call the IDragDropHandler methods
  • The item’s viewmodel implements these methods and handles coordinate changes itself (which are visually reflected through bindings in the view)
  • The item viewmodel notifies the container viewmodel of its coordinate changes through Caliburn.Micro events