Previously I wrote an introduction to MVCommand, which explained at a high level what exactly MVCommand is.  I promised to follow up with a post on how to get started, listing and describing the handful of classes you should implement (or inherit from) to get MVCommand up and running.

Before I go into that, I wanted to give a shout out to Dan Donahue’s great post on building a Hello World app with MVCommand.  I appreciate him taking the time to try out the framework and write down his experiences.

And now, below is a list of the classes to implement (or inherit from) to get started.  Note: this info is also available in the “docs” folder in the repository (located at git@github.com:cerikpete/MVCommand.git).

CommandApplication

Your Global.asax file should inherit from this class.  This is an abstract class that will force you to implement a few methods and properties, listed below.

ServiceLocatorProvider: MVCommand uses the Service Locator pattern to resolve instances of the commands in your application.  Override this property to pass in your own class that implements the ServiceLocatorImplBase class (this class is contained in the MVCommand dll).  Your class should return an instance of your container.  If you do some searching, you should be able to find classes written for your IoC container that can handle all of this (for example, Ayende wrote a WindsorServiceLocator class that allows you to use Windsor with Service Locator).

Example:

protected override ServiceLocatorImplBase ServiceLocatorProvider
{
    get { return new WindsorServiceLocator(IoC.Container); }
}

CommandControllerFactoryType: The framework requires you to provide a factory that creates your front controller (mentioned in detail below). For this property your Global.asax should simply return the type of your factory class.

Example:

protected override Type CommandControllerFactoryType
{
    get { return typeof (MyControllerFactory); }
}

RegisterCommandsWithIoCContainer: This needs to be implemented in order to register your commands with your IoC container.  It should contain the same kind of code you would normally write to register objects with your container.
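Example (a hedged sketch only: the method name comes from the description above, but the command classes and the IoC.Container wrapper are assumptions based on the other examples in this post, using Windsor’s fluent registration API):

```csharp
protected override void RegisterCommandsWithIoCContainer()
{
    // EditUserCommand and LoadUserCommand are hypothetical commands;
    // register yours however your container normally expects.
    IoC.Container.Register(
        Component.For<EditUserCommand>().LifeStyle.Transient,
        Component.For<LoadUserCommand>().LifeStyle.Transient);
}
```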

RegisterRoutes: This is simply a method where you set up your routes. More details on that below.

Routing in MVCommand

Routing in MVCommand is almost identical to how it works in the ASP.NET MVC framework. The only difference is the naming of the attributes in the routes. Where ASP.NET MVC uses “controller” and “action”, MVCommand uses “context” and “event”. In the RegisterRoutes method in your Global.asax (mentioned above), you should have the following:

routes.MapRoute(
    "Default",                                                        // Route name
    "{context}.mvc/{event}/{id}",                                     // URL with parameters
    new { controller = "Config", action = "DefaultAction", id = "" }  // Defaults
);

This should handle all of the requests needed by MVCommand, unless you create secure routes (more on that in another post).

One thing you’ll note is that the controller and action properties still exist in the default route.  The controller value should point to the front controller class for your app (described below in “The Front Controller”).  This is needed because we still need to default to your front controller in order to correctly resolve the commands later in the request pipeline.

The Controller Factory

The controller factory is needed in order to pass your front controller instance to the MVCommand framework. To implement it, simply create a class that inherits from CommandControllerFactory. It will have to implement one method, CreateController, which simply returns an instance of your front controller (more on that below).

Example:

public override IController CreateController(RequestContext requestContext, string controllerName)
{
    var controller = new MyController();
    return controller;
}

The Front Controller

The front controller will act as a gateway between your app and the framework. It sets some properties so that the front controller in the framework can correctly react to any commands fired in your app.

To create one, simply create a class and have it inherit from CommandController. Two properties will need to be implemented:

CommandTypes: This simply returns all types in the assembly that contains your commands.

BindableCommandType: This returns the type you created in your app that implements the IBindableCommand<ModelType> interface (mentioned in more detail below).

Here’s an example of what your front controller class might look like:

public class MyController : CommandController
{
    public override Type[] CommandTypes
    {
        get { return typeof (MyCommandClass).Assembly.GetTypes(); }
    }
    public override Type BindableCommandType {
        get { return typeof (BindableCommand<>); }
    }
}

BindableCommand and Validation Objects

In the framework there is the concept of a BindableCommand, which is any command that implements the IBindableCommand<ModelType> interface. This tells the framework that you have a command that needs data loaded from the view (for example, form values). Behind the scenes, the framework will load data from the view into whatever type you passed in as the ModelType generic parameter.

To do this, create a class that implements the IBindableCommand<ModelType> interface. This is the class that your commands will inherit from if they require view data. This class will need to implement two methods, one for when the result of the command is a success, and one if an error occurs.

You will note that these methods return ISuccess and IError, respectively. The MVCommand framework provides a default implementation of ISuccess (a class named Success) that you can use; however, there is no default implementation of IError (since typically it will depend on the type of validation you use), so you will have to create a class that implements the IError interface.

Once those classes are created, you can implement your BindableCommand class to return the appropriate data.
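To make that concrete, here is a hedged sketch of what such a command might look like. The method names, the shape of the BindableCommand<ModelType> base class, and the ValidationError class are all assumptions for illustration; the post only tells us a bindable command needs one member for success and one for errors:

```csharp
// Illustrative only: signatures are guesses, not the framework's real API.
public class SaveUserCommand : BindableCommand<UserDto>
{
    protected override ISuccess OnSuccess(UserDto model)
    {
        // Persist the bound DTO, then report success using the
        // framework-provided Success class.
        return new Success();
    }

    protected override IError OnError(UserDto model)
    {
        // ValidationError is a hypothetical IError implementation,
        // since the framework leaves IError up to you.
        return new ValidationError("User could not be saved.");
    }
}
```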

View Base Classes

The pages and user controls in your app will need to inherit from one of the base view classes provided by the framework. These are similar to the classes provided by the ASP.NET MVC framework, but their functionality is a bit different. Behind the scenes, the MVCommand base classes check the ViewData dictionary for the model passed in to the generic property of the base class. The provided classes are listed below:

ViewBasePage<ModelType>: Useful for aspx pages, gives the page access to the object in the ViewData dictionary of type ModelType

ViewBaseControl<ModelType>: Works like the ViewBasePage<ModelType> class, but used on controls (ascx) instead of pages

NullableModelControl<ModelType>: Useful for controls that have a model that might not exist in the ViewData dictionary (ex: if a control is used both to create a new object and edit an existing object, and when creating a new object nothing will be present in the ViewData dictionary)

ViewResultControl: Useful as a base class for controls whose purpose is to see if an IError or ISuccess object is present in the ViewData dictionary. It has two properties on it: ErrorResult and SuccessResult, which return IError and ISuccess, respectively.
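As an example of how these get used, here is a hedged sketch of a code-behind inheriting one of the base classes. The page name, the UserModel type, and the Model property name are assumptions for illustration:

```csharp
// Hypothetical code-behind for an edit page; the base class is assumed
// to expose the UserModel it finds in the ViewData dictionary via a
// Model property.
public partial class EditUser : ViewBasePage<UserModel>
{
    protected void Page_Load(object sender, EventArgs e)
    {
        userNameLabel.Text = Model.UserName;
    }
}
```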

Utility Classes

There are many classes in the framework that help with other functions, such as generating a URL for a specific command and redirecting to a specific page once a command is complete.  Those are out of the scope of this post, but are documented in the file in the docs folder mentioned at the beginning of this post.

What’s Next

That’s it as far as classes you need to implement.  With this info you should be able to get started with an MVCommand app.  In future posts I want to show an example of a simple command that uses model binding (one that inherits from BindableCommand<ModelType>).

When I first started working with ASP.NET MVC, I was excited to be rid of the drawbacks of the WebForms model: memorizing the order of the page life cycle events, the weight of ViewState, etc.  After working with MVC for a few months, I began to realize that the mantra of how it “moves your code-behind to a separate class” was all too real.  My controllers got huge while trying to handle all the basic CRUD operations for a single page (and yes, IMHO, once you approach 300 lines your classes are getting way too big).

Faced with this, I attempted to build a framework with ideas that I liked that Brian Donahue espoused in several blog posts.  I was able to mess around with my own framework, and I gave a high-level overview of it a while back.  Since then, I’ve been able to flesh out the framework as it was dog-fooded on several internal projects here.  Because it’s been used with some success (and a lot of changes occurred that made my previous post inaccurate now), I’d like to take some time to go into detail about the framework in a series of blog posts.

Introducing MVCommand

Yes, the name of the framework is the extremely original “MVCommand”.  The point was that I wanted to replace the concept of controllers with the ability to have a request map to one or more commands, whose only job is to retrieve data and return it.  The command should have no knowledge of the HttpContext, or even that it’s being used on the web at all.  It simply gets data from the view (in the form of a DTO), and interacts with the data access layer to retrieve the appropriate data, which is then returned.

The Front Controller

The title of this post has “Controllerless” in quotes because, in reality, this framework actually does use one controller.  Using the Front Controller pattern here allows the framework to utilize as much of the existing ASP.NET MVC framework as possible.  My goal was not to rewrite MVC, but to put a wrapper around it so that we could use its good parts without having to stick with the controller model.

In MVCommand, the purpose of the front controller is to take in the current request, map it to the correct command (or set of commands), and return the data in the correct format (whether that’s putting data returned by the command into the ViewData dictionary or serializing it to JSON).  My goal was for the command not to need to know how the data should be returned, but simply to return it and let the type of request determine how it is used.

Mapping a Request to Commands

To understand this, first we must start by remembering how ASP.NET MVC breaks up the components of the request’s URL for handling routing.  If you look in the Global.asax of any MVC project, you will see there are two keys by default: “controller” and “action”.  In MVCommand, I renamed these to “context” and “event”.  I liked these names better for the framework because controllers don’t exist in it, and I felt that a page really represents the context of the system.

Now, there are two ways that a request can map to one or more commands.  The first way is simply by placing the command in the appropriate namespace.  If the namespace ends with the context and event that maps to the request, any command in that namespace will be fired.  For example, if your url is http://myapp/user/edit, the context is “user”, and the event is “edit”.  Therefore, any commands that have a namespace that ends with “user.edit” will be fired.  I liked this functionality because I didn’t want to require configuration changes every time you add a command to your application.
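To illustrate the convention, a command placed like this would fire for http://myapp/user/edit. The surrounding namespace segments and the ICommand interface name are assumptions for the sketch; only the “namespace ends with context.event” rule comes from the post:

```csharp
namespace MyApp.Commands.User.Edit
{
    // Lives in a namespace ending in "User.Edit", so any request with
    // context "user" and event "edit" picks it up automatically,
    // with no extra configuration.
    public class LoadUserForEditCommand : ICommand
    {
        // ...retrieve the user being edited and return it...
    }
}
```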

The second way to map commands is by using a dictionary, for which MVCommand provides a fluent API.  Basically, you can tell MVCommand to map the context and event to the set of commands you supply in your dictionary, instead of using namespaces.  This is useful both for sharing commands across multiple requests (where different pages need the same data) and for when firing commands in a specific order is important.

The Code

Now that you’ve read through all of this, feel free to check out the code.  It’s hosted on github at: git@github.com:cerikpete/MVCommand.git.  You will notice 2 branches up there: master and mvc2.  Master, as you might guess, works with the MVC 1.0 framework, and the MVC2 branch contains updates needed to work nicely with ASP.NET MVC 2.0 (and also VisualStudio 2010).

What’s Next?

I plan to write a series of blog posts detailing how to get started with MVCommand on your project, as well as go into detail about what the internals are doing.

My hope is that you find getting started with MVCommand to be fairly low friction, and its APIs useful.  I look forward to any feedback.

Currently I’m writing a simple Rails app in an attempt to learn about 100 new things at once: Ruby on Rails, VIM, and of course MongoDB.  There were a few steps to remember in order to get Mongo to work with my app, so I figured I’d document them here, in one spot, so that maybe they’d help someone out.

1. Install the Mongo gems

First, I installed the following gems:

  • mongo
  • mongo_mapper

MongoMapper is a great little wrapper around Mongo that makes your domain objects still feel like you’re using ActiveRecord objects.  The link gives a great little demo on how to get started using it, and it really is quite simple.
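As a quick illustration of that ActiveRecord-ish feel, a MongoMapper document might look something like this (the Post model and its keys are made up for the example):

```ruby
require 'mongo_mapper'

# A hypothetical document class; with MongoMapper you declare typed
# keys on the class instead of relying on a schema migration.
class Post
  include MongoMapper::Document

  key :title, String
  key :body,  String

  timestamps!  # adds created_at / updated_at
end

# Usage feels like ActiveRecord:
#   post = Post.create(:title => 'Hello', :body => 'World')
#   Post.all(:title => 'Hello')
```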

2. Update your environment file

In your Rails app, go to your environment.rb file, and uncomment the following line:

config.frameworks -= [:active_record, etc…]

Since we don’t need ActiveRecord anymore, it seemed to make sense not to include it.  However, I don’t believe this step is strictly required.

Also in your environment.rb file, add the following lines to ensure the gems are used throughout the app:

config.gem 'mongo'
config.gem 'mongo_mapper'

3. Update Your Database Connection

Update your database.yml file to use the Mongo adapter.  It should look something like:

development:
  adapter: mongodb
  database: mydatabase

etc...

Next, create a file in your config/initializers directory, which will tell Mongo where to get the database connection info from.  In my case, I created a file named mongodb.rb, and it contains the following:

db_config = YAML::load(File.read(RAILS_ROOT + "/config/database.yml"))

if db_config[Rails.env] && db_config[Rails.env]['adapter'] == 'mongodb'
  mongo = db_config[Rails.env]
  MongoMapper.connection = Mongo::Connection.new(mongo['hostname'])
  MongoMapper.database = mongo['database']
end

The first line loads the database.yml file to retrieve the appropriate connection properties (ex: hostname and database).

4. Install and run Mongo

You can download the Mongo install file here: http://www.mongodb.org/display/DOCS/Downloads

This will download a zip  file which you can unzip anywhere.  As an example, let’s say you unzipped your file to c:\mongodb.  You will also need a place to store the database (since it’s file system-based).  For myself, I created a directory labeled c:\mongodb\db.

Now you need to run Mongo.  This can be done by going to your command prompt, navigating to the directory you just unzipped to (ex: c:\mongodb\mongodb-win32-i386-1.4.1\bin), and executing the mongod.exe file, supplying the path to the db directory you created earlier, like so:

mongod.exe --dbpath c:\mongodb\db

Now Mongo is up and running, enjoy!

When I wrote my last post, I added the caveat that it was not tested with handling multiple NHibernate sessions.  Sure enough, that issue came up and we found the code we had lacking.  The main issue was that, while storing and retrieving a single session in our container worked fine, once we had two sessions we had no way to tell our data access layer (i.e., the class with the ISession constructor dependency) which session to resolve.

After some brainstorming, and struggling through some failed first attempts, my co-worker Sean came up with a solution that we’re comfortable with… at least for now.  In order to best explain it, I’m going to step through the process from the HttpModule on down (as I attempted to do in my last post).

The Solution at a Glance

At a high level, here’s the solution that will be described in detail in this post:

  • Update the NHibernate session module to initialize a dictionary of Sessions in the current HttpContext
  • Create a class that contains the logic as to how to select a session for the current request (a SessionSelector)
  • Create a class that tells Windsor (our IoC container) how to resolve a Session (a SessionResolver)
  • Create a facility that passes the SessionSelector and SessionResolver into the kernel so that Windsor can resolve a Session

The NHibernateSessionModule

This is our module that was described in my previous post, where we hook up session creation and clean up in the BeginRequest and EndRequest events.  During the refactoring, we removed the SessionManager class (also discussed in the last post) and for now the relevant code was placed in the HttpModule.  The BeginRequest code looks like:

 app.Context.Items[SessionContextKey] = new Dictionary<string, ISession>(); 

This creates an item in our context’s items collection for storing our collection of sessions.  For our purpose, the unique key is the name of the NHibernate file used for the session (ex: nhibernate.config).  The SessionContextKey is simply an internal constant string, with a value set to ensure we don’t conflict with any other known items in our context.

Here is the code in our EndRequest event:

var app = (HttpApplication)sender;
var sessions = (IDictionary<string, ISession>)app.Context.Items[SessionContextKey];
try
{
    foreach (var entry in sessions)
    {
        CommitTransaction(entry.Value);
    }
}
finally
{
    foreach (var entry in sessions)
    {
        CloseSession(entry.Value);
    }
}

The code in the committing and closing of sessions hasn’t changed, so I won’t go into detail about those.  I just wanted to point out how we simply are looping through the sessions in our context and cleaning them appropriately.

The NHibernateSessionFacility

Windsor allows you to create custom facilities, which can dictate how items are resolved at runtime.  To implement it, a simple class was created that inherited from the AbstractFacility class provided by the Castle.MicroKernel:

/// <summary>
/// Configures the <see cref="SessionResolver" /> to be used to satisfy IOC requests for <see cref="NHibernate.ISession" /> instances.
/// </summary>
public class NHibernateSessionFacility : AbstractFacility
{
    /// <summary>
    /// Initializes the facility.
    /// </summary>
    protected override void Init()
    {
        var sessionSelectorTypeName = FacilityConfig.Attributes["sessionSelectorType"];
        var selector = (ISessionSelector)Activator.CreateInstance(Type.GetType(sessionSelectorTypeName));
        var resolver = new SessionResolver(Kernel, selector);
        Kernel.Resolver.AddSubResolver(resolver);
    }
}

To register your facility, you must add a section to your Windsor.config file (right inside the configuration node):

<facilities>
    <facility id="NHSession"
              type="MyProject.Data.SessionManagement.NHibernateSessionFacility, MyProject.Data"
              sessionSelectorType="MyProject.Data.SessionManagement.SingleSessionSelector, MyProject.Data" />
</facilities>

So you can see here we’re providing two classes to Windsor: the facility class and the session selector class.  The latter is where the magic happens: this is the class where you implement the logic that allows your calling class to choose the dependency it wants to resolve.  We’ll get to that a little later; for now I want to discuss the SessionResolver class called in the NHibernateSessionFacility.

The Session Resolver

In the NHibernateSessionFacility code snippet above, you might have noticed that the Init method is newing up a SessionResolver class and adding it to the Windsor kernel.  This class implemented Windsor’s ISubDependencyResolver interface.  What this does is tell our IoC container how to resolve a session.  This is needed because we’re storing the sessions in the HttpContext, not referencing a concrete class anywhere, so this will tell the container how to retrieve a session from our context.  The class looks like this:

/// <summary>
/// Provides the ability to resolve <see cref="ISession" /> instances from a Windsor container.
/// </summary>
public class SessionResolver : ISubDependencyResolver
{
    private readonly IKernel _kernel;
    private readonly ISessionSelector _sessionSelector;

    /// <summary>
    /// Initializes a new instance of the <see cref="SessionResolver"/> class.
    /// </summary>
    /// <param name="kernel">The <see cref="IKernel" /> instance being used.</param>
    /// <param name="sessionSelector">An <see cref="ISessionSelector" /> instance used to choose the correct session to be returned.</param>
    public SessionResolver(IKernel kernel, ISessionSelector sessionSelector)
    {
        _kernel = kernel;
        _sessionSelector = sessionSelector;
    }

    /// <summary>
    /// Should return an instance of a service or property values as
    /// specified by the dependency model instance.
    /// It is also the responsibility of <see cref="T:Castle.MicroKernel.IDependencyResolver"/> to throw an exception in the case a non-optional dependency
    /// could not be resolved.
    /// </summary>
    /// <param name="context">Creation context, which is a resolver itself</param>
    /// <param name="contextHandlerResolver">Parent resolver - normally the IHandler implementation</param>
    /// <param name="model">Model of the component that is requesting the dependency</param>
    /// <param name="dependency">The dependency model</param>
    /// <returns>The dependency resolved value or null</returns>
    public object Resolve(CreationContext context, ISubDependencyResolver contextHandlerResolver, ComponentModel model, DependencyModel dependency)
    {
        return _sessionSelector.GetSession(_kernel, model.Implementation);
    }

    /// <summary>
    /// Returns true if the resolver is able to satisfy this dependency.
    /// </summary>
    /// <param name="context">Creation context, which is a resolver itself</param>
    /// <param name="contextHandlerResolver">Parent resolver - normally the IHandler implementation</param>
    /// <param name="model">Model of the component that is requesting the dependency</param>
    /// <param name="dependency">The dependency model</param>
    /// <returns><see langword="true" /> if the dependency can be satisfied; otherwise <see langword="false" />.
    /// </returns>
    public bool CanResolve(CreationContext context, ISubDependencyResolver contextHandlerResolver, ComponentModel model, DependencyModel dependency)
    {
        return typeof(ISession).IsAssignableFrom(dependency.TargetType);
    }
}

As you can see, it’s calling our SessionSelector (described below) to get and return the session, so that Windsor returns the right object whenever it attempts to resolve ISession.

The SessionSelector

In the example XML above, you can see that we passed in a class named SingleSessionSelector.  This is an example of a class that simply returns the same session every time.  It looks like this:

public class SingleSessionSelector : AbstractSessionSelector
{
    protected override string GetConfigName(Type requestingType)
    {
        return "nhibernate.config";
    }
}

As you can see, it simply returns the config name, which, as was mentioned earlier, is the key to retrieve the correct session from our dictionary of items in the HttpContext. You will also notice that it implements the abstract class AbstractSessionSelector. The code for that class follows:

public abstract class AbstractSessionSelector : ISessionSelector
{
    private static readonly ILog _logger = LogManager.GetLogger(typeof(AbstractSessionSelector));

    public ISession GetSession(IKernel kernel, Type requestingType)
    {
        var configName = GetConfigName(requestingType);
        var existingSessions = (IDictionary<string, ISession>)HttpContext.Current.Items[NHibernateSessionModule.SessionContextKey];
        if (!existingSessions.ContainsKey(configName))
        {
            var session = CreateSession(kernel, configName);
            _logger.DebugFormat("Created session {0}", session.GetHashCode());
            session.BeginTransaction();
            existingSessions[configName] = session;
        }
        return existingSessions[configName];
    }

    protected virtual ISession CreateSession(IKernel kernel, string configName)
    {
        var sessionFactoryManager = kernel.Resolve<ISessionFactoryManager>();
        return sessionFactoryManager.GetSessionFactory(configName).OpenSession();
    }

    protected abstract string GetConfigName(Type requestingType);
}

This class implements the ISessionSelector interface (which simply has the GetSession(IKernel kernel, Type requestingType) method on it) and depends on your implementation to return the correct config name.  However, your implementation can do a whole lot more: it is your implementation of this class that contains the business logic for choosing the correct session for the class that depends on it.
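For instance, here is a hedged sketch of a selector that routes certain classes to a second database. The “Reporting” namespace check and both config file names are assumptions invented for the example:

```csharp
public class NamespaceSessionSelector : AbstractSessionSelector
{
    protected override string GetConfigName(Type requestingType)
    {
        // Hand classes in a hypothetical Reporting namespace a session
        // built from a second NHibernate config file; everything else
        // gets the default session.
        if (requestingType.Namespace != null &&
            requestingType.Namespace.Contains("Reporting"))
        {
            return "nhibernate.reporting.config";
        }
        return "nhibernate.config";
    }
}
```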

What About the PerWebRequest Lifestyle?

In my earlier post, I mentioned how we utilized Windsor’s PerWebRequest lifestyle to ensure that a session was not reused between requests.  Well, since our sessions are now held in the current HttpContext, we no longer need to worry about that.  This refactoring removed the responsibility of storing the sessions from Windsor and moved that logic to other classes.

One aspect of our code that has bugged us here at work is how little we really utilize the power of our IoC container.  It seemed that all we did was keep adding items to our config file, and that was it.  One thing we wanted to do was utilize more of our container’s functionality, which includes managing an item’s lifestyle.

A good candidate for this was managing our NHibernate sessions, which were used in many places but always manually created and destroyed.  Ideally, if you’re using IoC, any class that needs a session should take it in its constructor, and that should be it.  So we came up with the solution I describe below.

Adding the Sessions to the IoC Container

The core of our NHibernate set up is that we use an HttpModule to handle the starting and ending of sessions.  Here is the code fired during the BeginRequest event:

if (!_sessionPerRequestModuleRegistered)
{
    _sessionPerRequestModuleRegistered = true;
    var sessionManager = IoC.Resolve<ISessionManager>();
    sessionManager.InitializeSessions();
}

Basically, we have a class-level flag that determines whether or not we have already loaded the NHibernate sessions for this request.  If not, we call a method on our ISessionManager class.  Here’s the relevant code for that:

public class SessionManager : ISessionManager
{
  private readonly SessionFactoryConfig _sessionFactoryConfig;
  private readonly ISessionFactoryManager _sessionFactoryManager;

  public SessionManager(SessionFactoryConfig sessionFactoryConfig, ISessionFactoryManager sessionFactoryManager)
  {
      _sessionFactoryConfig = sessionFactoryConfig;
      _sessionFactoryManager = sessionFactoryManager;
  }

  public void InitializeSessions()
  {
      foreach (var factoryConfig in _sessionFactoryConfig)
      {
          AddSessionToIoCContainer(factoryConfig.ToString());
      }
  }        

  private void AddSessionToIoCContainer(string configPath)
  {
      IoC.AddInstance(configPath, delegate
                                      {
                                          var session = _sessionFactoryManager.GetSessionFactory().OpenSession();
                                          session.BeginTransaction();
                                          return session;
                                      });            
  }    
}

Our SessionFactoryConfig class is simply a collection of the possible NHibernate configs being used by the system.  Typically we only have one, but sometimes we will use more than one, so this makes the code more flexible.  (Note that this example currently assumes only one config; no warranties made on its usefulness for multiple configs until I get to try it.)

So for each config, we call NHibernate to retrieve a session object, and also make sure to start the transaction before adding that session to our IoC container.  At a high-level, the key for our session in the IoC container will be the name of the NHibernate config file attached to that session.

Now at the end of our request life cycle, we need to commit all transactions and close all open sessions.  Here’s the code in our EndRequest event:

var sessionManager = IoC.Resolve<ISessionManager>();
try 
{
    // Commit every open session factory
    sessionManager.CommitAllTransactions();
}
finally
{
    // No matter what happens, make sure all the sessions get closed
    sessionManager.CloseAllSessions();
}

And here are the relevant methods in our SessionManager class:

public void CommitAllTransactions()
{
    foreach (var factoryConfig in _sessionFactoryConfig)
    {
        CommitTransaction(factoryConfig.ToString());
    }
}

public void CloseAllSessions()
{
    foreach (var factoryConfig in _sessionFactoryConfig)
    {
        CloseSession(factoryConfig.ToString());
    }
}

private void CommitTransaction(string configPath)
{
    var session = IoC.Resolve(configPath) as ISession;
    var transaction = session.Transaction;
    try
    {
        if (HasOpenTransaction(transaction))
        {
            transaction.Commit();
        }
    }
    catch (NHibernate.AdoNet.TooManyRowsAffectedException tmex)
    {
        // Squelch this for now, as it fires on batch deletes, but we'll log it
        Log<SessionManager>.Error("TooManyRowsAffectedException was thrown", tmex);
    }
    catch (HibernateException hex)
    {
        Log<SessionManager>.Error("HibernateException thrown", hex);
        RollbackTransaction(transaction, configPath);
        throw;
    }
}

private void RollbackTransaction(ITransaction transaction, string configPath)
{
    try
    {
        if (HasOpenTransaction(transaction))
        {
            transaction.Rollback();
        }
    }
    finally
    {
        CloseSession(configPath);
    }
}

private void CloseSession(string configPath)
{
    var session = IoC.Resolve(configPath) as ISession;
    session.Close();
}

/// <summary>
/// Checks for an open transaction in the specified Session.
/// </summary>
/// <param name="transaction">The transaction to check.</param>
/// <returns>
/// 	<see langword="true"/> if the <paramref name="transaction" /> is not null and open; otherwise, <see langword="false"/>.
/// </returns>
private bool HasOpenTransaction(ITransaction transaction)
{
    return transaction != null && !transaction.WasCommitted && !transaction.WasRolledBack;
}

As you can see, this retrieves all sessions from our IoC container and commits and closes them as expected.

The PerWebRequest Lifestyle

Note that our IoC container is Windsor, and it has the concept of Lifestyles, which indicate how long an item should remain in the container before it is disposed.  In order to manage sessions appropriately in our scenario above (where we open and close them at the beginning and end of a request, respectively), we need to ensure that we add them with the PerWebRequest lifestyle.  This tells Windsor to dispose the sessions once the request is over, ensuring we do not use the same session on more than one request.

In the code above, when we add the sessions to the IoC container, you saw the IoC.AddInstance method being called.  This is what ensures the sessions go into the container with a PerWebRequest lifestyle.  Here is the code of that method:

public static void AddInstance<ServiceType>(string key, Func<ServiceType> factory)
{
    _defaultContainer.Register(Component.For<ServiceType>().UsingFactoryMethod(factory).Named(key).LifeStyle.PerWebRequest);
}

An important note: anything that depends on the session in the IoC container must also have the same PerWebRequest lifestyle in order for everything to be disposed of properly.  This includes the repositories that I describe below.

Using the Sessions

In our solution, we use repositories as our data access layer, and these inherit from a base Repository<T> class.  Now that our sessions have been added to the IoC container, we need to inject them into the repositories.  As mentioned above, the repositories must also be registered with the PerWebRequest lifestyle, and the simplest way to do this is to load all types that inherit from IRepository<T> into the container in the application start event in the Global.asax:

// Register repositories - this will add all repositories except the main generic Repository<T> class
var repositoryTypes = typeof(IRepository<>).Assembly.GetTypes();
foreach (var type in repositoryTypes)
{
    if (type.IsDerivedFromGenericType(typeof(IRepository<>)) && !type.IsAbstract)
    {
        var service = type.GetInterface("I" + type.Name);
        if (!service.IsGenericType)
        {
            IoC.AddType(service.Name, service, type, IoC.LifeStyle.PerWebRequest);
        }
    }
}

This will automatically register any repositories I create with the IoC container.  Since the constructor of each repository takes an instance of ISession, Windsor will automatically resolve the repository correctly with the session.
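For illustration, here is a sketch (not our exact production code, and the member names are mine) of what such a repository base class might look like; the important part is the ISession constructor parameter, which Windsor resolves from the container:

```csharp
// Sketch only: the key point is the ISession constructor parameter.
public abstract class Repository<T> : IRepository<T>
{
    protected readonly ISession Session;

    // Windsor sees this constructor and injects the ISession
    // registered with the PerWebRequest lifestyle.
    protected Repository(ISession session)
    {
        Session = session;
    }

    public T Get(object id)
    {
        return Session.Get<T>(id);
    }
}
```

Because the repository and the session share the same lifestyle, both are disposed together at the end of the request.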

So far I’ve liked the solution we came up with.  I like that we’re utilizing more features of our IoC container to really control how our objects are created and their lifestyles.  I also like that we’re adding items more dynamically instead of having a large config file.

Results-Based Testing

March 19, 2010

A while ago I gave a talk on unit testing at a Philly ALT.NET meeting.  On the way home from that meeting, Brian Donahue and I had a discussion that went something like this (somewhat paraphrased; ok, a lot paraphrased):

Brian: Dude, you have to stop testing expectations and start testing the results!
Me: Damn, you’re right!  I used to do that but kinda fell away from it.  Guess it’s time to go back.

What do I mean by results-based testing?

If you’re familiar with almost any testing framework (like NUnit) and Rhino.Mocks, you’ll be familiar with two ways of testing objects.  One way is asserting that something was called (ex: foo.AssertWasCalled(x => x.MyMethod())); the other is comparing objects to ensure the value returned was expected (ex: Assert.AreEqual(foo, bar)).

Results-based testing means doing more of the latter and less of the former.  As I’ve gone through the learning curve of TDD, I’ve noticed that focusing on the AssertWasCalled style is often less than useful.  Let’s face it: usually you already know you called that method; what you want to know is what was returned from it.  So results-based testing focuses on comparing results to ensure they’re what you expect.

An Example

Let’s say we have a simple class with one method on it, Execute().  Let’s also say that it has one dependency, a repository whose job it is to get all users.  The Execute method may look something like this:

public IList<User> Execute()
{
    return userRepository.GetAll();
}

Now, if you’re just testing expectations, you might write a test that says something like this:

userRepository.AssertWasCalled(x => x.GetAll())

As mentioned above, this doesn’t feel very useful.  My class is only doing one thing, so I’m pretty confident that the repository was called.  What I want to test is that this method is actually returning all users.

To do this, first you have to set up your expectations, then you can do a simple Assert.AreEqual to ensure that the results are equivalent.  For the first part, Rhino.Mocks lets you stub out expectations on a class, so we’ll take advantage of that to set up the results that we expect to be returned.  Then we’ll get the actual result of the method call, and finally we’ll assert that the result we set up is equal to the one that was actually returned:

var myListOfUsers = new List<User> { new User() };
userRepository.Stub(x => x.GetAll()).Return(myListOfUsers);
_result = myClass.Execute();
Assert.AreEqual(myListOfUsers, _result);

That’s all there really is to it.  Now your tests really start to become valuable: as soon as the logic in your class changes in a way that affects the users that are returned, your test will break in a meaningful way.

Expectation Tests Are Not Useless

Far from it, actually.  They have their place.  Typically I use them in scenarios where different conditions mean that different methods will be called, for example, an if/else statement where the method called on an object changes depending on what branch of the statement you are in:

if (criteria != null)
{
    userRepository.GetByCriteria(criteria);
}
else
{
    userRepository.GetAll();
}

In this case, if you don’t have access to the results returned by these calls, it might be worthwhile to write two tests, one for each condition, where each test simply asserts that the appropriate method was called.
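For instance, tests for the branch above might look something like this (sketched with NUnit and Rhino.Mocks; myClass and criteria are placeholder names):

```csharp
[Test]
public void Gets_users_by_criteria_when_criteria_is_supplied()
{
    myClass.Execute(criteria);

    userRepository.AssertWasCalled(x => x.GetByCriteria(criteria));
}

[Test]
public void Gets_all_users_when_no_criteria_is_supplied()
{
    myClass.Execute(null);

    userRepository.AssertWasCalled(x => x.GetAll());
}
```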

In either case, find the test type that’s right for you – the one that really tests the business logic, because that’s where you get your real value.

Finally I have put the code and slides up for my talk a few weeks ago at Philly ALT.NET.

It’s up on google code, and can be grabbed at: http://advanced-bdd.googlecode.com/svn/trunk/

My New Blog Host

July 14, 2009

I was starting to dislike the styling on my old blog so I decided to move on to wordpress.  The official URL is https://erikbase.wordpress.com/ but please follow my feed at: http://feeds.feedburner.com/erikbase.

In my previous post, I described how I created a rake file to handle the building of my .NET project on Team City.  I have since updated the file with some tweaks so that I could see my unit test output in Team City.  This post will describe what I changed in the file as well as how I set up Team City.

The New Rake File

The update I made to the file, recommended by Sean, was to check an environment variable (set by Team City) that indicates whether Team City is building the project (as opposed to a local build).  This is useful because the Team City output is much more verbose, and not really necessary when building locally.  Here’s the new file:

   1: require 'fileutils'
   2: include FileUtils
   3:
   4: version = 'v3.5'
   5: compile_target = ENV.include?('target') ? ENV['target'] : 'Debug'
   6: project = "MyProject"
   7: framework_dir = File.join(ENV['windir'].dup, 'Microsoft.NET', 'Framework', version)
   8: msbuild = File.join(framework_dir, 'msbuild.exe')
   9: team_city = ENV.include?('teamcity_build') ? '/re:TeamCityExtension,Gallio.TeamCityIntegration' : ''
  10:
  11: task :default => :build
  12:
  13: task :build => [:compile, :test]
  14:
  15: task :compile do
  16:     sh "#{msbuild} #{project}.sln /property:Configuration=#{compile_target}"
  17: end
  18:
  19: task :test do
  20:     runner = 'tools\\Gallio\\Gallio.Echo.exe'
  21:     assembly = "Test\\bin\\#{compile_target}\\MyProject.Test.dll"
  22:     sh "#{runner} #{assembly} #{team_city}"
  23: end
  24:
  25: desc "Rebuild"
  26: task :rebuild do
  27:   sh "#{msbuild} #{project}.sln /t:Rebuild /property:Configuration=#{compile_target}"
  28: end
  29:
  30: desc "Clean"
  31: task :clean do
  32:   sh "#{msbuild} #{project}.sln /t:Clean /property:Configuration=#{compile_target}"
  33: end

You’ll see at the top a variable named “team_city”, which checks for an environment variable (which I named “teamcity_build”).  If that variable exists, team_city is set to the command line parameters Gallio requires to generate the correct Team City output.
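That switch can be exercised in isolation; here is a standalone sketch where a plain hash stands in for the real ENV object (the method name gallio_args is mine, not from the rakefile):

```ruby
# A hash stands in for ENV; Hash#include? checks keys, like ENV.include?.
def gallio_args(env)
  env.include?('teamcity_build') ? '/re:TeamCityExtension,Gallio.TeamCityIntegration' : ''
end

gallio_args({})                          # => "" (local build: no extra flags)
gallio_args('teamcity_build' => 'true')  # => "/re:TeamCityExtension,Gallio.TeamCityIntegration"
```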

One other tweak was to line 22, where I added the team_city variable to the command line execution string.

Finally, on line 9, I updated the parameter flag seen in my previous post to be “/re:” (instead of “/e:”).

Team City Setup

I’m not going to go into detail about every step in configuring Team City, but I will discuss some specific settings I used to get my project to compile with my rake file.  The screens I will be discussing are in the Administration panel of the Team City UI.  Once there, you can create or edit a build configuration.  In the build configuration section, you will be presented with a screen with several tabs down the side; the settings I describe below live on those screens.

Most settings I left at the default.  However, screen 3 allows you to set a runner; here I chose Rake from the “Build runner” drop down.  The next item I set was the “Rake tasks” field.  If you look at the rake file above, you will see that on line 13 I set the “build” task to be my main entry point (since it compiles the code and then runs the tests), so in this text box I simply entered “build”.  This means that when Team City calls rake on the command line, it will pass “build” as an argument.  A final setting here, needed if the path to your ruby interpreter is not in the Windows “Path” environment variable, is the “Ruby interpreter path”: simply enter the path to the ruby.exe file on your build server.

The final specific setting I added to Team City was on screen 6, where the environment variables live.  Here, click “Add new variable”, then enter the name you want to use to indicate that this is a Team City build.  As I mentioned above, this is the variable the rake file checks to decide whether to add the Gallio command line parameters for Team City output.  To work with the script above, the variable name I entered was “env.teamcity_build”.

That’s it!  Your Team City build should now work with rake.

Before I get into too much detail, I wanted to credit Sean with doing the work on this; I just ripped off what he did. :)  I simply wanted to document the basics of writing a build script in Rake that can compile your solution and run your unit tests locally.

We have it running on Team City as well, and I’ll blog more about that as I learn it. :)  Team City supports rake out of the box.

The rakefile

The first thing you need to do, of course, is install Ruby, RubyGems, and then the rake gem.  Some links on this are below.

Next, in your project, create a text file and name it rakefile.rb.  In my example, I placed this in my project root (at the same level as all of my .NET project folders and solution file).  What I’m doing with rake here is having it compile my solution, and run my tests using Gallio.Echo (the command line runner for MbUnit).  It compiles the solution by passing in the correct params to msbuild.

   1: require 'fileutils'
   2: include FileUtils
   3:
   4: version = 'v3.5'
   5: compile_target = ENV.include?('target') ? ENV['target'] : 'Debug'
   6: project = "MyProject"
   7: framework_dir = File.join(ENV['windir'].dup, 'Microsoft.NET', 'Framework', version)
   8: msbuild = File.join(framework_dir, 'msbuild.exe')
   9:
  10: task :default => :build
  11:
  12: task :build => [:compile, :test]
  13:
  14: task :compile do
  15:     sh "#{msbuild} #{project}.sln /property:Configuration=#{compile_target}"
  16: end
  17:
  18: task :test do
  19:     runner = 'tools\\Gallio\\Gallio.Echo.exe'
  20:     assembly = "Test\\bin\\#{compile_target}\\MyProject.Test.dll"
  21:     extension = '' #'/e:TeamCityExtension,Gallio.TeamCityIntegration'
  22:     sh "#{runner} #{assembly} #{extension}"
  23: end
  24:
  25: desc "Rebuild"
  26: task :rebuild do
  27:   sh "#{msbuild} #{project}.sln /t:Rebuild /property:Configuration=#{compile_target}"
  28: end
  29:
  30: desc "Clean"
  31: task :clean do
  32:   sh "#{msbuild} #{project}.sln /t:Clean /property:Configuration=#{compile_target}"
  33: end

So the first part of the file reads the environment variable containing the target configuration, defaulting to Debug if nothing is set.

Then we set up a default task (on line 10), which lets us just type “rake” on the command line without passing parameters; this defaults to the “build” task.  The following line sets up dependencies on that task, indicating that when we call build, we want to run the compile and test tasks.
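To make the dependency behavior concrete, here is a toy model (plain Ruby, not the real rake library, and it skips rake’s run-once bookkeeping) of how invoking :build runs its prerequisites first:

```ruby
# Toy task runner: each task's prerequisites run before the task itself,
# mirroring `task :build => [:compile, :test]`.
TASKS = { :build => [:compile, :test], :compile => [], :test => [] }

def run_task(name, log = [])
  TASKS[name].each { |dep| run_task(dep, log) }
  log << name
  log
end

run_task(:build)  # => [:compile, :test, :build]
```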

The compile task simply passes in the appropriate info to msbuild.exe.  The test task simply passes in the correct parameters to Gallio.Echo.exe to run our tests.
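For example, with sample values filled in (at runtime these come from the variables at the top of the rakefile, and framework_dir is derived from ENV['windir']), the compile task’s sh call receives a string like this:

```ruby
# Sample values only; the rakefile computes these at runtime.
msbuild        = 'C:/Windows/Microsoft.NET/Framework/v3.5/msbuild.exe'
project        = 'MyProject'
compile_target = 'Debug'

command = "#{msbuild} #{project}.sln /property:Configuration=#{compile_target}"
# => "C:/Windows/Microsoft.NET/Framework/v3.5/msbuild.exe MyProject.sln /property:Configuration=Debug"
```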

That’s really it.  As you can see, rake isn’t doing much but delegating to different applications, but if your builds are more complex, it can definitely help manage things (like loops, etc) if they are necessary in your build script.

Resources

Download ruby: ftp://ftp.ruby-lang.org/pub/ruby/binaries/mswin32/ruby-1.8.7-p72-i386-mswin32.zip

RubyGems: http://rubyforge.org/frs/?group_id=126

Building with Rake: http://testdrivendevelopment.wordpress.com/2009/02/01/nant-sucks-and-rake-rocks/
