Sergey Akopov

Perpetual adventurer and purveyor of awesomeness.

How I do dependency injection

It seems that dependency injection (DI) as a software development pattern has become the standard for many open source projects and software companies. Indeed, if you practice any kind of tiered development and employ test-driven practices, DI is likely a great direction for your projects.

When done right, DI allows developers to rather effortlessly decompose a complex design into well-defined and easily consumable modules. It gives you flexibility in how objects are interconnected, allowing your application to be configured for different modes of operation. Having said all of this, the real trouble with DI is that in most cases the concept behind it is misunderstood, and it becomes a major source of abuse.

Here are just a few problems with dependency injection that I come across time and time again.

  • The DI container is used as a service locator.
  • Object constructors explode with dependencies.
  • Everything is abstracted and injected.
  • The application tier has a reference to everything, even when it shouldn't by design, simply because it is configuring the DI container.

I have come up with a few practices over the years to minimize the negative impact of DI and basically lock it down in an effort to prevent abuse as much as possible.

The application loader

One of the things I completely despise about DI is the fact that the application tier is forced to reference every single component just because it is responsible for configuring the container. I think this smells.

In a typical tiered design all logic runs through some kind of service or business tier, and the application tier should have no knowledge of anything below that architectural level. Introducing unnecessary dependencies at this level invites accidental use of components that belong to architecturally restricted tiers.

To prevent this from happening I started using a separate project I call "The Loader." This project is responsible for registering and configuring components on behalf of the application tier by providing a single point of access, called "The Kernel", for convention-based component registration. Here is what this looks like in a sample project.

[Image: sample solution structure showing the separate loader project]

In the image above, the DependencyModule class implements Autofac's Module to register the dependency graph. The loader project now has a reference to everything instead of the application tier, greatly diminishing the chance of abuse. It also makes the application tier much cleaner.

using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using Autofac;
using Autofac.Integration.Mvc;

public sealed class DependencyModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        base.Load(builder);

        builder.Register(c => new SqlConnection(ConfigurationManager.ConnectionStrings["MyDatabase"].ConnectionString))
               .As<IDbConnection>()
               .InstancePerHttpRequest();

        builder.RegisterType<MemcacheProvider>()
               .As<ICacheProvider>()
               .SingleInstance();

        // ... remaining registrations
    }
}

The interesting bit here is the Kernel class, which provides convention-based component registration for the consumer (a.k.a. your app).

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;
using AutoMapper;

public sealed class Kernel
{
    private static readonly ContainerBuilder Builder;

    static Kernel()
    {
        Builder = new ContainerBuilder();
        Builder.RegisterModule(new DependencyModule());
    }

    public static void RegisterMvcControllers(Assembly assembly)
    {
        Builder.RegisterControllers(assembly);
    }

    public static void RegisterTasks(Assembly assembly)
    {
        Builder.RegisterAssemblyTypes(assembly)
               .Where(task => typeof(IBootstrapperTask).IsAssignableFrom(task))
               .As<IBootstrapperTask>()
               .SingleInstance();
    }

    public static void RegisterMaps(Assembly assembly)
    {
        // Instantiate every concrete AutoMapper profile found in the assembly.
        assembly.GetTypes()
                .Where(type => typeof(Profile).IsAssignableFrom(type) && !type.IsAbstract)
                .ToList()
                .ForEach(type => Mapper.AddProfile((Profile) Activator.CreateInstance(type)));
    }

    public static void Start()
    {
        var container = Builder.Build();

        var mvcResolver = new AutofacDependencyResolver(container);

        DependencyResolver.SetResolver(mvcResolver);

        // Run any registered bootstrapper tasks once the container is ready.
        container.Resolve<IEnumerable<IBootstrapperTask>>()
                 .ToList()
                 .ForEach(task => task.Run());
    }
}

The application tier simply references the loader project and calls the Kernel to configure itself, typically from Application_Start in Global.asax.

Kernel.RegisterMvcControllers(Assembly.GetExecutingAssembly());
Kernel.RegisterTasks(Assembly.GetExecutingAssembly());
Kernel.RegisterMaps(Assembly.GetExecutingAssembly());
Kernel.Start();

The application tier doesn't even know it's using a dependency injection container at all. Therefore, there is no way to use the DI container explicitly anywhere in the application tier. This removes the anti-pattern of using the container as a service locator.
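For contrast, here is a minimal sketch of the service-locator anti-pattern this setup prevents, next to the constructor-injected alternative (IEmailService and both controllers are hypothetical names, not from the project above).

using System.Web.Mvc;

// Anti-pattern: the controller reaches into the container through the
// resolver, hiding its dependency (IEmailService is a made-up interface).
public class BadController : Controller
{
    public ActionResult Send()
    {
        var email = DependencyResolver.Current.GetService<IEmailService>();
        email.Send();
        return View();
    }
}

// Preferred: the dependency is declared in the constructor and supplied
// by the container through the AutofacDependencyResolver set up earlier.
public class GoodController : Controller
{
    private readonly IEmailService _email;

    public GoodController(IEmailService email)
    {
        _email = email;
    }

    public ActionResult Send()
    {
        _email.Send();
        return View();
    }
}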

While this is a very simple example of the kernel, it can be extended to cover more complex configuration scenarios, where an application configures components in a variety of ways and takes different logical paths during execution.
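As a hypothetical sketch of one such extension (this overload is not part of the original Kernel), Start could accept a callback that lets each application adjust registrations before the container is built:

public static void Start(Action<ContainerBuilder> configure)
{
    // Hypothetical extension point: let the application override or add
    // registrations, e.g. to swap components per environment.
    if (configure != null)
    {
        configure(Builder);
    }

    Start();
}

A consumer could then call, say, Kernel.Start(b => b.RegisterType<FakeCacheProvider>().As<ICacheProvider>()) to swap in a test double, where FakeCacheProvider is made up for the example.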

I'm sure there are some edge cases, but so far I have successfully used this on multiple large projects with great results.

The dependency overflow

Another thing that's easy to do wrong when using dependency injection is to mindlessly add constructor dependencies because, well, the container will just resolve them anyway.

The key here is to realize that your architecture should be laid out without any reliance on inversion of control. Set a guideline for the number of dependencies per component. If that number is exceeded, it is time to break things up, because your component is probably doing too much work. Always keep in mind the single responsibility principle.

What I found works rather well for me is to strictly adhere to the set number of dependencies in any component below the application tier, while always striving to maintain this rule at the application tier (sometimes this isn't possible due to strict design specs). The thought process here is to make sure you can easily construct and test any component below the application tier. As long as this is possible, the way components are stitched together at the application tier isn't a major issue.
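As a hypothetical illustration (none of these interfaces come from the sample project), a constructor that has blown past the guideline usually hides more than one responsibility:

// Too many dependencies suggest this service is doing too much work.
public class OrderService
{
    public OrderService(
        IOrderRepository orders,
        ICustomerRepository customers,
        IPaymentGateway payments,
        IShippingCalculator shipping,
        ITaxCalculator taxes,
        IEmailSender email)
    {
        // ...
    }
}

// Splitting along responsibilities brings each constructor back in line.
public class OrderPricingService
{
    public OrderPricingService(IShippingCalculator shipping, ITaxCalculator taxes)
    {
        // ...
    }
}

public class OrderNotificationService
{
    public OrderNotificationService(IEmailSender email)
    {
        // ...
    }
}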

The injection craze

DO NOT abstract and inject everything on earth. It is highly unlikely that you're going to switch database technologies, so what's the point of creating a highly flexible repository that works with an ungodly number of data access libraries? Not only are you wasting time, your abstraction is probably not going to be all-encompassing and will inevitably leak like a sieve.

Keep your design tight and sane.


Bootstrapping your .NET MVC apps with executable tasks

When working on large projects in ASP.NET MVC I often try to automate the development workflow as much as possible. One such automation point is the bootstrapping logic required to fire up a typical MVC app. I'm referring to the following tasks:

  • Registering bundles.
  • Registering global filters.
  • Registering routes.
  • Registering areas.

I always find that this list easily blows up as the number of components grows. One great way to automate the registration of components at start-up is by using the Command pattern.

We'll start by creating the bootstrapper task interface.

public interface IBootstrapperTask
{
    void Run();
}

The next step is to convert all config code located in App_Start to tasks. Here is an example of the bundle registrations.

public class RegisterBundles : IBootstrapperTask
{
    public void Run()
    {
        var bundles = BundleTable.Bundles;

        bundles.Add(
            new ScriptBundle("~/assets/js/app").Include(
                "~/assets/some/kinda/js/script.js"));
    }
}

Now we just need to execute all bootstrapper tasks. This can be done using assembly scanning and reflection in Global.asax.

Assembly.GetExecutingAssembly()
        .GetTypes()
        .Where(type => typeof(IBootstrapperTask).IsAssignableFrom(type) && type.IsClass && !type.IsAbstract)
        .ToList()
        .ForEach(type => ((IBootstrapperTask) Activator.CreateInstance(type)).Run());

And this is all it takes! But let's not stop here. In more complicated cases your bootstrapper tasks could have constructor dependencies, which plain reflection can't satisfy. We can solve this using dependency injection.
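For example, a task might need a dependency handed to it (WarmUpCache and ICacheProvider are illustrative names; assume ICacheProvider is registered with the container):

// Hypothetical task with a constructor dependency; Activator.CreateInstance
// alone can no longer build it.
public class WarmUpCache : IBootstrapperTask
{
    private readonly ICacheProvider _cache;

    public WarmUpCache(ICacheProvider cache)
    {
        _cache = cache;
    }

    public void Run()
    {
        // Prime the cache on start-up.
    }
}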

I will show how to handle this using Autofac. You can do a similar thing with any other dependency injection container of your choosing. Let's scan the assembly and register all IBootstrapperTask implementations with the Autofac container.

public static void RegisterTasks(Assembly assembly)
{
    // Builder is the application's shared Autofac ContainerBuilder.
    Builder.RegisterAssemblyTypes(assembly)
           .Where(task => typeof(IBootstrapperTask).IsAssignableFrom(task))
           .As<IBootstrapperTask>()
           .SingleInstance();
}

You can call this from Global.asax or wherever you have your start-up code.

RegisterTasks(Assembly.GetExecutingAssembly());

Once the Autofac container is built, all tasks can execute.

var container = Builder.Build();

container.Resolve<IEnumerable<IBootstrapperTask>>()
         .ToList()
         .ForEach(task => task.Run());

Now you can just drop new bootstrapper tasks in your App_Start (or wherever) and they will be executed automatically on start-up.


Fresh, new look for my blog

I've been away from my blog for a while (as you may have already noticed). I can say that it's partially due to a painfully slo-o-o-w experience running WordPress on BlueHost. I eventually decided to ditch both, build a simple blog and learn a thing or two while I'm at it. I'm well aware that most of the stuff built into this little project is total overkill, but it was a great learning experience, so here is the full stack:

Best of all, all of this goodness is hosted on AppHarbor for absolutely zilch! They're like Heroku for .NET, trying to make .NET deployments really easy. Anyway, more posts to come.

Thanks for reading!


All things JavaScript – Writing better jQuery plugins

I’ve been doing quite a bit of work in JavaScript recently. It’s a major shift from regular server-side grind. I learned a few patterns and became more aware of different architectural practices when writing front-end code. I am starting to slowly get over my love/hate relationship with all things client-side. I absolutely love the responsiveness and interactivity of a modern user interface.

Yet, I despise writing JavaScript code. To me it’s the mind-boggling dynamic nature of JavaScript that makes it so complicated. So, what I’d like to do is start a quick series about different JavaScript tips and tricks I’ve learned over time, which essentially make writing JavaScript a slightly better experience (at least in my opinion).

Like many others, I write all of my JavaScript in jQuery. I’ve already talked about how to structure jQuery code; today I’d like to discuss a cool plugin pattern I picked up from Twitter Bootstrap while using it on one of my projects. Here is the skeleton.

(function () { 
    /*
     Plugin class definition
     */

    var Plugin,
        privateMethod;

    Plugin = (function () {

        /*
         Plugin constructor
         */

        function Plugin(element, options) {
            this.settings = $.extend({}, $.fn.plugin.defaults, options);
            this.$element = $(element);
            /* Do some initialization
             */
        }

        /*
         Public method
         */

        Plugin.prototype.doSomething = function () {
            /* Method body here
             */
        };

        return Plugin;

    })();

    /*
     Private method
     */

    privateMethod = function () {
        /* Method body here
         */
    };

    /*
     Plugin definition
     */

    $.fn.plugin = function (options) {
        var instance;
        instance = this.data('plugin');
        if (!instance) {
            return this.each(function () {
                return $(this).data('plugin', new Plugin(this, options));
            });
        }
        if (options === true) return instance;
        if ($.type(options) === 'string') instance[options]();
        return this;
    };

    $.fn.plugin.defaults = {
        property1: 'value',
        property2: 'value'
    };

    /*
     Apply plugin automatically to any element with data-plugin
     */

    $(function () {
        return new Plugin($('[data-plugin]'));
    }); 
}).call(this);

Calling the above plugin works like so:

$('selector').plugin();

Starting at the top, we see that all of our code is inside a self-executing anonymous function. This is a pretty standard pattern in JavaScript used to isolate code into “blocks.” Next we get to the interesting part, which sets this apart from other patterns: all of the plugin logic resides in the Plugin object. This allows you to use JavaScript’s prototypal inheritance to extend plugins when necessary.
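For instance, a second plugin could reuse this skeleton through the prototype chain. Here is a minimal sketch (FancyPlugin and doSomethingElse are made-up names, and Object.create assumes an ES5 environment):

// Hypothetical subclass reusing Plugin through the prototype chain.
function FancyPlugin(element, options) {
    Plugin.call(this, element, options);
}

FancyPlugin.prototype = Object.create(Plugin.prototype);
FancyPlugin.prototype.constructor = FancyPlugin;

// FancyPlugin inherits doSomething() and adds behavior of its own.
FancyPlugin.prototype.doSomethingElse = function () {
    /* Method body here */
};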

Things get a little tricky in our jQuery plugin definition. What we’re attempting to do here is store the Plugin object inside the data attribute of each element resolved through the selector. So here is what’s happening inside:

  1. If the selector in the calling code resolves to a single element, we’ll try to pull the instance of the Plugin object from that element’s data attribute.
  2. If nothing is found, we assume we’re working with a collection of elements and iterate through it, constructing (via the constructor) and storing an instance of the Plugin object in each element’s data attribute.
  3. If the calling code passes ‘true’ to the plugin, we’ll return the current instance of the Plugin object. This will only work with selectors resolving to a single element.
  4. If the calling code passes a string to the plugin, we’ll assume it’s the name of a method on our plugin class and attempt to execute it (see the examples just after this list).
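Putting those branches together, the calling conventions look like this (the .widget selector and option values are placeholders, and doSomething is the public method from the skeleton above):

// Initialize the plugin on all matched elements, overriding a default.
$('.widget').plugin({ property1: 'custom value' });

// Invoke a public method by name on the stored instance.
$('.widget').plugin('doSomething');

// Retrieve the underlying Plugin instance (single-element selectors only).
var instance = $('.widget').plugin(true);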

Last but not least, we’ll try to automatically apply the plugin to any element marked with its specific data attribute.

If you’re using CoffeeScript, here is the code that generates the above skeleton.

###
Plugin class definition
###
class Plugin
    ###
    Plugin constructor
    ###
    constructor: (element, options) ->
        this.settings = $.extend({}, $.fn.plugin.defaults, options)
        this.$element = $(element)
        ### Do some initialization ###

    ###
    Public method
    ###
    doSomething: () ->
        ### Method body here ###

###
Private method
###
privateMethod = () ->
    ### Method body here ###

###
Plugin definition
###
$.fn.plugin = (options) ->
    instance = this.data('plugin')

    if not instance
        return this.each ->
            $(this).data('plugin', new Plugin this, options)

    return instance if options is true

    instance[options]() if $.type(options) is 'string'

    return this

$.fn.plugin.defaults =
    property1: 'value'
    property2: 'value'

###
Apply plugin automatically to any element with data-plugin
###
$ -> new Plugin($('[data-plugin]'))

If you have questions or improvements, share them in comments. Thanks for reading!


From the trenches – tips on installing Team Foundation Server on multiple servers

My company has gone through 2 revision control systems. We’ve been through Visual SourceSafe, which, unfortunately, is still used for legacy projects. And we also had a pretty kick-ass time with Subversion. I personally absolutely love Subversion, but you can’t disregard the awesomeness of Team Foundation Server (TFS). Right out of the box you get a source repository, document storage, reporting, bug tracking and development methodology tools.

I digress. The goal of this post is to outline the requirements that have to be met before installation and the issues you may run across while installing TFS and its prerequisites.

Organizing your tiers

Before proceeding with installation you need to figure out how you want to organize your tiers. My installation utilizes 2 servers. The data-tier server is only running SQL Server 2008 R2. The application-tier server houses everything else, such as the TFS services, Analysis Services, SQL Server Reporting Services (SSRS) and SharePoint Services (WSS). This setup really depends on the size of your team. If you have a large distributed team that is growing, then perhaps you want to consider a different setup that allows for greater scalability as your company grows. If you have a really small team, then you could combine both tiers on the same server.

Preparing user accounts

It is VERY important not to screw this up. TFS installation requires at least 2 domain accounts (labeled like so below), however I will be discussing all of the standard accounts used.

TFSsetup account is used during installation, repair and servicing (applying patches and hotfixes). This account has to have local admin rights on the TFS server and be a “sysadmin” in SQL while performing these tasks. There is another very crucial step: the TFS installation uses the Windows Management Instrumentation (WMI) interface to query remote servers in order to validate that a certain service or component is installed and running.

This translates to one thing: the TFSsetup user must have administrator rights on all servers involved in the installation, or you will likely see a bunch of permission errors and warnings while the installer attempts to use WMI against a remote server. Having said that, if you plan to run your SQL Server on another box like I do, make sure your setup account has admin rights on it. The same applies to SSRS and Analysis Services.

TFSservice account, as the name suggests, is used to run the TFS services. This account is responsible for running several of the back-end TFS jobs and is the account used to access the SQL databases. It will need the “dbcreator” and “securityadmin” roles in SQL. In addition, it must be added to the “Log on as a service” security policy on the TFS server.

TFSreports account is used as the data reader account for SSRS. This account must be added to the “Allow log on locally” security policy or you will have problems executing reports. Optionally, you could use the TFSservice account for this.

WSSservice account is used to run the SharePoint services. Of course, this is only required if you plan to integrate with SharePoint. I simply use the TFSservice account for SharePoint.

Installation

There isn’t a lot to note here. The installation process is very straightforward, but the most crucial part is to carefully read the instructions at each step and specify the right user accounts. I found it easier to install all prerequisites prior to running TFS setup. As a rule of thumb, make sure your server is up to date with the latest and greatest prior to running TFS setup. Also, verify that TFS Service Pack 2 is installed on the server. Finally, don’t forget to run the entire install as the TFSsetup user.

SSRS/Analysis Services

If you’re installing Analysis Services and SSRS, the guidance is to use TFSservice as your “Service Account”; the installation defaults to Network Service or Local Service if you don’t specify otherwise. In my case, however, using a domain account as the Service Account in SSRS caused issues with TFS not being able to set up reports for Team Projects, and I also had authentication issues (only in IE) while trying to view reports. Changing back to the built-in “Network Service” account resolved both problems. Make sure to restart your service as soon as you change your account settings.

SharePoint

TFS setup gives you the option to install WSS for you. However, if you’re installing SharePoint manually as I did, then remember to select the “Web Front-end” installation type in the wizard and use 17012 for the port number when asked. I also had a strange problem running SharePoint setup where I was getting a “This package failed to run” exception. After pulling out the 3 hairs I have left on my head, I learned that the cure for this problem is to extract the contents of the setup package by running

C:\Path\To\SharePoint.exe /extract:C:\Path\To\Some\Extract\Folder

and then running Setup.exe in the destination folder you specified during the extract process.

That’s it! Happy installing!
