Wednesday, November 4, 2015

Getting Inside Angular: Can a Digest Continue Forever?


No, but let's look at why.

It is quite plausible to create a situation where a series of watchers that make changes to the scope would cause the current digest to continue forever. When performing a $digest, all the watchers are checked for changes, and only when no changes are detected does the digest end (from a high level this is true, but the actual change detection algorithm is optimized so that the final cycle terminates early when it reaches the last watcher that registered a change). If at least one watched attribute changes during each cycle, the digest would never end, as one change would trigger another and so on. Luckily the Angular team thought of this!

TTL:

Time To Live is the mechanism that keeps a record of how many cycles the digest has performed and will end the current digest if it reaches its 11th; in other words, a digest will terminate if it exceeds 10 cycles.
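
To picture where the TTL fits, here is a simplified sketch of the digest's outer loop. This is just the shape of the idea, not Angular's actual source; runWatchers stands in for a single pass over all the registered watchers.

// Simplified sketch of the digest's outer loop and its TTL check
// (illustrative only; not Angular's actual implementation)
function digestSketch(runWatchers) {
  // runWatchers() checks every watcher once and returns true if any watched value changed
  var ttl = 10; // Time To Live
  var dirty;
  do {
    dirty = runWatchers();
    if (dirty && !(ttl--)) {
      // the limit has been exceeded: abort rather than loop forever
      throw new Error('10 $digest() iterations reached. Aborting!');
    }
  } while (dirty);
}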

A simple way to create a situation where a watched attribute is being changed with each cycle of a digest is to use a counter and have the listener function increment its value:

$scope.counter = 0;
var removeWatch = $scope.$watch(
  function (scope) { return scope.counter; },
  function (newValue, oldValue, scope) { scope.counter++; }
);

After 10 cycles, the digest cycle terminates and an error is generated:
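
The error raised is Angular's "infdig" error, which reads along these lines (the exact wording varies slightly between versions):

Error: [$rootScope:infdig] 10 $digest() iterations reached. Aborting!
Watchers fired in the last 5 iterations: [...]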


Details of the watchers that were fired in the last five iterations are included to help identify what caused the digest to continue to the TTL limit.

Any undetected changes will remain so until the next digest is run. No future digest is scheduled in this situation as that would essentially be a continuation of the current digest. It is also worth noting that the $$postDigest queue is not flushed when a digest is terminated and won't be until the next digest cycle successfully ends.
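
For example, a callback registered via the (private) $$postDigest hook, shown here purely to illustrate the point, would stay queued when the digest aborts at the TTL limit and would only run once a later digest finishes cleanly:

$scope.$$postDigest(function () {
  // only runs once a digest completes successfully
  console.log('Digest completed');
});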

View on Plunker (open the developer tool's console).

You'll probably never run into this scenario, but it's good to know that the digest has got your back!

//Code, rinse, repeat

Tuesday, October 13, 2015

Why YOU Need to Read Other People's Code

Take a moment to sit back in your chair and think over the past week. How much coding did you do? And of that, how much was actual greenfield development? How much of your past week involved you working with code that was written by someone else or even by yourself but over six months ago (let's face it, if it was written by you six months ago, then it might as well have been written by another person!)?

I believe it's fairly obvious that there is more code out there in production than there is being created. What this means is that there's a lot more code that needs to be maintained on a daily basis than there is being written anew. An enterprise developer might get the opportunity to work on something new, but it is more than likely to be a new feature for an existing application. The constraints and API of the existing application will shape the architecture of this new feature, and you will learn them from either good documentation (how many places have that?) or from reading code.

I frequently hear "I'm not sure how that works. I'll have to go through the code" or something similar where I work. I feel that more time is spent reading code written by someone else than is spent actually writing code. If such a large amount of our working day is spent reading code, shouldn't we devote some of our "sharpening the saw" time to developing our skill at doing this? Writing code is easy but reading code is harder because you have no idea what the programmer was thinking at the time they wrote it! Spending some time sharpening your skills in this area would be beneficial to both yourself and your team.

There are also other benefits to reading code that was written by someone other than you:
  • You get to see how someone else approached a problem: this insight could help you the next time you face a similar problem
  • You will more than likely learn something: we don't know everything so this could be something to do with the programming language, framework, business domain, or coding style of the author.
  • Reading other people's code will make you more aware that someone else might read your code in the future
Where to go from here?

With open source, it couldn't be easier to start reading code written by someone else. There is a lot of code out there (both good and bad) so it shouldn't be hard to find a project that interests you. Jumping straight into the Angular code base probably isn't the best place to start, and I would recommend starting with something smaller that aims to solve a very simple and narrow problem.

Something to keep in mind: just because you don't understand the code doesn't necessarily mean the author is smarter than you, so don't let this discourage you! A section of difficult-to-understand code is a potential opportunity to refactor. You're probably not the only one struggling with it, so making it easier to understand would be a great way to contribute to the project.

And remember the next time you start writing some code, it's probably going to be someone else who is going to have to maintain it!

//Code, rinse and repeat

Wednesday, September 16, 2015

Debugging your Seed method when running Entity Framework's Update-Database PowerShell command

Sometimes, when running the Update-Database PowerShell command with Entity Framework's Code First development model, an exception is generated within your Seed method. Sadly, this is only communicated to you via the PowerShell window:


By default, as the Seed method is not being executed within a context which has a debugger attached, there is no way to interact with the exception as it occurs or to set a breakpoint within the Seed method to step through the code as it executes.

Wouldn't it be nice if you could take advantage of Visual Studio's debugger and interact with the code within the Seed method to get to the bottom of that pesky exception? Thankfully this is possible with a little bit of custom code that launches a new debugger if one is not already attached. Using code to launch a debugger?! Surely this is not possible, I hear you asking... I did not know this was possible either until I hit this problem, and this piece of code made me smile, A LOT!

if (System.Diagnostics.Debugger.IsAttached == false)
  System.Diagnostics.Debugger.Launch();

That is all there is to it. This code will determine if a debugger is already attached to your execution context. If not, a dialog is displayed allowing you to select what debugger you would like to attach:


As you can see from the capture above, you have the option of opening a new instance of Visual Studio to debug in or not opening a debugger at all. Note, you can also specify the version of Visual Studio you want to use. How cool is that? Hold on, it gets better.

If you select to debug in a new instance of Visual Studio 2013, Configuration.cs will open in a new window and display the warmly welcomed unhandled exception dialog as it occurs within the Seed method:


From here you have all the familiar functionality to drill into the exception details to determine what went wrong. Nice. There is still more though.

You can even set a break point in the debug instance of Visual Studio within the Seed method and step through the code as it executes:



Now that is seriously cool.

I generally add this piece of code to the top of my Seed method so the magic can happen whenever I execute the Update-Database PowerShell command. However, if you are updating your data model frequently, you may only want to include this code when you have a problem with the Seed method (or any other code being executed as part of the Update-Database process).

You can find a project here which illustrates this in action. All you will need to do is open it in Visual Studio 2013, build it to restore the packages (ensure you have NuGet version 2.7+ installed - see here for why) and run Update-Database from the Package Manager Console.

Tuesday, September 8, 2015

Getting Inside Angular: $scope.$evalAsync

If you want something to be executed while a digest is in progress, then $scope.$evalAsync is probably what you're looking for.

What does $scope.$evalAsync do?

When a digest is in progress, Angular is looping through all the watches and comparing values to detect if any changes have occurred. As the listener function from a watch could cause a change to another watched property, this loop can run multiple times per digest. $scope.$evalAsync adds the supplied function to a queue that will be drained at the beginning of the next loop in a digest cycle. It won't wait for the current digest cycle to complete but will execute the $evalAsync'ed function before starting the next loop through all the watches. If a digest is not in progress, one will be scheduled when a function is added to the async queue.
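
As a rough sketch (the shape of the behaviour only, not Angular's actual source), the async queue is drained at the top of each pass over the watches:

// Illustrative sketch of where the async queue is drained within the digest loop
function digestLoopSketch(asyncQueue, runWatchers) {
  // asyncQueue holds the functions handed to $evalAsync;
  // runWatchers() checks every watcher once and returns true if any watched value changed
  var dirty;
  do {
    while (asyncQueue.length) {
      asyncQueue.shift()(); // $evalAsync'ed work runs here, before the next pass over the watches
    }
    dirty = runWatchers();
  } while (dirty || asyncQueue.length);
}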

When would you want to use it?

A good explanation of when you would want to use $scope.$evalAsync can be found here: http://stackoverflow.com/questions/17301572/angularjs-evalasync-vs-timeout

In summary, as it queues work to occur outside of the current stack frame:
  • In a Controller: it will run before the DOM has been manipulated by Angular (take caution as your change could be overwritten) and therefore, before the browser renders
  • In a Service: it will run after the DOM has been manipulated by Angular but before the browser renders
If your change is going to affect something that is being rendered, $evalAsync will prevent a flicker from occurring as its work will occur before rendering.

Example

app.controller('MainCtrl', function($scope) {
  $scope.name = 'World';
  
  $scope.$watch(
    function (scope) { return scope.name; },
    function (newValue, oldValue, scope) {
      
      console.log('First watch executed \nAdding $evalAsync');
      scope.$evalAsync(function (scope) {
        
        console.log('$evalAsync executed');
        scope.newValue = "new value!";
      });
    }
  );
  
  $scope.$$postDigest(function () {
    
    console.log('$$postDigest executed. Digest completed');
    console.log($scope.newValue);
  });
});

View on Plunker (open the developer tool's console).

The above example ensures that $evalAsync is being called during a digest by calling it in the listener function of a watch. The work that is added to the async queue logs a message to the console to indicate that it has been executed and then updates a value on the scope. The call to $scope.$$postDigest adds a function to the post digest queue* to indicate when the digest has completed.

From the output it is clear to see that the function that was added to the async queue was in fact executed during the already in progress digest.

*The post digest queue is drained once all the change detection loops of the watches have completed and is the last step of a digest that you can interact with. However, the Angular team has marked $$postDigest as private (hence the two-dollar-sign prefix), which is their way of saying that you shouldn't use it, or at least that you should take great care if you do.

Performance considerations

Adding to the async queue is either going to cause a new digest to occur or if one is in progress, for it to loop one more time through the watches. If the queued work causes changes to other properties that are under watch, this will lead to additional loops. Angular has a mechanism to prevent a digest from continuing forever and will only allow 10 loops to occur before the digest is terminated (known as TTL: Time To Live). As you cannot control at what point during a digest (in other words, how many loops it has already performed) your work is being added, it could be approaching its TTL limit. This is something to consider when using $evalAsync.

Versus $scope.$applyAsync

Both $applyAsync and $evalAsync are tools that help you get around "$digest already in progress" errors. In my opinion, $applyAsync is the safer option as this queue will be flushed before a new digest begins. This will reduce the likelihood that your work will cause a digest to be terminated due to reaching the TTL limit. There is the potential though that using $applyAsync will cause a flicker as you will be relinquishing control back to the browser before your queued work is executed. If the browser decides to render first, then a flicker could occur. Some might prefer $evalAsync for this reason as it won't leave them at the mercy of the browser's event loop.

//Code, rinse, repeat

Tuesday, August 25, 2015

Getting Inside Angular: $scope.$applyAsync

Have you ever used $scope.$apply and received a "$digest already in progress" error? If so, $scope.$applyAsync might be what you should be using!


What is $apply actually doing?

The most common reason to use $apply is when something is happening outside the scope of Angular and you want Angular to react to any changes that have occurred. $apply is your way of instructing Angular that something has happened and that it needs to go through its change detection routine--in other words, perform another digest. You might have noticed that "digest" was also mentioned in the error message. This is no coincidence.

When you call $apply, it first executes the function that was supplied and then immediately starts a new digest cycle. In preparation to start a new digest, it checks that one is not already in progress before doing anything and if one is, the above exception is thrown. This can be somewhat jarring because you have little control over when Angular is performing a digest.
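
As a rough sketch (the shape of the behaviour rather than Angular's actual source), $apply boils down to something like this:

// 1. refuse to run if a digest is already in progress,
// 2. evaluate the supplied function against the scope,
// 3. then trigger a full digest from the root scope
function applySketch(scope, fn) {
  if (scope.$$phase) {
    throw new Error('$digest already in progress');
  }
  try {
    return scope.$eval(fn);
  } finally {
    scope.$root.$digest();
  }
}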

What’s the solution?

A common solution that I've seen is based on the fact that the function you want to $apply doesn't actually have to be "applied" at this moment in time and can wait to be executed at some point in the very near future. This is achieved by wrapping the $apply statement in a setTimeout with a very small timeout value, normally zero:

setTimeout(function () {

  $scope.$apply(function () {
   
    console.log('This will also work as you are using setTimeout');
  });
}, 0);

While this is all well and good and will solve the issue in the majority of cases, there happens to be a more efficient mechanism already built into Angular: $scope.$applyAsync!

Note: If the supplied function needs to be executed during the already-in-progress digest cycle, $evalAsync is probably what you're looking for.

$scope.$applyAsync

$applyAsync looks the same as $apply but, instead of starting a new digest immediately, it schedules one to start in the very near future. The benefit of using $applyAsync over wrapping $apply in your own timeout shows up when another digest cycle starts before the one you scheduled does. If this happens, the function supplied to $applyAsync will still be executed before the change detection routine starts, which would not be the case if you had used your own timeout. With a timeout, the order of execution is left for the browser to decide; by using $applyAsync, you are guaranteed that your function will be executed during the next digest.

Example:

app.controller('MainCtrl', function($scope) {
  
  $scope.$watch(function () { },
    function (newValue, oldValue, scope) { 
      
      scope.$apply(function () {
        
        console.log('This will not work');
      });
    }
  );
    
  $scope.$watch(function () { },
    function (newValue, oldValue, scope) { 
      
      scope.$applyAsync(function () {
        
        console.log('This will work as you are using $applyAsync!');
      });
    }
  );
}); 

View on Plunker (open the developer tool's console).

The above code shows one example where $apply is being called during a digest cycle, which will result in a "$digest already in progress" error (open the developer tool's console to see this). The second example uses $applyAsync, and the message "This will work as you are using $applyAsync!" will appear in the console.

//Code, rinse, repeat

Monday, July 27, 2015

How to Use Bootstrap (CSS only) and Webpack

I thought it was going to be easy.

require("bootstrap-webpack");

I thought that with a single line of JavaScript, I could have Bootstrap available for use within my web application. However, that was not to be the case. After battling through Webpack producing errors about CSS, woff2 and various other issues, I finally reached my limit at:

Uncaught ReferenceError: jQuery is not defined

I'm sure I'm not the first person to run into this issue (a quick Google produced this solution) but what pushed me over the edge was that I only wanted Bootstrap for its CSS; I had no need to add jQuery as I wasn't going to use any of Bootstrap's JavaScriptiness. As far as I was concerned, this was another hoop and I wasn't going to jump through it.

With invigorated motivation and a fierce sense of purpose, I set out to get Webpack running with Bootstrap's CSS and nothing more. There had to be a way to load bootstrap.min.css, and that is what I set out to find.


From my previous attempts to use bootstrap-webpack, I felt that if I had the right Webpack loaders installed and configured correctly, Webpack should be able to load the CSS file. After playing around with adding and removing loaders, I settled on five that appear to be the bare minimum you need:
  • babel-loader: to transpile the "require" keyword
  • css-loader & style-loader: for processing CSS
  • file-loader: for handling "eot" resources
  • url-loader: for handling woff, woff2, ttf & svg resources

I expect that as Bootstrap evolves, this set of loaders will change but as of v3.3.5, these are all you need.

webpack.config.js:

module.exports = {
    entry: {
        app: ["webpack/hot/dev-server", "./app/app.js"],
    },
    output: {    
        path: './app',
        filename: 'bundle.js'
    },
    module: {
      loaders: [
            { test: /\.jsx?$/, exclude: /(node_modules|bower_components)/, loader: 'babel' },
            { test: /\.css$/, loader: 'style-loader!css-loader' },
            { test: /\.eot(\?v=\d+\.\d+\.\d+)?$/, loader: "file" },
            { test: /\.(woff|woff2)$/, loader:"url?prefix=font/&limit=5000" },
            { test: /\.ttf(\?v=\d+\.\d+\.\d+)?$/, loader: "url?limit=10000&mimetype=application/octet-stream" },
            { test: /\.svg(\?v=\d+\.\d+\.\d+)?$/, loader: "url?limit=10000&mimetype=image/svg+xml" }
      ]
    }
};

The loaders were configured as above. Most of this was extracted from the configuration for bootstrap-webpack. The only change of note was updating the test for the url-loader to match both woff and woff2.
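
With those loaders configured, pulling the stylesheet into the bundle comes down to a single require in the application's entry file. The exact path below assumes Bootstrap was installed from npm; adjust it if your copy of bootstrap.min.css lives somewhere else:

require("bootstrap/dist/css/bootstrap.min.css");

The css-loader and style-loader then take care of turning that into a style tag that is injected into the page at runtime.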

This solution is perfect for use with an Angular application where you are using angular.ui. Adding Bootstrap's JavaScript and jQuery in this scenario would be a waste as angular.ui provides the same set of features. 


Oh, and with the Babel-loader installed, you are free to use ES6 to your heart's content!

Feel free to check out the repo on GitHub

//Code, rinse, repeat

Monday, July 20, 2015

The Importance of a Pet Project

A pet project is a great way to learn a new technology or get a better understanding of one you are familiar with already. It’s also a fun way to learn.

Here are a few “rules” I have found along the way after having nurtured a few pet projects of my own:
  1. Come up with a project that is not too big or complex. I want to be able to concentrate on exploring the technology instead of battling with the complexities of the project itself.
  2. The more fun a project is, the better chance you have at actually making good progress with it. As you are doing this in your own time, you want something that is interesting to keep you motivated otherwise it is too easy to find something else that needs to be done instead.
  3. Aim to build something that has a well defined feature set. It is too easy to start playing with a technology without an end goal in mind. By setting yourself some goals or features, it forces you to look at utilizing the technology with an end goal in mind.
  4. Be prepared to learn more than you expect – More on this below.
  5. Share what you have built. Sharing the end product with your peers is an excellent way to get some additional learning miles out of your pet project and new ideas.
The biggest surprise I constantly find is that I end up learning a lot more than I envisaged at the start of the project. For example, with my latest pet project (Sharpen the Dev Saw), I wanted to learn more about building a client-side JavaScript app and responsive design using Bootstrap. When I sat back at the end of the project and looked at what I had actually covered, it was a lot more:
  • Working with Knockout.js
  • JavaScript design patterns (MVVM, Revealing Module)
  • A better understanding of JavaScript as more than a scripting language.
  • CSS 3 animations using animate.css
  • CSS 3 transitions for hover states
  • Unit testing JavaScript using Jasmine
  • Working without JQuery
  • Flat UI web design
  • Displaying information concisely when using a Flat UI design
  • Modern input interactions using CSS animations
  • Using a light weight IDE - Brackets
  • Using Grunt as a build automation tool
  • Auto-deploying to Azure using Git and Grunt
As you can see, that is quite a list and I probably have missed some items off there as well.

Sharpen the Dev Saw was a straightforward pet project. As a keen believer in Continuous Learning, I am sometimes asked how you fit continuous learning into daily life. I used to explain that 30 minutes a day does not sound like much, but over the course of a year it adds up to quite a lot of learning. I usually then had to do the sums to back this up (it’s 10,950 minutes, or roughly 183 hours, a year!). I wanted an app that visually displayed this and then gave some examples of what you could achieve with this time. Hence, Sharpen the Dev Saw was born. As the project itself was simple, I was able to concentrate on exploring technologies to get it to work how I wanted instead of becoming bogged down with project complexities.



My pet project prior to Sharpen the Dev Saw was an online portfolio site. I originally planned to build a standard portfolio site using a master-detail design. In the end, I went with a totally different design and learnt about building Parallax Scrolling sites instead. Feel free to check it out here.



Another pet project came following a chat I had with a senior manager at a previous employer. They wanted to make a project’s or software release cycle’s progress more visible as all the stats were buried in Team Foundation Server. This chat led to me building MVC Dashboard as a demo dashboard app with a Metro design. You can see MVC Dashboard here.



So what’s next? I would like to add some automated UI tests to the deployment pipeline for Sharpen the Dev Saw which are run once the deployment to Azure is complete. Also, I wouldn't mind looking into re-writing Sharpen the Dev Saw using TypeScript too. However, my next pet project is calling - a Continuous Learning Management app built with AngularJS and ASP.NET WebAPI. Here comes the sales pitch… Ever spent hours hunting for that blog post you read months ago but could really do with finding now? Want to make all the self study and development you do in your own time more visible?  Have you wondered what tech areas you have spent the last few months looking in to? This could be the app for you. I am hoping it will be an app that will allow you to track what you are learning and gain insights to help you steer your learning efforts in the direction you want. More on this another time though. (It’s an ambitious project and probably breaks rule 1 – Doh!)

You can find the source code for the pet projects mentioned in this blog post here:

Nurture your pet project, it’s an excellent learning tool.

Wednesday, June 24, 2015

Sharpen the Saw with a Podcast

Podcasts are a great way to Sharpen the Saw. You can gain a wealth of knowledge from them on numerous topics. The beauty of the podcast is that you can multi-thread your life with them. You can listen to a podcast whilst doing something else, so you do not have to specifically set time aside for it. I generally listen to podcasts when I am commuting to and from work and in the morning whilst getting ready for work. Previously this was time when I would listen to the radio, but now I have turned it into a valuable learning opportunity. Look at it this way: each day I roughly squeeze in an hour and a half of podcast listening--that's 7.5 hours a week or 390 hours a year. Considering the average podcast is an hour long, that's 390 podcasts a year I am listening to! That is not bad going for doing very little more than I would usually do on a day-to-day basis.

Below is a very brief summary of the podcasts I am currently listening to:

  • .NET Rocks - This is the first podcast I started listening to. I am quite a religious listener and try to listen to each episode as it's a great way to broaden your technological awareness in general.
  • Hanselminutes - Brought to you by the legendary Scott Hanselman. Each episode is full of his usual humor, making them a highly enjoyable listening experience.
  • Herding Code - Great line up of host presenters. I generally cherry pick these as they can have similar material and guests to .NET Rocks.
  • Hello World - There is lots of inspiration and advice to be gleaned from listening to the stories of how influential developers started and progressed their careers. You can read more about this podcast in my blog post here.
  • Adventures in Angular - I have started listening to this to help level up my AngularJS skills. Each show is about half an hour long so you can consume lots of these very fast. It also features one of my favorite developers, John Papa, on the panel, so it gets a big thumbs up from me.
  • JavaScript Jabber - I have also started to listen to this to help broaden my JavaScript knowledge. It's a great way to learn about potentially useful JavaScript frameworks (if you do not know they exist, how can you consider using them to solve a problem?) and the world of JavaScript in general.
  • EntreProgrammers - This is a great podcast where programmers turned entrepreneurs discuss their weekly activities. It's not your standard podcast setup as it's more a recording of them chatting about their entrepreneurial activities, but it's a great way to get an insight into the entrepreneurial world from a developer's perspective. Highly recommended.
There are lots of podcasts freely available on a vast variety of topics and I am barely scratching the surface with the selection I have listed above.

Grab yourself some podcasts, start multi-threading your life and sharpen that saw.

Wednesday, May 13, 2015

Human Readable Code - Fluently Mocking the ASP.NET MVC ControllerContext

I was recently writing some unit tests against an MVC controller action which required a user to be authenticated via Windows Authentication. The test was initially failing as it was not able to access the controller action. This was happening because the context in which the test was being run did not have an authenticated user.
 

I resolved this issue by mocking the ControllerContext so the controller "believed" it was being accessed by an authenticated user. Using the Moq mocking framework, this can be accomplished using the following mocking code:

var mockControllerContext = new Mock<ControllerContext>();
mockControllerContext
     .SetupGet(c => c.HttpContext.User.Identity.Name)
     .Returns("TestUser");
mockControllerContext
     .SetupGet(c => c.HttpContext.User.Identity.IsAuthenticated)
     .Returns(true);

var homeController = new HomeController();
homeController.ControllerContext = mockControllerContext.Object;

This implementation does the job but it is not particularly readable at a glance. You have to read through each statement to understand which property the mock is setting up a return value for. Unit tests are a great way to understand the workings of a code base as they allow you to see at a high level how the code base is supposed to behave under certain circumstances. However, this benefit is dramatically reduced if the unit tests themselves are long winded and not easy to understand. 

After watching Cory House’s Pluralsight course Clean Code: Writing Code for Humans, I have become acutely aware of the benefits of making your code more human readable. Extension methods are a good way to achieve a more readable syntax, and they led me to refactor the above mocking code into the following:


var mockControllerContext = new Mock<ControllerContext>()
                                 .WithIdentityNameAs("testUser")
                                 .IsAuthenticated();

I was able to create this fluent syntax by creating a couple of extension methods that could be chained together to update the mocked ControllerContext object as required.

public static Mock<ControllerContext> WithIdentityNameAs(
     this Mock<ControllerContext> mockControllerContext, string username)
{
 mockControllerContext
          .SetupGet(p => p.HttpContext.User.Identity.Name)
          .Returns(username);

 return mockControllerContext;
}

public static Mock<ControllerContext> IsAuthenticated(
     this Mock<ControllerContext> mockControllerContext)
{
 mockControllerContext
          .SetupGet(p => p.HttpContext.User.Identity.IsAuthenticated)
          .Returns(true);

 return mockControllerContext;
}

The key to writing extension methods that allow method chaining is to return the same type that the extension method extends. In the case of my extension methods, I had to return the mockControllerContext object (an instance of Mock<ControllerContext>) as this is what was being extended. Because you are returning the same type that your extension method extends, you can call another extension method on the returned object and chain your extension methods together. This chaining makes for nice, succinct, readable expressions.

The main drawback with this approach is that you have now moved test setup code away from the unit test itself into a separate extension method helper class. This is not necessarily a bad thing. It helps promote code reuse as you can use these extension methods in many other unit tests without having to duplicate the mock setup code.

On a separate note, the refactored implementation gives the reader the option of deciding what level of abstraction they want to read the code at. If they only want to understand what is taking place in the test setup, reading the extension method names will be sufficient. However, if they are interested in how these values are actually set up in the mocks, they have the option of diving deeper into the extension methods themselves to view the implementation details. The key point here is that the reader has this option. My first implementation forced the reader to read the details to understand what test setup was taking place, whereas my second implementation abstracted these details away. Providing these levels of abstraction to the reader is another technique which can help make your code more readable.

Wednesday, April 29, 2015

On Demand Dependencies in C#

Imagine a situation where you have a business engine that requires different dependencies, such as repositories and services. (Although a 'business engine' here refers to a class that contains business logic, for the purpose of this post you could substitute any class that requires dependencies.) An easy solution for decoupling these dependencies from the business engine would be to employ dependency injection via the constructor. When your business engine only requires a few dependencies, this method works well. However, what about when six dependencies are being injected, as illustrated below?

public class MyBusinessEngine
{
    private readonly IDependencyA DependencyA;
    private readonly IDependencyB DependencyB;
    private readonly IDependencyC DependencyC;
    private readonly IDependencyD DependencyD;
    private readonly IDependencyE DependencyE;
    private readonly IDependencyF DependencyF;

    //Constructor injection from IoC
    public MyBusinessEngine(IDependencyA dependencyA, IDependencyB dependencyB, IDependencyC dependencyC,
        IDependencyD dependencyD, IDependencyE dependencyE, IDependencyF dependencyF)
    {
        DependencyA = dependencyA;
        DependencyB = dependencyB;
        DependencyC = dependencyC;
        DependencyD = dependencyD;
        DependencyE = dependencyE;
        DependencyF = dependencyF;
    }

    //Multiple functions that use the above dependencies. 
    //Note that not every method uses all of the dependencies; some might use
    //all of them while others might only use a few.
}

With this long list of dependencies, the constructor looks cluttered. In addition, as not all the methods on MyBusinessEngine use all the injected dependencies, resources are being wasted creating them when they aren't going to be needed. There has to be a better way for when you don’t need to inject the kitchen sink!

Resolve dependencies on demand:

public class MyBusinessEngine
{
    public MyBusinessEngine()
    {
    }

    public void SomeMethod()
    {
        var dependencyA = MyContainer.Container.GetInstance<IDependencyA>();
        var dependencyD = MyContainer.Container.GetInstance<IDependencyD>();

        //...
    }
}

Dependencies are no longer injected, but are resolved as they are needed by requesting them from the IoC container directly. Resources are saved by no longer supplying everything the class could need and only creating what is required by the method being invoked. This is an easy solution to implement; however, it has a serious drawback: the class is now coupled to the IoC container. In order to unit test this class, mocks of the dependencies will need to be registered with the IoC container so they can be resolved at runtime by the class under test. Once again, there must be a better way.

Abstracting the IoC container:

public class MyDependencyResolver : IDependencyResolver
{
    public T GetInstance<T>()
    {
        return MyContainer.Container.GetInstance<T>();
    }
}

public class BusinessEngine
{
    private readonly IDependencyResolver DependencyResolver;

    //Constructor injection of MyDependencyResolver from IoC
    public BusinessEngine(IDependencyResolver dependencyResolver)
    {
        DependencyResolver = dependencyResolver;
    }

    public void SomeMethod()
    {
        var dependencyA = DependencyResolver.GetInstance<IDependencyA>();
        var dependencyD = DependencyResolver.GetInstance<IDependencyD>();

        //...
    }
}

In order to remove the coupling of the business engine to the IoC container, the mechanism by which dependencies are resolved needs to be abstracted. This is achieved by creating a dependency resolver that handles the resolution of dependencies. The business engine no longer cares about how the dependency is created; all it cares about is getting the one it requested. You could move from using StructureMap to Castle Windsor, or even to newing-up instances directly (which is not recommended), and the business engine would not be affected.

Unit testing the business engine is now possible through mocking the dependency resolver to return mocks of the dependencies required by the code under test. This makes testing a lot easier and removes the need for your tests to reference the IoC container.

To limit the objects that can be retrieved from your dependency resolver, you can add a constraint upon the GetInstance method. For example, if you created a resolver for repositories, you could apply this constraint: T GetInstance<T>() where T : class, IRepository. This would ensure that only classes that implement IRepository can be served by this dependency resolver. The only issue with this is that as you are now creating multiple dependency resolvers, you will also be adding additional references to your IoC container throughout your project. Fortunately, there's a better way for doing this too.

Decoupling the IoC Container:

public class MyDependencyResolver : IDependencyResolver
{
    public T GetInstance<T>()
    {
        return ServiceLocator.Current.GetInstance<T>();
    }
}

Through using the Common Service Locator (see my blog post from January), the IoC container is no longer coupled to the dependency resolver. If you wanted to change your IoC container, you can now achieve this by making the change in a single location (MyContainer) rather than having to update all the dependency resolvers as well. Whereas before your project was heavily coupled to your IoC container, by shifting that coupling to the Common Service Locator, you are now free to change your container with minimal impact.

There's the option to only inject dependency resolvers and then to resolve all other instances on demand. This would certainly help to keep your constructors short and concise. It would also help with the maintenance of unit tests as you would no longer break your tests the moment you add a new dependency to the constructor. As that new dependency would be supplied from the dependency resolver that's already being injected, no change to the constructor's signature would be required.

One last thing to consider is that the service locator pattern hides the context from the IoC container when supplying requested dependencies. In other words, it has no idea about what other dependencies have been requested or what is requesting the dependency. If context is important to your application, then it would be wise to inject directly from the IoC container.

The code for the OnDemandDependencies project can be found on GitHub. I've included an example of how you can unit test a business engine with MSTest and Rhino Mocks and how you can create dependencies that live for the lifetime of a dependency resolver.

Wednesday, April 22, 2015

C# Predicates, some Extension Method goodness and a sprinkling of Generic Methods = DIY LINQ

I was reading an article on .Net Design Patterns in Issue 16 of the Dot Net Curry Magazine where they were describing the Pluggable pattern. The implementation of the pattern used the Predicate type to allow you to pass a predicate expression (a statement that resolves to either true or false) into a method that totaled up a collection of integers. The totaling method used the predicate to determine which values were to be selected from the collection (e.g. only even numbers, or numbers greater than five), summed them up and returned the total. This pluggable design allows you to defer specifying the exact behavior of the totaling method, as you are injecting the value selection criteria into it. The pattern is very powerful as it allows you to write methods that are concerned with executing a process whilst not being concerned with the exact details of that process.

This article got me thinking about how LINQ (Language Integrated Query) works under the covers, as this process of supplying selection criteria is exactly what is happening there too. Chatting to my brother about his blog post, where he set himself the challenge of writing his own implementation of JavaScript Promises (it’s very cool--check it out here), motivated me to set my own mini challenge: build my own version of the LINQ Where statement.

Like my brother, I decided to set out some constraints:

  • It should work using a fluent syntax.
  • It should accept a lambda predicate statement as the selector.
  • It should work with all types (not just integers or strings).
Effectively, I wanted to create something which mirrored the API of LINQ’s Where statement as closely as possible.
 

I started by breaking down the task into smaller parts. First I created a “WhereMine” method which accepted an IEnumerable<int> and a Predicate<int> and returned an IEnumerable<int> containing the integers which matched the supplied predicate. This can be seen below:

public static IEnumerable<int> WhereMine(IEnumerable<int> values, 
     Predicate<int> predicate)
{
 IList<int> matchedValues = new List<int>();

 foreach(var value in values)
 {
  if(predicate(value))                
   matchedValues.Add(value);                
 }

 return matchedValues;
}

This step allowed me to verify that the predicate expression was working as I had expected it to by only adding a value to the matchedValues collection if the predicate expression was true.

The WhereMine method was called using the following syntax (Please note, DisplayToConsole() is a helper extension method I wrote to write the contents of an enumerable to the console):


IEnumerable<int> numbers = new[] { 1, 2, 3, 4, 5 };
Console.WriteLine("Numbers:");
numbers.DisplayToConsole();

//Predicate Method
Console.WriteLine("Predicate Method");
Console.WriteLine("Only Even:");
WhereMine(numbers, x => x % 2 == 0).DisplayToConsole();
Console.WriteLine("Greater than 2:");
WhereMine(numbers, x => x > 2).DisplayToConsole();

This generated the following output:


So far so good. Next I created an extension method that extended IEnumerable<int> to help provide a similar fluent syntax to LINQ’s Where API:

public static IEnumerable<int> WhereMine(this IEnumerable<int> values, 
 Predicate<int> predicate)
{
 IList<int> matchedValues = new List<int>();

 foreach(var value in values)
 {
  if (predicate(value))
   matchedValues.Add(value);
 }

 return matchedValues;
}

If you are not familiar with creating your own extension methods, you can see this is accomplished by preceding the type you are extending with the "this" keyword in the method's first parameter.

This update allowed me to use my WhereMine filter method using a fluent syntax:

IEnumerable<int> numbers = new[] { 1, 2, 3, 4, 5 };
Console.WriteLine("Numbers:");
numbers.DisplayToConsole();

//Extension Method
Console.WriteLine("Extension Method");
Console.WriteLine("Only odd:");
numbers.WhereMine(x => x % 2 == 1).DisplayToConsole();
Console.WriteLine("Greater than 3 but less than 5");
numbers.WhereMine(x => x > 3 && x < 5).DisplayToConsole();

This generated the following output:


Almost there. The last step was to update it so it worked with all types instead of being limited to a single type as my current implementation was. I achieved this by making my WhereMine extension method a generic extension method using the generic type parameter <T>, as shown below:

public static IEnumerable<T> WhereMine<T>(this IEnumerable<T> values, 
     Predicate<T> predicate)
{
 IList<T> matchedValues = new List<T>();

 foreach(var value in values)
 {
  if (predicate(value))
   matchedValues.Add(value);
 }

 return matchedValues;
}

By making it a generic extension method it was now possible to use the WhereMine extension method to filter collections of different types:

//Generic Extension Method
Console.WriteLine("Generic Extension Method");
Console.WriteLine("Numbers doubles:");
IEnumerable<double> numbersDouble = 
     new[] { 1.5d, 5.6d, 9.9d, 10.8d, 20d };
numbersDouble.DisplayToConsole();
Console.WriteLine("Less than 10");
numbersDouble.WhereMine(x => x < 10).DisplayToConsole();

IEnumerable<string> names = 
     new[] { "Jane", "Steve", "Jack", "Betty", "Bill", "Jenny", "Joseph" };
Console.WriteLine("Names:");
names.DisplayToConsole();
Console.WriteLine("Names that start with 'J':");
names.WhereMine(x => x.StartsWith("J")).DisplayToConsole();
Console.WriteLine("Names that are 5 characters in length:");
names.WhereMine(x => x.Length == 5).DisplayToConsole();
Console.WriteLine("Names that start with 'J' and are longer than 4 chars");
names.WhereMine(x => x.Length > 4)
     .WhereMine(x => x.StartsWith("J"))
     .DisplayToConsole();
// Or could be written as
//names.WhereMine(x => x.Length > 4 && x.StartsWith("J")).DisplayToConsole();

This demonstration uses the same WhereMine extension method to filter an enumerable of doubles and an enumerable of strings generating the following output:


By combining a number of the language features of C#, I was able to achieve my end goal of creating my own filter method like the LINQ Where statement with a similar API. I think of this activity as exploratory coding (or sharpening the saw)--where you are trying to build something with the intention of learning more about the language / framework you are using and not necessarily using the end product. I find this learning process a lot more fun and insightful than simply reading about something in a book or on a website. Building something helps solidify what you have learnt and can identify any misunderstandings or gaps in your knowledge.

You can find a project which contains the above source code here.


Happy exploratory coding!