Wednesday, March 25, 2015

How Do I Simultaneously Navigate Multiple Web Browsers?

With some JavaScript and Socket.IO!

However, before getting into that, here's a little background on why I needed to do this (feel free to scroll down if you want to jump to the code). Working as a developer, the majority of your code will be aimed at being deployed in production. By production, I mean an environment where it will be used to provide value to the organization employing you, whether that's a customer-facing web application or a test harness that's used as part of the build process. When you are developing for a production environment, it makes sense to stay within the confines of the current tech stack being employed, unless there is a compelling reason to introduce a new element. For example, if your organization develops its web applications using only ASP.NET, it wouldn't make sense for one developer to start building a couple of new pages in PHP. Sticking to a tech stack improves maintainability tremendously, but it can also limit our ability to see beyond that stack. This is why I savor the opportunities to code something that won't go to production and won't need to be maintained by anyone. They give me the freedom to choose whatever technologies I feel I can be most productive with to solve the problem at hand. It's great to leave the confines of your production stack, step outside of the box, and try something new.

One of these opportunities presented itself when I ran into an issue where it appeared that either sessions were getting mixed up or the wrong value was being used for the Set-Cookie HTTP header. From examining the webserver's logs and the database's audit log, it was clear that the affected parties' requests reached the webserver at exactly the same instant. This meant that, in order to attempt to reproduce the issue, I needed a way to trigger multiple browsers to navigate to the same URL at exactly the same time, increasing the likelihood of an occurrence, as the issue did not manifest every time two or more requests were received simultaneously. This was the moment I had been waiting for, the moment to try something new!

I decided to use node.js as I felt I would be able to get something up and running with minimal friction. The first solution I considered was to have the browsers repeatedly poll the server and ask whether they should navigate. When the server said yes, each of the browsers would then navigate to the website I wanted to test. In my mind's eye, the implementation of this would be very easy for both the client and the server, but something didn't feel right about it. I ended up dismissing this approach because the accuracy of the synchronized navigation would rely solely on the shortness of the poll interval. I felt that the command to navigate should be sent from the server to all the browsers in the form of a broadcast. This led me to web sockets and ultimately Socket.IO.
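To make that accuracy concern concrete, here's a small simulation (entirely illustrative, not part of the project) of two clients polling on the same interval but with offset start times; they can end up acting almost a full poll interval apart:

```javascript
// Each client polls at firstPollAt, firstPollAt + interval, and so on.
// nextPollAfter returns the first poll at or after the moment the server
// flips its "navigate now" flag.
function nextPollAfter(flagTime, firstPollAt, interval) {
    var t = firstPollAt;
    while (t < flagTime) {
        t += interval;
    }
    return t;
}

var interval = 500;                                   // poll every 500 ms
var flagTime = 1000;                                  // server says "go" at t=1000 ms
var clientA = nextPollAfter(flagTime, 0, interval);   // sees the flag at t=1000
var clientB = nextPollAfter(flagTime, 490, interval); // sees the flag at t=1490

console.log(clientB - clientA); // 490 ms of skew, close to the full interval
```

Shrinking the interval narrows the worst-case skew, but only at the cost of hammering the server with requests, which is exactly why a server-initiated broadcast felt like the better fit.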

The solution I created has an index page and an admin page. When a browser loads the index page, a connection is established with the web server and the server sends a message back confirming that the browser is connected. The admin page allows messages to be broadcast to all the connected clients and can command them to navigate to a URL.

Server side

app.js:
(function () {
 'use strict';
 
 var http = require("http");
 var express = require("express");
 var app = express();
 var router = express.Router();
 
 app.use(express.static(__dirname + '/public'));
 
 router.use(function (req, res) {

  res.status(404).send("Did you get lost?");
 });

 router.use(function (err, req, res, next) {

  console.log("ERROR:");
  console.dir(err);
  
  if (req.xhr) {
   res.set('Content-Type','application/json');
   res.status(500).send({ message: "Oops, something went wrong." });
  } else {
   res.status(500).send("Oops, something went wrong.");
  }
 });

 app.use('/', router);

 var port = process.env.PORT || 3000;
 var server = http.createServer(app);
 server.listen(port);
 require('./sockets.js')(server);
 console.log("Listening on port %d", port);
 
}());

The majority of this is plumbing for Express.js. The most significant part is the require('./sockets.js')(server) call near the end, as this is where Socket.IO is configured. I've taken a liking to my modules returning a function when they need to perform some initialization upon import. Once you invoke the initialization function, it can then return the API for the module. This construct mirrors the behavior of a constructor by helping to ensure that the module is in a valid state before you start to use it. You can see it in action where I invoke the imported sockets.js module and pass in the server variable. In this case, no API is returned because, once Socket.IO is set up, I do not need to interact with it again.

sockets.js:
(function () {
 'use strict';
 var socketio = require("socket.io");
 
 module.exports = function (server) {
  
  var io = socketio.listen(server);
  
  io.on('connection', function (socket) {
   
   socket.on('join', function (data) {
    
    var address = socket.handshake.address;
    console.log("Joined: " + address);
    
    socket.emit('joined', { message: "You are connected!" });
   });
   
   socket.on('send message', function (data) {
    
    console.log("Message: " + data.message);
    
    io.emit('client message', { message: data.message });
   });
   
   socket.on('navigate', function (data) {
    console.log("Navigate: " + data.url);
    
    io.emit('navigate browser', { url: data.url });
   });
  });
 };
        
}());

Socket.IO revolves around emitting events and responding to them. The io.on('connection', ...) call is an example of how you set up a callback to respond to a "connection" event received by the server through the use of the on function. Responding to the "connection" event is the standard pattern for defining the behavior for when a new connection is established. Inside that callback, I am creating three listeners that will react to the connected socket emitting "join", "send message" and "navigate" events. The "send message" and "navigate" listeners both emit events of their own. As these are emitted on the io object, they are broadcast to all the connected sockets. However, the "join" listener emits the "joined" event only on the connecting socket. When you emit an event, you have the option to send data with it, which can be seen as the second parameter of the emit function.

Client side

index.js:
(function () {
 'use strict';
 
 var messages = document.getElementById('messages');
 var addMessage = function (message) {

  messages.innerHTML = messages.innerHTML + message + '<br />';
 };
 
 var socket = io.connect(location.host);
 
 socket.on('navigate browser', function (data) {
  
  location.href = data.url;
 });
 
 socket.on('client message', function (data) {
  
  addMessage(data.message);
 });
 
 socket.on('joined', function (data) {
  
  addMessage(data.message);
 });
 
 socket.emit('join');

}());

admin.js:
(function () {
 'use strict';

 var socket = io.connect(location.host);
 
 var message = document.getElementById('message');
 document.getElementById('message-button').addEventListener('click', function () {

  socket.emit('send message', { message: message.value });
 });
 
 document.getElementById('navigate-button').addEventListener('click', function () {
  
  var url = document.getElementById('url').value;
  socket.emit('navigate', { url: url });
 });

}());

With Socket.IO, the client side of the wire follows a very similar pattern to the server side. After connecting to the server, you are returned a socket on which you can emit events or listen for them. As on the server, this is done using the on and emit functions. For an example of listening for an event, look at the 'navigate browser' listener in index.js, and for an example of emitting an event from the client, look at the 'send message' emit in admin.js. The entire solution can be found on GitHub.

I realize that I could have stuck with the .NET stack and used ASP.NET SignalR to achieve the same results, but I feel that trying something new in a different stack can bring new insights that help make you a better developer. Each stack has its good points and its bad; however, if you only ever stick with the same stack, you will only ever experience the same pluses and the same minuses. By trying different programming languages, web frameworks, databases, and so on, you can expand your horizons to new and different ways of doing things. You may even find a new solution for a pain-point that's been nagging you since you can't remember when. I plan to step outside of the box whenever the opportunity presents itself and try something new. And maybe you should too!

Wednesday, March 11, 2015

Roman Numeral Converter - Functional Programming Style

After watching Venkat Subramaniam’s NDC session on Transforming your C# code to functional style, I had the urge to try out some functional programming in C# of my own. I decided to re-tackle the Roman Numeral Converter problem, which I had previously solved using a strategy pattern design.

My previous solution broke the converter problem down into a number of smaller converters which were each responsible for converting a numeric unit (ones, tens, hundreds and thousands) of the decimal value to its Roman numeral equivalent. I used strategies to represent each of these converters that could be enumerated through to complete the conversion process. You can find more details about my strategy pattern implementation in my blog post here.

Functional programming focuses on treating functions as first-class values which can be assigned to variables, passed as arguments or added into collections that can be enumerated. This shift in programming paradigm can result in very powerful and interesting designs.

From watching Venkat’s session, I knew that instead of having a number of strategies as part of my conversion process, I could have a number of algorithmic functions which, when combined, would perform the conversion. These algorithms would be functions residing in an enumerable type that could be enumerated over to ensure each algorithm was applied to the decimal value.

The C# language has a number of types which help facilitate a more functional style of programming. The Action delegate types allow a function with no return value to be assigned to a variable, while the Func delegate types cover functions that return a value. There are many overloads of each to cater for a range of input and output scenarios. For example, a function which accepts a string and returns an integer could be represented by the type Func<string, int>, an instance of the generic Func<T, TResult> delegate.

My functional implementation of the Roman numeral converter uses functions to convert each decimal unit into Roman numerals. These functions accept an integer and return a string, so the following Func type is required to represent them:

Func<int, string> 

An implementation of one of the conversion algorithm functions looks like the following:
public static string Ones(int number)
{
 int onesToConvert = number % 10;

 string conversion = string.Empty;

 int i = 1;

 while (i <= onesToConvert)
 {
  if (i == 4)
   conversion = "IV";
  else if (i == 5)
   conversion = "V";
  else if (i == 9)
   conversion = "IX";
  else
   conversion += "I";
  i++;
 }

 return conversion;
} 

I chose to create each of the conversion algorithms as a static method in a static class to provide an easier API for accessing them:
ConverterAlgorithms.Ones

By using a collection of Func<int, string> delegates, I was able to create a list to which I could add each of my conversion algorithms and then enumerate to complete the conversion:
private List<Func<int, string>> _converters;

_converters = new List<Func<int, string>>
{
 ConverterAlgorithms.Thousands,
 ConverterAlgorithms.Hundreds,
 ConverterAlgorithms.Tens,
 ConverterAlgorithms.Ones
};

Once I had an enumerable of Funcs representing each of the individual conversion algorithms, I enumerated through the converters collection, invoking each function with the number to convert and concatenating the returned conversion results:
string result = string.Empty;

_converters.ForEach(x => result += x(number));

return result;

It’s interesting to compare this approach to my previous solution which used the strategy pattern. The functional implementation feels a little more lightweight and free of ceremony, as the strategy pattern required a different type to be created for each conversion strategy, each of which implemented an IConverter interface. The functional approach required none of this, the only constraint being the method signature of each conversion algorithm.

A question I debated was: when is it preferable to use each approach? Referring to C# 5.0 in a Nutshell, it offers some good general advice on when the functional (or delegate) approach is more suitable:
  1. If the interface only defines a single method.
  2. Multicast capability is needed.
  3. The subscriber needs to implement the interface multiple times.
In the case of my strategy pattern implementation, points 1 and 3 are applicable. The IConverter interface only included a single method, string Convert(int number), and it needed to be implemented multiple times, once in each of the conversion strategies.

You can find the complete source code, including unit tests, of my functional implementation of the Roman numeral converter here.

Watching Venkat’s NDC session and this brief dabble has piqued my interest in functional programming in general. After listening to a .NET Rocks podcast on the functional programming language Elixir, I could not resist starting to learn it to see what functional goodness I can pick up in 2015.

Wednesday, March 4, 2015

Sharpening the JavaScript Saw by Building a Promise API

I’ve been working with JavaScript a lot lately and wanted to set myself a small challenge where I would aim to build something small but meaningful within 60 minutes. One thing I had wondered about was how JavaScript promises actually work. I’ve used them with Angular.js and node.js and thought that building my own basic implementation of a promise would be a suitable challenge. With the task decided, I felt that I needed to put together a few constraints that would drive my implementation and give me a set of criteria for the challenge. These I derived from my experience with using promises and what I had read about them. Here are the constraints that I decided on:

-    The promise can only be resolved once--be this keeping or breaking it
-    An optional callback for failure can be supplied with a success callback
-    Multiple callbacks can be supplied and they will be executed in the same order when resolving the promise


With everything laid out, I started my timer and began coding. It didn’t take long to get something up and running that allowed a single set of success and failure callbacks to be queued, but the requirement to be able to supply multiple callbacks was a bit more of a challenge. It took a little bit of a design change and some refactoring before I ended up with a solution that I was happy with. This is what I came up with:

(function (promiseApi) {

    promiseApi.Promise = function () {
        
        var isResolved = false;
        var successCallbacks = [];
        var failureCallbacks = [];
        
        var keep = function (data) {
            
            resolve(successCallbacks, data);
        };
        
        var abandon = function (error) { // "break" is a keyword unfortunately
            
            resolve(failureCallbacks, error);
        };
        
        var resolve = function (queue, state) {
            
            if (!isResolved) {
                isResolved = true;
                
                queue.forEach(function (callback) {
                    
                    state = callback(state);
                });
                
                successCallbacks = [];
                failureCallbacks = [];
            }
        };
        
        var when = function (success, failure) {
            
            if (success !== undefined) {
                successCallbacks.push(success);
            }
            
            if (failure !== undefined) {
                failureCallbacks.push(failure);
            }
            
            return Object.freeze({ when : when });
        };

        return Object.freeze({
            keep: keep,
            abandon: abandon,
            promise: Object.freeze({ when: when })
        });
    };

}(module.exports));

Here are some notes about my solution:

-    I decided to include an “isResolved” Boolean even though I am emptying the arrays as a guard against the promise being resolved again before all the callbacks in the appropriate queue have been invoked. This ensures that any additional calls to resolve the promise will result in no further action.
-    The arrays are being re-initialized as a housekeeping measure to release the callbacks they are holding. As a promise can only be resolved once, it doesn’t make sense to hold on to any of the callbacks in its queues. In normal use of a promise, you wouldn’t add additional success and failure callbacks after it had been resolved, so without this step the original callbacks would remain in memory for as long as the promise itself does.
-    The use of the “setTimeout” function with a timeout of 0 is to ensure that the callbacks are invoked asynchronously rather than immediately. However, as this is a node.js project, it would probably be more efficient to use process.nextTick().
-    To prevent the behavior of a promise being changed, I have used Object.freeze() to make it immutable.
-    In an attempt to keep things clean, I used the revealing module pattern.
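The resolve-once guard from the first note can be reduced to a self-contained sketch (simplified from the code above; the names are illustrative):

```javascript
// Once resolved, later calls are silently ignored and the queued callbacks
// are released, mirroring the isResolved guard in the promise above.
function makeOnce() {
    var isResolved = false;
    var callbacks = [];

    return {
        add: function (callback) {
            callbacks.push(callback);
        },
        resolve: function (value) {
            if (isResolved) {
                return; // guard: only the first resolution has any effect
            }
            isResolved = true;
            callbacks.forEach(function (callback) {
                callback(value);
            });
            callbacks = []; // housekeeping: drop the callbacks once used
        }
    };
}

var once = makeOnce();
var seen = [];
once.add(function (value) { seen.push(value); });
once.resolve("first");
once.resolve("second"); // a no-op thanks to the guard
console.log(seen); // [ 'first' ]
```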


This is a very basic implementation of a promise API. I’m sure that there is much that doesn’t adhere to the promise specification, but as I mentioned above, this was just a challenge that I set myself to exercise my JavaScript muscles. I wouldn’t recommend that anyone use this code in production even though, technically, it does work. For one, there’s no error handling, so if anything were to go wrong when calling a callback, the exception would bubble up the call stack. The start of a solution would be to wrap each callback’s invocation in a try-catch block, but what would the appropriate course of action be once an exception has been caught? I think I’d have to refer to the specification for guidance on that. Below is a simple example of how to use the promise API:

carService.js:
(function (carService) {
    
    var promiseApi = require('./promise.js');
    
    carService.get = function (id) {
        
        var promise = new promiseApi.Promise();        

        setTimeout(function () {
            
            var data =  {
                id: id,
                model: 'Ford Mustang ' + id,
                year: 2014
            };

            promise.keep(data);
            //promise.abandon({ message: "Something went wrong" });
        }, 1000);
        
        return promise.promise;
    };

}(module.exports));

app.js:
(function () {

    var carService = require('./carService.js');

    carService.get(1)
    .when(function (data) {

        console.log(data.model);

        return data.model;
    }, function (error) {

        console.log("Error: " + error.message);

        return error;
    })
    .when(function (data) {

        console.log("Second CB: " + data);

        return data;
    })
    .when(function (data) {

        console.log("Third CB: " + data);
    }, function (error) {

        console.log("Second Error CB: " + error.message);
    });

}()); 


As a challenge to help me practice JavaScript, this was a good problem to solve. The complete node.js project can be found here.