Sunday, October 13, 2013

DDD North 2013 - The aftermath + TDD - Where did it all go wrong?

DDD North 2013 has come and gone, but what an event! It was brilliantly arranged, held in a great venue, with plenty of very good sessions. This year it was based at Sunderland University, which meant a two-hour drive, but it was more than worth it.

It was a great learning experience and has given me plenty to think about, which I will try to summarize in the next few posts.

TDD - Where did it all go wrong? 
What a first session! Ian Cooper did not let us down. His session was an eye-opener (or rather a re-opener) about how we have strayed from Kent Beck's original vision of TDD. Ian's argument was that current teachings of TDD have led us to believe we should be testing everything at a granular level - a class, each of that class's methods, the unit under test in strict isolation - none of which is entirely true to Kent's original intentions for TDD. Ian argues that a new feature should be the trigger for a new suite of unit tests, and that those tests should only assert against the output of the feature as it would be returned to the calling client.
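
To make this concrete, here is a minimal sketch of what a feature-level test might look like. Everything in it - PricingService, IDiscountCalculator, the discount rule, the use of NUnit - is my own invented example to illustrate the idea, not anything from Ian's talk:

    using NUnit.Framework;

    // Hypothetical feature code, just enough to make the test concrete
    public interface IDiscountCalculator
    {
        decimal DiscountFor(decimal orderValue);
    }

    public class StandardDiscountCalculator : IDiscountCalculator
    {
        // 10% off orders of 100 or more
        public decimal DiscountFor(decimal orderValue)
        {
            return orderValue >= 100m ? orderValue * 0.10m : 0m;
        }
    }

    public class PricingService
    {
        private readonly IDiscountCalculator _calculator;

        public PricingService(IDiscountCalculator calculator)
        {
            _calculator = calculator;
        }

        public decimal PriceOrder(decimal orderValue)
        {
            return orderValue - _calculator.DiscountFor(orderValue);
        }
    }

    [TestFixture]
    public class PricingFeatureTests
    {
        // A feature-level test: exercise the feature through its public
        // entry point, as a real client would, and assert only on the
        // output the client sees
        [Test]
        public void Orders_of_100_or_more_get_a_ten_percent_discount()
        {
            var pricing = new PricingService(new StandardDiscountCalculator());

            decimal total = pricing.PriceOrder(150m);

            // No knowledge here of how the discount was computed
            Assert.That(total, Is.EqualTo(135m));
        }
    }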

Ian has based this argument on many years of developing in a TDD environment and on hitting the point, three or four years into a long-term project, where requirements have changed and the unit test code itself has a negative effect on refactoring the code base as required - "If I make this implementation change, 100 tests will now fail and I will need to fix those as well".

The basic points he was trying to get across were:
1) Writing unit tests should be triggered by a new feature, not by a new class or method.
2) Tests should not have ANY understanding of implementation - they should only be interested in inputs and expected outputs (bye bye to Verify.WasCalled / Verify.WasNotCalled checks on mocked dependencies - see the contrasting sketches after this list). If the output matches the expected output for the given inputs, you should be able to infer from that alone that the correct methods were (or were not) called.
3) Delete tests which test implementation details once you have finished developing a feature. Delete tests?! I know, a shocking statement, but it makes a lot of sense. Unit tests which test implementation specifics are fine to help drive out your design, but once your design is finished and the feature is complete, that coupling to the implementation will become a handicap when attempting to refactor the implementation at a later date. A simple act of renaming a method, or altering its parameters or return value, could cause countless tests to suddenly fail. Delete these tests and rely on the tests which exercise the feature as a whole instead (the second sketch below shows exactly this kind of test).
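
For contrast, here is a sketch of the kind of implementation-coupled test Ian suggests deleting once the design has settled. It reuses the hypothetical types from the sketch above, with a Moq mock standing in for the Verify.WasCalled style of checking:

    using Moq;
    using NUnit.Framework;

    [TestFixture]
    public class PricingImplementationTests
    {
        // This test pins down HOW PriceOrder works. Rename DiscountFor,
        // inline the calculation or restructure the collaboration and it
        // breaks, even though every caller still gets the correct total
        [Test]
        public void PriceOrder_delegates_to_the_discount_calculator()
        {
            var calculator = new Mock<IDiscountCalculator>();
            calculator.Setup(c => c.DiscountFor(150m)).Returns(15m);

            var pricing = new PricingService(calculator.Object);
            pricing.PriceOrder(150m);

            // Asserting on the implementation, not on the output
            calculator.Verify(c => c.DiscountFor(150m), Times.Once());
        }
    }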

My summary above does not do Ian's session the justice it deserves. It was highly thought-provoking and has made me question the way I think about TDD.

You can find Ian delivering his talk at NDC here.