OpenRasta: the new (old) MVC

When ASP.NET MVC came out, it made so much sense. This was how web development was supposed to be; everything felt so clean and simple. I still like MVC, but I've gradually become more and more frustrated with the lack of extensibility points, and the way there are so many dependencies baked in. Controllers are simple, but they inherit from a base class which is anything but. I feel like I've been given a taste of how web development should be, but it seems like the developers of MVC stopped halfway.

Now I've discovered OpenRasta, which seems to me to be everything MVC was supposed to be but didn't quite manage. Instead of Models, Controllers and Views, you have Resources, Handlers and Codecs. Instead of routing, you define a URI for a resource and specify the handler for it. The handler determines how to process the appropriate HTTP action for the resource, and the codec determines how to render the result in the format requested by the client. Everything is built from plain POCO objects, and everything is fully extensible. I've only just started playing with it, but so far it seems to be the way to go.

Here's a talk on TekPub by its creator, Sebastien Lambla:

Sebastien Lambla talks about OpenRasta on TekPub

I would say OpenRasta is the new MVC, except for the fact that it was written first! I guess the reason I missed it the first time around was that I wasn't looking for it.

Adventures with TDD part 3: Testing MVC views

Views are hard to test in MVC

I've been trying to write ASP.NET MVC code using Test Driven Development (TDD) principles. This has been going very well for the Models and Controllers of MVC, but I have been drawing a blank at the Views. It turns out that writing unit tests for views in ASP.NET MVC is hard. There are simply too many hidden dependencies on the ASP.NET environment built into the view engine to be able to build standalone tests around it.

A common response I have come across is that there shouldn't be any logic in the views, so there shouldn't be any need to write unit tests around them. This argument applies very well to views that are pure HTML working from a strongly-typed view model, as in that case it should be sufficient to test that the appropriate view model has been created and populated with the correct values. However, once you start introducing AJAX functionality into your application, the nice separation between view and controller logic starts to disappear.

Unit testing Javascript is possible with a variety of tools, but these all seem to involve running a Javascript test runner from within an HTML page, which has two problems:

  1. It starts to blur the line between test and production code, as the tests need to be run from within the same web project in order to have access to the script files used in the views;
  2. It becomes more inconvenient to run the tests: having to fire up a browser in order to run the automated tests for a part of the application creates an extra step and an extra psychological barrier to running the tests often. Personally, I'd like all my tests to be run from one place, ideally from within Visual Studio.

In addition, I want to be able to test the views on their own rather than having to run the whole system. Automated UI testing (using something like Selenium) is great for integration testing, but I want to be able to test the view logic in isolation from the rest of the application.

A possible solution

I think I have found a way to get around this problem. Essentially what I am doing is dynamically generating the HTML of a view from the view path and view model. I can then use headless browser automation to exercise the view in a virtual web browser, clicking on links and running scripts dynamically. This completely decouples my view logic from the controller logic, allowing me to set up views on demand without having to do things like access the database. I doubt I am the first to try something like this, but I haven't been able to find anything online that does view testing the way I would like, at least not in ASP.NET MVC. Then again, that could mean I am going about this completely the wrong way, or just that I'm not very good at using Google!

Dynamically generating view HTML

After many hours of frustration, I have been forced to admit that any attempt to use ASP.NET MVC's inbuilt view engine from outside the ASP.NET environment (i.e. from within a test project) is doomed to failure. I tried several things, including attempting to mock the ControllerContext, and although I was able to get this working from within the MVC project, I was unable to get the approach to work from a standalone project. There are simply too many hidden dependencies for this to work.

Instead, I tried a different approach: David Ebbo's Razor Generator. This was a lot more successful; however, I encountered two main difficulties in trying to use it:

  1. Razor Generator is available as a Visual Studio extension, but as I don't own a full copy of Visual Studio on my home machine, I needed to get it to work with Visual Web Developer. I was able to accomplish this by changing the extension of the .vsix file to .zip, opening it in 7Zip, and editing the <SupportedProducts> section of the extension.vsixmanifest file to read:
    <SupportedProducts> 
        <VisualStudio Version="10.0">		
            <Edition>Ultimate</Edition>
            <Edition>Premium</Edition>
            <Edition>Pro</Edition>
            <Edition>Express</Edition>
        </VisualStudio>
    </SupportedProducts>
    
    I was then able to save the archive back as a .vsix file and open it in Visual Web Developer. This installed the extension without any problems, as far as I can tell. (Warning: do this at your own risk!)
  2. Razor Generator, as it stands, generates the HTML for each view, partial view and layout view separately, but I needed a way to render the whole page for a view, including the layout view and any partial views. To do this I had to download the source code and edit the PrecompiledMvcViews.Testing project. As it stands, it generates a placeholder value for any partial views referenced from the view page, so I modified this to include the generated HTML for each partial view instead. I also had to add some additional code to embed the view HTML within the generated HTML of its layout view.

Headless browser automation

Once I had the view HTML, I was able to run headless browser automation against it using the HTMLUnit library. This is a Java library, but as the source is freely available I was able to compile it as a .NET dll using IKVM (following the approach described by Steven Sanderson), allowing me to use it in my test project. So if I have a view as follows:

@model string

@{
    ViewBag.Title = "Test";
    Layout = "~/Views/Shared/_Layout.cshtml";
}

<a href="#" id="linkWithJavascriptClickHandlerAttached">test</a>

<div id="elementChangedByClickHandler">@Model</div>

<script type="text/javascript">
    $(document).ready(function () {
        $('#linkWithJavascriptClickHandlerAttached').click(function () {
            $.getJSON('/Home/ActionReturningJsonResult/', function (data) {
                $('#elementChangedByClickHandler').html(data.NewContent);
            });
        });
    });
</script>

Contents of /Views/Shared/_Layout.cshtml (includes a link to jQuery):

<!DOCTYPE html>	
<html>
<head>
    <meta charset="utf-8" />
    <title>@ViewBag.Title</title>
    <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
    <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/modernizr-1.7.min.js")" type="text/javascript"></script>
</head>
<body>
    @RenderBody()
</body>
</html>

I can test that the click handler successfully sends an AJAX request to the action "/Home/ActionReturningJsonResult/", and that the HTML of the div with id "elementChangedByClickHandler" is set to the JSON value returned by that action, like this (I'm using xUnit.net):

[Fact]
public void ClickingOnLinkSetsElementHtmlToJsonContentFromServer()
{
    var mvcAssembly = typeof(MvcProject.Views.Home.Test).Assembly;
    var mvcRootFilePath = "C:\\Path\\To\\MvcProject";
    var viewTester = MvcViewTestHelperFactory.CreateHelper(mvcAssembly, mvcRootFilePath);

    var controllerToTest = "Home";
    var actionToTest = "Test";
    var actionRequestedViaAjax = "ActionReturningJsonResult";
    var jsonResultToReturn = "{ \"NewContent\": \"Content passed via AJAX request\" }";

    // tell the virtual web client to return a specific response for a specified controller/action combination
    // (needs extending to include routeData and specify response headers, status codes etc)
    viewTester.SetResponseContentForAction(controllerToTest, actionRequestedViaAjax, jsonResultToReturn);

    var viewModel = "Content passed via view model";

    // grab the HTML for the view we're interested in (also needs to handle routeData)
    HtmlPage page = viewTester.GetHtmlPageForView(controllerToTest, actionToTest, viewModel);

    // verify the view renders using the view model
    Assert.Equal("Content passed via view model", page.getElementById("elementChangedByClickHandler").asText());

    page.getElementById("linkWithJavascriptClickHandlerAttached").click();

    // verify the correct action was requested
    Assert.Contains("/Home/ActionReturningJsonResult", viewTester.GetRequestsMade());

    // verify the click handler got the correct JSON and changed the HTML of an element on the page
    Assert.Equal("Content passed via AJAX request", page.getElementById("elementChangedByClickHandler").asText());
}

Obviously it's still a work in progress: it doesn't handle routeData yet, among other things, and some of the internals are a bit hacky at the moment. But if anyone's interested you can download the source from bitbucket and have a play; any feedback is appreciated, positive or negative.

Adventures with TDD part 2: Useful resources

I've been doing a lot of searching around for helpful stuff, so I'm just basically dumping what I've found here, mainly for my own benefit, but maybe someone else will find it useful.

Worked examples

So I've been looking around online for tutorials and examples of how to go about TDD.  There are a great many really trivial examples out there, but trivial examples aren't very useful.  They're fine for explaining the basic concepts of the methodology, but generally in the real world things just aren't as simple as a "Fizz Buzz" generator!  I imagine all sorts of problems and complexities emerge when trying to do TDD on a real-world system.

Effective Tests

In the end I found a goldmine.  Derek Greer over at Los Techies has a fantastic series of posts on TDD, and among them an in-depth walkthrough of using TDD to build a tic-tac-toe game.  Now obviously this is still much simpler than a full-blown graphical game but it captures the process, which is what I'm interested in: what are the brick walls, the shifts in thinking you have to do, etc.

As there doesn't seem to be an index page for just this series of posts, I'll keep a list of them here for my own use.  The series seems to be ongoing so I'll keep this list updated when I notice any new posts.

  1. Introduction
  2. A Unit Test Example
  3. Test First
  4. A Test-First Example – Part 1
  5. How Faking It Can Help You
  6. A Test-First Example – Part 2
  7. A Test-First Example – Part 3
  8. A Test-First Example – Part 4
  9. A Test-First Example – Part 5
  10. A Test-First Example – Part 6
  11. Test Doubles
  12. Double Strategies
  13. Auto-mocking Containers
  14. Custom Assertions
  15. Expected Objects
  16. Introducing the Expected Objects Library
  17. Avoiding Context Obscurity

Update: Let's Play TDD

I'm updating this post with a video series I've just found on YouTube by James Shore (user jdlshore).  This looks like exactly what I've been looking for, and it's really in-depth: at the time of writing there are 128 videos in the series!  Each one seems to be around 15 minutes long, which adds up to over 30 hours worth of material!  This is incredible, such a valuable resource, and I had no idea it even existed.  Here's a link to the playlist of the entire series, but I'll embed the first video here so you can preview it easily:

Testing MVC

Testing Javascript UI

Unit testing videos

Channel 9: Top 10 Mistakes in Unit Testing

Top 10 Mistakes video

Google TechTalks: Automated Testing Patterns and Smells

Google TechTalks: Behaviour Driven Development

Adventures with TDD part 1: Motivations

The problem: software design degrades over time

In my last few years as a software developer I have been running into the same problems over and over again. I've been fortunate enough to be involved from the start in the architecture of a couple of systems, and spent a good deal of time planning each system up-front.

Building in disrepair Degrading over time... Image: www.freeimages.co.uk

However, I have found that despite starting out well, the codebase of each project has gradually become messier and less maintainable, and as a consequence, each new task seems to take longer and longer.

One contributing factor is that we developers sometimes just have to write "quick and dirty" code, due to time constraints, or an urgent bug fix. We hate doing it, but sometimes it has to be done. We tell ourselves we'll come back and fix it later, but how often does that actually happen? Usually, "later" means "once we get this release out", which means by definition we will have to come back and change code that's now in production.

Unsurprisingly, with all the risks and testing overhead involved in changing production code, the temporary "quick and dirty" code ends up becoming a part of the system. This only has to happen a few times to create a maintainability nightmare. Developers tend to write in the existing design idiom of the codebase, and "hacky" code tends to obscure this idiom, resulting in inconsistencies and harder-to-understand code, which in turn increases the temptation to write even more "hacky" code.

Another large part of the problem I think is caused by the requirements of the project changing, and new features being added that weren't anticipated. I used to think that this was a design problem: the architecture wasn't extensible enough, we didn't think far enough ahead in trying to predict what features might be needed. This is true; however I've since come to appreciate that no matter how hard you try or how deeply you think, you can't anticipate all the changes that might be requested for any project.

In the past I have tried to build extensibility points everywhere, but this has resulted in a lot of complexity that simply was never needed.

Flexible design

Rather than trying to plan and anticipate every future development, the design should instead be able to change throughout the lifetime of the project as the requirements change. This would then free up the initial design process to concentrate only on the bare minimum of functionality in order to get the job done - using the YAGNI principle. The design can then change to accommodate new functionality when it's actually needed.

But changing the design of production code is fraught with difficulties. Any change to the code must be tested, and testing manually is expensive, and still is not guaranteed to find all the potential bugs that may have been introduced by the change. Developers are therefore discouraged psychologically and economically from making design changes to existing code.

Unit tests create confidence to change the design

It seems that the way to remove this inertia towards making design changes is to cover the code with unit tests. All the existing behaviour is then captured by the tests, which will provide immediate feedback whenever a bug is introduced.

The problem is that writing unit tests is a big undertaking, and as code tends not to be written with testability in mind, this means that the tests end up being far more complex and much more effort to write than they should be.

What usually happens is that developers become discouraged by how hard the tests are to write, and end up "forgetting" to write them, or putting them off for a later date. This of course defeats the entire point of having unit tests at all: if there are sections of the code that are not covered by tests, then the developers cannot be confident that the tests will pick up any bugs introduced by changes to the code.

Writing testable code

So, we have a new problem: how do we write easy-to-test code? There are techniques we can use to make our code easier to test, such as decoupling, inversion of control, the Law of Demeter and so on, and these will certainly help a great deal in removing the physical barriers to unit testing. In a previous post I linked to a great series of talks by Miško Hevery on writing testable code, and there are many other resources out there.

Nevertheless, there still remains the psychological barrier: unit testing is not fun, at least not as much fun as writing code that "does stuff", and there is still the temptation there to leave writing the tests till later.

In addition, it is only when trying to write a test for a piece of code that we can truly see how easy or difficult it is to test, and we may end up having to rewrite code we have already written in order to make it amenable to testing.

The solution

Write the tests first!

Don't write a single line of code without first writing a test that covers the behaviour of that code. The tests then serve as a form of specification of the behaviour of the code, and ensure that testability is guaranteed from the outset. Every line of code will then end up being covered by tests, and the safety net is in place. The code can then be changed as much as you like without fear of breaking anything, as the tests will immediately flag up any bugs that are introduced.

Steering wheel with driver's hand. Test Driven Development? Oh never mind. Image: graur razvan ionut / FreeDigitalPhotos.net

This is the thinking behind Test Driven Development (TDD). It's an iterative process, with each iteration divided into 3 steps:

  1. Write a failing test: write a test describing what the code should do, that it doesn't do already. Then run the test and watch it fail.
  2. Make the test pass: write the least possible amount of code that satisfies the test.
  3. Refactor to a clean design: make any changes to the design needed as a result of adding the new code. Any changes should be covered by existing tests. Run the tests as you refactor to check you haven't broken anything.

Obviously this method will take longer than simply just writing the code, but we are looking for a sustainable solution. Although it will take longer at the outset to write the same amount of code, the benefits will show in the long term. Changes to the code are easy and relatively risk-free, and as a consequence, the code can be redesigned and refactored as needed by changes to requirements. The software then becomes flexible and adaptable, but also reliable, well-designed and easy to maintain.

For me, TDD is the future of software development, and I think it's time for me to bite the bullet and embrace it. I for one have had enough of producing sub-standard software, and I intend to travel as far down the TDD path as I can. Maybe it will work, maybe the grass won't be as green as I thought on the other side, but there is only one way to find out.

Clean Code by 'Uncle' Bob Martin

Clean Code book cover. Read this book. Now. Go!

I've just finished reading Clean Code: A Handbook of Agile Software Craftsmanship by Bob Martin. Let me tell you: this book is so good. I've forgotten more things I read in this book than I ever knew before reading it. Which probably isn't saying much, but you get my drift.

So why is it so good? It's packed full to the brim with advice, principles, tips and examples of how to write code that's readable, understandable, maintainable and testable.

The first part of the book is about the principles of clean code, why one should write it, and what it looks like in practice. Each chapter focuses on a specific area such as variables, functions, classes or comments, each full of gems of wisdom and rules of thumb that have obviously come about through decades of experience. Each little nugget takes an overarching principle and applies it specifically to the area in question. Take for instance this snippet about try/catch blocks from Chapter 3 (Functions):

Error handling is one thing

Functions should do one thing. Error handling is one thing. Thus, a function that handles errors should do nothing else. This implies [...] that if the keyword try exists in a function, it should be the very first word in the function and that there should be nothing after the catch/finally blocks.
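In JavaScript terms the rule might look something like this (my own sketch, with hypothetical names; the dependencies are passed in as stubs purely for illustration):

```javascript
// try is the very first word in the function, and nothing follows the catch:
// the function does error handling and delegates the actual work elsewhere
function deletePage(page, deps) {
    try {
        deps.doDelete(page);
    } catch (e) {
        deps.logError(e);
    }
}
```

Passing the collaborators in like this also makes the function trivial to exercise with a stub that throws.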

Seems pretty obvious, you say? That's the beauty of it: once it's pointed out to you, it's obvious, but I just never thought about it until I read that. How about this, from Chapter 4 (Comments):

Explain Yourself in Code

There are certainly times when code makes a poor vehicle for explanation. Unfortunately, many programmers have taken this to mean that code is seldom, if ever, a good means for explanation. This is patently false. Which would you rather see? This:

    // Check to see if the employee is eligible for full benefits
    if ((employee.flags & HOURLY_FLAG) && 
        (employee.age > 65))

Or this?

    if (employee.isEligibleForFullBenefits())

It takes only a few seconds of thought to explain most of your intent in code. In many cases it's simply a matter of creating a function that says the same thing as the comment you want to write.

The second part of the book takes the general principles outlined in the first part and puts them into practice, with several really thorough examples of how to turn "bad" code into "good" code. The code is presented in its original form, and then you are taken step by step through the process, at each stage describing the problems with the code, the motivation behind the change, and how to change it in such a way that it still works at the end of it all.

If you're reading this as a seasoned developer and thinking "I know all this stuff, there's no need for me to get a book about it" - I would gently but firmly disagree. Even in the unlikely event that there was nothing at all that you didn't know, I'd still recommend buying it. I've read it all the way through, but I still keep it as a reference to dip in and out of, and each time I'm rewarded by something from the back of my mind being reinforced and brought to the forefront. Not to be coarse, but if you are looking for productive reading material for bathroom breaks, this book is perfect.

Anyway I guess I should stop raving about it now and do something else, but I can't recommend it enough. So stop reading this and buy it already!

Miško Hevery: The Clean Code Talks

This is a great series of talks on the Google Tech Talks YouTube channel by Miško Hevery. 

Why do many developers have difficulty writing unit tests?  One reason is that we simply don't know how.  Writing tests is just as much a skill as writing code, and knowing what to test and how to test it is something that needs to be learnt by experience.  However, another big reason is that our code is hard to test.  Miško Hevery takes a look at what it is about code that makes it hard to test, and looks at how to write clean, decoupled code that is easily testable.  He covers things like global state, implicit dependencies, how conditionals and new operators can be bad, and many other factors that make writing unit tests much harder work than it should be.

This is a great introduction to these ideas and a must-see for any developer interested in writing better code.  It turns out that the things that make code easier to test also make it loosely coupled, easier to maintain, and more aesthetically pleasing.
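One recurring theme from the talks, asking for your dependencies instead of constructing them, can be sketched in a few lines of JavaScript (my own example; House and Door are hypothetical names):

```javascript
// Hard to test: the dependency is constructed inside the function, so a test
// has no way to substitute it (this is the "new operators can be bad" point)
function HardHouse() {
    this.door = new Door(); // hidden dependency on a concrete Door
}

// Easy to test: the dependency is passed in, so a test can inject a fake
function House(door) {
    this.door = door;
}

// In a test we can now hand in a stub and observe the behaviour in isolation
const fakeDoor = { open: () => 'opened' };
const house = new House(fakeDoor);
console.log(house.door.open());
```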

The following videos are also available as a playlist.

Unit Testing

Don't Look for Things!

Global State and Singletons

Inheritance, Polymorphism & Testing

OO Design for Testability

How to Write Clean, Testable Code

Steve Souders: High Performance Websites

You may have guessed by this point that I love tech videos!  And you would be right.  I'm not sure why; I think I like the lecture format, and the sequential nature of the medium, which ensures you're following the speaker's train of thought: too often if I'm reading a blog post or article I find it way too easy to skim-read it and not really take anything in.

The next series of videos I've found really useful are by Steve Souders (blog), who previously worked for Yahoo! and now works for Google.  The videos below are all essentially the same talk given at different times, but each one has slightly different material while sharing a lot of the same content.

The main point of the talks is that the focus of website speed optimisations should be at the client side rather than the server side.  The reason for this is that the HTML source (which is where the bulk of the server-side processing is directed) is only a fraction of the resources that are loaded by the browser when a page is loaded.   Souders has a list of 14 rules related to reducing the time the browser takes to load these other resources:

  1. Make fewer HTTP requests
  2. Use a Content Delivery Network
  3. Add an expires header
  4. Gzip components
  5. Put stylesheets at the top
  6. Put scripts at the bottom
  7. Avoid CSS expressions
  8. Make JavaScript and CSS external
  9. Reduce DNS lookups
  10. Minify JavaScript
  11. Avoid redirects
  12. Remove duplicate scripts
  13. Configure eTags
  14. Make AJAX cacheable

Here are the talks I've found so far; between them they should cover all 14 rules, although bear in mind that the oldest one is 5 years old, so some of the content may be a little out of date:

November 2007

June 2008

March 2009

June 2009

Bonus content:  Souders has written two books on the subject, and on his blog there are some live examples of each of the 14 rules, which could be quite useful.

Entity Framework Code First tutorials

I'm really liking the new Code First style in Entity Framework 4.1.  It allows you to write clean domain objects uncluttered by any references to persistence logic, and it uses a convention-over-configuration approach which really simplifies things; you can still override the default behaviour using a fluent API, changing everything from table and column names down to association and inheritance mapping.

Scott Gu has a good introductory tutorial, and Morteza Manavi has a great series of posts exploring configuring associations with the fluent API.

Douglas Crockford on Javascript

Following on from the previous post, there's a great 6 part series of talks by Douglas Crockford on the history and use of Javascript over on the YUI blog.  All but one of them are available as streaming videos and they're all available to download.

Volume One: The Early Years

Chapter 2: And Then There Was JavaScript

Act III: Function the Ultimate

Episode IV: The Metamorphosis of Ajax

Part 5: The End of All Things

Scene 6: Loopage

There isn't a streaming video for this part but there's a hi-res video instead.

Douglas Crockford's website

If you liked those then there's a ton of articles and videos by Crockford available from his website.

Javascript: The Good Parts

Great video from Google Tech Talks by Douglas Crockford, the Javascript guru.  He talks about the good and the bad bits of Javascript, which bits to avoid and the best ways to do things.  He touches on some little-understood features such as closures and looks at a better way to do inheritance.
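Two of the "good parts" he highlights, closures for information hiding and prototypal inheritance by linking objects directly, can be sketched like this (my own minimal example):

```javascript
// Closure: count is genuinely private; the only way to reach it is through
// the two methods returned, which close over it
function makeCounter() {
    let count = 0;
    return {
        increment: function () { return ++count; },
        value: function () { return count; }
    };
}

// Prototypal inheritance: create a new object that delegates straight to an
// existing one, with no class ceremony (Object.create is the pattern
// Crockford championed before it became a built-in)
const base = { describe: function () { return 'I am ' + this.name; } };
const derived = Object.create(base);
derived.name = 'derived';
console.log(derived.describe());
```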