Rob Smyth

Tuesday 26 August 2008

Cannot Parse System.Double.MaxValue

Surprisingly, it seems that System.Double cannot round-trip its maximum and minimum values. I've found that the string returned by double.MaxValue.ToString() cannot be parsed back by double.Parse, nor via the Convert method overload; both throw an OverflowException. System.Double.NaN can be parsed though.

Here is a test fixture demonstrating the problem:

using System;
using NUnit.Framework;

[TestFixture]
public class DoubleTests
{
    [Test]
    [ExpectedException(typeof(OverflowException))]
    public void MaxValue_ThrowsException_WhenParsing()
    {
        // ToString() rounds to fewer significant digits, so the
        // rounded value is beyond MaxValue and parsing overflows.
        double readValue = double.Parse(double.MaxValue.ToString());
        Assert.AreEqual(double.MaxValue, readValue);
    }

    [Test]
    [ExpectedException(typeof(OverflowException))]
    public void MinValue_ThrowsException_WhenParsing()
    {
        double readValue = double.Parse(double.MinValue.ToString());
        Assert.AreEqual(double.MinValue, readValue);
    }

    [Test]
    public void NanValue_Parses_WithoutException()
    {
        // NaN, by contrast, round-trips without a problem.
        double readValue = double.Parse(double.NaN.ToString());
        Assert.AreEqual(double.NaN, readValue);
    }
}
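
For what it's worth, a workaround that appears to work is the round-trip format specifier "R", which asks ToString() for a representation with enough digits to recover the exact value. A minimal sketch (my addition, not part of the fixture above):

[Test]
public void MaxValue_Parses_WhenUsingRoundTripFormat()
{
    // "R" produces a round-trippable string representation.
    double readValue = double.Parse(double.MaxValue.ToString("R"));
    Assert.AreEqual(double.MaxValue, readValue);
}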

Saturday 16 August 2008

NUnitGridRunner - Grid Processing for NUnit

NUnitGridRunner works! On a real world project, NUnitGridRunner ran NUnit based UATs (User Acceptance Tests), which usually take about 8 minutes, in about three minutes using three remote boxes (one of which is very, very slow). It is not the simplest tool to configure, but it is easy to use once going and a big time saver. With a few more boxes, and improvements to NUnitGridRunner, I think we will reach our target of running all tests in one minute from any developer box. A real CI (Continuous Integration) enabler.

Friday 15 August 2008

Acceptance Test Driven Development

I've been using the acronym 'UAT' (User Acceptance Test) to describe automated tests that capture required application behaviour. But while terms like 'unit test' are relatively well defined, 'UAT' is not a widely recognised acronym. Tonight, while updating the NUnitGridRunner documentation, I came across a site describing Acceptance Test Driven Development with the acronym 'ATDD'. I find the diagram on that page compelling and I'm now thinking of hanging my hat on the ATDD acronym even though it is not a three letter acronym (TLA).

Check it out here. Although, when reading, substitute 'Agile' for 'XP'.

Sunday 10 August 2008

Snow On The Mountain


We had snow this morning. Great to see it during the day and on a Sunday when we are home. Last year it snowed during the night and it was a working day.

Made for a real nice day inside watching the snow falling and the snow gathering on the trees. Golly was perplexed. Sue was excited to see Betsy with a good snow covering.

Sunday 3 August 2008

NUnitGridRunner - Run NUnit Tests Distributed

I've spent the last couple of days trying to use Alchemi to run NUnit tests on a virtual, distributed computer system. That is, to use many idle computers to run the tests. For this I created the Google Code project NUnitGridRunner. But, despite early wins, the last day has been spent trying to figure out how to get past Windows security. Oh the frustration!

The vision is that a developer, or build box, can ask a virtual computer comprising a grid of computers to run the tests. The grid computers are underused boxes and the grid threads only run in idle time, so the distributed load is kinda free. So, UATs (User Acceptance Tests) that would normally take 10-15 minutes could run in only 1 minute. A great $ saver for a development team.

I got the basics running no problem, but each time I extended the grid to other computers in my home network I kept hitting Windows security issues. If I try to run nunit-console on the grid I get FileIOPermission exceptions. If I try running NUnit's lower-level 'SimpleTestRunner' using a shared folder with the binaries, I get a login failure.

I reckon somebody with more Windows security knowledge could fix this, but so far it has me stumped. I'm not giving up though; I've seen enough to see that this is a real goer.
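
If the FileIOPermission exceptions are .NET Code Access Security refusing full trust to assemblies loaded from a network share (my guess, not confirmed), then granting the share full trust on each grid box with caspol might get past it. Something like the following, where the share path is hypothetical:

caspol -machine -addgroup 1.2 -url "file://\\buildbox\tests\*" FullTrust -name "NUnitGridShare"

Here 1.2 is the default machine-level LocalIntranet code group that the new group is added under.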

A quote from the Alchemi documentation:
The idea of meta-computing - the use of a network of many independent computers as if they were one large parallel machine, or virtual supercomputer - is very compelling since it enables supercomputer-scale processing power to be had at a fraction of the cost of traditional supercomputers.
Think of long builds, think of slow UATs, think of your manager's box when he is at meetings!

Saturday 2 August 2008

A Cow At The End Of The Driveway

Not every day you get up to get the newspaper and find a cow at the end of the driveway.

Lovely thing, one of Henry's cows. She is old and has a bit of arthritis in one leg. But like all of Henry's animals you know she is in good care.

This street would not be as nice without Henry.

Friday 1 August 2008

Can Alchemi Turn Web Browser Boxes Into UAT Gold?

Today I stumbled across Alchemi, "a .net based Enterprise Grid System and Framework". Or, in other words, a distributed computing framework. I wonder if it can be used to run UATs (automated User Acceptance Tests) both from developer boxes and build boxes?

I'm thinking of an Alchemi application that is a test runner which uses distributed computing to run NUnit test fixtures. The documentation claims that the 'executors', which run on the remote boxes, only run in idle time. So dedicated build farm boxes are not needed; user boxes can be used with (they claim) no effect on their normal use. Most boxes in the office have light use, so 20 boxes could be available to act as a virtual build farm.
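
Going from memory of the Alchemi samples (so treat the exact namespaces as approximate), the programming model is a GThread subclass whose Start() method runs on a remote executor. A rough sketch of what a fixture-running grid thread might look like; the fixture name parameter, and the idea of running NUnit inside Start(), are mine:

using System;
using Alchemi.Core.Owner; // namespace as I recall it from the samples

// Hypothetical grid thread that would run one NUnit fixture remotely.
[Serializable]
public class FixtureRunnerThread : GThread
{
    private readonly string fixtureName; // my parameter, not an Alchemi API

    public FixtureRunnerThread(string fixtureName)
    {
        this.fixtureName = fixtureName;
    }

    // Alchemi calls Start() on whichever executor picks up the thread.
    public override void Start()
    {
        // Here the test assembly would be loaded and the named fixture
        // run via an NUnit runner, with results returned to the owner.
        Console.WriteLine("Running fixture: " + fixtureName);
    }
}

The owner side, again going from the samples, is roughly a GApplication given a GConnection to the manager; add the threads and call Start().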

A picture is worth a thousand words. The attached images are copied from the Alchemi documentation.

I think it is worth a play. Nigel, Duncan, you thinking what I'm thinking? All those web browser boxes ... they have a purpose!

Hey ... what does your manager's box do? Here is an opportunity to give it meaning in life.

UATs and CI Can Only Play A Fast Game Together

When developing using automated acceptance tests (we call them User Acceptance Tests 'UATs') and continuous integration (CI), the time taken to run the tests impacts directly on team performance. Or, to put it another way, a team process using both UATs and CI is not a viable team process if the UATs are slow.

UATs are inherently slow, and teams always find that it is not long before their UATs run for more than 10 minutes. Too slow for me. CI is all about frequent repository commits, and in my case the commit rate can be every 15 minutes. So if I were to run all the tests prior to each commit I would be sitting, waiting, bored, for 30-50% of my time.

If I just run all the unit tests and a selection of UATs (as I do) then I am somewhat 'leaning on the build box', and if a build failure is reported some 10 minutes later I'm already halfway through my next chunk of work when I find my last commit had a fault. This means I have to revert my code (losing the last 10 minutes' work) to rapidly fix the build box, then retrieve the broken revision, switch my thinking back to what I was doing, and fix it. In such a rapid CI environment I do not mind leaning on the build box so long as the box is fixed within 10 minutes. But either way, time is lost due to slow UATs.

The other, hidden cost that arises from slow UATs is that it is not uncommon for team members, usually newer members, to find breaking the build too confronting/embarrassing. So they will run all the tests every time before committing, no matter how trivial the code change. On one hand a good attitude, and I'm embarrassed too, but a bit of self exposure (trust in the team) can allow you to use the build box as a tool to speed up team development. In our team I notice that no build break during the day is a good sign that the team has gone 'stale' and people are having trouble. If the UATs are a complete specification of customer requirements, nobody can totally avoid a failure without running the tests (e.g. the customer wants the font to be 11pt). It is the same as reading the entire specification prior to each commit.

So speed matters. It directly affects team behaviour and time to delivery ($). With tests taking 10 minutes (say), I reckon this must equate to more than 25% of the team's time.

If your unit tests are running slower than 2000 per minute then either they are not unit tests or you should be humane and retire that box. UATs are another matter. They require discipline, skill, and grunt.

UATs are usually slow because:
  • UATs simulate an end user running the application. So they are constantly starting and closing the application and manipulating it via the UI (we use NUnitForms as the low level UI test framework; see the sketch after this list). This requires CPU grunt and the loading of many assemblies. Hence it is not unusual for test cases to take 1 second each.
  • Writing efficient UATs is not easy. It is a learnt skill. There must be a balance between truly testing the application from a user level (e.g. click here, click there), fast testing of a specific feature, and the independence of the test cases. For example, it is faster to test multiple features in one test case. Some 'cheats' can be used (see below).
  • Much of the skill is in the order of feature implementation (stories), as some stories enable faster testing of other stories.
  • The team may not appreciate (or care about) the impact that slow UATs have on their ability to deliver, and so may not give them the ongoing attention they need. Consider this as UAT code health.
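
To give a feel for where the time goes, here is a rough sketch of a typical NUnitForms-style UAT. The form and control names are made up for illustration:

using NUnit.Framework;
using NUnit.Extensions.Forms;

[TestFixture]
public class SaveFileUats : NUnitFormTest
{
    [Test]
    public void SaveButton_SavesFile_WhenClicked()
    {
        // Starting the application UI for every test is where much of
        // the time goes. MainForm is a made-up name for the app's form.
        MainForm form = new MainForm();
        form.Show();

        // NUnitForms testers drive the UI the way a user would.
        new TextBoxTester("fileNameTextBox").Enter("example.txt");
        new ButtonTester("saveButton").Click();

        // Assert on visible UI state rather than sleeping and hoping.
        Assert.AreEqual("Saved", new LabelTester("statusLabel").Properties.Text);
    }
}
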
UAT smells:
  • Test cases taking longer than 2 seconds.
  • Tests have long setup time (> 1 second).
  • Slowing down tests to avoid defects that only appear in 'faster than life' UATs. This is ignoring a code health issue.
  • Unnecessarily complex setup. e.g. Needing to drag a control a long distance to test a transition near the far window edge. First implement a feature letting the user position the control by X/Y coordinates, and then use this to position the control near the edge for the test.
  • Hard coded 'blind' pauses or sleeps in the tests. e.g. 'Sleep(500)'. This is a real killer (a polling alternative is sketched after this list).
  • Developers sitting with glazed eyes watching tests run.
  • Developers who like sitting watching tests run. They probably use the time to web browse. But then this is another, bigger, problem. :-)
  • Intermittent test failures. The UATs are telling you something. They are giving you an opportunity to fix design problems early. Pure gold.
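
On the 'blind pause' smell above: a polling wait returns as soon as the condition holds, instead of always paying the worst-case sleep. A minimal sketch (the names are mine):

using System;
using System.Threading;

public static class Wait
{
    // Poll a condition instead of a blind Sleep(500): returns as soon
    // as the condition holds, and fails loudly if it never does.
    public static void Until(Func<bool> condition, int timeoutMilliseconds)
    {
        DateTime deadline = DateTime.Now.AddMilliseconds(timeoutMilliseconds);
        while (!condition())
        {
            if (DateTime.Now > deadline)
                throw new TimeoutException("Condition not met within timeout.");
            Thread.Sleep(10); // short poll interval, not a worst-case guess
        }
    }
}

// Usage in a test: Wait.Until(() => saveButton.Enabled, 5000);
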
Solutions:
  • Inform your customer of the cost savings that can be achieved from feature implementation order (story order). For example, if the UI has an icon showing a file save in progress then this can be used by the UATs to know when a file save is complete. Hence wasteful 'blind' delays are not required.
  • Team alignment/focus. Be aware of the true cost of slow UATs. Half a second across 240 tests is 2 minutes. If 4 developers lose 2 minutes, say, 4 times a day then that is a total of 2 x 4 x 4 = 32 developer minutes a day. So if you spend 1 hour saving half a second off each test, it will be paid back in just 2 days.
  • Cheat, but cheat legally. For example, rather than starting the whole application (EXE) in another thread, instantiate the main class and call its execute method (see the sketch after this list). With good design you are only bypassing the Windows Main method handling. You might also preload and reuse some read-only configuration files. This can save a lot of time, but be careful. :-)
  • Use the fastest computers available.
  • Distributed processing. I've never seen this. It seems to me to be the Utopia. I wonder if products like Alchemi can be used to pass each test fixture out to a different (idle) box. If so it would seem to be viable for a project to keep the time to run tests under 1 minute. Hmmm ... another blog.
  • Reduce the cost of entry for developers by developing a UAT framework of application-specific test jigs rich with methods like 'WaitUntilXYZButtonEnables'. Fix once, use many times.
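
On the 'cheat legally' point above, here is a sketch of starting the application in-process rather than launching the EXE. MainForm, and the idea that the application's Main does little more than run it, are assumptions about the application under test:

using System.Threading;
using System.Windows.Forms;

public static class InProcessApp
{
    // Start the application's main form on its own STA thread, skipping
    // process creation and duplicate assembly loading entirely.
    public static Thread StartUi()
    {
        Thread uiThread = new Thread(() =>
            Application.Run(new MainForm())); // MainForm is a made-up name
        uiThread.SetApartmentState(ApartmentState.STA); // WinForms needs STA
        uiThread.IsBackground = true; // do not outlive the test runner
        uiThread.Start();
        return uiThread;
    }
}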