Rob Smyth

Friday, 1 August 2008

UATs and CI Can Only Play A Fast Game Together

When developing with automated acceptance tests (we call them User Acceptance Tests, 'UATs') and continuous integration (CI), the time taken to run the tests directly impacts team performance. Or, to put it another way, a team process using both UATs and CI is not viable if the UATs are slow.

UATs are inherently slow, and teams usually find that it is not long before their UATs run for more than 10 minutes. Too slow for me. CI is all about frequent repository commits, and in my case the commit rate can be every 15 minutes. So if I were to run all the tests prior to each commit I would be sitting, waiting, bored, for 30-50% of my time. If I just run all the unit tests and a selection of UATs (as I do) then I am somewhat 'leaning on the build box', and if a build failure is reported some 10 minutes later I am already half way through my next chunk of work when I find my last commit had a fault. This means I have to revert my code (losing the last 10 minutes of work) so I can rapidly fix the build box, retrieve the broken revision, switch my thinking back to what I was doing, and fix it. In such a rapid CI environment I do not mind leaning on the build box so long as the box is fixed within 10 minutes. But either way, time is lost due to slow UATs.

The other, hidden, cost that arises from slow UATs is that it is not uncommon for team members, usually newer members, to find breaking the build too confronting/embarrassing. So they run all the tests before every commit, no matter how trivial the code change. On one hand that is a good attitude, and I get embarrassed too, but a bit of self exposure (trust in the team) can allow you to use the build box as a tool to speed up team development. In our team I notice that no build break during the day is a sure sign that the team has gone 'stale' and people are having trouble. If the UATs are a complete specification of the customer's requirements, nobody can totally avoid a failure without running the tests (e.g. the customer wants the font to be 11pt). It is the same as reading the entire specification prior to each commit.

So speed matters. It directly affects team behaviour and time to delivery ($). With tests taking, say, 10 minutes, I reckon this must equate to more than 25% of the team's time.

If your unit tests are running slower than 2,000 per minute then either they are not unit tests or you should be humane and retire that box. UATs are another matter. They require discipline, skill, and grunt.

UATs are usually slow because:
  • UATs simulate an end user running the application, so they are constantly starting and closing the application and manipulating it via the UI (we use NUnitForms as the low-level UI test framework). This requires CPU grunt and the loading of many assemblies, hence it is not unusual for test cases to take 1 second each.
  • Writing efficient UATs is not easy; it is a learnt skill. There must be a balance between truly testing the application from a user level (e.g. click here, click there), fast testing of a specific feature, and the independence of the test cases. For example, it is faster to test multiple features in one test case. Some 'cheats' can be used (see below).
  • Skill also lies in the order of feature implementation (stories), as some stories enable faster testing of other stories.
  • The team may not appreciate (or care about) the impact that slow UATs have on their ability to deliver, and so not give them the ongoing attention they need. Consider this as UAT code health.
UAT smells:
  • Test cases taking longer than 2 seconds.
  • Tests have long setup time (> 1 second).
  • Slowing down tests to avoid defects that only appear in 'faster than life' UATs. This ignores a code health issue.
  • Unnecessarily complex setup. e.g. Needing to drag a control a long distance to test a transition near the far window edge. Instead, first implement a feature letting the user position the control by X/Y coordinates, then use it to place the control near the edge for the test.
  • Hard-coded 'blind' pauses or sleeps in the tests, e.g. 'Sleep(500)'. This is a real killer.
  • Developers sitting with glazed eyes watching tests run.
  • Developers who like sitting watching tests run. They probably use the time to browse the web. But then that is another, bigger, problem. :-)
  • Intermittent test failures. The UATs are telling you something. They are giving you an opportunity to fix design problems early. Pure gold.
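On the blind-pause smell: a condition-based wait returns the moment the application is ready, and only costs the full timeout when something is genuinely wrong. A minimal sketch of the idea (the `WaitFor` helper and its timings are my illustration here, not NUnitForms API):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class WaitFor
{
    // Polls a condition until it becomes true or the timeout expires.
    // Unlike a blind Sleep(500), this returns as soon as the condition holds.
    public static bool Condition(Func<bool> condition, TimeSpan timeout)
    {
        Stopwatch clock = Stopwatch.StartNew();
        while (!condition())
        {
            if (clock.Elapsed > timeout)
            {
                return false; // let the caller fail with a meaningful assert
            }
            Thread.Sleep(10); // short poll interval, not a blind delay
        }
        return true;
    }
}
```

A test then asserts on the result, e.g. `Assert.IsTrue(WaitFor.Condition(() => saveButton.Enabled, TimeSpan.FromSeconds(2)), "Save button never enabled.");`, so a fast application pays almost nothing.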
Solutions:
  • Inform your customer of the cost savings that can be achieved from the feature implementation order (story order). For example, if the UI has an icon showing a file save in progress then the UATs can use it to know when a file save is complete, so wasteful 'blind' delays are not required.
  • Team alignment/focus. Be aware of the true cost of slow UATs. Half a second across 240 tests is 2 minutes. If 4 developers lose 2 minutes, say, 4 times a day then that is a total of 2 x 4 x 4 = 32 developer minutes a day. So if you spend 1 hour saving half a second off each test, it is paid back in just 2 days.
  • Cheat, but cheat legally. For example, rather than starting the whole application (EXE) in another thread, instantiate the main class and call its execute method. With good design you are only bypassing the Windows main method handling. You might also preload and reuse some read-only configuration files. This can save a lot of time, but be careful. :-)
  • Use the fastest computers available.
  • Distributed processing. I've never seen this; it seems to me to be the Utopia. I wonder if products like Alchemi could be used to pass each test fixture out to a different (idle) box. If so, it would seem viable for a project to keep the test run time under 1 minute. Hmmm ... another blog.
  • Reduce the cost of entry for developers by developing a UAT framework of application-specific test jigs rich with methods like 'WaitUntilXYZButtonEnables'. Fix once, use many times.
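The test-jig idea in that last point can be sketched like this. `MainFormJig`, the `FakeMainForm` stand-in, its `SaveIconVisible` flag, and the timings are hypothetical names for illustration, not part of NUnitForms; a real jig would wrap the actual form driven by the UI framework:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Stand-in for the application's main form. The real form would expose the
// save-in-progress icon mentioned in the story-ordering point above.
public class FakeMainForm
{
    public volatile bool SaveIconVisible;
}

// Application-specific test jig: fix the waiting logic once, use it in many tests.
public class MainFormJig
{
    private readonly FakeMainForm form;

    public MainFormJig(FakeMainForm form)
    {
        this.form = form;
    }

    // Waits for the save-in-progress icon to disappear instead of a blind Sleep().
    public void WaitUntilSaveCompletes(TimeSpan timeout)
    {
        Stopwatch clock = Stopwatch.StartNew();
        while (form.SaveIconVisible)
        {
            if (clock.Elapsed > timeout)
            {
                throw new TimeoutException("Save did not complete in time.");
            }
            Thread.Sleep(10); // short poll interval, not a blind delay
        }
    }
}
```

A test triggers the save, then simply calls `jig.WaitUntilSaveCompletes(TimeSpan.FromSeconds(5));` — the waiting detail is written once and shared, which lowers the cost of entry for new UAT authors.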

Wednesday, 30 July 2008

Open Source Project Hosting

I've been oscillating over the last year between my loyalty to SourceForge and Google Code. A couple of times I have created a Google Code project to move a SourceForge project across, but each time I've stayed with SourceForge. And yet I now have a few new Google Code projects. So far I do not see a clear winner; each has its advantages.

The pros and cons are:
  • Registering A Project

    A Google Code project registration is instant, although I have had some trouble getting full use of the Subversion repository until a few hours after registration. Apart from that, the project is available immediately.

    SourceForge, however, requires the project to be approved. For me this has taken anything from a couple of days to months: usually about three days, although once I needed to escalate through support and it took a couple of months.

  • Wiki

    SourceForge has outsourced its wiki support to Wikispaces, and the result is clearly superior: good WYSIWYG editing, and the default layout creates a good looking site.

    Google Code, however, provides much more flexibility. It seems that every page is editable as a wiki; significantly, even the project page itself is a wiki page. But Google Code's formatting is very ordinary, and generally speaking its pages just do not look good.

  • Release Upload

    SourceForge's release upload has long been somewhat 'Unix'. In other words, 'difficult'. Google Code release uploads are a breeze: just browse to the uploads page, click the link, and browse to the file to upload. SourceForge has since created an equivalent, but you do need to go looking for it and it does feel 'tacked on'.
So, I'm using Google Code for API/developer utility projects as it is very easy to use, and SourceForge for non-developer/UI projects as the presentation is so much better. If you're going to the trouble of making an open source project you want an audience. So, I'm choosing by audience.

Sunday, 27 July 2008

Last Time I Give Myself A Haircut?

Well, I thought I would cut my own hair. Thing is, I made a mistake and one thing led to another. Um, no hair now, and it is winter.

Sue is not happy with me.

Betsy Gets A Chrome Fuel Cap

Sue has had her Mini Cooper (Betsy) for a few months now and she still adores her. Lovely machine she is, and now she is a little bit more lovely with the proud addition of a chrome fuel cap.

I'm surprised, it does look good with the chrome side mirror, door handle, and now ... the cap.

Yea, I had to get the dogs in one of the photos. That is Violet at the back, and Golly's head & tail at the front. :-)

Thursday, 24 July 2008

Resharper Plugin Test Fixture

I've been writing a Resharper plugin, and although it does seem much easier than a Visual Studio add-in, writing any Visual Studio plugin/add-in is just plain awkward. The cost of testing is so much higher when the target product is the same as your development tool. So it seemed that the best thing to do first was to automate the basic Resharper plugin validation so that the manual testing cost is avoided/minimised. It also has the advantage of documenting my understanding of the Resharper plug-in API.

To test manually I must shut down VS, copy the DLL to a deployment folder, possibly update a registry key, and then start VS again. If things fail, Resharper/VS are not happy and everything may need to be shut down, DLLs deleted, VS started again, and maybe even a little configuration recovery done. Nothing too bad, but it is definitely a high-cost manual test scenario.

The full tests are part of the 'SharpProbe' project. A code fragment is shown below.

[Test]
public void HasActionsRootElement()
{
    Assert.IsNotNull(xmlDocument.SelectSingleNode("actions"));
}

[Test]
public void AllInsertElementsHaveValidGroupIDs()
{
    Dictionary<string, int> validGroupIds = new Dictionary<string, int>();
    validGroupIds.Add("ReSharper", 0);
    validGroupIds.Add("VS#Code Window", 0);
    validGroupIds.Add("VS#Solution", 0);
    validGroupIds.Add("VS#Item", 0);
    validGroupIds.Add("VS#Project", 0);

    XmlNodeList nodes = xmlDocument.SelectNodes("actions/insert");
    foreach (XmlNode node in nodes)
    {
        string groupID = node.Attributes["group-id"].Value;
        Assert.IsTrue(validGroupIds.ContainsKey(groupID),
                      string.Format("XML insert element has unknown group-id '{0}'.", groupID));
    }
}

[Test]
public void AllInsertElementsOtherThanResharperElementHaveValidActionRefIDs()
{
    XmlNodeList nodes = xmlDocument.SelectNodes("actions/insert[@group-id!='ReSharper']");
    foreach (XmlNode node in nodes)
    {
        NodeHasValidActionId(node, "action-ref");
    }
}

[Test]
public void HasAtLeastOneInsertElement()
{
    Assert.IsTrue(xmlDocument.SelectNodes("actions/insert").Count > 0);
}

[Test]
public void HasResharperInsertElementThatHasValidActionIDsAndMenuText()
{
    XmlNodeList nodes = xmlDocument.SelectNodes("actions/insert[@group-id='ReSharper']");
    Assert.AreEqual(1, nodes.Count);
    NodeHasValidActionId(nodes[0], "action");

    XmlNode actionNode = nodes[0].SelectSingleNode("action");
    string text = actionNode.Attributes["text"].Value;
    Assert.IsTrue(text.Length > 0);
}

Monday, 21 July 2008

Developing Resharper Plugins

I've been having a go at writing a Resharper plugin. As there does not seem to be much in the way of documentation I've created a page on my wiki to keep notes. You can find the page here.

Thursday, 17 July 2008

The Project With Multiple Teams Conundrum

Several years ago I worked for Citect on its shrink-wrapped product (also called Citect). The company had several software development teams working on Citect and, thinking back on it, I saw the teams as separate; each team was independent and had a clear team identity, even though we all worked on the one product. This demonstrates that 'team' is not necessarily tied to 'product' but to 'project', and a product can have multiple projects. In fact, I now think that the concept of a team is inherently linked to an individual project. One team, one project.

It seems that having multiple teams on the one project is a conundrum. If the teams lack identity they become one large team, and the advantage of a team is lost. A shared project is a team inhibitor.

I feel that what the project manager sees as a single project must be broken down into separate projects that each team can own. A team needs something to own; it needs boundaries so it can celebrate success. The issue of collaboration between teams is another matter, for the "team of teams" team.

Agile developers will collaborate. Others will always find a way ...

NXmlSerializer Rev 3 Released

I uploaded the latest NXmlSerializer release to SourceForge tonight. This release reduces the size of the XML produced and supports serialised objects referencing objects that need to be replaced when deserialising. Project documentation is here.

Wednesday, 9 July 2008

Agile - A Word To Empower Our Decisions

Words are powerful; they affect our perceptions and decisions. In 2001 seventeen leaders got together in Utah and emerged with the Agile Manifesto, which gave us the word/vocabulary of 'Agile' software development. A powerful concept, given to and now owned by the software development community. With the high profile that eXtreme Programming enjoys, will 'Agile' become a byword for XP? XP stands in its own right; I hope that its popularity does not overshadow the concept that 'Agile' gives us.

It is difficult to implement the XP methodology and not be Agile, but is Agile just XP? If we use 'Agile' as a byword for XP, are we losing the opportunities the language of the manifesto offered us? Will the seeds of other methodologies/processes be lost in the XP disco strobe lights?

Men suppose their reason has command over their words; still it happens that words in return exercise authority on reason. —Francis Bacon.

XP may be Agile, but Agile is not XP. To confuse the two is to limit the possibilities of XP evolving, or of using other agile approaches like Crystal Clear. Is XP's popularity now in danger of blocking our ability to adapt and improve? Interesting, considering XP's retrospective process.

How To Fail With Agile

This URL to "How To Fail With Agile" came to me from a post to the Melbourne XP Enthusiasts group. I feel the pain of recognition :-).

I'll ignore the blurring of the difference between agile and XP implied in the article.

http://www.nxtbook.com/nxtbooks/sqe/bettersoftware0708/