It seems to me that there is a dependency between Continuous Integration (CI) and build time. Each time I hit a slow build it puts pressure on 'fast' CI. It seems that if the build is slow then CI may be more destructive than beneficial. I'm thinking it is all a question of ratio.
CI does not define an integration rate. Some teams see CI as once a week, others see it as every 15 minutes. It is a relative concept.
If a build box can build and run all tests in, say, 1 second then a team commit rate of once every 5 minutes would seem achievable. Each developer would have near-instant feedback and be able to fix, or revert, any problem within a couple of minutes of a commit, with minimal effect on the team (without considering a pre-commit test system). But if the build takes, say, 30 minutes and the team's commit rate is once every 15 minutes, then by the time a build failure is detected the whole team is affected.
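To make the ratio concrete, here is a rough sketch (my framing only, not an established formula) of the number of commits at risk while one build runs:

    // How many commits can pile up, unverified, while a single build runs.
    // Figures are the ones from the example above.
    double buildMinutes = 30;           // time to build and run all tests
    double commitIntervalMinutes = 15;  // average time between team commits
    double commitsAtRisk = buildMinutes / commitIntervalMinutes;
    // commitsAtRisk = 2.0: a failure is detected at least two commits late,
    // so the team is probably already building on top of a broken revision.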
So it seems to me that there is a relationship between CI's rate and build time. I just do not know what it is yet.
Rob Smyth
Friday, 9 July 2010
Monday, 14 June 2010
Build Management
Over the last few years I've worked on .Net projects using a few build management systems. My ratings, best first:
Friday, 17 April 2009
NCover Scalability
We seem to have hit a glass ceiling with NCover in our build process. We run NCover on all builds to ensure that coverage does not fall (thanks to NCoverCop). In the last couple of days we have started to get out-of-memory exceptions. It seems that the process of generating a coverage report just takes too much memory.
The project is significant but not large (yet), so this was a surprise. The solution seems to be to break up the coverage test into multiple build tasks.
Monday, 13 April 2009
Using Subversion Revision as the AssemblyVersion - Revisited
I've been using MSBuild tasks for a while to automate the generation of an AssemblyInfo.cs file with an AssemblyVersion that matches the current Subversion revision (commit number). See my prior post here. But I've had mixed success with that approach, and this post is about using the TortoiseSVN SubWCRev.exe utility, which I hope will be more appropriate for open source projects.
The Need
- Automate the file and assembly version numbers to use the Subversion revision number as the revision (fourth) part of the version, e.g. '1.0.0.<revision>'.
- Make the project configuration visible in the Visual Studio 2008 UI.
- Suitable for debugging by typical developers using the VS2008 UI.
- Project compilation without third party application installation.
The Problem
The MSBuild community tasks approach works, but its configuration is hidden from the typical developer. It requires every user to have the MSBuild community tasks installed on their PC. The installation alone is not the problem; the real issue is that if a user tries to build the project on a PC that does not have the community tasks installed, they are given an error that cannot be fixed from within the Visual Studio 2005/2008 UI they are using. This is the critical usability issue I have found.
This has not been such a problem in commercial development, as such issues are resolved early, and only once, on each developer's box, and are probably documented for future development or maintenance.
But I have found it to be a big inhibitor for open source software users. The code cannot just be downloaded and compiled without first reading all the documentation to find such details (a deodorant ... it ought to be self-evident or, better still, just work). It means that if you have many users you will incur a significant support cost, or the inhibitor will reduce product acceptance. In other words ... ungood.
The Solution
After much googling I came across Bruce Boughton's blog showing how to use the TortoiseSVN SubWCRev.exe utility in a pre-build event. His solution did require a little poking around in the Visual Studio project file, and also required TortoiseSVN to be installed on the PC. I've adapted it a bit; here is my solution:
- Add the existing AssemblyInfo.cs file to Subversion's ignore list. This file will be replaced by the auto-generated file later.
- Copy your existing AssemblyInfo.cs file to a file named AssemblyInfo_temp.cs. This will become the source template used by the auto-generation. Move the new file into the Properties folder by drag and drop in the VS Solution Explorer pane.
- In the new AssemblyInfo_temp.cs file add the required SubWCRev.exe keywords. We want to set the version's revision number, so change your [assembly : AssemblyVersion("1.0.0.0")] to [assembly : AssemblyVersion("1.0.0.$WCREV$")]. If you want the FileVersion to be the same, delete the FileVersion entry. (A sketch of the resulting template follows these steps.)
- Copy the TortoiseSVN SubWCRev.exe file into your source tree. I keep a Lib folder in the root (trunk) of all my project source folders. In this case I copied the file to Lib\TortoiseSVN\SubWCRev.exe to make it clear it is a TortoiseSVN file. This way anybody can download the project and compile it without the need to install TortoiseSVN (although anybody using Subversion probably should anyway).
- Now add as a project pre-build event: $(SolutionDir)..\Lib\TortoiseSVN\subwcrev.exe $(ProjectDir). $(ProjectDir)Properties\AssemblyInfo_temp.cs $(ProjectDir)Properties\AssemblyInfo.cs
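For reference, a minimal sketch of the template file. The attribute values are illustrative; the only essential part is the $WCREV$ keyword, which SubWCRev.exe replaces with the working copy revision:

    // AssemblyInfo_temp.cs - the template SubWCRev.exe reads.
    using System.Reflection;

    [assembly: AssemblyTitle("MyProject")]        // illustrative name only
    [assembly: AssemblyVersion("1.0.0.$WCREV$")]  // revision filled in at build time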
Tuesday, 27 January 2009
To Revert Is Right, To 'Go-on' Is To Look Good
As people come onto a team using continuous integration (CI) we usually hear:
- 'I cannot commit as there are too many changes.' (Has not committed changes as they were done.)
- 'I cannot commit as I have conflicts.' (Has not updated frequently.)
The thing I have taken on is that reverting my changes enables me to deliver faster. So true. But I notice that when somebody says at stand-up that they did X hours of work and then reverted all changes, it is often seen as a failure. An interesting reflection on how we value time to deliver working code against measured effort, behaviour, complexity, or elapsed time. I think we have another dimension of software development here, about measuring success.
Words seem to lead us. 'Performance' is about an act; delivery of working functionality in minimal time is about efficiency. Would we be better off talking about efficiency than performance?
Saturday, 16 August 2008
NUnitGridRunner - Grid Processing for NUnit

Friday, 1 August 2008
UATs and CI Can Only Play A Fast Game Together
When developing using automated acceptance tests (we call them User Acceptance Tests 'UATs') and continuous integration (CI), the time taken to run the tests impacts directly on team performance. Or, to put it another way, a team process using both UATs and CI is not a viable team process if the UATs are slow.
UATs are inherently slow, and teams always find that it is not long before their UATs run for more than 10 minutes. Too slow for me. CI is all about frequent repository commits, and in my case the commit rate can be every 15 minutes. So if I were to run all the tests prior to each commit I would be sitting, waiting, bored, for 30-50% of my time. If I just run all the unit tests and a selection of UATs (as I do) then I am somewhat 'leaning on the build box', and if a build failure is reported some 10 minutes later I'm already half way through my next chunk of work when I find my last commit had a fault. This means that I have to revert my code (lose the last 10 minutes' work) so I can rapidly revert the build box, retrieve the broken revision, switch my thinking back to what I was doing, and fix it. In such a rapid CI environment I do not mind leaning on the build box so long as the box is fixed within 10 minutes. But either way, time is lost due to slow UATs.
The other, hidden, cost that arises from slow UATs is that it is not uncommon for team members, usually newer members, to find breaking the build too confronting or embarrassing. So they run all the tests before every commit, no matter how trivial the code change. On one hand this is a good attitude (I'm embarrassed too), but a bit of self-exposure (trust in the team) can allow you to use the build box as a tool to speed up team development. In our team I notice that no build break during the day is a good sign that the team has gone 'stale', that people are having trouble. If the UATs are a complete specification of customer requirements, nobody can totally avoid a failure without running all the tests (e.g. the customer wants the font to be 11pt). It is the same as reading the entire specification prior to each commit.
So speed matters. It directly affects team behaviour and time to delivery ($). With tests taking 10 minutes (say), I reckon this must equate to more than 25% of the team's time.
If your unit tests are running slower than 2,000 per minute then either they are not unit tests or you should be humane and retire that box. UATs are another matter. They require discipline, skill, and grunt.
UATs are usually slow because:
- UATs simulate an end user running the application, so they are constantly starting and closing the application and manipulating it via the UI (we use NUnitForms as the low-level UI test framework). This requires CPU grunt and the loading of many assemblies, hence it is not unusual for test cases to take 1 second each.
- Writing efficient UATs is not easy. It is a learnt skill. There must be a balance between truly testing the application from the user level (e.g. click here, click there), fast testing of a specific feature, and the independence of the test cases. For example, it is faster to test multiple features in one test case. Some 'cheats' can be used (see below).
- The skill is often in the order of feature implementation (stories), as some stories enable faster testing of other stories.
- The team may not appreciate (or care about) the impact slow UATs have on their ability to deliver, and so not give them the ongoing attention they need. Consider this as UAT code health.
The smells:
- Test cases taking longer than 2 seconds.
- Tests with a long setup time (> 1 second).
- Slowing down tests to avoid defects that only appear in 'faster than life' UATs. This is ignoring a code health issue.
- Unnecessarily complex setup. e.g. needing to drag a control a long distance to test a transition near the far window edge. Better: first implement a feature for the user to position the control by X/Y coordinates, then use that to place the control near the edge for the test.
- Hard-coded 'blind' pauses or sleeps in the tests, e.g. 'Sleep(500)'. This is a real killer.
- Developers sitting with glazed eyes watching tests run.
- Developers who like sitting watching tests run. They probably use the time to web browse. But then this is another, bigger, problem. :-)
- Intermittent test failures. The UATs are telling you something. They are giving you an opportunity to fix design problems early. Pure gold.
What helps:
- Inform your customer of the cost savings that can be achieved from feature implementation order (story order). For example, if the UI has an icon showing a file save in progress, the UATs can use it to know when a file save is complete, so wasteful 'blind' delays are not required.
- Team alignment/focus. Be aware of the true cost of slow UATs. Half a second across 240 tests is 2 minutes. If 4 developers lose 2 minutes, say, 4 times a day, that is a total of 2 x 4 x 4 = 32 developer minutes a day. So if you spend 1 hour saving half a second off each test, it is paid back in just 2 days.
- Cheat, but cheat legally. For example, rather than starting the whole application (EXE) in another thread, instantiate the main class and call its execute method. With good design you are only bypassing the Windows main-method handling. You might also preload and reuse some read-only configuration files. This can save a lot of time, but be careful. :-)
- Use the fastest computers available.
- Distributed processing. I've never seen this. It seems to me to be the Utopia. I wonder if products like Alchemi could be used to pass each test fixture out to a different (idle) box. If so, it would seem viable for a project to keep the time to run tests under 1 minute. Hmmm ... another blog.
- Reduce the cost of entry for developers by developing a UAT framework of application-specific test jigs rich with methods like 'WaitUntilXYZButtonEnables'. Fix once, use many times. (A sketch of such a helper follows this list.)
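As an example of the 'fix once, use many times' jig idea, a minimal polling-wait sketch. The names are mine, not from NUnitForms or any particular framework; it replaces a blind Sleep(500) with a wait that returns the instant the condition holds:

    // A minimal sketch, assuming nothing beyond the .NET base library.
    using System;
    using System.Threading;

    public static class TestWait
    {
        // Polls until the condition holds; throws if it never does.
        public static void WaitUntil(Func<bool> condition,
                                     int timeoutMs = 5000, int pollMs = 50)
        {
            DateTime deadline = DateTime.UtcNow.AddMilliseconds(timeoutMs);
            while (!condition())
            {
                if (DateTime.UtcNow > deadline)
                    throw new TimeoutException("Condition not met within timeout.");
                Thread.Sleep(pollMs); // short poll; returns as soon as it is ready
            }
        }
    }

    // A 'WaitUntilXYZButtonEnables' jig is then just a named wrapper, e.g.:
    //   TestWait.WaitUntil(() => xyzButton.Enabled);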
Wednesday, 7 May 2008
TortoiseSVN Revert Guide - Part 2
I thought it would be helpful to extend the guide in my prior post to show how to handle a typical build box break quickly. I've titled it 'TortoiseSVN' as that is the Subversion front end we use. It could equally be just a Subversion guide.
OpenOffice copy available here.
Thursday, 6 March 2008
If You Break The Build - Revert
Our team at Varian Australia decided in our last iteration to adopt the rule:
"If you break the build, revert your commit immediately."It has surprised me how successful this has been. We are using Subversion (SVN) and while I thought that I knew how to revert a commit I found I learnt so much more about the powerful features Subversion offers to revert a commit. More importantly I, and I think others, are now much more comfortable/confident on quickly reverting a commit. This is empowering, it lowers cost/inhibitors.
I learnt that Subversion can revert the changes in a commit, even if it is not the most recent commit, without losing the changes ... quickly. TortoiseSVN offers options like "revert changes from this revision", which allow the change set of just that commit to be reverted by a following commit of the reversed change set. (From the command line, 'svn merge -c -N .' reverse-merges revision N into your working copy, ready to commit.) If you are using Continuous Integration (CI) it means that a revert is very low cost (little lost work).
The team has found it very enabling. We can break the build, but if we do, the break lasts only minutes. It does mean that we need a fast build.
I've worked in companies with slow builds (hours). That experience emphasises the need to always have a fast build. A fast build is always possible; it is just a matter of finding how. The increased productivity of Continuous Integration is significant.
Thursday, 28 February 2008
Team Build Box Pass Rate Metrics
In software development, metrics are always 'interpretable' and prone to what I refer to as (pardon me, but I cannot term it better) 'technical masturbation'. An interesting metric at work has been the build box % pass rate. I'm not sure what it tells us. Each team's culture is different in how it treats the build box. One team has a 30% pass rate while another is around 80%. One uses CI and the other does not. So what does it tell us?
Members of a CI team may find it useful, and acceptable, to "lean on the build box" by not running all tests prior to committing. This can be productive if running the tests takes longer than, say, 5% of the commit interval and build box breakages are fixed quickly (is 'quickly' relative to commit rate?). Can the gains be greater than the cost? Does the % pass rate metric reflect productivity? By 'productivity' I mean minimising time to profitable delivery.
Other teams may consider the build box pristine and not to be broken ever.
It occurs to me that perhaps the real issue is the time spent broken. In companies I've worked at, the build sometimes took a long time (e.g. hours), so the consequence of a build break was higher and the team usually aspired to a no-breakages policy. If you have such a slow build, fair enough. But a build time of hours is really a smell of tight coupling; I would eliminate the long build time first.
So I wonder if the useful metric coming out of this is the % of time the build box is broken, rather than the build box pass rate against commits?
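As an illustration of that metric, a small sketch (my own; the shape of the build-event input is an assumption, not any build server's real API):

    // Percent of elapsed time the build box was red, from ordered
    // (timestamp, isGreen) status-change events.
    using System;
    using System.Collections.Generic;

    static class BrokenTimeMetric
    {
        public static double PercentBroken(
            IList<(DateTime at, bool green)> events, DateTime periodEnd)
        {
            double broken = 0, total = 0;
            for (int i = 0; i < events.Count; i++)
            {
                DateTime next = i + 1 < events.Count ? events[i + 1].at : periodEnd;
                double minutes = (next - events[i].at).TotalMinutes;
                total += minutes;
                if (!events[i].green) broken += minutes; // red until next event
            }
            return total == 0 ? 0 : 100.0 * broken / total;
        }
    }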
Friday, 22 February 2008
Continuous Integration - More Build Lights
One of the great things we have done at work in recent months has been to raise build box state visibility by both sound (CruiseControl tray) and light (see my prior blog). The light has proved to be very effective.
I found some more interesting photos of what others are doing:
Sunday, 2 December 2007
The 'Best' Revision Control Software
One of the teams at work is currently evaluating which revision control software (RCS) product to use, as we are moving away from their preferred choice of ClearCase. I've never understood ClearCase and have, in a couple of companies, been perplexed by the strong passion some teams have for it. But, after observing the guys at work evaluating different products, the penny has dropped for me on how ClearCase can, truly, be the best RCS software.
There are many revision control tools, like Subversion (SVN), ClearCase, and Perforce, and their functionality varies greatly. No one product that I know of does everything. Interestingly, when I worked at a GE company, one team used SourceSafe, another used ClearCase, and another used (I think) Perforce. I've tried to use ClearCase and found that it greatly reduced my productivity. Yet others insist it is an enabler. How can this be so? We are all software developers, right?
Taking ClearCase as the example: at my current work we have a team that sees ClearCase as important to their success while others see it as a blocker. I think I can explain this from why I find it a blocker and from what I've learnt about why the other team sees it as an enabler.
ClearCase as a blocker
I like to work in an agile fashion, in particular in an eXtreme Programming (XP) style. It suits me; I perform best this way. The style that works for me includes the attributes of:
- Continuous Integration (CI)
- Ruthless refactoring
- Teamwork
This translates to a process of:
- Commit code to the repository about every 30 minutes (average on a good day).
- Merge from the repository every 15 minutes (keep up to date with the rest of team's commits).
- Do not work on branches.
ClearCase is very feature rich. The trouble is that these features make the process above impossible; they are actually detrimental to this way of working. The fundamental problem is that ClearCase, one way or another, enforces a level of isolation such that each commit requires multiple steps. The end result is that it takes minutes to do a simple merge and commit with ClearCase. If CI is being used, the superior merging capabilities offered by ClearCase's process have no benefit, as the merges are always small.
So I find that for an XPish way of working ClearCase's features do not translate to benefits.
ClearCase as an enabler
The other team I mentioned at the start, however, considers ClearCase's features to be important enablers of their work. I admit that at first I thought they were 'crazy', but I've come around and reckon they are right. After all, they are a successful team, so how can they be wrong?
I see them as agile, as they are collaborative, but their work style is nothing like XP. The style that works for them includes the attributes of:
- Non-CI. Each developer works in a branch with, what I consider infrequent (but that is qualitative) merging.
- Up front design approach (I think).
- Teamwork.
- Little or no refactoring. (Refactoring is seen as a result of failure as opposed to my style of it being an enabler.)
- Low requirement churn.
This translates to a process of:
- Infrequent commits. I'm guessing an average of once every 2 days.
- Careful merging (essential with infrequent commits).
- Detailed history records of merges to trunk as this is the 'delivery' and work is usually on a branch.
So, interestingly, Subversion's (my favourite) features do not translate to benefits for them, just as ClearCase's didn't for me. For example, Subversion is fast, but they do not really need speed. The big blocker for them is that Subversion does not currently (it is scheduled for version 1.5) record merge history, that is, when a branch is merged to trunk. This is their basic work practice!
So, for them, ClearCase is the best as it matches a work style that works for them.
Conclusion
I really should have known better: people always trump process. People and environments are different, so there is no one right way of working.
My spin on this: now that agile software development is proven and widespread, Agilists are at risk of becoming the ones who say 'one size fits all'.
Tuesday, 20 November 2007
Accelerating VS Build and Test Times
The latest CodeProject newsletter has an advert for a product that accelerates Visual Studio builds by using distributed computing. Cool. I wonder if it can be used to run the automated tests?
I expect that as a project proceeds its automated User Acceptance Tests (UATs) will take longer and longer. They currently take about 5 minutes in my current team. Even 5 minutes is an inhibitor; we do not always run all the UATs before committing, especially those of us who commit frequently (every 15-60 minutes). As they take longer, this will affect productivity.
So ... can this tool, or another, use distributed computing to run the UATs? If so, then perhaps they could run continuously with a dashboard showing current status. Stop typing and then watch them go green. Now that would be cool. (A toy sketch of the distribution idea follows.)
The product advertised is here.
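As a toy illustration of the distribution idea (no real grid API, just the partitioning step; names are mine):

    // Deal test fixtures round-robin across the available (idle) boxes.
    using System.Collections.Generic;
    using System.Linq;

    static class FixturePartitioner
    {
        public static List<string>[] Partition(IList<string> fixtures, int boxes)
        {
            var buckets = Enumerable.Range(0, boxes)
                                    .Select(_ => new List<string>())
                                    .ToArray();
            for (int i = 0; i < fixtures.Count; i++)
                buckets[i % boxes].Add(fixtures[i]); // round-robin assignment
            return buckets;
        }
    }

    // With 240 one-second test cases and 5 boxes, each box runs about 48 seconds
    // of tests, bringing a 4-minute serial run to under a minute.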
Friday, 16 November 2007
How Often Do You Commit?
When talking "Continuous Integration" (CI) just what is "Continuous"? I've often had this discussion where one person considers continuous to be once a week while another is thinking every few minutes. The perception difference is based on their experiences, nature, and the environment in which they work. I do not think there is any wrong answer except one that excludes all others.
So I'm wondering what is the practice out there so I've added a poll on the right sidebar of this blog to get an indication from actual practices. Regardless of what you think CI is or would like it to be, what is your current practice.
Take a few seconds and record your actual (average) commit rate. This is nothing to do with agile development or the like, just an indicator of practice.
Wednesday, 14 November 2007
Build Box Lights
Yesterday our Varian Australia software team got a USB indicator light, which was quickly hooked up to our build box (CruiseControl). These lights are great. We are using cctray to play music on build passes and fails, but we have found that sometimes we miss the sound or it just gets ignored for a few hours. The light we got is programmable for a mix of red, blue, and green with different intensities and flash rates. The cost was around $100.
Check them out here. Google "usb indicator light".
It comes with a driver DLL, but info for writing your own is available here. (A toy status-to-colour sketch follows below.)
Nigel's post here.
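For what it is worth, a sketch of mapping build status to a light colour. 'ILight' and its console stub are hypothetical stand-ins; the real vendor DLL's entry points will differ:

    using System;

    interface ILight { void Set(byte r, byte g, byte b, bool flash); } // stand-in for the driver

    class ConsoleLight : ILight // stub so the sketch runs without the hardware
    {
        public void Set(byte r, byte g, byte b, bool flash) =>
            Console.WriteLine($"light r={r} g={g} b={b} flash={flash}");
    }

    class BuildLight
    {
        static void Main(string[] args)
        {
            bool passed = args.Length > 0 && args[0] == "pass";
            ILight light = new ConsoleLight();
            if (passed) light.Set(0, 255, 0, flash: false); // steady green
            else        light.Set(255, 0, 0, flash: true);  // flashing red
        }
    }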
NCoverCop - A Must Have Team Tool
A little while back I mentioned how at Varian Australia the software team developed a tool that checks our code coverage on every commit (each time the build box runs) and fails the build if the coverage falls. Now Nigel is publishing the tool to SourceForge. You can find it here.
This has proven to be very effective. Even with well-intentioned TDD it is surprising how easy it is to miss one functional point. As the tool only allows the coverage to go up or stay the same, the coverage must, and does, increase with time. It has made a real difference to the team. All code committed must now be 100% covered by unit tests or the fail music will sound within just a few minutes :-).
Monday, 29 October 2007
Using Continuous Integration To Raise Code Coverage
I'm constantly amazed by what I learn working in a team; somebody is always coming up with something really useful. These last couple of weeks it was a simple extension to our continuous integration (CI) build box that measures the code coverage of the build and compares it to the highest recorded coverage. If it is less, the build fails. Yep, fails. If it is higher, then the recorded high is automatically updated with the build's coverage. Being a CI environment, any build box failure must be fixed immediately. As a result, the code coverage can only go up.
So far this has been amazingly successful, but then our team does practise TDD (well, mostly).
The implication is that if you are committing new code you are responsible for ensuring that it is covered by unit tests (code coverage is only calculated from unit tests). This is something of a subtle move in responsibility: before, it was acceptable to 'think' that the code was covered; now we must 'ensure' that it is covered, or the build will break within minutes of the check-in.
For this to work, developer boxes were set up so that each developer can run their own code coverage (before check-in) and view the added code's coverage (using NCoverExplorer) quickly and easily. I think things being easy is really essential.
Like many things I've blogged about I'm really recording great ideas from the team or others. This is no exception (credit here to Nigel Thorne ... a wickedly good idea Nigel).
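For illustration, here is a minimal sketch of the ratchet idea. This is my own sketch, not NCoverCop's actual code; the record file location and the input (coverage as a fraction) are assumptions:

    // Fails the build (non-zero exit) if coverage drops below the best recorded
    // value; advances the record when coverage improves.
    using System;
    using System.Globalization;
    using System.IO;

    class CoverageRatchet
    {
        static int Main(string[] args)
        {
            // args[0]: this build's coverage as a fraction, e.g. "0.853".
            double current = double.Parse(args[0], CultureInfo.InvariantCulture);
            const string recordFile = "coverage.highwater"; // assumed location
            double best = File.Exists(recordFile)
                ? double.Parse(File.ReadAllText(recordFile), CultureInfo.InvariantCulture)
                : 0.0;

            if (current < best)
            {
                Console.Error.WriteLine($"Coverage fell: {current:P1} < {best:P1}");
                return 1; // build server treats non-zero exit as a failure
            }
            if (current > best)
                File.WriteAllText(recordFile,
                    current.ToString(CultureInfo.InvariantCulture));
            return 0;
        }
    }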
Friday, 14 September 2007
Virtual Build Box Lava Lamps
Continuous Integration is the blood supply of an XP-style team. The visibility of build box failures, and of how long the build has been broken, is important. It helps the team focus on keeping the build pristine. The team I'm working with uses CruiseControl to fire a warning audio cue and to change the state of a tray icon. This is great, particularly the audio, but it does not give us any indication of how long the build has been broken. So we come back from lunch and do not notice, or we lose the plot for a few hours and forget it.
One solution is to use lava lamps. As they need to warm up, their activity gives nice feedback on just how green or how red we are. But they need power wiring (you could use the CruiseControl X10 interface) and control, and how do you handle multiple builds and multiple boxes?
The other option is to dedicate a screen on a wall. One company I worked with displayed the state of all builds on all boxes as a grid on such a screen. It could be seen across the floor and worked well. But it does require a dedicated box, screen, etc.
So what about the new digital picture frames now on the market? One from ThinkGeek looks interesting, as it can be updated by email, so a build box only needs to send an email. This means a more informative and dynamic display in the team pit without the overhead of a computer. I like the idea.
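As a sketch of how simple the build box side could be (the SMTP host and addresses are placeholders, not real endpoints):

    // A build step that emails the frame the build's state.
    using System.Net.Mail;

    class BuildStatusMailer
    {
        static void Main(string[] args)
        {
            bool passed = args.Length > 0 && args[0] == "pass";
            var client = new SmtpClient("mail.example.local"); // assumed SMTP relay
            var message = new MailMessage(
                "buildbox@example.local",    // placeholder sender
                "photoframe@example.local",  // placeholder frame inbox
                passed ? "BUILD GREEN" : "BUILD RED",
                passed ? "All tests passed." : "Build broken - fix or revert.");
            client.Send(message);
        }
    }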
My thoughts of an ideal virtual lava lamp set-up:
- Audio play on build failure and on build fixed (as provided by CruiseControl tray tool).
- Screen in team pit displaying status as colour and animating time broken.
- Tray icon access for build box details (e.g. CruiseControl tray tool).
- Screen displays a history graph of one or two (only) metrics, like build time, code coverage, or number of unit tests. Whatever the team feels helps them at the time.