Visual Studio creates an AssemblyInfo.cs file for each project in a solution, in a 'Properties' folder. The thing is that, more often than not, I'm more interested in the application's version than an individual assembly's version, and the solution is the application. So I find that each time I create a solution with multiple projects (assemblies) I add a 'Common' project that has an AssemblyInfo.cs file to be used by all other projects. That way all assemblies have the same version information.
To do this, delete the AssemblyInfo.cs file in each project, then use 'Add Existing Item', browse to the common AssemblyInfo.cs, and add it as a link. This way all projects share the same AssemblyInfo.cs file. Unfortunately Visual Studio does not allow adding any items to the 'Properties' folder.
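As a minimal sketch, the shared file just carries the usual assembly-level attributes; the company, product, and version values below are placeholders, not from a real project:

using System.Reflection;

// Shared by all projects in the solution via 'Add As Link'.
[assembly: AssemblyCompany("MyCompany")]
[assembly: AssemblyProduct("MyApplication")]
[assembly: AssemblyVersion("1.2.0.0")]
[assembly: AssemblyFileVersion("1.2.0.0")]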
This is a good way to go whether the AssemblyInfo.cs file is maintained manually or created/updated automatically.
Rob Smyth
Wednesday 3 December 2008
NPlot Problem On Vista
CPU loading goes to 100% on a Vista box when the mouse is moved over an NPlot .Net 2.0 chart. The displayed XY data tooltip flickers constantly. All is fine on an XP box. The problem seems to be caused by Vista generating a mouse move event when a tooltip is displayed, even though the mouse has not moved. So if the tooltip is displayed while handling a mouse move (e.g. Control.OnMouseMove) you will get an infinite loop.
There is a bit of chatter on the net about tooltips and Control.OnMouseMove, so I added some code to NPlot's OnMouseMove override to do nothing if the mouse has not moved since the last call. This fixed the problem.
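A minimal sketch of that guard, as it might sit in a Control subclass (the lastMousePosition field is my illustration, not the actual NPlot source):

private Point lastMousePosition = new Point(-1, -1);

protected override void OnMouseMove(MouseEventArgs e)
{
    // Vista raises a mouse move event when the tooltip is shown even though
    // the mouse has not moved; ignoring it breaks the infinite loop.
    if (e.Location == lastMousePosition)
    {
        return;
    }
    lastMousePosition = e.Location;
    base.OnMouseMove(e);
}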
So if you have a WinForms application with any tooltips displayed over a control that change with mouse movement, best check CPU loading with mouse over on Vista.
Wednesday 26 November 2008
More Thoughts - Is XP really Agile
'Agile' is a term created at a meeting in 2001 by a gathering of methodologists who each promote a different methodology. This group concluded that they needed a word to communicate attributes that they all agreed on. The word they chose was 'Agile' and its definition is given by the agile manifesto here. Those present included Kent Beck, co-founder of XP, but also the authors of many other methodologies. So 'Agile' was intentionally not intended to mean XP but was to enable communication of attributes that many methodologists find to be important.
Agile refers to 4 attributes concerning collaboration. XP, however, is a process-based methodology for teams and project management. Its primary focus is on the software development team. Doing XP is clearly defined as doing the 12 basic practices found here. Agile is a term used to describe collaboration while XP is a set of processes. XP is about how a team does things while Agile is about the attitude.
To say that Agile is XP is to redefine the intent and rob us of the language its authors intended.
In the software development industry process/methodology skills are not well developed. As a result it is not uncommon to find teams that claim to use RUP, waterfall, XP, etc, but in fact do not. Likewise most teams claiming to do XP do not.
XP does work. I know as I have seen it introduced into one company with some amazing successes. XP, like all methodologies, can also fail miserably; I've seen this too at the same company. Every methodology addresses a particular set of problems (e.g. efficiency or sustainability, short delivery time or predictability) and situations such as team size and criticality. No process fits all, and people always trump process.
The most common anti-pattern is to apply a methodology, such as XP, to an entire existing work force. People are different, projects are different, so processes/methodologies should be different. No company with a large number of developers will ever be able to apply XP to all teams without first replacing most of its software development staff (not recommended).
Monday 10 November 2008
The New UAT Framework Contender - White
White is a new automated UAT (User Acceptance Testing) framework for Win32/WinForms/WPF applications. Perhaps a replacement for NUnitForms.
NUnitForms is a framework that layers over NUnit to enable run time application/user acceptance testing. e.g. Click button X and then expect dialog Y. It is not a unit testing framework.
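For example, a typical NUnitForms test drives the UI through its tester classes, something like this (the control names here are hypothetical):

[Test]
public void ClickingSaveShowsConfirmation()
{
    // Drive the application through the UI, as a user would.
    new ButtonTester("saveButton").Click();
    Assert.AreEqual("Saved", new LabelTester("statusLabel").Text);
}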
I'm a developer on the NUnitForms team and a few weeks ago I took on the job of doing a new, much needed, NUnitForms release. But since then I've learnt about White and wonder if White is the future. I know one friend in Melbourne who is using it for UATs on a commercial project and the feedback so far is positive.
I'm currently using NUnitForms. It has Vista compatibility issues that I think can be overcome without too much effort, but WPF provides a brave new world. The crunch will come in the next few months when I would like to see if we can change our underlying UAT framework from NUnitForms to White. If nothing else, it would confirm how well we designed our own test jig / framework.
Get White here.
Get NUnitForms here (but download and compile, the last release is a bit old).
Sunday 19 October 2008
Can eXtreme Programming Survive Being Main Stream
A few days ago I noticed a job email where the company announced that they did "waterfall". Gosh, waterfall had, up until the end of last century, always been the de facto status quo. That any company feels a need to say this is testimony to eXtreme Programming (XP) processes having now become main stream. But has XP survived the transition? I think it will take several years to find out. I'm hearing 'Agile' often being confused with XP, and it is so common to find redefinitions (abortions) of XP processes / intent.
I will wait and see. XP rocks, but in a general sense I think the attributes of 'agile' have wider application if the intent is not lost in translation. Given a choice of agile attributes or the XP process, I would take agile any day. It is just that it seems harder to use attributes than the process cook book that XP provides.
Tuesday 7 October 2008
NamedPipeClientStream Gotcha
I've been playing with the .Net 3.5 NamedPipeClientStream for NUnitGridRunner. I had been testing my application at home on my portable, but when I moved to a cafe I found that my application threw exceptions. The difference was that even though I was running both client and server on the one box, the behaviour changed depending on whether I had a network.
As a result the code ended up like this:
public void Start()
{
    // When the client and server are on the same box, use "." (the local
    // machine) as the pipe server name. Using the machine's network name
    // fails when there is no network available.
    string hostName = Environment.MachineName.ToLower() == gridHostName.ToLower()
        ? "."
        : gridHostName;
    pipeClient = new NamedPipeClientStream(hostName, "NUnitGrid", PipeDirection.InOut,
                                           PipeOptions.Asynchronous);
}
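After Start() the client still has to connect before use; a minimal sketch (the 5 second timeout is an arbitrary choice of mine):

pipeClient.Connect(5000); // Wait up to 5 seconds for the server's pipe to be available.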
Sunday 5 October 2008
A Castle in a Blackout
We live in a 'Residential Bushland' area with lots of large trees. So each year we lose power, usually after a storm, for a day or two. No complaints, as we chose to live here because of the trees, but ... it is inconvenient. So we got a generator. Great, but to power the freezer, fridge, TV, heater, etc takes a lot of power cords. What we found was that when we lost power we would not start the generator, as we needed to balance the effort of running all the power cords against how long we guessed the power would be out. We needed a way to make it all easier, so we got a changeover switch installed on the house.
The changeover switch means that we run a couple of extension cables from the generator to the house (like powering a caravan), flick the switch over to 'Generator', and the whole house is powered by the generator.
Now I'm just waiting for the next power outage so I can use it. Cost of generator and switch was about $2,500. Value of having TV, fridge, and all lights on during an outage ... priceless. Hmm, maybe we should hook up the Christmas lights?
Thursday 2 October 2008
VisualSVN Visual Studio Subversion Integration
For 1-2 years I have been using AnkhSVN to provide a level of subversion (SVN) integration with Visual Studio. But in the last few days I've been upgrading to Visual Studio 2008 and decided to give VisualSVN a go. Very pleased I did, it is very good.
I tend to use TortoiseSVN both at home and at work as my primary subversion tool. No matter how good any VS revision control integration may be, I feel that Explorer integration will always be required as, after all, we are dealing with files. Also, TortoiseSVN's ability to be configured to require a bug ID (a la story ID) is essential in some work environments. So any VS subversion integration tool that integrates with TortoiseSVN is leveraging off TortoiseSVN's functionality that you would be using anyway. Big win.
Also, even after a few hours of use, VisualSVN provides many other very useful benefits:
- Files moved in VS are shown in commit as "Add(+)" and history is maintained.
- Tortoise options accessible via VisualSVN menu in Visual Studio.
- File modified indicators visible in solution explorer as you would expect.
- Modified files in VS status bar is very useful. I often want to know if a box has any changes and this little indicator gives me instant indication.
- 'Show Changes' menu option within VS very useful.
- Changed lines are highlighted in the VS editor pane. Impressively, it still only shows changed lines on a moved file; that is, no lines are marked as changed if not edited before/after the move.
- VS tool bar buttons for commit, update, show changes.
- Full synchronization with Tortoise actions outside of VS. e.g. Files added to subversion using TortoiseSVN outside of VS, without commit, show up in VS immediately.
Wednesday 17 September 2008
Vista 'Command Prompt Here'
Thanks to Tim Sneath's blog entry I found that Windows Vista has a 'Command Prompt Here' feature built in. Albeit nicely hidden. But there is a little gotcha. On my first go, with Explorer up, I tried it on the folders view in the left pane (I like having the folders open in the left pane) and found it did not work ... it only works if you hold the shift key down and right mouse click on a folder in the files pane on the right.
Nice though.
Sunday 14 September 2008
Nick Cave and the Bad Seeds - Coding?
Tonight while watching, over the top of the laptop, a documentary aired earlier on from the series Great Australian Albums about Nick Cave and the Bad Seeds, Nick made the comment about his song writing:
"Now if I write something down it is there because I feel that at the time I'm writing it down that is the only line that can go down against the line that I've written ... exactly the right line"
He said this in the context of reflecting back and saying he can see now that in the past his writing was not right, that it was getting excited because "something came out".
Gosh that hit me; that is how I try (or would like) to code. It feels to me like the developer attitude that leads to a zero defect software development environment that I have blogged on before.
On another level, I've always felt uncomfortable about their murder ballads; listening to Nick in this documentary put that into a different context.
Tuesday 26 August 2008
Cannot Parse System.Double.MaxValue
Surprisingly, it seems that System.Double does not parse its own maximum and minimum values. I've found that double.MaxValue cannot be parsed by double.Parse(double.MaxValue.ToString()), nor via the Convert method overloads. System.Double.NaN can be parsed though.
Here is a test fixture demonstrating the problem:
[TestFixture]
public class DoubleTests
{
    [Test]
    [ExpectedException(typeof(OverflowException))]
    public void MaxValue_ThrowsException_WhenParsing()
    {
        // double.MaxValue.ToString() rounds to 15 significant digits,
        // giving a value slightly greater than double.MaxValue.
        double readValue = double.Parse(double.MaxValue.ToString());
        Assert.AreEqual(double.MaxValue, readValue);
    }

    [Test]
    [ExpectedException(typeof(OverflowException))]
    public void MinValue_ThrowsException_WhenParsing()
    {
        double readValue = double.Parse(double.MinValue.ToString());
        Assert.AreEqual(double.MinValue, readValue);
    }

    [Test]
    public void NanValue_DoesNotThrow_WhenParsing()
    {
        double readValue = double.Parse(double.NaN.ToString());
        Assert.AreEqual(double.NaN, readValue);
    }
}
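A workaround worth noting (not part of the tests above) is the round-trip format specifier, which preserves enough digits for the value to survive the round trip:

double readValue = double.Parse(double.MaxValue.ToString("R")); // "R" = round-trip format
Assert.AreEqual(double.MaxValue, readValue);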
Saturday 16 August 2008
NUnitGridRunner - Grid Processing for NUnit
NUnitGridRunner works! NUnitGridRunner ran the NUnit based UATs (User Acceptance Tests) of a real world project, which usually take about 8 minutes, in about three minutes using three remote boxes (one of which is very very slow). Not the simplest tool to configure, but easy to use once going, and a big time saver. With a few more boxes, and improvements to NUnitGridRunner, I think we will reach our target of running all tests in one minute from any developer box. A real CI (Continuous Integration) enabler.
Friday 15 August 2008
Acceptance Test Driven Development
I've been using the acronym 'UAT' for User Acceptance Tests to describe automated tests that capture required application behaviour. But while terms like 'unit tests' are relatively well defined, 'UAT' is not a widely recognised acronym. Tonight, while updating NUnitGridRunner documentation, I came across a site describing Acceptance Test Driven Development with the acronym 'ATDD'. I find the diagram on this page compelling and I'm now thinking of hanging my hat on the ATDD acronym even though it is not a three letter acronym (TLA).
Check it out here. Although when reading substitute 'Agile' for 'XP'.
Sunday 10 August 2008
Snow On The Mountain
We had snow this morning. Great to see it during the day and on a Sunday when we are home. Last year it snowed during the night and it was a working day.
Made for a real nice day inside watching the snow falling and the snow gathering on the trees. Golly was perplexed. Sue was excited to see Betsy with a good snow covering.
Sunday 3 August 2008
NUnitGridRunner - Run NUnit Tests Distributed
I've spent the last couple of days trying to use Alchemi to run NUnit tests on a virtual, distributed, computer system. That is, use many idle computers to run the tests. For this I created the Google project NUnitGridRunner. But, despite early wins, the last day has been spent trying to figure out how to bypass Windows security. Oh the frustration!
The vision is that a developer, or build box, can ask a virtual computer comprising a grid of computers to run the tests. The grid computers are underused boxes and the grid threads only run in idle time, so the distributed load is kinda free. So, UATs (User Acceptance Tests) that would normally take 10-15 minutes could run in only 1 minute. A great $ saver for a development team.
I got the basics running no problem, but each time I extended the grid to other computers in my home network I kept hitting Windows security issues. If I try to run the nunit-console on the grid I get FileIOPermission exceptions. If I try running NUnit's more low level 'SimpleTestRunner' using a shared folder with the binaries I get a login failure.
I reckon somebody with more Windows security knowledge could fix this, but so far it has me stumped. I'm not giving up though; I've seen enough to see that this is a real goer.
A quote from the Alchemi documentation:
The idea of meta-computing - the use of a network of many independent computers as if they were one large parallel machine, or virtual supercomputer - is very compelling since it enables supercomputer-scale processing power to be had at a fraction of the cost of traditional supercomputers.
Think of long builds, think of slow UATs, think of your manager's box when he is at meetings!
Saturday 2 August 2008
A Cow At The End Of The Driveway
Friday 1 August 2008
Can Alchemi Turn Web Browser Boxes Into UAT Gold?
Today I stumbled across Alchemi, "a .net based Enterprise Grid System and Framework". Or, in other words, a distributed computing framework. I wonder if it can be used to run UATs (automated User Acceptance Tests) both from developer boxes and build boxes?
I'm thinking of an Alchemi application that is a test runner which uses distributed computing to run NUnit test fixtures. The documentation claims that the 'executors' which run on the remote boxes only run in idle time. So dedicated build farm boxes are not needed; user boxes can be used with (they claim) no effect on the box's use. Most boxes in the office have light use so, if that holds, 20 boxes could be available to act as a virtual build farm.
A picture is worth a thousand words. The attached images are copied from the Alchemi documentation.
I think it is worth a play. Nigel, Duncan, you thinking what I'm thinking? All those web browser boxes ... they have a purpose!
Hey ... what does your Manager's box do? Here is an opportunity to give it meaning in life.
UATs and CI Can Only Play A Fast Game Together
When developing using automated acceptance tests (we call them User Acceptance Tests 'UATs') and continuous integration (CI), the time taken to run the tests impacts directly on team performance. Or, to put it another way, a team process using both UATs and CI is not a viable team process if the UATs are slow.
UATs are inherently slow and teams always find that it is not long before their UATs run for more than 10 minutes. Too slow for me. CI is all about frequent repository commits, and in my case the commit rate can be every 15 minutes. So if I was to run all the tests prior to each commit I would be sitting, waiting, bored, for 30-50% of my time. If I just run all the unit tests and a selection of UATs (as I do) then I am somewhat 'leaning on the build box', and if a build failure is reported some 10 minutes later I'm already half way through my next chunk of work when I find my last commit had a fault. This means that I have to revert my code (lose the last 10 minutes work) so I can rapidly revert the build box, retrieve the broken revision, switch my thinking back to what I was doing, and fix it. In such a rapid CI environment I do not mind leaning on the build box so long as the box is fixed within 10 minutes. But either way, time is lost due to slow UATs.
The other, hidden, cost that arises from slow UATs is that it is not uncommon for team members, usually newer members, to find breaking the build too confronting/embarrassing. So they will run all the tests every time before committing, no matter how trivial the code change. On one hand a good attitude, I'm embarrassed too, but a bit of self exposure (trust in the team) can allow you to use the build box as a tool to speed up team development. In our team I notice that no build break during the day is a sign that the team has gone 'stale', that people are having trouble. If the UATs are a complete specification of customer requirements, nobody can totally avoid a failure without running all the tests (e.g. the customer wants the font to be 11pts). It is the same as reading the entire specification prior to each commit.
So speed matters. It directly affects team behaviour and time to delivery ($). With tests taking 10 minutes (say) I reckon this must equate to more than 25% of the team's time.
If your unit tests are running slower than 2000 per minute then either they are not unit tests or you should be humane and retire that box. UATs are another matter. They require discipline, skill, and grunt.
UATs are usually slow because:
- UATs simulate an end user running the application. So they are constantly starting and closing the application and manipulating the application via the UI (we use NUnitForms as the low level UI test framework). This requires CPU grunt and the loading of many assemblies. Hence it is not unusual for test cases to take 1 second each.
- Writing efficient UATs is not easy. It is a learnt skill. There must be a balance between truly testing the application from a user level (e.g. click here, click there), fast testing of a specific feature, and the independence of the test cases. For example, it is faster to test multiple features in one test case. Some 'cheats' can be used (see below).
- The skill is often in the order of feature implementation (stories) as some stories enable faster testing of other stories.
- The team may not appreciate (or care about) the impact that slow UATs have on their ability to deliver, and so not give the UATs the on-going attention they need. Consider this as UAT code health.
Symptoms to watch for:
- Test cases taking longer than 2 seconds.
- Tests have long setup time (> 1 second).
- Slowing down tests to avoid defects that only appear in 'faster than life' UATs. This is ignoring a code health issue.
- Unnecessarily complex setup. e.g. Needing to drag a control a long distance to test a transition near the far window edge. First implement a feature of the user positioning the control by X Y coordinates and then use this to position the control near the edge for the test.
- Hard coded 'blind' pauses or sleeps in the tests. e.g. 'Sleep(500)'. This is a real killer.
- Developers sitting with glazed eyes watching tests run.
- Developers who like sitting watching tests run. They probably use the time to web browse. But then this is another, bigger, problem. :-)
- Intermittent test failures. The UATs are telling you something. They are giving you an opportunity to fix design problems early. Pure gold.
Things that help:
- Inform your customer of the cost savings that can be achieved from feature implementation order (story order). For example: if the UI has an icon showing a file save in progress then this can be used by the UATs to know when a file save is complete, so wasteful 'blind' delays are not required.
- Team alignment/focus. Be aware of the true cost of slow UATs. Half a second across 240 tests is 2 minutes. If 4 developers lose 2 minutes, say, 4 times a day then that is a total of 2 x 4 x 4 = 32 developer minutes a day. So if you spend 1 hour saving half a second off each test, that will be paid back in just 2 days.
- Cheat, but cheat legally. For example, rather than starting the whole application (EXE) in another thread, instantiate the main class and call its execute method (see the sketch after this list). With good design you are only bypassing the windows main method handling. You might also preload and reuse some read-only configuration files. Can save a lot of time, but be careful. :-)
- Use the fastest computers available.
- Distributed processing. I've never seen this. It seems to me to be the Utopia. I wonder if products like Alchemi can be used to pass each test fixture out to a different (idle) box. If so it would seem to be viable for a project to keep the time to run tests under 1 minute. Hmmm ... another blog.
- Reduce the cost of entry for developers by developing a UAT framework of application specific test jigs rich with methods like 'WaitUntilXYZButtonEnables'. Fix once use many times.
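As a minimal sketch of the 'legal cheat' above (MainForm, its constructor argument, and preloadedConfiguration are hypothetical names of mine), the test fixture runs the application in-process rather than launching the EXE:

private MainForm mainForm;

[SetUp]
public void SetUp()
{
    // Bypass only Program.Main(); the UI is still driven as a user would.
    // Reusing preloaded read-only configuration saves per-test file IO.
    mainForm = new MainForm(preloadedConfiguration);
    mainForm.Show();
}

[TearDown]
public void TearDown()
{
    mainForm.Dispose();
}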
Wednesday 30 July 2008
Open Source Project Hosting
I've been oscillating the last year between my loyalty to SourceForge and Google Code. A couple of times I have created a Google Code project to move a SourceForge project across but each time I've stayed with SourceForge. But then, I now have a few new Google Code projects. So far, I do not see a clear winner, each has its advantages.
The pros and cons are:
Registering A Project
A Google Code project registration is instant, although I have had some trouble with full use of the Subversion repository until a few hours after the project registration. Except for that, the Google Code project is available immediately.
SourceForge however requires the project to be approved. For me this has taken anything from a couple of days to months. Usually three days, although once I needed to escalate it through support and it took a couple of months.
Wiki
SourceForge has outsourced their Wiki support to Wikispaces and the result is clearly superior. Good WYSIWYG, and the default layout creates a good looking site.
Google Code however provides much more flexibility. It seems that every page is editable as a wiki. Significantly, Google Code allows you to edit the project page as it is a wiki page. But ... Google Code formatting is very ordinary and, generally speaking, Google Code pages just do not look good.
Release Upload
SourceForge's release upload has long been somewhat 'Unix'. In other words, 'difficult'. Google release uploads are a breeze: just browse to the uploads page, click the link and browse to the file to upload. SourceForge has created an equivalent but you do need to go looking for it and it does feel 'tacked on'.
Sunday 27 July 2008
Last Time I Give Myself A Haircut?
Betsy Gets A Chrome Fuel Cap
Sue has had her Mini Cooper (Betsy) for a few months now and she still adores her. Lovely machine she is, and now she is a little bit more lovely with the proud addition of a chrome fuel cap.
I'm surprised, it does look good with the chrome side mirror, door handle, and now ... the cap.
Yea, I had to get the dogs in one of the photos. That is Violet at the back, and Golly's head & tail at the front. :-)
Thursday 24 July 2008
Resharper Plugin Test Fixture
I've been writing a Resharper plugin, and although it does seem much easier than a Visual Studio add-in, writing any Visual Studio plugin/add-in is just plain awkward. The cost of testing is so much higher when the target product is the same as your development tool. So it seemed that the best thing to do first was to automate the basic Resharper plugin validation so that the manual testing cost is avoided/minimised. It also has the advantage of documenting my understanding of the Resharper plug-in API.
To manually test I must shut down VS, copy the DLL to a deployment folder, possibly update a registry key, and then start up VS again. If things fail, Resharper/VS are not happy and all may need to be shut down, DLLs deleted, VS started again, and maybe even a little configuration recovery. Nothing too bad, but it is definitely a high cost manual test scenario.
The full tests are part of the 'SharpProbe' project. A code fragment is shown below.
[Test]
public void HasActionsRootElement()
{
    Assert.IsNotNull(xmlDocument.SelectSingleNode("actions"));
}

[Test]
public void AllInsertElementsHaveValidGroupIDs()
{
    Dictionary<string, int> validGroupIds = new Dictionary<string, int>();
    validGroupIds.Add("ReSharper", 0);
    validGroupIds.Add("VS#Code Window", 0);
    validGroupIds.Add("VS#Solution", 0);
    validGroupIds.Add("VS#Item", 0);
    validGroupIds.Add("VS#Project", 0);
    XmlNodeList nodes = xmlDocument.SelectNodes("actions/insert");
    foreach (XmlNode node in nodes)
    {
        string groupID = node.Attributes["group-id"].Value;
        Assert.IsTrue(validGroupIds.ContainsKey(groupID),
                      string.Format("XML insert element has unknown group-id '{0}'.", groupID));
    }
}

[Test]
public void AllInsertElementsOtherThanResharperElementHaveValidActionRefIDs()
{
    XmlNodeList nodes = xmlDocument.SelectNodes("actions/insert[@group-id!='ReSharper']");
    foreach (XmlNode node in nodes)
    {
        NodeHasValidActionId(node, "action-ref");
    }
}

[Test]
public void HasAtLeastOneInsertElement()
{
    Assert.IsTrue(xmlDocument.SelectNodes("actions/insert").Count > 0);
}

[Test]
public void HasResharperInsertElementThatHasValidActionIDsAndMenuText()
{
    XmlNodeList nodes = xmlDocument.SelectNodes("actions/insert[@group-id='ReSharper']");
    Assert.AreEqual(1, nodes.Count);
    NodeHasValidActionId(nodes[0], "action");
    XmlNode actionNode = nodes[0].SelectSingleNode("action");
    string text = actionNode.Attributes["text"].Value;
    Assert.IsTrue(text.Length > 0);
}
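The NodeHasValidActionId helper is not shown in the fragment above; a minimal sketch of what it might look like (my guess, not the actual SharpProbe code):

private void NodeHasValidActionId(XmlNode node, string childElementName)
{
    XmlNode child = node.SelectSingleNode(childElementName);
    Assert.IsNotNull(child, "Insert element has no child action element.");
    Assert.IsTrue(child.Attributes["id"].Value.Length > 0, "Action element has an empty id.");
}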
Monday 21 July 2008
Developing Resharper Plugins
I've been having a go at writing a Resharper plugin. As there does not seem to be much in the way of documentation I've created a page on my wiki to keep notes. You can find the page here.
Thursday 17 July 2008
The Project With Multiple Teams Conundrum
Several years ago I worked for Citect on its shrink wrapped product (also called Citect). The company had several software development teams working on Citect and, thinking back on it, I saw the teams as separate; each team was independent and had a clear team identity, even though we all worked on the one product. It demonstrates that 'team' is not necessarily tied to 'product' but to 'project', and a product can have multiple projects. In fact, I now think that the concept of a team is inherently linked to an individual project. One team, one project.
It seems that having multiple teams on the one project is a conundrum. If the teams lack identity they become one large team and the advantage of a team is lost. A shared project is a team inhibitor.
I feel that what the project manager sees as a single project must be broken down into separate projects that each team can own. A team needs something to own; it needs boundaries so it can celebrate success. The issue of collaboration between teams is another matter, for the "team of teams" team.
Agile developers will collaborate. Others will always find a way ...
NXmlSerializer Rev 3 Released
Uploaded the latest NXmlSerializer release to SourceForge tonight. This release reduces the size of the XML produced and supports serialized objects referencing objects that need to be replaced when deserializing. Project documentation is here.
Wednesday 9 July 2008
Agile - A Word To Empower Our Decisions
Words are powerful; they affect our perceptions and decisions. In 2001 seventeen leaders got together in Utah and emerged with the Agile Manifesto, which gave us the word/vocabulary of 'Agile' software development. A powerful concept given to, and now owned by, the software development community. With the high profile that eXtreme Programming enjoys, will 'Agile' become a byword for XP? XP stands in its own right; I hope that its popularity does not overshadow the concept that 'Agile' gives us.
It is difficult to implement the XP methodology and not be Agile, but is Agile just XP? If we use 'Agile' as a byword for XP are we losing the opportunities the language of the manifesto offered us? Will the seeds of other methodologies/processes be lost in the XP disco strobe lights?
XP may be Agile, but Agile is not XP. To confuse the two is to limit the possibilities of XP evolving, or of using other agile approaches like Crystal Clear. Is XP's popularity now in danger of blocking our ability to adapt and improve? Interesting, considering XP's retrospective process.
Men suppose their reason has command over their words; still it happens that words in return exercise authority on reason. —Francis Bacon.
How To Fail With Agile
This URL to "How To Fail With Agile" came to me from a post to the Melbourne XP Enthusiasts group. I feel the pain of recognition :-).
I'll ignore the blurring of the difference between agile and XP implied in the article.
http://www.nxtbook.com/nxtbooks
Tuesday 24 June 2008
Specific Data Types Avoid Primitives Confusion
An interesting confusion occurred today at work over the interpretation of a method parameter named something like 'widgetPercentage'. One developer took 5% as being 0.05 while the original author expected 5% to be 5.0. My first thought is that we are missing a 'Percentage' type to remove this ambiguity. But this would require a constructor that took a percentage as a string, like 'Percentage("5%")'. Seems fine, but that is both an example of my loathing of such strings and of the lack of use of strong types. I do 'feel' that a strong type could have avoided the problem here.
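As a minimal sketch of the idea (a hypothetical type of my own, not the code from work), named factory methods pin the meaning down at the call site and avoid the magic string constructor:

public struct Percentage
{
    private readonly double percentPoints; // 5% is stored as 5.0

    private Percentage(double percentPoints)
    {
        this.percentPoints = percentPoints;
    }

    // Named factory methods make the caller's intent explicit.
    public static Percentage FromPercentPoints(double points) // 5.0 => 5%
    {
        return new Percentage(points);
    }

    public static Percentage FromFraction(double fraction) // 0.05 => 5%
    {
        return new Percentage(fraction * 100.0);
    }

    public double PercentPoints { get { return percentPoints; } }
    public double Fraction { get { return percentPoints / 100.0; } }
}

A method taking a Percentage instead of a double cannot be misread, whichever convention the author had in mind.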
NCoverCop And Warm Fuzzy Green
Right now I'm coding at home on an open source project and it is surprising just how much I miss a build box running NCoverCop. Each time I want to add a feature I feel the old tug between TDD and hacking. It seems that this 'tug' is diminished at work, as NCoverCop makes sure that no new code goes in without tests; at home I still have the temptress devil on my shoulder.
'Interesting' how I find working with NCoverCop a relief rather than a burden. NCoverCop in fact provides the "warm and fuzzy" of a TDD green. It just feels good with it, and like driving without a seatbelt without it.
Friday 20 June 2008
Golley & Violet's New Kennel
When we moved into our home we inherited an old kennel for the dogs in their dog run. The dogs have loved it, but it finally fell apart. So I've built them a new one.
In the past our dogs have had individual kennels and never used them. The differences were:
- Lots of straw
- Large - 1.6 meters square
- Large front opening (three walls)
- Positioned so they could see the house and garden from it
The new one is bigger and the dogs love it. :-)
Tuesday 17 June 2008
Mocking Generic Methods with NMock2
Mocking generic methods is a bit of a problem using my preferred mocking framework NMock2 but Nigel has found a way. Check out his blog or download his NUnitExtensions project. Great time savers.
The NUnitExtensions project I have in my NoeticTools google project is inspired by Nigel's. I intend to delete mine and use Nigel's one.
Another good one Nigel.
Sunday 1 June 2008
Programmatic Pane Nesting Using DockPanel Suite
I've blogged before on DockPanel Suite and I'm using this framework in my VicFireReader project. In the last week I've had a need to programmatically split the right pane vertically so that I have a top right pane and a bottom right pane, as shown in the picture. The code fragment below shows how it was done. 'content3' is docked to the bottom right.
public MDIParent()
{
    InitializeComponent();

    DockPanel dockPanel = new DockPanel();
    dockPanel.Dock = DockStyle.Fill;
    dockPanel.BackColor = Color.Beige;
    Controls.Add(dockPanel);
    dockPanel.BringToFront();

    DockContent content1 = GetDockContentForm("Content 1", DockState.Document, Color.SteelBlue);
    content1.Show(dockPanel);

    DockContent content2 = GetDockContentForm("Content 2", DockState.DockRight, Color.DarkSeaGreen);
    content2.Show(dockPanel);

    // Show the third pane floating, then dock its pane to the right dock
    // window. This nests it below content2, giving top/bottom right panes.
    DockContent content3 = GetDockContentForm("Content 3", DockState.Float, Color.PaleGoldenrod);
    content3.Show(dockPanel);
    content3.DockHandler.FloatPane.DockTo(dockPanel.DockWindows[DockState.DockRight]);
}

private DockContent GetDockContentForm(string name, DockState showHint, Color backColour)
{
    DockContent content = new DockContent();
    content.Name = name;
    content.TabText = name;
    content.Text = name;
    content.ShowHint = showHint;
    content.BackColor = backColour;
    return content;
}
Tuesday 13 May 2008
eBay Doing Something About Retaliatory Feedback?
Today when placing some eBay feedback I got a notification from eBay that sellers can no longer give negative feedback to buyers. Great! A while back I blogged how a HK eBay seller used retaliatory feedback against me. I'm real pleased that eBay sees how retaliatory feedback had undermined the feedback system.
The customer is always right (I feel an existential eXtreme Programming moment coming on).
Sunday 11 May 2008
MSN USB Rocket Launcher
Some toys are just so cool! When the first USB rocket launchers came out I got one, and the first thing people asked was "does it come with a camera?". And now, yes, it does, at Think Geek.
I've added it to my wish list, my birthday is in August. :-)
Wednesday 7 May 2008
TortoiseSVN Revert Guide - Part 2
I thought it would be helpful to extend the guide in my prior post to show how to handle a typical build box break quickly. I've titled it 'TortoiseSVN' as that is the Subversion front end we use. It could equally be just a Subversion guide.
OpenOffice copy available here.
Tuesday 6 May 2008
TortoiseSVN Revert Guide
Revision control applications like Subversion (SVN), ClearCase, etc each seem to have their own terminology. I keep finding this very confusing for people learning how to revert Subversion changes using Tortoise. It is just a language thing, so I've made up a simple guide to show what the different reverts do.
OpenOffice copy available here.
Sunday 6 April 2008
Another Storm, Another Tree
Last week Melbourne had some wild weather. Of course we lost power, and the roads were blocked by fallen trees. By the end of the storm we had gained two trees from our neighbours (one in the photo).
A bad storm, but not the worst we have had. It was real comforting to know we can take a day or two without power, as we have set ourselves up well with a generator and we just expect tree damage.
The photo shows Golly down the back inspecting our new horizontal tree.
Wednesday 26 March 2008
HTML C# Code Fragments - Generator
I've found a great online tool to generate HTML for C# code fragments so I can put code fragments in posts on this blog. Check it out here.
Dependency Injection (IoC) & NDependencyInjection
In the last couple of months we have, at work, started using a new dependency injection (IoC) framework (NDependencyInjection) on a code base that had not been fully using IoC. As this is a more advanced pattern being retrospectively applied to an existing code base, it has been difficult to demonstrate the benefits. It had been a bit of a leap of faith. But in the last few days we have reached that 'critical mass' point where we are repeatedly finding that it has reduced our cost of change. Adding new features and refactoring has become easier. It is already saving us time ($$).
I'm finding that IoC simplifies:
- Object life cycle control.
- Object wiring.
- Object construction - Automates code generation for circular dependencies.
Before:
CfaRegionsChangedListenerConduit regionChangedConduit = new CfaRegionsChangedListenerConduit();
ICfaRegions cfaRegions = new CfaRegions(regionChangedConduit, persistenceService);

FormatterListenerConduit formatterListenerConduit = new FormatterListenerConduit();
IncidentGridViewCellFormatter incidentGridViewCellFormatter =
    new IncidentGridViewCellFormatter(cfaRegions, formatterListenerConduit);
regionChangedConduit.SetTarget(incidentGridViewCellFormatter);

IncidentsGridViewController incidentsGridViewController = new IncidentsGridViewController();
IncidentsGridView incidentsGridView =
    new IncidentsGridView(cfaDataSet, incidentsGridViewController, incidentGridViewCellFormatter);
incidentsGridViewController.Inject(incidentsGridView, new BrowserMapView());
formatterListenerConduit.SetTarget(incidentsGridViewController);

incidentsView = new IncidentsView(new RegionSelectionControl(cfaRegions), incidentsGridView);
hostServices.Show(incidentsView, DockState.Document);
After:
system.HasSubsystem(new IncidentsViewBuilder()).Provides<DockContent>();
hostServices.Show(system.Get<ContentForm>(), DockState.Document);
:
public class IncidentsViewBuilder : ISubsystemBuilder
{
    public void Build(ISystemDefinition system)
    {
        system.HasSingleton<CfaRegions>().Provides<ICfaRegions>();
        system.HasSingleton<IncidentGridViewCellFormatter>().Provides<IncidentGridViewCellFormatter>();
        system.HasSingleton<IncidentsGridViewController>().Provides<IncidentsGridViewController>();
        system.HasSingleton<IncidentsGridView>().Provides<IncidentsGridView>();
        system.HasSingleton<BrowserMapView>().Provides<BrowserMapView>();
        system.HasSingleton<RegionSelectionControl>().Provides<RegionSelectionControl>();
        system.HasSingleton<IncidentsView>().Provides<ContentForm>();
    }
}
Although a trivial example, I find the after code to be more readable. It has also separated the object wiring from the usage code. The wiring necessary to build a ContentForm object is hidden in the builder. This makes code reuse easier. It also abstracts the use of common objects (in this case the CfaDataSet and the PersistenceService). Traditionally this is done by using a factory, but that would require common objects to be passed explicitly down through a series of constructors/methods. This makes managing object life cycles much easier.
What I'm now finding is that I can add a parameter to a constructor and the application 'just works' without any other code changes as the IoC framework just finds the required objects using its wiring rules. No need to work out life cycles and walk up the ladder of factories. This means that the product's architecture is less likely to become corrupted as developers add more features. So it becomes an enabler for less skilled developers to work on the code.
One 'gotcha', and perhaps a future feature for NDependencyInjection, is that adding a new parameter to a constructor does mean that unit tests must be updated to provide the mocked object. Wouldn't it be great if NDependencyInjection could be set to a unit testing mode for a given type, so that when an instance of the type is requested all required types are generated as mocked objects? Perhaps a 'GetTestObject' method to complement the current 'Get' method? Further code generation automation ... write less code and reduce the risk of less skilled developers introducing integration tests disguised as unit tests.
NDependencyInjection is well worth the effort. Great work Nigel!
Thursday 13 March 2008
CFA Reader Rev:1.0.0 Available
CFA incidents RSS reader (CFA Reader) rev 1.0.0 is now available here.
Features:
- Region filtering
- Colour highlighting of incident relevance with emphasis on bushfire/wildfire incidents.
- Double clicking on an incident shows the general location in Google Maps.
- Total fire ban notifications.
- Real time 'no touch' incident updates.
Known limitations:
- Internet connection is not configurable. Will probably not work from behind a business firewall.
- Window positions are not saved to disk. The application's panels will be in the same default position each time the application is opened.
- No documentation/help.
Thursday 6 March 2008
If You Break The Build - Revert
Our team at Varian Australia decided in our last iteration to adopt the rule:
"If you break the build, revert your commit immediately."It has surprised me how successful this has been. We are using Subversion (SVN) and while I thought that I knew how to revert a commit I found I learnt so much more about the powerful features Subversion offers to revert a commit. More importantly I, and I think others, are now much more comfortable/confident on quickly reverting a commit. This is empowering, it lowers cost/inhibitors.
I learnt Subversion can revert the changes in a commit, even if it is not the most recent commit, without losing the changes ... quickly. TortoiseSVN offers options like "revert from ..." which allow a change set of just the changes in that commit to be reverted by a following commit of the change set. If using Continuous Integration (CI) it means that a revert is very low cost (little lost work).
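For anyone on the command line, the equivalent is a reverse merge; a minimal sketch (1234 is a placeholder revision number):

svn merge -c -1234 .
svn commit -m "Revert r1234, it broke the build."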
The team has found it very enabling. We can break the build, but if we do, the break is only for minutes. It does mean that we need a fast build.
I've worked in companies with slow builds (hours). This experience emphasises the need to always have a fast build. It is always possible; it is just a matter of finding how. The increased productivity of Continuous Integration is significant.
Unit Testing Internal C# Classes
I like to keep my production code and my unit testing code in separate assemblies. A downside of this has been that all classes under test must be public, but I have now found that C# does support 'friend' assemblies via an AssemblyInfo.cs attribute:
[assembly: InternalsVisibleTo("UnitTests")]
I have not used this attribute yet, but I like the idea of marking classes as internal. It makes the intent (usage scope) self evident. I wonder if it will help detect orphaned code?
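A minimal sketch of how it fits together (assembly and class names are hypothetical; note that if the production assembly is strong-named the attribute must also include the test assembly's full public key):

// In the production assembly's AssemblyInfo.cs:
using System.Runtime.CompilerServices;
[assembly: InternalsVisibleTo("UnitTests")]

// In the production assembly, the class no longer needs to be public:
internal class WidgetCalculator
{
    public double Scale(double value)
    {
        return value * 2.0;
    }
}

// In the UnitTests assembly, the internal class is now visible:
[TestFixture]
public class WidgetCalculatorTests
{
    [Test]
    public void Scale_DoublesTheValue()
    {
        Assert.AreEqual(4.0, new WidgetCalculator().Scale(2.0));
    }
}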
Wednesday 5 March 2008
It Takes GUTs To Succeed
Alistair Cockburn proposed the value of a TLA for Good Unit Tests in his blog, mentioned on InfoQ here. The idea is best put in the InfoQ article as:
He (Alistair) suggests that there is a shift in assertion by Bob on what makes a true professional. Though Bob starts with TDD, he seems to agree that to be a professional you need to have good unit tests.
Alistair believes that, to date, there has been no good term for "good unit tests," as there is for TDD. Had there been a term like 'GUTs' for good unit tests then people could brag about GUTs without implying whether they were written before or after the code.
Check out his blog entry 'The modern programming professional has GUTs'.
I could not resist the title :-)
Sunday 2 March 2008
VS Production/Test Macro Jumper
When using TDD it seems that repetitive patterns that we perform are:
- Create a test fixture for a class
- Create a test for a method
- Switch between the test fixture and the production code.
"while the job of software developers is to automate end user processes, it seems that developers are the last to automate their processes."So with this in mind I have written a Visual Studio macro to do one of the above, switch between a test fixture and the related production code class. Depending how this goes at work I may extend this to create the fixture and create template test cases for method. See how it goes.
Imports System
Imports EnvDTE
Imports EnvDTE80
Imports System.Diagnostics
Public Module Jumper
Sub BetweenProductionClassTestFixture()
'Copyright 2008 Robert Smyth'
''
'Jump between product class file and its test fixture'
'This macro assumes that all unit tests for classes'
'in a file are located in a file of the same name as'
'the file being tested (production code) with a'
'Tests suffix.'
''
'This is based on the common practice of one class per'
'file, the filename being the class name, and one test'
'fixture per class.'
''
'It also assumes:'
'- All test fixtures for an assembly are located in a'
' child folder called Tests.'
'- All test fixtures are located in a mirror folder'
' structure within the child folder called Tests.'
Dim fileNameExtension = System.IO.Path.GetExtension(Application.ActiveDocument.FullName)
Dim activeProject As Project = GetActiveSolutionProject()
Dim projectPath = System.IO.Path.GetDirectoryName(activeProject.FullName)
Dim currentFilePath = System.IO.Path.GetDirectoryName(Application.ActiveDocument.FullName)
Dim classRelativePath = Right(currentFilePath, Len(currentFilePath) - Len(projectPath))
Dim currentClassName
Dim newFilePath = ""
currentClassName = System.IO.Path.GetFileName(Application.ActiveDocument.FullName)
currentClassName = Left(currentClassName, Len(currentClassName) - Len(fileNameExtension))
If (Right(currentClassName, 5) = "Tests") Then
newFilePath = Left(currentClassName, Len(currentClassName) - Len("Tests")) + fileNameExtension
'Map the Tests mirror folder back to the production folder'
newFilePath = projectPath + Right(classRelativePath, Len(classRelativePath) - Len("\Tests")) + "\" + newFilePath
Else
newFilePath = currentClassName + "Tests" + fileNameExtension
newFilePath = projectPath + "\Tests" + classRelativePath + "\" + newFilePath
End If
If newFilePath <> "" Then
Application.Documents.Open(newFilePath)
End If
End Sub
Public Function GetActiveSolutionProject() As Project
' Returns the currently selected project to the caller.
Dim projs As System.Array
Dim proj As Project
projs = DTE.ActiveSolutionProjects
If projs.Length > 0 Then
proj = CType(projs.GetValue(0), EnvDTE.Project)
Return proj
End If
End Function
End Module
Thursday 28 February 2008
Team Build Box Pass Rate Metrics
In software development, metrics are always 'interpretable' and prone to what I refer to as (pardon me, but I cannot term it better) 'technical masturbation'. An interesting metric at work has been the build box % pass rate. I'm not sure what it tells us. Each team's culture differs in how it treats the build box. One team has a 30% pass rate while another has a pass rate around 80%. One uses CI and the other does not. So what does it tell us?
Members of a CI team may find it useful, and acceptable, to "lean on the build box" by not running all tests prior to committing. This can be productive if the tests take longer than 5% of the time between commits and build box breakages are fixed quickly (is 'quickly' relative to commit rate?). Can the gains be greater than the cost? Does the % pass rate metric reflect productivity? By 'productivity' I mean minimising time to profitable delivery.
Other teams may consider the build box pristine and not to be broken ever.
It occurs to me that perhaps the real issue is the time spent broken. In companies I've worked at, the build took a long time (e.g. hours), so the consequence of a build break was higher and the team usually aspired to a no-breakages policy. If you have such a slow build, fair enough. But a build time of hours is really a smell of tight coupling; I would eliminate the long build time first.
So, I wonder if a useful metric coming out of this is the % time the build box is broken rather than the build box pass rate against commits?
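To make that concrete, here is a sketch (the types are made up, not from any real build server API) of how a '% time broken' metric could be computed from a chronological list of build results:
using System;
using System.Collections.Generic;

class BuildResult
{
    public DateTime Time;
    public bool Passed;
}

static class BuildMetrics
{
    // Results must be in chronological order; returns the percentage of the
    // period during which the build box was in a broken state.
    public static double PercentTimeBroken(IList<BuildResult> results,
                                           DateTime periodStart, DateTime periodEnd)
    {
        TimeSpan broken = TimeSpan.Zero;
        DateTime? brokeAt = null;
        foreach (BuildResult result in results)
        {
            if (!result.Passed && brokeAt == null)
                brokeAt = result.Time;                 // the build just broke
            else if (result.Passed && brokeAt != null)
            {
                broken += result.Time - brokeAt.Value; // fixed again
                brokeAt = null;
            }
        }
        if (brokeAt != null)
            broken += periodEnd - brokeAt.Value;       // still broken at period end
        return 100.0 * broken.TotalMinutes / (periodEnd - periodStart).TotalMinutes;
    }
}
A team that breaks often but fixes within minutes would score well here, which feels closer to what we actually care about than pass rate against commits.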
Adoption of Agile Methods Survey
Interesting survey on community adoption/awareness of agile methods arrived in my mail tonight here. The number of participants is not that large so I'm unsure how to read it. But scroll down to the bottom of the page for other surveys.
Wednesday 27 February 2008
NDependencyInjection
NDependencyInjection is a new, very promising, IoC / dependency injection framework. The team I'm on is shifting from Ninject to NDependencyInjection. The most useful feature so far is its resolution of circular constructor references. Love it, and I know it is under active development. Expect great things.
Tuesday 26 February 2008
Prefactoring
Language is such an important thing in software development. Patterns are the classic example; they provide a vocabulary to communicate concepts, like 'a state pattern would be useful here'. Likewise we have adopted a vocabulary for agile style development: TDD, red-green-refactor, refactoring, and now (hopefully) 'prefactoring'. It is a term, invented by Nigel, that has become common in our team (Varian Australia).
The first time it 'jelled' with me was when tackling a class that was ... well ... hmm ... 'legacy' code. It was full of 'if' statements making the intent of the class difficult to see. We wanted to add a simple corner case to the functionality. Nigel suggested we prefactor the code so that it would become self evident where to make the change. He meant refactoring done prior to the change; to RED-GREEN-REFACTOR he added an initial PREFACTOR step. The feature seemed so simple I thought 'na ... I can find where to insert this little bit of code in the time it would take to refactor/prefactor it'. But half a day later the code was still a bucket of worms.
So, "delete plan A and insert plan B" we did then try Nigel's idea of prefactoring. Prefactoring means refactoring the code prior to making the change so that the code's purpose is self evident. Or, in other words, where to insert the change is self evident (I will avoid a discussion of open-close principle for the moment). So when I said 'we did then try' I was using the royal 'we'. In other words off Nigel went prefactoring/refactoring the code. After just 3 hours the code became clear like a ship emerging from a fog and the change needed was just easy.
So, if you are confronted with unintelligible code to maintain, prefactor. It saves time.
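A contrived sketch of the difference (made-up shipping rules, not our actual code):
class Order
{
    public double Weight;
    public bool IsLocal;
}

// Before prefactoring: the intent is buried in 'if' statements, so where a
// new corner case should go is anyone's guess.
decimal ShippingCost(Order order)
{
    if (order.Weight > 20 && !order.IsLocal) return 45;
    if (order.Weight > 20) return 35;
    if (!order.IsLocal) return 15;
    return 5;
}

// After prefactoring (behaviour unchanged): each rule now has a name, and a
// new corner case has an obvious home.
decimal ShippingCost(Order order)
{
    return BaseRate(order) + HeavyItemSurcharge(order);
}

decimal BaseRate(Order order)
{
    return order.IsLocal ? 5 : 15;
}

decimal HeavyItemSurcharge(Order order)
{
    return order.Weight > 20 ? 30 : 0;
}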
Should we now say PREFACTOR-RED-GREEN-REFACTOR? Na, that just means you did not refactor last time.
Go Nigel!
Sunday 24 February 2008
Mental Firewalls & Borg Group Think
While browsing some humour I came across a Flickr image with the title "Borg Warning". It had the following description:
"While well-behaved group minds no doubt are selective of who joins (it is after all rather intimate) and unlikely to assimilate everybody nearby, there might be applications or situations where mental firewalls are down and brains easily form group intellects. Maybe the people in the icon should all be raising their hands in the same way, but this is the clipart I found."
While written as humour about borg-like group think, I find it insightful into group/team behaviour, particularly the reference to intimacy and the term "mental firewall".
Just for fun, more on these "Warning Signs For Tomorrow" signs here.
Saturday 23 February 2008
Threaded Applications Do Become Unstitched
A design defect that I keep coming across is the overuse of threads in applications. The worst case was several years ago when I came across code that used threads as a type of queue. That application crashed when the number of threads reached a few thousand. More recently an application became much faster and more stable when the design was changed to remove all threads.
Using threads causes:
- Application speed problems due to inherent delays in inter-thread communications and the wait/sleeps that will creep in.
- A need for thread-safe objects.
- A more complex design, which lowers code health.
- Non-deterministic behaviour.
When the design is changed to remove the threads the design becomes simpler (and hence the code healthier). Of course the change is done to eliminate intermittent defects (product health) which do disappear when the threads are removed.
Threads are added to code either because:
- The future impact on the code health (the cost of that design) is not understood.
- The alternative design approach is not known.
- There is a fear that there will not be enough processing time to do what is required. You have no choice anyway, as you must do what is required because "it is required" :-). If you use a thread it will just take longer anyway.
If you have threads in your application, get rid of them. You are probably already having to update code to make it thread safe. If so, you are just adding complexity as a design defect deodorant. The increased complexity will lead to more defects ... sound familiar?
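As a sketch of the 'threads as a queue' smell and the thread-free alternative (the Job type is made up):
using System.Collections.Generic;
using System.Threading;

class Job
{
    public void Run() { /* the actual work */ }
}

// Before: one thread per job. Thousands of jobs means thousands of threads,
// thread-safe shared state, and non-deterministic ordering.
void RunAll(List<Job> jobs)
{
    foreach (Job job in jobs)
    {
        new Thread(job.Run).Start();
    }
}

// After: the queue is just a data structure. Jobs run deterministically,
// one at a time, and no locking is needed.
void RunAllQueued(List<Job> jobs)
{
    Queue<Job> pending = new Queue<Job>(jobs);
    while (pending.Count > 0)
    {
        pending.Dequeue().Run();
    }
}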
Friday 22 February 2008
Continuous Integration - More Build Lights
One of the great things we have done at work in recent months has been to raise build box state visibility by both sound (CruiseControl tray) and light (see my prior blog). The light has proved to be very effective.
I found some more interesting photos of what others are doing:
Thursday 21 February 2008
eXtreme Programming War Rooms
I came across this site with many great photos of XP team war rooms and their charts. Interesting to see what other teams' work spaces look like, how they organise their story boards, and the build box lights they use. One even has a set of traffic lights on the wall.
Check it out here: Room and Chart Gallery.
While I'm on it, some other big visible chart sites I would like to remember:
- Earned-value and burn charts (Alistair Cockburn)
- Big Visible Charts (Ron Jeffries)
Tuesday 19 February 2008
A Lovely Farewell To Edith Miller
Yesterday I was in Perth to attend the funeral of my mother's best friend of 40 years, Edith Miller. I hadn't seen Edith for some years but she was a lady of such presence, and I heard so much from my mother, that she just felt to me like part of our family. Strange how you see some people so rarely yet they feel part of your grain. It was a lovely funeral with a real sense of humanity. It is the first time I've felt a positive feeling at a funeral. Glad I was there.
Robin: Sorry for not visiting while I was in Perth. It just did not fit. See ya next time though!
NXmlSerializer Rev 2.0 Beta Released
I've today posted a Rev 2.0 release on the SourceForge project site. Much improved code but, more importantly, new features:
- Private field value serialization
- Parameterised constructor support (does not require default constructor)
- 'Just in time' type discovery
- Simplified API
Sunday 10 February 2008
NXmlSerializer updated
I've finally completed refactoring the NXmlSerializer code. It could almost be said to be a 'rewrite'. I now feel happy with the code. It now also, optionally, serializes classes by private fields. The idea is to serialize the class's internal state rather than its public properties.
I know the code is in successful use in a couple of commercial projects but it is still beta. I'm yet to update the SourceForge project documentation. Get the latest from source control; I will update the release binaries when I've updated the documentation. See how to use it by looking at the tests.
Known limitations:
- Does not serialize read only public properties with an object type.
- Cannot handle classes with read only fields.
- Field reading does not, yet, support XmlIgnore attribute.
- Cannot serialize classes that do not have a default constructor.
- Still needs upfront discovery of domain types. Yet to add more intelligent 'just in time' discovery. This will probably not be done until we see a speed issue.
Saturday 2 February 2008
A Pattern For Task Focus
Newcomers to an XP style of development, and especially completer finishers, often have trouble breaking down tasks. It is a much underrated skill.
The warning flag is when the discussion mentions classes outside of the class being worked on or, in other words, 'the big picture' keeps looming up to hide the work.
So here is a behaviour pattern I'm working on:
- State the story as multiple single sentences. These are the use cases that will be used for the automated user acceptance tests (UATs).
- Now break the story into developer tasks, each a simple sentence of the form: "When ..... (set/tell) .... to .....". The smaller the better; do not be shy to have many.
- Implement each task.
- Run the UATs ... gosh it works!
The anti-pattern here is hearing talk of 'how to do it' rather than what is needed. For example 'inject object X into Y'. This stuff is how, but it does not add functionality. Make the code tell you to do this.
Tuesday 29 January 2008
CFA Incidents RSS Reader
As I've not been able to find any RSS reader suitable for reading the CFA Incidents RSS feed, I've started writing my own. To get something going I have first created a simple reader that gives 'no touch updates' (incidents update without the need for any manual action) and incident relevance highlighting. Quite pleased with the initial outcomes (see picture).
Now I'm wondering if I can hook it up with google maps .... hmm.
Friday 25 January 2008
The 'In My Experience' Fallacy
How often have you heard 'in my experience' in a discussion of what is possible? I know I've said it. What I find surprising is how easily we discard, or block out, each others experience.
I have worked in an agile style (specifically XP) in a company that produces a "shrink wrapped" software product that goes out to a large, growing worldwide customer base and accounts for more than AU$40M of sales each year. So when I'm told "I can see how agile would work in a consulting company but not in a company producing a product going to many customers" and I point out that 'in my experience' I have worked successfully in an XP team in just such a company, my experience seems to be ignored. More to the point, it seems like I never said anything. I'm not considered a liar, but it is just as if my experiences did not exist. In one company one person came back with this same comment several times and seemed each time to have no knowledge of 'my experience'. It seems to me that if one person's experience is so far outside of our own, or threatens what we want to believe, then we may block it.
When we cannot respect each other's experience we are limiting our own potential experiences. But perhaps that is the point of the protective blind spot? I'm thinking of dropping the 'in my experience' next time for some other, less confrontational, approach.
P.S. I'm not suggesting that an agile approach suits all; it just suits me and is used here as an example.
Friday 18 January 2008
Ask Not What You Need To Code, But What The Code Needs Of You
A repeating theme at work recently has been developers with rising skills having difficulty working out 'where to put the code'. What I've noticed is that they are trying hard (perhaps too hard, I think) to 'understand the design to know where to put the code'. Often this becomes an inhibitor, kinda like not being able to see the forest for the trees :-).
It seems to me that this is a case of believing that a developer's work is to know where to put the code. This is being really hard on yourself; it means you reckon you need to be omniscient, that to add functionality you must know all. Kind of a god-of-the-code approach.
Alternatively, ask not what you need to code but what the code needs of you. Understand what functionality (not code) is required and then where this type of functionality belongs in the code. I like to put this as 'who cares?'. Then it is no longer a big problem requiring god-like knowledge, but a matter of adding code at a single identifiable point, hopefully a new class.
Life was supposed to be simple ... right?
CFA Incidents RSS Feed - Finding A Suitable Reader
Wednesday 16 January 2008
Zero Defect Software Development - A Positive Mind Model?
If you are in a team that practices zero defect development then I'm sure you are familiar with the discussions that zero defect development is not possible, is idealistic, or that 'we are special' so it does not suit us. While not proposing that it is for everyone or every situation, I've been perplexed as to why it is still considered 'impossible' when I'm in a team doing just that, I've worked in another company in a team that has done it, and I'm aware that there are many other teams doing it.
Today I was involved in such a discussion with a person from another team. It was a good discussion between intelligent, experienced people who could each put a good case for their position. Reflecting some time later, it occurred to me that the issue is that we hold different mind models of processes, mostly based on experiences, that in particular give very different meanings to what a 'defect' is. The difference is not one of severity or priority but of a fundamental definition that is consistent within each mind model (view) yet completely different between them.
When I talk about zero defect development my 'defects' are just as real but are very different to defects in a non-zero defect team. Take for example my work today (which I have mangled for commercial confidence reasons) ...
This week our team's customer asked us to implement a feature to read and write some very basic configuration data to a network device. This was our first story to talk to such a device so it did require us to create the structure used to communicate to devices over a network. So as to reduce the scope (get the story size down to a nice small chunk of commercial value) we asked the customer if we could, in this story, assume that the device is powered and available before we connect to it and there is no communication fault during the transactions. This eliminates all the error handling like:
- Cannot connect
- Not responding
- Returned error codes like invalid command etc
So is the product defective at this point? From my 'zero defect development' point of view ... no. It is not defective as:
- It has given the customer what he asked for.
- It has, due to collaboration, given the customer what he expected.
- It can be used, that is, we have added commercial value.
The important issue to me is that I feel good. I get positive feedback working this way as my customer is happy, the work added is visible, and the missing functionality is also visible and manageable as another story. Nothing is hidden; the process is positive.
If on the other hand we discovered that a previously implemented feature is now broken then we can still easily maintain zero defects by removing the feature. This may mean a zero or negative velocity, but it keeps true working functionality visible, developers are not accepting low quality, and project visibility (and hence predictability) stays high.
What I perceive as the 'traditional' approach is less collaborative. It asks developers to just do everything in one hit, and anything missed is logged as a defect. This gives negative feedback. It sets developers up for failure, and maintaining a defect list communicates that 'defects are normal, managed, and acceptable'. It is very hard to get predictable development as defects are wild cards, and it is hard to realise the benefits of higher quality development (avoiding the defects in the first place) as the process is giving negative feedback.
So zero defect development is possible by agreeing to reduced scope in advance and making what is missing more visible. Fewer surprises, smaller positive steps. In other words, zero defect development is possible by avoiding the risk of failure in the first place. It is collaborating to win.
Tuesday 15 January 2008
When Designing, Patterns Are Music To Our Ears
One member of our software team at work is a musician in a local band, and there was a discussion on 'how do you learn songs'. He explained that he only has to listen to a song a few times to get 'the groove'. When asked what 'the groove' was he explained that each song has a structure, at the highest level verse and chorus. In the details there are segments with guitar patterns. He explained that he recognises guitar techniques, so it all makes sense and he can perform the song, on guitar, as he knows the pattern of these recognisable sub-patterns (my words).
The discussion went on with other team members, with me listening, on how this is similar to software patterns, both those we know as 'patterns' and lower level patterns in general. We reflected on how, when we come to code some functionality, we do not need to think of all the lines, the details, as we can think 'yea, we iterate through the collection and do XYZ on 'em'. We recognise this pattern and when we need to type we do a 'foreach' etc. The end result is we can discuss design at a higher level without the need for 'then we do a foreach, and then ....'.
It made me think of (many years ago) when I was practising Morse Code for my Amateur radio licence (ham radio). We had two levels of licence, a 'Novice Licence' and a 'Full Call'. The Novice licence exam included morse at a rather slow rate (from memory: 6 words per minute), and the Full Call exam included morse at (from memory) 16 words per minute. The interesting thing is that the faster test was easier, because with the slow machine-generated morse you needed to actually hear the number of dots and dashes, while with the faster morse you could hear the 'song'. For example 'dot dot dot dot' sounds like 'diddledee'. You do not listen for dots and dashes but for sound patterns.
So learning a song is helped by the expertise of learned patterns. Same for morse and software design. As another team member said today 'Not surprising given the brain's pattern recognition ability'.
So is a cornerstone of training the learning of patterns, small and great?
Saturday 5 January 2008
CFA Incidents RSS Feed - Is There A Suitable Reader?
During summer I like to keep track of CFA incidents, especially on high fire danger days. This is part of our bushfire plan; a couple of years ago I wrote a simple web scraper to notify me at work of any bushfires in our area. In the last week I've noticed that the Current Incidents page has an RSS feed. Great!
But then I started the search for a suitable RSS reader. This application is a little different from most RSS applications in that it is real time (we need notification within minutes) and only a portion of the incidents are of interest. The feed, and the web page, list incidents for all of Victoria. So for it to be of any real use I need the feed filtered at least by our local region. Incidents hundreds of kilometres away would flood any alarm system I may use.
So what I think I need is a reader that will:
- Check for updates every 1 or 2 minutes.
- Filter incidents by region, town, and type.
- Email filtered incidents.
I'm still looking. My notes on readers I'm testing are kept here.
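For what it is worth, a minimal sketch of the kind of reader I have in mind (the feed URL, region keywords, and writing to the console instead of emailing are all illustrative):
using System;
using System.Net;
using System.Threading;
using System.Xml;

class IncidentWatcher
{
    static void Main()
    {
        // Made-up filter list; the real thing would come from configuration.
        string[] regionKeywords = { "YARRA RANGES", "DANDENONG" };
        while (true)
        {
            string rss = new WebClient().DownloadString(
                "http://www.example.org/cfa-incidents.rss"); // illustrative URL
            XmlDocument doc = new XmlDocument();
            doc.LoadXml(rss);
            foreach (XmlNode item in doc.SelectNodes("//item"))
            {
                string title = item.SelectSingleNode("title").InnerText;
                foreach (string keyword in regionKeywords)
                {
                    if (title.ToUpper().Contains(keyword))
                        Console.WriteLine("ALERT: " + title); // real version would email
                }
            }
            Thread.Sleep(TimeSpan.FromMinutes(2)); // poll every 2 minutes
        }
    }
}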