Rob Smyth

Wednesday 24 June 2009

Bushfire Bunkers #2

We are actively considering a bunker as a last resort option for our home, but I remain worried about them. Tonight I came across the following from CSIRO's page 'Q&A: Victorian bushfires':

What research has CSIRO done on fire bunkers?

CSIRO is not currently conducting research into bushfire bunkers or shelters. Previous research by the Department of Defence indicated that underground bunkers may not be safe in bushfires due to the accumulation of toxic gases coming from a bushfire itself.


As well as the technical issues, there are a range of other considerations including:

  • decision making processes and education around when to retreat to the bunker
  • when to close off a bunker
  • how long to remain in the bunker
  • how to determine when it is safe to exit the bunker.


It is a worry.

Thursday 18 June 2009

Bushfire Bunkers

I've not been a fan of bushfire bunkers. I think that if you cannot defend your home you should not be there (leave early, meaning before 10AM, before there is a threat). But, living in a bushfire risk area, and given recent credible information, we are now looking at a bunker as a backup / last option. There are a few bunker options on the market, but a couple that I find 'interesting' have emerged (I'm not an expert, this is just my opinion).

Interesting options (with very different pricing) are:

* Wildfire Safety Bunkers
* MineArc Bush-Fire Chambers

(Wildfire Safety Bunker image shown)

A bunker approach does worry me. See my notes here.

Gosh You Would Hope It Is Worth It

In order to sell a safety product it must surely inspire confidence. But does it also need to inspire fashion and good looks?

The image of an escape mask makes me wonder about the old saying ...
"I would not be caught dead ..."

Friday 12 June 2009

Bushfire House Survival Meter

This week I purchased a 'House Survival Meter' produced by CSIRO. It is really cheap (less than $20) and a bit of an eye opener. If this is of any interest to you, just phone the company given on the site and you will have it tomorrow or the next day. Just do it!

I wonder, if one was sent to every home in 'rural' and 'bushland residential' Australia, whether it would raise community awareness of risk. I reckon it would.

BTW, the meter gives our home a 30% probability of survival if attended and has changed my plans. Hmmm ...

Thursday 11 June 2009

Componentalization Can Be An Enabler For Automated Testing

While working on componentalizing a project (think Domain Driven Design) I have become aware that componentalization may reduce reliance on User Acceptance Tests (UATs) of the application, replacing them with something simpler and lighter, kinda like integration tests. The thing is that these tests are not quite UATs nor integration tests, so I've found I need another name to enable effective focus and conversation. The name I'm currently using is 'Component Tests'.

Here is the thing ... you could say that the user of a component, via its API, is a different user. In this case a developer or team. So, component tests are UATs. Sure, but this language loses the focus that UATs have given us of creating automated tests as close to the real thing (mouse clicking a button) as possible. The thing is that this makes UATs inherently slow and makes it difficult to test all the corner cases. For example, what if the USB memory stick fails after the user clicks the 'Do It' button but before the save is completed? Hard to test from the UI level. But if the UI code is moved to its own component and the 'do it' logic into another domain component, then this testing is done on the domain component via its API. Not hard.

So component tests differ in difficulty and speed, and only the UATs actually test the final functionality. Whenever developers write tests at code level, the question is 'how can I know this is the functionality used in the product?'. By componentalizing the product's code this risk is greatly mitigated, giving the end customer confidence. He/she can believe that the component tests do test the functionality used in the product, as the component is clearly identified. A user database component is more likely to provide the functionality of adding a user than the product's help framework component.

Critically, a component must provide encapsulation, typically via an API. The API is the line of demarcation distinguishing component tests from integration or unit tests.
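The USB-stick example above can be sketched as a component test. This is a minimal illustration, not real product code: all the names (DocumentStore, FailingStorage, StorageError) are hypothetical, and the point is only that a fault which is near impossible to trigger through the UI is trivial to simulate at the domain component's API.

```python
class StorageError(Exception):
    """Raised by the storage device when a write fails."""
    pass


class FailingStorage:
    """Fake USB stick: accepts the first write, then 'gets pulled out'."""
    def __init__(self):
        self.writes = 0

    def write(self, data):
        self.writes += 1
        if self.writes > 1:
            raise StorageError("device removed mid-save")


class DocumentStore:
    """Hypothetical domain component. Its public API is save();
    the component test talks only to that API."""
    def __init__(self, storage):
        self._storage = storage

    def save(self, document):
        try:
            self._storage.write("HEADER")
            self._storage.write(document)
            return True
        except StorageError:
            return False  # report the failure rather than crash


def test_save_reports_failure_when_device_fails_mid_save():
    store = DocumentStore(FailingStorage())
    assert store.save("my document") is False
```

The corner case is exercised by swapping in a fake storage device behind the API, something a mouse-clicking UAT could only do with real hardware and luck.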

The potential advantages are:
  • Simpler to test corner cases, more unit-test like. An enabler for better tests.
  • Tests run faster. Essential for Continuous Integration (CI) and a cost saving for any project.
  • Componentalization means fast developer and build box compile times.

Wednesday 3 June 2009

Developer Continuous Integration Visibility

I wonder if it would be useful to measure individual developer commit rate (say over the last day) and display this on a screen alongside a team's story board, or in its area. This metric gives an indication of whether a developer is in trouble and has gone 'stale'. Happens to me some days :-).

Of course, all days are not equal and some days can be dominated by non-coding work. Hence the reason for placing the monitor next to the story board, so it becomes useful information during stand-up (if the team does that kind of stuff). A team member can then report ... "yesterday I was distracted by administrative tasks, so you can see my commit rate was low". Great to raise the visibility of this kind of stuff.

Perhaps the screen could look something like this ... (mock-up image shown)

In a team using CI I often find a drop in commit rate is an early indicator that somebody needs help (sometimes me!).
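The data feed for such a monitor could be as simple as counting commits per author. A minimal sketch, assuming the input is the output of something like `git log --since=1.day --format=%an` (one author name per line); the threshold and function names are my own invention, not any CI tool's API:

```python
from collections import Counter


def commits_per_author(log_lines):
    """Count commits per author from `git log --format=%an` output."""
    return Counter(line.strip() for line in log_lines if line.strip())


def stale_authors(counts, team, threshold=1):
    """Team members whose commit count over the period fell below
    the threshold -- candidates for a 'needs help?' flag on the board."""
    return sorted(name for name in team if counts.get(name, 0) < threshold)


# Example: two commits from Ann, one from Cy, none from Bob.
counts = commits_per_author(["Ann", "Ann", "Cy"])
print(stale_authors(counts, ["Ann", "Bob", "Cy"]))  # -> ['Bob']
```

The point is the visibility, not the mechanism: a screen that shows Bob at zero commits invites "anything I can help with?" at stand-up rather than a blame game.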

Cyclomatic Complexity To Monitor Unit Test Coverage

NCoverCop has proven to be a great tool that I like a lot. It fails the build if code coverage falls. But people always trump process, and there is always the temptation to write test cases for code coverage rather than to test the code. All too easy. I wonder if a better approach would be to measure the project's cyclomatic complexity (CC) against the number of test cases.

If the code's CC rises, then the number of unit tests is expected to rise by the same amount. The benefit is that this approach puts more focus on testing functionality rather than on ensuring coverage. It does not ensure that the tests are complete, but perhaps it is an improvement.

It is interesting to note that NCover version 3 offers CC metrics. The only problem is that NCover can be set to fail the build if a metric falls below a threshold, but it does not, of its own accord, detect falls from the last value. Besides, if code is deleted, it is acceptable for the number of test cases to fall by the same difference as the CC.

Here the metric that matters, and should fall with time, is the difference between the test case count and the CC.
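The check described above can be sketched in a few lines. This is an illustration of the idea only: the CC total and test count are assumed inputs (e.g. scraped from NCover output and the test runner), and the function names are mine.

```python
def coverage_gap(total_cc, test_count):
    """The metric that matters: how far the test case count
    lags the project's total cyclomatic complexity."""
    return total_cc - test_count


def build_fails(total_cc, test_count, previous_gap):
    """Fail the build only if the gap widened since last time.
    Deleting code (CC falls, tests fall with it) leaves the gap
    unchanged and is fine -- unlike a simple threshold check."""
    return coverage_gap(total_cc, test_count) > previous_gap


# Code and tests both grow: gap unchanged, build passes.
assert not build_fails(total_cc=120, test_count=100, previous_gap=20)
# Complexity grows without new tests: gap widens, build fails.
assert build_fails(total_cc=130, test_count=100, previous_gap=20)
```

Storing the last gap (rather than an absolute threshold) is what makes the check tolerate deleted code while still catching new complexity that arrives untested.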