Rob Smyth


Tuesday, 27 July 2010

Agile Software Testing 101: Stress Testing

So you're a software tester and you have received a new build with a new feature to test. How do you stress test it? Not difficult, you just need to know the pattern of "Stress Testing" to find those weak points. Here is a simple agile "how to".

Agile principles can be applied to testing. When they are, they leverage the tester's skills together with the developer's skills.

Steps:
  1. Being agile (as we all are) you let the developer responsible know that you are about to test the new feature.
  2. As agile means being collocated, you make sure you are near the developer and he/she can see your screen. Think of this as limited-pair testing perhaps, an agile balance between full pairing and letting the developer get on with his job.
  3. To enhance your pairing, make sure you have a mirror so you can see the developer. Collaboration is critical to stress testing.
  4. Now go to the UI page that uses the feature and sweep your mouse pointer over the page while watching the developer. When you see signs of stress ... click.
Stress testing saves time and leverages agile principles of, well, whatever your company says is agile today.

Saturday, 3 April 2010

Fault Tolerant Automated Functional Tests Oxymoron

Microsoft seems to be pushing making coded UI functional tests fault tolerant by using multiple methods of 'finding' controls on a page. If a control cannot be found because its text has changed, the test tries another approach. I reckon this is more likely to cause problems than it solves. At best it is unnecessary, and at worst it will allow tests to pass when they should fail. Like overuse of null reference guards in code, it hides defects.

Microsoft's automation tools generate code that uses multiple approaches (fault tolerant) to find controls. I'm also seeing examples of this approach in VS2010 documentation/tutorials.

e.g. 4 minutes into: Introduction to Creating Coded UI Tests with Visual Studio 2010.

I do not understand the need nor the intent. On the 'need' level, it implies that there is no reliable method for finding a control, although each control has a name or AutomationId that is independent of location, inner text, colour, and visibility. On the intent level, if the control changes so that you cannot find it ... well ... I would rather the test failed.

I use test jigs and testers (thanks Nigel ... a pattern that should be documented) to access UI controls. Typically a page has a test jig, and a property on the test jig provides access to a control's tester or test jig. So I have one place in which I define the control's id (e.g. AutomationId), and if it changes I just change one line.
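A sketch of the pattern (all names here are invented; the tester types are whatever your UI test framework provides):

// Hypothetical test jig for a 'user details' page.
public class UserDetailsPageJig
{
    private readonly WindowTester window;

    public UserDetailsPageJig(WindowTester window)
    {
        this.window = window;
    }

    // The control's AutomationId appears in exactly one place. If
    // the id changes, this is the one line to edit.
    public ButtonTester SaveButton
    {
        get { return window.FindButton("UserDetails.SaveButton"); }
    }

    public TextBoxTester NameField
    {
        get { return window.FindTextBox("UserDetails.NameField"); }
    }
}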

Fault tolerant test code is an oxymoron.

Perhaps the next step is to make tests intermittent failure tolerant by retry on fail. :-)

Monday, 19 October 2009

NMate - Missing In Action?

Anybody know where to download NMate? There seem to be plenty of download links but none of them work. It sounds like a useful NUnit application.

Thursday, 11 June 2009

Componentalization Can Be An Enabler For Automated Testing

While working on componentalizing a project (think Domain Driven Design) I have become aware that componentalization may reduce reliance on User Acceptance Tests (UATs) of the application in favour of something simpler and lighter, kinda like integration tests. Thing is, these tests are not quite UATs nor integration tests, so I've found I need another name to enable effective focus and conversation. The name I'm currently using is 'Component Tests'.

Here is the thing ... you could say that the user of a component, via its API, is a different user, in this case a developer or team. So component tests are UATs. Sure, but this language loses the focus that UATs have given us of creating automated tests as close to the real thing (mouse clicking a button) as possible. Thing is, that focus makes UATs inherently slow and makes it difficult to test all the corner cases. For example, what if the USB memory stick fails after the user clicks the 'Do It' button but before the save is completed? Hard from a UI level. But if the UI code is moved to its own component and the 'do it' logic into its own domain component, then this testing is done on the domain component via its API. Not hard.
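A sketch of what I mean, using NUnit and NMock2 (DocumentSaver, SaveResult, and IFileStore are invented names standing in for the domain component and its API):

using System.IO;
using NMock2;
using NUnit.Framework;

// The domain component's storage dependency (invented for illustration).
public interface IFileStore
{
    void Write(string fileName, byte[] content);
}

[TestFixture]
public class DocumentSaverComponentTests
{
    // The 'USB stick dies mid-save' corner case, tested via the
    // component's API rather than through the UI.
    [Test]
    public void SaveReportsFailureWhenStorageFailsMidSave()
    {
        Mockery mocks = new Mockery();
        IFileStore store = (IFileStore)mocks.NewMock(typeof(IFileStore));
        Stub.On(store).Method("Write").Will(Throw.Exception(new IOException("device removed")));

        DocumentSaver saver = new DocumentSaver(store); // the component under test
        SaveResult result = saver.Save("report.doc");

        Assert.IsFalse(result.Succeeded);
    }
}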

So component tests differ in difficulty and speed, and only the UATs actually test the final functionality. Whenever developers use tests at code level the question is 'how can I know this is the functionality used in the product?'. By componentalizing the product's code this risk is greatly mitigated, giving the end customer confidence. He/she can believe that the component tests do test the functionality used in the product, as the component is clearly identified. A user database component is more likely to provide the functionality of adding a user than the product help framework component is.

Critically, a component must provide encapsulation, typically via an API. The API is the line of demarcation distinguishing component tests from integration or unit tests.

The potential advantages are:
  • Simpler to test corner cases, more unit test like. An enabler for better tests.
  • Tests run faster. Essential for Continuous Integration (CI) and a cost saving for any project.
  • Componentalization means fast developer and build box compile times.

Wednesday, 3 June 2009

Developer Continuous Integration Visibility

I wonder if it would be useful to measure each developer's commit rate (say over the last day) and display it on a screen alongside the team's story board, or in the team's area. This metric gives an indication that a developer is in trouble and has gone 'stale'. Happens to me some days :-).

Of course all days are not equal and some days can be dominated by non-coding work. Hence the reason for placing the monitor next to the story board, so it becomes useful information during stand-up (if the team does that kind of stuff). A team member can then report ... "yesterday I was distracted by administrative tasks so you can see my commit rate was low". Great to raise the visibility of this kind of stuff.

Perhaps the screen could look something like this ...
In a team using CI I often find a drop in commit rate is an early indicator that somebody needs help (sometimes me!).
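As a rough sketch, yesterday's commit counts could be pulled straight from the version control log (Subversion here; the repository URL is invented):

using System;
using System.Diagnostics;
using System.Linq;
using System.Xml.Linq;

class CommitRateMonitor
{
    static void Main()
    {
        // Ask Subversion for yesterday's log as XML.
        string range = "{" + DateTime.Today.AddDays(-1).ToString("yyyy-MM-dd") + "}:{" +
                       DateTime.Today.ToString("yyyy-MM-dd") + "}";
        ProcessStartInfo info = new ProcessStartInfo(
            "svn", "log --xml -r " + range + " http://server/svn/trunk");
        info.RedirectStandardOutput = true;
        info.UseShellExecute = false;

        Process svn = Process.Start(info);
        XDocument log = XDocument.Load(svn.StandardOutput);

        // Count commits per developer for the story board display.
        foreach (var author in log.Descendants("logentry").GroupBy(e => (string)e.Element("author")))
        {
            Console.WriteLine("{0}: {1} commits", author.Key, author.Count());
        }
    }
}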

Cyclomatic Complexity To Monitor Unit Test Coverage

NCoverCop has proven to be a great tool that I like a lot. It fails the build if code coverage falls. But people always trump process, and there is always the temptation to write test cases for code coverage rather than to test the code. All too easy. I wonder if a better approach would be to measure project cyclomatic complexity (CC) against the number of test cases.

If the code's CC rises then the number of unit tests is expected to rise by the same amount. The benefit is that this approach puts more focus on testing functionality rather than on ensuring coverage. It does not ensure that the tests are complete, but perhaps it is an improvement.

It is interesting to note that NCover version 3 offers CC metrics. The only problem is that NCover can be set to fail the build if a metric falls below a threshold, but it does not, of its own accord, detect falls from the last value. Besides, if code is deleted, it is acceptable for the number of test cases to fall by the same difference as the CC.

Here the metric that matters, and should fall with time, is the difference between the test case count and the CC.
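A sketch of the check I have in mind (the numbers and their sources are illustrative; the CC would come from a metrics tool and the test case count from the test runner):

class TestDebtCheck
{
    static int Main()
    {
        int cyclomaticComplexity = 950; // from your metrics tool
        int testCaseCount = 840;        // from your test runner
        int lastRecordedDebt = 120;     // stored by the previous build

        // The metric that matters: complexity not matched by a test case.
        int debt = cyclomaticComplexity - testCaseCount;

        if (debt > lastRecordedDebt)
        {
            System.Console.Error.WriteLine("Test debt rose from {0} to {1}.",
                                           lastRecordedDebt, debt);
            return 1; // a non-zero exit code fails the build
        }
        return 0;
    }
}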

Friday, 17 April 2009

NCover Scalability

We seem to have hit a glass ceiling with NCover in our build process. We run NCover on all builds to ensure that coverage does not fall (thanks to NCoverCop). In the last couple of days we have started to get out-of-memory exceptions. It seems that the process of generating a coverage report just takes too much memory.

The project is significant but not large (yet) so this was a surprise. The solution seems to be to break up the coverage test into multiple build tasks.

Monday, 10 November 2008

The New UAT Framework Contender - White

White is a new automated UAT (User Acceptance Testing) framework for Win32/WinForms/WPF applications. Perhaps a replacement for NUnitForms.

NUnitForms is a framework that layers over NUnit to enable run time application/user acceptance testing. e.g. Click button X and then expect dialog Y. It is not a unit testing framework.

I'm a developer on the NUnitForms team, and a few weeks ago I took on the job of doing a new, much needed, NUnitForms release. But since then I've learnt about White and wonder if White is the future. I know one friend in Melbourne who is using it for UATs on a commercial project and the feedback, so far, is positive.
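From what I've seen, a White UAT reads something like this (a sketch based on White's examples as I recall them; the application, window title, and control ids are invented):

using NUnit.Framework;
using White.Core;
using White.Core.UIItems;
using White.Core.UIItems.WindowItems;

[TestFixture]
public class SaveDialogUatTests
{
    [Test]
    public void ClickingSaveShowsTheSaveDialog()
    {
        Application application = Application.Launch("MyApp.exe");
        try
        {
            Window window = application.GetWindow("My App");
            window.Get<Button>("saveButton").Click();

            Window dialog = application.GetWindow("Save As");
            Assert.IsNotNull(dialog);
        }
        finally
        {
            application.Kill();
        }
    }
}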

I'm currently using NUnitForms. It has Vista compatibility issues that I think can be overcome without too much effort, but WPF provides a brave new world. The crunch will come in the next few months; I would like to see if we can change our underlying UAT framework from NUnitForms to White. If nothing else, it would confirm how well we designed our own test jig / framework.

Get White here.

Get NUnitForms here (but download and compile, the last release is a bit old).

Saturday, 16 August 2008

NUnitGridRunner - Grid Processing for NUnit

NUnitGridRunner works! NUnitGridRunner ran the NUnit based UATs (User Acceptance Tests) of a real world project, which usually take about 8 minutes, in about three minutes using three remote boxes (one of which is very, very slow). It is not the simplest tool to configure, but it is easy to use once going and a big time saver. With a few more boxes, and improvements to NUnitGridRunner, I think we will reach our target of running all tests in one minute from any developer box. A real CI (Continuous Integration) enabler.

Friday, 15 August 2008

Acceptance Test Driven Development

I've been using the acronym 'UAT', for User Acceptance Tests, to describe automated tests that capture required application behaviour. But while terms like 'unit tests' are relatively well defined, UAT is not a widely recognised acronym. Tonight, while updating the NUnitGridRunner documentation, I came across a site describing Acceptance Test Driven Development with the acronym 'ATDD'. I find the diagram on that page compelling and I'm now thinking of hanging my hat on the ATDD acronym even though it is not a three letter acronym (TLA).

Check it out here. Although when reading substitute 'Agile' for 'XP'.

Sunday, 3 August 2008

NUnitGridRunner - Run NUnit Tests Distributed

I've spent the last couple of days trying to use Alchemi to run NUnit tests on a virtual, distributed, computer system. That is, use many idle computers to run the tests. For this I created the Google project NUnitGridRunner. But, despite early wins, the last day has been spent trying to figure out how to bypass Windows security. Oh the frustration!

The vision is that a developer, or build box, can ask a virtual computer comprising a grid of computers to run the tests. The grid computers are underused boxes and the grid threads only run in idle time, so the distributed load is kinda free. So UATs (User Acceptance Tests) that would normally take 10-15 minutes could run in only 1 minute. A great $ saver for a development team.

I got the basics running no problem, but each time I extended the grid to other computers in my home network I kept hitting Windows security issues. If I try to run the nunit-console on the grid I get FileIOPermission exceptions. If I try running NUnit's more low level 'SimpleTestRunner' using a shared folder with the binaries, I get a login failure.

I reckon somebody with more Windows security knowledge could fix this, but so far it has me stumped. I'm not giving up though; I've seen enough to see that this is a real goer.

A quote from the Alchemi documentation:
The idea of meta-computing - the use of a network of many independent computers as if they were one large parallel machine, or virtual supercomputer - is very compelling since it enables supercomputer-scale processing power to be had at a fraction of the cost of traditional supercomputers.
Think of long builds, think of slow UATs, think of your manager's box when he is at meetings!

Friday, 1 August 2008

Can Alchemi Turn Web Browser Boxes Into UAT Gold?

Today I stumbled across Alchemi, "a .net based Enterprise Grid System and Framework". Or, in other words, a distributed computing framework. I wonder if it can be used to run UATs (automated User Acceptance Tests) both from developer boxes and build boxes?

I'm thinking of an Alchemi application that is a test runner using distributed computing to run NUnit test fixtures. The documentation claims that the 'executors' which run on the remote boxes only run in idle time. So dedicated build farm boxes are not needed; user boxes can be used with (they claim) no effect on the box's normal use. Most boxes in the office have light use so, if so, 20 boxes could be available to act as a virtual build farm.
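If the pieces fit, the grid thread might look vaguely like this. A sketch only: the Alchemi class names are from its samples as best I recall, the NUnit calls are the 2.4-era runner API, and none of it has been tried on a grid yet.

using Alchemi.Core.Owner; // Alchemi's 'owner' API (names approximate)
using NUnit.Core;         // NUnit 2.4-era runner API

// A grid thread that runs one NUnit test assembly on a remote executor.
[System.Serializable]
public class FixtureRunnerThread : GThread
{
    private readonly string assemblyPath;
    public bool Passed;

    public FixtureRunnerThread(string assemblyPath)
    {
        this.assemblyPath = assemblyPath;
    }

    // This method runs on the remote (idle) box.
    public override void Start()
    {
        SimpleTestRunner runner = new SimpleTestRunner();
        runner.Load(new TestPackage(assemblyPath));
        TestResult result = runner.Run(new NullListener());
        Passed = result.IsSuccess;
    }
}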

A picture is worth a thousand words. The attached images are copied from the Alchemi documentation.

I think it is worth a play. Nigel, Duncan, you thinking what I'm thinking? All those web browser boxes ... they have a purpose!

Hey ... what does your Manager's box do? Here is an opportunity to give it meaning in life.

UATs and CI Can Only Play A Fast Game Together

When developing using automated acceptance tests (we call them User Acceptance Tests 'UATs') and continuous integration (CI), the time taken to run the tests impacts directly on team performance. Or, to put it another way, a team process using both UATs and CI is not a viable team process if the UATs are slow.

UATs are inherently slow, and teams always find that it is not long before their UATs run for more than 10 minutes. Too slow for me. CI is all about frequent repository commits, and in my case the commit rate can be every 15 minutes. So if I were to run all the tests prior to each commit I would be sitting, waiting, bored, for 30-50% of my time. If I just run all the unit tests and a selection of UATs (as I do) then I am somewhat 'leaning on the build box', and if a build failure is reported some 10 minutes later I'm already half way through my next chunk of work when I find my last commit had a fault. This means that I have to revert my code (lose the last 10 minutes work) so I can rapidly revert the build box, retrieve the broken revision, switch my thinking back to what I was doing, and fix it. In such a rapid CI environment I do not mind leaning on the build box so long as the box is fixed within 10 minutes. But either way, time is lost due to slow UATs.

The other, hidden, cost that arises from slow UATs is that it is not uncommon for team members, usually newer members, to find breaking the build too confronting/embarrassing. So they will run all the tests every time before committing, no matter how trivial the code change. On one hand a good attitude, and I'm embarrassed too, but a bit of self exposure (trust in the team) can allow you to use the build box as a tool to speed up team development. In our team I notice that no build break during the day is a good indicator that the team has gone 'stale', that people are having trouble. If the UATs are a complete specification of customer requirements, nobody can totally avoid a failure without running the tests (e.g. the customer wants the font to be 11pts). Same as reading the entire specification prior to each commit.

So speed matters. It directly affects team behaviour and time to delivery ($). With tests taking 10 minutes (say), I reckon this must equate to more than 25% of the team's time.

If your unit tests are running slower than 2000 per minute then either they are not unit tests or you should be humane and retire that box. UATs are another matter. They require discipline, skill, and grunt.

UATs are usually slow because:
  • UATs simulate an end user running the application. So they are constantly starting and closing the application and manipulating the application via the UI (we use NUnitForms as the low level UI test framework). This requires CPU grunt and the loading of many assemblies. Hence it is not unusual for test cases to take 1 second each.
  • Writing efficient UATs is not easy. It is a learnt skill. There must be a balance between truly testing the application from a user level (e.g. click here, click there), fast testing of a specific feature, and the independence of the test cases. For example, it is faster to test multiple features in one test case. Some 'cheats' can be used (see below).
  • The skill is often in the order of feature implementation (stories), as some stories enable faster testing of other stories.
  • The team may not appreciate (or care about) the impact that slow UATs have on their ability to deliver and so not give them the on-going attention they need. Consider this as UAT code health.
UAT smells:
  • Test cases taking longer than 2 seconds.
  • Tests with long setup times (> 1 second).
  • Slowing down tests to avoid defects that only appear in 'faster than life' UATs. This is ignoring a code health issue.
  • Unnecessarily complex setup. e.g. Needing to drag a control a long distance to test a transition near the far window edge. First implement a feature for the user to position the control by X Y coordinates and then use this to position the control near the edge for the test.
  • Hard coded 'blind' pauses or sleeps in the tests. e.g. 'Sleep(500)'. This is a real killer (see the wait helper sketch below).
  • Developers sitting with glazed eyes watching tests run.
  • Developers who like sitting watching tests run. They probably use the time to web browse. But then this is another, bigger, problem. :-)
  • Intermittent test failures. The UATs are telling you something. They are giving you an opportunity to fix design problems early. Pure gold.
Solutions:
  • Inform your customer of the cost savings that can be achieved from feature implementation order (story order). For example, if the UI has an icon showing a file save in progress then this can be used by the UATs to know when a file save is complete. Hence wasteful 'blind' delays are not required.
  • Team alignment/focus. Be aware of the true cost of slow UATs. Half a second across 240 tests is 2 minutes. If 4 developers lose 2 minutes, say, 4 times a day then that is a total of 2 x 4 x 4 = 32 developer minutes a day. So if you spend 1 hour saving half a second off each test, that will be paid back in just 2 days.
  • Cheat, but cheat legally. For example, rather than starting the whole application (EXE) in another thread, instantiate the main class and call its execute method. With good design you are only bypassing the Windows main method handling. You might also preload and reuse some read-only configuration files. This can save a lot of time, but be careful. :-)
  • Use the fastest computers available.
  • Distributed processing. I've never seen this. It seems to me to be the Utopia. I wonder if products like Alchemi can be used to pass each test fixture out to a different (idle) box. If so it would seem viable for a project to keep the time to run tests under 1 minute. Hmmm ... another blog.
  • Reduce the cost of entry for developers by developing a UAT framework of application specific test jigs rich with methods like 'WaitUntilXYZButtonEnables' (see the sketch below). Fix once, use many times.
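As an example of that last point, here is a sketch of the kind of helper such a framework grows (plain C#; the names are invented). It polls instead of blindly sleeping, so it returns the moment the application is ready and fails with a useful message when it is not:

using System;
using System.Threading;

public static class Wait
{
    public static void Until(Func<bool> condition, TimeSpan timeout, string description)
    {
        DateTime deadline = DateTime.Now + timeout;
        while (DateTime.Now < deadline)
        {
            if (condition())
                return; // done the moment the condition holds, no wasted time
            Thread.Sleep(50); // a short poll, not a blind half-second pause
        }
        throw new TimeoutException("Timed out waiting for " + description);
    }
}

// Usage in an application specific test jig:
//   Wait.Until(() => saveButton.Enabled, TimeSpan.FromSeconds(5), "save button to enable");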

Tuesday, 24 June 2008

NCoverCop And Warm Fuzzy Green

Right now I'm coding at home on an open source project and it is surprising just how much I miss a build box running NCoverCop. Each time I want to add a feature I feel the old tug between TDD and hacking. It seems that this 'tug' is diminished at work as NCoverCop makes sure that no new code goes in without tests; at home I still have the temptress devil on my shoulder.

'Interesting' how I find working with NCoverCop a relief rather than a burden. NCoverCop in fact provides the "warm and fuzzy" of a TDD green. It just feels good with it, and without it is like driving without a seatbelt.

Tuesday, 17 June 2008

Mocking Generic Methods with NMock2

Mocking generic methods is a bit of a problem using my preferred mocking framework, NMock2, but Nigel has found a way. Check out his blog or download his NUnitExtensions project. Great time savers.

The NUnitExtensions project I have in my NoeticTools google project is inspired by Nigel's. I intend to delete mine and use Nigel's one.

Another good one Nigel.

Thursday, 6 March 2008

Unit Testing Internal C# Classes

I like to keep my production code and my unit testing code in separate assemblies. A downside of this has been that all classes under test must be public, but I have now found that C# does support 'friend' assemblies via an AssemblyInfo.cs attribute:
[assembly: InternalsVisibleTo("UnitTests")]
I have not used this attribute yet, but I like the idea of marking classes as internal. It makes the intent (usage scope) self evident. I wonder if it will help detect orphaned code?
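For example (a made-up class, just to show the scoping):

// In the production assembly - the class no longer needs to be public:
internal class OrderNumberParser
{
    public int Parse(string text)
    {
        return int.Parse(text.TrimStart('#'));
    }
}

// In the UnitTests assembly, this compiles even though the class is internal:
[NUnit.Framework.TestFixture]
public class OrderNumberParserTests
{
    [NUnit.Framework.Test]
    public void ParsesHashPrefixedNumbers()
    {
        NUnit.Framework.Assert.AreEqual(42, new OrderNumberParser().Parse("#42"));
    }
}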

Wednesday, 5 March 2008

It Takes GUTs To Succeed

Alistair Cockburn proposed the value of a TLA for Good Unit Tests in his blog, mentioned on InfoQ here. The idea is best put in the InfoQ article as:

He (Alistair) suggests that there is a shift in assertion by Bob on what makes a true professional. Though Bob starts with TDD, he seems to agree that to be a professional you need to have good unit tests.

Alistair believes that, till date, there has been no good term for "good unit tests," as there is for TDD. Had there been a term like 'GUTs' for good unit tests then people could brag about GUTs without implying whether they were written before or after the code.
Check out his blog entry The modern programming professional has GUTs.

I could not resist the title :-)

Sunday, 2 March 2008

VS Production/Test Macro Jumper

When using TDD it seems that the repetitive patterns we perform are:
  • Create a test fixture for a class
  • Create a test for a method
  • Switch between the test fixture and the production code.
I forget the author but I'm reminded I once read:
"while the job of software developers is to automate end user processes, it seems that developers are the last to automate their processes."
So with this in mind I have written a Visual Studio macro to do one of the above: switch between a test fixture and the related production code class. Depending on how this goes at work I may extend it to create the fixture and create template test cases for methods. See how it goes.

Imports System
Imports EnvDTE
Imports EnvDTE80
Imports System.Diagnostics

Public Module Jumper

    ' Copyright 2008 Robert Smyth
    '
    ' Jump between a production class file and its test fixture.
    '
    ' This macro assumes that all unit tests for the classes in a
    ' file are located in a file of the same name as the file being
    ' tested (production code) with a "Tests" suffix.
    '
    ' This is based on the common practice of one class per file,
    ' the filename being the class name, and one test fixture per
    ' class.
    '
    ' It also assumes:
    '  - All test fixtures for an assembly are located in a child
    '    folder called Tests.
    '  - All test fixtures are located in a mirror folder structure
    '    within the child folder called Tests.
    Sub BetweenProductionClassTestFixture()
        Dim fileNameExtension As String = System.IO.Path.GetExtension(DTE.ActiveDocument.FullName)
        Dim activeProject As Project = GetActiveSolutionProject()
        Dim projectPath As String = System.IO.Path.GetDirectoryName(activeProject.FullName)
        Dim currentFilePath As String = System.IO.Path.GetDirectoryName(DTE.ActiveDocument.FullName)
        Dim classRelativePath As String = Right(currentFilePath, Len(currentFilePath) - Len(projectPath))
        Dim currentClassName As String
        Dim newFilePath As String = ""

        currentClassName = System.IO.Path.GetFileName(DTE.ActiveDocument.FullName)
        currentClassName = Left(currentClassName, Len(currentClassName) - Len(fileNameExtension))

        If Right(currentClassName, Len("Tests")) = "Tests" Then
            ' In a test fixture: strip the "Tests" suffix from the class
            ' name and the "\Tests" folder from the relative path to get
            ' the production class file.
            Dim productionFileName As String = Left(currentClassName, Len(currentClassName) - Len("Tests")) + fileNameExtension
            newFilePath = projectPath + Replace(classRelativePath, "\Tests", "", 1, 1) + "\" + productionFileName
        Else
            ' In a production class: add the "Tests" suffix and look in
            ' the mirror folder structure under the Tests child folder.
            newFilePath = currentClassName + "Tests" + fileNameExtension
            newFilePath = projectPath + "\Tests" + classRelativePath + "\" + newFilePath
        End If

        If newFilePath <> "" Then
            DTE.ItemOperations.OpenFile(newFilePath)
        End If
    End Sub

    Public Function GetActiveSolutionProject() As Project
        ' Returns the currently selected project, or Nothing if no
        ' project is selected.
        Dim projs As System.Array = DTE.ActiveSolutionProjects
        If projs.Length > 0 Then
            Return CType(projs.GetValue(0), EnvDTE.Project)
        End If
        Return Nothing
    End Function

End Module

Wednesday, 14 November 2007

NCoverCop - A Must Have Team Tool

A little while back I mentioned how, at Varian Australia, the software team developed a tool that checks our code coverage on every commit (each time the build box runs) and fails the build if the coverage falls. Now Nigel is publishing the tool to SourceForge. You can find it here.

This has proven to be very effective. Even with well intentioned TDD it is surprising how easy it is to miss one functional point. As the tool only allows the coverage to go up or stay the same, the coverage must, and does, increase with time. It has made a real difference to the team. All code committed must now be 100% tested by unit tests or the fail music will sound within just a few minutes :-).

Monday, 29 October 2007

Using Continuous Integration To Raise Code Coverage

I'm constantly amazed by what I learn working in a team; somebody is always coming up with something really useful. This last couple of weeks it was a simple extension to our continuous integration (CI) build box that measures the code coverage of the build and compares it to the highest recorded code coverage. If it is less, the build fails, yep fails. If it is higher, then the highest recorded value is automatically updated with the build's coverage. Being a CI environment, any build box failure must be fixed immediately. As a result the code coverage can only go up.
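The ratchet logic itself is tiny. Something like this sketch, assuming the build can hand it a single coverage percentage extracted from the NCover report (the high-water file name is invented):

using System;
using System.Globalization;
using System.IO;

class CoverageRatchet
{
    static int Main(string[] args)
    {
        // The build passes in this build's coverage percentage, e.g. "87.3".
        double current = double.Parse(args[0], CultureInfo.InvariantCulture);

        string highWaterFile = "coverage.highwater";
        double highest = File.Exists(highWaterFile)
            ? double.Parse(File.ReadAllText(highWaterFile), CultureInfo.InvariantCulture)
            : 0.0;

        if (current < highest)
        {
            Console.Error.WriteLine("Coverage fell from {0:F1}% to {1:F1}%.", highest, current);
            return 1; // a non-zero exit code fails the build
        }

        // Ratchet up: record the new high-water mark.
        File.WriteAllText(highWaterFile, current.ToString(CultureInfo.InvariantCulture));
        return 0;
    }
}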

So far this has been amazingly successful, but then our team does practice TDD (well, mostly).

The implication is that if you are committing new code you are responsible for ensuring that your code is covered by unit tests (code coverage is only calculated from unit tests). This is something of a subtle move in responsibility: before, it was acceptable to 'think' that the code was covered; now we must 'ensure' that it is covered or the build will break within minutes of the check-in.

For this to work, developer boxes were set up so that each developer can run their own code coverage (before check-in) and view the added code's coverage (using NCoverExplorer) quickly and easily. I think things being easy is really essential.

Like many things I've blogged about, I'm really recording great ideas from the team or others. This is no exception (credit here to Nigel Thorne ... a wickedly good idea Nigel).