Rob Smyth

Showing posts with label Software Development. Show all posts

Sunday, 8 April 2018

Recliner workstation

I like the look of this Altwork station. They describe it as "the world’s first workstation designed for high-intensity computer users".

Focus position.

I like the way it converts from seat to recline. If the base could be made a little less 'industrial' it would be perfect.

Thursday, 13 March 2014

Software R&D manager Yin & Yang

Taking on board "the fastest way to do something is to not do it at all" (is this an Alistair Cockburn quote?), I'm thinking that a software R&D manager's performance might come from how many things a team does not have to do.

A yin and yang relationship: a developer's performance could be measured by how much working functionality they deliver (or help a team to deliver), while a manager's could be measured by what the team did not have to do.

How do you measure what was not done?

Thursday, 27 February 2014

Cyber Security

The Australian Signals Directorate (ASD) has done a great job publishing some really useful, and practical, guides on cyber security. Nice work.

The ASD's strategies documents are useful and helpful. Check out their top 4 strategies: they say that of the thousands of breaches they have investigated, 85% would have been prevented by just these 4 strategies.

Monday, 11 November 2013

NLog MethodCallTarget configuration

I wanted to add an error indicator to a UI to give the user an indication that an error or warning has been logged. I've found this very useful for getting feedback from users. But each time I do this it takes me a while to get NLog to play nice with the application's XML logging configuration file. The thing I keep missing is to reload the configuration by:
NLog.LogManager.Configuration = loggingConfig;
Here is an example:

private void AddErrorMonitorTarget()
{
    var target = new NLog.Targets.MethodCallTarget();
    target.ClassName = this.GetType().AssemblyQualifiedName;
    target.MethodName = "OnErrorLogged";
    target.Parameters.Add(new MethodCallParameter("${level}"));
    target.Parameters.Add(new MethodCallParameter("${message}"));

    var loggingConfig = NLog.LogManager.Configuration;
    loggingConfig.AddTarget("UIErrorMonitor", target);

    var loggingRule = new LoggingRule("*", NLog.LogLevel.Error, target);
    loggingConfig.LoggingRules.Add(loggingRule);

    // Reassigning the configuration makes NLog reload it - the step I keep missing.
    NLog.LogManager.Configuration = loggingConfig;
}

public static void OnErrorLogged(string level, string message)
{
    // Update the UI error/warning indicator here.
}

Wednesday, 8 May 2013

StyleCop

StyleCop is a great Visual Studio / ReSharper add-on to complement lines of code (LOC) metric tools.

If your team values LOC then StyleCop is for you.

Thursday, 2 May 2013

Agile Project Management Team Tools

Hmm ... kinda by definition of 'agile' shouldn't this post be about 'team feedback tools' ... but let's skip that. I'll even try to skip the agile/scrum/XP question (is XP or scrum inherently 'agile'? ... darn, I did not skip it).

The main players seem to be:
  • VersionOne
  • Mingle
  • Rally
  • Scrumworks
  • Excel
I have no experience with Mingle or Rally, but have a lot of experience with VersionOne and, over the last few months, with Scrumworks. My impressions are ...

VersionOne

When I first used VersionOne, back in about 2004, I thought it was great. But last year (2012) I was on a team using it and it was ... hmmm ... good but not exciting. It offered a very rich feature set, but it seemed to me to have lost focus on usable features. I remember, several years ago, a projects page that showed an estimated time of delivery. To me, kinda fundamental. Not really available in the current version.

The good:
  • Free for small teams/use with limited features ... very restrictive.
  • Large range of features.
  • Multiple projects
  • Actual effort reporting independent of iteration.
  • Great customisation.
  • Track lifecycle of user wish list right through to tasks.
  • Support for automated release documentation.
 The bad:
  • Forget hosting if you're not in the US of A. The product does not support time zones and does not update burn down until the end of the day, which for me is in the middle of the day ... if you're trying to promote team acceptance this is a red flag. (I reported this as an issue several years ago and I know it was also reported in 2012.)
  • Hosted options are very very slow.
  • No retrospective time entry, which is greatly aggravated by the time zone problem. You cannot enter what you did yesterday.

Scrumworks

After a few months of using it ... I'm not impressed. It provides a web view and a Java application. The two seem to be written by different teams. While the web view allows entry of today's effort and updating of "to do" (velocity), the Java application allows editing the original estimate but not the "to do".

The Java application just does not cut the Scrum intent. I cannot see how a team can manage "burn down" using it.

The good:
  • Free 
  • Retrospective time entry (enter effort tomorrow).
 The bad:
  • Primitive
  • Different views (Java app / web) have different models.
To me, although I do it, retrospective time entry of effort is a process anti-pattern. If team members cannot be bothered to enter effort before going home then there are more problems than implied here.

Simple, but in the end Excel would be better.

Excel

Dunno, but I think Excel has more to offer.


In summary, today, I would consider other options than VersionOne or Scrumworks. But VersionOne is way better than Scrumworks. Actually, Scrumworks seems to me to be more of an inhibitor than useful  :-(.

Sunday, 16 September 2012

Can Gantt chart project management be agile?

Can a project be managed using Gantt charts and be agile? Of course, but why would you want to? Life is too short. The question is more telling than any answer.

Gantt charts have been widely used for a hundred years. Like all tools they have their sweet spot. Gantt charts are great for visualizing projects with immutable dependencies. That is, dependencies that fundamentally cannot be changed. Like: do step A, then step B, where step B cannot start until step A is finished. The common example is building a house: you cannot start the walls until the floor is completed. Likewise the roof cannot be constructed until the walls are completed. This approach has been, successfully, used by software development teams for decades. The UI can be built after the middleware.

It is a question of what you can do and efficiencies. If your team works best with a big design up front (BDUF) then this is probably your sweet spot, that is, your optimum. But if your team is more adaptable (dare I say agile) then this is a solution but not the optimal one, and may be a much slower way to completion. A question of potential and achievable.

Try managing a project using a Gantt chart in Microsoft Project where the developers, to reduce time to delivery, develop the house's roof before the floor is done.

Assuming that commercial success is the goal (big assumption), the best option to report to higher management is always the facts. The project is X% completed and its estimated time of completion is XYZ. Management understands that. In fact, that is what they try to extract from a Gantt chart. But, like many approaches, Gantt charts can be used to redefine success with complex presentations that nobody understands.

The real issue is to measure functional completion where functional means 'functional' not just developed. This means accepted (testing) by the real users.

Planning for failure warning signs:
  • More reference to "Gantt charts" than project management.
  • Releases defined as 'coded' prior to user testing.
  • Management repeatedly referring to 'sign off' on specifications.
  • Demonizing the customers.
  • Focus on department values over company commercial needs.

Wednesday, 28 December 2011

Migrating from Silverlight to WPF

I've just completed migrating a Silverlight application to WPF. All up it took a few days. Not difficult, mostly tedious, but it became evident that some preparation and knowledge helps. So here are my notes.

The application's vital statistics:
  • Silverlight 4
  • Visual Studio 2010 with Resharper 6
  • Several VS Silverlight projects with multiple common library assemblies
  • MVVM
  • No Silverlight IoC used
  • Automated UATs using White (Silverlight) framework
  • Telerik RadControls using charting, ribbon bar, data grid views, etc
  • Highly interactive UI
  • Multiple WCF services
  • Entity framework & SQL server back end

Preparation


Changes and research needed prior to changing references to the WPF framework ...

Do a spike
I found a spike run useful. I converted a few base assemblies until I got to one that had a Silverlight control (e.g. a user control or page). This took a few hours and helped greatly in planning the real attack. Discard the spike; you will do it better the second time.
Assembly Dependencies
Understand your application's Silverlight assembly dependencies; you need to work up from the bottom.
WPF reference assemblies
Do some research into Microsoft's WPF/Winforms reference assemblies. Referencing the wrong library is just too easy and can be time consuming to correct. Both WPF and Winforms have TextBox controls and Resharper is great but not a mind reader (yet). Take care when adding references.

e.g. PresentationFramework.dll, WindowsBase.dll, PresentationCore.dll
Clean-up orphaned files
Sometimes files are removed from an assembly without being deleted from disk or removed from the repository. You need to make sure these 'orphaned' files are removed before migrating. Select each project in turn and click on the show all files button in Visual Studio. Delete the orphans from your disk and the repository.
Navigation
Remove all web page navigation before you start the migration proper. This may be the biggest part of the job; the application needs to behave as one application rather than multiple web pages.

Search your code for 'href', 'Uri', 'MappedUri', and 'NavigationService'. These must go.
Web specific controls
In our case we had a RadControls HtmlPlaceholder which we used to display PDF files. This was removed prior to migrating the code and a PDF reader was added post code migration. I could not see a refactoring option here.

Code Migration


Getting the code to compile as a WPF application ...

Telerik RadControls
Possibly the easiest part. I got the WPF version and it compiled with very few differences. Actually, I do not remember any compile time changes. Nice one Telerik.

Note: We mitigated the cost of buying new WPF licenses by timing the change close to our Silverlight licenses' renewal. Tell the bean counters the new license cost is X but this is offset by dropping the cost of license Y, making our cost ....

However, there were a few run-time behaviour differences which were probably base WPF framework differences. I list these later.
Migrating assemblies
The procedure to migrate an assembly, working up the reference hierarchy, was:
  1. Add a new .NET assembly project (not Silverlight) with the same name as the one being replaced but with a clear suffix like "_X". Important: tell VS to create it in the same folder as the existing assembly.
  2. Remove the newly created assembly from the solution.
  3. VS has created a folder for the new project in the existing assembly's folder. Navigate to that folder and move the project file down into the same folder as the existing project.
  4. Add the moved project to the solution. As VS sorts projects alphabetically you will find it just below the one you're replacing.
  5. Copy the project's namespace from the original project (do not change it until the entire migration is finished). Set the assembly name to be the same but with a suffix like the project name.
  6. Add a reference to, as a minimum, PresentationFramework.dll, WindowsBase.dll, and PresentationCore.dll.
  7. Select the new project and click on "show all files". Select all of the source files and folders and add them to the project until the new project has the same files and folders as the original project.
  8. Try to compile the new project. You will need to change used namespaces. Typically the change will be to half a dozen namespaces in many files, so you will learn to do a search and replace (within the one project). This will break the compile of the original assembly but that does not matter so much.
Keep the old project in the solution until the migration is finished, you may need to navigate to solve errors in the migration, etc.
Migrating WCF service references

I did not find a way to migrate WCF service references. Create new ones in the new .NET assembly.

Run Time (Post code migration)

I did find some breaking differences. These were:
  • Had to change a few Page controls to UserControls. The WPF class hierarchy is a bit different. The compile error is clear and it was not difficult to do.
  • The initialisation sequence is a little different. I needed to move some initialisation code to different event handlers. I do not remember the details. This did take a few hours and was hit and miss. In one case I do remember I had to move some initialisation to an OnActivated event and add an "initialised" state. I'm sure that is not the intended way, but it worked.
  • I had one binding that did need considerable rework. The binding was to an indexer. Looked like a control binding defect; it should have worked. This one also took a few hours to resolve.
Conclusion

It proved viable to migrate our Silverlight application to WPF with few problems. Medium complexity job.

Code conversion took one developer a week. Planning and changes to the application are required before changing references to the WPF framework. Risk is low-moderate provided that your libraries, like Telerik, have a common WPF and Silverlight API.

However, defects were introduced. Most had very high visibility (ribbon bar blank) so are relatively low risk. A few were changes in control behaviour, so rigorous post migration testing is required. So allow the same time for testing and fixing introduced defects as was required for the code conversion (pre-testing) stage. But ... even if you have good automated UATs, manual testing will still be necessary; there will just be fewer introduced defects to find.

Monday, 20 September 2010

Using Log2Console from NLog via MSMQ

I've been using Log2Console to view real-time logging for a while using the Chainsaw target and UDP. But UDP is not reliable for fast real time logging. I found a simple way to configure NLog to send messages via MSMQ in the Log4JXml format used by Log2Console. I reckon this layout could be used with any NLog target.

Here is the NLog config:

<target name="messageQueue" xsi:type="MSMQ" queue=".\private$\log">
<layout xsi:type="Log4JXmlEventLayout"/>
</target>


So simple.

Rob

Thursday, 19 August 2010

Cube Farm Designs That Cut Out Conversation

The 2006 Waterfall Conference proves to provide timeless value. Those who know of Alistair will appreciate his input on office layout here.

Code Smell Metric - Doco Fluff Metric (DFM)

Code documentation is one of those things that is so easy to do without adding anything useful. The problem is that the added lines of code/text appear to have no value and reduce code readability. A case of less is more. Documentation can be useful, but nonsense documentation is worse than no documentation at all.

So a metric that detects nonsense documentation ("fluff"?) is another little helper.

Here is a real world example:

/// <summary>
/// Thread Name.
/// </summary>
public string ThreadName;
/// <summary>
/// Time Stamp.
/// </summary>
public DateTime TimeStamp;


A trivial example, but I reckon it is less readable than:

public string ThreadName;
public DateTime TimeStamp;


So the metric is: if the documentation, with white space removed, matches the property, method, or type name (case insensitive) then flag it as doco fluff.
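As a sketch, the rule could look something like this (the class and method names here are illustrative, not from any real metrics tool):

```csharp
using System;
using System.Text.RegularExpressions;

public static class DocoFluffDetector
{
    // Flags documentation as "fluff" when, ignoring whitespace, periods, and
    // case, the summary text only restates the member's name.
    // e.g. "Thread Name." documenting ThreadName.
    public static bool IsFluff(string summaryText, string memberName)
    {
        var normalisedDoco = Regex.Replace(summaryText, @"[\s\.]+", "").ToLowerInvariant();
        var normalisedName = memberName.ToLowerInvariant();
        return normalisedDoco == normalisedName;
    }
}
```

So `IsFluff("Thread Name.", "ThreadName")` flags the example above, while a summary that actually adds information would pass.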

Wednesday, 21 July 2010

Oath of Non-Allegiance

Alistair Cockburn again challenges us. Check out the "Oath of Non-Allegiance" here.

I'm sick of hearing 'they are not agile because they use UML' or 'in agile we stand up and spin clockwise every 42 minutes'.

Now I will go back to making my agile coffee on my agile PC and writing lots and lots of agile modelling diagrams. I wonder if my agile undies have arrived?

The term 'Agile' has joined the living dead; the meaning and purpose of the manifesto are long gone. It is now used to mean whatever you want, like Scrum, eXtreme Programming, or just anything.

May tolerance, an understanding that all processes are broken, and the bravery to find 'what works', replace it.

Monday, 12 July 2010

Using Visual Studio 2010 over RDP With Dual Screens

I often work from home and want to use my work box with dual screens, but my home monitors have different resolutions. Normally, with a VPN/RDP connection, the Windows RDP software cannot handle dual screens nicely, and not at all when they have different resolutions. I like SplitView; it allows me to do this.

At home I have a laptop and an external monitor. The laptop is 1680 pixels across and the monitor is 1920 across. Needless to say I want to use both, with Visual Studio's main editor on the larger monitor and the debug, output, and unit tests windows on the smaller laptop monitor. With SplitView I can connect to my work computer and use both. The nice thing is that I can maximise an application within a monitor without it filling both monitors.

It is a little tricky to set up (you do need to read the doco) as it layers over RDP. With a little fiddling about with the screen sizes it works nicely.

But the screens are not exactly the same, and when I return to the office after working from home I find that the screen layouts have changed. A bit annoying. Move the VS2010 properties window here, output window there, etc. So I save my "in office" and "rdp" layouts using VS's "Import and Export Settings". First get the layout you want, then export it by selecting "General Settings | Windows Layout" only. Then from home, or in the morning, you can quickly restore world order to your windows. Nice.

Friday, 9 July 2010

Continuous Integration can build time dependency

It seems to me that there is a dependency between Continuous Integration (CI) and build time. Each time I hit a slow build it puts pressure on 'fast' CI. It seems that if the build is slow then CI may be more destructive than beneficial. I'm thinking it is all a question of ratio.

CI does not define an integration rate. Some teams see CI as once a week, others see it as every 15 minutes. It is a relative concept.

If a build box can build and run all tests in, say, 1 second then a team's commit rate of once every 5 minutes would seem achievable. Each developer would have instant feedback and be able to fix, or revert, any problem within a couple of minutes, after a commit, with minimal effect (without considering a pre-commit test system). But if the build takes, say, 30 minutes and the team's commit rate was once every 15 minutes then by the time a build failure is detected the whole team is affected.

So it seems to me that there is a relationship between CI's rate and build time. I just do not know what it is yet.
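One rough way to frame the ratio from the examples above (my own back-of-the-envelope framing, not an established formula): the number of commits that land before a build failure is reported is roughly build time divided by commit interval.

```csharp
using System;

public static class CiFeedbackRatio
{
    // Rough number of commits that pile up before a broken build is reported.
    // With the examples above: a 1-second build and 5-minute commit interval
    // gives ~0 (near-instant feedback); a 30-minute build with 15-minute
    // commits gives 2, so the whole team is affected before anyone knows.
    public static double CommitsBeforeFeedback(double buildMinutes, double commitIntervalMinutes)
    {
        return buildMinutes / commitIntervalMinutes;
    }
}
```

A ratio well under 1 suggests feedback arrives before the next commit; above 1, failures compound across the team.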

Visual Studio 2010 build speed - kinda Windows 7 really

When I moved to a new project using Visual Studio 2010 the build time seemed very, very slow. But it would seem the problem is more to do with Windows 7. We are using Subversion, VisualSVN, and TortoiseSVN for RCS. I have been perplexed as to why we no longer get icons in Explorer, and today I found the solution here.

Once I ran the spell:
netsh interface tcp set global autotuninglevel=disabled
The world order was restored. VS2010 compiled 2 times faster, Explorer icons came out from hiding, and Explorer was happy (much much faster).

Monday, 14 June 2010

Build Management

Over the last few years I've worked on .Net projects using a few build management systems. My ratings, best first:

  1. TeamCity
  2. Cruise
  3. CruiseControl
  4. TFS

TFS 2010 is a still birth

The last few months I've been on a new project using Visual Studio 2010. Being a green field project, we started out going for the very latest suite of Microsoft integrated tools, using TFS for project management and revision control. What we found was that TFS was ... hmmm ... how do I put this? ... stillborn. Try as we did we just could not give it life, and after months of painful efforts we ditched it for Subversion and TeamCity so we could get on with the project.

I just could not find anything about TFS that was, in a professional sense, usable.

My summary of TFS:
  • RCS - it loses code changes. It is also difficult to use, and has a merge-phobia, but the corruption thingy is kinda a slam dunk. Subversion is way ahead and hey ... it keeps your code changes.
  • Project Management - Notepad is better.
  • Build system - works but is difficult to manage. TeamCity is much better.
  • Integration - Best described as 'share the pain'. Other, non-Microsoft, tools integrate just as well. e.g. VisualSVN.

Details below. If you're going to read further, either you find it hard to believe Microsoft could do such a thing (yea, me too), you're a Microsoft basher looking for a fix (go away!), or you need to know the experience to avoid it.

RCS Ability

Basics:

My experience with TFS for revision control was .... well ... it doesn't work. It is that basic. Well, actually it is worse: it silently drops changes. Yep, a revision control system (RCS) that actually loses code. Hard to believe, and we were so sure that nobody could release such a bad RCS that we thought it had to be the way we were using it. But no. Check in your changes, then update, and guess what? Your changes are gone!

Merging:

That TFS cannot reliably keep code changes is sufficient to just forget it and move on. But I suppose that may be fixed. Trouble is that this dude is just way out of its league. TFS's idea of a merge conflict is that the file changed. Its merging is so bad that continuous integration (CI) collaboration is expensive. It sometimes even reports that it cannot merge files that, it says, are the same.

TFS is merge-o-phobic.

Working with a build system:

One real attractive feature with TFS is that it offers the ability to compile and run all tests on a build system before accepting the commit. Nice ... but ....

It works, most of the time. Trouble is that TFS's merge-o-phobia kinda negates this feature. And ignoring that, TFS sometimes reverts all of your code when you do one of these commits. Not always, mind you, and it does not tell you if it did. Surprise! If it does, and you realise it, well you just have to go make a coffee while the build system decides if your code can go in. That down time is optimistic :-).

For Project Management

TFS is just fine if your projects always go exactly as planned in a Gantt chart. But then, why would you need a project management tool?

I'm serious when I say that notepad would be a better tool. Excel would be far better, and VersionOne just awesome.

For what is called an 'agile' style project:
  1. Burn down reporting does not work.
  2. Move a story to a different iteration and the tasks stay behind. Hey try to do iteration planning with that!
  3. Story prerequisites are meaningless. You can put a dependent story in an iteration preceding its prerequisites.

Summary

In the end we gave up and now use Subversion with TortoiseSVN and VisualSVN, and TeamCity. We can deliver working functionality more often now, as we can spend more time coding features and less time working with infrastructure.

But if Subversion ain't for you, as you need something real expensive with lots of processes to be sure to be sure that you really are going to commit that code, then I can say that TFS is even worse than ClearCase. ClearCase does not lose code changes. It may slow them going in, but it does not lose them. And ... trump this ... ClearCase is so much more expensive.

Saturday, 3 April 2010

Fault Tolerant Automated Functional Tests Oxymoron

Microsoft seems to be pushing making coded UI functional tests fault tolerant by using multiple methods of 'finding' controls on a page. If it can't find a control because its text has changed then it tries another approach. I reckon this is more likely to cause problems than it would solve. At best it is unnecessary; at worst it will allow tests to pass when they should fail. Like overuse of null reference guards in code that hide defects.

Microsoft's automation tools generate code that uses multiple approaches (fault tolerant) to find controls. I'm also seeing examples of this approach in VS2010 documentation/tutorials.

e.g: 4 minutes into: Introduction to Creating Coded UI Tests with Visual Studio 2010.

I do not understand the need nor the intent. On the 'need' level, it implies that there is no reliable method for finding a control, although each control has a name or AutomationId that is independent of location, inner text, colour, and visibility. On the intent level, if the control changes so that you cannot find it ... well ... I would rather the test failed.

I use test jigs and testers (thanks Nigel ... a pattern that should be documented) to access UI controls. Typically a page has a test jig, and a property on the test jig provides access to a control's tester or test jig. So I have one place in which I define the control's id (e.g. AutomationId). If it changes I just change one line.
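A minimal sketch of the pattern as described (the class, control, and type names below are hypothetical; UiWindow and TextBoxTester stand in for whatever the UI automation framework, such as White, provides):

```csharp
// Hypothetical test jig for a login page. The jig is the single place in the
// test code that knows the control's AutomationId.
public class LoginPageTestJig
{
    private readonly UiWindow window;

    public LoginPageTestJig(UiWindow window)
    {
        this.window = window;
    }

    // The only line that mentions the AutomationId "UserNameBox".
    // If the id changes, only this one line changes.
    public TextBoxTester UserName
    {
        get { return new TextBoxTester(window, "UserNameBox"); }
    }
}
```

Tests then ask the jig for the tester rather than locating the control themselves, so there is no fallback lookup to mask a genuinely missing control.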

Fault tolerant test code is an oxymoron.

Perhaps the next step is to make tests intermittent failure tolerant by retry on fail. :-)

Monday, 19 October 2009

NMate - Missing In Action?

Anybody know where to download NMate? There seem to be plenty of download links but none work. Sounds like a useful NUnit application.

Friday, 9 October 2009

Source Code Outliner PowerToy

Another useful, and free, VS2008 tool: Source Code Outliner PowerToy.

Why install? Well, it is free, painless (negligible learning curve), and it helps.

Install it and go to the menu View | Other | Source Code Outliner Power Toy. Then drag the window to dock into your Visual Studio IDE (see picture). Now you can forget about it ... it adds more convenient navigation than that offered by the standard IDE drop downs.

Not a killer app, but nice.