Thursday 9 August 2007

Is it finished yet?

When developers say they’ve finished a work package, what does finished really mean? In the majority of cases, it’s not finished.

Ok, so this isn’t always the case. I’ve worked with some great developers who, when they say they’ve finished, really have delivered a finished work package that goes through the test department with nothing more than a couple of minor queries on interpretation of the spec. Rock-solid code, always delivered on time.

I consider a work package complete when the developer is confident the product is robust enough to stand up to rigorous testing (examples below), the deliverable is functionally complete with a tick against every item in the work package, and the code quality is of an acceptable standard. If a developer gets these basics right, it’ll be a monumental leap forward.

First of all, a developer needs a solid spec to work from. This doesn’t need to be of Encyclopaedia Britannica proportions, but it must contain a few key ingredients – the main one being the Test & Acceptance criteria. I’ll be covering how to brief a developer properly in a forthcoming post, but don’t worry: you can still apply the rest of this article without it.

Robustness
This is where I try to break the application: how easy is it for a user to break it, and how gracefully does it fail? I pull the rug from under the application in several different ways to see how well it recovers. What I’m really testing here is how well the overall exception handling strategy has been implemented, and it’s still an area developers regularly miss. Testing robustness is a good measure of how well the application has been written. For example, on web applications I’ll bypass client-side validation using something like Fiddler and inject all sorts of evil data into the application to make sure data is being validated server side as well as client side. As a developer, if you treat all input as evil, you’ll write more robust applications. And by input I don’t just mean user input: it’s commonplace these days, especially with service-oriented architectures, to have other routes for data to enter your application, and you need to treat those as evil input too. For more information on writing secure code, read the Open Web Application Security Project (OWASP) guidelines.
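To make that concrete, here’s a minimal, framework-agnostic sketch (in Java purely for illustration; the field names, limits and the RegistrationValidator type are all hypothetical) of re-checking on the server everything the client-side script claims to have already validated:

```java
// Minimal server-side validation sketch. Assumes the client-side checks may have
// been bypassed entirely (e.g. via Fiddler), so everything is re-checked here.
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class RegistrationValidator {

    private static final int MAX_NAME_LENGTH = 50;
    private static final Pattern EMAIL = Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");

    /** Returns a list of validation errors; an empty list means the input is acceptable. */
    public static List<String> validate(String name, String email, String ageText) {
        List<String> errors = new ArrayList<>();

        // Treat all input as evil: never assume the client enforced anything.
        if (name == null || name.trim().isEmpty() || name.length() > MAX_NAME_LENGTH) {
            errors.add("Name is missing or longer than " + MAX_NAME_LENGTH + " characters");
        }
        if (email == null || !EMAIL.matcher(email).matches()) {
            errors.add("Email address is not in a recognisable format");
        }
        try {
            int age = Integer.parseInt(ageText);
            if (age < 0 || age > 130) {
                errors.add("Age is out of range");
            }
        } catch (NumberFormatException e) {
            errors.add("Age is not a number"); // fail gracefully, not with a stack trace
        }
        return errors;
    }

    public static void main(String[] args) {
        // Simulate the sort of data a bypassed client would never have let through.
        System.out.println(validate("<script>alert(1)</script>", "not-an-email", "-1"));
    }
}
```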

A common issue I come across arises when different developers are working on different tiers of an application, i.e. the UI developer just calls methods from the business layer and from that point on it’s the other developer’s problem (“chucking it over the wall” syndrome). The common bug here is data integrity. I always test this by putting uniquely identifiable data into each form field (up to the maximum length) and then checking it has been mapped correctly to the database with no truncation or concatenation, as in the sketch below. If there are any issues I raise them there and then, and I own the issue until it’s resolved, no matter which team or developer is responsible for fixing it.
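As an illustration of that check (a self-contained sketch, not tied to any real form or schema; the field names, lengths and the simulated read-back are all made up), generate a value that is unique to each field, push it through the tiers, and compare what comes back:

```java
// Data-integrity round-trip sketch: every form field gets a uniquely identifiable
// value at its maximum length, and whatever is read back must match exactly.
import java.util.LinkedHashMap;
import java.util.Map;

public class DataIntegrityCheck {

    /** Builds a value unique to the field, padded out to the field's maximum length. */
    static String uniqueValue(String fieldName, int maxLength) {
        StringBuilder sb = new StringBuilder(fieldName.toUpperCase()).append('_');
        while (sb.length() < maxLength) {
            sb.append('X');
        }
        return sb.substring(0, maxLength);
    }

    public static void main(String[] args) {
        // What was typed into each form field (field name -> value at max length).
        Map<String, String> submitted = new LinkedHashMap<>();
        submitted.put("firstName", uniqueValue("firstName", 50));
        submitted.put("lastName", uniqueValue("lastName", 50));
        submitted.put("addressLine1", uniqueValue("addressLine1", 100));

        // Stand-in for what you would read back out of the database after the save.
        // In a real check this would come from a query against the stored row.
        Map<String, String> persisted = new LinkedHashMap<>(submitted);
        persisted.put("lastName", persisted.get("lastName").substring(0, 30)); // simulate a truncation bug

        for (Map.Entry<String, String> field : submitted.entrySet()) {
            String actual = persisted.get(field.getKey());
            if (!field.getValue().equals(actual)) {
                System.out.println("Data integrity failure in '" + field.getKey()
                        + "': sent length " + field.getValue().length()
                        + " but stored length " + (actual == null ? 0 : actual.length()));
            }
        }
    }
}
```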

Functional Completeness
I often hear PMs and business analysts complaining that developers don’t follow specs or have implemented them incorrectly. The common responses from developers are “oh yeah, I didn’t spot that one” or “I thought that’s what it meant”. This is an area that can be improved by briefing the developer clearly and having peer reviews in place. Again, look out for my forthcoming post on briefing developers properly.

Unfortunately, I’ve come across many sloppy developers who adopt an attitude of “it doesn’t matter, as anything I miss in the alpha release can be picked up in the beta”. For some reason, some developers code as if it’s a proof of concept or a dress rehearsal (“I’ll come back and complete all the detail later, I want to get it functionally working first”), skip the important detail, and never go back to finish it. Some bad developers I’ve encountered stick “finished” work packages into test just to buy themselves a couple of days’ rest. I’ve even heard several developers say “I’ve put it into test so they can give me a list of things I’ve yet to do”. When you have a developer with this attitude, it’s not a training issue or a mentoring issue; it’s an HR issue that needs dealing with immediately by reassignment, possibly to another company (Peopleware, Tom DeMarco & Timothy Lister).

Developers have said to me “so if we’re doing all this testing, what does the test team do?” or “isn’t this what we have testers for?”. You will not get over the project finish line if you’re stuck in a continuous test-and-fix cycle with the test team. Each iteration of that cycle is prohibitively expensive and kills project budgets.

Every time you submit a deliverable to a good test team, they will recreate a “clean” environment to test on, run through their test scripts, write up bugs in the bug-tracking system, and handle all the communication around these activities. In essence, every time you submit something to test you are triggering a huge amount of effort (project budget) to be burnt. This test team effort does not directly improve the quality of the product; it just tells you what level of quality the product is at, in the same way that a set of scales tells you how much you weigh – they don’t make you lose weight (Steve McConnell, Software Project Survival Guide).

Code Quality
I’ve seen many company coding standards in the form of a sixty-plus-page document that takes a developer at least a day to read and is then forgotten about. These paper-based documents are almost impossible to enforce in a commercially viable way, and they quickly go out of date as the Microsoft platform moves forward. They really aren’t necessary. What you need is some basic tools and a whole load of common sense. In fact, most of the coding standards documents I’ve seen look like a rules export from FxCop. FxCop is free and can be incorporated into your daily build, and if you do have a specific coding standard that FxCop doesn’t cover, write yourself an FxCop rule for it! Adherence to standards is just one aspect of code quality though. Other areas are how well the code is documented, how much unit test coverage you’ve got, and whether the code is at a low enough level of complexity. These are the main aspects of code quality that I look for in a deliverable. Click tools in my tag cloud for further details on code quality tools.
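FxCop rules themselves are written against its .NET analysis API, so the snippet below is only a stand-in to show the principle of turning a written standard into an automated build check. It’s a hedged sketch in Java with a made-up rule (no empty catch blocks), not FxCop’s actual API:

```java
// Illustrative stand-in for an automated coding-standard check, not FxCop's API:
// scan source files for empty catch blocks and fail the build if any are found.
import java.io.IOException;
import java.nio.file.*;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class EmptyCatchCheck {

    // Crude pattern for "catch (...) { }" with nothing but whitespace inside the braces.
    private static final Pattern EMPTY_CATCH =
            Pattern.compile("catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}");

    public static void main(String[] args) throws IOException {
        Path sourceRoot = Paths.get(args.length > 0 ? args[0] : "src");
        long violations;
        try (Stream<Path> files = Files.walk(sourceRoot)) {
            violations = files
                    .filter(p -> p.toString().endsWith(".java"))
                    .filter(EmptyCatchCheck::containsEmptyCatch)
                    .peek(p -> System.err.println("Empty catch block in " + p))
                    .count();
        }
        // A non-zero exit code is enough for most build servers to fail the build.
        System.exit(violations == 0 ? 0 : 1);
    }

    private static boolean containsEmptyCatch(Path file) {
        try {
            return EMPTY_CATCH.matcher(Files.readString(file)).find();
        } catch (IOException e) {
            return false; // unreadable file: let the compiler complain about it instead
        }
    }
}
```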


I can hear the uninitiated saying “so if I have to do all this extra stuff to get a work package to the standard you consider finished, it will take me twice as long!” Wrong. By adopting a test-driven development (TDD) attitude, the good developers I mentioned at the beginning of this post deliver higher quality, quicker. For more information and an in-depth look at TDD, visit the blog of Dan Bunea.
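For a flavour of the rhythm, here’s a minimal sketch (Java with JUnit 4; the DiscountCalculator and its discount rule are invented for illustration): write the failing test first, write just enough code to make it pass, then refactor with the tests as a safety net.

```java
// Step 1 (red): the tests are written before the production code exists,
// straight from the acceptance criteria in the work package.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountCalculatorTest {

    @Test
    public void ordersOfOneHundredPoundsOrMoreGetTenPercentOff() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(90.0, calculator.priceAfterDiscount(100.0), 0.001);
    }

    @Test
    public void smallerOrdersPayFullPrice() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(99.0, calculator.priceAfterDiscount(99.0), 0.001);
    }
}

// Step 2 (green): the simplest production code that makes both tests pass.
// Step 3: refactor freely, with the tests catching any regression.
class DiscountCalculator {
    double priceAfterDiscount(double orderValue) {
        return orderValue >= 100.0 ? orderValue * 0.9 : orderValue;
    }
}
```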
