Sunday 9 December 2007

Continuous Transparency - a new best practice?

Continuous Transparency is the practice of giving your customer or key stakeholders direct access to the output of your daily build or continuous integration environment; in other words, your customer can see the work in progress (warts and all).

For web applications you simply get your build script to automatically deploy the build to an internet-facing server and send the URL to your customer. For thick-client applications you give VNC or RDC access to a demo PC, usually located in your corporate DMZ, and send your customer the connection details.
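To make the web case concrete, here's a minimal sketch of the kind of post-build deploy step I mean. In practice it would just be another target in your NAnt or MSBuild script; the paths, server share, and URL below are placeholders, not a recommendation.

```csharp
// Minimal post-build deploy sketch (paths, share, and URL are hypothetical).
// Run it as the last step of the daily build, after compilation and the smoke test pass.
using System;
using System.IO;

class DeployLatestBuild
{
    static void Main()
    {
        string buildOutput = @"C:\Builds\MyWebApp\Latest";     // output of the daily build
        string webRoot = @"\\dmz-demo01\wwwroot\MyWebApp";     // internet-facing demo server (assumed share)

        CopyDirectory(buildOutput, webRoot);
        Console.WriteLine("Deployed. Customer URL: http://demo.example.com/MyWebApp");
    }

    static void CopyDirectory(string source, string target)
    {
        Directory.CreateDirectory(target);
        foreach (string file in Directory.GetFiles(source))
            File.Copy(file, Path.Combine(target, Path.GetFileName(file)), true);
        foreach (string dir in Directory.GetDirectories(source))
            CopyDirectory(dir, Path.Combine(target, Path.GetFileName(dir)));
    }
}
```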

When you adopt this practice for the first time, you must communicate the following to all stakeholders about their direct access to the build:
  • What you will see in the build is work in progress. It is normal for untested software that is in mid-development to crash or give unexpected results. Please do not be alarmed, or log any support calls, if the software crashes or is partially or completely unavailable when you try to access it.
  • You must not use the build for demonstration purposes within your organisation as it may change significantly before the final release and may not be available when you come to demonstrate it. You also run the risk of setting the wrong expectations should the software fail or change in the future.
  • Placeholders will often be used where content or functionality is not yet available to provide “scaffold” for the application.
  • What you see in the daily build does NOT constitute an official release.
  • All previous data is removed each time the build is deployed.

Advantages of Continuous Transparency
  • Builds an enormous amount of confidence with the customer and creates a very strong relationship as they can see tangible progress and feel constantly engaged.
  • Enables you to share problems with the customer in real-time. The most common problem is having an overly optimistic (unachievable) schedule. The customer can usually gauge the overall progress of the development and realise that you can’t complete all functionality on time. They actively help you to de-scope the solution to hit their deadline.
  • Focuses the developers’ minds when they know all stakeholders are watching their creations on a daily basis and improves quality upstream.
  • Entirely removes the risk of any surprises to the customer when the first release is made, where they would have traditionally only seen the software after many months of development.
  • Flushes out change requests or fundamental requirements issues earlier on in the development life cycle. Better to find out earlier before project budget is burnt.
Disadvantages of Continuous Transparency
  • If your project has major resourcing issues (I’m not talking about the odd day of sickness or annual leave) then your customer will be able to see an unexplainable lack of progress.

I have operated Continuous Transparency on all projects I’ve run over the past 4 years and it’s always been a great success.

Everyone I've introduced it to instantly gives me a look of horror, followed by "showing the customer untested software or airing our bugs in public is ridiculous", or "but doesn't it give the customer a chance to sneak in changes?" But once they try it and reap the rewards, they say they'll never go back to the way they used to deliver software. Most importantly, all my customers have said "why don't all our suppliers operate like this?"

I’m interested to hear comments from anyone who’s already operating this approach or is concerned or sceptical about using it.

Sunday 30 September 2007

Briefing Developers

For some time now, I've been meaning to write a post on "briefing developers properly". The difficulty I've had is articulating it in a way that provides real, tangible advice of immediate use to Technical Leads and Project Managers. The more I think about it, the more I realise I could write an entire book on the subject. There simply isn't a one-size-fits-all Developer Briefing Template.doc where you just fill in the blanks.

Many attributes affect which approach you use to brief a developer. Project type, project size, team size, and schedule all need to be considered when choosing the most appropriate approach. For example, a small Flash-based multimedia project would probably be best suited to a simple storyboard. A web site should have a site map as a minimum. A web application could require a more detailed work package incorporating UML.

Whatever approach you take, here are some key points to consider when writing a brief:


  • A verbal brief isn’t a brief. It must be in some kind of a written form.

  • Piecing together the brief from a large email thread is the wrong type of written brief!

  • Complementing the written brief with verbal communication is acceptable.

  • Get the developer to play back the brief to you to give you the confidence they’ve understood it.

  • Don’t be too prescriptive in the brief as you still want developers to use their expertise in coming up with solutions.

  • Reduce the risk of over-engineering or misinterpretation of specs by putting extra effort into the test & acceptance criteria, and review their work regularly.

  • Don’t just give them the document to get on with. Step them through it and get them to explain it to you.

  • Don’t assume that something that is obvious to you is obvious to them. Explain everything, even if you risk insulting their intelligence.

  • Keep documentation lightweight. Remember, “a document isn’t finished when you can’t think of anything else to put in it, it’s finished when you can’t take anything else out of it.” (Steve McConnell).

  • Avoid writing a paragraph containing multiple business rules that may result in some rules becoming overlooked. Separate out each business rule into an itemised list that can be ticked off when completed.

  • Pull together a work package that not only contains the brief, but also contains any assets or test data the developer needs to complete the work.

  • Get the developer to provide you with a work breakdown structure for the task at hand, along with estimates for each task. When the developer goes through this exercise it shows they’ve really thought about what they’ve got to do.
Test & Acceptance Criteria
I can't emphasise enough how important it is to define test & acceptance criteria for the developer. You can do this in a number of ways, but I tend to generate some test data for "input" into the system and specify the "output" state it should result in. If you do have a test team, get them to contribute to this section.

Your T&A criteria should exercise the non-functional requirements as well as the functional requirements. So, if you’ve specified 100,000 records as the warrantable limit of your system then you need to specify this in the T&A. You should also provide this test dataset in the work package for the developer.
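As a sketch of what I mean by specifying the "input" data and its expected "output" state, here's a hypothetical NUnit test a developer could run against the test dataset supplied in the work package. The OrderImporter class, file name, and record counts are invented for illustration.

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderImportAcceptanceTests
{
    // T&A criterion (hypothetical): importing the supplied file of 100,000 records must
    // load every valid record and reject the 3 deliberately malformed rows.
    [Test]
    public void ImportingWarrantedVolumeLoadsAllValidRecords()
    {
        OrderImporter importer = new OrderImporter();   // hypothetical class under test

        ImportResult result = importer.Import(@"TestData\orders-100000.csv");

        Assert.AreEqual(99997, result.LoadedCount, "all valid records should be loaded");
        Assert.AreEqual(3, result.RejectedCount, "malformed rows should be rejected, not silently dropped");
    }
}
```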

Work Packages
In many organisations I've worked for, it's very rare for a developer to do everything in the software lifecycle themselves. The organisations are usually made up of teams of Business Analysts, Graphic Designers, Producers, PMs, Developers, and Testers. This results in a collection of documents looking at the project from different perspectives, which the developers have to implement.

My personal preference when briefing developers is to create what I call a Work Package. This work package pulls together the relevant sections from the collection of documentation to deliver a specific piece of functionality. The following diagram should give you some ideas as to what could be included in a work package:


I often get asked how big a work package should be. This obviously depends on the size of the project, but the time from the point at which the developer starts the work package to the point where it's finished (see my previous post on what counts as finished!) should be a minimum of one week. I've written many work packages that take 20-30 days to complete, so there are no strict guidelines on how big one should be. I have found, though, that writing a detailed work package for small tasks of a few hours simply doesn't work. Quite often in these situations a whiteboard or flipchart is perfectly adequate; the important point is that it's written down.

Thursday 9 August 2007

Is it finished yet?

When developers say they’ve finished a work package, what does finished really mean? In the majority of cases, it’s not finished.

Ok, so this isn't always the case. I've worked with some great developers who, when they say they've finished, really have delivered a finished work package that goes through the test department with nothing more than a couple of minor queries on interpretation of the spec. Rock solid code, always delivered on time.

I consider a work package complete when the developer is confident the robustness of the product is good enough to stand up to rigorous testing (examples below), the deliverable is functionally complete with a tick against every item in the work package, and the code quality is of an acceptable standard. If a developer gets these basics right it’ll be a monumental leap forward.

First of all, a developer needs a solid spec to work from. This doesn't need to be of Encyclopaedia Britannica proportions, but it must contain a few key ingredients – the main one being Test & Acceptance criteria. I'll be covering how to brief a developer properly in a forthcoming post, but don't worry, you can still apply the rest of this article.

Robustness
This is generally me attempting to break the application: seeing how easy it is for a user to break it, and how gracefully it fails when I pull the rug from under it in several different ways. What I'm really talking about here is how well the overall exception handling strategy has been implemented, and it's still a key area that developers regularly miss. Testing the robustness is a good measure of how well the application has been written. So for example, on web applications I'll bypass client-side validation using something like Fiddler and inject all sorts of evil data into the application to make sure data is being validated server side as well as client side. As a developer, if you treat all input as evil, you'll write more robust applications. And input doesn't just mean user input: it's commonplace these days, especially with service-oriented architectures, for data to enter your application by other routes, and that needs treating as "evil input" too. For more information on writing secure code, read the Open Web Application Security Project (OWASP) guidelines.
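To show what "treat all input as evil" means in practice, here's a hedged sketch of server-side re-validation. The field names, limits, and regex are illustrative only; a real application would also encode output and use parameterised queries, in line with the OWASP guidance.

```csharp
using System;
using System.Text.RegularExpressions;

public static class RegistrationValidator
{
    // Re-validate on the server even though the page has client-side validation --
    // anyone with Fiddler can bypass the JavaScript and post whatever they like.
    public static void Validate(string email, string quantityField)
    {
        if (string.IsNullOrEmpty(email) || email.Length > 254 ||
            !Regex.IsMatch(email, @"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
        {
            throw new ArgumentException("Invalid email address.");
        }

        int quantity;
        if (!int.TryParse(quantityField, out quantity) || quantity < 1 || quantity > 1000)
        {
            throw new ArgumentException("Quantity must be a whole number between 1 and 1000.");
        }

        // Further down the stack, parameterised queries (never string concatenation)
        // stop the same evil input turning into SQL injection.
    }
}
```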

A common issue I come across usually arises when different developers are working on different tiers of an application, i.e. the UI developer just calls methods from the Biz layer and from that point on it's the other developer's problem (the "chucking it over the wall" syndrome). The common bug here is data integrity. I always test this by putting uniquely identifiable data into each form field (up to its maximum length) and then checking it's been mapped correctly to the database with no concatenation. If there are any issues I raise them there and then, and own the issue until it's resolved, no matter which team or developer is responsible for resolving it.
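Here's a sketch of that data-integrity check: push uniquely identifiable, maximum-length values through the layers and confirm nothing is truncated, concatenated, or mapped to the wrong column. The CustomerService facade, field names, and lengths are hypothetical.

```csharp
using NUnit.Framework;

[TestFixture]
public class CustomerMappingIntegrityTests
{
    // Generates a value that fills the field to its maximum length and is
    // recognisable if it turns up in the wrong column or gets truncated.
    static string UniqueFill(string tag, int maxLength)
    {
        string value = tag + "-";
        while (value.Length < maxLength) value += "X";
        return value.Substring(0, maxLength);
    }

    [Test]
    public void EveryFieldSurvivesTheRoundTripIntact()
    {
        string surname = UniqueFill("SURNAME", 50);
        string address1 = UniqueFill("ADDR1", 100);

        CustomerService service = new CustomerService();   // hypothetical Biz layer facade
        int id = service.CreateCustomer(surname, address1);

        Customer saved = service.GetCustomer(id);
        Assert.AreEqual(surname, saved.Surname, "surname truncated or mapped to the wrong column");
        Assert.AreEqual(address1, saved.AddressLine1, "address line 1 truncated or mapped to the wrong column");
    }
}
```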

Functional Completeness
I often hear PMs and business analysts complaining that developers don't follow specs or have implemented them incorrectly. The common responses from developers are "oh yeah, I didn't spot that one", or "I thought that's what it meant". This is an area that can be improved by briefing the developer clearly and having peer reviews in place. Again, look out for my forthcoming post on briefing developers properly.

Unfortunately, I've come across many sloppy developers who adopt an attitude of "it doesn't matter, as anything I miss in the alpha release can be picked up in the beta". For some reason, some developers seem to code as if it's a proof of concept or a dress rehearsal ("I'll come back and complete all the detail later as I want to get it functionally working first"), skip the important detail, and never go back to finish it. Some bad developers I've encountered stick "finished" work packages into test just to buy themselves a couple of days' rest. I've even heard several developers say "I've put it into test so they can give me a list of things I've yet to do". When you have a developer with this attitude, it's not a training issue or a mentoring issue; it's an HR issue that needs dealing with immediately by reassignment, possibly to another company (Peopleware, Tom DeMarco & Timothy Lister).

Developers have said to me "so if we're doing all this testing, what does the test team do?" or "isn't this what we have testers for?". You will not get over the project finish line if you get stuck in a continuous test-and-fix cycle with the test team. Each iteration of this cycle is prohibitively expensive and kills project budgets.

Every time you submit a deliverable to a good test team they will recreate a "clean" environment to test on, run through their test scripts, write up bugs in the bug tracking system, and handle all the communication around these activities. In essence, every time you submit something to test, you are triggering a huge amount of effort (project budget) to be burnt. This test team effort does not directly improve the quality of the product; it just gives you a measure of the level of quality the product is at, in the same way that a set of scales tells you how much you weigh – it doesn't make you lose weight (Steve McConnell, Software Project Survival Guide).

Code Quality
I've seen many company coding standards in the form of a 60-plus-page document that takes a developer at least a day to read and then gets forgotten about. These paper-based documents are almost impossible to enforce in a commercially viable way, and they quickly go out of date as the Microsoft platform moves forward. They really aren't necessary. What you need is some basic tools and a whole load of common sense. In fact, most of the coding standards documents I've seen basically look like a rules export from FxCop. FxCop is free and can be incorporated into your daily build, and if you do have a specific coding standard that FxCop doesn't cover, write yourself an FxCop rule for it! This is just one aspect of code quality though. The others I look for in a deliverable are how well the code is documented, how much unit test coverage there is, and whether the code is at a low enough level of complexity. Click tools in my tag cloud for further details on code quality tools.
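As a rough sketch of the daily-build integration, the step below runs FxCopCmd over a build output and fails the build if the report contains any issues. The install path, command-line switches, assembly name, and report element name should all be treated as assumptions to check against your FxCop version.

```csharp
// Hedged sketch of an FxCop build gate: run the analysis, then fail the build if the
// report contains any issues. Paths and the exact report schema vary between versions.
using System;
using System.Diagnostics;
using System.Xml;

class FxCopBuildGate
{
    static int Main()
    {
        ProcessStartInfo fxcop = new ProcessStartInfo(
            @"C:\Program Files\Microsoft FxCop\FxCopCmd.exe",
            @"/file:Build\MyApp.dll /out:Build\fxcop-report.xml /console");
        fxcop.UseShellExecute = false;

        using (Process p = Process.Start(fxcop))
        {
            p.WaitForExit();
        }

        XmlDocument report = new XmlDocument();
        report.Load(@"Build\fxcop-report.xml");
        int issueCount = report.SelectNodes("//Issue").Count;   // assumed element name in the report

        Console.WriteLine(issueCount + " FxCop issue(s) found.");
        return issueCount == 0 ? 0 : 1;   // non-zero exit code fails the daily build
    }
}
```

The same pattern works for any command-line analysis tool you want to make a gatekeeper of the daily build.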


I hear the uninitiated saying "So if I have to do all this extra stuff to get a work package to the standard you consider finished, it will take me twice as long!" Wrong! By adopting a test-driven development (TDD) attitude, the good developers I mentioned at the beginning of this post deliver higher quality, quicker. For more information and an in-depth look at TDD, visit the blog of Dan Bunea.

Thursday 5 July 2007

Why I hate Gantt charts

One of my pet hates is Project Managers (PMs) living their lives in Gantt charts or spreadsheets. I've seen many projects over the years where PMs have got to the end of the project without ever seeing the actual delivered software! Incredible, I know. I'm flabbergasted every time I see it, and I still see it regularly. In fact, I think it's on the increase.

They usually get their progress reports from developers via MS Excel in the form of hours to complete per task or some other kind of subjective guess. They then merrily update their Gantts in complete ignorance of what's really happening and get on with writing their progress reports. It's a real coincidence that the plans nearly always look rosy in these situations ;)

On some occasions I've had to force PMs to look at the software to gauge for themselves how complete it is. Interestingly enough, what they consider complete rarely matches the developers' idea of complete, which in itself introduces another key review point.

My message to PMs is: stop hiding behind project plans and get under the skin of the software to form your own judgement of progress.

My message to developers is: the only real measure of progress is tangible software that PMs can see and interact with, so make sure you're constantly in a position to demonstrate this.

Tuesday 3 July 2007

An easy way to improve code quality

Cyclomatic Complexity has been around for a long time, and yet many developers are still unaware of it. Cyclomatic Complexity is a measure of code complexity. It's just one aspect of code quality, but I've found its impact to be massive when used daily within the software development life cycle. Cyclomatic complexity analysis basically counts the number of decision points (e.g. ifs, else ifs, cases, nesting levels, etc.) in source code and gives you a rating. These ratings can be assessed as follows:

Cyclomatic Complexity / Risk Evaluation:
  • 1-10: a simple program, without much risk
  • 11-20: more complex, moderate risk
  • 21-50: complex, high risk program
  • greater than 50: untestable program (very high risk)

The reality is that the most complex areas of a code base attract the most bugs, and because that code is complex it's harder to fix (and costs more!).
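As a quick illustration of how the counting works (the pricing rules below are made up), every decision point adds one to the method's base score of 1:

```csharp
using System;

public class DeliveryPricing
{
    // Base complexity is 1; each decision point below adds 1, giving this method a
    // cyclomatic complexity of 5 -- comfortably in the "simple, low risk" band.
    public decimal DeliveryCharge(decimal orderTotal, bool isTradeCustomer, int weightKg)
    {
        if (orderTotal < 0) throw new ArgumentException("orderTotal");  // +1
        if (orderTotal > 50m) return 0m;                                // +1
        if (isTradeCustomer) return 2.50m;                              // +1
        if (weightKg > 20) return 9.99m;                                // +1
        return 4.99m;
    }
}
```

Long if/elseif chains and deep nesting are what push a method into the hundreds; breaking it into smaller methods, or replacing the chain with a lookup table, is usually how you pull the score back under your threshold.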

About 3 months ago I introduced it into the daily build on all projects at my company, so that the build fails if code with a complexity rating greater than 15 is introduced. Initially it took a while for the development team (34 developers) to make it part of their daily routine, but it has paid massive dividends. We've noticed a sharp decrease in the number of bugs found in the testing phase, and maintenance of the code bases has eased dramatically.

If a developer can't get the complexity down below 15 then it's put out for review to other developers. There are occasions when it's genuinely not possible to reduce the complexity due to performance trade-offs, but at least those important decisions are being handled in the correct manner.

Out of curiosity I ran the analysis tool over some past projects to get a feel for some kind of baseline to measure improvements against. One particular project reported a maximum complexity of 536!! I thought the analysis tool was broken until I dove into the source code and found a single method containing over 3000 lines of code, consisting mainly of if, elseif, if, elseif, etc. Also, the deepest level of nesting in this method was 24! Only a complete nutter could have written this!

So in summary, my advice to you as a developer is: analyse the complexity of anything you write, and simplify accordingly.

There are many other tools available for analysing code complexity such as IDE integrated plugins from Developer Express to name just one. Find the one that's right for you and use it.

The best tool I've found for analysing complexity is Source Monitor. It's good for two reasons: you can run it as part of your daily build & smoke test, and it's free! It works with all the mainstream languages such as C#, VB.NET, and Java, and we've also managed to get it to work with ActionScript 2.0.
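As a sketch of what running it as part of the daily build can look like, here's a small gate that reads the tool's XML export and fails the build when any method exceeds the limit. The element and attribute names are invented, so map them onto whatever your tool actually emits.

```csharp
// Hedged sketch of a complexity build gate. It assumes your complexity tool can emit an
// XML report with a per-method score; the schema below is invented for illustration.
using System;
using System.Xml;

class ComplexityBuildGate
{
    const int Threshold = 15;

    static int Main()
    {
        XmlDocument report = new XmlDocument();
        report.Load(@"Build\complexity-report.xml");

        int failures = 0;
        foreach (XmlElement method in report.SelectNodes("//method"))   // assumed schema
        {
            int complexity = int.Parse(method.GetAttribute("complexity"));
            if (complexity > Threshold)
            {
                failures++;
                Console.WriteLine("{0} has complexity {1} (limit {2})",
                                  method.GetAttribute("name"), complexity, Threshold);
            }
        }

        return failures == 0 ? 0 : 1;   // non-zero exit code fails the daily build
    }
}
```

Treat exit codes and report formats as version-specific; the point is simply that the threshold is enforced by the build, not by memory or goodwill.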

Sunday 1 July 2007

Agile - here's my two pennies worth

Not a week goes by without hearing someone say "we're taking an Agile approach to this project". In most cases, the Agile approach means "we're making it up as we go along". I'm not going to create yet another blog post on the pros and cons of Agile. Instead I'm simply going to point you in the direction of a great video presented by the inventor of SCRUM, who, let's face it, should be able to explain it better than anyone.



Even if you can't adopt the full SCRUM approach you should still be able to pick out a few gems from this video. Developers should pay particular attention to the section on cutting quality!

Friday 29 June 2007

Solutioneering

When I'm recruiting developers, what I'm actually looking for is Solutioneers. These are developers who consider "getting things done" as one of the problems they need to solve. For example, if another part of the project team is holding up their work, this isn't an opportunity to kick back and grab some slack time. This is a problem that needs solving. Maybe the simple solution here is to walk around the office and get the blockage cleared. I hear non-Solutioneers saying "but that's not my job" or "I didn't want to step on anyone's toes". Finishing a project on time must be a Solutioneer's primary goal, and as such they are always willing to "solve this problem" by rolling up their sleeves and getting stuck into any aspect of the project. Solutioneers take ownership very seriously and often lose sleep at night over particular aspects of project delivery.

To be dynamic enough to solve problems, they need to keep on top of what's happening around the industry and constantly push their own intellectual capacity and technical abilities. These Solutioneers are avid bloggers who not only write blogs but also see blogs as a valuable source of knowledge. They spend their daily commute glued to technology podcasts and spend lunchtimes scouring the web for quicker and better ways of developing software (these are the guys who keep sending emails saying, "hey, have you seen this!"). Almost certainly they'll have something approaching an enterprise-class data centre at home to enable them to tinker in the evenings and weekends. Solutioneers possess an almost unhealthy passion for software development and never rest on projects until they're over the finish line, so a sense of urgency is another facet here.

You may be thinking that outright Solutioneering can be very hazardous in a commercial project environment and smacks of perfectionism, which can easily blow project budgets (over-engineering syndrome, or paralysis by analysis!). This is yet another problem for a Solutioneer to overcome. Solutioneers are experienced enough to make a balanced call on trade-offs, i.e. I know the purist approach to a software problem, I know the quick and dirty approach, but what are the risks associated with trade-offs at either end of the scale?

Solutioneers have to be good communicators and be very personable, friendly, and approachable. But most of all they must possess a good sense of humour.

Oh yes, I forgot, and if they know the .Net Framework, C#, SQL, and most Microsoft technologies and platforms then that also helps! ;)

The importance of defining non-functional requirements

You wouldn't believe the number of projects I've come across where nobody on the project team has established what the non-functional requirements (NFRs) are. If you're reading this and thinking "what the hell are non-functional requirements?" then this very brief introduction is for you. If you're already familiar with NFRs then read on anyway, as there may be a few nuggets in here you don't already know.

So, non-functional requirements (sometimes referred to as Software Quality Attributes) define, in the simplest terms, the performance, scalability, capacity, resilience, and many other attributes of the software being delivered. The best way for me to describe why NFRs are so important is to share a few experiences with you. Here's one that demonstrates the point:

A web application is designed, built, tested, passes UAT, and is deployed. The customer is happy for the first 6 months as traffic levels have been low. Over time the traffic levels rise (from 30 user sessions a day to over 2000 sessions a day). The web site starts to crash regularly, the customer is very unhappy with the "quality" of the delivered software and wants it fixed. Well, you can see where this is going. The answer seems obvious: why didn't the supplier agree traffic levels with the customer up front and design the solution appropriately?

Well, it isn't always that easy. Did the customer even anticipate such high traffic levels? Is the customer tech-savvy enough to care about such things – isn't that what they pay you for? Shouldn't you have made some assumptions in this case?

When I'm dealing with a customer who isn't tech-savvy and doesn't know what their traffic levels are going to be, there's always the temptation to design for a large number of users, i.e. design BIG. Unfortunately, going large on these decisions costs more money, and very quickly you become uncompetitive and could end up with the customer walking away and cancelling the project before it's even started. The best thing to do in these circumstances is to agree an affordable level of scalability with the customer and then warrant the system to that level. Basically, agree that your design can handle, say, 100 concurrent sessions, and do your internal performance testing to that level. You're then only warranting the system to 100 sessions: if the customer exceeds these limits the solution may continue to operate comfortably, but you won't fix any issues that occur without charging for it.
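A very rough sketch of what "do your internal performance testing to this level" might look like: fire the warranted number of concurrent sessions at the application and count the failures. The URL and session count are placeholders, and a real test would script realistic user journeys and measure response times too.

```csharp
// Crude concurrent-session smoke test against the warranted limit (placeholder URL).
using System;
using System.Net;
using System.Threading;

class WarrantedLoadSmokeTest
{
    const int WarrantedSessions = 100;
    static int failures = 0;

    static void Main()
    {
        Thread[] sessions = new Thread[WarrantedSessions];
        for (int i = 0; i < WarrantedSessions; i++)
        {
            sessions[i] = new Thread(RunSession);
            sessions[i].Start();
        }
        foreach (Thread t in sessions) t.Join();

        Console.WriteLine("{0} of {1} sessions failed.", failures, WarrantedSessions);
    }

    static void RunSession()
    {
        try
        {
            using (WebClient client = new WebClient())
            {
                client.DownloadString("http://demo.example.com/MyWebApp/");   // placeholder URL
            }
        }
        catch (WebException)
        {
            Interlocked.Increment(ref failures);
        }
    }
}
```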

For further reading on the subject of NFRs, here's a link to an extremely thorough list of NFRs that I use daily! Volere Requirements Specification Template