Measuring and predicting delivery

29 08 2015

I’ve now spent 7 or 8 years working with agile teams of one flavour or another. I’ve always struggled with the way that I was taught to report progress in an agile team and have never been comfortable with predicting our ability to deliver future features.

I have found that conversations about current progress in a team always needed to be preceded by a conversation about the terms being used, what they mean, and what inferences the recipient can draw from the numbers.

This always felt like a bad pattern to me. When discussing the design of software I always try to model it using business-relevant terms so that we can share a single lexicon across business and technology. Yet when I report progress it is suddenly OK to use terms that the business at best only vaguely understands and at worst has no clue about.

Here I’m talking about the use of terms like “story points”, “velocity”, “scrum” and “burn-up”.

These opaque terms seem far away from the open, transparent communication that I am striving for.

Then it comes to planning for the future and it all devolves into guesswork – I’ve done it more times than I can count – sizing stories to plot optimistic and pessimistic scope lines, then trending “best case” and “worst case” velocity lines to give that quadrant of possible delivery dates.
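
For concreteness, here is a minimal sketch of that deterministic quadrant forecast. Every figure in it (scope sizes, velocity range) is invented for illustration:

```python
# A minimal sketch of the deterministic "quadrant" forecast described above.
# Every number here (scope sizes, velocities) is illustrative, not real data.

pessimistic_scope = 120   # remaining story points if everything we fear gets added
optimistic_scope = 100    # remaining story points if scope stays lean

best_velocity = 25        # "best case" points per sprint, from recent history
worst_velocity = 15       # "worst case" points per sprint, from recent history

# Crossing the two scope lines with the two velocity trends gives the corners
# of the quadrant of possible delivery dates.
earliest_sprint = optimistic_scope / best_velocity
latest_sprint = pessimistic_scope / worst_velocity

print(f"Earliest plausible finish: sprint {earliest_sprint:.1f}")
print(f"Latest plausible finish:   sprint {latest_sprint:.1f}")
```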

It always felt to me to be too heuristic – a little too much black magic.

About a year ago I read Daniel Vacanti’s book “Actionable Agile Metrics for Predictability”. I had tried Monte Carlo simulations for project planning before, but they were fed with heuristic values, so the results were not much better than the magic quadrant. After reading Daniel’s book I downloaded Troy Magennis’s “Focused Objectives” software and started to model different software projects – with the right inputs and measurements I was far more confident in my ability to predict a team’s capacity. I had finally found a way to use the data generated while measuring progress to apply a more rigorous and structured planning model.

The big realisation for me in using these tools is that project delivery can be modelled pretty well stochastically, but I’d always been taught to forecast deterministically. In other words, there are complex interactions in software delivery that I simply don’t account for very well using traditional models of forecasting. Capturing the right measures and using them to build a statistical model of the delivery has worked much better. Big shout out to Troy – his tool has been invaluable and his support when things go wrong is first class – I’d highly recommend downloading it and giving it a try (use the link above).
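
To make the idea concrete, here is a small Monte Carlo sketch in the spirit of that approach (it is not Troy’s tool, and the weekly throughput history and backlog size are invented): resample the throughput we have actually measured and count how many simulated weeks it takes to drain the remaining backlog.

```python
import random

# Sketch of a Monte Carlo delivery forecast: resample measured weekly
# throughput to estimate how long the remaining backlog might take.
# The throughput history and backlog size below are illustrative only.

weekly_throughput = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]   # stories finished per week, measured
remaining_stories = 40
trials = 10_000

weeks_needed = []
for _ in range(trials):
    done, weeks = 0, 0
    while done < remaining_stories:
        done += random.choice(weekly_throughput)      # sample one plausible week
        weeks += 1
    weeks_needed.append(weeks)

weeks_needed.sort()
for pct in (50, 85, 95):
    print(f"{pct}th percentile: {weeks_needed[int(trials * pct / 100) - 1]} weeks")
```

The output is a distribution of finish times rather than a single date, which is what makes statements like “85% of the simulated futures finish within N weeks” possible.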

I now measure as much as I can about the team – every state change of every story is timestamped. I measure the overall cycle time from when a story is first prioritised to when it is complete, and the time it takes to go through each stage. I keep scatter plots and distribution curves for each of these measures. I monitor the average age of stories on our backlog and the trend across average age and average cycle time. I actively intervene in the team’s processes to optimise these measures and keep us as predictable as possible – and it has made a big difference. It feels like the team has a constant delivery focus and rhythm, and the analysis bears this out.
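
As an illustration of the measurement side, here is a rough sketch of deriving cycle times from timestamped state changes; the story structure, field names and sample data are my own invention rather than any particular tool’s format:

```python
from datetime import datetime

# Rough sketch: derive cycle times from timestamped state changes.
# The story structure and sample data below are illustrative only.

stories = [
    {"id": "A-1", "transitions": [("prioritised", "2015-06-01"),
                                  ("in progress", "2015-06-03"),
                                  ("done", "2015-06-10")]},
    {"id": "A-2", "transitions": [("prioritised", "2015-06-02"),
                                  ("in progress", "2015-06-05"),
                                  ("done", "2015-06-19")]},
]

def cycle_time_days(story, start_state="prioritised", end_state="done"):
    """Days from the first time a story enters start_state until it reaches end_state."""
    times = {}
    for state, timestamp in story["transitions"]:
        times.setdefault(state, datetime.strptime(timestamp, "%Y-%m-%d"))  # keep first entry per state
    return (times[end_state] - times[start_state]).days

cycle_times = sorted(cycle_time_days(s) for s in stories)
print("Cycle times (days):", cycle_times)

# With enough completed stories, the 85th percentile of this list becomes a
# useful service-level expectation: "85% of stories finish within N days".
index = max(0, int(len(cycle_times) * 0.85) - 1)
print("85th percentile:", cycle_times[index], "days")
```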

That means I am far more confident in my forecasts of how much work we can get through and when we will finish the items we are currently working on.

