Agile Estimation & Release Planning

The Traditional Estimation Approach

If we have a pile of rocks at station A and want to move them to station B, the traditional approach is to look at the rocks, take a sharp intake of breath, and come up with a guess of how long it will take.

This guess is now transformed into an estimate, which sounds a lot more official and professional, and it plays its part in setting the expectations of the customer or stakeholder. (We can add decimal places to make the estimate look more precise and scientific, and we can use our extensive rock-moving experience to give it some substance, but the harsh reality is that it is still just a guess.)

Now that our stakeholders’ expectations are set and locked in, we are tasked with moving the rocks and sticking to our guess. The natural world works within the laws of physics and not hypotheticals, so it is highly likely that our guess is inaccurate and probably too optimistic. We may not be delivering to our guess at all; more likely we are falling behind as we tire from moving the rocks (this is a dynamic problem, not a linear one), and our stakeholder is getting upset because their expectations are not being met. So we work a bit harder and really try to commit. We might even look the stakeholder in the eye and voice our commitment, but we are still falling behind our guess. Eventually we finish. We are relieved that the hard work is over, and the stakeholder is somewhat disgruntled because we didn’t deliver on their expectations. All because of a guess in which there is absolutely no certainty, no matter how many decimal places we add to it.

If this sounds familiar, you are not alone, and yet it is troubling that, with all the practical reasoning in the world, we still place so much value upon a guess.

[Figure: Agile and Traditional Estimation]

An Alternative Estimation Approach

An alternative approach is to move away from guesses altogether and instead use evidence and actual measures to inform how long things are going to take. One approach is to do the activity of moving rocks from station A to station B for a short period of time, say 5 minutes, and then evaluate how many rocks we actually moved in that time. We can do this a couple more times, which gives us three data points for how many rocks we can move in a 5-minute period. We can then form an average Velocity across the three data points for how many rocks we can move in a 5-minute time-box.

Now all we need to do is count up how many rocks we have at station A and divide by our Velocity measure to determine how many 5-minute time-boxes it will take to move the rocks from station A to station B. We can now set the customers’ or stakeholders’ expectations with some degree of evidence rather than a mere guess.
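To make the arithmetic concrete, here is a minimal sketch in Python for the rock-moving example; the trial results and the number of rocks remaining are made-up figures, purely for illustration.

    # Minimal sketch of the rock-moving arithmetic (all numbers are made up).
    import math

    rocks_moved_per_trial = [12, 10, 11]   # rocks moved in each of three 5-minute trials
    velocity = sum(rocks_moved_per_trial) / len(rocks_moved_per_trial)   # average rocks per time-box

    rocks_at_station_a = 220               # rocks still waiting at station A
    timeboxes_needed = math.ceil(rocks_at_station_a / velocity)

    print(f"Average Velocity: {velocity:.1f} rocks per 5-minute time-box")
    print(f"Estimated 5-minute time-boxes needed: {timeboxes_needed}")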

It’s still a dynamic problem where we might become tired towards the end, when it will take longer to move a single rock than at the start when we were keen and fresh, and so estimates will never be accurate enough to describe exactly what will happen in the natural world. However, at least this approach to estimation is based upon some evidence of actually doing the activity.

In Practice

If we use this approach with Scrum, for example, then all we have to do is think of the 5-minute time-box as a Sprint and the rocks as Product Backlog Items that we transform into working software. Station A is the Product Backlog and station B is the potentially releasable product Increment.

We will have to size the Product Backlog Items (the rocks) on the Product Backlog (the pile of rocks) in terms of how big and heavy they are and how much effort is involved in moving them before we can determine our Velocity. To do this we may use simple techniques such as Planning Poker®, Affinity Estimation or Ouija Board Estimation, to name but a few. Once the Product Backlog Items are sized, we can measure how many we can get through in a Sprint to provide the Velocity. Our exhibited Velocity in the first few Sprints can then be used to provide a moving average, which we can use to determine how long the release on the Product Backlog might take and, more importantly, whether we need to make any decisions. (Refer to the section on Metrics for techniques on how we can measure and describe our progress in the form of charts.)
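As a rough illustration, the moving average might be computed along these lines; the Sprint velocities and the three-Sprint window are assumptions for the sake of the example, not values taken from any real team.

    # Illustrative moving-average Velocity over the most recent Sprints
    # (Sprint data and window size are assumptions for the example).
    completed_sprint_velocities = [18, 23, 21, 26]   # story points finished per Sprint
    window = 3                                       # average over the last three Sprints

    recent = completed_sprint_velocities[-window:]
    moving_average_velocity = sum(recent) / len(recent)

    print(f"Moving-average Velocity: {moving_average_velocity:.1f} points per Sprint")

A short window keeps the average responsive to recent changes in the team, while a longer window smooths out one-off good or bad Sprints.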

Units of Measure

A very popular unit of measure is to use Story Points. Although not explicitly mentioned in the Scrum Guide, Story Points tend to be used by a large number of teams as their unit of measure for estimation and metrics.

Story Points typically use a Fibonacci-like or Planning Poker® sequence, which grows roughly exponentially rather than linearly: the larger something is, the more unknowns it carries, and so the more uncertainty surrounds it.
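The exact values vary from team to team, but one commonly used Planning Poker® scale is the modified Fibonacci sequence shown below; the small helper that snaps a raw guess to the nearest card is just an illustration, not part of any standard.

    # A commonly used Planning Poker® scale (a modified Fibonacci sequence).
    # Exact values vary between teams; treat these as illustrative.
    PLANNING_POKER_SCALE = [1, 2, 3, 5, 8, 13, 20, 40, 100]

    def nearest_card(raw_estimate: float) -> int:
        """Snap a raw relative-effort guess to the nearest card on the scale."""
        return min(PLANNING_POKER_SCALE, key=lambda card: abs(card - raw_estimate))

    print(nearest_card(6))    # -> 5
    print(nearest_card(17))   # -> 20

Note how the gaps widen as the numbers grow: a big item can only land on a coarse value, which reflects the greater uncertainty surrounding it.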

(Tip: If a Product Backlog Item sits at the large end of the scale, it is probably best to break it down into several more manageable items at the lower end of the scale. That way you avoid the large number of unknowns attributed to large PBIs.)

Using these techniques, items are then sized relative to each other. If I first pick an item that is deemed to require a medium effort to complete, and then pick up a second item, I can ask myself whether it is more or less effort than the first item. I can also ask whether it is twice or three times the effort of the first item, to help assign a number to it.

(Tip: The first item is always the most difficult to estimate and the subsequent items are easier as they are done in relation to that first item. Hence, it is probably best to select a medium effort item to begin with and work from there.)

Items should be sized in terms of the effort to complete them, that is, the effort to transform them from a Product Backlog Item into a high-quality piece of working software that is a potentially releasable Increment according to the Definition of Done.

Forming A Release Plan

After sizing the Product Backlog Items, we can run a few initial Sprints to determine the average Velocity, understand how much work is on the Product Backlog, and gain an informed view of how long it may take to complete. This realisation can often lead to further discussions about how to refine the Product Backlog to provide the best possible product within the time and capacity available.

We now have all the information we need to form a Release Plan that captures, with a reasonable amount of certainty, the intentions for the next few Sprints and what is probably going to be delivered in the release. Notice that even with evidence-based metrics such as Velocity, there is still no absolute certainty about what will actually happen once the team begins the work and the unknowns begin to appear.
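One way to express that residual uncertainty is to forecast a range of Sprints rather than a single number, for example by dividing the remaining work by the best and worst Velocities observed so far; the figures below are again purely illustrative.

    # Sketch: forecast the release as a range of Sprints rather than a single number
    # (all figures are illustrative).
    import math

    observed_velocities = [18, 23, 21, 26]   # story points per Sprint so far
    remaining_points = 160                   # sized work left in the planned release

    optimistic = math.ceil(remaining_points / max(observed_velocities))
    pessimistic = math.ceil(remaining_points / min(observed_velocities))

    print(f"Forecast: between {optimistic} and {pessimistic} more Sprints")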

The ideal Release Plan is one where items that are close to being worked on in the next few Sprints are broken down into small, manageable Product Backlog Items that can be consumed by the team. Items that are a bit further away may be a little larger and a little more ambiguous, while the items furthest away from being worked on remain quite large and relatively unknown.

Leaving the items further away more ambiguous is intentional: it provides room for interpretation and adaptation, as new knowledge acquired in earlier Sprints will inform how these items are refined further. Breaking large, ambiguous items down too far ahead of time can be a waste of time and money if we then need to do them differently based on feedback from earlier Sprints. Hence, it is good practice to refine and curate the backlog a little at a time, constantly. Also, breaking everything down ahead of time may lead to a large number of items that are difficult to prioritise and order, whereas a dynamic Product Backlog that is constantly refined allows decisions to be made throughout its life.

The intention is to balance two needs:

  • searching for feedback and evidence to suggest the right product (e.g. prototypes)
  • refining and converging the backlog into the best possible product within the time and cost constraints

Searching and converging need to be balanced: searching activities, such as creating prototypes and acquiring evidence to inform which product is the right one, tend to happen towards the start of a release, and the product is then refined and converged into its final form towards the end.

The Product Backlog and Release Plan should be constantly curated, adjusted and adapted as more information becomes available from the Sprints to ensure it is the most appropriate Release Plan with the information available.

Forget Everything I Just Told You

Why do we need estimates and release plans in the first place? Why do you need to know how long things are going to take? If we moved away from the whole concept of projects and releases and instead adopted continuous delivery techniques, we could focus on a constant stream of work that refines our software assets.

The notion of batching up our work into a project with predicted costs and timeframes now becomes redundant, and instead we focus on continual releases of small refinements to the end product. (If things go wrong, we can choose to roll back the small change or fix it in the next change.) Now we could have a rolling horizon of small changes that only present small risk at a time.

Our focus now shifts from measuring time, cost and scope to thinking about the value of the Product Backlog Items we work on and what they add to the end product. The energy we used to spend on estimating and planning can instead go into thinking about the feedback and how we should refine the product.
