Accelerated Agile – comparing theory to practice


I recently attended a training session run by Dan North (@tastapod) called Accelerated Agile. This blog post summarises what we learnt. Dan argued that some organisations/teams using Agile practices have lost sight of actual achievement and are unconsciously going through the motions, too comfortable in the fact that they are ‘doing Agile’. Why might this be happening, and how can it be avoided?

[Photo: Dan North in full flow at Foo Cafe]

Have we become too comfortable with Agile?

A theme from the day was that the business’s desire to measure the progress or performance of an agile team encourages the team to work repetitively rather than smartly. The irony of this position is that the dev team is unable to give meaningful information to the business, and so the business ends up adopting a way of working that is built on incorrect information. By insisting on this, the business is both taking time from the dev team and failing to adopt the close, collaborative working relationship that would deliver real benefits.

The rest of the blog provides some more information that leads to the above thoughts.

Generally, text formatted like this highlights areas where (largely unknowing) management practice is likely to result in suboptimal outcomes (i.e. the team will be ‘doing agile’ but not gaining as many benefits as it could).

A bit of background

Agile came about trying to solve the problem of IT projects that had often been running for long periods without delivering anything, and it introduced the concept of short cycles and regular feedback (OK, so obviously there were other drivers, but bear with me…).

Agile practitioners did this by using the concept of very short, regular delivery (and constant customer feedback). Thus we have the concept of iterations/sprints with showcases to customers at the end. The team estimates in points, and the actual achievement of the last iteration(s) is used to predict the velocity (points completed) of the next. Thus the team is only estimating how much it will achieve in the next (say) two weeks, and it is doing so on the basis of actual recent achievement.

As an aside, the business needs to know when the team thinks it will ‘finish’ the work, and it is irresponsible to answer that the team can only look at the next iteration.  An acceptable answer is that once the team has built up a reasonable history on the project it can reasonably forecast team velocity and can then match that to the roughly sized backlog to create an answer – though blending that with the blink estimation idea at the end of this blog is also an option.
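
As a minimal sketch of that forecasting arithmetic (the velocities, backlog size and function name below are invented purely for illustration), a rolling average of recent velocities matched against a roughly sized backlog gives a rough ‘iterations remaining’ answer:

    # Forecast remaining iterations from recent velocity and a rough backlog size.
    # All figures here are made up for illustration only.
    from statistics import mean

    def forecast_iterations(recent_velocities, backlog_points):
        """Rolling-average velocity gives a rough 'when will we finish?' answer."""
        velocity = mean(recent_velocities)   # points completed per iteration
        return backlog_points / velocity     # iterations still needed

    recent_velocities = [21, 18, 24, 20]     # the last four iterations
    backlog_points = 160                     # roughly sized remaining backlog
    print(f"~{forecast_iterations(recent_velocities, backlog_points):.1f} iterations to go")

The point is not the precision – it is that the answer is derived from actual recent achievement rather than up-front guesswork.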

Another aside: I once worked on a project where the team was using Scrum but its management was reviewing it using traditional methods. It was the kind of project that was in continuous discovery, yet they wanted to know when the project would be finished. As the manager I was pressured into providing a date that was hopefully far enough in the future for the team to achieve… but we never met our iteration commitments, right from the start. By focussing on the end-of-project date I was able to claim for far too long that the project was (or might be) on track. Of course, when I had to admit that it couldn’t be done, the impact was significant. If the review team had focussed on why stories weren’t closing at the end of each iteration, we could have discussed the problem far, far earlier, when it had a far lesser effect.

The management practices that come along with this review the team’s actual achievement and want a regular, predictable velocity over those sprints.  Thus the mentality is somewhat “Stabilise” (see below), but in order to be truly agile we ought to recognise that we need creativity at all points.  It is all too easy to fall into the trap of running some spikes at the start of the project, addressing the hardest part first, and then falling into “Commoditise”.  There is nothing truly wrong with this – it’s just that, by recognising the fact that you can always do better, it is worthwhile maintaining micro-cycles of explore/build-out.

The Three Ages of Agile

It is often useful to be aware of whether you are acting in the context of Exploration, Stabilisation or Commoditisation.

Of course, if the team is micro-managed in its work, the likely drop in velocity that exploration brings will be noticed, and thus this behaviour is discouraged.

Another thought on the relationship between development and its customers is that the customers reserve the right to be in Explore mode – e.g. “I can’t tell you the full requirement, but I still want you to deliver predictably”, i.e. they expect development to stay in Stabilise mode… (the business gaining confidence in development by monitoring velocity rather than mixing in with the team).

Is avoiding problems the right strategy for reducing risks?

Normal project behaviour really assumes that the impact of a bad event is worth avoiding.  Risk/issue registers have a column that indicates mitigating action, so we focus on those risks/issues, but IT projects suffer from second-order ignorance – i.e. we don’t know what we don’t know.

We can test this by asking the question: if you had to do exactly the same project again, but knowing at the start what you know now, how much quicker could you have done it? The answer is maybe 40%–60% faster (i.e. significantly faster).

Therefore any method that relies on being fed correct information is doomed to failure – I have heard it said that we can estimate a project to the precision of plus or minus one project.

But Agile methods are designed to minimise the impact of any bad things that happen (high test coverage, the ability to change direction fast, regular reporting over short periods, an onsite customer, business-meaningful deliverables), and we very reasonably expect bad things to happen… and thus we don’t mind.

But what we in the industry haven’t generally done is change the mindset of those who commission or monitor our work: to accept that unknown things will always happen, and that what matters is working with us, reviewing the time it takes to deliver the required benefit (over a long period of time) and making that delivery faster (removing blockers). There is still a mentality that monitoring progress against the latest (inevitably incorrect) view of completion is the best way to achieve results.

Fits in my head

Real benefit/agility comes from fully understanding what you are looking at and having the confidence to be able to change it for the better.  This requires developers with high skills, good domain knowledge and a suitable codebase. If the codebase has too many interactions/is too complicated then the desire/ability to refactor will be diminished. A reasonable test for this is: looking at the code I am interested in modifying – can I fit it into my head?

An interesting observation was that TDD can side-step this issue by encapsulating any complexity in a multitude of tests. A focus on raw simplicity is a better goal than high test coverage.

Of course it is easier to measure test coverage than simplicity: more companies have policies on test coverage than on cyclomatic complexity.
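
Complexity is measurable, though, even if few teams track it. As a rough illustration (my own sketch, not something from the course, and no substitute for a real analyser), cyclomatic complexity can be approximated by counting branch points per function:

    # A crude cyclomatic-complexity proxy: count branch points per function.
    # This is an illustrative sketch only.
    import ast

    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

    def complexity(source: str) -> dict:
        """Return {function name: 1 + number of branch points} for a module."""
        scores = {}
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
                scores[node.name] = 1 + branches
        return scores

    print(complexity("def f(x):\n    if x:\n        return 1\n    return 2\n"))  # {'f': 2}

A ‘fits in my head’ check could be as simple as flagging any function whose score creeps into double figures.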

Micro service architecture

As can be seen above, maximal benefit occurs if the code in question can fit in your head – this implies that it needs to be small, and good SOLID OO principles encourage this.  Each small component needs to be suitably isolated, tested separately and ideally deployable separately (minimising the delay between a software change and its benefit).
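
A minimal sketch of what ‘suitably isolated’ might look like in code (the payment example and every name here are mine, purely illustrative): a component that depends only on a narrow interface can be tested – and, in a microservice world, deployed – without dragging the rest of the codebase into your head.

    # A component behind a narrow interface, testable independently of its callers.
    # The payment example and all names here are illustrative only.
    from typing import Protocol

    class PaymentGateway(Protocol):
        def charge(self, account_id: str, pence: int) -> bool: ...

    class CheckoutService:
        """Knows nothing about the gateway's implementation, only its interface."""
        def __init__(self, gateway: PaymentGateway) -> None:
            self.gateway = gateway

        def place_order(self, account_id: str, total_pence: int) -> str:
            return "confirmed" if self.gateway.charge(account_id, total_pence) else "declined"

    # In a test we substitute a fake gateway and the whole thing fits in our head.
    class AlwaysApproves:
        def charge(self, account_id: str, pence: int) -> bool:
            return True

    print(CheckoutService(AlwaysApproves()).place_order("acct-1", 1999))  # confirmed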

Multi-team development, remote development, and pools of teams picking up the next project all make this kind of codebase much harder to achieve.  It requires very high discipline to develop and maintain a codebase in this shape.  It is surprisingly quick (as little as three weeks, I have heard quoted) before a codebase can become reasonably compromised – and focussing on velocity/delivery to date can unknowingly promote this decay.

Similarly, a focus on projects rather than programmes will encourage this bad behaviour. The goal of a project is to deliver known items in a known time/cost, which encourages short-term goals – goals other than code quality. A programme being delivered by stable teams, on the other hand, has a different focus.

An analogy might be the early work on game theory (e.g. the prisoner’s dilemma), which concluded that a lack of trust in the opposing player made defection the only reasonable behaviour (a lose:lose situation) – going for win:win was too risky, because the other player gained more from win:lose than they would from win:win. But when the game was played with the same players many times over, there was time to react to reputation (past behaviour), and so more win:win outcomes emerged. A company focussing only on individual projects will lock itself into a lose:lose cycle.
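
A minimal simulation of that analogy (the payoff table and strategy names are the standard textbook ones, not figures from the course) shows the same shape: mutual distrust yields lose:lose, while repeated play with the same partner lets cooperation pay off:

    # One-shot-minded vs repeated prisoner's dilemma with standard textbook payoffs.
    # 'C' = cooperate, 'D' = defect; payoffs are (my points, their points).
    PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def always_defect(opponent_history):
        return "D"

    def tit_for_tat(opponent_history):
        # Cooperate first, then mirror the opponent's previous move.
        return opponent_history[-1] if opponent_history else "C"

    def play(strategy_a, strategy_b, rounds):
        history_a, history_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(history_b), strategy_b(history_a)
            points_a, points_b = PAYOFFS[(move_a, move_b)]
            score_a, score_b = score_a + points_a, score_b + points_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    print(play(always_defect, always_defect, 10))  # (10, 10): mutual distrust, lose:lose
    print(play(tit_for_tat, tit_for_tat, 10))      # (30, 30): repeated play, win:win

Swap ‘player’ for ‘team’ and ‘repeated rounds’ for ‘a long-lived programme’ and the point about stable teams follows.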

Focussing effort where it provides most benefit

If the codebase is divided into manageable components it becomes possible to treat them separately, i.e. those that are most important or changing most can receive the most test/review attention, and those that aren’t can safely be downplayed. It seems intuitively obvious that this is a sensible way to behave.

Management likes simplicity in its KPIs, and so standards are often applied globally, which leaves no room for particular situations.

‘Release mentality’ is harmful

Another apparent way of minimising risk is to batch changes and ensure that everything going live is retested, i.e. large releases.  However, if all the software goes live together, then the release is as risky as the riskiest element in it, and the impact of that risk becoming real is as big as the worst case.  Thus the whole release has to be extensively tested, and effort cannot be focussed where it matters most to the business.
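
A back-of-the-envelope illustration of why batching hurts (the 2% per-change failure probability is invented purely to show the shape of the argument): if changes fail independently, the chance of something going wrong somewhere in the release grows quickly with batch size.

    # How the chance of 'something goes wrong in this release' grows with batch size.
    # The 2% per-change failure probability is invented for illustration only.
    def release_risk(per_change_failure: float, changes_in_release: int) -> float:
        """P(at least one change fails), assuming changes fail independently."""
        return 1 - (1 - per_change_failure) ** changes_in_release

    for batch in (1, 5, 20, 50):
        print(f"{batch:>2} changes: {release_risk(0.02, batch):.0%} chance of a problem")
    # 1 change: 2%, 5 changes: ~10%, 20 changes: ~33%, 50 changes: ~64%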

Question: is software an asset or a liability?

There seems to be a consensus on this issue – the benefit that software brings is the asset; the code itself is a liability (as it has to be maintained etc.).  A corollary of this is that consistency matters – it is better to be consistently suboptimal than inconsistently anything.

Thus we can now see why the quality of the codebase is of such paramount importance.  A bad codebase increases the liability of the code.

And yet development shops are often measured on productivity, which can easily morph into the amount of code developed or work performed, when what really matters is leaving the code as small and simple as possible.  I guess it is hard to measure this exactly, as new functionality is expected to increase the size of the code, but refactoring may offset some of that – so what does good look like?  The difficulty of understanding and measuring this leads to the measurement of other aspects… often with the undesirable effects mentioned here and by Dan North.

If we have to live with the unknown – how can we limit its impact?

I thought this was an interesting concept.  Dan suggests that there is a way of chasing out what we don’t know we don’t know – I’m sure we are already familiar with developing the thinnest slice/walking skeleton or addressing the hardest part of the architecture first, but a new suggestion to me was to consider that ignorance isn’t evenly spread. Thus if you get a disparate group of individuals together (early in the project) and discuss issues suspected of being interesting… or even encourage a free-flowing discussion about the project, then different people will chip in with information of significance.

The sound of a project in this mode is a series of ‘oh really?‘ emerging from the team.  The sound of this information being discovered later is ‘oh sh*t!‘ plus sinking stomachs.

Blink Estimation

If you know that what you don’t know always has a major impact on your project’s size, then it makes no sense to put too much effort into bottom-up estimation. Top-down will be just as good and much quicker. However, top-down estimation requires:

  • Experts in (top down) estimation (who have worked on similar projects)
  • Expert messengers
  • Expert customers

Note that better results can often be obtained by spending time educating and communicating with your customer than by creating a spurious sense of accuracy in the estimate.

Surprising benefit of fast estimation

Since the estimate has been produced quickly (Dan suggested two hours!), the business is likely to view it in a different light.  It may be viewed as:

“We believe that we can create a viable, business-benefit-creating solution for this amount of money” (remember, we haven’t said exactly what we will deliver for the money) … rather than … “we will deliver this functionality for £x”.

This has the benefit that it moves the estimate from an us/them supplier quote to a joint business decision on whether or not to proceed.

Conclusion

What I think Dan North has revealed is how incredibly easy it is to lose many of the benefits that Agile can bring. Dan explained what we mean when we hear the rather bland statement “it’s not enough for IT to be Agile; it has to extend throughout the whole company”.
