Over-commitment

When I wrote the “Long time, no see” post in December I was planning to resume writing regularly (at least bi-weekly) but since then I’ve only managed one. I thought I would have some spare time that could be put towards it but other things got in the way.

Does that sound familiar? How many teams start a sprint having planned to complete a number of things, only for that plan to go awry? If your team hasn’t experienced it yet, then it’s just a matter of time before you do… or maybe you’re playing it safe and you have a second planning session mid-sprint.

I’m not saying this is a bad thing – stuff happens and not adapting would be worse. Sticking blindly to a plan and ignoring reality sounds ridiculous, and yet many organisations still do precisely this.

So how can we reduce the chances of our plans being impacted? Plan smaller steps; discuss risks and dependencies as part of planning; understand your capacity; and say “no”, or at least “not right now”.

Plan smaller steps

If you plan once a year (e.g. an annual budget cycle) then try moving to quarterly planning. If you create quarterly roadmaps that describe what needs to be delivered, then break it down into smaller feedback loops. If you’re a scrum team that plans four-week sprints but finds yourself replanning during sprints, then try two-week sprints. Whatever your cadence, if your plans often need updating then try a shorter timeline. It’s also possible that your plans are too detailed or rigid; there needs to be room to adapt based on learning, discovery and feedback. This is also why there are multiple “planning layers” – the team meet (at least) once a day to coordinate their efforts, share new insights, identify obstacles and make adjustments as needed. The shorter these feedback loops are, the sooner the team can respond.

Discuss risks and dependencies as part of planning

Sprint planning isn’t simply for the team to decide what “fits” – there should be (or have been in preparation for planning) discussions about the risks, assumptions, issues and dependencies. The somewhat-controversial Definition Of Ready is a good place to keep a list of reminders of the types of things to consider, e.g. if it takes a while to get access to a particular system, then make sure the team knows to start that process ASAP (in extreme cases, it may mean postponing a story until the access is granted). What else might throw the plan off course? It’s not possible to predict and anticipate everything but the team should learn from past “gotchas” and try to avoid repetition.

Understand your capacity

Even if everything goes smoothly, there’s no point in planning for more than could be realistically achieved. If a team uses story points consistently, then they probably have a good guide as to their capacity. Even without putting a number on it, stable teams should have a good feel for what is possible. The trouble with both those approaches is that teams can believe (or be cajoled into believing) that they can do more “just this once”. Forecasting (e.g. using cycle time) is based on the reality of historic data and is less open to “interpretation” or wishful thinking.
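To make the forecasting idea concrete, here’s a minimal sketch of a throughput-based Monte Carlo forecast: sample from the team’s own historic weekly throughput many times and report a percentile, rather than asking anyone to “interpret” their capacity. The function name and the sample data are hypothetical, purely for illustration.

```python
import random

def forecast_weeks(throughput_history, backlog_size, simulations=10_000, seed=42):
    """Monte Carlo forecast: how many weeks to finish `backlog_size` items,
    sampling each simulated week's throughput from the team's own history."""
    rng = random.Random(seed)  # fixed seed so the forecast is repeatable
    results = []
    for _ in range(simulations):
        done, weeks = 0, 0
        while done < backlog_size:
            done += rng.choice(throughput_history)  # replay a historic week
            weeks += 1
        results.append(weeks)
    results.sort()
    # 85th percentile: 85% of the simulated futures finished by this week
    return results[int(simulations * 0.85)]

# Hypothetical data: items completed in each of the last eight weeks
history = [3, 5, 2, 4, 6, 3, 4, 5]
print(forecast_weeks(history, backlog_size=30))
```

The point of the percentile is that the answer is a probability (“85% of simulated futures finish by week N”), which is much harder to wish away than a gut-feel capacity number.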

Say “no”, or at least “not right now”

The previous steps can help improve the likelihood of success, but one other defining factor is whether the team feels they can say “no” to taking on “just one more thing”. If they feel they have to say yes to everything then there is no true commitment. (I know commitment is a controversial term; I’m using it as shorthand for the team telling those outside the team what they believe is achievable.)

In short: don’t make promises, because things change. Small steps with frequent feedback let your customer/sponsor see progress (which should lead to increased confidence) and collectively you can make adjustments based on new information.

I’m not committing (out loud or even just in my head) to updating this blog every two weeks, but I will try to write when I have time and have something to say. 🙂

Coaching backlog

One challenge as a coach is deciding where to spend your time – do I help team A or team B or split my time between them? If I had the capacity to give both the attention they need, there would be no problem, but of course there’s never enough time to go round. Like so many things, it comes down to prioritisation… but what goes in the backlog? Personally, I tend to think in terms of experiments, e.g. will smaller stories improve the team’s ability to respond to changes in direction?

(I should add that when I say “team” I’m actually thinking about the people in the dev team, the other people they work with, the stakeholders, and the system which encompasses them.)

But who creates the backlog items? Obviously, there are things which come up in conversations with the team members and stakeholders, but there are also observations and comparisons between the current system and a potential “next level”. Now, we all know there aren’t actually “levels”, but there are things we tend to observe in high-performing teams compared to relatively new teams. I’m not going to touch on “assessments” here – I’ll save that for a future post.

To share this backlog with the coaching stakeholders (which include the team), rather than keeping a simple backlog of stories, something like a POPCORN board can help support the conversations around which experiments to try and what the outcomes were. POPCORN is a backronym for: Problems & observations; Options; Possible experiments; Commitments; Ongoing; Review; Next – you can watch Claudio Perrone, aka Agile Sensei, present POPCORN at Agile Testing Days 2017.

Priorities Change

One of the Product Owner’s key responsibilities is to ensure the backlog (well, the top of it, at least) reflects the stakeholders’ priorities. Of course, the certainty of those priorities decreases the further out you try to look: we ought to be pretty confident in the priorities for the next couple of weeks, less so for a couple of months away, and if we’re thinking six months ahead then certainty is quite low. (Your timescales may be different but the “funnel” model should still be applicable.)

The reason certainty decreases the further into the future you try to predict is that things change! Hopefully, you’re learning more about your customers over time, but also their needs change. There are also changes in technology which mean something you couldn’t build last year is now possible, or maybe the costs have dropped so now it’s worthwhile.

Like everything we do in Agile, it’s about feedback: we think the customer needs X, so let’s build just enough to test the market, i.e. a Minimum Viable Product. Based on the learning from that we should adjust the backlog, so putting too much effort into the details of a plan that’s likely to change is waste. However much you believe you know what your customers need, if you spend time drawing up a 12-month plan and that plan doesn’t change, then either (a) you’re a genius, (b) you’re really lucky and should play the lottery, or (c) you’re not listening to your customers.

Even in a tightly regulated market, things change – if your product is constrained by legislation, there can be changes resulting from interpretations of the law or even changes in the law itself. If your plan is unable to reflect those changes in a timely manner, don’t be surprised when someone beats you to the release of a “fully compliant” version.

The key is to have a plan which gives enough context for the current/imminent work but which is flexible enough to react to changing information, as well as to have people who understand that a plan could (and should) change when circumstances change.

Maybe that’s the most important piece: having people who ask “why?” when a plan doesn’t change.