Coaching backlog

One challenge as a coach is deciding where to spend your time – do I help team A, team B, or split my time between them? If I had the capacity to give both the attention they need it wouldn’t be a problem, but of course there’s never enough time to go round. Like so many things, it comes down to prioritisation… but what goes in the backlog? Personally, I tend to think in terms of experiments, e.g. will smaller stories improve the team’s ability to respond to changes in direction?

(I should add that when I say “team” I’m actually thinking about the people in the dev team, the other people they work with, the stakeholders, and the system which encompasses them.)

But who creates the backlog items? Obviously, there are things which come up in conversations with the team members and stakeholders, but there are also observations and comparisons between the current system and a potential “next level”. Now, we all know there aren’t actually “levels”, but there are things we tend to observe in high-performing teams compared to relatively new teams. I’m not going to touch on “assessments” here – I’ll save that for a future post.

To share this backlog with the coaching stakeholders (which include the team), rather than using a simple backlog of stories, something like a POPCORN board can help support the conversations around which experiments to try and what the outcomes were. POPCORN is a backronym for: Problems & observations; Options; Possible experiments; Commitments; Ongoing; Review; Next – you can watch Claudio Perrone, aka Agile Sensei, present POPCORN at Agile Testing Days 2017.

Big changes, little changes

Some recent conversations have included questions around whether small changes or large changes are “better”. Unsurprisingly, my answer was “it depends” 🙂

Even though our goal might be to achieve a big change, I would recommend making a series of small changes to get there because we might learn all sorts of things along the way, including whether we’re actually going in the desired direction (it’s possible our small change has taken us down a side road) and whether the goal is still the goal.

OK, but what kind of changes does this apply to? All of them, I think. For example, if the aim is to change the users’ experience by adding a new feature (that sounds like a big change), we would usually break it down into epics and stories, and maybe think about multiple releases… but we probably want to start with some experiments to see if we’re building something the users want, i.e. get feedback early. So the large change is broken into smaller changes (releases, epics) and then into even smaller changes (stories, experiments).

But how does the team learn to do this decomposition? Well, learning a new skill can be a big change, so let’s break it down into small changes too. There are different techniques, so pick one, learn it (e.g. read about it, watch a video), and practise it – ideally with feedback from someone who has experience of the technique.

This principle can also be applied to coding: small, frequent check-ins make it easier to spot problems – when a test fails you’re only looking through a few lines of code, not pages and pages. They also reduce the chance of merge conflicts, because there’s less time between pull and commit and therefore a lower probability of someone else having changed the file you’re working on.
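As an illustration (not a prescription), here’s a minimal sketch of that habit as a “commit on green” helper script. It assumes a git repository and a pytest test suite – the helper name, the commands and the flow are my own example, not something from any particular team’s toolset – so swap in whatever your team actually uses.

```python
"""A minimal "commit on green" sketch: run the tests and, if they pass,
check the small change in straight away and pull others' work promptly.
Assumes git and pytest are available; adjust to your own tooling."""

import subprocess
import sys


def run(cmd):
    """Run a command in the current directory and return its exit code."""
    return subprocess.run(cmd).returncode


def commit_on_green(message):
    """Check in only when the tests pass, keeping each commit small."""
    if run(["pytest", "-q"]) != 0:
        # Only a few lines have changed since the last green commit,
        # so the failing test points at a small amount of code.
        print("Tests failing – fix before checking in")
        return False
    run(["git", "add", "-A"])                       # stage the small change
    if run(["git", "commit", "-m", message]) != 0:
        return False                                # nothing to commit, or a hook failed
    run(["git", "pull", "--rebase"])                # integrate others' work promptly
    return True


if __name__ == "__main__":
    commit_on_green(sys.argv[1] if len(sys.argv) > 1 else "small step")
```

The script itself isn’t the point – the point is that the gap between pulling, changing and committing stays small, so both the debugging and the merging stay small too.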

So, back to the original question: are small changes or large changes “better”? Well, I think the answer is that small changes are what enable you to make large changes, so it depends on whether you’re looking at the end goal or how you’re going to achieve it.

[Here’s the video of the domino chain reaction]

Finding value

As part of the warm-up for a retrospective this week, we were asked to share an article of clothing which represented the previous sprint. I picked a photo of a jumble sale because I felt we had been trying to find the value in a pile of possibilities – there are many things we could do, but which is the “best”? At a jumble sale, you really shouldn’t try something on, put it back, and repeat until you find the thing you want; we, however, have the benefit of being able to conduct small experiments to see if we’re doing the right thing.

By taking a small step and inspecting the outcome, we can decide whether to continue in that direction, make a small tweak based on what we’ve learned, or maybe go back and try something different.
To avoid any cognitive bias, we should be clear beforehand about what criteria we will use to analyse the outcomes; stating those criteria also helps identify any metrics we should collect.
For me, this is the heart of agility – short feedback loops that enable us to learn and react. That reaction helps us deliver what our customers need, and stop working on something as soon as it stops providing value.