Too busy to improve

I’m starting with an assumption: that we believe there is always the need for improvement (in what you produce and/or how you make it). If your product/team/system is perfect then you can stop reading now and go feed your unicorns instead. 😉

The next group I’ll invite to stop reading are those who believe it’s everyone else who needs to change, not themselves. I have worked for organisations where senior and middle managers want the coaches to “fix” the teams. Unfortunately, there can be no significant improvement until those people realise that they’re not just part of the solution; there’s a good chance they’re actually part of the problem.

Hopefully, we are agreed on the need to continuously improve, and to be part of those changes. So how are you demonstrating that? Do you make sure there’s time for experiments, or do you constantly push for more output? Do you encourage the team to discuss frustrations even when that might lead to questions about the organisation’s structure or decisions made higher up the org chart? Do you help the team look deeply at themselves too, because effective retrospection and root cause analysis are not easy?

One unfortunate anti-pattern I’ve seen is “we’re too busy to improve”. This may be understandable for a week or two, but when this persists then it’s a red flag. I’ve encountered variations of “the Product team want us to focus solely on delivery”, or “leave the teams to get their work done”, or even “if it doesn’t help us get stuff done then we don’t have time to look at it”.

This is a very short-sighted approach; I’ve seen it paired with cancelling vacation or requiring overtime. The focus becomes the quantity of output, often at the expense of quality, sustainability, and the team’s health. Sadly the eventual outcome is often the realisation that what was produced wasn’t even all that important or urgent.

Returning to the assumption that time spent on improvements will not help the team deliver more, it’s important to ask why. If the time spent on retrospectives each sprint (commonly one hour every two weeks) cannot be spared, then your plan is already incredibly fragile and very likely to fail – how will you cope if a team member is sick for a day, for example?

So rather than focus on the cost of improvement, think about the potential benefits. What if that one-hour retro resulted in an improvement that saves the team a few hours? What if the retro highlighted a concern about the quality or design of the work they’re producing? What if the conversation resulted in the discovery that a critical element had been skipped due to cutting corners because they are under pressure?
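To make that trade-off concrete, here’s a back-of-the-envelope calculation. All the numbers are hypothetical – the point is only that a one-off retro cost repays itself when even a small improvement keeps saving time in later sprints:

```python
# Hypothetical numbers: a 5-person team spends one hour in a retro,
# and one improvement from it saves each person 30 minutes per sprint.
team_size = 5
retro_cost_hours = 1 * team_size          # one-hour retro, whole team attends
hours_saved_per_sprint = 0.5 * team_size  # 30 minutes saved per person per sprint
sprints = 6                               # the saving repeats in following sprints

net_benefit = hours_saved_per_sprint * sprints - retro_cost_hours
print(net_benefit)  # 10.0 person-hours gained over six sprints
```

Even with deliberately modest assumptions, the retro pays for itself within a couple of sprints.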

Saying “we’re too busy to improve” is a step back towards waterfall; it sends the message that “we know everything we need to know” and that nothing can be learned from the work the team are doing. More damagingly, if that is believed to be true while under pressure, why would the team return to doing retrospectives and looking for improvements once the pressure is finally lifted?

Is it safe?

One of the agile principles is that the team have a supportive environment; Modern Agile explicitly says “Make safety a prerequisite”. It’s essential that the team can say “no” when asked to take on extra work, for example, otherwise saying “yes” has no meaning. We expect the team to commit to a sprint plan but if they cannot decide that a story is not ready (e.g. too many unknowns) or that taking “just one more story” will mean they cannot deliver them all, then any apparent commitment is just lip service.

However, many of the questions posed during software development can’t be answered with a simple yes or no – many require investigation (more than just a Google search!), so how can we make that safe? If the product owner wants to know whether moving the “buy now” button will increase sales, or the team wants to know whether splitting stories smaller will help them finish epics sooner, then the best way to find out is to try it and see, i.e. run an experiment.

I often hear people talk about “failed experiments” but that’s misleading: an experiment tests a hypothesis, and either outcome is a result. (OK, if you have constructed or executed it incorrectly, then maybe that’s a “failed experiment”.)

If we want to encourage a learning environment, we should get used to saying that the experiment proved our hypothesis to be incorrect, and that is something from which we can learn. For example, if the experiment’s hypothesis was that moving the “buy now” button to the top of the screen would increase sales by 5-10%, then (after collecting data from multiple customers) we could compare the sales figures for the button’s new location to its old location. An increase in sales could support the hypothesis; a decrease would refute it. If sales were zero then we probably broke something, and maybe that could be deemed a failure.

How do we make it safe to experiment? We must change our language, but we should also observe experiments closely and be prepared to abort them if we notice something outside the expected parameters, e.g. sales dropping to zero. As part of defining our experiment, we should define an expected range (e.g. sales go up or down by 10%) – if moving the “buy now” button causes a 50% increase in sales (i.e. well beyond expectations) then perhaps something has gone wrong and we should reset everything.
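The guardrail described above can be sketched as a simple check. The function name, thresholds, and sales figures are all illustrative assumptions, not a real monitoring system:

```python
def evaluate_guardrail(baseline_sales, observed_sales,
                       expected_change=0.10, abort_factor=5.0):
    """Return 'continue', 'abort', or 'investigate' for a running experiment.

    expected_change: the range we predicted (e.g. sales move +/-10%).
    abort_factor: a change this many times larger than expected suggests
    something broke, so we reset rather than celebrate.
    """
    if observed_sales == 0:
        return "abort"  # e.g. the "buy now" button no longer works at all
    change = (observed_sales - baseline_sales) / baseline_sales
    if abs(change) > expected_change * abort_factor:
        return "investigate"  # far outside expectations - possibly broken
    return "continue"

print(evaluate_guardrail(1000, 1050))  # continue: +5%, within the expected range
print(evaluate_guardrail(1000, 0))     # abort: sales dropped to zero
print(evaluate_guardrail(1000, 1600))  # investigate: +60%, well beyond +/-10%
```

Defining these boundaries up front is what makes the experiment safe: everyone knows in advance what “normal”, “broken”, and “suspiciously good” look like.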

While we’re thinking about the language we use regarding “failure”, another common phrase that needs to be stopped is “we failed the sprint”. Usually, this refers to the work planned for a sprint not being completed. However, sprints are timeboxes, which means they end whether you are ready or not. Sprints don’t fail, they end. If the committed work is not completed, then I would still think twice about calling that a failure – it feels like the lead-in to apportioning blame, which is far from creating a safe environment. Incidentally, if the root cause of not completing planned work was found to be the product owner or senior management, I suspect it would not be called a failure.

p.s. I managed to not reference the scaled agile framework or the scene from Marathon Man, so I’m calling that a success 😉

Does Mastery Matter?

Firstly I have to mention the inspiration for this post is a recent(ish) episode of The Agile Pubcast where Geoff Watts and Paul Goddard (both in the UK) were joined by Chris Williams from Canada, so this one matched many of my interests 🙂

Chris mentioned sporting events to see in Toronto but he missed that the Wolfpack has joined the North American Rugby League and (hopefully) we’ll get to see some rugby again soon.

I will give him kudos for recommending the Loose Moose – I’ve worked close to it for a few years and have enjoyed many of the beers they have on tap. I’m definitely in for a meetup if Paul & Geoff make it to Toronto! (Blanche de Chambly is fine but try adding orange juice to turn it into a beermosa!)

Their topic was “the concept of mastery as a discipline, and how failure is still a stigma”. Definitions of mastery include “full command or understanding of a subject”, “outstanding skill; expertise”, and “possession or display of great skill or technique”; in general it relates to knowing a subject thoroughly.

Personally I see it more as a journey than a destination – unless your chosen subject is quite tightly constrained then there’s probably always more to learn. In my experience, often when I think I’m close to reaching a peak I discover that it’s a false peak and the real peak is further, or sometimes that there are multiple peaks that I could strive for.

Whether it’s a journey or a destination, I believe mastery is essential. But does everyone need to master everything? No, of course not – for one thing, it’s not possible to master everything! Also, that’s part of the purpose of being a team – each team member brings their own strengths and interests. There used to be a popular concept of “T-shaped people”, meaning an individual with expertise in one aspect and a broad understanding of others, and then teams would be the sum of these Ts (and similar shapes).

Whenever I discuss mastery it’s a fairly safe bet that I will bring up one of my favourite films: Jiro Dreams of Sushi (2011). It focuses on 85-year-old sushi master Jiro Ono, his renowned Tokyo restaurant, and his relationship with his son and eventual heir, Yoshikazu.

I find it amazing to watch someone who has spent decades mastering his art and to hear him talk so profoundly about his approach: “You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honourably.”

How do agilists and sushi chefs get better? Incrementally. 🙂 “I do the same thing over and over, improving bit by bit. There is always a yearning to achieve more. I’ll continue to climb, trying to reach the top, but no one knows where the top is.”

As a prog rock fan, one of the musicians I follow is Robert Fripp (of King Crimson), and despite being recognised as a great guitarist he still practises for multiple hours every day. (Here’s a recent video clip of him talking about practice.)

I believe it’s important to take pride in what you do, and that means not just wanting to be good but wanting to improve. Those improvements often don’t come easily, and that’s where the discipline comes in – no-one can make you better if you don’t want to improve, so start by identifying the area(s) you want to master.

Watching the F1 Grand Prix this morning was a reminder that being at the pinnacle of motorsport requires lots of practice; repeatedly rehearsing for various situations (e.g. a pit stop) is how teams build mastery and resilience.

It’s also interesting to hear the drivers talk about learning from their experiences; they focus on finding something which can be improved for next time, not wallowing in how they failed. They study how a problem occurred, identify the root causes, decide what to change, and then implement it. With minimal testing time between races, F1 is like testing in production – a mistake can be expensive so you try to minimise the risk but when something goes wrong (because eventually it will!) you make sure to learn as much as you can.

When I see failure being stigmatised, it’s often down to a misunderstanding of how we work. We cannot be right 100% of the time, so we try to reduce the risk by working in teams (“two minds are better than one”) and making small experiments. The evaluation of an experiment should show the hypothesis is accepted or rejected – the experiment isn’t a failure.

For example, if an online retailer were to propose that changing the “buy now” button to red would result in more sales, then they could conduct an experiment. The result might be that it does show an increase in sales; alternatively, it may show no increase. That is not a failure – it is a new piece of information. If the “no increase” outcome is called a failure, then we will end up demotivating the team and killing creativity.
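One lightweight way to judge such an outcome is a two-proportion z-test. This is a sketch with hypothetical visitor and sale counts – one common statistical choice, not necessarily what the retailer would use:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Rough two-proportion z-score: is variant B's conversion rate
    genuinely different from A's, or just noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: 5000 visitors per variant; 200 sales with the old
# button colour, 205 with the red button.
z = two_proportion_z(conv_a=200, n_a=5000, conv_b=205, n_b=5000)
print(abs(z) > 1.96)  # False: no detectable increase at the 95% level
```

Here the result is “no detectable increase” – which, as argued above, is a new piece of information about the product, not a failure.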

Here’s an experiment to try: if we refer to the outcome of experiments as “discoveries” rather than “failures” then we will see more enthusiasm for conducting experiments and that will lead to a better product and a happier team.

Keep your backlog brief

Your product backlog is probably too long. A quick way to tell is to look at the items at the bottom of your backlog and consider two questions: do you know their origin (who requested them and why), and assuming the team works through the backlog and eventually completes those items will they still be valuable? There’s a good chance that the answer to at least one of those questions is no, and that’s an indication that your backlog needs attention.

I often refer to Henrik Kniberg’s “Agile Product Ownership in a Nutshell” video because he covers so many key points, especially the need for the PO to say “no” so that the backlog remains relevant and realistic. The backlog should not be a place where ideas go to die! When a stakeholder says “I really need feature X” they’re probably hoping to see that feature delivered in a few weeks or maybe a couple of months – if the PO has to tell them it’s going to be 6 months or more, then why are we doing agile?

Aside from the benefit of setting realistic expectations with your stakeholders, keeping your backlog brief makes it easier to see what’s there. (This is especially useful if you use a tool like Jira which randomly adds new items to the top or bottom of the backlog!) There is also less effort invested in creating those backlog items, which means there’s probably less resistance to throwing them away when the direction changes. The same can be true for the sprint backlog. If your sprint backlog is long and unwieldy, then the root cause may be that the team are breaking things down into overly detailed tasks, over-committing in sprint planning, or carrying too much capacity (e.g. the sprint is too long, or there are too many people in the team).

One concern I’ve heard about having a short product backlog is that it doesn’t give the team a sense of long-term stability, e.g. the team may be disbanded (or even laid off) once everything in the backlog is done. However, I think if the team are concerned about their future then there are bigger issues to address than the backlog, and artificially adding items to try to calm the team is disingenuous and contrary to the openness and honesty we promote.

The Scrum Guide describes refinement as “the act of breaking down and further defining Product Backlog items into smaller more precise items”, which is usually how product backlog items are made ready for sprint planning. I think what’s missing is the attention that a PO gives the product backlog, maybe in conjunction with the stakeholders, to prune it and keep it focused on the upcoming needs. Probably the largest part of the PO role is the communication with the stakeholders, and I think Henrik’s video gives a great overview of how much happens, often without the team’s knowledge.