Is it safe?

One of the agile principles is that the team have a supportive environment; Modern Agile explicitly says “Make safety a prerequisite”. It’s essential that the team can say “no” when asked to take on extra work, for example; otherwise, saying “yes” has no meaning. We expect the team to commit to a sprint plan, but if they cannot decide that a story is not ready (e.g. too many unknowns), or that taking “just one more story” will mean they cannot deliver them all, then any apparent commitment is just lip service.

However, many of the questions posed during software development can’t be answered with a simple yes or no – many require investigation (more than just a Google search!), so how can we make that safe? If the product owner wants to know whether moving the “buy now” button will increase sales, or the team wants to know whether splitting stories smaller will help them finish epics sooner, then the best way to find out is to try it and see, i.e. run an experiment.

I often hear people talk about “failed experiments” but that’s misleading: an experiment tests a hypothesis, and the result either supports or refutes it. (OK, if you have constructed or executed the experiment incorrectly, then maybe that’s a “failed experiment”.)

If we want to encourage a learning environment, we should get used to saying that the experiment proved our hypothesis to be incorrect, and that is something from which we can learn. For example, if the experiment’s hypothesis was that moving the “buy now” button to the top of the screen would increase sales by 5-10%, then (after collecting data from multiple customers) we could compare the sales figures for the button’s new location against its old location. An increase in sales would support the hypothesis; a decrease would refute it. If sales dropped to zero, then we probably broke something, and maybe that could be deemed a failure.

How do we make it safe to experiment? We must change our language, but we should also observe the experiments closely and be prepared to abort them if we notice something outside of the expected parameters, e.g. sales dropping to zero. As part of defining our experiment, we should define an expected range (e.g. sales go up or down by no more than 10%) – if moving the “buy now” button causes a 50% increase in sales (i.e. well beyond expectations), then perhaps something has gone wrong and we should reset everything.
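
To make that concrete, here’s a rough sketch (in Python; the names and numbers are mine, purely for illustration) of how an experiment’s expected range could be written down as a guardrail that tells us when to keep going and when to abort:

```python
# Hypothetical sketch: codify an experiment's expected range as a guardrail,
# so we know when to continue and when to abort and investigate.
from dataclasses import dataclass


@dataclass
class Experiment:
    hypothesis: str
    expected_low: float   # e.g. -0.10 means "sales may drop by up to 10%"
    expected_high: float  # e.g. +0.10 means "sales may rise by up to 10%"

    def check(self, baseline_sales: float, observed_sales: float) -> str:
        """Return 'continue' if the observed change is within the expected
        range, otherwise 'abort' so a human can investigate."""
        change = (observed_sales - baseline_sales) / baseline_sales
        if self.expected_low <= change <= self.expected_high:
            return "continue"
        return "abort"


# Made-up figures for the "buy now" button experiment.
button_move = Experiment(
    hypothesis="Moving the 'buy now' button to the top increases sales by 5-10%",
    expected_low=-0.10,
    expected_high=0.10,
)
print(button_move.check(baseline_sales=1000, observed_sales=1070))  # continue
print(button_move.check(baseline_sales=1000, observed_sales=0))     # abort (broken?)
print(button_move.check(baseline_sales=1000, observed_sales=1500))  # abort (too good?)
```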

While we’re thinking about the language we use regarding “failure”, another common phrase that needs to be retired is “we failed the sprint”. Usually, this refers to the work planned for a sprint not being completed. However, sprints are timeboxes, which means they end whether you are ready or not. Sprints don’t fail, they end. If the committed work is not completed, then I would still think twice about calling that a failure – it feels like the lead-in to apportioning blame, which is far from creating a safe environment. Incidentally, if the root cause of not completing planned work was found to be the product owner or senior management, I suspect it would not be called a failure.

p.s. I managed to not reference the scaled agile framework or the scene from Marathon Man, so I’m calling that a success 😉

Does Mastery Matter?

Firstly, I have to mention that the inspiration for this post is a recent(ish) episode of The Agile Pubcast, in which Geoff Watts and Paul Goddard (both in the UK) were joined by Chris Williams from Canada, so this one matched many of my interests 🙂

Chris mentioned sporting events to see in Toronto but he missed that the Wolfpack has joined the North American Rugby League and (hopefully) we’ll get to see some rugby again soon.

I will give him kudos for recommending the Loose Moose – I’ve worked close to it for a few years and have enjoyed many of the beers they have on tap. I’m definitely in for a meetup if Paul & Geoff make it to Toronto! (Blanche de Chambly is fine but try adding orange juice to turn it into a beermosa!)

Their topic was “the concept of mastery as a discipline, and how failure is still a stigma”. Definitions of mastery include “full command or understanding of a subject”, “outstanding skill; expertise”, and “possession or display of great skill or technique”; in general it relates to knowing a subject thoroughly.

Personally, I see it more as a journey than a destination – unless your chosen subject is quite tightly constrained, there’s probably always more to learn. In my experience, often when I think I’m close to reaching a peak, I discover that it’s a false peak and the real one is further away, or sometimes that there are multiple peaks I could strive for.

Whether it’s a journey or a destination, I believe mastery is essential. But does everyone need to master everything? No, of course not – for one thing, it’s not possible to master everything! Also, that’s part of the purpose of being a team – each team member brings their own strengths and interests. There used to be a popular concept of “T-shaped people”, meaning an individual with expertise in one aspect and a broad understanding of others, and then teams would be the sum of these Ts (and similar shapes).

Whenever I discuss mastery it’s a fairly safe bet that I will bring up one of my favourite films: Jiro Dreams of Sushi (2011). It focuses on 85-year-old sushi master Jiro Ono, his renowned Tokyo restaurant, and his relationship with his son and eventual heir, Yoshikazu.

I find it amazing to watch someone who has spent decades mastering his art and to hear him talk so profoundly about his approach: “You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honourably.”

How do agilists and sushi chefs get better? Incrementally. 🙂 “I do the same thing over and over, improving bit by bit. There is always a yearning to achieve more. I’ll continue to climb, trying to reach the top, but no one knows where the top is.”

As a prog rock fan, I follow musicians such as Robert Fripp (of King Crimson); despite being recognised as a great guitarist, he still practises for multiple hours every day. (Here’s a recent video clip of him talking about practice.)

I believe it’s important to take pride in what you do, and that means not just wanting to be good but wanting to improve. Those improvements often don’t come easily, and that’s where the discipline comes in – no-one can make you better if you don’t want to improve, so start by identifying the area(s) you want to master.

Watching the F1 Grand Prix this morning was a reminder that being at the pinnacle of motorsport requires lots of practice; repeatedly rehearsing for various situations (e.g. a pit stop) is how teams build mastery and resilience.

It’s also interesting to hear the drivers talk about learning from their experiences; they focus on finding something which can be improved for next time, not wallowing in how they failed. They study how a problem occurred, identify the root causes, decide what to change, and then implement it. With minimal testing time between races, F1 is like testing in production – a mistake can be expensive, so you try to minimise the risk, but when something goes wrong (because eventually it will!) you make sure to learn as much as you can.

When I see failure being stigmatised, it’s often down to a misunderstanding of how we work. We cannot be right 100% of the time, so we try to reduce the risk by working in teams (“two minds are better than one”) and making small experiments. The evaluation of an experiment should show whether the hypothesis is accepted or rejected – the experiment itself isn’t a failure.

For example, if an online retailer were to propose that changing the “buy now” button to red would result in more sales, then they could conduct an experiment. The result might be that it does show an increase in sales; alternatively, it may show no increase. That is not a failure – it is a new piece of information. If the “no increase” outcome is called a failure, then we will end up demotivating the team and killing creativity.
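
As a sketch only (the conversion figures and the 5% significance threshold below are my assumptions, not anything from the post or podcast), here’s one way the outcome of such an experiment could be read as new information rather than pass/fail:

```python
# Hypothetical A/B comparison for the red "buy now" button (made-up figures).
from math import sqrt, erf


def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert the z-score to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))


# Control: original button; variant: red button.
p_value = two_proportion_p_value(conv_a=120, n_a=4000, conv_b=150, n_b=4000)
if p_value < 0.05:
    print("New information: the red button changed the conversion rate.")
else:
    print("New information: no detectable change - also worth knowing.")
```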

Here’s an experiment to try: if we refer to the outcome of experiments as “discoveries” rather than “failures” then we will see more enthusiasm for conducting experiments and that will lead to a better product and a happier team.

Scrum doesn’t work for us

Hearing a team say “Scrum doesn’t work for us” can trigger a negative reaction, based on an assumption that the team doesn’t understand Scrum or isn’t doing it right, but it could be a positive sign – maybe the team is growing and needs to move beyond “Scrum by the book”.

Scrum can be a good starting point because it provides structure and guidelines on how to “do” Agile – the Scrum Guide specifies roles, artifacts, and events. It also describes three pillars (transparency, inspection, and adaptation) and five values (commitment, focus, openness, respect, and courage), so it tries to instil a mindset of “being” Agile as well. Scrum’s definition states that “Scrum is simple” and “purposefully incomplete”; it is not intended as an endpoint but as a stepping stone on an Agile journey.

But rather than say “we’ve outgrown Scrum” and throw it away, this could be an opportunity for the team to inspect the way they work, consider which aspects they would like to change, and then run experiments to try to move towards where they want to be. That might sound like a retrospective, and that’s not an accident – effective retrospection is, I believe, the engine that powers change. If, for example, a team identifies the way they collaborate during a sprint as something they could improve, then they may decide to try pairing for the next couple of sprints. After those two sprints, they should make sure they discuss collaboration in their retro and whether pairing is helping – they might decide to extend the experiment, try something different, or adopt pairing permanently as part of how they work.

Another example could be a move towards kaizen (continuous improvement); maybe it works so well that the team decide to drop scheduled retrospectives from their calendar altogether. A purist might ask if you are still doing Scrum if you aren’t doing everything in the Scrum Guide; as a pragmatist, I would be pleased to see the team grow and wouldn’t care what they call it.

Coaching backlog

One challenge as a coach is deciding where to spend your time – do I help team A or team B, or split my time between them? If I had the capacity to give both the attention they need, it wouldn’t be a problem, but of course there’s never enough time to go round. Like so many things, it comes down to prioritisation… but what goes in the backlog? Personally, I tend to think in terms of experiments, e.g. will smaller stories improve the team’s ability to respond to changes in direction?

(I should add that when I say “team” I’m actually thinking about the people in the dev team, the other people they work with, the stakeholders, and the system which encompasses them.)

But who creates the backlog items? Obviously, there are things which come up in conversations with the team members and stakeholders, but there are also observations and comparisons between the current system and a potential “next level”. Now, we all know there aren’t actually “levels”, but there are things we tend to observe in high-performing teams compared to relatively new teams. I’m not going to touch on “assessments” here – I’ll save that for a future post.

To share this backlog with the coaching stakeholders (which include the team), rather than keeping a simple backlog of stories, something like a POPCORN board can help support the conversations around which experiments to try and what the outcomes were. POPCORN is a backronym for: Problems & observations; Options; Possible experiments; Commitments; Ongoing; Review; Next – you can watch Claudio Perrone, aka Agile Sensei, present POPCORN at Agile Testing Days 2017.
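
Purely as an illustration (my own sketch, not Claudio’s tooling), the POPCORN columns could be modelled as a very simple board for tracking coaching experiments:

```python
# Hypothetical sketch of a POPCORN-style board for coaching experiments.
from enum import Enum


class PopcornColumn(Enum):
    PROBLEMS_OBSERVATIONS = "Problems & observations"
    OPTIONS = "Options"
    POSSIBLE_EXPERIMENTS = "Possible experiments"
    COMMITMENTS = "Commitments"
    ONGOING = "Ongoing"
    REVIEW = "Review"
    NEXT = "Next"


# Each card is a description plus its current column; moving a card is an
# explicit step, which keeps the conversation visible to the stakeholders.
board = [
    {
        "item": "Will smaller stories improve the team's ability to respond to change?",
        "column": PopcornColumn.POSSIBLE_EXPERIMENTS,
    },
]


def move(card: dict, to: PopcornColumn) -> None:
    card["column"] = to


move(board[0], PopcornColumn.COMMITMENTS)
print(board[0]["item"], "->", board[0]["column"].value)
```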