One of the agile principles is that the team should have a supportive environment; Modern Agile explicitly says “Make safety a prerequisite”. It’s essential that the team can say “no” when asked, for example, to take on extra work; otherwise saying “yes” has no meaning. We expect the team to commit to a sprint plan, but if they cannot decide that a story is not ready (e.g. too many unknowns), or that taking “just one more story” will mean they cannot deliver them all, then any apparent commitment is just lip service.
However, many of the questions posed during software development can’t be answered with a simple yes or no – they require investigation (more than just a Google search!). So how can we make that safe? If the product owner wants to know whether moving the “buy now” button will increase sales, or the team wants to know whether splitting stories smaller will help them finish epics sooner, then the best way to find out is to try it and see, i.e. run an experiment.
I often hear people talk about “failed experiments” but that’s misleading: an experiment proves or disproves a hypothesis. (OK, if you have constructed or executed it incorrectly, then maybe that’s a “failed experiment”.)
If we want to encourage a learning environment, we should get used to saying that the experiment proved our hypothesis to be incorrect, and that is something from which we can learn. For example, if the experiment’s hypothesis was that moving the “buy now” button to the top of the screen would increase sales by 5-10%, then (after collecting data from multiple customers) we could compare the sales figures for the button’s new location to its old location. An increase in sales of that size supports the hypothesis; a decrease disproves it. If sales were zero, then we probably broke something, and maybe that could be deemed a failure.
How do we make it safe to experiment? We must change our language, but we should also observe the experiments closely and be prepared to abort them if we notice something outside the expected parameters, e.g. sales dropping to zero. As part of defining our experiment, we should define an expected range (e.g. sales go up or down by no more than 10%) – if moving the “buy now” button causes a 50% increase in sales (i.e. well beyond expectations), then perhaps something has gone wrong and we should reset everything.
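To make that idea concrete, here is a minimal sketch (in Python, with hypothetical numbers and a made-up function name) of how an experiment’s expected range and abort conditions might be checked once the data starts coming in:

```python
def evaluate_experiment(baseline, observed, expected_low=-0.10, expected_high=0.10):
    """Classify an experiment result against a predefined expected range.

    baseline: metric value before the change (e.g. daily sales)
    observed: metric value after the change
    expected_low / expected_high: the relative change we declared up front
        (here: sales move by at most 10% in either direction)
    """
    if observed == 0:
        # e.g. sales dropped to zero: we probably broke something
        return "abort: metric collapsed, roll back and investigate"
    change = (observed - baseline) / baseline
    if change < expected_low or change > expected_high:
        # well beyond expectations (e.g. +50%): something may have gone wrong
        return f"abort: change of {change:+.0%} is outside the expected range"
    return f"within expected range: change of {change:+.0%}"

# Hypothetical daily sales figures
print(evaluate_experiment(200, 214))  # +7%: within the expected range
print(evaluate_experiment(200, 300))  # +50%: outside the range, investigate
print(evaluate_experiment(200, 0))    # zero sales: abort and roll back
```

The point is not the code itself but that the expected range and the abort conditions are written down before the experiment starts, so stopping it is a pre-agreed response rather than a judgement call made under pressure.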
While we’re thinking about the language we use around “failure”, another common phrase that should be retired is “we failed the sprint”. Usually this refers to the work planned for a sprint not being completed. However, sprints are timeboxes, which means they end whether you are ready or not. Sprints don’t fail; they end. If the committed work is not completed, I would still think twice about calling that a failure – it feels like the lead-in to apportioning blame, which is far from creating a safe environment. Incidentally, if the root cause of not completing the planned work were found to be the product owner or senior management, I suspect it would not be called a failure.
p.s. I managed to not reference the scaled agile framework or the scene from Marathon Man, so I’m calling that a success 😉