It’s not all about delivery

As I wrote in an earlier post, the first of the Manifesto’s principles is “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”. It is our highest priority, but it’s not our only priority.

If we only focus on delivery, then we become a production line that does the same thing over and over again. That wouldn’t be so bad if what we did was the absolute best possible thing, but that’s not realistic – that’s why we have feedback loops and strive for continuous improvement. (Incidentally, even production lines aren’t focused solely on delivery!)

Not only do we need to collect feedback but we also need to be able to act on it, and to do that we must have room to make changes. If we are always rushing because of deadlines or other pressure, then we don’t have time to improve; in fact, we’re probably on a downward spiral which will result in frustration, mistakes, burnout and, eventually, people quitting. Warning signs of this slippery slope include perfunctory demos and sprint reviews (because there’s no time to improve the product), and ineffective or even skipped retrospectives (there’s no time to think about how to improve the way we work – and even if we did discuss it, we can’t address the biggest problem, which is lack of time, so instead let’s spend that time cranking out more code).

But it’s not just time that’s needed – there must be room to grow, both personally and as a team. This is one of the biggest things a manager can do for a team – ensure that the team members are getting the training, mentoring, coaching and support they need. (If your organisation doesn’t have managers then hopefully there’s a similar role to support everyone’s development. If you don’t have managers because the Scrum Guide doesn’t mention them then consider that the Guide also doesn’t mention payroll but everyone still expects to get paid!)

A warning sign that I watch for is when someone (often the Product Owner) tells the team there’s no capacity in the next few sprints to tackle technical debt or any improvement experiments because “we’ve just got to get this feature out”. Unless there is an ironclad guarantee that the sprint(s) following the release will be focused on those postponed items, I would suspect that the same message will surface again and again. When the sole focus becomes delivery, I’ve seen teams resort to hiding work (tech debt “goes dark”, i.e. disappears from the kanban board) or introducing a rigid division of time (20% of every sprint is withheld for technical tasks) – neither is healthy, but both are understandable.

So how do we make room? A key step is for people to stop saying “deadline” when they mean “target date”. There are some instances where there really is a deadline (e.g. if legislation requires compliance starting on a particular day, then that’s probably a valid deadline) but more often when a team is told “the release deadline is June 30th” that’s actually a target. If the date can slip, then it’s not really a deadline. If the date is tied to someone’s bonus, it’s not really a deadline. Artificial deadlines cause unwarranted pressure. [This is a pet peeve of mine so I’ll write a separate post just focused on deadlines.]

My other recommendation is to improve how plans are created. (Even if your team has adopted #NoEstimates, there’s probably still someone in Sales who has to create a plan.) Even when the dev team is adept at relative sizing for stories, it’s not uncommon for people outside the team to estimate features in days or weeks and that is where the problem begins. Ideally, when someone asks how big a future feature is, the answer should be in terms of “yesterday’s weather”, e.g. “it feels like feature X that we did a couple of months ago” and then inspecting the data for feature X will provide some rough estimates.
The big assumption is that past conditions (when the team delivered X) will be similar when they work on the future feature… but if tackling tech debt, having effective retrospectives and running the associated experiments, and other “non-delivery” activities were postponed because of pressure to deliver feature X, then don’t be surprised when you encounter the same problem in your next delivery.
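To make “yesterday’s weather” concrete, here’s a minimal sketch in Python. All the numbers and the scaling multipliers are hypothetical – the point is that the forecast comes from the team’s own historical data and is expressed as a range, not a single date:

```python
# A minimal sketch of a "yesterday's weather" forecast: scale a comparable
# past feature's duration by relative size. All numbers are hypothetical.

def yesterdays_weather_estimate(past_sprints, past_stories, new_stories):
    """Forecast sprints for a new feature from a similar past feature."""
    throughput = past_stories / past_sprints  # stories finished per sprint
    expected = new_stories / throughput       # naive point estimate
    # Report a range rather than a single date; the multipliers are
    # illustrative, not a standard formula.
    return (expected * 0.8, expected * 1.5)

# "It feels like feature X": X took 4 sprints for 20 stories; the new
# feature looks like roughly 25 stories.
low, high = yesterdays_weather_estimate(4, 20, 25)
print(f"Feels like roughly {low:.1f} to {high:.1f} sprints")
```

Keeping the answer as a range keeps the uncertainty visible, which is exactly what stops a target quietly hardening into a “deadline”.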

It’s not all about delivery; work at a sustainable pace; pay attention to all your feedback loops (and act on that feedback); don’t introduce unnecessary pressure (e.g. artificial deadlines); nurture your team members; improve your product. If you care about your people and your product, and I assume that you do, then ensure there’s space. Without space, the ability to deliver a quality product will suffer.

Agile Musicians

I’ve been accused of seeing examples of agile everywhere, which I think is probably true, but I was just watching a video by my favourite band about their progress on the latest album and I couldn’t help but see agile parallels. In this video they talk about how they have been jamming together, which to me feels like small experiments – they try something new and get feedback on whether it’s worth pursuing or putting aside.

They’ve made a list (“a long shortlist” according to Pete!) of pieces that they think are worth developing, moving them from rough experiments to more polished songs. It’s not a stretch to see that in software product terms: a proof of concept needs reworking to bring it up to production quality.

Even though Marillion is a 5-piece band, they consider Mike (the producer) as one of the team because they recognise that they need his talents to bring their efforts to fruition. I’ve known developers who are individually very talented but the real magic happens when they work as a team; the whole is much greater than the sum of the parts.

I like Steve Hogarth’s analogy of making a record to “a planet exploding backwards”, but I don’t know if this has a software equivalent. Fortunately, we don’t have “months and months of nothing” 🙂
Maybe that’s how product features evolve sometimes – building on small stories to create epics and then the epics come together to form a feature. We do decompose features into epics and then into stories, but I don’t think I’ve seen it happen backwards as H describes it.

Ideas can come from anywhere – absolutely! This is why it’s important for dev teams to treat each other equitably – an idea isn’t better just because it comes from someone with more experience or further up the corporate ladder.

When the band isn’t together in the studio they’re tinkering with ideas at home. As an ex-developer, I know I would often do research outside of work … back before we had Google and Stack Overflow to help 🙂 Even if I wasn’t actively thinking about it, I know there was always some degree of subconscious processing happening because suddenly I’d have a breakthrough.

Everyone enjoys themselves and enjoys working together. It’s hard to be creative when you’re not enjoying it. Yes, musicians can write sad songs; I wonder if there’s an equivalent for software engineers? Can you look at someone’s design solution and tell what mood they were in at the time?

A bit later in the video, they’re talking about postponed tour dates. Ian says “it’s really important to get out of this studio and in front of an audience, because it makes you realise why you’re doing it”. Customer feedback! OK, they will get a fantastic reception regardless because it will be so long since the last tour (thanks to COVID).

Of course, the one big difference is that once Marillion release the new album, that’s it – no updates, no patches, no hotfixes. Regardless of the feedback, that release is done. Not that it matters though – I’m sure it will be amazing!

I’m definitely not a musician, but I’m sure there are many people who understand both agile and making music – do you agree there are analogies?

Too busy to improve

I’m starting with an assumption: that we believe there is always the need for improvement (in what you produce and/or how you make it). If your product/team/system is perfect then you can stop reading now and go feed your unicorns instead. 😉

The next group I’ll invite to stop reading are those who believe it’s everyone else who needs to change but not themselves. I have worked for organisations where senior/middle managers want the coaches to “fix” the teams. Unfortunately, there can be no significant improvement until those people realise they’re not just part of the solution, there’s a good chance they’re actually part of the problem.

Hopefully, we are agreed on the need to continuously improve, and to be part of those changes. So how are you demonstrating that? Do you make sure there’s time for experiments or do you constantly push for more output? Do you encourage the team to discuss frustrations even when that might lead to questions about the organisation’s structure or decisions that have been made higher up the org chart? Do you help the team look deeply at themselves too, because effective retrospection and root cause analysis are not easy?

One unfortunate anti-pattern I’ve seen is “we’re too busy to improve”. This may be understandable for a week or two, but when this persists then it’s a red flag. I’ve encountered variations of “the Product team want us to focus solely on delivery”, or “leave the teams to get their work done”, or even “if it doesn’t help us get stuff done then we don’t have time to look at it”.

This is a very short-sighted approach; I’ve seen it paired with cancelling vacation or requiring overtime. The focus becomes the quantity of output, often at the expense of quality, sustainability, and the team’s health. Sadly the eventual outcome is often the realisation that what was produced wasn’t even all that important or urgent.

Returning to the assumption that time spent on improvements will not help the team deliver more, it’s important to ask why. If the time spent on retrospectives each sprint (commonly 1 hour every 2 weeks) cannot be spared, then your plan is already incredibly fragile and very likely to fail – how will you cope if a team member is sick for a day, for example?

So rather than focus on the cost of improvement, think about the potential benefits. What if that one-hour retro resulted in an improvement that saves the team a few hours? What if the retro highlighted a concern about the quality or design of the work they’re producing? What if the conversation resulted in the discovery that a critical element had been skipped due to cutting corners because they are under pressure?
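To put rough numbers on that trade-off, here’s a back-of-the-envelope sketch. All the figures are hypothetical – the shape of the argument is the point, not the numbers:

```python
# A back-of-the-envelope cost/benefit check for retrospective time.
# All figures are hypothetical.

def retro_net_hours(team_size, retro_hours, hours_saved_per_person):
    """Net hours gained per sprint if a retro improvement actually sticks."""
    cost = team_size * retro_hours               # everyone in the room
    benefit = team_size * hours_saved_per_person # recurring saving
    return benefit - cost

# A 6-person team spends 1 hour each; one improvement then saves each
# person 2 hours per sprint:
print(retro_net_hours(6, 1, 2))
```

The cost is paid once per sprint; a genuine improvement keeps paying the benefit sprint after sprint.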

Saying “we’re too busy to improve” is a step back towards waterfall; it sends the message that “we know everything we need to know” and that nothing can be learned from the work the team are doing. More damagingly, if that is believed to be true while under pressure, why would the team return to doing retrospectives and looking for improvements once the pressure is finally lifted?

Is it safe?

One of the agile principles is that the team have a supportive environment; Modern Agile explicitly says “Make safety a prerequisite”. It’s essential that the team can say “no” when asked to take on extra work, for example, otherwise saying “yes” has no meaning. We expect the team to commit to a sprint plan but if they cannot decide that a story is not ready (e.g. too many unknowns) or that taking “just one more story” will mean they cannot deliver them all, then any apparent commitment is just lip service.

However, many of the questions posed during software development can’t be answered with a simple yes or no – many require investigation (more than just a Google search!), so how can we make that safe? If the product owner wants to know whether moving the “buy now” button will increase sales, or the team wants to know if splitting stories smaller will help them finish epics sooner, then the best way to find out is to try it and see, i.e. run an experiment.

I often hear people talk about “failed experiments” but that’s misleading: an experiment proves or disproves a hypothesis. (OK, if you have constructed or executed it incorrectly, then maybe that’s a “failed experiment”.)

If we want to encourage a learning environment, we should get used to saying that the experiment proved our hypothesis to be incorrect, and that is something from which we can learn. For example, if the experiment’s hypothesis was that moving the “buy now” button to the top of the screen would increase sales by 5-10%, then (after collecting data from multiple customers) we could compare the sales figures for the button’s new location to its old location. An increase in sales could support the hypothesis; a decrease doesn’t. If sales were zero then we probably broke something and maybe that could be deemed a failure.

How do we make it safe to experiment? We must change our language, but we should also observe the experiments closely and be prepared to abort them if we notice something outside of the expected parameters, e.g. sales dropping to zero. As part of defining our experiment, we should define an expected range (e.g. sales go up or down by 10%) – if moving the “buy now” button causes a 50% increase in sales (i.e. well beyond expectations) then perhaps something has gone wrong and we should reset everything.
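As a sketch of what “defining an expected range” might look like in practice, here’s a tiny guardrail check. The hypothesis, the thresholds and the sales figures are all hypothetical:

```python
# A minimal guardrail for an experiment: classify an observation against
# the expected range agreed before the experiment started.
# Thresholds and figures are hypothetical.

def check_experiment(baseline_sales, observed_sales,
                     expected_change=0.10, abort_change=0.50):
    """Return a verdict: within range, worth investigating, or abort."""
    if baseline_sales == 0:
        return "abort"  # no baseline to compare against
    change = (observed_sales - baseline_sales) / baseline_sales
    if observed_sales == 0 or abs(change) >= abort_change:
        # Sales vanished, or moved far beyond expectations - something
        # has probably broken, so reset everything.
        return "abort"
    if abs(change) <= expected_change:
        return "within expected range"
    return "investigate"  # beyond the hypothesis, but not alarming

print(check_experiment(1000, 1050))  # +5%: within expected range
print(check_experiment(1000, 0))     # sales dropped to zero: abort
print(check_experiment(1000, 1600))  # +60%: too good to trust, abort
```

The exact thresholds matter much less than the habit: expectations are written down before the experiment runs, so “the hypothesis was wrong” and “something broke” stay distinct outcomes.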

While we’re thinking about the language we use regarding “failure”, another common phrase that needs to be retired is “we failed the sprint”. Usually, this refers to the work planned for a sprint not being completed. However, sprints are timeboxes, which means they end whether you are ready or not. Sprints don’t fail, they end. If the committed work is not completed, then I would still think twice about calling that a failure – it feels like the lead-in to apportioning blame, which is far from creating a safe environment. Incidentally, if the root cause of not completing planned work was found to be the product owner or senior management, I suspect it would not be called a failure.

p.s. I managed to not reference the scaled agile framework or the scene from Marathon Man, so I’m calling that a success 😉