Quality built-in

When you notice that the quality of your product isn’t as high as you’d like, what can you do?

It’s useful to start by understanding what quality means for your product: how it is defined, how it is measured, and what level is desirable.

For example, if your indicator is the number of “escaped defects” (i.e. bugs found in the customer-accessible environment) you probably want a very low number; zero is ideal but probably isn’t realistic – no matter how tight the controls are, some defects will escape. In fact, in a safe (non-life-threatening) environment, a few escapes may actually be desirable as an indicator that the controls are not excessive (i.e. wasteful).

But if you only have one metric, you could end up optimising for it at the expense of everything else. If you focus solely on preventing bugs, you could spend a lot of time “gold-plating”, building far more than the customer really wants.

There need to be one or two other metrics for balance, e.g. delivering on time (or improving cycle time), automating as much as possible, or reducing technical debt – again, choose whatever makes sense in your environment.

Once you have those complementary metrics in place, the question may arise: who is responsible for achieving them? In “traditional” software development there used to be a Quality Assurance team that were meant to catch all the bugs before the product was released.

The obvious problems with this are that it puts too much onus on the QA team and it’s too late in the dev process … oh, and anyone who has worked on a waterfall project will have seen that QA always gets squeezed by deadlines, so the all-important “final line of defence” has to make compromises.

It’s unfair to make the QA team solely responsible for quality when they only see the product when it’s finished – at that point, it’s very expensive to rectify a defect. Sometimes the escalating cost of fixes forces prioritisation: “cosmetic” defects get deferred, and a bug backlog builds up.

A better approach is to identify and tackle quality problems sooner in the development process. For an agile team, that means continuous testing, frequent feedback from the intended users, as well as code review, pair programming and other feedback loops. Quality is something the team “builds in” throughout their process; they seek to constantly improve the product they create as well as how they create it.

As far as bugs are concerned, I like the approach recommended by Yassal & Daniel Sundman: Fix It Now Or Delete It! If the bug has to be fixed now, then fix it; if not, then delete it.

I tend to believe there’s room for a little leeway: if it needs to be fixed ASAP but without disturbing the current sprint, then bring it to the next sprint planning… but if it doesn’t make it into that sprint then it should be deleted.

But quality is more than just removing (or better yet, avoiding) defects. If the team get feedback earlier, any changes needed in the design can be incorporated sooner, changing direction to meet the users’ needs. The team should also be constantly looking for ways to improve how they work, e.g. through retrospectives and/or kaizen. If the team can identify bottlenecks or waste in their process, they can increase their throughput and maybe make their work more enjoyable.

Quality can mean many things; once you’ve defined it for your circumstances and identified how you’ll measure it, ensure that everyone understands not only what quality is but also how they contribute to it. (Ideally, the whole team have been part of those previous steps!) That could include writing automated tests for more than just the happy path, involving real users in providing feedback early and often, or participating in retrospectives to improve the way in which the team work.
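
To make “more than just the happy path” concrete, here is a minimal sketch of what such tests might look like; it assumes pytest, and the parse_quantity function and its behaviour are invented purely for illustration:

```python
# test_parse_quantity.py - a hypothetical example of testing beyond the happy path.
# Assumes pytest; parse_quantity() is an invented function that turns user input into an int.
import pytest


def parse_quantity(raw: str) -> int:
    """Invented example: parse a quantity entered by a user."""
    value = int(raw.strip())  # raises ValueError for non-numeric input
    if value < 1:
        raise ValueError("quantity must be at least 1")
    return value


def test_happy_path():
    assert parse_quantity("3") == 3


# The cases below are the ones a happy-path-only suite tends to miss.
def test_surrounding_whitespace_is_tolerated():
    assert parse_quantity("  7 ") == 7


@pytest.mark.parametrize("raw", ["0", "-1", "abc", ""])
def test_invalid_input_is_rejected(raw):
    with pytest.raises(ValueError):
        parse_quantity(raw)
```

The point isn’t this particular function: it’s that the whitespace, zero, negative and non-numeric cases are exactly the ones a happy-path-only suite never exercises.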

But you’re not done yet – quality is a moving target. Once consistently hitting the current target has become a habit, it’s time to expand your definition. If you start by simply counting escaped bugs, then consider inspecting how long it takes to go from idea to released feature (“concept-to-cash”) – how can you go faster without the bug count creeping up? Hopefully, the answer will involve removing impediments, bureaucracy and other wastes, but that’s a topic for another day.
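
To give a sense of how “concept-to-cash” could be inspected in practice, here is a hedged sketch that computes lead time from exported work items; the record shape, field names and sample dates are assumptions for illustration rather than any particular tool’s format:

```python
# A sketch of measuring concept-to-cash lead time from work-tracking data.
# The field names and sample data below are invented; adapt them to whatever your tool exports.
from datetime import datetime
from statistics import median

work_items = [
    {"id": "FEAT-1", "created": "2024-03-01", "released": "2024-03-18"},
    {"id": "FEAT-2", "created": "2024-03-04", "released": "2024-03-29"},
    {"id": "FEAT-3", "created": "2024-03-10", "released": "2024-04-02"},
]


def lead_time_days(item: dict) -> int:
    """Days from the idea being captured to the feature being released."""
    created = datetime.fromisoformat(item["created"])
    released = datetime.fromisoformat(item["released"])
    return (released - created).days


lead_times = [lead_time_days(item) for item in work_items]
print(f"median concept-to-cash: {median(lead_times)} days")
```

Watching the median (rather than the mean) stops a single outlier from hiding the trend, and tracking it alongside the escaped-bug count helps ensure one metric doesn’t improve at the other’s expense.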

Context is king

I know the common phrase is “content is king”, but as far as Agile is concerned it’s context that is key.

When a team starts a new initiative, it’s important that they understand who it is for, why it is important, what value it is expected to provide, what problem(s) it will solve, etc. Without this context, the team can’t be expected to know if they are building the right thing – all they can do is follow the Product Owner’s direction.

As much as we look to the PO to represent the stakeholders’ needs, the development team need to understand the bigger picture: how does this feature fit into the product? With a wider context, the team can think about a general technical direction, bearing in mind that things will change and the project might end before the whole thing is finished (hopefully because the customers are happy rather than because time or money ran out).

The team can also identify risks, dependencies and unknowns – not all of them, of course, because there will always be discoveries as they dig deeper. They can create spikes, run experiments, build a proof of concept, and/or change the prioritisation of the backlog in order to reduce those concerns. This is a big difference from my experience of waterfall projects where the project manager would fill in the risk assessment sheet and file it away.

Tackling the high-risk issues early can improve confidence in the rest of the backlog, or (in at least one project I can recall) result in the project being canned because it would be too expensive to resolve a showstopper… and this is a good thing! It’s far better that we identified the risk and investigated our options, so that the PO could make an informed decision on whether to proceed. The alternative would have been to plough time and effort into the project and only think about the problem when we couldn’t ignore it any longer – that would have been a huge waste.

So how do we help the teams gain the context they need? I find Jeff Patton’s Story Mapping incredibly useful – the template I use is slightly different to Jeff’s because I really want the initial conversation to focus on the who and why pieces. The “Frame the problem” section is important in Jeff’s approach too, but I find the addition of specific sections in the template reminds the team (and the facilitator!) that we need to spend time really discussing and understanding this context.

I find teams often want to jump to the details, the minutiae, the weeds – junior developers tend to be detail-oriented, so it’s important to repeatedly bring them back to the bigger picture. An experienced software engineer knows that the technical details are important, but so are other questions: are we building the right thing (e.g. does it address the problem? does it provide value?) and are we building it right (e.g. is our testing strategy solid? is our architecture evolving?).

Context is important throughout: in Refinement and Planning, we should set goals that move us towards the initiative’s objective; during the sprint, keeping the context in mind helps ensure we build what’s needed, not just whatever happens to be written in a user story; and then the Sprint Review should use the initiative’s context as the frame for understanding progress and any impact on the product backlog. Refer to it early & often. If we were in a team room, big posters of business and technical context FTW! (Jim Benson did a great presentation on this in the Agile Virtual Summit recently, but I’ll come back to this in another post.)