Quality built-in

When you notice that the quality of your product isn’t as high as you’d like, what can you do?

It’s useful to start by understanding what quality means for your product: how it is defined, how it is measured, and what level is desirable.

For example, if your indicator is the number of “escaped defects” (i.e. bugs found in the customer-accessible environment), you probably want a very low number; zero is ideal but rarely realistic – no matter how tight the controls are, some defects will escape. In fact, in a safe (non-life-threatening) environment, a few escapes may actually be desirable as an indicator that the controls are not excessive (i.e. wasteful).

But if you only have one metric, you could end up focusing on it at the expense of everything else. If you only focus on preventing bugs, you could spend a lot of time “gold-plating”, building far more than the customer really wants.

You need one or two other metrics for balance, e.g. delivering on time (or improving the cycle time), automating as much as possible, or reducing technical debt – again, they need to make sense in your environment.
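To make the measurement side concrete, here’s a minimal sketch in Python – the WorkItem record and its fields are entirely made up for illustration, so map them onto whatever your tracker actually stores – showing how a team might count escaped defects and watch average cycle time side by side:

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class WorkItem:
    # Hypothetical fields; rename to match your own tracker.
    kind: str       # "feature" or "defect"
    found_in: str   # environment where a defect was found, e.g. "production"
    started: date   # when work began
    released: date  # when it reached the customer-accessible environment

def escaped_defects(items):
    """Defects found in the customer-accessible environment ("production" here)."""
    return [i for i in items if i.kind == "defect" and i.found_in == "production"]

def average_cycle_time_days(items):
    """Average calendar days from starting work to release."""
    return mean((i.released - i.started).days for i in items)

items = [
    WorkItem("feature", "-", date(2021, 3, 1), date(2021, 3, 9)),
    WorkItem("defect", "staging", date(2021, 3, 2), date(2021, 3, 4)),
    WorkItem("defect", "production", date(2021, 3, 5), date(2021, 3, 6)),
]
print(len(escaped_defects(items)))     # 1 escaped defect
print(average_cycle_time_days(items))  # ≈ 3.67 days
```

The point isn’t the code; it’s that both numbers come from the same data, so it’s cheap to keep them in balance rather than optimising one at the expense of the other.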

Once you have those complementary metrics in place, the question may arise: who is responsible for achieving them? In “traditional” software development there used to be a Quality Assurance team that were meant to catch all the bugs before the product was released.

The obvious problems with this are that it puts too much onus on the QA team and it’s too late in the dev process … oh, and anyone who has worked on a waterfall project will have seen that QA always gets squeezed by deadlines, so the all-important “final line of defence” has to make compromises.

It’s unfair to make the QA team solely responsible for quality when they only see the product once it’s finished – at that point, it’s very expensive to rectify a defect. Sometimes the escalating cost of fixes forces prioritisation, with “cosmetic” defects deferred, resulting in a growing bug backlog.

A better approach is to identify and tackle quality problems earlier in the development process. For an agile team, that means continuous testing, frequent feedback from the intended users, as well as code review, pair programming and other feedback loops. Quality is something the team “builds in” throughout their process; they seek to constantly improve the product they create as well as how they create it.

As far as bugs are concerned, I like the approach recommended by Yassal & Daniel Sundman: Fix It Now Or Delete It! If the bug has to be fixed now, then fix it; if not, then delete it.

I tend to believe there’s room for a little leeway: if it needs to be fixed ASAP but without disturbing the current sprint, then bring it to the next sprint planning… but if it doesn’t make it into that sprint then it should be deleted.

But quality is more than just removing (or better yet, avoiding) defects. If the team get feedback earlier, any changes needed in the design can be incorporated sooner, changing direction to meet the users’ needs. The team should also be constantly looking for ways to improve how they work, e.g. through retrospectives and/or kaizen. If the team can identify bottlenecks or waste in their process, they can increase their throughput and maybe make their work more enjoyable.

Quality can mean many things; once you’ve defined it for your circumstances and identified how you’ll measure it, ensure that everyone understands not only what quality is but also how everyone contributes to it. (Ideally, the whole team have been part of those previous steps!) That could include writing automated tests for more than just the happy path, involving real users in providing feedback early and often, or participating in retrospectives to improve the way the team work.
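As a small illustration of testing “more than just the happy path”, here’s a sketch using Python’s built-in unittest and an imaginary parse_quantity function – the names and rules are mine, not from any particular codebase – where the edge cases get at least as much attention as the obvious one:

```python
import unittest

def parse_quantity(text):
    """Imaginary example: parse a user-entered quantity that must be 1-99."""
    value = int(text.strip())
    if not 1 <= value <= 99:
        raise ValueError(f"quantity out of range: {value}")
    return value

class ParseQuantityTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_quantity("3"), 3)

    # The tests below are the ones that tend to get skipped under deadline pressure.
    def test_whitespace_is_tolerated(self):
        self.assertEqual(parse_quantity(" 12 "), 12)

    def test_zero_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("0")

    def test_non_numeric_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("lots")

if __name__ == "__main__":
    unittest.main()
```

The happy-path test alone would still pass if the range check were missing; the other tests are the ones that keep defects from escaping to the customer-accessible environment.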

But you’re not done yet – quality is a moving target. Once hitting the current target consistently has become a habit, it’s time to expand your definition. If you start simply by counting escaped bugs, then consider inspecting how long it takes to go from idea to released feature (“concept-to-cash”) – how can you go faster without the bug count creeping up? Hopefully, the answer will involve removing impediments, bureaucracy and other waste, but that’s a topic for another day.

Context is king

I know the common phrase is “content is king”, but as far as Agile is concerned it’s context that is key.

When a team starts a new initiative, it’s important that they understand who it is for, why it is important, what value it is expected to provide, what problem(s) it will solve, etc. Without this context, the team can’t be expected to know whether they are building the right thing – all they can do is follow the Product Owner’s direction.

As much as we look to the PO to represent the stakeholders’ needs, the development team need to understand the bigger picture: how does this feature fit into the product? With a wider context, the team can think about a general technical direction, bearing in mind that things will change and the project might end before the whole thing is finished (hopefully because the customers are happy rather than because the time or money ran out).

The team can also identify risks, dependencies and unknowns – not all of them, of course, because there will always be discoveries as they dig deeper. They can create spikes, run experiments, build a proof of concept, and/or change the prioritisation of the backlog in order to reduce those concerns. This is a big difference from my experience of waterfall projects where the project manager would fill in the risk assessment sheet and file it away.

Tackling the high-risk issues early can improve confidence in the rest of the backlog, or (in at least one project I can recall) result in the project being canned because it would be too expensive to resolve a showstopper… and this is a good thing! It’s far better that we identified the risk and investigated our options, so that the PO could make an informed decision about whether to proceed. The alternative would have been to plough time and effort into the project and only think about the problem when we could no longer ignore it – that would have been a huge waste.

So how do we help the teams gain the context they need? I find Jeff Patton’s Story Mapping incredibly useful – the template I use is slightly different to Jeff’s because I really want the initial conversation to focus on the who and why pieces. The “Frame the problem” section is important in Jeff’s approach too, but I find the addition of specific sections in the template reminds the team (and the facilitator!) that we need to spend time really discussing and understanding this context.

I find teams often want to jump to the details, the minutiae, the weeds – junior developers tend to be detail-oriented, so it’s important to repeatedly bring them back to the bigger picture. An experienced software engineer knows that the technical details are important but so are other things: are we building the right thing (e.g. does it address the problem? does it provide value?) as well as are we building it right (e.g. is our testing strategy solid? is our architecture evolving?).

Context is important throughout: in Refinement and Planning, we should set goals that move us towards the initiative’s objective; during the sprint, keeping the context in mind helps ensure we build what’s needed, not something just because it’s written in a user story; and the Sprint Review should use the initiative’s context as the frame for understanding progress and any impact on the product backlog. Refer to it early & often. If we were in a team room, big posters of business and technical context FTW! (Jim Benson did a great presentation on this at the Agile Virtual Summit recently, but I’ll come back to it in another post.)

Scaling Up

If some is good, then surely more is better? Companies want to expand, and frequently the assumption is that just adding more people will result in successfully growing the organisation. Then, when they realise that’s not working, they decide that what’s needed is more process.

Fortunately, there are frameworks which you can install to fix all your problems; just invest in SAFe training and consultants, then follow their prescriptive framework, and magically you’ll be back to a command and control organisation in no time. Whoops!

It’s quite telling that the “Agile Teams” image is tucked away down in the bottom left corner; the majority of the people in the company are reduced to that little graphic. Meanwhile, at the top of that hierarchy sit the Epic Owners and the Enterprise Architect; I wonder at what stage of designing the framework they remembered to remove the picture of a waterfall. You also have to look pretty closely to find a mention of the customer.

I’m not saying there is nothing of value in SAFe – in fact it covers so much that there’s bound to be something useful for everyone! But most of the valuable pieces can be found outside SAFe, e.g. Scrum, Kanban, continuous delivery, DevOps – the key thing is to be selective.

Rather than painting SAFe across the organisation, recognise that each org is different and therefore has different challenges, but also that within an organisation there are departments, groups, and teams with their own circumstances. They should all be working towards the same goals, and following the same principles, but the implementation details within each unit are going to differ… and they will change over time.

And therein lies the challenge: if each team is different, then they need to find their own path… but that is time-consuming and expensive. The temptation is to enforce standardisation, which simplifies the thinking (not necessarily the problem!) and assumes that the same solution can be applied across multiple teams. This should make you shout “slippery slope” because it’s the path back to factory-thinking and turning everyone into widget makers.

The teams are building different things because they are solving different business problems, and with different people in the teams they will have different approaches to creating those solutions. If those differences don’t exist (a) you should probably look at your hiring practices, and (b) your staff might be about to be replaced by robots (or a very small shell script).

So does that mean there’s no easy way to scale up? I don’t think there’s an easy way, but there are things that can help… and it’s similar to how we help teams tackle feature delivery. It starts by understanding the problems you’re trying to address, then breaking them down into manageable chunks, minimising dependencies and risks, and using frequent feedback loops to check you’re heading in the right direction.

You probably don’t need to scale up equally across the organisation – some parts warrant adding more people, e.g. if there’s a promising new business sector. But reducing the impediments a group faces could also help it grow, and dependencies between teams are one of the biggest obstacles they face. In the same way that developers try to have clean interfaces between services, teams should be clear on their responsibilities, and their backlog should reflect that.

Work with the team to identify the challenges; an exercise like Circles and Soup can help highlight where the team cannot address the issues on their own. Those issues are slowing progress, so focus on reducing or removing them – that will help your team grow.

If you add people to a team that’s not addressing its impediments, then you just end up with more people struggling with those impediments. Make the teams independent and autonomous; focus on continuous improvement; address the issues which impede the system, and you may not need to scale up!

When I say resources, I mean resources

It shouldn’t need to be said, but unfortunately there are still people who refer to others (usually subordinate staff) as resources. As we become more aware of potentially offensive or harmful language and make efforts to change the words we use, I hope resources will appear on the list of inappropriate terms.

I still wear my #PeopleWorkHere T-shirt from This Agile Life as a visual reminder.

I think the second image comes from Mike Cohn (Mountain Goat Software) but I can’t find it.

But on to the real topic: where to find reliable agile-related information? The benefit of the internet is having access to so many resources, but the downside is that anyone can post their (mis)understanding – it’s important to check that the author has relevant, practical experience.

Personally, I find podcasts a great way to learn about new techniques etc. I tend to play them at 1.25x (or sometimes 1.5x) speed because I follow a lot of podcasts (across many topics) and at that speed I can just about keep up with the influx of new episodes 🙂

Unfortunately some of my favourites haven’t released anything in a year or more, so my currently-active favs are:

Similarly, I have a backlog of webinars and books. I don’t follow any agile-related YouTube channels; I tend to come across videos via email (mailing lists), websites, Slack, and the infamous YouTube rabbit hole, and then I add them to my bookmarks to watch later. If there are some good channels that you would recommend, please leave a comment.

Update: I checked and actually I do subscribe to a few YT feeds:

OK, that’s a longer list than I expected!

Ultimately, I think it’s about finding a format (or formats) that works for you and then identifying some reliable authors. But it’s not just about absorbing information – it’s useful to discuss concepts and challenges, and (as the saying goes) the best way to understand a topic is to explain it to someone else. Meetup groups (albeit online at the moment) and Twitter (I have an agile list of people I follow) are good, but the one I find most useful is my team’s coaching circle: a weekly opportunity for us to discuss our challenges, discoveries, experiments, and questions. Rather than squeeze this in at the end of a long post, I’ll write a separate entry about how this format works and the type of things we discuss.