Forecasting Using Yesterday’s Weather

When we talk about “forecasting using yesterday’s weather”, we mean that the most reliable prediction comes from using recent data. For example, if you want to know how much a team can complete next sprint, look at how much they completed in the past few sprints.

The recent weather in Toronto reminded me of a discussion I had with a Product Owner: “how can we use yesterday’s weather when what happened was unexpected and unlikely to happen again?” In the past week, the temperature here peaked at 31°C and three days later we had hail and snow. If we forecast next week’s weather using last week’s data, then should we expect to see both extremes again?

Let’s review the PO’s question in this context: were last week’s high and low temperatures unexpected? Well, the short-term forecasts warned that both were coming. But even if one or both had been unexpected, would it matter? It’s not like we expect weather forecasts to be 100% reliable – that would be unrealistic.

Are those conditions unlikely to happen again? Well, that depends on your definition of “likely”. Will it definitely, 100% guaranteed, without fail, happen tomorrow? Probably not. Will it happen again at some point in the future? Probably. I’m still not willing to say yes or no because that’s not how forecasts work! Will we see both 30°C heat and snow tomorrow? I’m going to say that’s very unlikely, but I won’t go as far as a 100% certain no.

So if we can’t give the PO an ironclad guarantee, why should we use “yesterday’s weather” when forecasting? Well, we could revert to asking the team for their estimates (which is usually an “educated guess” or just “gut feel”) and then tell them they have to meet the date which they supplied … but that’s not going to be a reliable forecast, and is likely to result in inflated estimates if the team feel they are being misused.

If we have to use forecasts, then representative data is the best basis. We also need to understand probability: asking for high confidence in an estimate means you’ll get a very conservative forecast. And we need to understand that the further ahead we try to forecast, the more likely we are to be wrong; the best time to forecast the weather for 3pm tomorrow is at 2:59pm tomorrow.
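To make the confidence trade-off concrete, here’s a minimal Monte Carlo sketch (all the numbers are hypothetical) that forecasts how many sprints a backlog might take by resampling recent sprint throughputs – notice that the 95%-confidence answer is more conservative than the 50% one:

```python
import random

random.seed(42)

# Hypothetical recent sprint throughputs (stories completed) -- "yesterday's weather".
recent_sprints = [7, 9, 4, 8, 6, 10]

def forecast_sprints(backlog_size, samples, trials=10_000):
    """Simulate how many sprints it takes to finish `backlog_size` stories,
    drawing each simulated sprint's throughput from the historical samples."""
    outcomes = []
    for _ in range(trials):
        done, sprints = 0, 0
        while done < backlog_size:
            done += random.choice(samples)
            sprints += 1
        outcomes.append(sprints)
    return outcomes

outcomes = sorted(forecast_sprints(60, recent_sprints))
p50 = outcomes[len(outcomes) // 2]       # "likely" answer
p95 = outcomes[int(len(outcomes) * 0.95)]  # "high confidence" answer
print(f"50% confidence: {p50} sprints or fewer")
print(f"95% confidence: {p95} sprints or fewer")
```

The gap between the two percentiles is exactly why asking the team for a date they’re “sure” about pushes the forecast out.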

But most of all, we need to understand that forecasts are a guess – if we do it right it’s the most reliable prediction that we can make, but it’s still a guess – so don’t gamble more than you’re willing to lose and be prepared to adapt when reality takes an unexpected turn.

It’s not all about delivery

As I wrote in an earlier post, the first of the Manifesto’s principles is “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”. It is our highest priority, but it’s not our only priority.

If we only focus on delivery, then we become a production line that does the same thing over and over again. That wouldn’t be so bad if what we did was the absolute best possible thing, but that’s not realistic – that’s why we have feedback loops and strive for continuous improvement. (Incidentally, even production lines aren’t focused solely on delivery!)

Not only do we need to collect feedback but we also need to be able to act on it, and to do that we must have room to make changes. If we are always rushing because of deadlines or other pressure, then we don’t have time to improve; in fact, we’re probably on a downward spiral which will result in frustration, mistakes, burnout and, eventually, people quitting. Warning signs of this slippery slope include perfunctory demos and sprint reviews (because there’s no time to improve the product), and ineffective or even skipped retrospectives (there’s no time to think about how to improve the way we work – and even if we did discuss it, we couldn’t address the biggest problem, which is lack of time, so instead let’s spend that time cranking out more code).

But it’s not just time that’s needed – there must be room to grow, both personally and as a team. This is one of the biggest things a manager can do for a team – ensure that the team members are getting the training, mentoring, coaching and support they need. (If your organisation doesn’t have managers then hopefully there’s a similar role to support everyone’s development. If you don’t have managers because the Scrum Guide doesn’t mention them then consider that the Guide also doesn’t mention payroll but everyone still expects to get paid!)

A warning sign that I watch for is when someone (often the Product Owner) tells the team there’s no capacity in the next few sprints to tackle technical debt or any improvement experiments because “we’ve just got to get this feature out”. Unless there is an ironclad guarantee that the sprint(s) following the release will be focused on those postponed items, I would suspect that the same message will surface again and again. When the sole focus becomes delivery, I’ve seen teams resort to hiding work (tech debt “goes dark”, i.e. disappears from the kanban board) or introduce a rigid division of time (20% of every sprint is withheld for technical tasks) – neither is healthy, but both are understandable.

So how do we make room? A key step is for people to stop saying “deadline” when they mean “target date”. There are some instances where there really is a deadline (e.g. if legislation requires compliance starting on a particular day, then that’s probably a valid deadline) but more often when a team is told “the release deadline is June 30th” that’s actually a target. If the date can slip, then it’s not really a deadline. If the date is tied to someone’s bonus, it’s not really a deadline. Artificial deadlines cause unwarranted pressure. [This is a pet peeve of mine so I’ll write a separate post just focused on deadlines.]

My other recommendation is to improve how plans are created. (Even if your team has adopted #NoEstimates, there’s probably still someone in Sales who has to create a plan.) Even when the dev team is adept at relative sizing for stories, it’s not uncommon for people outside the team to estimate features in days or weeks and that is where the problem begins. Ideally, when someone asks how big a future feature is, the answer should be in terms of “yesterday’s weather”, e.g. “it feels like feature X that we did a couple of months ago” and then inspecting the data for feature X will provide some rough estimates.
The big assumption is that past conditions (when the team delivered X) will be similar when they work on the future feature… but if tackling tech debt, having effective retrospectives and running the associated experiments, and other “non-delivery” activities were postponed because of pressure to deliver feature X, then don’t be surprised when you encounter the same problem in your next delivery.
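The “feels like feature X” comparison can be sketched in code. This is a hypothetical illustration – the cycle times, the story count, and the simplifying assumption that stories are worked one after another are all invented for the example – but it shows how inspecting feature X’s data turns a gut-feel comparison into a rough range:

```python
import statistics

# Hypothetical per-story cycle times (in days) recorded while delivering feature X.
feature_x_cycle_times = [3, 5, 2, 8, 4, 6, 3, 7]
story_count = len(feature_x_cycle_times)

# "It feels like feature X": assume the new feature has a similar number of
# stories and that conditions haven't changed -- the big assumption above.
likely = statistics.median(feature_x_cycle_times) * story_count

# A conservative figure based on the 90th-percentile cycle time.
p90_cycle = sorted(feature_x_cycle_times)[int(0.9 * story_count)]
conservative = p90_cycle * story_count

print(f"rough forecast: ~{likely:.0f} days, conservative ~{conservative} days")
```

Even this crude range is more defensible than a number pulled out of the air, because every figure in it traces back to something the team actually did.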

It’s not all about delivery; work at a sustainable pace; pay attention to all your feedback loops (and act on that feedback); don’t introduce unnecessary pressure (e.g. artificial deadlines); nurture your team members; improve your product. If you care about your people and your product, and I assume that you do, then ensure there’s space. Without space, the ability to deliver a quality product will suffer.