“Definitions of #NoEstimates”? An enumerated list of counterpoints, Part II

As I set the scene in Part I of this post, I’m centralizing the counterpoints here for the enumerated list of #NoEstimates “definitions” (meaning approaches/arguments) that were nicely laid out by Jay Bazuzi in his recent post. Jay listed 11 items, the first six of which I covered in Part I of my post; I’m covering the last five in this Part II, plus adding my counterpoints for two additional frequent NE arguments that Jay omitted.

7. The parts of our work that can be estimated aren’t the parts that matter: if you understand work well enough to estimate it reliably, then it’s in the Known/Complicated or Obvious domains and you should automate it away.

But everything can be estimated to some degree of accuracy, and “accuracy” doesn’t imply precision. And the very phrasing of the question misses the point on what estimates actually are: note the casual misuse of “reliably” to imply some level of what amounts to certainty. No profession works with certainty. My dentist has never put a crown on this particular tooth, but she has no problem discussing with me the probable time frame, cost, and risks that are involved in doing so.

We’ve got to stop thinking (and we’ve certainly all got to stop projecting to our business compatriots the pervasive attitude) that software developers are special snowflakes who just can’t reasonably be asked to give their professional judgment in a similar manner, in areas with which they are, in general, deeply familiar. Note too that estimates, properly done, are always revised regularly as your understanding increases. It’s not a one-shot deal. Professionals in any arena simply don’t chronically scoff at normal business questions, and questions about cost, effort, and time are all perfectly normal.

Also, think about the automation claim: it’s a rather strange and quite techno-centric assumption that anything you can understand would be both possible and somehow easy to automate. For example, all of us understand quite well the basic process and mechanisms required for driving, but look at auto manufacturers and technology companies struggling with automating the trickier aspects of self-driving vehicles.

Often, what’s very hard to automate isn’t at all hard to estimate usefully. In fact, that’s the whole point. When I drive, any new trip I embark on will have unfamiliar territory and new challenges, yet I am perfectly capable of making some assumptions, setting an overall plan, and adjusting as needed as I proceed. Equally, just because a software project incorporates something new (a technology, an approach, an integration) doesn’t mean that it’s a completely brand-new beast with absolutely no commonalities to what’s come before. We’re humans, we’re engineers, we’re practitioners, and that means we extend tried-and-true techniques and practices every day in various ways without somehow sailing off the edge of the world into the completely unknown/unplannable. We’ve got to stop raising the all-too-frequent lament of “here be dragons” for every new initiative; it makes us come off, to our business colleagues, like Chicken Little combined with Eeyore.

8. We could estimate reliably, but there are other approaches that produce better outcomes. Using estimates takes your attention away from those other approaches.

This is essentially the oft-repeated NE “there are better ways” argument. But that statement carries no weight whatsoever as an abstract declaration, i.e., when it is used (as it almost always is) as a stand-alone deepity containing no specifics on what those “other approaches that produce better outcomes” may actually be. “There are better ways” worked (maybe) as an argument when the NE movement was starting out years ago. Yet years later, we’ve heard little or nothing about these surmised “better ways”. Instead, we simply hear wild claims, the same arguments just repeated more loudly, and zero discussion of the counterpoints that have been raised to them.

If you want to argue that there are better ways, go ahead, but the burden is on you to describe those better ways in detail, so that they can be evaluated and responded to. Anything else is just empty rhetoric.

9. When there’s a lot of technical debt, you can’t estimate reliably because you don’t know when you’ll hit a quagmire. Accidental complexity greatly exceeds essential complexity, but inconsistently. See “7 minutes 26 seconds”.

Again, true but only to a point. The sizable impact of technical debt on estimating is acknowledged and discussed in any text on estimating, such as McConnell’s Software Estimation. The answer, of course, is to set about actually addressing the overall issue of technical debt in the systems you’re dealing with, not simply throw up your hands in helpless despair and say there’s never any use in estimating whatsoever.

10. When the team owns its business outcomes, it doesn’t need estimates. See “AFM Optimizing”.

Not sure what this means, exactly, but it smacks quite a bit of the lofty insinuation of “if only we were in charge.” Really? I’ve rarely (if ever) met a developer who was champing at the bit to “own the business outcome”, because the implication there is that if the business outcome turns out to be negative, the developer’s job is more seriously at risk. Or maybe this means basing developer compensation on business metrics success: I suspect most developers would run from that idea, because (and this is a reasonable basis for such a reaction) business success may indeed depend on lots of external factors besides the quality of the software systems they construct. If the stated NE approach/argument somehow means neither of these things, a fair question is “what does it really mean to ‘own’ the business outcome?”

Of course, though, the devs are usually NOT in charge, and someone else owns the business outcome. Even the CEO reports to the Board of Directors. And everyone needs estimates for obvious, non-nefarious purposes: to help decide where we should invest our scarce resources, and to monitor the company’s progress according to the plans we’ve laid out, sometimes even publicly. And whoever “owns the business outcome” is pretty much always going to be interested in setting and watching such plans, using something other than earnest gut-level assurances about how dedicated the team may be. If the business outcome matters to you, you’re going to want to see a plan, and you’ll want to have some notion of how well you’re tracking to that intended plan. Anything else is just asking people to go on faith.

And faith is never enough: specifically, the attitude of “just let us own our business outcomes and trust us, you’ll love the result” doesn’t cut it for most business endeavors. Anyone declaring that is (de facto) aligning themselves with the likes of Elizabeth Holmes of Theranos, where investors lost many hundreds of millions of dollars by trusting a confident, compelling spokesperson and failing to do basic due diligence on the claims they were hearing.

11. My boss uses estimates as a tool of abuse and manipulation, creating an unsafe work environment.

That’s unfortunate and regrettable. Chances are high in such situations that your boss uses other tools and other approaches as well that contribute to an unsafe work environment. Either way, that behavior has nothing to do with the value of estimation when it’s used properly, collaboratively, and non-abusively. Despite what you may see NE proponents claim, the everyday use of estimating doesn’t automatically and always lead to forms of abuse and manipulation; if it did, given the prevalence of estimating in our daily lives, none of us would be able to get out of bed in the morning. So let’s stop making the absurd logical leap of equating estimation with abuse just because that occasionally happens: seeing plentiful examples of bad driving and occasional twisted metal car wrecks on your daily commute doesn’t make a compelling argument for #NoCars.

Moreover, proper use of estimates can in fact be a team’s best defense against being arbitrarily saddled with work that goes way beyond their actual capacity to take on. A healthy culture of estimation and capacity-fitting forces trade-offs, as people up the chain are brought to realize they can’t “have it all”.


Additional points (not in Jay’s original list) that are often cited to justify #NoEstimates:

12. There’s no need to estimate. Just pick the most valuable thing to do, complete it, then move on to what is now the most valuable of the items remaining. Wherever you decide to stop, you’ll have implemented the most valuable aspects of what you set out to do.

OK, but who determines “the most valuable thing to do”? Based on what, and at what granularity? And does the entire team work on that and only that? And what if you run out of money or time before you actually finish the absolutely vital parts of “what you set out to do”? Software isn’t composed of independent chunks where any grouping whatsoever, any subset of its fragmented features, will provide the desired business value. A tricked-out sports car that lacks side-view mirrors and headlights, just because those were deemed less important than the steering or braking systems, is still unusable on the highway.

This item-by-item, no-long-range-plan approach may work fine for something like a software system in maintenance mode, with mostly small, independent, uncertain items (especially bugs) in an ever-evolving list/backlog. But using it for everything a team needs to take on, especially major initiatives? On its face, it’s an anti-planning, very reactive stance, focusing on one item at a time, purposely ignoring any kind of big picture. And as such, it still doesn’t answer the fundamental, overwhelmingly common, unavoidable business question: I need business capability X. What’s the likely time frame, cost, level of effort, and risk associated with getting me that capability?

13. Deliver regularly. Users will stop asking for estimates because they know that the team delivers regularly.

Color me (intensely) skeptical about the odds of users suddenly deciding they don’t need to know likely levels of effort (cost, time frame) for their initiatives. Users are not going to be fooled: regular delivery of smaller items in general doesn’t say much, if anything, about the delivery of a *specific* desired larger business capability.

As the metaphor I always use illustrates: just because a bus terminal features buses arriving every 3 minutes doesn’t tell me anything whatsoever about when MY bus to Pittsburgh will show up. Why would the regular arrival of various buses possibly cause me to stop asking about the arrival time of the bus that I need? Why wouldn’t I care about being informed if, say, that specific Pittsburgh-bound bus breaks down en route to the terminal and there will now be a substantial delay before it will be there? Why wouldn’t I take it amiss if your answer to my “when will the Pittsburgh bus be here” is a smiling, placating, infuriating “oh, not to worry, buses arrive here on average every three minutes”?

Similarly, your team may be delivering various items every week into production. That’s laudable and something to strive for, but it is a completely irrelevant fact when people ask (and they always will) when the team will likely finish building a specific larger capability that will take multiple weeks.

Not answering (essentially, point-blank refusing or even politely fending off) the question about when something can be expected can work just fine, in all those cases where no one especially cares when that something will be delivered, what exactly might be delivered, or how much it is likely to cost. Because that means that there are no stakeholder expectations and no curiosity about progress and likely time frame. I’ve encountered exactly zero such circumstances in my decades as an IT professional.

So between Part I of this post and Part II, I’ve laid out, item by item, approach by approach, the specific counterpoints to all of #NoEstimates’ collected “definitions”.

And let’s make the key point again: NE proponents cannot and do not ever respond to these counterpoints with actual reasoned responses attempting to argue their invalidity. Instead we hear insults, name-calling, and blocking of anyone who challenges their thinking. Draw your conclusions.
