“Just try it”? How NOT to sell a controversial idea

Alas: when it comes to pitching a controversial idea, many of us in technology fail miserably. We often fall reflexively into extreme “oversalesmanship” of a pet idea. We tend towards the binary: we seem to find it next to impossible to see the idea’s downsides, or to imagine how other people might be viewing it and how we could usefully, effectively, and without condescension counter their various objections (i.e., barriers to the “sale”) to our idea.

Instead, here’s how we often react. We “flip the bozo bit” all too readily on anyone who criticizes our baby: such folks are clearly clueless, we think; we rant that they must not be technical; they’ve “probably never written software at all” and “possibly can’t work their <expletive> email”; they’re a PHB; they’re a troll; they’re a dinosaur; we can’t wait for them to die out so we, the enlightened wizards, can take over. (Actual examples of such declarations are easy to find.)

None of this attitude is inevitable or unfixable. A start at combating this weakness when selling others on a controversial idea is to heighten our own awareness of the problem. Inspect and adapt, after all. So let’s focus here on one particular tactic of such bad salesmanship, as frequently employed by the (yes, very controversial) #NoEstimates movement: the “just try it” taunt. [Read more…]

“Definitions of #NoEstimates”? An enumerated list of counterpoints, Part II

As I explained when setting the scene in Part I of this post, I’m centralizing here the counterpoints to the enumerated list of #NoEstimates “definitions” (meaning approaches/arguments) that Jay Bazuzi nicely laid out in his recent post. Jay listed 11 items, the first six of which I covered in Part I; I’m covering the last five in this Part II, plus adding my counterpoints for two additional frequent NE arguments that Jay omitted.

7. The parts of our work that can be estimated aren’t the parts that matter: if you understand work well enough to estimate it reliably, then it’s in the Known/Complicated or Obvious domains and you should automate it away.

But everything can be estimated to some degree of accuracy, and “accuracy” doesn’t imply precision. And the very phrasing of this claim misses the point about what estimates actually are: note the casual misuse of “reliably” to imply some level of what amounts to certainty. No profession works with certainty. My dentist has never put a crown on this particular tooth, but she has no problem discussing with me the probable time frame, cost, and risks that are involved in doing so.

We’ve got to stop thinking (and we’ve certainly all got to stop projecting that pervasive attitude to our business compatriots) that software developers are special snowflakes who just can’t reasonably be asked to give their professional judgment in a similar manner, in areas they are, in general, deeply familiar with. Note too that estimates, properly done, are always revised regularly as your understanding increases. It’s not a one-shot deal. Professionals in any arena simply don’t chronically scoff at normal business questions, and questions on cost, effort, time are all perfectly normal.

Also, think about the automation claim: it rests on a rather strange and quite techno-centric assumption, namely that anything you can understand must be both possible and somehow easy to automate. For example, all of us understand quite well the basic process and mechanisms required for driving, yet auto manufacturers and technology companies are still struggling to automate the trickier aspects of self-driving vehicles.

Often, what’s very hard to automate isn’t at all hard to estimate usefully. In fact, that’s the whole point. When I drive, any new trip I embark on will have unfamiliar territory and new challenges, yet I am perfectly capable of making some assumptions, setting an overall plan, and adjusting as needed as I proceed. Equally, just because a software project incorporates something new (a technology, an approach, an integration) doesn’t mean that it’s a completely brand-new beast with absolutely no commonalities to what’s come before. We’re humans, we’re engineers, we’re practitioners, and that means we extend tried-and-true techniques and practices every day in various ways without somehow sailing off the edge of the world into the completely unknown/unplannable. We’ve got to stop raising the all-too-frequent lament of “here be dragons” for every new initiative; it makes us come off, to our business colleagues, like Chicken Little combined with Eeyore.
[Read more…]

“Definitions of #NoEstimates”? An enumerated list of counterpoints, Part I.

A week or two ago, we saw the first interesting new blog post on the bizarre and rancorous #NoEstimates movement in quite some time. Although that post is titled “definitions of #NoEstimates”, it’s not really “definitions” per se; it seems instead to be more of a mixed list of NE approaches (sometimes contradictory, as the author himself notes) and miscellaneous arguments that have been frequently made in favor of the movement. To the best of my knowledge, no such overall compilation has ever been made by a #NoEstimates proponent; as such, I applaud Jay Bazuzi for putting it together.

Of course, each of the described approaches/arguments has been outlined (and countered) individually many times before. But as far as I know, none of the major NE advocates has ever actually addressed any of the counterpoints to them, choosing instead just to block and insult the people making those counterpoints, often boasting proudly that they do so to “filter out the noise”.

In any case, let’s centralize those counterpoints now: here’s an item-by-item recap, springboarding off of Jay’s enumerated list of #NoEstimates approaches. For reasons of space and manageability, I’m splitting this rundown of counterpoints into two separate posts. Here goes: [Read more…]

Quocknipucks, or, why story points make sense. Part II.

Last time, I set the stage for why Quocknipucks (OK, I mean story points), despite being the target of recent severe Agile backlash, actually do provide a sensible and workable solution to the two most difficult aspects of software team sprint and capacity planning. I elaborated on the ways that Quocknipucks (that is, story points) solve these two problems, in that they:

  • Enable us to gauge the team’s overall capacity to take on work, by basing it on something other than pure gut and/or table-pounding; and
  • Enable us to fill that team capacity suitably, despite having items of different size, and, again, basing our choices on something other than pure gut.

But there’s lots more to cover. I have more observations about the role of story points, and I want to provide some caveats and recommendations for their use.  And it’s also worthwhile to list the various objections that people routinely make to story points, and provide some common sense reasons for rejecting those objections.
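To make those two uses concrete, here’s a minimal sketch in Python. It’s purely illustrative and not taken from any particular tool or from my original posts: the function names, numbers, and the simple greedy fill rule are all my own assumptions. It derives a rough capacity from recent sprints, then fills a sprint with prioritized, mixed-size items without overfilling it.

```python
# Minimal, illustrative sketch: gauge capacity from past sprints, then fill a
# sprint with mixed-size items without overfilling. All names/numbers invented.
from statistics import mean

def capacity_from_history(completed_points_per_sprint):
    """Estimate next sprint's capacity as the average of recent sprints."""
    return mean(completed_points_per_sprint)

def fill_sprint(backlog, capacity):
    """Greedily pull (name, points) items, assumed already prioritized."""
    selected, used = [], 0
    for name, points in backlog:
        if used + points <= capacity:
            selected.append(name)
            used += points
    return selected, used

history = [21, 18, 24]                          # points completed in past sprints
backlog = [("big feature", 13), ("API fix", 3),
           ("small-but-important tweak", 2), ("reporting", 8)]
capacity = capacity_from_history(history)       # about 21 points
chosen, used = fill_sprint(backlog, capacity)
print(f"capacity ~ {capacity:.0f} points; planned {used} points: {chosen}")
```

The particular rules don’t matter; the point is that both decisions (how much to take on, and which mix of large and small items) rest on something more than pure gut.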

[Read more…]

Quocknipucks, or, why story points make sense. Part I.

A long time ago, before most people (including me) had ever heard of the concept of story points, I came in as the CTO at a major social networking site. The dev team, even though staffed with a lot of excellent developers, had experienced enormous historical difficulty in delivering according to expectations, theirs or anyone else’s. People both inside and outside of the team complained that the team wasn’t delivering big projects on a timely basis, plus there were a lot of small-but-important items that never got done because the team was focused on larger work.

What’s the team’s capacity, I asked? How much can it reasonably take on before it becomes too much? How do we viably fit in smaller items along with the major initiatives, instead of it being an either/or? No one really knew, or had even thought much about, what seemed like natural (even mandatory) questions to be asking.

At the time, I declared that it seemed like we just needed some abstract unit of capacity (I jokingly proposed the first Carrollian word that popped into my head: Quocknipucks) that could be used to help us “fill up the jar” with work items, large and small, without overfilling it. Each item would be valued in terms of its number of Quocknipucks, representing some approximation of size, and we’d come up with a total team capacity for a given time frame by using the same invented Quocknipuck units, which we would adjust as we gained experience with the team, the platform, the flow.
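One way to picture that calibration loop, purely as an illustration (the original idea didn’t specify any particular mechanism, and the numbers and smoothing weight here are invented), is a running capacity figure that gets nudged toward what the team actually completed each sprint:

```python
# Hypothetical sketch of "adjust as we gain experience": the Quocknipuck
# capacity starts as a guess and is blended with each sprint's actual output.
def adjust_capacity(current_capacity, points_completed, weight=0.3):
    """Blend the prior capacity guess with the latest sprint's throughput."""
    return (1 - weight) * current_capacity + weight * points_completed

capacity = 20.0                       # initial guess, in Quocknipucks
for completed in [14, 17, 19, 18]:    # what actually got finished each sprint
    capacity = adjust_capacity(capacity, completed)
    print(f"revised capacity: {capacity:.1f} Quocknipucks")
```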

Little did I know that I was independently coming up with the basic idea behind story points. Interestingly, the term I chose was deliberately whimsical, to separate the concept from things in the real world like the actual amount of time needed for any particular item.

Here’s what I’ll argue: the basic idea behind story points is sound, and useful; yet, somehow, a certain set of Agilists has now come to reject story points entirely, even referring to them (wrong-headedly, and with considerable overstatement) as “widely discredited”.

[Read more…]

Deconstruction of a #NoEstimates presentation

It’s been over three years now since I published a lengthy dismantling of the very bizarre “No Estimates” movement. My four-part series on the movement marched methodically and thoroughly through the issues surrounding NoEstimates — e.g., what common sense tells us about estimating in life and business, reasons why estimation is useful, specific responses to the major NoEstimates arguments, and a wrap-up that in part dealt with the peculiar monoculture (including the outright verbal abuse frequently directed by NoEstimates advocates at critics) that pervades the world of NoEstimates. I felt my series was specific and comprehensive enough so that I saw no reason (and still see no reason) to write further lengthy posts countering the oft-repeated NoEstimates points; I’ve already addressed them not just thoroughly, but (it would seem) unanswerably, given that there has been essentially no substantive response to those points from NoEstimates advocates.

However, the movement shows little sign of abating, particularly via the unflagging efforts of at least two individuals who seem to be devoted to evangelizing it full-time through worldwide paid workshops, conference presentations, etc. Especially at conferences attended primarily by developers, the siren song that “estimates are waste” is ever-compelling, it seems. Even though NoEstimates advocates apparently have no answer to (and hence basically avoid discussion of) the various specific objections to their ideas that people have raised, they continue to pull in a developer audience to their many strident presentations of the NoEstimates sales pitch.

So here’s my take: the meaty parts of the topic, the core arguments related to estimates, have indeed long been settled — NoEstimates advocates have barely ventured to pose either answers or substantive (non-insult) objections to the major counterpoints that critics have raised. For the last several years, then, the sole hallmark of the NoEstimates controversy has actually not been the what but the how: the tone, rhetoric, and (ill)logic with which NoEstimates advocates present their case.

With that in mind, it’s time to deconstruct a NoEstimates conference talk in detail. There are several such talks I could have done this with (see the annotated list at the end of this post), but I decided to choose the most recent one available, despite its considerable flaws. And by “deconstruct”, I’m going to look primarily at issues of gamesmanship and sheer rhetoric — in other words, I won’t take time or space to rehash the many weaknesses of the specific NoEstimates arguments themselves. As I’ve stated, those weaknesses have been long addressed, and you can refer to their full discussion here.

I’m arguing that at this point, the key learning to be had from the otherwise fairly futile and sadly rancorous NoEstimates debate is no longer about the use of estimates, or even about software development itself, but rather about the essence of how to argue any controversial case effectively and appropriately. It’s an area where IT/development people are often deficient, and a notable case in point is the flawed way that some of those people argue for faddish, unsupportable ideas like NoEstimates.

The NoEstimates conference talk that I’ll deconstruct here, given at the Path To Agility conference in 2017, is characteristic: in particular, it starts out by setting the stage for a “them against us” attitude; it then relies on:

  • straw man arguments and logical leaps
  • selective and skewed redefinitions of words
  • misquoting of experts
  • citing of dubious “data” in order to imbue the NoEstimates claims with an aura of legitimacy.

[Read more…]

The case against #NoEstimates: the bottom line

I’ve now methodically presented the case against #NoEstimates in three different lights: from a common sense standpoint, from the perspective of the solid reasons why estimates are useful, and by examining the various frequent talking points used by NoEstimates advocates.  Looked at from any of these angles, NoEstimates comes up way short on both its core ideas and business practicality.

Aside from these issues of substance, let’s look briefly at the behavior of the NoEstimates proponents. Blunt as it may be, here’s my summary of the behaviors I’ve seen across most NoEstimates posts and tweets:

  • Presenting, and repeating via redundant tweets month after month, fallacy-riddled arguments consisting primarily of anecdotal horror stories, jibes at evil management, snide cartoons, and vague declarations that “there are better ways.”
  • Providing little or no detail or concrete proposals on their approach; relying (for literally years now) on stating that “we’re just exploring” or “there are better ways.”
  • Consistently dodging substantive engagement with critics, and at times openly questioning whether critics should even have a voice in the discussion. If NoEstimates advocates avoid engaging actively in the marketplace of ideas and debate, why should their arguments be taken seriously? Real progress in understanding any controversial topic requires that we do more than state and restate our own views; we have to actually engage with those who disagree.
  • Continuing to use discredited examples and statistics, or even to blatantly misrepresent the stated views of recognized authorities, to help “prove” their case.
  • Frequently using epithets to describe NoEstimates critics: “trolls”, “liars”, “morons”, “box of rocks”, and more.

I pointed out in my introduction that the lofty claims of the NoEstimates movement (essentially, that software development can and should be an exception to the natural, useful, and pervasive use of estimates in every other walk of life) carry a heavy burden of proof. Not only have they failed to meet that burden, they’ve barely attempted to, at least not the way that most people normally set about justifying a specific stance on anything.

But aside from style, let’s return to the substance of the issue. Here’s my take, as backed by specific examples over the course of these blog posts: estimates are an important part of the process of collaboratively setting reasonable targets, goals, commitments. Indeed, whether estimates are explicit or implicit, they’re a reality. I see them as an unavoidable and indispensable factor in business.

[Read more…]

The case against #NoEstimates, part 3: NoEstimates arguments and their weaknesses

I’ve spent the last two blog posts introducing the #NoEstimates movement, first discussing what it appears to espouse, and presenting some initial reasons why I reject it. I then covered the many solid reasons why it makes sense to use estimates in software development.

This time, let’s go through, in detail, the various arguments put forward commonly by the NoEstimates advocates in their opposition to estimates and in their explanation of their approach. Full disclosure: I’ve attempted to include the major NoEstimates arguments, but this won’t be a balanced presentation by any means; I find these arguments all seriously flawed, and I’ll explain why in each case.

Here we go, point by point:

  • “Estimates aren’t accurate, and can’t be established with certainty”

Let’s use Ron Jeffries’ statement as an example of this stance:

“Estimates are difficult. When requirements are vague — and it seems that they always are — then the best conceivable estimates would also be very vague. Accurate estimation becomes essentially impossible. Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.”

But “accurate” is simply the wrong standard to apply to estimates. It’d be great if they could be totally accurate, but it should be understood at all times that by nature they probably are not. They are merely a team’s best shot, using the best knowledge available at the time, and they’re used to establish an initial meaningful plan that can be monitored and adjusted moving forward. They’re a tool, not an outcome. As such, the benefits of estimates, and their contributions to the planning and tracking process, exist even without them being strictly “accurate” per se. These benefits were itemized in my last post.
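One common way to express an estimate as exactly that kind of best shot, with its uncertainty made explicit, is a three-point (PERT-style) estimate. This is my own illustration, not something my earlier posts or the NoEstimates discussion prescribe, and the numbers are invented:

```python
# PERT-style three-point estimate: an illustration of an estimate that carries
# explicit uncertainty rather than claiming pinpoint accuracy.
def three_point_estimate(optimistic, most_likely, pessimistic):
    """Return the expected value and a rough standard deviation (PERT formulas)."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

expected, sd = three_point_estimate(optimistic=10, most_likely=15, pessimistic=30)
print(f"expected ~ {expected:.1f} days, give or take about {sd:.1f}")
# Revisit the inputs as understanding improves: the estimate is a planning
# tool to be revised, not a one-shot commitment.
```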

Knowing the future precisely isn’t what estimating is about, actually. It’s a misunderstanding and a disservice to think it is. Here’s why. [Read more…]
