The CIO and integrity: this shouldn’t be hard, folks

Surprising and disturbing IT-related news crossed my Twitter feed last night: a well-known CIO is being sued for alleged fraud by his former firm. Allegations are that this senior executive received kickbacks from vendors that he helped connect to the company where he served as CIO and vice president.

My purpose here isn’t to comment on this individual case; it’s now in the courts, information is still sketchy about the details, and I feel that people are entitled to a presumption of innocence while the various legal actions run their course. As Forbes columnist Ben Kepes wrote, though, this is “one for the ‘we knew these things happened but tried not to know about it’ department.”

So let’s broaden the topic to the overall issue of CIO integrity, particularly with respect to financial matters. As I’ve written before, the head of technology for many companies (certainly all the ones I’ve worked for) stands at the rudder of a very large portion of the overall “spend” for that company. IT infrastructure and systems spending, taken broadly, is often the second-highest category, after salaries, for total annual outlay for a company. The responsibility involved for the senior IT executive cannot be overstated.

I’ve frequently come into a new CTO/CIO position and discovered, over the course of the natural “archaeology” that one performs in such a situation, highly questionable deals that my predecessors in the position have cut. I’ve raised my eyebrows and occasionally shouted a bit at the incomprehensibility of various vendor arrangements I’ve inherited.

[Read more…]

The case against #NoEstimates: the bottom line

I’ve now methodically presented the case against #NoEstimates in three different lights: from a common sense standpoint, from the perspective of the solid reasons why estimates are useful, and by examining the various frequent talking points used by NoEstimates advocates.  Looked at from any of these angles, NoEstimates comes up way short on both its core ideas and business practicality.

Aside from these issues of substance, let’s look briefly at the behavior of the NoEstimates proponents. Blunt as it may be, here’s my summary of the behaviors I’ve seen across most NoEstimates posts and tweets:

  • Presenting, and repeating via redundant tweets month after month, fallacy-riddled arguments consisting primarily of anecdotal horror stories, jibes at evil management, snide cartoons, and vague declarations that “there are better ways.”
  • Providing little or no detail or concrete proposals on their approach; relying (for literally years now) on stating that “we’re just exploring” or “there are better ways”.
  • Consistently dodging substantive engagement with critics, and at times openly questioning whether critics should even have a voice in the discussion. If NoEstimates advocates avoid engaging actively in the marketplace of ideas and debate, why should their arguments be taken seriously? Real progress in understanding any controversial topic requires that we do more than state and restate our own views; we must actually engage with those who disagree.
  • Continuing to use discredited examples and statistics, or even blatant misrepresentation of the stated views of recognized authorities, to help “prove” their case.
  • Frequent use of epithets to describe NoEstimates critics: “trolls”, liars, “morons”, “box of rocks”, and more.

I pointed out in my introduction that the lofty claims of the NoEstimates movement (essentially, that software development can and should be an exception to the natural, useful, and pervasive use of estimates in every other walk of life) carry a heavy burden of proof. Not only have they failed to meet that burden, they’ve barely attempted to, at least not the way that most people normally set about justifying a specific stance on anything.

But aside from style, let’s return to the substance of the issue. Here’s my take, as backed by specific examples over the course of these blog posts: estimates are an important part of the process of collaboratively setting reasonable targets, goals, and commitments. Indeed, whether estimates are explicit or implicit, they’re a reality. I see them as an unavoidable and indispensable factor in business.

[Read more…]

The case against #NoEstimates, part 3: NoEstimates arguments and their weaknesses

I’ve spent the last two blog posts introducing the #NoEstimates movement, first discussing what it appears to espouse, and presenting some initial reasons why I reject it. I then covered the many solid reasons why it makes sense to use estimates in software development.

This time, let’s go through, in detail, the various arguments put forward commonly by the NoEstimates advocates in their opposition to estimates and in their explanation of their approach. Full disclosure: I’ve attempted to include the major NoEstimates arguments, but this won’t be a balanced presentation by any means; I find these arguments all seriously flawed, and I’ll explain why in each case.

Here we go, point by point:

  • “Estimates aren’t accurate, and can’t be established with certainty”

Let’s use Ron Jeffries’ statement as an example of this stance:

“Estimates are difficult. When requirements are vague — and it seems that they always are — then the best conceivable estimates would also be very vague. Accurate estimation becomes essentially impossible. Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.”

“Accurate” is simply the wrong standard to apply to estimates. It’d be great if they could be totally accurate, but it should be understood at all times that by nature they probably are not. They are merely a team’s best shot, using the best knowledge available at the time, and they’re used to establish an initial meaningful plan that can be monitored and adjusted moving forward. They’re a tool, not an outcome. As such, the benefits of estimates, and their contributions to the planning and tracking process, exist even without them being strictly “accurate” per se. These benefits were itemized in my last post.
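One concrete way to express an estimate as a “best shot” with acknowledged uncertainty, rather than a falsely precise number, is a three-point, PERT-style calculation. This is purely my illustration (neither side of the debate mandates this technique), and the task figures below are invented:

```python
# Illustration only: a three-point (PERT-style) estimate turns optimistic,
# likely, and pessimistic figures into an expected value plus a spread,
# making the uncertainty explicit instead of hiding it in a single number.

def pert_estimate(optimistic, likely, pessimistic):
    """Classic PERT weighting: expected value and a rough standard deviation."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6  # rough measure of uncertainty
    return expected, spread

# Hypothetical task: best case 4 weeks, likely 6, worst case 14.
expected, spread = pert_estimate(optimistic=4, likely=6, pessimistic=14)
print(f"{expected:.1f} weeks, +/- {spread:.1f}")  # prints "7.0 weeks, +/- 1.7"
```

An estimate stated this way is visibly a range and a tool, which is exactly the standard the paragraph above argues for.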

Knowing the future precisely isn’t what estimating is about, actually. It’s a misunderstanding and a disservice to think it is. Here’s why. [Read more…]

The case against #NoEstimates, part 2: why estimates matter

Last time, I provided an introduction to a very odd and very vocal recent movement known as #NoEstimates, which seeks ways to reduce or eliminate the use of estimates in software development. I started off my discussion of it by going through some basic common-sense business reasons to reject it. Those reasons for rejection boiled down to:

  • estimates are flat-out natural, ubiquitous, and unavoidable in practical life and in business;
  • expressing general reluctance to do them unfortunately reinforces the often negative perception of IT people as aloof, uncooperative, and unsavvy about business imperatives.

Let’s look now at the many other solid reasons to keep estimates in the software development and project management toolbox:

Estimates help in project selection over a wider time frame, and they assist in filling in a project portfolio evenly to align with overall team/company capacity. Specifically, companies often need to determine which projects to sink time and resources into: picking, say, just three out of a list of fifteen proposed projects for a given time period. This is a common and recurring dilemma in every company I’ve ever worked at, where demand for various business functionality always exceeds the supply of resources to fulfill it.

In such a ubiquitous scenario, what makes anyone think that the anticipated cost and duration for delivering each potential project should not figure into the decision process that selects the “vital few” from among the many choices? Why would you possibly rule out considering those factors to the best of your ability? Yes, of course they should be weighed along with value, risk, and other factors, but cost and schedule will always be among the key considerations to juggle.

You’re thinking that assessing probable cost and schedule can’t be done, because you’re uncomfortable with uncertainty, and perhaps you might get it wrong? Then you’re not ready to play strategically in the business arena.

Estimates reduce overreliance on experimenting with “red herring” projects. Nothing is wrong with selective and judicious experimentation on a small scale, focused on learning and adjusting, but the universal NoEstimates answer to the project selection conundrum is to “just start”. Well, simply as a practical matter, you can’t “just start” 15 different projects as an experiment and gather enough useful, consistent information on all of them to feed into your decision meaningfully. Even if you could and did, you’d still have to use your gut (i.e., you’d have to estimate) based on what that information is telling you, since very few of the unknowns associated with each of the potential projects would have been eliminated via the short experiment. The NoEstimates panacea of “story slicing” doesn’t help you at this up-front stage: unless (and even if!) you story-slice the entire project (which would amount to big design up front, or BDUF), unknowns will almost certainly continue to lurk in your backlog, some big, some small.

There’s simply no way around it: at some point, if you want to choose purposefully among competing options of similar value, you have to take a shot at determining what each project is likely to take (cost, schedule, dependencies), despite imperfect information.
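To make that selection scenario concrete, here’s a minimal sketch of how even rough estimates let you fill a portfolio against capacity. The project names, cost and value numbers, and the simple greedy value-per-cost heuristic are all my own illustrative assumptions:

```python
# A minimal sketch of estimate-driven project selection: given rough cost
# estimates (person-weeks) and business-value scores for candidate projects,
# pick a portfolio that fits the team's capacity. The greedy heuristic and
# all figures are illustrative, not a prescription.

def select_portfolio(candidates, capacity):
    """Greedy knapsack: rank candidates by value per unit of estimated cost."""
    ranked = sorted(candidates, key=lambda p: p["value"] / p["cost"], reverse=True)
    chosen, remaining = [], capacity
    for project in ranked:
        if project["cost"] <= remaining:
            chosen.append(project["name"])
            remaining -= project["cost"]
    return chosen

candidates = [
    {"name": "billing rewrite", "cost": 30, "value": 9},
    {"name": "mobile app",      "cost": 20, "value": 8},
    {"name": "reporting",       "cost": 10, "value": 5},
    {"name": "sso",             "cost": 15, "value": 4},
]

print(select_portfolio(candidates, capacity=45))
# prints ['reporting', 'mobile app', 'sso']
```

The point isn’t the heuristic; it’s that without the cost estimates, there is nothing for any selection process, however sophisticated, to work with.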

You don’t like it that there’s a possibility that you might get it wrong and pick the “wrong” projects? Then you’re not ready to play strategically in the business arena. [Read more…]

The case against #NoEstimates, part 1: introduction and common sense

In the immortal words of Pogo, “we have met the enemy, and he is us.”

The long-held stereotype of IT portrays us as uncooperative, unable to integrate socially, always arguing over nits, and deeply, intractably immersed in our own tunnel vision and parochial perspective.

That stereotype has held us back, as individuals and as a profession: it’s actually one root cause for the oft-lamented situation of the CIO needing a seat at the executive table and not getting it.

Now a whole movement has arisen that unfortunately reinforces that negative perception of IT people, a movement that coalesces around the Twitter hashtag #NoEstimates (all movements need a Twitter hashtag now, it appears). It started with long screeds about how inaccurate estimates are for software development: they’re nothing more than guesses; “there is nothing about them that makes them necessary, or even beneficial to the actual creation of software”. They’re a “wasteful and deceptive practice”, lies, and needing them is even comparable to how heroin users need their heroin. You can’t predict the future, they insist. The #NoEstimates rhetoric has become increasingly harsh, and replete with drastic imagery: estimates are a “game of fools”; an “inherited disease for the industry”; “we predict like gypsy ladies.” In what seems at times to be some kind of “top the previous hyperbole” competition, estimates have even been referred to as “management by violence”.

This sort of overreaction, this IT resistance to estimates, isn’t really wholly new, of course. More than once, I’ve watched in horror as an IT person earnestly explained to a company’s senior management how predicting systems delivery is so very difficult and so very filled with uncertainty, justifying why (for example) they couldn’t possibly commit to having a system deployed in a particular quarter. On one such occasion, the VP of Sales had zero patience with that: “Hey, I get asked to set my sales quotas way in advance based on my best professional judgment; what makes you folks in IT think that you should be an exception?” Business runs on goals and commitments, and on the “best judgment” estimates that help create those goals and commitments. And everyone in the room seems to understand that, except IT.

Setting achievable, concrete goals is healthy, in business and in life. Making solid, reasonable commitments is healthy. Taking responsibility for meeting one’s commitments all or at least most of the time is natural and should be encouraged. And if you’re perceived as constantly wailing that you’re different and special, in fact so special that you believe you can’t really be expected to state when you’ll be done? That’s a non-starter. This blog post is an introduction to how some IT practitioners have been pulled in by the seductive but ultimately wrongheaded NoEstimates claims, and what the resulting implications are for our industry.

What’s #NoEstimates about? Well, its proponents are actually quite slippery about providing any solid definition, reflexively pushing back whenever anyone tries to summarize it for them; they often prefer to say vaguely that it’s just “a hashtag for a concept about alternatives to estimates and how they might help make better decisions.” However, it seems to boil down to a virulent and unshakable base conviction that estimates are utterly horrible, for all the reasons stated above and more. Using estimates allegedly leads to chronic abuse of developers by management, cements inflexibility into projects, and disturbs developer “flow”. The sole alternative to estimates that the NoEstimates advocates clearly identify, however, seems simply to restate the age-old Agile principles of small teams, user stories sliced into doable chunks, “drip funding” to achieve a predictable fixed cost, and software delivery “early and often”. They point out that software project scope tends to change drastically and frequently during such frequent delivery, and argue that this scope variability allows projects to be declared done and successful without ever having actually delivered all of the upfront-identified functionality.

Over the next two blog posts, I’ll first lay out the reasons to see considerable value in a software development estimating process and its outcomes, and then respond to the myriad NoEstimates complaints that are levied against estimates as used for software development. But we’ll start here, as an intro, just with invoking basic common sense about life and business.

[Read more…]

Towards a more balanced list of content about #NoEstimates

Both my readers will have noticed there’s been a fairly large gap between my posts here, as life (picnic, lightning, and all that) has intervened. Like J.D. Salinger, however, I have continued writing drafts on various topics, and I plan to post more in the coming months.

My past posts here have often delved into a favorite theme of mine: that IT people tend to go to extremes, often rejecting something useful (an approach, a technology, a tool) simply because it has downsides. Such rejection is at times emotional and even self-righteous; we can get so caught up in it that we fail to look at a topic at all evenhandedly, let alone dispassionately.

No better case example along these lines has come along in the past year than the active and contentious #NoEstimates debate on Twitter and in the blogosphere. I’ll have a much more detailed post soon about my objections to the #NoEstimates approach overall (full disclosure: I’m one of its most vocal critics), but right now, let’s focus on one aspect of the relentless advocacy I see in the hashtag’s proponents: its lack of evenhandedness.

Specifically, proponents of #NoEstimates insist repeatedly and proudly that they’re “exploring”; recently, one major advocate tweeted out a call for links to posts about the topic (“I’m gathering links to #NoEstimates content”) so that these could be collected and posted. Yet, it turned out that only posts advocating one side of the issue would be included, even though the resulting list of links was then touted to people who might be “interested in exploring some ideas about #NoEstimates.” When challenged on this dubious interpretation of the meaning of “exploring”, the advocate then defiantly attached a disclaimer: “Warning! There are no links to ‘Estimate-driven’ posts.”

Advocates can use their own blog for whatever purposes they want, of course. Yet, there’s an interesting split going on here: staunchly claiming to be “exploring”, while rejecting the inclusion of any summarizing or critical posts, and then sneeringly labeling all such posts as “estimate-driven.” There couldn’t be a clearer case study of IT black-and-white-ism, them vs us. Explore all you want, this behavior says, as long as you’re doing it on my side of the issue and on my terms. What, there’s a post that attempts to summarize both sides of the argument? Not interested.

[Read more…]

IT does the moonwalk: our endless search for absolutes

Scene: I was CTO at a high-traffic social networking site, circa ten years ago. It was one of those times when our site got crushed by unexpected sudden volume, due to being mentioned in an article in a prominent newspaper. My infrastructure manager walked into my office the next morning, ashen-faced. “We’re gonna get killed tomorrow unless we add ten front-end servers to our prod environment,” he proclaimed. A fairly common IT reaction: absolute, adamant, ominous.

Ten new servers? That was a nice round pulled-from-thin-air number, obviously, and by the time we talked through it, we actually found other, more practical, more feasible ways first to estimate and then handle the increased load. But to the infrastructure guy as he walked in, the situation was both dire and absolute, and he saw only one solution that should even be considered.

So now let’s look at another data point on IT psychology. Take the latest iPhone brouhaha: the quick “cracking” of the iPhone 5s Touch ID fingerprint scanning technology.  Amazingly, Touch ID has turned out to be less than perfect. Someone with $1,000 of equipment, plus lots of time, motivation, and patience, could conceivably fool the scanner. Meanwhile, what gets lost in the outrage over this turn of events is the notion that the technology might indeed be “good enough”, or “better than the alternative”. We forget the simple fact that the technology is primarily oriented to people who currently don’t use passcodes at all, and that it vastly improves general security for those sorts of users.  As one article pointed out, “The point of any security system isn’t to be unbreakable – there’s no such thing – but to be fit for purpose.”

My larger point: if there’s a problem or a difficulty or even a nuance to a particular approach’s applicability, a common IT practitioner’s instant reaction is that the approach or practice is absolute junk and should be completely avoided.

Similarly, we often reject fundamental improvements to a situation, simply because they are not perfect. We let “best get in the way of better.” On this general theme, an amusing tweet crossed my screen the other day. @rands wrote, “I find when an engineer says, ‘Less than ideal’, they often mean ‘Complete fucking catastrophe.’”  I laughed at this, of course, but partly because I’ve more often experienced that scenario in reverse: an engineer deciding, and then loudly and profanely proclaiming, that a situation was nothing short of a complete disaster, simply because it was less than ideal.

[Read more…]

Starting points for the quantitative CIO: downloadable basic tools

Much as in any field, IT executives constantly have to seek a balance between idealism and pragmatism. Given a particular problem and the range of possible solutions, do we insist on “doing it right”, or do we buckle down and “just get it done”, even with gaps?

There’s obviously no single right answer, which is what makes IT consistently so fun and frustrating at the same time. Over time, though, my own approach has typically been to focus on the continuous improvement aspect of “doing it right”: whenever possible, get something going as a start, then hone it over time as you learn more about the problem and your situation.

Using spreadsheets as a management tool definitely falls on the “just get it done” side of this spectrum of approaches. Spreadsheets are seductively easy, omnipresent, and usable by people with a variety of skill sets and technical savvy.

But there’s a host of downsides: spreadsheets are frail creatures. Errors can creep in fairly easily, even for experienced users, as data and circumstances change, and spreadsheets are especially prone to the incursion of silent errors and omissions when undergoing revision.  And once implemented, in all their imperfection, spreadsheet-based solutions can broaden and become large-scale, long-term systems (I’ve seen this happen again and again).

Yet, I feel that every technology executive should be maximally fluent in spreadsheeting: simple tracking, analyzing, modeling alternatives, understanding costs and risks. The technology provides a readily available, easy way to knock out quick and dirty models that can clarify one’s thinking and approach enormously. They work well, as long as you keep in mind that the spreadsheet is usually a stop-gap, for those times when you are faced with a glaring need and you don’t have time, budget, or staff to implement anything deeper right away.
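As an illustration of the kind of quick-and-dirty model involved (whether it lives in a spreadsheet or in a few lines of code), here’s a hypothetical three-year build-vs-buy cost comparison; every figure below is invented:

```python
# The kind of quick-and-dirty model a spreadsheet usually holds, sketched
# in a few lines: total three-year cost of an option, with support costs
# growing modestly each year. All figures are invented for illustration.

def three_year_cost(upfront, annual_license, annual_support, growth=0.05):
    """Total 3-year cost; support grows by `growth` per year."""
    total = upfront
    for year in range(3):
        total += annual_license + annual_support * (1 + growth) ** year
    return round(total, 2)

buy   = three_year_cost(upfront=50_000, annual_license=20_000, annual_support=5_000)
build = three_year_cost(upfront=120_000, annual_license=0, annual_support=15_000)
print(buy, build)  # prints 125762.5 167287.5
```

Crude, yes, but exactly like its spreadsheet equivalent, it surfaces the shape of the decision in minutes, and it can be honed as better numbers arrive.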

In an early blog post, I listed the seven areas where a quantitative approach is especially necessary for the technology executive:

[Read more…]