The Practical CIO: Difficulties in project prioritization & selection, part 1

How does your company pick which projects to undertake?  Demand outstrips available resources: nearly always, there are far more “good ideas” for things to do than can actually be done in a given time period.  So how do you decide which ones you take on?

If you research this general topic, you’ll find a lot of rather intricate, idealistic screeds that detail how to model an admixture of financials, market potential, risk factors, etc., and that promise this will get you “the” answer.  I don’t dismiss the importance and general validity of such approaches, but let me be frank: that’s not what usually happens at most companies. Not even close. Here are some real-life (albeit generally unsuccessful) approaches to project selection that I’ve seen at actual companies, in no particular order:

1) Do ’em all: everything proposed by anyone goes on a list, and people just work like crazy and do the best they can to accomplish whatever;
2) Let a single executive (CEO, CIO, CTO, whoever) decide. That’s what executives are there for, right?
3) Insist that all proposed projects be evaluated for ROI, and do the ones that produce the biggest ROI number (a calculation along the lines of the sketch below).
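To make approach 3 concrete, here is a minimal, hypothetical sketch of what that naive ROI ranking amounts to; the project names and figures are invented for illustration, and the point is precisely that a single ROI number, by itself, rarely settles the question.

```python
# Hypothetical sketch of approach 3: rank proposed projects purely by a
# single ROI number. All names and figures below are invented.

projects = [
    {"name": "CRM replacement",         "benefit": 900_000, "cost": 600_000},
    {"name": "Data center roof repair", "benefit": 150_000, "cost": 200_000},
    {"name": "New reporting portal",    "benefit": 400_000, "cost": 150_000},
]

# Classic ROI: (benefit - cost) / cost
for p in projects:
    p["roi"] = (p["benefit"] - p["cost"]) / p["cost"]

# "Do the ones that produce the biggest ROI number." Note how this quietly
# buries risk-reduction work (like the roof repair) at the bottom of the list.
for p in sorted(projects, key=lambda p: p["roi"], reverse=True):
    print(f'{p["name"]}: ROI {p["roi"]:.0%}')
```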
[Read more…]

IT, the CIO, and the business need for “roof projects”

Have you ever had to replace the roof of your house? It costs lots of money, and there’s no visible or immediate benefit. Metaphorically, that situation comes up astonishingly often in IT organizations that struggle with how to get “roof projects” prioritized and worked on.  “Roof projects” (a term I believe I’ve coined, at least in this sense) are facilities or systems in a company that need upgrading or major work in order to keep functioning, even though that work may not provide immediate, business-visible value.  Just like the roof on a house, some systems shouldn’t be left until they fail before they are attended to.

The notion of a “roof project” seems obvious, even common sense, yet it constantly has to be “sold” within an organization, even to people who understand it intellectually.  The value of IT roof projects is also often hard to communicate, since the case for them rests not only on abstract assessments of risk but also on technical details that business people find arcane.  The conundrum then becomes how to “sell” such business-lifeblood-affecting projects to a skeptical clientele who mostly just want new functionality, and who collectively yawn at what sounds to them like IT technobabble, such as “middleware” and “protocol.” Everything has to be business-driven in the end, I firmly believe, but there’s a catch-22: users tend to drive only what they understand and what benefits them directly. Neglected or grossly deferred maintenance (which is what happens if you never prioritize and do the roof projects) mounts up over the years, until eventually a company can be completely paralyzed. Picture a roof that should have been repaired 10 years ago; would you want to live in that house?

Let’s look at a couple of concrete IT examples I’ve had to deal with:
[Read more…]

A rational CapEx purchase and tracking process for IT

How often does someone in your company (often the CIO, or the CTO, or the head of infrastructure) end up running through the halls, waving a purchase order that “has” to be signed off that very day, or else key systems will allegedly go dark? Maybe you’re fortunate enough to work at a company where this frenzy doesn’t happen, but in my experience, that’s unusual.

I’ve written before on the importance of technology taking its fiduciary responsibilities seriously. Nothing contributes to the IT stereotype/stigma as much as a loud demand for a major purchase, at the last minute, justified solely by dire predictions of doom, and topped (often) with acronym-laden technobabble. Amazingly, it’s not that hard to avoid this situation, if you exercise a little forethought and planning.  The benefits of doing so are indirect as well as direct: you can change how IT is perceived, from a troublesome, risk-fraught, and confusing cost center into a partner to the business.

It all goes back to Management 101: plan the work, then work the plan. Surprises are a bad thing. Not only do you need a solid plan, but you then want to diligently track actuals against that plan. None of this is exactly a radical idea, yet I’ve now served as an executive at no fewer than three companies where, with respect to capital expenditures, none of it was happening before I arrived.  To the extent there was a capital expenditure plan for the year at all (as opposed to just one big CapEx number!), it had been thrown out the window by February. Sure, this can and does happen, in fast-paced Internet companies in particular, but the rankling thing was that no one was really tracking the changes against plan, or could see how spending was shaping up for the year. Even if a plan has undergone radical changes, there still needs to be a current plan. Walking in, any executive (not to mention any auditor!) should be able to find one or two core documents that detail that current plan, as well as the progress against it.  If those aren’t there, then the technology area (and by extension the whole company) is just operating by the seat of its collective pants, and that’s not acceptable.
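To make “track actuals against the plan” concrete, here is a minimal, hypothetical sketch of the kind of running comparison I mean; the line items and dollar figures are invented, and in practice this usually lives in a spreadsheet or the financial system rather than in code.

```python
# Hypothetical sketch: a current CapEx plan and actuals-to-date, compared
# line by line. All line items and dollar figures are invented.

capex_plan = {            # approved plan for the year, by line item
    "Storage array refresh": 250_000,
    "Network core upgrade":  180_000,
    "Server replacements":   120_000,
}

actuals_to_date = {       # what has actually been spent or committed so far
    "Storage array refresh": 265_000,
    "Network core upgrade":   90_000,
    "Server replacements":        0,
}

# A simple variance report: the kind of "core document" any executive
# or auditor should be able to walk in and see.
print(f'{"Line item":<28}{"Plan":>12}{"Actual":>12}{"Variance":>12}')
for item, planned in capex_plan.items():
    actual = actuals_to_date.get(item, 0)
    print(f'{item:<28}{planned:>12,}{actual:>12,}{actual - planned:>12,}')
```

However it is kept, the essential thing is that the comparison exists, is current, and is visible.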

Here are the minimum elements of responsible CapEx stewardship, in my view:
[Read more…]

Get multiple arrows for that quiver: selective and competitive outsourcing

As I’ve written before (“Offshore development: target the destination, even if you never go there”), the reality of the CTO/CIO’s life is to be constantly challenged to produce more. Most technology executives, given that challenge, focus on squeezing out greater efficiency from existing processes, which is of course a necessary and constant push. What many don’t do is recognize the importance of crisp, formal handoffs of software from stage to stage, and how those can greatly enhance productivity.

Software engineering over the past decades has taught us that architectural techniques such as encapsulation, data hiding, and well-defined module interfaces are essential practices as systems scale ever larger. Equally, the human side of software delivery needs those sorts of crisp interfaces and neutral handoffs: loose coupling, in other words.  Loose coupling entails “minimal assumptions between the sending and receiving parties.”  Yet an increased focus on internal efficiencies (plus deadlines and pressure) can sometimes lead a shop away from that and into tightly coupled handoffs, because those seem faster and easier.  You don’t have the time to do it right, so you end up spending a lot of time doing it over.  My argument is that (just as with object-oriented architectures) it’s worth giving up a little efficiency-in-the-small if certain sacrifices in the hand-off arena help you attain efficiency-in-the-large.
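To make “loose coupling” on the human side of delivery a bit more tangible, here is a minimal, hypothetical sketch in code form; the names (ReleasePackage, DeploymentTarget, hand_off) are invented for illustration and don’t come from any particular system. The idea is simply that the receiving party, whether an internal ops team or an outside firm, depends only on an explicit, written-down contract rather than on tribal knowledge.

```python
# A minimal, hypothetical sketch of a loosely coupled handoff, in code form.
# All names here (ReleasePackage, DeploymentTarget, hand_off) are invented.

from dataclasses import dataclass, field
from typing import Optional, Protocol


@dataclass
class ReleasePackage:
    """Everything the receiving party needs, stated explicitly; no tribal knowledge."""
    name: str
    version: str
    artifact_url: str                          # where to fetch the build
    config_defaults: dict = field(default_factory=dict)
    rollback_version: Optional[str] = None     # what to fall back to if deployment fails


class DeploymentTarget(Protocol):
    """The narrow interface the sender codes against; any receiver can implement it."""
    def deploy(self, package: ReleasePackage) -> bool: ...


def hand_off(package: ReleasePackage, target: DeploymentTarget) -> bool:
    # The sender makes minimal assumptions about the receiving party:
    # it relies only on the contract above, not on who happens to be behind it.
    return target.deploy(package)
```

The specific classes don’t matter; the discipline does. Once every assumption the receiver needs is captured in the contract, the work can be handed to a different team, or to an outside firm, without dragging along a pile of background lore.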

As I wrote in my previous post, referring to the constant business-driven pressure to “put eight pounds of manure into a five pound bag” when it comes to delivering technology projects,

The main insight here is that finding a viable way to outsource some projects is your ticket to expanding the bag.  I’m not even talking about offshoring here, simply about being able to take a chunk of your project load and hand it to an outside entity to get done.  If everything could be done that way, then you’d be constrained in your project load only by available money.  Sadly, in many shops, almost nothing can be done that way, due to too much interdependency of systems, too much background lore required, and no processes in place to allow for external entities delivering changes into current production environments.  My position here is that it’s a key part of your job to change that situation: to work actively on decoupling the interdependencies so that you at least have the option to leverage outside help more effectively.

[Read more…]

Canaries in the coal mine: Why your IT department may be in worse shape than you think

Think about it: you can’t really tell the difference, on a day-to-day basis, between a car that has had its oil changed every 3,000 miles and one that has had its oil changed every year or two.  Only eventually.

Similarly, the stability of most IT departments is very difficult to judge from the outside.  Even insiders within IT can be myopic about it.  And non-technical senior management (CEO, COO, CFO)?  They usually can’t really tell either; they often don’t even know the right questions to ask, and their gut instincts on IT matters can actually run dizzyingly counter to best practices.  In short: to many or most people, it can look like things in IT are going pretty well, while in fact it’s all getting shakier and riskier every day.  Truth is, if a company is passionate about excellence, IT has to function well both on the surface and to the careful, trained observer. IT is a service organization, and getting a few key things wrong means the entire company suffers.  Eventually.

My claim here is, admittedly, a rather pessimistic one: that your IT department may be in much worse shape than it appears. Yet industry statistics indicate that’s probably the case. Having worked for and/or consulted to a lot of companies over the past decade, I’ve walked into a lot of “opportunities”: places where there was a lot of unchanged oil, so to speak. In fact, I’d be willing to bet that most companies have at least one, if not several, of the situations I’m going to describe in this post.

On the optimistic side, though, there are identifiable common root causes, all of which can be addressed, over time, by the appropriate focus and leadership.  As people always say, the first (and often hardest) step is simply recognizing that there is a problem. Let’s dive into the specifics, at a high level.

Here’s a reverse top-10, David Letterman-style, loosely ranked list of IT “anti-patterns”. I’ve actually seen companies where all of these situations existed. How many hold true where you work?  These gaps represent failures to meet important best practices; like canaries in the coal mine, they are potent indicators of looming instability in one or more of the dimensions where IT needs to serve the company.  Each of these deserves a separate post, or more, to treat fully; in some cases, I’ve already written posts on the item, so for those I provide a link below.
[Read more…]

“Getting” Twitter, from the technology executive’s perspective

I don’t want this to be just another post about Twitter, the Internet’s current hot trend.  Rather, I’d like to use the Twitter fad to get at an important topic I’ve long planned to cover here.

Specifically, what can we in technology do to keep current and stay up-to-speed on our various areas of interest and expertise? There’s more out there than any of us can learn, and new technologies come along all the time.  Truly staying current, at a reasonable depth level, would be a more-than-full-time job.

Here’s how I’ve come to grips with that basic reality. These remarks are most relevant to the executive level, but to some extent they apply across the spectrum of roles in IT.
[Read more…]

Mantra for IT: “Participate in the process rather than confront results”

Let’s sail into a stretch of a metaphor this time. You probably know by now how much I embrace metaphors as a way to impart, often via a concrete example, ideas and concepts that are hard to grasp. So let’s go way back and talk about a metaphorical influence from long ago.

When I was in early high school, we would occasionally spend English class watching and then discussing a variety of short subject films, many of them from the fertile minds at the National Film Board of Canada.  Some of these films, described by the NFB as “socially engaged documentary”, bordered on (or transcended) the bizarre; they thus spurred all sorts of avid arguments among teenagers, easily as much as Ethan Frome or Wang Lung, the more literary staples of the curriculum that I can remember from that year.  There was one such film in particular, in fact, that has stuck with me for decades.  After some digging, I’ve finally been able to identify it by name and origin.  The researchers at the NFB have now kindly confirmed for me that the film is titled “I.B.M.”, and that it was directed by Jacques Languirand. When I reflect on it, the film’s staying power with me makes sense, since it not only features IT elements, but also serves admirably and in multiple ways as a metaphor for IT issues.

As I recall the five-minute film, it features an unchanging close-up view of an automated keypunch machine, punching out a series of IBM computer punchcards with a mysterious and incomplete common message. The film shows the cards sliding into place and getting punched, one at a time, then rolling off into the output hopper. Only parts of the full message can be read at first, since some of the letters of each word are omitted or obscured. Little by little, though, over the course of the film’s duration, each successive card that is punched contains more and more of the message, until it becomes clear at the end of the film that the text reads, “Participate in the process rather than confront results.”

Think about the wisdom and depth of that line: “Participate in the process rather than confront results.” Three ways of relating this metaphor come to mind: to IT itself, to its role across the enterprise, and even to how IT manages its own staff. They share a common aspect: the duty (and the reward) of emphasizing participation over passive observation.
[Read more…]

“Hot stove” lessons, part II: development and operations

I noted last time, once again, that “IT is hard. In fact, it’s so hard that it seems most people have to learn certain core lessons by themselves.  It seems like everyone needs to burn his or her own hand on the hot stove.”  I went through some examples of this sort of “hot stove” lesson as it applies to management; this time, let’s talk about similar “hot stove” lessons/myths I’ve observed in other IT areas, most notably development and operations.

  • Source code control and release management. One of the traits of a superlative programmer is the ability to maintain a complete logical model of the system/program in their head.  The really good ones are in fact really good at this.  Unfortunately, that consummate skill ironically leads some of them to resist the very tools that help with the logistical pitfalls that arise as system complexity increases.  Source code control is the most important such tool.  Programmers have even told me, “I can just keep track of what I’ve changed and what’s where.”  In a way, I suppose this is a species of the (typically male) trait of refusing to ask for directions: a reluctance to embrace appropriate tools that are designed to avoid common screw-ups and to facilitate overall team success.  Suddenly, though, complexity mushrooms: releases overlap and patches are made in the heat of the moment, and without impeccable source code control and release management, bugs reappear, QA takes longer, and so on.  Typically, I’ve seen an organization not really learn this “hot stove” lesson until a source code control failure causes a notable customer-facing issue with a major release.

[Read more…]
