Complexity isn’t simple: multiple causes of IT failure

Roger Sessions recently published a white paper on IT complexity and its role in IT project failure: “The IT Complexity Crisis: Danger and Opportunity”.  It’s certainly possible to quarrel with bits and pieces of his analysis, and thereby tweak his numbers, but the overall thrust remains undeniable: IT failures are costing the world incredible amounts of real money. Sessions even sums it up under the dire-sounding phrase, “the coming meltdown of IT,” and says, “the out-of-control proliferation of IT failure is a future reality from which no country—or enterprise—is immune.” And he presents “compelling evidence that the proliferation of IT failures is caused by increasing IT complexity.”  He points out that the dollar cost of IT failure in the US alone is close to the cost of the recent US financial meltdown, and cites indications that the rate of failure is also increasing by 15% per year.

Roger’s paper is excellent and thought-provoking, and I recommend it highly. And I do agree with his view that complexity is the chief culprit in IT failure. That said, I think his argument focuses a little too strongly on one cause of complexity (unnecessary overcomplexity of architecture), to the neglect of other important factors.

To be sure, some obvious contributors to IT failure (poor project management, and lack of communication within teams and from business to IT implementers) aren’t dismissed by Sessions, but he sees their contribution to the crisis as relatively small. I don’t, and I’ve used this blog to write about those factors quite a bit.

Most of all, though, I differ with Roger’s focus on streamlining architecture as being the key to reducing system complexity. One could say, in fact, that Roger’s solution is primarily a technical one, where the bugaboos I see are primarily cultural and sociological.  I see not one, but at least three distinct complexity-related burdens, increasingly endemic, and increasingly bringing down IT:

  • Overly complex design/architecture
  • Taking on too much functionality
  • Poor implementation (technical debt in-the-large and in-the-small)

Roger has admirably dealt with the issue of overly complex design/architecture, at least in terms of a viable approach for simplifying up-front architecture, so I’ll focus here on the other two.

Taking on too much functionality

I recently rented an economy car (the least expensive option) on a trip with my son. (Remember, my self-appellation is “Cheap Technology Officer.”) He was stunned and dismayed that the car didn’t have automatic door locks; he didn’t realize that they even made cars without them anymore. Similarly, my 11-year-old daughter has grown up in a TiVo-ized world where live TV can always be paused. When she encounters a TV without a DVR (and thus no pause capability), she regards it as hopelessly primitive. Indeed, as unacceptable.

Similarly, the general standard for functionality and UI design has been raised by extremely functional PC software.  I now expect to be able to double-click on a number in any onscreen report, and thus “drill down” into the transactional details that make up that number. When I can’t, I feel cheated. Equally, I expect everything on an interface to drag and drop; I get frustrated if it doesn’t.

So as an industry, we’ve raised the bar of acceptability, considerably, in software and technology systems over the last couple of decades. What that means in practical terms, though, is that across the board, our eyes have gotten bigger than our stomachs. We want more, up front, than it often makes sense to build at the start. And our demands are not negotiable, or so it seems. The first few cell phones I had didn’t even have a ring silencer on them; I used to silence the phone by adroitly disconnecting the battery when a call came through at an inopportune time. Today, most people wouldn’t even consider buying a phone that lacks much more elaborate features, such as a camera, that I would have considered space-age back in the 80s.

So our increase in expectations, alone, has added considerably to the functionality of systems we tend to build. There’s more functionality in and of itself, and usually more interface points to other complex systems. In fact, integration testing—where you connect new code into a working environment where it has to interface correctly with other systems—has become a frequent and major sticking point in launching information technology projects.  In essence, we’ve fallen into the “nuts” dilemma, both in large and small ways.  We want so much, and attempt so much, that we increase our risk of failure considerably.

Technical debt (in the large and in the small)

Any software developer will tell you that their first stab at implementing a given piece of functionality is often (if not usually) much more complex than turns out to be needed. Only after exploring the problem domain, with experiments and backtracking and restarts, do developers usually realize that their code can be pared down, simplified, combined with other modules, etc. This is usually called “refactoring”, and its importance is a relatively recent insight in the software development discipline.

A key insight about refactoring, though, is that it means improving the code without changing its overall results. There’s often no immediately obvious payback to this undertaking: it’s a roof project, in essence. To the extent that refactoring isn’t done (no time, no inclination, no recognition of a simpler approach), the end product is left with vestiges of unnecessary complexity. The greater the time crunch, and the greater the aspiration for the functional depth and breadth of the software to begin with, the more likely it is that these vestiges linger. And one ancillary aspect of the raised bar in expected minimum functionality is that it causes the time crunch to get ever greater.  It’s no longer about delivering just a solution that will work, it’s about making sure that the solution includes (metaphorically speaking) a 5-megapixel camera too.
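
To make this concrete, here is a minimal, purely hypothetical sketch (the function names, discount rules, and data shapes are all invented for illustration) of what paring down a first pass can look like. The refactored version returns exactly the same results as the original for the same inputs; only the accidental complexity left over from the exploratory first attempt is removed.

```python
# First pass: written while still exploring the problem domain.
# Two nearly identical branches grew out of experiments and backtracking.
def order_total_v1(items, customer_type):
    total = 0.0
    if customer_type == "wholesale":
        for item in items:
            price = item["unit_price"] * item["quantity"]
            if item["quantity"] >= 10:
                price = price * 0.95  # bulk discount
            total += price
        total = total * 0.90  # wholesale discount bolted on at the end
    else:
        for item in items:
            price = item["unit_price"] * item["quantity"]
            if item["quantity"] >= 10:
                price = price * 0.95  # same bulk discount, duplicated
            total += price
    return total


# Refactored: identical behavior, with the duplication and the bolted-on
# special case folded into one obvious place.
def order_total_v2(items, customer_type):
    def line_total(item):
        price = item["unit_price"] * item["quantity"]
        return price * 0.95 if item["quantity"] >= 10 else price

    total = sum(line_total(item) for item in items)
    return total * 0.90 if customer_type == "wholesale" else total
```

Nothing a user sees changes between the two versions, which is exactly why this work so rarely makes it onto a schedule; the payback only shows up later, when the next change has to touch one discount rule instead of two.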

Couple this unnecessary but accidental complexity with what often amounts to a rush job on design for the sake of meeting schedule (creating “technical debt in-the-large”). An example I’ve used here before: choosing a different core DBMS to implement a given function, simply out of expedience, and failing to take time to modify previous functionality to use that new DBMS. Supporting two DBMSes within the same product represents significant technical debt: every subsequent system change and addition will entail “paying interest” on that debt, which not only increases schedule and manpower costs, but increases risk of failure as well. And that’s just one example; the technical debt cascades, feature upon feature, release upon release. Technical debt, until paid down, can be equated to an invisibly rising substrate of complexity, and it contributes massively to an increasingly wobbly, risky system. And lately, I see more and more organizations “pyramiding” their technical debt, never taking the time and cost to pay it down, with disastrous results. As Hemingway said about going broke, it happens slowly, then all at once.
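
To make that DBMS example a bit more tangible, here is a deliberately tiny, hypothetical sketch of what “paying interest” looks like in code. (The schemas, field names, and function are invented; a sqlite3 connection and a JSON file merely stand in for the legacy store and the newer one adopted out of expedience.)

```python
import json


def load_customer(customer_id, legacy_conn, doc_store_path):
    """Every read pays the interest: two stores, two schemas, one answer."""
    # Path 1: the legacy relational store (legacy_conn is assumed to be an
    # open sqlite3/DB-API connection), where older features keep their data.
    row = legacy_conn.execute(
        "SELECT id, name, email FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    if row is not None:
        return {"id": row[0], "name": row[1], "email": row[2]}

    # Path 2: the newer store, with a different shape that has to be
    # reconciled by hand every time.
    with open(doc_store_path) as f:
        documents = json.load(f)
    doc = documents.get(str(customer_id))
    if doc is None:
        return None
    return {"id": customer_id, "name": doc["full_name"], "email": doc["contact"]["email"]}
```

Until someone pays down the principal by migrating the older features onto a single store, every subsequent read, write, report, and schema change carries this dual-path tax, and the tax compounds with each release.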

What now?

Roger’s analysis, to its large credit, outlined an important aspect of complexity and posed a solution (an approach he calls SIP, or Simple Iterative Partitions).  The aspects that I’m presenting (taking on too much functionality, and the pyramiding of technical debt) are, as I’ve said, cultural and sociological within companies. The answer is not nearly as simple or as neat as a specific technical solution (although I am certain that I will get comments on this post, perhaps rightly, from devotees of Agile).

To my mind, it’s engaged, savvy, forceful leadership that alone can address these issues, slow down the demand train, stop the madness. If anything, I think that there is an increasing lack of leadership in IT circles that can suitably recognize and address these factors, as well as educate their peers. And that’s what needs fixing most of all.

Comments

  1. Essentially the problem is incredibly simple (but as with all systems it is the combination of the simple parts that creates the appearance of complexity). IT fails to deliver simple solutions because at all levels in the delivery chain, from leadership to live run, organisations tolerate mediocrity in their people (or worse still, are unable to recognise real ability over empty words).

    IT could exist with far fewer “A” grade people. It is those who cannot do, who get in the way of those who can.

    Wherever these incapable people find themselves, the problem manifests itself. This is why the problem appears complex. It seems to be a combination of issues in scope, architecture, design, build, test, deliver and run. In reality it is simply a matter of people unable to do the job getting in the way of those who can. Wherever an incompetent person interacts with an IT solution, unnecessary delay and complexity will be introduced, and productivity and reliability will be reduced.

    Worse still, the inefficiency of these people then leads the business to believe that they need even more people (rather than less) and they then have to employ less than adequate individuals because of the self-imposed skills shortage.

    Imagine an operating theatre with one skilled surgeon and 10 incompetent ones, all operating on the same individual at the same time. Result? The existence of the skilled surgeon is completely negated. The answer is not more bad surgeons. The answer is to remove the bad ones and leave the good one to get on with the job.

    Regards
    The Enterprising Architect
    http://theenterprisingarchitect.blogspot.com

  2. 1) I am not the Agile police.
    2) Jon’s comment is interesting, if lacking in compassion, but not sure of specific relevance.

    3) I am a Requirements guy, so I am going to suggest that better definition of requirements to start, and using them to avoid scope creep during development, helps with complexity because you know what you are getting, and should have a good idea as to cost. It should be clear up-front if the 5 megapixel camera is what is wanted, and it should be clear when it is not to be delivered.

    I don’t want to oversell the ability of good requirements work to solve this problem by itself, but I do think it can play a part along with better design techniques and such.

  3. Thanks for this post, Peter.

    Very true, David. The more you know about requirements up front, the better off you will be. Notably, that’s not the same as what you think you know, or what you’ve been told.

    Requirements have been known to change legitimately, but it also frequently happens that they’re just understated or incorrectly described. That’s where you come in!

    Also, there tends to be a lot of pressure towards fake reality in estimating the time to complete a project, and from there the bogosity just snowballs.

    David, in your experience is it useful to go back during the project to recheck the assumptions underlying the original requirements so they can be adjusted? Or is that just a can of worms?

  4. Dave: Requirements don’t work. They’re a facade of control, chasing what are often flawed assumptions that result in flawed designs. [I’ve been chasing this position for well over a decade and haven’t found any evidence to the contrary.]

    Peter: Great perspective. Reflective of the rant I had with one of the Innovation Design peeps from Microsoft recently — stop adding new stuff and get back to delivering what’s missing (in Word most of the issues are due to a flawed functional architecture).

  5. Paula… Requirements work for me and the many companies/clients I have worked with, so I have to decline to accept your supposition. What do you base your designs on, even if it is just the first iteration? But that may be a topic for another post.

    Mark… If I complete a round of requirements elicitation with an accumulated set of assumptions, my next step is to validate or invalidate each one. If neither can be done at that point, then I treat an assumption becoming invalid down the road as a risk for which a mitigation plan is needed. Requirements in and of themselves should not be dependent on untested assumptions, but on the defined business process and information needs. If those change, then you do need to ‘go back’ and determine the impact.

  6. I had a go-round with Paula on Laura Brandau Brandenburg’s excellent blog about this whole requirements thing already. I honestly can’t fathom your stance on this, Paula. As I wrote on that other blog, “I’m afraid that this line of thinking is so far outside the mainstream of what the IEEE calls “generally accepted knowledge”, so utterly extreme, that it just can’t be taken seriously and no one should spend a lot of cycles trying to refute it. That’s blunt and harsh, but it’s accurate.” David, I believe you commented on that thread as well.

    To my mind, requirements gathering/understanding is simply a critical success factor. In fact, projects that tend to fail are often projects that haven’t bothered with, or have shortcut, the requirements phase. I don’t think this is the best place to debate this very non-mainstream stance of yours, Paula. I’d suggest you blog about it at greater length and let us know.

  7. A couple of additions:

    1) I do not feel my position is lacking in compassion. There is nothing compassionate about keeping someone in an activity that they are ill suited to and find stressful and unrewarding. The true leader will examine the strength of those who are failing in their current role, help them to find their true calling and assist them in making the transition. It is also compassionate to those who are good at something to free them from the stress and anxiety created by those who negate their good work and make their daily lives a misery rather than a joy. Compassion requires action not inaction, and it requires inventive thinking not blind observance of the status quo.

    2) Regarding the requirements issue. No amount of extra work on requirements up front will overcome bad implementation. Try handing detailed requirements for the redecorating of your lounge to a group of five year olds and see if the added detail improves the result (extreme analogy I know, but I feel it illustrates the spirit of the problem). Regarding Paula’s position, I would say that development of requirements as a standalone, pre-design, pre-implementation activity is unfair on those of whom you are demanding the detailed requirements. A more holistic approach that embraces prototyping, refactoring and direct involvement in the process from start to finish will allow the requirements to evolve over time in a controlled way, without the label of scope creep being attached to this. Scope is a different thing to requirements, and this needs to be recognised.

    Regards
    The Enterprising Architect

  8. Peter, thanks for the reminder.

    Back to this topic: how do we get the leadership that is needed? I don’t want to besmirch anyone, but management tends to respond and act based on how they are measured and rewarded. How do we make it ‘worth their while’?

  9. Pay attention to what Jon said. Requirements are relevant when you’re actually ENGINEERING something. Software is not engineering — except when it’s controlling the launch of the space shuttle or when it’s part of firmware.

    The bottom line here is that businesses need to adapt, constantly. The methods that presume that there are things called requirements that can be ‘locked down’ for releases are the stuff of denial. Yes, you can create some next to meaningless elements for the sake of being able to do testing (don’t get me started on that arcane practice), but only for technical purposes. Testers should have no say over or test for any UI function — that’s all part of UX and usability, and is ‘specified’ by design criteria — not requirements.

  10. Again, Paula, this really isn’t a good place to argue against well-accepted tenets of IT project management such as requirements gathering and testing. You’re certainly entitled to hold your views, but I’d suggest, with all due respect, that you recognize that they’re quite unusual and far out of the mainstream of acceptance. The burden of proof/argument is very much on you, therefore. As such, blog comments aren’t a suitable venue to casually drop what come across as lofty, ex cathedra dismissals of these fundamental approaches, providing no backup. I suggest again that you write a lengthy and well-researched blog post on your views, which I’ll be happy to read. Meanwhile, let’s stop throwing around phrases like “stuff of denial” and “meaningless” about approaches that nearly all IT practitioners embrace as necessary and beneficial.

  11. Well, David, there’s no quick, easy answer to that question, because it requires “enlightenment” on the part of people who hire and incent that management, all the way up the chain. I’m doing my best here, on this blog, to discuss at length the reasons why leadership needs to focus on certain things over others, and to point out the ramifications when they don’t. Otherwise, I expect change and learning in this arena to take place slowly. I’ve witnessed more than my share of CEOs who fail at one company yet pop up with the same approach (“do it all now”) at their next company.

  12. “well-accepted tenets of IT project management such as requirements gathering and testing”

    I thought the topic here was failure? THESE ARE THE REASONS FOR THE FAILURE!!!!

  13. I disagree with Jon that the basic problem is staff mediocrity. I do believe that there is a problem with staff motivation, but this is another side effect of complexity.

    Systems with high complexity are characterized by convoluted relationships. When systems have convoluted relationships it is hard to make changes since each change has global ramifications that are difficult to predict. This is highly frustrating and demotivating to a creative soul who sees new ideas and wants to implement them but can’t because of the fragility of the system.

    Simple systems, in contrast, are characterized by high degrees of autonomy. This means that local changes do not percolate out to the larger system and, because of this, it is much easier to incorporate new ideas. For the creative soul, this is highly motivating, probably even more motivating than traditional motivators such as money and recognition.

    Mark points out there is a lot of pressure to estimate project completion times. I point out that any estimate is only as good as the input data. If the input data has a lot of uncertainty, any estimates based on that data will have low accuracy.

    The uncertainty of cost estimation data is highly dependent on the complexity of the system being estimated. The higher the complexity of the system, the higher the uncertainty of the cost estimation data and the lower the value of the estimate.

    The best way to do quality cost estimates on a large complex system is to first go through the simplification process to break the system apart into small simple systems and then do the cost estimates on those small simple systems.

    Paula doesn’t like requirements. I actually agree with her, in a way. One’s ability to confidently gather requirements is highly dependent on the complexity of the system for which you are gathering those requirements. Once a system reaches a certain level of complexity (say, more than a few $100K), the confidence level in the requirements drops dramatically.

    The answer to Paula’s concerns is not to eliminate requirements gathering (which I’m not sure she is suggesting.) It is to first deal with the complexity of the system by breaking it into small, simple systems and then going through a formal requirements gathering on those smaller simple systems.

    Now it is not at all unusual, when doing requirements gathering on those smaller simple systems, to find that some of the partitioning of the original large complex system into smaller simple systems needs to be adjusted. That’s okay. The whole process should be iterative anyway.

    Since I have used the word “simple” a number of times in this discussion, let me define the word as I use it. A system S that solves a problem P is simple if and only if it is the least complex system possible that solves P. So “simple” does not mean that a system S has no complexity, only that it has the minimum complexity necessary to do its job.

    To look at “simple” another way, every problem P has a set of possible solutions, {s1, s2, s3, …}. For non-trivial problems the set is very large. Each element in the set has a complexity measure. The simple solution is the element in the set with the lowest complexity measure.

    Thanks, Peter, for hosting this discussion!

  14. Now that I have looked in more detail at Paula’s ideas on requirements gathering, I think I better understand what she is saying. I believe she is saying that requirements gathering can only tell you what people think they want. And the best systems are those that redefine what people want.

    A good example is Twitter. Had that group done any kind of formal requirements gathering, it is hard to know what they would have developed, but it certainly wouldn’t have been anything close to Twitter. And almost certainly, wouldn’t have been anyplace near as successful.

    Twitter is evolving not by gathering new requirements on what people think Twitter should do, but by watching how people actually use Twitter and then adapting Twitter to those uses.

    Paula, is this getting at what you are saying?

    I can’t resist pointing out that one of the reasons Twitter is so successful is its simplicity. It has clearly carved out a small piece of functionality (a subset of the partition, in SIP terminology), and does that functionality well. Very similar to Google.

  15. Roger, good requirements practice includes defining scope and then decomposing it as needed into processes and activities at a level where requirements can be elicited. This indeed helps in reducing something big and complex to many smaller, understandable components.

    The question becomes: does doing this lead to less complex designs that support useful but manageable amounts of functionality? My experience is yes, but that would still be anecdotal. I am sure it is possible to take “good” requirements and still produce complex designs and un-factored code, but I would like to think it would be less likely to happen.

    I am thinking this could be a spin-off topic for some investigation: what do working designers/developers want from Requirements that helps them design less complex and more effective systems? If anyone can suggest a good meeting place or portal for developers, I would appreciate it. There are many, so some suggestions would help.

    A “side-bar”: the Twitter example. I freely admit that what I do in Requirements work is aimed at delivering information systems for business, government or non-profit organizations. That presents a wide range of opportunities, but it does leave out things like real-time systems for guided missiles, or video games, or disruptive technologies like a Twitter or Facebook.
    Now, information systems have seen new and unforeseen developments over the decades, from DBMSes and online systems to newer stuff like BPMN and BRMS; even the Web could be seen as resulting from the initial military and research need for distributed but connected resources.
    But some things will still come from bright ideas and unexpected convergences of people and resources, no doubt. What IS can do is normalize or standardize some of this so the bright minds can move on to the next big thing. If I can’t be as creative as others (my spouse is always saying, “how come you couldn’t invent that?”), then I can be useful. (Or as Red Green says, “If you can’t be handsome, be handy.”)

  16. Roger: Thanks for helping clarify something that has so much to it that a discussion like this can hardly do it justice. You’ve clarified a couple of critical angles (both of your posts are extremely relevant). There are many others.

    1. We have to consider the ‘continuity’ of this stuff. We tend to treat (mainly because of funding) such efforts as 1-offs and do not create persistent artifacts and architectures that put what’s being done into a larger context (artifacts noted here http://www.fastforwardblog.com/2008/07/07/transparent-and-explicit/). Every initiative starts from scratch. There is no shared learning.

    2. Design Thinking helps us to see that we start with leveraging some of these artifacts as a means by which to challenge assumptions (there are often many critical and fundamental assumptions that have been held for so long in companies that everyone believes that they’re true). Bringing these assumptions out to be challenged is critical (the visual artifacts and discussions around them help). And as we uncover facts to challenge assumptions we need to create a persistent sharable collection of these findings for a continuous business context — to inform new efforts http://www.fastforwardblog.com/2009/07/31/the-context-of-intent/

    3. The whole argument of requirements/testing having been long established is of no consequence when it becomes clear that they help cement reliability but do nothing to balance it with validity http://www.fastforwardblog.com/2009/08/07/reliability-vs-validity/. Yes, if we’re building rockets, reliability is of greater significance, but most of what’s happening in business today requires a greater focus on validity (which is where the challenge of assumptions becomes so critical). The current methods and models are still on a path to lock down reliability. This is fundamentally flawed for most of the business problems we are addressing today (as Roger noted, not to say that there aren’t scenarios that still require it).

    4. The SDLC is fundamentally flawed. The ‘design’ phase is a sub-design phase. There’s a larger architectural design that should feed individual initiatives, similar to the model engaged by commercial construction. Individual projects should be trades responding to a general contractor as part of a larger initiative with defined blueprints and specifications (http://www.fastforwardblog.com/2007/09/20/crossing-the-chasm/). Any ‘requirements’ that the sub-contractor wants to provide are for their own purpose of fulfilling their response to the larger initiative (the business at large).

    IT fails at understanding true architecture. They even behave as sub-contractors — seeing only their piece of the overall construction effort (e.g. landscaping is the center of the project and everything else revolves around it).

  17. It’s possible, I suppose, that we’re not fundamentally in disagreement about requirements, since Design Research (per Paula’s link of http://www.peachpit.com/articles/printerfriendly.aspx?p=1389669) to me sounds an awful lot like what I call requirements gathering. Perhaps my quarrel is with the disrespectful and dismissive tone as much as anything (“requirements gathering doesn’t work”, “stuff of denial”, etc.).

    But that’s my point. I’m a practical CIO; I’ve gone in, both as an employee and as a consultant, to quite a few “turnaround” situations (IT departments and projects in serious crisis), and I can attest that the number one thing I hear from the business stakeholders about the delivery to date is “they [IT] don’t even ask us what we need! And then they just give us something that doesn’t do what we need it to!” We all know that a key IT focus over the last decade has been the need for increased business/IT alignment. There’s room for lots of disagreement on the hows and whats involved there, but the answer is simply not “we’ll figure out what you need and give it to you. Trust us.” If any one attitude/approach is a veritable recipe for IT failure, it’d be that.

    In short, it’s lofty, arrogant, and usually plain dead WRONG to say “we’ll redefine what you want.” (On the other hand, you can work with stakeholders to achieve a mutual redefinition of what’s needed. That, astonishingly, is called requirements gathering, as David pointed out.) Of course you can point at examples where a unilateral redefinition approach has worked (although I don’t think Twitter is a particularly good one, with its issues in every conceivable area: performance, UI design, functionality, business model). Apple (iPod, iPhone) is perhaps a better thing to point to. But mainstream business systems? Seldom.

    But I think we’ve been sidetracked, as this is a fringe issue, as I’ve stated above. Roger, your reply earlier that related directly to my post has again focused on architecture (application partitioning, etc.), as if that’s perhaps not the only thing, but certainly the main thing in your eyes when it comes to complexity. My blog post argued that it’s not. Do you continue to disagree? I was hoping, I suppose, to have “moved the needle.” 🙂

  18. Peter, I’d agree with you except for one basic fact: for over 30 years, in company after company, I cannot work with the requirements people — at all. They refuse to budge in their close-minded approach to what they’re doing. And they refuse to admit that there’s a possibility that there are other ways to approach their discipline.

  19. Sorry Peter, I didn’t mean to ignore your original post. Let me go back to your points.

    I think the first point on which we seem to disagree is on the relative contribution to IT failure of issues like poor project management, lack of communications, etc. You say that I see their contributions as relatively small. But I don’t see these as small. I agree that these are important issues. It’s just that I believe they can’t be dealt with effectively until complexity is dealt with effectively.

    Let’s look at project management as a representative member of the set. I think that you would agree with the following points:

    – The more complex the project is, the more difficult it is to manage.
    – The more difficult a project is to manage, the more likely it is to be poorly managed.
    – The more poorly a project is managed, the more likely it is to fail.

    Now it seems that if you agree with all of those points (I’m assuming that you do), then you must also agree with the inverse points:

    – The simpler the project, the easier it is to manage.
    – The easier it is to manage a project, the more likely it is to be well managed.
    – The better a project is managed, the more likely it is to succeed.

    So based on these presumed agreements, it must also be true that if you can break a big complex project up into smaller simple projects that can be managed autonomously (as I claim can be done with SIP), then you are going to be more likely to succeed. So whether you say the primary problem is complexity or project management, the bottom line is that you must deal with complexity before you can effectively deal with project management.

    Now it is certainly true that you need good project management skills. If you can’t manage a project, then you can’t manage even a simple project. (See my earlier post for my definition of a “simple” project.) So dealing with complexity does not mean you don’t need good project managers. My point is that if you don’t deal with complexity, then you are likely to fail regardless of your project management skills.

    My analysis of “taking on too much functionality” (which, again, I fully agree with you about) is similar. When the project is too complex, you can’t tell whether or not you have too much functionality. It’s too complex to figure out.

    Now as to your point that complexity issues are primarily cultural and sociological, especially around the area of technical debt, we largely agree here as well. I hadn’t thought much about the metaphor of technical debt until you brought it to my attention, but I think it is a great metaphor.

    From my perspective, one of the largest components of technical debt is not addressing complexity at the start of the project. Part of the reason people don’t do so is that they don’t know how to do so. But the bigger part of the problem is exactly the issue you mention, lack of leadership.

    In my experience, far too many IT leaders are much more interested in blame avoidance than risk avoidance. So they continue to do things the way they always have, despite all of the evidence that the way they have always done things doesn’t work.

    So I’m not sure that we disagree on anything. With a little time together, I think we could pull our respective ideas together into a nice neat package.

    These are great discussions!

  20. Interesting debate. For me Peter nailed it in his blog. It is about complexity and wanting too much too soon. Can you imagine what would have been produced if the iPhone was created 20 years ago? Disaster. Remember the Newton?

    The key is to start small and then add incrementally over time. At each stage, really test the new bits so they work, adding more and more value.

    Too often the requirements are for an iPhone; IT, acting as an order taker rather than as part of the organisation, accepts them rather than challenging them. The result: disaster.

    Absolutely, Malcolm. As I wrote earlier in a post called “Mantra for IT: Participate in the process rather than confront results”,

    At its worst, I’ve seen IT become little more than “order takers” for the enterprise — relegated to asking questions that are essentially equivalent to, “oh, do you want fries with that?” and obediently scribbling down the answers. That approach of course seems cooperative and agreeable, but in truth, treating requirements gathering that way is actually a form of neglect of one’s responsibilities to the greater good of the enterprise. Ironically, it often leads to long-term failure rather than success. Don’t let this happen. Instead, IT people need to be there at every juncture, going full throttle, to challenge and to help mold requirements towards greater viability and cost-effectiveness.

    Thanks for commenting!

  22. Thanks, Roger, for your comprehensive and meticulous reply. I must say I had to squint at it for a while before I realized where (I think) you and I differ, at least in matters of emphasis if not core substance (because I agree, we’re in agreement about much of this). Here’s the key jump you make (with a similar jump in the subsequent paragraph):

    it must also be true that if you can break a big complex project up into smaller simple projects that can be managed autonomously (as I claim can be done with SIP), then you are going to be more likely to succeed. So whether you say the primary problem is complexity or project management, the bottom line is that you must deal with complexity before you can effectively deal with project management.

    The key realization for me is that you repeatedly use the phrase “deal with complexity” mainly in the architectural sense, essentially as an equivalent “code phrase” (for you) for the SIP approach. I certainly am inclined to applaud what I understand so far about SIP as a philosophy, but I disagree that it is as all-solving and all-resolving as you tend to imply (despite the caveats you’ve made and which I’ve heard) by your concerted focus on that approach. And I also differ with your preceding assumptions in subtle ways: for example, only certain kinds of projects can be (or are) practically broken up into autonomous smaller simple projects, and there is (as you have acknowledged in your white paper) risk and overhead involved in doing so (multiple projects requiring a kind of metamanagement if nothing else). In addition, I’m not convinced that breaking a large project into smaller ones indeed brings, prima facie, significantly lower risk. There may be a “tipping point”, up to which suboptimal architectures can be absorbed just fine by a given organization (due to size, people’s skills, etc.).

    Finally, there are many many more reasons, both technical and non-technical, for failure of a given system beyond the mere architecture-in-the-large of that system. SIP, while laudable, may be (and I’m being overly dramatic here to make a point) akin to strengthening the front door of a house against burglary, even while burglaries all around the neighborhood are happening through second-story windows. Technical reasons for system failure could include performance issues, or inadequate interaction with other systems. (Better partitioning of subsystems is by no means guaranteed to address such thorny integration issues, and I’d argue it doesn’t necessarily substantially reduce their risk). Non-technical reasons for IT system failure, particularly given the typically loose definition of such failure (“cancelled prior to completion or delivered and never used”), might be purely political: a new CEO comes in who wants to sweep clean, or requirements gathering was botched such that the delivered system was wholly inadequate to the users’ needs. None of those kinds of failure has anything to do with the optimal or suboptimal partitioning of what was actually built.

    I hope this hasn’t been too long-winded, but I needed to attempt to clarify where you and I differ. In sum, that difference is on what I see as your (laudable, defensible, groundbreaking even) approach to addressing a major cause of IT system complexity. You tend to view that approach, which is largely a technical one, as the fundamental lever which will make the difference in the necessary struggle against complexity overall, while I insist more strongly that the complexity problem is manifold and requires simultaneous efforts on a variety of fronts.

  23. Re: “The key is to start small and then add incrementally over time. ”

    Note that the above is only possible with a modifiable and extensible design. The design follows from the architecture, so a modifiable and extensible architecture is crucial.

    Let me toss out a genuinely radical concept. If we admit that much of the problem is complexity, maybe it’s (past) time to admit that the systems we are trying to build are simply too complex to develop using human methods.

    We accept this principle in modern CPU design, which has grown too complex to be efficiently designed and verified by humans. Instead, we use heuristics.

    Our systems have similarly grown too complex for us to manage by human minds alone, whether we are talking military/aerospace, transportation, energy, or information technology.

    There are methods and tools available providing such technology for systems and software engineering. These technologies involve formal methods, however, and have therefore been avoided by the systems and software engineering industry.

    Companies like Praxis High Integrity Systems use such technology integrally in their process. They are a very notable exception to prevalent industry practice with consistent deliveries on schedule, under budget, and having the lowest defect rates in the industry. They even have a zero-defect project delivery in their portfolio.

    The hurdles to adopting such technologies in the rest of the industry are cultural rather than technological. I often wonder what it would take to overcome these hurdles.

  24. Rotkapchen: “Great perspective. Reflective of the rant I had with one of the Innovation Design peeps from Microsoft recently — stop adding new stuff and get back to delivering what’s missing (in Word most of the issues are due to a flawed functional architecture).”

    Thank you, this has been driving me crazy for years with Microsoft Office. Since XP, from a business productivity perspective, the average business user’s issue with Microsoft has not been the operating system; it has been Office. Features such as spell check, grammar check, and print preview in Excel are all things which have remained unchanged for at least a decade and need improvements. I do not need new development tools. Make what I use the most better.

    Overall, a good article.
