“How technical are you?” This common challenge, almost playground-aggressive in nature, can turn into a sore spot for the CIO today. It’s inevitable (and actually desirable), you see: as you move up to executive rank, you lose your day-to-day involvement in the actual nuts-and-bolts implementation of technical details. Many executives respond by essentially abandoning all direct personal engagement with technology. But to do so across the board is a mistake.
Here, I seldom post directly about technologies or techniques, because, quite frankly, I’ve found in business situations that technology in and of itself is very rarely either the real problem or the real solution. Despite this, I still see technology as an ongoing crucial area of expertise for the CTO/CIO (contrary to the claims of some pundits that I’ve written about before). To maintain this vital expertise, the CIO’s dilemma is as follows: you have to keep your hand in, but you won’t ever have the time or focus to try out every technique, tool, or approach. You’re going to be, at best, a dilettante.
However, just because you’re doomed, as an executive, to be a dilettante doesn’t mean you should give up all efforts to stay current, or that such efforts won’t provide you with useful CIO-level insights. Even a little goes a long way. This post describes one example of that, as a case study.
Case study: test-driven development
Like most people, I learn best by doing. And I know that as an executive, I want to continue to understand things past a superficial level. Overall, I’ve found that actually diving in, judiciously and of course very selectively, is the best way to stay “tech-savvy”. So, as I’ve mentioned before, I regularly undertake specific small technical projects on the side, on my own time: personal R&D, as I like to call it. Recently, I decided to delve more deeply into test-driven development (or TDD for short), combined with figuring out how to code in Python using the Eclipse development environment; the nuances of Python and Eclipse are unimportant details for my larger points in this post, so I’ll leave them aside.
In this case, returning to doing some actual software development on my own (even on an admittedly small scale) using a new-to-me technique (TDD) provided me with general insights far beyond the mere “hows”: it showed me a lot of the “whys” and even a few of the “watch out fors”. Even though I’ve led, as an executive, entire software development organizations that used TDD as a standard part of our methodology, and even though I intellectually already understood the concepts and benefits of the technique, having to grapple with doing it myself was a whole different story.
What is TDD?
For those who may not be fully familiar with TDD, let me briefly (and in somewhat simplified fashion) describe it. TDD turns conventional software development a bit on its head. Rather than jumping right in to code the actual logic of a program, the developer starts by writing a relatively simple test case, in the form of code that will invoke the (as yet non-existent) part of the program she is about to work on. The test code expects the core program to produce a desired and specified result (say, a return value that comes from calling a particular method with specific values as parameters). The test checks the value returned from the invoked code against what it expects, and thereby declares that the test has passed or failed.
Execute that test straightaway, and of course it will fail, because there’s no code yet for it to call. So the task of the developer now becomes to make the test pass, by writing just enough actual program code to do so. A given module of code will normally have many individual tests written against it, and all of these are gathered into a test suite and executed together (typically with automated tools) upon any change or addition to the overall program. If your new code or other change has broken existing code, you find that out immediately, and can address it at once.
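To make the cycle concrete, here is a minimal sketch using Python’s standard unittest module. The apply_discount function and its tests are hypothetical examples of mine, invented purely to illustrate the pattern. The test class is written first; run before the function exists, the suite fails (the “red” phase), and the small implementation beneath it then makes the suite pass (the “green” phase).

```python
import unittest

# Step 1 (written first): tests describing the desired behavior of a
# hypothetical apply_discount() function. Run before the function
# exists, this suite fails: the "red" phase of the TDD cycle.
class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        # 10% off a price of 10000 cents should be 9000 cents.
        self.assertEqual(apply_discount(10000, 10), 9000)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(5000, 0), 5000)

# Step 2 (written second): just enough code to make the tests pass,
# the "green" phase. Integer cents avoid floating-point surprises.
def apply_discount(price_cents, percent):
    return price_cents * (100 - percent) // 100
```

Saved to a file, the suite runs with `python -m unittest <filename>`; in a real project an automated runner would execute it on every change.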
Safety net provided as you make changes
Periodically, the developers will notice opportunities to modify their overall approach, in order to remove duplication or make design changes for other reasons. With TDD, this “refactoring” now has a safety net of sorts: the successful execution of the accumulated tests provides at least some assurance that the refactoring didn’t break parts of the existing code. Similarly, when a bug is encountered in the program, the first task the developer takes on actually isn’t to fix the code. Rather, it’s to write a test that duplicates the situation that reveals the bug (and thus fails). Then, the task becomes correcting the code so that the failed test now passes, while of course not breaking any of the other existing tests. Rinse and repeat: write a failing test before you write or fix code, then write the code that will make the test pass, then refactor as necessary. Seems pretty simple, even if a little counterintuitive, eh?
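The bug-first workflow can be sketched the same way; the average function and its bug below are hypothetical, chosen only to show the shape of a regression test.

```python
import unittest

def average(values):
    # Corrected implementation. The original (buggy) version divided by
    # len(values) unconditionally, raising ZeroDivisionError on an empty
    # list; that is the behavior the regression test below was written
    # to expose, before any fix was attempted.
    if not values:
        return 0.0
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    def test_regression_empty_list(self):
        # Written first, against the buggy code, where it failed. It now
        # passes, and it will fail again if the bug is ever reintroduced.
        self.assertEqual(average([]), 0.0)

    def test_existing_behavior_unbroken(self):
        # The fix must not break any of the tests that already passed.
        self.assertEqual(average([2, 4, 6]), 4.0)
```

The regression test then stays in the suite permanently, so the old bug can never silently return.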
But actually doing it amazed me: forcing myself into this “backwards” approach of test-first-then-code reaped real benefits in a number of ways. First, I should note that it was always tempting to bypass the test writing and fall right into coding core logic; absorbing the all-important discipline of holding back and first coding a test took a while. But then I discovered I could fly: for example, I could retool a core set of methods to use a completely different underlying data structure, and (thanks to my test suite) feel reasonably confident that I’d done so without inserting bugs elsewhere in the system. And ever seen a previously fixed bug reappear in a later release? Well, when you’ve written a good test for that bug before fixing it, you have an alerting mechanism: the test should fail if the bug ever shows up again.
Takeaways for the CIO
My experience, as I expected, showed me a few larger points about TDD, beyond bringing me to a greater appreciation of the technique’s immediate benefits.
- TDD provides great advantage but should not be seen as a cure-all
As I’ve frequently discussed, our industry has always had a yearning and push for a “silver bullet” solution that will magically make all this messy development stuff suddenly simple. TDD, on its own, is certainly no more a silver bullet than any of the other candidates, but that doesn’t mean it isn’t phenomenally useful. My work convinced me that by improving the odds of catching problems early and thereby fostering flexibility, TDD takes away some of the house advantage of Murphy’s Law that often seems to rule the casino of software development.
- It’s possible to adopt TDD as a practice and still miss out
I came to realize that with TDD, one can vigorously comply with the “letter of the law” (i.e., dutifully designing tests for every method) but still miss the spirit. Designing a set of tests is a bit of an art: you must anticipate how to ensure, by verifying the behavior produced by just a few very specific examples, that the routine will produce correct results in all cases. Picking the right tests takes judgment, and some developers have better judgment than others. Remember the cautionary words of a famous quote: “One thing a person cannot do, no matter how rigorous his analysis or heroic his imagination, is to draw up a list of things that would never occur to him.” — Thomas Schelling, Nobel Prize winner
It’s possible, then, to get into the situation where the developer can proudly declare that “all my tests pass”, yet the software still doesn’t work in some way, because the tests don’t cover the failure condition. TDD is no more a guarantee or an iron-clad catch-all than a spell checker or a burglar alarm is. The key point is that it improves the odds and helps you react to issues faster and with higher confidence; it doesn’t eliminate every chance that problems will occur. In particular, if the developer hasn’t really understood how to construct a meaningful set of tests, or if he cuts corners by bypassing the writing of a test “just this one time”, a false sense of security can set in.
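A contrived Python sketch of that trap, with a leap-year function and tests I invented for illustration: the suite passes in full, yet the code underneath it is still wrong.

```python
# A deliberately flawed implementation: it handles the common cases
# the tests happen to check, but ignores the 100- and 400-year rules.
def is_leap_year(year):
    return year % 4 == 0

# The test suite. Every one of these assertions passes.
def test_common_year():
    assert not is_leap_year(2023)

def test_typical_leap_year():
    assert is_leap_year(2024)

def test_century_leap_year():
    assert is_leap_year(2000)  # passes, but for the wrong reason

test_common_year()
test_typical_leap_year()
test_century_leap_year()

# "All my tests pass": yet is_leap_year(1900) returns True, even though
# 1900 was not a leap year. The failure condition was simply never tested.
```

The suite is green not because the code is right, but because nobody thought to ask about 1900; exactly the gap the Schelling quote warns about.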
- It’s especially unwise to simply mandate TDD and hope to get the benefits
So my hands-on TDD work made me realize that you should beware of thinking you can get TDD’s benefits via a simple mandate instituting the practice in your organization. TDD done without deep understanding and active commitment from everyone can become no more than a form of “cargo cult” software engineering, done primarily for appearances and cachet, but with little real impact on the quality of your results. To get the benefits, you can’t just tell your developers to “do this”; rather, people have to understand at a gut level the underlying purpose and the ensuing overall benefit. Across the organization, everyone needs to stay constantly watchful for whether the underlying aspects of the technique are being understood, adhered to, and achieved. Because if they aren’t, you’re all just wasting your time; even worse, you’re wasting it while strutting your pride and growing ever more complacent.