Monday, February 05, 2007

Turd Polishing

Heh, I wrote the piece below over 10 years ago. It applies today more than it ever did ;->



I can't stress this enough: get your basic code working before launching any major tuning effort. Doing performance work while code is unstable, or while a design is still being hammered out, is usually a complete waste of time. This is not to say you shouldn't be thinking about performance goals for a program up front as part of the design work -- you should, if the program is one where performance will be an issue. Choosing good data structures and algorithms during the design phase always pays off better than the most expertly executed tuning of poor code late in the game. Strategic algorithm and data structure choices are likely to account for orders-of-magnitude improvements in a program's behavior, compared to the 2X-5X that would be more common for "tactical" tuning efforts.
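
To put a rough number on that strategic-versus-tactical gap, here's a minimal C++ sketch -- an invented illustration, not a benchmark from any real project. Suppose a program performs membership tests against a collection many times: replacing a linear scan with a hash table is a design-phase data structure choice, and it swamps anything micro-tuning the scan loop could ever buy you.

    #include <chrono>
    #include <cstdio>
    #include <unordered_set>
    #include <vector>

    // Hypothetical benchmark: n membership tests against n items.
    // The linear scan does O(n) work per lookup; the hash set does
    // O(1) on average. Exact timings vary by machine, but the
    // asymptotic gap is what produces orders-of-magnitude wins.
    int main() {
        const int n = 100000;
        std::vector<int> vec;
        std::unordered_set<int> set;
        for (int i = 0; i < n; ++i) {
            vec.push_back(i);
            set.insert(i);
        }

        long hits = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (int q = 0; q < n; ++q)                 // O(n^2) total
            for (int v : vec)
                if (v == q) { ++hits; break; }
        auto t1 = std::chrono::steady_clock::now();
        for (int q = 0; q < n; ++q)                 // O(n) total
            if (set.count(q)) ++hits;
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::duration<double, std::milli>;
        std::printf("linear scan: %8.1f ms\n", ms(t1 - t0).count());
        std::printf("hash set:    %8.1f ms\n", ms(t2 - t1).count());
        std::printf("hits: %ld\n", hits);  // keeps the loops from being optimized away
        return 0;
    }

On typical hardware the first loop runs several orders of magnitude slower than the second, and no amount of polishing the inner comparison closes that gap.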

We need to be realistic about optimization, too. It's nice when a project has been well planned and executed from the start -- however, this isn't always going to be the case. You may not have been the one who did the original design and coding work.

Often we'll be handed something that's in trouble and tasked with bailing out a behind-schedule, over-budget disaster -- this happens in real life all the time. In cases like this, there just may not be enough time to do it right and redesign the project.

For whatever reason, management usually seems loath to give up on what exists, and insists on throwing good money after bad. In situations like this we may be reduced to doing damage control just to get the project to a point where it works well enough to consider shipping. When this happens, any performance work falls into a category I call "turd polishing" -- the transformation of that which is truly awful into that which is simply bad.

When you're stuck with a barely functioning house of cards that is going to ship in a couple of weeks come hell or high water, priorities need to be set differently than they would be under "ideal" circumstances. In a nightmare scenario like this we may be forced into doing a little performance work before the code is truly stable -- because it's inevitable that the product is going to ship loaded with bugs anyway.

If you're stuck with one of these all-too-common situations, you're going to have to be really Draconian about triaging garden-variety bugs versus performance problems. Suppose this hypothetical product from hell is a word processing package that takes 20 seconds to open a file of trivial length on a 200MHz Pentium Pro machine. The vast majority of potential users are going to consider that an unacceptable level of performance -- something NEEDS to be done about it before the product ships, or the reviewers will throw up all over the product, sales will be zilch, and you'll be out of a job when the company goes bankrupt. At the same time, there are a number of garden-variety functional bugs in relatively obscure features -- things that just don't work right -- but none of them is serious enough to crash the program or corrupt the user's data.

This distasteful situation is one where you're going to be forced into violating the basic principle of not optimizing unstable code, because the file-opening problem is so severe it must be fixed. There are really only two options when faced with a disaster like this -- you can hunker down and polish that turd, or resign in disgust. (A sketch of what a fix like that might look like follows below.)
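
For what it's worth, a 20-second open on a small file is very often an accidentally quadratic loop, such as growing a buffer by repeated concatenation while reading. Here's a purely hypothetical C++ sketch of the kind of narrow fix turd polishing usually amounts to -- the function names are invented, not from any real product:

    #include <fstream>
    #include <sstream>
    #include <string>

    // Hypothetical "before": grow the document buffer one character
    // at a time with full-copy concatenation. Each append copies the
    // whole buffer, so loading n bytes costs O(n^2) -- invisible on a
    // tiny test file, agonizing on a real document.
    std::string load_document_slow(std::ifstream& in) {
        std::string doc;
        char c;
        while (in.get(c))
            doc = doc + c;   // allocates and copies the entire buffer
        return doc;
    }

    // Hypothetical "after": read the stream in one gulp. Same result,
    // O(n) time.
    std::string load_document_fast(std::ifstream& in) {
        std::ostringstream buf;
        buf << in.rdbuf();
        return buf.str();
    }

Note that the fix doesn't redesign anything; it just removes the single worst behavior, which is exactly the spirit of polishing rather than rebuilding.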

If the end is in sight, I'd grit my teeth and polish that turd. A successful turd-polishing effort can actually be pretty rewarding from a perverse technical point of view. You'll learn a lot from doing it, and quite possibly be hailed as the hero who "saved the project."

1 comment:

Francis W. Porretto said...

"It is easier to make a working system efficient than to make an efficient system work." -- Tom DeMarco.