Refactoring Inhibitors


Source: Jeff Langr & Tim Ottinger
Font: Daniel Black

Refactoring is perhaps the most significant part of sustaining a system in an agile environment. Entropy in a system is a reality: code degrades unless you make a deliberate effort to stave off the degradation. You need good controls (tests) that provide rapid feedback in order to keep a system clean. If the system reaches a state where its design is poor, maintenance costs can grow by an order of magnitude--and that can happen more rapidly than you think.

Agile can exacerbate poor practice. You don't do a lot of up-front design, and you constantly add new features that were never preconceived in an original comprehensive design. Repeatedly forcing features into a design will end in disaster unless you have a means of righting the design over time. That's what refactoring is about.

Given the need to avoid degradation, it's important to recognize anything that might prevent a team from refactoring as much as it must. This card lists some inhibitors to refactoring that you should watch for.

  • Insufficient tests. Experienced developers know when they've done the wrong thing--something that degrades the quality of the code. Yet much of the time, they don't follow up and fix the problems they just created. Why not? Too much fear over shipping a defect. "If it ain't broke, don't fix it." They don't want to do the right thing, because that would require mucking with code that's already working--code that's not even "their" code.

    The right thing is to retain a high-quality design through continual incremental refactoring, which requires the confidence to change the code. That confidence derives from very high levels of unit test coverage, which you can obtain through TDD. You won't get it from test-after development (TAD), which at best nets around 70% coverage--and often the complex areas sit in the uncovered 30%. TDD enables confident refactoring.
  • Long-lived branches. The right thing is to ensure the system has the best possible design through continual refactoring. But developers working on a branch want to avoid "merge hell," and will plead for minimal refactoring as long as the branch exists. Branches should be short-lived.
  • Implementation-specific tests. "Changing the design of existing code" should not create the need for a lot of test rework, particularly if you are changing details not publicized through the class interface. The need to mock generally exposes information to a client (the test) that could otherwise remain private. The use of mocks should be isolated and abstracted. Make sure you're refactoring your tests! Minimize the test-to-target encapsulation violations that mocks create (a short sketch follows this list).
  • Crushing technical debt. If you've not refactored enough, you'll soon be faced with a daunting challenge--code rampantly duplicated throughout the system, long (or short!) inscrutable methods, and so on. Once a problem gets bad enough, we tend to view it as a lost cause and throw our hands up in the air, not knowing where to even begin. Don't let technical debt build up--refactor incrementally, with every passing test!
  • No know-how. Understanding how to properly transform code is one educational hurdle. Knowing whether a given transformation is a good move requires continual learning about design. Developers without a significant background in design will be reluctant to refactor much, as they're not sure what to do. Learn as much as you can about design, but start by mastering the concept of Simple Design.
  • Premature performance infatuation. The goal of refactoring is better design, which to most means more cohesive and decoupled--in practice, a good number of small classes and small methods. A simple refactoring, like extracting a method solely to improve cohesion and thus understanding of the code, can frighten some programmers: "You're degrading performance with an unnecessary method call." Such concerns are almost always unfounded, thanks to things like HotSpot and compile-time optimization (a small sketch follows this list). A true performance expert, working on a system with some of the highest transaction volumes in the world, backed me up on this one. "Make it run, make it right, make it fast" (Kent Beck)--and make it fast only if you measure first and after.
  • Management metric mandates. Management governance (wow, do I hate that word) by metrics can have nasty, insidious effects. Examples:

    1. "You must increase coverage by x percent each iteration." Actual result: Developers tackled each story with a hacked-together written integration test, not a unit test, that blew through as much code as possible. Developers then hastily created new tests by copy-paste-vary. No time left to refactor--they just need to hit their coverage numbers! Later, changes to the system would break many tests at once. Since the tests were barely comprehensible, developers began turning them off.
    2. "We need to reduce defect density." Defect density = defects / KLOC. Well, anything based on lines of code is useful only as far as you can throw it, and you can't throw code (the bits fall everywhere). You can improve defect density by reducing defects. Or, you can increase the amount of code. Most programmers aren't as evil to deliberately create more code than necessary. But if you say to your pair, "hey, we should factor away the duplication between these two methods that are 500 lines each," there will be either a conscious or subconscious decision to resist, since it worsens the metric.
    Programmers will do whatever it takes to meet bogus mandates on metric goals. Use metrics to help uncover problem areas, not to dictate absolute goals.
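
A minimal sketch of the mocking problem, using JUnit 4 and Mockito with hypothetical names (TaxService, TaxTable): the first test verifies which collaborator call the service makes--an internal detail a refactoring could legitimately change--while the second isolates the mock setup in a single helper and asserts only on observable behavior.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    import org.junit.Test;

    interface TaxTable {
        double rateFor(String state);
    }

    class TaxService {
        private final TaxTable table;
        TaxService(TaxTable table) { this.table = table; }
        double taxOn(double amount, String state) {
            return amount * table.rateFor(state);
        }
    }

    public class TaxServiceTest {
        // Brittle: the test knows exactly which collaborator call the
        // service makes. Rework the internals and this verification can
        // break even though the behavior is unchanged.
        @Test
        public void implementationSpecific() {
            TaxTable table = mock(TaxTable.class);
            when(table.rateFor("WI")).thenReturn(0.05);
            new TaxService(table).taxOn(100.0, "WI");
            verify(table).rateFor("WI");
        }

        // Sturdier: the mock lives behind one creation helper, and the
        // assertion is against the result a real client would see.
        private TaxService serviceWithFlatRate(double rate) {
            TaxTable table = mock(TaxTable.class);
            when(table.rateFor(anyString())).thenReturn(rate);
            return new TaxService(table);
        }

        @Test
        public void behavioral() {
            assertEquals(5.0, serviceWithFlatRate(0.05).taxOn(100.0, "WI"), 0.001);
        }
    }

The classes and rates are invented for illustration; the point is that when the design changes, only the one helper that knows about the mock should have to change with it.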
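
On the performance point, here's a small before-and-after sketch (again with invented names, Invoice and LineItem) of extracting methods purely for cohesion. The tiny private methods it produces are exactly the kind of calls HotSpot inlines, so the readability gain is essentially free.

    import java.util.List;

    class LineItem {
        final double price;
        final int quantity;
        LineItem(double price, int quantity) {
            this.price = price;
            this.quantity = quantity;
        }
    }

    class Invoice {
        private final List<LineItem> items;
        Invoice(List<LineItem> items) { this.items = items; }

        // Before: one method mixes summing, discount policy, and rounding.
        double totalBefore() {
            double sum = 0;
            for (LineItem item : items) sum += item.price * item.quantity;
            double discount = sum > 1000 ? sum * 0.05 : 0;
            return Math.round((sum - discount) * 100) / 100.0;
        }

        // After: each step gets its own small, intention-revealing method.
        double total() {
            double sum = subtotal();
            return roundedToCents(sum - discountFor(sum));
        }

        private double subtotal() {
            double sum = 0;
            for (LineItem item : items) sum += item.price * item.quantity;
            return sum;
        }

        private double discountFor(double subtotal) {
            return subtotal > 1000 ? subtotal * 0.05 : 0;
        }

        private double roundedToCents(double amount) {
            return Math.round(amount * 100) / 100.0;
        }
    }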
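
And the defect density game is easy to see with numbers: 50 defects in 100 KLOC is a density of 0.5 defects/KLOC. Leave those two 500-line twins alone--or otherwise pad the system to 125 KLOC--and the same 50 defects "improve" the density to 0.4, a 20% gain on the metric with zero gain in quality. Factor the duplication away so the system shrinks to 80 KLOC, and the very same defects now score a worse 0.625.
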
From a technical perspective, few things will kill an agile effort more certainly than insufficient refactoring.

6 comments:

  1. Sorry about the very long delay between posts. Tim and I should be collaborating much more frequently in the near future. We have two more cards in the pipeline.

  2. I'm easy. But some people *really* get annoyed about performance obsession: http://www.theregister.co.uk/2009/03/23/douglas_bowman_quits_google/

  3. By the way, I just got snagged again by a branch that has to live too long. I've had to set up two working directories. In one, I do refactorings to the trunk. Then I can pull them into my long-lived branch.

    This reduces the cost of integrating rather significantly. Still, the long-lived branch is a problem. We should have found a better way.

  4. I can so sympathize with those long-lived branches. At work I'm partly stuck with ClearCase, which has some source code branches from a decade ago (one branch hell was 10-11 screens *wide* and only two screens deep). The price of technical debt has fortunately hit the management level, so we are slowly refactoring both the source and the version control system.

  5. I just whiteboarded three excuses I commonly hear that probably come under the "fear" category. They are not technical impediments, but I think they forestall a lot of necessary refactoring:

    1. "I meant to do that"

    We had a story/requirement when we wrote the code so there must have been a reason we did it that way (even if we can't quite remember it now).

    2. "It's too late now"

    Despite it being a refactoring, we might have to change some aspect of an external or internal API that somebody is relying on. (This even when the software is in alpha or beta.)

    3. "It will slow us down"

    Sigh. Too busy adding new functionality (at an ever-decreasing pace, with more spaghetti on top). We won't make our "velocity" number if we stop to refactor. Probably a "metric misuse" problem.

  6. Now that the Organizational Objections card is out:
    1) superiority complex
    2) "they won't let us"
    3) means/ends juxtaposition.


    :-)

