Mock Terminology


By Jeff Langr and Tim Ottinger
Font: BrownBagLunch

Well, the title of this post is just wrong. The generic term for "things we use in testing to emulate production behavior" is test double, not mock. The casual programmer may bandy about the term mock when they mean test double, but it is technically incorrect and may lead to misinterpretation. We've never seen it make much of a difference to the end result of a programming conversation, but there are distinct definitions for the various implementations and uses of test doubles.

The term mock object stems from a 1999 paper by Tim Mackinnon, Steve Freeman, and Philip Craig, "Endo-Testing: Unit Testing with Mock Objects." The authors' simple definition: "a substitute implementation to emulate or instrument other domain code." Mocks, or whatever you might call them in 2010, still serve the same purpose.

Use of mock objects in TDD circles grew dramatically over the next several years. Debate grew dramatically, too. The community argued about (a) whether or not to use them at all, (b) in what situations they were most appropriate, and (c) whether or not to use one of the mock tools that were starting to proliferate and procreate.

In 2006, Gerard Meszaros published the book xUnit Test Patterns, which enshrined a handful of nuanced terms for the various kinds of test doubles. These terms had been shopped around in various agile forums for some time leading up to the book's publication. Today the taxonomy is commonly accepted by programmers and mock frameworks alike (oops, but "mock frameworks" itself is a misnomer, as these frameworks usually support all sorts of test doubles, not just mocks).
  • Test Double - The generic term, the phylum for all our species of testing doppelgangers.
  • Stub - An object that returns a specific, fixed value to the system under test (SUT). Stubs are usually constrained to a small subset of methods defined on a collaborating class. "When someone calls the price method, return the value 9.99."
  • Fake - An object that completely emulates its production equivalent. The classic example of a fake is a lightweight, in-memory "database" object that allows for simple, fast emulation of a relational database interface.
  • Mock - An object that self-verifies. A mock asserts that information sent to it is as expected. A test that uses a mock defines and verifies this expectation.
  • Partial mock - An object that contains a mixture of production method implementations and mock method implementations. Partial mocks are generally used when you need to emulate non-existent behavior (i.e. abstract methods) or troublesome behavior defined on the same class you are testing, something that might indicate questionable design.
  • Spy - An object that simply captures messages sent to it, so that the test can later verify that the SUT interacted correctly with its collaborator.
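The distinctions are easiest to see in code. Below is a minimal, hand-rolled sketch of a stub, a spy, and a mock; the pricing collaborator, the checkout function, and all other names are hypothetical illustrations, not from any real framework.

```python
class PriceStub:
    """Stub: returns a specific, fixed value."""
    def price(self, item):
        return 9.99

class PriceSpy:
    """Spy: captures messages so the test can inspect them afterward."""
    def __init__(self):
        self.requested = []
    def price(self, item):
        self.requested.append(item)
        return 9.99

class PriceMock:
    """Mock: self-verifies that it was used as expected."""
    def __init__(self, expected_item):
        self.expected_item = expected_item
        self.called = False
    def price(self, item):
        assert item == self.expected_item, f"expected {self.expected_item}, got {item}"
        self.called = True
        return 9.99
    def verify(self):
        assert self.called, "price() was never called"

def checkout(pricer, item):
    """The system under test: asks its collaborator for a price, adds 10% tax."""
    return pricer.price(item) * 1.10

# Stub: we care only about the canned return value.
assert round(checkout(PriceStub(), "book"), 2) == 10.99
# Spy: we verify the interaction after the fact.
spy = PriceSpy()
checkout(spy, "book")
assert spy.requested == ["book"]
# Mock: the double itself holds and verifies the expectation.
mock = PriceMock("book")
checkout(mock, "book")
mock.verify()
```

A partial mock would mix hand-rolled methods like these with real production methods on the same class, and a fake would replace the collaborator with a complete lightweight implementation.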
These terminological differences may on occasion be useful but are not worth arguing with any real passion. We suggest learning the differences as a means of avoiding time wasted with arguments, using a formulation along the lines of "I'm sorry. Of course I meant spy, not mock. Now, look on line 47..."

So, where do we stand on the debate? (a) Yes, use test doubles (b) when you must, (c) and use a tool if it makes things easier or clearer. Next time, we'll talk about more important things, such as what pitfalls to avoid when working with test doubles.

Organizational Objections To Agile Process


By Jeff Langr and Tim Ottinger
Font is Andrew Script

It is no surprise that organizations struggle when attempting to transition to agile methods. As with any new venture that threatens the status quo, the list of objections is long and varied. In developing this card we collected over a hundred typical sentiments and grouped them into about a dozen categories--too many for a single card. In keeping with our personal vows of brevity, we present here the first card: Objections borne out of organizational and cultural circumstances. We will present reasons that stem from individual belief systems and biases separately.

In order to help transitional consultants and rank-and-file people who are struggling, we provide commentary and counter-arguments.
  • "It Can't Work Here" There is a common assumption that Agile methods require a special set of initial conditions. Most companies believe that their own situation is unique, their software uniquely complex, their market position too tenuous, or their management system too inflexible. Given such specialized initial conditions, how could a general-purpose method based on simplicity possibly work?
    Agile is not so much a set of constraints and rules as it is a framework in which a team can continually discover its own limitations and then derive better approaches. There are few necessary initial conditions beyond an agreement to work together in an incremental and iterative way.
  • "They Won't Let Us" is a special condition of "It Can't Work Here." Agile methods may be deemed feasible and even advantageous among the technical crowd, but may seem counter to the organizational culture and/or management habit. For instance, there may be a competitive personnel evaluation system for staff which frustrates attempts at collaboration. The management might have a strong command-and-control model which prevents self-organization. Perhaps the organization is built upon the concept of a strong lead programmer directing a squad of "code monkeys." Maybe schedule pressures are so great that there is no slack to spend on organizational learning. It may be that the team cannot modify the layout of the office due to union issues or concern for decor.
    Agile methods are advantageous to development and product management since they provide more data about the team's real progress, allow better focus on important features, and require very limited overhead to practice. While some aspects of Agile practice are clearly focused on management practices and product management strategies, an increasingly capable team can usually cope with difficult organizational practices in the short term and can win over leaders in the long run.
  • "Guilt By Association" refers to the situation where Agile methods have not been tried, but at first blush seem to resemble other methods or practices that have fallen from favor. Sometimes Agile is associated with uncontrolled "cowboy coding." Other times Agile is perceived as a trick by management to force programmers to work harder. It may be confused with ceremony-heavy consulting-driven methodology. The poor image is often tarnished further by tool vendors hoping to cash in on the latest buzz with tools that are rarely necessary, of limited helpfulness, hard to learn, tedious to use, or even detrimental to collaboration and communication. An Agile conversion project may follow on the heels of other failed "flavor of the month" methodology attempts.
    Agile is a low-ceremony, disciplined way of working built on concepts and ideas that have been successfully applied in the software industry for many decades, and longer in other industries that still embrace these principles today. It is a simple, incremental approach to team software development that requires little tooling beyond a place to sit, a whiteboard, a good supply of index cards, and a few rolls of tape. It would be a shame if an unfair prejudice caused us to miss out on an opportunity to build a truly great team.
  • "Means/Ends Juxtaposition" is a variation on "cargo cult" mentality. A typical non-agile company will have layers of policy and management practices built on strict phase separation, copious up-front documentation, individual accountability, rigidly-defined hierarchical roles, and/or tail-end quality control processes. Artifacts produced by these practices become the primary output of teams, and enforcing mandated behaviors becomes the primary concern of managers, even though neither contributes meaningfully to quality software development.
    An organization attempting to transition to agile may fall into the same trap by rigidly applying so-called agile practices. Numerous teams claim (capital-A) "Agility" because they hold interminable feckless retrospectives, prescribe stand-up meetings that provide only vertical status, or prolong interminable pair "marriages." Stand-ups, retrospectives, and pairing are extremely valuable tools, but only if you are able to align them with agile values and principles.
    To succeed at agile, we must first understand that it is a continual journey of team discovery, and not a rigid set of practices. We must have some sense of where we want agile to take us, and that the journey will reveal unforeseen challenges and opportunities. Agile development is about growth rather than conformity.
  • "Inferiority Complex" The team that ships quality software on a consistent and frequent basis exhibits a high level of confidence. A team that lives with frequent failure and an inability to estimate the quality of its product will exhibit a low level of confidence. An observer may regard the confidence and ability of the team as an initial condition rather than as an eventual outcome of the process, surmising that confident superstars are a necessary precondition of agile success.
    The Pareto distribution suggests that most teams simply don't have enough star developers to conquer this mistaken understanding of agile. Real concerns over minimizing entropy in a rapidly changing system engender pure fright: "How can we possibly introduce new features without pre-conceiving the entire design? Our code is horrible to start with, and we've tried to write some unit tests, and it's just too hard." A large number of teams ultimately feel they lack the skill to produce anything of value in a short iteration.
    Agile software development is about teamwork, not about superstars. For example, pair programming helps make TDD less difficult (for everyone, superstars and supernovices alike); TDD in turn provides us with the means to safely refactor code; refactoring sustains quality design in an extremely dynamic environment. Likewise, introspection and teamwork allow for continual improvement. Agile is a means to raise the competence of a team and lower the difficulty of working with a code base.
  • "Superiority Complex" is when an organization feels that they have a pretty good handle on things. They have a process that has allowed them to deliver successfully in the past, and regard any change in practice as a step down. Practices like TDD and pair programming are regarded as training wheels for junior programmers, and wholly inappropriate for serious software professionals. They may believe that they have a special gift for up-front design that makes incremental design wasteful and unnecessary. The organization believes they have transcended the need for Agile practices.
    There will always be those who think that the world has nothing left to teach them. If the organization is perfect, there are no flaws to uncover, no waste to reduce, and no improvements for agile to bring. Since agile methods are about doing, measuring, and reflecting on the work, we often find that Superiority Complex is based mostly in wishful thinking and a lack of measurement.
    If, on the other hand, a company has found the techniques that leave Agile in the dust, we would love to learn about and adopt this superior method at work. Agile methods are perhaps not the best methods possible, only the best ones we know as of this writing.
  • "Rejection of Insufficient Miracle" is the tendency to refuse to use a practice which leaves any current problems unsolved. "It's all or nothing, baby!" A team that cannot automate all testing sees little point in automating tests at all. Incremental development does not guarantee that a certain date will be met with all features in place, so there is no reason to iterate. The team must collaborate with other groups which do not work in an Agile way--what is the sense in only part of the company being agile? If we can't guarantee everyone will refactor the code, why should anyone spend the time cleaning up the design? By refusing incremental improvements, the organization rejects the very soul of the Agile way of working.
    Samuel Johnson once said, "Nothing will ever be attempted if all possible objections must first be overcome." All software projects, even one hypothesized to be free of technical error, are prone to failure for myriad reasons. All products, teams, and organizations are "insufficient miracles," leaving many of life's problems unsolved. Seeing that we have all gotten out of bed and come to the office, the question is whether we want to try a process that maximizes learning and quality (without guarantees), or an equally guarantee-free process that does not.

We fully recognize that there are teams that will not want to use Agile methods, and we suspect that there are organizations unwilling to modify their practices to accommodate a new style. We suspect that any method will not be successful in such organizations and that abandonment may be a reasonable strategy. As they say, "Change your company, or change your company."

In time, we may discover ways of fulfilling the promises of Agile with other methods. Agile is not the only possible way to improve an organization, even though we have found it to be one exceptional way to improve software-developing companies.

If, on the other hand, an organization is not interested in the kinds of benefits Agile methods promise (teamwork, growth, quality, productivity) we recommend that they become our competitors.

Branch Per Feature




Font: Mechanical pencil for body, Erwin for heading
Sources: Tim Ottinger, George Dinwiddie (via mail list), Jeff Langr

Branch-per-feature is a common SCM strategy in which a branch is created on a central server for each feature that is under development. As a feature is completed, the branch is integrated back into the main development code line.

In some organizations, releases are composed by merging selected, completed features together. This seems quite rational, and can be made to work with enough effort applied in bursts. Every merge creates a unique, new configuration in the system that must be tested for side effects. If the merge has unintended consequences, then one or more features must be modified or the release re-composed without it. As a result, releases become major events in the life of the company with huge testing parties and great schedule risk. These times are frequently called "hell week" or "hell month" depending on the release periodicity.

In addition, the more two lines of code diverge, the harder it is to reconcile their changes. When work is integrated several times a day, it tends to be a fairly trivial effort. When it is integrated only a few times per month, it is rather harder. A few times a year, and it is surprisingly difficult. Likewise, if many branches are held for a long time, each branch will not only diverge from the main codeline but from other branches as well. This effect can make it rather difficult to estimate the effort of integrating branches, which contributes to the nervousness around hell week.

Agile teams integrate continually with great ease and success. Why, then, do some organizations hail branch-per-feature? Teams that use branch-per-feature as regular practice are often compelled to do so because they don't know which tasks/stories will actually be completed by the end of the sprint. It seems easier to hedge bets with version control systems than to tackle the organizational/political problems that frustrate planning and execution of sprints.
  1. Too much WIP (work-in-process) means that too many tasks are undertaken at once. Branch-per-feature helps the team deal with the fact that work is not being finished predictably or reliably within the iterations. Many tasks are reported complete on the last days of the iteration, yet some of those are rejected. Branch-per-feature allows management to decompose and recompose the release at the tail-end as they find out what is actually completed.
    In an agile team, as many as half of the features are done by the midpoint of the iteration and are being tested frequently. Very few changes are actually at risk of being left incomplete. In the best teams, missing an iteration boundary is a rare event, so branched features are unnecessary.
  2. Features are too large if they cannot be completed in a small part of a single iteration. When they are too large, the uncertainty prompts the team to fork the code base. The forked code base becomes more difficult to integrate as it diverges from the original code line. Fear of difficult integrations in turn encourages the team to hold isolated branches even longer.
    Agile teams instead look to ensure features are small enough to be completed within a few days. Small features completed in a day or two rarely require complex merging. Any overlap of effort can be easily coordinated with other team members during stand-ups or ad hoc conversations across the team table, further minimizing the chance that a difficult merge will be necessary.
  3. Structure is poor if changes routinely span many files in many libraries or assemblies (something Martin Fowler refers to as "shotgun surgery"). When a feature's implementation is scattered all over the code base, it is harder to accurately make changes and harder to predict when work will be truly done. This uncertainty drives the team to isolate rather than integrate.
    Agile teams are highly cognizant of the value of simple, SOLID design. They understand that systems exhibiting true cohesion and minimal duplication dramatically lower the need for shotgun surgery.
  4. If there is no way to turn off incomplete features then the team will fear customers stumbling into incomplete sections of code and causing damage to data or additional customer support burden. This force drives developers to develop code in isolation from the main codeline, which of course increases the cost of integration.
    Agile teams view changes to the system holistically, breaking down each new feature into a series of incremental changes to the mainline. They understand that while this requires some level of overhead, it minimizes merge hell and demands a better system design. For larger changes, they look to solutions such as branch by abstraction to provide both benefits.
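Feature toggles are the simplest way to keep incomplete work on the mainline while hiding it from customers. A minimal sketch, with all names hypothetical:

```python
# A feature-toggle sketch: incomplete work is merged to the mainline daily
# but stays dark behind a flag until it is finished. All names hypothetical.

FEATURES = {"new_discount_engine": False}  # flipped to True when complete

def legacy_discount(total):
    return total * 0.95              # the proven, shipping behavior

def new_discount(total):
    raise NotImplementedError("under construction")  # merged anyway, never run

def discount(total):
    """Callers see one entry point; the toggle picks the implementation."""
    if FEATURES["new_discount_engine"]:
        return new_discount(total)
    return legacy_discount(total)

assert discount(100.0) == 95.0       # customers reach only the legacy path
```

Branch by abstraction works the same way at the design level: callers depend on the single `discount` seam while old and new implementations coexist behind it, so no long-lived branch is needed.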

The Seven Code Virtues



Authors: Tim Ottinger and Jeff Langr
Font: Burst My Bubble


Programming pundits often decry the dismal state of code in the world. We hear speakers demand professionalism, a more craftsmanlike value system, rigorous certification, and so on. Yet in response to these very demands, the argument is frequently made that whether code is "good" or "bad" is subjective and situational. We beg to differ.

To promote a shared set of programming values, we propose these seven virtues of code:
  • Working, as opposed to incomplete
    A program that works is simply superior to one that doesn't work. We contend that a working program now is of higher value than one that might work some day. To this end, incremental and iterative methods (such as agile methods) push us to complete small features as soon as possible, with improvements and expansions to follow.
    We ensure code is working by writing tests before and after writing code as we consider more success and failure modes. We can tell code is working by running the tests and by using the software.
  • Unique, as opposed to duplicated
    The worst thing we can do to working code is to duplicate it. Copies and near-copies scattered willy-nilly across the code base make code difficult to maintain. We struggle to eliminate duplication each time we refactor in our red, green, refactor cycle.
    A dirty software industry secret: Many "stepback" or "regression" errors are not really re-broken code, but are instead examples of fixes to duplicated code.
    We can tell that code is duplicated visually (by common paragraph structures) or by using duplicate-detection tools such as Simian.
  • Simple, as opposed to complex
    Simplicity here refers to the number of entities, operations, and relationships present in any particular routine/function, and not to the readability of that module (which we call "clarity").
    The best way to increase simplicity is to use simpler structures and algorithms. Reducing complexity in this way often translates to improved runtimes, smaller code size, and easier optimization.
    We can also improve simplicity of one routine by extracting methods so that a series of manipulations becomes a single step as far as all of its callers are concerned. By moving the extracted methods to the appropriate classes, we also further develop the type system. After extraction, the code still takes all of the same steps, but those steps are evident in far fewer places in the code. The extracted methods are also simpler because they are unencumbered by their original context, a fact which aids us in finding yet simpler algorithms and structures.
    Such simplifying code migration is at the heart of object-oriented design.
  • Clear, as opposed to puzzling
    The meaning and intent of code must be obvious to the reader. Code misunderstandings generate errors. Confusion over code creates delays.
    While high-level languages make it easy to see what code is doing, there is still an art to producing code which communicates its goal and intent. The consensus of multiple readers is nonetheless a reasonably consistent measure of clarity. Therefore, the most reliable way to make code clear is to have multiple colleagues reading it.
    When one sees an improvement in readability from merely renaming variables, classes, or functions it is because one has improved clarity without changing any of the other virtues of the code. Clarity is further amplified by other virtues such as simplicity and brevity.
  • Easy, as opposed to difficult
    Adding and modifying code should not be an arduous process. Ease is largely a matter of how much code must be typed in how many places, and how much configuration must change. In a particularly ugly code base, the easiest way to get code working is to implement a hack in an inappropriate place. In a truly clean and simple code base, putting a correct design into place is often as easy as a hack. Uncle Bob Martin has stated that design has degraded when doing "the right thing" is significantly harder than making "an ugly hack."
  • Developed, as opposed to primitive
    A primitive system is not necessarily simpler (fewer parts), nor easier (less thinking and typing), nor more clear than a developed system. Primitive code tends to be characterized by Duplication, Feature Envy, and Primitive Obsession code smells. These make a primitive solution more complex, more difficult, and less clear than one built with a well-developed type system.
    In an object-oriented system, the developed type system of an application provides well-thought-out classes whose methods make continued development easy.
    A system is well-developed when functionality appears just where one might expect it: string methods on strings, account methods on accounts, and UI elements such as button activations merely making calls on "business objects."
  • Brief, as opposed to chatty
    It is valuable for code to be as brief as possible without sacrificing other virtues. This is part of the reason that language tools like LINQ and list comprehensions and closures have become so popular of late. All programmers, including the one who writes it, benefit from writing and reading less code (as long as this smaller amount of code is otherwise clean).
    Code that is long and chatty is much more likely to contain hidden errors. An overly cryptic method is likely to be misunderstood. Either one is hard to take in at a glance and understand.
    Playing "programming golf" is actually a meaningful activity. If one can make the solution to a problem smaller without sacrificing clarity (or indeed may improve clarity by reducing the solution to a smaller form), then one is reaching a more brief form. The distance from an ideally brief, clear form is unwanted verbosity (chattiness).
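Several of these virtues show up together in even a tiny refactoring. The sketch below (all names illustrative) takes a duplicated, chatty, puzzling routine and makes it unique, simpler, clearer, and briefer without changing its behavior:

```python
# Before: two near-copies of the same routine -- duplicated, chatty,
# and puzzling (what are xs and t?).
def r1(xs):
    t = 0
    for x in xs:
        t = t + x
    return t * 1.07

def r2(xs):
    t = 0
    for x in xs:
        t = t + x
    return t * 1.07

# After: unique (the tax rule lives in one place), simple (small extracted
# steps), clear (intention-revealing names), brief (sum() replaces the loop).
TAX_RATE = 0.07

def with_tax(subtotal):
    return subtotal * (1 + TAX_RATE)

def invoice_total(line_prices):
    return with_tax(sum(line_prices))

# Working: the refactored code provably preserves the old behavior.
prices = [10.0, 20.0]
assert round(invoice_total(prices), 2) == round(r1(prices), 2) == round(r2(prices), 2)
```

Note that the "after" version is also easier to change: a new tax rate or a new caller touches exactly one place.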

Personal Objections To Agile Process


By Tim Ottinger and Jeff Langr
Font is Kristen ITC

We have already discussed organizational objections to adopting Agile work methods. Here we discuss the personal objections. Agile development has never claimed to be an easy fit for all organizations.

We understand most of the reasons that agile transitions can be quite difficult for organizations. Likewise, individuals may be emotionally invested in certain structures and practices, such that converting to an agile workstyle is perceived as threatening and undesirable. We again spent time collecting and categorizing a great many complaints, finally boiling (most of) them down to fit our 7 +/- 2 bullet point format. We find the categorizations given here to be helpful, and hope that they will be useful to the coaches, managers, and developers who visit the Agile in a Flash blog.
  • Personal Bubble/Social Dysfunction The software development industry's long history of attracting anti-social sorts aside, there are some legitimate reasons that people retreat into a personal bubble. Some team members may have bad history together, ranging from the awkward (an ill-fated past romance) to the unpleasant (an adversely opinionated pair) to the intolerable (an abusive partner). Some suffer from issues such as simple timidity, fear of exposure for doing non-work tasks at the office, or a tendency toward introversion. There are iron-clad issues such as actual mental or emotional disabilities. Cultural issues can make mutual understanding within a team more difficult.
    The personal bubble is a tough issue to overcome, but we don't work in tight teams because it is comfortable. We work in this manner because of the advantages it can bestow:
    • improved code quality
    • ongoing opportunities to learn new techniques
    • wider exposure to the code base
    • a trustworthy, open communication channel with the customer
    • process improvement based on experiential data
    • a team aligned on common goals
  • Lone Ranger Teamwork is the Agile Way, but some individuals prefer immediate gratification with immediate recognition. The fictitious Lone Ranger would ride into town, solve a mystery, rescue the innocent, restore the peace, and disappear, leaving behind a single silver bullet as a signature. This romantic vision is appealing to many programmers. Everyone dreams about being the hero.
    The downside is that a team is functional to the degree that it does not need to be rescued. The Lone Ranger may have been the hero of the day, but he did not share the knowledge and techniques that led to his success. The Lone Ranger does little to help the rescued learn how to solve similar problems in the future.
    A better role model is the Karate Kid's mentor, Mr. Miyagi, who not only rescues Ralph Macchio's character, but also teaches him to fend for himself. Skilled practitioners who can teach others are superior assets to the team and the organization. Agile teams provide superior mentoring, which leads to teams developing the art of making good decisions.
  • Old Dog "Habit is habit," said Mark Twain, "and not to be flung out of the window by any man, but coaxed downstairs a step at a time." This is especially true for those productive habits which have served us in the past. Sometimes people don't want to learn any new technologies or methods, and even those who are excited about new skills will revert to old habits under pressure.
    Agile presents significant challenges to the old dog. Practices like TDD require developers to think about solving problems in a different, even inverted manner. Agile planning can invert the flow of how everyone thinks about their work--people once at the tail end of the cycle must think about how their role changes as they look to provide value earlier and more incrementally.
    It may help to realize that agile is not only a change to how the code is written, but a way to ensure that the individuals in the team can develop personally and as a team. It is a way to optimize the meaningfulness of the work that is done. It is a means to gain respect for a developer's contribution. It is likely to increase the perceived value of the Old Dog's work, rather than merely inconvenience him.
  • Zero Sum Game It is especially hard to engender cooperative behaviors when the development team (or its leaders) are competing against each other for position, respect, compensation, or autonomy. If the team thinks in terms of a zero-sum game, then they feel that they can only win if their teammates lose. In organizations with a history or risk of layoffs, developers will scramble to avoid being at the top of the pay hierarchy or at the bottom of the performance stack, knowing that those ends tend to be chopped first. In organizations that reward individual effort, one feels the need to be the last to leave and the first to arrive to beat out his so-called "colleagues."
    Agile promotes a different system. There is more than enough work to go around, and more than enough improvement possible for us all. There is plenty of credit to share. The Mr. Miyagi sharing-and-mentoring model comes into play, and we can grow through our contributions to our teammates and our project. We can all have more success, and it lessens none of us.
  • Inferiority Complex The individual may fear he is less capable than his teammates, and may seek to hide his inabilities by working alone. He may be concerned about slowing down his teammates, or dreading daily humiliation at the hands of his teammates. Looking at the famous "rock star" agile developers, the insecure developer may fear that he could never measure up. The most senior persons on a team often fear displaying their few deficiencies in a pairing session.
    The nice thing about a functional agile team is that you eventually will get over that sore ego hurdle. Things like switching pairs frequently to avoid silo pairing, collaborating in an open workspace, and delivering working software every few weeks all create true transparency.
    A motivating fact is that pair programming is a path to personal improvement. Pairing with star programmers tends to make one into a star programmer. In relatively short time, any interested individual can become surprisingly competent. It is mostly a matter of seeing how it is really done, asking questions, and getting some guided practice.
  • Superiority Complex An individual who feels she has a pretty good handle on things may also believe that it is beneath her dignity to be "forced" to work with "mediocre" teammates. She may feel that she is the only one capable of working in agile, or that she is far above the use of common methods. Sometimes she even feels that she's already learned everything worth knowing about software development. To her, teamwork requires her to drag along incompetent partners, a practice that will slow her down and provide no personal upside.
    As agile coaches, we react most strongly to the intransigent, overly cocky developer--but we need to remember that a projected superiority complex can actually be a mask for an inferiority complex. Pairing can let the overconfident member fail more visibly, which can allow coworkers to help correct her shortcomings.
    Some rightfully confident developers find that they also enjoy coaching and developing their coworkers. Bluster fades when developers realize that they are not competing against each other, but against errors and code faults. Finally, a developer with top chops in a pre-agile organization will usually emerge as a leader in an agile organization as well.
  • Rejection of Insufficient Miracle The individual experiences problems in the development team that are not addressed by agile methods. She realizes that it will not make all the teammates act like best friends, and won't make customer pressures or payscale changes any better. It may not make them happier in their workspace. Since the new system does not solve their individual issues, they have no reason to use it at all. It is not miracle enough for them.
    The agile focus is on unencumbered and incremental development of the product, the team, the customer relationship, and the organization. Agile is more of a system in which to identify and address problems than it is a method or methodology.
    One might choose to wait for an absolute solution to all problems, but in the meantime it might be good to invest in daily incremental improvements. Agile--in our view, currently the best bet for most software projects--will eventually fall out of favor for a better approach. We don't know what will supplant it, but we can confidently bet that the new methods won't involve eliminating incremental growth. Good things come to those who wait, but only if they work hard while waiting.

B.E.S.T. leadership



Source: Tim Ottinger & Jeff Langr
Font: Brown Bag Lunch


Do agile teams require leaders? Neither the agile manifesto nor its principles speak about leaders. Instead, the principles emphasize teams, and the penultimate principle says that the "best architectures, requirements, and designs emerge from self-organizing teams." A self-organizing team would seem to obviate the need for a leader, at least in the classic organizational sense of a "singular and fixed" leader.

But all teams, agile or not, need leadership. "Self organizing" is tough, and it's often far more effective for someone to guide a team along at times, through its various challenges. This leadership comes from someone who at the moment has the experience, the clarity to help drive the best plan, and the people skills to make it happen. Such leaders can be external to the core team (such as the ever-present line manager), but a successful agile team accommodates more dynamic leadership. Leaders arise from within as needed. The team learns how to support individuals in this role, however temporary it might be.

A successful agile team embraces incrementalism for all activities required as part of software development: planning, requirements gathering, analysis, testing, design, coding, review, and delivery. Leadership is but one more team activity that is best executed on an incremental and continual basis. At times an agile team member may fulfill the role of leader for perhaps a couple of minutes.

Effective leadership requires four values that agile team members should also hold dear:

  • Benevolence: The team must trust that their leader won't throw them individually or collectively under a bus, that the team's achievements will not be used against them, and that their faults will not be grist for public humiliation. A benevolent leader is not a pushover, but even in confrontation, his interest is in improving the team and its members.
  • Effectiveness: If one cannot get things done, one cannot lead others. A recent study shows that the greatest motivator for "working people" is the ability to make progress. An effective leader will help make it possible for the team to make real progress every day. Within a team, the person who knows how to get started often emerges as a leader; this leader must also know how to keep moving forward when others would be blocked.
  • Strength: A leader does not lose her head in a crisis. A leader's infrequent NO carries weight. She does not beat up on her inferiors in order to look tough. If necessary, a strong, benevolent leader will remove some people for the good of the rest of the team. She provides feedback appropriately, instead of sweeping things under the rug or embarrassing team members with needless public confrontation. (Remember the old adage, "praise in public, punish in private.") Respect is best earned, not extorted.
  • Temporariness*: A good leader does not install himself as a fixture in the company or team by building reliance on his personality and special knowledge. Knowing that success may lead him to new places, he is always helping others understand what it will take to replace his leadership. His actions when he is present will allow the team to continue successfully when he is absent.
When we find these four traits active in one person, we are looking at potentially legendary leaders. As followers, we need to support and encourage our best leaders and take pride in their work. As managers, we need to give them additional respect, autonomy, and possibly compensation. As clients, we should listen more carefully to such leaders and heed their warnings and advice. As team members, we should cultivate these four qualities in ourselves so that we may lead when we are called upon.

* "Temporariness" is a real word. We looked it up.

Refactoring Inhibitors


source: Jeff Langr & Tim Ottinger
font: Daniel Black

Refactoring is perhaps the most significant part of sustaining a system in an agile environment. Entropy in a system is reality: Code degrades unless you make strong efforts to stave off the degradation. You need good controls (tests) that provide rapid feedback in order to effectively keep a system clean. If the system attains a state where its design is poor, maintenance costs can become an order of magnitude larger. This can happen more rapidly than you think.

Agile can exacerbate poor practice. You don't do a lot of up-front design, and you constantly add new features that were not pre-conceived in an original comprehensive design. Repeatedly forcing features into a design will result in a disaster unless you have a means of righting the design over time. That's what refactoring is about.

With the need to avoid degradation, it's important to recognize anything that might prevent a team from refactoring as much as they must. This card provides some inhibitors to refactoring that you should watch for.

  • Insufficient tests. Experienced developers know when they do the wrong thing, something that degrades the quality of the code. Yet much of the time, they don't follow up and fix the problems they just created. Why not? Too much fear of shipping a defect. "If it ain't broke, don't fix it." They don't want to do the right thing, because that would require mucking with code that's already working--code that's not even "their" code.

    The right thing is to retain a high-quality design through continual incremental refactoring, which requires the confidence to change the code. That confidence derives from very high levels of unit test coverage, which you can obtain through TDD. You won't get the confidence from test-after development (TAD), which at best nets around 70% (and often the complex areas are in that 30% of uncovered code). TDD enables confident refactoring.
  • Long-lived branches. The right thing is to ensure the system has the best possible design through continual refactoring. But developers working on a branch want to avoid "merge hell," and will plead for minimal refactoring as long as the branch exists. Branches should be short-lived.
  • Implementation-specific tests. "Changing the design of existing code" should not create the need for a lot of test rework, particularly if you are changing details not publicized through the class interface. The need to mock generally exposes information to a client (the test) that could otherwise remain private. The use of mocks should be isolated and abstracted. Make sure you're refactoring your tests! Minimize test-to-target encapsulation violations created by mocks.
  • Crushing technical debt. If you've not refactored enough, you'll soon be faced with a daunting challenge--code rampantly duplicated throughout the system, long (or short!) inscrutable methods, and so on. Once a problem gets bad enough, we tend to look at it as a lost cause and throw our hands up into the air, not knowing where to even begin. Don't let technical debt build up--refactor incrementally, with every passing test!
  • No know-how. Understanding how to properly transform code is one educational hurdle. Knowing if it's a good move or not requires continual learning about design. Developers without significant background in design will be reluctant to refactor much, as they're not sure what to do. Learn as much as you can about design, but start by mastering the concept of Simple Design.
  • Premature performance infatuation. The goal of refactoring is better design, which to most means more cohesive and decoupled. That means a good number of small classes and small methods. A simple refactoring, like extracting a method solely to improve cohesion and thus understanding of code, can frighten some programmers. "You're degrading performance with an unnecessary method call." Such concerns are almost always unfounded, due to things like HotSpot and compile-time optimization. A true performance expert, working on a system with some of the highest transaction rates in the world, backed me up on this one. "Make it run, make it right, make it fast." --Kent Beck (and make it fast only if you measure, both before and after)
  • Management metric mandates. Management governance (wow, do I hate that word) by metrics can have nasty, insidious effects. Examples:

    1. "You must increase coverage by x percent each iteration." Actual result: Developers tackled each story with a hacked-together integration test, not a unit test, that blew through as much code as possible. Developers then hastily created new tests by copy-paste-vary. No time left to refactor--they just need to hit their coverage numbers! Later, changes to the system would break many tests at once. Since the tests were barely comprehensible, developers began turning them off.
    2. "We need to reduce defect density." Defect density = defects / KLOC. Well, anything based on lines of code is useful only as far as you can throw it, and you can't throw code (the bits fall everywhere). You can improve defect density by reducing defects. Or, you can increase the amount of code. Most programmers aren't so evil as to deliberately create more code than necessary. But if you say to your pair, "hey, we should factor away the duplication between these two methods that are 500 lines each," there will be either a conscious or subconscious decision to resist, since it worsens the metric.
    Programmers will do whatever it takes to meet bogus mandates on metric goals. Use metrics to help uncover problem areas, not dictate absolute goals.
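The extract-method worry deserves a concrete picture. Here is a minimal sketch--the invoice functions and their field names are invented for illustration, not taken from any real system--showing the kind of tiny extraction that improves cohesion and readability while preserving behavior:

```python
# Before: the pricing rule is an inscrutable expression buried in a sum.
def invoice_total_before(items):
    return sum(i["qty"] * i["price"] * (1 - i.get("discount", 0)) for i in items)

# After: a tiny extracted method whose name documents intent.
# The extra call is exactly the "overhead" that frightens premature
# optimizers, yet it changes nothing about the computed result.
def line_total(item):
    return item["qty"] * item["price"] * (1 - item.get("discount", 0))

def invoice_total(items):
    return sum(line_total(i) for i in items)
```

Because the refactoring is behavior-preserving, a fast unit test comparing the two versions passes before and after--which is precisely the confidence that makes such cleanups routine rather than scary.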
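The defect-density gaming in the second example is simple arithmetic, and worth seeing with numbers. The figures below are invented purely for illustration:

```python
def defect_density(defects, loc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000)

# 50 known defects in a 100 KLOC system:
baseline = defect_density(50, 100_000)   # 0.5 defects/KLOC

# "Improve" the metric without fixing a single defect, simply by
# declining to factor away 25 KLOC of duplication:
bloated = defect_density(50, 125_000)    # 0.4 defects/KLOC
```

The metric improved by 20% while the codebase got strictly worse--exactly the subconscious incentive the mandate creates.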
From a technical perspective, few things will kill an agile effort more certainly than insufficient refactoring.

Weinberg's (First Three) Laws of Consulting


Font: AndrewScript 1.6
Source: Gerald M. Weinberg, The Secrets of Consulting


I re-read Gerald Weinberg's book The Secrets of Consulting (Dorset House, 1985) once every year or two. The first time I read it, about a dozen years ago, half of it seemed obvious while the other half seemed counter-intuitive. But I discovered that I too often ignored its obvious advice (i.e. "common sense"), and that its counter-intuitive advice is spot on.

Weinberg spills so many secrets ("give away your best ideas" being one of them) that it seems unfortunate to mention only the first three (and their corollaries), but getting past these is key to understanding the rest. And yes, I highly recommend getting a copy of the book so that you can discover the rest of the secrets, even if you don't consider yourself a consultant. Substitute "problem solver" or "agile coach" for "consultant," and most of the book will apply equally well (except perhaps the parts about marketing and pricing).

I actually lost a potential new client because of my inattention to the first law, or more specifically to its corollary. After having a phone conversation about the client's interest in transitioning to XP (this was about 5 years ago, when people still uttered the term XP), I met with the leadership team at their offices. On the phone, the person looking to bring me in talked about some of the serious challenges they had. But when I arrived, I heard little about these serious problems, only some vague notions that they wanted to improve how they did things.

Never mind, I was too wrapped up in my grandiose scheme to solve all problems for them using XP. I mentioned the challenges I had heard about on the phone, and indicated that I'd be able to help them fix it all. "What makes you think we have such serious problems?" Oops!

What Weinberg points out is that it's very difficult for us to admit it when we have serious problems. I didn't get the gig, because were I to have waltzed in and solved many large problems, it would have been far too embarrassing for the people in that room. Since then, I've promised as much improvement as their pride is willing to admit--per Jerry, 10%. (Of course, there's nothing that says you can't deliver more, as long as you are cautious about who gets the credit--see rule #3). Just last week, I found the rule to be dearly applicable in my personal life.

As far as the second law is concerned, we all tend to follow comfortable patterns, and this is often the cause of the problem: Once you're in a rut, it can be hard to get out. "We've always done things this way, that's just the way it has to be." The trick for a consultant is to help someone get out of a rut--which requires a change in direction--without themselves starting to fall into the same rut. You don't want to stay in one place too long as a consultant.

Some people have found Weinberg's "secrets of consulting" to be nothing short of greedy and cynical. They suggest the third law is all about taking as much money as possible from the client on an hourly basis. But Weinberg points out that this notion of not getting paid by the solution hearkens back to the first law: A solution expensive enough to require a consultant requires a problem too big to admit. Is it cynical to work in a manner that meshes realistically with normal human behavior? I don't think so.

I hadn't looked at The Secrets of Consulting in about two years. That's evidence of the rut I'd been traveling in. I've recently been shoved out of my rut, and I am thankful that someone reminded me (with the utmost subtlety) to revisit the book. I'm looking forward to re-amplifying my impact!

Stopping the Bad Test Death Spiral





Fonts: Daniel, Daniel Black
Source (SCUMmy Cycle): Ben Rady, Rod Coffin of Improving Works from their Agile2009 presentation.
Source (Remedies): Tim Ottinger & Jeff Langr


The "SCUMmy Cycle" is all-too-common. A team with legacy (untested) code tries TDD hoping they will be able to continue making improvements. First efforts result in integration tests, perhaps because the code is tightly coupled and not cohesive. The team intends to someday replace them with proper unit tests. A team lacking essential understanding of the qualities of a good unit test will write integration tests unwittingly.

Months or years later the tests are abandoned, with a significant investment in their construction and maintenance having gone to waste. How does this happen?

Here's how the cycle generally plays out:
  • Only integration tests are written. One common cause: business logic is intertwined with UI or database code, perhaps as a reflection of examples found in framework and library tutorials.
  • More tests are added, until running them is slow/painful. Fifty to 100ms to interact with a database doesn't seem bad. But multiply that by a few hundred or thousand tests, and even a small test suite takes several minutes to execute.
  • Tests are run less often because developers can't afford to run them. Developers will resist running ten-minute test suites more than a few times a day. Less-frequently-run tests are much harder to resolve when they fail. Large tests tend to be fragile and fail intermittently. They have runtime dependencies on external elements that are not controlled by the tests, and perhaps dependencies on the side effects of other tests.
  • Tests are disabled because they are unreliable, obsolete from lack of maintenance, or simply too slow to tolerate.
  • Bugs become commonplace just as they were before the team started doing automated testing. Disabling too many tests lowers coverage and the remaining tests become ineffective.
  • Value of automated testing is questioned--"we're no better off than before!" And yet the team still wastes time writing and (sometimes) running tests.
  • Team quits testing in disgust, or managers mandate a stop to testing. The experiment is deemed an expensive failure. Teams are now free to return to the good old days of rapid coding and expensive manual testing. As W. Edwards Deming said, "Let's make toast the American way: I'll burn, and you scrape."
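The suite-time slide in the second step above is worth making concrete. The per-test cost here is illustrative, in the 50-100ms range mentioned:

```python
# A 75 ms database round trip feels harmless in a single test...
per_test_seconds = 0.075

# ...but multiplied across a growing suite it dominates the build.
for test_count in (100, 1000, 3000):
    minutes = test_count * per_test_seconds / 60
    print(f"{test_count:>5} tests -> {minutes:.2f} minutes")
```

At 3000 tests the suite is pushing four minutes per run--well past the point where developers quietly stop running it.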
A sad progression, and it's real. Both of us (Tim and Jeff) have experiences confirming Rod and Ben's SCUMmy Test Progression. At each step along the progression it becomes harder to salvage the testing effort. Plenty of teams have started rough, but have recovered before reaching the bottom of the progression.

One possible lesson: it's cheaper to have no tests than to have bad tests. A better lesson: life is too short to settle for crummy tests.

The flip side of this card lists some therapeutic strategies for each downward step:
  • Only integration tests are written -> Learn unit testing. This also ties in with a better understanding of (and adherence to) the Single Responsibility Principle (SRP) and the Law of Demeter (LoD). Consider hiring a short-term coach to teach healthy habits in the team, or invest in better reading materials and the time to absorb the material. Attend a training session.
  • Overall suite time slows -> Break into "slow/fast" suites. Establish a time limit for the fast suite, and strive to keep the fast suite large and fast. Thousands of unit tests can easily run in under 10 seconds. Consider a tool like Infinitest to help keep tests running fast (but note that everything works better in a system that exhibits low coupling).
  • Tests are run less often -> Report test timeouts as build failures. The measures you institute will be arbitrary, but the key focus is on continually monitoring the health of your test suite. If the suite slows dramatically, developers will soon skimp on testing.
  • Tests are disabled -> Monitor coverage. New functionality should have coverage in the mid-to-high-90% range, and the rest of the system should exhibit stable or increasing coverage. System changes resulting in reductions in coverage should be rejected. Integration tests provide broad coverage, but you should either replace these with unit tests or elevate them to acceptance tests. You should otherwise delete disabled tests.
  • Bugs become commonplace -> Always write tests to "cover" a bug. These tests should always be written first, a la TDD. A defect is evidence of inadequate test coverage. Make sure you always track defects and understand the root cause of each and every one! Insist that these tests be fast tests.
  • Value of automated testing is questioned -> Commit to TDD, Acceptance Testing, Refactoring. Committing to TDD means learning how to do it properly--it is of low or negative value otherwise! Also note that many "regressions" are rooted in code duplication. Refactoring to eliminate duplication is critical for quality improvement. It is reasonable for a quality crisis to cause a reduction in the production of new features.
  • Team quits testing in disgust -> Don't wait until it's too late! If a team's gotten to this point of admitting defeat, it's often too late--management won't normally tolerate a second attempt at what they think is the same thing.
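The "slow/fast suite" remedy above can be sketched mechanically: partition tests by their measured duration against a fast-suite budget. The `split_suites` helper and the 50ms budget below are our own invented illustration, not a feature of any particular test framework:

```python
# Fast-suite time budget per test; anything slower gets exiled to
# the slow suite. 50 ms is an illustrative, not canonical, threshold.
FAST_BUDGET_SECONDS = 0.05

def split_suites(durations, budget=FAST_BUDGET_SECONDS):
    """durations: {test_name: seconds}. Returns (fast, slow) name lists."""
    fast = sorted(name for name, secs in durations.items() if secs <= budget)
    slow = sorted(name for name, secs in durations.items() if secs > budget)
    return fast, slow
```

In practice most test frameworks support this split directly--tag the slow tests and exclude them from the default run--so the point of the sketch is the policy, not the plumbing: keep the fast suite large, fast, and run constantly.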
In the adoption of unit testing, a few training sessions and a little time with a good coach can make all the difference in the world.

If you must self-coach, then ensure that team members don't view TDD as simply "management forcing programmers to test their code." Ensure that programmers understand the significant design and documentation benefits that TDD can deliver. Ensure they understand the scalability advantages of fast automated tests over manual testing.

A team that understands TDD and strives to attain the benefits it offers will avoid the Bad Test Death Spiral.