Why Put Big Ideas on Little Cards?

"... certainly it is excellent discipline for an author to feel that he must say all he has to say in the fewest possible words, or his reader is sure to skip them; and in the plainest possible words, or his reader will certainly misunderstand them..."

John Ruskin

Metaphor

Index Card

The font for this agile flash card is Lavos Handy

Of the XP practices, the practice of Metaphor seems to be the most widely argued and misunderstood. Simply put, it is a shared understanding of how the system should work. It serves as the oversimplification that "everyone knows" and against which changes can be discussed intelligently.

It may literally be a metaphor, such as a description of a hive of bees, an assembly line, or a blackboard, but it may also be a more literal description of the basic entities of the system and their interactions (i.e., not metaphorical at all). The shared understanding is the thing.

The metaphor is:
  • A shared theory of system operation from which additional features may be discussed intelligently. If the system is like a spider's web, then we can talk about strands being moved and the spider waiting. If it is like an assembly line, we can talk about stations near the end of the line and the constant feed of parts. If it is like an accounting system, we can talk about double-entry accounting. The importance of the metaphor is partly that it provides a back story for further development.
  • A model of the system's primary entities and flows can be sufficient shared understanding if the solution is relatively straightforward and easily enough digested. In this way, the business domain of a sufficiently transparent system may be its metaphor(!).
  • A shared system of names is provided by the metaphor. In a beehive system, workers and drones and queens may serve as class names or database tables. In a petri net, it may be tokens and places. I personally suggest a more straightforward naming style, but a metaphor can certainly be a rich source of meaningful names.
  • May be an actual metaphor or story that is well-known to the development team. If the system can be well-understood by comparison to a nest of termites or a storm system or trash collection then (as far as it goes) the metaphor can be at the core of all implementation decisions and requirements discussions.
  • May be replaced or rejected when the system, through evolution of an implementation, ceases to have a compelling resemblance to the original metaphor. Remember this when choosing names. The metaphor may still have paid for itself by simplifying earlier requirement and design discussions, and a new metaphor may have at least as much descriptive power. There is no requirement that one cling to an inappropriate metaphor.


The point of the metaphor is that it is supposed to make it easier to think about and discuss the implementation of a system relative to its requirements. Having a metaphor can certainly aid in development of new features, especially in the early days of a software project.

Rules for Commenting




Font is Lavos Handy

Have you ever noticed that almost every popular code highlighting scheme places comments in low-contrast grey-blue-on-white or dark-blue-on-black? Commenting has long been considered an important part of programming, yet in practice comments are written to be ignored. I think it is because they are typically added to satisfy bureaucratic requirements rather than to supplement the source code with information.

I used to be a huge supporter of comments. I loved to see glorious, rich, voluminous comments in all the code I read. I loved to write them. I thought that for a program to be 60% comments was a pretty good start. But as my teams began to write cleaner code, they sometimes would brag that the code was so clean that it almost didn't need comments. Now I want all of my code to go beyond almost.

I also have come to value vertical space in a file. I want it to be filled with working code instead of boilerplate and fluff. I hate scrolling through noise to find the code.

A few simple points to consider, and we can all have code that is more clear and gloriously free from noise and distraction.

  • Comments provide information that is not expressible in code. Comments are meta-commentary. Anything that we can express in the code, we are duty-bound to express as code. Comments must never repeat the code, or the version control system. They are never substitutes for expressive code. Comments may express things like copyright, sources of algorithms, and reasons one algorithm or data structure is chosen over a more obvious alternative.
  • Comments are to be deleted when they are obviated. Any comment that provides information available in the code must be deleted. We don't need comments to tell us that x++ increments x. Any comment that does not provide value is to be deleted immediately. This applies to passages of code that are commented out. If the system runs fine without them, we don't need those lines.
  • We should always strive to obviate comments. When we see a comment, we should take it as a challenge. If we can make the code express the content of the comment through refactorings (renames, extracting variables/methods/classes, inlining methods/variables, etc) then we should do so post-haste. If we can do it, then the code itself becomes smaller and simpler. And then the obviated comments can be deleted.
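As a small sketch of what "making the code express the content of the comment" can look like, here is an invented before-and-after (the BenefitsPolicy class, its names, and the eligibility rule are all hypothetical, chosen only to illustrate the extract-method refactoring):

```java
// Before: a comment compensates for unexpressive code.
//
//   // check if the employee is eligible for full benefits
//   if ((flags & HOURLY_FLAG) != 0 && age > 65) ...
//
// After: an intention-revealing method obviates the comment,
// which can then be deleted.
public class BenefitsPolicy {
    static final int HOURLY_FLAG = 0x1;
    static final int RETIREMENT_AGE = 65;

    static boolean isEligibleForFullBenefits(int flags, int age) {
        return (flags & HOURLY_FLAG) != 0 && age > RETIREMENT_AGE;
    }
}
```

The condition itself is unchanged; only a name has been added, and the code now says what the comment used to say.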
Inevitably someone will read this card as giving them license to stop writing comments, but that is a gross misunderstanding. If one takes a big steaming mound of poor code and removes the comments, it becomes an even more unmaintainable steaming pile of crummy code. There is a system and a balance that needs to be understood here.

This card doesn't tell us not to write comments, but rather to avoid writing code that needs them. Do I write comments? Yes, when I absolutely have to. But now I know when I have to.

ABCs of Story Estimation


Estimation, predicting the future, is an art at best. Backing estimation techniques with extensive mathematics and statistics may make upper management happy--it's always great if you can provide supporting data for something and distill it to a single number! (42, of course.) But such diligence only serves to bestow way too much legitimacy on a bunch of HAGs (remember WAGs and SWAGs? Well, the "H" is for "Hairy").

Agile teams use dozens of techniques for deriving story estimates. As my doctor once told me, so many "solutions" means that none of them is very good.


Predicting the future with estimates is immediately setting ourselves up for failure. When we estimate, we are always wrong. Always. It's just a matter of degree. But that's ok--we can get better over time, and the business always finds value in estimates. Repetition and increasing familiarity with things can dramatically reduce the amount of error.

Best advice: Start simply! These five guidelines should help get you on your way.


  • All contributors estimate - not just a representative developer or team lead, and most certainly not anyone from the customer team simply asking for the story (how bizarre!). Those who do the work get to have some say in how large the story is. It doesn't make sense any other way.

  • Break story down into tasks to verify scope - This exercise can help verify whether or not this is the right story, and often brings out discussions around things that will impact the estimate. Sometimes a quick task breakdown gets the team to concede that this is a much larger story than thought.

  • Come to Consensus using planning poker - James Grenning's wideband delphi-based technique is a fun, engaging, and expeditious way of deriving a lot of estimates from a good-sized team. Show the cards, and if all are in agreement, or close to it, pick the size and move on. If there's disagreement, debate for a few minutes, try again. If there's still disagreement, come back later and move on to the next story for now.

  • Decrease estimation granularity as story size increases - It's silly to think that we can estimate anything precisely. As the story gets larger, beyond a couple days, even a day variance on an estimate is too fine. "Is this 6 days or 7 days?" Gee, let's do it and find out. The Fibonacci sequence is a classic choice for agile teams. A story that the team thinks is a 6 is most certainly not going to be done in 5 days, so it gets bumped up to 8.

  • Estimate using relative sizes, not calendar days - There will be no end of debate on this, but sizes do not degrade over time, and they do not vary depending on developer capability and availability. Calendar days do. Mike Cohn's book Agile Estimating and Planning provides many good additional insights on the differences between the two.
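The "decrease granularity" guideline amounts to a simple bump-up rule, sketched below (the StoryPoints class and its size ladder are invented for illustration; teams pick their own scale):

```java
public class StoryPoints {
    // A Fibonacci-style ladder of allowable story sizes.
    static final int[] SIZES = {1, 2, 3, 5, 8, 13, 21};

    // Bump a raw gut-feel estimate up to the nearest allowable size,
    // so a "6" becomes an 8 rather than pretending to day-level precision.
    static int roundUp(int rawEstimate) {
        for (int size : SIZES)
            if (size >= rawEstimate)
                return size;
        return SIZES[SIZES.length - 1]; // off the scale: split the story instead
    }
}
```

The gaps between sizes widen as stories get larger, which is exactly the point: the bigger the story, the less precision the estimate deserves.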

TDD Process Smells


This list of "process smells" focuses on execution of the practice of test-driven development (TDD)--not on what the individual tests look like. There are no doubt dozens of similar smells; following the rule of 7 (+/- 2), I've chosen the smells I see most frequently.
  • Using code coverage as a goal. If you practice test-driven development, you should be getting close to 100% coverage on new code without even looking at a coverage tool. Existing code, that's another story. How do we shape up a system with low coverage? Insisting solely on a coverage number can lead to a worse situation: Coverage comes up quickly by virtue of lots of poorly-factored tests; changes to the system break lots of tests simultaneously; some tests remain broken, destroying most of the real value in having an automated test suite.
  • No green bar in the last ~10 minutes. One of the more common misinterpretations of TDD is around test size. The goal is to take the shortest step that will generate actionable feedback. Average cycle times of ten minutes or more suggest that you're not learning what it takes to incrementally grow a solution. If you do hit ten minutes, learn to stop, revert to the last green bar, and start over, taking smaller steps.
  • Not failing first. Observing the test fail first affirms that the test can actually fail, and that your assumptions about the code's current behavior are correct. One of the best ways to waste time is to skip getting red bars with each TDD cycle. I've encountered numerous cases where developers ran tests under a continual green bar, yet meanwhile their code was absolutely broken. Sometimes it's as dumb as running tests against the wrong thing in Eclipse.
  • Not spending comparable amounts of time on refactoring step. If you spend five minutes on writing production code, you should spend several minutes refactoring. Even if your changes are "perfect," take the opportunity to look at the periphery and clean up a couple other things.
  • Skipping something too easy (or too hard) to test. "That's just a simple getter, never mind." Or, "that's an extremely difficult algorithm, I have no idea how to test it, I'll just give up." Simple things often mask problems; maybe that's not just a "simple getter" but a flawed attempt at lazy initialization. And difficult code is often where most of the problems really are; what value is there in only testing the things that are easy to test? Changes are most costly in complex areas; we look for tests to clamp down on the system and help keep its maintenance costs reasonable.
  • Organizing tests around methods, not behavior. This is a rampant problem with developers first practicing TDD. They'll write a single testForSomeMethod, provide a bit of context, and assert something. Later they'll add to that same test code that represents calling someMethod with different data. Of course a comment will explain the new circumstance. This introduces risk of unintentional dependencies between the cases; it also makes things harder to understand and maintain.
  • Not writing the tests first! By definition, that's not TDD, yet novice practitioners easily revert to the old habit of writing production code without a failing test. So what if they do? Take a look at Why TAD Sucks for some reasons why you want to write tests first.
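To make the "behavior, not methods" smell concrete, here is an invented sketch of the alternative to one sprawling testForSomeMethod (the PatronFines class and its names are hypothetical; JUnit-style tests are shown as plain methods so the sketch stays self-contained):

```java
public class PatronFines {
    private int balance = 0;

    void applyFine(int cents) { balance += cents; }
    void payFine(int cents)   { balance -= Math.min(cents, balance); }
    int balance()             { return balance; }

    // One behavior per test, named for the behavior -- no comment needed
    // to explain which "circumstance" each case covers.
    static boolean applyFineIncreasesBalance() {
        PatronFines patron = new PatronFines();
        patron.applyFine(10);
        return patron.balance() == 10;
    }

    // A separate test, with its own context, instead of more code and a
    // comment piled into the first one.
    static boolean paymentBeyondBalanceLeavesZero() {
        PatronFines patron = new PatronFines();
        patron.applyFine(10);
        patron.payFine(25);
        return patron.balance() == 0;
    }
}
```

Because each test builds its own context, there are no unintentional dependencies between cases, and each test name documents one behavior.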

Card, Conversation, Confirmation



Source: Ron Jeffries / Joshua Kerievsky, Essential XP: Card, Conversation, Confirmation

Too often, the story card is the thing that agile teams elevate. Some teams insist on strict verbiage for what gets written on the card, or on writing the stories very neatly (sometimes even printing them), or on strictly arranging cards on a peg board and adding rules about who can touch the cards and move them. Some teams even go as far as to spend hundreds of thousands of dollars on a tool to track the cards.


But they're only cards! The card is perhaps one of the least important things in agile software development. Wow, that seems like an odd statement for me to make--this whole Agile in a Flash project is about the power of index cards. I carry cards with me at all times. I often depend on their power and flexibility to help expedite various meetings (planning, estimating, brainstorming, etc.). The cards are a very useful tool, but they aren't what's important. The cards ain't the thing!


  • Card - Index cards are little physical things on which you can jot only a few things. A card does not capture all details about what should be built. It is instead a reminder, a promise for subsequent communication that must take place. Ron refers to the card as a "token," a word choice that I like a lot: The card isn't the thing--it's a placeholder for the real thing.

  • Conversation - So what is the real thing? Collaborating to build and deliver software that meets the customer's needs! To succeed, we must converse continually. We must continue to negotiate supporting specifics for a story until all parties are in agreement and the software is delivered.

  • Confirmation - The specifics of what we're building must ultimately be clear to the customer and the team who will deliver. The customer captures these criteria in acceptance tests, designed to exercise the system in enough ways to demonstrate that the feature really works. When the set of acceptance tests for a given card all pass, it confirms to the customer that the story is truly done--the software does what they asked.


I think you could run a very successful agile project without using a single index card. But that's not what I'm recommending. The key thing is to understand what the cards represent.

Arrange-Act-Assert



Source: William C. Wake


In a unit test, three things typically need to happen: you must first create a context, then execute the thing that you're trying to verify, and finally verify that what was executed actually behaved as expected. So if, for example, I need to verify that the system properly applies a fine to a library patron for returning a delinquent book, I:


  • arrange the context by creating a new patron object and setting its fine balance to $0

  • act by executing the applyFine method with an argument of $0.10

  • assert that the patron's balance is $0.10, thus verifying that the fine was applied correctly to the patron


The most direct result of thinking about tests in this manner is its impact on the code's visual layout:

@Test
public void applyFine() {
    Patron patron = new Patron();
    patron.setBalance(0);

    patron.applyFine(10);

    assertEquals(10, patron.fineBalance());
}

Note the blank lines separating the three arrange, act, and assert sections in the test applyFine. I've even seen some test code with guiding comments to make the section names explicit:

// arrange
...
// act
...
// assert
...

Honestly, it's idiomatic--once this organization is explained, the intent is obvious, and thus it's only a waste of time and space to include such low-value comments.


Emphasizing AAA (Arrange-Act-Assert) played a significant role in the evolution of the Rhino Mocks mocking framework for .NET. Ayende Rahien introduced a new interface for Rhino Mocks 3.5, one that dispenses with the need to call a "verify mock" method after assertions are complete. This allows for cleaner, AAA-compliant tests. In contrast, the older "classic" syntax for record-replay mock tools generates "mock clutter" across the test--record, replay, and verify calls are littered through the test.


Note that the form can vary. If you're verifying a static function, there may be no need to Arrange anything. And you may also want to introduce preconditions that verify things are in an appropriate state before the Act step--you could end up with Arrange-Assert-Act-Assert, or even Assert-Arrange-Act-Assert. And it's also possible for Act and Assert to be combined effectively in a single line of code. I don't believe there's value in being dogmatic about the organizational form, but there is value in striving to ensure that the intent of the test is clear. Keeping sections clear and visually separate from each other helps!
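As a sketch of one of those variants, here is an Arrange-Assert-Act-Assert version of the hypothetical Patron example from above, where a precondition check pins down the starting state before the Act step (the class and method names are invented for illustration):

```java
public class Patron {
    private int balance = 0;

    void setBalance(int cents) { balance = cents; }
    void applyFine(int cents)  { balance += cents; }
    int fineBalance()          { return balance; }

    static int applyFineToCleanSlate() {
        Patron patron = new Patron();            // arrange
        patron.setBalance(0);

        if (patron.fineBalance() != 0)           // assert the precondition
            throw new AssertionError("starting balance was not clean");

        patron.applyFine(10);                    // act

        return patron.fineBalance();             // the caller asserts on this
    }
}
```

The precondition assert costs one line but makes explicit an assumption that the plain Arrange-Act-Assert form leaves implicit.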


Arrange-Act-Assert is a simple concept, and probably only adds marginal value. But it costs nothing to practice, and it gets us that much closer to being a community willing to agree on some standards. What I also like about memorable acronyms like AAA is that they provide a consistent way to communicate simple ideas, which often don't have concise and consistent names.

Stay


I was reading an article by Cherie Berklee at Payscale on the topic of surviving layoffs (having not survived one myself not too long ago) and was inspired by some of the points. I recommend the article to those who have been laid off, and to those who hope never to be.
  • Stay Positive is an inspiring bit of advice which no doubt has origins in antiquity, but which was all the more powerful in Berklee's article. I have not always followed this advice, and even had disregarded it for some time in my career and was the worse for it. Staying positive (not fecklessly hopeful or naive) when things are difficult is not just good behavior. It is leadership.
  • Stay Engaged. Though I don't have a reference, I seem to remember Robert Martin talking about professionals, and Craftsmen in particular, being more highly engaged (attentive, focused) than your average code monkey. It is sometimes a struggle to stay engaged, which is why we have things like the Pomodoro technique and pair programming to keep us focused.
  • Stay Professional should be the motto of the Craftsman movement (if I may call it a movement). Always do your best work, and always look for ways to raise the bar on "best".
We should be mindful of these points well before our companies start to have financial trouble. These are things that make us better employees and better people in general. I've since hand-written these three points on a sticky-backed index card stuck to the lower edge of my second monitor. It is there to always remind me of simple ways to be more effective and thereby more valuable.

Ten Principles for Agile Testers



Source: Crispin, Lisa, and Gregory, Janet. Agile Testing: A Practical Guide for Testers and Agile Teams, Addison-Wesley, 2009.


Lisa Crispin and Janet Gregory wrote the recently-published Addison-Wesley book Agile Testing, a thick tome that fills a long-standing gap in the agile literature. Before it, detailed guidance for testing in an agile environment was sparse. With Agile Testing and a couple other fairly recent books (2007's Test Driven: TDD and Acceptance TDD for Java Developers by Lasse Koskela and Bridging the Communication Gap: Specification by Example and Agile Acceptance Testing by Gojko Adzic, 2009), we now have a good foundation for comprehensive agile testing knowledge.



The ten principles that the authors published should sound familiar. Four of these principles directly cover XP's four values of feedback, communication, courage, and simplicity; these and the remainder are also largely echoed in the agile manifesto and its supporting principles. So what's relevant about how these ten principles apply to agile testing?


  • Provide continuous feedback - The agile tester is central to providing the team with feedback: Acceptance criteria are the most concrete way to measure positive progress on an agile project. Tests also help identify issues and capture decisions on appropriate future direction.

  • Deliver value to the customer - The insistence on acceptance tests is a "reality check" on scope creep. Acceptance tests help us all understand what it means to realize a customer's needs.

  • Enable face-to-face communication - Testers can often be the ones on a team responsible for bridging the communication gap between customers (BAs, product owners, etc.) and programmers. A tester can be the one who physically brings these people together, as well as the one who drives derivation of a common language between the two parties.

  • Have courage - One of the larger challenges of agile is in sustaining a fast-paced iterative environment, where every two weeks we need to ship quality software. This challenge demands significant courage. Yet the irony is that we also need to understand that iterations afford us opportunities to learn how to fail and adapt--something that can require an even heavier dose of courage!

  • Keep it simple - Agile testers can help push back against an insistence on overly-elaborate features. Testers can also help the customer understand how to incrementally deliver value. They must learn an appropriate balance of iterative testing-- just enough to provide the right confidence in delivering software.

  • Practice continuous improvement - A key element of using iterations is to allow for learning to take place. Testers should be part of retrospectives (and if you're not consistently adapting based on the results of retrospectives, you're not agile enough.) Testers should also treat their career as a profession by continually learning more about testing practices, tools, and the system itself.

  • Respond to change - Agile testing is dramatically different in that there are few true "cutoff points"--things keep changing and thus must be continually re-tested. This requires automation! The agile tester learns to cope with the customer changing his or her mind from iteration to iteration, and correspondingly learns how to incrementally flesh out necessary testing specifications.

  • Self-organize - In a true agile team, everyone has the capability to act as a tester. Agile teams know how to shift focus as needed; from time to time, for example, it may be prudent for programmers to turn their attention toward helping verify a "done" but not "done done" feature.

  • Focus on people - Testers are often at the bottom of the totem pole on a non-agile software development team. Work is thrown at them, their available slice of time is continually squished, and programmers often look down on them. In an agile team, everyone shares the responsibility for ensuring that we are building a quality product. Agile testers are key in bringing their testing expertise to the team.

  • Enjoy - The ability to help drive the process and be a true, equal contributor to a team can be extremely gratifying for an agile tester.


The use of acceptance tests to drive agile development is perhaps one of the most critical needs for a team that wants to sustain success. With that in mind, it's amazing that it's taken over ten years since the explosion of XP for us to begin to fully document just what agile testing is.

Make It Work, Make It Right, Make It Fast

Today's card is our first-ever reader contribution, from Brian DiCroce.

Our font today is called "j.d." See bottom of post for details.



Content Source: Kent Beck at C2.com wiki

When working on a feature, let yourself be driven by this little principle to actually get things done right.

Make It Work
Program with your mind focused on the feature's basic behavior. Just make the feature work. Ensure that all the tests for that feature pass.

Make It Right
Now that you have your feature working (all tests passing), focus on refactoring the code. Tests will provide the necessary feedback. Clear up structural and aesthetic issues, remove duplication, rename variables, etc.

Make It Fast
Once the tests are passing and the code is clean, you can focus on tweaking its performance. Use a profiler, of course. Once again, you can feel confident in your changes because the tests will ensure that functionality is not broken.
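A minimal, invented sketch of the first two steps (the WordCount class and its behavior are hypothetical; "make it fast" would follow only with profiler data in hand):

```java
public class WordCount {
    // Make it work: the first version that passes the tests, warts and all.
    static int countNaive(String text) {
        int count = 0;
        String[] parts = text.trim().split("\\s+");
        for (int i = 0; i < parts.length; i++)
            if (!parts[i].isEmpty())
                count++;
        return count;
    }

    // Make it right: same behavior, with the noise refactored away.
    // The tests stay green through the whole cleanup.
    static int count(String text) {
        String trimmed = text.trim();
        return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
    }

    // Make it fast would come last, and only if a measurement showed
    // that this method actually matters.
}
```

Both versions answer the same tests; the refactored one is simply the version you would want to maintain.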

I abide by this principle whenever I'm developing a feature. It helps shift my focus toward specific development efforts when working on a single feature. The initial effort goes toward getting a feature working for quick feedback from users (or potential users). Improvements can be made after the fact, making the code faster and smaller.

This principle is easily applied in conjunction with TDD. The test-first approach helps me drive the design or API of the feature, while the pre-existing tests ensure that the rest of the system continues to function correctly. Refactoring can be applied naturally and at will as the new tests provide a safety net for the code. With the functionality assured, performance improvements may be added with confidence.

If your quick-running, hand-optimized system does not do what it is supposed to do, it is not worth investing time and resources in making it any faster. After making sure that the feature works correctly, you can focus on its performance. Once again, the tests act as a sidekick: they continuously provide feedback on whether your changes broke the system.

About the font:
"j.d." is a windows/macintosh truetype and postscript type 1 font, handcrafted in the Emerald City by Steven J. Lundeen, Emerald City Fontwerks, Seattle, Washington, USA.
Generously released as freeware, and copyrighted ©2001.