Value Of Index Cards



Typographical search continues. Font is Marker SD.

Source: Langr, Jeff and Ottinger, Tim

Index cards are an important part of the agile experience. We use them as tokens for user stories, for task breakdowns (sometimes), for backlog generation, for estimation, for reminders, etc. Here at Agile In A Flash, we use them as cheat sheets and motivational posters. There is a surprising amount of value to having an agile card posted in your workspace. Print one of ours out, and try it for a month.

Index cards are great. Why?
  • They are low-tech and high-touch. You can put your hands on them, hand them to other people, stick them on the wall, tape them together, sort them on the desktop, etc. Having a tactile device such as an index card can make various workspace rituals possible (like moving a card from "in progress" to "done").
  • They can be dynamically reorganized to order them by point cost, or functional area, or assigned teams, or originators. They can have connections one did not intend originally. This puts them a step above most software programs -- they are routinely re-purposed on the fly.
  • They can hold schema-free markup. For instance, the security or documentation team can write notes on them. Developers and product owners can mark them up with issues or orderings. They can have priorities added to them. They might have none of the above, only a two-word name. They can have pictures, and notes with arrows. If you have a card and a marker, your only constraints are space and penmanship.
  • They are reminders for much bigger ideas. Being small, they can hold very little content. Yet they can have vast evocative powers. Take the agile flash cards on this site for instance. Once you know a principle or practice, a card bearing as little as three words can help you do your job better.
  • They are extremely portable, being small physical items. You don't have to copy them to a USB drive or mail them to your teammates. You can use them on the bus, train, or hiking trail. They don't mind being without web access and are immune to OS and browser incompatibilities. Wherever you are, they are the same.
  • They are inexpensive and replaceable. Of course, you never want to be without your Agile In A Flash (tm) cards, but most other cards can be lost without any real cost. If they are important, they can be easily reproduced. If not, maybe you didn't really need them. Any day we can tear up a story card or two is a good day. They are not heirlooms to be maintained for the life of a project, and that makes them even better.

Shu-Ha-Ri


Images from Wikimedia

Font: j.d.

References: Takamura, Yukiyoshi: Teaching and Shu-Ha-Ri

Cleanup & edits: thanks Tim!

Kata are patterns of movements practiced by martial arts students; the moves act as the basis for actual application, i.e. fighting or self-defense. The notion of executing kata is analogous to a musical student practicing scales or finger exercises. Note that many professional musicians still practice scales, and professional martial artists still use kata.

Learning music, martial arts, the game of go, new software development techniques, or anything that requires extensive practice to master, can be broken into three levels, or stages, known as shu-ha-ri. These three words are (according to Ward's wiki) roughly translated as "hold", "break", and "leave."

  • shu - In first learnings, a student strictly follows the instructor, replicating each move as precisely as possible. Per the late Takamura, "the student must first resign himself and his ego to a seemingly random series of repetitious exercises." Kata at this level (known as shoden) are specifically designed to get students to focus on basic movements. Some shoden are even designed to create "physical discomfort," in order to reinforce the student's need to master focus. Ultimately, the goal of shu is to ingrain basic moves in the student, so that they do not think twice when it comes time to apply them. Wax on, wax off. Or, "Red, green, refactor."

  • ha - At this secondary level, the instructor begins to grant the student some leeway to experiment and diverge from strict application (but not to the point where the kata no longer resemble the originals). The goal is to open the students' minds to begin to recognize the true usefulness of the thing being mastered. The ha level is where we expect to start seeing light bulbs click on in people's heads. After practicing strict TDD for a while, students start to see some of the power of being able to move their code bits around. They learn when not to test something because it "could not possibly break." Letting students diverge also allows them to note the value in things like consistent test naming, taking smaller steps, or exploring more sophisticated forms like BDD.

  • ri - At the ri level, the student no longer views the rules as constrictions, but instead as having been the stepping stones to learning and freedom. In fact, the student no longer thinks about rules. He or she has ingrained a set of techniques that can be applied at a moment's notice, but has also learned to specialize that knowledge with additional experience-based elements. In some cases, this level may find the master abandoning Red-Green-Refactor to even greater success, for short periods of time. Don't try this before you're a master!

Rules were meant to be broken, but only after you've followed them for a while, to the point of intimate or intrinsic understanding of how they benefit you. Eventually, the essence of the rule has been internalized and the rule (as a rule) has been obviated. At that point, you are able to operate as if the rule were common sense. You are then free to deal with the repercussions of not following the rules.
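The "Red, green, refactor" chant from the shu stage can be sketched as a single TDD micro-cycle. The TinyStack class and its test below are hypothetical stand-ins (plain Java with a boolean check in place of a JUnit assertion), not code from the original post:

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical class grown one TDD micro-cycle at a time.
class TinyStack {
    private final List<Integer> items = new ArrayList<>();

    // Step 2 (green): just enough code to pass the failing test below.
    void push(int value) { items.add(value); }

    int pop() { return items.remove(items.size() - 1); }

    boolean isEmpty() { return items.isEmpty(); }
}

class TinyStackTest {
    // Step 1 (red): write this test first and watch it fail before
    // push/pop exist. Step 3 (refactor): with the bar green, clean up
    // names and structure, rerunning the test after each change.
    static boolean lastPushedIsFirstPopped() {
        TinyStack stack = new TinyStack();
        stack.push(1);
        stack.push(2);
        return stack.pop() == 2 && stack.pop() == 1 && stack.isEmpty();
    }
}
```

At the shu level the point is the drill itself: follow the three steps exactly, every time, until they require no thought.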


CRISP



Brad Appleton provided this as a reader submission. Thanks Brad!

Mike Clark gave a 2004 presentation on Pragmatic Project Automation that included a description of what he called the "CRISP" criteria for builds. There is a similar description in the 2007 presentation “All Builds are Good”, and a more detailed description in a 2007 CT-SPIN presentation on project automation:

Complete:
• Build from scratch and independently without human intervention.

Repeatable:
• Must be able to create exactly the same build at a later time.
• Store build scripts in source control.

Informative:
• "Detector of unexpected changes".
• Provide information on why a build failed.

Scheduled:
• Let the builds run automatically.

Portable:
• Build should be runnable from any system (same platform), not just that of the developer.
• For cross-platform software, it should build on all platforms.

Why Put Big Ideas on Little Cards?

"... certainly it is excellent discipline for an author to feel that he must say all he has to say in the fewest possible words, or his reader is sure to skip them; and in the plainest possible words, or his reader will certainly misunderstand them..."

John Ruskin

Metaphor

Index Card

The font for this agile flash card is Lavos Handy

Of the XP practices, the practice of Metaphor seems to be the most widely argued and misunderstood. Simply put, it is a shared understanding of how the system should work. It serves as the oversimplification that "everyone knows" and against which changes can be discussed intelligently.

It may literally be a metaphor, such as a description of a hive of bees or an assembly line or blackboard, but it may also be a more literal description of the basic entities of the system and their interactions (i.e., not metaphorical at all). The shared understanding is the thing.

The metaphor is:
  • A shared theory of system operation from which additional features may be discussed intelligently. If the system is like a spider's web, then we can talk about strands being moved and the spider waiting. If it is like an assembly line, we can talk about stations near the end of the line and the constant feed of parts. If it is like an accounting system, we can talk about double-entry accounting. The importance of the metaphor is partly that it provides a back story for further development.
  • Possibly just a model of the system's primary entities and flows, which can be sufficient shared understanding if the solution is relatively straightforward and easily enough digested. In this way, the business domain of a sufficiently transparent system may be its metaphor(!).
  • A shared system of names. In a beehive system, workers and drones and queens may serve as class names or database tables. In a petri net, it may be tokens and places. I personally suggest a more straightforward naming style, but a metaphor can certainly be a rich source of meaningful names.
  • Possibly an actual metaphor or story that is well-known to the development team. If the system can be well understood by comparison to a nest of termites or a storm system or trash collection, then (as far as it goes) the metaphor can be at the core of all implementation decisions and requirements discussions.
  • Replaceable or rejectable when the system, through evolution of an implementation, ceases to bear a compelling resemblance to the original metaphor. Remember this when choosing names. The metaphor may still have paid for itself by simplifying earlier requirement and design discussions, and a new metaphor may have at least as much descriptive power. There is no requirement that one cling to an inappropriate metaphor.


The point of the metaphor is that it is supposed to make it easier to think about and discuss the implementation of a system relative to its requirements. Having a metaphor can certainly aid in development of new features, especially in the early days of a software project.
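As a sketch of how a metaphor can supply a shared system of names, suppose a team adopts the beehive metaphor mentioned above. The Hive and Worker classes below are hypothetical illustrations of metaphor-driven naming, not code from the original post:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the beehive metaphor supplies the vocabulary
// for a work-distribution system. A Hive hands tasks to Workers.
class Worker {
    private final List<String> completed = new ArrayList<>();

    void perform(String task) { completed.add(task); }

    int tasksDone() { return completed.size(); }
}

class Hive {
    private final List<Worker> workers = new ArrayList<>();
    private int next = 0;

    void join(Worker worker) { workers.add(worker); }

    // Round-robin dispatch: with the metaphor in place, "which worker
    // gets the next task?" becomes a natural question to discuss with
    // the customer in the metaphor's own terms.
    void dispatch(String task) {
        workers.get(next % workers.size()).perform(task);
        next++;
    }
}
```

Whether names like these beat plainer domain terms is a team decision; the value is that everyone means the same thing by "worker" in both conversation and code.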

Rules for Commenting




Font is Lavos Handy

Have you ever noticed that almost every popular code highlighting scheme places comments in low-contrast grey-blue-on-white or dark-blue-on-black? Commenting has long been considered an important part of programming, yet in practice comments are written to be ignored. I think it is because they are typically added to satisfy bureaucratic requirements rather than to supplement the source code with information.

I used to be a huge supporter of comments. I loved to see glorious, rich, voluminous comments in all the code I read. I loved to write them. I thought that for a program to be 60% comments was a pretty good start. But as my teams began to write cleaner code, they sometimes would brag that the code was so clean that it almost didn't need comments. Now I want all of my code to go beyond almost.

I also have come to value vertical space in a file. I want it to be filled with working code instead of boilerplate and fluff. I hate scrolling through noise to find the code.

A few simple points to consider, and we can all have code that is more clear and gloriously free from noise and distraction.

  • Comments provide information that is not expressible in code. Comments are meta-commentary. Anything that we can express in the code, we are duty-bound to express as code. Comments must never repeat the code, or duplicate information better kept in the version control system. They are never substitutes for expressive code. Comments may express things like copyright, sources of algorithms, and reasons one algorithm or data structure is chosen over a more obvious alternative.
  • Comments are to be deleted when they are obviated. Any comment that provides information available in the code must be deleted. We don't need comments to tell us that x++ increments x. Any comment that does not provide value is to be deleted immediately. This applies to passages of code that are commented out. If the system runs fine without them, we don't need those lines.
  • We should always strive to obviate comments. When we see a comment, we should take it as a challenge. If we can make the code express the content of the comment through refactorings (renames, extracting variables/methods/classes, inlining methods/variables, etc.) then we should do so post-haste. If we can do it, then the code itself becomes smaller and simpler. And then the obviated comments can be deleted.
Inevitably someone will read this card as giving them license to stop writing comments, but that is a gross misunderstanding. If one takes a big steaming mound of poor code and removes the comments, it becomes an even more unmaintainable steaming pile of crummy code. There is a system and a balance that needs to be understood here.

This card doesn't tell us not to write comments, but rather to avoid writing code that needs them. Do I write comments? Yes, when I absolutely have to. But now I know when I have to.
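As a small, hypothetical illustration of obviating a comment through refactoring: the commented condition below is replaced by an intention-revealing method name, after which the comment can simply be deleted. The Employee class and its rule are invented for the example:

```java
// Hypothetical sketch of obviating a comment by refactoring.
class Employee {
    final int age;
    final int yearsOfService;

    Employee(int age, int yearsOfService) {
        this.age = age;
        this.yearsOfService = yearsOfService;
    }

    // Before: the comment restates what the condition already encodes.
    boolean checkBefore() {
        // eligible for retirement if 65 or older, or 30+ years served
        return age >= 65 || yearsOfService >= 30;
    }

    // After: the extracted, well-named method says it in code,
    // so the comment above becomes deletable noise.
    boolean isEligibleForRetirement() {
        return age >= 65 || yearsOfService >= 30;
    }
}
```

The code got no longer, the information got no poorer, and one more grey-blue line of prose disappeared from the file.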

ABCs of Story Estimation


Estimation, predicting the future, is an art at best. Backing estimation techniques with extensive mathematics and statistics may make upper management happy--it's always great if you can provide supporting data for something and distill it to a single number! (42, of course.) But such diligence only serves to bestow way too much legitimacy on a bunch of HAGs (remember WAGs and SWAGs? Well, the "H" is for "Hairy").

Agile teams use dozens of techniques for deriving story estimates. As my doctor once told me, so many "solutions" means that none of them is very good.


Predicting the future with estimates is immediately setting ourselves up for failure. When we estimate, we are always wrong. Always. It's just a matter of degree. But that's ok--we can get better over time, and the business always finds value in estimates. Repetition and increasing familiarity can dramatically reduce the amount of error.

Best advice: Start simply! These five guidelines should help get you on your way.


  • All contributors estimate - not just a representative developer or team lead, and most certainly not anyone from the customer team simply asking for the story (how bizarre!). Those who do the work get to have some say in how large the story is. It doesn't make sense any other way.

  • Break story down into tasks to verify scope - This exercise can help verify whether or not this is the right story, and often brings out discussions around things that will impact the estimate. Sometimes a quick task breakdown gets the team to concede that this is a much larger story than thought.

  • Come to Consensus using planning poker - James Grenning's wideband delphi-based technique is a fun, engaging, and expeditious way of deriving a lot of estimates from a good-sized team. Show the cards, and if all are in agreement, or close to it, pick the size and move on. If there's disagreement, debate for a few minutes, try again. If there's still disagreement, come back later and move on to the next story for now.

  • Decrease estimation granularity as story size increases - It's silly to think that we can estimate anything precisely. As the story gets larger, beyond a couple of days, even a one-day variance in an estimate is too fine. "Is this 6 days or 7 days?" Gee, let's do it and find out. The Fibonacci sequence is a classic choice for agile teams. A story that the team thinks is a 6 is most certainly not going to be done in 5 days, so it gets bumped up to 8.

  • Estimate using relative sizes, not calendar days - There will be no end of debate on this, but sizes do not degrade over time, and they do not vary depending on developer capability and availability. Calendar days do. Mike Cohn's book Agile Estimating and Planning provides many good additional insights on the differences between the two.
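The "bump it up" rule for decreasing granularity can be sketched as a tiny helper. The scale below is the common planning-poker deck (a Fibonacci-like sequence), and the class and method names are my own invention, not from the original post:

```java
class StorySizer {
    // Common planning-poker scale (a Fibonacci-like sequence).
    private static final int[] SCALE = {1, 2, 3, 5, 8, 13, 20};

    // A raw gut-feel estimate is always bumped UP to the next
    // available size: a "6" becomes an 8, never a 5.
    static int toStoryPoints(int rawEstimate) {
        for (int size : SCALE) {
            if (size >= rawEstimate) return size;
        }
        throw new IllegalArgumentException("too large: split the story");
    }
}
```

Anything past the top of the scale is not a story to be estimated but a story to be split.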

TDD Process Smells


This list of "process smells" focuses on execution of the practice of test-driven development (TDD)--not on what the individual tests look like. There are no doubt dozens of similar smells; following the rule of 7 (+/- 2), I've chosen the smells I see most frequently.
  • Using code coverage as a goal. If you practice test-driven development, you should be getting close to 100% coverage on new code without even looking at a coverage tool. Existing code, that's another story. How do we shape up a system with low coverage? Insisting solely on a coverage number can lead to a worse situation: Coverage comes up quickly by virtue of lots of poorly-factored tests; changes to the system break lots of tests simultaneously; some tests remain broken, destroying most of the real value in having an automated test suite.
  • No green bar in the last ~10 minutes. One of the more common misinterpretations of TDD is around test size. The goal is to take the shortest step that will generate actionable feedback. Average cycle times of ten minutes or more suggest that you're not learning what it takes to incrementally grow a solution. If you do hit ten minutes, learn to stop, revert to the last green bar, and start over, taking smaller steps.
  • Not failing first. Watching a test fail first affirms that your assumptions about it are correct. One of the best ways to waste time is to skip getting red bars with each TDD cycle. I've encountered numerous cases where developers ran tests under a continual green bar, yet meanwhile their code was absolutely broken. Sometimes it's as dumb as running tests against the wrong thing in Eclipse.
  • Not spending comparable amounts of time on refactoring step. If you spend five minutes on writing production code, you should spend several minutes refactoring. Even if your changes are "perfect," take the opportunity to look at the periphery and clean up a couple other things.
  • Skipping something too easy (or too hard) to test. "That's just a simple getter, never mind." Or, "that's an extremely difficult algorithm, I have no idea how to test it, I'll just give up." Simple things often mask problems; maybe that's not just a "simple getter" but a flawed attempt at lazy initialization. And difficult code is often where most of the problems really are; what value is there in only testing the things that are easy to test? Changes are most costly in complex areas; we look for tests to clamp down on the system and help keep its maintenance costs reasonable.
  • Organizing tests around methods, not behavior. This is a rampant problem with developers first practicing TDD. They'll write a single testForSomeMethod, provide a bit of context, and assert something. Later they'll add to that same test code that represents calling someMethod with different data. Of course a comment will explain the new circumstance. This introduces risk of unintentional dependencies between the cases; it also makes things harder to understand and maintain.
  • Not writing the tests first! By definition, that's not TDD, yet novice practitioners easily revert to the old habit of writing production code without a failing test. So what if they do? Take a look at Why TAD Sucks for some reasons why you want to write tests first.
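The method-versus-behavior smell can be sketched like this. The Account class and test names are hypothetical, and plain boolean checks stand in for a test framework's assertions:

```java
// Hypothetical class under test.
class Account {
    private int balance = 0;

    void deposit(int amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    int balance() { return balance; }
}

class AccountTests {
    // Smell: one bloated testDeposit covering every case of the
    // method, with cases separated only by comments. Instead, write
    // one small test per behavior, each named for the circumstance
    // it verifies:

    static boolean depositIncreasesBalance() {
        Account account = new Account();
        account.deposit(50);
        return account.balance() == 50;
    }

    static boolean depositRejectsNonPositiveAmounts() {
        Account account = new Account();
        try {
            account.deposit(0);
            return false; // should have thrown
        } catch (IllegalArgumentException expected) {
            return account.balance() == 0;
        }
    }
}
```

Each test now has its own context and its own failure message, and no case can accidentally depend on state left behind by another.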

Card, Conversation, Confirmation



Source: Ron Jeffries, Essential XP: Card, Conversation, Confirmation; Joshua Kerievsky

Too often, the story card is the thing that agile teams elevate. Some teams insist on strict verbiage for what gets written on the card, or on writing the stories very neatly (sometimes even printing them), or on strictly arranging cards on a peg board and adding rules about who can touch the cards and move them. Some teams even go as far as to spend hundreds of thousands of dollars on a tool to track the cards.


But they're only cards! The card is perhaps one of the least important things in agile software development. Wow, that seems like an odd statement for me to make--this whole Agile in a Flash project is about the power of index cards. I carry cards with me at all times. I often depend on their power and flexibility to help expedite various meetings (planning, estimating, brainstorming, etc.). The cards are a very useful tool, but they aren't what's important. The cards ain't the thing!


  • Card - Index cards are little physical things on which you can jot only a few things. A card does not capture all details about what should be built. It is instead a reminder, a promise for subsequent communication that must take place. Ron refers to the card as a "token," a word choice that I like a lot: The card isn't the thing--it's a placeholder for the real thing.

  • Conversation - So what is the real thing? Collaborating to build and deliver software that meets the customer's needs! To succeed, we must converse continually. We must continue to negotiate supporting specifics for a story until all parties are in agreement and the software is delivered.

  • Confirmation - The specifics of what we're building must ultimately be clear to the customer and the team who will deliver. The customer captures these criteria in acceptance tests, designed to exercise the system in enough ways to demonstrate that the feature really works. When this set of acceptance tests for a given card all pass, it confirms to the customer that the story is truly done--the software does what they asked.


I think you could run a very successful agile project without using a single index card. But that's not what I'm recommending. The key thing is to understand what the cards represent.

Arrange-Act-Assert



Source: William C. Wake


In a unit test, three things typically need to happen: you must first create a context, then execute the thing that you're trying to verify, and finally verify that what was executed actually behaved as expected. So if, for example, I need to verify that the system properly applies a fine to a library patron for returning a delinquent book, I:


  • arrange the context by creating a new patron object and setting its fine balance to $0

  • act by executing the applyFine method with an argument of $0.10

  • assert that the patron's balance is $0.10, thus verifying that the fine was applied correctly to the patron


The most direct result of thinking about tests in this manner is its impact on the code's visual layout:

@Test
public void applyFine() {
    Patron patron = new Patron();
    patron.setBalance(0);

    patron.applyFine(10);

    assertEquals(10, patron.fineBalance());
}

Note the blank lines separating the three arrange, act, and assert sections in the test applyFine. I've even seen some test code with guiding comments to make the section names explicit:

// arrange
...
// act
...
// assert
...

Honestly, it's idiomatic--once this organization is explained, the intent is obvious, and thus it's only a waste of time and space to include such low-value comments.


Emphasizing AAA (Arrange-Act-Assert) was a significant factor in the evolution of the Rhino Mocks mocking framework for .NET. Ayende Rahien introduced a new interface for Rhino Mocks 3.5, one that dispenses with the need to call a "verify mock" method after assertions are complete. This allows for cleaner, AAA-compliant tests. In contrast, the older "classic" syntax of record-replay mock tools generates "mock clutter" across the test--record, replay, and verify calls are littered through the test.


Note that the form can vary. If you're verifying a static function, there may be no need to Arrange anything. And you may also want to introduce preconditions that verify things are in an appropriate state before the Act step--you could end up with Arrange-Assert-Act-Assert, or even Assert-Arrange-Act-Assert. And it's also possible for Act and Assert to be combined effectively in a single line of code. I don't believe there's value in being dogmatic about the organizational form, but there is value in striving to ensure that the intent of the test is clear. Keeping sections clear and visually separate from each other helps!
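A hedged sketch of the Arrange-Assert-Act-Assert variant mentioned above, mirroring the earlier applyFine example. The Patron class here is a hypothetical reconstruction, and a plain boolean check stands in for JUnit assertions:

```java
// Hypothetical Patron, mirroring the earlier applyFine example.
class Patron {
    private int fineBalance = 0;

    void applyFine(int cents) { fineBalance += cents; }

    int fineBalance() { return fineBalance; }
}

class ApplyFineTest {
    static boolean applyFineAddsToBalance() {
        // Arrange
        Patron patron = new Patron();

        // Assert (precondition): make the assumed starting state explicit
        if (patron.fineBalance() != 0) return false;

        // Act
        patron.applyFine(10);

        // Assert
        return patron.fineBalance() == 10;
    }
}
```

The precondition check costs one line and documents an assumption the original three-section form leaves implicit.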


Arrange-Act-Assert is a simple concept, and probably only adds marginal value. But it costs nothing to practice, and it gets us that much closer to being a community willing to agree on some standards. What I also like about memorable acronyms like AAA is that they provide a consistent way to communicate simple ideas, which often don't have concise and consistent names.