Tim Ottinger & Jeff Langr present the blog behind the versatile Pragmatic Programmers reference cards.

Essential Unit Test Cards
- F.I.R.S.T.
- The Three Rules
- Red Green Refactor
- Characterization Tests
- TDD Smells
- Ice Breakers
- Antipatterns
- Why Test-After Sucks
Plan-Do-Check-Act
Font: Daniel Black
Thanks to Igor Czechowski for suggesting this card.
At the core of agile are short cycles of Plan-Do-Check-Act (PDCA). These steps are also what it means to be scientific in approach, at least per the definition of science that says you are following the scientific method: hypothesize, experiment, evaluate. Those who say agile isn't disciplined have not made this connection.
Plan-Do-Check-Act is echoed in agile practices, particularly TDD. The Plan step is about "making the expected output the focus," per Wikipedia. Writing a test that first fails captures your plan. After observing test failure, Do means you write enough code to make the test pass, and Check tells you to verify the actual results against the expected output. If there are differences, you must Act to determine their cause and correct your implementation (or sometimes your expectations). In any case, you must also Act by observing the changes to the environment--the rest of the system--and "determine where to apply changes that will include improvement," which can mean doing some incremental refactoring.
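Here is one PDCA pass through TDD, as a minimal sketch using JUnit 4 and a hypothetical Stack class (the names are ours, purely illustrative):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class StackTest {
    // Plan: capture the expected output in a test that fails first.
    @Test
    public void isEmptyWhenCreated() {
        Stack stack = new Stack();   // hypothetical class under test
        assertTrue(stack.isEmpty()); // Check: verify actual against expected
    }
}

// Do: write just enough code to make the test pass.
class Stack {
    private int size = 0;
    public boolean isEmpty() { return size == 0; }
    // Act: observe the impact on the rest of the system and refactor incrementally.
}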
The iterative-incremental development core of agile also follows the cycle:
- Plan - iteration planning/definition of acceptance tests
- Do - day-to-day iteration execution
- Check - verification of results using acceptance tests
- Act - retrospectives and subsequent planning
As with many of the best modern ideas for quality control, PDCA in part comes from Dr. W. Edwards Deming. While Deming credited Walter Shewhart for the original concept of PDCA, Deming gets credit for popularizing the cycle.
Naming Fail or Comment Fail?
/// Adds Sessions which fit in specified date-time range
private void ReadSessions() {
    // ...
}
Abbreviations
Font is Mechanical Pencil
Source: Vadim Suvorov, Tim Ottinger
When can we use abbreviations as names in our source code? Can we ever use abbreviations as variable names? Vadim and I explored this issue, and Vadim, in his orderly way of thinking, enumerated these principles. I'm sure that not everyone will find them to his liking, but I think these principles are well-reasoned and sufficient. I think they nest nicely into my naming rules in general, though my preference is to avoid any kind of encodings.
- Shared, not Personal: the abbreviation should not be something the author invented that other programmers will not recognize on sight.
- Consistently Used: the abbreviation is not punned--it does not mean one thing in one context and another thing entirely in a different context. Note that a very short abbreviation has a greater likelihood of collision (fn = function or filename or ...?).
- Must Be Justified: if the programmer is to use abbreviations, then he should have clear reasons why the abbreviation is required--for instance, when it helps the reader see the unique part of the name without being distracted by context warts (prefix or suffix). My addition here: in the case of parallel names such as Persistent.User vs. Domain.User, if only one name is present then no prefix is justifiable. My partner in this enterprise may not agree (likely with well-considered reason).
- Special Latitude Given for Domains: in solution domains, some abbreviations are common and it is beneficial for the programmers to know them. If I worked on military jet software and didn't know IFF, or in education and didn't understand ILT, or in accounting and didn't grasp AP or AR, then I would be less effective when communicating with the Business/Customer.
To the extent that your team chooses to use abbreviations, we recommend these criteria for your consideration. Clean naming is one of the most important factors in writing understandable code, and it has no negative effect on compilation or runtime speed, so it is very precious to me. Yet, in the appropriate context, I am open to sacrificing the "no encodings of any kind" rule for the appropriate use of well-reasoned abbreviations, with the caveats given above.
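For illustration, here is a small, self-contained sketch applying these criteria (the names and domain are ours, not Vadim's):

public class AbbreviationExamples {
    // Unjustified: 'fn' is personal and punned--function? filename? first name?
    static String fn(String fileName) { return fileName.trim(); }

    // Justified: 'AP' (Accounts Payable) is shared, consistently used, and
    // recognized on sight by anyone working in the accounting domain.
    static double apBalance(double[] apInvoiceAmounts) {
        double total = 0;
        for (double amount : apInvoiceAmounts) total += amount;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(fn("  ledger.csv  "));
        System.out.println(apBalance(new double[] { 100.00, 250.50 }));
    }
}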
Planning Poker (R)
Source: Wikipedia
Font: AndrewScript 1.6
Planning poker, trademarked by Mike Cohn, is a modernization of the 50+-year-old estimating process known as Wideband Delphi. Estimating is not far from the dark arts, and attempts to make the process serious and exacting are ill-advised. James Grenning devised planning poker as a quick and entertaining way to come to consensus when estimating stories in agile. I've found it can dramatically reduce the tedium of estimating through a large stack of stories.
A typical point scale might be 1, 2, 3, 5, 8, 13. Resist larger scales--toss the higher-value cards. You might replace them with one card that says "too big."
You might want to include a few additional cards: 0, ? ("I don't have a clue"), and infinity. The value of adding a 0 card is debatable (nothing is free, and even if development is "free," testing a story is never free), but you may find some usefulness in having it: sometimes completing one story automatically includes another, or a story might simply represent a milepost achieved.
The Wikipedia article provides good detail on the steps involved, but I highly suggest you make your own rules and stick to them. The section on anchoring is particularly useful: part of the reason James devised planning poker was to counteract the heavy influence on estimates coming from one individual. Make sure that when people divulge their card selections, they aren't watching and waiting for certain other individuals to show theirs first!
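As a toy illustration of the simultaneous-reveal mechanic (our sketch; the deck values follow the card, everything else is invented):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PlanningPokerRound {
    static final List<Integer> DECK = List.of(1, 2, 3, 5, 8, 13);

    public static void main(String[] args) {
        // Each estimator selects a card privately...
        Map<String, Integer> selections = new LinkedHashMap<>();
        selections.put("Ann", 3);
        selections.put("Raj", 5);
        selections.put("Lee", 5);

        // ...then all cards are revealed at once, so nobody anchors on
        // an early number from an influential individual.
        System.out.println("Reveal: " + selections);

        // One common consensus rule: if selections span more than two
        // adjacent deck values, the outliers explain and the team re-votes.
        int min = Collections.min(selections.values());
        int max = Collections.max(selections.values());
        boolean consensus = DECK.indexOf(max) - DECK.indexOf(min) <= 1;
        System.out.println(consensus ? "Consensus: take " + max
                                     : "Discuss outliers and re-vote.");
    }
}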
Before starting the meeting, figure out how long you'd like to spend estimating. If your backlog of stories looks pretty good, and there's a good understanding of the project by most people in the room (obviously not always the case), you might find that 5 minutes per story works well. Appoint a facilitator who can keep time and help keep the estimating session on track.
If the product you're building is less well-known to the participants, this process will take considerably longer, maybe 10-15 minutes per story. Do the planning poker estimates, regardless, and plan on doing them again during a quicker second meeting. If you feel like you are bogging down on a story, and understanding of it is not "critical path," set it aside, and plan to come back to it after other stories are visited.
For a backlog of not-well-understood stories, you will probably want a couple of sessions. Some stories will need to be set aside to be split or researched offline. Some stories will need to be revisited by the customer. One of the best things to do is give people time to go off and think about things (and having at least one night between sessions is always a good idea).
Still, you want to avoid investing too much time in estimation. The more time you invest, the higher the expectation that the resulting numbers are near perfect. Estimates are guesses!
Instead, the estimation meeting is best seen as a way to ensure that we have good, appropriately sized stories that are fairly well understood by everyone involved. The consensus mechanism in planning poker will quickly let you know if this is not the case. Getting confidence from a good ballpark project plan is almost a bonus!
Principles of Package Cohesion
Post: Tim & Jeff
Source: Uncle Bob
Font: Segoe Print
Coupling and cohesion are the two most important principles guiding the quality of an object-oriented class design. Most programmers learned about these principles in their first week of exposure to OO. The rise of TDD has helped reinforce the value of low coupling and high cohesion (although many programmers still unconsciously and even consciously resist truly small, cohesive classes and methods).
Uncle Bob teaches us that these core OO principles apply equally to packages. The Agile in a Flash card for Principles of Package Coupling covers the dependency side of the dynamic duo--how do you structure packages so as to improve the coupling relationships between them? This card covers the other side--how should you compose a package? What is the definition of "cohesive," as it applies to packages?
- The Reuse-Release Equivalence Principle (REP) - The REP tells us that classes should be packaged together because they are used together. This seems obvious, but many package structures are instead based around ideas like "functional areas," "architectural layers," or "originating team." The result? Users are inconvenienced, for example, by having to recite a litany of import lines at the top of each file. REP tells us to consider the destination instead of the source or even the structure of the code itself. It urges us to group things together for user convenience.
An entire system shoehorned into a single package/library would comply with this principle. But if the rate of change in the library were not extremely low, it would suffer from the problems addressed by the remaining principles.
- The Common-Reuse Principle (CRP) - This almost seems like a restatement of the REP, but the emphasis here is on limiting the impact to consumers. Imagine an API package with two sets of reusable class clusters. A programmer might choose to consume only one reusable component, but changes to the other component necessitate redistribution of the entire package. An unwanted release unnecessarily burdens the consuming programmer, who must re-integrate and re-test the entire library in order to stay current. Where the REP leads us to conglomerate, the CRP leads us to split packages apart. This tension between the principles leads us to find the level of granularity that provides greatest convenience (least negative impact) to users.
In spirit, the CRP is a package-level restatement of the Interface Segregation Principle, which says to keep interfaces small and focused for similar reasons (see the sketch after this list).
- The Common-Closure Principle (CCP) - The CCP is an application of the Open-Closed Principle at the next level up. This principle suggests that you should group your classes around the impact of change. A change should optimally impact only one package; as many other packages as possible should be closed to that change. This varies from the other two principles as it recommends grouping classes around the way the code is maintained, rather than the way it is used. Modules with a high rate of change might need to be grouped by closure rather than by use. Highly stable packages might be better conglomerated (REP).
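Here is a minimal, self-contained sketch of that ISP analogy at the class level (the types are ours, purely illustrative); the same splitting pressure applies to packages under the CRP:

public class CrpSketch {
    // Fat interface: a consumer who reuses only printing is still coupled
    // to scanning, and must absorb releases driven by scanning changes.
    interface Machine { void print(); void scan(); }

    // Segregated: each consumer depends only on the cluster it reuses.
    interface Printer { void print(); }
    interface Scanner { void scan(); }

    static class LaserPrinter implements Printer {
        public void print() { System.out.println("printing"); }
    }

    public static void main(String[] args) {
        Printer printer = new LaserPrinter();
        printer.print(); // reuses printing without dragging in scanning
    }
}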
The principles of package cohesion are not absolute. Sometimes a packaging may satisfy all three principles at once, but often the principles compete. The development team will have to make trade-offs in order to find a balance that works. In general, smaller (and ever smaller) packages provide the best chance of adherence to the CCP and CRP, although the REP says there is a limit to how small you will want to go.
Following these principles will require occasional re-packaging, which is upsetting to many users. However, correcting less-than-optimal packaging is a single deep cut that can halt the "death of a thousand paper cuts" caused when changes ripple across packages, or when users of a package have to deal with frequent irrelevant updates.