Stay


I was reading an article by Cherie Berklee at Payscale on the topic of surviving layoffs (having not survived one myself not long ago) and was inspired by some of its points. I recommend the article to those who have been laid off, and to those who hope never to be.
  • Stay Positive is an inspiring bit of advice which no doubt has origins in antiquity, but which was all the more powerful in Berklee's article. I have not always followed this advice; I even disregarded it for some time in my career and was the worse for it. Staying positive (not fecklessly hopeful or naive) when things are difficult is not just good behavior. It is leadership.
  • Stay Engaged. Though I don't have a reference, I seem to remember Robert Martin talking about professionals, and Craftsmen in particular, being more highly engaged (attentive, focused) than your average code monkey. It is sometimes a struggle to stay engaged, which is why we have things like the Pomodoro technique and pair programming to keep us focused.
  • Stay Professional should be the motto of the Craftsman movement (if I may call it a movement). Always do your best work, and always look for ways to raise the bar on "best".
We should be mindful of these points well before our companies start to have financial trouble. These are things that make us better employees and better people in general. I've since hand-written these three points on a sticky-backed index card stuck to the lower edge of my second monitor. It is there to always remind me of simple ways to be more effective and thereby more valuable.

Ten Principles for Agile Testers



Source: Crispin, Lisa, and Gregory, Janet. Agile Testing: A Practical Guide for Testers and Agile Teams, Addison-Wesley, 2009.


Lisa Crispin and Janet Gregory wrote the recently published Addison-Wesley book Agile Testing, a thick tome that fills a long-standing gap in agile literature. Before it, detailed guidance for testing in an agile environment was sparse. With Agile Testing and a couple of other fairly recent books (2007's Test Driven: TDD and Acceptance TDD for Java Developers by Lasse Koskela and Bridging the Communication Gap: Specification by Example and Agile Acceptance Testing by Gojko Adzic, 2009), we now have a good foundation for comprehensive agile testing knowledge.



The ten principles that the authors published should sound familiar. Four of these principles directly cover XP's four values of feedback, communication, courage, and simplicity; these and the remainder are also largely echoed in the agile manifesto and its supporting principles. So what's relevant about how these ten principles apply to agile testing?


  • Provide continuous feedback - The agile tester is central to providing the team with feedback: Acceptance criteria are the most concrete way to measure positive progress on an agile project. Tests also help identify issues and capture decisions on appropriate future direction.

  • Deliver value to the customer - The insistence on acceptance tests is a "reality check" on scope creep. Acceptance tests help us all understand what it means to realize a customer's needs.

  • Enable face-to-face communication - Testers can often be the ones on a team responsible for bridging the communication gap between customers (BAs, product owners, etc.) and programmers. A tester can be the one who physically brings these people together, as well as the one who drives derivation of a common language between the two parties.

  • Have courage - One of the larger challenges of agile is in sustaining a fast-paced iterative environment, where every two weeks we need to ship quality software. This challenge demands significant courage. Yet the irony is that we also need to understand that iterations afford us opportunities to learn how to fail and adapt--something that can require an even heavier dose of courage!

  • Keep it simple - Agile testers can help push back against an insistence on overly-elaborate features. Testers can also help the customer understand how to incrementally deliver value. They must learn an appropriate balance of iterative testing--just enough to provide the right confidence in delivering software.

  • Practice continuous improvement - A key element of using iterations is to allow for learning to take place. Testers should be part of retrospectives (and if you're not consistently adapting based on the results of retrospectives, you're not agile enough.) Testers should also treat their career as a profession by continually learning more about testing practices, tools, and the system itself.

  • Respond to change - Agile testing is dramatically different in that there are few true "cutoff points"--things keep changing and thus must be continually re-tested. This requires automation! The agile tester learns to cope with the customer changing his or her mind from iteration to iteration, and correspondingly learns how to incrementally flesh out necessary testing specifications.

  • Self-organize - In a true agile team, everyone has the capability to act as a tester. Agile teams know how to shift focus as needed; from time to time, for example, it may be prudent for programmers to turn their attention toward helping verify a "done" but not "done done" feature.

  • Focus on people - Testers are often at the bottom of the totem pole in a non-agile software development team. Work is thrown at them, their available slice of time is continually squished, and programmers often look upon them as lessers. In an agile team, everyone shares the responsibility for ensuring that we are building a quality product. Agile testers are key in bringing their testing expertise to the team.

  • Enjoy - The ability to help drive the process and be a true, equal contributor to a team can be extremely gratifying for an agile tester.


The use of acceptance tests to drive agile development is perhaps one of the most critical needs for a team that wants to sustain success. With that in mind, it's amazing that it's taken over ten years since the explosion of XP for us to begin to fully document just what agile testing is.

Make It Work, Make It Right, Make It Fast

Today brings our first-ever reader contribution, from Brian DiCroce.

Our font today is called "j.d." See bottom of post for details.



Content Source: Kent Beck at C2.com wiki

When working on a feature, let yourself be driven by this little principle to actually get things done right.

Make It Work
Program with your mind focused on the feature's basic behavior. Just make the feature work. Ensure that all the test(s) pass for that feature.

Make It Right
Now that you have your feature working (all tests passing), focus on refactoring the code. Tests will provide the necessary feedback. Clear up structural and aesthetic issues, remove duplication, rename variables, etc.

Make It Fast
Once the tests are passing and the code is clean, you can focus on tweaking its performance. Use a profiler, of course. Once again, you can feel confident in your changes because the tests will ensure that functionality is not broken.
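The three stages might look like this in a hypothetical Python sketch (the word-counting feature here is invented purely for illustration):

```python
from collections import Counter

# Make It Work: a first, naive pass -- correct but clumsy.
def word_counts_v1(text):
    counts = {}
    for word in text.lower().split():
        if word in counts:
            counts[word] = counts[word] + 1
        else:
            counts[word] = 1
    return counts

# Make It Right: same behavior, duplication removed, structure cleaned.
def word_counts_v2(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Make It Fast: profiling (hypothetically) points at the loop, so we
# lean on the optimized library implementation. The tests still pass.
def word_counts_v3(text):
    return dict(Counter(text.lower().split()))

# The tests that license each step along the way:
def check(fn):
    assert fn("the cat and the hat") == {"the": 2, "cat": 1, "and": 1, "hat": 1}

for fn in (word_counts_v1, word_counts_v2, word_counts_v3):
    check(fn)
```

Because the same tests run against every version, each transition is a safe, verifiable step rather than a leap of faith.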

I abide by this principle whenever I'm developing a feature. It helps me shift my focus toward specific development efforts when working on a single feature. The initial effort is toward getting a feature working for quick feedback from users (or potential users). Improvements can be made after the fact, making the code faster and smaller.

This principle is easily applied in conjunction with TDD. The test-first approach helps me drive the design or API of the feature, while the pre-existing tests ensure that the rest of the system continues to function correctly. Refactoring can be applied naturally and at will as the new tests provide a safety net for the code. With the functionality assured, performance improvements may be added with confidence.

If your quick-running, hand-optimized system does not do what it is supposed to do, it is not worth investing time and resources in making it any faster. After making sure that the feature works correctly, you can focus on its performance. Once again, the tests act as a sidekick in that they continuously provide you feedback on whether changes broke the system.

About the font:
"j.d." is a windows/macintosh truetype and postscript type 1 font, handcrafted in the Emerald City by Steven J. Lundeen, Emerald City Fontwerks, Seattle, Washington, USA.
Generously released as freeware, and copyrighted ©2001.

Legacy Code Change Algorithm


Source: Feathers, Michael. Working Effectively With Legacy Code, Prentice Hall PTR, 2005.

A simply stated technique is not always easily applied. The best reference for this technique is Michael's book, of course. Second best is the paper available through Object Mentor's web page (Michael's published papers).

To identify change points is either very easy or very hard. Find the place where you want to make the next change in order to get a new feature added or to eliminate a bug.

The point of change may be nested too deeply inside other methods or a long if/then/else chain, or buried in classes referenced by other classes. Michael suggests that this effort is proportional to the sickness of the code.

When you know where the code needs to be changed, you need to find test points. Michael calls them "inflection points." An inflection point is the place in the call tree below which any change will be either evident or insignificant. The test point is often not the change point.

Break dependencies that make it difficult or impossible to start writing tests to get some coverage of the current behavior at the test point(s). It is normal, in a legacy code base, to do some "pre-factoring" to make testing possible. This is tricky territory, since you don't yet have tests to protect you as you work. Expect to spend time running and manually validating basic functionality. You are of course better off if there are some tests that cover the test point.

Write tests to cover existing behavior at the test point. Remember your tests will need to comply with the F.I.R.S.T. principles.

Make changes and refactor the change point in the normal test-first manner, enjoying the test coverage you've built.
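One common dependency-breaking move from Michael's book is subclass-and-override. Here is a minimal sketch, with an entirely invented `InvoiceProcessor` class standing in for real legacy code:

```python
# A hypothetical legacy class: the change point (the discount logic)
# is welded to a slow external call, so there is no usable test point.
class InvoiceProcessor:
    def total(self, invoice_id):
        subtotal = sum(self.fetch_lines(invoice_id))   # external dependency
        return subtotal - self.apply_discount(subtotal)

    def fetch_lines(self, invoice_id):
        raise ConnectionError("would hit the production database")

    def apply_discount(self, subtotal):
        return subtotal * 0.1 if subtotal > 100 else 0

# Break the dependency with subclass-and-override, then write tests
# that pin down the current behavior at the test point.
class TestableProcessor(InvoiceProcessor):
    def fetch_lines(self, invoice_id):
        return [60, 60]   # canned data instead of the database

processor = TestableProcessor()
assert processor.total(1) == 108.0   # 120 minus the 10% discount
```

With behavior pinned down this way, the change point can be refactored test-first, as the last step describes.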

Extreme Measures


(font is Flourine)

This is a dangerous "power" card. This is not the first card one should reach for, and one should not make these changes without some consultation with peers. These tend to not be the kinds of changes a transitioning team would make for itself. Still, the practices do have merit and can be used to train a team and the organization around it to adopt agile behaviors. These are mostly "trellis" measures, probably best selected by a transition consultant or coach.

You may shorten iterations to force priority when you find that a team is suffering a slump-and-binge cycle. Teams may slump for a number of days, knowing that they still have plenty of time to complete a feature, and then cram in overtime at the last minute to finish their work before the deadline. The same syndrome afflicts term papers, finals cramming, and amateur theatre rehearsals.

With shorter iterations, the excitement of picking up new work and the immediacy of due dates brings focus. The goal is that each team member goes home happy and tired, and comes back refreshed and ready to work after a full night's sleep. That can work with shorter iterations and shorter days. This extreme measure should be taken to stop overtime, rather than to inspire it.

Mandate that a subject-matter expert can only help others complete tasks so that expertise will spread through the team. If an expert works alone in her area of expertise, then the team continues to be starved of experience. Worse yet, you may find that work has to queue for the expert and "steel threads" abound. An expert may not want to "own" an area of the application, but the organization may force it to be so. This does bad things to "truck number." It is best if a subject-matter expert not be siloed for the sake of the expert, the team, and the software organization.

You might require that 40% of all work be done by the iteration midpoint to avoid the slump-and-binge cycle described above; this practice essentially cuts one iteration into two, with a slightly softer deadline for the first.

If you revert or discard old, unfinished work you will cut down on the work-in-progress (AKA "waste") of a team. People don't want to do work that isn't going to be deployed, so they tend to hold onto unfinished work in hopes of completing it on the sly "when time allows". Sadly, code becomes dated outside of the current line of version control. Code that is old may no longer merge into the codeline. The longer the interval since the last integration, the less likely it is that the code can be integrated. It is often better to scrap and rewrite old features. It certainly makes a statement. Overplaying this card causes depression in programmers and outrage in Customers, so be careful with it. Not playing this card can result in risky rework, which is waste. It is far better if the team only takes on the amount of work it can complete and successfully field in an iteration.

Random weekly team rosters will force closure on features if the team is suffering from soft iteration boundaries. Often teams take credit for work not fully completed (for shame!) and will try to quietly squeeze in the completion of hung-over features during the next iteration. Of course, if they couldn't complete 12 points of work last iteration, why do they think they can complete another 12 points of features plus three points of last iteration's work in the same number of days? If the team is gaming the iterations in such a way, create two teams and randomly shift members between the two. It is especially handy if one team is tasked with new features while the other deals with production issues and bugs. If you don't know where you'll be next week, can you afford to let a feature run long? If you have to complete more work than you can do in an iteration, you really must seek help from teammates. You also need someone who can take over if you're shifted into a new team, so you need to pair with several people on your team, just in case. One risks resentment and lowered short-term productivity with this extreme measure, but one can benefit by spreading more code knowledge and by getting more bugs fixed with the second team.

If you find that your team is suffering from "pair marriages" you might need to stir pairs (at least) twice daily. A bell, loudspeaker announcement, or other mechanism can be used to signal that it's pair-switching time. You might pass a rule that a pair must switch out one partner each morning and after each lunch break. Either way, you can push your team to learn from each other by pairing with someone different for a change. Pair programming was never intended to be conducted by two people chained to one desk for the duration.

It is wise to eliminate individual work assignments as they create a disincentive against pair-programming. If you have work to complete, and I have work to complete, I am less likely to help you because it will put my work at risk of non-completion. Since my name is on my feature, I will be evaluated on my reputation for completing work. In such a system, pairing is an act of great bravery and potentially high risk. Esther Derby has already said everything that needs to be said about the follies of such a system.

I worry a little about this card. I know that agile is about values, and that a primary value is that we value people. We value individuals and interactions over processes and tools. Yet sometimes the people we value need to spend some time in a system that encourages interaction and completion so that they can learn to work together more successfully. I expect this to be the set of ideas that bring us the most angry reader mail.

Yet I think that you have to change the system sometimes. So be it.

SOLID



The SOLID principles are a big topic these days (in the post-Spolsky/Atwood v. Martin war of words). I suppose that is a good thing, because these principles have been around a pretty long time. We taught these as principles of Object-oriented Design at least 15 years ago, and they've appeared in a number of Uncle Bob books.

These principles were developed upon observation of how good code should be structured (and partly by sniffing out why bad code is bad). Never were they presented as a methodology. Always the assumption was that code was being written for good reasons and that it was written to work. They focused on "building things right" rather than on "building the right things."

There is more here than would fit comfortably on a hand-written 3x5 card, by the way.

Single Responsibility Principle (SRP): A class should have only one reason to change.


The original formulation was "a class should do one thing; it should do it all, it should do it well, and it should do it only." This was subject to a lot of interpretation and didn't address the dependency problems in software design, so it was reformulated. A feature of good design is that all classes have clear, crisp responsibility boundaries. When code is munged together or poorly organized by responsibility, it is hard to determine where a change should be made.
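As a minimal sketch (the payroll classes are invented for illustration), splitting by reason-to-change might look like this:

```python
# Instead of one class with two reasons to change (calculation rules
# and presentation rules), each responsibility gets its own home.
class PayCalculator:
    def __init__(self, hourly_rate):
        self.hourly_rate = hourly_rate

    def gross(self, hours):
        return self.hourly_rate * hours

class PayFormatter:
    def as_line(self, amount):
        return f"${amount:.2f}"

calc, fmt = PayCalculator(20), PayFormatter()
assert fmt.as_line(calc.gross(8)) == "$160.00"
```

A change to tax rules touches only the calculator; a change to report layout touches only the formatter. Each class has one reason to change.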

Open-Closed Principle (OCP): Software entities should be open for extension, but closed for modification.

This was originally formulated by Bertrand Meyer to say essentially that published interfaces should not change. It evolved to a larger meaning that new features should be added primarily as new code, and not as a series of edits to a number of existing classes. A quality of good design is the extent to which new features are bolted-on instead of woven-in.
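A small sketch of "bolted-on instead of woven-in" (the exporter classes are hypothetical):

```python
import json

# New behavior arrives as a new class; existing code is never edited.
class JsonExporter:
    def export(self, rows):
        return json.dumps(rows)

class CsvExporter:   # the bolted-on feature: added, not woven in
    def export(self, rows):
        return "\n".join(",".join(str(v) for v in row) for row in rows)

def write_report(rows, exporter):
    return exporter.export(rows)   # closed for modification

assert write_report([[1, 2], [3, 4]], CsvExporter()) == "1,2\n3,4"
```

Supporting a new output format means writing one new class; `write_report` and its callers stay untouched.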

Liskov Substitution Principle: Subtypes must be substitutable for their base types.



Barbara Liskov's paper Family Values addressed a notion of the proper use of inheritance. There have always been a lot of design errors from overuse or misuse of inheritance between classes. When a base class has methods that don't apply to all of its children, then something has gone wrong. When a class uses an interface, but some implementations are inadvisable or incompatible, then the design is unclean.
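The classic illustration of a substitutability failure is the square/rectangle trap; a quick sketch:

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def set_width(self, w):   # "helpfully" keeps the square square...
        self.w = self.h = w

def stretch(shape):
    shape.set_width(10)       # code written against Rectangle's contract
    return shape.area()

assert stretch(Rectangle(2, 5)) == 50
assert stretch(Square(5, 5)) == 100   # ...and surprises the caller
```

`Square` is-a `Rectangle` in everyday speech, but it is not substitutable for one: clients relying on the base class's behavior get surprising results, which is exactly the unclean design the principle warns against.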

Interface Segregation Principle (ISP): Clients shouldn't be forced to depend on methods they don't use.

In a good design, dependencies are managed. To keep irrelevant changes from flowing up into classes that don't really care about them, it behooves the developer to keep interfaces small and focused.

Of course, the problems of conglomerated interfaces don't manifest the same way in dynamic languages because of late binding, but the problem lives on in C++, C#, Java and other statically-typed languages. In dynamic languages interfaces are not declared, but just understood to exist. If my python program only calls an add routine and an assignment on an argument, then it is understood that the argument must include an add and an assignment in its interface. I automatically depend only on the methods I use in my code, and changes to the underlying class do not seep unbidden into my code.
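To make the duck-typing point concrete (with an invented `Ledger` class):

```python
# The caller's implicit interface is exactly "has an add() method".
# Anything satisfying that works; extra methods impose nothing.
class Ledger:
    def __init__(self):
        self.entries = []

    def add(self, value):
        self.entries.append(value)

    def audit(self):          # irrelevant to tally(); no dependency created
        return len(self.entries)

def tally(counter, values):
    for v in values:
        counter.add(v)        # the whole "declared interface" is right here

ledger = Ledger()
tally(ledger, [1, 2, 3])
assert ledger.entries == [1, 2, 3]
```

`tally` depends only on `add`; renaming or reworking `audit` cannot break it, which is the ISP outcome achieved for free by late binding.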

On the other hand, we find that some dynamic languages are adding declarative interfaces, and some do interface with java or C programs through declared interfaces. Maybe the principle lives on after all.

Dependency Inversion Principle (DIP): Abstractions should not depend on details; details should depend on abstractions.

Since abstractions are built with the intention of writing interface users and interface implementations, it is reasonable that most of our code should depend on abstractions (opening the code to open/closed goodness) rather than concrete implementations (making further change a matter of weaving new code with old). Again, problems with this principle do not manifest the same way in dynamic languages, but not everyone uses dynamic languages.
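A minimal dependency-inversion sketch (the notifier classes are hypothetical):

```python
# The high-level policy depends on an abstraction, not on a concrete
# sender; concrete details depend on (implement) the abstraction.
class Notifier:
    def send(self, message):
        raise NotImplementedError

class EmailNotifier(Notifier):      # detail depends on the abstraction
    def send(self, message):
        return f"email: {message}"

class OrderService:                 # high-level policy
    def __init__(self, notifier):
        self.notifier = notifier    # injected; open to new senders

    def place(self, item):
        return self.notifier.send(f"ordered {item}")

assert OrderService(EmailNotifier()).place("book") == "email: ordered book"
```

Adding an SMS or pager notifier later is new code implementing `Notifier`; `OrderService` never changes, which is the open/closed goodness the principle enables.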

And of course, even if one uses dynamic languages one might find it useful to use an interface that limits exposure to details. This warning still applies to leaky abstractions.

If one follows the SOLID principles, one may find that one's code becomes more fluidly maintainable than it otherwise would. Ultimately, that is what the whole good/bad thing was about from the beginning.

BTW: font of the day is Anime Ace, a free font. I'm not sure that it will appear in the final version of this card because of its all-caps nature.

UncleBob's Three Rules of TDD



This was one of the three TDD cards I posted while at Object Mentor. A more attractive set of hand-drawn cards then appeared (with full attribution) at Brian DiCroce's site a little while after. I still wish I'd drawn them first. The others have been presented here: the FIRST principles and our site's most popular card to date, the Red, Green, Refactor card.

These three laws originated with Robert "Uncle Bob" Martin, who has provided such a wonderful write-up that there is no value I can add other than shrinking the sentences to fit on a card. Bob is a great guy and a solid techie. And by solid, I mean SOLID.

By the way, I've submitted a paper to Agile2009 to do a little session on doing "in a flash" teaching sessions with index cards. If it's accepted, I'll see you there.

F.I.R.S.T



Source: Brett Schuchert, Tim Ottinger

In my Object Mentor days Brett and I were looking at ways to improve some class materials on the topic of unit testing. I noticed that our list of properties almost spelled FIRST. We fixed it.

We refer to these as the FIRST principles now. You will find these principles detailed in chapter 9 of Clean Code (page 132). Brett and I have a different remembrance of the meaning of the letter I and so I present for your pleasure the FIRST principles as I remember them (bub!). He had it right the first time so we will cut him some slack.

The concepts are very simple, and of course achieving them once you've gone off the track can be very hard. Always better to start with rigor here, and maintain it as you go.

Fast: Tests must be fast. If you hesitate to run the tests after a simple one-liner change, your tests are far too slow. Make the tests so fast you don't have to consider them.

A test that takes a second or more is not a fast test, but an impossibly slow test.

A test that takes a half-second or quarter-second is not a fast test. It is a painfully slow test.

If the test itself is fast, but the setup and tear down together might span an eighth of a second, a quarter second, or even more, then you don't have a fast test. You have a ludicrously slow test.

Fast means fast.

A software project will eventually have tens of thousands of unit tests, and team members need to run them all every minute or so without guilt. You do the math.

Isolated: Tests isolate failures. A developer should never have to reverse-engineer tests or the code being tested to know what went wrong. Each test class name and test method name with the text of the assertion should state exactly what is wrong and where. If a test does not isolate failures, it is best to replace that test with smaller, more-specific tests.

A good unit test has a laser-tight focus on a single effect or decision in the system under test. And that system under test tends to be a single part of a single method on a single class (hence "unit").

Tests must not have any order-of-run dependency. They should pass or fail the same way in suite or when run individually. Each suite should be re-runnable (every minute or so) even if tests are renamed or reordered randomly. Good tests interfere with no other tests in any way. They impose their initial state without aid from other tests. They clean up after themselves.

Repeatable: Tests must be able to be run repeatedly without intervention. They must not depend upon any assumed initial state, they must not leave any residue behind that would prevent them from being re-run. This is particularly important when one considers resources outside of the program's memory, like databases and files and shared memory segments.

Repeatable tests do not depend on external services or resources that might not always be available. They run whether or not the network is up, and whether or not they are in the development server's network environment. Unit tests do not test external systems.

Self-validating: Tests are pass-fail. No person (or other agency) must examine the results to determine whether they are valid and reasonable. Authors avoid over-specification so that peripheral changes do not affect the ability of assertions to determine whether tests pass or fail.
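A tiny sketch of tests aiming at these properties (the `greeting` function is invented for illustration):

```python
import unittest

# The "clock" is passed in as a plain value, so the tests are fast
# (no I/O), isolated (one decision each, with names that pinpoint any
# failure), repeatable (no real clock or external resource), and
# self-validating (pure pass/fail assertions, nothing to eyeball).
def greeting(hour):
    return "good morning" if hour < 12 else "good afternoon"

class GreetingTest(unittest.TestCase):
    def test_before_noon_is_morning(self):
        self.assertEqual("good morning", greeting(9))

    def test_noon_and_after_is_afternoon(self):
        self.assertEqual("good afternoon", greeting(12))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(GreetingTest)
assert unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

Each test exercises one branch of one decision, so a red bar plus the test name tells you exactly what broke and where.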

Timely: Tests are written at the right time, immediately before the code that makes them pass. It may seem reasonable to take a more existential stance--that it does not matter when they're written, as long as they are written--but this is wrong. Writing the test first makes a difference.

Testing post-facto requires developers to have the fortitude to refactor working code until they have a battery of tests that fulfill these FIRST principles. Most will take the expensive shortcut of writing fewer, fatter tests. Such large tests are not fast, have poor fault isolation, require great effort to make repeatable, and tend to require external validation. Testing then provides less value at higher cost. Eventually developers feel guilty about how much time they're spending "polishing" code that is "finished" and can be easily convinced to abandon the effort.

Automating Tasks



A naive team (or boss) will want to start their agile process by researching, purchasing, and installing all sorts of automated, web-based tools. This is odd when you consider that a major value of Agile is to prefer human interactions over tools and processes. Agile begins with people.

In order to help break the tendency toward automation madness, Jeff and Tim offer these tips:


Do strive to automate build, test, and install processes. You don't really want to press a button on a web page to get the tests to run. You want the testing built into your IDE or watching your source directories for changes. The dream is to always "just know" if you've broken anything, and to make releases a non-event. That's hard to do when your processes require manual intervention. Automate early and often.
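A minimal sketch of "watch the source directories, rerun the tests on change," using only the standard library; a real team would likely reach for an existing tool (an IDE integration, a continuous test runner) rather than maintain this by hand:

```python
import os
import time

# Record the modification times of every .py file under a directory.
def snapshot(root):
    stamps = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                stamps[path] = os.path.getmtime(path)
    return stamps

# Poll for changes and invoke the supplied test-running callable
# whenever anything differs. Runs until interrupted.
def watch(root, run_tests, poll_seconds=1.0):
    last = snapshot(root)
    while True:
        time.sleep(poll_seconds)
        current = snapshot(root)
        if current != last:
            run_tests()
            last = current
```

The point is the shape, not the script: feedback arrives without anyone pressing a button, so breakage is "just known" within seconds.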

Automate dull, tedious, or repetitive work because it makes your day seem long. Prefer to acquire rather than build, because the fun work of automating the dull work has a siren's song. Get it working, make it invisible, and get on with the work. Don't fear shell scripts, IDE automation scripts, or the like. But don't be fascinated with them either; automation by individuals (or pairs) still has to pay off in increased productivity for the whole team.

Automate to free time for creative, intellectual work. Automating is not something you do to seem more professional, to impress your peers, or to earn some kind of "automation points" with superiors. The point is to not have to spend time doing trivial and boring work. Rather than getting good at switching between multiple files and cut-n-pasting code, one should eliminate the wasted motion through design or automation. Then one can take on more interesting, difficult work and complete it within an iteration boundary.

Never automate evolving processes. If you standardize a process that isn't settled in, you will either freeze the process prematurely, or you will be playing catch-up as the process continues to change. Alternatively, if you must automate an evolving process, be prepared to evolve your automation regularly or discard it and take over manually.

Do not automate in order to avoid human interaction. Building "avoidance technology" is contrary to the first principles of the agile manifesto. It is never our goal to avoid interaction with the Customer or between the QA and Programmers. Instead, consider what techniques (low-tech or otherwise) will draw all the participants closer together, or else leave things alone.

Prefer physical artifacts with coercive immediacy over virtual artifacts stored somewhere in software. A corkboard loaded with 3x5 index cards will have more influence on a team's actions than a set of virtual cards in a web server somewhere. Manual processes with physical items have been a mainstay of agile development. Low-tech, high-touch processes can be quite satisfying.

Software serves the team, never the reverse. Nobody comes to work just to feed the machines. When the automation software demands tedious and/or meaningless efforts from the team, is it worth keeping? Automation is supposed to make it possible for us to do more interesting work, it is not supposed to increase tedium. When software gets in the way, the next retrospective action should be obvious.

All developers owe a debt to those who build great tools like Eclipse, Vim, Emacs, etc. Great tools tend to have a wonderful transparency. In a good editor, the code seems to almost dance and shape itself before your eyes. Great build tools like Paster and Maven take a lot of the effort out of building deployable modules. These things are wonderful. On the other hand, someone has to turn out the business applications during the day job, and for those hours of the day it is important that the application is the star of the day and the tools melt into the background.

Coding Standards



Coding standards? That tired, old chestnut? Who talks about coding standards these days?

You do. You are talking about coding standards when you see someone has their tabs set wrong, when you complain about bracing, when you ask someone why they put their spaces where they do, when you add comments and someone else says delete them, ...

The need for a common style guide was glossed over in Collective Code Ownership, in the following paragraph:
The team has a single style guide and coding standard not because some arrogant so-and-so pushed it down their throat, but rather the team adopts a single style so that they can freely work on any part of the system. They don't have to obey personal standards when visiting "another person's" code (silly idea, that). They don't have to keep more than one group of editor settings. They don't have to argue over K&R bracing or ANSI, they don't have to worry about whether they need a prefix or suffix wart. They can just work. When silly issues are out of the way, more important ones take their place.

So what kind of document is the code standard? The authors have seen plenty of large, complex, detailed guides that strive to be comprehensive. Who has the time? Perhaps the answer should be to look for the least documentation one can afford. How much is too much? How much is too little?

The first recommendation is that a team should standardize to avoid waste. In this case, "waste" includes rework, arguments, and work stops while issues are settled. In this regard, it is better to have even a bad decision than a variety of opinions. If we find that we are enduring waste, then we add a line to the standard. Again, we have to build collective ownership and any choice is better than arguing the same points repeatedly.

The team should start with an accepted community standard if one can be found. If there are multiple, choose one of them. For the python community, PEP 8 is a wonderful starting point. One may look at a ubiquitous tool's default styling as a community standard (community of tool users), since it seldom pays to fight your tools. Notice that public style guides tend not to be the smallest document one can afford, but the goal is really to make development time productive and non-contentious (with regard to silly issues) so even a longer document may be successful.

If there is some particularly contentious difference of opinion, then the team should time-box the argument. They might choose to debate for an hour, agree, and write the decision down. Teams that don't time-box the argument may never reach resolution. Do not let contentious issues remain, or they will remain contentious. Once agreed and recorded, the issue should not be revisited. Nor should a pair partner allow his partner to spend iteration time arguing over decisions already made. Nobody has to love the decision, but they should admit to a fight well fought and a final answer that should not get in the way of getting real work done well and often.

Ideally, the standard should be minuscule. A standard-by-example will have a better chance of being concise and obvious. Therefore, the standard should be mostly code and fit on one page. This is especially true of the initial version of the team's style guide. To some degree, arguments and uncertainty will lead to the accumulation of additional guidelines but it is wise to start small. Our working documents should be like our code in the sense that it is as small, simple, and unambiguous as possible.
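A hypothetical one-page standard-by-example might be nothing more than a short, exemplary snippet; the sample itself carries the rules rather than decreeing them:

```python
# Style by example: naming, spacing, and docstring conventions are
# shown, not enumerated. (All names here are invented for illustration.)
MAX_PAGE_SIZE = 50                      # constants: UPPER_SNAKE_CASE

def fetch_active_users(users, limit=MAX_PAGE_SIZE):
    """Functions are verbs; docstrings state behavior, briefly."""
    return [u for u in users if u.get("active")][:limit]

class UserReport:                       # classes: CapWords, no prefix warts
    def __init__(self, users):
        self.users = users              # attributes assigned in __init__

report = UserReport(fetch_active_users([{"active": True}, {"active": False}]))
assert report.users == [{"active": True}]
```

New code is written to look like the sample; when all the code looks like the sample, the sample has done its job.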

The style guide is an agile artifact: it is subject to corrective steering. A team should revisit the standard every iteration until no one cares any more. Iteration time is too valuable for these arguments, but retrospective time exists to help the team eliminate waste and turbulence. If further discussion of style points will help smooth the coming iteration, then it should be brought up.

The rule for simplifying any system is to obviate and then remove steps, processes, and instructions. If the code is written to standard, then the code ultimately becomes the standard and the standard becomes redundant. It is perfectly reasonable for a mature team to discard their documented style guide and keep the style. You will know the style guide is unnecessary when all new code looks like the existing code. And it all looks good.