Tim Ottinger & Jeff Langr present the blog behind the versatile
Pragmatic Programmers reference cards.
Source: Brett Schuchert, Tim Ottinger
In my Object Mentor days Brett and I were looking at ways to improve some class materials on the topic of unit testing. I noticed that our list of properties almost spelled FIRST. We fixed it.
We refer to these as the FIRST principles now. You will find these principles detailed in chapter 9 of Clean Code (page 132). Brett and I have different remembrances of the meaning of the letter I, and so I present for your pleasure the FIRST principles as I remember them (bub!). He had it right the first time, so we will cut him some slack.
The concepts are very simple, though achieving them once you've gone off the track can be very hard. It is always better to start with rigor here, and maintain it as you go.
Fast: Tests must be fast. If you hesitate to run the tests after a simple one-liner change, your tests are far too slow. Make the tests so fast you don't have to consider them.
A test that takes a second or more is not a fast test, but an impossibly slow test.
A test that takes a half-second or quarter-second is not a fast test. It is a painfully slow test.
If the test itself is fast, but the setup and tear down together might span an eighth of a second, a quarter second, or even more, then you don't have a fast test. You have a ludicrously slow test.
Fast means fast.
A software project will eventually have tens of thousands of unit tests, and team members need to run them all every minute or so without guilt. You do the math.
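A sketch of what "fast" demands in practice: anything slow (the clock, the network, the disk) gets injected so a test can substitute an in-memory fake and run in microseconds. The Greeter class and its greeting rules are hypothetical, invented purely for illustration:

```python
import time
import unittest

class Greeter:
    """Hypothetical class that greets differently by hour of day."""
    def __init__(self, clock=time.localtime):
        self.clock = clock  # injectable, so tests need not depend on real time

    def greeting(self):
        return "Good morning" if self.clock().tm_hour < 12 else "Good afternoon"

class GreeterTest(unittest.TestCase):
    def test_morning_greeting(self):
        # A canned clock keeps the test deterministic and sub-millisecond:
        # no sleeping, no waiting for the system clock to cooperate.
        nine_am = lambda: time.struct_time((2024, 1, 1, 9, 0, 0, 0, 1, 0))
        self.assertEqual(Greeter(clock=nine_am).greeting(), "Good morning")
```

With tens of thousands of tests shaped like this, the whole suite still finishes in seconds.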
Isolated: Tests isolate failures. A developer should never have to reverse-engineer tests or the code being tested to know what went wrong. Each test class name and test method name with the text of the assertion should state exactly what is wrong and where. If a test does not isolate failures, it is best to replace that test with smaller, more-specific tests.
A good unit test has a laser-tight focus on a single effect or decision in the system under test. And that system under test tends to be a single part of a single method on a single class (hence "unit").
Tests must not have any order-of-run dependency. They should pass or fail the same way in a suite or when run individually. Each suite should be re-runnable (every minute or so) even if tests are renamed or reordered randomly. A good test interferes with no other test in any way. Tests impose their initial state without aid from other tests, and they clean up after themselves.
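One way to keep tests free of order-of-run dependencies is to have each test impose its own initial state, as described above. A minimal sketch, where the Counter class is a made-up system under test:

```python
import unittest

class Counter:
    """Hypothetical system under test."""
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

class CounterTest(unittest.TestCase):
    def setUp(self):
        # A fresh Counter per test: no test depends on another having run
        # first, so the tests pass the same way in any order.
        self.counter = Counter()

    def test_starts_at_zero(self):
        self.assertEqual(self.counter.count, 0)

    def test_increment_adds_one(self):
        self.counter.increment()
        self.assertEqual(self.counter.count, 1)
```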
Repeatable: Tests must be able to be run repeatedly without intervention. They must not depend upon any assumed initial state, and they must not leave behind any residue that would prevent them from being re-run. This is particularly important when one considers resources outside the program's memory, like databases, files, and shared memory segments.
Repeatable tests do not depend on external services or resources that might not always be available. They run whether or not the network is up, and whether or not they are in the development server's network environment. Unit tests do not test external systems.
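A small sketch of repeatability against an outside resource: a throwaway temporary directory means no assumed initial state and no residue left behind, so the test passes the same way on every machine, every run. The save_note function is hypothetical:

```python
import tempfile
import unittest
from pathlib import Path

def save_note(path, text):
    """Hypothetical function under test: writes a note to disk."""
    Path(path).write_text(text)

class SaveNoteTest(unittest.TestCase):
    def test_note_round_trips(self):
        # The directory is created fresh and removed automatically, so
        # re-running the test requires no manual cleanup or setup.
        with tempfile.TemporaryDirectory() as tmp:
            note = Path(tmp) / "note.txt"
            save_note(note, "boom splash!")
            self.assertEqual(note.read_text(), "boom splash!")
```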
Self-validating: Tests are pass-fail. No person or process must examine the results to determine whether they are valid and reasonable. Authors avoid over-specification so that peripheral changes do not affect the ability of assertions to determine whether tests pass or fail.
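The pass-fail point fits in a few lines: an assertion decides the outcome with no human in the loop. The tax_due function and its 7% rate are invented for the example:

```python
def tax_due(amount, rate=0.07):
    """Hypothetical function under test."""
    return round(amount * rate, 2)

# Not self-validating: someone must eyeball the output on every run.
#   print(tax_due(100))

# Self-validating: the assertion decides pass or fail by itself, and it
# specifies only the one value that matters here (no over-specification).
def test_tax_on_100_dollars():
    assert tax_due(100) == 7.0
```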
Timely: Tests are written at the right time, immediately before the code that makes them pass. It may seem reasonable to take the more existential stance that it does not matter when tests are written, as long as they are written, but that is wrong. Writing the test first makes a difference.
Testing post-facto requires developers to have the fortitude to refactor working code until they have a battery of tests that fulfill these FIRST principles. Most will take the expensive shortcut of writing fewer, fatter tests. Such large tests are not fast, have poor fault isolation, require great effort to make repeatable, and tend to require external validation. Testing then provides less value at higher cost. Eventually developers feel guilty about how much time they're spending "polishing" code that is "finished" and can easily be convinced to abandon the effort.
A naive team (or boss) will want to start their agile process by researching, purchasing, and installing all sorts of automated, web-based tools. This is odd when you consider that a core Agile value is individuals and interactions over processes and tools. Agile begins with people.
In order to help break the tendency toward automation madness, Jeff and Tim offer these tips:
Do strive to automate build, test, and install processes. You don't really want to press a button on a web page to get the tests to run. You want the testing built into your IDE or watching your source directories for changes. The dream is to always "just know" if you've broken anything, and to make releases a non-event. That's hard to do when your processes require manual intervention. Automate early and often.
Automate dull, tedious, or repetitive work because it makes your day seem long. Prefer to acquire rather than build, because the fun work of automating the dull work has a siren's song. Get it working, make it invisible, and get on with the work. Don't fear shell scripts, IDE automation scripts, or the like. But don't be fascinated with them either; automation by individuals (or pairs) still has to pay off in increased productivity for the whole team.
Automate to free time for creative, intellectual work. Automating is not something you do to seem more professional, to impress your peers, or to earn some kind of "automation points" with superiors. The point is to not have to spend time doing trivial and boring work. Rather than getting good at switching between multiple files and cut-n-pasting code, one should eliminate the wasted motion through design or automation. Then one can take on more interesting, difficult work and complete it within an iteration boundary.
Never automate evolving processes. If you standardize a process that hasn't settled, you will either freeze the process prematurely or play catch-up as it continues to change. Alternatively, if you must automate an evolving process, be prepared to evolve your automation regularly or to discard it and take over manually.
Do not automate in order to avoid human interaction. Building "avoidance technology" is contrary to the first principles of the agile manifesto. It is never our goal to avoid interaction with the Customer or between QA and Programmers. Instead, consider what techniques (low-tech or otherwise) will draw all the participants closer together, or leave it alone.
Prefer physical artifacts with coercive immediacy over virtual artifacts stored somewhere in software. A corkboard loaded with 3x5 index cards will have more influence on a team's actions than a set of virtual cards in a web server somewhere. Manual processes with physical items have been a mainstay of agile development. Low-tech, high-touch processes can be quite satisfying.
Software serves the team, never the reverse. Nobody comes to work just to feed the machines. When the automation software demands tedious and/or meaningless efforts from the team, is it worth keeping? Automation is supposed to make it possible for us to do more interesting work, it is not supposed to increase tedium. When software gets in the way, the next retrospective action should be obvious.
All developers owe a debt to those who build great tools like Eclipse, Vim, Emacs, etc. Great tools tend to have a wonderful transparency. In a good editor, the code seems to almost dance and shape itself before your eyes. Great build tools like Paster and Maven take a lot of the effort out of building deployable modules. These things are wonderful. On the other hand, someone has to turn out the business applications during the day job, and for those hours of the day it is important that the application is the star of the day and the tools melt into the background.
Coding standards? That tired, old chestnut? Who talks about coding standards these days?
You do. You are talking about coding standards when you see someone has their tabs set wrong, when you complain about bracing, when you ask someone why they put their spaces where they do, when you add comments and someone else says delete them, ...
The need for a common style guide was glossed over in Collective Code Ownership, in the following paragraph:
The team has a single style guide and coding standard not because some arrogant so-and-so pushed it down their throat, but rather the team adopts a single style so that they can freely work on any part of the system. They don't have to obey personal standards when visiting "another person's" code (silly idea, that). They don't have to keep more than one group of editor settings. They don't have to argue over K&R bracing or ANSI, they don't have to worry about whether they need a prefix or suffix wart. They can just work. When silly issues are out of the way, more important ones take their place.
So what kind of document is the code standard? The authors have seen plenty of large, complex, detailed guides that strive to be comprehensive. Who has the time? Perhaps the answer should be to look for the least documentation one can afford. How much is too much? How much is too little?
The first recommendation is that a team should standardize to avoid waste. In this case, "waste" includes rework, arguments, and work stoppages while issues are settled. In this regard, it is better to have even a bad decision than a variety of opinions. If we find that we are enduring waste, then we add a line to the standard. Again, we have to build collective ownership, and any choice is better than arguing the same points repeatedly.
The team should start with an accepted community standard if one can be found. If there are multiple, choose one of them. For the Python community, PEP 8 is a wonderful starting point. One may treat a ubiquitous tool's default styling as a community standard (a community of tool users), since it seldom pays to fight your tools. Notice that public style guides tend not to be the smallest document one can afford, but the goal is really to make development time productive and non-contentious (with regard to silly issues), so even a longer document may be successful.
If there is some particularly contentious difference of opinion, then the team should time-box the argument. They might choose to debate for an hour, agree, and write it down. Teams that don't time-box the argument may never reach resolution. Do not let contentious issues remain, or they will remain contentious. Once agreed and recorded, the issue should not be revisited. Nor should a pair partner allow his partner to spend iteration time arguing over decisions already made. Nobody has to love the decision, but they should admit to a fight well fought and a final answer that should not get in the way of getting real work done well and often.
Ideally, the standard should be minuscule. A standard-by-example will have a better chance of being concise and obvious. Therefore, the standard should be mostly code and fit on one page. This is especially true of the initial version of the team's style guide. To some degree, arguments and uncertainty will lead to the accumulation of additional guidelines, but it is wise to start small. Our working documents should be like our code in the sense that they are as small, simple, and unambiguous as possible.
The style guide is an agile artifact. It is subject to corrective steering. A team should revisit the standard every iteration until no one cares any more. Iteration time is too valuable for these arguments, but retrospective time exists to help the team eliminate waste and turbulence. If further discussion of style points will help smooth the coming iteration, then it should be brought up.
The rule for simplifying any system is to obviate and then remove steps, processes, and instructions. If the code is written to standard, then the code ultimately becomes the standard and the standard becomes redundant. It is perfectly reasonable for a mature team to discard their documented style guide and keep the style. You will know the style guide is unnecessary when all new code looks like the existing code. And it all looks good.
Top 10 Agile Failure Factors
In working with teams attempting to be agile, we tend to see them make many of the same "mistakes" over and over. This list captures some of the most common pitfalls we've seen teams stepping into.
Please take a moment to help refine this card by completing a short poll. It'll be open for a week, and we'll feed the results back into this list--both to help prioritize it and possibly swap out some factors for others. If you want to challenge any of the elements, you can of course add your thoughts in a comment.
Sources: Beck, Kent. Extreme Programming Explained, Addison Wesley, 2000; Ottinger, Tim; Langr, Jeff.
When I first started teaching XP, I remember having more than enough to say about the other three XP values (communication, feedback, simplicity). Courage? Uh... well, you know, you need it to be able to stick to your guns with respect to the other three values. Moving on...
So I particularly like this card for its specifics on when you need to be courageous. It also highlights the reverse dependency of courage on the other three values: an environment where you're not communicating, acting simply, or generating lots of feedback is going to make you feel like Tommy (or Helen Keller) piloting a helicopter.
- To make architectural corrections - It takes little courage to put a model on paper and make everyone sign their name in blood, insisting that we cannot change anything going forward. We're all courageous about the distant future (which we might never be a part of).
- To throw away tests and code - "I worked on that mess all day long, are you kidding?" Often the best results are produced by discarding a poor solution and reworking it. Even short-term, this takes far less time than most people think, and long-term it usually repays the modest rework many times over.
- To be transparent, whether favorable or not - It's so easy to hide in your cube, or to use a long, drawn out development period that makes it seem like real progress is being made. Yes, short iterations can make it obvious that you know nothing about things like getting the software successfully integrated in a timely fashion.
- To deliver complete, quality work in the face of time pressure - Push back. If your manager tells you, "never mind with the testing, just get it out the door," push back. "We don't have time to pair or review this iteration." Push back.
- To never discard essential practices - Ditto, from the last one. If your manager says, "you're not allowed to refactor the code," push back. You will pay for omission of essential activities, to the point that having done them at all in the first place will have seemed like a waste of time. (In other words, if you're going to sling code like a cowboy, just do it and stop pretending you're agile or at all professional.)
- To simplify code at every turn - Does this really take courage? Fortitude, maybe. Perhaps the courage here is learning to accept that your code probably does stink, and that you can almost always improve upon it.
- To attack whatever code the team fears most - The usual reaction is to tread lightly. What's particularly interesting is that treading lightly can often lead to the worst possible design choices. For example, I need 3 lines of alternate behavior in the middle of a 2000 line method. My fear leads me to believe that the safest thing is to copy the entire method and change the three lines in the copy. Courage would allow me to refactor to a template method pattern or some other solution that resulted in only three additional lines of code.
- To take credit only for complete work - The business can't use software that's 99.99% complete. If it doesn't do what the customer asked for, it's not complete. The courage here is to accept that incomplete work delivers no value, and to admit that incomplete work must be reflected on the very visible plan.
Courage is especially important with respect to communication in an agile environment, so there are many more specific elements that we could add here. It requires courage to speak up about the challenges we face on a development effort. Retrospectives, for example, are not at all useful without the XP value of courage.
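The "attack whatever code the team fears most" point can be sketched in code: rather than copying a long method to change three lines, have the courage to extract the varying step, here via a template method. ReportGenerator and its steps are invented for illustration:

```python
class ReportGenerator:
    """The feared 2000-line method, reduced to its skeleton: the shared
    steps live once, and subclasses override only the part that varies."""
    def generate(self):
        return "\n".join([self.header(), self.body(), self.footer()])

    def header(self):
        return "=== report ==="

    def body(self):
        return "standard body"  # the 'three lines' that vary per variant

    def footer(self):
        return "=== end ==="

class SummaryReport(ReportGenerator):
    def body(self):
        # Only the variant behavior is overridden; nothing is copied.
        return "summary body"
```

The copy-the-whole-method alternative would have duplicated every shared step; this adds only the lines that differ.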
Drawing: Tim Ottinger
Photo: Libby Ottinger
Cleanup: Jeff L.
We show flash cards to students in order to help them completely ingrain a concept, to the point where they don't think about something, they just know it. The classic flash card presents a student with a vocabulary word or a math expression for which we expect almost immediate recognition and response. I show you the Spanish vocabulary word "acuario" and you blurt out "fish tank!"
So, red-green-refactor. By definition, TDD says write the tests first. They should fail, since you've not built the functionality that the tests specify, and a GUI test tool will show red at this point. That's useful feedback that tells you to write just enough code to get all existing tests to pass; the GUI test tool shows us green. Finally, you can ensure that the code has an optimal design during a refactoring step, since you have tests to give you the confidence to change things. Spend a few minutes, improve the design, and re-run your tests, which should all still be green. The entire cycle should take about 5 minutes on average, and no more than 10 minutes.
If you've been doing TDD for more than a few days--I mean really doing it and not writing code first and then sneaking in a few tests afterward and then telling people "I got muh TDD's done"--this cycle should be starting to sink in. Those of us who've done TDD a little longer--maybe a few months--don't think twice about it. Red-green-refactor feels like the natural way for us to build software.
- Red - A common mistake for newbies to TDD is to gloss over the need to see the test fail first. It's part of keeping on course; the red is extremely valuable feedback that tells us our assumptions still hold true. Once in a while, they won't, and you'll have saved yourself a lot of time by finding that out immediately.
- Green - Keep rough track of how long it takes you to derive a solution that passes all your tests. If you're taking 5-10 minutes on average, or more, start figuring out ways to take smaller steps.
- Refactor - You should always take advantage of the refactoring step. Even if you added perfect code (that doesn't duplicate any other code in the system), treat the refactoring step as "mandatory bonus time." Poke around the area you're changing. Get rid of a warning. Rename a test. Improve the readability of an existing method. Follow the boy scout rule: Make things a little better when you leave than they were when you arrived. Note that the refactor step doesn't necessarily represent a single green test run: You should look to decompose refactoring efforts into even tinier steps, getting a rapid succession of green bars before moving on.
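One turn of the cycle might look like the following; the leap_year function and its rules are our own example, not from the card:

```python
import unittest

# Red: this test class was written first and failed, because leap_year
# did not yet exist. That failure confirmed our assumptions still held.
class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_fourth_century_is_leap(self):
        self.assertTrue(leap_year(2000))

# Green: just enough code to make the tests above pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Refactor: with the bar green, tidy names and structure in tiny steps,
# rerunning the tests after each one for a rapid succession of green bars.
```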
Writing Characterization Tests
Source: Feathers, Michael.
Test-after development (TAD) is tough; dependencies and non-cohesive code can make for a significant challenge. That's why Michael Feathers' book Working Effectively With Legacy Code is so useful.
Does working with legacy code really require a different set of skills than doing TDD? Well, sprout method and sprout class are the first avenues you should always explore, and they are strictly about test-driving the new code. But most of the other WELC techniques require less test driving and more digging about, trying different things. Some of the techniques even border on being too clever--they're about problem solving, and you do what you gotta do!
An important step in modifying existing code is understanding what it does before you make any changes. Feathers says characterization tests are like putting a vise around code: You want to pin it down before you attempt changing it, so that you know if it slips a bit when you do. So in a sense, we are back to test-first.
How many characterization tests do you need to write? You can read the WELC chapter on effects analysis, and start to get scared that you're going to have to write a lot of tests. Or you can use confidence as a guideline, writing as many tests as you think you need to understand the code you're about to change.
The flash card describes what are pretty obvious steps, but they back one of the underlying themes in Feathers' book: be methodical, be safe.
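A characterization test in miniature: run the inherited code, observe its answers, and pin those answers down as assertions, forming the vise. The legacy_price function and its discount rule are a stand-in for real legacy code:

```python
def legacy_price(quantity, unit_price):
    """Imagine this is inherited code we dare not change yet; its
    discount rule is undocumented. (A made-up stand-in.)"""
    total = quantity * unit_price
    if quantity > 10:
        total *= 0.9
    return round(total, 2)

# Characterization tests: we ran the code, observed the results, and
# recorded them. They document what the code DOES, not what it should do.
def test_characterize_small_order():
    assert legacy_price(5, 2.0) == 10.0

def test_characterize_bulk_discount():
    assert legacy_price(20, 2.0) == 36.0  # observed: 10% off over 10 units
```

With those in place, a refactoring that "slips a bit" shows up immediately as a red bar.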
Well, it had to happen sooner or later: a card that bothers me quite a bit. Mike Cohn's book Agile Estimating and Planning, a book I recommend for just about anyone working on an agile project, popularized this story card format.
"As an actor, I want to accomplish some goal, for some reason"--very similar to what we did with use cases in the early 90s. And back then, we spilled time bickering over format and wording, so I give credit to whoever came up with a template to help sidestep those wasteful arguments.
The problem is that the story card ain't the thing, it's a placeholder and "promise for more communication," per Ron Jeffries. The card could have one word on it, and that would be sufficient. Bob Koss often told a story about a story for the U.S. Navy regarding how to adjust large gun firing based on sonic feedback. The story card said simply "boom splash!" That was enough to initiate talking about it, and no one forgot who the story was for, what it meant, or why it existed over the course of the iteration.
I realize I'm railing against a potentially useful flash card. As a tool, the format template can help us considerably in improving our collections of stories. Some of the same lessons from use cases apply: Thinking about the actors involved (the "as a" people) can help trigger the introduction of important stories that might be missed. The phrase "I want to" reinforces that stories are goals for customers of the system, and not just technical pipe dreams. And "so that," well, as we write out dozens of stories in release planning, we'll want to remind ourselves of the rationale behind certain stories (but not all of them!). And sometimes this "why" can trigger other useful considerations.
To be fair to Cohn, he says many similar things in his book. But you know how people are. I've already encountered spreadsheets and other software tools that rigidly insist on this format. Never mind that the actor is the same for every last story, or that the reasons are pretty obvious for most stories, or that typing all that stuff is just redundant crud that we would stamp out if it were in our code.
Remember: The cards we present as part of the Agile In a Flash project are tools, not gospel. They're here to help prod you if you get stuck, to give you ideas, and to give you guidance. In most cases, you should follow them unless you have darn good reasons--and even then, you should follow them until you know why the rules exist, and only then should you consider taking your own path.
Breaking Down Larger Stories
Source: Jeff Langr, Tim Ottinger; also, Cohn, Michael. Agile Estimating and Planning.
One ideal for agile development would be to deliver a "done done" story every day. We get a lot of resistance on this thought, and we push right back. Our goal is not to insist on the ideal, but to get you moving toward it, and to get you to stop defending stories that are too large. "Too large?" There's no consensus, but anything taking over half an iteration should undoubtedly be scrutinized.
One way to deliver stories more frequently is to collaborate a bit more. Instead of one developer working a large story alone (sadly common), consider adding another developer or two or three. If they can help deliver the story sooner without inordinate overhead costs (including coordination efforts to avoid stepping on each other's toes), go for it.
The other route is to split stories. Breaking down larger stories is an often challenging proposition, but the more you do it, the easier it gets. Next time you have a large story, step through the list on the card.
The most challenging stories are "iceberg" stories. These are stories where the customer sees only a small impact, but the algorithmic or data complexity required to implement the story is large.
Sometimes a split isn't worth it. For example, you look to consider an alternate case as a separate story, but the developer says, "well, it's only going to take me ten minutes to implement that."
A key thing to think about when looking to split stories is the tests.
- Defer alternate paths / edge cases - If anything, thinking about how to split stories around alternate cases will force you to make sure you've captured all of them!
- Defer introducing supporting fields - If the story involves user input of good-sized chunks of data, support a few key and/or significant fields and introduce the remainder later.
- Defer validation of input data - Demonstrate that you can capture the information; add the ability to prevent the system from accepting invalid data later.
- Defer generation of side effects - For example, creating a downstream feed when the user updates information.
- Stub other dependencies - "Fake it until you make it."
- Split along operational boundaries (e.g. CRUD) - This is directly from Cohn's book. Note that you don't necessarily have to have "C" (Create) done before you implement "U" (Update).
- Defer cross-cutting concerns (logging, error handling) - This generates an interesting challenge around an important point--devising acceptance tests that verify the addition of robustness concerns.
- Defer performance or other non-functional constraints - Sometimes it's possible to bang out an implementation using an overly simplistic algorithm. A similar challenge: How do you devise acceptance criteria for the performance improvement?
- Verify against audit trail that demonstrates progress - I'm not enamored with this one, but sometimes there are no other obvious solutions. Sometimes this will expose more implementation details than you would like. Perhaps these tests can disappear once the entire story gets completed.
- Defer variant data cases - Will it simplify the logic if we don't have to worry about special data cases? Or more complex data variants? For example, it's probably a lot easier to devise a delivery scheduling system that supports only one destination.
- Inject dummy data - In some cases, data availability or volume can be a barrier to full implementation of a story.
- Ask the customer! - You may be pleasantly surprised!
Well, this list isn't necessarily complete, and needs a bit of work. What other story splitting mechanisms have you used successfully?
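The "stub other dependencies" item above can be sketched quickly: an early story slice runs against a canned stand-in until the real integration story is played. StubPaymentGateway and checkout are hypothetical names invented for the example:

```python
class StubPaymentGateway:
    """Stub standing in for a real payment service, so the checkout story
    can be demonstrated before the gateway-integration story is done."""
    def charge(self, amount):
        # Canned response: "fake it until you make it."
        return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    """Hypothetical story slice: complete a sale if the charge is approved."""
    receipt = gateway.charge(cart_total)
    return receipt["status"] == "approved"
```

Later, a real gateway object with the same charge interface replaces the stub, and checkout doesn't change.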
Refactoring is a large topic with enough material to fill a few books. In fact, it has filled a few good books. Clearly we don't have room for all of that on an index card but at least we can provide some meta-guidance on the topic.
Refactor only with a green bar (all tests passing), never with failing tests. No matter how much you want to rename, extract, or modify the code, you really should have green (even an ugly green) before you try to refactor.
Always run the tests before and after the refactoring steps. Make sure you know that the system worked before and after each step: check before, so you aren't confused about the origin of any breakage, and after, so you know you've done a good job. Skip this at the peril of having to debug or revert a set of changes.
Refactor incrementally: have only one reason to fail. This cannot be over-recommended. Michael Feathers describes programming in his book as "the art of doing one thing at a time," which has been a very good bit of advice through the years. There is a good reason that the refactoring book gives recipes of tiny steps. This incrementalism is a worthwhile and hard-won habit.
Add before removing when you are replacing a line. For instance, if there is a complex calculation and you are replacing it with a function that calculates the same result, place the new call & variable assignment right after the old calculation. Run the tests. If the result is different, comment out the call and try again. Don't delete the original until you know for sure that the new code is equivalent.
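A sketch of "add before removing": the new extracted function runs right alongside the old inline calculation, and the old code is deleted only once the two agree under test. The invoice_total example is invented for illustration:

```python
def compute_total(items):
    """Extracted replacement for the inline loop below."""
    return sum(price * qty for price, qty in items)

def invoice_total(items):
    # Old inline calculation, kept while the replacement is verified.
    old_total = 0.0
    for price, qty in items:
        old_total += price * qty

    # New call placed right after the old calculation; both run side by side.
    new_total = compute_total(items)
    assert new_total == old_total  # temporary check; delete the old loop
                                   # only once this never fires under tests
    return new_total
```

When the suite stays green with the assertion in place, the old loop and the assertion both come out, one small step at a time.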
If you move a method to a new class, you must move or create a test. Otherwise your tests become increasingly indirect. Indirect tests do not produce a good smell.
Changing functionality is not refactoring. That would be correction or implementation. Calling a functional change a "refactoring" is not entirely honest. Refactoring does not change what the code does; it only changes how the code is structured.
Increasing performance is a story, not refactoring. This is another misuse of the term "refactoring." The point of refactoring is to improve the structure of the code. While a performance gain can accidentally result from simplifying the code, it is the simplification, and not the performance, that is the object.
The books mentioned above will give a better idea of how to perform refactoring. At least the index card can help you remember the circumstances in which to perform the task.
Collective Code Ownership
See Extreme Programming's writeup on the rule of collective ownership.
Next to pairing, the Agile practice that causes the most consternation among developers is the idea of collective code ownership. It is not a difficult concept to grasp, but individual ownership is a difficult idea to let go of. Developers inclined or acclimated to working solo find this four-bullet card quite threatening, and the motto above it positively chilling.
But what if we were truly working as a community? We would give up having the final say about how our code is formatted and how variables were named. We might find other people's pet peeves getting equal say. We might find that other people can take over the code we wrote. We might be able to be sick for a day or go on vacation without work ("that only we can do") piling up. We might be able to simplify the team's work by having a single work queue. We might find that we can help make other people's code better. We might find that they can improve ours. But we have to get over the idea that the code is ours.
It's not ours. It really belongs to the customer or company the second it rolls off our fingers. It won't follow us home, and we can't take it with us when we leave. It is copyrighted by the company as a work for hire. Let's not lose sight of that nugget of reality.
Anyone can modify any code at any time so that nobody has to seek permission to simplify and speed the code, or to add features that might touch code they don't own. Tests can be written because code can be made testable. This frees the development team from fear of a certain variety of personal conflict getting in the way of getting real work tested and delivered.
The team has a single style guide and coding standard, not because some arrogant so-and-so pushed it down their throat, but rather the team adopts a single style so that they can freely work on any part of the system. They don't have to obey personal standards when visiting "another person's" code (silly idea, that). They don't have to keep more than one group of editor settings. They don't have to argue over K&R bracing or ANSI, they don't have to worry about whether they need a prefix or suffix wart. They can just work. When silly issues are out of the way, more important ones take their place.
Original authorship is immaterial because "it's my code" has never been a valid reason to avoid improving or modifying a system that doesn't work or which doesn't do enough. In agile teams, we try not to talk about the original author when we are working on code. It simply is. It had an original intent, and we hope to strengthen it and make it more clear. The code exists and works. We want it to work better. This focus on the behavior of the code over the personality of the author is freeing as well. We can learn together once we let the guards down.
Plentiful automated tests increase confidence and safety so that we can learn while coding. If code is test-driven, every violation of the original intent (as expressed in the tests) is met with a test failure. Rather than pre-worrying every possible code change, the developer pair can make progress in short order and can refactor without fear. This is better for the company, better for the programming pair, better for the individual.
Reading over this card, I wonder if we missed the more important point that work does not have to queue for individuals. Too often we find work piled up for one or two siloed developers while two more are nearly idle because their silos are not busy. This is inevitable when our team structure involves multiple queues, and it is a shame. If we broke down the silos and followed pairing, testing, and collective ownership practices for just a little while we would find ourselves in a situation where any number of developers would be able to pick up the jobs that are waiting.
Examine a few common types of situations:
Scenario 1: Fred can produce automated filing reports in a day. Abe and Margaret together could get it done in two and a half days. It makes sense to have Fred do this work, yes? But it's Wednesday, the work is due Friday, and Fred has 16 hours of more important work also queued for him. Now it seems foolish to assign Fred, because it will cause us to miss a release, or else Fred will work copious overtime while Abe and Margaret go to the bar at 3:30pm. A good deal for Abe and Margaret, bad for Fred, and risky for the company.
Scenario 2: Fred has three days of work to get done, and only two days to complete it. Nobody else can help, because Fred has always maintained this system alone. His wife calls to tell him that their newborn has pneumonia. Should he work late and fail as a father, or go home and fail as an employee?*
Scenario 3: Abe and Margaret already have worked with Fred in the automated filing area many times over the past three months, and feel pretty comfortable there. There are a lot of tests, and these tests help them to avoid checking mistakes into the code line. Fred has three days of work to be done, and a new two-day assignment was added to the pile. Margaret feels pretty comfortable with the new work, and should be done in two days. Fred and Abe team up on the remaining work, and at the end of the slightly-longer-than-usual day they're half-done. They should be able to finish up tomorrow. Fred's had bad Thai food. He begs off and dashes home. Charlie comes in to pair with Abe and Margaret as needed so that the release can finish on time.
This is an argument not only that a team should pair and explore the project's code, but also that they should be doing so at all times. The team should always be able to cover for members who are inconvenienced by family emergencies or personal illness.
Some common objections:
It is dehumanizing and will reduce good programmers to undifferentiated "cogs". It has been our experience that it improves the human side of the operation, and that great programmers are not great because they keep a walled garden of code working, but because they are good at programming. Why hide greatness under a bucket? Why limit a great programmer to a small corner of a system if you want the system to be great? Wouldn't a great programmer be better recognized by peers who get to share in that great work?
It won't work with a shared trunk/branch. It does, all the time. The developers will share code more often, which means that they will integrate changes into their branches sooner rather than later. By sharing more, they have smaller and simpler integrations.
It doesn't work in the face of refactoring and renaming code. You would think, yet somehow it still does. Common code and frequent integration make this largely a non-issue. And if people must work in the same small area of code, they can locate closer together and talk over their changes on-the-fly to make integration easier yet.
It will be hard to get any design done. A team does settle into shared design with surprisingly little thrash. The idea that good design is the work of a single enlightened individual, and that it cannot be understood or appreciated by peers, is largely unsupported. An incremental project does small acts of design all of the time, and the team often holds impromptu stand-up meetings to talk over changes that impact the system on a larger scale.
You can't really get people to do it. However unlikely it seems, many work precisely this way each day. More surprisingly, people come to truly love the flexibility and expanded influence that collective code ownership gives them. This is particularly true in agile shops where they practice pair programming and unit testing.
*Rhetorical question. You always choose the child. Of course, you may still face dire consequences at the job. Personally, I would hate to be the manager who has to tell clients that he can't make the release date because he only has one programmer competent to do the work. Who should really be in trouble here?
The original XP take on what made a good story was that it had to have business value, it had to be estimable, and it had to be testable. That might have made for a simpler and also appropriate acronym, VET. But the three newer elements (I, N, S) add considerable value in helping you shape a candidate story into a real story that can be accepted in an iteration.
- Independent - Any story could be the next one done; the customer should have the final say. As stories complete, some may become cheaper and others more expensive. Tim recommends estimating all stories as if they were first, and re-evaluating estimates before the iteration planning game.
- Negotiable - A story is not a contract! It is a "promise for communication," as we used to say. It shouldn't be flush with every last detail.
- Valuable to the customer - Don't create technical stories! A primary goal of using stories is to demonstrate to the customer that we can deliver business value incrementally, so that the customer can help steer us and provide us with feedback. I'll repeat, because it's very important: Don't create technical stories!
- Estimable - A story might be impossible to estimate if it's too big, or if we have no idea what's involved, in which case we probably need to go off and do a bit more research before we present this story.
- Small - Some sources replace "small" with "sized appropriately." Size will vary by shop, but obviously, a story can't represent more than a single iteration's worth of work. A better rule of thumb is that no story should represent more than half an iteration. To me, an ideal size means your team can crank out a story every day. It is possible to make stories too small, but it's very rare, so the general rule is "smaller."
- Testable - If you can't verify it in some manner, how do you know it's done?
Want a challenge? Take a nice, long paper you've had around for years, expand it into a chapter in a book, and then try to fit it on a flash card. Flash cards are like poetry. You have to cut the material down to the smallest and most poignant form you can, so that it is memorable and sparks memories in the psyche of the reader. I'm hoping that this one doesn't find me a poor poet.
The name says what it is for, not what it is. Poor names tend to say nothing, or the wrong thing. Take the variable 'x' for example. How about psz? I hope it's a pointer to a zero-terminated string, but I can't bet on it and have no idea what it is for. The English word for one piece of automobile safety equipment is "windshield". I'm told that the Japanese name for the same device is "front glass". In variables, we lean toward "windshield" and away from "front glass" if we want our code to make sense to our peers.
Avoid encodings of any kind. Psz (from above) encodes type information. IDoldrum contains an encoding "wart". Psz_agey may be a string containing a person's age in years, but that's not obvious. The example from paper and book is "gen_ymdhms". These encodings can be learned, and having been learned they join the other pile of pointless minutiae in the reader's mind, but that's not a good reason to use them, any more than I should pinch your arm just because I know you'll heal. We write names to make code obvious and clear.
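A tiny sketch of the difference. The value and the decoded name here are invented for illustration; only "gen_ymdhms" comes from the paper and book:

```python
# The same value under three naming styles (illustrative only).

# Encoded name: readable only once you've learned the cipher.
gen_ymdhms = "2011-03-02 14:30:00"

# Bare name: says nothing at all about purpose.
x = "2011-03-02 14:30:00"

# Intention-revealing name: says what it is for, no decoding required.
report_generation_timestamp = "2011-03-02 14:30:00"
```

All three hold the same data; only the last one lets a peer read the code without stopping to translate.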
Functions and variables have the same level of abstraction. For this reason, in a Person class, we expect fullName, birthDate, address. We don't really expect stringDict or autoPtrArray to be part of a person.
Use pronounceable names. Most people read text with their inner voice, so a pronounceable name keeps them from saying "blah-blah-blah" mentally (leading to errors). It is also much easier to collaborate with a pair partner if you can actually pronounce the names of the symbols you're manipulating. Finally, you can explain the program to new partners, bosses, or the like if you have pronounceable names. Names exist to communicate. Don't hamstring them.
Shun names that disinform or confuse. Don't call something 'list' if it is not a list. (Even if it is a list, don't call it 'list' but that's covered elsewhere in this list.) Avoid calling a variable 'ram' or 'mem'. Don't call your internal integer a 'socket' unless it is a socket. Don't use 'iSomething' for non-interfaces. If you use a name that causes a peer to misunderstand your code, take it as a coding error and fix it. Renames are cheap.
Make context meaningful. Don't add gratuitous warts at the front or back of your names. Especially the fronts. If everything in your application is named with the prefix 'app_' then you are causing people to look into the middles of names to find meaning in the name. Likewise, the dotted-names common in object-oriented code should have meaning in their context (person.age() v. person.session()). If a variable name is out of place in its namespace, class, or module then perhaps it is because it is in the wrong place.
Length of identifier matches scope. For a local variable in an anonymous one-line function, x, y, z, i, j, and k are all fine variable names. For a variable in a 12-line function they are insufficient. For a parameter to a function call they are wholly inappropriate. As class names or module names, these are insanely poor choices. A global variable with a short name is an abomination on so many levels. This rule was gleaned from James Grenning, and I think it should have been in my original paper. It's a good rule.
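A quick sketch of the rule in Python (the domain names are made up): the tighter the scope, the shorter the name can afford to be.

```python
def total(prices):
    # The scope of `p` is a single one-line expression: a one-letter
    # name is fine here, because the reader never loses sight of it.
    return sum(p for p in prices)

# Module-level scope: the name must carry its full meaning on its own.
monthly_subscription_prices = [9.99, 4.99, 14.99]
monthly_revenue = total(monthly_subscription_prices)
```

Swap the names around (a global called `p`, a loop variable called `monthly_subscription_prices`) and the code becomes harder to read in both directions.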
No lower-case L or upper-case O, ever. 1t should be pretty Obvious that nob0dy shou1d ever cause others to confuse the letters l and O with the numbers 1 and 0. It obviously is okay for you to use L in names like oldName, and capital O in something like Organize, but it is an act of purest naming evil to have a name that consists only of O or l. It doesn't take a genius to see the confusion in "return l - 12 > O;"
The word "discipline" often is invoked to say that you should do more grunt work, jump through more bureaucratic hoops, and extend your hours. That isn't necessarily the same as discipline in an agile sense.
Discipline in the agile sense is more like character, in that it consists of doing the things you know are right. What things are "right"? In this case, it is those things that align with the XP values:
Impose simplicity on all the software you touch, because code will naturally decay if not tended, and because it's often easier to invent a complex (or complicated) solution than to create a very, very simple model. Also because changes tend to cluster, so the code you change today is likely to be touched again tomorrow and next week. It is also possible that you may have to change it months from now when the current design foibles are no longer foremost in your mind.
Clean the code you touch. If you only make the smallest changes possible, it may seem like good risk management, but it equates to making interest-only payments on your technical debt. It's not a way to make a better next month. Instead, all the participants on the project should clean all of the code. Sometimes we avoid this, for fear that the code will be continually rewritten by different people. We consider that a good thing, if you replace "rewritten" with refactoring. We find that code moves from cleaner to cleaner forms as different people touch it. Each programmer should clean the code, and expect the others to clean it further.
Work on only one thing at a time, because multitasking is a deal with the devil. If you split time among tasks, you do them all less well than if you could focus on one at a time. Because you have many things to do each day, you may want to experiment with the periodicity of your efforts to find a sweet spot. Some people recommend 48-minute work sprints; others suggest 25-minute stretches. The thing they agree on is that focus on a single task means getting work done. Resist the urge to multithread your head.
Work in exceedingly small steps. Michael Feathers describes programming as the art of doing one thing at a time. We learn through refactoring and Test-Driven Development to do only very small things between tests so that we only have one reason to fail: if one changes only one line of code at a time, and runs all tests after each change, then one knows exactly which line of code broke the tests. This applies to non-programming tasks too. One discrete, small step at a time can lead to more things being truly done sooner than ever before.
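A minimal sketch of that rhythm, using bare `assert`s as a stand-in for a real test runner (the function and its name are invented for illustration):

```python
def zero_pad(n, width):
    """Pad a number with leading zeros; grown one assertion at a time."""
    return str(n).rjust(width, "0")

# Step 1: write one failing assertion, then the one-line change
# that makes it pass.
assert zero_pad(7, 3) == "007"

# Step 2: one more assertion, one more (tiny) change if needed.
# If this fails, only the last change is suspect.
assert zero_pad(42, 2) == "42"
```

The point is not the function but the cadence: after every one-line change, the whole suite runs, so a failure always points at exactly one change.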
Shorten your feedback loops to the smallest period and the smallest number of participants possible. Do you need to have security review of code? Move the security reviewer closer to the coders. Perhaps embed them in the team. Perhaps bring them in as pair programming partners. Does the Customer take a long time to respond? Co-locate them with the team, demo changes as they are made, have the Customer run the iteration-end demo. Feedback loops are better if they are richer, faster, sooner.
Be transparent, especially in difficulty. This is where discipline gets sticky. The easiest thing to do when one is in trouble (or wrong) is to lie about it. This is also the worst thing that could happen. The point is to be transparent with those who can help you. A hidden problem is never addressed. If you are being assigned more work than you can complete, it's better to get the work flow reduced than to pretend you are going to "make it up" by working harder in the future. The whole "fail fast" value depends on being frightfully frank. Note that transparency in success is a pretty darned good idea, too.
Obviate and remove process steps because every process is inefficient to some degree. Note the word "obviate", because every moderately successful process produces some kind of (perceived) value at each step. If you can cease reliance on a given value, or gain equal-or-better value in other steps, then you can remove the targeted step. For instance, you may have a long QC cycle following a coding sprint. If you can automate all tests for existing features then you may eliminate manual regression testing. If you can embed testers in the development team (shortening a feedback loop) then you can have them add to the automated test suite on-the-fly. It is possible then to eliminate the tail-end QC cycle. The same thinking applies to the process you are automating inside the program you are writing.
Go home before you break something. Or at least after reverting the first breakage that is due to lack of rest. Do not work yourself stupid. One might have to "power through a day" sometimes, but on those days one should not allow oneself to do anything important alone. And one should not push too hard to be the decision-maker of the pairing. One should listen to one's pair partner suggesting that one might need a coffee break or to leave early. Beware the urge to work scads of overtime in order to seem dedicated; be the one who works best, not the one who works longest. Don't expect to be 100% when you're on prescription antihistamines. Go recharge your batteries, and come prepared to work all the harder when your brain is fresh tomorrow.
Great Laws of Software Development
I can never remember the name of Parkinson's Law! I also think these are cool laws, so I really like this card. I'm sure Tim and I could add a couple good laws to the card. Got any favorites?
Gerald Weinberg has a treasure trove of laws in his book The Secrets of Consulting; those alone could easily fill a few cards.
The very existence of "agile in a flash" reference cards might suggest that you can be agile if you just check off all the items on the cards. Nothing could be further from the truth. If you treat agile as a checklist process, you will probably not succeed with it. These cards are gentle reminders and guidelines, not absolute prescriptions and rules that can't ever be broken.
This card on stand-ups is a classic example. I can't remember how many times people have told me that they stopped doing stand-ups because they seemed such a waste of time. And lots of times, they absolutely were: the meetings were getting too long, and they were tedious.
Well, sometimes you do just want to follow the rules. Or at least, you don't want to break the rules until you understand why they exist. Stand-ups that go on too long aren't usually stand-ups, they're sit-downs (and thus break the rules). Once people get seated, they often have a tough time getting up off their rear end. We're here, we're comfortable, let's just dig into all these issues.
Stand-ups, as Ron Jeffries once remarked on one of the many lists he frequents (onto which he has probably posted over 100,000 messages, and so I'm not about to go try and find the message), are a bare minimum for daily conversation between a team. A stand-up isn't intended to capture all group conversations that should occur during the day. It's a starting point, to see who's there, and find out who you need to talk to later.
I've blogged a few times about how tedious stand-ups are often a direct result of avoiding collaborative work. In addition to a throw-it-over-the-wall mentality (BA->dev->QA), developers work in silos: You do that story, I do mine. There's little reason for us to talk to each other. The only people who really care much about what you're working on are you and the project manager. I care only enough to make sure you're not dragging us all down. Otherwise I'm not really listening to what you have to say during the stand-up; I'm figuring out what I'm going to say and trying to think about how to make it sound clever.
Don't work that way. Find a way to work as a team and collaborate on stories, and look to deliver a new story every day or two if you can. (And don't make excuses why you can't get it to a day or two, but instead keep trying to move in that direction.) Your stand-ups will become far more interesting!
Declaration of Interdependence for Modern Management
Alistair Cockburn spearheaded the derivation of this declaration along with a number of signatories (including Lowell Lindstrom, who both of us know; there's my name-dropping for the day). At one point these were principles for project managers only, and targeted at agile teams. But the declaration authors realized that the final set of principles had much broader applicability--hence the use of the term "modern management."
Similar to the agile manifesto, the declaration of interdependence for modern management sounds to me like motherhood and apple pie. Of course we want to increase ROI, be creative, and deliver reliable results! Let's see if we can figure out what each principle is really saying.
Increase return on investment by making continuous flow of value our focus - This directly corresponds to lean production: we look to produce small, complete features with high quality before moving on to the next. Phased approaches such as waterfall certainly go against this principle, as do purely iterative approaches that do not insist on useful incremental deliveries.
Deliver reliable results by engaging customers in frequent interactions and shared ownership - On far too many projects, the customer coughs up some needs, and then the development team disappears, reappearing much later with some completed or partially completed product. The customer has no ownership of the process or even of the priorities for feature development. There is no definition of "frequent" for this principle, but if you are following along with the first principle--continuous flow of small bits of value--frequent would seem to be almost as often as each new feature was introduced to the development team. Specifically: at least once per iteration, and hopefully more often.
Expect uncertainty and manage for it through iterations, anticipation and adaptation - There is a balance between planning (anticipating) and reacting (adapting). This principle absolutely suggests that we would never want to plan an entire project at the outset of a project, and then set that plan in stone. But it also suggests that we should look to continually anticipate upcoming events. With this principle, the declaration authors felt it was important to tell the agile community to heed information about anticipated events, i.e. not to always wait and react.
Unleash creativity and innovation by recognizing that individuals are the ultimate source of value, and creating an environment where they can make a difference - Many in upper management already think they do this, but ask any developer to honestly assess the way they feel about how they are treated. How many of you developers on agile teams have had management tell you that you weren't allowed to write tests, or refactor the code, because those were just things that were keeping us from shipping the software?
How many times have you heard someone in management say: "And finally, we'll make sure we have fun on this project!" Period. Their idea of creating a fun environment is to declare that people "vill haff fun!"
Boost performance through group accountability for results and shared responsibility for team effectiveness - Even for many teams trying to be agile, the typical implementation is silo-mode development. BAs hand off requirements to developers, who hand off product to QA. Each one of these can think they did a good job, and be rewarded by managers for such, yet we still end up failing miserably. I've blogged numerous times about how it all leads back to principle #1: delivering a quality story at a time as a team before moving on to the next.
Improve effectiveness and reliability through situationally specific strategies, processes and practices - Pragmatism and adaptability. Don't stick with pairing, for example, when the culture or physical environment has already proven not to support it.
I've read a good number of books on management and have attended training such as Covey's Seven Habits. What always strikes me is that the material presented always seems so obvious to me--almost a waste of time. There is the Catch-22: Those who are interested enough to grow themselves in management already know the principles behind the training or books. Those who don't know it aren't the kind who will want to be bothered with attending classes or reading a book.
Principles of the Toyota Production System
When selecting books to buy (a bit of an obsession with me), I pay some attention to Amazon reviews. I've found that the more useful reviews are those with 2, 3, or 4 stars. People rating books at the extremes (5, the highest review, or 1) often have biases, agendas, or serious personality disorders. Still, I do read a few of the 1-star reviews just to find out why people hate the book.
The Toyota Way, by Jeffrey Liker, is an excellent overview of the Toyota Production System (TPS). It takes a very positive stance on the value of the TPS, however, and does not go out of its way to critique things.
Out of 84 reviews, there are only four 1-star reviews of the book, a handful of 3-star reviews, and everything else is fours and fives. Three of four 1-star reviews complain that Liker paints a picture of the "old" Toyota. These reviewers have first-hand experience with the implementation of TPS at one of the Toyota plants. A quote from a negative review: "When Mr. Cho opened this plant back in 1988, we were a much better run organization and we earned many J.D. Power awards because the environment at that time was the application of many of these 14 Principles."
Thus even the small number of people who slammed the book see the value of TPS, a specific implementation of lean principles. The reviewer recognized that current challenges were caused by moving away from the process's principles. It is much the same with agile, which at its heart is a lean process. The principles of the TPS and the principles of agile are solid. Failures in agile or lean are usually caused when the application (or lack of application) of specific practices is not reconcilable with the principles.
That's not to say that TPS or agile processes don't create challenges. Indeed, the processes themselves introduce problems that would not exist otherwise. (For example, the core notion of "iterate and increment" in agile can far more rapidly degrade a system's design if proper controls don't exist.) There are always tradeoffs!
The core principle underlying the TPS is that the "right process will produce the right results." This message underlies one of our largest schisms in the software development community today. During a fairly intense series of blog debates involving Bob Martin, Jeff Atwood, and Joel Spolsky, Atwood posted, "If you're not careful, you might even slip and fall into a Methodology. Then you're in real trouble."
Toyota asks their workers to use their heads, to look for continual improvement (kaizen), as part of the TPS process. So does Uncle Bob. Atwood and Spolsky also say to use your head--but eschew process. Notably, Toyota quality has suffered in the past few years, as our Amazon reviewer also indicates. Yet Toyota will stick to its guns with TPS. The process and its principles are not the problem. In fact, the process lays the foundation for best opportunities to understand what's wrong with its implementation.
Pair Programming Smells
Source: Tim Ottinger and Jeff Langr
Just as there are "code smells," there are process smells. Pairing is supposed to be a short-term collaboration between equals. Done well it is a high-energy, enabling process that helps teams to gel and to produce much better code. Done badly, it is a rote process to be dreaded.
Unequal access is typical in today's cubicles when the computer is in the corner. While such an arrangement is wonderful for solo work, try pairing when one partner is forced to sit behind the other. Both pair partners need to be able to reach the keyboard and see the screen equally well. It is also helpful if both can use the installed IDE or editor.
Keyboard domination can happen even when partners have equal access. One developer doesn't allow the other to type, either by verbal or nonverbal cues. Sighing when a partner tries to drive, pulling the keyboard out from under their hands, and the like are hardly partnership-nurturing behaviors.
Pair marriage is the name for the arrangement where a pair of programmers are stuck together for entire days, or entire weeks, or entire projects. A healthy pair programming day will include several changes of partner. In some situations, you may have programmers who prefer to pair together. That's very nice, but they will develop the same point of view and end up making the same mistakes together if they don't break it up from time to time. We switch at least once a day.
Worker/Rester is the situation where one of the pair programs while the other watches YouTube, plays games, or reads a book. Pair programming is not like a cross-country trip where one of you can sleep while the other drives. The point is that both are actively participating. If one needs a rest, maybe it's time to take a 10-minute break. Or switch partners.
Second Computer is usually indicative of a worker/rester situation where one brought along some entertainment for the time when he's not working. Not working? How is that pair programming?
"Everyone does their own work" is absolutely not the way to foster collaboration. In EDTOW, each developer is responsible for a different piece of work. If they complete programmer A's assignment, then programmer B's work is at risk. By holding them individually accountable for separate work projects, the manager ensures that the wise programmer will avoid pairing (even if ordered to do so). Instead, the pair must be responsible for their task. If the other pairing smells are eliminated, you will find that developers do not use this practice to evade responsibility. Instead, a lazy coworker has trouble finding anyone to pair with. That is an easily detectable situation; 360-degree reviews make it a certainty. B Carlson also found that sharing a story across teams helps to relieve the tail-end QC crunch.
90% of work 90% done shows that everyone is doing their own work (EDTOW) instead of pairing, or else the team is swapping tasks instead of completing one and moving on to the next. Either way, it shows a serious lack of collaboration.
People who can't stand to program together have more than a personality problem. They have a professionalism problem. The team needs to be refactored. It may be that one of them can move to a different team where they don't have personality issues that keep them from doing their work. It may be that they have bad blood everywhere they go, indicating that all the teams may be better off without them. Counseling or "counseling out" is recommended if the root cause is one bad player. If the team is large enough, it's possible that the two can be kept apart, but allowing and protecting insensitive and offensive players is not a way to build a team. It is a way to build resentment.
Debates lasting more than 10 minutes without producing new code is normal among pair programming newbies. It is nonproductive, annoying, and tedious. The better answer is for one or the other to quit telling and start showing how they want the code to work. Arguing in code is better than arguing about code. You can always undo changes, or perhaps check into the repository or archive the code before the code argument so you can revert unwanted changes.
Pairing is a little scary. Tim confesses to feeling uneasy at the start of each new pairing session. It is possible that one partner knows more than the other (or thinks so) or that there are stylistic differences or different paths to different-but-equally-sufficient results. On the other hand, he confesses that he always learns something and does some of his best teaching in pairing sessions.
One can hate pair programming (at least for a while) or find it uncomfortable (at least at first), but it sure does make the code nice. I suspect that nothing produced by an individual is quite as good as if it were produced by a fresh, actively-engaged pair.
I've been working hard lately on getting teams to experience a good retrospective, after hearing many teams say they tried it and quickly fell off the practice. I've been challenged by the fact that every one of the retrospectives I've helped facilitate recently has been distributed. We've used phone bridges with WebEx; we've tried to incorporate the use of some of its attendant tools (whiteboarding, chat, and polling) to help make up for the extreme inhibition of interpersonal communication a phone environment creates.
Letting the team understand the steps you will take them through is one way to keep your retrospective meetings effective. Sticking to the outline can prevent your team from getting derailed with solutions while you're still in the data gathering or insight generation steps.
A key distinction I've found between effective and ineffective use of retrospectives relates to what the team decides to commit to. Vague promises that aren't tracked or verified upon completion eventually lead the team to choose not to bother with retrospectives. Instead, a commitment needs to be treated like a story. A perhaps better analogy is to an experiment, as there is a hypothesis: "This change will lead to some form of improvement." In any case, it should be tracked and tested. Generally, the more specific the better: "We will do a quick status and mention of issues in the standups, and then allow people to drop off the call who don't need to be involved in follow-up discussions. We will work on completing stories as a collaborative team instead of drawing stories out across the iteration; this should help keep standup calls interesting for all parties. We will monitor the initial time spent on the call and keep this average to below 10 minutes per day."
There are many areas not explored in the Agile Retrospectives book that might be a good addition for a second edition. I've found the activities to be helpful, but I would love to see a larger number and variety. I think it could touch on some ideas for distributed retrospectives. And finally, I think it needs to incorporate discussions of safety exercises.
What to Do When Not Pairing
Pairing is one of the more contentious practices in the XP canon. There are many ways to approach it, and many of those ways will result in disgruntled team members. Joining people at the hip for extended durations is a surefire way to kill interest in pairing. The same result occurs when you insist that people pair every hour of their work day.
A dogmatic stance (or even one that seems dogmatic) on any practice rarely works. People are good at finding excuses to avoid adhering to a new practice. To them, you're being dogmatic as long as you challenge their excuses. A better approach is to get the team to agree upon a reasonable rule, and then talk about a process for dealing with exceptions to the rule.
The idea behind pairing is that two people (and more over time) actively collaborate to solve a problem. It provides a form of review. Is it dogmatic to suggest that people must pair all the time? Absolutely! Is it dogmatic to insist that all production code is reviewed? Perhaps. That's a judgment call for your team to make. I would tend to lean toward reviewing all production code; even the supposedly simple stuff is easy to do poorly. But it's not my call, it's yours.
I have yet to see a shop that insists on pairing all the time. A better approach is to define what's appropriate to work on during times when you're without a partner. Sometimes, unfortunately, that necessarily will involve working on production code, but there are usually many other things that can come first. If you must work on production code alone, ensure that you solicit some form of follow-up review.
The card shows a number of things you might consider before touching production code without the safety net of a pair partner.
The Seven Wastes
Source: Poppendieck, M. & T. Implementing Lean Software Development, Addison Wesley, 2007.

Once I read an article by K. Beck that said our problem is not that we don't have enough time. Such a problem would be untenable. Rather, our problem is that we have too much to do. "Too much to do" can be solved. We should accept that we only have so much time, and do the most important things possible with that time.
If the goal is to use the time we have more productively, then we must identify and eliminate the time-wasters that rob us. Focusing on tasks instead of social situations can help, but as human beings we sometimes need a break in order to focus our thoughts and allow our minds to mull over issues. Totally eliminating off-task time is not as productive as it sounds, though most of us could spend a little more time on-task each day. One might notice that coffee breaks, smoke breaks, phone calls, and small office conversations are not listed above. Those are necessary to some extent (though Tim as an ex-smoker might argue that smoking is unnecessary and detrimental), and are not the great wholesale time wasters that the Seven Wastes expose. Compared to the seven wastes listed above, individual off-task time is insignificant and unimportant. Indeed, forcing the employee's nose to the grindstone in a stupidly wasteful system will result in no appreciable increase in productivity, while at the same time decreasing satisfaction and driving the best workers away.
The problem is that our time-on-task may be full of wasted effort. If we have not been sufficiently trained to see waste in progress, we may be wasting time by optimizing the wrong things. This is like straining out a gnat and swallowing a camel.
There are some interesting and even counter-intuitive items in the list. Some of these will creep unbidden into your work experience and strangle your organizational efficiency. One really should read Implementing Lean Software Development and/or Lean Software Development (same authors) to get a deeper understanding of waste, but since this guide is called "In A Flash", we enumerate a few instances here.
Partially-done work is any work that has been begun but not completed. Organizations often attempt to get more done by doing more things at once, which merely adds waste to the process. If work is to be completed as soon as possible, there must be a minimal amount of work in progress at any point in time. The symptom of this kind of waste is that you have a hundred items 90% done. In code this is extremely troubling, because the main line of code development will outpace the branches (or working copies) until the mainline can no longer gracefully accept the branches' changes. This leads to rework. The waste item task switching is related. As my old manager Bob B. used to say, "Fooooocccuuuuuuusssss!" The answer is to do less, more frequently. This is actually not hard technically, but can be difficult politically. The organization must reach a point where it is not bragging about how much work it has in progress.
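The arithmetic behind "do less, more frequently" is easy to sketch. This small simulation (a hypothetical illustration, not from the card) compares finishing ten equal tasks one at a time against interleaving a unit of work on each task per round:

```python
# Hypothetical illustration: ten tasks, each needing 5 units of work.
tasks = [5] * 10

# Serial: finish one task completely before starting the next.
serial_finish_times = []
clock = 0
for work in tasks:
    clock += work
    serial_finish_times.append(clock)

# Interleaved: one unit of work on each unfinished task per round,
# round-robin, until everything is done.
remaining = list(tasks)
interleaved_finish_times = [None] * len(tasks)
clock = 0
while any(r > 0 for r in remaining):
    for i in range(len(remaining)):
        if remaining[i] > 0:
            clock += 1
            remaining[i] -= 1
            if remaining[i] == 0:
                interleaved_finish_times[i] = clock

print(serial_finish_times[0])       # first task delivered at time 5
print(interleaved_finish_times[0])  # first task delivered at time 41
```

Both strategies finish everything at time 50, but the serial team delivers its first completed item at time 5 while the multitasking team delivers nothing until time 41, and carries ten 90%-done items in the meantime.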
Extra features are stories built on speculation that they will be useful or desirable, but which are not actually used by the customers. This waste is doubly damning because the work consumed development dollars, and also carried an opportunity cost in that more desirable features had to wait. A team may complete a number of features only to be told by the end users that they "didn't do anything." One way to mitigate this problem is to implement only stories requested by the business, but that answer is only helpful to development. For the business, the challenge is to assign only the most-needed stories to development. It is very difficult to do this if the decision-makers are not users of the software, and are not very closely associated with the actual users.
Relearning is necessary whenever information gained while completing one phase of the task is lost, and the next cell in the process has to reverse-engineer its input to determine its purpose. It occurs when the team had information when it started one critical component of the product, but later must rediscover how that part works. Relearning occurs when only one member of the team understands a particular siloed area of the application, and someone else must work in that area. The solutions in an agile setting are to involve the whole team in a feature, to capture requirements and expectations in Acceptance Tests and Unit Tests, and to collaborate (pair programming) to reduce the costs of relearning when it is necessary. If you find you have to reverse-engineer the code you are working on, you are hip-deep in relearning waste.
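Tests can serve as the captured knowledge that prevents relearning. Here is a minimal sketch (the shipping rule and the function names are hypothetical, chosen only for illustration) of a unit test that states a business rule plainly, so the next developer reads the test instead of reverse-engineering the code:

```python
import unittest

def shipping_cost(order_total):
    """Orders of $50 or more ship free; otherwise a flat $5 applies."""
    return 0 if order_total >= 50 else 5

class ShippingRuleTest(unittest.TestCase):
    # Each test name states the rule it captures, so a failure
    # tells the reader exactly which expectation was broken.
    def test_free_shipping_at_fifty_dollar_threshold(self):
        self.assertEqual(shipping_cost(50), 0)

    def test_flat_five_dollar_rate_below_threshold(self):
        self.assertEqual(shipping_cost(49.99), 5)

unittest.main(argv=["shipping_rule_test"], exit=False)
```

A new team member touching this area learns the threshold and the rate from the tests themselves; no archaeology required.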
Hand-offs are sort of the evil twin of task switching, and are worsened by relearning. A classic symptom is a quality control group and a documentation group, neither of which is involved in the development process. Instead, they are handed a release to test or document while the programmers move on to the next task. A classic waterfall hand-off is from requirements to design, and from design to implementation.
Information and momentum are lost at these boundaries, and responsibility is surrendered. At the point of hand-off, the originator no longer feels responsible for the product (a violation of "see the whole"). Often this leads to localized optimizations, where one group takes pride in (or worse is rewarded for) its ability to outrace the next even though this slows the organization as a whole and leads to partially-done work waste.
An Agile answer is to work as a whole team, even if responsibilities are parsed to different people on the team (for documentation, QC, etc).
Delays are simply wait-states or queues. Perhaps too much work was handed off to QC, and the organization has to wait until the beleaguered testers get around to testing the code. Perhaps there are questions about requirements that have to wait until the next monthly stakeholder meeting. Perhaps there are development or requirement silos and work must be queued for the one person who is familiar with the subject matter. Agile groups often assign a scrum master to clear up all delays every single day. By pairing on all tasks, agile groups break down silos. In these ways, we help to eliminate the waste of delay.
Defects are multi-causal. Some of them may arise from sloppy programming practices (especially if the team is overworked). Some of them may arise from an individual's incomplete understanding of a subject area. Sometimes the code is riddled with technical debt and programmers cannot accurately predict the outcome of their actions. Sometimes code is complex, and a long chain of decisions are altered by a small change in an innocent-looking piece of code. An agile team attacks defects in a very active way by pairing, unit-testing, and acceptance-testing. Continuous integration aids the team by running every test that exists every day (multiple times per day). The agile team takes a very serious stance regarding build-breaking changes, and treats a failing test as a broken build.
Why POUT (aka TAD) Sucks
Tim and I are no strangers to controversy and debate. As agile coaches, we get challenged all the time. "TDD is a waste of time, it doesn't catch any defects, you don't want to go past 70% coverage," and so on. They sound like excuses to us. But we're patient, and perfectly willing to have rational discussions about concerns with TDD. And we're the first to admit that TDD is not a panacea.
Still, we haven't yet seen a better developer technique than TDD for shaping a quality design and sustaining constant, reasonable-cost change over the lifetime of a product. Yet: If you show us a better way, we'll start shouting it across the rooftops. We're not married to TDD, it just happens to be what we find the most effective and enjoyable way to code.
Many of the challenges to TDD come from the crowd that says, "write the code, then come back and write some unit tests." This is known as POUT (Plain Ol' Unit Testing), or what I prefer to call TAD (Test-After Development). The TAD proponents contend that its practical limit of about 70% coverage is good enough, and that there's little reason to write tests first a la TDD.
70% coverage is good enough for what? If you view unit testing as a means of identifying defects, perhaps it is good enough. After all, the other tests you have (acceptance and other integration-oriented tests) should help catch problems in the other 30%. But if you view tests instead as "things that enable your code to stay clean," i.e. as things that give you the confidence to refactor, then you realize that almost a third of your system isn't covered. That third of your system will become rigid and thus degrade far more rapidly in quality over time. We've also found that it's often the more complex (and thus fragile) third of your system!
And why only 70%? On the surface, there should be little difference. Why would writing tests afterward produce any different result from writing them first? "Untestable design" is one aspect of the answer, and "human nature" represents the bulk of the second part of the answer. Check out the card (front and back).
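For the record, here is a minimal sketch of what test-first looks like (the leap-year rule is a stock kata example, not from the card): each test below is written before the corresponding branch of the implementation exists, and the function is grown just enough to make it pass.

```python
import unittest

# Grown incrementally to satisfy the tests below, one rule at a time:
# divisible by 4 -> leap, except centuries -> not leap,
# except quad-centuries -> leap again.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four_is_leap(self):
        self.assertTrue(leap_year(1996))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_quad_century_is_leap(self):
        self.assertTrue(leap_year(2000))

    def test_ordinary_year_is_not_leap(self):
        self.assertFalse(leap_year(2023))

unittest.main(argv=["leap_year_test"], exit=False)
```

Because every line of `leap_year` was demanded by a failing test, coverage is total by construction, and the design is testable because the tests came first. That is the asymmetry TAD can't reproduce after the fact.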
If you've been in software for any period of time, you've encountered the idea of Technical Debt. It has been expounded upon by a number of luminaries and included in many software development books and blogs. ACCU has a nice write-up. It has been discussed on Ward C's wiki. Tim Ottinger tried to explain why pay-down doesn't happen. Steve McConnell chews the idea thoroughly. Technical debt has been likened to pollution.
We've not met a programmer who doubts that Technical Debt exists, or that it is significant. It is certain (or at least certainly felt) that ill-conceived or ill-formed code will impair the developers of any software system.
Well-designed code exhibits a deep and profound simplicity. When code has cleverness and magic in it, it tends to be complex and that complexity decays in the face of future changes. Well-designed code is code that follows the SOLID design principles (check out the audio version or the in-depth articles at Object Mentor). Simplicity and careful design don't exist as an academic study. Simplicity and careful design lead to a more fluid development process. Technical debt leads to a very rocky, uneven development process where very small changes in very bad areas become impossible to estimate. Even those changes that are not slow to write may end up bouncing around in QA for days, weeks, or even months. Any foray into bad code may cause delivery dates to be missed.
Bad code and poor design: that's what we call technical debt. This is not what Ward originally meant. He was referring to code written well against an incomplete understanding. As the term is used these days, it describes poor design and messy code. This is a shame, but here we use the term in the common sense, not as Ward intended.
I think there may be confusion in the way the word "quality" is used. If your definition includes opulence (gold-plating), self-indulgent implementation, needless future-proofing, complexity, wars over formatting, or man years of back-end testing with emergency rework, then one might be better off without it. If, on the other hand, your definition of quality is simplicity and well-crafted design then quality is essential and it makes no sense to argue against it.
Technical debt is drag, but quality and simplicity are thrust.
Once one finds that his team is suffering from technical debt, what must be done? There are three choices:
- Do nothing... allow it to accumulate and strangle productivity.
- Pay it down by investing development dollars in cleaning the code.
- Declare bankruptcy. Abandon the code and start over.
The bankruptcy option is a dire decision indeed. As mentioned on the AgileOtter blog article:
Rewrites are doomed for the most part. The reasons are well documented and well understood. The risks of gaps and scope creep are legend.

Doing nothing is an option frequently taken, but once the code impairs proper software development, one finds that avoiding ugly areas of code spreads the ugliness. Instead of making a fix where it belongs in the ugly code, a fix is made in all the places where the code is called. This creates duplication, which increases maintenance points, which increases the likelihood that you will get incomplete fixes in the future and which increases the ugliness in the caller code. This code eventually becomes indebted and the ugliness spreads from there as well.
The reasonable person will not take on debt he cannot afford. If the reasonable person takes on debt willingly, he does so with a clear plan for paying it. What kind of a person would borrow with no intention to pay? Such a person would be a fool or a crook.
So the big questions are: what are you going to do about your technical debt, and do you (as evidenced by your actions) understand and believe that code quality matters?