Retrospectives

Primary source: Derby, E. and Larsen, D. Agile Retrospectives: Making Good Teams Great, Pragmatic Bookshelf, 2006.


I've been working hard lately on getting teams to experience a good retrospective, after hearing many teams say they tried it and quickly fell off the practice. I've been challenged by the fact that every one of the retrospectives I've helped facilitate recently has been distributed. We've used phone bridges with WebEx, and we've tried to use its attendant tools (whiteboarding, chat, and polling) to help make up for the extreme inhibition that a phone environment imposes on interpersonal communication.


Letting the team understand the steps you will take them through is one way to keep your retrospective meetings effective. Sticking to the outline can prevent your team from getting derailed with solutions while you're still in the data gathering or insight generation steps.


A key distinction I've found between effective and ineffective retrospectives relates to what the team decides to commit to. Vague promises that aren't tracked or verified upon completion eventually lead the team to choose not to bother with retrospectives. Instead, a commitment needs to be treated like a story. A perhaps better analogy is to an experiment, as there is a hypothesis: "This change will lead to some form of improvement." In any case, it should be tracked and tested. Generally, the more specific the better: "We will do a quick status and mention of issues in the standups, and then allow people to drop off the call who don't need to be involved in follow-up discussions. We will work on completing stories as a collaborative team instead of drawing stories out across the iteration; this should help keep standup calls interesting for all parties. We will monitor the initial time spent on the call and keep this average to below 10 minutes per day."
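That last commitment is specific enough to be tested like an experiment. A minimal sketch of how the tracking might look, with hypothetical numbers standing in for a real log:

```python
# Hypothetical log of the status portion of each day's standup, in minutes.
standup_minutes = [12, 9, 8, 11, 7]

# The team's commitment: keep the average below 10 minutes per day.
average = sum(standup_minutes) / len(standup_minutes)
commitment_met = average < 10

print(f"average standup: {average:.1f} min, commitment met: {commitment_met}")
```

The point is not the arithmetic; it's that a verifiable commitment gives the next retrospective something concrete to inspect.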


There are many areas not explored in the Agile Retrospectives book that might be good additions for a second edition. I've found the activities to be helpful, but I would love to see a larger number and variety. I think it could touch on some ideas for distributed retrospectives. And finally, I think it needs to incorporate discussions of safety exercises.

What to Do When Not Pairing

Source: Jeff Langr


Pairing is one of the more contentious practices in the XP canon. There are many ways to approach it, and many of those ways will result in disgruntled team members. Joining people at the hip for extended durations is a surefire way to kill interest in pairing. The same result occurs when you insist that people pair every hour of their work day.


A dogmatic stance (or even one that seems dogmatic) on any practice rarely works. People are good at finding excuses to avoid adhering to a new practice. To them, you're being dogmatic as long as you challenge their excuses. A better approach is to get the team to agree upon a reasonable rule, and then talk about a process for dealing with exceptions to the rule.


The idea behind pairing is that two people (and more over time) actively collaborate to solve a problem. It provides a form of review. Is it dogmatic to suggest that people must pair all the time? Absolutely! Is it dogmatic to insist that all production code is reviewed? Perhaps. That's a judgment call for your team to make. I would tend to lean toward reviewing all production code; even the supposedly simple stuff is easy to do poorly. But it's not my call, it's yours.


I have yet to see a shop that insists on pairing all the time. A better approach is to define what's appropriate to work on during times when you're without a partner. Sometimes, unfortunately, that will necessarily involve working on production code, but there are usually many other things that can come first. If you must work on production code alone, ensure that you solicit some form of follow-up review.


The card shows a number of things you might consider before touching production code without the safety net of a pair partner.

The Seven Wastes



Source: Poppendieck, M. & T. Implementing Lean Software Development, Addison Wesley, 2007.

I once read an article by Kent Beck which said that our problem is not that we don't have enough time; that problem would be unsolvable. Rather, our problem is that we have too much to do, and "too much to do" can be solved. We should accept that we only have so much time, and do the most important things possible with that time.

If the goal is to use the time we have more productively, then we must identify and eliminate the time-wasters that rob us. Focusing on tasks instead of social situations can help, but as human beings we sometimes need a break in order to focus our thoughts and let our minds mull over issues. Totally eliminating off-task time is not as productive as it sounds, though most of us could spend a little more time on-task each day. One might notice that coffee breaks, smoke breaks, phone calls, and small office conversations are not among the seven wastes. Those are necessary to some extent (though Tim, as an ex-smoker, might argue that smoking is unnecessary and detrimental), and they are not the great wholesale time-wasters that the Seven Wastes expose. Compared to the seven wastes, individual off-task time is insignificant and unimportant. Indeed, forcing employees' noses to the grindstone in a stupidly wasteful system will produce no appreciable increase in productivity, while at the same time decreasing satisfaction and driving the best workers away.

The problem is that our time-on-task may be full of wasted effort. If we have not been sufficiently trained to see waste in our process, we may be wasting time by optimizing the wrong things. This is like straining out a gnat and swallowing a camel.

There are some interesting and even counter-intuitive items in the list. Some of these will creep unbidden into your work experience and strangle your organizational efficiency. One really should read Implementing Lean Software Development and/or Lean Software Development (same authors) to get a deeper understanding of waste, but since this guide is called "In A Flash", we enumerate a few instances here.

Partially-done work is any work that has begun but has not been completed. Organizations often attempt to get more done by doing more things at once, which merely adds waste to the process. If work is to be completed as soon as possible, there must be a minimal amount of work in progress at any point in time. The symptom of this kind of waste is having a hundred items 90% done. In code this is extremely troubling, because the main line of development will outpace the branches (or working copies) until the mainline can no longer gracefully accept the branches' changes. This leads to rework. The waste item task switching is related. As my old manager Bob B. used to say, "Fooooocccuuuuuuusssss!" The answer is to do less, more frequently. This is actually not hard technically, but it can be difficult politically: the organization must reach a point where it is not bragging about how much work it has in progress.
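"Do less, more frequently" can be made mechanical with a hard work-in-progress limit: new work is refused until something finishes. A minimal sketch of the idea (the class and names are illustrative, not from the book):

```python
class WipLimitedBoard:
    """A task board that refuses new work until in-progress work completes."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.in_progress = []
        self.done = []

    def start(self, task):
        if len(self.in_progress) >= self.wip_limit:
            raise RuntimeError(
                f"WIP limit {self.wip_limit} reached; finish something first")
        self.in_progress.append(task)

    def finish(self, task):
        self.in_progress.remove(task)
        self.done.append(task)


board = WipLimitedBoard(wip_limit=2)
board.start("story A")
board.start("story B")
# board.start("story C") would raise here: the team must finish A or B first.
board.finish("story A")
board.start("story C")
```

The limit is the political part; the mechanism is trivial once the team agrees to it.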

Extra features are stories built on speculation that they will be useful or desirable, but which are not actually used by the customers. This waste is doubly damning: the work consumed development dollars, and it carried an opportunity cost in that more desirable features had to wait. A team may complete a number of features only to be told by the end users that they "didn't do anything." One way to mitigate this problem is to implement only stories requested by the business, but that answer helps only the development side. For the business side, the problem becomes assigning only the most-needed stories to development. It is very difficult to do this if the decision-makers are not users of the software and are not closely associated with the actual users.

Relearning is necessary whenever the process captures information in order to complete one phase of the task, and the next cell in the process then has to reverse-engineer its input to determine its purpose. It occurs when the team had information when it started one critical component of the product, but later must rediscover how that part works. Relearning occurs when only one member of the team understands a particular siloed area of the application, and someone else must work in that area. The solutions in an agile setting are to involve the whole team in a feature, to capture requirements and expectations in acceptance tests and unit tests, and to collaborate (pair programming) to reduce the costs of relearning when it is necessary. If you find you have to reverse-engineer the code you are working on, you are hip-deep in relearning waste.

Hand-offs are sort of the evil twin of task switching, and are worsened by relearning. A classic symptom is when there is a quality control group and a documentation group, and neither is involved in the development process. Instead, they are handed a release to test or document while the programmers move on to the next task. A classic waterfall hand-off is from requirements to design, and from design to implementation.

Information and momentum are lost at these boundaries, and responsibility is surrendered. At the point of hand-off, the originator no longer feels responsible for the product (a violation of "see the whole"). Often this leads to localized optimizations, where one group takes pride in (or worse is rewarded for) its ability to outrace the next even though this slows the organization as a whole and leads to partially-done work waste.

An Agile answer is to work as a whole team, even if responsibilities are parsed to different people on the team (for documentation, QC, etc).

Delays are simply wait-states or queues. Perhaps too much work was handed off to QC, and the organization has to wait until the beleaguered testers get around to testing the code. Perhaps there are questions about requirements that have to wait until the next monthly stakeholder meeting. Perhaps there are development or requirements silos, and work must be queued for the one person who is familiar with the subject matter. Agile groups often assign a scrum master to clear up delays every single day. By pairing on all tasks, agile groups break down silos. In these ways, we help to eliminate the waste of delay.

Defects are multi-causal. Some of them may arise from sloppy programming practices (especially if the team is overworked). Some of them may arise from an individual's incomplete understanding of a subject area. Sometimes the code is riddled with technical debt and programmers cannot accurately predict the outcome of their actions. Sometimes code is complex, and a long chain of decisions is altered by a small change in an innocent-looking piece of code. An agile team attacks defects in a very active way by pairing, unit-testing, and acceptance-testing. Continuous integration aids the team by running every test that exists every day (multiple times per day). The agile team takes a very serious stance regarding build-breaking changes, and treats a failing test as a broken build.
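The "failing test means broken build" stance can be enforced in the build step itself: run every test and return a nonzero exit code on any failure, so the pipeline stops. A minimal sketch using Python's unittest (the gate function and sample test are hypothetical):

```python
import sys
import unittest


def run_build_gate(suite):
    """Run every test; treat any failure as a broken build."""
    result = unittest.TextTestRunner(stream=sys.stderr, verbosity=0).run(suite)
    # A nonzero exit code is what halts a CI pipeline.
    return 0 if result.wasSuccessful() else 1


class ArithmeticTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(4, 2 + 2)


suite = unittest.TestLoader().loadTestsFromTestCase(ArithmeticTest)
exit_code = run_build_gate(suite)
```

In a real shop the suite would be discovered from the whole codebase; the point is that the gate has no "mostly passing" state.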

Why POUT (aka TAD) Sucks

Tim and I are no strangers to controversy and debate. As agile coaches, we get challenged all the time. "TDD is a waste of time, it doesn't catch any defects, you don't want to go past 70% coverage," and so on. They sound like excuses to us. But we're patient, and perfectly willing to have rational discussions about concerns with TDD. And we're the first to admit that TDD is not a panacea.


Still, we haven't yet seen a better developer technique than TDD for shaping a quality design and sustaining constant, reasonable-cost change over the lifetime of a product. But if you show us a better way, we'll start shouting it from the rooftops. We're not married to TDD; it just happens to be what we find the most effective and enjoyable way to code.


Many of the challenges to TDD come from the crowd that says, "write the code, then come back and write some unit tests." This is known as POUT (Plain Ol' Unit Testing), or what I prefer to call TAD (Test-After Development). The TAD proponents contend that its practical limit of about 70% coverage is good enough, and that there's little reason to write tests first a la TDD.


70% coverage is good enough for what? If you view unit testing as a means of identifying defects, perhaps it is good enough. After all, the other tests you have (acceptance and other integration-oriented tests) should help catch problems in the other 30%. But if you view tests instead as "things that enable your code to stay clean," i.e. as things that give you the confidence to refactor, then you realize that almost a third of your system isn't covered. That third of your system will become rigid and thus degrade far more rapidly in quality over time. We've also found that it's often the more complex (and thus fragile) third of your system!


And why only 70%? On the surface, there should be little difference. Why would writing tests after generate any different result from writing tests first? "Untestable design" is one aspect of the answer, and "human nature" represents the bulk of the second part of the answer. Check out the card (front and back).



Technical Debt



If you've been in software for any period of time, you've encountered the idea of Technical Debt. It has been expounded upon by a number of luminaries and included in many software development books and blogs. ACCU has a nice write-up. It has been discussed on Ward Cunningham's wiki. Tim Ottinger tried to explain why pay-down doesn't happen. Steve McConnell chews the idea over thoroughly. Technical debt has been likened to pollution.

We've not met a programmer who denies that Technical Debt exists, or who considers it insignificant. It is certain (or at least certainly felt) that ill-conceived or ill-formed code will impair the developers of any software system.

Well-designed code exhibits a deep and profound simplicity. When code has cleverness and magic in it, it tends to be complex and that complexity decays in the face of future changes. Well-designed code is code that follows the SOLID design principles (check out the audio version or the in-depth articles at Object Mentor). Simplicity and careful design don't exist as an academic study. Simplicity and careful design lead to a more fluid development process. Technical debt leads to a very rocky, uneven development process where very small changes in very bad areas become impossible to estimate. Even those changes that are not slow to write may end up bouncing around in QA for days, weeks, or even months. Any foray into bad code may cause delivery dates to be missed.

Bad code and poor design: that's what we call technical debt. This is not what Ward originally meant; he was referring to code written well against an incomplete understanding. As the term is used these days, it describes poor design and messy code. This is a shame, but here we use the term in the common sense, not as Ward intended.

I think there may be confusion in the way the word "quality" is used. If your definition includes opulence (gold-plating), self-indulgent implementation, needless future-proofing, complexity, wars over formatting, or man-years of back-end testing with emergency rework, then one might be better off without it. If, on the other hand, your definition of quality is simplicity and well-crafted design, then quality is essential and it makes no sense to argue against it.

Technical debt is drag, but quality and simplicity are thrust.

Once one finds that his team is suffering from technical debt, what must be done? There are three choices:
  • Do nothing... allow it to accumulate and strangle productivity.
  • Pay it down by investing development dollars in cleaning the code.
  • Declare bankruptcy. Abandon the code and start over.

The bankruptcy option is a dire decision indeed. As mentioned in the AgileOtter blog article:
Rewrites are doomed for the most part. The reasons are well documented and well understood. The risks of gaps and scope creep are legend.
Doing nothing is an option frequently taken, but once the code impairs proper software development, one finds that avoiding ugly areas of code spreads the ugliness. Instead of making a fix where it belongs, in the ugly code, a fix is made in all the places where the code is called. This creates duplication, which multiplies maintenance points, increases the likelihood of incomplete fixes in the future, and increases the ugliness in the caller code. That code eventually becomes indebted, and the ugliness spreads from there as well.
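The spreading pattern is easy to show in code. Suppose `parse_amount_ugly` is the area everyone avoids touching; every caller then carries the same workaround (all names here are hypothetical):

```python
# The "ugly" function nobody wants to touch: it breaks on formatted input.
def parse_amount_ugly(text):
    return float(text)


# The avoided fix: each caller duplicates the same workaround...
def invoice_total(text):
    cleaned = text.replace("$", "").replace(",", "")  # workaround
    return parse_amount_ugly(cleaned)


def refund_total(text):
    cleaned = text.replace("$", "").replace(",", "")  # ...duplicated again
    return parse_amount_ugly(cleaned)


# The fix where it belongs: one maintenance point, no spreading ugliness.
def parse_amount(text):
    return float(text.replace("$", "").replace(",", ""))
```

Each duplicated workaround is one more place a future fix can be missed, which is exactly how the ugliness spreads into the caller code.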

The reasonable person will not take on debt he cannot afford. If the reasonable person takes on debt willingly, he does so with a clear plan for paying it. What kind of a person would borrow with no intention to pay? Such a person would be a fool or a crook.

So the big question is what you are going to do about your technical debt, and do you (as evidenced by your actions) understand and believe that code quality matters?

The most disturbing things about technical debt are that some lack (or have lost) the ability to see it when they're knee-deep in it, and others feel that it is a necessary condition of software development. This is more common in non-pairing, siloed development shops. Tim recommends pairing and breaking down silos to help get more eyes on the code.

Getting Un-stuck in TDD

Just like any other writers working in a creative medium and against a schedule, test-driven developers get writer's block from time to time. When I was learning to test-drive software, I saw how much more quickly my more experienced colleagues broke through the barrier. People like Dave Chelimsky had a bag of tricks that kept them working productively. I think he is responsible for transmitting the first three items to me. From there, I needed to develop some simple ice-breakers of my own.

A small variant of something Esther Derby said is responsible for the "most interesting" item on the list. She suggested a retrospective leader asking the team what they have the energy to do. That hit me. There is nothing wrong with picking the test you can most easily pour your energy into.

Writing the assertion and function name first has been greatly effective for me. Sometimes I leave the name as something generic until I can write an assertion that is meaningful, then I revisit the test name. If I have an assertion (the "check"), I know where I want to go: I can perform the setup (build) and the key method call (operate). The Build/Operate/Check structure also gives me more readable tests, in addition to helping me maintain some semblance of flow.
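Here is a sketch of what writing the check first looks like, using a hypothetical shopping-cart example (the Build/Operate/Check sections are marked in comments):

```python
import unittest


class Cart:
    """Hypothetical class under test."""

    def __init__(self):
        self.line_totals = []

    def add(self, name, price, quantity):
        self.line_totals.append(price * quantity)

    def total(self):
        return sum(self.line_totals)


class CartTest(unittest.TestCase):
    def test_total_sums_each_line_item(self):
        # Build: the setup the check turned out to need
        cart = Cart()
        cart.add("apple", price=50, quantity=2)
        cart.add("pear", price=30, quantity=1)
        # Operate: the single call under test
        total = cart.total()
        # Check: written first, before Build and Operate existed
        self.assertEqual(130, total)
```

Starting from `assertEqual(130, total)` pulls the rest of the test into existence: the assertion tells you what to build and what to operate.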

Renaming and refactoring are helpful, even if the naming is not all that it could possibly be. This is because your eyes can become "stale": you don't really read the code your eyes pass over. One solution to this is to cause something about the code to be "new". Furthermore, if you increase readability then you may reach a state where wrong code looks wrong.

Reading the code for obvious flaws is another way to make the code new by changing perspectives. This can have a good effect, since a change is as good as a rest.

The best and most reliable way to break your writer's block is to change partners. If neither you nor your pair programming partner is able to easily think of the next test to write, it is possible that you are done or that you have both gone stale. Changing partners will give you a whole new set of eyes and a whole new perspective on the readability of the code.

There is a missing element here which will be considered elsewhere -- "reduce your scope".

Everymember Skills



Source: Tim Ottinger

People are typically worried about starting XP because of the personality types on the team. There is usually a very small number of people who just don't "get it," and developers become concerned that they will fall through the cracks, or that their best coworkers will.

There is more that each Agile team member needs beyond technical wizardry. These are really as much attitudes as skills. When the team begins to break down, it tends to be along certain lines. To help adopters, it is good to have a clear statement of some values that members need in order to reduce friction and keep the project moving.

Mottos and slogans are not the answer to real problems among people. If a set of developers have unresolved past conflicts, they will find it difficult to take to this way of thinking. I am not sure how possible it is to coach people out of bad history with each other. I suspect it will be necessary for some of them to find a different team where they can have feelings of equality, humanity, cooperation, etc.

What these attitudes and skills can do is set an expectation. When a team is starting out, its members should understand that the normative behavior is to ask questions, to look for ways to measure the quality of their input and output (information), to contribute equally with their peers (energy, equality, cooperation), and to have patience when others are struggling (humanity). With such a start, there is some chance of the team being successful together. If members balk at the very ideas espoused, there is a good chance that the team needs a little refactoring.

ABC's of Pair Programming

Source: Jeff Langr and Tim Ottinger

I love the opportunity to sit and program as half of a pair. I'm sure it's not for everyone, but I too was once one of those who resisted the idea. Part of my resistance was my fear that people might realize I didn't know as much as they thought I did. I got over that. Many people who have given an honest effort to pairing (done by the rules) have found out that it's actually very enjoyable and effective.


Another common resistance is misunderstanding of what pairing is, and of what benefits you might get from it. To be effective, you can't just sit near someone else and expect magic to happen. The rules are simple, but not obvious. It makes perfect sense to me why someone would hate pairing after doing it poorly.


The least obvious rule is "change pairs frequently." A typical conception is that pairs are married at the hip for days at a time or even weeks. No, I'd slit Tim's wrists and he'd slit mine were that the case. Instead, we switch pairs often. The tedium of dealing with one person all day long aside, one of our goals in switching often is to ensure that not just two, but at least three people contribute to the solution of a task.


The downside of frequent switching is the overhead cost of context switching. It takes time to explain things! But that's where the synergies in the original XP practices come in: If you're doing TDD well, following simple design, coming up to speed on the test at hand isn't a terribly difficult proposition. And in fact, the need to minimize context switching overhead is a subtle force in the direction of improving code quality.

Simple Design

Source: Jeffries, et al. Extreme Programming Installed, Addison-Wesley, 2000.


My understanding is that the concept of simple design is Kent Beck's invention, but I've not found a definitive answer yet. I reference the Ron Jeffries XP series book as the first published place I could find it (but the material on the C2 wiki predates this). I also wrote about these four rules as a chapter in Uncle Bob's Clean Code.


The notion of "simple design" is often interpreted as YAGNI ("you ain't gonna need it"): Build only what the customer asks for--don't add bells and whistles that they don't need. Or, from a design perspective, don't build complexity into the system where it's not needed. Don't introduce abstractions unless you have a valid need, and don't build the all-encompassing infrastructure or design that will support every possible feature that could come along.


But my definition for "simple design" has always been these four rules that I learned while at one of the Object Mentor XP Immersion classes, way back in 2000-2001. The Immersions were led by Uncle Bob, Ron Jeffries, and Kent Beck, with a few of us fortunate lesser Object Mentors helping out as table coaches. These were some of the most entertaining sessions I've ever attended.


The four rules of simple design speak to specifics--while YAGNI can be understood in a broad context, here we are specifically talking about systems design. The rules are in order of importance, hence the numbering, and you'll note that the last one, i.e. the least important, is the rule that most closely speaks to YAGNI. More important, then, are having tests as comprehensive as possible, minimizing redundancy, and coding clearly.
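The middle rules--minimizing redundancy and coding clearly--can be illustrated in a few lines. This before-and-after sketch is a hypothetical example, not one from the book:

```python
# Before: the same formula twice, with an unexplained 0.07 in each copy.
def invoice_total(base):
    return base + base * 0.07 + 5.00


def estimate_total(base):
    return base + base * 0.07 + 12.50


# After: the duplication is removed and the intent is expressed in names.
SALES_TAX_RATE = 0.07


def total_with_tax(base_price, fee):
    return base_price * (1 + SALES_TAX_RATE) + fee
```

Nothing speculative was added--no tax-strategy class, no configuration layer--which is where the least important rule, minimizing the number of elements, quietly does its work.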


My contention is that if you knew nothing about classic concepts of good design, such as the single responsibility principle, the dependency inversion principle, design patterns, and other heuristics, you could still get there by following these four rules. When explaining these rules, Beck talked a lot about the notion of emergent behavior--that you can get a holistically positive result just from applying a very small set of simple rules at the individual level.


I've done that experiment a couple times in reasonably-sized subsystems (a few dozen classes): I've followed TDD (test, code, refactor), and during the refactor step only considered rules two through four, trying to pretend that I didn't have a good background in OO design. Then I've gone back and sketched a model of the design in UML. Both times, I've been pleasantly surprised, having produced a design that looked elegant but not complex.


You don't have to throw away your exhaustive knowledge of design principles when using simple design. Instead, reconcile what you produce with the rules, and be willing to change your design principles if simple design suggests there's a problem. Try it!

Lean Principles


Source: Poppendieck, M. & T. Lean Software Development, Addison Wesley, 2003.



Agile software development owes quite a debt to The Toyota Way, AKA Lean manufacturing. While Lean principles have led to tremendous savings in the world of producing material goods, it was uncertain how the principles would apply to software development. Furthermore, there seemed to be a bit of a gap regarding how managers relate to Agile teams.

Enter Tom and Mary Poppendieck, and their "Agile Toolkit for Software Development Managers". In one concise, readable, usable volume they managed to teach the application of lean principles to software development in a way that is useful for managers.

As explained in the book, the main challenge was in translating basic concepts.

In order to reduce waste, one has to learn to see waste in software development. It takes the same basic forms of rework, wait-states, work-in-progress, etc. As we work through Lean Software Development, we learn again to be courageous in simplifying our work lives. We learn to do one thing at a time. We learn to stop half-doing work only to pick it up again later.

A surprising bit is "decide as late as possible" -- deferring decision-making until the last responsible moment. It is quite obvious once one has begun to apply it, but goes against the type-A intuition. Rather than deciding more things up front and sticking to our guns, we are told to wait and decide when it is clear which way we should go. This kind of iterative decision-making is at the heart of Agile practices and aids us in making simplifying assumptions. For that matter, we find the same principle at work in well-crafted software in the form of late binding.
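The late-binding parallel can be made concrete: the code commits to a mechanism for deciding, not to the decision itself. A small sketch (the function names are illustrative):

```python
import json


def render(rows, formatter):
    """Defer the 'which output format?' decision to the caller."""
    return formatter(rows)


def as_csv(rows):
    return "\n".join(",".join(str(cell) for cell in row) for row in rows)


def as_json(rows):
    return json.dumps(rows)


rows = [[1, 2], [3, 4]]
# The decision happens here, at the last responsible moment, when the
# caller finally knows what the consumer of the output needs.
csv_out = render(rows, as_csv)
json_out = render(rows, as_json)
```

`render` never had to guess the format up front, and adding a third format later requires no change to it at all.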

Building integrity into the product again echoes Deming.

The final point, seeing the whole, helps us remember to beware local optimizations that tend to decrease the performance of our organization. A favorite teenage lament is "my life would be easier if everyone else would take better care of me." In a similar vein, we find that optimizing for one group (perhaps the developers, perhaps the product group, perhaps management or sales) can cause the organization as a whole to suffer. By seeing the whole, we will take steps that may actually make some people's jobs a bit more difficult in order to improve the ability of the company to compete.