The word "acceptance" implies that we have a high level of confidence that we can ship quality product. The goal for acceptance tests is to provide self-verifying, self-documenting executing examples of how the system is intended to be used. In a rough sense, they are use cases taken to the next level.
- Define “done done” for stories. The acceptance tests for a story provide a contract for completion. As programmers, we know we are truly done when the code passes all of the acceptance tests. As consumers of the system, we know we can accept it when all of its tests pass. Progress in an iteration is often best tracked by the number of acceptance tests passing.
- Are automated. Our Automating Tasks card provides detailed recommendations on when and when not to automate. Note, importantly, that this directive doesn't preclude the existence of tests that aren't automated!
- Document all uses of the system. Think of the tests as examples that demonstrate valid uses of the system. This mindset will help you develop tests that can be read and understood by people wanting to learn the system. Such documentation never becomes stale, although it takes deliberate effort to design tests that expressive. (See the first sketch following this list.)
- Don't just cover "happy paths." Stories usually require a family of tests to comprehensively cover alternate cases and exceptional situations. Inevitably, you will ship a defect that an automated test would have prevented; when that happens, write the "missed" acceptance test so that it becomes a permanent part of the complete suite.
- Do not replace exploratory tests. We will always want some manual testing above and beyond the tests that define acceptance criteria for a new story. Exploratory testing highlights the more creative aspects of how a user might choose to interact with a new feature. It also helps teach testers how to improve their test design skills. Don't forget to knee test!
- Run in a near-production environment. How many nickels have you earned for hearing the phrase "It worked fine on my machine!"? Acceptance tests must execute in an environment that emulates production as closely as possible. This means they hit a real database and make real external API calls, as much as possible. By definition, then, acceptance tests are slow.
Still, minute to minute, I might want my suite of acceptance tests to run as fast as possible, as part of a continuous integration build. As a developer, I want rapid feedback that gives me high short-term confidence I can move on. I know we'll still run the full suite in a proper environment overnight; having this fallback allows me to consider tactics such as swapping in an in-memory database (see the second sketch following this list).
- Are defined by the customer. Hopefully we know who the customer is. Agile acceptance tests are an expression of customer need (aka requirements). While all parties can contribute to statements of requirements, it's important that a "single customer voice" defines their interests as an unambiguous set of tests. (This also means that it's perfectly OK for a programmer to help with testing--they just can't be the one defining the tests.)
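As an illustration of tests-as-documentation and of covering more than the happy path, here's a minimal sketch in a JUnit 5 style. The Portfolio class, its API, and the story itself are hypothetical, stubbed inline only so the example compiles; the point is that each test reads as one self-verifying example of a valid use of the system.

```java
// A sketch of story-level acceptance tests that read as documentation.
// Portfolio is a hypothetical system under test, stubbed here only so
// the example compiles; assumes JUnit 5 on the classpath.
import static org.junit.jupiter.api.Assertions.*;

import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;

class Portfolio {
    private final Map<String, Integer> holdings = new HashMap<>();

    void buy(String symbol, int shares) {
        if (symbol.equals("NOSUCH"))          // stand-in for real validation
            throw new IllegalArgumentException("unlisted symbol");
        holdings.merge(symbol, shares, Integer::sum);
    }

    int sharesOf(String symbol) {
        return holdings.getOrDefault(symbol, 0);
    }
}

class BuySharesAcceptanceTest {
    @Test
    void purchaseAddsSharesToPortfolio() {    // the happy path
        Portfolio portfolio = new Portfolio();
        portfolio.buy("IBM", 100);
        assertEquals(100, portfolio.sharesOf("IBM"));
    }

    @Test
    void purchaseOfUnlistedSymbolIsRejected() { // an exceptional case
        Portfolio portfolio = new Portfolio();
        assertThrows(IllegalArgumentException.class,
                     () -> portfolio.buy("NOSUCH", 1));
    }
}
```

Note how the test names alone--"purchase adds shares to portfolio," "purchase of unlisted symbol is rejected"--document the story's acceptance criteria.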
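And here is a sketch of the in-memory-database tactic. Everything in it is an assumption on my part: the PortfolioStore interface, the class names, and the FAST_ACCEPTANCE flag are illustrative, not from any particular framework. The fast CI build wires the acceptance tests to the in-memory implementation; the overnight run uses the real, database-backed one.

```java
// Minimal sketch: swapping a real database for an in-memory store in CI.
// PortfolioStore, both implementations, and the FAST_ACCEPTANCE flag are
// hypothetical names chosen for this example.
import java.util.HashMap;
import java.util.Map;

interface PortfolioStore {
    void save(String symbol, int shares);
    int sharesOf(String symbol);
}

// Fast enough for minute-to-minute CI runs.
class InMemoryPortfolioStore implements PortfolioStore {
    private final Map<String, Integer> holdings = new HashMap<>();
    public void save(String symbol, int shares) { holdings.put(symbol, shares); }
    public int sharesOf(String symbol) { return holdings.getOrDefault(symbol, 0); }
}

// Stand-in for the store that hits the near-production database;
// the overnight run wires a real JDBC implementation in here.
class JdbcPortfolioStore implements PortfolioStore {
    public void save(String symbol, int shares) {
        throw new UnsupportedOperationException("real JDBC code goes here");
    }
    public int sharesOf(String symbol) {
        throw new UnsupportedOperationException("real JDBC code goes here");
    }
}

class StoreFactory {
    // CI builds export FAST_ACCEPTANCE=true; the nightly full run omits it,
    // so Boolean.parseBoolean(null) falls through to the real store.
    static PortfolioStore create() {
        return Boolean.parseBoolean(System.getenv("FAST_ACCEPTANCE"))
            ? new InMemoryPortfolioStore()
            : new JdbcPortfolioStore();
    }
}
```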
I'm often asked, "What about regression tests?" Most people define regression tests as "tests that make sure we didn't break something already in the system." My answer is to have them imagine we had built the entire system in an agile, acceptance-test-driven fashion, i.e., where we ship only code that passes predefined acceptance tests. In that world, regression testing can happen every iteration--we simply run the entire suite of automated tests for all stories delivered to date.
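If the whole system really has been built that way, the regression suite needs no separate construction. A minimal sketch, assuming JUnit 5's suite support (the junit-platform-suite artifact) and a hypothetical package where all acceptance tests live:

```java
// Regression testing as "run everything": the suite picks up every
// acceptance test delivered to date. The package name is hypothetical.
import org.junit.platform.suite.api.SelectPackages;
import org.junit.platform.suite.api.Suite;

@Suite
@SelectPackages("com.example.acceptance")
class RegressionSuite {}
```

Each iteration's new story tests land in that package, so the "regression suite" grows automatically.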
Updated with a new font, as the "Anything" font doesn't hold up well under scrutiny. One would hope a religiously named font would be infallible; for now I'll try "Complete In Him."
You might want to make a note about whether and when to test through the GUI or just below the GUI.