We’ve already established the importance of user stories when it comes to the discovery phase of a digital project. However, user stories in a vacuum are not enough – left alone, they leave way too much room for assumption and error. And we all know that Assumptionville is where projects go to die.
So how do we avoid assumptions when it comes to user stories? Simple: add acceptance criteria to each one.
Acceptance Criteria is Your Key to Project Success
Let’s take a look at a sample user story I wrote about in our user stories article:
As a content manager, I want to review submitted articles from our readers before publishing so that I can maintain quality control of our content.
This user story, on the surface, is relatively easy to understand, right? But when we start thinking about the nuts & bolts of the process, it quickly becomes clear that the user story doesn’t have enough detail. How can we measure whether or not the user story has been properly met if we don’t know the details?
That’s exactly what acceptance criteria is designed to do: make clear the details that define the user story. If you stop and think about it, “acceptance criteria” is quite literally the criteria for a user story to be acceptable (or accepted by the product manager/client).
It’s helpful to start with questions when we’re attempting to define acceptance criteria. Some questions the above user story might elicit:
- How are the articles submitted?
- What information is required when an article is submitted?
- Where does this information need to be collected?
- Does anyone need to be notified on submission or approval?
And from those items, we can begin to define the criteria that the user story can be built towards and tested against:
- Articles are submitted through a front-end form
- Form cannot be submitted without author name, author email, body content
- User can attach images for article
- Form supports HTML
- Form data is saved as a “Draft” article in CMS
- Form data is sent to <email address>
- Acknowledgement email is automatically sent to submitter on submission
- Content manager can attribute post to guest author
- Submitter notified via automated email when post is published
And so forth.
Now, all of a sudden the high-level user story has a set of additional details supporting it. Developers know what to build, designers know what to design, and expectations around functionality and effort are much more clear.
Further to that, when the user story moves into production and ultimately into the QA and user acceptance testing phases, there is a clear set of criteria to test the user story against. After all, if you don’t know the criteria, how do you know if the item passed or failed?
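Because each criterion is a discrete yes/no check, QA can treat the list above as a simple checklist: the story passes only when every criterion passes. Here’s a minimal sketch of that idea — the criterion text comes from the example above, but the pass/fail values are hypothetical, not real test results:

```python
# Hypothetical QA results: each acceptance criterion maps to pass (True) or fail (False).
qa_results = {
    "Articles are submitted through a front-end form": True,
    "Form cannot be submitted without author name, author email, body content": True,
    "Acknowledgement email is automatically sent to submitter on submission": False,
}

# Collect any criteria that failed during testing.
failed = [criterion for criterion, passed in qa_results.items() if not passed]

# The user story is only accepted when every single criterion passes.
story_passes = not failed
```

The point isn’t the code itself — it’s that well-written criteria are binary and unambiguous, so pass/fail is never a judgment call.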
How to Capture and Record Acceptance Criteria
Much the same as the process of capturing user stories and their priorities, there is no singular “right” way to capture acceptance criteria. However, there are some ways we know work – so let’s talk about those in the context of the overall user story process. No need to reinvent the wheel!
As mentioned, we capture user stories in a workshop with our clients. That workshop covers a lot of ground – because of that, we often defer acceptance criteria to another conversation rather than try to do too much in a kickoff. The key word here, though, is conversation.
Remember how user stories use the following format?
As a <user persona> I want <functionality> so that <benefit / rationale>.
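If it helps to see that template concretely, it can be modeled as a tiny data structure with a slot for acceptance criteria alongside the story itself — a hypothetical illustration, not a schema from any particular tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserStory:
    persona: str        # <user persona>
    functionality: str  # <functionality>
    benefit: str        # <benefit / rationale>
    acceptance_criteria: List[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the story in the standard template."""
        return f"As a {self.persona} I want {self.functionality} so that {self.benefit}."

# The sample story from earlier in this chapter:
story = UserStory(
    persona="content manager",
    functionality="to review submitted articles from our readers before publishing",
    benefit="I can maintain quality control of our content",
)
```

Notice that the criteria live on the story, not beside it — the two always travel together.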
The best way to define acceptance criteria for a given user story is to have a conversation with the user (or the knowledge holder / representative of that user) to whom that story relates. That might sound like a monolithic undertaking, but remember that there are usually around 3-5 user personas that should be focused on in a project.
What you can do, then, is grab your stack of prioritized user stories. I’d recommend just grabbing the 1s / Ms so as to focus strictly on the must-haves. Organize them by user persona (e.g. here are all the user stories that relate to the Marketing Manager; here are all the user stories that relate to the Prospective Job Seeker; etc).
Then, book a meeting with each of those users. It can be an in-person meeting (recommended); a VoIP call (second best); or a phone call (still useful). In this meeting, simply run through each user story one-by-one and ask questions designed to understand more details around the expected functionality behind each story. Have a conversation about each story.
It’s helpful to talk through the end-to-end process with the user/client – e.g. “Ok, how are users submitting articles? What happens next? How will you know an article has been submitted? What format should it come in?” Talking with the user about the entire process – the story of the functionality, as it were – and asking the right questions will uncover acceptance criteria.
When you’re doing this, make sure to take a bunch of simple-language notes! Then, before moving on from a user story card, turn these notes into acceptance criteria for the story and make sure you come to a consensus in the conversation. You can write the criteria down on the back of the user story card, or if you’ve already transferred the user stories into digital format (for example, a spreadsheet or table), then capture the acceptance criteria in a related cell. If you haven’t already got our digital user stories spreadsheet, you can download it at the bottom of this post.
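If you go the digital route, the capture format can be as simple as one row per story with a criteria column. A sketch of that idea using a plain CSV — the column names and file name here are illustrative, not taken from any official template:

```python
import csv

# One row per user story; criteria captured in a related column.
# (Illustrative data and column names -- adapt to your own spreadsheet.)
rows = [
    {
        "persona": "Content manager",
        "story": "Review submitted articles from readers before publishing",
        "acceptance_criteria": (
            "Articles are submitted through a front-end form; "
            "Form data is saved as a Draft article in CMS"
        ),
    },
]

with open("user_stories.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["persona", "story", "acceptance_criteria"])
    writer.writeheader()
    writer.writerows(rows)

# Read it back to confirm the capture round-trips cleanly.
with open("user_stories.csv", newline="") as f:
    captured = list(csv.DictReader(f))
```

Whatever tool you use, the principle is the same: the criteria should live in the same record as the story, never in a separate document that can drift out of sync.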
Then, go on to the next story. And the next. And the next. Then go to the next user group. Lather, rinse, repeat.
All of a sudden, you’ll have robust criteria to accompany each user story. Criteria that was collaboratively defined by the user. Criteria that can be estimated against, built towards, and tested.
Now? Your user stories are truly usable. The discovery phase of your digital project can move on to the next imperative step: auditing. Learn what a Discovery Audit is, and how to run one, in the next chapter!