I like to use prioritization matrices (sometimes called testing or research matrices) to help a team visualize problems and decide what to tackle first. The example above is one I created for a team that hadn't been through this exercise before.
On every team, there are differing priorities among the folks doing the work. The developer may be pushing for a security update, the product owner may want a new feature scoped, and the designer is advocating for research and testing to validate the requested feature. And the poor delivery manager just wants to know when it will all be done!
I've found the best way forward is a group mapping session: everyone gets a sticky note pad, they write down the tasks or features they care about getting done, and we cover the board in stickies. From there, I draw the above prioritization matrix on one half of the board, and we walk through the quadrants together.
This method is pretty common, and there are plenty of resources online for building and using a prioritization matrix like mine. The place where teams get stuck comes after they've grouped tasks into quadrants: how do you prioritize further from there?
Let's say you've got three tasks in the lower-left quadrant -- the Just Do It corner. If all three are relatively low value and relatively low effort, do we just roll a die? Draw straws? Play rock-paper-scissors?
This is where I add a second set of screening questions to help clarify the team's collective understanding of each task. We walk through the following four questions and, in a few brief sentences, come to an agreement on what we know and how to move forward with priority. (I've added this question screen as a PNG at the bottom of the post - feel free to use it!)
Impact on users
Try to answer high, medium, or low. Questions to help frame the impact:
- Will they need to learn a new pattern to benefit from this change?
- Will they see it immediately, or have to find it in the details?
- Is this fixing a known problem?
- Does it reduce time-to-task?
Work involved to implement
Try to answer high, medium, or low. Questions to help frame the workload:
- What are the hours and people estimates? Any sprint-planning considerations?
- Are there resources in place to build and integrate into the existing system?
- What is the expected quality level? Cost? Known dependencies?
- Will it require internal stakeholder buy-in or consent?
Defined problem; measurable approach
Try to answer clear, vague, or fuzzy. Questions to help frame clarity:
- Is it solving a known business or customer problem (or both)?
- Is it aligned with the product roadmap and product goals?
- Is it based on UX best practices or psychological/HCI principles (i.e., things we know we can do without further research)?
- Do we know how we'll measure its success or failure?
Company values
This should be a clear yes or no. Questions to help frame value:
- Are we doing this for short-term gain, or are we slowing down to do it the right way?
- Are we proud of what we're proposing?
- Is it evidence-based, data-backed, and customer-centric?
- Are we challenging the status quo with this change?
- Does it directly or indirectly conflict with any of our company values?
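If your team likes to make the tie-breaking explicit, the four screens above can be encoded as a simple score for tasks sharing a quadrant. Here's a minimal Python sketch of that idea -- the 3/2/1 scale, the field names, and the sample tasks are all my assumptions, not part of the original method:

```python
# Hypothetical tie-breaker: turn the four screening answers into a score.
# Scales are assumptions (3/2/1); adjust to your team's taste.
from dataclasses import dataclass

IMPACT = {"high": 3, "medium": 2, "low": 1}
WORK = {"high": 1, "medium": 2, "low": 3}      # less work scores higher
CLARITY = {"clear": 3, "vague": 2, "fuzzy": 1}

@dataclass
class Task:
    name: str
    impact: str        # "high" / "medium" / "low"
    work: str          # "high" / "medium" / "low"
    clarity: str       # "clear" / "vague" / "fuzzy"
    fits_values: bool  # the company-values screen: yes or no

    def score(self) -> int:
        # A task that conflicts with company values is screened out entirely.
        if not self.fits_values:
            return 0
        return IMPACT[self.impact] + WORK[self.work] + CLARITY[self.clarity]

tasks = [
    Task("Fix broken empty state", "medium", "low", "clear", True),
    Task("Add vanity dashboard", "low", "low", "fuzzy", True),
]

# Highest score first within the quadrant.
for t in sorted(tasks, key=Task.score, reverse=True):
    print(f"{t.name}: {t.score()}")
```

The point isn't the arithmetic; it's that the company-values screen acts as a hard gate (score zero) rather than just another weighted factor.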
I especially love the last screening question -- should we be doing this at all? Is it aligned with who we are as a company? I've seen that question neglected in product prioritization again and again. And by the time someone is brave enough to ask it, the feature is mostly done, or we've sunk too much time and effort into justifying it to question the why.
And the third question helps put the brakes on building features just for the sake of having work to do. By identifying fuzziness early in the process, we save ourselves (and usually the user research team) from chasing a gut instinct or doing the work twice. Instead, the fuzzy tasks become room for discovery sprints, design sprints, and further internal evaluation.