Notes from Jared Spool's UX Speakeasy talk
Last week (February 3rd, 2016) I attended a UX Speakeasy meetup featuring the always entertaining and insightful Jared Spool.
Jared presented "Is Design Metrically Opposed?" (a transcript of the same talk, given at a different event, is available).
The theme: observations drive inferences, which drive design decisions. To make a good design decision, the inferences must be right, or mostly right. Jared drove home the point that the best design teams don't stop at the first inference; they explore multiple inferences and usually end up with better designs because they validate those inferences through user research and testing.
A large portion of the talk focused on metrics and their shortcomings. Metrics are used as observations to drive inferences. But hold on, there's a problem: most of what comes out of analytics packages, like Google Analytics, isn't suitable as the basis for inferences. Some examples of what analytics tell you:
- page views spiked one day
- bounce rate is 56%
- time on page was below average last week
None of these come with any explanation as to why. And the "why" is what leads to a design decision. Without "why" you're left guessing.
Other metrics got some air time too: net promoter score, conversion rate, and customer satisfaction were all discussed, and all were subsequently dismissed as unsuitable for driving inferences. Especially fun was the digression on satisfaction. The gist: "satisfied" shouldn't be a goal. Jared pointed out that "satisfactory" is neutral at best; he likened it to "edible". Instead, design should be measured on a scale from frustrating to delightful.
Enter the customer journey map, which tracks customer sentiment throughout a process, like completing a purchase on a website. Measuring customer emotions at each step during user research makes it easy to see what works and what doesn't. Focus on eliminating or minimizing the frustrating parts of a journey and the whole experience improves.
Out of this discussion came a concrete recommendation that can be applied outside of customer research settings: to see where customers are getting frustrated, count and track when error messages are shown. The reasoning is that error messages appear at the moments of greatest frustration.
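This isn't something Jared showed, but here's a minimal sketch of what that counting might look like on a website. The `sendAnalyticsEvent` helper and all the names here are hypothetical stand-ins for whatever analytics package you already use; the point is just that every error message display becomes a trackable event.

```typescript
// Hypothetical sketch: count error-message displays as analytics events.
// All names below are illustrative, not from the talk.

type ErrorShownEvent = {
  messageId: string; // stable identifier for the error message shown
  page: string;      // where in the journey it appeared
  shownAt: string;   // ISO timestamp
};

// Stand-in for the real call into your analytics package
// (this might POST to an endpoint or call an analytics SDK).
function sendAnalyticsEvent(name: string, payload: ErrorShownEvent): void {
  console.log(`[analytics] ${name}`, payload);
}

// Wrap whatever renders error messages so every display is counted.
function showError(messageId: string, text: string): void {
  // ...render `text` to the user here...
  sendAnalyticsEvent("error_message_shown", {
    messageId,
    page: window.location.pathname,
    shownAt: new Date().toISOString(),
  });
}

// Example usage:
// showError("checkout.card_declined", "Your card was declined.");
```

With something like this in place, a simple count of `error_message_shown` events per page points straight at the most frustrating parts of the journey.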
At one point I was reminded of Jared's wonderful definition of design, so I'm adding it here: design is the rendering of intent.
To summarize:
- Most analytics metrics aren't suitable observations on their own.
- Observations drive inferences, so what's observed matters.
- Don't stop at the first inference. Test inferences and create data from them.
- Data science is an essential UX skill.
- Don't measure satisfaction.
- Tracking error messages could be the simplest way to find what to fix.
- The answer to the question asked in the title of the talk is no.