A couple of "usability" topics recently caught my eye. Usability falls under the broad umbrella of ergonomics; indeed, to professional ergonomists, usability is ergonomics: a systems science, like all other ergonomic pursuits, that includes physical, cognitive and social/organizational components. The articles touch on the challenges – often political in nature – ergonomists face when attempting to influence human interface design, as well as the value and importance of science in our pursuit of improving those interfaces.
In one article John Sorflaten, PhD, CPE, CUA, writing for the private firm Human Factors International, writes about the politics of usability, comparing a "We came, we saw, we conquered" human interface design approach to one based on expertise backed by scientific evidence and experience. He uses the "we conquered" concept to describe the all too familiar scenario through which individual and anecdotal biases mixed with a dash of bravado drive user interface designs, rather than the scientific approach we use in ergonomics: analyze the problem and define desired outcomes; design a solution; test the design for efficacy (and revise until the desired results are achieved). He argues that we need to understand and expect resistance from our non-expert colleagues, and be prepared to deal with and overcome the inevitable political challenges to good/usable designs.
It turns out that we can expect untrained colleagues and co-workers to draw wrong conclusions when talking about usability issues dear to our hearts. And we can expect them to feel their opinions have the weight of truth and justice.
I liken this to a phrase we hear a lot about ergonomics, invariably from someone with little experience or knowledge of the topic: "Ergonomics is common sense" (to which I reply, if it were common sense, then good ergonomics wouldn't be so very uncommon).
Sorflaten cites research that indicates people with less knowledge and experience about a specific topic tend to rate their knowledge higher than it actually is:
… individuals who fell within the bottom 25% of those reporting knowledge in a given topic still tended to place themselves in the "above average knowledge" category …
They were also asked to estimate their skill level compared to other people taking the same set of quizzes.
When looking at the 25% of the participants who scored the lowest, their average quiz scores fell around the bottom 10-12th percent of the possible scores. Meanwhile, those very same participants predicted their own scores would fall above average – their scores would rank as high as 58 to 67 percent of the other participants’ scores …
The authors suggest that "Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it."
The gist of Sorflaten’s article is that ergonomists (or software usability experts, his particular focus) should be better prepared to deal with project development politics if we are to influence designs. I wholeheartedly agree.
Perhaps one of the barriers to our success is that the opposite trend holds for experienced, knowledgeable people predicting their own performance:
… participants scoring in the top quarter gave more modest predictions of their performance.
Their actual average scores were ranked as high as 85 to 90 percent of all participant scores, yet they predicted they would only get as high as 72 to 74 percent of the participant scores.
There are a lot of people calling themselves usability experts these days, most often in the narrow application area of software interfaces. I wonder how many of them really understand usability/ergonomics in its broad sense? How many recognize that true usability goes well beyond the software interface? Likewise, how many people who practice under the banner of ergonomics recognize and address the role of software interfaces?
The hardware-software ergonomics/usability relationship comes directly into play with many new consumer products, such as smartphones and tablet devices. Usability guru Jakob Nielsen recently published a summary of a study comparing reading speed and satisfaction among a printed book, an iPad, a Kindle 2, and a desktop PC.
Here are a few of the findings:
The iPad measured at 6.2% lower reading speed than the printed book, whereas the Kindle measured at 10.7% slower than print. However, the difference between the two devices was not statistically significant because of the data’s fairly high variability.
Thus, the only fair conclusion is that we can’t say for sure which device offers the fastest reading speed. In any case, the difference would be so small that it wouldn’t be a reason to buy one over the other.
But we can say that tablets still haven’t beaten the printed book: the difference between Kindle and the book was significant at the p<.01 level, and the difference between iPad and the book was marginally significant at p=.06.
Put another way: reading on the Kindle 2 was statistically significantly slower than reading the printed book, while the iPad-versus-book difference (p=.06) fell just short of the conventional p<.05 threshold for significance.
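The statistical point underneath all this is that an observed speed difference only "counts" once it is large relative to the spread of the data. As a minimal illustration – with made-up numbers, not the study's actual data – Welch's t statistic shows how the very same mean gap can be non-significant when the samples are noisy and overwhelmingly significant when they are not:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    # statistics.variance uses the n-1 (sample) denominator
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical reading times; both pairs differ by the same 6-unit mean,
# but the first pair is far noisier than the second.
book_noisy, ipad_noisy = [100, 90, 110, 95, 105], [94, 84, 104, 89, 99]
book_quiet, ipad_quiet = [100, 99, 101, 100, 100], [94, 93, 95, 94, 94]

print(welch_t(book_noisy, ipad_noisy))  # 1.2 -- well below the ~2.3 cutoff for p<.05 at these sample sizes
print(welch_t(book_quiet, ipad_quiet))  # ~13.4 -- far beyond the cutoff
```

This is why Nielsen's data could show the iPad "6.2% slower" yet not support a significant iPad-versus-book difference: the variability in his readers' speeds was large relative to the gap.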
iPad, Kindle, and the printed book all scored fairly high at 5.8, 5.7, and 5.6, respectively [on a scale of 1-7]. The PC, however, scored an abysmal 3.6.
Most of the users’ free-form comments were predictable. For example, they disliked that the iPad was so heavy and that the Kindle featured less-crisp gray-on-gray letters. People also disliked the lack of true pagination and preferred the way the iPad (actually, the iBook app) indicated the amount of text left in a chapter …
This study is promising for the future of e-readers and tablet computers. We can expect higher-quality screens in the future, as indicated by the recent release of the iPhone 4 with a 326 dpi display. But even the current generation is almost as good as print in formal performance metrics — and actually scores slightly higher in user satisfaction.
Not everyone was pleased with Nielsen's study, or at least with his interpretation of the results. Calling it "Bad Research," John Grohol wrote:
We love a usability study as much as the next person. But we love well-designed, elegant studies that rightfully point out their own limitations and are printed in peer-reviewed journals most of all. We have less love for studies that act as propaganda, or researchers who draw conclusions not supported by their own data.
… the point of research and statistical analysis in the first place is to go beyond what seems to be true and see if the difference is meaningful or not. After all, data may look like they mean something, but if the statistics don’t back it up, then the appearance of meaning is just an illusion. One that shouldn’t be emphasized in one’s sub-titles, since it’s misleading.
In fact, the data from this particular study found that only reading on the Kindle was statistically different than reading a book. But that’s a far less sexy conclusion than the broader, “Books Faster Than Tablets.”
Grohol goes on to critique Nielsen's study and conclusions, offering alternative interpretations that could be drawn from the data.
This debate piqued my interest because just last week, writing in The Ergonomics Report™, our subscription-based publication, I questioned the quality of our science:
It’s not just the state of the scientific basis for ergonomics, but the state of science in general, and the public’s understanding of that science. I’m not sure we can influence all scientific pursuits, but hopefully we can help to improve the research, and the interpretation and reporting of that research, in our own field.
I don’t know how closely Nielsen identifies with the term ergonomics, but when he comments on usability, he is by default commenting on ergonomics. And he has a large following, so when he speaks, he is heard – which makes it all the more concerning if he draws unsupported conclusions from his research. In my article I also wrote:
We don’t bring this issue forward with the intent to look for or assign blame. As a publisher ourselves, I’m sure we can find instances where we’ve reported questionable research without diligent critical review. We do bring this issue forward because we’d like to foster a dialogue that addresses this important issue.
We posed the following questions to our subscribers, and I’ll now extend the questions to all:
- Do you believe there is a lot of "bad" science in the field of ergonomics? If so, do you believe the quality level is any worse than the science in other fields? What factors do you believe contribute to the publication/promotion of poor quality science?
- Can you share any ideas that you believe would increase the quality level of science in ergonomics?
- Would you support Ergoweb if we chose to launch a high-quality journal for the field of ergonomics?
We encourage you to share your thoughts either publicly, by adding your comments in the designated comments area below, or confidentially, by emailing firstname.lastname@example.org. We look forward to your thoughts and feedback.