There are a number of methods to improve the usability of an interface. While it's hard to identify one overarching concept that's fundamental to the whole idea of usability, I think there's one that underlies most methods and desirable outcomes. That concept is that the developer is not the user.
The descriptive data becomes the template for whom you measure. The behavioral data becomes the framework for testing. The interaction data becomes the task scenarios that you simulate and measure during a usability test. Improvements in the interactions affect attitudes, and those attitudes, such as increased trust and loyalty, drive further buying behavior.
One of the most important practices in UX design is actually done before the UX design process even starts. Defining the goals and values of the product that you would like to build is the key driver for a results-driven process.
Problem frequency and severity are two critical ingredients when communicating the importance of usability problems. They are also two of the inputs needed for a Failure Modes and Effects Analysis (FMEA), a more structured prioritization process.
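To make the FMEA idea concrete, here is a minimal sketch of how frequency and severity (plus a detection rating, the third standard FMEA input) can be combined into a Risk Priority Number for ranking usability problems. The example issues, field names, and 1-to-10 ratings are illustrative assumptions, not data from any study.

```python
# FMEA-style prioritization sketch for usability problems.
# RPN (Risk Priority Number) = severity * occurrence * detection,
# each rated 1 (low) to 10 (high). All values below are hypothetical.

problems = [
    {"issue": "Checkout button hidden below fold", "severity": 8, "occurrence": 6, "detection": 3},
    {"issue": "Ambiguous error message on login",  "severity": 5, "occurrence": 9, "detection": 2},
    {"issue": "Slow search autocomplete",          "severity": 3, "occurrence": 7, "detection": 5},
]

# Compute the RPN for each problem.
for p in problems:
    p["rpn"] = p["severity"] * p["occurrence"] * p["detection"]

# Fix the highest-RPN problems first.
ranked = sorted(problems, key=lambda p: p["rpn"], reverse=True)
for p in ranked:
    print(f'{p["rpn"]:4d}  {p["issue"]}')
```

Running this ranks the hidden checkout button first (RPN 144), showing how a moderately severe but frequent, hard-to-detect problem can outrank one that is individually worse.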
It’s important to remember that, between retail category, visitor demographics and savvy, and changing trends over time, there’s a lot of idiosyncrasy to any single case study. And of course, as Dell’s bold triggered overlay test illustrates, there are exceptions and surprises, and we have to keep on testing to uncover them.
We explained the benefits of breaking down user experience into its four elements—usability, desirability, adoptability, and value—and discussed ways of applying this framework to help you develop products that customers love.
As consumer UX underwent a renaissance over the last decade, enterprise software stagnated with a design sensibility from the dial-up era.
Usability—much less beauty—was never a priority for business software. All that mattered was that large and complex applications worked. What’s the point of tweaking and beautifying when basic functionality is challenging enough and all of your competitors are equally subpar?
The point is users. Not yesterday’s users who eventually adapted to whatever complex software product you put in front of them. Those users are retiring. I’m talking about millennial workers who know better than to settle for unwieldy, confusing applications that only make their jobs harder.
Bad profits are a ticking time bomb. Customers who are dissatisfied with the service or quality of a product are not only less likely to repurchase it, they are also more likely to tell their friends about the bad experience.
The advantage of looking at multiple studies using different devices, facilitators, and evaluators is that we don't need to rely on a single study with its potential flaws and idiosyncrasies to draw a conclusion about the relationship between frequency and severity.
Many modern digital products enable complex, emergent behavior, not just pure task completion. We’re building habitats, not just tools; yet we often think of discoverability only in terms of task execution.
In our earlier multi-device designs, we accounted for this convention by creating a series of navigation structures that adapted from comfortable touch zones on small screen devices to the kinds of navigation structures people have come to expect on desktop and laptop computers (top of screen, etc.).
These are some of the questions we typically ask when testing design concepts with users. In this article, we’ll tell you how we used a method called Rapid Iterative Testing and Evaluation, or RITE, to help us answer them, and we’ll offer some tips that you can try within your own teams.
User Experience Design calls for us to write words on buttons all the time—but how do we know whether we’re choosing the right ones? Linguistics may provide a clue. What follows is a simple test to check whether your calls to action “work” linguistically, as well as a guide to considering the grammar of your experience elements.