Summary: AI can already perform many UX tasks, ranging from design and research ideation to analyzing qualitative user data at scale. It’s the perfect assistant that quickly produces the first drafts of any UX method plan or deliverable. It will do more in the future, including possibly complete UI designs. But AI will not eliminate the need to watch human users.
AI has a strong role in user experience, and UX professionals need to embrace AI sooner rather than later. (If you still haven’t gotten with the program, here’s advice for getting started with AI in UX before you become an obsolete relic who deserves to be unemployed.)
AI itself sorely needs more UX involvement, because almost all the major AI platforms have atrocious usability and have clearly never benefited from the standard UX design process. However, that’s another story. My errand today is not to ask what UX can do for AI, but to ask what AI can do for UX. (And what it can’t do.)
A standard usability test has 3 actors: the test participant (a representative of the target audience; to the left in this drawing), the test facilitator (a usability expert; to the right), and the user interface being tested (on the computer screen). Most AI firms need more of this, but today, I’m asking how AI can improve usability. (Usually, the facilitator will scoot back a little to be out of the participant’s peripheral vision, but there are limits to how accurately I can get Dall-E to position objects in a scene.)
What AI Can Do
Before turning to the limits of AI, let’s first look at what AI is capable of:
AI is more creative than most humans, meaning that ideation is free when using AI. Need more ideas? Just ask, and AI will make 10 more in a few seconds. This is obviously good for design, but it’s also good for user research. Need ideas for test tasks? Just one of the many things AI can do. (Just remember to curate the ideas and iterate on the details, because you shouldn’t expect current AI to be perfect. The best results come from human-AI symbiosis.)
AI is more productive than humans, especially at grunt work. Use it to read through voluminous user feedback to identify themes and pick out the needles in the haystack that deserve UX attention. Turn qual into quant data. (See the case study of how GE does this to track and improve the usability of internal systems.) If you have enough data, it’s in the noise if AI misclassifies a few entries, as it’s wont to do.
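As a minimal sketch of what “turn qual into quant” can look like in practice (the theme list, the prompt wording, and the generic `llm` callable are illustrative assumptions, not any specific product’s API):

```python
from collections import Counter
from typing import Callable

# Hypothetical theme list for illustration; use the themes that matter to your product.
THEMES = ["navigation", "performance", "content", "visual design", "other"]

def classify_comment(llm: Callable[[str], str], comment: str) -> str:
    """Ask the model to put one piece of user feedback into exactly one theme."""
    prompt = (
        "Classify the following user feedback into exactly one of these themes: "
        + ", ".join(THEMES)
        + ". Answer with the theme name only.\n\nFeedback: "
        + comment
    )
    answer = llm(prompt).strip().lower()
    return answer if answer in THEMES else "other"  # anything unexpected lands in "other"

def qual_to_quant(llm: Callable[[str], str], comments: list[str]) -> Counter:
    """Turn a pile of qualitative feedback into theme counts (quantitative data)."""
    return Counter(classify_comment(llm, c) for c in comments)
```

Tally a few hundred comments this way, and the occasional misclassification does indeed disappear into the noise.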
AI creates content at scale — quickly too — for anything from illustrations for UI prototypes or UX deliverables to draft copy for your website mockups before user testing. No more testing (or showing to clients) with “lorem ipsum” instead of realistic content. By the same token, AI also expedites the writing of the final copy, but continue to heed my advice for human editing before publishing.
AI analyzes content at scale, making content strategy measurable and allowing you to manage formerly fuzzy concepts such as tone of voice and readability levels (a core accessibility concern).
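For instance, readability can be scored automatically. Here is a rough sketch of the Flesch Reading Ease formula; the syllable counter is a crude vowel-group heuristic, so treat the output as an approximation rather than an exact measure:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text (very simple text can exceed 100)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text) or ["word"]  # avoid division by zero on empty input
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```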
AI is tireless and doesn’t suffer from blank page syndrome. Let it type out the first draft of virtually any of the endless documents and deliverables in the UX process, including usability test plans, recruiting screeners, and test tasks. (Always remember to check and edit. But you get the first draft in seconds instead of hours!) It’s always easier to edit than to create from scratch.
AI is a better coder than you. (Unless you’re an expert developer, but if you’re a typical UX professional, AI is indeed the better coder.) It can write code for anything from design prototypes to statistical analyses in R. Even if you prefer to do your own programming, using AI as an assistant (or “copilot” as GitHub calls it) more than doubles programmer productivity.
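To illustrate the kind of analysis a UX professional might ask AI to write (sketched here in Python rather than R, with made-up success counts), a simple two-proportion test compares task success between two design variants:

```python
from math import sqrt, erf

def two_proportion_z_test(success_a: int, n_a: int, success_b: int, n_b: int) -> tuple[float, float]:
    """Compare two success rates; returns (z statistic, two-sided p-value, normal approximation)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: design A succeeded 18/20 times, design B 11/20 times.
z, p = two_proportion_z_test(18, 20, 11, 20)
print(f"z = {z:.2f}, p = {p:.3f}")
```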
AI is not you: it’s a free colleague, if a junior one. Two heads are better than one, even if one head is artificial. Exactly because AI is not you, it’ll give you something different, whether in ideation or when providing a free crit of your work.
AI can be the UI. We don’t have many examples of this yet (except for the low-usability AI platforms), but AI can be a design medium, in addition to a design helper. Even if AI doesn’t infuse the entire UI, it can contribute a component, such as a recommendation engine.
This is what current AI can do. Since Jakob’s 1st Law of AI states that “today’s AI is the worst we’ll ever have,” we should expect future generations of AI to do better.
It’s a long-standing truth that two heads are better than one (Midjourney). Still true when one is AI.
Two heads are better than one, ancient version (Midjourney). In The Iliad by Homer (about 850 BC), Diomedes, master of the war cry, says, “When two men walk together, one may see the way to profit from a situation before the other does. […] One man alone […] his powers of invention are too thin.” (Emily Wilson’s translation — my current favorite.)
What AI May Possibly Do
I get exasperated when I hear people say, “AI will never be able to do X.” Never is a long time. Many of the things AI is claimed to be incapable of, it can already do, such as being creative and exhibiting empathy.
Empirical research shows that AI is more creative than 99% of humans. This leaves 1% of humans who are more creative than current AI, but with next-generation AI, maybe this drops to 0.5% or less. In any case, if AI is as good at something as 99% of humanity, I say it can do it. In general, I prefer comparing AI to average humans, not the very best of the 8,088,032,213 people on the planet.
As an example that also addresses the question of AI showing empathy: if AI is a better therapist for a person with a certain mental-health condition than the best human therapist that this patient could realistically be treated by, then it’s great to have AI available for the patient. (Here, “realistically” is some combination of availability and affordability. In many parts of the world, there are simply no qualified therapists within a 100 km radius of the patient.)
In a recent debate about whether AI can exhibit empathy, my debate partner was more skeptical than I was but conceded that AI can sometimes exhibit acceptable levels of empathy. I drew the boundaries of AI’s abilities wider than she did. In any case, we both thought that even current AI can show some degree of empathy and that this may even be superior to human empathy under certain circumstances.
It is quite likely that AI will become capable of designing good user interfaces, given descriptions of the tasks and user research findings. Currently, AI can only give you design ideas, and the human designer must apply good taste and discernment in picking the best of these ideas to be integrated into a coherent user experience.
What AI Will Never Do
Quite likely, many things that current AI can’t do will become possible in the future. However, some things will not become possible, no matter how much AI improves, simply because they are impossible by nature. This also means that no human can do them: even the very best of our 8B+ people can’t, and never will.
By way of analogy, let’s consider when Homer in The Iliad says that a hero, such as Achilles, is “godlike.” This doesn’t mean that Achilles literally can do the same as the gods: he can’t create lightning like Zeus, earthquakes like Poseidon, or a plague like Apollo. Today, science can do several of these things, which just means that the ancient Greeks didn’t have sufficient ambitions when ascribing powers to their gods. When Achilles is described as godlike, it means that he is so superior to the other soldiers that he seems beyond the human plane. Not that he’s on the level of Zeus: there’s room in the middle.
Achilles was the greatest of the ancient Greek warriors. Was he as powerful as Zeus? No way. Similarly, AI may be better than humans at some tasks and yet not accomplish the impossible. (Leonardo)
Similarly, I expect AI to achieve many abilities that are far superior to those of regular humans, and sometimes even superior to the very best human. This is already the case in constrained domains like playing chess or Go. But AI will never gain supernatural powers. As with Achilles, just because no human can do something doesn’t mean that AI won’t be able to do it. But if something is truly impossible, AI can’t do it either.
What do I mean by impossible? It’s possible that our understanding of the laws of nature will change. Still, within current science, it’s impossible to change the past, to perfectly predict the future, or to travel faster than light. Of these 3 examples, predictions are possible with some degree of inaccuracy, and it’s likely that AI will gradually achieve tighter margins of error than humans. There will still be some error, though.
This speaks directly to UX, because one of our main goals is indeed to predict the future. Will we make more money if we release this new design? Or would another design be more profitable?
User research methods allow us to estimate the answers to such questions based on knowledge about users, as opposed to simply the product manager’s own opinion or preferences. This increases our probability of guessing right, but we can never achieve 100% prophetic accuracy when estimating the future.
UX has incredibly high return on investment because the user research studies are cheap (often only testing 5 users), whereas the profits from guessing right more often are huge.
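The arithmetic behind testing only 5 users: a study with n users finds roughly 1 - (1 - p)^n of the usability problems, where p is the probability that a single participant uncovers any given problem (about 0.31 in typical projects, though the exact value varies). A minimal sketch under that assumption:

```python
def share_of_problems_found(n: int, p: float = 0.31) -> float:
    """Expected share of usability problems found by a test with n users,
    assuming each user independently reveals a given problem with probability p."""
    return 1 - (1 - p) ** n

# Diminishing returns: a handful of users already finds most of the problems.
for n in (1, 3, 5, 15):
    print(n, round(share_of_problems_found(n), 2))
```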
What about AI? It lives in its little box and currently doesn’t interact with users. This makes AI estimates less accurate than estimates based on user data. AI can perform heuristic evaluation along the same lines as a human UX expert, but current AI is not as good as human experts at applying usability concepts to the analysis of user-interface designs. Future AI will be better, and I see no reason why AI won’t eventually get better than most (or all) human UX experts at heuristic evaluation. After all, it will have more examples and more data at its (virtual) fingertips to help in resolving the inevitable tradeoffs and nuances.
But will AI be able to replace user research? No; here we enter the realm of the impossible. Future AI will certainly be able to facilitate user research, especially remote studies. Current AI can already conduct user interviews and ask follow-up questions during the session. Does it do this as well as an expert human usability specialist? Probably not, but an AI-driven interview might already be better than the biased sessions conducted by many humans who are not skilled in user research.
Refer back to my image at the beginning of this article: User research involves 3 parties:
The user (who should be representative of your customers)
The facilitator (who can be human or AI)
The design being tested (can be operational or a prototype)
The only one of these who has to be human is number 1, the representative customer. User research without users is an impossibility.
This scenario will not work: AI testing AI instead of a human user who is representative of your customers. (Dall-E)
The reason we conduct user research instead of relying solely on the 10 heuristics and other usability guidelines is that humans always have unexpected behaviors. How do we know what people want, need, or prefer? This cannot be predicted, any more than we can predict the future. We can estimate, yes, and the 10 heuristics encapsulate decades of hard-won experience about which kinds of user interfaces are easy to use and which are difficult.
But we won’t know what humans do unless we watch them.
About the Author
Jakob Nielsen, Ph.D., is a usability pioneer with 41 years of experience in UX and the Founder of UX Tigers. He founded the discount usability movement for fast and cheap iterative design, including heuristic evaluation and the 10 usability heuristics. He formulated the eponymous Jakob’s Law of the Internet User Experience. Named “the king of usability” by Internet Magazine, “the guru of Web page usability” by The New York Times, and “the next best thing to a true time machine” by USA Today. Previously, Dr. Nielsen was a Sun Microsystems Distinguished Engineer and a Member of Research Staff at Bell Communications Research, the branch of Bell Labs owned by the Regional Bell Operating Companies. He is the author of 8 books, including the best-selling Designing Web Usability: The Practice of Simplicity (published in 22 languages), the foundational Usability Engineering (26,661 citations in Google Scholar), and the pioneering Hypertext and Hypermedia (published two years before the Web launched). Dr. Nielsen holds 79 United States patents, mainly on making the Internet easier to use. He received the Lifetime Achievement Award for Human–Computer Interaction Practice from ACM SIGCHI and was named a “Titan of Human Factors” by the Human Factors and Ergonomics Society.
Subscribe to Jakob’s newsletter to get the full text of new articles emailed to you as soon as they are published.
I love this post; it is exactly how I think about this topic. All of this is what I considered when creating my startup PrepxUs.com. Instead of just creating another user research tool with horrid usability, I focused on simplicity, unification of data, diversity of data, accuracy, transparency, and lack of bias, before even considering adding AI functionality.
So the symbiosis of human-AI interaction can flourish.
I detail it in this LinkedIn post:
https://www.linkedin.com/posts/rilwan-owolabi-prepxus_userresearch-uxresearch-ux-activity-7151890631416508416-SBf2?utm_source=share&utm_medium=member_desktop
A great take by one of the masters. But I'm more skeptical that others will keep human observation at the center of UX in our AI-driven future. Yes, "user research without users is an impossibility." But many organizations are already quite comfortable making product decisions without user testing. I could easily see AI responses acting as a proxy for user feedback in the not-too-distant future, if they aren't already.