Summary: Is AI an unfeeling machine, or can it exhibit empathy? Do users feel empathy when interacting with AI? If AI coaches a physician to appear more empathetic, is the physician showing empathy toward the patient, or faking it? A debate.
Sarah Gibbons and Jakob Nielsen co-authored this article. See the authors’ biographies at the end of the article. Most of the article is a debate between Sarah and Jakob, who present their differing perspectives on the question.
In our rapidly advancing world of technology, artificial intelligence has not only become a driving force behind innovation but has also taken great strides in replicating human cognitive abilities. Beyond users treating AI like it’s human, there’s increasing use of AI as a companion that engages users in heartfelt conversations and offers solace through friendly exchanges.
As AI systems continue to ascend the ladder of sophistication, a consequential query looms on the horizon: does artificial empathy count as empathy? Does the distinction between machine and human empathy bear any significance, as long as the recipient experiences its constructive impact?
In this article, we explore two different perspectives on artificial empathy: how it compares to human empathy, the ethical questions it presents, and the depth to which it can mirror the intricacies of human compassion. We don’t have clear answers yet, so the article is a debate between two informed estimates of how our AI future will play out.
Since we don’t know the final outcome yet, this article is in the form of a debate between Sarah Gibbons and Jakob Nielsen about the prospects for artificial empathy. Sit back and enjoy. (Dall-E.)
The Spectrum of (Human) Empathy
Before engaging in discussions surrounding artificial empathy, aligning on a shared understanding of the concept of (human) empathy is important.
Definition: Empathy is the capacity to comprehend and share another individual's emotions, thoughts, and viewpoints.
Empathy often leads to a genuine emotional connection and elicits feelings of compassion and support for the person in question. It involves not only recognizing the emotions someone else is experiencing but also feeling what they are feeling, to some extent, and showing understanding and care.
Empathy exists along a spectrum. This spectrum of empathy encompasses pity, sympathy, empathy, and compassion. Pity and sympathy require minimal effort or understanding, while empathy and compassion demand active engagement aimed at effecting positive change.
An adaptation of Sarah’s original Spectrum of Empathy visualization from 2019, which continues to align with our thinking about human empathy.
The Debate on Artificial Empathy: Is It Empathy?
To embrace the debate format while ensuring we cover the many dimensions of the topic, Jakob and Sarah answered five key questions related to the subject; their positions are summarized at the start of each question below.
Q1: How is artificial empathy different from human empathy?
Jakob: Doesn’t matter. How empathy is generated doesn’t matter if AI and humans both positively impact the recipient.
Sarah: It’s manufactured. There is an inherent choice behind human empathy, rendering it more valuable and meaningful than the automated and predetermined nature of artificial empathy.
Jakob: The most important aspect of empathy is how it impacts the recipient. How that empathy is being generated is of less importance. Let’s develop something similar to the Turing Test. I might immodestly call this the Nielsen Test.
(As a reminder, the Turing Test, proposed by English computer pioneer Alan Turing in 1950, is a method for determining whether a computer can be said to “think.” The test involves a human judge engaging in a natural language conversation with a human and a machine, without knowing which is which, and then deciding which of the two is human. If the judge can’t tell the difference between the human and the machine, then the machine is said to have passed the Turing Test. For all practical purposes, AI thinks like a human if its expressed thoughts are like a human’s. We can’t look inside the black box and see how those expressions came to be.)
In the Nielsen Test, a person is treated with a certain degree of empathy, and if the person can't tell whether a human or an AI generated that empathy, then we'll say that the AI exhibits complete empathy. More likely, instead of the whole Turing-like test apparatus, we'll employ the standard user research methodology of simply asking people to rate the empathy they're receiving on a 1-7 scale. Did they feel they were treated with brutal indifference (score 1) or with the utmost empathy (score 7)?
For example, patients are told some bad news after a medical test. Did the doctors generate empathetic ways of communicating with the patients solely from their innate bedside manner, or did they benefit from an AI's coaching and/or script-writing contributions? We would know if we conducted a research study on this problem. But the patients would not know. All that matters to the patients is how they feel treated by the doctors. That's what the patients would score, and that's what we would measure in our hypothetical research study.
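To make the measurement concrete, here is a minimal sketch (in Python) of how the ratings from such a hypothetical study might be summarized. Everything in it is invented for illustration: the group labels, the 1-7 scores, and the sample sizes. A real study would, of course, need proper sampling and significance testing.

```python
# Illustrative sketch only: compare hypothetical 1-7 empathy ratings from
# patients whose doctors were AI-coached versus unaided. All numbers are invented.
from statistics import mean, stdev

ai_coached = [6, 5, 7, 6, 6, 5, 7, 6]   # ratings from patients of AI-coached doctors
unaided = [4, 5, 3, 4, 5, 4, 4, 3]      # ratings from patients of unaided doctors

def summarize(label, scores):
    print(f"{label}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}, n={len(scores)}")

summarize("AI-coached doctors", ai_coached)
summarize("Unaided doctors", unaided)

# In the Nielsen Test framing, the only question is whether the first mean is
# reliably higher than the second -- not whether patients can tell which
# condition they were in.
```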
I prefer the scoring approach over having people try to guess whether the empathy came from a human or a machine. The reason is that the entire exercise aims to make people feel better. The patient may be doomed to die in a year, but the pain of that news is softened by the way it's delivered. That's what we need to score, and if AI makes patients feel better, then using AI is good for humanity.
(Of course, artificial empathy does not just benefit medical patients. Delivered empathy matters in many other cases, such as students receiving high or low exam scores and being admitted to or refused entry to their "dream school," employees receiving good or bad performance reviews and being promoted or fired, and so on. There are thousands of such cases where we can measure whether using AI makes people feel better or worse.)
The doctor is treating a patient who lost a leg in an accident, a situation that calls for empathy. Let's say she's a great surgeon but has a lousy bedside manner. What if the doctor is being coached by the AI in the corner? Will this be better or worse for the patient? If better, Jakob says to use AI-enhanced empathy. (Dall-E.)
Sarah: Human empathy stems from complex cognitive processes, shaped by individual experiences, cultural influences, and moral considerations. It involves a conscious decision to invest effort to understand, and connect with, others on an emotional level.
In contrast, artificial empathy is programmed and operates on predefined algorithms, lacking the spontaneity and depth inherent in human emotional responses. Machines simulate empathy based on patterns and rules, instead of experiencing emotions themselves. While artificial empathy can mimic certain aspects of human behavior, it lacks the authentic, intuitive nature of human emotional engagement.
The essence of human empathy lies in the ability to choose when and how to express compassion. Humans can prioritize empathy based on context, relationship, and personal values, contributing to a more meaningful and adaptive form of emotional connection. In contrast, artificial empathy functions as a default mechanism, devoid of the nuanced and subjective choices that characterize human empathy.
Q2: What essential components of empathy can AI mimic?
Jakob: Almost all of them. AI can mimic most essential components of empathy, except for the non-verbal indicators.
Sarah: Mimics human sympathy more than human empathy. AI more successfully mimics the essential components of sympathy—understanding and supporting others without necessarily sharing their emotional experiences.
Sarah: Given that artificial empathy is a product of programmed models, it lacks genuine emotional understanding and personal experience, even though it can mimic them on the surface. Thus, artificial empathy and its characteristics more closely align with human sympathy, for two primary reasons:
Programmed responses: Artificial empathy expresses itself through predetermined responses and actions, similar to human sympathy where supportive responses can easily be given without necessarily sharing the same emotion or experience. This is similar to how we will often extend a habitual “I’m so sorry” to someone after a tragic event, without necessarily knowing what it may feel like to be in their shoes. We are programmed to respond with “I’m so sorry,” just as artificial empathy is programmed to extend the same.
Lack of personal experience: Artificial empathy lacks subjective consciousness and the ability to draw from a reservoir of personal experiences (as human empathy requires). Instead, it functions based on objective, designed learning models. This is similar to human sympathy, which does not require you to have had a personal experience in order to feel sorry for someone. (Human empathy, in contrast, requires mirroring another's emotion, often built on personal experiences that bear resemblance.)
When discussing artificial empathy, it is more appropriate to draw parallels with human sympathy, which involves understanding and supporting others without necessarily sharing their emotional experiences, something AI is unable to do.
Jakob: Sarah's chart uses words like "I feel" or "I am moved." We might say that AI can't feel or be moved. But as I claim in promoting the Nielsen Test, what matters is whether, from the outside, the AI seems to behave as if it felt or was moved. I think it can do both. Maybe right now, AI can't mimic the most empathic humans perfectly, so on my test, it would receive lower scores on these dimensions.
However, most humans are not that empathic. We're not scum who go around kicking little puppies or telling mothers that their baby is ugly. At least most of us are not that evil. But we also don't necessarily feel the pain of other people or find ourselves deeply moved by their plight, especially when dealing with strangers. This doesn't make us bad people, just normal people.
Returning to my behaviorist roots, let's consider the external manifestations of empathy and see if AI can mimic them. ChatGPT provided the following list of the top 15 ways of exhibiting empathy:
Verbal Indicators
Active Listening: The person is fully engaged in what the other is saying, asking follow-up questions to understand better.
Affirmative Responses: Uses phrases like "I understand," "I see," or "That must be hard for you" to affirm feelings.
Personal Sharing: May share similar experiences or feelings, but not in a way that overshadows the other person's situation.
Non-Verbal Indicators
Eye Contact: Maintains steady eye contact, indicating focused attention.
Open Posture: Body language is open and inviting, signaling attentiveness and willingness to engage.
Mirroring: Subconsciously mimics the other person's body language or emotional state.
Emotional Validation
Acknowledging Feelings: Expresses understanding of the other person's emotional state without judgment.
Naming Emotions: May articulate what the other person is feeling, showing an understanding of their emotional state.
Supportive Actions
Offers Help: Takes practical steps to offer assistance, whether it's giving advice (if appropriate) or offering to help in concrete ways.
Follow-Up: Checks back to see how the person is doing, demonstrating ongoing concern.
Tone of Voice
Soothing Tone: Uses a calm and soothing voice that communicates understanding and care.
Vocal Inflections: Varied pitch and tone that align with the emotional undertones of the conversation.
Engagement Level
Fully Present: Appears genuinely interested and committed to the conversation, not distracted by external factors.
Validates Perspective: Shows an understanding of the other person's viewpoint, even if they don't necessarily agree.
Expresses Humility
Admits Limitations: Acknowledges if they can't fully understand the other person's experience but still offers support.
I would argue that current AI can mimic all of these indicators of empathy, except for the non-verbal indicators. Eye contact, open posture, and mirroring can be included in future AI-generated video avatars, and it will be easier for an AI-generated video to remember to exhibit these cues than it is for a human.
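To make that claim scannable, here is a toy encoding (Python) of the indicator list above. The True/False flags simply restate Jakob's judgment that today's text- and voice-based AI can exhibit everything except the non-verbal indicators; they are opinions, not measurements.

```python
# Toy encoding of the ChatGPT-generated indicator list. The flags restate
# Jakob's judgment about what current (non-embodied) AI can mimic.
AI_CAN_MIMIC = {
    "Active listening": True, "Affirmative responses": True, "Personal sharing": True,
    "Eye contact": False, "Open posture": False, "Mirroring": False,   # non-verbal
    "Acknowledging feelings": True, "Naming emotions": True,
    "Offers help": True, "Follow-up": True,
    "Soothing tone": True, "Vocal inflections": True,                  # via voice synthesis
    "Fully present": True, "Validates perspective": True, "Admits limitations": True,
}

missing = [name for name, ok in AI_CAN_MIMIC.items() if not ok]
print(f"{sum(AI_CAN_MIMIC.values())} of {len(AI_CAN_MIMIC)} indicators mimicable; "
      f"out of reach until video avatars: {', '.join(missing)}")
```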
Many of the behaviors on the list are easier for AI than for humans, especially regarding being fully present, understanding somebody you disagree with, and expressing humility. It’s no skin off the AI’s nose to do any of these. In contrast, a human can’t help but be intermittently distracted by thoughts of what’s for dinner or having some reluctance to exhibit strong humility or validate a perspective that’s opposite to their own beliefs.
Consider a therapist who is either profoundly religious or a committed atheist and treats a patient with the opposite belief system. Yes, all training says the therapist should respect the patient’s beliefs. But in practice, if you strongly believe in A, you can’t help but be a little repulsed by people who forcefully profess not-A as a fundamental element of their being. On the other hand, to a computer, A and not-A are equally valid beliefs, so it will treat people of both persuasions with equal empathy.
Similarly, circling back is easier if you have perfect memory, as a good AI would have. In contrast, humans often forget to follow up on certain items from past conversations.
AI’s main weakness on the list seems to be the personal sharing of similar experiences or feelings. Since AI doesn't have personal experiences or real feelings, sharing those would be inauthentic. Of course, the AI can lie and fake a memory or emotion, but doing so would be unethical.
“That reminds me of when my Dad took me fishing, and we didn’t catch a thing. No dinner, but I knew he loved me.” I can tell this story, because it actually happened when I was a kid (and I still remember 60 years later). An AI could tell the same story, especially now that it’s on the Internet and part of the next training data set. But nobody took the AI fishing, and the person engaging with the AI knows this and will easily spot the fake story.
True story: Jakob’s Dad took him fishing in the castle lake, a familiar spot where his dad had often fished in his youth. They didn’t catch anything that day, circa 1964. An AI couldn’t tell that story. (Leonardo.)
Q3: Is the value of empathy diminished if delivered by a machine?
Jakob: It depends on whether the user knows it's artificial. Empathy itself isn't diminished. However, the positive emotional impact on the recipient would likely be diminished if the AI involvement were disclosed.
Sarah: Yes, but it's far from worthless. Artificial empathy is less valuable than genuine human empathy, but more valuable than fake human empathy.
Jakob: It’s almost always unethical to lie. (We can construct examples where complete empathy requires us to deliver a white lie, but they tend to be borderline cases, such as a patient on death’s door. Speaking for myself, I would rather be told the truth if I were estimated to die within the week. But that’s not true for everybody, and sometimes it may be better to say to the patient, “You’ll pull through,” even if that’s not the best medical judgment.)
Given that we should not lie, we also should not tell people they are communicating with a human if they’re communicating with an AI chatbot. (In the future, AI companions will have full “deepfake” video capabilities, so you can have a video conversation with an AI that cannot be visually or linguistically distinguished from meeting with a human.) Similarly, if a human is helped by or reading from empathy guidance from an AI coach, that human should not claim to be unguided.
If asked, tell the truth. Sticking with the medical example, if a doctor has to look up the recommended dose of a rare medication in a handbook, the doctor should admit to seeking external help if asked by the patient. It would be unethical for the doctor to pretend to have perfect advanced knowledge of that rare drug. However, the doctor doesn't need to proactively disclose to the patient having referred to a handbook if the patient doesn't ask how the dose was determined.
Similarly, humans whose empathic skills are improved through AI guidance or scripts need not proactively disclose this information if they are not asked. Let's say a customer calls a support line to complain about a product that failed during a critical time, which caused the customer great anguish. The support agent should acknowledge this anguish and exhibit empathy with the customer's unusually unpleasant experience to a higher degree than required when dealing with a less emotionally charged problem, such as an expired credit card. Should the customer be told why the agent is particularly empathic? I say no.
I don’t think empathy is diminished because it’s delivered by a machine (if interacting with a chatbot) or augmented by a machine (if interacting with a human doctor or support agent). However, the positive emotional impact on the recipient would likely be diminished if the AI involvement were disclosed. Since the purpose is to treat people better, we should not discuss AI use, unless asked.
I am critical and even skeptical of current product offerings where AI listens in on support calls, sales calls, job interviews, or medical interactions to score the empathy exhibited by the provider. If people’s bonuses depend on high scores from these systems, they may learn to fake the superficial signifiers of empathy the AI is listening for. Maybe they’ll start peppering their conversations with phrases like “that must be hard for you,” which was included in the list I got from ChatGPT. Such fake empathy is probably worse than no empathy at all. (Note that I distinguish between artificial empathy and fake empathy. The former is good, the latter is bad.)
Sarah: As our famous UX slogan goes, “It Depends.” There's a hierarchy to the empathy variants we've been debating: human empathy stands as the pinnacle, surpassing all others, while artificial empathy outshines inauthentic human empathy or the absence of empathy altogether.
The true value of artificial empathy becomes evident when contrasting it with the void of empathy in many contemporary experiences. Currently, numerous interactions lack genuine empathy, relying instead on generic information, template error codes, and pre-written phrases to address users' frustrating moments. In these instances, artificial empathy emerges as a substantial improvement, offering a level of value significantly superior to the existing user experience. Despite its machine-driven nature and delivery, the introduction of artificial empathy in these scenarios not only mitigates the deficiency in empathy but also enhances the overall user experience.
Consider a customer who encounters an error while navigating a complex online service. In the current paradigm, the system might present a standard error code or a generic message. However, with artificial empathy integrated, the system could recognize the user's frustration, acknowledge it, and offer a more personalized and understanding response. This not only alleviates the user's frustration but also creates a more positive and supportive interaction, showcasing how artificial empathy can add considerable value, despite being delivered by a machine.
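As a rough sketch of what this could look like in code (Python), the hypothetical render_error helper below swaps a bare error code for an empathetic, context-aware message once the user has failed a few times. The function name, the retry heuristic, and the wording are all invented for illustration, not a real product API.

```python
# Hypothetical sketch: replace a bare error code with an empathetic,
# context-aware message after repeated failures. Not a real API.
def render_error(error_code: str, task: str, failed_attempts: int) -> str:
    if failed_attempts < 2:
        return f"Error {error_code}: the operation could not be completed."
    return (
        f"We're sorry -- {task} has now failed {failed_attempts} times, "
        "which we know is frustrating. Your progress has been saved, so nothing is lost. "
        f"You can retry now, or we can connect you with a person. (Ref: {error_code})"
    )

print(render_error("E-4092", "submitting your claim", failed_attempts=3))
```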
Q4: How can AI-driven empathy complement human empathy?
Jakob: An empathetic human augmented with AI advice. However, even more use cases will benefit from pure AI-driven empathy without human involvement.
Sarah: Supplementing skill gaps. Artificial empathy can be a training tool to foster human empathy.
Sarah: The ability to build and convey empathy is an undeniable skill that significantly influences the quality of human connections. Unfortunately, it is a skill often overlooked in formal education, leaving individuals without structured guidance. Moreover, the lack of consistent feedback on our empathetic efforts hinders the refinement of this crucial skill. Consequently, many people find themselves without the complete toolkit needed to genuinely empathize with others and effectively convey it in a way that resonates.
Instead of endorsing the substitution of human empathy with artificial empathy, a more impactful strategy is to promote their collaborative integration. For instance, AI could analyze an email thread or Slack conversation and offer suggestions to enhance empathic connections. While this may not be as valuable as authentic empathy, this approach could serve as a viable training tool. It could be beneficial for individuals who unintentionally display minimal empathy, fostering awareness and prompting behavioral change. Similarly, it could aid those who aspire to be empathic but are unsure where to begin, providing a practical starting point.
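As a toy illustration of this kind of coaching (Python), the sketch below scans a draft reply for phrases that tend to read as dismissive and suggests warmer alternatives. A real tool would rely on a language model; the keyword table here is a deliberately simple stand-in, and every phrase and suggestion is invented.

```python
# Toy empathy-coaching sketch: flag dismissive phrasing in a draft message and
# suggest warmer alternatives. A real tool would use a language model.
SUGGESTIONS = {
    "per my last email": "It sounds like my earlier note wasn't clear; let me try again:",
    "as i already said": "To recap, in case my earlier message got buried:",
    "calm down": "I can see why this is upsetting.",
    "that's not my problem": "Let me find out who can best help with this.",
}

def review_draft(draft: str) -> list[str]:
    lowered = draft.lower()
    return [
        f"Consider replacing '{phrase}' with: \"{alternative}\""
        for phrase, alternative in SUGGESTIONS.items()
        if phrase in lowered
    ]

for tip in review_draft("Per my last email, the report was due Friday."):
    print(tip)
```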
By integrating artificial empathy alongside human empathy, individuals can bridge gaps in their empathizing skills. Artificial empathy, driven by advanced algorithms, can provide valuable insights, feedback, and practice scenarios to supplement and enhance human empathetic abilities.
Jakob: The best outcome is almost certainly in cases where the human already has strong empathy skills, is motivated to employ them, and is then augmented with AI advice on how to exhibit even more empathy in the specific situation. This is similar to my general preference for human-AI symbiosis, as opposed to either acting alone.
Even though I dislike AI scoring of human interactions, that exact same technology could be useful if viewed as symbiosis support. The AI can listen to the conversation and visually observe a professional’s body language and posture. Then, it should be possible to provide hints to that professional to get back on track if the AI detects suboptimal empathy. Of course, the proof is in the pudding: if this advice is usually good, the system should be deployed. But if the guidance is often wrong, the distraction of constantly being pinged with poor recommendations would be harmful.
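A minimal sketch of this symbiosis-support idea (Python) appears below. The empathy scorer is a deliberately crude keyword heuristic standing in for a trained model on audio, text, and video; the cue list, threshold, and hint text are all invented for illustration.

```python
# Symbiosis-support sketch: nudge the professional when a crude empathy
# estimate for their latest utterance falls below a threshold. The scorer is
# a keyword stand-in for a real model; cues, threshold, and hint are invented.
EMPATHY_CUES = ("i understand", "that must be", "i'm sorry", "thank you for sharing")

def score_turn(utterance: str) -> float:
    """Return a rough empathy estimate in [0, 1] based on cue phrases."""
    text = utterance.lower()
    return min(1.0, sum(cue in text for cue in EMPATHY_CUES) / 2)

def coach(agent_turns: list[str], threshold: float = 0.25) -> None:
    for i, turn in enumerate(agent_turns, start=1):
        if score_turn(turn) < threshold:
            print(f"[hint after turn {i}] Acknowledge the caller's feelings before problem-solving.")

coach([
    "Please give me your account number.",
    "I understand -- that must be stressful. Let's sort it out together.",
])
```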
There are also many cases when people deal solely with a machine, and no other human is involved. As AI gets to be more capable, more such situations will arise. Currently, most interactions with machines are impersonal to the extreme. You step up to an ATM to withdraw some cash, and you don’t expect the machine to exhibit empathy because you got laid off and didn’t get your usual paycheck deposit. All the ATM will do is spit out dollar bills if the balance on your account is sufficient to cover the requested withdrawal. I am not sure I would want the ATM to exhibit empathy in this example. Maybe I don’t want to be reminded of being unemployed. (Of course, what I like is irrelevant. What most bank customers like is what matters. Ideally, the interaction can be personalized to individual preferences.)
Many use cases will likely benefit from pure AI-driven empathy without any human involvement. Discerning “just the facts” cases from empathy-deserving cases is hard in the abstract, absent user research. I know I sound like a parody of a usability expert when I recommend doing more user research, but this is an area in desperate need of such research. Hopefully, after a good dose of user research on AI-driven empathy, we can derive heuristics for when to pursue empathy in our computer systems and when to leave well enough alone.
Q5: In which scenario(s) might artificial empathy be superior to human empathy?
Jakob: When dealing with people who lack human empathy. AI will coach people to better deliver genuine empathy, which in turn will help the recipient.
Sarah: Constrained contexts. AI-driven empathy has efficiency and scale on its side: customization and personalization, accessibility, and inclusivity.
Jakob: I would like to quote Jakob’s Fifth Law of AI, which says, “To be useful, AI doesn't need to surpass the best human; it only needs to outperform the human you have available, who will, on average, be an average human.”
It will be a long time before AI exhibits more profound empathy than the most empathic human in the world. But that human is unlikely to be your doctor, therapist, professor, manager, or other person you interact with in tense situations where empathy can help. I think artificial empathy will be superior in many scenarios when we have to deal with people who are naturally on the low end of the empathy scale.
Similarly, many people probably recognize that empathy is not their strong suit. They may feel professionally capable but emotionally insufficient. Even when realizing this, such people may have difficulty exhibiting high levels of empathy. This is where AI-based coaching or scripts can help. Indeed, it will help both the recipient of the coached, enhanced empathy and the person delivering it, because it feels good to help others.
Sarah: Human empathy demands considerable effort, understanding, and engagement, constituting a resource-intensive process as visualized in my original framework. In contrast, AI-driven empathy offers a substantial advantage by efficiently delivering empathy at scale.
Artificial empathy excels in contexts where relying on human resources would be impractical or infeasible:
Personalization/Customization at Scale: AI analyzes individual preferences to provide personalized recommendations or interventions efficiently and on a large scale. Imagine a mental health app leveraging AI to deliver personalized empathy, at scale, tailored to an individual's unique emotional triggers and responses.
Accessibility: Unlike humans, AI operates around the clock without risk of fatigue or overwork. Consequently, delivering artificial empathy is not confined by time constraints, theoretically enabling continuous companionship and support at all hours. For example, virtual assistants equipped with empathetic features could provide unwavering support and information whenever needed, ensuring a consistent presence and companionship for users.
Inclusivity: Artificial empathy transcends language barriers, reaching across all languages in a way even the most empathetic individual cannot. For instance, in a children's learning program, artificial empathy ensures inclusive access to supportive, therapeutic resources for children worldwide, irrespective of language differences.
That’s a wrap on the debate between Sarah Gibbons and Jakob Nielsen on AI empathy. What do you think? Can AI learn to empathize? Let us know in the comments. (Illustration by Dall-E.)
About the Authors
Sarah Gibbons, Vice President at Nielsen Norman Group, is a design leader and UX strategist. Sarah's pioneering research in design thinking, service design, and mapping frameworks sets the gold standard for designers worldwide. She mentors and instructs global industry leaders on the practical implementation of her methodologies. Follow Sarah on LinkedIn.
Prior to joining NN/g, Sarah played a pivotal role in conceptualizing, researching, and designing a new suite of collaboration software built on IBM Watson Artificial Intelligence. She also took on the role of a design thinking educator, actively contributing to developing educational and professional standards for designers and researchers during the establishment of IBM Design in 2014.
Sarah's academic achievements include graduating as Valedictorian and Summa Cum Laude from the College of Design at North Carolina State University, where she earned her Bachelor of Graphic Design. She sits on the Penn State College of Arts and Architecture advisory board and is involved in The Future of Design Education Initiative. Sarah resides between New York City and Jamestown, Rhode Island.
Jakob Nielsen, Ph.D., is a usability pioneer with 40 years of experience in UX and the Founder of UX Tigers. He founded the discount usability movement for fast and cheap iterative design, including heuristic evaluation and the 10 usability heuristics. He formulated the eponymous Jakob's Law of the Internet User Experience. He has been named "the king of usability" by Internet Magazine, "the guru of Web page usability" by The New York Times, and "the next best thing to a true time machine" by USA Today. Previously, Dr. Nielsen was a Sun Microsystems Distinguished Engineer and a Member of Research Staff at Bell Communications Research, the branch of Bell Labs owned by the Regional Bell Operating Companies. He is the author of 8 books, including the best-selling Designing Web Usability: The Practice of Simplicity (published in 22 languages), Usability Engineering (26,439 citations in Google Scholar), and the pioneering Hypertext and Hypermedia (published two years before the Web launched). Dr. Nielsen holds 79 United States patents, mainly on making the Internet easier to use. He received the Lifetime Achievement Award for Human–Computer Interaction Practice from ACM SIGCHI.
Subscribe to Jakob’s newsletter to get the full text of new articles emailed to you as soon as they are published.
More on AI UX
This article is part of a more extensive series Jakob is writing about the user experience of modern AI tools. Suggested reading order:
AI Vastly Improves Productivity for Business Users and Reduces Skill Gaps
Ideation Is Free: AI Exhibits Strong Creativity, But AI-Human Co-Creation Is Better
AI Helps Elite Consultants: Higher Productivity & Work Quality, Narrower Skills Gap
The Articulation Barrier: Prompt-Driven AI UX Hurts Usability
UX Portfolio Reviews and Hiring Exercises in the Age of Generative AI
Analyzing Qualitative User Data at Enterprise Scale With AI: The GE Case Study
Navigating the Web with Text vs. GUI Browsers: AI UX Is 1992 All Over Again
UX Experts Misjudge Cost-Benefit from Broad AI Deployment Across the Economy
ChatGPT Does Almost as Well as Human UX Researchers in a Case Study of Thematic Analysis
“Prompt Engineering” Showcases Poor Usability of Current Generative AI