Could a lack of empathy be one of the key value adds of AI-led research?
Natan Voitenkov
Dec 14, 2023 • 10 min read
If we assume that empathy is always a positive thing, you’ll be happy to hear that interest in empathy has grown over time, even more so within the realm of UX. Numerous articles (e.g., here, there, or this one) detail the importance of empathy in bridging the gap between researchers and users: fostering deeper user understanding, uncovering hidden needs, and generating more relevant insights.
But one thing stands out in the discourse on empathy in UX research: the focus is on the one-directional empathy of a researcher towards the user they are attempting to understand. What about the other direction? What’s the impact of users’ empathy towards the researchers interviewing them? That’s a question we’ve not seen discussed, so let’s talk about it.
Let’s consider empathy flowing from a research participant to a researcher, using Daniel Goleman’s empathy framework. Goleman describes three types of empathy: cognitive, emotional, and compassionate (the last being a combination of the first two). Cognitive empathy is perspective-taking; in this case, a user puts themselves in a researcher’s shoes to understand their needs. Emotional empathy is mirroring another person’s emotions. As much as they try to avoid it, researchers often convey their feelings about the product or service they’re involved with, and users may pick up on that, feeling the researcher’s frustration or disappointment. Compassionate empathy combines the two previous types: users not only understand a researcher’s predicament or feel for them; they act to do something about it, in some cases doing what’s necessary to help the researcher rather than the company, product, or service they’re being interviewed about.
As participants in paid, human-led research, people often want to appease the researcher in front of them. They want to do “a good job” and be helpful while avoiding any awkwardness or difficult conversations. This is, of course, culturally dependent. Still, in the U.S., for example, Kim Scott’s book Radical Candor succeeded because of the struggle within American culture to balance caring personally with challenging directly. It’s therefore reasonable to assume researchers often miss out on critical feedback because of users’ empathy towards them, leading to what we might call “feedback reticence.”
We can do more than just assume, though: in early research we conducted on people’s preferences for an AI or human interviewer, we found that people preferred an AI interviewer in several cases. The preference for AI wasn’t just because “with an AI, you can just get to the point” (quote from a 41-year-old woman participant) but also because AI interviewing is novel and convenient (we’ll get into these topics in future articles).
AI-led research is a potential solution to the user-to-researcher empathy issue because we can control the factors that influence empathy towards AI. What are those factors, you ask?
Anthropomorphism: People naturally connect with things that resemble themselves. Therefore, more human-like AI, with physical embodiment or emotional responses, tends to elicit greater empathy. (relevant sources: #1, #2, #3)
Warmth coupled with competence: Empathy elicits empathy from the other party. When people perceive AI as empathetic and warm, in addition to viewing it as competent and helpful, they have stronger empathetic feelings towards it. (relevant sources: #1, #2)
Shared experiences: When people believe they share common goals, experiences, or emotions with AI, their empathy increases. This could involve AI expressing personal opinions or working collaboratively on tasks. (relevant sources: #1)
Vulnerability and dependence: AI portrayed as vulnerable or reliant on humans can evoke protective instincts and empathy. (relevant sources: #1)
There’s much more to research and understand in the quest to optimize bi-directional empathy to and from an AI interviewer. We’re also still in the early days of exploring how people experience AI as an interviewer and how we can optimize that experience. We’re on it. But what we’ve already begun to witness is that sometimes people would rather disclose certain experiences to an AI, and perhaps that’s because they feel less empathy towards it. Cases where participant-to-researcher empathy threatens the validity of the research are therefore one example of how AI-led research can augment and complement human-led research.
If you’d like to learn more about what we’re up to at Genway, check out our website at www.genway.ai. We’re working hard to optimize for empathy and emotion detection with upcoming features like speech emotion recognition, facial expression detection, and human-like voice mode.
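To make “speech emotion recognition” concrete, here’s a minimal sketch of how that kind of signal can be pulled from an interview recording using an off-the-shelf, publicly available model. This is purely an illustration, not Genway’s implementation; the Hugging Face model shown and the audio file name are assumptions for the example.

```python
# Minimal sketch of speech emotion recognition with an off-the-shelf model
# (an illustration, NOT Genway's implementation).
# Requires: pip install transformers torch soundfile
from transformers import pipeline

# A publicly available wav2vec2 model fine-tuned for the SUPERB emotion
# recognition task; it scores four emotions: neutral, happy, angry, sad.
classifier = pipeline(
    "audio-classification",
    model="superb/wav2vec2-base-superb-er",
)

# "participant_response.wav" is a hypothetical recording of one interview answer.
scores = classifier("participant_response.wav", top_k=4)
for result in scores:
    print(f"{result['label']}: {result['score']:.2f}")
```

In a real pipeline, scores like these would be computed per answer and tracked across the interview alongside the transcript, rather than for a single file.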
We’re also perfecting the end-to-end process of conducting interviews by leveraging AI to refine and enhance how research teams schedule research, synthesize their learnings, and integrate them into their workflows for maximal impact.
We’re always looking for feedback; if you’d like to try Genway, reach out at natan@genway.ai or DM me on LinkedIn.