Could a lack of empathy be one of the key value adds of AI-led research?

Natan Voitenkov
Dec 14, 2023 • 10 min read
If we were to assume that empathy is always a positive thing, you’d be happy to hear that interest in empathy has grown over time, even more so within the realm of UX. Numerous articles (e.g., here, there, or this one) detail the importance of empathy in bridging the gap between researchers and users, fostering deeper user understanding, uncovering hidden needs, and generating more relevant insights.
But one thing stands out in the discourse on empathy in UX research: the focus is on the one-directional empathy of a researcher towards the user they are attempting to understand. What about the other direction? What’s the impact of users’ empathy towards the researchers interviewing them? That’s a question we’ve not seen discussed, so let’s talk about it.
Let’s consider the empathy from a research participant to a researcher by leveraging Daniel Goleman’s empathy framework. Goleman describes three types of empathy: cognitive, emotional, and compassionate (the last being a combination of the first two). Cognitive empathy is perspective-taking, which in this case would be when a user puts themselves in a researcher’s shoes to understand their needs. Emotional empathy is the mirrored feeling of another person’s emotions. As much as they try to avoid it, researchers often convey their feelings regarding the product or service they’re involved with, and users may pick up on that, feeling the researcher’s frustration or disappointment. Compassionate empathy combines the two previous types, where users will not only understand a researcher’s predicament or feel for them; they’ll act to do something about it… in some cases, doing what’s necessary to help the researcher rather than the company, product, or service they’re interviewing about.
As participants in paid, human-led research, people often want to appease the researcher in front of them. They want to do “a good job” and be helpful… while avoiding any awkwardness or difficult conversations. This is, of course, culturally dependent. Still, in the U.S., for example, Kim Scott’s book Radical Candor resonated precisely because American culture struggles to balance caring personally with challenging directly. It’s therefore reasonable to assume researchers often miss out on critical feedback because of users’ empathy towards them, leading to “feedback reticence.”
We can do more than just assume, though: in early research we conducted on people’s preferences for an AI or human interviewer, we found that people preferred an AI interview in several cases. The preference towards AI wasn’t just because “with an AI, you can just get to the point” (quote from a 41-year-old woman participant) but also because AI interviewing is novel and convenient (we’ll get into these topics in future articles).
AI-led research is a potential solution for the user-to-researcher empathy issue because we can control the factors influencing empathy towards AI. What are those factors, you ask?
Anthropomorphism: People naturally connect with things that resemble themselves. Therefore, more human-like AI, with physical embodiment or emotional responses, tends to elicit greater empathy. (relevant sources: #1, #2, #3)
Warmth coupled with Competence: Empathy elicits empathy from the other party. When people perceive AI as empathetic and warm, in addition to viewing AI as competent and helpful, they have stronger empathetic feelings towards it. (relevant sources: #1, #2)
Shared experiences: When people believe they share common goals, experiences, or emotions with AI, their empathy increases. This could involve AI expressing personal opinions or working collaboratively on tasks. (relevant sources: #1)
Vulnerability and dependence: AI portrayed as vulnerable or reliant on humans can evoke protective instincts and empathy. (relevant sources: #1)
There’s much more to research and understand in the quest to optimize the bi-directional empathy from and to an AI researcher. We’re also still in the early days of exploring how people experience AI as an interviewer and how we can optimize that experience. We’re on it. But what we’ve already begun to witness is that sometimes, people would rather disclose specific experiences to an AI… and perhaps that’s because they have less empathy towards it. Cases where participant-to-researcher empathy threatens the validity of the research are therefore one example of how AI-led research can augment and complement human-led research.
If you’d like to learn more about what we’re up to at Genway, check out our website at www.genway.ai. We’re working hard to optimize for empathy and emotion detection with upcoming features like speech emotion recognition, facial expression detection, and human-like voice mode.
We’re also perfecting the end-to-end process of conducting interviews by leveraging AI to refine and enhance how research teams schedule research, synthesize their learnings, and integrate them into their workflows for maximal impact.
We’re always looking for feedback; if you’d like to try Genway, reach out at natan@genway.ai or DM me on LinkedIn.