
LLM Hallucinations in Conversational AI for Customer Service: Framework and End-User Perceptions

Abstract

Large language models (LLMs) hold the potential to significantly enhance conversational AI in customer service. Yet, a key challenge with LLMs is hallucinations, where LLMs produce erroneous or inconsistent outputs, potentially causing problems for end-users and eroding trust. Addressing this, the paper makes two key contributions: first, building on extant scholarly work, we provide a framework of LLM hallucinations adapted to the customer service context. Second, drawing on a survey of 274 potential end-users, we provide empirical insights into how end-users experience different types of LLM hallucinations, including the factors users emphasize when assessing their severity. The analysis shows that users indeed care about hallucinations, with all types potentially undermining users' trust. However, hallucinations that entail providing incorrect information, and that may have negative consequences for users, are deemed especially problematic.
Read the publication

Category

Academic article

Language

English

Affiliation

  • SINTEF Digital / Sustainable Communication Technologies
  • Various Norwegian companies and organisations

Date

06.11.2025

Year

2025

Published in

International Journal of Human-Computer Interaction

ISSN

1044-7318

View this publication at Norwegian Research Information Repository