
Research Preprint: “AI Hallucinations: A Misnomer Worth Clarifying”

The article (preprint) linked below was recently shared on arXiv.

Title

AI Hallucinations: A Misnomer Worth Clarifying

Authors

Negar Maleki
University of South Florida

Balaji Padmanabhan
University of Maryland

Kaushik Dutta
University of South Florida

Source

via arXiv

DOI: 10.48550/arXiv.2401.06796

Abstract

As large language models continue to advance in Artificial Intelligence (AI), text generation systems have been shown to suffer from a problematic phenomenon termed often as “hallucination.” However, with AI’s increasing presence across various domains including medicine, concerns have arisen regarding the use of the term itself. In this study, we conducted a systematic review to identify papers defining “AI hallucination” across fourteen databases. We present and analyze definitions obtained across all databases, categorize them based on their applications, and extract key points within each category. Our results highlight a lack of consistency in how the term is used, but also help identify several alternative terms in the literature. We discuss implications of these and call for a more unified effort to bring consistency to an important contemporary AI issue that can affect multiple domains significantly.


Posted on: February 11, 2024, 6:52 am Category: Uncategorized
