
Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive
A new study finds disturbing and pervasive errors among three popular models on a wide range of legal tasks.

Among the study's key findings:

“hallucination rates range from 69% to 88% in response to specific legal queries for state-of-the-art language models”

“performance deteriorates when dealing with more complex tasks that require a nuanced understanding of legal issues or interpretation of legal texts”

“case law from lower courts … subject to more frequent hallucinations than case law from higher courts”

“model susceptibility to what we call ‘contra-factual bias,’ namely the tendency to assume that a factual premise in a query is true, even if it is flatly wrong”

https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive

https://arxiv.org/pdf/2401.01301.pdf

Posted on: January 12, 2024, 8:18 am Category: Uncategorized
