It’s almost as if these large language models are extremely unsuited to the primary application being imagined for them: knowledge retrieval.

Buzz Andersen

@buzz@andersen.social

The choice of the term “hallucination” to describe what is happening with these language models is fundamentally misleading, and feels calculated to obscure their probabilistic nature.

Characterizing the fabricated output as “hallucination” means, by implication, that correct output is “knowledge”—that the incorrect results are aberrations, not the result of the exact same blind processes that sometimes produce factually sound results.

May 27, 2023 at 4:52:52 PM
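To make the "same blind process" point concrete, here's a toy sketch in Python. The prompt and the probabilities are invented purely for illustration; the point is that correct and incorrect continuations are drawn by the very same sampling step.

```python
import random

# Hypothetical next-token probabilities a model might assign after
# the prompt "The capital of Australia is" (numbers are made up).
next_token_probs = {
    "Canberra": 0.55,   # factually correct
    "Sydney": 0.30,     # plausible but wrong
    "Melbourne": 0.10,  # plausible but wrong
    "Paris": 0.05,      # wrong
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# There is no separate "truth check" pathway: decoding just samples
# from the distribution, so factually right and wrong answers come
# out of the exact same mechanism.
for _ in range(5):
    print(random.choices(tokens, weights=weights)[0])
```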

That term also unnecessarily humanizes "AI" and reinforces this idea that these tools are much more sentient than they actually are.

This comment ignores some linguistic history. Historically, if you allowed a language model to “hallucinate,” you allowed it to generate text (rather than, e.g., assign a probability to text that comes from another source). So in the technical sense of the term, all generations of an LM are hallucinations, not just the ones one doesn't like.
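A minimal sketch of that distinction, using the Hugging Face transformers library and the gpt2 checkpoint (both illustrative choices, not anything specific to the models under discussion): the same network can either score text supplied from outside or "hallucinate" text of its own.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Use 1: assign a probability to text that comes from another source.
text = "The capital of France is Paris."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, labels=inputs["input_ids"])
print(f"avg negative log-likelihood per token: {out.loss.item():.3f}")

# Use 2: let the model "hallucinate", i.e. generate text of its own.
prompt = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**prompt, max_new_tokens=10,
                               do_sample=True, top_k=50)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```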

I quite like the term hallucination to be honest. I mean, everyone makes errors, but would you accept information about the world from someone who admittedly frequently hallucinates?

Hey, ChatGPT, would I call the answers I get from you that are factually incorrect lies, untruths, or falsehoods?

Call them hallucinations.

You could argue that "hallucination" is a more accurate description: these systems literally have no mechanism to separate facts from lies; they have no intent to lie or tell the truth, and can't represent those concepts.

Humans recognize hallucinations as wrong because they have systems in the brain that say "that can't have been real".

LLMs can't recognize lies because they don't have referents for "real".

LLMs can’t “recognize” anything. They can’t “perceive” anything. And that’s why using sensory-oriented terminology (like “hallucination”) with LLMs is misleading and incorrect. It’s wrong both about what human hallucinations are and what’s going on in an LLM.

It’s more like when Trump is rambling on in one of his speeches, just stringing together phrases and thoughts haphazardly. So I’d like to propose that it be called “trumpeting”.
