It’s almost as if these large language models are extremely unsuited to the primary application being imagined for them: knowledge retrieval.

The choice of the term “hallucination” to describe what is happening with these language models is fundamentally misleading, and feels calculated to obscure their probabilistic nature.

Characterizing the fabricated output as “hallucination” means, by implication, that correct output is “knowledge”—that the incorrect results are aberrations, not the result of the exact same blind processes that sometimes produce factually sound results.
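To make the "exact same blind processes" point concrete, here is a minimal sketch of temperature sampling over next-token logits. Everything in it is invented for illustration (the vocabulary, the logit values, the prompt); it is not any particular model's code. The thing to notice is that there is no branch for "true" versus "false": the factually correct token and the fabricated ones fall out of the same sampling step.

```python
import math
import random
from collections import Counter

# Toy sketch only: the vocabulary and logits below are invented for
# illustration and come from no real model. Generation is one probabilistic
# sampling step, repeated, with no notion of truth anywhere in the loop.

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Draw one token from a softmax over logits; this is the model's only move."""
    max_score = max(logits.values())  # subtract the max for numerical stability
    weights = [math.exp((s - max_score) / temperature) for s in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

# Hypothetical next-token logits after the prompt "The capital of Australia is":
logits = {"Canberra": 2.1, "Sydney": 1.8, "Melbourne": 0.9}

# Correct and incorrect completions are emitted by the identical mechanism,
# just at different frequencies:
print(Counter(sample_next_token(logits) for _ in range(10_000)))
# e.g. Counter({'Canberra': ~4900, 'Sydney': ~3600, 'Melbourne': ~1500})
```

When the highest-probability continuation happens to be factually sound, we call the output knowledge; when it isn't, we call it hallucination. The sampler can't tell the difference.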

As キャリー (@cary@mstdn.jp) put it on May 27, 2023:

"That term also unnecessarily humanizes 'AI' and reinforces this idea that these tools are much more sentient than they actually are."
