The choice of the term “hallucination” to describe what is happening with these language models is fundamentally misleading, and feels calculated to obscure their probabilistic nature.
Characterizing fabricated output as “hallucination” implies that correct output is “knowledge”: that the incorrect results are aberrations, rather than products of the exact same blind process that sometimes yields factually sound results.