1/3 Large Language Models (LLMs) model their "word space" outputs from a combination of "input words + interaction parameter settings + architecture configuration + imprinting, not training".
The person consuming that grammatically correct model output overlays it on their perceived world model (PWM) and accuses the LLM of hallucinating when the output does not match the PWM.
The human expecting a mind-reading LLM is the one hallucinating; the LLM is merely confabulating.
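The claim that output depends on input plus interaction parameter settings can be sketched minimally. This is a toy, self-contained illustration of temperature-scaled softmax sampling (the logit values, vocabulary, and seed are invented for the example, not taken from any real model): the same "word space" scores yield different words as one interaction parameter, temperature, changes.

```python
import math
import random

def sample_next(logits, temperature=1.0, seed=0):
    """Sample one word from temperature-scaled softmax over toy logits."""
    random.seed(seed)  # fixed seed so the illustration is repeatable
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
    total = sum(exps)
    r = random.random()
    cum = 0.0
    for word, e in zip(logits, exps):
        cum += e / total
        if r < cum:
            return word
    return list(logits)[-1]

# Invented toy scores for the next word after "The capital of France is".
logits = {"Paris": 3.0, "Lyon": 1.0, "banana": 0.1}

# Low temperature sharpens the distribution toward the top score;
# high temperature flattens it, so unlikely words get sampled too.
print(sample_next(logits, temperature=0.2, seed=2))
print(sample_next(logits, temperature=5.0, seed=2))
```

Same input words, same "architecture", different parameter settings: the sharpened distribution almost always returns the top word, while the flattened one can surface a word the reader's world model would call a hallucination.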
https://nerdculture.de/@ByrdNick/114235056735546320
Nick Byrd, Ph.D. (@[email protected]): Overheard at a conference about #AI in #Medicine: Speaker: "I hear neurologists prefer we say that generative AI systems 'confabulate' and not that they 'hallucinate'." Neurologist [shouting from the back of the room]: "CORRECT!" #psychiatry #neuroscience #sciComm #edu