Yes! I think that many people, when they ask an LLM a question, expect the capabilities we imagine an AGI might have - like deciding whether a question can take a vague, inaccurate, or generic answer or has only one possible answer, and, if the answer is unknown, asking questions back.

I think that some “common sense” stuff is rooted purely in language, and LLMs will pick up the pattern. For example, a thing usually can’t be both important and unimportant at the same time; the LLM will encode those two words with antiparallel state vectors.
But that’s because “common sense” is a real grab bag of stuff.
It does the same with ‘big’ and ‘small’, although it has no comprehension of size.
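The antiparallel-vector idea can be shown with a toy sketch. These are made-up 3-dimensional vectors, not real LLM embeddings (which have hundreds or thousands of dimensions and are only approximately antiparallel for antonyms), but they illustrate what the geometry would mean: opposites point in opposing directions (cosine similarity near -1), while unrelated words point elsewhere.

```python
import math

def cosine(u, v):
    """Cosine similarity: +1 parallel, -1 antiparallel, 0 orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings, constructed to make the geometry obvious.
important   = [0.9, 0.2, -0.4]
unimportant = [-0.9, -0.2, 0.4]   # exactly antiparallel by construction
banana      = [0.2, -0.9, 0.0]    # an unrelated direction

print(cosine(important, unimportant))  # close to -1: opposites
print(cosine(important, banana))       # close to 0: unrelated
```

The point is that this opposition is learnable purely from word co-occurrence patterns in text; nothing in the geometry requires the model to understand what importance (or size) actually is.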

Three plus or minus five

@ThreeSigma@mastodon.online

That’s why so many people are fooled.

March 30, 2025 at 1:12:58 PM
