Dawn Ahukanna

@dahukanna@mastodon.social

2/3 - “Your door is red” is a grammatically correct statement, but it may not accurately match the intended door object.
For the question “What colour is my door?”, that is just 1 possible answer out of an effectively infinite array of colour choices.
Without grounding the question in a reference, like detailed information about the building in question and a point in time (the same door could’ve had various colours over time), the LLM will grammatically construct the reply and pick the colour with the highest probability in its imprinting (training) data, i.e. red.
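A toy sketch of what “pick the colour with the highest probability” looks like in practice. The numbers here are made up for illustration, not taken from any actual model:

# Hypothetical probabilities an LLM might assign to completions of
# "What colour is my door?" when the question is not grounded in any context.
colour_probs = {
    "red": 0.34,
    "white": 0.21,
    "blue": 0.17,
    "green": 0.15,
    "black": 0.13,
}

# With no grounding (building details, point in time), the model effectively
# samples from or maximises over this distribution learned from training data.
most_likely = max(colour_probs, key=colour_probs.get)
print(most_likely)  # -> "red", regardless of what colour your door actually is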

March 30, 2025 at 9:36:59 AM
(Edited)

Yes! I think that for many, when they ask questions of an LLM, they expect the capabilities we imagine an AGI might have - like deciding whether a question can take a vague, inaccurate, or generic answer versus having only one possible answer, and, if that answer is unknown, asking questions back.

I think that some “common sense” stuff is rooted purely in language, and LLMs will pick up the pattern. Like a thing usually can’t be both important and unimportant at the same time; the LLM will encode those two words with anti-parallel state vectors.
But that’s because “common sense” is a real grab bag of stuff.
It does the same with ‘big’ and ‘small’, although it has no comprehension of size.
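A minimal sketch of how you could probe that claim yourself. It assumes the sentence-transformers package and the ‘all-MiniLM-L6-v2’ model purely as an illustrative choice; whether opposite words actually come out anti-parallel (cosine near -1) or, as often happens, quite similar, depends on the model:

import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a, b):
    # Cosine similarity: +1 means parallel vectors, -1 means anti-parallel.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

pairs = [("important", "unimportant"), ("big", "small")]
for w1, w2 in pairs:
    v1, v2 = model.encode([w1, w2])
    print(w1, w2, round(cosine(v1, v2), 3))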
