Yes! I think that for many people, when they ask questions of an LLM, they expect the capabilities we imagine an AGI might have - like deciding whether a question can get away with a vague, imprecise, or generic answer versus having only one correct answer, and, if that answer is unknown, asking clarifying questions back.
I think that some “common sense” stuff is rooted purely in language, and LLMs will pick up the pattern. For example, a thing usually can’t be both important and unimportant at the same time; the LLM will encode those two words with anti-parallel state vectors.
But that’s because “common sense” is a real grab bag of stuff.
It does the same with ‘big’ and ‘small’, although it has no comprehension of size.
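To make “anti-parallel” concrete: it just means the two vectors point in opposite directions, i.e. their cosine similarity is near -1. Here's a minimal sketch with made-up 2-D toy vectors (not real LLM embeddings, which are high-dimensional and messier) purely to illustrate the geometry:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: +1 = parallel, -1 = anti-parallel, 0 = orthogonal.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy vectors chosen to illustrate the claim,
# NOT actual embeddings from any model:
emb = {
    "important":   np.array([ 0.9,  0.4]),
    "unimportant": np.array([-0.9, -0.4]),  # exact negation of "important"
    "big":         np.array([ 0.5, -0.8]),
    "small":       np.array([-0.5,  0.8]),  # exact negation of "big"
}

print(cosine(emb["important"], emb["unimportant"]))  # ~ -1.0 (anti-parallel)
print(cosine(emb["big"], emb["small"]))              # ~ -1.0 (anti-parallel)
print(cosine(emb["important"], emb["big"]))          # ~ 0.14 (unrelated axes)
```

In real trained embeddings the picture is less tidy (antonyms often appear in similar contexts, so their vectors can end up surprisingly close), but the opposition the model learns still shows up along some direction in the space.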