We have no proof of that, nor do we have proof of its opposite. We have no idea why neural nets work, or how our brains work in this context. There is definitely something human-like in neural networks, but we have no idea why, or what exactly. It's a completely empirical field, not a theoretical one. We have no good idea what would happen if we built a neural net with 180 billion neurons, because there is no theory that can predict the behavior of even the current ones. That's why, over the past 40 years, I've seen almost every prediction about what AI would solve in the coming years fail. We have no clue.
My point isn't about how well or badly this is done. Humans, at least some of the time, attempt to assess the truth and accuracy of things. LLMs do not attempt to do this.
That's why I think it's incorrect to say they're bad at it: attempting it isn't even in their behavior set.