Hacker News

Can you point to where the thinking happens in a human?


Nope. But I don't have to do that to understand that LLMs do not assess truth or accuracy of anything.


We don’t have proof of that, nor of its opposite. We have no idea why neural nets work, or how our brains work in relation to them. There is definitely something human-like in neural networks, but we have no idea why, or what exactly. It’s a completely empirical field, not a theoretical one. We have no good idea what would happen if we could build a 180-billion-neuron neural net, because there is no theory that can predict what even the current ones will do. That’s why, over the past 40 years, I’ve seen almost every prediction about what AI would solve in the following years fail. We have no clue.


There is research that shows humans are also predicting the next word.

I posted that here: https://news.ycombinator.com/item?id=34875324


But you don't know that humans don't reason by stringing words together and seeing how statistically likely they seem.

Related to this topic, see "Babble and Prune": https://www.lesswrong.com/s/pC6DYFLPMTCbEwH8W
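To make "seeing how statistically likely they seem" concrete, here is a toy bigram model: it counts which word follows which in a training text and predicts the most frequent successor. This is only a sketch of the next-word-prediction framing being debated; real LLMs use learned neural representations, not raw counts, and the corpus here is an invented example.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction by statistical likelihood:
# count how often each word follows each other word in a tiny corpus,
# then predict the most frequent successor. (Hypothetical example corpus;
# real LLMs learn distributed representations rather than raw counts.)

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally bigram counts: following[prev][next] = number of occurrences.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" twice, more than any other word
```

Chaining such predictions generates fluent-looking text with no notion of truth, which is roughly the position one side of this thread is taking; the other side questions whether human speech production is so different.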


I don’t think it’s a stretch to say humans aren’t great at assessing the truth or accuracy of anything either.


My point isn't about how well or badly it is done. Humans, at least some of the time, attempt to assess the truth and accuracy of things. LLMs do not attempt this.

That's why I think it's incorrect to say they're bad at it. Even attempting it isn't in their behavior set.


Where is the organ that does that? My impression is everything the brain does is homomorphic to what LLMs do.



