Hacker News

Transformer models are universal approximators, which isn't very surprising given they're made of components that are themselves universal when configured right, so they absolutely can learn the rules of logic.
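The expressivity point can be illustrated with a toy example (my own sketch, not anything from the thread): even a two-layer ReLU network with hand-picked weights can compute XOR, the classic Boolean function that a single linear layer provably cannot represent. The weights below are illustrative, not learned.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hand-picked weights (illustrative, not learned):
# the hidden layer computes relu(x + y) and relu(x + y - 1),
# and the output combines them as h1 - 2*h2, which equals XOR(x, y).
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])

def xor_net(x):
    return float(w2 @ relu(W1 @ x + b1))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(np.array([a, b], dtype=float)))
```

Stacking enough such nonlinear units is what the universal approximation results formalize; whether a given trained model actually learns such rules is a separate, empirical question.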

Some say that reasoning requires consciousness, which I'm… frustrated by, given the fifty-odd common meanings of the word "consciousness"; but to use mere logic as your standard here?

Of all the things people could object to in an LLM, why is logic the one people want to pick on? It's the weakest possible objection, IMO.



The set of models we're discussing haven't.

You were derailing with "reason" for no reason, so I pointed it out. That doesn't mean logic should be applied as some sort of universal standard.


> The set of models we're discussing haven't.

Haven't what?

> You were derailing with "reason" for no reason

I was quoting someone else who said LLMs can't reason, and I'm asking them what they meant by that, because ChatGPT certainly acts as if it reasons, whatever is going on "inside". I assume the inside is a Transformer model (otherwise the naming is odd), but whatever it is, it acts like it has learned to reason.

And I'm saying that despite wanting this to be a repeat of Clever Hans, so I can go back to feeling optimistic about my economic future.





