Transformer models are universal function approximators, which isn't very surprising given they're built from components that are themselves universal when configured right, so they absolutely can learn the rules of logic.
Some say that reasoning requires consciousness, which I'm… frustrated by given the 50 common meanings of the word "consciousness"; but to merely use logic as your standard here?
Of all the things people could object to in an LLM, why is logic what people want to pick on? It's the weakest possible objection IMO.
I was quoting someone else who said LLMs can't reason, and I'm asking them what they meant by that, because ChatGPT sure acts like it reasons no matter what's going on "inside". I assume the inside is a Transformer model, because otherwise the naming is weird, but whatever it is, it acts like it learned to reason.
And I'm saying that despite wanting this to be a repeat of Clever Hans so I can go back to feeling optimistic about my economic future.