The point I'm trying to make is that all LLM output is based on the likelihood of each word following the words before it, conditioned on the prompt. That is literally all it's doing.
It's not "thinking." It's not "solving." It's simply stringing words together in a way that appears most likely.
ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math.
It's a parlor trick, like Clever Hans [1]. A very impressive parlor trick that is very convincing to people who are not familiar with what it's doing, but a parlor trick nonetheless.
This is like saying chess engines don't actually "play" chess, even though they trounce grandmasters. It's a meaningless distinction about words (think, reason, ...) that have no firm definitions.
This exactly. The proof is in the pudding. If AI pudding is as good as (or better than) human pudding, and you continue to complain about it anyway... you're just being biased and unreasonable.
And by the way, I don't think it's surprising that so many people are being unreasonable on this issue: there is a lot at stake and the implications are transformative.
We know that chess can be solved, in theory. It absolutely isn't, and probably never will be in practice: the necessary time and storage space don't exist.
Chess is absolutely not a solved game, outside of very limited situations like endgames. Just because a best move exists does not mean we (or even an engine) know what it is.
> ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math
What am I, as a human, doing when I "do math"?
1. I am looking at the problem at hand, identifying what I have and what I need to get.
2. I am then doing a prediction using my pretrained neural net to find possible courses of action that go in a direction that "feels" right.
3. I am using my pretrained neural net to find pairs of values that I can substitute for each other (think multiplication tables, standard results, etc.).
4. Repeat until I arrive at the answer or give up.
As a simple example, when I try to find 600×74+42, I remember the steps for multiplication. I recall the associated pairs of numbers from my tables and complete the multiplication step by step. I then recall the associated pairs of numbers for single-digit addition and add from left to right.
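Written out, those recalled facts look something like this (a rough Python sketch of my own decomposition; yours may differ):

    # Making the "recall associated pairs" steps explicit for 600 × 74 + 42.
    # Recalled table facts: 6 × 7 = 42 and 6 × 4 = 24.
    product = (6 * 70 + 6 * 4) * 100   # 444 × 100 = 44400
    answer = product + 42              # 44400 + 42 = 44442
    print(answer)                      # 44442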
We need to remember that just because we are fast at this and can do it subconsciously, it doesn't mean we natively do math; we are just doing association of information using the neural networks we have trained.
So you don't think 50T parameter neural networks can encode the logic for adding two n-bit integers for reasonably sized integers? That would be pretty sad.
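For what it's worth, the logic they would have to encode is tiny: n-bit addition is just a ripple-carry chain of full adders. A minimal Python sketch of the circuit itself (not a claim about how any particular model represents it):

    # Ripple-carry addition built from the bitwise logic of a full adder.
    def full_adder(a, b, carry):
        # Two input bits plus carry-in -> (sum bit, carry-out).
        s = a ^ b ^ carry
        carry_out = (a & b) | (carry & (a ^ b))
        return s, carry_out

    def add_nbit(x, y, n):
        # Add two n-bit integers bit by bit, least significant bit first.
        result, carry = 0, 0
        for i in range(n):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result | (carry << n)  # keep any final carry as bit n

    assert add_nbit(44400, 42, 16) == 44442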
You are wrong. Especially given that we are talking about models with 50T parameters.
Can they do arbitrary computations for arbitrarily long numbers? Nope. But that's not remotely the same statement, and they can trivially call out to tools to do that in those cases.
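A tool call boils down to something like this (the message format and names here are made up for illustration; every vendor's API differs):

    import json

    # Hypothetical round trip: the model emits a structured request,
    # the host executes real arithmetic, and the result is fed back
    # into the model's context as ordinary text.
    def handle_tool_call(message: str) -> str:
        call = json.loads(message)  # e.g. {"tool": "calculator", "expr": "600*74+42"}
        if call["tool"] == "calculator":
            # Real integer math instead of token prediction.
            # eval is for demo only; never do this with untrusted input.
            return str(eval(call["expr"], {"__builtins__": {}}))
        raise ValueError("unknown tool: " + call["tool"])

    print(handle_tool_call('{"tool": "calculator", "expr": "600*74+42"}'))  # 44442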
Third things can exist. In other words, you’re implying a false dichotomy between “human computation” and “computer computation” and implying that LLMs must be one or the other. A pithy gotcha comment, no doubt.
Edit: the implication comes from demanding that the OP’s definition be rigorous enough to cover all models of “computation”, and from concluding that, because it fails to do so, LLMs must be more like humans than computers.
After dismissing it for a long time, I have come around to the philosophical zombie argument. I do not believe that LLMs are conscious, but I also no longer believe that consciousness is a prerequisite for intelligence. I think at this point it is hard to deny that LLMs possess some form of intelligence (although not necessarily human-like). I think "P-zombie" is a fitting description.
I don't think P-zombies can exist. There must be some perceptible difference between an intelligence w/ consciousness and one without. The only way there wouldn't be a difference is if we are mistaken about the consciousness (either both have it or neither does).
> There must be some perceptible difference between an intelligence w/ consciousness and one without
I think there are differences, and I think we can make good guesses, but I'm not sure we can reliably distinguish a P-zombie from a normal human by their behaviour with 100% accuracy.
It's not "thinking." It's not "solving." It's simply stringing words together in a way that appears most likely.
ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math.
It's a parlor trick, like Clever Hans [1]. A very impressive parlor trick that is very convincing to people who are not familiar with what it's doing, but a parlor trick nontheless.
[1] https://en.wikipedia.org/wiki/Clever_Hans