"""That's one of those irregular verbs, isn't it? I give confidential security briefings. You leak. He has been charged under section 2a of the Official Secrets Act.""" - Yes Minister.
To rephrase for the subject: We're only human, mistakes are to be expected; they are idiots, mistakes are to be expected; that thing is just a glorified calculator, mistakes are to be expected.
(There's a short story I found I couldn't bring myself to finish, “Zero for Conduct” by Greg Egan, in which the lead character is bullied by someone with a similar disregard for her intelligence. I know one cannot use fiction to learn about reality, so I will instead say that this disregard of human intelligence by other humans happens a lot in real life too: the racism and xenophobia of βαρβαρίζω can still be found today in all the people who insist that ancient structures like the pyramids couldn't possibly have been built by the locals, and therefore it must have been aliens.)
> AI hype is just sales talk
But where does the hype end and the reality begin?
> I had to convince a customer the other day we cannot write whole apps with ChatGPT. As far as I know, not a single example exists of a full app written by ChatGPT. It just can't be done, because ChatGPT is not reasoning!!! It is not intelligent.
I will agree ChatGPT does indeed produce incoherent solutions. One test project was making a game in JS: it (eventually) gave me a vector class with a multiply(scalar) method and friends, but then tried to call mul(scalar).
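To show the shape of that incoherence, here's a reconstruction from memory (not ChatGPT's verbatim output):

```javascript
// Reconstructed from memory, not ChatGPT's verbatim output: it defined the
// method under one name...
class Vector2 {
  constructor(x, y) { this.x = x; this.y = y; }
  multiply(scalar) { return new Vector2(this.x * scalar, this.y * scalar); }
}

// ...then later in the same session called it under another name entirely:
const velocity = new Vector2(3, 4);
velocity.mul(2); // TypeError: velocity.mul is not a function
```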
But ironically, I've also made a functioning (basic, but functioning) ChatGPT API interface… by bolting together the output of ChatGPT. I won't claim it's amazing or anything, because I'm an iOS developer and the thing "I" made is a web app, but it works well enough for my needs. (Just don't paste HTML into the query section: I stopped adding to it when it was good enough for me, so it only has a very basic solution for displaying code in the chat list. There's a lot that could be improved by using simple libraries, but I didn't want to.)
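The core of such an interface really is small. As a sketch of the kind of call mine boils down to (illustrative only, assuming the standard chat completions endpoint; not my actual code):

```javascript
// A minimal sketch, not my actual app: send one user message to the standard
// OpenAI chat completions endpoint and return the model's reply text.
async function askChatGPT(apiKey, userText) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // the ChatGPT model exposed via the API
      messages: [{ role: "user", content: userText }],
    }),
  });
  if (!response.ok) throw new Error(`API error: ${response.status}`);
  const data = await response.json();
  return data.choices[0].message.content;
}
```

Everything else (the chat list, the key handling, the code display) is UI around that one call.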
> I think it's best described as a word-calculator, or autocomplete on crack.
And that's the same error in the opposite direction.
If I understand right, GPT-3 has roughly the complexity of a mid-sized rodent's brain. Thus the metaphor I use is:
Imagine a ferret that was genetically modified to be immortal and had every sensory input removed except smell, which was wired up to a computer. The ferret and computer then spend 50,000 years going through 10% of all the text on the internet, where every sequence is tokenised, those tokens are turned into a pattern of olfactory nerves to stimulate, and the ferret is rewarded or punished based on how well it predicted the next token.
You're annoyed that this specific ferret's jokes are derivative, that its code doesn't always compile, that it makes mistakes when trying to solve algebraic problems, that its pecan pie recipe needs work, and that it makes mistakes when translating Latin into Hindi.

I'm amazed the ferret can do any of these things.
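In non-ferret terms, the reward signal in that metaphor is just next-token prediction. A toy sketch of the objective (my illustration, nothing like the scale or architecture of the real thing):

```javascript
// Toy version of the training objective in the metaphor: the model assigns a
// probability to each possible next token, and the "reward" is better the
// more probability it put on the token that actually came next.
const vocab = ["the", "ferret", "smells", "tokens"];

// Stand-in "model": a real one would use its weights to make this depend on
// the context; here it just guesses uniformly.
function toyModel(context) {
  return vocab.map(() => 1 / vocab.length);
}

// Cross-entropy-style loss for one prediction step (lower is better).
function nextTokenLoss(context, actualNext) {
  const probs = toyModel(context);
  const p = probs[vocab.indexOf(actualNext)];
  return -Math.log(p);
}

console.log(nextTokenLoss(["the", "ferret"], "smells")); // ≈ 1.386
```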
GPT-3 has about the same number of free parameters as the number of synapses in a mid-sized rodent brain.
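As a back-of-envelope check (my round numbers, using commonly cited figures, so treat the rodent side as approximate): GPT-3 is usually quoted at 175 billion parameters, and a rat has on the order of 200 million neurons with very roughly a thousand synapses each:

```latex
N_{\text{GPT-3}} \approx 1.75 \times 10^{11}\ \text{parameters}
N_{\text{rat}} \approx \underbrace{2 \times 10^{8}}_{\text{neurons}} \times \underbrace{10^{3}}_{\text{synapses/neuron}} = 2 \times 10^{11}\ \text{synapses}
```

If the true figure is nearer 10^4 synapses per neuron, the rodent comes out an order of magnitude ahead; either way, "mid-sized rodent" is the right neighbourhood.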
> Intelligence is comprised of multiple complex systems. ChatGPT only ever claimed to focus on the language part. It does not contain reasoning.
It (ChatGPT) is doing abstract symbol manipulation beyond language alone, unless you want to expand the idea of "language" to include chess positions (even if it plays mediocrely), algebra, and, when so prompted, the application of the rules of formal logic to the symbols used to represent those rules.
> Even a rodent can reason.
> Much less complex organisms can reason, too.
For which values of "reason"? From your other comment: "But I would define ability to reason as the ability to make logic conclusions based on input from a feedback loop."
1. This is a terrible waste of computing power, on the order of a multiple-trillions-to-one ratio, given that logic is the underlying thing used to implement the numbers and floating-point operations that approximate the linear algebra from which LLMs are, in turn, built.
(A similar ratio is also found in human "logical" cognition, and is why we don't use logic all the time; hence the classic example of the baseball bat and ball that cost $1.10 together, where the bat is $1 more expensive than the ball and the question is how much the bat costs, which so many people get wrong. Worked numbers just after this list.)
2. LLMs can do that anyway. Even literal flow charts meet that definition, and LLMs can build flow charts.
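For anyone whose reflexes said "$1.00", the worked numbers for that bat-and-ball aside, writing b for the bat and c for the ball (my notation):

```latex
b + c = 1.10, \qquad b = c + 1.00
\Rightarrow (c + 1.00) + c = 1.10
\Rightarrow c = 0.05, \quad b = 1.05
```

The intuitive answer of $1.00 for the bat is off by five cents.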
It still makes mistakes, sure, but then the question is "how many, compared to a human?", and the answer is terrifyingly close to human. I'm just hoping it will remain on the same side, slightly worse. I mean, your own example is humans being fooled by it, so clearly you also know humans can be wildly wrong.