> What failure mode do LLMs have that proves they don't understand anything at all?
Try to get it to write something in a programming language not commonly used on the internet, say Forth or Brainfuck, given only the specification of the language. Humans are able to grasp the laws of reality through a model and use them to act upon the real world.
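For concreteness: Brainfuck's entire specification is eight single-character commands operating on a tape of byte cells. A minimal interpreter, a rough Python sketch of my own just to make the spec explicit, fits in a couple dozen lines:

    # Minimal Brainfuck interpreter: the whole language spec is the eight
    # commands handled below. (Illustrative sketch; input ',' is ignored.)
    def run_bf(program: str, tape_len: int = 30_000) -> str:
        tape, out, ptr, pc = [0] * tape_len, [], 0, 0
        # Pre-match brackets so '[' and ']' can jump directly.
        jump, stack = {}, []
        for i, c in enumerate(program):
            if c == "[":
                stack.append(i)
            elif c == "]":
                j = stack.pop()
                jump[i], jump[j] = j, i
        while pc < len(program):
            c = program[pc]
            if c == ">":
                ptr += 1
            elif c == "<":
                ptr -= 1
            elif c == "+":
                tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-":
                tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".":
                out.append(chr(tape[ptr]))
            elif c == "[" and tape[ptr] == 0:
                pc = jump[pc]
            elif c == "]" and tape[ptr] != 0:
                pc = jump[pc]
            pc += 1
        return "".join(out)

Nothing beyond those eight rules is needed to judge whether a generated program does what was asked.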
> You genuinely think that a system whose goal is to predict the data it's given and continues to improve is limited in what it can learn?
Not GP, but image generators have ingested more images than I've seen in my life and still can't grasp basic things like perspective or anatomy, things that people can learn from a book or two. And there is already software with explicit models of both.
>Try to get it to write something in a programming language not commonly used on the internet, say Forth or Brainfuck, given only the specification of the language. Humans are able to grasp the laws of reality through a model and use them to act upon the real world.
My experience with this has been SOTA LLMs generating sensible code at rates much greater than random chance, even if it's not as good as I'd like. I don't see how that is evidence that LLMs don't understand anything at all, especially since there are probably humans who would write less workable code.
>Not GP, but image generators have ingested more images than I've seen in my life and still can't grasp basic things like perspective or anatomy.
The human brain didn't just poof into existence out of thin air. It's the result of billions of years of evolution tuning it for real-world navigation and vision, amongst other things. You are not a blank slate. All modern NNs are far more of a blank slate than the brain has been for at least millions of years.
You're moving the bar. In fact, the bar is so laughably low that I don't know we're having the same conversation anymore.
Nobody's saying it can't write "sensible code at rates much greater than random chance." We're not competing with an army of typing monkeys here. We're saying it doesn't actually "know" anything, and it regularly demonstrates as much, despite seeming very much like something that knows things most of the time. You're being tricked by a clever algorithm.
> All modern NNs are far more of a blank slate than the brain has been for at least millions of years.
All well and good if we were talking about interesting research and had millions of years to let these algorithms prove themselves out, I suppose. But we're talking about industries that are being created out of whole cloth and/or destroyed, depending on where you stand, and the time frame is in single-digit years, if not less. And these things will still confidently make elementary mistakes and get lost in their own context.
Look, they're obviously not useless, but they're a tool with strengths and weaknesses. And people like pg, who act like there ARE no weaknesses, or like a simple application of will and money will erase them, are selling us a bill of goods.
Yeah, and I'm saying this is a nonsense claim if you can't devise a test (one that wouldn't also disqualify humans) that demonstrates it. If you're saying that what LLMs do is "fake understanding", then "fake understanding" should be testable, unless you're just making stuff up.
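To be concrete, such a test could be as mechanical as this rough sketch (assuming Python and the run_bf interpreter someone sketched upthread; the passes() helper and the "print a fixed string" task are just my hypothetical framing): give the model, or a human, nothing but the eight-command spec, ask for a program that prints a target string, and check the output.

    # Hypothetical pass/fail check for the "write Brainfuck from the spec
    # alone" task. Assumes run_bf() from the interpreter sketch upthread.
    TARGET = "A"

    def passes(candidate_program: str) -> bool:
        """True iff the candidate program prints the target string."""
        try:
            return run_bf(candidate_program) == TARGET
        except (IndexError, KeyError):  # runaway pointer or unmatched brackets
            return False

    # The same criterion applies whether the candidate came from an LLM or a
    # person, e.g. this hand-written one: 8*8 = 64, plus one is 65, ASCII 'A'.
    print(passes("++++++++[>++++++++<-]>+."))  # True

If models pass this at rates comparable to people given the same spec, the "fake understanding" claim needs revising; if they don't, that's exactly the evidence being asked for.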
>All well and good if we were talking about interesting research and had millions of years to let these algorithms prove themselves out, I suppose
Did you even read what the commenter I replied to was saying? This is irrelevant. We don't need to wait millions of years for anything.