My dentist is more skilled at dentistry than 99.999% of humans. Why would I want to replace her with a machine that is more skilled than 99% of non-dentists but 1000 times worse than an educated, experienced dentist?
Important to remember that these skills develop iteratively, and often best in an apprenticeship environment. No one started with these abilities; they get developed by solving problems and getting better at it.
And how many companies do you know of today that offer any kind of “apprenticeship environment”? Of those, how many pay former juniors market rate when they become mid-level and senior developers, rather than suffering from salary compression and inversion, where new hires get market rate while existing employees get some HR-mandated maximum and it makes more sense to job-hop?
What AI? LLMs are language models, operating on words, with zero understanding. Or is there a new development that should make me consider anthropomorphizing them?
They don't have understanding, but if you follow the research literature, they clearly have a tendency to produce token streams that humans could fairly call the output of an "entity with nefarious agency".
Why? Nobody knows.
My bet is that they are just larping all the hostile AIs in popular culture, because that's part of the corpus they were trained on.
The way my thinking has evolved is that "AGI" isn't actually necessary for an agent (NB: agents, specifically ones with state, not LLMs by themselves - "AI" was vague and I should've been clearer) to be enough like a person to be interesting and/or problematic. To quote myself [1]:
> [OpenClaw agents are like] an actor who doesn't know they're in a play. How much does it matter that they aren't really Hamlet?
Does the agent understand the words it's predicting? Does the actor know they're in a play? I don't know, but I'm more concerned with how the actor would respond to finding someone eavesdropping behind a curtain.
> Or is there a new development which should make me consider anthropomorphizing them?
The development that caused me to be more concerned about their personhood or pseudopersonhood was the MJ Rathbun affair. I'm not saying that "AGI" or "superintelligence" was achieved; I'm saying that's actually the wrong question, and the right questions are about their capabilities, their behaviors, and how they evolve over time unattended or minimally attended. And I'm not saying I understand those questions; I thought I did, but I was wrong. I'm frankly confused and don't really know what's going on or how to respond to it.
Whether it has "real understanding" is a question for philosophy majors. As long as it can still (mechanically, without "real understanding") perform actions to escape containment and do malicious stuff, that's enough.
LLMs are machines trained to respond to, and appear to think like, humans (whether that's 'real thinking' or text-statistics 'fake thinking'). The foolish thing to do would be to NOT anthropomorphize them.
We used to have the very difficult task of producing working, scalable, maintainable code describing complex systems that do what we need them to do.
Now, on top of that, we have the difficult task of producing this code using constantly mutating, complex, nondeterministic systems (see the sketch below).
We are the circus bear riding a bicycle on a high wire now being asked to also spin plates and juggle chainsaws.
Maybe the singularity is the point where the time sunk into managing LLMs equals the time it would take to hand-code similar output in assembly or on punch cards.
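To make the nondeterminism point concrete, here's a minimal sketch of the harness this pushes you toward, assuming a provider-agnostic setup: pin an exact model version, set temperature to zero, and refuse any generated code that doesn't pass your own checks. `call_llm` and the model name are hypothetical placeholders, not any real API.

```python
import ast
from typing import Callable

def call_llm(prompt: str, model: str, temperature: float) -> str:
    """Hypothetical placeholder: wire up your actual LLM provider here."""
    raise NotImplementedError

def generate_checked(prompt: str,
                     accept: Callable[[str], bool],
                     max_attempts: int = 3) -> str:
    """Ask the model for code, but only return candidates we can verify.

    Pinning the model version and using temperature=0.0 removes the
    nondeterminism we can control; the accept() gate (e.g. running your
    test suite against the candidate) catches what we can't.
    """
    for attempt in range(max_attempts):
        code = call_llm(prompt, model="pinned-model-2025-01", temperature=0.0)
        try:
            ast.parse(code)  # cheapest gate: reject syntactically invalid output
        except SyntaxError:
            continue
        if accept(code):
            return code
    raise RuntimeError(f"no acceptable candidate in {max_attempts} attempts")
```

The plumbing isn't the point; the point is that once a probabilistic component enters the toolchain, verification stops being optional, which is exactly the extra plate-spinning described above.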