
So we're just going to hand the world over to everyone who

1) are not competent (were never educated, and generally can't predict the outcomes of their actions)

2) were not driven enough to at least gain basic competency (in other words: you ask them to do something and the odds of them still thinking about it 5 minutes later are pretty bad)

3) have no intention to change that

What do you think will happen? What is going to make these people brilliant hires?

Doctors, lawyers, accountants ... are at their best when they have to work adversarially, which is something LLMs really, really suck at. I think instead we'll see the same thing we've seen in IT: destruction of entry-level jobs, while "seniors" command bigger and bigger premiums, for two big reasons:

a) LLMs amplify their abilities greatly. 1 competent accountant can now do the taxes of 100 people well with LLM help. But 10 incompetent accountants (either because they're actually bad, or more likely because some CEO decided to "just do it himself" with LLM help) still only deliver a single product: catastrophe.

b) If a competent person, with or without LLM help, has to adversarially deal with an incompetent person (extreme example: in court), the competent person will always come out way ahead.

Doctors are adversarial versus health problems (disease, symptoms, government ...) and a little bit versus patients. Lawyers are of course adversarial. Accountants have to make intelligent and consistent choices with particular goals where the choice of how to classify things isn't clear, sometimes literally adversarially (e.g. keeping TWO government tax departments happy at the same time about the same transactions). And so on and so forth.

Over time, what will happen is that AIs will simply get captured by governments. They will make the game impossible to win, while only giving their own LLMs the information required not to screw up company taxes etc. This is really how society has worked for millennia.



Just to clarify, the person staring out the window would be competent, just bored. They see the schoolwork as relatively pointless. The adversarial work/tasks you described can largely be done today. Throw a complex scenario at GPT 5.3 with Extended thinking and you will see my point. The problem now isn't the model's capability, it's the context: getting the AI the info it needs. So, where I think we differ is that I am saying AI today (or in the near future) can reason through these complex scenarios (edge cases/outliers) you mention. The agents just don't have all the context they need in these complex scenarios. There will be some scenarios where the AI will fail, but these will be very rare, and you can have a human in the loop.


Well, I was a person like that: voraciously reading/practicing when I could, then bored in class most of the time, some of it spent dreaming. But ... this was not exactly common, to put it mildly.



