Cold-blooded cost/benefit analysis is an abdication of moral reasoning and responsibility. There is no such thing as an abstract life, only concrete realizations. What if the victim of an avoidable fatality is the one person who, had they survived, had the skills/insight/vision to literally save humanity from extinction?
I can accept an argument that there are societal tradeoffs we must make that involve the sacrifice of human lives (obviously we should not try to remove risk to the point that we live in sterile protective bubbles), but we should be honest about what we are doing and not hide behind phony numbers that mask the fact that money, and hence numerical value, is an imaginary construct, and that lives become fungible under this value system.
I further think that if we have an honest conversation instead of hiding behind quantitative analysis, we may actually have a productive dialogue about risk tradeoffs and accountability. Perhaps where there is a wide gap between the bean counters and the bleeding hearts, there is a third possibility that needs to be explored.
I have written a compiler for a language at more or less this abstraction level. It provides access to 16 general-purpose registers and a set of virtual instructions that operate on them. I program on an Intel Mac, so the virtual instructions all map directly to x86_64 instructions, but it would be very straightforward to write an ARM backend that composes multiple ARM instructions where needed to match the x86_64 behavior. I could also support virtual registers on platforms with fewer than 16 registers.
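To make that concrete, here is a minimal sketch in C of the lowering idea. All the names here (the register table, the emit_* functions) are my own illustrations, not the actual compiler's internals: a virtual add lowers to a single x86_64 instruction, while an AArch64 backend has to compose several instructions for something x86_64 does in one, such as loading a 64-bit immediate.

    #include <stdio.h>

    /* x86_64 exposes exactly 16 general-purpose registers, so the
       16 virtual registers can map 1:1 onto them. */
    static const char *x86_reg[16] = {
        "rax","rcx","rdx","rbx","rsp","rbp","rsi","rdi",
        "r8","r9","r10","r11","r12","r13","r14","r15"
    };

    /* virtual "add dst, src" -> one x86_64 instruction (AT&T syntax) */
    static void emit_add_x86_64(int dst, int src) {
        printf("  add %%%s, %%%s\n", x86_reg[src], x86_reg[dst]);
    }

    /* virtual "load 64-bit immediate" -> composed on AArch64, which has
       no single mov-reg-imm64: one movz plus up to three movk's */
    static void emit_load_imm64_aarch64(int dst, unsigned long long imm) {
        printf("  movz x%d, #0x%llx\n", dst, imm & 0xffff);
        for (int shift = 16; shift < 64; shift += 16) {
            unsigned long long chunk = (imm >> shift) & 0xffff;
            if (chunk)
                printf("  movk x%d, #0x%llx, lsl #%d\n", dst, chunk, shift);
        }
    }

    int main(void) {
        emit_add_x86_64(0, 1);                     /* add %rcx, %rax */
        emit_load_imm64_aarch64(0, 0x12345678ULL); /* movz + one movk */
        return 0;
    }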
Having this compiler, which is extremely fast (it is self-hosted and compiles its own 9,750-line source file in 15ms on my 2019 MacBook Pro, and I've seen throughput ranging from 500K to 10M lines per second for other programs), I have little interest in ever using LLVM again. I would rather just write optimal code than pray that LLVM does a good job for me. For equivalent programs, I find LLVM can be anywhere from 50-10000x slower than my compiler and doesn't necessarily produce better code. I can produce examples of LLVM code becoming significantly worse at -O2 than at -O0.
The only real utility LLVM offers me at this point is that I can use it to figure out what an optimal solution to a small subprogram might be, by compiling a small C program and inspecting the output. Then I can just directly write the optimal solution instead of being perpetually bogged down by LLVM's bloat, exposed to unexpected regressions in the optimizer, and forced to learn its strange incantations to get the desired performance.
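For anyone curious about that workflow, a minimal example (the function and file name are just placeholders): put the subprogram in its own file, compile with clang -O2 -S popcount.c, and read the generated popcount.s.

    /* popcount.c -- a small subprogram whose optimal lowering we want to see */
    unsigned popcount32(unsigned x) {
        unsigned n = 0;
        while (x) {
            x &= x - 1;  /* clear the lowest set bit */
            n++;
        }
        return n;
    }

Diffing the -O2 assembly against the -O0 version shows exactly what the optimizer bought you, and the winning instruction sequence can then be transcribed into hand-written code.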
> Why would anyone create a new language now? The existing ones are "good enough", and without a body of examples for LLMs to train on, a new language has little chance getting traction.
Compiler writing can be an art form and not all art is for mass consumption.
> Java has won (alongside many other winners of course), now the AI drawbridge is being raised to stop new entrants and my pick is that Java will still be here in 50 years time, it's just no humans will be creating it.
This makes no sense to me. If AI possesses intelligence then it should have no problem learning how to use a new language. If it doesn't possess intelligence, we shouldn't be outsourcing all of our programming to it.
Intelligence is not a well-understood concept. "AI" is also not a well-understood concept: we have LLMs that can pick up some novel patterns on "first sight", but then that pattern takes up space in the context window, and this kind of learning is quite limited.
Training LLMs, on the other hand, requires a large amount of training data.
> This makes no sense to me. If AI possesses intelligence then it should have no problem learning how to use a new language. If it doesn't possess intelligence, we shouldn't be outsourcing all of our programming to it.
Perfection. You have made an excellent point, and I don't want to detract from it. In reality this is a completely obvious point, but because of the AI/LLM brain-rot that has taken over the software programmer community, writ large, it comes across as particularly insightful. It's also just a sad and unimaginative state we are in, to think that because of LLMs no more programming languages will ever be needed beyond what exists in March 2026.
Insulting to whom? Are you anthropomorphizing LLMs? Also, you think it is insulting to say it is a bad idea to completely outsource software development to them? If that's the case, then being insulting sounds like a good thing.
>Insulting to whom? Are you anthropomorphizing LLMs?
Obviously not. It's insulting to the programmers who use them; hence I object to the term "brain-rot". Clearly the insinuation is that we're just suffering from brain rot.
The examples that you and others provide are always fundamentally uninteresting to me. Many, if not most, are some variant of a CRUD application. I have yet to see a single AI-generated thing that I personally wanted to use and/or spend time with. I also can't help but wonder what we might have accomplished if we had devoted the same resources to developing better tools, languages, and frameworks for developers instead of automating the generation of boilerplate and selling developers' own skills back to them. Imagine if open-source maintainers had instead been flooded with billions of dollars in capital. What might be possible?
Also, the capacities of LLMs are almost beside the point. I don't use LLMs, but I have no doubt that for any arbitrary problem that can be expressed textually and is computable in finite time, in the limit as time goes to infinity, an LLM will be able to solve it. The more important and interesting questions are what _should_ we build with LLMs and what should we _not_ build with them. Arguments about capacity distract from these more important questions.
Considering how much time developers spend building uninteresting CRUD applications I would argue that if all LLMs can do is speed that process up they're already worth their weight in bytes.
The impression I get from this comment is that no example would convince you that LLMs are worthwhile.
This feels like it conflates problem-solving with the production of artifacts. It seems entirely possible to me that the explosion of AI-generated code is ultimately creating more problems than it solves, and that the friction of manual coding may ultimately prove to be a great virtue.
This statement feels like a farmer making a case for tending the land with their hands instead of a tractor because the tractor produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale, and you need to learn new skills, like being a diesel mechanic.
How we work changes, and the extra complexity buys us productivity. The vast majority of software will be AI-generated, tools will exist to continuously test/refine it, and hand-written code will be for artists, hobbyists, and an ever-shrinking set of hard problems where a human still wins.
> This statement feels like a farmer making a case for tending the land with their hands instead of a tractor because the tractor produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale, and you need to learn new skills, like being a diesel mechanic.
To me this looks like an analogy that supports what the GP is saying. With modern farming practices you get problems like increased topsoil loss and decreased nutritional value of produce. It also leads to a loss of knowledge among those who adopt these short-term paths of least resistance.
This is not me saying big farming is bad or anything like that; it's just that your analogy, to me, seems perfectly in sync with what the GP is saying.
And those trade-offs can only pay off if the extra food produced can be utilized. If the farm is producing more food than can be preserved and/or distributed, then the surplus is deadweight.
I’ll be honest with you pal - this statement sounds like you’ve bought the hype. The truth is likely between the poles - at least that’s where it’s been for the last 35 years that I’ve been obsessed with this field.
"Airplanes are only 5 years away, just like 10 years ago" --Some guy in 1891.
Never use that phrase to claim something is impossible. I mean, there are driverless Waymos on the street in my area, so your statement is already partially incorrect.
Nobody is saying it isn't possible. Just saying nobody wants to pay as much money as it's going to take to get there. At some point investors will say, meh, good 'nuff.
I feel like we are at the crescendo point with "AI". Happens with every tech pushed here. 3DTV? You have those people who will shout you down and say every movie from now on will be 3D. Oh yeah? Hmmm... Or the people who see Apple's goggles and yell that everyone will be wearing them and that's just going to be the new norm now. Oh yeah? Hmmm...
Truth is, for "AI" to get markedly better than it is now (0) will take vastly more money than anyone is willing to put into it.
(0) Markedly, meaning it will truly take over the majority of dev (and other "thought worker") roles.
This is a false equivalence. If the farmer had some processing step which had to be done by hand, having mountains of unprocessed crops instead of a small pile doesn’t improve their throughput.
This is the classic mistake all AI hypemen make by assuming code is an asset, like crops. Code is a liability and you must produce as little of it as possible to solve your problem.
As an "AI hypeman" I 100% agree that code is a liability, which is exactly why I relish being able to increasingly treat code as disposable or even unnecessary for projects that'd before require a multiple developers a huge amount of time to produce a mountain of code.
Just about a week ago I launched a 100% AI-generated project that short-circuits a bunch of manual tasks. What previously took 3+ weeks of manual work to produce now takes us 1-2 days to verify instead. It generates revenue. It took a workflow that was barely profitable and cut its costs by more than 90%. Half the remaining time is ongoing process optimization; we hope to fully automate away the remaining 1-2 days.
This was a problem that wasn't even tractable without AI, and there's no "explosion of AI generated code".
I fully agree that some places will drown in a deluge of AI-generated code of poor quality, but that is an operator fault. In fact, one of my current clients retained me specifically to clean up after someone who dove headfirst into "AI first" without an understanding of proper guardrails.
I do see this as a bad thing, an abdication of responsibility for one's own life. As was recently put to me after the sudden death of a friend's father (who lived an unusually rich life): everyone dies, but not everyone truly lives.
Ah... we found the person who thinks they can pass judgement on how people choose to live their lives. I didn't say that my friend doesn't love his job (he does) - I said that he'll probably die before retiring.
Stephen Hawking, Einstein, Marie Curie, and Linus Pauling never retired. Did they not "truly live"?
At the end of his life, Maslow became convinced that self-transcendence, not self-actualization, was the pinnacle of the hierarchy. Strong identification with work will not get one to that final step. I am not sure whether AI is a path to self-transcendence or self-annihilation, but it's interesting to ponder in the case of someone like Brin.