> but that gap gets smaller every year (and will ostensibly be closed)
As long as you build software for humans (and all software we build is for humans, ultimately), you'll need humans at the helm to steer the ship towards a human-friendly solution.
The thing is, do humans _need_ most software? The fewer surfaces that need to interact with humans, the less you need humans in the loop to design those surfaces.
In a hypothetical world where maybe some AI agents or assistants do the vast majority of random tasks for you, does it matter how pleasing the DoorDash website looks to you? If anything, it should look "good" to an AI agent so that it's easier to navigate. And maybe "looking good" just amounts to exposing some public API to do various things.
UIs are wrappers around APIs. Agents only need to use APIs.
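A minimal sketch of that claim, with invented names (`OrderAPI`, `place_order`, `ui_submit_handler`) purely for illustration: the human-facing UI layer and an agent both sit on the same underlying API, and the agent simply skips the rendering.

```python
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    quantity: int

class OrderAPI:
    """The actual capability; everything else is a wrapper around it."""
    def place_order(self, item: str, quantity: int) -> Order:
        return Order(item, quantity)

def ui_submit_handler(api: OrderAPI, form: dict) -> str:
    # Human-facing surface: parse form input, call the API, render a message.
    order = api.place_order(form["item"], int(form["qty"]))
    return f"Thanks! {order.quantity}x {order.item} is on its way."

def agent_act(api: OrderAPI) -> Order:
    # An agent has no use for the rendered page; it calls the API directly.
    return api.place_order("coffee", 2)
```

If the agent is the only consumer, `ui_submit_handler` (and the design effort behind it) becomes dead weight.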
Yes, if it's not redundant software. The ultimate utility is to a human. Sure, at some point humans stopped writing assembly language and employed a compiler instead, so the abstraction level and interfaces change, but it's all still there to serve humans.
To use your example, do you think humans will want to interact with AI agents using a chat interface only? For most tasks humans use computers for today, that would be very unwieldy. So the UI will migrate from the website to the AI agent interface. It all transforms, becoming more powerful (hopefully!), but won't go away. And just as the advent of compilers led to an increase in the number of programmers in the world, so will AI agents. This is connected to Jevons' paradox as well.
Yeah, for those you can just relax and trust the vibes. It's for complex software projects that you need those software engineering chops, otherwise you end up with an intractable mess.
If it's for a complex software project the first question you need to ask is "does this really need to be software at all?"
Honestly this is where most traditional engineers get stuck. They keep attacking the old problem with new tools and being frustrated. I agree that agents are not a great way to build "complex software projects" but I think the problem space that is best solved by a "complex software project" is rapidly shrinking.
I've had multiple vendors try to sell my team a product whose core functionality we could build ourselves in an afternoon. We don't need that functionality to scale to multiple users, serve a variety of needs, or be adaptable to new use cases: we're not planning to build a SaaS company with it, we just need a simple problem solved.
But these comments are a treasure trove of anecdotes proving exactly my point.
One thing I've learned by following a link from elsewhere in this thread is that while the total count of neurons in an animal's nervous system is not a good proxy for intelligence, the count of neurons in the forebrain is. By that measure, only the orca ranks higher than humans [1].
That doesn't mean language ability is a natural outcome of crossing a certain threshold of brain complexity; if anything it's more likely the other way around: this complexity being driven by highly social behavior and communication.
But LLMs can also explain code, in fact they're fantastic at that. They can also be used to build anti-censorship, surveillance-avoidance and fact-checking tools. We are all empowered by them, it's just up to us to employ them so as to nudge society towards where we'd like it to go. Instead of giving up prematurely.
There are no cartoon villains in general; that's the point GP is making by using the word "cartoon". Let's use some common sense: it's not like Trump and Hegseth got together and sneaked the school onto the list of targets just because they liked the idea of children being killed. It's naive to suggest this is a possibility worth considering.
Yeah, going to have to go ahead and disagree with you there, boss. The man Hegseth, in all his 'no quarter' bravado, is only affirming his own mother's claim that he is a piece of shit. Respectfully, of course. I would not put it past him to kill some kids for a political or terrorism reason (the parents).
This is very different from targeting civilians as a goal in itself, which is what it would have had to be if this was not just negligence, but intentional, as GP suggested. Parent correctly points out that there's both no political incentive for that, and that it's not realistic from a psychological point of view, given reasonable assumptions about human nature.
The claim I'm responding to is "I refuse to believe anyone in the decision chain would move forward if they believed kids were going to be killed." I agree it's unusual for anyone in the US military to drop a bomb primarily because they want to kill some children. I think it is not unusual for people involved in bombing campaigns to anticipate killing children and move forward anyway.
> This is very different from targeting civilians as a goal in itself
Targeting a single person who might be a valid target had war been declared, while also intentionally striking many civilians around them, is the same as targeting those civilians. You knew the bomb you dropped was going to kill them, and you pressed the button. It makes no difference who the primary "target" is.
Otherwise, countries would just bomb all the civilians and all their infrastructure and medical facilities and schools with the excuse that they heard from an unnamed source that there was a combatant nearby, like Israel does in Palestine.
Self-improving AI systems aim to reduce reliance on human engineering by learning to improve their own learning and problem-solving processes. Existing approaches to self-improvement rely on fixed, handcrafted meta-level mechanisms, fundamentally limiting how fast such systems can improve. The Darwin Gödel Machine (DGM) demonstrates open-ended self-improvement in coding by repeatedly generating and evaluating self-modified variants. Because both evaluation and self-modification are coding tasks, gains in coding ability can translate into gains in self-improvement ability. However, this alignment does not generally hold beyond coding domains. We introduce **hyperagents**, self-referential agents that integrate a task agent (which solves the target task) and a meta agent (which modifies itself and the task agent) into a single editable program. Crucially, the meta-level modification procedure is itself editable, enabling metacognitive self-modification, improving not only the task-solving behavior, but also the mechanism that generates future improvements. We instantiate this framework by extending DGM to create DGM-Hyperagents (DGM-H), eliminating the assumption of domain-specific alignment between task performance and self-modification skill to potentially support self-accelerating progress on any computable task. Across diverse domains, the DGM-H improves performance over time and outperforms baselines without self-improvement or open-ended exploration, as well as prior self-improving systems. Furthermore, the DGM-H improves the process by which it generates new agents (e.g., persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs. DGM-Hyperagents offer a glimpse of open-ended AI systems that do not merely search for better solutions, but continually improve their search for how to improve.
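To make the "single editable program" idea concrete, here is a toy sketch (my own illustration, not the paper's implementation): both the task-solving procedure and the self-modification procedure live in one mutable object, so the meta step can rewrite the task step and, crucially, replace itself.

```python
class Hyperagent:
    """Toy illustration: task agent and meta agent in one editable program."""

    def __init__(self):
        # Both levels are ordinary mutable attributes of the same program.
        self.solve = lambda task: task.lower()   # task agent (deliberately weak)
        self.improve = self._default_improve     # meta agent

    def _default_improve(self, score: float) -> None:
        # Meta step: patch the task agent when performance is poor...
        if score < 1.0:
            self.solve = lambda task: task.upper()
        # ...and also modify the meta step itself (metacognitive
        # self-modification): here it trivially retires itself.
        self.improve = lambda score: None

agent = Hyperagent()
agent.improve(0.0)  # one meta step edits both solve() and improve()
```

In the real DGM-H setting, "editing" means generating and evaluating modified program variants rather than swapping lambdas, but the structural point is the same: nothing fixed stands above the meta level.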
This 'self vs non-self' logic is very similar to how plants prevent self-pollination. They have a biological 'discrimination' system to recognize and reject their own genetic code.
> For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.
There's an acronym for this: POSSE (Publish [on your] Own Site, Syndicate Elsewhere). Part of the IndieWeb movement, for those who want to explore this worthwhile idea further.
Sure, you can do that. But then the syndicated content usually ends up looking like low-effort slop and doesn't get much traction. Each publishing platform has its own features, limitations, and cultural norms. If you want to have any impact then you can't just copy content around: you have to tailor the message to the medium.
Probably some AI assistance was involved. Though you'd expect em dashes above, for example. A better example would be "No regression. No noise. Just compounding." There's not enough of it to bother me, though, even though I'm often annoyed by the ever-expanding tide of slop.
A hiker on a mountain might as well imagine that at the end of their journey they will step off onto the moon. But it's just a mirage. As we humans have externalized more and more of our understanding of the world into books, movies, websites and the like, our methods of plumbing this treasury for just the needed tidbits have developed as well. But it's still just working off that externalized collective understanding. This includes heuristics for combining different facts to produce new ones, sure, but it still depends on brilliant individuals to raise the "island peaks", which ultimately pulls up the level of the collective intelligence as well.