Perhaps we're in an AI summer and a tech winter. Winter is always the time when people hole up, dream, and work on whatever big thing is next.
We're about due for some new computing abstractions to shake things up, I think. Those won't be conceived by LLMs, though LLMs may aid in implementing them.
The stacks of turtles that we use to run everything are starting to show their bloat.
The other day someone was lamenting dealing with an onslaught of bot traffic and having to block it. Maybe we need to get back to good old-fashioned engineering and optimization. There was a thread on here the other day about PC Gamer recommending RSS readers while serving a 36 GB webpage ( https://news.ycombinator.com/item?id=47480507 )
Yeah. I often get crickets here when I talk along those lines. Can't tell if it's apathy, learned helplessness, or obliviousness. Regardless, devs seem like an extremely docile labor group based on how they react to this and other economic pressures.
And too many people have their egos tied to its failure.
I’m a massive AI skeptic. If anyone were to be jumping up and down on the corpse of AI and this incessant drive to use it everywhere, it’d be me. But I also work at Amazon. I got the email. I attended the meeting. I can personally attest that there are no new requirements for AI-generated code. The articles about what was said in that meeting are extremely misleading, if not outright wrong. But instead of believing the person who was actually there in the room, this thread is full of people dismissing my first-hand account of the situation because it doesn’t align with the “haha AI failed” viewpoint.
Not just their egos, but their paychecks. This place is either going to get very quiet or really weird when the hype train derails and the AI bubble bursts.
It’s not about perfectly architected code. It’s more about code that is factored in such a way that you can extend/tweak it without needing to keep the whole of the system in your head at all times.
It’s fascinating watching the sudden resurgence of interest in software architecture now that people are finding it helps LLMs move quickly. It has always been similarly beneficial for humans. It’s not rocket science. It got maligned because it couldn’t be reduced to an npm package or a discrete process that anyone could follow.
I've always been interested in software architecture, and upon graduating from university I was shocked to see the 'Software Architect' title disappear. Software devs have been treating software architecture like phrenology or reading tea leaves.
But those who kept learning and refining their architecture skills during this time look at software very differently.
It's not like the industry has been making small, non-obvious mistakes; it's been making massive, glaringly obvious ones! Anticipating a reasonable range of future requirements in your code and adhering to the basic principles of high cohesion and loose coupling is really not that hard.
I'm taken aback whenever I hear someone treating software architecture as some elusive quest akin to 'finding Bigfoot'.
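To make it concrete, here's a toy TypeScript sketch (names invented, purely illustrative) of those two principles: each class does one job (high cohesion), and the generator depends on a small interface rather than any concrete data source (loose coupling).

    // Invented names, purely illustrative.
    interface DataSource {
      rows(): Promise<string[]>;
    }

    class CsvFileSource implements DataSource {
      async rows(): Promise<string[]> {
        return ["a,1", "b,2"]; // stand-in for real file I/O
      }
    }

    // High cohesion: this class only formats. Loose coupling: it knows
    // nothing about where the rows come from.
    class ReportGenerator {
      constructor(private source: DataSource) {}
      async render(): Promise<string> {
        const rows = await this.source.rows();
        return rows.map((r, i) => `${i + 1}. ${r}`).join("\n");
      }
    }

    // Tomorrow's requirement (an API source, a test fixture) plugs in
    // without touching ReportGenerator:
    // new ReportGenerator(new CsvFileSource()).render().then(console.log);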
Asked it to spot-check a simple rate limiter I wrote in TS. Super basic algorithm: enforce at least 250ms between actions, sleeping if necessary. It found bogus errors in my code three times because it failed to see that I was using a mutex to prevent reentrancy. This was about 12 lines of code in total.
My rubber duck debugging session was insightful only because I had to reason through its lack of understanding and argue with it.
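Roughly, the shape of it was something like this (a from-memory sketch, not the exact code): a promise-chain mutex serializes callers, so the 250ms spacing logic can never be entered reentrantly.

    // Sketch, not the exact code. The promise chain acts as the mutex;
    // each caller then enforces a minimum 250ms gap since the previous
    // action, sleeping if necessary.
    class RateLimiter {
      private last = 0;                                // time of previous action
      private lock: Promise<void> = Promise.resolve(); // mutex via chaining

      async run<T>(action: () => Promise<T>): Promise<T> {
        const prev = this.lock;
        let release!: () => void;
        this.lock = new Promise<void>((r) => (release = r));
        await prev;                                    // wait our turn (no reentrancy)
        try {
          const wait = this.last + 250 - Date.now();
          if (wait > 0) await new Promise((r) => setTimeout(r, wait));
          this.last = Date.now();
          return await action();
        } finally {
          release();                                   // hand off to the next caller
        }
      }
    }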
If you think AGI is at hand why are you trying to sway a bunch of internet randos who don’t get it? :) Use those god-like powers to make the life you want while it’s still under the radar.
How do you take over the world if you have access to 1000 normal people? That's all AGI gives you by the original definition (long forgotten by now): surpassing the MEDIAN human at almost all tasks. How the rebranding of ASI into AGI happened without anyone noticing is kind of insane.
IMO, you need to have the capacity to write Good Code to know what Good Enough Code is. It's highly contextual to a particular problem and season in a codebase's life. One example: ugly code that upholds an architecture that confers conceptual leverage on a problem. Most of the code can operate as if some gnarly problem is solved, without having to grapple with it itself. Think about the virtual memory subsystem of an OS.
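A toy sketch of that kind of leverage (TypeScript, names invented for illustration): callers see a flat, infinite-looking key space, while the gnarly parts (capacity limits, eviction, reloading) live behind the facade, the way an OS pages memory behind a flat address space.

    // Names invented for illustration. Callers just call get(); the ugly
    // details stay behind the facade.
    class PagedStore {
      private resident = new Map<string, Uint8Array>(); // "in-core" pages
      constructor(
        private capacity: number,
        private load: (key: string) => Promise<Uint8Array>, // backing store
      ) {}

      async get(key: string): Promise<Uint8Array> {
        const hit = this.resident.get(key);
        if (hit) return hit; // fast path: already resident
        if (this.resident.size >= this.capacity) {
          // crude eviction: drop the oldest entry ("page out")
          const oldest = this.resident.keys().next().value as string;
          this.resident.delete(oldest);
        }
        const data = await this.load(key); // "page in"
        this.resident.set(key, data);
        return data;
      }
    }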
The problem with this argument is many do not believe this sort of leverage is possible outside of a select few domains, so we're sort of condemned to stay at a low level of abstraction. We comfort ourselves by saying it is pragmatic.
LLMs target this because the vast, vast majority of code is not written like this, for better or for worse. (It's not a value judgment; it just is.) This is a continuation (couldn't resist) of the trend away from things like SICP. Even the SICP authors admitted programming had become more about experimentation and gluing together ready-made parts than building beautifully layered abstractions out of which programs just fall.
I don't agree with the author, BTW. Good code is needed in certain things. It's just that a lot of the industry really tries to beat it out of you. That's been the case for a while. What's different now is that devs themselves are seemingly joining in (or at least are being perceived to).
> IMO, you need to have the capacity to write Good Code to know what Good Enough Code is.
I completely agree, and it's one of the biggest problems with trying to talk about "how you use agents". People using the same agents with the same workflow may see wildly different results depending on their ability to evaluate the end result.
> The problem with this argument is many do not believe this sort of leverage is possible outside of a select few domains, so we're sort of condemned to stay at a low level of abstraction.
I think there's a similar, tangential problem to consider here: people don't think that they are the person to create the serious abstraction that saves every future developer X amount of time, because it's so easy to write the glue code every time. A world where every library or API was as well thought out as the virtual memory subsystem would be overspecified, but it would at the same time enable creations far beyond the ones seen today (imo).
> Plus, it makes a natural moat against masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, the means of production, communication, and such things
I'd put intelligence in quotes there, but it doesn't detract from the point.
It is astounding to me how willfully ignorant people are being about the massive aggregation of power that's going on here. In retrospect, I don't think they're ignorant; they just haven't had to think about it much in the past. But this is a real problem with very real consequences. Sovereignty must occasionally be asserted, or someone will infringe upon it.
> It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.
It's much worse: a large demographic of Hacker News loves gen AI. These are usually highly educated people showing their true faces on the plethora of problems this technology generates and the norms it violates.
> I've given up tbh. It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.
Especially at the cost of diverting power and water away from farmers and others who need them. And the benefit of AI seems quite limited, judging by the recent Signal post here on HN.
Water for farmers is its own pile of bullshit. Beef uses a stupid amount of water. Same with almonds. If you're actually worried about feeding people, and not just producing an expensive economic product, you're not going to produce them.
Same goes for people living in deserts where we have to ship water thousands of miles.
I came back to agree that we should be eating a lot less meat than we do; I'm guilty of it too. We didn't eat meat all day every day as we evolved; if we love it, it's because it was scarce, and we need to create an artificial scarcity by choosing not to indulge (the same goes for fats, sugars, etc. in general).
As for the other responses regarding AI: I think AI could very well become the best thing to ever happen to our species if we were ready for it, but we are not by a long shot.
Regarding wastage: AI research is just fine imo, but corpos have gotten their parasite hooks into the technology and, as per usual, are more interested in making money right now than in deploying it when it's appropriate. Energy and water use would not be a problem if everyone and their mum weren't desperately seeking VC funding for a technology they know nothing about.
Regarding culture: besides the obvious case of capitalism doing capitalism things, we aren't ready in a cultural sense either; our tribalism and social structure are incredibly juvenile. They say that civilisation first started when one human tended the wounds of another and took care of them while they healed. From what I see of the world around me, we have gone backwards, and we call this civilisation? UBI would just be one step in a long list of cultural changes required to prepare for AI.
Our deficiencies are many, and they are complex to solve. The only solution I see is that one day we do crack AGI, and that "it happens" and turns out to be benevolent, in that it forces us to be good. Because we have to be forced to be; we are selfish and will never vote in each other's interests.