Hacker News | csallen's comments

I'll take the other side of this.

Professional software engineers like many of us have a big blind spot when it comes to AI coding, and that's a fixation on code quality.

It makes sense to focus on code quality. We're not wrong. After all, we've spent our entire careers in the code. Bad code quality slows us down and makes things slow/insecure/unreliable/etc for end users.

However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

There are two forces contributing to this: (1) more people coding smaller apps, and (2) improvements in coding models and agentic tools.

We are increasingly moving toward a world where people who aren't sophisticated programmers are "building" their own apps with a user base of just one person. In many cases, these apps are simple and effective and come without the bloat that larger software suites have subjected users to for years. The code is simple, and even when it's not, nobody will ever have to maintain it, so it doesn't matter. Some apps will be unreliable, some will get hacked, some will be slow and inefficient, and it won't matter. This trend will continue to grow.

At the same time, technology is improving, and the AI is increasingly good at designing and architecting software. We are in the very earliest months of AI actually being somewhat competent at this. It's unlikely that it will plateau and stop improving. And even when it finally does, if such a point comes, there will still be many years of improvements in tooling, as humanity's ability to make effective use of a technology always lags far behind the invention of the technology itself.

So I'm right there with you in being annoyed by all the hype and exaggerated claims. But the "truth" about AI-assisted coding is changing every year, every quarter, every month. It's only trending in one direction. And it isn't going to stop.


> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

Strongly disagree with this thesis, and in fact I'd go completely the opposite: code quality is more important than ever thanks to AI.

LLM-assisted coding is most successful in codebases with attributes strongly associated with high code quality: predictable patterns, well-named variables, use of a type system, no global mutable state, very low mutability in general, etc.

I'm using AI on a pretty shitty legacy area of a Python codebase right now (like, literally right now, Claude is running while I type this) and it's struggling for the same reason a human would struggle. What are the columns in this DataFrame? Who knows, because the dataframe is getting mutated depending on the function calls! Oh yeah and someone thought they could be "clever" and assemble function names via strings and dynamically call them to save a few lines of code, awesome! An LLM is going to struggle deciphering this disasterpiece, same as anyone.
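To make that concrete, here's a hypothetical sketch (not from the actual codebase) of the two anti-patterns described: function names assembled from strings and dispatched dynamically, plus a DataFrame mutated in place so its columns depend on call order.

```python
import pandas as pd

def process_sales(df: pd.DataFrame) -> None:
    # Mutates its argument: adds a column as a side effect.
    df["total"] = df["price"] * df["qty"]

def process_refunds(df: pd.DataFrame) -> None:
    # Also mutates, and silently drops a column.
    df.drop(columns=["qty"], inplace=True)

def run(df: pd.DataFrame, steps: list[str]) -> None:
    for step in steps:
        # "Clever" dynamic dispatch: the function name is assembled from a
        # string, so neither a reader nor an LLM can statically see which
        # functions run or what columns the DataFrame has afterward.
        globals()[f"process_{step}"](df)

df = pd.DataFrame({"price": [10.0, 5.0], "qty": [2, 3]})
run(df, ["sales", "refunds"])
# The only way to know the columns now is to trace every step in order:
print(list(df.columns))  # ['price', 'total'] -- "qty" is gone
```

Nothing in the signatures tells you `run` removed a column; you have to execute the whole pipeline in your head, which is exactly the work an LLM burns context on.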

Meanwhile for newer areas of the code with strict typing and a sensible architecture, Claude will usually just one-shot whatever I ask.
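For contrast, a hypothetical sketch of the "newer area" style being described: explicit types and pure functions, so the shape of the data is visible from the signatures without tracing call order.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sale:
    price: float
    qty: int

def total_revenue(sales: list[Sale]) -> float:
    # Pure function over an immutable type: inputs and outputs are fully
    # described by the signature, so a reader (or an LLM) never has to
    # guess what fields exist or what got mutated.
    return sum(s.price * s.qty for s in sales)

print(total_revenue([Sale(10.0, 2), Sale(5.0, 3)]))  # 35.0
```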

edit: I see most replies are saying basically the same thing here, which is an indicator.


I agree entirely with your statement that structure makes things easier for both LLMs and humans, but I'd gently push back on the mutation point. Just as mutation is fine for humans, it also seems to be fine for LLMs, in that structured mutation (we know what we can change, where we can change it, and to what) works just fine.

Your example with the dataframes is completely unstructured mutation typical of a dynamic language and its sensibilities.

I know from experience that none of the modern models (even cheap ones) have issues dealing with global or near-global state and mutating it, even navigating mutexes/mutices, conds, and so on.


> LLM-assisted coding is most successful in codebases with attributes strongly associated with high code quality: predictable patterns, well-named variables, use of a type system, no global mutable state, very low mutability in general, etc.

That's all very true, but what you're missing is that the proportion of codebases that need this is shrinking relative to the total number of codebases. There's an incredible proliferation of very small, bespoke, simple, AI-coded apps that are nonetheless quite useful. Most are being created by people who have never written a line of code in their life, who will do no maintenance, and who will not give two craps how the code looks, any more than the average YouTuber cares about the aperture of their lens or the average forum commenter cares about the style of their prose.

We don't see these apps because we're professional software engineers working on the other stuff. But we're rapidly approaching a world where more and more software is created by non-professionals.


> That's all very true, but what you're missing is that the proportion of codebases that need this is shrinking relative to the total number of codebases. There's an incredible proliferation of very small, bespoke, simple, AI-coded apps that are nonetheless quite useful. Most are being created by people who have never written a line of code in their life, who will do no maintenance, and who will not give two craps how the code looks, any more than the average YouTuber cares about the aperture of their lens or the average forum commenter cares about the style of their prose.

I agree that there will be more small, single-use utilities, but you seem to believe that this will decrease the number or importance of traditional long-lived codebases, which doesn't make sense. The fact that Jane Q. Notadeveloper can vibe code an app for tracking household chores is great, but it does not change the fact that she needs to use her operating system (a massive codebase) to open Google Chrome (a massive codebase) and go to her bank's website (a massive codebase) to transfer money to her landlord for rent (a process which involves many massive software systems interacting with each other, hopefully none of which are vibe coded).

The average YouTuber not caring about the aperture of their lens is an apt comparison: the median YouTube video has 35 views[0]. These people likely do not care about their camera or audio setup, it's true. The question is, how is that relevant to the actual professional YouTubers, MrBeast et al, who actually do care about their AV setup?

[0] https://www.intotheminds.com/blog/en/research-youtube-stats/


This is where I get into much more speculative land, but I think people are underestimating the degree to which AI assistant apps are going to eat much of the traditional software industry, the same way smartphones ate so many individual tools: calculators, stopwatches, iPods, etc.

It takes a long time for humanity to adjust to a new technology. First, the technology needs to improve for years. Then it needs to be adopted and reach near ubiquity. And then the slower-moving parts of society need to converge and rearrange around it. For example, the web was quite ready for apps like Airbnb in the mid 90s, but the adoption+culture+infra was not.

In 5, maybe 10, certainly 15 years, I don't think as many people are going to want to learn, browse, and click through a gazillion complex websites and apps and flows when they can easily just tell their assistant to do most of it. Google already correctly realizes this as an existential threat, as do many SaaS companies.

AI assistants are already good enough to create ephemeral applications on the fly in response to certain questions. And we're in the very, very early days of people building businesses and infra meant to be consumed by LLMs.


> In 5, maybe 10, certainly 15 years, I don't think as many people are going to want to learn, browse, and click through a gazillion complex websites and apps and flows when they can easily just tell their assistant to do most of it.

And how do you think their assistant will interact with external systems? If I tell my AI assistant "pay my rent" or "book my flight" do you think it's going to ephemerally vibe code something on the banks' and airlines' servers to make this happen?

You're only thinking of the tip of the iceberg which is the last mile of client-facing software. 90%+ of software development is the rest of the iceberg, unseen beneath the surface.

I agree there will be more of this, but again, that does not preclude more of the big backend systems existing.


I don't think we disagree. We still have big mainframe systems from the 70s and beyond that are powering parts of society. I don't think all current software systems are just going to die or disappear, especially not the big ones. But I do think significant double-digit percentages of software engineers are working on other types of software that are at risk of becoming first- or second- or third-order casualties in a world where ephemeral AI assistant-generated software and vibe-coded bespoke software becomes increasingly popular.

You are vastly overstating the capabilities of LLMs and the capacity and desire of non-technical individuals to use them to create applications.

What's even the point of vague replies like this that disagree with no real evidence, arguments, or examples?

The thing is, everything you describe may be easy for an average person in the future. But just having your single AI agent do all of that will be even easier, and that seems like where things will go.

Just like everyone has a 3D printer at home?

People want convenience, not a way to generate an application that creates convenience.


And perhaps they'll get that convenience from an application that they don't even know came into existence because they asked their agent to do something.

What, in practice, is the difference between AGI and what you’re suggesting will exist in terms of agent automation?

> However, code quality is becoming less and less relevant in the age of AI coding

It actually becomes more and more relevant. AI constantly needs to reread its own code and fit it into its limited context, in order to take it as a reference for writing out new stuff. This means that every single code smell, and every instance of needless code bloat, actually becomes a grievous hazard to further progress. Arguably, you should in fact be quite obsessed about refactoring and cleaning up what the AI has come up with, even more so than if you were coding purely for humans.


Even non-frontier models now offer a context window of 1 million tokens. That's 100K-300K LOCs. I would not call that a limited context.
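As a back-of-the-envelope check on that figure (the 3-10 tokens per line of code is my assumption; real density varies by language and tokenizer):

```python
# How many lines of code fit in a 1M-token context window?
context_tokens = 1_000_000
tokens_per_loc_dense = 10  # conservative: verbose lines, verbose tokenizer
tokens_per_loc_terse = 3   # optimistic: short lines, efficient tokenizer

loc_low = context_tokens // tokens_per_loc_dense
loc_high = context_tokens // tokens_per_loc_terse
print(loc_low, loc_high)  # 100000 333333
```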

> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

Strong disagree. I just watched a team spend weeks trying to make a piece of code work with AI, because the vibe-coded result was spaghetti garbage: even the AI couldn't tell what needed to be done and was basically playing ineffective whack-a-mole, fixing the bug you asked about by reintroducing an old bug or introducing a new one, because no one understood what was happening. And humans couldn't even step in like normal, because no one understood what was going on.


Okay, so you observed one team that had an issue with AI code quality. What's your point?

In 1998, I'm sure there were newspaper companies who failed at transitioning online, didn't get any web traffic, had unreliable servers that crashed, etc. This says very little about what life would be like for the newspaper industry in 1999, 2000, 2005, 2010, and beyond.


I'm arguing that code quality very much still matters and will only continue to matter.

AI will get better at making good maintainable and explainable code because that’s what it takes to actually solve problems tractably. But saying “code quality doesn’t matter because AI” is definitely not true both experientially and as a prediction. Will AI do a better job in the future? Sure. But because their code quality improves not because it’s less important.


Well then sure, we can agree there, it's just a matter of phrasing then.

Then you may want to clarify what your phrasing meant because I couldn’t find a more charitable interpretation

More and more software will be built by non-experts, software that has smaller user bases and simpler use cases and doesn't need to be maintained as much if at all. "Poor AI code quality" matters much less for these than for say, software written by developers at FAANG companies, since literally nobody will ever even look at the code.

Where we're headed is toward a world where a ton of software is ephemeral, apps literally created by AI out of thin air for a single use, and then gone.


Ephemeral in the same way the electrical wiring in an old house is ephemeral.

Which is to say, not at all.

Original wiring done by a professional, later changes by “vibe electrician” homeowners.

Every circuit might be a custom job, but they all accumulate into something a SWE calls “technical debt”.

Don’t like how the toaster and the microwave are on the same circuit even though they are in different parts of the kitchen? You’re lucky if you can even follow the wiring back to the circuit box to see how it was done. The electrical box is so much of a mess where would you even run a new circuit?

That’s the future we’re looking at.


Ephemeral like a table quickly hacked together for a specific purpose.

It can hold a single flower pot, will fall over if a kid tries to climb on it, is made from wood that's delicious to dogs and most likely won't last 25 years because it wasn't finished properly.

But the person who made it needed a table for a flower pot, doesn't have kids or pets and will happily build a new one in 10 years when the old one breaks down from bad joins.

Not every piece of shitty software has a massive attack surface that will immediately kill people if it operates wrong.

Is my personal cheap imitation of Hazel[0] I vibe-coded in two evenings perfectly bug free, and does it replicate every feature of Hazel perfectly? Of course not, but it does the exact 5 things I need it to do and saved me the upgrade price to Hazel 6.

[0] https://www.noodlesoft.com/


No ephemeral as in: I'll ask the AI to check my email, and it'll create a bespoke table UI on the fly right inside my AI assistant, and populate it with relevant email data. And I'll use it, and then it will disappear. Software created and destroyed in a moment.

Not all software is meant to be some permanent building block upon which other software sits.

When new technology arrives that makes earlier ways of doing things obsolete, the consistent pattern throughout history has been that existing experts and professionals significantly underestimate the changes to come, in large part because (a) they don't like those changes, and (b) they're too used to various constraints and priorities that used to be important but no longer are. In other words, they're judging the new tech through the lens of an older world, rather than through the lens of a newer world created by the new tech.


And a smart agent will eventually notice that it's doing that single operation 5 times a day and create a more permanent skill/tool for itself instead of reinventing the wheel every time.

Yeah, I’ve built many one-off scripts in my day, and these days they take 100x less time.

There's almost no point in arguing about this anymore. Neither you nor the other person are going to be convinced. We just have to wait and see if a new crop of 100x productivity AI believer companies come along and unseat all the incumbents.

It seems that your opinion is based on expectations for the future then, which is notoriously difficult to predict.

It's not that hard to predict that obviously useful new technology is going to improve over time.

Guns, wheels, cars, ships, batteries, televisions, the internet, smartphones, airplanes, refrigeration, electric lighting, semiconductors, GPS, solar panels, antibiotics, printing presses, steam engines, radio, etc. The pattern is obvious, the forces are clear and well-studied.

If there is (1) a big gap between current capabilities and theoretical limits, (2) huge incentives to improve it, (3) no alternative tech that will replace or outcompete it, (4) broad social acceptance and adoption, and (5) no chance of the tech being lost or forgotten, then technological improvement is basically a guarantee.

These are all obviously true of AI coding.


That list cherry picks all the successful cases where the technology improved while ignoring the many, many others where it didn't and the technology improved no further. That's dishonest.

It isn't even a good job of cherry-picking: we never got mainstream supersonic passenger aircraft after the Concorde, because aerospace technology hasn't advanced far enough to make it economically viable, and the slowdown in progress and massively increasing costs in semiconductors for cutting-edge processes are very well known.


You're not factoring in the list of constraints I provided.

There's no broad social acceptance of supersonic flight because it creates incredibly loud sonic booms that the public doesn't want to deal with. And despite that, it's still a bad counterexample, as companies continue to innovate in this area e.g. Boom Supersonic.

At best you can say, "It's taking longer than expected," but my point was never that it will happen on any specific schedule. It took 400 years for guns to advance from the primitive fire lances in China to weapons with lock mechanisms in the 1400s. Those long time frames only prove my point even more strongly. Progress WILL happen when there is appetite and acceptance and incentive and room to grow, and time is no obstacle. It's one of the more certain things in human history, and the forces behind it have been well studied.

Just as certain: the people and jobs who are obsoleted by these new technologies often remain in denial until they are forgotten.


If code quality only stops mattering in 400 years (whatever that definition happens to be), then the prediction it makes is worthless in terms of what you should do today. You use it to argue that it's unimportant to deal with code quality now, but if it's a 400-year payoff, you've made the wrong bet.

Surely you don't think AI coding technology will be as slow to develop as guns were.

We're obviously talking about 1-10 years here, not 100-1000 years.


It’s really hard to predict where exponential progress will freeze. I was reading the other day that the field seems to have stagnated again, with no really meaningful ideas to overcome the bottlenecks we’ve hit in terms of diminishing returns for scaling. I’m not a pessimist or an unbridled optimist, but I think it’s fundamentally difficult to predict, and the law of averages suggests someone will end up crowing about being right.

In contrast to AI/AI companies, which have no negative externalities?

But hindsight is 20/20 as they say. In 2020 people predicted that Facebook Horizon would only go one direction, always improve and become as pervasive as the internet. So when you predict that the design and architecture capabilities of models will continue to improve, thus making code quality irrelevant, you sound very confident. And if in five years you are right, you will brag about it here. If not, well I for one will not track you down and rub it in your face. Peace out.

You're confusing betting on a company/product vs betting on technological improvement in general.

It is absolutely the case that virtual reality technology will only get better over time. Maybe it'll take 5, or 10, or 20, or 40 years, but it's almost a certainty that we'll eventually see better AR/VR tech in the future than we have in the past.

Would you bet against that? You'd be crazy to imo.


There's a kid outside the window of the place I'm staying who's been in the yard playing and talking with people online through his VR headset for like 2+ hours. He's living in the future. Whatever happens, he and his friends are going to continue to be interested in more of this.

Whether what they're using in 20 years is produced by the company formerly known as Facebook or not is a whole different question.


The newspaper industry is the perfect analogy, because it is effectively dead. Wholesale dead. Here and there, the biggest, most world-renowned papers are still alive, on life-support... NYT, WSJ, etc. But they're all dead. Their death has caused the absolute destruction of an entire industry sector and has given gangrene to adjacent industries that they will soon succumb to. The point about 1998 wasn't that there was this transition that demanded careful attention and wise strategy, but that death was coming for it no matter what anyone did to stop it.

The death of newspapers is quite the spectacle too. No one seems to understand how bad it is... the youngest generation can't even seem to recognize that anything is missing. We've effectively amateurized journalism so that only grifters and talentless hacks want to attempt it, and only in tiny little soundbites on Twitter or other social media (and they're quickly finding out how it might be more lucrative to do propaganda for foreign governments or MLM charlatanism). When the death of the software industry is complete, it too will have been completely amateurized, the youngest generation will not even appreciate that people used to make it for a living, and the few amateurs doing it will start to comprehend how much more lucrative it will be to just make poorly disguised malware.


I don't buy this at all. Code quality will always matter. Context is king with LLMs, and when you fill that context up with thousands of lines of spaghetti, the LLM will (and does) perform worse. Garbage in, garbage out, that's still the truth from my experience.

Spaghetti code is still spaghetti code. Something that should be a small change ends up touching multiple parts of the codebase. Not only does this increase costs, it just compounds the next time you need to change this feature.

I don't see why this would be a reality that anyone wants. Why would you want an agent going in circles, burning money and eventually finding the answer, if simpler code could get it there faster and cheaper?

Maybe one day it'll change. Maybe there will be a new AI technology which shakes up the whole way we do it. But if the architecture of LLMs stays as it is, I don't see why you wouldn't want to make efficient use of the context window.


I didn't say that you "want" spaghetti code or that spaghetti code is good.

I said that (a) apps are getting simpler and smaller in scope and so their code quality matters less, and (b) AI is getting better at writing good code.


Apps are getting bigger and more ambitious in scope as developers try to take advantage of any boost in production LLMs provide them.

Yes, some people are pushing everything-apps.

But just as many are creating One More Habit Tracker or Todo App, so many that Apple had to change their review guidelines to block the surge of low-tier app slop.

Internally people are creating bespoke tools for themselves to fix issues in their daily workflows that would've either been a 100k€ software project that lasts for 6 months or required an expensive SaaS system with 420 extra features they didn't need - and the price to match.


Every metric I've seen points to there being an explosion in (a) the number of apps that exist and (b) the number of people making applications.

What relevance do either of those claims have to the claim of the comment you are responding to?

Are you trying to imply that having more things means that each of them will be smaller? There are more people than there were 500 years ago - are they smaller, or larger?

Also, the printing press did lead to much longer works. There are many continuous book series that have run for decades, with dozens of volumes and millions of words. This is a direct result of the printing press. Just as there are television shows that have run with continuous plots for thousands of hours. This is a consequence of video recording and production technologies; you couldn't do that with stage plays.

You seem to be trying to slip "smaller in scope" into your statement without backing, even though I'd insist that applications individuals wrote being "smaller in scope" was an obvious consequence of the tooling available. I can't know everything, so I have to keep the languages and techniques limited to the ones that I do know, and I can't write fast enough to make things huge. The problems I choose to tackle are based on those restrictions.

Those are the exact things that LLMs are meant to change.


The average piece written and published today is much shorter than the average piece from the past. Look at Twitter. Social media in general. Internet forums. Blog posts. Emails. Chats. Etc. The amount of this content DWARFS other content.

The same is true of most things that get democratized. Look at video. TikTok, YouTube, YouTube shorts.

Look at all the apps people are building for themselves with AI. They are typically not building Microsoft Word.

Of course there will be some apps that are bigger and more ambitious than ever. I myself am currently building an app that's bigger and more ambitious than I would have tried to build without AI. I'm well aware of this use case.

But as many have pointed out, AI is worse at these than at smaller apps. And pretending that these are the only apps that matter is what's leading developers, imo, to overvalue the importance of code quality. What's happening right now, invisible to most professional engineers, is an explosion of tiny, bespoke personal applications being quickly built by non-developers that are going to chip away at people's reasons to buy and use large, bloated, professional software with hundreds of thousands of users.


> Look at all the apps people are building for themselves with AI.

The apps those people were making before LLMs became ubiquitous were no apps. So by definition they are now larger and more ambitious.


There's already been an explosion of apps - and most of them suck, are spam, or worse, will steal your data.

We don't need more slop apps, we already have that and have for years.


The Jevons paradox says otherwise. As producing apps becomes cheaper, we will not be able to help ourselves: we will make them larger until they fill all available space and cost just as much to produce and maintain.

That's the incorrect application of the Jevons Paradox. We won't get bigger apps, we'll get more apps.

Think about what happened to writing when we went from scribes to the printing press, and from the printing press to the web. Books and essays didn't get bigger. We just got more people writing.


I’ve been told repeatedly now that if AI coding isn’t working for me it’s because my projects code quality is too poor so the agents can’t understand it.

Now I’m being told code quality doesn’t matter at all.


Nothing you wrote seems to support what you said at the start there. Why is the importance of code quality decreasing?

Controversy much :-)

I completely agree. Just going through the beginner & hobbyist forums, the change from "can you help me with code to do X" to "I used ChatGPT/Claude/Copilot to write code to do X" happened with absolutely startling speed, and it's not slowing down. There was clearly a pent-up demand here that wasn't being met otherwise.

People are using AI to get code written. They have no idea what code quality is and only care that what they built works.

AFAICT, every time technology has allowed non-technical people to do more, it's opened up new opportunities for programmers. I don't expect this to be any different, I just want to know where the opportunities are.


> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

It's the opposite, code quality is becoming more and more relevant. Before now you could only neglect quality for so long before the time to implement any change became so long as to completely stall out a project.

That's still true, the only thing AI has changed is it's let you charge further and further into technical debt before you see the problems. But now instead of the problems being a gradual ramp up it's a cliff, the moment you hit the point where the current crop of models can't operate on it effectively any more you're completely lost.

> We are in the very earliest months of AI actually being somewhat competent at this. It's unlikely that it will plateau and stop improving.

We hit the plateau on model improvement a few years back. We've only continued to see any improvement at all because of the exponential increase of money poured into it.

> It's only trending in one direction. And it isn't going to stop.

Sure it can. When the bubble pops there will be a question: is using an agent cost effective? Even if you think it is at $200/month/user, we'll see how that holds up once the cost skyrockets after OpenAI and Anthropic run out of money to burn and their investors want some returns.

Think about it this way: If your job survived the popularity of offshoring to engineers paid 10% of your salary, why would AI tooling kill it?


> That's still true, the only thing AI has changed is it's let you charge further and further into technical debt before you see the problems. But now instead of the problems being a gradual ramp up it's a cliff, the moment you hit the point where the current crop of models can't operate on it effectively any more you're completely lost.

What you're missing is that fewer and fewer projects are going to need a ton of technical depth.

I have friends who'd never written a line of code in their lives who now use multiple simple vibe-coded apps at work daily.

> We hit the plateau on model improvement a few years back. We've only continued to see any improvement at all because of the exponential increase of money poured into it.

The genie is out of the bottle. Humanity is not going to stop pouring more and more money into AI.

> Sure it can. When the bubble pops there will be a question: is using an agent cost effective? Even if you think it is at $200/month/user, we'll see how that holds up once the cost skyrockets after OpenAI and Anthropic run out of money to burn and their investors want some returns.

The AI bubble isn't going to pop. This is like saying the internet bubble is going to pop in 1999. Maybe you will be right about short term economic trends, but the underlying technology is here to stay and will only trend in one direction: better, cheaper, faster, more available, more widely adopted, etc.


> What you're missing is that fewer and fewer projects are going to need a ton of technical depth.

> I have friends who'd never written a line of code in their lives who now use multiple simple vibe-coded apps at work daily.

Again it's the opposite. A landscape of vibe-coded micro apps is a landscape of buggy, vulnerable points of failure. When you buy a product, software or hardware, you do more than buy the functionality; you buy the assurance that it will work. AI does not change this. Vibe code an app to automate your lightbulbs all you like, but nobody is going to be paying millions of dollars a year for vibe-coded slop apps, and apps like that are what keep the tech industry afloat.

> Humanity is not going to stop pouring more and more money into AI.

There's no more money to pour into it. Even if there were, we're out of GPU capacity and running low on the power and infrastructure to run these giant data centres, and it takes decades to bring new fabs or power plants online. It is physically impossible to continue this level of growth in AI investment. Every company that's invested in AI has done so on the promise of continued improvement, but the moment that stops being true, everything shifts.

> The AI bubble isn't going to pop. This is like saying the internet bubble is going to pop in 1999.

The internet bubble did pop. What happened after is an assessment of how much the tech is actually worth, and the future we have now 26 years later bears little resemblance to the hype in 1999. What makes you think this will be different?

Once the hype fades, the long-term unsuitability for large projects becomes obvious, and token costs increase by ten or one hundred times, are businesses really going to pay thousands of dollars a month on agent subscriptions to vibe code little apps here and there?


> Again, it's the opposite. A landscape of vibe-coded micro apps is a landscape of buggy, vulnerable points of failure. When you buy a product, software or hardware, you buy more than the functionality: you buy the assurance that it will work. AI does not change this. Vibe code an app to automate your lightbulbs all you like, but nobody is going to pay millions of dollars a year for vibe-coded slop apps, and apps like that are what keep the tech industry afloat.

This is what everyone says when technology democratizes something that was previously reserved for a small number of experts.

When the printing press was invented, scribes complained that it would lead to a flood of poorly written, untrustworthy information. And you know what? It did. And nobody cares.

When the web was new, the news media complained about the same thing. A landscape of poorly researched error-ridden microblogs with spelling mistakes and inaccurate information. And you know what? They were right. That's exactly what the internet led to. And now that's the world we live in, and 90% of those news media companies are dead or irrelevant.

And here you are continuing the tradition of discussing a new landscape of buggy, vulnerable products. And the same thing will happen and already is happening. People don't care. When you democratize technology and you give people the ability to do something useful they never could do before without having to spend years becoming an expert, they do it en masse, and they accept the tradeoffs. This has happened time and time again.

> The internet bubble did pop... the future we have now 26 years later bears little resemblance to the hype in 1999. What makes you think this will be different?

You cut out the part where I said it only popped economically, but the technology continued to improve. And the situation we have now is even better than the hype in 1999:

They predicted video on demand over the internet. They predicted the expansion of broadband. They predicted the dominance of e-commerce. They predicted incumbents being disrupted. All of this happened. Look at the most valuable companies on earth right now.

If anything, their predictions were understated. They didn't predict mobile, or social media. They thought that people would never trust SaaS because it's insecure. They didn't predict Netflix dominating Hollywood. The internet ate MORE than they thought it would.


Your whole argument is based on 'the technology improves'.

Ok, so another fundamental proposition is monetary resources are needed to fund said technology improvement.

What's wrong with LLMs? They require immense monetary resources.

Is that a problem for now? No because lots of private money is flowing in and Google et al have the blessing of their shareholders to pump up the amount of cash flows going into LLM based projects.

Could all this stop? Absolutely, many are already fearing the returns will not come. What happens then? No more huge technology leaps.


This has literally never happened in the history of humanity. Name one technology where development permanently stopped due to lack of funding, despite there being...

1. lots of room for progress, i.e. the theoretical ceiling dwarfed the current capabilities

2. strong incentives to continue development, i.e. monetary or military success

3. no obviously better competitors/alternatives

4. social/cultural tolerance from the public

Literally hasn't happened. Even if you can find 1 or 2 examples, they are dwarfed by the hundreds of counter examples. But more than likely, you won't find any examples, or you'll just find something recent where progress is ongoing.

Useful technology with room to improve almost always improves, as people find ways to make it better and cheaper. AI costs have already fallen dramatically since LLMs first burst on the scene a few years back, yet demand is higher than ever, as consumers and businesses are willing to pay top dollar for smarter and better models.


AI has none of these things.

1. As I said before, we've long since reached diminishing returns on models. We simply don't have enough compute or training data left to make them dramatically better.

2. This is only true if it actually pans out, which is still an unknown question.

3. Just... not using it? It has to justify its existence. If it's not of benefit vs. the cost then why bother.

4. The public hates AI. The proliferation of "AI slop" makes people despise the technology wholesale.


1. Saying that AI will never approach its theoretical limits because XYZ tech is approaching diminishing returns, is like saying guns would never get better than the fire sticks of China in 1000 AD because the then-current methods hit their theoretical limits. You're betting against tens of thousands of the smartest minds of a generation across the entire planet. I will happily take the other side of this bet.

2. Sure, depends on #1. But the incentive is undeniable.

3. It has. Do you think people are using Claude Code in incredible numbers for no reason?

4. The public and businesses are adopting AI en masse. It's incredibly useful. Demand is skyrocketing. I don't think you could show that negative public sentiment has been sufficient to stop this, any more than negative sentiment about TVs, headphones, bicycles, etc (which was significant).

With the exception of #1, I feel like you're arguing that things won't happen, where the numbers show they've already happened and are accelerating.


Thanks for jumping in fella. Agree on all points.

> This is what everyone says when technology democratizes something that was previously reserved for a small number of experts.

What part of renting your ability to do your job is "democratizing"? The current state of AI is the literal opposite. Same for local models that require thousands of dollars of GPUs to run.

Over the past 20 years software engineering has become something that just about anyone can do with little more than a shitty laptop, the time and effort, and an internet connection. How is a world where that ability is rented out to only those that can pay "democratic"?

> When the printing press was invented, scribes complained that it would lead to a flood of poorly written, untrustworthy information. And you know what? It did. And nobody cares.

A bad book is just a bad book. If a novel is $10 at the airport and it's complete garbage, then I'm out $10 and a couple of hours. As you say, who cares. One bad vibe-coded app and you've leaked your email inbox and bank account, and you're out way more than $10. The risk profile from AI is way higher.

The same is even more true for businesses. The cost of a cyberattack or an outage is measured in the millions of dollars. It's simple maths: the cost of the risk of compromise far outweighs the savings of cheaper upfront software.

> You cut out the part where I said it only popped economically, but the technology continued to improve.

The improvement in AI models requires billions of dollars a year in hardware, infrastructure, and energy. Do you think that investors will continue to pour that level of investment into improving AI models for a payout that might only come ten to fifteen years down the road? Once the economic bubble pops, the models we have are the end of the road.


Don't waste your time on him. He reminds me of people who are so concentrated on one part of the picture, they can't see the whole damn thing and how all the pieces fit and interact with each other.

You're describing yourself imo. Your point ignores hundreds of years of history and says zero about the forces that shape technological development and progress, which have been studied fairly exhaustively.

"Thousands of dollars of GPU" as a one-time expense (not ongoing token spend) is dirt cheap if it meaningfully improves productivity for a dev. And your shitty laptop can probably run local AI that's good enough for Q&A chat.

On a SWE salary maybe. If the baseline cost of doing business is a $5k GPU you've excluded like a quarter of the US working population immediately.

> What part of renting your ability to do your job is "democratizing"? The current state of AI is the literal opposite. Same for local models that require thousands of dollars of GPUs to run.

"Renting your ability to do your job"?

I think you're misunderstanding the definition of democratization. This has nothing to do with programmers. It has nothing to do with people's jobs. Democratizing is defined as "the process of making technology, information, or power accessible, available, or appealing to everyone, rather than just experts or elites."

In other words, democratizing is not about people who have jobs as programmers. It's about the people who don't know how to code, who are not software engineers, and who are suddenly gaining the ability to produce software.

Three years ago, there was no amount of money you could pay to produce software yourself. You either had to learn and develop the expertise, or hire someone else. Today, any random person can sit down and build a custom to-do list app for herself, for free, almost instantly, with no experience.

> The improvement in AI models requires billions of dollars a year in hardware, infrastructure, end energy. Do you think that investors will continue to pour that level of investment into improving AI models for a payout that might only come ten to fifteen years down the road? Once the economic bubble pops, the models we have are the end of the road.

10-15 year payouts? Uhhh. Maybe you don't know any AI investors, but the payout is coming NOW. Many tens of thousands of people have already gotten insanely rich: three years ago, and two years ago, and last year, and this year. If you think investors won't be motivated, and that there aren't people currently in line to throw their money into the ring, you're extremely uninformed about investor sentiment and returns lol.

You can predict that the music will stop. That's fair. But to say that investors are worried about long payout times is factually inaccurate. The money is coming in faster and harder than ever.


I have no idea what this flood of personal-use software is that you think normal people want to produce. Normal people don't even think about software doing a thing until they see an advertisement about software that does a thing. And then they'd rather pay 10 bucks for it than to invent a shittier version of it themselves for $500.

And I'm not being condescending about normal people. Developers often don't think about the possibility of making software that does a particular thing until they actually see software that does that thing. And they're also going to prefer buying over vibe coding unless the program is small and insignificant.


Go look at the numbers from Lovable and Replit and Claude Code and similar companies. Quite staggering.

I myself have run an online community for early-stage startup founders for over a decade. The number of ambitious people who would love to build something but don't know how to code and in the last year or two have started cranking out applications is tremendous. That number is far higher than the number of software engineers who existed before.


That's very much an echo chamber you find yourself in. I'm far away from any technological center, and the main uses of LLMs for people here are the web search widget, spell checking, and generating letters. Also kids cheating on their homework.

> Democratizing is defined as "the process of making technology, information, or power accessible, available, or appealing to everyone, rather than just experts or elites."

Your definition only supports my point. The transfer of skill from something you learn to something you pay for is the exact and complete opposite of your stated definition. It turns the activity from one that anyone can learn into one that only those who can afford to pay can do.

It is quite literally making this technology, information, and power available to only the elite.

> Uhhh. Maybe you don't know any AI investors, but the payout is coming NOW.

What payout? Zero AI companies are profitable. If you're invested in one of these companies you could be a billionaire on paper, but until it's liquid it's meaningless. There's plenty of investors who stand to make a lot of money if these big companies exit, but there's no guarantee that will happen.

The only people making money at the moment are either taking cash salaries from AI labs or speculating on Nvidia stock. Neither of which have much do with the tech itself and everything to do with the hype.


> It is quite literally making this technology, information, and power available to only the elite.

I don't know what to say to you. More people are coding now with AI than ever coded before. If your argument was true, then that would just mean that there are more elites than ever. Obviously that's not what's happening.

> What payout? Zero AI companies are profitable.

Because they're reinvesting profits into continued R&D, not because their current products are unprofitable. You're failing to understand basic high-growth business models.

> If you're invested in one of these companies you could be a billionaire on paper, but until it's liquid it's meaningless.

Plenty of AI companies have exited, and plenty of other AI companies offer tender offers where shareholders have been able to sell their shares to new investors. Again, it sounds like you just aren't really educated on what's happening. Plenty of people are millionaires in real life, not just on paper. You're massively incorrect about the payout landscape that investors are considering.

> The only people making money at the moment are either taking cash salaries from AI labs or speculating on Nvidia stock.

No, founders, early-stage investors, and employees with stock have cashed out in many cases. Again, it just feels like you're not aware of what's happening on the ground.

> Neither of which have much do with the tech itself and everything to do with the hype.

That's a very different argument. If you want to say that the investment is unsound, then fine, that's your opinion, but trying to say that investors have no appetite because they have to wait 10 to 15 years for a payout is incredibly incorrect.


> I don't know what to say to you. More people are coding now with AI than ever coded before. If your argument was true, then that would just mean that there are more elites than ever. Obviously that's not what's happening.

I don't know how I can explain this any more clearly.

If you need AI to create software, and the cost of AI is $200/month, then only people who can afford $200/month can create software.

Costs will increase. The current cost is subsidized by investor funding. Sell at a loss to get people hooked on the product, then raise the price to make money: a "high-growth business model", as you say.

The cost to make a competitor to Anthropic or OpenAI is tens or hundreds of billions of dollars upfront. There will be few competitors and minimal market pressure to reduce prices, even if the unit costs of inference are low.

$200/month is already out of reach of the majority of the population. Increases from here means only a small percentage of the richest people can afford it.

I don't know what definition of "elite" you're using, but "technology limited so that only a small percentage of the population can afford it" is... an elite group.

This is fun and all, but I think we've reached the end of the productive discussion to be had and I don't have much more to say. Charitably, we're living in completely different realities. I just hope that when the bubble pops, the fall isn't too hard for you.


> I don't know how I can explain this any more clearly.

> If you need AI to create software, and the cost of AI is $200/month, then only people who can afford $200/month can create software.

Your entire hypothetical is based on "ifs" that aren't true. Nothing in this sentence is true. You don't need AI to create software, the cost of AI development is much less than $200/month on average, and many more people can afford AI dev than programming bootcamps or classes or degrees.

> Costs will increase. The current cost is substituted by investor funding. Sell at a loss to get people hooked on the product and then raise the price to make money, a "high-growth business model" as you say.

Inference is already profitable at current pricing. Most funding goes toward R&D for new model training, not inference.

Also, inference costs dropped over 280x between Nov 2022 and Oct 2024. Inference will continue to get cheaper as we develop more specialized hardware and efficient models.

This is not Uber, subsidizing the cost of human drivers. This is real tech, chips and servers and software. Costs fall over time, not rise. Innovation does not go backwards.
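For scale, the 280x figure implies a steep compounding decline. A quick back-of-envelope check (the 280x figure is the claim above; the 23-month span is my reading of the Nov 2022 to Oct 2024 dates):

```python
factor = 280                           # claimed total drop in inference cost
months = 23                            # Nov 2022 through Oct 2024
monthly_factor = factor ** (1 / months)  # per-month cost ratio
decline = 1 - 1 / monthly_factor

print(f"{decline:.1%} average monthly price decline")  # about 21.7% per month
```

That is, prices falling by roughly a fifth every month, compounding to the claimed 280x over the period.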


> $200/month is already out of reach of the majority of the population.

1. You can build small applications with the $20/month sub, much more with the $100/month. Competition and technology improvements will inevitably improve the price to value ratio.

2. Cable sports subscriptions are in a similar price range. Expensive, but not exclusive to “the elites”.


The median per capita income in the United States is $37,683/year.[0] Depending on your state, after taxes, that's something like ~$2,600/month. You're asking them to spend almost 10% of their post-tax income on this, just for the opportunity to create software. With rent, food, and other living expenses, many households at that income level simply cannot afford it.

This is the median income. If it's a struggle for someone on this income then it's worse for half of all Americans, and American incomes are higher than most of the rest of the world.

[0]: https://en.wikipedia.org/wiki/Per_capita_personal_income_in_...
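The percentage claim above works out as a quick calculation (the income and post-tax figures are the comment's own estimates):

```python
median_income_annual = 37_683   # median US per capita income, per the comment
post_tax_monthly = 2_600        # the comment's rough post-tax monthly estimate
subscription = 200              # the AI subscription under discussion

share = subscription / post_tax_monthly
print(f"{share:.1%} of post-tax income")  # about 7.7%, i.e. "almost 10%"
```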


The bar for "create software" up to this last year or so was "learn software development" or "pay someone else".

Personally, I think millions more people having the ability to create some subset of software is an incredible shift.


> $200/month is already out of reach of the majority of the population. Increases from here means only a small percentage of the richest people can afford it.

This is an absurd claim. There are many things the majority of the population spends money on that cost more than this.


I'm going to take your comment at face value, and I'm also going to assume that you're US-based.

You need to take a step back and look at the economic reality of the majority of Americans today. Many live paycheck-to-paycheck, even those with "middle class" incomes. For many, a $200 one-off bill is debilitating, let alone a recurring subscription. If you don't know that, you have a dangerously narrow view of the economy.


If you think that a $200/month subscription is "out of reach" for the majority of Americans, you are just plainly and simply wrong about that. They might have to make some tradeoffs by reducing spending in other areas, but that's part of life.

> nobody will ever have to maintain it, so it doesn't matter

I'm curious about software that's actively used but nobody maintains it. If it's a personal anecdote, that's fine as well


I mean I've written some scripts and cron jobs for websites that I manage that have continued trucking for years with no changes or monitoring on my end. I suppose it's a bit easier on the web.

  > However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

  > [...]

  > We are increasingly moving toward a world where people who aren't sophisticated programmers are "building" their own apps with a user base of just one person. In many cases, these apps are simple and effective and come without the bloat that larger software suites have subjected users to for years. The code is simple, and even when it's not, nobody will ever have to maintain it, so it doesn't matter. Some apps will be unreliable, some will get hacked, some will be slow and inefficient, and it won't matter. This trend will continue to grow.

I do agree that more and more people are going to take advantage of agentic coding to write their own tools/apps to make their lives easier. And I genuinely see it as a good thing: computers were always supposed to make our lives easier.

But I don't see how it can be used as an argument for "code quality is becoming less and less relevant".

If AI is producing 10 times more lines than are necessary to achieve the goal, that's more resources used. With the prices of RAM and SSDs skyrocketing, I don't see it as a positive for regular users. If they need to buy a new computer to run their vibecoded app, are they really reaping the benefits?

But what's more concerning to me is: where do we draw the line?

Let's say it's fine to have a garbage vibecoded app running only on its "creator" computer. Even if it gobbles gigabytes of RAM and is absolutely not secured. Good.

But then, if "code quality is becoming less and less relevant", does this also apply to public/professional apps?

In our modern societies we HAVE to use dozens of pieces of software every day, whether we want to or not, whether we actually directly interact with them or not.

Are you okay with your power company cutting your power because their vibecoded monitoring software mistakenly thought you hadn't paid your bills?

Are you okay with an autonomous car driving over your kid because its vibecoded software didn't see them?

Are you okay with cops coming to your door at 5AM because a vibecoded tool reported you as a terrorist?

Personally, I'm not.

People can produce all the trash they want on their own hardware. But I don't want my life to be ruled by software that hasn't gone through the quality controls it should have.


> If AI is producing 10 times more lines that are necessary to achieve the goal, that's more resources used. With the prices of RAM and SSD skyrocketing, I don't see it as a positive for regular users. If they need to buy a new computer to run their vibecoded app, are they really reaping the benefits?

I mean, I agree, but you could say this at any point in time throughout history. An engineer from the 1960s could scoff at the web, the explosion in the number of programs, and the decline in efficiency of the average program.

An artist from the 1700s would scoff at the lack of training and precision of the average artist/designer of today, because the explosion in numbers has certainly translated to a decline in the average quality of art.

A film producer from the 1940s would scoff at the lack of quality of the average YouTuber's videography skills. But we still have millions of YouTubers and they're racking up trillions of views.

Etc.

To me, the chief lesson is that when we democratize technology and put it in the hands of more people, the tradeoff in quality is something that society is ready to accept. Whether this is depressing (bc less quality) or empowering (bc more people) is a matter of perspective.

We're entering a world where FAR more people will be able to casually create and edit the software they want to see. It's going to be a messier world for sure. And that bothers us as engineers. But just because something bothers us doesn't mean it bothers the rest of the world.

> But then, if "code quality is becoming less and less relevant", does this also applies to public/professional apps?

No, I think these will always have a higher bar for reliability and security. But even in our pre-vibe-coded era, how many massive brand-name companies have had outages and hacks and shitty UIs? Our tolerance for these things is quite high.

Of course the bigger more visible and important applications will be the slowest to adopt risky tech and will have more guardrails up. That's a good thing.

But it's still just a matter of time, especially as the tools improve and get better at writing code that's less wasteful, more secure, etc. And as our skills improve, and we get better at using AI.


If strongly typed languages are preferred for AI coding, maybe the fixation on code quality makes LLMs produce better code.

Maybe, but how exactly are you defining "code quality"?

You can for sure be above average without a very good memory if you're good at spotting tactics. But average isn't a super high bar.

Huge numbers (billions) of people have enough money to make massive changes to the lives of those less fortunate than them, but don't, and prefer instead to make incremental upgrades to their own lives. New rugs, more savings, first-class airline tickets, eating out a few more times a month, etc.

This is just human nature.

People who are at wealth level x tend to say, "I can't believe that people at wealth level x+1 aren't more generous!" all the while ignoring their own lack of desire to give generously to people at wealth levels x-1 and below.


Aaron Swartz had a good take on this - http://www.aaronsw.com/weblog/handwritingwall

I remember wrestling with this in my therapist's office when Aaron died. I had known him tangentially - we hung out in the same IRC channels, and had several mutual friends in the Cambridge/Somerville techie crowd that he would hang out in person with.

As a college student and young adult I had always envied his fame, his intelligence, his money (post-Reddit acquisition), and the strength of his convictions. And yet, in that moment in early 2013, he was dead, and I was working a good job at Google (and this was 2013 Google, when it was still a nice place to work doing things that I could generally approve of). And he'd died doing the stuff that I wanted to do but had been too chickenshit to actually carry out.

I think that this illustrates why the world is the way it is. All the true altruists are dead, killed for their altruism. It is adaptive, in a survival sense, to think of yourself and your own survival and not worry too much about other people. Ironically, this is what my therapist was trying to get me to realize.

But I think this also goes back to the GP's point. When people at wealth level x give to people at level x-1, it doesn't raise the people at x-1 up to x. It brings the person at x down to x-1. There are more people at x-1 than x, after all; you could give everything you had away and mathematically, it would lower your net worth significantly more than it would raise theirs. And of course, it doesn't do a damn thing about the people at x+1. Why can't they donate instead, where their wealth would do an order of magnitude more good?

There actually do exist people who are like that: they would rather spread their wealth around the people at wealth level x-1, joining them at that level, than raise themselves up to x+1. I've met some; most poor people are far more generous than rich people are. That is why they are poor. But then, it doesn't solve the problem of inequality, they just disappear into the masses of people at level x-1.


There's also twitch viewers who love to give all their money to people at wealth level x + 10-100

Game theory is the most dangerous force in the universe.


I'm not talking about people with x+1, where x is a standard US middle-class amount of money. In that case, $20k or $100k or some other amount that would make a tiny difference in the world is a huge amount of money to a middle-class family.

No, I'm talking about wealth level x*100. For them, the difference between $100M and $1B is basically no difference in the quality of life for that family. They'd have one fewer megayacht. They could give away $900M and eliminate hunger forever in a large city or a small state. $100B is 100x that again: they could give away $99.9B, still have $100M, and solve poverty in most _countries_.

Or, if they don't want to, we institute a 90% wealth tax on everything over $10M, and solve it ourselves.


What you forget is that none of the x*100 people you are talking about would ever have become an x*100 person if they thought the way you suggest they should. In German, we have a proverb: "Von den Reichen lernt man das Sparen." (The rich teach you how to save money.) And giving away huge sums without personal gain is the contrary of saving.

Hmmm.

> For them, the difference between $100M and $1B is basically no difference in the quality of life to that family.

I think money at that level is not about family quality of life. It's usually about buying companies, launching product lines, etc.

> They could give away $900M, and eliminate hunger forever in a large city or a small state. $100B is 100x that again, they could give away $99.9B, still have $100M, and solve poverty in most _countries_.

Ehhh...

Most people who have $X billion don't really have that. They just have controlling shares in a company that's worth a lot, and media companies like Forbes enjoy making headlines by pretending that's cash. Actually turning all of it into cash would be impossible.

Of course, they can still turn meaningful percentages of it into cash and give it away, that's true. And I think more billionaires should.

But also, many problems aren't money problems. For example, simply flooding a state or a small city with money isn't going to "solve hunger forever." Hunger and poverty are more often an issue around distribution and logistics, infrastructure, politics, culture and conflict, and things like that. Huge cash giveaways famously tend to disappear and accomplish very little.

The single biggest force that reduces suffering and poverty in an enduring way, imo, is the creation and proliferation of technology. Vaccines that are cheaper to produce. Water filtration systems that are easier to maintain. Seeds and crops that are hardier and more durable. Healthcare that's more affordable and available. Etc. Advances like this have reduced more suffering and ended more poverty and counteracted more famine and saved more lives than any amount of charity ever has or could.

What I would like to see more of is billionaires and even super-talented non-billionaires starting more organizations that are a force for good. Or using money from profitable enterprises to fund unprofitable-yet-charitable enterprises.


We can also tell because anyone who can take the time to use a computer with internet to write a comment in well-formed English is already comparatively wealthy or connected enough to provide food and housing for dozens of people.

Dirt poor people in 3rd world countries have smartphones and internet access and write comments in well-formed English.

All of them? Weird argument to take

Safe to assume those downvoting you will not be donating their MacBooks and refrigerators.

I also think this could be a symptom of an economically unequal society (which creates a higher range of x), and is a big reason why it's important to fix it, on top of the extra money to the state.

So that's essentially communism, right? Is human nature incompatible with communism, or is capitalism incompatible with human nature?


Communism doesn't eliminate power relationships, it just papers them over with politics and bureaucracy instead of having them legible with prices and wages.

In the American golden age of capitalism from ~1950-1970, the top marginal tax rate was 90%, and so you didn't have CEOs get paid more than about 3x the median worker, because the government would get it all. Instead, they got perks. Private jets. Positions at the company for their kids. Debaucherous holiday parties. Casual sexual harassment of secretaries.

In Soviet communism, all production was centrally planned by government bureaus run by party members. It was not uncommon for these bureaus to make mistakes, leading to severe shortages for the population. Nevertheless, these shortages never seemed to really hit the party members responsible for making the plans. Power has its perks.

And that's also why reforms attempting to reduce economic inequality need to focus on power rather than money. There have been a number of policies that do meaningfully raise standards of living for the poor: they're things like the 13th amendment to the (US) Constitution, the 1st amendment, the jury trial system, free markets, anti-monopoly statutes, bans on non-competes, etc. What they all have in common is that they preserve economic freedom and the power to make your own living against people who would seek to restrict that freedom and otherwise keep you in bondage.


So why not bus drivers? Supposedly because their routes are fixed?

A friend of mine is an airline pilot, and when she was doing short flights, hopping around several EU capitals per day, she said "it's like driving a bus". You fly from A -> B -> C -> D -> A. And start over again the next day. She wasn't a huge fan.

>They booked a flight — and got a bus

https://wapo.st/3Q41RX9


Maybe them too, to some extent. Have they been studied?

There's a huge difference between fake content and fake authors.


> The company is profiting off of other people's work! That's not right.

What's wrong with it?

We live in an interconnected world. Every company or individual who profits off anything does so, in very large part, thanks to work left behind by others that they don't directly compensate each other for.

Stated differently, if we look at the other side of the coin, it's one thing to create value, and another thing to capture value. If you are a business (and artists seeking profit are businesses), you create value and then try to capture that value. Creating value and trying to capture it (in the form of profit) is the entire name of the game. But no business captures 100% of the value it creates. If you make a product/artwork/service/whatever and release it to the public, lots of people may use it, view it, be inspired by it, learn from it, and ultimately profit off it in their own way without you necessarily being able to capture some part of it. And what's wrong with that?

Do we really want the entire world to be endlessly full of cookie-licking rent seekers who demand profit every time anyone does anything? Because they failed to capture the value they created, and thus demand a piece of the pie from those who are better at capturing value?

I like the way Thomas Jefferson put it:

> If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation. Inventions then cannot, in nature, be a subject of property. Society may give an exclusive right to the profits arising from them, as an encouragement to men to pursue ideas which may produce utility, but this may or may not be done, according to the will and convenience of the society, without claim or complaint from anybody. Accordingly, it is a fact, as far as I am informed, that England was, until we copied her, the only country on earth which ever, by a general law, gave a legal right to the exclusive use of an idea.


> Do we really want the entire world to be endlessly full of cookie-licking rent seekers who demand profit every time anyone does anything? Because they failed to capture the value they created, and thus demand a piece of the pie from those who are better at capturing value?

The starving artists I know would be extremely happy to get even one cookie to lick. I know an artistic prodigy who paints on canvas and has work in a sizable gallery, at least one institutional patron, and is constantly hosting paid workshop events. He architected and built his own custom 40ft ceiling pine art house covered in beautiful stained wood and arches, with large metal cuttings and engravings of wild horses on the railings. This artistic prodigy is still starving and works as a handyman/construction worker part-time. He is strongly opposed to AI, by the way.

Most artists are "starving artists"; there are extremely few artists that can support themselves by their creations alone. Many artists make no money at all, and many artists seem to work or create alone as individuals, meaning that they almost always lack the funds or community resources to protect their creative work.


To the extent that you care about deriving money from your art, you aren't just an artist, you're a business. And that's fine. I love businesses and the people who start them.

But, sadly, art isn't an easy business. There is a tremendous supply of art, and not enough demand for it. In other words, it's very competitive.

So, just like every other competitive industry on earth, if you're going to be in the art business, then simply making a good product/service isn't enough. You have to think about marketing and sales and differentiation and distribution and strategy and the whole big picture.

There are plenty of starving artists who make incredible art that nobody pays for, just like there are plenty of starving startup founders who build well-coded apps that nobody pays for.


> Do we really want the entire world to be endlessly full of cookie-licking rent seekers who demand profit every time anyone does anything?

No, far better that we have four rent-seekers who gobble up everything that anyone is naive enough to share with the world, then turn around to demand profit in order to keep up with the new pace of the world that they’ve created.


But original artists being rent-seekers is OK, right?

PS: I categorically disagree that AI developers are rent-seekers, unless they require rent for the products their models generate


Uh, is there an AI developer that does not charge for their services that I don't know about? I'd love to use that one.


Charging for a service isn't rent-seeking.


I'd love to see copyright abolished. As long as it continues to exist, however, AI should not get an exemption; that would directly advantage a few large companies over everyone else, giving them a special privilege to violate licenses that nobody else gets.


What's the exemption? To me it seems like we're trying to do the opposite, make the laws even more stringent for AI than we do for people.

Imagine a genius person went out, trained himself by studying all the great art, and in doing so gained the ability to paint or draw in just about any style. That wouldn't be a copyright violation. In fact, plenty of people do this, or some version of it. But we say it should be illegal for AI... why, exactly? Because artists don't like it? Because it's "too good"?

If we're going to keep copyright around, then imo what should be illegal here is using the AI as a tool to create copies of people's work. Like, if I go out and use AI to generate a book of poetry and it's got a bunch of Beatles lyrics in it, fine, sue me.


Yeah, that is true. I'd be on-board with abolishing copyright then.


> Greenfield and then leaving is too easy, you don’t learn the actually valuable lessons.

You learn a ton of valuable lessons going from 0 to v1. And a ton of value is created. I guess I'm unclear how you're defining "actually valuable" here.


I suspect the issue is the parent has never worked in an early stage role at a growing startup still iterating on finding product-market-fit. If they had they would realize you learn a lot about "maintaining and expanding", especially when your prototype now has a bunch of users.

This is evident in my personal experience by the fact that I am often the one that sees scaling and maintenance issues long before they happen. But of course parent would claim this is impossible.


Sure, but that's in many ways the easy part.

If v1 is successful and attracts a lot of users, it will have to have features added and maintained.

Doing that in ways that do not produce "legacy code" that will have to be thrown away and rewritten in a few years is a very different skill than getting v1 up and running, and can easily be what decides whether you have a successful business or not.


Having worked at both greenfield startups and unicorns, I've found that virtually every problem I've encountered at the unicorn startups was caused by folks being incompetent at the greenfield level. Maybe when you get to the scale of Google things are different, but it's certainly possible to build a business big enough to retire off that doesn't require any more technical knowledge than what you'd learn at a two-person pre-PMF startup.


Architecting without knowing how to maintain it.

Edit: a legacy vibe coder


If you make bad enough decisions, your customers leave, your company dies, and/or you are fired by the board.

CEOs get fired all the time, and companies die all the time. It's part of life, and so are layoffs.

There's no need for some sort of additional punitive actions to be taken. If you control a company, you have the right to do layoffs, and if you're an employee, you take that risk of being laid off because you prefer it to going out and trying to grow your own company from scratch.


It's fine if someone doesn't believe that executives should be held to an additional high-consequence standard, but you're boiling down this argument without addressing the central element at play, which is viability and the wealth gap.

Now, you can absolutely hold a different position here, that’s okay, I’m fine with that, but at least address it head on.

Consider the fact that those getting laid off suffer disproportionately negative effects compared to what executives face for making terrible decisions in the first place. Jack still keeps his Aspen home and whatever wealth he's extracted out of the company. So he faces no real downside here. He could run Block into the ground and still have more money than he would know what to do with.

You’re arguing about shares of paper entitling people to do things to other people’s lives without facing much actual consequence in their personal lives.

Not to mention professional, I’ve watched executives jump from company to company doing terrible things and they still keep getting hired.

Whereas the average person is often advised to downplay or obfuscate the fact that they were laid off, lest there be discrimination.

Now, you can argue that executives shouldn't face higher consequences in exchange for wielding such immense power over the lives of those whom they employ. If so, I ask that you say it plainly and don't hide behind a feigned guise of concern for people who live in a world where they have no choice but to work for corporations or go without a roof over their heads and basic needs met.

It's fine if you want to defend that, but don't act like people are just making a deliberate choice. This is a choice society has made for them, and one the wealthiest perpetuate.


To me what you're upset about basically sounds like, "People who have more money/power/etc have it easier than those who don't."

Yes, yes, they do. So what?

All else being equal, greater wealth generally brings greater ease and comfort. A billionaire’s life is easier than a millionaire’s, a millionaire’s life is easier than being a middle-class Westerner, middle class living is easier than living below the poverty line, living below the poverty line in a wealthy country is easier than being poor in a developing country, and being poor in a developing country is easier than surviving as a subsistence farmer or living without shelter at all.

All else being equal, if you're a majority owner in a company, you're going to get away with a lot more than if you are a smaller owner, or a non-owner, or an employee, or a customer. All else being equal, if you're a general in the military, you're going to have more power and more leeway than if you're a lieutenant or a private.

Etc etc.

I fail to see what is wrong with this.


Potentially, universal human rights is what's wrong with it. Much depends on what "get away with" and "leeway" actually mean. There's a difference between owning a jacuzzi and owning a high court judge. Between these there's a gray area of things people vaguely disapprove of, and sometimes it turns out that they're decided to be illegal.


Sure, we're all in agreement that one should not be able to buy a judge, or violate human rights. But that's not what we're talking about.

What no_wizard and others in this thread are upset about is the owner/leadership of a company firing employees from that company. no_wizard goes so far as to suggest that that's "entitled" behavior.

IMO he has it exactly backwards.

We have at-will employment in 49 out of 50 states for a reason. You're adults entering into a mutually agreed upon contract where you trade money for services rendered. Your company is not your parent/nanny/caretaker who owes you continued employment and predictability in life. And vice versa, if you are a company owner, your employees are not your slaves who owe you work or continued employment.

Employees have the freedom to quit at any time, and owners have the freedom to fire them at any time. Both of these actions can adversely affect the other party, but that's life. People are free to do what they want with their own companies and their own availability as employees, and just because we would prefer them to continue giving us money or employment doesn't mean we are owed that. Neither quitting nor firing is entitled.

What is entitled is the belief that you are somehow owed your job (or vice versa, that you are owed continued tenure by your employees), and that for them to cancel the at-will contract when they no longer want it is worthy of punishment.


>What no_wizard and others in this thread are upset about is the owner/leadership of a company firing employees from that company. no_wizard goes so far as to suggest that that's "entitled" behavior.

It is entitled behavior. In the very literal sense of the word entitled. They wouldn't have such power to affect so many people unless they had a form of entitlement.

>We have at-will employment in 49 out of 50 states for a reason. You're adults entering into a mutually agreed upon contract where you trade money for services rendered. Your company is not your parent/nanny/caretaker who owes you continued employment and predictability in life. And vice versa, if you are a company owner, your employees are not your slaves who owe you work or continued employment.

We don't have at-will employment because workers decided it's what's best for the arrangement. There have been systematic efforts by businesses lobbying politicians. It's a well-documented history. At-will employment overwhelmingly benefits employers, not employees. It's not an equal relationship. The reason at-will employment is so prevalent is that it undermines unions, not that it's the best and most equitable arrangement between employers and employees.

>What is entitled is the belief that you are somehow owed your job (or vice versa, that you are owed continued tenure by your employees), and that for them to cancel the at-will contract when they no longer want it is worthy of punishment.

Who said anything about being owed a job? What I'm saying is there is a lack of consequences for executives and, by extension, companies.

Triggering a wealth tax would be one. Companies that are profitable laying off thousands of people being required to pay fairer severances would be another form of consequence.

The specifics of all that can be debated, what I'm saying head on though is there should be higher consequences for their stupidity and inability to deploy resources effectively.


> The reason we have at will employment so prevalent is because it undermines unions. Its not because its the best and most equitable arrangement between employers and employees

At-will employment came about in the 1870s and predates major union battles and influence. It's part of American culture, which treats adults like adults who are able to make decisions on their own. I support it as both an employer and an employee. It's a big reason there's so much mobility in the tech industry and in other areas with high-skilled employees.

In many places that don't have at-will employment, employees get stronger firing protections, sure, but it's also common that they have to give longer notice before quitting, can be subject to more stringent non-competes, etc.

I don't want any of that. People should be able to quit, and companies should be able to fire, and the consequences for the other party are not either party's responsibility. As a business owner, it's your company, and you need to be prepared for employees to quit. As an employee, it's your life, and you need to be financially prepared to lose your job at all times.

I absolutely do not want a nanny market that forces private citizens to be responsible for each other. That's what the government is for. We already made this mistake tying healthcare to businesses and employment. It's a huge mistake.

If you want to benefit more from at-will employment, you can learn more and get a higher skilled job with more bargaining power. Or you can take the massive risk to start and own your own company. If you don't want to do that, then that's your choice.

---

> What I'm saying is there is a lack of consequences for executives and by extension companies.

There are consequences. If you mismanage your company, it will fail. This happens tens of thousands of times per day in America.

Similarly, if you're an adult, and you go out and get a job, and you don't prepare for the fact that you might lose that job at any time for any reason, then there are consequences for you. The second you go out and sign an employment contract, you should prepare for the very obvious and predictable circumstance that your job might someday end. This is not someone else's responsibility. It's your sole responsibility.

It's not fellow citizens' or your boss' job or your company's job to nanny you and manage your financial situation in life. That's what the government is for. If you fail to prepare in life, then you can fall back to government entitlement programs, which are aptly named, because they are benefits you are entitled to.

You're absolutely not entitled to someone else who owns a company taking care of you, or worrying about what happens after you lose your job, or any other part of your private life. Just like they are not entitled to you worrying about what happens to the company if you quit.

Nor is either party entitled to some sort of weird vindictive emotion they may have to punish or hurt the other side if the employment contract is unilaterally ended. Thank god.

---

> Triggering a wealth tax would be one. Companies that are profitable laying off thousands of people being required to pay fairer severances would be another form of consequence. The specifics of all that can be debated, what I'm saying head on though is there should be higher consequences for their stupidity and inability to deploy resources effectively.

I don't see why. This just seems vindictive/punitive to me.

The market already punishes companies for mismanagement. As much as everybody likes to focus on the tiny percentage of really rich companies that do well enough to survive a big mistake, the vast majority of founders and CEOs never get to that level. They lose their companies and possibly their shirts before that ever happens.

If you start a company and you get through the gauntlet and you make it successful enough and rich enough that you can employ thousands of people, that's great. It was your competence that created those jobs, and your incompetence (or straightforward market forces) can destroy them, and that's that. If other people don't like it, they can go work somewhere else.

It's just very hard for me to understand this alternative perspective you're promoting, which imo fails to treat adults as adults, and is more akin to a nanny situation. I'm very grateful to live in the US where the game is played differently, and the last thing I want is for it to turn into something resembling Europe.

I like individual responsibility. I like treating adults like adults. I like the fact that the sky is the limit and the ceiling is high for the most skilled and ambitious people. And hell, I also like having a high floor. I just think that floor needs to come from the government and taxes.

What I don't like is this entitled idea that one's fellow citizens need to be their nannies, need to be punished, or need to be held down, just because they're successful.


> If you make bad enough decisions, your customers leave, your company dies, and/or you are fired by the board.

Not true. Buffett's written a lot of great stuff on this subject.


The number of famous CEOs who've hopped around from company to company despite being terrible at their job is probably in the dozens. The number of founders/CEOs whose companies have died is demonstrably in the millions.

Also, there are plenty of regular employees who suck at their jobs and yet manage to hold onto them, get promoted, get new and lucrative job offers, etc.


This is a super disingenuous take. He was very obviously making a specific point, not trying to express a perspective on the value of humanity.


I understand he’s making a technical point about efficiency, but language isn't neutral and I think it betrays something deeper. It's such a glib and shallow point too that I think it should be called out since he has a track record of saying some incredibly shallow things about AI, people, politics, and everything really.


The meaning of a message is what has been understood.


Can you please make your substantive points without being snarky or condescending? Your comment would be fine without that last bit.

https://news.ycombinator.com/newsguidelines.html


The meaning of a message is what is intended + communicated, assuming those intentions were communicated clearly.

Willfully interpreting otherwise (especially uncharitably so) is the very definition of being disingenuous, which is pretending to not know what was really meant.


I disagree: if a message is open to such disingenuous interpretations, then its meaning has not been formulated clearly enough. I use the rule: (1) say what you will communicate, (2) communicate, (3) say what you have communicated; also the six W's...


No one communicates that way. It's not practical. Almost all expressions can be uncharitably interpreted by a listener who doesn't like you, and thus has a motive to quote your sentences and disingenuously pretend you're saying something much more dastardly than you clearly intended.


Their own coding agent and models, marketing, tons of UI customizations, etc.

