Hacker News | 0xbadcafebee's comments

It's raised in the sense that some people made a pinky promise to give them cash. But those people also don't have the money and have to raise it from other places. It's largely SoftBank, Oracle, Microsoft and Nvidia, none of whom have big piggybanks full of hundreds of billions. They take out loans based on the promise of making cash to pay for them, and that cash is based on people wanting to use OpenAI. So it's kind of a big financial circle jerk (debt, SPVs, loans from Nvidia at high interest rates, etc.).

Investors do not care about the product, the users, etc. They care about cash. There are lots of ways to make cash that don't involve having a good product. But if you commit to spending a trillion dollars on hardware, then borrow hundreds of billions in the short term, and it turns out there's no way to recoup the cost, the investors go looking for better returns. This would've worked back in the old days of a bull market, angels looking for the next whale (with "modest" $5BN investments), and startups with no rivals. But in a bear market with multiple competitors trading on a commodity? Lol. Finally the bubble bursts.

Neither of these is really the right way to code with AI. There are two basic ways to code with AI that work:

1. Autocomplete. Pretty simple; you only accept auto-completes you actually want, as you manually write code.

2. Software engineering design and implementation workflow. The AI makes a plan with tasks and commits those plans to files. It starts sub-agents to tackle the tasks. The subagents create tests to validate the code, then write code to pass the tests. When the subagents finish their tasks, the AI agent reviews the work to see if it's accurate. Multiple passes find more bugs and fix them in a loop, until there is nothing left to fix.
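That loop is easy to sketch in code. This is purely illustrative: every function name here is a made-up stub standing in for a real LLM call, not any actual agent framework's API.

```python
# Hypothetical sketch of the plan -> subagent -> review loop described above.
# Stub functions stand in for real model calls; all names are invented.

def make_plan(goal):
    # A real agent would ask the model to break the goal into tasks
    # and commit the plan to files.
    return [f"{goal}: task {i}" for i in range(1, 4)]

def run_subagent(task):
    # A subagent writes tests first, then code to pass them.
    return {"task": task,
            "tests": f"tests for ({task})",
            "code": f"code passing ({task})",
            "bugs": 1}  # pretend the first pass always leaves one bug

def review(results):
    # The top-level agent reviews the work; returns remaining bug count.
    return sum(r["bugs"] for r in results)

def fix_pass(results):
    # Each review pass fixes the bugs it found.
    for r in results:
        r["bugs"] = 0

def engineer(goal, max_passes=5):
    plan = make_plan(goal)
    results = [run_subagent(t) for t in plan]
    for _ in range(max_passes):  # loop until nothing is left to fix
        if review(results) == 0:
            break
        fix_pass(results)
    return results

done = engineer("add login feature")
print(len(done), review(done))  # 3 tasks, 0 bugs remaining
```

The point of the structure is the outer loop: work isn't considered finished when the subagents return, only when a review pass finds nothing left to fix.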

I'm amazed that nobody thinks the latter is a real thing that works, when Claude fucking Code has been produced this way for like 6 months. There are tens of thousands of people using this completely vibe-coded software. It's not a hoax.


#2 does not negate my steering suggestion, so I'm not sure how you can conclude nobody thinks it's a real thing that works

also Claude Code is notoriously poorly built, so I wouldn't tout it as SOTA


I have worked at companies from startups to the Fortune 500. They all have garbage code. Who cares? It works anyway. The world is held together with duct tape, and it's unreasonably effective. I don't believe "code quality" can be measured by how it looks. The only meaningful measure of its quality is whether it runs and solves a user's problem.

Get the best programmer in the world. Have them write the most perfect source code in the world. In 10 years, it has to be completely rewritten. Why? The designer chose some advanced design that is conceptually superior, but did not survive the normal and constant churn of advancing technology. Compare that to some junior sysadmin writing a solution in Perl 5.x. It works 30 years later. Everyone would say the Perl solution was of inferior quality, yet it provides 3x more value.


I hear you about "it just works" mattering infinitely more than some arbitrary code quality metric

but I'm not judging Claude Code by how it looks. I kinda like the aesthetics. I'm talking about how slow, resource-hungry, and finicky/flickery it is. It's objectively sloppy.


> when Claude fucking Code has been produced this way for like 6 months

And people can look at the results (illegally) because that whole bunch of code has been leaked. Let's just say it's not looking good. These are the folks who actually made and trained Claude to begin with, they know the model better than anyone else, and the code is still absolute garbage tier by sensible human-written code quality standards.


Yet it works anyway. What does that say about human code quality standards?

Human code quality standards are built around the knowledge that humans prefer polished products that work consistently. You can get away without code quality in the short term, especially if you have no real competitors - to a lot of people, there just aren't any models other than Anthropic's which are particularly useful for software development. But in the long term it gets you into a poor quality trap that's often impossible to escape without starting over from scratch.

(Anthropic, of course, believes that advances in AI capability over the next few years will so radically reshape society that there's no point worrying about the long term.)


A car saves you time in getting to and from the store. But if you don't learn to drive, and just hop in the car and press things, you're going to crash, and that definitely won't save you time. Cars are also more expensive than walking or a bike, yet people still buy them.

I already know how to drive stick (trad coding), I don’t feel like I’m gaining much by switching to automatic transmission.

Yeah that's not the difference, lol. With AI coding you can get the same work done in an order of magnitude less time, without even knowing how to program.

The only comparison I can come up with is 3D printers, but even that's not as ridiculously fast and easy as AI coding. An average person can ask an agent to write a program, in any popular language, and it'll do it, and it'll work. We still need people intelligent enough to steer the agent, but you do not need to edit a single line of code anymore.


They're more compelling to the HN echo chamber. I have never heard a normal person say "I was asking Claude the other day...", but they do use ChatGPT.

Based on the limited public information out there, the AI chat tools with the most users are ChatGPT, Meta, Gemini, Alibaba, Baidu, Copilot, and Grok. Anthropic is nowhere near the top.


If you spend $200/month on Anthropic, that's $2,400/year. Buy a fast GPU or Strix Halo machine, run the AI locally, and after a year you're saving money.
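The break-even arithmetic is easy to check. A sketch, where the hardware and power figures are rough assumptions for illustration, not real quotes:

```python
# Rough break-even: a $200/mo Claude subscription vs. a local inference box.
# The hardware and electricity figures below are assumptions, not quotes.

subscription_per_month = 200   # $200/mo plan from the comment above
hardware_cost = 2000           # e.g. a Strix Halo machine (assumed price)
power_cost_per_month = 30      # assumed electricity for local inference

saving_per_month = subscription_per_month - power_cost_per_month
months_to_break_even = hardware_cost / saving_per_month

print(round(months_to_break_even, 1))  # ~11.8 months
```

Under those assumptions the box pays for itself in about a year; a pricier machine or heavier power draw pushes the break-even point out proportionally.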

The open models are still far behind GPT 5.4 and Claude if you're using them for building software.

I don't think people realize how irrational that argument is (that SOTA is better, so you have to use SOTA).

Open weights will always trail SOTA. Forever. So let's say they continue to get better every year. In 100 years, the open weight model will be 100x better than today. But the SOTA model will be 101x better. And still, people will make this argument that you should pay a premium for SOTA. Despite the open weights being 100x better than what we have today.

The open weights today are better than the SOTA models from a year ago. Yet people were using the SOTA models for coding a year ago. If people used SOTA models a year ago, then it was good enough, right? So why isn't the same (or better) good enough now?

The answer is: it is good enough. But people are irrationally afraid of missing out (FOMO). They're not really using their brains. They're letting fear lead their decisions. They're afraid "something bad" will happen if they don't use the absolute latest model. Despite the repeatable, objective benchmarks telling us all that open weights are perfectly capable of doing real work today, the fear is that we're missing out on something better. So people throw away their money and struggle with rate-limits because of their fear.


About a year behind, TBQH. Newer Mixture-of-Experts models are comparable to a slightly older Claude Sonnet, if you don't mind the (lack of) speed. Some benchmarks say they're competitive with the frontier models right now for certain tasks.

I'm not sure how much I trust those benchmarks; I have a feeling everyone is playing up to them in some way. Still, if you're willing to accept the latency, they're definitely usable.

Of course everyone has realized this, so the hardware you need to run them is a little bit on the expensive side right this minute.

CPU manufacturers are working on improvements so that you can more practically run models on regular CPU+RAM (it's already possible with llama.cpp, just even slower).


If you want to run better models, you need one of the more expensive GPUs, like an H100 or such ($40k). I don't think any of the smaller models are remotely comparable to Anthropic's.

The GPU also costs around $500-$1,000 a year in electricity, and even then you won't be able to run a model of as good quality as Anthropic's.

It's also hard to justify, since who knows how quickly it will be outdated; maybe soon you'll need a Blackwell chip (like a $100k PC; check out the NVIDIA DGX Station) to run a decent model.

... It'll take a lot more than a year to pay back hardware capable of running openclaw with any sort of reasonable performance.

Or can you report that you've had good luck with a Strix Halo or local GPU for less than $40k up-front costs?


They're not anti-renewables as a bet, they're anti-renewables strategically. If you like going to war, you can power your warfighting apparatus much easier with a gas tank than a battery. If you want better defense, you don't depend on hostile nations for your energy needs. The US wants to double down on oil because it likes to fight wars and it's paranoid about defense.

This would be more believable to skeptics if it wasn't all pro-arguments and theory. If you don't cover the cases in which it doesn't work, or at least mention the arguments against, it reads as propaganda.

The thing that rings most false is the economics. A 480W solar panel is like $90 on sale; they're dirt cheap. A dozen of them is $1,080. But an installed solar+battery system tied to the grid is more like $30,000, and that doesn't cover the cost of replacing damaged equipment (lightning is a thing). That's just one home, using certified equipment.
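A quick sanity check of those numbers (figures taken straight from the comment above; the ratio between bare hardware and an installed system is the point):

```python
# Bare panel hardware vs. installed-system cost, using the figures above.

panel_price = 90          # one 480W panel on sale
panels = 12

panel_cost = panels * panel_price    # bare hardware for a dozen panels
installed_cost = 30_000              # installed, grid-tied solar + battery

print(panel_cost)                             # 1080
print(round(installed_cost / panel_cost, 1))  # ~27.8x the bare panel cost
```

Most of the installed price is labor, inverters, batteries, permits, and grid interconnect, not the panels themselves, which is why "panels are cheap" doesn't translate directly into "solar is cheap."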

For nation-states to do solar and battery, they need land, capital, and skilled labor that most nations don't have. Then there's the fact that not all nations get enough sun, or the fact that you must have a stable backup supply (not just for "cloudy days", but also emergencies and national defense), and multiple sources of equipment so your entire nation's energy isn't dependent on one country (China). Only about 10-20 nations on earth could switch to renewables for the majority of their energy in the next 10 years.


Or you are somewhere in Africa and have no electricity anyway so you start on something renewable.

African countries are some of the poorest on the planet. It takes a very large capital investment, in addition to skilled labor, to energize a nation. They often have other bigger issues to deal with. It's often way cheaper to pay for humans to mine coal, produce charcoal, extract oil (not uncommonly a private company - or a country like China - extracting the resources and making a big profit, or giving a loan with a huge interest payment).

But let's look at a single off-grid DIY example. First you need the expertise to hook it up. If you have the skill, it still isn't cheap if your economy is weak. It requires saving money for a capital purchase, something people all around the world struggle with. And when there are payment programs, they're often exploitative.

It's not like people in Africa aren't aware solar power exists. If it was cheap and easy, everyone would have done it already.


I codify all my AI install/setup/running junk (https://codeberg.org/mutablecc/ai-agent-coding) with Makefiles. You can make DRY Makefiles real easy, reuse them, override settings, without the fancy stuff in the author's post. The more you build up a reusable Makefile, the easier everything gets. But at the same time: don't be afraid to write a one-off, three-line, do-almost-nothing Makefile. If it's so simple it seems stupid, it's probably just right.

I use mise, but its conclusion that everybody needs to write an aqua plugin now is annoying. They need to make plugin-making a lot easier.

What conclusion do you mean? Aqua is just one of the many backends it supports.

For example there's also the GitHub backend which lets you install binaries from releases, no plugin needed at all.


https://github.com/mise-plugins <-- First they say "Try to get your tool into aqua or see if it can be installed with the github backend, then it may be added to the mise registry", and then later they say "The rest of this doc is outdated and does not reflect the current state of preferring aqua/ubi.".

Overall there are too many ways to install things, and it's not easy to add any of them. Asdf plugins were easy, but insecure (which could be fixed, but whatever). Everything else requires more research because it's more technical.


> it's not easy to add any of them

For most of them there's nothing to add though, you simply publish tools on GitHub/Cargo/etc. and mise will know how to install them.

https://mise.jdx.dev/registry.html#backends has a bit more current info.


Only if they have a plugin that describes how to install them. Many popular tools are much more complex to install and set up than just downloading a binary and making it executable. For those you need to create a plugin for mise to be able to install them. Luckily, very often some other generous person has gone through all the trouble of learning how to make the plugin, going to the official repos, making a PR, and finally getting it merged. But if somebody hasn't done that already, it's painful (more painful than, say, an asdf plugin). It depends on the language, on the tool and system requirements, etc. Overall it's kind of a mess. Mise leaves you with the trouble of figuring all that out, rather than making some kind of convenience function to get the process started easily.

> Many popular tools are much more complex to install

I'm curious which dev tools you're using aren't installable with standard mise backends. 99% of dev tools I use don't require a plugin.

> (more painful than, say, an asdf plugin)

You can still use asdf plugins, I could use mise to install an asdf plugin right now with one line `mise use asdf:raimon49/asdf-hurl`. The mise registry is just a convenient list of aliases, even if it doesn't accept new asdf plugins, you don't need it to.

As Larry Wall said, "make easy things easy and hard things possible".

