Is that right? I think you can serve tokens without training the next models. It would be bad strategy, but it would work. So the important question is: are they covering their operating expenditure? If they are, the business has legs (and it will be worth spending a lot to train the next models). If not, maybe not.
The blog post in this thread argues that even average users can now modify GPL'd code thanks to LLMs. The bigger advantage, though, is that one can use them to break open software monopolies in the first place.
A lot of such monopolies are based on proprietary formats.
If LLM swarms can build a browser (not from scratch) and a C compiler (from scratch), they can also build an LLVM backend for a bespoke architecture that only has a proprietary C compiler. They can also build replacements for Adobe software and PDF editors, debug and fix Linux driver issues, etc.
I hate to say it, but there is no sense in which Anthropic has the clearly better product than OpenAI at this point. I know Claude caught developers' hearts through the fall, but GPT5.4 is a more powerful, careful, and competent model for coding, and Codex is a far less buggy and more performant TUI. For the last 3 months I've gone back and forth between the two, and whenever I run anything written by Claude Opus 4.6 (for me or my coworkers) through Codex for review, it consistently finds severe correctness issues, to the point where I simply won't subscribe to Anthropic's product anymore.
On top of that, OpenAI provides far higher token limits. Even their $20 plan goes quite far.
If I were just building CRUD websites, Claude Code would probably be fine, and it does indeed show more "initiative" and "imagination," but I've seen it build way too many race conditions and correctness issues to trust it or the work my coworkers produce with it.
> OpenAI is struggling to monetize. They turned to showing ads in ChatGPT, something Sam Altman once called a “last resort”, while Anthropic is crushing them with the more profitable corporate customers and software engineers. Their shopping feature flopped and they shut down Sora, both supposed to be revenue drivers.
I don't think Sora was ever thought of as a "revenue driver," considering how notoriously expensive and unpredictable video generation via inference is. OpenAI is just a repeat of Uber, minus the scandals, in a different decade. Uber got itself into tons of transportation-related businesses on the assumption that it would all be viable "one day." Same thing OpenAI is doing.
I would say that once the bubble bursts, which seems likely given the geopolitical environment, OpenAI, Anthropic, and Alphabet will be the winners, with a lot of small players at the tail end. Anthropic won over programmers, and OpenAI won over everyone else. For millions of people, AI = ChatGPT, so I would bet that OpenAI can still become profitable once they cut down their expenses.
Courts may eventually be required to provide devices for the hearing impaired. At some point this becomes an accessibility issue, and once commercially viable, a necessity like a handicap ramp.
Hard disagree: it's very easy for a bot to use a credit card. Card numbers are often stolen, they're given to teenagers these days, and they can be owned by businesses and exist entirely virtually, so I don't think you can assume the use of a credit card can always be tied to a single person.
The silence of the EU on the atrocities committed by Israel and the US is abhorrent, and frankly makes me ashamed.
Especially since it doesn't even look like the US is our ally any more. But of course we will pay the economic price for their actions at best, and with our blood when Israel starts WWIII at worst.
> Standardizing the build system and toolchain needs to happen. It's a hard problem that needs to be solved.
I agree, and I also think it's never going to happen. It requires agreement on too many things that are subjective and would likely change behaviour. C++ couldn't even manage to require module names to match the file name, and that was for a new feature that would have let us figure out exports without actually opening the file…
Excalidraw has a one-click 'sloppiness' setting. We do drafts and ideation in full-sloppy mode to signal to the reader that this is not fully thought through, or a final documented decision. Once we've gotten through discussion and analysis, the final diagram is switched to 'not sloppy', and the font is changed from handwriting to a sans-serif font.
It's pretty effective at immediately communicating a 'this is a concept' message to folks. Too many people instantly jump to conclusions about diagrams: if it's written down, it must be done / fixed / formal.
Anthropic isn’t the best by any reasonable measure. They’re the best in some areas and get pwned in others.
In general, AI is much like human intelligence in that no two models are the same, just as no two people are the same. IOW, if you're a single-model shop, you might not even realize you're falling behind.
LLMs have already told you these are "known solutions," which implicitly means they are established, non-original approaches. So the key point is really on the user side: if you simply ask one more question, such as where these "known solutions" come from, the LLM will likely tell you that the formulas are attributed to Inigo Quilez.
So in my view, if you treat an LLM as a tool for retrieving knowledge or solutions, there isn't really a problem here. And honestly, the line between "knowledge" and "creation" can be quite blurry. For example, when you use Newton's Second Law (F = ma), you don't explicitly state that it comes from Isaac Newton every time—but that doesn't mean you're not respecting his contribution.
Trouble is, regulation isn't imposed by a magical deity in the sky. In a democracy, regulation must come from the very same people who, as you say, don't care, don't complain, and aren't willing to change their habits. If people don't care, won't change, and perhaps even prefer the status quo, regulation is a no-go.
There was also Spacewar! from MIT, which Nolan Bushnell turned into a standalone cabinet game. Though I think you could make a case for Pong being the first coin-op video game in the sense of a commercial game, rather than something that primarily existed in academic labs.
(Nonconsensual) genital mutilation is bad no matter who you are or what parts you have.
Also: If pain becomes a contest, we're all losers.
Also: Thank you for complaining. There is much to complain about. There's so much to complain about that we can sit in a circle and take turns complaining and everybody will probably learn something.
But are they actually profitable, or are they employing creative accounting where only part of the overhead expenses is counted against all of the inference revenue, similar to what Uber did?
> AI is here to stay. If used right, chances are it will make us all more productive. That, on the other hand, does not mean it will be a good investment.