
Agree, but I guess Opus 4.6 is 10x larger, rather than Chinese models being 10x more efficient. GPT-4 was reportedly already a 1.6T model, and Llama 4 Behemoth is also much bigger than the Chinese open-weight models. Chinese tech companies are short of frontier GPUs, but they have innovated a lot on inference efficiency (DeepSeek CEO Liang Wenfeng himself appears in the author lists of the related papers).



No, Opus can't be 10x larger than the Chinese models.

If Opus were 10x larger than the Chinese models, then Google Vertex/Amazon Bedrock would serve it 10x slower than DeepSeek/Kimi/etc.

That's not the case. They're within the same order of magnitude in speed.


They serve it about 2x slower. So it must have about 2x the active parameters.

It could still be 10x larger overall, though that would not make it 10x more expensive.
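
A back-of-the-envelope version of that inference (Python; every figure below is made up for illustration, not a measured number):

    # At decode time a memory-bandwidth-bound model streams its *active*
    # parameters once per token, so tokens/sec scales roughly with
    # 1 / active params -- independent of total parameter count.

    def implied_active_params(ref_active_b: float,
                              ref_tok_per_s: float,
                              obs_tok_per_s: float) -> float:
        """Infer active params from a throughput ratio vs a reference model."""
        return ref_active_b * (ref_tok_per_s / obs_tok_per_s)

    # Hypothetical: an open MoE with 35B active params decodes at 60 tok/s;
    # the closed model decodes at 30 tok/s on comparable hardware.
    print(implied_active_params(35, 60, 30))  # 70.0 -> ~2x the active params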


Yes, but I highly doubt they would increase sparsity much vs. the Chinese models.

That's how you get Llama 4.

Pretty much every major lab settled on ~3-5% sparsity for a reason.
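
For reference, the published numbers for the big open-weight MoEs land right around that band, taking sparsity to mean active/total parameters (counts below are from the public model cards):

    # Sparsity = active parameters / total parameters.
    models = {
        "DeepSeek-V3": (37, 671),   # 37B active of 671B total
        "Kimi K2":     (32, 1000),  # 32B active of ~1T total
    }
    for name, (active_b, total_b) in models.items():
        print(f"{name}: {active_b / total_b:.1%} active")
    # DeepSeek-V3: 5.5% active
    # Kimi K2: 3.2% active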


I agree that Opus is almost certainly nowhere near that big, but AWS throughput might not be a great way to measure model size.

According to OpenRouter, AWS serves the latest Opus and Sonnet at roughly the same speed. It's likely that they simply allocate hardware differently per model.


The numbers look about right. Opus 4.5 is about 1.5x the size of Sonnet 4.6, and Opus 4/4.1 is about 5x the size of Sonnet 4.5/4.6.

Note that Opus 4.5 is about 1/3 the size of Opus 4/4.1 (and 1/3 the price in the API).
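
Those ratios are at least internally consistent (the sizes are the parent's estimates, not published figures):

    # If Opus 4.5 ~ 1.5x Sonnet and Opus 4/4.1 ~ 5x Sonnet, then:
    print(1.5 / 5)  # 0.3 -- roughly 1/3, matching the ~3x API price cut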


My understanding is that for an MoE with a top-k architecture, total model size doesn't really matter for compute: you can have ten 32GB experts or a thousand, and if only 2-3 of them are active per token, your inference workload is identical; only your storage traffic increases.

Which seems to be the case, given how hungry the industry has been for hard drives lately.
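
A minimal sketch of the top-k argument (shapes and expert counts are hypothetical):

    # Per-token MoE compute depends on k (experts routed per token) and the
    # size of each expert, not on how many experts exist in total.

    def moe_flops_per_token(d_model: int, d_ff: int, k: int) -> int:
        """One expert = up-projection + down-projection, ~2 FLOPs per weight."""
        per_expert = 2 * (2 * d_model * d_ff)
        return k * per_expert

    # 10 experts or 1,000 experts: identical compute as long as k is fixed.
    print(moe_flops_per_token(d_model=8192, d_ff=32768, k=2))

    # What grows with the expert count is the weight footprint you must store
    # (and page in), i.e. the disk pressure mentioned above.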


GPT-4 was likely much larger than any of the SOTA models we have today, at least in terms of active parameters. Sparse models are the new standard, and the price drop that came with Opus 4.5 made it fairly obvious that Anthropic are not an exception.

Man, I miss GPT-4.5.

Wasn't GPT-4 the model that was so expensive for OpenAI to run that they basically retired it entirely in favor of later models that were much stronger but cheaper for them to run?


