Hacker News | impulser_'s comments

It's not even on par with Sonnet. It's on par with open source models, and it's not even open source and sits behind a private preview API.

Might as well not release anything.


So they are only giving access to their smartest model to corporations.

You think these AI companies are really going to give AGI access to everyone? Think again.

We better fucking hope open source wins, because we aren't getting access if it doesn't.


This story has been played out numerous times already. Anthropic (or any frontier lab) has a new model with SOTA results. It pretends like it's Christ incarnate and represents the end of the world as we know it. Gates its release to drum up excitement and mystique.

Then the next lab catches up and releases it more broadly.

Then later the open weights model is released.

The only way this type of technology is going to be gated "to only corporations" is if we continue on this exponential scaling trend, so that the "SOTA" model is always out of reach.


I don't know how you can read the report and the companies involved and dismiss this as hot air. What incentive does the Linux Foundation have to hype up Mythos? What about Apple?

How can you read the description of the exploits and be like "yeah that's nbd?"

And the only reason OSS has ever caught up is because they simply distill Claude or GPT. The day the big players make it hard to distill (like Anthropic is doing here), OSS is cooked.

And that's a good thing, why would you want random skiddie hackers to have access to a cyber super weapon?


No, that’s a terrible thing and random skiddie hackers absolutely should. This is only a temporary state of insecurity as these vulnerability scanners come online.

If this stuff is open source and not gate kept, it will be standard practice to just run some LLM security analysis on every commit and software will no longer be vulnerable to these classes of attacks.
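For concreteness, here's a minimal sketch of what per-commit scanning could look like. The `review_diff` function and its pattern list are hypothetical stand-ins for the actual model call, just to show the wiring:

```python
import re
import subprocess

# Hypothetical stand-in for an LLM security reviewer. A real setup would
# send the diff to a model endpoint; here we only flag a few well-known
# risky patterns so the shape of the hook is visible.
RISKY_PATTERNS = {
    r"\beval\(": "dynamic code execution",
    r"\bos\.system\(": "shell command injection risk",
    r"pickle\.loads\(": "unsafe deserialization",
}

def review_diff(diff: str) -> list[str]:
    """Return human-readable findings for added lines in a unified diff."""
    findings = []
    for line in diff.splitlines():
        if not line.startswith("+"):      # only inspect added code
            continue
        for pattern, issue in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"{issue}: {line.lstrip('+').strip()}")
    return findings

def review_head_commit() -> list[str]:
    """Review the most recent commit; suitable for a post-commit hook."""
    diff = subprocess.run(
        ["git", "show", "--format=", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return review_diff(diff)
```

A real hook would replace the regex table with a model call and fail the commit when findings come back, but the per-commit plumbing stays the same.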


Your "just a temporary state of insecurity" results in literal dead bodies on the ground unless defenders have a chance to front-run.

It also took many years to put capable computers in the hands of the general public, but it eventually happened. I believe the same will happen here, we're just in the Mainframe era of AI.

Yeah, but computers don't replace you. They are building AI to replace you. You think if these companies eventually achieve AGI that they are going to give you access to it? They are already gatekeeping an LLM because they don't trust you with it.

And the Linux Foundation.

Would you hope that it would be released today, so that evil actors could invest a few million to search for 0days across popular open-source repos?

of course they're not giving access to everyone.

they'd rather make billions directly from corporations than give access to average people who might get a chance out of poverty (but also to bad actors using it to do even more bad things)


Anthropic's definition of "safe AI" precludes open-source AI. This is clear if you listen to what Dario Amodei says in interviews. I think he might even prefer OpenAI's closed source models winning to having open-source AI (because at least in the former case it's not a free-for-all).

For those who haven't been keeping up with DigitalOcean. They are done with the "Developer Cloud" and now are trying to become another enterprise similar to AWS, GCP and Azure.

They were very debt heavy before the AI boom, and this is just going to make it worse, because I'm assuming they aren't raising 800m to pay off that debt.

You should definitely take that into account if you are or plan on using them.

This is not a well-run company.


I think this post should be directed to every Typescript developer.

I think a lot of this is just Typescript developers. I bet if you removed them from the equation, most of the problems he's writing about go away. Typescript developers didn't even understand what React was doing without an agent; now they are just one-shot prompting features, web apps, CLIs, desktop apps and spitting it out to the world.

The prime example of this is literally Anthropic. They are pumping out features, apps, and CLIs, and EVERY single one of them releases broken.


This is my theory. They don't want other harnesses to use this because it costs them more. I don't know exactly how OpenCode works, but I'm assuming when people are using this plugin they are mostly using Opus for everything while Claude Code really only uses Opus for writing the actual code. It uses Haiku and Sonnet for almost all of the tasks outside of writing code.

So it's hard for them to control and understand the costs of subscriptions if people are using them on different harnesses that do things they have no control over.


you can choose your own model in claude code and it generally defaults to Opus


Yeah, but that's just the model the main agent uses. The subagents aren't Opus. They are Haiku and Sonnet. Most of the token heavy work is offloaded to subagents because of this.
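Some back-of-the-envelope arithmetic shows why that routing matters to them. The per-million-token prices below are illustrative assumptions for this sketch, not Anthropic's actual rates:

```python
# Illustrative per-million-token prices (assumed for this sketch, not official rates).
PRICE_PER_MTOK = {"opus": 75.0, "sonnet": 15.0, "haiku": 5.0}

def session_cost(token_split: dict[str, float], total_mtok: float) -> float:
    """Cost of a session, given the fraction of tokens routed to each model."""
    return sum(
        PRICE_PER_MTOK[model] * frac * total_mtok
        for model, frac in token_split.items()
    )

# A harness that sends everything to Opus:
all_opus = session_cost({"opus": 1.0}, total_mtok=2.0)   # 150.0

# Opus only for writing code, cheaper models for the token-heavy rest:
routed = session_cost({"opus": 0.3, "sonnet": 0.3, "haiku": 0.4}, total_mtok=2.0)
# 45.0 + 9.0 + 4.0 = 58.0 -- roughly a third of the all-Opus cost
```

With made-up but plausible numbers like these, a harness that sends every token to the flagship model costs the provider several times what the first-party routing does, for the same flat subscription.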


He runs the 2nd most valuable company in the world, and it's the most profitable company ever.

Giving him 700m in stock to keep him around is worth it for both the investors and employees.


Isn't Google famous for the investors and the employees both being entirely powerless?


They spend 25b a year on stock compensation. Giving your CEO ~230m of that is probably okay.


For the people that don't understand how they got a deal with the same redlines: it's probably because OpenAI agreed to not question them. The safeguards are there, both parties agree, now fuck off and let us use your model how we see fit.

Anthropic probably made the mistake of questioning the military's activities related to Claude after the Venezuela mission and wanted reassurance that the model wouldn't be used for the redlines. The military didn't like this and told them "we aren't using your models unless you agree to not question us," and then the back and forth started.

In the end, we will probably have both OpenAI and Anthropic providing AI to the military and that's a good thing. I don't think they will keep the supply chain risk on Anthropic for more than a week.


Anthropic vs OpenAI will probably be The Machine vs Samaritan

(Person Of Interest for those who haven't seen it, watched it a decade ago and it's actually quite surprising how on point it ended up being)


> I don't think they will keep the supply chain risk on Anthropic for more than a week.

Why? It is in the admin's interest to absolutely destroy Anthropic. Make them an example.


Because once Amazon, Google, Microsoft, and all their contractors call to tell them they need Anthropic they will drop it.


The worst part of this is that if they do remove Claude, and probably GPT and Gemini soon after because of outcry, we are going to be left with our military using fucking Grok as their model, a model that's not even on par with open source Chinese models.


I think the warfighters are a distraction, a system could trivially say that there is a human in the loop for LLM-derived kill lists. My money is that the mass domestic surveillance is the true sticking point, because it’s exactly what you would use a LLM for today.


Apparently part of this whole battle is because Grok isn't up to par to be an acceptable alternative.


As far as we can tell, OpenAI and Google seem to be ok with it and not resisting. It would be easier for Anthropic's cause if they did.


It's better than actively aiding them. Make them struggle at every turn.


Are you Chinese? If not, I think you should prefer the people defending you to have the best tools to do so.


This of course raises the question on whether as an American I have more to fear from the Chinese government or the US one.. given everything happening in the Executive Branch here, that’s a disappointingly hard question to answer.


I think that's an easy question to answer, but obviously you don't fear the Chinese government because you're not a Chinese citizen. You can openly talk about your disagreements with the US government; that's not a right the Chinese have.


Can you? By ICE agents' own admission on video, they have been adding people to "domestic terrorist" watchlists (just for verbally dissenting, making recordings with a phone, etc) which are then used by Palantir to disappear people directly from their homes - even US citizens. Palantir, the CEO of which gleefully admits to knowing many Nazis and seems to get off on the fact that his software "kills people" (direct quote).


>that’s a disappointingly hard question to answer

It shouldn't be. The US government is already sending armed and masked thugs to shoot political dissidents dead or sending them to concentration camps, threatening state governments and private companies to comply with suppressing free speech and oppressing undesirables, and openly discussing using emergency powers to suspend the next election.

What exactly is the commensurate threat from China? The real tacit threat, not abstract fears like "TikTok is Chinese mind control." What can China actually do to you, an American, that the US isn't already more capable of doing, and more likely to do?

To me it isn't even a question. Even comparing worst case scenarios - open war with China versus civil war within the US - the latter is more of a threat to citizens of the US than the former unless the nukes drop. And even then, the only nation to ever use nuclear weapons in warfare is the US.


This is the correct take. It may be a different question for people living within China, but for Americans, the US Gov is a direct threat to their lives.


If the American military was focused on defending the United States, it would be a very different beast. The 21st Century American military is a tool for transferring wealth from the public to influential parties, and for inflicting destruction on non-peer nations who pose obstacles to influential parties interests. Defending the United States against various often-invoked hobgoblins is at best a very distant concern, closer to pure lip service than reality.


> Are you Chinese? If not, I think you should prefer the people defending you to have the best tools to do so.

They already have the best and most expensive toys in the world, and they mostly seem to be waging aggressive wars with them. Perhaps if the toys weren't so shiny and didn't make it all so one-sided, they wouldn't?


but the "people defending you" have been committing clear and obvious war crimes?


I'm a natural-born American (many generations back) and firmly believe that if we ever get into a hot war with China, it will be because of American provocation, not Chinese.


The Department of War under Trump has proven itself to not be interested in defending you, the American people. All they’ve done so far is aggression against foreign supposed adversaries.


I am American born and raised and I consider our current government mass murderers who I trust as much as I would have the Nazis. It was a good thing that the Nazis did not get the a-bomb before us, and the same principle applies here. The fewer magnifiers of their power the better. They are a scourge on human rights, and the world.


Yea but every warfighter will get a waifu


Grok in unhinged mode piloting an Apache, what could go wrong.


Yeah, Vercel should have done this with NextJS a while ago. There is a reason why quite literally every other framework uses Vite: it's amazing, easy to use, and easy to extend.

Everything just becomes a plugin.


It's greed, now that they have all the data and infrastructure they are pulling up the ladder.

Why do you think not a single one of these labs has released an open source model distilled from its own SOTA model?

They are all preaching they want to provide AI to everyone, wouldn't this be the best way to do this? Use your SOTA model to produce a lesser but open source model?

