Hacker News | pizzly's comments

Maybe not. You stated "Every country will pursue it as far as it can, and given the multipolar world we are back in, and our recent record with treaties and commitments, I do not believe there will be global alignment on risk reduction", and that is true. While Anthropic may hold off on releasing Mythos, I don't think they will for long. As long as someone somewhere in the world releases a competing model, Anthropic will be forced to release Mythos. This also assumes it's not a marketing tactic from Anthropic in the first place: build suspense before releasing it.

I would like to see more countries capable of producing frontier models. At the moment we have two in the world, but many countries are building their own national models and AI infrastructure and may join the race.

Having a multipolar world may actually result in more freedom in gaining access to frontier models.


Perhaps in the past. I think the approach now will be to vibe code multiple projects very quickly and see which one gains traction, even with a low-quality product. You will get much better feedback than from a discussion with a potential customer, who may not even know what they want or may have a false idea of what they want. You can always improve a product that has demand and abandon the ones that no one even downloads. Usage and payment are the real test of whether a product is worth doubling down on.

This might work to some degree if you can run your project by many eyeballs, but only if they aren't immediately made gun-shy by interacting with a low-quality product. A focus group environment would be good for this, but setting that up costs money.

In the end, what really matters to most people is whether it works. How it is built means nothing to them, nor should it.

The current iteration of models doesn't write clean code by itself, but future ones will. The problem, in my view, is extremely similar to agentic/vibe coding: instead of optimizing for results, you can optimize for clean code. The demand is there; clean code will lead to fewer bugs, faster-running code, and fewer tokens used (thus lower cost) when understanding the code from a fresh session. It makes sense that the first generation of vibe coding focused on results first and not clean code. Am I missing something?

AI for editing is good and has many useful use cases. Where it fails is that the tone/style of the writing gets overtaken and reads like all other AI-edited writing. The quality of the edit is good; it's just not in your style. When everyone sounds the same, there is no uniqueness. Using it to edit legal letters, software documentation, etc. are very good use cases; using it to explain your ideas in a blog, not so much.

I never considered that. When I change LLM models, it's usually for two reasons.

1. The current AI model is producing answers that do not meet my needs, so I try multiple others at the same time, and the one that produces the best answer I stick with until I have this problem again.

2. A new model is released that advertises a new capability I want to try out.

I can imagine that for many people the answers ChatGPT generates are adequate, so they never need to try another model even if better answers exist elsewhere. For people with less complex needs this is a very real stickiness. Why make the effort to try something new if the answer is adequate?

In this case, OpenAI would only f*k up if they changed the pricing significantly, added intrusive ads, or their answers became significantly worse.


One annoying thing with premade solutions is that they only do 90% of what you want; it's livable, but still doesn't quite meet your needs.

It's not just adding features that Linear already provides, but adding features and integrations that meet 100% of your needs.

The full decision-making equation is (cost of implementing it yourself + cost of maintenance, minus the 10% additional benefit of a solution that fully meets your needs) versus (cost of a preexisting solution that meets 90% of your needs). The cost of implementing it and the cost of maintenance have just gone down. Surely that will mean, on the whole, more people will choose to build in-house rather than outsource.
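The comparison above can be sketched in a few lines. All the numbers below are hypothetical assumptions chosen only to illustrate the flip, not real costs:

```python
# Sketch of the build-vs-buy comparison described above.
# All figures are illustrative assumptions, not data.

def build_cost(implementation, annual_maintenance, years, fit_benefit):
    """Total cost of building in-house, minus the value of a 100% fit."""
    return implementation + annual_maintenance * years - fit_benefit

def buy_cost(annual_subscription, years):
    """Total cost of a premade solution covering ~90% of needs."""
    return annual_subscription * years

# Before agentic coding: building is expensive.
before = build_cost(implementation=50_000, annual_maintenance=10_000,
                    years=3, fit_benefit=5_000)          # 75,000

# After: implementation and maintenance costs drop sharply.
after = build_cost(implementation=10_000, annual_maintenance=2_000,
                   years=3, fit_benefit=5_000)           # 11,000

saas = buy_cost(annual_subscription=12_000, years=3)     # 36,000

print(before > saas)  # buying used to win
print(after < saas)   # now building wins
```

With these made-up figures, the same subscription price goes from cheaper than building to three times more expensive, which is the dynamic the argument relies on.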

Thus demand for premade solutions will go down, and SaaS providers won't be able to increase their prices, as that would push even more people to implement it themselves. The cost of producing software will continue to drop due to agentic coding, and maintenance costs will drop as well due to maintenance coding agents. More people will choose their own custom solutions, and so on. It's very possible we are at the beginning of the end for SaaS companies.


I think even with vibe coding people definitely still underestimate the stuff mentioned in this comment about IaaS:

> server operations, storage, scalability, backups, security, compliance, etc

https://news.ycombinator.com/item?id=47097450


Because the person filming brought the camera with approved government ID and got the camera's serial number recorded in the government database. The camera then embeds its serial number into the video using hidden watermarks. Just joking... for now.


The bat and rat were identified as non visa holders by ICE.


What about the 'openness' of AI development? By 'openness' I mean research papers, spying, former employees, etc. Wouldn't that mean that, a few months to years after AGI is discovered, the other country would also discover AGI by obtaining the knowledge from the other side? Similar to how the Soviets did their first nuclear test less than 5 years after the US did theirs, due in large part to spying. The point here is: wouldn't the country that spends less on AI development actually have an advantage over the one that spends more, as it will obtain that knowledge quickly for less money? Also, the time of discovery of AGI may be less important than which country first implements the benefits of AGI.


This is actually an interesting question! If you look at OpenAI's change in behavior, I think that's going to be the pattern for venture backed AI: piggyback on open science, then burn a bunch of capital on closed tech to gain a temporary advantage, and hope to parlay that into something bigger.

I believe China's open source focus is in part a play for legitimacy, and part a way to devalue our closed AI efforts. They want to be the dominant power not just by force but by mandate. They're also uniquely well positioned to take full advantage of AI proliferation as I mentioned, so in this case a rising tide raises some boats more than others.


Is there even a clear definition of AGI? How will one side or the other know who is the "winner"?


The AI labs have settled on a definition of AGI: "AI that can do the vast majority of economically valuable work at or above the level of humans."

They don't heavily advertise this definition because investors expect AGI to mean the computer from Her, and it's not gonna be that. They want to be able to tell investors without lying that they're on target for AGI in 3 years, and they're riding on pre-existing expectations.


If there is a downturn in AI use due to a bubble, then the countries that have built up their energy infrastructure using renewable energy and nuclear (both have decade-long returns after the initial investment) will have cheaper electricity, which will lead to a future competitive advantage. Gas-fired power plants, on the other hand, require a constant supply of gas to convert to electricity. The price of gas would set the price of electricity regardless, and thus confer very little advantage.

