
The most interesting bit to me is the time period mentioned until superintelligence: “thousands of days (!)”, i.e. 6–9 years or more?

With the current hype wave it feels like we’re almost there but this piece makes me think we’re not.



If anything, I would say that that's a very optimistic take. The hype train is strong, but hype is largely what it is once you look at the details. What we have right now is impressive, but no one has shown anything close to a possible path from where we are right now to AGI. The things we can do right now are fancy, but they're fancy in the same way good autocomplete is fancy. To me, it feels like a local maximum, and it's very unclear whether the specific set of approaches we're exploring right now can lead to something more.


> What we have right now is impressive, but no one has shown anything close to a possible path from where we are right now to AGI[0].

[0]: From GPT-4 to AGI: Counting the OOMs https://situational-awareness.ai/from-gpt-4-to-agi/


I'm not convinced, and neither is Sam Altman himself [0]. Also, if that projection holds, and that's a big if, the purported breakthrough would cost 10^6 times as much as GPT-4 took to train. That's over 100 million dollars [1] times a million, which adds up to over 100 trillion dollars, in the ballpark of four times the GDP of the whole of the United States.

[0] https://www.wired.com/story/openai-ceo-sam-altman-the-age-of...

[1] https://en.wikipedia.org/wiki/GPT-4#Training
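The arithmetic above can be sanity-checked in a few lines. The figures are rough assumptions (roughly $100M to train GPT-4 per the linked Wikipedia article, roughly $25T for US GDP):

```python
# Back-of-the-envelope check of the scaling-cost claim.
# All figures are rough assumptions, not precise data.
gpt4_cost = 100e6      # ~$100 million to train GPT-4
scale_factor = 1e6     # 10^6 more compute, per the projection
us_gdp = 25e12         # ~$25 trillion US GDP

projected_cost = gpt4_cost * scale_factor
print(f"Projected cost: ${projected_cost:,.0f}")        # $100,000,000,000,000
print(f"Multiple of US GDP: {projected_cost / us_gdp:.1f}x")  # 4.0x
```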


The thing is that it looks like, or perhaps I should say it's "understood" at this point, that transformers' abilities scale pretty much linearly with compute (there is also some evidence they scale exponentially with parameter count, but only some evidence).

Right now there are insane amounts of money being thrown at AI because progress is matching projections. There doesn't seem to be a leveling off or diminishing returns taking place. And that's just compute; we could probably freeze compute and still make insane progress just because optimizations have so much momentum right now too.
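As a sketch of what "scales with compute" claims look like quantitatively, published scaling laws model loss as a power law in training compute. The constants below are made-up illustrative values, not fitted numbers from any real model family:

```python
# Toy scaling-law sketch: loss as a power law in training compute,
# L(C) = a * C**(-b). The constants a and b are illustrative
# assumptions only, not measured values.
def loss(compute, a=10.0, b=0.05):
    return a * compute ** (-b)

# Loss keeps improving smoothly as compute grows by orders of magnitude,
# with no built-in plateau -- the shape behind "no diminishing returns yet".
for c in [1e21, 1e23, 1e25]:  # training FLOPs, two orders apart each step
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

Note that a power law never actually levels off; whether real capabilities track the loss curve that far is exactly the open question in this thread.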


How do you distinguish the path to fancier autocomplete from the path towards AGI? Why think we're on the former rather than the latter?


I think that's part of the carefully-crafted hype messaging. Close enough to get excited about, but far enough away that by the time we get there people will have forgotten we were supposed to have it by then.


Yeah, that's my number one question, too. Sure, he happened to be appointed the manager of the team that cracked intuitive algorithms through deep learning, but what does he know about superintelligence? IMO that's a completely separate question, and "foundation models continue to improve" says nothing about whether an intelligence explosion is guaranteed. I'd trust someone like Yudkowsky way more on this, or really anyone who has engaged with academic literature on the subjects of intentionality, receptive vs. spontaneous reasoning, or really any academic literature of any kind...

Does anyone know if he's published thoughts on any serious lit? So far I've just seen him play the "I know stuff you don't because I get to see behind the scenes" card over and over, which seems a little dubious at this point. I was convinced they would announce AGI in December 2023, so I'm far from a hater! It just seems clear that they're/he's guessing at this point, rather than reporting or reasoning.

Really he assumes two huge breakthroughs, both of which I find plausible but far from guaranteed:

   With nearly-limitless intelligence and abundant energy


I would presume that that’s the time period he’s currently trying to fund.



