If anything, I would say that that's a very optimistic take. The hype train is strong, but that's largely what it is once you look at the details. What we have right now is impressive, but no one has shown anything close to a plausible path from where we are now to AGI. The things we can do right now are fancy, but they're fancy in the same way good autocomplete is fancy. To me, it feels like a local maximum, and it's very unclear whether the specific set of approaches we're exploring right now can lead to something more.
I'm not convinced, and neither is Sam Altman himself [0]. Also, if that projection holds, and that's a big if, the purported breakthrough would cost 10^6 times as much as GPT-4 took to train. That's over 100 million dollars [1] times a million, which adds up to over 100 trillion dollars, in the ballpark of four times the GDP of the entire United States.
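A quick back-of-envelope check of that cost claim, using the rough figures from the comment (the ~$100M GPT-4 training cost and ~$25T US GDP are the comment's estimates, not measured values):

```python
# Back-of-envelope check: 10^6 times GPT-4's training cost vs. US GDP.
# All inputs are the rough estimates quoted in the comment above.
gpt4_training_cost = 100e6       # ~ $100 million, per [1]
scale_factor = 10**6             # the projected 10^6 compute multiple
projected_cost = gpt4_training_cost * scale_factor

us_gdp = 25e12                   # US GDP, roughly $25 trillion

print(projected_cost)            # 1e+14, i.e. $100 trillion
print(projected_cost / us_gdp)   # 4.0, i.e. about four times US GDP
```

So the arithmetic in the comment checks out, granting its (very loose) input estimates.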
The thing is that it looks like, or perhaps I should say it's "understood" at this point, that transformers' abilities scale pretty much linearly with compute (there is also some evidence they scale exponentially with parameter count, but only some).
Right now there are insane amounts of money being thrown at AI because progress is matching projections. There doesn't seem to be a leveling off or diminishing returns taking place. And that's just compute; we could probably freeze compute and still make insane progress just because optimizations have so much momentum right now too.
I think that's part of the carefully-crafted hype messaging. Close enough to get excited about, but far enough away that by the time we get there people will have forgotten we were supposed to have it by then.
Yeah, that's my number one question, too. Sure, he happened to be appointed the manager of the team that cracked intuitive algorithms through deep learning, but what does he know about superintelligence? IMO that's a completely separate question, and "foundation models continue to improve" has no bearing on whether an intelligence explosion is guaranteed. I'd trust someone like Yudkowsky way more on this, or really anyone who has engaged with academic literature on the subjects of intentionality, receptive vs. spontaneous reasoning, or really any academic literature of any kind...
Does anyone know if he's published thoughts on any serious literature? So far I've just seen him play the "I know stuff you don't because I get to see behind the scenes" card over and over, which seems a little dubious at this point. I was convinced they would announce AGI in December 2023, so I'm far from a hater! It just seems clear that they're/he's guessing at this point, rather than reporting or reasoning.
Really he assumes two huge breakthroughs, both of which I find plausible but far from guaranteed:
With nearly-limitless intelligence and abundant energy
With the current hype wave it feels like we’re almost there but this piece makes me think we’re not.