The question is: For a given problem in machine intelligence, what's the expected time-horizon for a 'good' solution?

Over the last, say, five years, a pile of problems that had stood open for 50+ years has been toppled by the deep learning + data + compute combo. This includes language modeling (divorced from reasoning), image generation, audio generation, audio separation, image segmentation, protein folding, and so on.

(Audio separation is particularly close to my heart; the 'cocktail party problem' has been a challenge in audio processing for 100+ years, and we now have great unsupervised separation algorithms (MixIT), which hardly anyone knows about. That's an indicator of how much great stuff is happening right now.)
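For anyone curious what makes MixIT "unsupervised": it never needs isolated reference sources, only pairs of ordinary mixtures. You add two mixtures together, ask the separator for M sources, and score it under the best binary assignment of those sources back to the two original mixtures. Here's a minimal NumPy sketch of that objective (not the reference implementation; the paper uses a negative-SNR loss where I use MSE for simplicity, and the brute-force enumeration is only sensible for small M):

```python
import itertools
import numpy as np

def mixit_loss(mix1, mix2, est_sources):
    """Mixture-invariant training loss, simplified.

    mix1, mix2:   (T,) reference mixtures
    est_sources:  (M, T) separator outputs

    Each estimated source is assigned to exactly one of the two
    reference mixtures; we take the assignment with minimal MSE.
    """
    M = est_sources.shape[0]
    best = np.inf
    # Enumerate all 2^M binary assignments of sources to the two mixtures.
    for bits in itertools.product([0, 1], repeat=M):
        a = np.array(bits)  # 1 -> source assigned to mix1, 0 -> mix2
        rec1 = (a[:, None] * est_sources).sum(axis=0)
        rec2 = ((1 - a)[:, None] * est_sources).sum(axis=0)
        err = np.mean((mix1 - rec1) ** 2) + np.mean((mix2 - rec2) ** 2)
        best = min(best, err)
    return best

# Toy check: if the separator perfectly recovers the underlying sources,
# some assignment reconstructs both mixtures and the loss is ~0.
rng = np.random.default_rng(0)
s = rng.standard_normal((4, 100))       # 4 "true" sources
mix1, mix2 = s[0] + s[1], s[2] + s[3]
print(mixit_loss(mix1, mix2, s))        # ~0.0
```

The neat part is that minimizing this over lots of mixture pairs pushes the network toward outputting genuinely separated sources, since that's what makes the reassignment cheap, all without ever seeing a clean source in training.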

So, when we look at some of our known 'big' problems in AI/ML, we ask, 'what's the horizon for figuring this out?' Let's look at reasoning...

We know how to do 'reasoning' with GOFAI, and we've got interesting grafts of LLMs+GOFAI for some specific problems (like the game of Diplomacy, or some of the math olympiad solvers).
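By "reasoning with GOFAI" I mean the symbolic stuff that's been solved for decades: rule engines, theorem provers, search. A toy forward-chaining rule engine shows the flavor (a hypothetical sketch, not any particular system's code; facts and rules are just strings for illustration):

```python
# Minimal forward-chaining inference: one classic flavor of GOFAI reasoning.
# Facts are strings; a rule is a (set-of-premises, conclusion) pair.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)  # fire the rule, derive a new fact
                changed = True
    return facts

rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]
print(forward_chain({"socrates is a man"}, rules))
```

The hard open problem isn't this part; it's getting that kind of reliable multi-step derivation out of (or wired into) an LLM, which is what the hybrid systems are groping toward.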

"LLMs which can reason" is a problem that has been open for only a year or two, tops, and on which we're already seeing interesting progress. Either there's something special about the problem that will make it take another 50+ years to solve, or there's nothing special about it and people will cook up good and increasingly convenient solutions over the next five years or so. (Perhaps a middle ground is "it works, but takes so much compute that we have to wait for chip-making materials science to catch up.")
