

That’s a massive oversimplification of the field's trajectory.

Google introduced the Transformer architecture in 2017. They applied it to web search, language translation, and their voice assistant (Google Assistant), all applications with a large error tolerance, but didn't go much further because they considered reliability an unresolved problem.

ChatGPT was introduced in late 2022. It was based on the Transformer architecture, as are all the current AI chatbots.

ChatGPT's big innovation was scale. OpenAI spent billions to train on everything they could find on the web and beyond, and marketed the result as a general-purpose AI.

But scale has hit a wall. Even with a world of data and an energy budget larger than that of a small country, reasoning and reliability remain largely unresolved issues.

Computing has traditionally been about reliable answers at low cost. AI offers the opposite: unreliable answers at high cost.

https://research.google/blog/transformer-a-novel-neural-netw...


Why do you sound like an LLM?

> While LLMs are probabilistic, their accuracy in specific domains—like Tool Calling—is already hitting near-100% reliability. That is where industrialization happens.

Is this just an AI bot replying to comments on its own AI post?



