Hacker News

Did I miss a fundamental shift in how LLMs work?

Until they change that fundamental piece, they are literally that: programs that use math to determine the most likely next token.
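The "most likely next token" loop the comment describes can be sketched in a few lines. This is a toy illustration only, not a real LLM: the scoring function below is a hypothetical hand-written bigram table standing in for a neural network's logits, but the decoding step — softmax over scores, then append the argmax token and repeat — is the same shape.

```python
import math

# Tiny made-up vocabulary for the illustration.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_logits(context):
    # Hypothetical stand-in for a model: score the next token from a
    # fixed bigram-like table keyed on the last token. A real LLM would
    # compute these scores with a neural network over the whole context.
    table = {
        "the": {"cat": 2.0, "mat": 1.5},
        "cat": {"sat": 2.0},
        "sat": {"on": 2.0},
        "on":  {"the": 2.0},
        "mat": {".": 2.0},
    }
    last = context[-1]
    return [table.get(last, {}).get(tok, 0.0) for tok in VOCAB]

def softmax(xs):
    # Turn raw scores into a probability distribution.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def generate(context, steps):
    out = list(context)
    for _ in range(steps):
        probs = softmax(toy_logits(out))
        # Greedy decoding: append the single most likely next token.
        out.append(VOCAB[max(range(len(VOCAB)), key=probs.__getitem__)])
    return out

print(" ".join(generate(["the"], 5)))  # → the cat sat on the cat
```

Real systems usually sample from the distribution (with temperature, top-k, etc.) rather than always taking the argmax, but the point stands: each step emits one token chosen from a probability distribution over the vocabulary.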



This point is irrelevant when discussing capabilities. It's like saying that your brain is literally just a bunch of atoms following a set of physics laws. Absolutely true but not particularly helpful. Complex systems have emergent properties.


The problem, I think, is that current LLMs may not be complex enough to respond to all stimulation.

Current LLM systems are more like a simulation of that stimulation: a conclusion rather than an exploration.



