What's critically missing in the current architectures, in my view, are models that are able to "keep learning" (in a way that goes beyond just providing a summary of the previous conversation, which yields surface-level insights at most).

If we keep the current iteration cycle (retrain an LLM every couple of months, incorporating the current "status quo", including all of its previous conversations), we might get somewhat interesting results, but even the least motivated grad student has an iteration cycle orders of magnitude faster than that.
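To make the distinction concrete, here is a minimal sketch (my illustration, not anything the current systems actually do) contrasting the two regimes: the status-quo approach keeps the weights frozen and just re-feeds a growing conversation digest as context, while "keep learning" means a weight update after every interaction. The model, functions, and data below are hypothetical toy stand-ins:

  import torch
  import torch.nn as nn

  torch.manual_seed(0)
  VOCAB, DIM = 1000, 64

  class TinyLM(nn.Module):
      """Toy next-token model standing in for an LLM."""
      def __init__(self):
          super().__init__()
          self.embed = nn.Embedding(VOCAB, DIM)
          self.head = nn.Linear(DIM, VOCAB)

      def forward(self, tokens):
          return self.head(self.embed(tokens))

  model = TinyLM()
  opt = torch.optim.SGD(model.parameters(), lr=1e-2)
  loss_fn = nn.CrossEntropyLoss()

  def summary_only(history: list[str], new_turn: str) -> list[str]:
      # Status quo: weights untouched; "learning" is just a growing
      # digest re-fed as context, so insights stay surface-level.
      return history + [new_turn]

  def online_update(tokens: torch.Tensor) -> float:
      # The missing piece: a weight update after each conversation,
      # giving an iteration cycle of seconds rather than months.
      logits = model(tokens[:-1])
      loss = loss_fn(logits, tokens[1:])
      opt.zero_grad()
      loss.backward()
      opt.step()
      return loss.item()

  # One "conversation" = one short token sequence; the model nudges
  # its weights immediately instead of waiting for the next retrain.
  for step in range(3):
      conversation = torch.randint(0, VOCAB, (16,))
      print(f"turn {step}: loss {online_update(conversation):.3f}")

Naive per-conversation gradient steps like this raise their own problems (catastrophic forgetting, safety regressions), which is presumably part of why production systems stick with summaries, but the iteration-speed gap in the comparison above is the point.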
