Hacker News

I guess the value in reporting it is that most people, including us here on HN, consider computing to be accurate: if you trust the input and the program that processes it, you can trust the output. That is what we expect and value in computing: accuracy.

For LLMs that's no longer really the case, and it needs to be highlighted that "computers" no longer necessarily produce accurate output, so that not too much faith is put in what they produce.
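A toy sketch of the contrast: a classical function is deterministic and exact, while LLM-style temperature sampling draws from a probability distribution and can return different (and wrong) answers. The vocabulary, weights, and function names below are invented purely for illustration, not any real model's behavior:

```python
import random

def add(a, b):
    # Classical computing: same input always yields the same, correct output.
    return a + b

def toy_llm_sample(prompt, seed, temperature=1.0):
    # Stand-in for LLM decoding: the "model" assigns probabilities to
    # candidate next tokens and we sample one. Vocabulary and weights
    # are made up for this sketch.
    vocab = ["Paris", "Lyon", "Marseille"]
    weights = [0.7, 0.2, 0.1]
    # Higher temperature flattens the distribution, increasing variability.
    scaled = [w ** (1.0 / temperature) for w in weights]
    rng = random.Random(seed)
    return rng.choices(vocab, weights=scaled, k=1)[0]

# The deterministic path never varies.
assert all(add(2, 3) == 5 for _ in range(1000))

# The sampled path varies across seeds and can land on the wrong token.
answers = {toy_llm_sample("Capital of France?", seed=s) for s in range(100)}
print(answers)
```

The point isn't that sampling is broken; it's that the trust model changes. With `add`, verifying the program once verifies every future output; with a sampler, each output has to be judged on its own.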



> "computers" no longer necessarily produce accurate output

This was always the case. Just because a computer executes your model doesn't mean your model has any bearing on reality. This is not a new phenomenon.
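A concrete pre-LLM example of this point: floating-point arithmetic is executed perfectly deterministically, yet its model of the real numbers diverges from what most users expect. The execution is flawless; the surprise lives in the model:

```python
import math

# The computer faithfully executes IEEE 754 binary floating point.
# 0.1 and 0.2 have no exact binary representation, so the "obvious"
# decimal equality fails even though nothing misfired.
a = 0.1 + 0.2
print(a == 0.3)   # False
print(a)          # 0.30000000000000004

# The result is accurate with respect to the model (binary floats),
# just not with respect to the decimal arithmetic the user had in mind.
print(math.isclose(a, 0.3))  # True
```

So "trust the program" has always quietly meant "trust that the program's model matches the reality you care about."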



