Hacker News

What would you do differently if LLM outputs were deterministic?

Perhaps I approach this from a different perspective than you, so I’m interested to hear other viewpoints.

I review everything my models produce the same way I review my coworkers’ work: trust, but verify.
