
Some people legitimately have no idea that others recognize, and are put off by, LLM output.

Also, I know a lot of non-native English speakers who use AI tools to "correct things". Because of the language barrier, these people especially are unlikely to ever recognize the specific LLM tone that results.



And that's really the insidious thing about these tools: if you can't do the work yourself, you can't really verify the LLM's output either.



