
If you look at what you wrote and can't identify what rules you've broken, how are you able to validate that the AI output doesn't change the meaning of what you wrote?


Knowing whether or not the AI changed the meaning of what you wrote is not reliant on knowing which specific rules you broke. It's only reliant on you actually reading what the AI spat out and deciding “yes, this is what I meant” or “no, this is not what I meant”.

Unless you're arguing that the rule violations are something the author intends to be part of the meaning of what they wrote?


>Knowing whether or not the AI changed the meaning of what you wrote is not reliant on knowing which specific rules you broke. It's only reliant on you actually reading what the AI spat out and deciding “yes, this is what I meant” or “no, this is not what I meant”.

That's fair.

>Unless you're arguing that the rule violations are something the author intends to be part of the meaning of what they wrote?

I think what I wanted to get at is more like this:

1. I think that they may be part of the meaning

2. I think that people would be primed to accept changes even if they change the meaning

3. I suspected that it would always correct something and wouldn't just say LGTM even if the input was fine

To check, and at the risk of this being hypocritical, I asked for a grammar correction on a part of your post that I thought had no mistakes, and both in context and in isolation, it corrected "spat out" to "produced." Now, this isn't a huge deal, but it is a loss of the connotation of "spat out," which is the phrasing you chose.

I think grammatical errors are low-cost, and changes in meaning and intent are high-cost, so given point 2 above, running your writing through an LLM risks losing more than it gains.


I suspect those relative costs would be very different for someone who's not me, though. My English writing ability is much higher than my Spanish writing ability, so in some alternate universe where Hacker News was Spanish-only instead of English-only, grammatical errors would add up to be a cumulatively-higher cost than a possible change in meaning/intent. I wouldn't have the requisite knowledge to know the difference in connotation between “produjo” v. “escupió” v. the myriad other verbs Kagi Translate is suggesting to me at the moment, whereas I'd probably have a lot of cases of not just bad grammar, but outright nonsensical word choices — like Peggy Hill in a Mexican courtroom (https://www.youtube.com/watch?v=b7QCvykBXik), except with the added torment of being actually self-aware of my linguistic ineptitude.

(On that tangential note, though, I do appreciate that Kagi Translate provides multiple translations and attempts to explain their differences in connotation such that I can pick whichever one most closely matches my intent; if other LLM-assisted writing tools did that then that'd render a lot of this problem moot.)



