I’m totally against the anthropomorphization of LLMs. These are tools, not sentient beings. Treat them as such. Unlike a power tool, they don’t need respect, because they cannot hurt you. LLMs trying to simulate human behavior is the equivalent of skeuomorphism in UI design.
They’re also language tools, trained on enormous amounts of text together with its tone, form, and application. Because they work by inferring the most probable response to your prompt, the tone, form, and application of your input will influence the quality of the output.
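A concrete way to see this for yourself, as a sketch only (the model name, the prompts, and my use of the OpenAI Python client are just what I happen to reach for; swap in your own):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The same underlying request, phrased with different tone and form.
    prompts = {
        "terse": "regex for emails. now.",
        "polite-but-direct": (
            "Please write a regex that matches typical email addresses, "
            "and briefly explain each part."
        ),
    }

    for tone, prompt in prompts.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {tone} ---")
        print(response.choices[0].message.content)

One pair of prompts proves nothing on its own, but it makes the point: the only lever you have is the text you put in.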
It might offend you to show deference to a tool, but refusing on principle to use a tool in the optimal way is foolish.
That's the point of the study, though: whether using polite language makes the tool work better. If it does make the tool respond more usefully, then refusing to be polite out of an anti-anthropomorphization stance is just as misguided.
These are still somewhat mysterious artifacts that we don't really know how to use. But they're trained on human text, so it's plausible that their behavior reflects patterns found in it.
Because LLMs match your prompt against data scraped from the internet, it’s plausible that being polite steers your response toward the more civil conversations, which tend to contain more useful information.
Polite but firm and direct works best in my experience. Because they emulate human language, veering too far in either direction backfires: too rude, and the response tries to engage emotionally; too polite, and it fails to obey.
They don’t have emotions, though. For example, you can tell ChatGPT that it’s wrong, and as long as you keep a polite-but-firm tone, it’ll respond to the logical content.
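Something like this, say (a hypothetical exchange using the OpenAI Python client; the model name is a placeholder):

    from openai import OpenAI

    client = OpenAI()

    # Push back on a wrong answer without flattery or heat, then let the
    # model re-answer. The correction is stated plainly, with the reason.
    messages = [
        {"role": "user",
         "content": "When was the first transatlantic telegraph cable completed?"},
        {"role": "assistant",
         "content": "The first transatlantic telegraph cable was completed in 1866."},
        {"role": "user",
         "content": "That's not correct. The first cable was completed in 1858; "
                    "1866 was the first durable one. Please revise your answer."},
    ]

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    print(reply.choices[0].message.content)

No apology, no insult, just the correction and the reason. In my experience that's the tone it handles best.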
Well, if you trust them and act on bad information, they can hurt you. If you use them to cheat yourself out of learning something, they can hurt you. These things have sharp edges and need to be approached with some care. I agree with your broader point, though, if it's that it's silly to think they "deserve" respect just because they can talk.