
It's my strong belief that using AI in any capacity that does not upfront state "the following content was generated by artificial intelligence" is never acceptable. In most situations, allowing an AI to wield your name gives off the scent of "My time is more valuable than yours, so I've automated writing to you." It is quite disgraceful. If your use case would be materially harmed by an upfront disclosure of AI-generated content, then you need to take a good, hard think on what that means for what you're doing (then again, maybe you're not interested in thinking anymore, and that's how you got to this point in your life).



It’s good-faith arbitrage. Until everyone automatically suspects everything to be LLM-generated and there is zero trust, anyone doing this is eroding the good faith that lets them get away with it in the first place.

How far do you take this policy?

1. Am I allowed to ask an AI to proofread a draft for grammatical errors?

2. Am I allowed to ask an AI to proofread a draft for technical errors?

3. In both #1 and #2, am I allowed to ask the AI to suggest revisions, or is it only allowed to point out what's wrong and why?

4. If I write a sentence like "Lucy's laughter ___ her underlying anxiety" and I'm having trouble coming up with the right word to fill in the blank, can I give the sentence to an AI and ask it for a list of possible options?

5. While brainstorming, can I use an AI as a souped up rubber duck before I begin writing?


In general, I think those use cases are fine.

But... AI generated content is a slippery slope. Someone earlier today asked me to "review" a 50 page document they had completely generated with AI yet obviously not reviewed themselves. It is embarrassing.


This happened to me recently at work. I just ignored the request, but I was tempted to feed it to Copilot and send them the response.

Not parent commenter, but my answer is "no to all".

...then I guess my next question would be, why? How do you feel about spellcheck? Should mobile users turn off autocorrect unless they disclose that it's turned on?

I don't really understand your philosophy if you're opposed to an LLM pointing out when someone got the tense wrong.


Who said anything about spellcheck?

GP said they weren't okay with someone using an AI to check for grammatical errors. If they would be okay with using software to check for spelling errors, I'd be interested to know why they're making that distinction. And I'd like to know what they think of autocorrect, which at least on the iPhone uses an on-device LLM nowadays.

"AI" can mean anything with machine learning; a spellchecker can use some sort of machine learning too. But what people mean when they say "AI" is an LLM chatbot. A spellchecker highlights mistakes; it doesn't suggest rewriting the text arbitrarily the way an LLM chatbot does. So I totally understand how you can be for one and not the other.

By the way, autocorrect on the iPhone got worse recently; a bunch of times it has "corrected" a word to the wrong one for me.


Wow, you’re arguing. What about predictive T9?


Sending someone something that'll take them longer to read than it took you to write is taking the piss.

I think that's a good rule of thumb for AI-generated output.


I agree with you when you are talking with a human in good faith. I disagree when it comes to large corporations and government officials. Oftentimes there's a lot of red tape you have to get through, and you have to create documents that nobody on their side is actually reading. Usually this is just to discourage people from completing the action they are trying to accomplish. LLM-generated content has gotten me back improperly held taxes and generated multiple extension requests where the receiver just had to check a box that they got it.

The position I think I can most simply reduce my beliefs to is "it should have taken you longer to write something than the person receiving it is going to spend reading it."

That position still fits your scenario: if they don't actually care enough to read it, then you don't need to care enough to write it. But for something like this, targeted at a technical audience, it's a higher bar.

Also, of course, the accuracy of the writing is relevant in both cases, and that's something LLMs are absolutely worse at than humans. As noted in some of the comments here, this article had the LLM hallucinate the existence of macOS 25, a mistake no human would have made while writing such an article entirely by hand.


> state "the following content was generated by artificial intelligence"

"… but reviewed by a human / me for accuracy."


[flagged]


I am genuinely curious whether you are trolling or putting that forward as a genuine argument.

Trivially, it's the difference between medium and message/content.

On one axis, whether a message is spoken, written via pen or typewriter or word processor, sent electronically, faxed, mailed, etc., it is fundamentally a communication from one human being to another, even if the medium and mechanics differ.

The other axis is the actual content: genuine human interaction, intent, message, and connection, versus the result of a prompt.


> Same thing as using a word processor and printer rather than handwriting a note. Inexcusable.

There is no confusion, when in receipt of something written using a word processor, that it was so written, and people are free to respond accordingly (though, of course, most of us don't care). There is no such certainty with products generated by AI, so it is appropriate to responsibly disclose it.



