
There’s no malice if there was no intention of falsifying quotes. Using a flawed tool doesn’t count as intention.


Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional.


It's intentionally reckless, not intentionally harmful or intentionally falsifying quotes. I am sure they would have preferred if it hadn't falsified any quotes.


He's on the AI beat, if he is unaware that a chatbot will fabricate quotes and didn't verify them that is a level of reckless incompetence that warrants firing


Yeah! We can call things reckless incompetence without calling them malice!


The state of California can classify some driving under the influence cases as operating with "implied malice". Not sure it would qualify in this scenario, but there is precedent for arguing that reckless incompetence is malicious when it is done without regard for the consequences.


“In any statutory definition of a crime ‘malice’ must be taken not in the old vague sense of ‘wickedness’ in general, but as requiring either (i) an actual intention to do the particular kind of harm that was in fact done, or (ii) recklessness as to whether such harm should occur or not (ie the accused has foreseen that the particular kind of harm might be done, and yet has gone on to take the risk of it).” R v Cunningham


I think that is the crucial question. Often we lump together malice with "reckless disregard". The intention to cause harm is very close to the intention to do something that you know or should know is likely to cause harm, and we often treat them the same because there is no real way to prove intent, so otherwise everyone could just say they "meant no harm" and just didn't realize how harmful their actions could be.

I think that a journalist using an AI tool to write an article treads perilously close to that kind of recklessness. It is like a carpenter building a staircase using some kind of weak glue.


> Using a flawed tool doesn’t count as intention.

"Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here."

They aren't allowed to use the tool, so there was clearly intention.


Replace the parent poster's "malice" with "malfeasance", and it works well enough.

I may not intend to burn someone's house down by doing horribly reckless things with fireworks... but after it happens, surely I would still bear both some fault and some responsibility.


But that kind of recklessness is malice.


Outsourcing writing to a bot without attribution may not be malicious, but it does strain integrity.


I don't think the article was written by an LLM; it doesn't read like one. It reads like it was written by actual people.

My assumption is that one of the authors used something like Perplexity to gather information about what happened. Since Shambaugh blocks AI company bots from accessing his blog, it did not get actual quotes from him, and instead hallucinated them.

They absolutely should have validated the quotes, but this isn't the same thing as just having an LLM write the whole article.

I also think this "apology" article sucks, I want to know specifically what happened and what they are doing to fix it.
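For what it's worth, the bot-blocking mentioned above is usually done with user-agent rules in robots.txt. A minimal sketch using the crawler names these companies have published (whether Shambaugh's blog uses exactly these rules is an assumption):

```
# Disallow OpenAI's and Perplexity's crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```

Note this only works with crawlers that honor robots.txt; a tool that fetches pages directly on a user's behalf may ignore it, which is part of why the provenance of those "quotes" is so murky.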


The issues with such tools are highly documented though. If you’re going to use a tool with known issues you’d better do your best to cover for them.


The malice is passing off someone else's writing as your own.


The tool when working as intended makes up quotes. Passing that off as journalism is either malicious or unacceptably incompetent.


They're expected by policy to not use AI. Lying about using AI is also malice.


It's a reckless disregard for the readers and the subjects of the article. Still not malice though, which is about intent to harm.


Lying is intent to deceive. Deception is harm. This is not complicated.


I think you're reading more intentionality into the situation than may be present; I have not seen information confirming, or really even suggesting, that it was intentional. Did someone challenge them, "was AI used in the creation of this article?" and they denied it? I see no evidence of that.

Seems like ordinary, everyday corner cutting to me. I don't think that rises to the level of malice. Maybe if we go through their past articles and establish it as a pattern of behavior.

That's not a defence to be clear. Journalists should be held to a higher standard than that. I wouldn't be surprised if someone with "senior" in their title was fired for something like this. But I think this malice framing is unhelpful to understanding what happened.


> Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.

By submitting this work they warranted that it was their own. Requiring an explicit false statement to qualify as a lie excludes many of the most harmful cases of deception.


Have you ever gone through a stop sign without coming to a complete stop? Was that dishonesty?

You can absolutely lie through omission, I just don't see evidence that that is a better hypothesis than corner cutting in this particular case. I am open to more evidence coming out. I wouldn't be shocked to hear in a few days that there was other bad behavior from this author. I just don't see those facts in evidence, at this moment. And I think calling it malice departs from the facts in evidence.

Presumably keeping to the facts in evidence is important to us all, right? That's why we all acknowledge this as a significant problem?


We see a typical issue in modern online media: the policy is not to use AI, but the demands of content created per day make it very difficult not to use AI... so the end result is undisclosed AI. This is all over the old blogosphere publications, regardless of who owns them. The ad revenue per article is just not great.



