Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional.
It's intentionally reckless, not intentionally harmful or intentionally falsifying quotes. I am sure they would have preferred if it hadn't falsified any quotes.
He's on the AI beat; if he is unaware that a chatbot will fabricate quotes and didn't verify them, that is a level of reckless incompetence that warrants firing.
The state of California can classify some driving under the influence cases as operating with "implied malice". Not sure it would qualify in this scenario, but there is precedent for arguing that reckless incompetence is malicious when it is done without regard for the consequences.
“In any statutory definition of a crime ‘malice’ must be taken not in the old vague sense of ‘wickedness’ in general, but as requiring either (i) an actual intention to do the particular kind of harm that was in fact done, or (ii) recklessness as to whether such harm should occur or not (ie the accused has foreseen that the particular kind of harm might be done, and yet has gone on to take the risk of it).” R v Cunningham
I think that is the crucial question. Often we lump together malice with "reckless disregard". The intention to cause harm is very close to the intention to do something that you know or should know is likely to cause harm, and we often treat them the same because there is no real way to prove intent, so otherwise everyone could just say they "meant no harm" and just didn't realize how harmful their actions could be.
I think that a journalist using an AI tool to write an article treads perilously close to that kind of recklessness. It is like a carpenter building a staircase using some kind of weak glue.
"Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here."
They aren't allowed to use the tool, so there was clearly intention.
Replace the parent poster's "malice" with "malfeasance", and it works well enough.
I may not intend to burn someone's house down by doing horribly reckless things with fireworks... but after it happens, surely I would still bear both some fault and some responsibility.
I don't think the article was written by an LLM; it doesn't read like it, it reads like it was written by actual people.
My assumption is that one of the authors used something like Perplexity to gather information about what happened. Since Shambaugh blocks AI company bots from accessing his blog, it did not get actual quotes from him, and instead hallucinated them.
They absolutely should have validated the quotes, but this isn't the same thing as just having an LLM write the whole article.
I also think this "apology" article sucks, I want to know specifically what happened and what they are doing to fix it.
I think you're reading a lot of intentionality into the situation that may not be present, and I have not seen information confirming, or really even suggesting, that it is. Did someone challenge them, "was AI used in the creation of this article?" and they denied it? I see no evidence of that.
Seems like ordinary, everyday corner cutting to me. I don't think that rises to the level of malice. Maybe if we go through their past articles and establish it as a pattern of behavior.
That's not a defence, to be clear. Journalists should be held to a higher standard than that. I wouldn't be surprised if someone with "senior" in their title was fired for something like this. But I think this malice framing is unhelpful to understanding what happened.
> Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
By submitting this work they warranted that it was their own. Requiring an explicit false statement to qualify as a lie excludes many of the most harmful cases of deception.
Have you ever gone through a stop sign without coming to a complete stop? Was that dishonesty?
You can absolutely lie through omission, I just don't see evidence that that is a better hypothesis than corner cutting in this particular case. I am open to more evidence coming out. I wouldn't be shocked to hear in a few days that there was other bad behavior from this author. I just don't see those facts in evidence, at this moment. And I think calling it malice departs from the facts in evidence.
Presumably keeping to the facts in evidence is important to us all, right? That's why we all acknowledge this as a significant problem?
We see a typical issue in modern online media: the policy is to not use AI, but the demands of content created per day make it very difficult to not use AI... so the end result is undisclosed AI. This is all over the old blogosphere publications, regardless of who owns them. The ad revenue per article is just not great.