
I'm not sure what experience anyone in this thread has with grad level research as a student/author, but I can assure you that heads roll over this kind of thing.

A professor's career is built on reputation, and that reputation is only as strong as their students' (who do much of the "work," such as it is). Ultimately it comes down to the professor, but this can be a career-ending moment for those students, and I'm quite confident there were some very uncomfortable discussions as a result of this.




Depends on the field. One of the most influential papers in economics was found to be incorrectly constructed, with signs pointing to outright fraud. It omitted data that it claimed to include, and including that data reverses the conclusion. When the authors were called out, they doubled down, arguing that the conclusion reverses again if you add a third, cherry-picked data set, and then dragged the person who called them out through the mud in a NY Times opinion piece.

Those authors are still extremely prestigious professors in the field, and have suffered essentially no penalty. https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt


With all due respect, this is by no means one of the most influential papers in economics.

Not influential on economics research, but on economic policy.

It literally crushed economies and guided international monetary policy for at least a decade.

There's a reason why it's one of the only economic papers that has its own wiki page.


[flagged]


I consider LLMs to be a very useful tool and use them every day. But if I sign a slip of paper saying I won't use them for some project, and then use them anyway, not merely using them but copying without even the pretense of putting it into my own words, then that's fraud. LLMs being a tool is completely orthogonal to this fraud.

This comment doesn't seem to fit the discussion at all?

The discussion is not about humans using LLMs to write papers. It is about humans who agreed not to use LLMs in reviewing papers, and then did exactly that.


There's a lot of irony in a defensive comment being written based on misreading / inattentive reading of a post about reviewing papers (requiring attentive reading).

It might be that the paper's authors required others not to use LLMs when reviewing their work. Then, by the rule of reciprocity, they shouldn't use LLMs when reviewing others' work. The article is unclear on whether this implied reciprocity rule was explicitly stated.

It was. More details here: https://icml.cc/Conferences/2026/LLM-Policy

In particular: "Any reviewer who is an author on a paper that requires Policy A must also be willing to follow Policy A."


In addition to being reviewers, they also submitted their own research to this conference. So it raises the question: if they were willing to cheat on the review side, where there is less incentive, why wouldn't they cheat on the side that offers more incentive?

(Meaning, your career doesn’t get boosted much for reviewing papers, but much more so for publishing papers)


A hammer can be used to build a house, or to kill a person. We have a lot of history, law, and culture (likely more) around using tools like hammers, so we know what counts as good use versus bad use. The same applies to many other tools as well.

LLMs can be very useful tools. However we also know there are a lot of bad uses and we are still trying to figure out where there are problems and where there are none.


This has nothing to do with whether it is ok to use AI or not, it is about whether it is ok to lie about using it.

They agreed to the no LLM policy.

> what's the problem?

Read the article. They self-selected into the no-LLM group and then copy/pasted from an LLM. Not only dishonest but just not smart.


Reading the article is exhausting. If I can leave a comment just as well without reading the article, then what's the problem? If I got something wrong, other people will point it out. That's a more efficient use of my time.

/s


Not to water down the snark, but isn't the situation described in the article caused by the exact mentality you're mocking?

I believe that's the joke, yes.

The issue is not the tool use - research is a small community and violating submission terms is gonna get you stuck in the naughty corner.


