
I can’t get past all the LLM-isms. Do people really not care about AI-slopifying their writing? It’s like learning about bad kerning, you see it everywhere.



I had a similar reaction to OP for a different post a few weeks back - I think it was some analysis of the health economy. Initially as I was reading I thought, "Wow, I've never read a financial article written so clearly". Everything in layman's terms. But as I continued to read, I began to notice the LLM-isms: oversimplified concepts, "the honest truth", "like X for Y", etc.

Maybe the common factor here is not having deep/sufficient knowledge on the topic being discussed? For the article I mentioned, I feel like I was less focused on the strength of the writing and more on just understanding the content.

LLMs are very good at simplifying concepts and meeting the reader at their level. Personally, I subscribe to the philosophy of "if you couldn't be bothered to write it, I shouldn't bother to read it".


Alternate theory... a few months into the LLMism phenomenon, people are starting to copy the LLM writing style without realizing it :(

This happens to non-native English speakers a lot (like me). My style of writing is heavily influenced by everything I read. And since I also do research using LLMs, I'll probably sound more and more like an AI as well, just by reading its responses constantly.

I just don't know what's supposed to be natural writing anymore. It's not in the books, it's disappearing from the internet; what's left? Some old blogs, for now maybe.


The wave of LLM-style writing taking over the internet is definitely a bit scary. Feels like a similar problem to GenAI code/style eventually dominating the data that LLMs are trained on.

But luckily there's a large body of well written books/blogs/talks/speeches out there. Also anecdotally, I feel like a lot of the "bad writing" I see online these days is usually in the tech sphere.


Books definitely have natural writing, read more fiction! I recommend Children of Time by Adrian Tchaikovsky.

I think you're just hallucinating because this does not come across as an AI article

I see quite a few:

“what X actually is”

“the X reality check”

Overuse of “real” and “genuine”:

> The real story is actually in the article. … And the real issue for Cursor … They have real "brand awareness", and they are genuinely better than the cheaper open weights models - for now at least. It's a real conundrum for them.

> … - these are genuinely massive expenses that dwarf inference costs.

This style just screams “Claude” to me.


It was almost certainly at least heavily edited with one. Ignoring the content, every single thing about the structure and style screams LLM.

> I think you're just hallucinating because this does not come across as an AI article

It has enough tells in the correct frequency for me to consider it more than 50% generated.


Name checks out

It's really unfortunate that we call well-structured writing 'LLM-isms' now.

I don’t see the usual tells in this essay

People care, when they can tell.

Popular content is popular because it is above the threshold for average detection.

In a better world, platforms would empower defenders by granting skilled human noticers flagging priority, and by adopting basic classifiers like Pangram.

Unfortunately, mainstream platforms have thus far not demonstrated strong interest in banning AI slop. This site in particular has actually taken moderation actions to unflag AI slop on certain occasions...


It is certainly very obvious a lot of the time. I wonder whether, if we revisited the automated slop detection problem, we'd be more successful now... it feels like there are a lot more tells and models have become more idiosyncratic.
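To make the "tells" idea concrete: a toy detector could simply score text by the frequency of stock LLM phrases (like the ones quoted upthread). This is only an illustrative sketch; the phrase list is made up for the example, and real detectors such as Pangram or Originality.ai use trained classifiers, not phrase lists.

```python
# Toy "tell phrase" scorer. Illustrative only: the phrase list is an
# assumption for this sketch, not any real product's method.
TELLS = [
    "the honest truth",
    "reality check",
    "genuinely",
    "it's not just",
]

def tell_score(text: str) -> float:
    """Return tell-phrase hits per 100 words (0.0 for empty text)."""
    words = text.split()
    if not words:
        return 0.0
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in TELLS)
    return 100.0 * hits / len(words)

sample = "The honest truth is that this is genuinely a reality check."
print(tell_score(sample))          # high score: several tells in few words
print(tell_score("See you at noon."))  # no tells, score 0.0
```

A real system would of course need calibrated thresholds and a model rather than a hand-picked list, since any fixed phrase list is trivial to evade.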

Tons of companies do this already. It's not as if this is a problem that nobody is revisiting...

What’s one company that has revisited this recently and what’s their detection rate on what sample?

Companies like Originality.ai are always updating their models and you could use a simple Google search to answer your questions.

You could also have had the courtesy to put that in your original post. But let’s not get meta.

I did a quick test and it detected an AI summary of a random topic, even after two prompts to disguise it. So, as expected, AI text may have become a lot easier to detect.


There are literally hundreds of companies that are doing this. You could have the basic courtesy to do a Google search instead of asking.

This is an Internet forum and one of the ways such places are valuable is that it enables you to ask questions to other humans and allows those other humans, if they'd like, to answer.

You will get better results asking questions like GP's than Googling because you're asking the specific person who made a claim to quote an example, so you can judge from the specific example they provide, rather than the Google results. The best answers are often technically interesting niche tools which don't have great SEO.

Case in point: the platform you recommended does not show up anywhere on my first page of Duck.com results.



