Hacker News

I'll never forget having to be a moderator for a somewhat popular forum back in the day and oh man did I learn how a few people can make your life hell.

One thing rarely mentioned in these discussions is the toll on the poor moderators. Having to look at all that stuff, some of which can be very disturbing or shocking (think death, gore, etc., as well as the racy things), really takes a toll on the mind. The more automation there is, the less moderators have to deal with, and what's left is usually the tamer middle-ground content.



> I'll never forget having to be a moderator for a somewhat popular forum back in the day and oh man did I learn how a few people can make your life hell.

I was also a mod for a popular gaming forum way back in the day. It was pretty miserable looking back.

For me, the extreme/shocking content wasn't the biggest issue. If you saw that type of content, you just immediately deleted it and permanently banned the account. Quick and easy.

What was a lot harder were the toxic users that just stuck around. Not doing anything bad enough to necessarily warrant a permanent ban, but just a constant stream of shitty behavior. Especially sometimes when the most toxic users were also some of the most popular users.


> What was a lot harder were the toxic users that just stuck around. Not doing anything bad enough to necessarily warrant a permanent ban, but just a constant stream of shitty behavior. Especially sometimes when the most toxic users were also some of the most popular users.

What people find out, again and again, is that you just ban those users. Don't need an excuse. Just ban them. Even if they are popular. Your community will be much better once you do.


I have done this for years and it just works. You know these users give off a bad vibe and that others are put off by it. Just remove them and ignore the complaints from the user. They can find another space that accepts them. I don't even bother writing rules lists because they are pointless.

I do give warnings out first but usually that does nothing to change behavior anyway.


> What people find out, again and again, is that you just ban those users. Don't need an excuse. Just ban them. Even if they are popular.

Rule #1 of moderation. Keep bans in an open, transparent log and the community will respond positively.


That was also my experience moderating a medium-sized city subreddit. Bigger problems were easily dealt with. Toxicity was a lot harder to deal with, especially when it's so easy to create a throwaway account. I quit when one user decided to target me personally, and kept evading bans to cause more grief.

All of this crap, and your reward is more complaints, more demands.


> ...so easy to create a throwaway account

Bingo.

Why is authenticity so verboten?

If u/gonewild can manage user verification, then anyone can.

Doubly so when surveillance capitalists like facebook and NSA already have (shadow) profiles for every person, living and dead.

Facebook absolutely already knows the true identity of each and every troll. Not verifying account creation is a convenient fiction, willful ignorance, allowing their outrage machine to profit. "lalala", hands over ears, "i can't hear you!"


> What was a lot harder were the toxic users that just stuck around.

In about a year as a mod on a semi-busy political forum, the trickiest situations always seemed to involve two users, neither generally horrible but both continually stepping over the line in their interactions with each other. And each had their own highly motivated allies, so any action would ignite a new firestorm of complaints about biased moderators. What a nightmare. Probably part of why that site doesn't exist any more.

BTW, that's also where I learned some rules of effective moderation. Unfortunately, finding a forum where moderators know how to moderate is hard. Far more often, they fall into a pattern of ruling on technicalities instead of considering what will actually improve discourse, and they always end up getting manipulated by the community's worst members to drive out better ones.


> What was a lot harder were the toxic users that just stuck around. Not doing anything bad enough to necessarily warrant a permanent ban, but just a constant stream of shitty behavior. Especially sometimes when the most toxic users were also some of the most popular users.

This is the problem with community guidelines being the be-all and end-all. Hard rules are great for catching insults or slurs. They're not so great for dealing with actual abuse or inciting very bad ideas.

I think Innuendo Studios' video on this problem is one of the best explanations: https://www.youtube.com/watch?v=P55t6eryY3g

There's a reason white supremacists and literal nazis (yes, really with the salutes, genocide fantasies, Jewish conspiracy theory and all) have shifted from using obvious language to dogwhistles and "just asking questions". Erosion is a much more powerful force than a few direct impacts.

If you want to moderate a community, you need to have a plan for dealing with toxic individuals, not just language. We tend to imagine "hackers" and (foul-mouthed) "trolls" but I find Molly's archetypes a lot more thought-provoking: https://twitter.com/mollyclare/status/1254886822779502593?la...

I think community moderation is a problem where we tend to run into the Dunning-Kruger effect, because it seems like something we have an intuitive understanding of even if we have zero experience actually doing it and have never learned what works and what doesn't.


> Hard rules are great for catching insults or slurs.

Bingo. Hard rules encourage brinksmanship. There's always a class of "picador" users who will poke and prod and provoke just up to the line where the rules are, then flag the response. A moderator too wrapped up in rule by technicality (or too lazy to look at context) will then come down as harshly as they can on the author of the flagged comment, and give the picador a total pass. Problem is, the picador does this again and again and again, never making a positive contribution, while their targets are often chosen precisely for their prominence. Guess which one is encouraged to continue their behavior, and which one is encouraged to go away. Has the "moderator" helped to improve discourse on the site, or helped to ruin it?


We had some of the crazy people track us down and call in bomb/death threats to our office building.

So many thought we were in collusion with a specific forum moderator (out of a million forums) and got so incensed. And this was in the early 2000s, which we think of as a saner time.


A close friend of mine is a primary contributor to an extremely popular console emulator. He learned quickly to author under an alias which he keeps secret – even from most of our friend group.

It's bizarre that he has to keep this real love of his, which he's devoted hundreds and hundreds of hours to, so close to his chest.

But sadly, the Greater Internet Fuckwad Theory holds true today.


This forum, didn't happen to start with a T and end with a G? (Shortened acronym)


> Especially sometimes when the most toxic users were also some of the most popular users.

If the "toxic" users were the most popular, how do you know you were not the "toxic" one instead? If the community is supporting the "toxic" material, how could it be "toxic"?

In my experience, it's not the toxic users that are the problem. It tends to be the toxic mods. You can ignore toxic users. You really can't ignore the toxic mods.

I also have a problem with the term "toxic". It ultimately means "something I don't like". Mods should never ban "toxic" content. They should ban illegal and perhaps non-pertinent content. But that's just my opinion.


There have been a bunch of articles lately about the horrors that Facebook moderators have to pore over. FB has been forced to pay $MMs to some of them for mental health: https://www.bbc.com/news/technology-52642633


> Having to look at all that stuff, some of which can be very disturbing or shocking

Yup, was the designated person to report all child porn for our photo-sharing website. It was horrific. Some of those images still haunt me today, they were so awful. And the way the reporting to the NCMEC[0] server worked, you had to upload every single image individually. They did not accept zip files or anything at the time. It was a giant web form that would take about forty image files at once.

[0] https://www.missingkids.org/HOME


Even without seeing that stuff, seeing a constant stream of bad behaviors with the probably-good behavior filtered out can subtly change your priors about people - it makes you start thinking people suck more in general, kind of like how watching news where they show the worst of the worst makes one trust people less.

I definitely used to notice this after some time working on our moderation queues.


> I'll never forget having to be a moderator for a somewhat popular forum back in the day

Similar experience, though I'll say that the worst was dealing with other teenagers that threatened suicide when you banned them. That always took a lot of effort to de-escalate and was a complete drain on personal mental health.

I could deal with porn, shock images, and script kiddie defacements, but having people threaten to kill themselves was human and personal. It hurt, especially when the other person was legitimately having a personal crisis.

I still think about some of these people and wonder if they're okay.


Several years ago, a popular gaming forum with a significant teenage audience that I used to read declared a simple policy toward threats of suicide: if you were threatening to kill yourself, do it, and stop messaging the mods; they are not there to talk you down from a ledge. It seemed pretty effective.


That's horrible! Did you run that plan past any lawyers?


Your parent said they used to read the forum, not that they ran it.


Oof. That is a genuinely terrible and cruel approach.


Pointing to a mental health crisis service would be far more advisable.

I'd stick with the ban if the threat was in response to moderation actions.


Most of these threats weren’t serious, just some problematic teen looking for attention. Showing them that some strangers don’t give a shit about them or their antics could be a real eye-opener.


And what of the threats which were not?

Responding "just kill yourself already" is simply beyond the pale.


Someone still ends up reviewing images for the ML training dataset.

That's still a huge improvement over every mod everywhere seeing the same images repeatedly, but someone has to make the call at some point.



