Moderator's Note: Managing community knowledge through comment moderation

In Extremis

In 2014, a local council in Australia’s Sunshine Coast region approved the construction of a mosque.

The community polarised instantly. Some locals organised a protest against the mosque, to be held at its proposed location. Others advertised a counter-protest to meet them.

I was the Network Online News Editor overseeing the local newspaper, and both the paper’s Editor-in-Chief, Mark, and I made sure the journalists rostered on for that day were told to call us if things got out of hand.

Instead of a call from the journalists, Mark and I were called by the local police. They’d had to call for backup to stop the violence sparking between the two groups — violence that had been stoked by posts on our news story comments section and our Facebook group. Take them down, we were told.

An hour later, more than 90% of the comments across that site and all of the comments made on Facebook had been deleted by either Mark or me. Within twenty minutes of that, the two groups had been moved on.

In the debrief with the journalists whose job it had been to moderate the comments that day, they told us it was unethical to stand in the way of free speech, and that they had been too busy covering the resulting fight to moderate the comments anyway.

Police asked the media not to report on the injuries from the day. Every network agreed, knowing it would only lead to reprisals. For context, I worked for Rupert Murdoch at the time, and it was one of the few times his office and I agreed on something.

Who, What, When, Where, Why, How

The irony of one of our community’s standards police going by his initials, ‘KGB’, isn’t lost on me.

For twenty years — eleven of them professionally — I have been called to keep the peace in online communities. My opinions on the topic come from a broad set of different experiences, books and essays I’ve read, and conversations with countless other moderators and community managers.

From seeing a post sent into oblivion to being k-lined back in the ‘good old days’, anyone who has been ‘moderated’ has likely felt as though they’ve had their voice ‘silenced’. If it were up to a good moderator, moderation wouldn’t always be so blunt. Many of us would prefer that a comment or post be worded differently, toned down, or expressed in the general case instead of aimed at a specific community member. Our aim is to keep things on topic, constructive, and moderate within the expectations of the community we serve. When the tools for nuance exist, good moderators use them. Sadly, few platforms offer such tools.

Moderators are the step between citizen and police force that can only exist online, thanks to the constructed nature of internet communities. While not in a position to command the respect of the village elders of years gone by, we take on a similar role, using soft power to steer our technological villages.

We exist where communities exist because, as is rarely the case offline, online communities are both publication platforms and private property, and expressing ideas on private property is done at the discretion of the platform’s owner. The owners of this forum cannot be compelled to publish my views; it is their property, and therefore their reputation and responsibility. Because they can be held accountable for what I say here, they need to exert some control over what I publish on their behalf.

Given most platform owners are busy people, and some of these platforms are beyond any owner’s ability to self-moderate, the job gets outsourced to people like me.

To ensure communities reflect their creators’ visions, and to keep radicalisation and extremism from taking root, moderators use what tools we have to sculpt both the content and the membership. The contexts, goals, and tools available vary wildly across the spectrum of online platforms, though I have found some commonalities. Deleting comments and deleting users are close to universal. Suspensions, warnings, and feedback are less common, but exist in plenty of the more focused communities. The more nuanced the tool, the greater the mental burden it places on the moderators.

Drawing the line

An experienced moderator develops a form of prescience that those who post or read comments usually lack. From our perspective, wider and somewhat removed, we must form a picture of where a conversation is heading, not just where it currently is. Moderators cannot operate without knowing, or at least approximating, which sentiments sit on ‘the line’ — the point at which someone involved is likely to start down the path of extremity.

On forums with long, considered posts, deciding where the line exists frequently requires consideration of the ideas being expressed. Even on a cooking forum, disagreements over something like a recipe can lead to extreme positions, community-destroying posts, and even offline reprisal. A moderator knows that posting a recipe for red velvet chocolate cake is usually within the line, unless the last five times someone posted a recipe for that cake, a schism formed in the community over the use of a particular ingredient. In that case, posting the recipe is over the line, whether the poster likes it or not.

On real-time platforms, where conversation is synchronous, moderation demands the same prescience but brings additional problems. The rapidity of conversation means judgements about where the line exists must be made comment by comment, in the moment. One moderator might see the death-spiral forming one or two comments earlier than another, while a third might think the line must actually be crossed before action is taken, and that no attempt to steer conversation away from it should be made.

These decisions are like those made by the referee in a sporting match, and often attract as much criticism. Everyone feels as though their opinion on where the moderation line exists is the only logical position. A moderator may need to listen to feedback in order to develop their concept of community expectations, but likewise, non-moderators need to remember that they are not the referee. Community members almost always agree to some terms of use or set of rules before posting. If those whose role it is to enforce that agreement believe there has been a breach, they are within their rights to take action.

In short, the less the target of moderation dwells on a decision, the healthier it is for them.

Those in charge of moderators are, likewise, well within their rights to retrain, remonstrate, or remove a moderator who they feel is not sculpting their community properly.

The consequences of bad moderation

Online community members now have the tools to cause injury or death in the course of an online altercation. Doxxing, stalking, swatting, and even insurrection are all well-documented results of online platforms failing to identify trouble-makers, anger-spirals, and extremists before they’ve crossed the line.

A disagreement between a reader and a local election candidate on a Facebook page (whose moderator reported to me) resulted in the reader publishing private details about the candidate’s daughter’s movements. The daughter was horribly bashed at school later that day; the post made it into the police report. Those details should have been deleted within seconds and the poster banned.

Multiple online platforms failed to draw the line at the radicalisation and extremism presented by the QAnon terrorist group. People died when those platforms were used to organise an offline insurrection.

A Twitter argument between a young pregnant woman and a mother who disagreed with the parenting situation into which the child would be born became so protracted and nasty that the pregnant woman’s stress resulted in a miscarriage. Twitter had months to ban the aggressor — I’d been the one to report her online and offline abusiveness. The miscarriage was my partner’s; she never recovered from the loss.

These failures to keep online communities moderate, constructive, and safe could have been prevented by early intervention. While moderators may not be responsible for enforcing offline laws, they can be seen as first responders to the situations that would otherwise eventually call for police intervention. The borders of bad moderation are partly defined by the results of good moderation — a community that looks like it doesn’t need to be moderated heavily at all.

Fortunately, thanks to the strong involvement and passion of The Productivists community’s admins, my moderation responsibilities are infrequent and light-handed.

I appreciate the members of this community for their maturity, their knowledge, and their behaviour. Thank you, all of you, and let’s work together to remain the on-topic, constructive, and life-affirming community we enjoy right now.


well said. thank you!


Much appreciated. It’s such a complex topic and there’s so much to be said about it that I felt it needed to be discussed before we start on all these plans for group agoras, etc.
