Killing the king: why do we need moderators online?

Wherever we have forums or communities, we have moderators. It seems a fact as old as the internet itself — whether you are on an old phpBB forum, a subreddit, a Discord server, or Hacker News, moderators are the curators, police, judges, juries, and executioners, and essentially the rulers of their communities.

And when the rulers are just and kind, the kingdom prospers. Hacker News, for example, is consistently held up as a great community, with fair moderation rules and a “just” moderator (Dang). Similarly, some of the best subreddits on Reddit are the ones where moderators have been the key to making sure people aren’t cruel, content stays on topic, quality is consistent, and much more.

But just as there are good rulers, there are bad ones as well, and too commonly bad moderation can ruin communities. These bad moderators, like bad rulers, can fall into quite a few categories:

  • Apathetic: Moderators that don’t curate the content in their community or respond to changes in their community effectively and in a timely manner.
  • Authoritarian: Moderators that create overly strict rules no one can follow, or make poorly thought-out attempts to moderate.
  • Non-transparent / Non-accountable: Moderators that suppress, manipulate, and/or censor information in a way that allows them to make decisions with impunity
  • Biased / Selfish: Moderators that allow their own viewpoints, or those of a select group of friends and partners, to be the dominant voices in the community despite not representing its members.
  • Incompetent: Moderators that lack the necessary skills to run their community effectively, or make misguided rules that end up hurting the community
  • And so on.

How do you commit regicide in an online community?

Regardless of the reasons for bad moderators, we run into the same problem at the end of the day: when a community starts experiencing bad leadership, what can we do to “unseat” the rulers?

The answer, currently, is nothing. The common answers include:

  • “If you don’t like the moderation / rules / etc, you can make your own community”
    • This is basically impossible for any well-established community, and seems unfair — if the bad moderation is only a recent development, why should I throw away an entire community that I know and love over one person, rule, or recent change?
    • Even if it were possible, and a splintered community was created that started to get traction, who is to say that the moderator/moderators of that new community would be any better?
  • “Protest until the moderator steps down / the rule is removed”
    • This can really only succeed if the moderator has integrity, as there is usually no programmatic mechanism to force them to step down — just the fact that people are angry and might overwhelm the moderation queue and discourse, which will eventually simmer down given enough time.
      • E.g. with Reddit, at best you can complain to the administration team about a moderator’s actions, but assuming the moderator is not breaking any site-wide rules, usually an administrator will not forcibly remove an unpopular moderator.

Democracy to the rescue?

Given the above analogy to rulers, kings, and monarchy, one might ask: why haven't many forum and community platforms developed a system for "democratically" electing their moderators to help with some of these issues? To that end, the closest thing I could find was StackExchange, which hosts Community Moderator Elections every year, but I cannot comment on whether implementing this has helped, or potentially just created a whole new set of issues.

But if we did implement this functionality, how would moderator elections work? Are moderators elected for life? Who can vote for moderators, and who can qualify to be a candidate for moderator? These are probably all questions that could eventually be answered (either through trial and error, mirroring existing systems of government, or just making a technical decision and going with it), but still represent realistic challenges.

And even assuming we did implement "democratic" moderation elections in more forum software, would that actually achieve what we wanted? We've seen in many communities that if you let users choose what content gets posted, it is invariably low-effort content and memes; so if users could also vote for their moderators, would it not inevitably lead to electing a moderation team that mainly lets low-effort content proliferate?

Perhaps democratic moderation could work, or perhaps there are different systems of “government” that could be applied to forum software that could also work better than the rough “monarchy” and line of succession that exists today. But in my mind, I feel like we don’t need rulers or kings — we need filters.

No gods or kings, only man (-made filters)

Most systems of showing you information today are already “filtering” content for you:

  • Gmail filters your email using algorithms designed to remove as much spam as possible
  • uBlock filters ads out of the websites you visit
  • Bing attempts to filter your search results to remove low-quality content

That’s not to say these filters and algorithms are magic, or have no human component: there are entire teams of people behind the scenes designing and building the complicated algorithms and tech that filter spam out of your email and rank your search results! But regardless, the algorithms and filters tend to be as “unbiased” as possible, so as to prevent a large backlash, people leaving, and thus money lost. Imagine the uproar if the Gmail spam algorithm team decided one day that all email from companies related to Coca-Cola would now go to spam, because they were Pepsi drinkers — they’d quickly find themselves out of a job.

So why can’t we do the same thing with forums? If we combined a good set of tools that let users build powerful filters themselves with some “pre-baked” algorithmic filters run by the platform that users could “subscribe” to for more complicated filtering, users could quickly curate their communities to their liking.

Some examples could include:

  • User filters:
    • The ability to filter out websites, keywords, hashtags, certain users, etc
    • The ability to filter out posts with scores below a certain threshold
  • Platform filters:
    • The ability to filter out “political” posts algorithmically and/or with a pre-defined large filter list of words to ignore.
    • The ability to filter out “low-effort” memes (e.g. by using image similarity scores to help down-rank things like templates used by AdviceAnimals, etc)
    • On a post that is highly upvoted that you completely disagree with:
      • “Down-rank posts that were upvoted by users who liked this post”

Perhaps it could even be that if a user spent a long time curating all the combination of their filters or filter lists, or created their own algorithm to sort and filter content, and wanted to publish it to the community (for free or for profit), they could become something other people “subscribe” to on the platform, in the same way that you can subscribe to new “filter lists” on uBlock.

The end goal would be that users could have a consistent, filtered experience of any topic they chose based exactly on their preferences. If they didn’t like the way the content looked, they could remove some of their own filters they have in place, or unsubscribe to filters that had become too authoritarian, and get a more “raw” experience as well. The potential best of both worlds?

The king is dead, long live the king?

The first challenge I end up thinking of with the filter strategy is that it could still feel no different from a moderator-led community. Let’s assume, for example, that we implemented filters on a social platform in the following way:

  • Users could create complex filters of their own (e.g. removing certain websites, filtering for keywords, removing users they wish to block, etc)
  • Users could “subscribe” to filters that the platform provides (e.g. “No politics” filters, “No memes” filters, etc)
  • Users could also “subscribe” to other filter-lists that other Users curate on the platform (e.g. kind of like subscribing to EasyList and Fanboy’s filters within uBlock)
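As a rough sketch of the third point (the list names and rule shapes are made up), subscribing to several filter lists could just mean merging their rules into one effective filter, much like uBlock combines EasyList with other subscribed lists:

```python
# Hypothetical published filter lists a user could subscribe to.
no_politics = {"blocked_keywords": {"election", "senate"}, "blocked_users": set()}
scotts_no_memes = {"blocked_keywords": {"meme"}, "blocked_users": {"meme_lord_99"}}

def merge_filter_lists(*lists):
    """Union the rules of every subscribed list into one effective rule set."""
    merged = {"blocked_keywords": set(), "blocked_users": set()}
    for fl in lists:
        merged["blocked_keywords"] |= fl["blocked_keywords"]
        merged["blocked_users"] |= fl["blocked_users"]
    return merged

def is_visible(post, rules):
    text = post["text"].lower()
    return (post["author"] not in rules["blocked_users"]
            and not any(k in text for k in rules["blocked_keywords"]))

rules = merge_filter_lists(no_politics, scotts_no_memes)
feed = [
    {"author": "carol", "text": "Senate hearing thread"},
    {"author": "dave", "text": "Patch notes discussion"},
]
print([p["text"] for p in feed if is_visible(p, rules)])
# Carol's politics post is hidden; Dave's discussion remains.
```

Unsubscribing from a list would simply mean dropping it from the merge, which instantly restores whatever content that list had been hiding.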

Let’s also assume the platform generated the communities themselves, rather than users, mainly around topics. So you might have communities for:

  • Games
  • Technology
  • Sports
  • Etc

When you joined one of these communities, you would immediately be prompted to choose filters or filter lists that matched the experience you wanted for that community (e.g. for the “games” community, you’d be prompted to choose between top subscribed filters like “Scott’s no-memes filter list” and “Dan’s only-game-discussion filter list”, create or apply your own filter lists, and more).

The benefit here would be that if you ever felt like one of your subscribed filters got too opinionated (“Scott’s no-memes filter list” starts letting in some memes), you could unsubscribe from it and apply another one, or look at the “raw” feed of submissions to the “games” community, for better or worse (e.g. seeing all the spam submissions those filters normally hide from you).

And while this feels like it gives you more power to manage the content you wish to see, how does unsubscribing from a managed filter differ substantially from unsubscribing from a subreddit whose moderators weren’t applying rules to your liking? The main difference is that you still have access to the “content” you wish to see (filters merely filter existing content, rather than “gatekeeping” content into one community), but beyond that it doesn’t feel much different.

As the king goes, so goes the people?

Another wrinkle with filters is that you might lose the “community” aspect along the way. Filters are a great way to get a content feed curated just for you, and they solve the problems of bad moderation (you end up, to some extent, being your own moderator). That’s great when you want to consume content the way you wish to consume it — but if you are filtering out everything except exactly what you want to see, who do you share that content experience with? Is it anyone?

To that end, the closest metaphor I can think of is TikTok, as although you can’t quite filter content yourself, the “For You Page” continuously filters content that it thinks you may like, and so ends up being a “unique” experience.

And yet, over time, even on TikTok you can still feel a sense of camaraderie, as people will refer to being on “Bean-Tok” or “Seinfeld-Tok” or “Baby Gronk-Tok”, each representing a microcosm within TikTok focused on the subject at hand, and so it can still feel like you have “friends” or a community in common. It’s not quite the same feeling, but it does make you feel a bit like you might belong somewhere.

Your new robot overlord

One of the final major downsides to filters and algorithms is that you lose the “human” side of judgement in communities — e.g. what happens when filters and algorithms are not enough?

There’s a great mini online game that takes you through what it’s like to be a moderator called Moderator Mayhem, where it shows that context and nuance can be critical to making the “right” decision in moderating online communities — and algorithms and filters are not quite known for being contextual or nuanced.

Can this aspect eventually be overcome with better filters and algorithms, or perhaps machine learning or AI? Startups like Hive tend to believe so, and in the end I do too — I think eventually we’ll no longer need human moderators except for the most extreme circumstances, with the rest handled by filters, algorithms, and AI.

Will our robot overlords be better rulers? I guess over time we shall see…
