Artificial Intelligence VS Bad Guys: Using Algorithms to Moderate Social Networks


The internet is a nasty place. Well, beautiful and nasty at the same time.

Like a Lego Store full of incredible toys sold at exorbitant prices.

But without porn. No porn in Lego Stores.

As the amount of data uploaded by users of online platforms continues to grow, traditional human-led moderation is no longer sufficient to stem the resulting tide of toxic content.

To meet the challenge, many of these platforms have adopted AI-powered moderation mechanisms, based specifically on machine learning algorithms.


A necessary premise

I won't focus much on the technical aspects of artificial intelligence here. You can find everything you need to know about this topic in my previous articles on machine learning and deep learning.

Instead, I’ll show you the main ways these algorithms are applied in online environments to keep bad guys at bay.

Are you ready? Let’s start!


How does AI moderation work?

AI online content moderation is essentially based on machine learning, which allows computers to make decisions and predict results without being explicitly programmed to do so.

This approach is extremely data-hungry: the system needs to be trained on large datasets to improve its performance and fulfil its tasks properly.
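To make this concrete, here is a minimal sketch of how such a classifier learns from labeled examples. The four-sentence "dataset" and the toxic/ok labels are purely illustrative (real moderation systems train on millions of examples with far more sophisticated models); the code just counts word frequencies per label, Naive Bayes style, and picks the likelier label for new text.

```python
import math
from collections import Counter

# Toy labeled dataset -- purely illustrative; real systems train on millions of examples.
TRAINING_DATA = [
    ("you are an idiot and i hate you", "toxic"),
    ("shut up nobody wants you here", "toxic"),
    ("what a lovely photo, thanks for sharing", "ok"),
    ("great article, i learned a lot", "ok"),
]

def train(examples):
    """Count word frequencies per label (a tiny Naive Bayes model)."""
    word_counts = {"toxic": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest (smoothed) log-likelihood."""
    vocab = {w for counter in word_counts.values() for w in counter}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior: how common this label is overall.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING_DATA)
print(classify("i hate you, idiot", word_counts, label_counts))   # -> toxic
print(classify("thanks, great photo", word_counts, label_counts)) # -> ok
```

The more (and more varied) examples you feed it, the better the word statistics get: this is the "data-hungry" part in miniature.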

In recent years, machine learning systems have experienced a massive breakthrough thanks to the introduction of deep neural networks, which enabled a further step forward: so-called deep learning.

Deep learning allows systems to recognize and manage complex input such as human speech or Dream Theater solos (those are probably too much even for a deep neural network).


Pre and post moderation

AI usually handles online moderation in two distinct phases. The first, called pre-moderation, takes place before the content is published, and it's almost exclusively a task for automated systems.

Post-moderation (or reactive moderation), instead, happens after content has been published and flagged, by users or by the AI itself, as potentially harmful. In other cases, the content may have been removed previously but needs an additional review upon appeal.
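The two phases can be sketched as a simple pipeline. Everything here is a toy assumption: the keyword-based `toxicity_score` stands in for a real model, and the block/flag thresholds are made-up numbers, not values any platform actually uses. The point is the routing: pre-moderation decides before publication (block, send to human review, or publish), while post-moderation pulls already-published content back into the review queue after a user report.

```python
from dataclasses import dataclass, field

def toxicity_score(text: str) -> float:
    """Stand-in for a trained model's score in [0, 1]; keyword-based for illustration."""
    bad_words = {"idiot", "hate", "scam"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in bad_words for w in words) / max(len(words), 1)

@dataclass
class ModerationPipeline:
    published: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)  # awaiting human moderators
    blocked: list = field(default_factory=list)

    # Illustrative thresholds, not real platform values.
    BLOCK = 0.5
    FLAG = 0.2

    def pre_moderate(self, text: str) -> None:
        """Runs before publication: block, flag for human review, or publish."""
        score = toxicity_score(text)
        if score >= self.BLOCK:
            self.blocked.append(text)
        elif score >= self.FLAG:
            self.review_queue.append(text)  # a human takes the final call
        else:
            self.published.append(text)

    def post_moderate(self, text: str) -> None:
        """Reactive path: a report pulls published content back for review."""
        if text in self.published:
            self.published.remove(text)
            self.review_queue.append(text)

pipeline = ModerationPipeline()
pipeline.pre_moderate("you idiot, i hate this scam")  # blocked outright
pipeline.pre_moderate("nice weather today")           # published
pipeline.post_moderate("nice weather today")          # user report -> review queue
```

Note the middle band between the two thresholds: content the model is unsure about goes to humans rather than being auto-decided, which is exactly the division of labor described above.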

Main phases of AI moderation
Main phases of AI moderation. Source: Cambridge Consultants


AI pre-moderation

AI is commonly used to boost pre-moderation accuracy by flagging content for human review than