As Facebook grows bigger and bigger, having reached the new milestone of 2 billion users, one question arises: how does Facebook determine whether a post is abusive enough to be taken down? What exactly are the Facebook guidelines used to protect all active and not-so-active users? This is one of the most controversial aspects of the giant social media platform.
What are the Facebook Guidelines Used to Protect Its Users?
In a report published by ProPublica, an independent, nonprofit newsroom that produces investigative journalism in the public interest, the authors find that “in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities”. The report illustrates this with two example posts.
In one post, a political figure calls for the killing of radicalized Muslims. In another, a poet asserts that “all white people are racist” and that, in the fight against racism, this is the viewpoint from which change must start.
The ProPublica article goes on to say that in the former instance Facebook did not censor the post, while in the latter it did. The authors also extend the debate over the Facebook guidelines to cases of fake news, which circulated on the social channel for days on end. Their point is that the Facebook guidelines work like an algorithm.
The formula used in elaborating the Facebook guidelines, according to ProPublica, is that certain categories fall within a protected class: “black children” is a specific class subset, while “white men” is not. They conclude that, in real-life situations and very specific, fine-grained speech situations, broad categories become ineffective. If a post targets members of a protected class, the post is censored. If a post targets a subset within a class, that is, an unprotected category carved out of a protected one, the post is not censored.
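The rule ProPublica describes can be sketched as a small classifier. Everything below is an illustrative assumption, not Facebook's actual implementation: the category names and the `is_protected` helper are hypothetical, and the logic only mirrors the reported rule that a target is protected only when every one of its attributes belongs to a protected category.

```python
# Hypothetical sketch of the subset rule described by ProPublica.
# Category sets and function names are assumptions for illustration,
# not Facebook's real code or policy categories.

PROTECTED = {"race", "sex", "religious affiliation", "national origin"}
UNPROTECTED = {"age", "occupation", "social class"}

def is_protected(attributes):
    """A target is protected only if EVERY attribute is protected;
    a single unprotected attribute makes the subset unprotected."""
    return all(attr in PROTECTED for attr in attributes)

def should_censor(attributes):
    """Under the reported rule, only attacks on fully protected
    targets are removed."""
    return is_protected(attributes)

# "white men" = race + sex: both protected, so the post is censored.
print(should_censor({"race", "sex"}))   # True
# "black children" = race + age: age is unprotected, so the subset
# is unprotected and the post is not censored.
print(should_censor({"race", "age"}))   # False
```

This is exactly the asymmetry the report criticizes: an attack on “white men” is removed, while an attack on “black children” is not, because the broad categorical test ignores which group is actually vulnerable.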
The group thus points out that the Facebook guidelines are ineffective in some sensitive, delicate cases precisely because of this algorithmic approach to discourse.