Facebook Pushes To Crack Down on Hate Speech

Facebook moves forward on hate speech crackdown; hires 3,000 more

In an ongoing series called Hard Questions, Facebook staff and management have been explaining to the public how they handle some of the toughest issues involved in overseeing the world’s largest social media platform. One problem addressed in the series is the use of artificial intelligence to fight terrorism. Facebook has admitted it has not been handling hateful rhetoric as well as it could. However, a detailed blog post by Richard Allan, Facebook’s Vice President of Public Policy for Europe, Africa, and the Middle East, explains the efforts the social media giant is making to manage hate speech cases more efficiently.

In the blog post, Allan took the time to explain how Facebook defines hate speech. He said, “Our current definition of hate speech is anything that directly attacks people based on what are known as their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability, or disease.” This raises the question of which statements the company deems an attack and how it distinguishes them from ordinary posts. For example, a joke may touch on race without being intended as hate against people of a particular color.

Addressing this question, Allan said, “There is no universally accepted answer for when something crosses the line. Although a number of countries have laws against hate speech, their definitions of it vary significantly.” In other words, there is no clear-cut way to identify and remove hate speech. The Facebook team acknowledged that it relies heavily on users to find hateful content and flag it, so that Facebook can then delete the material and prevent it from offending anyone else. Notably, they also admitted that they are relying too much on user reports and intend to make marked improvements this year.

One of the ways the social media mammoth plans to improve its handling of hate speech is by hiring 3,000 more people for its Community Operations team, which currently has 4,500 people. With more reviewers, the team expects to identify and delete posts containing hate speech more efficiently, helping the platform avoid being labeled as one of the primary channels people use to spread hateful content.

Allan also addressed the censoring of people’s posts. He noted that failing to remove content that might offend other users would mean falling short of the company’s community standards. He added that improvements in identifying offensive content should also reduce how often people have their posts deleted by accident. On this point, Allan said, “When we remove something you posted and believe is a reasonable political view, it can feel like censorship. We know how strongly people feel when we make such mistakes, and we’re constantly working to improve our processes and explain things more fully.”

The comments come as Facebook reaches the 2 billion user mark.
