What Does Facebook Do to Filter Hate Speech and Misinformation?

Are Facebook's partnerships, technology and policies sufficient to fight the problems its platform causes in India?

A series of documents leaked by former Facebook employee-turned-whistleblower Frances Haugen has revealed that the platform let hate speech and misinformation spread unchecked, despite being aware of the problem.

The documents, which are being called 'The Facebook Papers', detail multiple notes and studies conducted in India since February 2019, including one that used a test account to examine Facebook's recommendation algorithm and how it exposed users to hate speech and misinformation.

These internal reports, accessed by a consortium of media organisations, noted a strong network of accounts, groups and pages that actively spread anti-Muslim propaganda and hate speech, resulting in high volumes of inflammatory content and misinformation with communal undertones.

In the course of this article, we will try to answer the following questions:

What is hate speech?
What is the platform doing to filter hate speech and misinformation?
What has been the impact, and are the adopted measures enough?

What is Hate Speech?

Facebook defines hate speech as any content that directly attacks people based on "protected characteristics, including race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity or serious disability or disease."

The “direct attacks” include any kind of violent or dehumanising speech, harmful stereotypes, statements of inferiority, disgust or dismissal, expressions of contempt, cursing and calls for exclusion or segregation, as explained in detail on Facebook’s transparency center.

What is Facebook Doing to Filter Hate Speech and Misinformation?

Since 2016, Facebook has followed a policy of “remove, reduce and inform” on its platform to counter misinformation.

What this means is that the platform removes content that violates its policies, reduces the reach and distribution of problematic content, and informs users with additional information or context, so they can decide whether they want to view or share it.
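To make the policy concrete, here is a minimal sketch of what such a decision flow could look like. It is purely illustrative: the field names, function and logic are our own assumptions, not Facebook's actual systems or API.

# A minimal, hypothetical sketch of a "remove, reduce and inform" decision
# flow. Nothing here reflects Facebook's real systems.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    violates_policy: bool           # e.g. flagged by a classifier or a moderator
    disputed_by_fact_checker: bool  # e.g. rated false by a third-party partner

def moderate(post: Post) -> str:
    if post.violates_policy:
        return "remove"  # take the content down entirely
    if post.disputed_by_fact_checker:
        # Keep the post up, but demote it in ranking and attach context
        # so users can decide whether to view or share it.
        return "reduce and inform"
    return "allow"

print(moderate(Post("example claim", False, True)))  # -> reduce and inform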

While Facebook has trained artificial intelligence (AI) software to detect and delete violating content in English, globally it relies largely on human moderators to monitor content that goes against its policies.

For instance, Facebook has partnered with third-party fact-checkers (3PFC) around the globe to combat misinformation. In India, too, there are ten different fact-checking organisations that work in over 12 languages.

The Quint’s WebQoof is also one of Facebook’s third-party partners in India.

However, despite these partnerships and human moderators, Facebook’s fight against misinformation and hate speech has hit repeated roadblocks.

Reports have revealed that Facebook's policy decisions in India have been influenced by political actors.

Srinivas Kodali, a researcher with the Free Software Movement of India, says that the current regulations in India don’t suffice.

“I think it's about time that the Parliament actually took up the issue of regulating these entities. The issue about IT Rules is not that they’re not required, but the problem is that they do not give enough rights to the citizens, they give too much power to the government.”
Srinivas Kodali

"I don’t think that the current government, the majoritarian BJP government has any interest in fixing this because we know it benefits them," he added.

Kodali added that, at most, the Parliamentary Committee on Information Technology would bring in Facebook for questioning, but the process would keep getting disrupted.

"Indian institutions are not in a state to look into this because they are actively being asked not to look into it," Kodali said.

An internal study, quoted in the Facebook Papers, looked at the amount of misinformation that Facebook took down with the help of its fact-checking partners.

This study noted how the company had created a “political white list to limit PR risk,” The New York Times reported.

There are several other challenges the platform faces in India. As per The New York Times, Facebook's AI is trained to work in only five of India's 22 officially recognised languages. Facebook says it employs human moderators to cover the rest.

But this is still troublesome: despite Hindi and Bengali being the fourth- and seventh-most used languages on the platform, Facebook does not have effective systems to detect and combat inflammatory content or misinformation in them.

It wasn't until 2018 that the company added hate speech classifiers for Hindi, and it did the same for Bengali only in 2020.

Systems to detect content inciting or promoting violence in these languages were put into place as recently as 2021.
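The language gap can be pictured as a simple routing step: a post in a language without a trained classifier never reaches automated detection at all, and is caught only if a human reviewer happens to see it. The sketch below is an illustration of that gap under stated assumptions, not Facebook's actual architecture; the set of covered languages is our own placeholder.

# Hypothetical sketch of language-based routing for moderation. The set of
# languages with automated classifiers is a placeholder assumption.

AUTOMATED_LANGUAGES = {"en", "hi", "bn"}  # assumed: English, Hindi, Bengali

def route_for_review(text: str, language: str) -> str:
    if language in AUTOMATED_LANGUAGES:
        return f"run {language} hate-speech classifier on: {text!r}"
    # No trained model exists: the post bypasses automated detection
    # and is flagged only if it reaches a human moderator.
    return f"queue for human review: {text!r}"

print(route_for_review("some post", "ta"))  # Tamil -> human queue in this sketch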

Speaking to AP, a Facebook spokesperson said, “Hate speech against marginalised groups, including Muslims, is on the rise globally. So, we are improving enforcement and are committed to updating our policies as hate speech evolves online.”

They also added that hate speech identifiers in Bengali and Hindi "reduced the amount of hate speech that people see by half" in 2021.

The newly disclosed documents also reveal that, of the total budget Facebook earmarks to tackle misinformation, 87 percent is spent in the United States – which accounts for just 10 percent of its market.

What Has Been the Impact of This Unchecked Hate?

While platforms like Facebook say they are actively working to take down content that is likely to incite violence or promote enmity and hate, there are numerous examples of online hate resulting in real-world violence.

In February 2020, BJP leader Kapil Mishra openly called for violence to clear mostly Muslim protesters off the roads, in a video published on Facebook. Riots ensued within hours, resulting in over 50 deaths.

The video was taken off the platform only after it amassed thousands of views and shares.

The pandemic, too, gave way to increasingly communal content on the platform. The Quint's WebQoof team debunked several such communal claims, in which social media users shared visuals or text pinning the spread of the coronavirus on the Muslim community at the onset of the pandemic.

This spike in communal content and hate speech did trigger violence against Muslims, from businesses being boycotted to street vendors being harassed on suspicion of spreading the virus.

Similar trends have been noticed in other countries: the Capitol Hill violence in the US, which allegedly started with 'Stop the Steal' Facebook groups and ended with a pro-Trump mob storming the Capitol, and Facebook posts targeting the minority Rohingya in Myanmar, which led to tensions and violence.

Are the Adopted Measures Enough? What More Needs To Be Done?

The 'Facebook Papers' revealed how the company had known about the multitude of issues its platform caused – through multiple studies conducted over the years – and yet took little to no impactful action to address them.

Speaking to The Quint, Internet Freedom Foundation's Apar Gupta emphasised the need for Facebook to reallocate its funding when it comes to content moderation in its markets.

Referring to the revelation that Facebook devotes just 13 percent of its misinformation budget to all countries other than the US, Gupta said:

"India's user base is larger than the population of the United States. The first necessary measure is that the budgetary allocation for misinformation and content moderation needs to be proportionate to the country as per the number of users."
Apar Gupta, Internet Freedom Foundation
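A quick back-of-the-envelope calculation, using only the figures reported above, shows how lopsided the current split is. The arithmetic below is ours, applied to the article's numbers.

# Proportionality check using the figures from the Facebook Papers:
# 87 percent of the misinformation budget goes to the US, which is
# roughly 10 percent of Facebook's market.

us_share_of_budget = 0.87
us_share_of_users = 0.10

rest_share_of_budget = 1 - us_share_of_budget  # 0.13 for everyone else
rest_share_of_users = 1 - us_share_of_users    # 0.90 of the user base

# Spending per unit of user base; 1.0 would mean proportional funding.
us_intensity = us_share_of_budget / us_share_of_users        # 8.7
rest_intensity = rest_share_of_budget / rest_share_of_users  # ~0.14

print(f"US users are funded at ~{us_intensity / rest_intensity:.0f}x the rate of everyone else")  # ~60x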

He also spoke about the algorithmic processes the company uses to moderate hate speech, which Facebook terms "AI" and claims takes down 90 percent of hateful content. Gupta noted that Facebook's internal reports revealed vastly different numbers.

"It only results in removal of 3 to 5 percent of hate speech, at best," he said.

A similar point is raised by senior journalist and researcher Maya Mirchandani in an article titled "Fighting Hate Speech, Balancing Freedoms: A Regulatory Challenge."

She notes that while AI and machine learning tools are being re-engineered and trained to monitor multimedia content on platforms, the volume of content may be too much for the technology to handle, and local contexts are "next to impossible for them to fathom."

For instance, people spreading disinformation and hate against minorities in India often use coded, derogatory words rather than naming their targets directly, thereby likely skirting the machine learning tools.
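A toy example illustrates why such coded language defeats simple keyword-style detection. The blocklist terms below are placeholders, not real slurs, and the filter is a deliberately naive stand-in for production models.

# Toy illustration: a naive keyword filter misses coded hate speech.
# The blocked terms are placeholders, not real slurs.

BLOCKLIST = {"slur_a", "slur_b"}  # hypothetical banned terms

def keyword_filter(text: str) -> bool:
    """Return True if the post would be flagged."""
    return any(term in text.lower() for term in BLOCKLIST)

print(keyword_filter("post containing slur_a"))        # True: direct slur caught
print(keyword_filter("post using a fresh code word"))  # False: coded term slips through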

However, as Facebook talks about upgrading its machine learning models and relying on AI for detection, the question is – why has the platform not been able to tackle the issue?

Prateek Waghre, a researcher at The Takshashila Institution, stressed the importance of holding platforms accountable, while cautioning that, realistically, platforms should be held "accountable in a way that does not reduce the relative power of civil society."

"There are certain things that Facebook can do, in terms how much they invest in local resources in the country that they operate in," Waghre said.

He added that the challenges seen on Facebook were common on other social media platforms too. "The challenge is not just for one company, but it's about fixing these issues at a broader, societal level."

(Not convinced of a post or information you came across online and want it verified? Send us the details on WhatsApp at 9643651818, or e-mail it to us at webqoof@thequint.com and we'll fact-check it for you. You can also read all our fact-checked stories here.)

(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)
