Facebook Failed to Police Abusive Content in Most Vulnerable Countries: Report

A former employee called the company's approach to growth "colonial".

The Quint

Former Facebook employees and documents viewed by several media outlets have revealed that the social media giant failed to "police abusive content" in countries where "such speech was most likely to cause the most harm" in its race to become a global service, Reuters reported.

Facebook currently operates in more than 190 countries and has over "2.8 billion monthly users who post content in more than 160 languages."

However, despite its global expansion, the company has not been able to prevent itself from becoming a channel for "hate speech, inflammatory rhetoric and misinformation."

According to Reuters, Facebook was aware that it did not have enough workers who had the right language skills or knew about local events, both of which were needed "to identify objectionable posts from users" in several developing countries.

'FB Prioritises Rooting Out Violence in Western Countries, Not in Developing Nations'

Meanwhile, Bloomberg said that while "Facebook prioritises rooting out violence and hateful content in English-speaking" Western countries, it does not do the same in developing nations, which are more vulnerable to "real-world harm from negativity on social media".

In addition, Facebook's "engagement-focused algorithm" also ended up promoting content that could have been false and divisive.

According to the company's staff who studied misinformation, Facebook's core products contributed to the "spread of harmful material", reported Bloomberg, adding that political considerations were sometimes given priority over quelling misinformation.

Bloomberg further revealed that the social media company's staff said that Facebook failed to prevent the "proliferation of groups" that incited violence on 6 January at the Capitol.

Further, the documents revealed that the AI systems employed by Facebook frequently were not up to the task of rooting out such content, and that the company had not made it easy for users to flag offensive content that violated the site's rules.

According to former employees, such shortcomings meant the company could not fulfil its promise of blocking hate speech and other rule-breaking content in "places from Afghanistan to Yemen".

One former employee pointed out that the way the company identified abuses on its site had "significant gaps", especially in countries such as Myanmar and Ethiopia, which were at risk of "real-world violence", reported Reuters.

On Sunday, 3 October, Frances Haugen, a 37-year-old data scientist from Iowa and former Facebook employee, revealed in an interview with the CBS news show "60 Minutes" that she was the whistleblower who provided documents to the Wall Street Journal and US lawmakers.

The documents provided by the whistleblower led to a detailed investigation by WSJ and a Senate hearing.

Ashraf Zeitoon, Facebook's former head of policy for the Middle East and North Africa, called the social media giant's approach to growth "colonial," saying it was "focused on monetisation without safety measures".

Facebook's Response

Meanwhile, Facebook spokesperson Mavis Jones said in a statement that the company had employed native speakers all over the world, who reviewed content in "more than 70 languages" and also had experts in "humanitarian and human rights issues."

Jones said these teams work to "stop abuse on Facebook's platform in places where there is a heightened risk of conflict and violence".

"We know these challenges are real, and we are proud of the work we've done to date," Jones said.

According to Bloomberg, Facebook is developing new products and services to attract a younger audience.

The company claimed that hate speech is under 1 percent of the overall content on its platform and is on the decline.

The company further said it "uses research, hypothetical tests and other methods to analyse how it recommends content and improve on efforts to curb the spread of harmful content."

The company said it has adequately revealed information concerning its growth and said the documents shared by Haugen showed a "curated selection" that "can in no way be used to draw fair conclusions about us."


India Connect

To determine how its tools affected people in India, Facebook had set up a test account in the country in 2019. However, within three weeks, the "fictional user's account devolved into a maelstrom of fake news and incendiary images".

Congress Accuses Facebook of Manipulating India's Elections

The Congress party, on Monday, 25 October, accused Facebook of influencing India's elections and of being an ally of the Bharatiya Janata Party (BJP), and demanded a joint parliamentary committee (JPC) probe into the matter, PTI reported.

"Facebook has reduced itself to a Fakebook," Congress spokesperson Pawan Khera said, adding that Facebook was an ally of the BJP and pushing its agenda. Facebook has not yet responded to the allegations.

He further said that Facebook's internal reports had "identified fake accounts with over a million impressions", but that the company did nothing.

"We demand a JPC probe to look into the role of Facebook in influencing our elections," he said, adding that Facebook was trying to "compromise and undermine our democracy in trying to shape through fake posts the opinions of people".

"What right does Facebook have to push a particular ideology through fake posts, pictures and a narrative? It is shocking how only 0.2 percent of hate speech is removed by Facebook, which despite making the most money from India, does not have the mechanism to filter posts in Hindi or Bengali," he added.

He further said that the role of Facebook could no longer be "dismissed as an error of omission as they are knowingly furthering the agenda of the ruling party and its ideology which is hate-filled, bigotry and dividing society".

He also spoke about the "serious interference" in India's elections by a "foreign company".

"Why should we not accuse Facebook of interfering in our elections by influencing the voting behaviour of its consumers. This is serious election fraud," he alleged.

Khera also demanded answers from Facebook and the Centre.

"Why is Facebook quiet on these accusations which have come from within it. Why is the government silent on this, just because it suits their agenda and Facebook has become a tool in the hands of the BJP and its affiliates," he asked.

He further asked why Facebook hadn't designated the RSS and Bajrang Dal as "dangerous organisations based on its own internal reports".

"While the Government of India had been extremely proactive against Twitter citing Social Media safety compliance, why are they not uttering a word now," he asked.

(With inputs from Reuters, Bloomberg and PTI)

(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)

