Meta Allowed Inflammatory Ads To Go Live Despite EC's Silence Period: Report

A series of ads inciting violence and targeted at voters cleared Meta's ad review process, new research finds.

Karan Mahadik
Tech News

Image used for representational purposes only.

(Photo: iStock/Altered by The Quint)


With the 2024 Lok Sabha elections in full swing, new research suggests that Meta did not proactively block ads vilifying Muslims and inciting communal violence from being published on its platform.

A report by corporate accountability group Ekō and India Civil Watch International states that 14 "highly inflammatory" ads were approved by the big tech platform ahead of Phases 3 and 4 of the seven-phase general elections.

"These ads called for violent uprisings targeting Muslim minorities, disseminated blatant disinformation exploiting communal or religious conspiracy theories prevalent in India's political landscape, and incited violence through Hindu supremacist narratives," the report claimed.

However, it must be noted that the researchers themselves created and submitted these ads to be published on Facebook as a "stress test" of the platform's ad review mechanisms in the midst of elections in India.

Interestingly, the report states that one of the ads Meta cleared for publishing promoted a debunked claim that Union Home Minister and BJP leader Amit Shah was threatening to do away with reservation for Scheduled Caste and Scheduled Tribe (SC/ST) groups.

Yet, in a blog post outlining steps taken specifically for India's general elections, Meta had said that it won't allow ads that "contain content debunked by third party fact checkers."

Here are the other highlights from the report.

Key Findings From the Report

Of the 22 ads submitted, 14 were approved for publishing on Facebook even though they violated Meta's own policies on hate speech, bullying and harassment, misinformation, and violence and incitement, the report stated.

Who made these ads? Meta has a separate authorisation process for those who want to publish election-related or political ads in India. But the researchers appear to have taken a different route. They reportedly set up dummy accounts located outside India.

"The process of setting up the Facebook accounts was extremely simple and researchers were able to post these ads from outside of India," the report said.

When asked if there was any verification/authorisation required to set up these dummy accounts, Ekō campaigner Maen Hammad told The Quint that they "did not run into any major authorisation steps, just creating dummy Facebook accounts and respective pages to then run the ads."

He further clarified that the ads were not submitted as "political, social, or election ads which have a separate review and disclaimer process."

What did the ads say? Several of the ads that were cleared by Meta for publishing promoted false claims about Muslim appeasement by Opposition parties, the report said.

Other approved ads sought to play on "fears of India being swarmed by Muslim invaders," "used Hindu supremacist language to vilify Muslims," and "called for the execution of a prominent lawmaker claiming their allegiance to Pakistan."

"Two ads pushed a ‘stop the steal’ narrative, claiming EVMs were being destroyed, followed by calls for a violent uprising," the report said. The specific contents of these ads were shared by Ekō with The Quint.

It's worth noting that ads questioning the legitimacy of the elections are prohibited, as per Meta's policy, along with ads that discourage people from voting and ads with premature claims of election victory.

"We worked with partners to find and use ongoing hate speech and disinformation narratives present in India's socio-political context in order to reflect those into the ads from the experiment," said Ekō's Hammad while elaborating on the content of the ads that were allowed to run on Facebook.

How did the ads appear? The ads contained messaging written in English, Hindi, Bengali, Gujarati, and Kannada.

"Each ad was accompanied by a manipulated image created with widely used generative AI tools such as Stable Diffusion, Midjourney, and Dall-E," the report said.

"For example, Ekō researchers were able to easily generate images showing a person burning an electronic voting machine, drone footage of immigrants crowding India’s border crossing, as well as notable Hindu and Muslim places of worship on fire," it added. To be sure, The Quint prompted Midjourney to generate these images and received similar results.


Who could've seen it and when? The report also stated that Meta allowed the 14 inflammatory ads to be published on Facebook despite the 48-hour silence period being in place ahead of the third and fourth phases of the 2024 Lok Sabha elections.

"This silence period requires a pause on all election-related advertising. The experiment exposes Meta's failure to potentially comply with Indian election laws, which impose restrictions on advertisements at different phases of the electoral process," the report said.

Researchers also stated that the ads specifically targeted voters in "highly contentious districts that were entering the silence period" in Madhya Pradesh, Assam, Karnataka, Andhra Pradesh, Telangana, and Kashmir.

Notably, the researchers pulled the ads just before they were scheduled to be published, in order to prevent any Facebook users from being exposed to inflammatory and violent content. "We made sure not to share the actual ads," Hammad reiterated to The Quint.

Were any problematic ads rejected? Yes. "Five ads were rejected for breaking Meta’s Community Standards policy issues regarding hate speech and violence and incitement," the report said.

"An additional three ads were submitted and rejected on the basis that they may qualify as social issue, electoral or politics ads, but they were not rejected on the basis of hate speech, inciting violence, or for spreading disinformation," it added.

However, the researchers claimed that once an ad was rejected, they were able to make tweaks to it and get it approved for publishing by Meta.

"For example, a variant of an ad inciting violence against Muslims and advocating for the burning of their places of worship was permitted to pass through the platform's filters after adjusting a handful of words or generating a different image. These adjustments were made within minutes, and the revised ads were tested and cleared in less than 12 hours. For motivated and organised bad actors, uncovering and exploiting such loopholes would likely be a straightforward task."
Ekō and ICWI report

What Is Meta's Ad Review Process?

In Meta's own words, "the ad review system reviews whether ads violate our policies." The review begins after an ad is created or edited and is typically completed within 24 hours.

Key components of the ad, such as photos, videos, text, and targeting information, as well as its corresponding landing page or other destination, are analysed during the review process.

The review is primarily conducted by Meta's automated systems, as per the company's blog post. However, human reviewers are reportedly brought in to manually review ads in some cases.

Schematic diagram of Meta's ad review mechanism

(Photo Courtesy: Meta)

These ads are checked against Meta's Advertising Standards, which prohibit hate speech, discriminatory content, and misinformation, among other things.
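Meta does not publish the internals of this pipeline, but the description above (automated checks on each ad component, with escalation to human reviewers in some cases) can be illustrated with a minimal, hypothetical sketch in Python. Every name, score, and threshold below is an assumption made for this example and does not reflect Meta's actual systems.

```python
# Hypothetical sketch of an automated-first ad review pipeline with
# human escalation. All names, checks, and thresholds are illustrative
# assumptions; they do not describe Meta's actual implementation.

from dataclasses import dataclass

@dataclass
class Ad:
    text: str
    image_labels: list[str]   # e.g. labels produced by an image classifier
    targeting: dict           # e.g. {"regions": [...], "category": "political"}
    landing_page_url: str

# Placeholder lexicon; a real system would use trained classifiers,
# not a static word list.
BLOCKED_TERMS = {"example-slur", "example-incitement-phrase"}

def automated_score(ad: Ad) -> float:
    """Score the ad's components for likely policy violations (0 to 1)."""
    score = 0.0
    if any(term in ad.text.lower() for term in BLOCKED_TERMS):
        score += 0.6   # text check
    if "violence" in ad.image_labels:
        score += 0.3   # image check
    if ad.targeting.get("category") == "political" and not ad.targeting.get("authorised"):
        score += 0.2   # targeting/authorisation check
    return min(score, 1.0)

def review(ad: Ad) -> str:
    """Automated decision, with borderline cases escalated to a human."""
    score = automated_score(ad)
    if score >= 0.7:
        return "rejected"
    if score >= 0.3:
        return "human_review"   # only some ads reach manual review
    return "approved"

# Example: an ad with borderline imagery but clean text is escalated,
# not rejected outright.
ad = Ad(
    text="Vote for change on May 13",
    image_labels=["crowd", "violence"],
    targeting={"regions": ["XY"], "category": "political", "authorised": True},
    landing_page_url="https://example.com",
)
print(review(ad))  # -> "human_review" (score 0.3)
```

The design choice worth noting in such a pipeline is the escalation threshold: only ads that trip the automated checks strongly enough ever reach a human. The report's finding that rejected ads passed after "adjusting a handful of words or generating a different image" is consistent with content scoring just below such automated thresholds.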

So, how did the 14 inflammatory ads submitted by Ekō researchers pass the review process?

"Because the authors immediately deleted the ads in question, we cannot comment on the claims made," a Meta spokesperson told The Quint, adding that there are several layers of analysis and detection in place, for both before and after an ad goes live.

However, the report argues that "the ads in this investigation served as a stress test of Meta's systems, aiming to illuminate its deficiencies."

Just last month, Access Now published a similar report that found ads intended to spread election-related disinformation were greenlit for publishing on YouTube.

"The 48 ads submitted to YouTube in three official Indian languages, English, Hindi, and Telugu contained content explicitly prohibited by YouTube’s elections misinformation policies. Despite YouTube’s policy to review ad content before it can run, the platform approved every single ad for publication," the digital rights group said.

(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)

