Facebook recently took down 453 accounts, 103 pages, 78 groups and 107 Instagram accounts operated from Pakistan as part of its security policy against “coordinated efforts to manipulate public debate for a strategic goal”. Some of the activities which brought them under the scanner involved posting content on social and political issues in Pakistan and India, praising the Pakistan government and the ISI, and criticising India and its policies under false names.
Researchers studying this network of accounts, pages and groups have highlighted that it engaged in mass reporting of accounts that were critical of Islam, the Pakistani government or military, or, in some cases, had links to the Ahmadi religion. The research was carried out by a team at the Stanford Internet Observatory’s Cyber Policy Center.
The network had an overall following of about 1.18 million accounts and its activities violated Facebook’s policies against “Coordinated Inauthentic Behaviour (CIB).”
But what is CIB and why does it require Facebook or any other platform to take action against such coordinated effort?
How Coordinated Inauthentic Behaviour Manipulates Public Debate
What is Coordinated Inauthentic Behaviour?
Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook, defines CIB as “groups of pages or people working together to mislead others about who they are or what they are doing.”
Facebook also identifies CIB through activity in which several accounts attempt to conceal their geolocation.
Speaking to The Quint, Srinivas Kodali, an independent researcher on internet movements in India, explained that CIB is anything which tries to manipulate social media trends using automated tools.
“They indulge in behaviour like tweeting the same tweet over and over, or using different sentences with the same hashtags,” he stated.
Kodali also highlighted that these might be bots, or accounts associated with a nation state or a political party.
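The duplicate-tweet pattern described above lends itself to a simple heuristic: flag any text posted verbatim by an unusually large number of distinct accounts. The sketch below is a minimal illustration of that idea in Python; the function name and threshold are hypothetical, and real platform detection systems are far more sophisticated.

```python
def flag_copypasta(tweets, min_accounts=10):
    """Flag tweet texts posted verbatim by many distinct accounts.

    `tweets` is a list of (account_id, text) pairs. A text repeated
    across `min_accounts` or more distinct accounts is treated as a
    copy-paste amplification signal.
    """
    accounts_per_text = {}
    for account, text in tweets:
        # Normalise lightly so trivial case/whitespace changes still match.
        key = text.strip().lower()
        accounts_per_text.setdefault(key, set()).add(account)
    return {text: len(accounts)
            for text, accounts in accounts_per_text.items()
            if len(accounts) >= min_accounts}
```

A trend driven by twelve accounts posting one identical message would be flagged, while a one-off organic post would not.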
Another researcher, Sai Krishna Kothapalli, founder of cybersecurity firm Hackrew, highlighted that “fake accounts are key” to identifying CIB.
Facebook asserts that it acts against these networks based on their behaviour, not the content they post. Kothapalli suggested that this is to ensure that people see Facebook as non-partisan.
But why is it important to detect and act against such inauthentic behaviour?
Disrupting Politics and Public Opinion: Why is CIB Dangerous?
Over the years, we have witnessed misinformation’s potential to sway politics and public opinion. However, CIB is not just the spread of ‘fake news’ but an organised, malicious attempt to target and mislead people.
The Pakistani pages studied by the Stanford researchers show that the network sought to stifle the critics of the Pakistani government and supporters of religious minorities.
It also polarised its followers by mocking Bharatiya Janata Party (BJP), Prime Minister Narendra Modi and bolstering the Khalistan movement.
It is also interesting to note that several pages and groups claimed to be fans of the Indian Army. While the researchers were unable to determine the reason behind their existence, they hypothesised that these pages were used “to identify Indian Army supporters who they would then mass report.”
Another possibility is that the actors aimed to eventually inject “content aligned with their ideology into these networks.”
Sriram, founder and CTO of digital security services provider PrimeFort, told The Quint that the danger of CIB persists beyond the measures taken to contain it.
“If they had been consuming targeted content against a certain political party, they already have a bias against it, which won’t go away with the suspension of the pages or the groups,” Sriram added.
Fact-checking website Snopes has also noted several instances of CIB influencing the political and religious opinions of social media users.
Their investigation revealed a network of evangelical Christian pages on Facebook that post Islamophobic content, attempting to paint the religion in a “divisive, extremely right-wing view.”
Another instance of CIB was detected by Benjamin Strick, an open-source investigator for the BBC, in Papua, a politically unstable Indonesian province.
Strick and his team uncovered a network of Facebook and Twitter accounts that ran pro-Indonesian ads in Europe, the UK and the US, spending up to $300,000 to drown out pro-independence narratives.
In a region where “independent media is restricted and verified information is scarce,” Strick writes for Bellingcat, an organised disinformation campaign such as the one they uncovered “has the potential to have a substantial impact on how the situation is perceived by the international community.”
Although experts recognise the need for platforms to work together in combating CIB, it also raises concerns over the creation of ‘content cartels’.
Prateek Waghre, a research analyst at the Takshashila Institution, opines, “Since these networks run across platforms, there’s a need to ask platforms to work together. However, academicians like me fear that this could lead to content cartels, where you’re essentially letting them decide what content stays and what doesn’t, making them more powerful than before.”
CIB’s dangers are not limited to its attack on democracy, and can also be viewed as a potential weapon in information warfare.
CIB: A Threat to National Security
In simple terms, information warfare is the use of information and technology to affect an adversary.
An infamous example of this is the British firm Cambridge Analytica’s role in manipulating the 2016 US presidential campaign in favour of Donald Trump.
Defence and strategic experts have also called out the larger gambit of social media manipulation as a form of information warfare and a threat to national security.
Lt Gen (Retd) Satish Dua, a former Chief of Integrated Defence Staff and an expert on counter-terrorism and strategic affairs, told The Quint that in today’s age of ‘perception’, where how the public views an event matters as much as the event itself, information spaces are used to shape minds and build narratives.
“India has taken cognisance of information warfare but we have a lot of catching up to do. Countries like US, China and Pakistan have invested heavily to strengthen their command on narratives in the cyber-space,” he added.
He also spoke about the need to remain connected in a global village rather than cutting off the platforms.
“We should be able to dictate our valid security concerns and the safeguards we want them to institute for us,” said the retired general.
So what should the tech giants do? Are their policies enough to deal with such coordinated inauthentic behaviour which evidently can have serious repercussions in a democracy?
How Tech Giants Fall Short on Combating CIB
Facebook employs its resources to actively detect and take down CIB. It either manually uncovers accounts and suspends them, or uses technology to automatically detect bots and accounts that violate its policies.
However, experts have frequently noted inadequacies in Facebook’s response to CIB.
In an article for the digital magazine Slate, Evelyn Douek, a doctoral student at Harvard Law School, writes that there’s a “lack of clarity” on what exactly CIB means.
She argues that it can’t be defined as “an obvious category of online behavior that crosses a clearly demarcated line between OK and not OK.”
Douek writes that “defining what is coordination and what is inauthentic is far from a value-free judgment call.” Facebook’s working standard, for instance, is that behaviour which tries to game the algorithm is deemed inauthentic, which is itself a judgment call.
Echoing Douek’s thoughts, Waghre says that there’s no transparency in tech giants’ guidelines on how they identify what is coordinated and inauthentic.
“While the Stanford report points out that the Pakistani accounts had used a browser extension to mass-report other accounts, it’s unclear if that’s the only criteria they employed to identify these accounts,” says Waghre.
Waghre also cites the example of a Twitter trend popular in May, #SaveMaleNursesInIndia, which came about after AIIMS announced a policy of staffing 80 percent female and 20 percent male nurses.
“I studied around 20,000-30,000 tweets using that hashtag, and noticed that 66 percent of the accounts had been newly created in May. Now, that is a clear sign of manipulation. However, Twitter saw an increase in accounts in May, so it’s extremely difficult to say whether these accounts came about organically or as a result of coordination.”
If the coordination had happened off platform, for example on WhatsApp, there’s no way Twitter could recognise it as ‘inauthentic’ behaviour, Waghre adds.
He also points out that there’s a whole closed ecosystem of Facebook groups, which cannot be penetrated but could be a source of CIB.
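The account-age check Waghre describes above boils down to one statistic: what share of the accounts behind a hashtag were created in a given month. A minimal sketch, assuming we already have each account’s creation date (names and structure are hypothetical, not how Twitter or the researchers actually implement it):

```python
from datetime import date

def new_account_share(accounts, year, month):
    """Return the fraction of accounts created in the given month.

    `accounts` maps account_id -> creation date. A high share of
    freshly created accounts pushing one hashtag is a manipulation
    signal, though, as Waghre notes, not conclusive on its own.
    """
    if not accounts:
        return 0.0
    new = sum(1 for created in accounts.values()
              if created.year == year and created.month == month)
    return new / len(accounts)
```

For a hashtag where two of three participating accounts were created in May 2020, this would report a share of roughly 0.66, the kind of figure Waghre flagged.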
The Lowy Institute, a global think tank, also published an article titled “Coordinated inauthentic behaviour: Facebook haunts US democracy,” which notes that while Facebook’s removal of several accounts reads like the “heralding of a new era,” its actions are relatively small in comparison to the large number of campaigns.
Facebook positions itself as an “intermediator, not the speaker, and thus not liable,” even as it profits off the engagement the content creates.
“Fury is a potent political weapon, and a powerful driver of online interactivity, which generates data on users’ preferences, attitudes, and likely behaviours. This data, and the capacity to use this data to predict future behaviour, is what makes Facebook (and others) immensely profitable, and thus powerful.”
CIB and Takedowns in India
In April 2019, in the run-up to the general elections, Facebook took down around 700 pages and accounts in India for CIB.
While 687 were identified as connected with the Indian National Congress (INC), 15 of the pages were said to be associated with Silver Touch Technologies.
The findings created a storm in the media, as Silver Touch ran a BJP-aligned page ‘The India Eye,’ and also developed the NaMo app.
The Indian Express reported that while the INC pages and accounts had amassed around 2,06,000 followers and spent around Rs 27 lakh on ads, the 15 Silver Touch pages had a far greater following of 26,45,000 and spent close to Rs 50 lakh on ads.
Despite the mounting evidence, both the Congress and the BJP denied being associated with the pages that were taken down.
In the same report, Facebook also mentioned 103 pages, groups and accounts taken down for CIB, operated by employees of the Inter-Services Public Relations (ISPR) wing of the Pakistani military.
The network included military fan pages, general Pakistani interest pages and Kashmir community pages, which also posted content on Indian government and politics.
In February 2020, Facebook also reported the suspension of 37 Facebook accounts, 32 pages, 11 groups and 42 Instagram accounts focusing on local news and events in the UAE, including topics like Saudi Arabia’s role in the Yemen conflict.
While the accounts purported to be run by locals from the Gulf region, Facebook traced their origin to a digital marketing firm in India, aRep Global.
Attempts by malicious groups to manipulate public discourse aren’t new. But armed with social media, such inauthentic behaviour has an amplified impact, reaching lakhs of followers. Can a society which thrives on transparency and the free flow of information uphold the ideals of democracy if its very foundation is riddled with lies and deception?
(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)