Editor: Varun Sharma and Vishal Kumar
Camera: Abhishek Ranjan
On Friday, when the nation’s attention was fixed upon the Union budget, a small advertisement tucked away within newspapers called for applicants to bid for the implementation of a nationwide Automated Facial Recognition System.
Unlike the 122-page Finance Bill, 2019, the 172-page Request for Proposal (RFP) released by the National Crime Records Bureau (NCRB), hidden within the folders of its website, generated nearly no attention even though its implications are wide and severe.
NCRB, which has not published statistics on crimes committed in India since 2016, writes in the proposal that it has “conceptualised the Automated Facial Recognition System (AFRS)” as an effort towards “modernising the police force, information gathering, criminal identification.”
A digital system for facial recognition of citizens has legitimate benefits in identifying criminals, tracing missing persons and aiding investigations.
At the same time, the move raises pertinent concerns: worldwide, instances of surveillance, privacy abuse, inaccurate results and, most importantly, disproportionate impact on minorities have surfaced repeatedly.
In May 2019, San Francisco became the first major US city to block its agencies, including the police, from using facial recognition technology. The block came amidst growing fears of abuse by the government and of pushing the city towards overt surveillance.
In India, two specific concerns arise: the absence of a data protection law and the lack of meaningful oversight on surveillance, both discussed in detail below. But first, what exactly is automated facial recognition?
Automatic facial recognition (AFR) is an advanced way of recognising people by using computers to scan their faces. According to a research paper, “It aims to identify people in images or videos using sophisticated pattern recognition techniques.”
Automated facial recognition is widely used in applications ranging from social media to advanced authentication systems. It is used to identify our faces in a group photograph on Facebook, to unlock our phones, and to pick out faces from CCTV footage, as the sketch below illustrates.
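To make the idea concrete, here is a minimal sketch of one-to-one face matching using the open-source face_recognition Python library (a wrapper around dlib). The file names are hypothetical, and this illustrates only the general technique, not the NCRB's proposed system.

```python
# A minimal face-matching sketch using the open-source "face_recognition"
# library (pip install face_recognition). File names are hypothetical.
import face_recognition

# Encode a known face into a 128-dimensional feature vector
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]  # assumes one face is found

# Detect and encode every face in a second image, e.g. a CCTV still
scene = face_recognition.load_image_file("cctv_frame.jpg")
for unknown_encoding in face_recognition.face_encodings(scene):
    # compare_faces returns True when two encodings fall within a distance threshold
    is_match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
    print("Match found" if is_match else "No match")
```

The matching threshold is the crux of any such system: set it too loose and innocent people are wrongly flagged, set it too strict and genuine matches are missed.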
Some of the intended objectives of the AFRS, as stated in the RFP, are modernising the police force, improving information gathering and aiding criminal identification.
The question to ask is – why is this a worry for Indians? Isn’t it a good thing that law enforcement agencies will be better equipped to apprehend criminals?
While that is the stated aim with which this, or any, AFRS project embarks on its journey, there are a number of major concerns:
No Data Protection Law: Among the biggest problems in launching such a massive project is the absence of a law to supervise and inform how a government agency can go about using and processing our images.
The draft data protection bill, submitted to the government by the Justice BN Srikrishna Committee in July 2018, has NOT been tabled in Parliament yet.
Little Supervision on Surveillance: As discussed earlier, experts have been fervently calling for a comprehensive surveillance reform. Currently, electronic surveillance is authorised under section 69 of the Information Technology Act, 2000.
Not only is the language vague, with grounds for surveillance such as “sovereignty or integrity of India”, “friendly relations with other states”, “security of the state” and “public order”, but there is also almost no publicly available information about how surveillance requests are assessed and granted.
Merging with Iris and Fingerprint Databases: Under Section 2.2 (Functional Requirements of the AFRS System), point 21 states that the solution “should be compatible with other biometric solutions such as iris and fingerprints for generating comprehensive biometric authentication reports.”
This raises serious concerns, as it is unclear whether, in the absence of specific laws, the system can be linked to Aadhaar’s CIDR database.
360 Degree Profiling: Point 31 states that the AFRS will be integrated with the existing facial recognition systems (and any other AFRS established before the signing of the contract) of some states. The Andhra Pradesh and Telangana governments already possess advanced databases like the controversial State Resident Data Hubs (SRDH).
The SRDHs are a well-documented example of 360-degree profiling of residents. A Huffington Post report investigated how citizens could be looked up by religion or caste, using Aadhaar numbers to connect disparate strands of data about them.
The RFP sets down a list of criteria that a bidder must fulfil in order to be eligible. The last date for submission of bids is 16 August, and they will be opened on 19 August.
This is not the first attempt at a project with serious privacy and surveillance concerns. In just the past year, several similar plans have been unveiled.
1. Social Media Communication Hub: In April 2018, the Information & Broadcasting Ministry had issued a similar RFP for a Social Media Communication Hub to monitor social media activity.
2. 10 Govt Agencies Authorised to Snoop: In December 2018, the government authorised 10 intelligence and investigating agencies and the Delhi Police to intercept, monitor and decrypt "any information" generated, transmitted, received or stored in "any computer".
3. Intermediary Liability Rules: Just five days later, on 24 December, the Ministry of Electronics and IT (MeitY) issued the Draft Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018.
These amendments, drafted without any prior public consultation, would require messaging apps, social networks, search engines, internet service providers and cyber cafes, among others, to follow a content policing and filtering system.
Before India embarks on a project with such severe implications, it is wise to take a look at how other countries that have already implemented such systems are faring.
Impact on Minorities/Govt Abuse: A New York Times report revealed how the Chinese government was using a vast, ‘secret’ system of advanced facial recognition technology to track and control the Uighurs, a Muslim minority community.
Bias Against Women: At least one study, carried out at the Massachusetts Institute of Technology, has revealed that facial recognition systems from giants like IBM and Microsoft are less accurate when identifying women. In the US, many reports have discussed how such software is particularly poor at accurately recognising African-American women.
Dangerous Inaccuracies: Even Amazon cannot get it right. Yes, Amazon! In a major embarrassment for the company, a test of its “Rekognition” software incorrectly identified 28 members of the US Congress as people who had been arrested for crimes.
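For the technically curious, here is a minimal sketch of the kind of face comparison Rekognition performs, using AWS’s boto3 SDK. This is not necessarily the exact workflow of that test, which reportedly searched photos against a large collection of mugshots; the file names are hypothetical, and the 80 percent similarity threshold shown is reportedly the service’s default.

```python
# A minimal sketch of a one-to-one Rekognition face comparison via boto3.
# Assumes AWS credentials are configured; file names are hypothetical.
import boto3

client = boto3.client("rekognition")

with open("lawmaker.jpg", "rb") as src, open("mugshot.jpg", "rb") as tgt:
    response = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=80.0,  # reportedly the default; a low bar that invites false matches
    )

# Any face pair scoring above the threshold is reported as a "match"
for match in response["FaceMatches"]:
    print(f"Reported match at {match['Similarity']:.1f}% similarity")
```

The design choice that matters is the threshold: at a low setting, dissimilar faces can clear the bar, which is how sitting lawmakers could end up “matched” to arrest photos.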
Disproportionately Harmful: In a scathing editorial in June, UK news publication The Guardian denounced facial recognition as a “danger to democracy”.
The disproportionality draws from all the previous points and busts the myth that government abuse happens only in autocratic or authoritarian countries.