When was FRT first introduced in Delhi? What are the concerns about using the technology on a mass scale?
Right to Information (RTI) responses received by the Internet Freedom Foundation, a digital rights organization in New Delhi, reveal that the Delhi Police considers matches with more than 80% similarity generated by its facial recognition technology (FRT) system as positive results.
Why is Delhi Police using facial recognition technology?
Delhi Police initially acquired the FRT to trace and identify missing children. According to the RTI replies received from the Delhi Police, the procurement was allowed as per the 2018 directions of the Delhi High Court in Sadhan Haldar vs NCT of Delhi. However, in 2018, the Delhi Police told the Delhi High Court that the accuracy of the technology it acquired was only 2% and “not good”.
Things took a turn for the worse after several reports surfaced that the Delhi Police was using the FRT to monitor the anti-CAA protests in 2019. In 2020, the Delhi Police said in an RTI reply that although it had obtained the FRT as per the directions in Sadhan Haldar, which related specifically to the search for missing children, it was using the FRT for police investigations. This expansion of purpose is a clear example of “function creep”, where a technology or system gradually expands beyond its original purpose to fulfil broader functions. According to available information, the Delhi Police subsequently used FRT for investigative purposes, specifically during the 2020 Northeast Delhi riots, the 2021 Red Fort violence and the 2022 Jahangirpuri riots.
What is facial recognition?
Facial recognition is an algorithm-based technology that creates a digital map of the face by identifying and mapping an individual’s facial features, which it then compares against a database it has access to. It can be used for two purposes: first, for 1:1 identity verification, where a face map is obtained and compared with a person’s photo in a database to verify their identity. For example, 1:1 verification is used to unlock phones; it is also increasingly being used to grant access to benefits and government schemes. Second, there is 1:n identification, where a face map is obtained from a photo or video and compared with the entire database to identify the person in the photo or video. Law enforcement agencies like the Delhi Police usually procure FRT for 1:n identification.
For 1:n identification, FRT generates a probability or match score between the suspect to be identified and an available database of identified criminals. A list of possible matches is generated based on their probability of being a correct match with the corresponding match score. Ultimately, however, it is the human analyst who selects the final probable match from the list of matches generated by the FRT. According to the Internet Freedom Foundation’s Panoptic Project, which tracks the spread of FRT in India, there are at least 124 government-sanctioned FRT projects in the country.
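As a rough illustration of the 1:n process described above (not the Delhi Police's actual system), a face search can be sketched as ranking a gallery of known face embeddings by their similarity to a probe face. The function name `identify`, the use of cosine similarity, and the embedding vectors are all assumptions for illustration only.

```python
import numpy as np

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray], top_k: int = 5):
    """Rank gallery identities by cosine similarity to the probe face embedding.

    Returns a ranked list of (name, match_score) pairs; in a real deployment,
    a human analyst would review this list to pick the final probable match.
    """
    scores = {}
    for name, emb in gallery.items():
        # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
        scores[name] = float(
            np.dot(probe, emb) / (np.linalg.norm(probe) * np.linalg.norm(emb))
        )
    # Highest match score first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```

Note that the system itself never outputs a single answer: it outputs a ranked list of candidates with scores, and the final identification decision rests with the human analyst, as the article describes.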
Why is using FRT harmful?
In recent years, India has seen rapid deployment of FRTs by both the Union and state governments without introducing any law to regulate their use. The use of FRT presents two problems: problems related to misidentification due to the inaccuracy of the technology, and problems related to mass surveillance due to misuse of the technology. Extensive research into the technology has revealed that its accuracy rate drops significantly based on race and gender. This can result in a false positive, where the person is incorrectly identified as someone else, or a false negative, where the person is not verified as themselves. False-positive cases may lead to bias against a person who has been misidentified.

In 2018, the American Civil Liberties Union revealed that Amazon’s facial recognition technology, Rekognition, incorrectly identified 28 members of Congress as people who had been arrested for a crime. Of the 28, a disproportionate number were people of color. Also in 2018, researchers Joy Buolamwini and Timnit Gebru found that facial recognition systems had higher error rates when identifying women and people of color, with the highest error rates when identifying women of color. The use of this technology by law enforcement agencies has already led to the wrongful arrest of three people in the US.

On the other hand, cases of false negatives may result in an individual being excluded from access to underlying systems that may use FRT as a means of providing access. One example of such exclusion is the failure of biometric authentication under Aadhaar, which resulted in the exclusion of many people from receiving basic government services, leading to starvation deaths.
Even when accurate, this technology can result in irreversible damage, as it can be used as a tool to facilitate state-sponsored mass surveillance. Currently, India does not have a data protection law or a specific FRT regulation to protect against misuse. In such a legal vacuum, there are no safeguards to ensure that authorities use the FRT only for the purposes for which they are authorized, as the case of the Delhi Police shows. FRT can enable continuous surveillance of an individual, resulting in a violation of their fundamental right to privacy.
What did the Delhi Police’s 2022 RTI responses reveal?
The RTI replies dated July 25, 2022 were shared by the Delhi Police after the Internet Freedom Foundation filed an appeal with the Central Information Commission, its requests having been rejected several times by the Delhi Police. In its response, the Delhi Police revealed that matches above 80% similarity are treated as positive results, while matches below 80% similarity are treated as false positives requiring further “confirmatory evidence”. This raises two concerns. First, it is unclear why 80% was chosen as the threshold between positive and false positive; there is no justification to support the Delhi Police’s assumption that a match above 80% similarity is sufficient to presume the result correct. Second, categorising results below 80% as false positives rather than as negatives suggests that the Delhi Police may still investigate such matches further. People who share familial facial features, such as members of extended families or communities, could thus become targets. This could lead to the targeting of communities that have historically been over-surveilled and have faced discrimination by law enforcement.
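The decision rule described in the RTI replies can be sketched in a few lines. This is a hypothetical simplification, assuming match scores on a 0–1 scale; the 80% figure comes from the RTI responses, but the function name and labels are illustrative only.

```python
def classify_match(score: float, threshold: float = 0.80) -> str:
    """Apply the threshold rule described in the Delhi Police's RTI replies.

    Scores at or above the threshold are treated as positive results; anything
    below is labelled a "false positive" -- which, notably, is not discarded
    but may be pursued with further confirmatory evidence.
    """
    return "positive" if score >= threshold else "false positive"
```

The rule makes the article's concern concrete: because sub-threshold matches are labelled "false positive" rather than "negative", they remain candidates for further investigation, so even a low-scoring match (such as a relative who merely resembles a suspect) is not ruled out.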
The responses also mention that the Delhi Police is matching the photographs/videos against photographs collected under Sections 3 and 4 of the Identification of Prisoners Act, 1920, which has now been replaced by the Criminal Procedure (Identification) Act, 2022. This Act allows wider categories of data to be collected from a wider section of people, i.e., “convicts and other persons for the purposes of identification and investigation of criminal matters”. It is feared that the Act will lead to overbroad collection of personal data in violation of internationally recognised best practices for the collection and processing of data. This revelation raises multiple concerns, as the use of facial recognition can lead to wrongful arrests and to mass surveillance resulting in privacy violations. Delhi is not the only city where such surveillance is ongoing. Multiple cities, including Kolkata, Bengaluru, Hyderabad, Ahmedabad, and Lucknow, are rolling out “Safe City” programmes which implement surveillance infrastructures to reduce gender-based violence, in the absence of any regulatory legal frameworks which would act as safeguards.
Anushka Jain is an Associate Policy Counsel and Gyan Prakash Tripathi is a Policy Trainee at Internet Freedom Foundation, New Delhi
RTI responses received by the Internet Freedom Foundation reveal that the Delhi Police treats matches of above 80% similarity generated by its facial recognition technology system as positive results. Facial recognition is an algorithm-based technology which creates a digital map of the face by identifying and mapping an individual’s facial features, which it then matches against the database to which it has access.
The Delhi Police first obtained FRT for the purpose of tracing and identifying missing children as per the direction of the Delhi High Court in Sadhan Haldar vs NCT of Delhi.
Extensive research into FRT has revealed that its accuracy rates fall starkly based on race and gender. This can result in a false positive, where a person is misidentified as someone else, or a false negative, where a person is not verified as themselves. The technology can also be used as a tool to facilitate state-sponsored mass surveillance.