Facial recognition technologies

By SERGIO AMADEU DA SILVEIRA*

The risks of extremely harmful effects on societies.

There is a logic, reinforced by the current neoliberal supremacy, according to which every technology that has been invented must be used. A variant of this thinking can be found in the phrase "when a technology is of commercial interest, there is no way to stop it." The facts, however, point to other possibilities. Many technologies have been banned, and others were abandoned after a certain period.

For example, chemical weapons are considered unacceptable, and democratic countries do not use them. Several pesticides, such as the dangerous DDT, have been banned. In 2015, hundreds of prominent figures, including Noam Chomsky and Stephen Hawking, signed an open letter entitled "Autonomous Weapons: An Open Letter From AI & Robotics Researchers" calling for a ban on artificial intelligence weapons. The European Union maintained a moratorium on transgenic crops for more than five years. Finally, democracies have always regulated technologies whose manufacture or use could bring risks and extremely harmful effects to societies.

Currently, a worldwide mobilization for the banning of facial recognition technologies is growing. In 2019, before the pandemic, lawmakers in San Francisco, California, decided to ban the use of facial recognition by local agencies, including the police and transportation authorities. They also determined that any surveillance technology must be approved by city administrators; adopting one can no longer be treated as a purely technical decision. The reason is simple: the benefits of facial recognition do not outweigh its risks and dangerous uses. According to several San Francisco city councilors, the technology has been used to further weaken already marginalized social groups.

According to the Network of Security Observatories, 90% of the people arrested in Brazil on the basis of facial recognition are black. Face-based biometric identification generally relies on so-called deep learning algorithms, one of the branches under the umbrella of artificial intelligence, which depend on large amounts of data to reach acceptable quality. In general, these algorithms are trained on photo databases to improve their extraction of facial patterns and their ability to identify faces.

MIT Media Lab researcher Joy Buolamwini has demonstrated that machine learning algorithms can discriminate by class, race, and gender. In a paper co-authored with Timnit Gebru, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," Buolamwini evaluated three commercial gender classification systems on a set of photos. They found that darker-skinned women were the most misclassified group, with error rates of up to 34.7%.

It is important to understand how an algorithmic facial recognition system works. It is an automated process that compares an image captured by a camera or other collection device with images stored in a database. One of the algorithm's first missions is to detect the person's face within the image. Once detected, the face needs to be aligned, virtually placed in a position that facilitates the next phase: the extraction of measurements. Following its prior training, the algorithm measures the distance between the eyes, the distance between the eyes and the nose, the position of the mouth, the texture of the skin; in short, it extracts measurements from the image and quantifies it.
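
As a concrete illustration of these stages, here is a minimal sketch using the open-source face_recognition Python library (built on dlib); the file name is a hypothetical placeholder, not a reference to any real system. Detection, alignment, and measurement collapse into two calls: the library locates faces, aligns them internally using facial landmarks, and quantifies each one as a vector of 128 measurements.

```python
# A minimal sketch of the stages described above, using the open-source
# "face_recognition" library. The file name is a hypothetical placeholder.
import face_recognition

# Image captured by a camera or collection device.
image = face_recognition.load_image_file("captured_frame.jpg")

# First mission: detect the person's face (or faces) within the image.
face_locations = face_recognition.face_locations(image)

# Alignment and measurement: the library aligns each detected face using
# facial landmarks and quantifies it as a vector of 128 measurements.
face_encodings = face_recognition.face_encodings(image, face_locations)
```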

Then, depending on its model, the system compares the quantified image with each of the photographs scanned and stored in its database. The algorithm issues a score as it compares two images, two faces: that of its target and the one held in the data structure. As I have tried to show here, recognition systems are probabilistic. They cannot answer whether or not an image belongs to a certain person; they provide percentages of similarity and difference.
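
Continuing the sketch under the same assumptions (hypothetical file names, the same library), the comparison stage only ever yields a distance score; declaring a "match" means choosing a cutoff, and the 0.6 tolerance below is merely the library's default, not a fact about identity.

```python
import face_recognition

# Hypothetical stored photo of a known person and a captured probe image.
# For simplicity, this sketch assumes one face is found in each image.
known_image = face_recognition.load_image_file("database_photo.jpg")
probe_image = face_recognition.load_image_file("captured_frame.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# The system does not answer "yes" or "no": it computes a similarity score
# (here, a Euclidean distance between the two measurement vectors).
distance = face_recognition.face_distance([known_encoding], probe_encoding)[0]

# Declaring a match means picking a threshold. 0.6 is the library default;
# any such cutoff is a policy choice.
print(f"distance={distance:.3f}, match={distance <= 0.6}")
```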

Some systems can present comparison percentages for several images, offering alternative faces for identifying a given target, as the sketch below illustrates. Training is essential for the algorithms to extract patterns from photographs, since they must recognize faces in different positions. This process requires thousands of photos, and it often depends on reinforcement and tagging performed by humans.
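
This candidate-ranking behavior can be sketched with stand-in numbers alone: below, random 128-dimensional vectors take the place of real face encodings (purely illustrative), and the system surfaces the k most similar database entries as alternatives for a human operator.

```python
import numpy as np

# Illustrative stand-ins: random 128-dimensional vectors in place of real
# face encodings, so the ranking logic runs without any photographs.
rng = np.random.default_rng(seed=0)
database = rng.normal(size=(1000, 128))  # hypothetical stored gallery
probe = rng.normal(size=128)             # hypothetical captured face

# Distance of the probe to every stored encoding (lower = more similar).
distances = np.linalg.norm(database - probe, axis=1)

# Offer the k closest faces as candidate identifications.
k = 5
for rank, idx in enumerate(np.argsort(distances)[:k], start=1):
    print(f"{rank}. candidate #{idx}: distance {distances[idx]:.3f}")
```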

The action of military drones that use facial identification systems can help us understand this problem. In the text "US Practice of Collateral Damage Estimation and Mitigation," researcher Gregory S. McNeal analyzed the side effects of attacks carried out by drones, unmanned aerial vehicles equipped with high-resolution cameras for identifying targets. Assessing the collateral damage from drone strikes that resulted in civilian deaths in Iraq and Afghanistan, McNeal concluded that 70% of those deaths were due to errors in identity detection, that is, failures of so-called "positive identification." But what would a positive identification be in a probabilistic system? 80% similarity? 90%? 98%? What percentage is acceptable for us to consider that a wanted person has been detected?

Facial measurements are biometrics and belong to the category of so-called sensitive data. They can create stigmas, and their uses need to be analyzed under the precautionary principle. Today they are used to identify "dangerous classes" and marginalized segments, and they allow targets to be pursued in real time. Automated facial recognition systems reinforce prejudice and expand structural racism in society, and they facilitate the harassment of homosexuals, transsexuals, and activists the police deem undesirable. They are technologies of harassment, vigilantism, and persecution.

In Brazil, I am considered a white person. Given my age and body type, if a police algorithmic system misidentified me through cameras in the middle-class neighborhood where I live, the approach would probably be more civilized. I might even be taken to a police station, where the facial recognition system's error would be detected and the "false positive" reported.

However, imagine a young black man arriving home from work in Jardim Ângela or Sapopemba and being mistakenly identified by a facial recognition system as a dangerous criminal. Depending on the Rota unit that approached him, he might not have any chance of staying alive. I maintain that facial recognition technologies can contribute, today, to the extermination of young black people in the peripheries. They can also serve to politically persecute leaders of social movements, especially where militias are enmeshed in the state machinery.

Furthermore, biometric identification is a device typical of the old eugenics apparatus. It is used to identify immigrants and "undesirable" segments in Europe and the United States. In China, it serves an authoritarianism that would be unacceptable in a democracy: people whom cameras linked to facial recognition systems catch performing disapproved actions have their score changed and face difficulties in obtaining benefits from the State.

Without the possibility of defense, without being able to contest the recognition model's probabilities, ubiquitous policing through cameras that feed facial recognition systems is not acceptable in democracies. We need to stop its expansion; indeed, we need to ban these systems if we want minimal consistency with the precautionary principle. We cannot use a technology built on algorithmic systems that are flawed and that still do not allow adequate explanation. We need to ban facial recognition technologies until they can be socially non-discriminatory, auditable, and more secure.

*Sergio Amadeu da Silveira is a professor at the Federal University of ABC. Author, among other books, of Free software – the fight for the freedom of knowledge (Conrad).

 
