Far right and the use of artificial intelligence


By FLAVIO AGUIAR*

Researchers find widespread use of AI among far-right groups and terrorist organizations

On July 29th, around noon, in Southport, in the northwest of England, a 17-year-old burst into a children's party at a dance and yoga school, an event organized by one of the school's teachers. Armed with a knife, he killed three children, aged six, seven and nine, and injured eight other children and two adults who tried to protect them, including the teacher who had organized the event.

Police and ambulances arrived within minutes. Arrested at the scene, the attacker was identified as Axel Rudakubana, 17 years old, a British citizen whose parents came from Rwanda. Because he was a minor, the police did not immediately release his identity, for legal reasons.

Soon afterwards, false speculation began to circulate on social media.

Within 24 hours, a message that identified the assailant as a Muslim (which was not true) and gave him a false name had accumulated 27 million views. Other messages identified him as an illegal refugee who had arrived in England by boat, seeking asylum. "Influencers" and a website identified as Channel3Now (which would later apologize) quickly spread such messages. One of these "influencers" cried that "the soul of Western man is torn apart when invaders kill his daughters".

Another message – generated by Artificial Intelligence – circulated on one platform under the slogan "protect our children".

Immediately, in Southport, a crowd – encouraged, according to the police, by people who do not live in the city – began attacking a mosque and clashing with the police. Attacks on mosques and on reception centers for refugees and immigrants spread across several cities in England, including London and Manchester.

The case caught the attention of researchers into the relationship between extremist groups, especially the far right, and the use of artificial intelligence.

Researchers at the Middle East Media Research Institute, in the United States, drew attention with a report that maps dozens of similar cases. The report shows that such groups use Artificial Intelligence tools to reproduce the voices and images of artists, politicians and other famous people, then spread false messages as if these figures had uttered them, asserting white supremacy and attacking Black people, Muslims and Jews.

According to a researcher from the NETLab group at the Federal University of Rio de Janeiro, right-wing extremist groups disseminate messages with instructions that go as far as illustrating the manufacture of weapons and explosives, always using Artificial Intelligence tools. In Latin America, the preferred targets of such messages have been Mexico, Colombia, Ecuador and Argentina.

Researchers on the topic note that this use of artificial intelligence is also widespread among terrorist organizations such as the Islamic State and Al-Qaeda.

In England, the attacks subsided after large anti-racist demonstrations took to the streets of dozens of British cities. Surveys showed that 85% of the population rejected the violence; however, 42% of those interviewed considered demonstrations with those motivations legitimate, provided they remained peaceful.

* Flavio Aguiar, journalist and writer, is a retired professor of Brazilian literature at USP. Author, among other books, of Chronicles of the World Upside Down (Boitempo). [https://amzn.to/48UDikx]

Originally published on Radio França Internacional.
