A technological eugenics?

Image: cottonbro studio

By TARCÍSIO PERES*

With the advent of the “brave new world” of AI, the linguistic deficit is far more serious and urgent than the shortfall in mathematics

1.

In the early 2000s, the arrival of search engines like Google significantly transformed the way information was searched. Before that, finding reliable content involved lengthy searches in books, libraries and physical archives, depended on prior knowledge of website directories or required the purchase of encyclopedias on CD-ROM. The adoption of ranking algorithms, such as PageRank, made it possible to obtain more relevant results in a few seconds, freeing users from time-consuming processes and paving the way for faster and more direct access to a multitude of topics.

The growth of broadband and the wider penetration of networks also contributed to this democratization of knowledge, since they drastically reduced the cost of internet access and expanded the reach of search engines. As a result, the habit of “going to the library” or relying on physical collections gave way to simple online searches, increasingly accessible even in regions previously excluded from the digital universe. This phenomenon accelerated the dissemination of all kinds of content – from explanations of school subjects to cutting-edge scientific articles – and made the learning process much more dynamic and agile for students, professionals and the curious in every field.

In recent years, however, the increasing monetization of these tools has changed the scenario of full access to information. Many search engines have started to prioritize advertisements and sponsored links, forcing users to scroll through several pages before reaching the organic results that actually answer their questions. This commercial movement affects the neutrality and educational function of the search engine, as it prioritizes profit over direct knowledge, causing the initial promise of democratization to be partially overshadowed by economic interests.

Meanwhile, the spread of generative artificial intelligence has produced a peculiar historical phenomenon by reviving, through technology, two ideas first proposed in the 19th century by Francis Galton. Known for his pioneering studies in genetics and anthropometry, Galton formulated the concept of “regression to the mean” when he noticed, for example, that children of very tall parents often had a height closer to the population average, instead of maintaining the exceptional height of their parents. This phenomenon illustrates that extreme characteristics tend, naturally, to return to intermediate levels as generations pass.

Eugenics refers to a set of ideas and practices that aim to “improve” genetic characteristics of human populations through the selection and exclusion of certain groups or individuals. Officially emerging with the work of Francis Galton, eugenics gained strength in different countries at the beginning of the 20th century, especially through forced sterilization policies, restrictive immigration laws and other discriminatory measures.

These practices, justified by the claim of improving the “quality” of the population, resulted in serious human rights violations and tragically influenced totalitarian regimes, such as Nazism, which took such ideas to the extreme with mass extermination programs. Galton himself became famous for defending eugenic ideas, arguing that the human population could be artificially improved by selecting the fittest individuals. Today, eugenics is widely condemned by the scientific community and by society in general, on both ethical and scientific grounds, in recognition of the dangers and injustices inherent in any coercive attempt to manipulate human genetic diversity.

Today, although less evident and more subtle, a similar process can be perceived in the functioning and use of generative artificial intelligence. This technology operates through direct interaction with users, producing results (texts, answers or knowledge) based on initial instructions, called prompts. These are the commands or questions users submit to the artificial intelligence software (typically through a chat interface). Here lies a fundamental technical detail: the quality of the answers obtained depends directly on the clarity, precision, complexity and scope of these initial commands.

2.

When individuals with limited command of linguistic construction and articulation and little erudition – such as functional illiterates – use artificial intelligence chats, they tend to formulate vaguer, more imprecise or oversimplified prompts. Since artificial intelligence requires clear information to generate qualified responses, these poorly structured prompts yield superficial and limited answers, which rarely tap the potential wealth of knowledge stored in the system's database.

In these cases, the AI returns an average result to the user: slightly better than what the user could achieve on their own, but limited and without much depth. This dynamic resembles the regression to the mean formulated by Francis Galton: users who are well below average improve their repertoire slightly, but remain stuck at an intermediate level. This average, borderline or mediocre place recalls the “Mediocristan” defined by Nassim Taleb.

Furthermore, artificial intelligence, when attempting to reproduce the user's intentions and preferences, may generate incorrect or inaccurate responses, even if it has a large collection of reliable data. Prompts saturated with flawed assumptions, biases or ambiguities lead the system to replicate such distortions, since the ultimate goal is to satisfy the demand formulated, even if this results in poor or factually questionable content. In this process, the technology reinforces possible mistakes and consolidates mistaken beliefs, limiting the depth of the information produced and making the educational role expected of a far-reaching tool unfeasible.

For example, a user who starts from erroneous assumptions and writes something like “It was Dom Pedro II who discovered Brazil in 1500, right?” may induce the AI to produce a response that corroborates this historically wrong information, since the wording of the prompt leads the system to validate the question instead of correcting it incisively. Even if the AI has access to reliable records indicating Pedro Álvares Cabral as responsible for the arrival of the Portuguese in 1500, the ambiguous construction of the request may result in a final text that repeats or softens the initial distortion, highlighting how the tool reflects the user’s knowledge gaps and misconceptions. In this case, the acronym AI might as well stand for “Automated Idiot”. “Shit in, shit out”, the technocratic enthusiasts on duty would shout.

On the other hand, users who already have a good cultural background and strong linguistic and argumentative skills are able to formulate more precise, detailed and intellectually sophisticated prompts. For example, while a user with little language proficiency might ask the artificial intelligence something like “who discovered Brazil?”, a more culturally prepared user could formulate a richer and more contextualized question, such as “in what way did the Portuguese colonization process influence the cultural and social formation of contemporary Brazil?”.

This second prompt generates more complete, detailed and relevant answers, as it gives the artificial intelligence clear guidance on the desired level of depth. As a result, these users receive richer and more elaborate content, capable of further expanding their intellectual repertoire. They are, therefore, in a privileged place – an “Extremistan”, as opposed to the Talebian Mediocristan. There is even a new market for selling prompts from the most privileged to the less fortunate.

In this way, already culturally privileged users are able to further enhance their advantages, while those who face prior difficulties achieve only marginal improvements, or, in the worst case, even intensify their deficiencies. This logic perfectly exemplifies the so-called “Matthew effect”, taken from the Gospel of Matthew, according to which “to him who has, more will be given, and he will have abundance; but from him who has not, even what he has will be taken away”. Applied to the context of artificial intelligence, culturally prepared users accumulate even more knowledge and deepen their skills, while culturally fragile users advance little and remain stuck at basic or intermediate levels.

3.

When we mention the results of Brazilian students in the Programme for International Student Assessment (PISA), it has become a cliché to highlight their poor performance in mathematics. We cannot forget, however, that in reading, the average performance of Brazilian students was 410 points, below the OECD average of 476 points (data from the latest edition, in 2022). Half of Brazilian students did not reach the basic level of proficiency in reading, while in OECD countries this percentage was 26%. As we will reinforce here, with the advent of the “brave new world” of AI, the linguistic deficit is far more serious and urgent than the shortfall in mathematics.

The idea that mastery of language precedes and conditions other fields of human knowledge is not a recent innovation, having been widely debated and supported by different thinkers throughout the history of philosophy. Among the classical authors who defended this primacy, Aristotle stands out, for whom language was the fundamental basis of logic and, therefore, of all rational thought. He argued that a rigorous understanding of words and their correct application constituted the indispensable foundation of any subsequent intellectual investigation.

This perspective was later reinforced and reinterpreted in the Middle Ages, especially by scholastic authors such as Thomas Aquinas. For Aquinas, mastery of language was seen as an essential step in the knowledge of truth, since philosophical and theological reasoning depended on conceptual and linguistic clarity. Without mastery over discourse and precise definitions, thought would be trapped in terminological confusion and conceptual ambiguities. In medieval scholasticism, dialectics – the art of discussion and argumentation based on language – was considered a priority, preceding the development of mathematical or scientific disciplines, since it guaranteed the logical foundation and solidity necessary for any knowledge.

In modern times, this primacy of language was taken up again by authors such as Wilhelm von Humboldt, who emphasized the crucial role of language in the construction of human reality. Humboldt believed that language shapes thought and perception of reality, determining the way in which individuals understand and interact with the world. This idea anticipated later philosophical and epistemological approaches, which would emphasize the importance of linguistic structures in shaping forms of human thought.

In the 20th century, Ludwig Wittgenstein further consolidated this view by arguing that the limits of language represent the very limits of the intelligible world. Wittgenstein argued that many philosophical problems stemmed from linguistic confusions, indicating that conceptual clarity was a condition for resolving fundamental dilemmas in various areas of knowledge. Thus, it is clear that the idea that language precedes other knowledge and defines the intellectual horizon of individuals has deep historical roots, consolidated by different philosophical traditions long before its contemporary appropriation.

In this sense, a paradoxical situation is created: a technology often seen as capable of democratizing knowledge ends up functioning as a tool that deepens inequalities. Without the need for coercion or explicit intervention, generative artificial intelligence carries out an indirect cognitive and cultural selection, similar to the original ideals defended by Francis Galton, privileging individuals who are already socially favored by the superior quality of the initial commands they are capable of formulating.

*Tarcísio Peres is a science professor at the Technology Colleges of the State of São Paulo. He is the author, among other books, of Profiting from the Sharks: The Stock Market Traps and How to Use Them to Your Advantage (Novatec) [https://amzn.to/3TKlVwU]

References


AGÊNCIA BRASIL. Less than 50% of students know the basics in mathematics and science. 2023.

ARISTOTLE. Organon: categories, interpretation, first analytics, second analytics, topics, sophistical refutations. Translation and notes by Edson Bini. São Paulo: Edipro, 2016.

AQUINAS, Thomas. Summa Theologica. Translation by Alexandre Corrêa. Campinas: Ecclesiae, 2016.

HOLY BIBLE. Pastoral edition. São Paulo: Paulus Editora, 1990. Matthew 13:12.

GALTON, Francis. Regression towards mediocrity in hereditary stature. Journal of the Anthropological Institute of Great Britain and Ireland, London, vol. 15, p. 246-263, 1886.

GALTON, Francis. Inquiries into human faculty and its development. London: Macmillan, 1883.

HUMBOLDT, W. von; HUMBOLDT, A. von. Über die Verschiedenheit des menschlichen Sprachbaues: und ihren Einfluss auf die geistige Entwickelung des Menschengeschlechts. Berlin: Druckerei der Königlichen Akademie der Wissenschaften, 1836.

TALEB, Nassim Nicholas. The Black Swan: The Impact of the Highly Improbable. Rio de Janeiro: BestSeller, 2008.

WITTGENSTEIN, Ludwig. Tractatus Logico-Philosophicus. Translation by Luiz Henrique Lopes dos Santos. 3rd ed. São Paulo: Edusp, 2022.

