Negative impacts of using chatbots in education

By ELEONORA ALBANO*

The indiscriminate popularization of chatbots could undermine educational traditions that depend on critical thinking

Capitalism’s attacks on critical thinking directly affect the future of universities. They have been occurring at least since the financialization of the teaching-research-extension tripod – which Big Techs recently usurped, setting themselves up as “patrons”.

This article focuses on the form of patronage that makes so-called conversational robots or 'chatbots' available on the internet.

Technoscience attributes intelligence to these robots, claiming that they are capable of preparing courses at any level and accelerating research, automating bibliographic reviews and reducing data collection and processing time.

It is demonstrated here that the relevance of these allegations depends entirely on the critical training of chatbot users.

It is essential that users know that these robots are simply re-editing high-frequency "reasoning" found on the internet. They also need to understand that this is done through a simplistic simulation of a Turing machine – one that predicts the next symbol in a continuous, exhaustively interconnected sequence – and that in fact creates nothing.

Thus, their uncritical and indiscriminate popularization may undermine educational traditions whose effectiveness rests on centuries of exercising critical thinking.

This article discusses the tools offered by Big Tech to students and educators, focusing on the attribution of intelligence to the most popular of them, namely Large Language Models (hereinafter LLMs), the best-known example of which is ChatGPT. To this end, it revisits and extends the arguments against such claims published on the website the earth is round in December 2023.

The argument proceeds in six steps.

First, I discuss, in the light of social-science analyses, the incessant transformations of capitalism since financialization. Some contemporary authors, such as Tsoukalis,[I] argue that the term neoliberalism is losing its meaning, given the multiplication of current ways of obtaining profit.

My focus, however, will be behavioral, that is, I will try to understand the discomfort, anguish and insecurity caused by this incessant change in the production system.

I then proceed to examine the harmful consequences of this situation for the exercise of criticism, pointing out certain threats that already hang over it.

I go on to show how the unrestricted appropriation of internet content by Big Techs has paved the way for them to establish themselves as patrons of education, promising to equip teachers, students and researchers with a variety of tools.

Next, I analyze the most important of these, namely the language technology of personal assistants (such as Amazon's Alexa or Microsoft's Copilot). First, I show that simulating the polite and discreet tone of a standard secretary incites all sorts of fantasies about the cognitive – and even socio-affective – capacities of writing/reading (or speaking/listening) machines.[ii] I then show how such fantasies affect and destabilize the daily lives of the people and institutions involved.

The first fantasy is that of machine sentience, that is, the belief in its ability to experience sensations and feelings, as well as to become aware of them.

Another fantasy resurrects the ancient myth of stone oracles, considered in Antiquity to be animated, sentient and intelligent. Its current version claims that the emulation of human reasoning by machines achieves perfect homology. I will demonstrate the falsity of this assumption by explaining step by step the behaviorist architecture, based on operant conditioning, of the LLMs behind chatbots.

The following is a demonstration that the “new” meanings created by language robots are not in fact new, as they are based exclusively on analogies that can be found, through sophisticated statistical functions, in the gigantic corpus constituted by the internet.

Criticism, because it requires the constant exercise of doubt, does not fit within these limits. The fact that the speed and efficiency of search engines allow the machine to provide acceptable, very good or even excellent answers does not mean that it can ask good questions. Thus, the questions remain the sole responsibility of the user.

Finally, I point out some pitfalls in these tools that can disorient users who want to use them to speed up and mechanize academic tasks such as preparing classes, reviewing texts, and organizing research data.

I conclude, then, that these are very useful resources – but only for those who know how to subject them to rigorous critical scrutiny.

The dizzying transformations of capitalism

Since financialization, capitalism has been creating new ways of making a profit that go far beyond the exploitation of workers. With the advent and popularization of the internet, consumers have started to work for free for the owners of the platforms because, by using them, they automatically create a profile of their social relationships and consumption habits, to be sold, without their consent, to interested advertisers. As Shoshana Zuboff pointed out,[iii] this continuous monitoring violates user privacy and is, in fact, at the service of a powerful surveillance scheme.

Nowadays, this data trade is not limited to the clicks of Internet users. There are companies that, lacking the resources to create and maintain spaces on the Internet, store their databases in the clouds controlled by Big Techs. This radical change, which has destroyed markets by transferring them to so-called “cloud computing”, is aptly called “technofeudalism” by Yanis Varoufakis.[iv]

Technofeudalism exploits consumers and capitalists alike, relying on an army of precarious, outsourced workers who classify and label data in a hierarchical structure made possible only by gigantic machines.

This is a highly qualified and specialized workforce. Some are exact scientists who collaborate in the production of the algorithmic structure of the network. Others are natural and human scientists who scrutinize the posted content and produce a complex and hierarchical classification grid to guide not only search engines, but also large language models – whose applications, in continuous expansion, are increasingly seducing and controlling users.

Feminist thinker Nancy Fraser[v] coined the term 'cannibal capitalism' to designate this form of service provision, which has been engulfing an ever-increasing number of individuals and institutions. Congruently, journalist and political analyst Raul Zibechi[vi] proposed the term 'mafia capitalism' for the case in which this voracious maw has links with corruption, drug trafficking and organized crime in general.

Let us now reflect on the effects of the abuse of this anonymous power on our minds and bodies. Bodies tired of the incessant use of screens, keyboards and mice turn their minds to virtual connections completely devoid of the vocal, gestural and tactile stimuli that give cohesion and coherence to physical coexistence. Rushed by this routine, they end up naturalizing the inescapable deprivation of socio-affective contact.

In the case of those who make a living from these activities, there is also disenchantment with the precariousness of the job market and concern about the number of hours needed to earn a basic income. The result is the multiplication of cases of chronic stress – for which psychiatry, without recognizing the complexity of the phenomenon, proposes the term 'burnout', treating them with medication.

In this scenario, the so-called "knowledge society" would be better described as its opposite, an "ignorance society". Instead of delivering the enlightenment it promises, it often inundates the public with specialized knowledge that it manufactures, disseminates and dissipates according to fashion. The resulting explosion of vocabulary contributes to further increasing confusion.

Other threats to the mental health of the population lie in the incessant misleading propaganda about the advantages of wealth, luxury and ostentation. Lately, people with very low incomes have been risking what little they have in the online gambling craze. Immersion in the disruptive and repetitive mechanics of social networks deprives them of the most basic reasoning and the simplest empathy for others. Little by little, brutality takes over minds and hearts with unstoppable force.

The decline of criticism

The above picture is certainly not the cause, but the consequence of the gradual numbing of criticism. The cause is more remote: it lies in decades of attacks, open or covert, by successive versions of capitalism on the institutions that are the guardians of critical thought.

In around five decades, the financialization dictated by neoliberalism has weakened public education throughout the world.[vii] Schools at all levels had to seek partnerships and/or sponsorships to avoid charging tuition fees – or at least to keep their prices viable. Those that managed to maintain free curricular offerings increased the number of paid extension and specialization courses and expanded the range of paid extracurricular activities.

Governments are complicit in paying teachers very poorly, forcing them to take on more than one job to survive. This compromises not only their physical fitness, but also their dedication to continuing education.

On the other hand, private schools sell families beautiful promises, whether of professionalization and insertion in the job market, or of encyclopedic and/or multidisciplinary education that prepares for a world in constant change. The goal of learning is not, in general, to reflect on reality but to act upon it.

The last stronghold of critical thinking, public universities are mitigating their underfunding by opening up to privatization. Postgraduate and specialization courses are increasingly merging, and basic research is giving way to commissioned applied research.

To naturalize this situation is to deny that freedom of thought should be independent of any private sponsorship. In a democracy, it is necessary to preserve the autonomy of researchers. Thus, work on the issues raised by the trajectory of each field of knowledge must be financed by public sources.

Privatization at the edges has made schools – even public ones – vulnerable to the business world’s efficiency criteria. This often leads them to hire asset management companies to control their physical and symbolic assets in order to “optimize” their use, performance and value. Among the assets managed are the data of all the actors involved. And so, education is immersed, unwary, in one of the most aggressive practices of current capitalism. The assumption is that any physical or informational asset is saleable and can therefore be used to make a profit.

The ancient myth of the intelligent robot

Long before the automatons that entertained European royalty and aristocracy during the Enlightenment, there were already legends about intelligent machines capable of obeying their masters. In the Odyssey, Homer relates that the god of metallurgy and crafts, Hephaestus, and his golden assistants used bellows to perform repetitive mechanical tasks. He also mentions that the Phaeacians had ships that obeyed the orders of their captains and moved at the speed of thought to avoid the dangers of navigation.

Artificial people, animals and mythical beings remained popular throughout antiquity, the Middle Ages and the modern age. Such creatures made of glass, clay or metal were generally seen as slaves or servants, used to fulfill a variety of needs, including sexual ones.

This was no mere imagination: the Hellenistic Greeks possessed advanced mechanical technology, which allowed them to build automatons powered by springs, ropes and/or levers. This art was partially preserved in medieval Europe and spread throughout the world, first reaching Islam and then moving on to the East.

Eastern cultures also conceived of guardian automatons, in charge of palaces or reliquaries, such as that of Buddha.

Such mechanisms were powerful instruments of social control. They aimed to arouse both fascination and fear. At the same time, courtesan dolls, by providing physical support for masturbatory fantasies, fed the belief in a supposed “soul” of machines.

Such fantasies have also been popularized by literature. For example, in Gulliver's Travels, Jonathan Swift describes the Engine, a machine that was "a project of improving speculative knowledge by practical and mechanical operations". By renting it at a reasonable price, "any ignorant person, with a minimum of education and without a mentor", could mobilize nothing more than his arm "to write books of philosophy, poetry, politics, law, mathematics and theology".

The Eliza effect trivialized

The discovery that humans easily transfer affection to machines was made by Joseph Weizenbaum[viii], a German Jew whose family emigrated to America at the beginning of the rise of Nazism. The trauma of persecution and the difficulties of adapting to the new environment did not inhibit his exceptional talent for mathematics and computing. Despite the lack of support from his family, he had a brilliant academic career, which led him to occupy a position as a professor at MIT.

He is known as one of the fathers of artificial intelligence, although he rejected this nickname, as he believed that machines are only capable of calculating, not reasoning.

Having helped him overcome past traumas, psychoanalysis had a decisive influence on his career. At the same time, his embrace of socialism led him to explore the possibility of democratizing psychotherapy through digital means. To this end, he studied the available currents and set up a bold experiment with the easiest to emulate computationally, namely: Rogerian theory, named after its inventor, the American psychologist Carl Rogers.

This is a non-directive psychotherapy, which Carl Rogers defined as 'person-centered'. It essentially consists of inserting the patient's statement into phrases such as "you told me that...", followed by other clichés, vague but encouraging, such as: "And how can we deal with this?". It is, in essence, a bet on the therapeutic power of letting things out.
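The mechanics of such a program can be sketched in a few lines of Python. This is a minimal illustration, not Weizenbaum's original code: the reflection table and templates below are invented for the example.

```python
import re

# Minimal Eliza-style sketch: the "therapist" merely reflects the
# patient's words back inside canned, encouraging frames.
# The rules and phrasings are illustrative, not Weizenbaum's originals.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
TEMPLATES = [
    "You told me that {0}.",
    "Why do you say that {0}?",
    "And how can we deal with this?",
]

def reflect(statement: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    words = re.findall(r"[a-z']+", statement.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str, template_index: int = 0) -> str:
    """Insert the reflected statement into a fixed conversational frame."""
    return TEMPLATES[template_index].format(reflect(statement))

print(respond("I am anxious about my exams"))
# → You told me that you are anxious about your exams.
```

No understanding is involved anywhere: the program recognizes surface patterns and re-emits them, which is precisely why the attachment it provoked so surprised its author.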

Although conceived by Joseph Weizenbaum as a research tool on the feasibility of a conversational robot, Eliza quickly became a surprise hit with the public, which ended up affecting the design of the study. The reason is that participants claimed that their conversations with the machine were private and refused to share their content with the researcher.

Obviously, Joseph Weizenbaum was convinced that Eliza could not be animated or intelligent. He understood, then, that the participants' attachment to the digital therapist was a form of transference, in the psychoanalytic sense.

It turns out that in the 1960s and 70s, the lobby of those interested in computerizing and automating society was already strong in the US and encouraged users to become emotionally involved with machines. Joseph Weizenbaum opposed this campaign by writing books and articles in which he examined the difference between human reasoning and symbolic computation.

But he soon reaped the bitter fruit of a violent rejection, led by his own colleagues at MIT, especially John McCarthy, today remembered as the father of Artificial Intelligence. The term, in fact, had been coined by him as a marketing ploy to attract funding from the US Department of Defense for a symposium he organized at Dartmouth College in 1956. Obviously, the military was enchanted by the tacit promise of digital surveillance.

The discomfort was so great that Joseph Weizenbaum preferred to return to Germany and continue working with his interlocutors there – all young, critical and enthusiastic.

However, at that time, American economic power was increasingly turning to the Internet as a place to control and manipulate behavior. Thus, it used every available space to popularize the idea that computers were capable of thinking. Decisive to the success of this propaganda were the defeats that successive robot players inflicted on renowned world chess champions.

Thus, the advent of LLMs was the last straw for the Eliza effect to flood the internet, encouraging users to become attached to their personal computers. Their conversational performance is so good that it leads the layman to identify their output with natural language. It is difficult for most people to believe that it is just a logical-symbolic calculation that has nothing to do with the structure and function of human languages.

Let's see below how this deception works.

The Behaviorist Mechanics of Large Language Models

The enormous word concatenation capacity displayed by Large Language Models has three components: (i) the appropriation of the entire content of the internet by Big Techs; (ii) the advent of a type of neural network capable of calculating multiple associations between words in real time – the so-called Transformer; and (iii) the incessant organizing effort of legions of precarious but highly qualified workers from various areas of knowledge.

Note that human natural languages contain important syntactic discontinuities, for example in relative clauses. A sentence like "The frog that ate the insect died" is about the death of the frog, not the insect. Discontinuity also occurs in morphology, in verbs like 'to root', which are formed by adding a prefix and a suffix to the root.

The operation of Large Language Models is, however, purely linear, i.e., it always consists of predicting the next word. So how do they deal with these discontinuities? The answer is simple: through a sophisticated probability calculation. The transformer obtains in real time the probabilities of co-occurrence between pairs of words from the entire internet database, chooses the best bet, and moves on.
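This purely linear, next-word betting procedure can be caricatured with simple bigram counts. The toy below (built on an invented nine-word corpus) is vastly simpler than a real transformer, which weighs many levels of annotation at once, but the betting logic is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count how often each word follows another,
# then always bet on the most frequent continuation.
# The corpus is invented for illustration.
corpus = "the frog ate the insect and the frog died".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Choose the continuation with the highest co-occurrence count."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # → frog ("frog" follows "the" twice, "insect" once)
```

At no point does the procedure represent the relative clause or the discontinuity; it only tallies adjacencies and bets on the likeliest one.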

It is therefore worth asking how such simple operations can compose sequences that make sense to the reader.

The simplicity is only apparent. Co-occurrence probabilities are not calculated only for vocabulary. The corpus is annotated at several levels of analysis, which include syntactic, semantic and even pragmatic information. An optimization function selects the set of pairs with the greatest chance of coherently integrating all these aspects.

Linguistic annotators label the structural properties of the text: rules of conjunction and disjunction – i.e., syntax –; basic and associative meanings – i.e., semantics –; and reference to the text itself and/or the context, as in the case of personal pronouns and adverbs of place and time – i.e., pragmatics.

Annotators in other humanities and social sciences add multiple layers of content and stylistic tags. Similarly, annotators in the natural and exact sciences add tagged content from their fields. Finally, computer scientists familiar with transformers feed the resulting hierarchy of levels of analysis into the network's feed-forward layers.

It is essential to note that the functioning of transformers is comparable to the most radical form of behaviorism: operant conditioning.[ix] Combinations with the highest probability of success are reinforced, becoming more likely each time they are selected – which strengthens the other connections involved and affects the selection of the next pair. This procedure generates new examples of pairs of the same class, contributing to increasing their frequency in the network.
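The operant-conditioning analogy can be made concrete with a schematic update rule: every selection of a pair reinforces its weight, raising its future selection probability. The numbers and the additive reward below are illustrative assumptions, not the gradient-based training actually used in LLMs:

```python
# Schematic "operant conditioning" over word pairs: selected
# combinations are reinforced and thus become more likely next time.
# Weights and the additive reward are illustrative assumptions.
weights = {("frog", "died"): 1.0, ("frog", "jumped"): 1.0}

def probability(pair) -> float:
    """Selection probability of a pair, relative to all its rivals."""
    return weights[pair] / sum(weights.values())

def select_and_reinforce(pair, reward: float = 0.5) -> None:
    """Reinforce the selected pair, strengthening that connection."""
    weights[pair] += reward

before = probability(("frog", "died"))   # 0.5: both pairs start equally likely
select_and_reinforce(("frog", "died"))
after = probability(("frog", "died"))    # 0.6: the reinforced pair gained ground
print(before, after)
```

Each selection thus feeds back into the next one, which is the sense in which the text compares the mechanism to reinforcement of operant behavior.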

There is no doubt that this is an excellent method of computational simulation of natural language. However, to mistake the output of the transformer for natural statements is to attribute to humans a mind that functions through a succession of continually quantified and recalculated associations.

Incidentally, in the 1930s and 40s, Burrhus F. Skinner, the father of operant conditioning, responded to accusations of fascism from his colleagues by saying that his method of controlling behavior was aimed solely at creating better citizens. The discussion reached the New York Times, where an online version of a 1972 report by journalist Robert Reinhold[X] is still available, about a symposium held at Yale in which Skinnerian ideas were condemned by the majority of the academic psychology community.

Skinner failed in his educational projects, but was rescued by Big Tech to bring humans closer to machines. Today, unfortunately, the indiscriminate use of the algorithm that implements operant conditioning is already affecting user behavior. People increasingly imitate chatbots, abusing clichés. In the same way, they uncritically accept the clichés they receive in response to the questions they ask them.

In short, transformers do not produce new knowledge, since they can do no more than emulate, as pastiche, the superficial form of simple reasoning. Thus, they function well only as search engines, when the objective is to compile information from reliable internet sources on a given subject. As is well known, only a few rare sites are moderated and/or curated by experts.

In contrast, Big Techs are now only interested in hiring annotators, not moderators. Everything that comes out of a transformer is fed back into the input corpus. Recently, the few humans who filtered and discarded inaccurate or false responses from chatbots were laid off by X and Meta. Microsoft still maintains some filters, but does not reveal the details of their operation. Thus, as moderation becomes increasingly precarious and opaque, factual errors accumulate – and the network becomes flooded with inaccuracies, lies and contradictions.

Furthermore, user questions and comments, no matter how naive, sectarian or offensive they may seem, are automatically incorporated into the database, making it an inexhaustible source of potentially dangerous biases. Truth gives way to falsehood or coexists with it, given the lack of clues to distinguish them.

In this way, the friendly and didactic tone of chatbots seduces and entangles the user, and gradually undermines their ability to recognize the factors involved in the question itself and evaluate or doubt the answer. It is easy to become comfortable with a mechanism that provides immediate and apparently direct answers because they are easy to repeat.

This ease, however, has a reckless side. Joseph Weizenbaum would certainly have been distressed had he been alive in 2023, when a Belgian father, in despair over environmental collapse, committed suicide with the encouragement of a chatbot named Eliza, built on EleutherAI's language-model technology. According to his wife, he had turned to the chatbot to treat his depression.

A dangerous balance – the transfer of responsibility to users

Let us now return to the issue of the quality of life of overworked teachers, who, in fact, are the majority, including in higher education.

At the university, chatbots are invading the administration, with consequent cuts in face-to-face service. There are also experiments in customizing bots for academic use. Even in this case, where the content is subject to filters, moderation is not satisfactory, owing to the feed-forward architecture of transformers.

Thus, the text search, compilation and organization services made available by Big Tech to workers in basic and higher education do nothing but increase their confusion and discomfort.

The handouts compiled with these resources tend to contaminate teaching with obviousness and misinformation, as they do not encourage reflection, only uncritical reproduction. Plagiarism, already so widespread on the Internet, now takes on a new form: blind, indiscriminate pastiche, without selection criteria.

In an era in which printed books are on the verge of disappearing, chatbots threaten to put an end to an educational tradition whose roots date back to antiquity.

What future will the ancient foundations of critical thinking hold? We will only know by carefully identifying and analyzing the effects – especially the less transparent ones – of language technologies on all sectors of society that impact formal and informal education. For those studying this obscure horizon, there are still a myriad of questions to be clarified.

* Eleonora Albano, retired professor from the Institute of Language Studies at Unicamp, is a psychologist, linguist, essayist; she coordinated the first Brazilian project on speech technology.

Notes


[I] Here.

[ii] As I showed in the first article cited, it is possible to give personalized voices to assistants.

[iii] Shoshana Zuboff. Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30, 75–89, 2015.

[iv] Yanis Varoufakis. Technofeudalism: What Killed Capitalism. London: Vintage Books, 2023.

[v] Nancy Fraser. Cannibal Capitalism. São Paulo: Autonomia Literária, 2024.

[vi] Here.

[vii] I analyzed this situation in an article published on the website the earth is round.

[viii] Joseph Weizenbaum. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman & Co, 1976.

[ix] B. F. Skinner. The Behavior of Organisms: An Experimental Analysis. New York: Appleton-Century-Crofts, 1938.

[X] See here.

