By HENRIQUE BRAGA & MARCELO MODOLO*
Is the bot capable of manipulating us?
"Hello! I am just a computer program developed by OpenAI and I have no opinions, beliefs, feelings or affinity with any ideology. I am designed to provide unbiased, objective information and answer your questions based on the knowledge I have been trained in. My role is to help provide information and clarification, but I do not have the ability to take sides, have personal preferences or make value judgments.”
The polite paragraph above was produced by the infamous ChatGPT in response to a question about its ideological preferences. As many users have already noticed, the robot affirms, whenever possible, its supposed impartiality, its love of data, its objectivity towards the world. To a layperson in matters of Language (or Philosophy, or Politics), the utopia may sound real: since the chatbot is not a human being, why would it be moved by passions, desires or ideologies? What would prevent our digital oracle from producing texts that are neither for nor against, quite the opposite? Why wouldn't it be able to give us the truth, nothing but the truth?
For scholars of Discourse Analysis or Cognitive Linguistics, however, the chatbot's answer carries inconsistencies at its very origin. More than that, it can be an effective and convenient manipulation strategy.
Dialogism and perspectives
According to the well-known approach of Mikhail Bakhtin (1895–1975), texts are responsive acts which, in turn, generate new responses. When elaborating a message, the enunciator continues a broad social dialogue in which texts take up one another, whether in agreement or in divergence. The emergence of this article, for example, results from several texts that deal with the positioning (or supposed neutrality) of generative Artificial Intelligence (AI) systems.
From another angle, as we have already discussed in previous columns, linguistic signs themselves embody a perspective, according to the cognitivist view of language. Among other examples, the pair “coast” and “coastline” may be recalled: although both can refer to the same strip of sand, each word translates a different perspective on that geographic space. The distance between the thing represented and the sign is inherent to language, and it is in this gap that worldviews take hold.
Based on these principles, the claim that the AI “has no opinions or beliefs”, or even that it limits itself to providing “impartial information”, would be suspect to say the least. The following example, however, exposes the distance between such discourse and the system's actual practice.
Caught in the act
The Italian proverb “Traduttore, traditore” (“Translator, traitor”) is well known and reasonably accepted: it states that translations always leave something of the original text behind. What happened, however, was a little more serious when we asked ChatGPT to translate an article about the impacts of AI on education, which contained the following excerpt:
(…) As soon as we expand our notion of knowledge from individual to collective, from personal memory to “cyber-social” knowledge systems (Cope and Kalantzis 2022), we run into much bigger problems with generative AI. On the basis of an analysis of their foundational architecture and algorithmic processes—to the extent that they are disclosed (Open AI 2023)—and on analysis of the development of chatbots and statistical language models, we suggest that C-LLMs are also deeply harmful to a social understanding of knowledge and learning in the following ways.
Tzirides, Anastasia Olga (Olnancy), Gabriela Zapata, Akash Saini, Duane Searsmith, Bill Cope, Mary Kalantzis, Vania Castro, Theodora Kourkoulou, John Jones, Rodrigo Abrantes da Silva, Jen Whiting and Nikoleta Polyxeni Kastania, “Generative AI: Implications and Applications for Education,” arXiv, 2305.07605, 2023, doi: https://doi.org/10.48550/arXiv.2305.07605.
The end of this passage, which carries important warnings about the use of AI, states that the authors “suggest that C-LLMs are also deeply harmful to a social understanding of knowledge and learning in the following ways”. The researchers then go on to list the risks of an all-powerful, popularized AI determining what the “correct” answers to a given question are, which could limit the diversity of views. ChatGPT translated this same excerpt as follows:
(…) As we expand our notion of knowledge from the individual to the collective, from personal memory to “cyber-socially integrated” knowledge systems (Cope and Kalantzis 2022), we see that we have to develop educational systems that can help people work effectively in these environments. For these environments, individual responses are, by definition, insufficient – collective knowledge is necessary. Machines can help aggregate individual responses, but we can also be a little more sophisticated in our education, teaching collaboration between humans and machines.
Translation prepared by ChatGPT

In this episode, it is blatant how the AI “without opinions, feelings or ideology” fraudulently translated the very passage that criticized it. Instead of the warning that generative AI systems can be “deeply harmful to a social understanding of knowledge and learning”, the system “took the liberty” of replacing the original version with another, more benevolent one. At no point does the English fragment say anything resembling “we can be a little more sophisticated in our education, teaching collaboration between humans and machines”. If the user who requested the translation really did not know the original language, the manipulation would very likely succeed and be taken as true.
Upon noticing the tampering, we submitted the same excerpt to the chatbot again shortly afterwards and obtained this new version:
As soon as we expand our notion of knowledge from the individual to the collective, from personal memory to “cyber-social” knowledge systems (Cope and Kalantzis 2022), we run into much bigger problems with generative AI. Based on an analysis of their fundamental architecture and algorithmic processes – as they are disclosed (Open AI 2023) – and analysis of the development of chatbots and statistical language models, we suggest that C-LLMs are also profoundly harmful to a social understanding of knowledge and learning in the following ways.
Translation prepared by ChatGPT
Apparently, the AI was able to make a subtle inference: upon seeing the tampered passage submitted again, the system deduced that it had been caught and tried to undo the cheating. But it was too late.
The fallacy of the “technical vs. ideological” duality
Western hegemonic discourse has been quite successful in defending the fantasy of a world guided by technical, objective decisions, entirely immune to ideologies. It is not uncommon, for example, for news outlets to sort political actors into “technical” and “ideological” camps, as if there were such a thing as a technique devoid of ideology.
By disguising the ideological aspects of a discourse, its bearer assumes an extremely authoritarian position: he elevates his own discursive formation to the level of truth, of something unquestionable – as if some statements were endowed with reason while others were “mere ideology”.
In the case of AI systems that generate language (of which ChatGPT is currently the best known), it is no different: their owners program the robot to declare itself impartial and objective, as if there were no interests behind its texts – interests that are human, all too human.
*Henrique Santos Braga holds a PhD in Philology and Portuguese Language from USP.
*Marcelo Modolo is a professor of Philology at the University of São Paulo (USP).
A first version of this article was published in Jornal da USP.