By IVAN DA COSTA MARQUES*
We should be building machines that work for us, rather than “adapting” society to be machine-readable and writable.
The topic of AI development brings up at least two ideas that are very dear to Science-Technology-Society Studies (hereafter STS Studies)[I]. The first is that there is no “intelligence” without “learning”. The current development of AI rests on the hypothesis that an entity, whether classified as human or machine, “is not” intelligent, but rather becomes intelligent by learning and being trained. Evidence for this claim is offered figuratively through the image of a newborn baby, in whom the symptoms of intelligence are not yet to be found (ABU-MOSTAFA, 2023). With poetic license, “experts say that ‘machine learning’ is what they do, and ‘artificial intelligence’ is what they achieve”. This processual view of the development of AIs (in the plural) resonates perfectly with the founding idea of STS Studies: truths acquire their scientific forms as they allow themselves to be constructed and known, in contrast to the dominant idea that they are already given in Nature or Society, waiting to be discovered by Science.
The view of intelligence as the result of learning processes leads us to a second resonance between AI developments and STS Studies. In the ontology put into play by STS approaches, every entity, whether human or thing, is only identified and acquires existence as a “provisional juxtaposition of heterogeneous elements”. The founding hypothesis of AI developments is the proposition that a network of a staggering number of very simple elements, binary much as synapses in the brain might be said to be, is capable of “learning” and becoming “intelligent”. AI developers call these elements “parameters” and compare them to buttons that can be pressed or not, taps that can be opened or closed, doorknobs that can be turned to certain positions during a network’s learning or training process. AI developers thus configure a provisional juxtaposition of “parameters” in relation to information they find on the Internet. A provisionally configured, or trained, network can then enter the market as an AI product.
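To make the knob metaphor concrete, here is a minimal sketch in Python, not drawn from any system discussed in this essay: a single artificial neuron with three parameters (two weights and a bias, hypothetical stand-ins for the billions mentioned below) is nudged, step by step, toward reproducing a simple pattern.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: the logical OR of two binary inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# Three "knobs": two weights and a bias, initialized arbitrarily.
w1, w2, b = random.random(), random.random(), random.random()
lr = 0.5  # learning rate: how far each knob is turned per step

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error with respect to the neuron's input.
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1  # turn each knob slightly
        w2 -= lr * grad * x2  # in the direction that
        b -= lr * grad        # reduces the error

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + b), 2), "target:", target)
```

The point of the sketch is only that, at this elementary scale, “learning” is nothing more mysterious than the repeated adjustment of parameters; the open question raised below is what emerges when the same procedure runs over billions of them.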
We can mark propositions enunciated in universities and research centers such as Caltech in the 1980s as the first initiatives toward the current development of AIs as networks inspired by synapses. For almost four decades, this proposition inspired by “neural networks” was unable to become robust enough to generate computational artifacts capable of learning. At least part of the explanation lies in the limited capacity of the computers of the time, very small relative to the dimensions that the networks of “parameters” need to reach. To learn enough for the symptoms of what is called AI to emerge, the networks needed to reach the scale of billions of “parameters”.[ii] But what is most intriguing is that, in the learning process, these networks acquire a certain capacity to act as if on their own initiative, a phenomenon recognized, and so far little understood, by the specialists who develop and train them.
Earlier artifacts that historically invoked the idea of an artificial intelligence surpassing human intelligence differ radically from current developments in AI, just as the idea of Science in a Platonized world differs greatly from the ideas of science in the world enacted by the ontology of STS Studies. Good examples of such earlier artifacts are IBM’s Deep Blue®, which equaled and then surpassed world chess champion Garry Kasparov,[iii] and IBM’s Watson®, which beat a team of humans on the quiz show Jeopardy!.[iv] These two famous examples required months of work by teams of analysts and programmers entirely dedicated to finding an answer or solution to a problem defined from the outset in a very specific way. In short, these two emblematic cases of earlier forms of AI are applications of the speed and information storage capacity of computers as “brute force” to consult and carry out rigid, previously defined processes.
Today it is common to classify AIs according to two types of function: discriminative and generative. The discriminative type can be asked questions such as “whose face is this?” or “what disease does someone with the following symptoms have?”. The generative type can be assigned tasks such as “create a face” or “create a treatment for this person”. Given the scale of the Internet, I do not need to point out the myriad possibilities opened up by the existential (ontological) emergence of artifacts capable of using an ever-growing universe of “texts”, including videos and images, as the starting point for performing their functions.
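The distinction can be sketched in a few lines of code. The toy data and function names below are hypothetical, chosen only to show the two directions of use: a discriminative function maps an observation to a label, while a generative one produces a new observation for a given label.

```python
import random
import statistics

# Toy, hypothetical "measurements" for two classes.
heights_cat = [24, 25, 23, 26, 25]
heights_dog = [55, 60, 58, 62, 57]

def classify(x):
    """Discriminative use: given an observation, return the likelier label."""
    if abs(x - statistics.mean(heights_cat)) < abs(x - statistics.mean(heights_dog)):
        return "cat"
    return "dog"

def generate(label):
    """Generative use: produce a new, plausible observation for a label."""
    data = heights_cat if label == "cat" else heights_dog
    return random.gauss(statistics.mean(data), statistics.stdev(data))

print(classify(27))     # answers "what is this?" -> "cat"
print(generate("dog"))  # creates a new instance, e.g. 59.3
```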
In the ontology proposed by STS Studies, the networks (entities, beings) that inhabit the universe are real, or natural, but have no predefined form; they are collective, but are always juxtapositions of humans and things; and they are narrated, but are not just narratives.[v] Expanding from what is text on the Internet, it is plausible to assume that AIs privilege, at least initially, what is narrated in the entities they create in the world.[vi] Given this privileging of the narrative, a technopolitical bifurcation has emerged and gained visibility in the developed West (which does not include our country) regarding the future of AI development. The two sides of this bifurcation are expressed in two letters, both signed by renowned experts.
One of them, signed by thousands of professionals and businesspeople, including the iconic Elon Musk, was published by the Future of Life Institute. It proposes a pause of at least six months in “training (generative) AI systems more powerful than GPT-4” so that new regulations can be defined and implemented. There is no consensus on this pause, or even on its feasibility, but the letter also demands that
AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should include, at a minimum: new regulatory authorities empowered to deal with AI; oversight and tracking of highly capable AI systems; a thorough auditing process; a certification ecosystem; adequate public funding for technical AI safety research; and institutions to deal with the dramatic economic and political disruptions that AI will cause, especially to democracy. (Pause Giant AI Experiments: An Open Letter, 2023)
The other widely publicized letter, Statement from the listed authors of Stochastic Parrots on the “AI pause” letter, signed by the authors of the now-famous 2021 academic article informally known as “stochastic parrots” (BENDER; GEBRU; MCMILLAN-MAJOR; SHMITCHELL, 2021), does not dispute the need to design new regulations, but offers a technopolitical counterpoint to the first.
Contrary to the letter’s narrative that we must “adapt” to a seemingly preordained technological future and deal with “the dramatic economic and political disruptions that AI will cause, especially to democracy,” we do not agree that our role is to adjust to the priorities of a few privileged individuals and what they decide to build and proliferate. We should be building machines that work for us, rather than “adapting” society to be machine-readable and writable. The current race toward ever-larger AI experiments is not a preordained road where our only choice is how fast to run, but rather a set of profit-driven decisions. The actions and choices of corporations should be shaped by regulations that protect the rights and interests of people. (emphasis added) (Statement from the listed authors of Stochastic Parrots on the “AI pause” letter, 2023)
There is no space here to do more than observe that, while the first letter reinforces the continuity of a path already taken by the West, one that preserves a center guaranteeing the meanings of whatever AIs come to offer as knowledge, the second opens up decentralized possibilities for constructing AIs that offer facilities for new modes of existence in the building of new common worlds (LATOUR, 2017/2020).
STS Studies criticize and problematize the path the West has been following, which guarantees a center of meanings in the generation of knowledge. In doing so, STS Studies show how to dignify, epistemologically, knowledges from outside the West, opening up decentralized possibilities for the construction of knowledge that can facilitate the sharing of different modes of existence in a new common world.
*Ivan da Costa Marques is a professor in the graduate program in History of Sciences and Techniques and Epistemology (HCTE) at UFRJ and the author of Brazil: Opening of Markets (Contraponto). [https://amzn.to/3TFJnL5]
References
ABU-MOSTAFA, Y. Artificial Intelligence: The Good, the Bad, and the Ugly. Caltech Watson Lecture, 2023. Available at caltech.edu/watson and on Caltech's YouTube channel.
BENDER, E. M.; GEBRU, T.; MCMILLAN-MAJOR, A.; SHMITCHELL, S. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: ACM FAccT '21, Virtual Event, Canada. ACM, 2021.
LATOUR, B. We Have Never Been Modern: An Essay on Symmetrical Anthropology. Translated by C. I. da Costa. 1st ed. Rio de Janeiro: Editora 34, 1991/1994. 152 p. ISBN 8585490381.
LATOUR, B. Where to Land? How to Orient Yourself Politically in the Anthropocene. Translated by M. V. A. Costa. Rio de Janeiro: Bazar do Tempo, 2017/2020. 158 p. ISBN 978-65-86719-18-5.
Pause Giant AI Experiments: An Open Letter. Future of Life Institute, 2023. Accessed on: July 31, 2023.
Statement from the listed authors of Stochastic Parrots on the “AI pause” letter. DAIR Institute, 2023. Accessed on: July 31, 2023.
Notes
[I] Looking at science, and especially technoscience, from the perspective of the studies that in the metropoles are called Science Studies, here called STS Studies (Science-Technology-Society), leads to different understandings of what scientific knowledge is and of possible new directions in its construction. Several groups in Brazil, especially those associated with ESOCITE.BR (Brazilian Association of Social Studies of Science and Technology) (https://www.esocite.org.br/), adopt the STS perspective, opening fronts for “local”, so-called “situated” knowledges, including those arising from the African diaspora and from indigenous peoples. By pointing out these openings, STS Studies, despite having originated in the metropoles, can become powerful tools for criticizing the reproduction of coloniality in Brazil.
[ii] The human brain operates on the scale of trillions of synapses.
[iii] After losing a first match in 1996, IBM's Deep Blue defeated Garry Kasparov in a 1997 rematch.
[iv] The Jeopardy! program arrived in Brazil under the name “O Céu é o Limite” (“The Sky Is the Limit”).
[v] “Is it our fault if networks are at the same time real like nature, narrated like discourse, collective like society?” (LATOUR, 1991/1994:12)
[vi] The poststructuralist movement of the twentieth century (including thinkers such as Jacques Derrida, Julia Kristeva, Roland Barthes, Gilles Deleuze, Félix Guattari and Michel Foucault) opened a path for reformulating the participation of the “text” in “knowledge”.