Artificial Idiocy

Image: ThisIsEngineering

By SLAVOJ ŽIŽEK*

The problem isn't that chatbots are stupid; it's that they are not "stupid" enough

There is nothing new in "chatbots" that are capable of sustaining a conversation in natural language, understanding the user's basic intentions and offering responses based on predefined rules and data. Their capabilities, however, have increased dramatically in recent months, sending many into panic and despair.

Much has been said about chatbots being a harbinger of the end of student dissertations. But an issue that needs further attention is how chatbots should respond when human interlocutors use aggressive, sexist or racist statements to entice the robot to come up with its own nasty fantasies in response. Should artificial intelligences be programmed to respond at the same level as the questions put to them?

If we decide that some kind of control should be applied, then we must determine how far the censorship should go. Will political positions that some sectors consider "offensive" be prohibited? What about expressions of solidarity with West Bank Palestinians, or the assertion that Israel is an apartheid state (which former President Jimmy Carter once put in the title of a book)? Will all this be blocked as "anti-Semitic"?

The problem does not end here. As writer and artist James Bridle warns us, the new artificial intelligences are "based on the mass appropriation of existing culture", and the belief that they are "truly knowledgeable or meaningful is actively dangerous". Therefore, we must be very cautious with the new AI image generators. "In their attempt to understand and replicate human visual culture in its entirety," observes Bridle, "[they] also seem to have recreated our greatest fears. Perhaps this is just a sign that these systems are actually very good at mimicking human consciousness, even reaching the horrors that lurk in the depths of our consciousness: our fears of filth, death and corruption."

But how good are the new artificial intelligences at resembling human consciousness? Consider the bar that recently announced a special promotion in these terms: "Buy one beer for the price of two and get a second beer completely free!" For any human, this is obviously a joke. The typical "buy one, get one free" offer is reworked to cancel itself out, an expression of cynicism appreciated as comic honesty meant to boost sales. Would a chatbot be able to understand that?

“Fucking” presents a similar problem. Although the word designates something that most people like to do (copulate), it also acquires a negative value (“We're fucked!”, “Fuck off!”). Language and reality are confused. Is artificial intelligence ready to discern such differences?

In his 1805 essay "On the Gradual Formation of Thoughts in the Process of Speech" (published posthumously in 1878), the German poet Heinrich von Kleist reversed the popular saying that one should not open one's mouth to speak unless one has a clear idea of what to say: "That being so, if a thought is expressed in a confused way, it does not mean at all that such a thought was conceived in a confused way. On the contrary, it is possible that the ideas that are expressed in the most confused ways are just those that have been thought through most clearly."

This relationship between language and thought is extraordinarily complicated. In a passage from one of his speeches in the early 1930s, Joseph Stalin proposed radical measures to "detect and ruthlessly combat even those who oppose collectivization only in their thoughts. Yes, that's what I mean: we must fight even the thoughts of people." We can safely assume that this sentence was not prepared beforehand. As he let himself be carried away by the moment, Stalin immediately became aware of what he had just said. But instead of backing down, he decided to continue his hyperbole.

As Jacques Lacan later stated, this was one of those cases where truth emerges by surprise through the act of enunciation. Louis Althusser identified a similar phenomenon in the relationship between prise and surprise. Someone who suddenly grasps ("prise") an idea will be amazed at what they have accomplished. Again, could any chatbot do this?

The problem isn't that chatbots are stupid; it's that they're not "stupid" enough. It's not that they are naive (incapable of irony and reflection); it's that they're not naive enough (failing to notice the moments when naivety is masking insight). The real danger, then, is not that people mistake chatbots for real people; it is that chatbots make real people talk like chatbots, unable to notice nuances and ironies, obsessively saying exactly what they think they want to say.

When I was younger, a friend went to a psychoanalyst for treatment after a traumatic experience. This friend's idea of what such analysts expect from their patients was a cliché, and he spent the first session producing fake "free associations" about how he hated his father and wished for his death. The analyst's reaction was to adopt a naive "pre-Freudian" position and scold my friend for not respecting his father ("How can you talk like that about the person who made you what you are?"). This feigned innocence sent a clear message: I'm not buying your fake "associations." Would a chatbot be able to understand this subtext?

It probably would not, because it resembles Rowan Williams' interpretation of Prince Myshkin from Dostoevsky's The Idiot. According to the conventional interpretation, Myshkin, "the idiot", is "a positively good and beautiful man" who has been driven to solitary madness by the harsh brutalities and passions of the real world. In Williams' radical reinterpretation, however, Myshkin represents the eye of a storm: no matter how good and holy he is, he is the one who causes the chaos and deaths he witnesses, because of his role in the complex web of relationships around him.

It's not just that Myshkin is a naive simpleton; it's that his particular brand of obtuseness makes him incapable of realizing his disastrous effects on others. He is a flat character who literally talks like a chatbot. His "goodness" lies in the fact that, like a chatbot, he reacts to challenges without irony, offering platitudes devoid of any reflexivity, taking everything literally and relying on a mental mechanism of automatic sentence completion instead of forming actual ideas. For this reason, the new chatbots will get along very well with ideologues of all stripes, from the contemporary "woke" crowd to "MAGA" nationalists, who prefer to stay asleep.

*Slavoj Žižek, professor of philosophy at the European Graduate School, is international director of the Birkbeck Institute for the Humanities at the University of London. He is the author of, among other books, In Defense of Lost Causes (Boitempo).

Translation: Daniel Pavan.

Originally published on the Project Syndicate portal.
