Artificial Intelligence skills

Image: Francesco Paggiaro
By YUVAL NOAH HARARI*

Artificial intelligence has invaded the operating system of human civilization

The fear of artificial intelligence (AI) has haunted mankind since the dawn of the computer age. Until now, these fears have centered on machines that use physical means to kill, enslave or replace people. However, in the last two years, new AI tools have emerged that threaten the survival of human civilization in an unexpected way. Artificial intelligence has acquired remarkable abilities to manipulate and generate language, whether with words, sounds or images. In this way, artificial intelligence has invaded the operating system of our civilization.

Language is the stuff of which almost all human culture is made. Human rights, for example, are not inscribed in our DNA. Rather, they are cultural artifacts that we create by telling stories and writing laws. The gods are not physical realities. Rather, they are cultural artifacts that we create by inventing myths and writing scriptures.

Money is also a cultural artifact. Banknotes are just pieces of colored paper, and these days more than 90% of money isn't even banknotes, it's just digital information on computers. What gives money its value are the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff weren't particularly good at creating real value, but they were all extremely competent storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing pictures, and writing laws and scriptures? When people think of ChatGPT and other new AI tools, they are often drawn to examples like schoolchildren using AI to write their essays. What will happen to the school system when kids do this? But this kind of question misses the bigger picture. Forget school essays. Think of the next American presidential race, in 2024, and try to imagine the impact of AI tools that can mass-produce political content, fake news and scriptures for new cults.

In recent years, the QAnon cult has coalesced around anonymous online messages known as “Q drops”. Followers collected, revered, and interpreted these Q drops as a sacred text. Although, as far as we know, all previous Q drops were composed by humans, and bots merely helped to spread them, in the future we may see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon this could become a reality.

On a more prosaic level, we may soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities we think are human but are actually AI. The problem is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it would have a good chance of influencing us.

Through its mastery of language, artificial intelligence could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews. While there is no indication that AI has consciousness or feelings of its own, to foster false intimacy with humans it is enough for the AI to make them feel emotionally attached to it.

In June 2022, Blake Lemoine, an engineer at Google, publicly claimed that the chatbot LaMDA, which he was working on, had become sentient. That controversial claim cost him his job. The most interesting thing about the episode was not Mr. Lemoine's claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If artificial intelligence can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for hearts and minds, intimacy is the most effective weapon, and artificial intelligence has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade, social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology when AI fights AI in a battle to fake intimate relationships with us, relationships that can then be used to convince us to vote for certain politicians or buy certain products?

Even without creating a “false intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to rely on a single AI adviser as a complete and omniscient oracle. No wonder Google is terrified. Why bother searching when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what is the point of ads when I can just ask the oracle to tell me what to buy?

And even these scenarios don't capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interplay between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when artificial intelligence takes over culture and starts producing stories, melodies, laws and religions? Earlier tools such as the printing press and radio helped to spread the cultural ideas of human beings, but they never created new cultural ideas of their own. Artificial intelligence is fundamentally different. Artificial intelligence can create completely new ideas, completely new culture.

In its infancy, artificial intelligence will probably imitate the human prototypes that trained it. But with each passing year, the culture of artificial intelligence will boldly go where no human has gone before. For millennia, human beings have lived inside the dreams of other human beings. In the decades to come, we may find ourselves living inside the dreams of an alien intelligence.

The fear of artificial intelligence has haunted mankind only in the last few decades. But for thousands of years, human beings have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and create illusions. Consequently, since ancient times, humans have feared being trapped in a world of illusions.

In the seventeenth century, René Descartes feared that a malicious demon might be trapping him in a world of illusions, creating everything he saw and heard. In ancient Greece, Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave for life, facing a blank wall, a kind of screen. On that screen they see various shadows projected. The prisoners mistake these illusions for reality.

In ancient India, Buddhist and Hindu sages pointed out that all humans lived trapped in Maya, the world of illusions. What we normally take for reality is often just a fiction in our own minds. People can wage whole wars, killing others and being willing to be killed themselves, because of their belief in this or that illusion.

The artificial intelligence revolution is bringing us face to face with Descartes' demon, with Plato's cave and with Maya. If we are not careful, we may find ourselves trapped behind a curtain of illusions that we cannot tear away, or even realize is there.

Of course, the new power of Artificial Intelligence can also be used for good purposes. I won't expand on this, because the people who develop artificial intelligence already talk about it a lot. The job of historians and philosophers like myself is to point out the dangers. But certainly, artificial intelligence can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to ensure that new AI tools are used for good and not for evil. To do this, we first need to assess the true capabilities of these tools.

Since 1945, we have known that nuclear technology could generate cheap energy for the benefit of human beings, but it could also physically destroy human civilization. So we overhauled the entire international order to protect humanity and ensure that nuclear technology was used primarily for good. Now we have to deal with a new weapon of mass destruction that could annihilate our mental and social world.

We can still regulate the new AI tools, but we must act quickly. Whereas nuclear bombs cannot invent more powerful nuclear bombs, artificial intelligence can produce exponentially more powerful artificial intelligence. The crucial first step is to demand rigorous safety checks before powerful AI tools are released into the public domain.

Just as a pharmaceutical company cannot release new drugs before testing their short- and long-term side effects, technology companies should not release new AI tools before they are deemed safe. We need an equivalent of the Food and Drug Administration for new technologies, and we needed it yesterday.

Won't slowing the public deployment of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations depend on language. When artificial intelligence invades language, it can destroy our ability to have meaningful conversations, thereby destroying democracy.

We have just encountered an alien intelligence, here on Earth. We don't know much about it, except that it could destroy our civilization. We should put a halt to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for an artificial intelligence to disclose that it is an artificial intelligence. If I'm talking to someone and I can't tell whether it is a human or an artificial intelligence, that's the end of democracy.

This text was generated by a human. Or was it?

Yuval Noah Harari is a professor of history at the Hebrew University of Jerusalem and the author, among other books, of Sapiens: A Brief History of Humankind (Companhia das Letras).

Translation: Fernando Lima das Neves.

Originally published on the website of The Economist.

