Technological choices

Image: Bebos

By RAFAEL CARDOSO SAMPAIO*

Companies that control data and AI infrastructure profit from mass surveillance and predatory automation, while workers lose autonomy and income

In 2023, the economist and MIT professor Daron Acemoglu, co-author of Why Nations Fail and subsequently a winner of the Nobel Prize in Economics, brought a scathing critique to the global debate in another work, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, written with Simon Johnson. In it, the authors argue that the current trajectory of artificial intelligence – marked by ever-larger models, dependent on massive amounts of data, and geared toward indiscriminate automation – is not an inevitable fate. It is a technological choice, and choices, as we well know, can be revisited.

Daron Acemoglu and Simon Johnson start from a historical premise: technologies are shaped by those who hold power. In the case of modern AI, the dominant narrative, driven by giants such as OpenAI, Google, Microsoft and Meta, revolves around the quest for “artificial general intelligence” – machines capable of replicating (and replacing) human cognition. This vision, inspired by the ideas of Alan Turing, fuels a vicious cycle of power concentration. Companies that control AI data and infrastructure – such as cloud providers and Big Tech platforms – profit from mass surveillance and predatory automation, while workers lose autonomy and income.

Since leading the field of artificial intelligence innovation has become a goal to be pursued at any cost, these companies have been spending gigantic amounts of money and computation, which generates a series of related problems. As the authors note, models like GPT-4 consume enormous amounts of energy and millions of liters of water to cool their servers, often for trivial tasks such as recognizing cats in photos or generating superficially coherent texts. Meanwhile, socially relevant applications – such as early diagnosis of rare diseases or adapting crops to climate change – take a back seat at these companies.

The recent release of Chinese company DeepSeek's open-source generative artificial intelligence model caught the world's attention, striking directly at the heart of the financial and speculative capital that sustains much of the structure of Silicon Valley's Big Tech. The blow landed precisely because it dismantled the sector's previous narratives.

Until the launch of DeepSeek's R1 model, the prevailing belief was that developing cutting-edge artificial intelligence models required astronomical investments, immense volumes of data, and top-of-the-line Nvidia hardware for intensive computational processing, which would justify the proprietary nature and industrial secrecy of these technologies.

Despite having a modest budget and team compared to the Silicon Valley giants, DeepSeek managed to train its AI model at a significantly lower cost and in just a few months, innovating in its training techniques. Even without the best processing cards, the company developed a model that competes with the best products from OpenAI, Google and Anthropic. The laboratory also innovated by releasing the model openly, allowing anyone interested to run it on more modest servers.

DeepSeek’s success is not just technical; it is political. It is a practical manifesto against the “AI illusion,” the term coined by Daron Acemoglu and Simon Johnson to describe the naive belief that autonomous, superintelligent machines will automatically bring benefits to society.

In an article published in the newspaper The Guardian, Kenan Malik similarly argues that DeepSeek’s impact lies in having demystified the aura surrounding AI. Silicon Valley, he argues, has cultivated the image of AI as a precious and miraculous achievement, portraying its leaders as prophets and the technology as possessing almost magical powers, including the promise of “artificial general intelligence” (AGI). However, Malik notes that such claims stem less from technological possibilities and more from political and economic necessity, since the business model of AI relies on hype to drive investment and influence policy.

DeepSeek shows that it is possible to resist the Big Tech narrative and hype and build AI that serves human, not corporate, goals. Acemoglu and Johnson remind us that technology is a mirror of values: if we prioritize utility, efficiency and collaboration over hype, surveillance and automation, the future will be radically different. DeepSeek reminds us that we can still change the course of innovation and that there are other options.

And where does Brazil fit into this story?

Well, we may regret that the federal government did not initially take this new technological scenario seriously enough. Had it launched an initiative back in 2023, when ChatGPT was already impressing the world, Brazil could well be the country now showing off its own DeepSeek.

However, there is no point in shedding tears in the rain. This change also points to some interesting paths, as well as some urgent needs, for Brazil. Beyond its technological innovation, two issues concerning DeepSeek in particular have drawn attention.

First, despite being free, its terms of service allow users' data, including their interactions, to be stored in China and reused to train future models. Second, it has drawn attention for a bias, and to some extent censorship, that prevents it from addressing topics sensitive to the Chinese government.

The interesting thing here is that practically every major language model, such as ChatGPT, Claude and Gemini, already does the same. Their terms of service clearly state that user interactions may be used to train their models, and the data is stored on servers in the United States. Like any technology, these models carry various biases from the perspective of their programmers, who are usually white, high-income men living in Silicon Valley.

Therefore, answers to questions about disputed concepts, such as democracy, feminism, economic equality and many others, will tend to be colored by this perspective, in addition, of course, to the biases already present in the training data, which can generate new forms of discrimination.

Despite this, when downloaded and run locally, DeepSeek does not apply censorship. Apparently, a second system applies censorship only to the online version, to comply with Chinese law. DeepSeek will therefore work without these restrictions when run locally.

When we put all this information together, some short- and long-term actions for Brazil emerge. In the very short term, it is urgent for the Brazilian government to issue regulations on the preservation of strategic data. Two areas deserve priority attention: government data and scientific data. Government data is one of any state's greatest assets and often involves strategic and even secret decisions of the country.

Acting in good faith, at this very moment, hundreds, if not thousands, of public servants are inadvertently uploading this data to ChatGPT and similar services, handing over its value for free to American (and now Chinese!) Big Tech companies. There are currently no clear ordinances or regulations from the federal government as a whole mandating extra care with such data. The same goes for the other branches of government.

Likewise, academics are repeating the same mistake with cutting-edge scientific findings. Seeking to increase productivity, our scientists are analyzing data and writing and reviewing texts with generative artificial intelligence, freely handing over this data, the result of a large financial and intellectual investment, without any compensation. There is currently no regulation from the Ministries of Education or of Science and Technology, nor from regulatory and research-funding agencies such as CAPES, CNPq and the like.

Exactly for this reason, in a guide of our own, my colleagues and I suggest that science adopt open generative artificial intelligence models and keep its data in sovereign clouds, something that, in practice, public servants could also do. DeepSeek itself could be used for this, but there are other options already on the market. The government of the state of Piauí is about to release Sovereignty, a language model fully developed by the state. Brazilian science and the Brazilian state are thus fully capable of creating their own models.

In the medium and long term, we already know the recipe: investment. The Brazilian Artificial Intelligence Plan (PBIA) seems a good start, and the news from the recent “National Conference on Science, Technology and Innovation” is equally exciting, to the point that we can contemplate a new chapter for Brazilian science. We seem, then, to be heading in the right direction.

However, we must remember DeepSeek's lesson: do more with less, work with open and collaborative models, avoid Big Tech hype, and develop technologies that serve our needs, as Daron Acemoglu and Simon Johnson remind us. Only then will we be able to talk and think about effective digital sovereignty.[1]

*Rafael Cardoso Sampaio is a professor at the Department of Political Science at the Federal University of Paraná (UFPR).

Note


[1] This text was corrected and improved with the help of DeepSeek R1 and then duly reviewed, enhanced and appropriated by its human author.

