By André Campos Rocha*
Considerations on the impact of digital transformations
One of the basic aspects of the digital revolution is the conversion of countless everyday events and phenomena - sounds, images, documents and so on - into the binary language of computers, the 0s and 1s with which machines can analyze, measure, compare and generate predictions, or apprehend patterns and relationships between things in the world that sometimes seem strange to common sense.
In this process, a new cultural setting emerges, marked by a radical break with previous forms of life. Just look, for example, at our smartphones, those compact and powerful devices that accompany us from waking to bedtime and that render obsolete experiences which until recently made up the texture of our daily lives, now overtaken by the frantic pace of digital transformation.
Who still leaves home with a wallet full of documents, when everything that belongs to our private lives is being duly packaged and compressed into bits of information on our electronic devices? Who remembers the last time they raised a hand to hail a taxi, in what was perhaps an anxious wait, now that the private transport market is increasingly dominated by digital platforms? What will become of banknotes and coins (and of the piggy banks with which we used to adorn our homes), or even of paper maps, if our financial transactions and our means of spatial orientation are integrated directly into our smartphones?
By dematerializing things and subsuming them into a single object, the smartphone illustrates how the cadence of change in our habits and conventions has become subordinated to the dynamics of digital innovation. More than that, these artifacts, true syntheses of recent technical conquests, reveal, beginning with the blurring of the spheres of work and free time to the point of indistinction (what is a social network if not a mixture of leisure and entertainment with space for professional announcements and advertising?), how these same ordinary habits and behaviors are inscribed in the financial, technical, legal and operational arrangements that sustain and guide the digital universe. Since there are obvious imbalances of power in the capacity and initiative to shape it, the world shown to us on screens is not a neutral portrait: it is designed according to the interests of the companies and institutions that control our data. Although we know that by entering this universe we renounce our privacy and intimacy, the terms of the bargain are not always clear. The implication is that, in order to operate competently in everyday life, we depend on a set of technical actors we do not know, the leaders of a universe whose operating logic is opaque to us and whose consequences for our subjectivity we are only beginning to become aware of.
It was not always like this. As the Iranian blogger and activist Hossein Derakhshan (2015) attests, we are now witnessing the death of the hyperlink. The English word link conveys the idea of a path, a shortcut, a connection. The age of the hyperlink was a time when the World Wide Web prized decentralization and the breaking down of hierarchies, forming a vast navigable system of nodes and networks. It was the golden age of personal blogs, "windows into lives we knew little about, bridges that connected different existences to each other and therefore transformed them" (Derakhshan, 2015). That is no longer the case.
In the post-2008-crisis world, the infrastructure of the Western digital ecosystem is operated and dominated by five large companies located on the West Coast of the United States: Alphabet/Google, Facebook, Amazon, Apple and, to a lesser extent, Microsoft. The core of this system is corporate (Dijck, Poell and Waal, 2018). The vaunted ideals of Silicon Valley's "Californian ideology", which, in a curious mixture of the rebellion and transgression of the New Left with high-tech entrepreneurship, proclaimed the values of openness and sharing (Barbrook and Cameron, 1995), amount to little more than a chant in the face of the aggressiveness with which these big tech companies expand their business on the net, buying startups, striking partnerships, blocking the path of other platforms and forcing them, at the price of their survival, to connect to them in one way or another. It is well known that their main asset – especially in the case of Alphabet/Google, Facebook and Amazon – is the data provided by users. With it, these companies can extract the knowledge needed to sell third parties personalized advertisements that reach their intended targets with millimetric precision.
For this reason, they strive above all to capture our attention, an increasingly scarce resource in today's excited, screen-saturated society (Türcke, 2010), one which, being non-transferable, becomes ever more valuable. To generate value and produce the information essential to the expansion of their business, it is vitally important for these big tech companies that we remain attached to their platforms for as long as possible – hence Derakhshan's (2015) lament that social networks, unlike the blogs of yesteryear, have a closed, self-referential character, since they tend to treat native texts and images, published directly on their domains, with much more respect than those hosted outside them. And the persuasion techniques they employ constitute a "turning point" in marketing strategies, making the ideological devices of the "classic cultural industry", which Adorno and Horkheimer (1985) point out, seem like child's play.
Throughout the 20th century, marketing strategies sought above all to create consumer identification with products: Marlboro cigarettes suited a proud, independent personality; margarine was the ideal food for a prosperous and harmonious family environment. Today, combining advanced computing and Big Data with the knowledge of behavioral economics and the cognitive-behavioral sciences, these big companies seek not only to generate valuable insights about us but, much more decisively, to transform who we are, to affect our personality, so that we become far more predictable people, easy prey to the intricate mechanisms of their algorithmic management (Bentes, 2019).
Confined in digital bubbles, we become far more politically radical, unable to take in multiple visions and differing points of view on a given subject. Hooked by this superindustry of the imaginary, in Eugênio Bucci's (2021) terms, whose techniques of domination draw on the entire lexicon of behaviorist psychology – conditioning, triggers, variable rewards, choice architecture and so on – we are sucked into these gears of addiction, often finding ourselves in endless motions of tapping and scrolling on YouTube, Facebook or Instagram, which seduce us with overwhelming power.
Here, what is decisive is less the content than the form; or, as the media scholar McLuhan (1967) reminds us, the medium is the message. The pattern of communication introduced by the networks, which appeals to the unconscious and to the emotions, is a revolutionary factor in the contemporary cultural landscape. According to neuroscience studies, our brain is plastic and malleable, its network of synapses shaped by our habits. Thus, from the point of view of intellectual history, the internet is a force that has changed our minds (Carr, 2010). In the multiplicity of its stimuli, it promotes distracted and superficial thinking, which, combined with the performance pressures of neoliberal capitalism, gives rise to various psychic pathologies linked to multitasking, fatigue and attention deficit (Han, 2017). In this ultra-fast system of delivery and reward, which encourages the repetition of physical and mental actions, our minds are consumed in the medium. Hence technologies are not only our extensions; we also become extensions of technologies.
Given this intertwining of culture, economics and technique, one must question the validity of the rational, well-informed individual with which liberal theory has always thought society and politics. What place is there for thoughtful, considered choices when we are immersed in an environment conducive to the propagation of fake news and conspiracy theories, without knowing for sure, amid a profusion of rapidly circulating information, where to look for the "truth" of the facts? How can we speak of an "autonomous subject" when we turn ourselves into particles lodged in an algorithmic web with the capacity to condition our tastes and desires and to direct our actions?
With the centralization of information in the hands of a restricted group of big tech companies, governments and corporations are, as a result, becoming much more powerful than we are. In this regard, two characteristics of algorithmic management in the era of Big Data and artificial intelligence appear to be fundamental. First, it should be noted that algorithms are nothing more than sets of instructions, a series of steps that transform certain input data (inputs) into some intended result (outputs). Something commonplace, like a chocolate cake recipe, can be thought of as an algorithm: the recipe consists of a set of instructions that aim to transform certain inputs, the ingredients (eggs, flour, yeast and so on), into a finished product, the warm cake ready to be devoured. Roughly speaking, the peculiarity of artificial intelligence systems is that they are provided with the inputs and the outputs, and the algorithms seek the best means – the best set of steps – to reach the proposed objective. (In the trivial case of the cake, it is as if, faced with the ingredients and the finished cake, the algorithm searched for the best way to accomplish the task, in terms of using ingredients, optimizing costs, saving time and so on.)
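To make the contrast concrete, here is a minimal sketch in Python, purely illustrative and not drawn from the sources cited: a hand-written algorithm whose steps are fully specified in advance, set against a "learning" routine that is given only input/output pairs (the ingredients and the finished cake, so to speak) and searches for the rule connecting them. The temperature-conversion example and every name in it are hypothetical.

```python
# A minimal sketch of the distinction drawn above (hypothetical example):
# (1) a classical algorithm: every step is written down in advance;
# (2) a "learning" algorithm: only input/output pairs are given, and the
#     program searches for the rule (here, a straight line) that maps one
#     onto the other.

def celsius_to_fahrenheit(c: float) -> float:
    """Classical algorithm: every step is specified by the programmer."""
    return c * 9 / 5 + 32

# Input/output examples with no formula attached: the "ingredients" and
# the "finished cake" of the analogy above.
examples = [(0.0, 32.0), (10.0, 50.0), (25.0, 77.0), (100.0, 212.0)]

def fit_line(pairs):
    """Least-squares fit: the program finds the slope and intercept itself."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in pairs) / \
            sum((x - mean_x) ** 2 for x, _ in pairs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

if __name__ == "__main__":
    slope, intercept = fit_line(examples)
    # The learned rule recovers roughly 9/5 and 32 without ever being told them.
    print(f"learned rule: f(x) = {slope:.2f} * x + {intercept:.2f}")
    print("hand-written:", celsius_to_fahrenheit(30.0), "learned:", slope * 30.0 + intercept)
```

In the first function the means are dictated; in the second, only the ends are given and the program works out the means, which is, in miniature, the shift the paragraph above describes.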
Therefore, fed with large databases and boosted by today's computational processing power, these systems have an enormous capacity for efficiency, that is, for finding the optimal and appropriate paths between means and ends, whatever these may be, to such an extent that within them resides a somewhat irrational and dystopian possibility – widely exploited by science-fiction series and novels – of exceeding or subverting the "purposes" for which they were, in principle, programmed.
AlphaGo, artificial intelligence software developed by Alphabet/Google to defeat the world champion of Go (a board game far more complex than chess), Lee Sedol, not only beat the South Korean master, making apparently absurd moves, but was itself defeated shortly afterwards by its new version, AlphaGo Zero. What is new about AlphaGo Zero compared with its predecessor is that it learned many of the tricks and techniques of the great Go players by playing millions of games against itself, receiving only the blank board and the rules of the game. This learning technique, in which the machine dispenses with real-world examples and learns to solve complex problems autonomously, undoubtedly represents a big step into the unknown of its cognitive faculties (Knight, 2017). And resources like this can be used in different sectors of society, which leads us to the second characteristic listed here: the ease with which certain artificial intelligence applications can be transposed from one context to another, detaching themselves from their original use.
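To give a rough sense of what "learning by playing against itself, given only the rules" can mean in practice, the toy sketch below trains a table of move values for a trivial counting game through self-play alone. It is a deliberately simplified illustration that assumes nothing about AlphaGo Zero's actual architecture (which combines tree search with deep neural networks); the game, the parameters and the code are hypothetical.

```python
# A toy illustration of learning by self-play: the program is given only the
# rules of a trivial game and improves by playing against itself.

import random
from collections import defaultdict

PILE = 15          # single-pile Nim: whoever takes the last stone wins
MOVES = (1, 2, 3)  # a player may remove 1, 2 or 3 stones

# Q[(stones_left, move)] -> estimated value of that move for the player to act
Q = defaultdict(float)
EPSILON, ALPHA = 0.2, 0.1  # exploration rate and learning rate

def choose(stones, explore=True):
    legal = [m for m in MOVES if m <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(stones, m)])

def self_play_episode():
    """Both 'players' share the same value table; each learns from the result."""
    history = []          # (state, move) pairs, alternating players
    stones = PILE
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who made the last move won; rewards alternate going backwards.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

if __name__ == "__main__":
    random.seed(0)
    for _ in range(50_000):
        self_play_episode()
    # For pile sizes that are not multiples of 4, the greedy policy should
    # reduce the pile to a multiple of 4, the known winning strategy; multiples
    # of 4 are lost positions where no move helps.
    for stones in range(5, 16):
        print(stones, "->", choose(stones, explore=False))
```

After tens of thousands of self-played games, the table should recover the game's winning strategy without ever having seen a human play, a modest analogue of AlphaGo Zero rediscovering the masters' tricks on its own.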
The FindFace application, developed by Alexander Kabakov, allowed users to upload photos of strangers and compare them with all the images shared on the Russian social network VKontakte, sifting through billions of them in less than a second. Soon after it was packaged and made available to the public, the app was being used by a photographer to identify strangers on the Moscow metro and by a group of misogynists to harass female sex workers. Kabakov ended up signing a contract with the Moscow municipality for his facial recognition algorithm to be used in 150,000 surveillance cameras throughout the city (Greenfield, 2017).
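The text does not describe FindFace's internals, but large-scale face matching of this kind typically rests on a simple principle: each photo is reduced to a fixed-length numeric "embedding", and comparing a query face against millions of stored faces becomes a single vectorized similarity computation. The sketch below is a generic, hypothetical illustration of that principle, not a description of FindFace itself.

```python
# A generic sketch of large-scale face matching; nothing here is taken from
# FindFace. Each face is reduced to a fixed-length numeric "embedding", and a
# query face is compared against all stored embeddings in one vectorized step.

import numpy as np

rng = np.random.default_rng(42)

# Stand-in database: one 128-dimensional embedding per indexed photo.
# In a real system these would come from a face-recognition neural network.
database = rng.normal(size=(100_000, 128)).astype(np.float32)
database /= np.linalg.norm(database, axis=1, keepdims=True)

def most_similar(query: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return indices of the stored faces closest to the query (cosine similarity)."""
    query = query / np.linalg.norm(query)
    scores = database @ query          # one dot product per stored face
    return np.argsort(scores)[::-1][:top_k]

if __name__ == "__main__":
    # Hypothetical query: a slightly perturbed copy of stored face number 12345,
    # standing in for a new photo of the same person.
    query = database[12_345] + 0.05 * rng.normal(size=128).astype(np.float32)
    print(most_similar(query))         # index 12345 should rank at or near the top
```

The speed the paragraph above mentions comes from exactly this reduction: once faces are numbers, "who is this person?" becomes a search over vectors, which modern hardware performs almost instantly.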
Indeed, artificial intelligence applications such as Kabakov's FindFace have proven very effective in the service of authoritarianism around the world. Their great advantage over traditional means of coercion and intimidation is that they are silent and therefore allow for a much broader and more systematic type of domination. If an authoritarian government wants to repress the opposition, it no longer needs to mobilize a large contingent of police armed to the teeth. Beyond the costs and risks involved, the effectiveness of such means runs up against the biological limits of the human body – although, as Crary (2014) reminds us, overcoming them is one of the goals of 24/7 neoliberal capitalism.
Human beings get hungry and thirsty: their energy reserves are finite. Automated artificial intelligence systems are relentless and ubiquitous, changing behavior and creating a significant inhibitory effect even in the absence of physical violence. Applied to large cities, the language of algorithms is that of surveillance, preventive control and scalable anomaly detection. Knowing they are being watched, knowing that real-time cluster detection algorithms – which determine when a larger group of people has gathered – are being used by governments, or even that artificial intelligence bots scour their information for messages critical of the regime, people will feel strongly compelled to conform (Feldstein, 2019).
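What a "real-time cluster detection" step might look like is easy to sketch, even though Feldstein does not spell out any particular algorithm. The hypothetical fragment below flags any location around which an unusually large number of position reports are concentrated; production systems would use spatial indexes and streaming data rather than this naive scan, but the logic of "detect when a crowd forms" is the same.

```python
# A minimal, generic sketch of cluster detection over location reports; it is
# an illustration, not a description of any system cited in the text. Given the
# latest positions (e.g. from phones or camera tracks), it flags any spot where
# an unusually large group of people has gathered.

from math import hypot
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) position in meters on a local map grid

def detect_gatherings(points: List[Point],
                      radius_m: float = 25.0,
                      min_people: int = 10) -> List[Point]:
    """Return positions that have at least `min_people` reports within `radius_m`."""
    centers = []
    for cx, cy in points:
        neighbours = sum(1 for x, y in points if hypot(x - cx, y - cy) <= radius_m)
        if neighbours >= min_people:
            centers.append((cx, cy))
    return centers  # naive O(n^2) scan; real systems use spatial indexes

if __name__ == "__main__":
    # Hypothetical snapshot: a dense knot of people near (100, 100) plus scattered passers-by.
    crowd = [(100 + dx, 100 + dy) for dx in range(4) for dy in range(3)]
    passersby = [(500, 40), (220, 710), (900, 310), (60, 640)]
    flagged = detect_gatherings(crowd + passersby)
    print(f"{len(flagged)} location reports lie inside a detected gathering")
```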
The role of new technologies in state surveillance has been debated for some time now, at least since Edward Snowden revealed, in 2013, the global surveillance mechanisms employed by the American NSA (National Security Agency). It is also well known how social networks such as Facebook and Twitter have contributed to the rise of authoritarianism and the emergence of new right-wing populisms around the world, as witnessed by the role of Cambridge Analytica in Brexit and in Donald Trump's election in 2016. Although Western liberal democracies have legal mechanisms with which to confront such abuses, imposing regulations and fines on large platforms or passing laws to protect personal data, it is not difficult to imagine, despite speeches to the contrary, democratic governments giving in to the temptation to use artificial intelligence technologies to violate citizens' rights. And although digital platforms and social networks have implemented "transparency mechanisms", such as news fact-checking, it is hard to believe they are taking a radical turn towards democracy, since the model of surveillance and data extraction is the core and raison d'être of their business.
In addition, there is a relevant geopolitical component here, which will be fundamental in dictating the direction of technology in the years to come. In China – the world's second-largest economy and a world leader in the 5G revolution – the use of artificial intelligence is part of a broader system of control that underpins the Chinese Communist Party. There, the union between the great technology corporations – Alibaba, Tencent and Baidu, the dominant big tech companies in the Asian giant and the only ones able to rival Silicon Valley's – and the State is of such magnitude that it has no parallel in the Western world. In addition to "more traditional" methods of coercion and digital surveillance, used, for example, to repress the Uighur Muslim minority in the province of Xinjiang – closed-circuit TV cameras equipped with facial recognition and linked to Wi-Fi sniffers that sift through contacts, emails, photos and videos on social networks – China is establishing a truly Orwellian national social credit system, in which people's reputations are scored and those who do not "toe the line" (by gossiping, crossing the street in the wrong place or even keeping an untidy garden) can see their chances of getting a job or getting into a good school jeopardized. Bearing in mind the grandiose project of the New Silk Road, a symbol of its ambition to challenge the Western liberal order, China is likely to widen its radius of influence, exporting its governance models to other countries; indeed, these are already in use in places as diverse as Zimbabwe, Malaysia and Singapore (Feldstein, 2019).
* André Campos Rocha is a doctoral candidate in social sciences at PUC-MG.
References
ADORNO, Theodor; HORKHEIMER, Max. Dialectic of Enlightenment: philosophical fragments. Rio de Janeiro: Jorge Zahar, 1985.
BARBROOK, Richard; CAMERON, Andy. The Californian Ideology. 1995. Available at: http://www.comune.torino.it/gioart/big/bigguest/riflessioni/californian_engl.pdf.
BENTES, Anna. The algorithmic management of attention: hook, know and persuade. In: POLIDO, Fabrício; ANJOS, Lucas; BRANDÃO, Luiza (org.). Politics, Internet and Society. Belo Horizonte: Institute for Research on Internet and Society, 2019.
BUCCI, Eugênio. The Superindustry of the Imaginary: how capital transformed the gaze into work and appropriated everything that is visible. Belo Horizonte: Autêntica, 2021.
CARR, Nicholas. The shallows: what the Internet is doing to our brains. New York: W. W. Norton & Company, 2010.
CRARY, Jonathan. Late capitalism and the ends of sleep. São Paulo: Cosac Naify, 2014.
DERAKHSHAN, Hossein. Save the internet. PISEAGRAMA, Belo Horizonte, n. 08, p. 52-55, 2015.
DIJCK, José van; POELL, Thomas; WAAL, Martijn de. The platform society... New York: Oxford University Press, 2018.
FELDSTEIN, Steven. How artificial intelligence is transforming repression. 2019. Available at: https://medium.com/funda%C3%A7%C3%A3o-fhc/como-a-intelig%C3%AAncia-artificial-est%C3%A1-transformando-a-repress%C3%A3o-c1bdba0bbacf.
GREENFIELD, Adam. Radical technologies: the design of everyday life. London: Verso Books, 2017.
HAN, Byung-Chul. The burnout society. Petrópolis, RJ: Vozes, 2017.
KNIGHT, Will. AlphaGo Zero shows machines can become superhuman without any help. 2017. Available at: https://www.technologyreview.com/2017/10/18/148511/alphago-zero-shows-machines-can-become-superhuman-without-any-help/.
MCLUHAN, Marshall. The medium is the massage: an inventory of effects. New York: Bantam Books, 1967.
TÜRCKE, Christoph. Excited society: philosophy of sensation. Campinas, SP: Editora da Unicamp, 2010.