By LUÍS FELIPE SOUZA*
The fate of Artificial Intelligence does not have to mean human subjugation. The apparent inevitability of this future is more related to capitalist dominance than to technological development
Artificial Intelligence reads the world and can intervene in it by processing numerical data. The ability to write, speak, create images, drive vehicles, or even reproduce the smell of a tree that went extinct in the last century is possible only because data is converted into numbers. The capture of this data for processing by Artificial Intelligence is made possible by us, the users of the vast technological network, often against our will, through the various devices that make up social media.
Personalized monitoring of communities and the manipulation of big data enable not only in-depth knowledge of the subject, but also the standardization of behaviors and of the dynamics of desire. The inevitable question then arises: is there still room to speak of human autonomy in the face of increasingly fine technological control?
The topic of technological advance is surrounded by misconceptions and stirs a profusion of emotions about what to expect from the future. While the possibilities of experiencing corporeality and identity seem to multiply in virtual spaces, we are subjected to a daily tension, the feeling that reality is being determined by digital devices. Concern about privacy and about the volume of data handed over without users' consent is one side of a problem that culminates in the question of freedom in the face of future developments.
Dealing with human autonomy when subjectivity is constituted in its intertwining with technological determinants seems an unfeasible task. In the philosophy of technology, the debate is animated by different currents that diverge precisely over the degree of determination involved and over the possibilities of human intervention in the direction technology takes. One of them, the instrumentalist current, conceives of technology as an apparatus that can be controlled and subordinated to the human will.
In this sense, the use of technology would be instrumental, conditioned by human desire. For this conception to make sense, technology would need to be neutral in its values, free of any moral overdetermination directing it toward particular ends. On the other hand, there is, in the philosophy of technology, the substantivist current, affiliated with the Frankfurt School, which understands technology as loaded with a normativity that constitutes means of action of its own. Technology would thus enjoy a certain autonomy, owing to its own determinants and to the judgments embedded in it.
Because of the substantive values it carries, technology could not be placed at the disposal of a control exercised at human whim, since its program of action would rest on defined values such as power and efficiency. Substantivism criticizes the instrumentalist notion for its faith in the liberal progress of technology which, having no predetermined ends, could culminate in the elevation of human status. If the course of technological development meets the demands of its designers, then technology has well-defined values and therefore acts in accordance with the morality of the capital that finances it. The autonomy achieved by techno-scientific machinery comes at the expense of the capacity for human intervention in the course of its development.
The problem takes on new contours when technological devices not only shape subjectivity but also begin to reproduce criminal biases grounded in the morality of capital. Joy Buolamwini, a computer scientist and scholar of race and gender studies, observes that machine learning captures, processes, and converts data that already operates under a discriminatory bias. In one of her experiments, Buolamwini, a Black woman, noticed that the Artificial Intelligence software in facial recognition devices could perceive her face only when she wore a white mask.
Buolamwini's experiment confirms that technological development does not follow a course of objective, neutral progress, but acts in accordance with the interests of those who design it. The determination of subjectivity by technology guarantees the perpetuation of forms of violence, such as racial and gender violence, which will remain present in the normative vocabulary of the users of the technological network.
Questions pitting human freedom against machine autonomy are often driven by the fear of losing jobs, of which Tesla's driverless cars are one emblem in the collective imagination. The automation of work is one aspect of the growing autonomy that Artificial Intelligence embodies. The replacement of human strength by machines and the elimination of labor functions through digitalization are elements that drive the emergence of theses such as that of Jürgen Habermas. The sociologist argues that living labor, that which takes place between humans and nature, has been replaced by the productive power of techno-scientific machinery.
Jürgen Habermas argues that scientific advances constitute the royal road to the production of capital, replacing the now inoperative value of labor. His thesis rests on contemporary features of the world of work, such as the precarization and deproletarianization of manual labor in industries and factories. In this sense, humans would be in the process of being subjugated to machines as a result of techno-scientific developments.
The fears surrounding the topic of Artificial Intelligence, therefore, are products of the feeling that the margin of human autonomy is shrinking. What is then called into question is whether a space for decision-making free from the interference of technologies can still exist. Andrew Feenberg, a philosopher of technology, recognizes the substantivist character of technology which, permeated by the values of capital, blurs the line between the individual and the collective and shapes subjectivities, affects, and desires.
The author, although he admits the shaping force that technology exerts on subjectivity, still bets on the possibility of democratic and collective interventions in the nature of the technologies that model aspects so intrinsic to human subjectivity. Andrew Feenberg is an important representative of the critical current in the philosophy of technology which, while granting the substantive character of technologies, sees the possibility of a contiguity between technoscience and the construction of technological models that are not exclusionary.
Andrew Feenberg's wager echoes thinkers who believe there are ways to change the direction of technological development by intervening in how it is configured. The point is that technologies are not naturally impregnated with capitalist values, nor are they teleologically destined to perpetuate violence. Rather, the interests that shape them into means of perpetuating the centralizing power of capital were deliberately imprinted on them during their construction.
In a similar line of thought, it is not by chance that the world of work is heading toward automation, precariousness, flexibilization, and part-time schedules. The world of work does not take on destructive characteristics, nor does it subject workers to the insecurity of misery, because of the expansion of Artificial Intelligence, as if precariousness were an inevitable and necessary destiny of the technological context. Rather, the world of work follows the path of fragmentation and pauperization because these are, precisely, the interests of the capital that governs modern life.
The crises that capitalism has gone through over the decades have made clear the need to change the structural bases that support the world of work. From hierarchical and specialized work, whose exponents are Taylorism and Fordism, work has come to display greater flexibility, networked decentralization, and female participation. These characteristics, however, are accompanied by elements of the Thatcherite legacy, such as the growing loss of rights, the fragmentation of work, especially in remote modalities, and the weakening of the organization of the proletariat in unions capable of demanding democratization and ensuring fundamental rights.
This is how Ricardo Antunes, a Brazilian sociologist, explains why technological development does not produce a qualitative leap in human life. The impediment is structural, resulting from the submission of science to the relations between capital and labor. It is therefore not a matter of judging new forms of work organization as essential structures arising from a scenario of technological domination. On the contrary, it is a matter of recognizing that scientific development is conditioned by capitalist imperatives and that, for this reason, its results will not be converted into collective well-being.
Therefore, the fate of Artificial Intelligence does not have to mean human subjugation. The apparent inevitability of this future has more to do with capitalist dominance than with technological development. It is in this sense that Eurídice Cabañes, a philosopher and researcher of virtual games, sees in the link between virtual life and real life the possibility of experimenting with new identities endowed with possibilities that are often blocked by the conditions of materiality.
Technological devices can become a means by which the imperative directions dictated by their developers are called into question. They may be a path to experiencing, in other worlds, new forms of corporeality and subjectivation. After all, as Cary Wolfe, a theorist of post-humanism, reminds us, human beings are prosthetic creatures, constituted in the multiplicity of relations between things present and things absent, the organic and the non-organic, the inside and the outside.
Artificial Intelligence, multiverses, and the growing complexity of material reality can represent the experience of new forms of organizing subjectivity, without this having to culminate in the erasure of the margin of singularity that belongs to it. The technological situation, therefore, rather than representing the closure of subjective contingency and the end of work, seems to point toward their transmutation into new morphologies.
*Luís Felipe Souza is a master's student in work psychology at the University of Coimbra.