By ROB HORNING*
“Artificial intelligence” and the quest to redefine workers’ autonomy
The term “artificial intelligence” is a deeply ideological way of characterizing automation technologies. It is a manifestation of the general tendency to discuss technologies as if they were “powerful” in their own right – as if power were not a relative measure for differentiating abilities and prerogatives of social classes.
Rather, “artificial intelligence” suggests that technology develops on its own, for its own reasons, exercising its capabilities independently of human political struggles. Its developments and consequences appear mysterious and obscure – what does artificial intelligence want? Will it enslave humanity? – displacing into a distant future the relentless harm that capital already performs in abundance, and that gave rise to technological development in the first place.
There is nothing particularly mysterious about the advances in machine learning that fuel the current fever for artificial intelligence. They stem from the expansion of mass-surveillance capabilities and the emergence of companies large enough to centralize and exploit all the data they have unilaterally captured. Through a stupendous application of energy-intensive processing power, that data is converted into predictive simulations of various work activities.
Sometimes the purpose of the simulation is to replace human workers, as in cases featured prominently in a recent report in The Washington Post about copywriters who allegedly lost their jobs to ChatGPT: “Experts say that even the most advanced artificial intelligence does not match the writing skills of a human: it lacks verve and style, and often produces answers that are wrong, meaningless or biased. For many companies, however, the cost savings compensate for the drop in quality.”
Such simulations serve not only to replace workers but also to discipline them. They act as a permanent reserve army of scabs, ready to work to lower standards and at lower cost; and they also serve as normative points of comparison, allowing control of the work process to be transferred to management.
The simulations provide data to support the notion (imposed by management) that jobs can be carried out viably and sustainably without the contribution of human workers. This is in line with surveillance-based management practices prescribed since the advent of Taylorism, if not earlier, as detailed by Meredith Whittaker in her account of the theories of Charles Babbage – an early advocate of computing machines.
Charles Babbage's ideas “about how to discipline workers”, Whittaker explains, “are umbilically linked to the calculating machines he tried to build throughout his life”. Likewise, “artificial intelligence” is inseparable from capitalist efforts to manage the profitability of labor – profit provides the standard of what counts as “smart”, just as “smart” devices are those that subject us to surveillance.
As with Taylor's time-and-motion studies, predictive simulations appear as correctives to workers' supposedly inefficient use of their own cognitive and bodily skills, abstracting away all contingencies and proposing standards of productivity claimed to be valid in any circumstance. This abstract dimension, which makes workers interchangeable, is even more important than the standards and results themselves.
Predictive simulation, according to Sun-ha Hong, “is not so much an instrument for predicting future productivity as a social model for arbitrarily extracting and concentrating power – that is, [for suppressing] people's usual ability to define their own situation.”
Whoever employs such systems is less concerned with the product – the output generated by a large language model, for example – than with how the systems disempower those subjected to them. The “social model” assumed in predictive systems – in which each worker's individual contribution can be flagged and represented as a set of repeatable instructions – matters more than any specific prediction. The acceptance of automation technology, from this point of view, depends less on its performance at work than on how much data the work yields. It will prove useful to bosses insofar as it makes workers' know-how appear useless.
This process is examined in Karen Levy's book Data Driven, a recent study of how new forms of surveillance have affected the US long-haul trucking industry.
In the case of trucking, the US federal government mandated the installation of monitoring devices to keep drivers from violating daily limits on driving hours (rules that companies had long pretended did not exist). This allowed companies to install monitors that track far more data on driver performance, creating data streams that eliminate worker discretion and shift decision-making to automated, algorithmic systems.
As Karen Levy notes, long-distance trucking is an interesting case for studying the effects of automation, as the industry relies heavily on an atmosphere of independence that seems rewarding for the driver.
“Trucks are regarded by their drivers both as a workplace relatively free of bureaucratic supervision and as a home, where they live, eat and sleep for days or even weeks on end. In such a place, privacy is sacrosanct. To regard trucking as merely a driving job, then, is to grasp only one facet of what it means to those who call themselves truck drivers. The truck driver's work is bound up with cultural constructs of masculinity and virility, realized through demonstrations of physical and mental endurance.”
Balanced against the dangerous conditions and the exploitation of the industry is a compensatory sense of independence, based on the illusion of having no boss. The same logic can be found in working from home when it is framed as a special benefit for employees rather than a means of increasing productivity. In both cases, the apparent freedom from human supervision serves as a pretext for imposing automated forms of surveillance, further subjecting workers' time and behavior to measurement by converting them into data.
Under surveillance, work is reworked to be more machine-readable, and more of the worker's effort must be directed toward accommodating the monitoring rather than devising better ways to get things done. As Levy puts it, “monitoring abstracts organizational knowledge from local and biophysical contexts – what goes on on the road, around the truck and in the truck driver's body – to enrich databases and provide managers with a collection of elements for evaluating truck drivers' work in new ways, controlling them in real time”.
This intensification of surveillance, thanks to such data, paves the way for a greater modification of work processes; at the same time, it seems to support the possibility that the employer might, at the limit, automate the work entirely. As work becomes more supervised and less autonomous, it simultaneously becomes more tedious and more replaceable.
Under such conditions, “autonomy” comes to mean less doing things one's own way and more resisting the monitoring that suppresses independence. All the forms of “tacit knowledge” – to use Michael Polanyi's term – present in the work become less defensible as a source of productivity and more expendable as mere employee resistance. Worker autonomy persists, not as a particular form of virtuosity or a social practice conducted together with other workers, but as a fantasy of an inflated individual identity (the truck driver as “lone wolf”, “cowboy of the asphalt”, conqueror of the open road). And all of this still serves to justify management's ever deeper intrusion into workers' behavior – regardless of how much surveillance has already been implemented.
As more surveillance is implemented, what escapes it becomes both more salient and more irrelevant. Hong, writing about warehouse workers compelled to don devices that monitor and correct their activities, observes: “The quantified expectations that govern the algorithmic workplace fulfill the desire – of managers and employers – for a certain non-human clarity, in which the variations and ambiguities inherent in any act of working are not exactly eliminated, but simply neglected. The consequence for the worker is that his own work and his life become less presumptive and less optional.”
For those who work from home, this occurs through the various monitoring and management suites installed on workers' devices (as detailed in this UK report). In the case of truck drivers, Levy speculates that it will occur through increasingly invasive forms of biometric surveillance: “Rather than being pushed out of the truck cab by technology, the truck driver remains firmly there, doing his job – but he is increasingly accompanied by intelligent systems that monitor his body directly and intrusively, with wearable devices and cameras, often integrated into fleet-management systems […]. Artificial intelligence, in trucking, is experienced as a hybrid of man and machine. In trucks, surveillance and automation are complements, not substitutes.”
The fact that surveillance and automation generally tend to appear as “complements, not substitutes” undercuts the idea of “augmented” artificial intelligence – a potential often invoked as a positive, one that idealizes workers assisted or even empowered by the technology.
Much of AI, when implemented by management, is not a different kind of “intelligence” but a more responsive form of employee oversight. Like any other information technology, says Levy, it can be inserted “between work tasks and embodied knowledge. It divides work processes into simple, rationalized, deskilled tasks; it decontextualizes knowledge from the physical workplace into abstract, centralized databases; it converts work practices into ostensibly objective, calculable and neutral records of human action”.
Its purpose is not to empower workers but “to legitimize certain forms of knowledge while making others less valuable, with a potentially detrimental effect on workers' power”. Such technologies, sometimes euphemistically called “co-pilots” in the context of coding and other language tasks, are introduced to narrow the worker's arc of possibilities, confining him to those embodied activities that can be expropriated – always already subsumed to capital and profitable to management.
Artificial intelligence appears not as an “augmented” reality for workers, but as what Levy calls “forced hybridization”. It is implemented as a dynamic supervisor or, worse, as a parasite capable of altering its host's behavior. Levy cites Heather Hicks's 2008 book The Culture of Soft Work, which argues that “when work activities encoded in machine parts merge with the human body, the result is not liberated human beings but more controlled ones”.
The truck drivers Levy interviewed are repelled by the idea of the cyborg truck, in which they figure as an embodied puppet, cohabited and propelled by capitalist machines to maximize their own self-exploitation. “This is the felt reality of truckers' work today”, she writes. And the “algorithmic destruction of workers' bodies” driven by wearable devices in warehouses, as Hong describes it, is an equally dark and dystopian picture of it.
But one can also imagine a hybrid interface that combines the emotional manipulation of chatbots with the algorithmic-managerial stimulus of the Skinner box [operant conditioning chamber] – such that the parasite makes us love its source, in the same way that infection by Toxoplasma gondii makes people love cats. Perhaps it will be something like the Vision Pro headset recently introduced by Apple, or perhaps something even more absurd.
At the end of March, OpenAI published a working paper titled “GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models”. It is basically a piece of marketing for managers, aimed at extolling ChatGPT's potential to perform tasks abstracted from a wide range of occupations described as “exposed” to LLM (large language model) predation.
Such a methodology takes for granted and naturalizes the effects of information technology highlighted by Levy: the division of work into simple tasks, the abstraction from specific contexts and the reduction of work to data. The authors use it to conclude that “all occupations exhibit some degree of exposure to LLMs, and those with higher wages generally have more tasks with high exposure”.
These findings (which should be taken cum grano salis) reverse the usual assumption that anything that can be automated is, ipso facto, “low-skill” work – something workers would ultimately benefit from being released from. On the contrary, the findings promise managers a future in which more of their subordinates can be pushed out of positions that allow them to exercise judgment.
The list of “occupations with no tasks flagged as exposed” [to LLMs] is revealing. It includes “equipment operators”, “helpers” and “repairers”, as well as more “expressive” activities such as “servants”, “butchers” and “fish cutters”. Many of the positions concern energy extraction: “derrick operators”, “power-line installers”. Perhaps the hippies will calm down when they find “motorcycle mechanics” on the list...
Obviously, most of these jobs demand physical strength, implying that “artificial intelligence” renders whatever we have left more or less economically useless. This suggests that a future dominated by cognitive automation will not be one of humans freed from the “bullshit jobs” David Graeber complained about (when he called for a radical reordering of the world and of political and social life).
Instead, it points to human work reoriented toward keeping the capitalist wheels turning in a more literal sense – feeding machines with data and energy, and keeping our bodies fit as we become biomechanical extensions of software programmed for exploitation.
*Rob Horning is a journalist and executive editor of The New Inquiry. He is the author, among other books, of The New Age of Science and Technology (Los Angeles Review of Books).
Translation: Rafael Almeida.
Originally published in Overland.