
The Full Automation Fallacy

We are at a critical moment, when digital technologies of automation, often referred to with buzzy vocabulary like “algorithms” and “AI,” are poised to transform work in a “Fourth Industrial Revolution.” Headlines scream about the imminent replacement of workers, often in language reminiscent of anti-immigrant rhetoric: robots are threatening to “take” or “steal” jobs. You can even go to the website willrobotstakemyjob.com and input specific occupations to get statistics on the likelihood of such theft. Writers have only a 3.8 percent chance—“totally safe”—while machinists face an alarming 65 percent. “Robots are watching,” the site cautions. These numbers are drawn from a widely cited 2013 report by economist Carl Benedikt Frey and computer scientist Michael A. Osborne, which concluded that 47 percent of total US employment is at high risk of automation by 2034.

Many writers on the radical left have accepted this framing of automation while détourning its implications, making “full automation” central to the transcendence of capitalist exploitation. In Inventing the Future, Alex Williams and Nick Srnicek argue, “Without full automation, postcapitalist futures must necessarily choose between abundance at the expense of freedom (echoing the work-centricity of Soviet Russia) or freedom at the expense of abundance, represented by primitivist dystopias.” Aaron Bastani’s Fully Automated Luxury Communism pushes this idea to its limits, promising a future of boundless leisure for all, supplemented by a profusion of goods and services delivered sans human exploitation: “We will see more of the world than ever before, eat varieties of food we have never heard of, and lead lives equivalent—if we so wish—to those of today’s billionaires.”

Such a framing is both simple and attractive, especially to those of us trapped in dead-end jobs and eking out precarious existences; if robots, rather than we and our fellow workers, performed these tasks, and the productivity gains of technology were widely and equally dispersed, maybe we could spend our days doing more fulfilling activities than punching a clock. Like those cheesy banner ads that were all over the web in the late 2000s, Full Automation promises an egalitarian society with “one weird trick.” The bourgeoisie would hate this!

But Full Automators, whether dystopian or utopian, have a misguided approach to the question. According to historian Aaron Benanav, the specter of a postwork future has been conjured by anemic growth rates, not new technologies. Without radical change, our fate is further stagnation and crisis, not the world of the Jetsons. But new technologies certainly exist, and they are certainly doing, well, something to work. But what?

Let’s start with a more basic question. What is automation? Machines have replicated and augmented human work processes for centuries, and that is often the colloquial use of “automation” in our current moment. But “automation” was not used to describe this process until 1947, when Delmar Harder, vice president of manufacturing at Ford Motor Company, created its Automation Department. The department’s engineers redesigned automobile production so that materials were automatically conveyed from one process to another, obviating the need for laborers to load and unload machines. Further, the process was itself increasingly machine-controlled, through a system of timers, switches, and relays—what technology historian David Hounshell calls the “electromechanical brain.”

Most of the technologies involved in automation had been developed and implemented in other industries years before their incorporation into Ford’s production process. What made automation new was its centrality to Ford’s manufacturing strategy, coming at a time of historic unrest among autoworkers, and in particular on the heels of a costly twenty-four-day strike at Ford’s massive River Rouge plant in May of 1949. Not only did the new technologies dramatically reduce an unruly labor force, but they also allowed Ford to decentralize its production away from the roiling unrest of Detroit as the company opened new automated factories, with new, less militant employees, in Cleveland and Buffalo. Workers immediately perceived the threat, and automation was, from its inception, a deeply politicized issue. The history of automation reveals it as a political tool to subvert worker power, not simply an economic one to increase productivity.

Digging into the technical side of automation unearths more problems with the Full Automation Fallacy. David Autor, an economist, offers a useful corrective in his 2015 article “Why Are There Still So Many Jobs?,” its plaintive title a response to John Maynard Keynes’s Depression-era predictions of an automated future with a 15-hour work week. As Autor explains, rather than simply replace human jobs with machinic processes, automation affects labor in complex ways:

Changes in technology do alter the types of jobs available and what those jobs pay. In the last few decades, one noticeable change has been “polarization” of the labor market, in which wage gains went disproportionately to those at the top and at the bottom of the income and skill distribution, not to those in the middle.

Automation thus recomposes the workforce, isolating and rearranging tasks, altering job descriptions, and hollowing out middle-tier occupations.

Why does automation polarize instead of outright replace jobs? For one, many jobs require labor that is challenging to automate. Computers have to follow instructions laid out by programmers, so in order to substitute a computer for a worker, the worker’s tasks must be understood and articulated. However, much of the labor process is encompassed in tacit knowledge that workers are unable to articulate: “There are tasks for which neither computer programmers nor anyone else can enunciate the explicit ‘rules’ or procedures.” Even when tasks are known, automating them is easier said than done. On one end, computers cannot replicate the high levels of abstract thought required for managerial positions. On the other, jobs that require both manual work and flexibility, such as service sector jobs in food preparation and maintenance, are both difficult and cost-prohibitive to automate.

Take an example. In March 2018, Flippy, a burger-flipping robot, was rolled out at the Pasadena location of fast-food chain CaliBurger, to great fanfare and numerous headlines. The implication was clear: Would this humble machine spell the end of the fast-food job, the metonym for low-skilled entry-level occupations?

Not exactly. In an event that provoked far less press coverage, Flippy was retired after one day of work. CaliBurger’s owners took the honorable path of blaming Flippy’s failure on their human employees: workers, they explained, were simply too slow with tasks such as dressing the burgers, causing Flippy’s meaty achievements to pile up. However, a few discerning journalists had previously noted Flippy’s numerous errors in the relatively simple task that gave the robot its name. Flippy just wasn’t very good at its job. And so, yet another fully automated dream came crashing into messy reality.

According to Autor, the introduction of new kinds of information and control technology, such as what is currently hyped as “artificial intelligence,” supplements managerial work, and so increases the power and wages of bosses. On the other end, manual laborers (such as Flippy’s coworkers) see tasks eaten away and their movements reorganized and tightly controlled to make room for more rigid machines. Wages and working conditions deteriorate. But even then, automation stops short of “full”: such systems, as we will see, rely upon a stratum of human labor that is all but ineradicable. This is as true of Flippy as it is of the most powerful AI.

The jobs that do tend to be substitutable are not the lowest rungs, but those requiring repetitive physical labor, along with middle-management jobs in operations. For example, Amazon’s warehouses use a software-directed system that coordinates human laborers, who select individual goods, with robots, which move large shelves. Algorithms replace the middle-income jobs of managing the floor, leading to a polarized workforce of increasingly wealthy and powerful executives and programmers and increasingly degraded laborers who are substitutable not by machines, but by other humans; in other words, they are eminently replaceable.

But some have set their sights on precisely those positions in “knowledge work” that once promised a future safe from the robots. Technologists such as Kai-Fu Lee, former president of Google China, vow that white-collar jobs will go first. According to Lee, “The white collar jobs are easier to take because they’re purely a quantitative analytical process. Reporters, traders, telemarketing, telesales, customer service, analysts, they can all be replaced by software.”

But, of course, automation never completely erases human labor. Lee’s prediction of total replacement of knowledge work is a profound exaggeration, as a report by the think tank Data & Society makes plain: “AI technologies reconfigure work practices rather than replace workers,” while at the same time, “automated and AI technologies tend to mask the human labor that allows them to be fully integrated into a social context while profoundly changing the conditions and quality of labor that is at stake.” Investigating grocery store self-checkout, researchers found that Luddite customers hated and avoided the technology. In response, management cut staff to make lines so unbearable that customers gave up and used the machines instead. Even then, cashiers were still required to assist and monitor transactions; rather than reduce workload, the technologies were “intensifying the work of customer service and creating new challenges.”

Self-checkout is an example of what technology journalist Brian Merchant calls “shitty automation”:

If some enterprise solutions pitchman or government contractor can sell the top brass on the idea that a half-baked bit of automation will save it some money, the cashier, clerk, call center employee might be replaced by ill-functioning machinery, or see their hours cut to make space for it, the users will be made to suffer through garbage interfaces that waste hours of their day or make them want to hellscream into the receiver—and no one wins.

Shoppers understand that self-checkouts mean tasks have been sloughed off on them, what media scholar Michael Palm classifies as “consumer labor.” And so they take revenge, rebelling against the technological imposition of work. Theft is rampant at self-checkouts. Bandits share techniques on forums like Reddit: hit “Pay” to disable the bag scale and then bag more items; always punch in the code for the cheapest produce (usually bananas); when in doubt, just throw it in your bag and walk out. They also offer justification: “There is NO MORAL ISSUE with stealing from a store that forces you to use self checkout, period. THEY ARE CHARGING YOU TO WORK AT THEIR STORE.”

Consumer labor in self-checkouts is an example of how, rather than abolishing work, automation proliferates it. By isolating tasks and redistributing them to others expected to do them for free, digital technologies contribute to overwork. Writer Craig Lambert uses the term “shadow work,” borrowed from philosopher Ivan Illich, to describe this common experience with digital systems. When new technologies “automate” positions away, remaining workers often bear the brunt of new tasks. He describes the “job-description creep” facilitated by new software packages. Where administrative staff may have once kept track of bureaucratic matters such as employees calling off work, now “absence management” software requires workers to handle it themselves. “I am not sure why it has become my responsibility to do data entry for any time away from the office,” a software developer tells Lambert. “Frankly, I have enough to do writing code. Why am I doing HR’s job?”

Surgeon Atul Gawande writes evocatively of the effect of digital shadow work on the medical profession. After the introduction of a new software system for tracking patients, Gawande, invoking the specter of Taylorism, describes the painful restructuring of his work, away from patients and toward more structured interactions with computers. “I’ve come to feel that a system that promised to increase my mastery over my work has, instead, increased my work’s mastery over me,” he writes. “All of us hunched over our screens, spending more time dealing with constraints on how we do our jobs and less time simply doing them.” This bureaucratization-by-software leads, he argues, to escalating rates of burnout in the medical profession, rates that strongly correlate with how much time one spends in front of a computer. And Gawande’s specialization, surgery, is part of another technologically mediated crisis: as more daily activities revolve around typing and swiping, manual dexterity has declined, and future surgeons never develop the ability to cut and stitch patients.

As technology critic Jathan Sadowski argues, much of what is hyped as a system of autonomous machines is actually “Potemkin AI”: “services that purport to be powered by sophisticated software, but actually rely on humans acting like robots.” From audio transcription services disguising human workers as “advanced speech recognition software” to “autonomous” cars run by remote control, claims of advanced machine intelligence not only amount to venture-capital-chasing hype, but actively obfuscate labor relations inside these firms. As writer and filmmaker Astra Taylor argues, such “fauxtomation” “reinforces the perception that work has no value if it is unpaid and acclimates us to the idea that one day we won’t be needed.”

While artificial intelligence is frequently likened to magic, it regularly fails at tasks simple for a human being, such as recognizing street signs—something rather important for self-driving cars. But even successful cases of AI require massive amounts of human labor backing them up. Machine learning algorithms must be “trained” on data sets in which thousands of images are manually identified by human eyes. Clever tech companies have used the unpaid activity of users for years to do this: whenever you solve a reCAPTCHA, one of those image identification puzzles meant to prove you’re not a bot, you are helping to train AI. The Google service’s inventor, computer scientist Luis von Ahn, came up with the idea through a practically Taylorist obsession with the unproductive use of time: “We’re reusing wasted human cycles.”
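To see how little magic is involved, consider a minimal sketch of supervised learning, written in Python with synthetic stand-in data rather than any real company’s labeling pipeline: the model can “learn” to classify only because a human judgment has already been attached to every training example.

```python
# A minimal sketch, not any real pipeline: supervised learning depends on
# humans having already attached a label to every example the model sees.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-ins for image feature vectors; in a real system these would come
# from photos that workers (or reCAPTCHA solvers) have inspected one by one.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))  # 1,000 "images", 64 features each

# The crucial human contribution: a yes/no judgment per image, e.g.
# "contains a street sign" (1) or "does not" (0). Synthesized here.
human_labels = (features[:, 0] + features[:, 1] > 0).astype(int)

# Only once the labeling is done can the machine be "trained" at all.
model = LogisticRegression(max_iter=1000).fit(features, human_labels)
print(f"accuracy on the human-labeled set: {model.score(features, human_labels):.2f}")
```

Take the human judgments away and there is nothing for the algorithm to fit; the “wasted human cycles” are its raw material.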

But free labor only goes so far in the current AI boom, and more reliable professionalized workers are needed to surmount what anthropologist Mary L. Gray and computer scientist Siddharth Suri describe as “automation’s last mile.” Getting AI systems to function smoothly requires astonishing amounts of “ghost work”: tasks performed by human workers who are kept away from the eyes of users, and off the company books. Ghost work is “taskified”—broken down into small discrete activities, “digital piecework” that can be performed by anyone, anywhere for a tiny fee.

The labor pool for ghost work is truly global, and tech companies have been eager to exploit it. Samasource, which specializes in training AI, specifically targets the world’s slum dwellers as a cheap solution for the “boring, repetitive, never-ending work” of feeding information into machine learning systems. The company’s workers are poorly paid, though it justifies this through the compulsory rhetoric of humanitarianism that proliferates in Silicon Valley. Samasource’s late CEO, Leila Janah, admitted that employing low-wage workers from Kibera, Kenya—Africa’s largest slum—was a profitable strategy. But it was, she claimed, also the moral choice, so as not to upset the equilibrium of their impoverished surroundings:

But one thing that’s critical in our line of work is to not pay wages that would distort local labour markets. If we were to pay people substantially more than that, we would throw everything off. That would have a potentially negative impact on the cost of housing, the cost of food in the communities in which our workers thrive.

Janah’s humanitarian efforts notwithstanding, Samasource’s business model reveals the real impact of networked digital technologies on the world of work. Even in a world of resurgent nationalism and hardening borders, the internet has created a massive globalized reservoir of human labor power for companies to tap into, as much or as little as needed: the “human cloud.” In this cloud, no far-flung locale need remain independent from the world’s most powerful corporations, and with intense competition, you have to be quick and compliant even to snatch a gig at all. And no moment may be left unproductive: jobs can be sliced down to microtasks, paid as piecework, or “gamified” so they aren’t paid at all. This potential future of work has nothing to do with expanding leisure from “full automation.” Quite the contrary: in this future, work seeps into every nook and cranny of human existence via capitalist technologies, accompanied by the erosion of wages and free time.

The ghost work of the human cloud may give the impression that low-wage gig workers are alleviating the burdens of the lucky few who manage to snag a comfortable career. But computer-facilitated taskification comes for us all. The fumbling medical students provide a dramatic example of what the saturation of everyday life with digital technology has wrought: the deskilling of everyday life. Ian Bogost, a media scholar and video game designer, observes that the proliferation of automated technologies, from self-flushing toilets to autocorrecting text messages, accelerates feelings of precarity and unpredictability. This is because, rather than serve human needs, they force people to adapt to unpredictable and uncontrollable machine logic: “The more technology multiplies, the more it amplifies instability.” In response, we develop arcane rituals that make the toilet flush at the right time, or muddle through another “autocorrected” message full of typos. It is not simply a romantic critique that technology separates us from the sensuality of the world (though, humorously, Bogost relishes a physical paper towel over a sensor-triggered air dryer). It is a practical one: the supposed convenience of automated everyday life is undercut by our lack of control, our confusion, and the passivity to which technology conditions us. “Like people ignorant of the plight of ants,” he writes, “and like ants incapable of understanding the goals of the humans who loom over them, so technology is becoming a force that surrounds humans, that intersects with humans, that makes use of humans—but not necessarily in the service of human ends.”

This is precisely what philosopher Nolen Gertz describes as the “in-order-to mindset”:

Modern technologies appear to function not by helping us achieve our ends but instead by determining ends for us, by providing us with ends that we must help technologies achieve. Thus the Roomba owner must organize their home in accordance with the maneuvering needs of the Roomba, just as the smartphone owner must organize their activities in accordance with the power and data consumption needs of the smartphone. Surely we buy such devices to serve our needs but, once bought, we become so fascinated with the devices that we develop new needs, such as the need to keep the device working so that the device can keep us fascinated.

Some scholars of contemporary technologies describe them in terms of older needs—or rather, older compulsions. For instance, social psychologist Jeanette Purvis notes that Tinder, the dating platform that ranks among the most popular apps of all time, works through an interface that uses “the same reward system used in slot machines, video games and even during animal experiments where researchers train pigeons to continuously peck at a light on the wall.” Users swipe through an endless supply of randomized potential mates, an infinitude that results in an incredible churn—1.4 billion swipes a day—and lower overall satisfaction with dates.

So desperately hooked to swiping are Tinder users that competing services like Coffee Meets Bagel market themselves on providing fewer options. And as a kind of artistic immanent critique, some wags have started selling “The Tinda Finger,” a disembodied rubber digit that spins on a motor attached to one’s phone, thus automating the swiping process. “The idea is to maximize the potential for matches while you can spend your time focusing on other things”: automation to spare us from the “convenience” of automation.

The Tinda Finger speaks to a widespread dissatisfaction with technologies that promise to spare us effort while eating up more of our time with unfulfilling tasks. When we realize that what is pitched as “automation” is actually an assortment of techniques and technologies that function to degrade our work, sap our autonomy, exploit the poor, and give us more to do, we can start to envision alternatives. We can see “automation” as a kind of politics, one that we can start to resist, one banana at a time.

 

Gavin Mueller is the author of Media Piracy in the Cultural Economy: Intellectual Property and Labor under Neoliberal Restructuring. He is a Contributing Editor at Jacobin, and a member of the Viewpoint Magazine editorial collective. This is an extract from his new book, Breaking Things at Work: The Luddites Were Right About Why You Hate Your Job, out soon with Verso Books.

Image credit: Possessed Photography on Unsplash