One of the foundational materials of computer science is an endless piece of tape. “An unlimited memory capacity obtained in the form of an infinite tape, marked out into squares,” is how Alan Turing described his 1936 mathematical model and the forefather of the computer. According to Turing’s model, the machine would read and write symbols in individual cells on the endless tape, which would then determine the machine’s behavior. “Such a tape does not exist,” points out artist, writer, and educator Ingrid Burrington, who introduced me to the Turing Machine’s apparently infinite resource. “We don’t have infinite things. We live in a world of finite objects. The assumption of infinitude of resources, of energy, of innovations, creates the impression that we can and should keep growing, keep building, and keep expanding.”
Burrington has been making visible the systems and infrastructures underpinning our digital worlds for nearly a decade. Her assertions about Turing’s tape may sound self-evident, but they bear repeating at a time when the material and energy required to run global computational systems (websites, for example) and their associated infrastructure (like data centers) tend to be hidden from plain sight. Terms like “the cloud” further reinforce a sense of intangibility in the digital world; a “misunderstanding of data as a pure resource rather than reliant on resources,” as Burrington wrote in Architectural Design.
In reality, our digital activity has a clear impact on the physical world and comes with a real environmental cost. The carbon footprints of Amazon, Facebook, and, in particular, Google have been increasingly scrutinized in recent years. Although Google has been a carbon-neutral company since 2007, its reliance on carbon offsets to maintain this image is being taken to task by its own employees, no less.
Meanwhile, a parallel discussion is beginning to take place around the development of artificial intelligence and machine learning models, the next frontier of computational possibilities for these companies and independent computer scientists alike. A research paper by Emma Strubell, Ananya Ganesh, and Andrew McCallum at the University of Massachusetts (UMass) caused a stir when it was published this June, reporting that training a large AI model can produce over 600,000 lbs of CO2 emissions, five times the amount produced by the average car over its lifetime. A month later, another paper, titled Green AI and published by the Allen Institute for AI in Seattle, described a 300,000x increase in the computation required for deep learning research between 2012 and 2019. The publication of these reports and the media coverage they received illustrated the need for more nuanced conversations about the environmental impact of using AI, particularly while the field is still young, and for a dismantling of the cultural assumption that digital processes are somehow ephemeral.
So where does this leave the field, and the designers who are beginning to use machine learning as part of their practice? Is it time to pack it all in before the AI party has even begun? Not so fast.
“You have to realize that AI doesn’t mean just one thing,” says David Rolnick, a research fellow at the University of Pennsylvania and the founder of Climate Change AI, an organization of volunteers from academia and industry exploring how to apply machine learning and AI to tackle the problems of climate change. Rolnick is keen to debunk some of the misunderstandings around the publication of the UMass paper, which focused on natural language processing (NLP) models. “It’s pretty much the largest AI model you can have, it’s designed to learn all of human language,” he says. In other words, training an NLP model is far more time- and energy-intensive than training most other, smaller models, which can be done on a normal laptop—models such as those Rolnick has been studying to track carbon emissions, for example.
Jarno Koponen, head of AI and personalization at Yle, Finland’s national broadcasting company, has been researching the use of AI in the ever-changing digital media landscape. He says that the most common uses of AI for consumer applications—feeding us content on a news app, for example—use very specific and narrow AI models and therefore require much less computer processing power. That said, Koponen recognizes that there’s a lack of discussion around AI-related emissions and a need to bring it into broader consciousness at this stage of AI research. “We’re still in the early days of understanding the whole ecosystem around AI-powered consumer applications,” he says. “Based on these early findings, it’s becoming clear that principles for sustainable design and development need to be updated for the era of AI applications.”
For Rolnick and the authors of the UMass paper, a crucial step here will be a more critical approach to training and re-training AI models. Regardless of size, the training of an AI model (teaching a machine to recognize a face from a large set of photographs, for example) is always going to be much more resource intensive than simply running it, which by comparison happens almost instantaneously. This means developers should be mindful of exactly what they are trying to achieve before re-training already functioning systems to make small improvements. “I don’t think this is any reason to reduce the use of AI, merely to use it more responsibly when we do,” argues Rolnick.
This sentiment is echoed by graphic designer Marie Otsuka and developer Nic Schumann, who have collaborated on a number of AI design projects such as Latent Space, an exploration into how ML models learn to recognize letterforms. The pair are critical of the relentless pursuit of accuracy as the predominant metric of success within the ML research community, as opposed to efficiency or cost. Reaching the top accuracy percentiles can be particularly costly in terms of required computing power: The difference between 80% and 90% accuracy might be the difference between using a regular laptop and a supercomputer cluster, a huge jump in energy requirements. “We should be working to find an equilibrium between cost and accuracy,” argues Schumann.
With this in mind, the duo are embracing an element of imprecision for their forthcoming type.tools project, which will use ML to assist type designers in kerning new typefaces. Rather than automatically doing the kerning for designers, the tool will make suggestions based on what it has learned from reading other typefaces, thereby speeding up the infamously laborious process. “We’re not trying to get to the level of it being perfectly correct, or of it giving you perfect predictions,” says Otsuka. “We just need to run it long enough so it can be something designers can work from.” Though not every AI application can afford a level of imprecision—particularly those that impact human livelihood, in which the stakes are higher—Otsuka raises an interesting point: that AI can and should be used in a way that aids, not replaces, the humans in the equation.
At this early stage of research into the design possibilities of AI, designers are also limited by what can be achieved using consumer hardware. In creating the visual identity for Uncanny Values, an exhibition on artificial intelligence at the MAK in Vienna, Process Studio trained an AI model to produce its own emoji, feeding it a dataset of 3,145 commonly used emoji. Process chose to work with emoji in part because they are naturally low-res. “It was all very DIY, much of it was developed on a five-year-old laptop,” says Moritz Resl of Process. “This limitation of resources actually brought us to using emoji in the first place, as we wanted something emotional and relatable, like a face, but we couldn’t handle actual photos with our hardware and budget.”
As computational power increases and the accessibility of AI broadens, designers will have to make more serious climate considerations. Central to the ideas of the UMass authors, Climate Change AI, Jarno Koponen, and others is a fundamental questioning of AI’s role as a tool for social good, rather than a technology used simply for its own sake. David Rolnick and Climate Change AI see machine learning playing a key role in social good as a way “to accelerate meaningful strategies for tackling climate change”—by tracking deforestation or monitoring carbon emissions, for example—while acknowledging it can only be “one piece of the puzzle.”
More importantly, the broader tech industry may have to learn some restraint: to stem the urge for innovation at all costs, and to dismantle the fantasy of the digital world as a limitless resource. As Ingrid Burrington puts it, the prevailing idea that we shouldn’t question new technologies in relation to their resource consumption “is getting harder and harder to justify in an era of climate change rapidly getting to the point where we can’t do anything to intervene.”
This story is part of an ongoing series about UX Design in partnership with Adobe XD, the collaboration platform that helps teams create designs for websites, mobile apps, and more.