Hey, so you’re a monk. It’s about 1440 AD and you’re just chilling out in your quarters, listening to a bit of chanting through the stone walls, practicing your black-letter while you digest your dinner. You’ve got a killer commission coming up to add some nice rubrics to a popular manuscript—The Bible—and you don’t want to fuck it up. Suddenly your pal Hildegard is bashing down your door shouting about some guy called Gutenberg: “We’re fucked dude,” he’s yelling, “that witch is gonna make us all redundant with his printing press. I’m off to become a coroner, people are always dying.” Buzzkill.
Fast forward 600 years, and the graphic designer is that stressed-out monk, and this time Gutenberg’s witchcraft is taking the form of algorithms.
Back in 2013, a research paper outlined the probability of particular occupations being replaced by intelligent machines by 2033. “Graphic designer” is not on the list, but I’d suggest we sit somewhere between “machinist” and “actor.” The rationale for that placement: much like an actor’s, our job requires a certain level of creativity within particular bounds and is about communicating with people; like a machinist’s, it also demands a level of technical expertise.
A lot of the work we do as designers is prescriptive. We work within set screen sizes and resolutions, to standardized paper formats with grids that have been calculated proportionally. We work with colors that have been numerically serialized and indexed, and can be developed into color systems mathematically. We create palettes of type styles from typefaces that are accessible online, can be categorized based on historical developments and genres, and laid out according to a base unit. These parts are modular and mechanical, perfect for automation. And a lot of our current workflow is pretty automated anyway—consider some of those tasks that would have been completed manually in years gone by, like typesetting.
This is not new thinking, of course. The Modernists were mucking about with modular principles back in the 1950s across all creative disciplines, from art to architecture; and Shakespeare and classical musicians from medieval to modern were experimenting with numerical limitations way before that. Plus, we’ve kind of been the monk many times in our history: first it was Gutenberg, then phototypesetting, then the Mac and Adobe, and after that came digital cameras and the internet. With each technological development came massive disruption and reaction, followed by an evolution of the designer’s role.
Tightly defined tasks can be automated pretty easily with tightly defined programming—outputs generated by hard lines of code. These kinds of algorithms have been used in design and the creative arts for some time.
London’s Field takes this automated approach into highly creative territory, its work uniting the worlds of art, technology, and design with technical flair and keen aesthetics. The studio’s work for paper company GF Smith is an interesting example of an algorithmically augmented process being applied in a commercial graphic design context. Field created a digital 3D structure with the help of an algorithm, rendering over 10,000 images of this structure from various angles and vantage points. The final part of the process was hand-editing the output of the program, making sure that each image was sufficiently stunning to be used commercially. A true designer/algorithm collaboration.
The process here is very much led by Field—from the outset they’re determining the parameters, the program, and therefore the output. It’s a human brain synthesizing experienced visual cues built up over years of working as a designer, setting the creative direction, then using an algorithm to help bring that to life. But what if you could encourage the program to synthesize the information for you, and generate an outcome based on its experience with that data?
Designer Jon Gold has been looking at applying machine-learning techniques to standard graphic design procedures. Machine learning is an approach to developing machine intelligence; Andrew Ng, chief scientist at Baidu Research and a founder of the Google Brain project (famous for teaching itself to identify a cat without ever being told what a cat is), defines it as “the science of getting computers to learn, without being explicitly programmed.” Gold uses this approach to analyze typefaces and typographic trends, and to generate unlikely type pairings based on what the computer has learned. He describes his exploration wonderfully in his essay, “Taking the Robots to Design School.”
Although Gold’s work might seem niche, sitting firmly at the narrow end of the AI spectrum, it gives us a taste of what we can expect our future workflow to look like. It’s easy to forget that once a model has achieved mastery of a particular area, it can then be embedded and combined with other models that may have been trained on a very different problem: color theory or composition, for example.
He writes, “Looking at the current crop of design tools after taking a glimpse at the future is frustrating. There is a total lack of contextual and industrial awareness in our software. The tools manipulate strings, vectors and booleans; not design. But tools are the ripest place to affect change.
“I’m building design tools that integrate intelligent algorithms with the design process; tools that try to make designers better by learning about what they’re doing. What we’re doing. Augmenting rather than replacing designers.”
And so here we face our own monastic crisis. History tells us that we shouldn’t be worried. Since the dawn of time we have been augmenting our limited physical capabilities in one way or another, using our opposable thumbs to fashion and grip tools—in one sense our hands are just sockets for peripheral machinery. Technology is but an extension of man, to channel McLuhan and Flusser. And it appears that it’s no different when looking at the development of design. The key here is the idea of augmentation.
Yes, there will undoubtedly be casualties along the way, but it’s been a long time since we sat in stone monasteries scratching biblical verse onto fine vellum (although Pinterest may have you thinking otherwise). The discipline has moved on. With each technological development the role of the designer has changed. We’ve likely all felt the pressure to add more strings to our bow: learn how to code, or pick up some motion graphics knowledge. As designers we’re more multidisciplinary than ever before, and it’s perfectly plausible that some of the next skills we must embrace will involve machine intelligence.
Autodesk is a software company that creates tools for the design industry, and their Dreamcatcher project is a pioneering collaboration with artificial intelligence in design. Dreamcatcher, similar to Field’s GF Smith project, is a generative design program, but for physical structures. The software gives the designer a variety of ways to articulate their need, including “natural language, image inference, and CAD geometry.” Once the problem space has been outlined satisfactorily, the program gets to work sifting, sorting, and optimizing its way through its machine-learned knowledge base to produce a series of solutions. It’s then down to the designer to guide the program by refining variables or closing out various solutions.
The approach taken with the software gives us strong clues as to what our design workflow might become, and how our roles might evolve. With Dreamcatcher, the value of the designer lies less in the direct creation of form—functional or stylistic—and more in the definition of the design problem and criteria. In a video outlining how the software was used to help design a pretty wild-looking car, research scientist Erin Bradner says, “instead of specifying points, lines, and surfaces, a CAD designer of the future will specify goals and constraints, they’ll think at a higher level about their design problem.”
Claire Waring, executive creative director at SapientNitro APAC, agrees: “AI is going to bring with it an explosion of creative possibilities that shortcut the digital design process. A designer’s role will evolve to that of directing, selecting, and fine tuning, rather than making. The craft will be in having vision and skill in selecting initial machine-made concepts and pushing them further, rather than making from scratch. Designers will become conductors, rather than musicians.”
It’s an extremely interesting, not to mention abrupt, upending of the way some of us think of ourselves as designers. Operating at this higher level, we’re no longer directly connected to the visual or physical output. It’s not unreasonable to think that we may one day find ourselves totally disassociated from the form-giving process, in some cases abstracting a problem to such a degree that it’s impossible to know what forms might come from the program. We’ll trust that they will satisfy the criteria specified, but we may not know why or how.
At the beginning of each project we undertake, we receive a brief. Part of the brief relates to the purpose of this particular piece of communication, and thereafter follows a series of considerations and constraints. We approach these briefs with all the baggage of accumulated experience. We’ll work through our own knowledge base for similar problems we have encountered in the past—how to avoid having to make the logo bigger, or how to suggest a more appropriate alternative to Papyrus. As demonstrated by Dreamcatcher, algorithms will approach the same brief in a totally new way. Not as designers, graphic or otherwise, but as models of thought unbound by conventions of taste or style, or even of graphic design itself. There’s potential that through the use of machine intelligence we could start to probe at more fundamental questions of human communication and emotion. Deep.
At the end of the video, with a smile, Bradner anecdotally lets us glimpse her motivations. “In six years, my daughter will be 16. I want to sit down with her and design her first car. I think this is going to excite the next generation of drivers.”
That’s a wonderful thought. But I can hear our old friend Hildegard from beyond the grave: “You’re fucked, too. In six to ten years, there won’t be any more drivers.”
As machine-learning techniques improve and computing becomes cheaper and more powerful, algorithms will be able to tackle ever more complex scenarios.
Artificial intelligence is now being rapidly applied to problems in every little corner of our lives, from transportation and advertising to food, homes, and sex. Our habitats, behaviors, daily rituals, and relationships will be transformed drastically in the coming decades by this new technology. Graphic design is not an isolated discipline—our work must function within a given environment, and as our environments change profoundly, so will our work and our methods of working. No longer the designers ourselves, we might become stewards of algorithms, data gatekeepers closely guarding our secret stash of rare ffffound data-dumps on which to train our models.
But beyond that, at a fundamental level, graphic design works as a kind of user interface: we organize information visually to allow people to navigate through and understand the world. As graphic designers we channel interactions through a primarily visual funnel, stripping out everything that can’t be accommodated by the medium. But as sensors, processing power, and machine intelligence become more portable, more ubiquitous, and increasingly part of the designer’s lexicon, we will one day find that our user interfaces are no longer primarily visual, but sonic, haptic, and multi-sensory. With that comes a brand-new palette of outputs at our disposal.
Allow me to take this to a more extreme conclusion: We’re beginning to interface with our devices in more conversational terms as natural-language-processing algorithms improve, and in more gestural terms with improvements in computer vision and virtual reality. Couple that with renewed interest in anticipatory design, and a massive signal appears, pointing to a future in which graphic designers are barely required at all. But by taking a designer’s mindset to problem-framing and solving, we have skills that position us well to help shape the development and deployment of this new artificial intelligence in design, helping to create new tools, new interfaces, and new interactions that shape future worlds in unimaginably profound ways.