
Designers: Will AI Take Your Job? (Hint: It Depends on How You Define Design)

New multimodal, large language, generative AI models, such as those developed by #OpenAI and #Google, are powerful game changers. Their ability to generate novel output by recombining vast sources of data from the web will have profound, as yet unimagined impacts on our economy and society. One of the most profound will be the way these models reshape the labor market: the jobs that are available (or not) and the skills they require.


Because of their ability to produce a variety of graphic, text, audio, and visual outputs, the jobs most likely to be impacted are those in design and other creative professions. Whether these jobs are eliminated or just reshaped will depend on local situations: the demands of a particular set of tasks, for a particular project or job, as they are overlaid on the capabilities of these models. But designers and creatives will be affected. And no doubt, employers will look for opportunities to eliminate jobs when possible, given the way our economy is structured.


Whether or not you, as a designer, become redundant depends on how you define, think of, and represent “design”. To avoid being laid off, you must have a deep understanding of both your profession and the capabilities of these AI systems so that you know how to work with them, so that you can emphasize the differences between your capabilities and theirs, and so that you can sharpen your unique capabilities, relative to these systems. Furthermore, you must be able to articulate your unique capabilities to employers and clients.


In this article, I will give you a high-level description of the systems most relevant to design: generative multimodal large language AI models. I will help you think about their capabilities and shortfalls. And I will help you think about, and maybe rethink, the uniquely human, in-the-world capabilities that designers bring to a task and how you can fine-tune these capabilities and emphasize them with your employer and clients. I will help you think about design and our design profession as the uniquely human way to create the world that we want. As Nobel Laureate Herbert Simon put it, design is the capability to take us from the current situation to a preferred one, the capability to make the world a better place.



What are the Current Capabilities of Multimodal Large Language AI Models?


I say “current” here because these models are self-learning and always improving their ability to do their current task set, as they interact with users. In addition, it is very likely that OpenAI, Google, and others, will release subsequent models with additional capabilities, as they compete for market share.


These systems start with certain algorithms, with millions or billions of data points, and with some guardrails. They are then trained on these data sets by humans and/or by self-training. But because they continue to learn from their interactions, they develop capabilities that their developers don’t fully understand. So, we can only assess their capabilities by observing their current behaviors.



What are these capabilities?


Generative

First of all, these models are generative. That is, they analyze the words in a prompt, draw on their analysis of the data they’ve been trained on, and respond with, in the case of #ChatGPT, a unique string of words that constitutes syntactically and semantically reasonable sentences and paragraphs addressing the prompt request. These are surprisingly sophisticated and human-like responses that would impress any ad copywriter.


For example, I prompted #ChatGPT (GPT-4) with: “List four reasons why a client should hire a designer”. And within seconds, it responded, word after word, line by line, with four numbered reasons: Professional expertise, creative problem solving, branding and identity, and time and efficiency. It ended by saying: "Overall, hiring a designer ensures that the client's design needs are met with professionalism, creativity, and expertise, resulting in visually compelling and impactful designs that support their goals and objectives."


ChatGPT constructed a unique response; it did not merely reproduce a passage that it found somewhere on the web. The response directly addressed my request both in substance and form. It made four cogent points and then went on to elaborate on each. Each point was made from the client’s perspective, specifying the advantage that the designer provides to the client’s business. And then it went beyond my prompt, concluding with a summary of the points, even though I didn’t specifically request one.
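To make the mechanics concrete, here is a minimal sketch of sending that same prompt through OpenAI’s API rather than the chat interface. It assumes the openai Python package (v1-style client) and an API key in your environment; model names and request details change over time, so treat it as illustrative rather than definitive.

```python
# Minimal sketch: the same "four reasons" prompt sent programmatically.
# Assumes the `openai` Python package (v1-style client) and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user",
         "content": "List four reasons why a client should hire a designer"},
    ],
)

# The reply is generated word by word from learned patterns,
# not retrieved from a stored document.
print(response.choices[0].message.content)
```

Nothing in the request tells the model where to look something up; the response is assembled from patterns learned during training, which is why each run can produce a slightly different answer.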


Multimodal

But text is not the only mode these models work in, and this is where AI has significant implications for designers and other creatives. While ChatGPT was built on the vast text resources of the Web, other models use data sets of images, music, chemical structures, etc. Generative pre-trained transformers (GPTs) work just the same with these symbolic data sets as they do with words. That is, they analyze the data looking for interconnections and patterns and then match those patterns to the ones they find in prompts, whether the data are words, images, musical notes, computer code, etc.


The first of the multimodal models that came to the public’s attention was OpenAI’s #DALL-E, which was introduced in January 2021. DALL-E analyzes patterns among millions of image-text pairs scraped from the Web and is able to generate a unique set of images, given text input. For example, I gave DALL-E 2 the prompt: “A photograph of a smiling housewife using ‘Tide’ laundry detergent.” It returned this group of rather unimpressive images for me to choose from:



AI-generated images, shown for demonstration purposes


There are problems with the hands, as well as other artifacts, and with the text within the image (perhaps there is a guardrail prohibiting the use of registered trademarks).
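For comparison, here is a minimal sketch of how an image request like the one above can be made through OpenAI’s image API. Again, this assumes the openai Python package (v1-style client); the parameters shown are illustrative and change across versions.

```python
# Minimal sketch: requesting several candidate images from a text prompt,
# similar to the DALL-E 2 example above. Assumes the `openai` Python
# package (v1-style client) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-2",
    prompt="A photograph of a smiling housewife using 'Tide' laundry detergent",
    n=4,                # ask for a group of candidates to choose from
    size="1024x1024",
)

# Each entry contains a URL pointing to one generated image.
for image in result.data:
    print(image.url)
```

The `n` parameter is what produces a set of alternatives to choose from, which is how the group of images above was generated.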

But these models are constantly improving. Other similar applications, such as #Midjourney and #Stable Diffusion, have come out since DALL-E’s introduction, and they allow the use of a reference image (as DALL-E 2 does now) along with text as prompts. They also provide a variety of built-in filters and adjustments to generate sophisticated, high-resolution images. The quality of these images is coming to match or exceed the quality of human-generated art.


Some systems, such as #Runway’s #Gen-2, even allow you to generate animations based on text or graphic input. Google’s #MusicLM and OpenAI’s #Jukebox are tuned to generate background music or even original songs. And in software design, applications of generative AI, such as ChatGPT (GPT-4) and GitHub Copilot, take text input and generate lines of operable code.

Future trends

These generative models are beginning to be integrated into existing applications, and this trend will continue, especially for applications that deal with multimedia. For example, Adobe is integrating its generative text-to-image model, #Firefly, into #Photoshop so that users can add, subtract, or replace portions of their photos with pieces of AI-generated images by using the selection tool and a text prompt.


Future applications will be specifically developed to augment various AI models and stitch them together, with application programming interfaces, to make suites of tools that are even more powerful and easier to use than the separate applications. This trend might result in something like the hypothetical example sketched below, which pulls together various AI models to create an environment that allows for code generation based on natural language voice commands.
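As a purely hypothetical illustration of that kind of stitching, the sketch below chains two models through their APIs: a speech-to-text model transcribes a spoken request, and a text model turns the transcript into code. It assumes the openai Python package (v1-style client); the file name and the two-step workflow are invented for illustration, not a description of any real product.

```python
# Hypothetical sketch: stitching models together with APIs so that a
# spoken request becomes generated code. Assumes the `openai` Python
# package (v1-style client); "voice_command.mp3" is a made-up file name.
from openai import OpenAI

client = OpenAI()

# Step 1: transcribe the recorded voice command with a speech-to-text model.
with open("voice_command.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: hand the transcribed request to a text model that drafts code.
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You write short, working Python functions."},
        {"role": "user", "content": transcript.text},
    ],
)

print(completion.choices[0].message.content)
```

A production tool would add error handling, a way to run and test the generated code, and an interface for iterating on it, but the basic pattern is just a chain of API calls.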

Another trend is that large language models are becoming specialized, using large data sets specific to particular domains. Examples of this trend are models trained on databases of chemical structures, contributing to significant gains in the design of new drugs. As this trend plays out, other models are likely to be trained on specialized data sets in architecture, fashion, engineering, scholarly research, and so on, supporting major breakthroughs in these fields.


Yet another trend is the use of AI in the design and production of physical products, such as kitchen appliances, shoes, and cars. These designs can be supercharged by connecting generative AI systems with other applications and devices. For example, multimodal AI models are currently being used to aid product design during the conceptualization phase. The results of this phase are mere images, but they can be connected to a CAD package to generate the production specifications of the product and then to a CAM application to actually produce it.


What are the Current Limitations of these AI Systems?

Again, the word “current” must be used here, since these systems are ever-evolving and novel applications are constantly being produced. This means that, no doubt, there will be other trends that emerge that we can’t yet imagine. But at least for now, there are significant limitations among AI systems. Designers need to know these limitations because they often correspond to the unique strengths that humans can bring to the process.


Here I will not address the myriad issues and problems with AI associated with “the apocalypse”, security hacking, the invasion of privacy, intellectual property, deep fakes, etc. Rather I will address issues in a much narrower sense, those most closely associated with the design process.


Lacks agency and executive function.

The biggest limitation is that generative AI models cannot make executive decisions. That is, they don’t know what needs to be designed, they can’t start the design process on their own, and they don’t know when it is done. While they have a lot of general knowledge, they don’t understand the local situation, and they have no idea of what the preferred situation is, to use Simon’s terms. They don’t know which problem can be addressed by what kind of design: Can the design goal be achieved with a physical artifact, a service, a motivational campaign, or what? Nor do AI systems know which of the many problem situations are top design priorities. All these issues need to be addressed before AI models are employed and need to be represented in the prompts.


Limited by the data they are trained on.

Generative models can be very creative in their responses. But this creativity is based on finding patterns in the data they have access to and that they are trained on. They can’t go beyond the data.


ChatGPT, for example, has access to vast amounts of data scraped from the Web. At the same time, the data it draws on are very noisy. Consequently, the likelihood of generating many off-target combinations is very high. The noise is also likely to introduce biases and misinformation into the results. Some of these limitations can be addressed by the careful wording of prompts. Others require careful review and assessment of the output. Accepting the results of a request to generate product ad copy, for example, without careful review would be risky.


In addition, the database is often time-limited. As far as ChatGPT is concerned, the database was locked as of September 2021. So, it cannot comment on, or include in its analyses, anything that happened after that.


Untrustworthy.

A related issue is that the produced results can be inaccurate, not based on reality, or even outright bizarre. Beyond the kind of “hallucination” produced in Kevin Roose’s conversation with Microsoft’s Bing, the more disconcerting results for designers are those that sound reasonable but are totally made up. For example, a legal brief filed by an attorney using ChatGPT cited legal cases in his argument that, as it turned out, did not exist.


ChatGPT and other text models are trained to produce results that sound like reasonable, even authoritative, human speech. And as tempting as it is to cut and paste such reasonable-sounding outputs, it is essential to check the accuracy of any factual statements. Unfortunately, this necessity can significantly reduce the productivity gains from ChatGPT, adding to the temptation to skip a review.


Lacks local context knowledge.

Large language models, such as ChatGPT, have a lot of general knowledge about the world and about language. But they have no contextual knowledge of the immediate local situation.


Yet knowledge of the local situation—knowledge of your client, of those who might use and benefit from your design, of their needs and problems, their physical and social context, etc.—is essential to the design process. All of this relevant information would need to be represented by prompts. But even then, it is not clear that these models would “understand” the complex network of local, interacting factors that affect designs.


No emotions or morality.

Finally, and perhaps most importantly, these systems have no human emotions or moral grounding. Significant work is being done on AI systems that can, at some level, “understand” and express emotions. But it is a profoundly limiting factor that AI does not exist in the physical world and cannot directly experience the pleasure, pain, desire, etc. that are the basis for human emotions and for empathy with the emotions of others. Many psychologists claim emotions are the foundation of moral response among humans. This lack of emotional experience limits how AI systems can respond to moral issues and moral situations. It reduces their ability to understand the affective component of human situations and how they might respond to them. While the outputs of these systems often evoke emotions among humans, the systems cannot feel compassion or rage, and they do not know what is bad and good.


There are built-in guardrails that AI systems follow and their statements may sometimes sound morally authoritative. And researchers are experimenting with AI systems that can reason morally. But these systems operate differently than large language models. They are extremely limited in their capabilities and not nearly able to handle the complex emotional and moral situations in which designers sometimes find themselves. For the foreseeable future, AI systems will be limited in their moral capacity.


Finally, related to agency and executive function, these systems don’t have the emotional or moral insight to know what people need, desire, or aspire to nor do they, on their own, have a conception about some ideal or desirable future situation that a design could contribute to achieving. In short, they don’t know what makes a design “good”.


What Makes Designers Distinctive (and Your Job Safe)


What does this all say about the future of ad copy writers, graphic designers, web designers, app developers, user experience designers, and architects?


Functions normally associated with design will continue: ad copy will be written, illustrations generated, websites constructed, apps developed. Products will be produced, buildings designed. But these questions remain: To what extent will these functions be performed by AI systems or by people? And what skills will designers need in the future?


At the extreme end, entire sets of functions will be served by AI-controlled systems. For example, one can imagine a fully automated game design process in which, given an initial prompt, a generative AI system comes up with a new game concept, implements it in code, distributes it online, continually collects data, and uses those data to fine-tune the game. Conceivably, none of this would require human illustrators, programmers, or data collectors or analysts.


At the other end, people will be primarily responsible for these functions, augmented by AI systems. They may even retain their titles: graphic designers, app developers, web designers, etc. But their skill sets will be different; traditional technical skills will be replaced by skills in operating AI systems.


However, it isn’t merely by being effective AI tool users that designers will be saved from replacement. It is by emphasizing and fine-tuning the distinctions between what AI can currently do, what it can’t do, and the unique contribution people can make. These distinctions will establish the worth of designers to employers and clients and, perhaps, even to themselves.


Designers are thinking, feeling humans who live in the world, and they are great at understanding and acting on complex in-the-world situations. The biggest difference between humans and AI systems is the executive function, the moral compass, and the rich local knowledge that humans can bring to the design job. This distinction requires designers and design educators to think very differently about what design is and what it is not, to think quite broadly about design as a process and a profession. Too often design is thought of and taught quite narrowly, as a specialized set of skills and knowledge that addresses a narrow range of problems defined by disciplines, such as software design, product design, and architecture, when the solution may not require the design of software, products, or buildings.


Design is not technique.

AI systems will dramatically transform creative production associated with particular artistic techniques. Creatives who spend their education and early career years mastering particular techniques, such as photography, illustration, or musical instruments, are the most likely to be replaced by people who do not have these technical skills but who know how to use AI tools to generate digital products like those of these creators. This applies less to creatives exercising their craft in the real world, such as live performances in music, dance, and theater. But even here, the employability and income of these creatives will be diminished by the fact that AI-generated music, “actors”, “dancers”, etc. will compete for opportunities in the digital world, opportunities that are currently available to creatives.

Furthermore, the creative works of these artists are often used to train generative multimodal models, thus creating a competition between artists and their own works—an intellectual property issue that has yet to be resolved.


Even designers who are defined by technical skills that are normally considered safe or even advantaged by technological developments, such as software or app developers and webpage designers, are threatened by AI. Who needs people with sophisticated coding skills in specialized languages if people with high-level programming knowledge but few coding skills are sufficient to produce advanced software products?


Art and design schools and their faculty that focus primarily on technique will also be transformed by these AI developments. Curricula that haven’t yet moved away from teaching traditional production and performance techniques and toward digital skills will be forced to do so, as students seek these new, more employable skills. And these schools and their faculty are advised to think more broadly about what design really is.


Design is not just technique or even primarily about being creative. Design is not about photographic skill, illustration, animation, writing, or coding. Design is about solving important complex problems in the physical, social world. And design is not just about being an imaginative, out-of-the box creator. If your creations are not making a positive contribution to human experience or solving complex human problems, you’re not using your full set of design capabilities.


Design is not just the attributes of a good design.

Good designs are relatively easy to identify if the criteria are specific attributes of the designed artifact: the color scheme is complementary, the composition is balanced, the design for a building is structurally sound, the ad copy is grammatically correct and well-stated, the code is elegant. AI is up to that kind of assessment and can generate some amazing images, building designs, statements, and code.


But good design is not only about aesthetics, functionality or look and feel. Design is not just the product, the designed artifact. Design is about solving problems and having a positive impact on the world. And it is about the process for getting there. Therefore, good design, if we use Simon’s definition as a process that takes us from a current state to a desired state, implies a different assessment criterion—not attributes of the product but attributes of its impact on the situation as a result of that process.


AI systems are not well suited to make those assessments. The guardrails built into AI systems often tell them what they should not do—not provide information that would knowingly harm someone, not share individual data without consent, not discriminate against individuals or groups, not break the law. But they don’t know what they should do; they don’t know about designs that are good. Designers can make that assessment.


Design is about solving complex problems. For the foreseeable future, good design will require human designers if we define design not just as technique, not just as the attributes of the well-designed artifact, but as a process of successfully changing situations that aren’t working, that are problematic, that are harmful into situations that meet people’s needs and desires—designs that solve human problems. These are good designs. Great designs are ones that address problems that are particularly complex, ill-defined and highly constrained. And great designers are ones who can solve these wicked problems, ones who can make the world a better place.


What does this involve? What are designers’ superpowers?


To solve problems, designers need to know a lot about the local design situation: where things are working, where and why they are not, who is involved in the situation and in what ways, what the constraints are, and what human, physical, and financial resources are available. Designers also need to understand the preferred situation: what people need, what they want, what they fear, what they desire, dream about, and aspire toward, things that people sometimes don’t even understand themselves. AI can’t provide these insights; designers can.


Designers need to design with purpose and values and understand the values of the people affected most by their designs. Of course, values vary among people. But there are underlying principles that most people draw on in their lives, even if they disagree on how to get there or what it might look like when they do, principles such as avoid harm, increase happiness and well-being, advance knowledge and agency, address injustice, build relationships. Good designers need to tap into these values, as locally felt, build consensus around them, manage conflicts, and turn these values into articulated, desired outcomes to aim toward. And good designs have the intended impact. So designers must be able to assess the impacts of their designs, both intended and unanticipated, and adjust the design accordingly.


And, of course, designers need to know how to get from current situations to the preferred ones. Designers must have the ability to determine whether this situation requires a product, that situation calls for a service, and yet another would benefit from a built environment. They need to be able to assemble and coordinate the human and digital expertise, knowledge, and skills of different specializations to address these complex problems. They need to collaborate with the people most affected by the design outcomes. They need to manage the workflow and the resources that support it and make the tradeoffs that will inevitably arise. They need to try out the products of this process, to see if these products are working and if they are moving the needle toward the preferred situation. And they need to know when they are done, when the products, services, or experiences achieve the desired impact, or perhaps when they are good enough.


It is in the process of getting from here to there that AI will be an essential partner. For many components of the process, AI and other digital tools will be used in combination to support and augment the conceptualization and generation of prospective desired outcomes, the collaboration among people and resources, the rapid prototyping and implementation of the design, the assessment of its impact, and the fine-tuning of the design.


An essential design skill will be knowing which tool can be effective in the process and in what way. And an essential skill in using AI will be the ability to express in words or images the complex relationships, problems, and constraints that exist in the current situation and the as-yet-unrealized future situation that is desired. Words, images, music, and computer functionality will, no doubt, be an important part of these solutions, and AI is great at generating these forms. But it will be essential to know when and how these productions can contribute to solving the problem, to making an impact. Above all, to make the world a better place, the designer must be able to make judgments about what is most important to design.


AI will not replace the complex set of skills, knowledge, and values of designers, if design is conceived of as the process of solving important, complex, real world, human problems. AI will not replace you; it will be your friend.


_________________

I would like to thank @Anshul Sonak, @Suzee Barrabee, and @Anders Sundstedt for comments on an earlier draft. Any remaining errors are mine.
