Human-Centered, Robot-Driven: Ethical Considerations for ML in Design

Female cyborg face, her eyes, nose, and mouth human, the rest of her a complex cybernetic robot, pieces of her human facade cracked and crumbling to reveal more robot under the surface.

Cyborg face, generated by Midjourney

UPDATE 2.6.23—I’ve decided to correctly refer to these systems using their actual technologies (i.e. ML—Machine Learning) rather than the market-speak and false narrative of AI (Artificial Intelligence). The article has been updated accordingly.

Modern ML systems like ChatGPT and Midjourney are changing how we design, and how we think about design, today and in the future. This comes at a cost. If we’re not willing to play this game right, we have no business playing it at all. Below I discuss some of the tools in use today, some of the issues we’ve seen with these systems, and how we can work together to have our ML-generated cake and consume it too.

At the end of the article, I’ve listed just a few of these systems already in play today.

What are these ML thingamabobs anyway?

An elaborately constructed device with gears and moving parts but no clear use or purpose

Thingamabob, generated by Midjourney

In a nutshell, systems like ChatGPT and Midjourney can generate human-like output (e.g. text and images respectively). Their models are trained on large amounts of existing data and can generate new, derivative content based on said data.
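To make “trained on existing data, generates derivative content” a bit more concrete, here’s a deliberately tiny, hypothetical sketch: a word-level Markov chain “trained” on a few sentences and then asked to produce new text. It’s nothing like the actual architecture behind ChatGPT or Midjourney, but it shows the basic pattern of learning from existing work and remixing it.

```python
# Toy illustration only -- not how ChatGPT or Midjourney actually work.
# "Train" on a handful of sentences, then generate derivative text.
import random
from collections import defaultdict

training_text = (
    "design is how it works "
    "design is not just what it looks like "
    "good design is honest good design is useful"
)

# "Training": record which word tends to follow which.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# "Generation": walk the learned transitions to produce new, derivative text.
def generate(start="design", length=8):
    output = [start]
    for _ in range(length):
        followers = transitions.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

print(generate())  # e.g. "design is honest good design is not just what"
```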

While these models have the potential to enhance the design process, they also raise several ethical, moral, and practical issues.

Bad actors

a shady, masked character in a digital mask and a hoodie made of code and electrical signals on a dark background

A shady character, generated by Midjourney

One of the main concerns with ML-generated content is the potential for the proliferation of propaganda and misleading information. These systems can also be used to impersonate real people and organizations, not only spreading falsehoods but also violating privacy, with the potential to cause real harm. All of this raises questions about the authenticity of any and all content, and our ability to trace it back to its original author.

Bias

A yin-yang symbol, but the black side is shattered and crumbling, the white dot like a volcano erupting, the white side is unbroken, but covered in black fragments, the background cracked and breaking too

Unbalanced, generated by Midjourney

ML-generated content can also perpetuate bias and discrimination. ML models are only as unbiased as the data they’re trained on, and if the data is biased, the generated designs will be too. This could lead to designs that exclude or marginalize certain groups of people, which is a major ethical concern.

A good example is using ML to recommend people for promotion to management. Sounds brilliant, right? Remove any notion of gender, race, physical attributes, or ability…perfectly objective, yeah? Amazon discovered it isn’t so. Their system was trained on the résumés of people who’d historically done well in those positions. Care to guess which gender dominated at the tech giant? Even though the system was never told anyone’s gender, the bias was baked into the training data simply because of how things had worked in the past. Bias has found its way into our judicial system, mortgage lending, resident screening, and more, affecting real people’s lives, and generally not for the better.
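To make the Amazon example concrete, here’s a minimal, hypothetical sketch (made-up data, not Amazon’s actual system): the model never sees gender, but a correlated “proxy” feature carries the historical bias right back in.

```python
# Hypothetical sketch of proxy bias -- illustrative data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)                        # 0 = men, 1 = women (never shown to the model)
proxy = (gender == 0) * 0.8 + rng.normal(0, 0.3, n)   # e.g. "active in the right clubs"
skill = rng.normal(0, 1, n)                           # evenly distributed across groups

# Historical promotions favored whoever the proxy favored, not skill alone.
promoted = ((0.5 * skill + 1.5 * proxy + rng.normal(0, 0.5, n)) > 1.0).astype(int)

# Train WITHOUT the gender column -- only the "neutral-looking" features.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, promoted)
scores = model.predict_proba(X)[:, 1]

print("mean recommendation score, men:  ", scores[gender == 0].mean())
print("mean recommendation score, women:", scores[gender == 1].mean())
# One group is still recommended far more often: the bias was learned
# from history, even though gender was never an input.
```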

Theft

man dressed in black stealing a piece of art, running

Man in black stealing art, generated by Midjourney

Some artists have gone so far as to file a lawsuit against ML generators like Stability AI and Midjourney, claiming their rights were infringed when their work (and likely that of millions of others) was scraped illegally to train these companies’ models.

Without the billions of pieces of work created by these artists, the models couldn’t function and there would be no tool. Yet not a single artist has been attributed, let alone compensated, for the billions of hours it took to produce the original work.

Recognizing the problems

Cover for Weapons of Math Destruction by O'Neil. Yellow background with red triangle/spikes shooting in from all sides towards the middle. In the center is a skull and crossbones formed from various symbols and shapes from flow-charting and diagrams.

Weapons of Math Destruction, by Cathy O’Neil

In "Weapons of Math Destruction", Cathy O'Neil describes three checks to identify "Weapons of Math Destruction" (WMDs) which are any systems, technologies, or models used in harmful ways, often to marginalized groups. The three checks are:

  1. Opacity: A WMD is opaque if it is hard to understand how it works, and if its creators are unwilling or unable to explain it. This makes it difficult for people to question, investigate, or validate the model's assumptions, biases, or decisions.

  2. Scale: The more people a tool reaches and the more it's used (i.e. the bigger its scale), the greater the risk of it harming large groups of people.

  3. Damage: A WMD causes damage if it’s used to make decisions that negatively impact people's lives, and if the people most affected by the model are not the ones best positioned to understand or challenge it. This makes it difficult for those people to fight back against the model's decisions or to change the model itself.

Book cover for Technically Wrong by Wachter-Boettcher. Teal background with a big red circle with a red X in its center located at the upper-right. Text reads "Technically Wrong, sexist apps, biased algorithms, and other threats of toxic tech"

Technically Wrong, by Sara Wachter-Boettcher

Sara Wachter-Boettcher's "Technically Wrong" also highlights the importance of considering the ethical, moral, and societal implications of technology.

One of the key points they both make is the importance of using diverse and unbiased data sets when training ML models, to reduce the potential for bias and discrimination in ML-generated designs. Both emphasize transparency and accountability when using ML: being open about the technologies in use, and monitoring and evaluating the ML-generated content to ensure it’s inclusive, unbiased, and not harmful.

O'Neil's WMD model highlights the importance of addressing the root causes of bias, such as biased data sets and the lack of diversity in tech companies. Wachter-Boettcher emphasizes designing technology with a human-centered approach, considering the potential consequences and impact on society, and the need for those building technology to be transparent about and accountable for the products they create.

What can we do?

Robot shaking hands with a very strong human. Their hands are horrific blends of each other.

Robot and Human Handshake, generated by Midjourney

Much like the monstrosity that is the image above, humans and machines alike have a lot of work to do to ensure we don’t make a giant mess of everything. Several actions can be taken today when leveraging ML in our work to avoid ethical, moral, and practical problems.

If you’re a data scientist or leader:

  • Use diverse and unbiased data sets: When training ML models, use data that is diverse and as free of historical bias as you can make it; this reduces the potential for biased output (a simple data-audit sketch follows this list). This. Must. Be. Done.

  • Consider the social and ethical implications: Weigh the potential consequences and impact of any generative system you build, and ask yourself how a bad actor might use it for nefarious purposes.

  • Encourage ethical guidelines for ML usage: Work with industry groups and organizations to establish ethical guidelines for the use of ML.
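As a concrete starting point for the first item above, here’s a minimal sketch of a pre-training data audit. The file name and column names (training_data.csv, gender, label) are assumptions for illustration; it simply compares representation and historical outcome rates across groups and flags big gaps for human review. It’s nowhere near a full fairness audit, just a first warning light.

```python
# Minimal pre-training audit sketch -- file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")   # assumed to contain "gender" and "label" columns

# Compare group representation and historical outcome rates.
report = (
    df.groupby("gender")
      .agg(rows=("label", "size"), positive_rate=("label", "mean"))
)
print(report)

# Flag sharply different historical outcomes for human review before training.
if report["positive_rate"].max() - report["positive_rate"].min() > 0.2:
    print("WARNING: historical outcomes differ sharply across groups; "
          "investigate before training a model on this data.")
```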

If you’re a designer:

  • Be transparent about the use of ML: Clearly disclose when ML is part of your process and which specific ML technologies you’re using; this promotes transparency and accountability (see the provenance sketch after this list).

  • Continuously monitor and evaluate generated content: Regularly monitor and evaluate the generated content to ensure it’s inclusive, unbiased, and not harmful.

  • Consult with experts: Seek advice from experts in ML ethics, privacy, and legal issues when implementing ML in design work.

  • Invest in your education and professional development: Stay current on the latest developments and best practices in ML-based design, including the ethical and practical issues that surround it.
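One lightweight way to act on the transparency and monitoring items above is to attach a small provenance record to every ML-generated asset, noting the tool, the prompt, and who reviewed the output. The structure and field names below are purely illustrative, not any kind of standard.

```python
# A sketch of a per-asset provenance record -- field names are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class GenerationRecord:
    asset_path: str
    tool: str                 # e.g. "Midjourney v4"
    prompt: str
    reviewed_by: str = ""     # filled in once a human has checked the output
    review_notes: str = ""
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = GenerationRecord(
    asset_path="hero-image-v3.png",
    tool="Midjourney v4",
    prompt="robot and human shaking hands",
)

# After review, note who checked the output and what they found.
record.reviewed_by = "j.doe"
record.review_notes = "No identifiable people or third-party logos; OK to publish."

# Store the record alongside the asset so the disclosure travels with it.
with open("hero-image-v3.provenance.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```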

Elevate artists and designers, don’t exploit them

3-story tall robot with a single blue eye lifting a group of passengers up on a platform, trying to help them.

Bots lift us up where we belong, generated by Midjourney

  • Implement clear attribution and copyright policies: Clearly state how ML-generated content will be attributed and ensure that the original creator is credited for their work.

  • Use ML to augment, not replace, human creativity: ML should assist designers in the creative process, not replace them. This ensures human creativity and artistic expression are still valued and keeps humans at the center of the process.

  • Educate artists and creators about ML: Educate artists and creators about the capabilities and limitations of ML so they can make informed decisions about how they want to use it in their work.

  • Encourage collaboration between artists and ML experts: Encourage collaboration between artists and ML experts to ensure that ML is used in a way that supports and enhances the artist's vision.

  • Encourage Fair Use and Open-source policies: Promote the use of open-source ML technologies to ensure accessibility and fairness. Transparency into the algorithms will help prevent them from being used maliciously.

  • Protect intellectual property and provide compensation: Provide artists and creators with attribution and compensation for the use of their work in training models.

ML has the potential to enhance the design process, but it raises several ethical, moral, and practical issues. It’s paramount that everyone, designers, developers, leaders, and end-users alike, is aware of these issues and actively takes steps to mitigate them. This includes being transparent about how the ML models work, being accountable for the generated content, and being aware of and addressing bias in the data and generated content. Additionally, it's important to consider how ML-generated content may impact artists and creators and to work towards fair compensation and attribution for their work. By taking these steps, we can ensure ML is used responsibly and ethically, while still reaping the benefits of this powerful technology.

Sidenote: Not for nothing, the courts are literally still out on who exactly owns the output from generative systems. While OpenAI’s terms seem to indicate users own their output, the law is a lot more divided at the moment in terms of actual copyright.

And finally, it’s up to us as technologists to take a bigger role in policing ourselves, asking whether something should be done as often as we ask whether it can be done, as well as how we do it.


Author’s note: This article was written with the assistance of ChatGPT and has a GPTZero score of 268.4: “text is likely human-generated”.


Some existing ML design and content systems:

GPT-3 (Generative Pre-trained Transformer 3): A language model developed by OpenAI, it can be used for NLP (natural language processing) tasks such as text generation, language translation, and language understanding.

Autodesk Dreamcatcher: A generative design tool that uses algorithms to generate design solutions based on design constraints and goals. It allows designers to explore a wide range of design possibilities, leading to more innovative and unique solutions.

Microsoft Sketch2Code: An ML-powered design tool that can turn a hand-drawn wireframe into a functional website. It uses ML to understand the design and automatically generate the corresponding code.

Midjourney: An ML-based generative tool that creates images from inputs like text prompts or other images. It's used to generate new and unique designs and art.
