7 Heuristics For Humane Design

What is humane design? And how can we apply it to our product design process and to the UX/UI design of apps?

John Fallot | May 2, 2023 | 16 min read

In April of 2012, two student organizers at the State University of New York at Purchase entered a large room and began to unfold chairs in a semicircle. As other students began entering around 8pm, the atmosphere gradually grew tense: all the black students were seated on the right, and all the white students on the left.

Everyone was there to discuss the recent murder of Trayvon Martin, and the state of race in America.

Taking a deep breath, the organizers mixed up the students’ seating like in the bus scene from Remember The Titans. White and black students now sat together, and talked with candor about systemic racism with their dignity respected. Changing the configuration of the room had made it a safer space.

So what does this story have to do with digital user experience design? 

After all, this was a physical space with real people, not a virtual application on a phone. Wasn’t the room just a morally impartial container for human conversation? Surely, the fact that the organizers had stepped in to adjust things was simply how humans had used the room that day?

We tend to have a view that design is—or should be—neutral and rational; something onto which users might project their own views and biases, but without biases of its own.

But what if we have it backwards? What if today’s designs are not impartial, but are facilitating, or even encouraging, users to act with hostility towards one another? What if these mere “containers” are driving our conversations in ways, and towards outcomes, that we didn’t ourselves intend?

In this piece, I’m going to explore some research, events, and experiences that suggest this is very much the case.

Immoral Design & Its Consequences

In all likelihood, the view of design as something blameless or impartial is part of a myth believed by both companies and citizens to distance themselves from the consequences of immoral design.

Immoral design isn’t the same as "bad" design. 

Bad design describes a product or service that fails to meet its goals and causes headaches for users.

Immoral design, on the other hand, may be found in a product that’s elegant and well-engineered, but that also—like an illicit substance—provokes dependence, societal harm, and even physical violence.

Consider how contemporary buzzwords excuse our decisions as businesses and designers. Has your delivery app’s business model contributed to lower wages and people working three jobs to make ends meet? Don’t worry, it’s because of the gig economy. Did you create a social platform that fuels misinformation and increasing hopelessness? Don’t worry, it’s the fault of post-truth politics.

What these platforms have in common is a tendency to view people only as a means of making money, not as ends in themselves. For this reason, a more connected life hasn’t, for most of us, led to greater quality of life.

So, what’s to be done? I’d like to propose a few ways that we as designers might start creating apps that systematically bring people both dignity and delight, without undermining companies’ bottom lines.

The Four Axioms of Designing for Dignity

Let’s dig a little deeper into what it might mean to design for dignity. First, let’s be specific about the perceptions we’re bringing to the table. These make up what I call the Four Axioms of Designing for Dignity.

  • Axiom 1: To design is to render intent, behavior is the medium of design, and systems are a finite set of behaviors of any scope (e.g. a product or institution).

  • Axiom 2: Systems, and the feelings and behaviors they are designed to encourage, will always uncover the design’s original intent.

  • Axiom 3: Humans desire dignity, form identities wherever it’s lacking, and a system that fails to afford dignity will inevitably face problems.

  • Axiom 4: Humane design is the proposed practice of designing systems so that users experience dignity throughout the system, and its core holding is that designers must view users, not only as a means to making capital, but also as ends in themselves.

Axiom 1: To design is to render intent, behavior is the medium of design, and systems are a finite set of behaviors of any scope (e.g. a product or institution).


Axiom 1 combines the logic of both Center Centre co-founder Jared Spool and Dalberg Design co-founder Robert Fabricant. It means that a product’s look and feel, and a user’s journey as they use it, depend on its intentions. Is the intended outcome to make users feel welcome and heard, or is it just to present everything the product can do, without consideration of feature priority?

This axiom also holds that design is not limited to screens or pages. It invites us to broaden our horizons about where designing for dignity might be applied. Consider that meeting room at the State University of New York at Purchase: it was a designed space. Its original contours created tension and animosity. A redesign, requiring students to mix and mingle with one another without regard to race or past friendships, eliminated that tension almost instantly. Similarly, a remote control, a corporate structure, Alexa, virtual reality headsets, a way of life: all of these are large parts of our everyday lives which can be redesigned in a humane way.

Axiom 2: Systems, and the feelings and behaviors they are designed to encourage, will always uncover the design’s original intent.


Axiom 2 argues that, if a product or system is producing an undesirable effect, you can trace it back to the initial intention and find a one-to-one match. Psychologist Adam Alter, author of the book Irresistible, makes the case in his TED Talk Why our screens make us less happy that, as mobile devices and technologies have grown more advanced, the time we spend away from our screens has dwindled to nothing, mostly because our devices lack stopping cues like the chapter breaks in books or the end credits of shows.

Design ethicist Tristan Harris echoes this point in an interview with Vox: “It’s designed to hook you,” he says, explaining that the bright colors, infinite scrolling, and constant notifications lure us into using these devices without end. Those sources of constant stimulation were deliberately designed, and their intent was to capture your continued attention at all costs—whether that’s to keep you subscribed to a premium app, or to show you as many sidebar ads, commercials, and promoted content pieces as possible.

That drive for attention has had staggering consequences both on and off these platforms. A video from Vox’s Strikethrough series, Why every social media site is a dumpster fire, summarizes this perfectly. “Humans are social animals at their root, and they’re constantly looking for reinforcement signals or signals that we belong,” says Jay Van Bavel, Associate Professor at NYU.

He researches what kind of information people respond to on social media. He found that “moral-emotional words like ‘blame’, ‘hate’, and ‘shame’ were way more likely to be retweeted than tweets with neutral language” because they sent the clearest signal about where a person stood on the issue. And while a physical environment would allow for social checks and other social cues to afford communicating with dignity, the ease of blocking and dissociating from disagreeing viewpoints online drives us into tribes.

This, in turn, makes us vulnerable to conspiracy theories, misinformation campaigns, and downright hostile propaganda from bad actors. Journalist Carlos Maza sums up the issue nicely towards the end: “The problem isn’t that a few bad apples are ruining the fun, it’s that these sites are designed to reward bad apples.” That harkens back to the original intent of social media sites: to profit off users’ attention, without regard to how it happens. This, as Axiom 3 posits, is a case of users not being treated with dignity and, consequently, of the system facing problems.

Axiom 3: Humans desire dignity, form identities wherever it’s lacking, and a system that fails to afford dignity will inevitably face problems.


Axiom 3 pulls from the works of both Donna Hicks, Ph.D., author of Dignity: The Essential Role It Plays in Resolving Conflict, and Francis Fukuyama, Ph.D., author of Trust: The Social Virtues and the Creation of Prosperity, as well as the now-infamous claim that the Cold War’s conclusion represented the “end of history” because democratic capitalism was considered to have successfully proven itself as the best system over all others.

Donna Hicks, having convened warring parties all over the world from Sri Lanka to Palestine, explores the role dignity plays in the breakdown and restoration of relationships in her work. Hicks identified ten ways to honor the dignity of others that she called the Essential Elements of Dignity, which are: 

  • acceptance of identity [as equal to your own];
  • acknowledgement [of their existence and the impact of your actions on them];
  • inclusion;
  • safety [both physical and psychological];
  • fairness;
  • freedom [to make our own decisions/from control];
  • understanding [and giving others the chance to explain themselves];
  • [giving others the] benefit of the doubt;
  • responsiveness [to the pain others may be experiencing]; and
  • righting the wrong [when we have caused pain].

When seeking to reconcile with others, we fail to observe the above elements at our peril.

Meanwhile, Francis Fukuyama (notwithstanding intervening developments regarding the end of history) states that “identity is based on a universal human desire to have one’s dignity recognized.” He argues that, in modern society, our insistence on being correct in the face of disagreement, or even contrary evidence, drives people to form political in-groups and out-groups—us-and-them distinctions—in order to satiate and actualize that desire for recognition. He further makes the case that social capital—the capacity of people to cooperate and trust one another—is a determinant of a society’s economic success, and highlights economies of the former Soviet Union as examples of where mistrust has hampered economic development.

To bring this back to product design: heuristics already exist to both improve and test user navigation and delight within an application. Yet we seldom test the capacity of systems to handle user reports about objectionable content, or to anticipate objectionable content at all. As such, users’ sense of safety and recognition is often undermined, and therefore, so is their dignity. This often comes at considerable cost to both the brand equity and the bottom line of companies that fail to take these events seriously. Sometimes it can even prove catastrophic.

An in-depth report from Last Week Tonight with John Oliver found that, in 2012, Facebook launched a prolific expansion into Myanmar, such that Facebook became synonymous with going on the Internet altogether. It’s similar to how, in the United States, we Google something; in Myanmar, you Facebook it. Yet Facebook did so without building sufficient infrastructure to police objectionable content: it staffed only four moderators for the whole of the country. It also failed to adequately translate requisite calls to action for content moderation into Burmese. The outcome was catastrophic, as misinformation spread on Facebook in Myanmar exacerbated racial animosity against the Rohingya, and contributed to a state-sponsored campaign of violence whose death toll is fast approaching 10,000.

The cost in human life, exacerbated through a failure to confront misinformation and hate speech, is incalculable. But there are, and continue to be, calculable damages to Facebook’s brand equity, revenue, and stock value because of these failures to act responsibly. These costs could easily have been preempted had the product been designed for dignity in the first place. This is an expression of Axiom 3: it’s expensive to be immoral, so don’t.

Axiom 4: Humane design is the proposed practice of designing systems so that users experience dignity throughout the system, and its core holding is that designers must view users, not only as a means to making capital, but also as ends in themselves.


Axiom 4 ties all the others together, affirming the view that we must design for users’ welfare, not against it. In Grounding for the Metaphysics of Morals, the 18th-century philosopher Immanuel Kant put forward the following “categorical imperative”—a rule of ethics that defines what is moral:

[You must] act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.

Kant was skeptical about consequentialism—assessing an action’s moral value by its consequences—partly because an action’s outcomes can be difficult to foresee. However, it seems to me that there are many cases where we can reasonably foresee negative results. Facebook was warned by civil society groups about the risks of expanding in Myanmar without accounting for existing ethnic tensions, and the expansion did indeed result in a human rights catastrophe. 

So Axiom 4 maintains that a clear and conscionable design is a good design that should reasonably anticipate its outcomes, and more often than not, produce good ones.

Seven Humane Design Heuristics

You might be thinking, “This is all well and good, but how might we put these principles into action?” To that point, I’d like to advocate the following “humane design heuristics” that we can apply in our work as digital designers.

  1. Notification Bundling. The system should bundle notifications throughout the day, so as to reduce user stress.

  2. Stopping Cues. The system should use pagination and other visual cues to give users a sense of satiation and completeness when they have finished a task.

  3. Desaturated Mode. The system should use desaturated, muted colors, and allow users to toggle grayscale modes.

  4. Doherty Threshold Observance. The system should not return feedback faster than 400ms, so as to prevent addictiveness.

  5. Recognition and Safety. An inventory of the system should be taken to more fully consider its impact on society.

  6. Diverse Input Validation. The system should accommodate edge cases, nuances, and implicit biases.

  7. Raise the Conversation. The system should guide conversations towards healthier outcomes using smart replies and less-threatening serif fonts.

The first three Heuristics—Notification Bundling, Stopping Cues, and Desaturated Mode—reiterate the recommendations of the Center for Humane Technology, as outlined in this Vox video titled It’s not you. Phones are designed to be addicting. Indeed, in recent months we’ve seen these features added to products including Google’s Android Pie OS and Facebook’s Instagram app.
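To make the first of these concrete, here’s a minimal sketch of notification bundling in TypeScript. The AppNotification shape, the NotificationBundler class, and the three-hour digest interval are all illustrative assumptions for the sketch, not any platform’s actual API:

```typescript
// Minimal sketch of notification bundling (illustrative, not any
// platform's real API): rather than pushing each notification the
// moment it arrives, queue them and deliver a single digest at
// scheduled intervals throughout the day.
interface AppNotification {
  title: string;
  body: string;
  receivedAt: Date;
}

class NotificationBundler {
  private queue: AppNotification[] = [];

  constructor(
    // How the digest reaches the user (push, email, etc.) is left abstract.
    private deliverDigest: (batch: AppNotification[]) => void,
    // Hypothetical default: one digest every three hours.
    intervalMs: number = 3 * 60 * 60 * 1000
  ) {
    setInterval(() => this.flush(), intervalMs);
  }

  enqueue(notification: AppNotification): void {
    this.queue.push(notification);
  }

  private flush(): void {
    if (this.queue.length === 0) return; // nothing to report; stay silent
    this.deliverDigest(this.queue);
    this.queue = [];
  }
}
```

The point is behavioral rather than technical: the user hears from the app a few times a day, on a schedule, instead of dozens of times at random.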

The fourth point, Doherty Threshold Observance, pulls from the findings of Walter J. Doherty and Ahrvind J. Thadani in the 1982 IBM Systems Journal:

When a human being’s command was executed and returned an answer in under 400 milliseconds, it was deemed to exceed the Doherty threshold, and use of such applications was deemed to be “addicting” to users.

This heuristic therefore suggests keeping the loading times of applications to 400 milliseconds or slower, pending more research.
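As a rough illustration, a client could enforce that floor by racing each request against a fixed delay. The helper below is a sketch under this heuristic’s assumptions (the name withResponseFloor and the usage example are invented for illustration, and whether an artificial delay is actually desirable is, as noted, pending more research):

```typescript
// Sketch: wrap an async operation so its result is never surfaced
// faster than the 400ms Doherty threshold, per this heuristic's
// deliberately cautious reading of the 1982 findings.
const DOHERTY_THRESHOLD_MS = 400;

async function withResponseFloor<T>(operation: Promise<T>): Promise<T> {
  const floor = new Promise<void>((resolve) =>
    setTimeout(resolve, DOHERTY_THRESHOLD_MS)
  );
  // Wait for both the real work and the minimum delay to elapse.
  const [result] = await Promise.all([operation, floor]);
  return result;
}

// Hypothetical usage: the feed renders no sooner than 400ms after the
// request, however fast the network happens to be.
// withResponseFloor(fetchFeed()).then(renderFeed);
```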

The fifth point, Recognition and Safety, draws from an initiative that the Artefact Group in Seattle, Washington, has spearheaded called the Tarot Cards of Tech, which were made with one question in mind: Are we designing products that support the world we all want to live in? Writing in a corporate article, Rob Girling & Emilia Palaveeva, respectively Artefact’s CEO and Marketing and Communications Director, had this to say: 

In the last 60 years... design has done an outstanding job evolving to address the problems of the day, while extracting and incorporating insights from other disciplines [such as emphasizing user needs]. But with recognition, comes responsibility. If followed blindly and left unchecked, this cult of designing for the individual today can have disastrous long term and unintended consequences [for humanity]. A platform designed to connect becomes an addictive echo chamber with historic consequences (Facebook); an automation system designed to improve safety undermines our ability to seek information and make decisions (the plane autopilot); a way to experience a new destination like a local squeezes lower income residents out of affordable housing (Airbnb).

The prompts are wildly insightful, ranging from ‘What’s the worst headline about your product that you could possibly imagine?’ with regard to anticipating scandals and disruptions to society, to ‘What might cause people to lose trust in your product?’ as it concerns addressing user needs and edge cases, and are a must for anyone designing a product that might “move fast and break things”, as Mark Zuckerberg once, inauspiciously, declared.

Diverse Input Validation means prioritizing and accommodating edge cases, like allowing users to customize their gender identity on social media, and not assuming that users will use the app strictly as your “ideal” user journey intended. Design For Real Life by Eric Meyer & Sara Wachter-Boettcher, from the must-read A Book Apart series, is the principal inspiration for this heuristic, and offers excellent case studies on the topic.
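Here’s a minimal sketch of what this could look like in practice, using a gender field as the example. The type names and preset options are illustrative assumptions, not a recommended taxonomy:

```typescript
// Minimal sketch (type and option names are illustrative assumptions):
// offer common choices, but never force free-text self-description
// into a preset bucket.
type Gender =
  | { kind: "option"; value: "woman" | "man" | "non-binary" }
  | { kind: "self-described"; value: string }
  | { kind: "undisclosed" };

function parseGenderInput(raw: string): Gender {
  const normalized = raw.trim().toLowerCase();
  if (normalized === "") {
    // Declining to answer is itself a valid answer.
    return { kind: "undisclosed" };
  }
  if (
    normalized === "woman" ||
    normalized === "man" ||
    normalized === "non-binary"
  ) {
    return { kind: "option", value: normalized };
  }
  // Anything else is stored exactly as the user wrote it, not coerced.
  return { kind: "self-described", value: raw.trim() };
}
```

The design choice that matters is the self-described branch: the preset list is a convenience, not the boundary of who the user is allowed to be.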

In the case of product design, as information designer and programmer Evan Hensleigh puts it: “Edge cases define the boundaries of who [and] what you care about”. And if you overlook those edge cases, the boundaries within which your app can operate and command appeal are likely to shrink. Your edge cases can offer a chance to send the loudest, clearest signal about what your company believes. Consider the recent news that Lyft is partnering with Voto Latino to help get Dodge City voters to the polls, after city leaders there moved the sole polling place to outside the city limits. That sends a crystal-clear signal to prospective and current users that Lyft might be a product that reflects their sense of identity and belonging.

Conversely, if your app takes an outright dismissive approach towards edge cases, declining to accommodate them once it’s clear that metrics concerning your target audiences are being met, then your app is missing an opportunity to broadcast a clear signal on that issue, or even suggesting that your company lacks the kind of driving moral language that declares “these are the causes that really matter to our product and user community”. As the business consultant Simon Sinek puts it, “People don’t buy what you do; they buy why you do it.” That is, to mirror Professor Jay Van Bavel’s research from earlier about people seeking signs that they belong: people are looking for chances to join an in-group and have their dignity recognized.

Erika Hall, author of Conversational Design—another in that A Book Apart series—sums up that final heuristic perfectly in an October 16, 2018 Twitter thread. She writes:

If Gmail can suggest replies, social platforms can message users who are about to send hateful posts and suggest they do otherwise. I bet it would help. People change their behavior when they feel like they are being observed. It’s not a 100% fix, but could be a good complement to other anti-harassment measures. The perceived downside will be that it dampens raw engagement. And the algorithm could learn which interventions are effective over time. This is not an original suggestion, but it would require optimizing for something other than sheer “activity” on the platform... If benign actors have to consent to their data being aggregated and trafficked, part of the bargain should be that the data aggregation is also being used to create a safer neighborhood.

She closed that thread by writing that it’s time for designers to start worrying about preventing harmful interactions as much as they think about enabling helpful ones. It’s past time that we did.
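To make Hall’s suggestion a little more tangible, here’s a toy sketch of such a pre-send nudge. The keyword list, the NudgeResult shape, and reviewDraft are hypothetical stand-ins; a production system would rely on a trained moderation model rather than keyword matching:

```typescript
// Toy sketch of a pre-send nudge (hypothetical; a real system would
// use a trained moderation model, not a keyword list). Before a post
// is published, scan the draft and, if it looks hostile, suggest the
// user reconsider, mirroring Gmail-style smart replies.
interface NudgeResult {
  shouldNudge: boolean;
  suggestion?: string;
}

// Illustrative stand-in for a classifier: the moral-emotional words
// Van Bavel's research identified as engagement bait.
const HOSTILE_TERMS = ["hate", "shame", "blame"];

function reviewDraft(draft: string): NudgeResult {
  const lowered = draft.toLowerCase();
  const hits = HOSTILE_TERMS.filter((term) => lowered.includes(term));
  if (hits.length === 0) {
    return { shouldNudge: false };
  }
  return {
    shouldNudge: true,
    suggestion:
      "This post may read as hostile. Would you like to rephrase it before sending?",
  };
}
```

As Hall notes, even a simple intervention like this trades a little raw engagement for the feeling of being observed, which is often enough to change behavior.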

Find out more about the author’s work at johnfallot.com
You can also follow John Fallot on Twitter and LinkedIn
