September 2019
Text by Dimiter Simov

Image: Allvisionn/istockphoto.com

Dimiter Simov (Jimmy) helps businesses and people make their software products understandable and easy to use. He spends most of his time as a usability consultant and interaction designer. He also has a background in quality assurance and technical communication. His career revolves around how people are expected to use software and devices.


https://about.me/dsimov  


https://twitter.com/dsimov 


https://www.linkedin.com/in/dsimov/ 

Cognitive biases in technical communication

We are nowhere near as rational as we might think. How we make decisions, hold conversations, and even write documentation or train Artificial Intelligence is greatly influenced by our previous knowledge and experience – which are by no means objective.

We all make decisions every day. Technical communicators are no exception. Should I accept that job offer and join the big team in a big enterprise? Or should I join the small team in the startup business and be the one and only person responsible for documentation? Or should I freelance instead? How much should I pay for my DITA-based authoring system? Should I add more details to the description of the feature that I am explaining? Should I rewrite the procedure based on the feedback from that user, or just ignore it? The product owner and a developer have read through the documentation and told me that it is OK, so do I still need to ask users to read it as well before launch?

The answers will differ, of course, depending on various factors. When faced with these questions, we would rationally consider the different factors and base our decisions on the pros and cons. Sounds reasonable. Yet, many times this is not how we go about finding the answer. Usually our subconscious is ahead of us – it has processed the information based on what it already knows, and the decision is there. We just follow. Unfortunately, sometimes we later realize that we have made a mistake. Our subconscious brain has acted too quickly and has led us in the wrong direction.

This predisposition of our brains to decide fast based on the available information and previous experience – and to err – is a known phenomenon called cognitive bias. We cannot eliminate this predisposition, but understanding that cognitive biases exist can help us make correct decisions more often.

Cognitive biases

The theory behind cognitive biases claims that they represent a limitation to objective thinking. These seemingly irrational patterns of thinking are caused by the tendency of the human brain to filter the information that it receives through personal experiences and preferences while attempting to make fast and risk-averse decisions. According to Wikipedia, "a cognitive bias is a systematic pattern of deviation from norm or rationality in judgement."

You can think of cognitive biases as shortcuts for making quick decisions without wasting much energy on processing information. The human brain is a very complex system that requires a lot of energy to work. It has evolved over millions of years and has helped us survive and become who we are today. During this time, it has learned to process and use information in a quick and efficient manner. Unfortunately, quick and efficient is not the same as accurate, so these shortcuts do not always lead us to correct decisions.

Cognitive science, social psychology, behavioral economics, and other sciences have been exploring and describing cognitive biases for years. As of today, the number of described cognitive biases exceeds 180 and is growing. You can find a list of cognitive biases on Wikipedia.

The Cognitive Bias Cheat Sheet classifies cognitive biases according to four factors:

  1. There is too much information in the world, but our senses cannot process everything, so we have found ways to filter it.
  2. There is not enough meaning in the information we have because it is not complete, so we have found ways to fill in the gaps with things we know or assume we know.
  3. The question "What should we remember?" is crucial because our brain is constrained by its capacity and by the speed at which it processes and stores incoming information, so we have found ways to store only the most essential. What we remember is what has made it through the filters in factor 1 and what fills in the gaps in factor 2.
  4. Often, we need to act fast because the world will not wait. We have limited time and information to make a decision, so we have found ways to make good decisions as quickly as possible. Our survival depends on this.

Whether we realize it or not, we all share these predispositions or, in other words, we are all biased. Technical communicators are no exception.

Here are a few examples of cognitive biases and how they affect technical communicators.

Halo effect

The halo effect is the tendency of humans to form opinions about people and things based on partial information collected through initial impressions and observations. For example, when we see attractive people, we tend to think of them as smart, happy, competent, and successful in their careers. We form our opinions based only on their appearance, without any other information. The same applies to websites, products, and even documentation. If we like them at first glance, we are more likely to think they are better and more functional. The halo is also transferable: If I think that a product is good, I will assume that the company that makes this product is good too. I am also very likely to think that other products by the same company are good.

What this means for technical communicators:

  • We might be more lenient with colleagues who have an attractive appearance.
  • Users might think better of our documentation when it is nicely formatted and visually appealing.
  • If our documentation is good, we will be indirectly raising users' opinion of the product and the company.

Confirmation bias

Confirmation bias is a tendency to seek and interpret information in a way that confirms our own knowledge and expectations. Let's say we are developing a chatbot for ordering pizza. We believe that pizzas should be savory, so we teach the bot to offer different cheeses, vegetables, and spices as ingredients – that is, the standard offer with commonly used products. Then, we take our bot to testing. We will view every person who uses the chatbot the way we expect as a successful example, which will deepen our belief that pizzas must be savory. As we do not believe in dessert pizzas, we will not teach the bot to offer chocolate, fruits, and sugars as ingredients. If people try to order a dessert pizza, e.g. a chocolate pizza, their attempts will fail. We will probably interpret these attempts as errors or view these users as people who are just challenging the bot. Most likely, we will ignore them.
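
To make this concrete, here is a minimal, hypothetical sketch of how such an assumption ends up in the bot. The ingredient list and the matching logic are invented for illustration; the point is that anything outside the developer's savory worldview is rejected before the bot even tries to understand it.

```python
# Hypothetical pizza-bot ingredient matching, for illustration only.
# The developer "knows" that pizzas are savory, so only savory
# ingredients ever make it into the vocabulary.
KNOWN_INGREDIENTS = {
    "mozzarella", "cheddar", "tomato", "mushroom",
    "pepper", "onion", "oregano", "basil",
}

def parse_order(requested_ingredients):
    """Split a request into recognized and rejected ingredients."""
    recognized = [i for i in requested_ingredients if i in KNOWN_INGREDIENTS]
    rejected = [i for i in requested_ingredients if i not in KNOWN_INGREDIENTS]
    return recognized, rejected

# A savory order succeeds and confirms the developer's belief...
print(parse_order(["mozzarella", "mushroom"]))  # (['mozzarella', 'mushroom'], [])

# ...while a dessert order fails. If failed orders are written off as
# user errors, the data that could challenge the belief never surfaces.
print(parse_order(["chocolate", "banana"]))     # ([], ['chocolate', 'banana'])
```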

What this means for technical communicators:

  • We are likely to ignore feedback that we do not agree with and pay more attention to feedback that confirms our beliefs.
  • When reviewing the work of a colleague, we might fail to see gaps when the content matches our own expectations and knowledge.

Curse of knowledge

The curse of knowledge is our tendency to assume that everyone around us has the same knowledge as we do and can therefore understand us. When we know something, we struggle to imagine that someone else does not know it. We tend to remember only the essentials and erase from our memory how we got to know a certain thing and what it was like not to know it. Take online banking, for example. It often contains information and terms known to people in the banking and financial spheres: bankers, accountants, and financiers. For non-professionals, on the other hand, it can be a challenge to differentiate between "balance" and "available balance".

What this means for technical communicators:

  • Developers, product owners, and other colleagues on your team might not be perfect reviewers for your content because, as product experts, they might not notice omissions.
  • We must be careful with what we omit from instructions, because what is obvious to us might not be obvious to our readers.

Bias in AI

So, we are biased, okay. But computers aren’t, right? Unfortunately, we can even pass on our biases to our products and services. This is particularly important when it comes to Artificial Intelligence (AI). So, how do we transmit our biases to the AI that we build?

Machines have no conscious need to survive. When there is a shortage of memory, people add more memory. When processing speed is insufficient, people increase the CPU power. When the battery is low, people plug in the power cord. Machines, including the ones running on Artificial Intelligence, are therefore not subject to the limitations that we see in humans, so machines by themselves cannot be biased. However, the people who create the machines are biased, and the machines depend on the algorithms, patterns, and data that people create and provide. Thus, people transfer their biases to AI.

Here is an example to illustrate this: In 2015, Andrej Karpathy, now director of Artificial Intelligence at Tesla, conducted an experiment as an Artificial Intelligence student at Stanford. His goal was to teach a machine to recognize good selfies. Karpathy used pictures of men and women of different ages and races. To determine which selfies were good, he used the number of "likes" each selfie received on social networks. A photo that received many "likes" was considered good.

For everybody with a basic knowledge of photography, the requirements for a good portrait are clear: The subject is in focus, the lighting is right, the head is not cut off, and the camera is not too close to the face. The result of Karpathy's experiment, however, told a different story: According to social media users, to take a good selfie you need to be a young white woman. All pictures classified as "good" by the AI algorithm turned out to be of young white women. If a picture did not meet this criterion, it was not in the category of "good selfies", even if the picture was perfect.

The data used to train the model was influenced by the preferences of the people who voted for the pictures. This is just one of many examples of how we influence the Artificial Intelligence systems that we create, how they resemble us, and how we need to be careful how we create and train them.
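
A minimal, hypothetical sketch of the labeling step makes the mechanism visible. The records and the threshold below are invented; the point is that the label never measures photographic quality, only the preferences of the people who clicked "like":

```python
# Hypothetical labeling step for a selfie classifier, for illustration only.
# Each record: (photo_id, like_count). The "good"/"bad" label comes purely
# from crowd votes, so whatever the crowd prefers becomes the definition
# of a good selfie.
raw_data = [
    ("selfie_001", 950),
    ("selfie_002", 12),
    ("selfie_003", 430),
]

LIKE_THRESHOLD = 400  # arbitrary cutoff, invented for this sketch

def label(like_count):
    # Nothing here looks at focus, lighting, or framing. A model trained
    # on these labels learns the voters' preferences, not photography.
    return "good" if like_count >= LIKE_THRESHOLD else "bad"

labeled = [(photo, label(likes)) for photo, likes in raw_data]
print(labeled)
# [('selfie_001', 'good'), ('selfie_002', 'bad'), ('selfie_003', 'good')]
```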

Unlike machines, people have ways of noticing their mistakes and can take action to fix them. A machine that is incorrectly trained cannot repair itself; human intervention is needed.

Ideally, we would not transfer our biases to machines at all. Instead of creating biased Artificial Intelligence and then correcting it, we would create unbiased AI. Unfortunately, we are not there yet, so we need methods of debiasing. Not much research has been conducted on how we pass our biases on to machines and how we can avoid or neutralize the effect. However, some guidelines are already in place. A team of researchers from the Czech Republic and Germany has examined 20 different cognitive biases and derived guidelines for neutralizing their effects when developing rules and models for machine learning. The full report is available here.
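
Until such guidelines become standard practice, even a basic audit of the training data can expose a problem early. The sketch below is a simple, hypothetical check that compares how often each group of subjects receives the positive label before any model is trained; the groups and counts are invented for illustration.

```python
from collections import defaultdict

# Hypothetical label-balance audit, for illustration only.
# Each record pairs a (made-up) subject group with its crowd-sourced label.
training_labels = [
    ("group_a", "good"), ("group_a", "good"), ("group_a", "good"),
    ("group_a", "bad"),
    ("group_b", "good"), ("group_b", "bad"), ("group_b", "bad"),
    ("group_b", "bad"),
]

positives = defaultdict(int)
totals = defaultdict(int)
for group, label in training_labels:
    totals[group] += 1
    positives[group] += (label == "good")

# A large gap between groups signals that the labels encode the voters'
# preferences, not the quality of the photos, before any training starts.
for group in sorted(totals):
    print(f"{group}: {positives[group] / totals[group]:.0%} labeled 'good'")
# group_a: 75% labeled 'good'
# group_b: 25% labeled 'good'
```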

Conclusion

Cognitive biases affect our work: how we approach tasks, design and write documentation, make videos, structure conversations, train AIs, and so on. Under their influence, we often think and act irrationally, overlook important facts, and leave gaps. As the effects of biases can range from mildly negative to disastrous, it is important to know how to mitigate them.

This article only scratches the surface. Explore the field of cognitive biases and train yourself and the people you work with to be aware of and understand cognitive biases.

The halo effect is strong, and it is not always easy to avoid its influence. To limit its impact, try blind reviews, in which you do not know who wrote the content under review.

Regarding confirmation bias, postponing the final decision and stepping away from the work for a while might help. Make sure you do not jump to conclusions in a hurry, and that you have time to review and edit your work.

Set explicit guidelines for writing and reviewing that make you argue both for and against a hypothesis. This will help to counteract the curse of knowledge as well.

Although there is no easy solution to the effect of cognitive biases in Artificial Intelligence systems, it is important to stay alert when creating such systems. We need to remember that these biases resemble us, their developers, because they are part of our vision of the world.

After all, we do not need documentation and smart machines filled with human prejudices and cognitive biases, do we?