5.3 Computing Bias

3 min read · June 18, 2024

Minna Chow


What is Bias?

As we've discussed throughout these guides, computing innovations can reflect existing biases.
Biases are tendencies or inclinations, especially those that are unfair or prejudicial. Everyone has their own biases, but certain biases, especially those based on someone's identity, can be actively harmful to society.
Biases exist in the world and in individuals. Computing innovations are built on data from the world around them, data that people choose to collect and feed into them. As a result, computing innovations can reflect those existing biases.

Examples of Bias

Bias can be embedded at all levels of development, from the brainstorming phase to the work done after release. It can be written into the algorithm itself or carried in the data the algorithm uses. Let's look at some examples.
  • Criminal risk assessment tools are used to determine the chances that a defendant will re-offend, or commit another crime. This information is then used to influence decisions across the judicial process.
    • However, these algorithms are trained to pick out patterns and make decisions based on historical data, and that data reflects long-standing biases against certain races and classes. As a result, risk assessment tools may disproportionately flag people from certain groups as risks.
  • Facial recognition systems are often trained on data sets that contain fewer images of women and minorities than of white men. These algorithms can be biased by exclusion: they're trained on data that isn't as diverse as it needs to be.
  • Recruiting algorithms used by companies to help them sort through large quantities of applicants can be biased against certain races or genders. For example, if successful candidates for a position have historically been men because only men applied, a recruiting algorithm might teach itself that male candidates are preferred over female ones (the sketch below shows this pattern in miniature).
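
To make that last example concrete, here is a minimal sketch in Python. The history records and the learned_hire_rate function are hypothetical, invented purely for illustration; real recruiting models are far more complex, but they can absorb historical patterns in exactly this way.

```python
# Hypothetical historical hiring records: (gender, was_hired).
# The past decisions favor one group, so anything "learned" from
# them will favor that group too.
history = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", False), ("female", False), ("female", True), ("female", False),
]

def learned_hire_rate(group):
    """Fraction of past applicants from `group` who were hired."""
    outcomes = [hired for gender, hired in history if gender == group]
    return sum(outcomes) / len(outcomes)

# The "model" simply reproduces the historical pattern:
print(learned_hire_rate("male"))    # 0.75
print(learned_hire_rate("female"))  # 0.25
```

A ranking built on scores like these would systematically favor male candidates, not because of merit, but because the history it learned from was already biased.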

What can we do to prevent bias in computing?

Luckily, people can take steps to combat these biases, and the first step is understanding and acknowledging that bias could exist. Here are some working suggestions for preventing biases:
  • Use diverse and representative data sets: Using data sets that are diverse and representative of the overall population can help to reduce bias in machine learning models (such as our facial recognition program above).
  • Review algorithms for potential biases: Carefully reviewing algorithms for potential biases, and testing them on diverse data sets, can help to identify and mitigate any biases that may be present.
  • Incorporate fairness metrics: Using fairness metrics, such as demographic parity or equal opportunity, can help to ensure that machine learning models do not produce discriminatory outcomes (see the sketch after this list).
  • Address human bias: It is important to be aware of the potential for human bias in the development and use of technical systems, and to actively seek out and address potential sources of bias.
  • Increase diversity in the tech industry: Increasing diversity in the tech industry can help to bring a wider range of perspectives and experiences to the development of technical systems, reducing the potential for bias.
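
Here is a minimal sketch of the two fairness metrics named above, again in Python with hypothetical data. The group, y_true, and y_pred lists are made up for illustration; demographic parity compares how often each group is selected, while equal opportunity compares how often truly qualified members of each group are selected.

```python
group  = ["A", "A", "A", "B", "B", "B"]   # protected attribute per applicant
y_true = [1,   0,   1,   1,   0,   1]     # actual outcome (1 = qualified)
y_pred = [1,   1,   1,   1,   0,   0]     # model's prediction (1 = selected)

def selection_rate(g):
    """P(selected) within group g; demographic parity compares these."""
    preds = [p for grp, p in zip(group, y_pred) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(g):
    """P(selected | qualified) within group g; equal opportunity compares these."""
    hits = [p for grp, t, p in zip(group, y_true, y_pred) if grp == g and t == 1]
    return sum(hits) / len(hits)

for g in ("A", "B"):
    print(g, selection_rate(g), true_positive_rate(g))
# A: selection rate 1.00, true-positive rate 1.0
# B: selection rate 0.33, true-positive rate 0.5
```

Large gaps between groups on either metric flag potentially discriminatory outcomes worth investigating before a model is deployed.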
By taking these actions, we're not only improving our programs but also benefiting society as a whole. After all, algorithms are written by people. Learning to find and eliminate bias in computers can help us find and eliminate bias in ourselves as well.