
Artificial Intelligence

AI is integrated into more and more of our lives. How can you use it safely at school?

Bias in AI Generated Content

We see biases reflected differently in the various forms of AI we’re currently working with.

For example, resume evaluation systems have down-ranked applicants who attended women's colleges, reflecting a historical bias against hiring both women in general and graduates of those schools. The result is a skewed system in which men's resumes are more likely to pass muster.

We also see data collection biases, where certain groups are poorly represented in the collected dataset. For instance, voice recognition systems perform poorly on higher-pitched (i.e., typically female) voices and on Scottish accents, because those speakers were statistically rare in the data originally collected to train the systems (Tatman, 2017).

Leffer, writing for Scientific American, discusses how humans absorb biases from AI. How might the now-banned predictive policing software in Santa Cruz, CA (encouraging increased policing in Black and Brown neighbourhoods, leading to false accusations against people of colour) have changed department officials’ biases over the time it was being used?
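The feedback loop behind that question can be made concrete with a toy simulation. This is a sketch under invented assumptions (the district names, rates, and numbers are all made up, and this is not a model of the actual Santa Cruz software): patrols are deployed wherever past records show the most incidents, and the patrols themselves generate new records, so a small initial skew grows even though the underlying incident rates are identical.

```python
# Toy feedback-loop sketch: all names and numbers are invented for illustration.
true_rate = {"downtown": 10, "uptown": 10}  # identical real incident rates
records = {"downtown": 12, "uptown": 10}    # slightly skewed historical records

for year in range(10):
    hotspot = max(records, key=records.get)  # software flags the "hot spot"
    records[hotspot] += true_rate[hotspot]   # extra patrols there log more incidents

print(records)  # {'downtown': 112, 'uptown': 10}
```

Because deployment follows recorded incidents rather than true rates, the district that starts with just two extra records ends up with more than ten times as many, while the other district's record never grows at all.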

Below is an image illustrating how bias enters AI systems, how those systems express bias at different points, and how those expressions amplify and reinforce the original biases, creating a vicious cycle of bias intensification.

[Figure: a flow chart showing bias from various sources entering AI systems, where algorithmic bias and bias in the data produce expressions of bias by the AI, resulting in real-world harms and an intensification of pre-existing bias.]

References

Hendrycks, D. (2024). Introduction to AI Safety, Ethics, and Society. Center for AI Safety. https://drive.google.com/file/d/1JN7-ZGx9KLqRJ94rOQVwRSa7FPZGl2OY/view

Leffer, L. (2023, October 26). Humans Absorb Bias from AI—And Keep It after They Stop Using the Algorithm. Scientific American. https://www.scientificamerican.com/article/humans-absorb-bias-from-ai-and-keep-it-after-they-stop-using-the-algorithm/

Tatman, R. (2017). Gender and Dialect Bias in YouTube's Automatic Captions. In D. Hovy, S. Spruit, M. Mitchell, E. M. Bender, M. Strube, & H. Wallach (Eds.), Proceedings of the First ACL Workshop on Ethics in Natural Language Processing (pp. 53–59). Association for Computational Linguistics. https://doi.org/10.18653/v1/W17-1606

Bias in the Human Operator

  • Confirmation bias (or affirmation bias)
    • The way you phrase a prompt can encourage an LLM to answer in line with your preconceived notions
  • Automation bias
    • Humans are quick to accept computer outputs as fact without further verification