Bias and AI Models

Carter Harrod
2 min read · Feb 24, 2021

In her book "Weapons of Math Destruction," Cathy O'Neil describes how bias in AI models is inserted by the people who create them. Intuitively, the whole point of developing statistical models and AI to guide decisions is to remove human subjectivity and bias from those decisions. O'Neil argues that because humans are the ones who build these machines and models, we insert our own biases into them as we make them. This in turn doesn't mean the bias goes away; it's just obfuscated behind the facade of technology. Another way bias can enter a model is through biased data. She argues that even if a model itself is designed to be "unbiased," it is trained on the biased data we supply. An example she gives in chapter 1 is a model designed to predict which offenders will commit new crimes, with sentences lengthened on that basis. The bias she observed is that the arrest records feeding the model were produced by police officers, humans who incorporate their own biases into whom they arrest. The model then absorbs this bias into its own decision making.
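
To make that mechanism concrete, here is a minimal sketch of how biased labels poison a model. This is my own illustration, not code from the book; the feature names and rates are invented for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying offense rates.
group = rng.integers(0, 2, n)       # e.g. a heavily vs. lightly policed neighborhood
offended = rng.random(n) < 0.10     # true behavior: 10% for everyone

# Biased labels: group 1 is policed more heavily, so its offenses are
# far more likely to show up as arrests in the training data.
arrest_rate = np.where(group == 1, 0.9, 0.3)
arrested = offended & (rng.random(n) < arrest_rate)

# Train a model to "predict reoffending" from the only feature we logged.
X = group.reshape(-1, 1)
model = LogisticRegression().fit(X, arrested)

# The model assigns group 1 roughly triple the risk, despite equal behavior.
print(model.predict_proba([[0], [1]])[:, 1])  # approximately [0.03, 0.09]
```

The model's coefficients faithfully summarize the arrest data it was given; the bias lives in how that data was generated, not in the math.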

O'Neil describes a model as the collection of data an agent uses to understand its surroundings. An example of a model would be an airplane AI using data about the sky: atmospheric pressure, airspeed, velocity, altitude, and so on. Data that wouldn't be incorporated into that model includes cars on a highway, subway station data, and the like. Sometimes these models are built intuitively, like the airplane model. More often, though, models are developed where the full scope of necessary data is unknown, leading to flawed models and flawed analysis. One example of such a flawed model was one used to weed out "bad" teachers. That model weighted test scores far too heavily relative to classroom reviews or other measures of teaching quality, and hundreds of teachers were unjustly fired as a result.
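
As a toy illustration of that last point (again my own sketch, not the actual scoring formula from the book, with hypothetical inputs and weights), notice how the choice of weights alone decides who looks "bad":

```python
from dataclasses import dataclass

@dataclass
class Teacher:
    name: str
    test_score_gain: float   # year-over-year change in student test scores
    peer_review: float       # 0..1, from classroom observation
    parent_rating: float     # 0..1, survey average

def composite_score(t: Teacher, w_tests: float = 0.9) -> float:
    """Hypothetical value-added score. The weight on test gains is a
    modeling choice, and it determines the ranking."""
    w_other = (1 - w_tests) / 2
    return (w_tests * t.test_score_gain
            + w_other * t.peer_review
            + w_other * t.parent_rating)

# A well-regarded teacher whose students' scores dipped one year,
# versus a poorly reviewed teacher whose scores happened to rise.
teacher_a = Teacher("A", test_score_gain=-0.1, peer_review=0.95, parent_rating=0.9)
teacher_b = Teacher("B", test_score_gain=0.2, peer_review=0.3, parent_rating=0.4)

# With test gains weighted at 90%, the well-reviewed teacher ranks far lower.
print(composite_score(teacher_a))  # 0.0025
print(composite_score(teacher_b))  # 0.215
```

Nothing in the arithmetic is wrong; the flaw is that the model's scope excludes, or barely counts, the data that actually reflects teaching quality.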
