Humans are complex, AI should be too

Even though artificial intelligence (AI) is integrated into professional, personal and educational spaces, that does not mean it is inclusive. Too often it is made with only one demographic in mind, and that is a problem.

In early September, The Travel, a travel news website, published an article covering concerns about potential bias in an emerging AI screening tool at the Canadian border. The tool is intended to identify potential flight risks, but it is biased.

Joy Buolamwini, author of the national bestseller Unmasking AI, has shown that facial recognition favours white faces.

The computer scientist and author spoke about AI in 2023 on NPR's Fresh Air podcast. She explained how artificial intelligence is trained on datasets. In the case of facial recognition, the system learns to detect patterns in its training images and uses those patterns to make predictions about new faces. Buolamwini found many of the patterns were based on white male standards.
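To see how this happens in miniature, consider the sketch below: a toy classifier is "trained" on synthetic data in which one demographic group supplies 90 per cent of the examples, then tested on both groups equally. The groups, features, numbers and the nearest-centroid classifier are all invented for illustration; this is not Buolamwini's methodology, only a minimal demonstration of how a skewed dataset skews the learned patterns.

```python
# Toy sketch (hypothetical, not Buolamwini's method): a classifier trained
# on data dominated by one group performs worse on the underrepresented one.
import random

random.seed(0)

def sample(group, label):
    """Synthetic 2-D 'face features'; each group occupies a different region."""
    base = {"A": (0.0, 0.0), "B": (3.0, 3.0)}[group]
    shift = 1.0 if label == 1 else -1.0  # the label nudges one feature
    return (base[0] + shift + random.gauss(0, 0.8),
            base[1] + random.gauss(0, 0.8))

# Training set: 90% group A, 10% group B -- the imbalance is the point.
train = [("A", y) for y in (0, 1) for _ in range(450)] + \
        [("B", y) for y in (0, 1) for _ in range(50)]
points = [(sample(g, y), y) for g, y in train]

def centroid(label):
    """'Training': average the feature vectors we happen to have per label."""
    pts = [p for p, y in points if y == label]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

c0, c1 = centroid(0), centroid(1)

def predict(p):
    """Assign whichever label's centroid is closer."""
    d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
    d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
    return 0 if d0 <= d1 else 1

# Balanced test set: the classifier now sees both groups equally.
for group in ("A", "B"):
    test = [(sample(group, y), y) for y in (0, 1) for _ in range(500)]
    acc = sum(predict(p) == y for p, y in test) / len(test)
    print(f"group {group}: accuracy {acc:.2f}")
```

Run as-is, the toy model scores well on the majority group and noticeably worse on the minority group, even though nothing about the minority examples is harder to classify. The only difference is who the training data represents.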

The patterns repeat biases.

Buolamwini also found this notion extended to age and beauty ideals. After looking at studies showing older women being misgendered by these tools more often than younger women, she examined different gender classification datasets. She found many of these sets pulled from celebrities who fit the young, thin, light-skinned ideal and rejected women who did not.

Sounds a lot like discrimination from a human.

Since artificial intelligence needs to be trained by humans to do its job, we must question the mindsets behind these creations. Minorities are rarely the default standard, so if creators rely on popular representation, they will disregard anybody who does not meet that benchmark.

This exclusion pushes not only bias but also stereotypes and misinformation.

In The Travel article, University of Toronto professor Ebrahim Bagheri compared the AI screening tool to COMPAS, a tool used in American criminal courts. COMPAS is supposed to predict the likelihood of a defendant committing another crime based on an algorithmic risk score, but in practice it has produced inaccurate predictions.

Pulling from these algorithms is a lot like leaning on stereotypes.

In 2016, ProPublica released an investigation showing Black defendants were far more likely to be labelled high risk and yet not re-offend, while white defendants labelled low risk went on to re-offend more often. Stereotypes cannot apply to specific people and circumstances.
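At its core, ProPublica's finding was a comparison of error rates between groups. The sketch below shows the general shape of such an audit on a handful of invented records; the field names and numbers are hypothetical and are not ProPublica's data or code.

```python
# Minimal audit sketch (hypothetical data, not ProPublica's analysis):
# given records of (group, predicted_high_risk, actually_reoffended),
# compare false positive rates by group -- the disparity ProPublica found.
from collections import defaultdict

# Invented records purely for illustration.
records = [
    ("black", True,  False), ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("white", False, True),  ("white", False, True),
    ("white", True,  True),  ("white", False, False),
]

fp = defaultdict(int)   # flagged high risk but did not re-offend
neg = defaultdict(int)  # everyone who did not re-offend

for group, predicted_high, reoffended in records:
    if not reoffended:
        neg[group] += 1
        if predicted_high:
            fp[group] += 1

for group in sorted(neg):
    print(f"{group}: false positive rate {fp[group] / neg[group]:.0%}")
```

With these invented records, the non-reoffenders in one group are flagged high risk far more often than in the other, which is the kind of uneven error rate that a single overall accuracy number hides.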

Artificial intelligence simplifies human complexity, building a full picture from shallow details and ignoring information that could be needed for accuracy.

It is important to program artificial intelligence from a variety of mindsets instead of idealistic datasets. If these machines are meant to help people of all demographics, then creators should factor in the shared experiences and characteristics of those demographics. This increases not only inclusion but also accuracy.

If artificial intelligence perpetuates systemic biases, it is not an advancement. It is a reinforcement of closed-minded beliefs.

Repeating patterns only reinforces stereotypes and creates misinformation. Excluding race and gender puts people into default boxes that don’t fit.

If AI tools are made to serve humans, they must honour the complexity of individuals.
