Human Influence on AI Learning

(This article was contributed by the SIT Geeks AISG Student Chapter)

Artificial intelligence (AI) is an interdisciplinary branch of computer science that focuses on creating machines programmed to mimic human intelligence. AI systems learn by combining large amounts of data with fast, iterative processing, allowing them to pick up patterns and features in the data automatically. This process is known as machine learning (ML), a subset of AI. Over the past decade, ML has evolved rapidly, bringing massive success in applications such as image recognition, recommendation systems and online advertising. ML is now also used in employment screening, the criminal justice system and intelligent virtual assistants (e.g. Siri and Alexa). However, because of data bias, ML may not serve these systems well. This article therefore looks at how human prejudices and biases affect how AI systems learn.

Data bias in machine learning is a type of error in which certain elements of a dataset are more heavily weighted and/or represented than others. In general, the data used for machine learning has to be representative of the real world, because this data is how the machine learns to do its job. However, real-world data carries many complications of its own, and these complications shape what the AI learns.
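To make this concrete, here is a minimal Python sketch (not part of the original article) of one simple way representation bias can be checked before training; the DataFrame, the gender column and the assumed population shares are purely illustrative.

```python
import pandas as pd

# Hypothetical training data for a screening model; values are illustrative only.
df = pd.DataFrame({
    "gender":   ["F", "M", "M", "M", "F", "M", "M", "M"],
    "approved": [1, 0, 1, 1, 0, 1, 0, 1],
})

# Share of each group actually present in the training data.
group_share = df["gender"].value_counts(normalize=True)

# Assumed real-world shares to compare against (an assumption for this sketch).
population_share = pd.Series({"F": 0.5, "M": 0.5})

# Positive values indicate groups that are under-represented in the data.
representation_gap = population_share - group_share
print(representation_gap.sort_values(ascending=False))
```

A gap like this does not by itself prove the resulting model will be unfair, but it is a cheap first check that the data reflects the population the system will be used on.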

Human bias

All datasets are flawed, which underlines the need for careful data preparation. These flaws are the result of subjective human interpretation. In recent years, society has started to wrestle with just how much these human biases can influence artificial intelligence systems, with harmful results. An example is AI recruitment software, which uses automation to help companies source candidates and screen large numbers of resumes while reducing repetitive manual tasks. Such software might sound like a great idea and a solution to tedious menial work. However, it is developed to match past human decisions, and it can do so with 90 to 95 percent accuracy, which means it also learns the biases behind those decisions.

In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination: the computer program it used to determine which applicants would be invited for interviews was found to be biased against women and against applicants with non-European names. More recently, in 2014, a team of software engineers at Amazon was building a program to review job applicants' resumes, only to realize that the system discriminated against women for technical roles.

As a result, Amazon recruiters stopped using the software because of these discrimination and fairness issues. Implementing an algorithm did not solve the problem of biased human decision-making, but neither will returning to human decision-makers.
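The mechanism behind such failures is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not Amazon's actual system: it trains a classifier to imitate biased historical hiring decisions and then shows that the model's predictions carry the same disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy synthetic data (illustrative only). Features: years of experience and a
# flag for an attribute correlated with gender. The historical labels were
# made by humans who disfavoured that attribute, so a model trained to match
# those labels learns the same pattern.
rng = np.random.default_rng(0)
n = 1000
years = rng.integers(0, 10, size=n)
flagged_attribute = rng.integers(0, 2, size=n)
hired = ((years >= 5) & (flagged_attribute == 0)).astype(int)  # biased historical labels

X = np.column_stack([years, flagged_attribute])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The learned model reproduces the historical disparity in its own predictions.
preds = model.predict(X)
for flag in (0, 1):
    rate = preds[flagged_attribute == flag].mean()
    print(f"predicted hire rate (flagged_attribute={flag}): {rate:.2f}")
```

Nothing in the code mentions gender or fairness; the bias arrives entirely through the labels the model is asked to match.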

Is bias all bad?

AI and ML are riddled with human biases, which is far from ideal. However, the fact that we are becoming increasingly aware of these issues forces us to confront these realities. In many cases, AI can reduce the subjective human interpretation of data, because ML algorithms learn to consider only the variables that improve predictive accuracy and are not subject to human fatigue or emotional error. This suggests that algorithms can improve decision-making and result in a fairer process.

For example, in a research paper, Jon Kleinberg and his co-authors showed that algorithms could help reduce racial disparities in the criminal justice system. Millions of times each year, judges have to make jail-or-release decisions. Using data from hundreds of thousands of cases in New York City, the researchers trained an algorithm to predict, from a defendant's rap sheet and court records, whether that defendant was a flight risk. When tested on over a hundred thousand further cases it had not seen before, the algorithm proved better than judges at predicting what defendants would do after release. Following the algorithm's advice could cut crime by defendants awaiting trial by as much as 25 percent without changing the number of people waiting in jail; alternatively, it could be used to reduce the jail population awaiting trial by more than 40 percent while leaving the crime rate by defendants unchanged. This is partly because judges have a broader set of preferences than the single variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities.
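For a sense of what building such a system involves, the sketch below trains a pre-trial risk model on synthetic case records and evaluates it on held-out cases. This is a hedged illustration under assumed data and feature names, not the method or data used by Kleinberg and his co-authors.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for historical case records; columns are hypothetical.
rng = np.random.default_rng(1)
n = 5000
prior_arrests = rng.poisson(2, size=n)
age = rng.integers(18, 70, size=n)
charge_severity = rng.integers(1, 5, size=n)

# Assumed outcome process: failure-to-appear risk rises with prior arrests.
p_fail = 1 / (1 + np.exp(-(0.4 * prior_arrests - 0.03 * (age - 18) - 1.0)))
failed_to_appear = rng.binomial(1, p_fail)

X = np.column_stack([prior_arrests, age, charge_severity])
X_train, X_test, y_train, y_test = train_test_split(
    X, failed_to_appear, test_size=0.2, random_state=0
)

# Train a risk model only on information available before trial.
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score unseen cases and check how well the ranking separates outcomes.
risk_scores = model.predict_proba(X_test)[:, 1]
print("held-out AUC:", round(roc_auc_score(y_test, risk_scores), 3))
```

Fitting such a model is the easy part; deciding how its scores should inform jail-or-release decisions, given judges' broader preferences, is the harder policy question the paper addresses.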

Conclusion

Many existing human biases can be transferred to machines because technologies are not neutral; they are only as good or bad as the people who develop them. At the same time, technological advances have produced several approaches for enforcing fairness constraints on AI models, and although human judgment remains necessary to ensure that AI-supported decision-making is fair, we should still embrace AI in the future.




Written by:

Lim Kok Fong, Jodie Moh, Magdalene Yip, Gideon Yip
– 1st EXCO, SIT Geeks AI Student Chapter

The views expressed in this article belong to the SIT Geeks AI Student Chapter and may not represent those of AI Singapore.

References

  1. https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses
  2. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
  3. https://academic.oup.com/qje/article-abstract/133/1/237/4095198
  4. https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans
