Explaining AI in a way that isn’t all gobbledygook

Getting on the property ladder is stressful enough for any first-time buyer, but throw in the complications of race, and the entire exercise can quickly turn into a nightmare. Wannabe homeowners in America have long been aware of this, but their suspicions were confirmed in 2021 when The Markup, a nonprofit publication that investigates tech-related issues, published a damning report.

After analysing more than two million home mortgage applications in the U.S., the report’s authors discovered that for every 100 applicants with similar financial characteristics, nine black applicants were rejected compared with five white applicants. By comparison, eight Native American, seven Latino and seven Asian applicants were unsuccessful in their loan requests. In short, the authors concluded, the artificial intelligence (AI) algorithms that many lenders rely on to arrive at such decisions are inherently racist.

AI algorithms have many applications in today’s world. For example, banks use such technology to screen suitable mortgage applicants. (Image credit: Alachua County)


The use of such systems is becoming increasingly ubiquitous, says computer scientist Jun Sun, a professor at Singapore Management University who studies how AI systems can be improved. In particular, those based on neural networks — AI pattern-recognition technology that mimics the workings of the human brain — are growing in popularity and use, thanks to their exceptional performance in solving many real-world problems. Bank loans aside, neural network models are used for facial recognition, language translation and self-driving cars, among other applications.

“Basically you feed a big bunch of data into the system and then somehow these neurons will reconfigure themselves into accomplishing a certain task,” says Sun. “This all sounds magical but neural networks are complex black-box models. There are always the questions of: How does it all work? Can we trust them?”

Seeking the answers to those questions was the fundamental driver that led Sun to apply for an AI Singapore research grant in 2019. Developing AI that is explainable is critical for many reasons, he says. Most importantly, it helps to build people’s trust in such technology. “In general, if we are able to understand any decision made by an AI system and what contributes to that decision-making, that will help us gain some trust that it is making the right decision,” says Sun.

This is incredibly important as AI predictions have tremendous potential to alter our lives, from whether we get to buy our dream homes to whether we receive a life-changing medical diagnosis.

Furthermore, explainable AI helps computer scientists build better algorithms, such as ones that are less biased against black borrowers. “If we want to improve certain systems, then we need to know what’s causing the results,” says Sun.

Keeping it simple

To Sun, explaining AI predictions using the classic computer science concept of abstraction appeared to be the most logical approach. Abstraction refers to the process of extracting only essential information from a complex system while omitting unnecessary data. “In the case of explaining AI, that’s exactly what we want to do — find simple concepts with only very relevant information that is meaningful to humans,” he says. “Human brains are very simple in the sense that we don’t do heavy computation well.”

“For example, if I give you a huge number of reasons why the bank rejected your loan, such as saying that the credit score is calculated according to this really complicated formula, you probably wouldn’t be able to make much sense of things,” says Sun. “But if I tell you simple facts such as loans are more likely to be approved if you have a stable job or if you’re young, it’s much easier to understand.”

AI systems can be highly complex. But in order for humans to trust the predictions they make, and for scientists to improve them, finding simple ways to explain such algorithms is imperative. (Image credit: Ars Electronica)

He offers up another example, this time of an algorithm analysing a picture and concluding it’s a car. The AI system might have measured a number of the car’s features, such as its depth or the angle at which its bonnet is curved, to make such a prediction. However, these are “really low-level details that wouldn’t be interesting to humans because we don’t notice these things,” he says. “But if you tell me something that’s more high-level, more human understandable — like how there are wheels — that’s a much better way of explaining the AI system.”

The trick is finding the ‘right’ level of abstraction. “There are many ways, many concepts that we can use to explain the same AI system,” says Sun. “But we have to find the small handful of concepts, or the right combination, that can explain the majority, say 90%, of the decisions being made. That’s what I mean by the right level.”
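To make that idea concrete, here is a hypothetical sketch, not drawn from Sun’s project, of one way to pick the smallest set of candidate concepts that together cover roughly 90% of a model’s decisions. The concept names and toy data are invented purely for illustration.

```python
# Hypothetical sketch: greedily pick the fewest human-level concepts that
# together explain ~90% of a model's decisions. All names and data are invented.

def pick_concepts(decisions, concept_matches, target_coverage=0.9):
    """decisions: list of decision ids; concept_matches: dict mapping a
    concept name to the set of decision ids that concept explains."""
    chosen, covered = [], set()
    while len(covered) / len(decisions) < target_coverage:
        # pick the concept that explains the most not-yet-covered decisions
        best = max(concept_matches, key=lambda c: len(concept_matches[c] - covered))
        gain = concept_matches[best] - covered
        if not gain:          # no concept adds anything new; stop early
            break
        chosen.append(best)
        covered |= gain
    return chosen, len(covered) / len(decisions)

# Toy example: 10 loan decisions and three candidate concepts
decisions = list(range(10))
concept_matches = {
    "has stable job":   {0, 1, 2, 3, 4, 5},
    "existing loan":    {4, 5, 6, 7, 8},
    "income below 50k": {8, 9},
}
concepts, coverage = pick_concepts(decisions, concept_matches)
print(concepts, f"{coverage:.0%}")  # a small handful of concepts covering ~90%
```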

A better understanding

Over the course of three years, Sun worked with his co-principal collaborator Lu Wei from the Singapore University of Technology and Design as well as researchers from Zhejiang University, Huawei, and elsewhere to create human-understandable, explainable AI systems using the concept of abstraction.

In one instance, they created a model based on decision trees which could explain, for example, that a person’s bank loan was rejected by the AI system primarily because she had a pre-existing loan and an annual income below $50,000. These factors became evident because other applicants with similar earning power, but without existing loans, were approved.
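As an illustration of the general technique of fitting a readable decision tree to a black-box model’s own outputs, the sketch below uses scikit-learn with an invented stand-in for the bank’s model; it is not the team’s actual code.

```python
# Minimal sketch of a decision-tree surrogate explanation (not the team's code).
# A stand-in "black box" labels synthetic loan applications; a shallow tree is
# then fitted to its predictions so the decision rules can be read by a human.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
income = rng.uniform(20_000, 150_000, n)   # annual income in dollars
has_loan = rng.integers(0, 2, n)           # 1 = has a pre-existing loan
X = np.column_stack([income, has_loan])

# Invented rule standing in for the bank's real model:
# reject (1) if the applicant already has a loan AND earns under $50,000.
black_box = lambda X: ((X[:, 1] == 1) & (X[:, 0] < 50_000)).astype(int)
y = black_box(X)

# Fit a shallow, human-readable surrogate tree to the black box's decisions
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(surrogate, feature_names=["annual_income", "has_existing_loan"]))
```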

Sun and his collaborators also developed a toolkit that can be applied to existing AI systems to measure how explainable their predictions are. The researchers then took things a step further, retraining some models using samples generated by their own algorithms to see if that improved decision explainability without significantly reducing the accuracy of predictions. The results, Sun says, were highly encouraging.
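One plausible way to quantify how explainable a system’s predictions are, offered here only as an assumption about what such a toolkit might measure, is fidelity: how often a simple explanation model agrees with the original system on fresh inputs. A minimal, self-contained sketch with invented stand-in models:

```python
# Hypothetical sketch of a fidelity metric: the fraction of inputs on which a
# simple explanation model agrees with the original black-box system.
import numpy as np

def fidelity(black_box_predict, surrogate_predict, X):
    """Agreement rate between the black box and its explanation on inputs X."""
    return float(np.mean(black_box_predict(X) == surrogate_predict(X)))

# Toy usage with invented stand-in models (1 = reject a loan application)
X = np.column_stack([np.random.default_rng(1).uniform(20_000, 150_000, 500),
                     np.random.default_rng(2).integers(0, 2, 500)])
black_box = lambda X: ((X[:, 1] == 1) & (X[:, 0] < 50_000)).astype(int)
surrogate = lambda X: (X[:, 0] < 50_000).astype(int)  # a cruder, simpler rule
print(f"fidelity: {fidelity(black_box, surrogate, X):.1%}")
```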

Overall, the past three years have been incredibly fruitful, he says. “Explaining AI was never really an easy problem to start with, but we can honestly say that after this project, we have a much better understanding of AI technology in general and how to meet people’s expectations.”

But doing so also revealed that the problem of explaining AI well is “much deeper than we can solve,” he adds. Some systems are just “really hard to explain.” On top of that, an ongoing challenge in the field is that AI models are getting ever more complicated. Take for instance ChatGPT, the revolutionary new AI chatbot that can write essays so well that some schools have taken to banning the technology. “If you look at the number of parameters it uses, it’s really huge,” says Sun, “meaning that the model is really, really complicated.”

“Coming up with meaningful concepts for such big models is basically impossible right now,” he adds. “But that’s always a good thing in terms of research — it means that there’s work to be done to solve this problem again for big models.”

That’s partly the reason why halfway through the project, Sun decided to take the findings he obtained and apply for a larger grant from the Ministry of Education, Singapore. The new funding, which lasts for five years, will allow Sun and his team to study big AI systems, including their safety, security, and other aspects.

Reflecting on his work and career, Sun says: “All in all, I’m a system analysis guy. We try to make the world a bit better by looking at all these AI computer systems in a bigger context to make them safe and reliable. That’s our goal here.”
