5 things to know about XAI

As AI becomes an increasingly important part of our daily lives, we need to know how it behaves and why it arrives at the predictions it makes. These questions are the subject of the emerging field of Explainable AI, or XAI. According to the 22nd annual CEO survey by PwC, the vast majority (82%) of CEOs agree that AI-based decisions need to be explainable in order to be trusted. XAI has the potential to make AI models more trustworthy, compliant and performant. This can in turn drive business value and accelerate AI adoption.

But what exactly is XAI?

XAI is essentially AI that can show its human operators how it arrived at its conclusions. There are XAI methods and frameworks you can adopt to help you understand the rationale behind the predictions made by your machine learning model, so that you can assess the trade-offs in its decision-making process.
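To make this concrete, here is a minimal sketch of one common post-hoc XAI technique, permutation feature importance, using scikit-learn. The dataset and model below are illustrative assumptions rather than a recommended setup; the same approach works with any fitted estimator.

# Minimal sketch of post-hoc explanation via permutation importance.
# The dataset and model here are illustrative assumptions, not a prescribed choice.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the held-out score drops:
# large drops indicate features the model leans on most for its predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")

Techniques like this give a global view of which inputs the model relies on; other methods, such as local surrogate explanations, focus instead on individual predictions.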

To trust the decisions of AI systems, we as humans must be able to understand how those decisions are being made. This is especially critical as we progress to third-wave AI systems, where machines can understand context and adapt accordingly.


What to know before embarking on XAI

You may still be sceptical about the value of XAI, or you may already be eager to invest in it. As with all technology solutions, we believe the key lies in understanding how the solution will work for you, rather than adopting it blindly. Here, we share 5 things you need to know about XAI.


1. Explainability is not transparency, but it is part of achieving transparency

“Transparency leads to trust” has become a truism in the AI scene. As the global community seeks to develop governance norms for the use of AI, transparency is being touted as a key qualifier for building trust. In 2019, the EU Commission’s High-Level Expert Group on AI (AI HLEG) listed transparency as one of seven key requirements for the realisation of ‘trustworthy AI’. This was emphasised again in the Commission’s white paper on AI, published in February 2020.

The dimensions for measuring AI transparency are still being debated. But what does transparency mean for your business and for the AI solution you are currently developing?

The key is to understand what is hindering transparency and trust for you. We know that unexplainable machine learning models are a problem, and XAI can help with that. But other issues can also erode trust in the model, such as a lack of visibility into training data sets, an inability to trace model provenance, and delays in identifying model degradation.

XAI can help open up the black box of your AI system, but remember that other factors also affect the level of transparency needed to build trust with your stakeholders.

2. Be clear about your business objective for XAI before embarking on it

Before deciding how you want to implement XAI, do you know WHY you want it? Is it to achieve compliance with existing regulatory frameworks? To get ahead of the curve and find the sweet spot between governance and innovation as trends point towards greater AI regulation? To reduce specific biases in your AI algorithm? Or to mitigate the risk of a major reputational dent if and when an unexplained bias occurs?

Understanding your business objective for XAI will help you be more precise about which aspects of XAI you need to invest in.


3. Understand what constitutes a good explanation from your audience’s perspective 

A key question in XAI is: who are you making the AI system more transparent for?

Most discussions on XAI provide a general sense of what constitutes a ‘good’ explanation. While this is a useful baseline, go a step further and identify the key stakeholders who expect explainability from you. Some helpful questions to consider are:

  • What is their current level of understanding of AI?
  • What are their expectations for explainability and transparency? What is the difference in expectations between stakeholders such as consumers, business leaders and regulators?
  • What are their cognitive biases towards AI?
  • How best do they receive information?

Understanding where they are at and what they expect will help you create readable explanations that meet your stakeholders’ needs.

4. Recognise the trade-off between performance and explainability

Different AI systems provide varying levels of explainability and performance. Generally, there is an inverse relationship between the prediction accuracy of an algorithm and its level of interpretability. For example, deep learning models can be more performant but are generally less interpretable than regression models.
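As a rough illustration of that trade-off, the sketch below (using scikit-learn on an illustrative dataset) fits a logistic regression, whose scaled coefficients can be read directly, alongside a gradient-boosted ensemble, which often scores higher but spreads its decision logic across hundreds of trees and therefore needs post-hoc explanation tools.

# Minimal sketch of the performance/explainability trade-off.
# Dataset and model choices are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: each scaled coefficient shows a feature's direction and weight.
linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
linear.fit(X_train, y_train)

# Often more accurate, but its decision logic is spread across many trees.
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", linear.score(X_test, y_test))
print("gradient boosting accuracy:  ", boosted.score(X_test, y_test))

# Read the linear model's explanation directly from its largest coefficients.
coefs = linear.named_steps["logisticregression"].coef_[0]
for i in abs(coefs).argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {coefs[i]:+.2f}")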

It is also helpful to determine the type of explanations you need: some explanations help you understand the data better, while others help you understand the model. If the liability attached to your AI predictions is low, you may not need a high level of explainability.

Work with your data scientist to understand the trade-offs between model performance and explainability, and choose the system that best meets your business objective.

5. XAI alone is not enough to build trust

Lastly, XAI must be complemented with robust governance over the operation of the AI system. This includes MLOps practices such as data versioning, code versioning, model versioning and audit trails. Investing in this broader governance architecture will allow you to improve the reproducibility of your AI systems and accelerate business growth with AI.
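As a simple illustration of what such governance can look like in practice, the sketch below records an audit-trail entry for each trained model version: a hash of the training data, the code revision and the evaluation metrics. All paths, field names and numbers here are hypothetical; dedicated MLOps tooling typically handles this at scale.

# Minimal sketch of an audit-trail entry for a trained model version.
# Field names, paths and the example call are illustrative assumptions.
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def file_sha256(path: str) -> str:
    """Fingerprint a data file so the exact training set can be traced later."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def git_revision() -> str:
    """Record which version of the training code produced the model."""
    return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()

def log_model_version(model_name: str, data_path: str, metrics: dict,
                      log_path: str = "model_audit_log.jsonl") -> None:
    entry = {
        "model_name": model_name,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "data_sha256": file_sha256(data_path),
        "code_revision": git_revision(),
        "metrics": metrics,
    }
    # Append-only JSON Lines log: one record per trained model version.
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage:
# log_model_version("churn-model", "data/train.csv", {"auc": 0.91})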

Going forward, people and governments will demand more explanation, and there is no ‘one size fits all’ AI solution. So, what should you focus on in 2021? Stay up to date with ongoing discussions on AI transparency, and work with your AI team to translate them into meaningful insights for your stakeholders.