Responsible AI: Navigating policy and innovation in AI

In recent years, advances in AI applications and their market potential have put the spotlight on the opportunities AI holds for society. We hear promises of AI upgrading jobs, conserving energy, pre-empting cancer and more.

On the other hand, high-profile AI blunders have called into question the impact of AI on fundamental rights such as privacy, and the moral responsibilities of the tech companies that create these AI products. Academics, technologists and the public have raised questions about the ethical boundaries that should guide the use of AI.

From philosophical debate to technical fixes, the conversation is now shifting towards policy, standards and regulation. The defining question is: how might we use AI responsibly?

Different perspectives on Responsible AI


The challenge is that responsible AI means different things depending on your vantage point.

From a policy maker’s perspective, responsible AI is about “accountability”: all stakeholders in the development and deployment of AI must have defined roles and should act responsibly to the best of their ability. To a regulator on the cusp of defining legal implications, responsibility is liability. For an enterprise straddling AI innovation and the business bottom line, responsibility is a fiduciary duty. To a person concerned about potential harms from AI, responsibility is an absolute necessity.

How do we reconcile these different interests to ensure AI is used responsibly and spurs better innovation?

At the heart of responsible AI is mutual trust. But what is trustworthy to you may not be trustworthy to me, and what you perceive to be trustworthy may not be trustworthy in reality. This does not mean we should dismiss trust as a lofty, amorphous concept confined to governance reports. But it does mean we need to translate trust from an ethical aspiration into descriptions and actionable steps that can be commonly agreed on.

BasisAI participates in Open Loop 

In July 2020, BasisAI was invited to be the dedicated private-sector technical assistance partner of Open Loop, a global experimental governance programme that sought to make AI governance more practical. The programme recruited 12 AI companies from the Asia-Pacific region for a six-month prototyping exercise. Under the mentorship of BasisAI, participants used BasisAI's proprietary machine learning platform, Bedrock, to develop their own explainable AI (XAI) solutions in response to Singapore’s AI governance framework.

Through Open Loop, we saw a strong commitment to building trust as a hygiene factor of AI governance. Open Loop provided common ground for Singapore's Infocomm Media Development Authority (IMDA) and private-sector stakeholders to apply AI governance principles in practice.

What trust in AI means in practice

Some proponents of responsible AI have pushed for explainability as the basis of trust in AI. Can AI solutions be made technically more explainable so that business owners, data scientists and risk owners can make more nuanced trade-offs? Can AI solutions be communicated in terms that end users understand?

Explainability may help, but other dimensions are just as critical: the robustness and security of AI systems in the face of cyber attacks, or a process for adopting fairness criteria so that unintended discrimination is avoided.

What is clear to us is that the quality of the machine learning (ML) system matters. Depending on the business objectives, cultural context and customer expectations, there are different dimensions of trust that the ML system can enable.

How to build capacity for responsible AI


At BasisAI, we sit at the intersection of AI innovation and governance. We understand the importance of translating AI governance into policy tools that protect the interests of end users. Importantly, we know how to help enterprises scale AI in real time.

We think there are 3 things enterprises can do to build capacity for responsible AI: 

Start with the end in mind

It’s easy to get lost in the search for the most explainable, robust or fair AI algorithm. Instead, work backwards from the explanation you will need to give. The UK Information Commissioner's Office (ICO) has identified six different types of explanation:

  1. Rationale explanation: the reasons that led to a decision, delivered in an accessible and non-technical way. 
  2. Responsibility explanation: who is involved in the development, management and implementation of an AI system, and who to contact for a human review of a decision. 
  3. Data explanation: what data has been used in a particular decision and how; what data has been used to train and test the AI model and how. 
  4. Fairness explanation: steps taken across the design and implementation of an AI system to ensure that the decisions it supports are generally unbiased and fair, and whether or not an individual has been treated equitably. 
  5. Safety and performance explanation: steps taken across the design and implementation of an AI system to maximise the accuracy, reliability, security and robustness of its decisions and behaviours. 
  6. Impact explanation: the impact that the use of an AI system and its decisions has or may have on an individual, and on wider society. 


Knowing which type of explanation you need to give will help you identify the specific areas of the AI lifecycle to focus on when developing explainability.
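To make the first type concrete, here is a minimal sketch of a rationale explanation, assuming a simple scikit-learn logistic regression on synthetic loan data; the feature names, data and wording are illustrative assumptions, not part of the ICO guidance or any BasisAI product.

```python
# A minimal sketch of a "rationale explanation": surface the top feature
# contributions behind one prediction of a linear model in plain language.
# The synthetic data, feature names and wording are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed", "missed_payments"]

# Synthetic applicants: 4 features, binary approve/decline label.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def rationale(applicant, top_k=3):
    """Return the top_k features pushing this applicant's prediction up or down."""
    z = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * z  # per-feature contribution to the log-odds
    order = np.argsort(-np.abs(contributions))[:top_k]
    return [
        f"{feature_names[i]} {'raised' if contributions[i] > 0 else 'lowered'} "
        f"the approval score by {abs(contributions[i]):.2f}"
        for i in order
    ]

print(rationale(X[0]))
```

Even a simple contribution breakdown like this forces the team to decide which features are legitimate drivers of a decision and how to describe them to a non-technical audience.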

Build in compliance by design

The landscape for AI regulation is evolving rapidly. From soft law to auditing frameworks, enterprises that start embedding AI governance principles into their AI systems now will be better placed to adapt as the regulatory tools mature.

For a start, create an environment that encourages multidisciplinary teamwork. Bring technical and non-technical stakeholders together to explore the possible trade-offs in your AI system before building it, and analyse both business and technical methods for managing those trade-offs. As you develop and deploy the AI system, put in place oversight for ongoing monitoring of system performance. Google’s Model Card Toolkit is designed to facilitate AI model transparency reporting for developers, regulators and downstream users. Also, check out how Bedrock enables AI governance.
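As a hedged illustration of transparency reporting, the sketch below scaffolds and exports a model card with the open-source model-card-toolkit Python package; the model name, owner and limitation text are placeholders, and field and method names may differ slightly between package versions.

```python
# A minimal model-card sketch using Google's Model Card Toolkit.
# All field values are placeholders; API details may vary by package version.
import model_card_toolkit as mctlib

toolkit = mctlib.ModelCardToolkit("model_card_output")

# Scaffold a card and fill in the fields reviewers and regulators look for.
model_card = toolkit.scaffold_assets()
model_card.model_details.name = "Loan approval classifier (example)"
model_card.model_details.overview = (
    "Scores loan applications; every decision is reviewed by a credit officer."
)
model_card.model_details.owners = [
    mctlib.Owner(name="Risk & data science team", contact="ml-governance@example.com")
]
model_card.considerations.limitations = [
    mctlib.Limitation(
        description="Trained on historical approvals; may under-serve applicants "
                    "with thin credit files."
    )
]

# Persist the card and export an HTML report for non-technical stakeholders.
toolkit.update_model_card(model_card)
html = toolkit.export_format()
```

Keeping a card like this under version control alongside the model gives compliance reviewers a single artefact to audit as regulation matures.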

Develop a set of metrics to test your AI system


Handing an AI governance document to a data scientist is like handing a restaurant management guide to a chef. The chef might be able to interpret it to some extent, but truly aligning the chef with the restaurant owner requires the owner to define some intermediate steps. Similarly, when developing and deploying AI systems, we need to help data scientists connect principles to practice by defining metrics.

Take fairness, for example. It broadly means avoiding unfair bias, but there are at least 21 different definitions of fairness. The same outcome can be considered fair according to some definitions and unfair according to others, and the concept of fairness also evolves over time.

To get past this theoretical block, have a strategy for selecting fairness metrics that are aligned with the business outcomes. The fairness decision tree from Aequitas, an open-source bias audit toolkit, is a helpful starting point. Based on the fairness criteria, a suitable metric (or a few metrics), such as equal opportunity or statistical parity, can be selected to test the extent of fairness in a decision outcome. This breaks the principle of fairness down into something practical and actionable for data scientists, while giving decision makers a tool to stay accountable for decisions made by an AI.
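As a sketch of what this looks like in code, the snippet below computes statistical parity difference and equal opportunity difference by hand with NumPy; the outcome, prediction and group arrays are made up for illustration, and a real audit would typically run a toolkit such as Aequitas over production data.

```python
# A minimal sketch of two common fairness metrics, computed by hand.
# y_true: actual outcomes, y_pred: model decisions, group: protected attribute.
# The arrays below are illustrative, not real data.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rate(pred, mask):
    """Share of people in the group who received a positive decision."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Share of truly positive people in the group who were approved."""
    positives = mask & (true == 1)
    return pred[positives].mean()

a, b = group == "A", group == "B"

# Statistical parity: positive decisions should be equally likely across groups.
statistical_parity_diff = selection_rate(y_pred, a) - selection_rate(y_pred, b)

# Equal opportunity: qualified people should be approved at equal rates.
equal_opportunity_diff = (true_positive_rate(y_true, y_pred, a)
                          - true_positive_rate(y_true, y_pred, b))

print(f"Statistical parity difference: {statistical_parity_diff:+.2f}")
print(f"Equal opportunity difference:  {equal_opportunity_diff:+.2f}")
```

In this toy example the two groups are selected at the same rate, so statistical parity holds, yet their true positive rates differ; that is exactly how an outcome can be fair by one definition and unfair by another.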

If used well, AI has the potential to make enterprises more innovative and effective. But it also raises risks and compliance challenges. For AI adoption to mature, we need a sound and reasonable set of policy tools to promote and, where appropriate, limit the scope and impact of AI applications. And we need to build AI systems that are designed to empower humans to use them responsibly.