RAI: What is Responsible AI?

If data is the new oil, then “trust in technology” is the volatile asset that needs to be monitored and managed. According to the 2020 Edelman Trust Barometer Special Report: Trust in Technology, trust in tech globally has dropped 4 percent year-on-year.

Despite the wave of ethical AI principles and guidelines published between 2016 and 2019, big technology players such as Apple have been hit by AI blunders. Governments have not been spared either, the most recent case being the UK Home Office scrapping its algorithm for visa applicants after it was deemed to be racist. In 2019, Forrester predicted that there would be three high-profile AI-related blunders in 2020, but we have already exceeded that number.

It is not surprising that we are seeing 'Responsible AI', or RAI, emerge in light of the wavering confidence in technology for social good.

Double-edged sword of AI

Previous technological revolutions have altered the way we live, work and relate to each other. Likewise, AI has the power to improve and disrupt lives. Where there’s social transformation, there will also be ethical implications for mankind to navigate.

Take deepfake technology as an example. The assumption that “what is realistic is real” is now being questioned. What you see and hear may not be what it truly represents. It may be entertaining to watch a deepfaked Harrison Ford in Solo: A Star Wars Story, but it is disconcerting to know that the same technology can also threaten democratic discourse. Deepfake technology is challenging the reliability and validity of audiovisual communication, which we have traditionally relied on as a marker of truth.

Another example is the trade-off between data privacy and public good in the use of AI to fight COVID-19. Contact tracing has been a key strategy that many countries have used to contain the spread of the disease. Depending on the data collection mechanism, you may view contact tracing as either a problem or a panacea. In the centralised model, location data is collected via mobile phones and stored in a centrally run database. In a decentralised model, the data is stored on the user’s phone or a token. There are concerns that contact tracing leads to an over-collection of personal data, resulting in a greater risk of data privacy breaches and sinister surveillance. On the other hand, an overemphasis on privacy could impair the ability to gather information essential for effective contact tracing, impeding efforts to limit the outbreak. There are no clear-cut answers to navigating these ethical boundaries.

Trend towards greater AI regulation

In both of these examples, the evaluation of the opportunities vis-à-vis dangers of AI is only meaningful when discussed in the context of its interaction with human beings. As a technology, AI is a set of techniques and algorithms that help to mimic human intelligence. Algorithms are ethically neutral. But the human beings designing and using the algorithms hold moral positions and are subject to biases.

As the momentum for AI adoption increases, there is greater awareness of the risks associated with deploying AI systems that violate legal, ethical, or cultural norms. Emerging signs point towards greater AI regulation to manage these downsides. The EU has begun developing regulatory proposals and guidelines, such as the Artificial Intelligence White Paper released by the European Commission in February 2020, which examines risks posed by AI systems on top of the personal data and privacy risks addressed by the EU General Data Protection Regulation (GDPR). While the US takes a more laissez-faire attitude to AI regulation domestically, there are similar signals towards greater accountability. The California Consumer Privacy Act of 2018 (CCPA) is the first comprehensive consumer data privacy law in the US, albeit at state level. New York City passed its first algorithmic accountability bill in 2017, establishing a task force to examine how city government agencies use algorithms. The proposed Algorithmic Accountability Act of 2019 could extend algorithmic accountability nationwide by empowering the Federal Trade Commission to issue new regulations where necessary.

While some believe that turning these guidelines into laws is the logical next step, others are pushing for soft-law solutions such as self-regulation and voluntary practices backed by robust auditing standards. Either way, humans will be held accountable for how AI is used, and algorithmic bias is already a significant concern. AI is expected to be used responsibly, and enterprises must take steps to meet that expectation now.

Responsible AI as a proactive stance

When AI starts to make decisions the way human beings do, there is a need to ensure that these decisions are explainable, fair and made responsibly. Yet the majority of AI adopters today rely on third parties to develop and deploy AI, which makes it challenging for them to own the narrative on the benefits and downsides of using AI. The probabilistic, nondeterministic nature of AI development within a complex AI supply chain adds further to the opacity of these systems.

The key is for enterprises to build and deploy responsible AI systems from the start, to minimise risk and pre-empt unintended AI biases. In the 2020 Edelman Trust Barometer, 54% of respondents said communicating the downsides of emerging technology would ultimately increase their trust. In a data-driven economy, trust is a prerequisite for success. Responsible AI is the means to bridge the trust gaps that impede AI adoption and acceleration.

Responsible AI is a way to enable governance-by-design

Responsible AI is often used as an umbrella term for different practices undertaken to improve our trust in AI, such as explainable AI, MLOps and compliance. There are many responsible AI principles, toolkits and frameworks available that articulate a process to make visible the implications of using AI.

At the heart of Responsible AI is a vision to build robust and trusted systems that answer three questions:
1. Are decisions explainable?
2. Are decisions made by the AI ethical and fair?
3. Are you able to trace and review the decision-making components used to build the AI system?

What you will need in a Responsible AI toolkit

Responsible AI needs to sit at the intersection of decision-makers and the AI system in order to be actionable. The aspirations of explainability, fairness and responsibility need to be articulated in terms that make sense to those who have a stake in, or are affected by, the AI system, such as chief technology officers, data scientists and consumers.

Here we propose five tools that should be part of every responsible AI toolkit:

1. Understand what makes a good explanation for your audience
Explainability is the extent to which the internal mechanics of an algorithmic system can be explained in human terms. The depth of explanation required must match the audience and its needs and expectations. For example, a consumer will most likely value comprehensibility over comprehensiveness. An auditor, on the other hand, will expect reasonable fidelity and specificity in the explanation of the steps leading up to a decision.
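
To make this concrete, below is a minimal sketch of how the same model might be explained at two levels of depth. It uses scikit-learn’s permutation importance as one illustrative technique (the article does not prescribe any particular method), and the feature names and data are invented for the example.

```python
# A minimal sketch of audience-appropriate explanations, assuming a
# scikit-learn classifier for loan approval. Feature names and data
# are illustrative, not from any real system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["income", "credit_history_len", "existing_debt", "age"]
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# For an auditor: a global view of which inputs drive decisions overall.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: mean importance {score:.3f}")

# For a consumer: a short, comprehensible summary of a single decision.
applicant = X[:1]
proba = model.predict_proba(applicant)[0, 1]
print(f"Approval score: {proba:.2f} (decision threshold 0.50)")
```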

2. Pre-empt unintended bias in the system
Fairness is a major challenge to using AI at scale. As a concept, fairness is often strongly associated with being “bias-free”. However, this may not be realistic, as the source of bias often lies in the input data that humans collect and select. That bias can then surface as algorithmic bias or reinforce existing human bias. From a research standpoint, there are at least 21 mathematical definitions of fairness! On top of this, every organisation operates in a unique cultural context.

Instead of searching for a perfect definition of fairness, there needs to be room for enterprises to pick their own definition and focus on the desired outcome: pre-empting unintended bias.

What does it mean to select an appropriate definition of fairness?

Unfairness happens when different groups, for example men and women, are treated differently. Imagine a group of men and women applying for student loans to fund a university education.

The “equal treatment” definition of fairness says that the proportion of men and women who apply and get approved must be the same, because loans are essential to accessing education. One objection is that this does not take into account the risk that an individual will not complete their education and repay the loan. If, hypothetically, men default more often and are therefore higher risk, we shouldn’t expect the same loan approval rate. Instead, the “equal opportunity” definition of fairness would consider the decision fair if the approval rate is equalised among the men and women who go on to graduate and repay. However, if we changed the context to admission into foundational primary schools, the “equal treatment” definition of fairness would be the more appropriate principle, since access to early-years education is critical to future wellbeing.
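
To illustrate how the two definitions diverge in practice, here is a minimal sketch that scores a set of toy loan decisions against both. The data, group labels and gap metric are invented for the example; in a real system these would come from the enterprise’s own records and its chosen definition of fairness.

```python
# Illustrative check of "equal treatment" vs "equal opportunity" on toy
# loan data. Column names and numbers are made up for the sketch.
import pandas as pd

df = pd.DataFrame({
    "group":    ["men"] * 6 + ["women"] * 6,
    "approved": [1, 1, 1, 0, 0, 0,   1, 1, 0, 0, 0, 0],
    "repaid":   [1, 1, 0, 1, 0, 0,   1, 1, 1, 0, 0, 0],  # observed outcome
})

# Equal treatment (demographic parity): approval rate per group.
approval_rate = df.groupby("group")["approved"].mean()
print("Approval rate by group:\n", approval_rate)

# Equal opportunity: approval rate among applicants who actually repaid.
repayers = df[df["repaid"] == 1]
opportunity_rate = repayers.groupby("group")["approved"].mean()
print("Approval rate among repayers by group:\n", opportunity_rate)

# A simple gap an enterprise might track against its chosen definition.
print("Equal-treatment gap:", abs(approval_rate["men"] - approval_rate["women"]))
print("Equal-opportunity gap:", abs(opportunity_rate["men"] - opportunity_rate["women"]))
```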


3. Design the system for scalability and governance
Productionising AI is a complex process involving multiple parties and stages that are often not integrated. It requires software engineering and big data capabilities, good project management, and a common language to bridge different areas of expertise.

We think MLOps is extremely promising in providing a technology “paved road” for data scientists and IT operations teams to work together throughout the lifecycle of a machine learning model.
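
As a rough illustration of what that paved road can look like, the sketch below uses MLflow, one open-source tracking option that the article does not specifically endorse, to record the parameters, metrics and model artefact of a training run so that data science and operations teams share a single, reviewable record.

```python
# A minimal sketch of experiment and model tracking with MLflow, one
# open-source MLOps option. The model and metric are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="demand-model-v1"):
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    # Record what was built and how well it performed, so the same artefact
    # can be reviewed, deployed and rolled back later.
    mlflow.log_params(params)
    mlflow.log_metric("test_auc",
                      roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    mlflow.sklearn.log_model(model, "model")
```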

4. Monitor the system
Unlike traditional software systems, AI works by combining large amounts of data with algorithms, enabling the software to learn automatically from patterns or features in the data. This learning ability also means that AI systems are more prone to failing silently, without warning.

AI systems must be monitored continuously even after they go live. Find out more about how you can carry out AI monitoring with our proprietary platform, Bedrock.

Why monitoring is crucial

In 2016, Microsoft’s AI chatbot, Tay, was corrupted by trolls within 16 hours of its release on Twitter, spewing inflammatory and offensive tweets. Tay’s successor, Zo, was launched a few months later and purportedly swung to the other extreme of political correctness, raising concerns about “AI censorship without context”. Microsoft eventually shut Zo down in 2019.

Today, the COVID-19 pandemic is rapidly changing consumers’ daily habits, consumption patterns and behaviour towards online channels. Data collected under business-as-usual conditions cannot be extrapolated to once-in-a-lifetime events such as a pandemic. For example, if you are a food delivery enterprise using an AI system trained on pre-pandemic data, the model will no longer be relevant. Historically, demand could be predicted based on the time of day (e.g. late-night suppers) or season of the year (e.g. festive occasions). The pandemic has introduced new factors such as the number of COVID-19 cases in the community and regulations on in-person dining. Maintaining accurate demand models is critical to giving food delivery enterprises the information they need to hire enough help to fulfil orders.
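
One simple way to catch this kind of silent failure is to compare the live distribution of a key input, such as hourly order volumes, against the distribution the model was trained on. The sketch below uses the population stability index (PSI), a commonly used drift score; the data, bin count and 0.2 alert threshold are illustrative conventions rather than anything prescribed here.

```python
# Sketch of a simple drift check: compare a feature's live distribution
# against its training distribution using the population stability index.
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between a training sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clamp live values into the training range so every observation lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
pre_pandemic_orders = rng.normal(loc=100, scale=15, size=5000)  # training period
live_orders = rng.normal(loc=160, scale=40, size=5000)          # pandemic period

score = psi(pre_pandemic_orders, live_orders)
print(f"PSI = {score:.2f}")
if score > 0.2:  # a commonly used rule of thumb
    print("Significant drift detected: review or retrain the demand model.")
```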


5. Maintain the system
Drift happens when the data or environment an AI model operates in diverges from what it was trained on, so its behaviour no longer matches the original intent. Once drift is detected, action needs to be taken quickly to course-correct before damage is done. Enterprises that keep a systematic record of vital data about the AI build process will be in a better position to diagnose the problem, troubleshoot and roll back. Make sure your teams have the means to review, update and retrain your models.
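
The kind of record-keeping and response loop described above could be sketched as follows. The model record, thresholds and actions are hypothetical; the point is that a versioned record of what was deployed makes the decision to retrain or roll back mechanical rather than ad hoc.

```python
# Hypothetical maintenance loop: retrain or roll back when live accuracy drops
# well below the accuracy recorded at deployment time. Names and thresholds
# are illustrative, not from any particular platform.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    version: str
    deployed_accuracy: float  # recorded when the model went live

def check_and_act(record: ModelRecord, live_accuracy: float,
                  tolerance: float = 0.05) -> str:
    """Decide what to do based on the gap between deployed and live accuracy."""
    drop = record.deployed_accuracy - live_accuracy
    if drop <= tolerance:
        return "ok"
    if drop <= 2 * tolerance:
        return "retrain"   # schedule retraining on recent data
    return "rollback"      # revert to the last known-good version

current = ModelRecord(version="demand-model-3.2", deployed_accuracy=0.91)
print(check_and_act(current, live_accuracy=0.88))  # ok
print(check_and_act(current, live_accuracy=0.84))  # retrain
print(check_and_act(current, live_accuracy=0.70))  # rollback
```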

The time is now

Responsible AI may be a widely discussed topic, but not enough organisations are putting it into practice. We’ve already seen enterprises and governments get it wrong, with adverse effects on the people they serve and on their bottom line. Even though an increasing number of enterprises are getting behind responsible AI as a component of business success, the 2020 State of AI and Machine Learning Report by Appen found that only 25% of companies said unbiased AI is mission-critical.

We are at a critical inflection point in the application of Responsible AI. There are both push and pull factors for integrating AI governance into your enterprise’s technology stack. It starts with enabling your AI engineers and practitioners to run AI systems responsibly. Don’t stop at checking the box for compliance. Assign equal weight to privacy, security, responsibility and product features in your enterprise. Understand the RAI solutions landscape. Have the foresight to unlock the business potential of AI innovation and governance through responsible AI.

Download our AI Governance and Responsible AI white paper and explore Bedrock today.