AI regulation is here: what does it mean for businesses?
In April 2021, the European Commission once again planted its flag on advancing trustworthy AI development with its Proposal for a Regulation on a European approach for Artificial Intelligence.
Since 2018, the Commission has carried out extensive stakeholder consultations to discuss the vision for AI in Europe and what trustworthy AI means in practice. The proposed AI regulation takes a major step forward in outlining harmonised rules on AI for the European Union, along with broad obligations covering all AI systems, from providers to users.
Potential impact on the industry
The proposed AI regulation will impose significant obligations on businesses across many sectors of the economy, not only the digital sector, and across multiple service domains.
However, it is unclear how implementable these obligations will be. Some applaud the risk-based and targeted approach, such as defining high-risk AI to include software for critical infrastructure and algorithms used by police to predict crimes.
Others have commented that the list of exemptions is too wide. For example, the use of “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes is subject to broad exemptions, provided specific requirements are met, including prior judicial or administrative authorisation for each individual use. This defeats the purpose of prohibiting such AI systems.
Limits to regulation
The proposed AI regulation is a step in the right direction towards safeguarding the use of AI technology. But there is a natural limit to what regulation can do: AI does not lend itself to easily drawn boundaries, simply because it is a self-learning technology.
The need for a combination of regulatory and practice measures
In the coming months and years, we see a need for a combination of regulatory and practice measures to truly create a healthy governance ecosystem where AI innovation can thrive.
Technical standards form the backbone of many of the world’s most trusted technologies. Currently, the AI industry lacks proven methods to translate AI governance principles into practice. Many governments, multisectoral groups and private sector companies have introduced ‘trustworthy’ AI principles, but many are still struggling to roll these principles out into concrete technical standards.
These safeguards should neither remain as broad aspirational statements, nor be reduced to checklists. Technical standards need to be defined as comprehensible and actionable steps for business, data, technology and risk teams within the enterprise.
These standards also need to be crafted in the context of industry and use cases. For example, there are 21 technical definitions of fairness: depending on whether you take the perspective of the decision maker, society or the end user, the metrics for fairness will differ. This multiplicity of contexts, stakeholders and applications needs to be considered when designing meaningful technical standards.
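To make the point concrete, here is a minimal sketch, on invented toy data, of two of those competing fairness definitions: demographic parity, which compares overall approval rates between groups (closer to a decision maker's or society's view), and equal opportunity, which compares approval rates among qualified applicants only (closer to the end user's view). The data, group labels and scenario below are hypothetical, purely for illustration; the same set of decisions can satisfy one metric while failing the other.

```python
# Illustrative sketch: two common fairness metrics computed on
# hypothetical loan-decision data (1 = approved / qualified).

def demographic_parity_gap(decisions, groups):
    """Absolute difference in overall approval rates between two groups."""
    rate = {}
    for g in set(groups):
        outcomes = [d for d, gr in zip(decisions, groups) if gr == g]
        rate[g] = sum(outcomes) / len(outcomes)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

def equal_opportunity_gap(decisions, groups, qualified):
    """Absolute difference in approval rates among *qualified* applicants."""
    rate = {}
    for g in set(groups):
        outcomes = [d for d, gr, q in zip(decisions, groups, qualified)
                    if gr == g and q]
        rate[g] = sum(outcomes) / len(outcomes)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

# Hypothetical data for two groups "A" and "B".
decisions = [1, 1, 0, 0, 1, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
qualified = [1, 1, 0, 0, 1, 1, 1, 0]

print(demographic_parity_gap(decisions, groups))            # 0.0: equal approval rates
print(equal_opportunity_gap(decisions, groups, qualified))  # > 0: unequal among the qualified
```

Both groups are approved at the same overall rate, so demographic parity is perfectly satisfied, yet qualified applicants in group B are approved less often than qualified applicants in group A. Which gap a standard should constrain depends on the stakeholder perspective, which is exactly why context matters.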
Businesses must be proactive in defining best practices
To shift from principles to practice, it is incumbent on businesses that are providers and users of AI systems to be proactive in defining best practices. Businesses should already be responsible for an AI solution throughout its lifecycle. This includes anticipating potential failures before deploying a solution, monitoring the model in production and retraining it whenever the context changes. With this practical knowledge, businesses can take the lead in implementing what they consider to be industry best practice, rather than waiting for regulation to drive action.
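As one illustration of "retraining whenever the context changes", the sketch below uses a simple population stability index (PSI) to compare live input data against the training distribution and flag drift. The bucket boundaries, threshold and data are all hypothetical stand-ins; production monitoring typically relies on dedicated tooling and richer statistics.

```python
# Illustrative sketch of drift detection via the population stability
# index (PSI). All numbers here are invented for demonstration.
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """PSI between two binned distributions (0 means identical)."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)   # avoid log(0) on empty buckets
        total += (a - e) * math.log(a / e)
    return total

def bucket_fractions(values, edges):
    """Fraction of values falling into each bucket defined by edges."""
    counts = [0] * (len(edges) + 1)
    for v in values:
        counts[sum(v > e for e in edges)] += 1
    return [c / len(values) for c in counts]

edges = [0.25, 0.5, 0.75]                            # hypothetical bucket boundaries
train = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]     # training-time feature values
live  = [0.6, 0.7, 0.7, 0.8, 0.85, 0.9, 0.95, 0.2]   # live values: distribution shifted

score = psi(bucket_fractions(train, edges), bucket_fractions(live, edges))
if score > 0.2:  # common rule-of-thumb threshold for "significant" drift
    print("Significant drift detected: schedule retraining")
```

A check like this, run continuously against production traffic, is one concrete way a business can operationalise lifecycle responsibility before any regulation requires it.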
AI regulation will only continue to grow
There is no doubt that the proposed EU AI regulation will serve as a reference point for other future regulations globally. Regulatory frameworks for AI will only continue to grow, especially as we attempt to prevent AI failures.
The key will be a mix of regulation and technical standards that holds AI development to the highest standards while also ensuring progress in innovation.