The future of AI regulation

AI regulation is on the agenda for multiple global regulators. Explore this thought-leadership interview with our guest Dr. Ansgar Koene to learn more about what this might mean for your organisation.

What are your thoughts about the regulatory policy landscape on AI governance?

I think there’s been a recognition that AI is being used in so many use cases that regulators across multiple sectors need to gain some understanding of how AI might affect the way current regulations provide safety and good operating practice in their sector.

At the moment there’s a bit of an exploration going on, including on the question of whether there should be a new regulator focusing exclusively on AI, or whether existing regulators should simply be up-skilled and empowered to deal with AI questions as relevant to them. In fact, the discussion in policy circles started from AI at a conceptual level, with debates around AI as a form of automation of both decision making and human intellectual activity. We absolutely want to make sure that humans maintain agency, and we want to ensure that there is human oversight.

The big focus of 2018-2019 was principles, but now, in 2021, we’re at the stage of transforming those principles into practical rules and regulations that deal with the challenges arising from AI. It also becomes necessary to recognise that not all AI is the same thing: machine learning is different from other forms of AI, and computer vision is different from natural language processing, which is different from recommendation systems.

Maybe we don’t want a single regulator to cover all of these; maybe we do need to focus on each sector with tailored regulation. That conversation is ongoing and hasn’t really been resolved yet. In the UK, for instance, the data protection authority, the ICO, has largely been tasked with questions to do with AI, primarily around data privacy, but it is being pushed to go beyond the remit of personal data, so it remains an open discussion.

What are the biggest challenges in regulating AI?

I think one of the biggest challenges becoming visible is how to regulate the addition of an AI component to an existing process that already needs governance. There are a few exceptions, such as autonomous vehicles, which can be seen as adding something completely new to the transportation mix. With AI-enabled recruitment, by contrast, we’re simply automating part of the hiring process by pre-filtering candidates with AI.

When AI is added to an existing process in this way, it’s not really clear how to identify whether the AI actually introduces new risks that need to be regulated differently. Take the example of AI in hiring. At its core, you’re not allowed to discriminate based on race, gender or other non-relevant factors in the hiring process, and whether that discrimination happens through an AI or through human decision making is largely beside the point. What you are regulating is that there shouldn’t be any discrimination. That raises the question of whether we need to change anything in the regulation itself, or whether we simply need a new process for providing evidence that you are complying with the existing regulation, by applying the appropriate risk assessments.
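To make the evidence question concrete, here is a minimal sketch of one such risk assessment: computing per-group selection rates and checking them against the four-fifths (80%) rule that US equal-employment guidance uses as a screen for adverse impact. The function name, data and threshold are illustrative assumptions for this interview, not a compliance tool.

```python
from collections import Counter

def adverse_impact_check(groups, selected, threshold=0.8):
    """Per-group selection rates plus a four-fifths-rule flag.

    groups:   one group label per applicant
    selected: one boolean per applicant, True if selected
    """
    totals = Counter(groups)
    hires = Counter(g for g, s in zip(groups, selected) if s)
    rates = {g: hires[g] / totals[g] for g in totals}
    top = max(rates.values()) or 1.0  # avoid divide-by-zero if nobody is selected
    # A group's impact ratio is its selection rate divided by the highest
    # group's rate; a ratio below 0.8 is the conventional red flag.
    return {g: {"rate": r, "ratio": r / top, "flagged": r / top < threshold}
            for g, r in rates.items()}

# Illustrative data: group A is selected at 40%, group B at 20%.
groups = ["A"] * 100 + ["B"] * 100
selected = [True] * 40 + [False] * 60 + [True] * 20 + [False] * 80
print(adverse_impact_check(groups, selected))
# Group B's impact ratio is 0.5, below 0.8, so it would be flagged for review.
```

The point is less the arithmetic than that the output can be logged as documentary evidence that the assessment was actually run.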

Within the enterprise, who cares the most about such AI risks?

I guess it’s currently still largely approached through the traditional compliance-led model: development teams focus primarily on achieving the functional requirements of the system, and then compliance teams assess whether you comply with the various regulatory requirements. There isn’t yet, I think, a structural practice of assessing ethical risks or societal-impact risks. As part of the larger discussion around business ethics and ESG (environmental, social and governance), these kinds of questions need to be answered.

How does one get started with governing AI?

I think you start with a clear AI strategy. When you’re creating your requirements set, do you have clear justifications for the choices you’re making, such as the selection of the data sets you’re going to use? Further still, have you documented how you’ve collected the data and how you’ve chosen which data set to use? To a large extent, this comes down to the ability to document what you’ve done and to provide justifications for those decisions. I’m focusing on the documentation aspect because this is the kind of challenge we now see with attempts at AI auditing: without sufficient documentary evidence, there is nothing to certify against.
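As a concrete illustration of the kind of documentation an auditor could work with, here is a minimal sketch of a machine-readable data-set record. The schema and field names are hypothetical, loosely inspired by published proposals such as datasheets for datasets, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Illustrative provenance record for one training data set."""
    name: str
    collected_by: str
    collection_method: str          # e.g. an export from an internal system
    collection_period: tuple        # (start_date, end_date)
    selection_justification: str    # why this data set over the alternatives
    known_limitations: list = field(default_factory=list)
    approved_uses: list = field(default_factory=list)

record = DatasetRecord(
    name="2019-2020 applicant outcomes",
    collected_by="HR analytics team",
    collection_method="export from the applicant tracking system",
    collection_period=(date(2019, 1, 1), date(2020, 12, 31)),
    selection_justification="Most recent complete hiring cycles; "
                            "older data predates the current role profiles.",
    known_limitations=["Under-represents referrals hired outside the system"],
    approved_uses=["pre-filtering model training", "bias audit"],
)
```

The design choice that matters is that the justification travels with the data set, so an auditor does not have to reconstruct it after the fact.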

What is your opinion on AI regulation proposed by academia and its applicability to enterprise AI use cases?

It’s a very broad question, as quite a number of approaches are being taken in academia. Some, within the computer science community, are pursuing technical methods. We’ve had various academics trying to operationalise a definition of AI fairness.

On the one hand, this is something you can build into a toolkit and introduce into the development cycle. On the other hand, it is also criticised within academia as reflecting an insufficient understanding of the bigger picture, including the need to provide transparency on the decision-making process. Another part of the academic literature focuses on ethical concerns, identifying application areas where AI is not the appropriate solution; that work tends to address governments and policymakers.
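As an example of what an operationalised fairness definition looks like in toolkit form, here is a minimal sketch of the equal-opportunity gap, the difference in selection rates among qualified candidates only. It is one of several competing formalisations, and the data is illustrative.

```python
def equal_opportunity_gap(groups, qualified, selected):
    """Gap in selection rates among *qualified* candidates between two
    groups, i.e. the difference in true-positive rates (one common
    operationalisation of fairness, distinct from demographic parity)."""
    def rate(g):
        outcomes = [s for grp, q, s in zip(groups, qualified, selected)
                    if grp == g and q]
        return sum(outcomes) / len(outcomes)
    a, b = sorted(set(groups))
    return rate(a) - rate(b)

# Illustrative data: the screen selects every qualified candidate from
# group A but only half of the qualified candidates from group B.
groups    = ["A", "A", "A", "B", "B", "B"]
qualified = [True, True, False, True, True, False]
selected  = [True, True, False, True, False, False]
print(equal_opportunity_gap(groups, qualified, selected))  # 0.5
```

The criticism mentioned above applies directly: the number is easy to compute, but it says nothing about whether "qualified" is itself measured fairly, which is why such metrics work best as diagnostics within a broader transparency process.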

For instance, there is a conversation around when probabilistic, machine-led decision making is fundamentally inappropriate for a given domain. In society, we can think of the criminal justice system, where I would say people should be judged based on what they did, not on whether they appear to fall into a population group that statistically ends up in prison more often. These discussions address government regulation more than they address enterprise.

Where is the AI regulation discourse heading?

I think we are heading in a direction where these technologies have a significant impact on people and on society.

Therefore, they need to be regulated in a similar way to other domains, like vaccines in healthcare or the safety measures we require for self-driving cars in transportation. This will transform the space into something more strongly regulated, with certification regimes. It may still take a little time: the jurisdictions leading the process of developing AI regulation (Singapore, Europe and, to a large extent, the UAE, which is also exploring this space) are currently performing regulatory gap analyses and drafting new regulation proposals.

This will probably take at least a year to crystallise, and then another year or so to formulate into clear regulations. Within that period we will see the publication of more technical standards in this space. The work of ISO/IEC JTC 1/SC 42, the joint ISO/IEC committee on AI standards, as well as the AI work of the IEEE Standards Association, has really picked up steam, and standards are likely to start appearing over the coming year. We will soon have a bigger body of guidance on what best practice looks like. In the US, too, we are seeing greater attention to how assessments of these systems should work and what role benchmarks can play.

I expect that in about three years’ time we will be looking at a completely different landscape for AI regulation.

Dr. Ansgar Koene is Global AI Ethics and Regulatory Leader at EY and a thought leader on AI governance, trusted AI frameworks and AI-related public policy engagement. He is also a Senior Research Fellow at the Horizon Digital Economy Research Institute (University of Nottingham), where he contributes to the institute’s policy impact and public engagement activities and to the ReEnTrust and UnBias projects.


All opinions expressed by this interview participant are solely their personal opinions and do not reflect the opinions of their current or past employers.
