AI, Data, Analytics and the Risk innovation function

AI, data and analytics (AIDA) are fuelling tremendous change within banking & financial services. Risk teams are integral contributors to this digital transformation. Explore our AI Risk insights and what this might mean for your organisation.

Background

Automation is part of the financial services industry today in many forms: think of automated fraud detection, robo-advisors, credit risk evaluations, dynamic pricing, claims handling in insurance (e.g., extraction and classification of claims data), or anti-money-laundering efforts in banking (e.g., reconciliation and monitoring of transaction data); the list is endless. At a human level, decisions are increasingly being made by algorithms, and the outcomes of those decisions affect ordinary lives. Access to credit cards, loans & mortgages is, at a minimum, influenced by machines at statistical scale. Similarly, health & life cover decisions within insurance are now being automated.

These decision-making algorithms rely on training data-sets that specify the correct outputs for a representative sample of cases and population demographics. From this labelled sample, the system learns a model that can then be applied to a larger data-set to predict what the correct outputs should be.
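
As a minimal illustration of that supervised-learning pattern, the sketch below trains a classifier on a hypothetical labelled credit sample and then scores a larger set of new applications. The file names, column names and model choice are assumptions for illustration, not a description of any specific production system.

    # Minimal sketch: learn a credit-decision model from a labelled sample,
    # then apply it to a larger, unlabelled data-set.
    # File names, column names ("income", "debt_ratio", "age", "approved")
    # and the model choice are illustrative assumptions.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    labelled = pd.read_csv("labelled_sample.csv")   # sample with known correct outputs
    features = ["income", "debt_ratio", "age"]

    X_train, X_test, y_train, y_test = train_test_split(
        labelled[features], labelled["approved"], test_size=0.2, random_state=42
    )

    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("Hold-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

    # The learned model is then applied at scale to new, unlabelled applications.
    new_applications = pd.read_csv("new_applications.csv")
    new_applications["approval_score"] = model.predict_proba(new_applications[features])[:, 1]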

Overview

This subjects the industry to various regulatory regimes that set out specific guidelines for business processes, model governance and systems of record, with strong penalties and fines for non-compliance. In addition, privacy directives impose specific rules on personally identifiable data, the fuel for most AI models in consumer-oriented use-cases, creating further security, ethical & legal risks. Increased scrutiny by regulators, and more recently by shareholders, media & society, adds reputational risk.

AI is no longer the exclusive preserve of large technology giants. With easier avenues to build, buy or partner, AI is already part of the value chain across many industries, with machine learning emerging as the clear choice for multiple use-cases. The pandemic, stretched operational capacity and cost optimisation all mandate less dependency on humans, and automation enabled by artificial intelligence is now almost a boardroom imperative at most enterprises. This urgency undermines the ability to effectively control and govern newer technology risks, and existing risk control frameworks need significant improvements to remain effective.

Why is this important?

AI models developed with data from a pre-Covid timeline are no longer entirely accurate. Such algorithms present performance risks and need to be retrained to take account of newer data patterns. Every so often we hear yet another story of an algorithm going awry: from re-calibrated exam results to the controversial use of AI to process visa applications, and widely publicised AI failures involving mass surveillance and bias in speech recognition. Black Lives Matter and its campaigns have brought greater awareness of societal injustices, exacerbated by human biases, which find their way into AI algorithms in systems that influence everything from hiring to welfare benefits & criminal justice outcomes.
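
One way to make this performance risk tangible is to compare the distribution of current data against the data the model was trained on and flag features that have drifted. The sketch below uses a two-sample Kolmogorov-Smirnov test as one simple, assumed approach; the file names, feature names and significance threshold are illustrative.

    # Minimal sketch: flag feature drift between training-era data and recent
    # production data with a two-sample Kolmogorov-Smirnov test.
    # File names, feature names and the 0.05 threshold are illustrative assumptions.
    import pandas as pd
    from scipy.stats import ks_2samp

    train_df = pd.read_csv("pre_covid_training_data.csv")
    live_df = pd.read_csv("recent_production_data.csv")

    for col in ["income", "debt_ratio", "transaction_volume"]:
        stat, p_value = ks_2samp(train_df[col].dropna(), live_df[col].dropna())
        if p_value < 0.05:
            print(f"Feature '{col}' has drifted (KS statistic {stat:.3f}); consider retraining.")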

As a result, policy-makers and the judiciary are more likely to draw up compliance laws for AI technologies in various geographies, including Europe, specifically for use-cases involving human outcomes. Although still in its early days, GPT-3 and its promise have added to the stakes and the speed with which newer AI models will change the industry, so it is critical for the enterprise to be aware of novel risks on the horizon. There is no industry-endorsed auditing framework in place for governing AI, nor any generally accepted AI-specific regulations, standards or mandates. Multiple think tanks, the Big 4, academia and government agencies have published thought leadership on this problem. In fact, Deloitte’s most recent, third edition of the “State of AI in the Enterprise” 2020 survey states that 57% of AI adopters have “major” or “extreme” worries about how new and changing regulations could impact their AI initiatives.

Recommendations

Here are some best practices to address this problem:

  • Institute an AI Risk management charter as part of an AI strategy planning exercise
  • Ensure participation of the Risk function in any steering committee on AI
  • Foster a culture of AI Risk awareness across the value chain by breaking down silos between the risk teams & the data office
  • Encourage data-science teams to bundle in model explainability as a standard (a minimal sketch follows this list), and extend this requirement to third-party AI solution vendors
  • Develop a strategic early-warning system, designed for AI risk sensing, to identify any increase in AI risk scores and thereby predict loss events due to AI ahead of time
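
On the explainability recommendation above, one lightweight way to make it a standard deliverable is to ship a feature-importance report alongside every trained model, so the risk function can see which inputs drive decisions. The sketch below uses permutation importance as one assumed approach; the model and hold-out data are placeholders carried over from the earlier training sketch.

    # Minimal sketch: generate a feature-importance report to accompany a trained
    # model as a standard explainability artefact for risk review.
    # "model", "X_test" and "y_test" are assumed to exist, e.g. from the earlier sketch.
    from sklearn.inspection import permutation_importance

    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
    ranked = sorted(
        zip(X_test.columns, result.importances_mean), key=lambda item: item[1], reverse=True
    )
    for feature, importance in ranked:
        print(f"{feature}: mean importance {importance:.4f}")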

The enterprise risk function need not necessarily stifle innovation. An effectively balanced AI Risk appetite that supports model experimentation can make a huge difference to competitive advantage.
