With a raging pandemic, we are all aware of the unprecedented change in the global business landscape. With operational capacity stretched and costs under constant pressure, there is now a ubiquitous need to reduce dependency on humans across the value chain. Artificial intelligence has become a compelling proposition for most enterprises.
This urgency undermines the ability to effectively control and govern newer technology risks. Automation is already well established across many industries in many forms, with machine learning emerging as the clear choice for high-value use cases. But this AI can, and does, go wrong. Given the stakes and the speed at which AI is changing the enterprise, it is critical to be aware of the newer risks on the horizon:
- The risk of automation not complying with existing data controls (impacted controls)
- The risk of technology that not only automates processes but also redefines them, without involving the three lines of defence (3LoD)
- The lack of human intervention, which creates an immediate problem of ownership versus responsibility and liability
Traditional GRC tools aren't designed for such risks, nor are they purpose-built for monitoring risks created dynamically by constantly learning algorithms.
In the last ~10 weeks, we spoke with 37 individuals of varying seniority and industry experience within the risk function. 31 agreed that it's only a matter of time before AI in the enterprise is regulated, much as data is under the GDPR regime. 17 of them didn't think they understood AI well enough to articulate the controls needed.
- One theme that resonated consistently is managing risk across multiple forms of intelligent automation: RPA, ML and NLP
- Risk governance is typically involved in the AI development process only as an afterthought, a tick-box exercise
- Internal controls in use today are simply inadequate for managing AI risks
- Visibility and transparency in decision outcomes, as well as mitigating bias creep (both statistical and human), are areas of keen interest
- There is no platform that links the data science, engineering, risk and compliance functions in a single-pane-of-glass view
We see a gap in the market that can be addressed with a lighter-weight platform dedicated exclusively to AI risk.
We are now working with a select few early alpha customers and would welcome feedback from more.
At Zupervise, our mission is to champion governance for enterprise AI technologies. Our vision is a world where AI is deployed responsibly, with transparency, our prime value driver, embedded into the way we work.