Keeping a check on AI: Oliver Wyman’s CEO highlights the pitfalls of regulating AI

Oct 29, 2024 - 22:00

AI leaders gathered in Riyadh in September for the Global AI Summit (GAIN). Hosted by the Saudi Data & Artificial Intelligence Authority, GAIN 2024 explored themes such as generative AI, AI in urban life and the ethical governance of AI. 

Nick Studer, president and CEO of Oliver Wyman, who spoke on ‘AI governance in an interconnected world’, sat down with Sir Martyn Lewis from Inside Saudi.

ML – How important was this conference for the business world? 

NS – Very important. It was truly global and brought together lots of people. What is important about this conference is that many AI developments are global in nature, but quite US-centric. Various governments are obviously trying to establish their own national presence to protect their sovereignty. There’s actually huge activity in the GCC, and, specifically, in Saudi Arabia around investing in AI, and this was a window onto that. A lot of progress is being made there. 

ML – What was your main message on AI governance, not just to businesses, but also to people and governments? 

NS – The Council of Europe has published the first AI treaty to be signed by its members. In its work on AI governance, it identified 450 different initiatives around the world, emanating from academia and civil society as well as from big intergovernmental bodies. That alone demonstrates we’re still unsure how to solve this problem. 

My message particularly was on how countries can protect their own sovereignty while also benefitting from collaboration with other nations. We don’t know where the United Nations will come out. They’ve put out a draft, which seems to indicate they’re aiming for some kind of overriding, unifying governance. This will be very hard because the interests and capabilities of each nation simply diverge too much. 

I believe regulation will end up as ‘polycentric’, i.e. lots of different initiatives that overlap. Ultimately, it will reflect geopolitics. Happily, there is much countries can do to protect their sovereignty, such as investing in their own national skills around AI, supporting the private sector, government use cases, etc. 

Inevitably, however, in such a new area that is moving so fast, there is value to be had in collaboration and cooperation, particularly around prohibited use cases, the things we don’t want AI to venture into, but also perhaps just in sharing lessons and data. 

I worry that regulators and lawmakers trying to protect us will result in companies saying, ‘We’re not going to release our product in your market because we can’t meet your regulations.’ Apple is going that way with Apple Intelligence in Europe. My fear is that the balance between consumer demand and legislative and regulatory protection will be very hard to settle. 

ML – So who wins?

NS – I think the consumer will ultimately win. Take social media, for example. It has been around for a while, but now there is a backlash over whether it was wise to give away all this data, or to give it to our children. At the time, the consumer wanted it. So, it’s the old proverb of ‘act in haste and repent at leisure’. I fear that in the AI space, the consumer may win but regret it later in some areas. 

ML – Were there any conclusions about how business and governments should balance AI’s benefits with the risks? 

NS – Corporate leaders see AI increasingly as an opportunity and less as a threat or risk. The bigger fear is a competitor may come up with some revolutionary use case that kills your business. 

So, I think governance is basically being placed back onto governments, which is probably where it should be. But I’ve also had some interesting conversations around the role of investors. The investor community aims to bet big, which could lead to some aggressive use cases being deployed before they’re fully understood. 

At the conference, it was a smorgasbord of views. But there was definitely a strong sense that money is moving from understanding and developing large language models into testing and putting use cases into production for large-scale employee engagement. 

ML – What about the redundancies that result from AI? 

NS – Ultimately, this has always been the challenge: this is a latter-day Industrial Revolution. Just as before, there will be a big shift in the nature of jobs. I think there are many things AI will not replace, but we have to make sure we are equipping people with the right skills and training to be able to interact with the new technology.

ML – How would you sum up Saudi Arabia’s role in the future development of AI?

NS – As Saudi Arabia seeks to diversify its economy, it is all in on AI and making all the right moves to get there. It could have an extremely powerful role in its region, given the Arabic-language basis of much of what it is investing in. I’m aware of at least eight Arabic-based large language models in development in the Kingdom. But also, by taking on a bridging role due to its geopolitical position between East and West, I believe there is an awful lot it can do.