
AI: A World of New Opportunity and Risk


By: EBR - Posted: Monday, January 17, 2022

A new toolkit for C-suite execs on how to responsibly adopt artificial intelligence.

by Theodoros Evgeniou, Kay Firth-Butterfield, Arunima Sarkar and Caroline Zimmerman*

This isn’t a new story – a novel technology disrupts society, bringing with it many benefits but also major risks and costs.

We saw it during the Industrial Revolution, which vastly improved average living standards but also led to poor labour conditions and environmental degradation, over a timeline that was difficult to foresee. Now we find ourselves at the dawn of the AI revolution, where cloud computing, greater processing power, cheap storage, new algorithms and new product and service innovations are realising the benefits of the technology, from driverless cars and virtual reality to medical diagnostics and predictive machine maintenance.

In tandem, however, we see some negative and often unintended consequences of these technologies. These range from the rise of fake news and algorithms that favour the incendiary and divisive over the factual, to major privacy breaches and AI models that discriminate against minority groups or even cost human lives.

AI is a powerful tool, and it’s never been more important for C-suite executives to understand both how to leverage it for growth and innovation, and how to do so responsibly and ethically. They need an understanding of the long-term impact – both positive and negative – of the algorithms they build and deploy. It’s by no means a charted path; success is as much about asking the right questions, keeping an open mind and being aware of the key issues at stake, as it is about finding the “right” answers.

The World Economic Forum, with supporting research from INSEAD’s Hoffmann Global Institute for Business and Society, has created a guide for C-suite executives who are committed to adopting AI technologies effectively and responsibly. This guide takes the form of questions that executives should be asking themselves as they build their AI capabilities. It also offers some possible answers to these complex issues.

Building an effective AI capability

Building an AI capability that delivers business value is a challenge in its own right. Gartner predicts that 80 percent of analytic insights will fail to deliver business value at scale through 2022. It’s tempting for executives to believe that AI will magically deliver new revenue streams or efficiency gains, but the truth is that AI initiatives ought to undergo the same rigorous business planning as any other project.

An AI initiative should, first and foremost, align with the organisation’s key strategic goals and directly contribute to moving the KPIs that buttress this strategy. In other words, executives must “know the why” for AI initiatives. An iterative approach that starts with simple, explainable models is recommended. Investments in more complex solutions that may deliver marginal accuracy gains but are much harder to interpret or deploy at scale should be avoided.

The best rules of thumb to mine business value from AI are: “Don’t get caught in the hype” and “Start simple and test value.” As Eugene Yan, a data scientist at Amazon, famously said, “The first rule of machine learning: Start without machine learning.”
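As a concrete illustration of “start simple and test value”, the sketch below compares a plain business rule with a small, explainable model before anything more complex is considered. It is a minimal, hypothetical Python example; the churn dataset, file name and column names are assumptions for illustration, not part of the toolkit.

```python
# Minimal sketch of "start simple and test value": compare a rule-of-thumb
# baseline with a small, explainable model before investing in anything complex.
# The churn dataset and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("churn.csv")  # hypothetical file
X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline 1: no machine learning at all, just a simple business rule.
rule_pred = (X_test["support_tickets"] >= 3).astype(int)
print("Business rule AUC:", roc_auc_score(y_test, rule_pred))

# Baseline 2: a small, explainable model whose coefficients can be inspected.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Logistic regression AUC:",
      roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print("Coefficients:", dict(zip(X.columns, model.coef_[0])))

# Only if neither baseline moves the relevant KPI is a more complex
# (and harder to interpret) model worth the extra cost.
```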

It is helpful for executives to possess a broad understanding of the key stages of an AI initiative and the technical risks at each of these stages. For instance, executives broadly underestimate the amount of data cleansing and preparation that is required to build viable algorithms. Data scientists, on the other hand, are likely to focus on building the most accurate model possible using the latest techniques, without understanding the business context and the many trade-offs involved.
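To make the data-preparation point more tangible, here is a minimal sketch of the kind of cleansing work that typically precedes any modelling. The file name, columns and cleaning rules are hypothetical and purely illustrative.

```python
# Minimal sketch of the data-preparation work that typically precedes modelling;
# the dataset and column names are hypothetical.
import pandas as pd

raw = pd.read_csv("transactions.csv")  # hypothetical file

# Remove exact duplicates and rows missing the target variable.
clean = raw.drop_duplicates().dropna(subset=["amount"])

# Fix inconsistent categorical labels (e.g. "UK", "U.K.", "United Kingdom").
clean["country"] = clean["country"].str.strip().str.upper().replace(
    {"U.K.": "UK", "UNITED KINGDOM": "UK"}
)

# Parse dates and derive features the business actually cares about.
clean["date"] = pd.to_datetime(clean["date"], errors="coerce")
clean = clean.dropna(subset=["date"])
clean["weekday"] = clean["date"].dt.day_name()

# Cap extreme outliers rather than letting them dominate the model.
cap = clean["amount"].quantile(0.99)
clean["amount"] = clean["amount"].clip(upper=cap)
```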

The most successful AI initiatives are close collaborations between data talent, business stakeholders and sponsors, engineers, and end users who are available to test the solution and provide feedback. Building the right team and upskilling the organisation to enable this kind of collaboration is essential for success.

One sure-fire sign that an organisation has evolved along the data and AI maturity scale is the shift from a defensive data capability, primarily focused on reporting on and understanding the past, to an offensive one, focused on how data and AI can be used to set the strategy, deliver profit and support innovation.

Another key sign is that the data talent becomes increasingly specialised, moving from generalists to technical specialists (data analysts, data scientists, machine learning engineers) and also business partner specialists (data scientists gain expertise in a specific commercial area, such as marketing attribution or pricing analytics). No matter the degree of specialisation, the most effective AI teams do not have a fixed structure; their structure evolves with the changing needs of the business.

Managing AI risks

Running an effective AI capability, however, is about more than simply leveraging these technologies to realise EBIT and market share gains. Now more than ever, executives must have a keen understanding of the new business risks involved in developing algorithms. They must ensure that their organisations proactively mitigate these risks and comply with upcoming regulations. The list of potential risks can appear daunting: algorithms that hand down more severe prison sentences to minority defendants because of biases in the training data; job losses and “winner takes all” economic models driven by increased automation; and even risks to democracy itself, as the polarising effects of content-promotion algorithms on social media create an unsafe online space.

AI also raises questions of accountability. Who is responsible:

  • When a driverless car crashes
  • In a lawsuit claiming unfair hiring informed by AI algorithms
  • When the wrong medical treatment is prescribed because an AI diagnostic system contained errors
  • For a large financial loss incurred by an algorithmic trading platform?

The organisations that mitigate these risks best are those that build their own ethical standards and gateways into the AI lifecycle, from how they collect and prepare data to how they build, test and deploy models. They also adopt new data and AI risk-management practices, processes and tools, both to comply with upcoming regulations and to ensure customer trust.

For example, one major North American bank used various techniques to de-bias its data to ensure that its credit scoring algorithm would automatically grant credit to all eligible applicants and not exclude minority groups, who were potentially less represented in the underlying data. While the technical team carried out the de-biasing techniques on the ground, it was the executive team’s commitment to investing in ethical AI that ensured the robustness of this process.
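The bank’s exact techniques are not described, but one such de-biasing step can be pictured with the sketch below: measuring representation and approval rates by group, then re-weighting under-represented groups during training. The dataset, column names and the choice of re-weighting are assumptions for illustration only, not the bank’s actual method.

```python
# Hypothetical sketch of one de-biasing step: check approval rates across groups,
# then re-weight under-represented groups during training. Column names are assumed.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("credit_applications.csv")  # hypothetical file
features = ["income", "debt_ratio", "credit_history_years"]
X, y, group = df[features], df["approved"], df["group"]

# 1. Measure representation and historical approval rates by group.
print(df.groupby("group")["approved"].agg(["count", "mean"]))

# 2. Give under-represented groups proportionally more weight in training
#    so the model is not dominated by the majority group's patterns.
weights = group.map(1.0 / group.value_counts(normalize=True))

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)

# 3. After training, re-check predicted outcomes per group before deployment.
df["score"] = model.predict_proba(X)[:, 1]
print(df.groupby("group")["score"].mean())
```

Re-weighting is only one of several possible interventions; whatever technique is chosen, the final per-group check before deployment is the part that depends on executive commitment rather than on any particular tool.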

The opportunity to use AI to grow and innovate has never been greater, but neither have the risks. It’s a long road to leverage these technologies profitably, ethically, safely and at scale. The new WEF toolkit is a starting point for engaging in the right debates to ensure executives consider the salient issues in their decision making and their organisations’ ways of working. Of all the skills an AI-ready executive team must possess, asking the right questions is probably the most important.

*Theodoros Evgeniou is Professor of Decision Sciences and Technology Management at INSEAD, where he has worked on machine learning and AI for almost 25 years; he is a World Economic Forum Academic Partner for Artificial Intelligence and a co-founder of Tremau. Kay Firth-Butterfield is Head of Artificial Intelligence and Machine Learning at the World Economic Forum. Arunima Sarkar is AI Lead at the Centre for the Fourth Industrial Revolution, World Economic Forum. Caroline Zimmerman is a Research Associate at the INSEAD Hoffmann Global Institute for Business and Society.

**First published in: knowledge.insead.edu
