
We’re failing at the ethics of AI. Here’s how we make real impact

The global COVID-19 crisis has acted as a worldwide accelerator for the rollout of AI initiatives. Technological changes that would have taken place over five years have occurred in just six months

By: EBR - Posted: Friday, January 14, 2022

by Anja Kaspersen and Wendell Wallach*

The global COVID-19 crisis has acted as a worldwide accelerator for the rollout of AI initiatives. Technological changes that would have taken place over five years have occurred in just six months.

While the pandemic has embedded AI into our lives at lightspeed, it has also amplified the urgency of understanding the rules and ethics that govern it. For instance, technology companies tasked with biometric tracking and tracing applications now possess massive amounts of our personal biodata, with no clear set of rules on what to do with it or how to protect it.

As a result, companies and stakeholders find themselves putting out fires: cybersecurity failures, the prolific spread of dis- and misinformation, and indiscriminate data sales, all easily preventable problems.

With such high stakes, why aren’t the rules and governance of AI systems more clear?

It’s not for lack of effort. Over the last few years, a surge of principles and guidance to support the responsible development and use of AI has emerged, yet without producing significant change.

For impact, we must drill down on three main issues:

1. Broaden the existing dialogues around the ethics and rules of the road for AI.

The conversation about AI and ethics needs to be cracked wide open to understand the subtleties and life cycle of AI systems and their impacts at each stage.

Too often, these discussions do not reach far enough, focusing solely on the development and deployment stages of the life cycle even though many problems arise during the earlier stages of conceptualization, research and design.

Or they fail to consider whether, and when, an AI system will reach the level of maturity required to avoid failure within complex adaptive systems.

Another problem is that companies and stakeholders may focus on the theater of ethics, seeming to promote AI for good while ignoring aspects that are more fundamental and problematic. This is known as "ethics washing," or creating a superficially reassuring but illusory sense that ethical issues are being addressed to justify pressing forward with systems that end up deepening problematic patterns.

Let transparency dictate ethics. There are many tradeoffs and grey areas within this conversation. Let’s lean into those complex grey areas.

While "ethics talk" is often about underscoring the differing tradeoffs that correspond with various courses of action, true ethical oversight rests on addressing what’s not being accommodated by the options selected.

This vital – and often overlooked – part of the process is a stumbling block for those trying to address the ethics of AI.

2. ‘Ethics talk’ is led by decision-makers who lack key understanding.

Too often, those in charge of developing, embedding and deploying AI systems fail to understand how they work or what potential they might have to shift power, perpetuate existing inequalities and create new ones.

Overstating the capabilities of AI is a well-known problem in AI research and machine learning, and it has bred complacency about understanding the actual problems these systems are designed to solve, as well as about identifying potential problems downstream. The belief that an incompetent or immature AI system, once deployed, can be remedied by a human on the loop, or the assumption that some after-the-fact antidote exists, particularly for cybersecurity failures, is an erroneous and potentially dangerous illusion.

We see this lack of comprehension demonstrated in our decision-makers, who fall for a myopic, tech-determinist narrative and apply tech-solutionist, optimization-driven approaches to global, industry and societal challenges. They are often blinded by what is on offer rather than focused on what the problem actually requires.

To have a true understanding of the ethics of AI, we need to listen to a much more inclusive cast of experts and stakeholders including those who grasp the potential downstream consequences and limitations of AI, such as the environmental impact of the resources required to build, train and run an AI system, its interoperability with other systems and the feasibility of safely and securely interrupting an AI system.

3. The dialogue about AI and ethics is confined to the ivory tower.

Concepts such as ethics, equality and governance can be viewed as lofty and abstract. We need to ground the AI conversation in discerning meaningful responsibility and culpability.

Anyone who assumes that AI systems are apolitical by nature would be incorrect, especially when systems are embedded in situations or confronted with tasks they were not created or trained for.

Structural inequalities are commonplace, as in the predictive algorithms used in policing, which are often clearly biased. What’s more, the people most vulnerable to negative impacts are often not empowered to engage.

Part of the solution and the challenge is finding a shared language for re-conceptualizing ethics for unfamiliar and implicit tensions and tradeoffs. Large-scale technological transformations have always led to deep societal, economic and political change, and it’s always taken time to figure out how to talk about it publicly and establish safe and ethical practices.

However, we are pressed for time.

Let’s work together to re-envision ethics for the information age and cut across siloed thinking in order to strengthen lateral and scientific intelligence and discourse. Our only viable course is a practical and participatory ethic that ensures transparency, ascribes responsibility and prevents AI from being used in ways that dictate rules and potentially cause serious harm.

*Anja Kaspersen is former Head of Geopolitics & International Security, World Economic Forum; Wendell Wallach is Scholar, Interdisciplinary Center for Bioethics, Yale University
**First published at www.weforum.org
