
It’s time to change the debate around AI ethics. Here’s how

The current conversation around AI, ethics and the benefits for our global community is a heated one

By: EBR - Posted: Wednesday, July 21, 2021

"Heated debate about the development of Artificial Intelligence (AI) is often affected by ethical concerns that can create fear about this type of technology."

by Kay Firth-Butterfield, Ted Kwartler and Sarah Khatry*

The current conversation around AI, ethics and the benefits for our global community is a heated one. The combination of high stakes and a complex, rapidly adopted technology has created a very real sense of urgency and intensity around this discussion.

Promoters of the technology love to position AI as a welcome disruptor that could bring about a global revolution. Meanwhile, detractors lean into the potential for disaster: the possibility of AI super-intelligence, thorny ethical questions like the classic trolley problem, and the very real consequences of algorithmic bias.

It’s all too easy to get caught up in the hype and create a situation whereby the world does not fully benefit from the development of AI technology. Instead, we should take a moment to assemble a critical perspective on the many voices fighting for our attention on AI ethics.

Some of these voices belong to businesses that know they have been too slow in adopting AI technology. Others come from businesses that dove into AI early and have benefited from the confusion and a lack of regulation to pursue bad practices. Finally, in this day and age of influencers, there are those who broach the subject of AI ethics to grow their personal brand, sometimes without the required expertise.

Establishing the facts

Clearly, it’s a minefield out there, but we must brave it. This conversation is too important and too vital to neglect. With that in mind, here are some key facts that should be used to help inform this debate:

1. Reports about the dawn of Artificial General Intelligence (AGI) have been grossly exaggerated.

AGI refers to a broader form of machine intelligence than standard AI. It covers machines with a range of cognitive capabilities and the ability to learn and plan for the future. It’s the real-life realisation of the technology of science fiction books and movies, where computers rival humans in terms of intelligence and reason.

The more we have learned about AI over the decades, however, the less optimistic our estimates of AGI’s arrival have become. Instead, almost all AI systems in our modern world belong to a subcategory called machine learning (ML), which is extremely narrow and learns only through example. These machines do not think independently.
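The point that machine learning systems "learn only through example" can be made concrete with a minimal sketch (using scikit-learn; the toy data and labels are illustrative assumptions, not drawn from the article): a narrow ML model learns a mapping only from the labelled examples it is shown, with no independent reasoning.

```python
# Illustrative only: a "narrow" model that learns purely from examples.
from sklearn.tree import DecisionTreeClassifier

# Toy training examples: [hours_of_daylight, temperature_celsius] -> season
X_train = [[8, 2], [9, 5], [15, 24], [16, 27]]
y_train = ["winter", "winter", "summer", "summer"]

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# The model generalises from the examples above, and only from them;
# it has no concept of "season" beyond the patterns in the training data.
print(model.predict([[14, 22]]))  # predicts 'summer' given these examples
```

Everything the model "knows" is a statistical pattern in four rows of numbers, which is precisely why such systems do not think independently.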

In fact, some of the tools that currently market themselves as AI are actually far older than ML. They are based on simpler statistical, expert or logic-based algorithms. However, our cultural overemphasis on intelligence promotes the personification of AI, diminishing human accountability.

2. The concept of "AI as a black box" is a myth.

ML algorithms can certainly vary in complexity, with some lending themselves more readily to human interpretation than others. That said, a variety of tools and techniques have been developed to probe even the most opaque algorithms and quantify how they respond to different inputs. The real issue is that these tools can be too technical for some stakeholders to use directly.
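One widely used probing technique of the kind described above is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, quantifying that feature's influence even when the model itself is opaque. A hedged sketch using scikit-learn (the synthetic dataset is an assumption for illustration):

```python
# Probe an opaque model by measuring how accuracy degrades when each
# feature is randomly shuffled (permutation importance).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: 5 features, of which only 2 carry real signal.
X, y = make_classification(n_samples=300, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Techniques like this do not make a complex model simple, but they do make its behaviour measurable, which undercuts the claim that such systems are inherently unknowable.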

When an AI system is too poorly understood to be relied upon, it should probably not be deployed in sensitive situations. In such circumstances, further vetting should be performed or behavioural guardrails implemented to ensure the system can be deployed in a way that is clear and safe for users and other stakeholders. "AI as a black box" should never be used as an excuse to absolve human decision-makers of responsibility.
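A "behavioural guardrail" can be as simple as a confidence threshold: if the model is not sufficiently sure of its prediction, the case is escalated to a human instead of being acted on automatically. A minimal sketch in plain Python (function and label names are hypothetical, not from any particular system):

```python
# Hypothetical guardrail: act on a prediction only when the model's
# confidence clears a threshold; otherwise route it to human review.
def guarded_decision(probabilities, labels, threshold=0.9):
    """Return the predicted label only if the model is confident enough."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return "escalate_to_human_review"
    return labels[best]

labels = ["approve", "deny"]
print(guarded_decision([0.95, 0.05], labels))  # confident -> approve
print(guarded_decision([0.55, 0.45], labels))  # uncertain -> human review
```

The threshold itself becomes a documented, auditable design choice, keeping humans accountable for where the line is drawn.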

3. AI is not the first technology to promise both great risk and great opportunity.

Beyond hot-button moral quandaries such as trolley problems, AI now faces another class of ethical questions. These issues are perhaps quieter, or less flashy, but they will ultimately have a broader human impact.

Properly addressing these questions will require lucid, calm and holistic evaluations of AI using the methodology of systems safety, which identifies safety-related risks and uses design or processes to manage them. Nuclear power, aviation, and biomedicine, among many other industries, have evolved into safe and reliable industries, in large part due to the rigorous implementation of such risk-based systems safety frameworks.

Maintaining control of AI’s development

We need to see and analyse this technology as it really is. The simple truth remains that all AI in our current and foreseeable future consists of ML-based systems: advanced statistical algorithms governed by code and people. These systems can and should be under our control. Risks can be enumerated, mitigated and monitored, transforming a crisis of confusion into a mature technology in service of our shared advancement.

Increasingly, this message is finding a platform and it is beginning to shape AI's development meaningfully. The European Union's latest proposed regulations, for example, take significant steps in the right direction by defining high-risk use cases. The data science community wants to build models that align with societal values and improve outcomes. This well-thought-out proposal from the EU will enable innovation and industry growth by standardising expectations among practitioners.

Those who pretend we are not capable of governing and responsibly deploying this technology promote a falsehood. The AI industry certainly faces a major challenge in pushing the boundaries of this technology now and in the future. Through partnership, clarity, and pragmatism, we can be ready to face this challenge.

*Kay Firth-Butterfield is Head of Artificial Intelligence and Machine Learning and Member of the Executive Committee, World Economic Forum; Ted Kwartler is VP, Trusted AI, DataRobot; Sarah Khatry is Managing Director, AI Ethics, DataRobot
**first published in: www.weforum.org
