
This is why we need to talk about responsible AI

Bias in AI and other negative consequences of the technology have become common media fodder

By: EBR - Posted: Thursday, November 12, 2020

"Making Responsible AI part of stakeholder feedback will not only help avoid reputational damage, but will ultimately increase customer engagement."
"Making Responsible AI part of stakeholder feedback will not only help avoid reputational damage, but will ultimately increase customer engagement."

by Steven Mills and Daniel Lim*

Bias in AI and other negative consequences of the technology have become common media fodder.

The impression that media coverage gives is that only a few companies are taking steps to ensure the AI systems they develop aren’t inadvertently harming users or society.

Yet results from an IDC survey show that many companies are moving towards Responsible AI: nearly 50% of organizations reported having a formalized framework to encourage consideration of ethics, bias and trust.

But why are so few companies pulling back the curtain to share how they are approaching this emerging focus? The silence is puzzling given the commitment to the responsible use of technology these investments signal.

Work in progress

Responsible AI is still a relatively new field that has developed rapidly over the past two years, with one of the first public guidelines for implementing Responsible AI appearing in 2018.

Yet, only a few companies are publicly discussing their ongoing work in this area in a substantive, transparent, and proactive way. Many other companies, however, seem to fear negative consequences (like reputational risk) of sharing their vulnerabilities. Some companies are also waiting for a “finished product,” wanting to be able to point to tangible, positive outcomes before they are ready to reveal their work.

They feel it is important to convey that they have a robust solution with all the answers to all the problems relevant to their business.

We’ve also seen that willingness to be transparent varies by industry. For example, an enterprise software company that speaks regularly about bug fixes and new versions may find Responsible AI to be a natural next step for its business. However, a company that monetizes data may worry that this kind of transparency will unearth greater stakeholder concern about the business model itself.

Through our conversations with companies, we’ve seen that no one has conquered Responsible AI and that everyone is approaching it from a different angle. By and large, there is more to gain from sharing and learning than from continuing to work towards perfection in silos.

All risk and no reward?

With so many news stories about AI gone wrong, it’s tempting to keep Responsible AI strategies under wraps. But it’s important to understand the rewards of sharing lessons with the wider community.

First, talking openly about efforts to improve algorithms will build trust with customers—and trust is one of the greatest competitive advantages a company can have. Furthermore, as companies like Apple have proven, embracing a customer-centric approach that incorporates feedback loops helps build better products.

Making Responsible AI part of stakeholder feedback will not only help avoid reputational damage, but will ultimately increase customer engagement. Finally, the data science profession is still in its early stages of maturity. Models and frameworks that incorporate ethics into the problem solving process, such as the one published by researchers at the University of Virginia, are just beginning to emerge.

As a result, Responsible AI practices such as societal impact assessments and bias detection are just starting to make their way into the methodologies of data scientists. By discussing their challenges with their peers in other companies, data scientists and developers can create community, solve problems and, in the end, improve the entire AI field.
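To make the idea of bias detection concrete, the short Python sketch below computes one of the simplest fairness measures a data science team might start with: the demographic parity gap, the difference in positive-outcome rates across groups. It is purely illustrative; the group labels, the decision data, and the demographic_parity_gap helper are assumptions for this example, not practices prescribed in this article.

```python
# Minimal, illustrative bias check: the demographic parity gap.
# The data, group labels, and function name are assumptions for this sketch.

from collections import defaultdict

def demographic_parity_gap(records):
    """Largest difference in positive-outcome rates across groups.

    records: iterable of (group_label, outcome) pairs, where outcome is
    1 for a positive decision (e.g. a loan approval) and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome

    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# A model that approves group "A" far more often than group "B":
decisions = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 50 + [("B", 0)] * 50
gap, rates = demographic_parity_gap(decisions)
print(rates)               # {'A': 0.8, 'B': 0.5}
print(f"gap = {gap:.2f}")  # gap = 0.30 -- large enough to warrant review
```

A gap of zero means the model treats the groups identically on this measure; how large a gap is acceptable is a judgment call that the societal impact assessments mentioned above are meant to inform.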

As champions of Responsible AI, we urge companies to lean into it, engaging with peers and experts to share not only the wins but also the challenges. Companies must work together to advance the industry and build technology for the good of all.

5 ways to join the Responsible AI discussion

Drawing on our conversations with corporate executives and our participation in the World Economic Forum’s Responsible Use of Technology project community, we have distilled five areas where companies can build transparency into their Responsible AI initiatives.

Create and engage in safe spaces to learn: Closed forums such as the World Economic Forum’s Responsible Use of Technology project provide a safe, achievable first step toward transparency, a place for companies to speak openly in a risk-free, peer-to-peer setting. Interactions with other companies can accelerate knowledge sharing on Responsible AI practices and build confidence in your own efforts.

Engage your customers and community: Customer engagement and feedback build stronger products. Adding Responsible AI to these dialogues is a great way to engage with customers in a low-risk, comfortable environment.

Be deliberate: You don’t need to go from “zero to press release.” Give your programme time to develop: Begin with dialogue in closed forums, speak with your employees, maybe author a blog post, then expand from there. The important thing is to take steps towards transparency. The size of the steps is less important. Taking this progressive approach will also help you find your voice.

Diversity matters: Engaging with stakeholders from diverse backgrounds is an essential step in the process of improving Responsible AI. Actively listening to and addressing the concerns of people with different perspectives throughout the design, deployment, and adoption of AI systems can help identify and mitigate unintended consequences. This approach may also lead to the creation of better products that serve a larger market.

Set the right tone: Cultural change starts at the top. Senior executives need to set a tone of openness and transparency to create comfort in sharing vulnerabilities and learnings. Ultimately, this will ease organizational resistance to engaging in public dialogue about Responsible AI.

We are still in the early stages of Responsible AI, but we can make rapid progress if we work together to share successes, learning and challenges. Visit the WEF Shaping the Future of Technology Governance page to begin engaging with peers.

*Partner & Chief AI Ethics Officer, Boston Consulting Group (BCG) and Fellow, Artificial Intelligence and Machine Learning, World Economic Forum
**first published in: www.weforum.org
