
Why philanthropy needs to prepare itself for a world powered by AI

Artificial intelligence presents itself in both grand and mundane ways. It accelerates the scientific process, leading most recently to the development of COVID-19 vaccines at record speed.

By: EBR - Posted: Friday, April 16, 2021

The assurance that technology is being built and used ethically is an important consideration when it comes to AI.

by Vilas Dhar and Kay Firth-Butterfield*

Artificial intelligence presents itself in both grand and mundane ways. It accelerates the scientific process, leading most recently to the development of COVID-19 vaccines at record speed. It runs self-driving cars, allowing them to smoothly navigate downtown streets. And it manages our emails and online calendars, improving our productivity and well-being.

But A.I.’s potential for transforming human learning and experience also sparks unease and raises fundamental questions. Who should control the creation and use of these tools? Are we comfortable handing a small group of technologists the keys to our social and economic development engine? And what role should philanthropy play in protecting the most vulnerable and ensuring that A.I. benefits the greater good?

Controversies over facial recognition, automated decision making, and COVID-19 tracking have shown that realizing A.I.’s potential requires strong buy-in from citizens and governments, based on their trust that the technology is built and used ethically.

To explore these challenges, we recently brought together a group of 20 senior philanthropic leaders representing institutions including the Schmidt Family Foundation, the Mastercard Center for Inclusive Growth, and the Berggruen Institute at a virtual convening of the World Economic Forum. Our conversation reflected philanthropists’ profound interest in both the positive potential for A.I. and the need to more deeply understand how to harness, steer, and govern these tools to prevent misuse and ensure they are deployed for social good.

Those conversations contributed to the launch of a new Global AI Action Alliance — a platform for philanthropic and technology leaders to engage in the development of ethical A.I. practices and tools. They also led to the creation of an action plan that can help pave the path forward for deeper philanthropic participation in the effective, safe, and equitable use of A.I. to address societal need. The plan encompasses four key areas:

A commitment to learning. While some foundations are tech-savvy, philanthropy as a field is not at the forefront of digital transformation. But we shouldn’t leave philanthropy’s response to A.I.’s challenges and potential to a handful of foundations focused on technological innovation. A broad swath of philanthropic organizations, regardless of their focus, need to invest in learning about A.I., sharing their perspectives across the field and with grantees, and adapting traditional strategies to incorporate these technologies.

We need to be honest about our organizational blind spots and commit to building internal capacity where needed. That means learning from and hiring data scientists and A.I. practitioners. The Rockefeller Foundation has led the way in this area, hiring a chief data officer early on and convening working groups on the design and implementation of responsible A.I.

And today, the Patrick J. McGovern Foundation, which one of us — Vilas Dhar — heads, deepened its own knowledge base by announcing plans to merge with the Silicon Valley-based Cloudera Foundation to provide greater A.I. resources and expertise to grantees. Cloudera’s $9 million endowment and $3 million in existing grants, along with its staff and CEO, Claudia Juech, will form a new Data and Society program within the Patrick J. McGovern Foundation.

Integration of A.I. into key grant-making areas. Rather than relegating topics involving A.I. and data to the IT team, foundation leaders should consider how these technologies affect their key focus areas. Educational outcomes, for example, can be addressed through A.I. technologies that provide better language translation, increased access to online learning platforms, and interactive teaching tools. A.I. can also play an integral role in addressing issues such as food insecurity. For example, a nonprofit called the Common Market uses A.I. to improve its food supply chains between farmers, growing networks, and food banks across Texas, the Southeast, and the Mid-Atlantic.

At each stage of the decision making and programming process, philanthropic leaders should be asking, “What is the potential application of A.I., and what are the benefits and risks?”

Investment in safe data sharing. Philanthropic institutions have the advantage of looking across a wide range of organizations in a particular field or region and are well positioned to support the aggregation and sharing of data and technical knowledge. The fact that they rarely do so is a missed opportunity. A.I. tools rely on massive amounts of data to learn and pinpoint patterns on issues such as policing, homelessness, and public health. But for many nonprofits, it is challenging to amass data in meaningful quantities or to securely store and analyze the data they gather, especially since funding for such internal operations is typically scarce.

Philanthropic organizations should play a central role in supporting efforts to make data more accessible to grantees through vehicles such as data cooperatives and data trusts. These entities link data held by otherwise separate groups, providing even small nonprofits with robust data and analysis capabilities. Unlike many commercial data-gathering sources, they also address privacy concerns by ensuring that data is held confidentially and applied only for its intended use.

The Himalayan Cataract Project, for example, which seeks to cure blindness around the world through simple and inexpensive cataract surgery, is building a shared framework for how patient data is gathered, distributed, and used among ophthalmologic health organizations. This common standard not only gives health workers better insights on how to treat patients who may be served by multiple organizations but also ensures their privacy by imposing strict guidelines on how the data is used.

Diversification of voices. The conversation about development, ownership, and use of technology should expand to include philanthropists, activists, policy makers, and business leaders. In recent A.I. gatherings, we’ve brought together social-change activists and business leaders to facilitate discussions between those who understand the problems facing society and those who can build the solutions. Platforms such as data.org, launched by the Rockefeller Foundation and the Mastercard Center for Inclusive Growth, are furthering this type of dialogue by highlighting and funding A.I. solutions from around the world on issues such as improving economic well-being and creating safe and sustainable cities.

During our roundtable conversation at the World Economic Forum, Dan Huttenlocher, dean of the MIT Stephen A. Schwarzman College of Computing and board chair of the MacArthur Foundation, observed that “A.I. can help us leapfrog some of the societal challenges we face, but we have to design it to do so. There’s no such thing as a ‘good technology’ in and of itself — we have to make it work for us.”

Philanthropy occupies a position of financial privilege, moral responsibility, and public leadership. We must use that position as a platform for collaboration among those inside and outside of our field to build a future in which A.I. works safely and effectively to help solve humanity’s greatest challenges.

*Trustee, The Patrick J. McGovern Foundation and Head of Artificial Intelligence & Machine Learning; Member of the Executive Committee, World Economic Forum
**first published in: www.weforum.org
