Emerging legislation on commercial uses of facial recognition shows the work ahead

Facial recognition has emerged as a powerful biometric technology, both in practice and in our collective imagination

By: EBR - Posted: Friday, June 26, 2020

US state legislative proposals on commercial uses of facial recognition services (FRS) reveal how much work lies ahead in getting to laws that effectively address human rights and civil liberties (beyond privacy and security interests) and also that effectively instruct and engage business in doing so.

by Karen Silverman and Andrea Ortega*

Facial recognition has emerged as a powerful biometric technology, both in practice and in our collective imagination. Revelations about its use in the public domain and by law enforcement have fueled discussion about ethical concerns and corresponding legislative efforts to ban or limit its use.

Moreover, IBM has recently announced that it will “no longer offer general purpose IBM facial recognition or analysis software,” and Microsoft and Amazon have called for a year-long pause on police use of their facial recognition technologies and for Congress to take up ethical standard-setting.

These latest developments underscore that the time has come to work out appropriate uses, limits and safeguards at the use-case level, grappling both with the state of the technologies themselves and with how, and by whom, they are deployed in specific industries and settings.

Differences across emerging legislation proposals

Washington State has made progress on its proposed Washington Privacy Act (WPA), the first of its kind to specifically tackle commercial uses of FRS and to import nondiscrimination requirements beyond traditional privacy ones. California followed close on Washington’s heels with Assembly Bill 2261, which would have imposed restrictions on certain commercial FRS activities but was just blocked as part of a broader legislative effort that would likewise have enabled certain law enforcement uses.

These emerging proposals, however, reveal varying standards, highlighting the difficulty of legislating in this arena, the work remaining to define clear, consistent and effective standards, and the uncertainties that will confront businesses as they proceed (at least in the short and medium term).

Existing proposals, for instance, advance different approaches to nondiscrimination obligations. The WPA requires third-party testing for detecting “accuracy and unfair performance differences across distinct subpopulations.” It does not, however, define what constitutes “unfair performance,” and it is not clear whether the requirement extends to guarding against the use of FRS to violate basic civil rights beyond access to goods and services. (California’s measure did offer such guidance, but future laws from other localities may not account for these issues, leaving gaps to be bridged.)
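To make the testing requirement concrete, here is a minimal Python sketch of what subpopulation performance testing might look like in practice. The record fields, the choice of accuracy as the metric, and the gap calculation are all illustrative assumptions; the WPA prescribes none of them.

```python
from collections import defaultdict

def performance_by_subpopulation(results, group_key="subpopulation"):
    """Compute per-group accuracy for FRS match decisions.

    `results` is a list of dicts with illustrative fields:
      predicted:     bool - did the FRS declare a match?
      actual:        bool - ground-truth match label
      subpopulation: str  - demographic group label from the test set
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in results:
        group = r[group_key]
        total[group] += 1
        if r["predicted"] == r["actual"]:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

def max_performance_gap(per_group):
    """One crude candidate measure of 'unfair performance differences':
    the accuracy spread between the best- and worst-served groups."""
    rates = list(per_group.values())
    return max(rates) - min(rates)

# Illustrative use with made-up evaluation records:
results = [
    {"predicted": True,  "actual": True,  "subpopulation": "group_a"},
    {"predicted": False, "actual": False, "subpopulation": "group_a"},
    {"predicted": False, "actual": True,  "subpopulation": "group_b"},
    {"predicted": True,  "actual": True,  "subpopulation": "group_b"},
]
per_group = performance_by_subpopulation(results)
print(per_group, max_performance_gap(per_group))
```

Even this toy metric surfaces the question the bill leaves open: how large a gap, on which metric, counts as “unfair”?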

Differences extend to issues of oversight as well. The WPA requires controllers to ensure “meaningful human review” and to test the FRS in operational conditions before deployment, specifically for FRS intended to make decisions that produce legal or similarly significant effects on consumers. It does not define what might constitute a “meaningful human review” of FRS. California’s measure attempted to address this issue, calling for review or oversight by one or more trained individuals who “are ultimately responsible for making decisions based, in whole or in part, on the output of a FRS.” Still, varying definitions will complicate matters for businesses trying to comply with measures that do not align on key definitions.
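As illustration only, a short Python sketch of one way a human-review gate could be wired into an FRS pipeline: low-stakes decisions may be automated, while anything with legal or similarly significant effects is routed to a trained reviewer. Every name, type and threshold below is a hypothetical design choice, not something either bill specifies.

```python
from dataclasses import dataclass

@dataclass
class FRSOutput:
    subject_id: str
    match_confidence: float  # model score in [0, 1]

@dataclass
class ReviewDecision:
    reviewer_id: str  # the trained individual responsible for the call
    approved: bool
    rationale: str

def decide(output: FRSOutput,
           has_significant_effect: bool,
           request_human_review) -> bool:
    """Route an FRS output: automate only low-stakes cases; anything with
    legal or similarly significant effects goes to a trained reviewer."""
    if not has_significant_effect:
        # Hypothetical policy: low-stakes uses may rely on the raw score.
        return output.match_confidence >= 0.9
    review = request_human_review(output)
    # The reviewer, not the model, is ultimately responsible here.
    return review.approved

# Illustrative use: a reviewer verifies a high-stakes match by other means.
reviewer = lambda out: ReviewDecision("reviewer-7", True, "checked photo ID")
decide(FRSOutput("subj-1", 0.97), has_significant_effect=True,
       request_human_review=reviewer)
```

The design question the statutes leave open is precisely where that routing line sits, and what “trained” and “responsible” require of the reviewer.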

What businesses can do now

With so much in development, businesses have the opportunity to contribute to the policy debate and to define the coming standards. The World Economic Forum is helping tackle this challenge with its multistakeholder approach and actionable governance framework and is calling for engagement from businesses and stakeholders. A current project, Responsible Limits on Facial Recognition Technology, will help develop a governance framework to ensure safe and trustworthy use of FRS technology. As these and other initiatives take shape, there are some valuable steps that firms can take now:

1. Think about, and beyond, the controller/processor framework

The distinction between controllers and processors, borrowed from the EU’s GDPR, lies at the heart of the WPA and California’s now defunct AB 2261. As currently drafted, any business’s status as one or the other (or neither) will determine its privacy and — importantly — its nondiscrimination obligations. Indeed, the proposals seem to assume that businesses using FRS for identification, verification and persistent tracking of individuals will be controllers, and that businesses developing or supplying FRS to controllers will be processors so long as they process personal data — collect, use, store or analyze it, among other operations — following the instructions of the controller.

In practice, however, businesses using or supplying FRS may not always fall into the controller/processor framework, and their roles may change over time. For instance, an FRS supplier may be a controller from the start, or initially operate as a processor but end up a controller if it starts processing personal data outside of the controller’s instructions (e.g. when training a generic AI tool). Others might initially not process personal data but at some point start processing an FRS customer/controller’s specific customer data (e.g. when implementing that tool).

Moreover, FRS developers or suppliers that do not process personal data (including persistent tracking) would remain outside of the legislation’s scope and, therefore, would be subject to neither the privacy nor the nondiscrimination obligations. Under this scenario, a controller would have to contractually require such an FRS developer to submit in writing to these nondiscrimination obligations — in particular, to third-party testing and to implementing mitigation plans — in order to ensure compliance with the controller’s own nondiscrimination obligations. FRS developers or suppliers, in turn, may wish to contractually require FRS users to comply with applicable federal or state nondiscrimination laws.

Legislation may someday address FRS uses beyond this controller/processor construct, and that will be clarifying. Meanwhile, however, businesses should keep in mind that a) their handling of personal data increasingly defines new obligations, and b) existing nondiscrimination requirements may already apply to their digital activities.

2. Anticipate the most likely requirements and put a diverse team in charge

Under these proposals, both controllers and processors would be subject to broad nondiscrimination requirements through cross-contracting, human oversight, testing and audit requirements. This predominantly human-at-the-end-of-the-loop approach, however, might not be sufficient to address discrimination concerns around the use of FRS, especially as currently drafted. These requirements are, in any case, a starting point.

Businesses contemplating the use of FRS can start building out robust, trustworthy systems by focusing on the similarities between these proposals, including their calls for human oversight, pre-release testing for bias and harmful impacts, and regular audits (and by thinking upfront about their overall AI strategies and trust standards). More broadly, the work of building trustworthy AI systems should be a fundamental part of every phase of the product cycle, embedding standards into the design and operation of FRS systems and into the teams operating them. To do this well, a business needs to consider the different uses in depth and the quality and impacts of these technologies — in particular at the management and oversight levels — and to ensure that the responsible teams represent a diversity of lived experience and expertise.
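One small, concrete way to support the regular audits mentioned above is to log every FRS-assisted decision with enough context to reconstruct it later. The Python sketch below shows an append-only audit log; the schema is an assumption for illustration, as neither proposal prescribes a record format.

```python
import json
import time

def audit_record(model_version, subject_id, score, reviewer_id, outcome):
    """Build one audit entry for an FRS-assisted decision.
    All field names are illustrative; neither bill prescribes a schema."""
    return {
        "timestamp": time.time(),
        "model_version": model_version,  # ties the decision to a testable model
        "subject_id": subject_id,
        "score": score,
        "reviewer_id": reviewer_id,      # who exercised human oversight
        "outcome": outcome,              # final decision after review
    }

def append_audit_log(path, record):
    """Append one JSON line; append-only logs are simple to audit later."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

append_audit_log("frs_audit.jsonl",
                 audit_record("model-1.3", "subj-1", 0.97, "reviewer-7", "approved"))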

Looking ahead

The differences between even just these two legislative approaches would leave businesses without clarity as they attempt to standardize or anticipate compliance protocols.

Still, legislation and regulation are inevitable in time. Congress could soon enact standards and limitations for specific uses. California will almost certainly — sooner or later — try again to legislate commercial uses of this technology, and Illinois and Texas could attempt to amend their biometric privacy laws to incorporate nondiscrimination requirements. Likewise, certain cities (including New York City and San Francisco) are contemplating expanded bills on private uses of FRS. Meanwhile, the Brookings Institution has just issued a report that identifies preemption as one major impediment to regulation in the US; buried in this issue are inconsistencies among legislative proposals as well as the constitutional question regarding the balance of federal and state authorities.

As proposals take shape, it is critical for developers and users of these technologies to engage in the policy debate to define the coming standards (and to develop and test their own trust standards). Not doing so could risk longer and more costly market suspensions, business interruptions and reputational damage on the backend — well before any law has anything to say about it.

*Karen Silverman is a Global AI Council Member and the CEO and Founder of The Cantellus Group; Andrea Ortega is a technology and privacy attorney with an LLM in Law & Technology from UC Berkeley Law
**first published in: www.weforum.org
