by Samuel Stolton
Artificial Intelligence technologies in the EU are set to come under the scope of new legislation that the European Commission aims to put forward in April.
This follows a protracted period of policy consultation on the best direction for the EU to pursue in the field, in which the technology's risks and benefits have been weighed against one another.
There has been no shortage of those calling for the EU to equip itself with the resources to compete more seriously on the world stage in the development of innovative AI technologies. But on the same point, some stakeholders have been keen to remind the EU of the importance of preserving fundamental rights to privacy and the potential pitfalls that may arise should risks be overlooked.
This policy brief analyses the current state of play in the EU with regard to the foreseen regulatory frameworks for AI, the positions adopted so far by policy stakeholders, and ultimately the future for next-generation AI in the bloc.
BACKGROUND AND ISSUES
The onset of next-generation Artificial Intelligence (AI) applications in Europe presents new regulatory challenges with respect to technologies deemed to present risks to the existing legal framework, rights, and ethics.
The scope of potential challenges is broad, with many AI technologies already featuring prominently in everyday life: algorithms deciding the fate of our loan applications, recognising faces on public streets, flagging potentially illegal content online, targeting adverts to individual profiles, estimating the outcome of elections, and even being deployed in warzones around the world to highlight areas of potential hazard.
The use of algorithms to supplement human intelligence received a boost at the turn of the Millennium, with the advent of machine learning systems and the realisation among technologists that ‘Big Data’ could be harnessed for prediction, effectively providing machines with the vast stores of information required to make complex real-time decisions.
In this vein, access to and utilisation of data streams is of vital importance to the operation of AI technologies. While the EU has strict safeguards on the use of personal data for such purposes under its General Data Protection Regulation, it is now seeking to harness the power of industrial data sharing as a means to boost competitiveness with other global players in the field of data-driven innovation.
Meanwhile, the EU seeks to mitigate the potential pitfalls of utilising swathes of data for Artificial Intelligence applications as part of a new regulatory approach for AI. February 2020’s White Paper on AI, presented by the Commission, put forward a series of new measures that the EU intends to introduce as a means of tackling Artificial Intelligence technologies deemed to be of ‘high risk.’ It is these technologies in particular that the EU will home in on as part of a new regulatory environment for Artificial Intelligence.
While certain member states across the bloc, including Germany, would like to see broader rules laid out, AI technologies will be expected to abide by a new series of rules that respect European values while also fostering innovation – a balance that is easier said than done. This difficulty is further exacerbated by cultural differences the world over that will shape the functionality of algorithms used in AI, particularly with regard to ethics, which can differ depending on cultural context. In this regard, the EU is hoping to standardise a distinctly ‘European’, human-centred approach to ethics in AI.
In February 2020, the European Commission published its ‘White Paper’ on Artificial Intelligence, which laid the groundwork for new rules against AI tech deemed to be of ‘high risk.’ As part of the roadmap, the EU executive noted that certain technologies would be earmarked for future oversight, including those in ‘critical sectors’ and those deemed to be of ‘critical use.’
Those under the critical sectors remit include healthcare, transport, policing, recruitment, and the legal system, while technologies of critical use include those carrying a risk of death, damage or injury, or with legal ramifications.
On presenting the plans in February 2020, Commission President Ursula von der Leyen said that “high-risk AI technologies must be tested and certified before they reach the market.”
Sanctions could be imposed should certain technologies fail to meet such requirements. Such ‘high-risk’ technologies should also come “under human control,” according to Commission documents.
For areas deemed not to be of high-risk, an option could be to introduce a voluntary labelling scheme which would highlight the trustworthiness of an AI product by merit of the fact that it meets “certain objective and standardised EU-wide benchmarks.”
Another area in which the Commission will seek to provide greater oversight is the use of potentially biased data sets that may negatively impact demographic minorities.
In this field, the executive has outlined plans to ensure that unbiased data sets are used in Artificial Intelligence technologies, avoiding discrimination of under-represented populations in algorithmic processes.
However, the Commission held back on introducing strict measures against facial recognition technologies. A leaked version of the document had previously floated the idea of putting forward a moratorium on facial recognition software.
The Commission instead opted to “launch an EU-wide debate on the use of remote biometric identification,” of which facial recognition technologies are a part.
However, more recently, the Commission has not ruled out a future ban on the use of facial recognition technology in Europe, mulling over a public consultation on the subject.
Speaking to MEPs on the European Parliament’s Internal Market Committee in September, Kilian Gross of the Commission’s DG Connect said all options were still on the table.
Gross, who heads DG Connect’s Technologies and Systems for Digitising Industry Unit, also noted that the EU’s general data protection regulation (GDPR) covers the processing of biometric data in certain cases, but that the Commission would also examine whether the GDPR is sufficient to govern data acquired through facial recognition technology.
The Commission’s work in this context leads up to a legislative ‘follow up’ to the White Paper, which the Commission will present on 21 April. Rather than introduce hard regulation, however, the legislation is likely to present clarifications on the uses and types of certain Artificial Intelligence technologies that fall into the ‘high-risk’ bracket and therefore require greater oversight.
Moreover, the Commission would also like to exercise its influence on the world stage in the field of AI. Speaking at a recent online event, Kim Jorgensen, head of cabinet of Executive Vice-President Margrethe Vestager, said that now was the right time for the bloc to pursue a transatlantic accord with the United States on Artificial Intelligence, following Joe Biden’s inauguration as the new US president.
While the Commission has been drafting legislation for Artificial Intelligence, there has been no shortage of reports from the European Parliament in the field.
In October last year, Parliament adopted three wide-ranging texts on the subject. A report from Spanish S&D MEP Iban Garcia del Blanco urged the Commission to present a new legal framework outlining the ethical principles to be used when developing, deploying and using artificial intelligence, robotics and related technologies in the EU.
German EPP MEP Axel Voss’s text calls for a future-oriented civil liability framework to be adopted, making operators of high-risk AI strictly liable for any damage caused. A report from French Renew MEP Stephane Sejourne underlines the key issue of protecting intellectual property rights (IPRs) in the context of artificial intelligence.
And more reports are in the offing from Parliament, with attention also paid to the use of Artificial Intelligence in criminal law matters, as well as the use of AI in education, culture and the audiovisual sector.
Meanwhile, Parliament has further affirmed its intention to probe the potential risks and benefits of artificial intelligence technologies by establishing its own Special Committee on AI in June 2020.
On some of the more controversial issues surrounding biometric AI, Parliament has adopted an unambiguous stance, with consecutive reports highlighting the potential pitfalls at play. This was evidenced most recently as part of a resolution led by Identity and Democracy MEP Gilles Lebreton, adopted by Parliament in January this year, which called for the Commission to duly consider a moratorium on the software until all fundamental rights concerns have been taken into account.
Member states have thus far adopted divergent approaches to the future of AI regulation in Europe. On the one side are EU nations very sensitive to future risks – particularly in the field of data protection and the processing of biometric data streams. But there is also a contingent of EU member states determined to pursue innovation in the field of AI, as a means to better compete with other actors on the world stage.
In this latter group, October 2020 saw no fewer than fourteen member states come forward with their plans for the future of Artificial Intelligence, urging the European Commission to adopt a “soft law approach”.
In a position paper spearheaded by Denmark and signed by digital ministers from other EU tech heavyweights such as France, Finland and Estonia, the signatories call on the Commission to incentivise the development of next-gen AI technologies, rather than put up barriers.
“We should turn to soft law solutions such as self-regulation, voluntary labelling and other voluntary practices, as well as robust standardisation process, as a supplement to existing legislation that ensures that essential safety and security standards are met,” the paper noted.
“Soft law can allow us to learn from the technology and identify potential challenges associated with it, taking into account the fact that we are dealing with a fast-evolving technology,” it continued.
Along with Denmark, the paper was also signed by Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain and Sweden.
However, the softer policy angle could come into conflict with some of the other positions adopted by EU countries.
Germany, for example, is concerned that the Commission wants to apply restrictions only to AI applications deemed high-risk, and would prefer a much broader scope of technologies to be subject to new rules.
Berlin is also concerned that the Commission’s current plans would lead to a situation in which “certain high-risk uses would not be covered from the outset if they did not fall under certain sectors.”
Moreover, Germany’s June position also made clear reference to the risks to civil liberties posed by biometric remote identification tech, noting how they could lead to a potential encroachment on fundamental rights.
*first published in: www.euractiv.com