EU to Launch First AI Regulations

On April 21st, the European Union will announce its first regulatory framework governing the use of artificial intelligence. The new regulations will ban 'high risk' machine learning systems outright, introduce minimum standards for other machine learning technologies, and impose penalties of up to €20 million, or 4% of company turnover, for breaches.
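
As a rough illustration of how the turnover-linked penalty scales, the sketch below computes the applicable ceiling under the GDPR-style assumption that the greater of the two figures applies; the 'whichever is greater' rule and the example turnover figure are illustrative assumptions, not wording from the draft.

```python
# Minimal sketch of the penalty ceiling described above, assuming
# (as under GDPR) that the greater of the fixed and turnover-based
# figures applies. The example turnover is hypothetical.

FIXED_CAP_EUR = 20_000_000   # €20 million
TURNOVER_RATE = 0.04         # 4% of annual company turnover

def penalty_ceiling(annual_turnover_eur: float) -> float:
    """Return the larger of the fixed cap and 4% of turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# A company with €1bn in turnover would face a ceiling of €40m, not €20m.
print(f"€{penalty_ceiling(1_000_000_000):,.0f}")
```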

A draft of the new law, obtained by Politico, seeks to foster innovation and the development of AI systems for the general benefit of the EU's economy and society, in areas such as manufacturing, improved energy efficiency and climate change modeling. At the same time, it would prohibit the use of machine learning in credit scoring systems, the automated evaluation of penal sentences, and the assessment of suitability for social security benefits and for asylum or visa applications, among other prohibitions to be revealed later.

The draft explicitly states that Chinese-style social scoring systems for individuals and companies are in opposition to the values of the European Union, and will be banned under the regulation, along with 'mass surveillance' technologies powered by AI.

Regulatory Oversight

Following its appointment of a High-Level Expert Group on Artificial Intelligence in 2018, the EU also intends to institute a new European Artificial Intelligence Board, with each member state represented, along with a representative from the European Commission and the EU data protection authority.

Perhaps the most sweeping and potentially controversial provision in the draft forbids systems that cause harm to EU populations by 'manipulating their behavior, opinions or decisions', which would arguably include many technologies that power the analysis of commercial and political marketing.

The regulations will make exceptions for the combating of serious crime, permitting prescribed deployments of facial recognition systems, within limits of scope and duration of use.

As with the broad sweep of GDPR, these new regulations may prove general enough to produce a 'chilling effect' in areas where strict guidelines for the use of AI are not provided, with corporations risking exposure where their use of machine learning falls into a potential grey area within the regulations.

Bias Under New EU AI Regulations

By far the biggest challenge, and a possible legal quagmire, comes in the form of the draft regulation's stipulation that data sets must not 'incorporate any intentional or unintentional biases' that may facilitate discrimination.

Data bias is one of the most challenging aspects of developing machine learning systems – hard to prove, difficult to address, and deeply bound up with the internal cultures of data-gathering bodies. The issue increasingly places private and state research bodies in a cross-current between the need to represent distinct groups accurately (practically the founding objective of empirical statistical analysis) and the potential to promulgate racial profiling and cultural demonization, among other considerations.
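
To illustrate why that stipulation is so difficult to satisfy, the sketch below computes one of the simplest fairness diagnostics, a disparate-impact ratio between two groups in a labeled dataset; the column names, the toy data and the four-fifths threshold are assumptions for illustration, and passing such a check falls well short of demonstrating that a dataset contains no 'intentional or unintentional biases'.

```python
# Minimal sketch: a disparate-impact check on a hypothetical dataset.
# The "group"/"outcome" columns, the toy data and the 0.8 threshold
# (the US "four-fifths rule") are illustrative assumptions, not anything
# specified in the draft regulation.
import pandas as pd

df = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "outcome": [ 1,   1,   0,   1,   0,   0,   0,   1 ],
})

rates = df.groupby("group")["outcome"].mean()   # favourable-outcome rate per group
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: favourable outcomes are markedly "
          "rarer for one group in this data.")
```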

Therefore it's likely that non-EU markets will be hoping that the new regulation will provide at least some specific areas of guidance, and a range of applicable definitions in this regard.

External Resistance To EU AI Regulation

The new regulation is likely to have a deep impact on the legal ramifications of using machine learning to analyze public-facing data – as well as whatever data it will still be possible to extract from web users in the post-tracking age currently being ushered in by Apple, Firefox and (to a lesser extent) Chrome.

Jurisdiction may need to be clearly defined, for example in cases where the FAANG giants gather user data in compliance with GDPR, but process that data through machine learning systems outside the European Union. It's not clear whether algorithms derived via such systems could be applied to platforms inside the EU, and even less clear how such an application could possibly be proven.

In the case of AI used to inform custodial decisions and sentencing, a growing trend in the United States, the UK's own occasional experiments in this sector would have been covered by the new regulations had the country not exited the European Union.

In 2020 a White House draft memorandum on AI regulation stated the American case for light-touch regulation of AI, declaring that 'Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth'. This attitude seems likely to survive beyond the Trump administration under which the memorandum was published, and arguably presages friction between the US and the EU in the wake of the new regulation.

Similarly, the UK AI Council's 'AI Roadmap' expresses great enthusiasm for the economic benefits of AI adoption, but also a general concern that new regulation should not be allowed to hamper this progress.

The First Real Law For AI

The EU's commitment to a legal stance on AI is a notable first. The last ten years have been characterized by a blizzard of white papers, preliminary committee findings and recommendations from governments around the world, concentrating on the ethics of AI, with few actual laws being passed.

Geographical distribution of issuers of ethical AI guidelines, by number of documents released, in a survey from 2019. The highest numbers of guidelines were issued in the United States and within the European Union, followed by the United Kingdom and Japan. Canada, Iceland, Norway, the United Arab Emirates, India, Singapore, South Korea and Australia are represented with one document each; G7 member states, having contributed to a specific G7 statement, are highlighted separately. Source: https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf

Further Reading

National AI policies & strategies (OECD)