Written by Sarosh Bana.
Two key panels of Members of the European Parliament (MEPs) on Tuesday (13 February) endorsed landmark rules regulating artificial intelligence (AI), ahead of an April vote by the full legislative assembly that will pave the way for the world’s first such legislation on the technology.
Members of the two parliamentary committees, on civil liberties and consumer protection, cleared the provisional legislation, called the Artificial Intelligence Act, which aims to safeguard fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while promoting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact, and set safeguards for a technology used in industries ranging from banking and cars to electronic products and airlines, as well as for security and policing purposes.
In the public interest, the legislation prohibits biometric categorisation systems that classify people by their political, religious or philosophical beliefs, sexual orientation or race; untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; emotion recognition in workplaces and educational institutions; and AI systems that manipulate human behaviour to circumvent free will or exploit people’s vulnerabilities arising from their age, disability, or social or economic situation.
The provisional legislation must now be formally adopted by both Parliament and the Council to become EU law, with the full Parliament expected to vote on it in April.
“Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on AI will keep the European promise – ensuring that rights and freedoms are at the centre of the development of this ground-breaking technology,” said co-rapporteur Brando Benifei (S&D, Italy). “Correct implementation will be key – Parliament will continue to keep a close eye to ensure support for new business ideas with sandboxes, and effective rules for the most powerful models.”
Co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU is the first in the world to set in place robust regulation on AI, guiding its development and evolution in a human-centric direction. The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities.” He added that the EU has made impressive contributions to the world, and that the AI Act will be yet another one with a significant impact on citizens’ digital future.
The proposed Act also seeks to ensure that businesses, especially small and medium enterprises (SMEs), can develop AI solutions without undue pressure from the industry giants controlling the value chain. To this end, the agreement promotes so-called regulatory sandboxes and real-world testing, to be established by national authorities, so that innovative AI can be developed and trained before being placed on the market. Non-compliance is penalised with fines ranging from €7.5 million, or 1.5 per cent of global turnover, up to €35 million, or 7 per cent of global turnover, depending on the infringement and the size of the company.
For AI systems classified as high-risk, owing to their significant potential to harm health, safety, fundamental rights, the environment, democracy and the rule of law, the legislation mandates a fundamental rights impact assessment, among other requirements; these obligations will also apply to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are likewise classified as high-risk.
General-purpose AI (GPAI) systems, and the GPAI models they are based on, will also need to comply with transparency requirements proposed by Parliament in order to account for the wide range of tasks AI systems can accomplish and the quick expansion of their capabilities. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
There are more stringent obligations for high-impact GPAI models posing systemic risk. If these models meet certain criteria, they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the Commission, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.
Akil Hirani, Managing Partner and Head of Transactions at the Indian law firm Majmudar & Partners, says the AI Act will have a bearing on Indian businesses as well, since it applies extra-territorially and will impose a compliance burden on non-EU entities that either place AI systems on the EU market or integrate AI into services they provide in the EU.
He notes that the legislation will also apply when the ‘outputs’ of an AI system are used, or are intended for use, within the EU. Even offshore development will be regulated if the output is used on EU territory.
“With the finalisation of the EU Act under way and global discussions on AI regulation gaining momentum, it is an opportune moment for the Indian government to monitor AI initiatives and tools being designed, trained, implemented and used within India and undertake an impact assessment in the Indian context,” says Hirani. “As India is the most populous country in the world with a vast working age population, it is imperative to ensure that AI does not adversely displace human job seekers in our country, many of whom are low skilled.”