05/21/2024 | News release | Distributed by Public on 05/21/2024 04:19
The European Parliament formally endorsed the Artificial Intelligence Act (AI Act) on 13 March 2024. With the AI Act nearing its passage into law, the EU is set to lead the pack in legislating a detailed framework for regulating artificial intelligence (AI). The AI Act is expected to become law before the end of the current Parliament's legislative term on 6 June 2024. It will enter into force 20 days after its publication and be fully applicable 24 months later.
Last year we explored the vast capabilities of AI (from the terrifying to the ridiculous) and underscored the need for businesses to be proactive in understanding, under existing laws, the risks and benefits associated with AI use.
By adopting a human-centric, risk-based approach, the AI Act aims to manage risks associated with AI while optimising AI's innovation and efficiency potential.
With the exponential advance in AI technology, the mass beta-testing humanity is undertaking, and regulation doing its best to catch up, individuals and businesses should stay informed to position themselves appropriately for what lies ahead.
A sense of unease about the speed at which AI is developing has led to louder demands for its regulation. It's one of the few topics the public, businesses, and regulators can agree on. The AI Act seeks to calm anxieties by placing an outright ban on the more dystopian capabilities of AI (think profiling employees based on their race, gender or politics, or using AI to manipulate human behaviour).
In addition to easing these worries, the AI Act sets out to give businesses certainty about their risk profile by setting parameters around what AI usage is acceptable and what is not. If these boundaries are clearly understood, the EU hopes that businesses can use AI confidently to maximise efficiency and growth.
The AI Act is designed to respond to the level of risk associated with AI. The greater the risk, the stricter the enforcement measures. The riskiest forms of AI (those that threaten safety and human rights) are banned altogether, while the AI systems that pose a limited risk are simply required to operate with a certain level of transparency. This approach seeks to ensure the "punishment fits the crime".
Banned AI systems are those that carry an "unacceptable" risk. This category targets the most serious concerns raised by the EU when the AI Act was proposed in 2021, such as use of AI for social scoring or coercive subliminal messaging.
Banned AI includes systems that:
- deploy subliminal, manipulative or deceptive techniques to distort human behaviour;
- exploit vulnerabilities related to a person's age, disability or social or economic situation;
- categorise individuals based on biometric data to infer characteristics such as race, political opinions, religious beliefs or sexual orientation;
- carry out social scoring that leads to detrimental or unfavourable treatment;
- infer emotions in workplaces or educational institutions;
- scrape facial images from the internet or CCTV footage in an untargeted way to create facial recognition databases; or
- use "real-time" remote biometric identification in publicly accessible spaces for law enforcement purposes.
There are some exclusions to the prohibition. For example, facial recognition AI may be used to help find missing persons or to prevent specific terrorist threats, as well as to locate those suspected of crimes such as murder and kidnapping.
AI systems that are considered to create a "high risk" are not banned but are managed with restrictions. These systems can create genuine benefits and efficiencies if used properly.
High risk AI includes systems that:
- operate as safety components of products (or are themselves products) covered by EU product safety legislation;
- manage critical infrastructure, such as water, gas and electricity supply;
- determine access to education or assess students;
- are used in recruitment, or in decisions about the promotion or termination of employment;
- determine access to essential private and public services, such as credit scoring;
- are used in law enforcement, migration and border control; or
- are used in the administration of justice and democratic processes.
General purpose AI systems, like ChatGPT and Google's Gemini, are classified as lower risk. These foundational models must meet certain transparency requirements, including publishing summaries of the content used to train them, and must comply with EU copyright law.
Minimal-risk AI systems, such as spam filters and search and recommendation engines, are not regulated under the AI Act.
The AI Act targets all levels of the AI supply chain, capturing those who provide, deploy, import, distribute, and manufacture AI systems in the EU. It also covers anyone whose AI "output" is used in the EU. "Output" can include content, predictions, recommendations, or decisions generated by an AI system. The AI Act has extraterritorial effect, extending to actors outside of the EU if their AI systems are placed on the EU market or used in the EU, which makes it directly relevant to New Zealand AI developers looking to expand into the EU. That said, even those that are not affected by the AI Act's extraterritorial scope should still take an interest, as it sets the baseline for AI regulation across the globe.
Implementation will be phased, with the bulk of the AI Act's provisions coming into effect two years after it enters into force. A shorter deadline will apply to the most serious AI risks, which will be banned within six months. A three-year deadline will apply to AI systems that are already regulated by other EU law, such as medical devices, industrial machinery, cars and safety components of regulated products.
The AI Act imposes significant penalties for those who breach it. Contravening the AI Act's prohibition of certain activities can result in a fine of up to €35 million or 7% of the infringer's annual turnover, whichever is higher. Breaching the provisions that apply to high-risk systems can result in fines of up to €15 million or 3% of annual turnover.
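Because each cap is expressed as the greater of a fixed amount and a share of turnover, a business's maximum exposure under the figures above can be sketched as follows. This is a simplified illustration only; the tier names and the function are our own shorthand, not terms from the AI Act, and actual fines depend on the circumstances of the breach.

```python
def max_fine(annual_turnover_eur: float, tier: str) -> float:
    """Illustrative sketch: each cap is the higher of a fixed amount
    and a percentage of annual turnover, per the figures above."""
    caps = {
        "prohibited_practices": (35_000_000, 0.07),   # €35m or 7%
        "high_risk_obligations": (15_000_000, 0.03),  # €15m or 3%
    }
    fixed_amount, turnover_pct = caps[tier]
    return max(fixed_amount, turnover_pct * annual_turnover_eur)

# For a business with €1bn annual turnover, 7% (€70m) exceeds the €35m floor,
# so the percentage-based figure sets the ceiling.
print(max_fine(1_000_000_000, "prohibited_practices"))  # 70000000.0
```

The "whichever is higher" structure means the fixed amounts act as a floor on the cap for smaller businesses, while large businesses face caps scaled to their turnover.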
To support the incoming changes, the European Commission will establish the European Artificial Intelligence Office (AI Office). The AI Office faces the challenge of ensuring the right balance is struck when it comes to the regulation of AI so that the economic and societal benefits of its use can still be maximised.
As with privacy regulation, New Zealand initially looked like it might be a forerunner in AI regulation. In 2019, New Zealand was a pilot country for the World Economic Forum's Reimagining Regulation for the Age of AI: New Zealand Pilot Project, a multi-stakeholder policy project anchored in New Zealand that did not, however, evolve into actual legislative efforts. We may have missed the boat to be a pioneer, but we can leverage larger countries' groundwork, while adding our unique touch, to develop AI regulations tailored to New Zealand. AI and its regulation is certainly on the Government's radar, but regulatory activity is yet to be seen.
Technology does not stand still, and laws and regulations must work to keep pace. There is no denying that our society, and the risks it faces, are changing with the emergence of increasingly sophisticated AI systems. We recently discussed that the challenge for regulators will be striking the right balance between providing protection in the face of this new frontier of risk while ensuring regulation is not overly intrusive. This is a delicate dance. We want to protect New Zealand from a future that represents an algocracy more than it does a democracy, but we also want to harness the productivity enhancements that AI can offer.
If your business will be affected by the AI Act, it's never too early to start thinking about compliance. As AI systems, and our uses of them, continue to change, compliance should be an ongoing exercise rather than one fixed at a point in time. By using the right tools and tapping into up-to-date expertise, your business can remain agile throughout upcoming changes.
To discuss the best response for your business, please get in touch with our experts Campbell Featherstone, Hayley Miller, Güneş Haksever or Ashleigh Ooi.
This article was written by Lucy Tustin, a solicitor, and Güneş Haksever, a senior associate, in our corporate and commercial team.