December 9, 2024, Covington Alert
Tomorrow, the Federal Senate of the Brazilian National Congress is scheduled to vote on the country's new artificial intelligence ("AI") legal framework. The bill is modeled on the EU AI Act and takes an approach grounded in human rights, risk management, and transparency.
After five deadline extensions, the AI legal framework bill was reported by the Senate Temporary Committee on Artificial Intelligence ("CTIA") last Thursday, December 5, and placed on the Senate floor schedule for a simple majority vote tomorrow, December 10. The reported bill incorporates 85 amendments introduced during CTIA debates.
If approved by the Senate, the bill will be sent to the Chamber of Deputies, where it will likely be further amended. The new AI legal framework is a priority for both the congressional leadership and President Luiz Inácio Lula da Silva's administration. If adopted by Congress, the framework will be the first major piece of legislation in Brazil to regulate the digital economy since the approval of the Civil Rights Framework for the Internet Act of 2014 ("MCI") and the General Personal Data Protection Act of 2018 ("LGPD").
Tomorrow, the Senate is also scheduled to vote on a draft constitutional amendment granting the federal government exclusive competence over cybersecurity legislation and regulation. By amending Article 22 of the Constitution, the measure would bar any cybersecurity legislation or regulation at the state or local level. If approved by three-fifths of the Senate in two rounds of voting, the draft amendment will also be sent to the Chamber.
The proposed new AI legal framework sets rights and obligations for developers, deployers, and distributors of AI systems, covering the development, implementation, use, adoption, and governance of those systems. It applies to all AI systems except those used for personal, non-commercial purposes; developed exclusively for defense; used only for research and development; or limited to data storage and transfer infrastructure services. The executive branch will issue regulations establishing more flexible obligations in three additional cases: free and open-source systems, national development projects, and public interest projects.
The framework also requires that AI systems be based on 20 "fundamentals," including the centrality of humans and the protection and promotion of human rights, and that they comply with 17 other principles.
The proposed new AI legal framework defines two sets of rights. First, four rights of a person or group affected by an AI system: the right to information; the right to data privacy and protection; the right to human choice and participation; and the right to non-discrimination and bias correction. Second, three rights of a person or group affected by a high-risk AI system or an AI system that generates relevant legal outcomes: the right to explanation; the right to challenge and to request revision; and the right to human oversight and review.
The framework also creates incentives for developers, deployers, and distributors to conduct a preliminary risk assessment of an AI system to determine its risk level. It explicitly defines two risk tiers, excessive risk and high risk, and implies a third tier for systems that pose neither excessive nor high risk.
The seven AI systems listed in the bill under the excessive risk category cannot be developed, implemented, or used, with few exceptions. The 12 systems listed under the high risk category can be developed, implemented, or used as long as they comply with the framework's rules; this second list can be further expanded through regulation.
The proposed new AI legal framework imposes a number of governance obligations on developers, deployers, and distributors of AI systems. These obligations, to be detailed in regulation, cover high-risk systems and systems that pose neither excessive nor high risk, as well as general-purpose and generative AI systems and synthetic content, and they apply throughout the system's life cycle.
In addition to governance obligations, the framework requires developers and deployers of high-risk AI systems to conduct an algorithmic impact assessment, structured as a continuous, iterative process throughout the AI system's life cycle. Except for trade secrets, the assessment results must be made public.
The proposed new AI legal framework establishes additional rules related to conformity assessment, civil liability, best practices and governance, communication of serious incidents, public AI databases, oversight, administrative sanctions, regulatory sandbox, workers' protection, sustainable development, small businesses and startups, governments, and the framework's implementation.
The framework also designates the Brazilian National Data Protection Authority ("ANPD"), the country's existing data privacy regulator, as the main regulator of AI systems. ANPD actively advocated for this role and recently initiated a public consultation on the connection between AI and data protection.
The CTIA rapporteur decided to exclude from the bill all provisions that could be interpreted as a congressional mandate to regulate social media, except for language on "information integrity," which the opposition continues to view as a loophole for social media regulation.
There is substantial opposition in Congress to any attempt by the executive branch to regulate social media platforms. In 2023, the Lula administration, with the support of the Speaker of the Chamber of Deputies, tried and failed to pass the so-called "Fake News Bill," which would have established a legal framework for social media and instant messaging. Another attempt was abandoned in 2024 despite being a priority for the administration. Several congressional leaders are concerned that the Lula administration might use the new AI legal framework bill as a legislative vehicle for social media regulation.
The CTIA rapporteur also decided to keep in the bill five contentious articles establishing AI-related copyright obligations. These provisions require developers of AI systems to disclose their use of copyrighted content, including for text and data mining and model training. The copyright owner can bar the use of its content to develop AI systems, except when the system is employed by a limited set of organizations (including scientific institutions, museums, public archives, libraries, and educational organizations) and complies with specific rules.
Any developer, deployer, or distributor that uses copyrighted content in text and data mining or in the training or development of an AI system for commercial use will need to compensate the copyright owner for that use, in accordance with additional rules set in the framework. Moreover, developers, deployers, and distributors must comply with existing civil rights legislation when AI systems use images, sounds, voices, or videos protected by copyright.
The draft constitutional amendment also scheduled for a Senate floor vote tomorrow is seen as a key step in Congress's effort to establish a legal framework for cybersecurity. While the executive branch has already adopted a cybersecurity policy, the lack of a clear congressional mandate hinders its effective implementation.
If you have any questions concerning the material discussed in this client alert, please contact the members of our Global Compliance practice.