Brazil’s AI Act in the Making: A Comparative Analysis of PL 2.338/2023 and the EU AI Act

23 March 2026

Abstract. This article examines Brazil’s proposed artificial intelligence regulation — PL 2.338/2023 — through the lens of comparative law, analysing its structural relationship with the EU AI Act (Regulation (EU) 2024/1689). The central argument is that Brazil’s legislative framework represents not a mere transposition of the European model, but a conscious re-architecture of its risk-based paradigm around a human rights-centred approach — an effort that is theoretically more ambitious but institutionally more fragile, given Brazil’s nascent regulatory infrastructure. The article maps structural convergences and divergences between the two frameworks, identifies the points in Brazil’s bill most likely to be amended by the Chamber of Deputies, and draws practical implications for multinational companies operating in both jurisdictions. It also identifies gaps in the comparative legal literature on AI regulation in emerging economies, proposing a research agenda that may contribute to the broader debate on global AI governance.

Introduction: Why Brazil’s AI Regulation Matters Beyond Its Borders

The global debate on artificial intelligence regulation has, to date, been largely centred on three jurisdictions: the United States, which has pursued a fragmented, sector-by-sector approach; China, whose AI governance framework reflects a distinct political and economic model; and the European Union, which in August 2024 brought into force the world’s first comprehensive AI regulation — Regulation (EU) 2024/1689, widely known as the EU AI Act. Brazil, the world’s fifth-largest democracy and one of its most digitally intensive societies, has attracted comparatively little attention in the academic literature on comparative AI law — a gap that this article argues is both analytically unwarranted and practically consequential.

The case for paying close attention to Brazil’s regulatory trajectory is not merely demographic. Brazil operates one of the world’s most advanced real-time payment systems — PIX, with over 175 million registered users — and its federal government’s digital identity platform (gov.br) holds accounts for over 169 million citizens. The country’s legal and regulatory tradition draws simultaneously on civil law influences from Portugal and Germany, constitutional rights jurisprudence shaped by the 1988 Federal Constitution, and a growing body of data protection law anchored by the Lei Geral de Proteção de Dados (LGPD, Law 13.709/2018) — itself consciously modelled on the EU’s General Data Protection Regulation. In this context, Brazil’s decision to construct an AI regulatory framework heavily inspired by the EU AI Act is neither accidental nor inevitable: it is a deliberate policy choice with significant implications for how the European regulatory model travels across legal and institutional boundaries.

The comparative legal literature on AI regulation has expanded rapidly since the EU AI Act’s adoption, with important contributions mapping the Act’s constitutional dimensions,1 its interaction with the broader EU data governance ecosystem,2 and its sectoral implications. Yet analysis of how emerging economies are adapting the European model to distinct institutional, political and economic contexts remains sparse. Brazil’s PL 2.338/2023 offers a rare case study: a large emerging economy with a sophisticated legal tradition, directly engaging with the European framework and consciously modifying it for domestic purposes. Understanding where and why Brazil departed from the EU model is both a scholarly contribution and a practical necessity for companies navigating dual compliance obligations across the Atlantic.

This article proceeds as follows. Section II provides the legislative background of PL 2.338/2023 and an honest account of its current status — emphasising that this is a bill, not a law, and one whose final text remains uncertain. Section III maps the regulatory architectures of both frameworks in comparative perspective. Section IV examines Brazil’s theoretically distinctive innovation: a human rights chapter without direct equivalent in the EU AI Act. Section V analyses what this article identifies as the central vulnerability of Brazil’s approach: the institutional gap between regulatory ambition and enforcement capacity. Section VI addresses extraterritorial reach and its implications for multinational companies. Section VII identifies the points in the bill most likely to change in the Chamber of Deputies. Section VIII concludes with a research agenda.

Legislative Background: From PL 21/2020 to PL 2.338/2023

Brazil’s legislative engagement with AI regulation predates the EU AI Act by several years. PL 21/2020, authored by Deputy Eduardo Bismarck, was approved by the Chamber of Deputies in 2021 — at a time when the EU was still in early consultation phases of what would become the AI Act. PL 21/2020 was, however, criticised by civil society organisations and legal scholars as insufficiently protective of fundamental rights and misaligned with international regulatory best practices. Rather than transmitting that bill to the Senate, the Brazilian legislative process opened a new track.

In 2022, the Senate created a Commission of Jurists dedicated to AI regulation — a body of legal scholars, technologists and civil society representatives tasked with elaborating a more comprehensive framework. Their work resulted in PL 2.338/2023, presented by Senator Rodrigo Pacheco, the Senate President at the time. The bill was the subject of extensive public consultations, twelve public hearings in the Chamber of Deputies (between May and September 2025), and significant lobbying from the technology sector, civil society organisations, artists (concerned about copyright implications for generative AI training), and trade unions (concerned about labour protections that were eventually weakened in the final Senate text).

The Senate approved PL 2.338/2023 by symbolic vote (votação simbólica, a voice vote without an individually recorded tally) on 10 December 2024 and remitted it to the Chamber of Deputies in March 2025. As of the date of this article, the bill awaits the rapporteur’s opinion in the Special Committee of the Chamber of Deputies, under the rapporteurship of Deputy Aguinaldo Ribeiro (PP-PB). A significant constitutional complication has emerged: by attributing regulatory competencies to the ANPD, the text approved by the Senate may contain a vício de iniciativa — a constitutional flaw arising from the fact that legislation creating or expanding public expenditure and administrative structures may only be initiated by the Executive Branch. To remedy this, the Executive submitted a complementary bill in December 2025 creating the Sistema Nacional para Desenvolvimento, Regulação e Governança de Inteligência Artificial (SIA), which must be consolidated with PL 2.338/2023 before or during the Chamber’s vote.

The practical implication of this legislative history is fundamental for any analysis: PL 2.338/2023 is a bill, not a law. Its final text — including its risk classification criteria, liability regime, rights provisions, and governance architecture — may be substantially amended by the Chamber of Deputies. If the Chamber introduces significant changes, the bill must return to the Senate for a further vote. Brazil’s 2026 electoral calendar (presidential elections on 4 October 2026) creates a de facto legislative window that closes in August 2026. If approval is not achieved within that window, the bill will most likely be deferred to 2027. This article analyses the text approved by the Senate in December 2024 as the primary reference, with explicit identification of the provisions most likely to change.

Regulatory Architecture: Risk Classification in Comparative Perspective

At the level of regulatory architecture, PL 2.338/2023 and the EU AI Act share a foundational premise: not all AI systems pose the same risks, and regulatory obligations should be proportional to the potential for harm. Both frameworks adopt a tiered risk classification system, distinguishing between systems that are absolutely prohibited, systems that are subject to heightened obligations by virtue of their high-risk nature, and systems that must comply with lighter transparency requirements. This structural parallel is not coincidental — Brazil’s legislative process explicitly engaged with the EU AI Act as a primary reference, and several provisions of PL 2.338/2023 can be traced to their European counterparts.

The table below maps the key structural elements of both frameworks as they currently stand:

| Dimension | EU AI Act (Reg. EU 2024/1689) | PL 2.338/2023 (Senate text) |
| --- | --- | --- |
| Core paradigm | Risk-based, provider-centric | Risk-based + human rights chapter |
| Prohibited systems | Art. 5 — social scoring, subliminal manipulation, real-time biometric surveillance (with exceptions) | Art. 9 — social scoring, subliminal manipulation; facial recognition for public security with exceptions |
| High-risk categories | Annex III — education, employment, essential services, law enforcement, migration, justice, critical infrastructure | Art. 18 — similar sectors; notably excludes credit scoring from the highest-risk category (contested) |
| Transparency obligations | Art. 13 — technical documentation, instructions for use | Art. 12 — transparency to users, indication of AI involvement |
| Human oversight | Art. 14 — mandatory for high-risk systems | Art. 13 — mandatory for high-risk systems; right to human determination in Art. 7 |
| Individual rights chapter | Not present as standalone chapter (addressed indirectly via GDPR) | Chapter III — dedicated rights of persons affected by AI systems |
| Regulatory authority | National Competent Authorities (NCAs) + European AI Office | ANPD as SIA coordinator (subject to constitutional amendment) |
| Maximum sanctions | €35M or 7% of global annual turnover | R$50M or 2% of Brazilian revenue (proposed) |
| Entry into force | August 2024 (phased application through 2027) | Not yet applicable — bill pending in Chamber |
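The asymmetry in the sanctions row can be made concrete with a small calculation. The sketch below is illustrative only: it assumes the EU’s “whichever is higher” rule for the most serious infringements, and reads the Brazilian R$50M figure as a per-infraction cap on the 2% amount (mirroring the LGPD’s model) — the latter is an assumption about the Senate text that must be verified against the final statutory wording.

```python
def eu_max_fine(global_turnover_eur: float) -> float:
    # EU AI Act, most serious tier: EUR 35M or 7% of total worldwide
    # annual turnover, whichever is HIGHER.
    return max(35_000_000, 0.07 * global_turnover_eur)

def br_max_fine(brazil_revenue_brl: float) -> float:
    # ASSUMED reading of the Senate text (LGPD-style): 2% of revenue
    # in Brazil, capped at R$ 50M per infraction. Verify against the
    # text finally approved by the Chamber.
    return min(50_000_000, 0.02 * brazil_revenue_brl)

# For a company with EUR 1bn global turnover and R$ 1bn Brazilian revenue:
print(eu_max_fine(1_000_000_000))  # the 7% branch exceeds the EUR 35M floor
print(br_max_fine(1_000_000_000))  # the 2% branch stays under the R$ 50M cap
```

Note the structural difference the numbers expose: the EU ceiling scales without limit on global turnover, whereas (on this reading) the Brazilian ceiling is absolute and keyed to domestic revenue only — one reason the article describes the proposed Brazilian regime as less severe.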

The convergences are real and deliberate. The divergences, however, are analytically significant. The most structurally important departure from the EU model is not at the level of prohibited conduct or high-risk categories — which are broadly parallel — but at the level of what the framework is designed to protect. The EU AI Act is primarily a market regulation: it governs who can place AI systems on the EU market and under what conditions. Brazil’s PL 2.338/2023 is simultaneously a market regulation and a rights instrument — a distinction with profound implications for how the framework will be interpreted and enforced.

The Human Rights Paradigm: Brazil’s Theoretical Innovation

The most intellectually distinctive feature of PL 2.338/2023, and the one most likely to attract attention from comparative law scholars, is its standalone chapter on the rights of individuals affected by AI systems. Chapter III of the Senate text establishes a set of substantive rights that go beyond what the EU AI Act provides directly: the right to prior information about interaction with AI systems; the right to human determination in decisions that significantly affect an individual’s interests; the right to non-discrimination and correction of algorithmic bias; the right to privacy and data protection; and the right to explanation of automated decisions.

The EU AI Act addresses some of these concerns indirectly — the GDPR’s Article 22 provides for the right not to be subject to solely automated decisions with significant effects, and the AI Act’s transparency and human oversight requirements create structural conditions for accountability. But the EU legislator’s choice was to regulate AI primarily through obligations on providers and deployers, trusting the existing rights infrastructure to protect individuals. Brazil’s legislator made the opposite bet: by enshrining individual rights explicitly in the AI framework, the bill creates directly enforceable entitlements that do not depend on the technical architecture of GDPR-equivalent protections.

This is theoretically more ambitious for at least two reasons. First, it makes the human rights dimension of AI governance legally visible in a way that the EU model does not — signalling to courts, regulators and affected individuals that the framework is not only about market access but about fundamental rights. Second, it creates a direct cause of action for individuals harmed by AI systems, potentially bypassing the more complex path of LGPD-based litigation that currently represents the primary avenue for data rights enforcement in Brazil.

The theoretical ambition, however, comes with an enforcement paradox. Rights provisions are only as strong as the institutions capable of enforcing them. The EU AI Act was built on top of decades of GDPR enforcement infrastructure, national supervisory authorities with established competences, and a functioning European court system with a rich jurisprudence on data protection and fundamental rights. Brazil’s proposed framework is being constructed on top of an ANPD that has existed for less than six years, that issued its first significant sanction only in 2023, and that is currently navigating a constitutional challenge regarding the very competencies the AI framework seeks to assign to it. The comparison between regulatory text and regulatory capacity is where the most important questions for future research reside.

Institutional Fragility: The Governance Gap

The central argument of this article is that PL 2.338/2023’s theoretical strengths are undermined by a structural vulnerability that the comparative literature has not yet adequately addressed: the mismatch between the regulatory ambitions embedded in the text and the institutional capacity available to enforce them.

The EU AI Act presupposes a regulatory ecosystem that took decades to build. EU Member States have data protection authorities with established competences, experienced staff, and enforcement track records. The European Data Protection Board provides coordination across jurisdictions. The newly created European AI Office at the European Commission level adds a layer of technical expertise and cross-border oversight. Sector-specific regulators — financial supervisors, health authorities, labour inspectorates — are being tasked with AI oversight in their respective domains. This institutional architecture did not materialise when the AI Act was passed: it existed before, and the AI Act extended it.

Brazil’s situation is structurally different. The ANPD was established by decree in 2019 and began operations in 2020. Its enforcement capacity has grown — the Meta/Facebook and World/Worldcoin cases demonstrated a willingness to act against major technology companies — but it remains a young institution with limited technical staff and a budget that does not approach the resources available to European supervisory authorities. The constitutional challenge identified in December 2025 — the vício de iniciativa regarding the SIA’s governance structure — reveals that even the architecture of the regulatory system has not been definitively resolved, and that Brazil is attempting to write the governance provisions of an AI framework while simultaneously discovering the constitutional limits of the legislative branch’s authority to do so.

The parallel with LGPD’s implementation trajectory is instructive. The LGPD was enacted in 2018. The ANPD was created in 2020. Implementing regulations were issued gradually through 2021 and 2022. The first administrative sanction of note — against a company for failure to appoint a Data Protection Officer and respond adequately to a data subject’s request — was issued in 2023. If AI regulation follows a similar trajectory, meaningful enforcement of Brazil’s AI framework may not be operational until 2030 or beyond, even if the law is approved in 2026. For companies assessing compliance timelines, this institutional reality is as important as the legal text itself.

Extraterritorial Reach: Implications for Multinational Companies

Both the EU AI Act and PL 2.338/2023 claim extraterritorial reach. The EU AI Act applies to providers that place AI systems on the EU market, deployers established in the EU, and providers and deployers located outside the EU whose systems produce effects within the Union. PL 2.338/2023, as approved by the Senate, adopts a similar effects-based criterion: the framework applies to any natural or legal person that develops, deploys, or uses AI systems in Brazilian territory or whose systems produce effects in Brazil, regardless of where the entity is established.

For companies operating simultaneously in Brazil and the European Union, this creates a prospect of dual compliance — obligations under both frameworks that are structurally similar but technically distinct. The areas of overlap are significant: both frameworks require transparency to users, human oversight of high-risk systems, documentation and auditability, and impact assessments for sensitive applications. A well-designed compliance program built on the EU AI Act’s requirements will capture most of what Brazil’s framework demands. The areas of divergence — particularly the individual rights chapter and the liability regime, which may be altered by the Chamber — will require Brazil-specific attention.

For German and European companies with operations in Brazil, the practical challenge is one of sequencing. The EU AI Act is already in force and its phased application timeline is known. Brazil’s framework is not yet law, its final text is uncertain, and its regulatory authority is still consolidating its competences. The prudent approach is to implement EU AI Act compliance as the baseline — since the EU framework is more demanding in several respects and is legally certain — while monitoring Brazil’s legislative developments and building in the flexibility to adapt compliance programs as the Brazilian framework takes final shape. Companies for which Brazil represents a significant market should begin mapping their AI systems against the proposed Brazilian risk classification now, even in the absence of a final law, precisely because the mapping process takes time and will need to be completed before any compliance deadline is imposed.
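The system-mapping exercise recommended above can be sketched as a simple inventory exercise. The sector lists and tier labels below are deliberately simplified assumptions, not the statutory lists of Annex III of the EU AI Act or Art. 18 of PL 2.338/2023; the helper `risk_profile` and the `AISystem` record are hypothetical constructs for illustration only.

```python
from dataclasses import dataclass

# ASSUMED, simplified high-risk sector lists -- NOT the statutory texts.
# The only deliberate divergence modelled here is the contested exclusion
# of credit scoring from the highest-risk category in the Senate text.
EU_HIGH_RISK = {"education", "employment", "essential_services",
                "law_enforcement", "migration", "justice",
                "critical_infrastructure", "credit_scoring"}
BR_HIGH_RISK = EU_HIGH_RISK - {"credit_scoring"}

@dataclass
class AISystem:
    name: str
    sector: str
    deployed_in_eu: bool       # placed on the EU market / used in the EU
    effects_in_brazil: bool    # effects-based criterion in the Brazilian bill

def risk_profile(s: AISystem) -> dict:
    """Indicative risk tier of one system under each framework (sketch)."""
    return {
        "eu": ("high" if s.deployed_in_eu and s.sector in EU_HIGH_RISK
               else "minimal/transparency" if s.deployed_in_eu
               else "out of scope"),
        "brazil": ("high" if s.effects_in_brazil and s.sector in BR_HIGH_RISK
                   else "minimal/transparency" if s.effects_in_brazil
                   else "out of scope"),
    }

inventory = [
    AISystem("cv-screening", "employment", True, True),
    AISystem("loan-scoring", "credit_scoring", True, True),
]
for s in inventory:
    print(s.name, risk_profile(s))
```

The hypothetical loan-scoring system illustrates the divergence the article flags: high-risk under the EU framework but not under the Senate text, so an EU-baseline compliance program would over-comply in Brazil on that item rather than under-comply — a useful asymmetry when sequencing dual compliance.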

For legal practitioners advising clients on cross-border AI compliance, the ability to analyse both frameworks from a position of genuine dual expertise — grounded in practice under both Brazilian and European law — is not a luxury but a prerequisite for effective counsel. The perspective of a lawyer admitted in both jurisdictions, familiar with both the EU AI Act’s implementation in the German and European market and with Brazil’s legislative process and institutional context, offers analytical depth that jurisdiction-specific practice cannot provide. For a discussion of the current state of Brazil’s AI regulatory framework in Portuguese, including a detailed analysis of PL 2.338/2023’s structure and legislative timeline, see our companion article in the site’s Portuguese-language section.

Points Most Likely to Change in the Chamber of Deputies

Any analysis of PL 2.338/2023 would be incomplete without an honest assessment of which provisions are most likely to be amended by the Chamber of Deputies. The following five areas concentrate the most significant political and technical disagreements in the current legislative debate:

Copyright and AI training data. The Senate text includes provisions requiring AI developers to disclose which copyright-protected works were used in training their systems and granting rights holders the right to opt out of such use. These provisions generated significant pushback from the technology sector and were the subject of intense lobbying during the Chamber’s public hearings. The final formulation — including whether opt-out rights will survive, whether they will apply retroactively, and how they will be enforced — remains contested and is likely to be modified.

Credit scoring as a high-risk application. Civil society organisations and consumer protection advocates consistently argued during the legislative process that AI-based credit scoring should be classified as a high-risk application, given its significant impact on individuals’ access to financial services. The Senate text did not fully accept this classification. The Chamber may revisit the issue, with the outcome depending on the balance between financial sector lobbying and consumer rights advocacy.

The liability regime. Whether developers and deployers of high-risk AI systems should be subject to strict liability (regardless of fault) or fault-based liability remains one of the most commercially significant open questions. The Senate text’s approach to this issue is ambiguous in ways that the Chamber may resolve in either direction. The outcome will determine the insurance and risk management obligations of companies operating high-risk AI systems in Brazil.

Labour protections. The version of PL 2.338/2023 originally elaborated by the Commission of Jurists included more robust protections for workers affected by AI-driven automation — including requirements for impact assessments on employment and worker participation in AI governance processes. These provisions were substantially weakened in the Senate, following lobbying from industry confederations. Labour unions and civil society organisations have indicated they will seek to restore them in the Chamber.

The ANPD’s role and the SIA structure. The constitutional challenge regarding the governance structure — the vício de iniciativa — will need to be resolved either through the consolidation of the Executive’s complementary bill or through a different structural solution. How the Chamber resolves this may fundamentally alter the distribution of regulatory authority between the ANPD, sector-specific regulators, and any new AI-specific body created by the SIA framework.

Conclusion: Brazil as a Laboratory for Global AI Governance

PL 2.338/2023 is, at its core, an experiment in regulatory transfer — an attempt to adapt a framework designed for the institutional context of the European Union to the political, economic and institutional realities of Brazil. The analysis in this article suggests that the experiment has produced a text that is theoretically interesting and practically uncertain in roughly equal measure.

The theoretical interest lies in Brazil’s choice to go beyond the EU model’s risk-based architecture and embed a standalone human rights chapter in the AI framework. Whether this represents a genuine innovation in comparative AI governance or an aspirational provision that will be diluted in enforcement depends on factors that cannot yet be determined: the final text approved by the Chamber, the pace of institutional capacity-building at the ANPD, and the willingness of Brazilian courts to interpret AI rights provisions expansively. The evidence from LGPD implementation suggests a cautious optimism — rights have been enforced, but enforcement capacity has grown more slowly than the regulatory text implied.

The practical uncertainty lies in everything discussed in Section VII: the provisions most likely to change, the constitutional challenge yet to be resolved, and the electoral calendar that may defer final approval to 2027. For companies and legal practitioners, the appropriate response to this uncertainty is not paralysis but strategic preparation: mapping AI systems against the proposed risk classification framework, building compliance programs on the EU AI Act baseline while monitoring Brazilian legislative developments, and engaging with legal counsel capable of advising across both jurisdictions.

The broader research agenda opened by Brazil’s experience is significant. How do emerging economies with sophisticated legal traditions adapt regulatory frameworks designed for different institutional contexts? Does a human rights-centred approach to AI governance produce better outcomes for affected individuals than a risk-based approach — and if so, under what institutional conditions? How should regulatory cooperation between Brazil and the European Union be structured to avoid compliance fragmentation for companies operating in both markets? These are questions that the comparative legal literature on AI governance has only begun to address, and that Brazil’s legislative experiment makes newly urgent.

For those engaged in the academic study of AI governance, Brazil’s PL 2.338/2023 offers a rare opportunity: a detailed case study of conscious regulatory borrowing, with all its adaptations, compromises and institutional constraints visible in the legislative record. The analysis presented here is necessarily provisional — the bill is still being written. But the analytical framework for evaluating its final form is already available. The conversation between Brussels and Brasília on AI regulation has only just begun.

For further information on the legal implications of artificial intelligence and emerging technologies for companies operating in Brazil and Germany, visit our practice area hub.

Frequently Asked Questions

Is PL 2.338/2023 already in force in Brazil?

No. As of March 2026, PL 2.338/2023 is a bill pending in the Chamber of Deputies — not yet a law. It was approved by the Brazilian Senate in December 2024 and is currently awaiting the rapporteur’s opinion in the Chamber’s Special Committee. The EU AI Act, by contrast, entered into force in August 2024 and is progressively applicable. For the latest legislative developments, see the official tracking page at the Chamber of Deputies.

How does Brazil’s proposed AI regulation compare to the EU AI Act?

Both frameworks adopt a risk-based classification approach. Brazil’s key structural difference is a dedicated chapter on the rights of individuals affected by AI systems — not present as a standalone element in the EU AI Act, which relies on the GDPR for individual rights protections. Brazil’s proposed sanction regime (up to R$50M or 2% of Brazilian revenue) is less severe than the EU’s (up to €35M or 7% of global turnover). For a detailed comparison, see our Portuguese-language analysis.

Do foreign companies need to comply with Brazil’s AI regulation?

Under the proposed text, yes — to the extent that their AI systems produce effects in Brazilian territory, regardless of the company’s place of establishment. This extraterritorial criterion mirrors the EU AI Act’s approach. Companies with operations in both Brazil and the EU face the prospect of dual compliance obligations under structurally related but technically distinct frameworks. Early mapping of AI systems against the proposed Brazilian risk classification is advisable even before the law is finalised.

What is the ANPD’s role in AI oversight?

Under the Senate text, the ANPD (Brazil’s National Data Protection Authority) was positioned as the coordinating body of the proposed National AI Governance System (SIA). This assignment faces a constitutional challenge — the Executive Branch submitted a complementary bill in December 2025 to resolve it. The ANPD’s final role depends on how the Chamber resolves this constitutional issue. Regardless of the outcome, the ANPD already exercises oversight over AI systems that involve personal data processing, under the LGPD’s existing framework.

When will Brazil’s AI law be approved?

This is genuinely uncertain. Brazil’s 2026 electoral calendar creates a legislative window that effectively closes in August 2026. If the bill is not approved before then, approval will likely be deferred to 2027. Even if approved in 2026, implementing regulations, regulatory capacity-building and judicial interpretation will require additional years before the framework is fully operational. Companies should plan compliance timelines accordingly.

Notes

1 Finck, M. (2026). The EU Artificial Intelligence Act: A Commentary. Oxford University Press; Finck, M. (2026). ‘The Constitutional Dimension of AI Regulation in the EU: The AI Act and Its Impact on Member State Competence’, Common Market Law Review (forthcoming). Available at the Chair for Law and Artificial Intelligence, University of Tübingen.

2 Finck, M. (2023). ‘In Search of the Lost Research Exemption: Reflections on the AI Act’, GRUR International ikaf100; Montag, C. & Finck, M. (2024). ‘Successful implementation of the EU AI Act requires interdisciplinary efforts’, Nature Machine Intelligence.

3 On risk-based versus rights-based approaches to AI governance, see generally: Calo, R. (2017). ‘Artificial Intelligence Policy: A Primer and Roadmap’, UC Davis Law Review 51; Doshi-Velez, F. & Kortz, M. (2017). ‘Accountability of AI under the Law: The Role of Explanation’. Berkman Klein Center Working Paper.

Maurício Lindenmeyer Barbieri is Managing Partner at Barbieri Advogados. He holds an LLM in Labour Procedural Law from the Federal University of Rio Grande do Sul (UFRGS, Brazil) and is admitted to practise in Germany (Rechtsanwaltskammer Stuttgart, no. 50.159), Portugal (Ordem dos Advogados de Lisboa, no. 64.443L) and Brazil (OAB/RS no. 36.798 · OAB/DF · OAB/SC · OAB/PR · OAB/SP). He is a Certified Public Accountant (CRC-RS no. 106.371/O) and a member of the German-Brazilian Lawyers’ Association (Deutsch-Brasilianische Juristenvereinigung — DBJV). His practice spans Brazilian and German law, with a particular focus on technology regulation, cross-border transactions and AI compliance for companies operating in both jurisdictions. He is currently pursuing doctoral research in AI law in comparative perspective.

This article is for informational and academic purposes only and does not constitute legal advice. The legislative status of PL 2.338/2023 is subject to change; the article will be updated as significant legislative developments occur. For legal advice on AI compliance in Brazil, Germany or cross-border operations, please contact Barbieri Advogados.