1/12/2025 · 4 min read


The Global Race to Regulate Artificial Intelligence: Lessons from the US, Canada, and the EU
These days, we are all talking about artificial intelligence and its contribution to humanity, but have we stopped to consider the risks it could entail if not properly regulated? (I recommend reading our article "Elon Musk and the Artificial Intelligence Revolution: Between Dream and Danger" on this blog.) That concern led us to investigate in some depth what several countries are doing about it.
I believe that the regulation of artificial intelligence (AI) has become a global priority because of its technological, ethical, and social impact. Jurisdictions such as the United States, Canada, and the European Union (EU) have adopted diverse approaches that reflect their legal, economic, and cultural contexts. This article analyzes the regulatory initiatives in these three regions in depth, highlighting their similarities, differences, and challenges.
AI Regulation in the United States
The United States adopts a decentralized and fragmented approach to AI regulation, combining efforts at the federal and state levels. At the federal level, laws such as the National Artificial Intelligence Initiative Act of 2020 seek to foster innovation through research and technological development in key agencies. This legislation promotes a collaborative approach between the government, industry, and the academic sector.
Another important milestone is the Blueprint for an AI Bill of Rights, introduced in 2022. While not binding, this document establishes ethical principles such as transparency, privacy, and non-discrimination. In 2024, the report of the Bipartisan House Task Force on AI provided recommendations to guide future legislation, underlining the importance of balancing innovation with the protection of civil rights.
At the state level, since 2019, 17 states have passed 29 laws related to AI. California, Colorado, and Virginia lead these efforts with regulatory frameworks that address data privacy and accountability. For example, the Utah Artificial Intelligence Policy Act, in effect since 2024, regulates the use of generative AI and protects consumers.
A distinctive aspect of the United States is the use of "regulatory sandboxes," which allow for the testing of AI technologies under regulatory supervision. These initiatives foster innovation while ensuring regulatory compliance. However, the lack of cohesion in state and federal laws poses risks of fragmentation and legal gaps.
AI Regulation in the EU
The European Union stands out for its comprehensive, preventive approach. The Artificial Intelligence Act (AI Act), approved in 2024, is the first comprehensive legal framework for AI to gain global recognition. It categorizes AI systems by their level of risk (a minimal code sketch of this tiering follows the list):
Unacceptable risk: Technologies such as social scoring systems, which evaluate and classify people based on their behavior, interactions, or collected data, are prohibited due to concerns about privacy, discrimination, and the negative impact on fundamental rights.
High risk: Applications in critical infrastructure, health, and education require strict conformity assessments.
Limited risk: Systems such as chatbots must meet transparency obligations, for example disclosing to users that they are interacting with AI. A fourth tier, minimal risk, covers applications such as spam filters, which face no new obligations under the Act.
A key aspect of the AI Act is its human-centered approach, promoting trust in AI technologies and ensuring they are safe and ethical. Government agencies have additional transparency obligations to ensure the responsible use of AI.
However, this framework also faces criticism. Companies must meet complex risk assessment requirements, which can increase operating costs. Additionally, the broad definition of AI in the legislation has raised concerns about its applicability to emerging systems, which could stifle innovation.
AI Regulation in Canada
Canada has adopted an intermediate approach, seeking to balance innovation and ethical values. The Artificial Intelligence and Data Act (AIDA), introduced in 2022 as part of the Digital Charter Implementation Act (Bill C-27), establishes principles for the ethical design, development, and use of AI.
AIDA emphasizes privacy, security, and non-discrimination. Companies must comply with strict regulations on the collection and use of personal data. However, the law faces significant challenges: legislative delays and a minority government have hindered its approval. This political instability also affects the modernization of privacy legislation, such as the Personal Information Protection and Electronic Documents Act (PIPEDA).
To address these gaps, Canada has implemented a voluntary code of conduct for companies developing generative AI. This code seeks to build trust until formal regulations are approved.
Comparative Analysis
The regulatory approach of each region reflects its priorities and contexts:
The United States prioritizes flexibility and economic growth, allowing self-regulation in many sectors. However, this can result in inconsistent protections and regulatory gaps.
The EU adopts a preventive approach, focused on the protection of fundamental rights and strict regulation of high-risk applications. While it is a pioneering model, it faces challenges related to its complexity.
Canada balances innovation with ethics, aligning its regulations with international standards. However, its progress is limited by political instability.
In general, these approaches highlight the need for global collaboration to harmonize AI regulations. Technology transcends borders, and regulatory efforts must reflect this reality to maximize the benefits of AI while mitigating its risks.
Conclusion
The regulation of artificial intelligence is a complex task that requires a balance between innovation and ethics. The United States, Canada, and the EU offer distinct models, each with strengths and limitations. While the United States promotes flexibility, the EU prioritizes the protection of rights, and Canada seeks an intermediate path.
As AI continues to transform society, it is essential for governments, businesses, and civil society to collaborate to develop regulatory frameworks that promote trust, security, and inclusion. Policymakers must establish closer communication channels among stakeholders, such as non-governmental organizations, academic institutions, and the private sector, to ensure that all perspectives are heard. Furthermore, it is crucial to invest in education and public awareness about the benefits and risks of AI, promoting the development of skills in technological and ethical areas. This comparative analysis highlights the importance of learning from international best practices and fostering cross-border collaborations to build a more equitable, sustainable, and human-centered technological future.