We monitor Mexico's Official Gazette (Diario Oficial de la Federación) to identify implicit and explicit mentions of AI, assess regulatory gaps, and provide actionable analysis for policymakers, researchers, journalists, and the general public.
Our observatory classifies identified AI risks into 15 main categories, based on the frameworks "What Risks Does AI Pose?" (2023) and "Future Risks of Frontier AI" (UK Government, 2023), adapted to the Mexican regulatory context.
AI systems that commit critical errors in essential infrastructure: autonomous vehicles causing accidents, incorrect medical diagnoses, failures in smart electrical grids, or erroneous decisions in financial systems, resulting in physical or economic harm, or loss of life.
Systems that learn and perpetuate historical biases, generating systemic discrimination in hiring, credit, insurance, criminal justice, and public services based on protected characteristics such as race, gender, age, or geographic location.
AI technologies that infer sensitive information without consent: facial recognition in public spaces, behavior analysis without transparency, massive collection of biometric data, and predictive profiling that violates fundamental privacy rights.
Large-scale generation of hyper-realistic fake content: deepfakes of public officials, audio and video manipulation for electoral disinformation, identity impersonation, and erosion of trust in authentic digital evidence.
AI models trained with protected content without authorization, reproducing artistic, literary, or musical works without compensating creators, generating tensions between technological innovation and intellectual property rights.
Exploitation of workers in the AI value chain: data labelers with precarious wages, content moderators exposed to traumatic material without protections, and degrading working conditions in massive dataset annotation.
Mass automation without social safety nets: job losses in sectors such as transportation, customer service, data analysis, and creative professions, without retraining programs or just transition for displaced workers.
Erosion of genuine human interactions by automated systems: virtual assistants replacing human connection, algorithms prioritizing engagement over well-being, and technological dependence that fragments social and community bonds.
Use of AI for mass surveillance without democratic controls: facial recognition systems in public spaces, monitoring of digital communications, profiling of dissidents, and suppression of civil liberties through social control technology.
Accumulation of AI capabilities in few entities without accountability: technological monopolies controlling critical infrastructure, government dependence on private providers, and power asymmetries between AI developers and the rest of society.
AI that facilitates the creation of biological weapons or pathogens through DNA synthesis, identification of biological vulnerabilities, or democratization of access to dangerous knowledge without adequate biosafety controls.
Autonomous weapons systems that make lethal decisions without adequate human supervision, accelerating armed conflicts through automated responses, reducing critical decision times and increasing risks of unintended escalation.
Progressive loss of human capacity to understand, supervise, and control increasingly complex and autonomous AI systems, creating critical dependencies on infrastructure that can no longer be managed without AI assistance.
Advanced AI agents pursuing objectives not aligned with human values, optimizing incorrect metrics in unexpected and potentially catastrophic ways, especially in high-impact decision-making systems.
Risks not yet identified or fully understood that emerge from unforeseen capabilities of AI systems, complex interactions between multiple systems, or applications in novel domains without historical precedents.
Access our complete database of findings, analysis, and timeline of risks identified in Mexico's Official Gazette.
Complete database with implicit and explicit mentions of AI in Mexican regulation, including gap analysis and recommendations.
Mexico faces a regulatory vacuum on AI. While the technology advances rapidly, legal frameworks are not prepared. Our observatory proactively identifies these gaps to inform evidence-based policymaking.
We analyze the Official Gazette daily using a classification adapted from "What Risks Does AI Pose?" (2023) and "Future Risks of Frontier AI" (UK Government, 2023). We identify implicit and explicit mentions, assess risks, and generate actionable recommendations.
Designed for policymakers who need information to legislate, researchers studying AI governance, journalists covering technology and regulation, and the general public interested in government transparency.
Our process includes: (1) Search for explicit and implicit mentions of AI, (2) Classification according to risk categories, (3) Analysis of regulatory gaps, (4) Severity assessment, and (5) Generation of specific recommendations.
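The first two steps of this process can be sketched programmatically. The following is a minimal illustration, not the observatory's actual classifier: all keyword lists, category names, and the `scan_entry` function are hypothetical, chosen only to show how explicit/implicit mention detection and category tagging might work on a single gazette entry.

```python
# Hypothetical keyword map: risk category -> terms that may signal it.
# Illustrative only; the observatory's real taxonomy has 15 categories.
RISK_KEYWORDS = {
    "surveillance": ["reconocimiento facial", "videovigilancia"],
    "labor_displacement": ["automatizacion", "desplazamiento laboral"],
    "bias_discrimination": ["sesgo algoritmico", "discriminacion"],
}

# Terms treated as explicit references to AI.
EXPLICIT_AI_TERMS = ["inteligencia artificial", "aprendizaje automatico", "machine learning"]

def scan_entry(text):
    """Return (mention_type, matched_categories) for one gazette entry."""
    lowered = text.lower()
    explicit = any(term in lowered for term in EXPLICIT_AI_TERMS)
    categories = sorted(
        cat for cat, terms in RISK_KEYWORDS.items()
        if any(t in lowered for t in terms)
    )
    # An entry with no explicit AI term but a matched risk category
    # is treated as an implicit mention.
    mention = "explicit" if explicit else ("implicit" if categories else "none")
    return mention, categories

entry = "Acuerdo sobre el uso de inteligencia artificial y reconocimiento facial."
print(scan_entry(entry))  # -> ('explicit', ['surveillance'])
```

Steps (3) through (5), which require legal judgment about gaps, severity, and recommendations, are performed by analysts rather than automated matching.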
Developed by Max Pinelo and Pilar Moncada in collaboration with AI Safety Mexico
For more information, collaborations, or to report additional findings:
Contact Us