Government AI Impact Assessment

We monitor Mexico's Official Gazette (Diario Oficial de la Federación) to identify implicit and explicit mentions of AI, assess regulatory gaps, and provide actionable analysis for policymakers, researchers, journalists, and the general public.

Risk Categories

Our observatory classifies identified AI risks into 15 main categories, based on the frameworks "What Risks Does AI Pose?" (2023) and "Future Risks of Frontier AI" (UK Government, 2023), adapted to the Mexican regulatory context.

Malfunctions & Errors

R1

AI systems that commit critical errors in essential infrastructure: autonomous vehicles causing accidents, incorrect medical diagnoses, failures in smart electrical grids, or erroneous decisions in financial systems, resulting in physical harm, economic loss, or loss of life.

Discrimination & Bias

R2

Systems that learn and perpetuate historical biases, generating systemic discrimination in hiring, credit, insurance, criminal justice, and public services based on protected characteristics such as race, gender, age, or geographic location.

Privacy Invasions

R3

AI technologies that infer sensitive information without consent: facial recognition in public spaces, behavior analysis without transparency, massive collection of biometric data, and predictive profiling that violates fundamental privacy rights.

Disinformation & Deepfakes

R4

Large-scale generation of hyper-realistic fake content: deepfakes of public officials, audio and video manipulation for electoral disinformation, identity impersonation, and erosion of trust in authentic digital evidence.

Copyright Infringement

R5

AI models trained with protected content without authorization, reproducing artistic, literary, or musical works without compensating creators, generating tensions between technological innovation and intellectual property rights.

Worker Exploitation

R6

Exploitation of workers in the AI value chain: data labelers paid precarious wages, content moderators exposed to traumatic material without protections, and degrading working conditions in large-scale dataset annotation.

Labor Displacement

R7

Mass automation without social safety nets: job losses in sectors such as transportation, customer service, data analysis, and creative professions, without retraining programs or a just transition for displaced workers.

Reduced Social Connection

R8

Erosion of genuine human interactions by automated systems: virtual assistants replacing human connection, algorithms prioritizing engagement over well-being, and technological dependence that fragments social and community bonds.

Authoritarian Surveillance

R9

Use of AI for mass surveillance without democratic controls: facial recognition systems in public spaces, monitoring of digital communications, profiling of dissidents, and suppression of civil liberties through social control technology.

Concentration of Power

R10

Accumulation of AI capabilities in few entities without accountability: technological monopolies controlling critical infrastructure, government dependence on private providers, and power asymmetries between AI developers and the rest of society.

Bioterrorism Facilitation

R11

AI that facilitates the creation of biological weapons or pathogens through DNA synthesis, identification of biological vulnerabilities, or democratization of access to dangerous knowledge without adequate biosafety controls.

War Escalation

R12

Autonomous weapons systems that make lethal decisions without adequate human supervision, accelerating armed conflicts through automated responses, compressing critical decision times, and increasing the risk of unintended escalation.

Gradual Loss of Control

R13

Progressive loss of human capacity to understand, supervise, and control increasingly complex and autonomous AI systems, creating critical dependencies on infrastructure that can no longer be managed without AI assistance.

Autonomous Agent Misalignment

R14

Advanced AI agents pursuing objectives not aligned with human values, optimizing incorrect metrics in unexpected and potentially catastrophic ways, especially in high-impact decision-making systems.

Unknown & Emerging Risks

R15

Risks not yet identified or fully understood that emerge from unforeseen capabilities of AI systems, complex interactions between multiple systems, or applications in novel domains without historical precedents.

Research Data

Access our complete database of findings, analysis, and timeline of risks identified in Mexico's Official Gazette.

Findings

Complete database with implicit and explicit mentions of AI in Mexican regulation, including gap analysis and recommendations.

View Findings

About VIGÍA

Motivation

Mexico faces a regulatory vacuum in AI matters. While the technology advances rapidly, legal frameworks have not kept pace. Our observatory proactively identifies these gaps to inform evidence-based policymaking.

Content

We analyze the Official Gazette daily using a classification adapted from "What Risks Does AI Pose?" (2023) and "Future Risks of Frontier AI" (UK Government, 2023). We identify implicit and explicit mentions, assess risks, and generate actionable recommendations.

Audience

Designed for policymakers who need information to legislate, researchers studying AI governance, journalists covering technology and regulation, and the general public interested in government transparency.

Methodology

Our process includes: (1) Search for explicit and implicit mentions of AI, (2) Classification according to risk categories, (3) Analysis of regulatory gaps, (4) Severity assessment, and (5) Generation of specific recommendations.
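The five steps above can be sketched as a simple screening pipeline. This is an illustrative sketch only, not the observatory's actual tooling: the keyword lists, category mappings, and function names (`screen_text`, `classify_risks`, `assess`) are hypothetical, and a real pipeline would use far richer classification than keyword matching.

```python
# Hypothetical sketch of the VIGIA pipeline: keywords and helpers are
# illustrative assumptions, not the observatory's real implementation.

# Step 1: terms used to flag explicit vs. implicit AI mentions (sample only).
AI_KEYWORDS = {
    "explicit": ["inteligencia artificial", "aprendizaje automático"],
    "implicit": ["decisión automatizada", "perfilamiento"],
}

# Step 2: a few of the 15 risk categories, mapped to sample trigger terms.
RISK_CATEGORIES = {
    "R2": ["contratación", "crédito", "discriminación"],   # Discrimination & Bias
    "R3": ["reconocimiento facial", "biométrico"],         # Privacy Invasions
    "R9": ["videovigilancia", "vigilancia masiva"],        # Authoritarian Surveillance
}

def screen_text(text):
    """Step 1: list explicit and implicit AI mentions found in a gazette entry."""
    text = text.lower()
    return {kind: [kw for kw in kws if kw in text]
            for kind, kws in AI_KEYWORDS.items()}

def classify_risks(text):
    """Step 2: map an entry onto risk categories by keyword overlap."""
    text = text.lower()
    return sorted(code for code, kws in RISK_CATEGORIES.items()
                  if any(kw in text for kw in kws))

def assess(text):
    """Steps 3-5 in miniature: bundle mentions and categories into a finding
    with a crude severity flag (a placeholder for real gap/severity analysis)."""
    mentions = screen_text(text)
    risks = classify_risks(text)
    severity = "review" if mentions["explicit"] or risks else "low"
    return {"mentions": mentions, "risks": risks, "severity": severity}

entry = ("Acuerdo sobre el uso de reconocimiento facial y sistemas de "
         "decisión automatizada en servicios de crédito.")
finding = assess(entry)
# finding["risks"] → ["R2", "R3"]; finding["severity"] → "review"
```

A production version would replace the keyword lists with a maintained taxonomy covering all 15 categories and feed flagged entries to human reviewers before any recommendation is published.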

View Complete Methodology

Our Team

Developed by Max Pinelo and Pilar Moncada in collaboration with AI Safety Mexico.

For more information, collaborations, or to report additional findings:

Contact Us