Mexico Makes AI-Generated Deepfakes a Federal Crime
By Diego Valverde | Journalist & Industry Analyst
Fri, 02/20/2026 - 12:40
Mexico’s Chamber of Deputies approved federal reforms that criminalize AI-generated intimate content, digital stalking, and online sexual harassment, closing legal gaps around deepfakes and synthetic abuse. The changes update the Federal Penal Code and gender violence law to equate simulated content with real material, enabling prosecution amid rapid Generative AI adoption. The reform directly impacts digital platforms, AI developers, telecoms, education institutions, and content-hosting services operating in Mexico, requiring stronger moderation and privacy-by-design controls.
Mexico’s Chamber of Deputies unanimously approved reforms to penalize the creation and distribution of intimate content created or altered with AI, classifying digital stalking and sexual harassment as federal crimes to strengthen protections for personal integrity in digital environments.
The integration of Generative AI into gender-based crimes necessitated an update to the federal regulatory framework to address legal gaps that previously allowed impunity in cases of digital image manipulation. PAN Federal Deputy Annia Gómez says that the legislative intent focuses on establishing clear boundaries for technological usage without stifling innovation.
“The generation and dissemination of simulated intimate content without consent is violence, and as violence it must have clear legal consequences,” says Gómez, highlighting that the reform does not seek to stop technological innovation, but to establish limits against practices that harm the integrity of women.
This transition toward a model of legal responsibility ensures that the digital simulation of an individual via deep learning algorithms, commonly known as deepfakes, is legally equivalent to the unauthorized use of authentic material. The elimination of technical ambiguity allows the judiciary to prosecute crimes that were previously shielded by the artificial nature of the evidence.
The rapid evolution of Generative AI has simplified the creation of highly realistic synthetic content. According to an international study led by UNICEF, ECPAT, and INTERPOL, which included Mexico, at least 1.2 million children across 11 nations reported having their images manipulated into sexually explicit deepfakes in recent years. Statistically, this figure represents roughly one in every 25 students in a typical classroom.
Before this reform, the Mexican legal system lacked specific mechanisms to address violence perpetrated through fabricated representations. The absence of a technical classification allowed the creation of synthetic pornographic material to remain outside the scope of sexual abuse or privacy violations.
Legal arguments often centered on the fact that the content did not correspond to a physical recording of a real act. However, the psychological impact, social stigma, and risk of extortion associated with the distribution of these materials are identical to those generated by authentic content.
“It is a new form of violence,” said President Claudia Sheinbaum in December 2024, emphasizing the urgency for judges to address this type of aggression. She stressed the need to update legal frameworks to respond to these emerging technologies.
PRI Federal Deputy Abigail Arredondo says the modification responds to the increase in deepfakes targeting young women, noting that the impact of these behaviors can be devastating for victims, particularly young people who are still forming their character and self-esteem.
The prosecution of Instituto Politecnico Nacional student Diego “N” establishes a definitive legal precedent as the first trial in Latin America to link Generative AI manipulation directly to sexual exploitation. After using algorithms to alter more than 20,000 images of female classmates for unauthorized distribution on the Telegram platform, Diego now faces a sentence of five years in prison on a human trafficking charge for storing child pornography.
Detailed Technical and Regulatory Implementation
The approved bill modifies the General Law on the Access of Women to a Life Free of Violence, integrating an explicit definition of digital violence. This is now defined as any malicious act that affects intimacy, privacy, or dignity through technological means. The updated text includes behaviors such as:
- The generation and dissemination of simulated intimate content.
- Stalking or harassment via digital platforms.
- Denigrating, intimidating, or humiliating individuals.
- Distributing smear campaigns.
- Stealing or manipulating personal data.
- Usurping the identity of a woman based on gender stereotypes.
Simultaneously, the Federal Penal Code has been amended to typify behaviors that previously lacked a precise normative description at the national level.
Technical Classification of Stalking and Sexual Harassment
The reform establishes a technical distinction between sexual harassment and stalking, allowing for more efficient prosecution of persistent behaviors that alter the daily life of victims.
For sexual harassment, the law defines the crime as repeated pursuit or intimidation for lewd purposes through any medium. The established penalty ranges from one to three years in prison and a fine of up to 600 days.
The new provision considers stalking a differentiated conduct that does not require sexual intent. It involves repeated surveillance, tracking, or unwanted contact that affects the life of the victim.
The penalty for stalking ranges from six months to one year in prison. Both offenses carry aggravating circumstances when there is a relationship of labor, educational, or domestic subordination, or when the victim is a minor or belongs to a vulnerable group.
Implications for Digital Safety and Child Protection
In a report, UNICEF emphasizes that deepfakes constitute sexual abuse material regardless of their synthetic origin. The ability to generate thousands of images daily in deep web forums through automated processes has complicated the detection and response efforts of authorities. AI facilitates the mass production of AI-assisted child sexual abuse material (MASI, by its Spanish acronym), which makes it difficult for investigators to distinguish between images of individuals in need of urgent physical protection and images that have been fabricated.
The implementation of these reforms means that using technological instruments to generate sexual representations without authorization is formally classified as digital gender violence. This regulatory progress necessitates a reevaluation of content moderation policies for digital platforms and companies operating within Mexico.
From a corporate and technological standpoint, the enactment of these provisions requires a transition toward a "Privacy by Design" model. Developers of AI systems must integrate safety mechanisms to prevent the generation of non-consensual pornographic content. Organizations and the private sector are encouraged to establish partnerships to implement the following guidelines:
- Effective oversight frameworks: The creation of mechanisms to ensure that AI-based products and services support the rights of all individuals.
- Pre-deployment damage prevention: Testing technologies to identify and mitigate risks, such as the generation of abusive materials or automated discrimination.
- Data protection and privacy: Integrating privacy into the design of algorithms to prevent the misuse of personal data that could facilitate the creation of violent material.
- Algorithmic transparency: Ensuring that AI systems are transparent about their functions and the data they use to avoid the amplification of discriminatory biases.
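To illustrate what a "pre-deployment damage prevention" control might look like in practice, the sketch below shows a minimal request-screening gate for a hypothetical image-generation service. All names here (`screen_request`, `BLOCKED_TERMS`, the consent flag) are illustrative assumptions, not part of the reform or any real platform's API; production systems would rely on trained classifiers, consent verification, and human review rather than keyword matching, which is trivially evaded.

```python
# Hypothetical pre-deployment safety gate for a generative image service.
# Keyword matching is used here only to keep the sketch self-contained;
# real moderation pipelines use trained classifiers and human review.
from dataclasses import dataclass

# Illustrative terms associated with non-consensual intimate imagery.
BLOCKED_TERMS = {"nude", "undress", "intimate", "deepfake"}


@dataclass
class ModerationResult:
    allowed: bool
    reason: str


def screen_request(prompt: str, subject_consent_on_file: bool) -> ModerationResult:
    """Refuse prompts that pair a person's likeness with intimate-content
    terms unless documented consent for the depicted subject exists."""
    lowered = prompt.lower()
    flagged = sorted(term for term in BLOCKED_TERMS if term in lowered)
    if flagged and not subject_consent_on_file:
        return ModerationResult(False, f"blocked terms without consent: {flagged}")
    return ModerationResult(True, "ok")
```

The design point is that the check runs before any generation occurs, which is the essence of the "privacy by design" model the article describes: harmful outputs are prevented at the request stage rather than filtered after the fact.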
The Chamber of Deputies also approved nine other reforms in areas such as sustainable development, inclusive education, and housing. With the approval of the digital violence modifications, Congress updates the federal framework to incorporate behaviors committed through digital platforms.