Technical Note No. 1 of 2026 issued by the Brazilian Data Protection Authority (ANPD) marks a new level of regulatory scrutiny over Artificial Intelligence systems in Brazil.
The document analyzes potential violations of the Brazilian General Data Protection Law (LGPD) related to the Grok system, developed by X Corp., particularly concerning the generation of non-consensual sexual deepfakes. According to the preliminary investigation, the tool allegedly enables the manipulation of real images — including those of children and adolescents — for intimate contexts, raising significant concerns regarding purpose limitation, security safeguards, and the protection of sensitive personal data.
The discussion extends beyond the specific case. The Technical Note signals how the Brazilian authority intends to assess AI system architecture, the alignment between declared policies and actual technical capabilities, and the effectiveness of implemented safeguards.
What is under scrutiny
The ANPD identified that although internal policies prohibit the creation of deepfakes, the system’s technical architecture may still allow the manipulation of real photographs to generate intimate content.
This point is central. The regulator assessed not merely the wording of the platform's terms of use, but the system's actual technological capacity.
Based on this approach, four main regulatory concerns emerge:
- Unlawful processing of sensitive personal data, particularly biometric data.
- Failure to provide the enhanced protection owed to children and adolescents.
- Deviation from the purpose for which the images were originally made available.
- Practical ineffectiveness of implemented technical safeguards.
The institutional message from the ANPD is clear: AI governance cannot be merely declaratory.
Risks and implications for companies
The Technical Note expands regulatory exposure for companies that develop or deploy Artificial Intelligence systems.
The use of real images to generate synthetic content may qualify as processing of sensitive data, requiring a robust legal basis under Article 11 of the LGPD.
Legitimate interest, which Article 11 does not list among the legal bases for sensitive data, is unlikely to suffice where biometric data or risks to human dignity are involved.
Additionally, significant reputational risks arise when system architecture enables outputs inconsistent with the legitimate expectations of data subjects.
Another critical point concerns the processing of minors’ data. The principle of the best interests of the child imposes the highest standard of protection and security.
International authorities have already initiated similar investigations, demonstrating the global dimension of this regulatory trend.
Strategic post-Grok compliance checklist
Companies developing or using AI systems should urgently review:
- The alignment between usage policies and the system’s actual technical capabilities.
- The effectiveness of prompt filters and output restriction mechanisms.
- The legal basis adopted for processing sensitive and biometric data.
- The data inventory used for model training.
- The compatibility between the purpose communicated to data subjects and the actual use in AI-based solutions.
- Robust age verification mechanisms and specific safeguards for children and adolescents.
Algorithmic governance is becoming a central element of regulatory compliance.
Institutional perspective
The Technical Note regarding the Grok case represents a turning point in ANPD enforcement. The focus shifts from traditional data processing compliance to the architecture, design, and technical functioning of Artificial Intelligence systems.
Companies leveraging AI must incorporate continuous risk assessments, stress testing, and ongoing review of safeguards into their governance frameworks.
PDK Advogados closely monitors regulatory developments involving Artificial Intelligence, data protection, and digital liability. Through our institutional publications, we analyze technical notes, regulatory decisions, and international trends that shape corporate technology governance.