New CFM resolution reinforces governance, risk management, and data protection in AI systems
CFM Resolution No. 2,454/2026 establishes a regulatory framework for the use of Artificial Intelligence in medicine in Brazil, setting clear guidelines for the development, contracting, and use of these technologies in the healthcare sector.
The rule defines the application of AI broadly, covering care, diagnostic, therapeutic, administrative, and research contexts whenever there is a direct or indirect impact on medical decision-making or health outcomes.
Its scope is significant. The regulation impacts hospitals, clinics, health insurance operators, and physicians in private practice, requiring the adoption of formal governance structures.
WHAT CHANGES IN PRACTICE
The main change is the mandatory implementation of Artificial Intelligence governance.
Before adopting any system, a prior risk assessment is now required, considering factors such as impact on fundamental rights, criticality of use, degree of system autonomy, and sensitivity of the data processed.
The rule also establishes clear transparency obligations. The use of AI must be disclosed to the patient and, when used to support medical decision-making, must be recorded in the medical record.
Institutions that develop their own solutions must create an AI and Telemedicine Committee, under medical coordination and linked to the technical board.
Responsibility for supervising practices falls directly on the institution’s Technical Director.
RISKS AND IMPLICATIONS FOR COMPANIES
Failure to comply with the rules may result in ethical sanctions, as well as potential civil and criminal liability.
The use of AI without a structured risk assessment may be interpreted as a governance failure.
The processing of health data, classified as sensitive data, requires an appropriate legal basis and robust security measures.
Lack of transparency in the use of technology may give rise to legal and reputational challenges.
Companies using third-party solutions also assume risk, especially in the absence of technical and regulatory due diligence.
STRATEGIC RECOMMENDATIONS
Companies are advised to structure an AI governance model involving their compliance, technology, information security, and data protection teams.
Identifying and mapping the AI systems in use is an essential step for risk assessment.
Classifying systems according to risk levels should guide the adoption of proportional controls.
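The inventory-and-classification step above can be pictured as a simple data structure. The sketch below is purely illustrative: the factor names mirror those cited in the resolution (impact on fundamental rights, criticality of use, degree of autonomy, data sensitivity), but the scoring scale, thresholds, and example systems are hypothetical and not part of the rule.

```python
# Illustrative sketch of an AI-system inventory with a hypothetical
# risk-tier classification. Scores, thresholds, and system names are
# invented for illustration; the resolution does not prescribe them.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    rights_impact: int     # 0 (none) .. 3 (severe impact on fundamental rights)
    criticality: int       # 0 .. 3 (criticality of the use context)
    autonomy: int          # 0 (advisory only) .. 3 (fully autonomous)
    data_sensitivity: int  # 0 .. 3 (special-category health data)

    def risk_tier(self) -> str:
        # Sum the four factors and bucket into proportional control tiers.
        score = (self.rights_impact + self.criticality
                 + self.autonomy + self.data_sensitivity)
        if score >= 9:
            return "high"
        if score >= 5:
            return "medium"
        return "low"

inventory = [
    AISystem("triage-chatbot", rights_impact=2, criticality=2,
             autonomy=1, data_sensitivity=3),
    AISystem("billing-ocr", rights_impact=0, criticality=1,
             autonomy=1, data_sensitivity=1),
]
tiers = {s.name: s.risk_tier() for s in inventory}
print(tiers)  # {'triage-chatbot': 'medium', 'billing-ocr': 'low'}
```

In practice, each tier would then map to a proportional set of controls (e.g., mandatory human oversight and periodic audits for high-risk systems), defined by the organization's governance committee.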
The legal bases for processing health data must be carefully analyzed and documented.
Technical security measures should be implemented to ensure integrity, confidentiality, and traceability.
Training physicians and staff is a central element of the responsible use of the technology.
INSTITUTIONAL REFLECTION
The regulation of the use of Artificial Intelligence in medicine reinforces a global movement toward accountability and technological governance in critical sectors.
The topic is no longer merely technological; it has become legal, regulatory, and strategic.
Organizations that anticipate the structuring of AI governance will be better prepared for an environment of increased oversight and ethical requirements.
PDK Advogados monitors regulatory developments involving Artificial Intelligence, data protection, and highly regulated sectors. Through our institutional channels, we analyze how these changes impact business models, corporate governance, and risk management.