Key Takeaways
- The AMA has introduced a new policy framework aimed at protecting physicians from unauthorized AI-generated deepfakes that misuse their identities through fake images, videos, and audio
- The framework calls for strict consent requirements, mandatory labeling of AI-generated content, rapid takedown mechanisms, and stronger enforcement powers for regulators and platforms
- AMA leaders warn that physician impersonation deepfakes threaten patient safety, undermine trust in evidence-based medicine, and contribute to the spread of misleading or harmful health information
Responding to the growing threat of artificial intelligence-generated “deepfakes” that impersonate physicians, the American Medical Association (AMA) announced a sweeping new policy framework aimed at protecting doctors from unauthorized AI-generated images, videos, and audio.
Developed by the AMA Center for Digital Health and AI, the framework seeks to modernize physician identity protections, close legal loopholes, and strengthen safeguards for patient safety, professional integrity, and public trust in medicine.
“AI deepfakes that impersonate physicians are not just scams—they are a public health and safety crisis,” said AMA CEO John Whyte, MD, MPH. “When bad actors exploit a doctor’s identity, they undermine patient trust and can steer people toward harmful, unproven care. We need strong action by federal and state lawmakers to protect physicians’ identities, ensure transparency, and stop this fraud. Safeguarding professional integrity is essential to preserving trust and delivering high-quality care in a rapidly evolving digital landscape.”
The AMA warned that deepfake technology is increasingly being used to falsely portray physicians endorsing unproven treatments, fraudulent health products, and misleading medical information. The organization said such impersonation schemes not only threaten individual physicians but also erode public confidence in evidence-based medicine and the patient-physician relationship.
To confront the issue, the AMA’s framework outlines seven core policy principles designed to establish clearer legal protections and accountability standards across the health care and technology sectors.
Among the central proposals is the recognition that a physician’s identity—including their name, image, likeness, voice, and digital replicas—is a protected right that cannot be transferred or used without explicit, informed consent.
The framework also calls for a broad prohibition on deceptive AI-generated impersonations of physicians, particularly when content falsely suggests a doctor’s endorsement, authorship, or participation in medical messaging that could mislead patients.
The AMA is further advocating for strict opt-in consent requirements before any physician identity can be used in AI-generated content. According to the proposal, consent agreements should clearly define the intended use, audience, purpose, and duration of the content, while also allowing physicians to revoke permission if circumstances change.
Transparency measures are another key component of the policy blueprint. The AMA said all AI-generated or manipulated depictions of physicians should include plain-language disclosures and digital watermarks, and that patients should be notified in advance before interacting with any synthetic or AI-generated representation of a medical professional.
Under the framework, responsibility for preventing impersonation would extend beyond physicians themselves. Technology platforms, hospitals, health systems, and AI vendors would share accountability for implementing safeguards, including rapid takedown procedures, conspicuous labeling standards, and restrictions on the misuse of professional medical titles.
The AMA also called for stronger enforcement mechanisms and practical remedies for physicians harmed by deepfake impersonation. Recommendations include preserving audit logs, improving coordination with investigators, empowering federal agencies to pursue enforcement actions, and requiring annual public reporting on impersonation incidents.
Finally, the organization emphasized that physicians should not bear the administrative burden of protecting their identities. The framework advocates for standardized, reusable consent systems and institutionally supported protections that make identity safeguards automatic rather than cumbersome.
The AMA said the new framework will guide its advocacy efforts with lawmakers, regulators, health care organizations, and technology companies as policymakers grapple with the rapid rise of generative AI tools.
The organization added that it stands ready to collaborate with government and industry leaders to establish enforceable standards that protect both physicians and patients from the growing dangers posed by AI-generated deception.