Is AI the solution to America’s glacial drug approval process, or a Pandora’s box for public health? That’s the question circulating around Robert F. Kennedy Jr.’s recent statements as Secretary of Health and Human Services. Kennedy, now with huge sway over federal health policy, has vowed to unleash an “AI revolution” at the FDA, promising to speed up drug approvals and overhaul the country’s vaccine safety monitoring system, VAERS, using artificial intelligence.

“We’re implementing it in all of our departments. At FDA, we’re accelerating drug approvals so that you don’t need to use primates or even animal models. You can do the drug approvals very, very quickly with AI,” Kennedy explained to Tucker Carlson. The idea is ambitious, perhaps too ambitious for many experts, considering where AI stands in medicine today.
The FDA itself has started to adopt generative AI, declaring a complete integration at all its centers by June 30, 2025. Commissioner Martin Makary explained, “The agency-wide deployment of these capabilities holds tremendous promise in accelerating the review time for new therapies.” From June 30 on, “all centers will be operating on a common, secure generative AI system integrated with FDA’s internal data platforms” (FDA will fully adopt generative AI by end of June, Martin Makary says). Yet, as the FDA’s own pilot projects demonstrate, the technology is still undergoing scientific scrutiny, and its application to high-stakes regulatory processes is riddled with complexity.
The impulse to allow AI to approve drugs “very, very quickly” collides with a mountain of regulatory and scientific complexity. AI analytics show promise in sifting through vast datasets, spotting patterns in reports of adverse events, and even forecasting drug effectiveness. Yet the accuracy of these systems depends on the diversity and quality of the data used to train them. A recent analysis of FDA-approved AI medical devices found that only 14.5% of devices reported race or ethnicity data, and only 0.6% reported socioeconomic data (A scoping review of reporting gaps in FDA-approved AI medical devices). Gaps like these have the potential to further exacerbate health disparities and erode trust in AI-driven decisions.
Post-market monitoring and transparency are also areas of weakness. Only 1.9% of FDA summaries for AI devices included links to published validation studies, and fewer than 10% reported prospective post-market studies. The absence of regular demographic and statistical reporting on device approvals, as noted in an extensive review, “may exacerbate health disparities” and hinders any ability to evaluate real-world efficacy.
Kennedy also has ambitions for AI-inspired reform of the VAERS system, which he has called “designed to fail.” He asserts that AI will produce “a system that actually works,” but has not elaborated on how the new system will overcome the known problems of underreporting, noisy data, and spurious correlations that have long haunted vaccine safety monitoring. While AI may identify infrequent side effects more quickly, experts worry that algorithmic bias and opaque, unexplained judgments could cloud the picture if the underlying data are incomplete or skewed.
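To see what automated signal detection in a system like VAERS actually involves, consider the proportional reporting ratio (PRR), a standard disproportionality measure in pharmacovigilance that flags drug–event pairs reported more often than expected. The counts below are hypothetical, purely for illustration:

```python
# Proportional Reporting Ratio (PRR), a standard disproportionality
# measure in pharmacovigilance signal detection:
#   PRR = [a / (a + b)] / [c / (c + d)]
# where, in a spontaneous-reporting database:
#   a = reports of the event of interest for the drug of interest
#   b = reports of all other events for that drug
#   c = reports of the event of interest for all other drugs
#   d = reports of all other events for all other drugs

def prr(a: int, b: int, c: int, d: int) -> float:
    """Compute the proportional reporting ratio for a drug-event pair."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts, for illustration only:
a, b, c, d = 30, 970, 120, 98880
print(f"PRR = {prr(a, b, c, d):.2f}")  # prints "PRR = 24.75"

# A PRR well above a screening threshold (2 is commonly used) is only a
# signal for further review, not proof of causation: underreporting and
# reporting biases can inflate or mask the ratio.
```

The caveat in the last comment is exactly the problem experts raise: a disproportionality statistic is only as trustworthy as the reporting behavior behind the counts, which no algorithm can fix on its own.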
Layered atop these technical controversies is Kennedy’s undisguised disdain for expert consensus and his willingness to spread vaccine disinformation. “We need to stop trusting the experts, right?” he told Carlson, questioning the role of scientific expertise in public health. His suggestion of a COVID vaccine “truth commission” and threats to prosecute Anthony Fauci mark a sharp departure from settled norms of evidence-based policy.
The regulatory world is racing to catch up. The FDA’s just-released draft guidance on the use of AI in drug development and medical devices provides a risk-based “credibility assessment framework” and highlights the importance of rigorous validation, transparency, and mitigation of bias (A New Era for Biotech, Diagnostics and Regulatory Compliance). Companies are now expected to demonstrate that their AI models behave consistently across diverse patient populations, and to document cybersecurity measures that guard sensitive health information.
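The kind of check such a framework implies can be sketched simply: measure a model's accuracy within each demographic subgroup and flag any group that trails the best-performing one by more than some tolerance. The group names, labels, and the 0.05 threshold below are hypothetical, chosen only to illustrate the audit:

```python
# Sketch of a subgroup-performance audit of the kind a risk-based
# credibility framework might require. Groups, labels, and the 0.05
# disparity threshold are hypothetical, for illustration only.

def subgroup_accuracy(records):
    """Per-subgroup accuracy from (group, true_label, predicted_label) tuples."""
    stats = {}
    for group, y_true, y_pred in records:
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + int(y_true == y_pred), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

def flag_disparities(acc_by_group, max_gap=0.05):
    """Return subgroups whose accuracy trails the best group by more than max_gap."""
    best = max(acc_by_group.values())
    return [g for g, acc in acc_by_group.items() if best - acc > max_gap]

# Hypothetical evaluation records: (subgroup, true label, predicted label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 0),
]
acc = subgroup_accuracy(records)
print(acc)                    # per-group accuracy
print(flag_disparities(acc))  # prints ['group_b']: flagged for further review
```

The design choice worth noting is that a flagged gap is a trigger for review, not an automatic rejection; the reporting gaps described above mean that for many approved devices, the demographic data needed to even run this audit were never collected.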
The stakes are especially high as Kennedy seeks to fast-track approvals for rare disease therapies and cell and gene therapies, areas where traditional trials are slow and expensive. “We’re going to do everything in our power to sweep away the barriers from getting those solutions to market,” Kennedy declared at a recent FDA roundtable (RFK Jr looks to alter rare disease regulation by fast). But as NIH director Jay Bhattacharya warned, “It will take a regulatory framework that makes sure that the [therapies] that do advance for patient care are the ones that are actually safe and have excellent evidence behind them.”
While AI can speed up drug discovery and review, the challenge is not only speed but ensuring that new systems are transparent, equitable, and grounded in sound science. The danger of sacrificing depth for speed, particularly in a politicized environment rife with misinformation, has never been greater.

