OpenAI and FDA Explore AI Use in Drug Evaluation

Image Source: ChatGPT-4o
OpenAI is in ongoing talks with the U.S. Food and Drug Administration (FDA) about using artificial intelligence to assist with the agency’s drug evaluation work, according to a report by Wired. The discussions point to a growing interest in applying generative AI to one of the most time-intensive parts of the U.S. healthcare system.
The report, which cites unnamed sources familiar with the meetings, says a small OpenAI team has met multiple times in recent weeks with FDA officials and with two individuals connected to Elon Musk’s “Department of Government Efficiency,” or DOGE.
Project “cderGPT” Aims to Support Drug Review
The collaboration appears to focus on a tool referred to as cderGPT, a name that suggests a role in the FDA’s Center for Drug Evaluation and Research (CDER). This center oversees both over-the-counter and prescription medications in the United States and plays a central role in ensuring the safety and efficacy of drugs before they reach the market.
While specific details remain limited, the AI tool would likely be used to streamline certain parts of the drug review process, particularly near the end of development, where regulatory paperwork and analysis often slow final approvals.
AI's Promise—and Limits—in Drug Development
Drug development is a notoriously lengthy process, often stretching over a decade from early trials to regulatory clearance. Advocates have long suggested that AI could help accelerate this timeline by handling repetitive tasks, analyzing vast amounts of data more quickly, or flagging risks that would take humans longer to detect.
OpenAI’s reported project with the FDA would target only a small portion of the overall process. But even modest gains in speed or efficiency could have significant public health and economic impacts, especially as drug pipelines become more complex.
However, the use of generative AI in healthcare regulation raises new challenges. AI models—especially large language models—are known to sometimes produce incorrect or unverifiable outputs. That unreliability could create risk if such tools are used to inform life-critical decisions without strong oversight.
What This Means
If confirmed and expanded, the FDA’s work with OpenAI would represent one of the highest-profile attempts to bring generative AI into a U.S. regulatory body. While early-stage and likely limited in scope, the project reflects a larger trend: governments are beginning to test AI not just for policy or communication, but for core operational processes.
Still, fundamental questions remain about how to ensure accuracy, fairness, and accountability in systems that involve AI judgment. For now, the collaboration appears focused on exploration—not deployment.
As AI enters the heart of public health decision-making, the question is no longer what it can do—but whether we can trust it to do it right.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.