Developers using security testing tools to identify vulnerabilities and mitigate risks in enterprise AI agents before deployment. Image Source: DALL·E via ChatGPT (OpenAI)

OpenAI to Acquire Promptfoo to Secure Enterprise AI Agents


OpenAI announced plans to acquire Promptfoo, an AI security platform designed to help enterprises identify and remediate vulnerabilities in AI systems during development. The company said Promptfoo’s technology will be integrated into OpenAI Frontier, its platform for building and operating enterprise AI agents, which OpenAI refers to as “AI coworkers.”

The acquisition comes as enterprises and developers begin deploying AI agents capable of interacting with software tools, company data, and operational workflows, increasing the need for systems that can test and secure these applications before they are deployed. This development is particularly relevant for enterprise technology leaders, AI developers, and governance teams responsible for overseeing AI systems in production environments.

OpenAI’s planned acquisition of Promptfoo highlights a growing need for security testing and evaluation tools as companies begin deploying AI agents into real enterprise systems.

The industry is racing to deploy autonomous AI agents, and the security infrastructure around those agents is becoming as critical as the AI models themselves.

In short: OpenAI is moving to embed security testing directly into the infrastructure companies use to build and deploy AI agents.

Key Takeaways: OpenAI’s Promptfoo Acquisition and AI Agent Security

OpenAI plans to acquire Promptfoo, an AI security platform used to test and secure large language model applications before deployment, and integrate its technology into the OpenAI Frontier platform for enterprise AI agents.

  • OpenAI is acquiring Promptfoo, a security platform used to test AI systems for vulnerabilities such as prompt injection attacks, jailbreaks, and data leaks.

  • The technology will be integrated into OpenAI Frontier, OpenAI’s platform for building and operating enterprise AI agents (“AI coworkers”).

  • Promptfoo tools are already used by more than 25% of Fortune 500 companies and widely adopted by developers for red-teaming AI systems.

  • The acquisition focuses on automated security testing, risk detection, and governance tools for AI agents operating in real business workflows.

  • The deal highlights a broader industry trend: as AI agents become more capable, security infrastructure is becoming essential for enterprise AI deployment.

Promptfoo Platform: Security Testing and Red-Teaming for AI Systems

Promptfoo has built tools specifically designed to test and evaluate large language model applications before they are deployed in real-world environments.

Its platform enables developers and enterprises to simulate potential attacks or system failures, helping identify AI vulnerabilities such as:

  • Prompt injection attacks

  • Jailbreak attempts

  • Data leaks

  • Tool misuse

  • Out-of-policy agent behavior

The company’s technology is already widely used across the enterprise landscape. According to OpenAI, Promptfoo tools are trusted by more than 25% of Fortune 500 companies, along with a large community of developers using its open-source command-line interface (CLI) and libraries for evaluating and red-teaming large language model (LLM) applications.

As part of the acquisition, OpenAI says it plans to continue supporting Promptfoo’s open-source project while expanding enterprise capabilities inside the Frontier platform.

OpenAI Frontier Platform to Integrate AI Agent Security Testing

The acquisition reflects OpenAI’s broader effort to build enterprise-grade infrastructure for AI agents, particularly as companies begin deploying them inside operational workflows.

According to the company, several key capabilities will be expanded inside OpenAI Frontier:

Security and safety testing built into the platform

Automated security testing and red-teaming tools will become a built-in part of the Frontier platform, helping organizations detect and mitigate risks including prompt injections, jailbreaks, data leaks, tool misuse, and unsafe agent behavior.

Security integrated into development workflows

OpenAI says Frontier will integrate security evaluation earlier in the AI development process, allowing teams to identify, investigate, and remediate agent risks before AI systems reach production environments, making security a core part of how enterprise AI systems are developed and operated.

Oversight and governance capabilities

The platform will also provide reporting and traceability tools designed to help organizations document testing, monitor system changes, and meet growing governance, risk, and compliance requirements for AI systems.

Why AI Agent Security and Evaluation Are Becoming Enterprise Priorities

Traditional AI models primarily generate content such as text or code. But the rise of AI agents capable of taking actions across software tools and enterprise systems introduces new operational risks.

Agents can potentially:

  • Access company databases

  • Execute scripts or APIs

  • Interact with internal systems

  • Perform multi-step workflows

This increased capability means enterprises must address security, compliance, and governance challenges before deploying AI systems broadly across their organizations.

OpenAI says the Promptfoo acquisition is intended to help businesses deploy AI systems that are both reliable and secure at enterprise scale.

OpenAI and Promptfoo Leaders on Securing Enterprise AI Agents

Srinivas Narayanan, CTO of B2B Applications at OpenAI, said the acquisition will strengthen enterprise AI deployment.

Promptfoo brings deep engineering expertise in evaluating, securing, and testing AI systems at enterprise scale. Their work helps businesses deploy secure and reliable AI applications, and we’re excited to bring these capabilities directly into Frontier.”

Promptfoo CEO Ian Webster said the company was created to address growing concerns about AI system security.

“We started Promptfoo because developers needed a practical way to secure AI systems. As AI agents become more connected to real data and enterprise systems, securing and validating them is more challenging and important than ever. Joining OpenAI lets us accelerate this work, bringing stronger security, safety, and governance capabilities to the teams building real-world AI systems.”

The acquisition is still subject to customary closing conditions.

Q&A: OpenAI’s Acquisition of Promptfoo

Q: What did OpenAI announce?
A: OpenAI announced plans to acquire Promptfoo, an AI security platform used to test and evaluate large language model applications for vulnerabilities before deployment.

Q: What is Promptfoo?
A: Promptfoo is a security and evaluation platform that helps developers and enterprises test AI systems for risks such as prompt injection attacks, jailbreaks, data leaks, and unsafe outputs.

Q: How will OpenAI use Promptfoo technology?
A: OpenAI plans to integrate Promptfoo’s testing and security capabilities directly into OpenAI Frontier, its enterprise platform for building and operating AI agents.

Q: Why is security testing important for AI agents?
A: AI agents can interact with tools, data sources, and enterprise software systems, which introduces risks such as data exposure, manipulation attacks, or unintended system actions. Testing frameworks help identify these risks before deployment.

What This Means: AI Agent Security Becomes Core Enterprise Infrastructure

As organizations begin deploying AI agents inside real business workflows, security and evaluation systems are becoming foundational infrastructure rather than optional safeguards.

The key point: OpenAI’s planned acquisition of Promptfoo highlights the growing need for security testing, red-teaming, and governance tools as enterprises move from experimenting with AI to deploying AI agents in operational environments.

AI agents can interact with software tools, company databases, and internal systems, which introduces new risks such as prompt injection attacks, unintended system actions, and sensitive data exposure. Platforms that help organizations systematically test and monitor AI systems are becoming an essential part of enterprise AI development.

Who should care:
Enterprise technology leaders, AI developers, cybersecurity teams, and compliance leaders responsible for deploying or overseeing AI systems in production environments, because these teams must ensure AI agents operate securely, comply with governance requirements, and avoid exposing sensitive company data.

Why this matters now:
As AI systems gain the ability to interact with tools, data, and enterprise software, organizations need reliable ways to test agent behavior, detect vulnerabilities, and document compliance before deploying AI into critical workflows.

What decision this affects:
Organizations evaluating AI strategies may increasingly prioritize security evaluation frameworks, red-teaming tools, monitoring systems, and governance processes when selecting AI platforms and deploying agent-based systems.

In short: As AI agents move from experimentation to real-world deployment, the tools used to test, secure, and monitor those systems may become just as essential as the AI models themselves.

As enterprises adopt AI across more business functions, the platforms that combine powerful models with security, testing, and governance infrastructure may ultimately define the next generation of enterprise AI systems.

Sources:

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.

Keep Reading