
OpenAI CEO Sam Altman presents the company’s biggest updates at Dev Day 2025, unveiling ChatGPT Apps, AgentKit, Codex GA, and the Sora 2 API — signaling ChatGPT’s evolution into a full development platform. Image Source: ChatGPT-5
OpenAI Dev Day 2025: ChatGPT Apps, AgentKit, Codex GA, and Sora 2 API
Key Takeaways: OpenAI Dev Day 2025 Announcements
ChatGPT becomes a platform. OpenAI unveiled ChatGPT Apps — interactive, in-chat applications built using the new Apps SDK — transforming ChatGPT into a hub where developers can build, deploy, and monetize AI-powered tools.
AgentKit simplifies agent creation. Developers can now build production-grade AI agents in minutes using AgentKit’s visual Agent Builder, embeddable ChatKit, and new evaluation tools. In a live demo, an OpenAI engineer created two agents and added a security layer in just six minutes — a striking demonstration of how fast the process can be.
Codex graduates, powered by GPT-5-Codex. Now generally available, Codex runs on GPT-5-Codex and supports code generation, refactoring, and automation. During the demo, Codex connected a Sony camera to an Xbox controller, then lit up the stage lights on command — all with zero manual coding.
Sora 2 API brings cinematic control. Sora 2 now pairs visuals with synchronized audio, enabling creators to add realistic sound, camera motion, and environmental effects. OpenAI’s demo transformed a photo of a dog into a lively, fully scored video scene, showcasing how far generative video has advanced.
GPT-5 expands the model family. GPT-5-Codex powers coding tasks, GPT-5 Pro delivers advanced reasoning for industries like finance and healthcare, and GPT Real-Time Mini enables low-latency voice agents at 70% lower cost.
The AI-native economy begins. From app monetization to instant in-chat purchases, OpenAI is building the infrastructure for developers to reach global audiences through conversation — redefining software as something you talk to, not type into.
A Shift From Chatbot to Platform
OpenAI’s 2025 Dev Day marked a strategic leap forward: ChatGPT is evolving from a chatbot into a full developer platform. CEO Sam Altman and his team introduced new SDKs, agent tools, and creative APIs that make AI not just something you use, but something you build with.
The underlying message was clear: AI has moved beyond being a development assistant — it has become the platform itself, forming the backbone of a new AI-driven economy where developers can reach hundreds of millions of users through conversation.
ChatGPT Apps: A New Kind of In-Chat Software
OpenAI introduced ChatGPT Apps, a new way for developers to build interactive, personalized applications that run directly inside ChatGPT. Using the Apps SDK — available today in preview — developers can connect data, trigger actions, and design interactive user interfaces that appear seamlessly within a conversation, allowing users to interact with fully functional tools right beside their messages.
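To make the idea concrete, here is a minimal sketch of the pattern an in-chat app follows: it declares a tool with a JSON schema, and a handler returns structured results the model can render beside the conversation. The tool name, schema, sample data, and handler below are all invented for illustration and are not the actual Apps SDK interface.

```python
import json

# Hypothetical tool manifest, loosely modeled on the Zillow-style demo.
# Everything here is illustrative, not the real Apps SDK contract.
TOOL_MANIFEST = {
    "name": "search_listings",
    "description": "Search home listings by city and filters.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "min_bedrooms": {"type": "integer"},
        },
        "required": ["city"],
    },
}

# Stand-in data so the sketch runs without any external service.
FAKE_LISTINGS = [
    {"city": "Pittsburgh", "bedrooms": 3, "yard": True},
    {"city": "Pittsburgh", "bedrooms": 2, "yard": False},
]

def handle_tool_call(name: str, arguments: dict) -> str:
    """Dispatch a tool call and return a JSON string for the model to render."""
    if name != TOOL_MANIFEST["name"]:
        raise ValueError(f"unknown tool: {name}")
    results = [
        home for home in FAKE_LISTINGS
        if home["city"] == arguments["city"]
        and home["bedrooms"] >= arguments.get("min_bedrooms", 0)
    ]
    return json.dumps({"results": results})

print(handle_tool_call("search_listings", {"city": "Pittsburgh", "min_bedrooms": 3}))
```

The real SDK adds discovery, rendering, and authentication on top, but the core loop — schema in, structured results out — is the shape shown here.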
During the live demo, OpenAI engineers showcased apps from Coursera, Canva, and Zillow that responded dynamically to chat commands.
Coursera streamed educational videos inside the chat window, while ChatGPT summarized and explained the lecture in real time.
Canva generated a poster for a fictional dog-walking business, then instantly converted it into a full investor pitch deck — all within the same thread.
Zillow demonstrated a real-estate search where the user asked ChatGPT which city would be best to expand their business into, received “Pittsburgh” as a suggestion, and then viewed interactive listings for homes there — filtering by features like “three bedrooms and a yard for the dog,” right inside the conversation.
OpenAI also unveiled a discovery and monetization model for these apps. Developers will soon be able to submit their apps for review, appear in a ChatGPT App Directory, and earn revenue through in-chat payments powered by the Agentic Commerce Protocol — enabling instant checkout experiences directly within ChatGPT.
The Apps SDK and early partner apps are live in preview now, with monetization features rolling out later this year.
AgentKit: Building Agentic Workflows Without the Overhead
OpenAI also introduced AgentKit, a full-stack toolkit for creating and deploying production-grade AI agents directly inside ChatGPT. The platform combines everything developers need to build context-aware agents — without the heavy orchestration that’s historically slowed AI projects.
AgentKit includes:
Agent Builder — a visual canvas for building agent logic and multi-step workflows.
ChatKit — an embeddable chat interface that developers can style to match their own brands.
Evals for Agents — tools for trace grading, performance testing, and automated prompt optimization.
Connector Registry — a secure way to connect internal tools and APIs under centralized admin control.
In one of the most impressive demos of the day, OpenAI engineer Christina built and deployed two agents with a security layer in just six minutes, finishing with two minutes to spare on her eight-minute live clock. Using Agent Builder, she wired up specialized agents to answer Dev Day questions, attached datasets, added PII guardrails for data protection, and pushed the entire workflow live — all without writing a single line of code.
The demonstration underscored how AgentKit reduces AI development from months to minutes, allowing builders to iterate visually, deploy instantly, and integrate securely.
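The workflow Christina wired up visually can be sketched in plain code: a guardrail node redacts sensitive data, and a routing node hands the cleaned message to one of two specialized agents. The agent names, routing logic, and regex below are invented to illustrate the shape of the pipeline; AgentKit itself builds this without code.

```python
import re

def pii_guardrail(message: str) -> str:
    """Redact email addresses before any agent sees the message (illustrative PII rule)."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED]", message)

def sessions_agent(message: str) -> str:
    """Hypothetical agent answering Dev Day schedule questions."""
    return "Sessions agent: the keynote starts at 10am."

def logistics_agent(message: str) -> str:
    """Hypothetical agent answering venue and logistics questions."""
    return "Logistics agent: badge pickup is in the lobby."

def route(message: str) -> str:
    """Naive keyword router standing in for Agent Builder's visual routing node."""
    clean = pii_guardrail(message)
    agent = sessions_agent if "keynote" in clean.lower() else logistics_agent
    return agent(clean)

print(route("When is the keynote? Reply to jane@example.com"))
```

The point is the structure — guardrail, router, specialists — which Agent Builder lets teams assemble and deploy on a canvas instead of in a codebase.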
Codex Graduates, Powered by GPT-5-Codex
OpenAI’s Codex — its AI assistant for developers — officially moved from research preview to general availability. Powered by the new GPT-5-Codex model, it’s designed to serve as an AI teammate that understands context, writes and reviews code, and now even runs in real-time collaboration with human engineers.
In a live on-stage demo, OpenAI engineer Ramon built a camera control interface that could pan, zoom, and adjust lighting using only Codex commands. Codex scaffolded an entire Node.js integration with a Sony camera via the 30-year-old VISCA protocol, pulled in the relevant hardware documentation automatically, and even wired up an Xbox controller to move the camera — all without the engineer writing a single line of code.
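For a sense of what Codex had to generate, VISCA is a compact serial protocol: a Pan-tilt Drive command is a nine-byte packet. The sketch below builds one. The byte layout follows commonly published VISCA documentation, but treat it as an illustrative sketch rather than a tested camera integration.

```python
def visca_pan_tilt(pan_speed: int, tilt_speed: int, pan_dir: int, tilt_dir: int) -> bytes:
    """Build a VISCA Pan-tilt Drive packet for camera address 1.

    pan_dir:  0x01 = left, 0x02 = right, 0x03 = stop
    tilt_dir: 0x01 = up,   0x02 = down,  0x03 = stop
    """
    # Speed ranges per commonly published VISCA docs: pan 0x01-0x18, tilt 0x01-0x14.
    if not (1 <= pan_speed <= 0x18 and 1 <= tilt_speed <= 0x14):
        raise ValueError("speed out of range")
    # 0x81 = header (address 1), 0x01 0x06 0x01 = Pan-tilt Drive, 0xFF = terminator.
    return bytes([0x81, 0x01, 0x06, 0x01, pan_speed, tilt_speed, pan_dir, tilt_dir, 0xFF])

# Pan right at medium speed, no tilt.
packet = visca_pan_tilt(pan_speed=0x08, tilt_speed=0x08, pan_dir=0x02, tilt_dir=0x03)
print(packet.hex())
```

Generating dozens of commands like this — plus the serial plumbing around them — is the kind of boilerplate the demo offloaded entirely to Codex.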
Then, using OpenAI’s new Realtime API, Ramon connected Codex to the venue’s lighting system via an MCP server, enabling voice commands like:
“Can you shine the lights toward the audience?”
Codex obeyed — literally illuminating the crowd.
Codex is now available with:
A Slack integration for coding help in team conversations.
A Codex SDK for automating workflows.
Admin dashboards for analytics, controls, and reporting.
According to OpenAI, engineers using Codex internally complete 70% more pull requests per week, and nearly every piece of OpenAI’s own code now passes through a Codex review before shipping.
Sora 2 API: Creative AI Goes Cinematic
For creators, OpenAI announced the Sora 2 API, giving developers direct access to its cinematic text-to-video model. The updated version adds fine control, improved realism, and — for the first time — synchronized soundscapes that align perfectly with the visuals.
Demos highlighted Sora 2’s expanded creative capabilities:
It can take a handheld iPhone clip and transform it into a sweeping cinematic sequence.
It can add ambient audio, synchronized voiceovers, and natural sound effects that match each scene.
It can remix or extend scenes, change aspect ratios, and generate complex motion sequences from simple prompts.
In one example, Sora 2 turned a photo of a dog into a lively video featuring new “dog friends” running alongside it. In another, it produced a product concept video for a fictional e-commerce brand, complete with visuals, narration, and background score.
Early partners like Mattel are already using Sora 2 to prototype toys and marketing content, turning early sketches into vivid promotional videos within minutes.
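For developers, access to Sora 2 will look like an ordinary REST call. The sketch below only assembles a plausible request payload; the endpoint path, parameter names, and values are assumptions based on OpenAI's usual REST conventions, not a confirmed API contract.

```python
import json

def build_sora_request(prompt: str, seconds: int = 8, size: str = "1280x720") -> dict:
    """Assemble a hypothetical Sora 2 generation request (nothing is sent)."""
    return {
        "method": "POST",
        "path": "/v1/videos",       # assumed endpoint path
        "body": {
            "model": "sora-2",
            "prompt": prompt,
            "seconds": seconds,     # assumed parameter name
            "size": size,           # assumed parameter name
        },
    }

req = build_sora_request(
    "A dog in a sunny park is joined by two new dog friends, with ambient audio"
)
print(json.dumps(req["body"], indent=2))
```

Whatever the final parameter names turn out to be, the workflow is the familiar one: prompt in, rendered video (now with synchronized sound) out.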
The GPT-5 Family Expands
OpenAI also expanded its GPT-5 model lineup with three distinct versions:
GPT-5-Codex — powering the new Codex experience, specialized for software engineering and complex reasoning across large codebases.
GPT-5 Pro — the company’s most capable reasoning model to date, designed for industries requiring precision and reliability, such as finance, healthcare, and legal services.
GPT Real-Time Mini — a lighter, faster voice model that delivers the same expressive quality as OpenAI’s advanced speech models, but at 70% lower cost, enabling real-time AI assistants and voice-enabled agents across devices.
Sam Altman described voice as “one of the primary ways people will interact with AI,” positioning Real-Time Mini as the bridge between conversational agents and the broader Internet of Things.
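A voice agent built on a real-time model is typically driven over a WebSocket by JSON events. The sketch below builds a session-configuration event; the "session.update" event type matches OpenAI's published Realtime API conventions, but the model identifier and the exact session fields shown here should be treated as assumptions.

```python
import json

def voice_session_event(instructions: str) -> str:
    """Build a hypothetical session-configuration event for a low-latency voice agent."""
    event = {
        "type": "session.update",
        "session": {
            "model": "gpt-realtime-mini",       # assumed model identifier
            "voice": "alloy",
            "instructions": instructions,
            "modalities": ["audio", "text"],    # assumed field name
        },
    }
    return json.dumps(event)

print(voice_session_event("You are a concise stage-lighting assistant."))
```

In practice this event would be sent once over the WebSocket at session start, after which audio streams in both directions with the low latency the Mini model is priced for.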
Q&A: OpenAI’s Vision for the AI Economy
Q: What was the central theme of OpenAI’s Dev Day 2025?
A: OpenAI positioned ChatGPT as a full-fledged development platform — not just an assistant — enabling developers to create interactive apps, deploy intelligent agents, and integrate monetization directly into conversations.
Q: What’s new about ChatGPT Apps?
A: “This is the next step in turning ChatGPT into a platform,” said Sam Altman, CEO of OpenAI. “You can now run full applications right inside the chat — from designing with Canva to booking with Zillow — all powered by the Apps SDK.”
Q: How does AgentKit change the agent-building process?
A: “AgentKit lets anyone create agents without a complex stack,” said Christina, the OpenAI engineer who led the demo. “In under six minutes, I built two secure, production-ready agents and deployed them live — no code, no delays.”
Q: What makes the new Codex different from earlier versions?
A: The new Codex runs on GPT-5-Codex and handles both reasoning and execution. It can write, test, and refactor code autonomously. “We’re seeing engineering velocity increase dramatically,” Altman noted. “Codex is becoming the teammate every developer will want.”
Q: What’s special about the Sora 2 API?
A: Sora 2 adds cinematic control with synchronized audio and higher realism. “This is where creativity meets control,” said an OpenAI product lead. “It’s not just generating video — it’s producing scenes with sound, emotion, and story structure.”
Q: How is GPT-5 evolving?
A: GPT-5 now includes Pro, Codex, and Real-Time Mini models. Pro is optimized for high-accuracy reasoning in specialized fields, while Mini delivers responsive, natural speech interactions for voice agents and devices.
Q: Why does this matter?
A: “AI is no longer just a layer on top of software,” said Sam Altman. “It’s becoming the platform itself — the operating system for the AI economy.”
What This Means: The Rise of the AI-Native Platform
OpenAI’s Dev Day 2025 wasn’t just a showcase — it was a paradigm shift. The company is building the infrastructure for an AI-native economy where conversation becomes the new interface, apps become agents, and developers become platform partners.
ChatGPT Apps make chat the new app store.
AgentKit collapses agent development timelines from months to minutes.
Codex and GPT-5-Codex redefine software creation as collaboration, not labor.
Sora 2 brings cinematic storytelling within reach of anyone with a prompt.
The implications go beyond OpenAI’s ecosystem: this marks the first step toward an internet where AI isn’t a layer on top of apps — it is the app.
As Altman put it, software used to take months or years to build. Now, with AI, it can take minutes — and the future of building belongs to those who can imagine faster than they can code.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.