
Mira Murati, former OpenAI CTO, introduces Tinker, the first product from her startup Thinking Machines Lab, now available in private beta. Image Source: ChatGPT-5
Mira Murati’s Thinking Machines Lab Launches First Product, Tinker API
Key Takeaways: Mira Murati’s Thinking Machines Lab Debuts Tinker API
Mira Murati, former CTO of OpenAI, has released the first product from her startup Thinking Machines Lab.
The new platform, Tinker, is a flexible API for fine-tuning language models, giving researchers more control over customization.
Tinker removes infrastructure hurdles by managing distributed training, scheduling, allocation, and failure recovery.
Academic groups at Princeton, Stanford, Berkeley, and Redwood Research have already tested the system on tasks from theorem proving to reinforcement learning.
Tinker enters private beta today, free to start, with usage-based pricing coming soon.
Thinking Machines Lab: Murati’s First Product Launch After OpenAI
Mira Murati, who served as Chief Technology Officer at OpenAI during its years of rapid growth, is now making her first move as the founder of Thinking Machines Lab.
The company announced Tinker, a flexible API for fine-tuning language models, now in private beta. Unlike traditional fine-tuning services, Tinker is designed to give researchers direct control over algorithms and data, while abstracting away the heavy infrastructure challenges of distributed training.
Murati positioned the product as a step toward broadening access to advanced AI research: “Our mission is to enable more people to do research on cutting-edge models and customize them to their needs.”
Tinker API: Flexible Fine-Tuning Without Infrastructure Barriers
With Tinker, developers can fine-tune both small and large open-weight language models, including massive mixture-of-experts architectures such as Qwen-235B-A22B. Switching between models requires changing only a single line of Python code, lowering the friction of experimentation.
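The announcement does not publish the client code behind that one-line swap, so the following is a hypothetical sketch of the idea: every setting in the run stays fixed except the model identifier. The helper name, the config fields, and the small-model identifier are illustrative assumptions, not documented Tinker API; only Qwen-235B-A22B is named in the announcement.

```python
# Hypothetical illustration of a "one-line" model swap. None of these names
# come from Tinker's actual client library; they only show the pattern.

def make_training_config(base_model: str) -> dict:
    """Everything except the model name stays identical across runs."""
    return {
        "base_model": base_model,   # the single line a researcher changes
        "lora_rank": 32,
        "learning_rate": 1e-4,
    }

small_run = make_training_config("Qwen3-4B")        # assumed small model id
large_run = make_training_config("Qwen-235B-A22B")  # MoE model from the announcement

# Only the model string differs; the rest of the experiment is unchanged.
assert small_run.keys() == large_run.keys()
```

The point of the pattern is that scaling an experiment up or down becomes a configuration change rather than a rewrite.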
The service runs as a managed platform on Thinking Machines Lab’s infrastructure. It handles scheduling, resource allocation, and failure recovery automatically. By leveraging LoRA (Low-Rank Adaptation) techniques, Tinker can pool compute across multiple training runs, reducing costs while supporting concurrent research projects.
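The compute pooling described above rests on the standard LoRA idea: the large base weight matrix W stays frozen and shared, while each fine-tune trains only two small low-rank factors whose product is added to W. A minimal plain-Python sketch of that arithmetic (the matrix sizes and values here are toy examples):

```python
# LoRA in miniature: adapted weight = W + (alpha / r) * B @ A, where
# A is r x d_in, B is d_out x r, and B is zero-initialized so the adapted
# model starts out identical to the frozen base model. Because W is frozen
# and shared, many adapters can ride on one copy of the base model, which
# is what lets a platform pool compute across concurrent training runs.

def matmul(X, Y):
    """Plain-Python matrix multiply for the small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_adapted(W, A, B, alpha=16):
    r = len(A)  # LoRA rank = number of rows of A
    delta = matmul(B, A)
    return [[w + (alpha / r) * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# 2x2 frozen base weight, rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]         # 1 x 2
B = [[0.0], [0.0]]       # 2 x 1, zero-initialized
assert lora_adapted(W, A, B) == W  # zero adapter leaves the base model unchanged
```

In a real run, only A and B receive gradient updates, which is why a LoRA fine-tune of a very large model is cheap enough to pool across many users.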
Developer Ecosystem: Open-Source Cookbook for Post-Training Methods
To complement the API, Thinking Machines Lab is releasing the Tinker Cookbook, an open-source library that provides modern implementations of post-training methods.
The cookbook runs directly on the Tinker API and uses low-level primitives such as forward_backward and sample. This allows researchers to build on established approaches rather than recreating them from scratch, accelerating experimentation across domains.
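Only the primitive names forward_backward and sample come from the announcement; the mock client and loop below are a hypothetical illustration of the style of custom post-training loop such low-level primitives enable, not Tinker's actual interface.

```python
# Illustrative only: forward_backward and sample are the primitive names
# mentioned in the announcement; this MockClient and the reward logic around
# them are invented stand-ins showing the shape of a custom RL-style loop.

class MockClient:
    def __init__(self):
        self.weight = 0.0  # stand-in for model parameters

    def sample(self, prompt: str) -> str:
        """Stand-in for generation: real sampling would call the model."""
        return prompt + "!"

    def forward_backward(self, example: str, reward: float) -> float:
        """Stand-in for one forward pass plus gradient step; returns a 'loss'."""
        self.weight += 0.1 * reward
        return max(0.0, 1.0 - self.weight)

client = MockClient()
for step in range(5):
    completion = client.sample("prove lemma")
    reward = 1.0 if completion.endswith("!") else 0.0  # toy reward signal
    loss = client.forward_backward(completion, reward)
```

Because the researcher writes the loop, the reward function, and the data flow, the platform's job reduces to executing the two primitives on managed infrastructure.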
Research Adoption: Princeton, Stanford, Berkeley, and Redwood Research
Though newly announced, Tinker has already been adopted by prominent academic and research groups:
The Princeton Goedel Team trained mathematical theorem provers.
The Rotskoff Chemistry Group at Stanford fine-tuned a model for advanced chemistry reasoning.
Berkeley’s SkyRL group ran reinforcement learning experiments with asynchronous loops, multi-agents, and multi-turn tool use.
Redwood Research applied reinforcement learning with Qwen3-32B to difficult AI control tasks.
These early use cases highlight the range of problems Tinker can support, from mathematics and science to complex reinforcement learning systems.
Private Beta and Pricing Plans for Tinker
Tinker is now in private beta, with a waitlist open to researchers and developers. Thinking Machines Lab has confirmed that the service will be free to start, with usage-based pricing set to launch in the coming weeks.
Thinking Machines Lab will begin onboarding users to the platform today, with sign-ups now open through the Tinker waitlist.
Q&A: Mira Murati’s Tinker API and Thinking Machines Lab
Q1: What did Mira Murati announce?
A1: She announced Tinker, the first product from her startup Thinking Machines Lab.
Q2: What is Tinker?
A2: Tinker is a flexible API for fine-tuning open-weight language models, offering researchers more control without infrastructure overhead.
Q3: How does Tinker differ from other fine-tuning services?
A3: It provides low-level primitives for experimentation while handling distributed training and infrastructure automatically.
Q4: Who is already using Tinker?
A4: Early adopters include Princeton, Stanford, Berkeley, and Redwood Research, applying it to tasks ranging from theorem proving to reinforcement learning.
Q5: How can developers access Tinker?
A5: Developers can join the private beta waitlist. The platform is free to start, with usage-based pricing coming soon.
What This Means: Expanding Access to AI Research and Customization
The launch of Tinker underscores a broader shift in the AI ecosystem: making advanced model customization accessible to a wider community of researchers and developers. By focusing on open-weight models and simplifying fine-tuning infrastructure, Thinking Machines Lab is lowering the barrier to entry for experimentation at scale.
For Mira Murati, this first product signals how her new company intends to position itself — as a bridge between cutting-edge AI research and practical developer tools. Early adoption by universities and research labs points to strong demand for platforms that combine flexibility, scalability, and cost efficiency.
As onboarding begins, Tinker could become a critical tool in shaping how institutions, startups, and independent researchers customize AI for specialized domains, pushing forward the next wave of innovation beyond proprietary models.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.