OpenAI’s Chief Scientist: AI Could Produce Novel Research by the End of the Decade

Image Source: ChatGPT-4o
This article is based on an interview originally published by Nature.
Jakub Pachocki, OpenAI’s chief scientist since 2024, believes artificial intelligence models will soon be capable of producing original research and making measurable economic impacts. In a conversation with Nature, Pachocki outlined how he sees the field evolving — and how OpenAI plans to balance innovation with safety concerns.
Pachocki, who joined OpenAI in 2017 after a career in theoretical computer science and competitive programming, now leads the firm’s development of its most advanced AI systems. These systems are designed to tackle complex tasks across science, mathematics, and engineering, moving far beyond the chatbot functions that made ChatGPT a household name in 2022.
Toward AI That Thinks and Discovers
When asked about the future role of reasoning models, Pachocki predicted a major shift over the next five years. “Today you can talk to a model, but it’s only an assistant that needs constant guidance. I expect this will be the primary thing that changes,” he said.
He pointed to OpenAI’s Deep Research tool, which can already work unsupervised for short bursts and produce useful results, despite using minimal computing resources. Pachocki suggested that applying much more computing power to open research problems could lead to AIs that “are actually capable of novel research.”
He foresees particular advances in fields such as autonomous software engineering and hardware design, with similar breakthroughs possible across other scientific disciplines.
The Growing Role of Reinforcement Learning
Pachocki emphasized that recent progress in AI reasoning stems in large part from reinforcement learning, a training technique in which models learn by trial and error, guided by reward signals that can include human feedback.
Early versions of ChatGPT, he explained, used a two-stage process: unsupervised pre-training to absorb large amounts of data, followed by reinforcement learning to fine-tune the model into a useful assistant. Recent improvements have deepened the reinforcement learning phase, helping models not just respond better but develop new “ways of thinking.”
However, Pachocki cautioned that AI models do not think like humans. “A pre-trained model has learned some things about the world, but it doesn’t really have any conception of how it learned them, or any temporal order as to when it learned things,” he said. Still, he added, “I definitely believe we have significant evidence that the models are capable of discovering novel insights.”
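To make the two-stage recipe Pachocki describes more concrete, here is a minimal, purely illustrative sketch in PyTorch: a tiny character-level model is first pre-trained with next-token prediction on raw text, then nudged with a simple REINFORCE-style policy-gradient update. The `TinyLM` class, the toy corpus, and the `reward()` function are invented stand-ins for illustration only; the reward here is a hard-coded heuristic, not human feedback, and none of this reflects OpenAI's actual training pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy corpus and character vocabulary (stand-ins for web-scale text).
corpus = "hello world hello model hello reward "
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}
itos = {i: ch for ch, i in stoi.items()}
data = torch.tensor([stoi[ch] for ch in corpus])


class TinyLM(nn.Module):
    """A minimal next-token predictor standing in for a large language model."""

    def __init__(self, vocab_size: int, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, idx):
        hidden, _ = self.rnn(self.embed(idx))
        return self.head(hidden)  # logits over the next token at each position


model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Stage 1: unsupervised pre-training -- next-token prediction on raw text.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for _ in range(200):
    logits = model(x)
    loss = F.cross_entropy(logits.view(-1, len(vocab)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()


def reward(text: str) -> float:
    """Hypothetical stand-in for human or reward-model feedback."""
    return 1.0 if "reward" in text else -0.1


# Stage 2: reinforcement-learning fine-tuning -- REINFORCE with the toy reward.
for _ in range(100):
    idx = torch.tensor([[stoi["h"]]])
    log_probs, chars = [], []
    for _ in range(20):  # sample a short continuation, one character at a time
        logits = model(idx)[:, -1, :]
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        chars.append(itos[tok.item()])
        idx = torch.cat([idx, tok.unsqueeze(0)], dim=1)
    score = reward("".join(chars))
    # Policy gradient: scale the sample's log-likelihood by its reward.
    pg_loss = -score * torch.stack(log_probs).sum()
    opt.zero_grad()
    pg_loss.backward()
    opt.step()
```

In production systems the pre-training corpus is internet-scale, the reinforcement-learning step typically uses a learned reward model and more sophisticated algorithms than plain REINFORCE, and far more compute goes into that second phase, which is the deepening Pachocki refers to.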
Plans for an Open-Weight Model
OpenAI, which has mostly released proprietary models, plans to launch its first open-weight model since GPT-2, released in 2019. Researchers will be able to download the model and train it further, opening new opportunities for academic and independent study.
Pachocki said he is “quite excited” about the initiative, viewing it as a critical step in understanding how different deployment methods affect people. However, he noted that releasing open versions of frontier-level models remains unlikely because of safety risks. Instead, OpenAI aims to release an open model that is “better than the available open models,” but not among its most advanced.
Rethinking AGI Timelines
Pachocki also reflected on how his views on artificial general intelligence (AGI) have evolved. Early in his career, he considered mastery of the game of Go a distant milestone. The defeat of a human Go champion by AI in 2016 reshaped his expectations.
"When I joined OpenAI in 2017, I was still among the biggest sceptics at the company, but milestones have fallen faster than I expected," he said. Pachocki pointed to rapid gains in passing the Turing test, mathematical problem solving, and software development as evidence that AI progress is accelerating.
Looking ahead, he said the next major benchmark will be AI making "actual measurable economic impact," particularly through the creation of novel research and valuable software. Pachocki expects substantial advances toward this goal before the end of the decade — and possibly even within the next year.
What This Means
Jakub Pachocki’s comments highlight how artificial intelligence is moving from an experimental tool to an active driver of innovation and economic change. His expectation that AI could soon generate novel research and valuable software autonomously signals a future where machines may contribute directly to scientific discovery, product development, and industry growth.
If realized, this shift could reshape how organizations approach research and development, lowering barriers to entry for new ideas and accelerating the pace of innovation across disciplines. It could also challenge existing models of education, intellectual property, and workforce development, as AI systems begin to tackle tasks traditionally reserved for human experts.
At the same time, Pachocki’s emphasis on reinforcement learning and safe deployment reflects a growing awareness that progress must be balanced with caution. Releasing powerful models into open environments carries risks that researchers, companies, and policymakers will need to manage carefully.
As AI moves closer to contributing original ideas and making tangible economic impacts, society faces a crucial period of adaptation—one that will define how these technologies shape the decades ahead.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.