
Google Unveils AlphaEvolve: Gemini-Powered AI for Algorithm Discovery

[Image: diagram titled "AlphaEvolve: AI Agent for Algorithm Design," showing Gemini Flash and Gemini Pro feeding an evolutionary framework that outputs to Computing, AI Training, and Mathematics. Image Source: ChatGPT-4o]


Google has introduced a new AI agent called AlphaEvolve that combines its most powerful large language models with automated testing tools to create and improve complex algorithms. Built around the Gemini family of models, AlphaEvolve is designed to evolve code for use in computing, AI training, and mathematical research—showing early promise in both industrial and theoretical domains.

AlphaEvolve has already improved systems across Google’s infrastructure, including data centers, chip design, and large-scale AI models. It also helped discover new solutions to long-standing mathematical problems, underscoring its potential in both applied and scientific settings.

How AlphaEvolve Works

AlphaEvolve is not just a code generator—it’s a full evolutionary agent. It proposes solutions in the form of code, evaluates their accuracy using automated metrics, and refines the best ones through repeated iterations.

To balance speed and depth in this process, AlphaEvolve uses two complementary models from the Gemini lineup:

  • Gemini Flash, for quickly generating a wide range of ideas.

  • Gemini Pro, for producing deeper, higher-quality code suggestions.

Each proposed program is tested using quantitative benchmarks. This allows AlphaEvolve to operate effectively in domains where solutions can be measured objectively, such as math and computer science.
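The propose-evaluate-refine loop described above can be sketched in miniature. Everything here is illustrative: the candidates are toy coefficient lists rather than programs, and the "Flash" and "Pro" roles are stand-ins for the two Gemini models, approximated by cheap broad mutations versus careful refinements of the current best.

```python
import random

def evaluate(candidate):
    # Toy automated metric: how well a candidate "program"
    # (here just a list of coefficients) matches a target.
    target = [3.0, -1.0, 2.0]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate, scale):
    # Stand-in for a model proposing an edit: a small random change.
    return [c + random.gauss(0, scale) for c in candidate]

def evolve(generations=200, population=20):
    pool = [[0.0, 0.0, 0.0] for _ in range(population)]
    for _ in range(generations):
        # "Flash" role: many cheap, diverse proposals.
        proposals = [mutate(p, scale=0.5) for p in pool]
        # "Pro" role: fewer, more careful refinements of the best so far.
        best = max(pool + proposals, key=evaluate)
        proposals += [mutate(best, scale=0.05) for _ in range(5)]
        # Keep only the top performers, as scored by the automated metric.
        pool = sorted(pool + proposals, key=evaluate, reverse=True)[:population]
    return max(pool, key=evaluate)
```

The key property mirrored here is that fitness is measured mechanically, so the loop can run unattended for many generations.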

Applications Across Google’s Infrastructure

Over the past year, AlphaEvolve has contributed to multiple layers of Google’s computing systems—including data centers, hardware, and core software infrastructure. Its discoveries have led to performance improvements that scale across the company’s global operations.

Because these systems power not just Google products but also its AI development and deployment pipelines, even small optimizations can have a cascading effect. For example:

  • More efficient data center scheduling increases compute availability without requiring new infrastructure.

  • Smarter hardware designs improve chip performance for training and running AI models.

  • Algorithmic gains in key operations like matrix multiplication accelerate AI training and inference, reducing both cost and energy use.

Taken together, AlphaEvolve’s contributions support a more powerful and sustainable digital ecosystem, with benefits that ripple outward to end users, developers, and researchers across Google's platforms.

Data Center Scheduling

AlphaEvolve created a simple but highly effective rule-of-thumb solution for Borg, Google’s data center management system, helping it schedule computing tasks more efficiently without needing complex calculations. The solution improved resource utilization by an average of 0.7% globally—an efficiency gain that allows more work to be done without expanding computing capacity.

Importantly, the code produced was interpretable and maintainable, offering benefits beyond performance: it was easy to understand, debug, predict, and deploy.
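Google has not published the heuristic itself, but a rule-of-thumb scheduler in this spirit might score machines by how balanced their leftover resources would be after placing a task, since capacity "stranded" on one dimension (CPU with no memory, or vice versa) can never be used. The function names and scoring rule below are hypothetical.

```python
def score_machine(free_cpu, free_mem, req_cpu, req_mem):
    # Hypothetical rule of thumb: penalize imbalance between leftover
    # CPU and memory, which would strand capacity on the machine.
    left_cpu = free_cpu - req_cpu
    left_mem = free_mem - req_mem
    if left_cpu < 0 or left_mem < 0:
        return float("-inf")  # task does not fit on this machine
    return -abs(left_cpu - left_mem)

def pick_machine(machines, req_cpu, req_mem):
    # Choose the feasible machine whose leftover resources stay balanced.
    return max(machines, key=lambda m: score_machine(*m, req_cpu, req_mem))
```

The appeal of such heuristics, as the article notes, is that they are cheap to evaluate at scheduling time and easy for engineers to audit.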

Hardware Design

In chip development, AlphaEvolve proposed a rewrite of a Verilog arithmetic circuit, a low-level hardware description used by engineers to design chips. The suggestion involved removing unnecessary bits from a highly optimized circuit used for matrix multiplication, one of the most computation-heavy tasks in AI workloads.

Crucially, AlphaEvolve’s design passed rigorous verification checks to ensure it still functioned correctly—a non-negotiable requirement in hardware engineering. The optimized circuit has since been integrated into an upcoming Tensor Processing Unit (TPU), Google’s custom AI accelerator.

By working in Verilog—the same language chip designers use—AlphaEvolve shows how AI systems can collaborate directly with hardware engineers, accelerating the design and validation of next-generation chips.

AI Model Training

AlphaEvolve also delivered measurable performance gains in training and running large AI models. One key breakthrough came from restructuring matrix multiplication tasks, which are central to nearly every stage of model training. By dividing these large operations into smarter, more manageable subproblems, AlphaEvolve sped up this core computation by 23%—which translated into a 1% reduction in Gemini’s overall training time.
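The subdivision idea in the paragraph above, splitting one large multiplication into smaller tile-sized subproblems, can be sketched in plain Python. This is a generic blocked (tiled) matrix multiply, not AlphaEvolve's actual scheme; in production systems the tile size is chosen to fit fast on-chip memory.

```python
def blocked_matmul(A, B, block=2):
    # Multiply two n x n matrices tile by tile, accumulating partial
    # products into C. Each (i0, j0, k0) iteration handles one small
    # subproblem instead of streaming over the whole matrices at once.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            for k0 in range(0, n, block):
                for i in range(i0, min(i0 + block, n)):
                    for j in range(j0, min(j0 + block, n)):
                        s = 0.0
                        for k in range(k0, min(k0 + block, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] += s
    return C
```

The result is identical to a naive multiply; the win comes from the memory-access pattern, not the arithmetic count.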

In large-scale AI systems, even a 1% gain represents significant savings in time, energy, and cost. And because model training often requires expert manual optimization, AlphaEvolve's automated approach also shortens the engineering cycle—cutting weeks of tuning down to just days of guided experimentation.

AlphaEvolve also tackled one of the most complex layers of optimization: low-level GPU instructions used in inference kernels, especially in Transformer-based models. These instructions are usually so tightly optimized by compilers that human engineers rarely attempt to improve them manually. Yet AlphaEvolve managed to achieve up to a 32.5% speedup for the FlashAttention kernel, a component critical for managing memory and performance in attention mechanisms.

By surfacing these hard-to-find efficiencies, AlphaEvolve helps researchers identify performance bottlenecks and quickly integrate improvements—boosting productivity and unlocking further compute and energy savings.

Breakthroughs in Mathematics

AlphaEvolve is also advancing mathematical research, particularly in areas where algorithmic solutions can be tested and verified with precision.

New Matrix Multiplication Algorithms

Starting from a minimal code skeleton, AlphaEvolve developed key components of a novel gradient-based optimization procedure. This led to the discovery of multiple new algorithms for matrix multiplication, a core problem in computer science.

One standout achievement: AlphaEvolve found a way to multiply 4x4 complex-valued matrices using 48 scalar multiplications. This surpasses the long-standing Strassen algorithm from 1969, which had been the most efficient known method for this specific case. It also improves on AlphaTensor, Google's earlier system for algorithm discovery, which had only achieved improvements for binary arithmetic in similar tasks.
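The 48-multiplication scheme itself is not reproduced in this article, but the underlying idea—trading extra additions for fewer scalar multiplications—is the same one behind Strassen's classic 2x2 recipe, which uses 7 multiplications where the naive method needs 8. As a small, verifiable illustration:

```python
def strassen_2x2(A, B):
    # Strassen's 1969 construction: 7 scalar multiplications
    # (m1..m7) instead of the naive 8, at the cost of more additions.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Applied recursively, saving even one scalar multiplication at a fixed block size compounds into a lower asymptotic cost, which is why results like the 48-multiplication scheme matter.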

Tackling Over 50 Open Problems

To test its versatility, researchers applied AlphaEvolve to more than 50 unsolved problems across fields like:

  • Mathematical analysis

  • Geometry

  • Combinatorics

  • Number theory

Setting up each experiment took just hours, thanks to the system’s general-purpose design. In about 75% of the problems, AlphaEvolve independently rediscovered existing state-of-the-art solutions. Even more notably, in 20% of cases, it produced better results than previously known, effectively moving the needle on open research questions.

One high-profile example is the kissing number problem, which has intrigued mathematicians for over 300 years. This geometric puzzle asks how many non-overlapping spheres can touch a single central sphere. In 11-dimensional space, AlphaEvolve identified a configuration of 593 outer spheres, establishing a new lower bound—a significant contribution to this area of mathematical inquiry.
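What makes such results machine-verifiable is that a candidate configuration can be checked mechanically: centers of unit spheres touching a central unit sphere lie at distance 2 from the origin, and non-overlap means every pair of centers is at least 2 apart. The 11-dimensional coordinates are not public here; the sketch below runs the same check on the familiar 2-D hexagon of six circles.

```python
import math

def is_valid_kissing_config(points, tol=1e-9):
    # Every center must lie at distance 2 from the origin
    # (a unit sphere touching the central unit sphere)...
    for p in points:
        r = math.sqrt(sum(x * x for x in p))
        if abs(r - 2.0) > tol:
            return False
    # ...and no two spheres may overlap: pairwise distance >= 2.
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) < 2.0 - tol:
                return False
    return True

# 2-D example: six circles in a hexagon, the known optimum in the plane.
hexagon = [(2 * math.cos(k * math.pi / 3), 2 * math.sin(k * math.pi / 3))
           for k in range(6)]
```

A verifier like this is exactly the kind of automated metric that lets an evolutionary agent search for larger configurations without human checking in the loop.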

Looking Ahead

AlphaEvolve marks a shift from single-function code generation to developing full-scale, verifiable algorithms across diverse domains.

A user-friendly interface is currently being developed by the People + AI Research (PAIR) team, with an Early Access Program planned for selected academic users. Google is also considering broader availability in the future.

Because AlphaEvolve can be applied to any domain where algorithmic solutions can be automatically verified, Google sees potential in fields like materials science, drug discovery, sustainability, and broader business applications.

What This Means

AlphaEvolve is more than a tool for improving algorithms—it’s a step toward automated scientific and engineering insight. By designing provably correct solutions across software, hardware, and mathematics, it demonstrates that large language models can move from assisting with tasks to contributing meaningfully to innovation itself.

This matters because some of the most important challenges in computing and science today—like optimizing AI efficiency, accelerating hardware design, or advancing unsolved mathematical problems—are increasingly constrained not by data, but by the time and expertise required to explore solutions. AlphaEvolve reduces those barriers. It can test thousands of ideas, evolve better ones, and deliver results that are not only correct, but deployable.

In fields where progress depends on trial-and-error, expert intuition, and tight feedback loops, agents like AlphaEvolve introduce a new dynamic: machine-led iteration at human-level relevance. The implications extend from industrial applications to scientific discovery—offering new ways to scale research, lower costs, and unlock solutions previously out of reach.

This is not just an advance in what AI can do—it’s a shift in who, or what, gets to participate in solving the hardest problems.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.