
A conceptual illustration of an AI-powered data center operating in orbit, highlighting how space-based compute could reduce reliance on Earth’s power grids and water-intensive cooling systems. Image Source: ChatGPT-5.2
Nvidia-Backed Startup Trains AI Model in Space, Advancing Orbital Data Centers
Key Takeaways: AI Compute Moves Into Orbit
Starcloud trained and ran AI models in space using an Nvidia H100 GPU, marking the first known instance of high-powered AI compute operating in orbit.
The company’s Starcloud-1 satellite is running large language models, including NanoGPT and Gemma, directly from space.
Orbital data centers could address rising energy demand, water usage, and emissions from Earth-based AI infrastructure.
Starcloud plans to scale toward gigawatt-level space-based data centers powered by continuous solar energy.
A Washington-based startup has successfully trained and run artificial intelligence models in orbit for the first time, signaling that space-based data centers may be moving from theory to reality. Backed by Nvidia, Starcloud says the milestone demonstrates how orbital computing could help relieve mounting energy and infrastructure pressures on Earth.
Starcloud Demonstrates AI Training in Space
In early November, Starcloud launched its first satellite equipped with an Nvidia H100 graphics processing unit, a chip roughly 100 times more powerful than any GPU previously sent into space.
Using that hardware, the company trained NanoGPT, a compact GPT implementation created by OpenAI founding member Andrej Karpathy, on the complete works of Shakespeare. The resulting model generated text in Shakespearean English, making it the first AI model known to have been trained entirely in orbit.
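Training a character-level model on Shakespeare, as NanoGPT does with a small transformer, can be illustrated in drastically simplified form with a bigram model: count which character follows which in the corpus, then sample from those counts. This sketch is not Starcloud's or Karpathy's code; it only shows the shape of the task on a toy scale.

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count character-to-character transitions (a toy stand-in for GPT training)."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample characters proportionally to the observed transition counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "to be, or not to be: that is the question"
model = train_bigram(corpus)
sample = generate(model, "t", 20)
```

A real run replaces the bigram counts with a transformer trained by gradient descent on the full Shakespeare corpus, but the input/output contract is the same: text in, statistically plausible text out.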
Starcloud has since confirmed that its Starcloud-1 satellite is also running Gemma, Google's family of open models built on the research and technology behind its Gemini models. According to CNBC, this marks the first known instance of an LLM running on a high-powered Nvidia GPU in space.
“Anything you can do in a terrestrial data center, I’m expecting to be able to be done in space. And the reason we would do it is purely because of the constraints we’re facing on energy terrestrially,” Starcloud CEO Philip Johnston said in an interview.
“Greetings, Earthlings! Or, as I prefer to think of you — a fascinating collection of blue and green. I’m Gemma, and I’m here to observe, analyze, and perhaps, occasionally offer a slightly unsettlingly insightful commentary. Let’s begin!” the model wrote in a message transmitted from the recently launched satellite.
Why Space-Based AI Data Centers Are Gaining Attention
Starcloud's orbital experiment comes as traditional data centers on Earth face growing, well-documented constraints:
strain on regional power grids
billions of gallons of water consumed annually for cooling
significant greenhouse gas emissions
According to the International Energy Agency, electricity consumption from data centers is projected to more than double by 2030, driven largely by AI workloads.
Space-based data centers would rely on continuous solar energy, allowing AI systems to operate without the interruptions caused by Earth's day-night cycle or weather. Johnston told CNBC that orbital facilities could operate at roughly one-tenth the energy cost of terrestrial data centers.
Scaling Toward Gigawatt-Level Orbital AI Infrastructure
Starcloud plans to build a 5-gigawatt orbital data center with solar and cooling arrays measuring roughly four kilometers on a side. According to the company's white paper, a space-based compute cluster of that size could produce more power than the largest power plant in the United States while remaining smaller and cheaper than an equivalent terrestrial solar installation.
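The white paper's figures can be sanity-checked with rough numbers: above the atmosphere the solar flux is about 1,361 W/m², and a 4 km × 4 km array at an assumed ~25% conversion efficiency (my assumption for illustration, not a Starcloud specification) collects on the order of 5 GW.

```python
# Back-of-envelope check of a 4 km x 4 km orbital solar array.
# The efficiency figure is an assumption for illustration only.
SOLAR_CONSTANT_W_PER_M2 = 1361   # average solar flux above the atmosphere
PANEL_EFFICIENCY = 0.25          # assumed panel conversion efficiency

side_m = 4_000                   # 4 km per side
area_m2 = side_m ** 2            # 16 million square meters
power_w = area_m2 * SOLAR_CONSTANT_W_PER_M2 * PANEL_EFFICIENCY
power_gw = power_w / 1e9         # roughly 5.4 GW under these assumptions
```

Even with conservative efficiency assumptions, the arithmetic lands in the same multi-gigawatt range the company cites, which is why the panel dimensions are measured in kilometers rather than meters.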
Johnston said the company’s ability to run Gemma in orbit demonstrates that space-based data centers can support a range of advanced AI models, including those that require large compute clusters.
“We can query it, and it will respond in the same way that when you query a chat from a database on Earth, it will give you a very sophisticated response,” Johnston said. “We can do that with our satellite.”
In a statement to CNBC, Tris Warkentin, product director at Google DeepMind, said that “seeing Gemma run in the harsh environment of space is a testament to the flexibility and robustness of open models.”
Starcloud estimates its satellites will have a five-year operational lifespan, aligned with the expected durability of Nvidia’s chips in orbit. The company is part of the Nvidia Inception program and has participated in Y Combinator and the Google for Startups Cloud AI Accelerator.
Looking ahead, Starcloud plans to expand its orbital computing capabilities by deploying multiple Nvidia H100 GPUs and integrating Nvidia’s Blackwell platform on its next satellite launch, scheduled for October 2026. That satellite will also include a module running a cloud platform from infrastructure startup Crusoe, allowing customers to deploy and operate AI workloads directly from space.
Real-World Use Cases for AI Compute From Orbit
Beyond technical demonstrations, Starcloud says orbital AI compute could enable real-time intelligence, with data processed and analyzed directly in space rather than downlinked to Earth first.
One near-term focus is real-time satellite imagery analysis. Starcloud is already working on customer workloads by running inference on imagery from observation company Capella Space, enabling faster detection of events such as wildfires, maritime emergencies, and other time-sensitive incidents.
According to Johnston, space-based AI systems could identify the thermal signature of a wildfire at the moment it ignites, allowing alerts to be sent immediately to first responders. Similar techniques could be used to help locate lifeboats from capsized vessels at sea, where rapid response can be critical.
Starcloud has also integrated satellite telemetry into its AI systems, allowing the model to access real-time data such as altitude, orientation, location, and velocity. Users can query the system about the satellite’s current position or trajectory, receiving responses generated directly from onboard AI rather than ground-based infrastructure.
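One way such telemetry-aware querying could work — a sketch under assumed data shapes, not Starcloud's actual interface — is to inject the latest onboard readings into the model's prompt before inference, so the LLM answers from live data rather than stale training knowledge:

```python
from datetime import datetime, timezone

def build_telemetry_prompt(telemetry: dict, question: str) -> str:
    """Fold live satellite telemetry into an LLM prompt so an onboard model
    can answer position/trajectory questions from current readings."""
    lines = [f"{key}: {value}" for key, value in sorted(telemetry.items())]
    return (
        "You are an assistant running on a satellite. Current telemetry:\n"
        + "\n".join(lines)
        + f"\nTimestamp (UTC): {datetime.now(timezone.utc).isoformat()}"
        + f"\n\nUser question: {question}\nAnswer using the telemetry above."
    )

telemetry = {  # hypothetical example values, not real Starcloud-1 data
    "altitude_km": 525.3,
    "velocity_km_s": 7.61,
    "latitude_deg": -12.4,
    "longitude_deg": 44.9,
}
prompt = build_telemetry_prompt(telemetry, "What is our current altitude?")
# `prompt` would then be passed to the onboard model (e.g., Gemma) for inference.
```

The design choice here is simple prompt grounding: because the model runs next to the sensors, the freshest telemetry can be folded into every query without a round trip to a ground station.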
Starcloud argues that performing inference in orbit could reduce latency, lower bandwidth demands, and enable faster decision-making for applications that depend on real-time awareness from space-based sensors.
Summarizing the broader implications of in-orbit inference, Johnston said, “Running advanced AI from space solves the critical bottlenecks facing data centers on Earth.”
“Orbital compute offers a way forward that respects both technological ambition and environmental responsibility,” he added.
Risks and Challenges of Orbital AI Data Centers
Despite the technical milestone, orbital data centers face a range of unresolved challenges that could slow large-scale deployment.
One major concern is radiation exposure. Space-based hardware must operate amid elevated levels of cosmic radiation, which can degrade electronics over time and increase the risk of failure. Commercial GPUs such as the H100 are not radiation-hardened, so maintaining long-term reliability in orbit remains an open engineering challenge.
In-orbit maintenance is another limitation. Unlike terrestrial data centers, satellites cannot be easily repaired, upgraded, or physically serviced once deployed. Hardware failures may require full replacement launches, increasing cost and complexity.
Orbital data centers must also contend with space debris and collision risk. As Earth’s orbit becomes more congested, satellites face growing hazards from debris, raising concerns about operational safety and long-term sustainability.
There are also regulatory and governance questions. Issues related to data sovereignty, security, and compliance become more complex when AI workloads are processed outside national jurisdictions. Coordination around space traffic management and international regulation is still evolving.
Finally, economic viability remains an open question. Launch costs, manufacturing complexity, and long development timelines could challenge the near-term business case, even as interest from governments and enterprises grows.
Analysts, including those at Morgan Stanley, have noted that while orbital data centers offer compelling advantages, overcoming these technical, regulatory, and economic hurdles will be critical before the model can scale commercially.
Growing Industry Interest in Orbital AI Data Centers
Starcloud’s milestone comes amid growing interest from major technology companies and startups exploring space-based data center infrastructure.
In early November, Google announced a “moonshot” initiative called Project Suncatcher, which aims to deploy solar-powered satellites equipped with the company’s tensor processing units to support future AI workloads. Lonestar Data Holdings, a privately held company, is pursuing plans to establish what it describes as the first commercial lunar data center on the surface of the Moon.
Meanwhile, Aetherflux, founded by Robinhood co-founder Baiju Bhatt, has announced plans to deploy an orbital data center satellite in the first quarter of 2027, underscoring growing interest from new entrants in the space-based compute market.
Interest is also emerging among leading AI companies. According to The Wall Street Journal, OpenAI CEO Sam Altman has explored a potential acquisition or partnership with a rocket manufacturer, a move that could position OpenAI to compete more directly with Elon Musk’s SpaceX, which is a key launch partner for Starcloud.
Commenting on Starcloud’s recent launch, Dion Harris, senior director of AI infrastructure at Nvidia, said in a statement to CNBC: “From one small data center, we’ve taken a giant leap toward a future where orbital computing harnesses the infinite power of the sun.”
Q&A: AI Compute and Data Centers in Space
Q: What makes this milestone significant?
A: This is the first known case of a high-powered AI model being trained and run in orbit using a modern Nvidia H100 GPU, demonstrating that advanced AI workloads can operate reliably in space.
Q: Why run AI models in space instead of on Earth?
A: Space-based data centers can access continuous solar power, avoid terrestrial cooling constraints, and reduce strain on Earth’s energy grids and water resources.
Q: What models has Starcloud run in orbit so far?
A: Starcloud has trained NanoGPT in space and is running Gemma, an open large language model from Google, on its satellite.
Q: Are orbital data centers ready for large-scale deployment?
A: While this demonstration proves feasibility, large-scale orbital data centers still face technical, regulatory, and operational challenges before commercial deployment.
What This Means: A Turning Point for AI Infrastructure
Starcloud’s successful training and operation of AI models in orbit demonstrates that future AI infrastructure may no longer need to be confined to Earth. Instead of relying solely on Earth-based data centers that compete with communities for electricity, water, and land, advanced AI workloads may be able to operate beyond the planet’s surface.
That distinction matters as AI demand accelerates and traditional data centers increasingly strain local power grids and water supplies. On Earth, cooling high-performance AI systems often requires large volumes of water and electricity drawn from municipal infrastructure. In space, by contrast, data centers would run on continuous solar power and shed heat through radiative cooling and closed-loop thermal systems, letting AI workloads operate without drawing on residential electricity grids or local water resources.
While orbital data centers remain in their early stages, proving that modern, high-powered GPUs can run reliably in orbit moves the concept from theoretical planning into practical feasibility. It suggests a future in which the most energy-intensive AI workloads do not have to be concentrated near population centers or compete directly with communities for essential resources.
If space-based compute can be developed responsibly and economically, it could reshape how governments and enterprises think about energy-intensive AI systems — not as infrastructure that must remain tethered to Earth, but as systems that can operate wherever power and cooling are most readily available.
Starcloud’s milestone does not signal an immediate migration of AI infrastructure into space. But it does establish a credible foundation for rethinking how AI systems are powered and cooled — and where the next generation of global compute infrastructure may ultimately reside.
Sources:
CNBC — ‘Greetings, earthlings’: Nvidia-backed Starcloud trains first AI model in space as orbital data center race heats up
https://www.cnbc.com/2025/12/10/nvidia-backed-starcloud-trains-first-ai-model-in-space-orbital-data-centers.html
Starcloud — Orbital data center white paper
https://starcloudinc.github.io/wp.pdf
Crusoe — Crusoe to become first cloud operator in space through partnership with Starcloud
https://www.crusoe.ai/resources/newsroom/crusoe-to-become-first-cloud-operator-in-space-through-partnership-with-starcloud
Scientific American — Reporting on AI-driven growth in data center energy demand through 2030
https://www.scientificamerican.com/article/ai-will-drive-doubling-of-data-center-energy-demand-by-2030/
Nvidia — Nvidia Inception program overview
https://www.nvidia.com/en-us/startups/
Google for Startups — Cloud AI Accelerator (North America)
https://startup.google.com/programs/accelerator/cloud-ai/north-america/
Google — Project Suncatcher technical paper
https://services.google.com/fh/files/misc/suncatcher_paper.pdf
PR Newswire — Aetherflux announces orbital data center targets Q1 2027
https://www.prnewswire.com/news-releases/aetherflux-announces-orbital-data-center-targets-q1-2027-302635961.html
MSN / The Wall Street Journal — Reporting on Sam Altman’s exploration of a potential space launch partnership
https://www.msn.com/en-us/news/technology/sam-altman-has-explored-deal-to-build-competitor-to-elon-musk-s-spacex/ar-AA1RFtIP?ocid=BingNewsVerp
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.
