
Former NASA Scientist Refutes Musk! Building Data Centers in Space Is Even More Absurd Than Manned Missions

Former NASA engineer and Google Cloud expert Taranis published a post harshly criticizing the idea of building data centers in space, calling it a “completely impractical and terrible idea.” As an expert with a Ph.D. in space electronics and 10 years of experience at Google, he systematically breaks down the fatal flaws of this concept from four key perspectives: power supply, cooling, radiation tolerance, and communications.

NASA Expert Background and Insights from ISS Astronauts’ Work Experience

ISS Advanced Thermal Control System (Source: Boeing)

To clarify his qualifications, the author is a former NASA engineer and scientist with a Ph.D. in space electronics. He also worked at Google for 10 years, across various departments including YouTube and the cloud division responsible for deploying AI computing power. This dual expertise in both space engineering and cloud computing makes him highly qualified to comment on this topic.

He states outright at the beginning of the article: “This is absolutely a terrible idea; it really makes no sense at all.” There are many reasons, but in short, the electronic equipment required to run a data center, especially the GPUs and TPUs that provide AI computing power, is simply not suited to operating in space. He reminds readers not to rely on intuition if they haven’t worked in this field, since the real-world challenges of operating hardware in space are not always obvious.

This warning comes from his hands-on experience at NASA. The challenges posed by the space environment to electronic equipment are far beyond what most people imagine. Even astronauts working on the International Space Station (ISS) have to deal with many technical problems that are nonexistent on Earth. Every system on the ISS is meticulously designed to cope with vacuum, radiation, and extreme temperature fluctuations—designs that often mean performance compromises and massive costs.

Power Supply: ISS-Scale Solar Array Can Only Power 200 GPUs

The main reason people want to build data centers in space seems to be the abundance of power. But the NASA engineer points out that this is not the case. Basically, you have two options: solar and nuclear. Solar means deploying arrays of solar panels with photovoltaic cells, and while that can work, it won’t magically be better than solar panels on Earth. You don’t lose that much power traveling through the atmosphere, so your intuition about the required area is pretty much correct.

The largest solar array deployed in space to date is the ISS system, which at peak can provide just over 200kW of power. Deploying this required several space shuttle flights and a lot of astronaut labor; its area is about 2,500 square meters—over half the size of an American football field.

Using the NVIDIA H200 as a reference, each GPU draws about 0.7kW. GPUs can’t run standalone, and power conversion isn’t 100% efficient, so in practice 1kW per GPU is a better benchmark. Thus, a massive ISS-sized array could power about 200 GPUs.

Power Demand Comparison

  • ISS solar array: 200kW peak power, 2,500 square meters
  • Single H200 GPU: 1kW actual power consumption
  • GPUs supported by an ISS-scale array: about 200 (equivalent to about 3 ground racks)
  • OpenAI Norway data center project: 100,000 GPUs

To reach OpenAI’s capacity, you’d need to launch 500 ISS-sized satellites. For comparison, a single server rack typically holds 72 GPUs, so each giant satellite is only equivalent to about three racks. Nuclear is no help either: a typical radioisotope thermoelectric generator (RTG) outputs about 50W to 150W, not even enough to run a single GPU.
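The arithmetic behind these numbers is easy to check. Below is a rough sketch (illustrative only, using the figures quoted above rather than anything published by the author) of how many GPUs one ISS-scale array could feed and how many such satellites an OpenAI-scale deployment would require:

```python
# Rough sanity check of the power figures quoted above (a sketch, not the
# author's own calculation).
ISS_ARRAY_PEAK_KW = 200    # ISS solar array peak output
KW_PER_GPU = 1.0           # H200: ~0.7kW per chip, ~1kW with overhead and conversion losses
GPUS_PER_RACK = 72         # typical ground rack, as quoted above
TARGET_GPUS = 100_000      # OpenAI Norway data center project

gpus_per_array = ISS_ARRAY_PEAK_KW / KW_PER_GPU      # ≈ 200 GPUs
racks_per_array = gpus_per_array / GPUS_PER_RACK     # ≈ 2.8 ground racks
satellites_needed = TARGET_GPUS / gpus_per_array     # ≈ 500 ISS-sized satellites

print(f"GPUs per ISS-scale array: {gpus_per_array:.0f}")
print(f"Equivalent ground racks:  {racks_per_array:.1f}")
print(f"Satellites needed for {TARGET_GPUS:,} GPUs: {satellites_needed:.0f}")
```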

Cooling Nightmare: Vacuum Environment Renders Convection Cooling Useless

Many people’s first reaction to this concept is: “Space is cold, so cooling will be easy, right?” The NASA engineer’s answer is: “Uh… no… really not.”

Cooling on Earth is relatively simple. Air convection works well—blowing air across heatsinks is an effective way to transfer heat into the air. If you need higher power density, liquid cooling can move heat from the chip to a larger heatsink elsewhere. In space, there is no air. The environment is nearly a perfect vacuum, so convection doesn’t happen at all.

Space itself doesn’t have a temperature—only matter does. In the Earth-Moon system, the average temperature of almost anything is basically the same as the Earth’s average temperature. If a satellite isn’t rotating, the side facing away from the sun will gradually cool to about 4 Kelvin, just above absolute zero. On the sunward side, it can get extremely hot, reaching several hundred degrees Celsius. So thermal management requires very careful design.
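Since radiation is the only way to dump heat in a vacuum, the governing number is how many watts each square meter of radiator can emit. Here is a minimal sketch of that arithmetic using the Stefan-Boltzmann law, with assumed emissivity and panel temperature (these values are illustrative, not figures from the article):

```python
# How much heat can a radiator panel shed purely by radiation? A minimal sketch
# using the Stefan-Boltzmann law; emissivity and panel temperature are assumed
# values for illustration, not figures from the article.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.9       # typical for a good radiator coating (assumed)
PANEL_TEMP_K = 300.0   # ~27°C panel temperature (assumed)

flux_w_per_m2 = EMISSIVITY * SIGMA * PANEL_TEMP_K**4   # ≈ 410 W/m², ignoring absorbed sunlight and Earth IR
area_for_200kw_m2 = 200_000 / flux_w_per_m2            # lower bound, ≈ 480 m²

print(f"Radiated flux: {flux_w_per_m2:.0f} W/m²")
print(f"Minimum radiator area for a 200kW load: {area_for_200kw_m2:.0f} m²")
```

At roughly 400 watts per square meter, a 200kW load already implies hundreds of square meters of radiator before accounting for absorbed sunlight and Earth’s infrared, which lines up with the scaled ATCS estimate later in this section.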

The author has designed camera systems that flew in space, and thermal management was at the core of the design process. His system was designed to consume at most about 1 watt at peak, dropping to about 10% of that when idle. All of that power turns into heat, which had to be conducted away by bolting the edge of the circuit board to the spacecraft frame.

Cooling even a single H200 would be an absolute nightmare. Heatsinks and fans simply won’t work, and even liquid-cooled versions need to transfer heat to a radiator, which then has to radiate it into space. The ISS’s Active Thermal Control System (ATCS) uses ammonia cooling loops and large radiators, with a dissipation limit of 16kW, enough for about 16 H200 GPUs, a bit less than a quarter of a ground rack. The radiator system measures 13.6m x 3.12m, about 42.5 square meters.

If we use 200kW as a benchmark, you’d need a radiator system 12.5 times larger, or about 531 square meters, on top of the roughly 2,500-square-meter solar array. This would now be a satellite with a surface area larger than the ISS, and all of it only equates to three standard server racks on Earth.
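As a cross-check (again illustrative, not taken from the author’s post), scaling the ATCS figures quoted above linearly gives the same radiator area:

```python
# Scale the ISS ATCS radiator linearly to a 200kW heat load; all inputs are
# the figures quoted in the text above. Illustrative only: real radiator sizing
# also depends on temperature, orientation, and view factors.
ATCS_LIMIT_KW = 16         # ISS ATCS dissipation limit
ATCS_RADIATOR_M2 = 42.5    # 13.6m x 3.12m radiator area
SOLAR_ARRAY_M2 = 2_500     # matching ISS-scale solar array
TARGET_KW = 200            # heat load of the ~200-GPU satellite above

scale = TARGET_KW / ATCS_LIMIT_KW          # 12.5x
radiator_m2 = ATCS_RADIATOR_M2 * scale     # ≈ 531 m²

print(f"Radiator area for {TARGET_KW}kW: {radiator_m2:.0f} m²")
print(f"Radiator plus solar array area: {radiator_m2 + SOLAR_ARRAY_M2:.0f} m²")
```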

Radiation Threat: GPUs in Space Are as Exposed as Unprotected Astronauts under Cosmic Rays

Radiation tolerance (Source: Wikipedia)

This brings us to the author’s Ph.D. research field. Suppose you could power and cool electronics in space; you still have the problem of radiation tolerance. There are two main radiation sources in space: the sun and deep space. Both produce charged particles moving at a significant fraction of the speed of light, ranging from electrons to atomic nuclei. When these strike a chip, they can directly damage the materials it is made of.

The most common consequence is a charged particle passing through a transistor and briefly inducing a current pulse that shouldn’t be there. If that pulse flips a stored bit, it’s called a single event upset (SEU). Worse is single event latch-up, where the pulse pushes a voltage beyond the chip’s supply rails and can create a conductive path between rails that should never be connected, permanently frying the gates.
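To see why a single flipped bit matters for AI hardware, here is a small illustrative sketch (not from the article): flipping one bit in the IEEE-754 encoding of a float32 value, such as a model weight, changes it by anywhere from a rounding error to dozens of orders of magnitude, depending on which bit is hit.

```python
# Illustrative sketch (not from the article): the effect of a single event upset,
# i.e. one flipped bit, on a float32 value such as a model weight.
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of the IEEE-754 float32 representation of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

weight = 0.75
for bit in (0, 23, 30):   # a low mantissa bit, the lowest exponent bit, the highest exponent bit
    print(f"bit {bit:2d} flipped: {weight} -> {flip_bit(weight, bit)}")
```

Flipping the lowest mantissa bit barely changes the value, while flipping the top exponent bit turns 0.75 into roughly 2.5 × 10^38.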

For longer missions, you also need to consider total dose effects. Over time, a chip’s performance in space degrades, as repeated particle strikes make the tiny field-effect transistors switch more slowly. In practice, this leads to a decrease in maximum feasible clock rates and an increase in power consumption.

GPUs and TPUs, and the high-bandwidth memory (HBM) they depend on, are absolutely the worst case for radiation tolerance. Small-geometry transistors are inherently more vulnerable to SEUs and latch-up. Chips truly designed to work in space use different gate structures and much larger geometries; the processors typically used have performance comparable to PowerPCs from 20 years ago (around 2005). You could, of course, manufacture a GPU or TPU on such a process, but its performance would be a tiny fraction of current-generation Earth-based GPUs/TPUs.

Communications Bottleneck and Conclusion

Most satellites communicate with Earth by radio, and it’s hard to reliably get more than about 1Gbps of speed. By comparison, a 100Gbps rack-to-rack interconnect is considered low-end for typical server racks on Earth, so it’s easy to see this is also a significant gap. The NASA engineer concludes: “I suppose if you really wanted to do this, it’s just barely possible, but first, it would be extremely difficult to achieve, the costs would be disproportionately high compared to Earth data centers, and at best you’d get mediocre performance. To me, I think this is a catastrophically bad idea.”
