More than half of the world's population lives within 120 miles of a coastline, a geographic fact that has quietly shaped some of the most practical experiments in the "non-land" data center so far, and that helps explain why plans to take compute off-planet are getting a cooler reception than the rhetoric of rockets would suggest.

Asked about the push to put data centers in orbit, OpenAI CEO Sam Altman did not treat it as an incremental engineering roadmap. "I honestly think the idea with the current landscape of putting data centers in space is ridiculous," he said, drawing laughter from the room. Altman allowed that orbital facilities "make sense someday," but pointed to launch costs and the basic problem of servicing hardware as obstacles that have not been overcome. "We are not there yet," he said. "There will come a time. Space is great for a lot of things. Orbital data centers are not something that's going to matter at scale this decade."
That rejection lands at a moment when the physical footprint of data centers is expanding rapidly on Earth, drawing criticism over electricity usage, grid-interconnection bottlenecks, and local concerns about water use and industrial intensity. What remains is an industry searching for ways to ease land, water, and cooling constraints without turning reliable services into a science project.
The ocean floor is one place where these ideas have already been stress-tested in ways orbital compute has not.
Microsoft's Project Natick sank a shipping-container-sized module 117 feet below the surface off Scotland's Orkney Islands and ran it for two years. The technical bet was not novelty but reliability: a sealed, stable environment eliminates the typical failure drivers on land, such as oxygen-induced corrosion, humidity swings, temperature variability, and routine "human touch" maintenance. When the algae-covered, barnacle-ringed container was recovered, Microsoft reported a striking result: servers in the underwater module failed at one-eighth the rate of comparable land-based machines. The experiment also demonstrated a way to cool without consuming freshwater, using the surrounding seawater as a thermal reservoir.
That "lights-out" philosophy, designing systems to run for years before technicians need to replace components, transfers directly to what orbit would require. The practical difference is that undersea validation is far simpler. Microsoft's engineers also treated connectivity as part of the systems problem, and the module remained online even during recovery, a reminder that the hard part is rarely just compute density; it is networking, power, and operations in hostile environments.
Proponents of space argue that orbit offers even cleaner math on energy and heat rejection. Project Suncatcher, a Google research program, envisions small clusters of satellites carrying accelerators and communicating over optical links, with tens of terabits per second of inter-satellite bandwidth providing data-center-like connectivity. In a bench configuration, Google says it has already tested optical links at 800 Gbps each way with a single transceiver pair, and it plans to fly prototypes with Planet by early 2027.
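The gap between the demonstrated per-pair rate and the envisioned aggregate is simple to quantify. A back-of-envelope sketch, where the only figure taken from the article is the 800 Gbps per-pair rate and the aggregate targets are illustrative stand-ins for "tens of terabits per second":

```python
# Back-of-envelope: transceiver pairs needed to turn a demonstrated
# 800 Gbps each-way link into "tens of terabits per second" of
# inter-satellite bandwidth. Target figures are illustrative, not Google's.

PER_PAIR_GBPS = 800  # demonstrated each-way rate per transceiver pair

def pairs_needed(target_tbps: float) -> int:
    """Transceiver pairs required to reach a target aggregate rate."""
    target_gbps = int(target_tbps * 1000)
    return -(-target_gbps // PER_PAIR_GBPS)  # ceiling division

for target in (10, 40, 100):  # aggregate targets in Tbps
    print(f"{target} Tbps -> {pairs_needed(target)} transceiver pairs")
```

Even a modest 10 Tbps aggregate implies keeping more than a dozen free-space optical links aligned between moving satellites, which is why the bench demonstration is only a first step.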
The engineering objections Altman raises, however, are not cosmetic. Orbit introduces failure modes that a sealed tube on the ocean floor never faces. Radiation can cause both transient and permanent faults in advanced chips, and today's tightly coupled large-scale training workloads mean a single fault can propagate through an entire job. Large training runs are interruption-prone even on Earth: as the Breakthrough Institute has noted, Meta reported 419 unexpected interruptions over 54 days of training Llama 3 on H100 GPUs, a measure of how fragile large distributed systems are before orbit adds anything. In space, redundancy strategies are heavier, resupply becomes a launch campaign, and "repair" becomes redesign.
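Meta's reported numbers imply a mean time between interruptions that can be computed directly; a quick sketch of that arithmetic, using only the two figures cited above:

```python
# Mean time between unexpected interruptions implied by Meta's reported
# Llama 3 training numbers: 419 interruptions over 54 days.

interruptions = 419
training_days = 54

hours_between = training_days * 24 / interruptions
print(f"~{hours_between:.1f} hours between interruptions")
```

An interruption roughly every three hours is manageable when technicians and spare parts are down the hall; in orbit, every one of those events would have to be absorbed by on-board redundancy or software recovery.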
In that light, the most realistic near-term off-planet compute is smaller than a data center: something closer to a filter that screens satellite data in space and downlinks only what is useful. The industry's remaining near-term sustainability gains are more terrestrial, or rather sub-aquatic, in places where reliability, cooling, and power integration can be tested at full scale without turning routine maintenance into a mission.

