As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't include any mass, energy, or thermal system numbers for the infrastructure you would need to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual wound motors, redundant gears, redundant harness, redundant power, comms, and compute. Alternatively, two or more independent robotic systems that are capable of not only replacing cards but also of replacing each other.
- regular launches containing replacement hardware
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
I've had actual, real-life deployments in datacentres where we just left dead hardware in the racks until we needed the space, and we rarely did. Typically we'd visit a couple of times a year, because it was cheap to do so, but it'd have been totally viable to let failures accumulate over a much longer time horizon.
Failure rates tend to follow a bathtub curve, so if you burn in the hardware before launch, you'd expect low failure rates for a long period. It's quite likely it'd be cheaper not to replace components, and instead ensure enough redundancy for key systems (power, cooling, networking) that you could shut down and disable any dead servers, then replace the whole unit once enough parts have failed.
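To illustrate the bathtub curve being described, here's a toy hazard model (all rates and parameters are made-up numbers, purely for shape): a decaying infant-mortality term, a constant random-failure term, and a wear-out term that kicks in later. Burning in on the ground before launch skips the steep early part of the curve.

```python
import math

def bathtub_hazard(t_years, infant=0.20, infant_decay=3.0,
                   random_rate=0.005, wearout=0.002, wearout_onset=5.0):
    """Toy annualized failure rate: infant mortality + constant + wear-out.
    All parameters are illustrative, not measured data."""
    early = infant * math.exp(-infant_decay * t_years)       # infant mortality
    late = wearout * max(0.0, t_years - wearout_onset) ** 2  # wear-out
    return early + random_rate + late

# A few months of ground burn-in skips most of the steep early part:
print(round(bathtub_hazard(0.0), 4))   # 0.205  at power-on
print(round(bathtub_hazard(0.25), 4))  # 0.0995 after a 3-month burn-in
print(round(bathtub_hazard(3.0), 4))   # 0.005  mid-life plateau
```

The point of the shape: once past burn-in, the plateau rate is low enough that redundancy plus write-offs beats in-orbit repair.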
Exactly what I was thinking when the OP comment brought up "regular launches containing replacement hardware". This is easily solvable by actually "treating servers as cattle and not pets", whereby one would over-provision servers and then simply replace faulty ones around once per year.
Side note: thanks for sharing the "bathtub curve", as TIL, and I'm surprised I hadn't heard of it before, especially as it relates to reliability engineering (searching HN (Algolia), no post about the bathtub curve has crossed 9 points).
Wonder if you could game that in theory by burning in the components on the surface before launch or if the launch would cause a big enough spike from the vibration damage that it's not worth it.
I suspect you'd absolutely want to burn in before launch, maybe even including simulating some mechanical stress to "shake out" more issues, but it is a valid question how much burn in is worth doing before and after launch.
Vibration testing is a completely standard part of space payload pre-flight testing. You would absolutely want to vibe-test (no, not that kind) at both a component level and fully integrated before launch.
The analysis has zero redundancy for either servers or support systems.
Redundancy is a small issue on Earth, but completely changes the calculations for space because you need more of everything, which makes the already-unfavourable space and mass requirements even less plausible.
Without backup cooling and power one small failure could take the entire facility offline.
And active cooling - which is a given at these power densities - requires complex pumps and plumbing which have to survive a launch.
The whole idea is bonkers.
IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
I have no idea if that's any more economic, but at least it solves the most obvious redundancy and deployment issues.
> The analysis has zero redundancy for either servers or support systems.
The analysis is a third party analysis that among other things presumes they'll launch unmodified Nvidia racks, which would make no sense. It might be this means Starcloud are bonkers, but it might also mean the analysis is based on flawed assumptions about what they're planning to do. Or a bit of both.
> IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
Other than against physical strikes, this would get you significantly less redundancy than building the same redundancy into a single unit, where you control what feeds what - the same way we have smart, redundant power supplies and cooling in every data center (and in the racks they're talking about using as the basis).
If power and cooling die faster than the servers, you'd either need to overprovision or shut down servers to compensate, but it's certainly not all or nothing.
Short version: make a giant pressure vessel and keep things at 1 atm. Circulate air like you would do on earth. Yes, there is still plenty of excess heat you need to radiate, but it dramatically simplifies things.
even a swarm of satellites has risk factors. we treat space as if it were empty (it's in the name) but there's debris left over from previous missions. this stuff orbits at a very high velocity, so if an object greater than 10cm is projected to get within a couple kilometers of the ISS, they move the ISS out of the way. they did this in April and it happens about once a year.
the more satellites you put up there, the more it happens, and the greater the risk that the immediate orbital zone around Earth devolves into an impenetrable whirlwind of space trash, aka Kessler Syndrome.
serious q: how much extra failure rate would you expect from the physical transition to space?
on one hand, I imagine you'd rack things up so the whole rack/etc moves as one into space, OTOH there's still movement and things "shaking loose" plus the vibration, acceleration of the flight and loss of gravity...
I suspect the thermal system would look very different from a terrestrial one. Fans and connectors can shake loose - and fans do nothing in space anyway.
Perhaps the server would be immersed in a thermally conductive resin to avoid parts shaking loose? If the thermals are taken care of by fixed heat pipes and external radiators - non thermally conductive resins could be used.
Connectors have to survive the extreme vibration of a rocket launch. Parts routinely shake off boards in testing even when using non-COTS space rated packaging designed for extreme environments. That amplifies the cost of everything.
The Russians are the only ones who package their unmanned platform electronics in pressure vessels. Everyone else operates in vacuum, so no fans.
The original article even addresses this directly. Plus hardware turns over fast enough that you'll simply be replacing modules with a smattering of dead servers with entirely new generations anyway.
It would be interesting to see if the failure rate across time holds true after a rocket launch and time spent in space. My guess is that it wouldn’t, but that’s just a guess.
I think it's likely the overall rate would be higher, and you might find you need more aggressive burn-in, but even then you'd need an extremely high failure rate before it's more efficient to replace components than writing them off.
The bathtub curve isn’t the same for all components of a server though. Writing off the entire server because a single ram chip or ssd or network card failed would limit the entire server to the lifetime of the weakest part. I think you would want redundant hot spares of certain components with lower mean time between failures.
We do often write off an entire server because a single component fails, because the lifetime of the shortest-lifetime components is usually long enough that even on Earth, with easy access, it's often not worth the cost to try to repair. In an easy-to-access data centre, the components most likely to get replaced would be hot-swappable drives or power supplies, but it's been about 2 decades since the last time I worked anywhere where anyone bothered to check for failed RAM or failed CPUs to salvage a server. And a lot of servers don't have network devices you can replace without soldering, and haven't for a long time outside of really high end networking.
And at sufficient scale, once you plan for that, it means you can massively simplify the servers. The amount of waste a server case suitable for hot-swapping drives adds, if you're not actually going to use the capability, is massive.
I'd naively assume that the stress of launch (vibration, G-forces) would trigger failures in hardware that had been working on the ground. So I'd expect to see a large-ish number of failures on initial bringup in space.
Electronics can be extremely resilient to vibration and g forces. Self guided artillery shells such as the M982 Excalibur include fairly normal electronics for GPS guidance. https://en.wikipedia.org/wiki/M982_Excalibur
On the ground vibration testing is a standard part of pre-launch spacecraft testing. This would trigger most (not all) vibration/G-force related failures on the ground rather than at the actual launch.
The big question mark is how many failures you cause and catch on the first cycle and how much you're just putting extra wear on the components that pass the test the first time and don't get replaced.
Appreciate the insights, but I think failing hardware is the least of their problems. In that underwater pod trial, MS saw lower failure rates than expected (nitrogen atmosphere could be a key factor there).
> The company only lost six of the 855 submerged servers versus the eight servers that needed replacement (from the total of 135) on the parallel experiment Microsoft ran on land. It equates to a 0.7% loss in the sea versus 5.9% on land.
6/855 servers over 6 years is nothing. You'd simply re-launch the whole thing in 6 years (with advances in hardware anyways) and you'd call it a day. Just route around the bad servers. Add a bit more redundancy in your scheme. Plan for 10% to fail.
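Back-of-envelope on the failure numbers quoted above, plus what "plan for 10% to fail" implies for over-provisioning (the 10,000-server target below is a hypothetical fleet size, just for illustration):

```python
import math

# Failure fractions from the Microsoft Natick numbers quoted above.
sea_loss = 6 / 855
land_loss = 8 / 135
print(f"{sea_loss:.1%} lost at sea vs {land_loss:.1%} on land")  # 0.7% vs 5.9%

# Over-provisioning: to keep N servers usable while planning for up to
# a 10% loss over the mission, launch N / (1 - 0.10) and route around
# the dead ones.
needed = 10_000  # hypothetical usable-server target
launched = math.ceil(needed / (1 - 0.10))
print(launched)  # 11112 servers launched for 10000 usable
```

At sub-1% observed loss over 6 years, a 10% margin is extremely conservative, which is the parent's point: redundancy is cheap relative to repair.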
That being said, it's a completely bonkers proposal until they figure out the big problems, like cooling, power, and so on.
Indeed, MS had it easier with a huge, readily available cooling reservoir and a layer of water that additionally protects (a little) against cosmic rays, plus the whole thing had to be heavy enough to sink. An orbital datacenter is in the opposite situation: all cooling is radiative, there are many more high-energy particles, and the whole thing should be as light as possible.
> In that underwater pod trial, MS saw lower failure rates than expected
Underwater pods are the polar opposite of space in terms of failure risks. They don't require a rocket launch to get there, and they further insulate the servers from radiation compared to operating on the surface of the Earth, rather than increasing exposure.
The biggest difference is radiation. Even in LEO, you will get radiation-caused Single Events that will affect the hardware. That could be a small error or a destructive error, depending on what gets hit.
Had they said "the array will be so large it'll have its own gravity." then you'd be making a valid point.
But they didn't say just "gravity", they said "gravity well".
> "First, let us simply define what a gravity well is. A gravity well is a term used metaphorically to describe the gravitational pull that a large body exerts in space."
So they weren't suggesting that it will be big enough to get past some boundary below which things don't have gravity, just that smaller things don't have enough gravity to matter.
Given all mass has gravity, and gravity can be metaphorically described by a well, all mass has a gravity well. It is not necessary for mass to capture other mass in its gravity. A well is a pleasant and relative metaphor humans can visualize - not a threshold reached after a certain mass.
"Large" is almost meaningless in this context. Douglas Adams put it best
> Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space.
From an education site:
> Everything with mass is able to bend space and the more massive an object is, the more it bends
They start with an explanation of a marble compared to a bowling ball. Both have a gravity well, but one exerts far more influence
As mentioned in the article, the Starcloud design requires solar arrays ~2x more efficient than those deployed on the ISS. Simply scaling them up introduces more drag and weight problems, as do the batteries needed to cover the ~45 minutes of darkness the satellite sees each orbit.
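A rough sense of what that eclipse period costs in batteries, using the ~40 MW figure discussed elsewhere in this thread (the 250 Wh/kg pack-level specific energy and 80% usable depth of discharge are assumed ballpark Li-ion values, not Starcloud's numbers):

```python
# Battery sizing sketch for the ~45 min orbital eclipse.
power_w = 40e6          # ~40 MW datacenter (figure from this thread)
eclipse_h = 0.75        # ~45 minutes of darkness per orbit
usable_fraction = 0.8   # assumed depth-of-discharge limit
pack_wh_per_kg = 250    # assumed pack-level specific energy

energy_wh = power_w * eclipse_h / usable_fraction
mass_t = energy_wh / pack_wh_per_kg / 1000
print(round(energy_wh / 1e6, 1))  # 37.5 MWh of storage
print(round(mass_t))              # 150 tonnes of batteries
```

On the order of 150 t of batteries just to ride through eclipse, before any margin, is why the mass budget matters so much here.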
> The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
And once you remove all the moving parts, you just fill the whole thing with oil rather than air and let heat transfer more smoothly to the radiators.
Oil, like air, doesn't convect well in 0G, so you'll need pretty hefty pumps and a well designed layout to ensure no hot spots form. Heat pipes are at least passive and don't depend on gravity.
A light oil has a density of 700kg per cubic meter. Most common oils are denser.
Then you'd need vanes, agitators, and pumps to keep the oil moving around without forming eddies. These would need to be fairly bulky compared to fans and fan motors.
I'd have to see what an engineering team came up with, but at first glance the liquid solution would be much heavier and likely more maintenance intensive.
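To put a number on "much heavier": using the parent's 700 kg/m³ figure and an assumed (hypothetical) free internal volume of 1.5 m³ per rack to flood with coolant:

```python
# Rough mass penalty of oil immersion. The free volume per rack is an
# assumed illustrative figure, not from any real design.
oil_density = 700           # kg/m^3, light oil (parent's figure)
free_volume_per_rack = 1.5  # m^3, assumed for illustration
racks = 100

oil_mass_kg = oil_density * free_volume_per_rack * racks
print(oil_mass_kg)  # 105000 kg of coolant alone for 100 racks
```

Over 100 tonnes of working fluid for a modest 100-rack facility, before pumps and plumbing, versus grams of air for the same volume.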
On Earth we have skeleton crews maintain large datacenters. If the cost of mass to orbit is 100x cheaper, it’s not that absurd to have an on-call rotation of humans to maintain the space datacenter and install parts shipped on space FedEx or whatever we have in the future.
If you want to have people you need to add in a whole lot of life support and additional safety to keep people alive. Robots are easier, since they don't die so easily. If you can get them to work at all, that is.
That isn't going to last for much longer with the way power density projections are looking.
Consider that we've been at the point where layers of monitoring & lockout systems are required to ensure no humans get caught in hot spots, which can surpass 100C, for quite some time now.
No, I mean like you crumple to the ground and cook to death if there isn't someone close enough to grab you within a few minutes. 212F ambient air. Like the inside of a meat smoker, but big enough for humans.
DC's aren't quite there yet, but the hot spots that do occur are enough to cause arc flashes which claim hundreds of lives a year.
This sort of work is ideal for robots. We don't do it much on Earth because you can pay a tech $20/hr to swap hardware modules, not because it's hard for robots to do.
It's all contingent on a factor of 100-1000x reduction in launch costs, and a lot of the objections to the idea don't really engage with that concept. That's a cost comparable to air travel (both air freight and passenger travel).
(Especially irritating is the continued assertion that thermal radiation is really hard, and not like something that every satellite already seems to deal with just fine, with a radiator surface much smaller than the solar array.)
It's all relative. Is it harder than getting 40MW of (stable!) power? Harder than packaging and launching the thing? Sure it's a bit of a problem, perhaps harder than other satellites if the temperature needs to be lower (assuming commodity server hardware) so the radiator system might need to be large. But large isn't the same as difficult.
Neither getting 40MW of power nor removing 40MW of heat are easy.
The ISS makes almost 250 kW in full light, so you would need approximately 160 times the solar footprint of the ISS for that datacenter.
The ISS dissipates that heat using pumps that move ammonia through pipes out to a radiator of a bit over 42 m^2. Assuming the same level of efficiency, that's over 6,700 m^2 of radiator that needs empty space to dissipate to.
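Sanity-checking that scaling from the ISS figures used above (250 kW generation, 42 m² radiator):

```python
# Scaling a 40 MW datacenter from ISS-class power and thermal hardware.
target_power_w = 40e6   # 40 MW datacenter (figure from upthread)
iss_solar_w = 250e3     # ISS generation in full sunlight
iss_radiator_m2 = 42    # radiator area figure used above

scale = target_power_w / iss_solar_w
print(scale)                    # 160.0x the ISS solar footprint
print(scale * iss_radiator_m2)  # 6720.0 m^2 of radiator at ISS efficiency
```

Neither number is impossible, but both are far beyond anything flown to date, which is the point being made.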
Well sure. If you think fully reusable rockets won’t ever happen, then the datacenter in space thing isn’t viable. But THAT’S where the problem is, not innumerate bullcrap about size of radiators.
(And of course, the mostly reusable Falcon 9 is launching far more mass to orbit than the rest of the world combined, launching about 150 times per year. No one yet has managed to field a similarly highly reusable orbital rocket booster since Falcon 9 was first recovered about 10 years ago in 2015).
I suspect they'd stop at automatic rendezvous & docking. Use some sort of cradle system that holds heat fins, power, etc that boxes of racks would slot into. Once they fail just pop em out and let em burn up. Someone else will figure out the landing bit
I won't say it's a good idea, but it's a fun way to get rid of e-waste (I envision this as a sort of old persons home for parted out supercomputers)
I used to build and operate data center infrastructure. There is very limited reason to do anything more than a warranty replacement on a GPU. With a high quality hardware vendor that properly engineers the physical machine, failure rates can be contained to less than 0.5% per year. Particularly if the network has redundancy to avoid critical mass failures.
In this case, I see no reason to perform any replacements of any kind. Proper networked serial port and power controls would allow maintenance for firmware/software issues.
Don’t you need to look at different failure scenarios or patterns in orbit due to exposure to cosmic rays as well?
It just seems funny, I recall when servers started getting more energy dense it was a revelation to many computer folks that safe operating temps in a datacenter should be quite high.
I’d imagine operating in space has lots of revelations in store. It’s a fascinating idea with big potential impact… but I wouldn’t expect this investment to pay out!
Space is very bad for the human body, you wouldn't be able to leave the humans there waiting for something to happen like you do on earth, they'd need to be sent from earth every time.
Also, making something suitable for humans means having lots of empty space where the human can walk around (or float around, rather, since we're talking about space).
Underwater welder, though being replaced by drone operator, is still a trade despite the health risks. Do you think nobody on this whole planet would take a space datacenter job on a 3 month rotation?
I agree that it may be best to avoid needing the space and facilities for a human being in the satellite. Fire and forget. Launch it further into space instead of back to earth for a decommission. People can salvage the materials later.
The problem isn't health “risk”; there are risks, but there are also health effects that will come with certainty. For instance, low gravity depletes your muscles pretty fast. Spend three months in space and you're not going to walk out of the reentry vehicle.
This effect can be somewhat overcome by exercising while in space, but it's not perfect even with the insane amount of medical monitoring the guys up there receive.
Good points. Spin “gravity” is also quite challenging to acclimatize to because it’s not uniform like planetary gravity. Lots of nausea and unintuitive gyroscopic effects when moving. It’s definitely not a “just”
Every child on a merry go round experiences it. Every car driving on a curve. And Gemini tested it once as well. It’s a basic feature of physics. Now why NASA hasn’t decided to implement it in decades is actually kind of a mystery.
What if we just integrate the hardware so it fails softly?
That is, as hardware fails, the system loses capacity.
That seems easier than replacing things on orbit, especially if Starship becomes the cheapest way to launch to orbit, because Starship launches huge payloads, not a few rack-mounted servers.