Huh. I weirdly enough have worked with a lot of those sites from the remote sensing side, but never really know what the overall project was. Just "use the NEON sites for examples". I should have looked it up more at the time. Thanks for sharing!
I also spent quite a while as an exploration geophysicist. I miss it! I work purely with satellite data now, which is decidedly less tangible.
I've done a fair bit in the field, but a huge part of my career has been mining old datasets and reinterpreting things in light of new data/etc.
What the article is describing isn't new in any way. But it also doesn't remove the need for fieldwork or the need for the experience of having done fieldwork to use existing datasets. Observational sciences (e.g. geology, biology, etc) where you can't easily replicate the environment you are studying in the lab are always going to hinge on some sort of fieldwork.
Finding creative ways to use existing data doesn't change that.
It may not be immediately obvious to folks outside of geoscience, but the main way something like this is useful is as a measure/metric to compare things. Looking at the number of faces of fractured pieces isn't normally something we do often in geology.
Sure, the pieces average 6 faces when materials are relatively homogeneous and isotropic (i.e. no preferential direction to break in and no free surface nearby). However, as they note in the article, this isn't always the case. Things like mud flats and other cases with very anisotropic materials and/or free surfaces nearby don't fracture with the same average.
This is a good example of a potential metric that could be used to give some clues about overall material behavior even if all you have are the broken remains.
Fractal dimension is also pretty esoteric. However, it's somewhat widely used in geoscience, even though what we're measuring isn't _actually_ fractal. It's still a very useful comparative metric, though, because it lets us quantify how complex an interface or surface is in a scale-independent way.
It reminds me of how we use measures like the VIX in finance; not because markets are actually log-normal, but because having a standardized way to compare "choppiness" across different periods is incredibly useful. I like your fractal dimension example too. Even if real coastlines aren't truly fractal, being able to say "this coastline has dimension 1.3 vs 1.7" gives you meaningful information about erosion patterns, wave energy, and rock composition. The cube metric could work similarly for forensic geology.
It's still quite controversial whether or not they produce oxygen that way. It's been hypothesized, but I wouldn't consider it a consensus or settled. There are also microbes that can produce oxygen without light, so there are other mechanisms to explain "dark oxygen" in deep sea ecosystems.
With that said, the simple truth of it is that we know next to nothing about these ecosystems and really can't accurately estimate impacts. They're quite possibly significant, but we just don't have much info to go off of and studies like this are sorely needed.
What really worries me is that I keep hearing "cooling is cheap and easy in space!" in a lot of these conversations, and it couldn't be farther from the truth. Cooling is _really_ hard in space: you can't use efficient advection-based approaches (air or water cooling) and are limited to dramatically less efficient radiative cooling. It doesn't matter that space is cold, because shedding heat into a vacuum is damned hard.
The article makes this point, but it's relatively far in and I felt it was worth making again.
With that said, my employer now appears to be in this business, so I guess if there's money there, we can build the satellites. (Note: opinions my own) I just don't see how it makes sense from a practical technical perspective.
Yeah, I don't see a way to get around the fact that space is a fabulous insulator. That's precisely how expensive insulated drink containers work so well.
If it was just about cooling and power availability, you'd think people would be running giant solar+compute barges in international waters, but nobody is doing that. Even the "seasteading" guys from last decade.
These proposals, if serious, are just to avoid planning permission and land ownership difficulties. If unserious, it's simply to get attention. And we're talking about it, aren't we?
You should read the linked article; they talk about it there. You radiate the heat into space, which takes less surface area than the solar panels, and you can just have them back to back.
In general I don't understand this line of thinking. This would be such a basic problem to miss that my first instinct would be to just look up what solutions other people propose. It is very easy to find online.
Taking a system that was conceptualized about a quarter of a century ago, and which serves much different needs than a datacenter in space (e.g. a very strict thermal band, versus an acceptable temperature range of 20 to 80 degrees C), isn't ideal.
The physics is quite simple and you can definitely make it work out. The Stefan-Boltzmann law works in your favor the higher you can push your temperatures.
If anything, an orbital datacenter could be a slightly easier case. Ideally it will be in an orbit that always sees the sun. Most other satellites need to be in the Earth's shadow from time to time, making heaters as well as radiators necessary.
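To put numbers on that T^4 dependence, here's a quick sketch (emissivity 1 assumed; the constant is the CODATA value):

```python
# T^4 scaling: doubling the radiator temperature multiplies the
# rejected flux by 16 (emissivity 1, radiating into ~0 K assumed).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

flux = {T: SIGMA * T ** 4 for T in (300, 450, 600)}
for T, f in flux.items():
    print(f"{T} K -> {f:.0f} W/m^2")  # 300 K -> ~459 W/m^2, 600 K -> ~7348 W/m^2
```

That 16x swing between 300 K and 600 K is why running the radiators as hot as the electronics allow matters so much.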
These data centers are solar powered, right? So if they are absorbing 100% of the energy on their sun side, by default they'll be able to heat up as much as an object left in the sun, which I assume isn't very hot compared to what they are taking in. How do they crank their temperature up so as to get the Stefan Boltzmann law working in their favor?
I suppose one could get some sub part of the whole satellite to a higher temperature so as to radiate heat efficiently, but that would itself take power, the power required to concentrate heat which naturally/thermodynamically prefers to stay spread out. How much power does that take? I have no idea.
σ is such a small number in Stefan-Boltzmann that it makes no difference at all until your radiators get hot enough to start melting.
You not only need absolute huge radiators for a space data centre, you need an active cooling/pumping system to make sure the heat is evenly distributed across them.
I'm fairly sure no one has built a kilometer-sized fridge radiator before, especially not in space.
You can't just stick some big metal fins on a box and call it a day.
Out of curiosity, I plugged in the numbers. I have solar at home, and a 2 m² panel makes about 500 W; I assume the one in orbit will be a bit more efficient without atmosphere and a bit fancier, generating maybe 750 W.
If we run the radiators at 80C (a reasonable temp for silicon), that's about 350K, assuming the outside is 0K which makes the radiator be able to radiate away about 1500W, so roughly double.
Depending on what percentage of time we spend in sunlight (depends on orbit, but the number's between 50%-100%, with a 66% a good estimate for LEO), we can reduce the radiator surface area by that amount.
So a LEO satellite in a decaying orbit (designed to crash back onto the Earth after 3 years, or one GPU generation) could work technically with 33% of the solar panel area dedicated to cooling.
Realistically, I'd say solar panels are so cheap that it'd make more sense to build a huge solar park in Africa and accept the lower duty cycle (roughly 33% with 8 hours of sunlight, versus ~66% in LEO), as the rest of the infrastructure is insanely more trivial.
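A quick sanity check of the radiator figure above, under my own assumptions (emissivity 0.9, one-sided radiation into ~0 K space, the same 2 m² of area):

```python
# Checking the radiator estimate: 2 m^2 at ~350 K (about 80 C),
# radiating one-sided into cold space, assumed emissivity 0.9.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

area_m2 = 2.0
emissivity = 0.9
T_rad = 350.0  # K

radiated_w = emissivity * SIGMA * area_m2 * T_rad ** 4
print(f"{radiated_w:.0f} W")  # ~1530 W, in line with the ~1500 W figure above
```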
This argument assumes that you only need to radiate away the energy that the solar actively turns into electricity, but you also need to dissipate all the excess heat that wasn't converted. The solar bolometric flux at the Earth is 1300 W/m², or 2600 W for 2 m². That works out to an efficiency of ~20% for your home solar, and your assumed value of 750 W yields an efficiency of ~30%, which is reasonable for space-rated solar. But assuming an overall albedo of ~5%, that means you were only accounting for about a third of the total energy that needs to be radiated.
Put another way, 2 m² intercepts 2600 W of solar power but only radiates ~1700 W at 350 K, which means it needs to run at a higher temperature, nearly 125 °C, to achieve equilibrium.
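Solving that balance for the equilibrium temperature, under the same idealizations (emissivity 1, one-sided radiation, the full 1300 W/m² absorbed and re-radiated), lands in the same range:

```python
# sigma * T^4 = flux_in  =>  T = (flux_in / sigma)^(1/4)
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
flux_in = 1300.0   # W/m^2, roughly the solar constant at Earth

T_eq = (flux_in / SIGMA) ** 0.25
print(f"{T_eq:.0f} K (~{T_eq - 273.15:.0f} C)")  # ~389 K, ~116 C
```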
It receives around 2.5 kW[0] of energy (in orbit), of which it converts 500 W to electric energy; some small amount is reflected and the rest ends up as heat, so use 1 kW/m² as your input value.
> If we run the radiators at 80C (a reasonable temp for silicon), that's about 350K, assuming the outside is 0K which makes the radiator be able to radiate away about 1500W, so roughly double.
1500 W for 2 m² is less than 2000 W, so your panel will heat up.
>Depending on what percentage of time we spend in sunlight (depends on orbit, but the number's between 50%-100%, with a 66% a good estimate for LEO), we can reduce the radiator surface area by that amount.
You need enough radiators for peak capacity, not just for the average. It's analogous to how you can't put a smaller heat sink on your home PC just because you only run it 66% of the time.
Yes it's fun. One small note, for the outside temp you can use 3K, the cosmic microwave background radiation temperature. Not that it would meaningfully change your conclusion.
It's definitely a solvable problem. But it is a major cost factor that is commonly handwaved away. It also restricts the size of each individual satellite: moving electricity through wires is much easier than pumping cooling fluid to radiators, so radiators are harder to scale. Not a big deal at ISS scale, but some proposals had square kilometers of solar arrays per satellite.
That exactly. It's not that it's impossible. It's that it's heavy to efficiently transport heat to the radiators, or it requires a lot of tiny sats, which have their own problems.
But heat = energy, right? So maybe we don’t really want to radiate it, but redirect it back into the system in a usable way and reduce how much we need to take in? (From the sun etc)
Useful, extractable energy comes from a temperature differential, not just temperature itself. Once your system is at thermal equilibrium, you can't extract energy anymore and must shed that thermal energy as waste heat.
That's not how physics works. Heat in and of itself does not contain usable energy. The only useful energy to be extracted from heat comes from the difference in temperature between two objects. You can only extract work from thermal energy by moving heat from one place to another, which can only happen by moving energy from a hot object to a cold one.
This is all fundamental to the universe. All energy in the universe comes exclusively from systems moving from a low entropy state to a higher entropy state. Energy isn't a static absolute value we can just use. It must be extracted from an energy gradient.
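The Carnot limit makes this concrete: the fraction of heat convertible to work depends only on the hot/cold temperature ratio, not on how much thermal energy is sitting around. A toy sketch:

```python
# Carnot limit: max fraction of heat convertible to work between
# a hot reservoir and a cold reservoir (temperatures in kelvin).
def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

# A 350 K radiator rejecting to the ~3 K cosmic background could,
# in principle, drive a very efficient engine...
print(carnot_efficiency(350, 3))    # ~0.99
# ...but with no temperature differential, nothing is extractable:
print(carnot_efficiency(350, 350))  # 0.0
```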
I've always enjoyed thinking about this. Temperature is a characteristic of matter. There is vanishingly little matter in space. Due to that, one could perhaps say that space, in a way of looking at it, has no temperature. This helps give some insight into what you mention of the difficulties in dealing with heat in space - radiative cooling is all you get.
I once read that, while the mental image of being ejected out of an airlock from a space station in Earth orbit is instant ice-cube, the reality (ignoring the lack of oxygen etc. that would kill you) is that, at our distance from the sun, we would in fact die of heat exhaustion: our bodies would be unable to radiate heat as fast as we receive it from the sun.
In contrast, were one to experience the same unceremonious orbital defenestration around Mars, the distance from the sun is sufficient that we would die from hypothermia (ceteris paribus, of course).
A perfect vacuum might have no temperature, but space is not a perfect vacuum, and has a well-defined temperature. More insight would be found in thinking about what temperature precisely means, and the difference between it and heat capacity.
I think your second sentence is what they were referencing. Space has a temperature. But because the matter is so sparse and there’s so little thermal mass to carry heat around as a result, we don’t have an intuitive grasp on what the temperature numbers mean.
To rephrase it slightly. It's not a perfect vacuum, but compared to terrestrial conditions it's much closer to the former than the latter. The physics naturally reflects that fact.
To illustrate the point with a concrete example. You can heat something with the thermal transfer rate of aerogel to an absurdly high temperature and it will still be safe to pick up with your bare hand. Physics says it has a temperature but our intuition says something is wrong with the physics.
I think the better argument to be made here is "space has a temperature, and in the thermosphere the temperature can get up to thousands of degrees. Space near Earth is not cold."
Are you actually making that argument, or just "quoting" it as some kind of hypothetical? Regardless, without mentioning heat capacity, I don't see any point to it in this context.
Sorry to hear you can't see it. Let me try to assist you in understanding what you are missing.
Yes, I'm making that argument, because it's true. The temperature of what particles do exist 500 km above the Earth is more likely to be in the thousands of degrees Fahrenheit than below zero Fahrenheit.
The discussion being had, if you read comments above your original, is that it's widely thought that "space is cold" and therefore it's good for cooling.
You're right that heat capacity means that the temperature of space is not relevant to its ability to cool a datacenter. You're wrong that making that argument is a good way to get people to actually change their mind.
Instead, attack the idea at its foundation. Space is not cold, not in the places where the data centers would be. It's much easier to get someone to understand "the temperature at 500km where the auroras are is very hot" than "blah blah heat capacity".
Assuming merely attitude control, sure, only radiative cooling is available, but it's very easy to design for arbitrary cooling rates at any given operating temperature:
Budget the solar panel area as a function of the maximum computational load.
The rest of the satellite must be within the shade of the solar panel, so it basically only sees cold space; we need a convex body shape to ensure that every surface of the satellite (ignoring the solar panels) is radiatively cooling over its full hemisphere.
So pretend the sun is "below" and the solar panels are facing down, then pick an extra point above the solar panel base to form a pyramid. The slanted top sides of the pyramid are the cooling surfaces: no matter how close or far above the solar panels we place this apex point, the sides will never see the sun, because they are shielded by the solar panel base. Given a target operating temperature, each unit of surface area (emissivity 1) radiates at a specific rate, and we can choose the total cooling rate by making the pyramid arbitrarily long and sharp, thus increasing the cooling area. We can set the satellite temperature arbitrarily low.
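As a toy sketch of that geometry claim (deliberately ignoring conduction losses along the pyramid, which is a real-world limit), the lateral area of a square pyramid is unbounded in its height, so the steady-state temperature for a fixed heat load can be driven down:

```python
import math

# Toy model: square-base pyramid radiator, fixed base, varying height.
# Assumes emissivity 1, perfect heat transport, radiating into ~0 K.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def lateral_area(base_m, height_m):
    # four triangular faces: total area = 2 * base * slant height
    slant = math.sqrt(height_m ** 2 + (base_m / 2) ** 2)
    return 2 * base_m * slant

Q = 1000.0  # W of waste heat to reject, arbitrary fixed load
for h in (1.0, 10.0, 100.0):
    A = lateral_area(2.0, h)
    T = (Q / (SIGMA * A)) ** 0.25  # steady-state radiating temperature
    print(f"h={h:5.0f} m  area={A:7.1f} m^2  T={T:6.1f} K")
```

The temperature falls as the pyramid gets taller; the replies below point out why real conduction makes this much less rosy.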
Forget the armchair "autodidact" computer nerds for a minute
Making the pyramid arbitrarily long and sharp will arbitrarily diminish the heat conductance through the pyramid, so the farther from the pyramid base, the colder it will be and the less it will radiate.
So no, you cannot increase the height of the pyramid too much; there will be some optimum value, at which the pyramid will certainly not be sharp. The optimum height will depend on how much of the pyramid is solid and on the heat conductance of the material. Circulating liquid through the pyramid will also have limited benefits, as the power required for that generates additional heat that must be dissipated.
A practical radiation panel will be covered with cones or some other such shapes in order to increase its radiating surface, but the ratio in which the surface can be increased in comparison with a flat panel is limited.
We are not discussing a schoolbook exercise in passive heat conduction through a pyramid heated at its base. Since it's not a schoolbook exercise, we can decide on the conditions; we could put in heat pipes, etc.
It's CPU/GPU clusters, so we don't have zero control over where to locate the heat generators. But even if we had zero control over it, the shape and height of the pyramid don't preclude heat pipes (not solid bars of metal, but sealed pipes in which a working fluid evaporates on the hot side, absorbing latent heat, and condenses on the cold side, releasing it).
> The rest of the satellite must be within the shade of the solar panel,
The problem is with the solar panels themselves. When you get 1.3 kW of energy per square meter and turn 325 W of that into electricity (25% efficiency), you have to get rid of almost 1 kW of heat for each square meter of your panel. You can do it radiatively with the back surface of the panels, but your panels might reach equilibrium at over 120 °C, which means they stop actually producing energy. If you want to do it purely radiatively, you need to raise some surface pointing away from the sun to much more than 120 °C and pump heat from your panels to it with some heat pump.
When the cost of the solar panels does not matter you can reach an efficiency close to 50% (with multi-junction solar cells) and the panels will also be able to work at higher temperatures.
Nevertheless, the problem you describe remains: the panels must dissipate an amount of heat at least equal to the amount of useful power generated. Therefore they cannot have other heat radiators on their backside, except those for their own heat.
The point is that even with 100% INefficient solar panels, the pyramidal sides can be made arbitrarily large, and due to the convexity of the pyramid, each infinitesimal surface element of the radiating sides can emit into a full hemisphere. Given any target temperature, we can design the pyramid sharp enough (same base, different height, so the heat absorbed is constant and the heat emitted must equal it in steady state); by basic thermal radiation math, the asymptotic temperature it settles at can be made arbitrarily close to the temperature of the universe by making the pyramid sharper.
No matter how inefficient the solar panels, even at 1% efficiency, you could make the pyramid sharp enough to dissipate the heat, stabilizing at an arbitrarily low temperature (well, still above the temperature of the CMB).
> Temperature is a characteristic of matter. There is vanishingly little matter in space. Due to that, one could perhaps say that space, in a way of looking at it, has no temperature.
Temperature is a property of systems in thermal equilibrium. One such system is blackbody radiation, basically a gas of photons that is in thermal equilibrium.
The universe is filled with such a bath of radiation, so it makes sense to say the temperature of space is the temperature of this bath. Of course, in galaxies, or even more so near stars, there's additional radiation that is not in thermal equilibrium.
Jusssst had this conversation two nights ago with a smart drunk friend. To his credit, when I asked "what's heat?" and he said "molecules moving fast" and I said "how many molecules are there in space to bump against?", he immediately got it. I'm always curious what ideas someone unfamiliar with a problem space comes up with for solutions, so I canvassed him for thoughts. Nothing novel, unfortunately, but if we get another 100 million people thinking about it, who knows what we'll come up with?
I got really annoyed when I first realized that heat and sound (and kinetic energy) are both "molecules moving," because they behave so dramatically differently on a human scale.
And yes, obviously they aren't moving in the same way, but it's still kind of weird to think about.
The author also forgot batteries for the eclipse period, plus additional solar panels to charge those batteries during the sunlit period, plus insulation for the batteries. Then power converters and pumps for the radiators, and additional radiators to cool the cooling infrastructure.
Overall, not a great model. On the other hand, even an amateur can use this model and see that additional parts and costs are missing; if it shows a bad outlook even under conditions that favor space DCs, then they're an even dumber idea once all costs are fully factored in. Unfortunately many serious journalists can't even make that mental step. :(
I'd say it makes much more sense to just shut off in the shade. The advantage of orbital solar comes not so much from the lack of atmosphere, but from the fact that, depending on your orbit, you can be in sunlight 60-100% of the time.
That proposal I've seen a few times too: basically put up a constellation, linked with laser comms, and transfer work to the illuminated sats in a loop. That sounds possible, but I have doubts. First of all, if we take a 400 km orbit, the "online" time would be something like 50 minutes. We need to boot up the system fully, run comm apps, locate a peer satellite and download data from it (which needs to be prepared in a portable form), write it locally, start calculations, and then repeat by the end of the 50 minutes. All these operations are slow, especially the boot time of the servers (which could be optimized, of course). It would be great if some expert could tell us whether it is feasible.
You can use everything as a radiator, but you can't use everything as a radiator sufficiently efficient to cool hot chips to safe operating temperature, particularly not if that thing is a thin panel intentionally oriented to capture the sun's rays to convert them to energy. Sure, you can absolutely build a radiator in the shade of the panels (it's the most logical place), but it's going to involve extra mass.
You also want to orient those radiators at 90 degrees to the power panels, so that they don't send 50% of their radiation right back to the power panels.
Cooling isn't any more difficult than power generation. For example, on the ISS the solar panels generate up to 75 W/m², while the EATCS radiators can dissipate about 150 W/m².
Solar panels have improved more than cooling technology since ISS was deployed, but the two are still on the same order of magnitude.
So just 13.3 million square meters of solar panels, and 6.67 million square meters of cooling panels, for 1 GW.
Or a butterfly satellite with a 3.65 km x 3.65 km solar wing and a 2.58 km x 2.58 km radiator wing.
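For reference, the arithmetic behind those areas, using the ISS-era figures quoted above:

```python
# Area needed for 1 GW at the quoted ISS per-square-meter figures.
solar_w_per_m2 = 75.0      # ISS solar arrays, peak
radiator_w_per_m2 = 150.0  # EATCS heat rejection
target_w = 1e9             # 1 GW

solar_area = target_w / solar_w_per_m2        # ~13.3 million m^2
radiator_area = target_w / radiator_w_per_m2  # ~6.67 million m^2
print(f"{solar_area / 1e6:.2f} million m^2 solar, ~{solar_area ** 0.5 / 1000:.2f} km on a side")
print(f"{radiator_area / 1e6:.2f} million m^2 radiator, ~{radiator_area ** 0.5 / 1000:.2f} km on a side")
```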
I don't think your cooling area measures account for the complications introduced by scale.
Heat dissipation isn't going to work its way across surfaces efficiently at that scale passively. Dissipation will scale very sub-linearly, so we need much more area, and there will need to be active fluid exchangers operating at speed spanning kilometers of real estate, to get dissipation/area anywhere back near linear/area again.
Liquid cooling and pumps, unlike solar, are meaningfully talked about in terms of volume. The cascade of volume, mass, complexity and increased power up-scaling flows back to infernal launch volume logistics. Many more ships and launches.
Cooling is going to be orders of magnitude more trouble than power.
How are these ideas getting any respect?
I could see this at lunar poles. Solar panels in permanent sunlight, with compute in direct surface contact or cover, in permanent deep cold shadow. Cooling becomes an afterthought. Passive liquid filled cooling mats, with surface magnifying fins, embedded in icy regolith, angled for passive heat-gradient fluid cycling. Or drill two adjacent holes, for a simple deep cooling loop. Very little support structure. No orbital mechanics or right-of-way maneuvers to negotiate. Scales up with local proximity. A single expansion/upgrade/repair trip can service an entire growing operation at one time, in a comfortable stable g-field.
Solar panels can in principle be made very thin, since there are semiconductors (like CdTe) where the absorption length of a photon is < 1 micron. Shielding against solar wind particles doesn't need much thickness (also < 1 micron).
So maybe if we had such PV, we could make huge gossamer-thin arrays that don't have much mass, then use the power from these arrays to pump waste heat up to higher temperature so the radiators could be smaller.
The enabling technology here would be those very low mass PV arrays. These would also be very useful for solar-electric spacecraft, driving ion or plasma engines.
> active fluid exchangers operating at speed spanning kilometers of real estate, to get dissipation/area anywhere back near linear/area again
Could the compute be distributed instead? Instead of gathering all the power into a central location to power the GPUs there, stick the GPUs on the back of the solar panels as modules? That way even if you need active fluid exchanger it doesn’t have to span kilometers just meters.
I guess that would increase the cost of networking between the modules. Not sure if that would be prohibitive or not.
Let's not forget that you have to launch that liquid up as well. Liquids are heavy compared to their volume. Not to mention your entire 'datacenter' goes poof if one of these loops freezes, bursts from catching some sunlight, or whatever. This is pretty normal stuff, but not at the scale that would be required.
Take those 40,000 satellites, and combine their solar panels, and combine the cooling panels, and centralize all the compute.
Distances are not our friend in orbit. Efficiency hyperscales down for many things, as distances and area scale up.
Things that need to hyperscale when you scale distance and area:
• Structural strength.
• Power and means to maneuver, especially for any rotation.
• Risk variance, with components housed together, instead of independently.
• Active heat distribution. Distance is COMPOUNDING insulation. Long shallow heat gradients move heat very slowly. What good does scaling up radiative surface do, if you don't hyperscale heat redistribution?
And you can't hyperscale heat distribution in 2D. It requires 3D mass and volume.
You can't just concatenate satellites and get bigger satellites with comparable characteristics.
Alternatives, such as distributing compute across the radiative surface, suffer relative to regular data centers, from intra-compute latency and bandwidth.
We have a huge near infinite capacity cold sink in orbit. With structural support and position and orientation stabilization for free. Let's use that.
Let’s say you need 50m^2 solar panels to run it, then just a ton of surface area to dissipate. I’d love to be proven wrong but space data centers just seem like large 2d impact targets.
You need 50 m² of solar panels just for a tiny 8RU server, and that ignores any overhead for networking, control, etc. Next, at a 400 km orbit you spend 40% of the time in shade, so you need an insulated battery providing 5 kWh; that adds 100-200 kg to a server weighing 130 kg on its own. Then you need to dissipate all that heat, and yes, 50 m² of radiators should deal with a 10 kW device. We also need to charge our batteries for the shade period, so we actually need 100 m² of solar panels. And we need to cool the cooling infrastructure itself (pumps, power converters), which wasn't included in the initial power budget.
So we arrive at a revised solution: a puny 8RU server at 130 kg requires 100 m² and 1000 kg of solar panels, 50-75 m² of heat radiators at 1000-1500 kg, 100-200 kg of batteries, and then the housing for all that plus station-keeping engines and propellant, motors to rotate the panels, pumps, etc. I guess at least another 500 kg, maybe a bit less.
So now we have a 3-ton satellite that costs around 10 million dollars to launch at an optimistic $3000/kg on F9. And that's not counting the cost to manufacture the satellite, or the server's own cost.
I think the proposal is quite absurd with modern tech and costs.
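Tallying that rough budget (all of these masses are the guesses from the comment above, not engineering figures):

```python
# Rough mass and launch-cost tally for one 8RU server in LEO,
# using the guessed figures from the comment (midpoints of ranges).
masses_kg = {
    "server (8RU)": 130,
    "solar panels (100 m^2)": 1000,
    "radiators (50-75 m^2)": 1250,  # midpoint of 1000-1500 kg
    "batteries": 150,               # midpoint of 100-200 kg
    "bus, pumps, propulsion": 500,
}
total_kg = sum(masses_kg.values())
launch_cost = total_kg * 3000  # optimistic $3000/kg on Falcon 9
print(f"{total_kg} kg total, ~${launch_cost / 1e6:.1f}M to launch")
```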
Only over a short distance. To effectively radiate a significant amount of heat, you need to actually deliver the heat to the distant parts of the radiator first. That normally requires active pumping, which needs extra energy. So now you need to unfold solar panels + aluminium + pipes (+ maybe extra pumps).
Orbital assembly of a fluid piping system in space is a pretty colossal problem too (as well as miles of pipes and connections being a massive single point failure for your system). Dispersing the GPUs might be more practical, but it's not exactly optimal for high performance computation...
It’s a fun problem to think about but even if all the problems were solved we would have very quickly deprecating hardware in orbit that’s impossible to service or upgrade
Maybe you should re-read the "do things that don't scale" article. It is about doing things manually until you figure out what you should automate, and only then do you automate it. It's not about doing unscalable things forever.
Unless you have a plan to change the laws of physics, space will always be a good insulator compared to what we have here on Earth.
I've done some reading on how they cool JWST. It's fascinating and was a massive engineering challenge. Some of those instruments need to be cooled to near absolute zero, so much so that it uses helium as a coolant in parts.
Now, JWST is near L2, but it is still in sunlight; it's solar-powered. There is a series of radiating layers to keep heat away from sensitive instruments. Then there are the solar panels themselves.
Obviously an orbital data center wouldn't need such extreme cooling, but the key takeaway for me is that the solar panels themselves would shield much of the satellite from direct sunlight, by design.
Absent any external heating, there's only heating from computer chips. Any body in space will radiate away heat. You can make some more effective than others by increasing surface area per unit mass (I assume). Someone else mentioned thermoses as evidence of insulation. There's some truth to that but interestingly most of the heat lost from a thermos is from the same IR radiation that would be emitted by a satellite.
The computer chips used for AI generate significantly more heat than the chips on JWST. JWST in total weighs 6.5 tons and uses a mere 2 kW of power, which is the same as 3 H100 GPUs under load, each of which weighs what, 1 kg?
So in terms of power density you're looking at about 3 orders of magnitude difference. Heating and cooling is going to be a significant part of the total weight.
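The power-density gap in rough numbers (the ~1 kg per H100 module is the guess from the comment; 700 W is a typical H100 board power):

```python
# Power density comparison: whole JWST observatory vs. a bare H100 module.
jwst_w_per_kg = 2000 / 6500  # ~0.3 W/kg: 2 kW over 6.5 tons
h100_w_per_kg = 700 / 1.0    # ~700 W/kg: 700 W TDP, ~1 kg guessed mass

ratio = h100_w_per_kg / jwst_w_per_kg
print(f"{ratio:.0f}x")  # ~2275x, i.e. about 3 orders of magnitude
```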
For some decades now I've heard the debunk many times more than the bunk. The real urban myth appears to be that any appreciable fraction of people believe the myth.
Space hardware needs to be fundamentally different from surface hardware. I don't mean the usual radiation hardening etc., but using computing substrates that run over 1000 °C and never shut down. T^4 cooling means you have a hell of a time keeping things cool, but keeping hot things from melting completely is much easier.
The transistors are experimental, and no one is building high-performance chips out of them.
You can't just scale current silicon nodes to some other substrate.
Even if you could, there's a huge difference between managing the temperature of a single transistor, managing temps on a wafer, and managing temps in a block of servers running close to the melting point of copper.
I think the point is, yes, cooling is a significant engineering challenge in space; but having easy access to abundant energy (solar) and not needing to navigate difficult politically charged permitting processes makes it worthwhile. It's a big set of trade offs, and to only focus on "cooling being very hard in space" is kind of missing the point of why these companies want to do this.
Compute is severely power-constrained everywhere except China, and space-based datacenters are a way to get around that.
Of course you can build these things if you really want to.
But there is no universe in which it's possible to build them economically.
Not even close. The numbers are simply ridiculous.
And that's not even accounting for the fact that getting even one of these things into orbit is an absolutely huge R&D project that will take years - by which time technology and requirements will have moved on.
Lift costs are not quite dropping like that lately. Starship is not yet production-ready (and you need to fully pack it with payloads to achieve those numbers). What we saw was the cutting of most of the artificial margins of the old launches and arrival at some economic equilibrium with sane margins. Regardless of the launch price, space-based stuff will be much more expensive than planet-based; the only question is whether it will be, optimistically, "only" 10x more expensive, or, pessimistically, 100x more expensive.
I don't get this "inevitable" conclusion. What even is the purpose of a space datacenter in the first place? What would justify paying an order of magnitude more than conventional competitors? Especially if the server in question is a dumb number cruncher like a stack of GPUs? I could understand putting some black NSA data up there, or a drug cartel's accounting backup, but to multiply some LLM numbers you really have zero need of an extraterritorial, lawless DC. There is no business incentive for that.
You must be very young. This was well-known back in the day. Lots of articles (some even posted here a while back) ranting about cars and how they were ruining everything.
Btw, the cute one-line slam doesn't really belong here. It's an empty comment that adds zero to the conversation and contributes nothing to the reader. It only gives a twelve-year-old a brief burst of endorphins. Please refrain.
The idea that it's faster and cheaper to launch solar panels than to get local councils to approve them is insane. The fact is those data center operators simply don't want to do it, and instead want politicians to tax people to build the power infrastructure for them.
But space isn't actually cold, or at least not space near Earth. It's about 10 C. And that's only about 10 C less than room temperature, so a human-habitable structure in near-Earth space won't radiate very much heat. But heat radiated is O(Tobject^4 - Tbackground^4), and a computer can operate up to around 90 C (I think), so that is actually a very big difference here. Back of the envelope, a data center at 90 C will radiate about 10x the heat that a space station at 20 C will. With the massive caveat that I don't know what the constant is here, it could actually be easy to keep a datacenter cool even though it is hard to keep a space station cool.
As you intimated, the radiated heat output of an object is described by the Stefan-Boltzmann Law: E = [Stefan-Boltzmann Constant] * [Object Temp]^4
However, Temp must be in units of an absolute temperature scale, typically Kelvin.
So the relative heat output of 90C vs 20C objects will be (translating to K):
363^4 / 293^4 = 2.36x
Plugging in the constant (5.67 * 10^-8 W/(m^2*K^4)), the actual heat radiation values for objects at 90C and 20C are about 985 W/m^2 and 418 W/m^2, respectively.
The incidence of solar flux must also be taken into account, and satellites at LEO and not in the shade will have one side bathing in 1361 W/m^2 of sunlight, which will be absorbed by the satellite with some fractional efficiency -- the article estimates 0.92 -- and that will also need to be dissipated.
The computer's waste heat needs to be shed, for reference[0] a G200 generates up to 700W, but the computer is presumably powered by the incident solar radiation hitting the satellite, so we don't need to add its energy separately, we can just model the satellite as needing to shed 1361 W/m^2 * 0.92 = 1252 W/m^2 for each square meter of its surface facing the sun.
We've already established that objects at 90C and 20C only radiate about 985 W/m^2 and 418 W/m^2, respectively, so to radiate the 1252 W per square meter coming in on the sun-facing side we'll need 1252/985 = 1.27 times that area of shaded radiator maintained at a uniform 90C. If we wanted the radiator to run cooler, at 20C, we'd need 2.36x as much as at 90C, or about 3 square meters of shaded radiator for every square meter of sun-facing material.
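A quick script to sanity-check these figures (assuming emissivity ≈ 1, a fully shaded radiator, negligible background temperature, and K = C + 273):

```python
# Radiator sizing check via the Stefan-Boltzmann law.
SIGMA = 5.67e-8              # Stefan-Boltzmann constant, W/(m^2 K^4)
ABSORBED = 1361 * 0.92       # absorbed solar flux per sunlit m^2, ~1252 W/m^2

def radiated_flux(temp_c):
    """Black-body flux (W/m^2) at a given Celsius temperature."""
    return SIGMA * (temp_c + 273) ** 4

hot = radiated_flux(90)      # ~985 W/m^2 at 363 K
cool = radiated_flux(20)     # ~418 W/m^2 at 293 K

print(ABSORBED / hot)        # ~1.27 m^2 of 90 C radiator per sunlit m^2
print(ABSORBED / cool)       # ~3.0 m^2 if the radiator runs at 20 C
```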
You use arbitrary temps to prove that at some temps it's not as efficient. OK? What about at the actual temps it will be operating at? We're talking about space here. Why use 20 degC as the temperature for space?
He didn't use 20C as the temperature of space. He used the OP's example of comparing the radiative cooling effectiveness of a heat SOURCE at 90C (chosen to characterize a data center environment) and 20C (chosen to characterize the ISS/human habitable space craft).
You forgot about the background. The background temp at Earth's distance from the sun is around 283K. Room temperature is around 293K, and a computer can operate at 363K. So for an object at room temperature the radiated power will scale with (293^4 - 283^4), and for a computer with (363^4 - 283^4):
(293^4 - 283^4) = 9.55e8
(363^4 - 283^4) = 1.09e10
So about 10x
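A tiny script reproducing those numbers (taking the 283 K background figure above as given):

```python
# Net radiated power scales with T_object^4 - T_background^4 (temps in K).
T_BG = 283.0  # assumed effective background at Earth's distance from the sun

def net_t4(t_obj):
    return t_obj ** 4 - T_BG ** 4

room = net_t4(293.0)  # ~9.55e8 for a 20 C space station
hot = net_t4(363.0)   # ~1.09e10 for a 90 C computer

print(hot / room)     # ~11.5, i.e. "about 10x"
```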
I have no problem with your other numbers which I left out as I was just making a very rough estimate.
The background temp at Earth's orbit is due to the incidence of solar flux, which I took account of.
I'm assuming the radiators are shaded from that flux by the rest of the satellite, for efficiency reasons, so we don't need to account for solar flux directly heating up the radiators themselves and reducing their efficiency.
In the shade, the radiators' emission is relative to the background temp of empty space, which is only 2.7 K[0]. I did neglect to account for that temperature, that's true, but it should be negligible in its effects (for our rough estimate purposes).
The temperature that you raise to the fourth power is not Celsius, it's Kelvin. Otherwise things at -200 C would radiate more heat than things at 100 C. Also the temperature of space is ~3 K (cosmic microwave background), not 10 C.
There is a large region of the upper atmosphere called the thermosphere where there is still a little bit of air. The pressure is extremely low but the few molecules that are there are bombarded by intense radiation and thus reach pretty high temperatures, even 2000 C!
But since there are so few such molecules in any cubic meter, there isn't much energy in them. So if you put an object in such a rarefied atmosphere, it wouldn't get heated up by it, despite such a gas formally having such a temperature.
The gas would be cooled down upon contact with the body and the body would be heated up by a negligible amount
These satellites will certainly be above the thermosphere. The temperature of the sparse molecules in space is not relevant for cooling because there are too few of them. We're talking about radiative cooling here.
The Sun is also not 10 C. Luckily you have solar arrays which shade your radiators from it, so you can ignore the direct light from it when calculating radiator efficiency. The actual concern in LEO is radiation from the Earth itself.
Yes, Python is not statically typed, and it shouldn't be. Don't expect static-typing behavior: typing in Python is _not_ static typing in any way. It's documentation only.
One of my biggest gripes around typing in python actually revolves around things like numpy arrays and other scientific data structures. Typing in python is great if you're only using builtins or things that the typing system was designed for. But it wasn't designed with scientific data structures particularly in mind. Being able to denote dtype (e.g. uint8 array vs int array) is certainly helpful, but only one aspect.
There's not a good way to say "Expects a 3D array-like" (i.e. something convertible into an array with at least 3 dimensions). Similarly, things like "At least 2 dimensional" or similar just aren't expressible in the type system and potentially could be. You wind up relying on docstrings. Personally, I think typing in docstrings is great. At least for me, IDE (vim) hinting/autocompletion/etc all work already with standard docstrings and strictly typed interpreters are a completely moot point for most scientific computing. What happens in practice is that you have the real info in the docstring and a type "stub" for typing. However, at the point that all of the relevant information about the expected type is going to have to be the docstring, is the additional typing really adding anything?
In short, I'd love to see the ability to indicate expected dimensionality or dimensionality of operation in typing of numpy arrays.
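Since the type system can't express this today, the usual workaround is a runtime check; here's a minimal sketch of what I mean (the decorator and function names are invented for illustration):

```python
import functools

import numpy as np

def expects_ndim(min_ndim):
    """Decorator: coerce the first argument to an array and require at
    least `min_ndim` dimensions -- a stand-in for the "3D array-like"
    annotation the type system can't currently express."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(data, *args, **kwargs):
            arr = np.asarray(data)  # accepts ndarrays, nested lists, etc.
            if arr.ndim < min_ndim:
                raise ValueError(
                    f"{func.__name__} expects at least {min_ndim}D input, "
                    f"got {arr.ndim}D"
                )
            return func(arr, *args, **kwargs)
        return wrapper
    return decorator

@expects_ndim(3)
def band_means(cube):
    """Mean of each band in a (bands, rows, cols) array-like."""
    return cube.reshape(cube.shape[0], -1).mean(axis=1)
```

Because `np.asarray` handles the conversion, this also accepts nested lists and anything else array-like, which is exactly the contract the docstring would otherwise have to carry alone.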
But with that said, I worry that typing for these use cases adds relatively little functionality at the significant expense of readability.
I also had a very hard time annotating types in Python a few years ago. A lot of popular Python libraries, like pandas, SQLAlchemy, Django, and requests, are so flexible that it's almost impossible to infer types automatically without parsing the entire code base. I tried several typing libraries, often created by other people and not well maintained, but after a painful experience it was clear our development was much faster without them, while type safety was not improved much at all.
> Can MyPy do the instance checking? Because of the dynamic nature of numpy and pandas, this is currently not possible. The checking done by MyPy is limited to detecting whether or not a numpy or pandas type is provided when that is hinted. There are no static checks on shapes, structures or types.
So this is equivalent to not using this library and making all such types np.ndarray/np.dtype etc then.
So we expend effort coming up with a type system for numpy, and tools cannot statically check the types? What good are types if they aren't checked? Just more concise documentation for humans?
Numpy ships built-in type hints as well as a type for hinting arrays in your own code (numpy.typing.NDArray).
The real challenge is denoting what you can accept as input. `NDArray[np.floating] | pd.Series[float] | float` is a start but doesn't cover everything especially if you are a library author trying to provide a good type-hinted API.
It actually doesn't, as far as I know :) It does get close, though. I should give it a deeper look than I have previously, though.
"array-like" has real meaning in the python world and lots of things operate in that world. A very common need in libraries is indicating that things expect something that's either a numpy array or a subclass of one or something that's _convertible_ into a numpy array. That last part is key. E.g. nested lists. Or something with the __array__ interface.
In addition to dimensionality that part doesn't translate well.
And regardless, if the type representation is not standardized across multiple libraries (i.e. in core numpy), there's little value to it.
E.g. `UInt8[ArrayLike, "... a b"]` means "an array-like of uint8s that has at least two dimensions". You are opting into jaxtyping's definition of an "array-like", but even though the general concept as you point out is wide spread, there isn't really a single agreed upon formal definition of array-like.
Alternatively, even more loosely as anything that is vaguely container-shaped, `UInt8[Any, "... a b"]`.
Ah, fair enough! I think I misread some things around the library initially a while back and have been making incorrect assumptions about it for a while!
I wonder if we should standardize on __array__ like how Iterable is standardized on the presence of __iter__, which can just return self if the Iterable is already an Iterator.
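The typing machinery can already express that structurally via a runtime-checkable Protocol, even though nothing standardizes it; a dependency-free sketch (`SupportsArray` and `Grid` are made-up names):

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class SupportsArray(Protocol):
    """Anything exposing the __array__ interface, by analogy with
    Iterable's reliance on the presence of __iter__."""
    def __array__(self) -> Any: ...

class Grid:
    def __init__(self, rows):
        self.rows = rows

    def __array__(self):
        # A real implementation would return np.asarray(self.rows);
        # kept dependency-free here.
        return self.rows

print(isinstance(Grid([[1, 2]]), SupportsArray))  # True
print(isinstance([[1, 2]], SupportsArray))        # False: lists lack __array__
```

One caveat: `runtime_checkable` only checks that the method exists, not its signature or return type, so it's a looser contract than a blessed ABC would be.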
I am using runtime type and shape checking and wrote a tiny library to merge both into a single typecheck decorator [1]. It's not perfect, but I haven't found a better approach yet.
There's wisdom in the community: people who can excel at things don't go to big tech companies because they won't be appreciated there; they'll get PIP'd if they can't wear those chains and shackles well.
Good point, but I think we're talking past each other a bit.
Typing in python within the scientific world isn't ever used to check types. It's _strictly_ only documentation.
Yes, MyPy and whatnot exist, but not meaningfully. You literally can't use them for anything in this domain (they won't run any of the code in question).
Types (in this subset of python) are 100% about documentation, 0% about enforcement.
We're setting up a _documentation_ system that can't express the core things it needs to. That worries me. Setting up type _checks_ is a completely different thing and not at all the goal.
I'm trying to count the rounds of major layoffs in my career (e.g. 10% or more of the company let go at once). I _think_ it's 5, but it might be a bit more. I've been lucky each time, but that also means I wasn't one of the ones taking risks. Layoffs cut from both sides of the performance curve and leave the middle, in my experience.
I wish I'd done this more.
In some cases there was no way to. For example, we once woke up to find that the European half of our team had been laid off as part of huge cuts that weren't announced and that even our manager had no idea were coming. There's no good way to do layoffs, but I think that "sudden shock" approach is the worst of all, personally. You don't get to say goodbye in any way and people don't get to plan for contingencies at all. (The other extreme of knowing it's coming for a year, applying for your own job, and then having 2 months to sit around after you didn't get it also sucks, and I've done that as well. You can at least make plans in that case, though.)
On the other hand, in a _lot_ of other cases, you do have a chance to say goodbye. Take it. This is really excellent advice. It's worth saying something, at very least to the people you really did enjoy working with.
There's a decent chance you work with some of those folks in the future, and even if you don't, it really does mean something to be a kind human.
There are a lot of companies with IP that can be extracted or systems that can be sabotaged by a bitter employee. There are also the extreme cases of someone who knows they are being fired doing a shooting/arson/some other extreme act.
I'm not saying I agree with the shock approach but there are definitely some generic risks that I don't think paint a bad picture of the company by their existence.
As a company, we entrust our employees with a lot of agency and access to our systems, networks and data. We do not spy on our employees nor have intrusive systems to prevent them from seeing/copying internal IP.
Therefore, while these operating procedures foster an agreeable environment for our collaborators to thrive and do actual work without too much segmentation, they make it painful when a hard decision leaves people both very angry at the company and very capable of inflicting damage upon it.
Flock really does have a huge amount of potential for abuse. It's a fair point that private companies (e.g. Google, etc) have way more surveillance on us than the government does, but the US and local governments having this level of surveillance should also worry folks. There's massive potential for abuse. And frankly, I don't trust most local police departments to not have someone that would use this to stalk their ex or use it in other abusive ways. I weirdly actually trust Google's interests in surveillance (i.e. marketing) more than I trust the government's legitimate need to monitor in some cases to track crimes. Things get scary quick when mass surveillance is combined with (often selective) prosecution.
> I weirdly actually trust Google's interests in surveillance (i.e. marketing) more than I trust the government's legitimate need to monitor in some cases to track crimes
You shouldn't.
When a company spies on everyone as much as possible and hoards that data on their servers, it is subject to warrant demands from any local, state, or Federal agency.
> Avondale Man Sues After Google Data Leads to Wrongful Arrest for Murder
Police had arrested the wrong man based on location data obtained from Google and the fact that a white Honda was spotted at the crime scene. The case against Molina quickly fell apart, and he was released from jail six days later. Prosecutors never pursued charges against Molina, yet the highly publicized arrest cost him his job, his car, and his reputation.
The thing is though, cops harass people, cops abuse their power, courts prosecute who they want, with or without Flock. This is a valid concern, but the root of the issue, I think what we should focus on first or primarily, is that the justice system isn't necessarily accountable for mistakes or corruption. As long as qualified immunity exists, as long as things like the "Kids for Cash" scandal (which didn't need Flock) go on, it doesn't really matter what tools they have, or not.
> As long as qualified immunity exists, as long as things like the "Kids for Cash" scandal (which didn't need Flock) go on, it doesn't really matter what tools they have, or not.
But, given that those abuses exist and are ongoing, we should not hand the police state yet another tool to abuse.
> I weirdly actually trust Google's interests in surveillance more than I trust the government's
I don't think this is weird at all. Corporations may be more "malicious" (or at least self centered), but governments have more power. So even if you believe they are good and have good intentions it still has the potential to do far more harm. Google can manipulate you but the government can manipulate you, throw you in jail, and rewrite the rules so you have no recourse. Even if the government can get the data from those companies there's at least a speed bump. Even if a speed bump isn't hard to get over are we going to pretend that some friction is no different from no friction?
Turnkey tyranny is a horrific thing. One that I hope more people are becoming aware of as it's happening in many countries right now.[0]
This doesn't make surveillance capitalism good and I absolutely hate those comparisons because they make the assumption that harm is binary. That there's no degree of harm. That two things can't be bad at the same time and that just because one is worse that means the other is okay. This is absolute bullshit thinking and I cannot stand how common it is, even on this site.
[0] my biggest fear is that we still won't learn. The problem has always been that the road to hell is paved with good intentions. Evil is not just created by evil men, but also by good men trying to do good. The world is complex, and we have this incredible power of foresight. While far from perfect, we seem to despise this capability that made us the creatures we are today. I'm sorry, the world is complex. Evil is hard to identify. But you've got this powerful brain to deal with all that, if you want to.
>I don't think this is weird at all. Corporations may be more "malicious" (or at least self centered), but governments have more power. So even if you believe they are good and have good intentions it still has the potential to do far more harm. Google can manipulate you but the government can manipulate you, throw you in jail, and rewrite the rules so you have no recourse. Even if the government can get the data from those companies there's at least a speed bump. Even if a speed bump isn't hard to get over are we going to pretend that some friction is no different from no friction?
That's all as may be, but you're ignoring the fact that governments are buying[0][1][2][3] the data being collected by those corporations. That's not "friction" in my book, rather it's a commercial transaction.
As such, giving corporations a pass seems kind of silly, as they're profiting from selling that data to those with a monopoly on violence.
So, by all means, give the corporations the "benefit of the doubt" on this, as they certainly have no idea that they're selling this information to governments (well, to pretty much anyone willing to pay -- including domestic abusers and stalkers too), they're only acting as agents maximizing corporate profits for their shareholders. Which is the only important thing, right? Anything else is antithetical to free-market orthodoxy.
People suffer and/or die? Just the cost of doing business right?
> but you're ignoring the fact that governments are buying the data being collected by those corporations
Did I?
>> Even if the government can get the data from those companies there's at least a speed bump. Even if a speed bump isn't hard to get over are we going to pretend that some friction is no different from no friction?
I believe that this was a major point in my argument. I apologize if it was not clear. But I did try to stress this and reiterate it.
> giving corporations a pass seems kind of silly
Oh come on now, I definitely did not make such a claim.
>> This doesn't make surveillance capitalism good and I absolutely hate those comparisons because they make the assumption that harm is binary. That there's no degree of harm. That two things can't be bad at the same time and that just because one is worse that means the other is okay.
You're doing exactly what I said I hate.
The reason I hate this is because it makes discussion impossible. You treat people like they belong to some tribe that they do not even wish to be a part of. We're on the same side here, buddy. Maybe stop purity testing and try working together. All you're doing is enabling the very system you claim to hate. You really should reconsider your strategy. We don't have to agree on the nuances, but if you can't see that we agree more than we disagree, then you are indistinguishable from someone who just pretends to care. Nor do you become distinguishable from an infiltrating saboteur[0].
Stop making everything binary. Just because I'm not in your small club does not mean I'm in the tribe of big corp or big gov. How can you do anything meaningful if you stand around all day trying to figure out who is a true Scotsman or not?
This depends a lot on what you do. Try working with a decision analyst sometime. The entire economic model, with a decision tree and Monte Carlo analysis of cost overruns, etc., for a multi-trillion-dollar decision will literally be an arcanely complex spreadsheet or two on someone's laptop.
With that said, it's still a great tool for the job because the different stakeholders can inspect it.