> “After launch, Parker Solar Probe will detect the position of the Sun, align the thermal protection shield to face it and continue its journey for the next three months, embracing the heat of the Sun and protecting itself from the cold vacuum of space.”
What a phenomenal piece of engineering! The article was not only fascinating to read as a non-astronomer/lay person, but it also makes it all look like child’s play, the way they decided what materials to use and how.
> “And to withstand that heat, Parker Solar Probe makes use of a heat shield known as the Thermal Protection System, or TPS, which is 8 feet (2.4 meters) in diameter and 4.5 inches (about 115 mm) thick.“
So is someone going to be bothering someone else about TPS Reports  over the expected seven year span of this probe? Sorry, I couldn’t resist making that reference! :)
> One key to understanding what keeps the spacecraft and its instruments safe, is understanding the concept of heat versus temperature. Counterintuitively, high temperatures do not always translate to actually heating another object.
> In space, the temperature can be thousands of degrees without providing significant heat to a given object or feeling hot. Why? Temperature measures how fast particles are moving, whereas heat measures the total amount of energy that they transfer. Particles may be moving fast (high temperature), but if there are very few of them, they won’t transfer much energy (low heat). Since space is mostly empty, there are very few particles that can transfer energy to the spacecraft.
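For a rough numerical sense of that distinction: the thermal energy density of an ideal gas scales with both the particle count and the temperature. The densities and temperatures below are order-of-magnitude assumptions for illustration, not mission figures:

```python
# Thermal energy per cubic metre of an ideal monatomic gas is
# (3/2) * n * k_B * T, so a high temperature alone doesn't mean much heat.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_energy_density(n_per_m3, temp_k):
    """Thermal energy per cubic metre (J/m^3)."""
    return 1.5 * n_per_m3 * K_B * temp_k

# Sea-level air: ~2.5e25 molecules/m^3 at ~300 K
air = thermal_energy_density(2.5e25, 300)
# Solar corona: ~1e15 particles/m^3 at ~1,000,000 K (both rough)
corona = thermal_energy_density(1e15, 1e6)

print(f"air:    {air:.2e} J/m^3")     # ~1.6e5 J/m^3
print(f"corona: {corona:.2e} J/m^3")  # ~2.1e-2 J/m^3
```

Despite being thousands of times hotter, the corona in this sketch holds millions of times less thermal energy per cubic metre than room-temperature air.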
So space can have a high temperature, but since matter is so far apart, very little heat is actually transferred.
Seems the answer is that you don't need matter for heat radiation.
Space is cold, but it doesn't feel cold.
Fans/blowers can drive convection artificially, though.
Convection (natural or artificial) doesn't work in the absence of a convecting fluid, even when not in free-fall.
Definitely not space: https://www.youtube.com/watch?v=xdJwG_9kF8s from about the 3 min 40 second mark.
(FWIW, the slinky stuff is also really cool; weight -- in the contact sense but not in the mg sense -- is dissipational, and it's nice to see that demonstrated, so I'm glad your comment caught my attention.)
> space does not feel cold
If any part of you which you expose to space (if it's shielded from solar heating, etc.) is moist -- your skin, your eyes, your tongue, the insides of your nose -- you will feel that part getting cold very quickly thanks to evaporative cooling, which works very well in free-fall and in the absence of a convecting fluid.
Normal convective cooling (think your computer's CPU or your phone's backside) or evaporative cooling (sweat on your skin, discardable heatsinks) works by transferring heat to some medium. In the case of CPUs you do it twice: once from CPU to metal, and then from metal to air, to get a larger cooling surface.
In space you don't get that, or at least not without it being expensive af. The only way to lose heat energy is by radiating it away naturally (the infrared light our bodies like to emit carries heat away from the body).
This is very slow and requires a very different cooler design, and it changes some design constraints overall (if your CPU points its hot surface at some other component of the craft, that component might overheat).
yeah, this article was a masterpiece of science writing. all of the difficult concepts were boiled down into very fruitful analogies and metaphors which clarified things succinctly.
But really, it's cool that they're using carbon-carbon protection similar to that which was originally developed for the leading edges of the Space Shuttle. And I really want to know how they built foamed carbon for the interior.
I'm guessing that they're using white ceramic paint on top instead of a reflective foil shield (like the Webb uses) because the foil would be shredded by the solar particles.
In the video, Thermal Protection System Engineer Betsy Congdon says it's 97% "air."
I can't say whether it's actually air, or whether she's simplifying things for the general public.
She also says twice that "water" is used in the radiators. But I'd have to believe that NASA's using something that absorbs/dissipates heat a little more efficiently. Perhaps whatever it is will end up in desktop gaming rig cooling systems eventually.
The temperature range is about 15C to 125C; at high pressure this is near-ideal for water, and water itself is a rather good coolant.
A tire's/tyre's rubber has tensile strength; most (all?) pressure vessels on Earth are loaded in tension. The difference between compression and tension seems odd when it comes to certain forces.
Anyway, I just thought it would matter.
Heat resistant material will eventually reach equilibrium where the back side is almost as hot as the front side unless it's cooled somehow.
In space, most of the environment is "cold" in that there's not much energy coming from it, so if you show the distant stars a hot radiator, it'll cool off pretty well, because it's all send and no receive. But nearby bodies are different: Earth is approximately room temperature (and in low Earth orbit takes up quite a bit of the sky); the Sun is pretty hot, for a much smaller portion of the sky (depending on your distance.)
If you're not near a planet, and you can reflect away most sunlight, you get pretty cold. See James Webb Space Telescope or anything else with sun shades, including the PSP.
The corona will change this a little, because it's a somewhat denser plasma than usual, but not by much. It'll still be a question of radiator size and heat, which is fairly easy to calculate.
There's a limit on the rate, rather than the amount, given by the Stefan-Boltzmann law. There will be grey body corrections for the heat shield.
Some details: https://en.wikipedia.org/wiki/Black_body#Radiative_cooling
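A minimal sketch of that rate limit, with an emissivity factor standing in for the grey-body correction (the area and temperatures below are illustrative, not the probe's actual radiator figures):

```python
# Stefan-Boltzmann law: radiated power P = epsilon * sigma * A * T^4.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_power(area_m2, temp_k, emissivity=1.0):
    """Power radiated by a surface (W); emissivity=1 is an ideal black body."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# The T^4 dependence means a modestly hotter radiator sheds far more heat:
print(radiated_power(1.0, 300))  # ~459 W for 1 m^2 at 300 K
print(radiated_power(1.0, 400))  # ~1452 W at 400 K, about 3.2x more
```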
Some even more gory details (slide 21 gives a sort of grey body curve; slides 108-113 are directly relevant; slides 34, 36 & 37 form a handy quick reference; slide 86 has a couple of graphs about foams):
and already that's more than I will ever really want to know, but this is probably of interest as a stepping stone for other HN readers. :-)
> how much heat an area of space near a star can have added to it
The region of space near a sun-like star generally has heat passing through it, starwards->infinity. Not much sticks around, and certainly not for very long.
There are limits on how much heat the outer atmosphere of a star can hold; the important thing is that "heat" here involves the presence of an amount of substance, as well as how the average bit of substance in the region is moving relative to other bits of substance in the same region ("temperature"). The limits are complicated because matter tends to be blown away by the flux of radiation from the star through the outer atmosphere. The Eddington Limit is relevant here; Eddington's equation describes how radiation drives winds through a stellar atmosphere, up to the limit where it blows the atmosphere out "to infinity".
The outer atmospheres of stars with atypically strong magnetic fields can be Super-Eddington, as can those around compact objects like neutron stars. Black hole accretion discs can also be Super-Eddington and the matter in the inner portions may get hot enough to disintegrate into gamma rays. This is called a "[big] blue bump", and implies temperatures of a hundred megakelvins to a few hundred gigakelvins or so (quasars have the very hot bits around them), although the temperature in much of the disc will struggle to reach a megakelvin.
> Is it infinite? Is the limit so high that it's far beyond even the atmosphere of a sun?
We've seen the "yes" answer to the second question.
The first is straightforward: high heat ~ high energy, and if you put enough energy into a small enough volume, it collapses into a black hole. It's easier to do this with a large amount of relatively low-temperature matter than with a smaller amount of much higher-temperature matter.
> And are there special areas of space where the heat limit has reached its maximum and if so what does that mean for the properties of that space?
Black holes probably exist, given observations to date. Stellar mass ones are very cold. We have no evidence for black holes much smaller than our sun. They would be warmer, and would become very hot for extremely low-mass black holes. Here cold and warm relate to the very blackbody-like spectrum of the Hawking Radiation they emit. (~ nanokelvins or less. As said above, the accretion disc material can have much higher temperature).
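The "very cold" claim for stellar-mass black holes is easy to check with the standard Hawking temperature formula, T = hbar * c^3 / (8 * pi * G * M * k_B); the constants below are CODATA values and the solar mass is approximate:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3/(kg s^2)
K_B = 1.380649e-23      # Boltzmann constant, J/K
M_SUN = 1.989e30        # solar mass, kg (approx.)

def hawking_temperature(mass_kg):
    """Black-body temperature of a black hole's Hawking radiation (K)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

print(hawking_temperature(M_SUN))        # ~6e-8 K: tens of nanokelvins
print(hawking_temperature(M_SUN / 1e9))  # a billion times lighter is a billion times hotter
```

Temperature scales as 1/M, which is why only extremely low-mass black holes would be hot.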
The deepest layers of neutron stars are extreeeeemely hot (~ terakelvins). So are the outer layers. If they collapse, that heat is locked up within the black hole.
Pair-instability supernovae are the next hottest thing you can have, probably. In those, it's so hot in the core that the light produced as nuclei bump into each other is heavy in gamma radiation; a little hotter and you get gamma rays hot enough to turn right back into electron-positron pairs. (~ tens of gigakelvins). The heavy outer layers of the star then crash inwards as they lose their support from the outward pressure of the light (back to Eddington again). Kaboom! The Kaboom is likely so massive that not enough matter is left in the vicinity of the former star to leave a remnant like a neutron star or black hole. The matter thrown out of such a supernova can have ridiculously high temperatures, but rapidly becomes sparse enough that the heat per cubic metre drops off to nearly nothing. Hot protons arrive at Earth from such explosions as very-high energy cosmic rays, and when detectors pick them up, lots of astronomers will get paged.
Penultimately, some things which have very high temperature, but not much heat (because there's not much matter at that temperature; it's stray wispy sparse particles with lots of empty space between them). The products of smashed-together lead ions at the LHC are in exakelvins. Depending on the model, the temperature of dark matter in active galactic nuclei can be in zettakelvins.
The daughter products of the highest-energy cosmic rays smashing into atoms in our atmosphere can be in yottakelvins. If we could collapse a spherical shell of photons into a black hole (a "kugelblitz"), the final temperature before the black hole appeared would be on the scale of the Planck temperature, meaning hundreds of millions of yottakelvins.
Finally, the extremely early universe probably had regions of higher temperatures still. Nothing we know prevents the temperature of the big bang from being infinite. However, nearly everyone hopes that quantum gravity would abolish that infinity, e.g. by gravitational radiation undergoing a phase change in dense, ultra-high but finite temperature regions, kind of like how a pair-instability supernova's innermost pressure drops at its temperature maximum when super-hot gamma rays change into electron-positron pairs.
In a thick atmosphere like Earth's, heat radiation comes from all sides, so there's no chance of radiating off very much heat.
Desert nights, without the thermal mass of clouds and wet ground, cool quickly.
Earth is doing "barbeque roll" thermal management. Sitting between bright hot Sun, too hot, and dark cold deep-space sky, too cold, Earth spins, mixing too hot with too cold into not too bad. There seems an opportunity to improve the clarity of science education explanations, down to kindergarten, as the cooling half of that balance is pervasively left unmentioned. I wish I could find a forum where opportunities like this were discussed.
Also fun is using photonic structures to engineer your thermal emission spectrum to concentrate energy at frequencies where the atmosphere is more transparent. So you couple more directly than usual with the deep-space sky.
 https://news.stanford.edu/2017/09/04/sending-excess-heat-sky... https://www.nextbigfuture.com/2016/12/cooling-by-radiating-h...
The corona can be expected to extend out to ~12 solar radii, which suggests about a day of really severe conditions. (The data pass is 30 hours, which suggests that's about right.) That's why it needs a really good heat shield.
Its smallest orbit has a period of 88 days.
I'd love to get some deeper insight into how NASA writes and tests software, I can only guess it's a million miles from how most of us work. Anyone know of any good talks, articles from engineers there?
The PDF linked to in the discussion is no longer there, but I found it on standards.nasa.gov here: https://standards.nasa.gov/standard/nasa/nasa-gb-871913
There are also some interesting product management related guidelines from NASA, like this from 2014: https://snebulos.mit.edu/projects/reference/NASA-Generic/NPR...
It initially seems kind of ridiculous to me that everything has an acronym, but I suppose it's no more ridiculous than choosing a name that sounds like a Pokemon. Maybe less so.
In any case, thanks for sharing that.
> The development and verification of the Charring Ablating Thermal Protection Implicit System Solver (CATPISS) is presented. [...]
Not sure industry would try this one either, though it is very memorable.
Which is not to say that what NASA and its contractors do isn’t cool or that they don’t spent ungodly amounts of time and money on testing and verification, but you also don’t load one line of code more than is absolutely necessary onto a machine that absolutely must work at all times.
It’s an important lesson to learn and a good skill to exercise from time to time, but honestly it’s also something that doesn’t apply to most of our work as software engineers. For most software most people are willing to knock a couple of nines off the reliability of a piece of software in exchange for higher-quality output, lower costs, and more features. If my data analysis pipeline fails one time in ten because an edge case can use all the memory in the world or some unexpected malformed input crashes the thing but yields more useful output than if I kept it simple and hand-verified every possible input, well, that can be a fine trade off. If your machine learning model for when to retract the solar panel occasionally bricks and leaves the panel out to be destroyed, that’s less acceptable.
Coincidentally, I spent the weekend banging around with an old TRS-80 Model 100, and it's been very interesting to see what workarounds and compromises were made to conserve space.
For example, the machine ships with no DOS at all, so if you're working with cassettes or modem only, you don't have that overhead.
If you do add a floppy drive, when you first plug it in, you flip some DIP switches on the drive and it acts like an RS-232 modem, and you can download a BASIC program from the drive into the computer that, when run, generates a machine-language DOS program and loads it out of the way into high memory.
I don't have one of those sewing machine drives, so I went with a third-party DOS, which weighs in at... wait for it... 747 BYTES.† An entire disk controller with command line interface in 2½ tweets.
I can see how you would ensure reliability through proper requirements specification, a good software development process, separate independent implementations and extensive verification.
However, every time I read a popsci article about space flight software, they talk about this capability to push new code to the spacecraft while it is in flight.
I'm really curious to learn what this looks like in practice (technical details). Do they really have the ability to do an "ad-hoc" upload and execution of arbitrary code on these systems? If so, how are the ad-hoc programs tested and verified?
My understanding is that some spacecraft launch with beta/alpha equivalent software. Correct me if I'm wrong, but I believe that the rovers do this, with simple software installed first, then more complicated versions installed once they know everything is working.
It's somewhat similar to updating your iphone, but instead you use a huge dish to do the transmission and the bitrate is pretty horrendous.
I'm going to need a definition of "ad-hoc" here; no-one "deploys straight to production" on a spacecraft. Any patches have to be thoroughly tested on simulators and models of the spacecraft on earth before they are transmitted.
That makes sense, but is almost a bit disappointing. After all, that is exactly how it works for the boring systems here on earth. From various wired & co articles I had the impression that there was possibly something more; a mechanism that would allow users to send elaborate "commands" to the spacecraft to perform "ad-hoc" tasks at runtime. (What I mean by "ad-hoc" tasks are tasks that are unknown at the time of validation/testing of the software.)
There is also a way to send pre-programmed task lists to them which are executed sequentially, with delays if necessary.
That kind of thing is in the hands of operations, so it's not usually the job of the software team to test in the normal manner.
Ground won't send telecommands to a spacecraft to modify a piece of memory without knowing exactly what they're doing first.
Wouldn’t you want the benefit of those extra months to perfect the software?
To answer your question about software upload, the PSP has 3 redundant CPUs (primary, hot spare, backup spare), and each has multiple boot images. To upload software, the team uploads it to an inactive image of the backup spare CPU, promotes it to hot spare for long enough to collect the data it needs, reboots it into the new image, and then rotates it into the primary role, which is a seamless transition unless something goes wrong, and then the new hot spare takes over again within a second. Once they're sure the software is working, they can update the other CPUs. Before any of this, new software is tested on identical hardware set up on the ground with physics simulations.
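The rotation sequence described above could be sketched, very loosely, as a couple of list operations. Everything here (class names, image labels, the exact promotion order) is invented for illustration; the real flight software is nothing this simple:

```python
# Toy model: CPUs in role order [primary, hot_spare, backup_spare], each
# carrying a list of boot images. All names are hypothetical.
class Cpu:
    def __init__(self, name, image):
        self.name = name
        self.images = [image]
        self.active_image = image

    def reboot_into(self, image):
        assert image in self.images, "image must be uploaded first"
        self.active_image = image

def rotate_in_new_software(roles, new_image):
    """Upload to the backup spare, reboot it, and promote it to primary."""
    primary, hot_spare, backup = roles
    backup.images.append(new_image)      # 1. upload to an inactive image slot
    backup.reboot_into(new_image)        # 2. reboot the backup into the new image
    return [backup, primary, hot_spare]  # 3. rotate it into the primary role

roles = [Cpu("A", "v1"), Cpu("B", "v1"), Cpu("C", "v1")]
roles = rotate_in_new_software(roles, "v2")
print(roles[0].name, roles[0].active_image)  # C v2
```

The old primary stays one role behind as a hot spare, so a bad image can be backed out by rotating the roles again.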
See also, "Solar Probe Plus Flight Software - An Overview" from http://flightsoftware.jhuapl.edu/files/_site/workshops/2015/
Amazing that they had the ability to just run ad-hoc LISP on the spacecraft. It appears their method to ensure safety in the face of arbitrary code execution was to divide up the spacecraft into isolation zones and run the parts that have a REPL on a non-essential CPU. From :
> To protect the main DS-1 mission from possible misbehaviors of RA, the design included a “safety net” that allowed the RA experiment to be completely disabled with a single command, issued either from the ground or by on-board fault protection.
But: Once you include a REPL or another mechanism to push and execute arbitrary code "ad-hoc", I wonder how that could possibly be tested and validated? Surely as soon as you add the ability to run arbitrary code, there is no way of testing for all possible states of the system as part of the validation process?
In other words, how do you allow the user to push arbitrary code, but prevent them from putting the spacecraft into a condition from which it can not be recovered? The only way I could naively think of would be to only allow the user to push code to a completely isolated CPU that has a remote-reset functionality from the main/comms CPU.
Still, the popsci articles I read made it sound like there might be more to it. It would be excellent to find some first-hand accounts/sources on what this looks like in reality.
Lights-out management indeed.
It is complete overkill when "all" you're going to lose is a robot and some pride, as with a space probe you want to have lots of features and this level of safety is very restrictive on development effort.
More than likely, the spacecraft in question is written in C or C++ with the help of RTEMS or VxWorks. It is probably running a radiation hardened, very slow processor.
If anyone is interested JPL publishes their code standards doc for C: https://lars-lab.jpl.nasa.gov/JPL_Coding_Standard_C.pdf
91cm and 10cm, to save anyone else doing the conversion. Also, it seems to understate the closeness: Closest approach is 6.1 million km, which is 1/24th of 1 astronomical unit, but four inches is 1/9th of a yard-stick.
Are these wires on the outside of the spacecraft? And what about the silicon of all the electronics this thing must be carrying? The cooling surface would also get a bit hot (it always receives some energy at some rate), so how does the coolant carry any heat away from the probe?
The difference is that the water molecules are more tightly packed than the air molecules in the oven. In space, they are quite far apart.
..and: please don't! :)
Fast reflexes won't help your hand recover from a bad burn, and they won't prevent one either; your hand is much too large to be immersed fully and retracted before substantial damage occurs.
> Temperature measures how fast particles are moving, whereas heat measures the total amount of energy that they transfer.
It appears the heat shield is a carbon sheet sandwich. At first I was guessing some form of tungsten carbide, but that is the traditional material of NASA heat shields.
> Why is the solar wind a breeze closer to the sun but a supersonic torrent farther away? Why is the corona itself millions of degrees hotter than the surface?
I suspect the answer to all those questions is simply gravity, but it will be nice to verify such things with data.
> "I suspect the answer to all those questions is simply gravity, but it will be nice to verify such things with data"
Can you explain your hypothesis a bit?
The Sun has enormous gravity. It comprises 99.86% of the solar system's total mass. It would seem heat can be expressed more freely in the vacuum of space once it escapes the gravity that confines the high-density mass. https://en.wikipedia.org/wiki/Sun
As for solar wind momentum it would make sense that a particle is accelerating away from the Sun at a near constant energy that is less confined by gravity over distance... at least until it hits termination shock at the edge of the solar system.
Of course these are all speculations and hopefully the probe will provide the data to qualify more valid conclusions.
It explains the temporal heating behavior at some scales. But it doesn't give a mechanism of heating. It could be electron beam target heating. But it could also be mediated by plasma waves.
The electron beams need acceleration and the most common suggestion is x-point magnetic reconnection providing up and down voltage gradients due to changing magnetic field. But the amount of electrons needed is unphysically large; the entire electron contents of the relevant volume of the corona.
There are plasma wave models that don't require unphysically large parameters.
These two (and a couple other) options aren't clarified by the observation of heating profiles. With the launch of Parker Solar Probe and the DKIST (diffraction limited solar telescope) the two models above will finally be testable. Spectroscopy of ion species by DKIST will tell what kind of heating is happening and Solar Probe will be there to measure the input from the corona.
That explains why energy is not transferred by conduction or convection to the spacecraft. But what about energy (heat) transfer by radiation? Why won't the spacecraft get all the energy from radiation and have its temperature shoot up?
I don’t want to downplay the good design and engineering that went into this, but should we be so confident without actually having done something like this thousands of times?
There sure is. At least two of the systems (positioning and water cooling) are active systems that could fail.
> but should we be so confident without actually having done something like this thousands of times?
"we" are confident enough that we rely on it to protect a > 1 billion USD probe. What's the use in adding a lot of ifs and maybes to some piece of marketing/explanation?
If it fails, adding some ifs and maybes to a marketing video won't really change anything.
But I'm also curious about what happens in the event of a solar flare or similar - from an engineering standpoint, what's their safety margin? Solar density goes up two hundred percent?
I imagined they would try to save weight in some places if it allows them more freedom in others. Although I have no idea how much water is used in the first place so it might be a moot point.
I'm just the inquisitive type. Explaining that something is used always makes me wonder "why not something else" :).
Basically, there are two ways you can design a cooling system: one that involves phase change, and one that simply transports heat from A to B by moving a heat carrier, usually a liquid.
Phase change systems typically have a higher efficiency, because the phase change has larger relative energy than heat capacity. But it also has two disadvantages: it only works near the boiling point of the medium, and you need to deal with all the pressure changes that come with a phase change.
The Parker probe seems to use a "mere" transport, and there the heat capacity and the working range of the medium is very important, and water is a pretty good choice on both of these criteria.
Oh, and you also don't want something very corrosive to destroy your expensive space craft from the inside :-)
IIRC the ISS uses two cooling systems: one based on water that is used throughout the station, and an ammonia-based system that takes the heat off the water and transports it to the heat radiators. But on the ISS, maintenance is possible.
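The "mere transport" option above is just sensible-heat carrying: Q = m_dot * c_p * delta_T. A quick comparison of water against liquid ammonia, using a made-up flow rate (not a real spacecraft figure) across the 15C-to-125C span quoted earlier:

```python
# Heat moved per second by a pumped-liquid loop (no phase change).
def heat_transport_watts(flow_kg_s, specific_heat_j_kg_k, delta_t_k):
    """Sensible heat carried by a circulating coolant (W)."""
    return flow_kg_s * specific_heat_j_kg_k * delta_t_k

C_P_WATER = 4186.0    # J/(kg K) - among the highest of common liquids
C_P_AMMONIA = 4700.0  # J/(kg K), liquid ammonia (approx.)

# 0.05 kg/s of coolant warmed across a 110 K span (illustrative numbers)
print(heat_transport_watts(0.05, C_P_WATER, 110))    # 23023.0 W
print(heat_transport_watts(0.05, C_P_AMMONIA, 110))  # 25850.0 W
```

Water's unusually high heat capacity is a big part of why it keeps turning up as the coolant of choice despite its narrow liquid range.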
I hope that kind of material can be mass-produced in the near future to be used as insulation for homes!
I’m American and I’m embarrassed by this. This is science, make it easy for people to understand. Use SI units, please.