Traveling to the Sun: Why Won’t Parker Solar Probe Melt? (nasa.gov)
311 points by shreyanshd 6 months ago | 121 comments

Towards the end, it says:

> “After launch, Parker Solar Probe will detect the position of the Sun, align the thermal protection shield to face it and continue its journey for the next three months, embracing the heat of the Sun and protecting itself from the cold vacuum of space.”

What a phenomenal piece of engineering! The article was not only fascinating to read as a non-astronomer/lay person, but it also makes it all look like child’s play, the way they decided what materials to use and how.

> “And to withstand that heat, Parker Solar Probe makes use of a heat shield known as the Thermal Protection System, or TPS, which is 8 feet (2.4 meters) in diameter and 4.5 inches (about 115 mm) thick.“

So is someone going to be bothering someone else about TPS Reports [1] over the expected seven year span of this probe? Sorry, I couldn’t resist making that reference! :)

[1]: https://en.m.wikipedia.org/wiki/TPS_report

Yeah... so if you could just come in on SUNday... that'd be great. thanksssss.

Correct me if I'm wrong, but isn't space actually not cold at all -- there's barely any matter there to have a temperature?

From the article:

> One key to understanding what keeps the spacecraft and its instruments safe, is understanding the concept of heat versus temperature. Counterintuitively, high temperatures do not always translate to actually heating another object.

> In space, the temperature can be thousands of degrees without providing significant heat to a given object or feeling hot. Why? Temperature measures how fast particles are moving, whereas heat measures the total amount of energy that they transfer. Particles may be moving fast (high temperature), but if there are very few of them, they won’t transfer much energy (low heat). Since space is mostly empty, there are very few particles that can transfer energy to the spacecraft.

So space has high temperature, but since matter is far apart the temperature isn't transferred very much.

The effective temperature of a vacuum is the temperature of whatever is on the other side, because that determines whether radiated energy is emitted or absorbed. For most of space, the "other side" is the cosmic microwave background, which has a temperature of about 3K. So yes, space is generally pretty frickin' cold.
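The radiative-balance idea behind this can be made concrete with the textbook equilibrium-temperature calculation: an object in vacuum settles at the temperature where absorbed power equals radiated power. A minimal sketch, using Earth's standard numbers (solar constant, 0.3 albedo) purely for illustration:

```python
# Radiative balance: absorbed solar power = radiated power (Stefan-Boltzmann).
# Constants are textbook values; albedo 0.3 is Earth's, used for illustration.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant at 1 AU, W/m^2
ALBEDO = 0.3       # fraction of sunlight reflected

# A sphere absorbs over its cross-section (pi*r^2) but radiates over its
# whole surface (4*pi*r^2), hence the factor of 4 in the denominator.
t_eq = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"equilibrium temperature ~ {t_eq:.0f} K")  # ~255 K
```

Far from any star, the only incoming radiation is the ~3 K microwave background, so the same balance drives the equilibrium temperature down toward 3 K.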

Good question. How does heat radiation work in space?

Seems the answer is that you don't need matter for heat radiation.

Convection doesn't work in space, so you only lose heat to radiation.

Space is cold, but it doesn't feel cold.

(Natural) convection doesn't work in free-fall. That includes the Vomit Comet[1], the ISS, the Apollo capsule between the Earth and the Moon, and so on.

Fans/blowers can drive convection artificially, though.

Convection (natural or artificial) doesn't work in the absence of a convecting fluid, even when not in free-fall.

Definitely not space: https://www.youtube.com/watch?v=xdJwG_9kF8s from about the 3 min 40 second mark.

(FWIW, the slinky stuff is also really cool; weight -- in the contact[1] sense but not in the mg sense -- is dissipational, and it's nice to see that demonstrated, so I'm glad your comment caught my attention.)

> space does not feel cold

If any part of you which you expose to space (if it's shielded from solar heating, etc.) is moist -- your skin, your eyes, your tongue, the insides of your nose -- you will feel that part getting cold very quickly thanks to evaporative cooling, which works very well in free-fall and in the absence of a convecting fluid.

--

[1] http://math.ucr.edu/home/baez/physics/General/Weight/whatIsW...

You don't need it but it's useful.

Normal convective cooling (think your computer's CPU or your phone's backside) or evaporative cooling (sweat on your skin, discardable heatsinks) works by transferring heat to some medium. In the case of CPUs you do it twice, once from CPU to metal and then from metal to air to get a larger cooling surface.

In space you don't get that, or at least not without it being expensive af. The only way to lose heat energy is by radiating it away naturally (infrared light that our bodies like to emit carries heat away from our body).

This is very slow and requires a very different cooler design, and it changes some overall design metrics (if your CPU points its heat-dissipating surface at some other component of the craft, that component might overheat).

>The article was not only fascinating to read as a non-astronomer/lay person

yeah, this article was a masterpiece of science writing. all of the difficult concepts were boiled down into very fruitful analogies and metaphors which clarified things succinctly.

Not a rocket scientist, but this article is very well written for an audience with a minimal understanding of Physics and Chemistry. Articles like these help high school students realise that what they're learning now is not really a waste of time.

Dad joke: "They'll go at night"

But really, it's cool that they're using carbon-carbon protection similar to that which was originally developed for the leading edges of the Space Shuttle. And I really want to know how they built foamed carbon for the interior.

I'm guessing that they're using white ceramic paint on top instead of a reflective foil shield (like the Webb uses) because the foil would be shredded by the solar particles.

Speaking of jokes, I guess this is how they make the foamed carbon: https://www.youtube.com/watch?v=Wex_yKfrTo4

I wonder how much carbon foam insulation costs.


Well essentially they are going at night, bringing their own night with them

I'm curious about the foam, too. Normally, foam contains a lot of air. But that kind of foam will blow itself apart in a vacuum. How do they make foam where all the air pockets are replaced with vacuum? Is vacuum-filled foam a better or worse insulator than air-filled foam?

It could be open-cell foam instead of closed-cell foam. In earth atmosphere the cells would still be filled with air, but the air would be able to leave without blowing the foam apart.

Foam can be made and slowly evacuated. Vacuum filled foam is a better insulator

I think it would be better if it was vacuum voids rather than air filled, since air will transfer heat via conduction.

Normally, foam contains a lot of air. But that kind of foam will blow itself apart in a vacuum.

In the video, Thermal Protection System Engineer Betsy Congdon says it's 97% "air."

I can't say whether it's actually air, or she's simplifying things for the general public or not.

She also says twice that "water" is used in the radiators. But I'd have to believe that NASA's using something that absorbs/dissipates heat a little more efficiently. Perhaps whatever it is will end up in desktop gaming rig cooling systems eventually.

They used deionized water, not unusual for gaming rigs either.

The temperature range is about 15°C to 125°C; at high pressure this range is well suited to water, and water itself is a rather good coolant.

The only substance that can transport more energy than water that I know of is ammonia. But it's quite corrosive and it has to be pressurized at 50-100 bar to make a difference.
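The heat a pumped coolant loop carries away follows from Q = ṁ·c_p·ΔT. A rough sketch using the 15°C to 125°C range quoted above; the mass flow rate here is a made-up illustrative number, not PSP's actual figure:

```python
# Sensible-heat transport by a pumped liquid loop: Q = mdot * c_p * dT.
# c_p is water's textbook specific heat; the flow rate is hypothetical.

C_P_WATER = 4186.0   # J/(kg*K), specific heat of liquid water
mdot = 0.05          # kg/s, assumed flow rate (illustrative only)
dT = 125.0 - 15.0    # K, temperature rise across the loop

q_watts = mdot * C_P_WATER * dT
print(f"heat carried away: {q_watts / 1000:.1f} kW")  # ~23 kW
```

Water's unusually high specific heat is why it keeps turning up in both spacecraft and gaming-rig loops.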

And if the foam cells have a vacuum in them (in? lack of?) how does it not implode when on earth.

The difference between vacuum and 1 atm is about 14 psi; bike tires are pumped to between 50 and 120 psi (on my road bike, for example), so 14 psi isn't that large a pressure for a reasonably strong/stiff material.

I know what you mean but in a way I think there is a difference since the material has to withstand inward pressure; compressive forces.

Tire/tyre rubber has tensile strength, and most (all?) pressure vessels on earth are loaded in tension. It's odd how differently materials handle compression versus tension.

Anyway, I just thought it would matter.

Texas A&M, represent!

The part of the answer here that NASA always seems to skip (perhaps because it's not as fun as talking about all the cool heat-resistant technology) is that the vehicle is in an orbit that will only take it into the corona for a short time. While the closest approach is very close, its aphelion is (at closest) about the orbit of Venus. This gives it time to cool down after each corona encounter.


Thank you. I was wondering just this.

Heat resistant material will eventually reach equilibrium where the back side is almost as hot as the front side unless it's cooled somehow.

Also, the cool side of the craft will stay cool by radiating its heat away into space.

Is there a limit to the amount of heat a body of space can transmit via black-body radiation? I wonder if the performance of the heat transfer is going to be a useful measurement for science on this mission. I would think so. I just can't help but wonder how much heat an area of space near a star can have added to it. Is it infinite? Is the limit so high that it's far beyond even the atmosphere of a sun? And are there special areas of space where the heat limit has reached its maximum and if so what does that mean for the properties of that space?

The limit is based on your radiators, how much "empty" sky you can point them at, and how hot you can get them. Those are all fairly finite. Space being vacuum, there's no particular "amount of heat" it can hold, when you radiate heat it just goes away. But there is a difference in bodies that can send heat to you.

In space, most of the environment is "cold" in that there's not much energy coming from it, so if you show the distant stars a hot radiator, it'll cool off pretty well, because it's all send and no receive. But nearby bodies are different: Earth is approximately room temperature (and in low Earth orbit takes up quite a bit of the sky); the Sun is pretty hot, for a much smaller portion of the sky (depending on your distance.)

If you're not near a planet, and you can reflect away most sunlight, you get pretty cold. See James Webb Space Telescope or anything else with sun shades, including the PSP.

The corona will change this a little, because it's a somewhat denser plasma than usual, but not by much. It'll still be a question of radiator size and heat, which is fairly easy to calculate.

> Is there a limit to the amount of heat a body of space can transmit via black-body radiation?

There's a limit on the rate, rather than the amount, given by the Stefan-Boltzmann law. There will be grey body corrections for the heat shield.

Some details: https://en.wikipedia.org/wiki/Black_body#Radiative_cooling

Some even more gory details (slide 21 gives a sort of grey body curve; slides 108-113 are directly relevant; slides 34, 36 & 37 form a handy quick reference; slide 86 has a couple of graphs about foams):


and already that's more than I will ever really want to know, but this is probably of interest as a stepping stone for other HN readers. :-)
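The T⁴ scaling in the Stefan-Boltzmann law mentioned above is worth seeing numerically: doubling a radiator's temperature raises its output sixteenfold. A small sketch, with round assumed values for emissivity and area:

```python
# Stefan-Boltzmann radiated power: P = epsilon * sigma * A * T^4.
# Emissivity and area below are assumed round numbers for illustration.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
EPSILON = 0.9      # emissivity (assumed)
AREA = 1.0         # m^2 (assumed)

def radiated_watts(t_kelvin: float) -> float:
    """Power radiated by a grey-body surface at the given temperature."""
    return EPSILON * SIGMA * AREA * t_kelvin ** 4

for t in (300, 400, 600):
    print(f"{t} K -> {radiated_watts(t):.0f} W")
```

This is why running radiators hot matters so much: a 600 K radiator sheds sixteen times the power of a 300 K one of the same size.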

> how much heat an area of space near a star can have added to it

The region of space near a sun-like star generally has heat passing through it, starwards->infinity. Not much sticks around, and certainly not for very long.

There are limits on how much heat the outer atmospheres of stars can hold; the important thing is that "heat" here involves the amount of substance present, as well as how the average bit of substance in the region moves relative to other bits of substance in the same region ("temperature"). The limits are complicated because matter will tend to be blown away by the flux of radiation from the star through the outer atmosphere. The Eddington Limit is relevant here; Eddington's equation describes how radiation drives winds through a stellar atmosphere, up to the limit where it blows the atmosphere out "to infinity".

The outer atmospheres of stars with atypically strong magnetic fields can be Super-Eddington, as can those around compact objects like neutron stars. Black hole accretion discs can also be Super-Eddington and the matter in the inner portions may get hot enough to disintegrate into gamma rays. This is called a "[big] blue bump", and implies temperatures of a hundred megakelvins to a few hundred gigakelvins or so (quasars have the very hot bits around them), although the temperature in much of the disc will struggle to reach a megakelvin.

> Is it infinite? Is the limit so high that it's far beyond even the atmosphere of a sun?

We've seen the "yes" answer to the second question.

The first is straightforward: high heat ~ high energy, and if you put enough energy into a small enough volume, it collapses into a black hole. It's easier to do this with a large amount of relatively low-temperature matter than with a smaller amount of much higher-temperature matter.

> And are there special areas of space where the heat limit has reached its maximum and if so what does that mean for the properties of that space?

Black holes probably exist, given observations to date. Stellar mass ones are very cold. We have no evidence for black holes much smaller than our sun. They would be warmer, and would become very hot for extremely low-mass black holes. Here cold and warm relate to the very blackbody-like spectrum of the Hawking Radiation they emit. (~ nanokelvins or less. As said above, the accretion disc material can have much higher temperature).

The deepest layers of neutron stars are extreeeeemely hot (~ terakelvins). So are the outer layers. If they collapse, that heat is locked up within the black hole.

Pair-instability supernovae are the next hottest thing you can have, probably. In those, it's so hot in the core that the light produced as nuclei bump into each other is heavy in gamma radiation; a little hotter and you get gamma rays hot enough to turn right back into electron-positron pairs. (~ tens of gigakelvins). The heavy outer layers of the star then crash inwards as they lose their support from the outward pressure of the light (back to Eddington again). Kaboom! The Kaboom is likely so massive that not enough matter is left in the vicinity of the former star to leave a remnant like a neutron star or black hole. The matter thrown out of such a supernova can have ridiculously high temperatures, but rapidly become sparse enough that the heat per cubic metre drops off to nearly nothing. Hot protons arrive at Earth from such explosions as very-high energy cosmic rays, and when detectors pick them up lots of astronomers will get paged.

Penultimately, some things have very high temperature but not much heat (because there's not much matter at that temperature; it's stray wispy sparse particles with lots of empty space between them). The products of smashed-together lead ions at the LHC are in exakelvins. Depending on model, the temperature of dark matter in active galactic nuclei can be in zettakelvins. The daughter products of the highest-energy cosmic rays smashing into atoms in our atmosphere can be in yottakelvins. If we could collapse a spherical shell of photons into a black hole (a "kugelblitz"), the final temperature before the black hole appeared would be on the scale of the Planck temperature, meaning hundreds of millions of yottakelvins.

Finally, the extremely early universe probably had regions of higher temperatures still. Nothing we know prevents the temperature of the big bang from being infinite. However, nearly everyone hopes that quantum gravity would abolish that infinity, e.g. by gravitational radiation undergoing a phase change in dense, ultra-high but finite temperature regions, kind of like how pair-instability supernovae's innermost pressures drop at their temperature maximum when super-hot gamma rays change into electron-positron pairs.

... which only works due to the low density of corona.

In a thick atmosphere like earth, heat radiation comes from all sides, so no chance at radiating off very much heat.

> In a thick atmosphere like earth, heat radiation comes from all sides, so no chance at radiating off very much heat.

Desert nights, without the thermal mass of clouds and wet ground, cool quickly.

Earth is doing "barbecue roll" thermal management. Sitting between bright hot Sun, too hot, and dark cold deep-space sky, too cold, Earth spins, mixing too hot with too cold into not too bad. There seems an opportunity to improve the clarity of science education explanations, down to kindergarten, as the cooling half of that balance is pervasively left unmentioned. I wish I could find a forum where opportunities like this were discussed.

Also fun is using photonic structures to engineer your thermal emission spectrum to concentrate energy at frequencies where the atmosphere is more transparent.[1] So you couple more directly than usual with the deep-space sky.

[1] https://news.stanford.edu/2017/09/04/sending-excess-heat-sky... https://www.nextbigfuture.com/2016/12/cooling-by-radiating-h...

When the heat flow through the material reaches equilibrium, the back will radiate exactly as much heat as the front side receives. At that point the heat shield serves no purpose.

If the heat shield is where most of the delta T goes, it will definitely serve a purpose. The corona normally drops off as it goes off into space, but if you can concentrate most of that drop in one place, then your electronics will be much happier.

[0] https://www.electronics-cooling.com/1997/09/one-dimensional-...
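The "delta T across the shield" point reduces to one-dimensional steady-state conduction, q = k·ΔT/L. A rough sketch using the article's 115 mm shield thickness; the conductivity and face temperatures are assumed ballpark figures, not PSP's measured values:

```python
# 1-D steady-state conduction through the shield: q = k * (T_hot - T_cold) / L.
# Thickness is the article's 115 mm; k and the face temperatures are guesses
# at the right order of magnitude for a carbon-foam shield, for illustration.

K_FOAM = 0.3       # W/(m*K), assumed effective conductivity of carbon foam
L = 0.115          # m, shield thickness (from the article)
T_HOT = 1400.0     # deg C, sun-facing side (assumed ballpark)
T_COLD = 30.0      # deg C, spacecraft side (assumed)

q = K_FOAM * (T_HOT - T_COLD) / L
print(f"conducted flux ~ {q:.0f} W/m^2")
```

A low-k foam lets the shield hold a ~1400 degree difference across 11.5 cm while leaking only a few kW/m² to the back face, which the cool side can then radiate away.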

Short on what timescale though? Surely hours/days. Long enough to approach equilibrium, I would think.

Here's a graph of the PSP distance from the sun in solar radii: https://space.stackexchange.com/questions/17562/how-does-the...

The corona can be expected to extend out to ~12 solar radii, which suggests about a day of really severe conditions. (The data pass is 30 hours, which suggests that's about right.) That's why it needs to be a really good heat shield.

Its smallest orbit has a period of 88 days.

>This all has to happen without any human intervention, so the central computer software has been programmed and extensively tested to make sure all corrections can be made on the fly.

I'd love to get some deeper insight into how NASA writes and tests software; I can only guess it's a million miles from how most of us work. Anyone know of any good talks or articles from engineers there?

There was a discussion on HN a while back re: NASA's software safety guidelines. Here is the link to the discussion: https://news.ycombinator.com/item?id=12014271

The PDF linked to in the discussion is no longer there, but I found it on standards.nasa.gov here: https://standards.nasa.gov/standard/nasa/nasa-gb-871913

There are also some interesting product management related guidelines from NASA, like this from 2014: https://snebulos.mit.edu/projects/reference/NASA-Generic/NPR...

Hello! I’m a FSW dev at NASA Langley. As others have said, the talks from the FSW workshop are a great start. If you want to see a well-used framework, check out CFS (https://cfs.gsfc.nasa.gov)

Off topic, but I've always been interested by the way that government agencies almost exclusively choose acronyms for their software. Meanwhile private companies (especially in the last decade or two) almost always choose unrelated, single words.

It initially seems kind of ridiculous to me that everything has an acronym, but I suppose it's no more ridiculous than choosing a name that sounds like a Pokemon. Maybe less so.

In any case, thanks for sharing that.


> The development and verification of the Charring Ablating Thermal Protection Implicit System Solver (CATPISS) is presented. [...]

Not sure industry would try this one either, though it is very memorable.

There are a lot of really good links, but to be honest 99% of the secret to writing bulletproof code is “write the most simple, boringest program you can”.

Which is not to say that what NASA and its contractors do isn’t cool or that they don’t spent ungodly amounts of time and money on testing and verification, but you also don’t load one line of code more than is absolutely necessary onto a machine that absolutely must work at all times.

It’s an important lesson to learn and a good skill to exercise from time to time, but honestly it’s also something that doesn’t apply to most of our work as software engineers. For most software most people are willing to knock a couple of nines off the reliability of a piece of software in exchange for higher-quality output, lower costs, and more features. If my data analysis pipeline fails one time in ten because an edge case can use all the memory in the world or some unexpected malformed input crashes the thing but yields more useful output than if I kept it simple and hand-verified every possible input, well, that can be a fine trade off. If your machine learning model for when to retract the solar panel occasionally bricks and leaves the panel out to be destroyed, that’s less acceptable.

you also don’t load one line of code more than is absolutely necessary

Coincidentally, I spent the weekend banging around with an old TRS-80 Model 100, and it's been very interesting to see what workarounds and compromises were made to conserve space.

For example, the machine ships with no DOS at all, so if you're working with cassettes or modem only, you don't have that overhead.

If you do add a floppy drive, when you first plug it in, you flip some DIP switches on the drive and it acts like an RS-232 modem, and you can download a BASIC program from the drive into the computer that, when run, generates a machine-language DOS program and loads it out of the way into high memory.

I don't have one of those sewing machine drives, so I went with a third-party DOS, which weighs in at... wait for it... 747 BYTES.† An entire disk controller with command line interface in 2½ tweets.


The part that I find the most intriguing is "corrections can be made on the fly".

I can see how you would ensure reliability through proper requirements specification, a good software development process, separate independent implementations and extensive verification.

However, every time I read a popsci article about space flight software, they talk about this capability to push new code to the spacecraft while it is in flight.

I'm really curious to learn what this looks like in practice (technical details). Do they really have the ability to do an "ad-hoc" upload and execution of arbitrary code on these systems? If so, how are the ad-hoc programs tested and verified?

There is usually a piece of software running on the machine which basically just does this - allows you to command an image upload to the SSD, do a checksum of the file, then install it if all goes well. There is also usually a simpler version of the software on a redundant SSD or partition which the onboard computer will install if it detects that the software that is currently installed is malfunctioning.

My understanding is that some spacecraft launch with beta/alpha equivalent software. Correct me if I'm wrong, but I believe that the rovers do this, with simple software installed first, then more complicated versions installed once they know everything is working.

It's somewhat similar to updating your iphone, but instead you use a huge dish to do the transmission and the bitrate is pretty horrendous.
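The upload-then-checksum-then-install flow described above can be sketched in a few lines. The hash check itself is standard (`hashlib.sha256`); the image bytes and the install decision here are purely hypothetical, not any real flight API:

```python
# Sketch of a verify-before-install update flow, assuming the ground station
# sends the expected hash alongside the image. Names here are hypothetical.
import hashlib

def verify_image(image_bytes: bytes, expected_sha256: str) -> bool:
    """Compare the uploaded image's hash against the ground-supplied one."""
    return hashlib.sha256(image_bytes).hexdigest() == expected_sha256

image = b"...flight software image..."
expected = hashlib.sha256(image).hexdigest()  # normally computed on the ground

if verify_image(image, expected):
    print("checksum OK, safe to install")
else:
    print("corrupted upload, request retransmission")
```

The redundant-partition fallback mentioned above is the safety net: even if a verified image later misbehaves, the onboard computer can reboot into the known-good copy.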

I'm going to need a definition of "ad-hoc" here; no-one "deploys straight to production" on a spacecraft. Any patches have to be thoroughly tested on simulators and models of the spacecraft on earth before they are transmitted.

Thanks for the reply! So what you're saying is that it's just a "normal" over-the-air software update. I.e. you add some new functionality and then do a full system test of all functions of the software before replacing the entire image?

That makes sense, but is almost a bit disappointing. After all, that is exactly how it works for the boring systems here on earth. From various wired & co articles I had the impression that there was possibly something more; a mechanism that would allow users to send elaborate "commands" to the spacecraft to perform "ad-hoc" tasks at runtime. (What I mean by "ad-hoc" tasks are tasks that are unknown at the time of validation/testing of the software.)

Yes, we can send commands to the spacecraft once it's up there to do things like modify memory or hard drive contents directly, or turn on/off or command payloads and equipment. The full list is pretty exhaustive - anything you could want to be able to do, you can command manually. These things aren't 100% autonomous (though they have autonomous elements in the software).

There is also a way to send pre-programmed task lists to them which are executed sequentially, with delays if necessary.

That kind of thing is in the hands of operations, so it's not usually the job of the software team to test in the normal manner.

Very interesting! Is this kind of command capability (e.g. ability to modify memory contents) something that is usually only available on "non-critical" subsystems, or would you generally expect to also find it on critical components, like the communication or navigation modules?

The ability to modify memory contents is pretty much universal; you can modify things like eeprom contents, the RAM, hard drive, etc. There is no differentiation between critical and non-critical; it's all just fairly critical.

Ground won't send telecommands to a spacecraft to modify a piece of memory without knowing exactly what they're doing first.

It makes sense when you think about it. These missions last for months, even years, before a specific piece of software becomes useful, because it's for a specific part of the mission.

Wouldn’t you want the benefit of those extra months to perfect the software?

I think that in some cases it's simply due to the launch schedules being more optimistic than reality; the hardware has to be done, but the software development doesn't necessarily have to stop once you launch the thing.

In this case, the "corrections on the fly" refer to all of the real-time responses that the software makes without ground involvement. In the case of a solar limb sensor detecting the sun, the probe will abandon its data collection for that near approach, and go into an emergency response that has been made as straightforward and deterministic as possible, to maximize the chances of recovery for all single-fault and some double-fault scenarios.

To answer your question about software upload, the PSP has 3 redundant CPUs (primary, hot spare, backup spare), and each has multiple boot images. To upload software, the team uploads it to an inactive image of the backup spare CPU, promotes it to hot spare for long enough to collect the data it needs, reboots it into the new image, and then rotates it into the primary role, which is a seamless transition unless something goes wrong, and then the new hot spare takes over again within a second. Once they're sure the software is working, they can update the other CPUs. Before any of this, new software is tested on identical hardware set up on the ground with physics simulations.

See also, "Solar Probe Plus Flight Software - An Overview" from http://flightsoftware.jhuapl.edu/files/_site/workshops/2015/

Thanks - that was exactly the kind of info I was looking for!

Amazing that they had the ability to just run ad-hoc LISP on the spacecraft. It appears their method to ensure safety in the face of arbitrary code execution was to divide up the spacecraft into isolation zones and run the parts that have a REPL on a non-essential CPU. From [1]:

> To protect the main DS-1 mission from possible misbehaviors of RA, the design included a “safety net” that allowed the RA experiment to be completely disabled with a single command, issued either from the ground or by on-board fault protection.

[1] https://ti.arc.nasa.gov/m/pub-archive/176h/0176%20(Havelund)...

From previous articles, remote updates seem to be a core part of spacecraft software/operating systems. I even recall one situation where a spacecraft had a REPL built in that was used to fix a problem (slowly) remotely! They also have multiple levels of operation and watchdog functionality. I have no direct experience with that beyond following news about spacecraft.

Remote updates -- where you replace a full (sub)system -- are one thing, since you can always run the normal software validation procedure on the new version of the software. So an OTA update of a system (even in flight) does not sound like rocket science (yet)...

But: Once you include a REPL or another mechanism to push and execute arbitrary code "ad-hoc", I wonder how that could possibly be tested and validated? Surely as soon as you add the ability to run arbitrary code, there is no way of testing for all possible states of the system as part of the validation process?

In other words, how do you allow the user to push arbitrary code, but prevent them from putting the spacecraft into a condition from which it can not be recovered? The only way I could naively think of would be to only allow the user to push code to a completely isolated CPU that has a remote-reset functionality from the main/comms CPU.

Still, the popsci articles I read made it sound like there might be more to it. It would be excellent to find some first-hand accounts/sources on what this looks like in reality.

One of the outer solar system probes had a radio that had enough built in logic to accept software updates and maneuvering commands independently of the two redundant on-board science computers.

Lights-out management indeed.

Here's an article about it I read a while back, interesting read: https://www.fastcompany.com/28121/they-write-right-stuff

This redundant software and hardware setup typically isn't necessary when humans aren't involved. The space shuttle system is similar to what you will find on a Boeing or Airbus aircraft. Redundant software, written by different people in different countries with completely different cultures in different languages (on purpose), running on multiple machines with different hardware and voting on the decisions to be made.

It is complete overkill when "all" you're going to lose is a robot and some pride, as with a space probe you want to have lots of features and this level of safety is very restrictive on development effort.

More than likely, the spacecraft in question is written in C or C++ with the help of RTEMS or VxWorks. It is probably running a radiation hardened, very slow processor.

They don't do 3x calculations and voting, but they do often have redundant computers they can switch over to in case of failure. Curiosity had to switch to its 'B-side' computer back in 2013 when the A-side had a memory issue. Even when not carrying humans, it's still a billion- or million-dollar mission that probably wouldn't be replicated for a while, if ever (within the researchers' lifetimes at least), and that could be scuttled by a software bug.

If anyone is interested JPL publishes their code standards doc for C: https://lars-lab.jpl.nasa.gov/JPL_Coding_Standard_C.pdf

Most spacecraft have some form of redundancy to guard against single point failures. It's a waste of money to send up failure prone hardware. Amateurs building cubesats, probably not, but the big players aren't going to take that sort of risk.

You are right, they have redundancy in all cases - but it isn't usually software written by multiple teams on different hardware.

Talk on JPL's software for the Curiosity rover: https://www.usenix.org/conference/hotdep12/workshop-program/...

Their cost per line of code is also, pardon me here, astronomical. That quality has a cost most shops cannot stomach.

There are many good videos from the yearly Flight Software workshop http://flightsoftware.jhuapl.edu/

also, I would imagine that there would be a strong bias towards reuse...which leads you to long term standardization of not just language but also CPU architecture.

Hello! FSW dev from NASA Langley here. We do try to do reuse as much as possible, but small satellites (CubeSats) are starting to change that. There are so many new pieces of hardware and so much experimentation going on to see what’s feasible in space. There are new RTOS frameworks being developed both by commercial and government (CFS, F-prime). If you’re interested in this in particular there is a conference called SmallSat which hosts the talks from previous years. https://smallsat.org

> If Earth was at one end of a yard-stick and the Sun on the other, Parker Solar Probe will make it to within four inches of the solar surface.

91cm and 10cm, to save anyone else doing the conversion. Also, it seems to understate the closeness: Closest approach is 6.1 million km, which is 1/24th of 1 astronomical unit, but four inches is 1/9th of a yard-stick.
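A quick sanity check of those two ratios (my own arithmetic, using the 6.1 million km closest approach and 1 AU = 149.6 million km):

```python
# Sanity check of the yard-stick analogy, using the closest-approach
# distance quoted in the thread (6.1 million km) and 1 AU = 149.6e6 km.
AU_KM = 149.6e6
closest_km = 6.1e6

actual = closest_km / AU_KM   # fraction of the Earth-Sun distance
analogy = 4 / 36              # 4 inches on a 36-inch yard-stick

print(f"actual:  1/{AU_KM / closest_km:.1f}")  # about 1/24.5 of an AU
print(f"analogy: 1/{1 / analogy:.0f}")         # 1/9: understates the closeness
```

So the probe gets relatively much closer than the yard-stick picture suggests: the true equivalent would be about an inch and a half from the stick's end.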

>Another challenge came in the form of the electronic wiring — most cables would melt from exposure to heat radiation at such close proximity to the Sun. To solve this problem, the team grew sapphire crystal tubes to suspend the wiring, and made the wires from niobium.

Are these wires on the outside of the spacecraft? And what about the silicon in all the electronics this thing must carry? The cooling surface would also get a bit hot (it always absorbs some energy at some rate), so how does the coolant transmit any heat away from the probe?

Melting points: sapphire 3,722°F (2,050°C); niobium 4,491°F (2,477°C).

Importantly, they likely have very different thermal conductivities and specific heats. They probably also need reasonably similar thermal expansion coefficients so that heating and cooling cycles do not cause them to strain and break.

Not a scientist, but I assume using "radiative heat transfer" ie a hot surface shielded from the sun emitting thermal radiation away from the spacecraft.

Yes, but the shield would have two sides, where one of them is facing the sun.

As I understand, temperature is a measure of how fast atoms and molecules are vibrating, and not a measure of how much energy per unit area of contact can be transferred in a unit time.

Exactly. Stick your hand into an oven at 100 celsius ...OR... stick your hand into a pot of water at 100 celsius.

The difference is that the water molecules are more tightly packed than the air molecules in the oven. In space, they are quite far apart.
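Roughly quantified (a back-of-the-envelope sketch using Newton's law of cooling, q = h * A * dT; the convection coefficients and hand area below are typical textbook values I'm assuming, not measured):

```python
# Rough comparison of heat delivered to a hand in 100 C air vs 100 C water.
h_air = 15.0      # W/(m^2*K), natural convection in hot air (assumed)
h_water = 800.0   # W/(m^2*K), natural convection in water (assumed)
area = 0.04       # m^2, rough surface area of a hand (assumed)
dT = 100 - 37     # K, medium at 100 C vs skin at ~37 C

q_air = h_air * area * dT      # ~38 W: uncomfortable but briefly tolerable
q_water = h_water * area * dT  # ~2000 W: scalds almost instantly
print(round(q_water / q_air))  # water delivers ~50x the heat flow
```

Same temperature, wildly different heat transfer - which is the article's point about the near-vacuum of space taken to the other extreme.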

> Stick your hand into a pot of water at 100 celsius

..and: please don't! :)

Fast reflexes will prevent major damage. It won't prevent pain though.

As someone with extensive scarring on one arm from a water burn I'd prefer it if you didn't put things like this out there.

Fast reflexes won't help your hand recover from a bad burn, and they won't prevent one either: your hand is much too large to be immersed fully and retracted before substantial damage occurs.

Maybe for a fingertip, but I wouldn't try it with a full hand. The water stuck to the hand will take time to cool down no matter how fast you remove it. Unless you're just saying second degree burns don't count as major damage...

Is that also a reason that you feel warmer in high humidity heat?

That's only part of it - sweat not being able to evaporate is another part. I'm not sure which is the bigger effect.

The article's video described this

As explained in the article.

> Temperature measures how fast particles are moving, whereas heat measures the total amount of energy that they transfer.

They have a nice theory that the spacecraft won't melt when it gets close to the sun, but do they really know for sure? NASA doesn't succeed on every mission. It is possible something will go wrong, and the probe will melt.


It appears the heat shield is a carbon-sheet sandwich. At first I guessed some form of tungsten carbide, but carbon is the traditional material of NASA heat shields.

> Why is the solar wind a breeze closer to the sun but a supersonic torrent farther away? Why is the corona itself millions of degrees hotter than the surface?

I suspect the answer to all those questions is simply gravity, but it will be nice to verify such things with data.

No. The latter question is the “coronal heating problem”, which is a major unexplained phenomenon in physics: https://en.m.wikipedia.org/wiki/Corona#Coronal_heating_probl...

"Why is the corona itself millions of degrees hotter than the surface?

I suspect the answer to all those questions is simply gravity, but it will be nice to verify such things with data"

Can you explain your hypothesis a bit?

Certainly. Heat is an expression of energy. The gas laws describe the relationship between pressure, volume, and temperature. In this case we are talking about the interchange between a plasma and a gas. While the gas laws work well for gases, they are less uniformly applicable to other states of matter. https://en.wikipedia.org/wiki/Gas_laws

The Sun has enormous gravity. It comprises 99.86% of the solar system's total mass. It would seem heat can be more freely expressed in the vacuum of space upon escaping the gravity that confines the high-density mass. https://en.wikipedia.org/wiki/Sun

As for solar wind momentum, it would make sense that a particle accelerates away from the Sun at near-constant energy while becoming less confined by gravity over distance... at least until it hits the termination shock at the edge of the solar system.

Of course these are all speculations, and hopefully the probe will provide the data to support more valid conclusions.

Not GP, but here's an article that talks about it. Note that this is relatively recent. It's not something that was thoroughly understood in ancient times exactly.


> https://www.nasa.gov/feature/goddard/sounding-rockets/strong...

It explains the temporal heating behavior at some scales, but it doesn't give a mechanism for the heating. It could be electron-beam target heating, but it could also be mediated by plasma waves.

The electron beams need acceleration, and the most common suggestion is x-point magnetic reconnection providing up and down voltage gradients due to the changing magnetic field. But the number of electrons needed is unphysically large: the entire electron content of the relevant volume of the corona.

There are plasma wave models that don't require unphysically large parameters.

These two (and a couple other) options aren't clarified by the observation of heating profiles. With the launch of Parker Solar Probe and the DKIST (diffraction limited solar telescope) the two models above will finally be testable. Spectroscopy of ion species by DKIST will tell what kind of heating is happening and Solar Probe will be there to measure the input from the corona.

> Particles may be moving fast (high temperature), but if there are very few of them, they won’t transfer much energy (low heat). Since space is mostly empty, there are very few particles that can transfer energy to the spacecraft.

That explains why energy is not transferred by conduction or convection to the spacecraft. But what about energy (heat) transfer by radiation? Why won't the spacecraft get all the energy from radiation and have its temperature shoot up?
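For a sense of scale, here's my own crude radiative-equilibrium estimate (not from the article): treat the shield as a flat black plate at the 6.1 million km closest approach, absorbing sunlight on one face and re-radiating from both faces, so F_in = 2 * sigma * T^4.

```python
# Back-of-the-envelope radiative equilibrium at Parker's closest approach.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2*K^4)
S0 = 1361.0        # solar flux at 1 AU, W/m^2
AU_KM = 149.6e6
r_km = 6.1e6       # closest approach, from elsewhere in the thread

flux = S0 * (AU_KM / r_km) ** 2   # ~8e5 W/m^2, roughly 600x what Earth gets
T = (flux / (2 * SIGMA)) ** 0.25  # equilibrium temperature, kelvin
print(round(T - 273.15))          # on the order of 1,400 C
```

That lands right around the widely quoted ~1,400 C for the shield's sun-facing side: the front face does get ferociously hot, and the job of the TPS is to keep that heat from conducting through to the instruments behind it.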

Radiant shielding

A key sensor (Faraday cup [1]) for studying the solar wind, outside of the shield, was constructed of not one but apparently four varieties of unobtainium: Titanium-Zirconium-Molybdenum alloy, acid-etched tungsten, sapphire crystal tubes, and niobium wiring. All together these could make a fascinating engagement ring.

[1] https://en.wikipedia.org/wiki/Faraday_cup

The article makes it sound like there is no possible way for the probe to melt. Is this actually the case? Is there no possibility that manufacturer defects or a solar anomaly that could cause unexpected problems?

I don’t want to downplay the good design and engineering that went into this, but should we be so confident without actually having done something like this thousands of times?

> Is there no possibility that manufacturer defects or a solar anomaly that could cause unexpected problems?

There sure is. At least two of the systems (positioning and water cooling) are active systems that could fail.

> but should we be so confident without actually having done something like this thousands of times?

"we" are confident enough that we rely on it to protect a > 1 billion USD probe. What's the use in adding a lot of ifs and maybes to some piece of marketing/explanation?

If it fails, adding some ifs and maybes to a marketing video won't really change anything.

It's worth noting that the initial orbits will all be fairly far out and they will ease into the tighter orbits after a few years of testing.

Doesn’t it undermine the credibility of NASA if something goes wrong? The public who is the audience for this article is also paying the bills (through taxes). I’d settle for just changing “won’t melt” to “shouldn’t melt”. I think that appropriate for something that’s never been attempted.

(former software engineer on PSP here) It's a certainty that it will melt eventually, but that's not a satisfying answer to the question "why won't it melt?" The satisfying answer is "for the time period when it won't melt, it won't melt because of..."

Either way we’re chucking lots of money at the sun so if it fails we still learn something, not to mention developed a lot of technologies along the way.

For sure, but it would be a shame if NASA lost confidence and trust (and possibly funding) with their stakeholders (the public), because they weren’t more upfront about the potential risks. As a scientific organization that has experienced significant (and expensive) failures before, I expect better.

It seems like you want to discredit NASA now ("shouldn't melt") vs some imagined possibility("won't melt" but it does). With the amount of design, analysis, testing, and independent review and verification of the systems, backup systems, triply redundant systems, and autonomy, we can be as sure of it not melting as we can be sure of anything. And we certainly spent a large amount of money on this (about 1/10th of the World Cup), but in performance and value per $, it's a great deal.

Not at all. I expect NASA has done as much due diligence, planning, testing, and verification as possible. I just don’t think they are being upfront as they should be about the possible risks for a previously unattempted scientific endeavor. We’ve had massive failures (including NASA itself), in environments that are much better understood and with systems that have actually had exposure to those environments.

I also didn’t intend on “shouldn’t melt” to be taken sarcastically. I was trying to show how I would like something changed in the article. Probably a bad use of quotes on my part...

Manufacturing defects are part of engineering - they're good at this and there will be multiple scans done of the shield before it's sent up (x-rays etc) to check for defects.

But I'm also curious about what happens in the event of a solar flare or similar - from an engineering standpoint, what's their safety margin? Solar density goes up two hundred percent?

Any reason why they're using water for cooling? Is there no other liquid with lower density and perhaps better thermal properties?

I imagined they would try to save weight in some places if it allows them more freedom in others. Although I have no idea how much water is used in the first place so it might be a moot point.

> The coolant used for the system? About a gallon (3.7 liters) of deionized water. While plenty of chemical coolants exist, the range of temperatures the spacecraft will be exposed to varies between 50 F (10 C) and 257 F (125 C). Very few liquids can handle those ranges like water. To keep the water from boiling at the higher end of the temperatures, it will be pressurized so the boiling point is over 257 F (125 C).
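The pressurization requirement can be sanity-checked with the Antoine vapor-pressure correlation. The constants below are a published set for water in roughly the 100-374 C range (mmHg, C); treat the exact numbers as an assumption for illustration, not mission data.

```python
# Estimate the pressure needed to keep water liquid at 125 C
# using the Antoine vapor-pressure correlation.
A, B, C = 8.14019, 1810.94, 244.485  # water, high-temperature range (assumed)

def saturation_pressure_bar(temp_c):
    p_mmhg = 10 ** (A - B / (C + temp_c))
    return p_mmhg * 1.33322e-3  # mmHg -> bar

print(round(saturation_pressure_bar(100), 2))  # ~1 bar: boils at sea level
print(round(saturation_pressure_bar(125), 2))  # ~2.3 bar keeps it liquid at 125 C
```

So a bit over two atmospheres in the loop is enough to push the boiling point past 125 C - a modest pressure vessel, which helps explain why plain deionized water wins.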

I just imagined that for an object sent to the Sun they'd want to... overdo it a little. Go for something synthetic that would behave even better or would save half a kilogram.

I'm just the inquisitive type. Explaining that something is used always makes me wonder "why not something else" :).

Water has a higher specific heat than most substances, and probably the highest of anything that's a liquid across the operating temperature range for the electronics.

Not a spacecraft engineer, merely somebody who has studied physics a long time ago, so take my answer with a grain of salt.

Basically, there are two ways you can design a cooling system: one that involves phase change, and one that simply transports heat from A to B by moving a heat carrier, usually a liquid.

Phase change systems typically have a higher efficiency, because the latent heat of a phase change is much larger than the sensible heat you can move with heat capacity alone. But they have two disadvantages: they only work near the boiling point of the medium, and you need to deal with all the pressure changes that come with a phase change.

The Parker probe seems to use a "mere" transport, and there the heat capacity and the working range of the medium is very important, and water is a pretty good choice on both of these criteria.

Oh, and you also don't want something very corrosive to destroy your expensive space craft from the inside :-)

IIRC the ISS uses two cooling systems: one based on water that is used throughout the station, and an ammonia-based system that takes the heat off the water and transports it to the heat radiators. But on the ISS, maintenance is possible.

It blows my mind to think that we DO have materials that can withstand such temperatures, even after understanding the part about heat transfer.

I hope those kinds of materials can be mass-produced in the near future to be used as insulation for homes!

With a 200 km/s speed at the deepest orbit, we should probably have brought several tons of fuel to do an Oberth burn, say for a piece of the probe, which would then pass Voyager in a few years and be on track to reach Alpha Centauri in several thousand years :)

The article and the videos it contains are amazingly clear. I cannot wait for the first results of that mission! I'm always amazed by NASA's great endeavors, and by how these scientists can be so sure their probes are not going to melt...

This mission is also very cool in the sense that the period from launch to first interesting data is short. There is going to be a Venus flyby on 28 Sep, and a Sun flyby on 1 Nov.

Some of the most incredible astronomy shots I've seen are close(r)-ups of the sun - I can't wait to see what Parker sends back as it closes in on the Sun.

Can we use the same probe to travel to Earth's core?

go at night...

Inches? A yard stick? Millions of degrees Fahrenheit?

I’m American and I’m embarrassed by this. This is science, make it easy for people to understand. Use SI units, please.

It's written for a general American audience so it makes sense to use those units.
