Perseverance Rover lands on Mars [video] (youtube.com)
1663 points by malloreon 12 days ago | 495 comments





You can also watch an EDL visualization in your browser: https://eyes.nasa.gov/apps/mars2020/#/home

And read about how it will use Terrain Relative Navigation to find a safe landing spot: https://www.jpl.nasa.gov/news/a-neil-armstrong-for-mars-land...

Perseverance is phenomenally complex: its Sample Caching System alone contains 3,000+ parts and two robotic arms. So excited for all the sciencing this nuclear-powered, sample-drilling, laser-zapping behemoth can do when it joins its friends on the only planet (known) to be inhabited solely by robots.

Edit: Percy is about to release its two 77 kg Cruise Mass Balance Devices (is this what NASA calls 'weights'?) to set up the right lift-to-drag ratio for entry. Mars InSight will be listening for the 14,000 km/hr impacts of these weights, providing useful calibration data. We wrote about this in this week's issue of our space-related newsletter, Orbital Index - https://orbitalindex.com/archive/2021-02-17-Issue-104/


It turns out that they took 640lbs (!) of weight to mars to be tossed off at various points during EDL. The video is worth watching if you'd like some more of the nitty-gritty behind the process: https://www.youtube.com/watch?v=W0NakShgbHY

Note: that's 290kg in NASA units

And knowing the difference is an important part of rocket science:

https://www.simscale.com/blog/2017/12/nasa-mars-climate-orbi...


Great video! I am also surprised by the fact that they bring that much mass just to jettison. In theory they could have mounted some of the useful mass on a slider/rail system to achieve the necessary adjustment to the mass distribution without dropping mass overall, but apparently it wasn't worth the complexity/volume cost.

I'm sort of surprised we don't yet have ML powered "de-accent-ization". His french accent isn't hard to understand at normal speed, but when I set it to 1.5x or 2x speed it becomes hard to decipher in a way native speakers usually are not. If there was just a button (for him or me) to hit to tweak the sounds a bit to reduce the accent, I bet this problem would go away.


I’m amazed that people still say things like “de-accent” as if there was such a thing as “no accent”. You are asking for a button that makes his French accent more like your own. It’s a separate thing from native vs. non-native speakers - there are plenty of native English speakers with accents that you would also find challenging.

You are reading something into my comment that wasn't there in order to pick a boring fight. There is of course no such thing as no accent a priori, but there is such a thing as "accents understood by (vastly) more people" and "accents closer to the mean accent of native speakers". When I learn Russian, my English accent is not on the same footing as a Muscovite's; the intended notion of "de-accenting" the English accent on my Russian is obvious.

Consider responding to the substance of the comment instead.


> accents closer to the mean accent of native speakers

Notice that, for the case of English, most speakers are not native, by a huge margin! Native English speakers are a biased minority, and with a lot of variation within. Not sure that an "average native" accent is a useful concept at all. I, for one, tend to find most non-native English speakers vastly easier to understand than many native speakers.


This is a dumb fight to pick. Dude just wants some ML software to figure out how to change accents.

Maybe he wants to change the accent to a Texas accent, or the Queen's English; who cares, it's the ML part that's interesting.


> Notice that, for the case of English, most speakers are not native, by a huge margin!

I'm well aware, and this does not rebut any of my points.

> I, for one, tend to find most non-native English speakers vastly easier to understand than many native speakers.

That "many" native speakers in a language of hundreds of millions of speakers are hard to understand does not challenge the claim that a non-native accent brought closer to any native accent, much less the mean native accent, will for the large majority of listeners be easier rather than harder to understand.


What is the "mean native accent" for English? Would it be possible to somehow synthesise that? I'd love to hear what it sounds like!

I responded to the part of the comment that I had a response to, and I didn't mean any kind of hostility, much less to pick any kind of fight.

There is no such thing as "no accent". There is such a thing as a more neutral accent, or an accent more widely understood.

Trash.

I, for one, would be uncomfortable with AI removing my accent. I understand it's for other people to understand me better - and I am fine with AI-generated subtitles - but altering the way I speak would reduce the amount of "me" in ways I'm not fully ok with.

What's special about speech that makes this argument apply to speech alteration but not to subtitles? It's a tool to make you easier to understand, not to erase your person.

Speech is more than articulating words. It is also about rhythm and melody, and ideally also body language.

The way someone speaks is very unique ... and it is actually very, very important how you speak to bring your point across. Or ... to convince people.

A robot voice might present the best arguments, but it will very likely lose to a good speaker who can (literally) tune in to his audience.

Speech is a complex pattern of sound waves, containing much more information than binary-encoded words.

So if there were an ML tool to make people with strong accents more understandable, why not. But you can also mumble without any accent.

And I can enjoy and understand certain people with strong accents much better than natives, because they are just good speakers.

And having subtitles is one thing, but changing their voice would require consent, I believe. (Unless you run the tool for yourself; but I believe the parent's point was that he speaks and then a tool automatically enhances his voice. I would not like that either.)


You wouldn't need to use it on videos you produce if you don't want.

Interesting, are you uncomfortable with the current option to increase or decrease playback speed?

No need to have a fancy all-new ML algorithm - stick a text-to-speech output on the auto-generated video subtitles and you can set it to whatever language you like.

If the speech recognition / subtitling algorithm can't understand the nuances of the language, that's going to be a problem anyway... accented pronunciation is so multidimensional, you're pretty much going to have to transcribe syllables/phonemes first...


I was hoping someone would link this video; it describes the various phases in a ton of detail.

Some of the other videos on this channel are just as in-depth: the ones about the plumes/exhaust of rocket engines as well as star occlusions are incredibly detailed.


Thanks for introducing me to the French Space Guy! I am hooked.

> Cruise Mass Balance Devices

They put those in to make the probe seem higher-quality. They got the idea from Beats headphones.


Just to set the record a bit straighter and ruin your joke: those pics of Beats with weights were knockoff Beats from the flea market, not real ones.


We were doing that at Meraki back in the early 2010s (it turned out they were also useful as a heat sink and, because the metal was exposed to air, a radiator). Pretty sure Meraki got the idea from some Apple product or other.

Speaking of Meraki, I had an awful experience with those devices. Basically we were in a building with TONS of other wifi networks around, and the Meraki network just went crazy from time to time until we fine-tuned the channel for each AP. I mean, for something you pay $$$ to buy the devices and $$$ each month for the subscription, that's a pretty poor experience.

Nortel and its predecessor Bell Northern Telecom used to put weights in the handsets so they had a nice feel to them.

> is this what NASA calls 'weights'?

Well no, the Cruise Mass Balance Devices are intended to Balance the Mass of the spaceship during Cruise conditions. That these Devices are single-part and constructed out of a single chunk of metal each should not be construed as merely being 'weights'. :)


What I'm getting from this, is that you can use weights to balance the mass of the spaceship during cruise conditions.

Actually more like that you can eject weights to intentionally un-balance the mass of a spaceship so it'll glide rather than falling straight down.

Atmospheric drag forces the center of drag and the center of gravity to line up along the same axis, which makes the craft fly slightly sideways if the spacecraft isn't perfectly balanced. Done carefully, the direction of flight ends up slightly sideways, which is awkward but basically the same as having lift in that direction. Add roll-control thrusters into the mix, and you get a really crude glider, with fixed pitch force, zero yaw control, and barely controllable roll. With JPL-class engineering, such a spacecraft is capable of actively correcting its landing location.


It’s like hypersonic curling

Ships do this all the time (ballast), and anybody who's ever flown on a light aircraft or helicopter also knows the importance a pilot places on weight distribution.

I guess what's surprising is that they needed that much weight (140+kg seems like a lot?) and couldn't redistribute existing componentry; guess the knapsack algorithm wasn't good enough, or that they just couldn't break up enough pieces?

And yes, Cruise Mass Balance Devices sounds like the type of name a tired engineer would come up with to convince upper management...lol


They can't redistribute existing componentry because of competing requirements: during cruise, the spacecraft needs to be balanced around the rotational axis (perseverance rotates at 2RPM in cruise). During re-entry, an asymmetric weight distribution is needed to generate lift.

Surely you mean: "What I'm getting from this, is that you can use devices to balance the mass of the spaceship during cruise conditions"...j/k

Maybe they have both, and needed a name to distinguish the ejectable weights from the non-ejectable weights?

Got it. Fancy weights.

Also, InSight's SEIS seismometer is a true marvel: "We have been able to detect, at about 10 hertz, displacement of the ground of the order of less than 5 picometers…which is a fraction of the size of an atom." — https://eos.org/features/a-modern-manual-for-marsquake-monit...

LIGO puts this to shame with 10^11 better sensitivity.

At sensitivity per gram, InSight wins. Per dollar, probably not.

They should make km the default unit and not miles....

NASA has experience in unit foul-ups. Mars Climate Orbiter is the $125M poster project reminding everyone of the importance of consistency in units.

"At 09:00:46 UT Sept. 23, 1999, the orbiter began its Mars orbit insertion burn as planned. The spacecraft was scheduled to re-establish contact after passing behind Mars, but, unfortunately, no signals were received from the spacecraft.

An investigation indicated that the failure resulted from a navigational error due to commands from Earth being sent in English units (in this case, pound-seconds) without being converted into the metric standard (Newton-seconds).

The error caused the orbiter to miss its intended orbit (87 to 93 miles or 140 to 150 kilometers) and to fall into the Martian atmosphere at approximately 35 miles (57 kilometers) in altitude and to disintegrate due to atmospheric stresses."[0]

[0] https://solarsystem.nasa.gov/missions/mars-climate-orbiter/i...
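The failure mode is easy to reproduce: a value emitted in pound-force-seconds but consumed as newton-seconds is silently wrong by a factor of about 4.45 on every burn. A toy Python sketch (the function and variable names are hypothetical; only the conversion factor is real):

```python
# Toy illustration of the Mars Climate Orbiter failure mode: impulse data
# produced in pound-force-seconds but consumed as newton-seconds.
LBF_S_TO_N_S = 4.448222  # 1 lbf = 4.448222 N, so 1 lbf*s = 4.448222 N*s

def impulse_in_newton_seconds(value: float, unit: str) -> float:
    """Normalize a reported impulse to N*s, failing loudly on unknown units."""
    factors = {"N*s": 1.0, "lbf*s": LBF_S_TO_N_S}
    return value * factors[unit]  # KeyError instead of a silent wrong answer

reported = 100.0  # emitted by ground software in lbf*s
correct = impulse_in_newton_seconds(reported, "lbf*s")  # ~444.8 N*s
assumed = reported  # what happens when the unit tag is ignored: 100.0 "N*s"
# Every burn modeled with `assumed` is off by the same ~4.45x factor.
```

Tagging values with their units and converting at one choke point is the usual defense; the mishap happened because the conversion step simply didn't exist in the interface between two teams.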


Ever been on an airplane? Altitude was almost certainly measured in kilofeet.

As noted in other comments, NASA (like the rest of the United States [1]) does use the metric system.

But it doesn’t matter. Nothing about the metric system makes it uniquely suitable to landing on Mars. Or space travel in general. What matters is a consistent standard.

Internally NASA could use Armstrongs, where 1 is the weight or height of Neil Armstrong at KSC on July 16, 1969 at 13:32:00 UTC. It doesn't matter, as long as it is consistent.

[1]: https://en.m.wikipedia.org/wiki/Metrication_in_the_United_St...


Consistency is not the only value of a system of units. Convenience also matters, and that is where the metric system shines. Having all measures of a unit in multiples of 10 combines perfectly with our decimal calculations, and having as few magic numbers as possible to convert and combine between units makes mistakes harder. How many calories of kinetic energy does a pound going 10 miles per hour have? I know a kilo going 10 meters per second has 50 joules of kinetic energy without looking anything up, doing the calculation in my head.
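The head-math above checks out, and the contrast is easy to demonstrate. A small Python sketch (the helper name is mine; the two conversion factors are the standard exact definitions):

```python
# In SI, kinetic energy is just E = 1/2 * m * v^2 with no unit constants.
def kinetic_energy_j(mass_kg: float, speed_m_s: float) -> float:
    return 0.5 * mass_kg * speed_m_s ** 2

print(kinetic_energy_j(1, 10))  # 50.0 J, the mental-math example above

# Starting from imperial inputs, conversion factors must come first:
LB_TO_KG = 0.45359237   # exact by definition
MPH_TO_M_S = 0.44704    # exact by definition
print(kinetic_energy_j(1 * LB_TO_KG, 10 * MPH_TO_M_S))  # ~4.53 J
```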

Can’t talk to the statute mile, but the nautical mile is sublime: one minute of latitude. The amount of math you can do in your head with a system with so many factors of 2, 3, and 5 is truly amazing.

Also the average of one minute of arc on a great circle route, which was the real handy thing about it at sea.

Yes, meridians (on which latitude is measured) being a special case of great circle routes that happen to pass through the poles. That said, as a practical matter, you are more frequently contemplating a chart of smaller scale than "globe" so you're usually counting up distance between fixes, or distance to next turn in a harbor, or some such thing, where you have a compass in hand and just need to set the compass to the length of a mile. The nearest latitude tick marks are quite handy for that.

I'm not aware of any tricks to make mile calculations easier but the fractional scale common with the inch is very useful for real-world mental calculation and practical exchange. Effectively everything is powers of two. Got something between 1/4 and 1/2 inch? Great, use 1/8ths. It's three. Not close enough? How about five 16ths? It infinitely scales to provide another unit that is suited to the measurement at hand. In some contexts you might just say you pick the closest 1/8th. In others you might use 32nds. You can use the same measuring devices to agree on an ad-hoc standard that everyone understands.

But with the metric system you only really get cm (too coarse) and mm (too fine) but you don't get something like 9/16 so you can't "work in 16ths" and have everything be whole units again.

Adjusting HVAC in degrees-C is infuriating to my Fahrenheit sensibilities. 20C is cold, 22C is hot. 21C is probably ok but really I want something like 20.5C. The comfortable range for a room is 3-5 whole units of F, but requires a bunch of fractions in C that you may not even have available on your thermostat.

Sure, converting between units is easy in the metric system. That doesn't make it the best thing to use all the time. Hell, the idea of thousandths of an inch is used commonly, so even the imperial system is base 1000 in some cases. But I've never seen anyone utilize the fractional scale with metric units, probably because the units are the wrong size for that to be useful.
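The "pick a finer power-of-two scale when the current one isn't close enough" workflow described above is easy to mimic with Python's fractions module (the function name is made up for illustration):

```python
from fractions import Fraction

def nearest_inch_fraction(measurement_in: float, denom: int = 16) -> Fraction:
    """Round a measurement in inches to the nearest 1/denom of an inch."""
    return Fraction(round(measurement_in * denom), denom)  # auto-reduces

# 0.34" rounded to 16ths, then refined to 32nds when that isn't close enough:
print(nearest_inch_fraction(0.34))      # 5/16
print(nearest_inch_fraction(0.34, 32))  # 11/32
```

Doubling the denominator strictly refines the previous answer, which is exactly the "ad-hoc standard everyone understands" property the comment is describing.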


But with the metric system you only really get cm (too coarse) and mm (too fine) but you don't get something like 9/16 so you can't "work in 16ths" and have everything be whole units again.

People who use metric units are perfectly happy rounding to the nearest 0.5cm or 0.25cm if that's what's needed, exactly as people do with inches. Why on earth would you imagine people use mm if something doesn't call for them?


But does anyone say 1/8th centimeter? Seems easier to just drop down to millimeters at that point. Which is fine, but it loses out on the convenience of fractions.

People do say things like 1.25mm, which is 1/8th of 1 cm.

Presumably someone who uses Imperial would say 5/128th inches if they wanted to describe something that's equivalent to 1mm?


Most likely they would use mils/thou.

25mils ~ 1mm

Or 1/16in as others have said.


40 mils ~ 1 mm

1000 mils per inch / 25.4 mm per inch = 39.37 mils per mm

This simple fact still seems wrong to me somehow.


Brain fart

As an occasional woodworker and carpenter, I can tell you having evenly divisible (inch) measurements makes mental division a whole lot easier. It's just a case of using the right tool for the job.

It’s unfortunate that fractional measurements and the base size of units get conflated.

Maybe metric users do use fractions and I just don’t hear about it. Is that table one and a quarter meters high?


We use millimetres. That table is 1250mm high.

If you're cutting it yourself, a precision of 1mm is finer than your saw blade or pencil line anyway, so it's plenty enough.

When I hear anything past about 1/8th of an inch my brain shuts down, and I give up.


My argument is a mm is too fine in that situation. 1/8ths and 1/16ths are ideal when working at those scales.

In reality I use both systems all the time. It’s situational.

> When I hear anything past about 1/8th of an inch my brain shuts down, and I give up.

Realistically, same. 32nds don’t get used outside of some specialty wrenches. 16ths are a practical limit where other scales start to make more sense. Probably millimeters.


So 3 mm is a weird measure but 1/8 of an inch is just perfect? You are like those guys who say that Fahrenheit is better because it feels "more natural and obvious"

I don’t understand why it has to be one or the other? Working in fractions is nice sometimes. Inches are a useful size for some situations. I find it easier to say that’s three eighths than 9mm because my ruler doesn’t have different marks for the factors of each mm mark.

I use both systems.

I do prefer Fahrenheit for HVAC (and weather) because it’s higher resolution and has reasonable values at human scales. Thermostats that lack half-degrees-c are never quite right IMO.


> I do prefer Fahrenheit for HVAC (and weather) because it’s higher resolution and has reasonable values at human scales.

So you are one of those, lol. There is nothing "less human" about 25 C than, say, 72 F. Nothing; it just happens to be the scale you are used to. Both are arbitrary.

> Fahrenheit for HVAC (and weather) because it’s higher resolution

99.99% of thermostats and thermometers in C have at least 1 decimal place. At usual "human temperatures" the difference in resolution between the scales is less than 2x, so even assuming only integer values, I am willing to bet that in a double-blind test you cannot differentiate 68F vs 69F in a statistically significant way.

> I find it easier to say that’s three eighths than 9mm

Just because you are used to it. Fractions are more complicated than integers, as every elementary school program knows.

So to summarize: the problem is not with the magnitude of the units, which is arbitrary (a degree F and inches are not more human, logical, or normal than a degree C or a cm); the problem is with the convoluted way the imperial system handles multiples and submultiples of the base unit.


“Human scales” meaning temperatures that won’t burn my skin or give me frostbite. 70 is nice. 60 is cool. 50 is cold. 40 is really cold. 80 is hot. 90 is really hot. 100 is potentially dangerously hot.

I guess 20.5 is nice, 15.5 is cool, 10 is cold, 4.5 is really cold, 26.5 is hot, 32 is really hot and 37.7 is dangerously hot. It’s fine if you are used to it but I don’t really see a benefit.

I was in a hotel room in Japan that only had whole unit adjustments for the A/C. To get 20.5C I had to switch to Fahrenheit. I guess I was unlucky.

I find distances in metric and imperial perfectly usable and use both regularly.

As outlined in detail elsewhere in the thread there are advantages to working in fractions in some situations. Specifically when using a ruler or tape measure with different markings for 1/2, 1/4, 1/8 and 1/16. There’s no reason that has to be unique to inches, it just works out well in some cases.


> I guess 20.5 is nice, 15.5 is cool, 10 is cold, 4.5 is really cold, 26.5 is hot, 32 is really hot and 37.7 is dangerously hot.

Or, you know, 20, 15, 5, 30 and 40 instead of the arbitrary decimals you chose to use to prove your point


Sure, you can pick even numbers in either scale that are awkward decimals in the other. I just prefer the ten degree bands of the Fahrenheit scale for these ranges.

> I just prefer the ten degree bands of the Fahrenheit scale for these ranges.

To the identical 5 degrees range of the Celsius scale ?


It's not really identical though. Like I said, Fahrenheit is higher resolution at these scales so that is an advantage. It doesn't mean everyone should convert to F. Just that both systems have benefits. If I changed my perception of the world to C I wouldn't actually gain anything, personally, in the context of weather and HVAC.

If I need to take measurements while boiling water or making ice then I would probably use C.


It's exactly the same. The perception you have of Fahrenheit is exactly the same as anyone using Celsius has, with multiples of 5 instead of 10.

Five is not equal to ten so no, it is not the same. That's the entire point.

Right.

Interesting comment on Fahrenheit, as I would say it has too much resolution for day to day use. A nice sunny day is in "the low 70s". A cold winter day is in "the high 20s". There is too much precision in the units to give an exact numeric value, so we round it to low/mid/high. That implies that the general unit we should be using is somewhere around 3 times larger than Fahrenheit degrees, because that is the size of the unit we use in speech anyways.

Yeah it's personal opinion. Either scale works. Nobody gives weather reports to the public in kelvin.

People often say something is 1 and a half meters long. I don't understand how people can work with inch measurements. How do you divide 7"3/8 by 5? This seems a major pain.

One and a half seems natural. What about something like a quarter meter? I guess that just becomes some number of centimeters?

> How do you divide 7"3/8 by 5?

Same way I divide 4.7625 cm by 5. With a calculator.


> I guess that just becomes some number of centimeters?

25, yes. It's not too hard to do the math.

> Same way I divide 4.7625 cm by 5. With a calculator.

That's roughly 0.95 right by intuition, but (7"3/8) / 5 doesn't come easy to me.


You multiply 7 by 8, add 3, and put that number over 8 times 5, resulting in 59/40, which is 1.475", roughly 1.5".
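Python's Fraction type does exactly this bookkeeping, for anyone who wants to check the arithmetic:

```python
from fractions import Fraction

length = 7 + Fraction(3, 8)  # 7 3/8" as an exact rational: 59/8
part = length / 5            # divide into five equal pieces
print(part)                  # 59/40
print(float(part))           # 1.475, i.e. just under 1 1/2"
```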

There's something just right about 1/16th of an inch. About the same as a millimeter, and easy to do math with... it is weird though: 1/8th of a centimeter is hard to conceive, for me anyway.

When more precision is needed, so easy to go to the 32nd


All you're saying is that inches are what you're used to. Being in the UK I am familiar with both inches and mm, and mm are far easier to work with than 16ths of an inch.

If working with simple units of ten is beneficial then every mission should redefine units in terms of expected velocities and vehicle size so they are optimized for the actual calculations at hand.

That's not realistic, obviously, so we just pick one. The units in the system are arbitrary, really.

In reality regardless of the system you choose every calculation is going to end up with fractions of something. You aren't just going to do it in your head.


Physicists are quite fond of redefining units so that the constants they care about are all just 1. (https://en.wikipedia.org/wiki/Natural_units).

For example, you could define mars units where the gravitational acceleration on mars is 1. Now your velocity in freefall is just equal to the time you've been freefalling! You don't even have to do a calculation!

(note: Don't actually do this. Gravitational acceleration isn't a constant when you're doing orbital mechanics.)
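As a toy sketch of the idea (the Mars gravity value is approximate; everything else here is invented for illustration):

```python
# "Mars units": choose units so Mars surface gravity g = 1. Keeping the
# second as the time unit, the velocity unit becomes one g_mars-second.
G_MARS = 3.72  # m/s^2, approximate Mars surface gravity

def freefall_speed(t: float) -> float:
    # In these units, speed after t seconds of freefall is numerically t.
    return t

def speed_to_si(v_mars_units: float) -> float:
    # Reattach the constant only at the end, when SI output is wanted.
    return v_mars_units * G_MARS

print(freefall_speed(10))  # 10 -- no calculation needed
print(speed_to_si(10))     # ~37.2 m/s, ignoring drag
```

This is the same trick physicists use with natural units: set the constants to 1 while deriving, then restore them by dimensional analysis at the end.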


Honestly I would expect something like that to be happening at a hardware level. The number of bits in a memory address for the ground sensing radar is very interesting. Or the algorithm to determine vehicle acceleration given the voltage reading of a solid-state sensor vs the baseline. The metric system vs the imperial system is not an interesting distinction in these contexts.

The main convenience of it is when deriving formulas, rather than when applying them. With all the constants set to 1, they don't need to be tracked throughout the formula, and can be put back in at the very end by looking at the units. Sprinkle the appropriate (hbar*c) or (mu0*epsilon0) at the end, and then you get your constants back.

It does matter. See the errors caused in the past by using the imperial system. The metric system has a number of advantages.

The errors were not caused by the imperial system. The errors were caused by using the imperial system and the metric system. Specifically in expecting one system and getting the other.

The thing is, though, that you have these conversions even within the imperial system (https://en.m.wikipedia.org/wiki/Imperial_units#/media/File%3...) but not within the metric system.

My experience with U.S. students is that they are having a much harder time making sense of the imperial system (that they are used to) than doing problems in metric, even though they don’t use it in everyday life.


Ok, so?

First off, you linked to a list of english measures which are not used in the US. Nobody uses fathoms or barleycorns.

Here is the list of actual US customary units: https://en.m.wikipedia.org/wiki/United_States_customary_unit...

Second, none of that is relevant to landing on Mars.

The only problem space where metric has an advantage is in converting between meters, kilometers and millimeters.

That’s great, and it’s easy to learn. But it doesn’t suddenly make all problems of distance easier to solve.

If I am traveling toward Mars at 47 meters per second it doesn’t help me to know that is also .047km per second. And converting to kilometers per hour involves using base 60 twice anyway because metric time is unwieldy.

In reality none of your measurements are going to be nice round numbers. Mentally converting from meters to km might be nice sometimes but it’s essentially a party trick.

It won’t help the lander make decisions. The hardware doesn’t inherently work in base 10.

Does NASA mix meters and kilometers? Isn’t that the same problem that destroyed the Mars Climate Orbiter?

The fact is the units are irrelevant beyond just being defined and used consistently.

Also, I can’t think of a situation where I need to convert miles to feet. My bike ride is six miles, I’m never going to express that in feet. If I need to describe the size of a thing in a room I will probably use feet, maybe inches if it is small. Probably not feet and inches. I wouldn’t use miles at all. Easy conversion between those units just isn’t a problem that comes up. It’s more important to me to have reasonably sized units and that the person I am communicating with understands them.


Without looking could you tell me..

How many pounds does a cubic foot of water weigh?

How many BTUs do you need to heat 10x10x3 ft water pool 20 degrees F?

How much work in ft-lb is done by gravity when a 10 oz mass drops from 19 yards?

How many HP are needed to raise 2400 lbs 74 inches in 30 sec?

It is obvious you have zero experience doing back-of-the-envelope calculations for scientific or engineering purposes. It is no contest between the metric and the imperial systems.
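To make the contrast concrete, the pool question really is mental math in metric, because a litre of water is a kilogram and water's specific heat is roughly 4.2 kJ/(kg·K). A hedged sketch (metric pool dimensions chosen for illustration, not a conversion of the imperial pool above):

```python
# Energy to heat a pool of water, metric style: litres == kilograms for
# water, times specific heat (~4.186 kJ/kg/K), times the temperature rise.
WATER_SPECIFIC_HEAT_KJ = 4.186  # kJ per kg per kelvin (or per degree C)

def pool_heating_kj(volume_m3: float, delta_c: float) -> float:
    kg_of_water = volume_m3 * 1000  # 1 m^3 = 1000 L = 1000 kg of water
    return kg_of_water * WATER_SPECIFIC_HEAT_KJ * delta_c

# A 3 m x 3 m x 1 m pool warmed by 10 C:
print(pool_heating_kj(9, 10))  # ~376,740 kJ; head-math approximation 9000*42
```

The imperial version of the same estimate needs the density of water in lb/ft^3 and the definition of the BTU before any arithmetic can start.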


I’d love to have a box of JPL envelopes so I can do calculations like a real engineer.

> If I am traveling toward Mars at 47 meters per second it doesn’t help me to know that is also .047km per second.

Yes it does. It means you can immediately sanity-check your numbers even if you don't have a good sense of what meters and kilometers are, because you have that base/kilo relationship.

> My bike ride is six miles, I’m never going to express that in feet.

You can eyeball how fast you're going in feet per second and have a rough idea of how long your ride is going to take. Or rather you could if you had any idea of how long your ride was in feet. There are lots of little everyday things that just become much easier.


I’m not sure how that sanity check works, can you explain? Do you mean checking conversion between meters and kilometers? Because sure, that’s easier but you could just do everything in meters instead and not run the risk of crashing a spacecraft because of unnecessary conversions or bad assumptions.

I estimate my bike ride progress in landmarks and time. Not feet per second. Did I get to the boat ramp in 20minutes? Better speed up and get to the park by 30.


> Do you mean checking conversion between meters and kilometers? Because sure, that’s easier but you could just do everything in meters instead and not run the risk of crashing a spacecraft because of unnecessary conversions or bad assumptions.

But you'll have small distances and large distances and pieces from external sources who use measurements on a scale that makes sense to them. You can make your external sources do conversions themselves, but that's just moving the problem around. There will usually end up being a point, or probably several points, where you have to relate a small distance to a large distance, and wherever that happens, a human sanity check is a help.

> I estimate my bike ride progress in landmarks and time. Not feet per second. Did I get to the boat ramp in 20minutes? Better speed up and get to the park by 30.

Precisely - you have no sense of the relation between your speed and how far you can go, because you're using a terrible measurement system, and you don't even notice how that's robbing you of the ability to develop useful intuitions.


My bicycle doesn’t even have a speedometer so I’m not sure how the metric system is supposed to expand my world. I’m happy looking around and glancing at my watch.

Miles per hour is literally a measure of distance over time. If I wanted to use my GPS I could very easily determine how far I can go in a given amount of time. I can do this equally well in the metric or imperial systems, without converting to feet or meters.


Do you never ride somewhere where you don't know how far apart everything is beforehand? I can eyeball 100/200/300m and add those up into km. I could probably learn to eyeball 100/200/300 yards, but forget relating that to miles.

Yes I understand. Imperial is awesome because you can divide a foot by exactly 2, 3, 4 , and 6, which of course, is the main problem everyone has every day. Metric on the other hand sucks because none of my measurements will ever be nice round numbers.

The hardware doesn’t think in base 10, but having more than that in imperial makes it better?

Your document lists 12 mass units alone. I rest my case, what could possibly be more logical, convenient, and need less conversion.

If communication was your major goal, then the system that is used by 7.3 billion people on this planet would be your choice.


> Yes I understand. Imperial is awesome because you can divide a foot by exactly 2, 3, 4 , and 6, which of course, is the main problem everyone has every day.

Great, we agree.

> Metric on the other hand sucks because none of my measurements will ever be nice round numbers.

Depends on the situation. Metric units can be useful.

> The hardware doesn’t think in base 10, but having more than that in imperial makes it better?

No, it means neither system has an advantage so just pick one. Or invent a new one that allows better hardware utilization.

> Your document lists 12 mass units alone. I rest my case, what could possibly be more logical, convenient, and need less conversion.

I don’t convert, I just pick the unit that fits the problem.

> If communication was your major goal, then the system that is used by 7.3 billion people on this planet would be your choice.

Yeah I use the metric system all the time. Just like NASA.


> > The hardware doesn’t think in base 10, but having more than that in imperial makes it better?

> No, it means neither system has an advantage so just pick one. Or invent a new one that allows better hardware utilization.

But one of the systems does have an advantage because it stays in base 10, whereas the other doesn't.

> > Your document lists 12 mass units alone. I rest my case, what could possibly be more logical, convenient, and need less conversion.

> I don’t convert, I just pick the unit that fits the problem.

But you can't if you just use the 'intuitive unit', and that's the whole problem. How would you measure the amount of liquid fuel in, say, the small tank for an attitude control thruster of some probe? How does that add to the overall mass of the whole probe? Or to the force you then need to accelerate it by a certain amount? And now compared to the whole launcher?

In which units do you measure everything going on in a small wind tunnel model, and how do you compare that with the real thing?

Under which conditions do you go from fluid ounces to ounces to cups to pints to quarts to gallons (also note that, again, you not only switch units but bases)?

>Yeah I use the metric system all the time. Just like NASA.

Good for you, it solves all the problems.


> But one of the systems does have an advantage because it stays in base 10, whereas the other doesn't.

That's a benefit to humans, not to hardware, which was the context in which I was speaking.

> But you can't if you just use the `intuitive unit', and that's the whole problem. How would you measure the amount of liquid fuel in, say, the small tank for an attitude control thruster of some probe? How does that add to the overall mass of the whole probe? Or to the force you then need to accelerate it by a certain amount? And now compared to the whole launcher?

Honestly? I'd probably measure it in volts. That's what the hardware is doing after all. That's my point, it doesn't help the computer to convert to base 10 and do calculations that way. Fuel level is measured in volts using binary. For a human something like grams probably makes more sense so sure, display it in those units. But that's a conversion.

> In which units do you measure everything going on in a small wind tunnel model, and how do you compare that with the real thing?

Again, volts on strain sensors. Maybe analog or maybe binary, in newtons. Again, the hardware doesn't think in units humans prefer. There has to be a conversion that doesn't use simple in-your-head math.

> Under which conditions do you go from fluid ounces to ounces to cups to pints to quarts to gallons (also note that, again, you not only switch units but bases)?

Cups, pints, quarts and gallons are all based on the ounce and powers of two. A gallon is 128oz, a half gallon is 64oz, a quart is 1/2 of a half gallon (or a quarter gallon) or 32oz (also, approximately a liter). A pint is half a quart or 1/8th of a gallon or 16oz, a cup is half a pint or 1/16th of a gallon or 8 oz. These fractional scales are really handy for converting between units in some situations. The unit fits the task at hand or you can trivially double or halve the size of the unit if needed. It's the same fractional scale and math used with the inch.
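The doubling ladder described above can be sketched in a few lines of Python (a toy illustration; the unit table is the standard US customary one):

```python
# US customary liquid volume units, each expressed in fluid ounces.
UNITS_IN_OZ = {
    "cup": 8,
    "pint": 16,
    "quart": 32,
    "half gallon": 64,
    "gallon": 128,
}

def to_ounces(amount, unit):
    """Convert an amount in the given unit to fluid ounces."""
    return amount * UNITS_IN_OZ[unit]

# Each step up the ladder doubles:
assert to_ounces(1, "gallon") == 2 * to_ounces(1, "half gallon")
assert to_ounces(1, "quart") == 2 * to_ounces(1, "pint") == 4 * to_ounces(1, "cup")

# On the metric side, changing units is a decimal shift
# (26357 ft to miles, by contrast, means dividing by 5280):
assert 26357 / 1000 == 26.357  # 26357 m -> 26.357 km
```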


Frankly, I can't even tell if you're just being sarcastic at this point.

I'm not being sarcastic.

I have gone much farther here than I normally would. I don't think I can constructively explain this to you any further.


I just want to add: it's quite common in carpentry to work with 120 cm base wood, which divides just as nicely. And even then it's easier to convert when moving to bigger or smaller units.

>The errors were caused by using the imperial system and the metric system

Using one universally accepted system is the core idea behind the metric system. It now looks like a competition between two equal systems, but historically it was a competition between the ideas 'we should have one universal system' and 'every country/area can stay on its local system'. All the other legacy local systems (outside US customary) have simply disappeared.


Quick, how many miles is 26357 feet?

With metric it's a matter of shifting the decimal.

How much is a sixteenth of an inch anyway?


Yes, conversion between mm, cm, m and km is easy. What's your point? If something is miles away why do I care about it in terms of feet? How many meters is the sun from here? How many km is 1/3 of an AU? How many seconds does it take light to go a meter in vacuum?

A 16th is half an 8th, and twice as much as a 32nd. I.e. 1/2^4, 1/2^3 and 1/2^5, respectively.


Actually, NASA uses the metric system internally. Imperial units are probably used for the general public's convenience.

Considering that one of NASA's roles is to inspire young people to enter STEM I think it would be important to promote metric as much as possible.

I don't see how the choice here can promote STEM.

Do you think kids find it sexy to talk about meters, kilograms and degrees Celsius rather than feet, miles, pounds and Fahrenheit?

I can't fault them for choosing what is more understandable to the target audience.


As another commenter said, NASA uses metric.

During the stream, you can hear the various teams giving measurements in metric, whilst the media gave coverage in imperial.

It's a pretty interesting video from that perspective, as you can hear the two "realities" being translated for the intended audience.


> And read about how it will use Terrain Relative Navigation to find a safe landing spot:

So basically TERCOM from cruise missiles, but used on spacecraft? All you need is a radar contour map of the area and it can automate its way to the endzone.

https://en.wikipedia.org/wiki/TERCOM


Perseverance's Terrain Relative Navigation uses a camera system and generates a full 3D position fix, but the idea is similar.

Not at all.

Any idea when they will start experimentation? I want to find microbial life!

I don't know in general, but JPL published a video [0] yesterday with three interviews. One of the systems engineers for the MOXIE (atmospheric oxygen separation) unit said they will wait several weeks after landing before their first experiment. Scientific American has published a timeline that seems to corroborate that [1].

[0] https://youtu.be/TUd604rBR6I?t=643

[1] https://www.scientificamerican.com/article/the-first-100-day...


What is it about that that excites you so much?

Is it the idea that life could originate elsewhere and that there might really be aliens?

Or is it the idea that Mars could support some sort of colony?

Or the hope of completely novel microbiology?


Yes.

Any one of these things would be a massive boon to our understanding of life throughout the solar system and broader universe, right down even to here on Earth. All three of them would arguably mark a new era in Earth's history.


we are very screwed if they find life on Mars. It means life is incredibly common and thus the Great Filter theory is true and we only have a few years left as a species most likely.

I find it extremely hard to believe you could kill every human being on earth at this point. We’ve reached critical mass, we aren’t going anywhere. When we had that few-thousand-individual population bottleneck in the past was when it was dicey. What sort of event could kill every human and end our species? I can only think of planet-wide extinction events like massive asteroid impacts that sterilized the whole earth, and we haven't had one of those in billions of years. Call me too optimistic, but I think humans are too resourceful. Some of us would survive anything smaller.

I wonder what a huge Carrington solar storm would do to humanity. If electricity went out everywhere, transformers burned up all over, electronics fried. If this caused transport failures, mass starvation could follow. I really hope a severe solar storm would not be as bad as that and hopefully someone could enlighten me on this.

> If this caused transport failures, mass starvation could follow.

The good news is that a substantial chunk of the world's cargo transportation runs on diesel (or other combustion setups similarly not reliant on electronics), so in a pinch it could probably keep going. Same with agricultural machinery. Might need to replace or refurbish some ECUs, but I'm sure there are enough clever mechanics out there willing and able to bypass those in an emergency that we'd be back up and running pretty quick on that front.

ICE 1, electric 0 ;)

It's refrigeration that'd have me more concerned, since pretty much all modern refrigeration is electrically powered (last I checked). Diesel generators might come in clutch there, assuming the refrigeration units themselves don't rely on any fancy electronics.


>I wonder what a huge Carrington solar storm would do to humanity

Worst case it sets us back to 1870ish, maybe. Depends on how fast things go to crap vs how fast things can be rebuilt.

Likely case you'd basically get a "purge", because society as we know it can't keep on rolling with the kind of economic breakdown something like that would cause, so there'd be a lot of dying in the interim. But if you don't starve or get shot in the first 6 months you're probably good, with the very old, very young and unproductive bearing the brunt of it (same as every other disaster). It would be like the black death, but global and all at once. The balance of power globally would definitely be altered in unforeseeable ways, but the overall net result is things would bounce back hard.


Killing literally every single human being is not easy. Sure, killing off half of humanity is pretty easy to conceive, but to kill all of humanity it takes a lot more work.

Luckily, we have great minds working on this problem: https://www.appliedeschatology.com/

There have been at least five mass extinction events in the last 500 million years. The most recent one wiped out all non-avian dinosaurs, after they had dominated the earth for 100 million years. Tool-using apes with language have been around for less than 5 million. I think it’s far too early to say we’ll survive the next extinction event, or even make it that far before diverging into new species.

> When we had that few thousand individuals population bottleneck in the past was when it was dicey.

Yeah, but those individuals were presumably all in pretty close proximity to one another. If we were left with a few thousand individuals across the entire range of the human-inhabited Earth, we'd have one heck of a time continuing as a species.

In any case, the risk of an extinction event on Earth is exactly why I believe space colonization needs to be Priority Zero for humanity, from two different angles:

1. Living beyond Earth means that we as a species are that much more resilient against a literal-Earth-shattering catastrophe (and if we can get the bulk of Earth's current/future population off of Earth, then we might very well be able to avoid a couple different plausible extinction events).

2. If we can colonize entirely inhospitable worlds like Mars or the Moon (or my votes, Ceres, Venus, and Enceladus), then "colonizing" Earth is easy-peasy-lemon-squeezy even if it does become Venus 2: Greenhouse Boogaloo.


Disagree. If there is a filter at all then it could easily be that we’ve already passed it. Maybe the filter is the formation of multicellular life, for example. Also, Earth and Mars have exchanged a lot of material. If we find Mars life, it would not at all be surprising to learn it is related to Earth life.

It's hard to imagine that the change from unicellular life to multicellular life is a great filter. Even single cell life has evolved to crazy complexity that blurs the lines between single and multicellular life.

> If we find Mars life, it would not at all be surprising to learn it is related to Earth life.

If it's DNA/RNA based, we might actually be able to determine the relationship and whether that's true or not.


Earth is pretty special. We’ve got a big ol’ moon (seriously, the Moon is huge for a planet our size), Jupiter running interference for us with its massive gravity and incomprehensible magnetic field, our host star is very polite, we’ve even got a magnetic field AND ozone layer on the planet itself. Not to mention it’s kept life going for 30% of the age of the universe. It’s a good CV for Great Filter applications.

> and thus the Great Filter theory is true

Or we're just ahead of the curve.


Some interesting reasons: the proverbial "2nd genesis", panspermia possibilities of our own planet, and answering lots of questions on formation of life on ours and any other planet we might encounter.


Something I don't understand: when they say "X is 1 minute from happening", does that mean it's really 1 minute from happening or does that mean "in 1 minute we'll receive the signal that X has happened"?

Maybe my question makes more sense in the case of "X is happening right now", because then I should either understand "we infer that X should have happened right about now" or "we have confirmed via signal that X has happened", and that's a big big difference.

I know in some cases they explicitly say the latter, so I guess my real real question is, do they just keep the communication delay implied in all countdowns & references in discussion, to avoid confusion?

(ETA: No need to let me know about simultaneity problems in relativity — earth and mars are, relative to c and to macroscopic time scales, essentially not moving relative to each other AFAIK, so that simultaneity is essentially well-defined. My question was about a much more boring classical-universe problem.)


Yes, those are all Earth Receive Time. That is, when they were saying that e.g. entry interface was two minutes away, in reality the rover was already sitting on the surface and we were just waiting for the radio signal to get here.

> Earth Receive Time

Ah great, that's a great phrase to make everything clear and provide a kind of "frame of reference" to think & communicate in. Always need these abstractions.


The local Czech stream (20k viewers!) I watched went over the events in real time and then commented how events were happening as signals were received on Earth - with the final confirmation of successful landing coming first via Twitter, no less! :)

Still a very nice yet nerve-wracking way to do it - you know the lander is on Mars now. But is it safely on the ground, or is there a third Schiaparelli crater now? You don't know! A huge relief in the end. :)


But aren’t those two events essentially simultaneous in the relativistic sense? That is by some definitions of “simultaneous”?

In one sense, light experiences no time during travel, so anytime you are hit by radiation (like from a star) there is a frame of reference in which the event was instantaneous.

On the other hand, if you were on Earth and I was in between Earth and Mars, I would receive the data more quickly than you, and I could even watch it whiz by me on its way to you. The thing about relativity is that it’s... relative!


Waiting for the physicist in the room to point out: there is no such thing as simultaneity!

:)


Simultaneity is at least as well defined as clock time. Both clock time and the velocity of your reference frame can have arbitrary constants added to them to yield equally valid coordinate systems. So "it's not really simultaneous" is analogous to "it's not really 6:30 PM."

Well, it's 6:30 PM somewhere.

lifts glass


From The Hitchhiker's Guide to the Galaxy: "Time is an illusion. Lunchtime doubly so."

Depends what type of physicist you ask.

According to the energy-time uncertainty principle we don't even know when exactly the RF waves that transmitted information hit the receiver on Earth either.


In the absence of a physicist, I suppose that a software engineer in the room might do. After all, the counterintuitive consequences of relativity have their counterparts in counterintuitive effects in distributed systems and concurrent programming. In both cases, the core issue that misleads our intuition is the lack of a shared global clock that would impose a total ordering [1] on events/reads/writes/etc. Instead, events in both situations are only ordered partially [2]. In relativity the ordering is determined by the speed of light, in distributed systems the ordering is determined by what messages have been exchanged by two nodes and in concurrent programming reads and writes are ordered by synchronization actions such as lock acquisition and release, memory barriers etc (c.f. the happens-before relationship in JMM [3] and other memory models).

See for example [4] and [5].

[1]: https://en.wikipedia.org/wiki/Total_order

[2]: https://en.wikipedia.org/wiki/Partially_ordered_set

[3]: https://en.wikipedia.org/wiki/Java_memory_model

[4]: https://www.youtube.com/watch?v=UYZIHP120go

[5]: https://www.microsoft.com/en-us/research/publication/time-cl...
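As a minimal sketch of that partial-ordering idea (the class and names here are mine, not from the linked talks), Lamport clocks assign timestamps that respect happens-before without any shared global clock:

```python
class LamportClock:
    """Logical clock whose timestamps respect the happens-before order."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Any local event advances the clock.
        self.time += 1
        return self.time

    def send(self):
        # Sending is a local event; the timestamp travels with the message.
        return self.tick()

    def receive(self, msg_time):
        # On receipt, jump past the sender's timestamp, so the send is
        # always ordered before the receive.
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two nodes exchanging one message:
a, b = LamportClock(), LamportClock()
t_send = a.send()           # a's clock: 1
b.tick()                    # b's clock: 1 (concurrent with the send)
t_recv = b.receive(t_send)  # b's clock: max(1, 1) + 1 = 2
assert t_send < t_recv      # the send happens-before the receive
```

Note the analogy only runs one way: causally related events get ordered timestamps, but concurrent events (like `a.send()` and `b.tick()` above) can end up with equal or arbitrary ones, just as relativity refuses to order spacelike-separated events.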


S/he already did. We just don't know yet.

You can just say "they".

(Not trying to be judgy. People just seem to forget about the gender neutral use of "they".)


I'm not a native speaker but the problem I have with it is that it has the wrong numerus. Although I have done so in papers recently, because it seems to be a trend, sentences like "If a person chose option B, they were categorized as a cautious assessor" seem ungrammatical to me. (In this case it's easy to reformulate the sentence in plural and simpler, but that's not always the case and I hope you get the point.)

If the person is hypothetical or not fleshed-out and without a gender, like your example, they is fine and I don't even notice. But if the person is known, the use of they catches me off guard every time. I recently read a book by Brandon Sanderson that had aliens on another planet with a different gender system, so Sanderson just used they to refer to those aliens even though the characters were obviously either feminine or masculine. It completely broke the illusion of the story and was a complete turn-off. I always notice in such cases, but for some reason some people say it's totally standard English.

Ann Leckie has written a great sci-fi series, Ancillary Justice/Ancillary Sword/Ancillary Mercy, where the main character assumes everyone is a she and keeps referring to them as that all the way through (though it becomes fairly obvious to the reader which are hes and shes).

I found it actually fairly annoying at first, but quickly got over it because it is related to the plot. Same for Sanderson, I've read all his books and am not even sure now which one you're talking about!

Sometimes we need a nudge to notice the unconscious bias around us, and it's often uncomfortable.


It was Starsight (Skyward series, book 2). The diones are apparently genderless or have unknown genders, and reproduce by merging into a single being for a while so they can "try out" the adult version of the kid they're going to have. If they don't like the kid they can reject that version and try again (eugenics), otherwise the actual kid is born from an egg or something.

The whole genderless thing was completely irrelevant to the plot—the diones could have been male or female and little else would have changed. It's not that it made me uncomfortable, but it was an unnecessary distraction.

Anyway, the book was a 2/5 because it threw almost everything from book 1 away, had way too many characters and places, ended on a serious cliffhanger, and the fictional science was extremely far-fetched. Meanwhile, book 1 was fantastic (a full 1/1 in my opinion) was focused on a small cast of characters on a small set, had great character development, and resolved nicely.


> > the problem I have with it is that it has the wrong numerus.

Not really: https://www.merriam-webster.com/words-at-play/singular-nonbi...


Why is it acceptable to use the wrong pronoun, such as "they", for someone who chooses the pronoun "she" or "he"?

> used to refer to a person of unspecified gender.

How is this offensive to anyone??


For the same reason referring to anyone with the wrong pronouns is, I would assume. Isn't that offensive? I didn't realize this was up for debate in 2021.

But you're not referring to someone with the wrong pronoun, but using a gender neutral one.

I'm sorry, I honestly don't understand what distinction you're trying to point out. If someone's preferred pronoun is "she", for example, and I refer to her as a "they", then that's the wrong pronoun, isn't it? That's literally the definition of "wrong", at least the definition I understand. The right pronoun is "she" and other pronouns such as "he" and "they" and "it" are, by exclusion, wrong.

But it seems like you're saying there's some kind of complex relation where sometimes people don't get to choose their own pronouns, but other people get to choose which one out of many to use based on convenience. Maybe it would help me understand if you could provide a chart relating the pronouns someone chooses with the pronouns other people are then allowed to use?


> Maybe it would help my understand if you could provide a chart relating the pronouns someone chooses with the pronouns other people are then allowed to use?

I think this is everything wrong with the world currently. Provide you with a chart, so I can justify using a gender-neutral pronoun?

https://apastyle.apa.org/blog/singular-they

This is barely more than a year old, is it already outdated? I am an asshole for daring to use “they”?


> If a person uses “she” or “he,” do not use “they” instead. Likewise, if a person uses “they,” do not switch to “he” or “she.” Use the pronouns the person uses.

This matches my own understanding; I have no idea why you are referring to this article as if it supports your bizarre crusade to misgender people.


You are aware that we are talking about the mars rover, right?

The comment that you replied to:

> You can just say "they".

Was a reply to this:

> S/he already did. We just don't know yet.

Which was talking about the mars rover.


They are not talking about a specific person, so how is "they" wrong?

There are like 500 other ways you could have said what you just said as well. Doesn’t mean the way you said it is invalid.

She both did and did not until we observe :p.

And even that is not correct. Events propagate at the speed of light - the light cone. We can predict we will receive information of something happening, but that's just a prediction about future events, regardless of the location.


> simultaneity

not in the true accurate-to-the-picosecond sense of the word, no, but the exact word "simultaneity" is used when discussing the number and density of satellites above a given latitude/longitude in the Starlink beta program. Since they're in LEO orbiting at only 550 km, the satellites above a given spot on the ground vary greatly in the not-yet-complete sparse network.

Usually related to discussions of whether a beta test customer terminal will briefly hiccup and lose connection to its default gateway, or if somebody is at a sufficiently high latitude that they can have full coverage for all 86400 seconds in a day.

https://satellitemap.space/ has a good animated visualization of this.


Came here to suggest The Order of Time by Carlo Rovelli, which explains this in such a captivating way.

I think this video explains the issue quite well in only two minutes:

https://www.youtube.com/watch?v=wteiuxyqtoM


Another one here, it is in Catalan but with subtitles in english. Minute 18 is where it is explained although I think is worth it to watch all of it

https://www.ccma.cat/tv3/alacarta/quequicom/tempus-fugit-sub...


But that only happens with sufficiently high relative velocities. Earth and Mars are effectively in the same reference frame.

Oh but there is. Just not in the same frame of reference.

It's the latter. The Earth-Mars latency at this time is something like 11 minutes, and the landing itself takes about 7 minutes, so when we on Earth first saw the craft entering atmosphere on Mars, by that time all the landing was already over, one way or another.

NASA has a good breakdown of their expected milestones at 'earth receive time' - https://www.nasa.gov/feature/jpl/nasa-s-next-mars-rover-is-r...

Apparently the latency is currently 11 minutes 22 seconds, which is somewhere near the average. It ranges from under 4 minutes to over 22 minutes depending on distance.
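Those figures are just distance divided by the speed of light; a quick back-of-the-envelope check (the AU distances are approximate):

```python
C = 299_792_458        # speed of light, m/s
AU = 149_597_870_700   # astronomical unit, m

def one_way_delay_minutes(distance_m):
    """One-way light travel time in minutes."""
    return distance_m / C / 60

# Earth-Mars distance ranges from roughly 0.38 AU (closest approach)
# to about 2.67 AU (near conjunction):
print(round(one_way_delay_minutes(0.38 * AU), 1))  # -> 3.2
print(round(one_way_delay_minutes(2.67 * AU), 1))  # -> 22.2
```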


It looked like they were quoting time as it would appear for an earth-local observer (i.e. a million light-year away supernova that showed up five minutes ago happened "five minutes ago", not 1 million years and five minutes ago).

With your personal light cone, it's fine to equate "now" with what you see in the moment. It just has to be clear what you mean for situations where communication might be ambiguous. If you have a person on mars, be sure to be precise what you mean when you tell them to do something in five minutes, when they receive the message they won't know if you mean five minutes after they receive the message or anywhere between 17 minutes before and 2 minutes after they receive the message.

When you get into relativistic speeds (and especially very short time intervals), nobody can even agree on when something "actually" happened, different observers have different opinions about what happens when even after you account for light travel time.


Relative time (in five minutes) without relativistic speeds is actually uniform. There is no observable difference to either participant.

And there is no concept of now in a significantly distant location. Related video: https://youtu.be/pTn6Ewhb27k


Doesn’t relativity tell us it doesn’t matter?

There's a lag time in communication due to distance. I don't see what that has to do with relativity.

PBS Space Time recently explained what the present time means within general relativity[1]. As I understand it... it matters in this context.

[1]: https://www.youtube.com/watch?v=EagNUvNfsUI


True fun begins when you consider General Relativity (which takes into account gravity and acceleration). From what I heard there is no unique definition of simultaneity there; you can define it in different ways.

It depends on what coordinate system you're using. Simultaneity is ill defined in relativity. There's only future, past and "spacelike-separated" (neither past nor future). When they say, "X is 1 minute from happening," it's actually neither in the past nor the future. It's currently spacelike-separated, but in 1 minute, it will be in our past.

Yes, yes, you have shown you know what relativity is. But the relative velocity of earth and mars — which I can't convince Wolfram Alpha to tell me, but it's got to be on the order of their orbital velocity so let's say 5x10^4 mph — is a tiny tiny fraction of c so their inertial reference frames are essentially identical. So sitting in our reference frame, we can make inferences about what's happening "now" on mars,such that these inferences are consistent (to within that tiny fraction of c) with all of our current and future observations in this reference frame; i.e., consistent with a classical(+ finite speed of light) model of the universe. Which is why I left this out of my question and only asked about the consequences of a finite speed of light.

Put another way, simultaneity is perfectly well defined in a single inertial reference frame, and for purposes of my question, earth and mars can be considered to be relatively motionless.
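That "essentially motionless" claim checks out numerically (a rough sanity check; the orbital speeds are approximate textbook values, and summing them overstates the true relative speed):

```python
import math

C = 299_792_458.0    # speed of light, m/s
V_EARTH = 29_780.0   # Earth's heliocentric orbital speed, m/s (approx.)
V_MARS = 24_070.0    # Mars's heliocentric orbital speed, m/s (approx.)

# Upper bound on the relative speed (as if they moved in opposite
# directions, which two prograde orbits never do, so this overstates it):
v = V_EARTH + V_MARS
gamma = 1 / math.sqrt(1 - (v / C) ** 2)

print(f"v/c = {v / C:.1e}")            # on the order of 1e-4
print(f"gamma - 1 = {gamma - 1:.1e}")  # time dilation on the order of 1e-8
```

So even in the worst case the frames disagree by tens of nanoseconds per second, far below anything that matters on the minutes-long EDL timescale.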


Now is the part where someone brings up that, even with the low relative velocity, you haven’t accounted for the simultaneity issues we’d have if the earth collided with a black hole while the rover landing was happening.

Or the fact that mars is not as deep in the gravity well of the Sun! I wonder if they have to account for that one when programming, like, antenna aiming or something.

IIRC GPS suffers about 1 Hz of blueshift due to descending into earth's gravity. I think the velocity (Doppler) shift of the spacecraft is a way bigger factor than the gravitational shifting.

No, I don’t think you’re quite understanding what they are saying. They aren’t talking about the different speeds of earth or Mars.

Simultaneity is not “perfectly well defined in a single inertial reference frame”. That is just a convention.

If the RTT of earth to Mars is 20 minutes, then we can say that it takes us 20 minutes for our message to reach the rover, and the rover’s message arrives instantly, and that’s a consistent definition of simultaneity.

https://www.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/si...


Except that an observer located in the (hypothetical/approximate) common reference frame, but situated halfway to Mars, will report observations inconsistent with this definition (NB of course we receive their report at a time consistent with the definition, but the contents of the report are not consistent). So yes, you can play games with your definition of simultaneity, but you will win stupid prizes like observers in the same reference frame no longer agreeing about simultaneity when such a result is worse than what relativity requires.

Your link points this out. You can play these games; I don't dispute it. But it's a separate matter entirely from anything to do with relativity, as your link points out, which is itself separate from the classical problem I originally posed. So we are now two steps removed from anything relevant to the Mars rover; I guess we get a sense of pride and accomplishment?


It doesn't depend on the relative velocity of Earth and Mars. It depends on your coordinate system. You can use Schwarzschild coordinates centered on the Sun and use the time coordinate to define simultaneity, but that's an arbitrary choice.

It's a great achievement, with some really interesting work done on the landing algorithms with terrain recognition, and it seems to have worked exceptionally well.

Looking forward to the next landing in May of the Chinese rover, and to all the science these robots will produce. Also, the test of Ingenuity, the helicopter, will be very interesting to watch; that could really pave the way for a different exploration style in the future.

And finally, maybe the next transfer window will already see some Starships, that would really change everything.


Early starships got me thinking: given the high likelihood of a failed starship landing, and maybe having the ability to send one before fully engineering a payload…

What would you ballast a starship with for the practice missions?

Useful materials which might survive a RUD and aim for someplace near a likely landing zone? If you crash the parts of a milling machine, a lathe, some tooling, some assorted metals stock, and a bunch of assorted wire, well sure you just cleared out a machine shop auction, but maybe there comes a day when an early Mars colony would be thrilled to go clean up your “landing” site.


Solar cells would not handle a RUD. But just fill the entire thing with solar cells. If they want to produce methane and oxygen from CO2 on Mars they will need lots of power.

Soil? Though not sure it is prudent to potentially spread active biological material all over a pristine (eco)system.

Mars soil is usable for plants. Or should be at least. So no need to bring soil from earth.

Why not work on some very small machines that can mine and refine materials to make more of themselves?

Because "refine" really means "melt at temperatures ranging from 500 to 4000 degrees Celsius and then extract via various mechanical processes and/or using various reactants which often require gigantic plants to produce and are highly unstable".

Basically all of the history of science until 200 years ago was figuring out "mine and extract" and the course of civilization is very much linked with the price & quality of metal structures they could produce. But it's gotten so good we take it for granted.

However, that is only possible because of gigantic plants, situated in specific areas where energy is cheap, that do this at an amazing scale.

Aluminum is cheap as chips now, but it used to be more expensive than platinum (and at a much higher impurity ratio than the stuff we use for baking or for making cheap cases).

Heck, gold is "a thing" because we could purify and mold it without bringing it to a melting point and it was, for a very long time, the only metal available to us to do anything with, way before the bronze age.

And the problem with the refinement process is that you can't really "be smart" about it, reaching very high temperatures is one of those things you can't really scale down in an efficient way. You'd have to propel 100,000 tons of factory to mars in order to efficiently refine anything remotely close to the metals we had access to 100 years ago.

Which is not to touch on the mining bit, that is in itself very complicated (see how slowly and shallowly rovers are currently able to drill).

Are there workarounds for this? Maybe, I don't think anyone knows them though, they are not the kind of thing that's within easy reach. Maybe if we happen to stumble upon large reserves of bismuth or lead or gallium or mercury close to the surface of Mars, and build a whole branch of engineering around using those to build machinery... ? But my limited knowledge of geophysics and geology tells me that finding those in large amounts is very unlikely.

For reference, a home oven can reach, say, 450 degrees Celsius, and an industrial one up to 700. Those aren't enough to refine any "useful" metal (e.g. iron), and building them requires materials that were produced at 1500+ degrees.

IANAChemist/IANAMaterialScientist/IANABlacksmith though, so take with a spoon of salt.


I agree with you overall. There are some really interesting things you can do on the moon. Check out this video:

https://www.youtube.com/watch?v=9-RTBGnzNks


We don't have to be efficient, just small and reliable, no matter how slow... if there were enough small machines to turn out enough materials to build new ones faster than the failure rate, then geometric growth wins, and you can build whatever you want, eventually.

I don't think you're getting my point; consider reading again. There's a fundamental limit you will hit here; you can't just "make it slower" or "make it worse" to lower that limit.

There is a fundamental limit of power... I get that. The Perseverance Mars Rover has one experiment that requires 180 watts of power, (The Oxygen Generator experiment) and it has a 110 watt RTG powering everything. They charge up some lithium batteries during down time, and use them to make up the difference.

In the limit, if something takes 5,000 watts, you could run it for a few minutes per day with that same RTG, provided you had suitable energy storage.

Perhaps they could gather grains of material, and just sort them, one at a time, only keeping the iron rich material, or use a permanent magnet to gather ferrous material. You could sinter the grains together using a microwave or laser pulse.

The results don't need great quality, just enough tensile and compressive strength to be mechanically stable during additive or subtractive manufacture.

Lots of minds have been thinking about refining metals for a very long time, but they haven't been thinking about doing it on Mars, with limited power, and very far outside the box of normal constraints, like cost.

This is one time capitalism doesn't apply at all... and most solutions assume capitalist incentives and costs, instead of going back to first principles thinking.


It could be something very low-tech even. Just a machine that turns solar-energy + some mechanical power into say mars-dust bricks, non-stop for use in future missions? Maybe something that just keeps digging a perpetually deeper and deeper trench in a straight line so that subsequent missions don't need big drills to find out below-surface samples?

But thinking about it now, I can't envision that we here would be able to come up with something sublime/novel that a huge army of really-smart people haven't already after spending decades thinking about it. Then again, we have a lot of smart people concentrated in this forum, so who knows if a weird/silly conversation triggered by IT-minded people, acts as a catalyst for the engineer-lurkers that see it.


All of those things need to be launched from earth, that is still expensive. Then transported to Mars, expensive. Then landed on Mars, expensive.

And then you land your digger, at a cost of at least a couple hundred million, and you have a dumb robot that digs a small hole?

We should do industrial build-up, but for me that starts with solving cheap transportation from Earth to Mars first. Then you can hope to bring things like nuclear reactors, which can actually produce the power you need for significant work.

Currently we are going for science reasons, and very few people are even working on going with humans or on industrialisation.


Don’t litter Mars Elon. Stick your landing. LOL.

Bunch of 2x4s and screws/nails :D.

Water.

we playing factorio here?

Ingenuity is maybe the most interesting and coolest advance for space travel. The idea of a remote drone to explore Mars is just rad! I can totally nerd out about that!

A great video where the host visits the drone, interviews its makers, and goes over the cool technical aspects of it and its mission: https://m.youtube.com/watch?v=GhsZUZmJvaM

You may be interested in dragonfly then: https://www.nasa.gov/dragonfly


That is not Ingenuity's source code, that's the software framework used to link the various software modules. It's generic to any mission / instrument.

correct

The NASA person they have helping narrate what's going on is so genuinely happy the landing went well. It made me kinda tear up. It's infectious just how excited all these people are about this project. Also, I was a bit worried he was going to pass out. 10/10, would watch again (and probably will with my kids)

The audible whew from one of the crew members after maximum deceleration when the telemetry re-established was heart-rending. Years of work, and there's nothing anyone here can do eleven light-minutes away; it was either going to work or one of the thousands of things that had to happen correctly wasn't going to happen.

Everything happened correctly. :)


This guy? https://www.youtube.com/watch?v=gm0b_ijaYMQ&t=1h41m36s

That is Rob Manning, an absolute legend! Here is an interview with him from a few years back: https://solarsystem.nasa.gov/people/2280/rob-manning/

He also wrote this great book: https://www.amazon.com/Mars-Rover-Curiosity-Curiositys-Engin...


I love the fact that you could hear people saying things like "yes yes yes YES YES!" in the background as data came in. Like you say, very infectious

First surface photo is in too!

https://i.imgur.com/C2s1job.jpg


NASA is making the raw images of everything available:

https://mars.nasa.gov/mars2020/multimedia/raw-images/


Today's images are from Sol 0. Zero-based counting rules.

You can see what this page looked like for Curiosity at https://mars.nasa.gov/msl/multimedia/images/?page=0&per_page...

Can't wait till they start posting raw images :)


Thanks for that! I just found this especially awesome shot of Curiosity: https://mars.nasa.gov/resources/21929/curiositys-dusty-selfi...

It's very high res. You can see the holes/damage on the wheels -- Perseverance will have new wheels because of it. And also, won't have the 'morse code spelling' on the wheels either. It's amazing that this kind of damage/wear couldn't have been predicted in tests. The amount of dust that has settled on top in what appears to be predictable channels is also interesting.

Looks like they have more than 300 000 images on the raw site: https://mars.nasa.gov/msl/multimedia/raw-images/?order=sol+d...


N00b question: why is it black & white?

Other posters have pointed out that it's the hazard avoidance camera, but they haven't said why the hazard avoidance camera is black and white.

When you do computer vision, the first step you do is convert your color image into a black and white image, and run your CV algorithms on the black and white image. This is because when you're looking at objects and shapes and stuff, it's contrast that tells you where the boundaries between things are. This is true even in a human world of human objects, which tend to be many colored. It's even more true on Mars where basically everything is varying shades of orange. So having color doesn't help a whole lot, and you also have to do the additional step of converting the color image to black and white, which takes CPU power and adds latency. Remember, the purpose is hazard avoidance- latency is bad.

Additionally, color camera sensors aren't actually color sensors. They're black and white sensors. In front of every pixel on the black and white sensor is a filter that is either red, green, or blue. Pixels are grouped into sets of four, and there are two pixels with green filters, one pixel with a blue filter, and one pixel with a red filter. (Sometimes one of the green filters is omitted, giving red, green, blue, and b&w, or sometimes one of the green filters is a filter that allows IR, or something like that.) So if you have a 16MP camera, the camera has 8M green, 4M red, and 4M blue pixels. This means two things; first of all, if you just wanted a black and white image in the first place, a color sensor gives less detail than the equivalent black and white sensor, and second, you need to do additional processing to convert the raw output from the sensor into an image that's usable for anything. The additional processing adds latency.
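To make the photosite layout concrete, here's a rough numpy sketch of that RGGB grouping (function name and shapes are mine, purely illustrative; real sensor readout pipelines differ):

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer sensor: each photosite records one channel.

    rgb: (H, W, 3) array with even H and W. Returns an (H, W) single-channel
    mosaic -- what the sensor actually captures before demosaicing.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites (row 1)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites (row 2)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return mosaic
```

Half the photosites end up green, a quarter red, a quarter blue, matching the 8M/4M/4M split for a 16MP sensor.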


Just as a heads up, the HazCams on Perseverance are in fact in color (Source: https://link.springer.com/article/10.1007/s11214-020-00765-9 - "The Mars 2020 Navcams and Hazcams offer three primary improvements over MER and MSL. The first improvement is an upgrade to a detector with 3-channel, red/green/blue (RGB) color capability that will enable better contextual imaging capabilities than the previous engineering cameras, which only had a black/white capability.") Your observations are correct though - the stereo precision is important, so there was additional analysis of the stereo depth computation to make sure it wouldn't cause an issue.

Huh, I guess so. Looking over the study it looks like they had issues by looking at dirt in scoops and being unable to tell whether it's Martian dirt or a shadow.

I have a feeling I'd be the angry guy in the meeting who wouldn't accept the consensus. "but what about latency! what about the descend and landing!" shakes fist


Nah, your concerns are 100% reasonable - they just operate in a different context. On Earth, latency is king. On Mars, especially until the Primary Mission is complete, it's all about risk mitigation. Since we're light-minutes away from Earth, a few frames of latency is nothing. At the same time, you want to avoid breaking your $3B machine, which is hard to operate given the speed-of-light delay and comms limitations. Just a different set of tradeoffs. IIRC they first tested on-device deep learning for hazard avoidance on Curiosity, but don't quote me on that.

-Worked at JPL for a few years and have dozens of friends, a few in the vision system.


Thank you for the explanation. That was highly interesting. Does anyone else know if the human eye does perceive color directly? Is this at all technically possible? And if yes, why aren't we doing it with cameras?

Short answer: No. We (the majority anyway, as some are colourblind) only perceive lightness, reddish, greenish, and bluish. The brain uses the info and effectively synthesises the image in our brains.

Long answer: Colour is a very rabbithole topic but Captain Disillusion has a summary of it (https://youtu.be/FTKP0Y9MVus) and Technology Connections has a discussion (https://youtu.be/uYbdx4I7STg).


What do you mean by "directly"? Color is a human abstraction over the reception intensity of certain wavelengths of light.

What do you mean “abstraction”? The colors that I am seeing look very concrete to me. (Also, the “wavelength theory” of color perception does not explain why TV screens work.)

The human retina is composed of cells that are responsive to different wavelengths of light. Color is the word that we use to describe the subjective sensations associated with certain patterns of stimulation of those cells. There is no "yellowness" in a banana. We cannot construct an instrument capable of measuring "yellow" as such. What we can measure are the intensities of wavelengths of light.

We can notice that when people say they perceive "yellow" that the spectral intensity graph has certain patterns. This is the physical phenomenon that produces the sensation of "yellow."

Humans are not good at judging reality introspectively. We experience everything heavily filtered through a variety of lenses. Our feeling that color is "concrete" is not predictive or explanatory... we cannot build mechanisms based on it. The idea that our perception of color is a result of interactions between certain wavelengths of light and certain photosensitive tissues in our eyes is both predictive and explanatory. We can design systems that have similar types of wavelength intensity sensitivity components and measure the physical response of those systems. That's how cameras work.

We can reverse the process and take those measured wavelength intensities and re-emit them from variable-wavelength light sources and produce images. That's how you're reading what I've typed right now - the images produced by the display you're looking at were generated in this fashion.

I'm not sure what you mean by the “wavelength theory” of color perception.


> We cannot construct an instrument capable of measuring "yellow" as such.

Of course we can. We can capture the signal sent through the optical nerve and then reproduce it as a stimulus which will make the brain “see” yellow color.

Besides, humans are capable of distinguishing literally millions of colors, of which just a tiny fraction can be attributed to measuring particular wavelengths (or, more accurately, particular energies of the incident photons). In that way the eye is different from the ear (which performs a kind of Fourier analysis of the sound wave).


Well, the instrument wouldn't be measuring yellowness... it would be measuring electrical impulses that (in some individuals) correspond to the (verbally asserted) perception of "yellow". "Yellow" is not a characteristic of the world; it's a convenient label that humans apply to some bucketed sets of sensory perceptions.

I agree that there are sensory perceptions humans are capable of perceiving and labeling as colors that cannot be attributed to external physical phenomena, but those are largely artifacts of the way our brain processes signals. For example if you stare at a purple dot for some time, then look away, you'll perceive a yellow dot where there is no external set of photons corresponding to the wavelengths that normally trigger the sensation of yellow striking your retina.

This is just more explanation about how "yellowness" is a characteristic of our brains, not of the external world.

Or did you mean something other than what I'm referring to here? I think that for the vast bulk of humans, the vast bulk of the colors they perceive regularly are due to photons striking rods and cones in their eyes at various intensities, causing color sensations to occur in the brain. Do you think something else is happening?

You seem to understand how the eye works, and some neuroscience, so I don't understand how you can have the questions that you raise about whether we can build cameras that sense "color" instead of "light"


...it's complicated. Very complicated. However complicated you think it is, it's more complicated than that. Please note that I'm not an expert in human eyeball physiology, I'm just a computer programmer who's tried pretty hard to come to a better understanding of how to make computer vision better. (I've failed, fyi. Caveat emptor.)

The human eye has four basic photoreceptor types: rod cells, plus three subtypes of cone cells, short, medium, and long. The three subtypes of cone cells sense blue, green, and red light more or less directly. Medium and long cone cells, which directly detect green and red light, almost entirely overlap. [0] It is more accurate to say that long cone cells detect yellow light than it is to say they detect red light. There is a brain system which measures the difference in response between the long (red) and medium (green) cells and uses the difference to say "aha! this must be red!"

The ratio of short (blue), medium (green), and long (red (yellow)) cone cells is roughly 2%, 2/3, and 1/3. The cells in your eye which detect blue light are more or less a rounding error. The cells which detect green light are roughly twice as numerous as the cells which detect red (well, yellow) light. If you see a thing and think, "man, that's awfully blue," it's not because your eyes are telling you "hey, this thing is awfully blue". The "blue" signal is barely noticeable in the overall signal; but your brain jacks up its responsiveness to the minuscule blue signal.

One of the side effects of the completely fucked ratios between the three types of cones is that your perception of the overall brightness of a thing is mostly down to how green it is. This shows up in lots of standards; NTSC, JPEG, the whole nine yards. If you've ever implemented a conversion between RGB and any luminosity-chroma colorspace (YUV, YCbCr, YIQ, NTSC, any of them) there's a moment where you'll go "wait a minute this doesn't make any fucking sense". You look at the numbers and the luminosity channel is just... green, and you know that the other two chroma channels are quartered in resolution. And you'll think that makes no sense. But that's how it works.
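Those luma weights the parent is describing look something like this; a minimal numpy sketch using the classic BT.601 coefficients (the function name is mine):

```python
import numpy as np

# BT.601 luma coefficients: green dominates perceived brightness
WEIGHTS = np.array([0.299, 0.587, 0.114])  # R, G, B

def rgb_to_luma(rgb):
    """Collapse an (..., 3) RGB array into a single luminosity channel."""
    return rgb @ WEIGHTS

# A pure-green pixel reads roughly twice as bright as a pure-red one,
# and about five times brighter than a pure-blue one.
print(rgb_to_luma(np.array([255, 0, 0])))  # ~76
print(rgb_to_luma(np.array([0, 255, 0])))  # ~150
print(rgb_to_luma(np.array([0, 0, 255])))  # ~29
```

That lopsided weighting is the "wait a minute, the luminosity channel is just green" moment the parent describes.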

Then you'll remember that color sensors have their pixels arranged in groups of four, with two green, one red, and one blue channel. There must be some green conspiracy.

And there is. It's your brain. It's your eyeballs with 2/3 of its cone cells being green sensitive ones.

Those are your cone cells. Rod cells are entirely different. It's trivial to say well, cone cells see color, rod cells see black and white, but it's more complicated than that. Rod cells are excellent in low light conditions, cone cells not so much. Cone cells see motion very well, rod cells not so much. Cone cells can discern fine detail, rod cells do not. Rods and cones are not evenly distributed across the retina either; cone cells are densely packed in the center, rod cells are more common in peripheral vision.

Look at a colorful thing directly; take a note of how colorful it is. Now look away from it, so it's only in your peripheral vision; take a note of how colorful it is. Does it seem just as colorful? It isn't. That's your brain fucking with you. Your brain knows it's in your peripheral vision and all the colors are muted out there, so your brain exaggerates the colorfulness. Cone cells are 30 times as dense in the center of your vision as they are just outside the center of your vision. [1] That's why you can read a word directly where you're looking but it's very difficult to read elsewhere.

The reality is that your retinas give a fucking mess of bullshit to your brain, and the brain is the most incredible image processing system conceivable. It takes bullshit that makes no damn sense and -- holy shit I forgot to talk about blind spots.

Ok, so your rods and cones have a light sensitive thing, with a wire in the back, and all the wires get bundled up in the optic nerve that goes to the brain. Here's the thing: they're fucking plugged in backwards. The wires go forward, and are bundled up between your retinas and the stuff you're looking at. The big fat optic nerve therefore constitutes a large chunk of your vision where you can't see anything. Your brain just.. invents stuff where the optic nerve burrows through your retina.

Other weird stuff. If it's bright, the rods and cones send no signal, if it's dark, they send a strong signal. It's inverted. There's apparently a very good reason for this but I don't remember what it is. Also, the rods continuously produce a light sensitive substance that amplifies the light sensitivity but is destroyed in the process. It takes a long time to build up a reserve. This is why it takes time to "build up" your dark vision, and why it's so easily destroyed by lighting a cigarette. The physiology of "ow it's bright" as opposed to "it's bright" isn't just on your retinas, it's also on your eyelids and your iris, but more importantly, it's shared between your two eyes. This is why closing one eye makes it less painful when you go from a dark place to a bright place.

The point is, the study of human vision is not the study of the human eye. The study of human vision is the study of the human brain.

Much of what we do with color spaces and image compression is dictated by our stupid smart eyeballs and our stupid smart brains. Video codecs compress with 4:2:0 chroma subsampling because the brain's gonna decompress that shit better than a computer can anyway. Cameras have twice as many green sensitive pixels as blue or red pixels because the eye's resolution is much sharper in green than other colors. More advanced image and video compression schemes will try harder to account for human eye-brain physiology.

[0] https://upload.wikimedia.org/wikipedia/commons/0/04/Cone-fun...

[1] https://upload.wikimedia.org/wikipedia/commons/3/3c/Human_ph...


> If it's bright, the rods and cones send no signal, if it's dark, they send a strong signal. It's inverted. There's apparently a very good reason for this but I don't remember what it is.

The reason is to prevent light fatigue in the eyes. Ears and nose experience quick fatigue when exposed to the same stimulus for a long time. With the inverted arrangement in the eyes, you have a naturally stimulated inhibition rather than a fatigue inhibition.


I believe in purple.

After you get done exploring how we perceive colors associated with different wave lengths of light, and how nobody really knows whether these are common somehow, or unique to each of us, that sentence should bring you both a chuckle and some wonder about perception.


From the physiological standpoint, human individuals are far, far from being unique. The electrochemical reaction of a neuron in the cortex which indicates the perception of 'red' is pretty much the same in any human (and not only in humans).

Whether that subjective perception is the same remains unknown. We have no solid way to communicate any of that yet.

I am inclined to believe it is, but we do not really know.


The world is a complicated place, Hobbes.[1]

The lower "HazCams" hazard avoidance cameras (which captured those initial photos) are there to detect hazards (rocks, trenches, etc.). They are stereoscopic, lightweight, and high resolution.

My guess is that using color sensors would have either increased the 3D mapping precision or added weight/power/bandwidth requirements, or otherwise been less robust in that environment.

Those cameras were also pre-deployed for the landing phase and likely transmit more quickly due to the lower data volume. The other cameras were shielded for the landing phase.

The navigation and other cameras are in color, and I expect we'll be seeing better images shortly.

[1] This comes to mind whenever a question like that is asked: http://4.bp.blogspot.com/-CWM1zDcmWXs/TroD0VsX4WI/AAAAAAAAAV...


> My guess is that using color sensors would have either increased the 3D mapping precision or added weight/power/bandwidth requirements, or otherwise been less robust in that environment.

I think you meant to say decreased? In which case I think you would be correct! Camera pixels are made up of these things called photosites, which don't by themselves record color, only brightness. In order to record color information, the photosites are placed behind a Bayer filter[1], which effectively divides the resolution of the camera by 3, because in order to get the color of a pixel you need its red, green and blue components. Bayer filters also frequently have a small blurring filter in front of them to make sure that nearby photosites with different color filters get the information they need.

If you're looking for the highest resolution image possible, black and white is the way to go!

[1]: https://en.wikipedia.org/wiki/Bayer_filter


That's why "real" space cameras usually have color filters on a carousel before the sensor - they take 3 pictures each with different filter and BAM, color!

That way you get high resolution as well as color. You can also have some special (infrared, ultraviolet, etc.) filters on the carousel, not just RGB.


>and BAM, color!

and BAM, false color! FTFY


> I think you meant to say decreased?

I did, thank you. I think my brain had already skipped ahead to the added weight/complexity concept while my fingers were stuck on that part of the sentence.

I should probably read things after I type them...


FYI - the HazCams on Perseverance are in fact in color (this is new, they were black and white on Curiosity)! Stereo precision was a concern based on the switch to color sensors, so there was some algorithmic work done to make sure it wouldn't cause an issue. (Source: https://link.springer.com/article/10.1007/s11214-020-00765-9 - "The Mars 2020 Navcams and Hazcams offer three primary improvements over MER and MSL. The first improvement is an upgrade to a detector with 3-channel, red/green/blue (RGB) color capability that will enable better contextual imaging capabilities than the previous engineering cameras, which only had a black/white capability.")

Interesting, I didn't know that. I knew the Cachecam was color, but somehow missed that detail, despite actually seeing the camera in person at one point...

Wow, real upgrades all around compared to Curiosity!

What are they going to do next? Put a solar-powered Mars helicopter on board?? ;-)


And by "increased" I meant the "decreased" kind...

Just an enthusiast, no real answers, but here's a guess:

These are hazard cameras, designed to be inputs into the guidance algorithms on board. It might make sense for such a camera to be B/W to reduce on board processing required. There's also a glass cover on them, and a lot of dust from the landing, so that may be obscuring true color if the cameras do in fact take color images.

Also they may have just transmitted a lower quality B/W image to get something back to Earth quickly, since higher res images take longer to uplink.


It's from a hazard camera, which is not used for main photography. Better images will come soon.

Worth noting that these first pictures are sent in the first seconds after touchdown, you can even still see the dust in the air from the landing (even if it was craned down to reduce dust). It also explains the very low resolution in general, they want to get confirmation ASAP, no time for high quality high resolution images.

The low resolution and fuzz is also because they still have the lens caps on - they are of course transparent lens caps in case the explosive bolts that will release them fail. Redundancy!

This is one of the cooler things that I learned today. Could they go even further: make the caps themselves lenses+filters. Take photos. And then blow them off for new photos.

would dust stay in the air longer or shorter than on Earth?

also is it technically correct to call the Martian atmosphere "air"?


Yes, but it's not technically correct to call Martian seismic tremors "earthquakes".

https://www.imdb.com/title/tt0080745/goofs

Flash Gordon (1980) Goofs

At the very beginning of the film, Ming and his henchman are discussing "an obscure body in the SK system", which the inhabitants refer to as the planet "Earth", pronounced as if the word is completely foreign to them. However, at that moment, Ming activates a button on his console labeled "Earth Quake".

http://bobcanada92.blogspot.com/2020/10/flash-gordon-logic.h...


Star Trek calls them "quakes" I noticed

Dust falls much, much faster on Mars. The density of Mars's surface atmosphere is ~160x lower than on Earth.

Right. One "proof" advanced by Moon landing conspiracy theorists was that dust settled much faster in videos than it should if it were really in Lunar gravity.

Merriam-Webster says yes to part two:

> the mixture of invisible odorless tasteless gases (such as nitrogen and oxygen) that surrounds the earth

> also : the equivalent mix of gases on another planet

I would naively guess yes to part one but it's complicated: Mars has less gravity, much less atmospheric pressure, colder temps, and greater gravitational influence from its moons than Earth. Wikipedia says the mechanism of the planet's dust storms isn't well understood.

https://en.wikipedia.org/wiki/Atmosphere_of_Mars#Dust_and_ot...


My guess is lower image size, which means image can get transferred faster.

This is the right answer. The camera (and its 8 siblings) are capable of color HD imaging - the sensor has a Bayer filter. This image used a binning mode to produce a downsampled frame that could be more rapidly transferred back over the lower bandwidth comms used during landing. Binning combines the Bayer pattern and so color information is lost.
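A minimal sketch of what 2x2 binning does to an RGGB mosaic (illustrative only; the actual flight-software binning mode surely differs):

```python
import numpy as np

def bin2x2(mosaic):
    """Sum each 2x2 block of a Bayer mosaic into a single pixel.

    Each block holds one red, two green, and one blue photosite, so the
    binned value mixes all three channels -- color information is lost,
    but the frame shrinks to a quarter of the size (faster to downlink).
    """
    return (mosaic[0::2, 0::2] + mosaic[0::2, 1::2]
            + mosaic[1::2, 0::2] + mosaic[1::2, 1::2])
```

You trade away the Bayer pattern for a smaller, brighter frame, which is exactly what you want for a quick "we're alive" thumbnail.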

Also doesn't help that there is a (transparent) lens cover in front of the lens obscuring the view.


That is most certainly correct. They also mentioned that these are images from engineering cameras, so they are normally responsible for navigation. The real HD footage will come in over the next hours as the bandwidth just is not large enough.

Elon Musk needs to provide some Starlink sats for a better connection.


Starlink would most certainly be of little direct use here.

What I could imagine is having Starlink satellites around Mars that allow to route data from rovers anywhere on the planet to a dedicated high-performance communications platform that handles communication with Earth.


In fact that's exactly what they're doing: the Mars Reconnaissance Orbiter is serving as a communications relay, as it did for previous landers.

It's just that since there have never been more than a handful of spacecraft active on Mars at any given time, there's currently no point in spending huge amounts of money to launch a whole constellation of satellites for continuous coverage.


And a photographer! MRO took what might be my very favorite picture of all time: https://www.space.com/16946-mars-rover-landing-seen-from-spa...

It could still be a nice exercise if someone computed how many Starlinks a Falcon Heavy could throw to a Mars transfer orbit & whether they would be able to actually capture into Martian orbit with their default means of propulsion (do they actually have any high-thrust engines?).

Not only the MRO, but other orbiting assets as well, particularly NASA's MAVEN and ESA's TGO. Even the venerable 2001 Mars Odyssey is still used as needed, I think.

Even ESAs Mars Express is still around - since 2003!

Anyone know the bandwidth they're working with, at least roughly?

Here's a page with data about the Deep Space Network:

https://mars.nasa.gov/msl/mission/communications/#data

"The data rate direct-to-Earth [from Mars] varies from about 500 bits per second to 32,000 bits per second"


Clarification, that is for the old Curiosity rover. The page for Perseverance has some additional information

> 160/500 bits per second or faster to/from the Deep Space Network's 112-foot-diameter (34-meter-diameter) antennas or at 800/3000 bits per second or faster to/from the Deep Space Network's 230-foot-diameter (70 meter-diameter)

for high-gain antenna, and

> Approximately 10 bits per second or faster from the Deep Space Network's 112-foot-diameter (34-meter-diameter) antennas or approximately 30 bits per second or faster from the Deep Space Network's 230-foot-diameter (70-meter-diameter) antenna

for the low-gain antenna, which I believe the first two images were sent through

https://mars.nasa.gov/mars2020/spacecraft/rover/communicatio...
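To get a feel for those numbers, a back-of-envelope downlink calculation (the ~50 KB thumbnail size is a made-up example, not a mission figure; the 2 Mbit/s relay rate is the rough orbiter-relay figure mentioned downthread):

```python
# Rough transfer times for a small image at the quoted rates.
IMAGE_BITS = 50_000 * 8  # hypothetical ~50 KB binned/compressed frame

rates_bps = {
    "low-gain direct, 34 m dish": 10,
    "high-gain direct, 70 m dish": 3_000,
    "UHF relay via orbiter": 2_000_000,
}

for label, bps in rates_bps.items():
    seconds = IMAGE_BITS / bps
    print(f"{label}: {seconds:,.1f} s ({seconds / 60:,.1f} min)")
```

At 10 bps that thumbnail would take over eleven hours, which is why the relay through an orbiter is the plausible path for images arriving within minutes of touchdown.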


Maybe it was the low gain antenna but via MRO or other orbiter? 30 bits per second seems like a bit too slow to get even the two small images back so quickly.

That's rover directly to Earth; when the Reconnaissance Orbiter is used as a relay, it's around 2 Mbit/s to the orbiter.

This was explained on the feed. It's from a lower-res safety camera mainly used for object avoidance on the ground. High definition images will be available later.

https://mars.nasa.gov/mars2020/multimedia/raw-images/

It seems that NASA is being awesome and making all raw images available as they get them. So far just the 2-ish.


I heard on the live stream that it was taken by a camera that is used by the driving system.

Guessing it's black and white/high contrast to help see rocks etc., and probably much lower res, with a smaller file size for transferring too.


Here's the answer from NASA: https://youtu.be/gm0b_ijaYMQ?t=6240

It's an "engineering cam" that's not really meant for taking nice pictures, more to see where the thing is going. There'll be some better Instagram selfies soon though.

The shadow features are fantastic!

Greetings from Jezero Crater! Really doesn't look alien. Like the high mesas of New Mexico, sans flora ;)
