Mars to Earth at an average of 29 kbits/s (nasa.gov)
193 points by kghose on Aug 10, 2012 | hide | past | web | favorite | 85 comments



It's awesome that they've been involving the public so much in the details of how the rover mission really works. I'd actually been wondering about this exact question, since it seems to be one of the limiting factors for the quality of images and such. From a physics standpoint, sending data is clearly a lot more complicated than I'd previously assumed.


> It's awesome that they've been involving the public so much in the details of how the rover mission really works.

It must be mission critical to spread awareness, since their budget cuts are getting worse and one way to counter that is to get people excited about space exploration. They now put considerable effort into PR, involving people in the mission, sharing most of the information, and making professional videos. I believe this is why we see Twitter accounts for most astronauts, satellites, rovers and other NASA properties. NASA's Ustream has been very popular lately, and they have a presence on every major social network. And Curiosity was named after an essay written by a student in a competition NASA organised for school students.


"And curiosity was named after an essay written by a student in a competition organised by NASA for school students."

NASA has been doing this for some time now; Sojourner [1] and the MERs (Spirit & Opportunity) were also named through student competitions.

[1] http://mars.jpl.nasa.gov/MPF/rover/name.html


Yes, the clue is in the comparison to 'home modems'. The technology was finalised in 2004, which means it was designed around 2000 or so?

I'm impressed with the reception technology that allows 'direct to home' communication at all given the power levels available.


Absolutely, this seems like one of the most amazing in a long string of incredible things. There's only enough energy to power two 60 W light bulbs, and yet it can transmit all the way to Earth while also powering the computer running the thing.


This probably has more to do with the massive antennas (also known as radio telescopes) here on Earth that are used to pick up the signal. Fortunately a few such antennas exist around the globe, so Mars should be in view of one of them at all times. According to Wikipedia (http://en.wikipedia.org/wiki/Deep_Space_Network), the radio telescope complexes used by NASA are located at the Goldstone Deep Space Communications Complex near Barstow, California, USA; Robledo de Chavela near Madrid, Spain; and the Canberra Deep Space Communications Complex near Canberra, Australia.


You'd be amazed what you can do just with a watt of radio power.

If a transmitter and a receiver are both in space, how far apart can they get with a 1-watt signal still above the noise floor? Assume the transmitter is a perfect omnidirectional antenna.

About 9200 miles.
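That kind of range estimate comes from equating free-space received power with the thermal noise floor. A sketch with assumed parameters (UHF frequency, 1 kHz bandwidth, 290 K receiver; different assumptions easily shift the answer by an order of magnitude, which is presumably how a figure like 9200 miles was reached):

```python
import math

# Thermal noise floor: N = k*T*B (Boltzmann constant, system temp, bandwidth).
k = 1.380649e-23          # J/K
T = 290.0                 # K, assumed room-temperature receiver
B = 1000.0                # Hz, assumed 1 kHz channel

noise_floor_w = k * T * B

# Free-space received power for isotropic antennas (Friis with unity gains):
#   Pr = Pt * (lambda / (4*pi*d))**2
# Solve for the distance d at which Pr equals the noise floor.
freq_hz = 437e6           # assumed UHF frequency
wavelength = 299_792_458.0 / freq_hz
pt_w = 1.0                # 1 watt transmitter

d_meters = (wavelength / (4 * math.pi)) * math.sqrt(pt_w / noise_floor_w)
d_miles = d_meters / 1609.344

print(f"noise floor: {noise_floor_w:.3e} W")
print(f"range to noise floor: {d_miles:,.0f} miles")
```

Narrowing the bandwidth or cooling the receiver pushes the noise floor down and the range up, which is exactly the lever the follow-up comments pull.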


9200 miles is not very much in space. That's like 0.006% of the distance between Earth and Mars.


I well realize that.

I was assuming absolute worst conditions, at 1 watt PEP.

With hardware noise reduction, we can easily drop to 0.1 watt and still keep everything else consistent.

Next, if we choose a directional antenna, we can also greatly magnify directed output. No sense in directing energy into the ground, is there?

We can also have receiver systems (dish arrays) that can provide great gains in reception and transmitting.


In general, space travel is very risky. It's not like they can iterate. Pulling something off like landing on Mars is very impressive. Keeping this in mind, I feel it was actually quite risky that NASA streamed the landing live.

I think NASA should receive big kudos for taking that real risk.


So one wonders at what point you uplink to a satellite with ejectable flash drives which, when full, detach, make a gravity slingshot pass to leave orbit, and then burn for a rendezvous with Earth 9 months later. If I did the math right, a 1TB drive returned from Mars in 9 months would be an effective bandwidth of 470K bits per second (or 47K bytes per second).
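The back-of-envelope figure is easy to check in a couple of lines; assuming 30-day months and decimal terabytes, it comes out somewhat lower than the 470 kbit/s above:

```python
# Effective bandwidth of physically shipping a 1 TB drive from Mars
# over a 9-month transit (assuming 30-day months, decimal terabytes).
drive_bits = 1e12 * 8                # 1 TB = 8e12 bits
transit_seconds = 9 * 30 * 86400     # 9 months of 30 days

bits_per_second = drive_bits / transit_seconds
print(f"{bits_per_second / 1e3:.0f} kbit/s")
```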


While we are dreaming, it would be better to have said flash drive picked up by a space station, refueled in orbit, and shot back to Mars.

Hmm. If storage continues to improve as fast as it has been, once the space infrastructure is there, this might be cost effective. We have a lot of data to send.

In particular, we are going to need a local cache of the internet on Mars. At the average distance, the one-way latency is 225 million kilometers / the speed of light = 12.51 minutes.

Vast data barges, roaming the solar system, keeping everyone in sync....

Building an interplanetary Internet is going to be very, very fun.

On a related note, never underestimate the bandwidth of a truck filled with...

Well, I was going to say Blu-ray discs. But the cheapest Blu-ray discs I can find are 3 cents per gigabyte, while 3 TB hard drives are easily had at 5 cents per gigabyte. If the data only needs to go 60 miles, then 120 drives ($18,000) in a truck would move 360 TB in about an hour, roughly 100 GB/sec, or 800,000 Mbps...

Ouch. I'll go with a megabit connection. Once upon a time that sort of thing worked for bulk data ;)


An Interplanetary Internet is not a joke: It's been a serious field of study for a while. There is a SIG on the topic, and lots of work has been done to think about how Internet protocols would work across interplanetary distances and frequent disconnections:

http://www.ipnsig.org/aboutstudy.htm


We need to find some hive-mind aliens and dissect them in order to develop philotic protocols for instant transmission.

0 latency on funny cat videos from Uranus.


> In particular, we are going to need to have a local cache of the internet on Mars.

I predict somewhere today at Google an engineer just got an idea for his 20% project. "Red Cache"


I hope so. The trick would be, after there is a cache of the Internet on Mars, to only have to send the changes, not the whole Internet again.

Straight snapshots would probably be the only viable option, though. Maybe make the policy: once a decade, do a complete update; every year, update "any webpage accessed by colonists in the past year, plus the top 5-15% of webpages by popularity on Earth."

And during the year, the top 5% of webpages, and any webpage with specifically Mars relevant content would be updated "wirelessly."

The numbers would change constantly, depending on bandwidth and costs and whatnot. But the question "how to build the best asynchronous shadow Internet given such and such constraints" is fascinating.
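The "only send the changes" idea is essentially manifest-based delta sync. A minimal sketch (the URLs, manifest structure, and function names here are hypothetical, just to show the mechanism):

```python
import hashlib

def page_digest(content: bytes) -> str:
    """Content hash used to detect changed pages between syncs."""
    return hashlib.sha256(content).hexdigest()

def pages_to_ship(earth_pages: dict, mars_manifest: dict) -> list:
    """Return URLs whose content differs from the manifest of what the
    Mars cache already holds (new pages or changed pages)."""
    return [url for url, content in earth_pages.items()
            if mars_manifest.get(url) != page_digest(content)]

# Hypothetical example: only the changed page goes on the next transfer.
earth = {"example.com/a": b"v2", "example.com/b": b"v1"}
manifest = {"example.com/a": page_digest(b"v1"),
            "example.com/b": page_digest(b"v1")}
print(pages_to_ship(earth, manifest))   # only example.com/a changed
```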


Did you know -- there are more ports on the internet than just 53 and 80?

-- A message from your friendly local hacker wizard


The only thing your plan doesn't account for is how we get the updates from the web developers on Mars :-)


Perhaps, but I just got an idea for an interview question...


Latency matters, too. Often all the data is necessary to make decisions so (for example) only immediately transmitting a limited subset of the data is not really an option.

It’s not like that rover is driving around autonomously. It has some autonomy, but it can’t decide what’s interesting and what’s not, what has to be investigated and what not.


True enough, but it would help in answering the critics who want NASA to return a 1080p/120fps 3D movie from Mars. Send the movie via the flash drive.


The initial problem is building and sending the camera to Mars.


Unfortunately, to do a gravitational slingshot, you need a body other than the one relative to which you want to accelerate. So you can use Mars to accelerate relative to the Sun, but not relative to Mars.

This means if you're already in Mars orbit you need fuel(TM) to escape. One might consider a slingshot around Deimos or Phobos but I don't think that'd be helpful enough to justify its cost (could be wrong, but I've never seen it suggested in any Mars->Earth mission plan).


Sounds like a modern take on the Corona satellites.

http://en.wikipedia.org/wiki/Corona_(satellite)


I see one huge issue: if you lose the flash drive, you lose ALL of the data you collected. At least you are assured of getting part of the data when it is broadcast.


Timeliness is quite important. In order to plan where to go and what to do next you need results from what you've just done. Without that you're just fumbling around blind.


Such drives would be far more expensive than what we use daily. Factor in electronic shielding needed to prevent soft-errors from cosmic rays: http://en.wikipedia.org/wiki/Cosmic_ray#Effect_on_electronic...


470K bits == 58.75K bytes

(assuming 8-bit bytes)


Typically bytes are coded 8N1 (1 start bit, 1 stop bit, and 8 data bits), so 10 line symbols get you 1 byte; hence the nominal divide by 10. (8b/10b line coding happens to carry the same 10-bits-per-byte overhead.)

But in a pedantic sort of way, the computation should add extra bits for ECC, since the drive already has ECC bits embedded in it; a 1TB drive holds more than 10 Tbits of raw bits, probably closer to 15-20 terabits.
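For scale, a quick sketch under two overhead assumptions (the ~10% ECC figure is a guess at typical drive-level coding, not a spec; real overhead varies by drive generation):

```python
# Rough raw-bit estimates for 1 TB of user data under different
# overhead assumptions (the exact figure depends on the drive's
# framing and ECC scheme).
user_bits = 1e12 * 8                      # 10^12 bytes of user data

raw_tbits = {
    "8N1-style framing (10 bits/byte)": user_bits * 10 / 8 / 1e12,
    "assumed drive ECC (~10%)":         user_bits * 1.10 / 1e12,
}
for label, tbits in raw_tbits.items():
    print(f"{label}: {tbits:.1f} Tbit")
```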


you also need parity bits


So those are the lander->earth and lander->sat speeds. How fast can the satellites transmit to earth? If we somehow always had a sat in reach of the lander, how much data would we be able to send?

Edit: Did the research. MRO's antenna can handle .5 to 4 megabits depending on the distance between planets. Wow, having to wait for flybys is a huge bottleneck. http://mars.jpl.nasa.gov/mro/mission/communications/commxban...


I wonder if it's possible to launch an orbiter in Mars synchronous orbit. That'll eliminate the bottleneck.


Would have been nice if Mars Telecommunications Orbiter [1] had flown. In addition to a higher orbit more suitable for communications, it would have included an experimental high-bandwidth laser link to Earth. Alas, the project got cancelled for budgetary reasons.

[1] http://en.wikipedia.org/wiki/Mars_Telecommunications_Orbiter


Wouldn't just a higher orbit achieve the same thing? You'd be in sight for longer periods, right? Without locking you into a location the way synchronous would.

The current orbiters are primarily for imaging, so it makes sense that they're as low as possible. If you were always going to have several surface missions, it might make sense to have higher orbiters primarily tasked with communication relay.


As someone explained to me on another thread, the orbiters they have now are in polar orbits. A more equatorial orbit would create very frequent uplinks.

One thing is that they didn't know Gale Crater was where they were going to land until pretty soon before launch. There were a number of other sites. And if they chose one further from the equator, that satellite wouldn't necessarily be in the right place.


I read this the other day and found it interesting: http://en.wikipedia.org/wiki/Interplanetary_Internet

Apparently NASA had plans for launching an orbiter specifically as an optical communications hub for Mars but scrapped it in 2005:

"As of 2005, NASA has canceled plans to launch the Mars Telecommunications Orbiter in September 2009; it had the goal of supporting future missions to Mars and would have functioned as a possible first definitive Internet hub around another planetary body. It would use optical communications using laser beams for their lower ping rates than radiowaves."


Now it's ESA's turn to try it: http://www.esa.int/esaMI/ESOC/SEMM5IHWP0H_0.html

(OT: Note the picture at the bottom, the astronaut is one of ours: Austrian Space Forum, a private space R&D org.)


I'm wondering how they prevent non-NASA entities from sending commands to the rover - are they using some sort of cryptographic signature on the commands?


I'm not sure about Curiosity in particular, but I work in the space industry and can say that commands are normally timestamped and encrypted. The encryption stops someone from commanding the spacecraft, the timestamp stops someone from recording a transmission and resending it later.


So there’s basically a real GoldenEye key. And you could really use a GoldenEye key duplicator before leaving Severnaya!


You need access to the big Deep-Space Network of radio dishes.


1. You can hack into the DSN, and try to do it that way.

2. Set up an 80-meter dish in your back yard, with a team of NASA RF experts.

With step (2), the authorities will zero in on you in a second, so the emphasis is definitely on the cyber security of the ground infrastructure and not on SSL-over-space-link-communication.


(2): there are plenty of third world countries or foreign government agencies where placing an 80-meter dish is not a problem.

Although the process seems a bit intimidating - http://deepspace.jpl.nasa.gov/dsn/features/dsnbuilt1.html


Not just assembling the dish - you also have to know the ephemeris schedule, bit coding scheme (it changes based on circumstances), packet formats, software infrastructure... In the great scheme of things, my friend, Cuba or Zimbabwe secretly erecting an 80-meter dish just to f--k with a rover on Mars isn't really a concern. Even though ESA or China or Japan might conceivably have the theoretical capability to do this... there are other things to worry about.


They could just DOS it by blasting it with white noise.


This kinda raises a question. I for one would not use encryption, just to maximize bandwidth efficiency. Deep space programs are nowhere close to being a tactical advantage over another country. I don't see why someone would want to screw it up.

On the other side, there's always a moron out there to crash the party. So even if I wouldn't use encryption, I wouldn't be surprised if they use it.


I for one would not use encryption just to maximize bandwith efficiency.

Encryption typically doesn't make messages longer.


We're not talking about your average data. We're talking about optimized data. Of course there's an overhead.

EDIT: removed specific type of data


I'm not sure what you mean by "optimized" data.

Edit: my very rudimentary understanding of information theory tells me that there's no "average" or "optimized" data, only varying amounts of entropy (AKA compressibility). My limited knowledge of radio and electrical transmission says that transmitted data is encoded to make effective use of the transmission medium (e.g. 8b/10b encoding, 8-to-14 modulation, TMDS).

So, if by "optimized" data you mean "high entropy" data, encryption should have no effect on that. Or, if by "optimized" you mean encoded for the transmission channel, you would run your encryption process before the channel encoding process.


I do have a question, however. I just read about 8b/10b encoding, and I think we can agree that 8b/10b has 25% overhead relative to the payload. This means that for the same line throughput (29k/sec) we have 23.2k/sec of data.

Since TMDS is a superset of 8b/10b, then it's a fair assumption that it has around the same overhead.

Now I know we're only talking encoding here, not encryption but still, I wonder:

What kind of encoding are they using? Because the smaller the encoding, the weaker the encryption (unless I got this wrong...).

Now if all my assumptions are correct could it be possible that they use an encoding that makes encryption useless?


Encoding != Encryption. Encoding is on a line level; for example, the TMDS coding in HDMI (an 8b/10b-style code) is used to prevent a DC bias on the line. The way it works is by making sure that within a certain number of 8-bit inputs (I think 3), the resulting 10-bit outputs have an equal number of 1s and 0s, thereby preventing the physical wire from charging itself closer to V+ or V-, which in turn helps prevent bit errors.

8b/10b has exactly 25% overhead; this is the difference between the line's symbol rate in baud (what the line is capable of transmitting, on the 10b side) and the data rate in bits per second (the decoded 8b data). If you have a 10-kilobaud line, you'll have an effective data rate of 8 kbit/second.

Encryption is done before the encoding stage, the encoding scheme doesn't care and isn't affected by the encryption.
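In code, the baud-to-bit conversion is just the 8/10 ratio:

```python
def effective_data_rate(line_baud: float) -> float:
    """Decoded data rate for an 8b/10b-coded line:
    every 10 line symbols carry 8 payload bits."""
    return line_baud * 8 / 10

# The 10-kilobaud example from above decodes to 8000 bit/s.
print(effective_data_rate(10_000))
```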


At first, I wouldn't have thought that they would use encoding to maximize efficiency. But now that I think about it, their goal isn't efficiency but rather consistency. So yeah I was wrong.

Maybe I thought this was more primitive than what it really is :/


You seem to be trying to say something, but it's not coming across. What is primitive? How is it primitive? How do you describe the difference between efficiency and consistency in your usage? Is English not your primary language?

My exposure to this stuff is limited to a few exercises in a Matlab class years ago, plus what I read on HN and Wikipedia. Based on what I understand, from a data transmission perspective, there is no difference between efficiency and consistency. There is a theoretical maximum amount of data that can be transmitted using a given medium, bandwidth, and error rate. The only goal is to get as close to that maximum as possible.

Also, there's no shame in admitting you're wrong. In fact, every opportunity to be wrong is an opportunity to learn.


Yeah, English isn't my primary language and I haven't spoken it in a long time, so sorry if I can't get my point across :)

I'm not ashamed of being wrong. If that were the case, I would have been ashamed my whole life, heh.

When I said consistency I was referring to low-error rate in the signal. I should have said that but couldn't find the correct word for it.

Efficiency meant efficiency in the bandwidth usage.

So in that sense, there's always a tradeoff between efficiency (use 100% of the bandwidth to transmit real data) and consistency (make sure that 100% of the sent data is error-free)

Now, when I first wrote my comment, I had the feeling that the communication consisted of very basic commands, like a 2-bit integer for movement direction (00 up, 01 down, 10 left, 11 right). So if you used such a scheme to send instructions to the rover, there would be an obvious overhead (encoding 2 bits into 4, 10, etc.).

But now that I really think about it, instructions sent to the rover are probably very complex, which increases entropy, which in the end made my first assumption wrong. Is this clear? I hope it is ;) If not, I can rephrase!


In low-bandwidth environments, the pipeline is like this:

1) plain data initial stream.

2) compress it.

3) encrypt it or/and sign it.

4) add error-checking or even error-correction information.

The reason you do encryption (3) after compression (2) is because you don't want encryption to affect the compression rate of the initial stream.

If you were to do it in the reverse order (first encryption and then compression) then yes, the resulting message could be bigger.
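A sketch of that ordering using only the standard library (HMAC stands in for the encrypt-and/or-sign step, and CRC32 stands in for real error-correction coding; a real link would use proper encryption and forward error correction):

```python
import hashlib
import hmac
import struct
import zlib

KEY = b"shared-secret"   # hypothetical pre-shared key

def prepare_frame(payload: bytes) -> bytes:
    # Steps 1-2: compress the plain data first, while it still has
    # redundancy; encrypted/signed data won't compress.
    compressed = zlib.compress(payload, level=9)
    # Step 3: authenticate (HMAC tag in place of encrypt-and/or-sign).
    tag = hmac.new(KEY, compressed, hashlib.sha256).digest()
    body = tag + compressed
    # Step 4: append error-detection info (CRC32 in place of real ECC).
    crc = struct.pack(">I", zlib.crc32(body))
    return body + crc

frame = prepare_frame(b"MOVE 2m NORTH; " * 50)
print(len(frame))   # far smaller than the 750-byte input thanks to step 2
```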


There isn't. You could easily achieve zero overhead by shipping a rover with an OTP for time-keyed IVs and just not doing any padding.


I'm guessing there is a lot of data that never gets sent to Earth. Do they send thumbnails of the images to determine what is interesting? Is there a software tool that creates a thumbnail and then a copy of the original image that depends on the thumbnail to be recreated? (If you can't tell, I wasn't really sure how to word that last sentence.) Basically, less data would be required to transmit because you are only sending the data required to reconstitute the real image. Depending on the size of the thumbnail, this might be a negligible difference.

For example, a very naive implementation might create a thumbnail from every pixel where x or y is odd, and then when/if you want the original image you can fetch the data required to put it back together (a similar-sized "thumbnail" with the even pixels). For this naive approach the thumbnail would likely be much larger than you want and would not help at all :)
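The naive interleaving scheme from the comment is easy to sketch on a 1-D pixel list (a real system would use something like progressive JPEG or wavelet coding instead):

```python
def split_interleaved(pixels):
    """Naive progressive scheme: send every other pixel first as a
    'thumbnail', ship the remaining pixels later on demand."""
    thumbnail = pixels[::2]
    remainder = pixels[1::2]
    return thumbnail, remainder

def recombine(thumbnail, remainder):
    """Reconstruct the full image losslessly from the two halves."""
    full = [0] * (len(thumbnail) + len(remainder))
    full[::2] = thumbnail
    full[1::2] = remainder
    return full

pixels = list(range(10))
thumb, rest = split_interleaved(pixels)
assert recombine(thumb, rest) == pixels   # lossless reconstruction
```

As the comment notes, the "thumbnail" here is half the original size, so nothing is actually saved; the only win is getting a coarse preview sooner.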


This may be a silly question, but I don't understand why the bandwidth is so low. I understand the latency will be high, since you're limited by the speed of light; but wouldn't you be able to get more bandwidth just by increasing the spectrum of light used, as well as the baud of the transmitter/receiver?


Even with optimal coding you are still limited by achievable signal over noise due to the Shannon limit.
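The Shannon-Hartley theorem gives that limit directly. A sketch with made-up numbers (1 MHz of bandwidth at an SNR well below 1, chosen only for illustration, not the actual DSN link parameters):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits/second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical deep-space-like numbers: a wide channel, weak signal.
print(f"{shannon_capacity(1e6, 0.02):,.0f} bit/s")
```

With SNR far below 1, capacity grows roughly linearly with signal power, which is why transmitter watts and dish gain matter so much.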


Using Sloppy to view websites gives a reasonable idea of how slow this is.

(http://www.dallaway.com/sloppy/)


Very cool concept. Unfortunately, this only proxies the starting hostname and doesn't rewrite absolute URLs, so on most modern websites, image/CSS/JS resources (being served from subdomains or S3 etc.) will load unproxied and not get slowed down.


The high gain system they have for curiosity has an upper end of 2Mbps - to the orbiter.

The orbiter can talk to earth at up to 6Mbps depending on how far away we are at that time.

6Mbps at that distance is a staggering feat of engineering.


Does anyone know more about the architecture of this space radio network? For example, does it use TCP/IP on top of radio, or is it packet switched as with amateur packet radio?


Generally they use the CCSDS specifications, CCSDS is an international body with the space agencies of each major country contributing. Google for CCSDS, they have a bunch of open standards. Space networks really aren't quite at the level of fully mature, layered, multi-hop packetized networks... It's a Hard Problem.

Basically, TCP/IP doesn't scale to meet the constraints of deep-space networks (high latency, asymmetric data rates, intermittent connectivity). It just breaks down. That's why there's a lot of active research in DTN (delay/disruption tolerant networking and ad-hoc networking protocols). ION (Interplanetary Overlay Network) is the JPL implementation of the DTN protocols, and has been flown in a demonstration capacity on some extended-mission/end-of-life Deep Space missions. Much of it has been open sourced at this point.
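A toy illustration of the store-and-forward idea behind DTN (this is a sketch of the concept only, not the actual Bundle Protocol or ION; class and method names are made up):

```python
from collections import deque

class DTNNode:
    """Toy store-and-forward node: bundles wait in persistent storage
    until a contact (link window) opens, rather than being dropped as
    TCP/IP would when the end-to-end path is broken."""
    def __init__(self, name: str):
        self.name = name
        self.storage = deque()

    def receive(self, bundle: bytes):
        self.storage.append(bundle)

    def forward_all(self, next_hop: "DTNNode"):
        """Called when a contact with next_hop becomes available."""
        while self.storage:
            next_hop.receive(self.storage.popleft())

rover, orbiter, earth = DTNNode("rover"), DTNNode("orbiter"), DTNNode("earth")
rover.receive(b"image-001")     # no orbiter overhead yet: bundle waits
rover.forward_all(orbiter)      # orbiter pass: bundle moves up one hop
orbiter.forward_all(earth)      # Earth in view: bundle delivered
print([b.decode() for b in earth.storage])
```

The key contrast with TCP is that no hop ever needs a live end-to-end path; each node holds custody of the data until the next link comes up.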


Here's the protocol stacks http://public.ccsds.org/publications/default.aspx , here's source code for some higher layers https://ion.ocp.ohiou.edu/


Here's some publicly available technical information about how the DSN works: http://eis.jpl.nasa.gov/deepspace/dsndocs/810-005/


Up in Sweden they have DTNs for reindeer farmers. A helicopter that brings food supplies acts as the data transfer relay.

DTN book for anyone interested in the area. http://www.amazon.com/dp/1596930632


A friend of mine used to work at JPL on transmissions from one of the older explorers. Maybe Voyager?

Anyway, he compared the energy used to transmit from the probe to that of a flower petal falling from a height of six feet.


What's the latency of that data connection?


I asked this question on quora, about 14 minutes.

http://www.quora.com/Curiosity-Mars-Rover-1/What-is-the-bit-...


Round-trip latency is the time it takes a signal to travel from Earth to Mars and back.

As Mars and Earth orbit around the sun, their distance relative to each other is constantly changing. The minimum distance from Earth to Mars is 35,000,000 miles, and the maximum is 249,375,000 miles.

Since light travels at about 670,000,000 miles per hour in a vacuum, we can calculate the minimum and maximum round-trip latency:

  Minimum round-trip latency
  = (2 * 35,000,000 miles) / (670,000,000 miles per hour)
  = (0.104478 hours) * (60 minutes per hour)
  = 6.2687 minutes

  Maximum round-trip latency
  = (2 * 249,375,000 miles) / (670,000,000 miles per hour)
  = (0.744402 hours) * (60 minutes per hour)
  = 44.6642 minutes
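The same arithmetic as a function (speed of light rounded to 670 million mph, as above):

```python
C_MPH = 670e6   # speed of light in miles per hour (rounded)

def round_trip_minutes(distance_miles: float) -> float:
    """Round-trip light-time for a given one-way distance."""
    return 2 * distance_miles / C_MPH * 60

print(f"min: {round_trip_minutes(35_000_000):.1f} min")
print(f"max: {round_trip_minutes(249_375_000):.1f} min")
```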


14 minutes one-way, so a "ping" would take 28 minutes.


It varies. As others have said, it's currently just under 14 minutes.

But it can be as low as 3 minutes or as high as 21 minutes, depending on the distance from Earth to Mars. Which of course varies as they orbit the sun.


The real question is whether it has an IPv6 address yet, or if it's still stuck using IPv4?


It doesn't use IP... it uses a family of protocols from CCSDS.


The latency is about 14 minutes, which is funny because we consider 100 ms latency a bad thing here on Earth.


There must be some amateur radio people out there able to receive these signals, no? With the protocol stacks mentioned in the other comments, I wonder if they're able to extract the encrypted data. Which leads me to think there have got to be people/governments trying to figure those out.


Nice, just like my first modem back in 1996. Now let me host a dedicated QuakeWorld Server on Mars!


You might find that the 1700 second round trip latency is a tad higher than your modem in 1996.


I love this statement (from the Preventing Busy Signals page): "The Deep Space Network (DSN) communicates with nearly all spacecraft flying throughout our solar system."


My suggestion is a satellite going around the Sun between Earth and Mars, so that there would be more visibility.


It's faster than my work's connection!


You are joking but in 2003 I had to live with 6 kbps (yes SIX KILOBITS) at a time when my monopoly telco was advertising broadband speeds.


So, it's basically like dial-up back in the day.


What if they open-sourced all of their components on GitHub...



