Hacker News
Mars rover camera project manager explains 2MP camera choice (dpreview.com)
179 points by k33l0r on Aug 9, 2012 | 86 comments



It should be noted, as well, that the Hazcams and Navcams are "build-to-print" exact copies of the ones that flew on the MERs in 2003. So most of the images we've been seeing thus far are actually taken with tech much older than the Mastcams, probably designed around the turn of the millennium. And there's nothing wrong with that -- they are very reliable and do exactly what is needed.

Source: http://www-robotics.jpl.nasa.gov/publications/Mark_Maimone/f...


The mast camera is actually two cameras, with the zoom of each fixed at a different value:

http://msl-scicorner.jpl.nasa.gov/Instruments/Mastcam/

James Cameron lobbied to make zoom lenses for the cameras so they could be set at the same zoom level to create stereo pairs for 3D images.

JPL had "problems designing the lens without using wet lubricants which would require battery-sapping heating"

Difficult trade-offs must have been made everywhere - who knows where the battery power saved by giving up the zoom lens went.


The first time I watched Apollo 13 with my son, he was about five. I explained what was happening as "You can't call a tow truck in space."

Likewise, there are no camera shops on Mars.


Given that the plutonium power core is putting off 2000 watts of heat in order to generate 125 watts of electricity, I find it surprising that they don't have enough heat to keep wet lubricants sufficiently warm. But - that's an awfully big rover, and it does get pretty cold on Mars...

2000 watts over 14 years works out to about 245,000 kilowatt-hours.


Remember, it's got a 14 year lifetime because the half-life of Pu-241 is 14.1 years. At the end of that time, half of its mass will be gone, and it'll be producing half as much heat. It won't produce a steady 2KW for 14 years and then bing, go out like a lightbulb. Power produced will slowly fall over time.
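That falloff is easy to sketch: radioisotope heat follows standard exponential decay. The 2000 W and ~14-year half-life figures below are the ones quoted in this thread, used as inputs for illustration, not official mission numbers.

```python
# Radioisotope heat output decays exponentially with the fuel's half-life;
# it never drops to zero at some "end of life" date.

def rtg_heat_output(p0_watts: float, half_life_years: float, t_years: float) -> float:
    """Heat output after t_years, given initial output p0_watts and the half-life."""
    return p0_watts * 0.5 ** (t_years / half_life_years)

# With the figures quoted above: 2000 W initially, half that after one half-life.
for t in (0.0, 7.0, 14.1, 28.2):
    print(f"year {t:5.1f}: {rtg_heat_output(2000, 14.1, t):7.1f} W")
```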


You mean, half of the plutonium mass will be gone. It will be almost completely replaced by other elements. Though I don't know how much energy they give off in radiation, or whether they are stable.


They should eventually release a series of regular-quality images taken sequentially as the camera pans left and right, and have a competition to see who can squeeze the most resolution out of them using image enhancement and fusion techniques. There are lots of interesting stochastic methods out there (e.g. "compressed sensing") that have application for this kind of thing. It would make a great academic competition.


During one of the press conferences Ravine mentioned they had hundreds of images of a single spot on the ground taken by MARDI, the descent camera, after it had landed and that a member of his team was working on using those to build a super-resolution image of the surface. I like the idea of a competition though.
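For a flavor of how multi-frame super-resolution works, here is a minimal shift-and-add sketch (an illustration of the general idea only, not JPL's actual pipeline): frames offset by known sub-pixel amounts are accumulated onto a finer grid and averaged.

```python
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Accumulate low-res frames onto a `scale`x finer grid and average.

    frames:  list of HxW arrays (same scene, slightly shifted views)
    offsets: per-frame (dy, dx) integer offsets on the fine grid, 0 <= offset < scale
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Each coarse pixel lands on the fine grid at its shifted position.
        acc[dy::scale, dx::scale] += frame
        hits[dy::scale, dx::scale] += 1
    hits[hits == 0] = 1  # leave unsampled fine pixels at zero instead of dividing by 0
    return acc / hits

# Four 2x2 frames with the four distinct sub-pixel offsets fill a 4x4 grid.
frames = [np.full((2, 2), v, dtype=float) for v in (1, 2, 3, 4)]
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
hi_res = shift_and_add(frames, offsets)
```

Real pipelines estimate the sub-pixel offsets from the images themselves and use fancier interpolation, but the averaging step is also why hundreds of frames of one spot beat a single exposure: noise drops as more samples land on each fine-grid cell.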


I think the explanation is satisfactory, but how often, if at all, do other components of such projects change, up until lift off? I imagine engines and other propulsion parts won't be upgraded much after the original spec, even if such components even remotely followed the same curve of improvement as digital chips do. But what about other types of sensors used by the rover? What is the most recent piece of tech used by the rover?

In any case, it's kind of surprising that by 2004, NASA engineers wouldn't have proposed a solution that anticipated vast improvements in digital sensor technology, so that something, in 2009-2010, could be "dropped in" (relatively speaking, not literally) as a replacement.

Of course such a design feature is going to take way more planning and resources than it would for the holiday consumer camera lineup...but a) this is NASA, some of the best of the best engineers. And b) while panning-and-stitching is always a solution, doesn't that have additional operational risk of its own? Additional panning requires additional mechanical movement and attention to moving parts.


Last minute changes are made. Or at the very least I know of one such change in the case of Curiosity.

The dust covers over the hazcams (the hazard-avoidance cameras on the belly of Curiosity) were added at the last minute. Here is the engineer who implemented them writing about the covers: http://forum.nasaspaceflight.com/index.php?topic=29612.msg93...

Basically, the Phoenix lander (landed on May 25th 2008) kicked up more dust than expected. They were worried and did a review for Curiosity, but found out that only the haz cams (because of their location on the belly) were in danger, so they decided to add dust covers to them (and also kept them transparent to see whether there really was a dust problem – as you could see from the first photos with covers still attached there very clearly was).

My guess is that stuff gets changed and updated a) if there is money and resources left or b) if the mission is in danger if you don’t change something.

The 2MP sensor is very clearly good enough. Any update in resolution would give you diminishing returns – so something like that gets pushed back.


It's all about risk. It's not about "leftover money".

Cost, schedule, risk. They are the fundamental resources for a project like this. Cost and schedule are more familiar than risk.


So if it’s about cost it’s about leftover money. That’s one and the same. (One just doesn’t sound as professional as the other.)


You offered two possible explanations for the use of a 2MP camera, (a) money, (b) risk. I was saying that, in this case, it did not really have to do with money.

My second sentence above was trying to point out that many people underestimate how important risk is to a space mission.


Oh, no, I wasn’t claiming that this change had anything to do with money. In this case it was clearly related to risk. I was just trying to make a general point.


Thanks, understood.


Yeesh, I don't even remember the Phoenix lander. Kinda crazy that there have been enough to start losing track of them.


It's the one that found water ice on Mars.


Personally I'd be very wary of that. As I said on another comment, this is a planned two-year mission. So if they drop-in a compatible component and later found out there was a manufacturing problem that shortens its life, that's not acceptable. I think NASA's very failure-avoidant, and I really can't blame them, because their mistakes are extremely expensive and occasionally deadly.

The alternative they chose, it sounds like, was extreme caution, using well-tested components at the possible expense of image quality. It's Good Enough.

And like others have said, macroscopic images aren't the sole or even primary purpose of this mission. At this point, it's just whiz-bang PR, since Spirit and Opportunity got enough pictures to last a lifetime. The secret sauce here is the spectrum analyzers, close-up camera, rock-vaporizing laser, etc. THAT'S the important stuff, scientifically.


As for panning and stitching, don't they already want to be able to point the camera in different directions?

And sensors -- they're not just looking for fancy new imaging sensors, they're looking for well-tested, radiation-tolerant sensors that can handle a range of temperatures. And then you need to redesign the rest of the circuit around it to handle more data -- all the chips driving the fancy imaging chip would have to be well-tested, radiation-tolerant chips that handle a range of temperatures.

The risk here is "use tech that's 8 years old" or "increase the chance that something goes wrong on a $2.5 billion project".


Based on history, NASA tends to improve missions via software rather than hardware. For example, stitching lower-resolution images together rather than developing a higher-pixel-count camera.

Getting a camera there is far more important than its spec sheet, and given that the lifetime of nuclear-powered instruments can exceed 30 years (e.g. Pioneer and Voyager), any over-achieving mission is going to be dependent on obsolescent hardware for a very long time. Even if Curiosity had been given a four-megapixel camera when it was developed, that would still be quaint by today's consumer expectations.


They tried this idea with the zoom lenses:

"In early 2010, NASA reconsidered the VFL [zoom lens] cameras and work resumed on assembling these cameras, which will replace the FFL cameras described here if the work is completed in time and the instruments meet their requirements."

http://msl-scicorner.jpl.nasa.gov/Instruments/Mastcam/


They can generate low-res, closeup 3D images of the ground using the stereo hazcams and manual colorization. It's far from what stereo zooming mast cameras would have given them, but I still think it looks cool: http://nitrogen.posterous.com/curiositys-view-of-mars-in-pse...

And yes, I'll probably keep posting this image in nearly every thread that mentions Curiosity's cameras until NASA starts giving us more color imagery.


2004 technology aside, this is called the "Good Enough Factor". Obviously every super geek at NASA wants a 20 MP camera with full zoom, 100-year lifetime, 3D, blah blah blah. But they are working to very exact specifications and a budget. So you have to opt for the option that satisfies the Good Enough Factor.

What will get us closest to what we actually want, without totally breaking spec and screwing with the time and monetary budget.


Everyone is missing the point here. The problem isn't that these cameras are old tech using outdated CCD sensors. If these were the best images we've seen to date of Mars, you'd be right, there'd be a "Good Enough Factor" — the best images we've seen from a NASA mission, but not as good as modern DSLRs. Understandable.

The problem, that I think a lot of people are missing, is that Viking 1, from 1976, took higher quality pictures of Mars. http://f.cl.ly/items/0k2w2d1C1O3w3e0t300f/NASAQualityDegreda...


You are comparing the very first initial images from Curiosity with the final processed images from Viking. Also, the haz-cams on Curiosity are there only to help Curiosity see where it is going. Whatever design trade-offs they make, they need to make the priority be "don't break the 2.5 billion dollar rover that is supposed to last for over a decade."

If you use a low-res camera to take 1000 images of the same thing, you can use software to make a high-res image from those.


Maybe NASA will do a Kickstarter project for their next rover. :-)


I can see it now: pledge 1 million dollars (1 of 5 left) and be invited to the NASA command center during rover touchdown. Plus all of the above.


I would like to see this result in a frustrated James Cameron building his own Mars probe for filmmaking.


The lack of an on-board microphone could/should do that, too. :-) Pictures are great, but full 30fps video (with audio!) would greatly enhance our feeling of what it's like to be on the surface there.


Mars has a very thin atmosphere, so a microphone would capture very little data if any. Star Wars aside, there isn't much sound in the universe.


Mars is also windy, and microphones can be made to be very sensitive. Plus, there might be some interest in hearing the dust from one of the common dust storms hitting the rover.

Not a high priority in terms of data, but potentially interesting nonetheless.


It appears that NASA already sent a microphone to Mars.

And you can listen to it:

http://mars.jpl.nasa.gov/msp98/lidar/microphone/mic_found.ht...

[Edit] Never mind, Mars Polar Lander crashed.

http://en.wikipedia.org/wiki/Mars_Polar_Lander#Communication...


NASA sent microphones to Mars twice, but neither was used, for technical reasons (one because the vehicle crashed).


The martian atmosphere is extremely thin. Sound probably carries very poorly, so all you would realistically hear is the vibration from mechanisms on the rover itself.


I'm surprised he didn't just kick in his own funding. Is $10 million enough? $20 million? What's the price of going from being a director to a legend?



This is a serious lack of vision for such an iconic company. Good science and technical ability, doubtless. But someone skimped on a real public relations goldmine here.


Because he's an actual scientist not an Internet rasterbator.


I really liked the article - it was very coherent and easy to follow.

BTW, is there a full-res movie of the descent? I've only seen the 'thumbnails' stitched together of the heat-shield and the parachute.


Not yet. Only 20 or so of the full frames of the descent have been downloaded so far. Will probably be a day or more before they are all down.

You can see the frames that are down here: http://mars.jpl.nasa.gov/msl/multimedia/raw/?s=0


Thanks for the pointer. This really brought home to me the 'baud rate' issue of interplanetary communication.

Right now, I'm so used to streaming movies at home that I forget the challenges of communicating across such vast distances.

Perhaps our children/grand-children will be using the faster interplanetary internet and will recall these days with the same whimsy as we do now our twisted-pair modem days.


I'd like to see someone make an animated .gif with those images.


It would be neat if one day they send rovers in modules - one nuclear power supply to last dozens if not a hundred years, then a mini-robot sent to replace the modular camera with a 15 MP one after they upgrade the bandwidth from 2 Mbps to 10 Mbps and then one day 100 Mbps.

That 15-30 minute ping time is a problem that cannot be overcome unless they find a faster-than-light wave that can be used for transmission - or a way to manipulate quantum entanglement for communications.
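The delay itself is pure geometry. A quick sketch using rough textbook values for the minimum and maximum Earth-Mars distances:

```python
# One-way light travel time to Mars; the round-trip "ping" is double this.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_minutes(distance_km: float) -> float:
    return distance_km / C_KM_S / 60.0

near = one_way_delay_minutes(54.6e6)   # ~54.6 million km at closest approach
far = one_way_delay_minutes(401e6)     # ~401 million km at maximum separation
print(f"one-way delay: {near:.1f} to {far:.1f} minutes")
```

No amount of transmitter power or bandwidth changes these numbers; only the data rate, not the latency, is an engineering variable.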


I think the problem is that, if you are going to make a second launch, it's nearly always better to land some place different. We will accumulate a bunch of data on Gale Crater over the next several years, but we have essentially zero data on a whole bunch of very interesting parts of Mars.


With enough supplementary shipments, and enough patience, a rover could stroll all over the planet.


Can anyone explain what the key limits on bandwidth are when communicating from Mars? Is it just a matter of higher bandwidth requiring higher energy consumption?


It's wireless transmission. Your bandwidth is a function of the frequency of your communication channel, and long-distance transmission generally uses low-frequency signals (and as a result, low speed).

The one way to get around this is to use a multi-channel link, that is to say you communicate over several frequencies simultaneously. This is more difficult, both technically and because you must find unused frequencies.
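The harder limit is Shannon's: achievable capacity scales with channel bandwidth and with the log of the signal-to-noise ratio, and received power falls off with the square of distance. A sketch with made-up numbers (not actual Deep Space Network figures):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# At low SNR, capacity is roughly linear in received power, so moving
# sqrt(2) times farther away (halving received power) roughly halves
# the achievable bit rate over the same channel.
print(shannon_capacity_bps(1e6, 1.0))   # a 1 MHz channel at SNR = 1
```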


I'm a bit disappointed that they didn't take this as a chance to try cobbling together a "radiation box" and putting commercial equipment in it. Surely that would be a good experimental data point that could be used in future endeavors.


It sounds like they already tried something like that on ISS, and the commercial cameras still had problems.


I feel like you could simulate those conditions here on earth pretty accurately. At the very least, it sounds like they have some radiation sensors on the rover, so we will soon have a better idea of what conditions are like.


Just curious, why are they releasing only black and white pictures?


The sensor is black and white. This lets them collect the most accurate information about how much light hits each part of the sensor. To take a color image they put a color filter in front of the lens, then they can recolor the black and white image according to the color of the filter. If they take multiple pictures with different filters, they can reconstruct a color image. http://areo.info/mer/
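The reconstruction step described above is conceptually simple: one monochrome exposure per filter, stacked into color channels. A toy sketch (real processing also needs per-filter exposure calibration and registration between frames, which are omitted here):

```python
import numpy as np

def combine_filtered_frames(red, green, blue):
    """Stack three grayscale exposures (HxW each, taken through red, green,
    and blue filters) into one HxWx3 RGB image."""
    return np.stack([red, green, blue], axis=-1)

# Three 2x2 grayscale frames become one 2x2 color image.
rgb = combine_filtered_frames(np.zeros((2, 2)), np.ones((2, 2)), np.zeros((2, 2)))
```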


Actually, the MastCams on Curiosity are Bayer-pattern color sensors.


Wow, I had no idea. Thanks for the info. http://www.nasa.gov/mission_pages/msl/multimedia/pia15109.ht...


They have sent back at least one color photo, which has been released [1] as a test that the cameras are working. They're still getting everything operational (and I believe they have a software upgrade to do yet) before the major operations get underway. The black and white cameras don't need to be deployed and calibrated like the color/high-res cameras do.

[1] http://www.reuters.com/article/2012/08/08/us-usa-mars-idUSBR...


The B&W cameras are primarily used for navigation and hazard avoidance. The rover is equipped with color cameras, and they will get stood up in time. For now they are starting with the basic pieces and gradually bringing more online.


As for the 3D factor, given the way the Curiosity can move, can't it just snap one image, move slightly to the side and snap it again for stereo LR?


Why not change the system so that new technology is able to be fitted?


The limiting factor is the testing and integration, not the new technology.

If it takes X years to do sufficient radiation and integration testing on new technology, you can't possibly include any technology "newer" than X years old. And trying to keep yourself open to including the latest technology you possibly can means you need to have manpower ready to do that testing at the last possible moment, which has scheduling implications for everything else.

So you need to prioritize which technologies you most want to be able to integrate in their latest/greatest form. And slightly fancier pictures are just not going to be that high on the list.


Please. The quality of images from the MastCam will be sufficient for the plebs.


Now that I find out that the main reason for using a 2 MP camera is because that's what the specifications were in 2004, I'm a lot more disappointed than if it was just about the speed of transfer between Mars and Earth.

2 MP cameras were in high-end phones in 2004, but you'd expect NASA to use something a little more advanced than what was available in phones in 2004. How much more would it have cost them to use a 5 MP one? $100 more if they chose one in 2004 and stuck with it, or $10 more if the camera was added just a year ago. So this makes me think that they just didn't think this would be such an important factor, compared to say making the robot work.


Question: You've just won a trip to some interesting far off place (Australia, Italy, Peru, Alaska… whatever). Do you go out the day before you leave, and buy the latest $2,000 DSLR so you can get the best possible photos?

Practically anyone who is more than a point-and-shoot photographer, even a very amateur one, will tell you that is complete insanity. There are too many things that could fail. You might not be comfortable with the camera in all lighting situations. It could have some defect in the lens that you won't have time to get replaced. You could find out the LCD preview is darker than what you're really shooting, and everything is over-exposed. The number of things that can go wrong just because it's an unknown quantity is huge.

On a trip like that, you take your trusty camera that you've shot thousands of photos with and know inside and out, even if that means that some of your photos won't be the absolute highest resolution money can buy.


You didn't read the article, did you?

The entire mission can return around 250 megabits of data per day to Earth, because it's on Fricking Mars!

Going to a 4MP CCD was considered and rejected because it'd result in slower image capture during time-sensitive processes (such as the descent cam) and gobble down far more of that valuable bandwidth, which is shared among all of Curiosity's instruments.

Going to a 4MP camera would be sensible if and only if they could have doubled their wireless bandwidth across a 300 million kilometre gap. Bandwidth is the killer bottleneck on interplanetary missions, not pixels.
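The arithmetic is brutal. A back-of-the-envelope sketch using the ~250 megabits/day figure from this comment; the per-image size and the cameras' share of the budget are invented for illustration:

```python
DAILY_BUDGET_MBIT = 250.0  # daily downlink figure quoted in the comment above

def images_per_day(image_mbit: float, camera_share: float = 0.5) -> int:
    """How many images fit if the cameras get `camera_share` of the daily budget."""
    return int(DAILY_BUDGET_MBIT * camera_share / image_mbit)

# Doubling the pixel count roughly doubles bits per frame at fixed compression,
# which halves the number of frames that come home each day.
print(images_per_day(5.0), images_per_day(10.0))
```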


I can understand their choice of a tested technology, but I don't understand the bandwidth argument. Is the processor so underpowered that it couldn't crop the image when needed?


(Sometimes I despair)

If you crop 2MP out of a 4MP image, you might as well have started out with a 2MP camera in the first place. Except that if your 4MP CCD is the same size as your 2MP CCD, the light gathering area per pixel is significantly less -- you capture fewer photons, hence less information about what you're pointing the camera at! Raw pixel count is not a good measure of the imaging accuracy of a digital camera. In the case of Curiosity, they may only be 2MP CCDs, but they're the best 2MP CCDs that money can buy, and they're being fed by the best optics NASA could source. It's a far cry from your phone camera ...
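The photon argument can be made quantitative: photon arrival is a Poisson process, so per-pixel signal-to-noise ratio grows only as the square root of the photons collected. A sketch:

```python
import math

def shot_noise_snr(photons: float) -> float:
    """Poisson-limited SNR: signal N over noise sqrt(N) equals sqrt(N)."""
    return math.sqrt(photons)

# Splitting the same die into twice as many pixels halves the photons each
# pixel collects, cutting per-pixel SNR by a factor of sqrt(2), before read
# noise and other fixed per-pixel penalties make things worse still.
ratio = shot_noise_snr(5000) / shot_noise_snr(10000)
```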


That's why I stated "when needed". I thought I didn't have to explain it in more detail, but it seems I do. You obviously have an interest in these things and rightfully have issues with pure MP arguments. In this case you read something into my comment that wasn't there. That's my despair, and one of the reasons why I try to avoid commenting on technical posts.

Note that I also stated that I did understand that there were other technical reasons. My comment was limited to the question of bandwidth only.

My original question that you didn't reply to still remains. From a pure bandwidth point of view, they could crop the image when they need a high transfer rate so that they can have a higher resolution when they have enough available bandwidth.

If they chose 2MP for other reasons, that's fair enough.


I don't think you quite understand the difference between cropping and compression. Cropping an image is the equivalent of choosing which part of the image Facebook displays as your profile picture. While this method could reduce the size of the image to transmit down to '2MP', there's really no purpose in taking photos that are essentially cut in half.


Scientists have historically been very skeptical about automated region-of-interest cropping, or fancy novelty-detection methods, or even certain kinds of compression. They are always afraid that something of importance will be filtered out. It's a difficult argument to win.

Prioritized downlink is accepted (you still get everything, but there's some automation that finds the most interesting stuff and sends it first).
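Prioritized downlink is easy to picture as a max-heap over interest scores. Everything is still transmitted, just reordered; the scores and product names here are invented for illustration:

```python
import heapq

def downlink_order(products):
    """products: (interest_score, name) pairs; higher scores are sent first.

    Python's heapq is a min-heap, so scores are negated to pop largest first.
    """
    heap = [(-score, name) for score, name in products]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

queue = downlink_order([(3, "context_pan"), (9, "odd_rock_closeup"), (5, "drive_nav_frame")])
```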

There's even experimental acceptance of planning/machine vision systems that choose targets opportunistically while a rover is moving from point A to point B.

That is, points A and B were chosen by science planners. But while the robot is moving from A to B, it looks at stuff and stops en route to collect more data if it sees something interesting. You can sell this to scientists because they still get the data from points A and B (they're in control) but they also get more data from in-between, that might be interesting, and that they would not get otherwise.

This has been used on Opportunity and won the NASA software of the year award last year (http://www.jpl.nasa.gov/news/news.cfm?release=2011-380). It's a harder problem than it sounds like, because the robot has to re-plan its activities on-the-fly ("plan" in the sense of moving cameras, turning the robot, etc.)


This is not a point-and-shoot camera for your grandma to use. It's hardened against radiation, vibration, heat, cold, dust, etc., sending back high-quality RAW images for a planned two-year mission.

So no, it's not "$100 more".


I think your argument can be used to reach the opposite conclusion: since most of the cost is not in the sensor but in the hardening against radiation, vibration, heat, cold, dust, etc. then putting in a better sensor would be basically free. Of course the article explains why they didn't; maybe mtgx just didn't read it.


Maybe... IANAEE but chip design and materials is probably still important for all that. It's not necessarily a regular chip wrapped in heat/radiation/whatever shielding. I'm trying to google around a bit for more info but have to get back to work eventually...


It is an off-the-shelf Kodak sensor.


If your camera phone sensor fails you walk to a shop and get a new camera.

If the sensor on a Mars rover fails there are thousands and thousands of dollars of wasted opportunity.

Thus, they pick something reliable that they know, and test the hell out of it, and fix the spec.

This has pros and cons.

Pros include a well known set of kit with extensive testing and documentation.

Cons include being locked into a manufacturer's product line. If that product goes obsolete you're stuck searching for stock in various resellers' inventories. I guess NASA can just buy extra stock. But I've seen the result of odd devices being used on aerospace kit, and it becomes impossible to create a sensible quote. Subcontractor A puts in a request for the obsolete part, and that goes to a bunch of vendors. Subcontractor B puts in the same request, and so you end up in a bidding war for parts that you might not buy. Given that you're quoting on something that you'll be building in maybe three months, there's no chance of providing an accurate quote.

tl;dr: when designing a Mars rover, use components that you know and buy lots of them. When designing an aircraft, avoid anything listed as near end-of-life, try to pick something that's not going to go obsolete in 3 years, and try to find something that has alternatives.


What do you think extra megapixels would give you? Judging the quality of a camera by pixel count is silly.


Exactly - they can pan the camera and stitch the images together, something that is not as easy to do with your handheld camera.


In a mission to gather data, clearly extra megapixels give you extra data. You cannot argue against that! We aren't talking amateur photography here, where the quality of the shot is important and a decent lens beats higher megapixels; we are talking about acquiring measurements of the number of photons in discrete spatial regions. Clearly a higher-resolution sensor might allow scientists to see something not visible in lower-resolution images. Although I do admit that in this case I understand the choice due to specification and testing, to suggest that extra megapixels do not give more information is silly.


Actually, the opposite is true. Photon absorption/detection is a quantum event, and limited by probability. For a given sensor chip size (and technology generation), fewer, larger sensels are going to provide samples that are statistically closer to the Absolute Truth. (Averaging repeated samples will reduce the error further.)

Using a well-corrected lens of an appropriately longer focal length, and thus a narrower field of view, with or without panoramic stitching, will provide at least[1] the same linear resolution of a given subject, but with less sampling error.

[1] At least, since apochromatic correction is easier in longer focal length lenses, provided that no super-wide-aperture bokeh heroics have gone into the design. Rectilinearity (the absence of barrel or pincushion distortion) is also easier to achieve. Flare can be reduced without inducing undue mechanical vignetting, increasing contrast.


The bottleneck to a large extent is the bandwidth/cost of transmitting the extra data. NASA will create higher-resolution images by stitching low-res imagery for those of us wanting higher res.


For perspective, a single nut or bolt often costs hundreds to thousands of dollars by the time you are done with the Mil Spec qualification. You are off by a factor of at least 3, likely 4 or 5.


People tend not to understand the traceability requirements either.

Every single part - each nut, bolt, and washer, has a batch number and can be traced from the rover back through assembly, through stores, through goods in, through the suppliers, through to the manufacturers.

This paperwork alone adds significantly to the cost.


Of course, once you stop seeing this is as crazy, you are doomed to mediocre results.


No, it would be crazy if the rover failed due to a bad bolt, and no one could explain why the bolt was bad or even where it came from.

Also, no one is stopping you from building, launching, landing, and operating a multi-purpose roving vehicle on the surface of a foreign planet, 50 to 400 million kilometers away, that's designed to last for several years and withstand extremes of temperature, pressure, and radiation.

If you can do better...


I still think there is a middle ground. I guess I will just have to make my own mars rover in my spare time though, that seems viable.


If your point is that there is a lot of bureaucracy, then I wholeheartedly agree. Unfortunately, in any government project, that is going to be the case.

You cannot on the one hand expect great things, and on the other hand chastise every failure. I'm not saying you specifically would do this, but any failure would result in a storm of controversy about how billions of dollars were wasted to land a pile of scrap metal on Mars while X, Y, and Z pressing political crises are a better use of our time and money.

A private company, on the other hand, can afford to take greater risks. If you're actually able to do it on your own, more power to you, but I would recommend some help.


You also need to think about the risk.

The chances of a problem with newer technology may be small, but the impact of that risk (that you've gone all the way to Mars and your camera doesn't work) is HUGE.

Everything on that vehicle will have been reviewed, tested and retested and retested again all of which takes time. You don't just throw something new on there at the last minute and launch.


>How much more would it have cost them to use a 5 MP one?

Well, they'd need to design it, then get it built and delivered in time for the whole SLEW of tests that anything going into space has to pass. If it failed any one of those, it would have had to be redesigned (all within the same size/weight as the original chip, because other components would have been built around it), and then the testing repeated.

Or...you know, they could use a chip they are completely familiar with, that has been tested and used in similar applications and is completely fine for the mission.



