James Cameron lobbied to make zoom lenses for the cameras so they could be set at the same zoom level to create stereo pairs for 3D images.
JPL had "problems designing the lens without using wet lubricants which would require battery-sapping heating"
Difficult trade-offs must have been made everywhere; who knows where the battery power freed up by dropping the zoom lens ended up.
Likewise, there are no camera shops on Mars.
2000 watts sustained over 14 years works out to roughly 245,000 kilowatt-hours.
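A quick sanity check on that figure, taking the 2 kW draw above at face value:

```python
# Back-of-envelope check: 2000 W sustained for 14 years, in kWh.
HOURS_PER_YEAR = 365.25 * 24              # ~8766 hours
energy_kwh = 2.0 * 14 * HOURS_PER_YEAR    # power (kW) x time (h)
print(round(energy_kwh))                  # ~245,000 kWh
```

(If memory serves, ~2 kW is the MMRTG's thermal output; the electrical power actually available to the rover is on the order of a hundred watts.)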
In any case, it's kind of surprising that by 2004, NASA engineers wouldn't have proposed a solution that anticipated vast improvements in digital sensor technology, so that something, in 2009-2010, could be "dropped in" (relatively speaking, not literally) as a replacement.
Of course such a design feature is going to take way more planning and resources than it would for the holiday consumer camera lineup...but a) this is NASA, some of the best of the best engineers. And b) while panning-and-stitching is always a solution, doesn't that have additional operational risk of its own? Additional panning requires additional mechanical movement and attention to moving parts.
The dust covers over the haz cams (the hazard-avoidance cameras on the belly of Curiosity) were added at the last minute. Here is the engineer who implemented them writing about the covers: http://forum.nasaspaceflight.com/index.php?topic=29612.msg93...
Basically, the Phoenix lander (landed on May 25th 2008) kicked up more dust than expected. They were worried and did a review for Curiosity, but found out that only the haz cams (because of their location on the belly) were in danger, so they decided to add dust covers to them (and also kept them transparent to see whether there really was a dust problem – as you could see from the first photos with covers still attached there very clearly was).
My guess is that stuff gets changed and updated a) if there is money and resources left or b) if the mission is in danger if you don’t change something.
The 2MP sensor is very clearly good enough. Any update in resolution would give you diminishing returns – so something like that gets pushed back.
Cost, schedule, risk. They are the fundamental resources for a project like this. Cost and schedule are more familiar than risk.
My second sentence above was trying to point out that many people underestimate how important risk is to a space mission.
The alternative they chose, it sounds like, was extreme caution, using well-tested components at the possible expense of image quality. It's Good Enough.
And like others have said, macroscopic images aren't the sole or even primary purpose of this mission. At this point, it's just whiz-bang PR, since Spirit and Opportunity got enough pictures to last a lifetime. The secret sauce here is the spectrum analyzers, close-up camera, rock-vaporizing laser, etc. THAT'S the important stuff, scientifically.
And sensors -- they're not just looking for fancy new imaging sensors, they're looking for well-tested, radiation-tolerant sensors that can handle a range of temperatures. And then you need to redesign the rest of the circuit around it to handle more data -- all the chips driving the fancy imaging chip would have to be well-tested, radiation-tolerant chips that handle a range of temperatures.
The risk here is "use tech that's 8 years old" or "increase the chance that something goes wrong on a $2.5 billion project".
Getting a camera there is far more important than its spec sheet, and given that the lifetime of nuclear-powered instruments can exceed 30 years (e.g. Pioneer and Voyager), any over-achieving mission is going to be dependent on obsolescent hardware for a very long time. When Curiosity was developed, even the choice of a four-megapixel camera would still seem quaint by today's consumer expectations.
"In early 2010, NASA reconsidered the VFL [zoom lens] cameras and work resumed on assembling these cameras, which will replace the FFL cameras described here if the work is completed in time and the instruments meet their requirements."
And yes, I'll probably keep posting this image in nearly every thread that mentions Curiosity's cameras until NASA starts giving us more color imagery.
What will get us closest to what we actually want, without totally breaking spec and screwing with the time and monetary budget.
The problem that I think a lot of people are missing is that Viking 1, from 1976, took higher-quality pictures of Mars. http://f.cl.ly/items/0k2w2d1C1O3w3e0t300f/NASAQualityDegreda...
If you use a low-res camera to take 1000 images of the same thing, you can use software to make a high-res image from those.
Not a high priority in terms of data, but potentially interesting nonetheless.
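The multi-frame idea above can be shown in a toy form: if successive exposures are offset by known sub-pixel shifts, the low-res frames interleave back into a finer grid. This is an idealized illustration (point-sampling, perfectly known shifts, no noise), not a real super-resolution pipeline:

```python
import numpy as np

# Toy shift-and-add sketch: a 2x-downsampling sensor sees the same scene
# twice, the second exposure offset by half a low-res pixel. Interleaving
# the two frames recovers the full-resolution signal.
hi = np.arange(16.0)          # "true" high-res scene (16 samples)

frame_a = hi[0::2]            # exposure aligned with even pixels
frame_b = hi[1::2]            # exposure shifted by half a low-res pixel

recon = np.empty_like(hi)
recon[0::2] = frame_a         # interleave the two low-res frames
recon[1::2] = frame_b

print(np.allclose(recon, hi))  # True: the shifts recover the lost detail
```

Real pipelines have to estimate the shifts and fight noise, but the principle is the same.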
And you can listen to it:
[Edit] Never mind, Mars Polar Lander crashed.
BTW, is there a full-res movie of the descent? I've only seen the 'thumbnails' stitched together of the heat-shield and the parachute.
You can see the frames that are down here:
Right now, I'm so used to streaming movies at home that I forget the challenges of communications across such vast distances.
Perhaps our children/grand-children will be using the faster interplanetary internet and will recall these days with the same whimsy as we do now our twisted-pair modem days.
That 15-30 minute ping time is a problem that cannot be overcome unless they find a faster-than-light wave that can be used for transmission - or a way to manipulate quantum entanglement for communications.
One way to get more data across (though it does nothing for the latency) is to use a multi-channel link, that is to say you communicate over several frequencies simultaneously. This is more difficult, both technically and because you must find unused frequencies.
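For scale, the one-way light-travel time over the Earth-Mars distance range (roughly 55 to 400 million km, approximate figures) works out as follows; double it for a round-trip "ping":

```python
# One-way light delay to Mars at approximate closest/farthest distances.
C_KM_PER_S = 299_792.458

for dist_million_km in (55, 400):
    seconds = dist_million_km * 1e6 / C_KM_PER_S
    print(f"{dist_million_km} million km: {seconds / 60:.1f} minutes one-way")
```

So the round trip ranges from about 6 minutes at closest approach to about 44 minutes near conjunction, bracketing the 15-30 minute figure above.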
If it takes X years to do sufficient radiation and integration testing on new technology, you can't possibly include any technology "newer" than X years old. And trying to keep yourself open to include the latest technology you possibly can means you need to have manpower ready to do that testing at the last possible moment, which has scheduling implications for everything else.
So you need to prioritize which technologies you most want to be able to integrate in their latest/greatest form. And slightly fancier pictures are just not going to be that high on the list.
2 MP cameras were in high-end phones in 2004, but you'd expect NASA to use something a little more advanced than what was available in phones in 2004. How much more would it have cost them to use a 5 MP one? $100 more if they chose one in 2004 and stuck with it, or $10 more if the camera was added just a year ago. So this makes me think that they just didn't think this would be such an important factor, compared to say making the robot work.
Practically anyone who is more than a point-and-shoot photographer, even a very amateur one, will tell you that is complete insanity. There are too many things that could fail. You might not be comfortable with the camera in all lighting situations. It could have some defect in the lens that you won't have time to get replaced. You could find out the LCD preview is darker than what you're really shooting, and everything is over-exposed. The number of things that can go wrong just because it's an unknown quantity is huge.
On a trip like that, you take your trusty camera that you've shot thousands of photos with and know inside and out, even if that means that some of your photos won't be the absolute highest resolution money can buy.
The entire mission can return around 250 megabits of data per day to Earth, because it's on Fricking Mars!
Going to a 4MP camera CCD was considered and rejected because it'd result in slower image capture during time-sensitive processes (such as the descent cam) and gobble down far more of that valuable bandwidth which is shared among all Curiosity's instruments.
Going to a 4MP camera would be sensible if and only if they could have doubled their wireless bandwidth across a 300 million kilometre gap. Bandwidth is the killer bottleneck on interplanetary missions, not pixels.
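To see how hard a fixed daily budget bites, here's a rough sketch. The bits-per-pixel and compression numbers are made up for illustration, not mission values; only the ~250 megabit/day figure comes from above:

```python
# Rough downlink budget sketch: a fixed daily allocation shared by
# every instrument. All per-image numbers are illustrative assumptions.
DAILY_BUDGET_MBIT = 250

def images_per_day(megapixels, bits_per_pixel=12, compression=4):
    """Very rough: raw bits per image divided by a compression ratio."""
    mbit_per_image = megapixels * bits_per_pixel / compression
    return DAILY_BUDGET_MBIT / mbit_per_image

print(images_per_day(2))   # ~42 images/day at 2 MP
print(images_per_day(4))   # ~21 images/day at 4 MP - half as many
```

Whatever the real per-image size is, doubling the pixel count halves the number of images you can return per day, all else equal.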
If you crop 2MP out of a 4MP image, you might as well have started out with a 2MP camera in the first place. Except that if your 4MP CCD is the same size as your 2MP CCD, the light gathering area per pixel is significantly less -- you capture fewer photons, hence less information about what you're pointing the camera at! Raw pixel count is not a good measure of the imaging accuracy of a digital camera. In the case of Curiosity, they may only be 2MP CCDs, but they're the best 2MP CCDs that money can buy, and they're being fed by the best optics NASA could source. It's a far cry from your phone camera ...
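The per-pixel light-gathering point is just geometry: on the same die, doubling the pixel count halves the area of each photosite. The sensor dimensions below are illustrative, not the actual Mastcam CCD's:

```python
# Same sensor die, more pixels -> smaller photosites.
# Sensor dimensions here are a hypothetical 2/3"-class format.
def pixel_area_um2(sensor_w_mm, sensor_h_mm, megapixels):
    """Average light-gathering area per pixel, in square microns."""
    total_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return total_um2 / (megapixels * 1e6)

a2 = pixel_area_um2(8.8, 6.6, 2)   # 2 MP on the die
a4 = pixel_area_um2(8.8, 6.6, 4)   # 4 MP on the same die
print(a2 / a4)                     # 2.0: each 2 MP pixel gathers twice the light
```

Fewer photons per pixel means worse signal-to-noise, which is why raw pixel count is a poor proxy for image quality.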
Note that I also stated that I did understand that there were other technical reasons. My comment was limited to the question of bandwidth only.
My original question that you didn't reply to still remains. From a pure bandwidth point of view, they could crop the image when they need a high transfer rate so that they can have a higher resolution when they have enough available bandwidth.
If they chose 2MP for other reasons, that's fair enough.
Prioritized downlink is accepted (you still get everything, but there's some automation that finds the most interesting stuff and sends it first).
There's even experimental acceptance of planning/machine vision systems that choose targets opportunistically while a rover is moving from point A to point B.
That is, points A and B were chosen by science planners. But while the robot is moving from A to B, it looks at stuff and stops en route to collect more data if it sees something interesting. You can sell this to scientists because they still get the data from points A and B (they're in control) but they also get more data from in-between, that might be interesting, and that they would not get otherwise.
This has been used on Opportunity and won the NASA software of the year award last year (http://www.jpl.nasa.gov/news/news.cfm?release=2011-380). It's a harder problem than it sounds like, because the robot has to re-plan its activities on-the-fly ("plan" in the sense of moving cameras, turning the robot, etc.)
So no, it's not "$100 more".
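The prioritized downlink described above boils down to a priority queue: nothing is dropped, the most interesting products just go out first. A minimal sketch, with hypothetical filenames and interest scores:

```python
import heapq

# Minimal prioritized-downlink sketch (hypothetical scores and names,
# not the real flight software): everything is eventually transmitted,
# highest interest score first.
def downlink_order(products):
    """products: list of (interest_score, name). Returns names, best first."""
    heap = [(-score, name) for score, name in products]  # max-heap via negation
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

queue = [(0.9, "dust_devil.jpg"), (0.2, "routine_nav.jpg"), (0.7, "layered_rock.jpg")]
print(downlink_order(queue))
# ['dust_devil.jpg', 'layered_rock.jpg', 'routine_nav.jpg']
```

The hard part in practice is computing those scores on-board, not the queueing.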
If the sensor on a Mars rover fails there are thousands and thousands of dollars of wasted opportunity.
Thus, they pick something reliable that they know, and test the hell out of it, and fix the spec.
This has pros and cons.
Pros include a well known set of kit with extensive testing and documentation.
Cons include being locked into a manufacturer's product line. If that product goes obsolete you're stuck searching for stock of the product in various resellers' inventory. I guess NASA can just buy extra stock. But I've seen the result of odd devices being used on aerospace kit, and it becomes impossible to create a sensible quote. Subcontractor A puts in a request for the obsolete part, and that goes to a bunch of vendors. Subcontractor B puts the same request in, and so you end up in a bidding war for parts that you might not buy. Given that you're quoting on something that you'll be building in maybe three months, there's no chance of providing an accurate quote.
tl;dr: when designing a Mars rover, use components that you know and buy lots of them. When designing an aircraft, avoid anything that is listed near end-of-life, try to pick something that's not going to go obsolete in 3 years, and try to find something that has alternatives.
Using a well-corrected lens of an appropriately longer focal length, and thus a narrower field of view, with or without panoramic stitching, will provide at least the same linear resolution of a given subject, but with less sampling error.
At the very least, apochromatic correction is easier in longer focal length lenses, provided that no super-wide-aperture bokeh heroics have gone into the design. Rectilinearity (the absence of barrel or pincushion distortion) is also easier to achieve, and flare can be reduced without inducing undue mechanical vignetting, increasing contrast.
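The focal-length trade is easy to quantify with the pinhole relation: linear detail on the subject scales with focal length for a fixed sensor. The ~34 mm and ~100 mm values below are in the ballpark of Mastcam's two fixed lenses; the 7.4 µm pixel pitch is an assumption:

```python
# Linear resolution on the subject, pinhole approximation:
# subject size covered by one pixel = pixel_pitch * distance / focal_length.
# Focal lengths loosely based on Mastcam's two fixed lenses; pixel pitch assumed.
def pixels_per_metre(focal_mm, pixel_um, subject_dist_m):
    subject_per_pixel_m = (pixel_um * 1e-6) * subject_dist_m / (focal_mm * 1e-3)
    return 1.0 / subject_per_pixel_m

short = pixels_per_metre(34, 7.4, 10)    # ~34 mm lens, subject 10 m away
longer = pixels_per_metre(100, 7.4, 10)  # ~100 mm lens, same sensor
print(longer / short)                    # ~2.94: nearly 3x the linear detail
```

Same pixel count, same distance; the longer lens simply spends its pixels on a narrower slice of the scene.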
Every single part - each nut, bolt, and washer, has a batch number and can be traced from the rover back through assembly, through stores, through goods in, through the suppliers, through to the manufacturers.
This paperwork alone adds significantly to the cost.
Also, no one is stopping you from building, launching, landing, and operating a multi-purpose roving vehicle on the surface of a foreign planet, 50 to 400 million kilometers away, that's designed to last for several years and withstand extremes of temperature, pressure, and radiation.
If you can do better...
You cannot on the one hand expect great things, and on the other hand chastise every failure. I'm not saying you specifically would do this, but any failure would result in a storm of controversy about how billions of dollars were wasted to land a pile of scrap metal on Mars while X, Y, and Z pressing political crises are a better use of our time and money.
A private company, on the other hand, can afford to take greater risks. If you're actually able to do it on your own, more power to you, but I would recommend some help.
The chances of a problem with newer technology may be small, but the impact of that risk (that you've gone all the way to Mars and your camera doesn't work) is HUGE.
Everything on that vehicle will have been reviewed, tested and retested and retested again all of which takes time. You don't just throw something new on there at the last minute and launch.
Well, they'd need to design it and get it built and delivered in time for the whole SLEW of tests that anything going into space has to pass. If it failed any one of those, it would have had to be redesigned (within the same size/weight as the original chip, because other components would have been built around it) - then repeat the tests.
Or...you know, they could use a chip they are completely familiar with, that has been tested and used in similar applications and is completely fine for the mission.