Hacker News
Why don't they just..? (jgc.org)
293 points by jgrahamc on Aug 20, 2012 | 136 comments



When I first started in corporate programming and had to pick up on big projects that someone else had left off, I often found myself saying the same thing a lot. "Man, they did this in such a dumb way. Why didn't they do it this way? They must not have been very smart."

So I would go in and "fix" things... only to find out a month later that the code was the way it was because of some obscure edge case that I had never thought of. It turns out that in my arrogance I was the dummy all along.

After that happened a couple of times, I stopped approaching strange code with that attitude. Programmers are in general pretty clever, and if you see something strange in some code, don't assume it's because they're dumb. At first assume it's because you're dumb, and only change your mind if careful and deliberate analysis--and talking to someone else with history in the organization--proves otherwise.


After the recent discussion about code-comments, that sort of code is exactly the sort that requires an explanatory comment.

Then, all of a sudden, you don't need to waste time on careful and deliberate analysis, hunting down people who might know about it, or making assumptions, because the two minutes it would take to write a few lines of explanation saves you all of that.

At which point, the real question is: why did they choose not to document this non-obvious solution, and the edge case that required it?


> At which point, the real question is: why did they choose not to document this non-obvious solution, and the edge case that required it?

From my personal experience with "corporate programming", the usual suspects are:

1) corporate culture that dictates that you need to get the code out ASAP and let someone else worry about maintenance

2) original author's assumption that he/she will be the only one to touch that code

I've been guilty of #2 before I learned that even if I am the only one to touch the code, if I wait long enough before I come back to it, I'll still have the same problem as a newcomer would.

As for #1, this is a typical corporate culture for any company whose business isn't producing code (and for quite a few whose business is precisely that).


3) The code was indeed obvious for the people who had worked on the project for a while.

When you get familiar with a domain, it is very easy to become blind to what people without exposure to it will or won't consider obvious.


3) A culture that thinks/mandates that "unnecessary comments" "clutter" the code.

4) Documentation that is kept separate / in different systems, and that diverges over time. Or, one failure in training the new guy is not giving them a good overview of the documentation system(s). And/or the documentation organization and systems are cumbersome to the point of being useless unless you already have a pretty good idea of where stuff is (and isn't -- all those empty forms that end up being ignored).


#4 has made my life miserable in the past, and will probably do so again. ISO certification requirements seem to exacerbate it, interestingly (frustratingly) enough.


> 1) corporate culture that dictates that you need to get the code out ASAP and let someone else worry about maintenance

Unfortunately, three months from now, "someone else" is you, and "getting the code out ASAP" means figuring out the spaghetti code you wrote but no longer understand.

In the long run, doing it right the first time helps us go faster. Therefore, it's part of the programmer's job to resist pressure to do it wrong.


Comments, and, one would hope, some tests showing how the obscure edge case interacts with everything else.


Sometimes you don't know what the edge cases are because you don't know what the everything else is. If you return something sorted, somebody will write a consumer that depends on the sorting. If you change backends and return things in a different order, you've broken their code. Sure, their code was broken from the start, but it used to work, now it doesn't, and you're the guy who changed something. The only way to prevent this, randomly sorting every unsorted collection you export, isn't feasible to reliably maintain.
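A minimal Ruby sketch of that trap (the API and names are hypothetical): a consumer that depends on the order of an "unordered" export keeps working until the backend changes, while deliberately shuffling the export makes the non-guarantee explicit.

```ruby
# Hypothetical export that makes no ordering promise. Consumers
# will quietly come to depend on whatever order they observe
# unless the non-guarantee is made explicit, e.g. by shuffling
# on every call.
def exported_user_ids(users_by_id)
  users_by_id.keys.shuffle
end

users = { 42 => "alice", 7 => "bob", 19 => "carol" }
ids = exported_user_ids(users)
# The only safe assumptions are membership and size, not order:
ids.sort # => [7, 19, 42]
```

The parent's point stands: shuffling every export is rarely feasible to maintain; the sketch just shows what the defence would look like.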


> The only way to prevent this, randomly sorting every unsorted collection you export, isn't feasible to reliably maintain.

It's feasible to maintain. But people might rely on it, too. As a random (and easily avoidable) example, quicksort works fine with random input, but 'breaks' for pre-sorted input.


That depends on the pivot chosen. If your quicksort implementation actually hits n^2 behavior with sorted input, it's probably not the best implementation.

Any implementation can hit n^2 with random input. It's just highly unlikely.
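That pivot behaviour is easy to demonstrate. A naive quicksort that always picks the first element as its pivot performs n(n-1)/2 comparisons on already-sorted input, which is exactly the n^2 case above. A toy sketch (illustrative, not any particular library's implementation):

```ruby
# Naive quicksort, first element as pivot; counts comparisons.
def quicksort(arr, counter)
  return arr if arr.length <= 1
  pivot, *rest = arr
  counter[:cmp] += rest.length           # one comparison per element
  left  = rest.select { |x| x < pivot }
  right = rest.reject { |x| x < pivot }
  quicksort(left, counter) + [pivot] + quicksort(right, counter)
end

n = 100
sorted_cost = { cmp: 0 }
quicksort((1..n).to_a, sorted_cost)         # worst case: n(n-1)/2 = 4950
random_cost = { cmp: 0 }
quicksort((1..n).to_a.shuffle, random_cost) # typically on the order of n log n
```

With n = 100, sorted input costs exactly 4950 comparisons, while shuffled input typically costs far fewer, which is why real implementations pick a random or median-of-three pivot.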


Yes, I know. That's what the "easily avoidable" was for. I couldn't think of a better example in a few minutes, but if something can go wrong, I am sure it will eventually go wrong.


Agreed, with one caveat: even better than explanatory comments are explanatory method and variable names, when possible.

For example, you could have one 10-line method, interspersed with comments, or you could have a hierarchy of method calls, each with a name so clear that it needs no comments. The second method is less likely to get out of sync with what the code actually does, and it's more testable.
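A small Ruby sketch of the contrast (the domain and names are invented): the same computation written as a hierarchy of intention-revealing helpers instead of one commented blob, with each helper testable on its own.

```ruby
# Instead of one method with comments marking each phase
# ("# compute subtotal", "# apply discount", "# add tax"),
# each phase gets a name:
def order_total(items, discount_rate, tax_rate)
  with_tax(with_discount(subtotal(items), discount_rate), tax_rate)
end

def subtotal(items)
  items.sum { |i| i[:price] * i[:qty] }
end

def with_discount(amount, rate)
  amount * (1 - rate)
end

def with_tax(amount, rate)
  amount * (1 + rate)
end

order_total([{ price: 10, qty: 2 }], 0.5, 0.25) # => 12.5
```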

This is the lesson that stuck with me most from Clean Code, which I highly recommend.


Even better is naming things as intuitively as possible, and to use explanatory comments where they would be helpful.

Not trying to pick on you, but every time I see someone implying that "naming things better" is a panacea, I want to throw it out there that sometimes, nothing beats a good comment to explain (in natural language) what's going on here.


We're on the same page. Generally, I find that naming things better helps explain what the program is doing, and comments help explain why. And comments will stand out best if they are only used when needed to answer the question "why didn't they just..." :)


I've come across a lot of code where the code itself was the clearest way to communicate the subtlety inherent in it. I think you have an inflated expectation of what human languages can document in a reasonable space. Many times it will take months to understand the edge case even if you have a human trying to describe it to you directly, let alone by reading a non-interactive comment.


The code may well be the clearest way to communicate the subtlety. That doesn't mean a comment isn't helpful to communicate that there's subtlety there and give some idea of what sort of case it's meant to be about.

  // It may look as if you could just say p->adjust() here,
  // but that doesn't work because of a subtlety involving
  // cancelled credit cards belonging to purchasers in Uganda.
  // Please tread carefully!
... And then, if talking to another human being is the only real way to grasp what's going on, you use your version control system to find out who added that comment and go talk to them.


You'll run into problems as soon as it's not the remaining code that explains the subtlety, but the code that you had to remove. In such cases, a comment explaining (for example) why the obvious choice of library FooBar was abandoned for a more direct, closer-to-the-metal approach would provide the answer that the code which isn't there cannot.


In these cases a simple "Function is required to handle multiple edge cases" comment would at least alert other developers that there is a reason for ostensibly odd or overly complex code. This, along with well-commented test cases, solves most issues.


But you might not even need to understand the edge case, you just need to know that it's there, and it means the code is more complex than it really should be. The code itself cannot reasonably communicate the developer's true intent, and so a short comment can remove any ambiguity even if it doesn't explain the issue in full. The GP assumed incompetence, when it was really pragmatism.


> After the recent discussion about code-comments, that sort of code is exactly the sort that requires an explanatory comment.

If the code isn't the clearest comment you have, you're doing it wrong.

Comments are a crutch for people who write poor code. Comments have zero authority or guarantee of accuracy, and more often than not have little correlation with the actual code.

Code is canonical. Comments are noise.


I think you're wrong for one simple reason: business logic.

Business logic is complicated and rarely defined by a developer; it's defined by a product manager. Often you can understand what's being done in the code, but you need the WHY to understand why it's there and what it is trying to do. I believe giving a brief synopsis of the business logic in a method comment, and, if it's not super straightforward, a brief overview of the steps or algorithm, is incredibly useful.
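A sketch of that kind of "why" comment (the rule and threshold are invented for illustration, not from any real codebase):

```ruby
# WHY, not what: per a (hypothetical) product decision, orders under
# $100 ship uninsured, because processing a claim on a small order
# costs more than the claim recovers. Nothing in the code below
# could tell a future maintainer that.
INSURANCE_THRESHOLD_CENTS = 10_000

def ships_insured?(order_total_cents)
  order_total_cents >= INSURANCE_THRESHOLD_CENTS
end
```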

I guarantee you won't be able to figure out what your program is doing by looking through years old wikis left by product managers no longer at your company.


Just to be clear, the person I responded to said, in effect, that the code did X. They made code that did Y, and then realized it didn't do X. That is the story of a million ill-conceived code rewrites.

That the code does X is clear from the code: no amount of words can refute or change that the code does X. The danger in replacing code always lies in the behavior of the code, never in simplified descriptions of its actions.

As for business logic, as someone who has worked heavily with business code for years (laughable commentary from imbeciles to the contrary), business logic in comments is one of the worst choices a team can make, because it is an escape hatch. It negates the need for verbose, traceable code. It negates the need for vastly superior external proof.


Mathematicians have much more convenient symbols and more powerful notation than programmers. Yet they use text, too. Ever wondered why?

Donald Knuth, who gives money to people who spot bugs in his code, even invented literate programming to mix prose explanation and code better. Is he an idiot?


I'll happily be "so brave" and reiterate a simple truism of software development -- comments are the battle cry of terrible developers.

They are the crutch of people who can't read code: "Add more comments because otherwise I can't make sense of what the statements are doing." It is the English speaker demanding that every French passage have an English translation, rather than simply learning French.

They are the crutch of people who can't write code. "My code is a gigantic, illiterate mess, so instead read the comment at the top that has no guarantee of being robust or accurate."

Bringing up mathematicians and Knuth is an irrelevant distraction. Software development in the modern world is a very structured, self-describing affair, or at least it should be. Comments are a short-circuit around having to figure out how to do that.


I'm not meaning to insult you, but you sound like a programmer whose experience consists solely of personal projects and academic assignments.

As has been said, comments explain why you are doing something in a certain way. That why is often related to a business process, several business processes, and/or 25 different edge cases.

The code can be amazingly clean and organized, but you comment to indicate why you did something one way and not another.


> I'm not meaning to insult you

Sure you are. And that's okay.


I'd like you to offer some examples of code that successfully documents its own raison d'ĂȘtre - in a way that a comment couldn't do better - instead of repeating this 'crutch of poor developers' rhetoric.

I mean, I can write self documenting code without any comments, and it's perfectly understandable.

    before_create :auto_increment

    def auto_increment
      self.count + 1
    end
That code fails to tell the programmer why it's in the application logic and not defined in the database schema. What should I do instead, so I can code without crutches?

    def auto_increment_natural_key
        self.count + 1
    end
That's not a great deal better, it's still just saying what it does, not why it's there in the first place.

    def auto_increment_natural_key_because_another_app_relies_on_it
        self.count + 1
    end
Is that it?


Your code doesn't actually seem to do anything, so a comment would either be contradictory to the code, which is confusing, or explain why your code does nothing, which is silly.

I would argue that in this case, it is far better to just understand the code and then fix it appropriately. A comment would leave you second guessing.

    def auto_increment
      self.count += 1
    end
(though, for the purposes of self documenting code, count probably is not a great variable name either)


Rather ironically, it appears my poor choice of example could have done with some more documentation.


I'm now curious to know what you would have written for documentation.

Though, I think your example turned out to be a great one because it highlights that not everyone reads code the same way. Personally, I load large segments of code into memory and then mentally step through. You left me wanting more to see the context in which the method was to be executed, but I didn't really feel the need for comments.

However, with just the one method in your example, that seems to indicate that you read one function at a time. If you are not observing the code as a whole, I guess a comment would help. I often find them taxing though, as they have to be read into memory as well.

I think that discrepancy is why the comments vs. self-documenting code debate exists.


The problem was I initially hinted at the context in my first example, with a call to Rails' `before_create` callback method. I then omitted it from every other example, thinking it'd be inferred. Evidently that wasn't very clear.

But the way in which people read code is an interesting point, which I actually think might be worthy of its own discussion. Looking at the psychology behind this would be good, I think.


> Evidently that wasn't very clear.

Actually, I would say your intent there was quite clear, which is where I came to realize that the code did nothing. Without that context, one could assume the method was used for its return value where the code could very well have had purpose as written.

What wasn't clear was how the count attribute was intended to be used throughout the rest of the application. In the real world I would start to build a mental model around the uses of count, which was not available from your example. In terms of self-documenting code, I liken a short code snippet like your example to a sentence fragment in English. The entire sentence, or statement if you will, encompasses much more of the codebase.

This is definitely a fascinating topic, but unfortunately one that is very difficult to discuss for many reasons. I wonder how we can dig into the psychology aspects that you raise without the prevalent "my way is the only true way" attitude?


Thanks for trying!

Now we just put the comment in the variable name. The compiler just checks that the variable names all match up, but doesn't check whether the names make sense.

Perhaps the example chosen was too much of a toy to yield valuable insight?


> That code fails to tell the programmer why it's in the application logic and not defined in the database schema

Why isn't it a GUID? Why is it 32 bits instead of 64? Why is it signed yet starts at 1? Why isn't it a string? How will the identities be merged?

The notion that answering one single question provides clarity is ridiculous.


def auto_increment_natural_key_here_because_if_a_product_doesnt_have_a_key_assigned_because_it_came_back_from_return_and_wasnt_assigned_on_during_the_recieve_shipment_process_the_key_doesnt_exist_and_another_app_relies_on_it

Pretty sure that's it.


Your sarcasm is screwing with HN's page formatting.


This is what element inspectors/editors are for. Snip, snip.


I had to use inspector to make the page readable, but it's a PITA. GP is a jerk.


Agreed. Just pointing out there's a workaround.


Your hubris is certainly self-documenting, if it gives you any comfort.


He (or she) is just early on the path, and like a newborn kitten their eyes are still closed. In the mean time keep them away from pointy things.


I hope that caricature gives you some comfort in your utter mediocrity.


What a useful contribution.


Do you have github/sourceforge? I'd love to read some of your non-trivial code.

I'm not trying to be an a$$hole btw. I am not a coding guru, I genuinely want to minimize the need for comments in my code and am willing to learn from examples.


> Do you have github/sourceforge? I'd love to read some of your non-trivial code.

Read almost any non-trivial successful project for good examples: the Linux kernel, Firefox, etc. The frequency and verbosity of comments tend to have a direct correlation with the simplicity of the code (which is the exact opposite of normal expectations).


Linux and Firefox code bases are messes.

Have you written any large projects that you've had to maintain over years, or worked with large teams, or handed off maintenance of a large project to others?


Two projects that are enormous successes, both with more contributors than any code that you've ever touched, I would wager. "Messes". Indeed.

To your questions, while you're rhetorically asking, trying to wink to the crowd in the implication that the answers are telling, yes, actually I have. To very good effect. I'm speaking from actual experience here, not just the hilarious patter of the bottomfeeder that is far too typical on HN.


> Two projects that are enormous successes, both with more contributors than any code that you've ever touched, I would wager.

No.

> "Messes". Indeed.

Yes, messes. Why do you think Chrome is eating Firefox's lunch? Google has both a better-implemented product and sufficient marketing clout to push it.

Have you worked on Linux kernel code?

> I'm speaking from actual experience here, not just the hilarious patter of the bottomfeeder that is far too typical on HN.

What have you worked on?

I've worked on FreeBSD, Mac OS X, and an assortment of smaller widely used software projects, including user-facing applications.


> Why do you think Chrome is eating Firefox's lunch?

Humorous, given that both WebKit and, derived from it, Chromium are largely comment-free. What nonsense are you arguing again?

> I've worked on FreeBSD, Mac OS X, and an assortment of smaller widely used software projects, including user-facing applications.

"Worked on" in HN parlance means "I did a coop term and wrote some test cases for some irrelevant little utility". Given your comical claims about Linux and Firefox re: Chrome, I have enough information about your skills.


> Humorous, given that both WebKit and, derived from it, Chromium are largely comment-free. What nonsense are you arguing again?

Seriously?

http://src.chromium.org/viewvc/chrome/trunk/src/ipc/ipc_chan...

  // If the channel has already been created, then we need to send this
  // message so that the filter gets access to the Channel.
http://src.chromium.org/viewvc/chrome/trunk/src/ipc/ipc_chan...

And so on.

> "Worked on" in HN parlance means "I did a coop term and wrote some test cases for some irrelevant little utility".

Please go back to Reddit.


An enormously complex rendering and JavaScript engine. Some laughably irrelevant, trivial comments. Quite a solid proof you have there.


Those comments are trivial and irrelevant?

You clearly have no idea what you're talking about. I hope I never get stuck cleaning up your messes, but chances are that someone as intellectually lazy as you -- if not you -- will leave me an uncommented code base to maintain.

The fact that you actively advocate intellectual laziness is distressing.


Intellectually lazy? That's a mighty big term for someone like you.

Further, it's utterly astonishing that you would claim that writing clear and unambiguous code, rather than nebulous code of uncertain purpose (like the example Chromium code you linked), is "intellectual laziness". That you hold good coding as deficient compared to lazy commenting is hardly surprising given your comments.


Failing to document code is to the detriment of future maintainers. Anyone who claims their code is clear and unambiguous without comments is lying to everyone, including themselves, as a means to justify their intellectual laziness.

"I don't need to comment" is really "I don't want to document my work because that's boring and I'm much too smart to need to do that".

You're not that smart. If you were, you'd realize just how dumb everyone is, and thus, just how necessary comments are.


Hey flatline3, I feel your pain, but it's time to stop feeding the troll.


Making the machine check as much as possible about your code is a worthy and practical goal. I often try long and hard to capture as many invariants as possible in the type system. And even though the language that we are using, Haskell, has one of the strongest type systems you can find, that's still not very much logic guaranteed by the compiler. Of course, run-time checks can catch a few more errors, but I'd rather catch mistakes as early as possible.

You seem to have lots of experience with expressing intent in code, and making that intent 'canonical'. How do you make the machine check the accuracy of your code? What language are you using for that?
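For what it's worth, the run-time-check fallback mentioned above can be sketched in Ruby, which has no static type system: a value object that enforces its invariant at construction, so later code can rely on it the way Haskell code relies on a type (the class and invariant here are hypothetical):

```ruby
# Refuses to exist in an invalid state; every later use of a
# Percentage can assume 0..100 without re-checking or a comment.
class Percentage
  attr_reader :value

  def initialize(value)
    unless value.is_a?(Numeric) && value.between?(0, 100)
      raise ArgumentError, "expected a number in 0..100, got #{value.inspect}"
    end
    @value = value
  end
end

Percentage.new(42).value # => 42
# Percentage.new(150)    # raises ArgumentError at the boundary
```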


You seem to be missing the point of comments. Comments are not meant to explain what your code does. They are for explaining why.


http://epicureandealmaker.blogspot.com/2012/03/chesterton-fe...

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable.


I would say it was still their failure for not documenting why things were done in an unintuitive/inelegant way.


And yet, questions like "Why didn't they do it this way?" have been and will be the sparks that start great new fires.


That is a good way to approach a new code base, generally. On one project I took on, they were putting SQL queries into a JavaScript variable on the page and sending them to the client. The client was then sending a SQL query back via ajax for the server to process. I hope I was not being arrogant calling it/them dumb!


Caution: there may be loads of inappropriate decisions, but that doesn't mean all of them are, even the most curious ones. Assuming so is highly misleading.

I rebuilt a messy WP blog (i18n hardcoding, dead code, etc.) from scratch. It turns out the weirdest thing they did (perverting the category/post system using additional metadata)... they did it because of a weird bug in the most important plugin they needed, which was no longer open source. Hit me right in the face.

As someone mentioned earlier, one has to explain everything in the system. The code is only the tip of the iceberg.


For every time that I've investigated strange code only to find out they had valid reasons for writing it that way, I've also found just as many situations where I'm glad I investigated and was therefore able to turn a horribly over-complex routine into something way simpler and obvious.

Maybe the ratio isn't one to one, but please, please don't just give up second-guessing ugly code.


"Annoying because the underlying thought is "Those NASA/JPL guys are so dumb""

I have to disagree with this statement and the article. I've asked some of these questions in the past, and the reason I ask is because I'm curious, not because I'm trying to call anyone "dumb".

The fact that people try to understand the underlying technology of such a complex mission, and later followup with a "why" question, simply shows how the general public is interested in these events.

I'd be more worried if people didn't ask any questions at all, implying that they do not care for such scientific and technological breakthroughs.

Indifference can be a dangerous thing.


Fairly big difference between asking "Why don't they...?" and "Why don't they just...?". The 'just' tacked into the question implies the answer should be totally obvious.


regardless of semantics, it is ignorance--not knowledge--that is the true engine of science


No.

It is the desire for knowledge.

I loathe semantic games in arguments as much as anyone, but I too often see "ignorance" being given some kind of special place in science, where it doesn't belong.

There's a reason why NASA didn't just dedicate 8 years, $2.5 billion, and a tremendous amount of human effort to land a rover called "Ignorance" on another planet.

Curiosity: in this glorious age of Google, Wikipedia, Wolfram-Alpha, and so many other free and readily available resources, it doesn't start with the words, "Why don't they just..."


Read ignorance as a lack of knowledge, not the voluntary action to ignore.


> read ignorance as a lack of knowledge, not the voluntary action to ignore.

Same thing. You cannot do anything with just "lack of knowledge", whereas you can do lots with either "knowledge" (e.g process it and extract more knowledge, conclusions, dis/prove theories etc) or curiosity (e.g obtain knowledge).

Lack of knowledge is passive. Curiosity is active.


"regardless of semantics". Why is this such a common phrase in debates? Semantics is the study of meaning in language. Requirements, code, comments and documentation are often much better when people do pay attention to semantics ;-)


It is ok to ask questions and wonder why they did it this way and not another way.

It is not ok to just go "they should have done it this way" without consideration for the millions of factors and limited resources they have.

Especially from people who have no idea how hard it is, how hardware can fail in hundreds of ways, or how much harder things get on a slower processor, in low-level programming, in a real-time app.

"Oh but they should have used Erlang/OCaml/LISP/JavaRT/NodeJS for that they are soooo stupid" SHUT UP

They did it, not you, and they have to live with the consequences, not you.


I don't think this is what jgc was saying.

The point is that it's foolish to assume you know better than someone, particularly when you are unaware of the background to their decisions. And doubly so when the someone in question is as smart as the NASA/JPL staff undoubtedly are.


I had the same reaction. "Why don't they just" questions are essential to understanding engineering, and we should, if anything, encourage them.


Which happens to be the point of this article.


Also people don't really share NASA's motivation. Mostly we just like looking at Mars, and if that was NASA's motivation they probably could have designed a better rover by swapping in cameras later in the process.

Dumb questions can have interesting answers. Even if people don't ask them nicely.


Well, I wrote one of those "why don't they just" comments on an article posted by jgrahamc[1] two weeks ago, so I feel the need to justify myself here.

Of course, as you point out I didn't mean to say that I know better than the guys at NASA. It's meant as "The way they do it seems strange and non-intuitive, how come they do this?". I agree that I could've worded that better.

That being said, I disagree when you say "It doesn't take much research to find the answer". Case in point, my comment:

"Why don't they just put a camera filming downwards to determine the ground speed? Wouldn't it be simpler and more reliable?"

I don't think any of your "answers" addresses this specific question. So it boils down to "Because it's on friggin' Mars, doofus". When I posted the comment, I hoped someone around here had an explanation for this (after all, determining a ground speed is something even non-space exploring robots need, I'm sure :).

But again, I agree the wording is awkward and comes off as pretentious, but do believe that it wasn't anything but genuine curiosity (sic).

[1] https://news.ycombinator.com/item?id=4345126


The questions will be dumb. They will repeat again and again. That's because not everybody is an expert on Mars robotics. Everybody has everyday tech, and they use that tech as their comparison point.

For instance, people have everyday technology in their hands, and they believe that if the ordinary Jane/John Doe has a multi-core smartphone that cost them $1000 including manufacturer margin, then the NASA/ESA/JPL guys with millions of dollars of funding must be using better, faster, fancier hardware. And to the ordinary John/Jane Doe, "fancier" does not mean "radiation hardened", "autonomous in a hostile environment", or "updatable over slow, relayed links", because those are not their daily problems; they do not know about them. Also, if the Apple/Samsung/Motorola (heck, even Nokia) guys can design, mass-manufacture, and distribute a smartphone into John's or Jane's hands in a few months, NASA must be moving at a snail's pace. How far away could Mars be? I can travel from continent to continent in a few hours.

Don't be harsh on them.

Some of the questions may seem dumb, but they may turn out to be really good questions if they are worked on. For example, I believe radiation hardening is needed because the journey to Mars passes through massive amounts of radiation. If Curiosity did not need to operate during the journey, and Mars's own radiation levels are not that massive, could Curiosity use multiple (with spares, of course) faster and cheaper ordinary processors with a small energy footprint, if the transport were a radiation-hardened shell that gets discarded on the Mars surface?


Just to answer the example: Mars does not have a magnetic field of significant strength, so Curiosity still needs shielding on the surface.

Perfectly reasonable question though.


Sorry, but I don't understand the shielding on the surface. What is this shielding for?


On Earth, thanks to our magnetic field and outer atmosphere, the radiation constantly fired at us is massively reduced; on the surface of Mars, the rover still needs to be shielded against radiation, as there's nothing similar to protect it.


Radiation, such as cosmic rays and solar flare output. The Earth has a magnetic field that deflects such charged radiation toward the poles; Mars has much less of that effect.


This last year I have made a conscious effort to stop whenever I use the word 'just' in communicating at work. It has taken some time and introspection, but for me personally, when I preface a statement with "...just do...", it is followed by a statement I have put about 10 seconds of thought into, and it isn't really valuable.

I now listen for the word 'just' when other people are pitching ideas or forming responses, and I tend to ignore what is said next, as I suspect they too have put little thought into the statement they are about to make.

I'm not sure this is a great general-case recommendation, but it has helped me in parsing language for possible stupidity.


Just is a four letter word. Discourse is almost always improved when you remove that word from a sentence.


The attitude of I know better than they do is really strong in technology and programming in general. Most people who make those claims know not of what they speak.


I'd say it's really strong everywhere, particularly technology and programming.

Thing is, unlike the average folks with lofty ideas of self-serving "fixes" for every irritation in their lives, people in technology aren't just consumers from a distance - they're builders, maintainers and influencers.

It's a dangerous problem when these "active" people don't understand context, concessions, dependencies and just generally what it takes to actually create things in the real world.

I believe it's one of the roots of various stifling attitudes of conformance and covering your ass above real, forward homegrown engineering.

Apologies if that doesn't make sense. I had a hard time trying to articulate my ideas there.


International politics (and to some extent politics in general) is another topic where this happens a lot.


An even cooler project would be proving once and for all that a swarm of small, rugged drones can indeed cover a larger area in harsh conditions and is a much more sensible strategy in exploration than a single large drone.

If you have ten or twenty expendable machines then you don't need the same level of QA, thus opening up a lot of interesting possibilities. If we can get equivalent amounts of exploration done at just half (or maybe a quarter of) the cost, then it opens up room for an arms race of sorts, where each generation can be rapidly iterated upon and we can plan such missions on a one- or two-year timeframe instead of an eight-year one.

Of course, this is easier said than done... ;)


Well, surely for a certain type of scientific mission (e.g. something like spectral analysis via ChemCam) you cannot just shrink down the components and instruments indefinitely. The size of Curiosity is directly related to the fact that it is bristling with scientific instruments like no other probe before it. Sure you could make a swarm of toaster sized drones that just go off and cover ground, but the purpose of this mission is not just to take some pictures but to do some very detailed scientific analysis.


What you could do is have dedicated drones which are essentially movable instruments, so you can have co-operation on tasks, say in teams of 3 or 4 (one vaporises while another carries the spectrometer; one drills while another carries the reagents, and so on).

If you enter this paradigm then you can eliminate costs such as that long arm you need to position the instrument package. That arm is in and of itself an engineering marvel, and it requires a lot of careful design to make sure it doesn't malfunction. (Remember, you have a not-so-light weight at the end, the torque due to that is huge, and you have a complex assembly of linkages to transfer torque, and so on...)

The idea of this is to see how simple and redundant you can make things. If for the cost of that arm we could have one small team of rovers wouldn't it be worth it? Wouldn't it jump start exploration?


What you might "gain" in reduced complexity by getting rid of the arm you lose massively in being able to power the whole system efficiently in one go. If you split the 10.6 pounds of plutonium across N smaller drones you would need to duplicate N radioisotope thermoelectric generators (RTGs), power electronics, batteries, radios, cameras, computers and probably half a dozen other things I haven't thought of yet. This probe is going to keep going for at the very least 10 years - probably quite a lot longer - all because of the design choices around how it's powered.


Again, do we need all of those systems? Why can't you have two dedicated machines for Earth communication and have the rest communicate via an HF radio protocol? You don't need an RTG if it's, say, Sojourner-sized and packs one payload on a standardised mobile base; solar panels will work just fine. Since you have redundancy built into the swarm you can skimp on a lot of things, including computing power. Remember, you can use group decision making for more complex "choices", and if you lose a few drones to unforeseen obstacles, then objectively it shouldn't matter, so you don't need a lot of LIDAR and such either. As far as heating and shielding go, the smaller size actually works to your advantage.

I don't see any reason why someone shouldn't make this. Yes these probes will be disposable, but that's the entire point. They can be used and thrown away opening doors to risk taking that we haven't really seen before.


"I don't see any reason why someone shouldn't make this."

You are exactly the kind of person that should build your own lander if you don't see this. And please, stop asking all of these questions in bad faith. I applaud deeringc for engaging with your specific points, but I can't bring myself to do this, since it seems to me like you assume all aerospace engineers are uncreative drones that can't think outside the box and see the obvious solution you've reached from your armchair.

Have you ever built a multiply-redundant, space-worthy, swarm-based system before? Have you built something that's any one of those three? If not, I don't have a problem with you thinking about them, but I do have a problem with your attitude. Edit: to clarify, I mean the attitude that comes across from your writing style. I have no idea what your qualifications are, how much thought you've put into this, how receptive you are to the idea that you're wrong, whether or not you recognize that everyone has more "unknown unknowns" than anything else, or what your actual attitude is. All I have are the words you write here. And my natural response to snark is more snark.

And if you think you're the first person to think "wouldn't it be great if not every spacecraft had to re-solve the problem of power, communications, and computation?" - I first heard the idea proposed in a 2009 talk[1] by someone who has been an insider for over 20 years. Some gems I remember from that talk:

* Having an identical copy is not redundancy

* Complexity is inherently more susceptible to failures

* Cars have been wildly successful because of gas stations and repair shops. Spacecraft have to drag around their own

* We could create a ring of satellites, each dedicated to providing comms to earth, wireless power, computing power, or whatever. Users of the system would just need to build a structure, interfacing components, and their instrument and launch it into a nearby orbit.

[1] Abstract: http://www.spacecraftresearch.com/files/Fleeter.pdf


> You are exactly the kind of person that should build your own lander if you don't see this.

Once I have the financial resources, I actually plan to do so.

> Have you built something that's any one of those three?

To answer that: yes, I have - which is completely orthogonal to my original comment, because I am advocating reducing the qualifications for "space-worthiness" through the use of multiple copies. After reading your comment I decided that there had to be some respectable source who had advocated this earlier, and with some research I found a paper by Rodney Brooks called Fast, Cheap and Out of Control: A Robot Invasion of the Solar System, outlining the same concept: http://people.csail.mit.edu/brooks/papers/fast-cheap.pdf

> I mean the attitude that comes across from your writing style.

Yes, I agree that my comments could look snarky under the weight of your assumptions, but I was doing my best to be genuine and engage in an honest discussion.

> I have no idea what your qualificaitons are, how much thought you've put into this, how receptive you are to the idea that you're wrong

I'm actually quite certain that I'm wrong most of the time, but I don't know how I'm wrong, and discussing and building things are the only ways to find out.

> whether or not you recognize that everyone has more "unknown unkowns" than anything else

Yes I do and it is a terrifying thought.

> or what your actual attitude is.

I'm doing my best to learn as much as I can and to never judge. (judgement takes up too many mental resources)

> And if you think you're the first person to think "wouldn't it be great if not every spacecraft had to re-solve the problem of power, communications, and computation?"

I'm not and I would love to do more than just think and actually build things.

> * Having an identical copy is not redundancy

Can you please explain why? Is it because the failure points remain the same?


Hey, I'm glad you responded, and that I misinterpreted where you were coming from and your intentions, but I guess that happens when limited to text (not that a misunderstanding-free medium exists). But as you've probably gathered by now, most of my response came from reading "I don't see any reason why someone shouldn't make this. [and I doubt a good one exists]" instead of "I don't see any reason why someone shouldn't make this. [and I would like to hear some]"

> Fast, Cheap, and Out of Control

The first thing that popped into my head when I saw this title was NASA's "Faster, Better, Cheaper" initiative. I will admit that I do not know much about it, but what I do know is that there were some failures (as you would expect) and the public did not react well. The failures did not come from not having space-grade parts, IIRC, but for various other reasons. The most infamous was the Mars Climate Orbiter, known for failing due to an imperial-metric conversion error [1], that is still brought up in almost every space-exploration piece that gets sufficient attention. Anecdotally, the lessons learned by those I've talked to are (1) choose 2 and (2) the public will not tolerate failure on large NASA projects, even if those projects cost a fraction of what it takes to host an Olympics, or to buy Instagram, or fight a war for a day.[2] But I have just now hopefully started a discussion with my coworkers about this paper[3] on our message board.

So anyways, about the paper. It looks like your idea is only mentioned briefly in the one section, and not fleshed out. The idea was somewhat more fleshed out in the talk I went to. What I meant by my line of questioning was that I haven't seen an implementation of these concepts outside academia, even though the idea has been around for some time, and the industry isn't entirely driven by irrational beings, so there must be some technical reasons why the ideas haven't been fully adopted.

>> * Having an identical copy is not redundancy

> Is it because the failure points remain the same?

Essentially, yes. His argument was that most failures were systematic, and not due to unequal 'wear' of various kinds. Software especially, since on space missions it is implemented to be as deterministic as possible. That means that even if the primary processor fails gracefully due to bad internal logic, and a hot backup immediately takes its place, the backup will behave in exactly the same way given those inputs, which are likely to stay roughly the same across the switch. Another example is a mission where the high-gain antenna succumbed to a systematic failure, and they completed the mission on its low-gain antenna. If there had been two identical high-gain antennas instead, they would not have been able to.

Designing and testing components to last a long time under the conditions you expect and test for is relatively easy. It's when the designs and tests don't match reality that the problems happen. If you write something and make a copy, it will retain all of the typos of the original, and it's the same with code/CAD/etc. (and you might be more likely to introduce bugs than fix them). Ground-based systems can get away with this because of how easy it is to replace/repair broken units in a redundant setup.
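The identical-hot-backup point can be sketched as a toy model (pure illustration, nothing to do with actual flight software; the `guidance` function and its latent bug are invented for this sketch):

```python
# Toy illustration of "an identical copy is not redundancy": a hot backup
# running the same deterministic code fails on the same input.
def guidance(sensor_value):
    # Deterministic controller with a latent bug: blows up when the
    # sensor reads exactly 5.
    return 100 / (sensor_value - 5)

def run(processors, sensor_value):
    # Fail over to the next processor in line when the active one crashes.
    for name, controller in processors:
        try:
            return name, controller(sensor_value)
        except ZeroDivisionError:
            continue  # the hot backup takes over...
    return None, None  # ...but every identical copy fails identically

identical_pair = [("primary", guidance), ("backup", guidance)]
print(run(identical_pair, sensor_value=5))  # (None, None): backup crashed too
print(run(identical_pair, sensor_value=7))  # ('primary', 50.0)
```

Dissimilar redundancy (a differently implemented backup, like the low-gain antenna) is what actually protects against this failure mode.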

[1] Even though it was due to a much larger issue related to how the project was carried out, and just happened to manifest in that error. It could have easily been a kilometer/meter mixup instead. And having a backup string of landing hardware/software wouldn't have helped in this case, unless the backup units had a different design/implementation.

[2] And yet, JPL still chose to use the skycrane. Very courageous.

[3] http://www.dau.mil/pubscats/ATL%20Docs/Mar-Apr10/ward_mar-ap...


>>> most of my response came from reading "I don't see any reason why someone shouldn't make this. [and I doubt a good one exists]" instead of "I don't see any reason why someone shouldn't make this. [and I would like to hear some]"<<<

Ah I see. I'll start qualifying my I-don't-see-any-reason-whys from now on with the latter statement. :)

>>> I will admit that I do not know much about it, but what I do know is that there were some failures (as you would expect) and the public did not react well.<<<

I guess that's the real reason why this won't be implemented by government agencies. Space missions are a matter of national pride and no one wants a "designed to fail / waste of money" accusation on their hands. However, empirically speaking, it seems to be the best way to do things, as we can increase the amount of explored area quite rapidly and respond to changes much more quickly. I think that when private companies such as Planetary Resources start doing exploration they will be forced to adopt this model because of their constraints, and a lot of amazing solutions will come out of it. If that happens it might open up a Pandora's box and advances will happen much more quickly. Some people will see that as a bad thing, but as long as the systems are autonomous it should lead to a lot of good things.

>>> 2) the public will not tolerate failure on large NASA projects, even if those projects cost a fraction of what it takes to host an Olympics, or to buy Instagram, or fight a war for a day.<<<

More importantly the politician who approves it probably won't get re-elected.

>>> [2] And yet, JPL still chose to use the skycrane. Very courageous.<<<

I was quite shocked when I heard it worked. I was willing to bet on the side of failure because of the sheer complexity involved. A small timing error, sensor glitch or the other million things that could have gone wrong would have led to failure. It's quite impressive that they managed to do such a high stakes real-time task more or less autonomously. It really was quite a daring thing.

>>>your idea<<<

It's definitely not mine. I bet people were talking about it when I was in diapers.

>>> What I meant by my line of questioning was that I haven't seen an implementation of these concepts outside academia, even though the idea has been around for some time, and the industry isn't entirely driven by irrational beings, so there must be some technical reasons why the ideas haven't been fully adopted.<<<

Yes, I'm willing to bet that this has a lethal flaw which quickly led to such implementations being rejected, but the question is: can this be hacked, for lack of a better word? Remember, most organisations where rovers are designed are meant to be risk-averse, and aside from NASA they have an endless pool of resources to draw from (I'm talking about the military). The sociological and resource incentives in place simply work against any such proposal, independent of engineering viability.

>>> His argument that most failures were systematic, and not due to unequal 'wear' of various kinds. <<<

There's an interesting case to be made here: one option is redundancy at the unit level and another is redundancy at the system level. Take the swarm as a system. If you have multiple backup copies of machines adept at performing particular tasks then you essentially have redundancy, provided the same method is not followed at the unit level. In that case, if one fails for a particular set of inputs (hardware or software), you can "patch" the rest by either avoiding the physical situation or changing the software. You can repeat that with individual units and give them free rein. If one of them gets destroyed then the others ought to be modifiable in time. Although it won't guard against stupidity at the unit level, this should be a much more redundant system than anything we can create within a single one-use device.

At the level of the single unit, systematic failures come into play, and more copies actually buy you less there. Things must be asymmetrically designed to overcome edge cases and systematic failure modes, and robust engineering makes sense there. I think one of the best ways to implement such a system would be to spend most of the resources on creating and testing a mobile base that is independent of its payload (attachable modules which perform specific tasks). You should then be able to deploy this machine across missions and learn from all of the real-world testing in each mission to create something truly robust and reliable. Once you achieve that, you can start offering redundancy at the swarm level through the payloads. For example, in that high-gain/low-gain antenna scenario, wouldn't it be better to have a dozen robots equipped with a variety of antennas dedicated to communication?

>>> It's when the designs and tests don't match reality is when the problems happen. <<<

Yes, but isn't the entire point of the exercise to fail early and fail often so that you can succeed? If your system is disposable then any systemic failure becomes yet another data point to engineer against, and all future systems are better because of it. There is no better laboratory than nature, and surely this is a point in its favour? (Unless I'm missing something.)

Would you like to carry out this convo via email? If so then please feel free to drop me a line at searchingforabsolution [at] hush.com

>>> [3] http://www.dau.mil/pubscats/ATL%20Docs/Mar-Apr10/ward_mar-ap.... <<<

Thanks for linking me to this article! It was great.


> Again do we need all of those systems?

Yes. Like I've said above the explicit purpose of this mission is to gather advanced scientific data which requires comparatively bulky instruments and high power availability. Because it is so ridiculously expensive to get anything to Mars you want it to last as long as possible. "Disposable drones" make absolutely no sense when it costs so much to launch them into orbit, transport them 60 million km across the solar system, and then enter the Martian atmosphere and land in a coordinated place on the surface. Each gram you get to that point costs many thousand dollars. You don't just plan your strategy around losing a bunch of them - that would be many hundreds of millions of dollars down the drain.

By using a decaying nuclear isotope to power your probe you get many multiples more bang for the buck compared to solar-powered probes, which so far have maxed out at about 5 years. The solar concentration of a Martian winter is extremely low, and this problem is exacerbated the smaller the probe and the resulting battery it can carry.

Just like in computing and electronics, physical and mechanical distributed systems are inherently complex - more so than monolithic systems.


>>> Yes. Like I've said above the explicit purpose of this mission is to gather advanced scientific data which requires comparatively bulky instruments and high power availability. Because it is so ridiculously expensive to get anything to Mars you want it to last as long as possible. "Disposable drones" make absolutely no sense when it costs so much to launch them into orbit, transport them 60 million km across the solar system, and then enter the Martian atmosphere and land in a coordinated place on the surface. Each gram you get to that point costs many thousand dollars. You don't just plan your strategy around losing a bunch of them - that would be many hundreds of millions of dollars down the drain. <<<

I've thought about what you have said and I think that we are measuring the likelihood of success in different ways. I'm measuring it in terms of the likelihood one pair of devices will complete the outcome at the cost of all the others and, if I am correct, at some level you are measuring it in terms of reducing the possibility of a loss while achieving the mission objectives.

I think we can afford to build disposable machines, because if they are tiny and can fit within, say, a 50 cm cube (which is the diameter of Curiosity's wheel), the mass of each machine will also be radically less. Curiosity weighs 899 kg; a well-designed vehicle base could weigh as little as 1 kg, and with instrumentation we could work with an assumption of 2 kg. That is around 450 rovers! If they are divided into teams of, say, 6 and dropped off by some method at discrete intervals, then you have 75 teams exploring the Martian surface. If each team explores during just the warm Martian months (I'm working with an assumption of 400 sols) at a very conservative rate of 0.5 m^2 explored per sol, then all of the teams combined will explore 15,000 m^2 in the course of a single mission. That's huge. Further, in this scenario, if individual units fail at some point, the entire mission won't be jeopardised and that number will stay roughly the same. I think that if the units are allowed to be autonomous (again, because they are disposable) you could rapidly increase the area explored and get more out of a single mission.
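A quick sanity check of those numbers (every input is one of the assumptions stated above, not mission data):

```python
# Back-of-envelope check of the swarm estimate above.
curiosity_mass_kg = 899        # Curiosity's actual mass, used as a mass budget
mini_rover_mass_kg = 2         # assumed mass per mini-rover with instruments
team_size = 6                  # assumed rovers per team
mission_sols = 400             # assumed warm-season mission length
team_rate_m2_per_sol = 0.5     # assumed (very conservative) coverage per team

rovers = round(curiosity_mass_kg / mini_rover_mass_kg)   # ~450 rovers
teams = rovers // team_size                              # 75 teams
area_m2 = teams * mission_sols * team_rate_m2_per_sol    # total coverage

print(rovers, teams, area_m2)  # 450 75 15000.0
```

So the arithmetic holds up; the contentious parts are the per-rover mass and coverage-rate assumptions, not the multiplication.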

In this scenario the success of the mission has been decoupled from the functioning of any single device, and because of that you are free to pursue several orthogonal benefits such as these, which ultimately reduce costs. I think that if you factor in a decrease in launch costs due to companies like SpaceX, this ought to become even more attractive.

natep linked to a wonderful article ( http://www.dau.mil/pubscats/ATL%20Docs/Mar-Apr10/ward_mar-ap... ) on this which argues the point in a much better way.

>>> The solar concentration of a Martian winter is extremely low, and this problem is exacerbated the smaller the probe and the resulting battery that it can carry. <<<

One of the main uses of the battery during winter months is to keep the processor warm. If the assembly is small enough then you should be able to use just insulation, a very small heater and perhaps a long-lasting exothermic reaction that proceeds slowly over time. The amount of heat generated by such a reaction would be too small for something like Curiosity, but perhaps it might work for a very small machine? Again, since tolerances are low, shouldn't you be able to use a wider variety of batteries which store more per unit volume? I might be wrong on all counts, but a smaller design and lowered tolerances might actually work to our advantage.

>>> Just like in computing and electronics, physical and mechanical distributed systems are inherently complex - more so than monolithic systems. <<<

I'm actually not that into computing and electronics; I used to build physical systems, and how they fail fascinates me. My designs failed so often upon meeting the real world that I realised the only way to know if something would ever work was to actually implement it IRL. If you can carry out a mission at one-tenth of the cost then you can do that much more willingly and learn from unforeseeable failure modes much more quickly. That should be an answer to this problem rather than the other way around.


There is at least MetNet (http://en.wikipedia.org/wiki/MetNet), which aims to put a swarm of measurement devices on the Martian surface. They are going to use immobile measurement platforms.


Great post. I'm sure at least one person is going to say "But I don't have a billion dollar budget to spend on my Earth-rover!" though.


I don't see any problem with the "Why don't they just..." questions. People are curious. I don't really like the tone of the post, it comes off as a little elitist to me and is going to discourage question asking in the future if people are answered with that sort of tone.

Also, "Why don't they just" has multiple meanings, in my opinion. You explained one of them: a sort of incredulous question querying the stupidity of the people who made those decisions. But it's also used far more innocently. I and a lot of my friends will ask questions in that format as a genuine query, not meaning to tread on anyone's toes or insult anyone. (I'm having trouble articulating this!)

This sort of question asking isn't the exclusive domain of Mars rovers either, it's everything. Politics, economics, business etc etc. It's just natural human curiosity, people trying to understand things that seem counter intuitive at first.


The problem is how the reader chooses to interpret the phrase, and usually such phrases aren't made in a clear context.

From what I see, it looks like most readers interpret the phrase as an arrogant armchair expert assuming NASA, or whoever, is full of really stupid people, and that the person "asking" is essentially suggesting he knows better. The simpler interpretation - that a person who knows they don't know is simply asking - is often the secondary thought, if that thought occurs at all.

I think that has a lot to do with the combative nature of online "debate". People now are sort of trained to expect confrontation rather than a mellow old chat. It kind of ties into a recent thread here about, er, negative and harsh replies to Show HN articles.

This is one of the things I'm beginning to really not like about on-line discussions. I end up spending more time trying to make sure I'm not misinterpreted than making my actual point.


While I totally understand that the mission's computing power was decided a long time ago, if they'd left some 'high risk' room in the budget (mainly weight), they could have added whatever new tech was available, say, a year before launch, in the hope that it works. If it doesn't, the clunky decade-old tech will still do its job, but just maybe the foil-wrapped iPhone will FaceTime back to Earth and work just fine. Aside from adding a little weight, it seems like a small risk to throw onboard something closer to cutting edge, with the caveat that it might not work in the mission plan.

I bet if NASA had held a competition to come up with the gizmo most likely to work, up to 1 lb (or whatever), we'd have seen some pretty good ideas (beyond duct-taping an iPhone to the antenna mast).


Off topic: There's a comment (http://news.ycombinator.com/reply?id=4408271&whence=item...) that breaks the layout because it contains a very long word.

I was trying to find an easy CSS hack to make the layout impervious to long words but it turned out to be a little tricky. Can anyone find a CSS "one-liner" (few-lineser?) that'll fix this?
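For what it's worth, the usual fix is to let the browser break inside over-long words. A sketch (the `.comment` selector is hypothetical - HN's actual markup may differ, and browser support for these properties varied in 2012):

```css
/* Allow an unbroken word to wrap when it would otherwise overflow its box. */
.comment {
  word-wrap: break-word;      /* legacy name, widely supported */
  overflow-wrap: break-word;  /* standard name for the same property */
}
```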


I deleted that comment with Noscript click-Del (new feature), but that kills the answers too. The suggestion by jlev (hellbanned) is not working here.


Why don't you "just" use JavaScript?


"... You need to be a bit conservative. The thing you're sending to Mars is going to be on its own and unrepairable. ..."

Hardware is hard to fix, but software is doable. There are plenty of explanations of how this was done with Pathfinder:

http://research.microsoft.com/en-us/um/people/mbj/mars_pathf...

http://chronicle.augusta.com/stories/2004/02/16/liv_404763.s...

http://queue.acm.org/detail.cfm?id=1035610

There was also some reconfiguring of Curiosity ongoing/completed: http://blog.chron.com/techblog/2012/08/nasa-about-to-perform... And there is a lot of work being done on 'self-healing' software systems: http://www.zdnet.com/blog/emergingtech/self-healing-computer...


There are perfectly valid reasons to ask questions like this: you may be genuinely curious or confused about the issue. If something seems obviously odd to me it usually means I'm missing some piece of information needed to understand. Having somebody explain what circumstance I'm unaware of is extremely useful and enlightening. If I ask a question like this it's to get a better feel for the problem space rather than to imply any sort of incompetence on the part of whomever I'm asking about.

The real problem is that it's hard to tell the people asking questions like this out of genuine interest apart from the people being undeservedly condescending.

I suppose the most important thing is to phrase questions like this carefully and to be polite. I think it's also useful for the answerer not to assume bad faith unless it's very overt. A presumption of reasonableness from everyone would markedly improve discussions about projects like the Mars rover which generally contain very useful information and insight but can get side-tracked by the very issues this article talks about.


Regarding #4: it would be nice to see someone pursue a project to put a set of comms satellites in orbit around Mars to significantly improve this coverage.

If we are going to be making these efforts at Mars in the future (whether autonomous or manned) it seems worth the investment :)


> it would be nice to see someone pursuing a project to put a set of comms satellites in orbit around Mars to significantly improve this coverage.

Would that really change anything? Higher bandwidth between the rover and low Mars orbit will not make the link between Mars and Earth much faster, and that's the true bottleneck.

Not to mention that wireless communications are not "free", as any smartphone owner probably knows, and the rover's daily energy budget is pretty much fixed: the more the rover stays in contact with low Mars orbit, the less energy it has to drive around or fire its lasers.

And finally, for what it's worth, the MRO is already "significant improve[ment of] coverage".


> Would that really change anything?

Yes. The rover is only in communication with its satellite for a limited time each day, and the satellite <-> Earth link is wider than the satellite <-> rover link. With more satellites we could have an around-the-clock high-speed link and improve the bandwidth - future missions are likely to be loaded with more and/or higher-bandwidth sensors.


> and that's the true bottleneck.

Are you sure? My understanding was that the biggest bottleneck lay in the fact that the orbiters could only communicate with the lander for 8 minutes per day.

Fairly crippling :)

> Not to mention wireless communications are not "free" as any smartphone owner probably knows and the rover's daily energy budget is pretty much fixed

I understood the energy source gave a fixed output - it's not a daily budget per se. So 2 hours of communications (rather than 8 minutes) wouldn't leave it unable to move for the rest of the day.

I could be wrong.

However, if it takes a week to send it an update - or to receive big pictures - that implies delays in its work anyway.

I agree that the MRO is a significant improvement. I was just idly pointing out that if the aim is to go to Mars more it would be worth investing in infrastructure as much as science.


I think it's the conservative part people have a problem with. Someone at some point in NASA's life had to take the plunge and develop the components and algorithms to use, so it isn't unreasonable to expect some new components and algorithms on new projects.


Computers are something that NASA has always been conservative with. By the time Apollo flew, the technology for the Apollo Guidance Computer -- RTL ICs -- was a few generations obsolete. The shuttle's main computers started out as designs derived from the IBM S/360. It takes time to design a radiation hardened board, and from there, it takes time to integrate that board into new designs. Thus, progress seems slow, but it's about as slow as it ever was in hardware. Integrating a newer core late in the game means a lot of needless regression testing for questionable benefit. Because of how orbits work, the deadlines are a lot more set in stone, and if your change breaks something, it could mean that you miss your launch window, and are sitting on your butt for a couple years until it comes around again.

The algorithm part is equally simple to explain. Most of these algorithms aren't broke, so there's no real impetus to fix them. The equations are pretty well known, so between missions, there's really not a need to change them up.


I imagine they build the new components and algorithms for less dicey missions, get as much data back as possible and then look at reusing them in another area. NASA is basically a giant engine of risk mitigation; a lower-priority mission with a lower cost is much more suitable for something new that isn't tried and tested.


"1. The Mars Science Laboratory project was started eight years ago in 2004. So, all the technology on it is at least eight years old."

This doesn't seem quite right. From watching documentaries on the past Mars missions, it seems like each piece of scientific equipment is done separately by various engineering groups around the U.S. and the world. They are only integrated at the end. And those individual pieces of equipment are getting changed right up until the deadline if they aren't working up to spec.

So the requirements are eight years old, but the tech isn't necessarily. They are just very conservative in their requirements.


Well, the "Why don't they just..?" question is not specific to this case. It's a very common question in many other areas of life. In my opinion it stems from the general curiosity found in humans. And aside from causing mild annoyance, it can be a very good way to introspect your position.

For example, if others hadn't been asking this question, we wouldn't have gotten this explanation. And now that I have this explanation of NASA's process, I would like to know: "Why don't they just make all this process more agile?"

For example why have only one launch in 7 years with expensive hardware? Why not multiple launches with cheaper technology?


  > But rather than explaining all this stuff, I think there's a 
  > better way: build, land and operate a rover here on Earth.
Is this implying that building a rover here would provide an explanation for their technical choices? With a 2kg payload limit for a balloon-launched rover, you might be better off with the latest technology. GPS on a chip rather than separate components making up GPS circuitry. Or a tiny, power-efficient mobile chip rather than some power-hungry chip from a decade ago.


I agree, especially because many of the reasons for the older, less cutting edge technology used in space boil down to high levels of radiation. Building a small 2kg probe, launching via a weather balloon and then exploring a desert on earth would certainly be hugely educational (not to mention fun) but the constraints would be very different to those on a Mars mission and so the end result would look VERY different.


VxWorks was chosen because it does what it says on the tin.


"just" is a dangerous word. if you are hearing it, prepare to be condescended to. if you are saying it, you probably don't actually understand the situation.


While I generally agree with what he says, I can't help but wonder why they would build something over the course of 10 years without assuming that by the time it would be sent into space, there would be better technology available for various aspects of their design.

For example, build everything around the Camera, but make sure that when it's within a year of going up into space, the design is such that the highest resolution camera available can take the current camera's place.

Of course, anyone who has worked on the project will easily be able to say that I have no clue. And I don't. But I guess that if I were included in the planning stages, and it came down to the data collection components of this machine, the first thing I would add would be that it should be capable of upgrading to the latest and greatest within a year of launch.

As a final note, I understand that they DID do this with software. They can update the operating system and roll back. They can advance the software, bug fix, etc, in a VERY safe way from afar.



At least with software you can rollback, not exactly something that works with hardware. A year may seem like a long time, but to test all the ramifications of something that sounds simple like a higher resolution camera is not an easy problem. As the dpreview article highlights, a change in resolution affects multiple subsystems.


Why is tech so unreliable? Why do testing and building take so long that the tech is outdated by launch?


I used to wonder that too. Well, I wondered why nobody could seem to make simple products that worked reliably without any duds and without a lot of cost.

Then I got a job with a company that manufactures sensors used by a huge number of companies and governmental departments, including NASA and Boeing. And my job was Output Smoothness Technician -- which meant that I was the one guy who was responsible for verifying the electrical specifications of every single part that left the plant. (The next two steps were QA -- which tested things like watertightness -- and then shipping.)

I learned a lot from that job. I learned that there are a lot of decisions behind every single little thing. For example, let's take the tiny little linear potentiometers that Disney uses in its animatronics. Someone at Disney decided they needed them; one team of engineers came up with a spec; another team of engineers figured out how to turn the spec into something that could be made; another team figured out how to make it. Then someone decided what kind of metal to use. Someone else decided how to tune the potentiometer to provide the desired output. Someone else decided what kind of grease to use. And then during manufacturing, someone decided whether or not the housing was good enough, someone decided whether or not everything fit together right, someone decided whether or not the weld was good enough, and so on.

By the time that little thing got to me, there were thousands of little decisions stored in it. Some good, some not. Then I had to test it and decide whether or not it would do what the customer wanted. Should I throw it out, costing the company a lot of money? Or should I ship it, let the customer decide, and hope the QA on their end is better than mine?

I pretty quickly developed a reputation as one of the toughest OS techs they'd ever had. I threw back a lot of parts. The sales manager (who basically ran that location) hated my guts. But we still had defective parts come back every month!

Now imagine that you're trying to build something that you can't service, and you can't fix. You get only one shot to get it right, and a lot of money and a lot of people are riding on you. Best of all, you're building it to survive in an environment that you just can't really create here on Earth, so you don't get to test it the way that you'd test a lot of things.

That's why it takes 8 years and a lot of money: because it's very, very hard.

I think that one day sending things to Mars will become something that we're used to, and at that point, it will get a lot faster and a lot cheaper and a lot more reliable. But, right now, we're still trying to do things that we don't really know how to do. That's hard, and it takes time.


So tech is unreliable because so many people work on different parts, that everything is a fragmented mess? How can one fix this problem?


Having a large number of people in the chain isn't the problem, it's that no manufacturing process is perfect and defects in design or production are often not obvious until you've tested the item in the field.

Further, it's not practical to test things on Mars, so you can imagine how difficult this can be for the engineers here on Earth.

The process described here is typical of many companies because it's not possible for one person to do all of these steps. The level of expertise required is too high. You may be a great mechanical engineer, but do you know about the performance characteristics of the hundreds of different kinds of grease? About the way the sleeve should be machined? There's a list of specializations thousands of items long, and at best you'll be able to truly master only a small percentage of these even through a whole career.

The ultimate reason technology is unreliable is because the world is a complicated, crazy place that often exposes you to situations you're not prepared for.


Let's not say it's unreliable. That doesn't really have much meaning without context: the reliability I expect from my car is much lower than the reliability I expect from a space probe.

Instead, let's consider that everything ever built will fail under a certain set of conditions. Understanding a particular component's reliability means understanding all the things that are likely to go wrong over its lifetime and fixing some, and characterizing others. The many people working on different parts is an outcome of different concerns seeing different problems and applying solutions. These solutions are sometimes contradictory: a weak potentiometer shaft may be improved by using a particular alloy of steel instead of, say, aluminum. But now that alloy may make the part too heavy, or too expensive, or it may create problems for the bearing it has to sit in, etc.

What you see in the long development time isn't that something is necessarily "unreliable"; it's that it takes that long to understand how it can fail and to make it reliable enough to sustain the mission. This is also why engineers tend to use tried and true parts: they already understand them completely.


Just to play devil's advocate here, is it really /that/ "outdated"? Yes, the camera is modest by 2004 standards, but by 1994 standards it's out of this world!

To me, eight years to build and test equipment to fly to Mars and wander around on another planet's surface is a pretty reasonable turnaround time.


Why don't they just send an iPhone to control the mars rover?

Radiation? What does radiation do to electronics? Do they corrode? Will things short circuit?

Would shielding an iPhone be more expensive than using outdated parts?

Do we know that the iPhone isn't reliable enough? Mine seems to be fairly reliable. Any problems with it are solved by a simple reset. Certainly a small bit of old battle tested code/hardware could handle this problem. Is the benefit of having modern hardware not worth it?

I'm sure all of these questions have answers, but it's not like I know where to find them.


Until we lose communication with the rover because a bit of conductive dirt shorted the antenna.


I would expect the budget would include enough money for a bumper.


This is overly cynical. When people say "Why don't they just...", it is a sincere question, not a rhetorical statement. "Oh radiation...that makes sense". "Oh they had to bake the tech for 8 years". Etc.

Sidenote - "Why don't they just" shield the electronics rather than making the electronics itself radiation proof? I ask that knowing that there is a legitimate reason (shielding weight?), but it is something I have sincerely often wondered.


'Radiation' is not a single unified thing. You can shield against some of it with chicken wire with multi-inch holes in it. Other kinds can travel through the earth before hitting you. One of the basic problems is that some of the things that help against one type of radiation make other things worse. Also, Mars missions are multi-year affairs, so even at background radiation levels on Earth you are going to have problems.

PS: If you added up the entire mass of everything ever put into orbit you would not be able to make a shield to guarantee normal equipment would be able to stand a 5 year mission in space.


The problem is that you can't fully control the meaning that someone takes from your statement. As a spacecraft engineer who knows many of the possible justifications for these "why don't they just"s, it's the negative connotation that springs to mind first, even without the "just". I think it is because that phrase is often used in the imperative sense, and not in a knowledge-seeking sense. Google autocomplete for "why don't you" suggests "stay", "get a job", "love me", and "do right", and if someone were to ask me any of those questions, I would not assume they were sincerely wondering anything.

Now, if you're wondering how better to phrase your questions of this sort, I'm afraid I don't have any specifics, because I'm not sure if I'm just as guilty of this, or if I've successfully rephrased my questions of this sort. What I do is try to imagine that I've spent years designing, building, and debugging the thing in question, and gone through multiple reviews by outside organizations where every design decision was scrutinized, and had many meetings with coworkers (formal and informal) to discuss the thing in question. And then I ask my question in the affirmative, rather than in the negative, such as:

"Why did you do X, would Y also work?"

"I would have done Y. What is it about X or Y that I am missing?"



