So I would go in and "fix" things... only to find out a month later that the code was the way it was because of some obscure edge case that I had never thought of. It turns out that in my arrogance I was the dummy all along.
After that happened a couple of times, I stopped approaching strange code with that attitude. Programmers are in general pretty clever, and if you see something strange in some code, don't assume it's because they're dumb. At first assume it's because you're dumb, and only change your mind if careful and deliberate analysis--and talking to someone else with history in the organization--proves otherwise.
Then all of a sudden, you don't need to waste time on careful and deliberate analysis, and hunting down people who might know about it, and making assumptions, because the 2 minutes it would take to write a few lines of explanation would save you all of that.
At which point, the real question is: why did they choose not to document this non-obvious solution, and the edge case that required it?
From my personal experience with "corporate programming", the usual suspects are:
1) corporate culture that dictates that you need to get the code out ASAP and let someone else worry about maintenance
2) original author's assumption that he/she will be the only one to touch that code
I've been guilty of #2 before I learned that even if I am the only one to touch the code, if I wait long enough before I come back to it, I'll still have the same problem as a newcomer would.
As for #1, this is a typical corporate culture for any company whose business isn't producing code (and for quite a few whose business is precisely that).
When you get familiar with a domain it is very easy to get blind to what people without exposure to it will consider obvious or not.
4) Documentation that is kept separate / maintained in different systems, which diverge over time. Or, one failure in training the new guy is not giving them a good overview of the documentation system(s). And/or the documentation organization and systems are cumbersome to the point of being useless unless you already have a pretty good idea of where stuff is (and isn't -- all those empty forms that end up being ignored).
Unfortunately, three months from now, "someone else" is you, and "getting the code out ASAP" means figuring out the spaghetti code you wrote but no longer understand.
In the long run, doing it right the first time helps us go faster. Therefore, it's part of the programmer's job to resist pressure to do it wrong.
It's feasible to maintain. But people might rely on it, too. As a random (and easily avoidable) example: a naive quicksort works fine on random input, but 'breaks' -- degrades to quadratic time -- on pre-sorted input.
Any quicksort implementation can hit O(n^2) on random input. It's just highly unlikely.
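The quadratic blow-up above is easy to demonstrate with a minimal sketch (names and the work-counting scheme are my own, for illustration): a quicksort that always picks the first element as pivot does its worst on already-sorted input, while random input almost always stays near n log n.

```python
import random

def naive_quicksort(xs):
    """First-element-pivot quicksort; returns (sorted list, partition work done)."""
    if len(xs) <= 1:
        return xs, 0
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    left_sorted, left_work = naive_quicksort(left)
    right_sorted, right_work = naive_quicksort(right)
    # Each partition pass touches len(rest) elements.
    return left_sorted + [pivot] + right_sorted, left_work + right_work + len(rest)

n = 300
shuffled = random.sample(range(n), n)
random_sorted, random_work = naive_quicksort(shuffled)
presorted_sorted, presorted_work = naive_quicksort(list(range(n)))
# On pre-sorted input every pivot choice is maximally lopsided, so the
# partition work sums to n*(n-1)/2; random input stays far below that.
```

Randomizing the pivot choice removes the predictable worst case, which is exactly why 'pre-sorted input breaks it' is the kind of non-obvious fact worth a comment.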
For example, you could have one 10-line method, interspersed with comments, or you could have a hierarchy of method calls, each with a name so clear that it needs no comments. The second method is less likely to get out of sync with what the code actually does, and it's more testable.
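The contrast above can be sketched with a hypothetical order-validation example (all names invented for illustration): the first version documents each step with a comment, the second moves each comment into a method name that every call site and test exercises.

```python
# Version 1: one method, interspersed with comments.
def process_order_commented(order):
    # reject orders with no items
    if not order["items"]:
        raise ValueError("empty order")
    # total must cover the minimum charge
    total = sum(i["price"] * i["qty"] for i in order["items"])
    if total < 1.00:
        raise ValueError("below minimum charge")
    return total

# Version 2: the same logic as a hierarchy of clearly named methods.
def has_items(order):
    return bool(order["items"])

def order_total(order):
    return sum(i["price"] * i["qty"] for i in order["items"])

def meets_minimum_charge(total, minimum=1.00):
    return total >= minimum

def process_order(order):
    if not has_items(order):
        raise ValueError("empty order")
    total = order_total(order)
    if not meets_minimum_charge(total):
        raise ValueError("below minimum charge")
    return total
```

The second version's "documentation" (the names) cannot silently drift from behavior the way a stale comment can, and each small method is independently testable.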
This is the lesson that stuck with me most from Clean Code, which I highly recommend.
Not trying to pick on you, but every time I see someone implying that "naming things better" is a panacea, I want to throw it out there that sometimes, nothing beats a good comment to explain (in natural language) what's going on here.
// It may look as if you could just say p->adjust() here,
// but that doesn't work because of a subtlety involving
// cancelled credit cards belonging to purchasers in Uganda.
// Please tread carefully!
If the code isn't the clearest comment you have, you're doing it wrong.
Comments are a crutch for people who write poor code. Comments have zero authority or guarantee of accuracy, and more often than not have little correlation with the actual code.
Code is canonical. Comments are noise.
Business logic is complicated and rarely defined by a developer but by a product manager. Often you can understand WHAT is being done in the code, but the WHY is necessary to understand why it's there and what it's trying to do. I believe giving a brief synopsis of the business logic in a method comment and, if it's not super straightforward, a brief overview of the steps or algorithm, is incredibly useful.
I guarantee you won't be able to figure out what your program is doing by looking through years old wikis left by product managers no longer at your company.
That the code does X is clear from the code: no amount of words can refute or change that the code does X. The danger in replacing code always lies in the behavior of the code, never in simplified descriptions of its actions.
To business logic, as someone who has worked heavily with business code for years (laughable commentary from imbeciles to the contrary), business logic in comments is one of the worst choices a team can make because it is an escape hatch. It negates the need for verbose, traceable code. It negates the need for vastly superior external proof.
Donald Knuth, who gives money to people who spot bugs in his code, even invented literate programming to mix prose explanation and code better. Is he an idiot?
They are the crutch of people who can't read code: "Add more comments because otherwise I can't make sense of what the statements are doing." It is the English speaker demanding that every French passage have an English translation, rather than simply learning French.
They are the crutch of people who can't write code. "My code is a gigantic, illiterate mess, so instead read the comment at the top that has no guarantee of being robust or accurate."
Bringing up mathematicians and Knuth is an irrelevant distraction. Software development in the modern world is a very structured, self-describing affair, or at least it should be. Comments are a short-circuit around having to figure out how to do that.
As has been said, comments explain why you are doing something in a certain way. That why is often related to a business process, several business processes, and/or 25 different edge cases.
The code can be amazingly clean and organized, but you comment to indicate why you did something one way and not another.
Sure you are. And that's okay.
I mean, I can write self documenting code without any comments, and it's perfectly understandable.
self.count + 1
I would argue that in this case, it is far better to just understand the code and then fix it appropriately. A comment would leave you second guessing.
self.count += 1
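The bug in the snippet above is easy to demonstrate in a minimal sketch (the `Counter` class here is my own scaffolding): `self.count + 1` computes a value and throws it away, while `+=` actually rebinds the attribute.

```python
class Counter:
    def __init__(self):
        self.count = 0

    def broken_increment(self):
        self.count + 1   # computes count + 1, then discards it: count unchanged

    def increment(self):
        self.count += 1  # rebinds the attribute: count actually grows

c = Counter()
c.broken_increment()
assert c.count == 0   # the no-op left it untouched
c.increment()
assert c.count == 1
```

Note that no comment is needed once the statement is correct; a linter warning about an expression statement with no effect would catch the broken version mechanically.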
Though, I think your example turned out to be a great one because it highlights that not everyone reads code the same way. Personally, I load large segments of code into memory and then mentally step through. You left me wanting more to see the context in which the method was to be executed, but I didn't really feel the need for comments.
However, with just the one method that you wrote in your example, that seems to indicate that you read just one function at a time. If you are not observing the code as whole, I guess a comment would help. I often find them taxing though, as they have to be read into memory as well.
I think that discrepancy is why the comments vs. self-documenting code debate exists.
But the way in which people read code is an interesting point, which I actually think might be worthy of its own discussion. Looking at the psychology behind this would be good, I think.
Actually, I would say your intent there was quite clear, which is where I came to realize that the code did nothing. Without that context, one could assume the method was used for its return value where the code could very well have had purpose as written.
What wasn't clear was how the count attribute was intended to be used throughout the rest of application. In the real world I would start to build a mental model around the uses of count, which was not available from your example. In terms of self-documenting code, I liken a short code snippet like your example to a sentence fragment in english. The entire sentence, or statement if you will, encompasses much more of the codebase.
This is definitely a fascinating topic, but unfortunately one that is very difficult to discuss for many reasons. I wonder how we can dig into the psychology aspects that you raise without the prevalent "my way is the only true way" attitude?
Now we just put the comment in the variable name. The compiler just checks the variable names all match up, but doesn't check whether their names make sense.
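A hypothetical illustration of that point (function and values invented): the interpreter only checks that names are used consistently, not that they are truthful, so a name can lie just as badly as a stale comment.

```python
def sorted_unique_values(xs):
    # The name promises sorted, de-duplicated output, but nothing
    # enforces that promise: this runs without any complaint.
    return list(xs)

result = sorted_unique_values([3, 1, 3, 2])
# result is [3, 1, 3, 2]: neither sorted nor unique, and no tool objected.
```

Good names are valuable, but like comments they are only kept honest by review and tests, not by the compiler.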
Perhaps the example chosen was too much of a toy to yield valuable insight?
Why isn't it a GUID? Why is it 32 bits instead of 64? Why is it signed yet starts at 1? Why isn't it a string? How will the identities be merged?
The notion that answering one single question provides clarity is ridiculous.
Pretty sure that's it.
I'm not trying to be an a$$hole btw. I am not a coding guru, I genuinely want to minimize the need for comments in my code and am willing to learn from examples.
Read almost any non-trivial successful project for good examples: the Linux kernel, Firefox, etc. The frequency and verbosity of comments tend to have a direct correlation with the simplicity of the code (which is the exact opposite of normal expectations).
Have you written any large projects that you've had to maintain over years, or worked with large teams, or handed off maintenance of a large project to others?
To your questions -- asked rhetorically, winking to the crowd with the implication that the answers are telling -- yes, actually, I have. To very good effect. I'm speaking from actual experience here, not just the hilarious patter of the bottomfeeder that is far too typical on HN.
> "Messes". Indeed.
Yes, messes. Why do you think Chrome is eating Firefox's lunch? Google has both a better-implemented product and sufficient marketing clout to push it.
Have you worked on Linux kernel code?
> I'm speaking from actual experience here, not just the hilarious patter of the bottomfeeder that is far too typical on HN.
What have you worked on?
I've worked on FreeBSD, Mac OS X, and an assortment of smaller widely used software projects, including user-facing applications.
Humorous, given that both WebKit and, derived from it, Chromium are largely comment-free. What nonsense are you arguing again?
"Worked on" in HN parlance means "I did a coop term and wrote some test cases for some irrelevant little utility". Given your comical claims about Linux and Firefox re: Chrome, I have enough information about your skills.
// If the channel has already been created, then we need to send this
// message so that the filter gets access to the Channel.
And so on.
> "Worked on" in HN parlance means "I did a coop term and wrote some test cases for some irrelevant little utility".
Please go back to Reddit.
You clearly have no idea what you're talking about. I hope I never get stuck cleaning up your messes, but chances are that someone as intellectually lazy as you -- if not you -- will leave me an uncommented code base to maintain.
The fact that you actively advocate intellectual laziness is distressing.
Further, it's utterly astonishing that you would call writing clear and unambiguous code, rather than nebulous code of uncertain purpose -- like the example Chromium code you linked -- "intellectual laziness". That you hold good coding as deficient compared to lazy commenting is hardly surprising given your comments.
"I don't need to comment" is really "I don't want to document my work because that's boring and I'm much too smart to need to do that".
You're not that smart. If you were, you'd realize just how dumb everyone is, and thus, just how necessary comments are.
You seem to have lots of experience with expressing intent in code, and making that intent 'canonical'. How do you make the machine check the accuracy of your code? What language are you using for that?
This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable.
I rebuilt a messy WP blog (i18n hardcoding, dead code, etc.) from scratch. Turns out the weirdest thing they did (perverting the category/post system using additional metadata)... they did it because of a weird bug in the most important plugin they needed, which was no longer open source. Hit me right in the face.
As someone mentioned earlier, one has to explain everything in the system. The code is only the tip of the iceberg.
Maybe the ratio isn't one to one, but please, please don't just give up second-guessing ugly code.
I have to disagree with this statement and the article. I've asked some of these questions in the past, and the reason I ask is because I'm curious, not because I'm trying to call anyone "dumb".
The fact that people try to understand the underlying technology of such a complex mission, and later follow up with a "why" question, simply shows how interested the general public is in these events.
I'd be more worried if people didn't ask any questions at all, implying that they do not care for such scientific and technological breakthroughs.
Indifference can be a dangerous thing.
It is the desire for knowledge.
I loathe semantic games in arguments as much as anyone, but I too often see "ignorance" being given some kind of special place in science, where it doesn't belong.
There's a reason why NASA didn't just dedicate 8 years, $2.5 billion, and a tremendous amount of human effort to land a rover called "Ignorance" on another planet.
Curiosity: in this glorious age of Google, Wikipedia, Wolfram-Alpha, and so many other free and readily available resources, it doesn't start with the words, "Why don't they just..."
Same thing. You cannot do anything with just "lack of knowledge", whereas you can do lots with either "knowledge" (e.g process it and extract more knowledge, conclusions, dis/prove theories etc) or curiosity (e.g obtain knowledge).
Lack of knowledge is passive. Curiosity is active.
It is not OK to just go "they should have done it this way" without consideration for the millions of factors and limited resources they have.
Especially from people who have no idea how hard it is, how hardware can fail in hundreds of ways, and how much harder things are on a slow processor, in low-level programming, in a real-time app.
"Oh but they should have used Erlang/OCaml/LISP/JavaRT/NodeJS for that they are soooo stupid" SHUT UP
They did it, not you, and they have to live with the consequences, not you.
The point is that it's foolish to assume you know better than someone, particularly when you are unaware of the background to their decisions. And doubly so when the someone in question is as smart as the NASA/JPL staff undoubtedly are.
Dumb questions can have interesting answers. Even if people don't ask them nicely.
Of course, as you point out I didn't mean to say that I know better than the guys at NASA. It's meant as "The way they do it seems strange and non-intuitive, how come they do this?". I agree that I could've worded that better.
That being said, I disagree when you say "It doesn't take much research to find the answer". Case in point, my comment:
"Why don't they just put a camera filming downwards to determine the ground speed? Wouldn't it be simpler and more reliable?"
I don't think any of your "answers" addresses this specific question. So it boils down to "Because it's on friggin' Mars, doofus". When I posted the comment, I hoped someone around here had an explanation for this (after all, determining a ground speed is something even non-space exploring robots need, I'm sure :).
But again, I agree the wording is awkward and comes off as pretentious, but do believe that it wasn't anything but genuine curiosity (sic).
For instance, people hold everyday technology in their hands and believe that if ordinary Jane/John Doe has a multi-core smartphone costing $1000 including manufacturer margin, then the NASA/ESA/JPL folks with millions of dollars of funding must be using better, faster, fancier hardware. And to an ordinary John/Jane Doe, "fancier" does not mean "radiation hardened", "autonomous in a hostile environment", or "updatable over slow, relayed links", because those are not their daily problems. They do not know about them. Also, if the Apple/Samsung/Motorola (heck, even Nokia) guys can design, mass-manufacture, and distribute a smartphone into John's or Jane's hands in a few months, NASA must be slow as a fricking snail. How distant could Mars be? I can travel from continent to continent in a few hours, after all.
Don't be harsh on them.
Some of the questions may seem dumb, but may be really good questions if they are worked on. For example, I believe radiation hardening is needed because, during the journey to Mars, there are massive amounts of radiation to pass through. If Curiosity did not need to operate during the journey, and Mars surface radiation levels are not that massive, could it use multiple (with spares, of course) faster, cheaper, ordinary processors with a small energy footprint, provided the transportation happened inside a radiation-hardened shell that would be discarded on the Martian surface?
Perfectly reasonable question though.
I now listen for the word "just" when other people are pitching ideas or forming responses, and tend to discount what is said next, as I suspect they too have put little thought into the statement they are about to make.
I'm not sure this is a great general-case recommendation, but it has helped me in my parsing of language for possible stupidity.
Thing is, unlike the average folks with lofty ideas of self-serving "fixes" for every irritation in their lives, people in technology aren't just consumers from a distance - they're builders, maintainers and influencers.
It's a dangerous problem when these "active" people don't understand context, concessions, dependencies and just generally what it takes to actually create things in the real world.
I believe it's one of the roots of various stifling attitudes of conformance and ass-covering winning out over real, forward-looking, homegrown engineering.
Apologies if that doesn't make sense. I had a hard time trying to articulate my ideas there.
If you have ten or twenty expendable machines, then you don't need the same level of QA, which opens up a lot of interesting possibilities. If we can get an equivalent amount of exploration done at just half (or maybe a quarter of) the cost, then it opens up room for an arms race of sorts, where each generation can be rapidly iterated upon and we can plan such missions within a one- or two-year timeframe instead of an eight-year one.
Of course, this is easier said than done... ;)
If you enter this paradigm, then you can eliminate costs such as that long arm you need to position the instrument package. That arm is an engineering marvel in and of itself, and it requires a lot of careful design to make sure it doesn't malfunction. (Remember, you have a not-so-light weight at the end; the torque due to that is huge, and you have a complex assembly of linkages to transfer torque, and so on...)
The idea of this is to see how simple and redundant you can make things. If for the cost of that arm we could have one small team of rovers wouldn't it be worth it? Wouldn't it jump start exploration?
I don't see any reason why someone shouldn't build this. Yes, these probes will be disposable, but that's the entire point. They can be used and thrown away, opening the door to risk-taking that we haven't really seen before.
You are exactly the kind of person that should build your own lander if you don't see this. And please, stop asking all of these questions in bad faith. I applaud deeringc for engaging with your specific points, but I can't bring myself to do this, since it seems to me like you assume all aerospace engineers are uncreative drones that can't think outside the box and see the obvious solution you've reached from your armchair.
Have you ever built a multiply-redundant, space-worthy, swarm-based system before? Have you built something that's any one of those three? If not, I don't have a problem with you thinking about them, but I do have a problem with your attitude. Edit: to clarify, I mean the attitude that comes across from your writing style. I have no idea what your qualifications are, how much thought you've put into this, how receptive you are to the idea that you're wrong, whether or not you recognize that everyone has more "unknown unknowns" than anything else, or what your actual attitude is. All I have are the words you write here. And my natural response to snark is more snark.
And if you think you're the first person to think "wouldn't it be great if not every spacecraft had to re-solve the problems of power, communications, and computation?" -- I first heard the idea proposed in a 2009 talk by someone who has been an insider for over 20 years. Some gems I remember from that talk:
* Having an identical copy is not redundancy
* Complexity is inherently more susceptible to failures
* Cars have been widely successful because of gas stations and repair shops. Spacecraft have to drag around their own
* We could create a ring of satellites, each dedicated to providing comms to earth, wireless power, computing power, or whatever. Users of the system would just need to build a structure, interfacing components, and their instrument and launch it into a nearby orbit.
 Abstract: http://www.spacecraftresearch.com/files/Fleeter.pdf
Once I have the financial resources, I actually plan to do so.
> Have you built something that's any one of those three?
To answer that: yes, I have, which is completely orthogonal to my original comment, because I am advocating reducing the qualifications for "space-worthiness" through the use of multiple copies. After reading your comment I decided that there had to be some respectable source who had advocated this earlier, and with some research I found a paper by Rodney Brooks called "Fast, Cheap and Out of Control: A Robot Invasion of the Solar System" outlining the same concept: http://people.csail.mit.edu/brooks/papers/fast-cheap.pdf
> I mean the attitude that comes across from your writing style.
Yes, I agree that my comments could look snarky under the weight of your assumptions, but I was doing my best to be genuine and engage in an honest discussion.
> I have no idea what your qualifications are, how much thought you've put into this, how receptive you are to the idea that you're wrong
I'm actually quite certain that I'm wrong most of the time, but I don't know how I'm wrong, and discussing and building things are the only ways to find out.
> whether or not you recognize that everyone has more "unknown unknowns" than anything else
Yes I do and it is a terrifying thought.
> or what your actual attitude is.
I'm doing my best to learn as much as I can and to never judge. (judgement takes up too many mental resources)
> And if you think you're the first person to think "wouldn't it be great if not every spacecraft had to re-solve the problems of power, communications, and computation?"
I'm not and I would love to do more than just think and actually build things.
> * Having an identical copy is not redundancy
Can you please explain why? Is it because the failure points remain the same?
> Fast, Cheap, and Out of Control
The first thing that popped into my head when I saw this title was NASA's "Faster, Better, Cheaper" initiative. I will admit that I do not know much about it, but what I do know is that there were some failures (as you would expect) and the public did not react well. The failures did not come from not having space-grade parts, IIRC, but from various other reasons. The most infamous was the Mars Climate Orbiter, known for failing due to an imperial-metric conversion error, which is still brought up in almost every space-exploration piece that gets sufficient attention. Anecdotally, the lessons learned by those I've talked to are (1) choose 2, and (2) the public will not tolerate failure on large NASA projects, even if those projects cost a fraction of what it takes to host an Olympics, or buy Instagram, or fight a war for a day. But I have just now, hopefully, started a discussion with my coworkers about this paper on our message board.
So anyways, about the paper. It looks like your idea is only mentioned briefly in the one section, and not fleshed out. The idea was somewhat more fleshed out in the talk I went to. What I meant by my line of questioning was that I haven't seen an implementation of these concepts outside academia, even though the idea has been around for some time, and the industry isn't entirely driven by irrational beings, so there must be some technical reasons why the ideas haven't been fully adopted.
>> * Having an identical copy is not redundancy
> Is it because the failure points remain the same?
Essentially, yes. His argument was that most failures were systematic, and not due to unequal 'wear' of various kinds. Software especially, since on space missions it is implemented to be as deterministic as possible. That means that even if the primary processor fails gracefully due to bad internal logic, and a hot backup immediately takes its place, the backup will behave in exactly the same way given those inputs, which are likely to stay roughly the same across the switch. Another example is a mission where the high-gain antenna succumbed to a systematic failure, and they completed the mission on its low-gain antenna. If there had been 2 identical antennas, they would not have been able to.
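The identical-copy argument can be sketched in a few lines (the fault logic here is entirely hypothetical, invented for illustration): if the primary and a byte-identical hot backup are deterministic, the input that breaks one breaks the other.

```python
def flight_logic(sensor_reading):
    # Deterministic control law with a latent bug: the divisor
    # reaches zero for one perfectly valid sensor reading.
    return 100 / (sensor_reading - 42)

def run_with_hot_backup(primary, backup, reading):
    try:
        return primary(reading)
    except ZeroDivisionError:
        # Fail over to the identical copy... which, given the same
        # input, takes exactly the same failing path.
        return backup(reading)

try:
    run_with_hot_backup(flight_logic, flight_logic, 42)
    survived = True
except ZeroDivisionError:
    survived = False
# survived is False: an identical copy guards against independent
# hardware wear, not against a systematic (logic) fault.
```

Genuine redundancy against systematic faults requires diversity: a backup with a different design or implementation, like the dissimilar high-gain/low-gain antenna pair above.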
Designing and testing components to last a long time under the conditions you expect and test for is relatively easy. It's when the designs and tests don't match reality that the problems happen. If you write something and make a copy, it will retain all of the typos of the original, and it's the same with code/CAD/etc. (and you might be more likely to introduce bugs than fix them). Ground-based systems can get away with this because of how easy it is to replace/repair broken units in a redundant setup.
Even though it was due to a much larger issue related to how the project was carried out, and just happened to manifest in that error. It could easily have been a kilometer/meter mixup instead. And having a backup string of landing hardware/software wouldn't have helped in this case, unless the backup units had a different design/implementation.
And yet, JPL still chose to use the skycrane. Very courageous.
Ah I see. I'll start qualifying my I-don't-see-any-reason-whys from now on with the latter statement. :)
>>> I will admit that I do not know much about it, but what I do know is that there were some failures (as you would expect) and the public did not react well.<<<
I guess that's the real reason why this won't be implemented by government agencies. Space missions are a matter of national pride, and no one wants a "designed to fail / waste of money" accusation on their hands. However, empirically speaking, it seems to be the best way to do things, as we can increase the amount of explored area quite rapidly and respond to changes much more quickly. I think that when private companies such as Planetary Resources start doing exploration, they will be forced to adopt this model because of their constraints, and a lot of amazing solutions will come out of it. If that happens, then it might open a Pandora's box and advances will happen much more quickly. Some people will see that as a bad thing, but as long as the systems are autonomous it should lead to a lot of good things.
>>> 2) the public will not tolerate failure on large NASA projects, even if those projects cost a fraction of what it takes to host an Olympics, or to buy Instagram, or fight a war for a day.<<<
More importantly the politician who approves it probably won't get re-elected.
>>>  And yet, JPL still chose to use the skycrane. Very courageous.<<<
I was quite shocked when I heard it worked. I was willing to bet on the side of failure because of the sheer complexity involved. A small timing error, sensor glitch or the other million things that could have gone wrong would have led to failure. It's quite impressive that they managed to do such a high stakes real-time task more or less autonomously. It really was quite a daring thing.
It's definitely not mine. I bet people were talking about it when I was in diapers.
>>> What I meant by my line of questioning was that I haven't seen an implementation of these concepts outside academia, even though the idea has been around for some time, and the industry isn't entirely driven by irrational beings, so there must be some technical reasons why the ideas haven't been fully adopted.<<<
Yes, I'm willing to bet that this has a lethal flaw which quickly led to such implementations being rejected, but the question is: can this be hacked, for lack of a better word? Remember, most organisations where rovers are designed are meant to be risk-averse, and they, aside from NASA, have an endless pool of resources to draw from. (I'm talking about the military.) The sociological and resource incentives in place simply work against any such proposal, independent of engineering viability.
>>> His argument that most failures were systematic, and not due to unequal 'wear' of various kinds. <<<
There's an interesting case to be made here: one is redundancy at the unit level, and another is redundancy at the systemic level. Take the swarm as a system; in that case, if you have multiple backup copies of machines adept at performing particular tasks, then you essentially have redundancy, provided that the same method is not followed at the unit level. In that case, if one failed for a particular set of inputs (hardware or software), then you can "patch" the rest by either avoiding the physical situation or changing the software. You can repeat that with individual units and let them have free rein. If one of them gets destroyed, then the others ought to be modifiable in time. Although it won't guard against stupidity at the unit level, this should be a much more redundant system than anything we can create within a single one-use device.
At the level of the single unit, systematic failures come into play, and more copies actually buy you less there. Things must be asymmetrically designed to overcome edge cases and systematic failure modes, and robust engineering makes sense there. I think one of the best ways to implement such a system would be to spend most of the resources creating and testing a mobile base which is independent of its payload (attachable modules which perform specific tasks). You should then be able to deploy this machine across missions and learn from all of the real-world testing in each mission to create something truly robust and reliable. Once you achieve that, you can start offering redundancy at the swarm level through the payloads. For example, in that high-gain/low-gain antenna scenario, wouldn't it be better to have a dozen robots equipped with a variety of antennas dedicated to communication?
>>> It's when the designs and tests don't match reality that the problems happen. <<<
Yes, but isn't the entire point of the exercise to fail early and fail often so that you can succeed? If your system is disposable then any systemic failure becomes yet another data point to engineer against and all future systems are better because of it. There is no better laboratory than nature and surely this is a point in favour of it? (unless I'm missing something)
Would you like to carry out this convo via email? If so then please feel free to drop me a line at searchingforabsolution [at] hush.com
>>>  http://www.dau.mil/pubscats/ATL%20Docs/Mar-Apr10/ward_mar-ap.... <<<
Thanks for linking me to this article! It was great.
Yes. As I've said above, the explicit purpose of this mission is to gather advanced scientific data, which requires comparatively bulky instruments and high power availability. Because it is so ridiculously expensive to get anything to Mars, you want it to last as long as possible. "Disposable drones" make absolutely no sense when it costs so much to launch them into orbit, transport them 60 million km across the solar system, and then enter the Martian atmosphere and land at a coordinated place on the surface. Each gram you get to that point costs many thousands of dollars. You don't just plan your strategy around losing a bunch of them - that would be many hundreds of millions of dollars down the drain.
Using a decaying nuclear isotope to power your probe, you get many multiples more bang for buck compared to solar-powered probes, which so far have maxed out at about 5 years. The solar concentration during a Martian winter is extremely low, and this problem is exacerbated the smaller the probe and the battery it can carry.
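To make the bang-for-buck point concrete, here's a minimal sketch of Pu-238 decay. The 110 W figure is an assumption (roughly an MMRTG's electrical output at launch), and thermocouple degradation is ignored, so the real decline in electrical power is steeper than shown:

```python
# Why an RTG outlasts solar: Pu-238 decays with an 87.7-year
# half-life, so its output fades very slowly.
# Assumptions: 110 W starting electrical output; decay only
# (real thermocouples degrade too, so this is optimistic).

HALF_LIFE_YEARS = 87.7
P0_WATTS = 110.0

def rtg_power(years: float) -> float:
    """Electrical power after `years`, from radioactive decay alone."""
    return P0_WATTS * 0.5 ** (years / HALF_LIFE_YEARS)

for t in (0, 5, 14):
    print(f"year {t}: {rtg_power(t):.1f} W")
```

Even 14 years in, decay alone has only shaved off about 10% of the output; a solar panel in Martian winter dust does far worse.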
Just like in computing and electronics, physical and mechanical distributed systems are inherently complex - more so than monolithic systems.
I've thought about what you said and I think we are measuring the likelihood of success in different ways. I'm measuring it in terms of the likelihood that one pair of devices will complete the outcome, even at the cost of all the others, and, if I am correct, you are measuring it in terms of reducing the possibility of a loss while achieving the mission objectives.
I think we can afford to build disposable machines because if they are tiny - say, fitting within a 50 cm cube (the diameter of Curiosity's wheel) - the mass of each machine will also be radically less. Curiosity weighs 899 kg; a well-designed vehicle base could weigh as little as 1 kg, and with instrumentation we could work with an assumption of 2 kg. That is around 450 rovers! If they are divided into teams of, say, 6 and dropped off by some method at discrete intervals, then you have 75 teams exploring the Martian surface. If each team explores during just the warm Martian months (I'm working with an assumption of 400 sols) at a very conservative rate of 0.5 m^2 explored per team per day, then all of the teams combined will explore 15000 m^2 in the course of a single mission. That's huge. Further, in this scenario, if individual units fail at some point, the entire mission won't be jeopardised and that number will stay roughly the same. I think that if the units are allowed to be autonomous (again, because they are disposable) then you could rapidly increase the area explored and get more out of a single mission.
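A quick sanity check of that arithmetic, using the comment's own assumptions (integer division makes the totals land just under the round 450 / 75 / 15000 figures):

```python
# Swarm back-of-envelope: spend Curiosity's 899 kg mass budget
# on 2 kg mini-rovers, grouped into teams of 6, exploring for
# 400 warm-season sols at 0.5 m^2 per team per sol.

CURIOSITY_MASS_KG = 899
MINI_ROVER_MASS_KG = 2
TEAM_SIZE = 6
MISSION_SOLS = 400
AREA_PER_TEAM_PER_SOL_M2 = 0.5

rovers = CURIOSITY_MASS_KG // MINI_ROVER_MASS_KG   # 449, i.e. ~450
teams = rovers // TEAM_SIZE                        # 74, i.e. ~75
total_area_m2 = teams * MISSION_SOLS * AREA_PER_TEAM_PER_SOL_M2

print(rovers, teams, total_area_m2)  # 449 74 14800.0
```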
In this scenario the success of the mission is decoupled from the functioning of any single device, and because of that you are free to pursue several orthogonal benefits like these, which ultimately reduce costs. If you factor in a decrease in launch costs due to companies like SpaceX, this ought to become even more attractive.
natep linked to a wonderful article ( http://www.dau.mil/pubscats/ATL%20Docs/Mar-Apr10/ward_mar-ap... ) on this which argues the point in a much better way.
>>> The solar concentration during a Martian winter is extremely low, and this problem is exacerbated the smaller the probe and the battery it can carry. <<<
One of the main uses of the battery during the winter months is to keep the processor warm. If the assembly is small enough, you should be able to get by with just insulation, a very small heater, and perhaps a long-lasting exothermic reaction that proceeds slowly over time. The amount of heat generated by such a reaction would be too small for something like Curiosity, but perhaps it might work for a very small machine? And since the tolerances are lower, shouldn't you be able to use a wider variety of batteries that store more per unit volume? I might be wrong on all counts, but a smaller design and lowered tolerances might actually work to our advantage.
>>> Just like in computing and electronics, physical and mechanical distributed systems are inherently complex - more so than monolithic systems. <<<
I'm actually not that into computing and electronics; I used to build physical systems, and how they fail fascinates me. My designs failed so often upon meeting the real world that I realised the only way to know whether something would ever work was to actually implement it IRL. If you can carry out a mission at one-tenth of the cost, you can do that much more willingly and learn from unforeseeable failure modes much more quickly. Disposability should be an answer to that complexity, not the other way around.
Also, "Why don't they just" has multiple meanings in my opinion. You explained one of them: a sort of incredulous question querying the stupidity of the people who made those decisions. But it's also used far more innocently. I and a lot of my friends will ask questions in that format as a genuine query, not meaning to tread on anyone's toes or insult anyone. (I'm having trouble articulating this!)
This sort of question asking isn't the exclusive domain of Mars rovers either, it's everything. Politics, economics, business etc etc. It's just natural human curiosity, people trying to understand things that seem counter intuitive at first.
From what I see, it looks like most readers interpret that phrase as an arrogant armchair expert assuming NASA, or whoever, is full of really stupid people, with the person "asking" essentially suggesting he knows better. The simpler interpretation, that a person who knows they don't know is simply asking, is often the secondary thought, if it occurs at all.
I think that has a lot to do with the combative nature of on-line "debate". People are now sort of trained to expect confrontation rather than a mellow old chat. Kinda ties in to a recent thread here about, er, negative and harsh replies to Show HN posts.
This is one of the things I'm beginning to really not like about on-line discussions. I end up spending more time trying to make sure I'm not misinterpreted than making my actual point.
I bet if NASA had held a competition to come up with the most-likely-to-work gizmo, up to 1 lb (or whatever), we'd have seen some pretty good ideas (beyond duct-taping an iPhone to the antenna mast).
I was trying to find an easy CSS hack to make the layout impervious to long words but it turned out to be a little tricky. Can anyone find a CSS "one-liner" (few-lineser?) that'll fix this?
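For what it's worth, a minimal CSS sketch that usually handles this (the selector is hypothetical; substitute the page's actual comment container):

```css
/* Let long unbroken strings (URLs, etc.) wrap instead of
   blowing out the layout. Selector is a placeholder. */
.comment-body {
  overflow-wrap: break-word; /* legacy alias: word-wrap */
}
```

If the container is a table cell, it may also need `table-layout: fixed` on the table for the wrapping to take effect.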
Hardware is hard to fix but software is doable. There are plenty of explanations of how this was done with Pathfinder:
there was also some reconfiguring of Curiosity on-going/completed ~ http://blog.chron.com/techblog/2012/08/nasa-about-to-perform... there is also a lot of work done on 'self healing' software systems ~ http://www.zdnet.com/blog/emergingtech/self-healing-computer...
The real problem is that it's hard to tell the people asking questions like this out of genuine interest apart from the people being undeservedly condescending.
I suppose the most important thing is to phrase questions like this carefully and to be polite. I think it's also useful for the answerer not to assume bad faith unless it's very overt. A presumption of reasonableness from everyone would markedly improve discussions about projects like the Mars rover which generally contain very useful information and insight but can get side-tracked by the very issues this article talks about.
If we are going to be making these efforts at Mars in the future (whether autonomous or manned) it seems worth the investment :)
Would that really change anything? A higher-bandwidth link between the rover and low Mars orbit will not make the link between Mars and Earth much faster, and that's the true bottleneck.
Not to mention wireless communications are not "free" as any smartphone owner probably knows and the rover's daily energy budget is pretty much fixed: the more the rover stays in contact with the orbiters, the less energy it has to drive around or fire its lasers.
And finally, for what it's worth, the MRO is already "significant improve[ment of] coverage".
Yes. The rover is only in communication with its satellite for a limited time frame each day, and the satellite <-> Earth link is wider than the satellite <-> rover link. With more satellites we could have an around-the-clock high-speed link and improve the bandwidth - future missions are likely to be loaded with more and/or higher-bandwidth sensors.
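Back-of-envelope arithmetic shows how hard a short contact window bites. Both numbers here are assumptions chosen for round figures (a 2 Mbps rover-to-orbiter UHF rate and a single 8-minute pass per day), not published link budgets:

```python
# Relay budget sketch: data volume is limited by how long the
# rover can see an orbiter each day, so more satellites (more
# passes) scale the daily volume roughly linearly.

UHF_RATE_BPS = 2_000_000   # assumed rover <-> orbiter rate
PASS_SECONDS = 8 * 60      # assumed single 8-minute pass
PASSES_PER_DAY = 1

bits_per_day = UHF_RATE_BPS * PASS_SECONDS * PASSES_PER_DAY
megabytes_per_day = bits_per_day / 8 / 1e6

print(f"{megabytes_per_day:.0f} MB/day")  # 120 MB/day
```

Under these assumptions, doubling either the pass time or the number of passes doubles the daily volume, which is why extra relay orbiters matter even if the Mars-Earth trunk is unchanged.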
Are you sure? My understanding was that the biggest bottleneck lay in the fact that the orbiters could only communicate with the lander for 8 minutes per day.
Fairly crippling :)
> Not to mention wireless communications are not "free" as any smartphone owner probably knows and the rover's daily energy budget is pretty much fixed
I understood the energy source gives a fixed output - it's not a daily budget per se. So 2 hours of communications (rather than 8 minutes) wouldn't leave it unable to move for the rest of the day.
I could be wrong.
However, if it takes a week to send it an update - or to receive big pictures - that implies delays in its work anyway.
I agree that the MRO is a significant improvement. I was just idly pointing out that if the aim is to go to Mars more it would be worth investing in infrastructure as much as science.
The algorithm part is equally simple to explain. Most of these algorithms aren't broke, so there's no real impetus to fix them. The equations are pretty well known, so between missions, there's really not a need to change them up.
This doesn't seem quite right. From watching documentaries on the past Mars missions, it seems like each piece of scientific equipment is done separately by various engineering groups around the U.S. and the world. They are only integrated at the end. And those individual pieces of equipment are getting changed right up until the deadline if they aren't working up to spec.
So the requirements are eight years old, but the tech isn't necessarily. They are just very conservative in their requirements.
For example, if others hadn't been asking this question, we wouldn't have gotten this explanation. And now that I've got this explanation of NASA's process, I would like to know: "Why don't they just make all this process more agile?"
For example why have only one launch in 7 years with expensive hardware? Why not multiple launches with cheaper technology?
> But rather than explaining all this stuff, I think there's a better way: build, land and operate a rover here on Earth.
For example, build everything around the Camera, but make sure that when it's within a year of going up into space, the design is such that the highest resolution camera available can take the current camera's place.
Of course, anyone who has worked on the project will easily be able to say that I have no clue. And I don't. But I guess that if I were included in the planning stages, and it came down to the data collection components of this machine, the first thing I would add would be that it should be capable of upgrading to the latest and greatest within a year of launch.
As a final note, I understand that they DID do this with software. They can update the operating system and roll back. They can advance the software, bug fix, etc, in a VERY safe way from afar.
Then I got a job with a company that manufactures sensors used by a huge number of companies and governmental departments, including NASA and Boeing. And my job was Output Smoothness Technician -- which meant that I was the one guy who was responsible for verifying the electrical specifications of every single part that left the plant. (The next two steps were QA -- which tested things like watertightness -- and then shipping.)
I learned a lot from that job. I learned that there are a lot of decisions behind every single little thing. For example, let's take the tiny little linear potentiometers that Disney uses in its animatronics. Someone at Disney decided they needed them; one team of engineers came up with a spec; another team of engineers figured out how to turn the spec into something that could be made; another team figured out how to make it. Then someone decided what kind of metal to use. Someone else decided how to tune the potentiometer to provide the desired output. Someone else decided what kind of grease to use. And then during manufacturing, someone decided whether or not the housing was good enough, someone decided whether or not everything fit together right, someone decided whether or not the weld was good enough, and so on.
By the time that little thing got to me, there were thousands of little decisions stored in it. Some good, some not. Then I had to test it and decide whether or not it would do what the customer wanted. Should I throw it out, costing the company a lot of money? Or should I ship it, let the customer decide, and hope the QA on their end is better than mine?
I pretty quickly developed a reputation as one of the toughest OS techs they'd ever had. I threw back a lot of parts. The sales manager (who basically ran that location) hated my guts. But we still had defective parts come back every month!
Now imagine that you're trying to build something that you can't service, and you can't fix. You get only one shot to get it right, and a lot of money and a lot of people are riding on you. Best of all, you're building it to survive in an environment that you just can't really create here on Earth, so you don't get to test it the way that you'd test a lot of things.
That's why it takes 8 years and a lot of money: because it's very, very hard.
I think that one day sending things to Mars will become something that we're used to, and at that point, it will get a lot faster and a lot cheaper and a lot more reliable. But, right now, we're still trying to do things that we don't really know how to do. That's hard, and it takes time.
Further, it's not practical to test things on Mars, so you can imagine how difficult this is for the engineers here on Earth.
The process described here is typical of many companies because it's not possible for one person to do all of these steps. The level of expertise required is too high. You may be a great mechanical engineer, but do you know about the performance characteristics of the hundreds of different kinds of grease? About the way the sleeve should be machined? There's a list of specializations thousands of items long, and at best you'll be able to truly master only a small percentage of these even through a whole career.
The ultimate reason technology is unreliable is because the world is a complicated, crazy place that often exposes you to situations you're not prepared for.
Instead, let's consider that everything ever built will fail under a certain set of conditions. Understanding a particular component's reliability means understanding all the things that are likely to go wrong over its lifetime and fixing some, and characterizing others. The many people working on different parts is an outcome of different concerns seeing different problems and applying solutions. These solutions are sometimes contradictory: a weak potentiometer shaft may be improved by using a particular alloy of steel instead of, say, aluminum. But now that alloy may make the part too heavy, or too expensive, or it may create problems for the bearing it has to sit in, etc.
What you see in the long development time isn't that something is necessarily "unreliable"; it's that it takes that long to understand how it can fail and to make it reliable enough to sustain the mission. This is also why engineers tend to use tried-and-true parts: they already understand them completely.
To me, eight years to build and test equipment to fly to Mars and wander around on another planet's surface is a pretty reasonable turnaround time.
Radiation? What does radiation do to electronics? Do they corrode? Will things short circuit?
Would shielding an iPhone be more expensive than using outdated parts?
Do we know that the iPhone isn't reliable enough? Mine seems to be fairly reliable. Any problems with it are solved by a simple reset. Certainly a small bit of old battle tested code/hardware could handle this problem. Is the benefit of having modern hardware not worth it?
I'm sure all of these questions have answers, but it's not like I know where to find them.
Sidenote - "Why don't they just" shield the electronics rather than making the electronics itself radiation proof? I ask that knowing that there is a legitimate reason (shielding weight?), but it is something I have sincerely often wondered.
PS: If you added up the entire mass of everything ever put into orbit, you still could not make a shield that would guarantee normal equipment could withstand a 5-year mission in space.
Now, if you're wondering how better to phrase your questions of this sort, I'm afraid I don't have any specifics, because I'm not sure if I'm just as guilty of this, or if I've successfully rephrased my questions of this sort. What I do is try to imagine that I've spent years designing, building, and debugging the thing in question, and gone through multiple reviews by outside organizations where every design decision was scrutinized, and had many meetings with coworkers (formal and informal) to discuss the thing in question. And then I ask my question in the affirmative, rather than in the negative, such as:
"Why did you do X, would Y also work?"
"I would have done Y. What is it about X or Y that I am missing?"