But, for me, another amazing thing about these long-term projects is the sheer amount of knowledge that needs to be maintained and passed on to new people over the course of several decades. To pull it off, this need for knowledge management and transfer must be deeply ingrained in the culture of the organization. How do they do it in this day and age of "FAQs", "Forums" and "Helpdesks"? :)
In all seriousness, what do you do to maintain "knowledge" in your organization?
Our company is building a remote team, and similar to GitLab, we are discovering that staying in sync is not just a good value but a way of life. Trello, a wiki, and code help us repeat our assessment work and ensure everyone is on the same page.
We are also putting non-mission-critical information in a wiki, basically as a tl;dr of the denser documentation.
I think it's one of those tools that demand a lot of expertise to set up, or you will paint yourself into a corner, with explosive paint.
So true. But when you have a culture and an industry where 2 years is a long tenure at a company and a long lifecycle for the latest faddish "framework" or whatever, it's impossible. I'd love to stay somewhere 10+ years and work on a single significant application the whole time. But it seems to be impossible to find such a gig. Experience is generally undervalued throughout our entire industry.
Try CAD. The industry is full of crusty applications pushing 30 years. I'm sure some of the careers are equally long.
JPL has a formal database that maintains all its requirements documentation. It isn't very user-friendly, though, and you generally have to know what you are looking for.
When I was there, most of the knowledge transfer happened by talking to the older folks around. As a result, I ended up developing close work relationships with people who had been there for 20-40+ years. The historical context they provided on current and past designs was incredibly useful, and not something you would find in a requirements document.
Software was a different matter. Until recently, when the internal GitHub was introduced, finding and digging through code was not as straightforward. Each team/section maintained their own repos, and there was very little consistency: some used Subversion, others CVS, AccuRev, etc., which made it difficult to find and read code for projects outside your purview. Naturally, this led to software fiefdoms. In contrast, I work at Google now, which is the polar opposite: I can effectively read through the entirety of google3 and send a CL/patch to just about anyone.
After all, they're accountable for spending taxpayer dollars on stuff that can crash, burn, and explode, and in order to keep funds coming they must be able to prove they're not just making money vanish for no reason.
Seriously, the degree to which age discrimination is an accepted reality in tech today is a great failing, and one that has enormous hidden costs over time.
Sadly I know a number of extremely talented older engineers who have been discarded like trash only to be constantly rejected in their search for new employment, until some of them just give up. It's remarkably ignorant and short-sighted of employers who practice this kind of discrimination.
It's laziness mixed with incompetence mixed with the simple reality that too many companies consider it a competitive advantage that they're completely fucked up in their process and procedure but manage to somehow churn out something anyway.
And if you're not hitting your targets, why, move the target! Imagine how much easier NASA's job would be if they could say the moon shot was a success even though they only got something into orbit because we redefined what a moon shot was.
That's a 'reasonable' cause for being short sighted, but it still means they are short sighted.
This is the way in biomed:
>'When scientists finished the first draft of the human genome, in 2001, and again when they had the final version in 2003, no one lied, exactly. FAQs from the National Institutes of Health refer to the sequence’s “essential completion,” and to the question, “Is the human genome completely sequenced?” they answer, “Yes,” with the caveat — that it’s “as complete as it can be” given available technology.'
Even in a relatively small team (>20) I have found that simple communication or sharing of knowledge falls apart _very_ quickly.
Maybe I'm not actually supposed to be in the loop, but it still doesn't feel like how teams are supposed to be run.
The secret: Employee retention.
Everything is easier when the knowledge stays in the company.
If you didn't read the article, the portraits of the engineers and the snapshots of their lives are moving.
"In retirement, Zottarelli told me, he would like to see Florida again. He wonders how it has changed. In his garage is a 1954 Swallow Doretti, a fixer-upper. ‘‘It probably needs new brakes,’’ he said. I asked him if there was anywhere he liked to drive for fun. ‘‘No,’’ he replied. ‘‘Not anymore.’’"
Imagine that's your grandfather. With his expertise, shouldn't he be able to 'see Florida' twice a year? I wish there had been a way to keep this awesome technological specialization over long stretches of time while also giving the people on the program more time to go their own ways. And then there's the toll of being in the same hierarchy for 30+ years. Ack. But perhaps these people self-selected for stoicism; who am I to judge ;)
The most depressing line for me was him effectively waiting (albeit in a joking manner) for his second stroke. No one should be worked to the point of fatalism.
On the one hand I love the team's dedication. On the other hand, I don't think it's just competitive pay that keeps young engineers out of that office. I imagine that the energy and excitement in the early 80s does not really exist anymore, and it's not fair to dump on young engineers for wanting that energy where they work.
One of the proposed instruments is a 1kg helicopter.
They have tried to build a pipeline with the Mars exploration missions, in which senior people can move from mission to mission, and younger people can train up on older, mature missions before they move to newer ones.
This pipeline did not really exist with the Voyagers.
IIRC from an old SciAm article, the temperature of the spacecraft's electronics is also taken into account when doing this calculation.
I worked in a satellite data office of NASA and there was no formal documentation process. Much of the code running in the facility had been written by a single developer over 20 years, and all of the knowledge left with him. Masterful Perl can be frustrating stuff without documentation or comments.
It goes into amazing detail on the history of the spacecraft and the events that made it possible. For example, the story of an intern figuring out the three-body problem and, from there, the concept of 'gravity assist' gave me goosebumps.
See: Daniel Pink and the whole autonomy, mastery, and purpose idea. This mission checks those three boxes quite strongly, likely captivating the engineers and folks on the mission :)
I have a friend who works there - brilliant guy who went to school for physics but is also very talented at low-level embedded systems development. When he graduated with a master's in physics he turned down a six-figure offer to start at NASA at around $60k. From what he's told me, though, he's essentially guaranteed to hit the six-figure mark within 6-8 years. Last time we spoke he had just started his PhD (paid for by NASA) and was working on the Mars 2020 rover.
Six figures in SF/NYC can be worth less than $60k somewhere else.
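As a rough back-of-the-envelope illustration of that point (the index values below are made-up placeholders, not real cost-of-living data):

```python
# Rough cost-of-living comparison. Index values are illustrative only:
# a salary's purchasing power scales inversely with the local index.

def equivalent_salary(salary: float, from_index: float, to_index: float) -> float:
    """Salary in the destination city with the same purchasing power."""
    return salary * to_index / from_index

# Hypothetical indices: SF = 100, a lower-cost city = 55.
print(equivalent_salary(100_000, from_index=100, to_index=55))  # 55000.0
```

On those (assumed) numbers, a $100k SF salary buys about what $55k does elsewhere, which is the comparison being made above.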
> 'The data the probes are collecting are challenging fundamental physics and will provide clues to the biggest of questions: Why did our sun give birth to life only here? Where else, within our solar system or others, are we most likely to find evidence that we are not alone?'
"Please, dear God, don't let me fuck up."
It sounds like a solar-system-scale Van Allen belt when described like this. So we have at least three layers of cosmic radiation protection identified (heliosphere, outer VAB, inner VAB).
If the heliosphere has implications for high-precision, high-accuracy, independent-of-Earth interplanetary spaceflight similar to what gravimetric and magnetic readings do for submarines on Earth today, then mapping the heliosphere would be a future asset.
It's something I'm not used to here on Earth -- when something breaks, I've learned you either crack it open and replace the part or buy a new device :)
Can someone talk (or provide a link) about how this sort of system design works?
In the mid-1980s I went on a tour of JPL in Pasadena and actually saw the computers that were (at the time) in charge of recording and storing the telemetry data. I'm vague on the exact details, but apparently the computers were donated by the US Army and were field models (early "portable" computers) that operated on 48V DC power. So not only were the computers themselves large fridge-sized units, they had near-matching transformers that were fairly unreliable.
I recall reading that in the mid 90s NASA replaced all the mission control systems for the space shuttle with a single Sun Workstation, so I assume that JPL also at some point replaced the downlink computers for V'ger.
My hat is off to these many fine people who stuck it out with jobs that were probably long periods of drudgery interspersed with moments of sheer terror.
(We used it to solve for the most efficient driver/bus scheduling.)