
A Look into NASA’s Coding Philosophy - astdb
https://mystudentvoices.com/a-look-into-nasas-coding-philosophy-b747957c7f8a
======
lisper
This article is very much at odds with my 15-year (1988-2000, 2001-2004)
experience writing software at NASA (albeit at JPL, which is to say the
unmanned space program, but still NASA). In my experience there was just as
much politics, marketing hype, and general bullshit as I've seen in the
commercial world. It was actually pretty amazing to me sometimes that things
worked at all, let alone that they worked as well as they did. I think the
real secret at NASA is not that they are better at writing code, but that they
write it slowly (development cycles are measured in years) and they test the
living shit out of it. Then, once it's working, they don't change it except
under extreme pressure. But while it's being developed it's just as much of a
sausage factory as anything else.

~~~
mcguire
As I wrote when this showed up on proggit,

Not to put too fine a point on it, but I would be very skeptical about using
NASA as a good example for much of anything, but especially software
development.

My bona fides: I'm 49; have spent roughly 25 years as a systems programmer,
systems administrator, and general bit wrangler; and I worked for 8 years at
the Marshall Space Flight Center, specifically the NASA Enterprise Application
Competency Center.^1 (And yes, that's a thing. Started as IFMP (Integrated(?)
Financial Management(?) Program), and was IEMP (Integrated(?) Enterprise
Management(?) Program) when I started.) If you're a NASA employee, you might
recognize STaRs (I worked on the rewrite, post Perl 1 and Monster.com),
NPROP/Equipment, or DSPL/Disposal. And IdMAX, which I noped out of shortly
after moving to the project.^2

NASA itself is a massively dysfunctional organization, in my experience, and a
failure to "cut through the bullshit" is a major reason why. For software
development specifically, while I didn't do anything with "man-rated"
development or the other important bits, I have strong doubts that they are
any better than other avionics, automotive, or other embedded development
organizations.

There was no mentoring. People tried; it didn't go over well, usually with the
mentees.

You have to trust each other's potential, because there is no damn chance of
getting any two projects to agree on anything. What goes on in that other silo
is their business, not yours.

I did say, "I don't understand." A lot. Frequently pronounced "WTF?"

The list of "unreliable sources of knowledge" looks rather like a checklist of
how things got done.

^1 NEACC is also the acronym for the North-East Alabama Community College,
which I find ironic for no good reason.

^2 Unfortunately, I don't have a picture of me looking arrogant. Sorry.

~~~
lisper
> NASA itself is a massively dysfunctional organization, in my experience, and
> a failure to "cut through the bullshit" is a major reason why.

I completely agree with this, though I think it's important to note two
things. First, despite the dysfunction, they do, more often than not, make
things that work under extremely challenging conditions. And second, a lot of
the dysfunction is not their fault, but is a consequence of the fact that they
are a government agency and hence ultimately answerable to Congress. Not only
that, but they are a government agency created to fight a (cold) war that
ended almost thirty years ago, so they have been rudderless for a long time.
The fact that they've done _anything_ other than suck up taxpayer dollars is a
testament to the incredible skill, both technical and political, of many of
the people in the agency. Notwithstanding my vocal criticism of the
organization, I have tremendous respect for many of the people who work there.

------
alexanderdmitri
Here's a paper[1] written a few years ago for a space conference in Montreal
on DO-178 standards and on applying the FAA's certification process to space
systems development. DO-178 sets the bar for "testing the shit" out of
software; I've seen projects held up for years because proving your system to
its requirements is like approaching a limit f(n) -> L, where f(n) is the
software development cycle and L is getting a non-bought DER to sign off on
it.

[1] [http://articles.adsabs.harvard.edu/cgi-bin/nph-
iarticle_quer...](http://articles.adsabs.harvard.edu/cgi-bin/nph-
iarticle_query?2013ESASP.715E..63D&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf)

~~~
nickpsecurity
DO-178B was one of the best things to happen in software assurance. It was a
non-prescriptive method of certification that forced precise requirements,
design, and code, with reviews of each and traceability between them. The high
cost of certification and certification failure led to a booming ecosystem of
supporting tools (esp static analysis or runtimes) and pre-made components
(esp RTOS's & middleware). Some companies even specialized in handling the
administrative overhead. The result was that certified systems usually had
very high quality. Meanwhile, there were people arguing on HN and other
places about whether a certification could ever be useful for software.

Anyone reading the above paper should skip to the summary to get a high-level
view of the overhead and resulting benefits. The author of the paper agrees
with me that the certification is beneficial. The gist is you keep things
simple, think about everything in the lifecycle, test everything in the
lifecycle, throw every analysis tool you can at the design or code, esp take
your time to develop the thing, and document all of this in a way where any
aspect is easy
to verify by a 3rd party. That's how high-assurance systems are done. They
usually work, too. Some work for a long, long, long time. :)

~~~
ptero
Applied very narrowly to the right problem, maybe.

But I have also seen DO-178B lead to a completely non-maintainable mess.
That mess helps trace each statement to a requirement, but leads to code that
is unreadable to anyone used to normal code.

It is not that "no extra code" and exactly meeting requirements is a bad model
in itself. However, writing code in a way that makes it convenient to _prove_
the above encourages unnatural coding styles and programmatic contortions.
YMMV however.

~~~
alexanderdmitri
This happens a lot. IMO, the best method to mitigate this is to design and
write your code base to the system requirements rather than try to perfect the
low level requirements before writing any code.

You can then reverse-engineer the low level requirements from source, deriving
your verbiage from the system requirements and logic from the actual code.
This way your low-level/unit testing is precise and verifiable while also
ensuring strong traceability up and down the chain.

If there end up being low-level requirements or code that don't trace up to
the system for some reason, then either there's been a disconnect between
design and execution or, if not, you make the case for a derived requirement.
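
As a concrete (and purely illustrative) sketch of what that can look like,
here's a low-level requirement reverse-engineered from source, traced up to a
system requirement and down to a unit test. The requirement IDs, tag format,
and numbers are all made up for the example:

    # SYS-REQ-042 (system level): "The controller shall limit commanded
    #     thrust to the engine's rated maximum."
    # LLR-042-01 (low level, derived from the code below, traces up to
    #     SYS-REQ-042): "limit_thrust() shall return
    #     min(commanded_n, RATED_MAX_N) for any non-negative input."

    RATED_MAX_N = 845_000.0  # illustrative figure, not a real engine spec

    def limit_thrust(commanded_n: float) -> float:
        """Clamp commanded thrust to the rated maximum.  @req: LLR-042-01"""
        return min(commanded_n, RATED_MAX_N)

    def test_llr_042_01() -> None:
        # Unit test verifies LLR-042-01, closing the trace:
        # test -> low-level requirement -> system requirement.
        assert limit_thrust(900_000.0) == RATED_MAX_N
        assert limit_thrust(100_000.0) == 100_000.0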

~~~
w_t_payne
I agree that the left-hand side of the Vee has to be seen as a two-way street.
As we develop the code we gain experience and insights which enable us to
elaborate and refine our requirements. The requirements then act both to guide
the development of the system and to document the lessons that have been
learned during development. The outputs of early requirements-elaboration
phases are then draft, rather than final, requirements sets.

The key, for me, is the tooling that we have around maintaining requirements
and the traces between them and the code. I think that it should be possible
to navigate traces and update (low level) requirements and test specifications
without leaving the text editor. If it takes 5 minutes to open DOORS and to
find the relevant requirement, then you aren't likely to keep it as up to date
as you should. The same is true if the requirements are stored in a
spreadsheet in a document management system.

Some ALM/IDE tools may enable you to manage things in a better way. From what
I have seen, mbeddr is a really interesting experiment along these lines. I
also suspect that Visual Studio and Eclipse have some pretty powerful features
in this regard.

In my side project (and purely for my own 'entertainment') I am experimenting
with ways of embedding low level requirements into comments in the code (the
build extracts them and can update DOORS or some other tool), as well as a
generic trace system that allows different classes of traceable item to be
defined and indexed. The build can enforce requirements coverage in the same
way that it enforces test coverage, and because the text and metadata of
requirements is easily accessible to the build, we can use NLP tools to
enforce the use of restricted natural language or DSLs in the requirements
text. The ultimate goal is to use machine learning to learn the correlation
between requirements changes and code changes, so that a change in one can
help generate advice and guidance on what needs to change in the other.
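
To make the extraction step concrete, here's a minimal sketch of the idea (the
"@req:" tag syntax, the requirement IDs, and the directory layout are just
placeholders, not the real project):

    # Sketch of a build step: scan source files for "@req:" tags in
    # comments/docstrings and fail the build if any known low-level
    # requirement has no covering code. Tag syntax and IDs are placeholders.
    import re
    import sys
    from pathlib import Path

    TAG = re.compile(r"@req:\s*(LLR-\d+-\d+)")

    def referenced_requirements(src_root: str) -> set:
        """Collect every requirement ID mentioned in the source tree."""
        found = set()
        for path in Path(src_root).rglob("*.py"):
            found.update(TAG.findall(path.read_text(encoding="utf-8")))
        return found

    def check_coverage(known_reqs: set, src_root: str) -> int:
        """Return non-zero (build failure) if any requirement is untraced."""
        missing = known_reqs - referenced_requirements(src_root)
        for req in sorted(missing):
            print("uncovered requirement:", req)
        return 1 if missing else 0

    if __name__ == "__main__":
        # The requirement list would normally come from DOORS or similar;
        # it is hard-coded here only to keep the sketch self-contained.
        sys.exit(check_coverage({"LLR-042-01", "LLR-042-02"}, "src"))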

------
rav
The article severely misrepresents Amdahl's law, in particular in the
sentence:

> Though you took half of your entire program, and optimized it to be four
> times faster, the overall system is only 1.6x faster.

This application of Amdahl's law doesn't tell you anything about what happens
when you take half of the program; it tells you what happens when you take
half of the _execution time_. If you only get a 1.6x speedup, it means you
optimized the _wrong part_ of the program!

Suppose 20% of the program is responsible for 80% of the execution time. If
you speed up _this bit_ four times, you end up with a 2.5x speedup (and not a
1.18x speedup as the article tries to imply).
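
For reference, the law itself gives the overall speedup as 1 / ((1 - p) + p/s),
where p is the fraction of total execution time being optimized and s is the
speedup of that fraction. A throwaway check of the numbers above:

    def amdahl(p, s):
        """Overall speedup when a fraction p of execution time runs s times faster."""
        return 1 / ((1 - p) + p / s)

    print(amdahl(0.5, 4))  # 1.6   -- half of the *time*, made 4x faster
    print(amdahl(0.8, 4))  # 2.5   -- the hot 80% of the time, made 4x faster
    print(amdahl(0.2, 4))  # ~1.18 -- the cold 20% of the time, made 4x faster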

~~~
taneq
Basically this. The article makes it sound like "optimizing half the program"
means "optimizing half of the codebase", not "optimizing a part of the
codebase that consumes 50% of the execution time."

This quote is especially misleading:

> Amdahl’s Law is telling us that to significantly improve the speed of a
> system, we’ll have to improve a very large fraction of it.

No, it's telling us we'll have to improve _the part of it which is responsible
for a very large fraction of the execution time_. Often this is only a tiny
amount of the overall codebase.

------
Judgmentality
I always found this article(1) about writing code for NASA fascinating. I'm
sure a lot has changed in the 20 years since it was written, but it's such a
stark contrast to what I'm used to.

(1) [https://www.fastcompany.com/28121/they-write-right-
stuff](https://www.fastcompany.com/28121/they-write-right-stuff)

~~~
mcguire
20 years ago:

* 420,000 LOC

* 260 people

* $35,000,000 / year

~~~
Judgmentality
There's something to be said for doing more with less.

------
pif
Software is not Modern or Ancient. Software is either Good (i.e. it works as
it is supposed to do AND it is easily maintainable and modifiable) or Bad.
NASA clearly needs Good software, and indeed the article never mentions
arbitrary deadlines.

------
bluedino
I'd like to see how this compares to SpaceX, for example.

~~~
dsfyu404ed
When it comes to physical components, SpaceX has a habit of selecting an
off-the-shelf component and asking the vendor for a modified version with
changes that make it capable of living in their application.

That's great for reducing cost (or increasing quality or buying time, they're
all related) but in the long term it often leads to spaghetti.

Considering the time pressure they're perpetually under and how Elon is known
to run his organizations I would suspect their software products are no
different.

~~~
brianwawok
On that note, what language(s) are SpaceX, Tesla built on?

In the job postings I see, it's a mix of C# and golang. I cannot believe
either a Model X or a rocket is rockin' C#...

~~~
0xffff2
Probably not on the rocket itself, but C# is a very common language for
ancillary testing tools in the industrial automation world. I wouldn't be
surprised if a lot of their test infrastructure is written in C#.

~~~
w_t_payne
Pity. I've been using Python a _lot_ for test infrastructure and it fits
really well.

