Not to put too fine a point on it, but I would be very skeptical about using NASA as a good example for much of anything, but especially software development.
My bona fides: I'm 49; have spent roughly 25 years as a systems programmer, systems administrator, and general bit wrangler; and I worked for 8 years at the Marshall Space Flight Center, specifically the NASA Enterprise Application Competency Center.^1 (And yes, that's a thing. Started as IFMP (Integrated(?) Financial Management(?) Program), and was IEMP (Integrated(?) Enterprise Management(?) Program) when I started.) If you're a NASA employee, you might recognize STaRs (I worked on the rewrite, post Perl 1 and Monster.com), NPROP/Equipment, or DSPL/Disposal. And IdMAX, which I noped out of shortly after moving to the project.^2
NASA itself is a massively dysfunctional organization, in my experience, and a failure to "cut through the bullshit" is a major reason why. For software development specifically, while I didn't do anything with "man-rated" development or the other important bits, I have strong doubts that they are any better than other avionics, automotive, or other embedded development organizations.
There was no mentoring. People tried; it didn't go over well, usually with the mentees.
You have to trust each other's potential, because there is no damn chance of getting any two projects to agree on anything. What goes on in that other silo is their business, not yours.
I did say, "I don't understand." A lot. Frequently pronounced "WTF?"
The list of "unreliable sources of knowledge" looks rather like a checklist of how things got done.
^1 NEACC is also the acronym for the North-East Alabama Community College, which I find ironic for no good reason.
^2 Unfortunately, I don't have a picture of me looking arrogant. Sorry.
I completely agree with this, though I think it's important to note two things. First, despite the dysfunction, they do, more often than not, make things that work under extremely challenging conditions. And second, a lot of the dysfunction is not their fault, but is a consequence of the fact that they are a government agency and hence ultimately answerable to Congress. Not only that, but they are a government agency created to fight a (cold) war that ended almost thirty years ago, so they have been rudderless for a long time. The fact that they've done anything other than suck up taxpayer dollars is a testament to the incredible skill, both technical and political, of many of the people in the agency. Notwithstanding my vocal criticism of the organization, I have tremendous respect for many of the people who work there.
NASA, like many large organizations known for prestigious accomplishments, can struggle to meet the needs of those doing the enterprise work (like the internal HR apps you mentioned). Especially in a place like the government, people's egos feel small, and they try to compensate by making their projects bigger, grander, and overstated. This just results in bloat, too much talking, and BS, as you mentioned.
I'm sorry you had that experience. If you ever consider coming back, I'd recommend getting involved with something more directly related to space applications.
Considering the TRAJ assembly probably went back to DEW Line, I have no idea how they peeled that logic off. I still run into LockMart folks running ancient ATC code on Unix variants that have been out of service for over a decade.
Our code will never last that long.
As best I can tell, the big difference in perception comes from people who worked on code for manned flights, or anything that will interact with an actual human being, vs. unmanned and even non-mission-critical code.
It seems that anything that will come close to a human is developed like it's the new Ark of the Covenant, while for anything else it's YMMV based on the project and the team.
The amount of process and oversight is highest at class A, and lowest at class E. If you are developing class E, you can do pretty much whatever you want and your team doesn't even need any training in software engineering.
If you want some fun, track down somebody working at the NASA Enterprise Applications Competency Center and ask them about the Office of Education software. (Me, I know nothing, through an exercise of good luck and skill. "No hablo ingles. Tengo hambre. Mucho muerte. <feigns death>")
(I have a friend and ex-coworker there, an aerospace engineer specializing in the design of solid rocket motors, who explained why Constellation was not going to match its hype: you can't just add a segment to an SSRB to get a bigger motor; it's a complete redesign anyway.)
Early in a career can be a good time to try out large institutions, because large institutions tend to have explicit structures for on-the-job training and career development. Early in a career is also a good time for some people to decide "big companies ain't for me" and for other people to decide "this is a good fit." Over time, people in the startup ecosystem often become just as cynical as career government employees. I think it's more a function of age, experience, and personality than anything else. Bright-eyed and bushy-tailed doesn't go with gray hair.
I wonder if self-driving car manufacturers will apply this same philosophy.
Especially in a big organization, things are different depending on the division you're in and where you work. I work at a large .gov institution and am only here a decade later because my leadership team in the early days were great bosses and amazing mentors. They taught me a lot on both a professional and a personal level. I aspire, and in many ways fail, to be the same.
I look at other parts of the organization and see great places, as well as divisions where employees are treated like crap and are demoralized to the point of depression.
Working in a "good part" of a big company or institution can be amazing.
In short, s/w practices and the dedication and capabilities of the people writing the software vary too much to characterize at that level of granularity.
Not a coder, but if you used an idiom that worked because of a compiler quirk, then that could lead to problems down the line, though not related to maintenance per se; hence why I added the second part.
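To illustrate the kind of quirk I think is meant here, a hypothetical sketch (Python, with CPython's small-integer cache standing in for the compiler quirk): the idiom below happens to work, but only because of an implementation detail, which is exactly the sort of thing that breaks later even though the code itself never changed.

```python
# A sketch of an idiom that "works" because of an implementation quirk,
# not a language guarantee: CPython caches small integers (-5 through 256),
# so "is" accidentally behaves like "==" for them.

a = int("256")
b = int("256")
print(a is b)   # True, but only because CPython caches small ints

c = int("257")
d = int("257")
print(c is d)   # False: outside the cached range, "is" compares identity

# The robust idiom compares values, which is guaranteed everywhere:
print(c == d)   # True on any conforming Python implementation
```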
An institution's philosophy is almost always bullshit, whether it is Google's "Don't be evil" or North Korea's "workers' paradise" or our own "capitalist democratic paradise".
The media isn't a fount of journalistic integrity. The government isn't a fount of "goodwill towards its citizens". The military isn't "defending the peace".
It's sad but the more you work and the older you get, the more the facade crumbles. At the end of the day, it's just fallible humans trying their best to get by.
Anyone reading the above paper should skip to the summary to get a high-level view of the overhead and resulting benefits. The author of the paper agrees with me that the certification is beneficial. The gist: you keep things simple, think about everything across the lifecycle, test everything across the lifecycle, throw every analysis tool you can at the design or code, especially take your time to develop the thing, and document all of this in a way where any aspect is easy to verify by a third party. That's how high-assurance systems are done. They usually work, too. Some work for a long, long, long time. :)
But I have also seen DO-178B lead to a completely non-maintainable mess. That mess helps trace each statement to a requirement, but it leads to code that is unreadable to anyone used to normal code.
It is not that "no extra code" and exactly meeting requirements is a bad model in itself. However, writing code in a way that makes it convenient to prove the above encourages unnatural coding styles and programmatic contortions. YMMV, however.
You can then reverse-engineer the low level requirements from source, deriving your verbiage from the system requirements and logic from the actual code. This way your low-level/unit testing is precise and verifiable while also ensuring strong traceability up and down the chain.
If there end up being low-level requirements or code that don't trace up to the system requirements for some reason, then either there's been a disconnect between design and execution or, if not, you make the case for a derived requirement.
The key, for me, is the tooling that we have around maintaining requirements and the traces between them and the code. I think that it should be possible to navigate traces and update (low level) requirements and test specifications without leaving the text editor. If it takes 5 minutes to open DOORS and to find the relevant requirement, then you aren't likely to keep it as up to date as you should. The same is true if the requirements are stored on a spreadsheet in a document management system.
Some ALM/IDE tools may enable you to manage things in a better way. From what I have seen, mbeddr is a really interesting experiment along these lines. I also suspect that Visual Studio and Eclipse have some pretty powerful features in this regard.
In my side project (and purely for my own 'entertainment') I am experimenting with ways of embedding low level requirements into comments in the code (the build extracts them and can update DOORS or some other tool), as well as a generic trace system that allows different classes of traceable item to be defined and indexed. The build can enforce requirements coverage in the same way that it enforces test coverage, and because the text and metadata of requirements is easily accessible to the build, we can use NLP tools to enforce the use of restricted natural language or DSLs in the requirements text. The ultimate goal being to use machine learning to learn the correlation between requirements changes and code changes so that a change in one can help generate advice and guidance on what needs to change in the other.
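As a rough sketch of what that extraction step could look like (the tag format, paths, and names below are invented for illustration, not the actual tooling): the build scans source comments for requirement tags and produces an index that can be checked for coverage or pushed to DOORS.

```python
import re
from pathlib import Path

# Hypothetical tag format: a comment like
#   # REQ: LLR-042 Close the valve within 50 ms of an overpressure signal
# links the code below it to a low-level requirement.
REQ_TAG = re.compile(r"#\s*REQ:\s*(?P<id>[A-Z]+-\d+)\s+(?P<text>.+)")

def extract_requirements(source_dir):
    """Scan source files and index every requirement tag by ID."""
    index = {}
    for path in Path(source_dir).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            match = REQ_TAG.search(line)
            if match:
                index.setdefault(match["id"], []).append(
                    {"file": str(path), "line": lineno, "text": match["text"]}
                )
    return index

if __name__ == "__main__":
    reqs = extract_requirements("src")
    # A real build could fail here if a known requirement ID has no trace,
    # enforcing requirements coverage the same way it enforces test coverage.
    for req_id, traces in sorted(reqs.items()):
        for t in traces:
            print(f"{req_id}: {t['file']}:{t['line']} -- {t['text']}")
```

Because the tag grammar is machine-checkable, the same index is what the restricted-natural-language checks and, eventually, the requirements-to-code correlation model would consume.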
> Though you took half of your entire program, and optimized it to be four times faster, the overall system is only 1.6x faster.
This application of Amdahl's law doesn't tell you anything about what happens when you take half of the program; it tells you what happens when you take half of the execution time. If you only get a 1.6x speedup, it means you optimized the wrong part of the program!
Suppose 20% of the program is responsible for 80% of the execution time. If you speed up this bit four times, you end up with a 2.5x speedup (and not a 1.18x speedup as the article tries to imply).
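To make the arithmetic explicit, here is the standard Amdahl's law formula with all three scenarios plugged in (Python used purely as a calculator):

```python
def amdahl_speedup(time_fraction, factor):
    """Overall speedup when `time_fraction` of the execution time
    is made `factor` times faster: 1 / ((1 - p) + p / s)."""
    return 1 / ((1 - time_fraction) + time_fraction / factor)

print(amdahl_speedup(0.5, 4))  # 1.6   -- half of the *time* made 4x faster
print(amdahl_speedup(0.8, 4))  # 2.5   -- the hot 20% of code (80% of time) 4x faster
print(amdahl_speedup(0.2, 4))  # ~1.18 -- a cold 20% of the time made 4x faster
```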
This quote is especially misleading:
> Amdahl’s Law is telling us that to significantly improve the speed of a system, we’ll have to improve a very large fraction of it.
No, it's telling us we'll have to improve the part of it which is responsible for a very large fraction of the execution time. Often this is only a tiny amount of the overall codebase.
* 420,000 LOC
* 260 people
* $35,000,000 / year
That's great for reducing cost (or increasing quality or buying time, they're all related) but in the long term it often leads to spaghetti.
Considering the time pressure they're perpetually under and how Elon is known to run his organizations, I would suspect their software products are no different.
In job postings I see a mix of C# and golang. I cannot believe either a Model X or a rocket is rockin' C#...
Basically it's C/C++ (as expected) for critical stuff, like the rocket or dragon, and C# for all enterprise systems. Plus an assorted set of other languages/technologies depending on team and project.