
Applying NASA coding standards to JavaScript - PixelsCommander
http://pixelscommander.com/en/javascript/nasa-coding-standarts-for-javascript-performance/
======
dljsjr
Whenever somebody does an analysis of coding standards like this (see also the
JSF coding standards) they always lock in on how these rules are written to
enforce "correctness", but they frequently gloss over an extremely important
part of what it means to be correct in a complex control system like a jet
fighter or a spacecraft, and that is deterministic execution.

"Correct" execution doesn't just mean error free, but it also means that code
needs to execute deterministically both in terms of near-complete referential
transparency as well as execution time. This is what is referred to as "real-time
computing" in CS, a term that UI/UX/front-end designers have recently
overloaded to describe responsiveness in high-latency systems like the web.

All of the inferences about how these rules aid static analyzers are good and
correct, but a lot of these rules are also important to achieving
deterministic execution time; memory management being the big one due to its
complete lack of determinism both in computation result and computation time.
Sticking code in a real-time OS doesn't really matter unless the code itself
is also real-time kosher.

I don't bring this up to gripe; I bring it up because it wasn't something I
was familiar with until I started working on torque-controlled walking robots
a few years ago, where the feedback loop has a real-time constraint. It has
since taken over my life, and I find it to be a very fascinating aspect of
computer science that almost nobody ever talks about.
Achieving real-time deadlines in a program can be quite challenging and very
often making something "faster" than its deadline is not even close to the
same thing as making sure it _always_ makes its deadline. It really changes
the way you think about performance, execution time, program optimization,
etc.

~~~
ep103
Could you go into more detail on where I could learn more about this / where I
could work on such types of systems? (outside of high frequency trading)?

I've started working on similar types of technology recently, but purely by
wandering into it by accident.

~~~
dljsjr
I don't really know of any good general resources; they're often pegged to a
specific programming language because the ways in which you have to achieve
real-time compatibility differ depending on how the language and its runtime
behave. We do all of our work in Java, which is definitely not a great
candidate for real-time in most cases, so a lot of what I've learned about
real-time programming has come from having to wrangle the JVM into doing what
we need it to do.

The fields that come to mind when I think of real-time outside of HFT, though,
are control systems and digital signal processing (DSP). While DSP doesn't
necessarily need to be real-time if you're doing it for analysis, doing DSP
live definitely requires real-time programming knowledge since signals are
often expressed in the time domain and any delay in computation of a discrete
element will propagate in how the signal is interpreted. For example, if the
signal you're processing is a sound signal (like music), and you are doing
software-based DSP on the signal before it reaches the playback device, it had
better be real-time or any delay spikes will ruin the playback. This is why OS
X is so popular with the music industry and especially DJs; the Mach kernel
has really good real-time support and a pretty good real-time, software-based
DSP library for building apps.

~~~
pjmlp
You might be interested to know that Aicas is driving the new real-time
specification for Java.

[https://oracleus.activeevents.com/2014/connect/fileDownload/...](https://oracleus.activeevents.com/2014/connect/fileDownload/session/4ACE36ED8A7AF1923F15B9941F35BB25/BOF4957_Hunt-
javaone-rtsj-bof-2014.pdf)

~~~
dljsjr
We have some contacts at Aicas, and we've used the original RTSJ in the past
as well as evaluated Jamaica. A colleague and I recently attended JTRES and
spoke to Dr. Hunt about where Aicas fell short for our specific use case; in
the very near term it won't be usable for us. We're looking into
re-evaluating it around summertime.

------
DannyBee
"but what is wrong with recursion? Why NASA guidelines prescribe to avoid
simple technique we were studying as early as in school? The reason for that
is static code analyzers NASA use to reduce the chance for error. Recursions
make code less predictable for them. JavaScript tools do not have such a
precept so what we can take out of this rule?"

This is, of course, wrong. JavaScript has not solved the problem of making
recursion easy or sane to analyze. It is quite literally the same problem you
have with C.

In fact, most JavaScript tools are _way_ less useful at recursion analysis
(i.e. they produce a much higher rate of false positives) than C tools.

"9.The use of pointers should be restricted. Specifically, no more than one
level of dereferencing is allowed. Function pointers are not permitted.

This is the rule JavaScript developer can not get anything from. "

Except that javascript developers use function pointers like they were going
out of style ...
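To make the point concrete, here is a minimal sketch of how everyday JavaScript leans on function values (the names are purely illustrative):

```javascript
// In JavaScript, functions are first-class values: passing a callback
// is the moral equivalent of passing a C function pointer.
function applyTwice(fn, x) {
  return fn(fn(x)); // call through the "function pointer" twice
}

const increment = (n) => n + 1; // a function stored in a variable
const result = applyTwice(increment, 0); // → 2
```

Every callback, event handler, and promise chain does this, so rule 9 taken literally would forbid essentially all idiomatic JS.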

~~~
wyager
I suspect the issue isn't that recursion is hard to analyze, but that non-
tail-call-optimized recursion has unbounded memory usage.

In fact, what makes you say that recursion is difficult to analyze? Inductive
proofs over recursive algorithms are generally much nicer than proofs over
iterative algorithms.

~~~
jacquesm
You nailed it. Tail calls have a constant memory footprint (assuming the
language runtime and/or compiler are able to recognize these situations and
convert the tail call into an adjustment of the variables in scope in the
current stack frame followed by a jump); non-tail-call recursion does not.

If we were worried about our ability to analyze these, we wouldn't be able to
optimize away that special case to begin with.

I do find that, to my imperatively trained eyes, it always screams 'stack
overflow' at me, and then I have to remind myself that it will be optimized
away.

It'd be nice if environments that provide tail call optimizations could be set
to fail compilation if any non-tail-call recursion is detected.
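For illustration, a minimal sketch of the difference in JavaScript (note that although ES2015 specified proper tail calls, most engines never shipped them, so treating the second form as constant-space is an assumption about the runtime):

```javascript
// Non-tail-recursive: the addition happens AFTER the recursive call
// returns, so every frame must stay alive; stack use grows with n.
function sum(n) {
  if (n === 0) return 0;
  return n + sum(n - 1);
}

// Tail-recursive: the recursive call is the very last action, so a
// runtime with tail call optimization can reuse the current frame.
function sumTail(n, acc = 0) {
  if (n === 0) return acc;
  return sumTail(n - 1, acc + n);
}
```

Both compute the same value; only the second has any chance of a constant memory footprint.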

~~~
DannyBee
"If we'd be worried about our ability to analyze these then we wouldn't be
able to optimize away that special case to begin with. "

Most compilers require you to tell them, specifically, that they can tail-call
everything before they will do it (I'm talking more about GCC and LLVM here,
which are generic compiler frameworks). They will do some tail call
optimization, but only in fairly basic cases, exactly because they can't
analyze what will happen.

------
jerf
"Why NASA guidelines prescribe to avoid simple technique we were studying as
early as in school?"

Because recursion in the languages they are using involves allocating stack
frames for the recursion, which means they can't predict stack depth, which
means they can't predict the amount of memory the stack will consume, which
means they may run out of memory, which means their spaceship may be
destroyed.

Which, incidentally, here on step two indicates why the NASA standards are
from _such_ a different world from the web browser that they don't much
matter. Now, I'm not saying that memory use doesn't matter and you can just go nuts, but
JS is generally running in an environment with wildly more memory than those
restrictions were meant for (yes, even on cell phones nowadays), and generally
you're looking at "annoying a user and they go away" being the worst case
rather than "losing a billion-dollar spacecraft".

JS is not even capable of following #3 about memory allocation.

NASA: "6. Data objects must be declared at the smallest possible level of
scope."

Commentary: "This rule have simple intention behind – to keep data in private
scope and avoid unauthorized access. Sounds generic, smart and easy to
follow."

No, that's not what that rule is for. By keeping the data objects at the
smallest possible scope level, it makes it as likely as possible that it can
be stack-allocated, which means the lifetime of the object can be easily
understood, guaranteed not to proliferate due to a leak (see rule about not
recursing), etc. Again, this has little to do with a language that decides for
you whether it will stack or heap allocate.

#8, again, has nothing to do with Javascript.

"When started to write this article I could not imagine how much web world
could get from NASA and how much is similar."

No, it really isn't. Unfortunately, and I don't know how to effectively soften
this, you don't understand enough about their world to understand what the
rules are for, so you brought your own conceptions about JS to what the rules
were saying. In fact they're _really_ specific to a certain type of resource-
restricted real-time programming that has very little to do with a web
browser, and which you could not follow with JS even if you somehow wanted to.
For those rules you could follow, the cost/benefit tradeoff of NASA's
suggestions is awful. If you're interested in improving your JS quality,
focus on more extensive automated testing, focus on more profiling, and focus
on general software engineering principles like single-responsibility and DRY.
Don't even worry about these rules.

~~~
zamalek
I really agree with you, jerf.

> Because recursion in the languages they are using involves allocating stack
> frames for the recursion, which means they can't predict stack depth, which
> means they can't predict the amount of memory the stack will consume, which
> means they may run out of memory, which means their spaceship may be
> destroyed.

Thank you. It is trivial to turn a recursive function into a loop, especially
compared to running through all the stack space (1 MB on Windows) and crashing
the process in some runtime environments[1]. Or crashing your spaceship.
Either way: recursion belongs in academia. I don't care if you are using JS
or ASM, learn how to avoid using it.
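As a sketch of how mechanical that conversion usually is (factorial chosen purely for illustration):

```javascript
// Recursive: stack depth grows linearly with n.
function factorialRec(n) {
  return n <= 1 ? 1 : n * factorialRec(n - 1);
}

// Iterative: the same computation with constant stack usage.
function factorialLoop(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}
```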

> Manage your variables with respect. Regularize variables declaration by
> putting them in the top of scope

That really _isn't_ what is meant by "Do not use dynamic memory allocation
after initialization." Memory is allocated on the stack the second you enter
the frame, no matter where you declare the variable. That is a style
recommendation.

> JS is not even capable of following #3 about memory allocation.

Not to mention that the wrong definition of a GC is used.

[1]:
[http://stackoverflow.com/questions/107735/stackoverflowexcep...](http://stackoverflow.com/questions/107735/stackoverflowexception-
in-net)

~~~
mercurial
> I don't care if you are using JS or ASM, learn how to avoid using it.

It's a bad idea for a class of languages. Functional languages implementing
TCO are a different kettle of fish (especially if your function is tail-
recursive). It's an extremely common pattern in these languages.

~~~
zamalek
Indeed; however, understanding why it's a bad idea in imperative languages is
important when using functional languages. You have to know what qualifies a
function as tail-recursive (and why that is important). You'll only learn that
from experience with both classes of languages and, most importantly, from
getting burned by abusing the stack.

I'd rather people learn from the other direction, you know? I'm not sure how
that would be implemented, but I see recursion abuse far too often and few
seem to understand how that could be a bad thing.

------
atsaloli
The coding standard is one of six parts of Dr. Holzmann's formula for building
robust software.

You can read more about Dr. Holzmann and the other 5 parts at "Making robust
software"
[http://www.verticalsysadmin.com/making_robust_software](http://www.verticalsysadmin.com/making_robust_software),
my notes on Dr. Holzmann's "Mars Code" talk at USENIX HotDep '12 mini-conf
(full video of talk: [https://www.usenix.org/conference/hotdep12/workshop-
program/...](https://www.usenix.org/conference/hotdep12/workshop-
program/presentation/holzmann))

------
danielsamuels
This article is almost blogspam: poorly written, with very little content. The
linked PDF is far more interesting.

~~~
PixelsCommander
Any help on making the text better is appreciated. Blogspam... maybe, but
consider that the PDF was published a long time ago, and not that many web
people were interested until there was a way to use it in everyday practice.

~~~
zamalek
I can tell you put a lot of effort into the article. Heck, it got a decent
number of upvotes, which is a good thing. Thing is, there are a few veterans
hanging around here, and they really don't take factual errors lightly. I have
had my run-ins with them too.

Your first mistake?

> web people

You are not web people. You are a developer. Broaden your horizons and stop
restricting your experience to the microcosm that is JS and the web in
general. Once you do that you'll start to understand why the folks here have
some issues with some of the suggestions you put forward.

------
unwind
Can somebody please fix the glaring typo in the title: s/standarts/standards/?

No point in duplicating OP's errors here, in my opinion.

~~~
PixelsCommander
Done, thanks. Please, let me know if there are more typos / stylistic issues.

~~~
Xophmeister
s/autmatization/automation

------
Matthias247
Avoiding dynamic memory allocation and recursion are typical rules for
embedded scenarios, where programs run in safety-critical contexts on tiny
microcontrollers. These rules are not for easy static analysis but for having
deterministic memory usage: in the best case, usage you can prove will in no
state exceed the available memory. Similar rules are also enforced in
automotive scenarios.

Allowing recursion makes it possible to blow the stack. Allowing malloc/free
might blow the heap; even when it does not do so immediately, it might do so
over time due to heap fragmentation.

------
midgetjones
A 60 line limit for functions sounds way over-generous to me.

~~~
mercurial
It really depends on your language. Also, I find (though rarely) that it
sometimes helps to keep most of the pieces of a given algorithm in the same
place rather than splitting it into small functions.

------
debacle
I really like this. I especially like how the author(s?) didn't make the
Assertions -> Unit Tests leap. Assertions are much more powerful, even if they
aren't executed in production.

I feel like a few of these are glossed over. #7 implies that error conditions
should be handled. #8 precludes transcompilers, since they are also just
glorified preprocessors. #10 means: use "use strict".

~~~
fat0wl
Yeah, I agree #8 in particular should elicit a lot more debate in the JS
community. Back when I used to be into RoR, I would get a ton of flak in
interviews for writing pure JS rather than CoffeeScript, and pure CSS rather
than Sass.

There are legitimate cases where a preprocessor is useful (CoffeeScript demos
show several cases where a lot of code can be simplified), but unless you are
writing a math/list-heavy app, I feel those advantages don't apply so much. I
don't perceive a need for these tools in most projects that are about DOM
manipulation; the same can be achieved with plain libraries (jQuery), which
eliminates a build step and should allow for some easier testing.

This is a fundamental programming debate that I don't see a lot of discussion
of... Since I like Clojure, I see some talk of how "transparent" it is as a
wrapper over JVM types, but a lot of the preprocessor-style transcompilers
gloss over this idea (I guess just preferring two independent stages of
static/lint analysis?).

~~~
debacle
Firstly, I don't lump LESS/Sass in with CoffeeScript and the like, because CSS
is a content-description language, not a programming language, though as
someone who is an expert in none of the above, I would hear arguments against
that.

Secondly, most people wouldn't. If someone is using CoffeeScript, they don't
want to hear that they shouldn't be using CoffeeScript, because they want to
use CoffeeScript, and most people (and in 2014 "most programmers" are a
microcosm of "most people") don't like listening to opinions that deviate from
their own. All of the "great" parts of CoffeeScript can be implemented in
JavaScript with a library like underscore.

Clojure is a bit different since it is interacting directly with the JVM.

~~~
fat0wl
Good points, but the reason I lump them together (even though I agree that C
preprocessors vs. CSS vs. JS vs. JVM abstraction are obviously very different
technologies) is that, in the context of this discussion, I see them as a
layer of abstraction that mangles the ability of code analysis (whether static
or in debugging) to be comprehensive.

For some technologies this is being alleviated with source maps, but it still
feels like over-complication when there are library alternatives available
(Underscore, as you point out). When it comes down to it, you will need to
debug and understand some paradigms/quirks of the language you are
transcompiling to, so it would behoove the developer to be as fluent in the
target language as possible (and of course, coding natively in it to begin
with promotes fluency...).

As the tools become stronger the danger disappears, yet the base language has
simultaneously matured to the point where this extra technology may be nearing
obsolescence. I understand that in JS there is some lag with web standards
bodies etc., but taking Java as an example, annotations and other language
features were added while the libraries/containers (web frameworks, Spring,
EE) simultaneously got more powerful. It is a pretty strong and mature
toolset, probably mainly because the language's evolution was always forced
back toward the core.

From the limited amount I know, the other JVM languages are implemented
against the JVM spec, which has matured in a similar way, thus facilitating
easier implementation of language features and interoperability.

------
IshKebab
> All loops must have a fixed upper-bound.

This was done by default in QuakeC (a pretty great DSL, I think). I was amazed
the first time I made an infinite loop and it detected it. (It stopped after a
large number of iterations, maybe 10,000.)
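A sketch of what the rule might look like in JavaScript (the bound of 10,000 mirrors the QuakeC behavior described above and is otherwise arbitrary; `runUntilDone` is an invented helper):

```javascript
// Every loop gets a fixed upper bound; hitting the bound is treated
// as a bug rather than a reason to keep spinning.
const MAX_ITERATIONS = 10000;

function runUntilDone(step) {
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    if (step()) return i; // finished within the bound
  }
  throw new Error("loop exceeded its fixed upper bound");
}
```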

------
joesmo
"9.The use of pointers should be restricted. Specifically, no more than one
level of dereferencing is allowed. Function pointers are not permitted."

I disagree that JS developers can't get anything out of this. The rule clearly
states that first-class functions (function pointers in C) should not be used.
Whether that's a good or bad idea to follow is another matter, but the
author's interpretation that it doesn't apply to JS is wrong. IMO, it's likely
to reduce complexity and bugs quite a bit, so it seems like the rule should
generally be followed if one follows the rest of the rules, despite giving up
such a powerful and flexible feature.

~~~
lscharen
Totally agree on this point.

My feeling is that Rule 9 roughly maps to following the Law of Demeter in that
you shouldn't chase properties down more than one level, e.g.

Good:

    var something = obj.property;

Bad:

    var something = obj.property1.property2.property3;

~~~
adestefan
Once again this has more to do with the idea that everything must be
deterministic. Using a pointer when it's not absolutely needed is just one
more place where a non-deterministic runtime issue can crop up.

