
Why Ada Is the Language You Want to Be Programming Your Systems With - fogus
https://hackaday.com/2019/09/10/why-ada-is-the-language-you-want-to-be-programming-your-systems-with/
======
wrp
This article promotes a popular misconception of the programming situation for
defense projects in the 1970s. There may have been "hundreds of specialized
programming languages" in existence that could be used, but just a handful
actually predominated. Most aeronautical projects were done in JOVIAL.

I've talked about this history with engineers from the 1960s-70s. They did not
regard the introduction of Ada as a good thing. The JOVIAL language had been
developed by engineers and modified through experience over several years to
become something they were quite happy with. The impetus that led to Ada came
from management, and the implementation was done by academics.

~~~
naikrovek
I'm sure that's all true.

There is something to be said for a language that was written to a spec from
the beginning, however. That is a luxury that few languages today can claim;
most language specs were written _after_ the first version(s) of the language
were implemented, and were written to match the implementation, not the other
way around.

Writing the specification first makes for a much cleaner language, though not
always the most comfortable, if you put any weight in your own opinions.

[I am speaking generally from here onwards and I am not looking for specific
examples to the contrary. I know full well that there are exceptions to the
generalizations below.]

Lots and lots of engineers today would (and do) express complete comfort in
their daily drivers of C and C++, while producing buffer overflow bugs and
use-after-free bugs all the while. Maybe it isn't in the best interest of the
software landscape to let developers choose the languages they use...

We have, as a community, let our feeling that software development is an art
dictate how we react and deal with new information; we overwhelmingly reject
it. What we're doing now is so much more comfortable, and we feel like we know
what we're doing with [language x] finally! Why would I want to overturn all
that comfort?

Software development is not an art, it is a science, and we've been ignoring
the science part of that as we refuse to abandon our favorite language(s) for
things that could easily be proven to be better, if we would just do the
testing to determine performance numbers, ease of bug introduction, ease of
maintenance, and so on.

We call ourselves "software engineers" and we don't measure anything beyond
the most rudimentary measurements. We DEFINITELY do not actively look to
overturn our comfort in favor of finding a better language to use. We don't
try to improve anything the moment that might mean that we switch languages,
no matter how significant the long term advantages could be. We don't even
think about what the long term advantages could be; we're too busy telling
ourselves that we are smart and that we can do it in whatever language we are
comfortable with.

~~~
generatorguy
Until software engineers are personally liable for their work like civil and
mechanical engineers are and have to carry insurance because they will get
sued for negligence if there are issues, it’s not professional engineering.

~~~
wahern
Software engineers are personally liable for their work. It's just that it's
much more difficult to prove the various elements of negligence in the
software realm, which makes litigation costly and uncertain. How negligent is
an arithmetic overflow in a free software library that causes the loss of # of
ads clicked for 10,000 Facebook users? Could the developer have foreseen the
use case? What's the value of that loss?

An arithmetic bug in a flight safety system or financial trading platform?
Your odds of being successfully sued just went up astronomically as it's much
easier to show negligence _and_ put a dollar figure to damages.

Professional licensing regimes are principally designed to prevent negligence
by a priori preventing people from holding themselves out as capable of
performing some task. In terms of legal liability all they really do is
restrict the extent to which you can limit your liability contractually.
That's important in business and in professional services (doctor, lawyer,
etc), but for downstream users who suffer damages there was never a contract
involved, anyhow.

------
ajxs
I live near Sydney University (not my alma mater). In a rather serendipitous
turn, one day on my way home from the shops I was walking past one of the USYD
buildings and there was a giant stack of library books that had been thrown out
onto the sidewalk as rubbish to be collected by the council. It turns out they
were old science faculty books that were no longer prescribed material. I
started looking through them and found several books on Safety-critical
software development in Ada, several books on hardware design and several
others on software development. I took them all home, at first thinking the
Ada ones would just be some archaic novelty. When I got a chance to read them
I was amazed. I was expecting something akin to COBOL from the language.
Instead I found a language that was extremely well designed for its domain.
I'd already done a bit of embedded and operating system development, and I
could immediately see how Ada would benefit a developer in these areas. Even
representation clauses alone are such a huge convenience for low-level
programming. I haven't looked back since. I was actually in the process of
preparing a bare-bones example of operating system development in Ada; if
this post had been three days later, I'd have been able to link the finished
product here right now.
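
For anyone who hasn't seen them, here is a rough sketch of what a record
representation clause looks like (the register layout is made up): each
component is pinned to exact bit positions, so the record matches the
hardware instead of whatever layout the compiler would pick.

    with Ada.Text_IO;

    procedure Rep_Clause_Demo is
       --  Hypothetical 8-bit device status register.
       type Channel_Number is mod 16;

       type Status_Register is record
          Ready   : Boolean;
          Error   : Boolean;
          Channel : Channel_Number;
       end record;

       --  Representation clause: pin each component to exact bit positions.
       for Status_Register use record
          Ready   at 0 range 0 .. 0;
          Error   at 0 range 1 .. 1;
          Channel at 0 range 4 .. 7;
       end record;
       for Status_Register'Size use 8;

       Reg : Status_Register := (Ready => True, Error => False, Channel => 3);
    begin
       Ada.Text_IO.Put_Line ("Channel:" & Channel_Number'Image (Reg.Channel));
    end Rep_Clause_Demo;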

~~~
OneWingedShark
> if this post had been three days later I'd have been able to link the
> finished product here right now.

Well, that was 5 days ago, which means you _CAN_ post the link! (I'd like to
see it, quite a lot.)

~~~
ajxs
Here's my initial implementation:
[https://github.com/ajxs/cxos](https://github.com/ajxs/cxos) I'm going to
continue to work on it for some time, but this is the minimal working example
that I had enough time to properly document.

------
jeffdavis
There is some overlap in the use cases between Ada-in-the-mainstream (which
seems to be what the article is suggesting) and Rust. The existence of Rust
will make it even harder for Ada to break out of its existing domains.

Both seem to have good ecosystems, but quite different. Ada has more high-
assurance tooling and practices. Rust has more in terms of packaging and
general-purpose high level libraries.

With Ada, it's pretty hard to shake the feeling that it's not as welcoming.
Either you are building a spacecraft, or "what are you doing here?". Not that
anyone actually says that, and in my limited experience playing with the
language, everyone seemed nice. It's just a feeling that's hard to escape. I
wish it weren't so, because I think Ada has a lot going for it and it could be
good in more domains.

One kind of cool thing is that it's designed to get by without so many heap
allocations, and it does some interesting things to allow/encourage more stack
allocations. It also has more sophisticated runtime checks, like contracts,
that are kind of between a unit test and a compile-time check.
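
For anyone curious what those contracts look like, here is a minimal sketch
(the function is made up) using Ada 2012's Pre/Post aspects; they are checked
at run time when assertions are enabled, and SPARK tooling can try to
discharge them statically.

    with Ada.Text_IO;

    procedure Contract_Demo is
       --  Hypothetical example of Ada 2012 contract aspects.
       function Clamp (Value, Lo, Hi : Integer) return Integer
         with Pre  => Lo <= Hi,
              Post => Clamp'Result in Lo .. Hi;

       function Clamp (Value, Lo, Hi : Integer) return Integer is
       begin
          if Value < Lo then
             return Lo;
          elsif Value > Hi then
             return Hi;
          else
             return Value;
          end if;
       end Clamp;
    begin
       --  Calling with Lo > Hi would raise Assertion_Error when assertion
       --  checks are enabled (e.g. GNAT's -gnata switch).
       Ada.Text_IO.Put_Line (Integer'Image (Clamp (42, 0, 10)));
    end Contract_Demo;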

Rust is great at managing complexity. I keep holding out hope that Ada can
somehow give us some simplicity back while still making useful software.

~~~
im_down_w_otp
I understand what you're getting at here, I think, but one of the ecosystem
challenges we run into is that while Rust is abstractly a fantastic choice for
the domain of systems development that would often be served by Ada (in fact
we think it's a lot better in some respects because we can do statically,
ahead of time, things that Ada does dynamically at runtime, by abusing the
Rust type
system and borrow-checker), the Rust ecosystem isn't really focused on this
class of problems.

There's significantly more attention provided to things like Wasm than there
is to things like embedded use cases for example.

The two languages, ecosystems, and communities seem to have largely non-
overlapping objectives despite having overlapping technical capabilities.

~~~
devnulloverflow
> The two languages, ecosystems, and communities seem to have largely non-
> overlapping objectives despite having overlapping technical capabilities.

I keep thinking servers should be written a bit more like embedded thingies.
(Though I suspect the actual trend is in the other direction).

Both kinds of software are deeply interested in asynchrony and concurrency (though
not necessarily parallelism). And both also tend to talk over networks to
other gadgets as part of a complex system. And since my job often revolves
around putting out fires when servers OOM, I'd also like to see a lot less
malloc().

It would be nice if software could place hard upper limits on how many
resources (including RAM) it takes to serve a request and then reject requests
early. With real-world languages and frameworks, we can only do very primitive
approximations to this. But a framework written from the bottom-up with a
malloc-is-evil mentality might be able to do it.

~~~
im_down_w_otp
We have a crate internally that uses type-level programming to do statically
tracked memory allocation (technically the method can be used for other kinds
of resource management too), so that we can have compile-time guarantees that
allocations will always succeed and we'll never allow memory to be allocated
from inappropriate regions (e.g. mapped registers).

Layered on top of that we can also ensure that if at runtime your device
memory configuration is sufficiently different from what your assumptions were
at compile-time, then we don't allow the device to boot so that you don't end
up in a dangerous resource exhaustion situation by accident.

We use this for no_std embedded use cases, but you could also use this
library and programming model to create a special heap with statically
tracked allocations in your normal programs, giving some region of your
program context that same kind of assurance.

We have 3-4 things kind of in that vein in our pipeline to open source, and I
think that's number 3.

------
ch_123
> Ada code controls rockets like the Ariane 4 and 5, many satellites, and
> countless other systems where small glitches can have major consequences.

A somewhat amusing endorsement, given that the infamous failure of the
maiden launch of the Ariane 5 was due to a series of interconnected software
bugs - a reminder, perhaps, that the choice of implementation language does
not automatically make for a reliable system.

~~~
ajxs
There's always someone who brings this up whenever Ada is mentioned, as if it
serves as a damning refutation of Ada's reputation as a safe language. That
bug was caused by a series of design failures if anything, rather than
software bugs. From memory, they reused the inertial reference system from the
Ariane 4 mission without testing its suitability for a different delivery
vehicle. The actual 'bug' was caused by an integer overflow occurring due to
the velocity variable not being wide enough to handle the higher horizontal
velocities of the newer rocket. This caused the rocket to go off course and
require termination. The backup computers had the exact same flaw due to
having the same programming. I'm guessing they used Ada's modular integer
types, which wrap around when they overflow. This is the same as C, and I'm
guessing the same as Rust. Using either of those languages wouldn't have
helped in the slightest here. It's not some software bug caused by Ada. It was
a domain issue caused by a failure in management and testing.
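
For reference, a minimal sketch of the two behaviours being debated here: an
Ada modular type wraps silently on overflow, while a constrained signed type
raises Constraint_Error (which of the two the Ariane code actually hit is
exactly the point of contention below).

    with Ada.Text_IO;

    procedure Overflow_Demo is
       --  Illustrative only.
       type Wrapping_Counter is mod 2 ** 16;                  --  wraps
       subtype Velocity is Integer range -32_768 .. 32_767;   --  checked

       W : Wrapping_Counter := Wrapping_Counter'Last;
       V : Velocity := Velocity'Last;
    begin
       W := W + 1;   --  wraps around to 0, no error raised
       Ada.Text_IO.Put_Line ("Wrapped to" & Wrapping_Counter'Image (W));

       V := V + 1;   --  raises Constraint_Error: 32_768 is out of range
       Ada.Text_IO.Put_Line ("Never reached");
    exception
       when Constraint_Error =>
          Ada.Text_IO.Put_Line ("Overflow detected on the checked type");
    end Overflow_Demo;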

~~~
nwallin
> The actual 'bug' was caused by an integer overflow occurring due to the
> velocity variable not being wide enough to handle the higher horizontal
> velocities of the newer rocket. This caused the rocket to go off course and
> require termination.

This is not entirely correct. The code which overflowed was unnecessary for
flight. People have this idea that the sensor sensed it was going 32745 m/s to
the east one moment, and -32723 m/s the next, and then tried to compensate for
the sudden gust of wind it must have encountered. This is incorrect. The
routine in question was only supposed to run when it was on the ground, and
served no purpose in flight. It was scheduled to be disabled 40 seconds after
launch. (The flight failed after 37 seconds)

The problem is specifically the fact that the code was written in Ada. Ada
does not fail the way other languages fail when you overflow. Instead of giving
an incorrect value (which never would have impacted the flight) it throws an
exception. This exception was uncaught. This uncaught exception brought down
the entire flight control system, despite the fact that the failing code was
not controlling flight.

Rust improves upon this situation by panicking in debug mode and wrapping in
release mode, which is better than the Ada behavior in every possible way.
Normal "unsafe" languages improve this by giving an incorrect value as the
result, which in the case of Ariane V would have saved the flight.

~~~
ajxs
You could have condensed your entire post to just "This exception was
uncaught". Your last point is just outright ridiculous:

> "Rust improves upon this situation by panicking in debug mode and wrapping
> in release mode, which is better than the Ada behavior in every possible"

So... What you're trying to tell us is that Ada recognising an exceptional
situation has occurred is bad, and that Rust just ignoring it is a good thing?
Righto... An integer overflow occurring in this scenario _is_ an exceptional
condition. In this case it was one that the developers did not anticipate.
This issue arose because they mistakenly thought it was not possible for this
scenario to occur. The mistake here was not anticipating the exception and
handling it in a meaningful way, not the raising of an exception. The fact
that not raising an exception would have avoided the disaster in this case is
entirely incidental. It was still an exceptional circumstance that the
developers, testers, designers did not anticipate. The silent wrapping
behaviour you describe would have bitten someone eventually, perhaps here,
perhaps somewhere else. Possibly with enormous repercussions.

> "Normal "unsafe" languages improve this by giving an incorrect value as the
> result, which in the case of Ariane V would have saved the flight."

The normal "unsafe" languages you're referring to (presumably C) have a
history of very poor error handling, requiring an extremely high degree of
conscientiousness when programming for mission-critical applications. Can you
imagine _actually_ programming for what you're describing here? Not only does
the computer have to guard against outright failure, but now it has to infer
the possibility of an error state from a stream of in-band data in real-time.
Righto...

Also, just so you know: Signed integer overflow is UNDEFINED BEHAVIOUR in C.

~~~
kragen
Tainting the dataflow rather than the control flow, like a quiet NaN, would be
a better solution in cases like these. It happens that the solution that was
chosen was the worst choice possible, but in other cases it wouldn't have
been.

Probably the fact that the engineers got the case analysis wrong suggests that
the methods of reasoning they were using weren't very effective. Modern formal
methods might or might not have helped; guaranteeing that a piece of code is
statically free of exceptions is not a terribly advanced thing to do, but
really you'd like to guarantee that the variable was actually big enough.
Having less code to reason about certainly would have helped.

C’s choice is clearly unconscionable.

~~~
blub
In hindsight it would have been perhaps better, but I don't think it can be
said that having essentially random values in memory is better than having a
controlled error handling action being executed. Is there any reference that
such a course of action is preferred or beneficial?

~~~
kragen
We have a lot of experience with floating-point qNaN values since IEEE 754
introduced them in 1985, and more rigorous languages than Ada — such as OCaml,
Haskell, and Rust — use a generalized version of the concept for a lot of
error handling.

I don't think there's an open-and-shut case that error values are "preferred
or beneficial" _in general_ with respect to exception handling through
nonlocal control-flow transfers; there are reasonable arguments on both sides,
and it seems likely that both are viable options in most circumstances, and
which is better depends on those circumstances. (For example, lazy evaluation
or catastrophic consequences attached to software crashes would seem to me to
weigh heavily on the side of using error values, while interactive REPL-style
usage would seem to me to weigh on the side of nonlocal control flow, perhaps
with Common-Lisp-style restarts.)

My point was narrower: _in this particular case_ , using error values
analogous to NaNs, rather than shutting down the SRI, would have saved the
rocket, without posing the risk of using garbage data that C's semantics pose.
_In this particular case_ , they would have been better. It's possible that,
in other cases, perhaps even in other places in the SRI firmware, using error
values would have been worse than unwinding the stack or shutting down the
SRI. But _in this case_ they would have been better.

(If you aren't convinced of that narrow point, you should probably read the
report from the Lions board of inquiry.)

~~~
blub
I think I see what you mean. I'm personally interested in generally applicable
rules, so the fact that it could have worked in this case doesn't help me
decide what to do when designing another system.

In the general case, the Ada behavior of detecting and signaling numeric
errors seems to be the only robust choice to me. The alternative could only be
acceptable if it's proven that the algorithms fail safe in the presence of
tainted data.

~~~
kragen
I'm interested in generally applicable rules, too; in this case, the generally
applicable rule is that there is no generally applicable default robust
behavior—the behavior of detecting numeric errors and crashing the system was
the worst possible behavior in this context, but in other contexts it would be
the best possible behavior. As I explained in
[https://news.ycombinator.com/item?id=20935662](https://news.ycombinator.com/item?id=20935662),
there was no acceptable way to handle such an error if it was detected at
runtime. The only acceptable solution is to rigorously verify _before runtime_
that the error is not going to happen, as Rust does with, for example,
multithreaded data races. (Well, you can't rule out hardware malfunction, but
if you can make it sufficiently improbable you can use the strategy that they
did in fact use.)

However, default-ignore vs. default-explode isn't a difference between
handling errors with IEEE-754-qNaN-style dataflow-tainting and CLU-style
nonlocal control flow transfers, as you seem to think. In either case,
algorithms that need data that could not be computed successfully will fail to
get that data; in either case, you can write the code to either propagate that
failure or to attempt to recover from it somehow. In either case, you will not
be able to control the spacecraft if you needed that data to control it. The
difference I was mentioning, which is not generally applicable, is that in
unusual cases like the Ariane 5 disaster, where the erroneous data shouldn't
have been computed in the first place because it wasn't needed, qNaN-style
error handling clearly results in safer behavior.

But, in general, real-time control is not a place where there exists a "robust
choice" when your software is broken. If your jet engine control software or
your antilock braking software is broken, you're likely to die, and no error
handling strategy can prevent that, at least not at the level we're talking
about. Biological systems manage this situation by having a melange of
somewhat-independent control loops using different algorithms that are
constantly fighting for control of the organism; while this does seem to
produce systems that are more resilient than engineered systems, it's less
efficient and enormously more difficult to develop or understand, and it's the
opposite direction from the rigorous certainty you and I favor.

------
waynecochran
Ada was the language of the undergraduate curriculum at the University of
Washington when I was there (late 1980's) -- because Boeing. One of the core
safety mechanisms was exception handling.

* The biggest thing I miss is that _every_ block was inherently a try-block. Just put your exception handlers at the bottom of the block. This provided a nice separation of "normal flow-control" and "exceptional flow-control". Why have try-catch anyway -- I never got used to that later in life.

* Exceptions were not synonymous with errors. In fact, the way to read a file byte by byte was just to keep reading bytes and never explicitly check for end-of-file. Let the exception handler handle that case; the logic is much cleaner (a rough sketch of this pattern follows after the list).

* Remember that you can't just dump core and shell out to the OS on an unhandled exception -- not if you're controlling an ICBM flying off the continent. Ada really forced you to handle all errors at some level of the call-stack with an exception handler. At the outer-most level -- just call self-destruct.
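
A rough sketch of that end-of-file pattern (the file name is made up): the
read loop contains no EOF check at all, and the block's own exception part
handles End_Error when the input runs out.

    with Ada.Text_IO;

    procedure Read_Until_End is
       File : Ada.Text_IO.File_Type;
       C    : Character;
    begin
       Ada.Text_IO.Open (File, Ada.Text_IO.In_File, "input.txt");
       loop
          Ada.Text_IO.Get (File, C);   --  no explicit end-of-file check
          Ada.Text_IO.Put (C);
       end loop;
    exception
       when Ada.Text_IO.End_Error =>
          --  Normal termination: the handler, not the loop, deals with EOF.
          Ada.Text_IO.Close (File);
          Ada.Text_IO.New_Line;
    end Read_Until_End;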

~~~
realusername
> * The biggest thing I miss is that every block was inherently a try-block.
> Just put your exception handlers at the bottom of the block. This provided a
> nice separation of "normal flow-control" and "exceptional flow-control". Why
> have try-catch anyway -- I never got used to that later in life.

That's still the case in Ruby, you can put a rescue at the end of the block.

~~~
waynecochran
"rescue" seems to imply "error," whereas "exception" means simply an unusual /
non-normal condition. Does Ruby encourage exception handling as control-flow
besides error conditions?

------
AceJohnny2
If you've done VHDL, you'll be familiar with Ada's syntax. When the DoD
commissioned the design of VHDL, they required that its syntax be based on
Ada.

[https://en.wikipedia.org/wiki/VHDL#History](https://en.wikipedia.org/wiki/VHDL#History)

~~~
non-entity
TIL VHDL was commissioned by the DoD.

Also random note, but wouldn't most engineers familiar with HDLs be
electronic engineers rather than software developers?

~~~
opencl
A lot of people trained as EEs seem to end up in software dev jobs these days.

~~~
magduf
Yep. You don't find many EEs left in the US these days, unless they work at a
military contractor. All the electronic hardware design work went to Taiwan
and China, because that's where all the manufacturing for it is. What's left
is government-related (military) stuff that obviously can't be outsourced to
Asia, and some commercial stuff that they keep in-house for various reasons.

Basically, the US is a really bad place to be an electrical engineer
specializing in electronic hardware, but it's a great place to be a software
engineer or developer.

~~~
pkaye
> Basically, the US is a really bad place to be an electrical engineer
> specializing in electronic hardware, but it's a great place to be a software
> engineer or developer.

It's because software engineers get paid much better, in addition to having more
options. As a firmware engineer I straddled both sides. So many of my EE
colleagues lamented they should have done more on the software side. A few of
the brilliant ones moved to places like Google, Amazon which need them for
specialist roles.

I remember a decade back I was visiting a customer in Taiwan to help debug our
product and saw a legion of Chinese engineers helping out some project. I knew
eventually Taiwan would lose some of their R&D to China.

------
GnarfGnarf
I've developed a set of "strongly typed" classes in C++ that have saved me a
ton of grief.

Here are some examples. All classes are fundamentally floats. Errors are
caught at compilation.

    
    
        cInches i1, i2, i3;
        cCm cm1, cm2;
    
        i1 = 1.f;  // error
        i1 = cInches(1.f);  // OK
        i3 = i1 + i2;  // OK
        cm1 = i1;   // error
        i1 = i2 * 2.f;  // OK
        float f1 = i1 / i2;  // OK
        i1 = i2 * i3;  // error! square inches are not inches
    
        int add(cInches in1, cInches in2);
        add(i1, i2);  // OK
        add(cm1, cm2);  // error
    

Typedefs just don't cut it.

~~~
vardump
Why use floats when you can basically have doubles for the same cost? Floats
tend to cause nasty surprises when you deal with real world units.

~~~
LPDWORD
> Why use floats when you can basically have doubles for the same cost?

Floats costing the same as doubles is a myth stemming from x87 arithmetic,
which is obsolete. On an x64 CPU running 64-bit code, your compiler can often
pack four floats into one SSE register. Even when that doesn't happen, CPU
microcode can likely do more with floats than with doubles.

Lastly, with doubles, memory bandwidth usage and cache occupancy are twice
as high, which is true even with x87.

~~~
vardump
The difference is meaningless in scalar code. Not everything is or can be
vectorized. Pretty much no difference between one double and one float in a
SSE XMM register.

> Even when that doesn't happen, CPU microcode can likely do more with floats
> than with doubles.

I have no idea what that means. As far as I know, there's no CPU microcode
dealing with floating point numbers.

> Lastly, memory bandwidth usage and cache occupancy doubles, which is true
> even with x87.

So use floats when you have a lot of data.

~~~
LPDWORD
If you already knew there was a cost difference, why didn't you just say so?

> The difference is meaningless in scalar code. Not everything is or can be
> vectorized.

Multiple scalars can be packed into one SSE register even in scalar code. This
is not the same as vectorization.

> Pretty much no difference between one double and one float in a SSE XMM
> register.

Even then, some arithmetic operations have higher throughput with floats than
with doubles.

> I have no idea what that means. As far as I know, there's no CPU microcode
> dealing with floating point numbers.

Your CPU doesn't execute the ISA directly in hardware; it first converts it
into architecture-specific micro-ops. That includes FP operations.

> So use floats when you have a lot of data.

That's not the point. A lot of the time, the cost difference will indeed be
irrelevant. The point is that _there is a cost difference_.

~~~
vardump
> Multiple scalars can be packed into one SSE register even in scalar code.
> This is not the same as vectorization.

They can, but can you honestly call it a common case for scalar code?

> Even then, some arithmetic operations have higher throughput with floats
> than with doubles.

By far most (90-99%) of FP computation is additions and multiplications
(or fused multiply adds). For scalar case (1 double or float in SSE register),
they take _precisely_ as long on modern x86 hardware.

Sure, float div executes in 11 instead of 13-14 clocks for doubles and I'm
sure transcendentals are even worse, but they're rarely needed. Even then, if
the dependency chain allows, the cost is often OoO scheduled away in integer
dominated code.

> Your CPU doesn't execute the ISA directly in hardware, it first converts it
> into architecture specific micro-ops. That includes FP operations.

Except that SSE instructions pretty much _are_ micro-ops as-is. Despite
similarly sounding term, microcode has nothing to do with micro-ops.

> That's not the point. A lot of the time, the cost difference will indeed be
> irrelevant. The point is that there is a cost difference.

Well, I've written a lot of SSE, AVX etc. SIMD code. There sure is a big
difference when you're processing large amounts of data. But... I've seen
floats introducing silly precision related bugs [0] and a ton of useless float
-> double -> float conversion chains. Most of the time most programmers should
default to double.

[0]: Example: [https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/](https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/)

------
bluejekyll
First, this is a great introduction article to Ada. It's a great history
lesson, and analysis of where it's used and why.

That said, it seems to be arguing for something in a vacuum. Where is the
mention of other new programming languages that also fit this category? Where
is the mention of Rust? Rust fits all the same requirements:

\- A general, flexible design that adapts to satisfy the needs of embedded
computer applications.

\- Reliability. The language should aid the design and development of reliable
programs.

\- Ease of maintainability. Code should be readable and programming decisions
explicit.

\- Easy to produce efficient code with. Inefficient constructs should be
easily identifiable.

\- No unnecessary complexity. Semantic structure should be consistent and
minimize the number of concepts.

\- Easy to implement the language specification. All features should be easy
to understand.

\- Machine independence. The language shall not be bound to any hardware or OS
details.

\- Complete definition. All parts of the language shall be fully and
unambiguously defined.

The exception is possibly the last one, complete definition. But that's
debatable as Rust does have a reference, but no one wants to go as far as
calling it a spec. (Also, I know some people think Rust has some "unnecessary"
complexity, personally I do not). The article goes on to compare to C, but
doesn't mention some of the new features of C++ that make it safer, or Rust
which is approximately as safe as Ada. So why would I pick Ada over other
modern languages?

btw, I really enjoyed Steve Klabnik's post on learning Ada:
[https://words.steveklabnik.com/learning-ada](https://words.steveklabnik.com/learning-ada)

~~~
falcolas
> \- Easy to implement the language specification. All features should be easy
> to understand.

I'm not sure this is at all applicable to rust. Perhaps once there are more
than the one, quickly mutating, implementation, we can say that it's easy to
implement the specification.

> \- Reliability. The language should aid the design and development of
> reliable programs.

This one feels a touch questionable, given the built-in "succeed or die"
macros that simplify code at the cost of reliability. There's also the unsafe
keyword, and how the impact of an unsafe operation in one section of code
(say, an imported library) can have negative impacts on unrelated sections of
code otherwise deemed safe.

> \- No unnecessary complexity. Semantic structure should be consistent and
> minimize the number of concepts.

Borrow checker aside, there is a ton of complexity in Rust (traits, multiple
flavors of arrays, multiple flavors of boxed values).

> \- Machine independence. The language shall not be bound to any hardware or
> OS details.

Rust relies on the LLVM project to provide this, and relies on the
contributors to the LLVM project to target different architectures.

~~~
vardump
> ...unsafe operation in one section of code (say, an imported library) can
> have negative impacts on unrelated sections of code otherwise deemed safe.

Yeah. Just like _every_ line in C/C++ code _can_ have negative impacts on
unrelated sections.

While true, that talking point feels somewhat unfair or even in bad faith.
Rust requires the "unsafe" escape hatch to be able to be a low-level systems
language. Or, gasp, to call libraries using the C ABI.

I write a lot of low level code in C/C++ (firmware, kernel drivers, low
level services) and I think a C-ABI-compatible "runtimeless" safe language,
like Rust, would dramatically improve security and safety, and even
ultimately provide faster time-to-market.

~~~
falcolas
This comparison is against Ada's language criteria, not C/C++.

You're well above 100% right with your points, if we weren't looking at the
language criteria put forth by Ada.

EDIT: It's worth noting that Ada is also a systems and embedded programming
language.

~~~
Jweb_Guru
Ada has unsafe code as well and messing it up causes exactly the same thing as
it does in Rust or C++, so I'm not sure what your point is.

------
auvi
I first read about Ada in JARGON:

    
    
       Ada:: n. A {{Pascal}}-descended language that has been made
       mandatory for Department of Defense software projects by the
       Pentagon.  Hackers are nearly unanimous in observing that,
       technically, it is precisely what one might expect given that kind
       of endorsement by fiat; designed by committee, crockish, difficult
       to use, and overall a disastrous, multi-billion-dollar boondoggle
       (one common description is "The PL/1 of the 1980s"; hackers find
       the exception handling and inter-process communication features
       particularly hilarious).  Ada Lovelace (the daughter of Lord Byron
       who became the world's first programmer while cooperating with
       Babbage on the design of his mechanical computing engines in the
       mid-1800s) would almost certainly blanch at the use her name has
       been latterly put to; the kindest thing that has been said about it
       is that there is probably a good small language screaming to get
       out from inside its vast, {elephantine} bulk.
    

Maybe the situation is different now?

~~~
msbarnett
The situation hasn't changed, ESR just flat-out never had a good read on Ada.
You really cannot take the Jargon file seriously, its author's ignorances are
glaring and numerous.

Ada's "elphantine bulk" is smaller than many modern languages, and even some
which the Jargon File praises, like Common Lisp. Its inter-process
communication and exception handling functionality were relatively forward-
thinking at the time and would look familiar to your average Java programmer.

~~~
rodgerd
The kindest thing one can say about esr is that he is not as clever as he
thinks he is.

~~~
jacques_chester
Which is also the least kind thing.

------
rurban
I'm missing SPARK, the restricted and formally defined language based on Ada,
extending Ada with contracts and aspects, and disallowing the insecure parts.
[https://en.wikipedia.org/wiki/SPARK_(programming_language)](https://en.wikipedia.org/wiki/SPARK_\(programming_language\))
This is the real highlight of Ada.
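
As a rough sketch of what that looks like (package and names are made up, and
the spec and body would normally live in separate files): with SPARK_Mode on,
the GNATprove tool can attempt to discharge the contract statically, instead
of relying on a run-time check.

    package Saturating with SPARK_Mode is

       Max : constant := 1000;

       function Bounded_Add (A, B : Natural) return Natural
         with Pre  => A <= Max and B <= Max,
              Post => Bounded_Add'Result <= 2 * Max;

    end Saturating;

    package body Saturating with SPARK_Mode is

       function Bounded_Add (A, B : Natural) return Natural is
       begin
          return A + B;   --  provably cannot overflow given the precondition
       end Bounded_Add;

    end Saturating;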

~~~
johnisgood
Exactly. See this post:
[https://news.ycombinator.com/item?id=20934511](https://news.ycombinator.com/item?id=20934511)

------
kahlonel
As a side note for people like me who have no option other than C for their
safety critical projects, remember there's a MISRA-C standard. If that feels
too much, [1] is a good starting point.

[1] [http://pixelscommander.com/wp-content/uploads/2014/12/P10.pdf](http://pixelscommander.com/wp-content/uploads/2014/12/P10.pdf)

~~~
snazz
Thank you for the link! I think that’s really helpful and those are good
explanations for each rule.

Does anyone know of a guide for writing C code where you allocate a specific
block of memory at the start and then use that for all data from then on?
Avoiding malloc is one of the suggestions there and seems like a smart
strategy.

~~~
baot
I've written code exactly like that and realised that it just kicks the can
down the road: you're still allocating memory from a fixed block, it's just
now the malloc is written by you, with more bugs, and you still need to deal
with variable allocation delays and running out of memory. The real issue is
if you don't know at compile-time what resources you'll need at run-time. For
some problems that can be an intractable problem for making a safe product,
including, to my knowledge, everything that runs on a rich OS.

~~~
carlmr
> The real issue is if you don't know at compile-time what resources you'll
> need at run-time. For some problems that can be an intractable problem for
> making a safe product, including, to my knowledge, everything that runs on a
> rich OS.

You're right, but you should still constrain your use of malloc to these few
instances instead of wildly using it everywhere.

------
mickduprez
What about Forth?

I quite like the way it eschews too much abstraction and can go from very low
level to high level with a lot less code (and fewer bugs).

Given the current interest in IoT and the proliferation of cheap micro
controllers, it might be due for a resurgence(?).

~~~
kragen
I like Forth but I always seem to make my programs too clever in it. If I
write them in C or assembly instead it takes longer but has fewer bugs.
Probably people with better self-discipline than I have can do wonders in
Forth.

Current micros are a bit big for Forth, though. You can run C or C++ on them.
If you have only 4K or 8K of code space and 128–512 bytes of RAM the case for
C over Forth gets a lot weaker. Maybe we'll see a resurgence of Forth when we
get nanobots.

------
choonway
I used Ada (GNAT) about 15 years ago. One of the things that struck me was the
accuracy of the compiler error messages and the sophistication of the
debugger, when compared to C/C++ in Visual Studio / GCC.

------
verisimilitudes
This article was a decent introduction, I suppose. I've been learning Ada for
about a year now, and I quite like it. It's larger than the previously largest
language I know, Common Lisp, but I like the focus on reliability and whatnot;
it's a language that has many facilities programs need, but you don't need to
use otherwise; it's also segmented well, unlike Common Lisp, but there are
advantages to both approaches.

There's a flaw in the article:

 _Ada doesn't define standard types like int or float_...

It does, technically: Integer and Float. The rest of the article is fine.

Lastly, it's amusing to see the Rust advocates cry out due to this. As I like
to write, Ada is used in critical systems such as ballistic missiles and
trains, whereas Rust is used in Firefox.

~~~
dpc_pw
> As I like to write, Ada is used in critical systems such as ballistic
> missiles and trains, whereas Rust is used in Firefox.

You mean to say that Ada is good for much simpler, easier, smaller-scope
projects?

~~~
krapht
Only a person who has never worked on a safety-critical application would say
something ignorant like that.

(I am neither a Rust nor an Ada user.)

~~~
dpc_pw
I have actually worked on safety-critical applications. ISO 26262, MISRA C,
ASIL, etc. :D . I even own and drive a car that has my software in it.

I am obviously teasing the OP, but there's IMO a lot of truth to it. Safety-
critical stuff had damn well better be simpler, smaller and fundamentally
easier, or it will fail and kill people. That's why stuff like
[https://groups.google.com/forum/message/raw?msg=comp.lang.ad...](https://groups.google.com/forum/message/raw?msg=comp.lang.ada/E9bNCvDQ12k/1tezW24ZxdAJ)
makes sense. In a lot of aspects, mission-critical software is much, much
"easier": easier to stretch the budgets, easier to not have competition,
easier to justify why something can't be done or why it has to take another 3
years of work.

A modern fully functional web browser, meanwhile, is a huge pile of
complexity, technologies and requirements, and yet it still needs to work
under much tighter social, economic and performance pressures. Any bug being
widely exploited could lead to people literally dying, or at least having
their lives ruined. It's just not as spectacular as "missiles".

So I think the OP (and probably you as well :P) is being arrogant by
dismissing "firefox", that's all.

~~~
munk-a
I think well designed safety critical bare-metal stuff and well designed
browsers have a difference in complexity placement - within bare-metal work
you need to be aware of and account for the full domain range of values any
systems you talk to can produce (including error output) - while browsers[1]
tend to be more emergently complex, with relatively secure and safe input but
a lot of complexity in how those inputs are handled.

One system is taking a complex world and trying to reduce it to a safe and
simple system - the other is taking something safe (some rando user inputs and
a pile of HTML) and turning it into something complex - a rendered website.

Granted, I think value domain identification is one of the only ways for
complex software to at all survive growing into a legacy system - so maybe the
core difference is the nice stress free life of a system where the worst
outcome of a critical failure is someone not being able to post on facebook ;P

1\. Let's ignore Firefox and JS and just assume we're in the 90's downloading
HTML sites...

------
octorian
During weekend installfest/hackfest events at a LUG that I used to attend,
there was always this one guy who would sit in front of a pair of laptops and
code in Ada. One thing I remember about it is just how much it reminded me of
Pascal.

~~~
MisterTea
Ada was built on top of Pascal, so they are very similar. Another Pascal-like
language is ST or Structured Text, an IEC industrial automation language
specification for PLC's.

I remember fooling with Ada years ago and I really liked the concurrency and
IPC mechanisms, which were quite unique in the '00s. They were a little more
friendly to use vs. pthreads. Now every new language is baking in concurrency.
Since I wrote mostly C code, I wound up making a franken-program which was
statically linked Ada/C: I used Ada for its concurrency, which called my
linked-in C code. It was a PLC emulator I scrapped.
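
For those who haven't seen it, a minimal sketch of the tasking and rendezvous
mechanism being described (the names are made up):

    with Ada.Text_IO;

    procedure Task_Demo is
       --  A single worker task with one entry; each call to Print is a
       --  synchronised rendezvous between the caller and the task.
       task Printer is
          entry Print (Msg : String);
       end Printer;

       task body Printer is
       begin
          loop
             select
                accept Print (Msg : String) do
                   Ada.Text_IO.Put_Line (Msg);
                end Print;
             or
                terminate;   --  go away once the main procedure is done
             end select;
          end loop;
       end Printer;
    begin
       Printer.Print ("hello from the main procedure");
       Printer.Print ("delivered via a rendezvous");
    end Task_Demo;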

------
JulianMorrison
When I played with Ada, my feeling was that it allowed a lot of things to
lean on the types. Protection is the one that gets a lot of fuss made about
it, but there's also a lot of labour saving. You can size something to the
length of
something else. You can get the max and min of a type (which are user defined,
not tied to byte sizes). You can get an array's bounds. Everything "reflows"
if you edit the type definitions. It allows you to be very detailed about what
you want, and then let other parts of the program lean on that detail. Types
become more than a way to catch mistakes, they become a way to describe your
intentions.
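
A minimal sketch (the types are made up) of that style: bounds, extremes and
loop ranges all come from the type and array definitions, so editing one
definition "reflows" everything that leans on it.

    with Ada.Text_IO;

    procedure Attributes_Demo is
       type Sensor_Id is range 1 .. 16;
       type Reading   is digits 6;
       type Readings  is array (Sensor_Id) of Reading;

       Data : Readings := (others => 0.0);
    begin
       Ada.Text_IO.Put_Line ("Min id :" & Sensor_Id'Image (Sensor_Id'First));
       Ada.Text_IO.Put_Line ("Max id :" & Sensor_Id'Image (Sensor_Id'Last));
       Ada.Text_IO.Put_Line ("Length :" & Integer'Image (Data'Length));

       for I in Data'Range loop   --  bounds come from the array, not literals
          Data (I) := Reading (I);
       end loop;
    end Attributes_Demo;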

------
Mikhail_Edoshin
I once flipped through the Ada spec (purely voluntarily) and in my opinion
it was a really well designed language. I have generally come to believe
that a language
should offer fewer fancy programming constructs, but advance in specifications
and definitions, and Ada looked like a solid step in that direction (as
compared to C, for example). As was recently pointed out in another
discussion here on HN, the idea of software development is to document
knowledge, and Ada is pretty good at that. The modern trend is very different,
unfortunately.

------
writepub
No, you shouldn't be writing in Ada. Embedded software running all kinds of
sensitive workloads, from pacemakers to routers/switches, uses C for its
superior tooling, universal support across compilers and chip vendors, massive
user base and decades long best practices (like NASA's C coding guidelines) to
mitigate some of C's potential inadvertent misuses.

Not to mention the bevy of advanced niche features, like SIMD support, and
custom GCC extensions that lend superpowers to C coders that other languages
typically lack.

Sure, Ada may have its benefits for certain narrow use cases, but for non-
hobby projects spanning tens/hundreds of engineers, considering real-world
vendor support for Ada and the limited talent pool, C is the obvious choice.

~~~
irundebian
\- You don't need as much tooling and as many coding guidelines if safety
aspects which prevent common C errors are built into the language design.
Beyond that, SPARK, a subset of Ada, makes it easier to prove program
correctness. I'm not sure it's similarly easy for C.

------
Symmetry
Different types for different units is a great idea. Nim borrowed it from Ada
and I assume that there are other programming languages that did too.
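
In Ada this falls out of derived types. A minimal sketch (the units are made
up): each "is new Float" creates a distinct type, so accidentally mixing
units is a compile-time error unless you write an explicit conversion.

    with Ada.Text_IO;

    procedure Units_Demo is
       type Metres  is new Float;
       type Seconds is new Float;

       Distance : constant Metres  := 100.0;
       Time     : constant Seconds := 9.58;

       --  Speed : constant Float := Distance / Time;  --  rejected: type mismatch
       Speed : constant Float := Float (Distance) / Float (Time);
    begin
       Ada.Text_IO.Put_Line ("Speed:" & Float'Image (Speed));
    end Units_Demo;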

------
6thaccount2
I really like the language, but the only compiler I'd trust is the AdaCore
one, and I imagine it is pretty pricey for commercial work.

~~~
pronoiac
Why not FSF’s GNAT? It’s been a while since I used it, but I remember
preferring its error messages to some commercial offering we had around.

~~~
mjw1007
AdaCore's compiler is GNAT.

They sell a pro version with bells and whistles but AFAIK the actual compiler
is the same.

~~~
6thaccount2
You can't use a lot of the libraries and still get the runtime exception, so
that your code doesn't have to be released as GPL, I think.

------
Catsandkites
What is the employment market like for Ada?

~~~
imglorp
There's some non-DoD avionics floating around that are Ada, I guess because of
requirements and regulatory overlaps.

[http://archive.adaic.com/projects/atwork/boeing.html](http://archive.adaic.com/projects/atwork/boeing.html)

~~~
cpeterso
I've read that SpaceX uses C++. I'm surprised safety critical systems like
SpaceX and Tesla don't use Ada. It's a mature safe language used in their
industry, yet they choose C++.

From SpaceX's 2013 Reddit AMA:
[https://www.reddit.com/r/IAmA/comments/1853ap/we_are_spacex_...](https://www.reddit.com/r/IAmA/comments/1853ap/we_are_spacex_software_engineers_we_launch/)

~~~
fjcp
I don't have any experience in this field, but researching the subject some
time ago I found that C/C++ is not unusual either. On the safety matter,
when used in mission-critical systems they usually follow a guideline or
framework to ensure the reliability of the software, like MISRA C[0] for
example. Another important point is that development is really slow compared
to other segments of software engineering and often takes several years,
with lots of tests and rigorous compliance with specifications.

It's an interesting field, but it feels hard to get into.

[0]
[https://en.wikipedia.org/wiki/MISRA_C](https://en.wikipedia.org/wiki/MISRA_C)

------
non-entity
I toyed with Ada some years ago. It was interesting, and I was fascinated by
the type system, but I never had any real use for it.

------
mikorym
I believe the Eurofighter Typhoon was and still is programmed in Ada.

~~~
eb0la
I believe most NATO weapon systems are programmed in Ada. It just makes it
_natural_ to integrate anything with the avionics using the available
interfaces, especially the friend-or-foe systems.

------
leshow
So one should learn Ada because it doesn't coerce integers? Plenty of modern
languages won't do that.

------
altoidaltoid
Flashbacks to Prof. Feldman's classes

~~~
kshannon
Or Dr. McCormick's classes. CS 101 with Ada. Students 2 years behind me got to
learn Java or Python.

He wrote 3 books on Ada.
[http://www.cs.uni.edu/~mccormic/](http://www.cs.uni.edu/~mccormic/)

