Why Ada Is the Language You Want to Be Programming Your Systems With (hackaday.com)
301 points by fogus 10 days ago | 324 comments





This article promotes a popular misconception of the programming situation for defense projects in the 1970s. There may have been "hundreds of specialized programming languages" in existence that could be used, but just a handful actually predominated. Most aeronautical projects were done in JOVIAL.

I've talked about this history with engineers from the 1960s-70s. They did not regard the introduction of Ada as a good thing. The JOVIAL language had been developed by engineers and modified through experience over several years to become something they were quite happy with. The impetus that led to Ada came from management and implementation was done by academics.


Many of the 'programmers' of those days lacked the sophistication and education to appreciate Ada.

Just the mojo required to print an integer (having to instantiate a generic) was considered complicated.

Ada was also very slow to compile, on the order of 10 times slower than Jovial (in my experience).

Avionics systems back then were tiny. Jovial usually ran on a 16 bit processor.

Ada enabled a certain maturity and correctness to the avionics domain.


> Many of the 'programmers' of those days lacked the sophistication and education to appreciate Ada.

That certainly sounds like academic self-righteousness.


> Just the mojo required to print an integer (having to instantiate a generic) was considered complicated

That sounds complicated to me, in 2019 with 25 years of C++ experience. (And of course C++ error messages sprouting mysterious stdlib templates also seem complicated to me.)


> That sounds complicated to me, in 2019 with 25 years of C++ experience.

std::cout << X;

is not a generic, but on the project I just tested, if I substitute X with a type that does not implement an appropriate operator<<, I get ~1000 suggestions in the overload set, many of which are generic operator<< that are templated on a `typename OStream`.

I personally find having to do overload resolution first on a 1000 function overload set containing generic and not generic functions, and then often having to actually instantiate a generic (doing template argument deduction, template instantiation, substitution-failure is not an error so the next overload might be picked, etc. etc. etc.) infinitely more complicated than "just" instantiating a generic.


Indeed, type classes as found in languages like Haskell and Rust were first designed as a way to make overloading (aka "ad-hoc polymorphism") work more sensibly, and in a way that plays nice with generics/templates.

It isn't more complicated than printf, which is also close to being a generic.

    PutLine("The number is " & num);

    printf("The number is %zu\n", num);

    package Int_IO is new Ada.Text_IO.Integer_IO (Num => Integer);

and then

    Int_IO.Put (num);


    with Ada.Text_IO; use Ada.Text_IO;
    with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;

    ...

    Put (num);

As far as I can tell, Ada has had Integer_Text_IO that does that all for you, since the early days. [0] (Note that the reference is the first "public" release of Ada outside of the DOD).

[0] https://www.adahome.com/LRM/83/RM/rm83html/lrm-14-03.html#14... (Ada 83 Manual)


That's an awful lot of boilerplate for a simple integer hello world.

Languages with these modules and import statements should be burned. (Starting with Python.)

For Pete's sake, couldn't they at least combine 'with' and 'use' into one keyword so you don't have to repeat yourself?

Or can't 'use' be made to forgive a missing 'with' and just get on with the show?

Do defense contracts pay by the byte or what?


'use' allows you to integrate namespaces, that's all. You can have several of those within the same file, and not every 'with' statement exposes only a single namespace.

No different from std in C++.
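
For anyone who hasn't written Ada, the difference in practice looks roughly like this (a minimal sketch, two separate compilation units):

    with Ada.Text_IO;
    procedure Hello_Qualified is
    begin
       Ada.Text_IO.Put_Line ("hello");   --  'with' alone: fully qualified names
    end Hello_Qualified;

    with Ada.Text_IO; use Ada.Text_IO;   --  'use' merely opens the namespace
    procedure Hello_Unqualified is
    begin
       Put_Line ("hello");               --  no prefix needed
    end Hello_Unqualified;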


C++ is awful also; it is not a good choice of benchmark for brevity.

Though a single using-declaration will get you the identifier in C++, the right header file has to be #included first. At least it's possible for a single include to bring in lots of material:

  #include <biglib>  // just one of these

  using bigfoo::bar;   // one per identifier
  using bigwidget::zark;
But C++ has lots of irritating repetition like:

  class foo {
    foo();
    ~foo();
  };

  foo::foo()
  {
  }

  foo::~foo()
  {
  }
Seven repetitions of foo just to set up an empty class skeleton with non-inlined constructor and destructor.

(The C++ people are self-conscious about verbosity, and trying to fix the problem; unfortunately, in wrongheaded ways that are making the language worse.)


I'm not sure brevity is a benchmark for a good language either.

So far as I can tell, you dislike how verbose the Ada "hello world" is, because it supports namespaces, and uses them. That wouldn't be something I'd agree with, especially not when dealing with any program of significant size.


> I'm not sure brevity is a benchmark for a good language either.

It's the only benchmark that we have any kind of analysis for (https://www.researchgate.net/publication/3188171_The_Confoun...), which correlates with fewer defects and less time per feature. Less code is better. Until I see evidence to the contrary, I think it's a valid criticism. YMMV


> When we controlled for class size all of the effects disappeared, providing compelling evidence for a confounding effect of class size

Holy crap; are they really saying that all the various correlations seen between numerous software metrics and fault proneness are actually the effect of nothing but raw size?

That's so intuitive. Because in our daily experience, almost all reasonable approaches to various aspects of program organization actually work fine in programs of a size that one programmer can comfortably fully understand and maintain. Even for any technique reviled as "bad" and discouraged in coding standards, there are counterexamples of it working fine. Those counterexamples are in the context of small programs, so we are told, "just wait, that approach will burn you in a big program". But it looks like what will burn you is just the size of that big program, no matter what techniques you wield, whether blessed or not.

That little paper you pointed to there (thank you) looks pretty darn important.


I agree with less code, but the tokens themselves needing to be shorter, not so much - so long as they remain easily readable.

    class foo {
      foo() = default;
      ~foo() = default;
    };
Is actually the idiomatic version of that C++ class, far less repetitive.

The class I wrote has a non-default constructor and destructor.

The idiomatic way to write your class is just:

   class foo { };
allowing the compiler to supply the default constructor and destructor.

The purpose of the C++11 default mechanism is to request a default implementation of a special function in situations when it would be suppressed. For instance

  class foo {
  public:
    foo(int x); // this will suppress the default constructor
    foo() = default;  // this requests it back
  };

printf is not C++

Ada is the one that was considered complicated. Printing an integer in the way I demonstrated was, at the time, considered complicated and confusing.

That it now seems as easy as printf demonstrates fairly well how the thinking has changed since then.



I can sympathize with resistance to Ada in a sense: Big "enterprise-y" languages can feel unwieldy and cumbersome if you're used to bit-twiddling and struct-packing in something like C.

But as the recent Boeing debacle has shown, the avionics domain is in need of greater sophistication and reliability. What it doesn't need is more hackery, as satisfying as that may be for the hardware hackers.

Though Ada isn't a massive household name, I think its designers were vindicated by history.


The software in the case of the 737 max performed exactly according to the spec. The problem is that the spec was buggy. The language can't fix a buggy spec.

Note that the bug which caused the Ariane V disaster was in code written in Ada, and the failure mode was down to the language. If the Ariane V code was written in C and the value simply overflowed, nothing negative would have happened. (The value would be hilariously wrong, but that wouldn't have mattered because the code wasn't necessary during flight. It was an oversight that it was running at all.) However, this is Ada, so it caused an integer overflow exception. Which was uncaught. Which caused the entire system to nope out. Which caused the rocket to blow up.

So there's that to consider.


> "If the Ariane V code was written in C and the value simply overflowed, nothing negative would have happened"

So... what you're saying is that the language not recognising that an exceptional condition has occurred is a better outcome in mission critical applications? "wasn't necessary", "It was an oversight", "Which was uncaught"... With these phrases you've already pointed out that the developers had improperly implemented the application.


There is something to be said for not just halting on an error in production systems. It's the same reason assertions are usually turned off in production builds.

Yes, ideally the programmers should have handled the exception, but once you're in production and the system is running, should you really just halt the whole program due to an exception the programmer didn't foresee?


For mission-critical software like avionics or a nuclear power plant? Absolutely! At design time you better figure out what all the error cases are, and blindly steamrolling over them when they should be triggered makes a bad problem worse (e.g. look at Cloudbleed where unchecked out-of-bounds reads starts disclosing other people's banking information).

Assertions are also usually turned off for performance reasons, not to allow the program to plow through invariants being violated.


Uncaught exception and die.

For a banking application, yes, die, hard, as loudly as possible. Downtime is worth it. Do not continue, do nothing until it is understood and fixed.

When you're flying. Hmmm. Not so much. Dying is a really bad idea. Ok you should have it tested thoroughly so it isn't going to happen but if it does and "anything could happen" well "anything" is better than killing all the passengers, crew and anyone you might crash on, right?

Literally anything is better than that.

Ideally, you log it, you refuse to take off again, the whole fleet is grounded, etc., but deciding to fall out of the sky because something unexpected happened is probably not the right response.

Is graceful shutdown possible? No? Ok, so take your chances and log noisily is a less bad option.


The default out-of-the-box runtime can only handle one general scenario. The runtime designers chose to print the error message and terminate, and this seems to be the only sound option for default behavior. It's up to the developer to replace this default handler when the scenario differs that much.
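
In Ada, for instance, replacing it can be as simple as a catch-all handler at the bottom of the main procedure. A minimal sketch (Run_Control_Loop and the fallback are stand-ins, not a real system):

    with Ada.Text_IO;    use Ada.Text_IO;
    with Ada.Exceptions; use Ada.Exceptions;

    procedure Main is
       procedure Run_Control_Loop is   --  stand-in for the real application
       begin
          raise Constraint_Error;      --  simulate an unforeseen failure
       end Run_Control_Loop;
    begin
       Run_Control_Loop;
    exception
       when E : others =>
          --  Replaces the default "print and terminate" behaviour:
          --  log the failure, then degrade instead of dying.
          Put_Line ("unhandled: " & Exception_Information (E));
          --  ... switch to a fallback / safe mode here ...
    end Main;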

> this seems to be the only sound option for default behavior

I think that is the crux of this debate. In my background, yeah, you fail hard when anything unexpected happens. It's the most straightforward way to fail safe.

But my background maps better to the financial case; I've never worked on avionics or anything like that. I can see the point, though, that, in that kind of situation, failing hard doesn't fail safe at all.

It's conceivable to me that different problem domains require different default behavior. (And perhaps, by extension, different programming languages.)


That particular set of requirements is not unique to avionics. It’s pretty close in game development. A glitch is way better than a game which crashes in the middle of gameplay. Especially because many glitches only last a single frame i.e. barely visible.

> It's conceivable to me that different problem domains require different default behavior. (And perhaps, by extension, different programming languages)

Not sure it's about problem domains: e.g. 3D graphics is the main problem domain both in videogames people play for fun and in CT scanners which save lives.

Different projects indeed have different requirements.

Every time you see a programming-related holy war on the internets (exceptions/status codes, static/dynamic typing, unit testing or not, FP/OOP, etc.), you can bet the participants have different types of projects in their background, and these projects needed different tradeoffs. More often than not, what's good for a web startup ain't good for NASA or MS, and vice versa.


Whether die-hard or steamroll-thru is the better option depends more on what exactly goes wrong rather than what the application is. For a flight control computer, for example, if the ordered pitch suddenly becomes 3.402823466e+38 degrees it might be better to die hard and restart rather than try to execute the order.

The answer for this is sanity checking and some kind of layered "reflexes".

I design for mission critical things on a regular basis, and one of the error modes I must accommodate is random bit flipping by cosmic rays, emf, or other failures.

Sometimes, it is possible to push things to a "safe" failure state and reboot (which often takes only 100 ms or so).

Sometimes, though, the error must be caught and corrected to a last known good value or something like that. Everything critical is sanity checked: overall boundaries, rate of change, state-related boundaries, etc. Layers of "reflexes" are more robust than a single programmed behavior, because an error in one will be resisted by the others. So much the better when there are segregated systems to check each other.

Often, I'll have a "brilliant" system that performs to a very high standard and is complex and brittle. If the brilliant system fails, it just shuts down or reboots. Underneath the brilliant system is another layer or two: a "competent" system that is simpler and more robust, but less efficient and with soft failure modes, and a "brainstem" system that takes nothing for granted, even checking basic functions by bit flipping to negate stuck RAM bits or broken ALU logic, but only tries to do a passable job, reducing algorithmic complexity to its bare acceptable minimum.

Typically the system will generate a basic range of acceptable parameters at the lowest level (and take corrective action if needed), then refine the range at each subsequent level... rinse and repeat. That way, each lower level checks the ones above.

Or, you just fail downwards if errors are suspected. Either way.

Designing failure tolerant systems is not impossible, but it requires a different mindset.
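
As a toy illustration of the "overall boundaries plus rate of change" kind of check, in Ada (the envelope and step limit are invented numbers, not from any real system):

    subtype Pitch_Deg is Float range -90.0 .. 90.0;   --  hard physical envelope
    Max_Step : constant Float := 2.0;                 --  invented per-cycle rate limit

    function Sane (Previous : Pitch_Deg; Commanded : Float) return Pitch_Deg is
    begin
       --  Reject anything outside the envelope or changing too fast,
       --  falling back to the last known good value.
       if Commanded not in Pitch_Deg
         or else abs (Commanded - Previous) > Max_Step
       then
          return Previous;
       else
          return Commanded;
       end if;
    end Sane;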


I imagine (just thinking out loud randomly, I'm sure there are other issues to consider!) one way to mitigate this would be to have a second set of much simpler software (written in a different language, perhaps running on a different platform) whose only runtime job is to sit in the communications path from the primary software to the avionics hardware and monitor the outputs of the primary software, ensuring the values and/or rate-of-change in certain values are within some physics-based sanity limits before they hit the hardware. It could be responsible for rebooting the primary software and holding the last-known-good output values (for the consuming avionics) while the reboot happens (hopefully quickly!). Of course then who watches the watcher, and you've added more things in the critical path which can have their own failure modes...

"whose only runtime job is to sit in the communications path from the primary software to the avionics hardware and monitor the outputs of the primary software and ensure the values and/or rate-of-change in certain values are within some physics-based sanity limits before they hit the hardware"

Brings to my mind: Erlang


Was waiting for someone to mention Erlang.

Well, for an unmanned rocket, having it automatically self-destruct is likely the saner choice, as you don't want it to fall back somewhere inhabited, and there is usually little use for something in the wrong orbit.

This is a slightly more complicated topic (i.e. it's complicated by whether or not manual recovery is possible and by whether or not carrying on makes it worse), but in general, the worst thing you can do if you encounter an error in the software that supervises a process is to bail out and leave the process unsupervised. A chemical reaction, for example, is happy to carry on whether or not computers are watching.

There are plenty of situations when automated or manual recovery isn't possible and/or where carrying on with a potentially damaged system can make things worse. In practice, you solve this sort of problem via other design solutions, like redundancy. If you think that a critical system needs to keep running, but you also think that some errors will have to be handled by shutting it down, then you make it redundant.

Airplane FBW systems are a good example (caveat: I don't do airplanes, I do medical stuff -- I might be wrong about the practical details of this but I think it gets the point across). If the ELAC (elevator and aileron computer) runs into a condition it doesn't know how to handle, there's a good chance it will make things worse if you keep going. But you also don't want your mitigation to be "just halt the damn ELAC", you still want to have control over the elevator and the ailerons. That's why there are several ELACs.

More to the point: if something has to keep happening, no matter what, then you design the computing system and the firmware around it so that it keeps happening.

> At design time you better figure out what all the error cases are

Not being able to figure out what all the error cases are has been an unfortunate component of systems engineering for decades now. The Wright brothers would have probably been able to account for all the failure modes in their flight control systems, but today, you are likely to miss some of the failure modes of a CPU that executes nothing but a NOP and a jump back to that NOP.

With the exception of simple and special-purpose systems, built without any programmable logic whatsoever, it's unlikely that you'll be able to figure out what all the error cases are. (There's something to be said here about OISC and whatnot...)

That's not to say it's OK to build systems that blindly steamroll over errors -- just that you have to build them so that they can deal with errors that you have not foreseen at design time. You will run into that sort of error sooner or later; we are all fallible.

Edit: as for assertions, performance may be a factor, but that's not why you want to turn them off in production builds for embedded systems. (IMHO, this isn't the right approach for embedded systems at all, but I've seen it used.)

First of all, you turn them off because, presumably, they make your system deviate from the spec (i.e. the system ends up handling some cases differently in the production build vs. the debug build, and hopefully the one in the production build is the one you want).

Second, you turn them off because they can introduce undefined behaviour in your system. For example, if a peripheral gives you an incomplete or malformed packet in response to a command, or fails to answer altogether, you may want to abort with a stack trace in a development build. But what you really want to do IRL is probably to check and reset that peripheral, because for all you know it may be stuck and giving haphazard commands to an actuator.

IMHO, assertions are only a partial answer to the problem you really want to solve -- obtaining useful data (e.g. stack traces) in response to error conditions. You can generally log the useful data in addition to actually handling the error correctly. Development and production builds should differ as little as possible -- ideally not at all. Handling potentially critical errors in different ways certainly doesn't count as differing as little as possible.


In the Boeing case why wasn't there a button the pilot can hit that says "The computer is acting up, it's going to crash, turn it off and let me fly the plane"

Surely, the pilot should always be able to have the last say in where or what the plane is flying toward? Or are these planes now so complicated the pilots can't fly them without the computers?


I'm a bit out of my waters when it comes to aerospace, hopefully someone more familiar with the field can correct me if I'm wrong on any of these accounts. I knew I should have given examples from the medical field, but the parent post mentioned airplanes and nuclear plants so...

Airplanes that cannot be flown without computers definitely do exist. The F-117 is perhaps the most famous. Its shape makes it aerodynamically unstable on all three axes and it needs constant correction from the FBW system. Which has quadruple redundancy :). You can turn off the autopilot in these systems, obviously, so you get to say where it goes -- but without the FBW system to issue corrections, the plane crashes.

As for Boeing (or Airbus, who have this ELAC thing)... the main thing to understand here is that there is not a computer. There are several computers, each of them covering a particular set of modules (e.g. ELAC controls the ailerons and elevator, SEC controls spoilers and elevator). There's a more in-depth overview here: https://aviation.stackexchange.com/questions/47711/what-are-... . The autopilot is only one of them. The way they take over each other's functions is actually remarkably advanced, and leads to very interesting procedures for handling failures, see e.g. https://hursts.org.uk/airbus-nonnormal/html/ch05.html .

Now, on some airplanes, some actuators can only be controlled through these computers. They get the commands from the pilot and they issue the right signals that control the actuators. There's no way to bypass them. You can turn off the autopilot and the plane goes where you want but the actuators that control the flight control surfaces are still acted upon by computers.

I don't know if this is the case on Airbus specifically (like I said, I'm in a different field), but if it were, then simply turning those systems off in case of something unexpected is definitely not the right design solution.


There are multiple levels of auto-pilot to turn off, but the pilot can have pretty much full control.

No, the process manager should log the stack trace, restart the subsystem and try running it a few more times, then try an auxiliary system or just fly without the subsystem. It should not halt the whole system.

On a rocket launch?

On projects which require redundancy and resilience.

Ideally your type system should be aware of all the exceptions a particular function is able to throw so the compiler forces you to handle all of them before it compiles.

This is hardly possible. The whole idea of exceptions is that each function only deals with a subset. Whatever nobody handles ends up at a top-level handler that handles everything. The default handler normally terminates the program, but it's totally possible to write a custom top-level handler that does something else. E.g. normally "out of memory" is an exception that causes a program to terminate, but in old Adobe Photoshop it was a routine situation that simply prompted the user to free some memory (by dropping the Undo, for example).

I agree that the concept of the top-level handler and customization should be more visibly documented.


> This is hardly possible. The whole idea of exceptions is that each function only deals with a subset.

Why? It's totally possible to infer the most general type for all the functions in the program, hence to infer the type of the needed handler.

Any language with subtyping and powerful type inference can do this; here is a toy OCaml example with typed algebraic effects (you can think of them as exceptions):

    val read_file : path -[ Not_found ]-> string

    val process_content : string -[ OOM | Intern_err of int ]-> float

    let computation path =
      let content = read_file path in
      let result = process_content content in
      if result < 0
      then raise Bad_result
      else result
the type of computation would be inferred as

   val computation : path -[ Not_found | OOM | Intern_err of int | Bad_result ]-> float
The handler should catch the corresponding exceptions. This could be used with the result monad as well [1].

Even if you want to keep some exceptions unhandled, you can easily choose which ones, and track them carefully.

[1] https://keleshev.com/composable-error-handling-in-ocaml


It's possible to compute which exceptions can be raised, but it's impractical to handle each exception in each function (this is how I understood your comment, I guess this is not what you meant). E.g. nearly any call can technically raise an OutOfMemory error or an IntegerOverflow error, but most functions are not competent enough to handle that, all they can do is to clean up and raise this up the call stack.

But it is indeed possible to have a single top-level handler (or a carefully constructed hierarchy of handlers) that explicitly handle all exceptions that may arise. (And to ignore an exception is also a form of handling.) The handler already exists, but it's pretty simple: for any exception it prints the error and terminates the program. It's up to developer to override this to make the program more robust.


As a counterpoint, the Toyota ECU in those Camrys that occasionally accelerated uncontrollably had asserts and whatnot. When they fired, the system rebooted. From memory it took about a second to boot, so usually the driver didn't notice. Apparently it did that on a regular basis as it was overloaded, and sometimes it failed to meet deadlines.

The acceleration problem was caused by noise flipping bits in the RAM the operating system used to store state data. (The Toyota code was mostly immune to this because they duplicated their state by storing it in two different places, always compared the copies on read, and rebooted if they differed. But the OS was provided by the chip manufacturer, NEC, and was a black box to Toyota - and it wasn't as conservative.) On rare occasions a bit flip would take out the watchdog process and the cruise control just after it decided to accelerate, and occasionally, when that happened, everyone in the vehicle died.

Toyota blamed the deaths on all sorts of things for a while - the driver, the carpet, third party accelerator pedals. Which to my mind was fair enough. They, like everybody else, had no idea what was causing it, but they knew their monitor-and-reboot-if-out-of-spec kludge worked very well, so it was unlikely to be the ECU.


What they are saying is that every language runtime, even stricter ones, has its own safety implications and you have to consider them when writing the specs and the code.

Simply switching to a language with a stricter runtime may automagically take care of some simple cases, but no system is devoid of failure modes.

Overflows are classic sources of disastrous bugs (e.g. the infamous THERAC-25). Switching to a system that detects them and raises an exception is a step in the right direction, but you still have to handle the exception cases correctly. And in some cases (like the Ariane V), incorrectly handling an exceptional condition turns out to be worse than allowing it to happen.


C would recognize it, but not over-react. Log it and continue rather than going full abort/self-destruct, which is what Ada did.

>The software in the case of the 737 max performed exactly according to the spec. The problem is that the spec was buggy. The language can't fix a buggy spec.

That is true, but I've found that in stricter languages you have a bit of a slower ramp-up, and the time you save later (on bugs you don't have before production) could be used to take more looks at the spec and run better simulations.

>If the Ariane V code was written in C and the value simply overflowed, nothing negative would have happened.

In this particular case, yes. However, if the system had actually been needed for flight (which I would guess most software is), it might be better to reset and retry.

On average, if your language fails hard like Ada, you're also more likely to find these bugs in simulations and tests.

Rust might have solved the problem the way you'd like: by default, integer overflow panics only in debug builds and wraps in release builds.


> That is true, but I've found that in stricter languages you have a bit of a slower ramp-up, and the time you save later (on bugs you don't have before production) could be used to take more looks at the spec and run better simulations.

In many of these large companies in avionics (and not just avionics), the people writing the specs are not the same as the people writing the code. There is no trade-off between how much time it takes to write the code and how much time it takes to draft and review the specs.

Edit:

> On average, if your language fails hard like Ada, you're also more likely to find these bugs in simulations and tests.

That is definitely true, but it's important to keep two things in mind:

1. The possibility of a hard failure from your runtime is something that you need to be aware of at design time. As seen in the case that the parent comment mentioned, there are cases when a hard failure on a non-critical error is actually worse than allowing the error to occur.

2. More important, if it's hard failures that expose bugs during tests, the first thing you have to fix, even before you fix the code, is the test cases themselves. A hard failure during testing is an indication that the test cases you have don't cover a given situation and, more importantly, that your system can't even handle that situation.

There are always going to be error conditions you can't recover from, and if they're in critical systems, you work around that (e.g. through redundancy). But a runtime that gives you a hard failure is rarely useful by itself.


The issue with Ariane V was the lack of integrated testing. If they had coupled the testing of the software with increased acceleration levels generated by a simulated Ariane V instead of an Ariane IV, they would have caught the issue.

But let's say this was something unexpected - then probably the only way to mitigate this would be a backup system - but this is an unmanned system - and Apollo/Saturn/Shuttle/Soyuz levels of redundancy are not required.


Yeah, the requirements were woefully incomplete and the testing insufficient, and, may I add, so was the design: one unreliable sensor was wired to one processor which was given great authority over a system that could wind in one direction very fast, and manually reversing out of it was very slow or impossible when speeds got high (improbable when pitch directs you down).

How could anyone believe this system was safe? No testing or requirements needed.


> value simply overflowed, nothing negative would have happened.

Except overflow would most likely have caused a significantly more severe outcome than the uncaught exception.

And there is a restriction in GNAT (No_Exception_Propagation) which forces you to deal with any exception immediately, and should be used in any critical software imho.

https://docs.adacore.com/gnathie_ug-docs/html/gnathie_ug/gna...
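
Roughly, with that restriction in force an exception can only be handled in the same subprogram that raises it, so every failure has to be dealt with on the spot. A sketch of the resulting style (saturating on overflow is just an example policy, not a recommendation):

    pragma Restrictions (No_Exception_Propagation);   --  usually set in gnat.adc

    function Checked_Add (A, B : Integer) return Integer is
    begin
       return A + B;
    exception
       when Constraint_Error =>
          --  Overflow must be handled right here; under this restriction
          --  it cannot escape to a caller.
          return Integer'Last;
    end Checked_Add;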


Significantly more severe than the rocket blowing up?

You can and should check your specs like they're code. For instance, you can write the spec in TLA+, which lets you specify temporal properties (e.g. "can the stall recovery procedure take an unbounded amount of time?") and liveness properties (e.g. "can any non-majority of disagreeing sensors cause the wrong trim actuations?")

To test your invariants, you should sabotage the spec and check that invariants break.

Once your spec passes model checking (or perhaps theorem proving with TLAPS), you can codify it in e.g. Ada/SPARK contracts.

Once you have that, you've validated your spec, your contracts and your code. Bugs can only occur in your invariants and at the seams of your subsystems. This level of rigor should be standard by now for safety-critical systems.
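
To make that last step concrete, a hedged sketch of what such a contract might look like as an Ada 2012/SPARK aspect (the names and limits are invented, not taken from any real flight control spec):

    subtype Trim_Deg is Float range -5.0 .. 5.0;
    Step_Limit : constant Float := 0.25;   --  invented per-cycle authority limit

    function Next_Trim (Current, Commanded : Trim_Deg) return Trim_Deg
      with Post => abs (Next_Trim'Result - Current) <= Step_Limit;

gnatprove is then asked to show statically that the body respects the postcondition, instead of checking it at runtime.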


> this is Ada, so it caused an integer overflow exception

This is not a language issue, but a general engineering one. In any critical system that can cause an unexpected exception and failure, there should be sufficient redundancy, so that when the main system goes down, the failover system can take control.


The recent Boeing debacle was a systems engineering issue. AFAIU, there was a jammed sensor, and the systems corrected based on incorrect input from this sensor. Additionally, it is my understanding that some indicator that showed that these sensors disagreed with each other was not always present.

Regardless, this entire incident was precipitated by Boeing, and the carriers, not wanting to have to retrain pilots on the new systems.


Wasn't the ill-fated code itself written in Ada? That is essentially all that goes on Boeing aircraft. I don't think a lack of software engineering rigor is to blame for the design mistake.

Possibly; I don't work for Boeing or their subcontractor. But I was getting at more of the general sense that as aircraft become more and more computerized we need more rigour and standardization of the sort that gave rise to Ada, and less of the aerospace equivalent of hacking things together in Perl.

I would say we need more real rigor, of the kind that gave us qmail and seL4, and less fake rigor, of the kind that gave us Ada and inevitably proceeds from standardization.

Standardization means less autonomy, which generally means that the opinions of prestigious people are imposed on everyone else. This can be good for intellectual rigor at first if your social pecking order is based on having sound opinions, like St. Augustine, but if it's based on anything else, standardization is an intellectual disaster from the start. And of course standardization prevents intellectual progress, always and everywhere, which is how Galileo ended up under house arrest.


While I understand the instinct, there is a flip side. A while back Lutz Prechelt posed the phonecode problem, inviting solutions in multiple languages. While he didn't specify error handling, the Perl solutions were on average better at error handling than the compiled language solutions.

I think you are referring to [0]. Thanks a lot for sharing this, it's quite interesting!

[0]: http://www.inf.fu-berlin.de/inst/ag-se/teaching/V-EMPIR-2011...


The Boeing debacle is an example of software performing exactly to specification. The spec was the problem.

When Ada came out it was a large intimidating language. The saying was "to learn Ada, it's best to know Ada first."

I haven't touched it in a long time, but I remember some of the impressions it made on me when I first used it.

The type system: I think you could do some really fancy things like:

  speed := miles_traveled / elapsed;
You could use types to make speed type mph, miles_traveled be miles and elapsed be hours.

the thing is, I think you had to create some conversion stuff for the "/" so that it allowed two types to be divided and returned a third type.
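
Something along these lines, sketched from memory in modern Ada syntax (not the exact Ada 83 spelling):

    type Miles is new Float;
    type Hours is new Float;
    type MPH   is new Float;

    function "/" (Left : Miles; Right : Hours) return MPH is
    begin
       return MPH (Float (Left) / Float (Right));
    end "/";

    Speed : MPH := Miles'(120.0) / Hours'(2.0);   --  swapping the units is a compile-time error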

unchecked conversion: no matter what you did, you had to interface with hardware. So there was a lot of Unchecked_Conversion stuff to convert between basic machine types.

argument lists: You couldn't have a printf() or any other varargs-type stuff that I remember. This made interfacing with external libraries that needed that kind of thing a lot harder. (Maybe there are ways to do this.)

there were things I really did like

the package and package body stuff: You could implement a bunch of interfaces, and compile against the prototypes before the implementations were written.

the attributes, like foo'Size, and the System package let you write your code to be correct even if system types or machine types changed

A lot of Ada is really tedious, but once you got over that, working with an existing project was usually really nice.


I'm sure that's all true.

There is something to be said for a language that was written to a spec from the beginning, however. That is a luxury that few languages today can claim; most language specs were written after the first version(s) of the language were implemented, and were written to match the implementation, not the other way around.

Writing the specification first makes for a much cleaner language, though not always the most comfortable, if you put any weight in your own opinions.

[I am speaking generally from here onwards and I am not looking for specific examples to the contrary. I know full well that there are exceptions to the generalizations below.]

Lots and lots of engineers today would (and do) express complete comfort in their daily drivers of C and C++, while producing buffer overflow bugs and use-after-free bugs all the while. Maybe it isn't in the best interest of the software landscape to let developers choose the languages they use...

We have, as a community, let our feeling that software development is an art dictate how we react and deal with new information; we overwhelmingly reject it. What we're doing now is so much more comfortable, and we feel like we know what we're doing with [language x] finally! Why would I want to overturn all that comfort?

Software development is not an art, it is a science, and we've been ignoring the science part of that as we refuse to abandon our favorite language(s) for things that could easily be proven to be better, if we would just do the testing to determine performance numbers, ease of bug introduction, ease of maintenance, and so on.

We call ourselves "software engineers" and we don't measure anything beyond the most rudimentary measurements. We DEFINITELY do not actively look to overturn our comfort in favor of finding a better language to use. We don't try to improve anything the moment that might mean that we switch languages, no matter how significant the long term advantages could be. We don't even think about what the long term advantages could be; we're too busy telling ourselves that we are smart and that we can do it in whatever language we are comfortable with.


Software engineering is neither science nor art, it’s a craft; one where the raw materials are pure arbitrary logic and data, and therefore having much broader scope than crafts tied to physical materials, but a craft nonetheless.

Until software engineers are personably liable for their work like civil and mechanical engineers are and have to carry insurance because they will get sued for negligence if there are issues, it’s not professional engineering.

Software engineers are personally liable for their work. It's just that it's much more difficult to prove the various elements of negligence in the software realm, which makes litigation costly and uncertain. How negligent is an arithmetic overflow in a free software library that causes the loss of # of ads clicked for 10,000 Facebook users? Could the developer have foreseen the use case? What's the value of that loss?

An arithmetic bug in a flight safety system or financial trading platform? Your odds of being successfully sued just went up astronomically as it's much easier to show negligence and put a dollar figure to damages.

Professional licensing regimes are principally designed to prevent negligence by a priori preventing people from holding themselves out as capable of performing some task. In terms of legal liability all they really do is restrict the extent to which you can limit your liability contractually. That's important in business and in professional services (doctor, lawyer, etc), but for downstream users who suffer damages there was never a contract involved, anyhow.


Some of them are, just not in every country. Here are examples for Portugal and Germany.

https://www.ordemengenheiros.pt/pt/a-ordem/colegios-e-especi...

https://www.vdi.de/

Although you can get by without the admission exam, some kinds of state projects do require it for any engineer putting their name on the contract assignment.


Calling one language better than another is subjective and open to interpretation. Clean might not be as important as critical mass in adoption, letting people more efficiently find people to work on the software.

That depends. If you are looking for a language that produces more work for the CPU per line of source code, that is definitely not subjective. Programming languages are not hard to learn for anyone that already knows one or more.

Other important metrics no one thinks to measure:

* Quality of compiled assembly or bytecode (it is dead easy to write a shitty compiler that produces stupid assembly or bytecode)

* Runtime performance on tight loops (this is directly tied to cost when running on something like AWS Lambda, and code that performs well means you need to upgrade your hardware less often)

* Lines of code required to achieve a given goal. Less is obviously better, when you understand that more code = more complexity, and that more complexity = higher likelihood of bugs. Also, if there is enough boilerplate code to break these guidelines, then what is that code doing, except slowing development times and slowing execution times?

I can write a method two or more ways and know provably which is faster. No one does this! No one looks for hot spots and tries to optimize them. No one profiles anything if performance is deemed "good" or "not a problem."

There is a lot of CPU time to be saved, a lot of RAM usage and cache optimization that no one does because they believe that their work is above being challenged, even by themselves. This world is insane.


Languages are tools to codify behaviors. The fact that we spend so much time debating which tool is better instead of “building bridges”, is why I left this industry.

Just look at the web front end. Absolute cluster fuck.


Well, languages are indeed different and some are better for the task than others. And bridges do fall sometimes, so it's worth reflecting on why and how to prevent it.

As others have mentioned, software engineering is not a science, it's a craft strongly entangled with communication: not only requirements-to-product, but also product-to-the-next-developer.

As I've probably posted before, I was at a Dijkstra lecture in the early 80s where he put up one of his famous hand-drawn slides that said "Ada: The latest government boondoggle that will take 5 years and 5 million dollars to kill." [5 million dollars was a lot of money back then.] So yeah, even the academics didn't much care for Ada.

I worked with a guy who worked on a GE defense contract in the late 80s that demanded Ada as the design language. He worked there 2 years learning the language without writing any code before they laid the team off.

Ah, those famous people and their catchy phrases. They do a lot of damage sometimes.

What did Dijkstra like? Pascal?

ALGOL 60. https://en.wikipedia.org/wiki/Edsger_W._Dijkstra#Compiler_co...:

”Dijkstra was known to be a fan of ALGOL 60, and worked on the team that implemented the first compiler for that language. He was closely involved in the ALGOL 60 development, realisation and popularisation.”

Also: Haskell and Pascal over C++ and Java (https://www.cs.utexas.edu/users/EWD/transcriptions/OtherDocs...)


…for CS instruction.

I've never come across any indication that Dijkstra wrote actual code (in the sense of something executing on a computer) after 1970 or so. The algorithms in the EWD notes are all written in ALGOL style pseudocode.

Dijkstra was the CS equivalent of the Patent Law concept of a "Non-Practicing Entity", which gave him great liberty to troll all programming languages without exposing his own choices or designs to corresponding scrutiny.


This nails it. Dijkstra sort of hated computers; he always hand-drew overhead projector transparencies for his lectures. He's also famous for saying we should call our field "Computing Science" because "Computer science is no more about computers than astronomy is about telescopes."

> "Computer science is no more about computers than astronomy is about telescopes."

It's more accurate to say that CS is no more about computers than aerodynamics is about flying: Without flight, aerodynamics would barely exist as a distinct field, and wouldn't be of interest to anyone except people in a sub-field of fluid dynamics, and never intersecting with practical concerns such as materials science and the overall design of buildable objects. Similarly with CS: Without computers it would be a simple, obscure sub-field of mathematics, about where it was in the 1930s when Post and Turing and Gödel were active, and it wouldn't intersect with fields like electronics or, probably, have much to do with formal grammars.


Before C was even a thing, plenty of computer systems got developed in Algol dialects for systems programming, like ESPOL, NEWP, PL/I, PL/S, Algol RS,...

If I could upvote this twice, I would.

He was teaching at UT Austin and Pascal was the CS department's pedagogical language of choice back then, so I presume the answer was yes. Or at least I never heard him say he hated it.

My impression of Ada via undergraduate exposure at Embry-Riddle was that it was like an industrial strength Pascal. I never thought later on that I needed Ada, but then I didn't go on to work with avionics or anything like that. And I always thought that Pascal+extensions was a pretty good way to develop applications and was disappointed when the Macintosh moved towards C and C++. I learned to write GUI applications with THINK Pascal, before Java existed.

In the version of Ada we had, it was DOS based, but I found out that you could use the interrupt API, allowing VGA graphics.


> The impetus that led to Ada came from management and implementation was done by academics.

My impression back then was that academics were busy designing their perfect languages to do data transformation while ignoring real-world engineering needs, and that not everyone had an unlimited account on the school's big iron mainframe.


Seems like "the next big thing" has existed in the tech world from the start.

I remember using Ada 95 in university, after learning C++, Java and Python in other projects. It worked very well for concurrent processes, but it was so basic as a developer experience.

In C++ (98 I think) I needed to build dynamic length arrays; in Ada 95 I had to build dynamic length strings.


> in Ada 95 I had to build dynamic length strings.

Ada 95 did have dynamic strings; they were provided by the Ada.Strings.Unbounded package.
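
E.g. a minimal sketch:

    with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
    with Ada.Text_IO;           use Ada.Text_IO;

    procedure Demo is
       S : Unbounded_String := To_Unbounded_String ("Hello");
    begin
       Append (S, ", world");        --  grows as needed, no manual resizing
       Put_Line (To_String (S));
    end Demo;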


Interesting. To be honest only used the language for a project!

Ada exists because of the US Department of Defense. In particular, it formed a working group, headed by a military officer, which

>crafted the Steelman language requirements, a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications. Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals [1].

The success of the language resulting from this process was ensured by the DoD's strong desire for a standard programming language shared by all of its contractors.

[1] https://en.wikipedia.org/wiki/Ada_(programming_language)#His...


Speaking of JOVIAL: I found this great talk by Jules Schwartz on the development of JOVIAL. https://youtu.be/sXjCqa4W4iQ It's a great, quite humorous watch.

Welp, there's my rabbit hole for the week. I've never heard of JOVIAL before, and it looks like there's a lot of interesting stuff here. Thank you!

Around 2008/2009 I was briefly (about 6 months) employed to write Jovial and 286 assembly language for software that controlled satellite communication base stations.

Unfortunately I was laid off before I got to write very much of it, but it seemed like a neat (if cumbersome and old fashioned) language.

Information about it is pretty hard to find, and there's no open source or free compilers that I know of, but here are some links I had saved.

A 1980s style tutorial:

https://apps.dtic.mil/dtic/tr/fulltext/u2/a142780.pdf

The mil-spec:

http://everyspec.com/MIL-STD/MIL-STD-1500-1599/download.php?...

An explanation of how it came to be:

https://jovial.com/documents/p203-schwartz-jovial.pdf


Fortran with nice syntax and built-in bit operations.

Nice fixed point types too. A fixed point with everything to the right of the binary point, used to represent angles as semicircles, was useful. Table overlays are evil though. I'd rather have Rust than Ada for the functional lineage along with the performance, but the thin stdlib combined with the difficulty of getting a crates.io mirror certified clean of malicious code makes it problematic.
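
For comparison, the Ada spelling of that semicircle convention is an ordinary fixed point type, something like this (the delta here is just an example resolution):

    --  1.0 represents half a circle (180 degrees), BAM-style
    type Semicircles is delta 2.0 ** (-15) range -1.0 .. 1.0;

    function To_Degrees (A : Semicircles) return Float is
    begin
       return Float (A) * 180.0;
    end To_Degrees;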

Not to say Rust is ready for prime time for safety critical systems, but both crates.io and the standard lib are optional for Rust.

I think it is probably closer to safety critical than it is to the middle ground of safety related, where the SLOC hits millions, so you want Boost or a thick stdlib plus POSIX, but it's still controlled enough that you develop air-gapped and worry about the number of library contributors, trustedness, and mostly futile code inspections of entire libraries. Even for server backends the Rust team has expressed concern about the security implications of having such deeply nested dependency trees with so many authors.

While you're down there, you might as well check out Coral-66.

production JOVIAL code was like.... 20 lines of comments. 1 line of code. 20 lines of comments. 1 line of code.

This sounds perfectly on-brand with respect to how we do business in the government.

The government has an extreme aversion to encouraging competing approaches or internal competition - largely because of budget concerns around double-work - which drives this behavior.


I live near Sydney University ( not my alma-mater ). In a rather serendipitous turn, one day on my way home from the shops I was walking past one of the USYD buildings and there was a giant stack of library books that had been thrown out onto the sidewalk as rubbish to be collected by the council. It turns out they were old science faculty books that were no longer prescribed material. I started looking through them and found several books on safety-critical software development in Ada, several books on hardware design and several others on software development. I took them all home, at first thinking the Ada ones would just be some archaic novelty. When I got a chance to read them I was amazed. I was expecting something akin to COBOL from the language. Instead I found a language that was extremely well designed for its domain. I'd already done a bit of embedded and operating system development, and I could immediately see how Ada would benefit a developer in these areas. Even representation clauses alone are such a huge convenience for low-level programming. I haven't looked back since. I was actually in the process of preparing a bare-bones example of operating system development in Ada, if this post had been three days later I'd have been able to link the finished product here right now.
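
To give a flavour of the representation clauses mentioned above: they let you pin a record to an exact bit layout, which is exactly what memory-mapped hardware needs. A small sketch with a made-up status register:

    type Status_Register is record
       Ready : Boolean;
       Error : Boolean;
       Count : Natural range 0 .. 63;
    end record;

    for Status_Register use record
       Ready at 0 range 0 .. 0;
       Error at 0 range 1 .. 1;
       Count at 0 range 2 .. 7;
    end record;
    for Status_Register'Size use 8;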

> if this post had been three days later I'd have been able to link the finished product here right now.

Well, that was 5 days ago, which means you CAN post the link! (I'd like to see it, quite a lot.)


Here's my initial implementation: https://github.com/ajxs/cxos I'm going to continue to work on it for some time, but this is the minimal working example that I had enough time to properly document.

Don't worry, I haven't forgotten about these comments. It's taking me a little longer than expected due to a few other commitments. I'm just in the process of cleaning the code up and doing the documentation, so I'll have this up very soon.

I look forward to seeing your OS example.

As I posted in another comment, here's my initial implementation: https://github.com/ajxs/cxos Apologies for the double post. I just wanted to ensure that you would see the reply.

There is some overlap in the use cases between Ada-in-the-mainstream (which seems to be what the article is suggesting) and Rust. The existence of Rust will make it even harder for Ada to break out of its existing domains.

Both seem to have good ecosystems, but quite different. Ada has more high-assurance tooling and practices. Rust has more in terms of packaging and general-purpose high level libraries.

With Ada, it's pretty hard to shake the feeling that it's not as welcoming. Either you are building a spacecraft, or "what are you doing here?". Not that anyone actually says that, and in my limited experience playing with the language, everyone seemed nice. It's just a feeling that's hard to escape. I wish it weren't so, because I think Ada has a lot going for it and it could be good in more domains.

One kind of cool thing is that it's designed to get by without so many heap allocations, and it does some interesting things to allow/encourage more stack allocations. It also has more sophisticated runtime checks, like contracts, that are kind of between a unit test and a compile-time check.

Rust is great at managing complexity. I keep holding out hope that Ada can somehow give us some simplicity back while still making useful software.


I understand what you're getting at here, I think, but one of the ecosystem challenges we run into is that while Rust is abstractly a fantastic choice for the domain of systems development that would often be served by Ada (in fact we think it's a lot better in some respects, because by abusing the Rust type system and borrow-checker we can do statically, ahead of time, what Ada does dynamically at runtime), the Rust ecosystem isn't really focused on this class of problems.

There's significantly more attention provided to things like Wasm than there is to things like embedded use cases for example.

The two languages, ecosystems, and communities seem to have largely non-overlapping objectives despite having overlapping technical capabilities.


> The two languages, ecosystems, and communities seem to have largely non-overlapping objectives despite having overlapping technical capabilities.

I keep thinking servers should be written a bit more like embedded thingies. (Though I suspect the actual trend is in the other direction).

Both kinds of software are deeply interested in asynchrony and concurrency (though not necessarily parallelism). And both also tend to talk over networks to other gadgets as part of a complex system. And since my job often revolves around putting out fires when servers OOM, I'd also like to see a lot less malloc().

It would be nice if software could place hard upper limits on how many resources (including RAM) it takes to serve a request and then reject requests early. With real-world languages and frameworks, we can only do very primitive approximations to this. But a framework written from the bottom up with a malloc-is-evil mentality might be able to do it.


We have a crate internally that uses type-level programming to do statically tracked memory allocation (technically the method can be used for other kinds of resource management too), so that we can have compile-time guarantees that allocations will always succeed and we'll never allow memory to be allocated from inappropriate regions (e.g. mapped registers).

Layered on top of that we can also ensure that if at runtime your device memory configuration is sufficiently different from what your assumptions were at compile-time, then we don't allow the device to boot so that you don't end up in a dangerous resource exhaustion situation by accident.

We use this for no_std embedded shaped use cases, but you could also use this library and programming model to create a special heap in your normal programs which have statically tracked allocations to create some region of your program context that you want to have assurances like that.

We have 3-4 things kind of in that vein in our pipeline to open source, and I think that's number 3.


hmm... there are many libraries that are no-alloc. It seems quite feasible (well, that and custom allocators are a thing now).

> There's significantly more attention provided to things like Wasm than there is to things like embedded use cases for example.

Rust is a language developed by its users. The reason fewer things happen in the Rust embedded domain is that fewer companies pay people to work on embedded Rust than on Wasm Rust.


> more attention provided to things like Wasm than there is to things like embedded use cases for example.

Things might be better, but last time I checked Rust's compiler was a lot slower than C and the binaries were several times bigger. That last one is a big deal for embedded.


You can get Rust binaries just as small; the smallest known one was 145 bytes. Some defaults were recently adjusted to make them smaller by default too.

https://github.com/tormol/tiny-rust-executable


Nobody wants to play with their compiler to get a small binary. Languages are typically judged by their defaults. If rust can produce small binaries, then it must produce small binaries.

The point is to demonstrate what's inherently overhead and what isn't. Rust has effectively the exact same inherent overhead as C does; that's the point.

> Nobody wants to play with their compiler to get a small binary... If rust can produce small binaries, then it must produce small binaries.

Everything is a tradeoff. Smaller binaries may or may not perform better than larger ones. This is something you need to play with, just like in C. If one single setting made everything the best in all circumstances, it wouldn't be a setting. C compilers have these settings too.

> Languages are typically judged by their defaults.

As I mentioned, the defaults have also been adjusted to help out too.


> Languages are typically judged by their defaults. If rust can produce small binaries, then it must produce small binaries.

Code size isn't the only axis that can be used to measure the quality of a binary; there are many others, like run-time performance, portability, debuggability, etc.

No C compiler I know optimizes for size by default, in fact, I don't know of any C compiler that, by default, optimizes anything at all. Clang, GCC, MSVC, and all other mainstream compilers require users to enable optimizations (O1, O2, O3, Ofast, etc.) and none of these are code-size optimizations. Only relatively recently have compilers grown the ability to optimize for size, and you have to go way out of your way to do it (enable Oz, disable debug information -g0, enable LTO -flto, ...).

Optimizing for size over anything else by default is one of the worst trade-offs a compiler can make. It's a bad trade-off for most programs, and a bad trade-off for most compiler users.


If big binary works faster, I would prefer big binary any day. Nobody cares about disk size.

And for most compilers, changing whether you care about binary size or performance is a single switch away, not a series of enhancements you need to refer to a blog page to ensure you're doing it right.

Rust does have the same switches C compilers have, that do the same things.
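For reference, the usual knobs live in the release profile; this is a common starting point, not a guaranteed-optimal recipe:

    # Cargo.toml -- roughly the moral equivalent of -Os / -flto in a C build
    [profile.release]
    opt-level = "z"     # optimize for size rather than speed
    lto = true          # whole-program link-time optimization
    codegen-units = 1   # trade compile time for better optimization
    panic = "abort"     # drop the unwinding machinery from the binary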

>Nobody cares about disk size.

Embedded people care.


To be fair, “embedded people” do not use disks.

They do when their embedded device happens to be a phone, tablet, POS, medical monitoring device, ...

Depends on the system, embedded is a very large section of the market.

Another consideration: Rust’s compiler is also slower because it does a lot more and forces you to add error handling.

In terms of real impact on developers I never felt it was dramatic. The rust language support in editors is so good that you will catch most errors before compiling. So once everything is dealt with you usually compile and it just works.

Binary size is also one of these areas they are dealing with atm.


It's also slower because it hands LLVM a boat-load of IR and relies on LLVM to optimize it (instead of doing it itself).

I've heard this is mostly the answer. Which makes me assume that LLVM is a bad match for Rust's backend requirements.

So how come Ada, Delphi, C++ Builder, VC++, .NET Native, D compilers happen to be much faster?

I still cannot forgive Rust for not learning anything from Ada.

"Code is read more often than it is written". Reading Ada is so unambigous, and clear. If you mean if then, you have the keyword then, when you want to say procedure you say procedure and you can read that. Rust? fn, because is it fun, or functor, so fuck you.


You learn what 'fn' means once, and then you know it forever. It's a non-issue.

It is an issue forever, even after learning that fn means function.

Cd is rd m thn it is wrtn, it is esr to rd function thn to prse fn rclnt function and cont rdn.


Do you really want to type out 'procedure' thousands of times?

You type 'proc' and let the editor autocomplete that for you: this is a long solved issue.

Yes. Because the alternative is idiotic, hard to read BS like fn.

The problem is that "fn" is optimizing the WRONG things: not only is your code going to be read more than it is ever written, but this abbreviated form is actively hostile towards domain-level subject-matter experts that you may want to audit or verify your code. Now, instead of him jumping into code that reads like English (Ada), now he has to 'decode' Rust BEFORE AND DURING his evaluation of the code, distracting him from his purpose.


Yes, of course, even if my IDE doesn't autocomplete anyway. Code is read much more often than it is written.

Yes, I do want to read out what I type and possibly speak it to colleagues without sounding like I have a fn seizure.


Having a big ecosystem of libraries is not even close to a big concern when developing critical applications like what Ada seems to be used for.

Totally. For defense, I think a large third party ecosystem is pointless unless every square inch of it has been verified for compliance.

Do people think part of missile software includes “npm install wordpad”?


No, but I would expect it to contain "npm install launchpad" :-p

But it is a concern for bringing Ada into more domains, which seemed like the point of the article.

EDIT: on a second read, it's not clear whether the article is saying we should use Ada the next time we are building a rocket, or the next time we are building an OS.


There was a post here recently about an attempt to bring those concerns to Rust: https://ferrous-systems.com/blog/sealed-rust-the-pitch/#

Interesting. I am highly interested in convincing NASA to start using Rust for the embedded systems they use on ISS experiments and such. I would love to get more widespread adoption at the embedded system level.

We came to the same conclusion at Kubos, and moved from C to Rust for most of our flight segment/application code (embedded Linux on the satellite OBC).

https://www.kubos.com/


Awesome, do you think it’s a hard sell for old school government contractors? I am passionate about writing embedded systems with Rust, but I worry it will be too hard a sell to old school contractors with stacks already written in C, and C developers on staff.

I've had a good number of conversations actually with some NASA FSW coders, and in general they're open to these ideas. NASA has a project with architectural and structural goals pretty well-aligned with what we set out to do at Kubos: https://cfs.gsfc.nasa.gov/Features.html

The biggest challenge is to prove that it works. The space software community historically is very conservative and risk-averse, and mostly for good reason. As Kubos (and hopefully others) get more flight heritage with Rust on the flight segment, it'll be a much easier sell.

That's the biggest obstacle I think. Someone will have to break the seal and establish that "Rust in Space" actually succeeds consistently, then larger "trad space" organizations will follow.


> Ada code controls rockets like the Ariane 4 and 5, many satellites, and countless other systems where small glitches can have major consequences.

A somewhat amusing endorsement, given the infamous failure of the maiden launch of the Ariane 5 due to a series of interconnected software bugs - a reminder, perhaps, that the choice of implementation language does not automatically make for a reliable system.


A language doesn't make code automatically reliable, but it can certainly help.

Ada can do overflow checking, so choice of language can have an impact on reliability, but the software engineers deliberately disabled some of those checks to save CPU cycles on a computer that was running at/near capacity. Their reasoning was that it was impossible for the variables in question to overflow given the known flight characteristics... of an Ariane 4.


This is why I always document considerations like these with stuff like: “This error case can’t happen because foo” or even better, enforce these things by type system and let the compiler decide if it really can’t happen once you changed something in the other part of the code.

Ada could have enforced those at the type system level, it has bounded types, but those checks were disabled.

There's always someone who brings this up whenever Ada is mentioned, as if it serves as a damning refutation of Ada's reputation as a safe language. That bug was caused by a series of design failures if anything, rather than software bugs. From memory, they reused the inertial reference system from the Ariane 4 mission without testing its suitability for a different delivery vehicle.

The actual 'bug' was caused by an integer overflow occurring due to the velocity variable not being wide enough to handle the higher horizontal velocities of the newer rocket. This caused the rocket to go off course and require termination. The backup computers had the exact same flaw due to having the same programming.

I'm guessing they used Ada's modular integer types, which wrap around when they overflow. This is the same as C, and I'm guessing the same as Rust. Using either of those languages wouldn't have helped in the slightest here. It's not some software bug caused by Ada. It was a domain issue caused by a failure in management and testing.

> The actual 'bug' was caused by an integer overflow occurring due to the velocity variable not being wide enough to handle the higher horizontal velocities of the newer rocket. This caused the rocket to go off course and require termination.

This is not entirely correct. The code which overflowed was unnecessary for flight. People have this idea that the sensor sensed it was going 32745 m/s to the east one moment, and -32723 m/s the next, and then tried to compensate for the sudden gust of wind it must have encountered. This is incorrect. The routine in question was only supposed to run when it was on the ground, and served no purpose in flight. It was scheduled to be disabled 40 seconds after launch. (The flight failed after 37 seconds)

The problem is specifically the fact that the code was written in Ada. Ada does not fail the way other languages fail when you overflow. Instead of giving an incorrect value (which never would have impacted the flight) it throws an exception. This exception was uncaught. This uncaught exception brought down the entire flight control system, despite the fact that the failing code was not controlling flight.

Rust improves upon this situation by panicking in debug mode and wrapping in release mode, which is better than the Ada behavior in every possible way. Normal "unsafe" languages improve this by giving an incorrect value as the result, which in the case of Ariane V would have saved the flight.
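Concretely, the default behavior looks roughly like this, and all three flavors of arithmetic are available explicitly regardless of build mode:

    fn main() {
        let x: i16 = i16::MAX;

        // Debug builds: `x + 1` panics with "attempt to add with overflow".
        // Release builds (by default): `x + 1` wraps around to i16::MIN.

        // The behavior can also be requested explicitly, independent of build mode:
        assert_eq!(x.wrapping_add(1), i16::MIN);                 // silent two's-complement wrap
        assert_eq!(x.checked_add(1), None);                      // overflow reported as a value
        assert_eq!(x.overflowing_add(1), (i16::MIN, true));      // wrapped result plus a flag
    }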


You could have condensed your entire post to just "This exception was uncaught". Your last point is just outright ridiculous:

> "Rust improves upon this situation by panicking in debug mode and wrapping in release mode, which is better than the Ada behavior in every possible"

So... What you're trying to tell us is that Ada recognising an exceptional situation has occurred is bad, and that Rust just ignoring it is a good thing? Righto... An integer overflow occurring in this scenario is an exceptional condition. In this case it was one that the developers did not anticipate. This issue arose because they mistakenly thought it was not possible for this scenario to occur. The mistake here was not anticipating the exception and handling it in a meaningful way, not the raising of an exception. The fact that not raising an exception would have avoided the disaster in this case is entirely incidental. It was still an exceptional circumstance that the developers, testers, designers did not anticipate. The silent wrapping behaviour you describe would have bitten someone eventually, perhaps here, perhaps somewhere else. Possibly with enormous repercussions.

> "Normal "unsafe" languages improve this by giving an incorrect value as the result, which in the case of Ariane V would have saved the flight."

The normal "unsafe" languages you're referring to ( presumably C ) have a history of very poor error handling. Requiring an extremely high degree of conscientiousness when programming for mission-critical applications. Can you imagine actually programming for what you're describing here? Not only does the computer have to guard against outright failure, but now it has to infer the possibility of an error state from a stream of in-band data in real-time. Righto...

Also, just so you know: Signed integer overflow is UNDEFINED BEHAVIOUR in C.


Tainting the dataflow rather than the control flow, like a quiet NaN, would be a better solution in cases like these. It happens that the solution that was chosen was the worst choice possible, but in other cases it wouldn't have been.

Probably the fact that the engineers got the case analysis wrong suggests that the methods of reasoning they were using weren't very effective. Modern formal methods might or might not have helped; guaranteeing that a piece of code is statically free of exceptions is not a terribly advanced thing to do, but really you'd like to guarantee that the variable was actually big enough. Having less code to reason about certainly would have helped.

C’s choice is clearly unconscionable.
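A toy sketch of dataflow tainting, with an invented function standing in for the real conversion:

    // A quiet NaN taints the value instead of transferring control: code that
    // never consumes `bias` is unaffected, and code that does can notice.
    fn horizontal_bias(velocity_ms: f64) -> f64 {
        if !velocity_ms.is_finite() {
            return f64::NAN; // taint the dataflow, don't raise an exception
        }
        velocity_ms * 0.75 // stand-in for the real conversion
    }

    fn main() {
        let bias = horizontal_bias(f64::INFINITY);
        if bias.is_nan() {
            eprintln!("bias unusable; carrying on without it");
        }
    }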


In hindsight it would have been perhaps better, but I don't think it can be said that having essentially random values in memory is better than having a controlled error handling action being executed. Is there any reference that such a course of action is preferred or beneficial?

We have a lot of experience with floating-point qNaN values since IEEE 754 introduced them in 1985, and more rigorous languages than Ada — such as OCaml, Haskell, and Rust — use a generalized version of the concept for a lot of error handling.

I don't think there's an open-and-shut case that error values are "preferred or beneficial" in general with respect to exception handling through nonlocal control-flow transfers; there are reasonable arguments on both sides, and it seems likely that both are viable options in most circumstances, and which is better depends on those circumstances. (For example, lazy evaluation or catastrophic consequences attached to software crashes would seem to me to weigh heavily on the side of using error values, while interactive REPL-style usage would seem to me to weigh on the side of nonlocal control flow, perhaps with Common-Lisp-style restarts.)

My point was narrower: in this particular case, using error values analogous to NaNs, rather than shutting down the SRI, would have saved the rocket, without posing the risk of using garbage data that C's semantics pose. In this particular case, they would have been better. It's possible that, in other cases, perhaps even in other places in the SRI firmware, using error values would have been worse than unwinding the stack or shutting down the SRI. But in this case they would have been better.

(If you aren't convinced of that narrow point, you should probably read the report from the Lions board of inquiry.)


I think I see what you mean. I'm personally interested in generally applicable rules, so the fact that it could have worked in this case doesn't help me decide what to do when designing another system.

In the general case, the Ada behavior of detecting and signaling numeric errors seems to be the only robust choice to me. The alternative could only be acceptable if it's proven that the algorithms fail safe in the presence of tainted data.


I'm interested in generally applicable rules, too; in this case, the generally applicable rule is that there is no generally applicable default robust behavior—the behavior of detecting numeric errors and crashing the system was the worst possible behavior in this context, but in other contexts it would be the best possible behavior. As I explained in https://news.ycombinator.com/item?id=20935662, there was no acceptable way to handle such an error if it was detected at runtime. The only acceptable solution is to rigorously verify before runtime that the error is not going to happen, as Rust does with, for example, multithreaded data races. (Well, you can't rule out hardware malfunction, but if you can make it sufficiently improbable you can use the strategy that they did in fact use.)

However, default-ignore vs. default-explode isn't a difference between handling errors with IEEE-754-qNaN-style dataflow-tainting and CLU-style nonlocal control flow transfers, as you seem to think. In either case, algorithms that need data that could not be computed successfully will fail to get that data; in either case, you can write the code to either propagate that failure or to attempt to recover from it somehow. In either case, you will not be able to control the spacecraft if you needed that data to control it. The difference I was mentioning, which is not generally applicable, is that in unusual cases like the Ariane 5 disaster, where the erroneous data shouldn't have been computed in the first place because it wasn't needed, qNaN-style error handling clearly results in safer behavior.

But, in general, real-time control is not a place where there exists a "robust choice" when your software is broken. If your jet engine control software or your antilock braking software is broken, you're likely to die, and no error handling strategy can prevent that, at least not at the level we're talking about. Biological systems manage this situation by having a melange of somewhat-independent control loops using different algorithms that are constantly fighting for control of the organism; while this does seem to produce systems that are more resilient than engineered systems, it's less efficient and enormously more difficult to develop or understand, and it's the opposite direction from the rigorous certainty you and I favor.


> Rust just ignoring it is a good thing?

Rust tries to give you zero-cost abstractions where possible. Overflow checks aren't zero-cost, therefore they're disabled in release mode. Otherwise when safety can be done at zero-cost (at runtime) Rust will usually do that.

I agree with you, though, that the exception might have helped in many other situations, especially if a simulation catches it early on. I would still say from gut feeling that Ada makes the safer decision on average.
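When a check is worth its cost, it can be paid for explicitly and locally instead of globally; a small sketch with invented names:

    // Explicit, opt-in overflow handling: the cost is paid only where the
    // programmer decided the check matters.
    fn total_load(readings: &[u16]) -> Option<u32> {
        let mut sum: u32 = 0;
        for &r in readings {
            sum = sum.checked_add(u32::from(r))?; // None if it would overflow
        }
        Some(sum)
    }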


I don't see the point about "zero-cost abstractions"; an overflow check is not an abstraction.

And of course the run-time checks can also be disabled for Ada, and usually are for release builds.


And this is worth noting:

> These runtime checks[1] are costly, both in terms of program size and execution time. It may be appropriate to remove them if we can statically ensure they aren't needed at runtime, in other words if we can prove that the condition tested for can never occur.

> This is where the analysis done by GNATprove comes in. It can be used to demonstrate statically that none of these errors can ever occur at runtime. Specifically, GNATprove logically interprets the meaning of every instruction in the program. Using this interpretation, GNATprove generates a logical formula called a verification condition for each check that would otherwise be required by the Ada (and hence SPARK) language.

[1] overflow check, index check, range check, divide by zero

---

So again, I do not see why Rust would shine (as some people suggested) here, Ada/SPARK can statically ensure that your program is correct and eliminate all sorts of runtime errors, including overflow.


The rust compiler also removes bounds checks and such if it can statically prove that they won't occur. You don't have as much tooling to communicate it to the compiler as you do in SPARK just yet.

When I learned Ada (blog post somewhere in this thread) I was pretty shocked by how many more runtime checks it had than Rust does, overall. Rust usually checks things at compile time.
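A small sketch of the most common case — iterating by element means there is no per-access index check to remove in the first place, and the explicit-index form is typically optimized the same way:

    // Zero a buffer: the iterator encodes the valid range, so the optimizer
    // has nothing to bounds-check per element.
    fn zero(buffer: &mut [u32]) {
        for element in buffer.iter_mut() {
            *element = 0;
        }
    }

    // Indexing still works; the length comes from the same slice, so the
    // per-element check is usually proven redundant and removed.
    fn zero_indexed(buffer: &mut [u32]) {
        for i in 0..buffer.len() {
            buffer[i] = 0;
        }
    }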


>When I learned Ada (blog post somewhere in this thread) I was pretty shocked by how many more runtime checks it had than Rust does, overall. Rust usually checks things at compile time.

That's perhaps an oversimplification; elsewhere it's been said of Ada's mentality that "Incorrect Is Simply Not Allowed." — but there's ALWAYS been a preference for pushing checks from dynamic to static, and from runtime to compile-time.

As a trivial example, the following code is typically generated without any sort of index check because the loop-control variable takes its range from the Array, it's obvious that it CANNOT be an invalid index, and this is allowed by the language reference manual (and encouraged by the annotated reference manual)—

    For Index in Input'Range loop
        Input(Index):= 0; -- Zero the array.
    End loop;
Conversely, there are places where you cannot statically determine the validity:

    Value : Positive := Positive'Value( Get_Line );


Mostly I agree, but the engineers and managers weren't just accepting the Ada language defaults here; they had consciously decided, after clearly not enough testing, that the best thing to do if there was an unhandled error was to shut down the SRI, because it was probably a hardware failure. So using Rust, which I do agree is much better than Ada and provides safety guarantees Ada doesn't even dream of, wouldn't have saved them; they would have chosen to panic.

Also Ada does have wrapping arithmetic available. It just isn't the default.


Wow, that's quite shocking. A similar situation lurks in many C++ programs, with uncaught exceptions threatening to crash the program at any time.

> It's not some software bug caused by Ada.

This is not correct. It was, in fact, "some software bug caused by Ada", in the sense that in most other programming languages, the error that did in fact happen would have been harmless.

For those who are interested, the full report is at http://www-users.math.umn.edu/~arnold//disasters/ariane5rep..... There is no need to "guess" when a fairly detailed public analysis is available, particularly when the issue is one that comes up repeatedly. It is true that the problem was that a horizontal velocity variable ("horizontal bias", BH in the code) overflowed, although it was not the ordinary kind of integer overflow, but rather a floating-point-to-integer-conversion overflow, which does not produce an exception in any commonly-used programming language.

The missing piece of the explanation is that the variable in question was no longer being used at that point in the flight, so if the failure had been allowed to pass silently rather than shutting down the SRI ("inertial reference system"), the rocket would not have been destroyed.

Of course, under some circumstances, allowing variables to have grossly incorrect values in your inertial reference system could have been equally disastrous — that's what you assumed happened. The system was designed in the way it was because the assumption was that such errors would almost surely stem from hardware failures rather than software bugs; perhaps that is a "domain issue caused by a failure in management and testing".

But in some sense, you can argue that any organizational failure is a "failure in management", since management's job is to look to windward and steer the organization clear of failures by any means necessary. But in fact the particular failure in management that happened was for the software to be inadequately thought out, such that one of Ada's hidden pitfalls caused Ariane 5's maiden flight to fail.


Forgive the use of "from memory" in my original post. I was on the move and didn't have time to reference the original material in-depth. You're correct. You've given a great explanation of the accident. Whenever this is brought up, I feel the need to rebuke the insinuation that this accident was explicitly caused by the use of Ada. You are correct in that the accident was the result of an exception caused by an overflow, something that might not have triggered an exception condition in another language. However, I don't think the fact that a language like C might never have realised there's a problem at all to be damning of Ada. No matter what language this was developed in, better formal analysis of the problem domain should have highlighted the possibility that this could occur. Especially considering the different hardware involved.

This conversation is everything I love about HN. Thoughtful, informative conversation, correction without offense--and so we all benefit. Thank you guys and thanks to dang for keeping this such a great forum!

Well, I probably could have corrected more politely, too, but I appreciate that ajxs suffered fools gladly in this case, the fool being me.

Also as long as we're buttering up the mods, don't forget sctb!


I agree.

>> The actual 'bug' was caused by an integer overflow occurring due to the velocity variable not being wide enough to handle the higher horizontal velocities of the newer rocket.

Something all that fancy type system should have prevented. I found it interesting in the Hackaday piece that the C code example called out the size of an integer while the Ada version did not. The Ada version caught an implicit conversion, but apparently does nothing for overflow.

Every language has its good points and bad. I still want to learn more Rust. In fact I recall reading recently that some Ada developers want to adopt something akin to Rust's borrow checker. All I know is that C++ is not safe. How well does Ada work with C libraries? Rust is pretty OK at it.


Ada / C interop is very easy to do.

https://learn.adacore.com/courses/intro-to-ada/chapters/inte... shows how Ada's Interfaces.C package and the Import and Export aspects are used to share data and make function calls between Ada and C.


Ada is miles ahead of Rust when it comes to C interoperability. I posted this before in one of the other Rust threads, but you can't get more than two paragraphs through the official Rust FFI documentation ( https://doc.rust-lang.org/nomicon/ffi.html ) before being asked to install a third-party dependency. The 'Interfaces.C' package is defined in the Ada language specification. And, as others have mentioned, Ada's language defined pragma directives, or Ada2012's aspects provide a way to implement FFI.

There's nothing wrong with pulling in the `libc` dependency. It literally means adding two lines of code and then `cargo` manages everything for you. It isn't really third-party; the owner is "rust-lang:libs".

I understand that people are conditioned to think APIs are easier to consume when they're in the standard library, because that's true in many languages, especially those that lack a modern package management system (e.g. Ada). But it's not true in Rust and there are advantages to having libraries like `libc` outside the standard library. For example `libc` is updated more frequently than the Rust standard library (multiple times per month vs every six weeks).
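For the curious, the "two lines" amount to something like this (getpid is just an arbitrary example, and this assumes a Unix target):

    // Cargo.toml: add `libc = "0.2"` under [dependencies]; cargo does the rest.
    fn main() {
        // getpid has no preconditions to violate, so the unsafe block is trivial.
        let pid = unsafe { libc::getpid() };
        println!("running as pid {}", pid);
    }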


Having libc updated multiple times per month sounds like a fucking nightmare to me. What am I missing about this situation?

I actually laughed at this comment. I was wondering the same thing. Although this is a cheap shot, how could I resist mentioning that Ada's FFI functionality is updated every 20 years on average and is still ahead of the curve.

> There's nothing wrong with pulling in the `libc` dependency.

Yes, there is. Not only is the dependency transitive — you now depend on everything libc depends on — but you're also depending on the correctness AND properties of the dependency, including security.


Amusing but only superficially a counterexample because the bug was basically in a specification-implementation mismatch. Nothing in the language can catch that.

Here’s a report on the Ariane 5 incident https://zoo.cs.yale.edu/classes/cs422/2010/bib/lions96ariane...

I don't think there is any language around that claims that just because code compiles/can be interpreted it is "reliable". Can you point out a single one that guarantees you won't have logic errors? Rust et al. are good at eliminating very common issues like memory errors and data races; I've never read in the documentation that you won't have issues.

105 launches, 2 failures, 3 partial failures. That's a 95.2% success rate.

I nitpick a little, but the failures were at launches 1 and 14, and the partial failures at launches 2, 10, and 97, so the current effective success rate is much higher than 95.2%.

(4 failures in the first 14 launches, but only one partial failure in the 91 launches since.)


That's a normal failure rate for chemical rockets. The notable thing was that the first failure was due to software.

How many dollars did the failures cost?

Ca. 370 million USD

Ada was the language of the undergraduate curriculum at the University of Washington when I was there (late 1980's) -- because Boeing. One of the core safety mechanisms was exception handling.

* The biggest thing I miss is that every block was inherently a try-block. Just put your exception handlers at the bottom of the block. This provided a nice separation of "normal flow-control" and "exceptional flow-control". Why have try-catch anyway -- I never got used to that later in life.

* Exceptions were not synonymous with errors. In fact the way to read a file byte by byte was just to keep reading bytes and never explicitly check for end-of-file. Let the exception handler handle this case. Logic is much cleaner.

* Remember that you can't just dump core and shell out to the OS on an unhandled exception -- not if you're controlling an ICBM flying off the continent. Ada really forced you to handle all errors at some level of the call-stack with an exception handler. At the outer-most level -- just call self-destruct.


> * The biggest thing I miss is that every block was inherently a try-block. Just put your exception handlers at the bottom of the block. This provided a nice separation of "normal flow-control" and "exceptional flow-control". Why have try-catch anyway -- I never got used to that later in life.

That's still the case in Ruby, you can put a rescue at the end of the block.


"rescue" seems to imply "error," whereas "exception" means simply an unusual / non-normal condition. Does Ruby encourage exception handling as control-flow besides error conditions?

If you've done VHDL, you'll be familiar with Ada's syntax. When the DoD commissioned the design of VHDL, they required that its syntax be based on Ada.

https://en.wikipedia.org/wiki/VHDL#History


They started with Ada's syntax and tacked on strange things that do not fit in, e.g., the ".all" in use clauses.

For people wondering what ".all" is about:

https://en.wikibooks.org/wiki/Ada_Programming/Types/access


I think he's talking about the use of the all keyword in VHDL. Although, the Ada use is odd too.

Also if you've written any PL/SQL stored procedures in Oracle.

PL/SQL has remained faithful to the Ada zeitgeist. I've found spotting flaws in PL/SQL during peer review to be easier than some other languages.

Interesting.

I wonder how nice a PL/SQL & Ada combined environment/IDE could be.


TIL VHDL was commissioned by the DoD.

Also random note, but wouldn't most engineers familiar with HDL's be electronic engineers rather than software developers?


There are a decent chunk of EEs/CpEs doing work that bridges hardware and software. For example, they might be writing VHDL or Verilog for an FPGA and also writing C/C++ for the microcontroller that interfaces with it.

Also, there are lots of professional software developers who studied Electrical Engineering or Computer Engineering in college. Many schools historically had all their CS classes and faculty within their EE department.


Really just depends on what you wanted to specialize in. My degree is basically the VLSI portion of an Electrical Engineer and Operating Systems portion of a CS degree. There were many students in my HDL classes that were purely CS but had an interest in software based processors. Not as many EE's taking classes on Big-O notation.

A lot of people trained as EEs seem to end up in software dev jobs these days.

Yep. You don't find many EEs left in the US these days, unless they work at a military contractor. All the electronic hardware design work went to Taiwan and China, because that's where all the manufacturing for it is. What's left is government-related (military) stuff that obviously can't be outsourced to Asia, and some commercial stuff that they keep in-house for various reasons.

Basically, the US is a really bad place to be an electrical engineer specializing in electronic hardware, but it's a great place to be a software engineer or developer.


> Basically, the US is a really bad place to be an electrical engineer specializing in electronic hardware, but it's a great place to be a software engineer or developer.

It's because software engineers get paid much better, in addition to having more options. As a firmware engineer I straddled both sides. So many of my EE colleagues lamented that they should have done more on the software side. A few of the brilliant ones moved to places like Google and Amazon, which need them for specialist roles.

I remember a decade back I was visiting a customer in Taiwan to help debug our product and saw a legion of Chinese engineers helping out some project. I knew eventually Taiwan would lose some of their R&D to China.


It's interesting to see how Verilog & C both took over in the industry. There's probably a chasm where the extra typedness isn't worth the velocity cost at the beginning of a project, and by the time it's valuable the weight of switching can be too great. I think the middle ground is a language where you can relax some of your rules but then, as pieces mature, you can start enforcing stricter typing rules.

I observe the same effect on other things like unit tests except the effect is the reverse of what people typically think. Lots of unit tests early on help you write correct code initially but as code matures it's just wasted automation time & it can end up ossifying some architectural decisions (ie. a change ripples out & causes a large amount of tests to need updating).


I tend to disagree: strong typing is very valuable in the hardware design space - it's just that you couldn't convince hardware devs of this. If they had exposure to a programming language it was C (and possibly some assembly language) so they wanted something closer to what they were familiar with.

When I worked in EDA I was given a project to get rid of linting errors in our generated HDL code. We generated both VHDL and Verilog. Our generated VHDL had very few problems thanks to strong typing. But there were lots of problems to fix in the Verilog code caused by its weak typing.


It's very valuable, but not worth acquiring for most developers. I write a relatively large amount of C. I don't enjoy the familiarity; it just gets the job done for me in 90% less time because the tooling is there, the IDE is there, the documentation and support are good, the libraries are there - it matters a lot in which languages I find tolerable, I've noticed. I like PL/pgSQL; it also has all that I've listed. I didn't like Ada, and it doesn't have what I listed, because I had to "reinvent the wheel" so much. In the end, why spend 50-90% more time to get that last x% of some foggy benefit?

VHDL is quite strong across European companies.

I've developed a set of "strongly typed" classes in C++ that have saved me a ton of grief.

Here are some examples. All classes are fundamentally floats. Errors are caught at compilation.

    cInches i1, i2, i3;
    cCm cm1, cm2;

    i1 = 1.f;  // error
    i1 = cInches(1.f);  // OK
    i3 = i1 + i2;  // OK
    cm1 = i1;   // error
    i1 = i2 * 2.f;  // OK
    float f1 = i1 / i2;  // OK
    i1 = i2 * i3;  // error! square inches are not inches

    int add(cInches in1, cInches in2);
    add(i1, i2);  // OK
    add(cm1, cm2);  // error
'Typedef's just don't cut it.

There are a few libraries which implement this for all units in e.g. SI, like Boost.Units or nholthaus/units on GitHub. There's also the Safe Numerics library in Boost, which catches UB and other issues around integer operations, preferably at compile time and, if not possible, at run-time.

From what I read about Ada, its types offer better guarantees than what's typically available in mainstream programming languages: https://learn.adacore.com/courses/intro-to-ada/chapters/stro...


I guess no one here bothered to actually benchmark scalar (like unit type/conversion code) floats vs doubles on modern x86 hardware. It will perform identically. Heck, don't believe me, check Agner's x86 instruction tables [0].

Please default to doubles and optimize only when it's actually an issue. 99% of cases it's not, unless you're doing throughput computing. In that case floats provide up to 2x improvement. Keep in mind modern x86 CPUs can perform up to 16 double precision FLOPs (FMA) per cycle per core — do you really think it's going to be the bottleneck in your code?

32-bit floats expose you to nasty issues with even "simple" stuff like errors in naively computed sums. Issues that mostly just wouldn't matter if 64-bit doubles were used. See a proper floating point summation algorithm [1].

That said, even 64-bit doubles (or any arbitrarily long imprecise format) won't shield you from all of floating point weirdness. But it sure does help in common cases.

With floats you'll get issues like this:

https://randomascii.wordpress.com/2012/02/13/dont-store-that...

[0]: https://www.agner.org/optimize/instruction_tables.pdf

[1]: https://en.wikipedia.org/wiki/Kahan_summation_algorithm
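For reference, a compact sketch of the compensated (Kahan) summation from [1]:

    // Kahan (compensated) summation: carries the rounding error forward so
    // single-precision accumulation degrades far more slowly than a naive `+=`.
    fn kahan_sum(values: &[f32]) -> f32 {
        let mut sum = 0.0f32;
        let mut compensation = 0.0f32; // running low-order error
        for &v in values {
            let y = v - compensation;
            let t = sum + y;              // big + small: low-order bits of y are lost...
            compensation = (t - sum) - y; // ...and recovered here for the next step
            sum = t;
        }
        sum
    }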


Why use floats when you can basically have doubles for the same cost? Floats tend to cause nasty surprises when you deal with real world units.

> Why use floats when you can basically have doubles for the same cost?

Floats costing the same as doubles is a myth stemming from x87 arithmetic, which is obsolete. On an x64 CPU running 64-bit code, your compiler can often pack four floats into one SSE register. Even when that doesn't happen, CPU microcode can likely do more with floats than with doubles.

Lastly, memory bandwidth usage and cache occupancy doubles, which is true even with x87.


The difference is meaningless in scalar code. Not everything is or can be vectorized. Pretty much no difference between one double and one float in a SSE XMM register.

> Even when that doesn't happen, CPU microcode can likely do more with floats than with doubles.

I have no idea what that means. As far as I know, there's no CPU microcode dealing with floating point numbers.

> Lastly, memory bandwidth usage and cache occupancy doubles, which is true even with x87.

So use floats when you have a lot of data.


If you already knew there was a cost difference, why didn't you just say so?

> The difference is meaningless in scalar code. Not everything is or can be vectorized.

Multiple scalars can be packed into one SSE register even in scalar code. This is not the same as vectorization.

> Pretty much no difference between one double and one float in a SSE XMM register.

Even then, some arithmetic operations have higher throughput with floats than with doubles.

> I have no idea what that means. As far as I know, there's no CPU microcode dealing with floating point numbers.

Your CPU doesn't execute the ISA directly in hardware, it first converts it into architecture specific micro-ops. That includes FP operations.

> So use floats when you have a lot of data.

That's not the point. A lot of the time, the cost difference will indeed be irrelevant. The point is that there is a cost difference.


> Multiple scalars can be packed into one SSE register even in scalar code. This is not the same as vectorization.

They can, but can you honestly call it a common case for scalar code?

> Even then, some arithmetic operations have higher throughput with floats than with doubles.

By far the most (90-99%) of FP computation is additions and multiplications (or fused multiply adds). For scalar case (1 double or float in SSE register), they take precisely as long on modern x86 hardware.

Sure, float div executes in 11 instead of 13-14 clocks for doubles and I'm sure transcendentals are even worse, but they're rarely needed. Even then, if the dependency chain allows, the cost is often OoO scheduled away in integer dominated code.

> Your CPU doesn't execute the ISA directly in hardware, it first converts it into architecture specific micro-ops. That includes FP operations.

Except that SSE instructions pretty much are micro-ops as-is. Despite similarly sounding term, microcode has nothing to do with micro-ops.

> That's not the point. A lot of the time, the cost difference will indeed be irrelevant. The point is that there is a cost difference.

Well, I've written a lot of SSE, AVX etc. SIMD code. There sure is a big difference when you're processing large amounts of data. But... I've seen floats introducing silly precision related bugs [0] and a ton of useless float -> double -> float conversion chains. Most of the time most programmers should default to double.

[0]: Example: https://randomascii.wordpress.com/2012/02/13/dont-store-that...


Good point. The class definitions can easily be changed from float to double in just a few lines.

I used to use a lot of doubles. Then I wrote some DirectX, and GPUs all use float. I figured if floats are good enough for 3D, they're good enough for me :o)


With floating-point code, you will slowly lose precision in your numbers. (How quickly is a matter of what calculations you're doing, if you're doing them in an error-tolerant manner, and how bad the initial error is in the first place).

Single-precision floating-point starts you off with about 6-7 decimal digits of precision, and the intermediate rounding will mean that you're constantly introducing error in the 7th digit. Computer graphics are pretty tolerant of error--you only need about 1-2 digits of precision generally--and achieving higher FLOPS is more useful than higher precision. Engineering tolerances are usually going to be tighter, perhaps 4-5 digits for particularly tight tolerances. Furthermore, you're more likely to use nastier equations (logarithms and exponents are particularly bad for precision), so single precision could well introduce error with real-world consequences.


Yup.

Although I'd like to add a typical issue when there's a small change in a large quantity.

Say time. If you measure time in microseconds, and use floats, your precision will be halved to 2 us in just 33.5 seconds. In 24 hours, the precision has deteriorated to over 2 milliseconds. One famous example is the deadly Patriot missile bug: https://sdqweb.ipd.kit.edu/publications/pdfs/saglam2016a.pdf

> Furthermore, you're more likely to use nastier equations (logarithms and exponents are particularly bad for precision), so single-precision could well introduce error with real-world consequences.

Some other ones are iteration and simply long computation chains introducing systematic error.
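A quick way to see the effect, with microsecond timestamps stored in single precision (numbers chosen for the example):

    fn main() {
        // ~24 hours expressed in microseconds, as a single-precision float.
        let day_us = 86_400_000_000.0f32;
        // An added millisecond disappears entirely: the nearest representable
        // f32 values at this magnitude are about 8192 us apart.
        assert_eq!(day_us + 1_000.0, day_us);
    }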


I first read about Ada in JARGON:

   Ada:: n. A {{Pascal}}-descended language that has been made
   mandatory for Department of Defense software projects by the
   Pentagon.  Hackers are nearly unanimous in observing that,
   technically, it is precisely what one might expect given that kind
   of endorsement by fiat; designed by committee, crockish, difficult
   to use, and overall a disastrous, multi-billion-dollar boondoggle
   (one common description is "The PL/1 of the 1980s"; hackers find
   the exception handling and inter-process communication features
   particularly hilarious).  Ada Lovelace (the daughter of Lord Byron
   who became the world's first programmer while cooperating with
   Babbage on the design of his mechanical computing engines in the
   mid-1800s) would almost certainly blanch at the use her name has
   been latterly put to; the kindest thing that has been said about it
   is that there is probably a good small language screaming to get
   out from inside its vast, {elephantine} bulk.
Maybe the situation is different now?

The situation hasn't changed, ESR just flat-out never had a good read on Ada. You really cannot take the Jargon file seriously, its author's ignorances are glaring and numerous.

Ada's "elphantine bulk" is smaller than many modern languages, and even some which the Jargon File praises, like Common Lisp. Its inter-process communication and exception handling functionality were relatively forward-thinking at the time and would look familiar to your average Java programmer.


The kindest thing one can say about esr is that he is not as clever as he thinks he is.

Which is also the least kind thing.

Having learned Ada (or rather, taken a class that used Ada) back around 1990, at the time, it was a bloated monstrosity. It's just that over the past 30 years, C++ has bloated to such a size that it now dwarfs Ada (in my opinion).

Now, at the time, I didn't mind Ada (I think I was the only one to not mind it in class), but learning it just after K&R C/ANSI C (C was just recently standardized), it did appear to have quite a bit more overhead than I was used to.


Languages with the kind of constraints (sometimes kicked up to 11) that inspired the term “bondage and discipline languages” [0] have come back into vogue in a big way since the heyday of the Jargon File, and are in many cases the new “languages of choice” [1].

[0] http://catb.org/~esr/jargon/html/B/bondage-and-discipline-la...

[1] http://catb.org/~esr/jargon/html/L/languages-of-choice.html


I remember reading the first entry many years ago and wondering why anyone would want to use a language with type enforcement when you could have duck typing. (I thought of types as something that had been enforced by the lack of expressive power of old languages in which the types were effectively closer to the machine -- like describing the size of a machine word or something.)

Later on I experienced the famous feature of Haskell in which it takes a while to write a valid program that compiles at all, but then the program immediately works correctly the first time and is free of several large classes of bugs.

So yeah, excellent description of that trend!

Edit: another closely related phenomenon is people gaining a new appreciation for formal grammars and formal specifications, understanding how hard parsing problems are and how bad the consequences of underspecification and undefined behavior can be. I imagine in the Jargon File heyday it would have been made fun of to use formal specifications everywhere, but now some of the most successful systems and tools use them to good effect. Maybe there was a "formal methods winter" or "formalism winter" to parallel the AI Winter?

Edit 2: also, as a sibling comment mentions, these tools are much nicer to use now than they were then. Protobuf and friends do something akin to what ASN.1 did, but the tools around them are so much nicer!


Yes, I think there was very much a "formal methods winter", say from about 1985 to 2015; the Jargon File alludes to this in the "DWIM" entry, I think. Essentially, proving tiny programs correct became feasible in the mid-1960s, but only tiny programs were feasible to prove correct until the mid-oughts. Dijkstra died terribly disappointed.

During essentially this entire time period, the scale of what people were programming was growing much faster than the scale of what people could prove correct. If the aspirational task in 1995 was to prove correct a telecom switching system, the aspirational task in 2005 was to prove correct an AJAX frontend in untyped JS with its goofy coercion rules speaking over an unreliable network with MITM proxies to a backend written in untyped PHP, storing its data in a MySQL cluster where it would read from a read slave that was different from (and a poorly characterized latency period behind) the master it was writing to. Since then things have gotten more complicated, but I feel that formal methods have closed a bit of the gap.

However, I don't think of Ada as being a promising part of these developments. It isn't especially formally tractable, and it doesn't even attempt to offer the kinds of safety guarantees that Rust does. Rust, CompCert, seL4, Z3, Haskell, OCaml, and TLA+ seem a lot more promising than Ada. Heck, even Hypothesis is kind of in that category for me.

P.S. I miss you. I hope you're doing well.


My limited experience with dynamic languages vs the more persnickety typed languages is that with the former you really, really, really need unit testing. I found that the more type enforcement, the fewer bugs of the sort unit tests catch. I wonder what programmers of that era would have thought if their introduction to dynamic languages had been in an environment heavy on TDD and unit tests. Likely not so fond.

Not saying strong typing is good or bad, but I think tools allow strong typing much more conveniently now than when that B&D sentiment was written. Back then, you'd use vi/emacs and have to argue constantly with the type checker until it was placated. Now of course, your IDE will complete based on what's allowed in the Context.You().are.typing and it's more of a correct-by-construction affair. Ada might have fared differently if those tools had existed then.

I think you mean static typing. For instance, Python is strongly and dynamically typed.

Although the editor of the jargon file was confident he was able to speak for all "Hackers", I'm not sure they were ever quite as narrow-minded as he likes to make out.

(Many of the early criticisms of Ada seem a bit silly in hindsight. For example people used to like to make a big thing of it taking three or four years for fully standards-compliant compilers to appear, which seems pretty good going compared to, say, C++98. Similarly, hardware from the early 80s sometimes struggled to compile Ada code efficiently, but it turns out that processors were getting faster rather rapidly around that time.)


Also, something I'd read before (though I can't find a reference now) is that the "design by committee" was that everything filtered through Jean Ichbiah (I think), who acted as benevolent dictator for the design process, and he was careful about the features forming a cohesive whole.

Another thing was a take on the Jargon File entry where a good amount of hacker backlash was due to it being a language forced by the DoD, so of course it is going to be bad.


The situation was not that way even then.

It's a tongue-in-cheek document, made by imbeciles, not some real language evaluation...


The situation remains the same: Jargon file is bad.

I'm missing SPARK, the restricted and formally defined language based on Ada, extending Ada with contracts and aspects, and disallowing the insecure parts. https://en.wikipedia.org/wiki/SPARK_(programming_language) This is the real highlight of Ada.


First, this is a great introduction article to Ada. It's a great history lesson, and analysis of where it's used and why.

That said, it seems to be arguing for something in a vacuum. Where is the mention of other new programming languages that also fit this category? Where is the mention of Rust? Rust fits all the same requirements:

- A general, flexible design that adapts to satisfy the needs of embedded computer applications.

- Reliability. The language should aid the design and development of reliable programs.

- Ease of maintainability. Code should be readable and programming decisions explicit.

- Easy to produce efficient code with. Inefficient constructs should be easily identifiable.

- No unnecessary complexity. Semantic structure should be consistent and minimize the number of concepts.

- Easy to implement the language specification. All features should be easy to understand.

- Machine independence. The language shall not be bound to any hardware or OS details.

- Complete definition. All parts of the language shall be fully and unambiguously defined.

The exception is possibly the last one, complete definition. But that's debatable as Rust does have a reference, but no one wants to go as far as calling it a spec. (Also, I know some people think Rust has some "unnecessary" complexity, personally I do not). The article goes on to compare to C, but doesn't mention some of the new features of C++ that make it safer, or Rust which is approximately as safe as Ada. So why would I pick Ada over other modern languages?

btw, I really enjoyed Steve Klabnik's post on learning Ada: https://words.steveklabnik.com/learning-ada


> - Easy to implement the language specification. All features should be easy to understand.

I'm not sure this is at all applicable to rust. Perhaps once there are more than the one, quickly mutating, implementation, we can say that it's easy to implement the specification.

> - Reliability. The language should aid the design and development of reliable programs.

This one feels a touch questionable, given the built-in "succeed or die" macros that simplify code at the cost of reliability. There's also the unsafe keyword, and the fact that an unsafe operation in one section of code (say, an imported library) can have negative impacts on unrelated sections of code otherwise deemed safe.

> - No unnecessary complexity. Semantic structure should be consistent and minimize the number of concepts.

Borrow checker aside, there is a ton of complexity in Rust (traits, multiple flavors of arrays, multiple flavors of boxed values).

> - Machine independence. The language shall not be bound to any hardware or OS details.

Rust relies on the LLVM project to provide this, and relies on the contributors to the LLVM project to target different architectures.


> ...unsafe operation in one section of code (say, an imported library) can have negative impacts on unrelated sections of code otherwise deemed safe.

Yeah. Just like every line in C/C++ code can have negative impacts on unrelated sections.

While true, that talking point feels somewhat unfair or even in bad faith. Rust requires the "unsafe" escape hatch to be able to be a low-level systems language. Or, gasp, to call libraries using the C ABI.

I write a lot of low level code in C/C++ (firmware, kernel drivers, low level services) and I think a C ABI compatible, "runtimeless", safe language like Rust dramatically improves security and safety, and even ultimately provides faster time-to-market.


This comparison is against Ada's language criteria, not C/C++.

You're well above 100% right with your points — if we weren't looking at the language criteria put forth for Ada.

EDIT: It's worth noting that Ada is also a systems and embedded programming language.


Ada has unsafe code as well and messing it up causes exactly the same thing as it does in Rust or C++, so I'm not sure what your point is.

> C/C++

No such thing.


> given the built-in "succeed or die" macros that simplify code at the cost of reliability

Most of those are linted for, so you can use them to speed development but still be aware that they shouldn't be there in production code.


These points are around language design and implementation; linting is an additional set of tools outside the scope of the language itself.

Whether it should or shouldn't be in production, following Ada's rules it wouldn't be allowed in the code in the first place.


I'd argue that lints are part of the language - they are run as part of the compiler, and block code from compiling, like the "must use" lint.

That said, they are configurable, so I see your point - they are more permissive than what Ada allows

OTOH, I believe you can turn lint warnings into hard errors for production only, allowing fast development but programmatically disallowing them from reaching production. But again, that's configurable.


> given the built-in "succeed or die" macros that simplify code at the cost of reliability.

I would consider a program that dies early to be more reliable (but less robust) than one that just soldiers on, probably doing wrong (and possibly dangerous) things.

Regardless, it is the programmer's choice whether to call panic!() or to try to handle an error gracefully and continue.


It depends on the situation. Ariane 5 was destroyed because its inertial reference system (written in Ada) died early instead of soldiering on, and as it turns out, it would have been fine if it had soldiered on.

Really, there's a certain amount of software, like the Ariane 5 SRI, for which the only acceptable outcome is not to have the error in the first place — there's no acceptable way to handle it once it happens. That software is very expensive to write, so you want to keep it to a minimum in the overall system, but sometimes you cannot avoid having some of it.


Rust cannot be used in most professional use cases where Ada programs can be deployed because the Rust toolchain isn't a qualified one for any of the various IEC 61508 derived qualification regimes.

Which is a shame, because Rust is a great fit for these use cases. Though there are a lot of ecosystem shaped things that present practical challenges too: things that should/could be in no_std but unnecessarily depend on std instead, a variety of cargo workflow issues, etc.

There's a group in Germany (I think) that proposed attempting to qualify Rust to get it approved for inclusion in the list of 61508 languages, but that proposal did not seem to be met with much enthusiasm.

We're using Rust to build tools to service those kinds of industries, and it's non-trivial to justify its use even in adjacent tooling. Let alone to be directly deployed to a target device.


> There's a group in Germany (I think) that proposed attempting to qualify Rust to get it approved for inclusion in the list of 61508 languages, but that proposal did not seem to be met with much enthusiasm.

https://ferrous-systems.com/blog/sealed-rust-the-pitch/

There's still a lot of interest; this stuff doesn't happen quickly.


I got the impression folks were lukewarm to it, with concerns about what it would mean for Rust's rolling release cycle, and about other incongruent goals between how Rust is managed as a big project and what it takes to maintain compliance for standards purposes.

I would be thrilled to death for that effort to move forward with zeal, however. If a WG develops around it, I'd also happily provide engineering resources to coordinate and work on the effort.


I'm not directly involved, so I can't say too much, but the general advice of "comments on the internet are not representative" applies.

You should get in touch with the Ferrous folks, I'm sure they'd love to hear from you.


Might as well mention F* then!

> F* (pronounced F star) is a general-purpose functional programming language with effects aimed at program verification. It puts together the automation of an SMT-backed deductive verification tool with the expressive power of a proof assistant based on dependent types. After verification, F* programs can be extracted to efficient OCaml, F#, C, WASM, or ASM code. This enables verifying the functional correctness and security of realistic applications. The main ongoing use case of F* is building a verified, drop-in replacement for the whole HTTPS stack in Project Everest. This includes verified implementations of TLS 1.2 and 1.3 and of the underlying cryptographic primitives.

> Its type system includes dependent types, monadic effects, refinement types, and a weakest precondition calculus. Together, these features allow expressing precise and compact specifications for programs. The F* type-checker aims to prove that programs meet their specifications using a combination of SMT solving and interactive proofs.

https://www.fstar-lang.org/


And the "worst name" prize goes to ...

Go.

Still Go.


It's incredibly silly to want to insert Rust into every discussion and suggest every article written about something that Rust (sort of) does should mention it. It's not a good look for people not yet into Rust that this stuff is still happening, IMO.

In this discussion, Rust is actually relevant; it's the nearest comparison I can think of.

It's not necessarily about Rust as much as the desperate need to get something safer than C/C++ without compromising performance and interoperability.


>as the desperate need to get something safer than C/C++ without compromising performance and interoperability

Object Pascal

D

ATS

Common Lisp

Ada

even Fortran!!

the list can go on and on


I think the fact that all of those have been around for decades, and yet haven't gained much traction, implies that there's something wrong with them that makes them unsuitable.

If it's just a matter of marketing and failing the subjectivity of a popularity contest, that's a shame, but I wouldn't leap to assume that's the sole reason.


Being bundled with the OS SDK happens to be a big factor for adoption.

On the other hand, consider that some of those languages have had commercial compilers on sale for decades, and the vendors are still in business.

So, some companies do value the quality of the software they produce.


Go?

> Where is the mention of other new programming languages that also fit this category? Where is the mention of Rust?

The article is about Ada. If you want to read about Rust, go read about Rust.


This article promotes Ada as the right language to use for systems programming; a good comparison would include a description of how it compares to other systems programming languages. History of why the DoD chose it in the 1970s, over the languages of the 1970s, doesn't explain why it's the right choice today.

I don't read the comment you're replying to as asking for an article about Rust or any other language. It's interesting to read about Ada, specifically. I would love to see descriptions of what Ada does better than other modern languages for systems programming. "Strong typing" is a feature of many non-C languages, for instance, but types with specific ranges aren't. And the "storage pool" mechanism deserves a more detailed comparison to other memory management approaches.
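
For instance, a minimal sketch of what ranged types buy you in Ada (the type names here are just illustrative):

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Range_Demo is
       --  Integer types constrained to a physical range; an out-of-range
       --  value raises Constraint_Error instead of silently wrapping.
       type Celsius is range -40 .. 125;
       subtype Percent is Integer range 0 .. 100;

       Reading : Celsius := 20;
       Level   : Percent := 100;
    begin
       Reading := Reading + 10;   --  fine, 30 is in range
       Level   := Level + 1;      --  101 is out of range: Constraint_Error
       Put_Line ("not reached");
    exception
       when Constraint_Error =>
          Put_Line ("out-of-range value caught");
    end Range_Demo;

Run-time checks like these are part of standard Ada semantics (compilers let you suppress them), and static violations are typically flagged at compile time. That's the kind of comparison against other modern languages I'd like to see spelled out.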


bluejekyll does chastise the author for not including their favorite programming language and I don't think that's a legitimate criticism at all.

Yes, most systems programming is done in C and C++. Some people even use Rust, I've heard. Doesn't mean that an article has to draw comparisons to these languages. In fact, almost every article comparing programming languages inevitably does a poor job at it because the author most likely is more familiar with one language and then we'd have people criticizing the author because they got X, Y and Z wrong.

I have seen this a lot of times on HN. It's simply smart not to compare languages.


If the article had a title of "Ada is a great language for embedded", then great. But it's implying that Ada is the only language that exists today that can accomplish this task, and that's just not true. Rust is merely one example, and the best I know. And the tooling of these modern languages is perhaps a better reason to use them than Ada.

But yes, perhaps I was too critical; it's mainly the title that is a little misleading, and the content didn't express anything specific about Ada that other languages don't also have.


It may not be obvious from the article title, but the article IS about using Ada for embedded systems programming. Here are the reasons that this is so:

1) The article is on Hackaday, a website for hardware hacking enthusiasts. Most programming articles on the site are geared toward low-level and embedded systems programming.

2) The introduction section of the article ends with the line: "Ada might also be the right choice for your next embedded project."

3) The last section of the article "WHERE TO GO FROM HERE" points the reader to a related article about how to get started with Ada on a PicoRV32 RISC-V core, a small processor typically used in FPGA and ASIC designs, not the usual fare for desktop, server-side, or web programming.

Ada was designed for safety-critical, military-grade embedded systems programming. It excels in that domain and while it CAN be used for operating system and application programming, it requires a bit more effort than other programming languages because of the difference in idiomatic approaches and the built-in APIs.


That's not really the point I was making. Ada has been around since the '70s and, at that time, was possibly the correct choice to make given the criteria. The question I asked at the end, and the title of this article, is why would I pick Ada today?

The best reason I've seen is because it's already passed the hurdles in some industries as being a compliant language for their needs. That might be a good enough reason for some, but it's not one that's important to me.


That seems like a trick question, because obviously you wouldn't pick Ada, you would pick Rust. And then you would talk about Rust, and mention how great Rust is, perhaps in a thread about Ada, which hadn't mentioned Rust at all. Or a thread about C, or about C++ or even about Go or why not even Python and JavaScript.

But seriously now: the author didn't mention Rust because they probably don't even know it exists. I work in embedded and like to ask people offering courses / tooling / services around safety if they support Rust and the only answer I got so far is "no". A lot of those people either don't know about Rust or they don't care, because everyone's using C and C++.


I'd really love to read a fair and comprehensive comparison and review of Ada and Rust. Safety, performance, concurrency, low level ABI compatibility etc.

Ada does seem interesting, but all the buzz seems to be about Rust. Perhaps Ada deserves more interest?


At least a short comparison of Ada/SPARK and Rust:

https://www.electronicdesign.com/industrial/rust-and-spark-s...


Thanks, while short, it sure looks interesting.

SPARK's statically checked function pre- and postconditions and constrained types certainly look very interesting, especially for embedded systems. And for a lot of other low level code as well.


Hackaday has written several articles about Rust, though I don't believe any of them were by this author.

Why can't someone write an article about Ada without it having to be about Rust?

Because they overlap almost entirely in use case?

From my own analysis, about a year ago, I do not believe this to be true. Rust is memory safe, but it does not offer the type system that Ada does. The type system is where a large portion of Ada's safety comes from. They're both tackling software/system safety, but from different directions and to different ends.

Additionally, modern Ada (after the introduction of SPARK, what, 15 years ago now?) offers even more safety features that I've not seen in Rust, in the form of code contracts. Contracts which can be used as run-time checks, and (to some degree) as static proofs about the code.

Extend Rust's type system and add a contract mechanism, and then it'll cover the same use case that I have for Ada.
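
To make that concrete, here is a rough sketch of an Ada 2012 contract (the Clamp function is made up for illustration; with GNAT the run-time checks are enabled with -gnata, and the SPARK tools can attempt to prove the same contract statically):

    package Contracts_Demo is
       --  Pre/Post aspects document and enforce the subprogram's contract.
       function Clamp (Value, Low, High : Integer) return Integer
         with Pre  => Low <= High,
              Post => Clamp'Result in Low .. High;
    end Contracts_Demo;

    package body Contracts_Demo is
       function Clamp (Value, Low, High : Integer) return Integer is
       begin
          if Value < Low then
             return Low;
          elsif Value > High then
             return High;
          else
             return Value;
          end if;
       end Clamp;
    end Contracts_Demo;

A violated precondition or postcondition raises Assertion_Error at run time, which is already more than Rust's type system can express today.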


I would recommend https://www.adacore.com/uploads/books/pdf/ePDF-Implementatio... for more information regarding contracts. There are of course a LOT more, just fire up your preferred search engine and enjoy your findings. :)

> SPARK builds on the strengths of Ada to provide even more guarantees statically rather than dynamically. As summarized in the following table, Ada provides strict syntax and strong typing at compile time plus dynamic checking of run-time errors and program contracts. SPARK allows such checking to be performed statically. In addition, it enforces the use of a safer language subset and detects data flow errors statically.

Contract programming:

- Ada: dynamic

- SPARK: dynamic / static

Run-time errors:

- Ada: dynamic

- SPARK: dynamic / static

Data flow errors:

- Ada: -

- SPARK: static

Strong typing:

- Ada: static

- SPARK: static

Safer language subset:

- Ada: -

- SPARK: static

Strict clear syntax:

- Ada: static

- SPARK: static

---

Additionally, safe pointers in SPARK: https://blog.adacore.com/using-pointers-in-spark and https://arxiv.org/abs/1710.07047.

---

Cryptographic library in SPARK 2014: https://github.com/Componolit/libsparkcrypto.

> libsparkcrypto is a formally verified implementation of several widely used cryptographic algorithms using the SPARK 2014 programming language and toolset. For the complete library proofs of the absence of run-time errors like type range violations, division by zero and numerical overflows are available. Some of its subprograms include proofs of partial correctness.

---

Ada/SPARK is excellent for real-time and safety-critical software: https://www.rcs.ei.tum.de/fileadmin/tueircs/www/becker/spark... to give you some ideas. It may be a bit outdated in some regards, though, for example: "Next version of SPARK 2014 under development (pointers!).". It is already done! :)
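
To give a flavor of the static side, here is a tiny, made-up SPARK 2014 subprogram; GNATprove can discharge the overflow and postcondition checks without ever running the code:

    --  SPARK_Mode puts the unit in the SPARK subset; Global and Depends
    --  describe data flow, Pre/Post give the functional contract.
    procedure Increment (X : in out Integer)
      with SPARK_Mode,
           Global  => null,
           Depends => (X => X),
           Pre     => X < Integer'Last,
           Post    => X = X'Old + 1;

    procedure Increment (X : in out Integer)
      with SPARK_Mode
    is
    begin
       X := X + 1;
    end Increment;

(In a real project the declaration and body would live in a spec/body file pair; this is just a sketch.)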


SPARK contracts look super interesting. It seems like that could let an optimizer do some great stuff and decrease the mental burden of reasoning about mostly pure functions by calling out specific side effects. What is different about its types? I don't see anything obvious. What I'd miss about Rust is the borrow checker and const by default.

Hmm on further investigation I think most of the use cases I'd have for contracts are met by the borrow checker. Side effects are going to get wrapped in unsafe or be explicit in the function signature.

How?

Ada was designed for programming safety-critical, military-grade embedded systems.

Rust was designed as a memory-safe, concurrency-safe programming language, largely to overcome the shortcomings of C++.

Each excels at what it was designed for, but the intended use cases are very different.

Rust is not (currently) being used for aircraft flight control systems--Ada is.

Ada is not (currently) being used for high-performance web browsers and servers--Rust is.

While there are SOME similar design goals in terms of memory safety, concurrency safety, and error prevention, they do not "overlap almost entirely in use case".


Those are good points. While I can see Rust flying within the next few years, I don't see it in flight controls for a while yet. That said, I'm aware of C++ flight control software, which seems insane. And that seems to be the trend. Rust does target embedded, so there is no reason not to try and move it in that direction, though Ada seems a much better choice.

Agreed. The PicoRV32 mentioned in the article's "WHERE TO GO FROM HERE" section has its own crate: https://docs.rs/picorv32/0.1.3/picorv32/

C/C++ have been great for embedded work, but Ada and Rust are better alternatives in my opinion. They are safer, more powerful, and more secure.

