
This article promotes a popular misconception of the programming situation for defense projects in the 1970s. There may have been "hundreds of specialized programming languages" in existence that could be used, but just a handful actually predominated. Most aeronautical projects were done in JOVIAL.

I've talked about this history with engineers from the 1960s-70s. They did not regard the introduction of Ada as a good thing. The JOVIAL language had been developed by engineers and modified through experience over several years to become something they were quite happy with. The impetus that led to Ada came from management and implementation was done by academics.



Many of the 'programmers' of those days lacked the sophistication and education to appreciate Ada.

Just the mojo required to print an integer (having to instantiate a generic) was considered complicated

Ada was also very slow to compile, on the order of 10 times slower than Jovial (in my experience).

Avionics systems back then were tiny. Jovial usually ran on a 16 bit processor.

Ada brought a certain maturity and correctness to the avionics domain.


> Many of the 'programmers' of those days lacked the sophistication and education to appreciate Ada.

That certainly sounds like academic self-righteousness.


> Just the mojo required to print an integer (having to instantiate a generic) was considered complicated

That sounds complicated to me, in 2019 with 25 years of C++ experience. (And of course C++ error messages sprouting mysterious stdlib templates also seem complicated to me.)


> That sounds complicated to me, in 2019 with 25 years of C++ experience.

std::cout << X;

is not a generic, but on the project I just tested, if I substitute X with a type that does not implement an appropriate operator<<, I get ~1000 suggestions in the overload set, many of which are generic operator<< that are templated on a `typename OStream`.

I personally find having to do overload resolution first on a 1000 function overload set containing generic and not generic functions, and then often having to actually instantiate a generic (doing template argument deduction, template instantiation, substitution-failure is not an error so the next overload might be picked, etc. etc. etc.) infinitely more complicated than "just" instantiating a generic.


Indeed, type classes as found in languages like Haskell and Rust were first designed as a way to make overloading (aka "ad-hoc polymorphism") work more sensibly, and in a way that plays nice with generics/templates.


It isn't more complicated than printf, which is also close to being a generic.

    Put_Line ("The number is " & Integer'Image (num));

    printf("The number is %d\n", num);


package Int_IO is new Integer_IO (Num => Integer);

and then

Int_IO.Put (num);


    with Ada.Text_IO; use Ada.Text_IO;
    with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;

    ...

    Put (num);
As far as I can tell, Ada has had Integer_Text_IO that does that all for you, since the early days. [0] (Note that the reference is the first "public" release of Ada outside of the DOD).

[0] https://www.adahome.com/LRM/83/RM/rm83html/lrm-14-03.html#14... (Ada 83 Manual)


That's an awful amount of boilerplate for a simple integer hello world.

Languages with these modules and import statements should be burned. (Starting with Python.)

For pete's sake, couldn't they at least combine this with and use into one keyword so you don't have to repeat yourself?

Or can't use be made to forgive a missing with and just get on with the show?

Do defense contracts pay by the byte or what?


'use' allows you to integrate namespaces, that's all. You can have multiples of those within the same file. Not all 'with' statements will only expose a single namespace.

Nothing different than std in C++.


C++ is awful also; it is not a good choice of benchmark for brevity.

Though a single using-declaration will get you the identifier in C++, the right header file has to be #included. At least it's possible for a single include to bring in lots of material:

  #include <biglib>  // just one of these

  using bigfoo::bar;   // one per identifier
  using bigwidget::zark;
But C++ has lots of irritating repetition like:

  class foo {
    foo();
    ~foo();
  };

  foo::foo()
  {
  }

  foo::~foo()
  {
  }
Seven repetitions of foo just to set up an empty class skeleton with non-inlined constructor and destructor.

(The C++ people are self-conscious about verbosity, and trying to fix the problem; unfortunately, in wrongheaded ways that are making the language worse.)


I'm not sure brevity is a benchmark for a good language either.

So far as I can tell, you dislike how verbose the Ada "hello world" is, because it supports namespaces, and uses them. That wouldn't be something I'd agree with, especially not when dealing with any program of significant size.


> I'm not sure brevity is a benchmark for a good language either.

It's the only benchmark that we have any kind of analysis for (https://www.researchgate.net/publication/3188171_The_Confoun...), which correlates with fewer defects and less time per feature. Less code is better. Until I see evidence to the contrary, I think it's a valid criticism. YMMV


> When we controlled for class size all of the effects disappeared, providing compelling evidence for a confounding effect of class size

Holy crap; are they really saying that all the various correlations seen between numerous software metrics and fault proneness are actually the effect of nothing but raw size?

That's so intuitive. Because in our daily experience, almost all reasonable approaches to various aspects of program organization actually work fine in programs of a size that one programmer can comfortably fully understand and maintain. Even for any technique reviled as "bad" and discouraged in coding standards, there are counterexamples of it working fine. Those counterexamples are in the context of small programs, so we are told, "just wait, that approach will burn you in a big program". But it looks like what will burn you is just the size of that big program, no matter what techniques you wield, whether blessed or not.

That little paper you pointed to there (thank you) looks pretty darn important.


I agree with less code, but the tokens themselves needing to be shorter, not so much - so long as they remain easily readable.


    class foo {
      foo() = default;
      ~foo() = default;
    };
Is actually the idiomatic version of that C++ class, far less repetitive.


The class I wrote has a non-default constructor and destructor.

The idiomatic way to write your class is just:

   class foo { };
allowing the compiler to supply the default constructor and destructor.

The purpose of the C++11 default mechanism is to request a default implementation of a special function in situations when it would be suppressed. For instance

  class foo {
  public:
    foo(int x); // this will suppress the default constructor
    foo() = default;  // this requests it back
  };


printf is not C++.


Ada is the one that was considered complicated. Making use of Put_Line in the way I demonstrated was, at the time, considered complicated and confusing.

That it now seems as easy as printf demonstrates the change in thinking since then fairly well.



I can sympathize with resistance to Ada in a sense: Big "enterprise-y" languages can feel unwieldy and cumbersome if you're used to bit-twiddling and struct-packing in something like C.

But as the recent Boeing debacle has shown, the avionics domain is in need of greater sophistication and reliability. What it doesn't need is more hackery, as satisfying as that may be for the hardware hackers.

Though Ada isn't a massive household name, I think its designers were vindicated by history.


The software in the case of the 737 max performed exactly according to the spec. The problem is that the spec was buggy. The language can't fix a buggy spec.

Note that the bug which caused the Ariane V disaster was written in Ada. And that was caused by the language. If the Ariane V code was written in C and the value simply overflowed, nothing negative would have happened. (The value would be hilariously wrong, but that wouldn't have mattered because the code wasn't necessary during flight. It was an oversight that it was running at all.) However, this is Ada, so it caused an integer overflow exception (a 64-bit float being converted into a 16-bit integer). Which was uncaught. Which caused the entire system to nope out. Which caused the rocket to blow up.

So there's that to consider.


> "If the Ariane V code was written in C and the value simply overflowed, nothing negative would have happened"

So... what you're saying is that the language not recognising an exception conditional has occurred is a better outcome in mission critical applications? "wasn't necessary", "It was an oversight", "Which was uncaught"... With these phrases you've already pointed out that the developers had improperly implemented the application.


There is something to be said for not just halting on an error in production systems. It's the same reason assertions are usually turned off in production builds.

Yes, ideally the programmers should have handled the exception, but once you're in production and the system is running, should you really halt the whole program over an exception the programmers didn't foresee?


For mission-critical software like avionics or a nuclear power plant? Absolutely! At design time you better figure out what all the error cases are, and blindly steamrolling over them when they should be triggered makes a bad problem worse (e.g. look at Cloudbleed where unchecked out-of-bounds reads starts disclosing other people's banking information).

Assertions are also usually turned off for performance reasons, not to allow the program to plow through invariants being violated.


Uncaught exception and die.

For a banking application, yes, die, hard, as loudly as possible. Downtime is worth it. Do not continue, do nothing until it is understood and fixed.

When you're flying. Hmmm. Not so much. Dying is a really bad idea. Ok you should have it tested thoroughly so it isn't going to happen but if it does and "anything could happen" well "anything" is better than killing all the passengers, crew and anyone you might crash on, right?

Literally anything is better than that.

Ideally, you log it, you refuse to take off again, the whole fleet is grounded, etc., but deciding to fall out of the sky because something unexpected happened is probably not the right response.

Is graceful shutdown possible? No? Ok, so take your chances and log noisily is a less bad option.


The default out-of-box runtime can only handle one general scenario. The runtime designers chose to print the error message and terminate, and this seems to be the only sound option for default behavior. It's up to the developer to replace this default handler when the scenario differs that much.


> this seems to be the only sound option for default behavior

I think that is the crux of this debate. In my background, yeah, you fail hard when anything unexpected happens. It's the most straightforward way to fail safe.

But my background maps better to the financial case; I've never worked on avionics or anything like that. I can see the point, though, that, in that kind of situation, failing hard doesn't fail safe at all.

It's conceivable to me that different problem domains require different default behavior. (And perhaps, by extension, different programming languages.)


That particular set of requirements is not unique to avionics. It’s pretty close in game development. A glitch is way better than a game which crashes in the middle of gameplay. Especially because many glitches only last a single frame i.e. barely visible.

> It's conceivable to me that different problem domains require different default behavior. (And perhaps, by extension, different programming languages)

Not sure about problem domains - e.g. 3D graphics is the same problem domain whether it's in videogames people play for fun or in CT scanners which save lives.

Different projects indeed have different requirements.

Every time you see a programming-related holy war on the internets (exceptions/status codes, static/dynamic typing, unit testing or not, FP/OOP, etc.), you can bet the participants have different types of projects in their background, and these projects needed different tradeoffs. More often than not, what's good for a web startup ain't good for NASA or MS, and vice versa.


Whether die-hard or steamroll-thru is the better option depends more on what exactly goes wrong rather than what the application is. For a flight control computer, for example, if the ordered pitch suddenly becomes 3.402823466e+38 degrees it might be better to die hard and restart rather than try to execute the order.


The answer for this is sanity checking and some kind of layered "reflexes".

I design for mission critical things on a regular basis, and one of the error modes I must accommodate is random bit flipping by cosmic rays, emf, or other failures.

Sometimes, it is possible to push things to a "safe" failure state and reboot (which often takes only 100 ms or so)

Sometimes, though, the error must be caught and corrected to a last known good value or something like that. Everything critical is sanity checked: overall boundaries, rate of change, state-related boundaries, etc. Layers of "reflexes" are more robust than a single programmed behavior, because an error in one will be resisted by the others. So much the better when there are segregated systems to check each other.

Often, I'll have a "brilliant" system that performs to a very high standard and is complex and brittle. If the brilliant system fails, it just shuts down or reboots. Underneath the brilliant system is another layer or two: a "competent" system that is simpler and more robust, but less efficient and with soft failure modes, and a "brainstem" system that takes nothing for granted, even checking basic functions by bit flipping to negate stuck RAM bits or broken ALU logic, but only tries to do a passable job, reducing algorithmic complexity to its bare acceptable minimum.

Typically the system will generate a basic range of acceptable parameters at the lowest level (and take corrective action if needed), then refine the range at each subsequent level... rinse and repeat. That way, each lower level checks the ones above.

Or, you just fail downwards if errors are suspected. Either way.

Designing failure tolerant systems is not impossible, but it requires a different mindset.


I imagine (just thinking out loud randomly, I'm sure there's other issues to consider!) one way to mitigate this would be to have a second set of much simpler software (written in a different language, perhaps running on a different platform) whose only runtime job is to sit in the communications path from the primary software to the avionics hardware and monitor the outputs of the primary software and ensure the values and/or rate-of-change in certain values are within some physics-based sanity limits before they hit the hardware. It could be responsible for rebooting the primary software and holding the last-known-good output values (for the consuming avionics) while the reboot happens (hopefully quickly!). Of course then who watches the watcher, and you've added more things in the critical path which can have their own failure modes...


"whose only runtime job is to sit in the communications path from the primary software to the avionics hardware and monitor the outputs of the primary software and ensure the values and/or rate-of-change in certain values are within some physics-based sanity limits before they hit the hardware"

Brings to my mind: Erlang


Was waiting for someone to mention Erlang.


Well, for an unmanned rocket, having it automatically self-destruct is likely the saner choice, as you don't want it to fall back somewhere inhabited, and there is usually little use for something on the wrong orbit.


This is a slightly more complicated topic (i.e. it's complicated by whether or not manual recovery is possible and by whether or not carrying on makes it worse), but in general, the worst thing you can do if you encounter an error in the software that supervises a process is to bail out and leave the process unsupervised. A chemical reaction, for example, is happy to carry on whether or not computers are watching.

There are plenty of situations when automated or manual recovery isn't possible and/or where carrying on with a potentially damaged system can make things worse. In practice, you solve this sort of problem via other design solutions, like redundancy. If you think that a critical system needs to keep running, but you also think that some errors will have to be handled by shutting it down, then you make it redundant.

Airplane FBW systems are a good example (caveat: I don't do airplanes, I do medical stuff -- I might be wrong about the practical details of this but I think it gets the point across). If the ELAC (elevator and aileron computer) runs into a condition it doesn't know how to handle, there's a good chance it will make things worse if you keep going. But you also don't want your mitigation to be "just halt the damn ELAC", you still want to have control over the elevator and the ailerons. That's why there are several ELACs.

More to the point: if something has to keep happening, no matter what, then you design the computing system and the firmware around it so that it keeps happening.

> At design time you better figure out what all the error cases are

Not being able to figure out what all the error cases are has been an unfortunate component of systems engineering for decades now. The Wright brothers would have probably been able to account for all the failure modes in their flight control systems, but today, you are likely to miss some of the failure modes of a CPU that executes nothing but a NOP and a jump back to that NOP.

With the exception of simple and special-purpose systems, built without any programmable logic whatsoever, it's unlikely that you'll be able to figure out what all the error cases are. (There's something to be said here about OISC and whatnot...)

That's not to say it's OK to build systems that blindly steamroll over errors - just that you have to build them so that they can deal with errors that you have not foreseen at design time. You will run into that sort of error sooner or later; we are all fallible.

Edit: as for assertions, performance may be a factor, but that's not why you want to turn them off in production builds for embedded systems. (Although, IMHO, this isn't the right approach for embedded systems at all, but I've seen it used).

First of all, you turn them off because, presumably, they make your system deviate from the spec (i.e. the system ends up handling some cases differently in the production build vs. the debug build, and hopefully the one in the production build is the one you want).

Second, you turn them off because they can introduce undefined behaviour in your system. For example, if a peripheral gives you an incomplete or malformed packet in response to a command, or fails to answer altogether, you may want to abort with a stack trace in a development build. But what you really want to do IRL is probably to check and reset that peripheral, because for all you know it may be stuck and giving haphazard commands to an actuator.

IMHO, assertions are only a partial answer to the problem you really want to solve -- obtaining useful data (e.g. stack traces) in response to error conditions. You can generally log the useful data in addition to actually handling the error correctly. Development and production builds should differ as little as possible -- ideally not at all. Handling potentially critical errors in different ways certainly doesn't count as differing as little as possible.


In the Boeing case why wasn't there a button the pilot can hit that says "The computer is acting up, it's going to crash, turn it off and let me fly the plane"

Surely, the pilot should always be able to have the last say in where or what the plane is flying toward? Or are these planes now so complicated the pilots can't fly them without the computers?


I'm a bit out of my waters when it comes to aerospace, hopefully someone more familiar with the field can correct me if I'm wrong on any of these accounts. I knew I should have given examples from the medical field, but the parent post mentioned airplanes and nuclear plants so...

Airplanes that cannot be flown without computers definitely do exist. The F-117 is perhaps the most famous. Its shape makes it aerodynamically unstable on all three axes and it needs constant correction from the FBW system. Which has quadruple redundancy :). You can turn off the autopilot in these systems, obviously, so you get to say where it goes -- but without the FBW system to issue corrections, the plane crashes.

As for Boeing (or Airbus, who have this ELAC thing)... the main thing to understand here is that there is not a computer. There are several computers, each of them covering a particular set of modules (e.g. ELAC controls the ailerons and elevator, SEC controls spoilers and elevator). There's a more in-depth overview here: https://aviation.stackexchange.com/questions/47711/what-are-... . The autopilot is only one of them. The way they take over each other's functions is actually remarkably advanced, and leads to very interesting procedures for handling failures, see e.g. https://hursts.org.uk/airbus-nonnormal/html/ch05.html .

Now, on some airplanes, some actuators can only be controlled through these computers. They get the commands from the pilot and they issue the right signals that control the actuators. There's no way to bypass them. You can turn off the autopilot and the plane goes where you want but the actuators that control the flight control surfaces are still acted upon by computers.

I don't know if this is the case on Airbus specifically (like I said, I'm in a different field), but if it were, then simply turning those systems off in case of something unexpected is definitely not the right design solution.


There are multiple levels of autopilot to turn off, but the pilot can have pretty much full control.


No, the process manager should log the stack trace, restart the subsystem and try running a few more times, then try an auxiliary system or just fly without the subsystem. It should not halt the whole software.


On a rocket launch?


On projects which require redundancy and resilience.


As a counterpoint, the Toyota ECU in those Camrys that occasionally accelerated uncontrollably had asserts and whatnot. When they fired, the system rebooted. From memory it took about a second to boot, so usually the driver didn't notice. Apparently it did that on a regular basis, as it was overloaded and sometimes failed to meet deadlines.

The acceleration problem was caused by noise flipping bits in the RAM the operating system used to store state data. (The Toyota code was mostly immune to this because they duplicated their state by storing it in two different places, always compared on read, and rebooted if the copies differed. But the OS was provided by the chip manufacturer, NEC, which was a black box to Toyota - and it wasn't as conservative.) On rare occasions a bit flip would take out the watchdog process and the cruise control just after it decided to accelerate, and occasionally when that happened everyone in the vehicle died.

Toyota blamed the deaths on all sorts of things for a while - the driver, the carpet, third-party accelerator pedals. Which to my mind was fair enough. They, like everybody else, had no idea what was causing it, but they knew their monitor-and-reboot-if-out-of-spec kludge worked very well, so it was unlikely to be the ECU.


Ideally your type system should be aware of all the exceptions a particular function is able to throw so the compiler forces you to handle all of them before it compiles.


This is hardly possible. The whole idea of exceptions is that each function only deals with a subset. If no one does this, there's always a top-level handler that handles all. The default handler normally terminates the program, but it's totally possible to write a custom top-level handler that does something else. E.g. normally "out of memory" is an exception that causes a program to terminate, but in old Adobe Photoshop it was a routine situation that simply prompted the user to free some memory (by dropping the Undo, for example).

I agree that the concept of the top-level handler and customization should be more visibly documented.


> This is hardly possible. The whole idea of exceptions is that each function only deals with a subset.

Why? It's totally possible to infer the most general type for all the functions in the program, hence to infer the type of the needed handler.

Any language with subtyping and powerful type inference can do this; here is a toy OCaml example with typed algebraic effects (you can think of them as exceptions):

    val read_file : path -[ Not_found ]-> string

    val process_content : string -[ OOM | Intern_err of int ]-> float

    let computation path =
      let content = read_file path in
      let result = process_content content in
      if result < 0
      then raise Bad_result
      else result
the type of computation would be inferred as

   val computation : path -[ Not_found | OOM | Intern_err of int | Bad_result ]-> float
The handler should catch the corresponding exceptions. This could also be used with the result monad as well [1].

Even if you want to keep some exceptions unhandled, you can easily choose which ones, and track them carefully.

[1] https://keleshev.com/composable-error-handling-in-ocaml


It's possible to compute which exceptions can be raised, but it's impractical to handle each exception in each function (this is how I understood your comment, I guess this is not what you meant). E.g. nearly any call can technically raise an OutOfMemory error or an IntegerOverflow error, but most functions are not competent enough to handle that, all they can do is to clean up and raise this up the call stack.

But it is indeed possible to have a single top-level handler (or a carefully constructed hierarchy of handlers) that explicitly handle all exceptions that may arise. (And to ignore an exception is also a form of handling.) The handler already exists, but it's pretty simple: for any exception it prints the error and terminates the program. It's up to developer to override this to make the program more robust.


What they are saying is that every language runtime, even stricter ones, has its own safety implications and you have to consider them when writing the specs and the code.

Simply switching to a language with a stricter runtime may automagically take care of some simple cases, but no system is devoid of failure modes.

Overflows are classic sources of disastrous bugs (e.g. the infamous Therac-25). Switching to a system that detects them and raises an exception is a step in the right direction, but you still have to handle the exception cases correctly. And in some cases (like the Ariane V), incorrectly handling an exceptional condition turns out to be worse than allowing it to happen.


C would recognize it, but not over-react: log it and continue rather than going full abort/self-destruct, which is what Ada did.


>The software in the case of the 737 max performed exactly according to the spec. The problem is that the spec was buggy. The language can't fix a buggy spec.

That is true, but I've found that in stricter languages you have a bit of a slower ramp-up; the time you save later, before production, on bugs you don't have could be used to take more looks at the spec and run better simulations.

>If the Ariane V code was written in C and the value simply overflowed, nothing negative would have happened.

In this particular case yes, however if the system was actually needed for flight (which I would guess most software is), it might be better to reset and retry.

On average, if your language fails hard like Ada, it's also more likely that you'll find these bugs in simulations and tests.

Rust might have solved the problem the way you like it: by default it panics on integer overflow only in debug builds, and wraps in release builds.


> That is true, but I've found that in stricter languages you have a bit of a slower ramp-up; the time you save later, before production, on bugs you don't have could be used to take more looks at the spec and run better simulations.

In many of these large companies in avionics (and not just avionics), the people writing the specs are not the same as the people writing the code. There is no trade-off between how much time it takes to write the code and how much time it takes to draft and review the specs.

Edit:

> On average, if your language fails hard like Ada, it's also more likely that you'll find these bugs in simulations and tests.

That is definitely true, but it's important to keep two things in mind:

1. The possibility of a hard failure from your runtime is something that you need to be aware of at design time. As seen in the case that the parent comment mentioned, there are cases when a hard failure on a non-critical error is actually worse than allowing the error to occur.

2. More important, if it's hard failures that expose bugs during tests, the first thing you have to fix, even before you fix the code, is the test cases themselves. A hard failure during testing is an indication that the test cases you have don't cover a given situation and, more importantly, that your system can't even handle that situation.

There are always going to be error conditions you can't recover from, and if they're in critical systems, you work around that (e.g. through redundancy). But a runtime that gives you a hard failure is rarely useful by itself.


The issue with Ariane V was the lack of integrated testing. If they had coupled the testing of the software with increased acceleration levels generated by a simulated Ariane V instead of an Ariane IV, they would have caught the issue.

But let's say this was something unexpected - then probably the only way to mitigate it would be a backup system. But this is an unmanned system, and Apollo/Saturn/Shuttle/Soyuz levels of redundancy are not required.


Yeah, requirements were woefully incomplete and testing insufficient. And, may I add, so was the design: one unreliable sensor wired to one processor, which was given great authority over a system that could wind in one direction very fast, and where manually reversing out was very slow or impossible once speeds got high (improbable when the pitch directs you down).

How could anyone believe this system was safe? No testing or requirements needed.


> value simply overflowed, nothing negative would have happened.

Except overflow would most likely have caused a significantly more severe outcome than the uncaught exception.

And there is a restriction in GNAT (No_Exception_Propagation) which forces you to deal with any exception immediately, and should be used in any critical software imho.

https://docs.adacore.com/gnathie_ug-docs/html/gnathie_ug/gna...


Significantly more severe than the rocket blowing up?


> this is Ada, so it caused an integer overflow exception

This is not a language issue, but a general engineering one. In any critical system that can cause an unexpected exception and failure, there should be sufficient redundancy, so that when the main system goes down, the failover system can take control.


You can and should check your specs like they're code. For instance, you can write the spec in TLA+, which lets you specify liveness properties (e.g. "can the stall recovery procedure take an unbounded amount of time?") and safety properties (e.g. "can any non-majority of disagreeing sensors cause the wrong trim actuations?")

To test your invariants, you should sabotage the spec and check that invariants break.

Once your spec passes model checking (or perhaps theorem proving with TLAPS), you can codify it in e.g. Ada/SPARK contracts.

Once you have that, you've validated your spec, your contracts and your code. Bugs can only occur in your invariants and at the seams between your subsystems. This level of rigor should be standard by now for safety-critical systems.
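The sensor-voting invariant doesn't need TLA+ to be illustrated; here's a toy sketch in ordinary C++ (names and thresholds hypothetical, no relation to any real MCAS logic): a voter requiring a strict majority, so no single faulty sensor can actuate trim on its own.

```cpp
#include <array>

// Hypothetical rule: command nose-down trim only if a strict majority of
// three angle-of-attack sensors report a stall-level angle.
bool trim_nose_down(const std::array<double, 3>& aoa_deg, double stall_deg) {
    int agreeing = 0;
    for (double a : aoa_deg)
        if (a >= stall_deg) ++agreeing;
    return agreeing >= 2;  // strict majority of 3
}
```

The invariant "one jammed sensor can never trigger actuation alone" can then be checked exhaustively by sweeping the faulty channel over its whole range while the other two read normal values.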


The recent Boeing debacle was a systems engineering issue. AFAIU, there was a jammed sensor, and the systems corrected based on incorrect input from this sensor. Additionally, it is my understanding that some indicator that showed that these sensors disagreed with each other was not always present.

Regardless, this entire incident was precipitated by Boeing and the carriers not wanting to have to retrain pilots on the new systems.


Wasn't the ill-fated code itself written in Ada? That is essentially all that goes on Boeing aircraft. I don't think a lack of software-engineering rigor is to blame for the design mistake.


Possibly; I don't work for Boeing or their subcontractor. But I was getting at more of the general sense that as aircraft become more and more computerized we need more rigour and standardization of the sort that gave rise to Ada, and less of the aerospace equivalent of hacking things together in perl.


I would say we need more real rigor, of the kind that gave us qmail and seL4, and less fake rigor, of the kind that gave us Ada and inevitably proceeds from standardization.

Standardization means less autonomy, which generally means that the opinions of prestigious people are imposed on everyone else. This can be good for intellectual rigor at first if your social pecking order is based on having sound opinions, like St. Augustine, but if it's based on anything else, standardization is an intellectual disaster from the start. And of course standardization prevents intellectual progress, always and everywhere, which is how Galileo ended up under house arrest.


While I understand the instinct, there is a flip side. A while back Lutz Prechelt posed the phonecode problem, inviting solutions in multiple languages. While he didn't specify error handling, the Perl solutions were on average better at error handling than the compiled language solutions.


I think you are referring to [0]. Thanks a lot for sharing this, it's quite interesting!

[0]: http://www.inf.fu-berlin.de/inst/ag-se/teaching/V-EMPIR-2011...


The Boeing debacle is an example of software performing exactly to specification. The spec was the problem.


When Ada came out it was a large intimidating language. The saying was "to learn Ada, it's best to know Ada first."

I haven't touched it in a long time, but I remember some of the impressions it made on me when I first used it.

The type system: I think you could do some really fancy things like:

  speed := miles_traveled / elapsed;
You could declare speed to be of type mph, miles_traveled of type miles, and elapsed of type hours.

The thing is, I think you had to write the conversion stuff for "/" yourself, so that it allowed two types to be divided and returned a third type.
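A sketch of the same idea in C++ terms (type names hypothetical): thin dimensional wrappers plus the one user-defined "/" that takes two types and returns a third, which is roughly the conversion the parent remembers writing in Ada.

```cpp
// Thin dimensional wrappers, roughly what Ada's derived numeric types give you.
struct Miles { double v; };
struct Hours { double v; };
struct Mph   { double v; };

// The one operator you must define yourself: "/" accepts two distinct
// types and returns a third, so mixing up units is a compile error.
Mph operator/(Miles d, Hours t) { return Mph{d.v / t.v}; }
```

With that in place, `Mph speed = Miles{120.0} / Hours{2.0};` compiles, while `Miles{120.0} / Miles{2.0}` does not.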

unchecked conversion: no matter what you did, you had to interface with hardware. So there was a lot of unchecked_conversion() stuff to convert between basic machine stuff.
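For readers more familiar with C++, the closest analogue to Ada's Unchecked_Conversion is a bit-for-bit copy; a sketch using std::memcpy (std::bit_cast does the same in C++20):

```cpp
#include <cstdint>
#include <cstring>

// Reinterpret a float's bits as a 32-bit register image -- the kind of
// thing Unchecked_Conversion was used for at the hardware boundary.
uint32_t float_bits(float f) {
    static_assert(sizeof(float) == sizeof(uint32_t), "size mismatch");
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return bits;
}
```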

argument lists: You couldn't have a printf() or any other varargs-type stuff that I remember. This made interfacing with external libraries that needed that kind of thing a lot harder. (maybe there are ways to do this)

there were things I really did like

the package and package body stuff: You could implement a bunch of interfaces, and compile against the prototypes before the implementations were written.

the attributes, like foo'Size, and the System package let you write your code to be correct even if system types or machine types changed

A lot of Ada is really tedious, but once you got over that, working with an existing project was usually really nice.


I'm sure that's all true.

There is something to be said for a language that was written to a spec from the beginning, however. That is a luxury that few languages today can claim; most language specs were written after the first version(s) of the language were implemented, and were written to match the implementation, not the other way around.

Writing the specification first makes for a much cleaner language, though not always the most comfortable, if you put any weight in your own opinions.

[I am speaking generally from here onwards and I am not looking for specific examples to the contrary. I know full well that there are exceptions to the generalizations below.]

Lots and lots of engineers today would (and do) express complete comfort in their daily drivers of C and C++, while producing buffer overflow bugs and use-after-free bugs all the while. Maybe it isn't in the best interest of the software landscape to let developers choose the languages they use...

We have, as a community, let our feeling that software development is an art dictate how we react and deal with new information; we overwhelmingly reject it. What we're doing now is so much more comfortable, and we feel like we know what we're doing with [language x] finally! Why would I want to overturn all that comfort?

Software development is not an art, it is a science, and we've been ignoring the science part of that as we refuse to abandon our favorite language(s) for things that could easily be proven to be better, if we would just do the testing to determine performance numbers, ease of bug introduction, ease of maintenance, and so on.

We call ourselves "software engineers" and we don't measure anything beyond the most rudimentary measurements. We DEFINITELY do not actively look to overturn our comfort in favor of finding a better language to use. We don't try to improve anything the moment that might mean that we switch languages, no matter how significant the long term advantages could be. We don't even think about what the long term advantages could be; we're too busy telling ourselves that we are smart and that we can do it in whatever language we are comfortable with.


Software engineering is neither science nor art, it’s a craft; one where the raw materials are pure arbitrary logic and data, and therefore having much broader scope than crafts tied to physical materials, but a craft nonetheless.


Until software engineers are personally liable for their work like civil and mechanical engineers are, and have to carry insurance because they will get sued for negligence if there are issues, it’s not professional engineering.


Software engineers are personally liable for their work. It's just that it's much more difficult to prove the various elements of negligence in the software realm, which makes litigation costly and uncertain. How negligent is an arithmetic overflow in a free software library that causes the loss of # of ads clicked for 10,000 Facebook users? Could the developer have foreseen the use case? What's the value of that loss?

An arithmetic bug in a flight safety system or financial trading platform? Your odds of being successfully sued just went up astronomically as it's much easier to show negligence and put a dollar figure to damages.

Professional licensing regimes are principally designed to prevent negligence by a priori preventing unqualified people from holding themselves out as capable of performing some task. In terms of legal liability all they really do is restrict the extent to which you can limit your liability contractually. That's important in business and in professional services (doctor, lawyer, etc), but for downstream users who suffer damages there was never a contract involved, anyhow.


Some of them are, just not in every country. Here are the examples for Portugal and Germany.

https://www.ordemengenheiros.pt/pt/a-ordem/colegios-e-especi...

https://www.vdi.de/

Although you can get by without the admission exam, some kinds of state projects do require it for any engineer putting their name on the contract assignment.


Calling one language better than another is subjective and open to interpretation. A clean design might not be as important as critical mass in adoption, which lets you more efficiently find people to work on the software.


That depends. Whether a language gets more work out of the CPU per line of source code is definitely not subjective. And programming languages are not hard to learn for anyone who already knows one or more.

Other important metrics no one thinks to measure:

* Quality of compiled assembly or bytecode (it is dead easy to write a shitty compiler that produces stupid assembly or bytecode)

* Runtime performance on tight loops (this is directly tied to cost when running on something like AWS Lambda, and code that performs well means you need to upgrade your hardware less often)

* Lines of code required to achieve a given goal. Less is obviously better, when you understand that more code = more complexity, and that more complexity = higher likelihood of bugs. Also, if there is enough boilerplate code to break these guidelines, then what is that code doing, except slowing development times and slowing execution times?

I can write a method two or more ways and measure which is provably faster. No one does this! No one looks for hot spots and tries to optimize them. No one profiles anything as long as performance is deemed "good" or "not a problem."

There is a lot of CPU time to be saved, a lot of RAM usage and cache optimization that no one does because they believe that their work is above being challenged, even by themselves. This world is insane.
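The measuring itself is cheap; here's a minimal timing harness in C++ (iteration counts arbitrary, and note that a serious harness also needs warm-up runs and a sink value so the compiler can't delete the work being timed):

```cpp
#include <chrono>

// Time a callable over `iters` iterations; returns nanoseconds per call.
// steady_clock is monotonic, so wall-clock adjustments can't skew results.
template <typename F>
double ns_per_call(F f, int iters) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
}
```

Run both candidate implementations through this with the same inputs and the argument is settled by a number rather than by taste.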


Languages are tools to codify behaviors. The fact that we spend so much time debating which tool is better instead of “building bridges”, is why I left this industry.

Just look at the web front end. Absolute cluster fuck.


Well, languages are indeed different, and some are better for a given task than others. And bridges do fall sometimes, so it's worth reflecting on why, and on how to prevent it.


As others have mentioned, software engineering is not a science, it's a craft strongly entangled with communication: not only requirements-to-product, but also product-to-the-next-developer.


As I've probably posted before, I was at a Dijkstra lecture in the early 80s where he put up one of his famous hand-drawn slides that said "Ada: The latest government boondoggle that will take 5 years and 5 million dollars to kill." [5 million dollars was a lot of money back then.] So yeah, even the academics didn't much care for Ada.


I worked with a guy who worked on a GE defense contract in the late 80s that demanded Ada as the design language. He worked there two years learning the language, without writing any code, before they laid the team off.


Ah, those famous people and their catchy phrases. They do a lot of damage sometimes.


What did Dijkstra like? Pascal?


ALGOL 60. https://en.wikipedia.org/wiki/Edsger_W._Dijkstra#Compiler_co...:

”Dijkstra was known to be a fan of ALGOL 60, and worked on the team that implemented the first compiler for that language. He was closely involved in the ALGOL 60 development, realisation and popularisation.”

Also: Haskell and Pascal over C++ and Java (https://www.cs.utexas.edu/users/EWD/transcriptions/OtherDocs...)


…for CS instruction.

I've never come across any indication that Dijkstra wrote actual code (in the sense of something executing on a computer) after 1970 or so. The algorithms in the EWD notes are all written in ALGOL style pseudocode.

Dijkstra was the CS equivalent of the Patent Law concept of a "Non-Practicing Entity", which gave him great liberty to troll all programming languages without exposing his own choices or designs to corresponding scrutiny.


This nails it. Dijkstra sort of hated computers; he always hand-drew overhead projector transparencies for his lectures. He's also famous for saying we should call our field "Computing Science" because "Computer science is no more about computers than astronomy is about telescopes."


> "Computer science is no more about computers than astronomy is about telescopes."

It's more accurate to say that CS is no more about computers than aerodynamics is about flying: Without flight, aerodynamics would barely exist as a distinct field, and wouldn't be of interest to anyone except people in a sub-field of fluid dynamics, and never intersecting with practical concerns such as materials science and the overall design of buildable objects. Similarly with CS: Without computers it would be a simple, obscure sub-field of mathematics, about where it was in the 1930s when Post and Turing and Gödel were active, and it wouldn't intersect with fields like electronics or, probably, have much to do with formal grammars.


Before C was even a thing, plenty of computer systems got developed in Algol dialects for systems programming, like ESPOL, NEWP, PL/I, PL/S, Algol RS,...


If I could upvote this twice, I would.


He was teaching at UT Austin and Pascal was the CS department's pedagogical language of choice back then, so I presume the answer was yes. Or at least I never heard him say he hated it.


My impression of Ada via undergraduate exposure at Embry-Riddle was that it was like an industrial strength Pascal. I never thought later on that I needed Ada, but then I didn't go on to work with avionics or anything like that. And I always thought that Pascal+extensions was a pretty good way to develop applications and was disappointed when the Macintosh moved towards C and C++. I learned to write GUI applications with THINK Pascal, before Java existed.

The version of Ada we had was DOS-based, but I found out that you could use the interrupt API, which allowed VGA graphics.


> The impetus that led to Ada came from management and implementation was done by academics.

My impression back then was that academics were busy designing their perfect languages for data transformation while ignoring real-world engineering needs, and forgetting that not everyone had an unlimited account on the school's big-iron mainframe.


Seems like "the next big thing" has existed in the tech world from the start.

I remember using Ada 95 in university, after learning C++, Java and Python in other projects. It worked very well for concurrent processes, but it was so basic as a developer experience.

In C++ (98, I think) I needed to build dynamic-length arrays; in Ada 95 I had to build dynamic-length strings.


> in Ada 95 I had to build dynamic length strings.

Ada 95 did have dynamic strings; they were provided by the Ada.Strings.Unbounded package.


Interesting. To be honest, I only used the language for a single project!


Ada exists because of the US Department of Defense. In particular, it formed a working group, headed by a military officer, which

>crafted the Steelman language requirements, a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications. Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals [1].

The success of the language resulting from this process was ensured by the DoD's strong desire for a standard programming language shared by all of its contractors.

[1] https://en.wikipedia.org/wiki/Ada_(programming_language)#His...


Speaking of JOVIAL: I found this talk by Jules Schwartz on the development of JOVIAL. https://youtu.be/sXjCqa4W4iQ It's a great, quite humorous watch.


Welp, there's my rabbit hole for the week. I've never heard of JOVIAL before, and it looks like there's a lot of interesting stuff here. Thank you!


Around 2008/2009 I was briefly (about 6 months) employed to write Jovial and 286 assembly language for software that controlled satellite communication base stations.

Unfortunately I was laid off before I got to write very much of it, but it seemed like a neat (if cumbersome and old fashioned) language.

Information about it is pretty hard to find, and there are no open-source or free compilers that I know of, but here are some links I had saved.

A 1980s style tutorial:

https://apps.dtic.mil/dtic/tr/fulltext/u2/a142780.pdf

The mil-spec:

http://everyspec.com/MIL-STD/MIL-STD-1500-1599/download.php?...

An explanation of how it came to be:

https://jovial.com/documents/p203-schwartz-jovial.pdf


Fortran with nice syntax and built-in bit operations.


Nice fixed-point types too. A fixed point with all bits to the right of the binary point, representing angles as a semicircle, was useful. Table overlays are evil though. I'd rather have Rust than Ada for the functional lineage along with the performance, but the thin stdlib, combined with the difficulty of getting a crates.io mirror certified clean of malicious code, makes it problematic.
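The semicircle representation mentioned above is binary angle measurement: all the fraction bits sit to the right of the binary point, so the full integer range maps onto [-180, 180) degrees and wraparound at +/-180 falls out of two's-complement arithmetic for free. A C++ sketch (scale factor assumed, well-defined wrap since C++20):

```cpp
#include <cstdint>

// 16-bit binary angle: the full int16_t range maps to [-180, 180) degrees.
struct Bam16 {
    int16_t raw;
    double degrees() const { return raw * (180.0 / 32768.0); }
};

// Adding angles wraps modulo 2^16, i.e. modulo 360 degrees -- no range
// check or normalization step is ever needed.
Bam16 add(Bam16 a, Bam16 b) {
    return Bam16{static_cast<int16_t>(
        static_cast<uint16_t>(a.raw) + static_cast<uint16_t>(b.raw))};
}
```

For example, 90 degrees plus 90 degrees lands exactly on the -180/+180 seam, with no special-case code.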


Not to say Rust is ready for prime time in safety-critical systems, but both crates.io and the standard library are optional for Rust.


I think it is probably closer to safety-critical than to the middle ground of safety-related, where the SLOC runs into the millions, so you want Boost or a thick stdlib plus POSIX, but it's still controlled enough that you develop air-gapped and worry about the number of library contributors, their trustedness, and mostly futile code inspections of entire libraries. Even for server backends, the Rust team has expressed concern about the security implications of such deeply nested dependency trees with so many authors.


While you're down there, you might as well check out Coral-66.


production JOVIAL code was like.... 20 lines of comments. 1 line of code. 20 lines of comments. 1 line of code.


This sounds perfectly on-brand with respect to how we do business in the government.

The government has an extreme aversion to encouraging competing approaches or internal competition - largely because of budget concerns around double-work - which drives this behavior.



