Ada for the C++ and Java Developer [pdf] (adacore.com)
152 points by pjmlp 14 days ago | 70 comments



If any Ada users are here, I have question on one section:

  procedure Main is
     type Distance is new Float;
     type Area is new Float;

     D1 : Distance := 2.0;
     D2 : Distance := 3.0;
     A  : Area;
  begin
     D1 := D1 + D2;        -- OK
     D1 := D1 + A;         -- NOT OK: incompatible types for "+" operator
     A  := D1 * D2;        -- NOT OK: incompatible types for ":=" assignment
     A  := Area (D1 * D2); -- OK
  end Main;
> The predefined Ada rules are not perfect; they admit some problematic cases (for example multiplying two Distances yields a Distance) and prohibit some useful cases (for example multiplying two Distances should deliver an Area). These situations can be handled through other mechanisms

I can get why you have to be explicit with type casts if you're trying to be safe, but is there any way to say that

  type Area is Distance * Distance
Or

  type CubeVolume is Distance * Distance * Distance
So that if I say a : Area and b : CubeVolume and a := d1 * d2, it just works, and then b := a * d3 also works.

IOW, explicit casts not needed.

Or is it the case that Ada really took the concept of "explicit is better than implicit" and went to town?

I just wonder because I love F#'s unit of measure types and how they can prevent logical type errors (say, multiplying 10 m/s by 3kg when assigning to a variable of type m/s^2).

Or for this case, if I wrote

  x: Distance := y: Distance * z: Distance
It'd fail hard, because a Distance * a Distance is an Area.

I sorta figured Ada would have invented this sorta stuff and then it was cribbed by other languages.

So yeah, wondering what the other mechanisms the author mentions are. Because I've always heard Ada is great at type safety, so I'd be keen to see how safe it makes your code when combining values with units that aren't logical to combine.


I am not an Ada expert, but I came up with this. Please note that the idea here is not to have the compiler do proper dimensional analysis, but simply to avoid the default overloading of "*" for the Distance type, which also returns a Distance value.

Nothing very high-level, as we need to explicitly manipulate the single-component records, but it does the job.

    with Ada.Text_IO; use Ada.Text_IO;
    
    procedure Main is
       type Distance is record
          Value : Float;
       end record;
    
       type Area is record
          Value : Float;
       end record;
    
       function "*" (Left, Right : Distance) return Area is
       begin
          return (Value => Left.Value * Right.Value);
       end "*";
    
       D1 : constant Distance := (Value => 10.0);
    
       D2 : constant Distance := (Value => 20.0);
    
       -- D3 : constant Distance := D1 * D2;
       -- Does not compile. GNAT returns the following error:
       --   main.adb:21:33: expected type "Distance" defined at line 4
       --   main.adb:21:33: found type "Area" defined at line 8
    
       A : constant Area := D1 * D2;
    begin
       Put_Line ("Area A is " & Float'Image(A.Value));
    end Main;


As someone said, the best solution here is to use the dimensionality analysis in GNAT: https://docs.adacore.com/gnat_ugn-docs/html/gnat_ugn/gnat_ug...

as in:

  with Ada.Text_IO; use Ada.Text_IO;
  with System.Dim.Float_Mks; use System.Dim.Float_Mks;
  with System.Dim.Float_Mks_IO; use System.Dim.Float_Mks_IO;

    procedure Main is
       subtype Distance is System.Dim.Float_Mks.Length;
       subtype Area is System.Dim.Float_Mks.Area;

       D1 : constant Distance := 10.0*m;

       D2 : constant Distance := 20.0*m;

       -- D3 : constant Distance := D1 * D2;
       -- Does not compile. GNAT returns the following error:
       --   main.adb:13:08: dimensions mismatch in object declaration
       --   main.adb:13:37: expected dimension [L], found [L**2]

       A : constant Area := D1 * D2;
    begin
       -- print: Area A is  2.00000E+02 m**2
       Put ("Area A is ");
       Put (Item => A);
       Put_Line ("");
    end Main;

Oh, thanks for pointing it out. Honestly, I was more interested in leveraging the type system than in proper dimensional analysis. I could find no mechanism by which Ada allows the programmer to disable the default operator overloads, so I decided to try that out.

IIRC you can disable/forbid the default like so:

    Function "*"(Left, Right: Distance) return Distance is abstract;
then use:

    Function "*"(Left, Right : Distance) return Area;

That seems to do the trick for an individual predefined operator specification. Thanks!

This is the right way to approach it in my opinion.

There is now, at least with Ada 2012 and GNAT[0][1]. I'm pretty sure only GNAT has implemented it, and I'm not sure whether it's actually in the 2012 standard or not.

I haven't played with it yet, but it does look pretty interesting.

[0]https://gcc.gnu.org/onlinedocs/gcc-4.9.4/gnat_ugn_unw/Perfor... [1]https://blog.adacore.com/uploads/dc.pdf


PTC also supports Ada 2012.


I got it working with:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Main is

       type Distance is new Float;
       type Area is new Float;

       function "*" (Left, Right : Distance) return Area is
          temp : Distance;
       begin
          temp := Left * Right;
          return Area (temp);
       end "*";

       D1 : Distance := 2.0;
       D2 : Distance := 3.0;
       A  : Area;

    begin
       A := D1 * D2; -- OK
       Put_Line (A'Image);
    end Main;
Interestingly, it took me several attempts to create this overload; making it a single-line return would make it recursive:

   function "*" (Left, Right : Distance) return Area is
   begin
      return Left * Right;
   end "*"; --   ^ Does not work
            --   raised STORAGE_ERROR : stack overflow or erroneous memory access
Also:

   function "*" (Left, Right : Distance) return Area is
   begin
      return Area (Left * Right);
   end "*"; --  ^ Does not work either, compiler gives me "ambiguous operand in conversion"


Could you do

    return Area(Float (Left) * Float (Right));


Yes, that works.


Could you do

  return Area (Distance (Left * Right));

?

IIRC, the one-liner here [assuming you disabled the default "*" operator] would be:

    Return Area( Distance'Base'(Left * Right) );
The apostrophe in the inner portion is 'qualification', a way of telling the compiler that the enclosed value is supposed to be of a particular subtype. (Typically used to resolve ambiguities.)

There's a library for that: http://www.dmitry-kazakov.de/ada/units.htm


It does have operator overloading so you could certainly define some

  function "*"(Left, Right : Distance) return Area 
and if dedicated enough build up a whole set of unit-types with corresponding conversion rules.

Not an Ada guy at all though so no clue if there's a less by-hand way of doing it.


The Units library does exactly what you describe. http://www.dmitry-kazakov.de/ada/units.htm



One of my favorite features of Ada 2012 is the design-by-contract checks taken from Eiffel[0]:

Preconditions: "a condition or predicate that must always be true just prior to the execution of some section of code or before an operation"[1]. It is up to the caller (client) to set up the preconditions and ensure that they are true before calling the code in question. If any preconditions are not met, it is the fault of the caller (client).

Postconditions: "a condition or predicate that must always be true just after the execution of some section of code or after an operation"[2]. The code in question guarantees that the postconditions will be true after the code is executed. If any postconditions are not met, it is the fault of the callee (supplier).

Invariants: conditions or predicates that "can be relied upon to be true during the execution of a program, or during some portion of it"[3]. Both parties must ensure the invariants hold.

These features make it very easy to determine where a bug is in the code and make it very explicit what is expected of the caller (client) and callee (supplier).

They act as run-time sanity checks and push Ada / SPARK code in the direction of Haskell function signatures and types. With proper preconditions, postconditions, and invariants in place I think it should be possible to implement a QuickCheck-style[4] testing system to provide some empirical checks if SPARK proof checking is not used.
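In case anyone hasn't seen the syntax, here is a minimal sketch of how the first two are written in Ada 2012 (Div is just an illustrative name; with GNAT the run-time checking is enabled with -gnata or pragma Assertion_Policy (Check)):

    procedure Contracts_Demo is
       --  Precondition: the caller must pass a non-zero divisor.
       --  Postcondition: the callee guarantees the usual quotient relation.
       function Div (Num, Den : Integer) return Integer
         with Pre  => Den /= 0,
              Post => abs (Num - Div'Result * Den) < abs Den;

       function Div (Num, Den : Integer) return Integer is
       begin
          return Num / Den;
       end Div;

       R : Integer;
    begin
       R := Div (7, 2);   --  fine
       R := Div (7, 0);   --  precondition fails: Assertion_Error at run time
    end Contracts_Demo;

Invariants are written with the Type_Invariant aspect on a private type, and subtype predicates (Static_Predicate / Dynamic_Predicate) cover related conditions.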

I would love to see these design-by-contract features added to Rust, C++, and even C.

[0] https://www.eiffel.com/values/design-by-contract/introductio...

[1] https://en.wikipedia.org/wiki/Precondition

[2] https://en.wikipedia.org/wiki/Postcondition

[3] https://en.wikipedia.org/wiki/Invariant_(mathematics)#Invari...

[4] https://en.wikipedia.org/wiki/QuickCheck


C++20 was supposed to have them, but they were removed at the very last minute. If we are lucky, maybe C++23 will get them, but C++26 is more likely, if they ever go forward with them. [1]

For C you can have a look at Frama-C. [2]

D also has DbC. [3]

In case you can target Windows only, VC++ supports SAL annotations; while not DbC, they help to improve code security. [4]

[1] - http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p233...

[2] - https://frama-c.com/html/acsl.html

[3] - https://dlang.org/spec/contracts.html

[4] - https://docs.microsoft.com/en-us/cpp/code-quality/using-sal-...


One thing I am excited about is that we should be able to exploit the contracts in some circumstances to get better performance.

The problem I’ve seen with trying to add non-static contracts to performance-sensitive languages has been ideological rather than technical.

Specifically, do the language semantics allow a mode where these checks can be turned off? Leaving them on all the time can be a huge performance burden, both directly and in the barriers they add to optimization (while there are potential optimization benefits to exploiting the guarantees of contracts, I don’t think as a practical matter they balance out).

But allowing them to be turned off in “release mode” eliminates much of the benefit in all modes: Developers can no longer assume that their code always runs in a context where the contractual guarantees are met, so they have to guard data integrity or safety critical operations dynamically anyway.


I guess it's the same as bounds checks and other runtime validity checks integrated into the language. When we upgrade compiler versions, I usually run a global app profile (perf, plus internal tools you must have when latency and CPU load are actual product requirements). Every 'new' hot spot is analyzed, and if it's a contract, I check whether there's a less costly way to perform it, or whether watering down the contract seems OK (usually not). The thing is, a precondition (especially if you went through AoRTE, or silver-level SPARK) can authorize you to aggressively remove all runtime checks 'behind' it (and get far more performant code).

> Specifically, do the language semantics allow a mode where these checks can be turned off? Leaving them on all the time can be a huge performance burden, both directly and in the barriers they add to optimization (while there are potential optimization benefits to exploiting the guarantees of contracts, I don’t think as a practical matter they balance out).

Yes, with Ada it is typically a pragma or compiler flag to turn checks off; however, Ada has a long history of mandatory checks and of strongly encouraging compiler implementers to optimize the checks away when it is known they cannot fail (which is a surprising chunk of the time if you're coming from a C-based language).
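For reference, the knobs typically look like this (the pragmas are language-defined; the flags are GNAT's):

    pragma Suppress (All_Checks);        --  drop the language-defined run-time checks
    pragma Assertion_Policy (Ignore);    --  Ada 2012: turn off Pre/Post/Invariant checking
    --  or per compilation with GNAT: -gnatp (suppress checks), -gnata (enable assertions)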


They can be turned off for release mode, and the SPARK subset of Ada can use them in formal verification, proving that the conditions hold true for all inputs statically, eliminating the need for any checks.

I think the performance impacts are usually not as big as you think (the Ada compiler is smart enough to eliminate redundant checks where it can), and I believe that they can be turned off per package, too, so for performance critical modules they can be deactivated in release mode.

I do not think contracts were intended to raise recoverable errors in a normal application's behavior; if they are triggered, then the program is incorrect, not just encountering an error. I think most developers would leave them on in release mode (if possible) just to get stack traces and exception messages to see what failed.


The paper "Applying Design by Contract" by Bertrand Meyer shows how DbC is to be used. It is sufficient to have only preconditions turned on by default in "release" mode, others are optional.

My rule of thumb is to have preconditions, postconditions, and invariants all turned on only in the boundary functions (i.e. the public API) of a module, while the inner cohesive functions only have preconditions turned on.


Is it not possible to use refinement types to erase some (but maybe not all) runtime checks but still maintain a type safety guarantee?

Any language that has assertions can be used to simulate at least preconditions and postconditions, and maybe invariants with some extra work.
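In Ada terms, before the 2012 aspects, that might look like the following sketch (Withdraw is just an illustrative name; the aspect form discussed above is the nicer spelling):

    procedure Withdraw (Balance : in out Natural; Amount : Natural) is
       Old_Balance : constant Natural := Balance;
    begin
       pragma Assert (Amount <= Balance);               --  precondition
       Balance := Balance - Amount;
       pragma Assert (Balance = Old_Balance - Amount);  --  postcondition
    end Withdraw;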

As the team leader for the second version of a database middleware product developed in C, and having been convinced by study of the efficacy of assertions for DbC (and hence better-quality software), I made sure my team used C assertions heavily throughout the codebase, at the entry and exit points of functions, to implement preconditions and postconditions, despite some opposition to it.

End result: the product was a success, and was used in multiple software projects for customers.

We were rewarded well for it.

Edit: I first learned about DbC myself, via reading about Eiffel and Bertrand Meyer's work, early on.


I should add that the product being a success was not solely due to using assertions for DbC, of course, although, IMO, that did play a significant role in bug detection, and hence removal, and better product quality. I also ensured that the team consistently applied some other basic software engineering practices, which were not ordinarily followed in that company. The product's success was the result of all the practices that were applied and also a good team.

I never understood how that is different from decorating your function with `assert` statements. Is it analyzed statically?

Yes. SPARK/Ada will attempt to verify many of the properties statically. Not everything can be, and not everything can be done without significant effort (that is, more than it may be worth). But it's more than asserts in most other languages which are only verifiable dynamically. For those areas where SPARK can't prove something statically, the checks are used at runtime (like a traditional assert). It's a best of both worlds situation, in that regard.

Asserts are just a language mechanism with which you implement preconditions/postconditions/invariants, which are program-correctness design constructs. DbC is thus a systematic (i.e. not ad hoc) way of using asserts correctly to guarantee program state.

I highly recommend reading Bertrand Meyer's papers "Applying Design By Contract" and "Design by Contract". They are well worth every programmer's time.


You can use formal verification tools to verify the code. Having preconditions ("assume" instead of "assert") reduces the search space.

I used Ada (first Ada95, later Ada2005) while working on the GPS program, before leaving aerospace and taking on work using Java.

I found the Java type system to be horrendous in comparison. Much better employment potential, though...


That is one reason why I never liked C, and why C++ was a better option when coming from Turbo Pascal 6.

Although Turbo Pascal isn't Ada, it did allow for a similar approach with stronger types, which C++ also kind of supports (but C compatibility spoils it).


The Boeing 777 flies on 99.9% Ada.

It is used extensively in aviation, where toy and aspirational languages don't do the job. http://archive.adaic.com/projects/atwork/boeing.html


Ada seems like a really good language. Is there any reason why it isn't more popular?


There's a lot of FUD spread online, partly derived from historical facts, partly totally bogus. I like how Luke A. Guest (https://github.com/Lucretia) put it:

"You’ve got to love it when people who [know] nothing about Ada like to tell other people, who know nothing about Ada, things about Ada that they probably heard from someone else, who knew nothing about Ada." -- https://users.rust-lang.org/t/if-ada-is-already-very-safe-wh...

That also applies to lots and lots of comments on this site!

(If you read here, hello, Luke :-)


My personal reasons as someone who has learned the language quite intensively but ultimately decided not to use it: (i) vendor lock-in and too much future dependence on Adacore (other commercial Ada compilers do not count as alternatives to me because they are super-expensive), (ii) I can't always use the MGPL and would prefer MIT licenses for important Ada packages, (iii) the user community is split in a weird way between very professional aerospace and high-integrity systems engineers and fairly dilettante hobbyists, but not many users in the space in between those extremes, and (iv) not enough libraries for mundane tasks/operating system integration/GUIs.

I'm currently using Go. Although I would prefer Ada as a language, (iv) is in the end decisive for my tasks. If I used Ada I'd spend half of my time reinventing the wheel or interfacing with C libraries. I'm hoping to find a use for it in the future, though.


As someone who is going all in with Ada, every time I have to reinvent the wheel I plan on releasing it as a MIT licensed library on github. Hopefully if enough Ada programmers do that, we won’t have to worry about it as much.

It was historically too expensive (i.e. not free).


While a bit old, this might explain why it wasn't popular back then.

Why Ada isn't Popular (1998): https://news.ycombinator.com/item?id=7824570


Ada was standard in aerospace and defence projects in the UK when I started in s/w back in the '80s, although I personally never worked in those areas. It may be that its perceived lack of popularity is tied to its association with those rather more secretive lines of work, although that doesn't in and of itself explain why it didn't become more broadly used - other commenters have mentioned cost, and that consideration was enough to kill another technically excellent language (Smalltalk).

My gut perception of Ada is unfortunately mediated through the murky lens of its bastard offspring PL/SQL, which is by a good distance the least favourite of any language I have ever used, although I'd be willing to entertain the argument that this is in large part due to all the ugly and ill-considered bits nailed onto it by Larry's mob rather than inherent defects in the parent language itself.


On the other hand I really quite like programming in Postgres's version of PL/SQL which I find to be pleasantly consistent and quite easy to understand.

Have only recently started with Postgres, but have been really impressed with the whole product and yes totally agree - the language seems to be designed by people who actually understand the DML tasks that programmers want to perform.

> Is there any reason why it isn't more popular?

There are a few; if you head over to comp.lang.ada you can find threads on the issue by people who were involved at the time. As I understand it, though, there are four or five points:

1. Ada was designed and specified completely before any implementations existed, it used then-bleeding-edge theory, and it integrated several big concepts/features: this led to the very first 'implementations' being either incomplete or pretty bad performance-wise.

2. The backlash among DoD contractors' programmers: "Don't tell us what language to use!"

3. The rise of C's popularity; I believe this had massive consequences, ultimately setting back the field of computer science by decades. [Take a look at Multics, VMS, and the Burroughs... then realize how many of their features have been added to 'popular' OSes in the last 10–15 years.]

4. Misunderstanding the compiler's mindset: a lot of programmers take the view that they need to "shut the compiler up" rather than seeing the compiler as helping them out by finding problems.

5. Misunderstanding "mandatory checks" — a lot of programmers are used to C & decedent's "simple" nature and really don't understand how things can be leveraged. A good example here is the sequence F(F(F(X))); if F takes Natural and returns Natural, then there is only one check that is needed for this sequence: N on the innermost call... and if X is defined as natural, even that check can be optimized away.


1. Being good is not enough to be popular. In most cases, to be popular a technology needs to be actively promoted by a big brand (or at least a non-profit foundation). E.g. Java is popular thanks to Sun and Go thanks to Google. It also helps to have free or cheap tools.

2. This problem also has another side - C is much more popular than the language itself warrants.

I see the following reasons here: 1. Unix (which has been popular in academia since the 70s-80s) and later GCC (it was hard to compete with a free compiler at a time when most others required an expensive license). 2. Microsoft designated C and C++ as the "official" languages for Windows: the MS-provided IDE, Visual C++, supported only C/C++ [1], and the official documentation implies that everybody uses C or C++ to create Windows apps.

[1] Visual Studio later added .Net support, but this did not reduce C/C++'s popularity because .Net competes mostly with Java.


Wow, reading through some of these examples, it's pretty clear where VHDL got its syntax! I think I do recall some relation between it and Ada, but it's almost uncannily close. (Looking it up now, it was apparently a requirement that it be based as much as possible on Ada.)


VHDL was initially developed by the DoD back in the time when there was a strong push towards unification of programming languages used in government/defense contracts.

Newer VHDL deviates from Ada syntax in some unfortunate ways that can lead to confusion. For example ".all", in Ada is used to "dereference pointers" while in VHDL it is used to import everything from a library.
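For reference, the Ada side of that (a tiny sketch, names are illustrative):

    with Ada.Text_IO;

    procedure Deref_Demo is
       type Int_Access is access Integer;
       P : constant Int_Access := new Integer'(7);
       V : constant Integer := P.all;   --  ".all" dereferences the access value
    begin
       Ada.Text_IO.Put_Line (Integer'Image (V));
    end Deref_Demo;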

Also, coming from Ada you eventually realize that many FPGA vendors' tools are non-conforming to the standard with regard to certain features that are seldom used by hardware designers but are bread and butter for Ada developers. (enum -> int, int -> enum conversions were partly unsupported in the Xilinx toolchains some years ago, not sure if the problem persists.)


> For example ".all", in Ada is used to "dereference pointers" while in VHDL it is used to import everything from a library.

In VHDL, the meaning of the suffix all depends on the kind of the prefix. When the prefix is an object of an access type, it behaves similarly to Ada.


The inclusion of Java in the title is misleading: "For those coming from the Java world: there's no garbage collector in Ada, so objects allocated by the new operator need to be expressly freed." So really, you first need a prerequisite which explains how to program without a garbage collector, but this will have a much greater scope than a book that just teaches you Ada.

And here I thought people were talking about Cardano and IELE...

"Java" is probably not a good name for a cryptocurrency either.

Why did they name that cryptocurrency "Ada"?


Does Ada have real memory safety features? Or is it of the "better than nothing" C++ kind?

For example: are array out-of-bounds checked? Or prevented at compile time? What about overflow? ...


Yes, they are bounds-checked: at compile time, or at runtime if the compiler is not able to prove them at compile time.

Overflow is checked.

However both can be disabled via unsafe code pragmas if so desired.

As of Ada 2012, contracts are integrated into the language, and the SPARK proof system can use that DbC in formal proofs.

Many of the use cases that in C++ would require new/delete are handled by the compiler itself, so you get an error if, when a function is called, there is not enough space available.

In the cases where there is a need to explicitly do malloc/free-like programming, only malloc (new) is considered safe; manually releasing memory is an explicit unsafe operation and marked as such.
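To make "marked as such" concrete, a small sketch (Ada.Unchecked_Deallocation is the real library unit; the surrounding names are illustrative):

    with Ada.Unchecked_Deallocation;

    procedure Demo is
       type Int_Ptr is access Integer;
       procedure Free is new Ada.Unchecked_Deallocation (Integer, Int_Ptr);

       P : Int_Ptr := new Integer'(42);   --  allocation is the "safe" direction
    begin
       Free (P);   --  the "Unchecked" in the name flags the unsafe operation;
                   --  P is set to null, but any other copies would now dangle.
    end Demo;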


In addition, Ada allows dynamically-sized arrays to be allocated on the stack and to be returned from a function. Efficient implementation of the latter requires rather non-trivial support on the compiler side, and C/C++/Rust have nothing like it. Yet in many cases it allows one to eliminate new/delete and simplify code.

For example, just consider if C allowed returning a plain C string from a function without any heap allocation. Things like unsafe sprintf/strcpy would never have happened then.
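A sketch of what that looks like in Ada (names are illustrative); the result's length is only known at run time, and no heap allocation or explicit free is involved:

    function Repeat (C : Character; N : Natural) return String is
       Result : constant String (1 .. N) := (others => C);
    begin
       return Result;   --  GNAT returns this via its secondary stack
    end Repeat;

    --  S : constant String := Repeat ('x', 40);   --  length decided at run time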


There are two stacks, the call stack and a stack for returned values of dynamic size; not very complicated, is it?


Yes that is what I mean by allocation being handled by the compiler.


Although Ada still isn't fully memory safe. Read-before-write causes undefined behaviour, if I recall correctly.


Accesses - pointers - are automatically nulled when declared. So you won't silently screw up who-knows-what with an uninitialized access, though yeah, it's not perfectly safe.


You can also declare access types as "not null" in which case initialization is compulsory.

http://www.ada-auth.org/standards/12rm/html/RM-3-10.html#p13...


I didn't mean pointers/access-types, I meant ordinary integer-type locals.


You can put a pragma Warning_As_Error ("never assigned"); or use the respective compiler switch on GNAT.

Other compilers have similar switches.

Heck even C and C++ have them, although few make use of them.


Also, I think Ada is fully memory-safe but for initialization. That is to say, the only way in which it's not memory-safe, is regarding uninitialized variables. (That's assuming of course that you don't disable checks.)

I'm not sure if it lets you shoot yourself in the foot regarding invalid type conversions, misuse of unions, that kind of thing.



Pragma Initialize_Scalars for the win. But at runtime.

And if you can afford it, CodePeer (static analysis) finds most of the ones the compiler doesn't find. And then if you can live with the SPARK subset, you get proof of initialization in 'bronze' mode :-).


In addition to what others have mentioned in direct response to the examples you gave, there are additional safety factors - both by default and opt-in.

Pointers - accesses in Ada parlance - have rules that help ensure you can't have an invalid access. For one, the object itself has to be declared 'aliased' for you to create accesses to it. There are also rules that, for example, ensure you can't create an access value designating an object that lives at a deeper scope than the access type itself.
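A tiny sketch of those two rules (names are illustrative):

    procedure Access_Demo is
       type Int_Access is access all Integer;

       X : aliased Integer := 42;   --  "aliased" is what permits X'Access at all
       P : Int_Access;
    begin
       P := X'Access;   --  OK: X does not live at a deeper level than Int_Access
       P.all := 0;
       --  In a nested block, Local'Access assigned to P would be rejected,
       --  because Local could disappear while Int_Access values still exist.
    end Access_Demo;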

There are also memory pools that you can use. So you can specify that all allocations of a type occur in a specific pool or sub-pool. In addition to letting you control how allocation occurs, you can also use them for memory safety. All items in a sub-pool will be freed when the pool falls out of scope, so you can simply leave deallocations to occur that way.

There are also concurrency tools baked-in, with runtime support at least. Specifically in terms of memory safety, you have protected objects. They will automatically ensure you have a single writer at a time, and will do other nice things like still allow multiple readers. If you need to, you can get a fair bit of control over how it all works.
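For illustration, a minimal protected object wrapped in a package (not from the thread):

    package Counters is
       protected type Counter is
          procedure Increment;              --  writers get exclusive access
          function  Value return Natural;   --  read-only calls may run concurrently
       private
          Count : Natural := 0;
       end Counter;
    end Counters;

    package body Counters is
       protected body Counter is
          procedure Increment is
          begin
             Count := Count + 1;
          end Increment;

          function Value return Natural is
          begin
             return Count;
          end Value;
       end Counter;
    end Counters;

Callers just invoke Increment and Value on a Counter object; the runtime takes care of the locking.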


From the book:

> dynamic checks (such as array bounds checks) provide verification that could not be done at compile time. Dynamic checks are performed at runtime, similar to what is done in Java.


Check out the table on https://docs.adacore.com/spark2014-docs/html/ug/en/usage_sce....

> SPARK builds on the strengths of Ada to provide even more guarantees statically rather than dynamically. As summarized in the following table, Ada provides strict syntax and strong typing at compile time plus dynamic checking of run-time errors and program contracts. SPARK allows such checking to be performed statically. In addition, it enforces the use of a safer language subset and detects data flow errors statically.


Very nicely done, Ochem.

65 pages of straightforward translation.



