
Learning from Ada - rspivak
http://www.monkeysnatchbanana.com/2016/02/22/learning-from-ada/
======
nostrademons
"The C compiler made me feel adequate. Unlike Ada, I regularly got my programs
to compile. And so, I made my choice and went with C."

I think this dynamic plays out in a lot of technology evaluation decisions.
It's probably why techs like Javascript, MongoDB, Ruby on Rails, Java, PHP,
MySQL, Wordpress, etc. have gotten widespread adoption despite numerous
technical flaws. They have a very low barrier to getting _something_ useful up
on screen, and feed people's need for instant gratification. When you feel
good about yourself while using the language, you feel good about the
language.

The interesting thing about the software industry is that network effects are
often so strong that it's rational for a disciplined, expert developer to use
one of these technologies rather than something more niche that plays to their
expertise. You may hate Java, but if you need to use
Hadoop/Storm/SparkML/OpenNLP and their myriad of well-tested libraries, it may
be a far better choice than building your own distributed big-data stack.

I guess this is the idea behind "Worse is Better" [1]. It makes me wonder if
"better" will ever be "better" (perhaps after the pace of adoption in the
software industry slows down, and we start spending time getting known product
architectures right rather than finding the next big unknown product
architecture), or whether by that time the entrenched base written on top of
ad-hoc technologies will be too big to change.

[1]
[https://www.dreamsongs.com/WorseIsBetter.html](https://www.dreamsongs.com/WorseIsBetter.html)

~~~
jakub_h
> It's probably why techs like Javascript, MongoDB, Ruby on Rails, Java, PHP,
> MySQL, Wordpress, etc. have gotten widespread adoption despite numerous
> technical flaws. They have a very low barrier to getting something useful up
> on screen, and feed people's need for instant gratification.

It might not be just about that. Sometimes you don't know what you're doing,
and you have to find out first. What use is it if you painstakingly design and
craft a perfectly engineered program, only to find out that you actually
needed something else? Being able to experiment is precious. Being forced to
handle everything before the thing even compiles is not all that useful at
times. There's a reason why interactive environments like Lisp and Smalltalk
_don't_ force you to define all functions that could plausibly get called
(and when they do get called, you can even add them on the fly and continue).

~~~
pka
Static languages, in particular Haskell, PureScript, etc, are _unparalleled_
when it comes to fast prototyping or design space exploration. The compiler
can show you whether your idea even makes sense before writing big chunks of
business code.

How? By building the structure/hard stuff first and letting the compiler
verify it while leaving out the uninteresting parts.

A very simple example. Let's say you have some data model with people,
addresses, whatever. You want to see what you need to do to get a list of
addresses for some group of people.

You start by defining the structure of your data model (this can depend on DB
schema, business requirements, whatever):

    
    
        data Person = Person { name :: String, city :: String }
    
        data Address = Address { addrCity :: String, zipcode :: String, street :: String }  -- addrCity avoids a duplicate-field clash with Person's city
    

Now, go straight to the hard part. No need to fuck around with DB connections,
mock data, whatever:

    
    
        getAddresses :: [Person] -> [Address]
        getAddresses = undefined
    

Leave out the implementation for now. So probably we are gonna need a way to
get an Address for a Person. Ok:

    
    
        addressForPerson :: Person -> Address
        addressForPerson = undefined
    

Again, leave out the implementation. Now go back to `getAddresses` and see if
we can implement it now:

    
    
        getAddresses = map addressForPerson
    

Cool, the compiler doesn't complain. So far we've verified that _if_ we have a
function like `addressForPerson` we can safely assume that our design is going
to work.

Now we can go back to `addressForPerson` and implement it. But it turns out we
only have a city stored in Person, so the best we can do is get back a list of
potential addresses for each person. So we need to adjust that type signature:

    
    
        addressForPerson :: Person -> [Address]
        addressForPerson = undefined
    

Leave out the implementation; it's trivial, so no need to spend time on it
now. But now the compiler complains that `getAddresses` doesn't make sense
anymore! We are saying that, given a list of people, we can get a list of
addresses, but that's not true anymore. We found out that we can only get a
list of potential addresses, so we have to adjust the type signature:

    
    
        getAddresses :: [Person] -> [[Address]]
    

Now every use of `getAddresses` will have to be adjusted to the new
requirements. Etc etc.
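
Putting the pieces together, a complete, compiling version of the sketch might look like this. The body of `addressForPerson` is a toy stand-in invented for illustration, and the `addrCity` field name is chosen to avoid clashing with Person's `city` under standard Haskell records:

```haskell
-- Self-contained version of the final design. The body of
-- addressForPerson is a toy stand-in: it just fabricates two
-- candidate addresses from the person's city.

data Person = Person { name :: String, city :: String }

data Address = Address
  { addrCity :: String
  , zipcode  :: String
  , street   :: String
  } deriving Show

-- We can only narrow down to candidates, hence the list.
addressForPerson :: Person -> [Address]
addressForPerson p =
  [ Address (city p) "00000" "Main St"
  , Address (city p) "00001" "High St"
  ]

-- The adjusted signature: one candidate list per person.
getAddresses :: [Person] -> [[Address]]
getAddresses = map addressForPerson

main :: IO ()
main = print (map length (getAddresses [Person "Ada" "Paris"]))  -- prints [2]
```

Only at this point, with the shape of the design verified, would the stand-in body be replaced by real DB code.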

Granted, this is a very simple example, but I hope it illustrates how static
languages facilitate extremely fast prototyping. Sure, a Lisp REPL is very
very nice, but in my personal experience it's no match for a type system. We
didn't even have to mock data to start experimenting!

(And yes, Haskell, OCaml, PureScript, Elm etc all have a REPL too).

~~~
nickbauman
I don't find this example particularly enlightening. I'm not really ever
seeing problems where I'm stuffing an integer into where I wanted a string in
dynamic languages. Or stuffing a Person into a Gorilla. These problems seem
important and they tickle programmers' warm fuzzies (unless you're like me and
you've been programming static and dynamic for more than 20 years).

The small amount of scientific study around what type checking and static
typing give you suggests that you avoid around 3% of bugs, and even those 3%
are easier to fix after the fact than they are to write correctly in the
first place. I agree that the subject needs more study, but given what we
know now, the real win would be a dev process that lowers the criticality of
failure (like TDD, CI, and other Agile practices), dispensing with static
typing for most projects.

~~~
pka
While I disagree (static typing is much more than catching type conversion
errors), my point was that it makes experimentation and design exploration
much easier and faster. Without types, you'd need mock data for even the
simplest of programs, and changing the mock data's structure is much more
painful than changing a type somewhere and having the compiler point out the
places ( _all the places_ ) where your program then turns out to be
incoherent. Not to mention that types are much more than Persons or Gorillas;
they can be function types too (Person -> Gorilla) that are passed around as
values.
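
As a tiny sketch of that last point (all names invented for illustration), a function type like `Person -> Gorilla` is itself a value you can pass around, and the compiler checks that every caller supplies one with exactly that shape:

```haskell
-- Sketch: a function type (Person -> Gorilla) used as a
-- first-class value. All names are invented for illustration.
data Person  = Person String
data Gorilla = Gorilla String deriving Show

-- transmogrify accepts any Person -> Gorilla function; the
-- compiler rejects callers that pass anything else.
transmogrify :: (Person -> Gorilla) -> [Person] -> [Gorilla]
transmogrify f = map f

main :: IO ()
main = print (transmogrify (\(Person n) -> Gorilla n) [Person "Harambe"])
-- prints [Gorilla "Harambe"]
```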

~~~
nickbauman
Sure. But does your static typing system allow you to skip writing automated
tests? Pretty sure everyone writes tests these days unless they're spiking
something. If you're writing tests, you will have to come up with the mock
data anyway. And the kind of exploration you're doing I tend to do from a test
regardless; once my exploration is done, I also have a test that captures my
work. You still have to write one either way, so the question of whether to go
static or dynamic is moot in this case. It no longer matters much.

Maybe I need to spend some time in Haskell and come over to your way of seeing
it. But the studies show that constructing the type system for your problem is
very difficult and remains error-prone. People routinely make mistakes with
type systems of all kinds. I've found I spend more time making the compiler
happy than it does making me happy.

~~~
pka
The sibling comment points out property based testing (see QuickCheck), which
is a good idea in any language. And yes - you don't need to write that many
unit tests (but still some) when using a type system, because you can encode a
lot of invariants in the types.
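
For reference, a property-based test with QuickCheck looks like this. The property shown is a standard toy example, not one from this thread, and the sketch assumes the QuickCheck library is installed:

```haskell
-- Minimal QuickCheck usage: state a property as a plain
-- function to Bool, and the library generates ~100 random
-- inputs trying to falsify it.
import Test.QuickCheck

-- Property: reversing a list twice yields the original list.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice
```

Instead of hand-writing mock data for each case, you describe an invariant once and let the library search for counterexamples.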

But still, my point stands - writing tests during early experimentation is a
lot harder than just altering some type and letting the compiler infer what
needs to be done or what you broke.

I've spent some time writing Clojure and still use dynamic languages a lot for
smaller tasks, so I know both sides. This doesn't mean much, but I'd really
encourage you to try some ML language and spend some time with a nice type
system. Haskell is probably too big a time investment, so maybe try Elm [0] -
it's very beginner friendly and a lot of the people using it come from
JavaScript.

[0] [http://elm-lang.org](http://elm-lang.org)

------
lanestp
Ada has a lot of cool stuff going for it. I liked the type system well enough,
and a well-written Ada program is very easy to read. My problem with it was
that I always had trouble with constructs like `while ... loop`, `if ...
then`, and `case ... is`. The number of times a program failed to compile
because I got the magic words wrong drove me nuts! The author is bang on in
the comparison to C, which despite its flaws is very internally consistent.

~~~
david-given
There's actually sense to the magic words. Each word which opens a block has a
unique word which ends the block. So, begin..end, if..end if, loop..end loop,
etc.

The advantage of this is that it's much harder to screw up block terminations
--- one thing I've done many times in C is when closing a chain of blocks
with:

    
    
                }
              }
            }
            do_something();
          }
        }
    

is to miscount the braces and put do_something() in the wrong place. In Ada,
that'd be:

    
    
                end if;
              end;
            end case;
            do_something();
          end loop;
        end;
    

...which is far more meaningful.

~~~
user12357z7
You should read about how to write clean code... (instead of switching to
another programming language)

------
trott
IMHO Ada violated the most fundamental principle of programming, which is DRY
(Don't repeat yourself). You have to invoke the name of every function twice
while defining it, etc. The language seems to be designed for multi-page
blocks, which are a bad idea to begin with.

It also lacks memory safety, despite being a safety-oriented language.

If someone added a better syntax to Ada (probably easy) and Rust-like memory
safety (probably hard), that would make it a higher-value proposition.

EDIT: There's also an article demonstrating a hole in Ada type safety:
[http://www.enyo.de/fw/notes/ada-type-safety.html](http://www.enyo.de/fw/notes/ada-type-safety.html)

~~~
bitwize
I'm a big believer in RYINTBDIMOTB (Repeating Yourself Is Not The Big Deal
It's Made Out To Be). In a safety-oriented language, checking for consistency
in all of the repeated instances of a thing is one form of low-hanging-fruit
safety check.

If you're restricting yourself to access types you have good memory safety.
Access types cannot be aliased unless declared so, and are subject to
accessibility checks to verify that they are "live". Unsafe stuff in Ada must
be used explicitly; you cannot, in general, accidentally the whole stack or
heap unless you are using a pathological coding style.

~~~
wyager
> checking for consistency in all of the repeated instances of a thing is one
> form of low-hanging-fruit safety check.

I shouldn't have to make such checks in the first place. Not making a mistake
in the first place is preferable to detecting it later on.

~~~
jacques_chester
> _Not making a mistake in the first place is preferable to detecting it later
> on._

Not trying to detect a mistake is a great way to form the impression that
you're not making them in the first place.

~~~
wyager
Perhaps I was not clear. A preferable alternative to detecting mistakes is to
ensure that they are impossible to make in the first place.

------
opk
I worked with Ada in the early part of my career. For the most part, the
strong typing really helps, but it can often make the logic messier because
you can't push an integer subtype one past its maximum value. Unlike C, the
code base tends to remain quite readable and understandable even in big
projects, and it is more fun to use than Java (which I find soulless and
dull).

~~~
35bge57dtjku
How is it more fun than Java???

~~~
na85
Anything is more fun than java.

------
leoh
I loved this article. I never knew about Userland Frontier Kernel and enjoyed
looking it up. I also enjoyed other bits of history sprinkled in the article
especially about Macintosh switching from Pascal to C and about what it was
like to use the Ada and Pascal compilers.

The author's central argument is "don't give up on a language that's hard
because it might have significant, practical advantages." In particular, he
argues that Ada might be superior to C because C suffers from segfaults or
other memory-related issues that Ada does not suffer from. Because C was
initially easier to write, he went with it.

I think this principle, that it is worth sticking with a language despite the
difficulty of writing it, is flawed. While Ada was both difficult and resolved
memory issues, languages like Python gracefully avoid memory issues without
being difficult to write.

Naturally, language safety is about more than ensuring memory integrity.
Haskell ensures memory integrity and type safety, and happens to be a hard
language. I think Haskell is a stepping stone: in the future, languages will
be safer but also easier to write. Resolving this tension, making a language
both safe and easy to use, is merely a matter of time. For now, I'll keep
writing Python.

~~~
JohnStrange
Ada is fairly easy to use once you know it. It has a few quirks and many nice
features. It just takes a bit more time to learn, because it's way more
strongly typed than most other languages. IMHO, its main advantage is not
safety, though, but readability and long-term maintainability. You can take a
30-year-old Ada program and compile it with the latest version of GNAT.

Anyway, please don't compare languages like Ada and Python. That's like
comparing Forth and Smalltalk or apples with oranges. They have different
purposes. Ada compiles to executables that run around as fast as C and C++,
and like those languages it's not very suitable for rapid lego-brick type
programming. For gluing together existing libraries, languages like Python are
awesome but you wouldn't want to write an Operating System in Python.

~~~
icebraining
You can compile any Python program to C using Cython, hence Python is as fast
as C.

~~~
Kapura
That's like saying "you can compile C to x86 ASM, so C is as fast as assembly"

Abstraction has costs. They can be reduced, but they don't evaporate.

~~~
icebraining
My point is that "fast as C" is meaningless. You can write quite
abstract/unoptimized C. And you can write Python code with a few Cython-
specific annotations that rival "regular" C code.

------
Annatar
_And now whenever I encounter a difficult moment learning new languages like
Haskell or Pony, I try to remember my Ada /C decision and stick with the
language whose compiler is trying to tell me I'm doing it wrong._

What this tells me is that the author gained some insight, but still did not
learn his lesson: one of the worst mistakes one can make in programming is
learning new languages just because they are there. Pick at most three
languages covering 95-99% of required functionality, then master the living
daylights out of them; you'll become seemingly superhuman at programming,
writing small, portable code in a fraction of the time it will take the
dingle-dongs out there to do the same thing, just because you know your tools
so well. To be clear: it takes about a decade working in a language every day,
eight hours per day, to begin to master it. That's per language, and I wrote
_pick three languages_. Don't cater to fashion, because there is always some
dingle-dong out there thinking he can invent a new language where things can
be done easier and better, but that's not true at all.

------
microcolonel
Sean misses the irony that he would have given up on writing a lot of software
if he tried to do it in Ada rather than C.

I think C is an excellent and useful language which doesn't attempt to change
the way you program. While one can argue that one shouldn't be able to compile
mistakes, the cost of avoiding mistakes often exceeds the cost of amending
them. A sort of personal “It's easier to ask for forgiveness than for
permission” situation.

I often ponder whether the time I spend debugging C programs is worth it. Each
time I conclude that I wouldn't have bothered to write it without C.

~~~
PeCaN
It depends what you're writing.

I wrote an incremental generational garbage collector in C, and rather
regretted the time I spent debugging it. If I wanted it to be at all
concurrent, I think I would've given up doing it in C.

Right tool for the right job and all that. But lately I've strongly been
preferring Rust, Ada, and ATS for anything I'd use C for.

~~~
microcolonel
Fair enough, I do like ATS as well; but there are a lot of programs which are
just too frustrating to write at all without being paid a lot of money for a
decade.

------
qwertyuiop924
I don't think a tool has to be hard to use to be better. You can learn most of
Lisp and Smalltalk very fast, but they're regarded as some of the most useful
and effective tools out there.

~~~
sportanova
How does Smalltalk compare to Objective-C? It's on my list of languages to
learn because it started with message passing, but if its user experience is
as bad as Objective-C's I might pass.

~~~
qwertyuiop924
It depends on what you think is bad about Objective-C's UX. Smalltalk's syntax
is much simpler than Objective-C's. It has three message-passing formats:

    
    
      object message.
      object + arg. "There are very few valid names for these sorts of messages"
      object argNameOne: argOne argNameTwo: argTwo.
    

Double quote is a comment delimiter, period is the statement separator, and
semicolon is the cascade operator, allowing you to send multiple messages to
one object. It has a block (or lambda) syntax similar to Ruby's, although
multiple blocks can be passed to a function.

This is pretty much all of Smalltalk's syntax. Yes, really. There's also
syntax for local variables, returning values from method calls, and maybe one
or two other things. That's it.

It should also be noted that Smalltalk is inseparable from its IDE and image.
GNU Smalltalk tries, but it's not the same. To speak of one is to speak of the
other, so I shall speak of both.

Smalltalk code exists entirely within its image, along with all the tools that
make up the IDE. It's sort of an OS in and of itself, although you can
interact with the native system. The practical upshot is that when you edit
Smalltalk code, you have all the source code for the entire implementation
along for the ride: you can not only query Smalltalk about your own code, you
can query it about itself using the same mechanism. You can also edit and
debug your app in real time, as it runs. Smalltalk's debugger allows you to
inspect every single stack frame and make changes to your source as
necessary. You aren't editing text that will become objects: you are editing
objects. And because the entire system state is saved to the image, none of
the changes you make to the running code will be lost, unlike most Lisps,
where changes you make at the REPL go away as soon as your app shuts down.
It's truly unlike anything else you will see.

~~~
sportanova
Thanks! It sounds like the IDE is the real feature

~~~
passthefist
I'm not really well versed in the language, but from what I've seen it takes
the whole 'everything is an object' concept seriously, which I think is kinda
fun.

For example, booleans are objects that can have two values: True and False.
There are two methods on the boolean object, ifTrue: and ifFalse:, and they
both take a code block/anonymous function as the only argument. Both True and
False override these methods. True's version of ifTrue: calls the code it's
passed, and False's version does nothing (and vice versa for ifFalse:).

For example:

    
    
      a < b
        ifTrue: [^'a is less than b']
        ifFalse: [^'a is greater than or equal to b']
    

They're implemented more or less as a standard library package rather than a
language construct. You could potentially extend the boolean class with
different implementations of ifTrue and ifFalse, maybe reversing the logic or
logging the branch taken or whatever. The functionality can be changed
dynamically.

I think it's neat when a language eats itself like that.

~~~
qwertyuiop924
Yeah. Smalltalk is pretty much that. Even moreso than Lisp. And now for the
mandatory quote from "A Brief, Incomplete, and Mostly Wrong History of
Programming Languages":

    
    
      1980 - Alan Kay creates Smalltalk and invents the term "object oriented."
      When asked what that means he replies, "Smalltalk programs are just objects."
      When asked what objects are made of he replies, "objects." When asked again he says
      "look, it's all objects all the way down. Until you reach turtles."

------
siscia
To me it sounds a lot like the internal fight I'm having between Rust and Go.

I already write Go professionally, but it sounds like in the long term Rust
will be a better deal...

------
voycey
I was taught Ada at university. It is a complete bitch of a language, but it
forces you to be disciplined and do things "Properly" - something that is
crucial in real-time systems. I don't regret being taught it; I do however
regret that it was used as the main teaching language, meaning that real-world
usage was an afterthought!

------
Marazan
Ada is a large verbose language where the verbosity actually gives you
something (unlike, say, late nineties Java).

