
A Conversation with Language Creators: Guido, James, Anders and Larry [video] - DonHopkins
https://www.youtube.com/watch?v=csL8DLXGNlU
======
DonHopkins
Anders Hejlsberg:

Maybe I'll just add, with language design, you know one of the things that's
interesting, you look at all of us old geezers sitting up here, and we're
proof positive that languages move slowly.

A lot of people make the mistake of thinking that languages move at the same
speed as hardware or all of the other technologies that we live with.

But languages are much more like math and much more like the human brain, and
they all have evolved slowly. And we're still programming in languages that
were invented 50 years ago. All the principles of functional programming
were thought of more than 50 years ago.

I do think one of the things that is luckily happening is that, like as Larry
says, everyone's borrowing from everyone, languages are becoming more multi-
paradigm.

I think it's wrong to talk about "Oh, I only like object oriented programming
languages, or I only like imperative programming, or functional programming".

It's important to look at where is the research, and where is the new
thinking, and where are new paradigms that are interesting, and then try to
incorporate them, but do so tastefully in a sense, and work them into whatever
is there already.

And I think we're all learning a lot from functional programming languages
these days. I certainly feel like I am. Because a lot of interesting research
has happened there. But functional programming is imperfect. And no one writes
pure functional programs. I mean, because they don't exist.

It's all about how can you tastefully sneak in mutation in ways that you can
better reason about. As opposed to mutation and free threading for everyone.
And that's like just a recipe for disaster.

~~~
jcora
> And no one writes pure functional programs. I mean, because they don't
> exist.

This is, both mathematically and practically speaking, incorrect. People write
pure programs all the time.

For example, `f x = putStrLn x` is a pure program. This is because, despite
the name, nothing actually "happens" when this function is evaluated. I
suggest you try it: just bind `f "hello world"` to a variable! Nothing
happens. It's just an immutable value, computed by a pure function with no
side effects.

There are situations where IO values get executed, but the crucial point is
that computation and execution are logically separated. You can have a
structure of IO-producing functions, get a list of IOs from them, filter and
modify and reorder them, bind them, process them: they're just data. Pure.
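
A rough Python sketch of this effects-as-data idea (the helper names here are
invented for illustration, and Python thunks are only an analogy for Haskell
IO values, not the real semantics):

```python
# An "action" is just data: a zero-argument callable that *describes*
# printing a line. Building one performs no I/O.
def put_str_ln(s):
    return lambda: print(s)

# A structure of actions: still just data, nothing printed yet.
actions = [put_str_ln("world"), put_str_ln("hello")]

# Filter, reorder, combine -- ordinary list manipulation.
actions.reverse()

# Only an explicit interpreter step actually performs the effects.
def run_all(acts):
    for act in acts:
        act()

run_all(actions)  # prints "hello" then "world"
```

Until `run_all` is called, the list of actions can be passed around, stored,
and transformed without any output occurring.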

~~~
rujuladanh
Yes, we all know how functional languages work; this is Y Combinator after
all. However, that is a pedantic remark that misses OP's point.

It is clear OP meant a program as in an entire non-trivial system, not
individual functions; even if they are programs themselves.

Regardless, when one talks about function purity, it is about the final
evaluation, not about partial binding or laziness.

~~~
jcora
This is actually not at all pedantic but a very fundamental, if subtle at
first glance, point about purity. Once you start doing more advanced monadic
programming it stops being subtle at all. Many people, yes "even on HN", are
totally fooled by do notation and the imperative nomenclature, but there is
nothing imperative going on.

You misunderstand it as well: it has nothing to do with evaluation mode. Idris
is eagerly evaluated, for example, and _that function is still pure_! `f x =
putStrLn x` gets "finally" evaluated and there is still no side effect. It is
fully evaluated: you can bind the result to a variable and apply another
function to it. Just as `2^100` is a pure expression, so is `putStrLn "hello"`.

> It is clear OP meant a program as in an entire non-trivial system, not
> individual functions; even if they are programs themselves.

It is not clear. Does he mean including the OS and hardware? Because no matter
how complex, a pure program is still pure.

~~~
rujuladanh
Yes, yes, of course. All my C programs are pure too!

They are a simple mapping of a tuple (hardware state, queue of world events)
onto itself! Truly marvelous!

Vive la pureté!

PS. I am now working on abstracting this further, and I just realized the
universe itself is pure too; but somehow I got some strange behaviors when I
started to look into particles too closely... will report back soon.

~~~
jcora
You are really not getting this distinction; I'm not sure how much clearer I
can make it. You can evaluate `putStrLn "hello"` a billion times without
anything "happening" at all, because it is just a function. Maybe it has to be
made clear that purity is a property of languages and language constructs, and
of how you can reason about them. If you model a physical system
mathematically, of course you will have a pure formal description of it, but
that's not the domain where this concept applies. Even then, it can only be
said of your _model_ that it is pure, and the point of pure languages is to
allow for this mode of reasoning.

When you call a C function, you cannot know whether equational reasoning
holds, whereas you can be sure when you're using a pure language. Therefore,
when you're working on a project that is 100% in say Idris, you are in fact
making a totally pure program. You could make an Idris compiler that inserts
random perturbations in various functions, or you could look at crashes etc.,
but that is not where the concept of purity applies; it's a category error to
think it does.

~~~
AnimalMuppet
All right, but...

When I actually get output that says "hello", that isn't pure, is it? And
there is some way for me to actually produce that output, isn't there? It may
not be "evaluating putStrLn", but it's _something_.

And if that's all true, then rujuladanh is essentially correct: You're being
pedantic in a way that misses the main point.

------
DonHopkins
"My favorite is always the billion dollar mistake of having null in the
language. And since JavaScript has both null and undefined, it's the two
billion dollar mistake." -Anders Hejlsberg

"It is by far the most problematic part of language design. And it's a single
value that -- ha ha ha ha -- that if only that wasn't there, imagine all the
problems we wouldn't have, right? If type systems were designed that way. And
some type systems are, and some type systems are getting there, but boy,
trying to retrofit that on top of a type system that has null in the first
place is quite an undertaking." -Anders Hejlsberg

~~~
hzhou321
So how do you express the idea of null when the language doesn't have null?

~~~
tybit
The problem isn’t actually null per se, it’s an implicit null.

E.g. it’s perfectly fine to say you might have an empty value via ‘Foo | null’,
‘Foo?’, or something like ‘Maybe Foo’, but to sneak it into plain old ‘Foo’ is
problematic (a billion dollars’ worth of problems).
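
A small Python sketch of the explicit-empty-value idea (the `Foo` class and
function names here are made up for illustration; a checker like mypy enforces
the distinction statically):

```python
from typing import Optional

class Foo:
    def greet(self) -> str:
        return "hello"

def needs_foo(foo: Foo) -> str:
    # Plain Foo: callers may not pass None, so no check is needed here.
    return foo.greet()

def maybe_foo(foo: Optional[Foo]) -> str:
    # Optional[Foo] is Foo | None: the empty case is visible in the
    # signature, so it must be handled before using the value.
    if foo is None:
        return "no foo"
    return foo.greet()
```

With an implicit null, every `Foo` parameter silently carries the `None` case;
here only `maybe_foo` does, and its type says so.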

~~~
brazzy
I'd say the real problem is having nullability as the _default_ for all types.

------
DonHopkins
Note: the talk actually starts at 50:00. It starts out with bad audio, which
gets better eventually, and it has a long gap in the middle when they go for a
break (since it’s a recording of a live stream). But the discussion and back-
and-forth is riveting!

I like to transcribe and write up articles about videos like this, and this is
a worthwhile candidate, if I can find the time. It would be a lot more
accessible as annotated text with links and illustrations, than a long hard to
hear video. (It’s also good to check with the people talking to make sure the
transcription was right and captured their meaning.)

------
DonHopkins
"The features I wanted to add were negative features. I think all of us as
language designers have borrowed things. You know, we all steal from each
other's languages all the time. And often we steal good things. And for some
reason, we also steal bad things. [Like what?] Like regular expression syntax.
[Oh, yeah, I'll give you that one.] Like the C precedence table. [Ok,
another.] Ok, these are things I could not fix in Perl 5, and we did fix in
Perl 6. [Ahh, ok. Awesome!]" -Larry Wall

~~~
whoisjuan
REGEX in Perl 6 looks pretty amazing:
[https://docs.perl6.org/language/regexes](https://docs.perl6.org/language/regexes)

------
DonHopkins
James Gosling wants to punch the "Real Men Use VI" people.

"I think IDEs make language developers lazy." -Larry Wall

"IDEs let me get a lot more done a lot faster. I mean I'm not -- I -- I -- I
-- I -- I'm really not into proving my manhood. I'm into getting things done."
-James Gosling

~~~
etse
I saw that part of the video too, and although Larry looked like he was having
a fun moment, I think he was also quite insightful.

IDEs help people write programs, but to language developers, an IDE might be
the easy and less elegant way out of a problem in the design of the language.

~~~
DonHopkins
Anders Hejlsberg also made the point that types are documentation.

Programming language design is user interface design because programmers are
programming language users.

"East Coast" MacLisp tended to solve problems at a linguistic level that you
could hack with text editors like Emacs, while "West Coast" Interlisp-D tended
to solve the same problems with tooling like WYSIWYG DWIM IDEs.

But if you start with a well designed linguistically sound language (Perl, PHP
and C++ need not apply), then your IDE doesn't need to waste so much of its
energy and complexity and coherence on papering over problems and making up
for the deficiencies of the programming language design. (Like debugging mish-
mashes of C++ templates and macros in header files!)

~~~
daotoad
Types as documentation is one of the things I like best about Perl6 types.
Perl6 has the concept of a `subset` of a type. Let's say I want to write a
function that takes a name which must be only one line of text, and a positive
integer as arguments.

In Perl6 I can document those requirements in the subroutine signature:

    
    
      sub foo( Str $name where *.lines == 1, Int $count where * > 0 ) {
          say "Name is $name with count $count";
      }
    

Or if I want to name the concepts and reuse them, I can say:

    
    
      subset LegalName of Str where *.lines == 1;
      subset PositiveInteger of Int where * > 0;
    
      sub bar( LegalName $name, PositiveInteger $count ) { foo( $name, $count) }
    

You can use these subsets for multiple dispatch as well:

    
    
      # Print a single line argument.
      multi sub print-it( Str:D $it where *.lines == 1 ) {
        say $it;
      }
    
      # Print a multiline string as a single line
      multi sub print-it( Str:D $it ) {
        say $it.lines.join('');
      }
    
      # Handle anything that we didn't expect
      multi sub print-it( Any $it ) {
        Failure.new( "it was unprintable", $it.gist );
      }
    

IME, this makes it easy to write self documenting code.

~~~
DonHopkins
Then you'd probably love Ada! Does Perl 6 have postconditions, too?

[https://www.adacore.com/gems/gem-31](https://www.adacore.com/gems/gem-31)

Gem #31: Preconditions/postconditions

>The notion of preconditions and postconditions is an old one. A precondition
is a condition that must be true before a section of code is executed, and a
postcondition is a condition that must be true after the section of code is
executed.

[http://www.ada-auth.org/standards/12rat/html/Rat12-2-3.html](http://www.ada-auth.org/standards/12rat/html/Rat12-2-3.html)

>We will look first at the simple case when inheritance is not involved and
then look at more general cases. Specific preconditions and postconditions are
applied using the aspects Pre and Post respectively whereas class wide
conditions are applied using the aspects Pre'Class and Post'Class.

>To apply a specific precondition Before and/or a specific postcondition After
to a procedure P we write

    
    
        procedure P(P1: in T1; P2: in out T2; P3: out T3)
            with Pre => Before,
                 Post => After;
    

>where Before and After are expressions of a Boolean type (that is of type
Boolean or a type derived from it).

If your language supports preconditions, postconditions and invariants, then
you can pursue the "Design by Contract" approach to programming, coined by
Bertrand Meyer for Eiffel.

That's when you guarantee that if all the preconditions are met before calling
a function, then all the postconditions will be met after it returns (but the
object may go through intermediate states where the postconditions aren't met
before it returns, like while inserting an item into a doubly linked list).
Unfortunately that idea breaks down when you have multiple threads that might
enter a function at the same time, so you have to use object level locks,
which have their own problems.
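
A minimal Python sketch of checking preconditions and postconditions at run
time (the `contract` decorator here is invented for illustration, not a
standard API):

```python
import functools

def contract(pre=None, post=None):
    """Assert a precondition on the arguments before the call and a
    postcondition on the result after it."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), "precondition violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), "postcondition violated"
            return result
        return wrapper
    return decorate

# Precondition: n must be non-negative; postcondition: result >= 1.
@contract(pre=lambda n: n >= 0, post=lambda r: r >= 1)
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)
```

Real Design by Contract systems like Eiffel's also check class invariants and
handle inheritance of contracts, which a simple decorator like this doesn't
attempt, and it does nothing for the multithreading caveat above.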

[https://en.wikipedia.org/wiki/Design_by_contract](https://en.wikipedia.org/wiki/Design_by_contract)

[https://en.wikipedia.org/wiki/Eiffel_(programming_language)](https://en.wikipedia.org/wiki/Eiffel_\(programming_language\))

>The design of the language is closely connected with the Eiffel programming
method. Both are based on a set of principles, including design by contract,
command–query separation, the uniform-access principle, the single-choice
principle, the open–closed principle, and option–operand separation.

>Many concepts initially introduced by Eiffel later found their way into Java,
C#, and other languages. New language design ideas, particularly through the
Ecma/ISO standardization process, continue to be incorporated into the Eiffel
language.

~~~
lizmat
Yes, Perl 6 has pre and post-conditions. They are available in the form of the
`PRE` and `POST` phasers:
[https://docs.perl6.org/language/phasers#PRE](https://docs.perl6.org/language/phasers#PRE)
,
[https://docs.perl6.org/language/phasers#POST](https://docs.perl6.org/language/phasers#POST)

------
DonHopkins
"I have a feature that I am sort of jealous of because it's appearing in more
and more other languages: pattern matching. And I cannot come up with the
right keyword, because all the interesting keywords are already very popular
method names for other forms of pattern matching." -Guido van Rossum

~~~
zimablue
Note that Python does have pattern matching, just not to the same degree as
some languages.

    
    
      (a, (b, *c, d), *e) = [1, (2, 3, 4, 5), 6, 7]
    

Good for nested iterables, but it doesn't support dict/object unpacking.

~~~
tomp
Not sure that's _pattern matching_... I'd call that _destructuring_; for me,
_pattern matching_ also means the ability to _fail_ the match (and try
another).
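
To make that distinction concrete in Python (the `classify` helper here is
made up for illustration): unpacking can fail, but the failure is just an
exception, and "trying another pattern" has to be hand-rolled:

```python
def classify(value):
    # Destructuring: raises TypeError/ValueError if the shape doesn't fit.
    try:
        (a, b) = value
        return f"pair: {a}, {b}"
    except (TypeError, ValueError):
        pass
    # Fall through to another "pattern" manually.
    try:
        (a,) = value
        return f"single: {a}"
    except (TypeError, ValueError):
        return "no match"
```

(Python 3.10 later added a `match` statement that tries alternative patterns
natively.)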

~~~
arethuza
Presumably Prolog's unification is one of the most powerful 'pattern
matching' mechanisms?

[http://www.dai.ed.ac.uk/groups/ssp/bookpages/quickprolog/nod...](http://www.dai.ed.ac.uk/groups/ssp/bookpages/quickprolog/node12.html)

[Amused to see that the page I linked to there is over 20 years old!]

------
DonHopkins
"In the Java universe, pretty much everybody is really disciplined. It's kind
of like mountain climbing. You don't dare get sloppy with your gear when
you're mountain climbing, because it has a clear price." -James Gosling

------
tda
Unfortunately, the terrible production quality of the video makes the
conversations quite hard to follow. Going to try nonetheless.

------
bpyne
Anders made a great point in the discussion about optimization: his hunches
are often enough at odds with the profiler. I was heartened by that, given how
highly skilled a developer he is.

~~~
DonHopkins
I saw Alvy Ray Smith give a talk in which he mentioned how when he was writing
code, he had to just keep flipping the signs and coordinates around in the
equations until the math worked out right.

That made me feel a lot better about having to do the same thing all the time
too! Nothing ever works right (or runs fast) the first time, and you have to
just keep fiddling with it until it does.

[https://en.wikipedia.org/wiki/Alvy_Ray_Smith](https://en.wikipedia.org/wiki/Alvy_Ray_Smith)

