
Near Future of Programming Languages [pdf] - myth_drannon
http://dev.stephendiehl.com/nearfuture.pdf
======
Animats
Things to think about for the near future of programming languages:

\- The borrow checker in Rust is a great innovation. Previously the options
were reference counts, garbage collection, or bugs. Now there's a new option.
Expect to see a borrow checker in future languages other than Rust.
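The rule the borrow checker enforces can be sketched in a few lines. This is a minimal illustration (the `demo` function is invented for this purpose): any number of shared borrows may coexist for reading, an exclusive borrow permits mutation, and the two may never overlap.

```rust
// Sketch: the aliasing rule the borrow checker enforces. This compiles;
// the commented-out lines are the class of bug it rejects.
fn demo() -> Vec<i32> {
    let mut data = vec![1, 2, 3];

    // Any number of shared (&) borrows may coexist, but only for reading.
    let first = *data.first().unwrap();
    let sum: i32 = data.iter().sum();
    assert_eq!(first + sum, 7);

    // An exclusive (&mut) borrow allows mutation; no shared borrow may
    // overlap it (`push` takes `&mut self`).
    data.push(4);

    // Rejected at compile time: a shared borrow held across a mutation.
    // let alias = &data[0];
    // data.push(5);        // error[E0502]: cannot borrow `data` as mutable
    // println!("{alias}"); // because it is also borrowed as immutable
    data
}
```

The point of the comment stands: this rules out use-after-free and iterator invalidation statically, with no reference counts and no garbage collector.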

\- Formal methods are still a pain. The technology tends to come from people
in love with the theory, resulting in systems that are too hard to use. Yet
most of the stuff you really need to prove is very dumb. X can't affect Y. A
can't get information B. Invariant C holds everywhere outside the zone (class,
module, whatever) where it is transiently modified. No memory safety
violations anywhere. What you really need to prove, and can't establish by
testing, is basically "bad thing never happens, ever". Focus on that.
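A cheap approximation of the "invariant holds everywhere outside the zone where it is transiently modified" idea already exists without a prover: make the zone a module boundary. A hedged sketch in Rust (the `account` module and its API are invented for illustration) where a balance can never be observed negative from outside, because every mutator re-establishes the invariant before returning:

```rust
// Sketch: invariant "balance is never negative", enforced by module
// privacy rather than formal verification.
mod account {
    pub struct Balance(i64); // field is private: all mutation goes through the impl

    impl Balance {
        pub fn new() -> Balance {
            Balance(0)
        }
        pub fn deposit(&mut self, amount: u32) {
            self.0 += i64::from(amount);
        }
        /// Fails rather than break the invariant.
        pub fn withdraw(&mut self, amount: u32) -> Result<(), ()> {
            let next = self.0 - i64::from(amount);
            if next < 0 {
                return Err(()); // the invariant would break: refuse
            }
            self.0 = next;
            Ok(())
        }
        pub fn get(&self) -> i64 {
            self.0
        }
    }
}
```

What formal methods would add on top is a machine-checked proof that every path through the module preserves the invariant, instead of relying on review of this one file.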

\- The author talks about "Java forever". It's more like Javascript
Everywhere.

\- Somebody needs to invent WYSIWYG web design. Again.

\- Functional is OK. Imperative is OK. Both in the same program are a mess.

\- Multithread is OK. Event-driven is OK. Coroutine-type "async" is OK. They
don't play well together in the same program. Especially if added as an
afterthought.

\- Interprocess communication could use language support.

\- We still can't code well for numbers of CPUs in triple digits or higher.

\- How do we talk to GPU-type engines better?

~~~
PaulRobinson
I disagree with a few of your thoughts, but they're good thoughts!

* Javascript everywhere is a function of its low barrier to entry, but almost everybody agrees it is flawed as a language. If that's the future, we are screwed as an industry. One thing I've noticed (and I say this as a guy who wrote Ruby for 10+ years) is that type safety is becoming a hugely desired feature for developers again.

* WYSIWYG web design (a la Dreamweaver) died off a little because the tools saw a web page as standing in isolation. We know however that isn't interesting on its own - it needs to be hooked up to back-end functionality. Who is producing static HTML alone these days? In the case of SPAs it needs API integration. In the case of traditional web app dev, it needs some inlining and hooks to form submission points and programmatically generated links to other "pages". Making that easier is the hard part - seeing a web document as an artefact output by a running web application container.

* Multi-threaded, event-driven, coroutine-type patterns are fine in Go, to my eye. What's making you think we can't mix this up with the right type of language and tooling support?

* Is it that we can't code well for CPU counts > 100 or that the types of problems we're looking at right now that need that level of parallelism tend to be targeted towards GPUs or even ASICs? I think I'd need to see the kind of problems you're trying to solve, because I'm not sure high CPU counts are the right answer.

* Talking to GPU-type engines is actually pretty simple, we will deal with it the same way we deal with CPU-type engines: abstraction through a compiler. Compilers over time will learn how to talk GPU optimally. GPU portability over the next 20 years will be a problem to solve as CPU/architecture portability was over the last 40.

~~~
pjmlp
> WYSIWYG web design (a la Dreamweaver) died off ....

Which is why what we actually need is something a la Delphi.

~~~
tluyben2
I have been thinking about that and there is an issue with it; Delphi (and VB)
were written in a time when devs paid a lot for software tools. Besides some
niches (embedded) that is not really the case anymore. You expect to pay a few
tens of dollars at most in total, if that. And a lot of people (but that might
be the HN/Reddit echo chamber) demand that everything they work with be OSS as
well. Making 'a modern Delphi' is a _lot_ of work; years of it. And much of
that time is not 'fun': it's hard work polishing little parts, having user
test groups give feedback on it, and polishing it some more. The time when you
could be Borland seems gone (unfortunately, imho) and I'm not sure how you can
make the kind of polished tool you are talking about in the current climate.
Maybe someone else here has some different views though.

~~~
pjmlp
There are companies trying it though,

[https://anvil.works/](https://anvil.works/)

[https://www.outsystems.com/platform/](https://www.outsystems.com/platform/)

JetBrains is probably a good example of a "Borland" like company.

Outside the HN/Reddit bubble there are plenty of companies that are willing to
pay for software; the supermarket cashier doesn't take pull requests.

Also, the back-to-native focus on mobile platforms, including Google having to
integrate Android apps on ChromeOS, might make it less relevant, given that
native IDEs do offer something of a Delphi-like experience.

~~~
bitL
JetBrains never did any rapid UI IDE like Delphi did. In fact, all their IDEs
are in Swing, which is a mess. I'd totally love having CLion/PyCharm with Qt
UI designer, but it's not going to happen.

~~~
terminalcommand
The easiest time I had writing GUIs was when I used PyQt. I designed the UI in
Qt Designer, loaded it in the Python code, set the bindings and voila, it was
working.

Btw, Visual Basic continues to exist as Visual Basic .NET, and if you stick to
the basics you could learn to write C# GUI programs pretty quickly.

I agree writing GUIs by hand is very counter-productive.

------
YZF
One area I haven't seen any good solutions for is the interleaving of tests
with production code. I think we need better ways to express tests without
cluttering the production code and excessive mocking. What I'd like to do, and
really can't in any language/tooling that I know of, is to extract some
arbitrary subset of the code and surround it with tests or a test harness. I'd
also like to specify various injection points for tests right as I'm writing
the code in a way that doesn't affect the production code and contributes to
its readability. Perhaps this is pieces of server and client code or various
services all tested together. When I refactor the code I want the tests to
"refactor" with the code. I don't want to need to rewrite my tests...
Automatic test generation is another interesting area (beyond the basic table
or algorithm stuff we might do in tests today).
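Rust gestures at part of this today: unit tests can live in the same file as the production code, inside a module that is compiled out of release builds, so they move with the code they cover. A small sketch, with an invented `parse_port` function:

```rust
/// Production code: parse a TCP port, tolerating surrounding whitespace.
pub fn parse_port(s: &str) -> Option<u16> {
    s.trim().parse().ok()
}

// Lives next to the code it tests, but is stripped from release builds;
// `cargo test` compiles and runs it.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn accepts_plain_and_padded_input() {
        assert_eq!(parse_port("8080"), Some(8080));
        assert_eq!(parse_port(" 443 "), Some(443));
    }

    #[test]
    fn rejects_garbage_and_overflow() {
        assert_eq!(parse_port("http"), None);
        assert_eq!(parse_port("70000"), None); // > u16::MAX
    }
}
```

This still falls short of what the comment asks for (no injection points, no cross-service harnesses, no tests that refactor themselves), but it shows the interleaving direction.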

It's almost like the "problem" isn't the languages themselves, it's the
tooling around the languages. I want to be able to do a lot more than
"compile", "run", with my code... I can imagine machine learning driven
tooling being able to automate a lot of the mechanical aspects of code writing
beyond the simple generate a closing bracket that an IDE can do...

~~~
tomelders
I don’t do coding tests. I talk to the candidate about programming and I only
require one interview. I haven’t made a hiring mistake in 6 years.

~~~
NotSammyHagar
Okay. Care to give any other info? Usually when someone says "I haven't failed
in 'some impressive challenge' in a long time" it means the challenge wasn't
that impressive. How much interviewing have you done, what were the hard calls
you were asked to make, what's your scheme?

When I was at Google, one of the hardest things I did was look at the marginal
intern review scores and try to pick out, from the big pile, the ones we
should look at again and the ones we should not. There were all kinds of
crazy-ass stupid interview questions that in my opinion were not very useful
for classifying capability.

~~~
coldtea
> _Okay. Care to give any other info? Usually when someone says "I haven't
> failed in 'some impressive challenge' in a long time" it means your challenge
> wasn't that impressive._

Or they couldn't recognize failure.

------
lmm
The "Language Gap" slide seems massively overstated, or maybe I'm
misunderstanding. We really have seen a lot of progress in the last 10-20
years, both in industrial languages and in academic ideas that could become
the industrial languages of the next 10-20 years (e.g. Idris on the short end,
Noether on the longer end). The author laments that pattern-matching is still
not standard, but we're getting there; map/reduce/filter are standard in all
new languages these days (they weren't 10-20 years ago), some kind of
lightweight record feature is standard in all new languages, some level of
type inference is standard in all languages. Yes, it's taken longer than you
might imagine it should, but progress is happening. Likewise formal methods -
they may not be practical in 2016, but there's a lot more awareness, a lot
more work being done, and people are starting to try to take the useful parts
and apply them in more and more industrial settings. Likewise graphical
representation of code - not the LabView nonsense that's exciting to talk
about at cocktail parties, but the little touches that today's IDEs do almost
invisibly - highlighting, mouseover information, outline views, smart code
folding.

I wish we were better at communicating about programming languages. I wish we
were moving faster. But despair is unwarranted. We really are in a much better
place than 10-20 years ago, and the next 10-20 years look set to bring more
improvements.

~~~
pjmlp
10-20 years ago some of us could use Smalltalk, do systems programming with
strongly type-safe languages, use RAD environments like Delphi, and release
applications in Prolog, for example.

To me it seems we are catching up with the past; as someone who was already
programming in those environments, it looks like we have spent 10-20 years
losing our tools and educating the masses, only to get a taste of how things
used to be.

~~~
lmm
I've certainly seen cases where we take one step back in one area to take two
steps forward in another; where it takes 5-10 years to get language C that can
do something that language A we were using 5-10 years before that could do -
but only if we forget that we also wanted some capability in the language B
that we couldn't do in A, and C is the first language that manages to
synthesise both. And industry is always going to be a long way behind the
cutting edge - most of the features we're excited about today are things that
were present in ML. But on the whole it feels to me like both a) the
mainstream industrial programming experience today is better than it was 10
years ago and b) the academic cutting edge of programming language design
today is better than it was 10 years ago, and I expect both those things to
continue to be true.

------
aryehof
My thoughts are that we don't really need more languages. Arguably we don't
need better ones either, because they aren't the problem in _general_
computing. Instead we need better _design paradigms_ that better let us model
complex requirements and systems into code. Let's have new languages that then
support those paradigms.

We continue to struggle to abstract complex problems using functional
decomposition, structured analysis, information (data) modeling, and
object-based decomposition.

Many newcomers I meet only know modeling problem domain concepts as data in a
database, with behavior and constraints acting on that data in a separate
layer, organized using functional decomposition. Of course that layer
increasingly approaches a 'big ball of mud' as size and complexity increases.
Sounds a lot like we are back to the data-flow modeling that was so popular in
the 1980s, in a new guise.

A focus on programming languages, in my opinion, masks the real issues we face.

~~~
ShallowLearning
Much of the time, new languages lead to new design paradigms. As a general
rule, newer languages are more abstracted than older ones. When people don't
need to get hung up on the intricacies of low level programming, real progress
can be made on the design paradigm front.

~~~
aryehof
I'm curious which languages have resulted in which new design paradigms?

~~~
arianvanp
Two good examples:

Erlang led to microservices and the Reactive Manifesto. The Open Telecom
Platform is still a very modern microservice library (in my opinion) after
more than three (!) decades.

ML led to programming against generic interfaces
([http://www.cs.cornell.edu/courses/cs312/2006fa/recitations/r...](http://www.cs.cornell.edu/courses/cs312/2006fa/recitations/rec08.html))

------
js8
I would love to see programming as a dialogue between user and computer
(programmer and compiler). For example:

The compiler would infer the types, and the programmer would read them and
say: oh, I agree with this type, but I disagree with that one, it's perhaps
wrong, it should rather be this other type. Then the compiler would infer
types again, based on the programmer's incremental input.

Data structure selection. The programmer would say, I want a sequence here.
The compiler would say, I chose a linked list representation. The programmer
would look it over and disagree, saying, you should put this into an array.
And the compiler could say, look, based on measurements, the array will save
this much space but the list will be this much faster.
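The representation tradeoff in that dialogue is concrete enough to sketch. Assuming the compiler had both candidates on hand (Rust is used here purely as illustration; the two functions are invented), the same abstract "nth element of a sequence" is O(1) on an array-backed `Vec` but O(n) pointer-chasing on a `LinkedList`:

```rust
use std::collections::LinkedList;

// Contiguous storage: O(1) random access, cache-friendly.
fn nth_of_vec(v: &[i32], n: usize) -> i32 {
    v[n]
}

// Linked nodes: cheap splicing, but reaching the nth element
// means walking n pointers.
fn nth_of_list(l: &LinkedList<i32>, n: usize) -> i32 {
    *l.iter().nth(n).expect("index out of bounds")
}
```

A compiler in the proposed dialogue would measure the program's actual access pattern and report which side of this tradeoff dominates.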

Code understanding. Programmer should be able to say just, I don't know what
happens here, and the compiler would include some debug code to show more
information at that point.

Or take refactoring. The programmer would write some code, and the computer
would refactor it to simplify it. Then the programmer would look it over and
say, no, I'd rather have this, don't touch it, and perhaps write some other
code. The compiler would refactor again...

But all this _requires_ that there is syntactically distinct way (so that
perhaps editor could selectively hide it) to specify these remarks in the
code, both for computer and programmer. So each of them should have a special
kind of markup that would be updated at each turn of the discussion. Because
you don't want to just overwrite what the other side has just said; both are
good opinions (which complement each other - human understands the purpose of
the code but the computer can understand the inner details much better). So,
to conclude: I wish future programming languages would include some framework
like this.

~~~
marcosdumay
The types line is what some people do with Haskell and Idris. I do personally
favor writing the large-scale types beforehand, because that gives the
compiler a chance of saying "look, your program is wrong", which is way more
useful than "hey, your program has this type". Besides, abstract-type-driven
programming is an incredibly good methodology where it's applicable.

On code understanding, what makes it better than the programmer inserting the
debug statements themselves? Doing it yourself at least avoids
misunderstandings on the computer's part.

On refactoring, some IDEs do that. I'm on the fence about its usefulness.

~~~
js8
Thanks, I will respond to other people here as well.

I know a bit of Haskell and want to look at Idris, someday.

My point was, there should be a clean (ideally even syntactic) separation
between the code itself (i.e. what should actually be done) and its properties
(like types). Also, because there are two points of view about the properties
(human and computer), this separation needs to be there twice (so for example,
each type could be specified by computer and by human). I haven't seen a
system that would do it, on a systematic level in the programming language. I
only gave examples to show where it could be used.

~~~
marcosdumay
You mean that types should reside in different files, and be entered by
different means?

Now I get the entire programmer-compiler conversation. It is interesting. I
can see some potential there. Yet, I shrug when I think about all the sparse
metadata that I will have to check once I discover a low level bug (there is a
reason I'm not programming in Smalltalk).

Somehow the best place for all that stuff to live is right there at the source
code. That means the compiler (IDE) should be editing your files, so it better
have a great integration with your version control system.

~~~
js8
> Somehow the best place for all that stuff to live is right there at the
> source code.

That's what I am saying, and that's why it has to be syntactically distinct.

Also it needs to be clear what was input by computer and what was input by
human, that's the second distinction. Because of how the conversation works.
You don't want computer to erase human input, but the result also has to be
logically sound. So you need to know both inputs for the comparison and
synthesis, which happens at each conversation turn, both human's and
computer's.

And that's what "type holes" and similar systems lack - they only record the
result of the synthesis, not the two different opinions. Which is IMHO wrong.

~~~
marcosdumay
> You don't want computer to erase human input

I will contest that one too :)

As long as it is interactive enough, and the history is well marked, there is
no reason not to rewrite human code.

------
samth
This talk is right that effect systems aren't popular yet, except in the way
Haskell does them. But it's wrong about the trajectory of languages. Right now
is the best time to be interested in using cutting-edge languages in practice.
Recent years have also seen an explosion of new languages with interesting
ideas, from Rust to PureScript to Elm (in the author's preferred realm of
typed languages). And industry is backing major post-Java languages like F#
and Rust.

In short, the near future of PL is great, and exciting stuff keeps happening.
Don't believe the naysayers.

------
TuringTest
_> Lots of people are reinventing Smalltalk on a Mac. (See Bret Victor and
Eve)._

At last, someone noticed! ;-)

Though from that expression, what the author doesn't seem to grok is why
having a Smalltalk-like environment is desirable; maybe not as the primary way
to program computers but certainly as a tool alongside.

It's a shame that a family of programming languages that built on and expanded
that model hasn't gained more traction in the industry (not necessarily for
people who dedicate their lives to building complex software with a highly
general programming language, but for the rest of us).

~~~
vanderZwan
I find the jab at Bret Victor especially undeserved. He's an interaction
designer (a really good one who sees through all the fads[0], which is kind of
the opposite of what this one-liner implies). His focus is on better
_interface_ design, not _formal language_ design; why criticize someone for
something they're not trying to do?

And it's not useless; we probably wouldn't have had Elm without Bret Victor's
_Inventing on Principle_ [1][2]. And there has been some progress in that
direction of interface design of, for lack of a better term, "programmable
environments": look at Apparatus, for example[3]. Where would you even fit
that on these slides?

[0]
[http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...](http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/)

[1] [http://elm-lang.org/](http://elm-lang.org/)

[2] [https://vimeo.com/36579366](https://vimeo.com/36579366)

[3] [http://aprt.us/](http://aprt.us/)

~~~
TuringTest
Quite true. I find it likely that the next revolution in programming languages
will come from designing a PL that's a good fit for these programmable
environments. Maybe it should break from current undisputed conventions like
the radical separation between "data" and "source code", and be more like a
spreadsheet.

End-User Development has lots of under-explored ideas on how to build software
automations that don't require the end user to learn a hard formalism (even if
such formalism exists as the basis for the system). Though I understand that
programming language theorists are not interested in that angle of the
evolution of PLs.

~~~
fnord123
There is a dichotomy between sealed programs and evolving programs. Evolving
programs like Smalltalk or REPLs are great for exploratory work. Almost
everyone wants sealed programs for systems to work reliably. This runs along
the lines of Ousterhout's dichotomy as well - scripting vs systems languages.

The engineering breakthrough will be when we have a scripting/evolving system
that can be more easily distilled into sealed systems. So people poking around
in a spreadsheet will be able to turn that into a reliable application.

IMO, TDD and BDD could be seen as attempts to do exactly this.

~~~
shalabhc
Do you see this as a useful dichotomy? In reality no system is truly sealed -
it's just temporarily sealed between evolution steps, no?

~~~
fnord123
I do find it a useful dichotomy because I work with people who evolve their
code and wonder why I get my back up about monkey patching (because it's often
my job to turn research or prototype quality code into reliable code).

If you've ever had to make another person's research or prototype quality code
more robust it can be easy to wonder how on Earth they work the way they do.
Having this dichotomy in mind makes it easier to have more sympathy.

It works both ways. Researchers need to understand why platform development
moves so slow. This dichotomy helps reason about it.

But you're right, between evolution steps the system is not sealed. That's a
concern and why projects like Debian are keen to have reproducible builds.
It's also a big part of why containerization is becoming so popular.

~~~
shalabhc
Ah I see. So with 'sealed' you mean things like reproducible builds and being
able to determine the exact set of sources, etc?

The containerization movement is interesting - while having these 'sealed'
virtual environments and reproducible system builds is the right direction,
I'm amazed at the size and weight of each of these.

~~~
fnord123
Sure. By sealed I mean you compiled the program, you might have a
Turing-incomplete configuration (e.g. ini, toml, json), and you can reason
about the program. And by evolving (or, I guess, unsealed) I mean you are in
the process of editing the program, or you monkey-patch it when you run it.

But with things like LD_PRELOAD, different .so minor versions, etc, a compiled
program suddenly straddles the categories. Hence my comment about reproducible
builds and containerization.

------
stmw
This is a great presentation; I wish I could hear the talk track. While we can
all differ on the right "winner" for the next programming language (I don't
think Clojure is the right answer, someone else might), we are all stuck with
the same set of facts, and this covers the state of things very well. Most
importantly, it explains certain truths of the social/economic ecosystem for
programming languages, which is what gave us Java, Python, Javascript, and a
few other really popular systems that seemed unlikely to succeed when they
first appeared. The reasons for their success have just as much to do with
"ecosystems" as with language features.

~~~
saas_co_de
On twitter he says:

"The thesis was that programming in 2030 will have very advanced research
languages but mainstream languages will effectively stop advancing and we'll
be last [sic - left?] with a vast insurmountable gap between the two."

That provides some context for what the slides are about.

------
joe_the_user
“It just feels readable”. -> “It looks like this other language I know.”

But: that language might be English (or other natural language). Which is to
say maybe computer languages are leveraging the human "language faculty" and
rightly so. Which may mean that some languages are always going to feel more
foreign.

~~~
fasquoika
Considering how many programmers will tell you that C style semicolons and
curly braces are more readable than using keywords, I think they've got a
point

~~~
Symmetry
At some extremes, yes, but, for example, I've spent at least twice as much
time programming C and C++ as I have Python, and I still find Python generally
more readable. Maybe if the ratio were 100 to 1 instead of 2 to 1 I'd feel
differently, but as it stands that suggests there are actually intrinsic
differences in readability apart from just familiarity.

------
david927
This is a _brilliant_ synopsis of the state of the art -- truly fantastic --
and yet the presentation concludes with, "The innovation won't happen because
I can't see where it will come from," and that's not entirely fair. Some areas
aren't covered exhaustively, and what's missing is quite interesting. In other
words, cheer up: there's more hope than what's shown here.

~~~
ghthor
I think this is just frustration with how little direct incentive there is in
our economic system to fix the problem. The only chance of this paradigm
existing is as a long-shot project that is gifted to society, and the person
who makes this gift faces a real risk that it doesn't succeed and all the
effort is wasted.

~~~
peoplewindow
I found that the weakest part of the presentation.

Kotlin and Ceylon both came out of corporate/industrial development. C#,
JavaScript and Java continue to evolve entirely due to industrial funding.

I agree that academia is not a great place to do PL research as all the other
non-idea aspects of a successful PL don't get enough attention. But industry
is doing OK at launching new languages. Perhaps he feels that's
"incrementalism" but there's nothing wrong with making solid upgrades that
aren't attempts at unicorn style magic cure-alls.

It'd also help if we collectively got over our demand for our tools to always
be open source. This has solid justifications behind it but it ensures that
programming languages will always be spinoffs of other endeavours rather than
having truly focused attention in their own right. When you take away the
profit motive, you get stagnation, and that seems to be the main point of the
presentation.

------
WoodenChair
What people who do academic analysis of programming languages often miss is
_culture_. So much of which programming language gets used has to do with the
ecosystem it's been targeted at and what colleagues in that ecosystem are
using.

~~~
stmw
I couldn't agree more and was just making this point to someone last week -
for many of the well-established languages, 80% of the reasons to use or not
to use them is the associated culture, from the qualifications of the average
applicant to the prevailing approaches to solving problems.

------
quickben
It looks like the presentation stops short of any strong conclusion.

~~~
banachtarski
It's still pretty useful as far as identifying the problem is concerned.

------
Klasiaster
Hype is too strong, and the good ideas of the past are niche topics. Computer
science should focus more on its own history and realize what has been done
and what needs to be taken further. I wish I had learned much earlier about
things like process calculi, PL/type-system theory, alternative OS design and
verification approaches. Just by chance I pick up important information from
the web or in rare master's courses where we are just a handful of students
(while the machine learning courses get hundreds).

------
jlward4th
"Where will the next great programming language come from?"

Interestingly Scala has come from Academia, Industry, and Hobbyists. And for
me it's already the next great programming language. Yeah, it has some warts
and is hard to learn but that's true of all great things. :)

~~~
mlazos
Yeah this comment stuck out to me in the presentation. Industry has a great
record of creating + supporting languages. C#, F#, Dart, Swift, Rust, D are
all languages that are actively supported by industry, and some were even
created by industry too. Just seemed like a weird statement for the slides to
say that new languages just aren’t going to appear.

~~~
throwaway7645
If anything, almost too many of them. In the esoteric language area, every new
language takes away from libraries for other ones as the communities get
spread out. I do think the spread of ideas from that setup is good though.

------
cromwellian
Perhaps the future of programming isn't in linear deterministic code but in
stochastic or probabilistic languages dealing with machine learning or
biology. Maybe we're hitting a Gödel/Turing-like wall, and as we strive for
more perfection and provable soundness, we will find we can't scale up to
bigger and bigger systems. We'll either be stuck with human-debuggable linear
languages like we have today or move to systems which operate more like a
composition of machine learning models.

~~~
yazaddaruvala
Of course, in practice we know it is a lot easier to build and use
probabilistic models. However, I hope we don't give up on provable systems.

If, given enough time, math can stay pure and adequately abstract enough to
eventually describe our entire universe concisely, shouldn't code similarly
have no limitation?

------
chewbacha
TL;DR Everything is awful and nothing will save you

------
fmap
I like the slides and agree with the widening language gap. I have a few
comments about the conclusions, though.

> The UPenn dependently typed Haskell program shows a great deal of promise
> and is likely to manifest a decade before other DT languages generate
> practical backends.

I don't agree with this at all. The design problems with tacking on dependent
types to an existing system are massive - it's a research problem for a
reason. On the other hand, writing a "ghc quality" backend for
Coq/Agda/Idris/F* seems difficult, but at least it's an engineering problem
instead of a rough idea.

In particular, CertiCoq is a compiler for Coq written in Coq, and the main
problem here is verification. Simply writing a compiler is no harder than
writing a compiler in any language.

> Interesting ideas out of Microsoft Research on SMT solver directed
> programming editors that enforce invariants and can generate code during
> development.

Another interesting Microsoft Research project along the same lines as Dafny
is F*. Both are nice, but F* is closer to modern dependently typed languages.

> Lots of non-local reporting problems associated with using unification
> during type-checking.

I would argue that the problem is that we use constraint solving for type
checking. For example, strict bidirectional type checking leads to more
tractable errors, since it's straightforward to follow the compiler's
reasoning. On the other hand, bidirectional type checking is less powerful, so
it's not like there's a silver bullet here.

> Type-safe OTP.

I wish more people were working on things like this. There's some theoretical
work on calculi for distributed systems, but once they encounter the real
world they inevitably become horrendously complicated.

------
dom96
Regarding the effect systems slide: one of my favourite features of the Nim
programming language is its effect system [1]. It tracks IO effects, but more
importantly it also tracks the exceptions that are raised by each procedure.
This allows for a very nice implementation of checked exceptions, and docs
which list the possible raised exceptions of each procedure.

1 - [https://nim-lang.org/docs/manual.html#effect-system](https://nim-lang.org/docs/manual.html#effect-system)
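Rust has no effect system either, but a slice of the same information can be pushed into signatures. A sketch (the `ParseError` enum and `parse_positive` function are invented for illustration): the possible failures are an explicit enum that the caller must handle, much as Nim's tracked exceptions end up in each procedure's docs:

```rust
// The failure modes are part of the signature, and a `match` on the enum
// is checked for exhaustiveness: a poor man's raises-list.
#[derive(Debug, PartialEq)]
enum ParseError {
    Empty,
    NotANumber,
}

fn parse_positive(s: &str) -> Result<u32, ParseError> {
    if s.is_empty() {
        return Err(ParseError::Empty);
    }
    s.parse().map_err(|_| ParseError::NotANumber)
}
```

The difference is that Nim infers and checks the effect set across calls, whereas here each function's error type must be written and threaded by hand.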

------
AnimalMuppet
"Use the right tool for the job" is _not_ the dumbest cliche in software... in
theory. But it requires actually knowing what options are available, what
their strengths and weaknesses are, and what problems (various kinds of yak
shaving) the job will throw at you. Picking the tool that can best reduce the
yak shaving is the difference between professionalism and masochism.

In practice, though, a lot of people quote this who don't know what options
are available, what their strengths and weaknesses are, and what the problems
of the job will be, and so _when those people quote it_ , it's just a cliche.

------
wsxiaoys
In case anyone is interested in a reasonably engineered language that supports
closure serialization (mentioned as "hard" in the last slide), check out:

Gambit Scheme: a fast Scheme implementation.
[http://github.com/gambit/gambit](http://github.com/gambit/gambit)

Gerbil Scheme: provides a full module system and syntactic tower on top of
Gambit Scheme. [https://github.com/vyzo/gerbil](https://github.com/vyzo/gerbil)

~~~
mrkgnao
Does Guile (or even Chicken) do anything similar?

(My only exposure to it is from Andy Wingo's blog, and I don't know much about
Scheme.)

~~~
wsxiaoys
Nope.

------
pron
> Will we just be stuck in a local maxima of Java for next 50 years?

1\. Yes, if the extent of our imagination is languages like Idris and ideas
like effect systems, which follow a gradient descent from Java, always in the
same direction: being able to express more constraints. What you get "for
free" from such languages may not be significant enough to justify the cost of
adoption, and the valuable stuff you can get is not much easier than the
options available today, which are already too hard for most to take up. If
you were to consider truly novel languages that think outside the box (e.g.
Dedalus/Eve), then maybe one would stick and make an actual impact rather than
just a change in fashion.

2\. How do you even know that we _can_ do much better? NASA engineers may not
like it, but they don't _complain_ that we're "stuck" at sub-light speeds.
Maybe Brooks was right and we are close to the theoretical limit (that we know
must exist).

> We talk about languages as a bag of feelings and fuzzy weasel words that
> amount to “It works for my project”.

Can you find another useful way, available to us today, of talking about
languages?

> “Use the right tool for the job” Zero information statement.

That's right, but it's not a dumb cliché so much as it is a tool we've
developed to shut down religious/Aristotelian arguments that are themselves
devoid of any applicable, actionable data. One, then, is often confronted with
the reply, "but would you use assembly/Cobol?" to which the answer is, "of
course, and it's not even uncommon, and if you don't know that, then you
should learn more about the software industry before giving any more
Aristotelian arguments."

~~~
mrkgnao
> What you get "for free" from such languages may not be significant enough to
> justify the cost of adoption

Idris can often infer entire functions from their types if the domain is
amenable to accurate type-based specification. For instance, taking the common
"sized vector" example, where `Vec n a` refers to a length-n vector of values
of type a, functions like

    zip : Vec n a -> Vec n b -> Vec n (a, b)

can be automatically generated via the interactive "proof search" mechanism
that the Emacs integration for Idris provides. Similar things are possible in
Agda, which is however squarely aimed away from the "practical use" market.

~~~
pron
> Idris can often infer entire functions from their types if the domain is
> amenable to accurate type-based specification.

This is a great example of what I'm talking about. The functions Idris can
generate are ones you could write manually with only marginally more effort --
if that -- than the effort required to write the precise type. I don't think
such functions ever form a significant portion of a significant program, if
they appear at all (they must be so generic -- otherwise there would be a
search problem Idris couldn't solve -- that they would likely already be in
the standard libraries). When Idris is able to generate an efficient sorting
function given some constraints, then I'd be impressed.

> can be automatically generated via the interactive "proof search" mechanism

If you've spent a significant amount of time with such proof assistants, you'd
see that the proof search is rather pitiful.

~~~
bjz_
> If you've spent a significant amount of time with such proof assistants,
> you'd see that the proof search is rather pitiful.

Afaik, Idris' proof search was hacked together in an afternoon just to see if
it would work. And it did, surprisingly well considering. Don't know how much
it has been worked on since then though. But yes, still leaves a bit to be
desired compared to other systems.

~~~
pron
I wasn't just talking about Idris. There isn't a single tool out there that
can automatically prove that your quicksort implementation actually sorts
without pretty significant work.

------
lain-dono
> It’s lightweight
>
> I was able to install the compiler

It does not prevent the browser from eating all the resources.

------
josefrichter
Just curious, where does Elixir fall in this scheme of things? Could any
experienced devs give a qualified answer, please?

~~~
bjz_
Elixir doesn't really push much of the state of the art forward. It's pretty
much Erlang with new clothes (and some nice cleanups). I worked with it for
over a year and while I loved the 'fail fast' model of programming, most of
the errors that were caught were stupid programming errors that would have
been trivial to catch with a type system.

What I really want is a type system for reasoning about distributed, stateful
programs. It has to deal with mixing strongly consistent and eventually
consistent (CRDT) data with synchronisation points (see Bloom/Lasp), hot code
reloads, migrations, messaging across nodes, possibly some sort of row
polymorphism combined with Clojure's namespaced symbols... Sort of like Cloud
Haskell but with zero-downtime deployments. Or like Pony but with a better
distribution story.

~~~
rurban
Distributed Pony will be there soon.

[https://www.doc.ic.ac.uk/~scd/Disributed_Pony_Sept_13.pdf](https://www.doc.ic.ac.uk/~scd/Disributed_Pony_Sept_13.pdf)
[pres],
[https://www.ponylang.org/media/papers/a_string_of_ponies.pdf](https://www.ponylang.org/media/papers/a_string_of_ponies.pdf)
[paper]

I'm not sold on Cloud Haskell or Clojure.

------
ernst_klim
OCaml is as "industrial" as Haskell is.

------
Animats
"OTP" on p. 23?

"One True Pairing"?

"Online Transaction Processing"?

probably not "On The Phone".

~~~
paulsmal
Open Telecom Platform? Erlang's runtime

~~~
tormeh
I think it's just the biggest Erlang framework. BEAM is the runtime.

------
samcodes
This is a great deck... but leaving TLA+ out of Formal Methods? It is hands
down the most important thing going on there. And it is a combo between
academia and industry. Amazon uses it extensively.

~~~
pron
It also left out all PL research outside of typed FP. It isn't really about
the future of programming, but about the future of typed FP. For example, it
lists Eve as a "reinvention of Smalltalk" when, in fact, Eve incorporates at
least as much cutting-edge PL research as Idris.

------
curryhoward
> Languages that have dabbled with modeling effects with row types have
> backtracked on it in favor of IO.

Interesting. I thought algebraic effects + handlers were the new hotness in
modeling effects with types.

~~~
iitalics
They're the new hotness, but people are still having trouble getting them to
work (trouble = open research topic :) ), and they want to convince the monad
proponents that they're better.

~~~
toolslive
what trouble ? [http://www.eff-lang.org/](http://www.eff-lang.org/)
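
For anyone unfamiliar, the core idea of effects and handlers can be sketched
with Python generators -- a common one-shot encoding, not how Eff itself is
implemented; all names here are illustrative:

```python
# Sketch of algebraic effects via generators: effectful code *yields*
# operations, and a separate handler decides how to interpret them.
# This is a one-shot encoding, invented for illustration only.

class Get:   # effect operation: read the state
    pass

class Put:   # effect operation: replace the state
    def __init__(self, value):
        self.value = value

def counter():
    """Effectful program: increments the state twice, returns the result."""
    n = yield Get()
    yield Put(n + 1)
    n = yield Get()
    yield Put(n + 1)
    return (yield Get())

def run_state(program, state):
    """Handler interpreting Get/Put against a piece of local state."""
    gen = program()
    result = None
    try:
        op = next(gen)
        while True:
            if isinstance(op, Get):
                op = gen.send(state)       # answer the Get with the state
            elif isinstance(op, Put):
                state = op.value           # update the state, answer nothing
                op = gen.send(None)
    except StopIteration as stop:
        result = stop.value                # the program's return value
    return result, state

print(run_state(counter, 0))   # -> (2, 2)
```

The point is that `counter` performs Get/Put without committing to any
interpretation; `run_state` is just one handler, and another handler could
log every operation or run the program against a database instead.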

------
telotortium
Is there a talk for these slides? I haven't been able to find it.

------
fulafel
This could be titled "Static typing is not going to save you"? There's nothing
but statically typed languages and related theorem provers from the timeline
slide onwards, and the conclusion is that it's all too hard.

This is a good companion piece for the recent Rich Hickey Clojure/Conj talk :)

~~~
stmw
I don't really see where you get that. Is dynamic typing demonstrating a lot
of sustainable success?

~~~
fulafel
I edited in the Clojure part after you replied.

Languages that have forgone static typing aren't necessarily trying to solve
everything using language types at runtime. See eg Clojure's approach. Or
SQL's.

But at the base popularity-contest level the answer is clearly yes: Dynamic
languages have been on a roll for the last 30 years - Perl, Python, Ruby,
Erlang, Clojure, even JS.

~~~
35bge57dtjku
That certainly doesn't mean it's because of their dynamic typing. And Erlang
and Clojure, as nice as they are, are still about as popular as AWK is.

------
andrepd
Well, this person sure seems very angry and opinionated, yet there are hardly
any justifications for his criticisms in this childish rant. I mean, if you're
going to say everything is "dumb" and "shit" and whatnot, at least tell us
why. As it stands, this is definitely a case of "maximal opinions and minimal
evidence".

------
miguelrochefort
_cough_ Prolog _cough_

------
rotten
Recently a famous AI project was shut down because it was inventing its own
language. Perhaps the next generation of programming languages and design
paradigms won't come directly from humans at all.

~~~
myth_drannon
Are you talking about Theano?

------
jamesrom
So are we just going to pretend Perl doesn't exist?

~~~
chrisseaton
You can't expect 20-or-so slides to cover all languages. And I'm not aware of
any PL research topics being explored in Perl, so not sure how talking about
Perl would fit in.

~~~
cestith
Perl6 in its spec has auto-parallelization of at least the hyperoperators. It
has hyperoperators themselves. It has full LL and LR grammars as part of the
language. It has optional strong typing to support incrementally typing your
source. It has junctions. It has a native MOP. It has active metadata on
all data types (values, variables, subroutines, and types themselves).

Beyond what might be purely PL research, there are some other neat ideas being
explored. It does function currying, lazy evaluation, grapheme-level Unicode
support, and custom operators from any Unicode grapheme.

It's definitely a language I'd be watching if I was writing about near-future
languages.

------
drefanzor
TL;DR: the future of programming languages should involve not naming your
language "Coq". I know I'm going for the lowest common denominator, but how do
you explain to your boss that you're proficient in Coq? Ugh. Or to your
professor, since apparently it's a proof language for academia.

~~~
GuiA
"Coq" means "Rooster" in French. Coq (the software) started development at
INRIA (France) in 1984 as a small R&D project. I am not particularly shocked
that the original developers did not realize the possible double meaning in
another language, and being as dismissive as you are seems uncalled for.

There are countless startups whose names in English mean something else
entirely in another language. For example "Cocu Social", a cooking startup,
whose name in French would mean "gregarious cuckold".

~~~
samth
The original developers include Thierry Coquand. Also there's a tradition of
naming theorem provers after birds (that might have started with Coq though).

But they definitely knew that this pun was there.

~~~
theaeolist
.. and it is based on a type theory known as the Calculus of Constructions
(CoC).

------
shion
My project's introduction book contains a description of a future programming
language; have a look.
[https://github.com/ShionAt/Keys](https://github.com/ShionAt/Keys) twitter:
@ShionKeys

------
qaq
Coq should have good interop with CockroachDB.

~~~
qaq
Why are people so against the future of tech?

------
whytheam
Very disappointed not to see Cilk included.

~~~
stmw
Do you mean this Cilk?
[https://en.wikipedia.org/wiki/Cilk](https://en.wikipedia.org/wiki/Cilk)

~~~
whytheam
Yes. Cilk is very easy to use and has pretty good performance. I would love to
see it used more in industry rather than just in academic work.

------
adamnemecek
"It's like X but more practical". -> "There's a library for my domain".

3real5me

------
Schampu
Dynamic analysis + machine learning could become next-generation coding:
[https://maierfelix.github.io/Iroh/](https://maierfelix.github.io/Iroh/)

------
lispm
Looking at the 2010 Haskell report: most of the committee members come from
'Academia':

[https://www.haskell.org/onlinereport/haskell2010/haskellli2....](https://www.haskell.org/onlinereport/haskell2010/haskellli2.html#x3-2000)

How is it an 'industrial' language? Haskell is the main 'academic' language
for a branch of Functional Programming.

'industrial' is just another weasel word.

~~~
throwaway7645
If you keep your ear to the ground, there are quite a few companies like
Facebook that use Haskell for important things in production, but it isn't the
language even 50% of the company is using. I'm sure there is a lot more lisp
in production, but you typically hear about the same ratio from both camps in
the blogosphere.

~~~
lispm
I would guess that in a company like Facebook, 99.5% of the code they wrote is
not Haskell. If they have, say, 200MLOC of code written (just a number), this
would be 1MLOC...

~~~
pjmlp
There are companies using a bit more of Haskell in production.

[http://cpmed.de/jobs/](http://cpmed.de/jobs/)

