Near Future of Programming Languages [pdf] (stephendiehl.com)
385 points by myth_drannon on Oct 30, 2017 | 292 comments



Things to think about for the near future of programming languages:

- The borrow checker in Rust is a great innovation. Previously the options were reference counts, garbage collection, or bugs. Now there's a new option. Expect to see a borrow checker in future languages other than Rust. (A small sketch of what it catches follows this list.)

- Formal methods are still a pain. The technology tends to come from people in love with the theory, resulting in systems that are too hard to use. Yet most of the stuff you really need to prove is very dumb. X can't affect Y. A can't get information B. Invariant C holds everywhere outside the zone (class, module, whatever) where it is transiently modified. No memory safety violations anywhere. What you really need to prove, and can't establish by testing, is basically "bad thing never happens, ever". Focus on that.

- The author talks about "Java forever". It's more like Javascript Everywhere.

- Somebody needs to invent WYSIWYG web design. Again.

- Functional is OK. Imperative is OK. Both in the same program are a mess.

- Multithread is OK. Event-driven is OK. Coroutine-type "async" is OK. They don't play well together in the same program. Especially if added as an afterthought.

- Interprocess communication could use language support.

- We still can't code well for numbers of CPUs in triple digits or higher.

- How do we talk to GPU-type engines better?
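
On the borrow-checker point above, here is a minimal, made-up sketch of the kind of aliasing bug it turns into a compile-time error rather than a runtime crash:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];      // shared borrow of v
        v.push(4);              // rejected: cannot mutate v while `first` is borrowed
        println!("{}", first);  // the borrow is still live here
    }

In C++ the equivalent code compiles and the reference may dangle after the vector reallocates; with a borrow checker it simply doesn't compile, and no reference counts or garbage collector are needed.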


I disagree with a few of your thoughts, but they're good thoughts!

* Javascript everywhere is a function of low barrier-to-entry for it, but almost everybody agrees it is flawed as a language. If that's the future, we are screwed as an industry. One thing I've noticed (and I say this as a guy who wrote Ruby for 10+ years) is that type safety is becoming a hugely desired feature for developers again.

* WYSIWYG web design (a la Dreamweaver) died off a little because the tools saw a web page as standing in isolation. We know, however, that isn't interesting on its own - it needs to be hooked up to back-end functionality. Who is producing static HTML alone these days? In the case of SPAs it needs API integration. In the case of traditional web app dev, it needs some inlining and hooks to form submission points and programmatically generated links to other "pages". Making that easier is the hard part - seeing a web document as an artefact of the output of a running web application container.

* Multi-threaded, event-driven, coroutine-type patterns are fine in Go, to my eye. What's making you think we can't mix this up with the right type of language and tooling support?

* Is it that we can't code well for CPU counts > 100 or that the types of problems we're looking at right now that need that level of parallelism tend to be targeted towards GPUs or even ASICs? I think I'd need to see the kind of problems you're trying to solve, because I'm not sure high CPU counts are the right answer.

* Talking to GPU-type engines is actually pretty simple, we will deal with it the same way we deal with CPU-type engines: abstraction through a compiler. Compilers over time will learn how to talk GPU optimally. GPU portability over the next 20 years will be a problem to solve as CPU/architecture portability was over the last 40.


> Javascript everywhere is a function of low barrier-to-entry for it, but almost everybody agrees it is flawed as a language

'Everybody' here is everybody who has used other languages intensively, or is into programming languages, or, the most negative party against JS, people who are into formal methods. As for 'everybody' by headcount: I often get downvoted to hell for being negative about JS on Reddit. And I'm not swinging a baseball bat; I'm subtle about it, as I don't care for language wars. Use what you want, but please don't say it's the best thing to happen to humanity. But no; 'everybody' (as in headcount) thinks it is the best thing that ever happened to humanity and that other languages should die because you can write everything in JS anyway.


I said 'almost everybody'.

Even the most ardent JS fans I work with call it objectively bad.


It's hard to be objective on this.

A modern JavaScript engineer would point out that:

  - The community is very accepting of new engineers. 
  - The ecosystem is huge and there are great solutions
    available to many problems.
  - It's easy to write consistent code while avoiding
    many problems with the language if you use `eslint`,
    `prettier` and `flowtype`. Tooling on the web is
    excellent and rapidly improving.
  - You can use one language on the back-end and front-end.
    You can even use the same language to write desktop,
    embedded, command-line and curses applications.
  - `babel` drives the adoption of new features into the
    language with greater vigor than many other languages.
  - The VM is very fast due to lots of work by Google,
    Mozilla, etc. There are also ways of interacting with
    modules written in WebAssembly or, if you're on the
    back-end, native modules written in Rust/C++.
  - JavaScript is driving a lot of momentum for
    new programming languages (Reason, Elm, PureScript, etc)
    since as a transpilation target it gives you access
    to many environments and libraries from the get-go.
In my opinion JavaScript in 2017 is a much better language than JavaScript in 2012. I wouldn't call it objectively good but I certainly wouldn't call it 'objectively bad'. There are many reasons to choose it beyond the syntax and semantics of the language itself, and there are effective treatments available for most of the warts it was once known for.

It's dominating because of the sheer drive within the community, and if there were award ceremonies it'd be winning 'Most Improved' year-upon-year.


How much of that is just solving a problem created by Javascript?

A community accepting of new engineers isn't. But it isn't something to brag about either, because the phrase is never meant in its literal sense. Instead, people say a community accepts new engineers when it has low enough standards that newbies feel empowered without learning anything new. The one-language-to-rule-them-all mentality is also about low standards.

The ecosystem is just compensating for the lack of native features and bad overall design; so is the tooling. Babel is a great tool, but it's just compensating for the lack of a good VM in the browsers; ditto for all the transpiled languages.

Besides, no, JS VMs aren't very fast. They are faster than the slowest popular languages (Python, Ruby), but can't get anywhere near the moderately fast ones like Java and .NET.


Almost none of the things I mentioned are solutions to 'a problem created by JavaScript', so your question seems deliberately misleading.

An accepting community can be a large funnel: it doesn't need to mean that nobody improves or that there are no selection effects. More candidates means more chaff, but also more wheat.

Babel isn't merely compensating for a 'lack of a good VM'. It's more valuable to see it as a playground and streamlined process for large-scale testing of language-level features. (It took on this role later; it was originally meant only to solve the problem of running modern code on older VMs.)

I guess you could argue that the VM should have been complete in 1995 but what programming language has managed this?

Also, I don't think that comparing JavaScript's VM to the VMs of statically typed languages is fair. You mentioned yourself that in comparison to similar dynamically typed languages it's faster. Compare apples to apples.

I don't think I'm bending the truth here. There's been a lot of investment into JavaScript, and not all of the problems that have been solved are obvious or easy.

At this point it is ascendant because it has effectively solved lots of problems that relate to speed of change and it has done this seamlessly on a large number of platforms. I think people look at this and pattern-match to 'first-mover advantage' or 'JavaScript has a monopoly on browsers' but neither of these things were what got us to this place: it was innovation in maneuverability.

(It won't necessarily stay ascendant now that it is so easy to circumvent writing JavaScript but yet still plug into its ecosystem.)


> More candidates means more chaff, but also more wheat.

That needs substantiation. It's not immediately obvious in any way.

About Babel, notice how we never had a problem playing with language-level features that target the PC? That's the difference a good VM makes (although in the PC's case, it's not virtual). Besides, you are conflating VMs and languages somehow - they have only a tenuous relation.

About the speed, Javascript misses most of the expressiveness of Python and Ruby. It's more in line with Java or, if you want dynamically typed languages, with PHP and Perl. Yet, it's not any faster than PHP and Perl. It has reasonably fast runtimes - those are not an issue for the language, but they are not a big selling point either.

Overall, the reason Javascript still exists and is viewed as a serious language is all because it has a monopoly on the browsers. It has been worked on enough that if that monopoly goes away it will still be a good choice for some stuff, but it is not a stellar language, and it can't be coerced into becoming one unless it passes through a Perl 5 to Perl 6 level transition.


>Overall, the reason Javascript still exists and is viewed as a serious language is all because it has a monopoly on the browsers.

Basically, this. ClojureScript and others are simply the efforts of sane, experienced developers that don't want to cope anymore with Javascript and its warts.


I don't think that comment requires more substantiation than yours that there are "low enough standards that newbies feel empowered without learning anything new". The more people that learn your language, the more likely it is that you will find some that can contribute to its state-of-the-art. A low bar doesn't affect people that would surpass a higher bar with ease: they still wish to learn and sate their natural curiosities.

In fact, the normal stereotype of JavaScript developers as people constantly chasing new technologies and libraries is actually true, but what you are claiming is the exact opposite: "people feeling empowered without learning anything new". People are empowered and learning new things because there is a low barrier to doing so and this is exciting.

  > misses most of the expressiveness of Python and Ruby.
  > It's more in line with Java
Have you actually written modern JavaScript code? I personally think it's more expressive than both Python and Ruby, and certainly much more so than Java.

  > Overall, the reason Javascript still exists and is
  > viewed as a serious language is all because it has a
  > monopoly on the browsers.
As it stands JavaScript doesn't have a monopoly on the browser. You can transpile ClojureScript, Elm, Reason, PureScript and many other languages to it. Yet -- surprise! It is still in use. Do you honestly think this is just inertia? I'd argue that it's investment in the platform itself and particularly 'innovation in maneuverability' (number of environments, speed of prototyping, backward compatible code, ease of hiring) which keeps developers using the platform.

In my opinion, the existence of NPM and a broad range of environments that you can run your code on will likely mean that JavaScript would still be a productive environment even if the web was to die.


>You can transpile ClojureScript, Elm, Reason, PureScript and many other languages to it. Yet -- surprise! It is still in use. Do you honestly think this is just inertia?

No, it isn't inertia -- one important reason is that creating a transpiler to JS invariably ends up with a crippled (feature-restricted), slower version of the original language (Clojure, etc.)

Thus the hope placed in Webassembly.


People aren't choosing between those languages based on performance or avoiding any due to missing features. I've never heard of anybody doing that.

People generally choose languages due to assumptions about their hiring pool.


If people only choose languages based on popularity, how do you imagine any language ever becomes popular?


My point is that those transpiled languages aren't that stymied. You can use them without a lot of problems, and they're being left on the bench for reasons other than their feature set.

Companies generally choose JavaScript because there are lots of developers to choose from, it runs everywhere and the ecosystem is huge.

Engineers choose something like PureScript as it's not a 'blub' language and people think that by choosing it they will be able to hire (or be seen as) "math geniuses". I'm sure the feature set is important, but it's not enough to unseat a language with the previously described properties.


> Javascript misses most of the expressiveness of Python and Ruby.

Could you please give examples of expressiveness of Python or Ruby that Javascript lacks?


You can't change the rules interpreting your source code so that the parser will expand a small command into an entire program, or have it read a more fitting DSL that does not resemble your original language.

You cannot inspect an object and change its type or the set of available properties based on some calculation.

You cannot run some code in a controlled environment separated from your main code.


> Even the most ardent JS fans I work with call it objectively bad.

Counterargument: Read

Douglas Crockford - JavaScript: The Good Parts

Even though this book is somewhat "aged" (it is from 2008), I know of no better book at presenting what is interesting about JavaScript and how often this language is misunderstood.

If after reading this book you still consider JavaScript as "objectively bad", so be it. But first read the arguments that are presented in this book.


It is very telling that you don't see the irony of the fact that a language needs a book titled "Language X: the good parts".

Good tech mostly speaks for itself. When you need entire books to convince you that a language has good parts, then you know the language has a problem.


What are their objective measures?


I can think of one: the fraction of design decisions that the language's creator, standardizers, and serious developers wish they could change but can't.


> but almost everybody agrees it is flawed as a language. If that's the future, we are screwed as an industry.

What language is not flawed? And why are we "screwed"? I don't get this FUD...there are more important things than language choice, such as the dependency management system + community + ecosystem. JS lets you get on with the job and get things done quickly. You need performance - use C/C++ bindings. It's been clear for a long time that JS is the safest long-term choice and is slowly creeping into every other language's castle.


It doesn't bother you that huge portions of our tech stack are sitting on top of layers of terrible language design that our ancestors will have to deal with? Perhaps the parent post is looking more to the future. Of course it all still works, but when you think we could be doing the same job with Lisp, Smalltalk, (insert any non perfect, but much better than JS technology), it does make me cringe a little.


> that our ancestors will have to deal with

You meant descendants?

Anyway, they won't. They will throw away the garbage and rewrite anything good they find. Just like we do.


Haha, yes my goof up. I disagree that they will rewrite though. Nobody is willingly rewriting millions of loc.


Well, you were right about one thing: you don't get it. ;)

Just look at the prototype hell. Look at the quirkiness of the comparison operator. Implicit type coercion. The mess piles up extremely quickly.

"Disciplined devs don't make those mistakes" is NOT an argument and never was. Decades later, people believing themselves to be godlike C/C++ devs still make stupid mistakes.

But I guess learning from the past won't happen for people who fanboy and have a survivorship bias. And have money on the line.


It's flawed because 99.999% of the time your code will work just fine. Job done. Then one day, "undefined is not a function" and your airplane falls out of the air.


This happens in all languages... look at the Toyota acceleration bug for a real-world example of your hyperbole. They were using MISRA C. Languages don't fix spaghetti, laziness, tight deadlines, bad engineers, etc.


> This happens in all languages...

To some extent, yes, that happens in all languages. To some extent 10^100 and 1 are both numbers. That does not mean you can even compare them.


I'm not sure you can do spaghetti and still do MISRA C. Doesn't MISRA have some things about goto, cyclomatic complexity, and such?


type safety is becoming a hugely desired feature for developers again

I don’t think anyone ever hated type safety, I think they hated verbose syntax and unfortunately conflated the two.

Now we’re finding a happy medium with compiler type inference and people are like, wait what?


Dynamic typing is objectively much more flexible and easier to prototype in. Static typing is much easier to build large and robust systems in. Eventually we'll get to the point where everything you can do in dynamically typed languages can be done in statically typed languages (more verbosely), but we're not there yet.

For example, functions that return a different type depending on the values passed in are a useful pattern, but not allowed in statically typed languages, except those with dependent types (Idris) or runtime downcasting (Go). There's always a way to achieve the same thing, but usually at the cost of safety or much more verbosity.

This is actually, IMHO, one way Go strikes a happy balance between static and dynamic typing. It provides a (runtime) safe and low-verbosity way to write dynamically-typed code, i.e. `interface{}`. Rust is slated to get something similar with `impl Trait` values.
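
As a concrete sketch of the runtime-downcasting route, here it is in Rust terms using Box<dyn Any> trait objects (a made-up example, and not the `impl Trait` feature mentioned above):

    use std::any::Any;

    // Returns a different concrete type depending on the value, erased behind
    // Box<dyn Any>; the caller recovers the type by downcasting at runtime.
    fn classify(n: i64) -> Box<dyn Any> {
        if n < 0 {
            Box::new("negative".to_string())
        } else if n == 0 {
            Box::new(0.0_f64)
        } else {
            Box::new(n)
        }
    }

    fn main() {
        let v = classify(-4);
        if let Some(s) = v.downcast_ref::<String>() {
            println!("string: {}", s);
        } else if let Some(f) = v.downcast_ref::<f64>() {
            println!("float: {}", f);
        } else if let Some(i) = v.downcast_ref::<i64>() {
            println!("int: {}", i);
        }
    }

Like Go's `interface{}`, the failure mode is a runtime check rather than a compile error.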


> For example, functions that return a different type depending on the values passed in are a useful pattern, but not allowed in statically typed languages, except those with dependent types (Idris) or runtime downcasting (Go). There's always a way to achieve the same thing, but usually at the cost of safety or much more verbosity.

Look at this verbose OCaml:

    type my_return_type = AFloat of float | AnInt of int | AString of string;;

    let my_fun n =
        if n < 0 then AString "negative"
        else if n = 0 then AFloat 0.0
        else AnInt n

    # my_fun (-4);;
    - : my_return_type = AString "negative"
    # my_fun 0;;
    - : my_return_type = AFloat 0.
    # my_fun 42;;
    - : my_return_type = AnInt 42
(But yes, to use these values you have to do pattern matching, and if you're in a FUD mood, that is "runtime downcasting" and incurs a "cost of safety".)


Yes, you can always generate a new union type for your return type and then pattern match on it. That is certainly better than C, where you must use unsafe tagged unions or void pointers. It is more verbose than in dynamic duck-typed languages, though I'll give you that it's quite compact in OCaml.

In a dependently typed language, you could reduce verbosity at the use site as well as avoid the extra runtime branch.


So how exactly would you write the above five lines of OCaml in fewer than five lines in Idris?


The verbosity is not in the function definition, it's in the match expression needed at _every single call site_. In Idris, the definition would be similar length or longer, but call sites would not need a match expression.


> provides a (runtime) safe and low-verbosity way to write dynamically-typed code, i.e. `interface{}`

Since Go 1.9 was released with its new Type Aliases feature a few months ago, the verbosity is even lower. By putting `type any = interface{}` somewhere in your package, you can just write `any` instead of the verbose `interface{}` everywhere you want some dynamic typing.


The response to the recent Rich Hickey talk definitely points toward "dynamic typing as its own solution" (instead of a stop-gap until someone makes static typing less "verbose") being a popular view nowadays.

https://news.ycombinator.com/item?id=15464423


It's really not that bad. Modern JS isn't ideal, but it has many parts that are as good as or better than Ruby or Python, and most people don't act like those are horrible languages. I'd gladly use it over either.


What causes you to say that JavaScript is a flawed language? Not trying to be snarky or saying you're wrong, just want to better understand your reasoning.

It seems to me that at one point JavaScript had a lot of confusing/bad design decisions but that more recent changes have largely eliminated them. For example, I almost never have to worry about "this" anymore.

I recently worked on a project using TypeScript and I really appreciated how it changed a lot of the bugs from being runtime to compile time. I could definitely see how that is a big flaw, but it seems like the community is developing solutions.


I don't think "this" in JavaScript was a bad design decision. The bad decision was how functions create their own scope in unexpected situations. Also, I think modern JavaScript still requires a lot of "this," just much less ".bind(this)" or "var that = this."

But the thing is, just because arrow functions exist doesn't mean people are going to use them. Similarly, just because modern browsers allow "let" and "const" doesn't mean people will stop using "var."

I agree that JavaScript is generally fine if you stick to the new parts of the language, but that in itself is a pretty big problem. Maybe not for me and you, but for software development in JavaScript generally it's really not ideal.



About WYSIWYG, we are missing standards on our APIs. SOAP was going that way but it was way too much, way too early.

Either that, or an ASP.Net view where the backend is interacting with the user through a browser. But that doesn't work well. It's much better to standardize the backend API than the entire frontend.


>SOAP was going that way but it was way too much, way too early.

I think SOAP successfully demonstrated that rigorous "API standards" are a pipe dream. Swagger/REST is probably as good as it's going to get.


> WYSIWYG web design (a la Dreamweaver) died off ....

Which is why what we actually need is something a la Delphi.


I have been thinking about that and there is an issue with it; Delphi (and VB) were written in a time when devs paid a lot for software tools. Besides some niches (embedded) that is not really the case anymore. You expect to pay a few tens of dollars at most in total, if that. And a lot of people (but that might be the HN/Reddit echo chamber) demand that everything they work with be OSS as well. Making 'a modern Delphi' is a lot of work; years of it. And much of that time is not 'fun', it's hard work polishing little parts, having user test groups give feedback on it, and polishing it some more. The time when you could be Borland seems gone (unfortunately imho) and I'm not sure how you can make the kind of polished tool you are talking about in the current climate. Maybe someone else here has some different views though.


There are companies trying it though,

https://anvil.works/

https://www.outsystems.com/platform/

JetBrains is probably a good example of a "Borland" like company.

Outside the HN/Reddit bubble there are plenty of companies that are willing to pay for software; the supermarket cashier doesn't take pull requests.

Also, the back-to-native focus on mobile platforms, including Google having to integrate Android apps on ChromeOS, might make it less relevant, given that native IDEs do offer some Delphi-like experience.


JetBrains never did any rapid UI IDE like Delphi did. In fact, all their IDEs are in Swing, which is a mess. I'd totally love having CLion/PyCharm with Qt UI designer, but it's not going to happen.


The easiest time I had writing GUIs was when I used PyQt. I designed the UI in Qt Designer, loaded it in the Python code, set the bindings and voila, it was working.

Btw, Visual Basic continues to exist as Visual Basic .NET, and if you stick to the basics you could learn to write C# GUI programs pretty quickly.

I agree writing GUIs by hand is very counter-productive.


They have a GUI designer for Swing.

https://www.jetbrains.com/help/idea/components-of-the-gui-de...

And integrate with Scene Builder for JavaFX.

JavaBeans are the Java version of VCL components.

Qt doesn't have a component ecosystem like Java or even .NET enjoy.

The company behind Qt is still trying to sell QML to C++ developers, looking from the set of talks at Qt World Summit.


Outsystems and Jetbrains are these examples, but on the other hand they are not; they are 'old'; they have both existed since 2000, and at that time pushing into the market was a lot easier. I was more thinking of a company starting now, so I'll check out Anvil.works. There are more new companies working in the space, for sure, but they all miss the breadth that Borland had (they really had a lot of cash and developers on hand in those days).

But yes, Outsystems (I worked with them and their product quite a lot in the past) could be considered a Delphi. But still not modern; it's rather painful to build the apps/sites people seem to want with it.

Jetbrains can be considered a Borland; I didn't think of that because I consider them more in the space of 'low level' programming tools (which, like you say, includes Delphi functionality, but a modern Delphi wouldn't be like the old Delphi; it would need a lot more innovation).


Anvil founder here!

The crucial difference between Outsystems(/Bubble/etc) and Anvil is that Outsystems tries to be a "no-code" environment, and we think that's a mistake (or at least, a different market).

Delphi and Visual Basic proved that writing code isn't the problem - code is the best way to tell a computer what to do. But writing code in five different languages to produce "Hello World" on the web...now that will slow you down.

(Count 'em: JS, HTML, CSS, a backend language e.g. Ruby/Python, SQL. We do everything in Python, which gets you going much faster. We just got back from PyCon UK, where among other things we got an 8-year-old building database-backed web apps in an afternoon. That's the sort of thing that used to happen with VB/Delphi.)


Thanks; that sounds very interesting. I will check it out tonight.


Was the talk recorded?


There's a healthy market of RAD/lowcode tools similar to Delphi these days. One example is Mendix: https://www.mendix.com/application-platform-as-a-service/

Full visual development, runs on a Scala runtime, can be deployed (visually) on Cloud Foundry, Docker/Kubernetes, etc.


Thanks for the heads up, one more for my list. :)


Funnily enough, I was looking at the title and just thinking that in 25 years' time I will still be programming in the same three languages, C, Pascal/Delphi and SQL, that I learnt 25 years ago.


Interesting. Can you elaborate more?


Imagine WebComponents finally becoming mature and having a programming environment for RAD on the Web, using a toolbox of drop-in components for doing application development.

There are already some attempts at it, for example https://anvil.works/


Unfortunately JavaScript is the present and, yes, we're screwed.


> If that's the future, we are screwed as an industry.

Agree 100%. However, I find Javascript pretty good for quick and dirty "MVPs".


> Functional is OK. Imperative is OK. Both in the same program are a mess.

I guess you're referring to languages that are not-quite-without-side-effects, but I'd say the biggest influence the functional paradigm has had on other (imperative) programming languages is actually the addition of higher-level data manipulation operations. The functional utility libraries you see for languages such as Python and JavaScript exist solely because sometimes a functional idiom like "map this onto this" or "take that only if this holds" or "let's make a new function by pre-filling these function arguments" is more intuitive than having for loops all over the place. And it mixes just fine with other imperative code.
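
To make the "mixes just fine" point concrete, here is a tiny made-up example in Rust, an imperative language whose standard library has absorbed exactly these idioms:

    fn main() {
        let scores = vec![3, 8, 10, 5, 7];

        // "take that only if this holds" / "map this onto this"
        let passed: Vec<i32> = scores.iter()
            .filter(|&&s| s >= 5)
            .map(|&s| s * 10)
            .collect();

        // ...right next to a plain imperative loop over the result
        let mut total = 0;
        for p in &passed {
            total += *p;
        }
        println!("{:?} -> total {}", passed, total);
    }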


Presumably animats is talking about things like "do notation" in Haskell, not innocuous cases of function composition or first-class functions in imperative languages.


I read it as being about Scala and similar languages, where you cannot freely rewrite function compositions because they may have side effects. It may not be a big problem from the programmer's point of view. But the compiler surely suffers because of that.

Interestingly, in Haskell-land the debate is about merging "do notation" and normal notation. In practice the difference is limiting, and arguably the types alone are enough to represent the difference.


I don't know, it's far from perfect, but is there a better way to do sequencing in a pure functional language? You have to be able to specify order of execution to tackle real-world tasks unless you do callback/continuation-passing style, right? I find that a lot less intuitive for most applications.


Do notation isn't about sequencing, it's syntactic sugar for the monadic 'bind' operator. Sequencing happens because of data dependencies. This is part of the confusing story about monads -- the fact that every monad is a monoid doesn't imply a straightforward linear sequencing, as something like the tardis monad illustrates.

https://wiki.haskell.org/Do_notation_considered_harmful

More constructively, it could be argued that IO just does not fall in the remit of functional programming. SPJ doesn't refer to Peter Landin much, but Landin's perspective is valuable. In 1964 he wrote "Is there some way of extending the notion of [arithmetic expressions] so as to serve some of the needs of computer users without all the elaborations of using computers?" (The Mechanical Evaluation of Expressions).

IMO this is exactly what the Haskell tradition is pursuing. Landin, quite modestly, doesn't anticipate that entire computer systems will be expressible in functional form. Implicit data structures are just one example of a mechanism that is incompatible with the idea that everything is an expression whose internal representation is managed by the language runtime (itself presumably involving mutable data).

I don't think Landin was experiencing a failure of vision--I think he saw clearly that some of the "elaborations" of computing are remote from a functional model.


Fair enough; I knew do notation was sugar for bind, but I didn't realize/forgot about the time traveling monad.

Follow up questions bc I'm still a Haskell noob:

Do you know if async/wait from Control.Concurrent.Async ensure linear sequencing?

Could the sequencing problem be solved if linear types make their way into Haskell? (They seem to mainly be about memory management, but I'm not sure about other potential applications)


I guess you're referring to languages that are not-quite-without-side-effects, but I'd say the biggest influence the functional paradigm has had on other (imperative) programming languages is actually the addition of higher-level data manipulation operations.

I tend to agree. The two big wins from a more “functional” style, from my perspective, are the clear emphasis on the data and the way effects are more explicit and controlled.

I want things like higher order functions and algebraic data types and powerful interface/class systems. With those I gain many useful ways to represent and manipulate data that I don’t have in most languages today.

In a world where most mainstream languages are just discovering filter, map and reduce on their built-in list types, a language like Haskell gives me, out of the box, tools like mapAccumWithKey that work with any data structure as long as it provides the specific, clearly defined interfaces required for the algorithm to make sense.

In a world where most mainstream languages are worrying about accidentally dereferencing nulls or whether there’s a proper type for enumerating a set of values, functional languages routinely use algebraic data types and pattern matching, and some go much further.

Arguably, these aren’t really functional concepts at all, in that you could have them just as well in an imperative language. However, in practice it is the functional-style languages that are far ahead in these areas, because they are a natural way to work in languages that emphasize composition of functions and careful, explicit handling of data.

I also want to know that I’m not applying effects on resources unintentionally, or sharing resources without proper synchronisation, or trying to apply effects on resources in an invalid order, or failing to acquire or release resources properly, or leaving resources in a mess if something aborts partway through an intended sequence of effects. This aspect goes a lot further than just making data constant-by-default, but it certainly doesn’t require trying to remove state and effects altogether. These things aren’t so much about making my code more expressive but about stopping me from making mistakes.

I want a language that will stop me from accidentally modifying a matrix in-place in one algorithm while some other algorithm has a reference to that matrix that it assumes won’t change. I don’t want a language that will stop me from ever modifying a matrix in-place. Sometimes modifying things in-place is useful.

I want a language that will be explicit about the initialisation and lifetime and clean-up of a locally defined cache or temporary buffer. I don’t want a language that tells me I can’t cache a common, expensively computed result 15 levels deep in my call hierarchy without changing the signature of every function on every possible path to that point in the code, or a language that will let me do whatever I want but only if I use some magic “unsafe” keyword that forfeits most or all useful guarantees about everything else in the universe as well.

In this respect, my personal ideal programming style for most tasks very much would be a hybrid of imperative/stateful and functional/pure styles, with the key point being that the connections between them should be explicit, obvious and deliberate.


Rust doesn't require use of unsafe for caching 15 levels deep if you use the right primitives, i.e. a Mutex. If you don't use safe primitives, then yes, you need to use the unsafe keyword to mark the code as unsafe. How is that unreasonable?


I was commenting on the beneficial influences from functional programming on mainstream languages, and noting that a purely functional programming style with no effects isn’t (IMHO) particularly necessary or desirable.

I don’t know much about Rust so I can’t comment much on that. The intended point of my example was that in a purely functional environment, you can’t have a local, low-level cache, because updating a cache is inherently stateful. So either you need something like a mechanism that lets you break the rules locally, like say unsafePerformIO in Haskell, or you need to infect your entire call chain with whatever mechanism you use to manage effects top-down. While that is perfectly reasonable, given the constraints you choose by adopting a purely functional language, I don’t think it’s particularly helpful.

My conclusion was that I’d rather have a hybrid of imperative and functional styles, contrary to the suggestion in the original presentation and in general agreement with stdbrouw’s comment.


Are there any languages you know of that are somewhat close to what you describe?


Unless I am misreading your question... Erlang / Elixir?

They are purely functional in the sense that NO VARIABLE inside your code can ever be mutable.

However, they do have ETS (which is an in-process cache inside the VM) which is fully mutable, and people have long made wrapper libraries around it for transparently working with mutable arrays, doubly-linked lists, queues, matrices, graphs and what have you.

The philosophy basically is "always work with immutable data except when mutable is more performant or is otherwise more practical". They don't shut the door on you, they just force you to make your intention to work with mutable storage very explicit and clear. That helps a lot when you go hunting inside your code for side effects, too.

As a maturing Elixir dev I can say this philosophy works incredibly well in practice. Code is smaller, much more readable, you don't worry about side effects like ever -- except in very, and I mean VERY RARE, occasions (in 1 year of working with it I only had to do that twice) -- and finding a bug is many times faster compared to Ruby, Javascript, PHP, Java.


None of the programming languages that I know well myself provides the sort of balance I’d like to see between:

(a) strong visibility of and control over effects,

(b) an emphasis on powerful tools to represent and manipulate data, and

(c) straightforward imperative/stateful aspects when required.

Of course that isn’t to say that no such language exists, but if it does then sadly I have yet to become aware of it. New ideas are always welcome…


You may want to try Nim. It is an imperative language with metaprogramming capabilities, some functional features and an effect system.


Yes, they exist and have been used for decades; two well known examples are Common Lisp and Scheme.

For example you can do functional, imperative and OO programming in Common Lisp, and it brings extensive features for working in all these three paradigms.

More recent examples, Racket and Julia.


There are several reasonably well-known languages that bridge the functional and imperative worlds in one way or another. However, I’m looking for more than just that. In particular, I’d like to have good tools for both imperative/stateful and functional/data-transforming coding, but all within a safe, structured environment in terms of effects/external interactions/observable behaviour.

Now, I am by no means a Lisp expert, so it’s entirely possible that I’m completely unaware of something here. However, I’ve yet to encounter much of an effect system in any flavour of Lisp. Indeed, it’s hard to see how the sort of explicit visibility and control of effects that I’d find useful could be achieved in a language with primarily dynamic typing using any of the approaches I’ve encountered so far.


Great points, but I have a question about one of them in particular:

> - Functional is OK. Imperative is OK. Both in the same program are a mess.

What do you mean by that?

My experience is that functional alone is impossible, since the only useful thing a program can do is through state changes; imperative is a-OK, and functional+imperative in the same program is the best way to do things (i.e. well-defined stateful areas surrounded by lots of functional code).


Once you get over the initial learning curve of the functional/pure approach to state/IO, it's far superior to imperative imo. You don't need to reason about global state - because everything is explicit, including passing around your state, you never have to worry about "what if someone else or some other code somewhere is touching this" again.


Imperative programming is fine imo, as long as it is done in a relatively small scope.


> My experience is that functional alone is impossible, since the only useful thing a program can do is through state changes

Some programs are simply pure functions. If your program entry point accepts a string and returns a string, then you can write useful things, such as compilers, image processing, grep, etc, entirely as pure functions.
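
A toy illustration of that shape, sketched here in Rust: the interesting work is a pure string-to-string function, and the only effects live at the program boundary.

    use std::io::Read;

    // Pure: the same input string always produces the same output string.
    fn grep(pattern: &str, input: &str) -> String {
        input
            .lines()
            .filter(|line| line.contains(pattern))
            .collect::<Vec<_>>()
            .join("\n")
    }

    fn main() {
        // The only impure part: reading stdin and printing the result.
        let mut input = String::new();
        std::io::stdin().read_to_string(&mut input).unwrap();
        println!("{}", grep("error", &input));
    }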


Unless you want logging with timestamps.


Logging with timestamps is not in opposition to purity.

All purity does is force you to change the type of a function that wants to log something, to reflect that it’s returning a value that, when evaluated at run-time, will have the side-effect of printing to the console. It’s still a pure function since it returns the same description given the same argument(s), the only difference is that when this description is evaluated by the run-time system, something will be printed to the console (in accordance with the description).

In other words, you’re not restricting the effects your program can have, only how you can choose to describe them (as first-class values whose evaluation has a side-effect that is not observable by your code).

It’s all explained perfectly in this talk: https://www.infoq.com/presentations/io-functional-side-effec...


My point was that it is not impossible to write useful programs as pure functions.

You can't counter that with an example of a program that you can't write as a pure function.


Sure, it's not impossible to write something as a pure function. But it's not an interesting statement to make unless you also apply the implied context that it may be desirable to do so. Do you think it is desirable to make these pipeline components as programs using pure functions?


Ok, to clarify - I didn't mean program as in a simple function. I meant a program as in application, something that accepts some real-world input and produces some real-world consequences.


Right - a compiler. That's a real-world application isn't it? A compiler can be a pure function - accepting source text as input, and producing machine code as output. Yes more complicated languages do more complicated things, but for several languages you could write a state-of-the-art compiler as a pure function.


Only if you ignore the dynamic state of the language, of associated libraries, of processor architectures, and so on - i.e. the context the compiler is working in and the mutable state of the language and development environment as a whole.

The idea of pure functions is a false friend in CS because it tries to solve the problem of state by wrapping it up and wishing it away.

It's true that state causes a lot of problems, but so many useful systems rely on mutable state - at practical application levels - that it might be interesting to design robust systems that manage state, context, and relationship instead of trying to create contrived examples of state-free systems.

There seems to be a cognitive bias against this in CS. Most developers appear to love puzzle systems made of hard little components with solid edges, and thinking in terms of context and relationships seems to be disturbingly alien. So there aren't many programming paradigms that explicitly work with contextual inference instead of trying to rigidly delimit interfaces.

But there are real prizes to be won from associative context-smart computing, and IMO the domain is wide open for innovation - because it may be possible to give up the pursuit of complete safety and predictability (which doesn't exist anyway) in return for new kinds of powerful, productive, and smart features.


> Only if you ignore the dynamic state of the language, of associated libraries, of processor architectures, and so on

I don't understand why any of this means you can't have a compiler as a pure function. A pure function can cope with things changing internally - it just creates new data structures to represent things that have changed in the old data structures and then passes the new data structures onto the next phase.

A pure function just means you can't do something like a package manager that needs to read files from disk or download things.

> but so many useful systems rely on mutable state

You don't have to argue this to me - I wrote the first half of my PhD on the importance of mutable state.

I'm just arguing against the nonsense that it is impossible to write a useful real-world application as a pure function.

I work in the VM research group at Oracle, and guess what? Our JIT compiler is basically a pure function. It takes in some bytecode and produces some machine code. It's structured a little differently in reality, but it is logically, and almost in practice, a pure function from one to the other.

I'm writing a presentation about this right now.


This is the best HN post i've read today. You should expand it into a blog post.

I also agree - context and state are important and useful, because many real-life problems or processes rely on such a model and thus translate naturally when state is a "first-class citizen."


Hm.

Fair enough.


Erlang / Elixir are 100% immutable inside the code (no var can ever be mutable).

However, they have a mutable in-process cache (living in the BEAM VM) that many people have written libraries around for stuff like mutable arrays, matrices, graphs, and many others.

It goes like this: do 99.5% functional programming and you have the imperative / stateful tools for when they are absolutely necessary.

This works. Extremely well.


> Interprocess communication could use language support.

+100

There have been a few attempts in this direction, but they have mostly been couched in the form of a whole new language that also embodies at least a half dozen other novel (i.e. unfamiliar) ideas as well. Contra the OP, I think this is an area where incrementalism does work. Extending a language people already know with a few constructs for IPC, much like fork/join or async/await have done for concurrency, is much more appealing. I've been thinking about this for a few years now. Maybe I should write some of that down and let people pick at it.


> - Interprocess communication could use language support.

I'm really interested in hearing more of your thoughts on this, since it touches on one of my personal research interests. What kind of language support for IPC are you looking for? Something in the vein of session types [1], which checks that two parties communicate in a "correct" sequence of messages?

[1] https://dl.acm.org/citation.cfm?id=1328472


Lower level than that. Languages should have marshalling support. Marshalling is a low-level byte-pushing operation for which efficient hard machine code can be generated.

I'd suggest offering two forms of marshalling - strongly typed and non-typed. Strongly typed marshalling means sending a struct to something that expects exactly that struct. That will usually be another program which is part of the same system. Structs should be able to include variable-length items for this purpose, so you can send strings. Checking involves something like function signature checking at connection start. This should have full compiler support.

Non-typed marshalling includes JSON and protocol buffers. The data carries along extensive description information, and the sender and recipient don't have to be using exactly the same definition.

Both are needed. Non-typed marshalling is too slow for systems which are using multiple processes for performance. Typed marshalling is too restrictive for talking to foreign systems.
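
As a sketch of what the strongly typed form boils down to, hand-rolled here in Rust with a hypothetical struct (a language feature would generate this from the declaration and check the signature at connection time):

    // Both ends agree on exactly this layout; encoding is plain byte-pushing.
    struct Order {
        id: u64,
        qty: u32,
        symbol: String, // variable-length field, sent length-prefixed
    }

    fn marshal(o: &Order) -> Vec<u8> {
        let mut buf = Vec::new();
        buf.extend_from_slice(&o.id.to_le_bytes());
        buf.extend_from_slice(&o.qty.to_le_bytes());
        buf.extend_from_slice(&(o.symbol.len() as u32).to_le_bytes());
        buf.extend_from_slice(o.symbol.as_bytes());
        buf
    }

The non-typed form (JSON, protocol buffers) would carry self-describing names or tags alongside the data, which is what makes it slower but tolerant of the two sides disagreeing about the definition.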


I don't disagree, but during my career I've found a surprising number of cases where ASN.1 and its practical encodings like DER, BER and PER work impressively well.

Human readability, however, has always been the selling point of terrible formats like XML and JSON.

Not sure how a binary-compact format can ever account for that. Maybe excellent cross-platform tools that allow you to inspect and modify the binary-compact format wherever it is? (I mean not only standalone CLI and GUI software; I also mean native browser support in the Dev Tools space and, going down the line in the future, transparent support for the format[s] natively in the programming languages / VMs themselves.)


> Previously the options were reference counts, garbage collection, or bugs.

I think you mean “lack of memory safety” rather than bugs. Garbage collection doesn't magically free you from finalization bugs, it just makes their consequences less disastrous.

> Formal methods are still a pain. The technology tends to come from people in love with the theory, resulting in systems that are too hard to use.

Clearly, the solution is getting rid of people, but it isn't entirely clear on which end to get rid of people.


Multi-threaded + async isn't a problem if the runtime system supports it.

Async + event-driven works transparently with higher-order FRP, with the usual tradeoffs of first-order vs higher-order FRP.

Multi-threaded + event-driven seems like a mess as soon as state gets involved. Some transaction-based system might be interesting.


One area I haven't seen any good solutions for is the interleaving of tests with production code. I think we need better ways to express tests without cluttering the production code and excessive mocking. What I'd like to do, and really can't in any language/tooling that I know of, is to extract some arbitrary subset of the code and surround it with tests or a test harness. I'd also like to specify various injection points for tests right as I'm writing the code in a way that doesn't affect the production code and contributes to its readability. Perhaps this is pieces of server and client code or various services all tested together. When I refactor the code I want the tests to "refactor" with the code. I don't want to need to rewrite my tests... Automatic test generation is another interesting area (beyond the basic table or algorithm stuff we might do in tests today).

It's almost like the "problem" isn't the languages themselves, it's the tooling around the languages. I want to be able to do a lot more than "compile", "run", with my code... I can imagine machine learning driven tooling being able to automate a lot of the mechanical aspects of code writing beyond the simple generate a closing bracket that an IDE can do...


It may not be exactly what you want, but D supports unit tests embedded directly in the production code:

  https://dlang.org/spec/unittest.html
It's been a real game changer for us in improving the quality of the code.


William Byrd's work in program synthesis is an interesting take on this, where an IDE can, guided by test cases written for a function, actually auto-complete the code itself while you write the function, or tell the programmer when they have something wrong by violating a test case. Of course it is impractical right now, but a good step nonetheless. It (Barliman) is demoed near the end of this video: https://youtu.be/OyfBQmvr2Hc


This is indeed an extremely interesting approach.

It was pioneered in the context of logic programming and Prolog by Ehud Shapiro in his 1982 PhD thesis "Algorithmic Program Debugging". The thesis was published as an ACM Distinguished Dissertation by MIT press and is available online from:

http://cpsc.yale.edu/sites/default/files/files/tr237.pdf

Together with Leon Sterling, Ehud Shapiro later wrote a very important introductory Prolog book called "The Art of Prolog".


MagicHaskeller[0] (which seems to sadly be offline at present) is a similar project that infers Haskell functions from properties like

   reverse "abcde" = "edcba"
Other, older projects include Exference[1] and Djinn[2], in decreasing order of power.

Also, this is really similar to how Idris and Agda work, except they use expressive types to generate the code (using the Emacs modes) rather than test cases.

[0]: http://nautilus.cs.miyazaki-u.ac.jp/cgi-bin/MagicHaskeller.c...

[1]: https://github.com/lspitzner/exference/

[2]: http://www.hedonisticlearning.com/djinn/


> interleaving of tests with production code

> tests without cluttering the production code and excessive mocking

IMHO, tests should be an integral part of production code. In MPWTest[1], tests are typically expressed on the class side of the class under test, in a category called Testing, though it's easy to override that. This solves 2 of my major annoyances with xUnit-style testing: dual class/test hierarchies and (with static typing) the need to add public interface for the tests, which don't have privileged access.

Mocking in particular and stubbing should be eliminated as much as possible, but this is not a programming language issue[2], more a "not making architectural assumptions too early" issue.

[1] https://github.com/mpw/MPWTest

[2] http://blog.metaobject.com/2014/05/why-i-don-mock.html


I recommend taking a look at Clojure Spec https://clojure.org/about/spec or Racket Contracts https://docs.racket-lang.org/guide/contracts.html


I get that clojure.spec is primarily a way to define a schema and validate against it, and that you can use it for automatic test generation, too. But I feel I fail to grok the whole extent of the possibilities it offers. Does it address any of the other things YZF mentions? Specifically, the "prevent excessive mocking" part?


I think this is a good example of Spec being used to guide the solution http://gigasquidsoftware.com/blog/2016/05/29/one-fish-spec-f...

I don't think Spec alone removes the need for mocking, using Clean Architecture is probably more important for that https://8thlight.com/blog/uncle-bob/2012/08/13/the-clean-arc...

If you structure your app so that any IO lives at the edges, it becomes possible to test all your business logic without mocking. Spec can help at that point by providing generative testing.

Since Spec



I can imagine something like a code-collapse indicator next to every method in a class (or function or code block, whatever), which would expand tests that can be run on-the-fly - even while you type. Kind of like how comments can be collapsed in some IDEs.

Technically, these could very well be simple commented annotations that point to a separate .test file or something, but are shown in-line by your favorite IDE or code-editor.


Don't Rust and cargo do this kinda well?


I really like how Rust does unit tests inline and in docs.
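
For anyone who hasn't seen it, roughly what that looks like (the function and crate name here are made up):

    /// Doubles a number.
    ///
    /// ```
    /// assert_eq!(mycrate::double(2), 4);  // run by `cargo test` as a doc test
    /// ```
    pub fn double(x: i32) -> i32 {
        x * 2
    }

    #[cfg(test)]
    mod tests {
        use super::double;

        // Unit test living right next to the production code; compiled out of
        // release builds by #[cfg(test)].
        #[test]
        fn doubles_odd_numbers() {
            assert_eq!(double(21), 42);
        }
    }

`cargo test` runs both the #[test] functions and the examples embedded in the doc comments.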


You can also have integration testing.


Forgive me, this is going to sound snarky but I promise I am earnest:

> I'd also like to specify various injection points for tests right as I'm writing the code

You mean, like methods/functions?

> extract some arbitrary subset of the code and surround it with tests

You mean, like modules/classes?


Kind of, but three big problems in practice are:

- runtime state you need to initialize if your code is written in a stateful manner

- other methods/functions in different modules/classes the code you want to test calls out to

- the fact that method/function and module/class separation is (and IMO always should be) primarily driven by needs of production architecture, not testing, means that it may not be perfect for testing needs

Add to that the failure of testing tools and methodologies (like "don't test private methods" meeting "separate out duplicated or complex code into private methods"), and I feel the problems are real.

----

Here's an idea that just popped into my head right now: how about "hash-based testing"? You take a code you want to test, like:

  private Integer foo(String bar, Frobnicator quux) {
    String frob = quux.invokeMagic(bar);
    return memberApi.transform(frob);
  }
and turn it into:

  Method cut = <<
  private Integer foo(String bar, Frobnicator quux) {
    String frob = MOCK(quux.invokeMagic(bar)) AS("frob");
    return MOCK(memberApi.transform(frob)) AS (frob.length() > 3);
  }>>;
  
  //continue testing cut()
The idea being, the compiler or whatever external tooling ensures that the original method in production code, and the inline-modified method in tests, are the same with respect to some equality/hashing function that always treats expressions "foo" and "MOCK(foo) AS(bar)" as equal.

This way, you end up being able to mock everything precisely, inline, in whatever way you see fit, with your tooling ensuring that the actual code stays in sync with the test, since whenever the original method changes in any meaningful way, you'll fail the code "equality" test.

Might be a stupid idea, I welcome comments.

(INB4 testing to implementation instead of the interface - if you have to mock anything, you're already testing to implementation, and this way you can inject testing alterations precisely, instead of having to turn your architecture inside-out to support IoC/DI/whatever the current testing-enabling OOP fad is.)


I just don't experience these problems since I moved to functional style; I find it hard to remember that they even existed. Separate state from logic, separate initialization from operation, and you never have problems with initializing state. If your logic needs to perform effectful operations, separate that out using a monad. Good code structure for production is the same as good code structure for testing, because the needs of code structure are the same in both cases.

Text substitution is impossible to reason about; magic test frameworks sound like a good idea because every kind of magic sounds like a good idea in isolation, but they'll always end up getting you in trouble. Plain old code, plain old functions, parameters, and modules, always win in the end, because every bit of brainpower you can save from understanding magic is brainpower you can spend understanding the details of your actual test instead.


I'm mostly writing functional-style too these days, even in imperative languages. So this was more of a thought experiment based on the testing I've done in the past or watch being done at my $dayjob.

That said, I didn't mean text substitution, but AST transformations (or rather, AST comparisons, since my example is defined in terms of AST equality with the MOCK magic thingie making the particular part of AST be ignored for the sake of comparison).


It's supposed to be this way but in practice it doesn't seem to work. With methods/functions it's a problem when they call other functions.

So really, the idea that a good design is also testable is somewhat of an approximation of reality. It has some elements of truth, but at some point making your code more testable actually makes it worse; often you need to make other compromises. Then the interactions of pieces of code typically happen in much more brittle system/integration tests. YMMV, this is just my experience.

A lot of interesting pointers in the replies, thanks!


I don’t do coding tests. I talk to the candidate about programming and I only require one interview. I haven’t made a hiring mistake in 6 years.


okay. care to give any other info? Usually when someone says I haven't failed in 'some impressive challenge in a long time' it means your challenge wasn't that impressive. How much interviewing have you done, what were the hard calls you were asked to make, what's your scheme?

When I was at Google, one of the hardest things I did was look at the marginal intern review scores and try to pick out from the big pile which ones we should look at again and which should not be looked at. There were all kinds of crazy-ass stupid interview questions that in my opinion were not very useful for classifying capability.


>okay. care to give any other info? Usually when someone says I haven't failed in 'some impressive challenge in a long time' it means your challenge wasn't that impressive.

Or they couldn't recognize failure


Uh, this is about automated testing of programs, not interviewing.


I'm an idiot.


I'd love to know how you do that, but the answer is always, "I just ask them the right questions and can tell by their answers," which is so vague it's useless.


The "Language Gap" slide seems massively overstated, or maybe I'm misunderstanding. We really have seen a lot of progress in the last 10-20 years, both in industrial languages and in academic ideas that could become the industrial languages of the next 10-20 years (e.g. Idris on the short end, Noether on the longer end). The author laments that pattern-matching is still not standard, but we're getting there; map/reduce/filter are standard in all new languages these days (they weren't 10-20 years ago), some kind of lightweight record feature is standard in all new languages, some level of type inference is standard in all languages. Yes, it's taken longer than you might imagine it should, but progress is happening. Likewise formal methods - they may not be practical in 2016, but there's a lot more awareness, a lot more work being done, and people are starting to try to take the useful parts and apply them in more and more industrial settings. Likewise graphical representation of code - not the LabView nonsense that's exciting to talk about at cocktail parties, but the little touches that today's IDEs do almost invisibly - highlighting, mouseover information, outline views, smart code folding.

I wish we were better at communicating about programming languages. I wish we were moving faster. But despair is unwarranted. We really are in a much better place than 10-20 years ago, and the next 10-20 years look set to bring more improvements.


10-20 years ago some of us could use Smalltalk, do systems programming with strong type safe languages, use RAD environments like Delphi, release applications in Prolog, for example.

To me it seems we are catching up with the past, and as someone who was already programming in those environments, it looks like we have spent 10-20 years losing our tools and educating the masses, only to get a taste of how things used to be.


I've certainly seen cases where we take one step back in one area to take two steps forward in another; where it takes 5-10 years to get language C that can do something that language A we were using 5-10 years before that could do - but only if we forget that we also wanted some capability in the language B that we couldn't do in A, and C is the first language that manages to synthesise both. And industry is always going to be a long way behind the cutting edge - most of the features we're excited about today are things that were present in ML. But on the whole it feels to me like both a) the mainstream industrial programming experience today is better than it was 10 years ago and b) the academic cutting edge of programming language design today is better than it was 10 years ago, and I expect both those things to continue to be true.


> The "Language Gap" slide seems massively overstated, or maybe I'm misunderstanding.

The language gap is relative to where he thinks we should be aiming for, what the author thinks is possible. If you think that we're already "there", then there is no gap for you, and there's nothing wrong with feeling that way.

Personally, I agree with the author: we can do vastly better. I think it's a failure of imagination and effort, not potential.


From my perspective as a developer for the last 30 years, it's been two steps forward, and 1.5 steps back.

I think JavaScript does some amazing things, but it reminds me of Visual Basic in the 90s or Flash in the 2000s. Anyone could write code (some good, more horrible) with it and do cool stuff.

But the maintainability is horrible. I hope TypeScript comes to the rescue!


My thoughts are that we don't really need more languages. Arguably we don't need better ones either, because they aren't the problem in general computing. Instead we need better design paradigms that better let us model complex requirements and systems into code. Let's have new languages that then support those paradigms.

We continue to struggle abstracting complex problems using functional decomposition, structured analysis, information (data) modeling, and object-based decomposition.

Many newcomers I meet only know how to model problem-domain concepts as data in a database, with behavior and constraints acting on that data in a separate layer, organized using functional decomposition. Of course that layer increasingly approaches a 'big ball of mud' as size and complexity increase. Sounds a lot like we are back to the data-flow modeling that was so popular in the 1980s, in a new guise.

A focus on programming languages in my opinion, masks the real issues we face.


> A focus on programming languages in my opinion, masks the real issues we face.

Indeed, the major problems of programming languages can hardly be solved within the field of programming languages itself, as is being attempted now.

I would say that one needs a new programming or computing model, so it is not about languages. At least that is my conclusion after 10+ years of research and attempts to develop such a new programming paradigm. And although I have made quite significant progress (concept-oriented programming), the more I do and the deeper I go, the more fundamental problems I meet. And these problems are not about programming languages at all. It is more about "how a system works", "what is a system", "what is computing" etc.


Such a huge research effort has to have phases of some sort. At a certain point you should stop, re-evaluate, and say "okay, where I am now is good enough to solve problems X and Y, most of the time".

Otherwise it's endless and one loses motivation.


Much of the time, new languages lead to new design paradigms. As a general rule, newer languages are more abstracted than older ones. When people don't need to get hung up on the intricacies of low level programming, real progress can be made on the design paradigm front.


I would say that it is a two-directional dependency:

  language constructs <--> design patterns
Programmers experiment with and accumulate various design patterns using existing languages. Then the most useful of them are implemented (frozen) as programming constructs. Then programmers experiment with these new programming constructs and come up with new design patterns. And so on.

In fact, these design patterns and language constructs come in (anti-phase) waves.


I'm curious which languages have resulted in which new design paradigms?


Two good examples:

Erlang led to microservices and the Reactive Manifesto. The Open Telecom Platform is still a very modern microservice library (in my opinion) after more than three(!) decades.

ML led to programming against generic interfaces (http://www.cs.cornell.edu/courses/cs312/2006fa/recitations/r...)


Elm -> Elm Architecture


I agree, and from a different perspective, I would compare this to literature. In English literature we have many different paradigms: Victorian literature, Modernism, Postmodernism, Postcolonialism, etc. The language has to be expressive, and in being expressive, it can express any of these different paradigms. The fact that the vocabulary is shared between the "eras" of literature is only a bonus, which makes authors more flexible and more able to experiment with actually new concepts, rather than simply reinventing the same old vocabulary.

We then have Chinese literature and Indian literature, and these have very different paradigms. Perhaps it is even hard to effectively translate from Chinese to English. But the actual variance within a given language is still greater than the variance between the languages.

And like human languages, the vocabulary is often arbitrary. In human languages a dog is called a dog not because it makes the sound "dog" and not because it looks like the letters 'd', 'o', and 'g' but for entirely arbitrary reasons of phonetic shift and arbitrary initial designation.

And in Lisp you take the 'car' of a tuple and in Haskell you take the 'head' of a list. But the two concepts are very much the same. However, the distinction between continuation passing style (CPS) concurrency and the actor model can be expressed in either language just as well. And the distinction between CPS and actors is FAR greater than the distinction between calling the basic function "head" or "car".


In Haskell you don't ever take the `head` of a list, because `head` is an evil non-total function. Instead you pattern-match the list: in one branch, the head is given to you for free; in the other branch, there's no head at all.
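A tiny sketch of what that looks like (function name made up):

    -- `head []` throws at runtime; the pattern match below forces the
    -- empty case to be handled, so the function is total.
    describe :: [Int] -> String
    describe []      = "empty list"
    describe (x : _) = "starts with " ++ show x   -- the head is handed to you by the match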


I don't think we've reached the limits of what can be done with better programming languages, if only because there's already such a range in terms of what today's languages do - I really do think there are languages in use today that are multiple orders of magnitude better at general computing than other languages that are in use today (as at least nominally general-purpose languages).


I was going to say something which I think may be similar to what you’re saying.

Software and business systems are diagrammed with totally ad-hoc “flow charts”, bubble and arrow diagrams, and less ad-hoc sequence diagrams and UML diagrams. We need advances in formal ways to model concurrent processes, from the level of threads to concurrent business processes.


In a sense yes, although I suggest that a "business process" is too broad and difficult an abstraction. Better for most [1] systems in industry and commerce to elicit and model sequences of recorded events [2] involving interactions with things, people and places in different contexts, with support for constraints [3].

Modeling the above in a business sense, with support in existing paradigms and languages is already a solved problem, just a little known one.

---

[1] The other type of system in industry and commerce being the continuous system that isn't based on recorded events, but instead on logging errors and abnormal circumstances, e.g. an elevator control system or an automated warehouse delivery system, or the engine monitoring software in your car.

[2] Recorded for business or legal reasons.

[3] It's those constraints that prevent an elevator from moving with its doors open, or billing for that product not shipped, or allowing someone to vote twice.


The distinction you draw is unnecessary; both types of systems deal with planned responses to events, and both have to deal with out-of-spec inputs.

And the recognition of planned responses to events as the unifying abstraction for system analysis is not new: it's at the heart of the structured analysis/design approaches of the 1960s and 1970s, which, while displaced somewhat by object-oriented analysis and design, are echoed in more recent tools and technologies like BPMN and Flow-Based Programming (though the analytical methods haven't come back as much as the tools have).

But, no, modelling for both validation by domain experts who are not computing experts and implementation by developers who are not domain experts is not a solved problem in the general case.

I do agree that a lot that has been learned that is useful in finding adequate solutions to that in specific cases is not as widely known as it should be.


I’m not sure I understand everything you said there. Software engineers are often tasked with designing and implementing “business processes”, just as they are tasked with implementing correct logic in a concurrent threaded program, or system of microservices. Don’t you feel that when they step up to the whiteboard and draw yet another completely ad-hoc diagram, for any of the above problems, with no agreed rules of composition or semantics, that these people, who often have degrees in computer science and mathematics, are basically communicating using cave paintings?


There are some that can walk up to a white-board and (repeatedly and predictably) create a domain object model representing complex business requirements. One directly convertible to code.

They can do this because they know a set of higher order patterns, beyond just a knowledge of "objects" and "classes". Beyond just a design approach of identifying classes as the "nouns" in a problem domain.

These higher order patterns represent those business processes, as you call them.

Sadly such a skill is very rarely seen, or even in demand, given that data modeling is the basis for most design today.


OK, so that covers data modeling, and I agree with you that that is a rare and yet critical skill. But if I'm envisioning correctly the talents that you're describing, that still leaves a large amount of concurrent/asynchronous logic undescribed/undesigned. I expect you know what I mean; maybe a jumble of words like "multiple superimposed event-driven parallel state-machines" gives the idea.

You mention a "domain object model". Are you talking about diagrams that capture much of the dynamics/asynchronous behavior that I described (in which case, where can I learn about such a diagramming language?) or is it more restricted to the nouns?


I am talking about a domain model of the problem under consideration. That model is the same thing from analysis/design to code. It is independent of technical concerns and plumbing. Independent of a particular interface or mechanism for persistence. I'm talking about using standard programming language features, rather than yet another (diagrammatic) language that requires translation to executable code.

Those technical concerns are dependent on the non-functional requirements of the particular required solution. This includes aspects of concurrency.

I suspect you're interested in a "framework" that supports both problem-domain modeling and specific technical circumstances?


I would love to see programming as a dialogue between user and computer (programmer and compiler). For example:

Compiler would infer the types, and the programmer would read it and say, oh, I agree with this type, but I disagree with this type, that's perhaps wrong, this should be rather that type. Then the compiler would infer types again, based on programmer's incremental input.

Data structure selection. Programmer would say I want a sequence here. The compiler would say, I chose a linked list representation. The programmer would look over it, and disagree, saying, you should put this into an array. And compiler could say, look, based on measurements, array will save this much space but list will be this much faster.

Code understanding. Programmer should be able to say just, I don't know what happens here, and the compiler would include some debug code to show more information at that point.

Or take refactoring. Programmer would write some code, computer would refactor it to simplify it. Then programmer would look over it, and say, no, I rather want this, and don't touch it, and he would perhaps write some other code. The compiler would refactor again...

But all this requires that there is syntactically distinct way (so that perhaps editor could selectively hide it) to specify these remarks in the code, both for computer and programmer. So each of them should have a special kind of markup that would be updated at each turn of the discussion. Because you don't want to just overwrite what the other side has just said; both are good opinions (which complement each other - human understands the purpose of the code but the computer can understand the inner details much better). So, to conclude - I wish future programming language would include some framework like this.


Programming in Lean, Agda, and Idris has been quite a revelation in terms of interactive type system exploration. Granted, they can be flaky at times (Lean especially), but it's a tantalizing glimpse at what could be around the corner. Hazel[1] is also a pretty exciting look at advancing the idea of 'programming with holes', as is Isomorph[2]. Lots of exciting things around the corner!

[1]: http://hazelgrove.org/

[2]: https://isomorf.io/


> Programmer would say I want a sequence here. The compiler would say, I chose a linked list representation. The programmer would look over it, and disagree, saying, you should put this into an array.

To some extent, this is the promise of object-oriented programming, that in this particular instance has failed a bit in mainstream languages. It's true that in, say, Java, you have to massively refactor your code to switch between arrays and linked lists, because you use different syntax ([] vs. method calls) to access elements. It could be a bit better in C++ due to operator overloading; you could hide your actual container type behind a typedef, and as long as both container type A and container type B support the same operations with [] that you actually use, you can freely switch between them.

In Smalltalk, every container class derives from a single Collection class, and they have very very similar APIs. There you can, to a large extent, just program against the Collection API and not care much about the actual type of collection you have in your hand. You still have to choose one! The compiler won't do it for you, not in the way you envision. But the idea is that if you program against the generic API, then profile/benchmark your code, it should suffice to change a single line of the program to try a different representation to compare against it. (Of course some things won't work. You can't index into a set.)

Other dynamic languages should be similar, Python for example. But I think you have to work harder to achieve full genericity.


The types part is what some people already do with Haskell and Idris. I do personally favor writing the large-scale types beforehand, because that gives the compiler a chance of saying "look, your program is wrong", which is way more useful than "hey, your program has this type". Besides, abstract-type-driven programming is an incredibly good methodology where it's applicable.

On code understanding, what makes it better than the programmer inserting the debug statements themselves? It avoids some misunderstanding on the computer's part.

On refactoring, some IDEs do that. I'm on the fence about its usefulness.


Thanks, I will respond to other people here as well.

I know a bit of Haskell and want to look at Idris, someday.

My point was, there should be a clean (best if even syntactic) separation between code itself (i.e. what should actually be done) and its properties (like types). Also, because there are two points of view about the properties (human and computer), this separation needs to be there twice (so for example, each type could be specified by computer and by human). I haven't seen a system that would do it, on a systematic level in the programming language. I only gave examples to show where it could be used.


You mean that types should reside in different files, and be input by different means?

Now I get the entire programmer-compiler conversation. It is interesting. I can see some potential there. Yet, I shrug when I think about all the sparse metadata that I will have to check once I discover a low level bug (there is a reason I'm not programming in Smalltalk).

Somehow the best place for all that stuff to live is right there at the source code. That means the compiler (IDE) should be editing your files, so it better have a great integration with your version control system.


> Somehow the best place for all that stuff to live is right there at the source code.

That's what I am saying, and that's why it has to be syntactically distinct.

Also it needs to be clear what was input by computer and what was input by human, that's the second distinction. Because of how the conversation works. You don't want computer to erase human input, but the result also has to be logically sound. So you need to know both inputs for the comparison and synthesis, which happens at each conversation turn, both human's and computer's.

And that's what "type holes" and similar systems lack - they only record the result of the synthesis, not the two different opinions. Which is IMHO wrong.


> You don't want computer to erase human input

I will contest that one too :)

As long as it is interactive enough, and the history is well marked, there is no reason not to rewrite human code.


I've been thinking a lot about nearly these exact same things. We desperately need better ways to deal with derived textual data. Why do we make the programmer guess which data structure will work best for a particular task, when we could easily try it each way and record the performance, and pick the best? A big part of the reason must be that we have no good strategy for storing that data and making that choice in an ongoing and living way, inside of a code repository. We suck at dealing with derived data on the meta layer above our programming languages.

Email me at glmitchell[at]gmail if you want to chat more about this.


I am curious if you've tried IntelliJ with Java or Kotlin, with the various mod cons applied? Because it's quite similar to what you're asking for.

Kotlin does type inference. You can see what the inferred type is if you enable inline type hints. It's not a part of the source code, but it looks as if it is (modulo styling). As you work with the code you can see the inferred type change in real time.

OK, data structure selection, it won't help you pick between a linked list and an array. That said, I'm not sure that feature would bring much benefit to most programs. You almost always want arrays.

Code understanding: if you're unsure what's going on at a particular point, you can just add a breakpoint and run the program. You can then explore the contents of the program, evaluate arbitrary expressions, you can add "evaluation breakpoints" that print out the values of those expressions without stopping the app (on the fly ad hoc logging, in effect), you can investigate how the code you're looking at relates to other code, what the data flows are, what the location in the type hierarchies are, etc. There's a lot of ways to look at the program.

Refactoring; this is the point I went down this train of thought. Because a modern advanced IDE like IntelliJ can do this sort of thing already. It can do things like spot code duplication and fix it for you by extracting a method, in real time, with a single keypress triggered by subtle visual hints like a soft wavy underline. It can convert code between imperative for-loops and functional pipelines of map/filter/fold/etc, in both directions. It can identify and automatically delete unused variables, function parameters, object properties and so on. "Simple" is somewhat in the eye of the beholder, but it's a pretty close realisation of what you seem to be asking for.

The dialogue is not had through markup in the code but rather, through the IDE giving its suggestions using visual hints, and the user starting an interaction through a keypress that brings up a menu of suggestion options ("intentions"), which may in turn lead to more options and so on.


wrt your first paragraph there, I urge you to take a look at Idris. That's the exact workflow.


> Compiler would infer the types, and the programmer would read it and say, oh, I agree with this type, but I disagree with this type, that's perhaps wrong, this should be rather that type. Then the compiler would infer types again, based on programmer's incremental input.

This is possible today, at least in Haskell. (and I'd guess in OCaml too ... surely also Scala?)

Your proposal for adaptive data structure selection based on benchmarking is intriguing!


This talk is right that effect systems aren't popular yet, except in the way Haskell does them. But it's wrong about the trajectory of languages. Right now is the best time to be interested in using cutting-edge languages in practice. We're also seeing an explosion of new languages with interesting ideas, from Rust to PureScript to Elm (in the author's preferred realm of typed languages). And industry is backing major post-Java languages like F# and Rust.

In short, the near future of PL is great, and exciting stuff keeps happening. Don't believe the naysayers.


> Lots of people are reinventing Smalltalk on a Mac. (See Bret Victor and Eve).

At last, someone noticed! ;-)

Though from that expression, what the author doesn't seem to grok is why having a Smalltalk-like environment is desirable; maybe not as the primary way to program computers but certainly as a tool alongside.

It's a shame that a family of programming languages that build on and expanded that model hasn't gained more traction in the industry (not necessarily for people who dedicate their lives to build complex software with a highly general programming language, but for the rest of us).


I find the jab at Bret Victor especially undeserved. He's an interaction designer (a really good one who sees through all the fads[0], which is kind of the opposite of what this one-liner implies). His focus is on better interface design, not formal language design; why criticize someone for something they're not trying to do?

And it's not useless; we probably wouldn't have had Elm without Bret Victor's Inventing on Principle[1][2]. And there has been some progress in that direction of interface design of, for a lack of a better term, "programmable environments": look at Apparatus, for example[3]. Where would you even fit that on these slides?

[0] http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...

[1] http://elm-lang.org/

[2] https://vimeo.com/36579366

[3] http://aprt.us/


Quite true. I find it likely that the next revolution in programming languages will come from designing a PL that's a good fit for these programmable environments. Maybe it should break from current undisputed conventions like the radical separation between "data" and "source code", and be more like a spreadsheet.

End-User Development has lots of under-explored ideas on how to build software automatisms that don't require the end user to learn a hard formalism (even if such formalism exists as the basis for the system). Though I understand that programming language theorists are not interested in that angle of the evolution of PLs.


There is a dichotomy between sealed programs and evolving programs. Evolving programs like Smalltalk or REPLs are great for exploratory work. Almost everyone wants sealed programs for systems to work reliably. This runs along the lines of Ousterhout's dichotomy as well - scripting vs systems languages.

The engineering break through will be when we have a scripting/evolving system that can be more easily distilled into sealed systems. So people poking around in a spreadsheet will be able to turn that into a reliable application.

IMO, TDD and BDD could be seen as attempts to do exactly this.


Do you see this as a useful dichotomy? In reality no system is truly sealed - it's just temporarily sealed between evolution steps, no?


I do find it a useful dichotomy because I work with people who evolve their code and wonder why I get my back up about monkey patching (because it's often my job to turn research or prototype quality code into reliable code).

If you've ever had to make another person's research or prototype quality code more robust it can be easy to wonder how on Earth they work the way they do. Having this dichotomy in mind makes it easier to have more sympathy.

It works both ways. Researchers need to understand why platform development moves so slow. This dichotomy helps reason about it.

But you're right, between evolution steps the system is not sealed. That's a concern and why projects like Debian are keen to have reproducible builds. It's also a big part of why containerization is becoming so popular.


Ah I see. So with 'sealed' you mean things like reproducible builds and being able to determine the exact set of sources, etc?

The containerization movement is interesting - while having these 'sealed' virtual environments and reproducible system builds is the right direction, I'm amazed at the size and weight of each of these.


Sure. By sealed I mean you compiled the program, you might have a Turing-incomplete configuration (e.g. ini, toml, json) and you can reason about the program. And by evolving (or, I guess, unsealed) you are in the process of editing the program. Or you monkey patch it when you run it.

But with things like LD_PRELOAD, different .so minor versions, etc, a compiled program suddenly straddles the categories. Hence my comment about reproducible builds and containerization.


> The engineering break through will be when we have a scripting/evolving system that can be more easily distilled into sealed systems. So people poking around in a spreadsheet will be able to turn that into a reliable application.

Yeah, I see that as a likely evolution as well.

In some ways, an IDE is already an exploratory system with a focus on delivering a sealed system - the final compiled software represented by the source code. I believe this combination is what made IDEs so successful and universally desirable, and future programmable environments should share these traits.

Auto-completion, refactoring and code-browsing tools support the exploratory work. The problem is that IDEs are not so good at exploration as a REPL because they're unable to store intermediate results and states. TDD could then be seen as an ad-hoc way to store such intermediate state.


Agree on the benefits of IDEs and REPLs. An IDE/language with integrated notebook-style REPLs might be useful too. Maybe you want to create parts of the system by interacting with a REPL (instead of writing a static block of text) - and then you'd want to record how you created this specific artifact.

The other thing that would help, and my personal wish, is better simulation and the ability to see indirect effects of changes - instead of only showing what is directly modified (i.e. a line of code) and leaving the simulation up to the human.


> Maybe you want to create parts of the system by interacting with a REPL (instead of writing a static block of text) - and then you'd want to record how you created this specific artifact.

Yes, exactly. You get that with a REPL (e.g. Lisp or Python) and copy-pasting to the source code files, but it's far from ideal.

> The other thing that would help, and my personal wish, is better simulation and the ability to see indirect effects of changes

So, sort like explorable explanations[1][2], but aimed at developing a new model and not just exploring one that has been already made? That's what I'd like to see, too.

[1] http://worrydream.com/ExplorableExplanations/

[2] http://worrydream.com/TenBrighterIdeas/


Yes, somewhat like these, but having the ability to inspect and simulate effects on intermediate forms and systems. So something along the lines of light table but being able to show more than just the immediate program.

Copying something I wrote in another thread: I want to see not just the immediate program I'm manipulating (text or otherwise) but also the implications - the affected parts of various other intermediate representations all the way to the running, live system that will be affected.


Notebook environments have a lot of traction in industry. Is that not in line with what you're hoping for?


Yes, but they are only available for a few languages, and they're not a tool that is recognized as beneficial to programming in the large, like for example IDEs are.

Moreover, having direct introspection of a model stored in a local notebook is still quite limited compared to having it available in the whole environment, as Smalltalk or HyperCard did. There are a few systems trying to explore that "structure of the project" approach, like Leo Editor or the Smallest Federated Wiki, which could be a better basis for an "Inventing on Principle" tool.


This is a great presentation; I wish I could hear the talk track. While we can all differ on the right "winner" for the next programming language (I don't think Clojure is the right answer, someone else might), we are all stuck with the same set of facts - and this covers the state of things very well. Most importantly, it explains certain truths of the social/economic ecosystem for programming languages - which is what gave us Java, Python, Javascript, and a few other really popular systems that seemed unlikely to succeed when they first appeared. The reasons for their success have just as much to do with "ecosystems" as with language features.


On twitter he says:

"The thesis was that programming in 2030 will have very advanced research languages but mainstream languages will effectively stop advancing and we'll be last [sic - left?] with a vast insurmountable gap between the two."

That provides some context for what the slides are about.


My thoughts exactly. It's difficult to understand this presentation without the discussion that goes with each slide; it's too easy to just read your own bias into the meaning of each slide.


“It just feels readable”. -> “It looks like this other language I know.”

But: that language might be English (or other natural language). Which is to say maybe computer languages are leveraging the human "language faculty" and rightly so. Which may mean that some languages are always going to feel more foreign.


Considering how many programmers will tell you that C-style semicolons and curly braces are more readable than using keywords, I think they've got a point.


At some extremes, yes, but, for example, I've spent at least twice as much time programming C and C++ as I have with Python, yet I still find Python generally more readable. Maybe if the ratio were 100 to 1 instead of 2 to 1 I'd feel differently, but that suggests there are actually intrinsic differences in readability apart from just familiarity.


This is a brilliant synopsis of the state of the art -- truly fantastic -- and yet the presentation concludes with, "The innovation won't happen because I can't see where it will come from," and that's not entirely fair. Some areas he covers aren't exhaustively covered and what's missing is quite interesting. In other words, cheer up, there's more hope than what's shown here.


I think this is just frustration with how little direct incentive there is from our economic system to fix the problem. The only chance of this paradigm existing is as a long shot project that is gifted to society, and the person who makes this gift runs the risk that it doesn't succeed and all the effort is wasted.


I found that the weakest part of the presentation.

Kotlin and Ceylon both came out of corporate/industrial development. C#, JavaScript and Java continue to evolve entirely due to industrial funding.

I agree that academia is not a great place to do PL research as all the other non-idea aspects of a successful PL don't get enough attention. But industry is doing OK at launching new languages. Perhaps he feels that's "incrementalism" but there's nothing wrong with making solid upgrades that aren't attempts at unicorn style magic cure-alls.

It'd also help if we collectively got over our demand for our tools to always be open source. This has solid justifications behind it but it ensures that programming languages will always be spinoffs of other endeavours rather than having truly focused attention in their own right. When you take away the profit motive, you get stagnation, and that seems to be the main point of the presentation.


> The only chance of this paradigm existing is as a long shot project

I couldn't agree more but I also am not as pessimistic as he is at its chances of happening.


The central claim isn't that innovation won't happen. It's that they won't go mainstream because nobody has both the means and motive for good tech transfer work.


What people who do academic analysis of programming languages often miss is culture. So much about what programming language gets used has to do with what ecosystem it's been targeted at and what colleagues in that ecosystem are using.


I couldn't agree more and was just making this point to someone last week - for many of the well-established languages, 80% of the reasons to use or not to use them is the associated culture, from the qualifications of the average applicant to the prevailing approaches to solving problems.


It looks like the presentation stops short of any strong conclusion.


It's still pretty useful as far as identifying the problem is concerned.


Hype is too strong and the good ideas of the past are niche topics. Computer science should focus more on its own history and realize what has been done and what needs to be taken further. I wish I had studied much earlier things like process calculi, PL/type system theory, alternative OS design and verification approaches. Just by chance I pick up important information from the web or in rare master's courses where we are just a handful of students (while the machine learning courses get hundreds).


"Where will the next great programming language come from?"

Interestingly Scala has come from Academia, Industry, and Hobbyists. And for me it's already the next great programming language. Yeah, it has some warts and is hard to learn but that's true of all great things. :)


> Scala has come from Academia, Industry, and Hobbyists.

That's an interesting point. The slide implies there will be no next great programming language because neither academia nor industry nor hobbyists can deliver it, but it overlooks the possibility of a combination working together. For example, Rust started as Graydon Hoare's hobbyist language and then development was sponsored by Mozilla.


Yeah this comment stuck out to me in the presentation. Industry has a great record of creating + supporting languages. C#, F#, Dart, Swift, Rust, D are all languages that are actively supported by industry, and some were even created by industry too. Just seemed like a weird statement for the slides to say that new languages just aren’t going to appear.


If anything, almost too many of them. In the esoteric language area, every new language takes away from libraries for other ones as the communities get spread out. I do think the spread of ideas from that setup is good though.


But how many of those languages include substantial innovation? To me they seem to be mostly rehashing existing technology with slight cosmetic tweaks in order to serve the profit interests and ego of their creators.

Now, whatever innovation this proliferation of languages may bring, it also brings fragmentation which acts as a counter-weight on the value created for the industry as a whole.

Whether the net effect is positive or negative is very difficult to tell.


Perhaps the future of programming isn't in linear deterministic code but in stochastic or probabilistic languages dealing with machine learning or biology. Maybe we're hitting a Gödel/Turing-like wall, and as we strive for more perfection and provable soundness, we will find we can't scale up to bigger and bigger systems. We'll be stuck with human-debuggable linear languages like we have today, or systems which operate more like a composition of machine learning models.


Of course, in practice we know it is a lot easier to build and use probabilistic models. However, I hope we don't give up on provable systems.

If, given enough time, math can stay pure and adequately abstract enough to eventually describe our entire universe concisely, shouldn't code similarly have no limitation?


TL;DR Everything is awful and nothing will save you


I like the slides and agree with the widening language gap. I have a few comments about the conclusions, though.

> The UPenn dependently typed Haskell program shows a great deal of promise and is likely to manifest a decade before other DT languages generate practical backends.

I don't agree with this at all. The design problems with tacking on dependent types to an existing system are massive - it's a research problem for a reason. On the other hand, writing a "ghc quality" backend for Coq/Agda/Idris/F* seems difficult, but at least it's an engineering problem instead of a rough idea.

In particular, CertiCoq is a compiler for Coq written in Coq, and the main problem here is verification. Simply writing a compiler is no harder than writing a compiler in any language.

> Interesting ideas out of Microsoft Research on SMT solver directed programming editors that enforce invariants and can generate code during development.

Another interesting Microsoft Research project along the same lines as Dafny is F*. Both are nice, but F* is closer to modern dependently typed languages.

> Lots of non-local reporting problems associated with using unification during type-checker.

I would argue that the problem is that we use constraint solving for type checking. For example, strict bidirectional type checking leads to more tractable errors, since it's straightforward to follow the compiler's reasoning. On the other hand, bidirectional type checking is less powerful, so it's not like there's a silver bullet here.
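For illustration, here is a toy sketch of the two modes of a bidirectional checker over a made-up mini-language (everything below is invented for the example): in checking mode the expected type flows inward, so a mismatch is reported right at the subexpression that disagrees with it, with no unification state to untangle.

    -- Toy bidirectional type checker for a hypothetical mini-language.
    data Ty = TInt | TFun Ty Ty deriving (Eq, Show)

    data Expr
      = Var String
      | Lit Int
      | Lam String Expr
      | App Expr Expr
      | Ann Expr Ty        -- explicit annotation: (e : t)

    type Ctx = [(String, Ty)]

    -- Synthesis mode: the expression tells us its type.
    infer :: Ctx -> Expr -> Either String Ty
    infer ctx (Var x)   = maybe (Left ("unbound variable " ++ x)) Right (lookup x ctx)
    infer _   (Lit _)   = Right TInt
    infer ctx (Ann e t) = check ctx e t >> Right t
    infer ctx (App f a) = do
      tf <- infer ctx f
      case tf of
        TFun t1 t2 -> check ctx a t1 >> Right t2
        _          -> Left "applying a non-function"
    infer _   (Lam _ _) = Left "un-annotated lambda: cannot infer its type"

    -- Checking mode: the expected type flows inward, so errors stay local.
    check :: Ctx -> Expr -> Ty -> Either String ()
    check ctx (Lam x body) (TFun t1 t2) = check ((x, t1) : ctx) body t2
    check _   (Lam _ _)    _            = Left "lambda checked against a non-function type"
    check ctx e t = do
      t' <- infer ctx e
      if t' == t then Right () else Left ("expected " ++ show t ++ ", got " ++ show t')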

> Type-safe OTP.

I wish more people were working on things like this. There's some theoretical work on calculi for distributed systems, but once they encounter the real world they inevitably become horrendously complicated.


Regarding the effect systems slide: one of my favourite features of the Nim programming language is its effect system.[1] It tracks IO effects, but more importantly it also tracks the exceptions that are raised by each procedure. This allows for a very nice implementation of checked exceptions, and docs that list the possible raised exceptions of each procedure.

1 - https://nim-lang.org/docs/manual.html#effect-system


"Use the right tool for the job" is not the dumbest cliche in software... in theory. But it requires actually knowing what options are available, what their strengths and weaknesses are, and what problems (various kinds of yak shaving) the job will throw at you. Picking the tool that can best reduce the yak shaving is the difference between professionalism and masochism.

In practice, though, a lot of people quote this who don't know what options are available, what their strengths and weaknesses are, and what the problems of the job will be, and so when those people quote it, it's just a cliche.


In case anyone is interested in a reasonably engineered language that supports closure serialization (mentioned as "hard" in the last slide), check out:

Gambit Scheme: A fast Scheme implementation. http://github.com/gambit/gambit

Gerbil Scheme: Provides full module and syntactic tower on top of Gambit Scheme. https://github.com/vyzo/gerbil


Does Guile (or even Chicken) do anything similar?

(My only exposure to it is from Andy Wingo's blog, and I don't know much about Scheme.)


Nope.


> Will we just be stuck in a local maxima of Java for next 50 years?

1. Yes, if the extent of the imagination is languages like Idris and ideas like effect systems, that follow a gradient descent from Java, and always in the same direction: being able to express more constraints. What you get "for free" from such languages may not be significant enough to justify the cost of adoption, and the valuable stuff you can get is not much easier than the options available today, which are too hard for anyone to take. If you were to consider truly novel languages that think out of the box (e.g. Dedalus/Eve) then maybe one will stick and make an actual impact rather than just a change in fashion.

2. How do you even know that we can do much better? NASA engineers may not like it, but they don't complain that we're "stuck" at sub-light speeds. Maybe Brooks was right and we are close to the theoretical limit (that we know must exist).

> We talk about languages as a bag of feelings and fuzzy weasel words that amount to “It works for my project”.

Can you find another useful way, available to us today, of talking about languages?

> “Use the right tool for the job” Zero information statement.

That's right, but it's not a dumb cliché so much as it is a tool we've developed to shut down religious/Aristotelian arguments that are themselves devoid of any applicable, actionable data. One, then, is often confronted with the reply, "but would you use assembly/Cobol?" to which the answer is, "of course, and it's not even uncommon, and if you don't know that, then you should learn more about the software industry before giving any more Aristotelian arguments."


> What you get "for free" from such languages may not be significant enough to justify the cost of adoption

Idris can often infer entire functions from their types if the domain is amenable to accurate type-based specification. For instance, taking the common "sized vector" example, where `Vec n a` refers to a length-n vector of values of type a, functions like

    zip : Vec n a -> Vec n b -> Vec n (a, b)
can be automatically generated via the interactive "proof search" mechanism that the Emacs integration for Idris provides. Similar things are possible in Agda, which is however squarely aimed away from the "practical use" market.
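For readers who know Haskell rather than Idris, a rough analogue of that `Vec` type using GADTs (a sketch, not the Idris proof-search mechanism itself) shows why the length index leaves essentially only one implementation open, which is what makes this kind of search tractable:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- A list whose length is tracked in its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Both arguments share the same length index n, so the mismatched
    -- cases (one empty, one not) are ill-typed, and the two clauses
    -- below are essentially the only implementation.
    zipV :: Vec n a -> Vec n b -> Vec n (a, b)
    zipV VNil         VNil         = VNil
    zipV (VCons x xs) (VCons y ys) = VCons (x, y) (zipV xs ys)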


> Idris can often infer entire functions from their types if the domain is amenable to accurate type-based specification.

This is a great example of what I'm talking about. The kind of function Idris can generate is one that you could write manually with only marginally more effort -- if that -- than the effort required to write the precise type. I don't think those functions ever form a significant portion of a significant program, if at all (those functions must be so generic that they would likely already be in the standard libraries; otherwise there would be a search problem that Idris can't solve). When Idris is able to generate an efficient sorting function given some constraints, then I'd be impressed.

> can be automatically generated via the interactive "proof search" mechanism

If you've spent a significant amount of time with such proof assistants, you'd see that the proof search is rather pitiful.


> If you've spent a significant amount of time with such proof assistants, you'd see that the proof search is rather pitiful.

Afaik, Idris' proof search was hacked together in an afternoon just to see if it would work. And it did, surprisingly well considering. Don't know how much it has been worked on since then though. But yes, still leaves a bit to be desired compared to other systems.


I wasn't just talking about Idris. There isn't a single tool out there that can automatically prove that your quicksort implementation actually sorts without pretty significant work.


> if the extent of the imagination is languages like Idris and ideas like effect systems, that follow a gradient descent from Java, and always in the same direction: being able to express more constraints.

I don't think that's a fair reflection of what these languages feel like. When your constraints mean that only the right thing can be done, you can make the thing implicit. The things that are tacit when humans talk about the problem can be tacit in our programming as well. You have ways to factor out concerns that are separate, even when they are entangled. "Better decomposition of the problem" doesn't sound exciting, but ultimately that's the very essence of programming. I don't think we need new grand ideas, at least if we're just talking about the next 50 years of industrial programming; I think we need to follow through on the ideas we already have, and I actually think that gradient ascent from Java will happen, and will move the industry forward, even if it ends up being a lot slower than I'd like.

> How do you even know that we can do much better? NASA engineers may not like it, but they don't complain that we're "stuck" at sub-light speeds. Maybe Brooks was right and we are close to the theoretical limit (that we know must exist).

I think we can go at least one or two substantial steps better than Idris: I want Noether's levels of stratification (beyond just total/not-provably-total), and I want Rust-style linearity, not as a specialised capability but as something integrated into the regular type system. But even if Idris is the limit, it's still a big step ahead of Java, and it's going to take the industry a while to make that step. (Maybe it will have to be two or three steps to get there - it feels like most of the industry is only just adopting the things that OCaml offered, and not yet familiar enough with them to appreciate what Haskell offers above that. And I can't blame them - I only have an appreciation of those things because I've been able to work for a long time in Scala and very gradually work my way up towards using the less familiar facilities). "The future is already here — it's just not very evenly distributed."

> Can you find another useful way, available to us today, of talking about languages?

Yeah, that's the problem :(.

The whole industry is a huge iceberg in terms of what's visible about the practice of programming versus what's actually happening. I'd like to see more programming happening in public - which I think probably means more publicly-funded programming (I was thinking about this a few days ago in the context of research and scientific code - we have a relatively good system in place for providing grants for research, but we're a lot less good at funding the infrastructure that's used for research - we end up with a lot of researchers repeating the same work because they're only ever funded for a specific project, rather than being funded to produce something that's useful for many research projects). What makes sense in an open-source webdev project is all that most of us ever see about most languages, and while it's a lot better than nothing it's still a seriously distorted picture of what most programmers are actually doing.


> When your constraints mean that only the right thing can be done, you can make the thing implicit.

Except that sometimes, when you think about the practical issue long and hard enough, you can find that some constraints need not be stated at all, and can be completely factored out of the language, at least in most cases. For example, GC removes resource management out of the equation (it's not always the right solution, but it is the right one in a large enough portion of software). Or, while FP people pore over effect systems and how to constrain effect, languages like Eve that adopt synchronous programming (which is based on ideas from temporal logic), get a mathematical framework in which effects and computation are one and the same, and effects don't pose a challenge to begin with.

> I actually think that gradient ascent from Java will happen, and will move the industry forward

Perhaps, but I doubt it's in the direction pointed to by Idris. We have much better ideas around. The author of the slides mentioned only the segment of PL research that he find appealing. Other segments may have some more promising ideas.

> I think we can go at least one or two substantial steps better than Idris: I want Noether's levels of stratification (beyond just total/not-provably-total), and I want Rust-style linearity, not as a specialised capability but as something integrated into the regular type system. But even if Idris is the limit, it's still a big step ahead of Java, and it's going to take the industry a while to make that step.

I think you have fallen in love with a specific formalism (I've fallen in love with others) and can only see progress within it. If you take a step back, ignore formalism altogether and ask yourself -- or better yet, conduct research -- on what are the actual problems in programming, you may find solutions in completely different directions.

> it's still a big step ahead of Java

I don't think so, and I don't know why you'd think that. I'm not even sure that Java isn't the limit (modulo small improvements). Of course, when I say something is a big step ahead I don't mean that I enjoy it more or that it can pull some more technical tricks, but rather that it reduces development costs by 30%. If Idris is a big step ahead of Java, by how much do you think costs at, say, Google or Amazon or Citibank would drop if they decided they would all switch to Idris tomorrow? I'm fairly certain that it wouldn't be anywhere near 30%, and I'm not even sure costs wouldn't actually rise.

> it feels like most of the industry is only just adopting the things that OCaml offered, and not yet familiar enough with them to appreciate what Haskell offers above that.

I see it not as progress but as change in fashion. Tastes change, and so people adopt some ideas. Have productivity levels risen in the past three decades? Significantly less so than even Brooks's prediction, which was considered overly pessimistic.


> If you take a step back, ignore formalism altogether and ask yourself -- or better yet, conduct research -- on what are the actual problems in programming, you may find solutions in completely different directions.

It's hard to know what I don't know. I try to look at what people are doing/liking. But I feel confident that programming is inherently going to be about expressing the essence of the problem you're solving - a program will necessarily be as complex as the problem it's solving - and I think that limit is within reach of the approach I'm advocating, which is a conservative extrapolation of things we mostly already value. So I don't see the need to do anything more radical.

> If Idris is a big step ahead of Java, by how much do you think costs at, say, Google or Amazon or Citibank would drop if they decided they would all switch to Idris tomorrow? I'm fairly certain that it wouldn't be anywhere near 30%, and I'm not even sure costs wouldn't actually rise.

A lot of the value of better programming would be new opportunities, not just cutting costs. I think the industry switching to Idris-level languages will bring a factor of 10 to 100 improvement in productivity - i.e. I expect we'll see companies of ~100 people putting the Google/Amazon/Citibanks out of business, and companies of less than 10 people competing heavily with the giants. That's the sort of thing technological improvements enable - compare the rate at which new videogames are being produced now, and how small the teams producing them are, or look at WhatsApp being treated as a serious competitive threat to Facebook with a staff of 10. I think there's been a lot of productivity gain recently that isn't fully captured in economic measures, because we've (rightly) become a lot more quality-of-life focused culturally, and so our productivity has gone into quality-of-life improvements that aren't so visible in GDP terms.

But I fully expect the transition to take decades, with good reason - I agree that a large company switching tomorrow wouldn't reap much benefit from Idris (though I do think we're at the stage where a technologically-aggressive startup should consider it).


> I don't see the need to do anything more radical.

I do, because the things that you mostly value (I guess I value them a lot less) seem to me incredibly complex for the good that they do, and we already have some alternatives that seem both more powerful and vastly simpler. But I am skeptical of them, too :)

> I think the industry switching to Idris-level languages will bring a factor of 10 to 100 improvement in productivity - i.e. I expect we'll see companies of ~100 people putting the Google/Amazon/Citibanks out of business, and companies of less than 10 people competing heavily with the giants.

That is a very powerful statement, and I'd like to know why you'd think that. I would even like to know what makes you think that any language (never mind which) can make such an impact, even though we've seen nothing of this sort. I do not share your belief that Idris could make such an impact or anywhere near it, nor probably any other language.

I am skeptical, because having looked at Idris and played with it a bit, I was completely underwhelmed. Java + JML can do pretty much the same, if not better, and for a lower cost, and this is not a statement about Java+JML being a leap forward. I have no idea if Eve could make an impact as big as the one you predict, but at least I don't feel I can instinctively reject this possibility as easily as I could Idris, as Eve actually brings a lot of new things to the table, where Idris feels like some ergonomic improvement over things that may have had some potential that never materialized, in the hope that better ergonomics is what should do it.

I would like to point out one more problem with viewing the abundance of research as a proxy for utility or of an "established idea", one that touches on my own personal biases :) When you publish a paper, certainly in good journals/conferences, you must exhibit some significant novelty. This, ironically, rewards the more complicated (and so arguably less useful in the real world) frameworks and punishes the simpler ones. For example, I've seen papers talking about embedding separation logic, or specifying amortized worst-case complexity in Coq. You won't see such papers on TLA, because doing both in TLA is rather trivial. Lamport himself complained that he had trouble publishing a paper on specifying and verifying realtime systems in TLA because it is so straightforward that it was hard to demonstrate mathematical novelty. He had to artificially add an extra novelty (the use of a model checker) to make the paper worthy of publication [1].

[1]: http://lamport.azurewebsites.net/pubs/pubs.html#real-simple


> That is a very powerful statement, and I'd like to know why you'd think that. I would even like to know what makes you think that any language (never mind which) can make such an impact, even though we've seen nothing of this sort.

I think we have; I think we've seen dramatically accelerating programmer productivity. I think language improvements compound heavily: if a language is slightly more efficient in the small, that means code has to be broken up less, which gives it more and more of an efficiency advantage as the codebase gets larger.

> I am skeptical, because having looked at Idris and played with it a bit, I was completely underwhelmed. Java + JML can do pretty much the same, if not better, and for a lower cost, and this is not a statement about Java+JML being a leap forward. I have no idea if Eve could make an impact as big as the one you predict, but at least I don't feel I can instinctively reject this possibility as easily as I could Idris, as Eve actually brings a lot of new things to the table, where Idris feels like some ergonomic improvement over things that may have had some potential that never materialized, in the hope that better ergonomics is what should do it.

All of programming language design could be called ergonomic improvements. I always found all the pre/post contract stuff too complicated and too different from normal programming to get any value out of; being able to capture more in plain old functions, values and types is where I see value.


> I think we've seen dramatically accelerating programmer productivity.

Really? I think that the change that has contributed to the lion's share of that boost has nothing to do with language features, and has everything to do with the availability of open-source libraries. The other major contributions have been the widespread adoption of automated unit tests -- also not a language feature -- and the practicality of GCs, which, if considered a language feature at all, is a transparent one.

When Brooks said, around '85, I think, that we won't see a single improvement contributing to a 10x improvement within a single decade, people said he was pessimistic. After 30 years I don't think we've seen a 10x improvement with all methods combined, and if we have, most of it is due to libraries.

Do you think that writing, say, an ERP, power-station control and management software or an air-traffic control system from scratch (without off-the-shelf/open-source libraries) today would be 10x less costly than it was using C++ in 1987, or even 3x less costly than using Java in 2002 (I'm taking 30 and 15 years respectively as milestones)? Remember that 10x within a decade was seen as pessimistic.

Although I was just a novice programmer in 1987, I think that the answer to both is an emphatic no. Even when it comes to small/simple programs (which are a completely separate domain), I'm pretty sure that aside from the availability of libraries, Python doesn't have even a 3x factor over VB/Delphi/other "RAD" languages, as they were called way back when.


> Do you think that writing, say, an ERP, power-station control and management software or an air-traffic control system from scratch (without off-the-shelf/open-source libraries) today would be 10x less costly than it was using C++ in 1987, or even 3x less costly than using Java in 2002 (I'm taking 30 and 15 years respectively as milestones)? Remember that 10x within a decade was seen as pessimistic.

I'd want to amend that to no domain-specific libraries - I think part of the language advantage is making very general libraries possible. But yes, I do think writing such a system to the same standard would be better than 3x cheaper than in 2002. (I don't think that's what would be done - our implicit expectations around software usability are very different today to what they were in 2002. But I do think we'd get more than 3x better value for the same money than in 2002, certainly risk-adjusted - I think in 2002 there would have been a substantial chance that the project would fail outright)

For smaller programs I'd agree that we haven't improved that much, although it's very difficult to talk about how hard it would be to write small programs without libraries because small programs are mostly just using libraries.


Agree 100%


> If Idris is a big step ahead of Java, by how much do you think costs at, say, Google or Amazon or Citibank would drop if they decided they would all switch to Idris tomorrow? I'm fairly certain that it wouldn't be anywhere near 30%, and I'm not even sure costs wouldn't actually rise.

How much would something like Idris lower costs at Equifax? That is, in your "cost" metric, what weight do you assign to the cost of catastrophic bugs?

(Yes, I know, the Equifax bug was in a third-party component that wasn't updated when it should have been. I assert that it's still a relevant question...)

Also note: I am not actually claiming that Idris would prevent catastrophic security bugs. But the question adds another axis where Idris could give you a 30% cost reduction.


> How much would something like Idris lower costs at Equifax? That is, in your "cost" metric, what weight do you assign to the cost of catastrophic bugs?

I'm a practitioner (and evangelist) of formal methods, so you won't hear anything from me against their use, although I have plenty of reason to believe that Idris (as an example of a particular design) is not very good at formal verification. The cost of bugs must be factored in, but we haven't seen the Idris approach be effective at reducing bugs at a cost commensurate with their impact, except when that impact can be truly catastrophic. All the effective formal methods I know of look nothing like Idris.

As someone who has used formal methods to good effect (I believe) and is now learning Lean (which can be put in roughly the same camp as Idris), I find it very interesting, but I cannot even imagine how it can form the basis for something that will one day be used at scale, especially given that there are such better alternatives.


Pron, what are the better alternatives / more promising lines of research you keep referring to?


I think they're better largely because I'm more aesthetically drawn to them (so I will be biased, but no more so than the author, who also only mentions techniques he's aesthetically drawn to), but also because I think they have shown more success in practical industry use.

For programming interactive systems, a very interesting approach, which has been quite successful in realtime software and hardware systems, is synchronous programming[1]. It has a deep mathematical theory based on temporal logic, and has been shown successful both in terms of being amenable to particularly powerful forms of formal verification[2] and in terms of being useful for socially managing the development process[3]. The approach has mostly remained in the realm of safety-critical realtime, but has started making its way to "ordinary" software in languages such as Céu[4] and Eve[5]. Eve, in particular, incorporates at least as much cutting-edge PL research as Idris, but is very different from it, being based on a new and interesting language semantics called Dedalus[6].

In synchronous programming, side effects are not an issue, as the mathematical framework cleanly treats them the same as computation (not requiring embedding in monads), and so allows easy specification over both computation and effects, and a particularly easy way to express global correctness properties (e.g. [7]). The use of monads/algebraic effects in pure-FP is not substantially different from classical imperative programming, other than restricting the effect type, which doesn't seem to me to yield any substantial benefits in general. Using what is perhaps a very bad analogy, given the task of counting the number of passengers on a moving train, classical imperative programming would correspond to a person standing outside the train with a pair of binoculars, pure-FP would be like someone filming the train and then studying the film, and SP would be like someone standing inside the train. Rather than controlling what effects can be emitted by a piece of code, it controls at which point the passage of time can be observed.
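
To make the "standing inside the train" picture a bit more concrete, here is a minimal sketch of the synchronous "tick" discipline in plain Java (purely illustrative -- real synchronous languages like Esterel, Céu or Eve look nothing like this syntactically): inputs are frozen for the duration of a logical instant, the reaction is computed as if it took zero time, and the passage of time is only observable between ticks.

    import java.util.List;

    // Hypothetical sketch of the synchronous "tick" discipline, not real
    // Esterel/Céu/Eve code. Within one logical instant the environment
    // cannot change mid-reaction, so effects need no special embedding
    // (e.g. monads) to be reasoned about.
    final class PassengerCounter {
        private int passengers = 0;

        void tick(List<String> eventsThisInstant) {
            for (String e : eventsThisInstant) {
                if (e.equals("boarded"))  passengers++;
                if (e.equals("alighted")) passengers--;
            }
            // The instant's output is emitted "at the same time" its
            // inputs were observed.
            System.out.println("passengers now: " + passengers);
        }

        public static void main(String[] args) {
            PassengerCounter c = new PassengerCounter();
            c.tick(List.of("boarded", "boarded", "alighted")); // prints 1
            c.tick(List.of("boarded"));                        // prints 2
        }
    }

Real synchronous languages of course give you far more than a hand-written loop; the sketch is only meant to show where the observation of time sits.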

In terms of formal methods, TLA+ [8] (a personal favorite of mine) has proven to be very effective, both in theory and in industry practice -- perhaps more so than any other formal tool -- especially when it comes to specifying and verifying global correctness properties, such as "the database is always consistent", or "data is never lost, even if messages are dropped and nodes fail". It, too, is based on temporal logic, which is a theory that is much better suited, IMO, to interactive/concurrent/distributed software than the theory of pure-FP, which is more harmonious with sequential, batch software like compilers. The difference in the difficulty of use of TLA+ vs a tool like Lean [13] for the purpose of software specification and verification (as opposed to doing formal "high math") is the difference between riding a bicycle and flying a Boeing 747.

As far as code-level formal methods are concerned, I find contract systems (like JML [9] for Java, and ACSL [10] for C/C++) to be more promising, again, in my personal opinion, than dependent types, because they separate specification from verification. This is important because while specification is always helpful, the chosen form of verification can make the whole difference between something that's more effective than testing and something that's completely infeasible. The most rigorous form of verification is formal proof, which is both the level of confidence least required by software and more or less the only form of verification afforded by dependent types, at least today. In contrast, contract systems allow you to verify your specification at a level of rigor of your choosing -- even on a per-contract basis -- be it formal proof, model checking or static analysis, concolic testing [11], random test generation [12], or even just inspection.
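
For what a contract looks like in practice, here's a rough sketch in Java with JML-style annotations (illustrative only; the exact clause syntax varies a bit between tools). The specification lives next to the code, and how you check it -- prover, static analysis, runtime assertion checks during testing -- is a separate decision, even per contract.

    // Illustrative JML-style contract; treat the exact syntax as a sketch.
    public class Account {
        //@ public invariant balance >= 0;
        private /*@ spec_public @*/ long balance;

        //@ requires amount > 0 && amount <= balance;
        //@ ensures balance == \old(balance) - amount;
        public void withdraw(long amount) {
            balance -= amount;
        }
    }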

[1]: https://en.wikipedia.org/wiki/Synchronous_programming_langua...

[2]: https://en.wikipedia.org/wiki/Esterel

[3]: http://www.wisdom.weizmann.ac.il/~harel/papers/Statecharts.H...

[4]: http://www.ceu-lang.org/

[5]: http://witheve.com/philosophy/

[6]: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-...

[7]: http://witheve.com/philosophy/#correct

[8]: https://lamport.azurewebsites.net/tla/tla.html

[9]: http://www.openjml.org/

[10]: https://frama-c.com/acsl.html

[11]: https://en.wikipedia.org/wiki/Concolic_testing

[12]: https://clojure.org/guides/spec#_testing

[13]: https://leanprover.github.io/


Eve looks interesting -- thanks, I'll try it out. (And thanks for the detailed response.)

Most of my programming is in the domain of "take some data, do stuff to it, produce an output" and my experience with Idris so far is very positive -- more so than Haskell (which I admittedly gave up on early on), the language does feel ergonomic, and like the type system is "guiding me" rather than getting in the way. I also find its approach to metaprogramming interesting. Whether this translates to Real World™ productivity / maintainability gains vs a mainstream language, it's too early for me to compare.


BTW, to get a sense of how meticulous Eve's design process is (although I don't know if they'll ultimately be successful in their goal), take a look at this short post, explaining how they've recently decided to change the Eve UI after writing Eve's compiler in Eve (in the previous iteration it was written in Rust): https://groups.google.com/forum/#!msg/eve-talk/tLgrw4zlc5U/V...


I don't doubt that Idris is a fine language or that some people will love it, even though personally I'm not drawn to pure-FP; I do doubt its revolutionary bottom-line impact.


> It’s lightweight

> I was able to install the compiler

It does not prevent the browser from eating all the resources.


Just curious, where does Elixir fall in this scheme of things? Any experienced devs who could give a qualified answer, please?


Elixir doesn't really push much of the state of the art forward. It's pretty much Erlang with new clothes (and some nice cleanups). I worked with it for over a year and while I loved the 'fail fast' model of programming, most of the errors that were caught were stupid programming errors that would have been trivial to catch by a type system.

What I really want is a type system for reasoning about distributed, stateful programs. Has to deal with mixing strongly consistent and eventually consistent (CRDT) data with synchronisation points (see Bloom/Lasp), hot code reloads, migrations, messaging across nodes, possibly some sort of row polymorphism combined with Clojure's namespaced symbols... Sort of like Cloud Haskell but with zero-downtime deployments. Or like Pony but with a better distribution story.
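
To make the CRDT half of that wish concrete, here's a toy grow-only counter in Java (purely illustrative; nothing to do with how Bloom/Lasp actually implement it). Each replica bumps only its own slot and merge is a per-slot max, so replicas converge regardless of how many times or in what order state is exchanged -- the open question is a type system that keeps data like this statically separated from data requiring strong consistency.

    import java.util.HashMap;
    import java.util.Map;

    // Toy G-counter CRDT sketch, for illustration only: merge is
    // commutative, associative and idempotent, so any two replicas that
    // exchange state converge to the same value.
    final class GCounter {
        private final Map<String, Long> perNode = new HashMap<>();

        void increment(String nodeId) {
            perNode.merge(nodeId, 1L, Long::sum);
        }

        long value() {
            return perNode.values().stream().mapToLong(Long::longValue).sum();
        }

        void merge(GCounter other) {
            other.perNode.forEach((node, count) ->
                perNode.merge(node, count, Math::max));
        }
    }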


Distributed Pony will be there soon.

https://www.doc.ic.ac.uk/~scd/Disributed_Pony_Sept_13.pdf [pres], https://www.ponylang.org/media/papers/a_string_of_ponies.pdf [paper]

I'm not sold on Cloud Haskell or Clojure.


Elixir is mostly different syntax (Ruby-like instead of Prolog-like) for Erlang. Beyond the syntax change it fixes some Erlang warts, adds better metaprogramming and great tooling. Elixir isn't academic innovation - it's about professionals' convenience and effectiveness, similar to Go. Started by one person.


OCaml is as "industrial" as Haskell is.


"OTP" on p. 23?

"One True Pairing"?

"Online Transaction Processing"?

probably not "On The Phone".


My guess is the Erlang OTP: http://learnyousomeerlang.com/what-is-otp


Open Telecom Platform? Erlang's runtime


I think it's just the biggest Erlang framework. BEAM is the runtime.


This is a great deck... but leaving TLA+ out of Formal Methods? It is hands down the most important thing going on there. And it is a combo between academia and industry. Amazon uses it extensively.


It also left out all PL research outside of typed FP. It isn't really about the future of programming, but about the future of typed FP. For example, it lists Eve as a "reinvention of Smalltalk" when, in fact, Eve incorporates at least as much cutting-edge PL research as Idris.


It also left out Alloy, which is another really exciting formal method that a lot of really cool work has been done in.


> Languages that have dabbled with modeling effects with row types have backtracked on it in favor of IO.

Interesting. I thought algebraic effects + handlers were the new hotness in modeling effects with types.


They're the new hotness because researchers are still having trouble getting them to work (trouble = open research topic :) ), and they want to convince the monad proponents that they are better.


What trouble? http://www.eff-lang.org/


Is there a talk for these slides? I haven't been able to find it.


This could be titled "Static typing is not going to save you"? There's nothing but statically typed languages and related theorem provers from the timeline slide onwards, and the conclusion is that it's too hard.

This is a good companion piece for the recent Rich Hickey Clojure/Conj talk :)


I don't really see where you get that. Is dynamic typing demonstrating a lot of sustainable success?


I edited in the Clojure part after you replied.

Languages that have forgone static typing aren't necessarily trying to solve everything using language types at runtime. See eg Clojure's approach. Or SQL's.

But at the base popularity-contest level the answer is clearly yes: Dynamic languages have been on a roll for the last 30 years - Perl, Python, Ruby, Erlang, Clojure, even JS.


That certainly doesn't mean it's because of their dynamic typing. And Erlang and Clojure, as nice as they are, are still about as popular as AWK is.


I've seen a comparison of static vs. dynamic languages lately; it compared bugs/open issues, and IIRC there was not really an advantage either way.


It's just my opinion, but there definitely is a difference in favor of optional static typing. It simply adds more expressiveness to the language - you get a free documentation standard (input, output, structures) baked in. You don't have annoying typos, and you leave the data-manipulation bookkeeping to the computer, lowering the programmer's cognitive load. Also, you can change code in the editor (fastest feedback loop) without even running it (very important if there is a high price for running code - e.g. complicated system bootstrap, deployment, hardware).
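
A trivial illustration of the documentation/typo point, in Java just for concreteness (the same idea applies to optional/gradual typing): the signature already documents the input and output shapes, and a misspelled accessor is flagged in the editor before anything is run or deployed.

    import java.util.List;

    // Toy example, names made up: the record declares the data shape, and
    // the method signature documents both input and output.
    record Invoice(long customerId, long amountCents) {}

    class Billing {
        static long totalCents(List<Invoice> invoices) {
            long total = 0;
            for (Invoice i : invoices) {
                total += i.amountCents(); // i.amuontCents() would not compile
            }
            return total;
        }
    }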


That, and also there are so many independent confounding factors in these comparison studies that I am not sure they can ever outperform reasoning or personal experience. The reasoning part in your comment - lowering cognitive load, faster feedback cycles (in editor, in compiler, etc.), fewer typo-induced problems - seems hard to argue with.


It seems both camps count lowering the cognitive load and faster feedback cycles to their advantage!

Good dynamic systems have long done live-coding against the running system with instant feedback. Even recent static languages like Rust and Scala have notoriously long compile times and thus slow feedback loops.

Also there's a long tradition of querying the system for things like auto-completion / doc display.

The cognitive load of fancy type systems like Haskell's is seen as very high.


Well, this person sure seems very angry and opinionated, yet there's hardly any justification for his criticisms in this childish rant. I mean, if you're going to say everything is "dumb" and "shit" and whatnot, at least tell us why. As it stands this is definitely a case of "maximal opinions and minimal evidence".


cough Prolog cough


My project's introduction book contains a description of a future programming language, if you'd like to take a look: https://github.com/ShionAt/Keys (Twitter: @ShionKeys)


Recently a famous AI project was shut down because it was inventing its own language. Perhaps the next generation of programming languages and design paradigms won't come directly from humans at all.


Are you talking about Theano?
