Ask HN: What's with all the new languages?
60 points by enen on June 15, 2014 | 65 comments
I have no particular background in programming language design and theory, but it's been an exciting topic the last few years, since a lot of new languages have gained popularity and most of the big companies seem to have one or two in their portfolio (no need to list them all really: C#, Go, Swift, etc.). I've also been exploring the history of computer science lately, reading about the Lisp family and Smalltalk, as they seem to be viewed as the better-designed ones. There is also that quote from PG where he says languages invented for the author's own use tend to be better. So what I don't understand, and hope somebody here could shed some light on, is: what's with all the new languages? How many of them really bring something new to the table, a better way than the old one? How is Go or Rust better than C, C++, Ruby, Python, Lisps, Java, Smalltalk, Erlang, and whatnot? Are those languages designed for very specific cases that older languages can't cope with? When I read about Smalltalk or Lisp or Haskell, people regard them as the pinnacle of programming language design, and yet their popularity isn't really proportional to those statements. How do languages get popular? Money, syntax, portability? Why did PHP rule the '90s and not Common Lisp or Erlang or whatever? Why do I read so much bad stuff about C++ from smart people, yet it's one of the most popular languages? Why isn't Objective-C more popular, since it too is C with classes? Why Java and not Self?

When I ask those questions I am in no way trying to discredit new languages and their usefulness. I am just young, naive, not very smart, and trying to get an idea of how the real world of programming and computers works.

And yes, I know the story of JavaScript. Surely that's an exception to the rule of how languages get popular?

> When I read about Smalltalk or Lisp or Haskell people regard them as the pinnacle of programming language design and yet their popularity isn't really proportional to those statements.

That's like saying you read about Bob Dylan being the pinnacle of songwriting, but his popularity, compared to Beyoncé's, not bearing that out.

Or Tom Waits for that matter. I've heard more than a few people say it wasn't until a particular album, or perhaps 10 or so years after their first attempt, that they started to really 'get' Tom Waits. I've heard almost the same for a number of fundamentally great programming languages over the years too.

A programming language's superiority is not all that matters.

Looking at the most popular ones, the learning usually springs from necessity: setting up a blog and modifying it (WordPress -> PHP), making a webpage interactive (jQuery -> JavaScript).

From that necessity grows a community that creates libraries, classes, plugins, extensions, scripts, full frameworks and even servers (node.js).

This all makes it very hard to prioritize a language that doesn't have such easily accessible libraries, classes, plugins, extensions or frameworks readily available and it becomes even harder when there is a small community to gain knowledge from.

Talking from my own experience, when you are learning it is fun to create libraries, classes and plugins - but when you are up to your ears in real work a very small amount of time can be spent on creating content for the community.

> How do languages get popular? Money, syntax, portability?

One particularly good way is to be attached to an OS or platform.

- C came with Unix (although it was so good that it migrated off it to Windows and basically every other platform).

- JavaScript came with the browser

- C# comes from an OS vendor, Microsoft. They built APIs for their platform in C#.

- Likewise, Objective-C was for NeXT, and Swift is for iOS. They built APIs for their respective platforms.

- Java is an interesting case because Sun wanted the JVM to be an OS, to replace Windows, but they ended up with just a language. This is great evidence that a language itself is unprofitable; an OS/platform can be hugely profitable.

You have all the main OS cases represented: Unix, Apple, and Microsoft.

Google is sort of an OS/platform company, with Android and ChromeOS. However, in the former case they reused Java, designing their own VM (Dalvik) instead of inventing a new language. For the web platform, they are designing and implementing Dart. For the "cluster of servers" platform, Go is very appropriate.

Mozilla is also a platform company; it's not surprising that they are investing in Rust.

So my takeaway is that OS/platform vendors are the ones with the main interest in the huge effort of designing and implementing a language. How successful the platform is often has more to do with the success of a language than the language itself does. Java might be the exception.

In the case of JavaScript, at the beginning it didn't just "come" with all browsers, since it was first implemented by Netscape. Why MS included it in IE is beyond my knowledge of (and interest in) the history of JavaScript/MSIE. I guess that once your language is in 90% (Netscape + MSIE) of all platforms, then yes, all the rest (Chrome, Safari, etc.) had to follow along.

Paul Graham's answer to why some languages become popular: http://www.paulgraham.com/popular.html

"So what I don't understand and hope somebody here could shed some light on it is what's with all the new languages?"

While one can make an argument that these new languages address a particular need better than any extant language, I suspect the real reason is that it's more fun to create tools, including languages, than to solve particular business problems. As evidence, I offer the plethora of frameworks, libraries, and utilities that comprise the ecosystem of Java in particular. In many enterprise systems the business logic is a small fraction of the total running application.

"When I read about Smalltalk or Lisp or Haskell people regard them as the pinnacle of programming language design . . . ."

Lisp wasn't designed, it was discovered. ;-)

Thanks for the interesting questions.

Maybe the increased number of people on the internet is connecting those who are making open languages. (I know the poster listed many ones made by closed teams too.)

I thought the technical/scientific computing market that MATLAB serves was too niche to get critical mass in an open competitor that could surpass it. I saw Octave as always being a second place clone playing catch up. I don't know what it took to get it going but Julia has me very excited and I'm grateful for the team that chose to make it.

Part of the reason for the recent explosion of new languages is the emergence of technologies that make implementing new languages much easier, LLVM being one of the most obvious ones. Swift, Rust, Julia, and various new implementations of older languages all use LLVM. Implementing new languages on managed runtimes like the JVM and CLR is also much easier than building a full toolchain from scratch. It's also easier than ever to build a productive community around an open source language – git and GitHub are amazingly effective collaboration tools.

The premise that old languages are pinnacles of perfection is simply not true. C, Lisp, Haskell, Smalltalk, etc. – these languages did not get everything right. What they did do is get enough things right that it is really hard to make a language that is better by enough of a margin that it is worth breaking away entirely and starting from scratch. To make it worth switching to a new language, that language has to really make your life much, much better. Performance, convenience, safety, expressiveness – whatever a new language gives you more of, it has to give you so much more to be worth the trouble of switching to a less mature language with a smaller, less developed community. But that's what people are trying to do with these new languages.

Consider Lisp as a potential pinnacle of perfection. Paul Graham quipped that Lisp was "discovered" by John McCarthy, rather than invented or designed – Lisp already existed in the way that mathematical truths exist. That's a cute idea, but clearly not literally true. There were still a lot of design choices – parentheses for example. Why not square or curly brackets? Why not indentation? There were also choices that are now almost universally recognized as mistakes. Dynamic scoping, for example, which was later replaced by lexical scoping in Scheme.
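The lexical-vs-dynamic scoping difference is easy to demonstrate. Here's a small sketch in Python (which, like Scheme, is lexically scoped; the names are invented for the example):

```python
n = 100  # a global n, visible at the call site

def make_adder(n):
    # Under lexical scoping, the lambda captures the n from its
    # enclosing definition (the parameter), not whatever n is bound
    # at the point where the lambda is eventually called.
    return lambda x: x + n

add5 = make_adder(5)
print(add5(3))  # → 8
```

Under dynamic scoping, by the time `add5(3)` runs, the `n = 5` binding would have been popped off the call stack, so the free variable `n` would resolve to the global `n = 100` and the call would return 103 instead.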

People who haven't tried their hand at language design generally tend not to fully appreciate how many unfortunate tradeoffs are inherent in the process. Static vs. dynamic? Both have great benefits as well as major drawbacks. That's just the first major choice – and each choice affects most of the rest of the language. A coherent language design ends up being a crystallized fractal of difficult, uncertain choices. You never know when the difficulties you're facing in one area might have been easier in some nearby fold of this vast, combinatorial design space – most of which is completely unexplored. The problem is compounded by the fact that although it may seem mathematical, language design is really a subfield of applied psychology: ideas often seem great on paper, but when you try them out, people find them incredibly awkward, unintuitive, or just plain annoying.

> Consider Lisp as a potential pinnacle of perfection. Paul Graham quipped that Lisp was "discovered" by John McCarthy, rather than invented or designed – Lisp already existed in the way that mathematical truths exist. That's a cute idea, but clearly not literally true. There were still a lot of design choices – parentheses for example. Why not square or curly brackets? Why not indentation? There were also choices that are now almost universally recognized as mistakes. Dynamic scoping, for example, which was later replaced by lexical scoping in Scheme.

Well, have you seen lambda calculus, which predates Lisp? That's basically Lisp (it has lambda, it has bound and unbound variables, it has let, it has steps, it has recursion, ...). All John McCarthy did was implement it. That's where the parentheses come from, of course. Why not curly brackets or indentation? Because in maths, parentheses specify the sequence of calculation, curly brackets denote collections, and indentation doesn't mean anything. Since the concept being expressed is the sequence, they used parentheses. In fact, the first reference I can find to parentheses dates from the ancient Greeks (and they were probably merely the first to write it down).

Lambda calculus is a generalization of mathematical formulas by Alonzo Church to describe computation (as opposed to "just" values or functions, which is what normal mathematical formulae do). Functions in math are very, very different from functions in lambda calculus. This cannot be said to be very original either, as it was mostly a way to introduce some sanity into Hilbert's "Entscheidungsproblem" ("can you give a mathematical formula that solves mathematical formulae?"), and more generally, into constructivism. This was made possible by finding a consistent way to express the combination operators in logic. Constructivism was a branch of logic that ...

You can keep going back for quite a while, but the point is that the form of Lisp was effectively decided by someone whose name we don't even know, who lived in one of the ancient Greek city states. He (or she? not impossible in that period) also had absolutely no idea what they were doing either.

There's a lot more to an actual Lisp than lambda calculus. No Lisp ships with just lambda – that would be absurd and unusable. So there are design choices: choices of standard functions like "car", "cdr", "set" – and what kind of scoping "set" implements. The Common Lisp and Scheme specs are literally lists of design choices.

If you don't already know, there is a Church encoding for singly-linked lists in the lambda calculus. We can already implement some of the operations you have listed with just lambdas:

  (define (cons x y) (lambda (z) (z x y)))
  (define (car l) (l (lambda (x y) x)))
  (define (cdr l) (l (lambda (x y) y)))

It's not like a Lisp with 'just lambda' would be 'absurd and unusable'. I just showed you: we can implement cons, car, and cdr with just lambdas (hooray, closures! not Clojure, closures...). Although there are some design decisions that had to be made, e.g. set, list.
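For anyone who wants to actually run the idea, here is the same Church-pair trick sketched in Python (a rough translation of the Scheme above, not the original code):

```python
# Church-encoded pairs: cons/car/cdr built from nothing but lambdas.
def cons(x, y):
    # A pair is just a function waiting for a selector.
    return lambda select: select(x, y)

def car(pair):
    # Apply the pair to a selector that picks the first component.
    return pair(lambda x, y: x)

def cdr(pair):
    # Apply the pair to a selector that picks the second component.
    return pair(lambda x, y: y)

# A singly-linked list as nested pairs, terminated by None.
lst = cons(1, cons(2, cons(3, None)))
print(car(lst))        # → 1
print(car(cdr(lst)))   # → 2
```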

Why does everyone think they're the only person who knows about lambda calculus? No kidding, the lambda calculus is Turing complete, so yes, you can implement car and cdr with it. But just because you can do everything with lambda doesn't mean that it's actually done that way.

Would you call Scheme unusable? Of course not; people use it every day. It has 12 'fundamental forms' that cannot be implemented within the language itself without a compiler. Every other library function (except low-level IO ones) is defined in terms of these forms. From the language designers: "we realized that the lambda calculus—a small, simple formalism—could serve as the core of a powerful and expressive programming language."

The meat of the R6RS spec is 55 pages [1]. This proves my point rather than refuting it: there's a lot more to Scheme than lambda – 55 pages of design decisions more.

[1] http://www.r6rs.org/final/r6rs.pdf

At university we learned a Turing machine implementation in lambda calculus. While I will fully agree that it's a disastrously difficult programming language, "big" (non-trivial) programs in lambda calculus both exist and predate the oldest LISP programs.

Likewise, "big" Church-thesis programs exist and predate the oldest lambda calculus programs. They're even more unreadable (although everyone doing CS has at least looked at two of them).

Before that you will find more implicitly written programs that are definitely programs, in the sense that they are explicitly checked to terminate in finitely many computation steps.

Even these informal algebraic programs not only predate LISP, the lambda calculus, and Church, but a lot of them are rewrites of algorithms out of pure algebra that are definitely programs, just less formally written down.

You can trace the history of those back to the Renaissance European city states (I refuse to accept al-Khwarizmi as having done anything but compile external sources).

And frankly, is there anyone here who has read Euclid's algorithm and doesn't think even the ancient Greek version is, when it comes right down to it, a valid computer program? He didn't have the notion of generalized computation (technically nobody did that correctly before Turing, of course), but he had the notion of calculating various things by counting in specific ways. It's not that different, really. And it's only one of quite a set of programs written by Euclid of Alexandria.

Obviously Euclid's programs were never meant to be programs, but they are programs. They have a trivial mapping in pretty much any programming language that exists.

Oh, and Euclid uses recursion. He didn't define it, he just uses it. Even Euclid very likely just compiled knowledge rather than inventing/discovering it. Likely these algorithms were taught in Alexandria for at least a few decades before Euclid, so you could reasonably accurately say that there definitely were computer programmers in the 4th century BC, maybe even the 5th.
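Euclid's procedure does map almost word-for-word onto a recursive function; a minimal sketch in Python:

```python
def gcd(a, b):
    # Euclid: repeatedly replace the pair (a, b) with (b, a mod b)
    # until nothing is left over; recursion does the "repeatedly".
    if b == 0:
        return a
    return gcd(b, a % b)

print(gcd(1071, 462))  # → 21
```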

As for the comment relating to "car" and "cdr": well, Euclid uses those functions. He doesn't define them beyond describing their effects, but he uses them. In that way, of course, he's acting like any other programmer. The only people defining car and cdr are developing the language, not writing any real programs.

Brainfuck is Turing-complete too. That doesn't make it usable.

[gratimax shows you how to implement car, cdr and cons in brainfuck]

brainfuck has no procedures, so no can do. :)

Since Brainfuck is turing complete, this is untrue – you can emulate procedures and then define car, cdr and cons as emulation-level procedures.

> How many of them really bring something new to the table, a better way than the old one?

'Better' being subjective, but there are languages, or classes of languages, that bring new paradigms that change the way you approach a problem. Some may work better with the way you mentally model a problem, or they may naturally help with modeling certain problems.

So we've got...

- Imperative sub-procedure languages, like C, Algol, Fortran, etc.

- Object-oriented variants of C, like C++, Java, C#.

- Smalltalk, and Smalltalk OOP based languages like Ruby and Objective-C.

- Forth, a stack-based programming language.

- Tcl, a command-based programming language.

- Unix shells, string-based programming languages.

- Lua/JavaScript, prototypal/hashtable oriented languages.

- Lisp, tree/list-oriented languages.

These are the languages/classes of languages you should study, if you want to see something different. Something that may change the way you think.

Also, Prolog as an example of a logic-programming language.


- LLVM is an awesome project and quite mainstream now, making it a bit easier to write an optimized compiler.

- Rust: C is a language built for single-core use. The future is many-core machines. Mozilla realized that the archaic C language was making multi-core processing far more difficult because of missing language constructs, slowing down development and holding back the future of browser performance. The idea is that multicore/multichip-aware languages can greatly simplify developing parallel applications.

- Go: Also aims to modernize "systems languages": dependency management, a better type system, garbage collection, parallel computing, multi-core awareness, fast compile times. http://mashable.com/2009/11/10/go-google-language/ . It's your C/C++ replacement.

- Swift: Probably an Apple move to attract more app developers and to increase the quality of apps with better tools.

So the answer is both that the computing landscape is changing, and that the move towards Python, despite its performance issues, signals developer demand for better tools.

Remember that HN/proggit users are generally interested in new ideas and ways of working, and most people working in industry have probably never heard of Haskell. HN is also susceptible to marketing from time-to-time - MongoDB and Rails were two huge trends that did not deserve their popularity, at least at the time.

So, the first thing you have to think about is: What is a programming language? Or better, what is the purpose of a programming language? PLs give the programmer a way to express their idea in such a form the computer can understand. Here are two aspects already: 1) Expressing an idea, and 2) interpretation by the machine.

Certain PLs exist to formulate certain ideas in certain ways, which the computer interprets in certain ways. That way, the popularity of a PL depends on

1) how many people think that way,

2) how much effort those people put into developing the needed tools

3) how much this way of thinking is needed and supported by the industry

4) how well this way of thinking is compatible with previous work

5) how well the computer can execute these expressions using its architecture

6) what architectures exist for which purpose, and whether these purposes comply with the way ideas are expressed in a PL

7) ... and so on, this list is endless

For example, stack-based CISC computers using the von Neumann model have a long history and are very powerful these days, and using software has become common in non-IT industries, which is why object-oriented imperative programming languages like Java and C++ are so common.

When some great programmers love a special language because it matches their way of thinking, it's most probable that this PL is not very popular, simply because few people think this way. A genius may invent the mightiest programming language in the world, but nobody else would use it, because nobody else could understand it.

You could say the only thing a PL actually expresses can be seen in the people who use it.

I'm not entirely sure, but I believe Go is basically a continuation of Rob Pike's vision in terms of how computing should be in general. (Plan 9, acme, rio, procfs, etc.) It's the spiritual successor to Limbo, which itself was the successor to Newsqueak. They basically extend upon the general model of C, but add things like built-in concurrency (inspired by CSP), conservative garbage collection, type checking and so forth. They also aim for simplicity and removing complicated features (see also cat-v.org and "harmful software" to understand the philosophy).
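Go's CSP-flavored model (lightweight processes communicating over channels) can be loosely approximated in Python with threads and a thread-safe queue; a rough sketch just to show the shape of it (names are illustrative, and real goroutines are far cheaper than OS threads):

```python
import queue
import threading

def producer(ch: queue.Queue) -> None:
    # Send values down the "channel", then a sentinel to mark it closed.
    for i in range(5):
        ch.put(i)
    ch.put(None)

def consumer(ch: queue.Queue, out: list) -> None:
    # Receive until the channel is "closed" (the sentinel arrives).
    while (item := ch.get()) is not None:
        out.append(item * item)

ch: queue.Queue = queue.Queue()
results: list = []
t1 = threading.Thread(target=producer, args=(ch,))
t2 = threading.Thread(target=consumer, args=(ch, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # → [0, 1, 4, 9, 16]
```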

As for C++, Java and so forth, popularity is no indicator of quality. That and Smalltalk/Smalltalk-esque languages had rocky starts, including difficulty of acquiring development environments and performance overhead.

EDIT: Concerning Objective-C, it's pretty tough to work with outside of an OS X-related environment, because of the lack of essential libraries. There's GNUstep, which tries to fill the gap, but it lags far behind and has very little development going for it.

We are dealing with a much different environment now since we cannot count on an ever faster single CPU. We also must deal with an environment where your program can be attacked via the network or how it handles memory.

It's really about time for a new set of languages, given a historical view (after you remove some of the distortions brought on by the Java bankroll).

Never never underestimate the power of N.I.H.[1] in human affairs. Creative people with time to spare can't resist the temptation to think "how would I do that? oooh I can think of several tweaks that would make it better. I should totally make one of my own." And then their work becomes "held territory" for their coterie or organization to be defended and enhanced and bingo, you have a new language or database or protocol with a community, and the more effort the community invests in the thing, the more stable and important it becomes.

Really, it's the intellectual analog of the process by which a whirling disk of dust becomes a system of planets, some big, some small...

[1] http://en.wikipedia.org/wiki/Not_invented_here

A hypothesis: languages are not an end in themselves. They are used to generate programs for specific purposes on specific platforms. The availability of the platforms and the need to create software for them makes the language popular. C for Unix/Linux, JavaScript for the browser, C++ for Microsoft Windows, Java for business stuff* and Android, Ruby for Rails, Objective-C for Mac and iOS, C# for .NET.

* Sun invested a billion in Java to replace the costlier Smalltalk. Instead of competing with Sun, IBM ditched Smalltalk and just went with Java, too.

Caveat emptor: all the analysis below pertains mostly to type system features and other "surface observable" aspects of the language. Newly maturing compilation techniques are certainly another reason for the recent explosion (e.g. Rust, Haskell). But I'll stick to what I (ostensibly) know.

> I have no particular background in programming languages design and theory

Well then, good news: most of these new languages -- especially those you mentioned -- were invented primarily with software engineers and programming in mind. That said, all are informed by ideas which emerged as PL design principles in the 1960s-1980s and became well-established in PLT academia throughout the 70s, 80s, 90s and 00s. Haskell is the exception where the "old ideas finally getting to market" analysis is a bit less true, though even there the essential ideas driving the type system design were around in the 80s.

> I've been also on an exploration lately into the history of computer science and reading about the Lisp family and Smalltalk as they seem to viewed as the better designed ones.

I don't know about better designed. A better characterization is that they capture some essence -- Lisp, Smalltalk, SML, Haskell, etc. were all designed and implemented to demonstrate the feasibility of a certain programming style or discipline (as well as how that approach makes certain problems really easy when they weren't easy before).

> So what I don't understand and hope somebody here could shed some light on it is what's with all the new languages?

> How many of them really bring something new to the table, a better way than the old one?

> How is Go or Rust better than C C++ Ruby Python Lisps Java Smalltalk Erlang and whatnot.

A detailed answer would consider each pair. But broadly:

* These languages are typed, which contrasts them with the dynamic family (including Lisp).

* These languages tend to favor composition over inheritance, which differentiates them from (canonical) Java.

* These languages tend to make typed functional programming first-class (syntactic and compiler support for lambdas; pattern matching; etc.)

* The examples you've provided -- Rust, Go, Swift -- are more systems-oriented than Java and are not based on a VM.

* Lots of smaller things. E.g., avoiding C++'s slow builds was apparently a major design point for Go.

> Are those languages designed for very specific cases where older languages can't cope with.

Yes. All are designed to address some significant flaw in existing languages. Most were created because, for an important set of language requirements, there existed a language fulfilling each individual requirement but no single language fulfilling all of them. (Again, Haskell stands out as an experiment with laziness, if I understand the history correctly.)

> When I read about Smalltalk or Lisp or Haskell people regard them as the pinnacle of programming language design and yet their popularity isn't really proportional to those statements.

> How do languages get popular?

This is an area of active research (search for SocioPLT [1]). The common wisdom is "library support + important problem niche". The library thing strikes me as tautological.

> Money, syntax, portability?

The first is certainly a major reason the # languages exist :-)

> Why did PHP rule the 90' and not Common Lisp or Erlang or whatever.

Oh dear. Let's just agree that "quality" does not equal "popularity". Bieber > Vienna Philharmonic?

> Why do I read so much bad stuff about C++ from smart people yet it's one of the most popular languages. Why isn't Objective-C more popular since it is too C with classes? Why Java and not Self?

You'll receive lots of conjectures. I'll leave that business to others.


edits: formatting, adding link to SocioPLT

Some great points!

Out of curiosity, does anyone know of any studies/inquiries into the interaction of type systems with languages that favor composition over inheritance? I imagine that a more inheritance-driven language would be more amenable to strong type checking, for example.


> I imagine that a more inheritance-driven language would be more amenable to strong type checking, for example.

AFAIK the conventional wisdom among formal methods people is actually the opposite.

If you don't mind my asking, why do you imagine this?

> does anyone know of any studies/inquiries into the interaction of type systems with languages that favor composition over inheritance?

I'm not sure what you mean. Do you mean studies about interactions between type systems of these two sorts (e.g. as in Scala)? Or do you mean a comparative study asking which is better?

As I mentioned, I think the conventional wisdom is that inheritance makes things more difficult from a type checking/verification perspective. For this reason, the big arguments for inheritance tend to be oriented toward pragmatism rather than ease of formal reasoning.

So, this may simply be a massive gap in my understanding of theory and PL stuff. :)

My reasoning is that an inheritance-based language has some notion of "A extends B extends C", and so if I need to check compatibility of types I can just walk the class hierarchy and get an answer.

I'm clearly missing something--maybe I'm just using the wrong mental model for types?

> so if I need to check compatibility of types I can just walk the class hierarchy and get an answer.

For languages with parametric polymorphism, note that inheritance is kind of like subtyping. Getting subtyping correct in the presence of parametric polymorphism is famously subtle; see http://en.wikipedia.org/wiki/Covariance_and_contravariance_(...

For languages without parametric polymorphism, it's easy to see why inheritance makes things more complicated. In the case of nominal typing, this walking you describe isn't necessary without inheritance -- a value either is or is not in the exact named type it's supposed to be. In the case of structural typing, it suffices to say that inheritance complicates type inference.

Inheritance is actually even more subtle than subtyping even; e.g. consider http://en.wikipedia.org/wiki/Fragile_base_class
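To make the covariance point concrete, here is a hedged sketch in Python (class names invented for the example): if a list of a subtype could be used wherever a list of the supertype is expected, a perfectly legal write would corrupt the original list.

```python
class Animal:
    pass

class Cat(Animal):
    pass

class Dog(Animal):
    pass

cats: list = [Cat()]

# If list[Cat] were a subtype of list[Animal] (covariance), this
# aliasing would be allowed; static checkers like mypy reject it.
animals: list = cats

# A legal operation on a list of Animals...
animals.append(Dog())

# ...has now smuggled a Dog into what was supposed to be a list of Cats.
print(isinstance(cats[-1], Cat))  # → False
```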

> How do languages get popular?

Inclusive community would be my stab at an answer.

Money, usually. Consider Java, C#, Javascript.

I wouldn't include JavaScript in the list of PLs that are popular because of a huge corporation behind them; it's more a case of "this is what we've got, take it or leave it".

It is an error to equate a programming language, used for program construction, with a (spoken) language used for communication. It is advantageous to have everyone use the same language for communication, but that's not true for construction. Different construction jobs require different tools. This is why there is a tendency toward more programming languages, not fewer. They are tools. Languages for communication are more like protocols, which there is a natural tendency to reduce, just like spoken languages.

If I'm writing software for an MCU I'm going to want something that can directly address and manipulate memory, which excludes interpreted languages and things like Java.

For example if you look at the Android platform sources you'll see that all the code which supports hardware abstraction is written in C/C++ and then the API is made accessible through JNI.

This is somewhat like how homes typically have foundations built of concrete, on which wood is layered and assembled to construct boxes that people can put their stuff in.

A couple thoughts on C++ vs Objective C...

When I learned C++, it was sold as a better/safer version of C. You had function prototypes, const, inline functions, improved compiler warnings, etc. Some of those improvements made it into later versions of the C standard. I hope C89+ is more popular than K&R C for new code. You could ease into it slowly and benefit from day one without ever using a class.

Objective-C is strictly the classes and runtime, so there's no benefit unless you jump right in.

"How is Go or Rust better than C C++ Ruby Python Lisps Java Smalltalk Erlang and whatnot." - sure fire way to start a flame war with programmers ;)

I would expect most Rustaceans would give you a reasonably balanced reply. Rust sidesteps lots of issues with other languages whilst keeping their advantages, but it pays for this in different ways (e.g. learning curve, composability, and incremental compile times).

PLs are one piece of the (user, problem, tool) trinity. Every time you ask yourself "why?", the reason is that for a particular triplet, it was the best solution.

Give a newcomer SML with no easy way to integrate with an HTTP server and watch the confusion grow. On the other hand, PHP has good Apache integration and is a simple platform: a .php file with HTML and code; press F5 and observe your results. It's also a deceptively simple language.

I see all these languages as tools in the battle between the big companies. Each language serves the purpose of the power behind it. See my review of this list here: http://tracks.roojoom.com/r/11224

This is great!

You sound just like me of about 12 years ago!

I remember thinking: "I keep reading all this stuff about how Lisp is so much better than all of the popular languages I know about, so is it going to start taking over soon? Is C++ about to go the way of the dodo?"

12 years later, C++ is still around and about as dominant as it was 12 years ago. So I guess the first thing I'd say is: PL enthusiasts on the Internet are not the best indicator of what's about to get big. My best explanation for this is: PL enthusiasts have a somewhat different set of values and aptitudes than mainstream programmers:

- a PL enthusiast is willing to invest a lot of effort into learning and using a language/tool that they think is better. Mainstream programmers will usually go with what has a lot of momentum and support.

- a PL enthusiast usually has a knack for thinking very abstractly, so what looks elegant to them will often be very difficult for less abstract thinkers to unpack.

- a PL enthusiast is usually more concerned with making a language fit an elegant mathematical model than making it fit the model of the underlying hardware.

So PL people end up loving languages like Lisp or Haskell because they are much "cleaner" from a mathematical/abstraction standpoint (particularly in how they eliminate or tightly control side effects). And even though the mathematical models aren't very close to how the hardware works, people have invested a lot of work into making the compilers optimize things so that they are often very efficient -- comparable to what you'd get if you wrote things in a much more "manual" way in an imperative language.

However, because there is a lot of transformation that happens in the compiler, it can be very hard to predict how efficient the program will actually be. You're sort of at the mercy of the compiler -- it can completely change the big-O in both time and space! So while the language itself gave you an elegant way to say something, you may have to get your head around the language's evaluation model and optimizations before you can understand why it has the performance characteristics it does.

For example, one time when I was trying Haskell I wanted to know if a trivial function I wrote took O(1) or O(n) memory. The people on the Haskell list were very helpful, but look how much analysis it took just to answer this simple question!


But languages like Lisp and Haskell are still highly valuable even to the "mainstream" in that they explore a lot of powerful and abstract concepts, and these feed into more mainstream languages. 10-15 years ago few mainstream languages had "lambda" (from Lisp), now most mainstream languages do (JavaScript, Ruby, Python kinda, Java, C#, even C++). Algebraic datatypes (from Haskell) are showing up in Rust. So I think of Lisp/Haskell as idea incubators that make powerful features a lot easier to add to more mainstream languages, because Lisp/Haskell have already tried them out and run into all the questions and edge cases around them.

So now your next question: why all the new languages, and will any of them take off?

New languages are exciting when they can open up new possibilities. But the downside is that languages have strong "network effects" -- integrating different languages together is a pain. Languages succeed when the plusses of the new possibilities outweigh the inherent costs of using a new/different language.

You listed a lot of languages but the main one I want to talk about is Rust. Rust opens up a huge new possibility: the possibility of getting memory safety without giving up performance or control. No mainstream language has ever done this before.

Traditionally you have had two choices. Either you get top performance and control with a memory-unsafe language (C, C++) or you get memory safety while tethering yourself to a garbage-collecting runtime (Java, C#, Python, etc).

(People will come out of the woodwork here to argue that their favorite GC'd language is faster than C and C++ on their favorite benchmark. Such benchmarks usually tend to be a bit fishy, but this is beside the point. The point is that C and C++ give you the control to do whatever the other language might have done to beat you. Other languages winning is just a local maximum, in which the C or C++ programmer has not yet optimized their program to win. The reverse is not true: when you are tethered to a garbage-collecting runtime, there are certain behaviors built-in that you simply cannot work around).

What makes Rust exciting and very new is that it gives you the best of both worlds. Rust is memory-safe (except in localized "unsafe" blocks), but does not impose any kind of runtime or GC onto you. This could completely change the way that we write performance-critical software.

complaints ∝ usage: "There are only two kinds of languages: the ones people complain about and the ones nobody uses." (Bjarne Stroustrup)

  tl;dr they want to make things better
Languages are like products in a market that solve a problem. Many factors, product and non-product, make it hard to predict: do people know about it? How easy is it to get started? Has a core vocal and influential group picked it up? How exactly does it solve the problem? Can it be enhanced or band-aided so it's workable for problems that it almost fixes?

But worse than products, languages are high-tech products. This makes them harder to evaluate, so the bandwagon effect is even stronger (oh, smarter-than-me guy says this is cool, I'll believe it). That makes it even more unpredictable.

But worse than high-tech products, there are network effects: it matters hugely how many other people are using it... because they make libraries which makes it even better. They also use your libraries, making it more attractive. That is, a language is a market, itself. This makes it more unpredictable again.

Finally, why do people make new languages? Well, there is real progress in language design. People just want to make things better. For example, Go is written by the C guys... they want to make it better. (NB: no guarantee of success! Those guys also wrote Unix, and tried to improve it with Plan 9. "What's Plan 9?", you ask curiously. Exactly.)

Of course, the big companies with money also want to capture developers, instead of sharing, so instead of one language with the cool new features, you have several. Just like in most markets, when there's an improved style of product.

EDIT what about Smalltalk, Lisp, Haskell? Partly it's the bandwagon effect that passed these by... partly it's the purity of a cool idea. This makes them attractive to idealists, and unattractive to pragmatists. E.g. homoiconicity is a very elegant idea, but awkward, complex, unintuitive - everything is sacrificed to its pure beauty.

These are like indie artists who haven't sold out.

My take on a few languages you mentioned. I'll try to stay as neutral as possible, but some things are bound to be controversial.

- C#: Microsoft's answer to Java, supposedly does some things better (Java seems to be catching up some), but cross-platform support is so-so.

- Go: I don't understand Go. It seems to be conceived as an improvement over C, and it gets many things right (and a few things wrong, like error handling). Unfortunately, it gets the most important things wrong: performance and low-level access, which are the only reasons anyone uses C nowadays. If you don't need C's performance, you get languages that are much nicer and faster than Go (like Java or C#). As a result, it drew Python programmers rather than C programmers, because Go is still faster than Python and feels quite similar to basic uses of it. Also, Go seems to draw people who have drunk too much of the anti-OO kool-aid.

- Swift: A bit too new to tell. Objective-C was a notable improvement on C without incurring the complexity of C++. It suffers from a bit of Go syndrome, but Apple forces you to use it, so there's no debate to be had. Swift is an improvement over Objective-C. It seems that this heritage led to some shoehorning, and there are maybe some clunky angles to how things were designed (e.g. the type system).

- C++: many people have said it, C++ is very powerful but it's way too easy to break everything in a subtle manner without realizing it. The problem with C++ is that it has a very large set of core features, which can all interact in ways that are hard to predict if one is not a language lawyer. C++ is the opposite of elegance in language design. Despite this, it is used because it is fast and gets stuff done (good expressiveness). And if you run into strange feature interactions, you can always work your way around them by making the design a bit more ugly, thereby avoiding having to gaze into the pit of hell.

- Rust: very interesting because it promises more safety when doing low-level work, while retaining performance. I'm still waiting for the development dust to settle to give it an in-depth look.

- Smalltalk: the language itself is nice enough, kind of like a Ruby pushed to the next level in terms of meta-programming. The environment, however, is awful. The "image" in which you work completely traps you, and has a super poor UX despite the inclusion of very powerful introspection/debugging tools. At any rate, Ruby is mostly good enough, and you rarely need the added meta-stuff from Smalltalk.

- Erlang: genuinely useful for its use case, distributed systems. This is a language where the intended use was really woven in the language design, to great effect. For the rest, it's a bit like ML without types. Personally, I see no good reason for leaving out types, so that tends to annoy me a bit.

- PHP: Many things (mostly bad) have been said about it, and many of them true. However, its success is not undeserved in the sense that it was a very easy language to get started with, from the fact that it could be embedded inside the html directly (allowing for nifty cut-and-pasting) to the availability of easy-to-configure servers. It also has top-notch documentation.

- Common Lisp: The problem of Common Lisp is that it feels old. Many things seem antiquated, especially the library ecosystem. It's very hard to tell if there are good libraries, because the ecosystem is so scattered. Some libraries may not have been worked on for some time yet still be adequate, but that's hard to tell beforehand. There is little endorsement/sponsorship of libraries/tools by organizations or companies; most artifacts are the product of the work of some lone hacker (at least, that's how it feels). Maybe quicklisp is solving the problem, but then again, it's been in "beta" since 2012. As for the language itself, well, it is quite nice with all the macros and stuff, though I once again miss types (mostly for documentation purposes, as Lisp can sometimes be quite cryptic). Typed Lisps exist btw, such as Shen.

- Javascript: Javascript reminds me of Lua, in the sense that both languages have a quite small set of basic features that turn out to be remarkably expressive. There are obvious problems however in Javascript, which are mostly the consequences of how fast the language was produced. Under the circumstances, it turned out admirably well. Javascript became popular because that's what was supported by the browsers, and this looped into a spiral of support/development.

> Erlang: genuinely useful for its use case, distributed systems. This is a language where the intended use was really woven in the language design, to great effect. For the rest, it's a bit like ML without types. Personally, I see no good reason for leaving out types, so that tends to annoy me a bit.

I believe there's an interview with Joe Armstrong (creator of Erlang) where he mentions that the one thing he wished he'd added to Erlang from the jump was a type system. I'm not 100% sure on that, though.

In Learn You Some Erlang for Great Good they talk about the lack of types in Erlang [1]. Apparently some Haskell folks wanted to make a type system for Erlang, so they called up Joe Armstrong and asked what he thought. (This is all a really cursory outline; check the sources for better info.)

Joe Armstrong recounts the story and says that Philip Wadler told him "he had a one year’s sabbatical and was going to write a type system for Erlang and “were we interested?” Answer —'Yes.'"

Philip went on to write a paper [2] about the type system they wrote, but obviously it never really got traction. More info in The History of Erlang [3]

I don't really have a particular point to this, other than it's interesting and maybe of some historical interest to folks looking into PL's and how they end up getting made.

[1] http://learnyousomeerlang.com/types-or-lack-thereof [2] http://homepages.inf.ed.ac.uk/wadler/papers/erlang/erlang.pd... [3] http://webcache.googleusercontent.com/search?q=cache:ZHq_V41... (google cached version; couldn't find another version right off hand)

Your statement about Java or C# being "much nicer and faster" than Go is just plain wrong (Well if that's true then what's the reason to use Go?). In fact it falls in the same vein as Java or C# in performance, but it has the advantage of being native code. It isn't made to be a C contender. It's designed to be the simplest language to learn, the easiest to deploy, compile as fast as scripts, coding half as nice as Python and runs as fast as Java. Currently it excels at Cloud / server development.

Very good overview, so let me just nitpick this:

    PHP: [...] It also has top-notch documentation.
It has not. It is accessible, and it has always been easy to find everything in it, but it is very far from thorough. You need to hunt through user comments for implementation details in the interpreter that actually influence your script's behavior.

With that said, I've sort of made my peace with PHP and use it willingly at work. (JavaScript is the industry's arch-enemy now.)

I'm saving norswap's excellent survey and assessments to share with others. Fair, honest, and captures the essentials quite well.

> When I ask those questions I am in no way trying to discredit new languages and their usefulness, I am just young, naive, not very smart and trying to get and idea of how the real world of programming and computer works.

Welcome to the war.

Please don't hold any hard feelings for the community if you get flagged or downvoted to hell. People who will do this to you are generally smart, sympathetic and considerate individuals who just were on the frontlines for much too long. Being cold-hearted and eliminating every threat swiftly, no matter how innocent it seems, is the only way of preserving one's sanity here.

I'm a PLT and Type Theory enthusiast, although I lack any formal education in this direction. I try to follow new research and I'm constantly learning new things (like the ones from the '60s which were then forgotten) and really new things (original research happening now which acknowledges what was done in the field already). I graduated (last year) from just learning new languages to writing my own toy languages (thanks to Racket being an absolutely wonderful framework for doing so), but I still learn every single language that seems interesting. This includes both nearly-mainstream languages like Erlang and the ancient, largely forgotten ones like Prolog, APL and Forth (which you should include in your list next to C, Smalltalk and Lisp).

I'm fascinated by the notion of computation, of how we can encode computation, how we can reason about computation and how we can transform computation to preserve its semantics. I'm fascinated by language design: what features a particular language has and what it omits, I'm always trying to discover what kind of turtle (and if really all the way down) a language is built upon. I'm feeling happy and safe reading papers from Racket and Haskell people, it feels like I'm reading a suspenseful novel in a quiet library somewhere.

Then I go to StackOverflow or here and the reality hits: screaming, shooting, blood and intestines everywhere, people fighting for their salaries and self-respect, so ultimately for their lives.

You'll hear about technicalities from other people here: type systems, concurrency primitives, memory safety and direct memory access, static vs. dynamic (not only typing), syntactic support for common idioms, having (or not) a built in support for certain concepts (like inheritance or composition). I'm not going to tell you about all this. I'd love to, and I really like the topic, but I feel that you wouldn't benefit from it nearly as much as from the other half of the story.

You see, programming languages are tools which people make for people to use. Not only that - both the makers and consumers do what they do to feed their families. I recently saw a Byte magazine from 1980 (IIRC) where I saw an ad for TinyPASCAL, which promised a 4x increase in speed over the equivalent code in Basic. It came with some additional libraries (and it was available for a couple of different machines) and cost $8. There was another ad, which claimed that you'd never need another Fortran after you bought the one being advertised, because it was fast and had additional libraries, for example (IIRC) for calculating log (or lg). It was some $15, I think. Not having lived then I miss a lot of context, but what I see here is that people have been using programming languages to make money for quite a long time.

This is not a problem in itself. The problem is the nature of our industry, which is for the most part impossible to measure or experiment with. When did you last hear about a double-blind (how would that even look...) experiment of building the same large corporate system 5 times with different tools, simultaneously? I haven't. And that's not all. We are certain about some things, because mathematicians discovered some very clever proofs of these things. But they are few and far between. For my favourite example: what is "readability", even? People fight to their last breath and last shred of dignity for their particular take on readability, yet we don't have the slightest idea what the hell readability is, let alone how it impacts us. It's the same, just many times worse, with other features, like the famous allowing-assignment-in-conditionals, or preferring mutability over immutability, or providing pointers or not, and so on. We know for sure that if a language is reducible to a very few operations which form one of the basic models of computation, then it's able to express everything expressible in every other language. That's a baseline and it's basically useless, because there are real differences between how good different languages are as tools, and we have no idea at all what makes the difference. We have lots and lots of anecdotes, though.

All this - people wanting better tools and people getting used to their tools, people designing new tools and people marketing the tools they make as better, all with no meaningful way of defining what "better" even means here, but with a vague feeling that how good the tool is directly impacts your performance and your pay - leads to the current situation. People have their beliefs, and there are people - some sincere, some not so much - who profit from those beliefs. Languages are viewed as tools for writing software and for generating revenue... both by corporations and individuals. All programmers make decisions about which philosophy, which belief system to buy into, and they all know that this decision is an important one. For companies, having a language with a large following can make the whole difference between winning and losing in the market. Similarly for individuals: belonging to a particular tribe makes them feel safer, they can more easily ask for help, they can find jobs more easily. It's really a circle of illusion which works, because it is economically possible for it to work, and because no one can really dispel that illusion (of knowing what "better language" means, for example) yet.

So, to answer your question - what makes languages successful or not? Please do read the other answers and pay attention to all the technical details, they are important - but in the end I believe, at least for the last 40 years and some more to come, the answer is really simple: people. It's people - social creatures, who have emotions, who are susceptible to manipulation, who are rebellious, who are compliant, who get used to things, who get tired of things, who have wants and fears beyond and above technical matters - it's just people who make languages successful or not. It's almost purely a social issue. Think for a moment - what does it even mean for a language to be successful? Doesn't it mean being popular with people?

Can anyone point me to the story of js? Did it get popular because it was the better language (as it seems to be implied by OP)?

JavaScript became popular because it was the only(1) language that did what it did -- scripting in the browser. It has become popular on the server side lately(2) partly because some people are reluctant to learn another language, and partly because people believed that Node's nonblocking hype was something new.

(1) Yes, there were others from MS, but only JS has been cross-platform and cross-browser.

(2) Node.js wasn't the first server-side JS environment, but it was the first to be popular.

Well, there exists a very strong mythology around the idea that language A = language B, only with different syntax sugar. So some people don't see the point of creating new languages (or ask: why not extend the old ones?).

> So what I don't understand and hope somebody here could shed some light on it is what's with all the new languages? How many of them really bring something new to the table, a better way than the old one?

Probably the term "language" is a bit misleading (it fuels the notion that 'English' == 'German', just different) and it's better to think of machine builders, whose interface is based on combinations of words - but the words ARE NOT WORDS.

So, is it possible to build a better machine builder than others? Of course. Some are very linear (and fast) but not that good at parallel work. Some are very unsafe. Some are complicated to operate. Some are nonsensical, where "turn left" means self-destruction instead. Some require a lot of steps to produce the end result.

The beauty of a language is that a SINGLE word can not only imply a meaning, but is also EXECUTABLE with a behavior.

go ... async .. for ... spawn ...

It's like having a machine that chops wood. It could be made of hundreds of small pieces. Or it can be an axe, cast in a single iron mold.

A new language can be made when it's understood that it's possible to get the axe NOW and chop, instead of building it like in Minecraft. Even if the end result is the same (dude, people do insane things in Minecraft), your way of THINKING changes if you are NOT PLAYING MINECRAFT but, instead, something else.

Some languages move closer to "I have an axe right now, let's move on" faster than others. I.e., some machines are more low-level than others.

With that idea in mind, a language (machine builder) designer starts to see some things: even if having a Minecraft-level sub-machine builder is important (i.e. the parts that almost all languages have, like for, if, lists, chars), it's another level, another game, to have machines tailored to some task.

And if you extend the idea far enough, you can see that it's better to have several of those specializations in a single package. If the mix goes well, you have a factory worker that is very... happy! Or you have another crazy machine where "turn left" means self-destruction. But well...

Of course, some natural limits are hit because of the limitations of the computer architecture itself (and the limitations of the factory worker), but as I said:

Some people do insane things with Minecraft.

Languages are like operating systems, browsers and search engines.

Every big tech company has to have one.

Let's look at PHP a little bit. It made moving from a static site of just HTML files to a slightly dynamic site dead simple: just rename the file .php and add in a few <?php blocks. And then it was deployable with just FTP, which helped because some hosts didn't give you SSH, and many people wouldn't have known how to use SSH at first. That's why it "won" on the server side, at least for small sites. It solved the problem of "I don't know much about web programming, but I have a website and I want it to do a bit more" really, really well. It didn't matter to them that it lacked higher-level features, so it didn't have them. So experienced programmers, who knew how to get CGI scripts running and could make a website using a "real" language, looked down on it, since it doesn't add value to them, and, frankly, it does suck a bit. Tons of gotchas, but they couldn't fix them once it became popular so fast. It'd be a much better language if it had had time to mature before becoming popular.

That's actually a similar story to Javascript, since it really didn't have any time to mature before shipping out to everyone. But I think both languages have improved as they've been upgraded. But make no mistake, we could have built them better if we started over now, and didn't have to worry about backwards compatibility. We have learned which parts we'd want to keep and which parts might require some ironing out.

Now, for the longest time, Haskell had an unofficial motto of "avoiding success at all costs" [1] (page 10): "When you become too well known, or too widely used and too successful, suddenly you can't change anything anymore."

So, it's not a big surprise that Haskell isn't super popular, since the creators don't really benefit from it being super popular, and it makes their research harder.

[1]: http://www.computerworld.com.au/article/261007/a-z_programmi...

Do check out this article, it's great if you want to learn about languages. The whole site is. Page three talks about how languages pop out of nowhere: "In my experience, languages almost always come out of the blue."

Let me look at a few new languages

Go: It works great for concurrency, and shuns the hierarchies of OO for interfaces, but keeps the nice syntax. It feels a lot like Python even with its static types. Its lack of proper generics causes it to get looked down upon sometimes by PL folks, and its GC makes it unpalatable for C/C++ tasks. I'm sure it's useful for Google.

C++11: It feels like a different language from C++. A lot of the verbosity of doing things the idiomatic way falls off (`for(const auto& x : things) {}` is much better than the old way). It definitely makes the language better, and can help speed it up and make it safer too.

Rust: It actually feels a bit like C++: The Good Parts, plus all the concurrency goodness from Go, and the little things you hate going without from Haskell (Algebraic Data Types is a big one). It's pretty ambitious, but if they can pull it off, I think it'll be one of the best languages.

I'll pick Rust to break down your question about how I think it's better. It's "better" than C++ because it leaves out all the foot-shooting and messiness and keeps just the good parts. And it's nice to have Option types without pulling in Boost. Compared to Ruby, well, it's aimed somewhere different, but I think it's faster while being at least close in expressiveness. Ditto for Python. Lisp, well, again, it's aimed differently, but Rust does have macros and strong functional programming support. Personally, I'd take the type system and leave Lisp behind. Java? Well, apart from not running on the JVM and being more complicated, I think you can say more, more clearly, and have it run faster in Rust. Smalltalk? I don't know if Rust is better; I haven't used Smalltalk at all. Erlang? This one is actually somewhat comparable, since both have strong functional programming support and good concurrency. I think you can actually do more in Erlang, with its actor approach, and it certainly wins with the hot code swapping and really cool features there. And Erlang also has better bit-level logic support. But those features are exactly what's needed for Erlang's niche, so I'm not sure if I'm being fair. I can't say if one is better than the other here.

Swift: Wow, this one is new. But it does take some of the things I really like, such as Algebraic Data Types (its enums), along with things that you really expect from a modern language these days, such as tuples, lambdas/closures, map, filter, and generics (I'm looking at you, Go!). It also inherits a bit from Objective-C, and I think that's at least partly why it is its own language and not some other language with some libraries. Also, that playground feature seems pretty neat; it's what Bret Victor was talking about.

Does my rambling help in any way?

PS: Haskell isn't the pinnacle, it's just a gateway to Idris :D

It seems like it's an evolving thing -- maybe like car design... there is language design also. All of them bring new things to the table, and the good ideas are then debated by designers and copied into other languages.

Languages get popular because of the people who use them. PHP ruled because it was easy to use and had a ton of tutorials - with a huge userbase to answer any question.

I'm not sure why people knock C++; it's ugly but it's fast. Obj-C is popular, just less popular than C++ (I think? Perhaps there are just easier ways to create apps for *nix and Windows than Obj-C). I'm not sure about anything Java.

Javascript was an exception, IMO, because... they had a monopoly on being the only language that lets you modify webpage content programmatically.

>Javascript was an exception, IMO, because... they had a monopoly on being the only language that lets you modify webpage content programmatically

I have a very dim memory of Microsoft attempting to push VBscript but I can't recall ever having even seen it run anywhere.

> How is Go or Rust better than C C++

Use C or C++ and then try out Go or Rust.

That is not really saying much. Depending on the domain you are in, C and C++ can be (and are) much more suitable candidates than Go and Rust.

The simple point is that the "general purpose programming language" is dead. I am not saying there are no general purpose programming languages. I am saying people are going to use language X for task Y because we have the flexibility to do so now.

I'm going to agree on the Go part, but do think that Rust can beat C and C++ at their own game. I really do feel like it's the best of C++, combined with Haskell, and a really smart compiler. A lot of Rust could be translated back into C++ with shared_ptr and && references (the lifetime stuff at least), but Rust gives stronger guarantees about safety and const correctness.

I feel many of the C++ criticisms I read have to do with the older standards (98/03). C++11/14 is a very major upgrade to C++. C++ certainly still has its pitfalls, but things have gotten much better. It honestly feels like a new language to me.

C++11/14 can't be used without upgrading your compilers. Go and Rust use standard toolchains.

Go is also cross-platform, whereas gcc/clang don't like working on Windows, and Microsoft tends to wait years to implement the full C++11/14 specs, long after the open source community is already using them. This results in fragmentation.

