The Unix Philosophy and Elixir as an Alternative to Go (lebo.io)
305 points by aaron-lebo on June 22, 2015 | 131 comments



Erlang (and by extension Elixir) are good languages for web development, I agree. Frameworks like Nitrogen and N2O even let you do frontend work directly from base Erlang constructs like records and functions to represent templates and JS callbacks, respectively.

However, they will not replace Go for a rather simple reason. Erlang is an application runtime first and a language second. It's not meant to play with the native OS all that well, instead relying on its own scheduling, semantics, process constructs and the OTP framework. It exports a minimum set of OS libs for things like file system interaction, but you wouldn't write a Unix subsystem daemon in it. Though, you certainly can - if you want to hack with NIFs, port drivers and C nodes. But then you put the global VM state at risk if some native caller blows up.

For this reason, Go will remain a dominant choice for infrastructure and systems programmers. It definitely is replacing C/C++ for lots of things above kernel space, contrary to your statement.

I'd honestly dispute the characterization that Elixir is a small language. Erlang is easier to fit into your head, not least because of how essential pattern matching on tuples and lists is. It's a consistent and homoiconic design, more warty constructs like records aside.

Finally, Erlang/Elixir aren't Unix-y at all. Don't get me wrong, BEAM is a wonderful piece of engineering. But it's very well known for being opinionated and not interacting with the host environment well. This reflects its embedded heritage. For example, the Erlang VM has its own global signal handlers, which means processes themselves can't handle signals locally unless they talk to an external port driver.


Golang implements a custom scheduler in userspace too, and this is why its FFI is not as fast or as well integrated with the OS as the calling conventions of C or C++ are. Heck, Golang's FFI is even less straightforward than that of Java, since the JVM is typically 1:1, not M:N.

Once you've gone M:N you're effectively in your own little world. There's a reason Golang's runtime is compiled with the Golang toolchain, and why it invokes syscalls directly as opposed to going through libc.


I did some cgo (Go's FFI) work. Go's FFI is vastly superior to JNI (which is in a league of its own when it comes to awfulness).

No FFI is really great. Go's isn't as good as C# or Luajit or Swift but better than Java or Python or Ruby or Lua FFI (where you have to write a wrapper for each function you want to expose to the language).

C and C++ use the OS calling conventions, so I'm not sure what that was supposed to mean.

It's true that Go is M:N and that it does have a bearing on bridging some C code (in rare cases where C code only works when called on the main thread).

However, gccgo has a Go runtime written in C and compiled with gcc, so Go's runtime isn't tied to one toolchain.

I don't know why Go's designers chose to (mostly) avoid libc, but it certainly is great for portability and ease of cross-compilation. If you take a dependency on libc (which differs between Linux/Mac/Windows), you pretty much throw away the ability to cross-compile, and given Go's static compilation model it would require bundling a C compiler (unlike runtime-based languages like Java, Python or Ruby, where only the runtime has to be compiled on the target OS, so that complexity is contained for programs written in the language).

I don't see why you ding Go in particular for being "its own little world". Compared to C - of course. Compared to Java or Python or Elixir? Much less than them.


cgo-style stack switching (which I assume BEAM also uses) adds a lot of overhead at runtime, which Java and Python don't need since they're 1:1.

The speed of the FFI really affects how a language ecosystem uses it; if it's a lot slower to call out to external libraries than to call code written in the same language, then there's a large incentive to rewrite all dependencies in the language as opposed to using what's already there. Sun's libraries are a bit of a special case in that Sun really tried to rewrite everything for strategic/political reasons, but look at Android; the heavy lifting in the Android stack is done by Skia, OpenGL, and Blink/WebKit (to name a few), a strategy which works because JNI is relatively fast. Python also heavily favors using C libraries where appropriate, again because Python C bindings are fast.

I don't understand the issue about cross-compilation. You don't need a cross-compiler to statically link against native libraries; you just need a binary library to link to and a cross-linker (which can be language-independent). And, of course, if you dynamically link, you don't even need that much.

I'm not really trying to ding Golang, in any case. M:N scheduling has benefits as well as drawbacks. FFI is one of the downsides. There are upsides, such as fast thread spawning. It's a tradeoff.


> better than Python or Ruby

Mmm? https://cffi.readthedocs.org/en/latest/overview.html (basically a port of luajit's ffi) http://ruby-doc.org/stdlib-2.0.0/libdoc/fiddle/rdoc/Fiddle.h... (ruby DSL to libffi)


In Java's case it was made awful on purpose, to discourage developers from writing native extensions.

The new FFI is on its way, but it might arrive only on Java 10.


It's off topic, but I think it was a great idea to have terrible C interop. It really forced people to write java all the way. This meant the JVMs could really evolve, unlike the Python and Ruby ones.

I am not sure the new java FFI is such a great idea in the long run. I would rather that they spent more time focusing on object layout and GPU compute ;)


> It really forced people to write java all the way. This meant the JVMs could really evolve unlike the Python and Ruby ones.

Java's FFI is horrible; Python and Ruby are mediocre. LuaJIT2's is fantastic. Not so surprisingly, Python ate Java's lunch in places like scientific computing, where it is much more beneficial to build on existing work.

Python is hard to dethrone from that spot right now because of momentum, mostly - but if the competition was started again, I'm sure LuaJIT2 would take the crown (Torch7 is based on it, but that's the only one I know).

I think my bottom line is: If you want your VM environment to be self sufficient, have horrible FFI like Java. If you want your VM environment to thrive with existing codebases, you have to have at least a mediocre one like Python's. But you can have the best of all worlds like LuaJIT[2] - and that's who Oracle should be copying.


I think Python will lose momentum as soon as Julia gets more adoption, likewise with languages like Go and ML derivatives. Unless PyPy gets more widespread, that is.

Java's upcoming FFI is based on JNR, an evolution of JNA, used by JRuby for native FFI.

Nevertheless everyone seems to have forgotten about CNI, implemented on GCJ, which mapped C++ classes directly to Java ones.


Sorry, what is M:N?


Any situation where you have M userspace jobs running on N system threads, i.e. the number of tasks is different to the number of system threads.

Normally this occurs because you're running a large number of "green" threads on your own scheduler, which schedules onto a thread pool underneath. This is good if all your threads are small/tiny, since userspace thread creation is cheaper than creating an OS thread. But if your jobs are long-lived, your userspace scheduler is really just adding scheduling overhead on top of the overhead the OS already has for thread scheduling, and you would have been better off with OS threads. If your M:N threading requires separate stack space for each job, there can be a sizeable overhead (this is why Rust abandoned M:N threading).


Can you come up with some examples of when this would begin to be noticeable to an end-user?


If you're crossing the FFI boundary a lot, any overhead adds up quick. For example, drawing a bunch of small objects using Skia, performing lots of OpenGL draw calls, allocating LLVM IR nodes, or calling a C memory allocator…


One of the nice things about M:N is it decouples concurrency from parallelism. That is, your application can spawn as many inexpensive green threads as the task requires without worrying about optimizing for core count or capping threads to avoid overhead, etc. With Go 1.5, underlying system thread count will default to the same as the system CPU core count.


It's noticeable to the end-user only in its negative performance implications in certain situations, making things slower than they would be otherwise on the same hardware. It's a low-level construct; it is not directly noticeable to the user either way. The negative performance implications are largely under heavy load. The post you replied to gave some more specific situations.


"However, they will not replace Go for a rather simple reason. Erlang is an application runtime first and a language second. It's not meant to play with the native OS all that well, instead relying on its own scheduling, semantics, process constructs and the OTP framework. It exports a minimum set of OS libs for things like file system interaction, but you wouldn't write a Unix subsystem daemon in it. Though, you certainly can - if you want to hack with NIFs, port drivers and C nodes. But then you put the global VM state at risk if some native caller blows up."

I have to disagree here. One of the best applications of modern times ever written in Erlang is Riak. It has lots of system services, it does a lot of IO (it is a key-value store after all) and it works beautifully. It uses LevelDB, which is written in C++, under the hood for storing the data on disk. Erlang makes it possible to have a distributed service using Erlang for the distributed parts and C++ for the disk IO, combining the best of both worlds.

You are right about the potential NIF problems, but one shortcoming that can possibly be worked around does not invalidate the rest of a system that is cutting edge.

How can you be sure that Erlang is great? Just look at projects like Akka and many more Scala projects; they pick up features from Erlang because it simply works and is easy to understand and reason about.

The notion of Erlang replacing Go is funny anyway, because Erlang has existed for a much longer time than Go, and even the web stack in Erlang is pretty cool. Webmachine is one of the most stable tools out there, providing absolutely amazing latency and a state machine to handle HTTP requests.

Go as a language is new and in certain ways immature (debugging and generics, for example). It is an OK environment to develop in, but I would use Erlang/Elixir much more willingly for HTTP services (saying this after trying to use Go for a while; I just gave up, it was too much hassle for not much gain).


For the record, I don't need any convincing of Erlang's greatness. I know that and I'm a big fan of programming in it.

Nor did I ever deny that Erlang excels at I/O; that much is obvious. I also made clear the various ways of interfacing with native code.

But I wasn't talking about more abstract distributed services, rather stuff that brutally exploits POSIX and kernel-specific APIs. Using Erlang would probably be inconvenient for most people, so they'd use a language that seamlessly interacts with its host and has a small runtime, understandably. Not that you couldn't reap benefits from supervision, multi-node distribution, hot reloading and other things if you made the effort to do some elaborate hacking with ports and NIFs.

Erlang/OTP blows Scala/Akka out of the water for me, personally.


Heh, it was coming through at first sight; you know it very well.

" I know that and I'm a big fan of programming in it." Me too btw. :)

I see your point, yes that is absolutely true. I think one reason Erlang is great is that they realized that exposing the POSIX threading APIs to software developers is harmful. :) Why would you ever want to know how many cores you are running on? If you know it, you have to keep track of it and adjust it on different systems, among a bunch of other issues. On top of that, POSIX threads are so expensive that it made no sense to build a system that lets the developer create one per task. I think Joe & team made an excellent decision there. On the other side, you can't really access these low level primitives, so I guess this is what you are talking about. For the low level stuff you need a systems language (C, Go, whatever).

Oh, yeah, JVM based actors are kind of interesting. I am not sure how they can meet any latency SLA. The GC potentially skews the response time, right? In Erlang/OTP that does not happen, because of the GC implementation, as you know.


I've used the JVM and actors on top of it for real-time bidding, which is normally a soft real-time job because you usually have contracts in which you promise to reply to requests in 100 or 200 ms maximum, including the network roundtrip, otherwise you'll be taken off the grid. Our system was replying to over 33,000 such requests per second on only a couple of instances hosted on AWS.

The garbage collector was a problem and, as usual, it required us to profile and pay attention to memory access patterns, plus a lot of tuning of that garbage collector. But truth be told, we were also much sloppier than what people do in the financial industry. For example, we did use Scala, functional programming and immutable data structures, which can bring significant overhead, since the profile of persistent collections does not match the assumptions of current generational garbage collectors. But on the JVM you also have options not available on other platforms. The G1 GC is pretty advanced, and at scale the commercial Azul Zing promises a pauseless GC and has been used successfully in the financial industry.

I also used Erlang and in my experience the JVM can achieve much lower latency and better throughput. Where Erlang systems tend to shine is in being more resilient and adaptable.


Everything can be done: you can invest a lot of time to make your code run better on the JVM and tune the GC settings, etc. If you make the same investment into the Erlang ecosystem (like, for example, the WhatsApp guys did) I think you could match the JVM's performance in terms of number of connections handled while staying responsive. If your code does numeric calculation most of the time, you'll probably have a better time on the JVM, though.

http://www.erlang-factory.com/upload/presentations/558/efsf2...


> It definitely is replacing C/C++ for lots of things above kernel space, contrary to your statement.

I don't want to live in this world.


> However, they will not replace Go for a rather simple reason. Erlang is an application runtime first and a language second. It's not meant to play with the native OS all that well, instead relying on its own scheduling, semantics, process constructs and the OTP framework.

I think you could argue that both languages are equally a pain in the ass to interop with C in. Erlang/Elixir is slightly more complex because you do need to keep the ERTS schedulers in mind, but when's the last time you saw someone give cgo a compliment?

> It exports a minimum set of OS libs for things like file system interaction, but you wouldn't write a Unix subsystem daemon in it. Though, you certainly can - if you want to hack with NIFs, port drivers and C nodes. But then you put the global VM state at risk if some native caller blows up.

Usually ports are the first thing to reach for, precisely to avoid writing a NIF in the first place.

> Finally, Erlang/Elixir aren't Unix-y at all.

I'm guessing what OP means by "Unix-y" is the metaphorical definition of Unix-y as in the oft-butchered Unix philosophy of small composable programs that operate on streams of data rather than playing by all of Unix's rules. Elixir very much satisfies this definition with its emphasis on functional programming and pipe operator.


Except that Go doesn't really need interop with C for many systems programming tasks because of its stdlib support for the full POSIX and associated kernel-specific interfaces, without depending on a libc at that. Along with other more generic interfaces.

Then the fact that it's practically designed to be used on top of a Unix userland, much like C. Contrast to Erlang where you live in BEAM.

FFIs in general are often bad, though cgo from my minimal interaction with it seemed tolerable by most standards.

EDIT: Don't know how FP and the pipe operator alone make things Unix-y. Erlang/Elixir definitely play in their own league and I don't recall ESR's rules off the top of my head to make a detailed comparison right now.


> EDIT: Don't know how FP and the pipe operator alone make things Unix-y. Erlang/Elixir definitely play in their own league and I don't recall ESR's rules off the top of my head to make a detailed comparison right now.

Forgive my constant stream of edits. It started off as a short reply but I kept coming back feeling "It's a programming language discussion, ugh. I really should elaborate."

Lots of people contributed to the Unix philosophy, but ESR's contributions alone are 17 rules and people only tend to talk about 1-2 of them so that's why I referred to them as "oft-butchered."

I would say FP satisfies the Unix streams ideal due to how you transform data through functions and pass this data to other functions. Elixir's pipe macro just makes this read more naturally. The modularity comes from the Erlang concept of "applications" and Elixir takes this further with umbrella applications.

I've written quite a bit of Elixir and I can't say that I've ever really needed to interop with the kernel or POSIX APIs at all. One of the reasons I use Elixir is because BEAM handles all of the crap I used to write in other languages (Async I/O APIs) to make them "fast." Heck, a lot of people even think Erlang got microservices right nearly 30 years ago!

I did write some Go long before I ever wrote any Elixir, so my experience is similar to OP's. Go has a great niche in infrastructure tooling due to compiling to a binary, having a nice stdlib for tedious server-related things (TLS, crypto, etc.) and supporting cross-compilation. That said, I'll never write servers or business application logic in Go again. Elixir is just the better tool for the job there.


> I would say FP satisfies the Unix streams ideal due to how you transform data through functions and pass this data to other functions.

And he's saying they're not Unix streams and aren't playing in Unixland, I think you're completely missing his point.


That distinction is overly pedantic, especially in the context of the Unix philosophy which was supposed to be applied to programming in general.


It's not pedantic as it's the point the OP is trying to make; not UNIX'y means it's not playing well with the environment, he's not talking about philosophy.


Philosophy, not implementation.


Which is the point: when he says it's not UNIX'y, he means it doesn't play well with the environment. He's not talking about philosophy, he's talking about the implementation, so saying it's philosophically the same is missing the point.


My point is that the original article was saying its "UNIX'y" in philosophy. So the commenter you're discussing is the one who is missing the point, in my opinion. It's a moot point anyway, we're being needlessly pedantic here.


I think you could argue that both languages are equally a pain in the ass to interop with C in.

Could you elaborate? I have used cgo a lot and it is very nice compared to other FFIs I have used (Java, Haskell, Python, Ruby).

Things will get a bit uglier in >= 1.5 with a concurrent copying garbage collector. Since, e.g., arrays backing Go slices may be moved, you cannot pass 'blocks' of Go memory to C-land anymore by taking the address of the first element.

Edit: or do you mean using a Go package from C?


Erlang is not homoiconic. Did you mean Elixir? Neither are anyway.


Homoiconicity extends well beyond the obvious AST primacy of Lisp. Tcl and Io are homoiconic languages too, for example. The reason I used that word is because the Erlang language and libraries exploit a few basic data types (tuples and lists being composed into proplists and iolists) very well, to the point that a lot of practical Erlang code is simply shuffling data by pattern matching on those types. It's not a Lisp-level code-data correspondence, but it's in a similar league.


It really isn't homoiconicity if you can't compile an AST with it.

What you're describing are just simple data structures. Erlang has no user-definable data structures. I don't recall anything I've read explicitly stating this, but I've believed for a while that this was the case because it made moving terms across the network simpler. Thus, libraries aren't "exploiting" a few basic data types... they're stuck with a few basic data types. There's no alternative.

That really has nothing to do with 'homoiconicity'. Erlang code is not a data structure, and it is not spelled in Erlang terms. And that would generally not be a good thing! Erlang's already plenty verbose without also having to be expressed as legal Erlang data terms.

(The generally loosey-goosey typing that results is one of the reasons I don't like Erlang and am moving away from it. But YMMV.)


You mean a dominant choice for those that don't know better, I guess.

The only place we will get to use it, is if any of our customers ever makes a request for Go knowledge.

Unless they are using Docker, knowing their standard languages I doubt they ever will do it.

I don't see any of their Akka, Fork/Join, TPL, Agents, core.async devs caring about Go.


> That being said, have you tried writing a web app in Go? You can do it, but it isn't exactly entertaining. All those nice form-handling libraries you are used to in Python and Ruby? Yeah, they aren't nearly as good. You can try writing some validation functions for different form inputs, but you'll probably run into limitations with the type system and find there are certain things you cannot express in the same way you could with the languages you came from. Database handling gets more verbose, models get very ugly with tags for JSON, databases, and whatever else. It isn't an ideal situation. I'm ready to embrace simplicity, but writing web apps is already pretty menial work, Go only exacerbates that with so many simple tasks.

This.

And don't even try to work around these limitations and share your work for free, or you're going to get trashed by the Go community (i.e. the Martini fiasco).

But think of Go more as a blueprint. Go has great ideas (CSP, goroutines, ...).

Hopefully it can inspire a new wave of languages that don't rely on a VM, that are statically typed and safe without a fucked up syntax.

The language that succeeds in striking the right balance between minimalism and features with an excellent support for concurrency will be the next big thing.


> ( i.e. the Martini fiasco )

Reference please for those of us who are only casual Go users?


The author of Martini released his framework, some people in the community said it wasn't idiomatic Go, he agreed and eventually released another one called Negroni.

I don't know if he was "trashed", or whether there was a "fiasco".


Jeremy Saenz certainly remains a well respected member of the Go community.

Martini is very 'magical' and makes it hard to see exactly what's going on, making it quite unusual as a Go framework, so naturally it drew some criticisms. After some experience with the library, Jeremy realized that he actually needed a lot less to do what he wanted, so he wrote Negroni.


Refer here: http://codegangsta.io/blog/2014/05/19/my-thoughts-on-martini...

The "short" version is that Martini did a lot of 'magic' stuff with run-time reflection. This made it really easy to use for those coming from dynamic languages but also isolated it from the rest of the Go HTTP ecosystem (which rallies around the http.Handler interface). There were also some discussions about that being "slow", but it was still really fast relative to (e.g.) Python.

My opinion is that Martini took a pragmatic approach to getting things done. It was also a bit of a gotcha for newbies who jumped into it and then struggled to learn why things didn't work around the edges of its feature base.


> Somewhere, however, that has been extrapolated to the idea that Go is a good language for writing web apps. I don't believe that's the case.

I have always had this feeling and wondered why more people aren't speaking up about it.

For me it's a good improvement from C++ (and maybe C as well), somehow well-suited for systems programming.

I never understood how people can claim it to be a natural "upgrade" from Python or Ruby. I have a feeling the opposite is actually the case. This of course is all in the web-development context, not a general one.


I am utterly baffled by people who use the phrase "The Unix Philosophy" for justifying a piece of software but don't understand that Unix philosophy is in the context of software that can be composed together. That is why the other tenet of the Unix philosophy is to use text streams for all I/O.


I think Elixir and Go solve different problems -- but for the problem space the author mentions here, web services, Elixir is clearly superior.


Thanks for that interesting viewpoint. I have also done the same survey, and I agree. I did a ChicagoBoss PoC last year, and Elixir and Phoenix are my go-to for the next one.

It should also be noted that both are young languages and they are evolving rapidly. This makes them even more exciting to me. Also, I think Go's sweet spot may be a new approach to client/server, i.e. applications that work across the network. This is quite different from the browser/app server model. Elixir has similar plumbing that fits this model as well, but what really excites me about Elixir and Phoenix is the ability to build a RESTful MVC application that can take advantage of Erlang/OTP's scalability.


It should also be noted that both are young languages and they are evolving rapidly.

Erlang itself is nearly 30 years old, but it wasn't open sourced until 1998. OTP came shortly after.

Erlang doesn't evolve rapidly at all (at least, not in the typical sense of "Software innovates so quickly - Last year I was using one event loop, and this year another"!). It's remarkably conservative, yet still ahead of the pack. That's what makes it so great.


He was talking about Elixir and Go. Not Erlang.


He might have intentionally ignored Elixir, IMO.


> It should also be noted that both are young languages and they are evolving rapidly.

Go will basically get dynamic linking and stop evolving. Its type system is way too simple to support any new significant feature. And Go's designers have said many times that the language is "done".


My understanding is not that they have said that the language is "done", but that the 1.0 branch is stable and won't get any new major language features. At some point they will start rolling out 2.0 and it will have new features, maybe even generics. But why should they worry about that? The language is plenty good as it stands for developing, and there is plenty of work to do on cleaning up build systems, runtime, GC, etc.


> "The language is done and that's a good thing," Buberel says.

http://finance.yahoo.com/news/google-thinks-knock-one-oracle...

Pike said exactly the same thing (video is on YouTube), another Go guy said exactly the same thing in a podcast ...

So yes, Go language is DONE, period.

> But why should they worry about ...

They don't, however people adopting Go and wanting generics shouldn't be tricked into thinking that Go's developers are even considering generics, because they are not. If they invest in Go, they should know that they aren't going to get generics at all. As for what the Go team thinks, it doesn't really matter; they are not betting their own startup on Go.

> they will start rolling out 2.0 and it will have new features

Where did you get that idea? It is impossible to add significant new features without breaking backward compatibility; do you really think Go's designers are ready to go that way? Of course not. But I'm curious what led you to think that. By all means, give me official sources like I just gave you.


> where did you get that idea?

The core Go developers have said that an eventual implementation of generics won't arrive until Go 2.0, where they will allow breaking backward compatibility. That said, an eventual Go 2.0 release is probably years away.


So yes, Go language is DONE, period.

They don't, however people adopting Go and wanting generics shouldn't be tricked into thinking that Go developers are even considering generics , because they are not.

Wrong:

Generics may well be added at some point. [...] This remains an open issue.

https://golang.org/doc/faq#generics


If you understand one or two things about type theory then you know that generics cannot be added to Go. Go's designers know it. They cannot add generics without breaking the language, and they will not break the language.

When you have at least 3 core designers of that language saying on record that the language won't get new features, then you know what is written in that FAQ ain't going to happen.

So let's not mislead people interested in Go into thinking that it will get generics, it will never have generics.


I think you need to justify that statement, what about type theory says that Go can't have generics?

As a follow-up, would the same logic have proved that Java 1.4 couldn't add generics?


Nothing in the future is set in stone. I'm curious why you're so sensitive about this issue.


I suspect (because I read minds) that if they _knew_ Go was going to add generics and such in a year or two, they'd be ready to dive in for a big project or two. There's no point investing the time in working up a significant Go project if you value features X & Y but don't believe the language is headed there.

In other words, the frustration is because Go is _close_ to a good language but not quite good enough if we know this is it. Or, maybe even worse, we don't know anything at all about its future direction.


Sorry, I think of the ecosystem around a language as part of the language. Tooling, frameworks, etc.


I have been using Elixir for a while now! I'll never go back to Go or Node.js by choice.


Why Elixir over native Erlang? Nobody including the original piece seems to address this. Is it just a subjectively nicer syntax?


The biggest obstacle to Erlang's adoption is that it does not have a low barrier to entry. The syntax is strange, and the way of structuring and building solutions is strange. It is incredibly powerful, but it requires a different way of thinking.

Elixir makes sense because it has a lower barrier to entry. This is a very important factor. It is one of the main reasons why node.js was so successful in reaching a large audience. It is why Go is more popular than Erlang. They are often used in the same domain and Erlang is arguably more powerful. People still choose Go, however.


I will agree that Elixir has better tooling around building and releasing code.

However, in terms of barrier to entry for developing in, I actually think Erlang's is lower. People tout Elixir for its syntax being similar to other languages, but that's a detriment, not a benefit. Erlang and Elixir both require you to think in a way most other languages don't. With Elixir, and the language looking familiar, you're tempted to try and shoehorn your existing knowledge, practices, etc., into an Elixir shaped package, and that leads to less idiomatic code, more surprises in how things actually work, and sometimes even more brittle applications than how they ~should~ be (though having a decent supervision tree means they'll probably still be better than what you'd have had in those other languages). Using Erlang, though, you relearn from scratch. It's a small language and the syntax isn't really a huge barrier; being forced to code in a language that shares so little with what you're experienced with means you're likely open to new paradigms, and will pick them up instead of trying to reuse your existing ones.

At least, that was my experience with Erlang, compared with Scala/Akka (which I tried first, and ended up with a lot of OO, non-functional, non-parallel code).


I had a similar experience with both Erlang and XSLT -- they don't look anything like a typical imperative language, so I wasn't tempted to think in imperative terms. I just had to do things "the Erlang way" and "the (crazy) XSLT way."


Strange for those that never did Prolog.

I was really into Prolog back in my university days, to the point of being into logic programming competitions between universities.

Got to understand Erlang on the first try, when reading about the language or watching web talks.

It pretty much depends on where one is coming from.


The vast majority of programmers never used or cared about Prolog.


They have missed out.

In terms of elegance, I've never written anything that surpasses Prolog.

For a defined subset of problems.


Most Portuguese universities had Prolog on their CS degrees in the 90's.

Also most companies HR departments will ignore CVs without an EE or CS degree from those same universities.


I studied Prolog too! Only for a semester, but I loved it. I don't remember much of it, but I loved it because it felt like a true "high level" programming language to me. Describe a problem to it and ask questions. Felt like magic.

Also, learning Erlang was much easier when you had some basic idea of Prolog. Clauses did not seem so weird, and once you understood how clauses work you'd basically got 80% of the language nailed.

What I don't really understand though is why erlang didn't go all the way. Why take only surface level features of prolog and ignore the really cool parts?


Erlang vaguely looks like Prolog, but it has semantically nothing in common with it. Even the syntax is quite different beyond base similarities to predicates.


I know, but if one has Prolog in his/her toolbox, Erlang becomes more approachable than knowing only Algol-style languages.


I think it isn't outlined very well in the article. For the most part, though, I'd say it's syntax, and I agree that syntax matters. Elixir is probably a lot easier to stomach than Erlang for many people who started learning in a similar-looking language. I think aesthetics matter quite a bit. If I find code strange to look at, I'm never fully comfortable with it (this fades over time, but I feel it pretty strongly whenever I dive into a new language). This might be silly, but from experience, students who are exposed to Java first (which is the most common case here) tend to struggle with JavaScript (pre-ES6) just because some patterns look "odd".

The Elixir devs are also fairly smart people and the community seems cool which shouldn't be underestimated.

That being said I have a Prolog background so stock Erlang looks fine :D


In addition to full access to the rich Erlang ecosystem, Elixir also comes with its own rich features, like macros, which shine through in the design of Phoenix, its major web framework.


Out of interest - why not?


1. Readability is probably the biggest reason. I don't have to worry about some base object changing the behavior I think is happening. There is little magic and it is easy to follow.

2. Let it fail: in Elixir and Erlang you write code assuming it is going to fail, and handle it when it does. In most other languages you have to scatter that handling code throughout your logic code; in Elixir and Erlang you do not (readability again).

3. OTP! There is a great distributed framework that Erlang and a bunch of programmers much smarter than me built over the last 20 years, and I just get to use it. You can also send functions and code over the wire to run, making possible some very interesting things that are just impossible in other languages.

These are just a few reasons. I mostly build web applications for business apps, so this is a really good fit. I don't think it is the answer to everything, but it has been expanding fast.

The one thing I wish it had is dependencies that are yours only. I think this is one of the big reasons Node.js got successful so fast: in Node you can run different versions of a library within the same app, depending on what your dependencies need. Elixir is more like Ruby in that you can only have one (well, two, for hot swapping without downtime, but that is it). I do think this is one of the limits of the Erlang VM.


> Also you can send functions and code over the wire as well to run. Making some very interesting things that are just impossible to do in other languages.

Well, you could do this in any language that has 'eval' really, which includes most dynamic languages. Unless you're referring to something else?


Erlang provides location transparency for processes, meaning that you can send a message to a Pid (process identifier) as if it were running locally, when in fact it's running on another VM on another machine.


> Unless you're referring to something else?

jtwebman means that Elixir uses the actor model to enable concurrency across processes (internal and external).

It's very very easy to scale horizontally using BEAM.


I can't speak for Go, but Elixir has a much more sane concurrency model when writing web services (or otherwise) than node. The ability to block a process means you don't need to dork around with callbacks or promises or anything, your code looks like it executes in serial. No shared memory either, so one request can't break another.

The set of abstractions OTP provides map onto web services perfectly. Once you write stuff in erlang/elixir going back to another paradigm is really quite hard.


> However, as we more and more turn servers into REST APIs that are supposed to deliver JSON to numerous clients, for many it is time to find an alternative.

Perhaps the alternative isn't more REST APIs that deliver JSON; given its inherent problems, it's more likely we'll be using a challenger to that entire model, like GraphQL.

https://facebook.github.io/react/blog/2015/05/01/graphql-int...


I'm willing to bet that once the spec is out there will be an Elixir implementation of GraphQL. I am also willing to bet Elixir would still be faster than Node.js, Ruby, or Python for that.


Code samples side-by-side comparison: http://rosetta.alhur.es/compare/go/elixir/#


Just keep using Ruby or, if you must, Node. Rails 5 is going to fit those use cases just fine. Web programming is entirely too complicated to roll your own stack unless you absolutely need to.


>Finally, JavaScript handles the modern web well, but it isn't perfect. If a request does something that is especially CPU-heavy, every single user of your application will be waiting.

One thing you might want to take into account is that ES2015 generators allow you to write long, blocking functions as generators that partially compute the result before deferring further computation to somewhere further along in the message queue. This allows you to spread out blocking computations so that you can still serve requests.


Erlang is a well-researched, pragmatic language (read the Armstrong thesis at last); this is why it is "great". A user-level runtime trying to do the kernel's job is a problem, but by relying on message passing and being mostly functional it at least has no threading/locking problems, so it scales. Nothing much to talk about.

Go is also a well-researched language, with an emphasis on keeping it simple, being good enough, and doing things the right way (UTF-8 vs. other encodings): a philosophy from Plan 9. Go has in it a lot of fine ideas, attention to detail and good-enough minimalism, the basis for success. It is also pragmatic; that is why it is imperative and "simply" statically typed.

Criticism about the lack of generics is not essential, especially considering that generics in a statically typed language are an awkward mess. The complexity of its user-space runtime is a problem, of course, but runtimes are hard, especially when the language is not mostly functional.

Go is in some sense "back to the basics/essentials" approach, not just in programming but also in running the code, and even this is enough to be successful.

BTW, its syntactic clumsiness and shortcomings (the ones hipsters are blogging about) come from being statically typed, just admit it. On the other side, being C-like and already having C-to-Go translators opens up the road to static analysis and other tools.

Go is the product of old-school (Bell Labs) minds, like Smalltalk or the Lisps were, not of a bunch of punks.)


> Criticism about lack of generics or is not essential, especially considering that generics in a static typed language is an awkward mess.

This is very wrong. Sure, C++ templates are a mess, but generics in a language with a well-designed type system, like Haskell or Rust, are incredibly useful and very elegant, especially in combination with type classes/traits.


Haskell is rather an exception. Looking at generics in Java is more telling.)


Java is a much bigger exception than Haskell: a heavily runtime-checked language with significant established codebases, into which generics were retrofitted with the foremost goal of minimising their backwards-compatibility impact. Witness C#, a very similar language whose team decided their userbase was still small enough that they could afford to break BC and introduce separate reified generic collections[0].

And just about every other language (Haskell very much included) was designed with generics from the start.

Haskell is not an exception, Java is.

[0] the C# team knew all along the language would have generics, but generics were not ready for 1.0 so they were pushed back to 2.0


A few corrections: The foremost goal when retrofitting generics into Java may have been backwards compatibility, but the requirement that drove the designers to type erasure and the clunky implementation was migration compatibility.

C# had generics retrofitted and kept perfect backwards compatibility. You can still run C# 1.0 programs on a modern 4.0 runtime. That is backwards compatibility, and it would have been quite possible to fit Java generics the same way.

Migration compatibility is a (in hindsight) rather absurd requirement where old (non generic-aware) code should be able to call new (generic-aware) code where you as the integrator have control over the source code of neither the "old" nor the "new" code.

In practice this would only happen when you use an old library that has taken direct dependencies on external (to the library) non-generic classes (i.e. not interfaces), and where you will not accept that the old code keeps calling the old classes.

C# retained the old collection classes, implemented new generics-aware collection classes that still supported the old interfaces (with proper type checking and -casting). This meant that old code could still be passed references to new generic collections and work as expected.

The only constraint in C# is that if "old" code has taken a strong dependency on an old non-generic class, you cannot pass a new generic class to that code.


A common theme is that only languages that had generics retrofitted into them have awkward implementations (because your design space is now constrained), while those that were designed to have them ab initio typically work fine.

I don't really want to belabor the point of Go and generics; it's one of the many simplicity vs. expressiveness design choices that language designers have to make. But the claim that parametric polymorphism in a statically typed language is (presumably inherently) an awkward mess is a peculiar one, given that there are so many successful examples.


> Go is also well-researched language

Concurrency in Go certainly is; the rest is definitely not.

> its syntactic clumsiness and shortcomings (hipsters are blogging about) came from being statically typed, just admit it

That's the kind of dismissive and arrogant attitude I often hear from gophers. Doesn't make me want to be part of that community.

Have a look at the Cyclone language. It has generics, tagged unions and pattern matching AND it is statically typed.

https://cyclone.thelanguage.org/wiki/User%20Manual/

Saying that statically typed languages cannot be expressive is pure FUD. Go designers just didn't strike the right balance between features and minimalism, because they didn't care. Without its concurrency model, Go is not much.


> Saying that statically typed languages cannot be expressive is pure FUD.

Could you, please, show us an example from the rosettacode.org website? Generic Quicksort, perhaps?


Take a look at OCaml, or any ML variant language (aside from Rust, which is more C++ than ML in my opinion anyway) for that matter, for a static typed language with a good implementation of generic types.


Would you mind elaborating why you think that Rust doesn't have a good implementation of generic types?


The signatures can get really unwieldy compared to generics in other languages. As I understand it, the Rust team is working on this.


I think you're comparing Rust to a language with ad-hoc polymorphism like C++ (where you pay for the somewhat simpler signatures with confusing template instantiation errors), and I'm very glad we didn't follow the C++ route. Rust's generics are very similar to those of Haskell.

The only major generics-related issue that is getting some significant thought is the ability to have the compiler automatically derive the return type of top-level functions, which is not something that is common in other statically typed languages (only C++ and those with whole-program type inference, which has its own set of drawbacks).


I took Dan Grossman's PL course, so I appreciate how good Standard ML is.)


No idea what you're looking for, but just for the record:

  import List(partition)

  qs [] = []
  qs (pivot:rest) = qs smaller ++ pivot : qs larger
      where (smaller,larger) = partition (< pivot) rest
Haskell 98, the (polymorphic) type is "Ord a => [a] -> [a]"


Haskell is an exception. Haskell is two distinct languages in one - one for defining types, and one for expressions.)


Joe Armstrong's thesis, for the interested: http://www.erlang.org/download/armstrong_thesis_2003.pdf


Go's error handling leaves something to be desired. I find myself reinventing exceptions a lot by just passing errors up through generic "error" return types.

It feels like it would be a lot cleaner to add syntax support for exceptions and call it a day.


There is support for fully "exceptional" behavior. It's called "panic".

The Go language designers explicitly didn't include exceptions because "coupling exceptions to a control structure, as in the try-catch-finally idiom, results in convoluted code. It also tends to encourage programmers to label too many ordinary errors, such as failing to open a file, as exceptional" [1].

I agree with them. While handling errors everywhere is a little painful to write, it enforces better practices by making you acknowledge that things could fail and ignore, handle, or pass the buck.

[1] https://golang.org/doc/faq
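To make that division of labor concrete: ordinary failures travel as `error` values, while panic/recover is reserved for truly exceptional conditions. A minimal sketch (the `mustParse`/`safeParse` helpers are hypothetical, not stdlib):

```go
package main

import "fmt"

// mustParse panics on malformed input -- the "exceptional" path.
func mustParse(s string) int {
	n := 0
	for _, c := range s {
		if c < '0' || c > '9' {
			panic(fmt.Sprintf("not a number: %q", s))
		}
		n = n*10 + int(c-'0')
	}
	return n
}

// safeParse converts the panic back into an ordinary error value at
// the boundary, so callers handle it the usual Go way.
func safeParse(s string) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("%v", r)
		}
	}()
	return mustParse(s), nil
}

func main() {
	n, err := safeParse("42")
	fmt.Println(n, err) // 42 <nil>
	_, err = safeParse("4x")
	fmt.Println(err)
}
```

The idiom mirrors the FAQ's advice: panics stay internal to a package, and the exported API still speaks in `error` values.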


> I agree with them. While handling errors everywhere is a little painful to write, it enforces better practices by making you acknowledge that things could fail and ignore, handle, or pass the buck.

There are better ways of accomplishing the same thing though, one way is the Either monad in Haskell. Here's an example I post sometimes comparing error handling in Go to Haskell's either monad:

    package main

    import (
        "fmt"
        "net/http"
        "os"
    )

    func failureExample() *http.Response {
        // 1st get: do nothing if success, else print the error and exit
        response, err := http.Get("http://httpbin.org/status/200")
        if err != nil {
            fmt.Printf("%s", err)
            os.Exit(1)
        } else {
            defer response.Body.Close()
        }

        // 2nd get: do nothing if success, else print the error and exit
        response2, err := http.Get("http://httpbin.org/status/200")
        if err != nil {
            fmt.Printf("%s", err)
            os.Exit(1)
        } else {
            defer response2.Body.Close()
        }

        // 3rd get: do nothing if success, else print the error and exit
        response3, err := http.Get("http://httpbin.org/status/200")
        if err != nil {
            fmt.Printf("%s", err)
            os.Exit(1)
        } else {
            defer response3.Body.Close()
        }

        // 4th get: return the response if success, else print the error and exit
        response4, err := http.Get("http://httpbin.org/status/404")
        if err != nil {
            fmt.Printf("%s", err)
            os.Exit(1)
        } else {
            defer response4.Body.Close()
        }

        return response4
    }

    func main() {
        fmt.Println("A failure.")
        failure := failureExample()
        fmt.Println(failure)
    }
The equivalent Haskell code:

    failureExample :: IO (Either SomeException (Response LBS.ByteString))
    failureExample = try $ do
      get "http://www.httpbin.org/status/200"
      get "http://www.httpbin.org/status/200"
      get "http://www.httpbin.org/status/200"
      get "http://www.httpbin.org/status/404"
    
    main = failureExample >>= \case
      Right r -> putStrLn $ "The successful pages status was (spoiler: it's 200!): " ++ show (r ^. responseStatus)
      Left e -> putStrLn ("error: " ++ show e)


I took a survey of my Go code base once. In the mature bits of code, approximately 1/3rd of the places where an error was received, something other than simply returning it was done. And I don't really go in for crazy wrapping schemes, either... it means something was logged, or retried, or modified, or somehow reacted to.

What often starts out as simply a return, by the time something gets to production quality, has often changed.

If you're using Go for its core use case, network servers, the error handling turns out to be very solid, precisely because almost every other error handling paradigm strongly encourages you to just lump all the errors together and not think about them individually. (Yes, that includes Option<>.) I can see where that might be very annoying on a desktop GUI app or something, but if you are not thinking about every single error in your at-scale network server, you're actually doing it wrong.
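For a concrete picture of "something other than simply returning it", here is a hedged sketch of one common pattern: a hypothetical flaky call that gets retried with backoff, and only propagated as a wrapped error after the retries are exhausted (`fetch`/`fetchWithRetry` are illustrative names, not real APIs):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errTemporary = errors.New("temporary failure")

// fetch stands in for a flaky network call: it fails on the first
// two attempts and succeeds on the third.
func fetch(attempt int) (string, error) {
	if attempt < 2 {
		return "", errTemporary
	}
	return "payload", nil
}

// fetchWithRetry reacts to the error (retry with backoff) instead of
// blindly returning it, then annotates it if it gives up.
func fetchWithRetry(max int) (string, error) {
	var lastErr error
	for i := 0; i < max; i++ {
		s, err := fetch(i)
		if err == nil {
			return s, nil
		}
		lastErr = err
		time.Sleep(time.Duration(i) * time.Millisecond) // linear backoff
	}
	return "", fmt.Errorf("giving up after %d attempts: %v", max, lastErr)
}

func main() {
	s, err := fetchWithRetry(5)
	fmt.Println(s, err)
}
```

The point is that the `if err != nil` site is exactly where the retry, log, or fallback decision gets made per call, which is what tends to happen as code matures.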


> (Yes, that includes Option<>.)

Why? Seems to me that it's precisely equivalent to Golang's error handling story (assuming you mean something like Result). In fact, it's a lot easier to handle errors individually that way, both because it tends to make it harder to use a return value without checking for errors and because its generic nature means that you can handle different errors differently without having to deal with interfaces.

If you mean that it's more typing to write "return err" and it forces you to think about it more, I don't really buy that. "return err" is 10 characters; "map" (e.g. in Haskell) is 3 and "try!()" is 6. I really doubt that the difference between 3 and 10 characters has any practical outcome. For Golang programmers, "return err" is muscle memory.


I should have said the monadic form of Option<>. When you use it monadically, the result is nearly equivalent to exception handling; you call a function and the default behavior is to bundle up the error and just throw it up. It is true that if you are manually unwrapping it every single time, it's equivalent to checking every time.

"For Golang programmers, "return err" is muscle memory."

First, as I said, no, I actually think about it every time. And second, I bet you end up with "muscle memory" default handling under any scheme (for instance, exceptions: don't catch them or rethrow them)... in the end, you can bring the horse to water but you can't make it drink. You can only feel good that at least you brought it and you did your part. Option<>, even manually unwrapped, does not force the programmer to do something sensible with the error any more than any thing else does, or indeed, can.


> A user-level runtime, which is trying to do the kernel's job, is a problem, but being mostly functional it at least has no threading/locking problems - so it scales. Nothing much to talk about.

Spoken as someone who has never really used it. Dismissing it as nothing much to talk about is ridiculous.


Erlang is "great". There is nothing much to talk about.


I'm sorry, but if Go were the product of Lisp-minded people it would be a simple language with the ability to extend the language on the fly. The reality is that in Go there is no way to introduce even a nice contract on top of error returns and if statements without an external preprocessor.


Yeah, Bell Labs has never been a Lisp shop, but they developed their own philosophy and made it work with the Plan 9 project.


I feel that this discussion would be incomplete without mentioning D (http://dlang.org) and the vibe.d framework (http://vibed.org) as alternatives to the technology platforms that the author mentioned.


What about deployment? How does it compare to "scp and I'm done" like in Go?

Also, what about memory usage? My latest Go service was a single 4 MB binary with one JSON config file. It consumed less than 4 MB of RAM, which let me deploy it on a basic 500 MB instance with plenty of memory to spare.


I haven't used Elixir yet, but deployment in Erlang isn't quite that simple, though it's not bad with some of the recent tools like relx. Just musing here, but if your deploy process is "scp and I'm done" then Elixir/Erlang are likely overkill for the problem at hand (Python or Ruby would probably work fine?). There is some base amount of complexity involved in working with BEAM, but the features you gain in return are worth the trouble.


"scp and i'm done" is completely possible with Elixir/Erlang. It is how we deploy lots of erlang pages. Copy beam files, have erlang check periodically if any beam files have changed and reload them.

Using erlang releases is in no way mandatory.


Sure, I suppose that's possible, but it just seems a little gross. At least with 'scp binary' your first and all subsequent deploys use the same process, so tools like Ansible can help you out. I guess that's what I was referring to -- repeatable deploys; pushing a new BEAM release package is a more involved process than shipping a single binary.


I totally agree, being able to just ship binaries around is fantastic. No complex process necessary, things Just Work™.

However, if you're ok with a little extra process, Elixir allows for hot code reloads. To be fair, most use cases probably don't need it, but it's a pretty great option to have in your tool belt.


No more involved than deploying a ruby web app.

At least the erlang/elixir has built-in support for zero-downtime-deploys. You don't have to rely on an app server like Unicorn and USR2 signals.


Models get ugly with tags? You don't have to have them unless the names are different.

This post is all about familiarity and the degree to which you are interested in using a language (just like every other post of its kind).


Web workers are real concurrency and afaik, there is no GIL that spans across the main thread and the worker threads in any of the browser implementations of web workers.


The limitation with Web Workers is that the main worker thread is the only one that can interact with the DOM.


Why would you want multiple threads touching the UI? Android processes for instance only have one UI thread.


Updating is different than reading. Having concurrent reads would be a good thing.


I do not see a big benefit from that. Reading from the DOM will already be fast and can be handled in one thread. And you won't have concurrent reads if another thread is writing to the DOM.

I think the designers of web workers made this decision thoughtfully. Other environments like iOS and Android only have one UI thread as well.


Important to note is that Elixir was created by José Valim from Plataformatec, an important person and company in the Ruby community.


Surprisingly favorable benchmark: https://github.com/mroth/phoenix-showdown

and related HN discussion: https://news.ycombinator.com/item?id=8672234


It gets even more interesting on bigger hardware. A user ran the benchmarks on a 10 core Xeon, Phoenix got 180,000 req/s and peaked at 75% CPU, so IO was still the bottleneck. We were also able to have ~600µs average latency with that load.

https://gist.github.com/omnibs/e5e72b31e6bd25caf39a



Elixir and Phoenix are great for web-apps, REST API's and the like. Phoenix channels make real-time push super simple and scalable.


> We've known for years that Ruby and Python are slow, but we were willing to put up with that. However, as we more and more turn servers into REST APIs that are supposed to deliver JSON to numerous clients, for many it is time to find an alternative.

> You see a lot of competitors trying to fill this void. Some people are crazy enough to use Rust or Nim or Haskell for this work, and you see some interest in JVM based languages like Scala or Clojure (because the JVM actually handles threading exceptionally well), but by and far the languages you both hear discussed and derided the most are JavaScript via node and Go.

Meanwhile the Java programmers just keep on delivering with Jersey, Spring, Restlet etc. etc. so forth. Less blogging, more doing.


No time for blogging, too much boilerplate to write!


Come now, we have IDEs for that. But seriously, Java users are not stuck in the 1990s, even if your gibes at its expense are, so perhaps you should revisit it. :)


Not just IDE's but also scripting languages like Groovy (the scripting version), its predecessor Beanshell, other less often used ones like Nashorn and Xtend, and even Clojure which doubles as a systems language.

NB: By "Groovy (the scripting version)" I mean the component of Groovy which was Beanshell with closures and collections syntax added, but before the meta-object protocol was. So Groovy as used by Gradle rather than Groovy as used by Grails.


The problem is that we don't live in the Valley and are picky about losing the features that industry got to take from language research.



