
Erlang (and by extension Elixir) are good languages for web development, I agree. Frameworks like Nitrogen and N2O even let you do frontend work directly from base Erlang, using constructs like records and functions to represent templates and JS callbacks, respectively.

However, they will not replace Go for a rather simple reason. Erlang is an application runtime first and a language second. It's not meant to play with the native OS all that well, instead relying on its own scheduling, semantics, process constructs and the OTP framework. It exports a minimum set of OS libs for things like file system interaction, but you wouldn't write a Unix subsystem daemon in it. Though, you certainly can - if you want to hack with NIFs, port drivers and C nodes. But then you put the global VM state at risk if some native caller blows up.

For this reason, Go will remain a dominant choice for infrastructure and systems programmers. It definitely is replacing C/C++ for lots of things above kernel space, contrary to your statement.

I'd honestly dispute the characterization that Elixir is a small language. Erlang is easier to fit into your head, not least because of how essential pattern matching on tuples and lists is. It's a consistent and homoiconic design, warty constructs like records aside.

Finally, Erlang/Elixir aren't Unix-y at all. Don't get me wrong, BEAM is a wonderful piece of engineering. But it's very well known for being opinionated and not interacting with the host environment well. This reflects its embedded heritage. For example, the Erlang VM has its own global signal handlers, which means processes themselves can't handle signals locally unless they talk to an external port driver.




Golang implements a custom scheduler in userspace too, and this is why its FFI is not as fast or as well integrated with the OS as the calling conventions of C or C++ are. Heck, Golang's FFI is even less straightforward than that of Java, since the JVM is typically 1:1, not M:N.

Once you've gone M:N you're effectively in your own little world. There's a reason Golang's runtime is compiled with the Golang toolchain, and why it invokes syscalls directly as opposed to going through libc.


I did some cgo (Go's FFI) work. Go's FFI is vastly superior to JNI (which is in a league of its own when it comes to awfulness).

No FFI is really great. Go's isn't as good as C# or Luajit or Swift but better than Java or Python or Ruby or Lua FFI (where you have to write a wrapper for each function you want to expose to the language).

C and C++ use the OS calling conventions, so I'm not sure what that was supposed to mean.

It's true that Go is M:N and that this has a bearing on bridging some C code (in rare cases where C code only works when called on the main thread).

However, gccgo has a Go runtime written in C and compiled with gcc, so Go's runtime isn't tied to one toolchain.

I don't know why Go's designers chose to (mostly) avoid libc, but it certainly is great for portability and ease of cross-compilation. If you take a dependency on libc (which is different between Linux/Mac/Windows), you pretty much throw away the ability to cross-compile, and given Go's static compilation model, you would have to bundle a C compiler (unlike runtime-based languages like Java, Python or Ruby, where only the runtime has to be compiled on the target OS, so that complexity is contained for programs written in the language).

I don't see why you ding Go in particular for being "its own little world". Compared to C - of course. Compared to Java or Python or Elixir? Much less so.


cgo-style stack switching (which I assume BEAM also uses) adds a lot of overhead at runtime, which Java and Python don't need since they're 1:1.

The speed of the FFI really affects how a language ecosystem uses it; if it's a lot slower to call out to external libraries than to call code written in the same language, then there's a large incentive to rewrite all dependencies in the language as opposed to using what's already there. Sun's libraries are a bit of a special case in that Sun really tried to rewrite everything for strategic/political reasons, but look at Android; the heavy lifting in the Android stack is done by Skia, OpenGL, and Blink/WebKit (to name a few), a strategy which works because JNI is relatively fast. Python also heavily favors using C libraries where appropriate, again because Python C bindings are fast.

I don't understand the issue about cross-compilation. You don't need a cross-compiler to statically link against native libraries; you just need a binary library to link to and a cross-linker (which can be language-independent). And, of course, if you dynamically link, you don't even need that much.

I'm not really trying to ding Golang, in any case. M:N scheduling has benefits as well as drawbacks. FFI is one of the downsides. There are upsides, such as fast thread spawning. It's a tradeoff.


> better than Python or Ruby

Mmm? https://cffi.readthedocs.org/en/latest/overview.html (basically a port of luajit's ffi) http://ruby-doc.org/stdlib-2.0.0/libdoc/fiddle/rdoc/Fiddle.h... (ruby DSL to libffi)


In Java's case it was made awful on purpose, to discourage developers from writing native extensions.

The new FFI is on its way, but it might arrive only in Java 10.


It's off topic, but I think it was a great idea to have terrible C interop. It really forced people to write Java all the way. This meant the JVMs could really evolve, unlike the Python and Ruby ones.

I am not sure the new Java FFI is such a great idea in the long run. I would rather they spent more time focusing on object layout and GPU compute ;)


> It really forced people to write java all the way. This meant the JVMs could really evolve unlike the Python and Ruby ones.

Java's FFI is horrible; Python and Ruby are mediocre. LuaJIT2's is fantastic. Not so surprisingly, Python ate Java's lunch in places like scientific computing, where it is much more beneficial to build on existing work.

Python is hard to dethrone from that spot right now because of momentum, mostly - but if the competition was started again, I'm sure LuaJIT2 would take the crown (Torch7 is based on it, but that's the only one I know).

I think my bottom line is: if you want your VM environment to be self-sufficient, have a horrible FFI like Java's. If you want your VM environment to thrive with existing codebases, you have to have at least a mediocre one like Python's. But you can have the best of all worlds like LuaJIT[2] - and that's who Oracle should be copying.


I think Python will lose momentum as soon as Julia gets more adoption, and likewise with languages like Go and ML derivatives. Unless PyPy gets more widespread, that is.

Java's upcoming FFI is based on JNR, an evolution of JNA, which is used by JRuby for native FFI.

Nevertheless, everyone seems to have forgotten about CNI, implemented in GCJ, which mapped C++ classes directly to Java ones.


Sorry, what is M:N?


Any situation where you have M userspace jobs running on N system threads, i.e. the number of tasks is different from the number of system threads.

Normally this occurs because you're running a large number of "green" threads on your own scheduler, which schedules onto a thread pool underneath. This is good if all your threads are small/tiny, since userspace thread creation is cheaper than creating an OS thread. But if your jobs are long-lived, then your userspace scheduler is really just adding scheduling overhead on top of the overhead the OS already has for thread scheduling, and you would have been better off with OS threads. If your M:N threading requires separate stack space for each job, there can be a sizeable memory overhead (this is why Rust abandoned M:N threading).


Can you come up with some examples of when this would begin to be noticeable to an end-user?


If you're crossing the FFI boundary a lot, any overhead adds up quick. For example, drawing a bunch of small objects using Skia, performing lots of OpenGL draw calls, allocating LLVM IR nodes, or calling a C memory allocator…


One of the nice things about M:N is that it decouples concurrency from parallelism. That is, your application can spawn as many inexpensive green threads as the task requires without worrying about optimizing for core count or capping threads to avoid overhead, etc. With Go 1.5, GOMAXPROCS (the number of OS threads executing Go code) defaults to the system's CPU core count.


It's noticeable to the end-user only in its negative performance implications in certain situations, making things slower than they would otherwise be on the same hardware. It's a low-level construct; it is not directly noticeable to the user either way. The negative performance implications show up largely under heavy load. The post you replied to gave some more specific situations.


"However, they will not replace Go for a rather simple reason. Erlang is an application runtime first and a language second. It's not meant to play with the native OS all that well, instead relying on its own scheduling, semantics, process constructs and the OTP framework. It exports a minimum set of OS libs for things like file system interaction, but you wouldn't write a Unix subsystem daemon in it. Though, you certainly can - if you want to hack with NIFs, port drivers and C nodes. But then you put the global VM state at risk if some native caller blows up."

I have to disagree here. One of the best applications of modern times ever written in Erlang is Riak. It has lots of system services, it does a lot of IO (it is a key-value store after all) and it works beautifully. It uses LevelDB under the hood for storing the data on disk, and LevelDB is written in C++. Erlang makes it possible to have a distributed service using Erlang for the distributed parts and C++ for the disk IO, combining the best of both worlds.

You are right about the potential NIF problems, but one shortcoming that can possibly be worked around does not invalidate the rest of a system that is cutting edge.

How can you be sure that Erlang is great? Just look at projects like Akka and many other Scala projects; they pick up features from Erlang because it simply works and is easy to understand and reason about.

The notion of Erlang replacing Go is funny anyway, because Erlang has existed for a much longer time than Go, and even the web stack in Erlang is pretty cool. Webmachine is one of the most stable tools out there, providing absolutely amazing latency and a state machine to handle HTTP requests.

Go as a language is new and in certain ways immature (debugging and generics, for example). It is an OK environment to develop in, but I would use Erlang/Elixir much more willingly for HTTP services (saying this after having tried to use Go for a while; I just gave up, it was too much hassle for not enough gain).


For the record, I don't need any convincing of Erlang's greatness. I know that and I'm a big fan of programming in it.

Nor did I ever deny that Erlang excels at I/O; that much is obvious. I also made clear the various ways of interfacing with native code.

But I wasn't talking about more abstract distributed services, rather stuff that brutally exploits POSIX and kernel-specific APIs. Using Erlang would probably be inconvenient for most people, so they'd use a language that seamlessly interacts with its host and has a small runtime, understandably. Not that you couldn't reap benefits from supervision, multi-node distribution, hot reloading and other things if you made the effort to do some elaborate hacking with ports and NIFs.

Erlang/OTP blows Scala/Akka out of the water for me, personally.


Heh, it was coming through at first sight; you know it very well.

" I know that and I'm a big fan of programming in it." Me too btw. :)

I see your point; yes, that is absolutely true. I think one reason why Erlang is great is that they realized that exposing the POSIX threading APIs to software developers is harmful. :) Why would you ever want to know how many cores you are running on? If you know it, you have to keep track of it and adjust it on different systems, plus a bunch of other issues. On top of that, POSIX threads are so expensive that it did not make any sense to implement a system that lets the developer create threads directly. I think Joe & team made an excellent decision there. On the other hand, you can't really access these low-level primitives, so this is, I guess, what you are talking about. For the low-level stuff you need a systems language (C, Go, whatever).

Oh yeah, JVM-based actors are kind of interesting. I am not sure how they can meet any latency SLA. The GC is potentially skewing the response time, right? In Erlang/OTP that does not happen, because of the GC implementation, as you know.


I've used the JVM and actors on top of it for real-time bidding, which is normally a soft real-time job: you usually have contracts in which you have to promise to reply to requests in 100 or 200 ms maximum, including the network roundtrip, otherwise you'll be taken off the grid. Our system was replying to over 33,000 such requests per second on only a couple of instances hosted on AWS.

The garbage collector was a problem and, as usual, it required us to profile and pay attention to memory access patterns, plus a lot of tuning of that garbage collector. But truth be told, we were also much sloppier than what people do in the financial industry. For example, we did use Scala, functional programming and immutable data structures, which can bring significant overhead, since the profile of persistent collections does not match the assumptions of current generational garbage collectors. But on the JVM you also have options not available on other platforms. The G1 GC is pretty advanced, and at scale the commercial Azul Zing promises a pauseless GC and has been used successfully in the financial industry.

I also used Erlang and in my experience the JVM can achieve much lower latency and better throughput. Where Erlang systems tend to shine is in being more resilient and adaptable.


Everything can be done: you can invest a lot of time to make your code run better on the JVM, tune the GC settings, etc. If you make the same investment in the Erlang ecosystem (as, for example, the WhatsApp guys did), I think you could match the JVM's performance in terms of connections handled while staying responsive. If your code does numeric calculation most of the time, you will probably have a better time on the JVM, though.

http://www.erlang-factory.com/upload/presentations/558/efsf2...


> It definitely is replacing C/C++ for lots of things above kernel space, contrary to your statement.

I don't want to live in this world.


> However, they will not replace Go for a rather simple reason. Erlang is an application runtime first and a language second. It's not meant to play with the native OS all that well, instead relying on its own scheduling, semantics, process constructs and the OTP framework.

I think you could argue that both languages are equally a pain in the ass to interop with C. Erlang/Elixir is slightly more complex because you do need to keep the ERTS schedulers in mind, but when's the last time you saw someone give cgo a compliment?

> It exports a minimum set of OS libs for things like file system interaction, but you wouldn't write a Unix subsystem daemon in it. Though, you certainly can - if you want to hack with NIFs, port drivers and C nodes. But then you put the global VM state at risk if some native caller blows up.

Usually ports are the first thing to reach for, precisely so you don't have to write a NIF in the first place.

> Finally, Erlang/Elixir aren't Unix-y at all.

I'm guessing what OP means by "Unix-y" is the metaphorical definition of Unix-y as in the oft-butchered Unix philosophy of small composable programs that operate on streams of data rather than playing by all of Unix's rules. Elixir very much satisfies this definition with its emphasis on functional programming and pipe operator.


Except that Go doesn't really need interop with C for many systems programming tasks, because its stdlib supports the full POSIX and associated kernel-specific interfaces, along with other more generic interfaces, without depending on a libc at that.

Then there's the fact that it's practically designed to be used on top of a Unix userland, much like C. Contrast that with Erlang, where you live inside BEAM.

FFIs in general are often bad, though cgo from my minimal interaction with it seemed tolerable by most standards.

EDIT: Don't know how FP and the pipe operator alone make things Unix-y. Erlang/Elixir definitely play in their own league and I don't recall ESR's rules off the top of my head to make a detailed comparison right now.


> EDIT: Don't know how FP and the pipe operator alone make things Unix-y. Erlang/Elixir definitely play in their own league and I don't recall ESR's rules off the top of my head to make a detailed comparison right now.

Forgive my constant stream of edits. It started off as a short reply but I kept coming back feeling "It's a programming language discussion, ugh. I really should elaborate."

Lots of people contributed to the Unix philosophy, but ESR's contributions alone are 17 rules and people only tend to talk about 1-2 of them so that's why I referred to them as "oft-butchered."

I would say FP satisfies the Unix streams ideal due to how you transform data through functions and pass this data to other functions. Elixir's pipe macro just makes this read more naturally. The modularity comes from the Erlang concept of "applications" and Elixir takes this further with umbrella applications.

I've written quite a bit of Elixir and I can't say that I've ever really needed to interop with the kernel or POSIX APIs at all. One of the reasons I use Elixir is because BEAM handles all of the crap I used to write in other languages (Async I/O APIs) to make them "fast." Heck, a lot of people even think Erlang got microservices right nearly 30 years ago!

I did write some Go long before I ever wrote any Elixir, so my experience is similar to OP's. Go has a great niche in infrastructure tooling due to compiling to a single binary, having a nice stdlib for tedious server-related things (TLS, crypto, etc.) and supporting cross-compilation. That said, I'll never write servers or business application logic in Go again. Elixir is just the better tool for the job there.


> I would say FP satisfies the Unix streams ideal due to how you transform data through functions and pass this data to other functions.

And he's saying they're not Unix streams and aren't playing in Unixland; I think you're completely missing his point.


That distinction is overly pedantic, especially in the context of the Unix philosophy which was supposed to be applied to programming in general.


It's not pedantic, as it's the point the OP is trying to make; not Unix-y means it's not playing well with the environment. He's not talking about philosophy.


Philosophy, not implementation.


Which is the point: when he says it's not Unix-y, he means it doesn't play well with the environment. He's not talking about philosophy, he's talking about the implementation, so saying it's philosophically the same is missing the point.


My point is that the original article was saying it's "Unix-y" in philosophy. So the commenter you're discussing is the one who is missing the point, in my opinion. It's a moot point anyway; we're being needlessly pedantic here.


> I think you could argue that both languages are equally a pain in the ass to interop with C in.

Could you elaborate? I have used cgo a lot and it is very nice compared to other FFIs I have used (Java, Haskell, Python, Ruby).

Things will get a bit more ugly with Go >= 1.5 and a concurrent copying garbage collector. Since e.g. arrays backing Go slices may be moved, you can no longer pass 'blocks' of Go memory to C-land by taking the address of the first element.

Edit: or do you mean using a Go package from C?


Erlang is not homoiconic. Did you mean Elixir? Neither are anyway.


Homoiconicity extends well beyond the obvious AST primacy of Lisp. Tcl and Io are homoiconic languages too, for example. The reason I used that word is because the Erlang language and libraries exploit a few basic data types (tuples and lists being composed into proplists and iolists) very well, to the point that a lot of practical Erlang code is simply shuffling data by pattern matching on those types. It's not a Lisp-level code-data correspondence, but it's in a similar league.


It really isn't homoiconicity if you can't compile an AST with it.

What you're describing are just simple data structures. Erlang has no user-definable data structures. I don't recall anything I've read explicitly stating this, but I've believed for a while that this was the case because it made moving terms across the network simpler. Thus, libraries aren't "exploiting" a few basic data types... they're stuck with a few basic data types. There's no alternative.

That really has nothing to do with 'homoiconicity'. Erlang code is not a data structure, and it is not spelled in Erlang terms. And that would generally not be a good thing! Erlang's already plenty verbose without also having to be expressed as legal Erlang data terms.

(The generally loosey-goosey typing that results is one of the reasons I don't like Erlang and am moving away from it. But YMMV.)


You mean a dominant choice for those that don't know better, I guess.

The only place we will get to use it is if any of our customers ever makes a request for Go knowledge.

Unless they are using Docker, knowing their standard languages I doubt they ever will do it.

I don't see any of their Akka, Fork/Join, TPL, Agents, core.async devs caring about Go.



