Why Erlang? (fredrikholmqvist.com)
341 points by todsacerdoti on Aug 31, 2021 | 166 comments



For me, Erlang was the ladder to functional programming. Once I'd mastered Erlang well enough to write gossiperl (http://gossiperl.com, not maintained in a long time), all immutable programming became easy. Maps, flat maps, recursion, no side effects, all of it became clear.

After Erlang, I’ve enjoyed Scala a lot. Especially Akka.

I do tons of go now and when I need actors in go (not very often because channels are rather straightforward), I use proto.actor: https://github.com/AsynkronIT/protoactor-go.

Erlang is awesome. I can definitely recommend "Designing for Scalability with Erlang/OTP": https://www.oreilly.com/library/view/designing-for-scalabili.... This is the book which finally made me "get it".


> all immutable programming became easy

Switching from the algorithmic style of logic to the functional style can be slightly tricky, but as for immutability itself - since the introduction of for-each loops, which freed us from incrementing counters, I could never understand why immutability is not the default in all the popular languages. Programming algorithmically, I rarely need to change a variable even in C# or Python. The fact that they don't have syntax for defining single-assignment values (like val in Scala) has annoyed me since I learnt Scala (which is extremely easy to start with - you can use it just as "better Java", advanced functional programming tricks are entirely optional) and realized I mostly use "val" naturally. If a variable I defined is changed, it is most likely a mistake or some unexpected behavior taking place. Both C# and Python already have "var" to define a variable with its type inferred, so why don't they add "val" to tell the compiler it can only be initialized once and never assigned again? To me it seems trivial for a compiler developer to add, it wouldn't break anything, and it would bring quite a lot of value by eliminating a whole class of bugs.
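For reference, this is roughly what single assignment by default looks like in Erlang (the subject of this thread), where every binding effectively behaves like a "val"; a sketch of a shell session:

  1> X = 1.
  1
  2> X = 2.
  ** exception error: no match of right hand side value 2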


> I could never understand why immutability is not the default in all the popular languages

My take (could be wrong, happy to be corrected): because most PL designers want speed to be pre-eminent in their language. One of the quickest ways to never have your language adopted by large corporations is to have it be slower, without some obvious, great benefit in productivity.

That's the catch though: arguing for an increase in productivity is harder, because it is qualitative, and ultimately subjective. Plus it's a crowded market these days in language-productivity land, and unfortunately most of those languages are dynamically typed. Showing some benchmarks on paper is easy to do, and an easy way to sell a language.

So how does this all tie back in? Well, if you want to go fast, you need mutability (and up until most recent times: shared mutable memory, although this appears to be changing to non-shared mutation models). As memory and workloads get bigger and bigger, the cost of copying increases.


Immutability is not inherently slower. It provides for much better reasoning that may even result in superior performance.

Of course, retroactively introducing it into a largely mutable language may not yield better performance, but e.g. Haskell is most definitely not slow.


Immutability being the default does not prevent mutability.


Obviously. It just makes it an available option and the default. And that alone is helpful.


Is ”algorithmic” a synonym for ”imperative” in this case? I haven’t seen this usage before.


Yes. "Imperative" probably is the right word. I don't read much programming literature. You probably are correct.


I mean I understand "an algorithm" as a sequence of commands and a functional program as a pipeline of functions.


> I could never understand why immutability is not the default in all the popular languages

At what point do you shift the blame for bad code from the language to the programmer?

If you made variables const by default bad programmers will simply mark everything mutable.

The only way you enforce proper constness tags (or mutable tags) is through code reviews and teaching people better. So the default doesn't really matter.


couldn't you use the readonly modifier in c#?


I do, occasionally. But it looks annoyingly verbose (which alone already feels like enough reason to avoid using it too much) and IIRC has some limitations (I can't remember which) compared to Scala's concise and universal `val valueName = value`.


Erlang was a gateway to functional programming for me too, specifically Haskell. (And Emacs!)


I have definitely noticed a functional programming to golang pipeline. Multiple people used Haskell and switched to Golang.

Interesting anecdotes.


When you're dying from the compile time of functional languages, you really appreciate Golang's near instant compiles.


That's mainly Haskell and Scala. OCaml and Elm have as fast or faster compile times.


The cause is not being functional, but having dozens of extra features and multiple syntaxes for the same thing; both Scala and Haskell suffer from that. On the other hand, OCaml is very fast and Elm is the fastest compiler I've ever seen. Edit: oops, there was already a similar comment when I posted.


> Elm is the fastest compiler I've ever seen

Compared to the other languages you mention there, I'm not surprised that Elm has the fastest compiler; compiling to an actual binary (or something low-level like JVM bytecode) definitely seems like it would take more work than compiling to something much higher level like JavaScript, especially if you're performing optimizations at each intermediate layer. That's not to knock Elm though; obviously having a fast compile time is a good thing regardless of the reason, and it certainly still would be possible for someone to accidentally write a slow compiler to JavaScript!


In a modern stack though, compiling a JS or Typescript project takes ages; I don't believe it's the rewriting, optimizing and minification per se, but more the huge amounts of code and files that need to be processed.


Elm is fast because it implements only a subset of Haskell's feature set. It doesn't have as strong a type system, it uses eager evaluation, and I guess it mostly falls back on the JS runtime for optimizations.


Compiling to binary can be plenty fast, especially JVM code. It’s optimisation that really takes time.


In reality the Elm compiler is probably orders of magnitude slower than clang, it just has much less work to do. A C++ compiler needs to parse / preprocess 100k+ lines of code even for something like hello world.


Not all functional languages are made alike. OCaml compile times can rival Go's.


I would appreciate it if golang was any good. Unfortunately, it isn't. (It isn't functional either, so I'm not sure what point you were making)


I was tired of having to provision the runtime and dependencies.

Native binaries with no dependencies was the reason to change to go.


Could you elaborate? Some Haskell users I know are not fans of Golang. What were the reasons for the switch?


If you like elaborate[0] typing and immutable everything, Go really is not the language for you.

[0] I don't know Haskell that well, but my understanding is that its type system is capable of expressing more intricate types than the Go type system.


I don’t think this comment adds anything to the discussion. Perhaps add a few of the anecdotes? Or even a theory as to what’s going on?


I gotta say I really enjoy these blogs. This and the Go one are really well done. Stylistically I love the look and the art is great too!

Then the content is enjoyable, and the length of the posts seems just about right too.

Where do I subscribe?

Also, technically speaking, having shipped Erlang a few times in 3 different products (although all of them were between 2000 and 2010), it's the technology I've had the least amount of trouble developing with. The REPL experience is (was?) awesome with all the Emacs integration. The ability to basically shell into a remote node and poke around live was extremely handy.

People didn't like the binary logging but it seems systemd forced it down their throats anyway later. Also, I provided really easy tools to read the logs using the runtime, so they didn't care much in the long run.

It was really easy to write pipe drivers for programs written in Haskell, Python and C++. I even used the Java integration back then.

OTP was like a dream come true, and when you're building out an SNMP MIB, Mnesia made that so stupidly easy - I couldn't believe how much faster I was going than the folks trying to plug into NetSNMP.

I wish I was still working on that project sometimes. It was my favorite.


> People didn't like the binary logging

I worked on an Erlang system for nearly eight years, and don't remember any binary logs. Maybe disk_log? but afaik, we only used that for mnesia transaction logs and I don't recall ever needing to look in there.


Hmmm it’s been a decade so maybe I’m misremembering, but I remember needing to use the REPL or escript to access the logs of my otp services… were they simple files?


You may be thinking of the old SASL report browser:

https://erlang.org/doc/apps/sasl/error_logging.html#report-b...


Yes, thank you! That was it.


At least on my systems, they were just files, yeah. Maybe we did something weird, but I ran yaws from debian for something and log files were just files there too.


Otp logs, at least in Linux, now "do the right thing". It's pretty recent though (2y? There was a major log system overhaul).


> Where do I subscribe?

It looks like they have an RSS feed: https://www.fredrikholmqvist.com/posts/index.xml


I found this to be an accurate write up, though my experience has been Elixir, which inherits all of the traits of Erlang (to my knowledge), most notably the BEAM / OTP.

Side note: That's a really cool graphic that looks to be Joe Armstrong, Robert Virding, Mike Williams with telephones? Where'd that come from?


This is almost certainly a reference to Erlang: The Movie [1] which is the most glorious 11 minutes of some programmers explaining how their creation works you will ever see. In the video, the three call each other to demonstrate what happens real-time in an Erlang system. For the impatient, the telephone conversation starts at about minute 3.

[1]: https://www.youtube.com/watch?v=BXmOlCy0oBM


>That's a really cool graphic that looks to be Joe Armstrong, Robert Virding, Mike Williams with telephones?

I noticed it's 3 separate svg files.

He has another post with similar line style drawings[1], no credits there either, so I assume the author did them?

[1] https://www.fredrikholmqvist.com/posts/brooks-wirth-go/


They really are touching tributes, Joe was always kind and courteous in all my interactions with him.


Thank you, this means a lot. It was sentimental drawing Joe, despite having never met him.


This is correct! Cheers :)


Author here.

Yesterday was amazing, today continues in the same fashion.

To everyone who read and commented, thank you. This means a lot to me.


I don't know why I felt the urge to praise/critique your work but here we go:

I think you're doing many things right! This crowd loves discussing fundamental technologies, including languages and your posts provide a really nice basis. I like the rational, tech focused, "no BS" perspective.

I especially like that you bring in some history and the people behind the tech (in the Go post); I personally fancy that and think there could even be more of it. I like posts/books with a ton of footnotes and references. It makes me feel like I'm exploring/discovering something for myself rather than "just" reading some person's opinion (although that is fine sometimes, just not always).

Also your blog and posts just look nice! (Mobile could be a bit better. You should maybe scale down the SVGs responsively). The monospace font is immediately familiar and screams "Hi fellow nerd!". The colors and illustrations are warm and friendly.


> Erlang enables you to write scalable, concurrent, distributed and fault tolerant systems with soft-realtime latency guarantees (which in my experience describes most online systems out there[3][4]).

Beautiful. I do more frontend these days, but if I ever get back to the backend, I hope it's erlang. I want to use tools designed for the problem.


Same, but I insist on finding a tool for the problem, not a problem that a tool I like can solve.


Elixir (plus Phoenix) is quite a natural fit for backend web development. Try it on a hobby project; I believe you'll love it.


Just in case anyone hasn't seen the old classic, Erlang: The Movie

https://www.youtube.com/watch?v=uKfKtXYLG78


This is the same "movie", but the audio has been fixed to not play out of just the left ear: https://www.youtube.com/watch?v=BXmOlCy0oBM


Like on a telephone :)


Somewhat related: https://caramel.run/

OCaml on the Erlang VM!


There are quite a few alternative languages for the Erlang VM. I haven't checked the links in my gist for quite some time, but here's a list:

https://gist.github.com/macintux/6349828#alternative-languag...


I've dabbled in both Erlang and Elixir. However, betting on either of them in our company would be quite risky, with no one here having real experience with a functional language. That's why we chose Go. While not the greatest language on earth, using it has turned out to be a blessing in our team (statically typed, super easy to learn, easy deployment, fast compile times). That said, I really want to try Erlang on a real project.


The key gap in this argument is the blank-slate assumption, which I'll define as: everyone has the same type of intelligence and the same level of it.

1. Type of intelligence. This piece drives right past the functional versus procedural roadblock. Most developers don't like and don't grok functional programming.

2. Level of intelligence. Systems programming and concurrency are not for the faint of heart. When I was a TA in college, the threading topics were the ones that challenged the students the most.

The functional versus procedural divide in mental capacity is very real and most developers have to fight with functional programming mentally. One cannot just ignore this decades long struggle of functional people bullying us in the procedural camp. One just has to search Hacker News for functional programming and most comment sections will have this debate.

Amdahl's law can be stated as make the common case fast. Functional programming is not the common case for the human mind.


I somehow overlooked this when I originally read your comment (or it didn't register if I did read it):

> One cannot just ignore this decades long struggle of functional people bullying us in the procedural camp. [emphasis added]

"Bullying" is quite a strong word. What kind of bullying happens?

Besides the silliness of the term, the other thing I want to comment on: Stop being in a camp. The best way to stop learning is to assign yourself to a camp and to only do what that camp does and only believe what that camp believes. Expand your horizons, learn about other camps and why they do what they do and believe what they believe. You'll eventually learn that no paradigm is "right" (in an absolute sense) and that, instead, they all have right approaches to large portions of the problem of programming but wrong approaches for some (potentially substantial, but usually not) other portion of programming.


UC Berkeley uses Scheme in its first CS course for a reason: to weed people out. I've seen firsthand people struggle with functional programming to a much higher degree than with procedural. I'm not in the procedural camp per se; I'm in the common-person camp, the people who struggle with things like async and functional programming.


At least on (2), that's actually one of the nice things about Erlang. It removes a lot of the kinds of things that make concurrency difficult for people. There are no locks on data. There is no shared data. The model is based on message passing with actors. This means that bad designs that don't fit this model make themselves apparent (they introduce complex bookkeeping overhead or are cripplingly slow due to coordination overhead), but good designs (data flow, a process per connection, processes as isolation mechanism for state) become easy in this language and perform very well.

And since processes are so key to Erlang, the green thread mechanism that BEAM provides is very fast and message passing is about the same overhead as a function call (which is to say, not much). This model of concurrency should be teachable to any 3rd year CS major in college (and possibly earlier) without breaking their brains.
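A minimal sketch of that model (hypothetical module and message names, nothing from a real codebase): the counter's state lives inside a single process, and the only way to read or change it is to send that process a message.

  -module(counter).
  -export([start/0, increment/1, value/1]).

  %% Spawn a process that owns the counter state: no locks, no shared memory.
  start() ->
      spawn(fun() -> loop(0) end).

  %% Fire-and-forget message; only the owning process touches the state.
  increment(Pid) ->
      Pid ! increment,
      ok.

  %% Request/reply with an explicit timeout.
  value(Pid) ->
      Pid ! {value, self()},
      receive
          {counter_value, N} -> N
      after 5000 -> {error, timeout}
      end.

  loop(N) ->
      receive
          increment -> loop(N + 1);
          {value, From} ->
              From ! {counter_value, N},
              loop(N)
      end.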


If one is doing async programming, my experience with Promises in JavaScript is that glossing it over only works to the degree that the async code mostly behaves like synchronous code. To quote Albert Einstein: make something as simple as possible, but no simpler. If one is writing programs for asynchronous environments, abstracting the asynchrony away as if it doesn't exist is problematic.

Most developers I work with day-to-day are not designers. Fred Brooks's "The Mythical Man-Month" (1975) has a good take on this. The way I see it, languages like Erlang are amber that locks in dead code, because no one except the original author can maintain it. Once code goes into maintenance mode, good luck finding Erlang maintainers.


Erlang effectively makes everything asynchronous, so you're forced to deal with it. It's not glossed over at all, although it does give you good primitives to cope with it.


FWIW, I struggled with FP for years before discovering Erlang. It's a good gateway drug.

> One just has to search Hacker News for functional programming and most comment sections will have this debate.

Yet this thread entirely avoided it until your comment.


Perhaps a little more experience with teaching would enlighten. I taught programming for a year in college, and you know what one of the hardest concepts was, the one that caused test scores to drop? Recursion. People invariably struggle with base cases. Recursion is a key tool for software development, whether procedural or functional. Teaching software to students is a real eye-opener if one never struggled with seemingly basic concepts like recursion. I've seen students flunk tests because they couldn't grasp the concept. Another concept that is not quite as hard is callback functions. I believe this is part of why functional programming is challenging: lambda functions are callback functions. I have to admit, if I hadn't taught software at Berkeley for a year as a TA I wouldn't have this perspective, so something to consider.


> Amdahl's law can be stated as make the common case fast.

Not really, the common part is the one parallelizable by dumb workers. The fastest worker does the sequential part. Here, let the smartest do the hardest.


I took a mook Erlang class and after that, I couldn’t really get into elixir. Erlang just felt a lot more expressive


I keep trying to learn Elixir, and clearly there are some nice features, but I’ve also found it hard to ignore how much I love Erlang’s syntax.


Some people really like the Erlang syntax. Semantically they’re basically interchangeable except that Elixir has macros.


And sane strings. And I really like Mix.

That said, I generally prefer Erlang, for the syntax, for Dialyzer, and for the one less level of abstraction. But recognize the future in the space is probably Elixir, and being fluent with both is probably helpful.


And Task, and Registry. Those don't exist in erlang, and they are fantastic.


Eh, Task is super easy to build though, and is little more than syntactic sugar for what I rarely did anyway (and it also obfuscates the fact that the asynchronous task may never return; yes, a timeout is thankfully included, and will by default tear the task down, but I still like the explicitness of "I sent a message...I need to receive a response with a timeout"). I agree it's a solid pattern to add, just not really impactful for me.

Registry I will admit to never really looking at; I assumed it was the same as Erlang's process registry. Looking at it, yeah, having that as part of the default library seems nice, though in Erlang the few times I needed more than what the process registry gave me I just used a library. Given the evolution in that space just from the time I was in Erlang, I'm actually curious how Elixir's is implemented, but regardless I agree it's nice to include a default one.

So, yeah, solid additions, but I don't know if I'd phrase it that way, that they "don't exist in Erlang".
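For what it's worth, the explicit version described above is roughly the following sketch (expensive_work/0 and the 5000 ms timeout are placeholders, not anything from Task itself):

  %% Hand-rolled "task": spawn the work, then explicitly receive the
  %% result with a timeout instead of hiding the message passing.
  Parent = self(),
  Ref = make_ref(),
  spawn_link(fun() -> Parent ! {Ref, expensive_work()} end),
  receive
      {Ref, Result} -> Result
  after 5000 ->
      exit(task_timeout)
  end.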


> Task is super easy to build though, and is little more than syntactic sugar for what

That's the prevailing mindset in Erlang, and it's holding Erlang back. "Why build X, when you can just build it yourself". And so you have everyone building a variety of Xs in each project, all of them subtly different, all of them slightly inconsistent.

Why not have these useful abstractions in the standard library? "Little more than syntactic sugar" goes a long way towards improving developer experience and frees you to do other things than reimplementing stuff that should've been in the standard library in the first place.


I did say - "I agree it's a solid pattern to add, just not really impactful for me."

I'm not at all against having it there, I even call it out as a positive addition. Just not one, I, personally, found as transformative as the things I called out. There are other things that are -nice- in Elixir, the pipe symbol, revamped standard libraries, etc, but they didn't change the experience for me the same way.

Strings in Erlang are a pain point. Lack of a standard build tool (and thus using things like rebar3 and the like) is a pain point. Lack of macros wasn't a pain point, but it did enable new approaches, so I would consider it transformative.

Task? Task is nice syntax sugar, but it's not solving a pain point or enabling anything new coming to Elixir from Erlang (which, as you may note, was the point of this thread). I'm not dismissing it, it -is- a nice addition, and just like Registry having a standard approach is a benefit, but it just isn't nearly as impactful.


I agree, Task and Registry are nice, especially for newer folks, but if you know your way around OTP you can write your own specialized version in 1-2 hours.

The protocols, structs and macros are game changers in comparison. All the Phoenix, LiveView, Ecto stuff is possible because of those.


Task also does some magic with the process dictionary which has buy-in from the community, so if you rebuilt Task, you wouldn't get the ecosystem benefits.

For example, if I make a database checkout in testing, using Ecto, the checkout lifetime is tied to the test process, as is typical on the BEAM for sane resource cleanup. If you execute code which spawns a new process, it does not know about the database checkout because it's a new PID. Task stashes its caller PID (this is not the same as ancestor, because you might want to supervise a Task) in the process dictionary, so libraries that need that association can find it.

So yeah, you may have a point about Registry, but Task really "doesn't exist in erlang" because no 3rd party erlang libraries know about your ad-hoc Task clone, so even if you built caller knowledge into it, it would only be useful for your own code. At best you could correctly emulate Task's api, but then you would only be able to get that functionality with your code + Elixir libraries that use Elixir Task, and no erlang libraries.

Setting standards is an important role for languages.


Sure; I will grant you (and I think already did in my original post) that a standard approach to a pattern is better than multiple ad hoc implementations. That said, I appreciate the callout for Task; that does make it sound like it brings more to the table than appears on the surface (to 'mix' a metaphor).



What’s a “mook erlang class”?


Probably autocorrect of MOOC (massive open online course).


Ahh that makes sense. I googled but (not surprisingly) nothing came up.


On my phone and in a hurry now, excuse my not ddging it, there was one by a british university, paid though. I may have it in my bookmarks. Will respond with link later if noone else does.

(edit) There you go:

https://www.futurelearn.com/courses/functional-programming-e...

https://www.futurelearn.com/courses/concurrent-programming-e...

There are older videos from them on YT:

https://www.youtube.com/playlist?list=PLlML6SMLMRgAooeL26mW5...

https://www.youtube.com/playlist?list=PLR812eVbehlwEArT3Bv3U...

Found this in my Erlang bookmarks:

https://spawnedshelter.com/


Appreciate it!



To those of you that know Erlang, is the book "Learn You Some Erlang for Great Good" from 2013 still relevant in 2021?


Yes, very much. The language changes relatively slowly and cautiously.


My only production experience with Erlang is trying to maintain some XEPs for ejabberd. True or not I was told this wasn't a great example of good Erlang.

This post doesn't have any code at all.

What's a good place to look at idiomatic Erlang that will force me to say "yes, this is much cleaner and clearer than my current language of choice"?


Can't say anything about the quality of the code (not an Erlang programmer), but RabbitMQ is built with Erlang.

https://github.com/orgs/rabbitmq/repositories?q=&type=&langu...


> Most commonly, supervisors just restart the crashed process. This effectively prevents stateful Heisenbugs, as restarting a process is cheap.

I'm skeptical of this claim. How does easy restarting prevent bugs?


The argument here is that the system can get itself into a state that is irreparable (and perhaps very difficult to track / debug).

By crashing and having it start into a good state, the system will at least be operational again.


With all due respect, that does nothing to prevent bugs. In fact, it almost seems to encourage them: "Your process crashed? Don't worry about finding and fixing the bug. We'll just restart it again."


Detecting a bug, crashing, and then restarting can be much easier to write than code which is bug-free. In fact, sometimes you need so much code to handle an edge case that it is likely you are introducing more bugs by trying.

A key point of Erlang systems, however, is that they are really good at reporting the state of the system when it crashes (due to functional programming, you have the state from before the crash happened, and what event led to the crash).

The restart is a stop-gap measure that gives you service for the system as a whole. You can then look at the logged bug report and fix the problem. But you are in control of how quickly you want to fix the problem. There is a cost to fixing a bug as well.


Depends on the scale you're working at, I suppose.

If you're running thousands of programs on hundreds of thousands of machines 24/7, you're bound to run into weird edge cases at the system level.

Instead of worrying about and optimizing all of these edge cases, SOMETIMES it is better to just have a system that is tolerant of these edge cases by design.


Perhaps, but that's an entirely different claim from the one made in the article (that Erlang can prevent such bugs).


The claim was it prevents Heisenbugs.


It does not prevent them. The reasoning is that those bugs are so obscure, only occurring in vanishingly rare constellations, that finding and preventing them would be prohibitively hard/expensive.


Say you have an application that takes user input, validates it and then tries to add it to the current state. Suppose there is a bug in the validation so that 0 is allowed as input, but later, when you try to add it to the current state, you divide by 0.

In Java you would explicitly handle this case with a catch and take action on a "division by zero exception". In Erlang you would just let the process crash (and log the reason so the bug can be fixed) and restart the state to what it was before the erroneous input, no matter what bug/case the wrong input triggered. By having this generic handling you get a really resilient system, since you don't have to handle every possible bug on a case-by-case basis.

What would be the advantage of handling this case in the Java defensive programming approach? Maybe someone will catch the error and return/introduce null to the state? Or BigDecimal.ZERO? Then you might end up in an unexpected state for all subsequent requests.
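As a rough sketch of the Erlang side (hypothetical gen_server callbacks, only the relevant ones shown): the code does no defensive checks, so a divisor of 0 raises badarith, the process crashes with a logged crash report, and its supervisor restarts it with the clean state from init/1.

  init([]) ->
      {ok, 0}.

  %% No validation here: a Divisor of 0 raises badarith, the process
  %% crashes, and the supervisor restarts it with the state from init/1.
  handle_call({add_scaled, Value, Divisor}, _From, State) ->
      {reply, ok, State + Value div Divisor}.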


Well, division by 0 could be handled with pattern matching without crashing. Excuse my rusty pseudoerlang.

(edit) formatting

  handle_zero(_Data, 0) -> {error, division_by_zero};
  handle_zero(Data, _) -> {ok, transform(Data)}.

  handle_input(Data) -> handle_zero(Data, maps:get(divider, Data)).


This doesn’t prevent bugs (if we’re talking about logical ones), it prevents extended outages because of them. In other words, if the application crashes you can log such fact and fix it later while your end users won’t notice a problem on your side in the right circumstances (unless it’s a crash loop which can’t be temporarily solved by restarting).


Or you can log it and never fix it. I noticed a flurry of crashes that came about on restarting the vm. Almost certainly transient startup race conditions. Causes effectively zero service downtime and no degradation of QOS for end user (other nodes in the cluster could service requests). But having a disorderly startup is less of a pain in the butt to program and may even result in a quicker time to start. So who cares. Never fixed it.


The point is the app doesn't crash. One process (object) crashes and immediately restarts. The entire rest of the app keeps chugging away. Compare to a language with one or a few main threads, where one bug can crash the whole system. It's a brilliant scheme, designed for systems required to run for a decade or more with only minutes of downtime (telephone switches).


Yes, and isn't that amazing!

Bugs in production will happen in every language, and crashing one of many running processes and restarting it to get it back into a known good state sure beats having to scramble to find a fix, code it, build and deploy a new version of the code.


It’s all about resilience, not about preventing bugs completely. The beauty of it all is that you can skip on a lot of error handling code whose only purpose is to keep the system running. You only need to handle business logic errors. Let’s take a web server as an example.

The way the “let it crash” philosophy is done in Erlang is that you crash individual processes first (e.g. the current HTTP request from one browser). If that keeps happening there’s a counter that crashes the subsystem (e.g. the web file listing component). If that too keeps crashing the whole web frontend might crash, but the node might still be up (e.g. serving FTP or whatnot). And lastly the node will shut down completely if the web frontend keeps crashing.

This way, the rest of the system keeps performing and serving requests even if some parts are not working intermittently.

On top of this you of course have (built-in) logging of all these errors so you can investigate them.
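A sketch of those escalation knobs in a supervisor's init/1 (the child module name is made up): each supervisor tolerates at most `intensity` restarts within `period` seconds; beyond that it gives up and terminates, escalating the failure to its own supervisor.

  init([]) ->
      SupFlags = #{strategy => one_for_one, intensity => 10, period => 60},
      Children = [#{id      => request_handler,
                    start   => {request_handler, start_link, []},
                    restart => permanent}],
      {ok, {SupFlags, Children}}.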


It's more the subtle difference between robustness and correctness.


It enables systems with an uptime of nine nines (99.9999999%), i.e. less than a second of downtime per year.


As long as you don’t cause a crash loop you’re golden in my experience.


A crash loop would indicate something fundamentally broken in one's code. That's OK too, because then we know we have to fix it.


Note that the author is referring to ephemeral bugs. Logic bugs that occur every time your input is processed are going to cause you problems, but rare edge cases (or corrupt memory) can be sidestepped by throwing away that input and starting over.


The article says "prevents stateful Heisenbugs." The key word being stateful -- state is reset when a crashed process is restarted.


Is Erlang still relevant?

Many other languages now have both language features and libraries/frameworks for doing similar things to Erlang's main selling points.

There are languages that are easier for most people to work with, e.g. syntax-wise, with better tooling, etc.

I see little reason to pick Erlang for any greenfield project.

"Erlang enables you to write scalable, concurrent, distributed and fault tolerant systems with soft-realtime latency guarantees"

Not unique to Erlang.

The author doesn't even seem to realize that he never actually answers his own "why".


I’ve yet to see a language + framework combination that has everything great from Erlang/OTP.

Scala+Akka might come close, but from what I’ve heard it’s not there yet.


Yes Erlang is still truly unique.


Is Erlang good for regular web application building? If so, what's the recommended way of doing so?


Yes and no. The real answer (with an Erlang bias) is to look at Elixir + Phoenix + LiveView.

Here's a neat demo. No JavaScript required.

https://www.youtube.com/watch?v=MZvmYaFkNJI


As a professional Elixir engineer who still hasn't embraced the whole LiveView thing, I'd emphasize that that particular aspect is entirely optional. You can write a traditional HTTP-only or templated HTML backend in Elixir with code that a Ruby on Rails developer wouldn't blink twice at.


I looked at this in some detail and concluded that the real answer was to use Phoenix (i.e. Elixir). It's much better, and much better supported than any of the Erlang options.

Elixir and Erlang interoperate well, so if you want to stick with Erlang you can do the web bits in Elixir and then do all the lower pieces in Erlang. What happened to me, though, is that I discovered that Elixir is great, and so is its tooling.


My personal experience is that Elixir, which runs on the Erlang VM, is great for web applications.

https://elixir-lang.org/


I know about Elixir, I played a bit with Phoenix. I was asking about Erlang specifically.


There's Chicago Boss. I'm not sure if it's still maintained though.

http://chicagoboss.org/


The original developer was Evan Miller, who stepped away from the Erlang community a few years ago. I think others contribute, but particularly with the rise of Elixir & Phoenix I wouldn't be surprised to learn that it isn't as active as it once was.


Nova is an up-and-coming new framework http://novaframework.org/


Depends also what you mean by Webapps.

Building a web service backend serving JSON over REST, or similar? Works well.

Wanting a more "batteries included" Rails like experience, with server side rendering and such? Elixir has a solid framework in Phoenix, and community around it.


Like most things, it depends what your web application is doing. I've been working in Erlang professionally nearly 6 years, and I'm not super convinced it's great for building web apps. If you want to utilize the Erlang VM, many people have found Elixir/Phoenix to be super helpful in building their web apps. I encourage you to check that out if it piques your interest.


> and I'm not super convinced it's great for building web apps.

It boils down to the complexity that it saves you from. Webapps eventually grow to need caches and queues. That means external infrastructure dependencies on things like RabbitMQ, Kafka, Redis, Elasticsearch, etc. That also means managing the lifecycle of this infrastructure in your application and building robust systems to do maintenance, etc and not fail.

Erlang provides enough of the basic building blocks to trivially implement these features on BEAM in code, as I'm sure you already know. Then all you're doing is message passing with Erlang.

That's the secret sauce. The language features themselves don't make it any better at solving the webapp problem than, say, Rails; it's all of the ancillary problems of scaling webapps that are solved by everything just being code running on the BEAM.
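For example, a minimal in-VM cache is a few lines of ETS, with no extra infrastructure to operate (names are hypothetical, and a real cache would also want expiry and size limits):

  %% Create a shared, named table once at startup.
  init_cache() ->
      ets:new(my_cache, [named_table, public, set, {read_concurrency, true}]).

  cache_put(Key, Value) ->
      ets:insert(my_cache, {Key, Value}).

  cache_get(Key) ->
      case ets:lookup(my_cache, Key) of
          [{Key, Value}] -> {ok, Value};
          []             -> not_found
      end.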


IDK if it comes from Erlang, but there is GenStage on offer in Elixir [0]; you can build data transformation pipelines, work queues and more out of it.

[0] https://elixirschool.com/en/lessons/advanced/gen-stage/#


The square hole is hilarious.


very nicely argued.


Ruby on Rails is not a language.


> You’re given these language options:
>
> 1. Ruby on Rails
> 2. PHP
> 3. C++
> 4. x86 Assembly

Definitely not 2.


It's funny. I'm a functional language bigot who loves Common Lisp, but for prototyping a basic CRUD site quickly given those 4 options I'd probably pick PHP. It's a silly language but it doesn't pretend to be otherwise and for simple stuff it just works.


Eh, modern PHP is pretty good. Sure the stdlib still has warts but that’s largely due to a reluctance to break compatibility.

Recent versions are plenty fast enough for the majority of web purposes, it does tons of stuff out of the box, its share-nothing nature is a good fit for web services, and for a dynamic language it has a surprisingly strong type system.


That list omits the hidden fifth and correct answer: Python


> the hidden fifth and second correct answer: Python

FTFY :)


Lets restate the question, but this time include Erlang as an option:

You’re about to create a basic CRUD server for storing configuration files (JSON, YAML), with a web frontend. This will be used internally, with a few hundred daily users. No single file is greater than one megabyte, total storage is less than one gigabyte. You have today to complete this task.

You’re given these language options:

Ruby on Rails

PHP

Erlang

C++

x86 Assembly

Which one do you choose?

Answer: Ruby on Rails. Not Erlang.


As a Ruby on Rails Dev of 9 years, and an Elixir/Phoenix/OTP(erlang) dev of 2 years. I'm going to use Phoenix/Live View.

Every. Single. Time.


Given the forgiving requirements, I would choose whatever I know best, because I'm not going to special snowflake a system unless doing so offers outsized benefits.


Have you tried Phoenix? It's... amazing.


Lots of downvoting, but I mean, duh: for the average app and average developer, of course Rails > Erlang. Is this even a discussion? Anything from documentation, community size, and Stack Overflow questions to libraries will be better with Rails. Ah, but the comments now say that Erlang = Elixir, actually. OK, I didn't know that. I'd say the same thing though: Rails > Elixir for the average app. You don't need the extra complexity Elixir brings. Yes, I know about WhatsApp; no, you're not gonna be WhatsApp.


As a person who knows and works with both Elixir/Phoenix and Rails:

Elixir/Phoenix is dead simple. There's no "Rails magic." It's easy to understand what's happening at all times. You don't need a grasp of the internals of OTP, because it's not that important for the problem described above, but it's there should you need it; and when you do need it, you just need to understand OTP rather than a plethora of extra technologies/DSLs, as you would with a regular Rails application.

With Rails you need to know/consider technologies like Foreman, Webpacker, Redis, Capistrano (or whatever), AnyCable or ActionCable, Sidekiq or whatever for ActiveJob, then how Puma (or Unicorn, Thin) web servers work; you probably need Nginx or something like it for reverse proxying; for richer frontend experiences, something like Stimulus Reflex or a JS SPA (Angular, React, Svelte); cron for periodic tasks, even stuff like periodically restarting your application. These are things you don't necessarily have to worry about in the context of Elixir & Phoenix with LiveView.


You're just listing a bunch of tech here that probably many Elixir apps use. None of it is essential for Rails btw.


> Foreman, Webpacker, Redis, Capistrano (or whatever), AnyCable or ActionCable, Sidekiq or whatever for ActiveJob

- Foreman: that's just a process manager; you need something for OTP as well. In my startup, it's booted up from Kubernetes. The nice thing is that the VM will use all available CPUs.

- Webpacker: phoenix comes with webpack out of the box though I think they are replacing that in the next version with esbuild.

- Redis: Elixir borrows from the BEAM, and part of that includes both ETS (an in-memory key-value storage system) and Mnesia (a slightly more sophisticated, queryable database), i.e. I don't need to set up Redis when the virtual machine already has something available. Redis is there if you really want it though.

- Capistrano: we use Kubernetes, but there's also mix release, which comes out of the box.

- AnyCable or ActionCable: have you heard of Phoenix's channels system? It's WAY more powerful. ActionCable is a toy compared to what the channels system can do, and it's already set up when you start a Phoenix app.

- Sidekiq or whatever for ActiveJob: Oban works great and I was able to set it up and use it from scratch within an hour


Tech is complicated; I don't think Elixir really lowers the bar in terms of understanding stuff like deployment, Docker, some persistence (I assume Redis IS used quite a lot despite what you are suggesting here), reverse proxies like Nginx, CDNs, etc. AnyCable is written in Go last I checked, btw; what makes you think Elixir has something more performant? Never really bothered with it, it's not like every company I work for needs to push messages to millions of people at once all the time.


> AnyCable is written in Go last I checked btw, what makes you think Elixir has something more performant?

https://www.phoenixframework.org/blog/the-road-to-2-million-...


I'm listing the things I've used over the last 9 years of developing SaaS platforms in Rails.

The beautiful thing about Elixir/OTP in general is that you can approximate all those things I listed above with OTP and core Elixir/Erlang libraries. You can't do that with Rails; you need those other libraries/technologies/DSLs, etc.


How do you "approximate" stuff like retrying failed jobs and persisting them to disk then? You use whatever the Elixir community came up with which is Oban that uses PG or Exq that uses redis. Sure hobby projects won't need an actual queue. You don't need Sidekiq for toy projects as well with Rails simply use the memory store, it can take you pretty far.


literally on the Exq github readme: https://github.com/akira/exq#do-you-need-exq

"While you may reach for Sidekiq / Resque / Celery by default when writing apps in other languages, in Elixir there are some good options to consider that are already provided by the language and platform. So before adding Exq or any Redis backed queueing library to your application, make sure to get familiar with OTP and see if that is enough for your needs. Redis backed queueing libraries do add additional infrastructure complexity and also overhead due to serialization / marshalling, so make sure to evaluate whether it is an actual need or not.

Some OTP related documentation to look at:

GenServer: http://elixir-lang.org/getting-started/mix-otp/genserver.htm...

Task: https://hexdocs.pm/elixir/Task.html

GenStage: https://hexdocs.pm/gen_stage/GenStage.html

Supervisor: http://elixir-lang.org/getting-started/mix-otp/supervisor-an...

OTP: http://erlang.org/doc/

If you need a durable jobs, retries with exponential backoffs, dynamically scheduled jobs in the future - that are all able to survive application restarts, then an externally backed queueing library such as Exq could be a good fit.

If you are starting a brand new project, I would also take a look at Faktory. It provides language independent queueing system, which means this logic doesn't have to be implemented across different languages and can use a thin client such as faktory_worker_ex."


Whatever the org is familiar with.

I like the question, though. I've seen small projects that see negligible load, but are mission critical, written in a niche language no one else in the org knows. The language handled the problem well enough, but maintenance costs were massive for what the project was.


You make the bold assumption I know Ruby and if this is a time constrained project I would be more willing to use something I know intimately.

What I'm trying to say is in this case, I'd rather use Elixir and Phoenix as I know that best at the moment.


With those requirements, Laravel would probably make even more sense. There's plenty of drop-in code to get to 80% of what you need within a day or two, and the perf is going to be better than Rails.

That said, I'm an elixir developer and I use elixir because I need to handle heavy loads and thousands of daily users with unforgiving uptime guarantees. Elixir lets me do that while keeping my hair.


> Answer: Ruby on Rails. Not Erlang.

Can you justify this?

If you knew them all equally well I would suggest that Elixir and Phoenix would be an excellent choice, although I am definitely stretching the definition of "Erlang" to include Elixir.


RoR in 2021? Dude, that's just sad.


As opposed to Erlang?


My (outsider's) perception is that Ruby came and went, whereas Erlang came and stayed. Talking about their popularity in their niches. It's also much older than RoR, so in that picture it's also likelier to stay longer, regardless of temporary fluctuations.

I'd also like to learn Erlang when I find the time and use it for projects in the future, whereas Ruby was never appealing to me. Each time I had to deal with it, it created a mess in my system.


> My (outsider's) perception is that Ruby came and went,

That's like saying PHP came and went. Just because something is no longer the new hotness and doesn't get discussed a lot on HN doesn't mean nobody uses it. Both Ruby and Rails continue to be updated just like PHP and it's popular frameworks. If you compared the number of websites developed by either compared to Erlang, it's not even close. Elixir & Phoenix would be a better comparison, but Ruby is still top 20 in the Tiobe Index (PHP is top 10), and Elixir is somewhere hovering around the top 50.


I don't see PHP this way. PHP was never the new hotness. It was never cool. Whereas Ruby definitely surfed the fad wave, and this wave is gone now. Similarly, Erlang was not a cool kid, it just was somewhere in the background at telecoms, and slowly started being used in other places. If I had to bet, I'd expect Ruby to be completely gone in 50 years, while Erlang will still be there.


> PHP was never the new hotness. It was never cool

It was kinda the de facto way to make websites at some point, together with Perl. it was definitely cool.


> Each time I had to deal with it, it created a mess in my system

That's fine, that's your subjective experience which I'm not gonna argue with. But if you're trying to say here that Erlang is somehow more popular than Ruby for web development (or in general) that's quite ridiculous, not sure I wanna waste both our time engaging that.


One thing that's often not discussed is the time required to understand the pros and cons of another language.

If you don't know it well enough, choosing it over something else is a big RISK.

The investment required to learning enough to make a decision is often not considered. That's why consultants exist.


Erlang does not allow structural sharing. Sending large data structures to other processes always means serialization penalties, even if those processes run on the same CPU.

Also, was the Erlang VM written in Erlang? Probably not. So I'm skeptical about the universality claims of the language.


That's incorrect, there is no serialization when sending messages to processes that are on the same machine. (Otherwise it's transparent to the application, though of course there are performance impacts.)

To the programmer, messages are immutable. There are, however, optimizations to reduce the performance impact, particularly for binary data. When you send a binary message that is longer than 64 bytes, it is actually stored in a shared heap and effectively only a reference is passed as a message. Similarly, when you extract a small piece out of a larger binary, you might actually get a reference to the larger data stored on the binary heap. The Erlang VM has had more than 30 years of development, and there are a lot of subtle optimizations like this which were needed to solve real world performance and scaling issues.

Lack of serialization is one thing that makes e.g. the ETS key/value store built into the VM nice compared to Redis. You can just use raw data structures as keys and values. Reads and writes are under 1 microsecond.
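To illustrate the "raw data structures" point, a small sketch (the table name and terms are made up): a tuple works directly as a key and a map as a value, with no encoding step.

  Tab = ets:new(sessions, [set, public]),
  true = ets:insert(Tab, {{user, 42, <<"eu-west">>}, #{last_seen => erlang:system_time()}}),
  [{_, Session}] = ets:lookup(Tab, {user, 42, <<"eu-west">>}).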


The source for Erlang/OTP is 71.6% Erlang: https://github.com/erlang/otp - but yes, some of the lowest-level parts are written in C.

As for universality, not sure where you’re getting that. No one’s saying you should use Erlang for writing device drivers or OS kernels, or the lowest-level parts of the VM. Erlang is a medium- to high-level language, and should be evaluated as such.


Further, there's a meta-interpreter included in the standard library:

https://erlang.org/doc/man/erl_eval.html

https://github.com/erlang/otp/blob/master/lib/stdlib/src/erl...

It's not lisp levels of concise, but the language was not designed for that.


>Sending large data structures to other processes always means serialization penalties, even if those processes run on the same CPU

the trade off is that every thread has its own gc arena that can be freely and quickly cleaned, and even better, often simply dropped entirely when the thread ends

there's never a stop the world moment, only the specific greenthread that's used up all its space.

OTP setup for managing processes in a standard way is also a big boon to developing in the language.

if you need to mangle some giant datastructure concurrently, sure, erlang is likely to not be what you need. but if you're just trying to toss out a cluster of interconnected nodes that can handle tons of concurrency and your interprocess messages are largely queries to threads dedicated to managing some resource or lump of data, it's a great setup.


The original complaint also isn't true. Large binaries sent as messages actually end up reference counted and managed by BEAM that way. This is specifically to avoid expensive memory copies and fragmentation. I don't remember what the size threshold is off the top of my head anymore, but it's "large" by 1980's standards, not by today's. Something like 64 bytes.


Heap binaries are limited to 64 bytes as you guessed, but to be pedantic, the limit isn't only for sending, anytime you make a binary larger than 64 bytes, it will be a RefC binary, which includes the at least 255 byte binaries that can get allocated for the append optimization. [1]

This particular optimization can backfire if you're not careful. If you generate short binaries that trigger this optimization and then store them in ETS, each one references the existing binary, which has a lot of extra space. If you hit that, binary:copy/1 will return a perfectly sized binary that you should store in ETS instead. The same kind of thing can happen if you make a sub binary from some large binary (like a service response) and store it.

[1] http://erlang.org/doc/efficiency_guide/binaryhandling.html#c...
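A sketch of the fix being described (Response, cache and Key are hypothetical, and the table would be created elsewhere):

  %% The sub binary still references the whole large Response...
  Header = binary:part(Response, 0, 16),
  %% ...so copy it before caching, letting the large binary be garbage collected.
  ets:insert(cache, {Key, binary:copy(Header)}).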


> Also, was the Erlang VM written in Erlang? Probably not. So I'm skeptical about the universality claims of the language.

The article states the opposite opinion: that you should use the right tool for the right job.

Change the requirements... and the spectrum-of-ridiculousness for any option changes drastically. The language matters, deeply.


This is incorrect. Functional languages are usually really good at data sharing since they promote immutability. When messages are sent to a process, the data is shared as much as possible to avoid duplication.


Unless you do exciting things with NIFs, Erlang can't share any data except for the binary storage for RefC binaries. When you send a message, all the Erlang terms are copied for the new process, but for a RefC binary, that means a new ProcBin that references the same binary storage, not a copy of the binary storage.

This copying isn't memory efficient and may not be CPU efficient, but along with immutability, it makes garbage collection simple, fast, and independent per process.


Erlang doesn't share data between processes by default (except large ref-counted binaries) because it would cause GC pauses to be global, rather than per-process.


Not only that, the original reason was that it makes crashing and concurrency far easier.

I suppose crashing is a kind of GC



