Why the Cool Kids Don't Use Erlang [video] (youtube.com)
93 points by pmoriarty on Jan 24, 2015 | 66 comments



I tried to do a startup in Erlang a few years ago. Haven't been able to watch the video because there aren't subtitles or a transcript, but here is why I don't use Erlang anymore: the type system. Erlang/OTP does MANY things correctly, where "correctly" is taken to be "in line with my tastes", and yes this includes the inane issue of syntax.

But the type system? Normally that wouldn't bother me, but I'm coming from OCaml, where exhaustive checking of algebraic data types makes me invincible. With dynamic typing, my code feels extremely vulnerable, especially when refactoring a codebase. Of course, Erlang isn't alone in this (I abandoned Ruby for similar reasons), but the pain of refactoring feels more acute in Erlang -- perhaps because idiomatic Erlang uses many short functions, making it easy to miss usages. So you have to rely on run-time testing to ensure complete coverage -- and that is nowhere near as easy or as reliable as a compiler that automatically checks everything for you.

My dream language is one with ML semantics running on top of BEAM. That seems like a natural progression for a platform that was designed to be reliable -- while there have been attempts to graft a more versatile type system onto the Erlang base language, the cleanest way forward seems to be a language purpose-built for the task.


"while there have been attempts to graft a more versatile type system onto the Erlang base language, the cleanest way forward seems to be a language purpose-built for the task."

Are you sure that's even possible? Erlang is fundamentally a dynamic system, with hot code (re)loading, processes starting and stopping (possibly crashing), and nodes attaching and detaching.

Moreover, the idea that a message can be sent to any PID seems fundamentally at odds with type checking at compile time. You'd probably need to do something more like channels.

Perhaps an ML-like type system can work with all of that, but it sounds like a research problem.


> Are you sure that's even possible?

No.

Simon Marlow and Philip Wadler tried to make a type system for Erlang and couldn't do it [1]. And if they couldn't do it... well, it's a very hard problem, for many of the reasons you cited.

From what I understand, the general consensus is that Erlang's dynamic types are a small price to pay for the flexibility and features afforded by the runtime. To be fair, what makes Erlang/OTP awesome is how the system works as a cohesive whole, rather than any particular feature of the language itself.

[1] http://homepages.inf.ed.ac.uk/wadler/papers/erlang/erlang.pd...


I don't believe that one should attempt to copy the Erlang model in a typed language (and this is one reason I don't particularly like Akka). There are many facilities, some noted in other comments, that make statically typing Erlang difficult but I don't believe most of these are necessary.

Can send any message to any actor? Probably a bug.

Hot code swapping? Never had a need for this. Restarts are fine. Any distributed system is supposed to be crash resistant. A restart is just another crash.

I much prefer the CSP model to the Actor model in any case (with asynchronous channels available), and it is much easier to type CSP.

Interested in others' thoughts.


> A restart is just another crash.

So let's say you restart, and the process comes up with new code, including new type signatures for the messages that the process is expecting to receive. How (and when) does the compiler check those new type signatures against already-running processes on another node?

Maybe there's some very sophisticated way of managing versions using subtyping to ensure that a new version always accepts a superset of the messages that the old version accepted. That sounds like a mess, and I doubt it would actually work.

Even if it did, it's solving the wrong problem. In a massively distributed system, message types are just one small part of a larger problem: protocol integrity. If you send the right message type at the wrong time, or are waiting for a message that never comes, a type system won't do you any good.

So to make this all actually work, you need to incorporate some high-level declarative protocol specification into your distributed versioning static type system (with subtypes). Perhaps not impossible, but wow, that would be some serious research effort; and then a lot of engineering effort to make it practical.

I could sum this up by saying that all static type systems seem to revolve around one very simple protocol: the function call. The Erlang type system can handle that just fine. But Erlang also brings a lot of other protocols to the forefront, and we don't know how to typecheck them all. Other languages punt on these problems so that they can call the type system "sound", but that doesn't make the problems go away.


So, maybe that's an interesting thing you've got there, right?

Time-dependent type systems--and that kind of makes me think of state machines. Not sure if there's anything there, but maybe there is. Maybe we can prove that type systems that are time-dependent have certain computability properties?


Typing the message-passing part of a language like Erlang is very much an open research problem. I spoke about this at length with the session type community. They are thinking about this, but it's at least a decade away.


Isn't Akka planning to add types w/ typed akka? If they can pull that off some subset of Erlang should be doable as well


From a superficial reading, I think what they are doing is quite simplistic. Basically they want to provide typed channels, e.g. ActorRef[T], which only accepts T messages. That's the easy part. What you really want is something like type-based guarantees that the interaction between actors cannot get stuck. Achieving this is fraught with difficulties.


No true Scotsman? This reminds me of arguments against STM, earlier efforts at which failed because they tried to enforce it across the whole system. The way forward was to demarcate clearly which vars were in STM and which weren't, and it is very useful -- note that it doesn't solve the "hard theory CS" formulation of STM.

Just because typed ActorRefs aren't as hard as implementing this other property, which has some other "hard CS" character, does that really devalue the utility of typed ActorRefs?

I admit I'm not sure whether 'the interaction between actors cannot get stuck' is an existing pain point, or something that would be introduced as a result of typed ActorRefs.


I didn't say it wasn't useful. It is. But it's checking essentially a sequential property: a sequential actor is using a channel in a consistent way. What e.g. the session type community is aiming at are properties of concurrent computation, such as linear channel usage which depends on the behaviour of multiple processes.


That's their plan for Akka 3.0. I believe this is the current status: https://github.com/akka/akka/pull/16665


In practice you have to check messages as they arrive because you cannot control what others will send you.


I've never used Erlang, but Cloud Haskell (http://haskell-distributed.github.io/) and Elm's hot-swapping (http://elm-lang.org/blog/Interactive-Programming.elm) seem to combine an ML-like type system with some of the features you list.


Cloud Haskell is interesting, but it doesn't even approach the capabilities of Erlang with OTP and BEAM. And the creator of Elm has described it as a language primarily oriented toward responding to UI concerns, not a systems language.


I have similar issues with Ruby. We have a large codebase spanning many years and stuff like typos in variable names make it past tests into production. Compile time checking basically replaces a large chunk of unit tests one would have to write and maintain to cover the same issues.


In an absolutely brilliant video[0] on Microsoft's Channel 9, called "Panel: Systems Programming in 2014 and Beyond" (with Charles Torre, Niko Matsakis, Andrei Alexandrescu, Rob Pike, and Bjarne Stroustrup), Rob Pike makes a comment (I'm paraphrasing) that in making Go, he realized that testing in general, and the idea of TDD, were due to dynamic languages not having any compile-time checking. The panel seemed to agree.

If you have the time, you really should watch. It's fun to see all these guys on the same panel.

[0] http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Pan...


Absolutely brilliant video. One of the best parts was when Pike brings up Erlang, and everyone just kind of mutters and looks down.

I also like the part where Pike was like "Yeah, we statically verified the Plan 9 kernel somewhere, but nobody cared. :("

The important thing to note, though, was the thing they all agreed on: type systems don't magically replace tests, and don't magically save you from things when you're developing real software.

Given a mutually exclusive choice, I'll take a strong test suite over a strong type system any day--because one of those lets me vastly screw with the implementation details and still make sure I'm meeting my business requirements.


As far as I can tell, a static, strong typing system sets a baseline of assumptions that you must further verify as an exercise to the business logic programmer. With more dynamic, weak type systems, more tests must exist to achieve a similar level of guarantees. For me, this all seems vaguely similar to when people argued about garbage collectors v. manually managed memory and it's become more that you've traded one set of problems for another, and as far as pragmatic engineering goes I think that's perfectly fine as long as we're better equipped to handle the new problems.

But yes, I'll take strong tests mostly because I can only spend so much time verifying that a type system aligns with my business logic (granted, QuickCheck is neat but requires a high price in flexibility of language choices) and I'd rather take the approach of weaker type systems with partial correctness than the oftentimes all-or-nothing demands of a rigorously structured business framework within a mathematically oriented language. This is very much a "worse is better" sort of approach to me and I argue it's why imperative and OOP languages will continue to dominate in anything outside academic programming.


Strong type systems (as opposed to weak) come at very little cost, though. There really isn't any reason not to use a language with a strong type system. It will catch a massive multitude of otherwise uncaught bugs and slash unexpected behavior. It doesn't magically replace all tests, but it means you need fewer of them to achieve the same level of quality.

Static typing has a similar effect, but it's weaker (the number of bugs caught is lower) and it comes at the expense of verbosity. Your code cannot be as short and sweet anymore, and that makes it less readable.

I'll take a strong test suite over a strong type system any day too, but given the choice, there's no way I'll ever do anything more than tiny scripts in a weakly typed language again.


The issue is that imperative languages with strong type systems essentially haven't been a thing until Rust[0], and not everyone enjoys working in Haskell's world of applicatives, monads and functors - you certainly can't say that it's an easy replacement for, say, Python in most companies.

Hopefully Rust will encourage the next generation of programming languages to follow its lead.

[0] And possibly Nim? I've not looked into it to see how strong its type system is, though.


I agree. Thank goodness they're not mutually exclusive!

I happen to be a Go fan, and thankful testing is well supported in the language and tooling. After having worked heavily in both compiled and dynamic software environments, I do like having compile-time checks on stupid things like typos. All my love to Python, which we use heavily, but my, what some compilation-like tooling would save me. (Nim has really caught my eye recently.)

Although after having been spoiled with the static analyzer in Objective-C, probably a contender for my all time favorite language (take from that what you will) I do miss it in Go.


Chunky integration testing is supported. Fine-grained unit testing is ... so-so. If you have units that don't interact with the standard library you can pin down a lot of behaviour.

But if you touch the standard library, which in a systems language is going to happen a bit, then isolating your own code is basically impossible. You are stuck with integration testing only.

Mocking, faking, stubbing and whatnot frequently blow your feet off if you use them indiscriminately. But sometimes they are the best way to test code because you can perform controlled experiments with variables that you control. Go's standard library is riddled with physical structs that cannot be substituted for any kind of double.

Basically, they don't follow their own advice to interface all the things.


> Given a mutually exclusive choice, I'll take a strong test suite over a strong type system any day

That's great as long as you'll only ever work on that one project.

A test-suite yields benefits for one project, a type-system yields benefits for every project.


True. However, test suites test things that a type system is never able to test.

In general, once you reach a non-trivial size of code, you have to have unit tests in order to stay productive and speedy at a high quality level.

A type system only speeds you up in the beginning.


Tests are code and need to be maintained. Type systems remove the need for certain classes of tests that would otherwise need to be written and maintained. Just because you need to write other unit tests doesn't mean it's free to write unit tests to check types.


I didn't say it is 'free'. I specifically talk about productivity.


Kent Beck invented TDD while programming in Smalltalk, recalling a paper he had read on using TDD with punched tape. I think Rob Pike is confused in his history/assertion.


He also pioneered XP while working on C3, a project shut down in '99. As I recall, the customer representative quit through burnout and stress and couldn't be replaced.

Look, most of us dig agile, but all you really need is the four tenets of the agile manifesto.

When someone says agile, consider the source, the best practitioners aren't publishing books, they are producing working systems every day.

I used to work at a company, let's call it Beutche Dank. I was part of a program that had spent millions of euros developing specifications that amounted to little more than PowerPoint presentations. One of the business analysts, nice chap, says to me, "You're going to deliver this using Agile, right?". He didn't realize that they had already compromised the delivery: they had followed every instruction of the 'agile consultant' they had hired, but they forgot to include the people who really add value, the development team. You either understand this stuff or you don't.

We need project managers, but they should not outnumber the engineers, we need engineering managers but they better be engineers. If you have directors of technical programs who are not technical, get rid of them.

It is the guys and gals writing the code that are of the most value to you, make everyone who deems themselves above these people justify themselves.

OK enough orthogonal ranting, got to go write me some tests, oops I mean compile my code.


It may be that the popularity of TDD was fueled by this, if not its existence directly.


Yes, TDD became popular many, many years after smalltalk.


I'm not sure why this invalidates his assertion -- Smalltalk is an extremely dynamic language with almost no ahead-of-time static analysis.


>Compile time checking basically replaces a large chunk of unit tests one would have to write and maintain to cover the same issues.

As does static analysis.

You do use a ruby static analysis tool on your project, right?


The last ruby project that I worked on laughed at static analysis tools before pitching them into halting problems. You have to execute ruby code just to assemble the classes, it's quite resistant to static analysis unless you artificially restrict yourself to a subset of the language.

If you're going to limit yourself to the statically checkable part of the language then you might as well just use a language that was designed for it. The fundamental feature of dynamic languages is variations on the theme of self-modifying code - behaviour is determined at runtime. This feature is not normally considered compatible with static analysis, because you have to execute an unbounded amount of the program to find out what it does.

The important thing about static type checking on languages like ocaml is that it can be both "sound" and "complete" - any type error is an error in your program, and a program that type checks cannot go wrong according to the constraints of the type system. In order to make this possible, we have a body of theory on how to design type systems that are constrained just enough to be checkable while still expressing everything you want them to say.


>If you're going to limit yourself to the statically checkable part of the language then you might as well just use a language that was designed for it.

Or not, because that would still catch misspelled variables (the problem the OP complained of), and running a static analysis tool is a little bit easier than rewriting your entire project in a different language.


> My dream language is one with ML semantics running on top of BEAM. That seems like a natural progression for a platform that was designed to be reliable -- while there have been attempts to graft a more versatile type system onto the Erlang base language, the cleanest way forward seems to be a language purpose-built for the task.

On the other hand, OCaml is pretty fast in most cases. BEAM doesn't have the same reputation. I think I'd rather have a better parallel and distributed computing story (complete with monitoring capabilities) for the existing OCaml ecosystem. The soft realtime guarantees that Erlang offers are IMHO too much of a price to pay.


AliceML is worth a look, though development seems to have died out a few years ago :(

http://www.ps.uni-saarland.de/alice/


I've been helping keep the bitrot away at https://github.com/aliceml/aliceml if you're interested in using it.


This looks interesting. Once I find OCaml too mainstream, I'll have to look at it :)


Meh. Dialyzer mostly remedies Erlang's dynamic typing for me (also a big OCaml fan). Almost a little freeing.

What kills me is that Erlang's design makes it practically impossible to statically check messaging. ! must be /dev/null as far as Dialyzer is concerned.

(Not that solving this is an easy problem! But in an environment as dependent on messaging as OTP, it's a painful shortcoming.)


> ! must be /dev/null as far as Dialyzer is concerned.

In day to day use with OTP, you don't use ! much, directly - you're much more likely to do a gen_server:call/cast from within a function that you can easily spec out.


Out of curiosity, did you try using dialyzer? I understand ADTs are strictly better, but was there a particular problem that dialyzer couldn't pragmatically solve for you?


Yes, while Dialyzer was somewhat useful, it only seemed to catch a subset of possible errors. The set of errors it missed was large enough that I just gave up on Erlang altogether.

I never took the time to figure out what Dialyzer could and couldn't catch. This was a few years ago, so I don't really have any particular examples in mind anymore, but IIRC it may have had to do with being mostly unable to validate the structure of tuples as parameters to functions.


  > validate the structure of tuples as parameters to
  > functions.
Dialyzer can definitely do that, you just need to specify a type that is a tuple:

    -type something() :: {Thing  :: thingtype(),
                          Thing2 :: thing2type()}.
    %% ...
    -spec foo(something()) -> thingtype().
But I agree that Dialyzer can feel incomplete. It's been a little while now but I recall having to use `term()' in specs (basically, untyped) to suppress false positives in some cases. It was also quite slow.

Still, a much better situation than Ruby.


Also worth noting that erlc catches typos, again unlike Ruby.


This was the bane of our (backend developers) existence at an Elixir startup. Even if you maintained typespecs--good luck!--and signatures it was still a nightmare.


An Elixir startup?

Which one was this?



So, how did you guys end up jumping on the Elixir train? New shiny, or practical reasons?

:)


Hm seeing as it's a bitcoin startup with an .io domain.....


I'm in the same boat as far as dream languages go. But just out of curiosity, what is it about F# or Scala that you don't like? For me Scala is an amazing language that sacrificed what could have been a clean design for the sake of Java interop, while inheriting a bunch of the baggage of the Java ecosystem (no closures until recently, type erasure, needless boxing, etc). And F# is still a Windows-first ecosystem, as much as they would like to tell you otherwise. I'm currently using both, and while I like both of them well enough, I can't ever shake the feeling like I'm missing out on something better.


> I'm currently using both, and while I like both of them well enough, I can't ever shake the feeling like I'm missing out on something better.

You'd probably like OCaml, considering how F# is basically "OCaml for .NET". And it definitely cannot be accused of having a "Windows-first" ecosystem.


> what is it about F# or Scala that you don't like

Haven't used either, so giving an opinion on those wouldn't be fair.


Previous discussion: [0]

Talk on lowering the barrier to entry into Erlang ecosystem: [1] (slides: [2])

[0] https://news.ycombinator.com/item?id=7927849 [1] https://www.youtube.com/watch?v=Djv4C9H9yz4 [2] http://www.erlang-factory.com/static/upload/media/1404379022...


I feel like Elixir in general goes a long way to lowering the barrier to using BEAM and OTP.


Do you think folks who are just getting started in the Erlang ecosystem would be better to learn Elixir before Erlang?


I'm at a basic level with Elixir (played around a little bit with Erlang first), and so far I'd say definitely - the syntax choices seem very, very good, and it's a pleasure to work with. I can now read Erlang code pretty easily because of this, the concepts and structure are often the same, but introduced in a far more understandable way, and it's felt a hell of a lot smoother than going Erlang first - YMMV of course.


I think so. I would be careful not to approach it as another Ruby (there's a lot of overlap in the ecosystem and community) as there are a lot of things that can and should be done differently.


On a more lighthearted note, here is the "Erlang II The Movie" he made:

https://www.youtube.com/watch?v=rRbY3TMUcgQ&x-yt-ts=14219146...


He's also the same guy behind other videos like "MongoDB is web scale!": https://www.youtube.com/channel/UCGHa3q15FmqB4D9yNlxrdKQ

Time for a new one, I loved the Erlang sequel.


Warning: wall of text ahead.

My Erlang experience is admittedly hobbyist, and mostly as a consequence of working my way through Fred's excellent http://learnyousomeerlang.com/ and fooling around with side projects.

As I see it, Erlang does a lot of cool stuff, at a semantic and systems level, but utterly fails when it comes to things like approachability and marketing. These topics are universal to all languages, so I feel that my experience as a relative noob may help to illuminate the general outsider's perception.

First, the things it does well (taken from my admittedly tiny experience with the system):

- Simple language: I love that the surface area of the language really is not that large.

- Concurrency is built in to the core: unlike most languages, the concurrency story is just "there". There isn't fuss or ceremony or terror around it. It has been well thought out, and learning actors in their beautiful Erlang implementation helped me take that knowledge elsewhere (mainly Scala, which I'll come back to).

- Legacy: you hear about the systems people have built with it and it just blows your mind. The ability of the language and runtime to behave as a cohesive, robust unit exists in few other places, if at all. The sort of reliability and scale achieved as a result of this fact is incredible.

And not so well:

- Beginner's story: The build tool was not immediately apparent, and it seemed like the community had still not settled on one. The dependency story seemed primitive and still not settled. The repl is pretty shit compared to every other repl I've ever used: ruby, python, clojure, scala, etc. These are obviously "subjective" perceptions, but I'm a fairly captive audience. I already wanted to learn Erlang, so these things will put me off much less than someone who has to be sold on it.

- General marketing: everyone keeps talking about phone switches and Whatsapp, but I'm not writing software for a phone switch or Whatsapp. I know Erlang works for other uses because I've seen some of the cool stuff the community has done, but I don't think the rest of the world goes out of their way to find awesome uses. It's still seen as this extremely narrow, focused language, when actually people are using it for all sorts of things. A robust general-purpose library ecosystem goes a long way to address this, as people see a lib for their task and go, "ahh, a library for Foo! I can use Erlang!"

- Reliability marketing: Reliability for most programmers is not "Nine 9's of uptime". It's "how badly does this environment let me fuck up before everything goes to hell?" They are effectively two sides of the same coin, with one focused on a negative and the other on a positive. A huge, neglected part of Erlang's potential sell lies in its inability to market itself as letting you move faster and break more things. This doesn't mean you get to be an idiot, but it effectively says to the programmer, "I'm going to make it harder for you to create a brittle system. Go on and try to build that adventurous new feature, because I've got your back if shit hits the fan."

Related to these points is that there are other environments that get people "most of the way there" without having to learn a totally alien ecosystem. These are, namely, Go and Scala.

Erlangers will argue that with Go or Scala you may get some of the features of Erlang, but you don't get the whole package. This is true, but people don't think this way. People think "does this sufficiently solve my problem, and does it do so at a cost I can live with?" For people familiar with C, the cost of doing a Go project is minimal. Basically, you get concurrency and GC taken care of, while maintaining most of the syntax, semantics, deployment strategy, and programming styles you already know.

With Scala, you get robustness, concurrency, and distributed computing in the form of Akka (to a level that satisfies most people), while preserving access to Java libraries.

Personally, Scala's arguments have been persuasive to me, which is why I'm attempting to introduce it at work in lieu of Erlang. In addition to being more approachable for my mostly Ruby- and Javascript-programming colleagues than Erlang (I concede that this point is definitely debatable), I can do nearly goddamn anything in Scala: data analysis, NLP, network services, webapps, number crunching, etc. I work mostly in data on a team that has a large deployment of microservices, and for many questions we face I wouldn't even know where to begin in Erlang. Scala, despite its flaws in terms of binary compatibility, immaturity, and type snobbery, retains a relentlessly practical, marketable feel that I feel much more confident in bringing before a group of programmers who know neither ecosystem.

If the Erlang community hopes to see the language and system achieve greater general adoption, it has to learn to sell to the general case. It doesn't have to win, but it has to make it easy for people to reap Erlang's amazing features while not missing their Java, Ruby, or Python libs too much, or taking too long to get started. Big ups to Elixir, as its community seems to be working hard to address these problems, and hopefully greater-Erlang can capitalize on the momentum.


Regarding the repl, when I toyed around with Erlang I found the emacs mode to be pretty excellent, which would explain why the bare repl has not received much love.


Well, I've read and re-read this comment hoping to glean something useful, and all I can gather is that you find familiarity to Ruby/JavaScript programmers to be the chief selling point of a language, and that you expect Erlang to be effective in heavy data processing applications.

  > The build tool was not immediately apparent, and it
  > seemed like the community had still not settled on one.
Hm, there are a handful of projects that try to get by with essen's `erlang.mk', but the community has settled pretty well on Rebar. Even essen's projects include a rebar.config because that is standard for distribution and dependency specification.

  > The repl is pretty shit compared to every other repl
  > I've ever used: ruby, python, clojure, scala, etc.
That's a strong claim, could you elaborate? Because to me a REPL that includes the usual bells and whistles (code reloading, good autocomplete, fast startup) along with job control, the ability to connect to remote systems, and being uncrashable is a whole lot better than "shit," and in my experience is miles ahead of, say, `lein repl` or even the wonderful IPython.

  > I don't think the rest of the world goes out of their
  > way to find awesome uses.
"Handbook of Neuroevolution Through Erlang" disagrees. Indeed, that book's author claims that Erlang is the language for representing "thinking" systems.

  > its inability to market itself as letting you move
  > faster and break more things.
Huh? That's the core of Erlang's value proposition from what I've seen and any "marketing" efforts focus on Erlang's robustness (that said, nobody should be specifically championing "break more things"). Going way back into Erlang's history you can find "Erlang, the Movie," and writings by Armstrong that emphasize hot code reloading, the system's ability to recover from any error, and the VM's support for incremental improvements in software without downtime.

  > environments that get people "most of the way there"
  > without having to learn a totally alien ecosystem ...
  > Erlangers will argue that with Go or Scala you may get
  > some of the features of Erlang, you don't get the whole
  > package.
Systems you mention like Go and Scala have their own considerable strengths, but they're just not comparable for the applications at which Erlang excels: Highly reliable, low-latency (soft realtime) systems written at a high level. At the same time, Erlang is slow and provides none of the assurances a statically-typed language offers. I can't imagine an experienced engineer seriously considering Go for a project that requires Erlang's strengths, or vice-versa.

What I gather from all of this is that Erlang isn't a fit for your high-throughput data processing application(s), and that you've somehow conflated this with a "marketing" failure on Erlang's part or that Erlang's goals are not focused. But Armstrong's message about what Erlang is good for has remained consistent for decades now.

As an aside, if your Ruby/JS guys didn't take to Erlang, I'm not sure why you expect to get far with Scala. Go seems quite popular with the Ruby crowd and sounds like a strong fit for what you're doing.


Regarding the repl, please connect to a running node and run etop:start().

That's pretty awesome in my opinion.


Thanks for this thoughtful post. I have been considering learning Erlang since it has concurrency built in and I work with state systems already. I took a look into its syntax and it was quite confusing and seemed like a huge learning barrier. I will give scala and go another look.


The syntax is unfamiliar to you, which is why you found it confusing. In reality Erlang syntax is very simple and consistent, here you have most of it: http://erlang.org/doc/reference_manual/expressions.html

I don't understand people who refuse to learn unfamiliar syntaxes at all. A syntax is only bad if it matches semantics poorly; for example, in JavaScript you create functions very often, for various purposes, yet the syntax for this is rather verbose, which may hurt readability by obscuring what's really going on. In the case of Erlang, the syntax supports the semantics quite well. For example, sending async messages between processes is a central part of the language and accordingly has lightweight syntax, essentially: "Target ! SomeMessage". It's also highly consistent: for example, guard patterns in functions have exactly the same syntax as guards in "if" expressions; destructuring has the same syntax no matter where it's done, in function definition, case statement, or inline with the "=" operator. And so on.

In general you should try to understand the syntax and underlying semantics before saying anything about it.



