Though in some situations, it is better to just ignore the spurious message when it arrives, by tracking which monitors you have enabled in the process state: unknown monitors are gracefully ignored. The same pattern is useful with timeouts as well. Cancel the timeout, but if it lands in your mailbox while the cancel is in flight, you can detect that it's stale and ignore it.
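That bookkeeping isn't Erlang-specific. Here's a minimal sketch of the "track live ids, drop stale deliveries" pattern in Rust; `ProcState` and the integer ids are illustrative, not any real API:

```rust
use std::collections::HashSet;

// Track which timer/monitor ids are still live; anything else is stale.
struct ProcState {
    live: HashSet<u64>,
    next_id: u64,
}

impl ProcState {
    fn new() -> Self {
        ProcState { live: HashSet::new(), next_id: 0 }
    }

    // "Arming" a timer or monitor records its id in the process state.
    fn arm(&mut self) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.live.insert(id);
        id
    }

    // Cancelling just forgets the id; the message may already be in flight.
    fn cancel(&mut self, id: u64) {
        self.live.remove(&id);
    }

    // On delivery, a message whose id we no longer track is silently dropped.
    // Returns true only if the message was still expected.
    fn handle(&mut self, id: u64) -> bool {
        self.live.remove(&id)
    }
}

fn main() {
    let mut st = ProcState::new();
    let t = st.arm();
    st.cancel(t);
    // The timeout fired anyway while we were cancelling: ignore it.
    assert!(!st.handle(t));
}
```

The point is that the race between cancel and delivery becomes harmless: a stale delivery is just a lookup miss.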
It's almost always better to ignore down messages that you don't care about.
Personally, I love both Erlang and Elixir and hope to ride out the rest of my career on these platforms.
So why Rust? Like Erlang, it's damn good at concurrency and enables functional programming. It also enables event-driven programming through tokio, which is a better fit for web servers than green threads (you're mostly waiting on the network). Unlike Erlang, it's super fast (even at math), has a great type system, amazing error messages, and low memory usage, and the community is already quite a bit bigger.
You say you had to rewrite Mnesia, but Rust doesn't even have transactional memory to start with.
Suggesting that event-driven programming is anything like language-level threading like Erlang and Go is crazypants. A common error in event-driven languages is that you end up writing code that gets slow, and blocks the entire event loop, and everything falls apart, and you get paged at 2 AM, until you add another event loop.
One of the best parts of BEAM is that since processes are isolated and preemptively scheduled, you don't have to manage your own call-backs by hand, and although things may get slow, they'll typically only get slow for that one given process.
In addition to this, the GC in Erlang is great, compared to the lack of GC in Rust. I think most of us can agree that unburdening yourself of having to write memory management code is a good thing.
Of course, BEAM isn't perfect; after all, it hasn't had nearly as much investment as the JVM and CLR, but I believe its semantics are right for writing predictable, low-latency code.
Also, containers have nothing to do with ephemerality. Cluster management systems which dynamically schedule containers may result in ephemerality.
Erlang isn't really a dataplane runtime. Oftentimes, you implement your control plane in Erlang, and farm out your dataplane to NIFs, ports, or something else entirely.
You're right, disterl is a fucking mess. But it's better than nothing, and better than having to write your own IPC.
I suggest you read Joe Armstrong's thesis, or "A History of Erlang", for more.
He said ditch Mnesia, not rewrite.
Erlang is incredibly verbose. It has very few abstractions. Rust has a lot of abstractions. I've rewritten a few Erlang projects now in Rust, and I've been able to come out with close to or under the same LOC (I always use specs, though).
> It doesn't have any of the stories around immutability that Erlang does.
`let` bindings are immutable by default. You specifically have to ask for mutability, and even then, the Rust borrow checker will always enforce a single writer. Rust definitely has a story around immutability.
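The defaults are easy to demonstrate. A tiny sketch (the `bump` function is just an illustration):

```rust
// `let` bindings are immutable; mutation requires `mut` and is checked
// by the borrow checker (one writer at a time).
fn bump(start: i32) -> i32 {
    let base = start;        // immutable binding; `base += 1` would not compile
    let mut acc = base;      // mutation must be opted into explicitly
    {
        let w = &mut acc;    // exclusive (&mut) borrow: the single writer
        *w += 1;             // while `w` lives, no other access to `acc` is allowed
    }                        // borrow ends here; `acc` is readable again
    let base = base + acc;   // shadowing, not mutation: a new binding named `base`
    base
}

fn main() {
    assert_eq!(bump(5), 11);
}
```

Shadowing gives you a functional-style "update" without any mutation at all, which is how a lot of idiomatic Rust ends up looking fairly Erlang-ish.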
> You say you had to rewrite Mnesia, but Rust doesn't even have transactional memory to start with.
Which is good, because Mnesia is crap and I've had to deal with chucking it out the window several times now. Rust has good library support for STM that is completely optional.
> Suggesting that event-driven programming is anything like language-level threading like Erlang and Go is crazypants.
They're both models of concurrency, and you can achieve parallelism through either. Oftentimes one is better than the other for a given task (usually determined by the bottleneck), such as serving web requests bottlenecked by IO.
> A common error in event-driven languages is that you end up writing code that gets slow, and blocks the entire event loop, and everything falls apart, and you get paged at 2 AM, until you add another event loop.
Most high performance web servers use event loops. See the paper "An Architecture for Highly Concurrent, Well-Conditioned Internet Services" for an overview. There are lots of issues with green thread models. See some of the work done by Brian Cantrill for examples, and why it may be a bad idea to bake them into a language.
> One of the best parts of BEAM is that since processes are isolated and preemptively scheduled, you don't have to manage your own call-backs by hand, and although things may get slow, they'll typically only get slow for that one given process.
> In addition to this, the GC in Erlang is great, compared to the lack of GC in Rust. I think most of us can agree that unburdening yourself of having to write memory management code is a good thing.
Disagree strongly. See Steve Klabnik's latest posts on static garbage collection in Rust for an enlightening take. Rust does have a GC, and it has no runtime performance hit.
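The "static GC" point is about ownership: the compiler decides at compile time where values are freed, and `Drop` runs at that statically known point. A small sketch (the `Resource` type and `drop_order` helper are illustrative):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A resource that records when it is freed. Drop runs at a point the
// compiler determines statically (end of scope); no collector is involved.
struct Resource {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Resource {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _a = Resource { name: "a", log: log.clone() };
        let _b = Resource { name: "b", log: log.clone() };
        // Both are freed right here, in reverse declaration order.
    }
    // All clones of the log have been dropped, so we can unwrap it.
    Rc::try_unwrap(log).unwrap().into_inner()
}

fn main() {
    assert_eq!(drop_order(), vec!["b", "a"]);
}
```

Cleanup is deterministic and pay-as-you-go, which is the trade being made against a runtime collector.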
> Of course, BEAM isn't perfect; after all, it hasn't had nearly as much investment as the JVM and CLR, but I believe its semantics are right for writing predictable, low-latency code.
Erlang isn't low latency. It has predictable latency.
> Also, containers have nothing to do with ephemerality. Cluster management systems which dynamically schedule containers may result in ephemerality.
This is pedantic. Any non-trivial container deployment will have to deal with ephemerality. If you're replacing an Erlang cluster, it will be even more of an issue, because you'll need some level of fault tolerance from the orchestrator.
> Erlang isn't really a dataplane runtime. Oftentimes, you implement your control plane in Erlang, and farm out your dataplane to NIFs, ports, or something else entirely.
NIFs are extremely dangerous. We've had critical bugs that have taken down entire clusters thanks to NIFs. The architecture you're describing is also exceedingly rare. Most Erlang deployments are handling soft real-time workloads, like routing chat messages, queueing, and serving web requests, that have no language separation between control and data.
> You're right, disterl is a fucking mess. But it's better than nothing, and better than having to write your own IPC.
In many cases it is better than nothing. It will take a huge amount of wasted effort to fix some of the scaling issues I'm currently having with our Erlang cluster.
> I suggest you read Joe Armstrong's thesis, or a History of Erlang for more.
I've read Joe's thesis. I'm assuming your point is that I somehow don't know anything about Erlang, despite having worked with it for years professionally, attended multiple Erlang Factories, and given talks on the subject.
It's awesome to meet a Rustacean who's familiar with Erlang.
So, I've deployed Erlang pretty happily, and I've tried to pick up Rust for a hobby project -- this was around 12 months ago. I tried again about 6 months ago.
I found myself trying to write immutable code, but ended up spending a bunch of time writing out blah.clone(), passing it into a lambda or another very short function to hand it over.
This only got worse when I tried to use libraries that expected Cell, and dealing with Rc.
I also found myself trying to use channels in Rust as a way to do message passing between threads, which started to feel really awkward. My fear with locks, or on-the-fly unwrapping is that I'll end up in a shitty, impossible to debug situation. Is there a more structured approach to concurrency, and immutability together which prioritizes safety over speed?
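For context, the kind of channel-based message passing I was attempting looks roughly like this: one owner thread holds the state and everyone else talks to it over a typed channel, so there are no locks and no on-the-fly unwrapping of shared data. An illustrative sketch (the `Msg` enum and `spawn_counter` are made-up names, not a library API):

```rust
use std::sync::mpsc;
use std::thread;

// Erlang-flavoured messaging: the state lives in exactly one thread.
enum Msg {
    Add(i64),
    Get(mpsc::Sender<i64>), // reply channel, like `From` in a gen_server call
}

fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total = 0;
        // The receive loop serializes all access to `total`.
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => {
                    let _ = reply.send(total);
                }
            }
        }
    });
    tx
}

// A synchronous "call": send a request carrying a reply channel, then wait.
fn get(tx: &mpsc::Sender<Msg>) -> i64 {
    let (rtx, rrx) = mpsc::channel();
    tx.send(Msg::Get(rtx)).unwrap();
    rrx.recv().unwrap()
}

fn main() {
    let counter = spawn_counter();
    counter.send(Msg::Add(2)).unwrap();
    counter.send(Msg::Add(3)).unwrap();
    assert_eq!(get(&counter), 5);
}
```

Because `mpsc` preserves per-sender ordering, the `Get` observes both `Add`s, the same guarantee you'd lean on with an Erlang mailbox.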
In my attempt to find off-the-shelf libraries for concurrency and I/O, I found that I was writing callback hell, or callback chaining out the wazoo. Are there any good examples of being able to write sequential code (a la OS-level threading, Go, or Erlang) while handling I/O and compute concurrently?
As far as async / event-based programming goes -- I agree that the underlying implementation probably wants to use an event loop in some manner, but I think having to reason about preemption during compute is kinda annoying.
I'm going to challenge you on containers though. I've been doing containers for a while now -- as long as I've been doing Erlang in fact. Containers and schedulers are two different things. If you want to look at statically-scheduled containers, plenty of people run Docker, and LXC (or LXD) without a reactive scheduler.
Generic schedulers provide neither the registration logic nor the fault-recovery logic that Erlang gets from treating every request independently. There are cluster managers, like Akka's or Apache Helix, that do this, but the closest thing I know to a generic scheduler that does this is Kubernetes. That comes with a whole lotta other baggage.
a) Not sure if it changed, but mnesia startup was very brain damaged -- fixing up the local data from disc, then throwing it all away to load from peers, is a lot of wasted time. It's much faster to remove all the local tables on disk before starting the node so it short-circuits to copying from the peers. Even when it's faster, sending over half a terabyte of data takes a while. Some sort of persistent transaction log for peers would be nice.
b) network partitions aren't fun at all
c) we direct mnesia read and write for a key into a specific process to enforce serialization, and then we use dirty read/writes; so we skip all the locking.
d) we've certainly patched a lot of things in transaction sending and receiving over long distances, especially needed if your network isn't clean.
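The serialization trick in (c) is portable to any runtime: route every operation for a given key to a fixed owner, and all writes for that key serialize without table locks. A rough sketch of the idea (all names here are illustrative; this is the moral equivalent of dirty ops behind a serializing process, not mnesia's actual API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::mpsc;
use std::thread;

enum Op {
    Put(String, i64),
    Get(String, mpsc::Sender<Option<i64>>),
}

// Each shard is one thread owning one table; its mailbox serializes ops.
fn start_shards(n: usize) -> Vec<mpsc::Sender<Op>> {
    (0..n)
        .map(|_| {
            let (tx, rx) = mpsc::channel();
            thread::spawn(move || {
                let mut table: HashMap<String, i64> = HashMap::new();
                for op in rx {
                    match op {
                        Op::Put(k, v) => {
                            table.insert(k, v);
                        }
                        Op::Get(k, reply) => {
                            let _ = reply.send(table.get(&k).copied());
                        }
                    }
                }
            });
            tx
        })
        .collect()
}

// The same key always hashes to the same shard, so its ops serialize.
fn shard_for<'a>(shards: &'a [mpsc::Sender<Op>], key: &str) -> &'a mpsc::Sender<Op> {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    &shards[(h.finish() as usize) % shards.len()]
}

fn main() {
    let shards = start_shards(4);
    shard_for(&shards, "k").send(Op::Put("k".into(), 7)).unwrap();
    let (rtx, rrx) = mpsc::channel();
    shard_for(&shards, "k").send(Op::Get("k".into(), rtx)).unwrap();
    assert_eq!(rrx.recv().unwrap(), Some(7));
}
```

Skipping the locking only works because the routing makes concurrent writers to the same key impossible, which is exactly the invariant (c) enforces.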
Anyway, thanks for your thoughts all over this thread.
- Large tables make restarts really slow (have to read everything from ETS).
- Table dumps cause spikes in load.
- Uses a lot of RAM.
For net partitions, I just alert on it and fix it manually.
Mnesia isn't bad as a cache, but it's bad as a database.
Erlang was designed to be reliable and scalable. It was not designed to be fast. If fast is a hard prerequisite for you, you're right, you should go with another language. But Erlang also does a ton of things correctly (again, in the domains it was designed and optimized for), and is battle-tested far beyond Rust, since it has been around for much, much longer. That's not to say Rust is a bad language, but one should probably not judge its merits by its current popularity. That holds true for any language or tool.
Anyways, if you want abstraction (the lack of which was one of your complaints about Erlang), you should take a look at Elixir. Specifically, Phoenix is excellent for web programming, especially if you need real-time messaging at scale.
We've had to patch the BEAM many times to keep moving forward, and work with super dangerous NIFs to talk to other systems.
The issues I've had with Erlang had nothing to do with the products I've worked with. Erlang was a 'good fit' for them. Elixir and Phoenix doesn't really solve any of these issues, and neither of them are particularly 'battle tested'.
edit: nevermind I saw your other post, but confirms what I expected.
+ you possibly need to bear in mind the design goals: that Erlang is designed to run as a highly reliable, self contained system in _a single geographic location_, with that system possibly left to run on its own for long periods of time (years)
I've never thought about the dist heartbeats as a scaling problem. If you have thousands of dist nodes, and your nodes have small memory, the dist buffers for each connection add up -- I think the default is 8mb; you can tune it, but it's a scaling concern, especially if you have nodes far apart from each other.
Really, the root design of Erlang was for two nodes colocated in a single chassis. That said, it turns out the design scales pretty well to much larger numbers of nodes, and nodes farther apart, but you have to be careful with some things. pg2:join and leave operate under a global lock, which will be slow if you have contention on the lock, or if one of your nodes has some problem where it's still up but very slow. Mnesia doesn't do well with queuing without a lot of help, and schema operations under queuing are definitely a bad idea as well.
If you want to run Erlang at larger scales, you will need to be ready to poke around in OTP, and occasionally in BEAM as well. If you're running big systems, IMHO it makes the most sense for your Erlang nodes to fill your physical nodes, so I don't see much need for containers; but if you do use containers, you need to figure out how to keep their names consistent for Erlang, or it's going to be confused. (OTP has a concept of a 'diskless' node which would seem to be a good fit for an ephemeral systems environment, but I must admit I haven't played with that.)
That's essentially what I've had to do in my career as an Erlang engineer. Erlang requires way more massaging and work than the stories people tell about it would lead you to believe.
I think this is the case regardless of what languages or systems you use, but more well used systems may have more experts and more documentation to lean on.
For things that are a good fit for Erlang, it seems worth it to train up a couple people with deep internal knowledge of the VM you're using. As you said in another part of the thread, Erlang doesn't have a lot of abstraction -- most scaling problems aren't too many layers deep.
Containers are where I've had issues -- not necessarily anything drastic, but I've found myself dropping half of the things I really want from an Erlang system (mainly making as much as possible stateless rather than stateful, and not using supervision trees to their full potential) to buttress against the ephemeral nature of containers (I haven't really looked at diskless nodes in much detail either, though).
It would be hard to burst stateful (mnesia) nodes --- schema operations require a lock across all the nodes in the schema, and that lock requires that the nodes not be in the middle of the 'log dumping' process (where the global transaction log gets divided into per table logs and such), which means long delays in high volume situations, and even longer delays if doing multiple schema operations. This could probably be patched around, but... In my team's experience, our mnesia nodes were generally ok under higher than normal load, expansion was driven by data size. Expansion could be a lot nicer, but I haven't heard of many database systems that handle expansion off the shelf.
So that leaves stateless nodes. I don't see why you couldn't burst those, especially if using standard dist. Bring up the host, push your software, connect to one dist node, and get meshed automatically, once you see all the pg2 groups you need to operate, enable traffic.
That said, we never did too much of that, we're in bare metal hosting so we don't have an incentive to run different server counts at different times of day, and provisioning isn't fast enough to handle incidental spikes -- we have a pretty good model of what spikes to expect, and provision to handle that load being mindful of the possibility of a load spike during a network or datacenter availability incident.
In my opinion, a more sensible approach is Elixir + Rust (via rustler). You get the elegance and productivity of Elixir and the concurrency and fault tolerance of Erlang/OTP, while still being able to write super fast, low-level code in Rust.
Plus, you shouldn't use a single language across your whole stack. When the only tool you have available is a hammer, everything looks like a nail.
I was able to learn Rust in about a week by reading the O'Reilly book. Compared to C++ it's almost tiny (I have Bjarne's C++ tome and the Meyers books right next to my Rust books, actually), and unlike C++, the compiler will essentially teach you the language if you didn't learn enough from the book to write code that compiles.
Once you've put in the initial investment, Rust code is way easier to maintain, read, and scale. It's also easier to onboard people. Onboarding people for Erlang is hard, and it's hard to hire for. Rust on the other hand is familiar to all of our FP people who like Haskell and Erlang, and our Java and C++ devs.
I've also found it extremely suitable for writing services. We have many services in Rust running in production right now. We have a service toolkit for our cross cutting concerns. We can run our IDL through a compiler and get a client and server that are on the order of 20x faster than their Erlang counterparts.
At the end of the day, use whatever language you prefer, just keep in mind that software needs to be (1) shipped, (2) maintained (by multiple people who read each other's code) and (3) evolved. I would also add (0) experimentation; before one even ships any code, one ought to easily experiment with various ideas.
- Maintainability: Rust signatures not only tell you the types of arguments, but also their lifetimes. A signature in Rust is an extremely strong abstraction. Traits can further constrain types making it essentially a game of plugging the right blocks in to the right holes. Erlang is like having the blocks but all the holes are under a tablecloth and you just have to guess how to fit them in.
- Correctness: Without a static type system, Erlang does very little for correctness. Rust has pattern matching as well, and can enforce exhaustiveness. Not even Haskell does that. Lifetimes guarantee safety even for shared pieces of data.
- Speed: Obviously, Rust is a compiled, manually memory-managed language with near-C++ performance.
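The exhaustiveness point is easy to demonstrate. A small illustration (the `Event` enum is hypothetical):

```rust
// The compiler rejects a non-exhaustive match outright, so adding a new
// variant breaks every match that forgot to handle it -- at compile time.
enum Event {
    Connect,
    Disconnect,
    Ping(u32),
}

fn describe(e: &Event) -> String {
    match e {
        Event::Connect => "connect".to_string(),
        Event::Disconnect => "disconnect".to_string(),
        // Deleting this arm is a compile error, not a runtime surprise:
        Event::Ping(seq) => format!("ping {}", seq),
    }
}

fn main() {
    assert_eq!(describe(&Event::Ping(3)), "ping 3");
}
```

This is the static analogue of adding a clause to every `receive` by hand in Erlang, except the compiler audits it for you.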
On every metric, Rust offers a lot more than Erlang. As someone who has spent years with Erlang, Rust is simply better for professional development.
Coming from a company that uses elixir heavily and has made a significant investment in rust, I don’t think we would ever use solely rust on our distributed systems. However, we have rewritten some code that elixir was too slow at in rust and exposed it as a NIF on BEAM - and that has worked well. (Blog post on that soon hopefully).
I do admit, we are also going to be ditching mnesia for one of our clusters for our own in-house simpler system (ETS replication with different consistency/netsplit guarantees for our use-case), we've had to write our own cross-node process monitoring solution (at peak we see 200M+ cross-node monitors on our cluster), and we've also had to overcome the limits of message-fanout on distribution as well (https://github.com/discordapp/manifold).
However, for operating at our scale (peak 9m ccu @ 5m events/sec fanout to clients), we run a surprisingly small number of servers for our real time system (~120).
EDIT: I can't reply to your post below, but I think the runtime introspection we run into is not dealing with OS level metrics, but application level introspection. Introspecting the state of processes, writing code in the repl to debug issues within the cluster, benchmarking to find hot functions or where specific processes are spending a lot of their time. Capturing traffic to replay on a test cluster to simulate production load, all becomes very trivial with BEAM.
I said this earlier, but we've essentially separated operation concerns from our applications, and that opens us up to relying more on knowledge of Linux which is easier to hire for, and we can reuse that knowledge and all our tooling with any other languages we want to use.
Maybe it doesn't throw an error message, but GHC does warn about it when you use the -Wall flag. I believe you can get the behaviour you want by turning on only that specific warning (I forget what it's called) and using -Werror.
Rust is a mess of amateurish, overcomplicated, poorly understood, hype-driven crap. Tokio is utter bullshit (look how Erlang or Go solve the same problems with an order of magnitude fewer lines of code), etc.
Rust, it seems, is repeating the story of Ruby, where a crowd of overly excited (for no reason) amateurs quickly (without understanding) pile up "solutions" to really hard problems which have been researched by the best minds for the last 5 decades or so.
For example, all the concurrency bullshit could be boiled down to the well-understood concept of a software interrupt, which is hardware-assisted to be a lightweight isolated process. No sharing, no threading, no cooperative multitasking, no bullshit.
On the other hand, there are Streams and Futures, which have also been well understood and researched.
Finally, the Actor Model defines how to build distributed systems the right way - the way Mother Nature does (isolated entities communicating by message-passing) - which is at the core of Erlang and things like Akka.
Erlang and Go are the best examples of how small, uniform and simple systems could be when based on the right principles and proper abstractions. Rust is the opposite of this.
Please be more specific about the amateurism.
> Tokio is utter bullshit (look how Erlang or Go solve the same problems with an order of magnitude fewer lines of code),
No, they work in fundamentally different spaces.
For example, ML's (and Erlang's) unification of bindings via unified pattern-matching everywhere is a major achievement and canonical example. Haskell's unified approach to typing is another. Scheme's everything-is-an-expression, and even Lisp's unification of the representation of code and data, are the great discoveries of old times.
PL design is hard, design of good runtimes (OTP, Go) is even harder. Ignoring almost everything which was good and true in PL field is definitely amateurism.
I don't even want to start on what kind of nonsense Tokio is. Universal event-driven frameworks are the same madness as J2EE. On the other hand, ports, typed channels, futures, or pattern-matching on receive - support for fundamental concepts in the language itself is the right way.
You need to wait a bit. Tokio is a low-level abstraction. People will build on top of this. Let’s be real, it’s the most promising language of late.
How long did it take for erlang to mature. Rust has come insanely far in the little time it had. Give it five more, you’ll be surprised.
Do you have an opinion about Scala/Akka?
I am a big believer that you should continually invest in learning and mastering new languages. Each language gives you a different perspective on how to solve problems.
But as far as the right tool to build professional software, there are caveats. Safety, correctness, speed, and maintainability are all important factors in choosing the right tools.
I should also say I've had plenty of interviewees who couldn't explain the difference between concurrency and parallelism, and whose knowledge of these concepts was limited to spawning a pthread and locking shared data. Needless to say, these people tend to do really poorly at explaining how to scale a distributed system.
Contrast that with the Erlang developers we interviewed, who think in terms of scale. Their answer is almost always something that could accept 10 requests/s or 10 million. Same thing with our Haskell interviewees. They'll answer our algorithms questions by writing out the types and deriving an answer with a single expression. I had an interviewer who was extremely confused by this. We hired the guy that confused him; not only does he understand concurrency, he also understands laziness, and we love lazy developers :)
Is there a good answer when it seems people don't even agree on what those terms mean and apply to in CS?
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously.
Yes, yes it can.
Also, fundamentally, I think you're way out of the norm in terms of system time to build. Getting the kinds of reliability and business value guarantees out of Rust is enormously harder than what you get out of the box with the BEAM. Having to hand-roll everything would take literally months, where I can be done and providing business value in hours in the BEAM ecosystem. Now, could Rust get to this place in another year or two? Completely. But it isn't there now, and, it still ignores my second issue...
...which is difficulty. I've been programming for just under 7 years, and while I've not learned C++, Rust is not only by far the most difficult language I've encountered thus far, it's exponentially more difficult. Months to grok the basics, likely years to be effective in it. You might've had an easy time hiring for this, but that might say more about your social group and hiring channels than the actual availability of talent.
What is it you want to know? I'm giving you my personal experience, not an essay. It's hard to find people who have professional experience with Erlang, so I thought it would be useful to other people who are evaluating it to hear from someone who has been there. I'm more than happy to dig into things more if there's a pointed question.
> Getting the kinds of reliability and business value guarantees out of Rust is enormously harder than what you get out of the box with the BEAM. Having to hand-roll everything would take literally months.
It depends on your needs. My issue is that it is enormously hard to fix the BEAM once you have built something successful with it. It's hard to hire for. Working with it involves lots of esoteric knowledge.
We also know that people are adopting highly reliable and scalable systems. It's just not Erlang. Cloud enables this. Containerization enables this. Instead of Erlang's IPC, you can use an IDL and an RPC compiler. Instead of pids, you can have a service registry using etcd or DNS. You don't have to hand roll any of that.
What Rust gives you is the ability to quickly build correct, fast, and maintainable systems that are easy to scale because concurrency is a central theme of the language.
> I've been programming for just under 7 years, and while I've not learned C++, Rust is not only by far the most difficult language I've encountered thus far, it's exponentially more difficult.
What languages have you learned? Did you learn about manual memory management in school/camp/on your own? What's been difficult about it for you? I'll admit, I know a lot of languages. Most of the things in Rust are familiar to me outside of lifetimes, which I don't consider to be too difficult to learn. It's just making explicit something you previously considered implicit.
We have hired three new grads who we started on Rust, and we got them up and running in a few weeks. They did have books, mentorship, and assigned work involving small tasks to ramp up on.
What kind of projects have you built? What were the primary needs? What were the constraints?
> My issue is that it is enormously hard to fix the BEAM once you have built something successful with it.
There are a lot of people in the community who are more than happy to give advice for free on this stuff. The fixes that usually need to be applied are easy to implement, which wouldn't be the case for a hand-rolled system. It's hard to believe you hit a genuinely unique problem unless it was pre-2015, and the world of today's BEAM is an entirely different place than before it, which should itself be considered.
The dismissal of Elixir, I think, says a lot about your view of the ecosystem - more time and energy has been put into that half of the community in the past few years than Erlang had in the previous decade (which says a lot about just how good Erlang was before the Elixir community came along).
> We also know that people are adopting highly reliable and scalable systems. It's just not Erlang. Cloud enables this. Containerization enables this.
This really seems like a complete misunderstanding of the kinds of easy bonuses and guarantees you get in the ecosystem. Implementing the kinds of things you're describing literally requires teams - teams! - of people to build and run. My employer is currently in the ramp-up to this, and the amount of time, energy, and rough edges in the ecosystem right now is out of control. Containerization is great, cloud is great, but they don't automatically give you hot deploys, easy data handoff, or automatic introspection. On top of that, they're difficult to build and debug (since the tools for such are designed for a very different Unix world), they have poor separation of concerns (half the tools that handle this stuff also do 2 or 3 other things, all in different ways, and all with very blurry boundaries), and I really don't think finding knowledgeable people for them is much better, considering half the tools have existed for less than 4 years (coincidentally the same length of time as Erlang's newest comer).
> Instead of Erlang's IPC, you can use an IDL and an RPC compiler.
To me, this sounds like saying, "Instead of this ultra-fast and agile big rig, you can put together a raft with these here twine". Once again- you've gotta build the world, you lose the niceties of the ecosystem, and it's just time time time.
That's not to say that an RPC can't be excellent- it absolutely can- but it isn't easy without a lot of infrastructure.
> Instead of pids, you can have a service registry using etcd or DNS. You don't have to hand roll any of that.
Who uses pids anymore when you've got `bitwalker/libcluster` providing service discovery via whatever mechanism you want (including etcd, Kube DNS, Kube selectors, Consul, EC2 tags...)?
> What Rust gives you is the ability to quickly build correct, fast, and maintainable systems that are easy to scale because concurrency is a central theme of the language.
While I definitely believe that (Rust is nothing if not a real marvel of engineering), it's got a long way to go before it starts removing any reason to use the BEAM. You call those systems maintainable - but how long have you been maintaining them? What team sizes do you often work in; what do your projects require? BEAM systems are famous for scaling to tens or hundreds of millions of concurrent users, with no downtime, on engineering teams that could share a couple of pizzas.
I think Rust will get there- and that's the whole reason I've spent so much time in it- but we're just barely starting to get quality Actor system implementations, and they haven't been made easy to use either.
> What languages have you learned? Did you learn about manual memory management in school/camp/on your own? What's been difficult about it for you? I'll admit, I know a lot of languages. Most of the things in Rust are familiar to me outside of lifetimes, which I don't consider to be too difficult to learn. It's just making explicit something you previously considered implicit.
I didn't learn about manual memory management until very late (~3 years ago), so that's definitely where a lot of the introductory difficulty has been. But in addition: the depth and complexity of the type system, how that type system interacts with its memory model, the inconsistencies between different types because of pre-implemented traits, etc. Simply reading the Rust book took well over a month of serious study - learning Elixir via "Programming Elixir" (which I consumed before learning Erlang) taught me the majority of the language in an afternoon, and the basics of OTP by the end of the book (later that week of light reading).
Now, years later, I wouldn't consider OTP that difficult to learn or understand. But this one I'll forfeit, if only because I seem to have understood it naturally a bit faster than most(because for me, the concepts honestly seemed to "just make sense". I remember several times thinking, "This is exactly how I would build this.", which actually allowed me to forget a lot that I learned to understand and effectively program in other languages, especially around concurrency).
> We have hired three new grads who we started on Rust, and we got them up and running in a few weeks. They did have books, mentorship, and assigned work involving small tasks to ramp up on.
Admittedly, I've had several things blocking me on this:
1. The Rust Book, while good, was long and didn't always use the best examples.
2. I only have one serious Rustacean that I can access regularly (a former Mozilla employee).
3. I've struggled to find projects that made me genuinely think, "Rust would be perfect for this", outside of small callouts to it from other languages. I'm just now on one that is going to have some unusual math that needs to be performed quickly and continuously that I might use it for, but even still.
Because of this, I'd definitely concede that it might be possible to do it much faster if one had adequate support. But without prior experience with memory management and deep type systems, I doubt it'd be sub two months without a constant pairing partner (which might do the trick, and which I advocate for in most situations anyhow).
I apologize; reading over this, I see many places where it likely comes off as hostile, and that's not my intention. It's just that a lot of your comments honestly surprise me, and directly contradict both my lived experience with the ecosystem and that of teams I hold a lot of respect for.
While this whole comment thread has turned into a bit of a flame war, I appreciate your candidness and think you've tried to fairly express your opinion. As someone who likes semi-obscure languages and systems, it's valuable to see when and where system designs fail for people. Personally, without Elixir I wouldn't want to delve into Erlang/OTP, for many of the problems you mentioned. The Erlang syntax seems "elegant" up front on small problems, but digging into, say, CouchDB or Riak, it's a bit of a pain to follow. The lack of good namespacing, etc., I find to be a pain.
Elixir is overall a syntax that works well for me, at least as well as Rust's. The Elixir team has made great strides in providing good compiler error messages, and is actively improving the distribution/packaging story. Hex and mix are fantastic, and on par with what I've used with Cargo.
> It depends on your needs. My issue is that it is enormously hard to fix the BEAM once you have built something successful with it. It's hard to hire for. Working with it involves lots of esoteric knowledge.
Not sure I follow this. I've dug into the BEAM source and found it to be well designed and relatively easy to follow. Especially compared to CPython, though not as nice as Lua. Presuming you mean fixing dist_erl and such, that'd make more sense. But how would that be any different than, say, tweaking Consul?
> We also know that people are adopting highly reliable and scalable systems. It's just not Erlang. Cloud enables this. Containerization enables this. Instead of Erlang's IPC, you can use an IDL and an RPC compiler. Instead of pids, you can have a service registry using etcd or DNS. You don't have to hand roll any of that.
Exactly! Except it works in two directions. Having to learn and deploy, say, k8s and Consul, and then learn gRPC and figure out how to route messages, etc., is a lot of work. While I've not deployed a large cluster on the BEAM, it's clear that I wouldn't want to scale a BEAM cluster beyond a few dozen nodes. However, a few dozen nodes can handle the workload of probably 90% of companies. Being in small startups, if I can effectively get binary RPC, distributed namespacing, etc., for free, it saves a lot of trouble and effort. I've also found it's not too hard to entirely replace the distributed namespace mechanism with projects like Lasp or Swarm, or, heck, likely shunt it off to Consul in the future.
All that said, I like Rust and plan on using it in the future where I can, likely in conjunction with Elixir via rustler. My work is primarily IoT, where Rust is slowly evolving. I'd love to have a Rust "OTP" and actor system for IoT devices where the BEAM doesn't fit.
I'm working on making the dialyzer error messages better, too, in Dialyxir and Erlex =).
Out of curiosity, have you seen anything in Dialyzer for dealing with typing GenServer messages and handlers? I haven't figured out a way to spec message handlers, as Dialyzer seems to only check that you have some `handle_cast` implemented, and adding a behaviour doesn't compose. Given the key role GenServer plays in OTP, it's a big gap IMHO. I've wondered if the new named guards in Elixir could help with it somehow.
I think this is key. There are issues, and there are parts of the platform that are crap, but Erlang (via Elixir as well in my case; I'm not sure I'd feel the same way if it were purely Erlang) is, on balance, the most useful and usable tool I've ever used for building networked services.
You might want to check out Grisp - https://www.grisp.org/ - Erlang VM ported to RTEMS. They also offer devboards with ample resources designed to run this thingy.
Also, you should take a good look at the Pony language if you haven't yet. It's at a much earlier stage than Rust, but it comes from people who know and love Erlang.
I'm surprised to read this, Erlang and Rust have almost nothing in common. It's understandable that BEAM doesn't suit your needs but suggesting that everyone should write Rust instead is grossly misleading. They are suited to solving different problems!
I'd also like to point out that at both of my jobs, we used Erlang for use cases that are most heavily associated with the language, and for which popular open source alternatives in Elixir/Erlang already exist. We have had better success scaling, operating, and hiring for Rust. We've saved man hours, time, and money.
So I am saying that Rust is a replacement for Erlang. It's not a tool for a different problem.
So compared to Go, the Rust folks seem to have paid close attention to the things people liked about Go, and we have most of those things: great tooling, auto-formatting, concurrency primitives (although with freedom of choice: green threads are a third-party library, not part of a runtime). We even have an animal mascot.
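To make the "concurrency primitives, no runtime required" point concrete, here's a minimal sketch using only Rust's standard library: a worker thread draining a channel, loosely analogous to an Erlang process reading its mailbox. The `sum_via_worker` name is invented for the example.

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a worker that sums everything arriving on its channel,
// roughly like an Erlang process draining its mailbox.
fn sum_via_worker(values: Vec<i64>) -> i64 {
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || rx.iter().sum::<i64>());
    for v in values {
        tx.send(v).unwrap();
    }
    drop(tx); // closing the channel lets the worker's iterator end
    worker.join().unwrap()
}

fn main() {
    println!("{}", sum_via_worker((1..=10).collect())); // 55
}
```

Everything here ships in `std`; a library like crossbeam or a green-thread runtime is an opt-in dependency rather than something baked into the language.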
From a language standpoint, Go isn't anything remarkable (our Haskell colleague calls it a step backwards). If you listen to the Go Time podcast, Brad Fitzpatrick, one of the Go authors, even says as much. Go's big contribution was essentially web programming and goroutines.
Rust, on the other hand, has a lot. It has typeclasses (traits) from Haskell. It has lifetimes from the PL research community. It has immutability and pattern matching, just like Erlang. It has all the familiar control structures and enums/structs from C++.
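As a small illustrative sketch (the `Shape`/`Area` names are invented for the example), this is roughly how a few of those pieces combine: an enum with data, matched exhaustively as you would a tagged tuple in Erlang, behind a trait playing the typeclass role.

```rust
// A data-carrying enum, matched exhaustively -- familiar territory
// for anyone used to Erlang's tagged tuples and pattern matching.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// A trait fills the role of a Haskell typeclass: ad hoc polymorphism
// resolved at compile time.
trait Area {
    fn area(&self) -> f64;
}

impl Area for Shape {
    fn area(&self) -> f64 {
        match self {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }
}

fn main() {
    println!("{}", Shape::Rect { w: 3.0, h: 4.0 }.area()); // 12
}
```

The compiler rejects a `match` that misses a variant, which is one of the places the type system pays for itself.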
Performance of Go in terms of CPU and memory pressure is usually somewhere near Java, if not slightly better. That's pretty damn good. The JVM has improved a lot over the years. Rust performance is a hair shy of C/C++. That's amazing. All those abstractions, and the speed of C++?
This is a really strange thing to say. Erlang is still an event-driven I/O runtime; it just doesn't burden the user with continuation passing the way tokio does. What is the benefit of doing that yourself? Even with futures, it's far more verbose and awkward than it needs to be. And since the entire ecosystem is not built on this I/O system, you will always be finding libraries (even very popular ones, such as diesel) that are incompatible with it, forcing you into thread pools. Every library in use with the BEAM uses async I/O in exactly the same way.
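For what it's worth, the continuation-passing burden can be sketched with two toy functions (both hypothetical and deliberately trivial): the same computation written direct-style, the way code in a BEAM process reads, versus with the "what happens next" handed in as a closure, which is roughly what chaining futures combinators amounts to.

```rust
// Direct style: reads top to bottom, as in a BEAM process.
fn double_direct(x: u32) -> u32 {
    x * 2
}

// Continuation-passing style: the caller supplies the next step as a
// closure, roughly what futures combinator chains amount to.
fn double_cps<F: FnOnce(u32) -> u32>(x: u32, next: F) -> u32 {
    next(x * 2)
}

fn main() {
    assert_eq!(double_direct(21), 42);
    assert_eq!(double_cps(21, |doubled| doubled), 42);
}
```

Even in this trivial case, the CPS version forces the caller to thread control flow through closures; real future chains compound that at every I/O boundary.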
Erlang does provide good solutions to lifecycle management. The problem is that whenever someone picks a solution for you, you're now locked in, and it can be difficult to migrate to a new paradigm like containers.
So don't. Deploy using edeliver and give up Docker for the Erlang/Elixir parts of the project. If you really can't, you might want to switch to a language with a runtime that plays more nicely with the container view of the world. But I know companies that deploy Erlang with Docker and are happy with it. Purists frown at the idea, but it works.
Giving up on containerization isn't an option unless we want to manage two completely separate infrastructures, which we definitely do not want to do.
I don't think it's constructive to the discussion here to say X has drawbacks (and then go on about them), use Y (and then go on about it).
I have not used Erlang or Rust, and when I read this comment, it seemed flamewar-ish to me.
What exactly does 'burst' mean in this context?
Syntactically, it cleans some things up, adds some niceties, and annoyingly makes atoms require a ":" prefix (annoying because atoms are central to Erlang's readability).
Personally, I don't find the changes to make a big difference when using the BEAM, and I prefer the familiarity of Erlang's syntax. It's also the language all of the documentation for the BEAM will use.
If you absolutely can't get your coworkers to ditch dynamic/gradual typing, Elixir or Erlang are still great choices and way better than Python/Ruby. Otherwise, try Rust.
PS: Content looks cropped on an iPhone 7 no matter how you resize it.