While I agree, the article doesn't really give concrete examples.
Elixir is layered, making it easy to learn and, eventually, master. You can get pretty far with Phoenix without ever understanding (or even knowing about) the more fundamental building blocks or the runtime. In large part, this is because of its Ruby-inspired syntax. You'll have to adjust to immutability, but that's pretty much it.
Then one day you'll want to share state between requests, and you'll realize that the immutability (which you're already comfortable with at this point) goes beyond just local variables: it's strictly enforced by these things called "processes". And you'll copy and paste a higher-level construct like an Agent or GenServer and add the one line of code to the root supervisor, which is just an auto-generated file in your project. But that'll get you a) introduced to the actor model and b) thinking about messaging while c) never worrying about or messing up concurrency.
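To make that concrete, here's a minimal sketch (module and names invented) of the kind of thing you'd paste in, plus the one supervisor line:

```elixir
defmodule MyApp.Cache do
  # Hypothetical module holding shared state; `use Agent` gives it a
  # child_spec/1 so the supervisor knows how to start and restart it.
  use Agent

  def start_link(_opts) do
    Agent.start_link(fn -> %{} end, name: __MODULE__)
  end

  def get(key), do: Agent.get(__MODULE__, &Map.get(&1, key))
  def put(key, value), do: Agent.update(__MODULE__, &Map.put(&1, key, value))
end

# ...and the one line added to the children list in the auto-generated
# lib/my_app/application.ex:
#   children = [MyAppWeb.Endpoint, MyApp.Cache]
```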
Then you'll want to do something with TCP or UDP and you'll see these same patterns cohesively expressed between the runtime, the standard library and the language.
Then you'll want to do something distributed, and everything you've learnt about single-node development becomes applicable to distributed systems.
Maybe the only part of Elixir which can get complicated is macros / metaprogramming. But you can get far without ever understanding it, and Phoenix is so full of magic (which isn't a good thing) that by the time you do need it, you'll certainly have peeked behind the covers once or twice.
The synergy between the runtime, standard library and language, backed by the actor model + immutability is a huge productivity win. It's significantly different (to a point where I think it's way more accurate to group Ruby with Go than with Elixir), but, as I've tried to explain, very approachable.
> Elixir is layered, making it easy to learn and, eventually, master. You can get pretty far with Phoenix without ever understanding (or even knowing about) the more fundamental building blocks or the runtime. In large part, this is because of its Ruby-inspired syntax. You'll have to adjust to immutability, but that's pretty much it.
This is my main problem with Elixir. Even for senior developers it's hard to know how the building blocks and runtime work, and since all abstractions are leaky, you'll end up with mysterious issues that only make sense once you understand all the layers. And in the case of Elixir, there are lots.
I second that. Been working on a startup with our core system in Elixir for a year now. I only wrote my first GenServer 2 months ago and I've yet to touch protocols. I got pretty far with what you can learn in the first 2 weeks + some Ecto-specific nuances.
IMO, Protocols (and to a lesser extent GenServers) are really there for library writers; if you're not writing a library, you probably shouldn't be using them.
Heavy disagree: protocols are just an interface-like solution, and things that share a common set of functionality can probably benefit from utilizing a protocol.
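A small sketch of a protocol in plain application code (names invented): any type that needs a display summary implements it, no library required.

```elixir
defprotocol MyApp.Summary do
  @doc "Returns a one-line description for lists and logs."
  def summarize(value)
end

defimpl MyApp.Summary, for: Map do
  def summarize(map), do: "map with #{map_size(map)} keys"
end

MyApp.Summary.summarize(%{a: 1, b: 2})
#=> "map with 2 keys"
```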
GenServers I disagree with you on even more. Since they can handle arbitrary messages (handle_info), they make it a lot easier to do things like run a continuous process that performs an action every n seconds, and to handle life cycle messages from other processes. My downstream consumer says things are finished? Handle a message that cleans up the child processes and shuts down the supervisor so we don't keep polling.
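A sketch of that "action every n seconds" pattern via handle_info/2 (the interval and the work itself are placeholders):

```elixir
defmodule MyApp.Poller do
  use GenServer

  @interval :timer.seconds(30)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(state) do
    schedule()
    {:ok, state}
  end

  @impl true
  def handle_info(:tick, state) do
    # ... do the periodic work here, then re-arm the timer ...
    schedule()
    {:noreply, state}
  end

  defp schedule, do: Process.send_after(self(), :tick, @interval)
end
```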
I use them to great effect in web crawling as a producer for things like Broadway and things like Agent or Task don't have the flexibility at the level I need it.
Actually Elixir and Erlang are at the same level: they both run on the BEAM. So you need to learn the BEAM, a virtual machine that acts differently from what we are used to (the JVM, .NET).
Once you learn the BEAM, everything becomes clear and easy to use.
IMHO process state isolation and message passing are the core, and those are not so difficult to learn and apply.
Yes, you need to learn the BEAM VM, which makes it not easy to learn. It still might be worth it, but now you need to understand Elixir, Erlang, and the BEAM, in addition to your OS's API.
All I can offer are anecdotes, but I've worked at one company with a sizeable RabbitMQ cluster where we were fortunate enough to have a guy on a different team who was a BEAM maintainer in his free time. He was the only one able to debug issues (mostly around unexpected memory usage).
At another place, we had some in-house Elixir services. Developers were very happy and productive with them, but we also ran into unexpected memory usage and crashes because of it, and to this day their root causes are unclear, even after adding detailed telemetry to monitor process counts and memory profiles.
I guess I have a general issue with VM-based languages, but that might be because I spend more time debugging issues than writing software, and I appreciate the common libc/syscall API used in VM-less languages.
The number one cause of hard-to-track memory usage in Elixir, especially in services that parse JSON: someone is ingesting JSON and caching a snippet of it in an ETS table, which means the entire JSON binary is never GC'd.
This is pretty easy to identify and fix once you know about it and how to look.
To identify:
- erlang:memory() shows an unexpectedly large byte count for binaries.
- Crawl all your ETS tables and look at whether referenced_byte_size(Binary) is significantly larger than size(Binary) (you need to check the keys as well as the values, and you'll need to write something to crawl through your complex values, of course).
- If you don't find it there, it's probably references held in processes. You can crawl processes with process_info(Pid, binary), but it's a little tricky because the value you get back is undocumented.
If you find a lot of memory used by binaries, the solutions are all pretty simple:
a) if it's binaries held in processes, and those processes really don't hang onto binaries, just receive and send them, and you're running older Erlang (before 19?), try to upgrade --- there's a GC change around then to include refC binary size when checking whether process GC should run.
b) anywhere else, do StorableBinary = binary:copy(Binary) and store StorableBinary instead of Binary. This gets you an exactly-sized binary (heap or ref-counted), instead of a reference into another binary.
Parsing larger binaries (like JSON) is one way to get these overlarge binaries, but they're also pretty easy to get with idiomatic code like that shown in the efficiency guide[1]; when the system builds an appendable refC binary, it makes appending efficient, but storage inefficient.
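In Elixir terms, the fix in (b) looks something like this (table and helper names invented for illustration):

```elixir
defmodule BinaryCache do
  # Hypothetical helper: copy a sub-binary before storing it so the parent
  # binary (e.g. the whole parsed JSON document) can be garbage collected.
  def put(table, key, snippet) when is_binary(snippet) do
    :ets.insert(table, {key, :binary.copy(snippet)})
  end
end
```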
yes there is a difference between erlang and the erlang abstract format but, given my limited understanding, for all intents and purposes erlang is equivalent to the abstract format (ie i can get one from the other). elixir does not stand on its own and it’s more like a thin veneer on top of erlang than a standalone language that targets the beam vm.
that being said, the power of elixir comes from how easy it is to learn and use, the macro expansions (which in my opinion are the killer feature and are not talked about enough) and the seamless way it managed to leverage and enhance the erlang ecosystem.
To be fair, Elixir also allows developers to tag functions, which can then trigger macros. Some of these (like @doc) do not transform code, but others (like @impl, or AppSignal's transaction decorators) will.
I'm not disagreeing with that sentiment, but I learned a considerable amount while implementing my own annotations. I do agree that they can be maddening... especially in the case of Spring Boot where you can forget to add a @Component annotation, and unit tests will pass... then you'll deploy your app and immediately see it blow up in prod.
> copy and paste a higher-level construct like an Agent or GenServer and add the one line of code to the root supervisor, which is just an auto-generated file in your project. But that'll get you a) introduced to the actor model and b) thinking about messaging while c) never worrying about or messing up concurrency.
Isn't it well known that GenServers can become severe bottlenecks unless you know the inner workings of everything to the point where you're an expert?
I'm not an Elixir expert and haven't even used a GenServer in practice, but I remember reading warnings about GenServer performance: they can only handle one request at a time, and it's super easy to bring down your whole system if you don't know what you're doing.
And I remember seeing a lot of forum posts around the dangers of using GenServers (unless you know what you're doing).
It's not really as easy as just copy / pasting something, adding 1 line and you're done. You need to put in serious time and effort to understand the intricacies of a very complex system (BEAM, OTP) if you plan to leave the world of only caring about local function execution.
And as that blog post mentions, it recommends using ETS, but Google says ETS isn't distributed. So now suddenly you're stuck only being able to work with one machine. This is a bit more limiting than using Python or Ruby and deciding to share your state in Redis. That really does typically require adding only a few lines of code, and now your state is saved in an external service and you're free to scale to as many web servers as you want until Redis becomes a bottleneck (which it likely never will). You can also freely restart your web servers without losing what's in Redis.
I know you can do distributed state in Elixir too, but it doesn't seem as easy as it is in other languages. And it's especially more complicated / less pragmatic than other tech stacks because almost every other tech stack all use the same tools to share external state so it's a super documented and well thought out problem.
A GenServer may become a concurrency bottleneck in the same way concurrent data access in any other language may become a bottleneck, depending on your abstraction of choice. This is nothing specific to Elixir.
What Erlang/Elixir bring to the table is a good vocabulary and introspection tools to observe these issues. For example, if you have a GenServer as a bottleneck, you can start Observer (or the Phoenix LiveDashboard or similar), order processes by message queue, and find which one is having trouble catching up with requests. So we end up talking about it quite frequently - it is easier to talk about what you see!
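You can do a rough version of the same thing from IEx without Observer; something like this (a sketch) lists the ten longest mailboxes:

```elixir
Process.list()
|> Enum.map(&{&1, Process.info(&1, :message_queue_len)})
|> Enum.reject(fn {_pid, info} -> is_nil(info) end) # process may have exited
|> Enum.sort_by(fn {_pid, {:message_queue_len, len}} -> len end, :desc)
|> Enum.take(10)
```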
If you need distributed data, then by all means use Redis or PostgreSQL or similar; ETS is not going to replace them. What ETS helps with is sharing data within the same node. For example, if you have a machine with 8 cores, you may start 8 Ruby/Python instances, one for each core. If the cache is stored in Redis, you will do a network round trip to Redis every time you need the data. Of course you can also cache inside each instance, but that can lead to large memory usage, given each instance has its own memory space. This may be accentuated depending on the data, such as caching geoip lookups, as mentioned in the post you linked.
In Elixir, if you have 8 cores, it is a single instance. Therefore, you could cache geoip lookups in ETS and it can be shared across all cores. This has three important benefits: lower memory usage, reduced latency, and increased cache-hit ratio in local storages. At this point, you may choose to not use Redis/DB at all and skip the additional operational complexity. Or, if you prefer, you can still fallback to Redis, which is something I would consider doing if the geoip lookups are expensive (either in terms of time or money).
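A minimal sketch of that node-local cache (the table name and the fallback function are invented):

```elixir
defmodule GeoIP do
  @table :geoip_cache

  # Call once at startup; :public + read_concurrency lets every process
  # (and therefore every core) read without going through an owner process.
  def init_cache do
    :ets.new(@table, [:named_table, :public, read_concurrency: true])
  end

  def lookup(ip) do
    case :ets.lookup(@table, ip) do
      [{^ip, result}] ->
        result

      [] ->
        result = remote_lookup(ip) # hypothetical expensive call
        :ets.insert(@table, {ip, result})
        result
    end
  end

  defp remote_lookup(_ip), do: %{country: "unknown"} # placeholder
end
```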
In any case, ETS is completely optional. If you just want to go to Redis on every request, you can do that too! And for what it's worth, if I need distributed state, I just use the database too.
> Or, if you prefer, you can still fallback to Redis, which is something I would consider doing if the geoip lookups are expensive (either in terms of time or money).
What if the geoip lookup took 1.5 seconds to look up from a remote API? Is ETS still the right choice?
Based on your statement, it sounds like you wouldn't use it, since that's a long time (relative to a 25ms response). But if ETS is meant to be used as a cache, wouldn't that defeat the purpose of what it's meant to be used for?
Like, if I wanted to cache a PostgreSQL query that took 1 second to finish. Isn't ETS the primary place for such a thing? But 1 second is a long execution time. I know Cachex (the Elixir lib) uses ETS to cache things, so now I'm wondering if I've been using it for the wrong thing (caching database calls, API call results, etc.).
Normally in Python or Ruby I would have cached those things in Redis, where lookups are on the order of microseconds when Redis is running on the same $5 / month VPS as my web server. It's also quite speedy over a local network connection for a multi-server deploy. Even with a distributed data store in Elixir, you'd hit the same network overhead, right?
> if I need distributed state, I just use the database too.
This part throws me off because I remember hearing various things in Phoenix work in a distributed fashion without needing Redis.
> What if the geoip lookup took 1.5 seconds to look up from a remote API? Is ETS still the right choice?
I would use ETS to cache local lookups (for all cores in the same node). Then fallback to Redis to populate the ETS cache. But again, feel free to skip one of ETS or Redis. The point is that ETS adds a different tool you may (or may not) use.
> Like, if I wanted to cache a PostgreSQL query that took 1 second to finish. Isn't ETS the primary place for such a thing?
Here is the math you need to consider. Let's say you have M machines with N cores each. Then remember that:
1. ETS is local lookup
2. Redis is distributed lookup
If you cache the data in memory in Ruby/Python, you will have to request this data from PostgreSQL M * N times to fill in all of the caches, one per core per node. Given the number of queries, I would most likely resort to Redis.
In Elixir, if you store the data in ETS, which is shared across all cores, you will have to do only M lookups. If I am running two or three nodes in production, then I am not going to bother to run Redis because having two or three different machines populating their own cache is not an issue I would worry about.
> > if I need distributed state, I just use the database too.
Apologies, I meant to say "persistent distributed state" as not all distributed state is equal. For ephemeral distributed state, like Phoenix Presence and Phoenix PubSub, there is no need for storage, as they are about what is happening on the cluster right now.
My opinion is that this depends entirely on the cost relative to the overall task, and how likely cache hits are to occur. If cache hits are very likely and the task occurs frequently, I'd strongly consider storing it in ETS. If cache hits are unlikely, then it depends purely on how expensive the task is, but generally there isn't a lot of benefit to caching things that are infrequently accessed.
I wouldn't cache database queries unless the query is expensive, or the results rarely change but are frequently accessed.
Generally though, whether to store something in ETS or not is situational - your best bet is actually measuring things and moving stuff into ETS later when you've identified the areas where it will actually make a meaningful difference.
> This part throws me off because I remember hearing various things in Phoenix work in a distributed fashion without needing Redis.
This is true, but it depends on what kind of consistency model you need for that distributed state. The data you are referring to (I believe) is for Phoenix Presence, and is perfectly fine with an eventually consistent model. If you need stronger guarantees than that, you'll need a different solution than the one used by Phoenix - and for most things that require strong consistency, it's better to rely on the database to provide that for you, rather than reinvent the wheel yourself. There are exceptions to that rule, but for most situations, it just doesn't make sense to avoid hitting the database if you already have one. For use cases that would normally use ETS, but can't due to distribution, Mnesia is an option, but it has its own set of caveats (as does any distributed data store), so it's important to evaluate them against the requirements your system has.
Couple of other solid responses here, but going to add my own -
GenServer is immaterial - it's the single process that can be a bottleneck, just like a single thread can be in other languages. If you need multiple threads in other languages, you'll need multiple processes in Erlang. The nice thing here is that the cost of spinning up more (even one per request) is negligible, the syntax is trivial, and the error model in the event something goes wrong is powerful, whereas threads in other languages are so heavyweight you have to size the pool, they can be complicated to work with, and if things go wrong your error handling options tend to be limited.
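As a sketch of how cheap "a process per unit of work" can be (`items` and `handle_one/1` are invented), fanning work out across all cores is essentially a one-liner:

```elixir
# Each item gets its own process; concurrency is capped at the core count.
items
|> Task.async_stream(&handle_one/1, max_concurrency: System.schedulers_online())
|> Enum.to_list()
```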
Use ETS in the places you'd use in memory caching. That's it. That's what it's meant for. If you need distributed consistency, it's not the right answer. If you need a single source of truth, it's not the right answer. But if you need a local cache, that does not require strong consistency(and that's very, very common in distributed systems), it works great.
> The nice thing here is that the cost of spinning up more (even one per request) is negligible, the syntax is trivial
Do you have an example of the syntax?
Basically, how would you wire things up so it becomes difficult or maybe even impossible to shoot yourself in the foot with GenServer bottlenecks?
Also, if you had to guess, why would the author of the blog post I linked not make this trivial syntax adjustment instead of rewriting everything to use ETS?
Unfortunately posts like theirs are what comes up first when you Google for things like GenServer vs ETS.
An Elixir/Erlang process may be used to both do work and store state. This is what a GenServer is for. I don't have experience building large systems (or even production code) in these languages but have been doing tons of reading and playing in the past months.
The intuition I've built so far is that holding state in a process (a GenServer being an abstraction around a process) is reserved for stuff like "the current state of a games table", where you would have one process per game being played. This state is only read and manipulated by the process itself and will be thrown away when the game is over (which might make it sound like an object!). If you suddenly had a requirement to show live stats from thousands of games being played, then one option would be to start sending that data to ETS.
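A sketch of that "one process per game" intuition (the state shape and API are invented): the process owns its state, and the state simply disappears when the game ends.

```elixir
defmodule Game.Table do
  use GenServer

  def start_link(game_id), do: GenServer.start_link(__MODULE__, game_id)

  @impl true
  def init(game_id), do: {:ok, %{id: game_id, moves: []}}

  @impl true
  def handle_call({:move, player, move}, _from, state) do
    # Only this process ever touches the game state.
    {:reply, :ok, %{state | moves: [{player, move} | state.moves]}}
  end
end
```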
One big difference to point out is that to store state in a process you need to use the language's data structures (lists, maps, keyword lists), which can get slow when they grow HUGE. ETS is an actual key-value store with incredibly fast lookups (and can easily be read by multiple processes).
I hope that makes sense—I'm also testing my own knowledge here :)
And I think it makes potentially even more sense if LiveView is just creating a GenServer under the hood.
This would explain why if you had 1,000 people connected to your site through Live View, you would have 1,000 stateful processes running on your server. That would be 1 GenServer for each connected client, each in their own universe with their own un-shared data / state.
I think it's best to just think about processes instead of specifically saying "GenServer"; GenServer is just one way to interact with processes. For example, if you wanted to run something in the background that doesn't hold any state, you could use a task like `Task.start_link(fn -> some_long_running_function() end)` (though technically I do believe Tasks use the GenServer API behind the scenes). You can also create and manage processes yourself with `spawn`, though it's not recommended unless you REALLY know what you're doing - and even then I think there are many use cases for it (but again, I'm not very experienced here).
Also yes, LiveView does indeed have one process for each one! The GenServer API is available in your LiveView modules.
Process bottlenecks are a design problem, not a language or syntax problem, and are mitigated largely by a few points that can be factored in during design or PR review:
- Be wary of places where you have N:1 process dependencies, where N is large and the number of messages exchanged between each member of N and the single process are frequent/numerous. Since each process can only handle received messages sequentially, there is little point in spawning a lot of tasks in parallel if each process has to talk to the same upstream process to do anything
- Set up telemetry that samples the number of messages sitting in the process mailbox; if a process is becoming a bottleneck, it is going to be frequently overloaded and have a lot of messages in its mailbox (see the sketch after this list). If you have the telemetry, you can see when this starts to happen and take steps to deal with it before it starts causing problems for you. Likewise, it's probably useful in general to have telemetry on how long each unit of work takes in server-like processes, so you can get a sense of throughput and factor that data into your design.
- Avoid sending large messages between processes, instead spawn a process to hold the data and then send a function to that process which operates on the data and returns only the result; or store the data in ETS if you have a lot of concurrent consumers. It can also be helpful to denormalize the data when you store it in ETS so you can access specific parts of it without copying the entire object out of ETS on every access. The goal here is to make messaging cheap and avoid copying lots of data around.
- Take steps to ensure process dependencies in your design are structured as trees, i.e. avoid dependency graphs that can contain cycles. It is all too easy for a change to introduce the possibility of deadlock if you play fast and loose with what processes can talk to each other. If your process dependencies mirror your supervisor tree, then you can protect against this by only allowing dependencies between branches in one direction (usually toward the parts of the tree that were started earlier in the supervisor tree)
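Regarding the telemetry bullet above, a sketch of sampling mailbox lengths (the event name and process list are invented; assumes the :telemetry library is available):

```elixir
defmodule MyApp.MailboxMonitor do
  # Periodically call this with the pids you care about; attach a handler to
  # the event to log or export the measurements.
  def sample(pids) do
    for pid <- pids,
        {:message_queue_len, len} <- [Process.info(pid, :message_queue_len)] do
      :telemetry.execute([:my_app, :process, :mailbox], %{length: len}, %{pid: pid})
    end
  end
end
```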
I think the problem is that Elixir is still relatively young, and due to the language's evolution and the lack of established documented doctrine from the Erlang community, there are a lot of techniques, tips, design patterns, etc. that are being rediscovered; likewise there are a lot of seemingly good ideas that turn out to be not so great in practice, but are encountered on the road to the truly sound patterns. So you get a lot of people writing about the lessons they are learning, and because of the gaps in knowledge, the result is that the information may be missing things, or providing a more complex solution when there is a simpler one, etc. Ultimately this is an important process, and now that Elixir has largely stabilized, this will only improve (and it's already pretty good, certainly far better than when I first started with the language years ago).
Do you have any code examples or practical applications on how to apply most of those bullet points?
Those are all very daunting things to approach without examples but they sound very important.
In Python or Ruby I would have just thrown things into Redis as needed without thinking about it, and this hasn't failed yet in years of development time with tens of millions of events processed (over time). Send the ID of a DB object to a worker, let the worker library deal with it, look up the ID in the DB when the work is actually being done, let the worker library deal with the rest and move on to the next thing.
And for caching, it's just a matter of decorating a function or wrapping some lines of code to say it should be cached, and everything works the same with 1 or 10 web servers when the state is saved in Redis (major web frameworks in Python and Ruby support this with 1 line of configuration).
For the use case you are describing, none of my points are really important - an HTTP request that hits a database, then pushes something onto a queue for background processing, doesn't exhibit any problems from a process bottleneck point of view on that end of things. You still need some logic to deal with backpressure from the queue, but that is a language-agnostic concern.
Where you could hit a bottleneck might be in the background processing though, take for example the following scenario:
- A pool of N background job worker processes each pull an item off a queue, and spawn a process to perform the task in isolation
- A singleton process S provides exclusive access to some resource
- Each task calls some code which needs to interact with the resource controlled by S.
The problem with the above is that all of that concurrency/parallelism is nullified by the fact that the tasks are all going to block on S to do their work; S is the bottleneck of the design.
To be clear, you should always gather telemetry first, but let's assume that you've gathered it and you can clearly see that this bottleneck is an issue (the process mailbox frequently has many messages waiting to be received, the average time to completion for jobs is increasing). How to solve this depends on why the resource is held by S in the first place.
If it's because the resource is not thread-safe and requires exclusive access, then unless you can find a way to avoid needing the resource in every task, there isn't much you can do, but this should be fairly uncommon in practice.
If S exists because you needed to store some shared state somewhere, and someone told you that an Agent or GenServer was the way to go, then you could move that data to ETS and make it publicly accessible, so that functions which operate on that data read it from ETS directly rather than call the process. Now you've removed that bottleneck entirely.
If S exists because it needs to protect access to some data, but not all of it, and most tasks don't need to access the protected data, then you can move the parts that do not need to be protected into ETS, and keep the rest in the process. This might reduce the amount of contention on that singleton process by a huge amount; if even half the processes no longer need to block on accessing it, then you've regained at least that much concurrency in the task processing code.
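A sketch of that refactor (names invented): the process keeps ownership of writes, but readers go straight to a public ETS table instead of calling the process.

```elixir
defmodule SharedState do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Reads bypass the process entirely, so they never queue behind it.
  def get(key), do: :ets.lookup(:shared_state, key)

  # Writes are still serialized through the process.
  def put(key, value), do: GenServer.call(__MODULE__, {:put, key, value})

  @impl true
  def init(_opts) do
    :ets.new(:shared_state, [:named_table, :public, read_concurrency: true])
    {:ok, nil}
  end

  @impl true
  def handle_call({:put, key, value}, _from, state) do
    :ets.insert(:shared_state, {key, value})
    {:reply, :ok, state}
  end
end
```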
---
The example above is something I've seen numerous times, but the important pattern to note is that you have some task that you've tried to parallelize by spawning multiple processes, but that task itself depends on something that is not, or cannot be done concurrently/in parallel.
Any time this pattern arises, you need to either find a way to enable concurrency in that dependency, or you should avoid doing the task in parallel in the first place. This is ultimately true of any parallelizable task - it's only parallelizable if all of the task's dependencies are themselves parallelizable, otherwise you end up bottlenecked on those dependencies and you've gained little to no benefit.
Where it becomes a bigger problem is when you consider the system at a higher level. Bottlenecks reduce throughput, which may end up, via backpressure, causing errors on the client due to overload, or depending on the domain, data being dropped because it can't be handled in time (e.g. soft real-time systems).
I don't have any code examples that really encompass all of this in one place, if you are interested in something specific, I can try to throw something together for you. Or if you have specific questions I can point you to some resources I've used to help understand some of these concepts.
> And I remember seeing a lot of forum posts around the dangers of using GenServers (unless you know what you're doing).
The danger is using a single process of the GenServer instead of multiple, which can give you a single-process bottleneck that won't use multiple cores. You don't have to know any intricacies of the BEAM or OTP to know about this and design around it by using multiple process instances of the GenServer.
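For instance, a sketch of routing by key across a fixed pool of GenServer shards (the pool size, naming scheme, and the callback module are invented):

```elixir
defmodule Sharded do
  @shards 8

  # Start one instance of the given GenServer callback module per shard,
  # each registered under a derived name.
  def start_all(module) do
    for i <- 0..(@shards - 1) do
      GenServer.start_link(module, nil, name: shard_name(i))
    end
  end

  # Deterministically route a key to one of the shards.
  def route(key), do: shard_name(:erlang.phash2(key, @shards))

  defp shard_name(i), do: :"shard_#{i}"
end

# Usage: GenServer.call(Sharded.route(user_id), :some_request)
```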
> I know you can do distributed state in Elixir too, but it doesn't seem as easy as it is in other languages.
You can use Redis in Elixir as well. Saying that Elixir is worse at distribution than Python/Ruby because ETS isn’t distributed is a bit like saying Python is bad at distribution because objects are not distributed. It’s especially strange since Elixir ships with a distribution system (so you can access ETS from other machines) while your other example languages do not.
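For example (a sketch; the node and table names are invented), reading another clustered node's ETS table is one built-in call:

```elixir
# Assumes the nodes are connected (e.g. via Node.connect/1) and the remote
# node has a named, readable table called :geoip_cache.
:rpc.call(:"app@other-host", :ets, :lookup, [:geoip_cache, "203.0.113.9"])
```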
Totally, but a lot of folks say "Elixir is good / easier because you don't need tools like Redis". But then when you try to do it without Redis, you need to account for many things yourself and end up re-inventing the wheel. This is time spent developing library-ish code instead of your business logic.
It's sort of like background job processing tools. Sure, you can do everything in Elixir, but when you want uniqueness guarantees, queues, priorities, exponential back-off retries, periodic tasks, etc., you end up using a dedicated background processing tool in the end, because it's very difficult and time consuming to write all of that from scratch.
But in practice almost all of those things end up being a requirement in a production grade system. You end up in the same boat as Python and Ruby.
> But in practice almost all of those things end up being a requirement in a production grade system. You end up in the same boat as Python and Ruby.
Well, there is definitely a spectrum. For example, in some communities you will hear the saying "you can never block the main thread". So you want to do a file export? You need to move it to a background worker and deliver the export via e-mail. Sometimes even sending the e-mails themselves is done in the background, when it could be done on the go.
Languages like Java, Go, Erlang, Elixir, will be just fine with performing those tasks during the request, so you don't need to push those particular actions to a background job. And as you noted in the other comment, Phoenix ships with PubSub and Presence that work out of the box without a need for an external tool too.
But if you need uniqueness, queues, back-off, etc., then do use a background job tool! It's just that in some of the places where you thought you needed one, you may not actually need it.
i would take most of these claims about elixir with a grain of salt
distribution, for example, is a much lauded feature of elixir/erlang but if you look into the implementation it's really just a persistent tcp connection with a function that evals code it's sent on the other end. you could easily write the equivalent in ruby or python or java but you probably wouldn't because it's not actually a very good idea. there's no security model, the persistent connections won't scale past a modest cluster size and the whole design ignores 30 years of experience in designing machine to machine protocols
similarly, people will mention mnesia or ets as a replacement for a database or redis. these are both very crude key/value stores with only very limited query capabilities. you should use them where you would use an in process cache in another language and not as a replacement for an out of process shared cache. they were never designed as such. and as in process caches they are really nothing special
in fact, a lot of elixir's marketing comes down to "do more with less" with a lot of focus on how you can do on a single machine what other languages take whole clusters to do. this is (partially) true. elixir/erlang are excellent for software that runs on appliance style hardware where you can't simply add machines to a cluster. it is, in fact, what erlang was designed to do. what this ignores though is that this is a terrible model for a service exposed over the internet that can run on any arbitrary machine in any data center you want. no one will advise you to run a single vm in aws or a single dyno on heroku for anything that matters.
elixir/erlang's features that increase its reliability on a single machine are a cost you pay, not an added benefit. the message passing actor model erlang built its supervision tree features around is a set of restrictions imposed so you can build more reliable stateful services on machines that don't have access to more conventional approaches to reliability (like being stateless and pushing state out to purpose built reliable stores)
if you're building systems that need to run in isolation or can't avoid being stateful then perhaps elixir/erlang has some features that may be of interest. the idea that these features are appropriate for a totally standard http api running in aws or digital ocean or whatever backed by a postgres database and a memcache/redis cluster is not really borne out by reality however. if it were, surely other languages would have incorporated these features by now? they've been around for 30 years and the complexity (particularly of distribution and ets) is low enough you could probably implement them in a weekend
> distribution, for example, is a much lauded feature of elixir/erlang but if you look into the implementation it's really just a persistent tcp connection with a function that evals code it's sent on the other end...
I mean, this is just straight up incorrect. Yes, the underlying transport is TCP, but remote evaluation is definitely _not_ the common case. Messages sent between nodes are handled by the virtual machine just like messages sent locally; that is the main benefit of distributed Erlang - location transparency. Yes, you _can_ evaluate code on a remote node, which can come in handy for troubleshooting or orchestration, but it is certainly not the default mode of operation.
> there's no security model
I mean, there is, but it isn't a rich one. If one node in the cluster is compromised, the cluster is compromised, but the distribution channel is very unlikely to be the means by which the initial compromise happens if you've taken even the most basic precautions with its configuration. It would be nice to be able to tightly control what a given node will allow to be sent to it from other nodes (i.e. disallow remote eval, only allow messaging to specific processes), and I don't think there are any fundamental blockers; it's just not been considered a significant enough issue to draw contribution on that front.
> the persistent connections won't scale past a modest cluster size
I mean, there is already at least one alternative in the community for doing distribution with large clusters; Partisan in particular is what I'm thinking of.
> these are both very crude key/value stores with only very limited query capabilities
What? You can literally query ETS with an arbitrary function, you are limited only by your ability to write a function to express what you want to query.
You shouldn't use them in place of a database, but they are hardly crude or primitive.
> elixir/erlang are excellent for software that runs on appliance style hardware where you can't simply add machines to a cluster. it is, in fact, what erlang was designed to do. what this ignores though is that this is a terrible model for a service exposed over the internet that can run on any arbitrary machine in any data center you want
I think you are misconstruing the point of "doing more with less" - the point isn't that you only need to run a single node, but that the _total number of nodes_ you need to run is a fraction of those for other platforms. There are plenty of stories of companies replacing large clusters with a couple Erlang/Elixir nodes. Scaling them is also trivial, since scaling horizontally past 2 nodes doesn't require any fundamental refactoring. Switching from something designed to run standalone in parallel with a bunch of nodes versus distributed _does_ require different architectural choices, and could require significant refactoring, but making that jump would require significant changes in any language, as it is a fundamentally different approach.
> elixir/erlang's features that increase its reliability on a single machine are a cost you pay, not an added benefit. the message passing actor model erlang built its supervision tree features around is a set of restrictions imposed so you can build more reliable stateful services on machines that don't have access to more conventional approaches to reliability (like being stateless and pushing state out to purpose built reliable stores)
I'm not sure how you arrived at the idea that you can't build stateless servers with Erlang/Elixir, you obviously can, there are no restrictions in place that prevent that. Supervisors are certainly not imposing any constraints that would make that more difficult.
The benefits of supervision are entirely about _handling failure_, i.e. resiliency and recovery. Supervision allows you to handle failure by restarting the components of the system affected by a fault from a clean slate, while letting the rest of the system continue to do useful work. This applies to stateless systems as much as stateful ones, though the benefits are more significant to stateful systems.
> the idea that these features are appropriate for a totally standard http api running in aws or digital ocean or whatever backed by a postgres database and a memcache/redis cluster is not really borne out by reality however. if it were, surely other languages would have incorporated these features by now? they've been around for 30 years and the complexity (particularly of distribution and ets) is low enough you could probably implement them in a weekend
The reason why these features don't make an appearance in other languages (which they do to a certain extent, e.g. Akka/Quasar for the JVM which provide actors, Pony which features an actor model, libraries like Actix for Rust which try to provide similar functionality to Erlang) is that without the language being built around them from the ground up, they lose their effectiveness.

Supervision works best when the entire system is supervised, and supervision without processes/actors/green threads provides no meaningful unit of execution around which to structure the supervision tree. Supervision itself is built on fundamental features provided by the BEAM virtual machine (namely links/monitors, and the fact that exceptions are implemented in such a way that unhandled exceptions get translated into process exits and thus can be handled like any other exit). The entire virtual machine and language is essentially designed around making processes, messaging, and error handling cohesive and efficient.

Could other languages provide some of this? Probably, though it certainly isn't something that could be done in a weekend. No language can provide it at the same level of integration and quality without essentially being designed around it from the start though, and ultimately that's why we aren't seeing it added to languages after the fact.
sending a message to a remote node is just a special case of eval. instead of arbitrary code you're evaling `pid ! msg`. and what is spawning a remote process if not remote code eval?
when i say there's no security model i mean there's no internal security model. you can impose network based security (restricting what nodes can connect to epmd/other nodes) or use the cookie based security (a bad idea) or you can even implement your own carrier that uses some other authentication (i believe there are a few examples of this in the wild) but the default is that any node that can successfully make a connection has full privileges
as for ETS, you can query any data structure with arbitrary functions. that's exactly what i mean when i say there's limited query capabilities. all you can really do is read the keys and values and pass them to functions
my experience and the experience of others is that elixir and erlang are not significantly more efficient than other languages and do not lead to a reduction in the total number of nodes you need to run. whatsapp is frequently cited as an example of "doing more with less" but it's compared to bloated and inefficient implementations of the same idea and not with other successful implementations. facebook certainly wasn't using thousands of mq brokers to power facebook chat. no one is replacing hundreds of activemq brokers with a small number of rabbitmq brokers
you can absolutely build stateless servers with erlang/elixir (and you should! stateless is just better for the way we deploy and operate modern networked services). my point is that many of the "advantages" of elixir/erlang are not applicable if you are delivering stateless services
when i said you could deliver erlang/elixir features in a weekend, i did not mean all of them. i meant specifically distribution and ets. you are right that the actor model, supervision trees and immutable copy-on-write data structures are all necessary for the full elixir/erlang experience. i generally like that experience and think it is a nice model for programs. i don't think however it is very applicable to writing http apis. java, rust, python, go, ruby and basically every other language are also great at delivering http apis and they don't have these same features
> sending a message to a remote node is just a special case of eval. instead of arbitrary code you're evaling `pid ! msg`. and what is spawning a remote process if not remote code eval?
They are not equivalent at all, sending a message is sending data, evaluation is execution of arbitrary code. BEAM does not implement send/2 using eval. Spawning a process on a remote node only involves eval if you spawn a fun, but spawning an MFA is not eval, it’s executing code already defined on that node.
> as for ETS, you can query any data structure with arbitrary functions. that's exactly what i mean when i say there's limited query capabilities. all you can really do is read the keys and values and pass them to functions
You misunderstood, you can _query_ with arbitrary functions, not read some data and then traverse it like a regular data structure (obviously you can do that too).
> my experience and the experience of others is that elixir and erlang are not significantly more efficient than other languages and do not lead to a reduction in the total number of nodes you need to run.
I’m not sure what your experience is with Erlang or Elixir, but you seem to have some significant misconceptions about their implementation and capabilities. I’ve been working with both professionally for 5 years and casually for almost double that, and my take is significantly more nuanced than that. Neither are a silver bullet or magic, but they excel in the domains where concurrency and fault tolerance are the dominant priorities, and they are both very productive languages to work in. They have their weak points, as all languages do, language design is fundamentally about trade offs, and these two are no different.
If all you are building are stateless HTTP APIs, then yes, there are loads of equally capable languages for that, but Elixir is certainly pleasant and capable for the task, so it’s not really meaningful to make that statement. Using that as the baseline for evaluating languages isn’t particularly useful either - it’s essentially the bare minimum requirement of any general purpose language.
i was not claiming the distribution code literally calls eval, just that it is functionally equivalent to a system that calls eval. you agree that it is possible to eval arbitrary code across node boundaries, yes?
i used erlang for 4 years professionally and elixir for parts of 5. i think both are good, useful languages. i just take issue with the misrepresentation of their features as something unique to erlang/elixir
advocates should talk up pattern matching, supervision trees and copy-on-write data structures imo. those are where erlang and elixir really shine. instead they overhype the distribution system, ets, the actor model and tools like dialyzer which are all bound to disappoint anyone who seriously investigates them
> i was not claiming the distribution code literally calls eval, just that it is functionally equivalent to a system that calls eval.
Not really. The distribution can only call code that exists in the other node. So while the system can be used as if it was an evaluator, it is not because of its primitives, but rather due to functionality that was built on top. If you nuke "erl_eval" out of the system, then the evaluation capabilities are gone.
I agree it is a thin line to draw but the point is that any message passing system can become an evaluator if you implement an evaluator alongside the message passing system. :)
> elixir/erlang's features that increase it's reliability on a single machine are a cost you pay not an added benefit
Agreed! Erlang/Elixir features should not be used to increase the reliability on a single machine. Rather, they can be used to make the most use of individual machines, allowing you to reduce operational complexity in some places.
> And as that blog post mentions, it recommends using ETS, but Google says ETS isn't distributed. So now suddenly you're stuck only being able to work with one machine.
There is a mostly API-compatible distributed version of ETS in OTP, called DETS. And a higher-level distributed database built on top of ETS/DETS called Mnesia, again, in OTP. So, no, you aren't.
> I know you can do distributed state in Elixir too, but it doesn't seem as easy as it is in other languages. And it's especially more complicated / less pragmatic than other tech stacks because almost every other tech stack all use the same tools to share external state so it's a super documented and well thought out problem.
You can use the same external tools in Elixir as on platforms that don't have a full distributed database built in the way OTP does, so I don't see how the fact that those external tools are widely used on other platforms makes Elixir harder.
> There is a mostly API-compatible distributed version of ETS in OTP, called DETS. And a higher-level distributed database built on top of ETS/DETS called Mnesia, again, in OTP. So, no, you aren't.
DETS is the disk-based term storage; it is as distributed as ETS (i.e. not at all).
I believe the distribution layer built on top of ETS and DETS you’re trying to name is mnesia. It supports distribution and allows a variety of interesting topologies. It’s not the only distributed data store available on the BEAM but it’s well tested, mature, and comes as part of the OTP libraries.
Whether or not this is a problem comes down to two factors:
1 - How OK are you with just accepting things as documented, like having to put `use HelloWeb, :controller` in each controller. Personally, I _had_ to understand what this was doing and how, but I imagine some people don't care so much.
2 - Do you need / want to do anything outside idiomatic Phoenix? As a simple example, I think Plug.Parsers is wrong to raise an exception when invalid JSON is given to it (shitty user data is hardly "exceptional" and I don't need error logs with a poor signal-to-noise ratio).
> As a simple example, I think Plug.Parsers is wrong to raise an exception when invalid JSON is given to it (shitty user data is hardly "exceptional" and I don't need error logs with a poor signal-to-noise ratio).
Plug sets the status code for parser errors to 400: https://github.com/elixir-plug/plug/blob/v1.10.4/lib/plug/pa...
You should check the `:plug_status` field of the exception and only log/report the error if it's in the 5xx range.
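For example, in a hypothetical error reporter (the module and function here are invented), the check could look like this; Plug.Exception.status/1 resolves the exception's :plug_status:

```elixir
defmodule MyApp.ErrorReporter do
  # Only report exceptions that map to a 5xx status; Plug.Parsers.ParseError
  # maps to 400, so bad user JSON stays out of the error logs.
  def report?(exception) do
    Plug.Exception.status(exception) >= 500
  end
end
```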
Re: 1. I had the same concern, but the nice thing is that when you get to the point where you've got to know what's happening, the :controller code is right there in the auto-generated file in lib. You can just open it up and see!
Re: 2, your entire plug chain is accessible and configurable if you don't like the defaults. I've never tried to modify the specific behaviour you're referring to, but my other experience with modifying plug behaviour has generally been pretty smooth.
It makes for great demos but it basically means “this just works so you don’t need to know how it works”.
The problem is you always need to know how your tools work, often sooner rather than later, as you inevitably end up with edge cases and bugs in your application that simply can’t be resolved without that understanding.
only because we reserve the use of the word "magic" for the times we find it bad/unsuccessful.
When it's good/successful, we call it "powerful, mature, and robust high-level abstractions and APIs".
That "magic" is always bad is just a syllogism because a value judgement on the success of the abstractions or APIs is built into the term as used. Abstractions that are confusing, or unexpected, or inconsistent internally or with the platform they live on, or especially leaky or buggy, or in practice hard for beginner developer-users to develop proper mental models to guide use around and to understand how to debug the use of -- get called "magic". But nobody set out to provide such abstractions, they set out to provide nice, polished, predictable, simple to understand, consistent, powerful tools at the right level of abstraction for the domain -- when someone thinks they've failed, with abstractions at a fairly high level or higher than the speaker thinks is appropriate for the domain -- they call it 'magic'.
I don't think that's what is meant by "magic" in terms of code, at least it's never been my own understanding.
"Magic" is not something the developer doesn't understand, it's something the developer cannot possibly understand, because you didn't document it properly (or at all).
I think that's an extension of what I'm talking about. Either way, was it intentional? Does anyone sit down and say "I'm going to create something poorly documented that no developer can possibly understand, so that they'll call it magic; this is my goal today"?
Nope. "magic" is poor execution of something, one way or another. It's a category of something done unsuccesfully, not some kind of difference in kind of thing.
This matters for how we talk about 'magic': like, "you shouldn't have put all that magic in there, did you think we want magic?" No, nobody wants something poorly documented, and nobody thinks they do. Nobody sets out to make something unsuccessful; "magic" is a failure.
Where does one draw the line between magic and a lack of user understanding? I never got this argument. Isn't magic just you as a user not understanding how it works? I mean, classes in C++ are pretty magic; the this pointer is magic. Type checking, loops, trampoline jumps. A car is pretty much magic, yet people use it every day.
Is it magic when it does something convenient for you based on conventions, without you writing the steps? As long as the 'magic' is clearly explained in the docs, I don't see how it's different from a layer of abstraction like a compiler or a car.
> Where does one draw the line between magic and a lack of user understanding?
There is no line; that's the definition. (I'm agreeing with you here, btw) It's a blub issue; if one doesn't understand it, it's derisively called "magic". So you don't get this argument because it's totally based on the observer.
I use Django at work and I hate it because it's magic all over the place. And I recognize that me saying it is simply I haven't had the time or impetus to go figure out the stuff I don't have figured out yet, so I can bucketize all that as "magic".
Sure, words will always have fuzziness, but there is benefit to substituting fuzzy terms with clearer terms. Words help influence thinking, so fuzzy words can lead to fuzzy thinking.
If we are debating an API design and a colleague says "it's too much magic", is that a bad thing? If I write deliverEmail(email) and the e-mail "magically" gets sent, what's wrong with that?
On the other hand, if we debate the abstraction itself--for example behavior doesn't match user expectations--then we can do better at improving the API design.
You're conflating abstractions with magic. They are not the same. Magic is not just an abstraction we don't bother to understand. Magic is magic.
A car is not magic. It does exactly what it purports to do and nothing else. David Blaine levitating is magic. It works in a very limited set of circumstances and no more. It works to amuse some spectators. It doesn't work to, for example, cross the Hudson River, despite outward appearances.
Similarly, in programming we use "magic" to describe something that seems to defy what the underpinning (language, framework, etc.) would allow. We know that it's using some trick to pull off what it's doing, and that trick likely has constraints that won't allow the magic to ever become a true abstraction.
Note that I'm not calling Phoenix magic as I haven't used it enough to know
No, magic is when you actually do understand and it gets in your way as a result.
For example, I actually do understand http, but doing anything not explicitly designed for by the framework of choice can be painful. It's not that I don't understand the underlying tech, it's that the "magic" is expecting a specific use case that I don't want to do.
Also applies to scenarios where a developer does not understand something not because they don't want to, but because the vendor has actively tried to obscure how it works and is asking you to take it as a given that it will always just work "by magic" for every use case, which invariably nothing ever does.
When do you learn about the JVM in a Java environment? At what point should you dig into be Ruby source code?
At some point you have to treat the abstraction below you as a black box.
I've personally found it helpful to accept the black box and only dig in after I've gained more experience or at the point I feel like a lack of understanding is getting in my way.
When the abstraction can be considered 100% leak-proof in practical terms. Also, if you have alternative tools when you encounter a case where "X" doesn't fit, you do not need to learn about "X" as long as you have "Y".
But when a solution is presented as "this must always be used", the solution should have been around for a long time with a track record of never breaking. If that's where the abstraction is, it's usually solid enough to not go any deeper.
In my experience, most developers don't talk about "magic" as if it's the same thing as "abstraction". "Magic" usually means an abstraction that leaks under non-trivial conditions.
I would say for Phoenix, with 90% of the magic... if you don't like it, it's relatively easy to cut and paste your way through an override, so there's that. A lot of it is there for your safety; it abstracts things that "you are likely to get wrong if you try to write it yourself".
The only thing I couldn't figure out how to override was streaming uploads.
> It makes for great demos but it basically means “this just works so you don’t need to know how it works”.
But this describes all manner of legitimately useful features. Garbage-collection, for instance, or dynamic-dispatch, or closures, or await/async.
As technicolorwhat and vendiddy already said, it's also poorly defined. In languages like Haskell, or SMT-solving languages, the programmer might know very little about how the language is implemented.
> you inevitably end up with edge cases and bugs in your application that simply can’t be resolved without that understanding
You can run into trouble if you have an incomplete understanding of your programming language, but this isn't particular to 'magic' abstractions.
I would say do at least one small project or tutorial without Phoenix, but in any case, Phoenix is a web framework with routes, controllers, templates, etc.; you won't get lost there.
You'll lose some things (no built-in auth but there's libs) and win some (easy websockets and pub/sub).
LiveView is the reason my eye caught Elixir/Phoenix.
Currently I am working with Blazor (C#) and I really like that you can forget about JavaScript (most of the time).
I believe Blazor, LiveView and others are the future for webdev.
Unfortunately, I found a bug today in which an event_notifier GenServer was calling a function that checked the database for the state of an Event, to decide whether to send a notification, then updated the event to record that notifications were sent, and upon successful update sent the notifications. But in the query that constructed the list of users to notify, the User module was not imported at the top of the file, and the notifications were failing to send. Of course, if the path had ever been run, it would have obviously failed and been the simplest type of error to rectify. This is why it's generally not an issue, and it's actually nice, as it gives you direct visibility into dependencies. But on rare occasions, when you don't test the path because it's dependent on time-dependent state and are overly confident in the suitability of the untested code you're deploying to production, you get bitten by a compiler that only quietly warns and by the lack of static analysis (oh nice, writing this comment just made me discover https://github.com/rrrene/credo).
Can you explain what you mean about module naming? I’ve done a good amount of Elixir, and I’m puzzled because I don’t know what you are referring to at all.
Most libraries and frameworks I’ve used enforce things on your modules via behaviors which have nothing to do with naming. Perhaps you are talking about naming functions for behaviors? Because I think it’s the same in most languages that “overridden” or “implementations” of functions for modules/classes that use some kind of interface must be named the same as the interface definition.
> Perhaps you are talking about naming functions for behaviors? Because I think it’s the same in most languages that “overridden” or “implementations” of functions for modules/classes that use some kind of interface must be named the same as the interface definition.
Yes. In TypeScript, you can have abstract classes or interfaces to implement, which gives you more information (IntelliSense, visible errors, type information) and makes the contract more explicit. I was trying out Phoenix and I didn't get any hints about the functions to implement in the module; I was relying on documentation for that. If there were a way to say use @some_spec and get information about it in the editor, it would have helped.
The compiler will warn you about things like unimplemented functions that are required by behaviors, and there are plugins for various editors to display errors/warnings inline.
There's also Dialyzer (and various IDE integrations) for some level of static type-checking.
there are language features directly for this. no need to rely on compiler hints or dialyzer.
elixir people really should learn erlang.
if you ever saw a coffeescript person struggling to do things that are straightforward in js, because they didn't bother to learn js? that's what you're doing now
This is the main reason why I learn programming languages to broaden my knowledge, but when it comes to production code, only platform languages matter.
This means you're arguing that one should never learn anything except machine code because machine code is what it all turns into at the end of the day.
Only you DO use languages that end up as machine code, and the idea of understanding machine code to better utilize something like C isn't outlandish.
Nope, use C/C++/Perl/awk with UNIX, Java with the JVM, C++/Swift/Objective-C with iOS/macOS, Java/Kotlin/C++ with Android, VB.NET/C#/F#/C++ with Windows, JavaScript/WebAssembly with the Browser,.... and naturally Erlang/C with BEAM.
No need for extra build tools, FFI with the platform language, additional debugging tools, the urge to create wrapper libraries that feel more idiomatic into the guest language, waiting for the guest language to come up with ways to expose new platform capabilities,...
Guest languages are good for experimenting with new ideas that eventually get adopted by the platform; then the world moves on, and they turn into the next CoffeeScript.
> In typescript, you can have abstract classes or interfaces to implement
In Erlang these are called "behaviours".
> I was relying on documentation for that
Consider reading the Erlang documentation. The Elixir people really don't understand Erlang very well, and are trying to manufacture Ruby on top of a language that already gives them solutions to their problems that they just can't see.
> If there was a way to say use @some_spec and get information about it in the editor. It would have helped out.
Basically every IDE that speaks Erlang knows behaviors. They're as fundamental to the language as header files are to C/C++.
Elixir tried to "do away with them" because Elixir thinks they're "confusing" but they're actually really important tools.
Those are important because, with optional callbacks, you can make a typo on the function name or on the number of arguments and you won't get any warning, in both Erlang and Elixir. "@impl true" allows you to close this gap.
So we do support behaviours and we have added more static guarantees compared to what you would find in Erlang.
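A minimal illustration of that gap and how @impl closes it (module names invented):

  defmodule Parser do
    # parse/1 is optional, so a typo'd "implementation" alone produces no warning...
    @callback parse(String.t()) :: {:ok, term()} | {:error, term()}
    @optional_callbacks parse: 1
  end

  defmodule JSONParser do
    @behaviour Parser

    # ...but with the annotation, the compiler warns that the Parser
    # behaviour specifies no callback named parze/1.
    @impl Parser
    def parze(_input), do: {:error, :typo}
  end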
> The Elixir people really don't understand Erlang very well
It is a bit depressing it has come to this but my long list of contributions to Erlang/OTP through pull requests, discussions, EEPs, etc say otherwise.
Elixir hasn’t done away with behaviours. They exist in the language and are used to implement features in the standard library itself and many libraries in the ecosystem.
You don’t have to go to Erlang documentation to learn about them since they are documented in Elixir as well [1] [2].
As soon as I add @behaviour to a module, I get IDE autocomplete and compiler warnings if a function has not been implemented.
I do agree that I have not seen a lot of behaviour use in the wild, even in cases where it's clear that they would be beneficial. Elixir developers seem to have some blindspots.
> Hi, I'm a seasoned erlanger, and we do not all tell you that.
Seasoned Erlang developer here too (10+ years).
First of all, the original post wanted to share data across requests. Given that each request is typically handled in a separate process, you simply cannot use the process dictionary to share this data. So your original comment completely missed the mark.
Second of all, I would say most of the Erlang community actually agrees with the Erlang official documentation linked above. For example, for a long time, Cowboy had this in its README:
> No parameterized module. No process dictionary. Clean Erlang code.
And the reason is simple: the process dictionary is about state within a single process. This data cannot be easily shared (you can with erlang:process_info/2, but that adds a process lock). Therefore, most times people resort to the process dictionary, it is to pass data within a process implicitly, and passing it functionally (the clean way) is often better. Using ets for the same purpose would also be frowned upon. The process dictionary should be reserved almost exclusively for metadata.
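To make the distinction concrete: Process.put/get is invisible outside the owning process, so cross-request state has to live in some shared process, such as an Agent. A minimal sketch (the name :shared_counter is invented):

  # Each request runs in its own process, so this value is lost to others:
  Process.put(:counter, 1)
  Process.get(:counter)  #=> 1, but only in *this* process

  # Sharing across requests means going through a named process instead:
  {:ok, _pid} = Agent.start_link(fn -> 0 end, name: :shared_counter)
  Agent.update(:shared_counter, &(&1 + 1))
  Agent.get(:shared_counter, & &1)  #=> visible from any request process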
You probably won't be inclined to take my word for this, so here is a proposal: send a pull request to remove or update the linked notes above from Erlang/OTP docs and see how well it will be received.
Probably not appropriate for CPU bound loads like that.
Erlang/Elixir would be fine as an orchestration language calling out to C code to actually do the number crunching, but you likely won't see much of the benefit of the language doing that unless you're doing something more complicated in that layer. If it's a super thin layer of glue, Python is better suited (not least because Numpy is familiar and meant to work with the language)
Yeah, I've worked a bit with OpenMP. I felt it didn't quite have the ergonomics of an actor model that I would want. I might be wrong there though. I don't have any experience with any other compute limited systems that use the actor model.
Rust certainly seems quite interesting, but also quite daunting.
> I haven't thrown any hand grenades over the wall. Overly sensitive Elixir programmers always throw a fit whenever someone says "you know, other tools are available to you too."
I don't think anyone would take issue with your comments if that's how you approached it.
Instead you're being an asshole, and for no reason I can understand. It's not like we're talking politics here...
> I don't think anyone would take issue with your comments if that's how you approached it.
I got voted into the floor for saying "you're a better elixir programmer if you also learn erlang," which is a simple fact.
I do think this. I think that most micro-language communities are bizarrely intolerant of criticism, and unwilling to cope with the idea that every language has problems and that you can only get good at a language if you face them with open eyes.
Imagine a C programmer getting offended if you warned someone learning C "by the way, if you're doing string parsing, you're gonna have a bad time."
> Instead you're being an asshole
I love how I've given measurable facts and technical claims, and in order to resist, the members of this community are trotting out swearing and insults, then imagining that they've made a valuable case
> I got voted into the floor for saying "you're a better elixir programmer if you also learn erlang," which is a simple fact.
I don't think that's the case. As an active participant in the various Elixir communities, I've found that the general advice is that learning Erlang, while not necessary, is definitely worth it. Many of the community 'members' do or did plenty of Erlang programming, and the stories of all the 'goodies' and how much simpler Erlang can be at times have more than once gotten me to play around with it. I generally find very little hostility between Elixir and Erlang programmers. Far from it.
I also strongly contest that you're just saying "you're a better elixir programmer if you also learn erlang". At best, some of your comments approach the sentiment, but then end with judging the commenter a blub programmer, or something similar.
I can understand if maybe you've had bad experiences with 'micro-language communities' and you're reading that into what Elixir programmers are saying, but by and large I very rarely come across Elixir programmers who thumb their nose at Erlang. More than anything the sentiment is a kind of reverence and maybe even a degree of 'embarrassment' over finding it hard to get over Erlang syntax.
> I do think this. I think that most micro-language communities are bizarrely intolerant of criticism, and unwilling to cope with the idea that every language has problems and that you can only get good at a language if you face them with open eyes.
That's perhaps true, but Elixir is among the few communities where I find much less of that than elsewhere. The forums and slack are rife with day-to-day Elixir/Erlang programmers who lament the lack of static typing, argue about how Phoenix is too magical, and so on. One reason I was drawn to Elixir was precisely the relative lack of "this is the best thing ever and everything else sucks" sentiment that I find in so many other programming language communities.
I was assuming you were just trolling, because it's hard for me to believe that you don't see the discrepancy between what you say you are saying, and what you're actually saying. It's very discrepid!
You've been perpetuating a flamewar in this thread. We don't want those on HN. Please stop, and please don't do it again.
That includes not tossing in swipes like "This just makes me laugh" and so on. Your comments have been provocative to the point of being trollish, as well as outright nasty in places. That's not cool, regardless of how much you know about Erlang.
The original post wasn't just a general statement, it was inflammatory and accusatory. The poster stated that Elixir not only teaches programmers the wrong things, but that it leaves them damaged, and did not give a single example or explanation for their rationale.
Erlang doesn't offer immutability. It offers a lack of name rebinding. If you want to see why this difference matters, go learn Elixir, and try creating a tight loop without any recursion during which all you do is reassign the value of a variable a billion times.
Notice the ram usage of the machine go to the moon.
Then there's this thing about immutability being guaranteed by processes. No it isn't, though? You just copy the data and operate on a copy (or a fragment, sometimes, in the case of binaries.) The original can be mutated, and if you've scaled anything in erlang, you've seen this cause a bunch of problems around the array module
Erlang isn't actor model (sometimes you hear it called pi calculus, which it also isn't.) These are things that seem casually, superficially similar to what Erlang is, but then, the whole point of the actor model and the pi calculus is to give mathematical rigor to guarantees that Erlang doesn't offer, and then you've confused yourself into thinking that Erlang has properties it doesn't actually have.
Engineering by failed metaphor is tremendously dangerous, because it makes people feel safe in ways they aren't, and just due to the nature of systems, the way you find out is often under terrible pressure and non-rarely unfixable.
"not ever worrying about or messing up concurrency"
Like. How new do you have to be to Erlang and Elixir to think that message passing means you can't mess up concurrency?
If you ask a C programmer to name some threading problems, five dollars says one of the first three they name is some variation on lock contention.
Messaging does not solve this. (Arguably it makes it worse, since every blocking process is now effectively a lock, which is a generally discouraged design strategy in most languages but encouraged as a primitive in this language.)
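To make the point concrete: a single GenServer serializes all of its callers, so a slow handler behaves exactly like a contended mutex. A deliberately bad sketch:

  defmodule Bottleneck do
    use GenServer

    def start_link(_), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)
    def init(state), do: {:ok, state}

    # Every caller blocks here, queued behind everyone else.
    def slow_op, do: GenServer.call(__MODULE__, :slow_op)

    def handle_call(:slow_op, _from, state) do
      Process.sleep(100)  # simulate work; 50 concurrent callers => the last waits ~5s,
      {:reply, :ok, state}  # right at GenServer.call's default timeout
    end
  end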
Indeed, just as OO problems got more common in C++ vs C because of how easy OO became to do, and how frequent it became, I actually end up seeing a lot more concurrency related problems in Erlang than in most languages, just because there's a lot more concurrency than in most languages.
And this isn't me complaining about Erlang; I love it. But also, if you've just upgraded from bows to rifles, you have to understand that with that improvement will also come an increase in accidental bullet injuries.
Saying you'll never have concurrency problems in Erlang is like saying you'll never have allocation problems in C++. It suggests that everything you know about the language comes from chapter three's discussion of how tools are made available to mitigate this (in C++'s case it'd probably be a SFINAE discussion or something naive about automatic deletion, or if it's an old enough document, maybe smart pointers.)
That that tooling exists suggests there's an active problem being actively mitigated, not that the problem is solved in blub and not everywhere else. There's a reason you don't see cut outside prolog.
In order to abuse one of my favorite community metaphors, "the reason footguns exist is that sometimes you actually do need to shoot lots of animal at once; don't take it out of the case unless Dracula shows up, but, in the balance, eventually you'll be glad you know [Burt Gummer](https://www.youtube.com/watch?v=SdwFp2D5Eg8)."
> Then you'll wan to do something distributed, and everything you've learnt about single-node development becomes applicable to distributed systems.
This just makes me laugh, sadly.
"Because I passed messages in a message passing language, I'm now ready to write distributed systems" is just the most delightfully naïve claim
There are red flags. Red flags are a thing that exists.
And the thing is, you see them all the time in places you shouldn't. [This is correct](https://martin.kleppmann.com/2015/05/11/please-stop-calling-...) and yet my trainers at two FAANG companies claimed that they had internally solved this at large scale within the company, on down from on high from the biggest people in the space
You see fucking Oracle get that wrong
Like the very first three things you ask about distributed systems are:
1. how well does it tolerate partitions,
2. how well does it perform over a WAN, and
3. did you design for both city loss and provider loss
With erlang, the traditional answers are
1. no-ish (that is, only just barely and by throwing away everything you used to love about the language to get a hokey election system that barely works and seems designed for the explicit purpose of causing merge failures (it's the CVS of rejoins, i swear to your god) )
2. (loud, plaintive sobbing)
3. we stood up three copies of the system, does that count? what do you mean transactional
The people saying that Erlang solves distributed systems are the people who stood up an ejabberd instance on their desktop and one on their laptop and declared themselves the vice president of MS Azure Engineering
like yes, it does get rid of a few problems, and it has some conveniences, but
what would you do if someone told you haskell solved all logic errors? some offered responses include:
1. believe them, drop everything, and make all life a burrito
2. say "wow, this person isn't very good at haskell, are they?"
> The synergy between the runtime, standard library and language, backed by the actor model + immutability is a huge productivity win. It's significantly different (to a point where I think it's way more accurate to group Ruby with Go than with Elixir), but, as I've tried to explain, very approachable.
I read this paragraph out loud and suddenly I had two people making toothpaste commercials working for me
I don't understand what happened. Am I a marketer now?
this is pretty abrasive, but it's also 100% correct. before you downvote this you should take the time to investigate the claims it makes and think about whether what you think you know is actually true
i learned a ton about distributed systems in my time writing erlang and elixir, but it was almost always because i was forced into it by misapplying some erlang/elixir feature and backing myself into a corner. the idea that erlang/elixir make concurrency easy is extremely dangerous. what they do is make it accessible. you still need to understand what you are doing
The greatest achievement of Elixir is making the Erlang platform and ecosystem accessible to everyone. And that's because of its "Ruby-ness".
I learned Ruby with Rails, so in the same spirit you could learn Elixir with Phoenix and I really think it's a bona-fide approach to "graduate" to the BEAM world.
But, caveat emptor, the BEAM world is like an alien wunder-weapon: everything we take for granted in the modern web development world was already invented --with flying colors too-- in Erlang/BEAM, so there is a lot of overlap in terms of architectural solutions. In a Kubernetes/Istio world, would you go for a full BEAM deployment? I'm not saying it's an unsolved problem, but what's the perfect mix ratio? It depends.
The overlap between K8s and BEAM is a good question. Even amongst experienced BEAM (especially Erlang) programmers, there's a lot of conflicting information.
From my limited understanding, Kubernetes is comparatively complicated, and can hamstring BEAM instances with port restrictions.
On the other hand, there's a rarely documented soft limit on communication between BEAM nodes (informally, circa 70 units, IIRC). Above this limit, you have to make plans based on sub-clusters of nodes, though I have certainly not worked at that level of complexity.
Would be interesting to hear what other people think about this specific subject.
I have no idea where this limit came from. I worked at WhatsApp[1], and while we did split nodes into separate clusters, I think our big cluster had around 2000 nodes when I was working on it.
Everything was pretty ok, except for pg2, which needed a few tweaks (the new pg module in Erlang 23 I believe comes from work at WhatsApp).
The big issue with pg2 on large clusters is locking of the groups when lots of processes are trying to join simultaneously. global:set_lock is very slow under heavy contention: when multiple nodes send out lock requests simultaneously, and some nodes receive the request from A before B while others receive B before A, both A and B will release and retry later; you only get progress when someone obtains the full lock. Applying the Boss node algorithm from global:set_lock_known makes progress much faster (assuming the dist mesh is or becomes stable). The new pg, I believe, doesn't take these locks anymore.
The other problem with pg2 is a broadcast on node/process death that's for backwards compatibility with something like Erlang R13 [2]. These messages are ignored when received, but in a large cluster that experiences a large network event, the amount of sends can be enormous, which causes its own problems.
Other than those issues, a large number of nodes was never a problem. I would recommend building with fewer, larger nodes over a large number of smaller nodes though; BEAM scales pretty well with lots of cores and lots of ram, so it's nicer to run 10 twenty core nodes instead of 100 dual core nodes.
[1] I no longer work for WhatsApp or Facebook. My opinions are my own, and don't represent either company. Etc.
>I think our big cluster had around 2000 nodes when I was working on it.
Is this fairly recent? I thought WhatsApp was on FreeBSD with powerful nodes instead of lots of little nodes?
>BEAM scales pretty well with lots of cores and lots of ram, so it's nicer to run 10 twenty core nodes instead of 100 dual core nodes.
Something I was thinking of when reading about POWER10 [1]: what systems and languages could make use of a maximum of 15 cores x 16 sockets x SMT-8 in a single machine. That is 1920 threads!
Lots of powerful nodes. That cluster was all dual xeon 2690v4. My in-depth knowledge of the clusters ends when they moved from FreeBSD at SoftLayer to Linux at Facebook. I didn't care for the environment and it made a nice boundary for me --- once I ran out of FreeBSD systems, I was free to go, and I didn't have to train people to do my job.
We did some trials of quad socket x86, but didn't see good results. I didn't run the tests, but my guess from future reading is we were probably running into NUMA issues, but didn't know how to measure or address them. I have also seen that often two dual socket machines are way less expensive than a quad socket with the same total number of cores and equivalent speeds; with Epyc's core counts, single socket looks pretty good too. Keeping node count down is good, but it's a balance between operation costs and capital costs, and lead time for replacements.
The BEAM ecosystem is fairly small too, so you might be the only one running a 16 socket POWER 10 beast, and you'll need to debug it. It might be a lot simpler to run 16 single socket nodes. Distribution scales well for most problems too.
FWIW, the soft limit for 70 units is about using the global module, which provides consistent state in the cluster (i.e. when you do a change, it tries to change all nodes at once).
The default Erlang distribution was shown to scale up to ~300 nodes. After that, using sub-clusters is the way to go and relatively easy to set up: it is a matter of setting the "-connect_all false" flag and calling Node.connect/1 based on the data being fed by a service discovery tool (etcd, k8s, aws config, etc).
PS: this data came from a paper. I am struggling to find it right now but I will edit this once/if I do.
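In Elixir terms the recipe looks roughly like this (a sketch; discover_peers/0 is a placeholder for whatever your discovery tool returns):

  # Start each node with the distribution flag, e.g. in a release's vm.args:
  #   -connect_all false
  defmodule MyApp.Cluster do
    def join_subcluster do
      for node <- discover_peers(), do: Node.connect(node)
    end

    # Stand-in for querying etcd / the k8s API / AWS config;
    # would return node names like :"app@10.0.0.2".
    defp discover_peers, do: []
  end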
I can also mirror this general guideline. I've run 250+ node Erlang clusters just fine in the past. There were some caveats to how some built-in OTP libraries behaved, but they were easy to replace or work around.
That was many years ago as well. The distributed erlang story has improved with more recent releases (better performance on remote monitors for example) which might push the number a little higher than 300 if you are careful. Keep in mind the default style of clustering is fully connected so there is some danger in managing that many network connections (quadratically scaling for each node added) during network partitions which can be a problem if you're not tuning things like TCP's TIME_WAIT for local networking conditions.
Even better, these days there are great libraries like partisan (https://github.com/lasp-lang/partisan) which can scale to much larger cluster sizes and can be dropped in for most distributed Erlang use cases without much effort.
with BEAM you get in a node what you would get in a k8s cluster. if you only look at supervisor trees and you grok the concept, you're streets ahead of the average developer and their "distributed" systems knowledge.
I find this quite funny because it's my first time hearing I was supposed to be thinking of Elixir as a Ruby thing. I actually learnt about it from a concurrent computing class and it was always an Erlang thing, and now I know it as the magic sauce behind Discord that I always want to try and never find a good reason to.
> I always want to try and never find a good reason to
As someone who learns best by doing, what are some practical projects that someone could do to learn Elixir? I know that Elixir is quite capable of solving certain kinds of problems very elegantly, but maybe my experience hasn’t presented these kinds of problems yet. Outside of building a Discord-like server or a Phoenix web app, what other good practical projects/applications are there for Elixir?
I'm probably the crazy one in the community who is using Elixir for the most super-strange things. For example:
- as a custom DHCP server to do multiple concurrent PXE booting (among other things)
- as a system for provisioning on-metal deployments (like ansible but less inscrutable).
- as a system for provisioning virtual machines over distributed datacenters.
I'll probably also wind up doing DNS and HTTP+websocket layer-7 load balancing too by the end of the year. Probably large-size (~> 1TB) broadband file transfer and maybe even an object storage gateway by next year. I've rolled most of these things out to prod in just about a year. I honestly can't imagine doing all of these things in Go without a team of like 20.
Elixir sucks at:
- platform-dependent work, like iOS, Android, or SDL games or something,
- number-crunchy, like a shoot em up, or HPC.
- something which requires mutable bitmaps (someone this past weekend brought up "minecraft server").
Actually even desktop might be okay, especially if you pair it up with electron.
> something which requires mutable bitmaps (someone this past weekend brought up "minecraft server")
One thing I'd like to see for the BEAM communities long term is well-maintained libraries of NIFs[0] for high-performance and possibly mutable data structures. Projects like rustler[1] and the advances made on dirty schedulers make this more feasible than it used to be.
It would be cool to write all the high level components of a minecraft-esque game in Elixir, and drop down to rust when you need raw performance. Similar to the relationship between lua/c++ in some modern game engines
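For what it's worth, the Elixir half of such a NIF is already pleasantly small. A sketch following the Rustler README pattern (app and crate names invented):

  defmodule MyGame.Chunks do
    # Loads the compiled Rust crate as a NIF when the module is loaded.
    use Rustler, otp_app: :my_game, crate: "chunks_native"

    # Fallback clause: raises unless the native implementation replaced it.
    def mutate_bitmap(_bitmap, _x, _y, _value),
      do: :erlang.nif_error(:nif_not_loaded)
  end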
Currently my main system in prod is a (very smart) stateless caching system. I don't have a database, but I will eventually spin up a (probably Amazon) database for the web gateway, once the keys to that are handed over to me. One of the nice things is that using Elixir really puts you in the mindset of thinking hard about who should have responsibility for "the truth". No hiding behind objects backed by ORMs. State is pushed out as far as possible to the edge, and I don't trust my server to keep state correctly. As soon as it's at risk of becoming inconsistent (a netsplit, for example), the state-holders in the server are slaughtered, rebooted, and told to reconnect to the edge machines, which are trusted with the truth. The nice thing about Elixir is that this restart logic costs the price of like 5-6 lines of code across the entire codebase (no worrying about cleaning up resources like TCP connections or anything).
Not in prod (personal project), I have a websocket-scraper on an aws free tier that's dropping data into a sqlite file, that I can transfer to my laptop for offline processing, so there are lots of interesting options.
This is really interesting to read, I’ve primarily thought about the DB as something which keeps your objects warm. Any recommended sources to read more about this way of treating state?
Haha, I dunno, I just came up with it because it seemed like the sensible thing to do. And the virtualization library I use (libvirt) is basically a database anyway. Whenever I've talked with people who know better than I do about distributed state (and IoT), everyone seems to agree that pushing state management to the edge is the correct strategy in this case. For some things (like tracking customer intent and business logic) where consistency is important, like when you're responsible for someone's $$$, you want to centralize and really be 100% certain your state is safe, and you want a 'stop the world' lever when things go wrong, to prevent compounding inconsistency. (I learned this analogy from Stephen Nunez, who got it from Toyota principles.)
In crude and probably wrong terms, AP concerns can be more easily pushed to the edge and CP concerns like to be centralized.
When you say push state to the edge, would this be similar to how with the onion architecture you have the domain model in the center but repo access to get data on the outside?
Edit: So as not to have the DB be the ‘core’ of the app?
If you need to do batching operations, such as reading from a queue, doing some processing, and publishing to S3, I can recommend Broadway https://github.com/dashbitco/broadway (see the sketch below).
I have seen lots of codebases in lots of languages do this type of task, but aside from maybe Spark on the high end, I haven't seen it done better.
The beauty of Erlang is that your code reads like synchronous code, so it's easy to read and maintain, but it has all the power and parallelism of async code.
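A skeleton of that pipeline shape (the producer module, queue URL, and batch numbers are all placeholders; check the Broadway docs for real options):

  defmodule MyPipeline do
    use Broadway

    def start_link(_opts) do
      Broadway.start_link(__MODULE__,
        name: __MODULE__,
        producer: [
          module: {BroadwaySQS.Producer, queue_url: "https://sqs.example/my-queue"},
          concurrency: 1
        ],
        processors: [default: [concurrency: 10]],
        batchers: [s3: [batch_size: 100, batch_timeout: 2_000]]
      )
    end

    # Runs concurrently across the processor pool.
    def handle_message(_processor, message, _context) do
      message
      |> Broadway.Message.update_data(&process_record/1)
      |> Broadway.Message.put_batcher(:s3)
    end

    # Receives up to 100 messages at a time for the S3 upload.
    def handle_batch(:s3, messages, _batch_info, _context) do
      # upload_batch_to_s3(messages)  # placeholder
      messages
    end

    defp process_record(data), do: data
  end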
> I know it as the magic sauce behind Discord that I always want to try and never find a good reason to
Don't ever want to build a web app? That's pretty much Elixir's sweet spot, IMO. You'll get great productivity, great scalability, performance that's better than popular interpreted languages and a code base that's easy to reason about.
You'll also be able to do a lot from within the VM instead of relying on external services.
Aside from getting rid of decades of language cruft, mutability, etc, one thing I doubt Node will ever have is a fairly unified ecosystem with a dominant framework like Phoenix.
I migrated the code base for my last startup from Node to Rails at the end of 2015, then from Rails to Phoenix at the beginning of 2016. Pretty much all of that Elixir + Phoenix 1.1 code is still fine today. There haven't been any breaking language changes and the most arduous part of upgrading Phoenix to the current version would be the front-end—replacing Brunch with Webpack and removing Bootstrap. Other than that it's just a few lines of code.
The Rails app is slightly more dated but still fairly trivial to update. Most of the gems still do things in the same way and there's generally a clear way to go. The JS app on the other hand, is a pretty depressing mess of broken dependencies and multiple libraries I was using have dramatically changed their APIs.
For me, productivity and ease of maintenance matter much more than popularity does when picking a tech stack.
The first time was early on while still at an exploratory stage. I made the change for productivity reasons. Rails did pay off on that axis, even though I was much less experienced with it than the original stack.
The second time was because I'd gotten it to where it was getting users but I couldn't afford the server costs (given that it was free to use and my users were largely in Vietnam and Thailand). I needed something that could scale much more cheaply.
If I'd had millions of dollars in the bank, I'd have just stuck with what I knew and added people and servers as necessary.
Same. I had never even heard there was a relation to Ruby; I always thought it was just a functional language for the Erlang runtime. I had no idea the syntax was "Ruby-like".
You are being outright negative about Elixir throughout the entire thread, generally without providing good reasons why outside what seems to be your personal bias.
No one is forcing you to drop Erlang for Elixir
I guess it's worth literally showing your measurements. The Elixir macro system is a vast improvement over merl. When people try to make reusable gen_servers in Erlang, it often becomes a mess. gen_listener in the Kazoo project is a mess and very hard to understand. I'm confident that if Elixir had been around when it was first written, and they had opted to use Elixir, it would be way more approachable for developers. Another example from the Kazoo project where Elixir would help is the ability to provide default argument values in function definitions.
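For those who haven't written Elixir, the default-arguments point looks like this (a trivial, made-up example); Erlang would need two explicit clauses:

  defmodule Notifier do
    # One definition generates both notify/1 and notify/2.
    def notify(user, channel \\ :email), do: {channel, user}
  end

  Notifier.notify("ada")        #=> {:email, "ada"}
  Notifier.notify("ada", :sms)  #=> {:sms, "ada"}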
Elixir itself has been the genesis of several improvements and optimizations to the BEAM. These contributions came from core Elixir developers.
I don't think your comparison to coffeescript is fair at all.
c++ was not created for the specific purpose of making a language feel like ruby
c++ added extensive new tools that the base language did not have
c++ did not get its engineering team hired off to work on something unrelated
c++ did not create problems by misunderstanding c designs and trying to change them
in these four ways, elixir and coffeescript are alike
> Joe Armstrong himself was quoted as saying Elixir was some "good shit"
Yeah, Joe also said that object orientation was a terrible mistake, and if you read his design principles, Elixir was swimming upstream of most of them
When Joe got asked about this, his answer was simple: "I just like to support other peoples' programming languages. You can tell my personal preferences by what languages I use."
> c++ was not created for the specific purpose of making a language feel like ruby
> Yeah, Joe also said that object orientation was a terrible mistake, and if you read his design principles, Elixir was swimming upstream of most of them
Did you, by any chance, use Elixir in 2011 and never really re-approached it?
Back in 2011, Elixir was an attempt to bring Ruby and object-orientation to the Erlang VM. It had a prototype-based model, it had eval-based meta-programming, it had loops, and, as you may guess, it failed pretty spectacularly.
But at the end of 2011 and beginning of 2012, I started from scratch. Object-orientation has been removed, a Lisp-based meta-programming model was introduced, most Ruby influences were removed except for minor syntax constructs.
Saying that Elixir was "swimming upstream" of those design principles has been inaccurate since 2012 - as are most of your other comments in this thread. Our main additions at the language level, compared to Erlang, are macros and protocols, where one is Lisp-based and the other is taken after Clojure/Haskell. From 2012 on, putting the Erlang VM front and center has literally been one of our main design goals.
Suggesting that Elixir has done little to improve on what Erlang offers seems very untrue. If you write as much Erlang as you claim to, the fixes to string/binary handling and structs/records alone represent a huge productivity gain.
But everyone's entitled to their own opinion. I occasionally read opinions from polyglots who prefer Erlang's syntax to many other languages'. But I personally much prefer Elixir's syntax to Erlang's: for one, there are the issues in dealing with strings and binaries mentioned above; for another, I find the punctuation rules in Erlang to be a complete PITA ("." and ",", anyone?).
I suppose it's all opinion, but the points you make about Elixir, when it really is very similar to Erlang 'semantically' (not syntactically), are interesting for how at odds they seem with any of the expert Erlang and Elixir developers I've spoken to, or any of the better researched opinions I've read.
Which is why I find your posts interesting, if not a little misguided.
The article says nothing about them switching to rust. They implemented a specific data structure in rust using elixir NIFs (native interface). This is equivalent of calling a C library from other languages.
Others have already repeatedly mentioned this was a performance optimization. Elixir/Erlang is not particularly good for heavily CPU-bound tasks -- this is by design. For lack of a better source, here is a post on the Elixir forum from Robert Virding (one of the creators of Erlang) on the subject: https://elixirforum.com/t/on-why-elixir/34038/62
The best things about Elixir are mix and Phoenix. We can all talk about how well it behaves under load on multicore machines, but that is the same conversation we would have about Erlang. What pushes Elixir beyond Erlang is the advanced macro language that allows for things like Ecto, mix with modern Ruby-like gem+rake-style dependency management, and a really, really solid testing framework.
Elixir/Phoenix is really good. And the ecosystem is also pretty solid.
Pros:
* Functional language
* Multicore support built in
* Mix
* Phoenix
* REPL
* solid ecosystem of most needed tools
Cons are:
* Functional language
* Still niche adoption, not many talented people to pick from.
* If you are deploying via a release (as you should), mix goes away in production
The functional-language point can be a plus or a minus depending on who's reading it, etc.
Now things like LiveView are just cherry on top. In general Elixir/Phoenix is a full package.
Phoenix is absolutely necessary for the success of Elixir, because languages at this level of abstraction need a web framework to thrive.
I'm not sure I find Ecto and Phoenix as central to Elixir's value proposition in general. I'm looking a lot at Nerves (IoT) and Membrane (media serving). Having a strong web framework simplifies things, and Phoenix is good there. But there are a lot of things to like about Elixir/Erlang/BEAM.
Tbh, I feel comfortable writing Elixir and dealing with OTP. But while I find Plug and Ecto to be very interesting projects, I've not touched Phoenix at all. I just didn't feel the need.
I haven't done a release with Elixir or Phoenix yet and the documentation about it is quite confusing. There are many different ways but I have no idea which one to choose.
I went through this a short while ago for a new side project. You can totally use Mix, and you'll be fine with it initially, but you lose a lot of the benefits of BEAM. A proper BEAM "release" has several key advantages. Most of them are in the Elixir docs [1], but I'll point out the ones I like.
* You don't need to include the source code, because you have the pre-compiled code in the release, and that includes packages as well. No need to `mix deps.get` on production. Releases don't even require Erlang / Elixir on the production box, because that's also baked directly into the release. As long as your architecture is the same as your build machine's, you get a super lightweight artifact and a simplified production stack.
* It's very easy to configure the BEAM vm from releases. I think most of this is possible through mix with everything installed on the box but using a release you get it put into the release artifact and there's no fussing around after that.
* It also makes life easier when you start using umbrella applications. You can keep your code together and cut releases of individual applications under the umbrella. That lets you scale your application while keeping it together in a single unit (if that's your thing).
There are other benefits, but ultimately it's the way the erlang/elixir/beam community seem to prefer. For me this was the selling point, as I expect tooling will continue in that direction rather than supporting mix in production.
An OTP/Mix release will produce an executable which bundles your code with the whole BEAM runtime, so for a start, it means you won't need to install Elixir on the host machine / server.
OTP based release executables also have an internal versioning system. When people talk about the Erlang/Elixir hot code swapping feature, OTP releases are the basis for it. But it needs a bit of extra work beyond just creating the release binary itself.
Yeah it's super confusing, especially since mix releases are now a thing, but weren't before.
I've heard Chris McCord (the author of Phoenix) say he doesn't use Elixir releases in production in most of his consulting company's client work. He talked about it in some podcast like 6 months ago. I think they just run the same mix command as you would in development but he wasn't 100% clear on that.
But yeah, it's not easy to reason about it, and also if you decide to use releases it's a bummer you lose all of your mix tasks. You can't migrate your database unless you implement a completely different strategy to handle migrations. But then if you ever wanted to do anything else besides migrations that were mix tasks, you'd have to port those over too.
That's not true at all. Mix Tasks are just code. Assuming you stashed them in /lib (or someplace that elixirc reaches) you can call them in a release using eval.
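For the migration case specifically, the shape most people land on (close to what the Phoenix deployment guides suggest; :my_app is a placeholder) is an ordinary module invoked through eval:

  defmodule MyApp.Release do
    @app :my_app

    # Invoked as: bin/my_app eval "MyApp.Release.migrate()"
    def migrate do
      Application.load(@app)

      for repo <- Application.fetch_env!(@app, :ecto_repos) do
        {:ok, _, _} =
          Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
      end
    end
  end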
I've seen a bunch of answers around having to jump through larger hoops to get mix tasks running in releases. Are all config options still available to be read in mix tasks that are called that way? What about Mix functions like Mix.env()?
If you Google around the topic of database migrations in Elixir releases you'll find like 5 different ways to do them with no clear "this is the best answer".
Mix doesn't exist in releases. You can get config options like normal, but not `Mix.env/0`. You can compile your current env into a config option though, and use that at runtime.
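One way that's commonly done (Elixir 1.11+; :my_app is invented):

  # config/config.exs -- evaluated at build time, so the value is baked
  # into the release, where Mix.env/0 no longer exists:
  import Config
  config :my_app, env: config_env()

  # anywhere at runtime:
  Application.get_env(:my_app, :env)  #=> :prod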
Not many talented people to choose from applies to any language though. And a functional language weeds out all the people who aren't willing to try something new.
As someone with a background in Objective-C, Swift, C++, C# and Java, and currently using Ruby, I'm looking for my next language for web development. Elixir sounds like a step up from Ruby, but I really miss static typing, and I find it hard to justify investing time in yet another language that doesn't have it.
But what are the alternatives? I'm looking for something with static typing, good editor support, mature community projects (e.g. testing on par with rspec), faster than Ruby (though most statically typed languages tend to be) and if it supports some functional paradigms that could be a plus (I dabbled in F# and SML previously).
- Scala is an option, but build times sound like an issue, and the JVM is a bit 'heavy' (e.g. startup times, min memory usage).
- Haskell sounds cool, but maybe a bit too esoteric? (i.e. is it worth the trouble, does it have all the middleware I'm used to)
- C# could be an option, but is the open source/community support there? (if you're not a corporate shop, doing corporate things).
- And then there's Rust, which I'm fascinated by, but I'm also worried that I'll be less productive worrying about lifetimes all the time, and that it's less mature (though growing fast, and seems to have attracted an amazing community of very talented people).
I'm also interested in ways to use a language like that in the frontend - Scalajs sounds pretty mature, C# has Blazor and Rust seems like one of the best ways to target WebAssembly.
So what is a boy to do? Stick to Ruby until the Rust web story is more mature? Try out Elixir in the meantime? Join the dark side and see what C# web dev is like these days? It can be really hard evaluating ecosystems like that from the outside.
Gleam[1] is a typed language on the BEAM. It's still in its early days, more so than Rust. May still be worth keeping an eye on. Nim[2] and Crystal[3] also exist. No idea what their web situation is like, but Nim has a JS compile target, that might be interesting.
> but [in Rust] I'm also worried that I'll be less productive worrying about lifetimes all the time
I'd avoid getting too concerned about this! Explicit lifetime annotations are nowhere near as prevalent in 'modern' (2015+) Rust as they were in the early days, and it's somewhat rare to need to think about them.
The most helpful advice I had on this was to avoid passing references in Rust beyond simple cases unless there is a real need - owning data is much simpler, and clones are generally fine until you get to optimising a program.
If you're interested in Rust for web dev, now is a great time to jump in - while not as mature as Ruby or C# in terms of ecosystem, Rocket is now usable with stable Rust, and Warp is (IMO) a close-to-best-in-class library for building HTTP servers.
I spent the last 8 months learning Elixir, Ecto & Phoenix etc... I am about to leave it and spend more time in C# as I really miss static typing, I also miss the way the documentation is written, the amount of material to read on .net and C#. The library, community and packages are richer for a start. Will be interesting to see if I miss Elixir at all.
If you're willing to be patient, I think that in the next year Elixir is going to get an add-in aggressive static typing library (better than dialyzer) enabled by compiler hooks. It's going to happen, and it's going to be very good.
I'm very, very sorry; I should have been less emphatic, less flippant. This is an unsubstantiated prediction based on what elixirc hooks will enable and what the community wants, but the momentum has swung to the point where I think it's inevitable and the timeframe is correct, +/- 6 months.
I've been learning F# and OCaml the last couple weeks and I really like the syntax. (I have been using elixir for 4 years)
The main thing I don't like about OCaml is that it doesn't have function overloading. I love the arity-based (number of arguments in a function) function signatures in Elixir, but that's not possible in OCaml since it curries functions.
e.g. in OCaml, calling a function with fewer arguments than expected will return a function that takes the remaining arguments:
  let add = fun x y -> x + y
  let add_four = add 4  (* returns a function expecting 1 arg *)
  let nine = add_four 5  (* returns 9 *)
Not a big deal, but I would rather lose this built-in currying in exchange for arity-based function overloading.
Otherwise, OCaml is a fast language, in both compile times and runtime performance.
I agree that the lack of polymorphism is annoying. A solution to this would be modular implicits. This would allow the built-in currying to work as well.
It's just an OCaml syntax layer, so yeah. You just use the normal tools and the build system just does the right thing. Interspersing reason and OCaml files in a single project works fine too.
This. Although I would not use Reason, since the compiler layer, BuckleScript, changed its name to ReScript to rebrand as its own frontend language and left Reason holding the bag. There is no reference to OCaml in any documentation that was once under the BuckleScript project. It even created its own syntax, different from Reason and OCaml, that looks more like JavaScript.
It basically should have been a new project and have had nothing to do with bucklescript.
The worst part is that the owner of BuckleScript even owned some properties with "reasonml" in the name (like reasonml.org and the reasonml Discord group, which weren't owned by the Reason team), and then he pointed all of those at ReScript. Just the confusion did some serious damage to Reason.
If JVM isn't a blocker try Kotlin. It compiles faster than Scala and has a great IDE experience. Runtime performance is very good of course with the usual JVM caveats (high memory usage, need to use recent JDK for latency optimised GC if you need that).
Could be worth your time to check out Vapor, the server-side Swift web framework. It obviously doesn't have the ecosystem of the more popular frameworks, but it has been under active development with corporate sponsorship for a few years now.
> Elixir sounds like a step up from Ruby, but I really miss static typing, and I find it hard to justify investing time in yet another language that doesn't have it.
Static typing serves basically two purposes: correctness through static type analysis and enabling compiler optimizations.
Like many languages that don’t mandate static typing, Elixir has available static type analysis solutions; in Elixir's case (as for Erlang) it's Dialyzer, which does more static analysis than just what is usually thought of as typechecking.
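To be concrete about what that looks like: specs are opt-in annotations, and the checking is a separate pass. A small example (the Price module is invented):

  defmodule Price do
    @spec total([{String.t(), integer()}]) :: integer()
    def total(line_items) do
      Enum.reduce(line_items, 0, fn {_sku, cents}, acc -> acc + cents end)
    end
  end

  # Running `mix dialyzer` (via the dialyxir package) can then flag callers
  # that break the contract, e.g. passing floats where integers are promised.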
I love Elixir, but I will never advocate Dialyxir for somebody looking for static typing. It's super slow, it's not as powerful, and rarely (but sometimes) it rejects programs that are valid.
Elixir seems to encourage simple data structures — everything is made up of basic data structures, and since there's no encapsulation, libraries seem to be built with an attitude of "developers are gonna inspect everything so we might as well make things clear and simple". I only noticed this in contrast to libraries in popular OO languages (most recently Python) where everything is done through objects that often have inscrutable instance variables and "missing" methods/methods that library authors simply haven't gotten around to implementing.
Having a small library of functions operating on a small number of data structures makes programming a lot more intuitive than a large number of classes, each with their bespoke set of things you can do to them.
Instead of "lack of encapsulation", it's more "lack of private state". In an object oriented language, you have private instance variables and methods to manipulate them. In a functional language, you have functions which manipulate data structures.
If possible, the data structure would have straightforward fields which are public and documented. If necessary, you might make it opaque, expecting only the library which manages the data structure to manipulate it.
One way to think about this is that in functional programming, the "verbs" (functions and manipulation patterns like map) are more generic and the "nouns" (data structures) are less important than in OO languages. See http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...
There is no encapsulation like in OOP (data level), but there is another kind of encapsulation (process level): a process is the only one accessing/modifying its own state.
And this is very liberating when you need to think about what is going on, what could go wrong...
> straightforward fields which are public and documented
The rationale (whether right or wrong) in OO for private fields and public "manipulator" methods is that the field's representation can be wider than the intended public use. For example, you may store the credit-card number as a string of digits, but only allow a correctly CRC'ed number to be set.
Of course, in a more functional paradigm, you would have a credit card number type (which is just a function that strictly enforces the domain and range of the possible valid values), rather than an object which can have independent identity storing the same information.
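In Elixir that tends to come out as a struct plus a validating constructor, with the module boundary standing in for encapsulation. A sketch (the Luhn check is elided):

  defmodule CardNumber do
    @enforce_keys [:digits]
    defstruct [:digits]

    # The only blessed way to build one; invalid input never becomes a value.
    def new(digits) when is_binary(digits) do
      if luhn_valid?(digits),
        do: {:ok, %__MODULE__{digits: digits}},
        else: {:error, :invalid_checksum}
    end

    defp luhn_valid?(_digits), do: true  # placeholder for the real check
  end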
It encourages a functional way of thinking, IMO: since you can't have mutable objects, use immutable data structures and functions which alter them, such that where you may have had:
x.f(y)
you now have something like:
x = f(x,y)
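In concrete Elixir terms (a made-up example):

  # Instead of mutating in place, every "update" returns a new value:
  map = %{name: "Ada"}
  map = Map.put(map, :language, "Elixir")  # rebinds `map`; the old value is untouched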
I've found that years of functional programming have mangled my brain. Now when I use something like Python, I only use dataclasses and accessor methods on them, or write methods which perform and return the result of some computation based on the values of the object.
I always joke with my team that the best thing I did for my Python was learn Erlang, and that was when I was stuck with NamedTuples and not DataClasses.
Every time I think about using Phoenix, I get scared off by warnings about how not knowing BEAM can result in serious problems. I’m not sure if that conclusion is justified but it’s where I end up every time. It’s odd and unfortunate that Elixir and especially Phoenix seem to have invested heavily in being approachable but the rest of the ecosystem seems to have warning signs posted all over the place.
Is this a fair impression? Or is it possible to run Phoenix in production and gradually learn more about the BEAM, leveling up as you encounter new challenges?
We work hard exactly so that you can run Phoenix in production and gradually learn more about the BEAM along the way!
One recent example is the built-in dashboard showing all the different data the VM and the framework provide: https://github.com/phoenixframework/phoenix_live_dashboard/ - at first it may be daunting but providing a web-based experience to help familiarize with the building blocks is hopefully a good first step. We also added tooltips along the way to provide information about new concepts.
The same applies to Elixir: learning all of functional programming, concurrent programming, and distributed programming would definitely be too much to do at once, so we do our best to present these ideas step by step.
For what it's worth, this is also true for Erlang. It has tools like Observer (which was used as inspiration for the dashboard) and a great deal of learning materials.
One last piece of advice is to not drink the Kool-Aid too much. For example, you will hear people talking about not using databases, about distributed cluster state, etc. While all of this is feasible, resist the temptation of playing with those things until later, unless your domain explicitly requires tackling those problems.
As a daily user of Elixir since 2016, having come from a Python/Django background, I can assure you that you can know very little about OTP/BEAM and still be immensely productive. But once you learn the underlying concepts that undergird Elixir, so many solutions that normally require queues/caching/back pressure/state machines/distributed architectures become possible with Elixir itself. I can honestly say that Elixir is the most powerful I've ever felt as an engineer in any language.
I suppose it depends on how specialized or 'heavy' your needs are. I suspect that for a decent portion of web applications, Phoenix might last you longer than others before you would need to dive into the nitty gritty in either language.
In my use case, most of my work is rather straightforward web apps. When using Rails I have occasionally run into issues where I had to tweak things or dive a bit deeper. With Phoenix these same projects would've been fine for a while longer.
Of course, when you do need to dive in, perhaps an advantage of Rails/Django/Laravel/etc. is that the ecosystem is larger, so your problem might be solved without having to really figure out more.
No, that impression isn't really justified. I'd even say that Phoenix largely uses BEAM and its GenServers under the hood "so that you don't have to", while still letting you reap their massive benefits.
You'd be perfectly fine (and it's not uncommon) writing a production app without writing a single genserver (kinda the cornerstone of BEAM) yourself, and you'd still get a highly performant web app that can handle a ton of concurrent traffic out of the box.
Then you can take it from there, deepen your knowledge, and write your own genservers and other BEAM goodies where and when you need them. Phoenix can guide you from the very basics, helping you write apps in simple fashion very much like in Rails, Laravel, Django, etc. It just has a larger range of possibilities on the upper end, letting you build massive high-traffic services, if you need to.
It's totally possible to run Phoenix in prod. I started learning elixir and am using Phoenix for https://getlounge.app. I don't use any of the advanced BEAM features like GenServers etc.
It's really not that bad to learn, to deploy and to work with. You don't have to go all-in distributed system, shared state etc to get a lot of benefit from Ex and the smart decisions it has made.
> Erlang has some bad rep for "weird syntax", but it's completely unfounded, the syntax is actually quite simple and clean.
Anything that isn't Algol-flavored and either procedural or class-based OOP tends to be seen as weird syntax, and much more so if it ticks both boxes.
It's not as much a matter of simple and clean as familiar in a world where almost everything people will be exposed to first tends to come from a narrow design space.
Elixir is like an abstraction on top of Erlang and its runtime (OTP). It heavily depends on the data structures of Erlang, and it's very difficult to extend it beyond Erlang's limitations / capabilities. When you take macros out of the way, Elixir programs can be translated line by line to Erlang. That OTP dependency is considered the great strength of Elixir by many, but it also has disadvantages.
The thing with Erlang is that it isn't a general-purpose programming language like Java or Python but a niche telecom software language. Erlang is marketed like that, and from what I've heard it's really great at that (telecom software). But nobody proposes writing, e.g., a game or a GUI app in Erlang.
Elixir, on the other hand, is marketed as a general-purpose programming language. People who start a journey of learning Elixir must be very careful and understand that there are a lot of applications Elixir can't be used for because of Erlang's limitations.
Also, when you start using it you'll see that common applications outside of the Erlang telecom niche, like the popular Phoenix web framework and its ORM-like library Ecto, make very heavy use of macros and message-passing abstractions that seem strange in a lot of situations. Of course, it all falls into place once you understand the Erlang dependency.
I can assure you Erlang is not a "niche telecom language", as I have used it, in prod, in so many ways that have nothing to do with telecoms. One mini-project I'm planning on working on for fun is to use it for HDL simulations, because its message-passing concurrency is a nice way of cleanly dealing with eventing voltage edge transition logic.
> But nobody proposes to write e.g. a game or a GUI app in Erlang.
Yet Square Enix uses Elixir for game orchestration.
While the original use case for Erlang was for telecom, it's more accurate to say it's for building networked applications that can be designed for fault-tolerance.
"Niche telecom software language" is a brutal misunderstanding of Erlang strengths. But a common one, which Elixir is actually trying to rectify - turns out BEAM is maybe the best thing out there for web apps.
> Elixir on the other hand is marketed as a general-purpose programming language.
Where is it marketed that way?
> It heavily depends on the data structures of Erlang and its very difficult to extend it beyond Erlang's limitations / capabilities.
It doesn't "heavily depend" on it and nobody is really trying to "extend it beyond Erlang's limitations". It's the exact opposite - it's trying to make use of all the Erlang & BEAM attributes (and trade-offs!) that make it especially adept for a given problem sets.
The Elixir website literally says this, and it's exactly what it is, nothing less, nothing more:
> Elixir is a dynamic, functional language designed for building scalable and maintainable applications.
> Elixir leverages the Erlang VM, known for running low-latency, distributed and fault-tolerant systems, while also being successfully used in web development, embedded software, data ingestion, and multimedia processing domains.
I have a few projects closely related to telecom stuff and I thought it might be fun to use Erlang or Elixir; however, I keep hearing people advise against using Erlang except in the specific niche of distributed applications, so idk anymore.
I will say there are some features in these languages that I have yet to see in any other languages, general purpose or domain specific, that I love.
I came to the same conclusion as the author, though not quite in the same way. Everything I needed to know about what Ruby was was given to me when I attempted to learn and love Crystal. Similar syntax does not make a similar language. Smalltalk is far closer to Ruby than Crystal ever could be.
What makes Ruby so lovable is, in a few words, the pure object orientation. This is the source of all its flexibility. Any concept can be created and tersely described. It's almost as semantically flexible as Lisp and ultimately friendlier.
You'll never get these benefits in a language that looks like Ruby. It's not the syntax at all, it's the semantics, and you can't get Ruby semantics without actually being Ruby.
> [ruby] is almost as semantically flexible as Lisp and ultimately friendlier
Elixir for the most part _is_ a Lisp, and inherits almost all of Lisp's semantic flexibility also
I write Elixir full time now after writing Ruby for several years. At first I struggled to get out of the Ruby metaprogramming mindset. After reading some advanced Lisp books, the concepts of quote/unquote began to click, and now I feel like my ability to metaprogram in Elixir is much stronger than in Ruby.
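For anyone who hasn't hit quote/unquote yet, the Lisp lineage is quite literal: a macro receives the AST of its arguments and returns a new AST. A toy example (hypothetical module, not from any library):

    defmodule MyMacros do
      # quote captures a block as AST; unquote splices the caller's
      # AST into it -- the same idea as Lisp's backquote and comma.
      defmacro unless_nil(value, do: block) do
        quote do
          case unquote(value) do
            nil -> nil
            _ -> unquote(block)
          end
        end
      end
    end

    # require MyMacros
    # MyMacros.unless_nil user, do: IO.puts(user.name)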
Not to take away from the sheer amaze-balls power that Lisp offers, but you can get Ruby superpowers from just one book, Metaprogramming Ruby 2, which is sadly out of print.
Coming into Elixir from mostly doing Python but having it strongly recommended by a Rubyist I was a bit surprised to find myself in the middle of an implicit ex-Ruby world. It has been fine. But in contrast to the article, Elixir has always been about Erlang/OTP for me. I've never cared much about Ruby/Rails because it felt equivalent to the Python/Django which I already know.
I would say that in my experience everything outside of Phoenix is more explicit than python. There's a lot of hidden implicitness in python that drives me up the wall when I'm trying to chase a bug in someone else's code; I rarely have that problem in Elixir (even with phoenix, usually chasing where the code comes from is not terribly hard).
Haha, I was referring to feeling the implicitness of "everyone came from Ruby" which I didn't, rather than anything in the language. Sorry if I was unclear :)
I hadn't heard of Elixir yet. I did use Ruby ages ago and loved the syntax, but the language felt a bit flaky at times (for example, I had a project where I needed Unicode, which turned out not to be properly supported).
Erlang has been on my radar ever since a friend wrote either Tetris or Conway's Game of Life in 3 lines of Erlang. Never got around to learning it, though. If Elixir is a friendlier gateway to Erlang, it might be exactly right for me.
At the time, making a language that ran on the Erlang VM with Erlang semantics but looked like Ruby probably seemed like a huge advantage for Elixir. (Plus, the people who developed it were Rubyists and former Rubyists who honestly liked the 'look alike' elements they took from Ruby; it wasn't just a cynical attempt at manipulation.)
But these days Ruby's popularity seems to be waning (personally I don't think for any technical reasons, especially when compared to its nearest neighbor Python; to me it's like a 'Betamax vs VHS' thing, and it makes me sad, but nonetheless) -- and the association with Ruby may actually be holding Elixir back, as people have (IMO unjustified) negative perceptions of Ruby. :(
Now that we have Elixir (Erlang looking like Ruby) and Crystal (Go concurrency dressed up like Ruby[1]), I'd primarily be interested to see how these two languages stack up against each other, what niches they might address, etc.
Indeed, but at the same time, it seems people are using both Go and Erlang/Elixir for a lot of similar use cases (scaleable web-apps), which makes me interested in what specific use cases they do respectively shine.
And neither Elixir nor Erlang has the raw performance, or the ability to run without a VM, that Go or Crystal has. That was the point of the grandparent: they provide different things. Not everyone thinks immutability is a feature that's good to have at the language level.
I don't know Rails. I know Python and Golang. I worked in a data-intensive, data-heavy startup, throwing out microservices and nanoservices in haphazard fashion, with K8s backing the "throw it and see what sticks" approach.
Elixir and Phoenix make me think: about three years ago, when we started the company, if we'd had these two technologies, we could've saved so much dev time, focusing on the core tech instead of making scaffolding work.
Same here. Microservices have their place if you expect to have to scale like craaaaazzzzy or if you know that your platform will be huge with many different parts but you can go a LONG way with a Phoenix monolith while being "happier" and more productive.
From my (limited) experience it's an ideal framework for startups or new web apps. I've also heard excellent things about replacing painful microservice setups with one Phoenix app.
There is just so much you can do with it at scale and also so much you don't have to do (e.g. channels + integrated pub/sub are a godsend). Definitely my new favorite framework (+ language) for web dev.
I really enjoy writing elixir and I’ve had some really quick wins that have scaled effortlessly on the projects I’ve deployed using it.
I would love to use it to build a desktop app. I've fooled around with Scenic for building a UI, but haven't figured out a way to distribute a binary besides a self-extracting executable.
Elixir is basically an Erlang that looks like Ruby instead of Prolog. Combined with Phoenix, which looks like Rails, this language is really getting traction.
I wonder if an Erlang that looks like JavaScript would work. It would be like what ReasonML is to OCaml, but it might work better on the server side.
I'm a few weeks into Elixir, and it and the ecosystem definitely feel more techy and mathy than Ruby. It's more comparable to JS and Node: like a more consistent JS without the OO stuff, plus convention over configuration.
And 99% of these jobs are legacy RoR codebases, big hairy balls of mud. Good for job security and it pays well (somebody has to support it all), but somewhat boring.
Also the ruby ecosystem is in decay. A lot of libraries are abandoned or on life-support - just do a random search on rubygems and check release dates.
> A lot of libraries are abandoned or on life-support - just do a random search on rubygems and check release dates.
Every language has this problem.
Even Elixir. The sole maintainer of the most popular AWS library isn't using AWS anymore so he stopped maintaining the package. He tried to give the project to another maintainer but after months of asking, no one wanted to take over the project.
There are also lots of Elixir libs where it looks like some package author was super gung-ho about making something and then the repo hasn't been touched in 4 years.
I see the same thing in Python too but one big difference is, with Python (and Ruby and Go and PHP) you often have officially supported libraries for things like AWS, Stripe and other important / popular services. With Elixir, you'll be on the hook for writing your own clients for most services.
I've asked Stripe a few times if they will add an Elixir client officially but they always give the same response of it's not worth it because there's not enough demand.
I want Elixir to get more popular but it's a very sketchy world outside of a few Elixir packages that are either developed / maintained by the core team, or have a huge following because it's solving a problem that pretty much everyone has. Basically you need to be prepared to develop libraries and a lot of glue code yourself to get started before solving your "real" business logic.
"legacy RoR" , so if it's not a 2 year old startup "changing the world" - it's boring?
Working in Stripe / Shopify / Github is boring?
I'm pretty sure, btw, that new projects are being started in Ruby in bigger numbers than in Elixir (which isn't saying much, but still).
I don't really love those language wars but I don't see why Elixir has to keep trying to assert itself by dissing Ruby. First - at least by numbers, it's way way smaller. Second - it should be more confident in itself.
I'm not bashing Ruby or anything. I love Ruby, I have been using Ruby/Rails since Rails 0.9. I'm still using Ruby with Roda/Sequel, and I think it is the best language to rapidly prototype and explore ideas.
But the reality is that all the interesting and cool stuff is happening somewhere else now. Ruby is still a great language, but the active community outside Rails is shrinking and the creativity isn't there anymore. People prefer to explore novel ideas in different languages now (Rust, Go, Elixir, etc.), even if they do Ruby for a living.
And Rails is showing its age too. Every Rails project I've encountered was a petrified spaghetti monolith that had to be broken up and refactored into smaller pieces.
If I write an application in asp.net core it will continue functioning on the most recent version of .net core and asp.net core 5 years from now.
If I do the same thing on RoR it will not, guaranteed.
I've lost count of the number of times I've gotten a new client and had them on a web framework that was EOL, on a version of the language that was EOL, and on an OS that was EOL (because they couldn't get the EOL language/framework running on a modern OS).
Anytime you use something like RoR you're automatically taking on a higher maintenance burden because you MUST upgrade or you'll find yourself in a spot where you're trying to decide if you want to rewrite or pull it up to the newest version.
So asp.net core is pretty much a complete rewrite and a different runtime than asp.net, right? So it's not like huge changes don't ever happen in Microsoft land. If you have clients running on asp.net that ask to migrate to .net core, is that going to be painless? The clients probably all wanna run on the newest thing.
And who's to say .net core doesn't have any breaking changes in 5 years, are they guaranteeing not to do any breaking changes at all to the framework?
Going from .net framework to .net core will mostly be painless unless you're using a Windows-specific API (such as windowing).
If you're using ASP.Net MVC then your app should also come across to .net core fairly painlessly.
The only real challenge is going from the old webforms with its page lifecycle to the asp.net core. But depending on how the application was originally implemented, that really should just be about the frontend, not the entire app. And of course EF core has changes you'll need to deal with as well if you're using it.
It's by no means seamless, but MS did the best they could to maintain compatibility between the two, and you only do it once.
It should also be noted that .net framework is almost 20 years old. Doing something like this once in a 15 year period is a far cry from Laravel's every 6 months or RoR's 2-3 year cadence. IOW, it's not optimal that the divide between framework and core exists, but it's a far better story than Laravel, RoR, et al.
> And who's to say .net core doesn't have any breaking changes in 5 years, are they guaranteeing not to do any breaking changes at all to the framework?
ASP.Net core is literally just a set of libraries. You'll be able to run those on the modern version of .net in 10 years from now. We know this because it's what they did with asp.net. An application built on asp.net 1.0 will work on 4.8.
"We know this because it's what they did with asp.net. An application built on asp.net 1.0 will work on 4.8" . Yep, up to the point where they replaced the framework and called it .net core.
You can also keep running Rails 3 for 20 more years if you want btw.
We generally agree that .net is more stable than Rails though, your point is correct.
> You can also keep running Rails 3 for 20 more years if you want btw.
While I understand your point, not on a modern OS you can't. You literally can't build the tools necessary to run it on a modern OS. I know because I've had to deal with this many, many times. You're forced to run that stack on a version of Ruby that's EOL and on an OS that's EOL.
I mean yes, you CAN do that, just like you can run Windows Server 2000, but it's really really not a good idea.
OK, stability is relative.
Compared to Node / Elixir Rails is still pretty stable. I'd put it with stacks like Laravel / Django.
ASP.net probably changes less, I agree. But .net core was a huge change though, kind of a new framework right? new runtime even.
I don't have any experience with Elixir / Phoenix.
From comments I read, even things like deployment change frequently in Elixir, and major libraries are created or abandoned.
It's just still a very new ecosystem.
In 5-10 years I'm sure the rate of change will decline.
Then there is a stack like Node where constant change seems to be just part of the culture.
Rails changes too, but not as much.
Elixir compiles and runs on BEAM, which is a technology that's roughly 34 years old.
I think most Elixir projects are on Phoenix, and while I can't speak too much about the speed of change in that project, the poster you're responding to has clearly stated they have an app that's 4 years old and running just fine, so I have to believe it's not as much of an issue as you think.
It was a Phoenix app. Upgrading from v1.2 to 1.5 is very straightforward, the largest part being replacing Brunch with Webpack for the front-end.
There are some new choices for building releases that make it easier for people who prefer to use runtime environment variables for configuration instead of compile-time ones. Previous ways of building releases still work as before.
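Concretely, newer Elixir (1.11+) lets you put runtime configuration in config/runtime.exs, which a release evaluates at boot rather than at compile time. A sketch with hypothetical app and repo names:

    # config/runtime.exs -- evaluated when the release boots, so the
    # env var is read at run time instead of baked in at compile time
    import Config

    if config_env() == :prod do
      config :my_app, MyApp.Repo,
        url: System.fetch_env!("DATABASE_URL")
    end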
I make releases with Distillery just as I did back in 2016. The main change in my release workflow is that now I use Gitlab CI/CD, and that's unrelated to any Elixir or Phoenix changes.
I don't know what that means in this context. What I do know is that RoR versions go EOL roughly 2-3 years after they're released. That's not stable by any stretch of the imagination.
True, I see many Ruby, PHP and Java job postings and few Elixir/Phoenix jobs. I hope that will change, because Elixir is so much more productive and fun to program in IMHO. In the meantime I will be building awesome things with it.
I don't think so. Let me explain: Elixir is now 9 years old. Node is 11 years, Golang is 10 years. Node and Go peaked in popularity very fast.
Everyone knows of Elixir already, it's not as if there are countries / organisations who aren't aware it exists as was the case in the 90s with Ruby for example. So Elixir already peaked imo, and didn't catch enough traction. And it will only go downhill from here.
I'm not saying this out of spite: I'm a Rubyist and well aware that Ruby lost and continues to lose traction as well. But Ruby was able to capture a big enough segment during its peak to last. I'm not sure this happened with Elixir.
I hope Elixir people can keep getting paid for decades, don't get me wrong, I'm just not sure there are gonna be enough jobs for them.
After using Elixir professionally for a while, I still barely knew that it was supposed to have anything to do with Ruby aside from the oversized punctuation.
I think there's room for debating whether this site's fetish for new programming languages correlates to the real world. I put forth that perhaps it does not, and you're free to disagree.
Flagging me as a troll because you don't like my opinion seems a bit much
No one flagged you as a troll. But the kind of discussion you were starting here is extremely tedious—it has been repeated a million times and provokes people into making dumber and nastier posts. Surely it's not hard to understand why we don't want that here?
Maybe, but our company uses services from at least three companies that use elixir: Divvy, Slab, and Pagerduty. Discord uses elixir. And Whatsapp famously uses erlang. But I suppose those companies are fictional.
Don’t restrict yourself to real-world tech stacks! Regardless of whether these are or are not used by large companies, playing with different languages teaches you stuff that improves how you write your own.
Don’t be! I’ve been using Haskell at work for the last year, but recently I discovered how much easier it is to use Java/Scala because of all the great libraries. Grass is always greener :)
Scala is probably the most expressive statically typed language around, so I personally wouldn't feel bummed about having to use it. Java on the other hand...
Does Scala see much use out of the big data niche nowadays? I've considered picking it up, but I'm always discouraged by someone telling me to use Kotlin instead, or stories of companies abandoning Scala for reasons like complexity and on-boarding.
Honestly I thought this about Java because people told me they thought this about Java, but once you try work with a language lacking libraries you need it’s a lot more appealing! Also I found things like Akka and JML make it even fun to write.
Because I shared your views until I started working with it at a good team :-)
As I continued writing Java as well as projects on other stacks, Java became less and less annoying, until today it is one of my two favorites.
Last time I tried Elixir it had some rough edges where you had to use some other ugly-ass language (I think it was Erlang?) to do some things. I immediately stopped learning after that. It feels like before learning Elixir, you need to learn Erlang, and I was just too lazy. It's kind of like using Clojure: you don't NEED to know Java, but knowing Java helps. A lot. You need to learn the ecosystem, the standard library, etc. You don't just pull a banana, you pull the gorilla holding the banana.
Ahem. The “gorilla holding the banana” metaphor is from Joe Armstrong — creator of that “ugly-ass language” Erlang; what’s more, it is about emergent reusability in comparative programming paradigms, not about being unwilling to follow a learning curve.
Having spent about a year (so far) building personal projects with both Clojure and Elixir, I think Elixir exposes/requires you to know much less of the underlying Erlang/BEAM than Clojure does for Java/JVM.
Maybe it's just me. I can read and write Erlang, but I still think it's harder to read. It's got a lot of baggage behind it (like <<"this monstrosity for binaries">>); the map syntax is atrocious, and lexical substitution should probably be considered a mistake in this day and age.
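For readers who haven't seen the forms being complained about, here they are next to their Elixir spellings:

    # Erlang binary:  <<"this monstrosity for binaries">>
    # Elixir string:  "this monstrosity for binaries"
    #   (Elixir strings are BEAM binaries, just with nicer syntax)

    # Erlang map:     #{name => <<"Ann">>, age => 30}
    # Elixir map:
    %{name: "Ann", age: 30}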
I've reviewed erlang apps and I found them surprisingly easy to read, like really really clear. Perhaps it was the developers who did a good job, but I suspect it is the language. Sure some of the syntax is not super pretty, but I'd bet you would get used to it if you programmed in erlang more frequently.
I can't credit that to the developers doing a good job: most of what I read in Erlang is the OTP source. There are some real stinkers in there, in modules that barely anyone uses anymore (so there aren't any bug reports), like tftp (but when you need it, you need it). And if you've ever tried chasing the ssl code, there are definitely some Java-esque factory patterns hiding in there, where one module or another hands an interface back to its caller and the call stack weaves between two or three modules several times.