Granted, Elixir's Phoenix framework is not at Rails' level yet, but its system of plugs and heavy inspiration from Rails actually make it quite usable as a Rails replacement. Also, immutability and the associated smaller memory footprint make it more feasible to run cheaply on services like Heroku or AWS. Personally, I think Elixir is a good boat to jump onto to carry us over to the next world of web development.
But! You're quite right. Elixir is focusing on the web-application-type things that Erlang and the community around it have never much addressed. There's enough interoperability from Elixir to Erlang that you could probably say the ecosystem as a whole is addressing the problem, if you squint hard enough.
Side comment, it's been interesting to see Elixir grow since 2013. A lot of folks learn Elixir and then Erlang but I think it's less common to learn Erlang and then learn Elixir. It's led to an interesting situation where Elixir has had to bootstrap knowledge about OTP on its own terms without a large pool of domain experts sitting around. The asymmetry in knowledge--knowing Erlang does not imply knowing Elixir but knowing Elixir implies knowing Erlang--also makes me wonder how the squinting assumption above will shake out long-term. ¯\_(シ)_/¯
On one hand, you have a very similar open source culture to what exists in Ruby. The fact that hex.pm exists alone is amazing. Libraries are expected to work, be documented, and be maintained.
On the other hand, the number of questions that can be answered with RTFM is astoundingly high. Where Erlang has an implicit knowledge requirement regarding OTP and distributed systems in general, Elixir has a much lower barrier to entry in the community.
An example: on one of my libraries (a DB driver) I have received multiple requests to "add a section to the README on how to use this with Phoenix". I have a section on how to add it to a supervision tree, which from the Erlang world's perspective is already overkill, and I expect users to be able to connect the dots.
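For what it's worth, the supervision-tree step in question usually amounts to one entry in the application's child list. A minimal runnable sketch, with Agent standing in for the driver's connection process and the name `MyDB` purely illustrative:

```elixir
# In a real app this list lives in your Application.start/2 callback.
# Agent stands in here for the driver process.
children = [
  %{id: MyDB, start: {Agent, :start_link, [fn -> %{} end, [name: MyDB]]}}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)
is_pid(Process.whereis(MyDB))  # the driver is up, supervised, and named
```

Phoenix adds nothing special to this step: a Phoenix app is an ordinary OTP application, so the same child entry works there.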
That said, the core team is pretty dang smart. There's plenty of OTP wisdom there, and they aren't quick to give in to the mob rule of the community. It's under strong guidance.
Once you get the Erlang->Elixir relationship, I recommend Erlang in Anger.
Something I find worth emphasizing: there is exactly one integer type, and integers are unbounded. Very few languages get this right, and it's IMO one of the worst mistakes in Haskell. This takes care of a whole slew of bugs and security pitfalls.
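A quick illustration (Elixir syntax, same integer semantics as Erlang):

```elixir
# 30! does not fit in 64 bits, but BEAM integers silently promote to
# arbitrary precision; there is no overflow and no separate bigint type.
fact30 = Enum.reduce(1..30, 1, &(&1 * &2))
fact30 > 0xFFFFFFFFFFFFFFFF  # => true
```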
I'm still an Erlang n00b, but the lack of a native string type seems like a major weakness -- strings are just lists of small integers, worse even than Haskell. Please give us something based on ropes, like in Cedar (http://onlinelibrary.wiley.com/doi/10.1002/spe.4380251203/ab...).
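For context, both representations are easy to see from Elixir, which exposes Erlang-style charlists alongside its own binary strings:

```elixir
charlist = ~c"hi"        # Erlang-style string: a list of integer code points
binary   = "hi"          # Elixir string: a UTF-8 binary

charlist == [?h, ?i]     # => true
binary   == <<?h, ?i>>   # => true
```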
- Easy cross-compilation.
- Out-of-the-box distributed RPC and inter-node connections. Extremely important for our product.
- Supervisor trees (the bee's knees of Erlang).
- Isolation of specific functionality into OTP applications with their own supervision hierarchies (some bound to specific "events", like an AMQP Client coming up upon successful WiFi AP association) - this also protects us from the dreaded "xyz had a bug in it and took down the whole program, it's now bricked".
- Hot code loading and release management. This one is particularly awesome. The node itself is told to upgrade via AMQP; it fetches the latest release, unpacks it, and rebuilds the engine while driving. There are quite a few built-in safeguards and checks too before you reach the point of no return in the upgrade. Release management is amazing in Erlang; I have yet to see anything like it in any of the languages I've used.
- A proliferation of distributed consensus libraries and utilities thanks to Basho. We use plumtree for replicating the state of each node across its cluster within the user's home. Lots of cool features are possible because of this simple but extremely difficult to get right software.
- Dialyzer isn't anywhere near Haskell's type system, but it does give us some useful information and checks.
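As a small example of that last point, Dialyzer works off optional `@spec` annotations like the following (module and names illustrative); it uses success typing to flag calls that can never succeed, e.g. `Shape.area({:circle, 1.0})` here:

```elixir
defmodule Shape do
  # Dialyzer checks callers against this spec via success typing;
  # far weaker than Haskell's type system, but it catches real bugs.
  @spec area({:rect, number, number}) :: number
  def area({:rect, w, h}), do: w * h
end

Shape.area({:rect, 3, 4})  # => 12
```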
Isn't that one of the reasons Erlang is slower than Haskell though? Arbitrary-precision arithmetic is much slower than Int64 arithmetic.
It internally separates the integer space into "fixnums" and "bignums". Fixnums are around register-width and use the plain ol' machine math instructions. Operations which are inferred or declared to stay within fixnum range don't even have bignum calls compiled into them.
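You can poke at the boundary with `:erts_debug.flat_size/1`, an undocumented introspection helper that reports how many heap words a term occupies (a sketch; exact sizes vary by VM word size):

```elixir
# Fixnums are immediates: they live in the term word itself, 0 heap words.
:erts_debug.flat_size(1)                    # => 0

# 2^64 can't be a fixnum on a 64-bit VM, so it's heap-allocated as a bignum.
:erts_debug.flat_size(0x10000000000000000)  # > 0
```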
I would switch to Elixir, but the syntax is too weird.
Ah, I've finally found someone in the wild who thinks Erlang's syntax is less weird than Elixir's! ;)
> and made all the things that matter harder and more expensive
> Raw Erlang is better for anyone used to more than three languages.
While I can read and write "raw Erlang", I prefer Elixir, for whatever reason. The macro facility alone is far more powerful than anything Erlang can provide; I enjoy things like the pipe |>, which was taken from... crap, forgot which language... http://danthorpe.me/posts/pipe.html; protocols are a useful addition; and strings are natively supported UTF-8 binaries rather than lists (well, you have both, actually, depending on whether you single-quote or double-quote the string). And while both seem roughly as terse, I just find Elixir far more readable and more organized-looking -- compare the Shannon entropy implementations here, for example: http://rosettacode.org/wiki/Entropy#Elixir
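For the curious, the pipe simply feeds the left-hand result in as the first argument of the next call:

```elixir
result =
  "the quick brown fox"
  |> String.split()
  |> Enum.map(&String.upcase/1)
  |> Enum.join(" ")

# Equivalent to the inside-out version:
#   Enum.join(Enum.map(String.split("the quick brown fox"), &String.upcase/1), " ")
result  # => "THE QUICK BROWN FOX"
```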
Anyway, a lot of it is opinion, but I think it's good from anyone's perspective that Elixir is ultimately bringing more clever people onto the BEAM VM.
List comprehensions make no sense to anybody at first. But after you work with and struggle against them for a few days or weeks, everything will click. You'll be a better developer for understanding them intuitively.
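For anyone new to them: a comprehension is just one or more generators, optional filters, and a body (Elixir syntax shown; Erlang's `[X*X || X <- lists:seq(1,10), X rem 2 == 0]` reads the same way):

```elixir
# Generator: x <- 1..10; filter: rem(x, 2) == 0; body: x * x.
squares_of_evens = for x <- 1..10, rem(x, 2) == 0, do: x * x
squares_of_evens  # => [4, 16, 36, 64, 100]
```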
Erlang is excellent for problems where latency is an issue. It's really good for problems where scaling is an issue.
But uptime is the big one. The whole combination of immutability, recursive calls, and fast kill-and-restart is a very good architecture for when you need all those nines.
More interestingly, how does this help uptime? I've been in telephony for a while. My software-related downtime (vs. human error breaking things) seems either related to logic bugs (restarting won't help) or load (simply too little hardware for, say, a 100x increase in traffic).
Maybe Erlang's selling points make more sense compared to C instead of a managed language? For instance I often hear about the robustness of the Erlang VM, yet JVM or CLR integrity has never been an issue for me.
Nevertheless, I'll try.
In Erlang, almost everything is immutable. So, if you want to update state, you build a new state, and then call yourself recursively with the new state. Seems annoying, right?
Except, I can invert this where I keep the state the same but vary the code. If I want to update my code, I can tell the old code to pack up its old state and call my new code recursively.
No downtime. At all.
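A sketch of that idiom in Elixir (module name illustrative; real systems use OTP's release handling and `code_change` callbacks rather than hand-rolling this):

```elixir
defmodule Counter do
  def loop(state) do
    receive do
      {:incr, n} ->
        loop(state + n)

      {:get, from} ->
        send(from, state)
        loop(state)

      :upgrade ->
        # A fully-qualified call always jumps to the newest loaded
        # version of the module; the state rides along unchanged.
        __MODULE__.loop(state)
    end
  end
end
```

Load a recompiled Counter into the running VM, send the process `:upgrade`, and it continues with its old state running the new code.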
Robustness: the Erlang VM keeps track of a cost for just about everything in its preemptive scheduler. The scheduler is tuned for latency rather than throughput, so when you get a 100x increase in traffic, the scheduler continues to make progress even as its resources get overloaded. In addition, GC pauses are per scheduler/process rather than global. This is a function of the fact that Erlang tends to serialize nearly everything so it doesn't have to share.
As for kill/restart, these things tend to be an architectural idiom that is shared by the standard library rather than an obscure technique that only a handful of the programmers grasp. Kill/restart is expected to be NORMAL in Erlang and is thus fast with low overhead--try/catch tends to be abnormal and restarting OS processes tends to be slow.
There are a lot of good things wrapped into Erlang. If you've only used imperative, C-derived languages (C, Java, C#, Python, Ruby, etc.), I recommend using it in anger for a bit. It can be frustrating until you get your head wrapped around it, but I still retain the architectural elements when I write certain tasks in other languages.
Restarting does help with logic bugs, depending on what execution path triggers them. Bringing a process back to a known good state is a hugely useful concession to keep services up and doing work until the root cause is diagnosed and fixed.
You might also start your query to the list with a brief summary of your background and programming experience so you can get the best answer possible.
You would have tested at least the success path in your code (but probably not all the error conditions). "Logic bugs" should then fall on the non-critical path, so dropping the state that led you to take that path is okay; if it's a critical process, you restart it into a known good state. A trivial example of this might be a user giving you bad input; any validation failure (or even a lack of validation, provided it breaks something down the line) should cause the data to be dropped and the listening process to stay up (i.e., crash and restart into a known good state), ready to take other user requests (and to leave any other in-process user requests alone).
For load, Erlang makes it easy to write code that automatically scales (up to the hardware limit, obviously). It also makes it easy to write code that can be distributed, running across multiple machines, and able to scale up that way as well. It's also pretty easy to write rate limiters and the like, to automatically shed load if you're bottlenecking somewhere.
Erlang also makes it harder for humans to break things (in the sense of user error, or production support tinkering with it). You can still do it, but the supervisor processes oftentimes help mitigate it (if a human puts in bad state that affects an important process, the process will crash and restart).
How is this different from a try/catch? Rather than trying to figure out every location where an error can occur in your code, you instead ask, "what happens if -anything- goes wrong with this process? How do I get this process back to a good state?" You already partially answered this question when you created the process and had to ask, "how do I start this process in a good state?" Does the process depend on another process being up? Then you would already have specified, as part of the supervisor, that the other process is started first, and picked a supervision strategy that restarts that other process if this one restarts, causing both to dump their state and reinitialize. With a try/catch, not only do you have to recognize each individual thing that can go wrong (or have a catch-all that hides errors), you have to decide how to recover from it, and it's oftentimes not at all clear how to do that. You end up with a lot more places where you can miss handling errors, a lot of assumptions being made, and no good way to tie multiple processes together.
Restarting an OS process is actually a great example of why this philosophy works well. If the app dies, restart it; it probably died because of bad state. You already understand that that is a great option for unexpected bugs that leave the entire app in a bad state. Only, oftentimes that's overkill: the bad state is probably confined to a very small part of the app which, if the app is written so that unrelated things stay unrelated (communicating only via shared-nothing message passing), can be restarted individually within the app. Subsystems, if you will. That's what Erlang does. The idioms and functionality intrinsic to the language make it far easier to write code encapsulated across process lines, such that an error in one process can be treated identically to any other error in that process, or that kind of process, and handled in a well-defined way (drop the state; restart the process into a known good state), leaving the rest of the app unaffected.
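That "subsystem restart" is observable in a few lines (Agent as a stand-in worker, names illustrative):

```elixir
children = [
  %{id: :worker, start: {Agent, :start_link, [fn -> :ok end, [name: :worker]]}}
]
{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

old = Process.whereis(:worker)
Process.exit(old, :kill)        # simulate a crash in one subsystem
Process.sleep(100)

new = Process.whereis(:worker)  # supervisor restarted it: fresh pid, known good state
```

The rest of the VM, and any sibling processes under other supervisors, never notice.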
That's actually how Erlang was born, hence the apparently odd syntax.
It's been my experience (across all langs) that every new order of magnitude of scaling requires a new technology/technique/architecture...