When will OTP be updated to support Blockchain?
Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.
(he is one of the creators of Erlang and stars in Erlang the Movie)
I wish I could downvote, but that feature doesn't seem to be available in my UI; I guess only certain users can downvote.
Because a pod is far from the granularity of a process. It is closer to the granularity of an application, the biggest abstraction that OTP gives you.
What that means is that k8s does not have the same properties as Erlang. In Erlang, isolation is your go-to tool. If you need something, you spawn a process. Having a lot of really granular processes makes scheduling on cores easy, makes it easy to let parts of your design crash because the blast radius is small, and makes it really cheap to have message passing in the core of the VM.
In k8s, any networking is hard, hence the current trend of service meshes. As the unit of computation gets bigger, supervision gets harder. Building a supervision tree is more expensive, and crashing and restarting get more expensive too.
K8s has the advantage of being language agnostic, of course. But do not forget the paradigm shift that having really cheap processes gives you in an integrated environment like the BEAM.
And of course, I have not even talked about the dynamic tracing that the BEAM gives you.
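To make the "spawn a process when you need something" point concrete, here is a small sketch of my own in Elixir (which runs on the same VM; the `Echo` module is a made-up example): spawning ten thousand isolated processes and messaging all of them is routine on the BEAM.

```elixir
# Sketch: each process is an isolated unit with its own heap and mailbox.
defmodule Echo do
  def loop do
    receive do
      {:ping, from} ->
        send(from, :pong)
        loop()
    end
  end
end

# Spawning 10,000 processes is cheap; each starts with a tiny heap.
pids = for _ <- 1..10_000, do: spawn(&Echo.loop/0)

# Message passing is a first-class, low-cost operation in the VM.
Enum.each(pids, fn pid -> send(pid, {:ping, self()}) end)

# Collect one reply per process.
replies =
  for _ <- pids do
    receive do
      :pong -> :pong
    end
  end

IO.puts("got #{length(replies)} replies")
```

If one of these processes crashes, only its own small heap is lost, which is the "small blast radius" the comment above describes.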
I think calling Elixir "just syntactic sugar" is a gross and unfair oversimplification. While friendly syntax is one of its welcoming qualities, Elixir is its own language. It doesn't "transpile" into Erlang. It compiles down to bytecode.
Elixir creates an AST which is transformed into Erlang Abstract Format (http://erlang.org/doc/apps/erts/absform.html), which is then compiled into Erlang bytecode by the Erlang compiler.
Erlang VM's bytecode is largely undocumented (except for third-party attempts like http://erlangonxen.org/more/beam and http://beam-wisdoms.clau.se/en/latest/). So all languages on the Erlang VM either create and compile Erlang Abstract Format or transpile to Core Erlang (https://8thlight.com/blog/kofi-gumbs/2017/05/02/core-erlang....)
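You can poke at the first stage of that pipeline from Elixir itself. A sketch (the `TinyDemo` module name is made up for the example): `quote` exposes the Elixir AST, and `Code.compile_quoted/1` drives it through Erlang Abstract Format down to BEAM bytecode.

```elixir
# The Elixir AST is plain data: {form, metadata, arguments} tuples.
ast = quote do: 1 + 2
{:+, _meta, [1, 2]} = ast

# Code.compile_quoted/1 takes an AST all the way to BEAM bytecode.
[{TinyDemo, bytecode}] =
  Code.compile_quoted(
    quote do
      defmodule TinyDemo do
        def three, do: 1 + 2
      end
    end
  )

true = is_binary(bytecode)  # raw BEAM bytecode, not Erlang source
3 = TinyDemo.three()
```

So "compiles down to bytecode" is literal: at no point is Erlang source text generated.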
Eventually I tried to do something with Elixir, and suddenly everything was so good.
The whole experience of creating a new project with `mix`, getting dependencies, building, compiling, and running the REPL was so much better than rebar.
To me, `mix` is the greatest tool. I don't see something like mix in other languages, except Clojure with Leiningen.
The community around Elixir is awesome too. It's very different from the Erlang one. I had a hard time finding libraries and such in Erlang before; there was no central place to look. Elixir brought me hex.pm.
I personally like Erlang syntax; I think it's short and concise. But I ended up liking Elixir much more than Erlang.
However, when the data becomes too large, Erlang programmers often choose to bring the computation to the data rather than the other way around. Passing a function is efficient. Keeping data cache-local in a process is efficient.
Except literal data.
For communicating quickly locally, you could use large/refcounted binaries (http://erlang.org/doc/efficiency_guide/binaryhandling.html) to avoid copying on message pass. Those generally tend to be frowned upon, and incur the usual headaches/perf issues of a ref-counted structure as well.
In short, I'd suggest defaulting to the assumption that Erlang is fast enough at passing messages locally for it not to matter that much in practice. Benchmark the throughput of your system locally, and if the overhead of message passing isn't a major pain point immediately, move on up the stack. If you're doing scientific computing and want to number-crunch huge data sets in parallel via some minimal-coordination memory-map-backed system, this is not the platform for you (or use a NIF).
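If you do want to benchmark local message passing before moving up the stack, a rough sketch of my own (the `PingPong` module is made up; absolute numbers will vary by machine): time N round trips between two processes with `:timer.tc`.

```elixir
# Rough local message-passing micro-benchmark: N ping/pong round trips.
defmodule PingPong do
  def server do
    receive do
      {:ping, from} ->
        send(from, :pong)
        server()

      :stop ->
        :ok
    end
  end

  def round_trips(_pid, 0), do: :ok

  def round_trips(pid, n) do
    send(pid, {:ping, self()})

    receive do
      :pong -> round_trips(pid, n - 1)
    end
  end
end

pid = spawn(&PingPong.server/0)

# :timer.tc/1 returns {elapsed_microseconds, result}.
{micros, :ok} = :timer.tc(fn -> PingPong.round_trips(pid, 100_000) end)
send(pid, :stop)

IO.puts("100000 round trips in #{micros} µs")
```

On typical hardware this lands around a microsecond or so per round trip, which is exactly the "fast enough for it not to matter" territory described above.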
> Those generally tend to be frowned upon, and incur the usual headaches/perf issues of a ref-counted structure as well.
Why would reference counting be worse than copying?
Nobody will complain if it's needed, but chances are, it's not needed. (It depends on what you're doing though).
Plus refcounted binaries scare people. Until recent versions, references didn't play nice with GC triggers, so it was easy for a very efficient proxy process to not trigger GC but still keep references to a lot of garbage. If you're not careful with referencing pieces of binaries, you can do something similar too: maybe you got a huge binary but only need to save a few bytes, and it's easy to keep a reference to the huge thing (hint: use binary:copy/1 to get a clean copy of the small piece that you want to keep in your state or ets).
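That sub-binary gotcha is easy to demonstrate. A sketch (sizes made up for the example): `binary_part/3` returns a tiny term that still pins the entire original binary, while `:binary.copy/1` gives you an independent copy, and `:binary.referenced_byte_size/1` shows the difference.

```elixir
huge  = :binary.copy(<<0>>, 10_000_000)  # a 10 MB refcounted binary
slice = binary_part(huge, 0, 4)          # a 4-byte sub-binary into `huge`
safe  = :binary.copy(slice)              # an independent 4-byte copy

# Both look like 4-byte binaries...
4 = byte_size(slice)
4 = byte_size(safe)

# ...but :binary.referenced_byte_size/1 shows what each one keeps alive:
10_000_000 = :binary.referenced_byte_size(slice)  # pins all 10 MB
4 = :binary.referenced_byte_size(safe)            # only its own 4 bytes
```

Keeping `slice` in long-lived state or ets keeps the whole 10 MB from being collected; keeping `safe` does not.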
One of the interesting advantages of Erlang is that its processes are small and mostly fit in the L2 cache. So you can load a whole process at once, then compute for your whole slice, all in cache.
The result may surprise you.
> One of the interesting advantages of Erlang is that its processes are small and mostly fit in the L2 cache.
Are you talking about data or are you talking about execution? I wouldn't think execution would need reference counting.
> So you can load a whole process at once, then compute for your whole slice, all in cache.
Whatever data is being loaded still has latency. If it is accessed linearly, it can be prefetched. If it is being copied around, that won't put it into a CPU's cache any faster and will require using the writeback cache, not to mention all the cycles spent copying.
I'm not sure how data that already exists in memory and potentially in L3 caches, etc. is going to be hurt by reference counting.
> The result may surprise you.
Do you have results that I can look at? If not I think they will surprise both of us.
Sure, you could pass a pointer around. But then you'd have all the usual headaches about concurrent modification (some of this stuff is mutable when you get down to the implementation/pointer level) and garbage collection (who has it?).
It's optimized well enough that the perf penalty is minor. Copy-on-write semantics, "tail-send" optimizations (I don't know what the technical term for this is, but: data or pointers are moved if a message send is in particular positions in code, without getting the GC involved at all until later) and more are all used where they make sense.
Could it be faster if you built it around a zero-copy runtime? Sure. So could Perl. But then you'd be building your own runtime, with different priorities and tradeoffs than the one that already exists--one whose biggest benefit is that you don't have to build it yourself.
Erlang doesn't promise zero copies locally, but nor does it take the most wasteful possible path; I am not a deep BEAM expert, but am consistently surprised at how well it works. Seldom have I wondered "hmm, the runtime could do something fast/cool with this particular type of code; I wonder if it does..." and, upon checking the available sources/debugger output, been disappointed.
The runtime isn't built around hyper-efficient zero-copy semantics (it has a garbage collector, an M:N scheduler, a bloody instruction-counter routine that runs every time you invoke any function at all, for heaven's sake). For all that, it's more than fast enough for anything besides high-performance scientific computing/number crunching locally. If you need to do that, use a language that lets you send pointers between threads exactly how you want. If you still want to use that from Erlang, add some reduction-count markers in your $fast_language code and hook it up to Erlang via a NIF.
The BEAM doesn't waste time/complexity more than necessary on hyper-efficient local optimizations so it can focus on its main strengths: really good concurrency and parallelism, and making a near-bulletproof, super-high-performance set of primitives for distributing your code with minimal changes. Of all the "you call this function as if it were local, but it actually runs remotely in nearly the same exact way!" tools out there, even ones invented recently with the benefit of history to learn from, Erlang/BEAM's implementation comes the closest to fulfilling that promise IMO.
> Why would reference counting be worse than copying?
Refcounting is a valid strategy in some cases, but has drawbacks as well. It doesn't handle cycles well, and requires some stop-the-world events (or lots of locking complexity) for garbage collection in concurrent environments. More details on Google, this thread goes into the differences reasonably well: https://www.quora.com/How-do-reference-counting-and-garbage-...
Refcounted binaries are (mostly) effectively copy on write though; if you create a term that's a modified form, a new value will be formed.
https://hamidreza-s.github.io/erlang%20garbage%20collection%... is a great post.
You can cast bytes from a memory-mapped file just fine, if you trust the file/other writers, or are willing to sacrifice some safety. There are also a lot of serialization systems that are simple/fast enough as to basically be structs, e.g. flatbuffers: https://google.github.io/flatbuffers/. Even those still don't avoid that pesky copy, though; for that you'll have to interact with raw memory and hope it's in the right layout for your purposes.
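The closest BEAM analogue to casting bytes to a struct is binary pattern matching, which reads fields out of a byte blob in place. A sketch (the layout, two little-endian u32s followed by a payload, is made up for the example):

```elixir
# Destructure a raw byte blob in place with a binary pattern.
blob = <<42::32-little, 7::32-little, "payload">>

<<a::32-little, b::32-little, rest::binary>> = blob

# `rest` is a sub-binary into `blob`; the payload bytes are not copied here.
{42, 7, "payload"} = {a, b, rest}
```

You still don't control the memory layout the way you would with a memory map, but you avoid an intermediate parse/copy step for the tail of the binary.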
Even if there are clear similarities between Erlang and Kubernetes, I'm not sure the former inspired the latter. We should ask Kubernetes developers :-)
Fortunately, there are lots of simple-to-apply patterns and solutions when you get to that point, for example the excellent Partisan library: https://github.com/lasp-lang/partisan
It depends on what you're doing. At work we have several systems that needed to upgrade to 2x 10G Ethernet (from 2x 1G), but not much pushes that limit at the moment. Our newer hosting has more bandwidth with less CPU and RAM, so we expect to see more CPU constraints than network constraints.
However, I am not that experienced in distributed applications and would appreciate reading the opinions from more experienced engineers.
iex(3)> defmodule Hello do
...(3)>   def world() do
...(3)>     IO.puts "here we go!"
...(3)>   end
...(3)> end
iex(4)> Node.spawn_link(:erlang.node(), fn -> Hello.world() end)
here we go!