Elixir at PagerDuty (pagerduty.com)
339 points by romanhn 37 days ago | 184 comments

As an Erlang developer for the past couple of years now, I love seeing the adoption and excitement around Elixir and the BEAM. I will admit I always shudder when people very quickly call out Erlang's syntax as a reason not to use it. Feels like a pretty lame excuse...

All that said however, it kind of bugs me when I see posts like this (no matter the language) that go somewhere along the lines of "I managed to introduce language X and it was all rainbows and unicorns! Everybody loves it and work is fun again". Like, you had absolutely no problems with it? In my experience, people don't immediately grasp the actor model or how to manage processes (i.e. should I spawn a process for a short-lived task? should this monitor another process? should I link? should it need supervision?).

Given that you're basically changing a methodology amongst your devs, it can be expected that things won't get written "right" the first time. I'm not referring to little idiosyncrasies like coding conventions or best practices but to literal misunderstanding of gen_servers and supervisors. Sure, perhaps transferring from Ruby to Scala to Elixir helped make the transition less painful given their similarities, but surely there were complications?

Having only written a couple of small apps in Phoenix, none of which have needed to scale beyond a single instance yet, I think it's quite possible to build a Phoenix app without even knowing about the actor model or what OTP is. I read Dave Thomas's book first, so I've built toy apps using GenServers, Supervisors, etc., but so far have needed none of that to write apps in Phoenix (CRUD apps that make calls to 3rd-party servers).

That said, I wouldn't advocate jumping into Phoenix with zero Elixir knowledge. You'll have a tough time. And knowing about OTP/supervision is also recommended because you'll be much better off in the long term. I don't think you can avoid it forever.

> As an Erlang developer for the past couple of years now, I love seeing the adoption and excitement around Elixir and the BEAM. I will admit I always shudder when people very quickly call out Erlang's syntax as a reason not to use it. Feels like a pretty lame excuse...

I have mixed feelings here. From a purely implementation standpoint I totally agree - I don't particularly enjoy using certain tools at work, but that is not a valid excuse for me to not get work done. At the same time, as a psych grad turned developer, I'm intimately aware of the effect that dreading a journey can have on attaining a goal, and this is no less true of mapping developer experience to productivity - thus we should always be striving to make sure that the goals of the team are not undermined by the requirements to get there. If I had an objective that couldn't be solved in Elixir I would invest time in grokking Erlang, but its syntax does deter me from wanting to make that investment until I need to.

> All that said however, it kind of bugs me when I see posts like this (no matter the language) that go somewhere along the lines of "I managed to introduce language X and it was all rainbows and unicorns! Everybody loves it and work is fun again".

Totally agree, especially with your points on the gen_* family which are all vastly useful once you understand them but took me some time to bridge that gap. I feel that it's just as important to understand the pain points such that others reading this post and wanting to be the catalyst for similar change can prepare for these issues themselves should they ever want to take the leap.

I did Erlang programming for about a year before switching to Elixir for the last 3 years. I have a large server cluster serving 275k sensor devices, averaging one report per minute containing 1 to 4 readings (temp, humidity, rainfall, wind speed, heading, etc.), calculating hourly, daily, weekly, monthly, yearly, and smaller high/low/average/tally aggregates, and allowing users to set up complex alerts against these values. It would have taken me a huge amount longer to write the same in Erlang.

The difference between the readability and usability of Elixir and Erlang syntax is night and day, especially as someone coming from 20 years of working with imperative languages. I can feel comfortable handing out small tasks to a lay developer in Elixir and getting back a response pretty quickly; it has much easier onboarding. It has its flaws of course - it's much easier to write programs that will crash in Elixir, etc. - but that concession, with a more approachable syntax and methodology coupled with the benefits of OTP and the BEAM, seems like a much more compelling sell than pure Erlang.

How interesting would it be to read the article about the company that looked at doing the big, dramatic rewrite, but didn’t, and was successful?

This has happened at my company with a service called Firehose, which is basically PubSub over WS and HTTP written in Ruby EventMachine. It's unstylish for a lot of technical reasons and an eng even tried to rewrite it in Elixir. We ended up scrapping the rewrite project because the Ruby code just works and rarely gives us any problems in production.

The problem with that story though is it won’t attract engineers who like new shiny stuff.

> eng even tried to rewrite it in Elixir. We ended up scrapping the rewrite project because the Ruby code just works and rarely gives us any problems in production.

If your Ruby code just worked, why did you try to rewrite it in Elixir?

Your first comment is kind of funny, because while Elixir's syntax may appear more "esthetic", I find Erlang's way more consistent and logical.

Entirely agreed. Elixir's macros are awesome, and Erlang could definitely use an Elixir-macro-like parse transform library (one could definitely exist). Erlang's syntax I find far more readable, consistent, and logical as well. Elixir's syntax has a lot of needless, hard-to-read noise, but it's not hard to overcome the noise to get the benefits of the overall ecosystem and macros.

This topic is quite subjective, but could you provide some example? Overall Elixir seems to me a bit easier to read, for instance:


  "Hello,How,Are,You,Today" |> String.split(",") |> Enum.join(".") |> IO.puts


  start() ->
     Lst = string:tokens("Hello,How,Are,You,Today",","),
     io:fwrite("~s~n", [string:join(Lst,".")]),
     ok.

Entirely subjective yes! ^.^

However, what you show is not focusing on syntax differences but rather function differences; even `|>` is a (macro) operator. By syntax I'm talking about things like:

- the `do`/`end` and `fn`/`end` and `, do:`/`do...end` mismatches;

- the atom-key short form `someatom: ...` (short for `:someatom => ...`) only being usable at the end of its list/map context instead of everywhere (which is not ambiguous, nor can I see any other reason not to allow it);

- functions being callable with or without parentheses (thankfully the formatter adds parentheses by default), when functions really should be defined with or without parentheses and their usage enforced as such;

- mismatches in the AST (when writing macros) from special-casing things like 2-tuples, among others;

- the really weird multi-arity forms of `for` and `with`, which really should have been done via body expressions instead of `,`-separated expressions (they then wouldn't need to be special forms but could just be normal macros; as it stands they seem extremely out of place in the rest of the language's syntax);

etc... etc... etc...

It's just a lot of little inconsistencies like that.

Also, if you want pipes for erlang look at the https://github.com/rabbitmq/erlando parse transform (there are others as well, but I like this one), and if you want a shorter erlang form then look at erl2 (just a layer on top of erlang to shorten constructs like module definitions), and of course nothing beats `lfe` on the beam for succinctness and defineability (being a lisp after all).

Also, your Elixir and Erlang snippets do not do the same thing. Your Elixir code is run at compile time, whereas your Erlang code is not executed at all, only compiled, and thus will only run if something calls the `tok:start/0` function. The equivalent Elixir would be (formatted by the Elixir formatter):

    defmodule :tok do
        def start() do
            "Hello,How,Are,You,Today"
            |> String.split(",")
            |> Enum.join(".")
            |> IO.puts()

            :ok
        end
    end
And you can leave off the final `:ok` on both, as `IO.puts/1` and `io:fwrite/2` both return the atom `ok` in the end. And even then these are not equal, as you are working with charlists in one and binaries in the other, so a more direct translation of the Erlang code to Elixir would probably actually be:

    defmodule :tok do
        def start() do
            'Hello,How,Are,You,Today'
            |> :string.tokens(',')
            |> :string.join('.')
            |> (&:io.fwrite('~s~n', [&1])).()

            :ok
        end
    end
That would make the functions equal.

As an aside, a few other bits: Elixir defines whether a function is public or private via `def`/`defp`, meaning you have to search through a file to see what is exposed, whereas in Erlang you can see exactly what is exposed just by looking at the export list at the top.

And yes, `do`/`end` may be longer by one line than Erlang's sequence operator `,` usage, but I don't mind it to be honest, and you can replicate it in Elixir anyway, kind of like:

    def start(), do: (
        lst = :string.tokens('Hello,How,Are,You,Today', ',');
        :io.fwrite('~s~n', [:string.join(lst, '.')]);
        :ok
    )
Essentially just replace erlang's `->` with `,do: (`, replace erlang's `,` with `;`, and finally replace erlang's `.` with `)`.

Comparing 'function' differences between them is useless of course, since they can call each other's functions (though macros are another issue: Erlang cannot 'usefully' call Elixir macros, but then again Elixir may soon not be able to use Erlang's parse transforms either). The differences are in the syntax, and Elixir just has a whole ton of oddities.

Overall I find the Erlang syntax (which is similar to SML/OCaml/etc...) far more uniform, sensible, and readable, with far fewer surprising syntactic cases (like why can't I do something like `%{some_atom: 42, "string key" => 6.28}` in Elixir? Blah... you have to do `%{"string key" => 6.28, some_atom: 42}` instead, for who knows what reasons...).

I do of course use Elixir for my day job, and have for over 2 years now; these are just a few things that bug me on a day-to-day basis. However the tooling, macros, and community (come join us on the Elixir Forums!!!) are top-notch!

If I had my 'choice' of mythical language though, I'd pick something like OCaml with Rust-style ownership semantics with a full staged macro system (Elixir's is a staged macro system) if it were not otherwise possible to safely add in a full Lisp-style macro system (I really don't think so in a full setup like these though, you need a staged system for various corner-case reasons).

Thank you very much for your insight!

I believe the term "core language" may be more appropriate than "syntax" in your argument: it's true that the pipe operator (|>) is a macro, as well as fn/end or the match operator (=), but that's an implementation detail, they still are syntactic forms.

As with any programming language (except of course for Scheme, which is perfect :D) Elixir has some inconsistencies, but I don't think they make the language "hard-to-read" in any realistic way.

Elixir's parts are defined as 'Language Syntax', 'Special Forms', 'Kernel', 'Standard Library', and of course user code.

Things like a function call are language syntax. You cannot replace language syntax with anything else. Special forms are things like `for` and `with`, and you can replace these with syntax (though that's going the wrong direction in my opinion). Kernel is the set of standard library bits that are auto-imported into every module (unless explicitly told not to via `except:` or so); these cannot replace special forms or syntax. Etc...

The pipe operator is just a normal macro in the Kernel; it is redefinable, and in fact a surprisingly large number of libraries do redefine it (to make it more monadic, interestingly, which makes me think Elixir needs a binding operator in Kernel, perhaps `~>` or so).

Things like `fn`/`end` are language constructs, just no way around that at all.

The match operator `=`, interestingly enough, is a special form and not a language construct, so there are times that you can override it as well.

> except of course for Scheme, which is perfect :D

Take a look at Racket, it is scheme refined into bliss. ^.^

> Elixir has some inconsistencies, but I don't think they make the language "hard-to-read" in any realistic way.

Not overall 'hard-to-read' (not like Java for example), but definitely harder than Erlang I'd state.

> This topic is quite subjective, but could you provide some example?

Elixir's pin operator is a good example of unnecessary complexity.

I'm okay with this trade off instead of having to do Variable0, Variable1, ..., VariableN. It would be cool to have an option to turn off rebinding for those who don't want it (and then not requiring the pin).

Elixir already has those rebindable variables, which can be "turned off" by using the ^ prefix on a match variable. Wouldn't that make the |> less useful to you already?

`|>` would replace the great majority of need for multi-staged named variables.

In addition, naming variables like that in Erlang is a mis-design, the names should be more descriptive.

I vaguely recall that this particular choice is one that the language's creator also regrets.

Why is it unnecessary?

It's unnecessary in the Erlang ecosystem because Erlang doesn't have variable rebinding. Elixir 'does' have variable rebinding, so it needs some way to distinguish between rebinding and matching, and it was decided to use `^` to mark matching, when honestly I think it would have been smarter to mark rebinding instead (with `^` or something similar). In quite a few cases rebinding leads to bugs (accidentally rebinding something when you still need the old value).

I think I would also prefer there to be no rebinding. On the topic of the pin operator though, it seems clearer to me that it's an assertive component of a match, rather than needing to reference all variables in scope (in the no-rebinding case) to determine whether something matching has already been assigned.

A buddy of mine and I were talking at length about this, and we came to the consensus that the smartest choice would have been: no rebinding by default, and if you want rebinding, make it available via, say, "var" or a sigil.

This is precisely how I think as well, some declaration like `let` or `var` or so to say "This is a new binding"

This example captures extremely well what I don't like in Elixir: verbosity.

I'm so much more happy writing the equivalent Ruby code:

    puts "Hello,How,Are,You,Today".split(",").join(".")
It's value.method.method vs value |> Module.function |> Module.function. Alias is not a general solution because of conflicts.

Disclaimer: I use Elixir in projects for customers and I like it. Still, it's too verbose and I'm not using it in my own projects (no need for heavy parallelism there.)

Interesting. I found it immensely liberating to be able to fully separate the functions from the data they operate on. I can see how it might feel verbose though.

I don't feel liberated when I program in Elixir but the other languages I use for customers are Ruby, Python and JavaScript, which are quite liberal. It's been so long since I used Java that I can't really appreciate anymore how big the difference is. For sure I don't want to go back to those times.

IMHO "this is a string" |> String.split(" ") doesn't fully separate the data from the function. It's String.split after all and it won't work with an input of any other type.

However, at least in the case of languages with duck typing, some_object.split(" ") works with an object of type String and with any other class that implements a split method that takes a string as argument. Then the rest of the code must be able to keep handling the duck typed object.

Probably this isn't the way you are thinking about data/code separation, still it's a kind of separation.

Well, I came from Javascript/Ruby/Python (and still like those too), so for me it really was the functional nature of Elixir that felt 'liberating'.

Using String.split() instead of a method call on the string is just one of the reasons I like this rather stricter separation between data and code, and I do see how in isolation it's not all that great.

That said, I still enjoy the more OO languages and rather like using some of the FP things I've learned inside those. And sometimes it does feel convenient to not be pushed into one particular style of coding. So perhaps 'liberating' is not the best choice of words :).

Even better:

    "Hello,How,Are,You,Today" |> String.replace(",", ".") |> IO.puts()

It still doesn't do the same thing as the Erlang code, as the Elixir example is run at (staged) compile time instead of at run-time like the Erlang code would be. And a superfluous `ok` atom is being returned from the Erlang code when its `io:fwrite/2` already returns it, plus why call fwrite instead of just writing out an iolist directly?! o.O

Ya, I get the sense the Erlang example is written in bad faith a little.

Not my intention at all, please feel free to rewrite it: I don't know much Erlang and I actually copied that code from Rosetta Code.

Apples to apples? I don't do Elixir but this is what I think it's doing:


I think the AST is easier to see in erlang.

I also found Elixir to sometimes be a bit too 'messy' in its syntax. A bunch of it made more sense once I understood why the choices had been made.

The one that I still feel uncomfortable with, and where I haven't heard a proper justification, is how you can leave out [] for a keyword list when it's the last function parameter (I think they call it an options list?).

It's already a bit confusing how a keyword list translates to a list of tuples, so adding that bit of sugar just further confuses things. It reminds me too much of Ruby's implicit block parameter (or whatever it's called), which I never was a fan of.

Other than that Elixir might be my one of my favorite programming languages.

Instagrammification of tech blogs.


My impression of the Elixir/Erlang community so far has been the same.

> Elixir comes with one of the nicest and most helpful communities around

Using Elixir, it's easy to write gorgeous code, with a language that for the most part is really minimalist, and you get to play with the great piece of software that OTP is without all the warts of Erlang (which is mostly the developer experience). The BEAM community is fantastic and really supportive (disclaimer: I'm a GSoC student working for them on making Elixir bindings for Barrel, a replicated document database).

Maybe it's best for the character of the community if Elixir stays small; however, the Rust community seems a great example of something bigger to follow that hasn't evolved into a soulless graveyard of enterprise code scattered across the githubs and maven centrals of the world.

There has to be a way between hand-rolled craft parsers you write with a beard and an IPA in hand and another AbstractObservableContainer written by Dilbert that doesn't compromise the ecosystem or contort it in any funny way.

Rust is a community of very nice and helpful people, much like Elixir but with more money behind it. However, I certainly enjoy Elixir's unwillingness to break working code with every release, or to force users to build against nightly releases just to use any features from the last year. Rust is improving in this regard, but isn't quite there yet.

What stuff are you running into with this? The vast majority of the ecosystem is on stable, but there are some holdouts. Working on them! I always like to hear people’s pain points, it helps with prioritization.

Just yesterday our build broke because a cargo update pulled in some dependencies that suddenly required experimental features. I think it was crossbeam via hyper or tokio.

Also, while I recognize it's a third party project, the hyper API keeps changing faster than I can adapt my code. If you are aware of a more stable HTTP server, ideally one that has proper TLS support and support for unix domain sockets, I would be very interested.

EDIT: on the other hand, after working with Rust daily for ~1.5 years, I have only run into one compiler bug and had zero actual bugs in my application due to language/compiler updates so far. There have been a few occasions where essential features were missing from the standard library/language, though, and several times where we had to backport some code to the older, stable Rust version that we're on (when a developer had mistakenly tested their code locally with a newer version).

The highest up on my wish list would be that - as I understand it - you can not currently exit the process cleanly (EDIT: early) with a non-zero exit code in stable rust. That makes it pretty hard to implement good command line utilities. Happy to be corrected if I am wrong on this one though.

Interesting, I'll check it out: sounds like a bug in crossbeam. Thanks for letting me know. It's also true that the language and the ecosystem are different things; that's one reason why we're focusing on stabilizing stuff this year, to help move people off of nightly. 1.26 contained a really huge stabilization in this regard. It is a downside of the "put everything in the ecosystem" approach, though.

You're writing a server with hyper directly? Is there a particular reason you're not using one of the frameworks written on top? I may have some advice there, depending.

We started using Iron, but later switched to just using hyper because that seemed easier and like a smaller API surface to target. I'm not really working on a web application so I need zero of the routing/web features of these frameworks.

I do need a lot of pretty specific other features though. Unix domain sockets are currently a must (luckily that works with hyperlocal).

The code runs on a pretty resource-constrained system, so I like the low level of control that hyper gives me over the request/response chunking behaviour (I cannot afford to buffer large requests in memory, and what exactly I do with the body payload differs between API calls).

One thing that would be lovely is proper TLS/HTTPS support. I need support for x509 client certificates and I need to get access to the full client certificate data, ideally including the full chain that signed it from the request handler. I have to admit that I currently run a hacked together nginx in front of my app and put this stuff into HTTP headers (hence the unix domain sockets) because I could not get it to work with rust ecosystem libraries.

EDIT: The TLS stuff is also true for the HTTP client case. Currently using the libcurl rust binding because it's the only thing that implements all the features I need (--cainfo, --cert, --key, --resolv). Also it needs to run on OpenSSL or something else that supports x509v3 extended attributes (subtree constraints).

Cool cool, that makes sense. I probably don't have good advice for you then; what I will say is that things should end up much more stable in a few months, but there's a massive async/await/impl Trait/futures/tokio/hyper upgrade going on, so stuff is a bit more chaotic than usual. Once that's over though, stuff should be solid for quite a while.

I don't know if rustls supports your use-case for TLS, but you may want to check it out.

Thanks again. It's really invaluable to hear about these kinds of things.

A common pattern is something like:

    fn main() { 
      if let Err(e) = real_main() {
        let status = status_from_error(e);
        // print the error message, or whatever
        std::process::exit(status);
      }
    }

    fn real_main() -> Result<(), TopErrorType> {
      // ...
      Ok(())
    }
And simply returning Result<_, SomethingConvertibleToTopErrorType> from functions called by real_main that might need to exit the program.

I believe that in a recent stable version of Rust, you can make this even simpler and have `main()` return a Result directly.

edit: not quite as simple as I described, but maybe more powerful because of the Termination trait: https://github.com/rust-lang/rfcs/blob/master/text/1937-ques...

    fn main() -> Result<(), io::Error> {
      let mut stdin = io::stdin();
      let mut raw_stdout = io::stdout();
      let mut stdout = raw_stdout.lock();
      for line in stdin.lock().lines() {
        stdout.write(line?.trim().as_bytes())?;
        stdout.write(b"\n")?;
      }
      stdout.flush()
    }

Yes, currently doing something similar to that, but I'd much rather have it as a first-citizen feature. Returning result directly from main sounds very interesting! I will have to look into that thx.

EDIT: I assume you mean RFC 1937? Looks like it isn't implemented yet, so we will probably have to wait at least another year before we can get it in stable Rust. But yes - without having read the entire RFC - I think that was what I was looking for!

Part of it is implemented; see https://play.rust-lang.org/?gist=d442c47833587ddeff2158492b0...

The rest of it makes it even better; it's a bit limited in ways right now.

Oh nice! Will start using that as soon as we get to 1.26 :) -- currently stuck on 1.24 (upgrading takes a bunch of work since we're building under openembedded/bitbake, so I wait until the new version hits the upstream layer)

Great! :D

> The highest up on my wish list would be that - as I understand it - you can not currently exit the process cleanly (EDIT: early) with a non-zero exit code in stable rust.

You don't need any fancy features to do this. Every single one of my Rust CLI programs has done this on stable Rust for years. All you need to do is bubble your errors up to main.

If you have destructors that you want to run, then put those in a function that isn't main.

I would not consider setting an exit code to be a fancy feature, but I guess we are living on the bleeding edge here :)

EDIT: I should have said explicitly in my initial comment that I knew about std::process::exit and panic!, but did not consider them to be a clean solution for exiting the program under normal circumstance -- more of an abort() mechanism.

I don't see why. process::exit is perfectly clean from my perspective. You don't need to be on the bleeding edge. This has been possible since Rust 1.0.

Unless I'm misunderstanding you, [`std::process::exit`](https://doc.rust-lang.org/nightly/std/process/fn.exit.html) does what you want.

std::process::exit does not call Drop traits (at least the last time I tried -- maybe I am doing it wrong?). So for example, if you're relying on Drop to clean up temporary files in the system, that will not happen. Providing no method to call destructors AND exit non-zero feels like a bug or missing feature.

EDIT: to expand on this: a suggested workaround is to only call std::process::exit at the very end of the main function. But consider stuff like "env_logger::init". What about that? OK, that doesn't use Drop, and if it did you could put it into its own scope I guess - so there are workarounds - but in my opinion that gets pretty ugly. Comparing to C++, std::process::exit or panic! is like abort(), but what I want is a "return 1" from main.
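A minimal sketch of that wrapper-main workaround (the names `CleanupGuard` and `real_main` here are illustrative, not any real API):

```rust
use std::process;

// Stand-in for "global stuff set up at the beginning of main" whose
// Drop must run before the process exits (e.g. removing a temp dir).
struct CleanupGuard;

impl Drop for CleanupGuard {
    fn drop(&mut self) {
        println!("cleanup ran");
    }
}

// All the real work happens here, so its locals are dropped normally
// when it returns; main is left with nothing that needs destructing.
fn real_main() -> i32 {
    let _guard = CleanupGuard;
    // ... do work; return a non-zero code on failure ...
    0
}

fn main() {
    let code = real_main(); // _guard is dropped here, before exit
    process::exit(code);    // exit skips Drop, but nothing is left to drop
}
```

Everything that owns resources lives inside `real_main`, so by the time `process::exit` runs there is nothing left whose `Drop` would be skipped.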

You're not wrong. The issue is, it's hard to make a good API for this; anything else is basically dealing with global state, and so you don't have a guarantee that something else doesn't re-set it to something else while unwinding is happening, etc. I do have one more thing to say on this, but it fits better in a reply elsewhere in the thread :)

panic! does return a non-zero exit code, but isn't really designed for good end-user output.

If you don't care about the specific error code as long as it's non-zero, just returning `Result<(), impl Debug>` does what you want on stable now, right?
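For example, a sketch assuming Rust 1.26+ (where `main` may return a `Result`), using a concrete made-up error type `AppError` rather than `impl Debug`:

```rust
// Any error type implementing Debug works: when main returns Err, the
// runtime Debug-prints the error and the process exits with a
// non-zero status code.
#[derive(Debug)]
struct AppError(String);

fn do_work(fail: bool) -> Result<(), AppError> {
    if fail {
        Err(AppError("something went wrong".into()))
    } else {
        Ok(())
    }
}

fn main() -> Result<(), AppError> {
    do_work(false) // with `true`, the process would exit non-zero
}
```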

That's true, but then you have to juggle Results as your return type everywhere. It's probably a good idea overall, but some people don't want to do that. And yes, you don't get to pick the code. Yet!

Calling `std::process::exit(1)` at the end of main is identical to `return 1`. I'm not sure what your point is with `env_logger::init`. It sounds like you think destructors are run on static items? They aren't. In fact, until very recently you weren't even allowed to have destructors on consts/statics.

> I'm not sure what your point is with `env_logger::init`

That most of my programs have some global (i.e. "for the runtime of the program") stuff that is setup at the beginning of main. And that some of that might want to Drop when the program exits, for example to delete a temporary directory. Now, if I want to return a non-zero exit code I can not do so while still getting all this global stuff destructed correctly (or use a workaround like having a wrapper-main).

> Calling `std::process::exit(1)` at the end of main is identical to `return 1`.

The thing is that it is actually not the same with respect to destructors -- the documentation explicitly calls that out. See also the C++ comparison in my other comment.

Ah, I see. You're wanting to have destructors run on local variables assigned in `main`.

I'm just a dabbler and most of my experience is a few months old -- I have no specifics for you, I'm afraid.

If you're talking about Rocket that seems to be the largest thing that requires nightly. That said I can count on my hands the number of times I've used nightly over the last 2-ish years. It's pretty rare that you need to do it.

I'll also add that I've actually been really impressed with how much care Rust takes not to break stable packages, between crater[1] and how aggressively they've cut point releases to fix any issues that (rarely) happen to slip by.

[1] https://github.com/rust-lang-nursery/crater

Bindgen also wants nightly so if you're doing any FFI stuff you may be tethered to nightly.

You sure about that?

I've been using bindgen for ages on stable and just grabbed the latest version on an empty project and it still works for me.

I'm sitting next to bindgen's author in meatspace right now and he says yes, it has been on stable for a long time.

Ah sorry, was thinking of rustfmt-nightly, not the toolchain.

No worries!

You shouldn't be using rustfmt-nightly at this point either; install the rustfmt-preview component through rustup. It works on stable!

Does rustfmt-preview work with bindgen though?

I don’t know what this sentence means.

Bindgen relies on rustfmt. With the codebase I'm using rustfmt-nightly works, but the stable version segfaults. Known issue but that's why I've stuck with rustfmt-nightly.

Interesting! I was not aware of that. So, no idea :)


So yeah looks like I'm tethered to the nightly toolchain after all. :/

I happen to be in the same physical location as the maintainers of both. This should be re-opened. I’ll chat with them. But first... I’m also confused. The component is the rustfmt-nightly codebase. You’re still seeing problems with the component?

I'm using bindgen to generate bindings for radare2. But I'm also using TryFrom, so I'm not likely to move off of the nightly toolchain any time soon.

Before I ended up going down the TryFrom rabbit hole I ran into two issues with bindgen:

1.) Bindgen segfaults with the ancient LLVM on OSX 10.9. Issue #1006. Solution: use LLVM >= 5 binaries from the LLVM site.

2.) Stable (deprecated) rustfmt causes bindgen to segfault. Solution: cargo install -f rustfmt-nightly. It could be as simple as "stable rustfmt requires --force to run because it's been deprecated", but more helpful error messages would be a great thing here.

What happens if you uninstall the rustfmt-nightly and use the component instead?

(TryFrom is being stabilized fairly soon, it got caught up in some silly stuff but is basically good to go)
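For readers unfamiliar with it, TryFrom is the standard library's trait for fallible conversions. A minimal sketch of the pattern (the `Opcode` type here is invented for illustration, not taken from the radare2 bindings discussed above):

```rust
use std::convert::TryFrom;

// Hypothetical example type -- not from the radare2 bindings above.
struct Opcode(u8);

impl TryFrom<i64> for Opcode {
    type Error = String;

    // The conversion can fail, so it returns a Result instead of
    // silently truncating the way an `as` cast would.
    fn try_from(raw: i64) -> Result<Self, Self::Error> {
        u8::try_from(raw)
            .map(Opcode)
            .map_err(|_| format!("{} is out of opcode range", raw))
    }
}

fn main() {
    assert!(Opcode::try_from(42).is_ok());
    assert!(Opcode::try_from(300).is_err());
}
```

(At the time of this thread, TryFrom still required the nightly `try_from` feature gate; it was stabilized in Rust 1.34.)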

So on the mac, something's going on and even with LIBCLANG_PATH set I'm still getting a segfault in some clang stuff. On the BSD box, rustfmt-nightly allows everything to work. Using `cargo install --force rustfmt` will provoke bindgen into reporting an internal error:

Custom { kind: Other, error: StringError("Internal rustfmt error") }

I get the same results if I call the bindgen executable or have build.rs call the bindgen API. I've stuck with having build.rs call the bindgen executable because if I use the bindgen API, compilation is SLOW.

Not cargo install, the

    rustup component add rustfmt-preview


  $ cargo uninstall rustfmt && rustup component add rustfmt-preview
      Removing /home/alex/.cargo/bin/cargo-fmt
      Removing /home/alex/.cargo/bin/rustfmt
  info: downloading component 'rustfmt-preview'
  info: installing component 'rustfmt-preview'
  $ bindgen bindings.h -- -I/usr/local/include/libr > /dev/null
  Custom { kind: Other, error: StringError("Cannot find binary path") }
  $ rustc --version
  rustc 1.28.0-nightly (29f48ccf3 2018-06-03)
  $ cat bindings.h 
  #include <r_lib.h>
  #include <r_asm.h>
  #include <r_reg.h>
  #include <r_anal.h>
  #include <r_bin.h>

Okay. Let’s get that bug re-opened!

It's all good; I appreciate it anyway.

Things do break if you use nightly – and if you ask for help in the community, "switch to nightly" is very, very common advice.

That's true. I do think that some people are too enthusiastic to recommend nightly; it happens when you have people who are really into where the language is going and want you to see the future rather than use something that works in the present.

There is a good interop story for Elixir/Rust. I would not be so sure about the money bit either.

Just some random background for those reading about why Erlang/Elixir<->Rust interop is A Big Thing.

The whole point of Erlang and Elixir is robustness in the face of high concurrency. All other design choices (e.g. functional programming, immutable data) follow from that goal. The core idea is that if one green thread ("process" in Erl/Ex lingo) crashes for whatever reason, the rest keep on running. Interop with native code is the sole exception here, which makes interop very scary. If a C function that's called into from Erl/Ex crashes, the entire program crashes. Boom, gone. Sure, that holds for most other languages too, but Erlang/Elixir people get extra nervous because they're more accustomed to thinking about error scenarios, and because they don't always have the same seven restart/recover layers outside the VM process that good devopsers wrap e.g. Node processes with – because hey, no need, we have Erlang, we never crash.

As a result, the idea of a technology that allows writing native code with a tremendously small likelihood of crashing is very appealing. Until recently, no such technology was available, but Rust changed that. Rust allows writing native functions that can be called from Erlang/Elixir with far fewer worries than C/C++ native extensions ever did, because of all the safety guarantees provided by the compiler. This is a big thing, and I hope that as the interop story improves, Erlang/Elixir will in practice become much faster because more libraries will be rewritten in native code.

Or you can just make a remote node out of your C/Java/Rust/whatever code and not worry too much about it crashing your Erlang system.

It's not just crashing either, it's taking a long time to execute a given function in the native code - that can also cause problems for the Erlang system running it.

Remote nodes have overhead; however, if you can handle that overhead then you definitely should use a remote node or Port over a NIF. That said, NIFs taking a long time to run is not so much of an issue anymore now that the dirty NIF scheduler is enabled by default in the BEAM.

The danger of crashing the whole VM only applies to NIFs (native implemented functions), which is a very tight integration of the external code. There are also Ports, which don't carry the same risk of fatal crashes.

> There are also Ports, which don't carry the same risk of fatal crashes.

Oh, this is a cute statement xD No, ports carry the same risk as NIFs if you use port drivers, because it's still the same way of running the code, just with a different API. You need ports backed by an external process to be isolated from crashes, though there's still the problem of such ports eagerly reading all the available data without any backpressure whatsoever, so your BEAM can trigger the OOM killer. This is much easier to avoid, fortunately.
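For context on external-process ports: the external program talks to the BEAM over stdin/stdout, commonly with length-prefixed framing such as `{packet, 2}` (a 2-byte big-endian length before each message, as set by `open_port/2` options). A minimal Rust sketch of that framing (function names are mine, not from any library):

```rust
// {packet, 2} framing: 2-byte big-endian length prefix, then the payload.
// This is what the BEAM sends/expects when a port is opened with
// open_port({spawn_executable, ...}, [{packet, 2}]).

fn encode(payload: &[u8]) -> Vec<u8> {
    let len = payload.len() as u16;
    let mut out = len.to_be_bytes().to_vec();
    out.extend_from_slice(payload);
    out
}

fn decode(frame: &[u8]) -> Option<&[u8]> {
    if frame.len() < 2 {
        return None; // incomplete header
    }
    let len = u16::from_be_bytes([frame[0], frame[1]]) as usize;
    frame.get(2..2 + len) // None if the payload is truncated
}

fn main() {
    let frame = encode(b"hello");
    assert_eq!(&frame[..2], &[0, 5]); // big-endian length header
    assert_eq!(decode(&frame), Some(&b"hello"[..]));
}
```

A real port program would loop reading frames from stdin and writing replies to stdout; the framing itself is the whole protocol.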

> [...] the idea of [...] writing native code that has a tremendously small likelihood of crashing is very appealing. Until recently, no such technology was available but Rust changed that.

No such technology was available, barring maybe Ada or some other languages.

There's a fine interop story, I've used Rustler before. The problem is when Rust nightly is sufficiently broken that even the auto-generated Rustler boilerplate won't compile. :)

Has rust ever broken working code in a stable release?

Depends on how you define “working”. We have broken some code that used to compile due to soundness issues.

We’ve also made some small changes that in theory can break code but weren’t observed in the wild. The tolerance for that has dropped over time, of course.

When I started using Diesel (Rust ORM) I was surprised to find the library's author answering all of my routine and sometimes even idiotic questions on gitter, and all the general help chats have been incredibly helpful – every time I asked the question, people actually spent time and effort to understand my code and what I'm trying to do.

I still don't quite understand what makes Rust community this way, but it's one of the best things about the language.

That's awesome to hear! I've had similar experiences with Elixir. On the IRC #elixir channel, I've more than once seen Jose Valim (the language creator) or Chris McCord (Phoenix head honcho) answer relatively inane questions about CSS or beginner questions about Elixir. Always felt heart-warming to see, and made me try and actively help out more so those dudes can focus on their work!

Every community of non-mainstream languages that I know of is very nice and helpful.

Elixir is amazing. It is everything I want in a programming language: functional, fast, reliable and elegant.

I desperately want it to become more popular. To this end, I started writing a book that teaches Elixir (Phoenix) in combination with Vue.js. It's still in the early stages, but the thing I realized is that even writing the book is fun! This was very surprising to me because I typically don't find writing technical documentation/tutorials pleasant.

Anyway, if you're on the fence about Elixir, definitely check it out - especially if you're coming from a Rails background!

Looking forward to your book!

As someone who has managed large clusters of Erlang applications, I have to say I am 100% convinced that the language you pick, no matter how amazing its design principles, has no bearing on how well the application runs. If you work for a telecom and you're actually trying to meet a nine-nines SLA, you can do it with Erlang. But if you're a start-up throwing together some random distributed app on a small budget, nine-nines probably isn't your goal, and so your application won't be nearly as stable.

Totally. Uptime & stability is a mentality and a feature, never a free ride! At the same time, having a solid foundation and a community that knows how to take it to rock solid stability is valuable.

Coming to Elixir/Erlang from Ruby, there is a whole new world of thing to break and horrible mistakes you can make. Fortunately, the language/VM also makes it a whole lot easier to fix the horrible new mistakes once you discover that you've made them.

> Elixir has been mostly selling itself: it works, it sits on a rock-solid platform, code is very understandable as the community has a healthy aversion towards the sort of “magic” that makes Rails tick, and it’s quite simple to pick up as it has a small surface area

I was a huge Rails fan back in '07 and built a number of apps with it successfully, but I can't help feeling like its transition to a legacy framework has already begun.

While Rails was never top in the performance category, other languages are catching up in tooling and readability. So now you can get performance and a great developer experience with multiple other languages like Elixir, Crystal, Go and more.

I tend to agree. I love Rails and have written a lot of apps in it. I'm newer to Phoenix, but I'm feeling much like I did early on in Rails. Between Elixir/Phoenix and Crystal/Amber, I think Rails is headed towards legacy as well. I'm not writing any new apps in Rails. The good news, at least, is that being condemned to a "legacy" Rails app should still be fairly pleasant :-)

I strongly disagree. I wouldn't call Rails legacy, but _mature_. And the recent adoption of Webpack(er) certainly brings new life to the framework, allowing me to write ES6 JS with a lot of convention over configuration.

I love starting new projects in Rails, and don't feel the need to look anywhere else: it handles everything for me in a very nice and battle tested way, and I'm always one Google / StackOverflow question away from finding out exactly how to achieve my goals quickly.

How can you compare the overall developer experience of Crystal, which isn't even 1.0 yet and has very few libraries, with the mature ecosystem of Rails?

You're right that the major downside I can see is that the ecosystem is not as big. But when I jumped into Rails over a decade ago, the Ruby/Rails ecosystem was not huge yet either.

But, static typing and being able to compile and deploy a binary is a pretty sweet developer experience. And the syntax is nearly identical to Ruby with a few nice additions. Many companies like ThoughtBot (a huge Ruby adopter) are jumping into Crystal. The future is definitely in compiled/static languages IMO.

If only there were types in Elixir to get rid of those damn nulls...

Supposedly someone is making a Typescript-like precompiler for Elixir (with ML syntax of course). We are all hoping it succeeds!

Eh, as far as I know I'm the only one doing that, and even then only as an experiment/playground, and I don't have the time to work on it lately (have to pay the bills after all).

It doesn't compile to Elixir, but the `alpaca` language compiles to the BEAM (via Core Erlang, I think), so it gets you close, although they are not black-boxing messages, sadly.

If you know of someone else doing typing experiments on top of elixir, please link me (my playground code is on my github repo somewhere)!!!

Found it https://github.com/wende/elchemy

The idea seems just like Typescript, a very interesting project!

Oh yes! Elchemy! How'd I forget about that one (I keep pushing it around!)! ^.^;

It is based on Elm so it has a severely limited type system but it is the most complete on Elixir so far. :-)

This is probably my only gripe with it. I really like static types, since the tools can be so much better in that case. VSCode + ElixirLS is ok, but far from Elm, Typescript or C# in that regard. I've been trying out Dialyzer and Credo, but it all seems so clunky to use.

If you haven't seen the new Dialyzer messages in Dialyxir, I just finished a big effort to improve those messages dramatically for Elixir consumption. Release is in RC currently.

Thanks a ton for your work!

If you mean your future, you could certainly be right. But if you mean the future of all programming I would like to say that compiled/static languages are in fact the past.

They are super useful and given a complicated enough problem/ sophisticated enough programmer they will look like a good enough solution for all problems. But ... Wordpress

Why are they a thing of the past?

Because in the past they dominated everything. This will not happen again. For some problems/people, the ease of dynamic languages is simply more important than the correctness/speed issues you deem paramount.

I don’t mean no one will or should use static languages but believing they will once again dominate like in the 80s and 90s is an illusion imo

Not OP, and not Crystal, but comparing Phoenix ecosystem with Rails is pretty doable. There are a lot of packages available. Obviously not as much as on rubygems, but one can get a pretty good feel for the overall developer experience. My enjoyment of rails isn't really about the wide array of gems available anyway. It's the paradigms and community standards.

Rails is also very powerful because of hundreds of thousands of answers on Stackoverflow and countless blogs. It's not easy to reach that level.

Another reason why JavaScript is unfortunately so popular.

This. If I was a CTO I would never ever choose Crystal, or Elixir, or Kotlin. Give me a proven, "old", battled tested framework with tons of resources that's easy to teach, learn and get sh done with. I'd choose PHP over Elixir for a new business.

I get the idea behind your comment, but without context it's pretty stupid. There are plenty of use cases where Elixir might be a much better choice than PHP, but it depends entirely on what you're trying to do. I sincerely hope you're not a CTO of a company where PHP is not the right approach...

(that said, I do agree that often PHP is a fine choice, and people might overcomplicate things needlessly by choosing something else)

I can't tell if you're trolling here or you genuinely don't realize that the foundation of Elixir is older than PHP.

Disagree as well. How is Rails legacy? It's probably being developed by more people than ever before. Just because it hasn't completely overhauled its architecture doesn't make it legacy. What will stop you from calling Phoenix legacy in 8 years? There will be at least 2-3 new frameworks/languages that are cooler by then.

Thank you for sharing your experience! New languages are one of the "fun" parts of programming. Except at work.

There are legitimate problems of legibility, maintainability, and training when introducing new stuff into the stack. Then there's the balance that we have to strike with re-writing old with new. How much do we gain with the new vs how much time is it gonna cost?

Lately, these have been the things I worry about instead of algorithm design or product design. Precisely the things we don't get taught in school or get interviewed for

True. In my experience that largely depends on the environment.

In Java or .NET shops, there is a significant investment in the surrounding infrastructure where the code is deployed, which has a huge impact on the cost of switching. That's before even factoring in some of the inside baseball related to "discounts" on products like SQL Server if you have a certain number of employees with Microsoft certs.

For just about everything else though...it's not nearly as much of a headache.

Oh man. Those discounts sound like such a headache. It's hard enough considering the asks for one department already...

Environment is definitely the key. We don't use either of those languages but we do run our site on a dozen or so servers. There's a fair bit of coupling that can eventually add up when you run the same apps on the same servers for years

Awesome read! I was actually an intern on Cees' team when we started writing db2kafka (https://github.com/PagerDuty/db2kafka), and got to contribute a fair amount. It was a really fun language to work with, especially if you enjoy the Actor pattern, and had some nice debugging tools. I'd definitely encourage anyone interested to give it a shot, especially anyone with similar requirements as Pagerduty!

Great hearing more and more Elixir adoption stories from well known companies.

The quote that resonates...

> ...some of us are secretly hoping that one day, we’ll only have to deal with Elixir code

I've been lucky that I could choose my stack on a few projects, but I also hope Elixir will keep getting more popular so that I can use it for the more common contractor-type things I get offered.

I'd love to hear more about your troubles with Scala, and any examples of "clean code being hard to write". Do you think this had more to do with a lack of experience with strongly typed functional languages, or more to do with Scala's mixed paradigm?

Also, thanks for sharing your story. Elixir is a great language and I love seeing it being adopted more and more everyday.

I'm not from PagerDuty, but I also moved to Elixir after some years working in a hybrid Ruby + Scala shop. I concur with the author's experience that writing clean and maintainable Scala code is hard. A few reasons:

* Haskell influence on the language, especially early on, encouraged the omission of dots and parentheses wherever possible. Just about every piece of sample code was written as an undifferentiated stream of tokens. There is a mental burden.

* Haskell influence on the community encourages category theory solutions to all problems. This harms interop with Java libraries, in addition to requiring a number of concepts that are safely ignored in many/most other languages.

* Scala's rich OO system (classes, interfaces, traits, singletons, case classes, implicits, etc) results in frequent design paralysis, not to mention taking a while to learn.

Elixir suffers from none of these problems, while retaining some of the things that made Scala a pleasure to use, notably the actor system of concurrency and the use of immutable data structures.

I work for the Scala Center and I'd like to comment on some of the points you make to hopefully explain how I see things from my side. You can expect my opinion to be biased but I'll try to stick to the facts.

> Haskell influence on the language, especially early on, encouraged the omission of dots and parentheses wherever possible.

This is considered an anti-pattern in the Scala community. The last library that used this at large scale was sbt, Scala's build tool, and now everyone discourages the use of infix operators.

> Haskell influence on the community encourages category theory solutions to all problems.

This is simply not true. First, there is no Haskell influence on the language. Scala is a functional programming language; all similarities between the two are just fundamental to how their type systems work and the foundations of the paradigm. Second, I contest heavily that the community encourages category theory to solve all problems. There are many subcommunities in Scala and many styles, but if you talked to some Scala developers, nobody would agree that category theory should be applied to all code. In fact, most people don't. They just use Scala because it's a better Java that boosts their productivity.

> Scala's rich OO system (classes, interfaces, traits, singletons, case classes, implicits, etc) results in frequent design paralysis, not to mention taking a while to learn.

Absolutely agree that for someone not experienced, it may be difficult to know what's the best way to design libraries (this is more of a problem with libraries than applications). This is a big problem of Scala that I think we're solving by making the language more opinionated and being more open about what's encouraged and discouraged.

If you ask for my opinion, I think we also need to do a better job of communicating all the knowledge that advanced Scala developers learn but that is usually kept to closed circles.

>They just use Scala because it's a better Java that boosts their productivity.

I spent three years on the 2nd largest Scala 'team' in the U.S. (first as an engineer, then leading a sub-team). The problem is that using Scala as a 'better Java' doesn't really buy you much productivity-wise. A small percentage of sub-teams tried using Scala this way and hit a lot of roadblocks with the unfamiliar syntax, immature tooling, and other quirks.

The teams that had huge productivity gains were the ones who leaned heavily on the functional features WHILE caring about overall readability. That means yes, we used Scalaz but we discouraged the omission of dots and parentheses (as you mentioned), discouraged operator overloading/symbolic methods, and were very deliberate about when to reach for features such as HKTs or some of the more unfamiliar features of Scalaz.

Overall it led to a very productive and fun development experience, and at the same time illuminated many ways in which the Scala ecosystem could be improved.

Productivity is defined in subjective terms. I know several success stories of companies that use Scala as a better Java and find themselves more productive. That doesn't mean they don't do any functional programming -- it just means they don't need to use Scalaz/cats to be happy Scala developers. The OP is mostly concerned about Scala being a language that encourages the use of monad transformers and catamorphisms all over the place. What I'm trying to illustrate is that nothing could be farther from the truth.

> This is simply not true. First, there is no Haskell influence on the language.

Since Scala has no built-in way to define FP constructs (monads, functors, ...), cats and scalaz are practically inseparable from the language itself when you try to go beyond the basics in FP. So it is very easy to see where all the confusion is coming from.

Lisp doesn't have any way to define those FP constructs and it's one of the first functional programming languages ever. I think you're being misled by Haskell.

I can't argue about Lisp. For better or worse, Scala FP programmers are more inspired by Haskell. Scalaz was basically an attempt to port all of Haskell to Scala, 1 to 1.

We run a large (1M+ LOC) Scala codebase and don’t have these particular issues (we have other issues like bitrot in libraries but that’s the same in all languages).

You need to sort of put your foot down on using out-there FP stuff like some parts of Scalaz and turning your code into line noise. The non-FP parts are nice and useful though.

I can go on for days about how someone wrote some unreadable Scalaz that had atrocious performance (even with @tailrec) and someone else rewrote it in plain Scala for a 100x perf win.

Still... would I use Scala again? Yes, but mainly because it allows you to use high-quality Java libraries that are unmatched by any other language. It's also just much nicer than Kotlin.

We also really like Akka...

Elixir drops parentheses in function calls, too. Unless it's an anonymous function. Or unless it's a function you pipe to.

There are other syntax quirks as well. A very very very opinionated take on Elixir syntax: https://medium.com/@dmitriid/in-which-i-complain-about-elixi... :)

(Note: I haven't touched Elixir in a while, so some of the things there may be wrong)

Elixir has optional parentheses but they are discouraged in most cases and the official formatter will add them back, so at least there is a community opinion.

The formatter, as Ndymium mentions, fixes a lot of this.

Optional parentheses were one of my main issues with Elixir, but once I understood why this choice was made, and what with the formatter judiciously adding them back in, I'm much happier.

(I recall having tons of issues when using the pipe operator + leaving out parentheses. Happy that these aren't much of an issue anymore.)

1. Disallow infix notation abuse through code review. Scalafmt can automate that for you.

2. I mentored a couple novice developers working on a Play codebase, the rough guideline was to use libraries that depend on Scalaz, Cats, Shapeless ... but never import any of them in our codebase. No HKT in our code either. (Nowadays I work with all that on a daily basis, you can keep things reasonable).

3. I think you're mixing up a few unrelated things. Scala's OO model is cleaner than Java's. Implicits abuse is avoidable with frameworks like Play.

case classes and implicits are almost the exact opposite of OO practices.

Yet case classes with pattern matching and pure functions leads to some of the most readable code I’ve seen, without requiring much FP knowledge. Unlike OOP in which the logic is hidden away in some AbstractVisitorBuilderFactory and a sea of required classes to make that work.

Things can get rough when you take FP or OOP to their logical ends, the sweet spot lies somewhere in between.

I'm pretty sure, as far as type systems go, Elixir is strongly typed as well as fairly functional (it's built on the BEAM, after all).

I think it has to do more with the latter. To me, Scala is too powerful – it rivals C++ in complexity, despite being half its age. It almost feels like every Scala codebase is written in a different language, and I always felt I spent more time parsing Scala than understanding the code. It's a powerful language to write, but hard to read – almost the polar opposite of Go. As a result it can be very hard to bring new engineers up to speed on Scala, and even engineers who know Scala have to learn "your flavor" of Scala.

I know what you mean. Even things like error handling in Scala have divergent paradigms, to the point where we had one Scala consultant at the company I worked for using exceptions, and another trying to use Eithers.

I have to say though, coming from a C++ background, Scala still feels much more beautiful, clean, and safe than C++... but then, almost everything does ;)

Elixir is dynamically typed, just like Erlang and the BEAM. Strong typing is a most requested feature from outside the community, seldom from inside. (EDIT correct, strong typing is not static typing. Does anyone make weakly typed languages anymore?)

Strongly typed meaning that Elixir/Erlang don't allow:

  1 + "some string" + [A,List,Of,Somethings]
Compared to javascript where:

  1 + "2" == "12"

Statically typed means that variables and functions hold or return specific types which can be determined before execution.

The result add(1: Int, "2": String) -> "12": String can be achieved in any language, regardless of how strict the type system is. All your example shows is that in Elixir no such `add(int, string)` operation is provided as part of the standard library/environment, while in JavaScript it is.

The "+" (operator) case is only special in languages that do not implement operator overloading -- it is completely orthogonal to the type system.
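To illustrate the orthogonality claim: even a statically, strongly typed language can offer JS-style `1 + "2" == "12"` purely via operator overloading, without loosening the type system at all. A toy Rust sketch (the `Num` newtype is invented for illustration):

```rust
use std::ops::Add;

// A toy wrapper type; Rust's orphan rules prevent overloading `+`
// directly on i64, so we use a newtype for the demonstration.
struct Num(i64);

// Overload `+` for (Num, &str) to return a String, mimicking
// JavaScript's number-plus-string behaviour.
impl Add<&str> for Num {
    type Output = String;

    fn add(self, rhs: &str) -> String {
        format!("{}{}", self.0, rhs)
    }
}

fn main() {
    assert_eq!(Num(1) + "2", "12"); // the JS result, in a static language
}
```

The type checker still verifies every operand pair at compile time; the "weak-looking" behaviour lives entirely in the overload, which is the point being made above.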

Type coercion does not equal weak typing -- I submit JS and Perl as examples where types are strong but there is a coercion protocol dependent on which types an operator expects around it.

If we are being pedantic, what is going on here is not type coercion.

If it were type coercion, we would expect the add function to have either of these two behaviours:

  - coerce the input arguments to strings, i.e. implicitly accept two strings and return another string

  - coerce the input arguments to integers, i.e. implicitly accept two integers and return an integer
However, that is not what is happening here.

Of course, in JavaScript it is implemented as a function that takes the theoretical "Any" type and decides what to do at runtime. But in a statically typed language, the equivalent mechanism would be dynamic/multiple dispatch (or a static overload, possibly in combination with a generic/template method) – not type coercion.

Fair, C may have been a better example, where nearly everything is actually just a block of bytes of various lengths that can be intermingled fairly freely with minimal effort to get anything past the compiler.

Sometimes useful, but the type system ends up offering few meaningful protections as a consequence.

The types of values and the legal conversions between them are well defined in C – the assumption that everything is just a block of bytes and hence may be cast freely between types is explicitly incorrect (and is what may lead to bad/invalid C code).

However, C does allow you to write code that cannot be proven legal with respect to the formal system of the language and still get it to compile; I think that is what you are referring to.

But just because you can write an invalid program does not mean the type system doesn't exist or is flawed. I can also write an invalid Haskell program.

The difference between these two would be that the Haskell program can be proven – by an automated process – to contain only operations that map to well-defined operations in the formal language model, while the same cannot be done for C in the general case.

However, just because you cannot always prove conformance to a formal model by automated means doesn't mean that no formal model exists (because it does!).

There's no reason a language can't be both strongly and dynamically typed (like Elixir or Python) or statically and weakly typed (like C). :)

According to wikipedia, Erlang/Elixir is strongly typed. I guess you and the other poster mean static typing?


Elixir is a dynamically typed language, so all types in Elixir are checked at runtime.

> Does anyone make weakly typed languages anymore?

I don't think anybody ever did. The definition of "strong" and "weak" seem to be entirely subjective; and of course everybody considers their favourite language to be "strongly typed". It's like a reverse no-true-scotsman.

You may be confusing strongly and statically typed.

Definitely not the lack of experience, there was plenty of in-house Scala experience. I'm not the author, but he has indeed written about his experiences with Scala on his personal blog: http://evrl.com/programming/scala/2017/04/04/scala-part-ii.h...

I wish he would have gone further into detail on why "stuff like Scalaz has no place outside academia". While I understand there's a bit of mental overhead, I've found there to be much real world utility in using that genre of libraries/patterns in production.

It's also interesting that he wants the ability to "run your tests for the intermediate refactoring steps that wouldn’t typecheck and then let you fix up things later."-- I'm not sure what it means to want to run code with ambiguous types in a type aware language like Scala. Type information is deeply involved in runtime program behavior. (E.g pattern matching on types, resolving typeclass instances)

I certainly agree that nontrivial sbt work is rough.

I don't know what he meant by that either, but it's important to mention that `Scala != Scalaz` and that sbt is not the only build tool used in the community (and it's certainly not built by academics!).

(backend of Victorops is in Scala)

I worked on Elixir at my last job and it left quite an impression. At my current role (c#/.net shop) I constantly evangelize for it. The fact that this obscure language makes me comfortable enough to recommend to my entire team speaks volumes to the quality and community around it. Cannot recommend it enough.

How is the library ecosystem with Elixir? Currently I'm usually using Go, where I have a library for anything, and I'm quite apprehensive to start using Elixir if there's a lack of libraries.

I love Elixir, but being completely honest, the ecosystem is sometimes lacking. That isn't to say that you can't do things, but rather that sometimes you need to roll out your own version of something. For example, Redix refused to support SSL because Redis didn't support it natively[1], even though Redis providers had started wrapping Redis with SSL proxies (the maintainer has since reconsidered and agreed to add SSL support). We also couldn't find a decent library for payload validation. There are Ecto changesets, but those are things you check right before saving to a DB, after you might have done a lot of work already.

Sometimes there are split clients for things I wish had a community-recommended option or were in the std lib. For example, HTTPotion[2] vs HTTPoison[3] (the names are _super_ similar and sometimes confusing).

[1] https://github.com/whatyouhide/redix/issues/77

[2] https://github.com/myfreeweb/httpotion

[3] https://github.com/edgurgel/httpoison

It's pretty solid. You may have to roll something yourself for really esoteric APIs, but pretty much everything you probably want to use is already supported.

I've been using Elixir for the last two years, for pretty standard web app stuff, and have never had to write my own libraries (although I contributed code to some in their earlier days).

You can search for anything you need on Hex: https://hex.pm/

It's decent, plus you can use all the Erlang libs. Unlike Go, Elixir has a very nice package manager, Hex :)
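To illustrate the point about Erlang libs: Erlang modules show up in Elixir as plain atoms, so the entire OTP standard library is callable directly, with no wrapper packages needed. A small sketch:

```elixir
# Erlang's :crypto module, called straight from Elixir:
:crypto.hash(:sha256, "hello") |> Base.encode16(case: :lower)
# => "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

# Erlang's purely functional FIFO queue:
q = :queue.in(1, :queue.new())

# Simple helpers like :timer also just work:
:timer.seconds(5)  # => 5000
```

In practice this means "the Elixir ecosystem" is really the Elixir ecosystem plus twenty-odd years of Erlang libraries.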

It seems that Elixir would be a perfect platform for Artificial Intelligence programming.

After all, Elixir (1) is a functional language, and (2) sits on Erlang's OTP for massive parallelism.

But I don't see much activity in this space. What is the probability that Elixir will emerge as the "killer" platform for AI development?

The problem is that BEAM wasn't built to excel at number crunching or doing computationally expensive work. A good explanation here https://stackoverflow.com/questions/11214336/what-makes-erla...

But I've been reading a lot about this topic too. There are a lot of people out there doing exploratory work to see if Elixir can be a viable solution.

There is already some work out there, since Elixir lends itself perfectly to parallelizing computation and its data is immutable. See: https://www.youtube.com/watch?v=YE0h9DURSOo and http://www.automatingthefuture.com/blog/2016/9/7/artificial-...
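A minimal sketch of what that parallelization looks like in practice: fanning CPU-bound work across all schedulers with `Task.async_stream` (the workload below is just a stand-in):

```elixir
# One task per unit of work, capped at the number of BEAM schedulers.
results =
  1..4
  |> Task.async_stream(fn n -> Enum.sum(1..(n * 100)) end,
       max_concurrency: System.schedulers_online())
  |> Enum.map(fn {:ok, sum} -> sum end)

# Sum of 1..N is N(N+1)/2, so results == [5050, 20100, 45150, 80200]
```

Note the caveat raised elsewhere in this thread: this parallelizes coarse-grained work nicely, but each individual task still runs as BEAM bytecode, so it doesn't make the inner number-crunching loop itself any faster.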

It will never be even close to native code, because of BEAM overhead, the lack of static typing, and the memory layout. It is suited for concurrency, not computationally intensive workloads. Not to mention GPUs.

There’s the 2012 or 2013 book by Gene Sher, “Handbook of Neuroevolution Through Erlang”. I was porting some of its code to Elixir to learn the language, but gave up due to inexperience; I lost my interest in Elixir when the Erlang code resonated more with me.

Elixir is suited for reliable and fast servers, not computing workloads... It could orchestrate other processes, but it could hardly compete with native ML libraries. Also, calling foreign code from Elixir messes with the BEAM scheduler if it takes more than a couple of milliseconds.

Erlang is not remotely based on Prolog. It shares the syntax because it was first implemented in Prolog.

Are you attempting to confuse the conversation with facts?

I wonder about a language like Elixir in the age of serverless. Erlang will always be great at the use case it was originally written for (and what WhatsApp uses it for), but that's not the scale most startups operate at.

With serverless, the advantages Erlang has don't matter much, since the process doesn't live long enough to exploit them. Instead the bottleneck is cold start times. So these types of languages (including Java) that are meant to start once and stay running for a long time aren't a great fit. Go, Rust, possibly Pony, are better fits. Even Node isn't a superb choice; cold starts can be slow because of loading so many files.

You do realize that, outside of cloud providers' hype, "serverless" is a tiny fraction of cloud use, let alone overall use.

Sometimes hype is warranted. Get past it and look at the actual technology and its value. If you are building a website in 2018, you are doing yourself a disservice by manually managing servers.

At our scale, if we used "serverless" our AWS bill would go from several mil a month to 20+ mil.

At your scale, you could take a "serverless" application and just deploy it across managed containers, which is how "serverless" infrastructure seems to work to begin with.

You're not a startup.

Isn't the goal of a startup to grow? If you lock yourself into a model where your compute spend is 10x that of your competitors, you're not gonna get very far.

At a certain scale using cloud services at all is too expensive. At a certain scale you write your own databases. You make your own chips.


Seems to be down at the moment, but here's some previous discussion: https://news.ycombinator.com/item?id=5243360

It might be down because the project is pretty much dead at this point. Ling hasn't seen a commit in 3+ years.
