Good and Bad Elixir (keathley.io)
346 points by todsacerdoti on June 10, 2021 | 149 comments

I disagree with so much of this! Though I can tell the author is coming from a lot of experience.

How often is it that you don't know if you're expecting a map or a keyword list? I don't think I've ever intentionally put myself in that situation (though it's nice that Access supports both).

They say to avoid piping result tuples, but I actually think their example code shows the strengths and weaknesses of both approaches. I use pattern matching on tuples when I want more complicated behavior (which might be hard to read in a big else block for a with) and a nice with statement when there's 'one true path.'
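To make the contrast concrete, here's a rough sketch (fetch_user/1 and authorize/1 are made-up helpers returning result tuples):

  # Pattern matching on tuples, when each error needs its own handling:
  case fetch_user(id) do
    {:ok, user} -> authorize(user)
    {:error, :not_found} -> {:ok, guest_user()}
    {:error, reason} -> {:error, reason}
  end

  # A `with`, when there's one true path and failures just fall through:
  with {:ok, user} <- fetch_user(id),
       {:ok, session} <- authorize(user) do
    {:ok, session}
  end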

In general, I think one of my favorite things about Elixir is that they've done a good job of avoiding adding features you should never use. These all feel like personal judgements about tradeoffs rather than actual hard rules.

> These all feel like personal judgements about tradeoffs rather than actual hard rules.

I'm glad it came across this way. These are all ideas that I believe have helped me build more maintainable systems. But that's just my experience and opinion. I'm happy people are finding stuff to disagree about because I also want to grow and improve my own ideas.

I regret titling the post with the words "good" and "bad". This started as an internal memo for my new team, grew into a blog post, and I didn't change the title. The title adds connotations that aren't very useful to the reader and sets up a more confrontational tone than I want. Live and learn.

Either way, thanks for taking the time to read through it and for providing feedback.

> I regret titling the post with the words "good" and "bad".

The title struck me as exactly what I would expect from you (based on various articles/comments and your podcast), and fwiw, I personally really appreciate it.

You're one of the people in the community that communicates in a way that often strikes me as 'overly' opinionated, but the combination of that and your actual experience tends to lead to fruitful discussion (see this thread). Plus, I often find myself wanting to 'prove you wrong' and then I end up doing a bunch of equally fruitful research.

So keep doing your thing, is what I'm saying, I suppose. It's appreciated :).

Thank you for writing it! It was a pleasure to read and I appreciate you contributing to the community.

> I disagree with so much of this! Though I can tell the author is coming from a lot of experience.

A bit off topic, but this sums up a lot of my experience with code reviews. People have opinions on what is the "one right way" and if I differ, we go through this drawn-out discussion on GitHub about it. Most of the time, I see it as: there are different ways to do it, with different trade-offs. I find myself just giving in so I can get the damned PR approved without a few days of back and forth.

Granted there are times when there is a much better way. In those cases, I very much appreciate the feedback. It's truly educational. In my experience though, most of the time, it's senior developers insisting their style/way is best.

I think it's important for teams to establish best practices and enforce them. It's good for team cohesion and culture.

For example I've worked on teams that blocked PRs on minor typos in code comments. The signal this sent to me was "a little goes a long way, we care about the little things".

Sure enough that ended up being the cultural norm and the team benefited from that mentality. There were also downsides though! That's part of the trade-off, but the benefits of a unified "culture" outweighed the downsides by far.

What's reeeaaally bad is when you have competing points of view between tech leads and they passively enforce their own idea of what's "right" without regard for the other lead.

I agree with this. There doesn't have to be One True Way, but within the scope of shared work it's useful if there's a sense of "around here we all...", especially if the multitude of ways complement each other so that the sum is greater than the parts.

Where this gets hard for me, is in transitioning between teams, contributing to open source projects etc. The dialects can get kind of discordant. At least with Elixir, I find these differences are more palatable because the basic paradigm is consistent. In the "hybrid" languages, you get competing language models in the same code base.

Agreed, this was a huge pain in the past for me. Then, the even MORE senior engineers are just like "Ok do the tests work? Yes? Ok whatever, approve"

So there's this middle section of "senior" (but still in age terms relatively junior in their lifecycle) that are just trying to exert dominance over a dumb web app that controls the flow of the spice (errmm.. code) in the org. It's really silly and deeply counterproductive.

I agree with so much of this! I think it stems from so many software engineers having a quantitative background, where solving for x is either right or wrong. Much of “good” software engineering, however, is subjective, and I think many of these folks have a difficult time reconciling that. So you get into these battles over which is the best way when, unless something is egregious, either approach is probably good enough.

I have a writing and liberal arts background and so while bad writing is obvious, good writing comes in many stylistic forms. I think this has allowed me to see varying perspectives on “best practices.”

I've noticed this is much more pronounced if there's been no discussion before the PR (talking more about larger decisions than what's in the bulk of code review feedback, I guess). I've observed many times people being frustrated because they wrote some code without making sure the relevant people were happy with the overall approach, then got torn apart in the PR.

Yeah, I disagreed with a decent amount, but I tried to read it as “here are the patterns and practices I feel passionate about” and that made it an interesting read for me.

The sort of declarative good/bad language worries me a bit because people new to the language may take it as gospel, but overall I’m glad to see people putting their thoughts out there with explanations and examples :)

I do think that being opinionated about code style leads to code that's easier to read and maintain. I was just delighted to find someone whose tastes diverged so distinctly from my own in a smaller language community.

Aside from using an automated formatter/linter, I think being opinionated about code style is only really sustainable for single person projects. Otherwise it leads to inconsistent code reviews where people are making changes for reasons they can't understand.

It's one thing to be opinionated about code styles, but I would argue that doing so when those opinions go against standard idioms is a recipe for disaster.

If I ran across a lot of these suggestions in code, I'd refactor them to be more idiomatic/practical.

But how do you take this to a code review scenario? Are these suggestions that you can take or ignore?

Most of those in the article you can ignore and get away with it for a long time. Some of them start to become problematic once the code grows quite a lot, others are not a problem ever because they are very dependent on the context.

Also disagree with the author!

> You should not use else to handle potential errors (when using with)

Yes you should! Otherwise the with statement could return something that is neither :ok nor :error. I think it's therefore good practice to put else {:error, error} -> {:error, error}, especially because Elixir is not statically typed, so this helps constrain the possible types the statement can return. If you omit the else, your construct can return any number of types from values produced downstream.
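For illustration, a minimal sketch of what that else clause buys you (fetch_user/1 and fetch_account/1 are hypothetical helpers):

  with {:ok, user} <- fetch_user(id),
       {:ok, account} <- fetch_account(user) do
    {:ok, account}
  else
    # normalize every failure to a known shape
    {:error, reason} -> {:error, reason}
    :not_found -> {:error, :not_found}
  end

Without the else, a clause returning bare :not_found would leak out of the with unchanged.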

IMO it's not as simple as "just use Access". In addition to maps and keyword lists, there are structs in play.

Access works with maps and keyword lists, but not structs. There are pieces of code I've written that reasonably accept map or struct, but not keyword list.

Map.get/3 also has a default value; Access only lets you use `nil` for that.

OTOH, the nil-eating behavior of Access is terribly convenient sometimes.

It's all about what you need at the time. There is more than one way to do it, for a reason.
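A quick sketch of those differences (nothing here beyond the standard library):

  map = %{a: 1}
  kw = [a: 1]

  map[:a]              #=> 1   (Access works on maps)
  kw[:a]               #=> 1   (...and keyword lists)
  Map.get(map, :b, 0)  #=> 0   (Map.get/3 takes a default)
  map[:b]              #=> nil (Access only gives you nil)

  defmodule Point do
    defstruct [:x, :y]
  end

  p = %Point{x: 1, y: 2}
  p.x                  #=> 1
  p[:x]                #=> raises; structs don't implement Access by default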

> How often is it that you don't know if you're expecting a map or a keyword list?

It's just programming to the interface and not the implementation, which often has advantages and very rarely has any downsides.

The benefit of Map/Keyword is that you look at the code and you know right away what type it takes. Access is more universal but you look at it and have multiple possibilities.

I would recommend writing specs for your functions and using Dialyzer, too :)

I've never been able to figure out how to write custom types in Erlang. Any resources in particular that you recommend? I am not trying to be obtuse but the Erlang docs are unreadable and I have admittedly written quite a bit of Elixir code. Most of the time I have to just figure things out ad-hoc from my own interactions with the shell.

Depends what you mean by “custom types.” Elixir supports structs, but they’re not really as rich as classes in another language, for example. If you just want to write type signatures for your functions though, this is a pretty good guide: https://hexdocs.pm/elixir/1.12/typespecs.html

Right, using structs for the things I'm doing has been extremely painful, to say the least. Let's say I'm trying to create a module that represents a European option, and I want to encapsulate things like "strike price", "expiration date", and the various Greeks in a way that is subsequently modifiable (FWIW, it would be perfectly fine to then just respawn a new proc to have the new data if that is absolutely necessary...). I haven't been able to identify a way to even reference other PIDs that exist which might be able to (with some wizardry) solve some of these problems.

Right now I'm basically wrapping all my data in Agents with maps inside, but that is so lame.

Thanks for the quick response. Maybe I'm trying too hard to use a saw to hammer a nail.

EDIT: I don't care about type signatures on functions. I'd just like to be able to have a real time system that updates based on ingested values, that I can distribute easily. It seems like it's possible, even close, but the structs just don't get me quite where I want them to be.

Your description is a bit hard to follow, but it smells kind of like something Ecto schemas could deal with?

That is, by defining an Ecto schema for your data structure you can go almost directly from raw json to clean, correctly cast data.

To use an example similar to yours, this is how I process data from cryptocurrency exchange APIs—just write up a schema for the shape of the response, parse the json into a map and throw the map into Ecto.Changeset.cast. Even if everything is a string in the raw data, it will be correctly cast into integer/float/whatever as defined in the schema.

Sorry about that. Vacationing in Miami Beach right now so I might not be 100% in my descriptive capacity. Anyways, yeah, that obviously could work, but I'm kind of trying to make this all just run in memory because I'm already persisting all the data separately.

You can use Ecto schemas and changesets without a database! It is an incredibly useful library, once you get over the somewhat steep learning curve.

Huh, TIL. Will have to look into that. Do you have any blog posts handy that explain this?

Thanks for your reply to my original comment. I hadn’t considered Ecto, but I guess it does make sense to think of the ChangeSet abstraction irrespective of whether or not you are persisting the data in question. Appreciate it. Do you have an email or something so we could chat further?


I support that too, but it's yet another step as opposed to simply using a specific module to suit the data type :)

On another note, I find Dialyzer a bit slow to run even on modest codebases.

I'm not familiar with Elixir but I'm surprised that the "functional way" wouldn't be to use the pipes in conjunction with a map function that only applies each steps if it's a success and skip them/pass the value if it's an error.
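That pattern does exist in Elixir land, usually as a hand-rolled bind helper or a third-party library; a minimal hypothetical sketch:

  defmodule ResultPipe do
    # apply fun only on success; pass errors through untouched
    def bind({:ok, value}, fun), do: fun.(value)
    def bind({:error, _} = err, _fun), do: err
  end

  {:ok, 2}
  |> ResultPipe.bind(fn x -> {:ok, x * 10} end)
  |> ResultPipe.bind(fn x -> {:ok, x + 1} end)
  #=> {:ok, 21}

In practice, `with` covers most of the same ground in the standard language.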

I love this article to bits. The elixir community loves blogging, but so much elixir content is absolute beginner stuff, and so much of the remainder is unopinionated, descriptive stuff. I've found it really hard to strike on some good "how to best do xyz, what are the tradeoffs" style knowledge. This article is the first elixir thing I've read in a long time that really made me go "o wow yeah, I've been doing that wrong". Cool stuff!

Thanks! I'm glad you enjoyed the article and it could provide some useful insights.

I think this is true for most programming languages. I want more stylistic, opinionated stuff that comes with contexts and caveats, so I can get to understand their decision making process and avoid footguns that can only be learned through sufficient experience.

I totally agree, though I think those articles are a lot harder (eg requiring more skill) to write well because you need to quickly ramp your readers on all of whatever the context is that's necessary to actually appreciate the nuance of the design decisions under discussion. You're basically by definition going to be out of the realm of "just follow best practice X" or "apply pattern Y or you're doing it wrong."

As a small example, I've been working on a small asyncio-based web service (Python) which is oriented around an expensive process that generates a result, where the result is stashed in sqlite and returned. I knew upfront that I needed a way to track when a particular result was already being prepared so that if I got a second request for it, it would collapse it into the first one and only do the work once. I wrote this as a twenty line memoizing decorator, but it turns out this issue has a name: cache stampeding. Once I realized that, I discovered that there are existing (and much more complicated/tunable) solutions to this problem, such as https://github.com/DreamLab/memoize/, but the article pitching that solution spends quite a bit of time getting to it— enough so that if I'd discovered it before building my own, I'm not sure I would even have appreciated its applicability:


This hit close to home, especially as a mostly independent developer who can’t ask the neighborhood senior engineer about stuff.

A much more basic example of this from my recent experience is discovering GenStage, which solves the generic problem of backpressure in sequential processing but doesn’t appear in the core Elixir documentation.

Good stuff. I disagree about the first point, though, and have actually gone the opposite way, going from generic Access via `[]` to specific `Map.get` and `Keyword.get`.

The reason is that the underlying data structures are quite different, and operations and access patterns on them shouldn't be agnostic.

You should use Keywords basically only as options to functions, in my opinion, and that's just for historical consistency and interop with Erlang. Otherwise key-value pairs should be a map. And using `Keyword.get` and `Map.get` makes it immediately obvious what the underlying data structure is, as well as prompts you to think about whether you should use the `!` version or provide defaults.
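A sketch of that explicitness (connect/2 and its options are purely illustrative):

  def connect(url, opts \\ []) do
    # Keyword.get makes it obvious that opts is a keyword list,
    # and forces a decision about the default
    timeout = Keyword.get(opts, :timeout, 5_000)
    do_connect(url, timeout)
  end

  # versus Map.fetch!/2 when a missing key should be a loud failure
  host = Map.fetch!(config, :host)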

Otherwise, I think the tips are all decent to great. I am a little on the fence with exceptions, though. I know there's a meme in the community of "let it crash", but in my experience, that rarely feels like the right thing to do. I much prefer a Rust-style, "Result-all-the-things". It seems hard, in practice, to rely on crashes and restarts to maintain the right state invariants.

> It seems hard, in practice, to rely on crashes and restarts to maintain the right state invariants.

I feel the opposite way. Perhaps if you are mostly doing things where you have control over a transient state (like a chain of validations on a database entry) the error tuples are easy...

But if you are doing something like checking in on cached state for something remote that you don't have control over... It is so, so much better to give up, restart your GenServer, and re-sync your cached state in the init() clause than try to "guess" what the remote thing has for its state after it's sent back an error.

I dunno, I rarely get issues that crash just once. For example, in the blog post, it's recommended to use `Jason.decode!`. But what if whatever server you're hitting is returning invalid JSON for a time? If you crash, the GenServer will restart, it will crash again, and it will just be either in a crash loop (meanwhile DOS-ing the server?), or worse - if it happens in `init` - will take down its own supervisor after a number of failures, cascading up the tree until your app crashes and restarts.

I think it's better to handle the Jason.decode error, and apply some sort of backoff strategy on re-requests. In general, you have so little control over the timing and count and rate of restarts. It's a very blunt object to just crash. Again, with the `Jason.decode!` error, suppose it's in a GenServer that's updating its state. In some cases you may want to reset the state to empty (essentially what crashing does) and in other situations you may want to keep the previous state, from the last time you made that request. Maybe you want to track how many consecutive failures you're seeing and maintain the state for some time, but eventually blank it, or crash at that point.
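Something like this hedged sketch, where schedule_retry/1, fetch_body/0, and the backoff numbers are purely illustrative:

  def handle_info(:refresh, state) do
    case Jason.decode(fetch_body()) do
      {:ok, data} ->
        {:noreply, %{state | data: data, failures: 0}}

      {:error, _reason} ->
        # keep the old state, back off, try again later
        schedule_retry(backoff(state.failures))
        {:noreply, %{state | failures: state.failures + 1}}
    end
  end

  defp backoff(failures), do: min(30_000, 100 * Integer.pow(2, failures))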

It could very well depend on the problem domain, though, for sure. I mostly work on freestanding, pure Elixir apps, and a couple of Phoenix apps. I'm more thinking about the freestanding apps, since in Phoenix, sure, just crash your request, whatever.

You can certainly invent a scenario where using `Jason.decode!` wouldn't be appropriate. In that scenario, absolutely, handling the error and backing off is more appropriate. I'd also argue you shouldn't be doing side-effects like that in an init without a lot of care as well.

The same would be true if you were building a kafka consumer. You wouldn't want to crash in that scenario since you could easily poison the entire topic with one bad message.

There are ways to allow for crashes this way though. An alternative approach would be to use a dedicated process for scheduling work and a dynamic supervisor that can be used to start workers as needed. The work scheduler would monitor these worker processes. This means that you could freely crash the worker and allow the work scheduler to determine what further action must be taken. I've used both your approach, and this alternative approach in the past both to good effect.

> I'd also argue you shouldn't be doing side-effects like that in an init without a lot of care as well.

I learned this the hard way. I wrote a GenServer that does some periodic database cleanup (maybe not the best solution), and I naively wrote the init function to immediately do the first cleanup on startup. Then I discovered a case off the happy path where the cleanup was failing due to a constraint violation, and the failing init brought down the whole application (edit: while someone was trying to use it, of course).

yep - init is for things you can guarantee, not things you can't, like anything needing to make network or database calls.

handle_continue was added for these kinds of things that should happen when starting up gen_ processes.

the entire init chain is sequential, so not only will it risk crashing your entire supervisor tree, but it will slow startup
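The handle_continue pattern mentioned above looks roughly like this (load_remote_cache/1 is a stand-in for the risky work):

  def init(opts) do
    # nothing risky here; just hand off to handle_continue
    {:ok, %{opts: opts, cache: nil}, {:continue, :load_cache}}
  end

  def handle_continue(:load_cache, state) do
    # network/database work happens after init returns,
    # so the supervisor's startup chain isn't blocked
    {:noreply, %{state | cache: load_remote_cache(state.opts)}}
  end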

The scenario is not invented per se, it’s something that really happens sometimes. Maybe the systems you interact with never return malformed data more than a few times, but the ones I work with do. Throwing an exception on a deserialization failure in order to crash your process just seems like punting on your retry logic to me.

Sorry, "invent" had the wrong connotations there. I just meant that there are scenarios where crashing wouldn't be acceptable. And in that case, sure, you should prefer to handle the error directly.

But, allowing the process to crash doesn't punt retry logic at all. For instance, in the work scheduler example, the work scheduler would detect that a worker had crashed and would back off appropriately. This allows you to write your worker code in a way that avoids error handling while still managing faults.

The work scheduler example involves writing a work scheduler, though :). The logic is going to go somewhere, is what I’m saying.

Right, but often you should punt your retry logic. You already need to handle process crash at high level, so you already want some kind of restart/backoff in your supervisor. Unless there's something specific that you want to do differently for this particular kind of failure, having separate retry/backoff logic for each kind of failure is just clutter.

You should definitely handle the crash at a high level, but that’s not the same as punting on it!

Not putting any explicit handling for failed requests and instead allowing the current process to crash, knowing that the supervisor's handling will do the right thing in that case, isn't that punting it? I'm not fluent in American but doesn't punting pretty much mean kicking something far away?

It means giving up, more or less. The default handling that a supervisor provides is probably not sufficient for a looping failure, was the point.

> I think it's better to handle the Jason.decode error, and apply some sort of backoff strategy on re-requests.

What's keeping you from implementing a back off strategy in your init? You can return :ignore to the supervisor and try again later.

If you're backing off inside the process itself, you're gonna get some weird shit where your service exists but it's not in a truly available state, and you will have to handle that by basically halfheartedly rewriting logic that already exists in OTP.

Reminds me a bit of this: https://discord.statuspage.io/incidents/62gt9cgjwdgf

In particular: "14:55 - Engineers pinpoint the issue to be strongly correlated to a spike in errors in originating from our service discovery modules. It is determined that the service discovery processes of our API service had gotten into a crash loop due to an unexpected deserialization error. This triggered an event called "max restart intensity" where, the process's supervisor determined it was crashing too frequently, and decided to trigger a full restart of the node. This event occurred instantaneously across approximately 50% of the nodes that were watching for API nodes, across multiple clusters. We believed it to be related to us hitting a cap in the number of watchers in etcd (the key-value store we use for service discovery.) We attempt to increase this using runtime configuration. Engineers continue to remediate any failed nodes, and restore service to our users."

The general theme for people who’ve been working with Erlang for a long time seems to be: rely on crashes for truly unexpected scenarios, but validate data (especially at the edges) so crashes aren’t a regular occurrence.

In the spirit of Python's "Exceptions are when the runtime didn't know what to do, and Errors are when the programmer didn't know what to do", I'd say "rely on crashes for exceptions, but validate data against errors".

I wish we had only maps, with either only strings or only atoms as keys. I've ended up too many times with code that failed because it got a string as a key instead of an atom, or vice versa.

Reminds me of the following from my Ruby/Rails days: https://api.rubyonrails.org/classes/ActiveSupport/HashWithIn...

I don't really like this article cuz:

- I like &Map.get/3 over the some_map[:some_key] syntax

- &with/1 can be hellza confusing in many cases if you're calling a bunch of functions, and any ol' one of them can return an error message (e.g. where the heck did `{:error, :bad_arg}` come from?!)

- piping into case statements is awesome, and I'll only stop doing it when people complain about it

- higher order functions are useful, and I don't see a problem using one's judgement as need on when to use them

- In the cases where using &with/1 is actually useful, so is using the else block if you really need to


Yeah, I just don't like this article's recommendations.

> - &with/1 can be hellza confusing in many case if you're calling a bunch of functions, and any ol' one of them can return an error message (e.g. where the heck did `{:error, :bad_arg}` come from?!

You agree with the article on this point:

with is best used when you can fall through at any point without worrying about the specific error or contrary pattern.

I’ve never used Elixir, but when I saw piping into case statements I was confused as to why it’s appearing on a ‘do not use’ list. It does seem awesome! The key thing for error handling is to be able to do it inline, without introducing nesting, like those withs are doing. Maybe you don’t get that with case either. Does elixir have a return statement?

Elixir, like Erlang, is functional so there are no statements. Everything is an expression, which evaluates to some value when executed. The “return” value is just the value of the last expression evaluated in the function call.

Is there an equivalent of the Haskell Either monad & do-notation / Rust Result + the ? (try) operator?

Sort of, there’s a convention used throughout the standard library of returning {:ok, output} or {:error, reason} or nil. That makes it really easy to compose functions/check for errors.

The equivalent of do-notation would be with statements I think: https://elixirschool.com/en/lessons/basics/control-structure...
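For a rough feel of how it lines up with do-notation (step names are invented): each <- short-circuits on the first value that doesn't match, much like ? in Rust:

  with {:ok, a} <- step_one(),
       {:ok, b} <- step_two(a) do
    {:ok, b}
  end
  # if step_one() returns {:error, reason}, the whole `with`
  # evaluates to {:error, reason}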

ohhhhhhhhh, with statements do sequential matching. Yep. I'm on board with the original post now. Thanks!

Elixir does not have a return keyword.

I agree with piping into case statements. It's awesome. You're a better person than me. I'll never stop doing it no matter how loudly people complain. :)

not to be too nitpicky, but the article doesn't say don't use higher order functions; the article says don't wrap higher order functions into something that hides their higher-orderness.

"Don’t hide higher-order functions

Higher-order functions are great, so try not to hide them away. If you’re working with collections, you should prefer to write functions that operate on a single entity rather than the collection itself."
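A minimal sketch of that advice (normalize/1 and the data are invented):

  # Prefer exposing the single-entity function...
  def normalize(user), do: Map.update!(user, :email, &String.downcase/1)

  # ...and letting callers compose it with Enum at the call site:
  Enum.map(users, &normalize/1)

  # Rather than hiding the higher-order call inside a wrapper:
  def normalize_all(users), do: Enum.map(users, &normalize/1)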

Excellent advice -- and not just for Elixir. I find myself applying this technique often in Python, for example. While the example is fairly simple and contrived, the consequences they note ring true.

Yeah, I misread the bit about higher order functions (or they edited the article).

Lots of good points. I gotta blame my Elixir tutorial (Dave Thomas) for some bad habits: over-reliance on pipes and pattern-matching in function args. He hates if-then in a function. He would take an if-then in a function and split both clauses out into their own functions. Seemed fishy to me. Trying too hard to be elegant.

I agree with Dave on the pattern matching in function heads, at least from your description. Garrett Smith’s blog post on tiny functions[0] has heavily influenced my Erlang.

[0]: http://www.gar1t.com/blog/solving-embarrassingly-obvious-pro...

The author uses the word "religion" in a jocular manner, which seems appropriate, but probably not in the way he intended. This particular style often seems to get applied as a dogmatic pattern. The underlying principles are good, but the author should have stopped halfway.

For example, the article has this:

  handle_db_create_msg(Msg, State) ->
      log_operation(db_create, Msg),
      handle_db_create_result(create_db(create_db_args(Msg)), Msg, State).

  create_db_args(Msg) ->
      [create_db_arg(Arg, Msg)
       || Arg <- [name, user, password, options]].
  create_db_arg(name, Msg) -> db_name(Msg);
  create_db_arg(user, Msg) -> db_user(Msg);
  create_db_arg(password, Msg) -> db_password(Msg);
  create_db_arg(options, Msg) -> db_create_options(Msg).

  create_db(Name, User, Password, Options) ->
      stax_mysql_controller:create_database(Name, User, Password, Options).
There is no shared logic between the different cases in create_db_arg() (creating a false sense of abstraction where there is none). create_db() has to take its arguments as a list* in order for the design to work, but the use of a list suggests homogeneity between the elements, which is not the case.

It would have been much better to stop at this point:

  handle_db_create_msg(Msg, State) ->
      log_operation(db_create, Msg),
      Name = db_name(Msg),
      User = db_user(Msg),
      Password = db_password(Msg),
      Options = db_create_options(Msg),
      Result = stax_mysql_controller:create_database(Name, User, Password, Options),
      handle_db_create_result(Result, Msg, State).
*: which the author also forgot when making the final listing at the end of the blog post.

There is a lot of contradictory advice in Erlang/Elixir land.

Garrett Smith likes microfunctions, while Saša Jurić avoids them.

Same with the "tagged with statement" pattern - Saša Jurić prefers it over Ecto.Multi in transactions, while this post discourages its use entirely.

I notice he says the result needs less testing; to me such a rewrite amounts to an audit of the code and thus definitely leaves one with more confidence. I'd just buy it more with stringent static types, but I've become a 'gradual typing purist' in a way (in this process, if you're not littering types everywhere, you're missing a lot of value).

When you have lots of gnarly if-elsif-else blocks elsewhere, where the logic conditions vary in number, it is nice to encapsulate that in a pattern-matched function.

For me, it doesn't mean "thou shalt not use if-then ever" just that if you have enough conditionals and logic needed, pattern matching is better suited for those cases (sorry for the pun).

Mind you, when some of the Elixir books were written, the `with` syntax didn't exist yet.

pattern-matching in arguments means that picking the function clause according to its signature is the actual decision process? If you have a function that doesn't contain a decision, it is easier to unit test, isn't it?

No. It's the same. If you have a function with a large if statement in it, you'd want to test all conditions within that if statement to make sure it was working, right?

You'd do the exact same thing if you were testing a series of pattern-matched functions. The tests would be unchanged either way.

I think it makes it easier to tick off the list of cases that need to be tested: one for each head.

You'd have broadly the same number of tests, but they are simpler because they don't need to handle the cases hidden inside the functions themselves. Maybe in the end it's the same, or a matter of taste, but it recently occurred to me that unit testing becomes "cleaner" that way.
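For example, with invented shapes, each head maps to one obvious test case:

  defmodule Shape do
    def area({:square, s}), do: s * s
    def area({:rect, w, h}), do: w * h
  end

  # one test per head:
  assert Shape.area({:square, 3}) == 9
  assert Shape.area({:rect, 2, 3}) == 6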

I came out of that Dave Thomas classes with a lot more good habits than bad. I don't follow everything he does, but the vast majority of it made a lot of sense to me and results in much cleaner code.

I started my Elixir journey with Dave Thomas, too, and I agree with your commentary. I think most of Dave Thomas' style is highly organized and readable.

One of my favorite Dave Thomas tidbits was writing private functions like this:

  defp _private_function(params) do
    # Do something
  end

Whenever I see a function that starts with an underscore, I KNOW it is a private function in the current module. I don't have to go looking for it or wonder where it came from. I know at a glance that it is a private function.

Readability and organization are so important in code and I felt that Dave Thomas helped me get off to a great start in this regard.

Good to hear all the opinions. I'm an Elixir newbie and I appreciate them.

I write elixir full-time at a fintech startup and I can confidently say that this blog is clearly written by someone with significant experience. With statements are key to nailing a balance of error handling and pattern matching, however for trivial functional flows (that don't require error handling), pipes are entirely fine.

The biggest challenges for elixir currently are the lack of established HTTP libs with dynamic testing and db ORMs. For instance, interacting with PSQL is fine using ecto, but using a document based DB or something like Neo4J is an absolute headache. Deployment is also pretty easy, but that's a whole different side of development.

I'm so glad to hear that last bit: seems like the deployment story in Elixir has really improved lately.

It wasn't long ago where the only posts in here would describe it as the biggest headache!

> Don’t pipe results into the following function

Mmmm. No. It makes sense in the context of the example, but this advice is so generic that it's actually bad.

Pipes are awesome. Please _do_ pipe results into the following function when it makes sense to.

I didn’t get the impression they were saying never to do it (maybe I missed that bit in the text).

I think they specifically said, when piping, don’t force functions to handle the error of the upstream function.

Because each error needs to be handled differently, the downstream function’s implementation becomes tied to the upstream function. Then the downstream function is only usable with said upstream function.
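A sketch of that difference, with hypothetical stand-ins for the service call and decoder:

```elixir
defmodule Pipeline do
  # Hypothetical stand-ins, just to make the example runnable.
  defp call_service(:bad), do: {:error, :unavailable}
  defp call_service(data), do: {:ok, "raw:#{data}"}

  # parse/1 only handles its own input; it never sees upstream errors...
  def parse(response), do: {:ok, String.upcase(response)}

  # ...because the caller matches on the result tuple once, here.
  def main(data) do
    case call_service(data) do
      {:ok, response} -> parse(response)
      {:error, reason} -> {:error, reason}
    end
  end
end
```

The alternative, giving parse/1 a `parse({:error, reason})` head that passes errors through, is exactly what ties it to call_service/1's failure shapes.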

The more I need to handle errors, the more I want to make every unexpected thing just raise an exception. 500s for everyone!

Exactly, piping a bunch of ! functions is the way.

A lot of a great advice! but I'd like to shave some yak on my lunch so I'll comment on what I disagree with :)

I sort of disagree with Access vs Matching vs Map; if I change my data structure I'm changing the contract I've established. I want things to break in that case, and I want them to break as fast as possible. This seems like great advice for a controller action, not so much for a gen_server or anywhere I'm NOT dealing with user input.

And... I completely disagree with "Raise exceptions if you receive invalid data." If you receive invalid data return an error saying that... please for the love of someone's god, do not raise an exception. You have no idea what the concerns of the caller are, so just return an error tuple.

> This allows us to crash the process (which is good) and removes the useless error handling logic from the function.

No, the exception returned will likely have no meaning to the client and will more than likely confuse them or you. You will still have to write error handling logic. You still have to format the error. You still have to log or report the error... Now you are doing it in multiple places because you're raising and likely still having to handle data validation issues. The best of both worlds, IMHO, is returning an `{:error, exception}` tuple.

Edit: Forgot the "NOT" in the end of the second paragraph.
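The `{:error, exception}` shape described above might look like this (hypothetical validation function): the caller still gets to decide whether to raise, format, or log it.

```elixir
defmodule Validator do
  # Return the exception struct instead of raising it;
  # a caller that does want to crash can still `raise err`.
  def parse_age(n) when is_integer(n) and n >= 0, do: {:ok, n}

  def parse_age(other) do
    {:error, %ArgumentError{message: "invalid age: #{inspect(other)}"}}
  end
end
```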

> If you receive invalid data return an error saying that... please for the love of someone's god, do not raise an exception. You have no idea what the concerns of the caller are, so just return an error tuple.

> No, the exception returned will likely have no meaning to the client and will more than likely confuse them or you. You will still have to write error handling logic. You still have to format the error. You still have to log or report the error... Now you are doing it in multiple places because you're raising and likely still having to handle data validation issues.

If it's invalid in the sense of indicating a bug in the caller (e.g. by violating your function's preconditions), raising an exception is the right thing to do. You don't understand the problem, and probably the only way it can be fixed is by the developer fixing it. Precisely because you have no idea what the concerns of the caller are, each caller needs to do its own validation and error handling logic.

If it's a case of "invalid" input but valid code (e.g. a function that's designed for parsing user input), then that's the case where doing validation yourself and returning an error is better.

(This kind of distinction is where it's really helpful to have a type system, so you can use separate types for raw input and parsed data that's expected to be valid).

So much this. “Let it crash” is cargo culted too hard IMO. If a caller (being another library or end user) can be given information to correct an error, give it!

Let it crash works well for a phone system where it doesn’t make sense to call both parties back if the thing failed.

Uh I think the rule of thumb is, if you're writing a library do return an error tuple because it gives the caller the option, and if you're a mensch also provide a bang function. If you are consuming a library in most cases let it crash.
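The conventional shape of that rule of thumb, sketched with a hypothetical lookup:

```elixir
defmodule Store do
  @data %{a: 1}

  # The tuple-returning version gives the caller the option...
  def fetch(key) do
    case Map.fetch(@data, key) do
      {:ok, v} -> {:ok, v}
      :error -> {:error, :not_found}
    end
  end

  # ...and the bang version is for callers happy to let it crash.
  def fetch!(key) do
    case fetch(key) do
      {:ok, v} -> v
      {:error, reason} -> raise "fetch failed: #{inspect(reason)}"
    end
  end
end
```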

let it crash is great for unrecoverable errors as it gives you clean state as if the thing that caused the error never happened but it's terrible for cases where you want to actually capture the error and react to it. give me a structured log event or an error code every time over a stack trace

You're talking about errors in the context of a web request or RPC. In those situations then sure, return an `{:error, exception}` (that's the pattern I advocate for as well). But I've found that designing systems where crashing is an acceptable outcome is really beneficial and increases the system's overall resilience.

I am in love with Elixir but am still on like a 1 out of 10 on the mastery scale. Love seeing posts like this!

(I bit off more than I can chew by trying to write a full-featured API for an embedded GPS device that speaks a proprietary binary protocol - it's been a slow project)

Agree with almost everything, but one point doesn't attempt to reason about the pattern. Why shouldn't I pipe into a case expression? It's visually pleasing, more concise, and removes an empty line.

Easier to talk about where I disagree, but overall great read!

Yeah, it wasn’t obvious why one should avoid |> case.

I also think it looks pleasing, plus it’s easy to read and it avoids allocating a new variable compared to the alternative example given.

I didn't make a very strong case for it. Overall, my experience has been that calling the side-effecting function in the case statement directly tends to be easier to maintain and easier to read. But I think piping into case can work well in certain situations. It's not one of the points that I feel that strongly about.

I disagree about not using the module-specific `get` functions. They should be used when you expect what you’re operating on to be a specific type. Sometimes I want to write a function that only operates on maps, and Map.get will throw an error if you pass anything else. Granted, I write specs for all my functions, and I come from a static language background, so this is just my perspective. Other than this quibble I think this is solid advice.

Interestingly enough, my advice to not use `Map.get` is the most contentious point in the entire post. I didn't suspect that would be the case.

I'm certainly not against using those functions, and the additional checking can be of use in certain circumstances. The use case I had in mind was specifically how people will use `Keyword.get` to grab configuration values or options. This means that if you load configuration from a json file or external service, and it comes back as a map, you now need to convert to a keyword list just to initialize a library. It's a small thing, but in those situations I would rather allow for more flexibility in the interface.

That said, I think this advice is particularly useful for people building libraries more so than people working on applications. I may try to add more nuance to this point in the post.

> The use case I had in mind was specifically how people will use `Keyword.get` to grab configuration values or options.

It seems like this advice is very dependent. I think you should decide whether a function accepts maps or keyword lists or both, but I do not think that all functions which operate on key-value structures should default to accepting both. They have very different performance characteristics, and keys are not unique in keyword lists.

I agree on the performance characteristics. But if you're using `get`, then it doesn't matter that keyword lists have non-unique keys. You just get the first one.
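For example:

```elixir
# Both Keyword.get/2 and the Access syntax return the first match:
opts = [level: :info, level: :debug]
:info = Keyword.get(opts, :level)
:info = opts[:level]
```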

Maybe it does, maybe it doesn’t. Maybe somebody gave you a keyword list that they thought had unique keys and it didn’t. The point is, the two structures are not semantically equivalent, and I think it’s a mistake to conflate them in general. Just because you can use access syntax on both doesn’t mean you should write a polymorphic function that does so. Sometimes you should! But not most of the time.

No, it literally returns you the first one. If you have a keyword list and use get, you're not going to get the benefit of duplicate keys.

No, you asserted it doesn't matter because it still works and returns the first value. I was saying it might. The point stands. Different data structures, different modules, different semantics. Can and should is an important distinction here.

> That said, I think this advice is particularly useful for people building libraries more so then people working on applications

This is a good point that I hadn't really thought about previously. I agree with your thoughts much more when I consider the different needs between libraries and applications.

I agree with almost all of this except for avoiding else and annotation in with statements. Here’s my reasoning:

1. The argument the author makes here assumes that else clauses can or should only be used for error handling. There are instances where you might want to drop out of a with statement without returning an error, however. Some examples might include connection Plugs that don't need to perform some behavior due to certain assigns already being present on the Connection struct, or configurable / optional behavior encoded in an options list. In these instances, pattern matching on a successful fallback clause actually reduces complexity, as it avoids the need to position that branching logic somewhere outside of the main with statement that it is actually concerned with.

2. I find error handling through with / else and annotated tuples to actually lead to very expressive, easy to read code. Having built and maintained a Phoenix application in production since 2015, this is actually a very common pattern I and other members of my team use so I’m curious as to why the author considers it something you do under “no circumstances”.

We had a good discussion on this article yesterday in my local Elixir group.

General consensus is that there's a lot of things pointed out here that are good to think about and consider, but don't treat them all as gospel.

Some situations are better handled by other approaches, but understanding the goals of why the author has pointed these out will be beneficial for everyone.

> General consensus is that there's a lot of things pointed out here that are good to think about and consider, but don't treat them all as gospel.

Definitely not gospel. These are patterns that I tend to use and believe have helped increase the maintainability of the various projects I've worked on. But I'm also guilty of going against these points when it seems appropriate.

A few more thoughts, my 2c, based on my experience writing a lot of code in Elixir:

1 - avoid if else {} blocks, use `cond do .. end` blocks instead, they're more readable

2 - avoid importing modules, but rather use them directly wherever possible, this will help when refactoring code or to find all occurrences of a particular Module.method()

3 - when specifying keyword list options, explicitly supply [ ] in the arguments, example: Abc.method(a: 2, b: 3) may be clearly written as Abc.method([a: 2, b: 3]) so we always know it uses a keyword list and not map

4 - Use a file `.iex.exs` to specify all your aliases and macros you may need in your Elixir shell. Helps a lot while you're playgrounding in the IEx shell. In my case, I have aliased all my Context modules and Model modules

5 - Pattern match everywhere. This is one of the powers of this language. Make your methods follow the {:ok, data} or {:error, message} pattern and it's already very maintainable.

I've been using Elixir for over 5 years and didn't know about .iex.exs. Nice!

Here's a handy thing you can do with .iex.exs:

Have you ever tried to debug something in iex, but database calls and other log messages keep stomping all over your prompt? You can stop them temporarily by calling the function

  Logger.remove_backend(:console)

You can start the log messages again by typing

  Logger.add_backend(:console)

That's a lot of typing though. Here's where .iex.exs comes in handy. You can include the following code in your .iex.exs file:

  defmodule Clog do
    def stop do
      Logger.remove_backend(:console)
    end

    def start do
      Logger.add_backend(:console)
    end
  end

and then from within your iex session, type

  Clog.stop

to stop the log message flow and

  Clog.start

to start it up again.

You can call your module whatever you'd like. I just called mine "Clog" because the log messages were always clogging up my console. I hope this helps.

this is even better:

    if function_exported?(Mix, :__info__, 1) and Mix.env() == :dev do
      Logger.configure_backend(:console, device: Process.group_leader())
    end
It makes Logger work like IO, where if it spits something out it doesn't disrupt the prompt. Don't run it in prod, though.

One of the best parts of listening to Elixir Outlaws, the podcast that Chris co-hosts, is disagreeing with his wacky and strongly-held opinions on things. Personally, I don't agree with most of what he's calling out here, but it's still a great read to get you thinking.

(non-Elixir dev here)

Does Elixir have anything akin to Haskell's "do notation" for result types? The provided best practice code feels verbose, and that surprises me given that Elixir looked clean & modern.

For example

    # Do...
    def main do
      with {:ok, response} <- call_service(data),
           {:ok, decoded}  <- parse_response(response) do
        decoded
      end
    end
would read in C# (.NET 3.5) as the following pseudocode, which personally looks much cleaner, avoiding `{:ok, ...}`:

    Result<T> Main(object data) {
        return from response in call_service(data)
               from decoded in parse_response(response)
               select decoded;
    }

{:ok, foo} is just the largely community adopted soft error pattern in elixir

call_service could just return response and remove the verboseness.

so it would be:

response <- call_service(data)

I don't know C#, but would that code explode if there was an error, instead of (again, in the soft error pattern) likely returning {:error, foo}?

Regardless, my point is that the {:ok, bar} stuff isn't a language-required aspect; it's a community pattern, if that makes it any better for you.

> {:ok, foo} is just the largely community adopted soft error pattern in elixir

It is not a community pattern, because it's the way OTP (Erlang's "standard library") functions work, e.g. `maps:find/2` returns `{ok, Value} | error`.

And Elixir inherited that rather directly and built its standard library the same way. `Map.fetch` similarly returns `{:ok, value()} | :error`.

It's not hard-coded into either language, but it's absolutely not "community adopted" or "a community pattern" either.

Clearly not every library uses it, and not everything in the standard library adheres to it either, just as you said with the Erlang tie-ins.

What would you call something then that clearly has optional adoption and isn't consistently used across the standard library?

In Elixir’s standard library at least, it is pretty consistent. Functions that can fail return tagged tuples, and functions that will throw when they fail end in !.

Map.pop is one counterexample. The failure mode is just {nil/default, map}.

Unless you use Map.pop!, which raises an error. And because pop! raises an error, I would then expect Map.pop to return {:error, nil} or {:error, map} or something instead of {nil/default, map}.

I won't try to back up whether there are more of these, but it was the one I had in mind while writing.

Well yes, because not finding a key in a map is not an error typically. Map.get is the same way, as is Access. It returns nil. If you want it to be an error you can use the version that raises.

That's not true across many of the languages I use; dict.pop raises a KeyError in Python, for instance.

By having a version that raises, the standard library is basically saying that not having a key is the error state.

> That's not true across many of the languages I use; dict.pop raises a KeyError in Python, for instance.

Yeah, sorry. I should have been more clear. This behavior is common in Lisps, which is what I’m used to. It’s less common in mainstream languages. With Elixir it may be Clojure-inspired.

> By having a version that raises the standard library is basically saying not having a key is the error state.

Not necessarily, it’s saying you can choose whether it should be an error or not. Notably other functions that error don’t give you the option of specifying a default. That’s the difference here. Nil is returned because it’s passed in, it’s the default argument for the “default value” parameter.
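Concretely:

```elixir
# The third argument is the "default value" parameter; nil is
# just its default, not a baked-in error signal:
{nil, %{a: 1}} = Map.pop(%{a: 1}, :b)
{:none, %{a: 1}} = Map.pop(%{a: 1}, :b, :none)
# Map.pop!(%{a: 1}, :b) raises KeyError instead.
```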

Your example is valid elixir code.

    def main do
      with {:ok, response} <- call_service(data),
           {:ok, decoded}  <- parse_response(response) do
        decoded
      end
    end

It may be a little semantically different but it's pretty similar.

The part about 'with' and 'else' is interesting, but slightly contradictory with the part about handling errors in the "higher" function.

That being said, my main issue with nested 'with' (or nested 'case', for that matter) is that nothing helps you make sure you handled all the possible ways for any of the nested functions to fail.

And someone, someday, is going to add another way for one of the functions to fail, and you will not have written a handler for that because you're not clairvoyant. And you won't notice until it crashes in prod, and your customer complains.

Good advice overall, otherwise !

I feel like the number of "I agree with everything except..." posts on this thread is an indication of what a great article this is.

Practical, real world, and takes enough of a stand to wake up the pedants.

For the example that the OP listed in regards to pipes, I prefer to leverage something like https://github.com/CrowdHailer/OK which allows you to retain the expressiveness you get with pipes but cleanly handles result tuples:

  require OK

  OK.for do
    user <- fetch_user(1)             # `<-` operator means func returns {:ok, user}
    cart <- fetch_cart(1)             # `<-` again, {:ok, cart}
    order = checkout(cart, user)      # `=` allows pattern matching on non-tagged funcs
    saved_order <- save_order(order)
    saved_order                       # Value will be wrapped if not already a result tuple
  end

Nice. Solid advice from Chris.

I’d wondered about the pipes construct in Elixir for exactly that reason. Sure, it looks nice and easy, but based on my experience with Erlang it seemed unlikely to be useful unless you are simply planning on crashing for any errors (which isn’t really the Erlang way).

Pipes are still plenty useful for chaining simple functions that aren’t going to cause trouble (because you validated the inputs already, or whatever).

You can use changesets or a similar pattern to avoid this.

Like many others, I disagree with more of these points than I agree with. I DO agree that you shouldn't wrap Enum functions inside of other functions, but I prefer using

  Map.get(foo, :bar, :some_default_value)

because I dislike dealing with meaningless nil values, since they tend to be the source of bugs.

I also disagree with not using else in a with statement. That advice seems a lot like, "Next, pull out your pistol and shoot yourself in the foot," to me.

Languages have idioms for a reason and a lot of these suggestions go against idiomatic Elixir, which I'd recommend against.

This will work with both maps and keywords:

  iex(1)> foo = %{a: 1, b: "test"}
  %{a: 1, b: "test"}
  iex(2)> foo[:bar] || :some_default_value
  :some_default_value
  iex(3)> foo = [a: 1, b: "test"]
  [a: 1, b: "test"]
  iex(4)> foo[:bar] || :some_default_value
  :some_default_value

This looks like some Perl-ism/Ruby-ism. I personally dislike truthy/falsy values, as they do more harm than good, and they make it harder for dialyzer to infer boolean types.

Also Map.get/3 and Keyword.get/3 are probably faster.

In cases where something really should be in the map, I like to use &Map.fetch!/2.

Agree with many things, big disagree with not using 'else' with 'with' and actually for the same reason you stated not to use it. I use 'with' when I have a pipeline, but the error conditions in the pipeline matter. In fact the example you gave in the article is exactly one of the cases where I think 'with' and 'else' are useful because you can clearly show the dependency of each action in the pipeline and clear failure cases to address them.

I think there is a limit to how many things you should handle in a with statement, but the same goes for regular pipes as well.

My general feeling is that if the errors matter, `with` probably isn't the right construct. In those situations I prefer to use case. That's been helpful in my experience.

If the errors don't matter and there's no real chance of returning an error, why use &with/1 at all? &with/1 has the quirky behavior that the return value of anything that doesn't pattern match gets returned. So if &call_service/1 or &parse_response/1 doesn't return an ok tuple, you're increasing your debugging surface by having to track down where the unexpected result came from. Good luck tracking down that rogue nil!

Instead of this...

    with {:ok, response} <- call_service(data),
         {:ok, decoded}  <- parse_response(response) do
      decoded
    end
Maybe just do this...

    {:ok, response} = call_service(data)
    {:ok, decoded} = parse_response(response)
Or even this...

    data
    |> call_service!()   # don't bother returning ok tuples
    |> parse_response!() # just use bang methods!

Not related to Elixir, but I disagree with the last point: you should not have test assertions inside loops at all, because it makes a test less readable and the diagnosis harder when a test fails.

You shouldn’t have to unpack complexity just to understand where a test failed or whether it failed because of bad test code or bad tested code.

Once you have so much complexity that reading and mentally executing the test takes time, you should probably be writing tests of your unit tests.

In Elixir, would it also mean that all assertions after the first failure don't get run? Because that's how it works in most other languages. The better (but maybe slower) alternative is parameterization.
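In ExUnit, parameterization usually means generating one named test per case at compile time instead of asserting in a loop, so every case runs even after one fails. A sketch (hypothetical `DoubleTest`):

```elixir
ExUnit.start(autorun: false)

defmodule DoubleTest do
  use ExUnit.Case

  # Each tuple becomes its own named test, so one failure
  # doesn't stop the remaining cases from running.
  for {input, expected} <- [{1, 2}, {2, 4}, {3, 6}] do
    @input input
    @expected expected
    test "doubles #{input}" do
      assert @input * 2 == @expected
    end
  end
end
```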

I have similar feelings to others here, though I don't necessarily disagree with most points; my problem is more that the author does not really explain enough why his rules hold. Even if some of the problems he states are understandable, I would expect him to also give examples of why the alternatives are bad...

I think a lot of this is subjective and context dependent so the definitive “this is good” and “this is bad” language was a bit of a turnoff.

BUT, I did like seeing someone’s point of view with thoughtful explanations and examples, kind of like what you get out of a good discussion at an elixir meetup or something :D I enjoyed reading it.

Calling it "good and bad elixir" was probably overly charged. Its a little late to change it now, but something I'll consider in the future.

fwiw I enjoyed the article. I agreed with some of the points and disagreed with others, but overall I liked the perspectives presented...

Can you elaborate more on the "Don't pipe into case statements" point though? I'm not entirely sure what you're saying with this one, as there isn't any explanation of why one might be better or worse than the other, beyond aesthetic/personal preference. Needing to access the result from the previous pipe in the case, maybe?

I don't see how it's too late, unless HN commenters from today will be the only audience. You could also add a note.

Don't feel too bad about it: it would probably have gotten fewer eyes if you had named it more mildly.

I never thought about piping into case ... I guess I'll try it next time ;)

If you really wanna piss OP off, pipe into &if/1 :)

    iex(2)> :something |> if do true else false end
    true
    iex(3)> nil |> if do true else false end
    false

This captures exactly how I feel about with. Really good article.
