
Reasons to use Phoenix instead of Rails - elvio
https://medium.com/@elviovicosa/5-reasons-you-should-use-phoenix-instead-of-rails-in-your-next-project-504b4d83c48e
======
bwilliams
I've used Rails for a long time, and Phoenix since early versions (0.6 or
something like that?).

Phoenix is amazing, but productivity-wise I haven't seen it get near the
levels you can reach with Rails. Other developers I've spoken to say the same
thing. This is even more true with the recent release of Phoenix 1.3
encouraging the use of contexts. I think it's a good pattern to extract to
once you have more knowledge about your application, but trying to think
about it up-front has slowed development down and has been hit or miss on
whether or not the context was "correct".
[https://hexdocs.pm/phoenix/contexts.html](https://hexdocs.pm/phoenix/contexts.html)

I think Rails and Phoenix have a heavily overlapping place in web development
but my personal tl;dr is that Rails is great for getting shit done and Phoenix
is great for scaling, especially when it comes to websockets.

~~~
bradgessler
Yes! Contexts are the equivalent of building a greenfield rails app with a
bunch of properly namespaced and isolated Rails engines. You end up with some
really awkward names and boilerplate code that just doesn’t feel right. You
can get that sense while reading the Phoenix context documentation.

The Phoenix story should include an area where the app can evolve, and as it
matures and becomes more well understood, pieces of it could be moved into
contexts.

~~~
bwilliams
100% agreed. I see what the Phoenix team was trying to accomplish, but I don't
think it will pan out in projects the way they expect. It reminds me a lot of
fat model, skinny controller, back when that was popular.

I think the underlying issue I have with contexts is that they force us to
predict the future. We have always had to predict and plan, but I think
contexts push it a bit further than we can reliably predict or plan. This is
especially true for new apps, which tend to change rapidly.

Like I said before, I think it's a great pattern to extract existing code
into, but it feels too heavy to start with.

------
joshmn
I've used Django and Phoenix and Laravel. And they all have one thing in
common: They're not the same at all.

I don't know why people compare MVC-framework-X-that's-inspired-by-
Rails'-foundations-and-fundamentals to Rails. Rails isn't great because of
MVC. Rails is great because of the vast and mature tooling and ecosystem that
exists in Ruby (Bundler, Rake, Thor, RubyGems), and the effortless ability to
metaprogram (which is why Rails is what it is).

As for RubyGems, there are literally hundreds of thousands of people who are
smarter than me who have developed libraries that solve the majority of things
that I will or may need to implement. I'm not smarter than them and I'm okay
with that. I want to focus on domain-specific stuff. Not anything else.

Those other frameworks... They're just not the same.

It's like comparing Ghost to Wordpress. Sure, they both allow you to blog, but
Ghost doesn't have the Wordpress ecosystem of themes and plugins (blah blah
blah security, I know).

I don't want to spend my time reinventing the wheel and neither do my clients
or users. I'm not trying to earn style points with my stack and neither should
anyone else. I'm not trying to impress HN or colleagues or friends. I'm trying
to impress my clients and my users.

Rails will, for at least the next 5 years, allow me and anyone else who's
familiar with OOP to greatly outpace any other web framework, given it's the
right tool for the job[0].

There are other comments about scaling below and I'd love to comment on each
of them but I won't. Rails scales just fine. I proudly serve an Alexa Top 5000
website that peaks at 500k rpm a few times a month
(Nginx/Puma/Postgres/Redis). It all sits on a $30 VPS (2 cores @ 2GHz/8GB
RAM). Sub-50ms response times and as stable as can be. It's not the most
trivial of applications, either (pushing 50k writes/min and 1000 open pg
connections). Sure, you'll spend a day tweaking your setup, but the cost of a
day's time is low in comparison to switching to something like Phoenix.

[0] No, you're not going to mine crypto with Ruby.

~~~
jondubois
Phoenix is way overhyped, particularly on HN. This is the same as what
happened with Golang and then with Rust.

A few years ago, I remember that all these articles kept popping up about
people switching from Node.js to Go.

Now it's funny because fairly recently I read a well thought out article on HN
encouraging developers to dump Go for Node.js... These days you hardly read
anything about Go at all. Not that there is anything wrong with it, but
reality caught up with the hype and it's no longer pretending to be a silver
bullet.

Right now Phoenix is pretending to be a silver bullet, but it's really not.

~~~
sergiotapia
It's underhyped, because the concepts Elixir brings to the table are most
likely alien to most Rails developers (myself included at the time!).

Most people think it's about a faster ActiveRecord; it's not. It's so much
more!

~~~
regulation_d
I really like Ecto for several reasons, but mostly because it makes every
database call super obvious. If you Repo.something(), you're probably hitting
the database.

A couple of other reasons I like it: 1. It makes shooting yourself in the
foot with N+1 queries really difficult. 2. It's not married to Phoenix. I
haven't tried this in a while and maybe this isn't representative of the
current state of things, but the last time I tried to use AR in a non-Rails
project, I ended up switching to the Sequel gem.

For a nice, if slightly biased, comparison of AR and Ecto, I recommend Darin
Wilson's talk "Thinking in Ecto"[0].

[0]:
[https://www.youtube.com/watch?v=YQxopjai0CU](https://www.youtube.com/watch?v=YQxopjai0CU)

------
mrdoops
I'm in love with the simple explicit composability of everything in the Elixir
ecosystem. In Phoenix everything is a pipeline from the connection leading
down through the routes, controller, etc. Phoenix is just a plug in a Mix
application and it doesn't impose itself in everything I do. I'm not a Phoenix
developer, I'm an Elixir developer who happens to be using Phoenix to manage
web traffic.

I think this is where Rails and Phoenix diverge in philosophy: Phoenix
prioritizes explicitness and makes minimal assumptions about how and what
you're going to do with the tool, whereas Rails is famously opinionated,
providing a 'Rails way' of doing most things.

What Rails does, it is very good at, but when you move outside of its
expertise, you may find yourself in hot water fast. Phoenix can be what you
need it to be, and when you need to do something outside of Phoenix's domain,
everything is composable, so pick and choose what you need.

------
lettergram
So, I've used Rails (Ruby), Django (Python), Flask (Python), Revel (Go),
Spring (Java), Node.js (JavaScript), and have even used C, PHP, and Go to
roll my own website from scratch[1].

The thing is, I've always scaled my website(s) to thousands or tens of
thousands of requests an hour, with one website even getting close to a
million an hour... all with no problem. In the case of Rails (as is discussed
in the article), my bottleneck (from a usability perspective) is _always_
bandwidth. Most applications (that I build, anyway) require a hefty amount of
data. When you're waiting 200ms for the database, an additional 20ms for
rendering is not noticeable. Scaling Rails (or any modern web app) is as easy
as just launching another instance and load balancing.

Given that, I really don't see any advantage to Phoenix. Plus, what I
personally love about Rails is all the gems, which are super powerful. Most
other frameworks simply don't have Rails' simple logic combined with easily
extendable gems.

[1] [http://austingwalters.com/building-a-web-server-in-go-
handli...](http://austingwalters.com/building-a-web-server-in-go-handling-
page-requests/)

~~~
phamilton
I mean... 1 million requests an hour isn't that many.

That's around 300 requests per second. Assuming 1000ms upper bound on
requests, you need 300 workers to handle that load. Assuming 150MB per worker,
that's 45 GB of memory required to handle the load. So like... 5 m4.xlarge
instances on EC2 (to give redundancy and allow loss of 2 hosts). That's
$700/month.

That's not that much. We've got a Rails app that pushes 5000 qps. And to be
fair, we just dial up the number of instances and it handles it fine. It runs
on over 100 instances. It costs us around $10k/month. Not the end of the
world, cost wise, but we have multiple Go services that handle similar levels
of traffic and run on a dozen instances. Additionally, deploys take a long
time (rolling restarts plus health checks on 100 machines takes time).

Moving to Go (or Elixir) allows us to handle far more requests per unit of
hardware. While latency would indeed improve, it's not the primary motivator
for us moving away from Rails.

I haven't even mentioned the websocket story on Rails. That's a whole new can
of worms.

~~~
WillPostForFood
*1 million requests an hour isn't that many.*

What percentage of websites serve 1 mil or more requests per hour, .01%?
.001%? Meaning Rails is going to be performant enough for 99.9+% of projects,
and for those projects it would have been a mistake to trade dev time for
performance you’ll never need.

~~~
phamilton
1M request per hour at peak?

A SPA backed by rails is probably going to make at least 10 requests on page
load. So in terms of actual traffic, 100k page loads during a peak hour.
Assume a roughly linear peak increase/decrease and we've got roughly 1M page
loads per day. 30M page loads per month.

How many websites have 30M (non-unique) page loads per month? After some rough
scouting on Alexa ranks, I'd put the over-under at probably 10k US sites, and
50k worldwide. Assuming Alexa has 50M sites, then 0.1% of publicly facing
sites serve that much traffic.

Rails is used often on private, internal sites and tooling. Those sites
wouldn't come up. That would definitely skew the 0.1% number.

Not making a huge argument here, I just started down that analysis path out of
curiosity and figured I'd share it.

~~~
WillPostForFood
Great analysis! According to Netcraft, there are over 600 million websites,
so assuming Alexa ignores the 550 million with near-zero traffic, you can get
to the .01% pretty easily. Another area I'd tweak is the skew towards SPAs:
most sites aren't SPAs, especially as you slide down the traffic rankings.

~~~
phamilton
Yeah, the skew towards SPA is more Rails focused. I haven't worked on a non
SPA Rails app in over 5 years. P(SPA | Rails) is higher than P(SPA).

------
aczerepinski
One of my favorite perks of Phoenix vs Rails is no wasted time trying to
figure out where the heck a method/function came from. Is it from the parent
class? One of the included modules? Or perhaps not defined anywhere at all,
thanks to method_missing magic? Going from that to explicit imports is
refreshing.

~~~
thibaut_barrere
(note: I use both Elixir & Ruby, for different reasons)

Most of the time you can rely on object.method(:blank?).source_location to
quickly determine where a specific method is defined.
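For anyone unfamiliar with the trick: Method#source_location is plain Ruby (no Rails required) and returns the file and line where a method was defined, or nil for methods implemented in C. A minimal sketch (Greeter is a made-up example class):

```ruby
# Method#source_location returns [file, line] for methods defined in Ruby
# source, and nil for methods implemented in C.
class Greeter
  def hello
    "hi"
  end
end

g = Greeter.new
g.method(:hello).source_location     # => [path to this file, line of `def hello`]
g.method(:object_id).source_location # => nil (object_id is implemented in C)
```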

More useful debugging tips can be found here:

[https://www.schneems.com/2016/01/25/ruby-debugging-magic-
che...](https://www.schneems.com/2016/01/25/ruby-debugging-magic-cheat-
sheet.html)

~~~
nitrogen
If you use Pry with Rails, you can also call binding.pry before the method in
question, then type show-method [methodname] in the Pry console.

------
cutler
One reason not to use Elixir is the absence of vectors/arrays. Yes, you can
import them from Erlang, but they ain't pretty. Elixirists try to pretend that
lists and tuples are all you need, but Elixir's lists are Lispy linked lists,
not the Python variety. It's one of those small-print things you only discover
after spending time with Elixir, but it can be a deal-breaker. Ask on the
Elixir lists and you'll get some very defensive responses which basically add
up to vectors/arrays being difficult to optimize in a dynamic functional
language. String processing is also not quite as straightforward as in Ruby
and Python due to how Erlang/Elixir uses binary representation.

~~~
jondubois
I never bought into the pure functional programming hype. Most programs are
made up of many functions and as the program gets more complex, the code paths
keep getting longer... If you force everything to always be passed by value
and returned by value (never by reference) it's clear that the costs of
constantly cloning all these objects would quickly add up.

The problem with pure functional programming is that it prevents the developer
from writing well optimized code.

State change side effects might be dangerous, but they're also a really good
way to boost performance and sometimes they're totally worth it.

~~~
ramchip
> If you force everything to always be passed by value and returned by value
> (never by reference) it's clear that the costs of constantly cloning all
> these objects would quickly add up.

That’s not what happens. See: [http://erlang.org/pipermail/erlang-
questions/2013-March/0727...](http://erlang.org/pipermail/erlang-
questions/2013-March/072760.html)

“Pass by value does NOT imply copying and never has; copying is only required
for mutable data structures, and Erlang hasn't any.”
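The same distinction is visible even in Ruby: passing an argument never copies the object; the callee sees the very same reference. Copying only matters when data can be mutated underneath you, which is exactly what Erlang rules out by making everything immutable. A small Ruby sketch (with freeze standing in for Erlang-style immutability):

```ruby
# Argument passing shares the object; no copy is made.
def identity(x)
  x
end

list = [1, 2, 3].freeze  # freeze approximates Erlang's always-immutable data

passed = identity(list)
passed.equal?(list)      # => true: same object, nothing was cloned

# Because the data is immutable, sharing it freely is safe:
# `list << 4` would raise FrozenError instead of mutating shared state.
```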

------
pmontra
After many projects with Rails and one with Phoenix, these are my remarks on
the list in the post:

* The directory structure. Rails has a simpler one. Phoenix has some weirdness (why are the migrations in priv/db? Are they private, while all the other modules are not?)

* The naming conventions. About the same.

* The database migrations. About the same, but with Phoenix we have to duplicate the schema definition in the model, which is not DRY at all and encourages bugs. Give me either the ActiveRecord way (the truth lives in the db) or the Django way (the truth lives in the model).

* The use of dependencies. About the same.

* The ActiveRecord features. AR is much easier to use than Ecto, which is more general but often unnecessarily so. Proof: there are modules on top of Ecto to make it look like AR [1] [2]. Personally I won't use plain Ecto in an Elixir project of my own.

* The ERb templates. About the same.

* The form helpers. I really don't know: my Phoenix project was a backend to a SPA, we generated JSON plus some email templates.

* The built-in support for testing. About the same.

Advantages of Phoenix:

* no need for Sidekiq, just spawn processes to send emails and the like

* create some GenServers to run long-running processes side by side with the main web application

* the websocket server is an example of the previous point and it performs better than the one in Rails.

* Elixir's pattern matching is so good to use compared to any language without it (Ruby, Python, etc.)

Advantages of Rails:

* ActiveRecord is so much easier to use that it translates into visible productivity gains (of course it's also 10 years of Rails vs 4 months of Phoenix). What it means for the long-term maintainability of the application is up to you to decide. In my experience the impact is zero, because none of my projects was meant to scale to millions of users or to a complex architecture, and none did. Your project might be different.

* The object-oriented notation is more compact than the functional one: object.method.method.method vs value |> function |> function |> function, maybe with some Module.function thrown in to make it even more verbose.
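For a concrete feel of that comparison, here's the same transformation written both ways (the Ruby chain actually runs; the Elixir pipeline equivalent is shown as comments):

```ruby
# Ruby: methods chain directly off the receiver.
"  Hello, Phoenix  ".strip.downcase.split(",").first # => "hello"

# Elixir: the value is threaded through module functions with |>, which is
# a little more verbose but makes each step's module explicit:
#
#   "  Hello, Phoenix  "
#   |> String.trim()
#   |> String.downcase()
#   |> String.split(",")
#   |> List.first()
```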

[1]
[https://github.com/sheharyarn/ecto_rut](https://github.com/sheharyarn/ecto_rut)

[2]
[https://github.com/MishaConway/ecto_shortcuts](https://github.com/MishaConway/ecto_shortcuts)

~~~
kungfooguru
'priv' is the directory in an Erlang/OTP application for files that are part
of the application but are not source code or compiled BEAM files.

------
ben_jones
I've worked with Django, Flask, Rails, Express, and Go. It gets to the point
where it's not "which framework is the best" but "which framework is the best
_for our company_". Having interviewed at a ton of start-ups recently, it's
been overwhelmingly Node, because it's super easy to hire for and onboard,
with the benefits of dynamic programming and Node's ecosystem. If your company
has a great hiring pipeline or is super attractive to candidates, you could do
something more niche like Erlang, Scala, Elixir, Go, etc., to more closely
match technical needs.

Every time I see "x is better than y" I just chuckle and slowly step away.

------
srikz
For someone who hasn't learnt Rails or any other backend framework and has
only briefly dabbled with NodeJS, is it better to learn Rails first, or
should I learn Elixir / Phoenix directly? Thanks.

~~~
gm-conspiracy
Probably Rails, since most web frameworks are inspired by features from Rails.

Then you can build the missing "parts" in Elixir for Phoenix.

~~~
nitrogen
Downvoting this makes no sense. Rails is an excellent backend framework to
learn because it's been around long enough for people to know what it does
well and what it doesn't, and chances are you won't be big enough to hit the
Rails scaling ceiling (it's way higher than most people think).

------
csdreamer7
Most of the comments seem to answer what I was going to ask after reading the
post: "what about the gems?"

My question now would be: "does Phoenix have something like the Devise and
CarrierWave gems?" Please link if so.

~~~
jherdman
Ueberauth:
[https://hex.pm/packages/ueberauth](https://hex.pm/packages/ueberauth)

Arc: [https://hex.pm/packages/arc](https://hex.pm/packages/arc)

~~~
csdreamer7
Thank you.

------
qaq
Strange list, to be honest. The fundamental reasons to choose Phoenix would
be: 1) unparalleled capabilities for real-time features, and 2) the BEAM VM
and all its capabilities, available through Elixir's more conventional syntax
compared to Erlang's.

~~~
sriram_malhar
Real time? Not really. There's no time bound of any sort offered by any part
of the Erlang ecosystem. What they mean is "quick enough", for some definition
of quick and enough.

~~~
ramchip
There’s various types of real-time systems:
[https://en.m.wikipedia.org/wiki/Real-
time_computing#Criteria...](https://en.m.wikipedia.org/wiki/Real-
time_computing#Criteria_for_real-time_computing)

You’re talking about hard real-time. Erlang, as it says on the official site,
targets soft real-time.

~~~
sriram_malhar
I know Erlang says that and I have always been a bit mystified by it.

Definition of soft real time: "the usefulness of a result degrades after its
deadline, thereby degrading the system's quality of service".

Well, duh. That's pretty useless. Any system in production is soft real-time
by that definition.

There's really nothing special about Erlang that makes it amenable to "pretty
quick" responses. It is not as if admission and rate control are baked into
BEAM. If you don't pay attention to your messaging architecture, head-of-line
blocking will kill you.

I know that they claim that their per-process GC design helps with shorter
pauses, but there's no real evidence to back the imputation that other GC
designs have really held back the industry; consider the large number of
sites that have been implemented in Java/Python/Ruby. I have put
Java/Scala/Go/Erlang systems in production, and rarely have I ever had to
worry about GC tuning.

~~~
ramchip
> Any system in production is soft real-time by that definition.

Most systems don't fit in that definition because they don't have deadlines in
the first place. A site like Hacker News is not soft real-time; the faster it
loads the better, but there's not a set number of seconds after which a user
would give up.

A video streaming service, or most APIs with response-time guarantees, would
be soft real-time systems. There is a deadline, but it doesn't need to be
respected 100% of the time, as users can live with a few dropped frames or
out-of-spec responses.

> There's really nothing special about Erlang that makes it amenable to
> "pretty quick" responses.

It's not about "pretty quick" but about predictable, reliable response times.
The key feature is the preemptive scheduling. Processes can only block a given
scheduler for a very short time; all functions in the language that can take a
long time to execute are built to yield to the scheduler periodically. So you
don't get a slow request because some other request decided to turn a huge map
into a string, or run an expensive loop, or block waiting for I/O, or run a
GC, etc.

This is of course a tradeoff between throughput and latency, because all the
yielding and checking comes at a cost. Go is a bit in the middle ground, it
also uses lightweight processes, but does not yield as much so a hot loop can
block for some time (but also runs faster as a result).

~~~
sriram_malhar
> A video streaming service, or most API with response time guarantees ...

Preemption, the design of Erlang's GC, etc. don't contribute to
predictability or reliability any more than other frameworks' designs do.

Consider a system written in Erlang, and another written using a very
different architecture, say Python/C++ (YouTube).

In both cases, Python and Erlang are basically used for orchestration; the
action happens in the systems layer below.

In both cases they use non-blocking I/O underneath, and some sort of adaptive
bitrate streaming if not enough bytes are going through within the time bound.
There is nothing that is particular to the Erlang system that monitors bitrate
and does something about it. In both cases, the soft real time guarantee has
to be accounted for explicitly; you don't get it in any shape or form from the
Python or Erlang architecture. The only 'guarantee' you get is the promise of
using the underlying API in the most sensible way possible, and to push out
bytes with as little overhead as possible.

What you get in both cases, is convenience. As a side note, here's BBC's
Kamaelia framework
([http://www.kamaelia.org/Home.html](http://www.kamaelia.org/Home.html))
written in Python. (I'm not sure if they still use it though).

~~~
ramchip
What's missing from your thought experiment is concurrency. Imagine there's
100k people connected to the server. If you make the system in Erlang, you can
handle each connection in a separate process, which individually monitors its
own bitrate and fetches or sends data with synchronous calls. This makes the
code very straightforward.

The VM ensures each process won't block the scheduler for more than 1ms or
so, so that processes with short but time-critical operations to do (e.g.
push a video frame down a socket) get a chance to run quickly, no matter what
other processes are doing and whether they're I/O- or CPU-bound.

Kamaelia, gevent, Node.js, etc. do not provide that guarantee. OS threads do,
and there are good frameworks based on them (e.g. Celluloid), but they don't
scale to more than a few thousand.

~~~
sriram_malhar
That's a fair point.

------
evtothedev
Seems kind of circular to say, "Use Phoenix instead of Rails because Phoenix
is similar to Rails." Which is pretty much points 1, 2 and 3 of this list of
5.

------
TomK32
After 10 years of Rails I have some high expectations, and I'm missing
information about Elixir's incarnation of RubyGems. Is there anything like
RSpec?

~~~
thibaut_barrere
I've been using Rails since 2005, used RSpec all the time (and still do), and
I must say the built-in testing framework (ExUnit) is "good enough" for me. I
haven't felt the need for something that mimics RSpec more closely.

------
holydude
"Author of
[http://www.phoenixforrailsdevelopers.com](http://www.phoenixforrailsdevelopers.com)"

All of this looks like a sales pitch to me. No doubt elixir / erlang are
extremely useful and brilliant technologies, but the article is simply
bollocks, comparing apples and oranges.

~~~
sotojuan
As a technology gathers hype around it, it is normal to see people try to
make a quick buck out of it through books or online courses.

I'd recommend interested developers go read Phoenix's own website and form
their own opinion.

