Can you give a briefish but honest assessment about how Phoenix outdoes going with your stock Rails stack, and how it might make a few things harder? How would I differentiate/"sell" an Elixir/Phoenix stack (say, for an API) to a manager? How does Phoenix differentiate itself from Rails, and how difficult is it to switch, and under what circumstances?
(I say this as a fan, btw! I just think that it's important to get the pros and cons out there. I also think that a lot of Ruby/Rails folks are VERY curious about Elixir generally and Phoenix specifically, lately...)
Sure. So let me start by saying I have worked full-time in Ruby/Rails at a Rails shop for the last four years. Phoenix is derived from that experience, and we borrow some great ideas from Rails.
Pro - Concurrency:
I really love Ruby/Rails, but what led me to Elixir in the first place was constantly dealing with lack of concurrency in Rails and being unable to do anything like websockets in a sane way. For example, in Rails we can't really block in the controller without killing throughput, so we go to great lengths to background everything. This makes simple problems like "Make a remote API request through a controller" way harder since we now have to throw it in a worker queue, then poll from the client on the status of the job, then finally return the result. In Elixir/Phoenix, we can block in controllers without a blip on throughput, do our work, and return the response.
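To make that concrete, here's a minimal sketch of what "just block in the controller" looks like. Note `WeatherApi.fetch/1` is a made-up client module for illustration, and controller details have shifted between pre-1.0 Phoenix releases:

```elixir
# A hypothetical JSON endpoint that blocks on a remote call in-line.
# Each request runs in its own lightweight BEAM process, so a slow
# upstream call only parks this one process, not the whole server.
defmodule MyApp.WeatherController do
  use Phoenix.Controller

  def show(conn, %{"city" => city}) do
    # This may take seconds; only this request's process waits.
    {:ok, weather} = WeatherApi.fetch(city)
    json(conn, weather)
  end
end
```

No worker queue, no client-side polling for job status: the process blocks, gets its answer, and responds.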
Pro - Error Handling / Fault tolerance
I won't go into too much detail here, but the Erlang/OTP way of building applications around supervisors and responding to failures has really blown me away. Erlangers have been sitting on this innovation for the last couple of decades, so it's battle-tested and has proven its merits. Look at WhatsApp: 1-2 million connections per server, and roughly 400 million users served by about 30 engineers.
Pro - "Realtime" / pubsub:
Phoenix also ships with a realtime layer that we call "Channels" for doing pubsub/realtime events to clients. Our goal is to make building realtime applications as straightforward as building REST endpoints. We include a `phoenix.js` browser client, but we are targeting multiplatform support and have a working Swift client internally. So Phoenix aims to go beyond a typical "Web Framework". The Web is more than just HTML apps (though Phoenix excels as just that). Imagine an iOS game publishing on Phoenix channels to other iOS clients, and a Web front-end reporting world events, player lists, etc. Phoenix can power all of it. By virtue of the Erlang VM, we also get cluster support For Free. You can run multiple Phoenix nodes in a cluster and PubSub Just Works. No need for Redis in between or worrying about sticky sessions. Concurrency in Elixir is location transparent: a process is a process regardless of the machine it runs on in a cluster.
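A rough sketch of a channel module (treat the callback names as illustrative; the Channels API has changed across pre-1.0 releases):

```elixir
# A chat-room channel: clients join a topic, and messages pushed by one
# client are broadcast to every subscriber on that topic, across all
# nodes in the cluster, with no external broker in between.
defmodule MyApp.RoomChannel do
  use Phoenix.Channel

  def join("rooms:" <> _room_id, _message, socket) do
    {:ok, socket}
  end

  def handle_in("new_msg", %{"body" => body}, socket) do
    broadcast!(socket, "new_msg", %{body: body})
    {:noreply, socket}
  end
end
```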
Cons - Packages, Training:
My goals are to wholesale replace all my Rails work with Phoenix, but obviously we have a ways to go. The main cons are lack of off-the-shelf community packages like in Rails land. We are also not yet at 1.0, so you have to be willing to put up with breaking changes until we're ready to brand a 1.0 release next year. I'm using it for prod systems, but this is an upfront reality as we march towards 1.0. Other cons would be having to train folks new to Elixir, which often means learning a number of new paradigms at the same time. There's a helpful Elixir community for newcomers, but it will take more upfront work to get up to speed if you're coming from an OO background.
Nice work! I would also recommend stealing as much good stuff as you can from ChicagoBoss. It has the same RoR roots and great overall architecture. Thanks for this.
Is there much benefit in using Phoenix for the server for a SPA where 95% is API routes, over using Plug directly? I guess it might make adding websocket services easier in future? (I've been very impressed with Plug/Ecto, just wondering if moving over to Phoenix while not too entrenched would be worthwhile.)
Even if you aren't using Channels or HTML views, you still get nice features like Router pipelines, JSON Views, code reloading in development for "Refresh Driven Dev", and our Controllers, which provide a number of nice features on top of Plug, like content negotiation. Phoenix Views are really nice for JSON composition, and if you ever need to start serving other content like HTML, you'll be all set. Plug is a fine choice, but you'll have to roll a lot more code yourself.
If you are planning WS-related features in the future, Phoenix will make that a first-class experience, whereas with Plug you'll have to dip directly into the underlying webserver adapter.
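For an API-only app, the router pipeline idea looks roughly like this (a sketch; the pipeline DSL is still settling pre-1.0):

```elixir
# The :api pipeline is a named stack of plugs that every matched route
# flows through before reaching the controller.
defmodule MyApp.Router do
  use Phoenix.Router

  pipeline :api do
    plug :accepts, ["json"]   # content negotiation
  end

  scope "/api", MyApp do
    pipe_through :api
    resources "/users", UserController
  end
end
```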
I have one question regarding how the Router works.
I worked with the new Node.js framework called Koa.js. It has an interesting way of dealing with middlewares (would be Plug in Elixir world, I assume). Request and response are passed through the middleware chain downstream, and each of them can take the response from previous middleware(s), modify it if needed, and pass to the next.
Either the response reaches the last middleware, or one of middlewares returns immediately, the response goes back "upstream" through the middlewares again. Then it is replied to the client.
During the flow, each middleware can also yield to the next one, so that it can handle the updated response when it goes upstream.
Imagine the cache middleware. Request comes in, this middleware checks if there is a cache of that request. If not, it yields to the next middlewares that will retrieve and transform the data, then cache the response body before returning to client.
For me, this flow offers a lot of flexibility in request and response handling. Is there anything similar to Phoenix?
From what I understand, once a Plug chooses to reply, the response ends at that Plug.
Yes, this is similar to Plug, but we only have a "connection". There is no distinction between request and response. A series of plugs forms a transformation layer over the connection: plug middleware can transform the connection, send a response, etc. "Yielding to the next plug" is just a matter of returning the connection. There is no explicit yield. You either halt the connection (no further plugs are invoked) or continue the stack by returning the conn unhalted. Where your "upstream" concept fits in is that a connection can have callbacks bound to it that are invoked just before the response is sent; this is where a cache plug would cache the response. Plug avoids separating request and response because, for things like streaming, sending a response does not mean the stack is done with the request. Rack has this issue with streaming, and José put a great deal of thought into Plug around the lessons learned from Rack-style APIs. You can read about his thoughts here:
http://blog.plataformatec.com.br/2012/06/why-your-web-framew...
> From what I understand, once a Plug chooses to reply, the response ends at that Plug.
In Phoenix, we halt the connection when you use most of our functions that send a response, because there is little you can do afterwards. The Plug response API itself never halts when a response is sent.
Thanks for explaining. Is there a document somewhere listing all the available connection lifecycle hooks that I can look up? And can different Plugs attach different callbacks to the same hook?
Right now, the Plug repo and docs are the best resource. Yep, different plugs can attach any number of `register_before_send` callbacks on the connection.
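The cache example from upthread translates to something like this (`MyCache` is a made-up store for illustration; `register_before_send/2` is the real `Plug.Conn` hook):

```elixir
# On the way "down" the stack we check the cache and halt on a hit; on
# a miss, the registered callback runs just before the response is
# sent, which is where we store the body.
defmodule CachePlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    case MyCache.get(conn.request_path) do
      {:ok, body} ->
        # Cache hit: reply immediately and skip the rest of the stack.
        conn |> send_resp(200, body) |> halt()

      :miss ->
        # Cache miss: let the stack build the response, then capture it.
        register_before_send(conn, fn conn ->
          MyCache.put(conn.request_path, conn.resp_body)
          conn
        end)
    end
  end
end
```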
I just hope that Phoenix sticks it out for the long run. Other Elixir frameworks like Sugar and Dynamo are nearly dead by now.
Secondly, I hope that the original dev team sticks around too. The Erlang web framework - ChicagoBoss - has had its development slowed down ever since Evan Miller stepped down.
Phoenix is here to stay. My first commit was a year ago, with the goal of building a web framework to replace all my Ruby work, both for clients and personal projects. Since then, I've spent hundreds of hours on the framework, spoken about it at ElixirConf, and started building prod systems with it. We've also had the pleasure of José Valim joining the phoenix-core team and contributing some really great work. We're just getting started.
What is the reason for requiring the absolute latest minor tick releases? Elixir is pretty cool, but the default solution to installing it (from the erlang-solutions repo) gives me a version that doesn't work with Phoenix. I don't really want to build the language from source, as that makes deployment just that much harder and error prone.
Version managers do exist for Elixir to make upgrading easier. Two such are exenv[0] and elixir-build[1], which should feel familiar if you've ever used rbenv and ruby-build. If you're on OS X, homebrew usually gets updated with the latest version of Elixir minutes after a release happens.
Phoenix is different from WebMachine and Liberator, as it is much closer to frameworks like Rails, Django, and Play. There is no plan for built-in hypermedia support in 1.0.
Is there any way to create a Phoenix add-on for my Rails app? It would be great to use Phoenix to handle the real-time parts of the app without having to walk away from all of our working Rails code.
I tried using Phoenix for a simple web app I wrote last month. It was a couple of static html/js files and a single WebSocket connection, so nothing fancy. Unfortunately, I found the documentation lacking, and the framework much too "magical" for me to quickly understand and make use of. I have no experience with Rails but rather with Django, so that may explain a lot. Anyway, in the end I used Erlang and Cowboy instead. I checked out Cowboy from github master which is "almost 2.0" now, and I had no problems at all setting basic project up. Defining routes was straightforward and explicit (which I like), upgrading connection to websocket and handling incoming data was simple and explicit too. I added eredis and jiffy to the project and that's basically it, it did everything I wanted it to do splendidly, with little magic and very little overhead.
Now, I know Erlang much better than Elixir, and I worked with Cowboy before and I needed to make this app quickly, which resulted in me not spending too much time on learning Phoenix. Between controllers, routes, views and channels I got an impression that Phoenix has too many moving parts and that it would take me too much time to fully understand what's going on. Especially because I really didn't need most of these, just a static file server and a single WebSocket.
However, I see Phoenix's potential for more complex projects, where investing the time to learn it is going to be worth it. It looks like Phoenix provides an awful lot of conveniences and makes a project much better structured than my "a couple of files in a single directory" approach.
I guess what I want to say is that I almost used Phoenix this time and that I would probably use it if it had better docs - especially a solid tutorial(s) for use cases similar to mine. And that, while I didn't use it this time around, I'm certainly going to keep an eye on it and consider it next time I have to write something similar. It looks very promising and - like Elixir itself - very interesting, I hope for it to only grow in the future :)
I'd just like to say that Phoenix documentation is getting better all the time. The core team has put a lot of effort into documenting modules and functions, which you can see on http://hexdocs.pm/phoenix.
I've seen the API reference, but while it's good to have it, it doesn't help very much in the beginning. You need to know what it is you're looking for, and that's very hard to guess with just the reference and no prior knowledge of the metaphors used. The guides look very nice, and probably would have helped me a lot, but I didn't find them last time since they are not linked from the Phoenix main page. Anyway, looks like I will have something to read later this weekend :)
No, and after a quick look at them I see that they'd help. Would be nice to have this mentioned in the project main readme under "Documentation" - do you know why it isn't there (yet)?
This. I can not put my words any better than this.
I used to dislike Rails, since it seemed to be too much abstraction that would require time to learn. I now have the same feeling about Phoenix. I could put up a simple web app quickly with Erlang and Cowboy. With Phoenix, I needed to read documents and the source code to figure out what all the imports are doing.
This whole "framework fatigue" thing is going reductio ad absurdum. Convenience is not bad. _Too_ much convenience traded for opacity is bad.
As someone with a Phoenix app running in production, I can tell you that this framework hits the sweet spot of providing lots of value without requiring the user to learn too much. The abstractions it provides, especially the router and rendering layers, are very welcome. And I'm saying this from the perspective of someone who has built a few smaller vanilla plug applications.
Let's not throw the baby out with the bath water here just because rails went a little too far with the magic.
I think Phoenix strikes a really great balance with its abstractions. I also think great things happen when a community adopts the same conventions; Rails has shown that beyond anything, in my opinion. Things like our project bootstrap command `mix phoenix.new my_app` let you get a bare app up and running in seconds. With a quick guide to the Router and Controller layers, most folks can get up and running very quickly, but our guides have lagged behind development. It's something we're working hard to improve. Lance has been working hard on full-featured guides and we should have them live very soon.
For me it's not even about "time to learn", but the fact that I can never be sure I know how something like this works. Even that little detail about how `resources` automatically defines so many routes… yeah, it's kinda convenient, but I would be more comfortable if I had all of my routes explicitly defined somewhere, at least in a framework with scaffolding. When an app written like that grows, and I'm not the only developer, and the guy before me was clearly using drugs, well, that really becomes scary much quicker than in a simple Flask-style framework.
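For reference, the expansion of `resources` is mechanical and easy to write out by hand. Roughly (the exact action names and HTTP verbs have varied across Phoenix releases), `resources "/users", UserController` stands for:

```elixir
# Explicit equivalent of `resources "/users", UserController`,
# following the Rails-style action-name convention:
get    "/users",          UserController, :index
get    "/users/new",      UserController, :new
post   "/users",          UserController, :create
get    "/users/:id",      UserController, :show
get    "/users/:id/edit", UserController, :edit
put    "/users/:id",      UserController, :update
delete "/users/:id",      UserController, :destroy
```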
Out of curiosity: is that a concern only with routes? Or would you also like to have explicit control of which middleware you are using too?
Because if the latter, it feels like a web framework isn't for you. But you should definitely consider Plug (http://github.com/elixir-lang/plug), which has all the pieces and you just need to put them together (a simple router, a bunch of "middleware", etc).
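A minimal sketch of that fully explicit style with `Plug.Router` (handler bodies are placeholders):

```elixir
# Every route is spelled out; nothing is generated for you.
defmodule MyApp.ExplicitRouter do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/users" do
    send_resp(conn, 200, "user list")
  end

  get "/users/:id" do
    # Named path segments are bound as variables in the block.
    send_resp(conn, 200, "user #{id}")
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end
```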
Going by my experience, if you are using Websockets, choosing Phoenix would be a really good choice. There's a lot of work that's gone into transport adapters in Phoenix (websockets and recently polling support was also added). @peregrine and @chris_mccord did a lot of work as far as I can recall. I've been noticing that @JoseValim has joined in too.
That said, I would love some websockets stuff refactored out of Phoenix into a Plug so that it can be used even without Phoenix. I'm not sure about the amount of work that would require.
My experience with Phoenix was super positive. I built https://www.vuln.pub with it while learning Elixir (no erlang experience..) and it couldn't have been easier. Everyone was very helpful on IRC if I did run into problems, and the framework itself is extremely well documented and the source is quite readable.
Definitely would recommend trying it out if you haven't. It was a breath of fresh air from where I was in node.js/python land.
I have no rails experience, but have written a lot of python stuff in django and flask, and some node stuff in express.
I guess compared to django (and rails..), it's absolutely a smaller framework, so it's easier to reason about what's going on, and when you don't understand something, the source is much easier to follow. Diving into the django source to figure out a problem was a nightmare.
In terms of actually using it, the biggest strength is the concurrency model.
It handles blocking much better than rails or django where you typically have a fixed number of workers, and if you block and fill up those workers then you're SOL. Yea there are workarounds here, but they're pretty ugly. python and ruby don't really have nice solutions for this, which brings us to node. In node, you always need to be actively thinking about handling async IO with callbacks or promises or whatever, and you can quickly end up in callback hell if you aren't careful.
In elixir (and erlang), BEAM handles all that hard stuff. The result is your code is easier to write and read. Every phoenix http request is in its own elixir process. There's no weird request context like you get in flask, no way to abuse request state, and no callbacks to deal with. You can block a process all you want and throughput will be the same. The code looks like it executes sequentially, even though it doesn't.
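The "block all you want" claim is easy to demo with bare processes, no framework needed: three concurrent "requests" each block for 100ms, yet all finish in roughly the time of one.

```elixir
# Each "request" blocks for 100ms in its own process. Total wall time
# stays ~100ms, not 300ms, because the processes run concurrently.
parent = self()

for i <- 1..3 do
  spawn(fn ->
    :timer.sleep(100)          # simulate a blocking call (I/O, API, etc.)
    send(parent, {:done, i})
  end)
end

# Collect the replies; order of arrival may vary.
results =
  for _ <- 1..3 do
    receive do
      {:done, i} -> i
    end
  end
```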
For a small app like the one I wrote, it also has the advantage of being able to start a bunch of little services in the background to handle longer running tasks (which would typically be handled by a message queue with django/rails) and they're super easy to deal with since it's just standard elixir process messaging. These services handle things like performance logging, emailing, as well as (in the case of my app) looking for vulnerability disclosures and resolving them to package specs.
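One of those little background services can be as small as a GenServer; callers fire messages at it and move on, with no external queue. (This stands in for the logging/emailing services mentioned above; the names are made up.)

```elixir
# A tiny in-app background service that accumulates events.
defmodule PerfLogger do
  use GenServer

  def start_link do
    GenServer.start_link(__MODULE__, [], name: __MODULE__)
  end

  # Fire-and-forget: cast is asynchronous, the caller does not wait.
  def log(event), do: GenServer.cast(__MODULE__, {:log, event})

  # Synchronous read of everything logged so far.
  def events, do: GenServer.call(__MODULE__, :events)

  def init(events), do: {:ok, events}

  def handle_cast({:log, event}, events), do: {:noreply, [event | events]}

  def handle_call(:events, _from, events), do: {:reply, events, events}
end
```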
Anyway, sorry for the rambling response, but I hope it gives a general overview of why elixir and phoenix made building something way more pleasant than what I'm used to.
Why do you ask about the test suite? Did you find a bug? :)
It's really awesome how far Phoenix has come in such a short period of time. Chris and José (and many others) do an excellent job of carefully considering features and how those features get implemented. While many refer to Phoenix as a "web framework", I don't think of it that way in the traditional sense of web frameworks like Rails. I find it closer to a "web library" that does an excellent job of handling Web concerns such as routing, WebSockets, rendering HTML/JSON, and internationalization. I think this is a good thing in this day and age of having very diverse model layers.
If you have time and want to see a very well run open source project in action, I recommend you read through present and past discussions on the Phoenix GitHub project: https://github.com/phoenixframework/phoenix/issues
We are a pretty big Rails shop (30+ devs) and are getting pretty excited about Elixir. We are currently writing a series of blog posts about our Elixir journey, first one here http://blog.oozou.com/why-we-are-excited-about-elixir/
Elixir is great. I just wish its syntax was more appealing; some of the design choices are a little idiosyncratic. Using "do..end" is natural in Ruby for blocks, but Elixir uses it for everything, and it looks pretty odd:
Enum.map [1, 2, 3], fn(x) -> x * 2 end
or:
receive do
  {:hello, msg} -> msg
  {:world, msg} -> "won't match"
end
The "do" syntax is in fact syntactic sugar for keyword arguments, which is surprising and a little disappointing, especially when you realize that constructs like "if", "case" and "receive" are in fact implemented as functions. Sacrificing syntactic elegance for consistency ("everything is a function") might be clever, but is it an improvement over hard-wiring this stuff into the language as first-class keywords? I personally don't think so.
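The sugar in question, for the record; these two forms are equivalent calls:

```elixir
x = 1

# Block form:
a = if x > 0 do
  :pos
end

# Same call with the sugar expanded: `do:` is just a keyword argument.
b = if x > 0, do: :pos

# Both a and b evaluate to :pos.
```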
It's a minor point, and not major enough to make me not use Elixir, but when someone goes this far in putting a nicer skin around Erlang, it's disappointing to find newly-invented blemishes that are as weird as the ones it aimed to smooth over in the first place.
As a newbie coming into Elixir without either Ruby or Erlang background, I didn't find anything in the syntax to be a real pain point (although understanding optional parenthesis usage in CoffeeScript did help).
Sure, if everything (including control statements like "if" and "case") is a function, it means the underlying data structure is simpler. That design in itself doesn't preclude other syntaxes such as a more familiar brace syntax.
Language designer here. There is no escaping this kind of perception, regardless of the choice. :)
I have mentioned before that this is often the most unrewarding aspect of designing a language because, no matter what you do, you will always get an opposing opinion. Here are some examples of what I have heard or read multiple times throughout the years:
* Using the brace syntax is seen as catering to common languages (like C and Java) which would arguably cause a lot of confusion when added on top of a functional language
* Going with Lisp is always a matter of love or hate. Some people will love it and some people are going to really hate it
* The same with space-based indentation. A lot of people praise its conciseness, a lot of people curse the code being extremely hard to move around (this was honestly my second choice but it would get complex inside quoted expressions)
* The do-end blocks get some praise for being readable (fewer punctuation characters) but also a bad rap for being verbose
To be clear, I am not calling you out, the point is exactly that everyone will have their preferences and if I was not writing this comment to you, I'd definitely be writing it to someone else. :)
Of course; you can't make everyone happy all the time.
However, it's quite obvious that you are hugely influenced by Ruby's syntax. What I don't understand is why, in copying Ruby's overall flavour of syntax, you decided to make it a little worse.
My hypothesis is that you discovered that this syntax allowed an elegant, unifying structure for the internal implementation, which is fine, but as a user it comes across as an annoying wart. The parser should know perfectly well that after "def" comes a function name, so why does it need the "do" to demarcate the function body? It would have been just as ugly in Ruby, which goes for terseness in the common case (e.g., "if" can take a "then", but doesn't require it).
Criticisms aside, I should add that this is the only thing so far that has annoyed me about Elixir.
Elixir's do/end blocks actually enhance the syntax it inherits from Ruby by making it very, very consistent. Defining any type of entity takes a do/end block - def, defmodule, defmacro, defprotocol, defimpl, and probably some that I have forgotten.
> My hypothesis is that you discovered that this syntax allowed an elegant, unifying structure to the internal implementation
Your hypothesis is almost right. The important bit is that it is not about an internal structure, it is about a public structure that is accessible to macros. It is about the language AST.
Before there was syntax, I had defined I would like to have Lisp-style macros in Elixir. For that we need to have a regular syntax because when you are composing AST nodes you don't see keywords.
As an example, imagine you have code generation where you may generate a function called `add` and we treat `def` as a keyword. You would do something like:
quote do
  def add(x, y)
    x + y
  end
end
Now let's say the function may be public or private. You would try to write it like this:
kind = :def # or :defp

quote do
  unquote(kind)(add(x, y))
    x + y
  end
end
And now the parser would be unable to parse the code above, because there is no longer a `def` or `defp` keyword from which the parser would know it could skip the `do` bit. Therefore, having a uniform syntax means you can generate and transform code without worrying about special cases, without evaluating strings, and without resorting to special functions/methods for dynamic code definition. You can just compose code!
I also believe this matters a lot to the end-user, positively, for two reasons.
1. Consistency. The language is consistent and you can extend it in a consistent fashion. For example, Elixir code is written inside modules. `defmodule` is not a keyword, it is an identifier as any other in the language:
defmodule MyModule do
end
We also have defprotocol, which is implemented as a macro, and looks exactly the same:
defprotocol MyProtocol do
end
If we made defmodule a keyword, we would put ourselves into a corner as we would either have to make defprotocol a keyword too (for consistency) or have defmodule and defprotocol looking slightly different (causing confusion).
2. The second reason is that enabling such AST macros allows us to extend the language in different and interesting ways you can't do cleanly with other languages. One example I like to give in talks is the assert macro in our ExUnit test framework:
assert foo == bar
Unlike many unit test frameworks, we don't need `assert_equal`, `assert_more_than`, and friends. As a macro, `assert` navigates the AST, which is quite uniform, and extracts information from the code.
Another example I gave at my talk at ElixirConf is that we could even extend language constructs like our for comprehensions. Someone can implement a `parallel` macro that transforms a regular for comprehension into one that runs in multiple Elixir processes (leveraging multicore):
parallel for user <- users do
  fetch_user_data(user)
end
This only works if parallel can look at the code and transform it in multiple ways.
TLDR: The uniform syntax is an important part of Elixir's macro system, which brings consistency and provides an extensible foundation for the language that can be leveraged by its users.
I've been following Elixir closely and recently built a super-simple chat app using Phoenix and websockets. I agree with other comments here that the docs could use some work, and as I also come from a Django background, rather than Rails, I found it a little magical for my tastes. However, once I understood how things go together, it was pretty trivial to get up and running. The included phoenix.js library makes the websocket pub/sub stuff ridiculously easy.
I'm really interested to see where Elixir is going, and to try building something real with it. I'll probably use Phoenix, just because it's the most active, mature framework. (I like the look of Dynamo - https://github.com/dynamo/dynamo - but not sure how active it is.)
> Dynamo is currently in maintenance mode for those who are already using it in production. Developers who seek for an actively developed framework are recommended to look for other alternatives in Elixir.
We don't ship with any DB layer today, but Ecto is the de facto choice for SQL. We are planning to include a Resource Protocol, so it should be very easy to bring your own DB/model layer and still get all the nice conventional route builders, forms, etc.
Ecto and PostgreSQL seem the most mature option, though there's a Redis library and of course Mnesia. I haven't been able to find any support for MySQL.
As others have said, Ecto is the layer you would use to connect to Postgresql. It's pretty different from ActiveRecord, not necessarily in good or bad ways. It's just different.
Is providing helpers for authentication and authorization (for example like ASP.NET MVC attributes) anywhere on the roadmap?
It would be helpful not to reimplement authentication routines from scratch in every project.
There are no plans to provide authentication, but since the Phoenix Router and Controllers are just Plugs, we should see the community produce a handful of first-class auth solutions as things mature. I don't think a generic auth solution that works for everyone is easily done, and I would prefer third-party packages. The nice thing about Plug is that these solutions should be relatively easy to add to your stack.
What Python's Pyramid web-framework did well in that area was to provide the scaffolding around Access Control Lists in the application. You figured out authentication and provided a really simple function and DB table for resolving roles and permissions and the app did the rest.
I think most frameworks should follow that model: provide a flexible ACL system but let the developer figure out auth.
It worked particularly well for Pyramid with its resource hierarchy object model. The ACL would cascade down the tree, and as it was traversed it could pattern-match the permissions against the tree node's ACL.
I actually think this model is better suited to a functional language. I'm working on a similar extension to the Haskell Snap framework.
This looks pretty elegant, but it's a shame that Elixir uses Ruby-style multi-line code blocks with do..end. Looking at those code examples, 'end' takes up around 25% of the lines of code. Does anyone know if they considered taking the Python approach? Would be curious to hear the arguments behind the decision. I've noticed that most people seem to favor the Python approach after trying it, but that it's rarely used in new languages.