'Tomorrow's' languages and frameworks would do well to take heed. Winning this war has as much to do with culture and marketing as algorithms and data structures.
Clojure, for example, is a much stronger programming language than Ruby on paper, but for a straightforward web app, you'll likely spend at least twice as long to get something working--and while it will be based on better engineering principles, it will also take new engineers much longer to grok since it doesn't follow any universal set of conventions.
With Rails, you end up in the weeds in the long run, but the alternatives put you in the weeds right off the bat (with the promise of eventual salvation). The reality of most product development (ymmv) is that the former is highly preferable to the latter.
At my current job we primarily use WordPress. Why WordPress? Because we have a large, non-technical content team, 260+ websites, and several tens of thousands of pages of content that need to go up quickly.
We're just 4 developers - about 1 for every 8 content & marketing people we have. We spend very little of our time on the frontend. Tell me you're going to get that kind of leverage with another tool.
WordPress is just about the last thing a developer or ops person would pick to base a tech stack on, but in our case it is the right choice.
Aside: coming from Rails and having spent six months deep in WordPress core, it's actually kind of awesome. Seemingly every problem is solvable.
I'm making the point that WordPress is also "yesterday's software" and there's nothing wrong with making that choice.
You're absolutely right.
And yes, Rails has quirks, and I agree with most of the criticism in the other article.
Option 1: Twitter is built on RoR, has growing pains and lots of fail whales once it has an established product (and the investors and money to fund a rebuild).
Option 2: Twitter is built on Clojure, the product takes more time, iteration gets slower, hiring is slower, some other competitor gets ahead. Oops.
All our webapps these days are thin React apps (with server-side rendering) that don't have a dedicated backend. Instead, they talk to a group of generic microservices. We've been doing this style of development since around 2010.
With this methodology, a lot of Rails' ergonomic concerns (templating, the split between rendering HTML vs. data, etc.) just melt away. Front-ends become formally isolated clients of microservices, forcing you to engineer better internal APIs and get things like security correct from the start.
One aspect I keep hammering on about with regard to microservices is reusability. If you're developing just one user-facing product, nevermind. But if you have a bunch of products, reusability across microservices is a huge boon compared to monoliths. Reuse all your job processing, notifications, data storage, login/auth/group/role/permission management, identity verification, image processing, NLP processing, feed processing, etc. etc. etc. across your entire stack.
With monoliths, the only other option is to build these as libraries, and unless you want to write bindings or write code more than once, you're now chaining yourself to a specific language.
We use a simple microservice design: All our microservices talk using JSON over HTTP. They share nothing. Everything is done by communicating. There's also an event bus so they can listen to each other's events. All our microservices are multitenant by design, so that they can support any product with no data spillover.
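To make the calling convention concrete, here is a hedged sketch in Ruby of the share-nothing, JSON-over-HTTP style described above. The service host, path, and tenant header are hypothetical, not our actual names; building the request separately just lets it be inspected without a network.

```ruby
require "json"
require "net/http"

# Every call carries its tenant explicitly; services share nothing,
# so all state flows as JSON over the wire. Header name is made up.
def build_request(path, tenant_id, payload)
  req = Net::HTTP::Post.new(path,
    "Content-Type" => "application/json",
    "X-Tenant-Id"  => tenant_id)
  req.body = JSON.generate(payload)
  req
end

def call_service(host, port, path, tenant_id, payload)
  req = build_request(path, tenant_id, payload)
  res = Net::HTTP.start(host, port) { |http| http.request(req) }
  JSON.parse(res.body)
end

# e.g. call_service("auth.internal", 80, "/v1/sessions", "acme",
#                   "email" => "user@example.com")
```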
We have a bunch of general-purpose microservices built out to service our particular needs. User-facing webapps are built entirely out of these building blocks. They don't need to deal with login or OAuth; there's a microservice for that. Similarly, almost all data is stored in a single document-oriented data store (which we're heavily rearchitecting in Go and will soon release as open source — email me if you want to be notified) on top of Postgres and Elasticsearch. For anything we need from a webapp, we build a microservice. (We have a lot of older Ruby and Node stuff, but all our new microservices are written in Go, which has been a very pleasant experience.)
One key component of our development workflow is the use of a Linux virtual machine that each developer runs locally. It hosts the exact same stack (and uses the same deployment system) as our real production and staging environments. This lets the developer focus on work and not on how the microservices are run. It's not pain-free, but we have made it quite comfortable, among other things through a seamless system of mounting, where the code running on the VM for a particular app is using the developer's own project folder on the host machine. Hot code reloading is also enabled where possible. So a React app will automatically bundle with Babel and so on when you reload a page.
There are challenges to using microservices in this way. One is deployment and hosting. We currently have our own homegrown system for this that works fine, but hope to migrate to Kubernetes soon. Another challenge is documentation. In the end, the code (and project readmes) is the authority on how a particular microservice works and what APIs it implements; we did automatically generate API docs at one point, but it hasn't kept up with our Go projects. These days we just refer to the code. The most painful aspect of this, and onboarding, is documenting the cross-cutting, "big picture" concerns.
I'd be happy to answer any other questions you might have.
Edit: how do you scale? For example, do you just have lots of nodes running your authentication microservice for all of your active apps?
I find the abstractions much more intuitive. The framework as a whole is just as "magical" for fast prototyping, but it's composed of much simpler individual modules that are easy to wrap one's head around.
It's surprising how mature the whole ecosystem feels considering it's still quite niche. It helps to have decades of battle-tested Erlang libraries at your disposal.
If you enjoyed early-adopter status with Elixir/Phoenix, wait till it's Rails mk 2. It will suck big time.
As a side note, I feel like this has happened to some degree to the React ecosystem. I can no longer count on packages to be of decent quality.
I actually believe the opposite. Jumping in and doing Clojure web development is extremely simple at the core. The issue with frameworks like Rails (and I'm not against Rails; I've used it on multiple successful projects and still think it's great) is that you get so much for free and don't have to understand how all the pieces fit and work together (at least initially). Soon enough you do need that knowledge, and that's where the complexity comes in.
Having used both Ruby/Rails and Clojure, I think Clojure swings too far in the opposite direction, where absolutely everything is foisted on the developer and they must hold the entire codebase in their head at once or drown.
We even see this in Luminus, the closest thing Clojure has to a web framework. It's just a Leiningen template which spits files out on disk and says "there you go, it's your problem now". Of course there are upsides to this approach, but personally I've come to find developing web servers in Clojure a withering, joyless experience. Elixir and Phoenix show much more promise and seem to strike the right balance between featurefulness and cleanliness.
I don't get that, and if that's how you feel I'd say you are doing functional programming wrong!
That being said, I do agree about Luminus, and I think it's a terrible way for a developer new to Clojure to get started. Instead, build your way up from what your app needs, pulling in libs as you need them and understanding how they fit and work together. Chances are you don't need that many at all.
I've seen plenty of Rails apps that could have been written better initially. But I haven't really seen any where it was impossible to refactor and improve them as a developer learns more.
The difference, I'd say, is that with Clojure you learn those things from the bottom up, whereas with Rails you learn from the top down. Either way, you're going to have to learn them!
I think the biases of people who were experienced big-system developers before they moved to Rails are very different from those of people who started with it, potentially without understanding web app development very well, and then produced something challenging to maintain.
I'm gonna try Phoenix for my next project, even though I really like Django and I have never been too much into rails or ruby.
I may be stabbing myself in the foot, but let's hope not :D
As one of our product managers put it: Often, you want something you can sell well fast enough to buy time to make it good.
Overall, in this tool/framework/language debate, I'm growing more and more convinced two things matter the most:
- Pick something the weakest 40% of your team are comfortable using. Especially with support, I'd trust myself to figure out enough of Elixir/React/Whatever to be decently productive within a week or two on the go. But that's the thing - that's why I advocate to pick something the weaker team members are fine with. They will struggle, not the stronger team members.
- Smash problems with standard solutions. Don't be smart. Pick the right tools, throw hardware at them. A decent combination of load balancing, language+ORM, an SQL persistence layer and possibly a message queue can handle _a lot_ of traffic. And it's easy to hire for and to get new people on board because there are no smart things going on.
Note that the language and web framework are not part of this consideration at all, though my approach would favor more established frameworks like Rails or something in Java.
So it's not so much that I don't see good, perhaps preferable alternatives to Rails. I just don't see a sufficiently unambiguous case that Rails is out of date to be calling it yesterday's technology - yet.
And lastly, as I mentioned in the other thread about this, when Rails isn't working for me, either because of code clarity or performance, I tend to go to older, more established technologies, not newer ones. For very elaborate SQL, I use… SQL, not contortions in ActiveRecord that I think are much harder to read and understand than a SQL statement. I also often go to raw SQL for performance reasons (mass inserts, queries that produce very large result sets that need to be optimized for speed). For specialized tasks like scientific computing, machine learning, or stats, I reach for Unix, networking tools, and the specialized languages and environments built for them. That tends to be where I go.
It definitely felt like a step back from Rails, but that's partly because Node wasn't meant just to be a web framework.
I've gotten pretty comfortable with it, but I feel like the original decision to use it (which predates my joining the company) was primarily driven by how shiny and cool it is. The vast majority of what we do is standard CRUD stuff.
On a personal level I've enjoyed learning it, and from a career perspective it's nice to have it on my CV, but had I been on board when the original decision was made I would definitely have chosen Rails.
(Plus they also chose to use MySQL for no good reason. We've managed with it just fine, but in literally the last three weeks we ran into some limitations that would not be a problem in Postgres. The CEO, not being technical, doesn't quite understand why changing the database midstream is not something you do without really compelling reasons.)
I think if you more fairly compare Rails with Django, Flask, Laravel, etc. you might find the differences aren't so great.
I have had great success with Python across a variety of different applications.
A lot of people seem to end up comparing/picking between Python and Ruby so I am curious for my own knowledge what type of serious application does Ruby make easy where Python makes difficult?
I also think the overall gravitation towards microservice architecture generally makes language choice less important on the whole...but that's just me.
For a straightforward web app, Rails is probably a fine fit. For something less straightforward, it can shackle you down the road. The alternatives are an up-front investment that can save you time and effort down the road, but also give you enough leeway to make mistakes.
This is probably the most important to remember. Every problem is different. Every situation is different. Good engineering isn't just the implementation, but also the design process.
For most problems, there is rarely a single "correct" solution. This means decisions about which tools to use should include maintenance costs, familiarity, and other non-technical criteria. For example, it's probably a better idea to be consistent with tooling even if a side project is a great fit for another language/framework. Or maybe it's worth the cost in your particular situation.
In CS we often talk about trade-offs. Should we use the normal trig functions, or trade memory for speed and use a lookup table? Should the data be stored in an array or a linked list? Should the math be done using int16, int32, int64, or a bignum library? Good design happens when you take the time to consider all of your options, including which tools you will use.
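The trig trade-off mentioned above can be sketched in a few lines of Ruby (a toy example, not from the thread): precompute a table at whole-degree resolution and trade a little memory for a cheap array lookup.

```ruby
# Trade memory for speed: ~360 floats buys an O(1) array lookup at
# whole-degree resolution instead of a Math.sin call each time.
SINE_TABLE = (0...360).map { |deg| Math.sin(deg * Math::PI / 180) }.freeze

def fast_sin(degrees)
  SINE_TABLE[degrees % 360] # only exact for integer degrees
end
```

Whether the table actually wins depends on the platform and on whether whole-degree resolution is acceptable, which is exactly the kind of option-weighing the comment describes.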
This is true enough. I'd go farther and say that winning depends almost entirely on culture and marketing. The annals of computing are littered with the forgotten remains of technically amazing projects that suffered from poor or non-existent marketing execution. Rails is so popular mainly due to marketing tactics that won developer mindshare, rather than due to any technical merit.
The larger question is not a mutually exclusive either-or situation. A good framework should be easy to use for the most common cases initially and well thought out enough that it's easily extensible in the future.
When Rails first came out, I loved it. It made common stuff easy! Sure, it became intractable if you needed to deviate in any way from the planned normal cases, but that was fine for now. I thought it was a great V1 of the Ultimate Web Framework, and in time they'd build it out to have better and more flexible extensibility. I thought it was fine starting point to build upon, and that eventually they'd redo and flesh out the various badly designed subsystems.
As the years went by, I realized that it was not a V1 of that mythical Ultimate Web Framework. Not at all.
Its developers were quite content to feed on the low end of the market that just needed a quick way to shuttle data between a database and web forms. They had no intention of updating its design to accommodate other use cases. The few concessions that were made to other use cases were generally bolt-on hacks rather than integrated elements of the framework. They were focusing their time on flying around the world giving talks at conferences, writing books, giving training seminars, and otherwise monetizing their brand.
Actual engineering work on Rails itself slowed to a near-halt, and it was then usually reactive in the sense that Rails had to be dragged forward kicking and screaming. Rails was no longer an innovator, but that was fine because they were making plenty of cash off of what Rails was. Indeed, they now had a strong disincentive to make any radical design changes, since they would have to redo all of their books and course curricula around the new design.
There is no reason in principle why one couldn't build a framework that's both easy to get started with and easy to extend. It's just that nobody's bothered to do it yet.
Or do we need an entirely new class of tools, of the kind that have traditionally been used to build large-scale systems?
It's a multi-faceted question of type systems, tooling, packaging, dependency resolution and others. And as web apps continue to evolve, my guess is that the current tools will be considered lacking.
It used to be that picking "the right tool" meant choosing between Ruby, Python, PHP, JS. In the future it might mean using (gulp) Java + WebAssembly or a combination of other unusual tools. This would be quite game-changing for most web developers. ;)
If there was depth, I missed it. The only two points I got from it were 'I prefer static typing over duck typing' and 'avoid dependency hell'. The original post didn't add anything new to either argument and both have existed and been rehashed over and over again for decades.
That has been a solved problem in Ruby for a long time. Bundler (or similar tools) gives you the equivalent of static linking: it pins every dependency to an exact version in the lockfile (and can vendor them into the project), so the same code gets loaded everywhere.
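For readers who haven't used it: the Gemfile declares constraints, and Bundler freezes the resolved versions in Gemfile.lock. A sketch (the gem names are just examples):

```ruby
# Gemfile -- Bundler resolves these constraints once, then records the
# exact version of every gem (including transitive dependencies) in
# Gemfile.lock; `bundle exec` then loads only those frozen versions.
source "https://rubygems.org"

gem "rails", "~> 5.1.0"  # pessimistic pin: >= 5.1.0 and < 5.2
gem "pg", ">= 0.21"      # lower bound only; lockfile still freezes one version
gem "sidekiq"            # unconstrained, but equally frozen once resolved
```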
That said, I don't think duck typing is Python's biggest problem. Go sort-of has duck typing, in that you can make a type implement an interface just by giving it methods of the right names. It's a bit safer because you have to specify the 'ducks' though. It's kind of static duck typing.
Rust gets it totally right. But anyway the dynamic typing is a far worse issue than duck typing.
I will concede that duck typing allows you to do this sort of thing more casually, though.
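For concreteness, the "casual" version in Ruby looks like this (a toy sketch; the class names are made up): nothing declares an interface, and a bad argument only surfaces when the call executes.

```ruby
# Duck typing: the caller cares about behaviour, not class.
# No interface is declared anywhere -- anything with #quack works.
class Duck
  def quack
    "quack"
  end
end

class Robot
  def quack
    "beep"
  end
end

def make_noise(thing)
  thing.quack # no compile-time check; fails at runtime if #quack is missing
end

make_noise(Duck.new)  # => "quack"
make_noise(Robot.new) # => "beep"
# make_noise(42)      # NoMethodError -- discovered only when this line runs
```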
For example, while the author claims there are 'crazy levels of duck typing' in Ruby / Rails, he provides no evidence of it, and as noted in recent work by Aaron Patterson, the vast majority of call sites even in a very large Rails app are actually monomorphic (https://www.youtube.com/watch?v=6Dkjus07d9Y).
But more important than this cherry picked example is the complete lack of this type of technical discussion, which is why I find the post very shallow.
My understanding of it has always been that it was for the vast majority of web apps, which are neither particularly complex nor particularly large-scale. Rails was there to help those projects get started easily and iterate rapidly. If your Rails app turned out to be a hit you'd probably need to rewrite it, but that was OK because (1) very few apps qualify as "hits" and (2) only apps that actually get written can become hits, and with Rails it was much easier to turn ideas into functioning code.
Large-scale web apps have their own unique types of problems. New tools may be needed to cope with them, but that's a whole different thing than the problems Rails was created to solve.
They absolutely can be built using just Rails (or Python) and HTML.
I won't shill any harder about how, but it's not only possible, it's enjoyable and easy. I've done it.
Example: if the unit tests represent 50%+ of the code base, one is probably working around dynamic typing.
And here's some examples of complex software for comparison: image editor, digital audio workstation, CAD, video conferencing, computer vision & imaging, office suite, large games (think DeusEx, not minesweeper).
Not complex: almost anything CRUD
Unit tests check a lot more than argument types. I have no idea what environment you worked in, but the comment comes across as rather clueless.
The secret was their ~200 kloc unit test suite. :)
It has nothing to do with argument types per se; it's just that one must have very high coverage to be reasonably sure their app won't crash after minor code changes.
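A minimal Ruby sketch of why (the class is hypothetical, not from the thread): with no compiler, only an executed test notices when a caller and a renamed method drift apart.

```ruby
class Invoice
  def initialize(cents)
    @cents = cents
  end

  # If a refactor renames this method, no compiler flags the callers;
  # in a dynamic language each call site breaks only when executed.
  def total_in_dollars
    @cents / 100.0
  end
end

# A check like this is the only thing standing between a rename and a
# production crash -- hence the pressure toward very high coverage.
def invoice_total_ok?
  Invoice.new(1250).total_in_dollars == 12.5
end
```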
In my current project we can get away with having integration tests cover a lot of ground and only unit test what makes sense thanks to static typing. High coverage is good to have, but should be balanced with test development and maintenance effort.
Don't forget about .NET alongside Java.
Well, clearly they can, because lots of people are doing it. Maybe some other tools would make it easier than it is now but it is certainly possible now.
You know, after being out of the Java world for about ten years now, I'm starting to get the itch to use it again. I was one of those people that used to joke about programming in XML, but that's really just a layer of indirection which is often a good solution to a problem.
It's a very nice language, is fast, and has so far not been destroyed by Oracle. Plus, JavaFX may be ugly, but it is a cross-platform GUI toolkit that works.
You should say more rather than better, because better is very debatable when it comes to expressiveness. That expressiveness can often come at the expense of accessibility/readability, and IMHO that's the case with Scala.
A lot of the popularity of Go comes from its explicit lack of expressiveness. I barely use the language, but the one thing I remember most from my time learning it is that I could easily understand every Go source file I found. From Hello, World! all the way up to distributed applications/libraries, Go code was long/verbose, but explicit and easy to understand.
Meanwhile, I've never found Scala code to be very readable. It almost requires that the code be written with the same coding conventions that you're accustomed to using. Even when common conventions are used, we found Scala to be counterproductive in larger teams. Code reviews were significantly less thorough for Scala projects than either Java or Node projects were.
I don't think there's inherently a tradeoff. E.g. in Python you have to write "(lambda x: x + 1)" where in Scala you can write "(_ + 1)"; I think the latter is both more expressive and more readable, because naming "x" isn't adding any clarity for the reader, it's just ceremony that you have to skip over when reading. Similarly Python's verbose, repetitive constructors don't clarify anything (worse, they obscure the rare cases where your constructor does something different from "self.x = x; self.y = y; ...", making it too easy for a reader to be surprised/confused by those cases).
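Ruby, the language most of this thread is about, happens to offer several points on this exact spectrum, which makes the trade-off easy to see side by side:

```ruby
nums = [1, 2, 3]

# Fully named parameter -- the Python-style explicit form:
a = nums.map { |n| n + 1 }

# Numbered parameters (Ruby 2.7+) -- close to Scala's `_ + 1`:
b = nums.map { _1 + 1 }

# Symbol#to_proc -- maximal terseness, readable mostly by convention:
c = nums.map(&:succ)

a == b && b == c # all three produce [2, 3, 4]
```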
> A lot of the popularity of Go comes from its explicit lack of expressiveness. I barely use the language, but the one thing I remember most from my time learning it is that I could easily understand every Go source file I found. From Hello, World! all the way up to distributed applications/libraries, Go code was long/verbose, but explicit and easy to understand.
I think people overestimate the importance of understanding a line in isolation and underestimate the importance of understanding a whole component or system. A one-screen class where you have to spend a couple of seconds understanding each line feels harder to read than a three-screen class where each line is simple, but (IME) ends up being more maintainable.
> Meanwhile, I've never found Scala code to be very readable. It almost requires that the code be written with the same coding conventions that you're accustomed to using. Even when common conventions are used, we found Scala to be counterproductive in larger teams.
All I can say is that's not my experience.
> Code reviews were significantly less thorough for Scala projects than either Java or Node projects were.
I don't follow. Surely if people find it harder to read they should codereview more carefully (or simply reject on grounds of unreadability)? That's what reviews are for, no?
Can you give the lambda variable a name in Scala? In Python you can write "lambda city_name: add_state_suffix(city_name)"
No. It is a special combination of the shiny insecure magic of Rails, mixed with the happy meal mentality and abilities of most Rails developers.
It was, and still is, a nightmare.
Technically speaking, Rails itself upgraded in a reasonably straightforward way: just follow the documentation (well, that plus a few blog posts here and there for the things missed in the official docs). But all the additional Gems, and the dependencies of those Gems (and so on), made the process excruciating. Many things broke in subtle ways at runtime (no compilation, so no compiler errors) and there was no clear path to upgrade; whilst Rails' upgrade path is documented, there's a plethora of Gems that also needed to be upgraded separately (some in contradictory ways).
You might wonder why I was so out of date in the first place. Two reasons:
1. I inherited this code-base.
2. I've attempted this (or a similar) upgrade about 5 other times in the past; spending hours upon hours debugging crashes (or just weird behaviour) with enormous stack-traces where my application's own code often doesn't even appear in the stack trace. It's only now after making several failed (or rather overly time consuming) attempts I was able to come up with a "workable" upgrade path.
Gems dynamically generating methods left, right and centre, Gems replacing methods of seemingly unrelated classes (when they definitely do not need to), and crazy "conventions" that hide all the actual logic make debugging any sizeable Rails project a complete disaster. Don't even get me started on the poor performance, much of which is to do with poorly designed Gems and not even the Ruby interpreter's fault.
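To make that complaint concrete, here is a hedged sketch of the metaprogramming patterns being described (the gem and class names are invented): methods conjured from strings, and a core class reopened by code you don't own.

```ruby
# A made-up gem illustrating the pattern: it defines finder methods at
# load time, so `grep "def find_by_name"` turns up nothing anywhere.
module FakeGem
  def self.included(base)
    %w[name email].each do |attr|
      base.define_singleton_method("find_by_#{attr}") do |value|
        "looked up by #{attr}=#{value}" # a real gem would hit the database
      end
    end
  end
end

class User
  include FakeGem
end

# Monkey patching: any gem can reopen a core class and silently change
# behaviour for the entire process.
class String
  def shout
    upcase + "!"
  end
end

User.find_by_name("ada") # works, though never literally defined
"hi".shout               # => "HI!"
```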
That said... I still turn to Rails when I want to get a new project (with users, database, login, admin etc.) up and running quickly. It's a shame, but in terms of development speed, it's hard to beat Ruby (and Rails). For small projects Sinatra is very solid, and Padrino is interesting - but honestly I can't wait for the day I can move to a compiled language and still achieve this sort of development speed.
There are a few ways this could be addressed, e.g. separate branches for old gems, and some developers actually do this, but in the end I've noticed these legacy branches and gems just bitrot while the developer devotes their time to the new branch. So essentially, to use the gem you have to make the SemVer API jump, which then begins the dependency-failure cascade dance you mention.
I'm close to starting another new Rails job and I'm not looking forward to the inevitable rails rescue work on legacy codebases, yet this is the reality of a modern Rails developer.
Also I think this upgrade work needs to be factored into "development time" and when you do I'm not convinced Rails is actually faster.
The problem is a lot of Ruby developers have not felt this pain yet and eschew testing because they work in startups that are in a state of permanent death-march.
A little (okay, a lot) of unit testing goes a long way to ease this pain. The people making breaking changes in their gems usually/hopefully have good testing in place, and the return values of their methods are documented.
1) We couldn't be sure that we hadn't broken something because we knew we didn't have 100% test coverage.
2) Some of the gems we depended on conflicted so severely that we had to rip them out and implement the solution ourselves or pick a different gem.
3) Our tests themselves of course contained code with breaking API changes. That means we had to maintain the tests as well as the production code through the upgrade, and had to make changes to many of those tests.
All of this uncertainty means that this was not your typical test-driven, confident refactor. QA still had to do massive regression testing, and we're pretty sure we introduced at least a new bug or two. It took a pair of devs 4 months of non-stop work to get these services up to Rails 4.1. The upgrade was absolutely necessary, as the Rails core team had long since stopped fixing major security holes in 3.1. The company incurred a tremendous cost during this upgrade process. If they had kept things up to date all along they could've saved money, but of course that would have eaten into the supposed time savings of Rails.
The biggest problem I had was dealing with the dependencies. This project used just under 300 gems, many of them unmaintained, and a few of them I had to fork to get working with Rails 3.
I am very fast at making things in Rails; for the most part I find it just works. When projects get to a certain size, very roughly around 50k LOC, they become very slow to work on. Slow test suites and dependency hell become problems.
$ find -iname '*.rb' | xargs cat | wc -l
Mind, most of the code is tests:
$ find -iname '*.rb' | grep /spec/ | xargs cat | wc -l
IMHO .NET has lost some stability with the recent packaging systems like NuGet and Chocolatey; they don't really encourage stability.
1. Absolutely no tests of any kind
2. Over 100 gem dependencies
Upgrade from 3.2.x to 4.2.x took one developer (me) three weeks of work. I don't know if that's a lot or not, given the major version upgrade and all of the gems (which were a huge pain). I've not had any problems in production reported via Honeybadger or by end users, so I think the upgrade was a success. I did end up writing about 200 unit and integration specs during the process.
I'm thinking back to my days in the .Net world at BigCorp. Unfortunately I have nothing to compare it to, because we never upgraded anything from one major version of ASP.NET MVC to another. Is that a better situation?
The problem wasn't Rails there.
Unfortunately, of course, that gun doesn't usually go off until it's handed to someone else.
After that, you always make sure tests exist before even pondering a push of potentially breaking changes! :P
(sorry, couldn't help the Futurama/Zapp Brannigan reference once I started mixing metaphors).
This "let someone else do the work, get it from a gem" mindset is what kills long lived projects. It has nothing to do with the tools and everything to do with experience.
You don't need 1000 gems. Managing anything more than core dependencies in a project can easily create exponential bugs and consume all your resources to fix. Remember left-pad?
This is true in any language and ecosystem and has nothing to do with rails/gems/ruby. The same is with Python, JS, PHP.
Senior/lead devs need to carefully curate what a project's foundation is. A strong, well-designed foundation means you have something solid to build on. If you don't understand what's in your deps, haven't read their code, seen how often they're updated and how many people actively use them, and can't say you're using 80% or more of the code in them, then don't use them.
Writing your own code is often the best route since it fixes your exact use case, no matter the language, libraries or frameworks being used.
You know what kinds of projects I see fail? The ones that try to architect everything from a small core set of libraries and build everything else themselves. I've seen hugely funded projects with years of development burn for that very reason.
I know I'm just a single data point, but it's worth mentioning that there exist projects that will absolutely do better in the long run by just installing all the gems.
I've worked on code-bases of small to medium size where people were afraid to remove self-written code because nobody knew if the code is actually called somewhere. It's not actually the gem that's the problem - it's being diligent. If you need a gem's function add it as top level dependency, don't rely on some other gem depending on it. require the code explicitly everywhere you use it. That way you can at least search your code base for it. Be equally diligent when removing code: Remove all code that only this code depended on. Other than that: Don't be afraid to break stuff.
If you replace "gem" with "library", this actually makes a lot of sense. It saves time to reuse existing (high quality) software.
Plenty of ecosystems do just fine with this mindset, starting with the JVM and .net.
The problem is specifically Ruby and the gems system.
Having dealt with my share of poorly supported components in Django/DjangoCMS, I can confirm your observation for Python. But betting on the wrong open-source component is often better than betting on the wrong proprietary technology. It comes with the territory.
> Writing your own code is often the best route since it fixes your exact use case, no matter the language, libraries or frameworks being used.
The problem is not you supporting your own code versus you supporting the code of a random dude from Internet, the problem is when you move on to the next job and some poor guy has to support and fix bugs in your code from 3 years ago. In my experience the average quality of community components and components developed in-house are not that different.
Personally I wish project managers started caring more about life-cycle planning.
Reviewing vendor libraries is always a good idea, but you aren't going to make it very far as a "Senior/Lead dev" if you are constantly using someone else's money to reinvent the wheel.
$ tree node_modules
Sure, there are some inexperienced (or simply bad) developers in the Rails community, just as there are in every community. But most of the Rails developers I've met also have experience in at least one other major web stack. They're not chasing "shiny magic features". They're chasing productivity.
For me personally, Rails offers greater productivity than any other web framework I've used. The truth is Ruby has little to do with that, as much as I happen to like the language. I could go on for quite a while about why I feel Rails offers the best productivity, but it boils down to two main points: the entire Rails community is focused on building web applications, and the Rails core team is mostly comprised of people actually building web applications. Features get added because they're needed. Common frictions get ironed out.
Rails isn't about "magic". Rails is about getting shit done, quickly and mostly-cleanly.
Have you worked much in Rails? I'm asking, because it sounds like your exposure is mostly from demos and blog posts, which are obviously going to use the five-minute, "happy meal" examples. A complex, real-world application is very different from a presentation where somebody whips together a blog engine in 15 minutes. For example, I can't remember the last time I saw somebody use the scaffold generator for production functionality, other than maybe for temporarily mocking up some functionality for stakeholder feedback.
I'm wondering if some of our difference in opinion is just based on Rails appearing easy to use. Take ActiveRecord for example. Sure, you don't need to understand SQL and database engines to get started using it, but if you're going to build anything serious, you better understand how to drop down to raw SQL when necessary, how indexing works in a RDBMS, etc. I personally think ActiveRecord, while a bit bloated, is a great mix of making the simple stuff extremely easy and getting out of my way when I need to do something more sophisticated.
And I think that applies to Rails overall. It makes the simple, routine stuff very easy, and it lets me do more sophisticated things with minimum complexity overhead.
/me is a Rails programmer, a lot of the time.
Use whatever tools/framework you want. Whatever it is you use, you will eventually become [the original] OP.
The reality is that every language/framework has warts. As you use it and get deeper into it, you will uncover these warts. Eventually, all you can see is the warts.
It's important to take a minute every once in a while and look at the thing you built from a user's perspective. See what problem you've solved for people, or just what cool new thing you've built. Staring at a bug backlog and a mountain of tech debt will always get you down about your project, but that's the reality of programming...bugs and tech debt.
 I write in Java because I work for a Java shop, but even if I had my choice of languages, I'd probably be using either Swift or a compile-to-JVM language.
Any tool that allows the rapid (almost effortless) accretion of complexity will suffer these problems. It goes with the territory.
My first exposure to web programming was HTMLScript at my university in the mid 90s (now called MivaScript and is the language used to build the MivaMerchant product). Soon after, I switched to PHP 3, but I found that I preferred Miva.
I have not touched Miva in years after spending some time as a freelancer in the late 90s, but because it is tied to a successful niche e-commerce platform, the language survives. It is very similar to Cold Fusion minus the enterprise level database support.
I worked with CF for a time in the late 90s/00s (when it was still a product of Allaire) alongside ASP (pre-.NET), and I actually preferred it to both ASP and PHP - mainly because the mixing of markup and DSL blocks seemed really unscalable.
PHP may have been seen as an evolution in web programming since it abstracted the mixing of logic and presentation a bit more than Miva or CF, but in retrospect I believe PHP was not an improvement. It made general web programming easier, but software maintenance is easier in languages like CF and Miva that embrace embedded markup/logic.
The frameworks provided by Ruby, Python, Perl, and even TCL (among others) seem to have reached a point where blurring the lines between client and server looks like the only logical paradigm change left - and that's not to say it's an improvement.
Assuming you were answering your GP, modern PHP is a totally fine language to program in with a lot of upsides (that would take too much space to enumerate here).
 Sometimes this is better than the 'N layers of crufty consultant supplied virtualisation' model. Sometimes it is not.
As people flee the platform, a huge number of opportunities are going to open up for those who still enjoy it. I can't wait.
Further down the road, maintenance drowns you.
I've railed against this mindset before, e.g. as regards security: http://williamedwardscoder.tumblr.com/post/43394068341/rubys...
I find large Python apps fairly unmaintainable too, but to a much lesser degree.
At least, everything in nature tends to get reduced to a local optimum by a straightforward optimization process of trial and error. There is no way to make a reliable and efficient complex system by piling up more and more crap.
I suspect there are easily visible patterns and trends, and they tend to repeat over and over.
The corollary is that specific languages can't fix cultural issues unless they're designed to do that.
Rails is well positioned for that.
My problem with routing is that the default is an implicit semi-REST pattern, and that routing a lot of verbs to the same controller method is tedious. I prefer Sinatra's "ground up" model instead, although you can certainly write a rails router that way (and sinatra has the same verb problem.)
But I concede that's a matter of taste.
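For contrast, the "ground up" model reads roughly like this in plain Ruby - a toy sketch, not Sinatra's actual implementation, with illustrative route names. Routes exist only where you declare them, and the verb tedium shows up as soon as several verbs need the same handler:

```ruby
# A toy Sinatra-style router: nothing is implied by convention;
# a route exists only if you registered it.
class TinyRouter
  def initialize
    @routes = {}  # [verb, path] => handler block
  end

  def route(verb, path, &handler)
    @routes[[verb, path]] = handler
  end

  def dispatch(verb, path)
    handler = @routes[[verb, path]]
    handler ? handler.call : "404 Not Found"
  end
end

router = TinyRouter.new
router.route("GET", "/posts") { "listing posts" }

# The verb tedium: PUT and PATCH must each be wired up explicitly.
%w[PUT PATCH].each do |verb|
  router.route(verb, "/posts/1") { "updating post 1" }
end

puts router.dispatch("GET", "/posts")      # => listing posts
puts router.dispatch("PATCH", "/posts/1")  # => updating post 1
puts router.dispatch("DELETE", "/posts/1") # => 404 Not Found
```

Everything is explicit, which is the appeal - and also why routing many verbs to one method means repeating yourself.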
and is the minimum level of complexity necessary for a human web interface.
Rails isn't necessarily better than other platforms at producing HTML: it does support rendering partials which is nice: http://guides.rubyonrails.org/layouts_and_rendering.html. So, rails is pretty good at it, but I can imagine other tools being perfectly competent at the problem as well.
No, it doesn't, though it certainly makes it simpler (than something not designed as a hypermedia format) to do HATEOAS if you think about it.
I've never used wicket, but it looks pretty good at this sort of thing.
I think that's wishful thinking.
The drawing you link is great though.
I can't imagine that an "eliminate boilerplate via convention at the cost of explicitness" mentality would have evolved independently in a world where assurances are earned by proving extra properties to the compiler.
However, modern compiled languages now formalise the shortcuts afforded by dynamic languages, e.g. type inference, generics, implicit conversions, typesafe macros, type classes, etc.
Similarly, conventions popularised by Rails-esque frameworks are being formalised using the tools listed above.
I fall into the scala camp, but have used rails at a previous job. My guess is I need 1.5x scala lines vs Ruby which I believe is a justified cost. Opinions of course vary.
Unless he's talking about Elixir and Phoenix, which IMHO is the future of web development.
Rails core team members have been building Phoenix. Its syntax is designed to be extremely familiar to Rails developers, but it's been built from the ground up to correct a lot of the core issues that come up long term with Rails. It's basically fast Rails.
You get a very similar level of productivity with the performance and fault tolerance of Erlang. Benchmarks show performance on par with Go.
It's really fascinating. I've been programming professionally for about 17 years now and it's the first time I've been truly excited about a language for a long time.
>correct a lot of core issues that come up long term with Rails.
I've never gotten to this point, what are some of these issues? From reading comments here, it seems like dependency hell could be one, but what do you think? I've had issues in my short time with outdated gems, but I don't know if this is necessarily an issue with the framework/language.
Monolith syndrome is another and one of the huge perks of Elixir is that it forces you to build in a way that makes separating out parts later on much simpler.
Performance is "the" major one, because it's one of those things you just can't easily overcome with Ruby. Long term, that usually leads to rewriting part of your system in another language like Go purely for performance's sake.
The routing layer is one of the biggest issues that is incredibly difficult to solve in non-compiled languages.
Rack middleware is great until you only want it on some of your requests. At that point things end up in a top level controller that other controllers are inheriting from which forces those parts to get hit after the router. Phoenix has something called pipelines that basically lets you define your middleware stack for sets of routes or even based on request matching like "accepts json" vs "accepts html".
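The pipeline idea can be sketched in plain Ruby (an illustration of the concept, not Phoenix's actual API): middleware are composed per route group, so JSON routes and browser routes each run only the stack they opted into:

```ruby
# Rack-style middleware as lambdas: each receives the request env and
# the next app, and decides whether to continue down the stack.
log   = ->(env, app) { env[:logged] = true; app.call(env) }
auth  = ->(env, app) { env[:user] ? app.call(env) : "401" }
parse = ->(env, app) { env[:parsed] = true; app.call(env) }

# Compose a named stack around an endpoint; only routes that opt in
# to a pipeline ever run its middleware.
def pipeline(middlewares, endpoint)
  middlewares.reverse.reduce(endpoint) do |inner, mw|
    ->(env) { mw.call(env, inner) }
  end
end

api_endpoint     = ->(env) { "json response" }
browser_endpoint = ->(env) { "html response" }

api     = pipeline([log, parse], api_endpoint)    # no auth here
browser = pipeline([log, auth], browser_endpoint) # auth only here

puts api.call({})                # => json response
puts browser.call({})            # => 401
puts browser.call({ user: "a" }) # => html response
```

With plain Rack you'd get this by inheriting from an authenticated base controller; per-route composition keeps the decision in the router instead.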
ActionCable has just been added to handle websockets but whether or not they will actually scale remains to be seen. Phoenix has been built for it from the ground up and it's very impressive. You can find the 2 million websockets on a single server benchmark with some Googling.
RAM and CPU usage is better. Concurrent processes have built in supervisors that know how to restart them on failures and know how to drop associated concurrent jobs as well. It's as easy as Go routines but with fault tolerance.
Erlang ecosystem libraries are usable and bring a lot to the table, so it's not like starting from nothing. The "awesome-elixir" page on GitHub is a great reference.
It's functional programming with immutable data structures, which forces you to think about problems a bit differently, but also ensures a better concurrency model, since there is no such thing as a mutex lock.
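A loose Ruby analogy for that no-mutex style (this is not Erlang's actual process model, just the shape of it): one thread owns the state outright, and other threads talk to it only through messages on a thread-safe Queue, so no lock ever guards shared data:

```ruby
# An actor-ish counter: the count lives inside one thread; other
# threads send messages instead of mutating shared state under a Mutex.
inbox   = Queue.new  # Ruby's Queue is thread-safe
replies = Queue.new

counter = Thread.new do
  count = 0  # owned by this thread alone -- no lock needed
  loop do
    case inbox.pop
    when :increment then count += 1
    when :read      then replies << count
    when :stop      then break
    end
  end
end

10.times { inbox << :increment }
inbox << :read
final_count = replies.pop  # blocks until the counter replies
inbox << :stop
counter.join

puts final_count  # => 10
```

Because the queue serialises the messages, the `:read` is guaranteed to see all ten increments without any synchronisation in the caller.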
The built-in hot code deploys from Erlang give you zero-downtime deploys on a single machine. Although I haven't yet tried this, people have suggested that this means you could deploy with millions of connected websockets without breaking the connections (in the same way that Erlang was built to do this with phone systems without dropping calls).
It's a little bit mind blowing and as you can probably tell, I'm excited. ;)
With any programming language and/or framework you have to pick your poison. Rails backloads a lot of big development obstacles that ultimately you may never actually encounter in the life of your app. The issue regarding gems can be aggravating. But the speed with which you can get your app built cannot be overstated. Rails is not one size fits all, and you might eventually outgrow it (i.e. Twitter). Be grateful the framework got you to the point where you could outgrow it; Rails helped you get there.
What is "the future" isn't really so interesting as what is productive.
Yes, performance matters a bit, but development time is usually much more expensive than adding a few nodes to an autoscaling group, and not worth the cost of using less fleshed out libraries.
> Yes, performance matters a bit, but development time is usually much more expensive than adding a few nodes to an autoscaling group, and not worth the cost of using less fleshed out libraries.
Sequentially, Go outperforms Python and other interpreted languages by a wide margin (usually a factor of 10). Things get really interesting with Go because it can be massively concurrent without messing around with large async refactors. At work, we're hoping the first iteration (i.e., before any async refactors) of our Python application will handle something like 5-10 concurrent requests per machine (without degrading response times), but I'm confident a single Go process could handle at least 10X that load with better response times. This order-of-magnitude difference seems fundamentally different from the perspective of "throwing hardware at the problem". Further, Go's library story is fairly complete (for web services; GUIs and other domains are still lacking)--at least it's been a long while since I've lacked a library for some task.
Go has specialized a bit towards low-level operations, and I tend to strongly dislike what it does with exceptions and the way those involved veto language features.
As for libraries, I'm talking about things on the level of, say, Django or ORMs.
Go is heavily optimized toward performance and rapid development. Go also takes a hard philosophical stance against redundant features, like exceptions, under the "less is more" and "simple is better than complex" axioms; it's actually the same philosophy that Python's zen espouses, except Go actually adheres to it.
Regarding libraries, there are numerous web frameworks and several ORMs. I'm unfamiliar with Django, but I will say that Go's standard HTTP library is a tremendous improvement over Flask. Also, I've not yet found an ORM that saves time or trouble over hand-coding SQL (in particular, SQLAlchemy is a real bear).
It's possible that in this scenario, going with a proven solution like Rails may be it. I just wanted to point out that our solutions sometimes have to account for future needs as well, or for continual development of a legacy product. Sometimes developers consider only what's best in the moment, and that has negative long-term consequences.
By the time a project gets large enough it starts optimizing for its major stakeholders. New use cases or new ways of rethinking common use cases come along, and the small libraries that approach it from scratch have a narrowly-defined advantage. If the advantage is significant enough (e. g. virtual dom for browser UI), then new frameworks start being written around them, bringing back some but not all of the features of the older frameworks.
At some point (different for each user/use case) the newer frameworks have enough functionality that people start considering them over the older ones for new projects. When enough of that happens, the older frameworks start looking like yesterday's software.
Most of the time, people will answer that moving away from Rails as you scale is a good problem to have. But the day Rails isn't fast enough or doesn't scale easily enough is coming a lot quicker as the Internet population expands.
And the good old trick of punting this to Moore's Law no longer works, as we haven't had much CPU improvement.
I am hoping JRuby with Graal and Truffle will fix that.
Since then, has Rails been the best choice at any point, in your opinion?
Fast forwarding, many apps are (or will be) big JS blobs using APIs/microservices back to the server. In that version of the future, frameworks like Rails can get in the way more than they help.
Or did you accidentally post this unrelated opinion in the wrong thread?
You don't have to use bleeding-edge libs instead of Rails...
hapi instead of koa
react instead of cycle
ember instead of react
What exactly does that mean? It's a vague response that people have used for years when they don't want to explain why they chose a particular tool. In my experience, many developers take the path of least resistance and use what they are comfortable with and that's why "it's the right tool for the job".
To discuss by means of example, let's talk about floors.
You decide to redo the floor in your place of residence. Do you:
0. Hire it out, using a cheap contractor
1. Hire it out, use a vendor in the middle of the quality/cost curve
2. Hire it out, using the classiest floor shop in town
3. Do it yourself, using a rental sander and a YouTube video for training; sand your floors down and refinish with the stain the guy at Home Depot recommended.
4. Do it yourself; tear the floors out and replace them from the beams up using reclaimed lumber from the local artists' collective.
5. Do it yourself; linoleum is fast to install, looks okay, and it's super cheap.
6. Do it yourself; spend 6 years learning carpentry from a master artisan in the wilds of Scandinavia, fell your own timber using an axe you made yourself, and assemble the timber into a masterwork of flooring using expensive Danish hand tools.
Personally I try to do #1 for floors and #4 for software, but that doesn't mean that works for you. If you have already done the first half of #6, I would be tempted to just go all the way.
That said, if someone gets bored we could do a sort of recurring survey and rate different tools for different strengths and weaknesses. Something where you could say "optimize speed of development versus XML parsing quality" and it would give you a nice chart, as long as you provide a few quick survey answers.
Might give writing it a try later on. Would be interesting to see what metrics people care about.
Speaking in generalities isn't very useful.
Each project has to weigh the pros and cons of each decision. It's basic engineering.
I freely admit this is somewhat irrelevant, though, as most languages differ like a felling-ax and a splitting-ax, or two sizes of screwdriver, as opposed to hammer vs. drill.