Upgrading GitHub from Rails 3.2 to 5.2 (githubengineering.com)
510 points by masnick on Sept 28, 2018 | 263 comments



No mention of the rails:update rake task, which is a very valuable tool for getting your boilerplate updated. I'm guessing that at GitHub there is so much customization accumulated over the years that they wouldn't get much out of it, but it's still a valuable exercise to run through, and it's worthwhile to keep your boilerplate aligned as much as possible, since that makes gems, documentation, and everything else more likely to match shared community experience.

Also, I want to add a big proviso to the lesson "Upstream your tooling instead of rolling your own". Historically in Ruby, and now even more so in Node, the ease of publishing new packages and the trendiness of the language have at times led to a lot of churn and abandoned packages. The trick is to pick stable dependencies, and that requires quite a bit of experience and tea-leaf reading to do right. Often, maintaining your own stripped-down internal lib can be a performance and maintenance win over including a larger batteries-included lib that ends up poorly supported over time. For example, a lot of people got burned by the state_machine gem, which at one time was very hot and actively maintained but went on to be left in limbo (https://github.com/pluginaweek/state_machine).
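To make the "roll your own" option concrete, here's a minimal sketch of the kind of stripped-down internal state machine that could cover simple uses of a gem like state_machine. The class name and API are invented for illustration, not taken from any real project:

```ruby
# Hypothetical minimal internal state machine -- a sketch of the kind
# of small, dependency-free lib that can replace an abandoned gem for
# simple use cases. All names here are made up for illustration.
class TinyStateMachine
  class InvalidTransition < StandardError; end

  attr_reader :state

  # transitions maps each state to the states it may move to,
  # e.g. { pending: [:active], active: [:archived] }
  def initialize(initial, transitions)
    @state = initial
    @transitions = transitions
  end

  def transition_to!(new_state)
    unless Array(@transitions[@state]).include?(new_state)
      raise InvalidTransition, "#{@state} -> #{new_state}"
    end
    @state = new_state
  end
end
```

A dozen lines like this won't give you callbacks or persistence, but it also can't be abandoned upstream.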


I was always disappointed by rails:update - look at all these crazy changes, I can't do all this stuff! - until I figured out I was using it the wrong way. A good way to use it is to run it in a different branch and then scroll through the diff, picking out items that you want to update. This lets you pick and choose and do even that incrementally, so you can do some easy ones first (like removing unnecessary MIME type registrations) and then move on to more complicated ones.

Longer writeup is at https://thomasleecopeland.com/2015/08/06/running-rails-updat... - it's 3+ years old now, but, hey.


I find `git add -p` useful for that kind of work.


I've never quite gotten the hang of `git add -p`, so I use `git gui` (incl. with Homebrew version of git, among others), or other tools that let me do partial staging/unstaging via line-selection with a mouse. It's a handy way to onboard oneself to such a powerful capability, if one feels a bit overwhelmed by the relatively complex UX of the CLI approach.


> Also, I want to add a big proviso to the lesson "Upstream your tooling instead of rolling your own".

I feel like this bit needed more of an explanation about how this applied to GitHub.

If I were to write a post about working in a 10 year old Ruby codebase I'd definitely include "Kill your dependencies" as a bullet point.


> I'd definitely include "Kill your dependencies" as a bullet point

Or at least your monkeypatches!


Yeah, I don't really understand this, especially the security aspect of Gems.

Every piece of externally-maintained code is a security risk, surely? You are implicitly trusting the maintainer of that Gem to not hide bad things in their code. And every Gem that they depend on. If the Gems are old and the maintainer is unpaid and doing other stuff, how sure can you be that they're still vetting all contributions for security? Or that they haven't handed over the maintenance to someone you no longer trust? Or that the maintainer hasn't succumbed to economic pressure and included some malicious code in their Gem?

Or do you have to manually review every single line of code in every dependency yourself? That seems like a lot of work... I would definitely prefer to write my own code for a feature than review thousands of lines of someone else's code to spot any problems.

I get that the core Rails codebase gets security-reviewed regularly, but does that happen for Gems? And is it methodical and thorough, or is it just "lots of eyeballs"? And if so, is there a threshold of Gem popularity below which there aren't enough eyeballs to spot problems and the Gem should be considered insecure?

And if you do spot a problem, do you report it and hope the maintainer has time to do something about it? Or do you write a PR and submit it, hoping they accept it? Doesn't that then mean you're maintaining someone else's code base? Again, I would massively prefer to write and maintain my own code than maintain someone else's code (or wait for them to fix a problem that they may no longer care about).

How do you build a secure application for something as trusted as Github while gleefully incorporating all this third-party code?


I'll add http://railsdiff.org, which I also find quite useful for keeping track of framework defaults etc.


By the way, it's been changed to `bin/rails app:update` since 5.0: https://guides.rubyonrails.org/upgrading_ruby_on_rails.html#...


For my team, this article comes at some interesting timing, since we're bumping into some of the same issues with Rails.

Rails is now a mature framework and part of the problem is its lack of consideration for large existing codebases running in production. While there are nice tools to help migrate (e.g. rails:update) that hit surface issues, the deep problem is that there are a lot of decisions made going from version to version that are obviously unfriendly to established projects. e.g.: https://github.com/rails/rails/issues/27231

Additionally, there are a lot of gems that are losing momentum, which are near-core to Rails. e.g.: https://github.com/thiagopradi/octopus/issues/490. This is a side effect of the above issue, where the alternatives to Rails are taking a lot of the community away to focus on newer/shinier things. Fortunately, we have companies like GitHub and Shopify that are still very much invested in the success of the ecosystem.

All that said, it's still a great framework to go from 0 to production with a new idea or project.

Other ecosystems we're entrenched in (Node for example) have their share of issues as well, but we won't go into those.


> part of the problem is its lack of consideration for large existing codebases running in production

That's just untrue. Almost half of the Rails core team works at GitHub and Shopify, which both have huge 10-year-old codebases, and I can tell you they take breaking changes very seriously.


> part of the problem is its lack of consideration for large existing codebases running in production

To be fair - this has gotten a lot better. Upgrading 2.3 -> 3.2 was terrible. 3.2 -> 4.0 was terrible. 4.0 -> 4.1 was rough. Since then, I've found the upgrades pretty easy - to the point I ran rails 5.2.0-rc's in production for a while.

As you fairly note, a big problem is that related gems lose momentum and they don't get updated - which blocks other updates. On the flip side, they usually aren't that hard to update and submit a PR on, either.

Even not having those patches merged quickly is not so bad in ruby - it's easy to tell bundler to look at your fork of a gem on github rather than pulling the upstream.
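For anyone who hasn't done this: pointing Bundler at a fork is a one-line change in the Gemfile. The gem and repository names below are placeholders:

```ruby
# Gemfile sketch: use your own fork until the upstream patch is merged.
# Gem and repo names here are hypothetical.
gem "some_gem", github: "myorg/some_gem", branch: "rails-5-compat"

# Or pin to a specific commit for reproducible builds:
gem "other_gem", git: "https://github.com/myorg/other_gem.git",
                 ref: "abc1234"
```

Once upstream merges your patch and cuts a release, you drop the `github:`/`git:` option and go back to the released gem.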


What was your problem with going from 3.2 to 4.0?

I think the upgrade path from 2.3 being terrible is generally accepted as being true. But I don't remember any hard problems from 3.x onward.

At least for Rails itself. Gems dependencies are another problem.

Like on one project, we had pains every time Rails bumped a minor version, because the previous devs thought using Squeel instead of the built-in ActiveRecord query interface was a good idea. Just for being able to write slightly "nicer" queries, and now this is a major blocker for going to Rails 5.
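For anyone who hasn't seen Squeel, the gap it papers over is small. A hypothetical query against a made-up `Person` model, in both styles:

```ruby
# Squeel's block DSL (the gem that blocks the upgrade):
#   Person.where { (name =~ "E%") & (age > 30) }
#
# The equivalent in plain ActiveRecord, which survives Rails upgrades:
Person.where("name LIKE ? AND age > ?", "E%", 30)
```

A slightly nicer syntax rarely pays for being chained to an unmaintained dependency.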


It's been a while, honestly I'm not totally sure.

Generally speaking at that job, though, we tended to not wean ourselves off the deprecated features - we'd use the extracted gem to keep the functionality going. Which works fine for one minor release, but doesn't work when you are 3 minor releases later.

Strong parameters is one that hurt bad. We used the workaround gem to avoid that for a long time, and it just made it more painful when we had to get rid of it.

I think generally there were a lot of related gems that were hard to get updated along the way from 3.2 to 4.0 as well. Seems like a fair amount of libraries were late to the 4.0 party, then jumped ahead to 4.1 or 4.2, and never really got ironed out well against 4.0, so you'd have goofy compatibility issues.

Squeel was a related problem that was horribly painful to remove from that stack as well, I forgot about that.


Similar post from Shopify about a year ago on their experience upgrading from Rails 4.2 to 5 https://shopifyengineering.myshopify.com/blogs/engineering/u...


And some similar discussion on "Shopify now on Rails 5.0. started 12 years ago on 0.5, the First version released" https://news.ycombinator.com/item?id=13448219


It's scarily common for organizations to be running ancient versions of Rails in production. At my last gig we spent six months upgrading a Rails 2.3 application to 3.2, and before that I was working with a team that was maintaining an application written in Rails 1.x. Kudos to GitHub for sharing this; I really hope they do future posts going into more detail. In my experience, one of the hardest aspects of upgrading Rails is that so much of the really useful information has either fallen out of Google or succumbed to link rot.


This is true of literally everything.

People don't upgrade their dependencies across the board, and it's a massive problem for long-term security and maintainability.


My current company doesn't even lock most of their dependencies. At first I thought it was crazy (and it is crazy) but it does mean we address compatibility issues immediately. We are mostly running background jobs though so it's safer than anything customer facing


My opinions on this have changed a lot over the past few years. At this point, I think anything that pins dependencies to specific versions is asking for long-term maintainability nightmares. Updates of your dependencies, operating system, language version, etc. should happen weekly, and any instance where you have to pin a dependency to an old version (e.g., a major version release that has some compatibility issues) should be dealt with ASAP.

Obviously this isn't a tenable position in every circumstance, but I think it should be the default. Particularly in a world where the vast majority of security fixes go without an announcement or CVE.


Better idea: if you release rarely enough, pin your dependencies, but upgrade them automatically after each release. This way you will have time to test and fix any breakages before the next release.


The longer you wait, the harder it is, and you don't want to block a release on dependencies. So unless you're not doing development on the application every week, it's simple enough to update every Monday while getting over the fact that the weekend wasn't long enough.


Letting your dependencies automatically update themselves is a great way to introduce security bugs. Not every update is necessarily an improvement.

I prefer the middle ground of pinning all dependencies, and using an external service to warn if any dependency has a known CVE.
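That middle ground might look something like this in a Gemfile (version numbers are illustrative; bundler-audit is a real gem that checks Gemfile.lock against a database of known CVEs):

```ruby
# Gemfile sketch: pessimistic pins let patch releases through while
# blocking surprise minor/major bumps. Versions are illustrative.
gem "rails",    "~> 5.2.1"   # allows 5.2.x, not 5.3
gem "nokogiri", "~> 1.8.5"

group :development do
  # scans Gemfile.lock for dependencies with known CVEs
  gem "bundler-audit", require: false
end
```

Running `bundle audit` in CI then turns "an external service to warn" into a failing build.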


I'd wager that less than 5% of your language dependencies bother to issue CVEs when they release security fixes. They get a bug report, fix the problem, and carry on with their life. This also happens all the time with bugs that nobody realizes are security vulnerabilities, because they're "just" crash on invalid input bugs.


Isn't that similar to golang's (now deprecated) stable HEAD philosophy? On larger projects with tons of human resources that works out OK, but for a smaller team wouldn't living on the bleeding edge & dealing with every issue be a ton of work?


The only times I've ever run into issues updating dependencies (other than across major version upgrades of deeply-integrated deps) is when people have put it off for months or more.

Most updates aren't breaking. The overwhelming majority of updates that are breaking are trivially discovered and fixed with one or two tweaks. Almost all the rest can be fixed with a single search/replace.

Being eight minor versions (or two major versions) behind and having to find and fix all of these at once is when people land themselves into trouble.


It’s not, really. Getting there from a long list of pinned deps can be hard, but staying there is easy.

If you update regularly and have decent tests, it’s easy to find and isolate the problem. And if it’s more than you can fix that day, pin it for now and try again later.


I like to pin dependencies but upgrade them weekly with CI. All my tests get run and a PR is created automatically. This gives us both stability and visibility, with the trade-off of more upfront work to get the automation going.


This is great, but file bugs if the dependencies don't pass CI!


It's not just Rails; this is common with many frameworks and even turnkey solutions from vendors. Nothing will ever meet your needs 100%, so you end up with some customization. It really depends on how integral those customizations are and how tightly they are coupled to the product being used.


> Upgrade early and upgrade often

Not sure about the upgrade early. It’s a different kind of pain to be one of the first to use a new Rails version vs lagging a couple of months behind.


Waiting a few months is good. It's a reasonable strategy to wait for x.y.1 but Rails 5.2.0 was so stable that 5.2.1 took ~6 months to appear.


I agree, at least for anything that's hard to test and has some inertia, like a larger code base.

For example most of our configuration management is setup to just pull in the latest cookbooks from upstream during tests, and as long as all integration tests across all projects succeed, they get uploaded to our chef-server.

People argued that it would be annoying because things would break all the time. And yeah, things break with updates, though the opscode community is remarkably disciplined about semver. But that's what we have tests for.

And honestly: I'd rather deal with one broken update per day than 300 broken updates once per year. One bad update usually requires some nudging and that's it. 300 bad updates at once are a fully blown nightmare and you'll need days just to figure out what is even going on.


Yeah, being a first adopter can be a pain, but I don't think that's what they're saying here. Github was 4 years and 2 major versions behind on upgrades.

Upgrade Early does not mean Upgrade Instantly. I would argue that "lagging a couple of months behind" is upgrading early, because these upgrade horror stories always come out of companies that are years and years behind.

Having some patience and waiting a little, combined with the discipline to not wait too long is part of a mature skillset.


I think there's two sane options:

* track master, and notice breakage as soon as possible (you get much smaller deltas when trying to figure out breakage, which is always a big help), and be able to fix it or report it upstream ASAP, or

* hold off slightly till the new release has had more bugs found post-release.


The main issue is more with other gems that need to be updated as well. It may take months. You can't do that on your own.


Fork, use your fork, send a pull request. Almost every Rails upgrade, I make forks and send patches on gems I need that I'd forgotten about. I get good responsiveness and merge rates. Most of the time a gem needs to support multiple Rails versions, so in that case I usually send a pull request setting up the Appraisal gem to test for appropriate compatibility.


You can't do them all, but you can certainly pitch in and submit patches. Outside rspec-related gems, upgrading usually isn't that hard.

And like all open source, somebody has to do it.


Yes we tried this once, with webpack 4, and realized in about an hour that like half our packages were incompatible and had patches weeks out.

I'm talking big packages, like less-loader. html-webpack-plugin still requires the @next version to work right with webpack 4.


The Appraisal gem is excellent for ensuring compatibility across many Rails and Ruby versions. Adding it to a project with a good test suite is a pretty good guarantee of an easy upgrade.
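For reference, an Appraisals file is just a small Ruby DSL; something like this generates one gemfile per Rails version to test against (the version numbers here are illustrative):

```ruby
# Appraisals file sketch: each block produces a gemfile the suite can
# run against, e.g. via `bundle exec appraisal install` followed by
# `bundle exec appraisal rake test`.
appraise "rails-4-2" do
  gem "rails", "~> 4.2.0"
end

appraise "rails-5-2" do
  gem "rails", "~> 5.2.0"
end
```

CI then runs the suite once per appraisal, so a compatibility break against any supported Rails version shows up before release.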


I'm not sure how GitHub can say they "learned" this yet. Wasn't their last rails upgrade also a huge ordeal and a blog post? How about get back to us in a few years and we'll see if you learned anything this time around.


As a comparison, here's gitlab's journey (issue opened March, 2016): https://gitlab.com/gitlab-org/gitlab-ce/issues/14286

Looks like the first scheduled milestone was 9.5 (a year ago) and the current is set for 11.4 (next release).


> Upgrade early and upgrade often

It seems daft to keep seeing this lesson learned by tech companies, and to keep seeing blog posts where most of the pain could have been avoided by simply making upgrading a key feature.

Instead, tech managers and engineers seem to make the same mistakes over and over again, delaying those upgrades, until suddenly they discover it's a hard task to upgrade. I get delaying to _some_ degree, it's better to let other people figure out those sharp bits on the bleeding edge for you, but you need to set an explicit target for upgrading.

At another large tech company I worked for, it took the security team swinging the sledgehammer to get teams to upgrade from known-vulnerable versions of Ruby on Rails. When they came to do it, they discovered the changes were so extreme that the effort involved in migrating was likely more than the effort involved in a complete re-write (they did at least have pretty comprehensive tests)


It is easy to say that in the abstract, but you always have a finite time/resource budget. Effort spent upgrading is effort not spent on other important work. Is the other work more important in the long run? That question is not trivial to answer.

This is why we call it 'tech debt'. It is just like any other debt: you take it on because you don't have the current resources to avoid it, and you calculate that it is worth taking on. But then you are carrying the interest on it, and if you aren't careful it will grow to be unmanageable, and all your dev effort goes into just paying the interest without paying down the principal.


It absolutely matters. Security is a feature. If you see upgrading as merely technical debt, you're never going to give it the appropriate attention.


Hrm, your statement seems to suggest upgraded versions are primarily security updates. Good projects backport security updates (for LTS versions anyway), and new versions with new features come with their own new security issues.


> Hrm, your statement seems to suggest upgraded versions are primarily security updates

Not in the slightest. But security updates are part of them.

> Good projects backport security updates (for LTS versions anyway), and new versions with new features come with their own new security issues.

There's a limit to how far back patches go, even on the good projects. Rails 3.2 hasn't received a patch in over 2 and a half years, and while there is some desire to backport security fixes, you're dependent on a single individual having spare time to work on it, where more up-to-date releases receive patches far faster, and are far easier to test and integrate in to your existing platform.

I've seen teams, and heard of companies, that are still running services on top of Rails 2 and the like. Now when they look at upgrading Rails the sheer number of changes is mind-boggling, and often represents a non-starter.

I'm certainly not arguing to keep up on the bleeding edge, but making routine upgrading part of your regular workflow is most definitely an important part of remaining secure.


I never said it wasn’t a feature, just that not upgrading is a form of technical debt, with all the same reasons for existing and sticking around.


To twist the metaphor slightly, it's technical inflation.


Someone I saw on Twitter (I forget who - apologies) suggested "DepOps" - short for "Dependency Ops" - as a job role, defined as a person dedicated to keeping an app's dependencies up to date.

I doubt anyone would enjoy that gig, but it would be a very useful person to have in almost any multi-person team.


The regular thread of people piling in to criticise dynamic languages. Instead perhaps people could suggest a better language/framework that is more productive than Rails, and has had a long lifespan in a large codebase?


Not criticizing RoR; it is the grandmother of all the web frameworks we enjoy in practically every mainstream language. Also, it is dynamically typed. I prefer static types in the code I write because I am less likely to make mistakes and I like to have a more explicit 'contract' in place between code boundaries. However, I am open-minded about why people prefer dynamic types, and I imagine it is all to do with speed of initial development, akin to "Move fast and break things" and "MVP".

That said, Microsoft's ASP.NET MVC would be a good contender for your call to suggest a better language/framework.

It is indeed very productive. It has a massive package ecosystem, great documentation, a very long lifespan (not as old as Rails, but probably about 10 years old now). ASP.NET MVC is all over the enterprise so it is well proven, plus StackOverflow if you want a high traffic example - there are probably many more. Where I think it beats Ruby in particular is the C# language is really excellent.

C# is my favourite language for getting things done and I've tried Ruby, Java, Python, Haskell, Basic and Javascript and gone into some depth with all of those. The reason is the excellent language features, one of the first to have async/await, good generics system, Linq is awesome, there is even some dynamic types support if you need it - which is nice for web page stuff while you are experimenting and then can 'shore it up' with a class later on.

The downside of C#, I think (but I need someone else to confirm), is that it is probably a bit confusing for a newbie, because of the vast number of features and the many ways of doing things that come with its history. Not so bad if you have been doing it for years.

Another downside with ASP.NET is the different ways of doing things in .NET Core, so there is a lot of relearning to do, plus tutorial roulette: if the tutorial is using .NET Core, you may not find it easy to integrate into your classic application. Although I am sure RoR suffers similar upgrade issues.


ASP.NET Core MVC would be the way to go today actually - it runs on Linux, does a lot of stuff for you (its IoC container is pure magic) and is quite fast (IMO).


What's an example of a good product in it? Because in my opinion SharePoint and OneDrive are terrible. And the Azure Portal is only now getting somewhat bearable.



What makes the container pure magic?


One of the other big advantages to static typing is better tooling and in that respect C# really shines.


Visual Studio is lovely to use, it has aged well.


Do you run ASP.NET on Windows? Or on a Linux distro?


At my company we use ASP.NET on both Windows and Linux boxes; it depends on what our dependencies are (some of our older libraries are unfortunately windows only).


I've worked at companies that are more traditional and run ASP.NET classic on Windows or Azure App Service (which is effectively Windows).


“more productive” is such an impossible argument. This idea that you need to use a dynamic language to achieve fast iteration makes no sense. I’ve been building sites in scala for years and it’s more than quick enough. Scalatra/Akka HTTP for web, slick for DB, lift/play/jackson/circe for json


On the other side, I've done both things (webdev with Scala/Play/Circe/Akka, webdev with Rails), and the Rails stack was so fast to develop with that we could leave just one person to do that job while the rest kept developing our core logic in Scala.

IMHO there isn't much to gain with static languages when doing web development. A good framework is much more important. Static languages are awesome when coding business rules, though.


Static types and languages help a lot when you have more than one person working on things honestly.

One person can keep a rough mental model of how the project fits together in their head, which lets them have good intuition about contracts between different subsystems.

When you have 5 developers on one project, that intuition quickly breaks due to developers mental models being slightly different. That's where having a type-system really helps.

I'd argue that you lucked out in that the stack was able to be developed by one person, and that's what enabled it to be dynamically typed with relatively little loss.

It's once you have 3+ coders working on the same code-base that static types really start helping.


> Static types and languages help a lot when you have more than one person working on things honestly.

I consider myself working alone to be more than one person when I come back to something I haven’t touched in months. For all practical purposes, I feel like I’m reading someone else’s code.

A statically typed language makes it much easier to make changes in that situation without worrying about unforeseen breakages.


what is web development vs business rules? web development sounds like your javascript webpage. business rules sounds like everything else


In our case, we developed a CRUD API that allowed different kinds of users to access some data. Rails works great for this. Meanwhile, we had another service that interacted with an Ethereum blockchain, and things get so complex there that we kept using Scala for it.


Had a similar experience lately where we had a team fresh off a rails project get up to speed on Scala and complete essentially double the work we thought they'd get done. That being said it was much more complex than basic CRUD


There doesn't seem to be much demand for backwards compatibility from a framework. I use Web2py [1] which has remained backwards compatible for 10 years or so but I don't think has garnered much traction amongst high profile projects. It does require a bit of discretion and restraint but is doable while continuing to make progress.

1: http://www.web2py.com


I had a customer with a web2py application. I liked that I could write Python in the HTML files, like Ruby in RoR views. Django's castrated templating language is a pain in comparison and a productivity killer. There are some good form helpers, especially for admin applications. Maybe not that useful for public-facing ones.

The defaults are quite terrible. For example, the original developer put everything in the default.py file because there is nothing in the framework that suggests creating multiple controllers or models. The idea of auto-generating and auto-applying migrations is extremely dangerous IMHO. The ORM is pretty verbose and it pollutes the code with all those db() calls. On the other side, Django pollutes the code with all those Model.objects, so it's no worse than that.


Although arguably not as productive, Java fares pretty well in that regard, particularly when it comes to maintainability.

Still, Rails is terrific. If it wasn’t for Rails Java today wouldn’t be as productive either.


Yeah Java isn't nearly as bad as it used to be, and it's pretty seamless to write your business logic in Kotlin if Java's verbosity annoys you.


Honestly Java is getting better at a good pace now. It’s nowhere near as bad as the java 6 days.


I've been using a lot of Go to write web applications the last three years, and I find it's a very smooth experience.

I don't have a hate-on for Ruby on Rails; my last job was as a Rails dev and I liked it. I still like, even though I haven't used it in a while (simply because I have no need). It also definitely has some advantages over Go.

But Go offers some great advantages: type safety; still fairly productive; boring understandable Just Works™-kind of language, and has most required components in stdlib (you need a few external dependencies, but not many, although how many depends a bit on your approach as well).

The biggest downside is that there's no standard web framework, and that a lot of Go devs seem to eschew them, too. There are some good reasons for that in some contexts (RoR is not a solution for everything either, that's why Sinatra exists), but it has the effect that a lot of organisations crank out their own internal semi-frameworks. Basically it's Greenspun's tenth rule.

Buffalo is probably the closest thing Go has to Rails, but I haven't had the opportunity to check it out in depth yet. There is also Revel, but that has some unfortunate design decisions IMHO and isn't something I would recommend.

Is Go a complete drop-in replacement for all RoR use cases? No; not yet anyway. But I think it is for a lot of cases.


>I've been using a lot of Go to write web applications the last three years...

> ... The biggest downside is that there's no standard web framework

I hate to say it, but this is why I feel justified in earning 50% higher pay than the younger devs on my team, who are more passionate, more ambitious, faster, and put in more hours, yet always want to rewrite the boring web code in whatever is the new cool language of the year.

I mean, if you feel it's worth it to stay late and work weekends to reinvent the same old boring code for marshaling HTTP to types, processing forms, rendering templates and JSON, etc. - precisely the things that were mature and battle-tested in dozens of other frameworks years ago - I guess that's fine if you're at least learning why it's such a bad idea (unless your job is to write such a framework instead of actual business requirements).

But your managers and especially your customers couldn't care less that you did so.


I never advocated rewriting anything. If you have a RoR/Django/PHP/Java app and it works well: keep on truckin'!

But I've also found that writing new applications in Go has some good benefits over e.g. RoR. Things like processing forms and whatnot is all in the standard Go library. No need to reinvent anything.

I understand some people feel frustrated with "new tech of the year, yawn" (JS frameworks being the canonical example), but I don't think that standing still is a good option either. There is probably some reasonable middle ground.


How's testing in go? What about unit tests when you want to hit the database, do you wrap that in transactions?


Not quite sure what you are looking for, but Go, about 7-8 years ago, was one of the first languages to have unit testing, benchmarking and code coverage built into the standard library and tooling.

The unique implementation of automatically satisfied interfaces and first-class functions (i.e., easy dependency injection) in particular makes testing very easy, and designing code around these abstractions for testability also improves modularity.

For databases, generally you just use the standard library for access, and well-tested database drivers provide the actual access mechanics. So if you want, you can decouple tests checking the business-logic side of database use (e.g. instantiate and hit an in-memory DB locally during testing - or even use a completely different driver if you're not using a weird "flavor" of SQL) from actual connectivity testing (where you wrap things in a transaction against a real mirror DB).


Check out https://gobuffalo.io/en/docs/testing/

Coming from a rails background, this was the closest out of the box testing experience. Tests are generally handled within a transaction and also offer fixtures and other goodies. Definitely recommend giving it a test drive.


here is some code from various Go codebases I work on. I had to anonymize some stuff because it's work related and the indentation is all borked, but this is the gist. I tend to wrap things like this up in closures. I almost completely use the go stdlib. In case you were curious.

https://gist.github.com/kyleterry/55468cb4ff9ce2e9f0156491c4...


Yeah; I use https://github.com/DATA-DOG/go-txdb

Works pretty well.


I like Go, but there is nothing like RoR


You may want to try Phoenix


I believe that is elixir :)


Or you could use crystal with amber or lucky framework and get the best of both worlds (C-like speed, typed, compiled, but with many dynamic language features faked with macros, and ruby-like syntax and conventions).


But also with the standard problems of a new language, a small ecosystem and immature tooling.


Those problems can also be a blessing. In the Rails ecosystem you end up with hundreds of gem dependencies that you probably don't need. In NPM, it's 100x worse. That is why, imo, many people cry foul when an ecosystem gets too mature (without realizing that that is why). There can be too few, but also too many packages.


> that is more productive than Rails

So, spending a year and a half on a major version upgrade of your web framework is "productive" how exactly?

I mean, whatever time you think you've saved by using Rails during the initial prototyping phase (and I don't think even that's true), you'll more than pay for in maintenance costs.


The time spent to upgrade a framework is at least partially compensated by the time you save not having to implement functionality provided by the framework. And that's not just overt features of the framework, but also security fixes, compatibility with a broader ecosystem of third-party tools and libraries, and often a less complicated dependency story (compared to cobbling together the features provided by a framework from a few dozen Node modules or the like).

Obviously, there's a break-even point there that's different for every framework and every application. But I think the ongoing popularity of web app frameworks suggests that a lot of people find the tradeoff acceptable.


So. What is your suggested alternative stack for rapid MVP prototyping?

Personally I'm fascinated with Clojure but it doesn't yet have the ecosystem (for me as a Clojure noob) to easily punch out product prototypes. Likewise I've messed with JS frameworks and they're fun but they don't offer me proven established patterns; too many competing and fast-moving choices. I'd rather find my market fit first then worry about scaling / paying off tech debt.

Again, what's your suggested model? To me Rails offers a semi-boring yet productive middle ground between high and low ceremony. TANSTAAFL.


If you're talking about CRUD boilerplate apps, literally any modern web framework in a static language.

Again, even if "rapid prototyping" was a thing in Rails (i.e. rapid as opposed to what?) you start paying back the price tenfold once your project becomes moderately complex, test coverage or not.

So if your question is specifically "What's a good stack for a blog I'll never touch again or maintain with X and Y integration, tomorrow", then ok, cool, Rails.


Honest engagement.

> literally any modern web framework in a static language

I was part of a SaaS startup using Java Spring and the verbosity and front-loaded design costs slowed our iteration enough that we couldn't respond to market input and stalled.

My next SaaS engagement was with a language more maligned than Ruby - Perl - and while the codebase was mildly fugly we could move quickly and I tell you what the founders made serious bank.

> you start paying back the price tenfold once your project becomes moderately complex

This is a problem you want to have because you have skin in the game. Better to start paying back your debt than to never kick off. Back to you - you didn't address my question - if you were to launch a product idea, what static stack would you pick? Elm is the only choice that comes to mind, and that's pretty niche. Ask yourself why. (I'm not dissing elm, it looks lovely)

There's no definitive answer here. Rich Hickey makes some excellent talking points about static vs dynamic. I'm not saying you're categorically wrong, but you haven't presented any science to support your dismissive tone. TANSTAAFL. Pick the right tool for the job. I wouldn't use Rails to build WhatsApp, but for many midsized SaaS products Rails is a cromulent choice.


> I was part of a SaaS startup using Java Spring

You may accuse me of shifting the goalposts, but do notice I said modern above. I'm not up-to-date with Java anymore, but iirc Play [0] would be worth checking out.

> This is a problem you want to have because you have skin in the game.

I'm saying that this is a problem you don't have to have at all.

> Back to you - you didn't address my question - if you were to launch a product idea, what static stack would you pick?

In the context of SPAs, I'd choose some of the Purescript React libraries and some generic Haskell REST/Websocket server, like Warp/Servant (this is a stack I've used in production).

For a standard web app, I'd go with Yesod [1], which is a somewhat Rails-like framework, but it fully leverages the advantages of static types, i.e. it turns things like dead links, trying to inject user input into a DB query without escaping it first, invalid queries (i.e. querying a person by product id), and many more into compile-time type errors.

> I'm not saying you're categorically wrong, but you haven't presented any science to support your dismissive tone.

Still, to claim that the context of the current thread at least doesn't suggest that Rails is a maintenance nightmare would be disingenuous at best.

> Pick the right tool for the job.

That's what we're discussing here, right? I can't see when Rails would ever be the "right tool" for anything (except for the one use case I alluded to above) but that's obviously subjective, rendering that phrase utterly meaningless.

[0] https://www.playframework.com

[1] https://www.yesodweb.com


Thanks for the thoughtful response. I think this comes down to low ceremony (move fast but easy to make a mess) vs high ceremony (upfront cost when adding features but high safety).

I'll check out Warp and Yesod, although tbh the next stack I'd like to try is Clojurescript/Spec/Figwheel. Spec driven design seems to offer some of the benefits of static typing with the flexibility of dynamism.


> I think this comes down to low ceremony vs high ceremony

One of the most cited qualities of languages like Haskell is their "refactorability" - i.e. one can routinely rip out the guts of a 100KLOC codebase and replace fundamental data structures and functionality in order to implement new features in a couple of hours, no problem whatsoever.

In Rails, I'd do everything in my power to not touch any "important" code if I can get away with it, because no matter how good the test suite, it's still a test suite. It shows the existence of bugs, it doesn't prove their absence. Many people claim that's not a problem in practice, but I personally tend to avoid more fundamental code changes in dynamic codebases precisely because of that fact.

To drive that point home, I'd trust a Haskell codebase without a test suite more than a Rails codebase with an average to good test suite.

Also, it seems to me that "moving fast" is directly at odds with the topic at hand, upgrading Rails. If Rails allowed one to move fast, then surely upgrading Rails shouldn't take a year and a half, even in a big codebase?

> Spec driven design seems to offer some of the benefits of static typing with the flexibility of dynamism.

Definitely, though it's not a replacement for types. Also, the probably more important thing here is that Clojure is functional, making a whole class of bugs stemming from mutability and OOP impossible.

If you're planning to go down the Clojure/spec route, you might want to check out Ghostwheel [0], a lightweight DSL which seems to be gaining a lot of traction lately.

[0] https://github.com/gnl/ghostwheel


1) I think GitHub is a long way past prototyping... they just sold their rough sketch for $7.5B

2) A year and a half of how much effort?


> A year and a half of how much effort?

> The project originally began with 1 full-time engineer and a small army of volunteers. We grew that team to 4 full-time engineers plus the volunteers.

They had four full-time people working on upgrading Rails at some point!


I was surprised to learn that GitHub runs on Ruby on Rails.

Interesting, I didn't know Ruby had been around just as long as PHP. I would still choose PHP if my opinion matters, just based on my experience with the slow performance of Ruby on Rails when I gave it a go a few years back.

PHP - 23 years

Ruby - 23 years

Java - 23 years

JavaScript (nodejs) - 22 years (9 years)

https://en.wikipedia.org/wiki/Dynamic_programming_language#E...


Don't forget

Python - 28 years


And finally, Perl - 30 years


Thanks. I was going to be a little depressed if after all that someone didn't jump in with Perl. :)

That said, it's frightfully close to 31 years... [1]

1: https://perldoc.perl.org/perlhist.html


$_&-++()/+&_$@%%":;!!?)(-_# or die


Many of these languages, while productive in their own ways, almost universally need a framework to extend to the web.

The few languages that were web-first were often chastised for not being complete languages.


Not sure what you mean. A basic HTTP webserver can be implemented in a couple dozen lines of code using only sockets in most languages. Something more complex requires more code, which is often packaged into a "framework"; it doesn't belong baked into the language.


Ruby uses a framework like Rails to reach the web more effectively.

Python can use a framework like Django to reach the web more effectively.

Compared to..

A language like PHP in the beginning was more directly coupled to the web (for better or worse).


> Instead perhaps people could suggest a better language/framework that is more productive than Rails, and has had a long lifespan in a large codebase?

I’d suggest most frameworks are better than Rails for long lifespans. They just front-loaded productivity, and it shows. The UI and functionality of GitHub is largely the same as in 2010.


GitLab is mostly rails and they have been iterating their UI pretty heavily. I think programming languages are just tools and the team(s) wielding them matter a lot more than the language.


On the UI, I'm kinda feeling like it's not a bad thing. GitHub UI works. It's neat, most things that need to be done often are easy to find, and otherwise it gets out of my way nicely. Oh, and it doesn't break my browser (links, Back button etc). Why change it?


Bitbucket and Jira use React and are far less usable


Well, their PR software could use some serious love. I don’t use github for much else.


Are you sure you remember how everything worked in 2010?

And the stuff that was sensible then probably shouldn't have changed much.


Meanwhile, they were able to win market share so fast that it took a while for competitors to show up.


Super cool explanation of some of the real world difficulties of upgrading large rails applications. Really liked the transparency around process and timeline, as well as the lessons learned section.


There are several "we upgraded Rails, it was huge, risky, and took months to years" blog posts from medium to large companies. I personally take this as a warning against using Rails. Ruby is one of the most dangerous dynamic languages to refactor, I don't see how struggling to do it for over a year is a selling point of the framework. It also feels counter to Rails's mantra of delivering value fast with little effort, until you need to upgrade, then you have months of no business value delivery and need to bring in experts to help.


Let me get this straight: you're arguing that a 10-year-old top-100-in-the-world website taking 4 full-time engineers and having them upgrade their core framework two major versions over 18 months is some sort of massive failure, and that failure would be solved with static types?

Also, you're saying that Github hasn't added any new features in the last 18 months?


I'm not sure any other technology stack would have fared much better.

Consider that 6 years, 2 months, and 20 days passed between Rails 3.2 and Rails 5.2. That's quite a bit of time for the framework to evolve. Then factor in the customizations from several non-framework dependencies and those added by GitHub.

This is an incredible achievement no matter how you slice it.


Yes, four FTE engineers taking 18 months to upgrade across two major versions indicates a massive problem, but not necessarily with Rails or Ruby. That's a cost of $1.5M give or take, just on the engineers, not including the opportunity cost in new feature development or paying other equally important tech debt.


$7 billion company, $1.5M cost? $7 BILLION. Your orders of magnitude are waaaaayyyyy off.

This is the inverse of survivor bias, in that you are retroactively applying "best practice" at the wrong scale. What gets you from $0 to $7B may hurt you at $7B. Heck, may hurt you way earlier than that.

However, YMMV on what problem you want to solve, but saving $1.5M (heck, let's 10X it and call it $15M) when you're worth $7B isn't the problem I'd personally be concerned with.


React 16 was a perfectly smooth upgrade for a massive part of the web ecosystem. It took 1 engineer to bump the version number and to test. Maybe a week or two at most to check that nothing was broken.


The React core team updates tens of thousands of Facebook's components every time they ship a new version, so I expect they'll continue to have uniquely smooth upgrades while that's the case.


It is a massive failure, and it would be solved by static types.

I could take a big, unmaintained 10 years old Haskell codebase and upgrade it to the newest compiler and libraries in a couple of days, at most (and it would most likely work on the first try after it compiles).


Or perhaps it’s survivorship bias. Companies that used Rails are around today with big, dated codebases because Rails adequately served the needs of their growing businesses.


It didn't take GitHub as a whole a year to go from 3.2 to 5.2. It took a small team within GitHub working with everyone to upgrade from 3.2 to 5.2, with most of the changes introducing business value in the form of security and technical debt fixes (as many issues as I have with Rails, which are many, deprecations aren't willy-nilly and generally have great reasoning behind them).


Last I heard, GitHub also runs its own build of Ruby, and much of the complexity stemmed from incompatibilities with that.


His point still stands.

You can find other platforms where updating is a lot easier.

That is the issue.


- 10 year old codebase under active development.

- 2 major versions and 4 years out of date.

- Upgrade started as a half-hearted "in your free time" effort with no official team, or maybe 1 dev (article says both).

- Upgrade included "cleaning up technical debt and improving the codebase", (you could define almost any work under that umbrella).

- At the end there were only 4 devs full-time on the upgrade.

I don't know of any platform that would make the above situation all that much easier...


I'd love to know what other frameworks would be easier. It's all relative. I've been involved in upgrades of a lot of statically typed projects, in many languages, and... it's all the same. Major upgrades are always a pain in the ass.


It depends on scale. It took us over a year at a previous company to update from Java 6 to Java 8 (technically 7, but 8 was released before we finished the upgrade, so we jumped to 8), and that didn't involve any major new frameworks. Where I'm at today, I don't even want to think about upgrading from Elasticsearch 5 to 6+. Terabytes of data indexed each day, plus way too many consumers. I think web frameworks are easy to upgrade compared to databases or schemas, at least when you're faced with distributed architectures.


> You can find other platforms where updating is a lot easier.

Care to name the platforms that would be much easier to update at GitHub's size, across two major versions?


What other platforms have you updated that were a lot easier?


and at github's scale at that.


For what it's worth, I'd be wary of putting too much weight on these types of articles when it comes to judging how difficult Rails upgrades are.

Rails was hugely popular for years (and still is, in a lot of ways). There are countless articles about it, and it's been used for a ton of projects. There are a lot of internal Rails apps built on earlier versions that are owned by companies with either limited in-house development resources or none at all; in which case, it's easy for decision-makers on the business side to push off updates (assuming they even know about them). "That doesn't sound like a big deal, we'll just do it for the next one." A bit of time passes, and then you're two major releases behind and you're looking at a serious effort. Or maybe it's developers who make that decision, for what are likely valid reasons in the short term. Upgrades across multiple major releases aren't exactly uncommon because of that, and there are a lot of articles detailing them, blog posts discussing or complaining about them, questions on SO, etc. as a result.

For the most part, though, I don't think that Ruby or RoR is uniquely difficult to upgrade compared to other frameworks. I've handled upgrades across versions that have gone ridiculously smoothly, and some not so smoothly.


I'd like to hear about some of these platforms. 5+ years of continuous development, 2 major versions, heaps of new framework functionality, etc.


According to the article, the upgrade took 1 year for 3.2 > 4.2, and then just 5 months for 4.2 > 5.2. The project began with one FTE, and some volunteers, and expanded to 4 FTEs. But the author says almost none of them had ever done a Rails upgrade before.

I'm curious to what you think would be a better framework for Github to have used, that would've allowed for easier, speedier point upgrades? Rails likely was a big advantage (as it usually is) in the initial stages. Are you seriously expecting it to be just as smooth when the site experiences exponential user and feature growth? That moving from Rails 3 to 5 was doable, with what sounds like a small team and no massive service disruption, seems like a very strong argument that Rails can still be effective in a company's middle-age years.


Not to mention, they also "took time to clean up technical debt and improve the overall codebase while doing the upgrade".

That is no easy task, for such a big application.


ASP.NET MVC, as uncool as that might be. IMO this is as close to the benefits of Rails as you can get in a statically typed language. And it really doesn't take very long in a project's development for static typing to start saving you time, either. IDEs can just be a lot more intelligent with static types, and it can be a big help for the readability of code without fully understanding the broader context.

The upgrade to .NET Core is probably worse than a Rails upgrade, although it's not really the same thing, as .NET Framework will continue to be updated for a while. Switching to Core is really only necessary if running on Linux servers is a big win for you.


Migrating an ASP.Net MVC app from .Net Framework to .Net Core isn't really an upgrade, as both frameworks are continually updated.

The migration is a pain, but just upgrading from MVC 4 to 5 wasn't painful.

I am sure 2 to 5 would have been a nightmare, especially if you were using the deprecated Microsoft JavaScript libraries, and needed to replace them with their jQuery alternatives.


> Migrating an ASP.Net MVC app from .Net Framework to .Net Core isn't really an upgrade

Doesn't .NET Core's runtime have significant performance benefits over the .NET Framework?


> ASP.NET MVC as uncool as that might be. IMO this is as close to the benefits of Rails you can get in a statically typed language.

I agree .NET MVC is as close as you get to something like Rails with regards to productivity in a statically typed, enterprisy language.

But using .NET/C# at GitHub would still have ended up with a significantly larger codebase -- which means more code to maintain, and therefore in all likelihood more bugs.


ASP.NET MVC didn’t exist in 2008. Maybe as a preview, but it was hardly a good choice at the time.


Of course, I'm only saying that would be my pick today. I personally dislike working with dynamic languages, but I would say Rails was their best option at the time.


I use Rails, but also Elixir, and in the past Java, .Net, and many other tech stacks.

In my experience, all large or very large app upgrades (and I've done quite a few) are complicated in one way or another, no matter the stack. Technical debt stacks up in subtle ways (dependencies become obsolete, a specific feature used the framework in unusual ways, stuff can be rewritten with more built-in framework features, etc.).

I don't see how this article would give Rails bad publicity, personally; I'll add that the advice they provide is pretty much what I would recommend for any tech stack too.


I've done upgrades on absolutely massive Java codebases to JDK7 and JDK8 and have never had anything but minor issues. Very rarely does something break backwards compatibility.


Well, I've upgraded an app from Struts 1 to Struts 2 and I don't want to do it again :-)

- https://www.infoq.com/articles/converting-struts-2-part1

- https://www.infoq.com/articles/migrating-struts-2-part2

- https://www.infoq.com/articles/migrating-struts-2-part3

Granted, the "upgrading" situation has improved since then in the Java world, but it hasn't been always super nice in the past.


I'm not sure if you can compare upgrading a VM/platform that is usually designed for backwards compatibility to upgrading a framework.


I think you're throwing the baby out with the bathwater here. GitHub has an enormous codebase where they've done a lot of work digging around in and modifying Rails under the hood. I think their situation calls for it and they have the internal talent to successfully "go off the rails" a bit. But it makes the upgrade path difficult. This is par for the course. Can you think of a technology/framework that would fare better than Rails in this case? I mean, you'd be in the same boat with Django, Spring, [your framework of choice], right?


You would not be in the same boat in a dependently typed language.

In such a language, any change on framework update would cause compiler errors if the framework's type constraints didn't match what your code expected.

As a result, upgrading in a dependently typed language is simply a matter of fixing compiler errors, and then it's upgraded.

For non-dependently-typed languages that take advantage of the type system, it's still significantly easier, though you probably will have to do a little more than just make sure it compiles.


   > As a result, upgrading in a dependently typed 
   > language is simply a matter of fixing compiler 
   > errors, and then it's upgraded.
This is very naive, and is probably hilarious to a lot of people who've been through upgrade hell in a dependently typed language. A few (and then some) major reasons:

1. It's really the subtle runtime behavior changes that bite you. The ones that a compiler doesn't help you with. (This is not a Rails-specific thing; ask Unity or OpenGL etc. devs)

2. A lot of the pain of upgrading a project (Rails or otherwise) is dependency hell. You upgrade the framework, but some of your dependencies haven't been updated and don't work with the newer framework version. This is true whether it's a dynamic or compiled app.

3. It's certainly true that in a strongly typed language, these sorts of trivial problems would be caught at compile time, and that's certainly an advantage. However, it's not exactly rocket science to catch these in a Rails app. Assuming your test suite is anywhere near adequate, it's going to spit out a comprehensive list of these problems just like a compiler would, albeit not as instantly.

3a. Rails is pretty good about documenting these breaking identifier changes between versions. They don't exactly sneak up on you, unless you get drunk one night and decide to upgrade your enterprise Rails app without looking at the release notes.

3b. Rails is also quite conscientious about loudly announcing to you, via log messages, when you use functionality that is deprecated and targeted for removal. Assuming you're not willfully ignoring these (i.e., drunken late-night upgrade bender?), they don't typically catch one by surprise.


I think you're right that my response is idealistic, tinged with naivety. Sure, with a sufficiently smart type system you can model subtle things like OpenGL changes, but "a sufficiently smart type system" is obviously impractical at best.

I agree that dependency hell and getting the versions to line up right is equally hard.

However, I disagree that 3 is a good argument.

> these sorts of trivial problems would be caught at compile time and that's certainly an advantage. However it's not exactly rocket science to catch these in a Rails app

Actually, it kinda is. To model the constraints you create in a dependently typed language, you have to create a set of tests and checks in your dynamically typed language which are basically the equivalent of a full dependent type system. Creating an ad-hoc, human-enforced type system and test suite is incredibly hard, and I can't think of a single large project written in a dynamically typed language that adequately does this.

Regardless of how good the documentation and testing and warnings in Rails are, they're not a replacement for a full type system, and the only way to get those benefits is to implement a poor ad-hoc type system in your methods and tests.

Yeah, big upgrades are always hard. Good type systems make them less scary and have less chance of breaking stuff, which honestly is the most important thing... But after you've finished getting through the stuff that all languages share (hardware sucks, dependencies suck, etc), a dependently typed language will be a matter of fixing compiler errors, not watching percentages of 500s in prod and crossing your fingers.


    > But after you've finished getting through the stuff 
    > that all languages share (hardware sucks, dependencies 
    > suck, etc), a dependently typed language will be a 
    > matter of fixing compiler errors, not watching 
    > percentages of 500s in prod and crossing your fingers.
I am speaking from very direct experience here.

I was involved in a Rails 3.x --> Rails 5.x upgrade of one of the larger Rails monoliths in the world and the trivial sorts of things a compiler can catch were... well, also pretty trivial in our upgrade path. Just not quite as trivial as they'd be with a compiled language (nobody's denying they have the edge here)

    > To model the constraints you create in a dependently typed
    > language, you have to create a set of tests and checks in 
    > your dynamically typed language which are basically the 
    > equivilant of a full dependent-type-system.
No, that's not how you do it.

You don't "model the constraints" explicitly. You write integration tests, same as you'd do in any sort of language. If MethodA from Class1 is passing the wrong stuff to MethodB from Class2, your integration tests will fail. At least, assuming you've got proper coverage.

But even in a statically typed, compiled language you have to write that test anyway, right? Because you need to make sure that code path actually works and that MethodA is getting the correct response from MethodB there.

There are definitely advantages to strongly typed, compiled languages! To be honest, after a few years in Ruby land, I'm ready to GTFO and go back to something a little more static. But Ruby's not the nightmare you describe it as.


Maybe, but what dependently typed language has the stability, documentation, framework and overall development ergonomics that would let you create something like Github with the resources and the time they did?


> until you need to upgrade, then you have months of no business value delivery and need to bring...

This is not the case in my experience. I've upgraded pretty decent-sized apps (hundreds of models, hundreds of lines of routes, etc.), and in my experience it would take a couple hours a day spread out over a few days a month, and then I was done (for versions 3-4 and 4-5; I've never done 3-5).

I would say most of the problem is making sure everyone on the team just keeps all functionality as-is. It can be tempting for team members to refactor as they go through, but this then becomes a huge time-sync. Anyways, that's my exp on Rails, but I have no other frameworks to compare it to.

Has anyone migrated a massive app from some PHP Framework like Symfony or from a java framework like Play, or any framework with a large code base?

I have had to upgrade massive systems that were not done with any framework and were full of one-off solutions with in-house developed libraries, and it was an absolute nightmare, but I'm sure this depends on the language and team. However, in general I think an open-source library used by millions, or even just hundreds, of people is going to have better documentation, bug coverage, support, etc. than something done in-house, just IMHO.

So I guess my question would be, what does the alternative look like?


My experience looks like yours - in-house frameworks (e.g. typically in banks) are a mess when it comes to upgrading.


I agree with your post but just wanted to throw out that it's time sink instead of time sync ;) imagine all that time going down the drain!


And yet, when Ruby is compared with other languages with regards to bugs in an actual study [1] it does better than many statically typed languages (e.g. Java, C#, Go, Typescript).

Add to that that a Ruby code base will be significantly smaller than a codebase in most statically typed languages. That means less code to maintain, and probably fewer bugs.

1. https://www.i-programmer.info/news/98-languages/11184-which-...


This is disingenuous. Quoting the abstract: "Language design does have a significant, but modest effect on software quality. Most notably, it does appear that disallowing type confusion is modestly better than allowing it, and among functional languages, static typing is also somewhat better than dynamic typing. We also find that functional languages are somewhat better than procedural languages."

The "statically typed" languages that you're focusing on are probably C and C++ (I say probably because they're the ones with high bug counts in the data), which have other issues driving their bug counts up. C is hardly even typed. Both have manual memory management.

Also, there's no control for commit frequency. Some people put everything in one commit, while others commit every line change. The Rails Tutorial even recommends the latter.

Lastly, Scala and Haskell killed in this study, as far as raw numbers go. But it doesn't seem significant.

I'll stick with subjective evaluations for now. This is just too hard to measure.


I am simply referring to the result data from the study. I fail to see how that is disingenuous.

You say Scala and Haskell killed it in the study, and you are right, they were the third and second best language respectively with regards to low rates of bugs. Perhaps you also happened to notice (but failed to mention) what language did best of all: Clojure, a dynamically typed language.


I think the point is that all else being equal, static typing is better. But obviously all else is not very equal at all, and so in practice you can have a very well-designed dynamic language beating static ones on this metric.


I leave it to future readers to decide who's missing the point between the two of us.


Here, the fulltext: http://web.cs.ucdavis.edu/~filkov/papers/lang_github.pdf

Note, in particular, that there's a high confidence, true, but the claim is "picking language X reduces the chances for bugs by a tiny bit." To quote the abstract:

"It is worth noting that these modest effects arising from language de- sign are overwhelmingly dominated by the process factors such as project size, team size, and commit size. However, we hasten to caution the reader that even these modest effects might quite possibly be due to other, intangible process factors, e.g., the preference of certain personality types for functional, static and strongly typed languages."

Personally I like statically typed languages due to playing nicer with autocompletion and in-editor documentation. Every time people make claims about "upgrades being done when project compiles" I die a little inside.


Refactoring is the biggest reason to prefer static typing for me. I just love the ability to right-click -> "Rename", and know for sure that this is done correctly throughout the entire codebase, even if it's megabytes of convoluted code.

Yes, modern IDEs for dynamic languages can also do this 95% of the time via type inference. The problem is that you never know if this time it's going to be the other 5%. And dynamism tends to encourage clever hacks that make code less verbose, but also make it especially hard for any sort of automated tool to figure out - and those can lurk in corners people don't even remember are there.


> Ruby is one of the most dangerous dynamic languages to refactor

A thousand times this. It's so much easier to do breaking changes and refactors in a language that's supporting you, instead of working against you.


I disagree with this slightly. I've been using Ruby for a long, long time, and Ruby does give you the tools to refactor safely; few people actually know how to use them, though, and there aren't very many high-level tools aimed at doing safe large-scale refactoring. In Ruby you can trace calls and do very powerful introspection, which you can use to verify that things like delegation happened properly.

It also doesn't help that most codebases use ActiveRecord or something similar in every complex class and wind up significantly increasing the interface width and ancestor depth of their code. The point is, I think the language does a pretty good job supporting developers, but there are a lot of bad practices still in use and recommended. Can't fault the language because people write shit code.


What tools are you talking about?

The “problem” with Ruby is that you basically don’t know whether code is valid until you execute it, whereas a statically typed language enforces a lot of things at compile time and simply won’t let the program compile if an API is used the wrong way: calling private functions, referencing undefined symbols (simple typos), passing the wrong number of arguments, passing a string where an integer is expected, etc.

Additionally access control is limited in Ruby, which makes it difficult to release a library and have the language enforce that people do not rely on things which are implementation details subject to change.
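That last point can be shown in a few lines of plain Ruby (a made-up class, not from any library): `private` only guards the normal call syntax, and `send` walks right past it, so nothing stops a consumer from depending on your implementation details:

```ruby
class Wallet
  def initialize
    @balance = 0
  end

  private

  # An implementation detail the author never meant to expose.
  def credit(amount)
    @balance += amount
  end
end

w = Wallet.new

begin
  w.credit(10)                # raises: private method `credit' called
rescue NoMethodError => e
  puts e.class                # => NoMethodError
end

# `send` bypasses the access check entirely:
w.send(:credit, 10)
puts w.instance_variable_get(:@balance)   # => 10
```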


You're right - all of those things are possible. What I'm arguing is that, given a reasonably good test suite and an awareness of which patterns are dangerous, you can refactor fairly confidently with the tools given.

For example, it's possible to fetch a class's entire interface before and after a refactor and validate that it is the same. It's possible to dynamically wrap every method you're refactoring to trace and type-check it. And if you extend that idea, you can output this data to files and perform static analysis. Sure, it all relies on having some safe execution context to get this information, but Ruby probably has the best testing tools of any language, and many projects have great test coverage.

For library owners, I agree with you: there's no hope. But for application developers maintaining their monoliths, with nothing depending on them, there's a lot you can do to ensure safe refactoring.
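The "fetch the interface before and after" idea sketched above fits in a few lines of plain Ruby (`Invoice` is a made-up example class, not from any codebase):

```ruby
# Snapshot a class's public interface as [method name, arity] pairs.
def interface_of(klass)
  klass.public_instance_methods(false).sort.map do |name|
    [name, klass.instance_method(name).arity]
  end
end

class Invoice
  def total(items)
    items.sum
  end

  def overdue?
    false
  end
end

before = interface_of(Invoice)

# ...imagine a refactor happening here: methods extracted, renamed,
# delegated, the class reopened and redefined...

after = interface_of(Invoice)
raise "public interface changed!" unless before == after
```

Run inside a test suite, a check like this catches a renamed or re-aritied public method even when no test happens to call it directly.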


Dynamic language or not, having good test coverage will be your biggest support.


It's definitely important, but we (Python shop) have "good" (85%) test coverage and we still see 500s in prod every day because of things a type checker could trivially catch. And this is just in the course of normal operation; this isn't even a migration. Having extensive experience with both Go and Python, I would conservatively estimate that Go requires ~30% fewer tests than untyped Python for the same confidence. That's more than 30% time savings; not only are you not writing 30% of the tests, but that's 30% fewer tests to have to maintain. Of course these aren't the only considerations--for certain tasks Python may be faster to develop with (although I think people forget about things like deployment, tooling, dependency management, performance requirements, etc when they make their estimations).


Line coverage is not test coverage.

Having a test execute every line in your application doesn't mean your application is _covered_ or _tested_.


Yes, we all know the metric is easily gamed but no one at our org is trying to game the metric. We are paid to build a product, not to boost the metric.


It's not about gaming the metric, it's just that the metric doesn't mean very much in the first place. Running a coverage tool during tests won't show you edge cases you forgot to handle in the code under test, it will show you code that's not tested at all. That can sometimes be useful for pointing out blind spots, but you shouldn't derive any confidence in the tests from a high coverage score, even if the people who worked on the project had the best intentions.

Coverage tools could only measure quality of a test suite if you're assuming that either the code is perfect or that the existing tests cover (logically) everything about what they test. Without either of those guarantees, it doesn't tell you anything very meaningful, as you discovered.


The metric is meaningful; I think you’re misinterpreting it. To your point, 100% coverage doesn’t mean you’ve eliminated all bugs, but it does mean that your code base almost certainly has a lower bug yield than a code base with 50% coverage (assuming no one has gamed the metric).

If you really think that the metric is meaningless and useless for deriving confidence, then you are necessarily asserting that code bases with 100% coverage have indistinguishable bug yields compared to those with 50%, 5%, or even 0% coverage. A claim like this is too extraordinary to be believed without considerable evidence.


I guess it's useful for deriving a baseline level of confidence: a low coverage score is a red flag, and an increasing coverage score probably corresponds to increasing test coverage. But my issue is that 100% coverage doesn't mean anything about the correctness of the code in absolute terms (unless 'gaming the metric' includes not thinking of every edge case, i.e., unless we assume the existing test suite is perfect).

If you're working on a poorly tested codebase, it's a useful relative metric of your progress in testing what already exists, but it doesn't mean anything more than that. If you wanted to derive from a metric, say, confidence that you won't see 500s daily in production, line coverage isn't an effective one to use; the tests that give you that kind of confidence don't really help your coverage score. The parts of the codebase in the most urgent need of such tests will most of the time be ones that already have good coverage; think of how TDD works, even if you're not doing TDD.

I could agree that a high coverage score is a prerequisite for having confidence that your test suite is comprehensive (in general), but that's such a low bar, it's like saying a full bath is a prerequisite for a nice house, just knowing that shouldn't do much to convince you it's a mansion.
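A tiny illustration of the point (made-up method): a single test executes every line, so line coverage reports 100%, yet the edge case that will 500 in production is never exercised:

```ruby
def per_item_cost(total, count)
  total / count
end

# This one test executes 100% of the lines above...
raise "test failed" unless per_item_cost(10, 2) == 5

# ...but the suite says nothing about the untested edge case:
#   per_item_cost(10, 0)   # => ZeroDivisionError in production
```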


I noticed something similar with Elixir with regards to the number of tests we needed, thanks to the compiler, even though it has late binding (it is compiled, but dynamically typed). A "best of both worlds" exists out there.


Integration tests are your friend. ;)


What dynamic language is easy to refactor? I remember the "good" old PHP days and I can't say it was easy either.


TypeScript and Python 3 support gradual typing, so code can be optionally annotated with types. This makes refactoring easier, while still benefitting from how you can quickly sketch out new code dynamically and only annotate it once things start to solidify.

Personally, I'm a fan of very strong, static type systems, so I would prefer to annotate all the things, but I understand other people have different views.


TypeScript probably wouldn't call itself a dynamic language.


From what I understand, everywhere you would need to write a type, TypeScript allows you either to not declare a type at all or to use "any" and avoid declaring a specific type. I think you can even go so far as to rename a ".js" file to ".ts" and it's fine, which is a great example of "gradual typing", and I think this all makes TypeScript a dynamic language.


It's statically typed with opt-in dynamic (which is opt-out implicit). Sans the implicit opt-out, this is identical to C#.


If you don't specify types at all, is that really opting out? This is valid TypeScript code:

  function foo(a, b) {
      if (a != 3) {
          return a + b
      } else {
          return "hello"
      }
  }
  
  console.log(foo(2, 5)); //prints 7
  
  console.log(foo(3, 5)); //prints "hello"
You can compile it and run it on the typescript playground: https://www.typescriptlang.org/play/

This is not like C#, where you would be required to specify that every type is "dynamic". Here, you can optionally choose to specify that the types are "any", but it's still gradual typing.

EDIT: you said "implicit opt-out"... which sounds like a synonym for "opt-in". I don't think "implicit opt-out" is a term.


The "implicit opt-out" is noImplicitAny[1]. That code of yours won't compile with this opt-out enabled. Edit: the "dynamic opt-in" is `any` (and is `dynamic` in C#).

[1]: https://www.typescriptlang.org/docs/handbook/compiler-option...


If it was really not a statically typed language, I'd expect this code to at least run. But it detects a type failure at compilation time despite the lack of any declarations.

    function foo() {
        return 1;
    }

    let x = foo().substring(1, 2);


> I'd expect this code to at least run

It still compiles the code to JavaScript even with the error, and you can still run it, which makes this more of a warning than anything else.

That warning based on type inference seems like a positive feature... because there's no situation in which that code is right. You can add `: any` to the function declaration and it will stop complaining, but I wouldn't contend that the language is not dynamic because it encourages you not to make a mistake.


The distinction between "warning" and "error" is highly arbitrary anyway. In C++, you also get a warning rather than an error if you, say, return a reference to a local, or read from an uninitialized local. You still get a binary, and it still runs. But these are 100% bugs, which is why most projects enable treat-warnings-as-errors for a lot of that stuff in C++. I'd imagine it's common with TypeScript, as well.



How about Smalltalk?


Nowadays the situation with PHP refactoring is quite good: PHP now allows type hinting, and if you keep type-hinting variables with phpdoc-style comments, the IDE refactors code really well.


I can't agree with you.

Can you name a framework where upgrading a very large (several hundred thousand LOC or more) application across 6+ years, two major versions, and multiple minor versions is not a significant undertaking?

FWIW at my former employer we had a huge Rails monolith with something like 500K+ lines of code. On top of that, our genius architects had split it up into a very nonstandard Rails architecture.

We hired these folks (no affiliation, other than that I used to work for a company that hired them) and they did a solid job. They blogged fairly extensively about each incremental upgrade and the problems they encountered:

https://www.ombulabs.com/blog/tags/upgrades

Do you see anything there that's much more painful than a similarly ambitious upgrade in other frameworks?


We had that experience with Spring, and it was pretty painless: one FTE, 3 weeks, ~500K LOC, including a Java major version bump.


I would take that warning with a grain of salt. You're talking about the main underlying framework upgrade from 4 years back to today. Take any MVC framework that powers your entire system with a 4 year upgrade gap and you'll end up with the same type of debt.

Also you're measuring effort in "X number of months" but as the article states it started as a hobby side-project for a few engineers. There is no notion of how much effort it actually represented. Heck I could need 5 years to upgrade from angular 1 to angular 2 if I put in 30 secs per day...

I would actually advocate for a framework that's past its prime/hype period over any newly untested hyped framework any day.


If they had good test coverage and stayed on top of updates, this would be a non-issue. Going across several major versions of any language/framework is going to be painful.


I've done major upgrades from 3.2 to 4.1/4.2, and from there to 5.1 followed by 5.2 - the only time we had to put in a little work was when moving from 3 to 4.

When moving from 4 to 5, we relied on simple smoke tests and unit tests, and had no major issues or bugs. The biggest effort was to make sure all of the application and environment configurations were up to date and using all the new settings introduced etc.

My very subjective opinion is that either most of these code bases are low on quality (meaning they are harder to maintain in general), too tightly coupled with Rails itself (models stuffed full of logic, instead of using plain ruby objects for logic and keeping ActiveRecord for persistence level logic), or engineers are just too scared to make changes to the codebase - which again is perhaps a combination of bad test coverage and bad code quality.

Either way, the stories of upgrading major versions being a huge undertaking always make me scratch my head and wonder what we are doing wrong if it's easy for us.

And inb4 someone claims our apps are just small and simple - we run about 12 Rails applications in production in various sizes, about half of them being relatively large.


Oh please. Upgrading any framework is always a huge pain in the ass. Using it as an excuse not to use rails is just that: an excuse.


You can refactor with confidence in Ruby if you have a good test suite. Just do it incrementally and test it well along the way, and also (when using Rails) address any deprecation notices you find along the way and you'll be fine.

I've been using Rails since late 2005, and in my last job I upgraded a few Rails apps that hadn't been touched since 2008 or so.


The problem I have with statements like this are that they apply to every large framework, not just Rails. ASP, Django, Zend, Cocoa etc. A combination of libraries are bound to be hard to update two major versions later when methods and variables are deprecated.


It depends on where you're coming from. Going to Rails 3.2 from a previous version is painful but upgrading from 3 to 4 or even 5 is considerably less dangerous.


I think it depends on whether "took months to years" means there was a team of several programmers working on it full-time for that period, or whether it just takes a couple of people working on and off and most of the time is waiting to see whether the logs are showing any problems.

(And time under the "took the opportunity to clean up technical debt" heading shouldn't really count.)


Fwiw, you’re not wrong but most of those stories are around the actual Ruby jump from 1.8.7 to 1.9.3. It was necessary but painful.


Assuming you have proper test coverage, upgrading even a “big” app isn’t that hard. The problem is when companies neglect the value of automated testing until they actually need it.


Some people are still using Java 1.5 so I do not see how being a static language means you will find it easy to upgrade.


Your comment is fallacious in several ways:

1. You're citing a specific anecdote (some people... java1.5) and trying to generalize. What matters is not some people, but the average case, which "some people" will not tell you.

2. "easy to upgrade" is not being argued; "easier in general" is. Just because it's easier to upgrade in a statically typed language doesn't make it easy, just easier than for a dynamically typed one.

In effect, you're saying "there are people using statically typed languages who didn't update, so it must not be easy to update".

A statement that makes a similar fallacious jump is: "There are some people who still type slowly on computers so I can't see how anyone could claim typing on computers is generally faster than typing on typewriters".

Anyway, the fact that the compiler catches more errors at compile-time means it should be obvious that it's easier to upgrade a statically typed language.

If I have a method in ruby "user.get_id" which used to return an int, but now returns a uuid in a new version of the framework, for a statically typed language my code just won't compile on the new framework until I handle that, regardless of test coverage... where-as in ruby, I'll need to have test coverage of that path or read the upgrade notes or something.

There are valid arguments to be had about dynamic vs static typing, but whether it's safer/easier to perform an upgrade of a library/framework is not an argument that dynamic typing can win easily.
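The parent's hypothetical is easy to make concrete in plain Ruby (both classes are imagined stand-ins for framework versions, not real APIs): nothing fails until the affected code path actually runs.

```ruby
# Old framework version (imagined): get_id returns an Integer.
class UserV1
  def get_id
    42
  end
end

# New framework version (imagined): same method, now returns a UUID string.
class UserV2
  def get_id
    "9f1c2d34-0000-0000-0000-000000000000"
  end
end

def next_record_id(user)
  user.get_id + 1   # fine for an Integer, TypeError for a String
end

next_record_id(UserV1.new)    # works, returns 43
# next_record_id(UserV2.new)  # raises TypeError, but only at runtime,
#                             # and only if a test or user hits this path
```

A compiler would flag the second call site the moment the dependency was bumped; here, only test coverage (or production traffic) will.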


If you miss something, the language shouldn't let it compile. Obviously the more backwards compatibility a language desires, the less likely this is to happen.


Much of your comment history is hijacking HN threads to complain about Ruby and Rails. Not sure what terrible things you've seen in your days as a Ruby programmer, but it might be worth it to try and talk to someone about it and get it out and learn to deal with your trauma.


Personal attacks will get you banned here. Please don't do this again.

If you think someone is using HN abusively, you should email us at hn@ycombinator.com so we can investigate. Attacking them in the comments is not cool, and being personally nasty is of course a bannable offense.

If you'd please review https://news.ycombinator.com/newsguidelines.html and follow the rules when posting here, we'd be grateful.


Sometimes the truth is hard to accept


I agree with everything you said, but know that upgrading platforms is not a pain unique to Rails.

How long was AWS Lambda not able to support Python 3?

How long was GCP not able to support Java 7?

How hard is it to upgrade any core framework or language?


I feel this is comparing apples to oranges. A computing platform supporting a new runtime is less about porting existing code and more like adding a new feature to the code base. You really don’t see many “we ported our app from framework/language version X to Y” articles anywhere, except maybe for Python 2/3, and even then generally only for one version bump in a long while. Ruby (and particularly Rails) is really not doing as well as some of the other players in this area.


Django is a great product. But migrate a large project between versions multiple years apart, and you'll feel it.


I had no idea that Github even used Rails. The things I learn.


If you didn't know that then let me share with you this interesting story from 2012. I'm going to repeat it from memory so my details may be a little fuzzy but I'll include a link which should tell the story more faithfully.

So back in 2012 rails had a default behavior where you could mass assign values from a POST to a user and there wasn't any scrubbing of that, by default. Someone realized this was a Bad Idea and issued a pull request that would have fixed it. Instead of accepting the PR, DHH (I think it was him) said something along the lines of 'competent programmers would not leave that setting in place' and rejected the PR.

The exploit discoverer thought about this and tried it against github, which was known to run on rails and the code worked! From there he was able to manipulate the permissions on github to get access to the rails repo where he reopened and accepted his own pull request.

He was promptly banned.

https://gist.github.com/peternixey/1978249
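The shape of the bug is easy to sketch without Rails at all (simplified; real Rails went through attr_accessible whitelisting and, later, strong parameters, but the core mistake is just this):

```ruby
# A model with one attribute that should never come from user input.
class User
  attr_accessor :name, :email, :admin

  # Naive mass assignment: trust every key the client sent.
  def assign_attributes(params)
    params.each { |key, value| public_send("#{key}=", value) }
  end
end

honest_params    = { "name" => "alice", "email" => "a@example.com" }
malicious_params = honest_params.merge("admin" => true)

u = User.new
u.assign_attributes(malicious_params)
u.admin   # => true: privilege escalation from one crafted POST body
```

The fix that eventually shipped as strong parameters is, in essence, forcing an explicit allowlist (e.g. filtering the hash down to name and email) before any assignment happens.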


Worth mentioning: Wasn't just a random person, it was Egor Homakov who has a history of finding pretty interesting exploits particularly wrt Rails and Github.


This was a huge deal at the time, here's one of the HN threads.

https://news.ycombinator.com/item?id=3663197


Thanks for sharing, that was very entertaining.


Where is the actual pull request?


Re-reading the material from that era I think I embellished a little in my memory. I think there was a pull request but he didn't reopen and accept it. Sounds like he just pushed a commit that said something along the lines of "why can I commit this to master". I'm busy at work so I can't dig in but I'm sure someone will find that original PR. If not tonight I'll see if I can't find it.

EDIT: Went ahead and found it. It was an issue. https://github.com/rails/rails/issues/5228

EDIT2: It looks like DHH may have even gone so far as to delete his comments in this issue. There's folks referencing him and one side of a conversation in places. Pretty funny.



seems like the origin story of strong params


Wow, they've been running 3.2 for this long? That's wild considering the talk Eileen gave at RailsConf this year made it sound like a lot of the Rails 6 scalability stuff was based on GitHub's existing work.


They've been running 3.2 with local monkeypatches -- which is part of the reason that upgrading was problematic. (Though certainly not all; over that span, there were lots of breaking changes to supported and documented APIs.)


The advice at the end sounds exactly like something I'd say to someone going from 1.8 to 11 with Java. Great advice for any platform, very interesting to see the same conclusions from a totally different stack


Using the conditional boot loading, aren’t there structural differences in ActiveRecord queries/scopes that would run under 3.2 but not 5.2?

Did GH just rewrite those scopes in their respective models and maintain a ton of if/else blocks for the different versions? And if so, didn’t they run into issues without the code not being DRY, e.g. someone fixes a 3.2 query, but not the corresponding 5.2 version?
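One way that stays manageable is keeping the version check behind a single predicate rather than scattering raw if/else blocks, and running CI once per lockfile so a fix to one branch that breaks the other fails the corresponding pipeline. A minimal sketch of the shape (constant and method names are illustrative, not GitHub's actual code):

```ruby
# Decided once at boot; GitHub reportedly keyed this off which
# Gemfile.lock the app was started with.
RAILS_NEXT = ENV["RAILS_NEXT"] == "1"

def rails_next?
  RAILS_NEXT
end

# Version-specific code lives behind the one predicate, giving a single
# obvious thing to grep for and delete once the upgrade ships.
def user_scope
  if rails_next?
    :rails_5_style_scope   # stand-in for the 5.2 ActiveRecord query
  else
    :rails_3_style_scope   # stand-in for the 3.2 ActiveRecord query
  end
end
```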


If you have your test code unified, and have multiple CI pipelines, it should show up immediately on your build servers.


What do they mean by off-hours? I imagine on a global site like github, there are hardly off-hours?


Just because it's a global site doesn't mean the traffic is distributed uniformly across the day. Certain regions are going to have higher traffic during business hours. I'd guess their off hours are somewhere around 6pm PST when North / South America has stopped working, Europe / Africa is asleep, and India is just waking up.


They probably mean when they weren't tied up shipping features or tracking down bugs.


As a person working for a large software consultancy in Scandinavia I hate to see so many using type safety as an excuse for not writing tests. At least a dynamic language forces you to write tests and frankly it is often easier to write tests in a dynamic language imho.


I am very much looking forward to Rails 6.0 and seeing what GitHub / Shopify will upstream. Actually, Instacart has a lot of great gems too, which I wish had become the default solutions in Rails.


Given its maturity and settled place in the programming landscape it's always nice to see that Rails can still evoke irrational disdain in HN comments.


100%


Why is it irrational to hate languages that are orders of magnitude slower than necessary?


It's irrational to hate without mentioning tradeoffs. Sure, if performance is your only metric, then Ruby is a bad choice. But that's rarely the case, particularly with the web.


That's a reasonable question to ask.

I think in general, there are lots of reasons to like a language outside of its runtime performance.

I love working with Go and Rust due to their performance. And I work every day in C#, which ends up nice and quick, too.

But I still love Ruby due to its expressiveness, and the way it works just seems to align with the way I think. But that's probably because I used Smalltalk in the past and I like the bits of it that Ruby borrowed. :)

To answer to original question, though. I'd say it's irrational to hate languages that are slower than necessary because it's irrational to hate a programming language at all. No matter what language it is, it's just a bunch of words on a screen. Use the ones you like and don't waste any brain cycles thinking about the ones you don't.

Unless you're locked in a cube farm and forced to write Cobol at gunpoint all day. Hate might be rational then.


The thing you performance zealots never seem to realize is that the speed of your language basically doesn't matter for web apps, because there is usually at least 200ms just in transit time to and from the server. An extra 30-50ms spent rendering a result simply doesn't move the needle.


Because a rational person knows that there are trade-offs in programming languages.

Is it rational to love the fastest languages? Do you rank your programming language love by an arbitrary speed index?


Something-something IO something-something


Upgrading early(ish) and often, the very obvious preventative measure against terrible and failure-prone rewrite or upgrade projects, is one of the first things that falls by the wayside in the mostly short-termist logic that seems to dominate modern capitalism. It's absolutely infuriating.


Anyone knows which ruby runtime GitHub uses? Ruby MRI or JRuby etc.?


> The upgrade started out as kind of a hobby; engineers would work on it when they had free time. There was no dedicated team.

I’m not sure why this still surprises me. For a company the size of GitHub, there should most certainly be a team responsible for this type of upgrade.


Why in the world is github still on rails?


Maybe because their codebase still serves their use cases very well?

And perhaps they have little to gain and possibly much to lose if they ditch it?

You didn't say much in your question, so I don't know if you feel they ought to rewrite with a popular SPA framework or use something like Elixir Phoenix, but if their Rails-based solution handily serves 30 million users, why do you feel so strongly they should move to something else?


Nothing wrong with Rails, especially if the team knows it well. Time to develop is the real cost in software most of the time.

If Github wanted to integrate a lot of real-time features, then Elixir + Phoenix can't be beat. Depending on what they replace, a 10x in performance and a fraction of the servers needed is a nice win.


I've seen performance boosts closer to 20x when I've helped companies rewrite their Rails products in Elixir. I've also seen a reduction in server costs. In all fairness though, simply rewriting the Rails app in Rails, with the benefit of hindsight, probably would have resulted in a performance gain too.


I would bet $10 that GitHub's scaling problems have more to do with Git than with Rails itself. Switching to Elixir wouldn't really help.


Because Rails is too dynamic for such a mature company. Move to a statically typed language that's not bound by a GIL. Or multiple languages that serve the right purposes for the right job.


Why not?


I had the impression GH swapped Ruby for Scala years ago.


That was Twitter.


Oh, lol.

Thanks :)


How much money in server costs and how much electricity could be saved if Github didn't use an interpreted language, but something like Go, C#, F#, Java etc?


Github would not have been the same in any of those. They really took to some of the Rails concepts - a lot better than most Rails companies - and it shows in their product (routing, object structures, etc)


So your claim is you could not create the same user experience in any language that is jitted or compiled?

I can't really take that seriously.


Sure, in retrospect you could create the same thing. It's just various text processing at the end of the day. What I'm claiming is that their choice of Rails led to certain choices which were really transparent throughout the product and are still there. It would've grown to be something totally different on another technology, so I don't think it's fair to just look at cost and performance. They did many things "the Rails way"


If they'd used Java they would still be working on the prototype.


I've seen quite a few attempts to measure productivity differences between different languages and there is not a consistent win being shown by dynamic languages in general. Perhaps ruby on rails is especially productive for the web, and maybe especially so when github.com launched, but there are lots of options now with similar productivity and 1 or 2 orders of magnitude better performance.


Rails is optimized for small teams to ship code to multiple platforms. When I visit indiehackers and see people posting about their struggles creating a page to update user profiles and the associated back end code - it's not hard to see why it's more productive. There is very little in the way of boilerplate or BS.

There is nothing out there that has anything close to the productivity of rails. Not that there can't be; it's just not a mindset/approach the industry has embraced.


I agree. Something seemingly comparable like Python/Django, for instance, cannot compare with Rails with regards to productivity. There was (and still is) Meteor.js, though, but adoption stagnated for several reasons.


Java is very verbose; that's why I picked on it. You simply have to write a lot more LOC, which takes more time, is more code to maintain, and so on. It's not a matter of static vs. dynamic: Haskell, for instance, is probably at least as succinct as Ruby (though I doubt you'll find a Haskell web framework as productive as Rails).


As a Java dev I would say you'll spend more time on restarting and redeploying your app, especially if it's quite big.


I never get this argument. With the current state of IDEs, there may be more LOC in the end, but many of those lines are written for you, and if you know your way around, writing Java is actually very fast.


It's not the time it takes to write the code I'm concerned about. With more LOC per feature there is more code to maintain, there are probably more bugs, and there is more code one has to comprehend to understand what the system does. That has a significant cost.


Imagine having a state of the art IDE and a succinct language! Something like Smalltalk.


Isn’t Ruby Jitted now?


Ruby 2.6, which will be released in December (there are release candidates out now), contains method-based JIT infrastructure, but at least as of a few months ago the optimizations were still fairly limited and had not yet overcome the overhead of JITting.



