Building GitHub with Ruby on Rails (github.blog)
660 points by Lukas_Skywalker on April 7, 2023 | 318 comments



GitHub running off the main branch is fascinating, and initially sounds mad, but makes so much sense. Assuming they have very high test coverage, running against mainline Rails isn't really any different to having the fork they had before, but they have more influence on future development.

It must also be a massive boon for the Rails ecosystem to have such a large property running off the head.

Does anyone know of any Django shops that do the same, running off mainline?


You MUST have an insane amount of test coverage to trust something like this to Ruby.

I'm new to Ruby, with 13 yoe as a SW engineer.

Personally, I find it a very hard language to master. Writing tests often feels like I'm setting variables left and right without seeing them being used in the current context. But that happens to be part of the let() way in rspec.

Now you might say: why use rspec? I inherited this codebase, so gotta do what you gotta do.

I do really miss my compiler. I am not a fan of writing a block somewhere that can be invoked 2 weeks later for the first time and then fail, because someone passed in a number where a string was expected.

I went from PHP to Java to C# to F# to JS to Rust. F# and Rust stand out in terms of hardest to write, but easiest to trust.

I don't have that feeling with Ruby, and RoR.

But, again, my personal opinion. Good friend of mine started with Ruby, and he loves it. He says that he accepts the magic things for what they are, and uses them. My brain doesn't allow that. I have to understand.


I've written in python, C++, Ruby, and Go.

I hear you on the compilers, and I do miss them at least a little whenever I don't have them. But my goodness, there's really something fun in Ruby. I think it shines in smaller codebases, situations where you can have some hope of understanding 90% of everything that's going on as a single developer (and honestly part of the sales pitch of microservices is that you know there's no funny business crossing the wire, so you only need to know the magic bits of the services you work in). But there's really so much joy in stringing together a few functions in a single line, knowing that there might be a few different types being passed around depending on context but knowing that all the functions can handle all the types, and not pausing after every line to check whether an err was nil... Honestly when I have a compiler I miss the magic, and when I have the magic I miss the compiler.


How many environments are really out there where developers blissfully work in their microservices silo? There are a lot of happy promises and sunny-day theories, but in reality it's an awful mess, and you can't really get anything done without understanding the services hairball.

How are you supposed to fix a bug in your own service if you don't know how to set up a specific permutation of services locally in order to reproduce it?


^ all of this is true. I just meant that there's much less likely to be the RESTful equivalent of "method missing" or accessing somebody's __private_field__, hijacking their vtable pointer to redefine the method, etc. And by all means tell me your horror stories :)

or maybe put it this way: the big-ball-of-mud formed by metaprogramming tricks feels like it has boundaries that are permeable in a particularly arcane way, whereas the big ball of mud formed by microservices has boundaries that are permeable "merely" through the distribution and spanning of both humans and logic.


Metaprogramming in ruby is not magic. It is fairly simple, but if you come from a language where metaprogramming is difficult or not possible it can seem like magic.

Good metaprogramming levels up a language's power in the same way going from a hammer to a nail gun levels up your building power.

Creating a DSL in ruby is mostly just creating well named class methods, often ones that take blocks, and calling them without parens. Sprinkle in a few `method_missing` overrides and `define_method` calls and that's it.
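
To make that concrete, here's a minimal sketch of the pattern (all names invented for illustration):

    # A tiny hypothetical DSL: `searchable_by` is just a class method that
    # uses define_method to generate instance methods at class-definition time.
    module Searchable
      def searchable_by(*fields)
        fields.each do |field|
          define_method("find_by_#{field}_loudly") do |value|
            puts "looking up #{field}=#{value.inspect}"
          end
        end
      end
    end

    class User
      extend Searchable
      searchable_by :email, :name   # reads like a declaration, but it's a plain method call
    end

    User.new.find_by_email_loudly("a@example.com")   # prints "looking up email=..."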

I don't know why people seem to actively avoid learning it, but a day of study, and stepping through a few "magic" methods with a debugger should reveal how simple and non-mysterious it is. And also how useful it is.


I've created many DSLs in Ruby. The issue is later debugging said "magic" methods, when said method is throwing an error and you can't find said method in your codebase.

Metaprogramming is pretty amazing for gems and DSLs in a limited domain. When I see business logic that has metaprogramming in it, I kinda freak. I know in the future that business logic will change and debugging the very clever code in that business logic may become a nightmare.

I say this as a person who adores Ruby.


I agree that metaprogramming should be used with care, and mostly makes sense in frameworks and libraries, and not application code. But any ruby programmer should take the time to learn how it works - and a few sprinkles of it here or there can make otherwise hard things easier.


I’m a long time Ruby and Rails developer. I know how the magic works from reading source code.

A lot of stuff uses method_missing and define_method. There’s also instance_eval, which is used to create DSLs like rspec. Once you know what they do, stuff makes a lot more sense.
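
For the instance_eval flavor, a toy sketch (nothing here is rspec's actual implementation, just the same trick):

    # The block passed to Suite.new is evaluated with instance_eval, so bare
    # calls like `check` resolve against the Suite instance, not the caller.
    class Suite
      def initialize(&block)
        @checks = []
        instance_eval(&block)
      end

      def check(name, &body)
        @checks << [name, body]
      end

      def run!
        @checks.each { |name, body| puts "#{name}: #{body.call ? 'ok' : 'FAIL'}" }
      end
    end

    Suite.new do
      check("math still works") { 1 + 1 == 2 }
    end.run!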


define_method is so weird. Breaks searching through source code.


Ruby tells you where methods are defined - it is worth your time to learn how to explore ruby code from the REPL - this is the ruby way. For example (I have pry installed, but you can do it without pry):

    [5] pry(main)> $ user.first_name

    From: /Users/.../.asdf/installs/ruby/2.7.7/lib/ruby/gems/2.7.0/gems/activerecord-6.0.6/lib/active_record/attribute_methods/read.rb:15:
    Owner: User::GeneratedAttributeMethods
    Visibility: public
    Signature: first_name()
    Number of lines: 4

    def #{temp_method_name}
      name = #{attr_name_expr}
      _read_attribute(name) { |n| missing_attribute(n, caller) }
    end
Tells me exactly where this "magic" method is defined, and I can pop open that file and read all the source.
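
If you don't have pry handy, plain Ruby gets you most of the way there; Method#source_location and Method#owner work in stock IRB (output abbreviated, assuming a `user` like the one above):

    user.method(:first_name).source_location
    # => [".../activerecord-6.0.6/lib/active_record/attribute_methods/read.rb", 15]

    user.method(:first_name).owner
    # => User::GeneratedAttributeMethods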


Can you point to where this is documented? Thanks for sharing!


https://pry.github.io/ - also a lot of features from Pry have made it into the default IRB these days, but I still use pry. I don't know the equivalent commands in IRB.


> I do really miss my compiler. I am not a fan of writing a block somewhere that can be invoked 2 weeks later for the first time and then fail, because someone passed in a number where a string was expected.

Problem solved by using Sorbet in your codebase. I know at least Stripe and Shopify use it. It also forces you to keep magic to a minimum (which any sane dev would do anyway).

https://shopify.engineering/adopting-sorbet
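
For the curious, a minimal sketch of what a Sorbet signature buys you for exactly the number-vs-string case above (assumes the sorbet-runtime gem; class and method names are made up):

    # typed: true
    require 'sorbet-runtime'

    class Greeter
      extend T::Sig

      sig { params(name: String).returns(String) }
      def greet(name)
        "Hello, #{name}!"
      end
    end

    Greeter.new.greet("Ada")   # fine
    Greeter.new.greet(42)      # raises TypeError at runtime; `srb tc` also flags it statically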


FYI, and very late to the game so the short version: the thing to use `let` for that'll make it click is doing things like setting up all the fixtures for "correct" / "normal" execution, and then piecemeal overriding them in nested describe/context blocks to show what should happen in abnormal situations. Basically, "here's what normal looks like, and then here's a deviation from that".

Ofc, you'll shoot yourself in the foot doing that if you let individual test files get too big, but that's a bad idea anyway sooo.
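
Roughly what that looks like (a hypothetical spec; assumes FactoryBot and a made-up `discount_for` helper):

    RSpec.describe "discounting" do
      let(:user)  { build(:user, vip: false) }               # the "normal" fixture
      let(:order) { build(:order, user: user, total: 100) }

      it "charges full price by default" do
        expect(discount_for(order)).to eq(0)
      end

      context "for VIP users" do
        let(:user) { build(:user, vip: true) }               # override only the deviation

        it "applies the VIP discount" do
          expect(discount_for(order)).to eq(10)              # `order` picks up the overridden `user`
        end
      end
    end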


Maybe some day Ruby will get type hinting. I'm loving it in Python.

In terms of the "I wrote a block somewhere and it broke when executed for the first time two weeks later" - don't do that in a scripting language. Use the REPL to build/test the block. That's the trade off - instead of a compiler you get a fast REPL - use it! :)


it already does


Please say more. What's your setup?



It’s… not ready for prime time. I’m optimistic that it can get there but right now the tooling is quite immature and the type system flexibility is not there for such a dynamic language as ruby.


Have you tried Tapioca (https://github.com/Shopify/tapioca) with Sorbet? Typing in general has a ways to go, sure, but I find this combination quite usable in my day-to-day.


Yes, but in dealing with parsing JSON in a dynamic way, we took a bunch of time to try to get things working elegantly and it didn’t go so well. Same with trying to set up a base class for a service object that could return any number of things.

Maybe I’ll check back in 3 years? But it seems to be a pet toy of Shopify's, built for their needs.


Sorbet exists, but it’s worse than what Python offers, which is dramatically worse than what TypeScript did to JS.


Have you used tapioca with it? Apart from some edge cases, it makes it very smooth to use once you learn it.


My experience with Sorbet has been way better than mypy with Python. Sorbet is way more useful.


That’s good to hear. I only ever tried Sorbet years ago when it was pretty new.


Oh that's interesting. You mentioned that Sorbet is dramatically worse than TS, but working with a modern RoR/React stack I don't have that opinion.

TS is a bit more flexible and expressive than Sorbet, but I find Sorbet very ergonomic even with strict typing. I rarely have to use T.let or T.must.

Sorbet's typed data structures like T::Struct and T::Enum are also great.
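
For anyone who hasn't seen them, roughly what those look like (a sketch, assuming sorbet-runtime; the domain is made up):

    require 'sorbet-runtime'

    class Currency < T::Enum
      enums do
        USD = new
        EUR = new
      end
    end

    class Price < T::Struct
      const :amount_cents, Integer
      const :currency, Currency
    end

    Price.new(amount_cents: 999, currency: Currency::USD)     # fine
    Price.new(amount_cents: "9.99", currency: Currency::USD)  # raises TypeError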


You don't have to use `let` in specs. I prefer setting instance variables inside a `before :each` block.


`let` has a performance benefit, which is why it's encouraged. The blocks are only executed if called - but if you set up all the ivars in a `before` block and not all of those ivars are used in every test, they waste time - it can be significant if you are using factories, for example.
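
A quick illustration of the difference (assuming FactoryBot's `create`):

    # Eager: both records are created for every example, used or not.
    before(:each) do
      @user  = create(:user)
      @admin = create(:admin)   # wasted work in examples that never touch @admin
    end

    # Lazy: the factory only runs in examples that actually reference `admin`.
    let(:user)  { create(:user) }
    let(:admin) { create(:admin) }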


Instance variables, you mean those beloved things in Ruby where you cannot distinguish between never having been defined and having been assigned the value `nil`? :p


there are many ways to look up whether something is defined or not

    instance_variable_defined?

    defined?
are two common ones

but if you are using something before defining it you are going to crash so that's definitely one way of distinguishing it.

using fetch is another way to provide a default value to something that may have a nil value as a meaningful value.
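
To illustrate the difference in plain Ruby:

    @name = nil

    instance_variable_defined?(:@name)   # => true  (assigned, even if only to nil)
    instance_variable_defined?(:@nope)   # => false (never assigned)
    defined?(@nope)                      # => nil
    @nope                                # => nil -- no crash, which is the footgun discussed here

    # Hash#fetch distinguishes "missing" from "present but nil":
    { name: nil }.fetch(:name, "default")   # => nil
    {}.fetch(:name, "default")              # => "default"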


Thanks for correcting me (sincerely)

> but if you are using something before defining it you are going to crash so that's definitely one way of distinguishing it.

Would you clarify? That was my point: casually using something where `nil` is a possible assigned value, or where the variable may never have been defined, makes possible the (sadly common) category of bugs where the program does not crash but proceeds as though the variable was assigned `nil`, when actually the variable was never defined.

In contrast to local variables where the program will crash if you reference the variable but it hasn't been defined.

My comment meant "bare instance variables in Ruby are not great [and we might not want to recommend them as a solution to people complaining about Ruby/Rails quirks]". Do you disagree and actually love the behavior of Ruby instance variables? Or are you simply technically correcting me? (which, again, I appreciate)


> Do you disagree and actually love the behavior of Ruby instance variables

Yeah, I wouldn't say I love it, and I'm not sure why they did that, but I imagine they had a reason for it originally. I would say it should have crashed instead of returning nil as a default. They did tend to try to make the programmer happier and maybe they did it to that end, but I don't see a large upside to it... not too much we can do about avoiding instance variables though... at the end of the day you just have to be a little more careful.

Tests are a pretty good thing to have though and can catch this kind of error.


I prefer less opinionated frameworks to RoR, but Ruby is still my favorite language; a pity the ecosystem is so much smaller than Python (esp in the data science space)


Exactly, it’s an investment in Rails by tying yourself so closely with it.

Contrast to the JS ecosystem where as soon as there’s a disagreement or new idea a new framework is born


Except for the JS language itself where every proposed suggestion is implemented.


Errr, tail calls.


Every reasonable one. Maybe I am wrong, but TCO usually means no more stack traces?


Not at all! Elixir stacktraces are much nicer than JavaScript ones.


No, only in dumb implementations; Scheme and other languages that require TCO have no issue mapping the debugging metadata to the proper calls.


JavaScriptCore has tail calls


Nothing that one can rely on for portable code.


stackful coroutines, an "await" keyword that only awaits one layer of awaitables instead of infinitely many layers, etc.


Comparing frontend frameworks with backend frameworks seems really odd to me. It takes a lot less time to build a JS framework. They do a lot less. This isn't to say frontend is easier or whatever. But they really do a lot less. That is why there are so many really good frontend frameworks. It's also a lot easier to rewrite your frontend than it is your backend. Whereas on the backend there is so much to build that you really need to invest in it.

Every backend language has one or two really good frameworks. Simply because the cost of replacement is so high and the cost of developing a new one is so high, it just makes sense to invest in the one you're using. Sadly, this usually means just investing in hiring more people and not in actually improving the framework itself, which is what GitHub/Microsoft is doing here.


JS is not only for frontend anymore. Alas.


Yea and new backend frameworks pop up every day. Wait, they don't. They were clearly talking about frontend frameworks.


I've seen quite a few backend frameworks, but they're usually all based on express


Unsure why you were downvoted, you're almost certainly correct.


Shopify also runs off main, with a bigger codebase and more devs, and employs some Ruby core/Rails core devs too. It is the right choice IMO. Once you decide on your main framework as a big company, you should invest as much as you can in it and its community, it will pay off in the long term.

https://shopify.engineering/shopify-monolith


I have yet to see a tech stack that's not locked up into a framework version.

That's more because of our engineering culture. The cost of NOT upgrading outweighs, by a huge margin, the cost of just keeping building on top.

And yes, I have seen Django shops locked into a 0.9x release, patched right into the core and running for a very long time, impossible to upgrade, and all the horror stories.

EDIT: Added Django


At Shopify we are also basically always on the bleeding edge of Rails and Ruby versions. By always moving forward, each individual upgrade is much smaller.


Plus Shopify and GitHub must drive most development of Rails so it’s basically an in-house dependency to some extent. I know that’s simplifying things.


It is pretty common for companies to not be locked in to an old version of a framework.

Sounds like this is a good question to ask companies when you're interviewing, since it probably can act as a proxy for a lot of other engineering habits.


That's because most teams don't have a clear tech ownership: everybody wants to upgrade but nobody can justify it to the management or even plan for it.

I found framework and runtime "freshness" to be a good metric of company engineering culture.


I'm starting to think that big breaking changes to "improve" the core are never the way to go for a big framework. You're essentially making a new framework with the old name at that point.


You need a culture that values upgrades not for new features but for future proofing


My advice is to set up things as best you can right from the start, to make upgrading a relatively painless process.

For example:

- Good test coverage (particularly for critical points of your application) including CI pipeline

- Use dependabot or similar to make sure you get alerts for any critical updates (like security patches)

- Practice good dependency "hygiene" - only add a dependency when you really need it, and do vetting to ensure it's well-supported, has been updated recently, etc.

- Regularly audit and remove dependencies - when a library author declares they are sunsetting their project, plan on a replacement (even if that is, worst case, a fork). Remove dependencies which are no longer used in your code.

- Separate out your development/testing/production dependencies (see the Gemfile sketch after this list)

- Use a good dependency manager: for example in Python, use Poetry or pip-tools or similar, rather than manually updating requirements.txt files yourself

- Update early and often: other than dependabot make it a maintenance task to check for updates at least once a week. Use managers, scripts etc to make this as easy and painless as possible.
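
In a Rails project the dev/test/production split is mostly just Gemfile groups, roughly like this (gem choices are only illustrative):

    # Gemfile -- only what production actually needs goes in the default group
    gem "rails", "~> 7.0"
    gem "pg"

    group :development, :test do
      gem "rspec-rails"
      gem "factory_bot_rails"
    end

    group :development do
      gem "rubocop", require: false
    end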

Obviously if you are inheriting a legacy project you have to deal with the cards you are dealt, but these are a good target to move towards even with an old codebase.


As a counterpoint, in my 15 years working with Rails I have yet to see one that is locked.


Github itself used to be. They ran a custom fork of Rails 2.3 for years rather than go through the pain of upgrading to Rails 3. By the time they finally did, Rails had already moved on.

It took Github a full eight years to get caught up after that fateful decision in 2010 to hold off on Rails 3.


Eileen, who used to work at Github, did a talk on the upgrade https://www.youtube.com/watch?v=ZrcPoRx_kQE


What were the sticking points that made the 2.3 -> 3 upgrade so tough?


In the article they mentioned they had a lot of patches to rails. They effectively made a fork. At enterprises it can be seen as not worth it to go through getting your changes merged in because it can take a long time. Once you make that decision, you're entrenched far more than one may realize.


2.3 -> 3 saw the Rails and Merb projects merge, and a ton of changes to remove some of the most unmaintainable magical elements of Rails. https://medium.com/ruby-on-rails/upgrading-a-rails-2-app-to-...


You have worked on a few projects that were behind a few major versions and needed considerable effort to be updated :)


Sure, though I still wouldn't say they were "locked", though what that means is somewhat subjective :) There was still a will and, with a strategy, success in upgrading. I've not been anywhere that has rolled out Rails LTS for instance, or just given up. The place you worked with me last year did a Rails 6.0 -> 6.1 and it took less than a week for one person, 6.1 -> 7.0 should be pretty similar.


I have customers with a mix of small Rails projects in version 6 or 7. I told one this week that we could use AR's encrypts :field to encrypt some fields in the database, but that project is Rails 6 and that feature was added in Rails 7. We could add the code to handle it because it's small, but the customer decided not to encrypt the field and wait until (if!) the project moves to Rails 7.
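
For reference, the Rails 7 feature in question is roughly a one-liner on the model, plus the key setup in credentials (the model and fields here are only illustrative):

    class Patient < ApplicationRecord
      encrypts :ssn                         # transparent encrypt/decrypt on read/write
      encrypts :email, deterministic: true  # deterministic mode keeps the column queryable
    end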


I wouldn't say that's locked though. The place I currently work is on 6.1. We're not locked to it, we're going to upgrade, but in a few weeks or so.

I can understand the reason to put off encrypting the field too. Upgrades, or backporting, are just work like any other project and I guess that other things for now take preference. After all, before field encryption came along the risk balance calculation had already been done (even if only implicitly) that storing the field unencrypted in the database was safe enough. The availability of encrypted fields doesn't change that, though making some things safer doesn't necessarily fall neatly into the "requirements" bucket despite being a good idea.


> That's more because of our engineering culture. The cost of NOT upgrading outweighs, by a huge margin, the cost of just keeping building on top.

Not upgrading is always the worst option. It's just that those making the decisions might not have the same stakes. It has nothing to do with technology and happens in every situation.

Often you're just not even aware of what these costs are because you're wired to work around them. It just feels right to continue that workaround, including continuing to hire and expand, when in reality it's just more overhead.


nothing is always.

Consider moving to an SOA. Rather than upgrading a monolith when the engineering team knows that it’ll gradually have pieces pulled out of it, use that effort on the migration.


The cost of running off the main branch outweighs the cost of waiting until there is a new release and upgrading then, even if you're dealing with a stable framework that has good backwards compatibility.


I think this is mostly a consequence of using dynamically typed languages.


We recently upgraded our pretty large enterprise Scala codebase from Cats v1 to Cats v2. Static typing does help, but there are tons of other problems. One of them is the lack of conventions like Ruby on Rails has, which can easily be much worse than dynamic typing in the presence of conventions (provided that the conventions are followed -- rarely the case).


> “It must also be a massive boon for the Rails ecosystem to have such a large property running off the head.”

I would have expected Microsoft to focus developer efforts on speeding up Ruby as a language, given they are one of a small few large companies that have deep language/compiler expertise.


Much as I'd like Ruby to be faster, it's rarely the bottleneck in most Rails applications. Besides, Ruby is becoming faster; it's a lot better than it used to be.


It's usually I/O and DB calls that are the bottleneck, not the speed of the language. Although Ruby can be a bit of a memory hog; I did hear, though, that this got better with the new YJIT in Ruby 3.x.


> It's usually I/O and DB calls that are the bottleneck

Not in my experience.


In a Ruby on Rails app?


Unfortunately Github is the only product that uses Ruby inside Microsoft. So their incentive is pretty small. I do wish someone inside Github could push M$ to spend some money and collaborate with Shopify on their JIT effort though.


That's actually not true. There's at least one other product at MS that uses ruby... Yammer. Although I'm not sure how much more incentive that would add for MS.


I imagine it wouldn't be too hard, since Django only has one dependency - Django itself (NodeJS developers weep).

Is rails the same way dependency wise?


Rails has dependencies, but not like node apps; last time I counted create-react-app vs a new rails app, there were nearly 10x as many distinct maintainers with access to push new releases to the default dependency tree.

I consider that a better measure of dependency risk than absolute count, since different ecosystems have different ideas about how large a library should be.


Create-react-app isn't really a fair comparison. Create-react-app has ridiculous dependency bloat compared to the norm in the JS ecosystem because it includes every possible option rather than just picking one option in each category. Most serious projects using React aren't using create-react-app.


I’m not really familiar enough to contradict you, beyond saying that - as an outsider - it has by far the most visibility in its space.


Funnily enough, it's now outdated.

The brand new react docs don't even mention it: https://react.dev/learn/start-a-new-react-project

> We are currently leaning towards Option 5 ("Turn Create React App into a launcher"). The original goal of Create React App was to provide the best way to start a new React web app for the majority of React users. We like that repurposing it as a launcher explicitly communicates a shift in what we think is best for most new web apps, while at the same time leaving an escape hatch for the older workflows. Unlike Option 3, it avoids the perception that "creating a React app" is somehow deprecated. It acknowledges the reality that a bit of choice is needed, but there are now really great options to choose from.

https://github.com/reactjs/react.dev/pull/5487#issuecomment-...


That's not correct. Django isn't independent; it depends on quite a few packages. But that's not the issue. The issue is the framework's own interface:

An existing interface getting removed, OR behavior or defaults changing for an already existing interface.

That's usually what upgrades are about.


> Django isn't independent; it depends on quite a few packages

It is quite independent. There are between two and four dependencies: asgiref, sqlparse, tzdata on Windows only [0], and typing_extensions on <3.11 [1]. There are some optional dependencies (argon2-cffi, bcrypt, and a database library like psycopg2), but they are small and mostly self-contained.

[0] https://github.com/django/django/blob/main/setup.cfg#L39-L42 [1] https://github.com/django/asgiref/blob/main/setup.cfg#L34-L3...


Yes the framework's interface can change. But this is usually a piece of cake compared to hundreds of disjoint libraries and leaky abstractions that would be in use in the Node world.


Wait is this true? Doesn’t it also use SQLAlchemy at least? Which then likely has its own dependencies? I’d be really surprised if Django had no dependencies at all.


Django doesn't use SQLAlchemy, they have their own ORM system. And it's true that Django used to have 0 real dependencies (now it has 2), but even now many of its "optional builtins" do require some additional external packages, and some of them are really useful.

For example you need an external package to use builtin postgresql, mysql or oracledb support. You need tblib to run tests in parallel using the built-in test runner. You need external packages to use argon2 or bcrypt as password hashers in the builtin auth system, and I'm sure there are others, since Django is very much batteries included, but modular.

The modularity also means you can use any 3rd party database, use pytest to run tests, do your password hashing on your own, ... so I can understand that these are not listed as dependencies on PyPI, but I don't really like that I have to list packages I don't directly import in my code as direct dependencies...


Django had zero external dependencies for a long time, which was a factor of its history.

The first release of Django was in 2005. Back then, the Python Package Index didn't exist yet. Installing Python dependencies was really hard - you pretty much had to grab a copy of the code for each one and put it on your "sys.path" somehow.

So Django avoided the issue entirely by bundling everything you needed to build a web application in a single package.

That's why Django has "django.contrib" - in a time before pip dependencies, it was a way to separate out things like GeoDjango which weren't exactly part of the "core" framework but could be distributed along with it.


django 4.2 has two dependencies: asgiref and sqlparse. other than that, none.

django does not use sqlalchemy as its ORM, it has its own system (which i prefer!).


Amen. Until recently, I was exclusively working with Django for 5+ years. I definitely fell into the trap of taking the ORM for granted. Had a brief foray into the JS world and despite plenty of slick-looking projects with fancy-pants websites, nothing remotely compares.


Same here, I came into a project that had been started months before and found that they were spending weeks reimplementing stuff that comes out of the box in Django. But not as well.


What JS based ORMs did you use that you weren't a fan of?


All of them are bad in their own ways. They all fall down in key features, or the non-standard SQL features (hooks, after-save, automatic transactions, etc) all have edge-cases and surprising behaviour.

I have tried a lot and the least worst is Zapatos (and it's not really an ORM) because it at least tries to not paper over SQL and instead just creates a type-safe API for using SQL.


The thing is - even if someone creates an amazing ORM for Node, I doubt I'd use it. I am tired and done with async everything. No more NodeJS for business logic.

Now, if someone could create a Django-like ORM for Java or Rust, then we're talking. Hibernate and Diesel are nowhere near Django's ORM in terms of productivity and "it just works" factor. Go's GORM looks pretty good but haven't tried it. jooq does not look as easy to use or setup, and the workflow is entirely different.


Slightly related, I'm not used to using ORMs. Spoiled by jooq from the JVM world, writing SQL with the query builder in TypeORM was a terrible experience.


I don’t think Django uses SQLAlchemy. It has its own ORM which is damn good, but you can also use SQLAlchemy instead of that if you want.


Ah, NodeJs developers are fine with a single dependency too. As long as that dependency is NPM.


I remember working on projects that did this in the early days of Django, particularly when there were badly needed features that took too long to land in the stable release.

Nowadays most projects I've seen use the latest stable release or at least the latest LTS - maybe some legacy projects lagging behind.


I built my startup 4 years ago with a combination of react + aws + gatsby + hasura. I thought this would be great for performance and scale. Fast forward to today: I spend at least 2x as much time coding a feature as I would have if I had just stuck with a simple rails stack, and the scale I imagined never happened. Now I'm rebuilding everything with rails so I can ship faster and focus on growing the product, not on engineering prowess.


Considering that most of the time the startup at hand will fail anyway and the planned scale will not happen, building the initial architecture to be "web scale" is a perfect example of YAGNI.

I had a similar experience, and it was a great lesson.


We used to write software for pretty high-traffic systems using what are now underpowered CPUs, less RAM, and antiquated databases. It should be easier now, NOT harder. Most of the development I see today is resume-driven and not based on any pragmatism.


Building for scale from day 0 is a recipe for disaster. You should've seen that coming.


This absolute is a little ignorant. For some products, scalability is the core competitive advantage. You can't engineer it in after the fact.


It's literally Donald Knuth:

“The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.”


Can you give some examples of such products?


A database would be an obvious example. Analytics platforms and message queues are another couple that come to mind.

You can't patch the kind of performance you need for these products after the fact, it needs to be baked into the architecture.


I think the authors above just meant that you don't know if your product will even get there, so instead of over-engineering your stack and prepping for every possible scenario, you take a just-in-time approach and conquer the problems as they come at you.


You suggest people should refactor later on when needed?


Yea, you identify bottlenecks and refactor those as needed (with something like rails it's pretty easy to change out parts of your system while retaining the rails core). Every business will have different bottlenecks and it's very hard to identify them before you start accumulating customers and see how they are using your app


That actually makes sense


Literally yes. Outside of a few very basic common sense optimizations (avoid n+1 queries, use indexes liberally, maybe sprinkle some caching on heavily used endpoints) you should focus entirely on shipping features with the knowledge that your product and by extension your code almost certainly won't look anything like it does now in 18-24 months.
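
For the Rails case, those "basic common sense optimizations" are mostly this kind of thing (models are hypothetical):

    # N+1: one query for the posts, then one more per post for its author
    Post.limit(20).each { |post| puts post.author.name }

    # The fix is one word: two queries total
    Post.includes(:author).limit(20).each { |post| puts post.author.name }

    # An index for the lookup you actually do (in a migration)
    add_index :posts, :author_id

    # Cheap caching on a hot endpoint
    Rails.cache.fetch(["popular_posts", Date.current], expires_in: 1.hour) do
      Post.order(likes_count: :desc).limit(10).to_a
    end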


Cost of capital for a startup that is succeeding is almost always far higher early on, so you want to focus on moving fast over scaling, as long as you can scale enough to get to a big enough raise to throw far more resources at the problem.


There comes an inflection point in a startup when you have to move from MVP to scale, and there are two different kinds of tech and two kinds of people for each stage


If you are fire-fighting production issues at scale all the time with the original version, you may find yourself out-of-capacity for a refactoring/rewrite. After experiencing this terrible state of affairs, I prefer getting performance and robustness (mostly) right the first time.


I haven't been able to apply it to a real project yet, but I like the idea of building to refactor.

You execute for today, but make it easier to replace parts in the future.


doubling down on premature optimization and chasing the new hotness with a ground-up rebuild... I hope that works out.


The "new hotness" of 2005?


I absolutely love Rails. I'll always remember back in 2010 catching the train to Waterloo station in London and seeing a huge sign overlooking the train tracks that read something like "We Need Rails Developers".

Rails was such a huge part of my professional career.

Now, 13 years later and I'm deep in the JavaScript ecosystem and have been for 8 years.

The most exciting thing to come out of this ecosystem recently is RedwoodJS, because it takes a lot of inspiration from Rails.


Similar backstory. Back when I used Rails as a young dev I remember loving it but thought it all felt a bit parochial somehow, like they didn’t really get where the web was going. Now I realise they were just not interested in hype and other bullshit, they never get sucked into the latest scalability trends like I always did, they never cared about being first and trendy, and that’s why Rails is still there and still effective. My theory is most other frameworks are unwittingly designed around avoidance - people trying to solve painful things “once and for all” so they don’t ever have to worry about them again, while Rails is more about embracing the uncertainty of the future and being smart and measured about it, like: “Yeah you might need to rethink this part of your setup in a few years to scale it, but there’ll be ways and it’ll be fun; don’t stress about it now, focus on what matters.” And as a more mature engineer I now realise they were so very right with this attitude, and that it results in much better stability and adaptability to new requirements over time. To think, the way I used to think it was a bad reflection on the ecosystem to see popular gems that haven’t been updated for months, while it was a good sign that Node packages were being updated every day… Lol. I think I need to make the jump back to being a Rails dev.


Yea it's funny how it worked out. I felt similar. I was chasing the latest trends, frontend frameworks, databases. They weren't, and now we're back where we started with sending data from the server and submitting forms.

I'm less likely in my career to stray away from whatever the old heads came up with. For instance the new thing is putting server/database calls in server side React components. But something tells me MVC was invented for a reason by people way smarter than me.


I started my professional career with Rails 8 years ago and deeply miss aspects of it. It uses lines of code so efficiently, letting you go from zero to product with outrageous speed. Everything is a solved problem. Ruby as a language is so expressive, so beautifully ergonomic, so easy to read. It's such a joy.

But the view layer has not aged well. After years of working with a focus on React and TypeScript, I can't stomach the Rails approach to views. As much as I appreciate the value proposition of Stimulus and truly feel the shortcomings of SPAs deep in my bones, I crave components, type safety, unidirectional data flow, and more options for styling than just classes. React in particular has rich options for UI frameworks that feel truly idiomatic. The last time I tried to start a project with Rails, I moved at a breakneck pace until I got to the view layer, at which point I had to pull the plug.

I'm reaching for Next.js these days because it's the closest I can get. It gives me the opinionated framework I crave plus server rendering and the ability to drop down into client-side rendering when it calls for it. Prisma is a damn good ORM, even if it doesn't have the intuitively ergonomic beauty of ActiveRecord. And of course, TypeScript throughout is a godsend.

But I still miss Rails. I miss the console, Sidekiq, the profound power of ActiveRecord for simple things with the ease of dropping back into SQL when I need it, the polish of FactoryBot and those easy easy easy tests. I miss the breezy and fluid syntax of Ruby. I don't feel limited by TypeScript, I love working with it, but TypeScript lets me lecture, Ruby lets me sing.


ViewComponent, from GitHub, makes writing views much better, but it's far from perfect.


Never seen this before, it looks great! I wonder how well it handles Sorbet or RBS types.

Any experience with it? What makes it far from perfect?


I never learned RoR; it has been perpetually number 3 on my “to learn” list for over a decade. However I loved CodeIgniter and also crave that all-in-one-ness so find myself going to NextJS time and time again now for that reason.

I pair Next with Nest (sigh, why only one letter different?) when I need a proper backend but I can get so far with Next that I’m using it less and less now.


Have you use the whole Hotwire suite? It's much better than just Stimulus. Componentizing your views will also make it more modern.


I have not, no. When you say “componentizing”, what does that entail? Is it the ViewComponents gem?

If someone reading this has experience with both TS+React and modern Hotwire apps, you’d do the world a service by writing a blog about modern Rails views for the experienced React developer.


Yeah what I'm suggesting is a React-y modular view codebase. You can use ViewComponents but I quickly found it was too limited for me and I didn't like to have many files for a single component. I just use Rails partials, the performance hit for my app is none. I also wrote a local VSCode plugin for snippets autocomplete so I can go faster.

Also sprinkle some (rare) Stimulus and Turbo streams/frames.

The one thing you're gonna be missing is typing, Sorbet does not work in views.


> But the view layer has not aged well.

Agreed - In my opinion, Rails messaging/soft marketing could be a lot better about how it's great for a data driven backend, whatever the frontend/UI.

I know there's things like API mode and webpacker, but this all seems very secondary and begrudging at times. Maybe I listen too much to what DHH says, but definitely think the Rails view stuff should be what's secondary. It's a turnoff for a lot of devs who would otherwise be quite happy with Rails as a backend.


I dunno, I think that there are a lot of products and developers who need the view to be as important as the backend, and it would be a shame for Rails to stop trying to woo them. Rails is making a big push for Hotwire (https://hotwired.dev/) and on paper, it certainly looks like a very Rails-y way of doing views. Elixir users seem obsessed with LiveView and this seems cut from the same cloth, so maybe it’s the answer? But for it to be compelling for folks like me, it needs to be a bit more React-y: components with well-defined, type-safe interfaces, unidirectional data flow.


Have a look at AdonisJS.

It's in a somewhat weird spot: On the one hand it is mature and has amazing DX/UX going for it, with a lot of very thoughtful tooling. On the other, for some reason, it has always remained niche with just a couple of core developers, even though it's now nearing version 6 and at least 8 years of releases. I do not know why that is the case. It's a beautifully written full stack framework, taking inspiration from both Rails and Laravel, and in the world of web TS it should get a lot more attention than it does.

Maybe it's just the weird name :^)


I think its main problem is the lack of community. There are almost no third party packages and the core team is pretty small (one or two devs). I wouldn’t risk using it to find myself 5 years down the line using a zombie project I couldn’t move away from.

One thing is picking a library which I can replace if it gets abandoned, etc.; a very different one is picking a full stack framework where replacing it means rewriting everything.

If I wanted a full stack framework, I’d pick Laravel or Rails because they’re already proven, have a big community, are used by big names in the industry and I’m 99.9% sure they’ll still be around and maintained 5 years from now.


I partly agree.

Having a community is nice and motivating, but that makes building good software over that many years without ever being the community darling more impressive to me, and it seems like a great indicator of deep commitment, which gives me a different kind of confidence than hype and VC money.

I am not sure which sustains better in 2023.


I've also wondered why AdonisJS didn't quite take off and grow a community while another single-developer-centric framework, Vue, did.

But of course Vue is a frontend framework, and maybe the appetite for those was higher (and associated risk was lower) than for a batteries-included JS backend framework like Adonis.


I was looking at Adonis and was shocked at how much it reminded me of Laravel and Rails. The creator definitely had a ton of experience in one or both.


I use AdonisJs at work. Having used Rails and Laravel before it was a very easy jump.


I used AdonisJS. Reminds me a lot of rails but with the added safety of typescript.


Totally agree! I have used many frameworks (even written my own) but nothing gets close to Laravel for PHP and AdonisJS for TypeScript in terms of features out of the box and DX.


Adonis is inspired by Laravel but it’s far from comparable. Features, community, “battle-provenness”, third party packages, etc. are just not there yet.


I was just talking about this topic of whether we really have any Rails-influenced JS frameworks out there in the wild. And I struggled to come up with anything off the top of my head other than Sails.js [1]. RedwoodJS looks interesting, what about it in particular do you find exciting?

[1] https://github.com/balderdashy/sails


Is that ad placement a genius piece of guerrilla marketing, or a recipe for total confusion, among the general population at least? Perhaps both.


I believe it must have worked well for them, although I have no specific proof, because that billboard was there for a good 6 months at least and must have cost a fair bit of money given how prominent it was.


Waterloo being the main station for commuters coming in from the south: genius marketing. Hitting both managers and developers.


I wonder why so many went this path. When I started with Ruby like 15 years ago it was just another language to learn, but in the end I just stuck with it.

I still build in Rails. And even though I tried alternatives, I don't think anything is as stable and fast (for me) for creating things.

Rails is love


With the focus on server side rendering, Next.js 13 feels a lot more like Rails too. Other frameworks like Remix & SvelteKit seem to be heading this way as well.


Ember.js has nicely adopted the best parts of Ruby on Rails, and it is still one of the best JavaScript frameworks out there, IMHO.


Off-topic question: is it still very hard to make $200K in London as a web developer?


I like GitHub, fwiw. I am not someone who requires everything to be perfect, and I've learned to be tolerant of our human reality, where imperfection is the norm.

But GitHub is not a great Web app. It is frequently/constantly out of sync with the latest data/status. You quickly develop the habit of manually refreshing the page every time you are preparing to do anything with a PR, and that's not something that should be necessary these days.

It surprises me to see a loud and proud blog post detailing GitHub's process of staying so relentlessly up to date with the latest and greatest version of Rails: the app is not properly responsive, for whatever reason, and to a degree that would not be tolerated where I have worked. I would rather read about how they are trying to fix this issue.


I agree with this, but that seems to be a problem with Github's frontend code, not their server code. I don't think that detracts from this blog post's message.


I guess my implied question should be stated more explicitly: is GitHub's front end code entirely separate from the Rails code? That's not how Rails apps usually work, I thought.

Admittedly, it's been a long time since I looked at Rails code, and I don't have the slightest idea how GitHub is actually architected. But I don't remember the "front end code" in Rails being a separate thing from the server code, typically: the whole point was server-side rendering. Is GitHub using one of the JS/TS frameworks for the front end? I do remember that being a developing trend, a few years back.


Rails can just serve JSON or GraphQL from an API to an SPA frontend, or you can do full server-rendered, or Hotwire to do HTML fragment updates, or any combination thereof. IIRC Github used something like Hotwire but home-grown. I've not done Rails development for a while so not sure what the state of things is for web sockets, but I would think that's not a problem for the framework.
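
As a small illustration of that flexibility, the same controller action can serve both the server-rendered page and a JSON API (hypothetical controller):

    class IssuesController < ApplicationController
      def show
        @issue = Issue.find(params[:id])

        respond_to do |format|
          format.html                       # renders app/views/issues/show.html.erb
          format.json { render json: @issue }
        end
      end
    end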

Point is, there's nothing about Rails in particular that would prevent fixing these issues, that's probably the result of development and business priorities, legacy code, etc. that would be an issue with any tech stack.


Order as I remember it: Github initially used pjax, https://github.com/defunkt/jquery-pjax, (maybe "invented" by defunkt?) which I believe was the precursor to turbolinks, https://github.com/turbolinks/turbolinks, which was the precursor to Hotwire.


Makes sense to me, yep. Thanks for the info. I have a buddy who works there, I am going to grill him at our dev meetup this week!



What do you mean by the app not being properly responsive? I've never had any problems with it on mobile.


It takes a significant chunk of time for any approvals or change requests to show up in the desktop UI, if ever, without just manual refreshing. This is not a niche observation, fwiw. It's a common complaint.


Ah, responsive as in "responds quickly", not responsive as in "renders appropriately for different viewports". We need more words!


I think I used the wrong term, sorry! Solo dad + software dev = brain slowly melting away.


Ah, I haven't used GitHub Desktop, just the website. That sounds annoying!


And as I have said before, but it's worth repeating: Rails is perhaps the only open source framework that is being battle-tested in development at scale. It may not be the fastest (or in fact quite slow), but I don't think you could find similar testing being done and deployed at the scale of GitHub on any other framework.

I wonder if Eileen Uchitelle will bring this practice to Shopify as well? Edit: It seems [1] Shopify is also running on latest Ruby and Rails version as well.

[1] https://news.ycombinator.com/item?id=35481352


> It may not be the fastest (or in fact quite slow)

The web is not CPU bound; it's memory and network bound. Rails can handle huge traffic perfectly fine if you know how to code for performance (e.g. caches, async with Kafka, etc.). Ruby has also had pretty significant speed improvements in the last few years.
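
One example of the "async" point, sketched with plain ActiveJob rather than Kafka (the job and client names are made up):

    class SyncToWarehouseJob < ApplicationJob
      queue_as :low_priority

      def perform(order_id)
        # The slow external call happens off the request path.
        WarehouseClient.push(Order.find(order_id))
      end
    end

    # In the request path: enqueue and return immediately.
    SyncToWarehouseJob.perform_later(order.id)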


This! Having served multiple thousands of users at once from a cheap $15 DO box, I'd say the thing about Rails performance is just how you build it.

The tools are there, you just need to understand and use them.


> but I dont think you could find similar testing being done and Deployed at the scale of Github on any other framework.

?!

There's at least Spring.


It's been a very, very long time since I touched anything Java, so excuse my ignorance. I know there are many enterprise, internal web apps that use Spring. But are there any web companies that built their SaaS or consumer web app on Spring at the scale of Shopify and GitHub?


Aliexpress and Udemy come right to the top of my mind, but there are quite a few more (several German websites like Trivago). Spring Boot is used more outside the U.S. - Java is respected more in the rest of the world.


Most companies at that scale don't use a single framework or even language but there's probably more Spring at Amazon than RoR at GitHub, Stripe and Shopify combined.


Stripe doesn't use Ruby on Rails, they have a framework developed in house


I couldn't immediately locate it, but there was a famous example of an Amazon property leaking its Java stack trace; since I can't locate it, I also can't swear it contained springframework classes.


Pretty sure Netflix uses a ton of Spring.

It's one of the most common frameworks in the industry across languages.


Yes, we’ve been doing this at Shopify for quite a while now.


The Spring framework is the 800-lb gorilla comparison and I'd be surprised if Ruby was within an order of magnitude of Java.


That's because for business web apps, the code execution speed is a more minor factor compared to other latency issues.


ASP.NET


It is so nice to have the latest releases of both Ruby and Rails run in production as soon as they are released. Huge for indie developers and small teams that use Rails. Kudos to Shopify too, who are working on improving Ruby tooling a lot recently. Exciting time to be a Rails dev.


Github is one of the few webapps where I can feel daily that the framework used isn't enough. So many things get out of sync/not up to date, which are fixed by refreshing the page.


Is that a fault of Rails though or Github's architecture as a whole? In my opinion, Github is the result of hundreds (thousands?) of devs working on one product with little discussion across the entire project. Their HTML patching solution works great in isolation for one team shipping features to their island of the product. But we as users see it fall short at scale. Like you mentioned, things getting out of sync is a real pain point that doesn't have an obvious fix when you have so many people touching the product daily.


If anything that's a fault in the Frontend though, which is not what the article is about, right?


Yes, I just saw it yesterday. However, this is perhaps not of GitHub's making. Even when you use the recommended interface to git - the command line - it's only as up to date as the last command you entered - or the last time you pressed F5 in gitk.


I feel the same with Gitlab, with e.g. build pipeline status not updating when it's done - it feels pull- instead of push-based.

I mean it may be working just fine, but I don't see it - and the success rate of refreshing a page to see an update is so high that it doesn't instill confidence.

These tools should be more reactive, I think, with live progress indicators and the like.


Nothing wrong with pull-based. Actually, I feel like push connections are often less reliable, as they get lost and not reconnected.


Very true, but that's a frontend problem, not a rails problem.


These problems are always more common in server-rendered apps though, because front-end state is always a patchwork. And the Rails developers and community have a strong preference for this architecture.


I observe the exact opposite. Server-rendered UIs are far more often up to date than client-rendered apps.


I can't think of a single website that's real time and server rendered


I do not have examples with me. But anything built with Elixir Liveview will be server rendered and will also be real time, with some gotchas though.

https://github.com/phoenixframework/phoenix_live_view


It's not exactly server rendered, though. After the first page view, all the server is rendering is JSON graphs of DOM diffs to be applied by the client application. If you use the routing features then technically it's an SPA.


Mind you some parts of Github FE are React https://twitter.com/rauchg/status/1591464351990697984?s=46&t...


The upside is that you can link just about anything in GitHub, and those links work consistently and point at the exact right content. This is much less common with SPAs.


I agree it's a valid criticism of parts of the ecosystem, but a lot of people these days are using Next.js, Nuxt.js or SvelteKit, and those give you easy patterns to follow so that all those things work, basically for free. SPAs are better now than they were just a few years ago.


SPA != client rendering


I think it's a tradeoff - rails makes live frontend updating challenging because you need to maintain a separate system to manage that, especially when you have a large app. JS frameworks have this built in, but their backend is lacking compared to rails IMO


GitHub could've used Hotwire [0] but instead they decided to grow a buggy immature in-house solution.

[0]: https://hotwired.dev/


Even with my kindest interpretation, I cannot find how this comment adds anything to the conversation. It’s also incorrect. Hotwire was released after GitHub’s internal UI framework (which is quite impressive!) was created.

Attributing some UI bugs to their choice of framework is a massive oversimplification of the problem.


> Hotwire was released after GitHub’s internal UI framework

Any source for this?

My understanding is that it was extracted for building Hey from the use cases the 37signals people had on other products such as basecamp, etc.


Yes that’s correct.

I was saying it came after, meaning GitHub could not have used it since it didn’t exist. :)


Working with Ruby and Rails since 2010 and absolutely loving it. Never ever looking back.


> Instead of telling your team you found something in Rails that will be fixed in the next release, you can work on something in Rails and see it the following week!

That's awesome. Not only fixing it for your team but the entire rails world.


> Not only fixing it for your team but the entire rails world

Or breaking it for some.


We are also running on Rails - and loving it, too.

However, incorporating this methodology into our workflow would be both a lot of work to set up and also a lot of work to keep running.

There certainly is a size threshold under which this is clearly an overkill and we are under that threshold.

Yet, if this could be turned into a product, a clean integration, a command I need to run... I would definitely use it (and pay for it).


On the contrary, it is so much easier to upgrade a small app every week or so because you have little to test and probability of breaking changes affecting you is minimal.

You should be upgrading all the time since day one, adding necessary infrastructure gradually as your app grows.


Absolutely. I was leading a team a while ago and I instituted this practice to good effect.

Conversely, I was called by a company that I had built an app for previously. They had not upgraded the framework it was built with (Laravel), and ended up offering me consulting days to jump several versions. The irony is that the job ended up being quick and easy to do.


Laravel is relatively painless to upgrade as long as you have the autonomy to get the work done in a timely manner. I've seen upgrade projects drag on for weeks causing issues upstream since most of the team was still working on the product or fixing bugs while one developer was tasked with the upgrade.

Did you use a tool like Shift to help with the upgrade? What about frontend dependencies? The recent move from Mix to Vite is great, but if you have a large frontend, it can be a major PITA to update. More so if you have any sort of custom webpack configuration.


Yes, Laravel does a good job of making upgrades relatively simple


You need to have the right infrastructure/organizational maturity tho.

For example in many organizations partial deployments don't exist and rollback is not easy, so having a "once a quarter we test everything and bump" is less traumatic and expensive than handling minor breakages every week.

You can invest into that infrastructure/maturity but it may not be the best use of your time and money if you don't even know that your company will exist next year.


I've had similar discussions with our security guy and a few of our build infrastructure guys, and yeah.

I think the pattern to consider is: (i) Yes, this would improve our deployment speed/upgrade speed/security posture, possibly by a huge amount. (ii) We have much bigger problems with higher impact.

Like, at work, we could spend a month or so to setup something like dependabot for our private stuff and I'm pretty sure we could get to a point of deploying these dependency updates quickly - or, for less critical systems, automatically even. And it would be cool.

But that won't help us with some of the flagship products in the company that have C++ dependencies on EOL windows components and no automated deployments. We'd rather have the capable guys working on these nasty issues, since these upgrades for the modern products can usually be done by a junior dev in a few days for all of these smaller and well-controlled systems.


The main cost is maintaining high test coverage so that you can be confident in your automation


Going a bit off topic, Arch Linux does mostly the same, and thanks to it all of its users are direct testers/reporters of unpatched projects, with the exception of a few minor chore-related patches for projects that don't allow customization (mostly paths). I guess this has probably improved the overall health of Linux beyond Arch itself during the last two decades, for reasons similar to the ones given in this article.


Yes it’s great people use it and report issues. I found and reported a years old bug in Erlang that was exposed by a zlib update. Arch makes it easy to isolate and rollback dependencies which was helpful to isolate the change. On the other hand, Debian/RHEL users never had to know about this complete showstopper bug since they never ran a system that had the new zlib and the old Erlang at the same time.


I must be the only person on Hacker News who dislikes Ruby.

It simply has too much magic and doesn't offer enough abilities (unlike Elixir or any Scheme variant) to justify the dynamic type nature.

Go and Rust are perfect for me (and maybe Zig?). They covered everything I need.


I love Go but Rails is just unmatched in terms of development velocity. You're right that Go eschews magic and embraces verbosity but in doing so it makes common tasks, like web development, take 10x longer than languages that do the opposite.


It's only magic if you don't know what's going on under the hood. I felt this way for the first couple of years until I figured it out (experience matters as with every other language/platform). Then there was no magic, just conventions. You can do crazy things with Ruby and people tend to do them thinking it's "cool". Just don't. Rails doesn't have many and they are pretty well documented.


No, you aren't the only one. I used RoR at its peak hype in 2009 and I wasn't impressed. Even super boilerplaty alternatives at the time (like Spring 2.x and iBatis) were a lot more manageable in my opinion. And the few good ideas from frameworks like RoR and Django have long been stolen by all ecosystems, so I don't really see the point anymore.


None of the languages you listed/prefer existed when GitHub was launched, lol.


I've been using Rails and GitHub since the first 2.x days. I have been expecting an announcement that they are switching to .NET since Microsoft bought them. I'm guessing it's too big of a bite, even for Microsoft, thank Ada. I love being able to point recalcitrant policy makers at my company at GitHub as an example of a top web app -- the context for which they already know very well -- being a Rails monolith.

I'd love to see their Gemfile.


> I'd love to see their Gemfile.

I would bet it's present in GHE's OVA (https://enterprise.github.com/releases/3.8.1/download)


Ruby and Rails are the single reasons why my previous startup worked out. We had a small engineering team and had to move incredibly fast. Without the magic we wouldn’t have been able to ship at insane speed. I tattooed the ruby logo on my arm as this was an absolute life changer for me. Having financial freedom now lets me spend so much time with my kids. I’ll be forever grateful.


I totally feel you. Ruby & Rails are part of the reason I work way less than most of my peers for the same amount of money or more.

Being able to build publish-ready things, alone, within only weeks instead of months is a total life changer if you like building things.


It's been interesting looking into their "leaked" source to see how the application is architected. It's surprisingly accessible given the overall size.


Do you have a link? Sounds interesting


He's probably just referring to the self hosted version of Github.

https://enterprise.github.com/trial


Correct! It was reported a few years back that Github's source code was leaked and it just turned out somebody deobfuscated the source shipped as part of Github Enterprise's publicly-available VM image. I don't have a direct link, but it should still be possible to run the latest image through a script to get the source.


I think the big lesson here is writing excellent tests and making them easy to execute makes your code easier to change without fear of breaking something.

Unfortunately I’m fairly certain the message that will be heard is ‘We should totally change something around on the customer every week, that’s what GitHub does’


> Ultimately, if more companies treated the framework as an extension of the application, it would result in higher resilience and stability. Investment in Rails ensures your foundation will not crumble under the weight of your application. Treating it as an unimportant part of your application is a mistake and many, many leaders make this mistake

I really like the sentiment of this quote but that is an easy thing to say for a behemoth like GitHub.

I find these blog posts from Google, Meta, AWS etc. super awesome since they run into and solve problems smaller companies simply don't have. And they shape how smaller companies solve similar issues at smaller scale. But doing a setup to be practically on bleeding-edge Rails, for example, is not something every company can afford. They need LTS releases and the like. Still awesome that they managed to achieve this.


Good post and an interesting glimpse into the behind-the-scenes efforts that goes into maintaining Github.

I cannot agree with the "Should I do it too?" section. It probably works very well for an org as large and dedicated as Github, it very likely makes a lot of sense for what they do at the scale they do it at.

For most of us that are consumers of technologies and frameworks, treating the framework as an extension of an application stack, as the blog has outlined, is a terrible idea; it requires devoting time and effort towards its upkeep, and that is not our core business, nor should it be.

That does not, however, mean it's being treated as an unimportant part of the application stack, and implying so is judgmental. Its importance is that you need to be careful about the tech choices you make and understand what the implications will be down the line.


I absolutely agree that it's very rare for a non-software engineering organisation to care one bit about its tech stack, at least outside of the IT departments. As long as things work reasonably well at a reasonable cost, the organisation will likely be happy about it. This is partly because IT (this includes software development) is seen as an expense center which provides a support function, akin to HR. Maybe some people will think this is silly, especially in companies where every employee spend all of their working time on some sort of digital device, but that's the reality that I have spent the past two decades working within, and I don't particularly mind it.

I'm curious as to why you would disagree with what Adam Hess writes about handling updates though. To me he respectfully outlines that GitHub has engineering capabilities that allow them to update their Ruby on Rails weekly, and that it's a good idea to do so, if you can. I can follow you as far as how the application stack isn't the core business in the sort of organisations you and I seem to work for, and weekly updates likely shouldn't be your goal, but I do think that you should allocate the resources needed to keep things relatively up-to-date.

I'll give you an example of how I don't follow my own advice. We have what has developed into an important asset management system that we build in-house, which, due to a lack of updates being prioritized, now cannot be built on the LTS version of its core tech. This isn't a major problem today, because it's on an internal system on a virtual network which is heavily protected, but it's also gotten to the point where it will likely need one or two people to work full time for a week to a month to get it updated. Doing that might actually be cheaper than having kept it sort of up-to-date, maybe with quarterly or even yearly updates, but if we couldn't get those prioritized, how do you think we're going to get a week to a month for updates prioritized? :p


> As long as things work reasonably well at a reasonable cost, the organisation will likely be happy about it.

Or maybe the organization is just uninformed of the issues. Most management of organizations lack a view into 80% of the issues at hand.

> some people will think this is silly

It's sad rather than silly.

Like when I go to a supermarket and can't checkout because the terminals are down... right it's not a core part of the business and just a cost center. So I decide to order online and their checkout form goes into an infinite loop and I give up... they've lost customers.


> it requires devoting time and effort towards its upkeep, and that is not our core business, nor should it be.

I would try to argue that this in fact may be subjective.

For me, updating all the dependencies in my stack every morning is a productivity routine that gets me started. If I spend 30 minutes studying the changelog of a dependency and the linked GitHub issues, I don't see that time as wasted even though it often does not contribute to the business. I think it has three benefits; a sketch of the routine itself follows the list.

- easy morning start routine to get going

- developer education, expanding horizons

- code climate; having a warm feeling in the gut of not falling behind, wanting to avoid dirty plugs and monkey patches, and the ability to work on HEAD should an issue arise.
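
To make that concrete, a rough sketch of what the routine can look like for a Rails stack (standard Bundler/Yarn commands; the gem name is just a placeholder):

  # list gems that have newer versions than what Gemfile.lock pins
  bundle outdated

  # after reading the changelog/issues for one gem, bump only that gem,
  # keeping shared dependencies where they are when possible
  bundle update --conservative some_gem

  # same idea for JS dependencies, if the app uses Yarn
  yarn outdated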


I think the parent comment is referring to the act of submitting code changes to Rails directly whenever behavior is needed. Staying up to date with what your dependencies are doing and why makes sense for any project, but actively contributing to the dependencies isn't necessarily a great investment when it doesn't benefit your bottom line in a direct and timely manner. GitHub has a large enough staff that it can dedicate time to open PRs to the Rails repo without giving up its own productivity, but if you're looking at a team of 10 developers, taking time away from your own product to improve Rails can be a significant time cost.


> For most of us that are consumers of technologies and frameworks, treating the framework as an extension of an application stack, as the blog has outlined, is a terrible idea

My team currently does something similar albeit with a smaller framework (we are also much smaller than GH!), and it has done wonders for the stability of our flagship app. Imo it's only a terrible idea if your company does not want you to do it.


> and that is not our core business, nor should it be

And that's the problem. Tech is an integral part of most businesses now. Chances are if there's an outage you'll lose customers. It's a core part of your business - possibly one of the most important.

Often the #1 reason why something can't be done is that the tech behind it doesn't support it.


I wonder how much they end up having to unit test gems they use, and how much they have to maintain gem compatibility for those maintainers..

I have to imagine this leads to either GitHub being the defacto lead maintainers for those gems, or GitHub removing gems from their stack and writing their own code.


I'm curious what their data access layer looks like underneath that monolith. Is the Rails piece mostly now just a frontend for dozens of other services, properly owned and maintained by other teams? I don't mean to trivialize something that's obviously huge and complex as "just a frontend", but IME one of the biggest things that breaks down in a Rails monolith as it scales is heavy, direct usage of ActiveRecord. Either there's lots of DB migrations happening since there's so many different developers working on different features, which makes development with a shared DB tricky, or the scale makes DB performance problems hard to diagnose since they cut across many teams or tables in complicated ways.


Can't speak to how GitHub does it, but in every Rails org I've been at, eventually we create secondary services that own a specific domain, and the primary app becomes a gateway that clients use to talk to those services. It's rare to spin up a new service in a new language/framework. A typical pattern is to migrate the service within the monolith into a Rails engine, and then move that to a new app where the app mounts only that engine, or one can find a way to make the monolith deployable with only certain engines mounted, like with an env var.

A Rails engine is basically a self-contained Rails app, including routes, which you can mount inside a host Rails app at any route of your choosing. They're usually used to build reusable libraries, but this use case also works very well.
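
A minimal sketch of what that looks like, with a hypothetical `Billing` engine:

  # lib/billing/engine.rb inside the engine gem
  module Billing
    class Engine < ::Rails::Engine
      # keep the engine's models/controllers/routes namespaced
      isolate_namespace Billing
    end
  end

  # config/routes.rb in the host application
  Rails.application.routes.draw do
    # everything the engine's own routes file defines is served under /billing
    mount Billing::Engine => "/billing"
  end

Once the domain lives in an engine, moving that mount point from the monolith to a thin host app is then mostly a routing/deployment change.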


I've heard this several times over the last few months. Like what makes several devs working in the same area of the code "hard"? In my experience whoever is lucky enough to commit first gets the easiest of it; everyone else just rebases and resolves their conflicts. Maybe if you don't rebase and merge instead? I've seen some screwed up stuff happen from bad merges... like entire lines of code vanishing.

But generally, I've worked with hundreds of devs in the same code base without issue. So, why do people ask this?


Well, I gave a specific example in Rails, using DB migrations against a shared DB. It's not an unsolvable problem, and of course each dev can have their own DB, but if this is poorly managed it can easily become unwieldy. Outside of that, if many devs are constantly making dependency changes such that every time you "git pull" you have to rebuild environments, etc. Maybe devs are adding features but not prepopulating dev environments with sensible test data, so your dev environment gets horked. Etc., etc. It's not usually about merging the code itself.


Some of their points reminded me of practices I consider to work well, also for smaller teams.

* Parallel builds/CI to look into the near future: current + next. When the time is due, current and next need to become green and the next alpha becomes future, which is allowed to fail (3 parallel tracks: current, next and future). Keeps all your build tools on par early, too, so there is less rot, and one team can concentrate on the migration while other teams aren't interrupted by it. Even for a single developer or only a handful, you get better change management (at the cost of the extra computing power).

* It feels less like a monolith if it's in a dynamic language (not compiled / transpiled / linked) and spread across many files (there is little to build, and deployments are smaller and faster). In such projects, increments are possible across multiple axes in a synchronized dance.

* Pipeline everything and continuously shift left. Identify the most important improvements and if they can step-break the process well, apply the earliest one coming from the developer perspective. Any such change will speed up any change after that step.

* Implement fast turn-around with the parts that change often so you can change them often well. Cooperate well with upstream, this is most often a key to ongoing success.

* Distributing traffic over multiple application servers and being able to deploy different configurations / revisions and directing part of the traffic to it is invaluable. Have your tools/systems configured well to make it easy and comfortable to do this way. Production must not feel like an all or nothing game any longer, but serves as experimental playground.


Even as a much smaller team, building Heii On-Call [0] as a lightweight alerting/monitoring/on-call rotations SaaS based on Ruby on Rails has basically been a pleasure!

And as the article highlights, perhaps the key reason for smooth deployments and upgrades is that the CI testing story is so, so good: RSpec [1] plus Capybara [2] for us. That means we have decently extensive tests of just about all behavior. The few small Rails and Ruby upgrades we've done have gone quite smoothly and confidently, with usually just a few non-Rails gem dependencies needing to be manually updated as well.

The "microservices" story is where we've pulled in the Crystal programming language [3] to great effect. After dabbling with Go and Rust, we've found that Crystal is truly a breath of fresh air. Crystal powers the parts of Heii On-Call that need to be fast and low-RAM, specifically the inbound API https://api.heiioncall.com/ and the outbound HTTP(S) prober background processes. I've ported some shared utility classes from Ruby to Crystal almost completely by just copy-and-pasting ___.rb to ___.cr; porting the tests for those classes was far more onerous than porting the class code itself. (Perhaps another point of evidence toward the superiority of RoR's testing story...)

The front-end story is nice but just a bit weaker. Using Hotwire / Turbo successfully, but I have an open PR to fix a fairly obvious stale cache bug in Turbo [4] that has been sitting unloved for nearly a month, despite other users reporting the same issue. I'm hopeful that it will get merged in the next release, but definitely less active than the backend side.

For me, the key conclusion is that the excellent Ruby on Rails testing story is what enables everything to go a lot more smoothly and have such a strong foundation. I'd be curious if any GitHubbers can talk more about whether they too are using Rspec+Capybara or something else? Are there internal guidelines for test coverage?

[0] https://heiioncall.com/

[1] https://rspec.info/

[2] https://github.com/teamcapybara/capybara

[3] https://crystal-lang.org/

[4] https://github.com/hotwired/turbo/pull/895
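
For anyone who hasn't used that combination, a minimal RSpec + Capybara feature spec looks roughly like this (the route and page text here are made up):

  # spec/features/sign_in_spec.rb
  require "rails_helper"

  RSpec.feature "Signing in" do
    scenario "with valid credentials" do
      visit "/sign_in"                          # drives a (headless) browser session
      fill_in "Email", with: "user@example.com"
      fill_in "Password", with: "secret"
      click_button "Sign in"

      expect(page).to have_content("Dashboard") # asserts on the rendered page, JS included
    end
  end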


My startup is using minitest (mainly for running tests in parallel I believe) and capybara.
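
If it's a Rails 6+ app, the parallel running is likely just the standard one-liner in the test helper (a guess at the setup, not a description of theirs):

  # test/test_helper.rb
  class ActiveSupport::TestCase
    # forks one worker per core and splits the suite across them
    parallelize(workers: :number_of_processors)
  end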

Crystal looks great, are you using it mainly for type checking? If so why not Sorbet?


We're using Crystal for "premature optimization" for the parts of the system that need to scale, specifically:

(1) the API server at https://api.heiioncall.com/ which gets hit frequently for check-ins, e.g. cron job monitoring

(2) the outbound probe processes that do website monitoring, polling your desired URL every minute and making sure it's up!


This is a very interesting cautionary tale for those who recommend Rails because GitHub uses it. It's hard to imagine another core framework where the "best" way to use it is to dedicate an entire department to continuously working off the bleeding edge. And if you read between the lines, changes that GitHub needs to make are within Rails itself, and as we know there are core Ruby and Rails maintainers working at GitHub. This is a cautionary tale because of how much overhead GitHub needs to make this work. If your company has the clout to hire core Rails maintainers and the staff to be able to focus on working off the main branch of one of your framework dependencies, then go for it!

You should also read about Github's history with "Rails" and how challenging it was for them to make it fast enough for them and upgrade it. It's pretty interesting they got around that "challenge" by throwing a LOT more resources at it. It's interesting because this isn't usually a challenge with other major framework upgrades. It makes sense you would need this much investment given the hyper-dynamic dangers of Ruby (not just the language, but the ecosystem), making it difficult and risky to upgrade.

TL;DR the advice in this blog post does not seem applicable to most companies.


This makes so much sense. I think every company should update deps frequently, and Dependabot and other services like it can help with that. It would probably be too costly for smaller orgs to attempt what GH is doing, though.


Absolutely love this approach and I think it makes a ton of sense for an open source focused company - however I haven't seen this approach in any company I've worked for in 15 years (where I have helped with most of the Rails upgrades!).

There's a ton of merit to being on the latest release or close to it so you can get the latest security patches easier. The diff between that and the latest `main` branch seems like it has diminishing returns for most orgs.


meanwhile, i have to work on a rails 3.2 app with ruby 2.2


That's battle tested by now. Call it enterprise.


Those versions are so far removed from receiving any security patches I'd probably want management to sign off that they heard me state this and they take full responsibility if shit hits the fan.

Also, are you at least a little concerned about your career using a tech stack that old?


I work on multiple projects; the main one I work on I recently upgraded from Rails 5.2 to 6.1, and Ruby 2.7 => 3.0.


I learned Rails a year ago and haven't seen any sign that should concern me.


This is fine, until someone comes along and asks can we "just" do <this>? Someone tried to foist a Laravel app on me, to update it with some new functionality. It was so old, I had to setup a Linux box with manually-downloaded versions of PHP and the framework. Not being familiar with Laravel, I needed lots of docs and examples. They were impossible to find, and it turns out Laravel from that long ago doesn't work very much like today's version. You're in luck here, because Rails 3 works pretty much the same way as 6 and 7.


nothing wrong with Rails 3.2 :)

get it on Ruby 3.1 if you can - https://railslts.com


What's preventing you from upgrading?


the mindset "if it works, leave it there" edit: also it's a big project without specs, good luck at finding broken stuff...


Going to sound nuts but perhaps you can leverage ChatGPT4 to start generating some tests, based on some prompts for like 20% of your app. you just have to be pretty disciplined about how you write your specs.


actually, maybe not a bad idea


I've been using Codeberg, which uses Forgejo which is written in Go and is fast and light.

Fantastic that GitHub has managed to wrangle so many lines of code in a language I don't care very much for, but my Samsung A53's browser is snappier without it :)

Edit: to be fair, GitHub's has a hamburger menu that morphs to an X that I dearly miss. JK :P


Codeberg runs go on the frontend?


Is that two millions LOC for a single Rails application or across various Rails applications?


It's one single application. Most of the secondary services are powered by Go/Ruby, running independently in a non-monolithic way.

Ex-Hubber


One thing I just cannot wrap my head around is Ruby/Rails metaprogramming and attribution of things (as in "where this method/class/macro comes from?"), especially with so many authors trying to do something to the pre-existing stuff, including the standard library bits.

Like, a lot of the time I see some baz.foobarify() and have a really hard time understanding where the heck it comes from. RubyMine works wonders and control-clicking gets me there about half of the time, but for the rest it's either "search in all files, hoping that `def foobarify` is unique enough" or "this is crazy, but I'm gonna set a breakpoint, run it and see where this rabbit hole goes".

It feels kinda like Scala (or, to lesser degree, Haskell) magic operators problem, but worse.

I consider myself proficient with a decent number of languages and frameworks, but Ruby/Rails is one arcane mystery that just never clicked. So I really wonder what's the trick to make metaprogramming shenanigans manageable at this scale?


https://tenderlovemaking.com/2016/02/05/i-am-a-puts-debugger... is ruby gold for this kind of stuff. As a Ruby developer, probably one of the most impactful "quick tips" I've ever picked up is this:

  foo.method(:bar).source_location 
That is wildly useful when debugging/figuring out where the actual code for some method is at runtime.
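
A couple of related introspection helpers that pair well with it (`Foo`/`bar` are placeholders, return values illustrative):

  # where an instance method is defined, without needing an instance in hand
  Foo.instance_method(:bar).source_location   # => ["app/models/concerns/barable.rb", 12]

  # which class/module in the ancestor chain actually owns the method
  foo.method(:bar).owner                      # => Barable

  # the full lookup chain, handy when a gem mixed something in unexpectedly
  Foo.ancestors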


They call it a monolith and the way they describe it I think they are using that term for the application rather than the repo.


They keep saying "monolith".


> Every Monday a scheduled GitHub Action workflow triggers an automated pull request, which bumps our Rails version to the latest commit on the Rails main branch for that day.

That’s a bold move, as opposed to doing it at the end of the week or on the weekend.


I like Monday releases as if something goes wrong everyone is around to fix it. If something breaks on Friday it ruins weekends.

I think Monday requires more maturity and more successes as it prioritizes dev time over downtime. Saturday outages affect fewer customers but are hard on staff.


We don't even release on Mondays; it's too rushed. We ship between Tuesday and Thursday instead, and only during productive hours (e.g. 10AM-3PM). We want the engineers to be alert and calm.


I sure don’t want to be called on Saturday to be told that there’s something wrong with Friday’s build.


I’m sure paying customers don’t want outages during their prime usage days.


How do we achieve that? Releasing often and in smaller increments. A release causing an outage should be easier to catch than the subtle bugs that come creeping in one, two, three days later. Either way, if there is a catastrophic failure, it's better to have everyone readily available.


Are people still doing major releases on Friday or the weekend?


This is awesome, now I just wish they spruced up their UI with Hotwire :p How many times have you clicked the back button and found the same notifications unread?


Oh, and here I was told that RoR is only for quick-and-dirty prototyping and that "real" web applications are written in ___ (fill in blank)


I dunno. I lived through Internet Explorer, SCO and Linux patent lawsuit days so I reckon it's only a matter of time before Github goes ASP.Net.


I lived through Microsoft purchasing Hotmail and trying repeatedly to migrate it from FreeBSD to IIS/Windows2000. We may have a good 20 years.


A friend at GH was already writing mostly C# and Go before he moved to an engineering manager role 2+ years ago, so I believe it's already ongoing. Also he absolutely hated having to jump into components in RoR before Codespaces was available internally.


C# is used for Actions: https://github.com/actions/runner, and Go is used a lot for internal services. There is no traction rewriting our monolith in C#.


What does the current architecture of GitHub look like? Apart from Rails, I have heard that they use Go as well.


> scheduled GitHub Action workflow

..sounds like they're not dogfooding dependabot? curious if anybody knows more/why


Dependabot is based on releases from the various package repositories; running off main is pre-release - hence they’re probably using GitHub Actions to pin their Gemfile-defined Rails version to a commit hash.
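
Presumably the automated PR just moves a git pin in the Gemfile, something along these lines (the SHA is made up):

  # Gemfile — track Rails main at a specific commit instead of a released gem
  gem "rails", github: "rails/rails", ref: "0123abcdef"

  # or follow the branch and let Gemfile.lock record the resolved SHA
  gem "rails", github: "rails/rails", branch: "main"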


Dependabot is for security issues, not this?


dependabot can also be used for regular version updates


> Every Monday a scheduled GitHub Action workflow triggers an automated pull request, which bumps our Rails version to the latest commit on the Rails main branch for that day.

So uploading code to a github repo makes github server execute it later on. I wonder if this would qualify for a bug bounty... /s

Edit: I guess not, since this is just a pull request and still requires approval before merging to main. I assume.


Don't work for GitHub, but it would not.

First, it opens a Pull Request, not an automatic merge, so hopefully the code is reviewed before merge.

To exploit this, you would first need to get malicious code merged into rails master, which has many eyes, and then get past more eyes when it gets reviewed by GitHub.

Not impossible, but if you got your code merged into rails master, you have wiggled your way into many more environments than just GitHub.


Even if it was not just a pull request, it's highly unlikely that it would qualify.

We can assume that only trusted maintainers can put code into the Ruby main branch. Similarly, every build system depends on many many signed packages. It is not much different than that.


Does anyone know if Github uses Sorbet for their Ruby and Rails code?


We do.


Are there other languages where the dominant web framework changes so much? Will Rails ever be "done"?


It can never be done as long as the web keeps evolving and the web shows zero signs of slowing down evolution

And those changes are the reason rails is still really good, modern, and not considered legacy software almost 20 years later


.NET and Django are the pillars of backward compat and stability IMO.

Change is somewhat inevitable though


I hope not.


[flagged]


Was this comment generated via ChatGPT? Not trying to catch you out, just curious if I'm able to successfully spot it in the wild.


I also got very "ChatGPT" vibes from that comment but thought I was being too paranoid by the last paragraph


Got the same vibes. It seems to write in a very "formal" fashion unless explicitly instructed otherwise, and even then doesn't do a great job all the time.


I passed their comment to ChatGPT 3.5 and it seemed confident that ChatGPT wrote it.


For the record, this (asking the model if the output is theirs) is not a reliable way to determine AI authorship of comments. The false positive rate is quite high, and will return true simply for a comment being well written and lacking idiosyncrasies.


I don’t think ChatGPT can tell if something was written by ChatGPT. At least not reliably.



ChatGPT is probably not trained for this. The classifier seems like a different model/product. Can access it in the playground here:

https://platform.openai.com/ai-text-classifier


You caught me red-handed! Yes, the previous comment was actually generated by ChatGPT. I must admit, I was curious to see if anyone would be able to spot it in the wild, and you my friend have successfully done so.

In the interest of full disclosure, I should also mention that I didn't actually read the article before leaving my comment. I know, I know, shame on me!

But on a more serious note, I think it's important to recognize that while AI has made incredible strides in recent years, there is still a lot of work to be done on the creative side. Many texts generated by AI tend to have a similar structure or tone, and it's not always easy to tell them apart from human-generated text.


What prompt did you use? I immediately smelled chatgpt from the comment too. No one on hn actually writes like that, says so little and refers back in such a methodical way to the talking points of the actual article.


The true test is "is it actual marketing-speeech or chatgpt". There are few easy things to catch like

>and this has resulted in real tangible benefits such as better database connection handling, faster view rendering, and improved security posture.

No actual honest tech person would ever say "security posture"

It also generally read like someone trying to sell tech to managers. I wouldn't blink an eye if something like that was posted on a corporate blog, because that's the exact type of worm-speech they use.


This one is also gpt. Can we please keep hn comments human only?


I apologize fellow human. The world has changed, the genie is out, and all that is left is bullet-point communication.


Am I the only one already bored ****less by everything ChatGPT?


Interesting, reading GPT output with an expectation of human quality made its shortcomings a lot more obvious to me. Thank you for that experience. In particular, it stood out that significant parts are an oddly direct regurgitation of input text, suggesting they didn’t undergo translation into abstract form then back to text, and that it doesn’t pick out interesting or unexpected things to respond to, instead coming up with some comment on many parts of the input like it’s a checklist.


You're welcome. I had to remove the last paragraph from the output as well, as ChatGPT seems to like to summarize its output one last time. I see that a lot in its output and it's a dead giveaway.


> As someone who has experience working with large codebases, I am thoroughly impressed by GitHub's approach to upgrading their Ruby on Rails monolith. It is clear that the GitHub team has invested a significant amount of time and resources to ensure that their application runs on the latest version of Rails and Ruby, and this has resulted in real tangible benefits such as better database connection handling, faster view rendering, and improved security posture.

To be fair, they did spend years not adding features. (Only really starting to add features when the Dear GitHub letter came out) It seems like they spent that time wisely just investing in stuff like this. GitHub are in a unique position where they spent so long ignoring features while having a large team. I feel like for the majority of teams these sort of investments are rarely possible because feature work is required.


I downvoted because it just seems like you wanted to bring up the Dear GitHub letter.

It's been years... 7? with great features that go well beyond the letter. Yes gh was stagnant but this is pretty well in the past and your points have almost no current relevance.


> I downvoted because it just seems like you wanted to bring up the Dear GitHub letter.

I only brought up dear GitHub because I am sure people wouldn't have had an idea what I was talking about.

> Yes gh was stagnant but this is pretty well in the past and your points have almost no current relevance.

When my point is that they used a bunch of time that other companies wouldn't give to build a solid foundation, the past, that being the foundation, seems very relevant. My entire point is that they're in a very unique position, one that others can't really fairly expect to be in, since very few companies are able to ignore features for such a long time. This isn't to bash GitHub. This is to put into perspective what they've done and whether others can do it. It removes the whole "GitHub did it so..." thing that many will suggest down the line.


Another big Rails application was Twitter; it has since been rewritten, but the concept fit well with Rails. Twitter currently uses some Java framework as far as I know. Question: does Twitter still have some RoR deployed?


Every time it's pointed out that GitHub is a rails app I am reminded of one of my favorite security exploits ever

https://news.ycombinator.com/item?id=3663197


All the cool kids left Ruby for Go, why are they still in that land?


Since they left ruby (2.0) raw rails performance improved by 75% (3.0)

https://www.fastruby.io/blog/assets/images/RRBPerfHistory_72...

Then in 3.2, ruby released YJIT which improved median rails response time by about 100%

https://raw.githubusercontent.com/easydatawarehousing/ruby_m...

It's kind of a golden era for rubyists that stuck it out. Beautiful maintainable code that hits C performance for many tasks.
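
For anyone wanting to try it, turning YJIT on doesn't require code changes, assuming a Ruby 3.2+ build compiled with YJIT support:

  # one-off check from the command line
  ruby --yjit -e 'puts RubyVM::YJIT.enabled?'   # => true

  # or enable it for a Rails app via the environment the server starts with
  RUBY_YJIT_ENABLE=1 bin/rails server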


Didn't know that, thanks for the update. Actually, now I want to give it a try, remembering my old romantic days.


"... hits C performance for many tasks"

Really?


particular improvements that come closer to C are related to string operations and regex

https://tomaszs2.medium.com/ruby-3-2-0-is-from-another-dimen...

here are some benchmarks from 2019 (ruby 2.5) related to string and regex operations. already certain operations were faster in ruby than C or go, particularly on longer strings.

http://jultika.oulu.fi/files/nbnfioulu-202001201035.pdf


Wow


Because they care more about getting things done than whether clout-chasers think they're "cool".


I get things done just fine with Go, and all the code is very clear. With Ruby it is not, and a lot of Ruby programmers may have a feel for how things work when they don't really. They just enjoy the syntax. The book "The Ruby Programming Language", which was written by the creator of Ruby (I forgot his name), is full of "unfortunately": unfortunately this works this way now, unfortunately that works that way now.

And people definitely chose Ruby because it was cool. So what you said should be applied to Ruby, not Go.



