Use Rails (jmduke.com)
87 points by mscccc 8 months ago | 84 comments



> (Really, though, it doesn't matter. Your software stack is almost certainly not going to decide the life or death of your business.)

This is the part that's going to sting.

I suspect, given the general admiration for Paul Graham on HN, that many people subscribe (if only unconsciously, and at least to some degree) to the idea that using tech X (where X would be Common Lisp in the case of PG, but everyone will insert their own pet tech here) can _by itself_ make your startup successful.

Whereas the sad truth is that choosing the wrong tech can definitely _kill_ your shop, but choosing the "right" one will not ensure its survival...


The key here though is that choosing a bleeding edge tech is more likely to be a problem because it's fairly untested.

I can tell you what ALL the problems with rails are. For the most part they won't even start to bite you until you hit scaling problems. By the time you hit scaling problems with Rails you can probably afford to pay engineers to solve the scaling problems, and/or port off at that point.

I love sveltekit and use it a bunch for my own personal projects, but it's too immature to recommend to others. Instead I mostly point them at Next.js if they want a javascript stack and Rails if they don't. I have created and maintained apps based on both platforms for 6 and 16 years respectively and know exactly what I'm recommending to people.


I was big on the Rails scene when it was huge in 2008, and saw a big exodus from it. I found that many of the "Rails" problems were solved by learning two languages: Ruby and SQL. People would go to crazy lengths to avoid looking at the queries they'd actually run. I can admit to not really having learned Ruby as a language by itself, and I can now see how much better my old code would've been if I hadn't just been blindly shoehorning Railscasts into it.

Similar problems for people who learned angular but not typescript, laravel but not php, etc.


When software engineers start a business, it's easy for them to hyperfixate on technical decisions because those are the types of problems they understand well enough to optimize.


Yep. The problem with the story of bikeshedding is that it assumes people know bike sheds but not nuclear reactors. There are plenty of cases of people who are great at nuclear reactor design but have no idea how workers should park their bicycles.


I've seen a really interesting pattern play out a few times now.

A startup needs to raise money and hitches onto the latest tech stack that grabs investors' attention. The startup raises on a valuation inflated based partly on the tech stack itself, meaning enough attention isn't given to the actual product or business model. Ultimately the startup runs into money problems when they can't live up to the tech stack hype and the valuation that went with it.

I saw this first hand with a startup picking PlanetScale early on. There's nothing wrong with the product, and it solves certain problems really well, but this startup grabbed it early because it was getting a lot of attention and completely missed that PlanetScale's limitations ran smack into what the startup wanted to build.


What limitations? PlanetScale has public companies running on top of it. If a startup found limitations then it's a skill issue.


This was when foreign keys still weren't supported by PlanetScale. The specific data model was heavily relational, and queries were very inefficient without foreign key support. Distributed writes were also important, and if I remember right those weren't supported either, though that's a really common limitation.

Assuming a particular tool works for all situations because it works for some is a mistake though. Plenty of companies use all kinds of tools, they're picked to match the specific use case and there is no magic bullet.


> The specific data model was heavily relational and queries were very inefficient without foreign key support.

I can't help but wonder if you're conflating the notion of foreign keys (and the usefulness of having them be indexed) with foreign key constraints, which ensure data integrity at the expense of write performance.

I have a narrow view of the performance of MySQL foreign key constraints and would be interested in learning of cases where they might actually improve certain queries.


I'm actually curious what the distinction is in your view. I've never really considered a column to be a foreign key when the constraint isn't used.

Having a column that we give business logic context to is useful, and indexing a column that should contain values for another table is helpful for query speed, but at least in my opinion they really aren't foreign keys unless that constraint lives directly in the database layer itself. I'd say the same for columns that are used as unique identifiers without actually adding unique constraints to the column.

There are good performance reasons to do either one if you're willing to take on the data integrity responsibility in the application code, but the column itself really is just a typed column if the constraints live elsewhere (again in my opinion, I think the technical definitions may ignore this functional argument).

Where I find foreign key constraints helpful for queries is when I need to be absolutely sure of the data integrity. Say I need to make a complex query that joins across three different tables based on foreign keys. If the table constraints exist, I know that (a) any value in a foreign key column is valid and the referenced key exists, and (b) if no rows are found, I can trust that it's just because none exist.

Without foreign key constraints, I may not know why the query didn't find any results. It could be because there just aren't any matching rows, it could also be that one of the keys is no longer valid (or never was). If I don't care about that second error state my query may not change much, but if I need to know why the query failed to match and handle any invalid data accordingly I couldn't do it.

When writing, I also much prefer having a single insert that I know will fail if the foreign key isn't valid. This could be done with a more complex query, or a transaction, but then I'd be taking on responsibility that could live directly in the db. Beyond the complexity there, I have to assume the database authors can write a more efficient foreign key validation check than I could from my end.
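To make the "let the insert itself fail" idea concrete, here's a toy sketch in plain Ruby (in-memory hashes standing in for tables; all names invented). It mimics the check a foreign key constraint performs for you on every write: the insert is rejected if the referenced key doesn't exist.

```ruby
users  = { 1 => "alice", 2 => "bob" }
orders = []

# Without a database constraint, application code has to do this check
# on every write; with one, the INSERT itself fails atomically.
insert_order = lambda do |user_id, item|
  raise ArgumentError, "invalid foreign key: #{user_id}" unless users.key?(user_id)
  orders << { user_id: user_id, item: item }
end

insert_order.call(1, "book")        # succeeds: user 1 exists
begin
  insert_order.call(99, "widget")   # rejected: no user 99
rescue ArgumentError => e
  rejection = e.message
end
```

The real constraint does the same thing but inside the database engine, where it can't be forgotten by any one code path.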

That said, what's been your experience handling foreign keys when the constraint is either unsupported or unused? Do you avoid it mainly for the bump in query performance, and if so how do you avoid that performance hit elsewhere in your code?


Excluding multi-column keys and joins, I'd describe a column as a foreign key in the context of a given query; i.e. when that query references a unique column in a JOIN condition on a related table. This differs from an explicit declaration of a foreign key constraint, which provides all the useful referential integrity characteristics that you alluded to. If you said to me "Database Foo doesn't support foreign keys!" I'd take that to mean that you couldn't perform queries with joins.

In my experience, working on systems where foreign key constraints are liberally applied has been a net negative. Certain classes of DML statements (ON DELETE CASCADE I remember as being infamous) are certainly worse in performance than they otherwise might be. As an administrator I remember being repeatedly and painfully hamstrung by the inability to make arbitrary DB writes, which may momentarily violate strict data integrity, but are necessary for immediate practical reasons.

Obviously data integrity suffers without explicit constraints, but I'd rather work to backfill and/or clean up messy data than deal with a frustratingly rigid and poor performing system. I've worked on a number of large-scale MySQL database deployments at various tech companies, and I can't recall many, if any, that required or possessed pristine referential integrity. I can see why it's conceptually compelling, and I appreciate how automated tooling can generate very useful entity relationship diagrams if FK relationships are explicitly spelled out by constraints.

I think the "performance hit elsewhere" only happens given the assumption that strict referential integrity is a requirement, perhaps in a banking context or another where messy data simply cannot be tolerated.


There's no perfect way to do anything in software, especially when it comes to a database. There are always tradeoffs.

There are times when I'd skip constraints and trust the application logic to handle it, sounds like you've run into those as well.

Personally I just have a really high bar when it comes to moving data integrity out of the db and into the app code. Stale data isn't usually my concern, I'm more concerned with how simply and clearly I can define the data contract with anything consuming the data.

When I can guarantee that a column marked as a foreign key will always be a valid foreign key, consumers aren't at risk of a whole class of errors when reading and writing data. Any one query may be marginally slower because the constraint is in the database and always executed, but I know for sure that I'll never have a frontend blowing up because they didn't realize the foreign key isn't really a foreign key, or a backend accidentally writing bad data because the foreign key value wasn't manually checked for validity before being written.

I can say that the larger the team I've been on, the fewer issues I've seen with data integrity living outside the database, assuming the project is architected well. When an entire team is dedicated to the database and another team, or teams, are dedicated to just the application logic that manages the db, it tends to be much better documented and tested.

Smaller teams have a tendency to move code around much faster and write fewer tests while still finding product-market fit. In those cases, database constraints are an absolute must in my opinion; app logic is just moving too quickly to have any faith in it maintaining data integrity, and teams are often growing quickly enough that stuff falls through the cracks.

Honestly as long as someone on the project is seriously considering the tradeoffs, the team is in a pretty damn good spot regardless of what their needs and preferences end up being though!


Indeed, design is the art of balancing tradeoffs I think. I appreciate your argument for constraints being a guard rail for developers who are moving fast and occasionally breaking things. I think nowadays you can enable and disable the checking of constraints in MySQL dynamically without having to restart the database, which would have greatly reduced my past frustrations.

Nice to chat!


I’ve worked at 3 large scale startups now that removed all their FK constraints due to performance and inflexibility issues. The lack of them never really caused an issue - I’ve never missed them. If you have a services / multiple db architecture you’re going to have to deal with dangling references anyway.

I think they’re somewhat obsolete as a concept.


> If a startup found limitations then its a skill issue.

"If it didn't work, you must have done it wrong" is maybe the most toxic sentiment in tech and in life. The Agile Coach's mindset.


Rails is absolutely fantastic for projects below 10,000 lines with 1 or 2 contributors, especially if you want a classic forms-based UI. And you can get a huge amount done under those constraints in Rails.

But as of a couple of years ago, Rails came with a number of drawbacks:

1. There was no really viable system of static typing that a significant number of people were enthusiastic about. See https://www.reddit.com/r/ruby/comments/105sdax/whats_the_lat... for a discussion.

2. The lack of static typing meant far less IDE support. Fewer documentation tooltips, less autocompletion, etc.

3. I used to do a lot of Rails consulting. And whenever I had to drop into a codebase with more than 50,000 lines or 5 active developers, it was generally a painful slog. Too many weird Rails plugins that stopped being maintained, too much magic, too many nasty surprises while refactoring.

Basically, smaller Rails projects were an absolute delight. Larger Rails projects, though, tended to feel more like a swamp. Tools like https://activeadmin.info/ could tip the balance where applicable.

I still think that small Rails projects are fantastic, and I don't think anything since has remotely matched Rails' productivity within that niche. There's just too much mature tooling, and much of it works together seamlessly. But not too many projects want classic multi-page apps right now, and small projects often grow up to be big projects.


Static typing vs not has clear pros and cons. I do appreciate that JS/TS offers you a choice but I personally prefer Ruby the way it is.

Your other comments are basically:

- ruby/rails has a great 3P package system - oh and yes you can choose "bad" ones

- ruby/rails lets you quickly write great code - oh and yes you can just as quickly write "bad" code


I have worked on a decent number of dynamically-typed Ruby and Python systems in the 50,000 to 250,000 line range, written and maintained by teams. This has never felt like a strong use case for dynamic typing. You end up losing:

- A lot of IDE support. The loss of documentation tooltips, in particular, can be painful in a team environment.

- The ability to change an API and immediately see all the affected code. This affects refactoring speed when making big cleanups. Massive updates I could do in a few hours in Rust might take 2 weeks on a big Rails project.

- Team-wide clarity on exactly what goes into key data structures. Can something be null? Does it allow numbers, or only strings? Etc.

With two developers and a small code base, you can keep most of this information in your head. And Rails is still unmatched for terse, clear code, plus off-the-shelf modules for many common tasks.

I'm not even sure that Ruby could be retrofitted with a really worthwhile type system, to be honest. JavaScript already required a lot of black magic, and in some ways, Ruby is even more dynamic. So perhaps Ruby is better left as-is, even if that makes it a poorer choice for projects that would benefit from static typing.
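As one illustration of the dynamism in question, consider `method_missing` (class and attribute names here are invented): the method being called never appears anywhere in the source, so there is nothing for a static checker to find.

```ruby
# No `timeout` method is defined anywhere; lookups are resolved at
# runtime via method_missing, which defeats static analysis.
class Settings
  def initialize(values)
    @values = values
  end

  def method_missing(name, *args)
    @values.key?(name) ? @values.fetch(name) : super
  end

  def respond_to_missing?(name, include_private = false)
    @values.key?(name) || super
  end
end

s = Settings.new(timeout: 30, retries: 3)
s.timeout   # resolved dynamically from the hash
```

Sorbet and RBS can annotate around patterns like this, but they can't infer them, which is a big part of why retrofitting types onto Ruby is so hard.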


Well, I don't mind agreeing to disagree :)

Similar points between your comment and the other reply. Still, I worked with Java in the past and Ruby has been a liberation for me. It lets me write my code in a simpler, cleaner, less complex way.

I work on one of the largest Rails apps and the added typing is usually more of a burden than anything - sure, that could be down to the implementation of typing, which is harder for Ruby than, say, JS (to your point).

Still, I don't mind others wanting to write typed code and finding more positives in it than I do - I'm just not convinced that static typing is ever objectively better than dynamic; it remains subjective.

TLDR: imho the quality problem of most large code bases is not due to dynamic typing but due to many other factors that lead to low quality code


The pros almost always outweigh the cons these days though, for any sizeable project. This may not have been true historically, when the performance of IDEs and compilers was worse, but it certainly is true today. I've spent most of my career in dynamic languages but have spent the last year and a half or so strictly in strongly-typed languages. There's a night and day difference in crashes/reliability, refactor-ability, and tooling. Dynamic languages shove all of your validation into runtime, where you may or may not be looking when failures occur, and refactors are fucking nerve wracking because it's impossible to know whether something really relies on that function or not. Tests can fill this gap, but that's a lot of additional code you now have to write and maintain and refactor and run in your slow af CI pipelines.


I suspect you'd find the same thing if you were a python or php consultant diving into python or php projects.


This is correct, but doesn't really change the discussion at all does it?


It does. The parent was making an argument that Rails is not great past 1 or 2 developers and more than X lines of code. I was pointing out that this argument is not a strong one, since the problem is not exclusive to Rails - instead hoping to focus on what is exclusive to Rails, so we can have a discussion about "Use Rails".


A surprising answer to issue 2 is that Copilots are filling in the gap. Rails, VS Code, and Github Copilot work very well. Both for autocomplete and explanation. I expect them to get better as context window size improvements arrive.


One new thing on the table is that AI (github copilot, openai) seems to perform way better on non-statically-typed languages. Anecdotally, the suggestions are way better in ruby than go.

I could guess why, but I'll let others speculate.


This goes hand-in-hand with the idea of using "boring" technology. If it's mission critical, I'd rather be using something battle-tested than the brand-new hot tech. There's more tooling, more documentation, and likely more searchable solutions to whatever problems you may run into.


I strongly agree with the article's points. I haven't used Rails myself - I'm a Django developer - but I like the benefits of sticking with full-stack frameworks. Rails is a solid choice, and Django, especially when combined with htmx, also enables quick and robust application development.

It's refreshing to read about someone who prefers a straightforward approach (using boring technology) rather than dividing work across multiple teams and technologies. Using a single, full-stack framework not only simplifies the development process but also enhances the overall enjoyment when working :)


I would totally expand "Rails" to "Django or Rails or Laravel or any other well-maintained mature full-stack web framework in the language you are the most comfortable with".

There are a lot of awesome options for boring technology! Even JavaScript has Next.js, which is pretty boring at this point, even if it is missing some things.


AdonisJS is also really great, very much inspired by Laravel (which was inspired by Rails)


First time hearing of that, not sure it passes the "battle tested" sniff test here.

No offense, might be a great framework but the gist of the blog post is A) what you know or B) Rails (if you don't know what to choose)


It might also mean that if you know JS then use AdonisJS. Which has become Rails of JS and is pretty old and mature too.


Wow I'm surprised I never heard of it - probably because it's boring and "old" for JS standards... And not backed by FAANG.

Thank you for sharing!

Though I couldn't find any well-known companies using it (re battle tested) - some might just not disclose, which is fine.


Yeah, my organization does not disclose, but I can say I've used AdonisJS in production in the financial industry for about 6 years now.


The article said you should stick with what you know and are productive with. For you, the article recommends Django as your first choice.


What is your ideal Django stack (from front-end, to backend, APIs, database, hosting, etc), using well-supported, solid tech?


One ideal setup I have is: Django + htmx + templates with jinja2 (to use macros) and styling with tailwindcss

The good part is that if you start early with a good design, it's fairly clear how to move from views to a REST API (DRF or Django-ninja) later and split the front end if you need to.


That's interesting - I'll have to explore HTMX and TailWindCSS further - one thing that has held me back is just looking at the HTML source, with all the odd tags, looks messy. I know that's an emotional reaction and not a technical one, but I do appreciate looking at code which just looks clean and neat to me...

I've been learning React / Django / Bootstrap (open to ideas on that, but it's just been "there" for me) / SQLite, Postgres / Stripe for payment, Docker, and hosting on AWS and exploring Fly.IO for hosting. I haven't dug into APIs, but curious your thoughts DRF / FastAPI / Django-ninja).

I know this is a Rails thread, so there might be a better place to have a discussion about it...

Thanks for your insights!


For templating, that is why I prefer Jinja2 with macros instead of Django templates.

  {% macro IssueCard(issue, type) -%}
     ... lots of HTML
  {% endmacro -%}
  {% for issue in myissues %}
    {{ IssueCard(issue, 'myissue') }}
  {% endfor %}
The alternative is creating template tags, which can be a lot of work, and it gets you out of the templates.

I have a folder called macros/ where I put all the macros I need, and then I use the import call from Jinja2.

I agree with you; having long and deeply nested HTML templates creates too much noise when developing. Jinja2 Macros help with that.


These days I prefer using a NodeJS framework like AdonisJS over Ruby on Rails, but Rails had an important role in getting us here.


What are you using for your ORM? Last I checked, Prisma is the new hotness, but its testing story is frankly unacceptable (i.e. manual DB purges, https://www.prisma.io/docs/orm/prisma-client/testing/integra...).


I use Lucid, which is also maintained by the AdonisJS team (it's built upon Knex). I do development with SQLite, then migrate to MSSQL for production (the latter not my choice but an organizational one, and it rolls with it just fine). It has served my needs well.


When Rails became popular I started using it on basic web site catalog / e-commerce projects in 2007. It was very difficult to deploy. Thanks to Heroku and other pioneering companies that is different today. However, at the time I left behind PHP and LAMP stack (Linux, Apache, MySQL, PHP) which provided very simple and cheap deployment. Years later I regret that transition from using LAMP to Rails. I wish I had just stayed using PHP, it would have saved a lot of time and difficulty.

Question for people here on HN: Could a similar position to the blog post be taken in 2024, ie "Just use PHP (and LAMP)"?


For web apps maybe. What's the "boring" / productive stack for desktop apps? There's this weird paradox with programming languages which causes unproductive stacks to become more popular because programmers like "difficult" stuff and they also generate more online activity.


> What's the "boring" / productive stack for desktop apps?

I've been trying to find it for years. I've started maybe 5ish desktop apps over the last decade and each time did the dance of "Qt can't possibly be it... can it?" and then googled and tried everything I could find. In my experience it's all pretty bad. Unironically, the best solutions I've found are either Unity/Godot or Electron.


It seems like you and parent are asking "What is the boring/productive stack for *cross-platform* desktop apps?" And the answer to that question is probably, as you say, something like Electron.

If you pick an OS, I think there are generally good answers. On Windows, it's .NET and C# with Visual Studio as your IDE. On macOS, it's Swift/Objective-C and AppKit, with Xcode as your IDE. For Linux? idk, is it the year of the Linux desktop yet?


Yeah, I would categorize this response as

> In my experience it's all pretty bad.



The boring, productive stack for desktop apps IS web apps.


For most web applications Rails is fair enough as a boring technology (as are other boring stacks).

Although it got my attention that Rails can work really well in production with SQLite (even more boring technology) for small sites.


Definitely would love to see more SQLite adoption, but hosting isn't as straightforward as it should be depending on your hosting choice (e.g. Heroku)

Also, I had slight issues with concurrent writes on SQLite, though things like a SQLite database per user could be neat, especially when it comes to backup/recovery.


Rails is great, as great as it ever was... I work with it on a daily basis. The only issue right now is the job market... and the fact that if you don't pair it with a shiny frontend framework like React, you'll have a tough time getting employment. This is even more so for newcomers to the stack, given the radical experience requirements for jobs in the sector these days.


When you get sort-of fluent in Rails, it's astonishing how quickly you can go from an idea to a product. It took me a couple of days before I started getting the hang of it; the docs and its website are incredibly well done. And when you get stuck on something, rest assured someone else on the web has experienced the same issue.


One strong point for productivity/getting stuff done fast in real world: good REPL. It's not just Rails of course, but worth noting when comparing interpreted languages to compiled ones. Rails console is not only a great development tool, it's also an invaluable way to tackle customer support problems quickly.


There are several reasons I've come to prefer phoenix over rails but a big one is definitely the experience of using livebook with it. The first time I saw someone use that to simultaneously diagnose and document an incident just blew my mind. Now a few years in it's one of the few things in programming that ever really lived up to my expectations lol.


I see the arguments and they resonate with me (e.g. boring tech). However, rails has some pretty unique footguns compared to other frameworks [^1]. I would vastly prefer some other framework that isn't built on a meta-programming language that essentially has a narrow niche (e.g. django).

Further, it's 2024, building on top of a language / framework without proper compile-time tooling (e.g. static types, capacity for an LSP) is a terrible idea.

[^1]: https://bower.sh/on-autoloading


This article is just as valid if you ran :%s/Rails/Django/g.

I use Django to run both www.fpgajobs.com and www.firmwarejobs.com and love it.


You may want to add some moderation features or otherwise increase friction for adding job postings for www.firmwarejobs.com because the very first listing that I see when I load the page is "Doing your mom".


HAH oh my god I hadn’t seen that yet.

Duly noted. Thanks for telling me.


Doesn't the article say exactly that?


Most of my fortune is because of Rails.


Use ASP.NET Core instead. Just as convenient for small projects, ten times as fast and very robust as the product keeps growing.


No, use Phoenix!


Rails is great, until it's not. Many of the successful companies built on Rails were started in a different world, without lots of external APIs (OpenAI, anyone?) to integrate with, user expectations around central identity, authz, and other things you'll want to talk to in order to serve a request.

In 2024, I wouldn't start a company or project based on a language and framework that doesn't have a great concurrency story (don't tell me how great Fibers are please). There are plenty of alternatives (nextJS, Remix, etc).

On top of that, the prevalence of OO antipatterns in the Ruby community (global mutable state, prevalence of inheritance over composition, complete disregard of SOLID) will eat up any initial productivity gains pretty fast.


My company (large, well known tech co) actively instructs engineers to not use concurrency features, in a language that is known for having "great concurrency", unless they have a really, really good reason to.

The vast, vast majority of workloads, especially at small startups, do not need a concurrency story outside of running N processes. Concurrency often gets in the way more than it helps unless you're actively trying to optimize something.
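For what it's worth, that "N processes" model is trivial to sketch in plain Ruby (the squaring "work" is a stand-in for a real job): fork a worker per job, give each a pipe, and collect results with no shared state to synchronize.

```ruby
# Fork one worker per job; each writes its result to its own pipe.
# No threads, no locks - isolation comes from separate processes.
workers = (1..4).map do |n|
  reader, writer = IO.pipe
  pid = fork do
    reader.close
    writer.write((n * n).to_s)   # stand-in for real work
    writer.close
  end
  writer.close
  [pid, reader]
end

results = workers.map do |pid, reader|
  Process.wait(pid)
  reader.read.to_i
end
```

This is essentially what a preforking app server (Unicorn, Puma in clustered mode) does for a Rails app, which is why "just run N processes" covers so many workloads.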

My opinion, of course.


Sounds like the language doesn't have "great concurrency" if you go out of your way to discourage it?


You never make outgoing HTTP calls, talk to a database or do any other IO? Genuinely curious.


Where did I say anything like that?


"The vast, vast majority of workloads, especially at small startups, do not need a concurrency story outside of running N processes. Concurrency often gets in the way more than it helps unless you're actively trying to optimize something."

The assumption that the ability to optimize something is a trivial nicety, rather than an essential tool to keep your business running and growing, boggles my mind every time I engage with the Rails community.

I get it that most startups in the early phase don't care if a request is taking 200ms or 1s and should just go with whatever is easiest to implement. The tendency to generalize this thought to mean that nobody ever needs to care about these problems is crazy though.


I honestly feel like you're responding to an entirely different person, despite the fact that you quoted my post.

Rails can scale horizontally just fine for a huge, huge majority of business cases. If you have a business critical use case that truly needs however many thousands of things done in a tight loop inside a single process...obviously Rails is not the tool you should be using.

I did not say "the ability to optimize something is a trivial nicety". I said "concurrency often gets in the way more than it helps unless you're trying to optimize something". I'm not sure if you're purposefully misrepresenting my words, but it does feel like you are, especially given the "so you never make HTTP requests" comment you started this thread with.


I also have the feeling that you are talking to a different person. I'm talking about latency, you are talking about bandwidth. I'm talking about concurrency, you are talking about parallelism.

You can scale up any Rails app super easily by throwing money at the problem, it's trivial. When a single page load or API request to it takes 20 seconds, it doesn't help you if you have 1000 servers that can respond to hundreds of requests in 20 seconds each, when you need that request served in 500ms. This means that while serving a single request you might need to do things concurrently (like make an http request to an authz service and do a db query at the same time).


The amount of APIs in the world that require concurrency within a single request scope to meet latency needs approaches zero.

In practice, you don’t make that db call until the auth request is done and the user is verified. In practice, you don’t make the outgoing api call until you already have the results of the db call, because you need your data to form the outgoing request. Etc.

APIs where intra request concurrency is needed absolutely exist. But they’re the exception not the rule, even at large scale tech co’s. And yes. Rails is a bad choice for those.


"The amount of APIs in the world that require concurrency within a single request scope to meet latency needs approaches zero."

I've never worked on any non-Rails API where this was true. The Ruby community keeps telling me this, but in languages where concurrency is well supported, it gets used everywhere to a great extent. I obviously don't have hard data to support this, but your claim seems pretty far fetched to be honest.

"In practice, you don’t make that db call until the auth request is done and the user is verified"

Of course you optimistically do any idempotent DB operations while waiting for auth to succeed if you care about latency.

"you don’t make the outgoing api call until you already have the results of the db call, because you need your data to form the outgoing request"

These dependencies of course exist, but so do parts of the graph where they do not. You might want to make 2 or 5 outgoing calls based on your DB query and have to wait for 2 out of these 5 to make another DB query. This is so common that there are libraries like https://github.com/uber-go/cff to explicitly model those dependency graphs, visualize them and resolve them at runtime.
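The fan-out-then-join shape can at least be expressed with plain Ruby threads (the sleeps below stand in for ~0.2s network calls; whether this composes well at scale is exactly the point under debate):

```ruby
# Kick off two independent "calls" concurrently, then join both
# before the dependent step. Sequentially this would take ~0.4s;
# concurrently it takes ~0.2s.
started = Process.clock_gettime(Process::CLOCK_MONOTONIC)

authz = Thread.new { sleep 0.2; :authorized }   # simulated authz call
row   = Thread.new { sleep 0.2; { id: 1 } }     # simulated DB query

# Thread#value joins the thread and returns its result.
result = { auth: authz.value, record: row.value }

elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
```

Modeling larger dependency graphs this way by hand gets unwieldy fast, which is what libraries like the cff one linked above exist to manage.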

My theory is that system designs like this are just impractical to implement inside Rails today, which leads heavily Rails-biased engineers to not even consider them, which leads Rails-experienced developers to never have seen them, which in turn fuels the sentiment that they are rare. I'm not saying that you fall into this category, but from my experience many engineers who have only ever done Rails in their career do.


Just out of curiosity, what are you doing with a great concurrency story in a company you start in 2024?


Make HTTP calls to OpenAI? Pusher? DynamoDB? KMS? SQS? Make a database query? Anything that you cannot or do not want to run in your monolith service.

You make it sound like it's sufficient to have concurrency support in 2024. My point is that it's necessary.


HTTP clients support pipelining for parallel requests, which doesn't require threads. Database calls in Rails 7 now have `load_async`. You can still have other services, outside of Rails. Would that cover it?

P.S. I'm a huge fan of Elixir/Phoenix, but didn't find the big need for concurrency in practice that Rails doesn't address somehow.


Yeah, there are small, niche solutions for solving concurrency for very specific use cases vs general support for this in the language. This doesn't help you when you are using the AWS SDK to make HTTP requests, or when you want to hide latency by making a DB query and an HTTP request concurrently.

I'm currently working on a large Rails App in my day job and lack of concurrency support is a major limiting factor for the growth of the whole company (~$10B market cap).


I understand your general point, but these solutions are not niche. They cover almost everything needed in web dev. The niche problems are the ones that arise at scale, which is when a business typically has the resources to solve them. (Hint: you don't have to solve them in Rails).

Large app / company challenges don't apply to new apps. Statements like "I wouldn't start a new app without a great concurrency story." or "I wouldn't start a new app without microservices." etc don't make sense, because new apps have different priorities (i.e. finding the fit, surviving, staying relevant).

I went through a phase of building Elixir/Phoenix apps for 3 years, thinking that I will switch permanently, but ended up coming back to Rails due to higher productivity in that stack. This was in 2015-2018, so maybe Elixir/Phoenix has gotten more ergonomic since. I'm not sure.


You can build a great NodeJS monolith, be very productive and not end up in a corner where you have to spend years of effort fixing your early technology choices down the road. It's not a zero sum game and completely rewriting millions of lines of business logic that your company is relying on for revenue is a hard sell.


You're saying that NodeJS monolith comes with no downsides compared to Rails at early stages. We disagree here.

From my brief surveys, nodejs ecosystem comes with less security out of the box, less standardized project structures, fewer thoroughly-designed and supported packages (vs cutting edge experiments), more complex upgrade paths, more projects getting abandoned, all of which can slow you down every day. Friction from these can be deadly.

On the other hand, being in the corner at scale doesn't force you to rewrite everything. Just the piece that put you there. You can leave Rails app to handle most things, and extract parts of specialized infra as needed.

Would be curious to look at specifics — what kind of pains your company is experiencing with Rails today.


"being in the corner at scale doesn't force you to rewrite everything. Just the piece that put you there. You can leave Rails app to handle most things, and extract parts of specialized infra as needed."

You can't easily, because this now puts a network boundary between your Rails app and the part you extracted. A network boundary that requires you to do IO, which will (by default) cause your Rails app to blockingly wait for a response (vs not doing so in other runtimes, i.e. Node).

"Would be curious to look at specifics — what kind of pains your company is experiencing with Rails today."

Some things I've seen in the last 3 month:

* Rails timing out after 30s while allocating 500MB of memory (mostly) in ActiveRecord to compute 5MB of JSON to return to an API caller

* 90% of a ~10s request latency spent waiting for downstream services to respond. Most of these calls could be fired off concurrently (i.e. `Promise.all` in Node). For 9s out of those 10s the Rails worker is sitting around doing nothing while eating up ~300MB of memory.

* trying to extract out Authorization to a centralized service (so that other extracted services don't have to call into the monolith in order to make authorization decisions) is a major pain as the monolith now has to make calls out to the centralized auth system in order to make authz decisions.


An internal network boundary is probably worth it for heavy jobs, since you usually don't want them to interfere with serving web requests (no matter the tech).

You probably already know what I would say to each of those examples.

> Rails timing out after 30s while allocating 500MB of memory (mostly) in ActiveRecord to compute 5MB of JSON to return to an API caller.

I can make a JS or Go program perform the same way. In fact the exact same thing happened in my shop with Go/Gorm. The key question is: how do you compute the 5MB of JSON? The devil is in those details. We changed the way we computed ours, and the issue was gone.

> 90% of a ~10s request latency spent waiting for downstream services to respond. Most of these calls could be fired off concurrently (i.e. `Promise.all` in Node). For 9s out of those 10s the Rails worker is sitting around doing nothing while eating up ~300MB of memory.

This sounds broken. Why is the worker doing nothing for 9 out of 10s? But like I said earlier, there are a bunch of ways to use HTTP1.1 pipelining to run them concurrently. (https://github.com/excon/excon and https://github.com/HoneyryderChuck/httpx support it, but you can also do that with Net::HTTP I believe) And you can still start threads, which are still concurrent while blocking on IO.
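A tiny demonstration of the claim above (with `sleep` standing in for blocking IO): plain Ruby threads overlap while blocked, so two 0.2s "downstream calls" finish in roughly 0.2s total instead of 0.4s.

```ruby
# Simulated downstream call: the sleep stands in for a blocking
# HTTP or db round trip, during which the thread releases the GVL.
def slow_call(name)
  sleep 0.2
  "#{name} done"
end

started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
threads = %w[auth billing].map { |n| Thread.new { slow_call(n) } }
results = threads.map(&:value) # join, collecting return values
elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
```

The GVL only serializes Ruby-level computation; threads blocked on IO run concurrently, which is what matters for the latency-hiding case discussed here.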

> trying to extract out Authorization to a centralized service (so that other extracted services don't have to call into the monolith in order to make authorization decisions) is a major pain as the monolith now has to make calls out to the centralized auth system in order to make authz decisions.

This seems unrelated to Rails. Not sure why monolith can't continue handling authorization.


> This seems unrelated to Rails. Not sure why monolith can't continue handling authorization.

Agreed. You can totally keep some data in the monolith and some data in new services, and stitch them together if/as needed: https://www.osohq.com/post/distributed-authorization


"I can make a JS or Go program perform the same way. In fact the exact same thing happened in my shop with Go/Gorm. The key question is: how do you compute the 5mb of JSON? The devil is in those details. We changed the way we computed ours, and the issue was gone."

The problem is ActiveRecord in my case. The data layout is not great (lots of joins through relationships, I think 12 tables or so). ActiveRecord objects are HUGE compared to the few bytes of actual data they hold.

What do you use (except raw sql) in Ruby if you cannot use ActiveRecord? There is no other ORM that's optimized for fast reads and I don't feel like writing one.

I actually reimplemented the same API endpoint in Go using https://github.com/go-jet/jet and measured 10MB of allocations and essentially zero overhead beyond the queries themselves, a 50x speedup.

Don't get me wrong, this is not what the typical Rails shop will deal with, but it definitely shows where limitations of Rails lie and I'm dealing with stuff like that on a weekly basis in my job, and it's not even a large Rails app (3M LOC).


I agree that if you make a very complex query in ActiveRecord, it could eat a lot of resources.

That said, I would highly recommend going with raw queries for this sort of complexity, no matter the language. There are usually 2 kinds of queries: normal ORM-powered CRUD operations (which can get moderately complex), and hairy specialized report-style calculations. The latter I always recommend to keep in raw, well-written, well-commented SQL form. You can still wrap it into some nice object.
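One way to render the "raw, well-written SQL wrapped into some nice object" idea (class name and query are made up; `conn` is anything that responds to `exec_query`, as ActiveRecord's connection does):

```ruby
# Hypothetical report object: the hairy query stays as raw, commented SQL,
# but callers get a plain Ruby interface.
class MonthlyRevenueReport
  SQL = <<~SQL.freeze
    -- hand-tuned report query: deliberately raw SQL, not ORM-generated
    SELECT date_trunc('month', created_at) AS month,
           SUM(amount)                     AS total
    FROM payments
    GROUP BY 1
    ORDER BY 1
  SQL

  def initialize(conn)
    @conn = conn
  end

  def rows
    @conn.exec_query(SQL).to_a
  end
end
```

Because the query bypasses ActiveRecord's object instantiation entirely, you pay only for the result rows, not for model objects built from 12 joined tables.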

You could write them in something efficient like Jet or Elixir's Ecto, but for such a complex case I'd argue that you shouldn't obfuscate SQL at all. For all other cases ActiveRecord works well.

If you are serving these results in real time, something like materialized views (in postgres) would move the burden of calculation to when data changes, rather than when data is viewed.

And to tie it back to the original convo: a very efficient concurrent language doesn't solve these root causes; rather, it gives you more time not to address them, and allows you to get away with more neglect. There's some value in that, but you have to weigh it against the downsides mentioned in previous comments. If the language+framework is super efficient and has no downsides to its ecosystem and ergonomics, then there's no debate, I'd just use that.



