It's not Ruby that's slow, it's your database (berk.es)
121 points by ksec on Nov 8, 2022 | 196 comments



I was left confused after reading this article.

1. It claims Rust is ~10x faster than Ruby, based on a benchmark that reads a 23 MB file and then iterates over the data a single time. In my experience, Rust is 20-100x faster than Ruby in purely CPU-bound workloads. But the author's main contention is that most work is IO-bound rather than CPU-bound, so probably not a big deal.

2. The author claims "it hardly matters that Ruby halts all code for 15ms to do garbage collection, if the fastest database-query takes 150ms". I've written applications that query Postgres databases with tens of millions of rows, where the 99th percentile response times are <10ms. I'm not sure why it just needs to be taken as a given that databases will take 150ms to return any data.

3. This flame graph from the article[0] seems to show that the vast majority of the request time is spent in Ruby parsing timestamps, rather than in the database. This seems to make the opposite point to the one the author is trying to make. I'm not familiar with this stack, so maybe I'm missing something; can anyone explain?

[0]: https://berk.es/images/inline/flamegraph_sequel_read.svg


I'm also confused.

2. Agreed. 150 ms is extraordinarily slow for a point lookup from a single table. A simple lookup should take around 1 ms since it'll be cached in memory.

3. Also agreed. The actual database time looks to be the first flame. Hovering over it shows PG::Connection::exec which accounts for 2.5% of the time.

I was curious about date parsing and dug up the source [1]. Seems like you could gain a ton of speed back by using a Postgres-specific timestamp-parsing routine. In Go, it's 40 lines [2].

[1] https://github.com/ruby/date/blob/d21c69450a57a1931ab6190385...

[2]: https://github.com/jackc/pgx/blob/a968ce3437eefc4168b39bbc4b...
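
The Ruby version of that idea is similarly small. A minimal sketch, assuming Postgres's default `YYYY-MM-DD HH:MM:SS[.ffffff]` text output and ignoring time zones (function name hypothetical):

    # Fixed-offset slicing: far cheaper than DateTime.parse's format guessing.
    def parse_pg_timestamp(str)
      usec = str.length > 19 ? str[20, 6].ljust(6, "0").to_i : 0
      Time.local(str[0, 4].to_i, str[5, 2].to_i, str[8, 2].to_i,
                 str[11, 2].to_i, str[14, 2].to_i, str[17, 2].to_i, usec)
    end

    parse_pg_timestamp("2022-11-08 09:15:30.123456")
    # => 2022-11-08 09:15:30.123456 (local time; real code must handle zones)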


Author here. Sorry for the confusion.

1. I didn't want to make the point that Rust is faster, per se, but to show why it is faster, because that teaches us a lot about when it matters. Indeed, IO-bound vs CPU-bound: the collecting/reducing is CPU- and memory-bound, the reading of the file is IO. That IO part should hardly matter; the processing is what makes the difference. Ruby is slow here; Rust isn't. Point being: when you are doing a lot of CPU-bound stuff, Ruby is probably a bad choice (and Rust a good one). But since in practice web services are almost all about IO (and some de/serialization), it matters less there.

2. I too have written PG-backed services (in both Rust and Ruby) where database calls are under 10 ms. In a typical SaaS/PaaS setup, however, there will be a network between app and db, adding to latency; localhost/socket is a lot faster, especially visible on queries that are themselves fast. The main point, however, wasn't the absolute measures but the relative numbers. When, relatively, your GC halting becomes significant compared to waiting times for the database, then certainly: Ruby is a severe bottleneck. This happened to me often on convoluted (and terrible) Rails codebases, where GC locking was measured in seconds (but where database queries were sometimes measured in tens of seconds).

3. The flamegraph indeed shows that DateTime::parse is the bottleneck in that particular setup. I tried to explain that with:

> The parsing (juggling of data) takes the majority of time: DateTime::parse. Or, reversed: the DateTime::parse is such a performance-hog, that it makes the time spent in the database insignificant.

But I also tried to spend time explaining all the situations in which this is mitigated. Yes! In this case Ruby truly is the bottleneck. But when we move to more real-world cases, these bottlenecks shift towards the database again, e.g. a write-heavy app, or one that uses complex lookups that return little data.

Again, sorry for the confusion. I guess the title simply doesn't match the actual content of the article very well, which is more about "when does the bad performance of Ruby, the language, really matter, and when doesn't it". I hoped the intro solved this, but I should probably have spent more time on a better title. Sorry.


If the query selects a small number of rows and columns, and uses an index, it typically takes less than 1 ms on modern hardware. At least with MySQL.

The ORM, however, might be slow. For example, fetching 100 rows from SQLite with SQLAlchemy takes 1 ms, but getting them as ORM objects takes 8 ms.


And the ORM is written in... Ruby? So then we are back to "Ruby is slow".


SQLAlchemy is python. Which is slower than Ruby :P


That's apples to oranges. Python is slower, but the SQLAlchemy ORM might be able to fetch results faster than a Ruby ORM. Though I'd assume the difference would be negligible.


Yeah. I'm confused, too.

Anecdote: I help maintain a pretty slow Rails app. I recently did some data munging in Go against the MySQL database that backs the Rails app. The Go tool was so fast, I thought it hadn't worked. It was basically instant. Accomplishing the same goals with Rails would have been slower by a factor of 10 in my experience.

I know Ruby != Rails, but if I'm doing this sort of thing in Ruby, I'm generally doing it in Rails or with a lot of the gems that Rails uses, so it's a fair comparison for my uses.


> I'm not sure why it just needs to be taken as a given that databases will take 150ms to return any data.

Run your database on a t2.small instance on AWS (1 vCPU, 2GiB RAM). Why would you do that, you ask? I don't know, but that's what we got on an old job.

This was also used to prove MongoDB is faster than PostgreSQL, even though Mongo was running on-prem on much better hardware.


Nope, single digit millisecond performance for me on those nodes when the tables are cached - anything else and you are complaining about performance of the storage medium.


How much data? We were doing a few million writes/day on peak days and the nodes couldn't keep up.


A few million writes a day is still well within the write performance of one of those nodes. But... we were talking about querying the data, no?


Well yes, but if your database is busy writing, it's going to have less time for reading.


I'm not sure what your point is. You said the nodes are slow. They are not slow, and will handle thousands of requests a second when configured correctly.

If you aren't getting that, then you are doing something big, something inefficient, or something stupid - and that would be the same on any size node.

Size your instances accordingly.


That’s covered on literally the line after the chart:

> The parsing (juggling of data) takes the majority of time: DateTime::parse


Yes, that's the part I'm confused about. This looks like a case where the vast majority of request time is spent in Ruby, so it would seem a faster language could give a significant speed up.

But right after seeming to acknowledge this, the author instead concludes that "even with a very poor performing ORM, the Database remains the primary time consumer".


I think my favorite part is buried in footnote 5:

> Ironically, the performance issue becomes less articulated in this non-http, non-rails context, yet in these cases people generally dismiss ruby as option, for its performance-issues. Which, catch-22, is one of the reasons Ruby is hardly used outside of Rails (and/or Web).

It's a shame that most people only write Ruby in the context of Rails. It's a lovely language, and performant enough for a wide array of tasks. It can be startlingly elegant and exceedingly productive.

Of course, it's valid to talk about Ruby performance in the context of Rails, as the author does. However, it's gratifying to see a more detailed discussion on what Ruby folks mean when they say "your database is usually the bottleneck," and the author does a good job at examining different aspects.

(All that said: Rails is still a really good choice. So much of our work is just writing CRUD apps, and it's kinda boring. Rails makes it boring and easy, and it's a really good tradeoff.)


> It's a shame that most people only write Ruby in the context of Rails

I fully agree with this! I used Rails for a few years (not professionally, FWIW) but didn't enjoy it that much, even though it's hands down the best-in-class framework: it beats anything in the JS or Python ecosystems (and claiming Django is anything close to Rails is just offensive to Rails).

But anyway, I've yet to find something that's as good as Ruby for daily scripts or task automation. Ruby is one of those few languages you can just read (I can't explain it well), and coming back to old Ruby code, the "wtf is going on here" moments are fairly scarce (sans code that touches metaprogramming/eigenclasses/DSLs, but even then it's more straightforward than in other languages). Lately I tend to just use Fish for most things (which are simple), but still reach for Ruby when the task is a bit more involved. Like, yeah, there's Python, but I never liked Python. That's not to say Ruby is without warts, but that's just... technology, I guess lol.


I don’t have some kind of grand, unified theory about this - but the people I know who genuinely enjoy Ruby on its merits seem to think about code differently. Not different-as-in-bad, but just… different. I too find it hard to explain.

> Ruby is one of those few languages you can just read

I think this has something to do with it, though. The stdlib has a lot of different ways to “say” the same thing (TIMTOWTDI, anyone?) and used well it can be quite legible. But some programmers I know find that really frustrating.

Maybe it has something to do with how one visually processes and reads code? How one might associate semantic meaning to things? I don’t know how, but it feels like there’s something interesting there to study.


Yeah, there's a reason that _why the lucky stiff was a Ruby person. Writing Ruby is (or can be) a more creative endeavor.

Some people really dislike the inability to recommend or see one unambiguous path to follow. These people are generally happier with Python.


I agree. I’m not a professional developer, but I sometimes make bots/daily scripts, as well as scripts for work that are beyond Excel, and I always use ruby. My colleagues use python for similar work scripts, but to me they are painfully difficult to read and understand.


I'm gonna play devil's advocate here: Ruby sucks in large codebases where conventions aren't well defined (unlike Rails, which has well understood concepts and abstractions).

Having no ability to do any kind of static analysis means a large legacy codebase will contain giant rabbit holes for you to fall into anywhere and everywhere you look -- unless, of course, the architect(s) responsible for the codebase thought very carefully about this and limited the number of abstractions. If they didn't, you're left stepping through all the metaprogramming and one-off abstraction implementations.

Obviously, my experience is anecdotal, but I've decided my current job is the last Ruby job I'll take. I don't see any reason to pick Ruby over Typescript if you're not using a popular framework. Types save an incredible amount of time when you're ramping up.


I think the issue is with what GP calls "boring CRUD".

This is where Rails truly shines! It's a near-perfect fit. But for anything that isn't a CRUD setup (e.g. command- or event-driven, leaning on complex domain logic, or heavy in workflows), it falls short.

It lacks features to deal with it. It lacks structure to isolate a domain model. It makes it easy to spread logic all over, but hard to put that logic in a bounded context. It is all about HTTP, when that really should be a detail in many apps. It is all about the database (as I write in my article, "Rails is all about The Sacred Dictating Database"), which should really be a boring detail.

"Large codebases" often fall in this category of not being very cruddy, and therefore should probably eschew Rails. I won't choose it for these cases.


> Ruby sucks in large codebases where conventions aren't well defined

The community is working to improve the outside-Rails ecosystem with efforts like dry-rb or ROM. So you can find plenty of useful non-Rails conventions if you look for them, but if you're doing typical web stuff it is hard to compete with the productivity monster that Rails has become.

> I don't see any reason to pick Ruby over Typescript if you're not using a popular framework. Types save an incredible amount of time when you're ramping up.

I guess RBS and Sorbet are trying to cover this point. I don't know how well they scratch that particular itch of yours as I haven't worked with Ruby for quite some time.


> dry-rb or ROM

When I was still doing Ruby, dry-rb was one of the projects I was excited about, but it didn't ever seem to gain a lot of momentum, and in my experience, Ruby means Rails (or at the very least something like Sinatra + gems originating from Rails) in almost all companies.

I was also left wondering, at some point, why solnica didn't just abandon Ruby for a statically typed functional programming language, as that seemed where he was headed with his efforts. It's what I ended up doing, at least.


> Having no ability to do any kind of static analysis

Wouldn't Sorbet count as static analysis? It even supports gradual typing specifically for legacy codebases.

https://sorbet.org/docs/gradual


You are exactly right and the pitfalls you talk about are common to all dynamically typed languages.


Which is why some dynamically typed languages, like PHP, are moving towards types.


rails, postgres, PORO, and htmx is a monstrously productive stack


Personal opinion: if I were going to use htmx with a PORO backend, I'd probably go for Roda[1] and Sequel[2].

If it was going to be read heavy I think I'd also pair that with SQLite for low latency and cheaper deployments.

If I didn't know exactly how requirements are likely to change over time, I'd probably go with Rails, Postgres, Redis and Hotwire. You can go a long way with that and a small team.

1. https://roda.jeremyevans.net/index.html

2. https://sequel.jeremyevans.net/


If you are on Rails, why not the Hotwire stuff?


https://github.com/hotwired/hotwire-rails

deprecated. they use stimulus and it's worse obfuscated JS than htmx IMO


Any examples?


you're looking for what? my personal million dollar website or my billion dollar startup?


just any evidence to back up your argument


just my years of experience, sorry


What is PORO?


Plain Old Ruby Object. Basically you take your truth from the database and plug it into an object that functionally produces the data output you feed to your view.
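
For example, a hedged sketch of that shape (all names hypothetical):

    # A PORO: no framework, no database handle; data in, view data out.
    class InvoiceSummary
      def initialize(rows)   # rows: plain hashes already fetched from the DB
        @rows = rows
      end

      def total_cents
        @rows.sum { |r| r[:amount_cents] }
      end

      def to_view
        { count: @rows.size, total: format("$%.2f", total_cents / 100.0) }
      end
    end

    InvoiceSummary.new([{ amount_cents: 1250 }, { amount_cents: 800 }]).to_view
    # => {:count=>2, :total=>"$20.50"}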


The biggest database performance problem I have seen time and time again over the 20+ years I've been using SQL databases is caused by people not understanding that databases are relational (i.e. designed to perform efficiently over tables, not individual rows), and so running code that essentially causes hundreds or thousands of individual queries where one big query would do. Any time you are doing a select query in a loop, you should sit down and take a long hard look at your code.

Almost always

    do a big, expensive, slow query to get everything I need in one hit
    for each row of result
        do something with it

is going to be much faster than

    for each thing in some collection in my program
        get the database row that corresponds to just that one thing
        do something with it

Round-trip IO, query parse time, cache effects etc. are just going to totally destroy the performance of the second one, and optimising the db or programming-language perf will do almost nothing to improve things if you're doing it this way. Lots of ORM-type abstraction layers sometimes make it hard to see that this is what you're doing, so people blame db performance when they're actually running an insanely inefficient algorithm.

My biggest win ever from this was taking an 8 hour operation down to 15s just by moving some sql out of a loop as per the above.
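
In ActiveRecord terms the fix is often a single call; a minimal sketch (models hypothetical):

    # N+1: one query for the orders, then one more query per order.
    Order.where(status: "open").each do |order|
      puts order.customer.name
    end

    # Two queries total: `includes` loads customers for the whole set up front.
    Order.where(status: "open").includes(:customer).each do |order|
      puts order.customer.name   # no query here
    end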


On the other hand, weird things can happen at scale inside an RDBMS.

For example: window queries, where you're trying to extract `row_number() = 1` of each window-partition. (Or "DISTINCT ON" queries, which are basically syntax sugar for this.) These can be extremely slow, because in many RDBMSes† they generate a query plan that reads+materializes all the rows of each window-partition, in order to then just discard all-but-the-first row-tuple of each window-partition. (In other words, the filtering-out by rank doesn't get pushed back through the query plan to turn the whole-index scan into an index-prefix scan.)

The (surprising) solution to this over-materialization, is to use a recursive CTE, using the previous query's matched key as the inductive case's filter condition, to cause index-prefix matching to happen "one query step at a time." Which, if you think about it, is just "causing hundreds/thousands of individual queries"... just with all of those queries happening within a single tx, statement, query plan, table lock, and (hopefully!) page cursor.

† Some RDBMSes do know how to push this window-node attribute back into the fetch node; the feature is called "index skip scans" or "loose index scans." (Before anyone asks: Postgres isn't one of them... yet. https://wiki.postgresql.org/wiki/Loose_indexscan)
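
For the curious, a hedged sketch of that recursive-CTE trick for one common case, fetching the distinct values of an indexed column (pattern per the Postgres wiki page above; Sequel is only plumbing here, with `DB = Sequel.connect(...)`, and the table/column names are hypothetical):

    # Each recursive step is a single index-prefix probe, instead of
    # materializing every row of every partition.
    device_ids = DB.fetch(<<~SQL).map { |row| row[:device_id] }
      WITH RECURSIVE t AS (
        (SELECT device_id FROM events ORDER BY device_id LIMIT 1)
        UNION ALL
        SELECT (SELECT device_id FROM events
                WHERE device_id > t.device_id
                ORDER BY device_id LIMIT 1)
        FROM t
        WHERE t.device_id IS NOT NULL
      )
      SELECT device_id FROM t WHERE device_id IS NOT NULL
    SQL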


This is good to know about; I used a similar window function in Postgres recently. Will double-check the performance of that query.


On top of that, people disregard stored procedures and waste network traffic on data that is going to be thrown away.


Django has pretty great features for handling exactly this. Unfortunately it doesn't by default warn you if you're missing this optimization. I've had to write that warning system myself into iommi, but with that on it's super nice.


Well, Django has an ORM that makes this kind of feature necessary in the first place. ORMs are trying to hide the relational nature of your db, aren't they?


The Django ORM is a quite thin abstraction over SQL. But yes, the ORM makes it easy to create this problem by mistake. But sometimes you really don't care. You can write one-off scripts directly in a shell and then it often doesn't matter. I take advantage of this a lot.


Well no, it's in the name - Object Relational Mapping.


Not really... At least, not a good ORM.

The actual idea behind a model based ORM system is to basically solve two problems.

The first is that SQL is old and its methodology doesn't directly map onto modern programming paradigms. This is apparent from what you get back from raw queries: tables. Want to do something with the table? Go and iterate over the rows individually. Want to get something from a foreign table? Go and do a JOIN before you even start iterating. Want to use binary JSON properly (the kind SQL can actually index, so it's not slow as molasses)? Enjoy a serialization syntax that makes you want to rip your eyes out once you get into the fine details. And that's without delving into how every "flavor" of SQL is subtly different, with its own idiosyncrasies compared to everyone else. SQLite is not MySQL is not SQL Server is not Postgres. Just about the only thing actually shared between them is the extremely basic CRUD tasks (and only if your definition doesn't include any complex WHERE operations). Anything to do with configuring the database is implementation-specific in syntax.

ORMs broadly solve this by letting you use OOP paradigms instead: you designate a model that inherits from some base class, with fields that are themselves typed to something in the database (including foreign-key fields, so the relational structure of your database is preserved). After that you can use traditional OOP methods and queries to read and write to the database. Most ORMs also offer a way to avoid the slowdown of unnecessary joins by letting you express them beforehand, in a syntax that actually makes programmatic sense instead of awkwardly mapping things.

The other thing ORMs solve is that by default you uh... don't check your database schema into a VCS. Some projects just do an SQL dump and expect you to import that. ORMs are basically a solution to that, in that they are a reference for how your database is expected to look to be functional.

The really fancy ORMs also typically introduce (or support) a migration system, so you can track and easily update your database when your models change (which is necessary due to the disconnect between the database and the application), but that's not everywhere (SQLAlchemy, for example, doesn't include one).

To be clear, ActiveRecord is kinda garbage. It does have a migration system, but rather than doing model-agnostic queries it just references the original models, which leads to problems when setting up the database in development mode, since you can't just run all migrations. Instead rake gives you the option to create a database dump of a working environment and have that be the new "canonical" version of the database. For that reason alone it's a mess. It's also incredibly slow and has (as the post mentions) lots of footguns that you need to be careful with, some of which cannot easily be worked around.

ActiveRecord is arguably what killed Ruby for most webapps and I can say that pretty confidently.


> It does have a migration system but rather than doing model agnostic queries it just references the original models, which leads to problems when setting up the database in development mode since you can't just run all migrations.

That's true and this has caused problems before, but there are at least two potential solutions that I've used (none of which are typically referenced in tutorials etc. of course):

1. Write your migration in SQL (ActiveRecord allows that - and in some cases it's even necessary)

2. Copy your model classes into your migration file, namespace them and reference those namespaced models in your migration.
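
A minimal sketch of option 2 (migration and model hypothetical); the nested class is a frozen snapshot, so later changes to the real app model can't break the migration:

    class BackfillUserHandles < ActiveRecord::Migration[7.0]
      class User < ActiveRecord::Base   # migration-local copy, not the app model
        self.table_name = "users"
      end

      def up
        User.where(handle: nil).in_batches do |batch|
          batch.update_all("handle = 'user-' || id")
        end
      end
    end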

> ActiveRecord is arguably what killed Ruby for most webapps and I can say that pretty confidently.

I don't particularly like ActiveRecord, but do you have any particular evidence for this assertion?


> The first is SQL is old and its methodology doesn't directly map onto modern programming paradigms.

Well, SQL is clunky. But relational ideas map very well with (some) modern programming.

Of course, that's for the kind of modern programming that's not OOP.


Sounds obvious... of course, sometimes ORM and other abstraction layers conspire to hide that you are doing queries in a loop.


Years ago I was part of an effort to migrate a compute footprint from one data center to another that was across town.

When the DB VMs were migrated suddenly we had complaints about certain ETL jobs going from minutes to days.

I opened up a packet capture and took a look (thankfully at the time the queries weren’t encrypted). What I saw was that this big name ETL tool was loading data one row at a time using one query per row. This apparently was ok when the DB and app server were in the same building but adding a few additional milliseconds to each query totally screwed the jobs.

Thankfully there was a parameter that could be set in the DB driver to fix the issue and we just had to convince the ETL tool vendor to set that.


In my experience, 9/10 web application performance problems are related to database interactions. Is the query using an index? Are you hammering your DB with 300 N+1 queries to render an API response? These are usually the biggest offenders when poor performance is observed. If your interactions with your DB are bad, it doesn't matter what programming language you use; performance will be mostly equally awful regardless of language.


As an ex-DBA, the lack of understanding - and even of basic performance monitoring - on the part of your average Sr. software dev is mind-blowing.

As a side note, the willingness of those same devs to allow queries of arbitrary size and constraints hit their db is equally frustrating.

“As long as we prevent against SQL injection, who cares?”


Before NewRelic this was so much worse too.


Bad indexing, no indexing, and fragmented indexes seem to cause no end of issues, and there always seem to be shortages of people capable of fixing it properly.


They’re out there—but I just ran EXPLAIN ANALYZE on my company’s search for them, and the query planner’s not too happy. It has to scan the entire resume heap before doing an on-site join with candidates, all while dealing with resource contention from queries of other employers. I know recruiters and e.g. triplebyte, stackoverflow all offer indexes to speed up this search, but in our case those indexes wouldn’t fit in cash.


Why does the index need to fit in "cash" (cache RAM)? Generally an index on disk, especially SSD, is far faster than traversing the entire dataset, because it allows the query executor to quickly narrow it down. Even if this requires some disk IO, it's a lot faster than doing all the disk IO for the entire dataset.


I'm not sure if you missed the joke (he basically was saying "It's hard to hire people who can do this sort of thing; they're out there but in demand and recruiters are expensive" but using DB terminology) and are thinking he's actually describing a DB operation, or if you're building on it so obliquely -I- am missing the joke you're trying to make.


That was very very good, but I wonder what sort of latency you experienced on the query that produced that result.


In order to fix a problem, one must be able to first identify the problem. Many people are still unaware of how indexing works, even though the devices they utilize daily perform this action routinely.


Not to mention ORMs that query a lot of data through non-ideal joins.


We use Python for web apps and I heard one developer complain that Python is a slow language and asked whether we should consider a faster language ("like PHP").

I just told them that if they find a function in the web application where we are actually bottlenecking on our Python code, I'll personally rewrite that in C (although I might actually just use Rust instead).

No complaints or rewrites thus far. I don't actually think the other developers, including the one lodging the initial complaint, know where our performance bottlenecks are. They just know that "Python is slow" and repeat that.


This is kind of relevant now: Mastodon is seeing a lot of growth all of a sudden, the popular server for it is written in Rails, and scaling up quickly is a challenge for operators used to the traffic levels of two months ago.

Twitter did rewrite their frontend to JVM languages because it was enough of a bottleneck to be worth it. If you can get Mastodon substantially further by serving some views using something besides Rails, or even going around ActiveRecord in places, that is a pretty different short-term to-do list than if the immediate problems are in the data layer. (Long term you have to scale both of course!)


Ugh, tell me about it. I had an interesting weekend dealing with traffic loads that grew by about 7x in a few days. I wrote about my misadventures at https://blog.freeradical.zone/post/surviving-thriving-throug... .

The gist of it is that a RoR “Sidekiq” task queue gets CPU-bound after about 25-30 worker threads doing things like making REST API calls to remote servers, querying a database, inserting status updates, etc. I can’t help but think that the equivalent written in Go, or even Ruby-without-Rails, could handle many times the traffic with fewer CPU and RAM resources.


I'm experimenting with some alternatives. I prefer Rust (because I speak that) or Ruby (it's my main language), but the Go option, GoToSocial, is my favorite.

It lacks a frontend and admin UI, so it's not ready for prime time, I guess. But it's good enough to glue onto a Pinafore frontend, and to manage a small instance.

The most promising Rust alternative, Rustodon, seems to have ground to a halt. A lean, Sinatra-based backend could work, but wouldn't solve the complexity of hosting caused by the Ruby runtime and deps. That also doesn't exist, to my knowledge.


I don’t have the bandwidth to contribute code to anything right now, but I hope one of those makes progress. GoToSocial looks promising, especially with their plan to be able to import (or possibly even run on top of) existing Mastodon databases.


I use pleroma, written in Elixir and hosting everything properly inside pg, for a personal instance and it just works. No magic knobs to turn, no significant delays.


I assume that Mastodon is using ActiveJob to push tasks to the queue. According to the creator of Sidekiq, bypassing ActiveJob would lead to significantly lower resource usage: https://github.com/mperham/sidekiq/wiki/Active+Job#performan...

I never tried it though. Just remembered the Wiki note.
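
For reference, the bypass is just a plain Sidekiq worker enqueued directly; a minimal sketch (names hypothetical):

    class DeliveryWorker
      include Sidekiq::Worker             # `include Sidekiq::Job` on newer versions
      sidekiq_options queue: "push", retry: 5

      def perform(status_id)
        # plain JSON-serializable args only: no ActiveJob/GlobalID wrapping
      end
    end

    DeliveryWorker.perform_async(42)      # instead of SomeJob.perform_later(...)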


Sounds like a job for MUMPS.


Yeah, I mentioned Elixir in the post as an example of a better alternative.


Did you actually observe CPU at 100%, or just that your sidekiq workers were falling behind?

Solid chance that you are still IO bound and could push past 25 workers. The main bottleneck is probably RAM.


I actually saw them at 100% CPU with RAM to spare.


What’s the state of TruffleRuby? Last time I checked it was impressively fast. I know that it is really hard to port everything (and sometimes minute details can change behavior), but nowadays Truffle can even execute C code, so those FFI parts can run as-is.


It just needs a small puma config tweak: https://twitter.com/eregontp/status/1588199934796365826


Wow, that is impressive, thank you! I’m eager to see some benchmarks.


Twitter spent a billion a year running their service. Surely a lot of that can be shaved off, but I think it’s a bit simplistic to think people can just run a comparable service for free.


I don't understand why the author first describes how bad N+1 queries are and then later claims that they are sometimes good. Yes, of course that can be true in very specific circumstances, but in many cases I've seen atrocious performance due to N+1 queries, and fixing them was the first step in making an unresponsive website perform.

I also don't really agree that adding validations, joins etc. to your DB is "coupling your application logic to the DB", or that it makes the app slower (???). The thing that couples your business logic to the DB is a bad architecture, which, unfortunately, most ORMs (including ActiveRecord) encourage. The fact that you can access a property on an ActiveRecord object in a view and have this secretly make a database call is one of the reasons for those infamous N+1s.


The thing about Ruby is there are plenty of gems that will help protect your code base from many of the performance hiccups.

Off the cuff: bullet, strong_migrations, activerecord-import, lol_dba

I'm sure there are more. This has been a bit of a passion point for me for a few years; I even taught a class on it. Ruby is a wonderful language to use with a database. ActiveRecord too... you just have to know how to avoid some of the footguns.

Many Rails devs get committed to doing things "the Rails way" to ship faster and focus on application level logic, but even the ActiveRecord guides say that when it's time to performance tune you're going to have to get into the database innards.

ActiveRecord is geared towards team productivity. It's perfectly capable of great efficiency too, you just have to be more deliberate about it. The "scopes" structure is wonderful for piecing together reusable parts of queries though. It makes leveraging the fancy parts much easier.
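
Of the gems above, Bullet is the one aimed squarely at N+1s; a hedged sketch of typical wiring:

    # config/environments/development.rb
    config.after_initialize do
      Bullet.enable       = true
      Bullet.rails_logger = true   # log N+1 and unused-eager-loading warnings
      Bullet.add_footer   = true   # render findings at the bottom of each page
    end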


Libraries are great, but they're not a fix for a broken architecture.

It's been quite a while since I last worked with Ruby on Rails, but the fact that "architecture" was often considered a dirty word and the "Rails way" was touted as the solution to all problems in my experience often led to systems where everything was coupled to everything.

> ActiveRecord is geared towards team productivity.

ActiveRecord is geared towards making things easy at the start. The problem is that every ActiveRecord object always carries a dependency to the DB around, making it difficult to enforce a clean separation between different layers of your system.


"Keep all logic out of the database. It already is the slowest point. And hardest to scale up."

I don't think this is true in all cases. Sure, ORMs are awesome, but sometimes you need to write SQL queries by hand, and those queries necessarily implement some business logic (even if they're just retrieving data).

Nice explanation here:

https://tapoueh.org/blog/2017/06/sql-and-business-logic/


> Keep all logic out of the database. It already is the slowest point. And hardest to scale up.

That's... weird advice. I think a lot of what happens in Rails projects (based on my very limited experience) is that developers start to rely on this easy syntax that ActiveRecord provides and stop thinking about the queries that the ORM is creating. So you end up with these massive N+1 queries that kill performance.

This is one of the reasons I like to stay away from ORMs at all costs. A majority of what an ORM provides can be solved by a view or a function.

As to Ruby's performance... yeah... it's pretty terrible. We had to completely abandon Docker for Ruby on Rails because the performance was absolutely abysmal, even with VirtioFS enabled.


That's not a good reason to stay away from ActiveRecord. AR has plenty of ways to write optimal queries (all the way down to raw SQL). That said, green developers often won't ever go that far due to not knowing SQL well enough. No matter what, you gotta have some competency with SQL whether you're using an ORM or not.

Things like DB views come with their own problems and constraints, as well as DB level functions (stored procedures).


It is good advice to learn SQL.

I would suggest enabling the following two configurations in any Rails project. It should be a must in any new Rails project IMO:

1. Enable `config.active_record.strict_loading_by_default` - this will raise ActiveRecord::StrictLoadingViolationError in almost all cases of N+1, forcing the developer to fix it.

2. Set `config.active_record.warn_on_records_fetched_greater_than` so that you know when someone loads a lot of records, forcing them to paginate, load only what is needed, etc.

I think starting with these two will help mitigate a lot of problems even when using ORM in Rails.
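
Concretely, that's two lines (the threshold is just an example):

    # config/application.rb (or a specific config/environments/*.rb)
    config.active_record.strict_loading_by_default = true
    config.active_record.warn_on_records_fetched_greater_than = 500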


> We had to completely abandon Docker for Ruby on Rails because the performance was absolutely abysmal, even with VirtioFS enabled.

can you share more details on this? not sure I get how any kind of VirtioFS comes into the game here


It's a Docker for Mac issue: https://github.com/docker/roadmap/issues/7


Thanks, I was thinking you were talking about [production systems] Linux.


> A majority of what an ORM provides can be solved by a view or a function.

If you want to get a single number or a column of numbers, then yes, you can go with an SQL query. But if you need to get data about products in a store, you'll have to fetch the data and create objects for every product. Wow, you have just written an ORM.
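
A minimal sketch of exactly that, with the pg gem (schema and names hypothetical):

    require "pg"

    Product = Struct.new(:id, :name, :price_cents)
    conn = PG.connect(dbname: "shop")

    products = conn.exec_params(
      "SELECT id, name, price_cents FROM products WHERE price_cents < $1",
      [5_000]
    ).map { |r| Product.new(r["id"].to_i, r["name"], r["price_cents"].to_i) }
    # ...add caching, associations and dirty tracking, and you've rebuilt an ORM.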


Yeah, this is bad advice.

For example, Postgres has fantastic "upsert" and data-integrity checks that reduce round trips to the database, meaning they're incredibly fast and result in less complex client code.
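
For instance, Rails 6+'s `upsert_all` compiles to a single `INSERT ... ON CONFLICT` on Postgres; a hedged sketch (the model and its unique index on :sku are hypothetical):

    # One round trip; the unique index does the conflict detection.
    Product.upsert_all(
      [{ sku: "A-100", name: "Widget", price_cents: 1250 }],
      unique_by: :sku
    )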


There is always an alternative to writing SQL queries by hand, and it's usually a better one IME. Any ORM worth its salt will let you do the query in your blog post via the ORM, as a single query.


> There is always an alternative to writing SQL queries by hand, and it's usually a better one IME.

I spent years writing code using Spring/Hibernate, and I can state with certainty that both of those statements are demonstrably false.

Every application starts with good intentions, a simple CRUD webapp, and an ORM. Then at some point the business requirements yield an N+1 problem in the ORM, or several non-trivial left joins into records that don't map to the shape of the entities. At that point it's far easier to write the query in straight SQL and produce a straightforward mapping into the record structure, which doesn't play well with the ORM because it bypasses the entity cache, which causes another huge set of problems on its own. So now not only is there ORM maintenance and SQL maintenance, there is a problem with the conjunction of the two technologies.


I agree with you; however, some ORMs handle this fairly elegantly.

    At that point it's far easier to write the query in 
    straight SQL and produce a straightforward mapping 
    into the record structure, which doesn't play well 
    with the ORM because that bypasses its entity cache, 
    which causes another huge set of problems on its own
Rails' ActiveRecord ORM offers at least two ways to handle this.

1. ActiveRecord plays really nicely with views (including materialized views) in my experience. It treats them just like tables, basically, except you can't write to them. (note: there may actually be some cases where you can write to them; not sure)

2. You can supply your own handrolled SQL to ActiveRecord, e.g. `User.find_by_sql("select a,b,c from blahblahblah")`

YMMV obviously, but I've been working with Rails since 2014 and this has covered all of my performance needs.

Plain old ActiveRecord default query generation is fine 99% of the time, and it's rather elegant/easy to sidestep it when I wish.


> I spent years writing code using Spring/Hibernate, and I can state with certainty that both of those statements are demonstrably false.

> Every application starts with good intentions, a simple CRUD webapp, and an ORM, then at some point the business requirements yield an N+1 problem in ORMs or several non-trivial left joins into records that don't map the shape of the entities. At that point it's far easier to write the query in straight SQL and produce a straightforward mapping into the record structure, which doesn't play well with the ORM because that bypasses its entity cache, which causes another huge set of problems on its own. So now not only is there ORM maintenance and SQL maintenance, there is now a problem with the conjunction of the two technologies.

I agree that that's often the end result, but in my experience 100% of cases are due to SQL fanboys who are unwilling to spend 5 minutes actually reading the ORM documentation and finding out how to do their N+1 query or complex join properly, which is actually easier than doing it in SQL if you try.

And don't get me started on "Hibernate is slow. The entity cache? Oh, our unnecessary custom SQL query made that inconsistent so we've disabled all caching".


I agree, but I think Hibernate is an extreme. There is a middle ground, query builders, which enable dynamic query construction (e.g. dynamically appending filter predicates) with less cognitive load than something like Hibernate.

I personally was heavily in the camp of "write raw queries, ideally with code generation for statically typed/generated code" (as exists in Rust, Go, TypeScript, etc.), but I have since tempered my position, since it does become a bit brittle and repetitive. Lately I've been playing with jOOQ and it seems great.

There are tradeoffs everywhere though; with jOOQ you still aren't 1-to-1 with raw SQL and there is a bit to learn, but I consider it a worthwhile investment (and a minor one relative to an actual ORM).


A pure query builder is just writing SQL on syntax tree level, more or less. This makes sense for the same reason why you want your macros to operate on ASTs and not raw text. But I would argue that it's still much closer to plain text SQL in the code than to any ORM.


Right, but it lives with your application code and has the same syntax as the application code. That's probably preferable to SQL stored procedures (which often live outside source control).


It’s not hard to get SQL DDL and stored procs into source control with Liquibase or Flyway. I’ve even done TDD sproc unit (integration) tests in them. But I’m a webapp-turned-data-engineer guy…


    That's probably preferable to SQL stored 
    procedures (which often live outside source control).
Stored procs definitely have some big pros and big cons, but I don't think this is one of them -- any ORM with a decent set of tools to manage migrations (ActiveRecord is one) makes this objection a non issue IMO.


I explicitly do not want to manage stored procedures in the same way as typical migrations - if I did, I would wind up with many, many versions of the procedure in my code base as it evolved over time. This would make grepping or locating the latest version pretty annoying.

Flyway (migration tool in Java) has a notion of “repeatable” migrations, though, which would do the trick.


    This would make grepping or locating the latest version pretty annoying
Wouldn't this be an issue with any database object managed via migrations? Do any of them make this easy for any database object?

In ActiveRecord, you have your migrations folder(s) and then you have your `structure.sql` (essentially the raw output of mysqldump or pgdump) or the equivalent.

If I need to see the literal database definition of any database object I look it up in there. Not the slickest solution but works well enough - really just a few keystrokes in my editor.

I'd be curious how other migration tools handle (or fail to handle) this.


> Wouldn't this be an issue with any database object managed via migrations?

Like I mentioned, check out flyway repeatable migrations.


I’ve found that all this does is make the query less readable. SQL is purpose-made for writing queries, and it avoids the unnecessary syntax noise you get when trying to fit the query into a host-language DSL.


That really depends on the language - specifically, on whether it already has constructs that can map nicely (e.g. LINQ in C#), or macros to define them, or syntax that is generally amenable to DSLs even without macros in the picture (e.g. Lisps).

SQL itself is also not a particularly well-designed query language. E.g. the order of the query doesn't reflect the natural data flow (SELECT .. FROM .. is reversed - compare to XQuery's FLWOR, for example), there are warts like WHERE vs HAVING etc. A good DSL can do much better.


SQL is powerful. A DSL that "fixes" things in this area while getting all the other language-feature interactions right isn't trivial, and all the while users have to learn yet another language. Take PRQL for example: https://prql-lang.org. It looks nice, but the examples are very basic. What about window functions, grouping sets, lateral, DML, recursive SQL, pattern matching, pivot/unpivot, etc.? Might be doable, but perhaps they've already made a design decision that won't enable one of those features without adding new kludges.

Besides, every single "fix" will be a proprietary solution, while SQL is an ISO/IEC standard that's here to stay and universally adopted.

> A good DSL can do much better.

Stonebraker's QUEL was "better", before SQL, and yet, where is QUEL today?


[PRQL core-dev here]

Thanks for the PRQL shout-out!

> Take PRQL for example: https://prql-lang.org. It looks nice, but the examples are very basic. What about window functions, grouping sets, lateral, DML, recursive SQL, pattern matching, pivot/unpivot etc.

Window functions are very much supported! Check out the examples on the home page & in the docs.

The others aren't yet, but not because of a policy — we've started with the most frequently used features and are adding more as they're needed.


> Besides, every single "fix" will be a proprietary solution, while SQL is an ISO/IEC standard that's here to stay and universally adopted.

And yet in practice the fixes end up more portable. How many of the things on your list of non-basic SQL have consistent syntax across databases, let alone consistent behaviour?


All of them


A good DSL is not easy to implement, of course.

But the point here isn't just that it can be more regular than SQL. Integrating with the syntax of the host language is also a considerable advantage, ideally with static type checking.


In a statically typed language, what you get from a good query builder is that "malformed SQL statements" blow up at compile-time instead of at run-time.


Some languages also provide this for raw SQL strings; e.g. the sqlx library in Rust will compile-time check raw SQL strings.


> There are tradeoffs everywhere though, so with Jooq you still aren't 1-to-1 with raw SQL

You're probably hinting at writing derived tables / CTEs? jOOQ will never keep you from writing views and table-valued functions, though. It encourages you to do so! Those objects play very well with code generation, and you can keep jOOQ for the dynamic parts, views/functions for the static parts.


"Say now we want to display the album list sorted by album’s duration, shortest first."

How would we do this in an ORM like ActiveRecord or Django ORM, without generating multiple queries?


I haven't used Rails or ActiveRecord for a long time, but from memory it gives you an escape hatch in the form of Arel that lets you represent parts of your query as SQL while still using the ORM for the rest.

Edit: I think it's something like

    Album
      .select(:id, :name, "SUM(songs.length) AS total_length")
      .left_outer_joins(:songs)
      .group(:id)
      .order("SUM(songs.length) ASC")
I think you don't even need to explicitly wrap it in Arel.


I use Rails every day. Looks close enough. You’d just need to qualify the fields with the table name, i.e. ‘albums.id’.

Also, for anyone reading: a total_length method will dynamically be added to the returned objects. :)


In Django, you would write an annotation as documented at https://docs.djangoproject.com/en/4.1/topics/db/aggregation/...

So something like

  Album.objects.annotate(duration=Sum('track__length')).order_by('duration')
There are also libraries that enable you to define an annotation or aggregation inside a model, that you can then get with a call like select_properties, similar to the built-in select_related (which you use to get a foreign key in one query) and prefetch_related (which is select_related for many-to-many fields).


Thanks. I haven't used Django for ages, but do recall using annotate() in the past. At that time, I didn't need to worry about performance, so didn't look into what happens under the hood, i.e. whether the call generates more than one SQL query.


I don't know about ActiveRecord or Django specifically, but: the obvious way in their query builder? Adding a joined collection, doing an aggregation on it, and sorting by it, are all easy in any ORM worth bothering with. What did you try and where did you get stuck?


No, Ruby is slow. Its GC is ungodly slow. And you can’t truly multithread, thus you can’t properly parallelize or maximize concurrency.


> No, Ruby is slow. Its GC is ungodly slow.

Ruby is quicker than Python and that's the most popular programming language there is at the moment.

> And you can’t truly multithread, thus you can’t properly parallelize or maximize concurrency

You definitely can. You've been able to fork processes for longer than I can remember. Ractors just became a thing, making multithreading cheaper and easier, and Ruby's also had multithreaded servers for at least a decade. Not that it even matters that much; most people are using on-demand cloud servers that just spin up more servers when needed, making multithreading less useful anyway.

Even if you forget Ractors exist, it's like saying C can't do multithreading because it's not in the language definition...
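
A minimal Ractor sketch (Ruby 3.x, still flagged experimental at load time; the workload is a stand-in):

    # Four CPU-bound computations actually running in parallel:
    ractors = 4.times.map do |i|
      Ractor.new(i) { |n| (1..5_000_000).reduce(:+) + n }
    end
    p ractors.map(&:take)   # collect the four results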


> Python and that's the most popular programming language there is at the moment.

Ordering the top 3 languages is “subjective”; let’s say they are JavaScript, Python and Java. And out of these, Python is 10x slower than the other two.


Soon to be only 5x slower. There's some serious performance improvements going on in CPython.


Unless they go with a JIT compiler, no real closing of the performance gap will happen.


10x is really being charitable.


> Ruby is quicker than Python

Python is slow as shit too. I would never use either language in a user path production environment.

> fork processes

Forking a process takes upwards of 50 ms, whereas spawning a thread takes microseconds. That might not seem like a lot, but a user will be able to tell the difference between a 50 ms page load and a 150 ms page load. Multiprocessing is also not optimal for sharing memory, e.g. a local cache.

My ultimate point is that there’s truly no reason to use these languages when Java, Go, and Rust are around.


    Ruby is quicker than Python and that's the most popular programming language there is at the moment.
Do you have any reputable sources for this? All I can find are benchmarks run by companies with clear vested interests in showing one language is faster than the other.

Also, this isn't quite the strongest argument for its performance; popularity hasn't been tied to performance for a long time (see: all those years NodeJS was leading language popularity lists, or Java).


https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Of course, in the real world it doesn't really matter. Both are used in places where performance doesn't really matter as long as they're fast enough. But anyone with experience with both knows they're both slow (well, some Python fanboys try to pretend otherwise).


Java is one of the most performant languages there is, so I’m not sure what’s your point here.


It is now, but it absolutely was not at introduction, nor when it started leading the charts of "most used languages".


How many developers will really need it though?

It's fast enough for Stripe's API, it's fast enough for Shopify; hardware in exchange for productivity is a pretty fair trade.


Except most companies won't attract the same top-tier Ruby/Rails talent. For example, it looks like 1/3 of the current Rails Core Team works at Shopify, including the person who is the #1 all-time Rails contributor.


Rails Core team member working at Shopify here.

You seem to be misinterpreting what we’re doing at Shopify. We pretty much never work on the product or anything like that, and we don’t go around the app chasing perf issues either.

At best we work on the infra or dev tools to encourage or enforce better patterns, or sometimes in some kind of support capacity for teams having Ruby or Rails specific issues, but Shopify would do fine without us.

We’re mostly here to ensure Rails and the Ruby ecosystem Shopify heavily depends on is maintained and healthy.

Now maybe you need top-tier talent to scale a Rails infra (I don’t think so), but Shopify’s presence on the Rails or Ruby core teams is no indication of that.


I may be misunderstanding you, but can you elaborate on why you believe you need "top-tier" Ruby/Rails talent to run a performant Rails application?

In my experience, you need no more than mid-level talent.

You need a certain % of people who are strong at working with databases to ensure good (normalized, indexed, etc) database design. You also need a certain level of organizational will to avoid letting a Rails monolith get absolutely out of control, but this is less "a need for top-level talent" and more "just stick to boring old Ruby/Rails best practices."

Those are, of course, things you need when working with any stack. I don't find them to be Ruby/Rails specific.


> It's fast enough for Stripe's API...

Stripe is moving to Java.


If you need maximum requests per second for sufficiently high volumes of traffic - ie, Stripe/Twitter/FB/whatever stuff - then yeah Ruby/Python/etc doesn't cut it.

If your needs get extreme enough (high frequency trading?) then Java doesn't cut it either.

Most (I suspect >99%) use cases don't fit these criteria. I know this perception is skewed a little by the HN crowd, where many folks legitimately are trying to build the next Stripe or Twitter, but I think the world of modern software development is somewhat poisoned by this belief that we all need to follow FAANG-scale practices and architectures.


Java is used in HFT, and for the variant where it is not fast enough, general-purpose CPUs are not fast enough at all, so not even hand-written assembly can compete anymore; they use FPGAs for that kind.


Java is used in HFT for markets that have a time window, but only because they tune the GC to never kick in during market hours


Every large enough organisation eventually moves to Java (or C++, or Rust). When you have more bodies than things to do you just optimize... But how did all these startups get so big?


Stripe’s API is one of the slowest of any payment processor, and it’s a major source of contention within the company


Both Stripe and Shopify have more resources than an average small startup


I deliberately started my article with a few paragraphs showing that Ruby indeed is very slow.

> Let's be clear: ruby is slow. The garbage collector, JIT compiler, its highly dynamic nature, the ability to change the code runtime and so on, all add up to a sluggish language.

There's no argument there. The main point, however, is that in many cases (where Ruby is used) this hardly matters, because other stuff than Ruby is the bottleneck. The database being the most obvious culprit.


It can be argued that if it's mostly a glue language, translating requests into db queries and db responses into HTML, it doesn't need to be blazingly fast.

Although that does rather constrain the application to a certain shape. You can certainly do a lot more with a faster backend language. Not everything translates well to a database query, and throwing more hardware at the problem doesn't get around this.


It depends on the workload. You absolutely can parallelize many processes using tools like Sidekiq. My single-threaded Ruby app might take hours to query data sequentially from an API, but with Sidekiq suddenly you have to throttle your concurrent processes because the API is breaking under the stress.


Yeah antidotal, but I was doing some tests of an existing Rails app API compared to Hasura (erlang GraphQL server) talking to same DB, and querying same data was like 100x faster in Hasura..


Hasura is Haskell, not Erlang, I thought.


You are correct, my bad


anecdotal: based on personal experience

antidotal: based on curing a deadly poison


Thanks, love what autocorrect comes up with..


Ruby web guy for 15 years.

95% of the time the problem is poor database choices - usually too much contention / locking.

On the rare projects where I need something like RabbitMQ-level performance, I don’t use Ruby.


Just use optimistic locking on all the things bro


JRuby does not have a GIL and offers true multithreading.


I feel bad that I’ve forever tarred JRuby with the atrociously slow performance of ThoughtWorks’ Mingle. If you think JRuby’s sluggish…


I do think this might have been true in the past, but SSDs have changed the game for databases. Databases became 10x faster overnight, but this hasn't sunk in for every developer yet.

Also, ORMs have created an environment where bad queries are sent to the database for questionable gains (after saving time writing some initial code). I wrote a nice Java ORM in the 90s myself (also wrote my own Ruby ORM framework called Ruby.RO before Rails became a thing), then used Hibernate/JPA extensively for a decade, and later ORMs in Scala and TS.

I now prefer plain SQL (with Go). Quite often you can write some CTE that is fast but would have been several calls from an ORM. I also now use CTEs to read/update/delete data in one query (but be careful, because of execution order).
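
The Ruby-hosted equivalent of that one-query pattern might look like this (Sequel as plumbing, tables hypothetical; the execution-order caveat: all sub-statements see the same snapshot):

    # Archive and delete expired sessions in a single round trip.
    DB.run(<<~SQL)
      WITH moved AS (
        DELETE FROM sessions
        WHERE expires_at < now()
        RETURNING id, user_id, expires_at
      )
      INSERT INTO sessions_archive (id, user_id, expires_at)
      SELECT id, user_id, expires_at FROM moved
    SQL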


> I now prefer plain SQL

I don't know if Go is capable of it, but Rust has a library called SQLx that checks your SQL queries at compile time (it makes compilation slower than normal, but a good trade-off I think). If this could be done in Go it would be amazing too.


Yes I was using SQLx when writing Rust code and I've enjoyed it a lot.


True... I always preferred to roll my own DAL instead of using an ORM, because when you actually get into the weeds, the kind of optimizations you can do with your own queries will always beat what an ORM can do.


> I now prefer plain SQL

I prefer plain SQL alongside an ORM: usually the ORM for insertion, SQL for extraction.

The ones I have used (Doctrine, Hibernate) are not mutually exclusive.


The Sequel ORM (mentioned in this article) is absolutely amazing. There is a plugin for Rails to swap out ActiveRecord with it as well.

It does a good job of letting the DB do its job. Like you can define complex constraints on tables in your database and then the corresponding models will automatically detect them and have the related app-level validations. So you can keep more data integrity enforcement in the database where your transactional guarantees are but still have nice error messages in your app without duplication.

If you're looping over a collection of records it can detect if you call an association method on one of them and load the same association for all the records you're looping over in a single query, avoiding most N+1 issues.

Lots more cool stuff too. I haven't used ActiveRecord in quite a while, so I'm not sure whether it has since absorbed some of the Sequel behavior, but it's definitely worth a look.
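
A rough sketch of that N+1 avoidance (if I recall correctly it's Sequel's tactical_eager_loading plugin doing the work; model names invented):

    require "sequel"

    DB = Sequel.connect(ENV.fetch("DATABASE_URL"))
    Sequel::Model.plugin :tactical_eager_loading

    class Artist < Sequel::Model
      one_to_many :albums
    end

    class Album < Sequel::Model
      many_to_one :artist
    end

    # Calling .artist on any album in the retrieved set triggers ONE
    # query loading artists for ALL of them, not one query per album
    Album.all.each { |album| puts album.artist.name }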


So apparently we're still discussing this and you can find variations on this article stretching back a long time https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Like everything in engineering, it really depends though, and Ruby is sadly quite slow. Twitter was rough before they replaced Rails. Most Ruby devops tools I've used have been substantially slower than tools built in other languages (e.g. Puppet vs Ansible). Ruby certainly still has value as a tool, but it should be understood to be one that isn't very performance focused.


You are right. And I tried to link to as many relevant previous articles in my article here. Still, I think it is good to keep sharing this. If only because writing a blog-post allows me to organize my thoughts on this ;)

WRT the twitter story: correct. But that conflates Rails with Ruby and it uses a case that is quite exceptional.

Ruby is only a part of why Rails is slow. Rails is mostly slow because of the heavy and poorly optimized reliance on the database (as I argue in the article) and because of its immensely complicated layering and dataflow. A single DateTime from a database might pass through some thousands of classes and methods before being sent off in a body of HTML. Ruby is a problem here, because thousands of times "something rather slow" makes it very slow. But the "thousands of layers" really is a problem in itself. Without Rails it could easily be less than a hundred, and you'd still have a neat architecture and layering and stuff.

The other thing is that no-one except Twitter (and maybe three, four other companies) is operating at that kind of scale. At such scale, entirely different metrics start to matter than what we usually encounter on a typical 2000 MAU SAAS product.


I’m a Rails developer by trade, been doing it for over a decade. Bad indexing and lazy N+1 are 90% of the performance problems in a typical rails application. The other 10% is when people do aggregates or joins in application code instead of sql. I absolutely despise ActiveRecord because it makes non-trivial aggregates a pain to write. Sequel is a much better ORM but good luck getting a team on board.
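
For the unfamiliar, the N+1 shape and its fix look like this (a minimal ActiveRecord sketch; models invented):

    # N+1: one query for the posts, then one more query per post
    Post.limit(20).each { |post| puts post.author.name }

    # Eager loaded: two queries total, regardless of how many posts
    Post.includes(:author).limit(20).each { |post| puts post.author.name }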


    I absolutely despise ActiveRecord because it makes 
    non-trivial aggregates a pain to write
I'm a SQL guy at heart.

I certainly agree with you... I loathe writing any non-trivial query in ActiveRecord's query builder.

However, I find ActiveRecord does a really good job of getting out of the way when I deem it easier to write some raw SQL.

Generally do one of two things.

If I have a really big nasty query, I'll implement it as a (standard or materialized) database view. I may query it directly or map a model to it.

Or, I'll just use `MyModel.find_by_sql("blahblahblah")` which happily accepts my handwritten SQL.

I don't have a link handy but DHH stated ages ago that the ability to do stuff like that was an explicit goal of ActiveRecord.
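
The view-backed model approach looks roughly like this (a sketch; names invented, and the view itself would be created in a migration):

    # A read-only ActiveRecord model mapped onto a (materialized) view
    class MonthlySalesSummary < ApplicationRecord
      self.table_name = "monthly_sales_summaries"

      def readonly?
        true
      end
    end

    MonthlySalesSummary.where(year: 2022).order(:month)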


I would like some kind of middle ground, which is why lately I tend to lean toward using something like Sequel and mapping my own domain entities. Mixing domain entities with persistence has proven to me to be a colossally bad idea.

I'm in the process of writing a rather large new feature for our application that heavily uses aggregates and window functions, and writing queries that try to use these SQL features while also taking advantage of scope chaining is just an exercise in pure frustration.

I think the more I lean on "advanced" SQL features, the more I just think I've probably outgrown Rails personally because it doesn't easily enable the kind of solutions that I need.


    why lately I tend to lean toward using something like 
    Sequel and mapping my own domain entities
I'm definitely itching to use something leaner like Sequel in a project. Before coming to Rails I used some nice lean C# ORMs like PetaPoco and Dapper and I really enjoyed that approach.

    I think the more I lean on "advanced" SQL features, 
    the more I just think I've probably outgrown Rails 
    personally because it doesn't easily enable the kind 
    of solutions that I need.
I definitely agree that the "treat the database as a dumb, interchangeable storage solution" mentality that infects Rails and other development communities is misguided, IMO. Leaning into more advanced database stuff is quite frequently the way to go for non-trivial solutions.

In practice, I still find Rails pretty amenable to a data-first approach. Applications tend to be a mix of advanced database stuff and simple CRUD operations. Realistically, a lot of developers are good at one but not the other. So on a given team, a mix of developers is able to play to their strengths.


I think you just summarized the entire article in three sentences. Thanks!


Building a modern CRUD app is still incredibly hard to get right while keeping it performant, in any language. IMO, neither Ruby nor the database is at fault.

- Materialization pipelines are hard, especially ones that need transactional constraints.

- Passing async connections from the http request all the way through to the database is hard.

- Data distributions can change which can impact indexes, join ordering, and the optimal materialization pipeline.


It’s definitely ruby that’s slow. Your database may also be slow.

I love ruby. But it’s slow. It’s a great choice for a lot of things.


Got some sources? Ruby3 was faster than python at some point, iirc


I started working with Ruby almost two decades ago and I can confirm that, in fact, Ruby is slow. It might be fast enough to be viable for most web apps, but compared to Go or Rust it will be much harder to write a fast app in Ruby.

I've seen tweets and articles like this one (it's the database, not Ruby!) numerous times and they usually don't map to reality, especially in bigger applications. Of course you can make Rails respond in milliseconds, but in real production apps I've usually seen 50-80% of the response time spent in CPU (i.e. Ruby). At the same time, when the app grows and you add more middleware, an empty response can easily take 5-10ms with a bit of traffic (like: an empty controller action, no database queries, etc). It's better with smaller frameworks like Sinatra, but still, you can make the database much faster than the Ruby code making queries.

update:

Out of curiosity, maybe I'll make some benchmarks later, but even if you look at framework benchmarks you can see that for the same database queries there's an order of magnitude difference between Ruby and Rust or Go frameworks: https://www.techempower.com/benchmarks/#section=data-r21&tes... (look at latency)


While I agree with your general point (yes, a Ruby web app, once basic database access patterns have been optimized, is essentially CPU/GVL bound), I'd advise caution when using things like the techempower benchmarks or the benchmark game.

Different languages receive very different levels of care in these, and from memory there were some big no-nos in the Rails benchmark.

One I remember for instance is that they use redis as a cache but without a connection pool [0], and AFAICT they run puma with at least 5 threads [1], so they are very likely to contend on that one Redis connection.

That's just one of the many things I spotted when I looked at it a few months back.

[0] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast... [1] https://github.com/TechEmpower/FrameworkBenchmarks/blob/5429...
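
For reference, the usual fix is a pool so each thread checks out its own connection (a sketch using the connection_pool gem; pool size and key invented):

    require "connection_pool"
    require "redis"

    # One connection per thread instead of five threads contending
    # on a single shared Redis connection
    REDIS = ConnectionPool.new(size: 5, timeout: 1) { Redis.new }

    REDIS.with { |conn| conn.get("some_cached_key") }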

Edit:

To add to my point, you linked to the "20 queries" benchmark, which is essentially:

    render json: 20.times.map { |i| World.find(i) }
Somehow this averages 260ms latency while the single-query version averages 8ms, so something definitely doesn't add up, since worst case it would be 20x slower, so ~160ms.

More evidence: the "cached" version is also 260ms.

And this is a dummy app barely doing anything; I've seen apps doing a ton more work in much less time.


> Ruby3 was faster than python at some point, iirc

That’s the lowest bar for performance.


Python is slow.


Incredible insight, thanks


Database: slow. Ruby: not slow. Our next steps are clear: we must reimplement the database using ruby.


I already did that. I decided to keep things simple, so I just built a web app that proxies raw SQL to mysql and returns the results. So it's even better - your database is written in ruby AND has an http json api for wider compatibility!


Outside of Rails, I used to write map/reduce jobs in Ruby, using a JRuby jar file to deploy the job to our Hadoop cluster. Startup time for each job was super slow due to JRuby, but developer ergonomics mattered more to me than efficiency here. Slow, but who cares.


Yes, index the database. Another thing that can make Rails apps quite fast is using caching (Rails has some builtins that make that easy). In some cases you end up skipping a database lookup and running little Ruby code, which is a huge speedup when you can do it.
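
For example, the built-in low-level cache (a minimal sketch; key, TTL, and query invented):

    # On a cache hit this skips both the DB query and the Ruby
    # aggregation; on a miss it runs the block and stores the result
    def order_stats
      Rails.cache.fetch("order_stats/#{Date.current}", expires_in: 1.hour) do
        Order.group(:status).count
      end
    end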


Luckily there is more to Ruby than Rails.

I love writing Ruby code, but dislike Rails (with a passion - as I point out in my article in both footnotes and between the lines ;)

(bad) performance of your Rails code is very much about Rails. And very much about how easy Rails makes it to abuse your database. But much less about the performance of the language Ruby.


Caching seems to be a general-purpose approach that got lost somewhere along my career. I suspect that as networks and computers got faster, people stopped thinking the added complexity was worth it, but it's still invaluable in bigger applications.


Caching on web apps got harder the more stuff moved to the client, I’m not sure it’s just laziness.



From my experience with Python and Ruby, it's not the database itself; it's the database driver that spends quite some time on serialization and deserialization every time you want to send around some data, even more so when you pack some objects to store as semistructured/json/xml. So no, it's not the database, it's your interpreted language.


Before we proceed, are you aware that a lot of popular database drivers for Ruby (and Python? not sure) implement the performance-critical bits in good old natively compiled C?

For example, the Ruby postgres gem: https://github.com/ged/ruby-pg/tree/master/ext

(I wasn't sure until I checked just now, so I'm not questioning your familiarity with the tech. Just not sure if that's commonly known)

    So no, it's not the database, it's your interpreted language.
Or it's your developers.

If you're moving enough data over the wire that application-side serialization/deserialization becomes an issue, often this is because developers are retrieving a whole truckload of records when they really only ought to be returning one.

Even in a fast application language, this is a problem. You're still burning extra CPU cycles, you're still allocating extra RAM, you're moving the same amount of extra data "over the wire", and you're still consuming extra database resources.

Of course, there truly are many use cases where you might want to move a truckload of data between your application and your database, and do some heavy crunching on the application side. I wholeheartedly agree Ruby is not ideal for that.
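
A common ActiveRecord version of the "truckload" problem (a sketch with an invented model):

    # Instantiates a full model object per row and deserializes
    # every column, only to throw most of it away
    User.all.map(&:email)

    # One column over the wire, no model instantiation at all
    User.pluck(:email)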


Marshalling the result into Python or Ruby objects is expensive no matter how you do it, and that cost is due to the programming language you need to marshal into; implementing the logic in C can't help with that.


Languages aren't interpreted, implementations are.


It's safe to call a language for which there's no production compiler an interpreted language.


Well, that’s neither Python nor Ruby. Both compile their code, and can do so ahead of time too.


None of the major Python implementations do any AoT compilation.

Python is probably the poster-child language of "interpreted". It's run, line-by-line through an interpreter, and turned into byte-code, which is executed by the VM. There's no compilation, and in CPython, very little optimisation. PyPy is obviously different on that last point as it's a JIT, but the larger point still stands.


> It's run, line-by-line through an interpreter, and turned into byte-code, which is executed by the VM.

Just a small technical correction: you can pre-compile .py source files to .pyc bytecode files without waiting for the interpreter to do it line by line; many distro package managers already perform this step automatically when a package is installed. Though you're right in the sense that it doesn't really improve performance: CPython itself is the slowest part, not bytecode generation.

> There's no compilation, and in CPython, very little optimisation.

+1. This was actually a deliberate design choice [0]. Guido van Rossum himself said, "Python is about having the simplest, dumbest compiler imaginable." CPython was designed for ease of learning and use, it was never designed to be a high-performance language. Pure Python code (without C extensions like NumPy) can never be good at number-crunching tasks.

[0] https://nullprogram.com/blog/2019/02/24/


There are still some basic optimizations the Python compiler makes. It has, for example, a peephole optimizer, and some built-in optimizations around integer objects, to name two.

It’s nothing too fancy, but it still does the basics.


GraalVM?


If your database is slow and ruby isn't, simply rewrite your database in Ruby.


But that wasn't the point, really?


So, in the real world, I will just cache everything with Redis, and problem solved?


Or use PolyScale and get the cache without changing the code.


Classic whataboutism, same as the Python bottleneck discussions: let's just shift the blame! It never acknowledges that people don't understand database design or how query engines work.


<Screams in Active Record>

The default ORM in Rails… not great at scale.

I had a large client with moderate traffic, they were slow because they were absolutely crushing their DB. Now granted it was not JUST ruby, but ruby was a big part of it. We profiled their slow queries and found stuff no human would think is sane.

Active record is great for the developer and terrible for database optimization. Which is fine until you try to scale. Then you fall over on your face.

Ruby is slow because it constructs absurd database queries that no human would ever write. Think ten lines long with multiple joins.


The question is: is ActiveRecord worse at query generation than other ORMs?

My answer is no, it is not.

In my experience ORMs are all pretty much equal at this. (However, I would love to be wrong!)

I don't know of any ORM that intelligently analyzes your database and comes up with highly optimized queries. I'm not sure such a thing is possible. The database's own query analyzer can optimize query execution because it has intimate knowledge of the database's own data structures, indexes, data cardinality, etc.

I accept ORMs as a necessary evil for most work. However, I enjoy ActiveRecord because unlike some ORMs it makes it fairly easy to simply use raw handwritten SQL when needed.



