
Ruby: We have decided to go forward to 3.0 this year - anshul
https://github.com/ruby/ruby/commit/21c62fb670b1646c5051a46d29081523cd782f11
======
tiffanyh
It appears that Ruby 3 might fall short of its 3x speedup goal [1][2] ...
has anyone tried out Graal/TruffleRuby?

Graal/TruffleRuby has shown some massive perf increases [3]

[1] [https://pragtob.wordpress.com/2017/01/24/benchmarking-a-go-ai-in-ruby-cruby-vs-rubinius-vs-jruby-vs-truffle-a-year-later/](https://pragtob.wordpress.com/2017/01/24/benchmarking-a-go-ai-in-ruby-cruby-vs-rubinius-vs-jruby-vs-truffle-a-year-later/)

[2] [https://pragtob.wordpress.com/2020/08/24/the-great-rubykon-benchmark-2020-cruby-vs-jruby-vs-truffleruby/](https://pragtob.wordpress.com/2020/08/24/the-great-rubykon-benchmark-2020-cruby-vs-jruby-vs-truffleruby/)

[3] [https://www.reddit.com/r/ruby/comments/b4c2lx/truffleruby_being_2x_faster_than_mri_26_a/](https://www.reddit.com/r/ruby/comments/b4c2lx/truffleruby_being_2x_faster_than_mri_26_a/)

~~~
owens99
Look for tenderlove’s comments on TruffleRuby. The performance is at the
expense of memory.

~~~
hinkley
In Ruby, as in NodeJS, the GIL pushes you to scale horizontally. The memory
footprint of Hello World becomes a big problem, because the number of copies
you run will be proportional to the number of cores you have, not the number
of machines. You get no benefit from moving from an 8 core box to 16 or 20
cores.

I suspect if they do manage to pull off more concurrency in Ruby 3, that
vertically scaling machines will make more sense. If 8 cores benefit from a
shared footprint, instead of one core per process, then the budget looks more
attractive.

So now might not be the right time to cherry-pick some of these features, but
it may not be far off.

~~~
schneems
> GIL

FWIW the GIL has been the GVL since YARV was merged in and it became based on
a virtual machine rather than purely interpreted. I believe this was 1.9.

> because the number of copies you run will be proportional to the number of
> cores you have, not the number of machines

While this is true, Ruby is also very CoW-optimized, so while forks grow
linearly in size (with count), usually the first fork is drastically smaller
than the process it was forked from.
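
The CoW effect is easy to see with a plain preload-then-fork sketch (purely illustrative; this is the pattern app servers use with `preload_app`, and on Linux the children share the parent's heap pages until they write to them):

```ruby
# Illustrative only: preload a large structure in the parent, then fork
# workers. On Linux, children share the parent's memory pages copy-on-write,
# so the per-worker cost is far below a full copy.
big_table = Array.new(100_000) { |i| "row-#{i}" } # stands in for app code + data

pids = 2.times.map do
  Process.fork do
    # Reading does not copy pages; only writes trigger per-process copies.
    exit(big_table.length == 100_000 ? 0 : 1)
  end
end

statuses = pids.map { |pid| Process.wait2(pid).last }
puts statuses.all?(&:success?)  # => true
```

This is why the first fork is cheap: the workers only pay for the pages they mutate.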

I work at Heroku and recommend perf settings to customers. 5 years ago people
were mostly hitting memory limits. Now it's pretty common to see apps that are
maxing out the CPU well before coming close to RAM limits.

Especially when compared to JavaScript, Ruby is extremely memory-efficient.

I agree with your larger statement but wanted to chime in and expand on those
two points.

~~~
wjossey
Just want to reaffirm this post. I scaled ruby for a living for almost 8
years, to millions of requests per minute, and this post is 100% accurate.

~~~
throwdbaaway
The high memory usage of ruby still causes problems if the app is single-
threaded. I scaled databases for ruby apps for a living for almost 8 years,
and sadly single-threaded legacy ruby apps are still a thing.

Anyway, in the single-threaded scenario, the app may appear to be CPU bound
under the steady state. However, when some hiccup happens in a database or in
another microservice, all the ruby processes could soon be blocked waiting for
network responses. In this case, ideally there should be plenty of idling ruby
processes to absorb the load, but it will be rather costly to do so due to the
high memory usage.

There are potential fixes of course, but with trade-offs:

- Aggressive timeout: May cause requests to fail under the steady state

- Circuit breaker: Difficult to tune the parameters, may not get triggered,
or may prolong the degraded state longer than necessary. Also not a good fit
when the process is single-threaded, as it can only get one data point at a
time.

- Burning money: Can only do this until we hit the CPU:memory ratio limit
imposed by the cloud vendors.

- Multi-threading: Too late to do this with years of monkey-patching that
expects the app to run single-threaded.
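
The aggressive-timeout option is the cheapest to sketch. A rough stdlib-only illustration with `Net::HTTP` (the host and the timeout values are invented; too low and steady-state requests start failing, which is exactly the trade-off described above):

```ruby
require 'net/http'

# Fail fast instead of letting a blocked single-threaded process sit on a
# hung dependency. The numbers are illustrative, not recommendations.
http = Net::HTTP.new('internal-service.example', 443)
http.use_ssl      = true
http.open_timeout = 1 # seconds to establish the TCP connection
http.read_timeout = 2 # seconds to wait on each read

begin
  http.start { |conn| conn.get('/health') }
rescue Net::OpenTimeout, Net::ReadTimeout, SocketError, SystemCallError => e
  # Surface the failure immediately rather than tying up the process.
  warn "dependency unavailable: #{e.class}"
end
```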

~~~
jashmatthews
Dealing with latency variability is a Hard Problem™ and has little to do
with Ruby or process-vs-thread parallelism.

[https://dl.acm.org/doi/pdf/10.1145/2408776.2408794](https://dl.acm.org/doi/pdf/10.1145/2408776.2408794)

~~~
throwdbaaway
Well, having more spare ruby processes / threads would make the app more
resistant to latency variability, and could have made some incidents into
nonevents.

Also, while I don't disagree that it is indeed a hard problem, I do have very
good experience with an async java stack, where I didn't have to worry about
things like this. As long as a sane queue limit is defined on let's say the
jetty http client, if something bad happens at the other end, the back
pressure would kick in by immediately failing the requests that couldn't make
it into the queue. Other parts of the app would then continue to be
functional.

So, I would contend that it has a lot to do with ruby's high memory usage,
made much worse when single-threaded, and it looks like ruby 3.0 still won't
have a complete async story yet?

EDIT: I checked the link again, and it looks like Jeff Dean was talking about
latency at p999 or above? By "hiccup", I actually mean something that would
increase _avg_ latency by perhaps 5-10x, e.g. avg latency of 100ms under
steady state + timeout of 1 second + the remote being down. Sorry for the
confusion. Here, I am lucky if people start caring about p95.

~~~
jashmatthews
That's not an inherent property of a particular language or concurrency model,
though. That's having logic to track request queue depth for a particular
service or endpoint and fail fast/load shed. You can do the same in Ruby! Some
would probably say this is what a service mesh is for.
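
The "track queue depth and shed" logic really is small in plain Ruby. A minimal sketch (Rack-style interface; `LoadShedder` and the limit are invented for illustration):

```ruby
# Track in-flight requests; beyond a limit, fail fast with 503 instead of
# queueing more work behind a slow dependency. No gems required.
class LoadShedder
  def initialize(app, max_in_flight: 32)
    @app = app
    @max = max_in_flight
    @in_flight = 0
    @lock = Mutex.new
  end

  def call(env)
    admitted = @lock.synchronize do
      if @in_flight < @max
        @in_flight += 1
        true
      else
        false
      end
    end
    return [503, { 'Retry-After' => '1' }, ['shed']] unless admitted
    begin
      @app.call(env)
    ensure
      @lock.synchronize { @in_flight -= 1 }
    end
  end
end

app = LoadShedder.new(->(env) { [200, {}, ['ok']] }, max_in_flight: 1)
puts app.call({}).first  # => 200
```

In a multi-process deployment the counter would need to live in shared memory rather than in one process, which is where tools like Raindrops come in.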

Maybe you're thinking of the new Actor based model for compute parallelism?
Async IO in production Ruby has been a thing for easily more than a decade.

~~~
throwdbaaway
Of course it is not an inherent property of a particular language or
concurrency model, but it is a property of a particular _language
ecosystem_. As a Turing-complete language, everything is doable in ruby, but
at what cost? Now we are back to the trade-offs I listed above.

As for async IO in production, looking at the client library,
[https://github.com/socketry/async-http](https://github.com/socketry/async-http)
is barely 3 years old, and probably reached a production-ready state a few
months ago, if we are being generous.

But good point about service mesh. Moving the circuit breaker responsibility
to the service mesh would definitely help in my case, as the sidecar would
have all the data points from the 10+ single-threaded ruby processes running
in the same pod, and thus could make a much quicker decision.

~~~
jashmatthews
[https://github.com/socketry/async-http](https://github.com/socketry/async-http)
is just a new client library. PostRank, Zynga, CloudFoundry were all running
async IO in production on Ruby ~10 years ago. CRuby support for non-blocking
IO dates back to the mid 2000s.
[https://github.com/eventmachine/eventmachine](https://github.com/eventmachine/eventmachine)
actually dates back even further.

If you're using Unicorn then you've already got Raindrops, which gives you a
really simple way to do shared metrics across forked processes, like in-
flight requests to another service or how many of your Unicorns are busy.
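
Raindrops does this with lock-free counters in shared memory; as a much slower, stdlib-only sketch of the same pattern (a file plus `flock` standing in for shared memory, to show the shape of the idea without the gem):

```ruby
require 'tempfile'

# A counter shared across forked processes, guarded by an exclusive lock.
counter_file = Tempfile.new('busy-workers')
counter_file.write('0')
counter_file.flush

incr = lambda do
  File.open(counter_file.path, 'r+') do |f|
    f.flock(File::LOCK_EX)  # serialize updates across processes
    n = f.read.to_i
    f.rewind
    f.write((n + 1).to_s)
    f.flush
    f.truncate(f.pos)
  end
end

pids = 4.times.map { Process.fork { incr.call } }
pids.each { |pid| Process.wait(pid) }

puts File.read(counter_file.path)  # => 4
```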

~~~
throwdbaaway
EventMachine has been losing steam for a while now, which is why I brought up
Async as the new hotness. I don't think it is fair to classify async-http as
"just a new client library". As of now, in the ruby ecosystem, the Async
framework is the only player in town. From my perspective, it still looks
pretty much unproven, but perhaps we just live inside different bubbles.

It kinda feels like we are talking past each other here. I would just like to
clarify that I inherited all these different ruby apps, and I don't have the
magical ability to go back in time and say "Hey, perhaps we should use an
async framework from the beginning" or "Dude, enough with the monkey-
patching". And even if I did, those could be bad advice, as the ruby apps are
making money in production.

Anyway, thanks for the suggestion to share metrics across processes. That will
definitely help with the circuit breaker decision making in my case.

------
mfontani
What a year!

Python 2.x dying; Python 3 becoming the norm; "Perl6" renamed to Raku & Perl5
thinking of bumping to v7... and now Ruby going all the way to v3.0!

~~~
Alex3917
Python3 has really been the norm since 3.4, which was released in 2014. After
that it took another year until most major packages were updated to Python3,
but that happened at some point in 2015. By 2016 there weren't many packages
left that weren't Python3 compatible, or that didn't at least have Python3
replacements.

~~~
framecowbird
most Linux distros still have Python 2 as the default though, right?

~~~
marcuskaz
No, Ubuntu 18.04 shipped with Python3 as the default.

~~~
3np
Not true. Vanilla 18.04 LTS ships with no Python installed by default, and
the main repo's `python` package is 2.7. Same for Debian 10. Ubuntu 20.04
switched
the `python` package to Python3.

Even conservative CentOS (where Python is a base dependency for the system, as
opposed to the above) is on Python3 as of now though.

As far as I am aware, Debian 10 buster is the last mainstream LTS distro to
default to python2. Should change next year with Debian 11.

------
sickcodebruh
I’m excited about performance improvements but thrilled at the idea of adding
types. Has anyone here worked with Sorbet or a prerelease 3.0 in Rails and
able to share some notes?

~~~
bradleybuda
Our team has been using Sorbet at getcensus.com for almost a year now, and I'm
generally very happy with it, though it's not perfect.

As in most non-trivial Rails apps, our test suite takes a while to run, so I
like having Sorbet to catch "dumb" issues without having to run the full
suite. Running `srb tc` to check types is incredibly fast and seems to be
scaling well as our codebase grows. It catches the obvious stuff, but has also
found some subtle bugs in flow checking and is great for refactoring support.
The false positive rate is extremely low - if Sorbet flags a regression in
your type checking, it's very likely to be a real bug.

The Slack community is helpful and responsive - if you're thinking of using
sorbet, I'd strongly suggest joining.

The downsides are:

- Unclear workflows - it's hard to know when you need to "rescan" for new
type definitions in gems, the stdlib, and in generated code in your own app

- Poor Rails integration - the sorbet-rails package is helpful and being
actively developed, but it's clear that the Sorbet maintainers don't use
Rails and aren't going out of their way to support it.

- Upgrades are rough - the sorbet tools that scan your gems and code to find
"hidden definitions" are seemingly unstable from release to release. There's a
good chance that upgrading to a new version of sorbet will break your type
checking for mysterious and hard-to-debug reasons. Lots of this is probably
related to Rails as well.

- IDE integration isn't quite ready for prime time yet. I've gotten it
working in Emacs with lots of experimentation and poking around, and I think
some folks have it working in VSCode too, but it's not officially "released"
or supported, and it crashes somewhat often. It's still stable enough to be
useful and I'm glad I have it.

It's great and seems to be getting better, and it has absolutely made me more
productive, but know that you're still adopting an alpha- or beta-quality tool
and it's unlikely to "just work".

~~~
hdoan741
_sob on sorbet-rails_

j/k. sorbet-rails maintainer here. I agree with the assessment that sorbet
doesn't go out of its way to support some Rails features, e.g. method
overloading or scoping blocks accurately. Sorbet is opinionated about some
design choices, which makes it hard to support Rails' extensive use of meta-
programming. That said, Sorbet is still useful in checking the custom code we
write on top of Rails and its interactions. It may be hard to type the model
files themselves, but we can type-check the code making use of the models!
Recently, I started a new project on Rails and it's quite fun building it
with types from scratch :D

I find Sorbet a very helpful tool for development. I hope people will give it
a try and contribute to tools around it (sorbet-rails included) so that we
have great tools to use!

For those who are interested in the topic, I outlined some of the technical
challenges with using Sorbet on Rails here:
[https://medium.com/czi-technology/static-type-checking-in-rails-with-sorbet-dc0414a502d9](https://medium.com/czi-technology/static-type-checking-in-rails-with-sorbet-dc0414a502d9)

~~~
bradleybuda
I did not mean to put down sorbet-rails - it has been really useful to us and
we appreciate all the work you and your team have put into it!

But I do have the sense that building Rails apps using sorbet won't feel
"first class" until we have some sorbet maintainers that use Rails or the
Rails team starts adopting sorbet (or both!)

~~~
hdoan741
Yeah, I totally agree with this. In some ways, it's the difference in the
philosophical approaches of the two systems that will be hard to reconcile.
E.g.: Rails favors convenience (methods that just work under various
conditions), whereas Sorbet favors explicitness (no method overloading, typed
struct classes instead of hashes).

------
sacado2
Are they going to make non-backward-compatible changes, or is this just a
marketing move?

~~~
pmarreck
Looks like they're adding an optional type system, which means you get the
worst of both worlds: no guarantees AND no compile-time type checking (think:
what happens if type-checked code calls non-typed code?). So I have no idea
how they're going to make that fast, since all types will still have to be
checked at runtime.
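
The boundary problem can be shown with a toy illustration (this is not how Ruby 3 or Sorbet actually enforces types; `assert_type` is an invented helper standing in for a runtime check):

```ruby
# When typed code receives a value from untyped code, the declared type can
# only be enforced with a runtime check - which is where the feared
# overhead comes from.
def assert_type(value, type)
  raise TypeError, "expected #{type}, got #{value.class}" unless value.is_a?(type)
  value
end

# "Untyped" code returns whatever it likes...
untyped_lookup = ->(key) { { 'answer' => 42 }[key] }

# ...so the "typed" caller pays for a check on every call at the boundary.
def typed_double(n)
  n = assert_type(n, Integer)
  n * 2
end

puts typed_double(untyped_lookup.call('answer'))  # => 84
```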

~~~
sosodev
Are you sure it’ll be awful? Sorbet
([https://sorbet.org/](https://sorbet.org/)) is pretty popular already. It can
statically check your whole project and dynamically check it at runtime. It
also doesn’t add that much overhead so I’m not sure what you’re on about...

~~~
oomkiller
Sorbet looks really cool; too bad Ruby developed its own typing system (RBS)
that keeps types in a totally separate file instead of adopting Sorbet.
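
For context, the separate-file approach looks roughly like this in Ruby 3's RBS (a sketch; `Greeter` is an invented class, and the Ruby source file stays untouched while a parallel `sig/greeter.rbs` carries the types):

```rbs
# sig/greeter.rbs - type signatures for a hypothetical greeter.rb
class Greeter
  def initialize: (String name) -> void
  def greet: () -> String
end
```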

~~~
tomc1985
The 'separate' file is to avoid breaking backwards compatibility and to avoid
a Python 2/3 fiasco.

~~~
virtue3
Also see CoffeeScript, etc.

------
ricardobeat
Context? What changes are in v3?

~~~
splitrocket
The goal of Ruby v3 is to be 3 times faster than v2.

[https://blog.heroku.com/ruby-3-by-3/](https://blog.heroku.com/ruby-3-by-3/)

Unclear if that goal has been achieved.

~~~
anonova
From benchmarks [1], 2.8.0 (with JIT) is ~2x faster than 2.0.0p648 [2].

[1]: [https://pragtob.wordpress.com/2020/08/24/the-great-rubykon-benchmark-2020-cruby-vs-jruby-vs-truffleruby/](https://pragtob.wordpress.com/2020/08/24/the-great-rubykon-benchmark-2020-cruby-vs-jruby-vs-truffleruby/)

[2]: [https://pragtob.wordpress.com/2017/01/24/benchmarking-a-go-ai-in-ruby-cruby-vs-rubinius-vs-jruby-vs-truffle-a-year-later/](https://pragtob.wordpress.com/2017/01/24/benchmarking-a-go-ai-in-ruby-cruby-vs-rubinius-vs-jruby-vs-truffle-a-year-later/)

~~~
riffraff
Notice the "3x3" goal is on a specific benchmark (optcarrot), not on all
workloads.

~~~
tom-lord
There's no such thing as a performance measurement "on all workloads".

------
sunnydey
I've been waiting for this for a long time. I hope it will be more human,
more featureful, and more user-friendly than other languages.

------
sunnydey
I've been waiting for this for a long time. I hope it will have more
features, be faster and more performance-oriented, more exciting, and more
user-friendly than other languages.

------
maxpert
Did we get JIT?

~~~
anonova
Yes, it was introduced in Ruby 2.6 (2018):
[https://www.ruby-lang.org/en/news/2018/12/25/ruby-2-6-0-released/](https://www.ruby-lang.org/en/news/2018/12/25/ruby-2-6-0-released/)

~~~
pizza234
Well, yes and no (although the question is a bit open).

It's more or less beta quality, and very primitive. Using it with Rails is
discouraged, so I'd be inclined to state that "we didn't get it yet".

I'm also personally skeptical that the unusual approach (invoking a whole C
compiler in a separate thread) will stand in the long term, but that's my own
take.

~~~
jashmatthews
The CRuby JIT is stable, but whether it improves performance or not is
workload-dependent.

It's simple, not primitive. MJIT is designed to take advantage of a C
compiler's optimizations.

"Compile to C" has worked for Chicken Scheme for the past 20 years and
continues to be a popular way for functional languages to compile. It's also
how Nim works. It's all about different trade-offs.

~~~
oblio
Chicken Scheme compiles ahead of time, doesn't it?

~~~
jashmatthews
Yeah but it's not an important distinction.

~~~
oblio
Isn't it? Compiling C takes time, and with Ruby you would be doing it on each
execution. Is there some caching involved?

~~~
jashmatthews
Almost everything is in a pre-compiled header. It's actually similar in
overhead to an LLVM-based JIT.

You wouldn’t want to do it for a browser JIT but for a server side app it’s
OK.

~~~
oblio
Does Ruby ship with a compiler? How will this work on Windows?

