
Reduction in Garbage Collection Pause Time in Ruby 2.2 - mparramon
https://www.omniref.com/blog/blog/2014/11/18/ko1-at-rubyconf-2014-massive-garbage-collection-speedup-in-ruby-2-dot-2/?hn=1
======
kevingadd
Incremental garbage collection is great for realtime-ish use cases like games
and multimedia, too, because if you tune the collector right, you can ensure
your GC never causes you to drop frames or fail to mix a sound buffer on time.

Need to hit vsync every 16ms? Just run a 1-3ms incremental GC every frame when
you have time to spare, and as long as your game's theoretical performance is
good enough, you won't ever hitch. This is one of Lua's advantages for game
scripting because it's had a fairly solid incremental collector for a while,
making it easy to give the GC a fixed chunk of time to operate in every frame
so your game never pauses.
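The frame-budget idea can be sketched in Ruby as well. This is a hedged sketch, assuming CRuby 2.1+'s `GC.start(full_mark: false, immediate_sweep: true)` minor-collection trigger; `update` and `render` are placeholder stubs, not any real engine API:

```ruby
# Sketch of a frame loop that spends spare frame time on a minor GC,
# so a big pause never lands mid-frame.
FRAME_BUDGET = 0.016 # 60fps target: ~16ms per frame

def update; end  # game logic stub (placeholder)
def render; end  # drawing stub (placeholder)

def run_frame
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  update
  render
  spare = FRAME_BUDGET - (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start)
  # With a few ms to spare, trigger a minor collection now instead of
  # letting one fire at an arbitrary moment later.
  GC.start(full_mark: false, immediate_sweep: true) if spare > 0.003
end
```

Lua exposes the same idea more directly via `collectgarbage("step")`, which is what makes the fixed-time-slice approach easy there.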

Scripting is a huge boon for people doing game development, game prototyping,
and multimedia scripting (toolsets like Processing, for example), so having
another scripting language to choose from for that is awesome.

~~~
the8472
MRI still has the global interpreter lock, i.e. is effectively single-threaded
for CPU-bound tasks.

IO and waiting on forked jobs can be done in background threads, since the
lock is released around certain blocking calls.

So while incremental GC certainly does bring down latency caused by the GC it
does not bring down latency for tasks that could be parallelized.
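One way to see the difference in MRI: threads blocked on sleep or IO overlap (the lock is released while they wait), while CPU-bound threads serialize. A small illustrative sketch, with iteration counts made up for illustration:

```ruby
require "benchmark"

# Four threads each sleeping 0.2s: MRI releases the global lock while a
# thread is blocked, so the sleeps overlap and the batch takes ~0.2s.
io_time = Benchmark.realtime do
  4.times.map { Thread.new { sleep 0.2 } }.each(&:join)
end

# Four CPU-bound threads: the lock lets only one run at a time, so the
# wall time is roughly the sum of the individual times.
cpu_time = Benchmark.realtime do
  4.times.map { Thread.new { 200_000.times { Math.sqrt(rand) } } }.each(&:join)
end

puts format("sleeping: %.2fs, cpu-bound: %.2fs", io_time, cpu_time)
```

On a multi-core machine, a runtime without a global lock could push the CPU-bound figure toward a quarter of the serial time; in MRI it stays roughly the same.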

~~~
chrisseaton
MRI is multi-threaded, even for CPU-bound tasks - it just doesn't run those
threads in parallel.

~~~
nitrogen
Hence the "effectively" in the parent comment; Ruby's interpreter lock can be
very problematic in threaded apps that use I/O functions from C-based gems
that forget to release the lock.

~~~
chrisseaton
It's not a case of 'effectively' or not. MRI is 100%, totally, unequivocally,
multi-threaded.

The fact that those threads do not run in parallel is an entirely unrelated
issue. Parallelism is not an essential quality of multi-threading.

~~~
teacup50
This is a completely pointless semantic distinction that nobody cares about.

What makes threads interesting is parallelism; if the goal is merely
cooperative or pre-emptive multitasking, with no true parallelism, then
you're _"effectively"_ not multi-threaded by any modern expectation.

~~~
chrisseaton
If threads aren't interesting except for parallelism, what do you think people
were using them for the multiple decades when the vast majority of people were
running single-cored uniprocessor systems?

The answer is that it's often useful to manage multiple things at the same
time, even if we can't or don't want to actually do them at the same time.

We may want to use blocking IO for the simplicity it gives us compared to
asynchronous IO, but allow other parts of the system to continue to run during
that blocking.

We may want to let threads and the kernel manage our context and context
switches for us, instead of manually storing and restoring state, because it
can be simpler that way.
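A minimal illustration of the first point, using a pipe for the blocking read (the names here are just for the sketch):

```ruby
# One thread blocks on a simple blocking read while the main thread
# keeps doing work, then collects the result when it's ready.
reader, writer = IO.pipe

t = Thread.new { reader.gets }   # blocks without stopping the main thread

progress = []
3.times { |i| progress << i }    # main thread continues during the block
writer.puts "done"

puts t.value                     # the blocked read completes with "done"
```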

~~~
teacup50
> _If threads aren't interesting except for parallelism, what do you think
> people were using them for the multiple decades when the vast majority of
> people were running single-cored uniprocessor systems?_

They largely _weren't_. POSIX threads weren't standardized and widely adopted
until the 90s, and it wasn't until multi-processor systems became more
prevalent that threads saw wide adoption at all.

------
jamon51
It's too bad that keyword arguments are semantically opposite between CRuby
and RubyMotion in Ruby 2.0+. It'll cause a divergence and make it a little
more difficult for Rails+RubyMotion engineers like me to switch.

I understand the reasoning behind both decisions (RubyMotion is following
Objective-C's semantics, CRuby is following Python et al).

For those who aren't familiar with RubyMotion's syntax:

    def my_method(arg1, keyword: arg2)
      puts arg1
      puts arg2
    end

    my_method("test", keyword: "test2")
    # output
    test
    test2
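For contrast (my sketch, not the commenter's), in CRuby 2.0+ the `keyword:` in the definition names the parameter itself, and the expression after the colon is just its default value:

```ruby
# CRuby 2.0+ semantics: `keyword:` declares a parameter named `keyword`;
# "default" here is its default value, used when the caller omits it.
def my_method(arg1, keyword: "default")
  puts arg1
  puts keyword
end

my_method("test", keyword: "test2")
# output
# test
# test2
```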

------
rmchugh
Ruby is a wonderful language. It would be great if there were some deep-
pocketed multinational willing to fund its development in the same way
Facebook has done for PHP or Google has done for Python.

~~~
jxf
Heroku pays Ko1 and Matz [0], and Heroku is owned by Salesforce, which is a
public company. Granted, Salesforce doesn't have Google or FB's war chest --
but it's not like Ruby isn't being supported by companies.

[0]:
[https://blog.heroku.com/archives/2011/7/12/matz_joins_heroku](https://blog.heroku.com/archives/2011/7/12/matz_joins_heroku)

~~~
arnvald
Apart from Koichi, there's one more person - Nobu - who works on CRuby
full-time. Nobu is the person with the most commits to CRuby. Matz no longer
works on CRuby, only on MRuby and the language specification.

It's also worth noting that another implementation, JRuby, is supported by
Red Hat. IIRC two people work on it full-time: Charles Nutter (Headius) and
Thomas Enebo.

~~~
nirvdrum
There are a few of us at Oracle working on an alternative backend via JRuby,
as well. Chris Seaton gave a talk on it at RubyConf last week. He has a series
of posts about it that are pretty good:
posts about it that are pretty good:

[http://www.chrisseaton.com/rubytruffle/](http://www.chrisseaton.com/rubytruffle/)

Obviously, this isn't people being directly paid to work on MRI, but there are
side benefits like a richer set of rubyspecs and sometimes fixes landing in
MRI.

~~~
timr
That was a great talk -- really sort of mind-warping to think that there are
people working on "de-optimizing" the JVM so that Ruby can run faster in the
average case.

------
fsiefken
Nice timing, together with the memory profiling and optimizations of Rails
4.2 with the allocation tracer gem. Also see Aaron Patterson on optimizing
memory usage in Rails apps:
[https://www.youtube.com/watch?v=-D15q-_hdzs#t=1033](https://www.youtube.com/watch?v=-D15q-_hdzs#t=1033)
Koichi Sasada (Ko1) is also the author of the allocation_tracer gem Patterson
demoed:
[https://github.com/ko1/allocation_tracer](https://github.com/ko1/allocation_tracer)

------
Dirlewanger
So the keyword argument speed-up is pretty insane, but how does it compare to
normal positional arguments? Is there an advantage over them too? In the
bigger picture, there are obviously arguments for and against keyword
arguments (the biggest one being increased connascence).
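As a rough way to answer the positional-vs-keyword question yourself, a micro-benchmark with the stdlib Benchmark module; the method names and iteration count here are made up for illustration, and absolute numbers vary by machine and Ruby version:

```ruby
require "benchmark"

def with_positional(a, b)
  a + b
end

def with_keywords(a:, b:)
  a + b
end

# Time a million trivial calls of each style and compare.
N = 1_000_000
Benchmark.bm(12) do |x|
  x.report("positional") { N.times { with_positional(1, 2) } }
  x.report("keywords")   { N.times { with_keywords(a: 1, b: 2) } }
end
```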

~~~
jcoder
> increasing connascence

Well, you're trading connascence of position for connascence of name, which in
this case is much better because you are providing context for the argument
data.

------
sdsykes
> GC pause time dropped from 16ms to about 3ms -- slightly better than a 500%
improvement

I wouldn't call an 81% reduction a 500% improvement.

~~~
igravious

    100 / 5 = 20 -> 100 - 20 = 80   (/ = reduction)

    100 * 5 = 500                   (* = improvement)

    half full = half empty

    same difference
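With the article's actual figures (16ms down to about 3ms), both readings can be computed directly:

```ruby
before_ms = 16.0
after_ms  = 3.0

# Percentage reduction: how much less time is spent paused.
reduction = (before_ms - after_ms) / before_ms * 100  # 81.25%

# Speedup factor: how many times shorter the pause is.
speedup = before_ms / after_ms                        # ~5.3x

puts format("%.2f%% reduction, %.1fx speedup", reduction, speedup)
```

Same change, two ways of stating it: an ~81% reduction in pause time is the same thing as pauses being ~5.3x shorter.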

------
igravious
Even with a reasonable amount of Google-fu I can't find out how to track the
development version of Ruby where these changes land with rbenv (I don't use
RVM). Would some kind soul enlighten me?
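One approach that should work, assuming a reasonably current ruby-build plugin (the exact definition name may differ between ruby-build versions):

```shell
# List available definitions and look for the trunk/dev build
rbenv install --list | grep dev

# Install and select it
rbenv install 2.2.0-dev
rbenv global 2.2.0-dev
ruby -v
```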

------
aikah
Nice! This will make Ruby useful for a new range of applications like games,
sound synthesis, ...

~~~
SEMW
Re sound synthesis in Ruby, that's already a thing -- check out
[http://sonic-pi.net/](http://sonic-pi.net/)

~~~
simpsond
Well... there goes my weekend.

------
jblow
Very misleading title. This is not about a GC speed-up; it is about a
reduction in pause time. The GC itself is eating as many cycles as before
(possibly more, due to the cost of making things incremental), and they are
clear about this in the article.

~~~
dang
Ok, we changed the title.

