Hacker News
Reduction in Garbage Collection Pause Time in Ruby 2.2 (omniref.com)
215 points by mparramon on Nov 22, 2014 | 50 comments



Incremental garbage collection is great for realtime-ish use cases like games and multimedia, too, because if you tune the collector right, you can ensure your GC never causes you to drop frames or fail to mix a sound buffer on time.

Need to hit vsync every 16ms? Just run a 1-3ms incremental GC every frame when you have time to spare, and as long as your game's theoretical performance is good enough, you won't ever hitch. This is one of Lua's advantages for game scripting because it's had a fairly solid incremental collector for a while, making it easy to give the GC a fixed chunk of time to operate in every frame so your game never pauses.
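A minimal sketch of that per-frame budget idea in Ruby terms (`run_frame` and its timing logic are hypothetical; the `GC.start` keywords for a minor, lazily swept collection exist since Ruby 2.1):

```ruby
FRAME_BUDGET = 0.016 # seconds per frame at ~60fps
GC_SLICE     = 0.003 # time we're willing to give the collector

def run_frame(render_frame)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  render_frame.call
  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
  # If there's spare time before vsync, spend a slice of it on a small
  # collection instead of risking a big stop-the-world pause later.
  if FRAME_BUDGET - elapsed >= GC_SLICE
    GC.start(full_mark: false, immediate_sweep: false)
  end
  elapsed
end

run_frame(-> { Array.new(1_000) { Object.new } })
```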

Scripting is a huge boon for people doing game development, game prototyping, and multimedia scripting (toolsets like Processing, for example), so having another scripting language to choose from for that is awesome.


At least in games Ruby has a hard time competing with LuaJIT; all that extra headroom is really nice to have, because 16ms isn't much at all.

Having said that, GC improvements are a good thing. Some of the RPG Maker folks will be delighted when this eventually reaches them.


MRI still has the global interpreter lock, i.e. is effectively single-threaded for CPU-bound tasks.

IO/waiting for forked jobs can be done in background threads since the lock is released on some select function calls.
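For example, `sleep` (like blocking IO) releases the lock, so waits in separate threads overlap. A quick sketch using the stdlib Benchmark module:

```ruby
require "benchmark"

# Two threads each block for 0.5s. Because blocking calls release the
# global lock, the waits overlap: wall time stays near 0.5s rather
# than the 1.0s a serial version would need.
elapsed = Benchmark.realtime do
  threads = Array.new(2) { Thread.new { sleep 0.5 } }
  threads.each(&:join)
end
```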

So while incremental GC certainly does bring down latency caused by the GC it does not bring down latency for tasks that could be parallelized.


MRI is multi-threaded, even for CPU-bound tasks - it just doesn't run those threads in parallel.


Hence the "effectively" in the parent comment; Ruby's interpreter lock can be very problematic in threaded apps that use I/O functions from C-based gems that forget to release the lock.


It's not a case of 'effectively' or not. MRI is 100%, totally, unequivocally, multi-threaded.

The fact that those threads do not run in parallel is an entirely unrelated issue. Parallelism is not an essential quality of multi-threading.
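Concretely: CPU-bound work in multiple MRI threads interleaves under the lock rather than running in parallel (a sketch; the counting loop just stands in for real CPU work):

```ruby
# Four threads each do pure-Ruby CPU work. They are real threads and
# all complete, but MRI interleaves them under the GVL, so total wall
# time is roughly that of doing the work serially.
def busy_count(n)
  i = 0
  i += 1 while i < n
  i
end

results = Array.new(4) { Thread.new { busy_count(100_000) } }.map(&:value)
# results == [100_000, 100_000, 100_000, 100_000]
```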


This is a completely pointless semantic distinction that nobody cares about.

What makes threads interesting is parallelism; if the goal is merely cooperative or pre-emptive multitasking, with no parallelism, then you're "effectively" not multi-threaded as per any modern expectation.


If threads aren't interesting except for parallelism, what do you think people were using them for during the multiple decades when the vast majority of people were running single-core uniprocessor systems?

The answer is that it's often useful to manage multiple things at the same time, even if we can't or don't want to actually do them at the same time.

We may want to use blocking IO for the simplicity it gives us compared to asynchronous IO, but allow other parts of the system to continue to run during that blocking.

We may want to let threads and the kernel manage our context and context switches for us, instead of manually storing and restoring state, because it can be simpler that way.


> If threads aren't interesting except for parallelism, what do you think people were using them for the multiple decades when the vast majority of people were running single-cored uniprocessor systems?

They largely weren't. POSIX threads weren't standardized and widely adopted until the 90s, and it wasn't until multi-processor systems became more prevalent that threads saw wide adoption at all.


I don't understand what this means... Can you explain?

Here's my basic understanding:

MRI is a single process, with multiple threads... They can't run in parallel because of the global interpreter lock (which is used because the code in MRI is not thread safe? Meaning each thread can touch the same memory and may leave it in a broken state for another thread, so we lock access to it first...).

So: there are multiple threads, I am guessing each with their own task (maybe one to run the code, one to do garbage collection, ... I have no idea), but because of the GIL they can't be parallelized. Still multi-threaded anyway.


You can create multiple OS-level threads in MRI using Thread.new. Those threads can run code at the same time, just not Ruby code. There is a function that C extensions are supposed to call while running I/O that can block, or while running CPU-intensive non-Ruby code, that releases the global lock.


If you write a C extension to do CPU-intensive stuff, you can release the lock.


This was also one of the major changes in the Android ART VM vs Dalvik.


It's too bad that keyword arguments are semantically opposite between CRuby and RubyMotion in Ruby 2.0+. It'll cause a divergence and make it a little more difficult for Rails+RubyMotion engineers like me to switch.

I understand the reasoning behind both decisions (RubyMotion is following Objective-C's semantics, CRuby is following Python et al).

For those who aren't familiar with RubyMotion's syntax:

    def my_method(arg1, keyword: arg2)
      puts arg1
      puts arg2
    end

    my_method("test", keyword: "test2")
    # output
    test
    test2
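For contrast, a sketch of the same call under CRuby 2.0+ semantics, where the expression after the colon is a default value for the keyword parameter, not a binding to a separate local variable:

```ruby
def my_method(arg1, keyword: "default")
  puts arg1
  puts keyword
end

my_method("test", keyword: "test2")
# output
# test
# test2
```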


Ruby is a wonderful language. It would be great if there was some deep-pocketed multinational willing to fund its development in the same way as Facebook has done for PHP or Google has done for Python.


Heroku pays Ko1 and Matz [0], and Heroku is owned by Salesforce, which is a public company. Granted, Salesforce doesn't have Google or FB's war chest -- but it's not like Ruby isn't being supported by companies.

[0]: https://blog.heroku.com/archives/2011/7/12/matz_joins_heroku


Apart from Koichi, there's one more person - Nobu - working on CRuby full-time. Nobu is the person with the most commits to CRuby. Matz no longer works on CRuby, only on MRuby and the language specification.

It's also worth noting that another implementation, JRuby, is supported by Red Hat. IIRC two people work on it full-time: Charles Nutter (Headius) and Thomas Enebo.


There are a few of us at Oracle working on an alternative backend via JRuby as well. Chris Seaton gave a talk on it at RubyConf last week. He has a series of posts about it that are pretty good:

http://www.chrisseaton.com/rubytruffle/

Obviously, this isn't people being directly paid to work on MRI, but there are side benefits like a richer set of rubyspecs and sometimes fixes landing in MRI.


That was a great talk -- really sort of mind-warping to think that there are people working on "de-optimizing" the JVM so that Ruby can run faster in the average case.


Thanks for that; I wasn't aware of that connection.


What has Google done for Python? The only story I know of is Unladen Swallow, which was a side project that ultimately failed.


Others will know more, but at the very least, they hired Guido for ~7 years and let him spend half of his time working on improving Python.


[Same person, lost password for the previous throwaway]

Hiring GvR is better than nothing, but still a very long way from a "deep-pocketed multinational funding Python development".


Perhaps a bit of an exaggeration; I apologise. I suppose I'm mainly bitter about PHP. It completely baffles me that FB has invested so much money in that language ecosystem.


I wouldn't say Facebook funds the development of PHP per se, but they do fund the development of HHVM and they made the initial draft of the language spec.


Enova Financial in Chicago employs Brian Shirai to work on Rubinius, a new implementation of Ruby.

http://rubini.us/


I could say the same for perl, or anything really.


There's only so much money you can throw at broken language designs and still expect a reasonable return.


I'll bite. What's broken?


Yeah, I'd be interested too. I code PHP, Ruby, Python, C#, JS, and all of them have quirks and stuff. But Python and Ruby are almost identical other than some syntactic sugar. PHP is PHP; $needle, $haystack gets me every time. JS is just evil. Fucking hate that language. Dog turd. C# is nice, a bit verbose.


I think he's a troll.


He's a smart guy, a Haskell programmer, and co-author of a pretty great Haskell book.

I don't think he's a troll; it just seems that his opinion of Ruby is influenced by experience in other languages.


If he doesn't want to elucidate his point, it's basically trolling, no matter what his experience. It's a line designed to draw a rise out of people - that's it.


Are you suggesting that ruby's design is more broken than PHP's?


More that Facebook's return on the amount of money they've spent on PHP is pretty poor.

Facebook has had the author of jemalloc locked up working on making PHP go faster for a few years now. Surely there's a more productive place to sink that kind of talent.


Nice timing, together with the memory profiling and optimizations in Rails 4.2 using the allocation tracer gem. Also see Aaron Patterson on optimizing memory usage in Rails apps: https://www.youtube.com/watch?v=-D15q-_hdzs#t=1033 Koichi Sasada (Ko1) is also the author of the allocation tracer gem Patterson demoed: https://github.com/ko1/allocation_tracer


So the keyword argument speed-up is pretty insane, but how does it compare to normal positional arguments? Is there an advantage over them too? In the bigger picture, there are obviously arguments for and against keyword arguments (the biggest one being increased connascence).


> increasing connascence

Well, you're trading connascence of position for connascence of name, which in this case is much better because you are providing context for the argument data.


Koichi says the performance should now be comparable to normal positional arguments.
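A rough way to check on your own build (micro-benchmark sketch; absolute numbers will vary by machine and Ruby version, but on 2.2 the two call styles should be in the same ballpark):

```ruby
require "benchmark"

def positional(a, b)
  a + b
end

def keyword(a:, b:)
  a + b
end

n = 500_000
Benchmark.bm(11) do |bm|
  bm.report("positional") { n.times { positional(1, 2) } }
  bm.report("keyword")    { n.times { keyword(a: 1, b: 2) } }
end
```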


> biggest one is increasing connascence

Is that a valid argument? IMO, keyword arguments are just making it explicit.


>GC pause time dropped from 16ms to about 3ms — slightly better than a 500% improvement

I wouldn't call an 81% reduction a 500% improvement.


   100 / 5 = 20 -> 100 - 20 = 80 (/ = reduction)

   100 * 5 = 500 (* = improvement)

   half full = half empty

   same difference
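With the article's actual numbers (16ms down to 3ms), both framings check out:

```ruby
reduction = (16 - 3) / 16.0 * 100  # ~81% less pause time
speedup   = 16 / 3.0               # ~5.3x, i.e. "slightly better than 500%"
```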


See, kids? Denominators are important!


Even with a reasonable amount of Google-fu I can't find out how to track the development version of Ruby where these changes land with rbenv (I don't use RVM). Would some kind soul enlighten me?


Nice! This will make Ruby useful for a new range of applications, like games, sound synthesis, ...


Re sound synthesis in Ruby, that's already a thing -- check out http://sonic-pi.net/


Well... there goes my weekend.


Oh, that's nice. Is there something like that for creating visuals for VJing through Ruby code, or are there ways to integrate the Ruby-generated music with Ruby-generated visuals?


Very misleading title. This is not about a GC speed-up; it is about a reduction in pause time. The GC itself is eating as many cycles as before (possibly more, due to the cost of making things incremental), and they are clear about this in the article.


Ok, we changed the title.



