Need to hit vsync every 16 ms? Just run a 1-3 ms incremental GC step every frame when you have time to spare, and as long as your game's theoretical performance is good enough, you won't ever hitch. This is one of Lua's advantages for game scripting: it has had a fairly solid incremental collector for a while, which makes it easy to give the GC a fixed chunk of time every frame so your game never pauses.
Scripting is a huge boon for people doing game development, game prototyping, and multimedia scripting (toolsets like Processing, for example), so having another scripting language to choose from for that is awesome.
Having said that, GC improvements are a good thing. Some of the RPG Maker folks will be delighted when this eventually reaches them.
IO and waiting on forked jobs can be done in background threads, since MRI releases the lock around certain blocking calls.
So while incremental GC certainly brings down latency caused by the GC itself, it does not bring down latency for tasks that could be parallelized.
The fact that those threads do not run in parallel is an entirely unrelated issue. Parallelism is not an essential quality of multi-threading.
What makes threads interesting is parallelism; if the goal is merely cooperative or pre-emptive multitasking, with no actual parallelism, then you're "effectively" not multi-threaded as per any modern expectation.
The answer is that it's often useful to manage multiple things at the same time, even if we can't or don't want to actually do them at the same time.
We may want to use blocking IO for the simplicity it gives us compared to asynchronous IO, but allow other parts of the system to continue to run during that blocking.
We may want to let threads and the kernel manage our context and context switches for us, instead of manually storing and restoring state, because it can be simpler that way.
They largely weren't. POSIX threads weren't standardized and widely adopted until the 90s, and it wasn't until multi-processor systems became more prevalent that threads saw wide adoption at all.
Here's my basic understanding:
MRI is a single process with multiple threads... They can't run in parallel because of the global interpreter lock (which exists because the code in MRI is not thread-safe? Meaning each thread can touch the same memory and may leave it in a broken state for another thread, so we lock access to it first...).
So: there are multiple threads, I am guessing each with its own task (maybe one to run the code, one to do garbage collection... I have no idea), but because of the GIL they can't run in parallel. Still multi-threaded anyway.
I understand the reasoning behind both decisions (RubyMotion is following Objective-C's semantics, CRuby is following Python et al).
For those who aren't familiar with RubyMotion's syntax:
    def my_method(arg1, keyword: arg2)
      # ...
    end

    my_method("test", keyword: "test2")
It's also worth noting that another implementation, JRuby, is supported by Red Hat. IIRC two people work on it full-time: Charles Nutter (Headius) and Thomas Enebo.
Obviously, this isn't people being directly paid to work on MRI, but there are side benefits like a richer set of rubyspecs and sometimes fixes landing in MRI.
Hiring GvR is better than nothing, but still a very long way from a "deep-pocketed multinational funding Python development".
I don't think he's a troll; it just seems that his opinion of Ruby is influenced by experience in other languages.
Facebook has had the author of jemalloc locked up working on making PHP go faster for a few years now. Surely there's a more productive place to sink that kind of talent.
Well, you're trading connascence of position for connascence of name, which in this case is much better because you are providing context for the argument data.
Is that a valid argument? IMO, keyword arguments are just making it explicit.
I wouldn't call an 81% reduction a 500% improvement.
Time cut to a fifth: 100 -> 20, i.e. an 80% reduction (the "/" view).
Speed grown fivefold: 100 -> 500, i.e. "500%" (the "*" view).
half full = half empty