Fibur gives you full concurrency during your I/O calls in Ruby 1.9 (gist.github.com)
53 points by wycats on Dec 19, 2011 | hide | past | favorite | 20 comments



Would this be useful for environments where extreme performance is necessary, such as Fibonacci API servers?


Only if you're reading Fibonacci numbers from an I/O stream. Fibur only provides concurrency during I/O operations.


We can use the technique first described in https://github.com/glenjamin/node-fib by calling Fibur.pass:

  def fib(n)
    Fibur.pass
    return n if (0..1).include? n
    fib(n-1) + fib(n-2) if n > 1
  end
Now, if you wrap this in Fibur.new, you will get parallel operation!


Bro, you get parallel operation anyway; that Fibur.pass just adds scheduler overhead.

The whole point of Fibur is you don't need to use #pass!!!


Not true! It will keep the response-time interquartile range small, providing a consistent user experience!


These things are usually hard to implement, but Aaron has put his mastery on display: very clean, well-written code.


A bit light on comments, though.


Genius! (~_^)


Do note, as is mentioned in the comment Thread for the gist, that Fibur is also fully Ruby 1.8 compatible...


lol, comment Thread


This changes everything. Looking forward to Ilya Grigorik's writeup on the features and possible uses for this library.


Is this truly the core of the library?

https://github.com/tenderlove/fibur/blob/master/lib/fibur.rb

What am I missing...?


In Ruby 1.9, I/O is asynchronous in threads. This is a clever joke that many people who didn't read the code fell for...
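The mechanic behind the joke is easy to demonstrate with plain Threads: MRI releases the GIL while a thread blocks, so concurrent waits overlap. A minimal sketch, with sleep standing in for a blocking I/O call:

```ruby
require "benchmark"

# MRI releases the GIL during blocking calls, so these waits overlap.
# sleep stands in for a blocking I/O call (a socket read, say).
elapsed = Benchmark.realtime do
  threads = 4.times.map { Thread.new { sleep 0.2 } }
  threads.each(&:join)
end

# Concurrent: finishes in roughly 0.2s rather than the serial 0.8s.
```

No Fibur required, which is rather the point.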


In Ruby 1.8, I/O is asynchronous in threads too...


Needs rack-fibur_pool


What benefits does Fibur give you that em-synchrony doesn't? Not suggesting there isn't a need for a competitor, but trying to place it in the space.


What benefits does em-synchrony give you that Fibur doesn't?


Make sure you read and understand the source code before commenting here. It won't take very long, trust me.



So, I'm not sure if this is a reaction to my Sinatra::Synchrony post yesterday (http://news.ycombinator.com/item?id=3367139) or not, but in the event that it is:

Sinatra::Synchrony is not designed to correct a non-existent blocking IO problem. I was under the (incorrect) assumption that certain non-network IO blocks occurred in MRI. I was thinking of system in particular, but I was wrong. In threads, there is no GIL on IO.
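That claim is straightforward to verify: a thread blocked on a pipe read doesn't stop other threads from making progress. A sketch (the variable names are illustrative):

```ruby
reader, writer = IO.pipe

ticks = 0
blocked = Thread.new { reader.read(1) }                # parks in read(2); GIL is released
counter = Thread.new { 20.times { ticks += 1; sleep 0.01 } }

counter.join          # the counter finishes while the reader is still blocked
writer.write("x")     # now unblock the reader
blocked.join
# ticks reached 20 even though another thread was blocked on I/O the whole time
```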

The main purpose of this gem is to allow a developer to utilize EventMachine without having to resort to callbacks, and without having to switch to a custom framework exclusively for this.

The primary reason you would want to do this is that in some scenarios, you can get a nice performance boost by using EventMachine with your application. But I didn't want to have to re-write all my code to utilize callbacks, because then I couldn't change my concurrency model. For example, if I used EventMachine with its callbacks approach, it would be much harder to switch back to Rubinius with threading.
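The callback-flattening trick that em-synchrony performs can be sketched with plain Fibers and a toy callback queue; no EventMachine here, and async_fetch plus the pending queue are stand-ins, not em-synchrony's actual API:

```ruby
require "fiber"

# Toy reactor: callbacks are queued and fired later, like an event-loop tick.
pending = []

# Stand-in for an evented client call that takes a callback.
async_fetch = lambda do |key, &callback|
  pending << -> { callback.call("value-for-#{key}") }
end

# The em-synchrony-style trick: park the current fiber and let the callback
# resume it, so the calling code reads like ordinary blocking code.
sync_fetch = lambda do |key|
  fiber = Fiber.current
  async_fetch.call(key) { |value| fiber.resume(value) }
  Fiber.yield  # park here until the callback hands us the value
end

result = nil
Fiber.new { result = sync_fetch.call("user:1") }.resume
pending.shift.call until pending.empty?   # drain the "event loop"
result  # => "value-for-user:1"
```

The caller of sync_fetch never sees a callback, which is exactly the property that makes it possible to swap the concurrency model out later.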

In a nutshell, Sinatra::Synchrony is just a tool, it should probably not be your first choice, and you shouldn't use it unless you understand the problem domain.

So because of this, I have provided a list of articles to help people understand this (http://kyledrake.net/sinatra-synchrony/#further-reading), and I also provided a section describing how Ruby's current IO subsystem works (it needs to be updated a bit): http://kyledrake.net/sinatra-synchrony/#caveats

The reason I dropped this on Hacker News yesterday was that I was getting sick of people writing about how they were dumping Ruby or Python or something else for Node.js to get "scalability", when in fact not only is blocking IO not a problem in Ruby itself, but you can also use a Reactor pattern in Ruby, just like Node.js does. I thought it was important that people could see how easily you could integrate a reactor pattern without having to completely rewrite your code base.

I think the problem this is highlighting is that people are simply not informed on what's going on here. In other words, it is a documentation failure. This is not helped by the fact that Thin uses EventMachine, which turns off threads (hence killing the internal non-blocking code), and Thin is the default in many places (Sinatra, Heroku, Rack, etc.). Supposedly Thin now has a "threaded" mode which Sinatra 1.3 uses, but I don't know if that is still based on EventMachine or not. The point is, we shouldn't be using EventMachine by default on web servers at all; it should be something you explicitly turn on and understand.

I will be writing a blog post this week to discuss this, since apparently there's a lot of confusion out there. I'm also going to provide our deploy strategy for non-EM web apps. Our main site does not use Sinatra::Synchrony. It uses Rainbows! with a thread pool and it works beautifully; we can handle thousands of hits per second on our site without a problem. Anybody who tells you Ruby doesn't scale is lying to you. Ruby scales incredibly well for a wide range of problem domains.
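The thread-pool model mentioned above can be sketched with Ruby's built-in Queue; this is a toy version of the pattern, not the Rainbows! implementation, and doubling a number stands in for handling a request:

```ruby
# A minimal thread pool: workers pull jobs off a shared, thread-safe Queue.
jobs    = Queue.new
results = Queue.new

workers = 4.times.map do
  Thread.new do
    # Queue#pop blocks until a job arrives; :shutdown tells the worker to exit.
    while (job = jobs.pop) != :shutdown
      results << job * 2   # stand-in for handling a request
    end
  end
end

10.times { |i| jobs << i }      # enqueue "requests"
4.times  { jobs << :shutdown }  # one poison pill per worker
workers.each(&:join)

handled = Array.new(results.size) { results.pop }.sort
# handled == [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Because Queue handles the locking, the workers never need explicit mutexes, which is most of why the model "works beautifully" for blocking-I/O request handling.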

The one thing I want to say, though, is that Sinatra::Synchrony does serve a valuable purpose: it is a power tool designed to give you an option for improving raw performance by doing something that isn't necessarily the best thing to do, but I've occasionally needed that capability, so I wanted to share that capability with the community.

TLDR: Fuck this shit bro get a coors lite out of the fridge and use Node.JS it ROFLscales bro. Fibur is so tight that only a Hacker could have wrote it, but there were some problems I found: https://github.com/tenderlove/fibur/issues



