"Ruby developers need to stop using EventMachine. It's the wrong direction." (slideshare.net)
149 points by caseorganic 1614 days ago | 70 comments

If you really like evented programming, use Node. Everything there is evented. I tried EventMachine and it just felt worse than Node: the documentation is worse, the libraries are worse, it just wasn't a whole lot of fun.

I haven't tried Celluloid yet, but I've learned one lesson building things with PHP, Ruby, Python, Node, Java, Scala, etc. That lesson is to use tools with a community of people around them who are using those tools to solve the same problems you are. The more community there is around a project, the more likely someone else has already stumbled on whatever bug or problem you hit and found a solution, if one exists. Documentation also tends to be better and easier to find for more popular languages and frameworks.

So, if you like busting out CRUD web apps, you should probably look at PHP with Zend/Symfony/etc., Ruby on Rails, Python's Django, or Java's Play. If you like doing single-page JS apps, you should look at Knockout, Backbone, and Ember. If you want to do more evented/parallel networked apps, you should look at Node, Scala, Clojure, Go, and Erlang, because those communities care a lot about threaded, evented, actor-pattern-style programming.

Evented/multicore/multithreaded programming is just not something that, say, the PHP, Ruby, and Python communities have embraced as much, because for the most part it's not a problem the average PHP, Ruby, or Python dev is trying to solve.

Python programmers have been doing evented code for almost a decade using Twisted. EventMachine happened later, but then, so did the mainstreaming of Ruby. You realize, don't you, that Tcl has all three of Javascript, Ruby, and Python beat when it comes to evented I/O?

Node does not own evented I/O.

I agree that Node does not own evented I/O; my point was more that if you care about concurrency, parallelism, and evented I/O, you should work in communities that care about those things and build tools around them. Ruby as a community cares more about Rails than evented I/O. Node is purely evented I/O, so that's basically ALL they care about. I guess Python is probably somewhere in between.

I didn't mean to say node owns evented I/O, just that their whole community embraces it.

Well, actually, the JVM is probably the most potent platform when it comes to async I/O, not only because the standard NIO library and the high-level wrappers like Netty and Mina are very mature, but also because you can compose blocking and non-blocking pieces of functionality in ways you cannot with Node / V8.

Check out Akka, it's awesome: http://akka.io/

There are very few libraries.

Are you trying to say that there are very few libraries for something that runs on the JVM?

There are few async libraries.

That hasn't been my experience.

For event programming I use AnyEvent (Perl), the so-called DBI of event-loop programming.

It's well documented, straightforward to use, robust and works seamlessly on top of many event loops (libev, libevent, Glib, Tk and more).

And I just need to look at the AnyEvent::* namespace on CPAN to see what can be run with it - http://search.cpan.org/search?query=anyevent%3A%3A*&mode... (currently lists 630 modules)

I've looked at Celluloid. It offers a much nicer paradigm than EventMachine. It also provides a nice way to distribute actors over multiple machines (DCell). I strongly agree with this slide deck. Callback muck is maddening.

> Callback muck is maddening.

Use em-synchrony…

I wonder why nobody ever talks about DRb, a distributed object system that ships with Ruby. It's active in Japan, and there has been an effort to make it better known in the rest of the world, with a book dedicated to the topic in English (published by Pragmatic Programmers). Still, it hasn't stirred much interest. Even if it's less advanced than DCell, it's still a recommended way to start learning about distributed computing.

Hello, author of Celluloid here.

I think DRb is pretty cool and has long been underutilized. That said, there are a few fundamental design problems with the way it works that I think DCell solves:

1. Distributed systems really need to be built on top of asynchronous protocols, and DRb is a direct mapping of a synchronous method-dispatch protocol onto a distributed-systems protocol. Similar attempts include CORBA and SOAP. If anyone disagrees with this I can go into more detail, but I think you will find ample distributed-systems literature condemning synchronous protocols. DCell is fully asynchronous (but also provides synchronous calls over an underlying asynchronous protocol).

2. DRb is multithreaded but does not provide the user with any assistance in building multithreaded programs. This becomes particularly confusing when you have to deal with a proxy object (DRb::DRbObject) in a remote scenario but not in a local one.

As an aside: Celluloid is awesome, and you really made concurrent programming in Ruby better by releasing it :)

It's brittle, opaque, and relies on Marshal, an interchange standard nothing else uses. It's so easy to build alternatives to DRb that use better interchanges that nobody uses DRb. And once you do that, you find it's easy to back your system with Redis or a database, which you do because it makes sense, and then you start getting pulled towards message architectures --- which are themselves usually superior to direct invocation of remote methods.

In my previous load testing of DRb, at 86 connections (remote objects) with even moderate use, things consistently went bad. This was independent of the number of machines.

Happy to see informed opinions, thank you. Sounds pretty damning, too. Thomas, are you saying that messaging is always superior to RMI, and that there is no valid use case for RMI? I am thinking out loud here, but isn't a distributed system that leverages RMI quicker to implement? Might be suitable when prototyping. Or teaching a classroom. Does that make sense?

DCell might be more advanced, but it's nowhere near ready for real-life application (1).

1. https://github.com/celluloid/dcell/commit/e3115f284084a78756...

I'm not a Ruby developer, but it looks to me that using drb when there's Celluloid is like using asyncore when there's Twisted, in Python.

As an added bonus, every time I've tried a problem in both, Celluloid has been faster.

Not to judge this presentation up or down; I'm not a Rubyist... but a reflection for myself - you know you're getting old when you see ideas that have come and gone and come and gone come again.

Like rings on a tree, you can judge how old you are by how many times a particular idea has resurfaced after some time in obscurity.

We write a lot of EventMachine code here; I have some questions. He writes:

EventMachine is:
• A frankenstein that guts the Ruby internals
• Not in active development
• Makes non-blocking IO block
• Requires special code from Ruby libraries
• Hard to use in an OOP way
• Really difficult to work with
• Poorly documented

I'm not sure I understand how EventMachine "guts the Ruby internals" (I didn't watch the talk). It's true that EventMachine's internals are C++, not Ruby; there was originally a reason (I think it had to do with green threads) that it was designed this way. I'm not sure I can think of the Ruby functionality that EventMachine changes, or the manner in which EventMachine mucks with the interpreter or its runtime. I'm obviously ready to be corrected, but I'm missing how this impacts me as a programmer. Maybe he's talking about exception handling?

I also wasn't aware EventMachine "wasn't under active development". Because it's just an IO loop. Is libevent in active development? Do I need to be aware of that to use it? The underlying OS capabilities EventMachine maps haven't changed in over a decade. I think I'm actually happy they aren't constantly changing it.

I also don't understand how EventMachine "makes non-blocking block". All EventMachine I/O is nonblocking; it's essentially a select loop.

I also don't understand what special code EventMachine demands from libraries. Maybe he means database libraries? That is, maybe he's referring to the fact that you can't use standard Ruby database libraries that rely on blocking I/O inside an EventMachine loop? I'm wondering, then, what he expected. We wrote a little library (a small part of it is on Github) to do evented Mysql, but we stopped doing that when we realized that Redis evented naturally, and we just hook Mysql up through Redis.

"Hard to use in OOP way" just seems wrong, given that the ~30 evented programs I can find in my codebase directory all seem to be pretty object-oriented. So, that's not so much a question on my part.

Really difficult to work with? I've taught 7 different people EventMachine, in a few hours each. EventMachine is easier than Ruby's native sockets interface, in several specific ways.

I think maybe the issue here isn't so much EventMachine, but the idea of using EventMachine as a substrate for frameworks like Sinatra and Rails. That idea is whack, I agree. Trying to retrofit a full-featured web framework onto an event loop seems like an exercise in futility.

But on the other hand, I've been writing Golang code for the past 2 months, and Golang is militantly anti-event; it doesn't even offer a select primitive! Just alternating read/write on two different sockets seems to demand threads! And what I find is, my programs tend to decompose into handler functions naturally anyways. I try to force myself to write socket code like I did when I was 13, reading a line, parsing it, and writing its response, but that code is brittle and harder to follow than a sane set of handler functions.

So, long story short: I'm not arguing that evented code is the best answer to every problem, or that web frameworks should all be evented, or that actor frameworks aren't useful. It's probably true that a lot of people rushed to event frameworks who shouldn't have done that. But there are problems --- like, backend processing, or proxies, or routers and transformers, or feed processors --- where event loops are the most natural way to express a performant solution.

> I'm not sure I understand how EventMachine "guts the Ruby internals"

EventMachine does not use the Ruby IO primitives (e.g. TCPSocket, UDPSocket, etc) as the basis of its IO abstraction, and instead has reimplemented its own set of primitives for doing IO.

Because of this it can't take advantage of work being done in Ruby core to advance Ruby's socket layer. For this reason IPv6 support languished, among other problems.

This also severely complicates making multiple implementations of the EventMachine API, such as its JRuby backend (which maps onto Java NIO).

I think the real problem is EventMachine's original goal was to be a cross-language I/O backend similar to libevent or libev, but since it wasn't a particularly good one, the only language that wound up using it was Ruby. Compare to Twisted, which is built on libevent, or to Node, which is built on libev/libuv

Author of Packet (https://github.com/gnufied/packet), a reactor library that did use Ruby IO primitives, here. I wrote this before EventMachine had a pure-Ruby reactor. Non-blocking calls were introduced in Ruby 1.8.5 IIRC, and my library made heavy use of them, and they were buggy. Some issues I found:

* select() calls used to segfault.
* Non-blocking read/write behaved very differently under different OSes, and I am not talking Windows; there were differences between Linux and OS X, for example.
* Frequent crashes when reading/writing with the nonblock flag set.

I am sure the situation is a lot better now, but for building a reactor library, native Ruby IO primitives fell short. There was also always a lack of advanced selectors (epoll/kqueue).

Now, as I replied to thomas(?) below, and since you yourself have wrapped libevent for Ruby 1.9: there were severe limitations in the interpreter back then that prevented even a libevent wrapper from working. So I guess, for historical reasons, it made sense that EventMachine did not use native Ruby IO primitives or libevent.

I like libevent and all, but I don't think Twisted's use of libevent is a particularly big win for Twisted.

Again, this is apocryphal, but I remember someone trying to wrap Ruby around libevent and failing; I remember there being a reason this had to be done bespoke. And having fallen into this particular NIH trap many times before: there's just not a whole lot to a simple socket I/O loop.

What's been your experience with JRuby/EventMachine?

For what it's worth, I've done two libev bindings, rev/Cool.io (originally in 2008) and nio4r in 2011

Zed Shaw tried to make libevent work with Ruby, but this was when we had Ruby 1.8.5, and there were problems with the Ruby interpreter that prevented it (around threading, IIRC).

At least as of two years ago, Twisted did not use libevent; it was mostly pure Python except for a very tiny piece of code (and optional extras, of course).

I am sure you could write a libevent-based reactor, but Twisted was started a long time ago (10 years? The Twisted book from O'Reilly was published in 2005), before libevent existed, I think.

No, libevent existed before Twisted; its first release was in 2000, and we used it extensively at Arbor Networks just a couple years later.

Thanks for the correction. Seems that I confused the dates with libev that came after.

> ... Compare to Twisted, which is built on libevent, or to Node, which is built on libev/libuv

Or compared to AnyEvent which works with [m]any event loop - https://metacpan.org/module/AnyEvent

> Golang is militantly anti-event; it doesn't even offer a select primitive

s/anti-event/anti-callback/. They are not the same thing. Under the hood, Golang is doing the eventing and select() for you, while letting you write simple procedural code.

> I try to force myself to write socket code like I did when I was 13, reading a line, parsing it, and writing its response, but that code is brittle and harder to follow than a sane set of handler functions.

How is simple, procedural code hard to follow? Even the way you describe it sounds simple: "read, process, write". Is that really harder to follow than a series of onRead, onWrite callbacks?

> I'm not arguing that evented code is the best answer to every problem

No, but like many others you seem to be confusing and conflating callback-driven code with event-loop-based code. Everyone agrees on the benefits of event loops over kernel threads. Not everyone agrees that callbacks are the best interface to event loops; more and more people are catching on to green threads (which look just like normal threaded code) as the best and simplest interface to the event loop, as seen in Golang, gevent, EventMachine, Coro (Perl), etc.

Real evented code isn't a series of onRead/onWrite callbacks, because real evented code buffers strategically, and abstracts itself into state machines with simple callback interfaces, so that the rest of the program can be written in terms of higher level events.

I don't want to turn this into "events work for every program", because like I said above: I don't think event interfaces work for all kinds of programs. I've found them poorly suited to full-featured web frameworks, for instance.

I did not predict this incredibly boring "evented runtime vs. evented API" controversy, but will dispense with it by saying that it is incredibly boring, so you win it in advance. :|

I don't see the "evented runtime vs. evented API" issue as boring. This kind of thing is the core of any application you are going to build; it's hardly a minor or nitpicky issue.

Using non-blocking APIs requires you to turn your application "inside out", putting state that would normally have been on the stack into a state machine. You wind up in a maze of callbacks and low-level details.

This kind of thing might be appropriate for writing high performance code in C, but it's a mystery to me why anyone would want to do it in a higher-level language.

It forces you to keep your methods short with reasonably appropriate boundaries between them. Good programmers should be doing this anyway, but an event/callback API at least prevents you from writing a 300-line "do everything" method.

> green threads (which look just like normal threaded code) as the best and simplest interface to the event-loop - as seen by Golang, gevent, Eventmachine, Coro (Perl) etc etc.

That list is just screaming for someone to write a comment mentioning that you forgot Erlang.

When I saw the title, I thought it was going to be coverage of how under-the-hood EventMachine's model only allows it to scale up to a certain throughput per-process (which is reasonably high for most uses of it) or how there is some fundamental complexity in how it's written that pretty much guarantees there will be livelock conditions (and deadlock conditions). So I share your bewilderment about why they chose the things to highlight that they did.

The lock conditions it has are for particular workloads at particular throughput/utilization, so 99% of the people using it won't hit those conditions initially (and a lot of people aren't writing systems that will ever reach those limits). However, we have a particular service that acts as a TCP multiplexer/router/load balancer to provide high availability for some of our mission critical applications. A little over a year ago, I initially wrote it using EventMachine in a couple of weeks, then spent almost a month trying to find what I thought was a bug in my code where a full deadlock would occur if 3 or more connections were established in a small enough time frame. Turned out to be a bug in EventMachine. After fixing that one, I found another where if 5 or more open connections fired the same event within a small enough time frame, all 5 would hit livelock and given enough connections hitting the condition, the whole process would deadlock. Once I hit the second bug in EventMachine itself directly related to concurrency handling, I switched to Netty and rewrote the whole thing in Scala in about a week and it's been rock solid since.

I've been doing some development on the side in Go because its particular flavor of types and concurrency model is fascinating. It's not that Go is anti-event, it's that it's overwhelmingly stream oriented (not low-level streams, but data streams). Once I finally hit that moment of clarity that goroutines/channels was all about connecting streams of data and not connecting raw streams, they became a much more natural solution. However, don't get me started on the difference between a non-blocking read on a channel and a blocking read on a channel.

You're threading, I presume? We do high speed high connection rate work in EventMachine (for instance, to work with order entry for trading exchanges) and have never seen anything like this --- but we religiously avoid Ruby threads.

Yeah we are, as it was the least-convoluted method for handling the multiplexing (but wanting to maintain the same handler for all of them). Burned entirely too much time assuming that documented ways to handle things were actually rock solid, not rarely used. The first bug where 3 connections could deadlock everything was unrelated to threading. It was related to how it created identifiers for each connection and would create identical identifiers for two different connections so it would think there was data pending for a socket, but the data had already been read, so it tried to do a blocking read when there wasn't data to read. I never fully tracked down the second bug, but it was more likely related to using threading.

One thing we did have was a large majority of the connections were coming from the same IP (but different source port) and a peak connection rate of ~200 connections per second per server and would last a few seconds (so at any given moment, roughly 1000 connections being established or with data in flight). I'd be really interested in hearing what your traffic patterns are roughly like (and hand-wavy what you have event machine doing, like proxying requests, building/returning its own responses, etc.) if you're willing to share at all.

> Golang is militantly anti-event; it doesn't even offer a select primitive! Just alternating read/write on two different sockets seems to demand threads!

Go only blocks the goroutine. It uses native event primitives (epoll/kqueue/etc.) under the hood so that IO is non-blocking to the rest of the process (e.g. other goroutines). You said you have used Go for 2 months, so I assume you are aware of that.

So..can you be more specific about what you were talking about in that statement?

There are times when you might reach for select() not to scale the whole program, but because for instance you're trying to hot potato data from a Reader to a Writer.

Obviously, the whole of Golang's concurrency model is lightweight threads scheduled on I/O events. But that's not exposed to the programmer; in fact, it's hermetically sealed away from the programmer from what I can tell.

So now you know what I meant by that statement.

Every time you perform I/O, the current goroutine yields to the scheduler.

If you want to manually yield to the scheduler, you call runtime.Gosched() and the calling goroutine yields. If you want to lock a goroutine into a specific thread, you call runtime.LockOSThread(). Everything is single-threaded to start until you call runtime.GOMAXPROCS(), but that's intended to go away in the very near future. So... I'm not sure what you mean by sealed away; the Go runtime does the sensible things that it can do automatically, but you still have the ability to change the scheduling behavior if you really want to.

What do you mean by "doesn't offer a select primitive"? I'm not familiar with your definition of select, because Go has a select keyword for concurrency control, and I'm under the impression you're talking about select as it's defined in EventMachine. Could you elaborate?

I'm referring to the system call. Sorry, I can see how that would cause lots of confusion.

Thanks. I think I understand a bit better now.

Go provides concurrency primitives (scheduler yielding i/o, channels, goroutines, etc) upon which you can build your own "events". I agree (and this seems to be your point) that Go doesn't provide a canned event mechanism that you can just hook into for callbacks on IO.

For example in the Go http server[1] a goroutine accepts in a loop and kicks off goroutines to process and handle new requests.

[1]: http://golang.org/src/pkg/net/http/server.go?s=30200:30241#L...

Sure, I'm using the Go http server.

Take the example of a simple proxy to see where I'm coming from. Sure, I'm happy to have Go manage all the connections and sockets and I'm happy to spawn new goroutines for each connection and all that. But a proxy accepts an inbound connection, makes an outbound connection, and then monitors the outbound and inbound sides for data. The loop to do this with sockets could be a simple two-descriptor read select(2) call. But instead, I have to spawn two more goroutines, one to "monitor" (really, read) from inbound, and one for outbound.

> But instead, I have to spawn two more goroutines, one to "monitor" (really, read) from inbound, and one for outbound.

What does Golang's select/case not do that you want?

Is there some feature of the language or the libraries that I am missing that turns a socket (err, a TCPConn) into a channel that I can read with Golang's select construction? Seriously asking. That would be awesome.

No, I think you would need to wrap the TCPConn in a goroutine that does the Read, handles any errors, then passes the bytes back to the consumer via a channel. At least it would be a very generic and small goroutine to do this.

I agree that would be a nice feature though.

You know what's happening in this thread? I made it sound like I was criticizing Go for needing to spawn goroutines to do this. I'm not! It makes sense, in the context of Go, to do it this way, even though as a C programmer by training that's not my first thought on how to do it.

I'm not criticizing Go for being anti-event; I'm just observing that it is. Idiomatic Go --- like, the code in the standard library --- has a strong bias towards straight-line code.

> I'm not criticizing Go for being anti-event; I'm just observing that it is. Idiomatic Go --- like, the code in the standard library --- has a strong bias towards straight-line code.

I think I understand where you are coming from now - I think you are saying Go is "anti-event-based-callback-driven" rather than "anti-event-loop-implementation" - which is absolutely true. Go's concurrency model is built on CSP (Hoare's Communicating Sequential Processes)[1], which seems to advocate procedural threads rather than callbacks.

[1] http://golang.org/doc/go_faq.html#goroutines

Is there a large overhead for having to use a goroutine rather than the lower level? Do you think it's appropriate for Go to expose that, or do you just miss it?

> Is there a large overhead for having to use a goroutine rather than the lower level?


Would you just select { case <- reader... case writer <- data..} ?

I know this isn't the same as posix select, but it does let you have one goroutine coordinate the hot potato..

That would require a goroutine for each socket to hot potato from the socket to a channel, right?

Yes. This allows you to have the "when any of these socket's state changes" semantics in your controller/dispatcher function. I would like it if they had such an interface in stdlib...

If you really do just need to take data from one Reader and send it to a Writer, you can make a new Pipe.

> I try to force myself to write socket code like I did when I was 13, reading a line, parsing it, and writing its response, but that code is brittle and harder to follow than a sane set of handler functions.

Do like the http library does and separate out socket handling in terms of net.Listeners/net.Conns, and create a handler abstraction for yourself. Your socket code will be testable, because it's trivial to write fake net.Conns backed by byte buffers. Your handler code will be testable because it works in terms of parsed objects.

I tend to write socket code once per server project, and then leave it alone for months, so this isn't a big deal for me. What makes Go nice for me is that I can block in client code, and retain the efficiency of using a select loop.

I don't have access to the source for the proxy I wrote for work, but here's one I whipped up quickly: http://play.golang.org/p/Fz19qSehCg

Go doesn't expose select, and has the runtime do that for you; but this allows them to make all Go libraries share a select loop, which has nice performance characteristics. Although, now that I think about it, it is probably possible to have a userspace implementation of select that works atop the runtime's shared select loop. Hmmm...

I ended up writing my proxy this way. See the back-to-back "go copy" lines? That's what blew my mind. I get why Go works this way, but wow would I ever not write code that way in a threaded C program.

From hard-learned professional experience: EventMachine isn't written very well. When I say that I mean specific things, like "whoever wrote the subprocess handling in EventMachine wrote code that is uniquely wrong on every platform I've ever heard of." If you are not familiar with this (and are curious, in some sort of macabre way), I urge you to grep for SIGCHLD or wait in the source and see what you find.

What EventMachine does (or at least what it did a year or so ago when I was debugging this) is this: subprocesses are equated to popen. When the input side of that pipe closes (that is, when the subprocess closes STDOUT), the process will finally be waited upon; and if it doesn't terminate within a hard-coded timeout (which, by the way, blocks the rest of your program), it will be forcibly gunned down, with SIGKILL if necessary.

Among the problems that arise here: unless your daemon script is written to unconditionally drop STDOUT upon forking (uncommon), if you launch a daemon from within a subprocess you are managing with EventMachine, the subprocess itself will terminate quickly, the daemon will go on its merry way, and your driver program will never, ever tell you it has finished running until that daemon is dead and anything it has spawned that might possibly use STDOUT is also dead. And god forbid it close that stream and then dare to continue running, for EventMachine will shoot it dead within (IIRC) 20 seconds, and lock up your driver program for the duration to boot.

Programmers who understand how the Unix process model works will write a very small signal handler for SIGCHLD that writes a byte on a pipe (or some similar method of notifying the main event loop), call wait on the child immediately, and then close its end of those pipes. I am reliably informed by those who understand the Windows process model that what EventMachine does is even more wrong there. This is a subsystem that was not written by anyone who knew what popen does, could not be bothered to read (or was perhaps incompetent to understand) what any of a dozen standard implementations of it do, and appears to have debugged the code into some form of submission and then released it upon an unsuspecting public.

This is the only colossally wrong decision they made that I can list off the top of my head, but that's because it was so stupid I stopped looking for trouble after that. EventMachine does not handle anything but a very straightforward select loop very well, and I am sufficiently terrified of what lives under the covers in that system that I would rather write the select by hand (massive pain though it may be) than let this system anywhere near it. The thing that really alarms me is that people build walls of cardboard like NeverBlock (which reaches deeply into the guts of the Ruby software I/O and replaces it with EventMachine driven coroutines) atop this foundation of sand and then wonder when it falls over sideways in an impenetrable and impossible to debug fashion.

Coroutine programming (for that is, essentially, what we are talking about) can be a very elegant way to solve certain problems, but it works best when it is simple, or at least localized (e.g. samefringe). In an event-driven server, every little piece must be audited carefully to ensure that it does not block. You get all the same problems any preemptive concurrency model has, with some added nastiness; in exchange you get somewhat better scalability numbers. It is at its best in a fairly simple program such as, say, nginx in its proxy configuration, where it speaks streams and SSL and talks to some application server on the other end of a different stream for anything sophisticated.

Nobody uses EventMachine to manage daemon processes. Subprocesses in EventMachine aren't "equated to popen"; they exist for the sole purpose of doing evented I/O popen-style. I'd be careful about calling a developer "incompetent" because they write something that doesn't admit to arbitrary use cases.

I'm being elliptical about the why here because I don't think I can talk about the internal architecture of that system in public, but warning people off one particularly stupid third-party bug that we fixed in our internal fork is not, I believe, a problem. Anyway, we certainly did use it to manage daemon processes, although not deliberately; we had a daemon that communicated with external software about system events, and running shell scripts was part of that. We didn't necessarily anticipate folks running 'service httpd start' in those shell scripts, but it was not an inherently unreasonable thing to do.

And this isn't "arbitrary use cases"; this is an explicitly supported function that is completely contrary to good practice and sane behavior and, to boot, has the ability to arbitrarily kill programs for impenetrable reasons and block for significant periods of time (the central sin of event-driven programming). You can't tell me that if you saw something like this in a random crypto library you wouldn't immediately tell everybody to stop using it; why should EM's developers get a pass for their, yes, incompetently written popen? I would actually be considerably happier if it wasn't in the library at all; at least then it wouldn't be wrong.
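
For contrast, here's roughly what a well-behaved evented popen amounts to in stdlib Ruby — select on the child's pipe for readability instead of issuing blocking reads. A hedged sketch of the pattern, not EM's actual implementation:

```ruby
# Evented-style popen sketch: spawn a child process, then drain its
# stdout only when IO.select says data is ready, so the caller's
# loop is never stuck in a blocking read.
io = IO.popen(["echo", "hello"])
output = +""
loop do
  ready, = IO.select([io], nil, nil, 5)
  break unless ready           # timeout: bail out rather than hang
  begin
    output << io.read_nonblock(4096)
  rescue EOFError
    break                      # child closed its end; we're done
  end
end
io.close
puts output
```

Nothing here reaps or kills anything behind the caller's back, and nothing blocks longer than the select timeout — which is the whole point of doing popen inside an event loop.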

Do you have other examples of how badly constructed EventMachine is, or is it just that you can't use their process I/O stuff as a daemon manager?

I was using Adam Langley's net/ssl code in Golang to build an HTTPS proxy, and only after several hours of hair-pulling did I discover that Langley hadn't implemented the compat SSL2 handshake that Firefox uses with proxies. net/ssl in Go was, for no good reason other than an omission, unsuited for use as an HTTPS proxy. Should I say net/ssl was incompetently written? That seems like a bad idea to me.

> But there are problems --- like, backend processing, or proxies, or routers and transformers, or feed processors --- where event loops are the most natural way to express a performant solution.

Your backend processing apparently fits in one machine, since you "hook Mysql up through Redis." I'm personally astonished you get so much done without a distributed environment tolerant of the relevant failures I see when I read that sentence.

After using EventMachine in heavy production at Minefold for the last year and a half, we're finally ditching it for Go. Our experience has been similar to the OP's: random exception gobbling, internal threading issues, and steadily worsening performance and code maintainability.

We've been happily using Go for a few internal systems and are slowly migrating most of our backend to it too. Life is happier.

I just replaced an EventMachine-based project with Celluloid::IO - it was a pretty simple replacement, although I had some issues with concurrent connections (https://github.com/celluloid/reel/issues/11).

I have yet to test it in production, but it looks pretty promising.

I had a lot of problems with EM blocking network connections if an event loop was tying up CPU, but I suppose that's to be expected.

Well, when I hear about Actors I reach for my gun.

I knew a guy who chose Scala for a project so he could use Actors for concurrency.

The system never gave the same answers twice and wouldn't peg all the cores on a 4-way machine.

I spent two days trying to fix it, then I got wise and switched back to Java and got it working in 20 minutes with ExecutorService with (i) no race conditions, and (ii) nearly perfect scaling up to eight cores.

1999 called and it wants its DCOM back.

There's a fork out there called EventMachine-LE, which is currently in active development for latest features (i.e. IPv6). https://github.com/ibc/EventMachine-LE

Anyway, just wanted to throw it out there that I've deployed several production apps (gaming/messaging servers) using EventMachine, and combined with em-synchrony, I'm pretty comfortable with its performance and limitations. There seems to be good community support (thanks to Ilya Grigorik's articles and em-synchrony) with plenty of examples/documentation.

Just wondering, are there any people out there with Celluloid apps currently in production? Like any other geek, I love programming "the right way," but I know nothing about Celluloid and its real-world benefits/drawbacks (performance, API, code maintainability, code documentation, community support, blog articles, etc.).

Sidekiq is probably the best example of Celluloid in action.

Correct, and for those interested in approaching Celluloid, here is a post that demonstrates a concurrent "Hello, world" program: http://danielsz.posterous.com/167870244
