In May of this year, this gem had a huge impact on one of Airbnb’s Ruby services. Last year, when I was still at Airbnb, I did the research and foundational work to make Airbnb’s HTTP client gem compatible with the Async gem and the Falcon server. I’m no longer there, but this year one of my friends who’s still at Airbnb put that work to use on the Ruby service that talks to a Genesys API.
Most Genesys API calls have reasonable response times, but there are these occasional, weird, unpredictable 30-second response times. The slow responses can come in bursts, where you can get a bunch of them at once. This makes it extremely hard to predictably scale the Ruby service that calls it, because in normal synchronous mode, every thread that calls Genesys is blocked and no longer available to take more client requests. You could scale up the service to assume these 30-second response times will happen all the time, but then you’ve got huge server costs just for the 1% case.
But using Async and Falcon (an Async-compatible HTTP server), Airbnb was able to safely scale down the service by around 85% and turn on autoscaling, and it’s working beautifully.
I would love to learn a little more here. So for 30-second tail-latency cases, doesn’t the Ruby HTTP client support a timeout with cancellation? The only useful case where I believe Async Ruby can help is the parallel requests case.
Disclaimer: I am not a full time Ruby person, so if it sounds naive please feel free to point it out.
Yes, Ruby’s HTTP client supports timeouts. As I mentioned, we weren’t using Ruby’s HTTP client; we were using Airbnb’s HTTP client, which has some additional capabilities. But it also supports timeouts. The issue here was that I don’t think it would have helped to time out with cancellation and then try again (if that’s what you’re suggesting). Genesys API calls weren’t failing, they were just returning very slowly. A retry wasn’t necessarily likely to return any faster, since the slow responses tended to happen in bursts, presumably based on something weird happening on their side. Meanwhile, phone calls continued coming in to Airbnb’s support center (this service handles incoming calls), and we needed to continue routing those to Genesys as well.
You mentioned “The only useful case where I believe Async Ruby can help is the parallel requests case.” That’s what was happening here: because of the tail latency, we sometimes needed to have a lot more parallel requests.
How does Falcon come into the picture here? Is this structured like a microservice between the rest of Airbnb and Genesys, so that whatever needs to interact with Genesys interacts with this service instead and gets consistent latency?
The Airbnb service’s API is implemented over HTTPS. Falcon is built on top of Async. As its README says, “Each request is executed within a lightweight fiber and can block on up-stream requests without stalling the entire server process.” So whenever an API request comes into Airbnb’s service, Falcon creates a fiber to handle it in-process. If that fiber makes an API call to Genesys, control returns immediately to Falcon in the main fiber’s event loop to be able to handle more requests while the request fiber is awaiting Genesys’ response.
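To make that concrete, here’s a minimal sketch of such a service as a Rack app served by Falcon (the upstream URL is illustrative):

# config.ru -- run with `falcon serve`
require 'net/http'

run ->(env) {
  # Under Falcon, this blocking call suspends only this request's fiber;
  # the reactor keeps accepting and serving other requests in the meantime.
  body = Net::HTTP.get(URI("https://api.example.com/slow"))
  [200, {"content-type" => "text/plain"}, [body]]
}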
I don’t know the details of where this service sits in the service diagram. I think it may actually be something Genesys calls, and it in turn sometimes calls back to Genesys. No, the point of this service is not to get consistent latency on Genesys API calls.
Ruby’s built-in timeout library creates another thread with a sleep call. If something is blocking on a socket in the C code, the timeout doesn’t work.
Is it true that a timeout won't work on something blocking on a socket in C code? As I understand it, normally a thread blocking on IO -- even in C code -- does not prevent context switching to another thread, such as the timeout thread.
But there are other reasons to avoid the stdlib timeout library, it's true; and there are other ways to time out on a network call, too - network-call APIs normally offer their own timeout arguments.
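For example, Net::HTTP in the stdlib exposes its own timeouts, which raise Net::OpenTimeout / Net::ReadTimeout rather than relying on the Timeout thread trick. A quick sketch (host and path are illustrative):

require 'net/http'

http = Net::HTTP.new('api.example.com', 443)
http.use_ssl = true
http.open_timeout = 2 # seconds allowed to establish the connection
http.read_timeout = 5 # seconds allowed between reads on the socket
response = http.get('/slow') # raises Net::ReadTimeout if the server stalls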
> any blocking operation (a method where Ruby interpreter waits) is compatible with Async and will work asynchronously within Async code block with Ruby 3.0 and later.
That's pretty magical. Does this mean it would be possible to implement structured concurrency [0], but without the function coloring problem?
Regardless, I think I prefer how this looks from a code read/writeability perspective compared to Zig's or Swift's (and potentially future Rust's [2]) approaches.
It's obvious from the examples provided in the post. For example, you can use the method `URI.open` both synchronously and asynchronously.
It's something I didn't want to mention in the article because it's a relatively advanced async concept, but yea - Async Ruby is already colorless and it's great.
That's right - Async Ruby uses fibers (stackful coroutines) as a concurrency primitive. There's a lot to say about this, but the end result is that we get to write simple synchronous code and it can run asynchronously.
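A quick sketch of what that colorlessness looks like in practice (URL illustrative):

require 'async'
require 'open-uri'

# Called normally, this blocks the calling thread, as always.
page = URI.open("https://example.com").read

# The exact same call inside an Async block suspends only this fiber
# while waiting on the network, freeing the event loop for other work.
Async do
  page = URI.open("https://example.com").read
end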
I find it deeply irritating that this is how we choose to describe things these days, rather than talking about modality, or even using a few more words to describe the actual differences in terms of the underlying programming language models.
Telling me that Ruby's async is colorless implies that I know about the description of programming models as "colors" of code, which was just some random analogy someone came up with which caught on. Nothing about this has anything to do with color. Even syntax highlighting doesn't really fit. This is akin to particle physicists getting cheeky and calling one of the quantum states "color" when it has nothing to do with wavelength (correct me if I'm wrong).
This is an appeal to stop trying to be so cute. Thank you.
What do musical scales have to do with it? (Modality :P)
"coloring" goes back before a few blog posts, it was probably popularized with programmers that keep up with crypto, as 'colored coins' are described in an influential paper right around the creation of Ethereum (Vitalik is one of the authors)[0], but I found some earlier references too, like coloring a complex plot to add a dimension [1], so carrying it over to computer programming, you have functions that have otherwise identical signature that you need to distinguish, so you make them different colors.
Thanks! Didn’t know about Petri nets; makes me wonder if this all didn’t start with Red/Black binary trees, so named because those were the two colors of ink they had on hand, cited 1978:
In "colorless" async/await, async functions are implicit (no async keyword), and when called, the compiler auto-unwraps the promise, so everything looks like "sync" code but any function stack could "save and resume" at an "async point".
As someone who operates mainly in Python, I am so jealous. As far as I'm aware [1], you have to re-write your tooling to take advantage of async in Python. Does anyone have any insight into why Python async doesn't work the same way? Does it come down to fundamental language differences?
> Does it come down to fundamental language differences?
No, I don't think there's anything fundamentally different.
Ruby 3.0 implements a "fiber scheduler" feature that enables "colorless Async". Fiber scheduler is an obscure Ruby feature, but the end result is brilliant. It was also a huge amount of work.
Side note: Fiber scheduler was implemented by the same guy who created Async Ruby - Samuel Williams. This guy is the mastermind (and master-coder) behind this project.
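For the curious, the core hook looks roughly like this; `MyScheduler` is a hypothetical stand-in for the scheduler object (the Async gem installs its own for you inside Async blocks):

# Ruby 3.0+: install a fiber scheduler on the current thread.
# MyScheduler is hypothetical; it must implement hooks like
# io_wait, kernel_sleep, block and unblock.
Fiber.set_scheduler(MyScheduler.new)

# Blocking operations inside non-blocking fibers are now routed
# through the scheduler instead of stalling the whole thread:
Fiber.schedule do
  sleep 1 # dispatched to the scheduler's kernel_sleep hook
end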
> Does anyone have any insight into why Python async doesn't work the same way?
Ruby async seems to be implemented using stackful coroutines. IIRC Guido has been opposed to adding these to core Python, preferring stackless coroutines because they require every yield point to be explicit (i.e. marked with `await`).
There are libraries for Python that support stackful coroutines, such as gevent.
I'm optimistic that this will be less painful as more of the Python ecosystem becomes async-friendly. We have `aiohttp` as a suitable replacement for `requests`, and major libraries like Django (not the ORM yet), Flask, FastAPI, and SQLAlchemy now have async support as well.
It has already gotten less painful as time goes on. However, in my experience so far, if I need e.g. Django to use Google Cloud Storage for media storage, I need to write my own shims that marry an async alternative to Google's Python SDK (and I have no idea if there are any plans for async support there) with the third-party Django packages that I'd normally use. The experience isn't terrible, but I end up spending a lot of time trying to make consistent calling conventions.
And I think I've come to the conclusion that any re-usable code I write for IO should just be written async with magic sync wrappers for use in sync contexts.
I'm actually not 100% sure how far the compatibility goes. When I test the Async gem with httparty and open-uri, there is no speed-up compared with threads:
I stopped using python long ago, out of frustration with the breakage in the ecosystem: first due to the 2/3 transition, and then from splitting the ecosystem again with all the async stuff they introduced, when going the way of gevent instead would have ended up in something like this.
The company I was at at the time tried really hard to move to Python 3. It was a huge codebase with thousands of engineers. After more than a year of work they cancelled the project, said they'd never upgrade to Python 3, and instead started writing smaller "microservices" around the core product just to calm engineers' anxiety to use Python 3. Now Django is suffering kind of the same problem, with all the async stuff being bolted on, adding a lot of gotchas and side effects and incompatibilities around every corner. It is a mess.
I left that company and I will never use python again.
I love how Ruby seems to care about the ecosystem, and both Ruby and Rails keep innovating and providing awesome full-stack solutions.
Whether or not a software project succeeds is substantially influenced by politics.
When I read your post, a strong anti-Python-3 feeling comes through.
It may well be that it’s impossible to upgrade such a company/code base when the engineers have such a negative attitude.
The barrier was perhaps not technical but about the attitude of the people.
There were lots of Python 2 developers who were rabidly anti Python 3. Some of the most well-known Python developers were aggressively, very publicly anti Python 3. Imagine a Python 3 upgrade project with that attitude prevailing. No chance of success.
Today Python 3 is more popular than ever before and is arguably the most popular language in the world after JavaScript. Lots of people love Python 3.
I think it's kind of disingenuous to make such statements. If you see how Google is still unable to move from Python 2 to Python 3, you'll know it's a technical problem [1]. And remember, Chromium is a project with a huge number of employees paid very high remuneration. So the barrier is mostly technical. I think such an attitude comes from frustration, which gets compounded over time.
I personally think Python 3, Ruby, etc. are way better than JS in many cases, but I think the world is a bit unfair here due to the browser-language monopoly.
Yes. The 2->3 change was not engineered to make migration easy, and as a maintainer of a large project that people have built on top of, this is something that both current and potential users will judge you on.
Not speaking from a place of scorn; I and my team have made this same mistake, and we lost both users and momentum. Big learning experience.
There may be some huge piece I'm missing, but how exactly is the "starting multiple async tasks wrapped in blocks and waiting for them to finish at the end of the main Async block" different from "starting multiple threads wrapped in blocks and manually collecting them at some point"? I thought Ruby did release the GIL when a thread is blocked (waiting for IO etc).
The difference is just that fiber overhead is lower so you can run more fibers than threads on a given system. Even though fibers have been around a while, people rarely used them because they are cooperative rather than preemptive, so you had to manually write the scheduling logic. Much easier to just use threads.
I think the big breakthrough for Ruby Async is that fiber scheduler in Ruby 3.0 now makes it possible for the runtime to manage fibers in a less manual way, so you now get the lightweight option more easily. The Async gem seems to be wrapping all that up in a very nice interface so you can write simple code and get good concurrency without much effort.
- Async Ruby is much more performant than threads. There are fewer context switches, enabled by the event reactor. The performance benefits are visible in simple scenarios like making a thousand HTTP requests (see the sketch after this list).
- Async is more scalable and can handle millions of concurrent tasks (like HTTP connections). There can only be a couple thousand threads at the same time.
- Threads are really hard to work with - race conditions everywhere. Async doesn't have this.
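Here's a sketch of that thousand-requests scenario (URL illustrative):

require 'async'
require 'net/http'

Async do |task|
  1000.times do |i|
    task.async do
      # Each request runs in its own lightweight fiber; waiting on the
      # network suspends just that fiber, not the whole process.
      Net::HTTP.get(URI("https://example.com/items/#{i}"))
    end
  end
end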
> - Threads are really hard to work with - race conditions everywhere. Async doesn't have this.
Having worked a lot with various flavours of async (I was one of the many people in the loop for the design of DOM Promise and JavaScript async), I regret that, while many developers believe it, *this is generally false*.
In languages with a GIL or run-to-completion semantics, of course, you get some degree of atomicity, which is a nice property. However, regardless of the language, once you have async, you have race conditions and reentrancy issues, often without the benefit of standard tools (e.g. Mutex, RwLock) to solve them [1].
Ruby's async syntax and semantics look neat, and I'm happy to see this feature, but as far as I can tell from the examples in the OP, they're going to have these exact same issues.
[1] Rust is kind of an exception, as its type system already forces you to either &mut/Mutex/RwLock/... anything that could suffer from data race conditions (or mark stuff as unsafe), even in async code. But even that is because of the race conditions and reentrancy issues mentioned above.
> I regret that this is generally false. Once you have async, you have race conditions and reentrancy issues
Can you get more specific please?
My experience is 100% from Ruby where I've worked heavily with threads in the past, and with Async Ruby for the past 18 months.
From what I can tell, threads require a great deal of locks (mutexes) and are frustratingly hard to get right - because of the language-level race conditions.
Async Ruby has been refreshingly easy to write. I have yet to encounter an example of a race condition.
If it helps to bridge the language gap here: from what I know Async Ruby is similar to Go's goroutine model.
Sure, consider two tasks sharing a global:

Task 1:

  global.a = 1
  await async {} // Await a block that does nothing, essentially yielding back to the reactor.
  print(global.a) // If you're unlucky, global.a has changed.

Task 2:

  global.a = 2
  await async {} // Await a block that does nothing, essentially yielding back to the reactor.
  print(global.a) // If you're unlucky, global.a has changed.

Now enqueue both tasks.
Depending on scheduling, you could end up printing (1, 1), (1, 2), (2, 1) or (2, 2). This can become much worse if you're awaiting in the middle of filling a data structure.
Feel free to replace `await async {}` with querying your database or a remote API and `global.a` with any variable or data structure that can be shared between the tasks.
This example is, of course, a toy example, but I've had to debug sophisticated code that broke non-deterministically because of such behaviors. The source code of the front-end of Firefox (written in JavaScript) is full of hand-rolled kinda-Mutex implementations to avoid these race conditions.
Thank you for the clarification. You are right, these types of race conditions are possible with Async Ruby.
I find these races relatively easy to spot: yes, global state CAN change when you yield back to the reactor.
IMO thread race conditions are much, much worse. Global state can change AT ANY POINT, because the thread scheduler preemptively switches threads. Here's an example:
global.a = 0

thread {
  global.a = 1
  print(global.a) // a is 1 or 2
}

thread {
  global.a = 2
  print(global.a) // a is 1 or 2
}

print(global.a) // a is 0, 1 or 2
> I find these races relatively easy to spot: yes, global state CAN change when you yield back to the reactor.
It's good that you can find them easily. In my experience, these changes can creep into your code stealthily and only hit you from behind, months later, when the winds shift.
Some of my traumas include:
- global state that ends up captured by a closure while you didn't realize it was (perhaps because the code you wrote and the code the other dev wrote accidentally shared state);
- mutating hashtables/dictionaries while walking them, without anything in your code looking suspicious;
- and of course accidentally mixing concurrency between two reactors, but luckily, that's not something that happens every day.
> IMO thread race conditions are much, much worse.
IMO, thread race conditions are a case of "it's bad but you knew what you'd get when you signed up for it", while async race conditions are a case of "don't worry, everything is going to be fine (terms and conditions apply, please contact your lawyer if you need clarifications)".
This isn't really an issue with threads though, the exact same issue is present in green thread/fiber implementations; it just so happens that in Async Ruby the GIL saves you from this specific problem due to making variable accesses atomic (as I understand it anyway, I'm not super familiar with the Ruby VM).
In general, green threads/fibers are vulnerable to the exact same shared memory issues as threads, the only benefit to them is that they are, for a certain class of problems, a more efficient concurrency primitive than threads by avoiding context switches out of userspace, and in many cases provide you the ability to plug in your own scheduler if you so desire allowing you to optimize scheduling for your own workload.
If you remove the sleep there's no race, because the fiber scheduler never hits a blocking call anywhere; but the sleep stands in for some kind of I/O. And then there's the terrible global variable.
(Conversely, this does show how badly you have to write code to get race conditions with async. The sleep there is important to generating the race, which would not be necessary with threads.)
And I think if you remove the sleep it doesn't race, due to the GIL and the single-threadedness of Ruby for pure userspace computations?
You avoid race conditions from writing to the same variable, but you still can't avoid race conditions where the critical section is longer than one statement.
e.g. in JS:
foo = await fetch()
bar = await fetch()
doSomething(foo, bar)
You can't be certain foo isn't modified while the second fetch is executing (assuming foo is globally scoped)
There’s still an important difference in that you’re yielding control explicitly by using await, instead of being preemptable everywhere. That’s good enough in most cases.
The problem with function coloring is that it tends to proliferate throughout APIs (since you need to be async to call async), so as things evolve you find yourself needing to await the majority of function calls… this makes it kind of tough to avoid awaiting during critical paths.
That said I still agree that it’s easier to manage than Threads, but something tells me we’re still going to want structured-concurrency versions of the common primitives like rw locks and mutexes, even in async environments.
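Async already ships some of those primitives. For instance, a fiber-aware semaphore, which acts as a mutex when the limit is 1 (a sketch):

require 'async'
require 'async/semaphore'

Async do
  # At most two of these tasks run their bodies at any one time.
  semaphore = Async::Semaphore.new(2)

  5.times do |i|
    semaphore.async do
      sleep 1
      puts "job #{i} done"
    end
  end
end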
In any language, avoiding race conditions or unexpected value changes without dedicated primitives or special conditions is largely contingent on preventing thoughtless non-read usage of "shared" memory.
The problem is that it's often relatively easy to make unsafe writes without realizing, especially since the guarantees of "safe" primitives can be misunderstood. And of course many people don't realize that async code can have race conditions because they don't really understand the details of how async even works, or that some languages make extra guarantees that they're unknowingly depending on.
Having candidates explain the difference between parallel and asynchronous has been a relatively effective first level interview screener for me, especially with more junior roles.
Goroutines are just (lightweight) threads - they can run in parallel. You need locks whenever you access shared data.
AFAIK Ruby’s fibers are cooperatively scheduled and can only yield at function calls. Code that doesn’t make a function call (e.g. incrementing an integer) is safe to run on multiple fibers without locks.
For comparison, Python’s stackless coroutines can safely do anything except `await` without requiring locks.
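To illustrate the Ruby side of that (a sketch): with cooperative fibers there's no blocking call between the read and the write of the counter, so the scheduler never switches fibers mid-increment.

require 'async'

counter = 0

Async do |task|
  1000.times do
    task.async do
      counter += 1 # no yield point inside, so no interleaving
    end
  end
end

puts counter # => 1000, deterministically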
1. Check for existence of row by a key in a Postgres table
2. If not present, create one
You can have a race condition at step 2. You could do the insertion itself as the check to avoid this issue. But regardless, this is a race condition that you need to think about in async environments.
True. This is why database libraries/ORMs can be dangerous - the "normal" DB way is to wrap steps 1 and 2 in a transaction. Unfortunately, while transactions are often available, many DB interfaces lure the unwary programmer away from using them, because reading/writing the DB "looks like" reading/writing a variable/object.
The same thing could happen with any other form of concurrency. In this case you could wrap the two statements in a transaction with the appropriate isolation level. Not sure what it is called — read consistent? Snapshot?
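In ActiveRecord terms, two sketches of closing that race (model and column names are illustrative):

# Option 1: let the insert itself be the check, backed by a unique index
# (create_or_find_by! is Rails 6+).
User.create_or_find_by!(email: email)

# Option 2: wrap check-then-insert in a transaction with an isolation
# level strict enough that concurrent tasks can't interleave the steps.
User.transaction(isolation: :serializable) do
  User.find_or_create_by!(email: email)
end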
> There can only be a couple thousand threads at the same time.
You can easily have tens of thousands of threads on Linux. Beyond 50 000 or so you may need to adjust some settings using `sysctl`, but after that you should be able to push things much further.
Task counts themselves are also a pretty useless metric. Sure, you can have many fibers doing nothing. But once they start doing something, that may no longer be the case (this of course depends on the workload).
> Threads are really hard to work with - race conditions everywhere. Async doesn't have this.
You can still have race conditions in async code, as race conditions aren't limited to just parallel operations.
> Task counts themselves are also a pretty useless metric. Sure, you can have many fibers doing nothing.
Sorry if I wasn't explicit. I was talking about tasks/fibers performing actual work, like handling HTTP connections.
It's practically possible for Async Ruby programs to do work with hundreds of thousands of Async tasks (concurrent fibers).
Some users have worked with millions of Async tasks, but I'm not sure if it was practical work, or proof of concept.
> You can easily have tens of thousands of threads on Linux.
Thank you for the correction about threads on Linux. I'm not sure if you're talking about threads in general or threads in Ruby?
I've only lightly tested increasing the number of threads in Ruby to about a thousand, and my test code was very slow.
I think that has to do with thread switching overhead. Threads have a relatively high switching overhead, so I don't think it's advisable to run more than 100 threads - in Ruby at least.
> I think that has to do with thread switching overhead. Threads have a relatively high switching overhead, so I don't think it's advisable to run more than 100 threads - in Ruby at least.
If you run many threads, they all contend for the GIL in order to advance through the bytecode VM.
While true, the same would apply to code using Async blocks. Async does not attempt (or intend) to enable parallel execution of VM byte code.
If the parent commenter saw a speed up in their tests, it’s reasonable to assume that is because of the difference in overhead of switching threads vs fibers.
> race conditions everywhere. Async doesn't have this
What does Async (Fibers underneath) do differently than normal Threads? Using threads to handle concurrent work doesn't immediately bring race conditions, unless the programmer explicitly creates them (accessing the same stuff from different threads).
Fibers themselves AFAIK don't stop you from accessing the same stuff, aside from the obvious fact that fiber code runs "atomically" until it hits the next yield (which non-blocking Fibers take away anyway).
> What does Async (Fibers underneath) do differently than normal Threads?
There's a lot to say on this topic.
- Threads implement "preemptive scheduling". A scheduler switches control from one thread to another every 10ms or so. A thread running the code may be ready for the switch or not. The ensuing race conditions are nasty.
- Async + Fibers implement "cooperative scheduling". The currently running fiber (voluntarily) yields control to another fiber when it's ready. The result is there are no race conditions.
There's so much to say about this, I'll blog about this in the future.
> The currently running fiber (voluntarily) yields control to another fiber when it's ready. The result is there are no race conditions.
This is generally untrue, as it assumes the developer knows the ramifications of which methods hit scheduling points and understands what state the global set of potentially pending work might modify while suspended.
You might not have to focus so much about concurrent threads modifying the same memory simultaneously, but you absolutely could have the value change unpredictably during a write-sleep-read.
Note there are environments which try to make it more obvious which methods might result in a suspension, and have the developer acknowledge that so that the code remains understandable/maintainable. The keywords used for this are typically 'async' and 'await'.
Yep… this is why the “thank goodness these functions aren’t colored!” people confuse me. Colored functions are a very good thing, they make it explicit where context switches happen and make understanding async interactions easy.
People who don’t like “colored functions” IMO are similar to folks who don’t like types. They want to be able to change something to be async without a compiler yelling at them to go through all of its call stacks and ensure they can handle the asynchrony, similar to changing a function to sometimes return “null” and not wanting a compiler to make them verify that all calling code can handle a null.
That being said, I do wish all functions could be colored. The most painful async migrations are when calling code happens in a constructor, which in JS cannot be made async.
There’s value in being able to write code without the ceremony of types or declarations or compulsory exception declarations and the like, just like there’s value in having tools like a REPL.
The problem comes in that it takes a lot of discipline and understanding of both your code and dependencies to successfully manage projects without those safety rails. They are also extraordinarily difficult to add-on later.
I’ve written small reverse proxies in say Node.js which saved me days of time over doing so in something like Rust. I’ve also hit errors in Node.js code which have made me want to give up on technology and live in a cave.
Yes, the sweet spot IMO is gradually typed languages. I love how I can have TS code that 100% does not type check and the compiler will yell at me all it wants but still produce the compiled JS without a problem. It's liberating to say "yes, thanks for letting me know that at the moment this will absolutely not work in X edge case or when called from Y context with Z data, but I don't care about that right now just let me run it as-is to make sure my general approach is correct".
Even better, in large projects when there is a constant drift of dependencies, I often will pull in the latest main for dev work and see that certain modules aren't found or have been updated and are now being called in a way that isn't compatible with my version. I can look at those errors and decide if the relevant changes will impact my area, if so I go through the full rebuild, otherwise I just ignore them and let TS yell at me.
I guess the overall theme is that I like it when the compiler doesn't think it's smarter than me. Compilers that say "no, you can not build this, you must resolve these issues before I will let you proceed" are much more painful to work with than ones that say "hey watch out, this particular area will probably not work as you expect. Feel free to try out the build, but you really ought to fix that before committing"
Edit: looking back on this and my original comment, I see how they're somewhat in opposition! It would seem I like the compiler to forbid me from shooting myself in the foot with concurrency, but not from shooting my self in the foot with types/dependencies.
Cooperative scheduling means that every function call is potentially a scheduling point, so it isn't really a significant change from threads. Sure, if you are careful and only call functions that are not guaranteed to preempt then you are safe, but critical sections make this explicit.
And if you are running code on multiple cores then lack of preemption doesn't help anyway.
I assume that unless you have a GIL, most operations on collections are not thread-safe, but they are async-safe, since context switching can only happen where you await.
Please, someone, correct me if I've misunderstood.
The big difference appears to be that async Ruby does not merely give you an easy sugar to perform the sync-over-async antipattern you have described. The real innovation is that, as far as the user is concerned, Ruby is magically turning blocking methods into non-blocking ones.
That's basically how I'm thinking of things as well. To illustrate a bit further, consider the following:
Given a blocking method call `foo(x)`, I can make it non-blocking by wrapping it in a "thunk" as `λx.foo(x)`.
Where things start to get interesting is when I add another method call `foo(x) + bar(x)`. Now to keep things "async" I need to transform the abstraction into something more like `λx.foo(x) + λx.bar(x)`, and have the `+` call dispatch both fibers and wait for them before performing its operation.
Doing this automatically seems pretty cool, I'll have to think about this a bit more sometime.
The difference only shows itself in the real world, when you do a bit more per thread/coroutine and end up mutating shared state. This is where threads can lead to race conditions, whereas coroutines will not (unless you basically ask for it).
So the Async gem allows the programmer to fire "tasks" and wait for them to finish (the same way we can fire threads). But instead of OS-level threads (which is what MRI uses for Threads), it uses a new kind of Fiber, called "non-blocking Fibers". Like normal Fibers they're lightweight and don't use OS threads, but unlike normal Fibers they yield automatically to the scheduler when blocked (sort of like threads).
Is this a correct-ish way to describe the current state of affairs?
Threads use quite a bit more memory than coroutines. Spawning e.g. 3 threads for each request, times 1000 requests per second, would probably eat a ton of memory.
Stackful coroutines (often used to implement colorless async) are the same, except you can specify the stack size. You can specify thread stack sizes yourself as well, though no one does that.
OTOH, growable stack is useful, as Go demonstrated.
That seems mostly like the question of "Does this HTTP framework support HTTP pipelining". While I don't know the answer, it doesn't seem highly relevant. Most clients went away from using pipelining, since follow-up requests on the same connection are subject to unknown latency (stuck behind the first request) and a connection failure can impact all of those requests.
The better approach is to use either more connections, or proper request multiplexing via HTTP/2 or /3. In the latter case a server framework would just see multiple request invocations in parallel.
I just tried this out: Falcon with count=1 and Sinatra. Worked perfectly. It seems every request is processed in an async block, so literally no special code is required. A request waiting on the network will allow others to go through.
It does, and that’s specifically why I’m really looking forward to benchmarks for rack frameworks switching to falcon - while it may not take Rails to Phoenix’s performance and latency, I bet it would close the gap considerably.
Just tried it out. Async blocks evaluate to Task objects, which have a wait method and a result attribute that evaluates to the value of the block.
require 'async'

res = Async do |task|
  name_task = task.async do
    sleep 2
    "Jenny"
  end

  task.async do
    sleep 5
    9
  end

  "Hello #{name_task.wait}"
end

puts res.wait # => "Hello Jenny" after 5 seconds.
After using Python's and JS's async implementations, this seems beautiful by comparison. Here's a rough Python equivalent:
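(A rough sketch with asyncio; asyncio needs an enclosing coroutine and explicit task objects.)

import asyncio

async def main():
    async def get_name():
        await asyncio.sleep(2)
        return "Jenny"

    async def get_number():
        await asyncio.sleep(5)
        return 9

    name_task = asyncio.create_task(get_name())
    number_task = asyncio.create_task(get_number())

    greeting = f"Hello {await name_task}"
    await number_task  # keep waiting so all tasks finish, as in the Ruby version
    return greeting

print(asyncio.run(main()))  # => "Hello Jenny" after 5 seconds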
Which means that you can just write a whole normal Ruby program that just uses Async::Task.current.async(...) wherever it likes to schedule subtasks (sort of like calling spawn/3 in Erlang), and then treat them as regular futures, even returning the future out of the current lexical scope without thunking it; and then have exactly one Async block at the toplevel that kicks off your main driver logic and then calls #wait on it. All without having to pass the current task down the call stack everywhere.
(And if you want to schedule a bunch of stuff to happen in parallel and then wait for it all to be done, but you're below the toplevel Async block, you'd do that by scheduling the subtasks against an Async::Barrier: https://socketry.github.io/async/guides/getting-started/inde...)
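A quick sketch of that barrier pattern:

require 'async'
require 'async/barrier'

Async do
  barrier = Async::Barrier.new

  3.times do |i|
    barrier.async do
      sleep rand
      puts "subtask #{i} finished"
    end
  end

  barrier.wait # resumes once every task scheduled on the barrier is done
end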
> Which means that you can just write a whole normal Ruby program that just uses Async::Task.current.async(...)
> All without having to pass the current task down the call stack everywhere.
Yes, you can also use `Async { work }` instead of `Async::Task.current.async { work }`.
It's trivially easy to get the results, here's a quick example:
require "async"
require "open-uri"
results = []
Async do |task|
task.async do
results << URI.open("https://httpbin.org/delay/1.6")
end
task.async do
results << URI.open("https://httpbin.org/delay/1.6")
end
end
I'm not sure if this is the most realistic example since you're implicitly relying on this being at the top level and there being a global await at the end of the block. Surely any real program will have all the work done inside a single top-level event loop.
require 'async'

Async do
  results = []

  Async do
    sleep 1
    results << "Hello"
  end

  puts results # => []
end
I just ran your example and I'm getting the `puts results` line to output "Hello", just as the program intends. I'm not sure why you're getting a different result.
In any case, I'm assuring you: getting the results "out of tasks" is trivially easy.
> Surely any real program will have all the work done inside a single top-level event loop.
I don't get this. Can you please explain more what you have in mind and I'll try to help clarify things.
Sorry, I needed to add a sleep to get the behavior I wanted. Now it just prints nothing.
What I mean is, I assume any real program is not going to be creating and destroying event loops any time it wants to do something async, and that it'll essentially run main in a top-level Async do. In fact, it seems like that's the only safe thing to do with this library, because the following snippet changes semantics depending on whether it's nested in an existing event loop or not.
results = []

Async do
  sleep 10
  results << "Hello"
end

puts results
So it seems like you'll pretty much always have to have an explicit wait before you can get your results in 99% of cases.
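For completeness, the explicit-wait version of the nested case (a sketch):

require 'async'

Async do
  results = []

  task = Async do
    sleep 1
    results << "Hello"
  end

  task.wait # suspend this fiber until the child task finishes
  puts results # => ["Hello"]
end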
No function colors! Is this like Go, where each fiber maintains a separate stack at runtime, or is this like Rust, where each task is effectively transformed into a state machine?
I think this deserves more attention, especially as it is coming from the original author of Sequel.
>Polyphony is a library for writing highly concurrent Ruby apps. Polyphony harnesses Ruby fibers and a powerful io_uring-based I/O runtime to provide a solid foundation for building high-performance concurrent Ruby apps.
From what I was able to tell (I followed the discussion around this), enabling Rails to work with Async is not a huge amount of effort. One guy was working on this, but he got distracted with other stuff.
It's just a matter of time before someone steps up and gets this to work.
I imagine something more "native" to Rails will happen eventually, though. But that would need to be after this makes its way into core Ruby (which has not happened yet, apparently).
The only caveat is that it doesn't work with Ruby on Rails, because ActiveRecord doesn't support the Async gem. You can still use it with Rails if ActiveRecord is not involved.
I don't understand. Is this misleading then? Did HTTParty have to implement Async for their example to work?
> You probably have your preferred HTTP gem, and you may be asking "will it work with Async"? To find out, here's an example using HTTParty, a well-known HTTP client.
They then go on to show that your "preferred HTTP gem" will just work. How is the ActiveRecord situation different?
You're not wrong. If you have external services that are accessible via HTTP, there is some potential to get some performance gains there. You're just limited in terms of some of the core Rails utilities.
It does it with threads though, which has implications when we talk about moderate/big traffic and Action Cable. A big win would be if Action Cable could work with Fibers/Async or something equivalent. It's an annoying performance bottleneck, and it would be super awesome if Ruby/Rails just solved it.
I think there has to be massive benefit to the Rails community before there can be a consideration of adding such a feature.
I could be wrong here, but I'm not sure it will solve some of the more drastic issues of what happens in a production Rails app which I believe to be mostly around memory and garbage collection (from my limited experience and understanding).
You might be able to eke out more performance in terms of having more clients be able to hit a page, but I suspect that might make memory more of an issue, not less.
I think what's being talked about here is the back-end implementation for ActionCable. By default it uses Ruby threads to push over open web sockets. There's at least one production-quality drop-in implementation (https://anycable.io/) that addresses the default scalability issues you'll have with ActionCable. Async support would seem to allow one to go much further with default Rails before needing to move to something more performant.
I'm guessing the Rails implementation for the web sockets part is thread-based, and if that's the case fibers can make a big difference. But I'm also a bit out of my depth here.
> you can think of it as "threads with none of the downsides"
And likely no actual parallel execution of Ruby code. I suppose fibers are scheduled for later when they perform I/O. Just like how C extensions release the interpreter lock before some expensive function, allowing other Ruby code to run concurrently.
I thought so... Threads and processes are still in use. While yielding control on system calls is a great idea, it's a bit inaccurate to say it's just like threads with none of the downsides. The upsides are missing too: only the system calls are running in parallel here. The Ruby code remains single-threaded.
Still, much better than languages that lack schedulers where people are tricked into writing cooperatively scheduled code without even realizing it.
In MRI (aka "CRuby", standard ruby), you don't get parallelism with threads either, due to the GIL. (Similar to CPython).
A lot of people are used to thinking mostly of that scenario, where Ruby doesn't currently give you parallelism for threads either.
JRuby does give you parallelism for threads -- but I believe it currently doesn't implement true coroutines (Fibers are just implemented on top of threads), so there are now two reasons this wouldn't be as useful on JRuby, true.
Multiple async blocks aren't really running in parallel, because the GIL is still in play. Control of what's running only switches if one async block yields... manually, or automatically. What's new is that Ruby auto-yields when waiting on the network or other blocking actions.
Great article! I was wondering if you could add an extra section on why Async will make the @counter error go away. I'm new to Async and Threads in general and the rest of the article was super easy to follow but I didn't quite understand why or how that error would disappear.
How will this deal with temporary CPU starvation in long-running connections? I am currently using concurrent-ruby's thread pool to run a few hundred Net::SSH connections per process at the same time. This helped me avoid most of these kinds of issues.
This gem is almost a decade old (although it looks like it's been updated to work with fibers?). There are a bunch of gems that allow for asynchronous/concurrent and/or parallel execution in Ruby. It looks like a nice enough gem, but it's not particularly novel. Seems like it's just trying to ride JS' popularity with the name. Ruby threads also allow for non-blocking operations.
Edit - this isn't some new paradigm for Ruby. It's a gem... A bunch of others ("Parallel", "Concurrent-Ruby", "EventMachine", etc.) have been around forever and do similar things.
Hm, the first commit to the gem was in 2017. The gem was not "advertised" so that the interfaces can be polished and done right.
The Async gem improved hugely with the Ruby 3.0 release (December 2020), when the language added the "fiber scheduler" feature specifically to integrate better with the Async gem.