I feel that I have to point this out once again, because the article goes so far as to state that:
> With this last improvement Zig has completely defeated function coloring.
I disagree with this. Let's look at the 5 rules from the famous "What color is your function?" article referenced here.
> 1. Every function has a color
Well, you don't have async/sync/red/blue anymore, but you now have IO and non-IO functions.
> 2. The way you call a function depends on its color.
Now, technically this seems to be solved, but you still need to provide IO as a parameter. Non-IO functions don't need/take it.
It looks like a regular function call, but there's no real difference.
> 3. You can only call a red function from within another red function
This still applies. You can only call IO functions from within other IO functions.
Technically you could pass in a new executor, but is that really what you want? Not to mention that you can also do this in languages that don't claim to solve the coloring problem.
> 4. Red functions are more painful to call
I think the spirit still applies here.
> 5. Some core library functions are red
This one is really about some things being only possible to implement in the language and/or stdlib. I don't think this applies to Zig, but it doesn't apply to Rust either for instance.
Now, I think these rules need some tweaking, but the general problem behind function coloring is that of context. Your function needs some context (an async executor, auth information, an allocator, ...). In order to call such a function you also need to provide the context. Zig hasn't really solved this.
That being said, I don't think Zig's implementation here is bad. If anything, it does a great job at abstracting the usage from the implementation. This is something Rust fails at spectacularly.
However, the coloring problem hasn't really been defeated.
The key difference to typical async function coloring is that `Io` isn't something you need specifically for asynchronicity; it's something which (unless you make a point to reach into very low-level primitives) you will need in order to perform any IO, including reading a file, sleeping, getting the time, etc. It's also just a value which you can keep wherever you want, rather than a special attribute/property of a function. In practice, these properties solve the coloring problem:
* It's quite rare for a function to unexpectedly gain a dependency on "doing IO" in general. In practice, most of your codebase will have access to an `Io`, and only leaf functions doing pure computation will not need them.
* If a function does start needing to do IO, it almost certainly doesn't need to actually take it as a parameter. As in many languages, it's typical in Zig code to have one type which manages a bunch of core state, and which the whole codebase has easy access to (e.g. in the Zig compiler itself, this is the `Compilation` type). Because of this, despite the perception, Zig code doesn't usually pass (for instance) allocators explicitly all the way down the function call graph! Instead, your "general purpose allocator" is available on that "application state" type, so you can fetch it from essentially wherever. IO will work just the same in practice. So, if you discover that a code path you previously thought was pure actually does need to perform IO, then you don't need to apply some nasty viral change; you just grab `my_thing.io`.
I do agree that in principle, there's still a form of function coloring going on. Arguably, our solution to the problem is just to color every function async-colored (by giving most or all of them access to an `Io`). But it's much like the "coloring" of having to pass `Allocator`s around: it's not a problem in practice, because you'll basically always have easy access to one even if you didn't previously think you'd need it. I think seasoned Zig developers will pretty invariably agree with the statement that explicitly passing `Allocator`s around really does not introduce function coloring annoyances in practice, and I see no reason that `Io` would be particularly different.
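A minimal sketch of that pattern (`std.Io` here refers to the proposed interface from the article, and the field and function names are hypothetical):

    const std = @import("std");

    // Application-wide state, in the style of the compiler's `Compilation`.
    const App = struct {
        gpa: std.mem.Allocator,
        io: std.Io,
    };

    fn doSomeIo(io: std.Io) !void {
        _ = io; // would perform reads/writes through the Io interface
    }

    fn deepInTheCallGraph(app: *App) !void {
        // Yesterday this was pure computation; today it needs IO.
        // No viral signature changes up the stack: just grab app.io.
        try doSomeIo(app.io);
    }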
> It's quite rare for a function to unexpectedly gain a dependency on ...
If this was true in general, the function coloring problem wouldn't be talked about.
However, the second point is more interesting. I think there's a bit of a Stockholm syndrome thing here with Zig programmers and Allocator. It's likely that Zig programmers won't mind passing around an extra param.
If anything, it would make sense to me to have IO contain an allocator too. Allocation is a kind of IO too. But I guess it's going to be 2 params from now on.
> If anything, it would make sense to me to have IO contain an allocator too. Allocation is a kind of IO too.
Io in Zig is for “things that can block execution”. Things that could semantically cause a yield of any kind. Allocation is not one of those things.
Also, it’s perfectly reasonable and sometimes desirable to have 13 different allocators in your program at once. Short lived ones, long lived ones, temporary allocations, super specific allocators to optimize some areas of your game…
There are fewer reasons to want 2 different strategies for handling concurrency at the same time in your program, as they could end up deadlocking on each other. Sure, you may want one in debug builds, another in release, another when running tests, but there are far fewer use cases for them running side by side.
Yielding in this context means yielding to a different “thread” within your program, not to the OS. If you want to express “this is a point where the program can do something else” it is a yield. If you block and can’t switch to something else… it is not.
So if you’re using an API like mmap in that way, you should think of it as IO (I don’t think you can, but I’m not sure).
Stockholm syndrome? Many years ago, I was specifically wanting a programming language where I could specify an allocator as a parameter! That's one of Zig's selling points.
> But I guess it's going to be 2 params from now on.
>> So, if you discover that a code path you previously thought was pure actually does need to perform IO, then you don't need to apply some nasty viral change; you just grab `my_thing.io`.
Python, for example, will let you call async functions inside non-async functions, you just have to set up the event loop yourself. This isn't conceptually different than the Io thing here.
But the asyncio event loop is not reentrant, so your faux sync functions cannot be safely called from other async functions. It is an extremely leaky abstraction. This is not a theoretical possibility; I stumbled on the issue 15 minutes into my foray into asyncio (it turns out that Jupyter notebooks use asyncio internally).
There are ways around that (spawn a separate thread with a dedicated event loop, then block), or monkey patch asyncio, but they are all ugly.
except you can't "pass the same event loop in multiple locations". it's also not an easy lift. the zig std will provide a few standard implementations which would be trivial to drop in.
I thought it was the exact same; an event loop in Python is just whatever Io is in Zig, make it a param, get it from an import and a lookup (`import asyncio; loop = asyncio.get_running_loop()`). I might be misunderstanding what you're saying though.
hm maybe. i guess i've only used python in situations where it injects it into amain so i could have been confused. i thought python async was a wrapper around generators, and so the mechanism can't be instantiated in multiple places. i stand corrected.
Per thread—once you start working in multiple threads you have the choice to have one global event loop, which comes at the cost of all async code being effectively serialized as far as threads are concerned*, or one event loop per thread.
* Which can be fine if your program is mostly not async but you have that one stubborn library. Yay async virality.
I can't say there's no good reason to have per thread event loops, but I think I can say if you do know of one you're suffering a terrible curse. I can only imagine the constraints that would force me to do this.
Because you have a main application running a web server and a daemon worker thread performing potentially long running tasks that use async libraries and you don't want to block the responsiveness of your web server. It's really not that bad, at least in Python.
Well, here we go I guess. Why can't you just use FastAPI? Or Tornado? Isn't there also an async Flask? Isn't Django also async now? What minor god have you angered to be chained to a non-async framework?
Of all the responses, this was perhaps my least expected one when I was talking about being chained to an async framework. Async isn't a replacement for threads; async doesn't let you spread your work out over multiple cores and doesn't give you time slicing. In Python the asyncio module actually gives you a threadpool to run computationally intensive work in as kind of one-offs. But when you need something like background job processing and want to also reap the benefits of asyncio, like being able to pull multiple tasks off the queue and make progress on others while a job does io, then you need an event loop in the other thread. It was specifically avoiding locking up FastAPI that led me to use multiple event loops in the first place.
You're free to spin the job worker off to another process but however you swing it it's still multiple event loops you deal with. But with threads you get to only load your Python app into memory once.
Skipping to the end here: if you're inside a Python thread and you're making your own event loop, something has gone badly. Maybe you made a bad choice (using threads inside an async task instead of an executor), maybe you have some legacy nightmare dependency thing to deal with, but multiple event loops, let alone nested event loops, are suboptimal from a resource usage standpoint.
I do something like that with event driven firmware. There is an allocator as part of the context. And the idea that the function is executing under some context seems fine to me.
> I do agree that in principle, there's still a form of function coloring going on. Arguably, our solution to the problem is just to color every function async-colored
I feel like there are a few issues with this approach:
- you basically rely on the compiler/stdlib to silently switch the async implementation, effectively implementing a sort of hidden control flow which IMO doesn't really fit Zig
- this only solves the "visible" coloring issue of async vs non-async functions, but does not try to handle the issue of blocking vs non-blocking functions, rather it hides it by making all functions have the same color
- you're limiting the set of async operations to the ones supported in the `Io`'s vtable. This forces it to e.g. include mutexes, even though they are not really I/O, because they might block and hence need async support. But if I wrote my own channel how would this design support it?
Colouring every function async-coloured by default is something that's been attempted in the past; it was called "threads".
The innovation of async over threads is simply to allocate call stack frames on the heap, in linked lists or linked DAGs instead of fixed-size chunks. This sounds inefficient, and it is: indexing a fixed block of memory is much cheaper. It comes with many advantages as well: each "thread" only occupies the amount of memory it actually uses, so you can have a lot more of them; you can have non-linear graphs, like one function that calls two functions at the same time; and by reinventing threading from scratch you avoid a lot of thread-local overhead in libraries because they don't know about your new kind of threads yet. Because it's inefficient (and because for some reason we run the new threading system on top of the old threading system), it also became useful to run CPU-bound functions in the old kind of stack.
If you keep the linked heap activation records but don't colour functions, you might end up with Go, which already does this. Go can handle a large number of goroutines because they use linked activation records (but in chunks, so that not every function call allocates) and every Go function uses them so there is no colour.
You do lose advantages that are specific to coloured async - knowing that a context switch will not occur inside certain function calls.
As usual, we're beating rock with paper in the moment, declaring that paper is clearly the superior strategy, and missing the bigger picture.
It's basically how Java does it (circa 17) as well.
It's something you really can't do without a pretty significant language runtime. You also really need people working within your runtime to prefer being in your runtime. Environments that do a lot of FFI don't work well with a colorblind runtime. That's because if the little C library you call does IO then you've got an incongruous interaction that you need to worry about.
Neither Java nor Go make all functions async. What they provide is stackful coroutines (or equivalently one shot continuations) that allow composing async and sync functions transparently.
Golang isn't color-blind. The magic of async/await isn't that the program isn't blocked, it's that the CALLER doesn't have to be blocked. It gives the caller the flexibility to continue and synchronize at its discretion.
In Golang to avoid blocking the CALLER you'd still have to wrap the call in a Goroutine and use something like a channel(or shared mem) to communicate back to the caller.
Guess what ends up happening IRL? People create a set of functions that return channels, and a set of functions that don't, for maximum flexibility. Two colors.
And that's viral much like async/await. You block on that channel? Now your caller needs to wrap you in a Goroutine. Or you have to return the/a channel. etc etc etc.
> It's quite rare for a function to unexpectedly gain a dependency on "doing IO" in general.
From the code sample it looks like printing to stdio will now require an Io param. So won’t you now have to pass that down to wherever you want to do a quick debug printf?
Zig has specifically a std.debug.print() function for debug printing and a std.log module for logging. Those don't necessarily need to be wired up with the whole stdio machinery.
Yes. You can always use the blocking syscalls your OS provides and ignore the Io system for stuff like that. No idea how they’d do that by default in the stdlib, but it will definitely be possible.
I think the key is, if you don't have an "io" in your call stack you can always create one. At least I hope that is how it would work. Otherwise it is just as viral as async/await.
> * It's quite rare for a function to unexpectedly gain a dependency on "doing IO" in general.
I don't know where you got this, but it's definitely not the case; otherwise async would never cause problems either. (Now, the problem in both cases is pretty minor: you just need to change the type signatures along the call stack, which generally isn't that deep. But it's exactly the same situation.)
> In practice, most of your codebase will have access to an `Io`, and only leaf functions doing pure computation will not need them.
So it's exactly like making all of your functions async by default…
I'm scratching my head here, because many languages avoid colouring. Effectively, all I think you've done is specify an interface for the event loop. Python and, I expect, a few other languages have pluggable event loops that use the same technique.
Granted, some languages like Rust don't, or at least Rust's std library doesn't standardise the event loop interface. That has led to what can only be described as a giant mess, because there are many async frameworks, and you have to choose. If you implement some marvelous new protocol in Rust, people can't just plug it in unless you have provided the glue for the async framework they use. Zig has managed to avoid Rust's mistake with its Io interface, but then most async implementations do avoid it in one way or another.
What you haven't avoided is the colouring that occurs between non-async code and async code. Is the trade-off "all code shall be async"? That incurs a cost to single threaded code, as all blocking system calls now become two calls (one to do the operation, and one to wait for the outcome).
Long ago Rust avoided that by deciding whether to do a blocking call, or a schedule call followed by a wait when the system call is done. But making that decision also incurs its own overhead on each and every system call, which Rust decided was too much of an imposition.
For Rust, there is possibly a solution: monomorphisation. The compiler generates one set of code when the OS scheduler is used, and another when the process has its own event loop. I expect they haven't done that because it's hard and disruptive. I would be impressed if Zig had done it, but I suspect it hasn't.
If you are using a library in rust, it has to be async await, tokio, send+sync and all the other crap. Or if it is sync api then it is useless for async application.
This approach of passing IO removes this problem and this is THE main problem.
This way you don’t have to use procedural macros or other bs to implement multi versioning for the functions in your library, which doesn’t work well anyway in the end.
You can find 50 other ones like this by searching.
To be honest I don’t expect them to solve cooperative scheduling, high-performance, optionally thread-per-core async soon, and the API won’t be that good anyway. But I hope it solves all that in the future.
> Or if it is sync api then it is useless for async application.
The rest is true, but this part isn't really an issue. If you're in an async function you can call sync functions still. And if you're worried it'll block and you can afford that, I know tokio offers spawn_blocking for this purpose.
> If you are using a library in rust, it has to be async await, tokio, send+sync and all the other crap
Send and Sync are only required if you want to access something from multiple threads, which isn't required by async await (parallelism vs concurrency)
1) You can use async await without parallelism, and 2) Send and Sync aren't a product of async/await in Rust, but of memory safety generally, i.e. you need Send when something can/is allowed to move between threads.
Yes, but async Rust is basically built on tokio's runtime, which is what most of the big async libraries depend on, like hyper/axum/tokio etc. And tokio is a thread-per-core work-stealing architecture, which requires Send + Sync bounds everywhere. You can avoid them if you depend on tokio proper, but it's more icky when building on something like axum, where your application handlers also require these bounds.
IIRC I had a situation a while back in which I used async await with tokio with a non-Send or -Sync type, and it compiled when I didn't use spawn[1] (implying multithreading) but a simple loop with sequential processing.
Only when I wanted to enable parallelism using spawn did I get a compilation error.
> Send and Sync are only required if you want to access something from multiple threads, which isn't required by async await (parallelism vs concurrency)
In theory this is correct. In practice, a lot of APIs (including many in tokio) require both traits even for single-thread use cases.
I'm not skipping anything. And in fact I acknowledge this exact point:
> That being said, I don't think Zig's implementation here is bad. If anything, it does a great job at abstracting the usage from the implementation. This is something Rust fails at spectacularly.
This comment is just flatly incorrect. You don't need Tokio at all to write an async library. Nor do you need Send + Sync. Not sure what other crap you are speaking of, either.
Sync APIs can be spawned in worker threads as futures, too. Generally executors have helper methods for that.
Here's a trick to make every function red (or blue? I'm colorblind, you decide):
    var io: std.Io = undefined;

    pub fn main() !void {
        var impl = ...;
        io = impl.io();
    }
Just put io in a global variable and you won't have to worry about coloring in your application. Are your functions blue, red or green now?
Jokes aside, I agree that there's obviously a non-zero amount of friction to using the `Io` interface, but it's something qualitatively very different from what causes actual real-world friction around the use of async await.
> but the general problem behind function coloring is that of context
I would disagree, to me the problem seems, from a practical perspective that:
1. Code can't be reused because the async keyword statically colors a function as red (e.g. Python's blocking redis client and asyncio-redis). In Zig any function that wants to do Io, be it blue (non-async) or red (async), still has to take that parameter, so from that perspective the Io argument is irrelevant.
2. Using async and await opts you automatically into stackless coroutines with no way of preventing that. With this new I/O system, even if you decide to use a library that internally uses async, you can still do blocking I/O if you want.
To me these seem like the real problems of function coloring.
Well, it's not really a joke. That's a valid strategy that languages use. In Go, every function is "async". And it basically blocks you from doing FFI (or at least it used to?). I wonder if Zig will run into similar issues here.
> 1. Code can't be reused because the async keyword statically colors a function
This is fair. And it's also a real pain point with Rust. However, it's funny that the "What color is your function?" article doesn't even really mention this.
> 2. Using async and await opts you automatically into stackless coroutines with no way of preventing that
This however I don't think is true. Async/await is mostly syntax sugar.
In Rust and C# it uses stackless coroutines.
In JS it uses callbacks.
There's nothing preventing you from making await suspend a green thread.
I should have specified that better: of course async and await can be lowered to different things (that's what Zig does after all); what I wanted to say is that that's how it works in general. JS is a good counterexample, but for all other mainstream languages, async means stackless coroutines (Python, Ruby, C#, Rust, ...).
Which means that if I want to use a dependency that uses async await, it's stackless coroutines for me too whether I like it or not.
In Ruby async is based on stackful fibers. With https://github.com/socketry/async-debug you can see a tree of all fibers with their full call stack. It also avoids the problem people talk about in this thread with Go of passing a context parameter everywhere for cancellation, as you can kill or raise any exception inside another fiber. I haven't used them but PHP fibers are also supposedly stackful. And Java and every JVM language have had them since Project Loom in JDK 21.
It doesn't block it. But it does make FFI much more expensive in Go than in languages like Rust, because every foreign call needs to set up a C-compatible stack.
The global io trick would totally be valid if you're writing an application (i.e. not a library) and don't need two different implementations of io.
There are plenty of libraries out there which require users to do an init() call of some sorts at startup. It is perfectly possible to design a library that only works with 1 io instance and gets it at init(). Whether people like or want that… I have no clue.
I'll let a real category theorist get into the details that I'll likely flub, but the IO monad is where you end up if you start on this path. That context can be implicit, but it's there, and if you want any help from the compiler (to, for example, guide Claude Code towards useful outcomes) you've got to reify it as a real thing in the formality of the system.
Async and coroutines are the graveyard of dreams for systems programming languages, and Andrew by independently rediscovering the IO monad and getting it right? Hope of a generation.
Functions in the real world have colors: you can have predictable rules for moving between colors, or you can wing it and get C++ co_await and tokio and please kill me.
it's not a monad, since you can do unholy (for fp) things with it, like stash it in a struct and pass the struct around (even to functions which have no clue there's an io call) or just grab a globalized io object and use that at will arbitrarily at many entrypoints in your function.
most importantly, besides the obvious situations (creating the io object, binding it to another object), it's not generally going to be returned from a function as part of its value.
When I write in Haskell, I find myself mentally glossing the returned monadic state, along the lines of, "Oh, an M x is just an x that does monady stuff to get the x". This becomes natural once you get the hang of do-notation and sometimes monad combinators. So I'm not really thinking about the monadic state in the return value a lot.
It's not really any less natural than thinking stateful programming, except now the state is a reified thing, which I think is strictly advantageous once you get used to it.
i'm relatively confident that andrew will happily break the language again (god love him for that) when it becomes clear that you really want that algebra.
though i will say, for a systems language, it's probably better to invert the lift/unlift relationship, default to do-notation and explicitly unlift into pure functions. that's almost what const meant in C++ to begin with but it lost its way.
you're just spreading FUD about zig breaking before 1.0. there is zero basis for you to be "relatively confident" about this matter, because
1. the io situation is basically structured analogously to the allocator situation, which is at this point battle tested. there are currently no "monads wrapping statefulness" anywhere in zig.
2. It's not in zig's nature to build something because it satisfies an fp idiom. the abstractions and resulting "thing that the hardware does" (at least in release builds) are generally more or less obvious. the levels of compiler reinterpretation needed to achieve functional purity are not really the sort of thing that zig does. for example, zig does not have a privileged "iterator" method that the compiler reinterprets in a way that unrolls blocks or lambdas into loops without crossing a frame boundary.
Go also suffers from this form of “subtle coloring”.
If you’re working with goroutines, you would always pass in a context parameter to handle cancellation. Many library functions also require context, which poisons the rest of your functions.
Technically, you don’t have to use context for a goroutine and could stub every dependency with context.Background, but that’s very discouraged.
Having all async happen completely transparently is not really logically possible. Asynchronous logic is frequently fundamentally different from synchronous logic, and you need to do something different one way or the other. I don't think that's really the same as "function colouring".
And context is used for more than just goroutines. Even a completely synchronous function can (and often does) take a context, and the cancellation is often useful there too.
I think the main point is that in something like Go, the approach is non-viral. If you are 99 levels deep in synchronous code and need to call something with context, well, you can just create one. With C#, you need to refactor all 99 levels above (or use bad practices, which is of course what everyone does).
Also, in general cancellation is something that you want to optionally have with any asynchronous function so I don't think there really exists an ideal approach that doesn't include it. In my opinion the approach taken by Zig looks pretty good.
Why do you encourage avoiding it? Afaik it's the only way to early-abort an operation since Goroutines operate in a cooperative, not preemptive, paradigm. To be very clear, I'm asking this completely in good faith looking to learn something new!
> Afaik it's the only way to early-abort an operation since Goroutines operate in a cooperative, not preemptive, paradigm.
I'm not sure what you mean here. Preemptive/cooperative terminology refers to interrupting (not aborting) a CPU-bound task, in which case goroutines are fully preemptive on most platforms since Go 1.14; check the release notes for more info. However, this has nothing to do with context.
If you're referring to early-aborting IO operations, then yes, that's what context is for. However, this doesn't really have anything to do with goroutines, you could do the same if the runtime was built on OS threads.
Goroutines are preemptive only to the runtime scheduler. You, the application developer merely using the language, cannot directly preempt a goroutine.
This makes goroutines effectively cooperative still from the perspective of the developer. The preemptive runtime "just" prevents things like user code starving out the garbage collector. To interrupt a goroutine, your options are generally limited to context cancelation and closing the channel or socket being read, if any. And the goroutine may still refuse to exit (or whatever else you want it to do), though that's largely up to how you code it.
This difference is especially stark when compared with Erlang/BEAM where you can directly address, signal, and terminate its lightweight processes.
Exactly, thank you. I knew that the runtime gained the ability to preempt, but the clear fact that you cannot get a handle to a goroutine (e.g. `gr := go fn()` is not possible) is proof that you have no way to take advantage of this ability as a user.
But if you store your context in a struct (which is not the recommended “best practice” – but which you can do) it's no longer a function coloring issue.
I do that in one of my libraries and I feel that it's the right call (for that library).
If the struct has a well-scoped and short-lived lifecycle, then it is actually better to put the context in the struct. Many Go libraries including the stdlib do this despite not being "best practice".
An exception to the short-lived rule is to put context in your service struct and pass it as the base context when constructing the HTTP server, so that when you get a service shutdown signal, one can cancel requests gracefully.
It's well scoped, but not short lived; it's an SQLite connection.
But the API surface is huge, with 100s of methods on the connection and derived objects, with it being unclear which might block and be worthy of asynchronous cancellation. You never know when pulling an additional column if that one might be an overflow text/blob that does additional IO.
The solution, while not amazing is a method that you use like this:
    old := conn.SetInterrupt(ctx)
    defer conn.SetInterrupt(old)
This changes the “interrupt” context for the duration of your function scope, and covers all potentially blocking calls that you might make. Also, from the name, it's quite clear that this context is used only for interruption/cancellation (interrupt is the SQLite name for this, which I try to adhere to).
Nope, because I didn't mention it was to be passed around as a compulsory parameter; rather, have your logic organised across structs with methods, and hide most details behind interfaces.
It's not required, but eschewing it ends up going against the grain, since so much of the ecosystem is written to use contexts, including the standard library.
For example, say you instead of contexts, you use channels for cancellation. You can have a goroutine like this:
    go func() {
        for {
            select {
            case <-stop:
                return
            case <-time.After(1 * time.Second):
                resp := fetchURL(url)
                processResult(resp.Body) // Simplified, of course
            }
        }
    }()
If you want to be able to shut this goroutine down gracefully, you're going to have an issue where http.Get() may stall for a long time, preventing the goroutine from quitting.
Likewise, processResult() may be doing stuff that cannot be aborted just by closing the stop channel. You could pass the stop channel to it, but now you're just reinventing contexts.
Of course, you can choose to only use contexts where you're forced to, and invent wrappers around standard library stuff (e.g. the HTTP client), but at that point you're going pretty far to avoid them.
I do think the context is problematic. For the purposes of cancellation, it's invasive and litters the call graph with parameters and variables. Goroutines really ought to have an implicit context inherited from its parent, since everything is using it anyway.
Contexts are wildly abused for passing data around, leading to bloated contexts and situations where you can't follow the chain of data-passing without carefully reviewing the entire call graph. I always recommend being extremely discriminating about which values to pass in a context. A core principle is that it has to be something so pervasive that it would be egregious to pass around explicitly, such as loggers and application-wide feature flags.
What would you use in its place? I've never had an issue with it. I use it for 1) early termination 2) carrying custom request metadata.
I don't really think it is fully the coloring problem, because you can easily call non-context functions from context functions (but not the other way around, so it's a one-way coloring issue), but you need to be aware that the cancellation chain of course stops there.
Like you said, you don't NEED context. It's just something that's available if you need it. I still think Go/Erlang has one of the best concurrency stories out there.
> If you’re working with goroutines, you would always pass in a context parameter to handle cancellation.
The utility of context could be called a subtle coloring. But you do NOT need context at all. If you're dealing with data+state (around queue and bus processing) it's easy to throw things into a goroutine and let the chips fall where they will.
> which poisons the rest of your functions.
You are free to use context dependent functions without a real context:
https://pkg.go.dev/context#TODO
The thing about context is it can be a lot more than a cancellation mechanism. You can attach anything to it—metadata, database client, logger, whatever. Even Io and Allocator if you want to. Signatures are future-proof as long as you take a context for everything.
At the end of the day you have to pass something for cooperative multitasking.
Of course it’s also trivial to work around if you don’t like the pattern, “very discouraged” or not.
Agree that with something like Go, there is truly no function coloring at all. However, since most real-world async things require cancellation, a context parameter is always present, so there is some "coloring" due to that. Still, it is much less viral than C#-style async await: if you don't have a context in your call stack you can still create one when needed and call the function. I don't think it is reasonable to abstract cancellation in a way that nothing has to be passed in, so perhaps the approach presented here is realistically as good as it gets.
Aside from the ridiculous argument that function parameters color them, the assertion that you can’t call a function that takes IO from inside a function that does not is false, since you can initialize one to pass it in
To me, there's no difference between the IO param and async/await. Adding either one causes it to not be callable from certain places.
As for the second thing:
You can do that, but... You can also do this in Rust. Yet nobody would say Rust has solved function coloring.
Also, check this part of the article:
> In the less common case when a program instantiates more than one Io implementation, virtual calls done through the Io interface will not be de-virtualized, ...
Doing that is an instant performance hit. Not to mention annoying to do.
> Doing that is an instant performance hit. Not to mention annoying to do.
The cost of virtual dispatch on the IO path is almost always negligible. It is literally one conditional vs a syscall. I doubt you can even measure the difference.
Sure you can. An `async` function in Javascript is essentially a completely normal function that returns a promise. The `async`/`await` syntax is a convenient syntax sugar for working with promises, but the issue would still exist if it didn't exist.
More to the point, the issue would still exist even if promises didn't exist — a lot of Node APIs originally used callbacks and a continuation-passing style approach to concurrency, and that had exactly the same issues.
Other commenters have already provided examples for other languages, and it's the same for Rust: async functions are just regular functions that return an impl Future type. As a sync function, you can call a bunch of async functions and return the futures to your caller to handle, or you can block your current thread with the block_on function typically available through a handle (similar to the Io object here) provided by your favorite async runtime [0].
In other words, you don't need such an Io object upfront: You need it when you want to actually drive its execution and get the result. From this perspective, the Zig approach is actually less flexible than Rust.
If you have a sync/non-IO function that now needs to do IO, it becomes async/IO. And since IO and async are viral, its callers must also now be IO/async and call it with IO/await. All the way up the call stack.
You’re allowed to not like it, but that doesn’t change that your argument that this is a form of coloring is objectively false. I’m not sure what Rust has to do with it.
Sure it is function coloring, just in a different form. `async` in other languages is something like an implicit parameter; in Zig they made this implicit parameter explicit. Is that better/more ergonomic? I don't know yet. The sugar is different, but the end result is the same. Unless you can show me a concrete example of something the approach Zig has taken can do that is not possible in, say, Rust, I don't buy that it's not just another form of function coloring.
It’s more like adding a runtime handle to the struct.
Modulo that I’m not sure any language with a sync/async split has an “async” runtime built entirely out of sync operations. So a library can’t take a runtime from a caller and get whatever implementation the caller decided to use.
> I’m not sure any language with a sync/async split has an “async” runtime built entirely out of sync operations.
You get into hairy problems of definition, but you can definitely create an "async" runtime out of "sync" operations: implement an async runtime with calls to C. C doesn't have a concept of "async", and more or less all async runtime end up like this.
I've implemented Future (Rust) on a struct for a Windows operation based only on C calls into the OS. The struct maintains everything needed to know the state of the IO, and while I coupled the impl to the runtime for efficiency (I've written it too), it's not strictly necessary from memory.
> You get into hairy problems of definition, but you can definitely create an "async" runtime out of "sync" operations: implement an async runtime with calls to C. C doesn't have a concept of "async", and more or less all async runtime end up like this.
While C doesn't have async, OSes generally provide APIs which are non-blocking, and that is what async runtimes are implemented on top of.
By sync operations I mean implementing an "async" runtime entirely atop blocking operations, without bouncing them through any sort of worker threads or anything.
It's funny, but I do actually like it. It's just that it walks like a duck, swims like a duck and quacks like a duck.
I don't have a problem with IO conceptually (but I do have a problem with Zig ergonomics, allocator included). I do have a problem with claiming you defeated function coloring.
I do want to say that I regretted that comment as nonconstructive after it was too late to edit it. Others in the thread are representing my argument better than I can or care to.
I mean... you use `await` if you've used `async`. It's your choice whether or not you do; and if you don't want to, your callers and callees can still freely `async` and `await` if they want to. I don't understand the point you're trying to make here.
To be clear, where many languages require you to write `const x = await foo()` every time you want to call an async function, in Zig that's just `const x = foo()`. This is a key part of the colorless design; you can't be required to acknowledge that a function is async in order to use it. You'll only use `await` if you first use `async` to explicitly say "I want to run this asynchronously with other code here if possible". If you need the result immediately, that's just a function call. Either way, your caller can make its own choice to call you or other functions as `async`, or not to; as can your callees.
The moment you take or even know about an io, your function is automatically "generic" over the IO interface.
Using stackless coroutines and green threads results in a completely different codegen.
I just noticed this part of the article:
> Stackless Coroutines
>
> This implementation won’t be available immediately like the previous ones because it depends on reintroducing a special function calling convention and rewriting function bodies into state machines that don’t require an explicit stack to run.
>
> This execution model is compatible with WASM and other platforms where stack swapping is not available or desirable.
I wonder what will happen if you try to await a future created with a green thread IO using a stackless coroutine IO.
If `foo` needs to do IO, sure. Or, more typically (as I mentioned in a different comment), it's something like `const x = something.foo()`, and `foo` can get its `Io` instance from `something` (in the Zig compiler this would be a `Compilation` or a `Zcu` or a `Sema` or something like that).
> Using stackless coroutines and green threads results in a completely different codegen.
Sure, but that's abstracted away from you. To be clear, stackless coroutines are the only case where the codegen of callers is affected, which is why they require a language feature. Even if your application uses two `Io` implementations for some reason, one of which is based on stackless coroutines, functions using the API are not duplicated.
> I wonder what will happen if you try to await a future created with a green thread IO using a stackless coroutine IO.
Mixing futures from any two different `Io` implementations will typically result in Illegal Behavior -- just like passing a pointer allocated with one `Allocator` into the `free` of a different `Allocator` does. This really isn't a problem. Even with allocators, it's pretty rare for people to mess this up, and with allocators you often do have multiple of them available in one place (e.g. a gpa and an arena). In contrast, it will be extraordinarily rare to have more than one `Io` lying around. Even if you do mess it up, the IB will probably just trip a safety check, so it shouldn't take you too long to realise what you've done.
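For comparison, here is what that same mistake looks like with allocators today (stable stdlib API; a sketch of the bug, not something you'd write on purpose):

    const std = @import("std");

    pub fn main() !void {
        var gpa_state = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa_state.deinit();
        const gpa = gpa_state.allocator();

        var arena_state = std.heap.ArenaAllocator.init(gpa);
        defer arena_state.deinit();
        const arena = arena_state.allocator();

        const buf = try arena.alloc(u8, 16);
        gpa.free(buf); // Illegal Behavior: freed with the wrong Allocator;
                       // the GPA's safety checks will most likely catch it.
    }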
> Mixing futures from any two different `Io` implementations will typically result in Illegal Behavior
Thinking about it more, you've possibly added even more colors. Each executor adds a different color, and while each function is color-agnostic (but not colorless), futures aren't.
> it will be extraordinarily rare to have more than one `Io`
Will it? I can immediately think of a use case where a program might want to block for files on disk, but defer fetching from network to some background async executor.
but that's not even the case, because it's certainly possible to write a function that receives an object that holds onto an io (and uses it in its vtable calls) and that works equally well when it receives an object that doesn't have anything to do with io [0]. The consumers of those objects don't have to care, so there's no coloring.
[0] and this isn't even really a theoretical matter, having colorblind object passing is extremely useful for say, mocking. Oh, I have a database lookup/remote API call, which obviously requires io, but i want fast tests and I can mock it with an object with preseeded values/expects -- hey, that doesn't require IO.
I think in practice the caller still needs to know.
If I call `a.foo()` but `a` holds and uses a stackless coroutine Io while the caller is being executed from a green thread Io then, as was said before, I'm hitting UB.
But, I do like that you could skip/mock IO for instance. That's pretty neat.
> Adding either one causes it to not be callable from certain places.
you can call a function that requires an io parameter from a function that doesn't have one by passing in a global io instance?
as a trivial example the fn main entrypoint in zig will never take an io parameter... how do you suppose you'd bootstrap the io parameter that you'd eventually need? this is unlike other languages where main might or might not be async.
>you can call a function that requires an io parameter from a function that doesn't have one by passing in a global io instance?
How will that work with code mixing different Io implementations? Say a library pulled in uses a global Io instance while the calling code is using another.
I guess this can just be shot down with "don't do that", but it feels like a new kind of pitfall to get into.
Zig already has an Allocator interface that gets passed around, and the convention is that libraries don't select an Allocator; they only provide APIs that accept allocators. If there's a certain process that works best with an arena, then the API may wrap a provided allocator in an arena, but not decide on its own underlying allocator for the user.
For Zig users, adopting this same mindset for Io is not really anything new. It's just another parameter that occasionally needs to be passed into an API.
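A small sketch of that convention with today's stable Allocator API (the render function itself is made up):

    const std = @import("std");

    // The caller picks the backing allocator; the library may use an arena
    // internally for scratch work, but never chooses its own backing allocator.
    pub fn render(gpa: std.mem.Allocator, name: []const u8) ![]u8 {
        var arena_state = std.heap.ArenaAllocator.init(gpa);
        defer arena_state.deinit();
        const arena = arena_state.allocator();

        const scratch = try std.fmt.allocPrint(arena, "hello, {s}!", .{name});
        return gpa.dupe(u8, scratch); // the result is owned by the caller's allocator
    }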
while not really idiomatic, as long as you let the user define the Io instance (eg with some kind of init function), then it doesn't really matter how that value is accessed within the library itself.
that's why this isn't really the same as async "coloring"
> you can’t call a function that takes IO from inside a function that does not is false, since you can initialize one to pass it in
that's not true. suppose a function foo(anytype) takes a struct, and expects method bar() on the struct.
you could send foo() the struct type Sync whose bar() does not use io. or you could send foo() the struct type Async whose bar uses an io stashed in the parameter, and there would be no code changes.
if you don't prefer compile time multireification, you can also use type erasure and accomplish the same thing with a vtable.
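A sketch of the comptime version (plain Zig generics; the `io` field stands in for the proposed `std.Io` interface and is hypothetical):

    const std = @import("std");

    fn foo(thing: anytype) void {
        // foo only requires a bar() method; whether bar() performs IO
        // internally is invisible at this call site.
        thing.bar();
    }

    const Sync = struct {
        pub fn bar(_: Sync) void {
            // pure computation, no io anywhere
        }
    };

    const Async = struct {
        io: std.Io, // hypothetical: the proposed Io interface, stashed in the struct
        pub fn bar(self: Async) void {
            _ = self.io; // would do its work through self.io
        }
    };

    // foo(Sync{}) and foo(Async{ .io = some_io }) both compile,
    // with no changes to foo.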
It's hard to parse your comment, but I think we are agreeing? I was refuting the point of the parent. You have given another example of calling an IO-taking function inside a non-IO-taking function; the example I gave was initializing an IO inside the non-IO-taking function. You could also, as pointed out elsewhere, use global state.
> In order to call such a function you also need to provide the context. Zig hasn't really solved this.
It is much more flexible though since you don't need to pass the IO implementation into each function that needs to do IO. You could pass it once into an init function and then use that IO impl throughout the object or module. Whether that's good style is debatable - the Zig stdlib currently has containers that take an allocator in the init function, but those are on the way out in favour of explicitly taking the allocator in each function that needs to allocate - but the user is still free to write a minimal wrapper to restore the 'pass allocator into init' behaviour.
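The stdlib's list types show both styles with allocators (API as of the 0.13-era stdlib; the migration mentioned above is ongoing):

    const std = @import("std");

    fn demo(gpa: std.mem.Allocator) !void {
        // 'Managed' style: the allocator is bound at init (being phased out).
        var a = std.ArrayList(u8).init(gpa);
        defer a.deinit();
        try a.append('x');

        // 'Unmanaged' style: the allocator is passed to each allocating call.
        var b: std.ArrayListUnmanaged(u8) = .{};
        defer b.deinit(gpa);
        try b.append(gpa, 'x');
    }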
Odin has an interesting solution in that it passes an implicit context pointer into each function, but I don't know if the compiler is clever enough to remove the overhead for called functions that don't access the context (since it also needs to peek into all called functions - theoretically Zig with its single-compilation-unit approach could probably solve that problem better).
So this is a tangent from the main article, but this comment made me curious and I read the original "What color is Your Function" post.
It was an interesting read, but I guess I came away confused about why "coloring" functions is a problem. Isn't "coloring" just another form of static typing? By giving the compiler (or interpreter) more meta data about your code, it can help you avoid mistakes. But instead of the usual "first argument is an integer" type meta data, "coloring" provides useful information like: "this function behaves in this special way" or "this function can be called in these kinds of contexts." Seems reasonable?
Like the author seems very perturbed that there can be different "colors" of functions, but a function that merely calculates (without any IO or side-effects) is different than one that does perform IO. A function with only synchronous code behaves very differently than one that runs code inside another thread or in a different tick of the event loop. Why is it bad to have functions annotated with this meta data? The functions behave in a fundamentally different way whether you give them special annotations/syntax or not. Shouldn't different things look different?
He mentions 2015 era Java as being ok, but as someone that’s written a lot of multithreaded Java code, it’s easy to mess up and people spam the “synchronized” keyword/“color” everywhere as a result. I don’t feel the lack of colors in Java makes it particularly intuitive or conceptually simpler.
Yes, the main character of that article really is mostly JavaScript. The main issue there is that some things must be async, and that doesn't mesh well with things that can't be.
If you're writing a game, and you need to render a new enemy, you might want to reduce performance by blocking rather than being shot by an invisible enemy because you can only load the model async.
But even the article acknowledges that various languages tackle this problem better. Zig does a good job, but claiming it's been defeated completely doesn't really fly for me.
> He mentions 2015 era Java as being ok, but as someone that’s written a lot of multithreaded Java code, it’s easy to mess up and people spam the “synchronized” keyword/“color” everywhere as a result. I don’t feel the lack of colors in Java makes it particularly intuitive or conceptually simpler.
Async as a keyword doesn’t solve this or make writing parallel code any easier. You can still mess this up even if every function is annotated as async.
> A function with only synchronous code behaves very differently than one that runs code inside another thread or in a different tick of the event loop.
I think this is conflating properties of multiple runtimes. This is true in JavaScript because the runtime works on an event loop. In Java an “async” function that reads from a file or makes an http call doesn't run in a different thread and doesn't run in a different tick of an event loop. So what value does it have in that type of runtime?
Personally, I think “async” is putting pain on a lot of developers when 99% of all code is not parallel and doesn't share memory.
I believe the point is less about "coloring" not having value as a type-system feature, and more about its bad ergonomics, and its viral nature in particular.
> It was an interesting read, but I guess I came away confused about why "coloring" functions is a problem. Isn't "coloring" just another form of static typing?
It is. Function coloring is static typing.
But people never ever agree on what to put in the type system. For example, Java's checked exceptions are a form of typing... and everyone hates them.
Anyway it's always like that. Some people find async painful and say fuck it I'm going to manage threads manually. In the meanwhile another bunch of people work hard to introduce async to their language. Grass is always greener on the other side.
> But people never ever agree on what to put in the type system. For example, Java's checked exceptions are a form of typing... and everyone hates them.
I love checked exceptions. Checked errors are fantastic and I think most developers would agree they want errors to be in the type system, but Java as a language just hasn’t provided the language syntax to make them usable. They haven’t made it easy to “uncheck” when you can’t possibly handle an error. You have to write boilerplate:
    Something s;
    try {
        s = something();
    } catch (SomethingException e) {
        throw new RuntimeException(e);
    }
It sucks when you face that situation a lot. In Swift this is really simple:
    let s = try! something()
Java also hasn’t made them usable with lambdas even though both Scala [0] and Swift have shown it’s possible with a sufficiently strong type system:
> Isn't "coloring" just another form of static typing?
In a very direct way. Another example: in languages that don't like you ignoring errors, changing a function from infallible to fallible is a breaking change, a la "it's another colour".
I'm glad it is: if a function I call can suddenly fail, at the very least I want to know that it can, even if the only thing I do is ignore it (visibly).
> Isn't "coloring" just another form of static typing?
Yes, and so is declaring what exceptions a function can throw (checked exceptions in Java).
> Why is it bad to have functions annotated with this meta data? The functions behave in a fundamentally different way whether you give them special annotations/syntax or not. Shouldn't different things look different?
It really isn't a problem. The article makes people think they've discovered some clever gotcha when they first read it, but IMHO people who sit down for a bit and think through the issue come to the same conclusion you have - Function coloring isn't a problem in practice.
> but IMHO people who sit down for a bit and think through the issue come to the same conclusion you have - Function coloring isn't a problem in practice.
I dunno man, have you seen people complain about async virality in Rust being annoying? Have you ever tried to read a backtrace from a program that does stackless coroutines (it's not fun)? Have you seen people do basically duplicate work to maintain a blocking and an async version of the same networking library?
The alternative is to jump through a bunch of hoops to hide the "coloring" behind some opaque abstraction that is complicated and that will still get in the way when things go wrong.
Everyone complained when async IO was done with callbacks, so sugar was added to the callbacks, and now everyone has spent over a decade complaining about what flavor of sugar tastes best.
Y'all at Zig have a solution; I trust Zig's solution will be a good one (Zig is lots of fun to use as a language), but at the end of the day IO is slow, and that needs to get hidden somehow, or not.
Everyone should have to do embedded for a while and set up their own DMA controller operations. Having async IO offloaded to an actual hardware block is... a different type of amusing.
> Well, you don't have async/sync/red/blue anymore, but you now have IO and non-IO functions.
> However, the coloring problem hasn't really been defeated.
Well, yes, but if the only way to do I/O were to have an Io instance to do it with then Io would infect all but pure(ish, non-Io) functions, so calling Io functions would be possible in all but those contexts where calling Io functions is explicitly something you don't want to be possible.
So in a way the color problem is lessened.
And on top of that you get something like Haskell's IO monad (ok, no monad, but an IO interface). Not too shabby, though you're right of course.
Next Zig will want monadic interfaces so that functions only have to have one special argument that can then be hidden.
> Technically you could pass in a new executor, but is that really what you want?
why does it have to be new? just use one executor, set it as const in some file, and use that one at every entrypoint that needs io! now your io doesn't propagate downwards.
I am not very experienced in async Rust, but it seems there are some pieces of async Rust that rely too much on tokio internals, so using an alternative runtime (like pollster) results in broken code.
Searching for comments mentioning "pollster" and "tokio" on HN brings a few results, but not one I recall seeing a while ago where someone demonstrated an example of a library (using async Rust) that crashes when not using tokio as the executor.
There are two details that are important to highlight: tokio is actually 2 components, the async scheduler and the IO runtime. Pollster is only a scheduler, and does not offer any IO functionality. You can actually use tokio libraries with pollster, but you need to register the IO runtime (and spawn a thread to manage it); this is done with Runtime::enter(), and it configures the thread-local interface so any uses of tokio IO know what runtime to use.
There are ideas to abstract the IO runtime interface into the async machinery (in Rust, that's the `Context` object that schedulers pass into the `Future`), but so far that hasn't gotten anywhere.
Yep. The old async wars in the Rust ecosystem. It's the AsyncRead and AsyncWrite traits: Tokio has its own, while a standard was brewing at the same time in the futures crate. Tokio did their own thing, people burnt out, and the traits were never standardized into std.
So you cannot use most of the async crates easily outside Tokio.
Sure. Let's do an imaginary scenario. Let's say that you are the author of a http request library.
Async hasn't been added yet, so you're using `std::net::TcpStream`.
All is well until async comes along. Now, you have a problem. If you use async, your previous sync users won't be able to (easily) call your functions. You're looking at an API redesign.
So, you swallow your pride and add an async variant of your functionality. Since Tokio is the most popular runtime, you use `tokio::net::TcpStream`.
All is well, until a user comes in and says "Hey, I would like to use your library with smol (a different async runtime)". Now what do you do? Add a third variant of your code using `smol::net::TcpStream`? It's getting a bit ridiculous, and smol isn't the only alternative runtime.
One solution is to do what Zig does, but there isn't really an agreed-upon interface; the stdlib does not even provide AsyncRead/AsyncWrite traits that would let you invert your code, work with streams provided from above, and keep your library executor-agnostic.
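That inversion is possible today with the `futures` crate's versions of those traits. A sketch of an executor-agnostic library function (the `send_ping` helper is hypothetical):

```rust
use futures::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};

// The caller supplies an already-connected stream, so the library never
// names tokio, smol, or std::net and runs under any executor.
pub async fn send_ping<S>(stream: &mut S) -> std::io::Result<Vec<u8>>
where
    S: AsyncRead + AsyncWrite + Unpin,
{
    stream.write_all(b"PING\r\n").await?;
    let mut buf = vec![0u8; 64];
    let n = stream.read(&mut buf).await?;
    buf.truncate(n);
    Ok(buf)
}
```

smol's `TcpStream` implements these traits directly; tokio streams need an adapter like `tokio_util::compat`, which is exactly the kind of papering-over the lack of std traits forces.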
Given an `io` you can, technically, build another one from it with the same interface.
For example, given an async IO runtime, you could create an `io` object that is blocking (awaits every command eagerly). That's not too special; you can call sync functions from async functions. (But in JavaScript you couldn't build a sync function that relies on `await` internally, so that's still something.)
Another interesting direction: given a blocking POSIX I/O that also allows for creating processes or threads, you could build a truly asynchronous `io` object in userspace from that blocking one. It wouldn't be as efficient as one based directly on io_uring, and it would be old school, but it would basically work.
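A std-only sketch of that old-school direction ("async from blocking" via a helper thread); a real implementation would hand back a proper future rather than a bare channel:

```rust
use std::{sync::mpsc, thread};

// Run a blocking read on a helper thread and return a receiver the caller
// can collect from later, instead of blocking at the call site.
fn spawn_blocking_read(path: String) -> mpsc::Receiver<std::io::Result<Vec<u8>>> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(std::fs::read(&path));
    });
    rx
}

fn main() {
    let pending = spawn_blocking_read("/etc/hostname".into());
    // ... do other work while the read is in flight ...
    match pending.recv().expect("worker thread died") {
        Ok(bytes) => println!("read {} bytes", bytes.len()),
        Err(e) => eprintln!("io error: {e}"),
    }
}
```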
Going either way (changing `io` to sync or async) the caller doesn't actually care. Yes the caller needs a context, but most modern apps rely on some form of dependency injection. Most well-factored apps would probably benefit from a more refined and domain-specific "environment" (or set of platform effects, perhaps to use the Roc terminology), not Zig's posix-flavoured standard library `io` thing.
Yes, Rust achieves this to some extent: you can swap one async runtime for another and your app might still compile and run fine.
Overall I like this a lot. I am wondering if Richard Feldman managed to convince Andrew Kelley that "platforms" are cool and some ideas were borrowed from Roc?
Passing in your dependencies as function arguments is a form of dependency injection. It is the simplest and thus arguably best form of dependency injection.
Not at all. Dependency injection is the injection of dependencies as logical parameters. The simplest and arguably best way to inject logical parameters to a segment of code is to use function parameters. You can have a complicated DI framework that involves Java class annotations and megabytes of XML, but that’s not the central idea.
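A minimal example of that framing (a hypothetical `Clock` dependency, in Rust): the "injection" is nothing more than choosing which argument to pass.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

trait Clock {
    fn now_secs(&self) -> u64;
}

struct SystemClock;
impl Clock for SystemClock {
    fn now_secs(&self) -> u64 {
        SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
    }
}

struct FixedClock(u64); // a test double, injected the same way
impl Clock for FixedClock {
    fn now_secs(&self) -> u64 {
        self.0
    }
}

// Dependency injection in its simplest form: just a parameter.
fn make_token(clock: &dyn Clock, user: &str) -> String {
    format!("{user}:{}", clock.now_secs())
}

fn main() {
    println!("{}", make_token(&SystemClock, "alice"));
    assert_eq!(make_token(&FixedClock(42), "alice"), "alice:42");
}
```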
> inject logical parameters to a segment of code is to use function parameters. You can have a complicated DI framework that involves Java class annotations and megabytes of XML, but that’s not the central idea.
so the central idea of dependency injection, a concept with a wiki of like 5000 words [1], is just pass parameters to functions...?
I guess I'm happy to accept that (I personally DGAF about DI or whatever), but it certainly means that all the people discussing DI (like yourself) are peddling snake oil...?
> so the central idea of dependency injection, a concept with a wiki of like 5000 words [1], is just pass parameters to functions...?
Specifically, parameters that refer to a dependency that is being injected.
And yeah, there are a lot of fundamentally simple ideas that can be massively overcomplicated. Let’s take a look at that Wikipedia article:
> There are several ways in which a client can receive injected services:[29]
> * Constructor injection, where dependencies are provided through a client's class constructor.
> * Method Injection, where dependencies are provided to a method only when required for specific functionality.
> * Setter injection, where the client exposes a setter method which accepts the dependency.
> * Interface injection, where the dependency's interface provides an injector method that will inject the dependency into any client passed to it.
You’ll notice that these are more or less an enumeration of ways to pass parameters into functions in object-oriented programming. Most of the complexity isn’t the idea of dependency injection itself but rather in building abstractions for doing dependency injection, especially in an object-oriented language.
And yeah, a lot of the complicated versions of DI, like Spring, probably are mostly snake oil. But I object to the notion that I am peddling snake oil because I’m not advocating for anything like Spring or claiming that DI is anything more than parameterization.
> so the central idea of dependency injection, a concept with a wiki of like 5000 words [1], is just pass parameters to functions...
... instead of functions reaching out to obtain those dependencies from globals. Yes, that is exactly what it is about. Like much of 90-00s era OOP design discourse, it's vastly overcomplicated for no good reason.
Most forms of dependency injection are abstractions over parameter passing. If you find yourself passing the same parameters into many different function calls, there are ways of abstracting that, even in Zig. It's not going to look like Spring, and if that's a dealbreaker for you, just use Spring; it's a free country.
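The simplest such abstraction is bundling the recurring parameters into one context value, so call sites pass a single argument. A sketch (hypothetical `Ctx` type, in Rust; the Zig version would be an ordinary struct too):

```rust
// All the parameters that used to be passed separately, grouped once.
struct Ctx<'a> {
    db_url: &'a str,
    timeout_ms: u64,
    verbose: bool,
}

fn fetch_user(ctx: &Ctx, id: u64) -> String {
    if ctx.verbose {
        eprintln!("fetching {id} from {} ({} ms budget)", ctx.db_url, ctx.timeout_ms);
    }
    format!("user-{id}") // stand-in for a real query
}

fn main() {
    let ctx = Ctx { db_url: "postgres://localhost", timeout_ms: 500, verbose: true };
    println!("{}", fetch_user(&ctx, 7));
}
```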
Does Zig have closures? If yes, then at least in that case the IO pointer can be a bound parameter. In languages with the async keyword, function coloring also applies to closures.
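(Zig has no first-class closures; the idiomatic equivalent is a struct holding the bound field.) For illustration in a language that does have them, binding a hypothetical `Io` dependency into a closure looks like this:

```rust
struct Io; // stand-in for an IO interface

impl Io {
    fn read_line(&self) -> String {
        "hello".into()
    }
}

// Downstream code receives a plain callable; no io parameter in sight.
fn run_task(task: impl Fn() -> String) -> String {
    task()
}

fn main() {
    let io = Io;
    let bound = move || io.read_line(); // io is now a bound parameter
    println!("{}", run_task(bound));
}
```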
The original “function colouring” blog post has done irreparable damage to PL discussions, because it’s such a stupid concept to begin with. Of course I want async functions to be “coloured” differently; they do different things! How else is a “normal function” supposed to call a function that gives you a result later? Obviously you want to be forced to say what to do with the result: await it, ignore it, `.then()` it in JS terms, etc. These are important decisions that you can’t just ignore because they’re “painful”.
There is nothing obvious about that; it is driven by what abstractions the language provides for concurrency, and with different choices you will end up needing different ways to interact with it.
So yes, given how the language designers of C# and JavaScript chose to implement concurrency and the APIs around it, coloring is necessary. But it is very much implementation-driven: with other concurrency models, other approaches that don't involve keywords can make sense. So when people complain about function coloring, they are really complaining about the choice of concurrency model that a language uses.
I have a much longer rant elsethread, but the tl;dr is:
In some languages red can call blue, but blue cannot call red (JS). In some other languages blue can call red, but the resulting combined function is blue (traditional async with optional blocking). Finally, some languages allow blue to call red with the resulting combined function being red (Lua, Scheme, Go, and I believe Zig). As color is no longer an unabstractable restriction in these languages, it's no different from any other kind of typing.
I think the point is that rule 3 doesn't fully apply anymore, and that was the main pain point. You couldn't call a red function from a blue one, even if it didn't actually use IO, without some kind of execution wrapper or waiter. Now you clearly can.