
Google/fchan-go: Experimental channel implementation - mmastrac
https://github.com/google/fchan-go
======
kyrra
There is a 23-page paper[0] in the repo that explains what this does. From my
basic understanding, they are trying to implement a non-blocking queue for the
channels. It looks like it could deliver a 2x-4x performance boost when there
are lots of goroutines running, and a more modest 1x-2x boost when the
goroutine count equals GOMAXPROCS.

[0] [https://github.com/google/fchan-go/blob/master/writeup/write...](https://github.com/google/fchan-go/blob/master/writeup/writeup.pdf)

------
sanxiyn
Rust's channel is entirely lock-free. It always was, ever since the first 1.0
release.

This is why I think the Rust project sometimes has misguided priorities. Was
making the channel implementation lock-free a good use of pre-1.0 time?
Evidently, as Go shows, a lock-free channel is not necessary for wide adoption
and can be added later. That implementation work should have been
deprioritized, and the time probably spent on, say, async IO like Tokio.

~~~
arthursilva
Rust is a much more community-driven project; people mostly contribute what
they feel like working on.

~~~
sanxiyn
You are right in general, but in this case, Rust's lock-free channel
implementation and a core piece of Tokio (futures-rs) were done by the same
person, who was employed by Mozilla when both pieces of work were done. So it
does reflect Mozilla's priorities, not community whim.

------
jakewins
It could be that this design covers this as well - but one thing worth calling
out explicitly is the effects that message passing can have on GC.

If you spend tons of effort making your queues low-contention and super low
latency, you'd best make sure the API you give the user doesn't then undo all
that fine engineering by overloading the GC with a million message allocations
a second.

Shameless plug: I tried to address that in 4fq by having the queue own the
memory for messages. The queue's main job then becomes arbitrating who
controls which message slots in RAM:

[https://github.com/jakewins/4fq](https://github.com/jakewins/4fq)

~~~
infogulch
Go's channels take a copy of the value you pass to them. Now, if you pass
pointers or interfaces, of course you'll have allocations. But it's entirely
possible for two goroutines to communicate over a channel and produce zero
allocations, where the values put into and taken out of the channel are stored
exclusively on the stack.

------
throwaway3111
This implementation doesn't support the select statement.

~~~
AYBABTME
Only an implementation in the runtime, by the language, would.

~~~
mariusae
That's false. With anonymous closures you can implement select. See Concurrent
ML, or [https://github.com/twitter/util/blob/develop/util-core/src/m...](https://github.com/twitter/util/blob/develop/util-core/src/main/scala/com/twitter/concurrent/Offer.scala)
(which also happens to be lock-free).

~~~
lobster_johnson
He's referring to the "select" keyword, which is built into Go. You can't use
it with anything other than the built-in channel type.

