
Buffer: Composable Buffers for Go - tombenner
https://github.com/djherbis/buffer
======
stinos

       // Buffer 32KB to memory; after that, buffer to 100MB chunked files
       buf := buffer.NewUnboundedBuffer(32*1024, 100*1024*1024)

       pool := NewFilePool(100*1024*1024, "") // "" -- use temp dir
    

I have mixed feelings about this code: on one hand I really like the brevity
of these statements, but on the other hand it comes at the cost of being
rather cryptic: without the comments one can only guess what exactly is going
on. To the point that if I wanted to use this in my own code and keep it
understandable, I'd almost be forced to either copy the comments as well or
wrap it in a method with a more descriptive name. In such cases it probably
would have been better if the original API had already done this.
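To illustrate the wrapping idea, here is a minimal sketch of a self-documenting helper; `newUnboundedBuffer` below is a stub standing in for `buffer.NewUnboundedBuffer`, and the wrapper name is hypothetical:

```go
package main

import "fmt"

// Stub standing in for buffer.NewUnboundedBuffer, so this sketch is
// self-contained; it just records the two limits it was given.
func newUnboundedBuffer(memLimit, chunkSize int64) string {
	return fmt.Sprintf("mem=%d chunk=%d", memLimit, chunkSize)
}

// NewMemoryThenFileBuffer buffers up to memLimit bytes in memory, then
// spills to files of at most chunkSize bytes each. The name carries the
// information the bare constructor call leaves to a comment.
func NewMemoryThenFileBuffer(memLimit, chunkSize int64) string {
	return newUnboundedBuffer(memLimit, chunkSize)
}

func main() {
	buf := NewMemoryThenFileBuffer(32*1024, 100*1024*1024)
	fmt.Println(buf) // mem=32768 chunk=104857600
}
```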

~~~
TillE
That's one problem that named arguments solve pretty nicely. Unfortunately, I
don't think Go supports them.

~~~
MetaCosm
There are a lot of ways to make arguments nicer in Go. The simple one is to
just use a configuration struct (or anonymous struct) that has nice names.
But for complex stuff, there are functional options as laid out by Dave
Cheney:
[http://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis](http://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis)
(video:
[https://www.youtube.com/watch?v=24lFtGHWxAQ](https://www.youtube.com/watch?v=24lFtGHWxAQ))
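A minimal sketch of the functional-options pattern; the `Option` names and defaults here are hypothetical, not the actual buffer package API:

```go
package main

import "fmt"

// config holds the tunables; callers never touch it directly.
type config struct {
	memLimit  int64
	chunkSize int64
}

// Option mutates the config; each constructor below reads as a named argument.
type Option func(*config)

func WithMemoryLimit(n int64) Option   { return func(c *config) { c.memLimit = n } }
func WithFileChunkSize(n int64) Option { return func(c *config) { c.chunkSize = n } }

// NewBuffer applies the options over sensible defaults.
func NewBuffer(opts ...Option) config {
	c := config{memLimit: 32 * 1024, chunkSize: 100 * 1024 * 1024} // defaults
	for _, opt := range opts {
		opt(&c)
	}
	return c
}

func main() {
	c := NewBuffer(WithMemoryLimit(64 * 1024))
	fmt.Println(c.memLimit, c.chunkSize) // 65536 104857600
}
```

The call site now names each knob it overrides, which addresses the "cryptic without comments" complaint upthread.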

------
elithrar
Nice work. buffer.NewPool is a great convenience over writing your own
sync.Pool.

I was previously using
[https://github.com/oxtoacart/bpool](https://github.com/oxtoacart/bpool) as a
64K buffer pool for rendering (concurrently) html/template content, so I can
check for errors from Template.Execute before then using io.Copy to copy the
"known good" contents to the http.ResponseWriter. I may have to look into
using this.

------
djherbis
Hey author here, happy to answer your questions.

~~~
heavenlyhash
Have you / would you consider extending this concept to buffers that are still
single writer but support many readers -- something like a `FreshReader()
io.Reader` function that returns a reader that starts from the beginning
again? (I have an application that needs this semantic and ended up just
rolling it quick-and-dirty; this sounds like very similar concepts done up
with better gift wrapping.)

~~~
djherbis
I'm currently working on something that does that sort of thing.
[https://github.com/djherbis/fscache](https://github.com/djherbis/fscache)

It doesn't have the composability features yet, but it already handles 1
writer with many concurrent readers. Let me know if that helps!

------
kenferry
As a relative Go newb, I do have a question about this: I thought
channels/goroutines were already essentially composable buffers. No?

~~~
fishnchips
Yes and no. Goroutines are lightweight threads - what other languages might
call fibers. Channels are synchronisation primitives designed to work with
goroutines, and they do indeed provide some buffering. What OP is doing,
though, is creating customisable buffers that let you control memory usage
and that are likely designed to replace things like bytes.Buffer rather than
buffered channels. Of course I may be totally wrong ;)

~~~
djherbis
You've got the right idea. bytes.Buffer is great (I even use it under the
hood) but this repo is intended to give you more control over how your buffer
behaves as the amount of data stored in it changes.

