
(Badger author) We have tried a huge prefetch size, using one goroutine per key, i.e. 100K concurrent goroutines doing value prefetching. But in practice, throughput stabilizes after a very small number of goroutines (around 10). I suspect it's SSD read latency that's causing range iteration to be slow, unless we're dealing with some slowness inherent to Go. A good way to test this would be to write fio in Go, simulate async behavior using goroutines, and see if you can achieve the same throughput.

If anyone would like to contribute to Badger, we'd be happy to help them dig deeper in this direction.




To fill the queue on Linux, goroutines won't be enough; you would need to use libaio directly.

sudo apt-get install libaio1 libaio-dev


Go has no native support for AIO. Based on this thread, goroutines seem to do the same thing, via epoll: https://groups.google.com/forum/#!topic/golang-nuts/AQ8JOHxm...

I think the best bet is to build an fio equivalent in Go (it shouldn't take more than a couple of hours) and see if it can achieve the same throughput as fio itself. That would help figure out how slow Go is compared to using libaio directly from C.


While network sockets in Go use epoll automatically, files do not. Looking at Badger's code, for example: fd.ReadAt(buf, offset) would block.

See this issue: https://github.com/golang/go/issues/6817



