Hacker News

Range iteration latency is very important and might be limited by concurrency. I think you can only get 100K IOPS on Amazon's i3.large when the disk's request queue is kept full.

fio [1] can easily do this because it spawns a number of threads.
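For reference, a fio job along these lines keeps the device queue full with parallel random reads. This is a sketch, not a tuned benchmark: the device path, queue depth, and job count are assumptions you'd adjust for your instance.

```ini
; randread.fio -- keep the device's request queue full with random 4K reads.
; filename, iodepth, and numjobs are illustrative values, not a recommendation.
[global]
ioengine=libaio     ; native Linux async I/O
direct=1            ; bypass the page cache
rw=randread
bs=4k
iodepth=32          ; outstanding requests per job
runtime=60
time_based=1

[nvme-randread]
filename=/dev/nvme0n1
numjobs=4           ; 4 jobs x iodepth 32 = 128 requests in flight
```

Run with `fio randread.fio`; the combined iodepth across jobs is what keeps the queue saturated.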

While working with RocksDB, we also found that range iteration latency was very bad compared to a B+-tree, and that RocksDB gets good read performance mostly on random reads, because it uses Bloom filters.

Does anyone know if this got fixed somehow recently?

[1] https://linux.die.net/man/1/fio

(Badger author) We have tried huge prefetch sizes, using one goroutine per key; hence 100K concurrent goroutines doing value prefetching. But in practice, throughput stabilizes after a very small number of goroutines (around 10). I suspect it's the SSD read latency that's causing range iteration to be slow, unless we're dealing with some slowness inherent to Go. A good way to test it would be to write fio in Go, simulate async behavior using goroutines, and see if you can achieve the same throughput.

If anyone would like to contribute to Badger, I'd be happy to help them dig deeper in this direction.

To fill the queue on Linux, goroutines won't be enough; you would need to use libaio directly:

sudo apt-get install libaio1 libaio-dev

Go has no native support for aio. Based on this thread, goroutines seem to do the same thing via epoll: https://groups.google.com/forum/#!topic/golang-nuts/AQ8JOHxm...

I think the best bet is to build a fio equivalent in Go (shouldn't take more than a couple of hours), and see if it can achieve the same throughput as fio itself. That can help figure out how slow Go is compared to using libaio directly via C.

While network sockets in Go use epoll automatically, files do not. Looking at the Badger code, for example, fd.ReadAt(buf, offset) would block.

See this issue: https://github.com/golang/go/issues/6817
