Hacker News

That's strange — that doesn't look like a normal graph to me; it looks like a cache or queue of some sort is backed up. Did you try using dtrace / iosnoop / iostat, etc., to see what the bottleneck might be?
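For what it's worth, iostat's %util number can be approximated straight from /proc/diskstats on Linux, which is handy when you want to log it programmatically during a benchmark. A minimal sketch (Linux-only; the device name is an assumption — yours may be vda, nvme0n1, etc.):

```python
import time

def io_ticks(device):
    """Cumulative milliseconds the device spent doing I/O, read from
    /proc/diskstats (Linux). Field 13 of each line is 'time spent doing I/Os'."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[12])
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

def utilization_pct(device, interval=1.0):
    """Rough equivalent of iostat's %util: the fraction of wall-clock time
    the device was busy over the sampling interval."""
    before = io_ticks(device)
    time.sleep(interval)
    after = io_ticks(device)
    return 100.0 * (after - before) / (interval * 1000.0)

# Hypothetical usage — substitute your actual block device:
# print(utilization_pct("sda"))
```

Sustained %util near 100 on the data volume would point at the disk, not Riak itself.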

For average commodity hardware I found something like 400 reqs/s/node was normal-ish, even sustained. Yours looks like it dies about 2 minutes in. Come to think of it, could your open file descriptors be limited in the OS settings? That looks just like the pattern I'd expect to see from that.
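A quick way to check whether descriptor limits could be the culprit, using only the Python stdlib (raising the limit persistently is distro-specific, e.g. /etc/security/limits.conf on Linux; the 4096 threshold below is just an illustrative rule of thumb):

```python
import resource

# Soft/hard limits on open file descriptors for the current process.
# A default soft limit of 1024 is easy to exhaust with many concurrent
# connections, and hitting it shows up as exactly this kind of sudden stall.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft}, hard={hard}")
if soft < 4096:
    print("soft limit looks low for a busy node; consider raising it with ulimit -n")
```

The Riak node's own limit matters too, so check it in the environment the Erlang VM actually starts under, not just your login shell.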

Might be unrelated, but common pitfalls I hit were:

- Using the HTTP protocol. Protobuf is way faster.
- You can tweak the r and w values to require less read and write consensus when you can afford to, depending on the task and data.
- The ulimit on open file descriptors might be too low.
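The r/w tweak above is just quorum arithmetic: with n_val replicas, a read is guaranteed to overlap the latest successful write whenever r + w > n_val, so lowering w buys write speed at the cost of that guarantee. A tiny sketch of the rule (parameter names chosen to mirror Riak's n_val/r/w options):

```python
def read_sees_latest_write(n=3, r=2, w=2):
    """A read quorum overlaps every write quorum iff r + w > n, so the
    read is guaranteed to include at least one up-to-date replica."""
    return r + w > n

print(read_sees_latest_write(n=3, r=2, w=2))  # default quorum -> True
print(read_sees_latest_write(n=3, r=1, w=1))  # fast, but -> False
```

With w=1 a write is acknowledged after one replica accepts it, which is the cheapest setting but gives up the overlap guarantee entirely.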

In any case, if you were to do a short writeup, I'm sure the Basho guys on the mailing list would be interested.

Hey — the Basho guys were aware and reproduced it pretty quickly. They saw the same behavior from the new bloom filter branch they're introducing soon, too.

I was monitoring with iostat and a couple of other tools. It was certainly very heavy on I/O, with 80% util and 20% iowait, and that increased as the concurrency went up.

I was using protobuf, and a w value of 1, so I was out of things to optimize.

When I was inserting objects already in Riak's cache, it ran about 3 times faster, but of course that's not possible with new objects.

How long did you give them to fix the issue after they reproduced it? I looked up the thread on their mailing list, and you seemingly jumped the gun a bit on your conclusions.

Feel free to investigate further. I had to move on.

So what you are saying is I was right. Thank you. People who report a bug and give less than half a day for someone to investigate have never dealt with a vendor like Oracle or IBM. This tells me you haven't had a data problem before, and your willingness to give up so quickly leads me to believe you won't end up with the data problems this article is talking about anyway.

Ha. I've had, and have, plenty of data problems. After 2 days of making adjustments per Basho's suggestions to try to improve the write throughput, I moved on. You seem to be making a lot of judgments and assumptions about that decision based on very little information. I guess this is troll food.

Meanwhile, back in Postgres-and-MySQL land we're wondering why we should have to entertain this kind of ridiculousness.
