
Average latency is NOT correct because YCSB latency is measured per-insert, not averaged over the total runtime. Here is the code from DBWrapper.insert:

    long st = System.nanoTime();
    int res = _db.insert(table, key, values);   // the actual database call being timed
    long en = System.nanoTime();
    _measurements.measure("INSERT", (int) ((en - st) / 1000));  // elapsed time in microseconds
... where _db.insert makes the htable.put call we're talking about.
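
For context, here's roughly what that insert boils down to in the HBase binding. This is a minimal sketch, not the actual YCSB source; the old HTable client API, the class name, and the field names are assumptions:

    import java.io.IOException;
    import java.util.Map;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseInsertSketch {
      private HTable hTable;                          // one table handle per client thread
      private byte[] columnFamily = Bytes.toBytes("family");

      public int insert(String key, Map<String, byte[]> values) {
        Put p = new Put(Bytes.toBytes(key));
        for (Map.Entry<String, byte[]> entry : values.entrySet()) {
          p.add(columnFamily, Bytes.toBytes(entry.getKey()), entry.getValue());
        }
        try {
          // With autoflush on (the HBase default), this is a synchronous RPC to
          // the regionserver, so DBWrapper's nanoTime() window covers a real
          // round trip. With autoflush off, the Put usually just lands in a
          // client-side write buffer and the measured "latency" is a memory copy.
          hTable.put(p);
        } catch (IOException e) {
          return -1;                                  // non-zero status = error in YCSB
        }
        return 0;
      }
    }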

The HBase default is correct for what YCSB is trying to measure ("live" updates, i.e., the update should be readable from the regionserver afterwards), but someone submitted a YCSB patch some time ago that overrides the default to make latency look better, and most benchmarks, including this one, have inherited the mistake.




The client will block after the buffer is full, and will wait until the outstanding requests are ack'd from the server. So the times will still be pretty close to correct. Only the last few puts won't be timed.
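
To see that amortization concretely, here's a small harness (assumptions: the old HTable API, an existing "usertable" with family "family", and a deliberately tiny write buffer). Most puts return in microseconds, and the put that fills the buffer absorbs the whole flush round trip:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BufferedPutTiming {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "usertable");
        table.setAutoFlush(false);                 // buffer puts client-side
        table.setWriteBufferSize(64 * 1024);       // small buffer to force frequent flushes

        for (int i = 0; i < 1000; i++) {
          Put p = new Put(Bytes.toBytes("user" + i));
          p.add(Bytes.toBytes("family"), Bytes.toBytes("field0"),
                Bytes.toBytes("some-value-" + i));
          long st = System.nanoTime();
          table.put(p);                            // blocks only when the buffer fills
          long en = System.nanoTime();
          System.out.println(i + "\t" + (en - st) / 1000 + " us");
        }
        table.close();                             // the final flush happens here, untimed
      }
    }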

https://github.com/elliottneilclark/hbase/blob/trunk/hbase-s...

So I created a pull request for YCSB so that the last little bit of the buffer is counted for HBase.
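
The shape of such a fix (a sketch of the idea, not the PR verbatim) is to time the final flush in the binding's cleanup and push it through the same measurement pipeline, so the last buffered puts aren't dropped:

    // Sketch: cleanup() in a YCSB HBase binding. Assumes com.yahoo.ycsb.DBException,
    // com.yahoo.ycsb.measurements.Measurements, java.io.IOException, and an hTable
    // field are available elsewhere in the class.
    public void cleanup() throws DBException {
      long st = System.nanoTime();
      try {
        hTable.flushCommits();   // push whatever is still sitting in the write buffer
      } catch (IOException e) {
        throw new DBException(e);
      }
      long en = System.nanoTime();
      // Report the flush as one more timed operation so it shows up in the stats.
      Measurements.getMeasurements().measure("UPDATE", (int) ((en - st) / 1000));
    }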



