Then look up the n-th largest key and then pull the value?
SELECT value FROM key_value_pairs ORDER BY key DESC LIMIT 1 OFFSET n-1
How would you use this approach to find, say, the 195687th-largest element in O(log(n)) time?
Edit: However, I do know I can write a SQL query to find this - an ordinary index plus LIMIT should do it. It's just that such an approach gets really awkward and unpredictable when you're dealing with a lot of distinct columns and tables - which is why I implemented this with a key-value DB (a custom index on top of Kyoto Cabinet).
Regardless of how many elements compose the key, the calculation is the same. I'd be surprised if you couldn't approach O(log(n)) time with a good SQL implementation, some computed columns, and/or indexed views.
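For what it's worth, here is a minimal sketch of the plain-SQL approach using SQLite through Python's sqlite3 module; the table and column names mirror the query above, and the data is made up for illustration. Note that while the index makes ORDER BY cheap, the OFFSET itself is a linear skip in SQLite, which is part of why this doesn't reach O(log(n)):

```python
import sqlite3

# In-memory SQLite DB standing in for the key-value store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE key_value_pairs (key INTEGER, value TEXT)")
conn.execute("CREATE INDEX idx_key ON key_value_pairs (key)")
conn.executemany(
    "INSERT INTO key_value_pairs VALUES (?, ?)",
    [(k, f"v{k}") for k in range(1, 1001)],
)

def nth_largest_value(n):
    # Ordinary index plus ORDER BY ... LIMIT/OFFSET: correct, but the
    # OFFSET scan walks n-1 index entries rather than binary-searching.
    row = conn.execute(
        "SELECT value FROM key_value_pairs "
        "ORDER BY key DESC LIMIT 1 OFFSET ?",
        (n - 1,),
    ).fetchone()
    return row[0] if row else None

print(nth_largest_value(3))  # third-largest of keys 1..1000 -> "v998"
```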
Just wondering: was the B-tree implementation you did a pure B-tree, or was the data structure itself modified in some way, e.g. storing the number of elements to the left or right?
I used a B-tree modified by adding a member variable holding the total number of children of each parent. It's such a simple modification that I don't know why more implementations don't use it, but it seems they don't.
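A minimal sketch of that augmentation, shown on a plain binary search tree rather than a B-tree for brevity (the idea is the same: each node carries its subtree's element count, and rank selection follows the counts down one branch, so it costs O(height) - logarithmic when the tree is balanced, as a B-tree keeps it):

```python
class Node:
    """BST node augmented with the size of its subtree."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.size = 1  # keys in this subtree, including this node

def insert(node, key):
    if node is None:
        return Node(key)
    node.size += 1  # maintain the count on the way down
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    return node

def select(node, n):
    """Return the n-th smallest key (1-based): the subtree sizes
    tell us which branch holds rank n, so we descend one path."""
    left_size = node.left.size if node.left else 0
    if n <= left_size:
        return select(node.left, n)
    if n == left_size + 1:
        return node.key
    return select(node.right, n - left_size - 1)

root = None
for k in [50, 30, 70, 20, 40, 60, 80]:
    root = insert(root, k)
print(select(root, 5))  # 5th smallest of the 7 keys -> 60
```

The n-th *largest* is just `select(root, root.size - n + 1)`. Deletion and node splits have to keep the counts consistent too, which is presumably the extra bookkeeping that keeps the trick out of most stock B-tree implementations.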