I'm not sure what exactly qualifies as respectable scale, but the Mongo master was running out of space and IO capacity with 24 SSDs and 90 million user records, and was replaced by a sixteen-node Riak cluster.
I'll happily share any other statistics you're interested in.
Edit: the Riak cluster actually contains lots of other data (communications, object metadata, etc.); we didn't need sixteen boxes for the user records.
90 million users is a great datapoint, yes! In my book that's more than respectable.
The only other stat that I'm curious about is the total size of the DB. Certainly databases with tens of millions of records can be held completely in RAM these days... but that also depends on how big each record is.
All told, the users database was about 600 GB on disk when we started the migration, so not the most easily stored database in RAM, but not impossible if you get enough large machines.
In fact, we use Redis a lot at Bump, although almost exclusively for queueing and transient state, not as a persistent database. For a period of time we did store long-lasting metadata in Redis, and as we became more popular, instead of throwing engineering effort at the problem we threw more memory at it, culminating in a handful of boxes with 192 GB of RAM each. We've since moved that entire database to Riak. :)
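For anyone curious, the Redis-as-queue pattern mentioned above is usually just a list key: producers LPUSH jobs onto one end and consumers BRPOP them off the other, giving a FIFO work queue with blocking reads instead of polling. Here's a minimal sketch of those semantics; since no Redis server is assumed, a Python deque stands in for the list (with redis-py the equivalent calls would be `r.lpush("jobs", job)` and `r.brpop("jobs")` — the key name "jobs" is just an example):

```python
from collections import deque

class SketchQueue:
    """In-process stand-in for a Redis list used as a FIFO work queue."""

    def __init__(self):
        self._items = deque()

    def lpush(self, value):
        """Producer side: push onto the head of the list."""
        self._items.appendleft(value)

    def brpop(self):
        """Consumer side: pop from the tail, so the oldest item comes out first.
        (Real BRPOP blocks until an item arrives; here we return None instead.)"""
        return self._items.pop() if self._items else None

q = SketchQueue()
q.lpush("job-1")
q.lpush("job-2")
print(q.brpop())  # → job-1 (FIFO: oldest job first)
```

The appeal of this pattern is that the queue lives outside any one process, so producers and consumers can be separate machines; it's only when the data needs to outlive the Redis instance that a persistent store like Riak becomes the better fit.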