I would recommend against DataMapper. It is nicely constructed, but has some weird issues. Also, DataMapper 1 is going to be replaced by DM 2, which is a completely different beast (but better, hopefully).
Don't be fooled too much by the ease of setting up the DB. Setup should be straightforward, but even if a DB can just be downloaded and run so you can dabble with it, remember that this only shifts the moment at which you really have to dig into the details. You don't want that moment to be two days before going to production. Be on the lookout for red flags, though: unreasonable, weird configuration options that have to be flipped for no understandable reason before going into production. That suggests the project is not maintained very well, or that the developers have lost track. Also try to figure out how long it takes you to solve a new problem with the database, documentation reading and all. Pick the one you can wrap your head around best.
It's actually pretty rad to have every single location I've ever checked into via Facebook stored in this database. I can say "find all the spots within 2 miles of my house" and it's shockingly quick.
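The database does this with a spatial index, but the underlying computation is just great-circle distance. Here's a brute-force sketch in plain Python using the haversine formula, with made-up check-in coordinates standing in for the real data:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(a))  # 3958.8 mi = Earth's mean radius

# Hypothetical check-ins: (name, lat, lon) -- not real data
checkins = [
    ("coffee shop", 40.7306, -73.9866),
    ("airport",     40.6413, -73.7781),
]

home = (40.7282, -73.9942)  # made-up home coordinates

# "Find all the spots within 2 miles of my house"
nearby = [name for name, lat, lon in checkins
          if haversine_miles(home[0], home[1], lat, lon) <= 2.0]
```

A spatial database avoids scanning every row like this by keeping the points in an R-tree-style index, which is why the query stays quick even over years of check-ins.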
I have PostgreSQL on my Mac. It's easy to install and use.
PostGIS was the killer. Version 2.0 (Homebrew installs postgis2) does not play nicely with Django locally (on my Mac), and the ticket to fix it has been open for something like a year. That's a huge red flag for me right there. https://code.djangoproject.com/ticket/16455
I guess I could install 1.5 manually.
My problem with trusting Ruby/Rails is that I've only been seriously developing in it for a month.
Let's put it this way: feel free to try some new stuff, but don't go all crazy with it. So if you tried Rails and weren't that convinced by it, maybe default to something you know, even if it isn't a love affair. Learning is a great thing, but do it piece by piece, especially if your goal is getting work done.
I've been testing ElasticSearch's geo location search and most queries take 50-150ms.
Standard term searches and filters take around 5ms, with id GETs taking mere fractions of that.
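For reference, the kind of geo query being timed above looks roughly like this as a search body you'd POST to ElasticSearch's _search endpoint. The index layout, the "location" field name, and the coordinates are my assumptions, not anything from the actual setup:

```python
# Sketch of an ElasticSearch geo_distance query body (filtered query
# syntax from the 0.x/1.x era). Field and index names are hypothetical.
query = {
    "query": {
        "filtered": {
            "query": {"match_all": {}},
            "filter": {
                "geo_distance": {
                    "distance": "2mi",                       # radius
                    "location": {"lat": 40.7282, "lon": -73.9942},
                }
            },
        }
    }
}
```

The geo_distance filter is what costs the extra 50-150ms relative to plain term filters, since it has to compute distances against the candidate set rather than just walk an inverted index.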
If I hit it repeatedly for a while, it comes back in as quick as 0.14 ms, but usually not that fast.
ping to server:
michael at Achilles in ~
○ ping -c 5 apollo
PING apollo (192.168.1.130): 56 data bytes
64 bytes from 192.168.1.130: icmp_seq=0 ttl=64 time=9.497 ms
64 bytes from 192.168.1.130: icmp_seq=1 ttl=64 time=83.113 ms
64 bytes from 192.168.1.130: icmp_seq=2 ttl=64 time=7.335 ms
64 bytes from 192.168.1.130: icmp_seq=3 ttl=64 time=6.998 ms
64 bytes from 192.168.1.130: icmp_seq=4 ttl=64 time=6.393 ms
--- apollo ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 6.393/22.667/83.113/30.241 ms
apt-get install mysql-server redis riak-server mongo-10gen-server
That's why everyone chooses MySQL as a relational database: it is easy to set up, easy to maintain, and every problem you hit has already been solved by someone and is searchable online.
The underlying issue is that, except in the most trivial cases, investing time in properly operating and using your database is a huge gain that many young companies ignore. Six months later, they have a burning datastore on their hands that they don't know how to fix.
There are huge benefits that come from being the most widely deployed database.