The problem is that memory is expensive, and when you can only fit ~64GB in a 1U server, the rack space adds a lot of cost too.
For about the same money as 64GB of RAM you can buy 1TB of Intel SSD storage and that (6x160GB disks) will fit in a 1U server too. SSDs have very good random read performance and that is likely to get significantly better in the coming years. RAM is already fast enough for this type of job and the real problem is lack of capacity / high cost.
I suspect that companies like Facebook, which have relied mostly on RAM so far, will start moving to SSDs to cut costs; they have already moved away from NetApp for their storage for the same reason.
The 5 minute rule gives us a simple planning rule of thumb for comparing storage technologies:
Given some access frequency: if an item is accessed more often than once per break-even interval, IOPS cost dominates the calculation. Likewise, if an item is accessed more rarely, capacity cost dominates.
For current technology, roughly speaking, the break-even time between RAM and SSD is about 5 minutes, and between SSD and spinning disk about 6 hours.
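The break-even time in Gray and Putzolu's rule falls out of a simple ratio: the cost of supplying one access per second from the slower, device-priced tier versus the cost of keeping the page resident in the faster, capacity-priced tier. A minimal sketch, with all prices and IOPS figures being illustrative assumptions rather than current market data:

```python
def break_even_seconds(price_per_device, accesses_per_sec,
                       price_per_gb_faster, page_kb=4):
    """Reuse interval at which caching a page in the faster tier
    costs the same as re-reading it from the slower tier."""
    # Cost of supporting one access/second on the slower tier:
    cost_per_iops = price_per_device / accesses_per_sec
    # Cost of holding one page permanently in the faster tier:
    cost_per_page = price_per_gb_faster * page_kb / (1024 * 1024)
    return cost_per_iops / cost_per_page

# RAM vs SSD: assume a $300 SSD doing 10,000 random reads/sec,
# and RAM at $25/GB (hypothetical numbers).
print(break_even_seconds(300, 10000, 25))   # roughly 5 minutes

# SSD vs HDD: assume a $50 disk doing 200 IOPS, SSD at $3/GB.
print(break_even_seconds(50, 200, 3))       # roughly 6 hours
```

With these assumed prices the formula lands near the 5-minute and 6-hour figures quoted above; plugging in your own vendor quotes shifts the thresholds accordingly.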
To cost-optimize a particular system you need to know the distribution of access intervals. For something like Facebook, accesses are heavily biased towards recently created items (i.e., almost all users start walking the data graph from the top of the news feed, and days-old data is very unlikely to be touched).
This can make it difficult to get utility out of something like SSD that straddles a thin band in the middle of the storage hierarchy. Your dollars may be better spent on RAM and disk in the proper proportions.
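Given break-even thresholds, tier placement reduces to bucketing items by expected reuse interval. A hypothetical sketch of that, using the rule-of-thumb numbers from above; the "thin band" problem shows up when a skewed workload leaves almost nothing in the middle bucket:

```python
# Rule-of-thumb thresholds from the text (not measured values).
RAM_SSD_BREAK_EVEN = 5 * 60        # ~5 minutes, in seconds
SSD_HDD_BREAK_EVEN = 6 * 60 * 60   # ~6 hours, in seconds

def cheapest_tier(reuse_interval_seconds):
    """Place an item on the cheapest tier that still pays off."""
    if reuse_interval_seconds < RAM_SSD_BREAK_EVEN:
        return "RAM"   # accessed often enough that IOPS dominate
    if reuse_interval_seconds < SSD_HDD_BREAK_EVEN:
        return "SSD"   # the middle band
    return "HDD"       # rarely touched: capacity cost dominates

# A Facebook-like workload: most items are re-read within minutes
# or not for days, so few land in the SSD band.
intervals = [30, 60, 120, 240, 2 * 86400, 5 * 86400]
print([cheapest_tier(t) for t in intervals])
```

If the middle bucket comes back nearly empty for your workload, that is the case where RAM plus disk in the right proportions beats adding an SSD tier.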
This means that throughput and latency will be the scarcest resources in the future. Regardless of the demands of your application, there will eventually come a point where SSD has enough capacity at $X but HDD does not have good enough throughput or latency at $X, so you switch to SSD. And eventually, the same logic will apply for SSD -> RAM.
SSDs are pretty new and are very fast despite several problems that the makers haven't quite worked out yet. Random access performance can be improved by adding more chips in parallel, and access times will no doubt improve with faster clock speeds and reduced feature sizes, in the same way CPUs and memory improve now.
SSDs use less power than HDDs, and far less than RAM, not to mention the savings from needing fewer servers.
SSDs are simply too new for people to have designed for them. If you look at what http://www.rethinkdb.com/ is doing, or the TRIM feature, or log-structured file systems, there is quite a lot of work still to be done to update software to take advantage of SSDs. It has all been written with the limitations of HDDs in mind.