

Samsung announces 3,000MB/s enterprise SSD - RobAley
http://www.engadget.com/2013/07/18/samsung-enterprise-ssd-NVMe-XS1715/?utm_medium=feed&utm_source=feedly

======
programminggeek
How much would this speed up, say, a MySQL DB? Would it be enough to
eliminate the need for some caching? Would it make joins fast enough that you
don't need to denormalize so soon?

My point is, does this buy you out of the need (up to a certain scale) to
spend engineering time solving scaling problems?

NOTE: I realize that you still want to use good design and plan for the
appropriate amount of scale; I'm just curious how much dev time this could
potentially save in terms of rewriting code for scale.

~~~
mmetzger
It can definitely save you some time (SSDs in general, not just this one),
but how much depends on the operations in question.

Example workflow - SQL Server, approx 1TB working set: putting it all on SSD
(a RAID set of consumer-level SSDs) basically doubled our operations per
second. Modifying the algorithm a bit while still on the SSD gave a 20x
speedup.

2nd example - SQL Server, approx 10TB of RAID prosumer SSD vs the same amount
of 15k SAS drives: SSD = 8x speedup.

3rd example - SSD as swap on a Linux box: performance dropped by 10%. This
mostly had to do with the particular SSD used and its role.

In other words - it's not a magic bullet but it can definitely buy you some
breathing room.
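
Numbers like "doubled our operations per second" come from measuring the real
workload, but a quick sanity check of a drive's random-read rate can be
sketched in a few lines (a rough illustration only - the file name and sizes
are arbitrary, and the OS page cache will inflate the result; proper tools
like fio use direct I/O and controlled queue depths to avoid that):

```python
import os
import random
import time

# Rough random-read sketch: create a scratch file, then time 4 KiB reads
# at random offsets. Page-cache hits mean this overstates cold-storage
# performance; it's only meant to show what "ops/sec" is measuring.
PATH = "testfile.bin"            # throwaway file, name is arbitrary
SIZE = 64 * 1024 * 1024          # 64 MiB keeps the demo quick
BLOCK = 4096                     # 4 KiB, a typical DB page-ish read size

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

reads = 2000
start = time.perf_counter()
with open(PATH, "rb") as f:
    for _ in range(reads):
        f.seek(random.randrange(0, SIZE - BLOCK))
        f.read(BLOCK)
elapsed = time.perf_counter() - start

print(f"{reads / elapsed:,.0f} random 4KiB reads/sec")
os.remove(PATH)
```

On spinning disks each of those seeks costs milliseconds; on an SSD it's
microseconds, which is where the 2x-8x workload-level gains above come from.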

