You can buy something like kdb, but that costs $70k a year and requires your engineers to learn some very unfamiliar semantics if all they know is SQL and (pick any popular language here, Python to C++).
Did you mean large, able to fit in memory, and (distributed across multiple computers and/or very reliable)? Because that is much harder.
In MySQL, KEY and INDEX are literally the same thing; the two keywords exist only to support different syntaxes: http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
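To make that concrete, here is a minimal sketch (table and column names are made up for illustration); per the linked ALTER TABLE docs, both statements below produce an identical index:

```sql
-- KEY is simply an alias for INDEX in MySQL DDL; these are interchangeable.
ALTER TABLE t ADD KEY idx_c (c);
ALTER TABLE t ADD INDEX idx_c (c);
```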
Perhaps you are referring to primary keys vs. secondary indexes, which do differ in how they are physically stored on disk and in memory, particularly with InnoDB. There is a substantial performance gain from having a relevant primary key referenced in the WHERE clause, because the row data is stored in the same page as the primary key, both on disk and in the buffer pool. Secondary indexes reference the primary key, and thus require two lookups.
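A rough sketch of the two access paths, assuming InnoDB and a hypothetical table (all names here are illustrative, not from any real benchmark):

```sql
CREATE TABLE trades (
  trade_id BIGINT NOT NULL,
  symbol   VARCHAR(8) NOT NULL,
  price    DECIMAL(12,4) NOT NULL,
  PRIMARY KEY (trade_id),   -- clustered: full row data lives in the PK B-tree pages
  KEY idx_symbol (symbol)   -- secondary: leaf entries store (symbol, trade_id) only
) ENGINE=InnoDB;

-- One lookup: walks the clustered index straight to the row's page.
SELECT price FROM trades WHERE trade_id = 42;

-- Two lookups: finds matching trade_id values in idx_symbol, then fetches
-- each row from the clustered index by that primary key value.
SELECT price FROM trades WHERE symbol = 'AAPL';
```

Running EXPLAIN on each query should show the first using the PRIMARY key and the second using idx_symbol followed by the back-to-cluster fetch.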
Anyway, why is there not a single benchmark available to support the sales pitch?
(In your favor I'll just pretend you didn't mention kdb here...)
Please explain. What's bad about kdb, besides how nonstandard it is?
At the least they should come up with some seriously impressive benchmarks before dropping names like that.
This matches my own experience using it.
For example, where can we find your configurations for the MySQL vs. MemSQL benchmark you show in your video? Or how big the dataset was, etc...?
The way I see it, this is very much targeted at financial firms that use kdb+ or products like it, which perform really well at workloads like writing lots of tick data from dozens of exchanges at a time and querying across them quickly, but with a familiar SQL style.