MySQL 8.0: Retiring Support for the Query Cache (mysqlserverteam.com)
144 points by aleksi on May 30, 2017 | 83 comments



Product Manager for the MySQL Server here (and post author). Happy to answer any questions...


I'm a developer who drastically prefers PostgreSQL due to things like window functions, a more predictable query planner, a variety of index types and the ability to index over calculated fields, improved strictness out of the box (e.g., UTF-8 being UTF-8, no implicit truncation of long strings, no silent and lossy automatic typecasting), and so on.

I'm forced to use MySQL at work, because it's much easier to work with for our operations teams.

That said, my perception is that PostgreSQL is catching up to MySQL in terms of operational overhead and replication strategies faster than MySQL is catching up to PostgreSQL on the end-user side of things.

To an end-user like me, what would you point out as some current advantages that MySQL has over PostgreSQL, and what do you see the MySQL project doing to close the growing gulf in feature parity?


I would maybe start off by saying feature parity was/is never the goal. The original goal of MySQL was to be the "Ikea of databases" (both come from Sweden).

Having said that, I think expectations on what is the minimal functionality have evolved, and we have responded by adding functionality like JSON in MySQL 5.7, and CTEs and Window Functions in 8.0. In terms of the specific issues you raise:

* utf8 vs utf8mb4 is "problem #1" http://mysqlserverteam.com/sushi-beer-an-introduction-of-utf... - we have switched the default in 8.0, and will deprecate utf8mb3 to reduce confusion: http://mysqlserverteam.com/mysql-8-0-when-to-use-utf8mb3-ove...

* Implicit truncation and automatic type casting are no longer the default (strict mode was enabled for new installs in 5.6 (2013) and for all installs in 5.7 (2015)) - that is, unless the standard specifies they should happen (there are some weird cases).

* 5.7 has virtual columns + indexes. This allows for a functional index (a small sketch follows below).

Edit: Missed a word, added computed columns
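
A minimal sketch of the 5.7 virtual column approach, assuming a hypothetical `users` table with an `email` column:

    -- 5.7: emulate a functional index with an indexed virtual column
    ALTER TABLE users
      ADD COLUMN email_lower VARCHAR(255)
        GENERATED ALWAYS AS (LOWER(email)) VIRTUAL,
      ADD INDEX idx_email_lower (email_lower);

    -- queries against the virtual column can use the index
    SELECT * FROM users WHERE email_lower = 'a@example.com';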


> adding functionality like JSON in MySQL 5.7

In the Java world, the Spring framework is about to drop a new major release revolving around asynchronous or "reactive" programming. Many other programming languages and frameworks are already moving in this direction.

This model pulls people toward alternative databases such as Cassandra, MongoDB, etc., because you lose many of the advantages of asynchronous processing in the application tier if your data access tier still relies on synchronous blocking APIs (such as Java's JDBC and JPA).

Recently I was researching whether any relational database vendors were adding support for asynchronous access, and stumbled across MySQL's new "X Protocol" basically by accident. The documentation is still rather new and thin, and I was puzzled that it seems to revolve 100% around the use case of MySQL as a document store to compete with MongoDB.

Are there any plans to push the X Protocol further toward the spotlight, beef up the documentation, and emphasize use cases around asynchronous access as a relational database rather than document store?


The "X" comes from it being a crossover/hybrid protocol between SQL and a CRUD API. It has a lot of modern protocol features, and uses protobufs, which makes it easy to add new driver support. The CRUD API has overlap with Document Stores like MongoDB, but the X Protocol really goes beyond that.

There are plans to continue developing the X Protocol. We have also chatted about doing translation from the classic protocol to X Protocol in the MySQL Router. I agree with you that async is important. Stay tuned :-)


Red Hat is trying to address this with their Debezium project:

http://debezium.io/


Thanks for your reply! I'm thrilled to hear that MySQL is taking strictness much more seriously with the above-mentioned strategies. It looks like the top few of my biggest gripes are getting addressed, and that's exciting news.

Could you elaborate more on MySQL being the Ikea of databases? What exactly does that mean, and how do those things differentiate it from PostgreSQL currently (and how will they in the future)?

If it's mostly just around new-developer friendliness, I worry that MySQL has more of an (admittedly somewhat exaggerated) reputation as the "PHP of databases" right now: it's easy to use out of the box, but in a way that doesn't discourage poor practice and results in traps and land-mines for future development. How do you guys plan on achieving such a (laudable) goal without bringing about the kind of baggage that stereotypically has come with it? FWIW, I suspect the strictness changes you mentioned will go a long way towards addressing that, but I'm curious if there's more you have in mind.

PS. Apologies if I've set up a bit of a straw man in my last paragraph. I'm trying to predict what your answer to the prior paragraph might be and pose additional questions based on that answer to avoid an extra round-trip.


I want to start by saying that gripes/land mines don't help anyone. We try to balance fixing these issues, while keeping a good upgrade story. XKCD does a good job of explaining the problem in https://xkcd.com/1172/

In every major release we have a pseudo-quota for incompatibility (5.5 changed the default storage engine, 5.6 and 5.7 refactored a lot of the optimizer and transitioned defaults like strict mode to on, 8.0 makes large changes to the data dictionary).

w.r.t Ikea of databases - I have a large Ikea desk that I sit some very heavy computers on. It is simple in design, but that does not make it limited.


I appreciate the responses. Thanks so much for your time!


My company was one of those set on MySQL and resisting PostgreSQL. We pushed for it, and we now have Postgres and MySQL side by side; operationally, Postgres is actually less overhead (especially with tools like repmgr and barman).

It also doesn't allow idiotic setups, which turned out to matter for us. We had one cluster of two machines, a master and a slave; over time, a decision was made to write certain data only to the slave, while other kinds of data were written only to the master.

Then, since the slave contained more data, we started backing up the slave. At some point replication between master and slave broke, and that was only discovered a week later, once all the old logs had been purged. Some interesting acrobatics were needed to restore things to working condition.

Postgres would not let you make any changes to the slave in the first place, and replication slots make sure no logs are purged while replication is down.

It's still a bit unfair to compare, because we have more MySQL than Postgres nodes, but Postgres just seems to work, so currently it requires no attention. Barman doesn't just make daily backups; it continuously archives every transaction. It can be integrated with nagios/icinga and runs much more thorough checks.

I think your operations team's resistance is just fear of the unknown; the switch would be very beneficial to them too.


Sorry, but you could have limited access to the slave to prevent writes... and there are legit replication strategies that allow you to write to multiple nodes: master-master, etc.


There is an option called [super_]read_only designed for this use-case:

https://dev.mysql.com/doc/refman/5.7/en/server-system-variab...
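
A minimal sketch of using these on a replica (`super_read_only` requires 5.7.8+):

    -- block writes from clients without the SUPER privilege
    SET GLOBAL read_only = ON;

    -- 5.7.8+: also block writes from clients that do have SUPER
    SET GLOBAL super_read_only = ON;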


When I first learned about relational databases and looked for a free and open source engine to play around with, people told me there were MySQL and PostgreSQL, and that I should pick one; what is the difference, I asked, and was told that basically, MySQL is fast and easy to learn, while PostgreSQL had "real" transactions, referential integrity and so forth, plus lots of features.

How does the landscape look today? I am under the vague impression that the Postgres developers have worked hard on improving performance (and adding more features), but the MySQL developers probably have not been sitting on their hands all those years - what are MySQL's strong points today? (I remember the documentation being very good, and I count that a strong point!)


What are MySQL's strong points today? It's an interesting question, because some of the features that I'm most proud of MySQL having are the ones which might miss your radar:

* Performance_schema means that any time MySQL is allocating memory, performing IO or waiting on locks, there is a really easy SQL interface to debug issues (a small example follows this list). I always say that most users are not getting the performance they are entitled to, because they don't have the visibility.

* Logical replication means that it is very easy to do rolling version upgrades. It is also easier to have remote replicas and not have schema changes re-send the whole table across the wire.

* Group Replication (new) - Built in active/active HA.

* The InnoDB Storage engine is very good. It uses an update in place w/REDO model, that has a lot of nice performance characteristics for short-medium sized transactions. The IO and CPU scalability is also very good these days, and we have a number of contributors to thank for that (Percona, Facebook, Google). InnoDB supports native aio, direct io, and can read/write in multiple threads. The change buffering feature that it has (aka insert buffer) is very good at reducing IO on a number of workloads. Its compression feature is also important for reducing space on SSDs.

* I actually think our bug workflow is very good if you are a production DBA. No new features in a stable release, and only the docs team closes bugs. This has a good way of forcing the release notes to be very accurate.

* The tunables and overrides for DBAs are extensive, and the tooling is very stable. MySQL 5.7 supports server-side query rewrite based on a pattern, so I can insert a query hint if required.
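
As an example of the Performance Schema visibility mentioned above (a sketch; see the performance_schema docs for the full column list, and note the wait instruments must be enabled to see data):

    -- top 5 wait events by total time spent waiting
    SELECT event_name, count_star, sum_timer_wait
      FROM performance_schema.events_waits_summary_global_by_event_name
     WHERE count_star > 0
     ORDER BY sum_timer_wait DESC
     LIMIT 5;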


Thanks! That sounds very interesting, especially the clustering. Good to hear that MySQL has evolved, as well!

(The performance part, too, but so far I've been fortunate enough not to run into problems where the database was really the bottleneck, and I hope it stays that way. ;-) )

EDIT: Now that I think of it, at work we do have a MySQL instance running for a few applications that require it. I am kind of embarrassed to admit that I pay it almost no attention beyond regular backups, but I'm happy to report that it has not given me any trouble at all (which is why I rarely think of it - that is kind of the Nirvana for a sysadmin like me!)


I also have a list of what was released in MySQL 5.7: http://www.thecompletelistoffeatures.com/

I'll work on a new version for 8.0 at some point. In the mean time:

http://mysqlserverteam.com/the-mysql-8-0-0-milestone-release... http://mysqlserverteam.com/the-mysql-8-0-1-milestone-release...


Gotta say I'm a big fan of tokudb over innodb myself. Much better performance as tables grow and much much better compression.

My own benchmarks:

https://williamedwardscoder.tumblr.com/post/160628887798/com...


TokuDB uses a write-optimized data structure. A B+tree like InnoDB's is more read-optimized (although writes scale well while the key parts of the tree are in memory).

There will always be use-cases for each.


I know you are the product manager and have a line to toe and all, but tokudb really belongs distributed with stock MySQL. It's a basic enabler for a modern revolution in MPP which perhaps Oracle doesn't want to realize... ;)

When tables are small and fit into memory, tokudb and innodb are damn close performance-wise. But innodb has crap compression, so it stops fitting into memory a lot sooner, and innodb performance drops off a cliff when it stops fitting into memory. Whereas tokudb just sails on awesomely. It's easy to have multi-TB tokudb tables. Just a shame that the next jump upwards is forgotten-ware like ShardQuery. MariaDB is doing Spider and Tokudb ...


Not everything is a document store. Some people actually have normalized schemas with foreign keys. TokuDB doesn't support foreign keys. Can Fractal Trees technology support the exact same API as B-Trees, only faster? I don't know. I would love to see a comparison when both technologies are at feature parity.


> Can Fractal Trees technology support the exact same API as B-Trees, only faster?

Yep. That tokudb doesn't support foreign keys is not a data-structure thing; it's a feature they cut from the mysql shim that sits on top of the fractal tree library.

I've got a list as long as my arm about missing features, features that don't work well together, blatant opportunities that the optimizer misses, etc.

In all seriousness, I'd be all over postgres instead if it just had fractal trees ;) Most of the problems are not the engine, they are the (R?)DBMS on top.


We are currently using MySQL on RDS for most of our work. We're very tempted to move to Aurora on AWS for the benefits it has around write scaling, disk size scaling, table size limits and its claims around improved performance. I assume you're watching these developments, what is your take? Do you plan to compete, mirror or simply go another direction with MySQL?


We compete in the sense that Aurora is a fork of MySQL 5.6 (2013).

As a product manager, I do watch Aurora (along with SQL Server, Postgres, MongoDB, MariaDB etc). I'd rather answer questions about our products if you don't mind :-)


Fair enough. My biggest question is the same old problem that plagues all databases: Do you have any plans coming up to help us deal with modifying large tables, scaling writes and/or dealing with current known storage limits?


I think plague might be a stronger word than I would use, since I think there is always pressure on new entrants to exaggerate problems with existing technology. For example: I have several customers with 200TB databases on a single server. I helped a customer a few weeks ago insert 50K queries/s sustained on some not particularly special hardware (higher is possible; it depends a lot on schema+indexes).

But back to your question - yes. We are working on improving use cases like insert throughput, and changing the file format so we can support instant DDL. Having our new data dictionary in 8.0 provides a strong basis for this.

The larger vision is a 4 step mission, described in our Keynote from last year:

https://www.youtube.com/watch?v=4ihSsQ2z-Cc&feature=youtu.be...

Actual mission slide starts at 57:30. Step 4 is to introduce write scale out with sharding.

(Hi btw!, I'm also an Australian living in Toronto.)


MySQL for some reason still lacks:

* uuid type

* datetime with timezone

* storing (and retrieving) the view definition as it was defined, with comments, formatting, etc.

* using the same temporary table multiple times in the same query

This is not an exhaustive list, but these are things that bite me every day with mysql.

Even though it's not query cache related, I wonder why such basic features are still missing, and what the plans are to include them? They sound much simpler and more important than adding a NoSQL protocol to a SQL database.


I'm not sure if that was a question, but I'll answer :-)

* For UUID, we've added helper functions to store it in insert-friendly order (small sketch below): http://mysqlserverteam.com/mysql-8-0-uuid-support/

* For Datetime + Timezone, this is something we are looking into currently.

Datatypes are actually not simple to add in MySQL. While STRICT mode is the default, we support the upgrade case of it being disabled, which leaves us with a number of implicit conversions to handle. I wish it weren't the case, but it's not for lack of demand on our side :-) We intend to schedule refactoring work to make this easier in the future.

* For storing/retrieving view definitions

I hear you on that one. There is a documented reason though:

> The advantage of storing a view definition in canonical form is that changes made later to the value of sql_mode will not affect the results from the view. However an additional consequence is that comments prior to SELECT are stripped from the definition by the server.

* Re-using the same temporary table can now be worked around with a CTE (8.0), which is preferred (sketch below).

It is not just a case of prioritization, but also resourcing. We have a large team, with different people working on data types than on protocol work :-)
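
Two small sketches of the points above. First, the 8.0 UUID helpers (the functions are from the linked post; the table is hypothetical):

    CREATE TABLE t (id BINARY(16) PRIMARY KEY);
    -- the swap flag moves the time bits to the front, keeping inserts roughly sequential
    INSERT INTO t VALUES (UUID_TO_BIN(UUID(), true));
    SELECT BIN_TO_UUID(id, true) FROM t;

Second, the 8.0 CTE workaround for referencing the same "temporary table" twice in one query (`orders` is hypothetical):

    WITH totals AS (
        SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id
    )
    SELECT a.customer_id
      FROM totals a
      JOIN totals b ON b.total > a.total;  -- the same CTE referenced twice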


Can you give some thoughts on why one should choose MySQL over MariaDB?


MariaDB diverged from MySQL 5.5 (2010). I'm quite proud of what we've managed to achieve since then:

- MySQL 5.6 (2013) https://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

- MySQL 5.7 (2015) I have a list @ http://www.thecompletelistoffeatures.com/

- MySQL 8.0 (in development) http://mysqlserverteam.com/the-mysql-8-0-0-milestone-release... http://mysqlserverteam.com/the-mysql-8-0-1-milestone-release...

In terms of some of the most recent work, I think the utf8mb4 performance improvements will have a big return for users: http://mysqlserverteam.com/mysql-8-0-when-to-use-utf8mb3-ove...


You could've placed your 5.7 content on a better-known domain, like Medium, instead of a random-looking one. People have a "mind score" for domains, and funny-looking ones aren't exactly easy to go to.


Do you have any thoughts on the future of MySQL with regards to SQL/MED (Management of External Data)? https://en.wikipedia.org/wiki/SQL/MED

Clearly a hard and costly problem to crack, but I always wondered with MySQL's pluggable engine tech, if something couldn't be done in this area (or even if that might have been part of the goal of the pluggable tech?).


Not something we are currently looking at, but possible in the future.

I am not quite the right person to answer whether the storage engine API can handle this use case. It is a slightly different problem, in that you need to push down a lot more conditions into the engine. In some ways we already do this with our MySQL Cluster product.


I really like the computed column capabilities of recent releases; however, I'm frequently frustrated by MySQL's poor support for materialized views and subquery optimization.

Isn't MySQL dependent upon the query cache for caching subqueries? Are there plans to introduce materialized views or improve subquery performance?


There are a number of subquery performance improvements in MySQL 5.6 (including semi-join and materialization). These are not dependent on the query cache. 5.7 also added a new derived merge optimization (for subqueries in the FROM clause; example below).

No current plans to add materialized views.
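
For illustration, derived merge means that in 5.7 a query like the following is merged into the outer query rather than materializing the derived table (`users` is hypothetical):

    SELECT u.id, u.name
      FROM (SELECT id, name FROM users WHERE active = 1) AS u
     WHERE u.id > 100;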


Will adaptive hash index ever be disabled by default? While it's orders of magnitude less horrible than the query cache, it's in the same category of things which add variance for marginal gain.


I looked into this for defaults for 5.7 and 8.0.

Our performance team feels like ON is still the better default, as it applies to more workloads than not. Improvements were also made in 5.7 to partition the hash.


Does MySQL have any plans to introduce proper support for DDL transactions? (with no implicit commits!)


We've made the first step in 8.0, by moving the data dictionary to use a transactional backing store internally (no more FRM files). This means we can now do atomic DDL (i.e. drop 3 tables with all/none semantics).

Extending it to transactional DDL is something I'd like to see in the future, but it is not in scope for 8.0.
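
For example, with atomic DDL in 8.0, a statement like this either drops all three tables or none of them (pre-8.0, a crash mid-statement could leave a partial drop):

    DROP TABLE t1, t2, t3;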


These days it is much easier to implement a query cache specific to your application's needs in other layers of your infrastructure. These other layers also tend to scale horizontally much more easily than a relational database does. I like it.


In a former job I had an application that made excellent use of the Query Cache. A number of computers (in clusters across a large campus) all needed to check on the same computed value, which was derived from a query spanning 5 tables. The computers each checked in as often as every 5 minutes, and really only needed to know when the computed value changed, but without setting up some sort of push system (which would have been engineering-expensive), we just fell back on the Query Cache. Worked beautifully, especially since we would only see an extra recompute a few times a day.

Yes, you could architect a separate system to do this (e.g. a separate table, possibly with a stored procedure firing for changes on any of the tables), but that would necessarily involve more complexity.


This is quite a lucky edge case; in order for this to work, you need the underlying tables (5 in this case) to change infrequently.

This is actually, directly and indirectly, the core issue of query caches.

The one mentioned is the direct problem; the indirect one is that for each record change in any table [present in the query cache], the entire query cache must be scanned and invalidated, which causes locking.


It's good that MySQL gets better because I'm tied to it since Sequel Pro is the best database browser there is and it only supports MySQL.

Datagrip has that Java kind of non-nativeness (weird scrolling and the looks), and it's never easy to read/write triggers and relations or export data to CSV, etc.

I did find Postico decent last time I checked, but what a limiting factor the tooling is.


http://dbeaver.jkiss.org/ is the best I've used for a universal SQL (and nosql) client.


I started using it since it was last mentioned on HN and it's really impressive, I've been converted.


JetBrains DataGrip comes close and supports many different SQL dialects (it also comes with IntelliJ as the SQL plugin).


datagrip/intellij is great, no weird scrolling for me.

lots of benefits when renaming tables/cols - it updates my java references too.

Saying sequel pro is the best is rather controversial; have you ever tried pgadmin, http://www.postgresqlstudio.org/, or sql server management studio?


I find I'm much less productive in pgAdmin 4 compared to MySQL Workbench 6. On my machine, the web-like interface that pgAdmin uses since version 4 seems rather unresponsive. It's still a relatively new product though; I'm sure it will perform better over time.


pgadmin has been around since 1998.


> It's good that MySQL gets better because I'm tied to it since Sequel Pro is the best database browser there is and it only supports MySQL.

Oh my god this 1000%

I will use the "worse" DB every time if it allows me to be more efficient in my work.


> Use of non-deterministic features will result in the query not being cached (including temporary tables, user variables, RAND(), NOW() and UDFs.)

Makes me wonder how many simplistic client side caches check for this. Pretty sure I've seen PHP shopping apps that blindly hash the query.


I think what is important is to have it configurable. For example, in the ProxySQL Query Cache you can choose which queries to cache and which not to. And sometimes it also makes sense to cache queries that MySQL's QC wouldn't cache. For example, I once saw an application running `SELECT NOW()` hundreds of times per second (across multiple clients). Caching `SELECT NOW()` for 500ms on the client side would have eliminated a lot of unnecessary network round trips.

Author of ProxySQL here.
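
A sketch of that rule in ProxySQL's admin interface (the rule id and regex are illustrative; cache_ttl is in milliseconds):

    INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
    VALUES (1, 1, '^SELECT NOW\(\)$', 500, 1);
    LOAD MYSQL QUERY RULES TO RUNTIME;
    SAVE MYSQL QUERY RULES TO DISK;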


How often would that actually be a problem? RAND() and UDFs aren't often used in queries, I would say. And NOW() is used for something like "items from last week", where some time difference doesn't matter too much (you need some cache invalidation as new data comes in anyway).


I don't think it's common per-se, but it leads into the next point as well. Because of MVCC, what you are supposed to see (while returning non-stale results) gets complicated.


Yes, it's one of those things that isn't usually a problem, but that may arguably be worse...it's rare enough that you have no idea it's happening occasionally.


AFAIK, w/Rails, ActiveRecord caches queries per-request: if the SQL command matches, the cached result is returned. Since each request is short-lived, caching the results likely isn't a problem.

Maybe the PHP apps you've seen take the same approach?


> AFAIK w/Rails ActiveRecord caches queries per-request.

Really? That surprises me. Do you have any information about that?

My understanding is that there's no "ambient" caching. Any caching is in the ActiveRecord::Relation objects themselves. So if you had a `users = User.whatever` then `users` might have some logic for deferring to a cache in it, but if you had a separate `users2 = User.whatever` then that wouldn't be subject to the cache. I'm quite sure this is the case for associations: u = User.find(123); u.posts; u.posts, the first statement will query the DB as will the second, but the third won't, since the fact that `u.posts` has been executed will be cached in the `u` object. But if you did `u2 = User.find(123)` that would hit the DB again.

I haven't done any rails 5, so maybe it's new there.


I was referring to

http://guides.rubyonrails.org/caching_with_rails.html#sql-ca...

I'm not sure when this was added. v3 or v4?


No...the shopping cart apps cache queries for long periods of time, globally. They store them either on disk or in shared memory via something like apcu (mmap shared cache). They are typically hashed using an md5 of the query and the parameters.

I haven't done a big analysis, but I wouldn't be surprised at some issues with NOW(), CURDATE() or maybe UUID(), etc. Especially for things happening at midnight, or a month boundary, etc.
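
A concrete illustration of the boundary problem: the query text below hashes identically all day, but its correct result changes the moment the date rolls over (`sales` is hypothetical):

    SELECT SUM(total) FROM sales WHERE sale_date = CURDATE();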


This PostgreSQL diehard had a use case that required a cluster of 3 masters to stay up if two of them died. It turned out that was really easy with MySQL multi-master and multi-source replication, where each master is a slave to the other two.
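
A hedged sketch of the multi-source side in 5.7 (host names hypothetical; MASTER_AUTO_POSITION assumes GTIDs are enabled):

    CHANGE MASTER TO MASTER_HOST='m1.example', MASTER_USER='repl',
        MASTER_AUTO_POSITION=1 FOR CHANNEL 'm1';
    CHANGE MASTER TO MASTER_HOST='m2.example', MASTER_USER='repl',
        MASTER_AUTO_POSITION=1 FOR CHANNEL 'm2';
    START SLAVE FOR CHANNEL 'm1';
    START SLAVE FOR CHANNEL 'm2';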


Though this is probably a good move, I think it's going to catch out a lot of bad web apps based on my experience of trying to disable it at a web hosting company :-)


I always assumed that it makes sense to have the query cache active. It just makes sense logically.

On my production server, "SHOW STATUS LIKE 'Qcache%'" currently gives me:

    Qcache_free_blocks	2184
    Qcache_free_memory	7629288
    Qcache_hits	328939349
    Qcache_inserts	14440714
    Qcache_lowmem_prunes	7814306
    Qcache_not_cached	290027559
    Qcache_queries_in_cache	7497
    Qcache_total_blocks	17225
About 20 hits per insert. Sounds good to me.

Anybody here who is more "in the know" than me and can tell me what I can conclude from these numbers and/or which other stats to look at?


It's not easily possible to tell from these numbers.

For example:

- You can see a hit, but what was the cost of a miss? If it was a point lookup, it is very low cost.

- Comparing hits to inserts does not show the cost added to every other query (Com_select) of searching the cache first - including queries that were never added to the cache after execution because they were judged non-deterministic.

- Inserting results into the query cache can cause stalls (mentioned in Rene's post)

- Performance should be judged on 99th+ percentile. Even if it makes some queries faster, has it improved your p99?

If you wanted to take a more holistic approach, it is good to measure this from the application with something like NewRelic.
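
That said, if you just want a rough number from the counters alone, the usual back-of-envelope formula is (Com_select is not incremented on a cache hit):

    SHOW GLOBAL STATUS LIKE 'Qcache_hits';
    SHOW GLOBAL STATUS LIKE 'Com_select';
    -- hit_rate ~= Qcache_hits / (Qcache_hits + Com_select)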


> what was the cost of a miss

Yeah, no idea.

Regarding "Com_select", it seems to be 0:

    SHOW STATUS LIKE "Com_select"

    Variable_name       Value
    Com_select          0
What does that mean?

As for the holistic approach... well, the server just ticks away nicely. I never experience any lag and all users are happy with the speed, so there is no strong drive to investigate. Computers are so fast these days: a cheap $15/month VPS can easily handle tens of thousands of users per day.


It is definitely high time; the QC is an abomination, totally at odds with the idea of MVCC.

In my experience, anyone who enabled the QC was deeply mistaken about its behaviour.


Exactly! This should have been done a long time ago.

The MySQL query cache was nothing less than a _trap_ for unsophisticated DBAs.


When optimizing old-school server-rendered web apps, turning on the query cache always gave the most bang for the buck: hit rates around 90% or more.


Out of curiosity: how do people using Wordpress [at somewhat scale] handle layers of caching? Last I used it, relying on the Wordpress cache layer just didn't cover enough cases. The MySQL one "saved" me, seeing how Wordpress likes to ask the same question. A lot.


If you enable WordPress's object cache, it should get most page views down to a couple or even zero queries. It's poorly written themes and plugins where problems can arise.

The easy solution is to just enable full page caching. On single server setups, something like WP Super Cache works well. On distributed setups like WordPress.com, this is more appropriate: https://github.com/Automattic/batcache


The object cache doesn't persist across HTTP requests. You have to install some 3rd party plugin for that.[1]

[1]https://codex.wordpress.org/Class_Reference/WP_Object_Cache


Correct, which is what I meant by enabling it. Apologies for not being clearer.

It used to persist by writing objects to the filesystem, but that actually ended up being _slower_ in some configurations.


Having an object cache is one layer. However, caching SQL in conjunction with it can speed up application response times, depending on the behavior of the workload. I have seen performance increase by 4x from caching SQL.


Primarily by putting a caching proxy server like Varnish in front of Wordpress.


Yep, this is what we do. Adding in some kind of object cache cluster wasn't a significant enough improvement over just hitting mysql to justify the extra point of failure.

The problem is WP just makes a ton of simple queries - in any complex install the bottleneck is probably going to be network latency to the database.


I think this echoes Rene's point about moving the cache closer to the application as well.

I have some hypothetical numbers illustrating the impact of network latency: http://www.tocker.ca/2013/11/18/how-important-is-it-to-merge...

The simple queries are basically free once they get to MySQL. Query cache does not help.


I am trying http://www.heimdalldata.com/ as a replacement.


I'm not sure that it will "only affect a small number of users".

Maybe this should be an option with false as the default value; it could help with the transition.


> The query cache has been disabled-by-default since MySQL 5.6 (2013) as it is known to not scale with high-throughput workloads on multi-core machines.


It has already been disabled by default for a while. From the article:

> The query cache has been disabled-by-default since MySQL 5.6 (2013)


Yep.

What they should have done was add a notification in the error log at startup: "Query Cache is enabled. Are you really sure you want to do that? Read http://whatever/qcisawful.html for more information."

And this should have been added to the final releases of every version all the way back to 5.0.

For people saying "please, it's been off since 5.6 in 2013": my company's primary product runs on Percona 5.5 (2010), and our secondary product is actively developing the _next_ version on Percona 5.6. That isn't particularly unusual.


I'm working with a company called Heimdall Data (http://www.heimdalldata.com/); they do SQL auto-caching with no code changes. A perfect replacement for the Query Cache if you have a read-heavy application. They also offer a nice dashboard that helps identify database inefficiencies. I would try them out to see if you get a performance boost.


How does it know if the underlying data changed?


How is this different from MySQL query caching?


Unlike other solutions, the software auto-caches and auto-invalidates SQL; it takes away the need for a developer to manually configure the cache. Heimdall works with both MySQL and Postgres.



