I tend to dislike cost-based optimizers because they add a layer of uncertainty when you're attempting to optimize database queries. I know what I want the optimizer to do. The problem is that when I run test queries against a local database or even a staging database, the statistics used to calculate costs differ. This means production can do something totally undesirable.
Yes, that's a real problem that DBAs routinely grapple with. To address the problem, RDBMS vendors typically support hints and "plan management" features that allow DBAs to "pin" a particular plan once they've determined it's best. However, this usually requires a knowledgeable DBA; many developers would prefer that the database automatically chooses the plan, even if it changes from time to time. CockroachDB will definitely want to support both kinds of users.
Yeah, the problem is that basically none of this is portable, though. So if you're attempting to support both MySQL variants and Postgres variants, it gets nasty fast.
Not portable, and you have to deploy it separately from your application. A company I worked for solved this by setting hints for queries that were known to be mis-optimised in the common case.
If you're looking to be cross platform then hopefully you've already got a layer to translate intent to an appropriate SQL dialect, so you can drop it in there.
Some databases let you just set the stats numbers to what you think they are and pin them, so changes to data won't change the stats and the optimizer will always use your stats and not stats calculated from the data. You'll get the "prod" execution plans on an empty database!
You can do that in Db2 for z/OS. In our shop we do it daily. In prod the stats are kept up to date, and we have a job that extracts all the stats from prod needed for access paths and applies them to the empty Dev tables modelling production. So when the Dev team binds packages, those stats are used to generate access paths, and 99.99% of the time the same access paths are generated in Dev as in prod, so we know how the program will perform. Also, for some tables we set predefined stats; if those tables had real stats, oh boy.
Also, a cost-based optimizer can do some really dumb things if a table's contents change significantly before stats are gathered again. I like being able to give hints so that you can switch between cost-based and rule-based query plans.
Great article, and it sounds like it took a pretty big leap of faith to go from heuristics to cost-based optimization. If this was addressed in the article I apologize: do you only ever select transformations that yield equivalent results with immediately lower cost, or do you also explore transformations that incur an immediate cost increase but pay off later? I.e., are you doing simple hill climbing, or something more interesting?
Good question, and something I didn't have the space to cover in this article. The answer involves "physical properties". Physical properties are interesting characteristics of an expression that impact its layout, presentation, or location, but not its logical content. Examples include row order, column naming, and data distribution (physical location of data ranges in the cluster).
As an example, consider a memo group that contains both a MergeJoin and a HashJoin operator. The MergeJoin "requires" its inputs to be sorted on the same key. If an input is not already sorted, then we insert a "sort enforcer" operator that will provide the required ordering. Of course, that increases the cost of producing the input, which the optimizer must take into account.
So, getting back to your question, for any given memo group, we always select the equivalent plan with the lowest cost, for a given set of physical properties. However, that has the effect of sometimes selecting a memo group operator that has a higher cost, because it "pays off" higher up the operator tree. Back to my example, say the SQL query is something like this:
SELECT * FROM a, b WHERE a.name=b.name ORDER BY a.name
The lowest cost plan might use a MergeJoin, because its output is already naturally sorted on "name". However, the input to the MergeJoin might be higher-cost than logical equivalents, because it must be sorted. Therefore, we have the situation you alluded to, where we choose a higher-cost subtree because it minimizes the cost of the larger tree that contains it.
Great article! While reading it I noticed some similarities with "Plan Stitching" from MSR, as shown here: https://www.microsoft.com/en-us/research/uploads/prod/2018/0.... It would be interesting to see what the performance differences between the two are. Also, I'm curious why you'd build a query optimizer from scratch when you could adopt or port something like Orca?
Interesting MSR paper. With regard to Orca, we did take a look at it. It's an OLAP optimizer written in C++ (and very lightly commented and documented, unfortunately), and we wanted a Go-based optimizer that was more integrated with our codebase, as well as targeted at OLTP scenarios (as an example, Orca supports parallel search, which makes sense in OLAP, but not so much in OLTP). There is quite a bit of overlap, though, and we read their excellent paper and consulted the code from time to time (I hereby invoke a curse on Hungarian notation, which makes reading their code quite painful). We also suspect that Greenplum did not open source some key bits of the optimizer code, such as their search prioritization algorithms.
Yes, that code was painful indeed (I was trying to port it at one time). On another note, are there any plans to add advisory tools for database tuning (such as AutoAdmin from MSR), or to add ML to self-tune depending on workload?
Database tuning tools are not on the near-term roadmap, but I think it's a good idea that we'll want to look at eventually. Using ML to self-tune databases is a hot research subject right now, so we'll be following developments on that front.
>it sounds like it took a pretty big leap of faith to go from heuristics to cost-based optimization
I'm pretty sure cost-based optimization is common enough, and practically proven effective enough, that it's not really fair to call it a "leap of faith"; e.g. I'm pretty sure MySQL and Postgres both do it (and you can see it in their EXPLAINs), and I imagine most commercial DBs do as well.
It's more of a natural progression as the DB gets improved.
It's a good article/work nonetheless, but nothing out of the ordinary in DB implementation, AFAIK.
Postgres and MySQL both do calculate costs. One difference is that they are not based around the Memo data structure (someone correct me if I'm wrong about that!), which gives an extra measure of flexibility, power, and elegance to solving the problem of SQL optimization. The Memo structure was introduced by Goetz Graefe in his seminal papers on top-down, rule-based query optimization (Exodus, Volcano, Cascades), which are the chief inspiration for CockroachDB's new optimizer. In the late 90's, Goetz joined Microsoft, and assisted the SQL Server team in rewriting their optimizer from scratch based on the techniques he had pioneered. I believe this foundation is a big reason why the SQL Server optimizer has become one of the best (if not the best) in the world. We are indeed building on the shoulders of giants.
>I believe this foundation is a big reason why the SQL Server optimizer has become one of the best (if not the best) in the world
I'm not at all versed in the subject, but it seems like the memo data structure isn't significant to the correctness of the outcome: it makes the search space smaller/faster, but doesn't seem to lead to better cost assignment, and the logical equivalence rules should be enforced regardless of the memo data structure (the alternative being external to the memoization structure).
Is it just that faster search => more paths searched before timing out/giving up? And perhaps easier to reason about/encode?
To be clear, the Exodus/Volcano/Cascades foundation includes more than just the Memo. Also important is the idea that you can declaratively define rewrite rules and generate parts of your optimizer. Also, optimization proceeds top-down rather than bottom-up, which allows you to prune the search space as you explore, as well as open up important lines of communication from nodes higher in the tree to nodes lower in the tree (i.e. required physical properties, which I explain in another HN comment).
You're correct on your points (search space can be smaller/faster, with less memory usage). In addition, when you're dealing with a piece of software as complex as a SQL optimizer, choosing modular, elegant abstractions can make a big difference in terms of understandability, maintainability and extensibility of the codebase. That in turn leads to fewer bugs, such as subtle violations of logical equivalence when transforming the tree. I think the more structured, general approach to optimization allows mere mortals to manage higher levels of complexity, in which the RDBMS defines 100's of transformations, offers many join ops (hash, merge, nested, zigzag, etc.) and data types (geospatial, json, etc), as well as incorporating data locality in the new world of geo-distribution.
As one example to illustrate why there's a practical correspondence between well-designed abstractions/architecture and correctness (i.e. fewer bugs), consider these PRs that we recently merged:
These PRs randomly disable transformation rules and perturb the cost of expressions in the optimizer, in order to test alternate (but equivalent) plans. It took just 100 lines of code to add this support, because it simply plugs into the general optimization framework. And in return, our existing test cases can now test multiple different plans, allowing us to shine light into the dark corners where bugs like to hide (cockroaches, of course!).
As another example, we have a testing mode called "optsteps", which will single-step the optimizer and print out the tree after each transformation to make it easy to track down bugs. Here's an example of that output, again made possible by the extensible, modular design of the optimizer:
Yes, Postgres, MySQL, MSSQL, Oracle, and I'm sure DB2 all use a cost-based optimizer.
+1 on the sentiment; good to see them continuing to improve. They have lots of competition to look at for examples of what to do and what not to do. Progress is good nonetheless.
more generally, ignoring database query optimisation specifically, if you are interested in learning about discrete optimisation techniques, I recommend this course: https://www.coursera.org/learn/discrete-optimization
> outside expert on database optimizers run a months-long bootcamp, with the goal of teaching our developers how state-of-the-art optimizers work, complete with homework assignments to read seminal papers on the subject
Any chance you could share more about this? This general area of how to build a production grade SQL optimizer seems to be a thing that's more scattered in tiny pieces across a wide number of papers, or held in peoples' heads, than aggregated in a teaching manner. It seemed that the realistic general advice on how to build a SQL optimizer was to poach someone from the SQL Server team. ;)
I've generally just gone referring back to the unfinished Building Query Compilers[1] when pondering this subject. Not that the hundreds of references don't provide sufficient material to read though as well, but it'd be interesting to hear what a bootcamp like this presented as state of the art.
This is like studying chess: working through the complexities that arise from the number of ways a query can be executed is like studying an opening, the tree data structure mentioned is also reminiscent of chess, and using statistics is like studying GM games. I feel like someone with a good understanding of chess would enjoy such a project. Anyway, great post and breakdown.
Ah, my past life and knowledge developing Oracle systems comes flooding back.
I don't know if Oracle still uses the CBO now, or even SQL or PL/SQL, but I am sure that a large chunk of their revenue still comes from supporting these legacy systems.
I've been looking for books that cover database implementation techniques such as this — the "memo" algorithm was new to me, but I'm also generally interested in query planning and optimization. Anyone got any recommendations? I started a thread: https://news.ycombinator.com/item?id=18410692.
Has any thought been given to whether this query planner could be adapted (much further down the road, I'd guess) to support dynamic replanning? (That is, altering the plan mid-query, if it should be found that the row-count estimates were way off.)
We've talked about ideas along those lines. However, rather than altering the plan mid-query, we'd be more likely to have some kind of feedback mechanism that corrects estimates for the benefit of future plans.
Another commenter posted an interesting paper that's related to your question:
[Cockroach Labs engineer here, I worked on statistics]
All table statistics are stored in a system table (system.table_statistics). CREATE STATISTICS scans the table, calculating statistics on a column, and adds a new row to system.table_statistics.
We still have a ways to go in making statistics automatic (right now it requires a manual call to CREATE STATISTICS) and more useful to the optimizer (which currently only uses basic stats). We're working on that right now for the 2.2 release.
SELECT *
FROM a, b
WHERE a.x = b.x AND a.y='foo' AND b.z='bar'
The best plan for this simple query depends on the selectivity of the predicates. For example, if 20k rows have a.y='foo', but only 10 rows have b.z='bar', then it's best to scan a b.z index, then lookup matching rows in a. But if the #rows is reversed, then it's better to scan an a.y index and lookup in b. This is a simplified example, but we do see queries along these lines in real OLTP workloads.
You're correct that for many OLTP workload queries, simple heuristics are sufficient. However, even if that's true for 90% of queries, it's the last 10% that gets you. We've seen customers with 10 queries in their workload, where 9 work well but the last 1 gets a bad plan that is 10x slower than it could/should be. Maybe they can rewrite it, or maybe they don't have sufficient knowledge to do so. Or perhaps they're using an ORM and don't have control over the queries it's sending to the DB. In addition, many mostly-OLTP workloads contain a few OLAP reporting queries in the mix. Developers don't expect their OLTP database to perform like a dedicated OLAP data warehouse DB, but they also expect it not to fall over when it gets a more complex query.