One of the reasons I don't like ORMs is that I can't see the underlying query and truly optimize a service. That may be fine for a new service where performance isn't crucial, but once it needs to scale, you need to put on your engineering hat, get your hands dirty, and optimize queries.
You'll find you need to rewrite queries so that there isn't complex nesting in the WHERE clause, and flatten your logic so that the SQL optimizer can use your indexes. You may need to nest SELECT statements within SELECT statements, where the innermost SELECT uses indexes and the outer queries operate on the result of the inner query, which is smaller than the whole table.
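A minimal sketch of that rewrite, using SQLite via Python (the `orders` table and its columns are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         status TEXT, total REAL);
    CREATE INDEX ix_orders_customer ON orders (customer_id);
""")
conn.executemany(
    "INSERT INTO orders (customer_id, status, total) VALUES (?, ?, ?)",
    [(i % 100, "open" if i % 3 else "closed", i * 1.5) for i in range(1000)],
)

# The inner SELECT narrows rows via the indexed column; the outer SELECT
# applies the non-indexed predicates to the much smaller intermediate result.
rows = conn.execute("""
    SELECT * FROM (
        SELECT * FROM orders WHERE customer_id = 42
    ) WHERE status = 'open' AND total > 10
""").fetchall()
print(len(rows))  # 6 matching orders
```

Whether this actually helps depends on the engine (SQLite, for one, may flatten the subquery back), but on databases with weaker optimizers this flattened shape can be the difference between an index seek plus a small filter and a full table scan.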
I feel that SQL aimed to be Python and became x86 assembly instead. It's no longer a simple "just works" query language the moment you have to worry about predicate flattening, join decomposition, CTEs that introduce optimization barriers, and "IN()" being faster than equivalent "JOINs".
As a result, I started a project, called GraphQL compiler, that lets you write read-only, database-agnostic queries: https://graphql-compiler.readthedocs.io/
The core idea of the project is to get us the convenience of specifying "what question I want answered," without the inconvenience of "how is the answer computed / with which specific set of queries / where did the data come from?" -- unless you want to peek under the hood, of course. All the visibility into the nitty-gritty details is available on demand, but without the tedium of having to hand-optimize queries and know all the "magic" ways in which queries get faster or slower for each individual kind of database.
So...exactly like SQL, then?
> It's no longer a simple "just works" query language the moment you have to worry about predicate flattening, join decomposition, CTEs that introduce optimization barriers, and "IN()" being faster than equivalent "JOINs".
Though it doesn't seem to address how you'd optimize things with the GraphQL compiler when the need arises, short of massaging the queries just as you would with SQL.
This is clearly a massive challenge, but one made easier by the fact that GraphQL compiler queries (unlike SQL queries) operate at a much higher level of abstraction. In SQL, you write "here's a CTE, now recursively JOIN X.foo to Y.bar" where X and Y could be just about anything, even something where a recursive JOIN is nonsensical. If you want to put a WHERE clause, you have to decide whether it goes in the recursive CTE itself, in a separate CTE that is ordered before the recursive CTE, or if you want to wrap the recursive CTE into another SELECT and put the WHERE there. The correct answer varies from database to database, and as a function of the size, layout, and index coverage of your data.
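To make the placement question concrete, here is a SQLite sketch (hypothetical `edges` table) of the same recursive query with the predicate inside versus outside the recursive CTE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (parent INTEGER, child INTEGER)")
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [(1, 2), (1, 3), (2, 4), (3, 5), (5, 6)])

# Variant A: predicate inside the recursive term (prunes during traversal).
inside = conn.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT 1
        UNION ALL
        SELECT e.child FROM edges e JOIN reachable r ON e.parent = r.node
        WHERE e.child < 6
    )
    SELECT node FROM reachable ORDER BY node
""").fetchall()

# Variant B: recurse fully, then filter in a wrapping SELECT.
outside = conn.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT 1
        UNION ALL
        SELECT e.child FROM edges e JOIN reachable r ON e.parent = r.node
    )
    SELECT node FROM reachable WHERE node < 6 ORDER BY node
""").fetchall()

print(inside == outside)
```

The two variants agree here only because the pruned vertex is a leaf; in general, filtering inside the recursion changes which paths get explored, and even when the results are equivalent, which placement the planner handles well varies by database and data, exactly as described above.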
In GraphQL compiler, your queries are much more declarative in comparison: your query would say "find all subgraphs where vertex A's field 'foo' has value 123, and where A has a recursively-expanded edge (i.e. 0+ hops along that edge) to a vertex with field 'bar' with value 456". It's then the compiler's job to figure out which of the many equivalent SQL statements (or other db language queries, if you aren't using SQL) is going to be the best way to compute the result you asked for.
Here's an example from our test suite:
input query: https://github.com/kensho-technologies/graphql-compiler/blob...
Microsoft SQL Server-flavored compiled SQL output: https://github.com/kensho-technologies/graphql-compiler/blob...
I'm writing a blog post about this with more detail, follow me on Twitter if you'd like to see it when it comes out.
When the database is open-source, and I spot something that's broken that I know how to fix, I try to fix it. Here's a fix for a severe database query planner correctness bug I contributed to an open-source database called OrientDB: https://github.com/orientechnologies/orientdb/pull/7015
Unfortunately, Microsoft SQL Server, Oracle, and many other databases are not open-source, and I can't hack on their query planners. And even if they were, SQL is an absolutely massive language (the complete spec runs to thousands of pages). The GraphQL compiler query language is tiny in comparison; the spec is maybe 10 pages: https://graphql-compiler.readthedocs.io/en/latest/language_s...
It's a lot easier to intelligently map a small language to a big one than it is to optimize the big language outright.
In a sense, SQL is just not designed to be easy to optimize — it's too broad, and there are too many equivalent-ish ways of doing the same thing. This is why even after incredibly smart people cumulatively spent engineer-millennia on the query execution and optimization systems in SQL databases, we still keep having issues and there are still plenty of areas for improvement.
More info and more concrete examples of "why not just write SQL" in my upcoming blog post!
The main reason is the inherent difficulty of cardinality estimation, as the paper says.
Not every optimization is worth having. There is typically a distributed cost, paid in extra planner cycles for queries that don't benefit from the optimization. This is one of the main reasons why it's hard to contribute new optimizations to the Postgres planner. Naturally, it's possible that a marginal optimization will be incredibly important to one particular query or user. It's a value judgement in each case.
Frankly, I find the suggestion that SQL is not designed to be easy to optimize baffling.
The difficulty of cardinality estimation is a function of the expressiveness of the language. Imagine a new query language, SQL2, that only has the SELECT and FROM keywords -- no WHERE, no JOIN, nothing else. Cardinality estimation in SQL2 is trivial: just store the counts for each table, and you can estimate everything trivially. Optimal query plans are trivial by extension as well.
Now let's add the WHERE keyword and a few operators to this SQL2. Cardinality estimation and good query planning got much harder now! For example, if the WHERE predicate touches two columns, we need to know about correlations between the two columns, or we might make incorrect estimates and therefore get worse plans. And since the plan space got bigger, we spend more cycles on planning. If we continue to add more features to SQL2 to bring it to parity with SQL proper, all these problems get harder as we go.
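A toy calculation (all numbers invented) of the correlation problem just described:

```python
rows = 1_000_000
# Hypothetical per-column selectivities: city = 'SF' matches 5% of rows,
# state = 'CA' matches 10%.
sel_city, sel_state = 0.05, 0.10

# Planner estimate under the independence assumption: multiply them.
estimated = rows * sel_city * sel_state

# Reality when the columns are perfectly correlated (every 'SF' row is
# also a 'CA' row): the conjunction matches exactly the 'SF' rows.
actual = rows * sel_city

print(f"estimated {estimated:.0f} rows, actual {actual:.0f} rows")
```

A 10x cardinality underestimate like this one can push the planner into a nested-loop join or a full scan that would never have been chosen with the correct count.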
The language semantics behind GraphQL compiler aim to get sufficient expressiveness for many (hopefully, most) use cases, while limiting the scope so that the problems of cardinality estimation and good planning don't become too hard to solve effectively. In comparison, SQL is significantly more expressive, and as a result also much more difficult to execute and optimize.
> user: petergeoghegan
> about: PostgreSQL major contributor, committer.
“The GraphQL compiler turns read-only queries written in GraphQL syntax into different query languages” is useful for GraphQL ‘fans’, but I don’t see how that is going to solve the problem.
Designing a user friendly query language for relational data isn’t the hard part. Executing such queries efficiently is.
For SQL, there’s half a century of research on that. This paper is part of it, and indicates that, at this moment in time, effort is better spent on methods for keeping statistics on data up to date than on making cost models more fine grained.
I couldn't agree more with this part :)
To be clear, GraphQL compiler isn't trying to "solve" SQL itself; it merely addresses the fact that by the time all the table splitting, sharding across disks and machines, and similar required maintenance operations are done, your SQL queries have grown impractically complex and unreasonable to write by hand. At sufficient scale, and with enough additional non-SQL databases in play, writing adequate queries becomes wildly impractical.
In the GraphQL compiler world, all your databases (SQL and non-SQL) are represented in one unified schema against which you write database-agnostic queries, and GraphQL compiler handles the nitty-gritty details of "which query runs where." GraphQL compiler is not a toy project I built for fun — it's a core piece of data infrastructure that Kensho (the company where I work) has been happily using in production for over 3 years now.
I'm writing a blog post about exactly this, and I hope to publish it very soon! Follow me on Twitter if you'd like to see it when it comes out.
I thought most of them had some feature where you could dump the query before it gets sent to the DB.
Stuff like this: https://stackoverflow.com/questions/1412863/how-do-i-view-th...
If you're looking at the generated SQL anyway, I would rather just use the SQL directly in my code. There are probably features in ORMs that let you write raw SQL and tell them how to map the result to an object, but I haven't used an ORM in a while.
A good table design is only good BECAUSE it enables efficient predicate use in your SQL queries.
You can’t just query any column willy nilly, you have to plan it. That’s why I like thinking in SQL with the table definition on-hand.
For example, if I write “SELECT * WHERE x OR y” and “y” isn’t indexed, this will do a full table scan. Not OK. I need to plan my queries so they do something like “WHERE x OR (y AND z)”, where “z” is indexed, so it filters by “z” and then “y”. I don’t want to have to figure out how to get the ORM to produce that.
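A small SQLite illustration of the same point (table and indexes hypothetical); `EXPLAIN QUERY PLAN` shows the OR falling back to a scan until every branch is indexed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER, y INTEGER, z INTEGER)")
conn.execute("CREATE INDEX ix_x ON t (x)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3.
    return " | ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# y is not indexed, so the OR forces a scan of the whole table.
scan_plan = plan("SELECT * FROM t WHERE x = 1 OR y = 2")

conn.execute("CREATE INDEX ix_y ON t (y)")
# With every OR branch indexed, SQLite can satisfy each branch via its index.
index_plan = plan("SELECT * FROM t WHERE x = 1 OR y = 2")

print(scan_plan)
print(index_plan)
```

Other engines have their own versions of this rule, which is why the plan, not the SQL, is what you ultimately need to read.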
But the alternative to an ORM is not opaque blobs of SQL hard-coded all over the app. How do you handle SQL injection attacks, for example? What if you add/rename/drop a column? Do you just grep your code and edit every blob of SQL in the app?
And that also assumes that you are using databases as an object store. Databases are also useful to answer questions like: show me the number of users who have signed up each day for the last month.
Have a class representing a table and methods where you hit the database and map the response to an instance of the class.
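A minimal sketch of that pattern in Python with sqlite3 (class and table names invented); the parameterized query is also the usual answer to the injection question:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str

    @staticmethod
    def find_by_name(conn, name):
        # Parameterized query: the driver handles quoting, which covers
        # the SQL-injection concern raised above.
        rows = conn.execute(
            "SELECT id, name, email FROM users WHERE name = ?", (name,))
        return [User(*row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada', 'ada@example.com')")
print(User.find_by_name(conn, "ada"))
```

Column renames then touch one class and its handful of query methods, rather than SQL strings scattered through the app.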
It’s nice in a typed language when I map what the query will return and the compiler enforces it.
Not all my queries map to a class, but it’s not a big mess, since we only use statically typed languages on the server: I still map the result to a tuple or dictionary instead of a class.
I guess my problem isn't with ORMs, it's with ORM SQL generation.
All the ones I've used or written allow bypassing selects, joins, or an entire query, and manually translating the results, so they don't have to get in the way when optimisation really matters. But the vast majority of the time, IME, in most apps that just isn't necessary, so I'll take the reduced friction of a query builder that automates the basics, as long as it allows bypassing when required.
This is only true for trivial problems; in the real world you are going to have tradeoffs. You can't design a schema for transactional and analytical workloads at the same time, yet every type of business needs some sort of analytics on its data.
The great thing about SQL is that you don't exactly need to know what your future queries are going to look like. Or your dataset.
You need to have a fairly detailed knowledge, though. How are you going to make educated trade-offs if you don’t know what kind of queries will be made, how often, and how important it is they run fast?
That’s why, in evolving programs, the database evolves, too, even if the table content stays 100% the same. Moving some tables to faster storage, splitting them horizontally or vertically, adding or removing indices, compressing or no longer compressing fields, updating statistics more frequently, etc.
The vast majority of your queries will not fall into the category of "bottlenecks that need to be optimized", though, and you (and your probably more inexperienced team) will benefit massively from the less error-prone and more extensible nature of ORMs. (I never again want to deal with an attempt at SQL code reuse that has grown into a string-formatted, quadruple-manifestation, triple-escaped nightmare.)
A good ORM will also ease the transition into more manual SQL, so that you can still retain the benefits of e.g. uniform abstract objects app-side.
You can. I mentioned EXPLAIN in my comment.
And the query planner isn’t a black box. Once you read the documentation on how the engine determines the order of operations, you can start thinking on the same plane as the query engine. You can infer how a query will use indices and how the WHERE clause will be applied.
Admittedly it’s not as easy as using an ORM, but if you’re a SQL expert then you can make queries much more optimized. You’ll never internalize how a SQL interpreter reads your queries unless you do it.
I’m talking about huge tables that are hit many times a second, where you need to start thinking like a Formula One team, being creative with queries to shave off hundreds of milliseconds.
If you're willing to put that kind of time and effort in, you should have no trouble understanding how your ORM generates queries - IME they tend to be far clearer, better documented, and more introspectable than database query planners.
Also, there's a feedback loop to consider. You can choose to get to know your DBMS, or you can choose to hold it at arm's length. Both can be reasonable options, but only one is going to foster expertise.
I've worked in enough places to see a clear pattern: Companies that use ORM for anything beyond really simple CRUD tend to have creeping performance problems with their database. (And typically also a healthy contingent of team members pushing to solve those problems by doing something drastic like migrating to NoSQL.) Companies that don't use ORM generally don't have the same problems.
For my part, I find micro-ORMs such as Dapper to be the sweet spot. They get the vast majority of the convenience benefit, without encouraging poor housekeeping practices.
But I don’t see how one could become an expert at writing ORM queries without knowing the underlying SQL, which means you’re putting in the time to master two languages.
I may just be an outlier, and that’s fine. I like geeking out over SQL optimization.
It's like working in a transpiled language: you need to understand the intermediate language a little, but you don't need to master it. Indeed I'd argue that the ORM often corresponds more clearly to what's actually going on at the query planner level than the SQL does: pulling out an entity via an indexed link to another entity is very different from scanning through a table for cases where one value matches another, and they look different in the ORM and in the query planner, but in SQL they're both just "JOIN".
Until it's not, and then I want hints to save my ass. Actually, not hints: I want to TELL the f'ing computer what to do, because I know better than the optimizer, period.
I was so surprised to find out PG doesn't support hints. I don't think I will ever be able to move anything serious to it until it does; I'm just not going to take that kind of risk.
I have played the whole rewrite-the-query game, trying to convince the optimizer with barrier tricks. No thanks; give me some hints and I will tell it exactly what to do and when, thanks.
Oftentimes PG won’t bother with an index for a variety of reasons (sequential scans can legitimately be faster in some scenarios), especially when the number of rows is small.
It's not just about index usage; it's also about which type of join (loop, hash, or merge) and the join order.
Can anyone comment on how relevant this is with the enhanced statistics types in Postgres 10, 11, 12?
One of the most useful areas for future improvement is making plans more robust against misestimations during execution, for example by using techniques like role-reversal during hash joins, or Hellerstein's "Eddies".
So, the paper definitely talks about how independent column statistics are a problem with big tables in the default stats configuration.
...But the option of creating correlated, non-independent column statistics did not exist in PG until after this paper. Which was my point.
In my experience, flat out increasing statistics sample rates fixes 80%+ of the problems in this paper, with basically no downsides. (You can push that computation to downtime when no-one cares.)
> ...the most important statistic for join estimation in PostgreSQL is the number of distinct values. These statistics are estimated from a fixed-sized sample, and we have observed severe underestimates for large tables.
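A toy simulation of that effect (this is not PostgreSQL's actual estimator, just an illustration): the distinct count observed in a fixed-size sample falls far short of the true count on a large table:

```python
import random

random.seed(0)  # deterministic toy data

# A big "table": 1M rows drawn from 200k possible values, so nearly all
# 200k distinct values are present.
table = [random.randrange(200_000) for _ in range(1_000_000)]
true_distinct = len(set(table))

# Count distinct values in a fixed-size sample, as a planner that samples
# (rather than scans) must.
sample = random.sample(table, 30_000)
seen_distinct = len(set(sample))

print(f"true: {true_distinct}, seen in sample: {seen_distinct}")
```

With roughly 5 copies of each value, a 3% sample sees well under a fifth of the distinct values; scale-up estimators narrow the gap but are notoriously error-prone, which matches the severe underestimates the paper reports.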
Live statistics, incrementally updated as DML executes, are a key feature of a good query optimizer. As a zero-administration RDBMS, SQL Anywhere gained a reputation for having a best-of-breed query optimizer a decade ago; I'm curious whether this still holds true.
In the last decade, the importance of OLAP queries in row stores has diminished due to the superiority of column stores. I'd be interested in a comparison of the Citus query optimizer vs. say Presto.
Another post in this thread mentions adaptive query planning and mistakenly implies that the GEQO is a module for this. My hands have been itching to experiment with some improvements to the GEQO, specifically by improving the genetic algorithms used. When there are many similar queries (i.e. in the adaptive query planning setting), one could also use reinforcement learning to improve query planning over time.
Optimizers are weird.