That depends on the programming language: C# with LINQ and functional programming languages, for example, are pretty good at expressing queries. SQL has no capability for abstraction, so only the simplest queries will be succinct; if you want to, e.g., union two similar queries, you'll have to copy and paste.
C# LINQ is basically a very roundabout way of doing what SQL was made for. What SQL does, and does really well, is operate on sets of data (aka tables) using set theory. You can get succinct expressions with C# and functional languages, but they are not really possible to optimize the same way SQL is, because they are not bound by set theory the same way.
SQL by default operates on multisets instead of sets; it has no syntax for a set (aka table) literal (SELECT 1, "a" FROM DUAL UNION SELECT 2, "b" UNION ... doesn't count); sub-queries (using another SELECT in a FROM clause) are a pretty recent addition; and the list goes on. It's not really based on set theory, or on first-order logic, or the relational model. If it were, it would be Prolog.
Regarding table literals, some products support using
VALUES (1, 'a'), (2, 'b'), ...
as a table constructor in general, not just in INSERT, and looking at SQL:92, SQL:99, and SQL:2003 it looks to me like this should be standard SQL. Derived tables, aka subqueries, exist in at least SQL:92 as well, so whether or not that counts as recent depends on how you look at it, I think.
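For what it's worth, SQLite is one product where this works today; a quick sketch via Python's sqlite3 module (the literal values are just an example):

```python
import sqlite3

# VALUES used as a table constructor in FROM, outside of INSERT.
# SQLite names the generated columns column1, column2, ...
conn = sqlite3.connect(":memory:")
rows = conn.execute(
    "SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c'))"
).fetchall()
print(rows)  # [(1, 'a'), (2, 'b'), (3, 'c')]
```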
I am not trying to defend SQL with this, and all in all this does not take away from the points you raised, but the above were something that stood out.
They were in SQL:92, but IIRC adoption was somewhat slow and patchy, and generally finished sometime in the early 2000s.
My point is, the parent claims that SQL "operate[s] on sets of data (aka tables) using set theory", and does it "really well". But when you actually look at SQL, you realise that sets/tables aren't really first-class: derived tables were only added in later versions of SQL, and literal tables still don't exist, yet those are things you'd expect a language focused on table manipulation to have. Nope, it's a language for building very specific kinds of queries which was then patched and extended into something more general.
I find most strongly typed functional languages get too hung up on the structure of the rows. E.g. what happens to SELECT user_name, price FROM orders JOIN users ON user_id? You might end up with something like this:
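(A rough Python sketch of the problem, with NamedTuple standing in for a typed language's record types; all the type and function names here are invented for illustration:)

```python
from typing import NamedTuple

# Each table gets its own row type...
class User(NamedTuple):
    user_id: int
    user_name: str

class Order(NamedTuple):
    order_id: int
    user_id: int
    price: float

# ...and the join result needs yet another row type, spelled out by hand.
class UserNamePrice(NamedTuple):
    user_name: str
    price: float

def join_orders_users(orders: list[Order],
                      users: list[User]) -> list[UserNamePrice]:
    # A hash join on user_id; the join order is fixed by the code itself.
    by_id = {u.user_id: u for u in users}
    return [UserNamePrice(by_id[o.user_id].user_name, o.price)
            for o in orders if o.user_id in by_id]

print(join_orders_users([Order(1, 7, 9.99)], [User(7, "ada")]))
```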
But that will quickly get messy if you have multiple tables, and you have to explicitly say how to order joins (which you don't do in SQL), and something which just analyses the function calls (i.e. can't look inside the lambdas) can't really know that it only needs to look at a few columns of the tables. So maybe instead you need special values to represent the columns of your tables, but then how can you represent the necessary renaming when you join one table multiple times? And how will you describe computations with those fields?
Ultimately I think this is a problem largely involving bindings and sets of things, and that these are difficult (if not impossible) to move to the type level while providing a nice API. So you either get a risky API that can throw runtime errors for queries that could never be valid, or you get issues like the ones I described above.
Datasets there work like [(column_name, value)] heterogeneous lists would work in Haskell. It really does not get that problematic. (All the problems go away in Haskell if you decide to use maps, too.)
But yes, it throws runtime errors. The nature of retrieving data from a foreign service is that it will throw runtime errors. There is some verification you can do, but a strongly typed database layer just makes promises it cannot fulfill.
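The [(column_name, value)] representation can be sketched in Python as plain association lists (names here are made up; the point is that lookup and renaming fail at runtime, not compile time):

```python
# Rows as association lists: [(column_name, value), ...].
Row = list[tuple[str, object]]

def col(row: Row, name: str) -> object:
    for k, v in row:
        if k == name:
            return v
    raise KeyError(name)  # the runtime error a typed layer tries to rule out

def rename(row: Row, prefix: str) -> Row:
    # Prefixing column names makes self-joins unambiguous.
    return [(prefix + k, v) for k, v in row]

order = [("order_id", 1), ("user_id", 7), ("price", 9.99)]
user = [("user_id", 7), ("user_name", "ada")]
joined = rename(order, "orders.") + rename(user, "users.")
print(col(joined, "users.user_name"))  # ada
```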
With Common Table Expressions you can compute the shared data in one set ("temporary table" is a good mental model for it), compute the unique parts in subsequent sets, and use the results for the UNION in a single query.
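A minimal sketch of that pattern, run through Python's sqlite3 (the orders table, its columns, and the filters are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, status TEXT, price REAL);
    INSERT INTO orders VALUES (1, 'open', 10.0), (2, 'shipped', 20.0),
                              (3, 'open', 5.0);
""")
rows = conn.execute("""
    WITH expensive AS (                 -- shared data, written once
        SELECT * FROM orders WHERE price >= 10.0
    )
    SELECT id FROM expensive WHERE status = 'open'
    UNION                               -- unique parts reuse the CTE
    SELECT id FROM expensive WHERE status = 'shipped'
""").fetchall()
print(sorted(rows))  # [(1,), (2,)]
```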