It seems to me that the representation of cardinalities as "Univalent Types" could just as easily be done using decidable setoids (e.g. the number 3 is represented by a setoid with a decidable equivalence relation that divides its members into 3 equivalence classes).
There are only four things in this system that require a notion of equality and/or equivalence: tuples, cardinalities, relations (as functions from tuples to cardinalities) and queries (as functions from relations to relations).
I think this could all be done in ordinary dependent type theory, without any HoTT or the Univalence Axiom (for example, in Idris, which is apparently incompatible with the Univalence Axiom). Each of the four notions of equivalence would be represented by a corresponding type function.
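To make the setoid idea concrete, here is a minimal sketch in Python rather than Idris (so it only models the data, not the dependent types); the `Setoid` class and its names are my own illustration, not anything from the discussion above:

```python
# Illustrative sketch: the cardinal 3 represented as a setoid, i.e. a
# carrier together with a decidable equivalence relation. The relation
# "equal mod 3" partitions the carrier into exactly 3 classes.

class Setoid:
    def __init__(self, carrier, equiv):
        self.carrier = list(carrier)
        self.equiv = equiv  # decidable: returns a bool for any pair

    def classes(self):
        """Pick one representative per equivalence class."""
        reps = []
        for x in self.carrier:
            if not any(self.equiv(x, r) for r in reps):
                reps.append(x)
        return reps

three = Setoid(range(12), lambda a, b: a % 3 == b % 3)
print(len(three.classes()))  # 3 equivalence classes: the cardinal 3
```

Decidability is what makes this workable without univalence: `equiv` is just a boolean-valued function, so class membership can be computed rather than postulated.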
Have I missed something?
No, thank you.
It's not formal verification by any means, but it seemed to work pretty well in practice. It would take forever, though. I'd start it and walk away for a couple of hours when I wanted a thorough test. But my confidence in correctness, and the performance gain (it measured perf too), were well worth the effort.
If you have a query 1 that satisfies cases a and b, but then identify a bug and add a case c, you can write a query 2 that satisfies a, b, and c, and Cosette may indeed identify that case as the difference (or not!). But you still have no guarantee that you will not then discover a case d that is not satisfied by 1, or 2, or a future 3.
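As a hypothetical illustration of that failure mode (the table, columns, and queries here are made up, not from Cosette): two queries can agree on every row your tests cover and still diverge on an untested one, such as a NULL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, price INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [("a", 10), ("b", 20)])

# query1 filters on a predicate that is true for every non-NULL price;
# query2 has no filter at all.
query1 = "SELECT name FROM items WHERE price > 15 OR price <= 15"
query2 = "SELECT name FROM items"

# On the data covering the known cases, the two queries agree:
assert conn.execute(query1).fetchall() == conn.execute(query2).fetchall()

# An unanticipated "case d": a NULL price. The predicate evaluates to
# NULL, so query1 drops the row while query2 keeps it.
conn.execute("INSERT INTO items VALUES ('c', NULL)")
print(conn.execute(query1).fetchall())  # [('a',), ('b',)]
print(conn.execute(query2).fetchall())  # [('a',), ('b',), ('c',)]
```

No finite set of observed cases rules out the next divergence; that is exactly the gap a semantic equivalence proof is meant to close.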
And then you ask: Why?
Well, if you can prove two DBMSs equivalent for all SQL queries, you can prove that an optimization to a DBMS (such as an improvement to the query optimizer, or a new indexing system) will not affect the behavior of any possible SQL you provide.
This, I assume, is the long-term target of the article.
Additionally, SQL as supported by some DBMSes (e.g. Postgres, thanks to recursive CTEs and window functions) is Turing complete. That makes proving equivalence in general a bit challenging.
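A small taste of the unbounded iteration that recursive CTEs provide (this alone is not a Turing-completeness proof, just the key ingredient): a single query can run an arbitrary-length loop, here computing a Fibonacci number. The example uses SQLite via Python's sqlite3; Postgres's `WITH RECURSIVE` syntax is essentially the same.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The CTE carries a loop counter n and the running pair (a, b),
# stepping (a, b) -> (b, a + b) until n reaches 10.
fib10 = conn.execute("""
    WITH RECURSIVE fib(n, a, b) AS (
        SELECT 1, 0, 1
        UNION ALL
        SELECT n + 1, b, a + b FROM fib WHERE n < 10
    )
    SELECT a FROM fib WHERE n = 10
""").fetchone()[0]
print(fib10)  # 34
```

Once queries can loop like this, deciding whether two arbitrary queries are equivalent runs into the same undecidability wall as deciding program equivalence in general, which is why tools like Cosette target restricted fragments.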