The general optimization is actually not that novel: a DBMS might do this for parts of a query; at least in a high-performance database lecture it was taught as a possible optimization. Edit: I'd intuitively expect the improvements to be more than 5%, though.
I have been JIT-ing query predicates in database-like systems for over a decade, and some commercial databases have supported it for a lot longer than that. There are a couple of relevant aspects that impact the performance benefit.
The performance gains are much higher if the database engine was designed for JIT-ed execution from day one. Grafting it onto an existing engine after the fact, as Postgres did, yields substantially less benefit than is theoretically possible. Additionally, it mostly benefits workloads where query predicates have a large amount of data to process; it doesn't do much for OLTP. But in the right system and context, large integer-factor performance improvements are routinely achievable.
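To make the idea concrete, here is a minimal sketch of what JIT-ing a predicate buys you: instead of walking an expression tree for every row (per-row interpretation overhead), the predicate is compiled once into straight-line code. Python's `compile()`/`eval()` stands in here for the native-code generation (e.g. LLVM) a real engine would use; the tree encoding and names are illustrative, not any particular engine's API.

```python
def interpret(node, row):
    """Walk the expression tree for one row -- dispatch cost paid per row."""
    op = node[0]
    if op == "and":
        return interpret(node[1], row) and interpret(node[2], row)
    if op == "gt":
        return row[node[1]] > node[2]
    if op == "lt":
        return row[node[1]] < node[2]
    raise ValueError(f"unknown op: {op}")

def jit(node):
    """Compile the tree once into a single function -- no per-row dispatch."""
    def emit(n):
        op = n[0]
        if op == "and":
            return f"({emit(n[1])} and {emit(n[2])})"
        if op == "gt":
            return f"(row[{n[1]!r}] > {n[2]!r})"
        if op == "lt":
            return f"(row[{n[1]!r}] < {n[2]!r})"
        raise ValueError(f"unknown op: {op}")
    # e.g. "lambda row: ((row['a'] > 10) and (row['b'] < 5))"
    return eval(compile(f"lambda row: {emit(node)}", "<jit>", "eval"))

# Predicate: a > 10 AND b < 5
pred = ("and", ("gt", "a", 10), ("lt", "b", 5))
rows = [{"a": 11, "b": 4}, {"a": 9, "b": 4}, {"a": 12, "b": 7}]

compiled = jit(pred)
assert [interpret(pred, r) for r in rows] == [compiled(r) for r in rows]
print([compiled(r) for r in rows])  # [True, False, False]
```

The compilation cost is paid once per query, which is why the technique pays off on scans over lots of rows and does little for short OLTP transactions.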
Yes, this is a thing. But the key difference there is... that's being done ... in user space.
We live in a world where just last week Google turned off io_uring access on a pile of machines, and that's only "executing" restricted sets of operations. Executable code in kernel = big giant target painted on back.
Ah, yes, that's an excellent point. Because someone else mentioned the 5% performance figure, I looked at it with my performance hat on, not my security hat. OTOH we have BPFilter with its VM.
FWIW, IBM mainframes have channel programs: as I understand it, the host can dynamically generate small programs and send them to the channel controller to execute, and they can include branching/conditionals as well. https://en.wikipedia.org/wiki/Execute_Channel_Program
E.g. PostgreSQL can do that these days (not sure if it did back then): https://www.postgresql.org/docs/current/jit-reason.html
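A quick way to see PostgreSQL's JIT in action (assuming a build with LLVM support; `jit_above_cost` is lowered here only to force JIT on a small demo table, and `big_table` is a hypothetical name):

```sql
SET jit = on;
SET jit_above_cost = 0;   -- demo only: force JIT even for cheap queries
EXPLAIN (ANALYZE)
SELECT count(*) FROM big_table WHERE a > 10 AND b < 5;
-- with JIT engaged, the plan output includes a "JIT:" section reporting
-- how many functions were compiled and the time spent on compilation
```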