So either it was called "key" and Kei is a homophone, or vice versa (hence the misspelling), or they are two translations from a different language, or they really like Kei cars?
Where is the <1% of the world population who can translate this to human-speak, when you need them?
> The lambda-Pi-calculus Modulo is a variant of the lambda-calculus with dependent types where beta-conversion is extended with user-defined rewrite rules. It is an expressive logical framework and has been used to encode logics and type systems in a shallow way. Basic properties such as subject reduction or uniqueness of types do not hold in general in the lambda-Pi-calculus Modulo. However, they hold if the rewrite system generated by the rewrite rules together with beta-reduction is confluent. [...]
Also a layman here, but I might be able to say something helpful. This is a bit of a long post, as I am trying to provide some context.
A useful way to think of logic and type systems is simply as rewrite rules on arbitrary strings. In logic we call these strings "theorems" and in type theory we call them "types".
In particular, say we have the logical theorem "a /\ (a -> b)". In classical (propositional) logic this corresponds to "a implies b, and a is true", so we make the (obvious) deduction that "b" is true. In other words, we rewrite "a /\ (a -> b)" with the string "b".
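To make the "rewriting strings" view concrete, here's a tiny sketch in Python. The rule pattern and the `rewrite_modus_ponens` helper are purely illustrative, my own invention, not anything from Kei:

```python
import re

def rewrite_modus_ponens(theorem: str) -> str:
    """Rewrite the string 'a /\\ (a -> b)' to 'b' when the shape matches;
    otherwise leave the theorem alone."""
    m = re.fullmatch(r"(\w+) /\\ \(\1 -> (\w+)\)", theorem)
    return m.group(2) if m else theorem

print(rewrite_modus_ponens(r"a /\ (a -> b)"))  # -> b
```

The point is just that the deduction is literally a string-to-string transformation; nothing in the rule "knows" what implication means.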
Intuitively, we want to say the above rewrite means the same thing as "(a /\ (a -> b)) -> b". However, the former is a rewrite while the latter is just a static string. Thus we are forced to distinguish between the formal "->" character and the rewrite process, despite both naively/intuitively meaning inference.
A bit of terminology. Classically, we call strings "theorems" and when we rewrite one string for another, we call that a "deduction". So say we have a string "S" and rewrite it as "T", a common way of notating such a deduction is "S => T". Furthermore, the whole system of deductions is what we call our "metalogic" while the collection of theorems produced is our "formal logic".
Anyway, so we have to distinguish between "->" and "=>", which seems a bit unfortunate. There is this standard tool in classical logic called the Deduction Theorem which adds a rule that given "S => T" we can deduce "S -> T". However, the justification for this uses somewhat sophisticated math and, by necessity, it's an argument in some (meta)metalogic. There are other ways of handling this and metalogics that don't have the Deduction Theorem at all. I have even read that Quantum Logic doesn't have such a rule and that adding it in turns the whole system into classical logic!
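For reference, the standard statement of the Deduction Theorem, written with the usual turnstile notation (Γ here is just the ambient set of hypotheses, not Kei's Γ-reduction):

```latex
% Deduction Theorem: if T is deducible from S (plus hypotheses Gamma),
% then the formal implication S -> T is deducible outright.
\Gamma, S \vdash T \quad\Longrightarrow\quad \Gamma \vdash S \to T
```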
Now, to finally get to the point, Kei looks like it uses a metalogic it's calling "λ-Calculus Modulo Theory" which ports the Deduction Theorem to type theory, or in particular, the dependently typed λ-calculus. They seem to be calling these new rules Γ-reduction. In other words, it attempts to make general rewrite rules, like β-reduction, first-class citizens by adding types that correspond to said rules.
If I'm understanding correctly, one attractive feature of this is that it "closes the loop," so to speak, allowing one to encode the λ-Calculus Modulo Theory within itself, which by extension lets us encode any other λ-calculus as well. Ostensibly, this should allow for maximally compact and expressive types, since you don't have to hack simple metalogical concepts into a less powerful type system.
I only skimmed their paper, but in general one could use their system to model some badly-behaved (i.e. non-confluent) type theories; however, they seem to have (meta)theorems showing that certain uses of Γ-reduction give you well-behaved ones.
A “confluent” term rewriting system (TRS) just means its set of rules has this property: whenever multiple rules are applicable to a term, it doesn't matter which one you choose to apply; repeatedly rewriting the term eventually reaches the same result either way.
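A toy illustration of that (my own example, nothing from the Kei paper): two overlapping simplification rules for "+" — drop a right-hand "0" and drop a left-hand "0" — which, applied in either preference order, still normalize a term to the same result:

```python
def step(term, prefer):
    """Apply one rewrite somewhere in the term, if possible.
    'prefer' picks which rule to try first when both match at a node."""
    if not isinstance(term, tuple):
        return term, False
    op, l, r = term
    l, changed = step(l, prefer)
    if changed:
        return (op, l, r), True
    r, changed = step(r, prefer)
    if changed:
        return (op, l, r), True
    rules = [lambda: l if r == "0" else None,   # t + 0 -> t
             lambda: r if l == "0" else None]   # 0 + t -> t
    if prefer == "left":
        rules.reverse()
    for rule in rules:
        out = rule()
        if out is not None:
            return out, True
    return term, False

def normalize(term, prefer):
    """Rewrite until no rule applies (the 'normal form')."""
    changed = True
    while changed:
        term, changed = step(term, prefer)
    return term

t = ("+", "0", ("+", "x", "0"))
print(normalize(t, "right"), normalize(t, "left"))  # both: x
```

Non-confluent systems are exactly the ones where a choice like `prefer` could change the final answer, which is why the paper cares so much about the property.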
Beta-reduction is one of the core operations you can perform in the lambda calculus. You can think of it as being equivalent to a function call; an alternate analogy is that when a C compiler inlines a particular function call during compilation, it's performing beta-reduction at that call site.
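A minimal sketch of a single beta step, using a home-grown tuple encoding of lambda terms (the encoding and helper names are mine, not anything from Kei or TAPL):

```python
def substitute(term, var, value):
    """Capture-naive substitution -- fine for this closed example."""
    if term == var:
        return value
    if isinstance(term, tuple) and term[0] == "lam":
        _, v, body = term
        if v == var:
            return term  # var is shadowed by the binder
        return ("lam", v, substitute(body, var, value))
    if isinstance(term, tuple) and term[0] == "app":
        return ("app", substitute(term[1], var, value),
                       substitute(term[2], var, value))
    return term

def beta_reduce(term):
    """One beta step at the root: (lam x. body) arg -> body[x := arg]."""
    if (isinstance(term, tuple) and term[0] == "app"
            and isinstance(term[1], tuple) and term[1][0] == "lam"):
        _, (_, var, body), arg = term
        return substitute(body, var, arg)
    return term

identity = ("lam", "x", "x")
print(beta_reduce(("app", identity, "y")))  # -> y
```

This is exactly the "inlining" intuition: the application node disappears and the argument is pasted in wherever the bound variable occurred.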
That doesn’t explain everything in the quoted text, but hopefully it helps make it a bit more understandable. If you really want to learn more about this area of CS, take a look at the book Types and Programming Languages (TAPL).
I really need a welcome doc for people who aren't familiar with these kinds of topics.