I apologize if this is off-topic, but I wonder if there has been work done on algorithm analysis using the lambda calculus? The texts I've read on algorithms describe them in procedural code, and the analysis counts the number of procedure calls made. However, I would imagine the description would be different for non-procedural styles of programming. I've heard of a book called "Purely Functional Data Structures" but I haven't read it yet. I'm wondering whether there has been work done on algorithm analysis at the lambda calculus level.
The main problem is that beta reduction is an awkward operation from the perspective of computational complexity, which is a fairly fundamental concept in algorithm analysis. There's a discussion of that here:
P.S.: Purely Functional Data Structures is a great book. Its focus is more on the implementation details of algorithms in a functional language (ML) and the necessary abstractions.
It's not an analysis of algorithms within a formal model, like lambda calculus.
Here is a simple lambda calculus program:
((λ x . (x x)) (λ y . y))

which β-reduces in one step to

((λ y . y) (λ y . y))
Space complexity is also an issue; if our program expands to m symbols, surely space complexity is at least O(m) as well?
Worse yet, β-"reductions" don't necessarily make the program shorter: a term can expand to arbitrary size before it starts to shrink. It's not usually obvious how m (the length of the program) relates to n (the number of elements acted on, in the usual O(n) sense).
An "awkward operation from the perspective of computational complexity" indeed!
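To make this concrete, here is a tiny normal-order stepper I sketched for illustration (the tuple encoding and helper names are mine, and it naively assumes all bound variable names are distinct, so there is no capture-avoidance):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def size(t):
    """Number of symbols in a term."""
    if t[0] == 'var':
        return 1
    if t[0] == 'lam':
        return 1 + size(t[2])
    return size(t[1]) + size(t[2])

def subst(t, name, val):
    """t[name := val], assuming all bound names are distinct (no capture handling)."""
    if t[0] == 'var':
        return val if t[1] == name else t
    if t[0] == 'lam':
        return t if t[1] == name else ('lam', t[1], subst(t[2], name, val))
    return ('app', subst(t[1], name, val), subst(t[2], name, val))

def step(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if t[0] == 'app':
        if t[1][0] == 'lam':
            return subst(t[1][2], t[1][1], t[2])
        s = step(t[1])
        if s is not None:
            return ('app', s, t[2])
        s = step(t[2])
        return None if s is None else ('app', t[1], s)
    if t[0] == 'lam':
        s = step(t[2])
        return None if s is None else ('lam', t[1], s)
    return None

# The example above: ((λx.(x x)) (λy.y)) -> ((λy.y) (λy.y)) -> (λy.y)
identity = ('lam', 'y', ('var', 'y'))
t = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))), identity)
print(size(t), size(step(t)), size(step(step(t))))  # sizes shrink: 5 4 2

# But duplication can make a term *grow* before it shrinks:
dup = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
big = ('app', ('app', ('app', ('var', 'y'), ('var', 'y')), ('var', 'y')), ('var', 'y'))
grown = step(('app', dup, big))
print(size(('app', dup, big)), size(grown))  # 7 -> 8: one "reduction" made it bigger
```

So even this toy stepper shows both behaviors: the thread's example shrinks monotonically, while a duplicating redex grows before any shrinking can happen.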
A β-reduction is basically the same as a function call; we know how to implement those, and they take constant time. Your example could be directly interpreted as a program in e.g. Lisp or ML, and there a function call is definitely not implemented by stepping through the entire program symbol by symbol.
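Indeed, transcribed directly into (say) Python, the example above is just two ordinary function calls:

```python
# ((λx.(x x)) (λy.y)) written as Python closures: each β-reduction is
# just a constant-time function call, with no symbol-by-symbol rewriting.
result = (lambda x: x(x))(lambda y: y)
print(result(42))  # result is the identity function, so this prints 42
```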
I think the issue in the stackoverflow post above is that the lambda calculus is nondeterministic, so if you try to define the cost as the smallest number of β-reductions to reduce a term to a value, then you're in trouble because it's in general very hard to figure out what the optimal way of reducing a program would be. But that also shows that it's a completely unrealistic model. Surely what we should do is pick an evaluation order, e.g. call-by-value, and then count the number of reductions there. This corresponds to what actually happens when you run your program on a real computer.
In fact, CBV evaluation is really well-behaved, and you can give an extremely intuitive cost semantics to such a lambda calculus: see section 7.4 in , and also see  for a proof that this semantics is sound, in the sense that you can compile it down to an abstract machine that respects the cost. (Incidentally, this is something Bob Harper has been talking about for a long time; see e.g. his blog post  for some polemics.)
If anything, I think what this argument shows is that Turing machines are bad for complexity-theoretic purposes. We do not want to "analyze our algorithms in terms of atomic operations that can be implemented in O(1) on a Turing machine", we want to count atomic operations that can be executed in O(1) time on the actual computers we run stuff on. So Turing machines are ok if you just want to distinguish polynomial from non-polynomial time, but for finer-grained distinctions you need a more accurate model, like a RAM, or... like the lambda calculus!
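The "pick an evaluation order and count β-reductions" cost measure can be sketched as a small CBV evaluator (my own encoding and names; it assumes closed terms with distinct bound names, so no capture-avoidance):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(t, name, val):
    """t[name := val], assuming all bound variable names are distinct."""
    if t[0] == 'var':
        return val if t[1] == name else t
    if t[0] == 'lam':
        return t if t[1] == name else ('lam', t[1], subst(t[2], name, val))
    return ('app', subst(t[1], name, val), subst(t[2], name, val))

def eval_cbv(t, steps=0):
    """Reduce t to a value under call-by-value; return (value, beta_step_count)."""
    if t[0] != 'app':
        return t, steps                       # variables and lambdas are values
    f, steps = eval_cbv(t[1], steps)          # evaluate the function position...
    a, steps = eval_cbv(t[2], steps)          # ...then the argument (CBV order)
    return eval_cbv(subst(f[2], f[1], a), steps + 1)  # one β-step, then continue

identity = ('lam', 'y', ('var', 'y'))
term = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))), identity)
value, cost = eval_cbv(term)
print(value, cost)  # reaches the identity function in 2 β-reductions
```

Because the evaluation order is fixed, the step count is deterministic, which is exactly what makes it usable as a cost model.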
((λ x . ((x x) (x x))) (λ x . ((x x) (x x))))
((false BlowsUp) (λ y . y))
(((λ x . (λ y . y)) ((λ x . ((x x) (x x))) (λ x . ((x x) (x x)))) ) (λ y . y))
This is an extraordinarily important case, because recursive functions are typically written with two branches: a base case that terminates, and a recursive case that calls the function again. For example, consider everyone's favorite recursive function, factorial:
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)
There is of course the so-called "normal order" evaluation strategy for the lambda calculus (which is guaranteed to reach a normal form whenever one exists), but it is basically call-by-name (to contrast it with the call-by-value nomenclature), and it has the problems I describe above and is hard to analyze.
if b then e1 else e2
b e1 e2
(b (λx. e1) (λx. e2)) tt
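In Python (which is call-by-value), the same thunking trick looks like this; `tt`, `ff`, `ite`, and `omega` are my own names for the encoding:

```python
tt = lambda a: lambda b: a   # Church true:  select the first branch
ff = lambda a: lambda b: b   # Church false: select the second branch

omega = lambda _: (lambda x: x(x))(lambda x: x(x))   # diverges if ever called

# Naive `b e1 e2`: under CBV both branches are evaluated before b chooses,
# so a diverging branch loops even when it isn't selected.  Thunking each
# branch, as in (b (λx.e1) (λx.e2)) tt, delays it until it is chosen:
def ite(b, thunk1, thunk2):
    return b(thunk1)(thunk2)(tt)   # apply the selected thunk to a dummy argument

print(ite(tt, lambda _: "base case", omega))  # prints "base case"; omega never runs
```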
Similarly, you can make a CBV version of the Y combinator by eta-expanding some subexpressions to prevent them from being evaluated too eagerly.
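For instance (a sketch in Python; `Z` and `fact` are my names), the eta-expanded fixed-point combinator is usually called the Z combinator:

```python
# The "Z combinator": Y with the self-application eta-expanded to
# λv.((x x) v), so it is a value and CBV does not unfold it forever.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

fact = Z(lambda rec: lambda n: 1 if n <= 1 else n * rec(n - 1))
print(fact(5))  # 120
```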
The exact details of the encoding don't matter too much, because when you actually write programs you want to use some syntactic sugar like "if then else" or "let rec", so you just care that it can be translated into lambda calculus in some way which preserves the operational semantics.
Which actually makes me curious: what sort of change did you have in mind?
"Functional Bits: Lambda Calculus based Algorithmic Information Theory"
HoTT is a foundations of mathematics thing (think ZF/ZFC), so it's a bit weird to see it on the list. But it's kind of hot right now, so you see it just about everywhere.
Of all the things in this list, this is the one you think is odd?
The idea behind homotopy type theory is a groundbreaking connection between type theory and abstract homotopy theory (really, higher category theory), and it has had far-reaching consequences in both areas.
For example, cubical type theory solves many problems with equality in type theory and it was only developed by understanding the homotopical models of type theory that were developed for HoTT. I could give more examples, but seriously, this alone is a major advance in logic.
 On Function and Concept, 1891; http://fitelson.org/proseminar/frege_fac.pdf
Is there some sort of logic or formalism that would let me algebraically represent a program like bubble sort, and then algebraically simplify it to a quick sort?
If you produced a mind-map to show their inter-relatedness, it would be quite a dense network.
A quick sketch of some of the nodes and edges might start to look like:
Logic <--> Gödel's theorems
Gödel's theorems <--> Proof theory
Proof Theory <--> Linear Logic
Computability <--> Turing Machines
Computability <--> Lambda Calculus
Combinatory Logic <--> Lambda Calculus
Type Theory <--> Lambda Calculus
Homotopy Type Theory <--> Type Theory
Pi Calculus <--> Process Calculi/Distributed Systems
Category Theory <--> Everything...
And that's far from complete. I imagine synthesising all the concepts into a single book with a cohesive narrative would be quite hard without some deeper unifying theory uniting all the concepts. (Category theory and HoTT may be the best contenders.)
In that case, a Wiki might be the best way to capture the structure. Although I'd love to be proved wrong and find such a book!