

Optimising pointer subtraction with 2-adic integers - dons
http://blog.sigfpe.com/2010/05/optimising-pointer-subtraction-with-2.html

======
sz
Fascinating.

And I thought I might be missing out on learning CS by learning abstract math
instead.

~~~
camccann
Computer Science--as opposed to software engineering, or day-to-day
programming in general--is nothing _but_ an odd flavor of applied abstract
math. Chances are, you know more CS than most people with CS degrees, since
most undergraduate "computer science" programs are actually a mish-mash of
software engineering and dumbed-down bits of CS.

~~~
sz
This is reassuring, because for months now I've been searching and searching
for a sort of "mathematician's introduction to computer science". Something
precise, concise, with elegant and efficient constructions and rigorous
argument... possibly in the style of Rudin's classic analysis textbook?

I recently discovered Knuth's TAOCP but the thing is overwhelmingly long and
I'm not sure it's what I'm looking for.

~~~
camccann
I'm not sure if what you're looking for exists--it depends on what angle
you're looking to approach it from, for one thing. TAOCP is a classic for
certain kinds of rigorous, formal reasoning about algorithms, but it can be a
bit intimidating. One of Knuth's famous quotes is "Beware of bugs in the above
code; I have only proved it correct, not tried it", if that gives you some
idea of his style...

On the other hand, if you want to reconstruct computer science via symbolic
logic and _very_ abstract math, variations on Church's lambda calculus (from
which most functional programming languages derive) correspond closely to
intuitionistic logic/the internal logic of some cartesian closed category.
That stuff is liable to give even some mathematicians a headache, though, and
it's easy to lose sight of more practical corners of CS that way.

~~~
sz
The more abstract the better! I love category theory.

Curry-Howard isomorphism blew my mind when I read about it; that's the sort of
thing I'd like to read about. Do you know of any good treatments from the
lambda calculus perspective?

~~~
camccann
Well, there's not much to the basic idea--the λ-calculus just _is_ the
structure of a cartesian closed category. Exponential objects represent
morphisms as objects, a.k.a. first-class and higher-order functions; the
equivalence Hom(X × Y, Z) ≅ Hom(X, Z^Y) is currying; the evaluation morphism
(Y^X) × X -> Y is function application; and so on.
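To make that concrete, here's a minimal sketch in Haskell, where `(,)` plays the categorical product and `(->)` the exponential (the primed names are just to avoid clashing with the Prelude's `curry`/`uncurry`):

```haskell
-- Hom(X × Y, Z) ≅ Hom(X, Z^Y): currying and its inverse.
curry' :: ((x, y) -> z) -> (x -> y -> z)
curry' f = \x y -> f (x, y)

uncurry' :: (x -> y -> z) -> ((x, y) -> z)
uncurry' g = \(x, y) -> g x y

-- The evaluation morphism (Y^X) × X -> Y, i.e. function application.
eval :: (x -> y, x) -> y
eval (f, x) = f x

main :: IO ()
main = do
  print (curry' (\(a, b) -> a + b) 2 (3 :: Int))  -- 5
  print (uncurry' (+) (2, 3 :: Int))              -- 5
  print (eval ((* 2), 21 :: Int))                 -- 42
```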

A basic cartesian closed category alone only gets you the simply-typed
λ-calculus, though, which isn't much fun. The untyped λ-calculus is of course
incoherent from the C-H perspective, since unrestricted recursion[0] gives you
Turing-equivalence, which corresponds to an inconsistent logic; a categorical
description would probably be easy but I don't think I've seen one.

More expressive typed λ-calculi such as System F are more interesting from a
computer science perspective, but I've not been able to find much material
discussing them from a logical or categorical perspective. Instead, most
mathematicians would jump straight to something like Coquand's Calculus of
Constructions[1], which is (if memory serves) the internal logic of a
topos.

Category theory also sees some use in computer science beyond just C-H stuff--
for instance, describing recursive data types as fixed points of endofunctors
on the category of types. This permits an elegant categorical description of
operations on those types--e.g., the "fold" or "reduce" function on lists
arises as the unique morphism out of the initial F-algebra, a catamorphism.
The categorical habit of dualizing
everything in reach also pays off: the dual of well-founded recursion
collapsing structures using an initial F-algebra is productive corecursion
building structures using a terminal F-coalgebra, which turns out to be a
useful description of non-terminating programs that compute only on demand and
then return to an idle state.
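Here's a sketch of that story in Haskell: `ListF` is the "one layer" endofunctor, `Fix` ties the recursive knot, and the generic fold is the catamorphism out of the resulting initial algebra (names like `cata` and `sumAlg` are the conventional ones, not anything from the original post):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A recursive type as the fixed point of an endofunctor.
newtype Fix f = Fix { unFix :: f (Fix f) }

-- One layer of list structure; the recursive position is the parameter r.
data ListF a r = NilF | ConsF a r deriving Functor

type List a = Fix (ListF a)

-- The catamorphism: the unique morphism out of the initial F-algebra,
-- i.e. the generic fold.
cata :: Functor f => (f b -> b) -> Fix f -> b
cata alg = alg . fmap (cata alg) . unFix

-- Summation as an algebra ListF Int Int -> Int.
sumAlg :: ListF Int Int -> Int
sumAlg NilF        = 0
sumAlg (ConsF x s) = x + s

fromList :: [a] -> List a
fromList = foldr (\x r -> Fix (ConsF x r)) (Fix NilF)

main :: IO ()
main = print (cata sumAlg (fromList [1, 2, 3, 4]))  -- 10
```

Dualizing `cata` (flip the arrows, swap `Fix`/`unFix`) gives `ana`, the corecursive unfold mentioned above.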

Another fun one: Representing data structures algebraically using sum and
product types, it turns out there's a meaningful semantics for taking _the
derivative_ of the type--you get the original type with any one value missing,
plus enough information to specify the location of the hole.
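For lists this works out nicely: the list type satisfies L(a) = 1/(1-a), so its derivative is (1/(1-a))², i.e. a pair of lists — the elements before the hole and the elements after it (the "zipper"). A small illustrative sketch, with hypothetical helper names:

```haskell
-- One-hole context of a list: elements before the hole (nearest first)
-- and elements after it.
type ListCtx a = ([a], [a])

-- Plug a value back into the hole to recover a full list.
plug :: a -> ListCtx a -> [a]
plug x (before, after) = reverse before ++ [x] ++ after

-- Every way to remove one element, paired with its context.
holes :: [a] -> [(a, ListCtx a)]
holes = go []
  where
    go _      []       = []
    go before (x : xs) = (x, (before, xs)) : go (x : before) xs

main :: IO ()
main = do
  print (holes "abc")
  -- Each (value, context) pair plugs back to the original list.
  print (all (\(x, c) -> plug x c == "abc") (holes "abc"))  -- True
```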

There's also something involving a relationship between Kan extensions and the
continuation-passing transform but that's really getting beyond my knowledge.

I've also seen stuff about running C-H the other way, and using categorical
models of linear logic as a semantics for expressing resource awareness and
side effects in a programming language.

If you want to see some of the above in action, the ML family of programming
languages is based on typed λ-calculi and acknowledges its mathematical
connections a bit more than most other languages. Haskell [2] is a more
distant cousin in that family, and of the not-completely-obscure languages it
is the one that most cheerfully plays with concepts from category theory. For
instance, take a look at some of the material on this blog:
<http://comonad.com/reader/category/category-theory/> If nothing else, you can
probably dig around from Haskell-related material to find more general stuff.

[0] The best-known implementation for recursion is Y, Curry's fixed-point
combinator, written in lambda terms as (λf. (λx. f (x x)) (λx. f (x x))). In a
typed setting, if it can be expressed at all, it ends up being equivalent via
C-H to a proposition (A -> A) -> A, which permits a proof of any proposition.
This makes it easy to see how the halting problem is closely related to
Gödel's incompleteness theorem: any language with a coherent interpretation
via C-H must reject some perfectly sensible programs (i.e., is incomplete).
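Haskell is a language where this fixed point _can_ be expressed, since it allows unrestricted recursion (which is exactly the logical inconsistency: a closed term of type (a -> a) -> a "proves" every proposition). A sketch — this `fix` is also exported by `Data.Function`:

```haskell
-- The typed fixed-point operator, definable because Haskell permits
-- general recursion at the term level.
fix :: (a -> a) -> a
fix f = f (fix f)

-- Factorial with no explicit self-reference: recursion is recovered
-- by taking a fixed point of a non-recursive step function.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)  -- 120
```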

[1] The CoC is also the basis for the proof assistant/programming language
Coq. See also the "Lambda Cube" for a taxonomy of orthogonal type system
enhancements.

[2] Named for Prof. H. Curry, of course.

------
th
I never thought I would see p-adic numbers applied to CS. I'm glad my number
theory class wasn't a complete waste.

