
Object Oriented Mathematics (1995) [pdf] - mindcrime
http://www.diku.dk/~grue/papers/oom/oom.pdf
======
zygomega
Modern FP thinks of object oriented processes as comonadic. See
[http://www.haskellforall.com/2013/02/you-could-have-invented...](http://www.haskellforall.com/2013/02/you-could-have-invented-comonads.html?m=1)
for a good exposition.
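The comonadic reading can be sketched in a few lines. This is my own minimal Store-comonad illustration in Python (the linked post uses Haskell; all names here are mine): the focused position is the object's state, and methods are observations built with `extend`.

```python
class Store:
    """Store comonad: a current position plus a way to observe any position.

    The 'object' is the focused state; 'methods' are comonadic observations.
    """
    def __init__(self, peek, pos):
        self.peek = peek  # function: position -> value
        self.pos = pos    # current focus

    def extract(self):
        # Observe the current state (like calling a getter).
        return self.peek(self.pos)

    def extend(self, f):
        # New Store whose value at each position is f applied to the
        # Store refocused there (like deriving a new method).
        return Store(lambda s: f(Store(self.peek, s)), self.pos)

counter = Store(lambda n: n * n, 3)
print(counter.extract())                                    # 9
print(counter.extend(lambda w: w.extract() + 1).extract())  # 10
```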

------
jhanschoo
The author seems to write from the perspective of an undergraduate whose main
exposure is limited to lecture notes and handouts.

I find that most pedagogical mathematical literature actually does lay out
the prerequisite knowledge it expects of the reader, and makes sure to
identify the exact nature of the symbols used in each expression, contrary to
the author's experience.

Mathematics today is still very much prose, in which equations are shorthand
for common, repetitive expressions.

Nevertheless, the ambiguity the author observes is something I commonly find
in lecture notes, where the prose and context around certain theorems and
equations are stripped away for brevity.

------
ajarmst
The author could have benefitted significantly from some exposure to Abstract
Algebra, as he seems completely unaware of pretty rudimentary things in Field
and Group Theory. The author might also benefit from some exposure to an
actual mathematician, judging by the laughable canards he levels at
mathematicians and their lack of "rigour" in his abstract. Finally, the author
would benefit from some exposure to actual published papers, judging by his
apparent unfamiliarity with what abstracts are for.

~~~
pervycreeper
His fundamental conceit, namely that mathematics sometimes suffers from
ambiguous or sloppily used notation, is a valid one, and this fact has caused
a great deal of confusion for a great number of people.

~~~
ajarmst
That's a reasonable point, but I wonder if that problem is usually encountered
when mathematicians from a particular branch of study read works from a
different one whose usages and assumptions differ. I've bounced off it even in
very simple areas, like whether 0 is a natural number, whether it is positive,
negative, or neither, whether 1 is prime, etc. Within the context of a
particular community, such ambiguities aren't present.

~~~
archgoon
"This works perfectly well for communication between researchers in the same
field, but may be an obstacle for communication between researchers from
different fields or for newcomers such as students"

It appears you did not read the abstract after all.

------
Animats
Way too many years ago, I was trying to invent what should have been "object
oriented constructive mathematics", but we didn't have object oriented
programming yet.

There was an actual reason for this. I was working on program verification
[1], and we'd put together a system which used the Oppen-Nelson prover in
combination with the Boyer-Moore prover. We needed to prove that their
theories were consistent.

The Oppen-Nelson prover is a complete, fast, automatic, decision procedure for
expressions composed of addition, subtraction, multiplication by constants,
conditionals, logical operators, and structure and array access. This subset
of mathematics is completely decidable. (If you add multiplication of two
variables, it becomes undecidable.) It can also accept "rules", which are
identities that it accepts as true. In our system, the Boyer-Moore prover was
used to prove any new rules needed, which could then be imported into the
Oppen-Nelson prover. Anything complicated involving loops usually required a
new rule.

The Boyer-Moore system is completely constructive, and is based on recursive
functions. Numbers are defined as (add1 (add1 (add1 (zero)))), for example.
One can then write recursive functions for addition and subtraction, and work
up to multiplication and division. A few hundred theorems cover basic number
theory.
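The recursive style can be sketched like this (my own Python rendering, not Boyer-Moore syntax): naturals as nested structure, with addition defined by peeling off `add1`.

```python
# Peano naturals as nested structure, in the spirit of Boyer-Moore's
# (add1 (add1 (zero))). Names and encoding are my own illustration.
ZERO = ('zero',)

def add1(n):
    return ('add1', n)

def plus(a, b):
    # Recursive addition: plus(add1(x), y) = add1(plus(x, y))
    if a == ZERO:
        return b
    return add1(plus(a[1], b))

def to_int(n):
    # Convert back to an ordinary int, for inspection only.
    return 0 if n == ZERO else 1 + to_int(n[1])

three = add1(add1(add1(ZERO)))
two = add1(add1(ZERO))
assert to_int(plus(three, two)) == 5
```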

Arrays, though, were a problem. There are four classic axioms, from McCarthy,
which define basic array semantics. The Oppen-Nelson prover has those built
in, but the Boyer-Moore system does not. We thus wanted to prove them in the
Boyer-Moore system. If we could do that, it became safe to prove new things
about arrays and import those rules into the Oppen-Nelson prover.

The axioms: arrays have two operations, SELECT and STORE. SELECT(array, index)
returns the appropriate element from an array. STORE(array, index, newvalue)
returns a new array where newvalue has replaced the element previously at
index. We then have rules such as

    
    
        SELECT(STORE(A, I, V), I) = V       // what you store, you get back
    

or, in Boyer-Moore notation:

    
    
        (implies (and (arrayp! A) (numberp I))
            (equal (selecta! (storea! A I V) I)
                    V))
    

The rest can be seen at p. 129 of the manual[1] if you care.
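An executable rendering of those axioms (my own Python sketch, not from the manual), treating an array as an immutable mapping:

```python
def store(a, i, v):
    """Return a new array with v at index i; a itself is unchanged."""
    b = dict(a)
    b[i] = v
    return b

def select(a, i):
    """Return the element of a at index i."""
    return a[i]

a = {0: 'x', 1: 'y'}
# What you store, you get back:
assert select(store(a, 1, 'z'), 1) == 'z'
# Storing at index i does not disturb a different index j:
assert select(store(a, 1, 'z'), 0) == select(a, 0)
```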

Arrays had to be defined in the Boyer-Moore system as a list of (subscript,
value) tuples. Not a set, a list. A set isn't a constructive construct,
because, informally, a set is a collection of unique values, the order of
which is not significant. In the Boyer-Moore world, two values are equal iff
they are identical. Two sets of tuples with a different order would not be
equal.

So we had to define an array in the Boyer-Moore world as a list of (subscript,
value) tuples ordered by increasing subscript. This is a clunky notation,
because then we had to prove that the STORE operation preserved the
correctness of the ordered list. Then we had to prove that all the rules for
arrays were always true for that clunky representation. This required about 50
pages of machine proofs.
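The ordered-tuple encoding looks roughly like this (my own Python reconstruction; the real proofs were done in Boyer-Moore's Lisp). Because the list stays sorted by subscript, different update orders yield identical lists, so literal structural equality suffices:

```python
def store_alist(a, i, v):
    """Insert (i, v) into an alist kept sorted by subscript,
    replacing any existing entry for i. Returns a new list."""
    out, placed = [], False
    for (j, w) in a:
        if j < i:
            out.append((j, w))
        elif j == i:
            out.append((i, v)); placed = True
        else:
            if not placed:
                out.append((i, v)); placed = True
            out.append((j, w))
    if not placed:
        out.append((i, v))
    return out

def select_alist(a, i):
    for (j, w) in a:
        if j == i:
            return w
    return None

# Structurally identical regardless of update order:
assert store_alist(store_alist([], 2, 'b'), 1, 'a') == \
       store_alist(store_alist([], 1, 'a'), 2, 'b')
```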

Back then (this was around 1981-1982), long machine proofs were not acceptable
in mathematics. I had a JACM paper rejected for that reason. The approach was
just too ugly.

Years later, I realized that what was needed was a kind of object oriented
version of constructive mathematics. The key concept is that two things are
equal if there is no way they can be distinguished. This comes from the theory
of uninterpreted functions:

    
    
        forall f: f(x) = f(y) implies x = y
    

So we would like to be able to define a type with public and private
functions, one which exposes a new "equal" operation for the type. Then, if we
can prove that the new "equal" function obeys the rule above for all public
functions, and we disallow all further access to the private functions, we can
construct a consistent theory with a new, more abstract notion of "equal". Now
we can write set theory in Boyer-Moore theory without adding new axioms.
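The idea can be sketched like this (my own Python illustration; `Bag` and its methods are hypothetical): two objects with different internal representations count as equal when no public observation can separate them.

```python
class Bag:
    """A set-like type. _items is private; the public interface is
    member() and size(), so internal ordering is unobservable."""
    def __init__(self, items):
        self._items = list(items)  # private: order may vary

    def member(self, x):
        return x in self._items

    def size(self):
        return len(set(self._items))

    def __eq__(self, other):
        # Observational equality: agree on every public observation.
        universe = set(self._items) | set(other._items)
        return (self.size() == other.size() and
                all(self.member(x) == other.member(x) for x in universe))

a = Bag([1, 2, 3])
b = Bag([3, 1, 2])          # different internal representation
assert a._items != b._items  # distinguishable by peeking inside...
assert a == b                # ...but equal under every public observation
```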

Unfortunately, I figured this out about a decade too late. We didn't really
need that result to get a valid verification system, but it would have cleaned
up the theory and made it publishable. But anyway, there's a form of object
oriented mathematics which could be potentially useful.

The verification system was never used much; it was for a dialect of Pascal
for Ford engine control programs that was never used in production. We looked
at extending it to Ada (too hard) and C (too ill-defined). DEC SRL did a
similar system for Modula-3, but that died with DEC SRL, DEC, and
Modula-3. Some of the ideas were reused, decades later, in Microsoft's Spec#.

[1]
[http://www.animats.com/papers/verifier/verifiermanual.pdf](http://www.animats.com/papers/verifier/verifiermanual.pdf)

~~~
chris_wot
You wrote that:

    
    
      forall f: f(x) = f(y) implies x = y
    

What if f(x) is x^2 and f(y) is y^2? You could have x=2 and y=-2.

I'm sure there is something I'm missing, but wouldn't that make the
proposition false? i.e. f(x) = f(y) is true, but x = y is false, thus true
implies false is false

~~~
Animats
The key is "forall f". That is, this must hold for all functions. In a
constructive world, the set of all functions is finite. There's an existing
set of known functions on the object. Any new functions must be created using
those functions, so if the known functions have that property, so must
functions built from them.

The key idea is that new functions can't look inside the object directly.
Think private data and function members in C++. So two objects may have
different internal representations, but if the exported functions won't let
you distinguish them, you can treat them as equal.

~~~
chris_wot
So if f(x) and g(x) agree for all values of x, then the functions f and g are
equivalent. Is that correct? If so, does the theory only handle
encapsulation?

------
aruss
Is this a joke?

------
kmicklas
This guy really needs to learn more type/category theory, or Haskell, or
something. OOP may have been hip and cool in 1995 when this was written but
applying it to math is an even worse idea than for programming.

~~~
platz
FP has been "mainstream" in academia for quite a while, so an academic
utilizing OOP now has the effect of being "radical". also,
[http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent](http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent)
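The equivalence on that wiki page fits in a few lines (my own Python example): a closure's captured variable plays the role of an object's private field, and the returned functions play the role of its methods.

```python
def make_counter(n=0):
    """A closure acting as an object: n is the private state,
    the returned functions are the methods."""
    def incr():
        nonlocal n
        n += 1
    def value():
        return n
    return incr, value

incr, value = make_counter()
incr(); incr()
assert value() == 2
```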

