
Logic, Explainability and the Future of Understanding - smartmic
https://blog.stephenwolfram.com/2018/11/logic-explainability-and-the-future-of-understanding/
======
chenglou
> even though the underlying rules (or underlying “language”) used in
> different areas of mathematics are different, there ends up being some way
> to translate between them—so that at the next level of abstraction the path
> that was taken no longer critically matters.

This and the previous arguments made me feel much better about the value of
using proof assistants to advance mathematics! For example, there's been quite
a bit of controversy surrounding the use of e.g. Coq to prove the four-color
theorem; folks complain that although the proof can be verified, brute-forcing
it doesn't provide us with further mathematical insight. But that might be
fine, since we'd use the proof to build up other abstractions, and if we felt
the need, we could always come back and re-prove the four-color theorem using
a more humanly explainable method.

The other nice insight is that maybe we get to define "simplicity" with a bit
more rigor than how we usually define it (I'm aware of Clojure's definition of
it): a small number of somewhat orthogonal axioms that enable the biggest
reduction in the number of steps needed when proving things with them.
For software engineering, this means having relatively few primitives that
are composable enough to greatly simplify a program, either statically or
dynamically. There's no need to try too hard to reduce the axioms/helpers down
to a single one, as the utility trade-off might tip the wrong way.

It’s also cool to notice that such a common definition of simplicity might
change depending on what new concepts we assimilate. It’s also interesting
that you can somewhat quantify it by using a probability distribution over the
common use-cases of your axioms!
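That last idea can be made concrete with a toy sketch (the scoring scheme and all numbers below are my own invention, not from the article or the comment): score an axiom set by its expected proof length under a probability distribution over use-cases.

```python
# Toy sketch of "quantified simplicity": score an axiom set by the
# expected number of proof steps over a probability distribution of
# common use-cases. All numbers below are invented for illustration.

def expected_steps(use_cases):
    """use_cases: list of (probability, proof_steps) pairs."""
    assert abs(sum(p for p, _ in use_cases) - 1.0) < 1e-9
    return sum(p * steps for p, steps in use_cases)

# A hypothetical single-axiom basis makes individual proofs longer;
# a slightly larger basis shortens the common cases.
single_axiom = [(0.7, 40), (0.2, 90), (0.1, 300)]
three_axioms = [(0.7, 8), (0.2, 25), (0.1, 120)]

print(expected_steps(single_axiom))
print(expected_steps(three_axioms))
```

Under this (made-up) distribution, the three-axiom basis wins by a wide margin, which is the "ratio of utility" point above in numerical form.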

~~~
espeed
This paper was posted this week...

SPIRAL: Extreme Performance Portability

    This paper provides an end-to-end discussion of the
    SPIRAL system, its domain-specific languages, and code
    generation techniques.

[http://users.ece.cmu.edu/~franzf/papers/08510983_Spiral_IEEE...](http://users.ece.cmu.edu/~franzf/papers/08510983_Spiral_IEEE_Final.pdf)

------
aakilfernandes
I got lost quite a bit in this, but luckily I stuck it out to the end.

> It’s a similar problem in natural science. You see some elaborate set of
> things happening in some biological system. Can one “reverse engineer” these
> to find an “explanation” for them? Sometimes one might be able to say, for
> example, that evolution by natural selection would be likely to lead to
> something. Or that it’s just common in the computational universe and so is
> likely to occur. But there’s no guarantee that the natural world is set up
> in any way that necessarily allows human explanation.

~~~
Iv
I remember a quote, I think from Asimov, that really enlightened me. He
explains that before Newton's law of gravity, we had ways to calculate orbits
and planets' positions. It involved complex calculation, but a model existed.
The fact that Newton worked so hard trying to find a simpler model was,
according to Asimov, an essential act of faith in "the intelligibility of the
universe". He argued that this is a principle of faith held by many
fundamental researchers.

~~~
tensor
I'm pretty sure that the previous methods were imperfect, so it wasn't simply
that we wanted a "simpler" method, but one that would stand the test of time.

~~~
Custardian
Iv is citing an Asimov quotation about Newton's actual motivations, not the
motivations Newton plausibly had or that you think Newton had.

Trying to infer the motivation from the result is exactly what
aakilfernandes's quotation is referring to!

(Whether Asimov had done his research on this is another matter.)

------
carapace
If you liked that you'll like this:
[http://www.markability.net/](http://www.markability.net/)

It's a walk-through of George Spencer-Brown's Laws of Form.

It turns out all logic can be modeled by simple containment, and there's a
complete basis (discovered by Bricken):

    
    
        A() = ()
        ((A))B = AB
        A(AB) = A(B)
    

One interesting thing about LoF notation is that every expression is also a
schematic of a logic circuit. The letters are inputs, the expression is the
output, and each container represents a logical (multi-input) NOR gate.

Another interesting thing about it is that it's significantly more efficient
to derive proofs in LoF notation than conventional notation. (Often a proof
that takes several pages in conventional notation will be less than a dozen
lines in LoF.)

You can write a simple SAT solver that doesn't require the input expressions
to be put into a Normal Form first.
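To make the circuit reading concrete, here's a minimal sketch (the encoding is my own, not from the book or the linked site): a container is a tuple, interpreted as a multi-input NOR gate over its contents, and SAT is a brute-force search over assignments that works on any expression as-is.

```python
from itertools import product

def value(expr, env):
    """Evaluate a LoF expression under an assignment env.
    expr is a variable name (str) or a tuple of subexpressions;
    a tuple is a container, i.e. a multi-input NOR gate, so the
    empty container () is the mark and evaluates to True."""
    if isinstance(expr, str):
        return env[expr]
    return not any(value(e, env) for e in expr)

def variables(expr):
    """Collect the variable names appearing in an expression."""
    if isinstance(expr, str):
        return {expr}
    return set().union(*(variables(e) for e in expr)) if expr else set()

def sat(expr):
    """Brute-force SAT over the expression's variables;
    the input needs no normal form at all."""
    names = sorted(variables(expr))
    for bits in product([False, True], repeat=len(names)):
        env = dict(zip(names, bits))
        if value(expr, env):
            return env
    return None

# ((A)(B)) is NOR(not A, not B), i.e. A AND B:
print(sat((('A',), ('B',))))   # a satisfying assignment
# ((A) A) is NOR(not A, A), unsatisfiable:
print(sat((('A',), 'A')))
```

Brute force is exponential, of course; the point is only that the containment notation is directly executable as a circuit, with no CNF conversion step.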

\- - - -

As Wolfram and others have pointed out: It's _relatively_ easy to program
computers to search for new proofs and such[1], the trick is making it
"surface" the human-interesting ones.

[1] E.g. DeepAlgebra
[https://arxiv.org/abs/1610.01044](https://arxiv.org/abs/1610.01044)
[https://github.com/przchojecki/deepalgebra](https://github.com/przchojecki/deepalgebra)

> We outline a program in the area of formalization of mathematics to automate
> theorem proving in algebra and algebraic geometry. We propose a construction
> of a dictionary between automated theorem provers and (La)TeX exploiting
> syntactic parsers. We describe its application to a repository of human-
> written facts and definitions in algebraic geometry (The Stacks Project). We
> use deep learning techniques.

~~~
aldoushuxley001
Yeah, the Laws of Form notation is very elegant and provides a really profound
conceptual foundation. I've always loved the book and the people who've tried
to take it even further (there was some surprisingly good discussion in the
LoF Yahoo group over the years).

~~~
carapace
The idea of imaginary Boolean values is pretty striking, eh?

I find it fascinating that some people have a near-mystical experience with
LoF while others sniff at it as just another notation that doesn't add
anything fundamental.

------
xvilka
Investing in open-source systems like Coq, Agda, etc., seems more promising.
They both keep improving their user experience based on the latest research.
Mathematica is expensive and proprietary, which puts it out of reach for most
of the world.

~~~
anonlastname
There's a difference between the kinds of things that proof assistants like
Coq can do and what computer algebra systems like Sage can do. One can still
use a CAS like Sage to prove things and not just do numerical computations,
but it's a bit of a different approach.

------
jeromebaek
It's interesting that a single formula "encapsulates" all of Boolean logic,
but I fail to see why having a single formula, rather than two or three, makes
things simpler rather than more complex. The proposed formula with six NANDs
is almost impossible to understand intuitively. Why give up axioms that are
easier to understand, like the Peano axioms, or even the pair of a 4-NAND
axiom plus commutativity, just because you want one formula?

I do see some value in it, but the idea of "simpleness" being captured by
"least number of formulas" seems shortsighted.

------
aakilfernandes
> But how many axioms does one need? And how simple can they be?

In this article, what does "simple" mean? At first I assumed it meant "simple
to understand", but that's clearly not the case.

~~~
superlopuh
I'm guessing Kolmogorov complexity.

------
User23
A nice short read for an introduction to equational logic:
[http://www.cs.cornell.edu/home/gries/Logic/Equational.html](http://www.cs.cornell.edu/home/gries/Logic/Equational.html)

The Leibniz rule in particular carries a lot of weight.

------
empath75
I’m guessing he’s not talking about first-order logic when he talks about
defining ‘logic’ with a single axiom, because it seems to me there are a whole
hell of a lot of steps missing to go from his axiom to ‘proving’ anything at
all.

~~~
tunesmith
Zeroth-order: propositional logic, i.e. Boolean algebra.

