
What is logic? - Hooke
https://aeon.co/essays/the-rise-and-fall-and-rise-of-logic
======
tel
Part of the fun of logic is realizing that there are multiple forms of it
which change one's language and ability to express ideas. They also don't
dominate one another—some statements' truth depends upon what logic they're
formalized within.

Model theory is good here in that it studies the connection between logical
language and more "physical" models. Some of these models can give a relatable
account for what a logic "means". For instance, Boolean logic corresponds with
set operations, and we can use that correspondence to give a concrete meaning
to statements in Boolean-like logics.
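
A rough Haskell sketch of that correspondence (my own made-up names, just for
illustration): interpret each propositional connective as the matching set
operation over a finite universe of "worlds".

    import qualified Data.Set as Set

    data Prop = Var String | Not Prop | And Prop Prop | Or Prop Prop

    type World  = Int
    type Assign = String -> Set.Set World   -- worlds where each variable holds

    -- "and" is intersection, "or" is union, "not" is complement in the universe
    meaning :: Set.Set World -> Assign -> Prop -> Set.Set World
    meaning univ env (Var x)   = env x
    meaning univ env (Not p)   = univ `Set.difference` meaning univ env p
    meaning univ env (And p q) = meaning univ env p `Set.intersection` meaning univ env q
    meaning univ env (Or  p q) = meaning univ env p `Set.union` meaning univ env q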

Intuitionistic logic is an interesting one if you're not familiar with it. It
can be seen as the logic of working with "demonstrations", building and
analyzing them. For instance, if I have an object "X" such that witnessing it
immediately convinces someone of the truth of "1+1=2", and if I have a whole
collection of such items, then intuitionistic logic talks about how to compose
and decompose collections of these things. Another interesting operation is
"not" such that "not X" is a tool for using any X you happen to have to show
that the world is inconsistent. This acts as a refutation of the existence of
values like X!

This differs from set-like logics because, for instance, "not (not X)" is not
the same as X: being able to show that if you had (a way to prove the world
inconsistent if you had an X) you could show the world as inconsistent... it's
a far cry away from actually having an X!

Intuitionistic logic is interesting for two reasons. First, it models
communication and persuasion by being an algebra of "persuasive items" like X.
Second, it's "constructive" and thus works similarly to how programming
languages express things through their creation.
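
A minimal Haskell sketch of that reading (the names are mine, and the empty
type stands in for "the world is inconsistent"):

    -- A value of type "a" is a piece of evidence for the proposition "a".
    data Void                    -- no constructors: the inconsistent world

    -- "not a": a tool that turns any evidence for "a" into an inconsistency
    type Not a = a -> Void

    -- Double-negation introduction is constructive: if you actually have an X,
    -- you can refute any refutation of X.
    doubleNegIntro :: a -> Not (Not a)
    doubleNegIntro x refute = refute x

    -- But there is no constructive term of type  Not (Not a) -> a :
    -- refuting a refutation of X is a far cry from having an X in hand.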

~~~
hackits
Correct me if I'm wrong, but I've always seen the maths to be the easy part.
The translation of the English language (grammar, sentence structure), which
can be ambiguous, into something with a very precise meaning such as maths
seems to be the hardest part, and it's where people with good translation
skills excel.

~~~
tel
I think much of maths is that translation. You learn a lot of mathematics and
you recognize powerful, subtle, deep patterns. Then you spend your time
looking for those patterns in the world.

It's the same thing with even the most basic "translation problems" that kids
in schools learn. They're armed with arithmetic and high-school algebra so
they get arithmetic and high-school algebra "real world" scenarios to analyze.

At higher levels it's the same. The problems are just harder and the patterns
aren't necessarily given to you. Sometimes, oftentimes, nobody even knows what
they are.

------
cousin_it
Gödel's arithmetization of logic is one hell of a beautiful mathematical idea,
up there with linear algebra or probability theory. One of its big selling
points for me was how easily it solves the unexpected hanging paradox. (The
simplest arithmetization of the judge's self-referential statement is a
statement about numbers that can be shown to be self-contradictory.) It's also
cool how it turns Russell's paradox into Gödel's theorem, Curry's paradox into
Löb's theorem, etc. The connections to algorithms and computability theory are
also neat (Gödel's idea of "effectively axiomatized system" is any computer
program that can print sentences, which has just the right amount of
generality and connects to the halting problem in the obvious way). To me
arithmetization is simply the right approach to logic, which easily subsumes
everything that makes sense and rejects everything that doesn't.

~~~
westoncb
Do you have a recommendation for a book that covers these topics that's
technical but not necessarily a text book?

~~~
empath75
The classic popular treatment is _Gödel, Escher, Bach: An Eternal Golden
Braid_ by Douglas Hofstadter.

~~~
westoncb
I've read about a third of it. It's too long and too focused on other topics
for me personally.

I also read the book 'Gödel's Proof' a long time ago, before I had much
familiarity with pure math, and remember it describing a very interesting sort
of 'architecture' that Gödel had put together—but I left still wondering about
any applications other than producing the incompleteness theorems. Maybe I'd
get more out of it now though...

~~~
alasdair_
GEB is well worth a second attempt. It's one of the very few books that
completely changed my worldview.

~~~
fenomas
I read GEB a few years ago, and it struck me as a book full of ideas that
would have transformed me had I been encountering them for the first time. I
think the book may have been so influential that its big ideas had largely
filtered out into places where I'd encountered them. If I'd read it at 19,
say, I think it would have rocked my world.

Note that I'm mostly talking about the book's first half (about incompleteness
and canons and whatnot). The second half (ants, etc) struck me as much more
dated.

~~~
empath75
I did read it at 19 and it did rock my world.

~~~
mr_overalls
Same here. I read it just after Zen and the Art of Motorcycle Maintenance.
Rite of passage for nerds. :-)

------
westoncb
My take: it's a human means of systematically exploring implications of
already accepted beliefs.

There are implicit and explicit forms of it.

The implicit form is something the brain does automatically and is a
consequence of its structure. Perhaps the way it models things intrinsically
does not allow for what we would call contradicting statements (though it
contains many separate models which if unified would contain contradictions);
then, the space of desirable implications is narrowed by a motivation to find
a certain _kind_ of implication (i.e. there is typically a goal when engaging
in reasoning—we aren't often indifferently interested in all implications)
combined with the fact that contradictions aren't allowed. It seems like
implications are usually discovering that some entity belongs to a class it
wasn't previously known to belong to, at which point it inherits the
attributes of other things in that class.

The explicit forms seem like attempts to model the implicit form in
mathematical language, in order to bring logical processes more under control
of conscious thought, or 'executive function' (plus reasoning on paper extends
working memory). It seems possible that with the 'correct' formulation, a
logic could be viewed as a theoretical science describing properties of the
structure of the human brain that give rise to reasoning.

~~~
kazagistar
The interesting point of the given article was that pre-mathematical-
formalization, logic was backwards of that purpose, and often remains so even
today. The search does not start with assumptions and seek conclusions;
rather, it starts with conclusions, and then seeks which sets of axioms
fulfill them. The vast majority of mathematicians don't care about the
axiomatic foundations of mathematics, nor mind if they get entirely
reformulated; they already know math works, and it's irrelevant which
particular set of axioms is used to prove that.

The assumptions => theorem structure is an artificial abstraction over a very
different logical process. Symbolic logic is a framework for mechanical
validation, rather than anything like how people actually do logic.

~~~
westoncb
> The search does not start with assumptions and seek conclusions; rather, it
> starts with conclusions, and then seeks which sets of axioms fulfill them.

Right, there was an issue with labeling justification 'logic'. An interesting
point, but not as interesting as the title of this story—which I prefer to
discuss :)

I was getting at the general process of inference making in the human brain
(not even conscious inference making necessarily), which I see as the real
root of all the different things we call logic.

------
danielam
Speaking of classical logic, I came across a book on Fred Sommers' work
entitled "The Old New Logic" (David Oderberg is editor). I haven't read it
yet, so I can't comment, but based on some excerpts and reviews, it sounds
like a good read. I thought I'd mention it here for those interested. Link:
[https://mitpress.mit.edu/books/old-new-
logic](https://mitpress.mit.edu/books/old-new-logic)

Also, those interested in the fallout of Russell's paradox are encouraged to
look into Lesniewski's mereology (this is a topic that occurs in my master's
thesis where I attempted to formalize the structure of Aristotelian living
things mereologically using many-sorted logic). In short, Lesniewski held that
the axiomatic set theory of the day betrayed the intuitions of naive set
theory and resulted in paradoxes. He attempted to dispense with these
paradoxes by proposing a formal system known as mereology that replaced the
primitive element relation in set theory with the parthood relation (other
relations can be defined in terms of parthood, e.g., overlap, disjoint,
proper/improper parthood depending on which one you assume, and vice versa).
Link:
[https://plato.stanford.edu/entries/lesniewski/](https://plato.stanford.edu/entries/lesniewski/)
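
To make the "definable in terms of parthood" point concrete, here is a toy
Haskell sketch (my own names, restricted to a finite domain; Lesniewski's
actual system is of course much richer):

    type Thing = Int

    -- Take parthood as the primitive relation; the others fall out as definitions.
    properPartOf :: (Thing -> Thing -> Bool) -> Thing -> Thing -> Bool
    properPartOf partOf x y = partOf x y && not (partOf y x)

    -- Two things overlap when something in the domain is part of both.
    overlaps :: [Thing] -> (Thing -> Thing -> Bool) -> Thing -> Thing -> Bool
    overlaps domain partOf x y = any (\z -> partOf z x && partOf z y) domain

    -- Disjointness is the absence of overlap.
    disjoint :: [Thing] -> (Thing -> Thing -> Bool) -> Thing -> Thing -> Bool
    disjoint domain partOf x y = not (overlaps domain partOf x y)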

------
kensai
"The idea that ordinary language is expressively inadequate to account for
mathematical (or even logical) reasoning became a recurring theme in the
ensuing tradition of mathematical logic, so much so that the term ‘symbolic
logic’ became synonymous with this tradition. Doing logic came to mean simply
working with special symbols, not with ordinary words. In this respect, it is
worth noting that the humanist authors had criticised the Latin of scholastic
logicians precisely as ‘too artificial’, and even the Greek language that
Aristotle relies on for syllogistic logic is regimented and removed from
ordinary ways of speaking at the time. In a sense, perhaps a certain degree of
‘artificiality’ is at the core of logic throughout history, as it operates at
levels of abstraction that are at odds with ordinary language usage."

True that, but were/are there alternatives? Difficult to say.

~~~
throwaway729
_> In a sense, perhaps a certain degree of ‘artificiality’ is at the core of
logic throughout history, as it operates at levels of abstraction that are at
odds with ordinary language usage_

Of course, you could say the exact same thing for programming languages (after
all, what do programmers do if not "simply working with special symbols, not
with ordinary words"? And isn't this the primary criticism of programming
brought by people who find it distasteful -- that expressing their _fuzzy_
intuitions in a precise enough manner for the computer feels like a pointless
exercise in frustration?)

Really, the only difference between the proponents of mathematical logic over
informal prose in the early 20th century and the proponents of programming
languages over informal prose today is that computers expose the problems of
fuzzy thinking and fuzzier exposition. The philosopher or mathematician could
hand-wave about the holes in their arguments, but the programmer can only
hand-wave so much about the uselessness of his unexecutable pseudo-code.

The gold standard would be a form of speech that is _natural and ordinary_,
but also avoids _misleading, omitting, and eliding essential details_.

In other words, a language that is formal enough to be parsed, checked, and
generated by a proof assistant program. But also natural enough to be read as
if it's prose written by a native speaker of a natural language.

I'm of the opinion that such a gold standard does not exist, and so we're
forever doomed to incremental improvement.

~~~
tikhonj
On a slightly orthogonal note, I think that not only can you think of
programming languages this way, but that you _should_ —programming languages
and formal logics are ultimately instances of the same general idea. I've
found this to be a consistent and powerful view for reasoning _about_
programs, programming languages, proofs and logics in a uniform sort of way.

The Curry-Howard correspondence is an important idea that's a specific
instance of this, showing a direct relationship between specific kinds of
logic and specific kinds of typed programming languages, but I think it's
useful to think about programming languages that don't correspond to well-
studied logical systems in a similar way.
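
A very small illustration in ordinary Haskell (nothing here is tied to any
particular proof assistant): read each type as a proposition and each program
as its proof.

    -- "A and B implies A": the proof is projection out of a pair.
    andElim1 :: (a, b) -> a
    andElim1 (x, _) = x

    -- "(A implies B) and (B implies C) together imply (A implies C)":
    -- the proof is function composition.
    hypotheticalSyllogism :: (a -> b) -> (b -> c) -> (a -> c)
    hypotheticalSyllogism f g = g . f

    -- Modus ponens is just function application.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x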

This helps explain why the study of programming languages—and especially
programming language theory—isn't _really_ about programming languages: it's
more about abstraction and reasoning in general. It's deeper than you'd
suspect from the name. I'd go so far as saying that programming language
theory is less a theory of programming languages and more a general theory of
computation _from a language point of view_ , as compared to "normal"
theoretical CS.

~~~
mafribe

> programming languages and formal logics are ultimately instances of the same
> general idea.

That's deeply questionable. The CH-correspondence breaks down as soon as your
computation includes non-termination, concurrency, timing, distribution,
etc. Classical logic doesn't really have wholly convincing CH-correspondences
either.

It seems to me that constructive proofs are a special class of programs, and
fall under the purview of programming languages, but not the other way around.

~~~
mkehrt
Non termination is modeled as false: a function that takes an A and never
terminates has type A -> F, or ~A.

There's some connection between classical logics and concurrency, though it's
hard to get at. You can use callcc to implement threads, and Gentzen's
classical sequent calculus has multiple conclusions.

Here is a PhD thesis that uses modal logic to model distributed code in a CH
fashion: [http://www.cs.cmu.edu/~tom7/papers/modal-types-for-mobile-
co...](http://www.cs.cmu.edu/~tom7/papers/modal-types-for-mobile-code.pdf)

Classical logic is perfectly modeled with callcc, which has the type of
Peirce's Law, and allows one to write terms with the types of the law of
excluded middle or double negation elimination.
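
For what it's worth, the shape of Peirce's Law is visible directly in the type
of callCC from Haskell's Control.Monad.Cont (modulo the Cont wrapper); a small
sketch:

    import Control.Monad.Cont

    -- Peirce's Law, ((A -> B) -> A) -> A, realized by callCC inside Cont.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC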

~~~
mafribe

> Non termination is modeled as false:

That means the CH correspondence is not interesting. If all non-terminating
processes are falsity, types say nothing whatsoever of interest (from a
specification and verification point of view) about non-terminating
computation.

The T. Murphy PhD thesis doesn't deal with distribution because it lacks the
key feature of distribution, which is _partial failure_.

As far as I'm aware the work on modelling double negation by call/cc doesn't
actually provide a 1-to-1 correspondence between programs and proofs. I've
forgotten the details but I think only some programs come out being proofs.

------
abtinf
> “It’s logical, but logic has nothing to do with reality.” Logic is the art
> or skill of non-contradictory identification. Logic has a single law, the
> Law of Identity, and its various corollaries. If logic has nothing to do
> with reality, it means that the Law of Identity is inapplicable to reality.
> If so, then: a. things are not what they are; b. things can be and not be at
> the same time, in the same respect, i.e., reality is made up of
> contradictions. If so, by what means did anyone discover it? By illogical
> means. (This last is for sure.) The purpose of that notion is crudely
> obvious. Its actual meaning is not: “Logic has nothing to do with reality,”
> but: “I, the speaker, have nothing to do with logic (or with reality).” When
> people use that catch phrase, they mean either: “It’s logical, but I don’t
> choose to be logical” or: “It’s logical, but people are not logical, they
> don’t think—and I intend to pander to their irrationality.”

[http://aynrandlexicon.com/lexicon/logic.html](http://aynrandlexicon.com/lexicon/logic.html)

~~~
westoncb
> _If logic has nothing to do with reality, it means that the Law of Identity
> is inapplicable to reality._

That may be the case. If our brain is only capable of forming self-consistent
schemas (consider how it reacts when encountering new facts that contradict
existing schemas:
[https://en.wikipedia.org/wiki/Cognitive_dissonance](https://en.wikipedia.org/wiki/Cognitive_dissonance)),
then the world's apparent consistency may only be a consequence of our
contemplating it through the human brain. (To be clear though, it's
consistency within _single_ schemas—not between, where contradictions abound.)

> _If so, then: a. things are not what they are; b. things can be and not be
> at the same time_

Can someone be a hero and not be a hero at the same time? I think there are
some subtleties to this issue arising from the nature of language. Certainly
at our scale (i.e. not quantum scales—although even there I don't really think
'real' inconsistency is happening, but it's less clear cut) a certain kind of
consistency has very strong empirical support (it's probably our _best_
supported hypothesis!), so it would be adaptive for us to build models of the
world that would reflect that property—but this doesn't mean we've hit on
something deep about the nature of reality.

~~~
abtinf
To be honest with you, I just about fell out of my chair reading your comment.
Seriously, this is the first time I've posted a relevant quote from Ayn Rand
on the web and received a substantive and thoughtful response.

I will have to think on the points you raise.

------
pitaj
From what I know, pure logic or even math are useless without assumptions.
This is what reason and science give us: a standard set of axioms from which
logic and math can lead us to useful conclusions.

~~~
lgas
Where can I find this standard set of axioms?

~~~
baddox
I'm not sure what the parent commenter means by "reason and science" giving us
axioms, but for mathematics the most common axiomatic system is ZFC set
theory. Some of the axioms are easy to understand even without a mathematics
background:

[https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t...](https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory#Axioms)
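
For instance, the axiom of extensionality just says that sets with exactly the
same members are equal:

    \forall x \, \forall y \, [\, \forall z \, (z \in x \leftrightarrow z \in y) \rightarrow x = y \,]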

~~~
lgas
I would consider these to be mathematical axioms. I was looking for what pitaj
was describing as coming from outside of math. Maybe I'm just misunderstanding
what pitaj was saying.

------
spacehacker
I've been surprised to not find anything for ctrl-F "Bayes". It turns out that
traditional deductive logic is simply a special case of applying Bayes' rule
to work out the state of some binary variables given the others, where all the
probabilities involved are 0 or 1. See these examples from Barber's "Bayesian
Reasoning and Machine Learning" (p. 39):

[https://i.imgur.com/W1b0Wem.png](https://i.imgur.com/W1b0Wem.png)

[https://i.imgur.com/JOFvd9C.png](https://i.imgur.com/JOFvd9C.png)
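
As a rough illustration of the 0-or-1 special case (a made-up toy example, not
one of Barber's): encode the rule "A implies B" as a joint weighting that gives
probability 0 to any world violating it, and conditioning on A recovers modus
ponens from Bayes' rule.

    worlds :: [(Bool, Bool)]
    worlds = [(a, b) | a <- [False, True], b <- [False, True]]

    -- Unnormalised weights: worlds violating "A implies B" get probability 0.
    weight :: (Bool, Bool) -> Double
    weight (a, b)
      | a && not b = 0
      | otherwise  = 1

    -- P(B = True | A = True), computed by conditioning (Bayes' rule).
    pBGivenA :: Double
    pBGivenA =
      sum [weight w | w@(a, b) <- worlds, a && b]
        / sum [weight w | w@(a, _) <- worlds, a]
    -- evaluates to 1.0: B follows with certainty, i.e. ordinary deduction.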

~~~
cscurmudgeon
Deductive logic is sort of like the machine code for reasoning. It encompasses
everything, but it is hard to see in daily usage, just like machine code.
Bayes Rule is just one part of it.

If you look at the Mizar mathematical library you will find a formalization of
Bayes Rule using plain FOL.

In fact, one of my Logic II (an intermediate logic class) exercises was
formalizing a version of Bayes' rule in classical first-order logic.

------
s_kilk
"We just don't know"

[https://www.youtube.com/watch?v=Kh0Y2hVe_bw](https://www.youtube.com/watch?v=Kh0Y2hVe_bw)

------
woodandsteel
Interesting article.

I would add that Frege, Russell and Whitehead did not just want to logicize
mathematics. They thought that reality itself followed this logic, and so were
doing metaphysics. The later Wittgenstein is a decisive refutation of this
idea, and this is a key reason logic has gone out of fashion.

------
bluetwo
Logic is a tool. Like all tools, they hold no value until given a purpose,
then they take on the value of the task they solve.

(Just my opinion. Agree or disagree. That's cool.)

------
samirillian
Great summary/history lesson, but the title is a bit misleading. The author
spends very little time discussing the nature of logic per se.

------
juskrey
Just a tool to embrace the unknown.

------
catnaroek
An article titled "What is logic?" with no reference to computing? Does the
author still live in the 19th century or what?

------
lngnmn
A set of general rules for not losing a connection to reality.

Could be used in reasoning or computer programming.

~~~
metaphorm
this is amusingly missing the point. logic is, by its nature, an abstraction
away from reality. it's a set of specific (not general) rules for evaluating
statements of language (whether that language is natural, mathematical, or
artificial/computer) such that meaning can be disambiguated.

"reality", whatever it is, has no such requirement that it can be expressed in
unambiguous statements.

~~~
paganel
> it's a set of specific (not general) rules for evaluating statements of
> language

Not the OP, but you could say that no "language" exists outside of "reality",
and stretching it further you could also say that all types of languages
(formal, mathematical, artificial/computer), including the associated "rules",
are social constructs which depend heavily on the time and place of their
"existence", i.e. on "reality".

Until we meet some quantum-like entity which might "talk" to us from outside
the boundaries of this physical Universe we're doomed to be solipsistic: we
construct "abstractions" that are meant to distance us from "reality" only to
find out in the end that "reality" was behind those "abstractions" all along.

