Easier to spot the trick when it's all written in code. "There exists" is a huge overreach from a computational perspective; it opens the door to "this sentence is false"-type logical knots and unbounded complexity.
I know it's the first lesson in discrete math, but "there exists a _" makes no sense from an algorithmic perspective. How do you find it?
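The algorithmic reading of an existential is a witness search, and that search is only semi-decidable: if a witness exists you eventually find it, but if none exists you loop forever. A minimal Python sketch of the problem (the predicate and the `bound` cutoff are just illustrative assumptions, not anything from the article):

```python
from itertools import count

def find_witness(predicate, bound=None):
    """Search the naturals for an n where predicate(n) is true.

    If no witness exists and bound is None, this never returns --
    which is exactly the computational trouble with "there exists":
    a failed search is indistinguishable from a slow one.
    """
    for n in count():
        if bound is not None and n >= bound:
            return None  # gave up; absence of a witness was never proven
        if predicate(n):
            return n

# A witness exists, so the search halts:
print(find_witness(lambda n: n > 10 and n % 7 == 0))  # → 14
```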
I think I was confused about the when function...
I was at first thinking of when as when in Clojure, which is essentially just a macro for (if ... (do ...)):
(macroexpand '(when 1 2 3 4))
;; => (if 1 (do 2 3 4))
but that is not what it seems to do in PM-Lisp. Take the example of the `=` definition:
(def = (and (when A B) (when B A)))
If this had the same semantics as Clojure's when, it would be testing the truthiness of the evaluation of each when expression:
so if I had (and (when 3 5) (when 5 3)), it would evaluate to (and 5 3), which would evaluate to 3, which is truthy.
So that got me stuck in the article. But it seems that when is more like a test/assertion... (when x z) says that whenever x holds, z is implied (i.e. the same way one mathematical rule implies another).
I'm going to reread this article and try to understand better.
Don't know about Clojure (is that a Lisp?), but here when is implication in formal logic: "A implies B". E.g., "if it's raining, the streets are wet".
It doesn't "do" anything, just sets a constraint which can be matched against later. I took a class on Prolog in school, which works by back-matching against conditionals like that. I remember just sorting a list being mind-bendingly recursive.
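The idea that a conditional "doesn't do anything, just sets a constraint which can be matched against later" can be sketched as a toy forward-chaining rule engine. This is a hypothetical illustration in Python (the fact and rule names are made up, and real Prolog works by backward chaining, not this simplified forward pass):

```python
def forward_chain(facts, rules):
    """Repeatedly apply implication rules (a, b), read as "a implies b",
    until no new facts can be derived. Defining a rule does nothing by
    itself; it only fires once its condition matches a known fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# (when raining wet-streets) as a dormant constraint: it only matters
# once "raining" is actually asserted as a fact.
rules = [("raining", "wet-streets"), ("wet-streets", "slippery")]
print(forward_chain({"raining"}, rules))
```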
I still think there's some funny business going on in the intersection of computable functions, infinite sets and "there exists a _" style conditionals. Can't put my finger on why "there exists a number of the form 2n + 1" feels fine, but "there exists a function in the set of all possible functions" feels wrong.
The imperative if in programming is different from the logical if. But it isn't disconnected from logic.
In fact, it is closely related to the logical and: it is and with short-circuiting, enabling the left expression to control effects in the right.
In Lisp, the two-form if, namely (if x y), can be replaced by (and x y), because and has the right logic and result value, and also the short-circuiting semantics that y is not evaluated if x is false.
If we want the logical if, we can use the equivalent expression (or (not x) y), which has the same truth table:
 x    y  | (if x y) | (or (not x) y)
---------+----------+----------------
nil  nil |   nil    |       t
nil   t  |   nil    |       t
 t   nil |   nil    |      nil
 t    t  |    t     |       t
The if/and relationship of course shows up in other languages:
if (pointer != NULL) {
    if (pointer->foo == 42)
        do_something();  /* "do" is a C keyword, so name it something else */
}
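The equivalence in the truth table above is easy to machine-check. A small Python sketch enumerating all four cases:

```python
# Verify that logical "if x then y" equals (not x) or y
# over every boolean combination.
for x in (False, True):
    for y in (False, True):
        logical_if = (not x) or y
        # material implication is false only when x is true and y is false
        expected = not (x and not y)
        assert logical_if == expected
print("all four rows match")
```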
There's no overreach whatsoever and you can eliminate "there exists" altogether without any loss of expressivity.
"There exists a number X such that P(X) is true" is nothing more than shorthand for "It is false that for all numbers X, P(X) is false."
In other words, if you accept that it's possible to reason about "for all", then you must accept that it's possible to reason about "there exists", since all "there exists" does is assert that it is not the case that the property is false for every number.
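Over a finite domain this duality is directly executable: Python's any/all obey exactly the De Morgan relationship described above. A small sketch (the predicate and domain size are arbitrary choices for illustration):

```python
# "There exists x with P(x)" == "It is false that for all x, P(x) is false."
domain = range(100)
P = lambda x: x * x == 49   # an arbitrary example predicate

exists = any(P(x) for x in domain)
dual = not all(not P(x) for x in domain)
assert exists == dual
print(exists)  # → True (x = 7 is a witness)
```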
I don't think OP was saying that "for all" is any more meaningful from a computational perspective than "there exists". The point is that both are higher-order functions that return an answer for any function on all natural numbers (or inputs in a more general setting).
A set doesn't need to be infinite for us to not be able to say something about all its members. Each busy beaver number is very much finite, but you'd run out of universe if you tried calculating the BB value for an input as small as 12.
Whether something is practical to compute is a very different matter from what OP stated was their concern, namely logical knots, potential inconsistencies/contradictions of the "this sentence is false" type, and unbounded complexity.
It really isn't. Math proofs need to be not only finite but small enough to fit in the universe to be discovered. This strictly-finite nonsense is a leftover from a time when we could only do symbol manipulation with a human brain.
Lisp will always be relevant since it's based on math. For it to become irrelevant we need to discover or invent a completely new kind of math, and we simply haven't evolved to that point.
Btw, is there a reason Russell couldn't see that the barber — who shaves everyone who doesn't shave himself — can't consistently shave himself? He should have known from this (even without Gödel) that self-reference is a problem, and hence that applying his Principia Mathematica to itself would be doomed. Why didn't he think that way, given that he discovered the paradox?
The barber's paradox cannot be expressed in Principia Mathematica (PM). PM defines a theory of sets, classes, 2-classes, 3-classes, etc., so that an N-class can only contain elements that are K-classes where K < N. The barber's paradox cannot be formulated in such a theory.
Consequently PM does not have an issue with the barber's paradox. PM is incomplete in that there are truths that can be expressed but cannot be proven, but there are no known paradoxes/inconsistencies.
From a programming point of view, PM is like a statically typed programming language: if something type checks in PM, then it's correct. But as anyone who programs in Rust can tell you, sometimes you know that something is correct but it's a pain to express in the language's rigid type system — and sometimes it's simply impossible. That's the problem PM has, and the sense in which it is incomplete.