I find this sort of allergic reaction interesting in its own right. Yes, there are pseudointellectuals who abuse the theorem, but they're just that, pseudointellectuals, and academics usually don't engage with pseudointellectuals. I suppose the situation is similar to Aaronson's despair at laypeople's misunderstanding of quantum computers, but as far as I can tell Gödel's theorem is virtually unknown among the masses, in any case much, much less known than quantum mechanics. So it's interesting that academics feel the need to have this sort of allergic reaction - in the case of quantum mechanics it's more understandable.
All the more interesting because Gödel, the man himself, was preoccupied with the philosophical implications of his theorem. This is the man who "proved" God's existence with modal logic, the man who spent decades after publishing his most famous theorems in a dusty study writing philosophical (and extremely rigorous) work on the implications of his theorems that hardly anyone read, the man who eventually starved to death because he thought - his philosophy led him to believe - he was being poisoned.
In fact, Gödel's theorem ought to be used more, not less, in philosophy. Some, like me, believe it has profound implications for moral philosophy. See: https://arxiv.org/abs/1805.08347
But this is a frustrating position to be in for someone like me, because philosophers don't understand Gödel's theorem, and mathematicians and computer scientists don't care for philosophy. Instead they spend their time writing pieces like this that justify not caring about philosophy. It's pretty lame.
I'm not talking about "pseudointellectuals" in this case, unlike with quantum mechanics. I'm talking about actual working mathematicians who use oblique references to Gödel to, e.g., dismiss formal methods or constructive logic as worthless.
This is extremely annoying, since Gödel's theorems, as the article rightfully points out, are precise statements about formal systems. There's nothing mysterious about any of it. When someone invokes Gödel you can usually take it to mean "I don't want to engage with this topic further" and not as a serious argument.
> This is the man who "proved" God's existence with modal logic,
I'm pretty sure that was a joke. The final assumption he makes in this proof is logically equivalent to "god exists".
I have a simpler description of the theorem that includes the details:
You can have at most 3 of the following 4 properties in an axiom system (you can think of it as a language):
1. Completeness: Every expressible statement can be either proved or disproved.
2. Consistency: There is no statement such that both the statement and its negation are provable from the axioms.
3. The system must be able to describe the natural numbers.
4. The system must use formal methods (the proofs are finitistic).
Some mathematicians nowadays hold a naturally Platonist view of mathematics and give up 4.
Some ultrafinitists, like Edward Nelson, want to give up 3.
In practical cases we can give up 1, e.g. a total functional programming language.
Some philosophers just want to give up 2 without justifying all the other assumptions.
In terms of moral philosophy, a much simpler route to the undecidability of nature, chaos theory, can be used instead of assuming 1, 3, and 4.
The case here, in my view, is much less philosophically interesting than the interpretations of quantum mechanics, which most physicists refuse to discuss.
I've only skimmed it, and I'm not very familiar with moral philosophy, but this seems bizarre. If I understand it correctly, your claim is:
1. Free will is uncomputability (undecidability)
2. A good action is a free action (and therefore uncomputable)
3. A bad action is an action that pretends an uncomputable system is computable, thereby denying agency and humanity
I think 1. is suspect because people are not proper Turing machines. They have limited storage and limited running time. They are not currently practically computable, but it seems physically possible to simulate them. I don't think Gödel's theorem applies, although practical uncomputability might be a working substitute.
If you predict that some person will take some action, that's almost always a prediction that they will take that action within the next ten seconds/hours/decades. Actually simulating them is impossible for various reasons, but Gödel's theorem isn't one of them. You can't prove whether an arbitrary program will halt, but you can prove whether an arbitrary program will halt within the next hundred thousand execution steps.
2. seems like a really bizarre way to define goodness. I think morality is subjective enough that you could define it like that, but I don't know why you'd want to. I think the conventional view is that free will is the requirement for both good and bad actions, not that free actions are by definition good.
I think 3. might cover almost every thought we have about other people. Internal computable models of other people are how we operate, as far as I know. Again, you could define badness like that, but I don't see the point.
> [page 8] Now the question paraphrases to: does there exist an algorithm for me to love my child? Of course not. So the distinction between [shallow benefits] and [deep] benefits is that: there is an algorithm to bring about [shallow benefits], whereas there is no algorithm to bring about [deep] benefits.
Doesn't that conflict with the idea that we are Turing machines? If we are Turing machines, then everything we do is the output of some (possibly uncomputable) algorithm, including loving people.
I don't think this engages with Gödel's theorem in a useful way, or even depends on it.
To be fair, its being physically possible to simulate people doesn't mean that the human mind is computable/decidable.
Even if the "low-level" physical behavior of simulated neurons could be formalized, there's the possibility that the "high level" mind could belong to a different "emergent" layer that can't be represented by an axiomatic system.
In essence, you would have a computer program that represents a human mind, but the answers to problems asked to that mind would be evaluated as human replies, not formal expressions; and human replies are not axiomatic.
If not, Gödel's theorem is irrelevant.
> [page 5] Just as humans can have free will because we can look at our own mental activities from a distance, so a program can be uncomputable because it can take as input its own self in data form.
But I don't think it holds up, because we don't have full access to our own mental activities, only to a grossly simplified model of our own mental activities.
In this sense, people aren't even proper regex machines. They have limited running time. But do you really believe the brain has less computational capacity than a regex machine?
> I think the conventional view is that free will is the requirement for both good and bad actions, not that free actions are by definition good.
What free will means is basically the core problem of moral philosophy, and different positions take different formulations of it. Consequentialists usually try to say that both bad and good actions are free (critics say consequentialists have absolutely zero freedom). Kantians say a free action following the categorical imperative is a good action. What I'm doing in this essay, though that's not the point of this particular essay, is showing a bijection between the categorical imperative and the halting problem.
If they're simple enough, they can be simulated by a brain, so a brain has more computational capacity. If you can make them as complex as you want, then they can match (and exceed) the computational capacity of the brain, just by incorporating all states a brain can have and pre-computing the transitions. For practical purposes it's almost always best to treat brains (and other computers, for that matter) as Turing-complete, but I think that strictly speaking they're closer to DFSMs.
But the point is that almost all predictions about people's decisions involve time bounds. When there's a time bound, those decisions become computable. "Will this program return 1" cannot be solved in general, but "Will this program return 1 within the next hour" can be.
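A minimal sketch of that distinction, with toy "programs" modeled as Python generators that yield once per execution step (the names and modeling are mine, purely illustrative):

```python
# Toy illustration: the *bounded* question "does this program return 1
# within N steps?" is decidable, unlike the unbounded halting question.
# "Programs" are modeled as Python generators that yield once per step.

def returns_one_within(program, max_steps):
    """Decide whether program() returns 1 within max_steps steps.

    Always terminates: the program is never run past max_steps steps.
    """
    gen = program()
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration as stop:
            return stop.value == 1  # halted in time; check its result
    return False  # didn't finish within the bound, so the answer is "no"

def halts_quickly():
    yield           # one step of "work"
    return 1

def loops_forever():
    while True:
        yield       # never returns
```

Here `returns_one_within(halts_quickly, 10)` is True and `returns_one_within(loops_forever, 1000)` is False, even though no algorithm can decide whether an arbitrary program *ever* returns 1.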
This is wrong. This DFSM does not halt: a single node with an arrow pointing back to itself on empty input.
I think you also need to have at least one transition for every symbol in the alphabet from every state for it to be a valid ε-NFA, and then it can be reduced to a DFSM, giving it a bounded running time.
DFAs can run in bounded time, because every symbol takes exactly one step to be processed, and there is only one way to process a string.
That also means NFAs can run in bounded time, by reducing them to DFAs first, but that's not a very satisfying explanation.
An NFA rejects a string if no path of choices ends in an accepting state. It's not intuitively straightforward to see whether that's the case, but it can be done in bounded time by exhaustively checking the possible paths (which is possible with a finite number of states and a finite input string).
You can't always perform that trick with Turing machines, because they can have infinitely many reachable states.
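A toy sketch of that exhaustive check (the automaton is a hypothetical example of mine): track the full set of reachable states, so every input symbol costs exactly one bounded pass and the running time is bounded by the string length times the state count.

```python
# Toy sketch: deciding NFA acceptance in bounded time by tracking the
# set of all reachable states, one pass per input symbol.

def nfa_accepts(transitions, start, accepting, string):
    """transitions maps (state, symbol) -> set of possible next states."""
    current = {start}
    for symbol in string:
        # follow every possible nondeterministic choice at once
        current = set().union(
            *(transitions.get((state, symbol), set()) for state in current)
        )
        if not current:
            return False  # every path has died; reject early
    return bool(current & accepting)  # accept iff some path ends accepting

# Hypothetical example: an NFA over {a, b} accepting strings ending in "ab"
trans = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}
```

So `nfa_accepts(trans, "q0", {"q2"}, "aab")` is True and the same call on `"aba"` is False. This subset tracking is exactly the NFA-to-DFA reduction, done lazily.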
Gödel's theorems have been around for decades, and have been incorporated into most modern philosophical thought. Maybe the "internet['s]" view is more skeptical, but that's largely a function of it being newer than most of the literature.
Gödel's theorems, as well as other post-modernist positions, are currently receiving a backlash, overdue and well justified IMHO; the fact that they mostly appear on the internet is more a function of timing than form.
Can you elaborate on how his philosophy led to his starvation?
Philosophy was the disciplinary home of formal logic until the latter part of the 20th century (meaning, research in formal logic largely took place in philosophy), and Anglophone philosophy departments are still full of people who are extremely competent at formal logic. I mean, look at the figures discussed in TFA: Boolos, Jeffrey, Hintikka, Quine, Smullyan, and Nagel were all philosophers. Are you really insinuating that Quine didn't know what he was talking about when discussing logic?
You make good points, but altogether my issue is that, at the current moment, few anglophone philosophers have more than a casual grasp of Gödel's theorem. How do I know that? I was in graduate school for philosophy, then dropped out, partly because philosophers who seriously engage in, e.g., both moral philosophy and computability theory are few and far between.
That's quite a bit weaker a claim than "philosophers don't understand Gödel's theorem". To back that up you need to point to some philosophers who (a) discuss Gödel's theorem in such a way that (b) they demonstrate their lack of understanding. As it is, you're just claiming that one particular subfield of philosophy doesn't have much overlap with another. How is that an indictment of the field? There are few computer scientists working at the intersection of quantum computing and database theory. There are few physicists working at the intersection of biophysics and string theory. It's not a meaningful criticism unless there's reason to think that there is something in the intersection that practitioners are culpable for overlooking, and you haven't provided any such reason.
That's because we're having a casual conversation, and the reader is supposed to pick up on context. This is not a proof or a paper.
People write many things on the internet, and it is still surprising to me how they choose to do that. I wouldn't be concerned about most of it, particularly if it is about topics I know to be deep. I for one looked into some articles on the internet at some point, Gödel or not, and made the mistake of thinking that people write about what they actually know.
As for Gödel, I'm sure you can find contemporary mathematicians/philosophers.
Another example was in grad school in mathematics. I knew one particular thing was off, but I couldn't place it. A long time has passed since then. Recently I came across a note from a math professor who had spent the time to go through all the books covering the subject, and explained what everyone had missed. I agreed with him.
Today's list-based interfacing with the internet, the nature of bite-sized information surfing, the resulting fuzzy attention, and the property-based worldview make it very difficult to transcend this paradigm of mechanical / single-threaded cognition without crossing some pretty deep and frightening oceans, so to speak.
Most importantly, _every one_ of the great forefathers of thought was entangled in philosophical and mystical thinking, which was not _only_ a "silly magical endeavour" only worthy of X century but actually central to their worldview and absolutely central to the development of "how the western world is" today.
Also, most of them were polymaths, and so much more intelligent and broadly scoped than we give them credit for. That we see pop-cultural icons like Neil deGrasse Tyson tweet that philosophy is not a real science and is basically "stupid" is telling, and a monumental crime against the history of "thought" in my eyes.
Anyway, I highly recommend the very scientific and very well made podcast "The Secret History of Western Esotericism" for a deep dive into the genealogy of all of western thought that shakes things up a bit while giving a nice broad overview of philosophy from before the Greeks until today: https://shwep.net
As a huge history/philosophy/math buff myself, I think the world is in serious need of a new historical paradigm that acknowledges that the borders between what we call Math/Philosophy/Science/Religion/Language are way more fluid than what we currently have written in our history books, and that such a reconnection to the historical roots of philosophy would be like reattaching humanity's umbilical cord after trying to separate ourselves too early from both nature and history somewhere after the Enlightenment.
I have struggled a bit with what I would call "coding brain", which makes it harder for me to appreciate abstract poetry or the improvisational character of social relations after a whole day of deep coding or deep math. Think Julian Jaynes's crazy theories, something to do with brainwaves or left and right brain halves. I really have no idea why you get as locked in as you do; it's probably an evolutionary mechanism so you don't go crazy, something that shamans, philosophers and mathematicians have in common.
If you force yourself (which almost no one does) to read through great poetry for a whole day, your reality tunnel begins to change from box-like to more fluid, synchronistic, associative, etc.
Then take it one step further and begin to read "difficult" philosophy like Heidegger, Deleuze, or Kant - really contemplate what they are saying while you read slowly (even fewer people do this) - and I promise you a whole new world will open up. Also try to develop the visual/spatial thinking that everyone from Euclid to Einstein employed.
New modes of thinking get unlocked, to use a modern metaphor.
Sounds a bit crazy, but anyway that's where philosophy, math, poetry and thinking in general get real interesting!
Did you see how much reading he has done across a broad scope of fields? I would respect and be interested in anyone who has read even half of what he has read. Why are parts of the new-atheist crowd always so quick to violently dismiss people, even when they are extremely well read?
I would never do the same. For example, I disagree with Dennett on a lot of things, but I still have respect. Also, I am not religious.
If your knowledge of "spirit" stems from pop-culture phenomena like Life of Pi, and your entrance to philosophy is through the new-atheist / neo-enlightenment crowd, of course you won't understand all of philosophy - just as if you angle your way in from any exclusive school of thought.
The whole point of his text is that if you don't actually read the philosophy, and do it with a broad scope, or do the actual ritual, you won't "get" it. Just like you won't get meditation by reading about it, as Sam Harris points out.
In my eyes Eliezer Yudkowsky is very clever, but outside a narrow field he is just wrong, and a perfect example of the militant, unempathetic new-atheist crowd, which is historically revisionist and ultimately chauvinistic in its approach to philosophy.
But that is just my opinion and you don't have to agree, so I don't get the downvotes, honestly.
This is also why many actual philosophers agree that his blog is a bit too sophistic (although I agree he has some very interesting and well-written posts!): https://www.reddit.com/r/askphilosophy/comments/425h9e/why_i...
I say start with your own curiosity instead of hopping on a bandwagon; doing that always leads you astray, at least that's how it has been for me. Just as Eric writes: try to actually read the texts without too much framing, try to meditate, try to adopt different mindsets, try to have historical hyperstitional dialogues, etc.
This means reading the canon yourself, engaging in meta-analysis from different angles: left-wing, right-wing, positivistic and hermeneutic. Read history through different lenses. That's the whole point: to get actual eureka moments from new modes of thinking, instead of labelling yourself a rationalist or whatever and then reading only second- or third-hand sources while constantly having an agenda in the back of your mind (we all do this).
We don't like organised religion, snake-oil salesmen or Deepak Chopra either, so the whole "let's put everything from neo-Platonism to wishy-washy pop astrology into the same trash can and set it on fire" attitude is extremely saddening to many historians/philosophers.
This is very much in line with my conclusions so far. To me this is the real mind of a hacker: someone who experiments with everything on equal terms through metacognition and embodied experimentation, instead of falling into either yuppie capitalist materialism or becoming a soul-suckingly boring taxonomy-specialist slave. (Yes, I'm partially talking to myself here.)
To me this quote pretty much sums it all up:
"Rituals are programs written in the symbolic language of the unconscious mind. Religions are program libraries that share critical subroutines. And the Gods represent subystems in the wetware being programmed. All humans have potential access to pretty much the same major gods because our wetware design is 99% shared.
Only...that cold and mechanistic a way of thinking about the Gods simply will not work when you want to evoke one. For full understanding, the Apollonian/scientific mode is essential; for direct experience, the Dionysian/ecstatic mode is the only way to go."
Perhaps the diction could be improved, but I think what the piece points out is a dastardly trap that awaits just as you're grasping (whatever can be grasped of) Gödel's Wonder.
I speak as a layman.
In consideration of this point and the available evidence, I remain deeply unconvinced there is an unrealized, grand philosophical insight to be derived from whatever Gödel's later, more obscure work had to say about the incompleteness theorems. Likewise, Fermat was probably the foremost authority on his most famous conjecture during his life; but the common consensus is that he did not, in fact, have a "truly marvelous proof" of FLT which he couldn't fit in the margin.
Everyone thinks silly things from time to time, their prior work notwithstanding. I'm inclined to believe Gödel wasn't as invested in the philosophical implications of his earlier work as you believe. Or perhaps he was, but I really doubt he actually arrived at a novel, rigorous result from it. Occam's Razor suggests we would know if he had.
Aside from that, inability to understand a thing should really count against it. Or do you suppose Mochizuki's Inter-universal Teichmüller theory is strong because it's borderline incomprehensible?
Like I said, Occam's Razor suggests there really isn't much there.
I'm not looking to get myself banned, but the parent comment is as original, authentic, and relevant a comment as you will find here. Yet there it is, too light grey to read. Serious question: what is going on?
The comment casts a lot of aspersions on academics but doesn't actually justify any of it.
At the end of the day, I agree with the essay (too many people misuse Gödel, starting with GEB and onward), but I don't like the justification behind the post. After all, the First Theorem is used to drive towards the Second. It's intellectually lazy to cover only the first without touching the second: "Any consistent formal system S within which a 'certain amount of elementary arithmetic' can be carried out cannot prove its own consistency."
In other words, I think the essay falls prey to exactly what it's criticizing: a purposefully neutered explanation of Gödel used to arrive at some unrelated conclusion.
Assuming this is Hofstadter's Gödel, Escher, Bach, I'd like to see support for this claim. GEB is just a popular exposition of Gödel and a paean to self-reference. It's not a rigorous work, but I don't know of any terrible violations it involves.
It's true that he uses it as an analogy, but not a loose or vague one. For example, "But it is important to realize that if we are being guided by Goedel's proof in making such bold hypotheses, we must carry the analogy through thoroughly [...] If our analogy is to hold, then, 'emergent' phenomena would become explicable in terms of a relationship between different levels in mental systems." p708
He's literally taking the logic of Goedel's proof, transferring it to brains step by step, to argue for a particular theory of consciousness.
It is odd to think of someone "using" or "misusing" Godel - if the points are salient and clear, how is it misuse? (At least, in this case, where GEB is a philosophical text, not a scientific one.)
I don't agree. The more modern result of incomputability has certainly decreased the importance of the first incompleteness theorem, but the idea of unprovability is still important.
And I don't think it's a result that most high-schoolers can prove (with rigor). I would like to point out that your reference does not in fact even prove it (even informally); it only proves that there are incomputable functions. Such a function does allow you to make an unprovable statement, but that is an extra step at least as large as the given proof.
There are a bunch of important details in the formal proof (as Gödel wrote it) that are not easy. The method of finding a fixed point for the unprovable statement is definitely not something I could repeat from memory. Understanding why the "certain amount of elementary arithmetic" is needed is one of the details I feel is important to understanding the theorem.
So indeed, we reach a contradiction, but the hypothesis was:
> To make things even easier, we can just throw out programs that loop infinitely or don’t return a 1 or a 0
so the contradiction tells us that \bar f is not in this set. More specifically, \bar f may loop infinitely at some point; I guess it does loop when you feed \bar f to itself.
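A toy version of the diagonal construction behind \bar f (a finite, illustrative enumeration of total 0/1-valued programs; the names are mine):

```python
# Toy diagonalization: f_bar flips the i-th program's answer on its own
# index i, so f_bar differs from every program in the enumeration.

programs = [
    lambda n: 0,      # f_0: constant 0
    lambda n: 1,      # f_1: constant 1
    lambda n: n % 2,  # f_2: parity
]

def f_bar(i):
    """Disagree with program i at input i."""
    return 1 - programs[i](i)

# f_bar(i) != programs[i](i) for every i, so f_bar equals none of them.
```

In the real argument the enumeration is infinite (all programs), and the same flip shows \bar f can't be computed by any program in the enumeration, unless, as noted above, it escapes by looping on some inputs.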
Not at all. An unprovable statement is, in fact, exactly such a non-computable function. Supposing S is a non-computable function, all we need is a meta-logical jump from "computing S(x)" to "checking if the statement 'S(1) = 0' is correct" -- the latter of which is also trivially not possible (because S is not computable).
In any rigorous logic it is not. An unprovable statement is a non-existent value, whereas a non-computable function is a mapping to a non-existent value. Sure, there are ways of turning one into the other (though there is no isomorphism), but that does not make them the same.
> all we need is a meta-logical jump from "computing S(x)" to "checking if the statement 'S(1) = 0' is correct" -- the latter of which is also trivially not possible (because S is not computable).
As the other comments point out, you can't check just one case. \bar f's definition makes it computable at a bunch of inputs.
The lecture you linked in the article actually has a follow-up where it shows that a formal language would be able to prove any computable function's value. It isn't hard, but it requires some thought.
I need to check my undergrad metalogic notes, but I think they are "almost" the same. You might be right, though; it's a very nuanced point you're making. IIRC, Gödel and Löb used the "|- Prov(...)" (or Bew(...)) predicate to essentially make them the same.
† The knowing is external to the system at hand, and the proving is internal to the system at hand. I go into detail about this in my blog posts linked above.
So I'm not sure it only applies to some special cases, I think several relevant incomputable problems might have this characteristic.
The nature of knowing what is true is to capture a slice of reality in a model, and a model by its essence must leave out details, considering only elements which are important to its function. It seems that all of language, including anything we create with it, is an incorrect but useful model of Truth. Different models describing different things are expected to produce contradictions, but this math thing was talking about itself and it couldn't even do THAT consistently, right?
If Gödel's theorem seems to be blown out of proportion as a synecdoche for the hundreds of years of collective malaise about unattainable truth, it is at least appreciable (and relevant to this wider domain) that Gödel devastated the world by showing how logic fails at proving the consistency of a simple mathematical system (correct me if I'm wrong: it's not just axiomatic systems, "there ain't no such animal" as a consistency proof).
Anyway, there is a popular distinction between lowercase-t truth and uppercase-T Truth. The uppercase form represents knowledge we can never understand (outside our Circumference), and the lowercase form represents what we understand now. Since our understanding constantly changes over time, I imagine truth being a function of time, t. As t approaches infinity, truth(t) approaches this limit of Truth.
I highly recommend this poem (analyzed by Nerdwriter) on the expressibility of truth as well: https://youtu.be/55kqNg88JqI
Yes, but that larger system itself contains statements that are true but unprovable in itself, and so on in an infinite regression.
And before one hand-waves it away as a mere 'hole' that we can safely ignore, Chaitin extended Godel's work to show that there are in fact an infinitude of 'relevant' mathematical statements that are unprovable within any formal system S:
Thanks. And oops, I didn't know that the result was proven in a larger system. Did some searching and found this short answer and a longer description that I liked.
This statement in particular helped:
> The Incompleteness Theorem prevents ZFC from proving its own consistency and for that we need to have an additional axiom, giving us a stronger theory which can then prove ZFC is consistent
but you're extrapolating beyond gödel's incompleteness theorem here, which only makes that sort of assertion for axiomatic systems, as the article exhorts us not to do, do you not?
what does seem self-evident is that logical systems cannot encompass the whole of human experience (yet?), so we continue to search for a more complete system of thinking. gödel's work is probably only incidental to this, not a completely explanatory theory.
The only way out I see is uncountably many axioms.
I don’t think Gödel was demonstrating the "limitations" of logic. I think he was demonstrating the limitations of lower logics so that we could wield higher logic. Just as Cantor broke through into the transfinite.
“From the paradise, that Cantor created for us, no-one can expel us.”
I don’t see Gödel’s theorems as expressing the failure of logic, I think the holes(incompleteness) it illuminates are doorways.
Gödel prompted people to explore things like plurivalent logic and fuzzy logic, at least in the “west”.
A HN comment even pointed out that fuzzy logic won't fix paradoxes: https://news.ycombinator.com/item?id=7719214
>I don’t see Gödel’s theorems as expressing the failure of logic,
It shows the limitations of formal systems, which we live within and which the universe is.
> I think the holes(incompleteness) it illuminates are doorways.
No, it doesn't; it shows the limitations of understanding.
And for those who say that we need to use fuzzy logic (as used in neural networks and Bayesian probability theory): it still won't resolve the paradoxes, or what Gödel showed us.
Here's some food for thought: imagine perfectly crafted sentences ranked by logical induction (list-like, where the wrong one gets deleted and the right one moves one list up). Even if we manage to make all the sentences consistent (the ones that are true), there will still be one that can be neither denied nor approved.
So if someone wants to build a Gödel machine (a philosopher machine), he can use reddit/HN as a Mechanical Turk (pitch it as a seed-AI Gödel machine; this idea is obviously absurd but interesting nonetheless).
Edit: mathematics, as Russell said, is the most precise tool for reaching clarity, so imagine what language fails at attaining.
But in fact, the whole proof is based on the self-application problem, which is roughly analogous to the liar's paradox. But the proof doesn't say anything about other, non-"reflective" cases. Theoretically, it might be possible to build a program that makes correct decisions whenever that's possible, and otherwise ends up in an infinite loop (e.g., in the self-application case). I wonder if there are any efforts in this direction; so far I've never heard of any.
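There is a trivial version of that idea: plain simulation already semi-decides halting. A minimal sketch, with toy programs modeled as Python generators (my illustration, not a reference to any existing project):

```python
# A semi-decider for halting: answers correctly whenever the program
# halts, and otherwise loops forever itself -- the "correct decision when
# possible, infinite loop otherwise" behavior described above.

def semi_decide_halts(program):
    gen = program()
    while True:              # may genuinely never terminate
        try:
            next(gen)        # run one more step of the program
        except StopIteration:
            return True      # the program halted: a correct answer

def halting_program():
    yield                    # one step of "work", then halt
    return 0
```

The hard (and in general impossible) part is the other direction: also answering "doesn't halt" for every non-halting input. Partial progress exists, e.g. termination provers, but Gödel/Turing guarantee some cases stay out of reach.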
Now we know the program in the second case never halts, but it was not an easy proof; the first one is still open.
Isn't it the case that if B is correct, then the mind is contained within some axiomatic system, therefore A is wrong (i.e. our ability to learn truth is indeed limited by G's theorem)? I do not see how both A and B can be correct at the same time.
But if human minds are Turing machines, then they are certainly axiomatic systems, since Turing machines are axiomatic systems. This contradicts his initial point, which concludes that not all of our sources of knowledge are axiomatic in nature.
Gödel's theorems are profoundly interesting in their own right. They also required significant creativity and ingenuity for their time. But if there are conflicting "interpretations" of what they "actually mean", then that means most must be incorrect (or more charitably, extremely incomplete).
Frankly I'm flabbergasted you would compare (mis)interpretations of mathematical theorems to the legitimate subjectivity inherent to literary criticism...
Additionally, interpretations are always subjective, because the interpretation of something is unique to an individual's subjective viewpoint.