
That kind of behavior can happen at the threshold of Hypothesis' internal limit on entropy - though if you're not hitting HealthCheck.data_too_large then this seems unlikely.

Let me know if you have a reproducer, I'd be curious to take a look.


(not OP but I would be surprised if the answer wasn't) yes, because you're changing the order in which the random draws are interpreted. But this isn't a problem in practice because you generally aren't changing the generator in the middle of debugging a failure.


So this means refactoring becomes potentially difficult. The gem is still a great accomplishment and very useful, but I'd have to engineer my way around this issue before using it with something like a Rails model, whose shape could change.

@OP:

I wonder if the README (and possibly runner) should suggest writing a test-case that doesn't rely on PBT when the user wants to preserve a case for future testing.

The issue here is that if you're saving a singular example and it represents a weird corner case, it's totally conceivable that a small change will result in an invisible change to that test case.

Another idea: it'd be great if the test could simply take examples that are failing and add them to a `failing_examples.rb` or some such. I know I'd use a feature like this quite a bit.
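For comparison, this is roughly how Hypothesis handles it on the Python side: its `@example` decorator pins a concrete input so it is always re-run, no matter what the random generation does. A minimal sketch (the property and names below are made up for illustration):

  # @example pins a previously-failing corner case so every future run retries it.
  from hypothesis import given, example, strategies as st

  @given(st.lists(st.integers()))
  @example([])                      # corner case found by a past run, now preserved
  def test_sorted_is_idempotent(xs):
      assert sorted(sorted(xs)) == sorted(xs)

Hypothesis also keeps a local example database that replays recent failures automatically, which is close to the `failing_examples.rb` idea.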


I think there are usually three actions a programmer can take when PBT fails.

- Create a test case that doesn't depend on PBT as you suggest.

- Fix the production code being tested since its failure is an unexpected bug.

- Fix the PBT itself. This means the programmer had a wrong assumption about the test target.

I think it's difficult for the tool to know which choice is the best on a failure. But if there are any good ideas, I'd like to incorporate them. :)


>yes, because you're changing the order in which the random draws are interpreted. But this isn't a problem in practice because you generally aren't changing the generator in the middle of debugging a failure.

Correct. The test inputs are determined by a seed and generators (including the order of generators).
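A toy sketch of the point, using plain Python `random` rather than any particular PBT library (names made up): the same seed yields a different stream of inputs once the order of draws changes.

  import random

  def inputs_v1(seed):
      r = random.Random(seed)
      return r.randint(0, 100), r.random()   # int drawn first, then float

  def inputs_v2(seed):
      r = random.Random(seed)
      return r.random(), r.randint(0, 100)   # float drawn first, then int

  print(inputs_v1(42))
  print(inputs_v2(42))   # generally shares nothing with the line above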


I'm hoping someone can enlighten me here. My understanding is that there is a 748-state Turing machine [0], which halts iff ZFC is inconsistent (Thm 1). But this machine is a "physical" object, in the sense that we can materialize it on a computer and run it. Though we don't have the computing power for this currently, there is nothing in principle stopping us from running this machine for BB(748) steps: if it halts, we have proven by Thm 1 that ZFC is inconsistent. If not, we have similarly proven that ZFC is consistent.

I want to stress that this is key to my confusion. This is not just some abstract result; this is a computation that we can perform and draw a real value from.

Of course, I'll now fall back on Gödel's second incompleteness theorem and say that one cannot prove, inside ZFC, that ZFC is consistent. But if the above Turing machine halts, then we proved ZFC is consistent - a contradiction!

Where is the mistake here? My current guess is there is a technical detail in the proof of Thm 1 which uses a stronger metatheory than ZFC to show that the 748-state Turing machine halts iff ZFC is inconsistent. This is not a contradiction, because yes, we can run the Turing machine for BB(748) steps, but that will only show that ZFC+ proves ZFC is consistent, which is already well known - ZFC + there exists an inaccessible cardinal does the job.

However, I haven't looked at the paper in detail to know whether this is the case. Does anybody who has thought deeply about this problem have insight to offer?

[0] https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-unde...


The issue is the "run the turing machine for BB(748) steps" part. We don't know what BB(748) is. If the god of busy beavers came to us and told us that value, then we could (in theory) run the TM that long and just like you say, that would prove whether ZFC is consistent. But in order for us mere mortals to compute BB(748) we would effectively need to figure out if this specific 748-state TM ever halted (along with all the other 748-state TMs).


To put it another way, an oracle telling us an upper bound on BB(748) would be strictly more powerful than an oracle telling us ZFC is consistent.


We don't need to know BB(748), just an upper bound on BB(748).

... which means we can't prove any upper bound on BB(748) within ZFC.


A lot of math draws conclusions from something not possible in practice (e.g. "let's take all the natural numbers and ...").

All the parent is saying: if it were possible, even theoretically, to learn the answer to the BB problem, then we would have proven something that cannot be proven, as shown by Gödel incompleteness.


> This is not just some abstract result; this is a computation that we can perform and draw a real value from.

No, this isn't a computation we can perform. There isn't enough usable energy in the visible Universe to increase entropy enough to run this computation.

Even if we built a computer that would use all matter and energy in the Universe, even if the computer only had one task and even if it ran the task as efficiently as is physically possible, it would not complete the computation.

So this is kinda where mathematics gets disconnected from physics and reality in that we can talk and reason about those things but they no longer have physical meaning.


It doesn't matter whether we can physically do it, as long as we can mathematically do it. In math running a TM for an arbitrary, finite number of steps is not a problem. The actual answer to OP's question was given in one of the other subthreads, namely: the result tells us that BB(754) is not computable in ZFC. Some 754-state TMs halt, others run forever, but there is no way to figure out (without an oracle) which of them runs the longest while still eventually halting.


>"It doesn't matter whether we can physically do it,"

Hmm. I'm not sure that I agree - philosophically if you could prove that something is computable mathematically and yet at the same time know that physically it cannot be done due to the laws of physics, I would say that imposes a "second order" halting problem.

By which I mean if you had some mathematical algorithm that could be proven to tell you some property of a system like halting, and you knew the number of steps required by that algorithm to produce that answer, if that number of steps is beyond the capacity of the physical world to provide, then there is a set of algorithms that can be proven to never be decidable.

I'm sure you are thinking, oh maybe we get more efficient, but to solve the bounding problem itself you must provide a constant time solution within the bounds of the universe, or else the "mathematical" solution is itself undecidable.

And even if you found a fast, low constant time solution (the holy grail), it still must be shown that the categories of problems are themselves infinite variations of finite categories, so that your low constant time solution could conceivably be applied within the limits of computation capacity, or else there will always be for some large enough set of problems some that are not decidable.


If you insist that only tractable problems are decidable (or computable), you don't really get a nice or useful theory of computation. You need a more fine-grained hierarchy.

Almost all real numbers are uncomputable (computable reals have measure zero; in other words, the probability of an arbitrary real number being computable is exactly 0). BB(754) is uncomputable in this sense even though its definition is extremely concise. Still, its value could be computed by a machine stronger than a TM (but it would again be unable to compute its own BB problem).

Of the countable subset of reals that are computable, almost all are like 𝜋, in that computability does not mean being able to actually calculate or represent its exact numeric value in the physical world. Only that it's in principle possible to compute it to any precision desired – but of course this isn't possible in reality either. Only in math.

Then there are numbers that are unbelievably large, like 3^^^^3 or (the immensely larger) Graham's number G, which nevertheless admit an incredibly compact definition, and we can prove various properties of them. Yet their numeric representation would not fit in our future light cone (either in space or in time). Where do you draw the line? After all, G is an entirely ordinary natural number, and indeed almost all natural numbers are greater than G.

Complexity theory draws a line, somewhat arbitrarily, between polynomial-time and superpolynomial-time problems, the former being tractable and the latter intractable. But a number might be easily defined in terms of a tractable function and still be entirely unfeasible to ever calculate or represent in the physical world. (Such a number wouldn't even have to be particularly large if the function that defines it grows sufficiently slowly.)

Nowhere is there a distinction made between "computable in reality" and "only computable in math", because it's impossible to formally define such a distinction. Some numbers are obviously within our reach, almost all others obviously aren't. In between there's a huge gray area.


Hmm, I think its the same problem, you are somewhat arbitrarily drawing the line between "math" and the universe (or maybe I'm agreeing with you and don't realize it).

A halting/decidable problem is one that is precisely about tractability - does it end, and can you decide that. The maths is about whether there is a tractable solution, in theory ignoring any energy/mass requirements.

The maths of the maths of the tractability issue is also one of tractability - if you provide an infinite solution to an infinite problem then is it a solution or an admission of defeat? You can't even decide if you can decide that you can decide.

If there is no maths for the maths for the decidability question of is it tractable (not even energy computeable), then in my mind there is no difference between the two, i.e. you have no higher order reduction with which to express the lower order problem. Without a constant time solution, there may not be enough math to express the solution to the problem, i.e. this may just be another set of infinities, as you mentioned the integers, irrational numbers, precision, etc. (which are lower order entities for which we have tractable mathematical solutions).

And the lower energy bound of the universe to the set of problems to me remains interesting, in that maths itself is an invented set of relationships, in the sense that the set is invented and less in that the relationships are invented (they exist but we value certain relationships over others, so the set is invented), so any maths that cannot survive and exist within the information space that is the universe is to some degree irrelevant - if there is no way to relate to it, does it exist? I think so, we simply lack the ability to process or observe it.

Consider, for instance, a constant time solution to the decidability problem for which the maths to prove it requires a set of equations and theorems larger than the processing capacity of the universe. Then a solution exists, perhaps, but we will never discover it. It doesn't mean it doesn't exist, but it does not exist for us, and we can neither prove nor disprove that one exists.


I think that it would be fairly easy (meaning that I think I could do it in 8 hours) to build a Turing machine that prints 10^1000 1's on its tape and then halts. We would not be able to construct and run a physical Turing machine that could do every move physically (by moving electrons) within 100 billion years, but we can still write the proof that it halts, probably in less than 3 pages. (In fact, we could probably tell you which state it is in for any i in the set {1, ..., halting step} without ever constructing and running a physical machine.)

On the other hand, if the "shortest" proof of halting is more than 10^20 pages long, then we would have major problems "writing" it.
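A Python analogue of the machine described above, with the exponent shrunk so it actually finishes (a sketch, not a literal Turing machine): the structure, and the one-line halting argument, are the same for 10^1000, only the running time changes.

  def print_ones(n):
      count = 0
      while n > 0:       # n strictly decreases and is bounded below by 0,
          n -= 1         # so the loop provably terminates
          count += 1     # stands in for "write a 1 on the tape"
      return count

  assert print_ones(10**6) == 10**6   # 10**1000 would also halt, just not in this universe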


If you restrict yourself to problems that fit in the physical universe, everything can be done using finite automata. That’s a bit boring.


I agree with your overall point, but note that computation doesn't actually increase entropy (we can do reversible computation, e.g. using quantum gates, Toffoli gates, billiard-ball computers, etc.)


But if the universe is infinitely large then any finite thing should fit in it, right? Or are we saying that BB(748) can be infinite?


Even if the universe is infinite, you can't use its infiniteness because you can't communicate partial results across infinite distances.

It is natural to think that the more time you have, the further you can potentially travel. But when it comes to the universe, the opposite is actually true: the more time passes, the less of the universe you can reach. A lot of the universe you can see today is not reachable at all, meaning that even if you shine light back at it, that light will never reach its destination.


Computational capacity must not be able to travel faster than the speed of light, yes?

We are told that quantum entanglement cannot transmit information, I am unaware of how rigorously this has been proven/disproven. On the off chance there is something the scientists have missed, the requirement to travel might not be necessary: either through some advanced entanglement (a novel approach or a not-yet-understood state of matter), or, reaching for a more exotic theory, some additional dimension enabling warp or wormhole like behaviors.

Then computational capacity would depend on some ratio of power (to entangle, set up facilities, etc.) to time spent not doing these things, and might possibly have an upper limit, or not. Perhaps this is a halting problem in itself?


> We are told that quantum entanglement cannot transmit information, I am unaware of how rigorously this has been proven/disproven

This has been rigorously proven [1]. If you are transmitting information you are doing something in addition to using entanglement.

[1] https://en.wikipedia.org/wiki/No-communication_theorem


Only because our universe is currently expanding. If it weren't, or if the expansion stopped in the future, then given enough time you could eventually distribute the task over a large enough region to perform the computation and afterwards combine the results in one place. And one would probably have to throw in a couple of technical requirements, for example that the energy density does not decrease faster than the future light cone expands.


It can be finite, but you have to definitively prove that the non-halting cases never halt (which, being infinite runs, can't be checked by brute force)

Look, you're arguing the Collatz conjecture can be proven by counting up by 1 & seeing when a number hits a loop without reaching 1 (which, if there is such a case, would eventually refute the conjecture, but if not, you'd never know whether you were about to hit an answer or not)

Mathematical uncomputability for problems with answers is a thing. Read up on Gödel's Incompleteness Theorem

& some conjectures have been refuted by computers: https://math.stackexchange.com/questions/2638897/conjectures... but until it's done the question remains unknown


Spacetime might be infinitely large, but that doesn't imply that there is infinite matter or energy.

That said, this is kind of a dumb argument, because far lower k values already have BB(k) values that exceed the apparent information value of the universe at any given instant. Maybe there is infinite energy and matter, but that's also irrelevant if we can't perceive more than a finite subset of it.

Edit: well, I guess what would matter would be the information value of time, multiplied by enough bits to store the machine; I'm not sure I'm literate enough in the area to compute that. But, assuming that the heat death of the universe reaches a single (possibly compressed) end state, it should still be finite; it seems to be quantized, anyway.


It does not matter how much energy is available in the Universe. Even if there is an infinite amount of it, you still can't use it, because only a finite amount can ever be reached / affected by any single observer, and only a finite amount of the universe can ever reach any single observer.

So for example, there is a limit on the mass of the computer that can be constructed and still send its result to a single spot in space in the future. And the longer you wait, the smaller the limit because less and less of the universe is available to you to build the computer.


> Of course, I'll now fall back on Gödel's second incompleteness theorem and say that one cannot prove, inside ZFC, that ZFC is consistent. But if the above Turing machine halts, then we proved ZFC is consistent - a contradiction!

No, the machine halts iff ZFC is inconsistent -- as you correctly stated up top. Somewhere along the way you got this reversed, looks like. There's the problem.


You're right, I misstated this - but I don't think this is fatal. The other sibling commenters pointed out the real issue with my thinking.

The argument goes the same even though I misspoke here. If the machine {halts, runs forever} then ZFC is consistent. But this is a contradiction; so ZFC must be inconsistent. Tada, I have an inconsistency proof!

That was the implied next step which made me think my logic was clearly incorrect (which, it was).


It's simpler than this still. If it runs forever (likely), then you will never be able to say anything about ZFC.

If you see it halt, ZFC is inconsistent. If you never see it halt, you CAN'T conclude anything.

But we could already do that under Gödel incompleteness, so there's nothing unusual there!

If you write down random proofs on paper and find a correct proof that leads to contradiction, you've proved ZFC inconsistent, without using BB. If you keep trying forever and never find one, you'll never be able to conclude anything at any point, just like with watching the machine run


> If it runs forever (likely), then you will never be able to say anything about ZFC.

But if you run it for BB(754) many steps, you will know.


Yep. But I think it's easy to show that this is circular, since you can't know BB(754) without knowing whether it runs forever.

And you can't prove that it'll run forever without seeing it go past BB(754) and still keep going

BB(754) is X if ZFC is consistent, Y otherwise

Since you can't prove that ZFC is consistent (only disprove), you can't know BB(754), which is the thing we were trying to use to determine whether ZFC is consistent in the first place!

The definition doesn't make it obvious, but this is just the same as plain Gödel incompleteness, we can't get any extra info about ZFC even in principle (unless we happen to see it halt, by chance)


> You're right, I misstated this - but I don't think this is fatal.

It is crucial for these types of results, when you search for a proof.

There are a lot of true statements here; better to make a table of them, instead of a wall of text:

  - If you observe the machine halting, ZFC is inconsistent. 
  - If the machine hasn't halted yet, you don't know if ZFC is consistent or not.

  - If ZFC is inconsistent, the machine will eventually halt. (You have an upper bound for this, given a contradiction.)
  - If ZFC is consistent, then the machine won't halt ever.
Also, it is consistent with ZFC that this machine halts, since it is consistent with ZFC that ZFC has a contradiction. This means that if ZFC happens to be consistent, and you work in ZFC + "ZFC is inconsistent", then you will be able to prove that your machine eventually halts, yet it won't ever halt.


I don’t think that’s it. I think it’s easy to take such a machine and make a new one that halts iff ZFC is consistent.

Call the machine that halts iff ZFC is inconsistent A. Now consider the machine which computes the following algorithm:

Run A for BB(754) steps. If A halts, then run forever, else halt.

If A halts then our machine runs forever, and vice versa. Thus, our machine halts when ZFC is consistent and our machine runs forever when A halts, so ZFC is consistent iff our machine halts.

“Halts” doesn’t seem like a real word after writing that.


How do you make the new machine compute BB(754)? BB is the canonical example of an uncomputable function, precisely because you can decide the halting problem if you can compute it (or any upper bound). Granted, BB may be computed for specific arguments, as OP mentions for 1–4, but the existence of the ZFC-dependent machine is, at least to me, a very good argument that the boundary of what's possible is much lower.


Oh, sure. I was just pointing out that the hardness is in determining the busy beaver number and that it didn’t matter if your algorithm halts iff ZFC is consistent or if it’s an algorithm that halts iff ZFC is inconsistent.


No, if you had an algorithm that (you could prove) halts iff ZFC is consistent, then if that algorithm halts, you’ll have a proof that ZFC is consistent, which isn’t possible. Thus, the existence of such an algorithm would be a contradiction that proves the inconsistency of ZFC.

The problem with your construction is that it relies on knowing the value of BB(754), which is impossible to know so long as ZFC is consistent, since its value is dependent on the consistency of ZFC.

Conversely, if ZFC is inconsistent, then there exists a (finite) proof of this fact, so the opposite case isn’t a problem.

Essentially it’s like saying define X to be the length of the shortest proof of the inconsistency of ZFC, if one exists. If I could prove any upper bound on X, I could prove the consistency of ZFC, which, according to Gödel’s incompleteness theorem, would itself prove the inconsistency of ZFC.


The intuition is that a monkey typing randomly on a typewriter can come up with texts, each of which either is or isn't a valid ZFC proof, and each valid proof either is or isn't a proof of a contradiction. To check either of these things is mechanical. If ZFC is inconsistent, eventually the monkey should hit on the inconsistency.


> The problem with your construction is that it relies on knowing the value of BB(754)

Eh, this doesn’t really matter. That busy beaver number is just an integer, so there is some TM that does exactly as I have described.

Thus, I have proved that there is a Turing machine that halts iff ZFC is consistent.


It’s an unknown integer, whose value depends on the consistency of ZFC. Let me show you why this is circular.

I can define another integer N which is 1 if there exists a proof of the inconsistency of ZFC and 0 if there doesn’t (note that BB(754) already encodes this information). Then I can define a program that determines the consistency of ZFC thusly: if N=1, I define the program to immediately return false. If N=0, I define the program to immediately return true. Thus, there exists a program that can determine the consistency of ZFC, it’s one of the two programs I’ve defined.

The fact that there exists a program that returns the consistency of ZFC isn’t in question. The trick is proving that a particular program does so. Or if you like, proving that there exists a program along with a proof that it does so. What you’ve defined is an oracle: it depends on already knowing the answer to what you’re asking so it doesn’t have to compute it.


It is not circular. Such a Turing machine clearly exists.

What we’ve seen is that there plainly exists a Turing machine which halts iff ZFC is consistent.

All of the other window dressing you’ve added hasn’t changed that simple fact.

I agree that finding busy beaver numbers is the issue. I do not agree that the existence of a TM that halts iff ZFC is consistent is hard.


You "clearly" is not so clear.

BB(754) is an uncomputable number. It's independent of ZFC, so an enumeration of all consequences of the axioms of ZFC doesn't contain it. How is that supposed TM of yours supposed to know whether it has run BB(754) steps or not?

Oh, but other slightly bigger TMs exist – let's say in class TM(860) for the sake of an example – that might halt after more steps than BB(754). This _sounds_ intuitive. But: how do you prove that? It might be that all TM(860)s either halt within BB(754) steps or else run forever. There indeed might be some that halt in finitely many steps after BB(754), but that is not guaranteed! You need to prove it. But with what?


Oops, never mind, there exists a simple construction that you can perform on each TM(754) that clearly extends BB(754) by a finite amount. Maybe you are correct that such a Turing machine exists. But it seems that identifying it isn't possible in ZFC.


I agree that the problem is identifying the machine, not its existence.


Oh dear.


Excuse me?


I’m saying that all you’ve proven is that if you know a priori whether ZFC is consistent, you can construct a Turing machine that returns this value. If you consider that to be window dressing I don’t know what else I can tell you.


I’m saying that it doesn’t matter if you know if ZFC is consistent; I have proven there exists a TM that halts iff it is.


The only problem is that proving it halts is "easy" - run the program, wait a few million years, it halts. Yey.

Proving it doesn't halt is much harder, since you can run it for TREE(3) steps if you want, that's still not proof it won't halt in TREE(3)+1 steps.

So in a way, it's not possible to "just run it", sadly.


So… what if BB(748) is uncomputable? Or rather, didn’t you just prove that it is?


I suppose I did!

I was having a hard time reconciling this with the intuition that BB(n) is in principle "computable" (colloquially speaking) for any n - my thinking went that if I want to compute BB(n), I can enumerate turing machines and run them until they halt, since infinitely looping machines are excluded from BB(n). But of course I have now reduced this to the halting problem! How do you know when you're "done" for that n? You don't.
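To spell out the trap for anyone following along, here is a sketch of that naive approach; `enumerate_machines` and `step` are hypothetical stand-ins for a real TM enumerator and single-step simulator.

  import itertools

  def naive_bb(n, enumerate_machines, step):
      # Dovetail all n-state machines; `step` advances one machine by one step
      # and reports whether it just halted. Both helpers are hypothetical.
      running = {m: None for m in enumerate_machines(n)}   # machine -> current state
      best = 0
      for t in itertools.count(1):
          for m in list(running):
              running[m], halted = step(m, running[m])
              if halted:
                  best = max(best, t)
                  del running[m]
          # There is no step at which we can safely return `best`: some machine
          # still in `running` might halt later, and deciding that it never
          # will is exactly the halting problem.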

Thanks to you and sibling commenters.


Similarly, if you know BB(n), you can use it to solve the halting problem for Turing machines up to that size.


Isn't the easier proof that BB(n) isn't computable something like

- assume BB is computable

- there exists a TM called X that computes the function

- it has K states

- X(K+1) produces BB(K+1) but from the definition of BB our machine cannot produce a result higher than BB(K).


There's a difference between a TM/algorithm/etc. that computes a function, like BB(n) (for all Natural numbers n); versus computing a particular value, like BB(748).

For comparison, there is no TM which computes the halting function halts(p) (for all programs p); but it's easy to compute particular values like halts("exit") or halts("while(true){}")


Yes. My reasoning applies to a function n=> BB(n)

Isn't that what "the function is not computable" is about?

Or is the thesis that the value of BB(748) can't be computed?


BB(n) for any particular n is always computable, no exceptions. There simply is no single computable function that can compute BB(n) for every n, but for any particular n there absolutely is a TM that computes it.


Indeed, the value of BB(n) for each n is just a number; and it's easy to construct a TM which outputs some particular number. However, we don't know what those TMs are, for the same reason we don't know what those BB(n) numbers are.

The same applies to more limited settings too, e.g. boolean questions like whether the Collatz conjecture holds: I know that one of the following programs calculates the right answer, but I don't know which:

- PRINT "true"; HALT

- PRINT "false"; HALT

(Perhaps we should also allow `PRINT "independent of the given axioms"; HALT` as well!)


It is funny to me that you are going for BB(748) in this context when you could go for the much, much, ... lower number BB(745), as outlined in [0].

[0]: https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-unde...


> there is nothing in principle stopping us from running this machine for BB(748) steps

How would we compute the value of BB(748)?


Computing BB(748) would be best, but if we could get an upper-bound estimate that's reasonably close, that would suffice.


Do note that any function f(n) that is always (or even just eventually always) greater than BB(n), is uncomputable, for very similar reasons.


How can you come up with an "upper bound estimate" without having some idea of the structure of the computation of the specific 748 state Turing Machine in question?

Imagine you had an oracle telling you BB(748) excluding this machine. How do you get an upper bound on the runtime of this machine?

(There is an answer: this is surely not the optimal construction, and so such an oracle would give you an answer for some smaller ZFC machine which you could likely use to extrapolate a value for this machine. However, eventually you'll find a minimal state ZFC machine and that won't work anymore.)


Reaching a "reasonably close" upper bound estimate wouldn't provide proof ... probability maybe, but not proof.


If we've proved that a number is an upper bound on BB(748), then running for that many steps without halting means it has also run BB(748) steps without halting.


> Though we don't have the computing power for this currently

I don't think you get how big BB(748) is. To use a metaphor:

If you stuffed the entire observable universe full of computronium that can do more calculations in a fragment the size of a human cell than our entire civilization, then shrank that universe down to the size of a grain of sand and filled our entire universe with that, THEN did this once for every possible distinguishable person (i.e. if you can say after a lifetime of detailed observation that person A isn't identical to person B, then they're distinguishable), you still aren't even close to BB(748). In fact, I'd be surprised if you're over BB(10), and you're definitely under BB(20).


BB(6) might be too large even to store (let alone compute) using all the mass and energy of the universe. I think the computable limit if you converted all the mass in the observable universe to energy (E = mc^2) would be BB(5), assuming adherence to the Landauer limit[0].

After calculating this I found a quote in wikipedia[1]: "There is not enough computational capacity in the known part of the universe to have performed even S(6) operations directly." That cited this paper[2], which is probably better than my model at utilizing the total available physics of the universe for calculation purposes. Anyways, the mass of the known universe is on the order of 10^56 grams[3]. Converting this all to energy using E=mc^2 yields on the order of 10^70 Joules (10^88 electron volts). Setting or clearing a single bit of information requires at minimum 0.018 eV. That allows about 10^90 bits.

BB(6) may require on the order of 10^90^2 bit flips. So it is absolutely not computable using all the mass and energy in the universe. In fact, I don't think it's even storable using all the mass and energy in the universe.

I don't really understand Busy Beaver, so if I got any of this wrong please correct me for the record.

0: https://en.wikipedia.org/wiki/Landauer%27s_principle

1: https://en.wikipedia.org/wiki/Busy_beaver

2: https://arxiv.org/pdf/quant-ph/0110141.pdf

3: https://www.wolframalpha.com/input?i=mass+of+the+universe
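A rough arithmetic check of the estimate above, assuming the same inputs (~10^56 g of mass, room-temperature Landauer limit of ~0.018 eV per bit operation):

  mass_kg   = 1e56 * 1e-3            # ~10^56 g of mass in the observable universe
  c         = 3.0e8                  # speed of light, m/s
  energy_J  = mass_kg * c**2         # E = mc^2, ~10^70 J
  energy_eV = energy_J / 1.602e-19   # ~6e88 eV
  bit_ops   = energy_eV / 0.018      # Landauer limit at ~300 K
  print(f"{energy_J:.0e} J, {energy_eV:.0e} eV, {bit_ops:.0e} bit operations")
  # roughly 9e69 J, 6e88 eV, 3e90 bit operations, matching the ~10^90 figure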


> if it halts, we have proven by Thm 1 that ZFC is inconsistent. If not, we have similarly proven that ZFC is consistent.

The second part is wrong. We can't physically check that a program runs forever - this requires an infinite amount of time.


His point is if you know the value of BB(748) then you don't have to wait forever, just BB(748) steps, as after that the Turing machine is guaranteed not to halt. The problem with his argument is that we don't know the value of BB(748). Not only that, it is incomputable, which resolves the contradiction.


Right, we don't know the value of BB(748), and moreover, "knowing" it is not enough; we would still need a proof that a certain number matches BB(748). And such a proof is not easier than a direct proof of ZFC consistency, I presume.

BTW, a value can't be uncomputable, only a function can.


It is categorically false that BB(748) is not computable. On the contrary any particular BB(n) can be computed by some Turing Machine even though there is no Turing Machine that can compute BB(n) for every n.


So then what's stopping you from running the BB(748) machine, getting the number, then running the ZFC machine and proving ZFC consistent or not?


A proof of existence is not the same as a construction, so the fact that we know that there exists a TM that computes BB(748) does not mean that we know which specific TM does it or how to construct such a TM.


It's easy to generate all proofs in any system: just pick any axiom, then apply any inference rule, and repeat. So if you ever generate both P and not P, then you've proven it's inconsistent.

So if it's inconsistent, it's straightforward to prove. It's only if it's consistent that we can't prove it in any finite time.
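In code, the brute-force search looks something like this; `is_valid_zfc_proof` and `conclusion_of` are hypothetical stand-ins for a mechanical proof checker (checking a finite candidate proof is decidable, so such a checker can be written, even though doing so is a big job).

  from itertools import count, product

  SYMBOLS = "()∀∃∈¬∧∨→=xyv, "   # toy alphabet for candidate proof strings

  def all_strings():
      for n in count(1):
          for chars in product(SYMBOLS, repeat=n):
              yield "".join(chars)

  def search_for_inconsistency(is_valid_zfc_proof, conclusion_of):
      proved = set()
      for s in all_strings():
          if not is_valid_zfc_proof(s):
              continue
          p = conclusion_of(s)
          if ("¬" + p) in proved or (p.startswith("¬") and p[1:] in proved):
              return s              # found proofs of both P and ¬P
          proved.add(p)
      # if the system is consistent, this loop simply never returns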


A Turing machine which halts iff ZFC is consistent seems like it would be more interesting. In theory it would be possible to run it and show ZFC to be consistent that way.

Although I imagine it suffers from the problem that it can only be shown to work within ZFC or any theory powerful enough to prove the consistency of ZFC, which tells us nothing.


A common misconception in the first paragraph:

> However, Godel shattered these aspirations in 1931 by proving the existence of true but unprovable mathematical formulas.

Gödel sentences are unprovable, but they are not "true". They are independent, i.e. true in some models and false in others. The Gödel sentence happens to be true in the standard model - some people take liberties with this and extend that to calling it true, but that's not accurate, or at the very least misleading.


Isn't the claimed "truth" based on an appeal to a more intuitive notion of truth than a formal model-relative one?

The Gödel sentence for some Formal Axiomatic System asserts, by way of numerical encoding, its own unprovability in that FAS, and since it in fact cannot be proven in the FAS, it is "true" in the sense that it asserts (via its numerical encoding) something which is in fact the case, i.e. just the ordinary sense of "true".


“True in the standard model” is generally what most working mathematicians who are not logicians mean by “true”.


No disagreement here. But I hold that this is an especially confusing way of describing independent statements, especially in this paper, which certainly was written by a logician.

I take particular issue with the phrasing because it confused me for years in highschool and throughout undergraduate. How could something be true and not provable? It took me until a model theory course to realize that it cannot, and most pop descriptions of the incompleteness theorem are - dare I say it - wrong. Though I'm sure you would argue it's not wrong, merely an informal description.

I would, however, expect a bachelor's thesis not to stray into such confusing territory.


It's exactly that point that I've never understood. Please elaborate. What is the 'standard model'?


The model of natural numbers with addition and multiplication that we intuitively understand as the correct one. As Gödel showed, you can't actually describe this model in first order logic (Skolem-Löwenheim further shows how hopeless we are at describing the model). "Every child knows what natural numbers actually are".


More concretely: you can add an axiom to standard PA that says, "There exists a number that is the Gödel number of a proof of G." The resulting system is consistent, and includes a new kind of number that is not a natural number (because you can prove that N is not a proof of G for any N that is a natural number). It's analogous to adding an axiom that says, "There exists a number whose successor is 0", which introduces a new kind of number that we call "negative numbers." Math is extended in familiar interesting directions like this all the time by adding axioms like "There exists a number whose square is 2" (which gives you irrational numbers) or "there exists a number such that the successor of its square is 0" (which gives you imaginary numbers).


You didn't say otherwise, but it's interesting to note that the set of irrationals is quite "big" in the sense that you can do a lot without them. Not only can you make a nice consistent extension of the usual structures on N, Z, Q etc. by just adding sqrt(2), you can even add all square roots, or all n-th roots, or the solutions to all polynomials, and still not get plenty of good irrationals. Sets between Q and R are often neglected, but there's a wealth of good maths in there.


Exactly. A statement is true by definition if and only if it is satisfied in every model. Gödel also proved the completeness theorem, which states that a statement is true if and only if it is provable. So, another way to look at undecidability is this: a statement is undecidable if and only if it can be neither proved nor disproved.


UPDATE: since first writing this comment, I’ve checked four quasi-randomly selected books from my shelves (Leary and Kristiansen Introduction to Mathematical Logic, Hodges Model Theory, Manzano Model Theory, Avigad Mathematical Logic and Computation), and they all use valid/validity rather than true/truth to describe formulae that are true in all models, as I originally pointed out below. To be clear, I’m not at all trying to score points by appealing to the literature, but I think it’s really important to clarify that your definition isn’t standard because it will confuse people; I myself was confused by this exact point when I studied logic having previously read the “true but unprovable” description of Gödel 1.

> A statement is true by definition if and only if it is satisfied in every model.

I don’t think this is a standard definition.

Every treatment I’ve seen refers to truth with respect to a model; if no model is specified, it is assumed to be obvious from context. Outside of formal treatments (i.e. in the setting where the 99% of mathematicians who aren’t logicians do their work), the model is the standard model.

First-order formulae that are true in every model are validities.


> I don’t think this is a standard definition.

Well, I suppose it depends on your definition of standard. That's how I have been taught logic. I also believe it is the historical notion. Honestly, "true but unprovable" sounds like a bad way to explain undecidability to me. Would you have been confused by "neither provable nor disprovable" instead? Also, this introduces a bias: the axiom of choice is neither provable nor disprovable in ZF. Are you going to say it is "true but unprovable" or "false but unprovable"?

> Every treatment I’ve seen refers to truth with respect to a model

That's called satisfiability.

> Outside of formal treatments (i.e. in the setting where the 99% of mathematicians who aren’t logicians do their work), the model is the standard model.

I simply cannot agree to that. What exactly is supposed to be the standard model of ZFC? For most mathematicians, what is true is what has been proved.


> Well, I suppose it depends on your definition of standard.

Of course :) I believe my distinction between validity and truth is the one generally used in the literature (I have listed four examples above), and the one that would be understood by most working mathematicians and analytic philosophers who care about mathematical logic.

We both agree that there is a clear distinction between formulae that are true in some model (specified, or inferred from context) and formulae that are true in all models; the latter are not particularly interesting to most mathematicians once one has agreed on the logic (e.g. classical, constructive, etc.) in which one operates, hence I think it’s reasonable to use “true” to refer to the former, as indeed many authors do.

> That's called satisfiability.

Many logicians say that a formula is true in a model (sometimes true in a structure) if it’s satisfied in that model under all assignments.

Can you find me a reference in the literature where “true” is used to mean “true in all models” consistently?


> We both agree that there is a clear distinction between formulae that are true in some model (specified, or inferred from context) and formulae that are true in all models; [...]

Sure. But I feel we are deviating from the subject. We have obviously been educated differently so it is pointless to argue about that, but there is a language issue. You insist on comparing what I mean by "true" (alone) with "true in a model". However, that's an apples-to-oranges comparison. We should be comparing what I mean by "true" (alone) with what you mean by "true" (alone), and by that, you mean: "true in the standard model". (I don't think your references validate that use, although I don't have access to all of them at the moment.) The obvious problems with that are:

- I don't think there is such a thing as a standard model in set theory (actually you cannot prove that a model of set theory exists).

- When most mathematicians say something like "X is true", what they mean is "X can be proved from the axioms of set theory", which in logical terms means "X is valid". Are you really arguing against that?

- And of course (back to the original point), you get that confusing idea that "undecidable" means "true but unprovable" (I had never heard of the incompleteness theorem being presented that way before.). I argue "undecidable" is "neither provable nor disprovable".

EDIT: "X is valid" should read "X is valid in set theory".


> When most mathematicians say something like "X is true", what they mean is "X can be proved from the axioms of set theory".

I'm just one mathematician, but I certainly don't mean that.

Before we can prove anything about sets, we need to pick some axioms. Zermelo set theory (Z) would be enough for most of ordinary mathematics. If we need something stronger, there's Zermelo–Fraenkel set theory with the axiom of choice (ZFC). Or if I need something even stronger, there's, for example, Tarski–Grothendieck set theory (TG).

What I mean by "X is true" is technically difficult to define. The statements

(1) X is provable in Z.

(2) X is provable in ZFC.

(3) X is provable in TG.

are all increasingly accurate characterizations of "X is true", but none of them capture everything about it. And that's kind of the point. There is no proof system P such that "X is provable in P" would work as a faithful definition of "X is true". So the best we can get is this tower of increasingly sophisticated axioms that still always fail to capture the full meaning of "truth".

There is a convention among mathematicians: Anything up to ZFC you can assume without explicitly mentioning it, but if you go beyond it, it's good to state what axioms you have used. ZFC is not a bad choice for this role. It is quite high in the tower. In most cases ZFC is strong enough, or in fact, overkill. But still, it is not at the top of the tower (there is no top!), so sometimes you need stronger axioms. The fact that ZFC has been singled out like this is ultimately a bit arbitrary - a social convention. "X is provable in ZFC" may be the most common justification for "X is true", but that doesn't make it the definition of "X is true".


I think there are two points of disagreement here:

1. The definition of “truth”; and

2. My claim that most mathematicians who aren’t logicians use “true” in the sense of being true in some model, not in all of them.

Re. 1, I accept that this is just naming, but I will insist that your usage of “true” is nonstandard, and should carry a disclaimer as such :)

I think this is evidenced by the fact that “true but unprovable” is a very widely used characterisation of the Gödel sentence for a logic, and it is completely wrong given your definition of “true”, and further evidenced by the references I provided that use “valid” for true in all models.

I think point 2 is a lot more fuzzy because we’re trying to talk about what is happening inside other people’s minds, in particular their view of what a mathematical result actually is; I’m happy to concede that I may be guessing wrong regarding what most mathematicians are actually thinking.

As per 1, my position is that there is no such thing as "true alone", at least not in mathematical logic as it is conventionally presented in the English-speaking world (and apparently in the Spanish-speaking world, if one source is enough to generalise); truth only exists in the context of a model.

Accordingly, when we talk about the truth of a formula, we have a model (or possibly a class of models) in mind; I claim that when a mathematician says the Gödel sentence is true, they mean it is true in the standard model of the naturals, which is why the “true but unprovable” characterisation is used. I don’t know if my references support this, since it’s harder to find than just looking for the definition of “validity” in the index, but if you Google “truth of the Gödel sentence” you’ll find a lot of people using “true” to mean true in the standard model (of the naturals).

I suspect that most mathematicians are Platonists (this may be my bias creeping in) and they believe the objects they work with are real; I certainly believe the naturals are real, and I believe that non-standard models of the naturals are not the (real) naturals, even though I’m perfectly happy to “play” with non-standard models as an intellectual exercise. I claim that most mathematicians have other (real) objects in mind when dealing with things other than the naturals, and those form the meta-mathematical model for the notion of truth.

> When most mathematicians say something like "X is true", what they mean is "X can be proved from the axioms of set theory", which in logical terms means "X is valid". Are you really arguing against that?

That doesn’t mean “X is valid”; if something follows from the axioms of set theory then it holds in all models of set theory (yes, assuming a sound deductive calculus, but that’s a given), but that’s not enough to consider it valid, it would need to hold in all models compatible with the language. So yes, I’m arguing against that, I think what you have written is factually wrong, unless it’s a typo?

> you get that confusing idea that "undecidable" means "true but unprovable"

I don’t think I ever said this, I agree that this is confusing, and in fact it’s just wrong as stated (under my usage of “truth”), since of course there are undefinable sentences that are false (but irrefutable), such as the negation of the Gödel sentence.


> As per 1, my position is that there is no such thing as “true alone”, at least not in mathematical logic

Yes I agree. There is always some context implied if we are being rigorous. But we do use the word "true" alone. Thus, the question is what is the implied context? I claim that this context consists of commonly agreed upon axioms. If I understand correctly, you claim it is a mental model.

Personally, I am not sure whether I qualify as a platonist. I do have a mental model that I use to evaluate mathematical statements, but that mental model is fluctuating. It is sometimes wrong (i.e. inconsistent) and therefore in need of an update. Because of the mere possibility of errors, I (and this may be my personal bias) only consider as "true" those statements that are proven (from some agreed-upon axioms).

On the other hand, if you consider mathematicians as a community, I believe that mathematicians don't share the exact same mental model. So, a statement that mathematicians (as a community) will agree is "true", will be a statement that is satisfied in all their mental models. This is therefore a notion of validity rather than satisfiability. Of course, the mental models of mathematicians are unlikely to exhaust all possible models of a given theory. However, the ultimate arbiter of truth in the mathematical community is the satisfiability in all possible models of the theory, i.e. the proof.


> Thus, the question is what is the implied context? I claim that this context consists of commonly agreed upon axioms. If I understand correctly, you claim it is a mental model.

As I mentioned in a sibling comment, I dispute your claim that most mathematicians work from foundational axioms such as ZFC. I think the number of mathematicians who can actually write down the axioms of ZFC is far smaller than the number who claim that mathematics is founded on ZFC, and the number who actually do rigorous proofs in ZFC in a discipline other than logic is far smaller still.

If a mathematician has never seen a rigorous proof of a theorem from the axioms of ZFC, and either seen it machine checked, or hand verified it (probably intractable for most real maths), can they say it's true if they subscribe to your approach to truth? I don't think they can.

Given that these proofs don't even exist for most theorems (there are efforts to build libraries of machine checked proofs, but even ignoring the fact that the most recent of these don't start from ZFC, they are by no means complete) it would seem to me that most mathematicians must adopt a more Platonist approach.

To be clear, I like the idea of axiomatisation and I'm a fan of the introduction of proof assistants (indeed I've used Coq pretty heavily in the past), but I dispute that rigorous proof is what actually determines truth; the human understanding of the naturals predates its formulation in PA, and if PA cannot prove something that we know to be true from our understanding of the naturals, then that is a limitation of the axiomatic process, and we should not start changing our notion of the naturals to fit the axioms.

> This is therefore a notion of validity rather than satisfiability.

As I mentioned in another reply, I have not seen the term validity used in this context, only in cases where the formula is true in all models; this is regular semantic entailment.

Note that your original claim, however, was

> A statement is true by definition if and only if it is satisfied in every model.

Did you mean to say "satisfied in every model that satisfies ZFC"?

I am more comfortable with this as a valid mathematical position (although I still claim it is nonstandard terminology); I disagree, but my disagreement is purely philosophical, whereas I hold that the original claim is so nonstandard as to be incorrect.


> I suspect that most mathematicians are Platonists (this may be my bias creeping in) and they believe the objects they work with are real

> [...] I dispute that rigourous proof is what actually determines truth [...]

This is perhaps a bit out of topic, but to me these two statements are contradictory. I suppose that you should define what you mean by "real" (and Platonism). I certainly think that mathematical objects are real, but by that, I mean that they exist independently of my own mind. However, they can't exist independently of a mind if truth is determined by evaluation against a mental model. Even if that mental model is shared within a community, because that would turn mathematics into a belief system. Also, the human mind is fallible and prone to mistakes, so in my view, it is reasonable to doubt what comes out of it.

Sure, mathematicians agree on axioms for things like natural numbers, and deduction rules. However, I think that the reality of natural numbers and proofs (as mathematical objects) does not stem from a shared mental model, but from their finitary nature, which makes it possible to implement them on a computer. I am also skeptical that the human mind has any innate model for most advanced concepts in mathematics (I even doubt that it is true for real numbers). I think that the intuition we have of most mathematical objects is formed after exposure to simpler mathematical notions. That intuition is shaped by what is proved and disproved from prior mathematical knowledge. Yes, proofs written by mathematicians don't look very formal (and often, the more advanced are the maths, the less formal and detailed are the proofs), but I dispute that they are not rigorous and can't be translated into a formal framework. In my view, this is mostly a matter of efficiency and practicality.

To illustrate what I say, consider Mochizuki's claimed proof of the abc conjecture[1]. Here we have a claimed proof so difficult that most specialists fail to determine whether it is correct or not, although Scholze&Stix believe there is a gap. I say that most mathematicians don't have a mental model that allows them to determine whether the abc conjecture is true or not, and because of the fallibility of the human mind, it is reasonable to doubt those that claim they do. One can of course take sides, but in that case, we are no longer doing mathematics. The only thing that can resolve the issue will be a more readable and more rigorous proof. That's what determines truth.

[1]: https://en.wikipedia.org/wiki/Abc_conjecture#Claimed_proofs


I don’t think my position is contradictory: I believe that (non-technical, unqualified/alone) “true” is a property that exists independently of whether we are able to formally prove something in some kind of logic. This is because we are making assertions about “real” objects (e.g. the naturals), and I believe that these statements are either true or false.

I think this is a philosophical disagreement that we’re unlikely to resolve.

It’s not clear to me which definition of (non-technical, unqualified/alone) “true” you are using.

We’ve had a few:

1. Your original definition, where we say P is true iff it holds in all compatible models (which I claim is highly nonstandard);

2. The definition you started using later, where we say P is true iff it holds in all models of ZFC (which I claim is still nonstandard);

3. The definition I suggested, where we say P is true iff it holds in some “standard model”;

4. Something else.

Let’s consider the naturals; what is your opinion on the truth of the Gödel sentence for ZFC (let’s assume consistency, otherwise definition 2 cannot possibly be useful)? Under definition 3 is is true, but under definition 2 it isn’t.

If you think it isn’t true then you are saying that we don’t really understand the naturals intuitively and we can only understand them by axiomatisation.

I cannot counter that position mathematically, only philosophically, but I will say that we have seemingly used and understood the naturals for a very long time before they were first effectively axiomatised, so it seems to be a bold claim.

This is very far away from my original point, however, which was purely about your position that “true” means “true in all models” (i.e. definition 1), however it seems you are no longer adopting this position (in favour of definition 2)?

When used in a technical sense, as far as I am aware “true” is always qualified with respect to some specific model, which may be obvious from context and thus not explicitly stated, but there is always a formal model in mind; truth is generally seen as a model-theoretic concept, not as proof-theoretic one.

Re. ABC and IUTT, I’m fully behind Buzzard et al. pushing for machine verified proofs, as I say, I have been a big user of Coq. I just don’t think we can ever say that proof is what determines truth given we know from Gödel that proof is fundamentally limited.

Proof is a good way of convincing ourselves something is true, indeed anything we prove true is in fact true by soundness, but it’s not the arbiter of truth.


> 3. The definition I suggested, where we say P is true iff it holds in some “standard model”;

By the way, I wish you would answer my previous objection about that definition in the context of set theory. What is the standard model of ZFC? (or ZF?) As far as I know, you can't prove that a model for ZF exists (unless you assume some powerful axioms, in which case you won't be able to prove that a model for the extended theory exists).

Edit: Another situation where that definition is problematic is the case of an inconsistent theory. Obviously, an inconsistent theory cannot have a standard model since it does not have a model at all. Whereas with my definition, we get the usual "Ex falso" as expected.


> What is the standard model of ZFC? (or ZF?)

The standard model that most set theorists have in mind is something like the Von Neumann Universe, V. Note that this is a proper class, so it's not a structure as usually considered in model theory.

We (hopefully) can't prove V is a model of ZF in ZF, because that would amount to proving consistency and fall foul of Gödel 2, but the axioms of ZF come from an attempt to axiomatise our understanding of set theory in the sense of it being the study of the objects that make up the Von Neumann Universe.

> Obviously, an inconsistent theory cannot have a standard model

Indeed. Paraconsistent logics are an attempt to deal with inconsistency from a proof-theoretic stance, but I'm far from an expert and I don't know what models of paraconsistent theories look like.


> This is very far away from my original point

Yes, the discussion has deviated, and I don't think we will resolve the disagreement, but I wanted to make my position clearer w.r.t. the claim that "most mathematicians are Platonists [...] and they believe the objects they work with are real".

> It’s not clear to me which definition of (non-technical, unqualified/alone) “true” you are using.

I may be elliptical and not very clear, but I have not changed my definition. We can't do mathematics in a vacuum. There is always a context, which consists of a language, i.e. a fixed set of constant, function and relation symbols, and a theory, which is a fixed set of statements of the language. Typical theories are ZF, ZFC, PA, etc. For me, "true" (alone) means satisfied in all models of the theory, or equivalently, by completeness, provable from the theory. (And by the way, your notion of "true" (alone) as "satisfied in the standard model" is equivalent to requiring that the theory be complete.) That would be your definition 1, except for the "non-technical" part. Now, the discussion deviated towards set theory because, to compare my idea of "true" (alone) with yours, I used your comment:

> “True in the standard model” is generally what most working mathematicians who are not logicians mean by “true”.

which lacked context and seemed to me to be especially problematic in the context of set theory. And also, "most working mathematicians who are not logicians" implies a context of set theory. So the "non-technical" definition would be your definition 2 although I think ZF+DC (the axiom of dependent choice) is closer to what most mathematicians won't have a problem with than ZFC (depends on the discipline I suppose). Probably a mistake to talk about "most mathematicians" though.

> If you think it isn’t true then you are saying that we don’t really understand the naturals intuitively and we can only understand them by axiomatisation.

I mean something more subtle. I think we understand the naturals intuitively but only to some extent. Enough to write some axioms, but not enough to reliably answer many seemingly simple questions about them. I also think that our intuitive understanding is not static but grows as we study mathematics.

> I just don’t think we can ever say that proof is what determines truth, given we know from Gödel that proof is fundamentally limited.

This is perhaps where the disagreement is? I don't have a problem with the fact that proofs are fundamentally limited.


I think we're understanding the phrase "true in all models" in different ways.

I understand it to mean: "true in all structures compatible with the language".

On the other hand, I think you understand it to mean: "true in all models of some latent theory left implicit", where the theory may be ZF(C) or something else depending on context?

I'm basing that on your comment:

> For me, "true" (alone) means satisfied in all models of the theory, and equivalently by completeness, provable from the theory. [...] That would be your definition 1 [...]

That isn't my definition 1, because I'm referring to all structures compatible with the language, not all models of some theory. This is probably an abuse of terminology on my part because usually we reserve the term model for structures that model a theory [1]; sorry for the confusion!

> And by the way, your notion of "true" (alone) as "satisfied in the standard model" is equivalent to requiring that the theory be complete.

Please can you explain this? I don't think that's what I mean. We know that PA isn't complete, but when I say the Gödel sentence is true I mean that it's true in the standard model of the naturals.

> There is always a context, which consists of a language, i.e. a fixed set of constant, function and relation symbols, and a theory, which is a fixed set of statements of the language

I completely disagree with the idea that this context always existed; it's too Formalist. There is a rich history of mathematics before the concept of a formal language and a formal theory existed; if you were to ask Gauss if he worked in ZF or ZFC or TG I don't think he would have an answer, but clearly he had some concept of mathematical truth.

[1] : Although all structures are vacuously models of the empty theory, so technically they are models, but that's not very convincing or useful...


> On the other hand, I think you understand it to mean: "true in all models of some latent theory left implicit", where the theory may be ZF(C) or something else depending on context?

Yes, that's what I mean. (For me, "structure" is preferred to "model" when nothing is implied.)

> The standard model that most set theorists have in mind is something like the Von Neumann Universe, V.

Now I am getting confused. Isn't that equivalent to requiring the axiom of regularity? I have a book on set theory by JL Krivine with the theorem: "V is the whole universe iff the axiom of regularity holds". This book also proves that "if U is a universe (i.e. a model of ZF), then the collection V inside U satisfies ZF + the axiom of regularity" (which proves the relative consistency of the axiom of regularity).

To talk about the Von Neumann Universe, you must assume some "surrounding" universe which is a fixed but arbitrary model of ZF. Thus, X is true in the Von Neumann Universe if and only if X is satisfied in all models of ZF+axiom of regularity. That certainly matches my idea of "true", albeit with a weaker set of axioms... (I proposed ZF+DC as a least common denominator because a large part of analysis can't be done without some form of axiom of choice.)

> Please can you explain this?

Let us call S your standard model of PA. I understood your idea of "X is true" as "S satisfies X". Now, let T be the set of all statements satisfied by S. Then T is a complete, consistent theory that extends PA, and "X is true" if and only if "T proves X". (Of course, T is much larger than PA, and in fact, by incompleteness, there are no recursively enumerable theories equivalent to T.) This correspondence between complete consistent theories and models is not one-to-one, though; a complete consistent theory may have infinitely many models.
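To spell that out in informal LaTeX-ish notation (just a sketch; S and T as above):

    T := \{ X : S \models X \}        % the complete theory of the standard model S
    "X is true"  <=>  S \models X  <=>  X \in T  <=>  T \vdash X
    % T is complete: for every sentence X, either X \in T or \neg X \in T,
    % and by incompleteness T has no recursively enumerable axiomatisation.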

> if you were to ask Gauss if he worked in ZF or ZFC or TG [...]

Fair enough, but I think he was familiar with Euclid's Elements, and would have agreed on the fact that there are things that are assumed to be true because they are intuitive and things that are proved to be true. In my view, ZF is the culmination of an effort to minimize that intuitive part. By contrast, the notion of model (and Tarski's notion of truth) are more modern.


> That doesn’t mean “X is valid”; if something follows from the axioms of set theory then it holds in all models of set theory

Yes, I was being elliptical. That should read "X is valid in set theory". The point being that it is a notion of validity (i.e. valid in all models of set theory) rather than a notion of satisfiability (i.e. satisfied in a particular model of set theory).


For some reason the "reply" buttons past a certain level of nesting were missing for me, but that appears no longer to be the case, so I'm moving a previous comment here

The notion of relative validity is just semantic entailment, no? I have never seen that referred to in terms of validity, which has been reserved strictly for formulae that are true in all models, not in some class of models.

I’m a Platonist, and I suspect most mathematicians fall towards that end of the spectrum, so I disagree that most Mathematicians see ZFC as the arbiter of truth. They certainly aren’t doing formal proofs in ZFC, and in fact I suspect that most non-logician mathematicians would have difficulty reciting the axioms of ZFC.

That’s not to say I don’t appreciate proof theory and the desire to work in an axiomatic framework, indeed in a past life I spent most of my time formalising various things in Coq, but I don’t think it’s relevant to fundamental mathematical truth, which I believe exists outside of axiomatisation (and I think most mathematicians would agree).


> For some reason the "reply" buttons past a certain level of nesting were missing for me, but that appears no longer to be the case, so I'm moving a previous comment here

If I understand correctly, HN throttles reply speed by hiding the reply button for some time after a comment was posted. The deeper the thread the longer this timeout gets.


Can you give an example of a nontrivial statement that's true in every model?


If you don't have any axioms, the statements that are true in every model are exactly the tautologies (by definition). Usually though, one is interested in a particular set of axioms, typically ZFC. Then "every model" implicitly means "every model of ZFC", so "true" statements are the statements that are true in every model of ZFC, or equivalently, by Gödel's completeness theorem, the statements that are provable from the axioms of ZFC (and only ZFC). As for examples of such statements, well, that's virtually all mathematics. (The use of exotic axioms is quite specialized within mathematics.)
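In symbols (informal notation, with \models for satisfaction and \vdash for provability):

    % no axioms: "true in every structure" picks out exactly the validities/tautologies
    \models P  <=>  \vdash P
    % relative to ZFC (Gödel's completeness theorem):
    ZFC \models P  <=>  ZFC \vdash P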


Now you're moving the goalposts! You can't claim that it's even "widely accepted" that the axiom of choice is "true". I can see this as a fine way of distinguishing DeMorgan's laws from the continuum hypothesis, but the meaning of "true" is a stickier subject.


> You can't claim that's it's even "widely accepted" that the axiom of choice is "true".

I have never claimed anything like that. The original comment was a reaction to the notion of "true but unprovable" which is wrong because what is true is precisely what is provable. You may have an intuitive notion of "true", but with logic, the devil is in the details. In my experience, it is better to stick to the mathematical definitions, especially when talking about things like the incompleteness theorem.

Now, the mathematical notions are as follows. First, you agree on some deduction rules, then some axioms (aka a theory), and by definition, what is true is what is satisfied by every model of the theory. A completeness theorem is then a theorem that states that what is true is precisely what is provable. (Proved by Gödel for classical logic.)

Of course, you may disagree with the choice of axioms. However, when introducing a new axiom, mathematicians don't argue whether it is "true" or not; they have to justify in one way or another that it is relatively consistent. The same thing is true for the deduction rules. In other words, consistency, not truth, is the right metric for axioms and deduction rules. Finally, observe that mathematicians who argue against the axiom of choice or the law of excluded middle do not claim that these are false; they claim that these are not constructive. Yet another notion not to be confused with truth.


> You may have an intuitive notion of "true", but with logic, the devil is in the details. In my experience, it is better to stick to the mathematical definitions, especially when talking about things like the incompleteness theorem.

> A completeness theorem is then a theorem that states that what is true is precisely what is provable. (Proved by Gödel for classical logic.)

As I pointed out in another comment, you are actually using a nonstandard definition of “true”/“truth” yourself; what you are calling “truth” is generally referred to as “validity”.

> However, when introducing a new axiom, mathematicians don't argue whether it is "true" or not

This is not representative of the historical development of mathematical logic and analytic philosophy at all.


I didn't claim you'd claimed it ;)

Where are you getting this definition of truth? I don't think things are as neat and simple as you're making out. Are you familiar with Tarski's work on (semantic) truth?


> true in some models and false in others

Isn't that the case for pretty much everything, that you can twist them in any direction by the choice of axioms?

https://xkcd.com/704/


>> true in some models and false in others

> Isn't that the case for pretty much everything, that you can twist them in any direction by the choice of axioms?

By models in this context are usually meant models of this one particular set of axioms you’re talking about at the moment (considered in some sort of ambient metatheory that isn’t always spelled out explicitly but you can go in that direction too if you want). For example, you can construct a model of PA in ZF that is morally { {}, {{}}, {{{}}}, ... } for the elements (that was not a syntactically valid formula in ZF, but you can prove a critter that is intuitively described by it exists nevertheless) and x \mapsto {x} for the successor, even though PA says nothing about sets, functions, or anything of that sort. Basically, for “model” read “implementation” (an “interpretation” of a thing within another thing is a synonymous term that actual logicians occasionally use).

(And yes, there are models of PA inside ZF that are substantially different from that one[1].)

[1] https://golem.ph.utexas.edu/category/2019/06/nonstandard_mod...
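For concreteness, the encoding sketched above can be written out as follows (informal notation; one possible choice among many):

    0 := \emptyset, \quad S(x) := \{x\}
    % the "naturals" are then \emptyset, \{\emptyset\}, \{\{\emptyset\}\}, ...
    % + and \cdot are defined by recursion, and one checks inside ZF that the
    % Peano axioms hold for this structure, e.g.
    \forall x.\ S(x) \neq 0                        % since \{x\} \neq \emptyset
    \forall x\, y.\ S(x) = S(y) \to x = y          % since \{x\} = \{y\} implies x = y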


Formal systems need not be consistent; if a formal system assumes both P and ~P as axioms, you can derive S and ~S (for any S) by explosion.
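For example, in a classical natural-deduction setting the derivation of an arbitrary S takes only a few lines:

    1. P              (axiom)
    2. ~P             (axiom)
    3. P \/ S         (from 1, by or-introduction)
    4. S              (from 3 and 2, by disjunctive syllogism)
    % ~S is derived the same way, starting from P \/ ~S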


Is the problem that you can't prove consistency? Because if it's consistent, then shouldn't either P or ~P be true, or the axiom not exist?


It is possible to prove consistency, but to do so requires using a more powerful system of axioms, which in turn might be inconsistent. What Gödel showed is that any system that can encode arithmetic (and consequently some notion of computation, e.g. lambda calculus) cannot be complete if it is consistent.
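Stated a little more carefully (a paraphrase in informal notation, roughly the Gödel–Rosser form, not an exact statement):

    If T is consistent, recursively axiomatizable, and interprets basic arithmetic, then:
        (1) there is a sentence G with  T \nvdash G  and  T \nvdash \neg G   (first incompleteness)
        (2) T \nvdash Con(T), assuming the standard derivability conditions   (second incompleteness)
    so (if T is in fact consistent) T cannot prove its own consistency.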


Ah, thank you, that makes more sense. Maybe a weird follow-up question: Is it possible (or does it make sense) to find a proof in an inconsistent system and to try to "transform" it into a consistent one?


Yes, paraconsistent logic could be considered an instance of such a system: https://en.wikipedia.org/wiki/Paraconsistent_logic.


> But the same reasoning can be applied to any formal system F, with fairly minimal assumptions:

Strictly speaking, F has to be satisfiable/consistent as well. {P, ~P} is a trivially recursively enumerable, complete theory which contains Q.


You get this for free in (2). You couldn't "reason about programs" if F were inconsistent, nor would the halting problem make any sense in an inconsistent system.


I would argue "reasoning about programs" formally means "can prove P Q R ..." (with appropriate choices for P Q R) and doesn't implicitly carry an assumption of consistency. But, fair enough.


You make a good point, and on the other hand you're right that "reasoning about programs" is not a term of art, so who knows what it means. Given how Lean is used in the article, I took it as basically being some kind of Turing-complete language (lambda-calculus or some variant or what-not).


I realize this quote isn't meant to be taken literally, and it's a nice one-liner encapsulation of metaclasses, but it always bothered me. The people who "actually need metaclasses" were, at some point, learning about metaclasses for the first time and were "wondering whether they need them".

This quote ignores the middle ground of users who have a valid use case, but don't yet understand metaclasses. Not great advice for people who are trying to learn metaclasses.


I do sympathize with this sentiment, but there are times when "if you're wondering if you need it, you don't need it" is a very true statement, and Python metaclasses are in that boat. To see why, let me explain in more detail:

There's a "basic" [1] understanding of Python's object model, one which understands that most of Python is actually syntactic sugar for calling certain special methods. For example, the expression a[b] is "really" just calling a.__getitem__(b). Except that's not actually true; the "real" object model involves yet more dispatching to get things to work. Metaclasses allow you to muck with that "yet more dispatching" step.

So when do you need metaclasses? When you need to do that low-level mucking--the kind of mucking most people won't know about until they actually need it. If all you know about metaclasses is roughly what they are, then you very likely haven't learned enough about them to actually need them. Conversely, if you've learned those details well enough to need to muck with them, then you've also learned enough to answer the question of whether or not you need metaclasses.

[1] I suspect most Python users don't even have this level of understanding, which is why I put it in scare quotes.
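To make the "yet more dispatching" concrete, here is a small illustrative sketch (the class names are invented for the example, not anything standard):

    # Implicit special-method lookup goes through the type, not the instance:
    class Box:
        def __getitem__(self, key):
            return f"looked up on the class: {key}"

    b = Box()
    b.__getitem__ = lambda key: "looked up on the instance"
    print(b[0])  # "looked up on the class: 0" -- the instance attribute is bypassed

    # A metaclass hooks into that type-level machinery itself, by customizing
    # how the class object is constructed:
    class Meta(type):
        def __new__(mcls, name, bases, namespace):
            namespace.setdefault("created_by", "Meta")  # muck with the class dict
            return super().__new__(mcls, name, bases, namespace)

    class Widget(metaclass=Meta):
        pass

    print(Widget.created_by)  # "Meta"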


I suspect Tim's meaning was that ordinary users don't need to know how metaclasses work in detail. It is enough that people know they exist (i.e. that they do something at the time a class is defined). If you need them, you are doing deep enough magic that you can take the time to learn how they work.

Maybe he was also suggesting that people should generally not be using them. In my experience, it is exceedingly rare to need them. In 29 years of writing Python code, I think I've used them once or twice.


The most salient example I can think of is tracking all subclasses of a class. This is how django tracks `Model` subclasses to create a table for each model, for instance. I wrote a blog post on this topic: https://tybug.github.io/2021/11/06/tracking-all-subclasses-o...

I've also used metaclasses in the past to apply a decorator to certain methods of all subclasses, without needing to specify that decorator in the subclass declaration itself.
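A minimal sketch of the subclass-tracking pattern (TrackedMeta, Model, etc. are illustrative names, not django's actual machinery):

    class TrackedMeta(type):
        registry = []

        def __new__(mcls, name, bases, namespace):
            cls = super().__new__(mcls, name, bases, namespace)
            if bases:  # skip the root class itself
                TrackedMeta.registry.append(cls)
            return cls

    class Model(metaclass=TrackedMeta):
        pass

    class User(Model): pass
    class Post(Model): pass

    # TrackedMeta.registry now holds [User, Post], with neither subclass
    # having to register itself explicitly.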


Came here to say this. This is my most common use of metaclasses, and __init_subclass__ looks to be a much cleaner solution.
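For comparison, a sketch of the same idea with __init_subclass__ (again, the names are illustrative):

    class Model:
        registry = []

        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            Model.registry.append(cls)

    class User(Model): pass
    class Post(Model): pass

    # Model.registry == [User, Post]; no metaclass involved.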

