If the proven theorem is useful, isn't it better to be able to make use of it earlier, rather than wait for the elegant proof?
An example: if the AdS/CFT conjecture is rigorously proven, that will elevate it from "useful computational technique" to "law of mathematics, and probably how nature does it, too." I'll take any proof, as early as I can get it, to guide our experimental work.
As for your example touching on applications of math: it approaches scientific research from the wrong perspective. If some math follows reality (agrees with experiment, and better yet makes predictions that turn out to be right), there has to be some backing to it, even if the rigor isn't fully there. Physics often runs with ideas before the math has caught up (see QM in the early days, HEP in general, etc.).
So, if we set aside applications, which, quite frankly, only care about whether X is true, and the few really important conjectures I can think of--big questions like P=NP, for example--then math itself is, I imagine, a study for its own sake. So, what about exhaustive proofs of your run-of-the-mill conjecture? Who would want to read a paper from a mathematician that merely iterates through a 200TB cache of data?
People forget that science is a social phenomenon as much as it is a systematic process.
I want all this so I can do it myself, so I can prove a run-of-the-mill conjecture without wasting my time on it. I want this because proving is difficult, and because there is more to SAT than mathematical proofs: it is used in industry. I want this because I trust a verified proof that the airplane won't crash more than a human proof.
The availability of computer proofs does not devalue the study of mathematics.
In case anyone else had no idea what the parent was talking about.
Wikipedia on quantum physics is usually pretty atrocious, and this page is no exception. It reads like... well, nerds showing off knowledge. (There's nothing wrong with that, but unless you're already pretty steeped in things quantum, the page is jargon without meaning.)
A more readable summary: http://whystringtheory.com/toolbox/ads-cft/
Indeed. Wikipedia on math in general is useless unless you already understand it.
Then again: "If someone cannot explain something in plain English, then we should question whether they really do themselves understand what they profess." - Richard Feynman
Being able to explain it doesn't mean the recipient will be able to understand the explanation.
Richard Feynman has all sorts of explanations in his books and lectures that are still beyond most people's understanding.
This does not mean: "in plain English and in five minutes".
Gauge/gravity duality under particular circumstances is certainly interesting, but unfortunately our universe is not AdS and in particular does not have a boundary on which the conformal field theory can be constrained. So being able to define string theory nonperturbatively on a Minkowski space defined on the boundary of spacetime is certainly interesting, but as far as we can tell our spacetime does not have that boundary and moreover it would be much nicer if we had a background independent formulation for a nonperturbative string theory, and the gauge/gravity duality does not really offer that. One might draw the analogy to being able to define a patch of Minkowski space around a point on a geodesic through a general curved spacetime: General Relativity explicitly promises only that a smooth coordinatization (e.g. Fermi coordinates) is possible in an infinitesimal neighbourhood around the point although in weak gravity the patch can be much bigger (and the system of coordinates more "Cartesian"). The worry (by this analogy) should be that gauge/gravity correspondence falls apart in strong gravity.
Moreover, it is an enormous stretch to argue that if a large class of bulk theories have the same symmetries as certain boundary theories (or are related in some other way), then background-independent physics is thereby recovered. Rovelli enjoys pointing out that in GR you can always select a preferred frame, map an SR theory over it, and do useful computations, but that doing so gives no insight into properly Lorentz-invariant physics; so how does choosing a preferred spacetime and mapping a field theory over it provide insight into quantum gravity?
However, some string theorists (notably Verlinde recently) have been trying to get out of the AdS "box" and think about other chosen spacetimes (e.g. asymptotic de Sitter) to see if some gauge/gravity ideas are even workable for those spacetimes. Maybe they are.
> I'll take any proof, as early as I can get it, in order to guide our experimental work.
If you assume -- because General Relativity survives all observational tests and is expected to be completely correct in weak gravity -- that the string theory has to reduce to (or at least fully reproduce) classical GR in the appropriate limit then you can't expect any experiments which could distinguish between the two in the near future. But if you have a string theory that departs from GR in a testable limit, then sure, compare that with existing and future evidence.
However, string theory might be testable outside the gravitational sector if the bulk-boundary correspondence covers more than that sector. Given that string theory tends to come with infinitely many fields, extra dimensions, supersymmetry, and other theoretical objects that are not in the Standard Model, BTSM experiments may have something to say about such string theory models.
There are lots of situations where the result obtained is the only thing that matters to most users.
A proof that works is the most important part. After that you can make it nice.
Not the 200 terabyte one, you can't.
Once I could tell a mathematician the count; in both cases someone proved the result in an alternative way fairly soon afterwards :)
But conversely, I don't think anyone's demonstrated "proof mining" to be practical in any general sense. Huge proofs have been produced by brute force, yes, BUT ONLY after the original theorem claim was altered in a clever and important fashion (for the 4-color theorem, for Ramsey theory, and so forth).
For example, the graph minor theorem states that a certain order on graphs does not have an infinite antichain. Classically this may have some interesting consequences, but in reality it's not very useful. However, the proof contains some real gems, such as the notion of tree decompositions and the graph structure theorem, which led to mountains of new results throughout graph theory and related disciplines. Tellingly, the graph minor theorem is a short and simple statement, while the graph structure theorem is a deeply technical statement that would not have been conjectured on its own but rather was discovered in the process of proving the graph minor theorem.
Proof used to be a distillation of a complex phenomenon, or a complex logical tangle, into a smooth thread that one could follow and agree that, yes, this is obvious when you put it this way.
Sure, at some point computers will take over the work of doing proofs, just like in a different trade hammers took over fists, or wheels took over feet, or engines took over muscles, hearts and lungs. But at that point we will have to re-examine this concept of "proof".
Rene Descartes or Isaac Newton would throw a fit if they could see this.
To be sure, myself I am very much a classicist of the old school when it comes to such matters. But this is a brave new world we are slowly but surely moving into.
No. The goal of proving a theorem is proving it any way possible. An "inelegant" proof is just as much a proof as an "elegant" proof.
> and in the best case scenario a proof that actually forms a sensible new theoretical framework in which the theorem becomes almost trivial to prove.
The idea is that, in creating the proof, a mathematician invents a new way of looking at the problem. When viewed in this new way, the problem itself sometimes becomes trivial. When this happens, the framework that was developed can be used to attack new problems that had previously been unapproachable.
When I was working on my PhD, I probably saved days or weeks of effort by writing a program to brute force an approximate solution to an equation I couldn't work out. After seeing that the solution wasn't what I wanted, I was able to spend the time I would otherwise have spent on figuring out the equation on something more useful.
Note that most people believe that co-NP /= NP, which implies that an unsatisfiability proof via SAT solving in general won't be efficiently verifiable.
Recent progress in automated reasoning and supercomputing gives rise to a new era of brute force. The game changer is "SAT," a disruptive, brute-reasoning technology in industry and science. We illustrate its strength and potential via the proof of the Boolean Pythagorean Triples Problem, a long-standing open problem in Ramsey Theory. This 200TB proof has been constructed completely automatically--paradoxically, in an ingenious way. We welcome these bold new proofs emerging on the horizon, beyond human understanding--both mathematics and industry need them.
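To make the problem concrete: the question is whether {1, ..., n} can be split into two parts such that neither part contains a Pythagorean triple a² + b² = c². The actual result needed a SAT solver and n = 7825; the toy brute-force sketch below (helper names are mine) only handles tiny n:

```python
from itertools import product

def pythagorean_triples(n):
    """All (a, b, c) with a < b < c <= n and a^2 + b^2 = c^2."""
    return [(a, b, c)
            for a in range(1, n + 1)
            for b in range(a + 1, n + 1)
            for c in range(b + 1, n + 1)
            if a * a + b * b == c * c]

def has_valid_coloring(n):
    """Brute-force all 2-colorings of {1..n}; True if some coloring
    leaves no Pythagorean triple monochromatic."""
    triples = pythagorean_triples(n)
    for coloring in product((0, 1), repeat=n):
        color = lambda k: coloring[k - 1]
        if all(len({color(a), color(b), color(c)}) > 1
               for (a, b, c) in triples):
            return True
    return False

print(has_valid_coloring(20))  # True: a valid 2-coloring still exists at n = 20
```

For n = 20 there are only six triples to avoid, so a valid coloring is found quickly; the published theorem is that no valid coloring exists once n reaches 7825, which is what the 200TB of case analysis establishes.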
How do you verify a 200TB proof?
Preferably on different hardware.
Hardware error is almost certain in this case; cf. rowhammer or bitsquatting.
This study from Google is pretty interesting:
From the paper that you cited:
Our first observation is that memory errors are not rare events. About a third of all machines in the fleet experience at least one memory error per year (see column CE Incid. %) and the average number of correctable errors per year is over 22,000. These numbers vary across platforms, with some platforms (e.g. Platform A and B) seeing nearly 50% of their machines affected by correctable errors, while in others only 12-27% are affected. The median number of errors per year for those machines that experience at least one error ranges from 25 to 611.
"We have presented a formally verified tool chain to check DRAT unsatisfiability certificates. In single-threaded mode, our approach is more than two times faster than the (unverified) standard tool drat-trim, on a benchmark suite taken from the 2016 SAT competition."
Lammich, Peter. "Efficient Verified (UN)SAT Certificate Checking." International Conference on Automated Deduction. Springer, Cham, 2017.
The new technique creates smaller, easier-to-check witnesses. The solver they used outputs this witness during solving and saves it. They also created a formally verified witness verifier. Finally, they ran the verifier on the witness for every sub-problem.
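For intuition about what such a certificate checker does at its core: a DRAT-style certificate is (roughly) a sequence of clauses, each of which must follow from the clauses so far by unit propagation alone, the so-called RUP check. A minimal, unoptimized sketch of that check (my own helper names, not the paper's code):

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign forced literals; return True if a conflict arises.
    Literals are nonzero ints (DIMACS style); assignment maps var -> bool."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                var, val = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == val:
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:           # every literal falsified: conflict
                return True
            if len(unassigned) == 1:     # unit clause: forced assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return False

def is_rup(clauses, clause):
    """Check that `clause` follows from `clauses` by unit propagation:
    assume every literal of `clause` false, then look for a conflict."""
    assignment = {abs(lit): (lit < 0) for lit in clause}
    return unit_propagate(clauses, assignment)

# x1, x2 in all four polarities: an unsatisfiable formula
F = [[1, 2], [-1, 2], [1, -2], [-1, -2]]
print(is_rup(F, [2]))         # the clause (x2) is RUP...
print(is_rup(F + [[2]], []))  # ...then the empty clause is, proving UNSAT
```

Checking each certificate step is cheap and mechanical, which is exactly why it can be done by a small formally verified program even when the certificate is hundreds of terabytes.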
 Speaking as a scientist.
I'm not into that SAT stuff enough to know if one can prove that :-)
Sometimes I imagine that if we had much larger short term memories, an argument with only 633 cases would be obvious.
I'm glad that Terry seems to have found a different proof. I attended a discussion of it during last week's Mathematical Congress of the Americas here in Mtl, and I think they were saying that Terry's proof seems correct.
Peter Norvig, 2011: On Chomsky and the Two Cultures of Statistical Learning
That term implies that the speed of the computer that was used to find the solution is the determining factor.
It is not. Modern SAT solvers use very elegant search algorithms that have enabled the solution of SAT-encoded problems that could never have been solved if only "brute force" was used.
I gave this some thought lately, and it occurred to me that it's a problem similar to how computers play chess. I'd like to know if I'm right.
The equation 2x + 100 = 500 is an algebraic equation, and with algebraic rules there is a finite number of ways to transform it. For instance, x + x + 100 = 500, 3x - x = 400, x + 200 = 600 - x, and so on. Each of these new equations can once again be transformed in a finite number of ways. So basically the initial equation is the root node of a tree that represents all the ways the equation can be transformed.
So it's a problem similar to chess, or many other board games, where you start from a position and must explore a tree of possibilities to find the leaves (checkmates, for instance). For an equation, the leaves are nodes of the form x = numeric value.
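That tree-search framing can be sketched directly. Below is a toy BFS (names of my own invention) over states (a, b, c) representing a·x + b = c, with just two moves, subtracting the constant and dividing through, searching for the solved form x = value:

```python
from collections import deque

def solve_by_search(a, b, c):
    """BFS over transformations of a*x + b = c; returns the list of states
    from the start to the solved form (1, 0, value), or None."""
    start = (a, b, c)
    parent = {start: None}
    queue = deque([start])
    while queue:
        a, b, c = state = queue.popleft()
        if a == 1 and b == 0:              # solved: x = c
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        moves = []
        if b != 0:
            moves.append((a, 0, c - b))    # subtract b from both sides
        if a not in (0, 1) and b % a == 0 and c % a == 0:
            moves.append((1, b // a, c // a))  # divide both sides by a
        for nxt in moves:
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

print(solve_by_search(2, 100, 500))
# [(2, 100, 500), (2, 0, 400), (1, 0, 200)] -- i.e. x = 200
```

Of course, this only works because the move set here is tiny and terminating; as the replies below point out, the full space of algebraic transformations is not finite, which is where the analogy to game trees gets interesting.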
Does that mean success in solving board games can have applications in computational math?
x + x + 100 = 500
3x - x = 400
you can probably list any number of variants along the lines of
(n)x - (n - 2)x + y = 400 + y
so clearly that isn't a finite transformation space.
Now, these transformations might look trivial, but this problem is too. So there's an interesting question of bounding your algebraic transformations to a subset of all possible transformations that still keeps all possible solutions reachable, which ultimately doesn't answer your question.
PS. On second thought the number of rules is finite, so the number of possibilities has to be finite, at least if you apply the rules one at a time. It's the iteration of the rules that makes the tree infinite.
The reason a move tree in a chess game has a finite number of nodes is not due to the finite number of possible transforms, but due to the fact that the number of possible board states is finite.
A graph, really.
The theory of real closed fields is complete and decidable, for example.
For integers, for instance, the rule is in their definition, and there is only one: 2 is 1+1, 5 is 4+1, and so on.
> are easily simplified back to the original one (or close to it).
Not really. The counter-example being expansion of factors.
((true AND false) OR true) AND NOT (true OR false)
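For what it's worth, the expression above can be checked mechanically; here it is in Python:

```python
# The boolean expression from the comment above, evaluated directly:
# (True and False) -> False; (False or True) -> True
# (True or False) -> True; not True -> False; (True and False) -> False
expr = ((True and False) or True) and not (True or False)
print(expr)  # False
```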
You can simplify these proofs by extracting subproofs into a database and replacing them by substitution. It is like compression, but makes more sense.
First, giving an example of solving an algebraic equation using brute force is misleading, because you can't enumerate all values of a real number.
Second, the security risks regarding brute-force computing have nothing to do with SAT.
Their description of SAT is also inaccurate: it's not new, and it was actually one of the first formal problems studied in computer science. And the original, linked paper notes that it is not brute-force computing itself that is the breakthrough, but the addition of heuristics in SAT solvers that makes brute-forcing feasible.
Apologies for being nasty
This is thanks to a newish technology called Satisfiability (SAT) solving, which is a method of generating proofs in propositional logic.
3-SAT was old when I was in college. And it's not for generating proofs.
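For readers new to the terminology: a SAT instance is a set of clauses over boolean variables, and solving it means finding an assignment that satisfies every clause. A deliberately naive brute-force check makes the definition concrete (helper names are mine; real CDCL solvers prune this search enormously):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """clauses: list of lists of nonzero ints (DIMACS-style literals,
    where -k means 'variable k is false'); returns a satisfying
    assignment as a dict {var: bool}, or None if unsatisfiable."""
    for bits in product((False, True), repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))
# {1: False, 2: True, 3: True}
```

The brute-force loop is exponential in the number of variables; the whole point of modern solvers is that clever propagation and clause learning avoid visiting most of that space.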
Not really. All kinds of non-elegant 80- and 200-page proofs (a proof doesn't have to be brute force to be inelegant) are regularly churned out and appreciated every day.
Those are in fact the bread and butter of mathematics in practice, not the few and far between elegant proofs people read about.
The average reading speed is 200wpm. The average human lifespan is 79 years, or about 41.5 million minutes. If you did nothing but read at 200wpm for your entire life, you might read 8.30 billion words. A word is 5.1 letters, on average, and in ASCII, a letter is one byte. Multiply that all together to get an estimated human reading speed of 42.4 GB/lifetime.
It would take 4728 entire human lifetimes to read this proof once.
This is vastly underestimating the actual number, due to silly things like "eating", "sleeping", "learning to read", etc.
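The back-of-the-envelope numbers check out (this gives ~4719 lifetimes rather than 4728; small rounding differences aside):

```python
# Reproducing the estimate above, using the comment's average figures
wpm = 200                            # average reading speed, words per minute
minutes = 79 * 365.25 * 24 * 60      # 79-year lifespan in minutes
words = wpm * minutes                # words read in a nonstop lifetime
bytes_read = words * 5.1             # ~5.1 ASCII bytes per average word
lifetimes = 200e12 / bytes_read      # lifetimes needed to read 200 TB
print(round(bytes_read / 1e9, 1), "GB/lifetime;", round(lifetimes), "lifetimes")
# 42.4 GB/lifetime; 4719 lifetimes
```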
Not really, since what matters is the compressibility of the proof, not its size.
We might not be able to read all individual clauses, but it's enough that we know how the clauses are constructed.
Problem is: a smart guy can catch a flaw in another smart guy's line of reasoning. Can they catch an error in 200TB of proof?
Yes, you could either put an army of people to work picking through the data, or write another software program that also picks through the data.
In either case, we are talking about two different things. The initial discussion was about probabilistic theorems, which is the idea of throwing a bunch of tests at a problem and getting a sense of how likely it is that the theorem is correct vs going through the entire space of a problem through brute force.
Then somehow the discussion changed to whether people and computers can be trusted to not make mistakes when brute forcing the problem space, which is an entirely different subject from the topic of this post.
Not what I'm saying. I would suggest that absolute certainty is impossible wrt verifying conventionally proved theorems as well.
Call it what you will.
> If you are saying all math is probabilistic even with conventional theorems, how can you predict a future of probabilistic math if it's already here?
One crucial difference is that our confidence level in a given proposition will be made explicit and will be quantified. I think you are misinterpreting what I'm saying as well-- the "experiments" I'm suggesting are not naive monte carlo simulations or tests of a nonexhaustive sample of special cases, but the generation of formal logical proofs (although not necessarily limited to that). The uncertainty would arise physically from the largeness of the computation, and once that barrier is crossed, it's conceivable that other (less rigorous) methods could be added to the battery of techniques as well (thereby increasing confidence). Also note that the size (amount of information) of the theorems and proofs would be large, and perhaps will surpass human comprehension, even after multiple stages of approximation and abstraction. The degree of confidence in such theorems would also be close to certainty as well. No human readable proofs would be harmed in the process, except for the false ones!
Was thinking much bigger than that. Probably won't be feasible in the near future, and would of course be directed by very clever agents (mathematicians), themselves having excellent mathematical insight.
>It is extremely low, and very unlikely even for a data set of this size, especially when compared to human error rates.
That's the idea. Replace "practically impossible to discover or know" with knowing to an astronomically high degree of certainty. Never suggested otherwise.