
Can some bored billionaire please throw $100M at this project?

Talk about revolutionary, true automated pure math would be a human milestone on par with very few developments in history.




I'm confused why you would say that.

FTA: > Ultimately every named construct or concept in pure mathematics needs to have a place in our symbolic language.

The reason your plea confuses me is that I don't see the social value in encoding every _public_ "named construct or concept" in a _private_ "symbolic language" from which only one proprietary piece of software, monetized by one corporation, can benefit.

Why would we want to do that?


We would want to do that for the same reasons that people write textbooks. Mathematica was not designed for building applications, as Java or C++ were, but as a tool for research and learning. To that end, Mathematica is well designed and its documentation is exceptionally curated, much like a physics textbook, which also encodes public knowledge in a private (copyrighted) form.

The mathematicians and software engineers at Wolfram Research should be compensated for their efforts, just like the authors and publishers of textbooks. And I'd much rather pay for my textbooks than see them filled with ads.

As a lover of mathematics I'm happy to see people writing and thinking about the ideas in this post. It's a noble pursuit.


Your analogy between textbooks and mathematical software does not hold. Think it through: it's not a copyright issue. I can explain my point of view in detail if you like.

It's not a noble pursuit when the gain is too one-sided. Wolfram and his company would accrue too much material advantage at the expense of an open and public mathematical discourse. This much is plain to see; watch out for rhetoric of the kind on display in that post!


I would rather they build one that does not lock people in to a single proprietary product run by an egomaniac.


> Talk about revolutionary, true automated pure math would be a human milestone on par with very few developments in history.

Hilbert thought so too, but it is provably not to be (http://en.wikipedia.org/wiki/Entscheidungsproblem), no matter how much money is thrown at it by how many bored billionaires.

(EDIT: To be clear, I am not claiming that there is no room for automated assistance of pure math, only that it can never be wholly automated.)

(Second, important EDIT: As Khaki points out (https://news.ycombinator.com/item?id=8170062), I should make it clearer that many objections about what computers can't do apply equally well to show what humans also can't do.)


The negative answer to the Entscheidungsproblem says that deciding whether a sentence is a theorem is at least as hard as the halting problem. This seems irrelevant to the issue of whether human input is essential for finding or proving interesting theorems.
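
To spell out that reduction, here is a rough sketch in Python. Both helper functions are hypothetical stand-ins, not real library calls: encode_halting_sentence stands for Turing's arithmetisation of machine behaviour, and decide_provable for the decision procedure that the Entscheidungsproblem asks for.

  # Hypothetical sketch: if a total decision procedure for provability
  # existed, it would decide the halting problem.
  def halts(machine, tape):
      # The arithmetic sentence "machine halts on tape", which a suitable
      # theory proves exactly when it is true (Sigma-1 completeness).
      sentence = encode_halting_sentence(machine, tape)
      return decide_provable(sentence)  # assumed total decider

Since halts cannot exist, neither can decide_provable.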


I agree that I should not have singled out automated mathematics as uniquely vulnerable—human efforts at formalisation are also subject to limitations. I was also conflating Turing's work on the Entscheidungsproblem with Gödel's not-unrelated but meaningfully distinct incompleteness results, which are more relevant to the point that I was trying to make.

Nonetheless, I point out that I did not claim that human input was essential either for finding or proving interesting theorems (although provable results about (non-)computability seem as close as one could get to such a statement!), but only that math could never be completely automated (or, to use what, as you implicitly point out, is a more appropriate term, even formalised).


I think the problem is a disagreement on what kind of "automation" is the goal. You seem to want to automate all possible math. Sure, we can't do that. But we certainly can try to automate any math a human can do (eventually).

An automated factory cannot build any possible car. But we no longer need to have people hand assembling the components.


> I think the problem is a disagreement on what kind of "automation" is the goal. You seem to want to automate all possible math. Sure, we can't do that. But we certainly can try to automate any math a human can do (eventually).

This is a very helpful reply; thanks.

I don't want to automate all math, since it is impossible to do so; and I think that I have clearly indicated in each of my posts in this thread that I am speaking only to that particular, narrow issue—but I can see how that narrow nit-picking might not be of interest. The reason that I brought it up was because it sounded to me like what pnut (https://news.ycombinator.com/item?id=8169453) was suggesting.

Now, I do not believe that computers can automate any math that a human can do, except in the narrow sense that I might be able a posteriori to tell a computer to follow exactly the steps that I took. (Some 'intuitive' processes are presumably not subject to automation even in this very weak sense.) This is a much stronger assertion requiring much stronger demonstration, and I am not prepared to (nor, I think, have I claimed to) address it. Perhaps, with that caveat, that's the discussion that I should have started (with perhaps my weaker point as a footnote).


>Hilbert thought so too, but it is provably not to be (http://en.wikipedia.org/wiki/Entscheidungsproblem), no matter how much money is thrown at it by how many bored billionaires.

Given that human beings do mathematics just fine, I think there might be some way to deal with this.


> Given that human beings do mathematics just fine, I think there might be some way to deal with this.

I'm confused by this response.

Certainly I am not claiming that no mathematics can be done, but equally certainly there is a lot of math that seems, at least "at our present level of (mental) technology", to be out of human reach. Since it is (I think) precisely this sort of math in which Wolfram is most interested, it seems odd to point out the math that humans can do as a counterexample.

(I am further confused because it seems to me that the sets of tasks that humans can perform easily and that computers can perform easily are, if not disjoint, at least of very small overlap, so that it might not be unreasonable to take the presence of a task in one set as reason to question its membership in the other.)


Wolfram's essay addresses this point...


There is at least one thing Wolfram glosses over:

"So in principle one can imagine having a system that takes input and generates “interesting” theorems about it."

I can imagine having a spaceship that goes faster than light, too. It is more than slightly harder to actually build it.

Generating theorems is easy (it may even be possible to generate all theorems in order of increasing length of their proofs without getting stuck generating ever-longer proofs of theorems encountered earlier, but I am not sure of that).
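
For what it's worth, that enumeration does work in outline, because there are only finitely many strings of each length. A minimal sketch, assuming a hypothetical is_valid_proof checker and conclusion_of extractor for some fixed formal system:

  from itertools import count, product

  ALPHABET = "()~&|>Axv="  # placeholder symbol set for the formal system

  def theorems():
      # Enumerate every string in order of increasing length; each length
      # class is finite, so no valid proof is postponed forever.
      for n in count(1):
          for chars in product(ALPHABET, repeat=n):
              candidate = "".join(chars)
              if is_valid_proof(candidate):       # hypothetical checker
                  yield conclusion_of(candidate)  # hypothetical extractor

Of course, this says nothing about which of the yielded theorems are interesting, which is your real point.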

However, we have only very faint ideas about what constitutes an interesting theorem (are theorems even dull or interesting independently of their known proofs? if one finds a new, interesting proof of a theorem that previously had only a dull proof, I think the theorem can become less dull), and we certainly do not know how to discriminate them programmatically from dull ones.


Life's too short to read Wolfram when he gets going, so I only skimmed it; but: where? I see:

> In a sense an axiom system is a way of giving constraints too: it doesn’t say that such-and-such an operator “is Nand”; it just says that the operator must satisfy certain constraints. And even for something like standard Peano arithmetic, we know from Gödel’s Theorem that we can never ultimately resolve the constraints–we can never nail down that the thing we denote by “+” in the axioms is the particular operation of ordinary integer addition. Of course, we can still prove plenty of theorems about “+”, and those are what we choose from for our report.

and:

> But there will inevitably be some limitations—resulting in fact from features of mathematics itself. For example, it won’t necessarily be easy to tell what theorem might apply to what, or even what theorems might be equivalent. Ultimately these are classic theoretically undecidable problems—and I suspect that they will often actually be difficult in practical cases too. And at the very least, all of them involve the same kind of basic process as automated theorem proving.

Are these what you mean? Both of these seem to amount to, "Sure, mathematicians say you can't do it, but I hold out hope", an argument which is no more persuasive than one would expect to find from various circle-squarers. Less subjectively, they both seem to miss the point; the incompleteness results, for example, do not just say that certain theorems aren't clearly specified, or are equivalent to unknown other theorems, or vague things like that, but specifically that there are true but unprovable theorems—putting paid to any attempt at complete automation.
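
For reference, the usual modern statement (a standard fact, given here in LaTeX for precision): for any consistent, effectively axiomatised theory T that interprets elementary arithmetic,

  \exists\, G_T \ \text{such that}\ T \nvdash G_T \ \text{and}\ T \nvdash \lnot G_T

and if T is moreover sound, G_T is true; so truth outruns provability in any single such system.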

I emphasise again that this is only a knock if you dream grandiosely of capturing all of mathematics in an automated (or, as Khaki pointed out above (https://news.ycombinator.com/item?id=8170062) that I should really be saying, even formal but human-constructed) framework; it says nothing about the feasibility of automated theorem-proving for some results, and, indeed, we have a success story for one such result on the front page even now: https://news.ycombinator.com/item?id=8169686


Automation is doing automatically what humans do manually. So your point misses the point.


> Automation is doing automatically what humans do manually. So your point misses the point.

Could you clarify what point I'm missing? It's true that I am, intentionally, conflating automation and formalisation of mathematics, activities which can be meaningfully distinguished. Is it to that that you object?

On the other hand, I think that it is also meaningful to point out that, although automation is just a version of human activity carried out by a computer, not all human activity is subject to automation—at least, not yet. The two bad examples that spring to mind are chess-playing and facial recognition (bad because both have recently come into the reach of computers).

In principle, any human activity that we understand can be automated simply by giving a computer sufficiently precise instructions; but (1) if those instructions are too detailed and ad hoc, then it may be just as hard to give them as to carry out the task, and (2) there are many human activities that we do not understand.


> In principle

What principle is that? I've never heard of such a thing.

> any human activity that we understand can be automated simply by giving a computer sufficiently precise instructions

Care to back up this clearly false statement? I've heard only the opposite regarding computers. Also, hard AI being nothing more than an impossibility and a (political) fantasy of know-nothings is a very real option. The peak of automation will be reached within this century, and it won't be "great" in its ability.

> (2) there are many human activities that we do not understand.

You're just being pseudo-intellectual and making up false principles.



