Step 2: The boycott gets popular. Fewer people submit to Nature.
Step 3: Submit to Nature. Accepted due to less competition.
The boycott doesn't apply to Nature, and arguably doesn't apply to non-publication writing like News and Views. Rather, the boycott is of Nature Machine Intelligence, and the first author of the underlying paper in NMI is a signatory on the boycott list.
Step 5: Profit!!
Step 5: You get tenure and more grants
The names may sound similar, but there’s a big, big difference.
Published in Nature Machine Intelligence. Publicized through the regular Nature site.
But at least this article seems to be available as Open Access. I am not sure if this is temporary or not.
The real merit of academia is that people can sit alone in a corner doing their work, if they choose to. Industry enforces socialisation, conformity to norms, and schmoozing.
Also, the track record of the academic world vis-a-vis disruption is impressive. The oldest western universities are around 900 years old.
That said, the article seems to be a fluff piece about a good but not groundbreaking paper. (Nature MI is not Nature, not by a long shot. It does not have any prestige, yet.)
That's a bit dubious. If you don't network in academia I doubt you can get far at all, while you can comfortably do your job as a technical employee even if you are not really into networking (provided you don't have further ambitions). Networking is absolutely essential for would-be professors or anybody who wants long-term stable employment in academia, and the competition is much fiercer.
I'm only speaking to the practical impact.
It's possible someone will find some way to work this down into something practical, but I'd be fairly surprised. Real machine learners get finite data, for which the continuum hypothesis is irrelevant.
(I assume the original authors fully understand this.)
As you state, that's a much weaker statement than the truth: "So these results almost certainly have no practical importance", and as expected, the stronger statement is nowhere in the original paper.
The press release, while careful not to make false statements, commits a sin of pragmatics: it lets the reader think there is something going on, by saying "maybe" where it should say "almost certainly not", and by publishing in the first place a pop-science summary of a highly technical result that has no meaning to a popular audience, so the (abstract, mathematical, unphysical) importance of the result is necessarily destroyed in translation.
I haven't read the paper and so don't know how applicable it is, I just don't like your argument.
The practical effect is that you should not try to build an algorithm to detect infinite loops in general (though you wouldn't need undecidability for this, per se; even if the halting problem were merely NP-complete, that would probably have been enough to kill attempts at a practical general algorithm dead in the water).
Of course, it is true in specific cases that you can decide whether a given program halts, and, indeed in almost all practical instances if you know how to solve a given problem then you should be able to construct an algorithm to solve it which provably halts. (Not including, of course, things that depend on environmental inputs, something that computes whether a given number satisfies Goldbach's conjecture, etc.)
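To make that concrete (a minimal sketch, not tied to the paper): for most everyday algorithms, termination is provable by exhibiting a quantity that strictly decreases on every iteration. Euclid's gcd is the classic example:

```python
def gcd(a: int, b: int) -> int:
    # Termination proof sketch: b is a non-negative integer that strictly
    # decreases on each iteration (0 <= a % b < b), and a non-negative
    # integer cannot decrease forever, so the loop must stop.
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```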
There is, of course, enormous theoretical value in the undecidability of the halting problem, and in Gödel's incompleteness theorems for general axiomatic systems. This theoretical value guides future theoretical research, which is likely to lead to further practical value down the line - a practical effect in the long term. It also indicates that an attempt to build a fully general theorem prover is a useless endeavor (though approximate / specialized theorem provers are, of course, possible and indeed exist).
In particular, Gödel's incompleteness has quite real implications for an Artificial General Intelligence, which would have to reason about mathematics in general, and about its own thought processes in particular. Under classical mathematical reasoning, such an AI would be unable to "have confidence" in its own reasoning process. However, if the assumptions are relaxed from statements with 0/1 truth values to statements with probabilities, and time-independence of truth is relaxed, it is possible to define an agent which is able to assign (almost) consistent (actually "coherent") probabilities to what its own beliefs are at time t. See https://arxiv.org/abs/1609.03543 for more details. This is probably the strongest such statement that can be made in this class of arguments.
I'm not sure about this. We often make calculations with computer representations of real numbers. We can calculate quantities based on algebraic or transcendental numbers with arbitrary precision (optimizations, navigation), or we can use them as a basis (image processing) and represent rational numbers in their terms. Real numbers are all over computer applications.
Suppose we are only working with rational multiples of Pi. Then we can ONLY have finite calculations of irrational numbers and it's impossible to have a finite calculation of a rational number (except for multiplication by 0).
>Representing rationals isn't affected by the paper's findings, because they have the same cardinality as the naturals.
The Natural numbers are still infinite which presents its own inherent difficulty. It's not as though countability lends itself better to programming.
> The Natural numbers are still infinite which presents its own inherent difficulty. It's not as though countability lends itself better to programming
Actually countability does lend itself to programming, essentially by definition. You cannot write a function to enumerate all the real numbers precisely because they're uncountable, even if you had an infinite amount of time. You can trivially do this with a "merely" countably infinite set, like the naturals or rationals.
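To make the contrast concrete, here is a sketch of such an enumeration for the positive rationals (using the Calkin-Wilf sequence; the particular sequence is just one illustrative choice). Every positive rational appears exactly once, so any given one is reached after finitely many steps - something no program can do for the reals:

```python
import itertools
from fractions import Fraction

def rationals():
    """Yield every positive rational exactly once (Calkin-Wilf sequence)."""
    q = Fraction(1)
    while True:
        yield q
        # Next element of the sequence: 1 / (2*floor(q) - q + 1)
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

print(list(itertools.islice(rationals(), 6)))
# [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3), Fraction(3, 2), Fraction(2, 3)]
```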
> Suppose we are only working with rational multiples of Pi. Then we can ONLY have finite calculations of irrational numbers and it's impossible to have a finite calculation of a rational number (except for multiplication by 0).
It sounds like you're suggesting we can store an irrational number in a finite representation by generating it with a finite function (e.g. multiply a rational number of times). But that doesn't work. You can't represent a number N in a finite number of steps if you can't represent all factors of N in a finite number of steps.
This follows from the field axioms, for the same reason that all products of irrationals and nonzero rationals are themselves irrational. Simply put, if you can't do it for Pi, you can't do it for a (nonzero) rational multiple of Pi.
Even with an infinite amount of time your enumeration function will not generate all of the natural numbers, even though the natural numbers are countable.
You're right that we can't write a function to determine an immediate successor but we can write successor functions.
>It sounds like you're suggesting we can store an irrational number in a finite representation by generating it with a finite function (e.g. multiply a rational number of times). But that doesn't work. You can't represent a number N in a finite number of steps if you can't represent all factors of N in a finite number of steps.
We are getting a few concepts muddled. Storing an irrational number as a finite representation is trivially easy. For example: "pi". Alternatively I could write an algorithm to generate the decimal expansion of the square root of 2. Where it's difficult is when we talk about numbers that aren't describable in a finite number of terms, i.e. Uncomputable. There is a certain unknowable quality to these numbers and I would suggest that this isn't inherent from the cardinality of the real numbers and that we run into similar problems when trying to compute some natural numbers, like random numbers.
Huh? You can trivially write a function to generate all natural numbers:
    def naturals(x):
        while True: print(x); x += 1
> Where it's difficult is when we talk about numbers that aren't describable in a finite number of terms, i.e. Uncomputable.
This is not what uncomputable means. Uncomputable means we cannot calculate a number to any precision, not that the number has an infinite decimal expansion.
Pi is not uncomputable. You can’t store its entire decimal expansion in finite space but it is certainly computable - we compute it all the time. However, almost all real numbers are strictly uncomputable. Among many other proofs, this is evident from the simple fact that the set of all possible Turing machines is countable. Since the reals are uncountable, it follows that almost all reals must not be computable by any Turing machine.
Likewise, there are many potential calculations involving real numbers for which a computer will output a strictly incorrect - not just imprecise - result because its real solution is uncomputable. This does not happen in the naturals or rationals.
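For instance, pi can be computed to any requested precision using nothing but integer arithmetic. A sketch using Machin's formula (the choice of formula and the guard-digit count are just illustrative):

```python
def pi_digits(n: int) -> str:
    """Return pi truncated to n decimal digits, using only integers."""
    scale = 10 ** (n + 10)          # 10 guard digits against truncation error

    def arctan_inv(x: int) -> int:  # arctan(1/x) * scale, via the Taylor series
        power = scale // x          # scale / x**(2k+1)
        total = power
        divisor = 3
        while power:
            power //= x * x
            if divisor % 4 == 3:
                total -= power // divisor
            else:
                total += power // divisor
            divisor += 2
        return total

    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi_scaled = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    digits = str(pi_scaled // 10 ** 10)   # drop the guard digits
    return digits[0] + "." + digits[1:]

print(pi_digits(30))  # 3.141592653589793238462643383279
```

For almost all reals no such procedure exists, even in principle: there is no program to hand the precision request to.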
Did you mean “for any precision, we cannot compute to that precision” or “we cannot compute to every precision”?
I might be nitpicking here, but uncomputable != indescribable. For example, Chaitin's constant is uncomputable but perfectly describable (and in fact limit-computable). The set of arithmetically definable numbers is even larger than that of limit-computable numbers. Computable and limit-computable numbers lie at levels Δ^0_1 and Δ^0_2 of the arithmetical hierarchy, respectively.
How would you start? You can't put a rational multiple of pi exactly into a computer. You can't get a rational multiple of pi exactly out of a computer. You can calculate rational numbers on the computer and scale them by a factor of pi outside the computer, if you really want, but that doesn't mean the computer is working with irrationals.
> It's not as though countability lends itself better to programming.
Huh? Yes it does; countable sets are still infinite but they're better-behaved than bigger sets.
Of course we can: "1/2*pi"
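In the spirit of that notation, a hypothetical sketch (the class name and interface here are made up for illustration): if you only ever need rational multiples of pi, you can store the rational coefficient exactly, and addition, scaling, and equality within that one-dimensional space are all exact and decidable:

```python
from fractions import Fraction

class PiMultiple:
    """A number of the form q * pi, stored exactly as the rational q."""
    def __init__(self, q):
        self.q = Fraction(q)
    def __add__(self, other):
        return PiMultiple(self.q + other.q)   # (a*pi) + (b*pi) = (a+b)*pi
    def __eq__(self, other):
        return self.q == other.q              # exact, decidable comparison
    def __repr__(self):
        return f"({self.q})*pi"

half_pi = PiMultiple(Fraction(1, 2))
print(half_pi + half_pi == PiMultiple(1))  # True, with no rounding anywhere
```

Of course this only works because the whole computation stays inside one symbolic family; the moment you mix in numbers outside it, you are back to approximations.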
>You can't get a rational multiple of pi exactly out of a computer.
Using the same limitations you seem to have in mind, we can't get one third exactly either.
>...countable sets are still infinite but they're better-behaved than bigger sets.
Yes, and that is surprising, because we do have numerous clever applications that skirt around most of the intractability of exotic real numbers. That is why I think this paper is a big deal, especially because it points to the cardinality of the real numbers instead of, say, their inclusion of the uncomputable numbers.
We can work with exact rationals; we can do arithmetic on them and normalize them. In particular, we can compute whether two rationals are equal (in finite time), which we can't do with reals.
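For example, with Python's stdlib fractions module:

```python
from fractions import Fraction

a = Fraction(1, 3) + Fraction(1, 6)  # exact arithmetic, auto-normalized to 1/2
b = Fraction(1, 2)
print(a == b)            # True: rational equality is decidable in finite time
print(0.1 + 0.2 == 0.3)  # False: the float stand-ins already disagree
```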
We also often make calculations with finite subsets of rational numbers. We don't work with all of the rational numbers.
> We can only approximate the real numbers with finite precision.
The same can be said for rational numbers. It really matters that we can be arbitrarily precise when we talk about .333333... (one third), 1.41421... (square root of 2) or 3.14159... (pi) because we can be as exact as our calculations demand.
First, yes the rationals are already countable. But the reals are uncountable. You cannot write a function to generate all real numbers - not even if you allowed the function to run for an infinite amount of time.
Second, most of the reals are individually uncomputable, which literally means no, we cannot calculate them to arbitrary precision (or any exact precision).
Likewise we cannot just redefine the set of real numbers to be restricted to the set of computable reals. Analysis rapidly breaks down when you do so and it follows that the real field is no longer closed (by definition, and therefore no longer subject to the field axioms).
I was with you until you wrote that Analysis would break down. We can easily do analysis with dense subsets of the reals and also extensions of the reals, like in Non-Standard Analysis.
Whilst this is true, I feel it should be noted that there exist constructive and computable versions of analysis which are built to work under assumptions like "all the reals are computable".
Of course, the big difference is that we've mostly agreed on a single canonical version of the classical reals and of classical analysis, whilst there are numerous versions of different constructive analyses depending on what exactly you mean by "constructive".
Additionally many theorems of classical analysis "split up" into multiple independent versions which would be equivalent classically, but only some of which are true constructively.
Worse than that, often there is not a single definition of the "reals" any more. For example, two definitions of the reals (Dedekind Cuts and Cauchy Sequences) may result in entirely different objects constructively speaking. And, in fact, it may be that neither one is actually a good way of capturing the concept of a "real" "number" in that particular universe, so you need yet another definition.
As terrible as this might make it sound, it is worth the effort, in my opinion. In trying to model the real world, classical mathematics throws away just a little bit too much information. This messiness is not just the result of wanting to work under some weird axioms, but an actual statement about the messiness of reality. Trying to sweep these issues under the rug is not a sustainable strategy if you eventually want to solve real-world problems.
Using classical mathematics is fine as a first pass. It's a good way of investigating the problem under some simplifying assumptions and getting some good and useful results. But after that is done, it's not always enough to just declare the problem "solved" (or "unsolvable"). You need to consider what the practical version of the results will be, especially if you want to write a program to use them.
I am glad that the world is moving towards more computable and constructive versions of mathematics. It is still early days, and there are still way too many different takes on trying to solve the same mathematical issues, but soon enough we'll converge on a select few "generally acceptable" versions of constructive mathematics (in particular, something based on some form of constructive Type Theory) and we'll be able to reform our mathematical and scientific knowledge on a more stable base.
All the Dedekind cuts and Cauchy sequences do is tell us, in essence: "yes, the axiomatized reals you're using are correct and fine because there are constructive proofs of their existence." Moreover, the point of those proofs (and analysis more generally) is to demonstrate mathematical continuity and uncountability. No matter how you slice this problem, if you construct analysis you will lose computability along the way.
You are correct that under classical assumptions of logic ("normal" mathematics), the Dedekind Cuts and Cauchy Sequences indeed result in the construction of isomorphic objects.
However, under constructive assumptions about the logical axioms of mathematics, such as intuitionism (not accepting the law of excluded middle, i.e., not assuming that the statement "P or NOT P" is necessarily true for all P), the constructions via Dedekind Cuts and via Cauchy Sequences may result in different objects, i.e., it may no longer be provable that they are equivalent.
1. For some x and y, you cannot prove that x <= y or x >= y
2. Every function is continuous
See this submission from two months ago:
And floats are really a whole other beast than fractions or naturals or real numbers. As I wrote earlier - the "+" operator isn't associative. Python snippet:
    def before():
        x = 0.0
        x += 1.0
        for i in range(1000):
            x += 1e-17        # swallowed: 1.0 + 1e-17 == 1.0
        return x

    def after():
        x = 0.0
        for i in range(1000):
            x += 1e-17        # the tiny terms accumulate to ~1e-14 first
        x += 1.0
        return x

    print(before() == 1.0)    # True
    print(after() == 1.0)     # False
Edit: for instance, you could imagine a representation (think object of a virtual class in C++) that gives you a fraction and a 1/2^n error bound, where you can query the number for whatever n you see fit. Still, in some cases it's undecidable to determine which number is greater.
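A minimal sketch of that idea (the class and function names are hypothetical): represent a real as a query function from n to a fraction within 1/2^n of the true value.

```python
from fractions import Fraction

class ComputableReal:
    """Hypothetical sketch: a real is a function approx(n) returning a
    Fraction guaranteed to be within 1/2**n of the true value."""
    def __init__(self, approx):
        self.approx = approx

    def __add__(self, other):
        # Query each operand one bit more precisely, so the two errors
        # (each <= 1/2**(n+1)) still sum to within the 1/2**n budget.
        return ComputableReal(lambda n: self.approx(n + 1) + other.approx(n + 1))

def sqrt2(n: int) -> Fraction:
    """Approximate sqrt(2) to within 1/2**n by binary search."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

x = ComputableReal(sqrt2)
print(float((x + x).approx(30)))  # ~2.8284271..., i.e. 2*sqrt(2)
```

Arithmetic composes nicely; it is equality and ordering that can require unboundedly many queries, which is exactly the undecidability mentioned in the edit.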
Our mathematical tools for quantum mechanics are wave functions, with presumed real-valued quantities. Problem is, we can never feed them precise enough inputs for that to matter, since all of our measurements are interval-valued quantities.
That depends entirely on what one means by "real numbers".
See previous hacker news discussion of a different paper on real numbers, here:
It has many interesting properties. Among those properties is that it contains every number you could ever conclusively identify, and that it is only countably infinite, meaning it is much smaller than the reals. It is useful for many things (and a good thing it is, since it is the most sophisticated set of numbers we actually get to compute with), but it is no substitute for the reals.
That is to say, if machine learning cannot use the real numbers then it cannot solve any problem which depends on any properties of the reals, such as producing a machine-learned function to compute the square root of two.
Now, it may be true (in fact I think it likely) that the cardinality of the reals is a somewhat different property than their denseness, enough so that machine learning can approximate the denseness but not the cardinality; but the reason for this doesn't seem to be simply "machine learning doesn't use the real numbers".
When you do calculus on the reals, you only ever approximate or encode a finite (really, countable) subset of them.
(I find the actual research paper easier to understand than the news coverage.)
"consider a Turing machine Z that enumerates, one after the other, each of the provable statements in ZFC. To describe how such a machine might be constructed, Z could iterate over the axioms and inference rules of ZFC, applying each in every possible way to each conclusion or pair of conclusions that had been reached so far. We might ask Z to halt if it ever reaches a contradiction; in other words, Z will halt if and only if it finds a proof of 0 = 1. Because this machine will enumerate every provable statement in ZFC, it will run forever if and only if ZFC is consistent. It follows that Z is a Turing machine for which the question of its behavior (whether or not it halts when run indefinitely) is equivalent to the consistency of ZFC. Therefore, just as ZFC cannot prove its own consistency (assuming ZFC is consistent), ZFC also cannot prove that Z will run forever."
This argument works for every sufficiently strong axiomatic system: there will be a Turing machine whose halting problem cannot be settled in it. If there weren't, you'd be able to settle whether the consistency-checking machine halts; proving that it runs forever would prove the system consistent, which is impossible per Gödel's incompleteness theorem.
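The quoted construction can be sketched in a few lines, assuming (and this is the whole unimplementable part) a generator that yields every ZFC-provable statement:

```python
def consistency_checker(provable_statements):
    """Sketch of the machine Z from the quote. `provable_statements` is an
    assumed generator yielding every statement provable in ZFC, produced by
    exhaustively applying its axioms and inference rules. Z halts exactly
    when it finds a proof of a contradiction."""
    for statement in provable_statements:
        if statement == "0 = 1":
            return "inconsistent"
    # With a genuine ZFC enumerator this loop never exits on its own:
    # if ZFC is consistent, Z runs forever, and ZFC cannot prove that it does.

# Toy demo with a finite, made-up enumeration:
print(consistency_checker(iter(["1 = 1", "2 > 1"])))  # None: no contradiction found
```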
The shocking conclusion of Gödel's incompleteness theorem is that the problem that our logical foundations are incomplete is inherently unfixable; any extended system of axioms will have new statements which are true but which it cannot prove.
(That said, there are certain sub-areas of mathematics where we can prove that every true result in one of these sub-areas is actually provable from a certain system of axioms adapted to the given sub-area. But there is no way around Gödel's incompleteness phenomenon if one wants to encompass all of mathematics (and even arithmetic with natural numbers suffices to yield the incompleteness phenomenon).)
Incompleteness is a property of every sufficiently complex formal system and thus poses a general constraint in logic.
That a particular learning problem is not provable in ZFC is not that surprising. Connections between learning and incompleteness are far more interesting (and there is a lot of pseudo-research going on, "proving" that humans are not simulatable by computers, etc.).
It's not an earth-shattering paper, it's a technical curiosity.
The OP is a light article about a light paper, promoting a new lightweight journal that launched this week.
I think the argument should be the other way around.
The halting problem doesn't say "given any Turing machine, no algorithm can determine whether it halts". It says "no algorithm can determine whether any given Turing machine halts".
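In code terms (a toy sketch; `step` is an assumed one-step transition function): for any single concrete machine you can always run it with a step budget and sometimes get a definite answer. What's impossible is a function that returns a correct yes/no verdict for every machine:

```python
def halts_within(step, state, max_steps):
    """Simulate one concrete machine for up to max_steps steps. `step`
    returns the next state, or None once the machine has halted."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:
            return True    # observed a halt: a definite answer
    return None            # budget exhausted: no verdict either way

countdown = lambda n: n - 1 if n > 0 else None   # halts for any start value
print(halts_within(countdown, 3, 100))        # True
print(halts_within(lambda n: n + 1, 0, 100))  # None: no verdict
```

Undecidability says the `None` branch can never be eliminated in general, not that individual machines are beyond analysis.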
In the Generative Adversarial Network (GAN) framework, this would be the One Weird Trick the generator could pull that would always foil the discriminator.
This paper is interesting because it connects learnability to a different kind of undecidability than the Halting Problem: the Continuum Hypothesis. In some ways it's more troubling than the Halting Problem, because at least we can usually figure out whether some actual program halts, using some cleverness, but no one can begin to say whether the Continuum Hypothesis is true or false.
The Continuum Hypothesis is more like P!=NP, in that most mathematicians suspect the CH is true (that is, there's no cardinality in between the integers and the reals). That's good news, because it implies the EMX learning model will work:
> Their results imply that the finite subsets of the unit interval have monotone-compression schemes, and therefore are learnable in EMX, if and only if the continuum hypothesis is true, which is known to be unprovable.
If someone comes along and casts doubt on the Continuum Hypothesis through some mystical but persuasive line of reasoning (news flash: not gonna happen), then there might be some EMX functions that can't be learned, but no one is going to lose sleep over that. There are lots of more immediate reasons why machine learning fails in practice, like not having good training data.
> The Continuum Hypothesis is more like P!=NP, in that most mathematicians suspect the CH is true (that is, there's no cardinality in between the integers and the reals).
I don't think this is correct. It's very well-accepted that CH is independent of ZFC. Now, you might mean that most mathematicians suspect that adding CH to ZFC is more intuitive/natural/useful than adding !CH to ZFC, but that's a quite different statement. I don't know much about this, but it seems plausible that both extensions of ZFC could be useful for different things, in the same way that using the parallel postulate is useful for some things (Euclidean geometry) and using alternative parallel postulates is useful for other things (non-Euclidean geometries).
The P=NP problem, AFAIK, isn't yet proven to be independent of ZFC, although I have heard some seemingly educated people suggest their suspicions that it may be.
And now you have me wondering if you win the Millennium Prize if you prove that P=NP is independent of ZFC. Surely you would.
Yes, the P=NP problem is not yet proven to be independent of ZFC. It's certainly a possibility, and to be honest a quite intriguing and fun one, but we shouldn't make the mistake to think that any hard open problem is independent of ZFC. (I know that you didn't state that, I'm just writing this for the record.) Sometimes hard open problems are simply hard open problems.
I guess I'm a constructivist about this.
You can't just wave your hands and take !CH as an axiom. Remember what that means: you reject that there is no set with cardinality between the integers and the reals, so you're implying there is such a set. I realize not all mathematicians share this point of view, but if you can't somehow construct the intermediate set you're claiming exists, or show that it can be constructed, then you don't really have any grounds for taking !CH to be true. Without a positive reason for believing such a set exists, I have a hard time pretending that !CH could possibly be a "useful" (your word) axiom to add to ZFC.
By contrast, imagining that CH might be true is easy. You don't have to construct any never-before-seen mathematical objects to make your point. You just presume, without fear of being contradicted by ZFC, that the integers are countably infinite, and the reals are uncountably infinite but only sort of "minimally" so (there are higher orders of uncountable infinity, but none lower). You may never be able to prove CH (certainly not with ZFC, and almost certainly not with any other logical system), but at least the model makes sense: there's no cardinality in between that of the integers and the reals.
Which is not to say that CH is particularly useful! In fact, maybe the most interesting thing about the paper is the idea that the CH could be meaningfully connected to anything in an applied form of math like machine learning. Does that weird connection between CH and ML have any practical consequences for ML in general, or the EMX learning algorithm in particular? Probably not, especially since assuming CH is true just implies EMX should work, so it's fine to keep using it to solve learning problems.
In other words, if you wanted to convince an ML practitioner that their learning algorithms might be unreliable because the Continuum Hypothesis might be false, I think they would be within their rights to request you show them an actual set with cardinality between the integers and the reals. When you couldn't, they would then be within their rights to ignore your objection.
You say you do math and not philosophy, but the whole CH independence proof kinda moves this discussion out of the realm of (ZFC-based) math, doesn't it? If you don't want to consider philosophy here, there's not much more to say.
Even Grothendieck's EGA has a couple of arguments that cannot be carried out in ZFC in the generality he presents them in, but I wouldn't call EGA a philosophy treatise nor an algebraic geometer a philosopher
In their example they are expressing it in set theory and then operating on it...
I probably failed at explaining on a simple level...sorry
The assumption that all learning algorithms can be implemented on a Turing machine cannot hold true, since there are mathematical problems that cannot be (take for example https://en.m.wikipedia.org/wiki/Hilbert%27s_tenth_problem). Unless you categorise machine learning problems as problems that can be implemented on a Turing machine, making the statement a tautology.
Edit: thinking about my comment, maybe I am the one being tautological... sorry. If there were a learning algorithm that could solve Hilbert's problem, it would transcend mathematics.
Turing machines are just a formalization of the concept of algorithms/computer programs.
So problems that can't be implemented as a Turing machine are not "learning algorithms", or any kind of algorithm.
A Turing Machine represents the most complete computational toolkit we happen to know of--not the most complete computational toolkit possible. If we discovered a new computational trick tomorrow--one that a Turing Machine can't perform (the ability to solve the halting problem for Turing Machines, for instance), then we would have discovered another--higher--class of computers, and it would be a reasonable linguistic move to refer to their programs as "algorithms" as well.
That said, the models of computation bifurcate when studying the question which higher-order functions (functions which input functions as arguments) are computable. Such a function can be easily computable according to one model and not computable at all according to another. I'm writing this because far too often the unqualified statement "all the usual models of computation are equivalent" is made. This is emphatically only true for first-order computation, not for higher-order computation.
The Church-Turing thesis can't be proven, since it is a statement about what is possible in the physical universe, not a statement about mathematics. But the intuitive arguments in favor of it (i.e., that Turing machines capture exactly what we mean by "computation") are, for me, extremely convincing.
...but whether or not it's Turing's, it's still a valid position to take. It just strikes me as a trope of human history to identify the limits with our limits. Kind of like how we once thought we were the center of the universe.
Yeah, it seems unlikely that we'll ever watch a neural network perform an action that would yield an unprovable result, but given that we can't know either way I think it's more fun to think that it's at least possible.
* http://www.cse.uconn.edu/~dqg/papers/strong-cct.pdf The Interactive Nature of Computing: Refuting the Strong Church-Turing Thesis (Published version here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455...)
See also a previous hacker news discussion of the Church-Turing thesis here:
although I take issue with the supposed "proof" by counterexample via Compass & Straightedge construction given in the video there, due to results like these two:
* https://link.springer.com/article/10.1007/s10472-018-9603-0 Kellison, Ariel; Bickford, Mark; Constable, Robert L. - Implementing Euclid’s straightedge and compass constructions in type theory [September 2018] (Code here: http://www.nuprl.org/LibrarySnapshots/Published/Version2/Mat...)
* https://geocoq.github.io/GeoCoq/ (Numerous papers by Michael Beeson et al., just check the page, I won't link them all here)
(Note that the authors of the two above works take very different yet similar approaches to the whole matter, it really pays off to compare both to each other.)
However, there seem to exist at least two different spanners in the works:
* https://link.springer.com/article/10.1007/s10472-018-9610-1 Makowsky, Johann A. - Can one design a geometry engine? On the (un)decidability of certain affine Euclidean geometries
* https://link.springer.com/chapter/10.1007/978-3-662-57669-4_... Makowsky, Johann A. - The Undecidability of Orthogonal and Origami Geometries
(If the first of the two above papers makes you wonder "well, who DIDN'T overlook Ziegler's theorem?"... well: https://scholar.google.com/scholar?cites=2279217158168053275)
Oh, and in case anyone here still believes Hilbert's old wives' tale about /his/ Axiom of Choice, there's plenty of AC to go around in constructive mathematics:
But if that doesn't convince you yet, you might go through the 5 stages of accepting it anyway, as outlined by Andrej Bauer here:
* Here: https://www.youtube.com/watch?v=21qPOReu4FI
* And here: https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016...
And here's one last paper to almost conclude what turned into a bit of a larger-than-I-expected Constructivism propaganda piece comment:
http://t-news.cn/Floc2018/FLoC2018-pages/preprint_cX1w.pdf "On Expanding Standard Notions of Constructivity"
And to now conclude this, a semi-joke:
Wouldn't the fact that one can't prove the Church-Turing thesis, but could falsify it by counterexample, put propagation of belief in the meme of the Church-Turing thesis into either the complexity class of RE, or co-RE? ;)
If we can prove that the "decision problem" of who stays in prison and who does not is undecidable, invariant of the specific implementations of the ML algorithm, this could make a case for stopping such overreaches of ML.
I don't think that's the correct conclusion. More likely, all learning models have unprovable problems so the choice of model is not important wrt the existence of unprovable problems.
Really, this is not a surprising result. All sufficiently complex mathematical systems have unprovable statements. ML is a mathematical system and no exception.
I do get your point. The outcome of these equations is not always as expected, because the real-world data has not been prepped and cleaned.
HOWEVER, I would argue this has nothing to do with ML at all, and only to do with the input data.
If I have a complex equation: y=x+1
And if I put in x=1 I expect y=2. If I put in x=11 and expect y=2, then the equation "produced unexpected results". This is not the fault of the equation; it's the fault of the input.
Example: All the really clever math you use to make an encryption algorithm is all 100% correct. Then all the really clever math you use to show that it would take the heat death of the universe to crack your clever encryption is 100% correct. The user uses 'password' as the key; How does your crypto stand up to a brute force? Is that your algorithms fault? Did your difficulty proof lie to you?
I know key length is well understood. But in terms of how algorithmically "valid" real-world data can otherwise torpedo entire complex systems, it's as good an example as any.
I'm not sure I exactly agree with this premise. If you read about the principles of chaos engineering, (https://principlesofchaos.org/) it's possible to simulate real world events in testing. And if there's a rigorous mathematical backbone to ML as there clearly is, some determinations about its limitations should be universal for all cases, even if the emergent results in production are unpredictable and could range over intractably many possible outcomes.
Right now ML and AI are like airships in the 1920s: if and when something goes wrong and lots of people die (or are blinded), the community isn't even in a position to properly investigate what happened. Before we get to focusing on the equivalent of hydrodynamics, we need to move to an organisation and practice of engineering discipline - that's what the aircraft people did, and that's why the windows in jetliners aren't square, and that's why you can fly off on holiday.
If AI and ML don't do this and instead everyone spends their hours and days doing maths that isn't absolutely at the core of the real issues of application then watch as confidence and trust evaporates and be ready to wait 20 years to see any value arise.
But - maths that achieves results like those in compiler design and optimisation, I'll buy that for $1!
These statements are false. It sounds like you're extrapolating what you've read in a few blog posts and assuming that's how the entire industry operates. You don't read headlines about people digging through data and error logs on a daily basis because it's not headline-worthy, but that doesn't mean it's not being done.
In other words, "Machine Learning" here is a buzzword, without which the paper would probably get less attention.
Hey guys, here's an algorithm.
Oh, by the way, we cannot prove that this algorithm does anything useful, and neither can you.
Gets published in Nature.
Talk about bang for the buck.
These hastily drawn pop-sciency conclusions are part of the reason why ML/AI has garnered so much media attention these days ...