
This is correct, and I think it's mentioned in the article. I think this is probably what most people would think of too, rather than the heap-based solution. Using the heap solution also requires you to know how to use a heap in whatever language you're interviewing in, or you could implement one (which would probably be a lot more effort), so I prefer this one.

EDIT: Ah, I didn't read your comment carefully enough. I thought you were referring to a divide and conquer approach when you meant 'like merge sort': split the list of lists in half for each recursion, then recombine the sorted halves using a standard merge sort list merge.
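In Python, that divide-and-conquer version might look roughly like this (just a sketch; the names merge_two and merge_k_divide are mine, not from the article):

    def merge_two(a, b):
        # standard merge-sort style merge of two sorted lists
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        out.extend(a[i:])
        out.extend(b[j:])
        return out

    def merge_k_divide(lists):
        # split the list of lists in half, recurse on each half,
        # then merge the two sorted results
        if not lists:
            return []
        if len(lists) == 1:
            return lists[0]
        mid = len(lists) // 2
        return merge_two(merge_k_divide(lists[:mid]), merge_k_divide(lists[mid:]))

With k lists and n total elements, each element takes part in O(log k) merges, so this is O(n log k) overall, the same as the heap approach.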

What you describe is basically the same as the solution given by the author. However, you don't give a way to find the minimum pointer; in the author's case, a heap is used. Using a more naive data structure/algorithm (e.g. just scanning all k pointers for the minimum every time) gives worse asymptotic complexity: O(nk) for n total elements, versus O(n log k) when a heap maintains the least element.
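For reference, a heap-based version in Python might look like this (again just a sketch using heapq; merge_k_heap is my own name, not necessarily how the author wrote it):

    import heapq

    def merge_k_heap(lists):
        # seed the heap with the first element of each non-empty list;
        # (value, list index, element index) tuples keep ties comparable
        heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
        heapq.heapify(heap)
        out = []
        while heap:
            val, i, j = heapq.heappop(heap)
            out.append(val)
            if j + 1 < len(lists[i]):
                # advance the pointer into the list we just popped from
                heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
        return out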


There are two cases. BQP contains something BPP doesn't: then the thesis is false as long as we can realize a quantum computer, since there would exist something solvable in polynomial time (with bounded error) on a machine in the universe which cannot be solved as efficiently on a classical Turing machine.

BQP is a subset of BPP (equal or ~~strict~~): then the status of the thesis is unknown. There is still some chance that it is false, either by a superpolynomial speedup somewhere higher up the hierarchy (not too sure about this one, my complexity theory is not sharp enough to say whether this is even possible given BQP is a subset of BPP), or because there exists another realizable model of computation other than quantum computing that beats the randomized Turing machine.

EDIT: It appears that BPP is (trivially) a subset of BQP, so strict inclusion of BQP in BPP is impossible, though BQP = BPP is possible.


While it is true that computability should remain intact, this comment seems to misunderstand what the STRONG Church-Turing thesis purports (EDIT: at least the one Google is probably referring to). I'm not sure if it's completely agreed upon, but the strong Church-Turing thesis that I am familiar with is: "Every realistic model of computation is polynomial time reducible to probabilistic Turing machines" (https://arbital.com/p/strong_Church_Turing_thesis/), or equivalently, "A probabilistic Turing machine can efficiently simulate any realistic model of computation" (An Introduction to Quantum Computing, Phillip Kaye, Raymond Laflamme, Michele Mosca).

In other words, the efficiency of every realized model of computation needs to be similar to that of a Turing machine for the thesis to hold. As soon as machines capable of quantum computation are realized, the thesis is broken, as they give superpolynomial speedup over classical Turing machines (assuming things like integer factorization are not in P).
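For concreteness (figures from memory, so treat them as approximate): Shor's algorithm factors an n-bit integer with roughly O(n^3) quantum operations, whereas the best known classical algorithm, the general number field sieve, runs in time around exp(O(n^(1/3) * (log n)^(2/3))), which is superpolynomial in n.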

EDIT: I guess it should be noted that there are probably other 'strong Church-Turing theses' hanging around, but I'm fairly certain the folks at Google are referring to this one, which I too am most familiar with. If OP is referring to one which does not require similar efficiency, then the quotation does seem kind of ridiculous. I also agree that the part of the quote that says "current computers cannot replicate" needs to be interpreted with some notion of efficiency as well.


From watching a couple of Scott Aaronson videos, I was under the impression that Grover's algorithm shows that you don't get an exponential speedup over classical Turing machines from quantum computation in general, but only for specific problems that exhibit structure, such as integer factorization à la Shor's algorithm. Here's a quotation from Lecture 10 of his Quantum Computing series [1]:

"So the bottom line is that, for "generic" or "unstructured" search problems, quantum computers can give some speedup over classical computers -- specifically, a quadratic speedup -- but nothing like the exponential speedup of Shor's factoring algorithm."

[1] https://www.scottaaronson.com/democritus/lec10.html
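(Concretely, if I remember the numbers right: Grover's algorithm finds a marked item among N possibilities with about (pi/4)*sqrt(N) queries, while a classical unstructured search needs about N/2 queries on average; a quadratic gap, not an exponential one.)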


Right. But even a single problem that gives an exponential speedup would break the thesis.


According to Wikipedia:

"the [Church-Turing] thesis has several possible meanings:

1) The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the Strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics."


What is meant by "non-recursive functions" here?

From my knowledge a non-recursive function would be easier than a recursive one, since a non-recursive one will not call itself while a recursive one can. But the quoted phrase seems to imply it must be something more difficult instead...

Problem is, Wikipedia's page https://en.wikipedia.org/wiki/Non-recursive_function does not define it either; it just redirects you to the recursion page (though not recursively, so Wikipedia did not go for the joke), which does not define it either.


Non-recursive in this context means functions that aren't computable by the recursive functions of computability theory [1], which are exactly the functions a Turing machine can compute; it has nothing to do with whether a function calls itself. It's the second & third meaning in your link.

A “non-recursive function” would therefore be more powerful than that; that is, it would be a function which cannot be computed using a Turing machine. These are also known as super-recursive functions [2].

[1] https://en.wikipedia.org/wiki/Computable_function

[2] https://en.wikipedia.org/wiki/Super-recursive_algorithm


Thanks for the explanation! Neither of the two linked articles mentions the term "non-recursive", by the way, so it's thanks to your response that I know it's a synonym of super-recursive, not thanks to Wikipedia.

