Quantum A.I. Publications (research.google.com)
161 points by sonabinu on Nov 27, 2017 | 79 comments

Does anyone know a good resource about quantum computing for quantum physics beginners? I've tried to understand the concepts from Wikipedia, but it's a bit too abstract and I keep on getting lost down the rabbit hole reading up about the QP terms being used.

All I understand so far is that apparently, thanks to many independently controllable quantum states, some NP problems can be solved in polynomial time. If I'm not mistaken, this would require infinitely many possible different quantum states, but I've never seen this stated (or contradicted) explicitly anywhere.

Nice book explaining QC at a high-school level: Terry Rudolph, "Q is for Quantum".

Here is a very modern approach, that is also understandable without much mathematics: "Bob Coecke: From quantum processes to cognition via pictures" https://www.youtube.com/watch?v=h84CtK33Q8s

Do you already understand the basics? Feynman is nice as well; I like how he rambles. Try diving in here and backtracking as needed: http://www.feynmanlectures.caltech.edu/III_05.html Don't forget to read it with a thick Brooklyn (Long Island) accent.

Yes, go to MIT OpenCourseWare and take the free recorded classes for AI and computer science.

David Mermin has written a very good introduction specifically targeted towards computer scientists and mathematicians looking to learn and contribute to this field. His book "Quantum Computer Science" is really great, and I highly recommend reading it if you're interested in learning more. Basic linear algebra is pretty much all that's required, although familiarity with mathematical notation definitely helps.

A good place to start might be this paper by Mermin, which defines classical computing bits (Cbits) in a specific mathematical framework and then expands that definition to define Qbits and their states: https://arxiv.org/pdf/quant-ph/0207118.pdf
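For a taste of that framework, here is a minimal numpy sketch (my own naming, not code from the paper) of Cbits as standard basis vectors and a Qbit as a normalized superposition:

```python
import numpy as np

# Cbits as standard basis vectors, in the spirit of Mermin's notation:
# |0> = (1, 0)^T and |1> = (0, 1)^T.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A Qbit generalizes this to any normalized combination a|0> + b|1>
# with |a|^2 + |b|^2 = 1.
qbit = (ket0 + ket1) / np.sqrt(2)  # equal superposition

# Multi-bit states are tensor (Kronecker) products: |10> = |1> (x) |0>.
ket10 = np.kron(ket1, ket0)

print(np.isclose(np.sum(np.abs(qbit) ** 2), 1.0))  # True: state is normalized
print(ket10)  # [0. 0. 1. 0.] -- the third basis vector of the 2-Qbit space
```

The point of starting with Cbits is that classical states are exactly the basis vectors of this space; quantum states are everything else on the unit sphere.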

I'm not sure if the materials on edX are still available, but the Quantum Mechanics and Quantum Computation course [1] there was mind-blowing. I was surprised by how simple the ideas and the math turned out to be.

[1]: https://www.edx.org/course/quantum-mechanics-quantum-computa...

At least the videos can be found on YouTube: https://www.youtube.com/watch?v=bT5rFIZZeKI&list=PL2jykFOD1A...

Maybe the notes of his course at Berkeley can be used to supplement the videos? http://people.eecs.berkeley.edu/~vazirani/f16quantum.html

Avi Wigderson's book "Mathematics and Computation" (links to free copy here https://www.math.ias.edu/avi/book) has a nice chapter on Quantum Computing and might be a good intro if you have basic appreciation of complexity classes. He keeps the proofs to a minimum.

Wow "Quantum Physics for Dummies" is an actual book lol:


RANT: As a researcher in quantum optics and spectroscopy, holy shit am I tired of reading through these comment threads where it's clear that nobody understands the topic, yet people still make assertive, nice-sounding statements that are bound to get up-voted by other people who also don't understand the topic. ~ugh~

Ok, let's get down to it. Are you prominent or especially knowledgeable in the specific area discussed? The first irony to avoid here is that you might not be an expert, because it's possible to do advanced work in your area that's quite removed from this. That's true for all research; it's endlessly splintered into subfields. I forget who is commonly said to be the last person to know most of all science; was it Newton?

Secondly, even when people are wrong here, they don't seem any more intransigent or arrogant than average. The point is, I would assume they are acting in good faith; however, I can't be sure, because you criticize people without specific citations. You use citations in your papers, I assume?

Let me change course and try to make this as productive as possible by looking at your comments in the most positive light. You are eminent in the field and somehow found yourself reading terribly uninformed, assertively stated comments.

What a lost opportunity to correct and inform. What you could say in only a matter of minutes could seem invaluable to those passionately interested in the subject. I'm sure it would be appreciated, would raise the quality of discourse, and might help spread a more informed and accurate lay perspective of the field.

What would Susskind do? If he were here reading comments on a black hole article (for a reason I can’t imagine)? I don’t know him personally but whenever I listen to him lecture I’m always impressed by how willing he seems to share his genius with people interested in learning.

I'm seconding the point about educating hn readers. If you're an expert in the area please correct any misconceptions you see. It might feel like a losing battle, but it's not.

HN readers aren't sheep. We're more than capable of skimming past light-on-fact comments to get to the good ones.

It's only when HN discusses a topic in which you are an expert that you realise 90% of the comments are absolute nonsense. It happens every time there is a biology topic as well.

It happens with core computer science topics as well. The misconceptions around Big O notation, for example, are astonishing. O(f(n)) is the set of functions that asymptotically grow at most as fast as f(n). It's not limited to measuring running time in the RAM model, nor does it somehow capture the variable n, or anything like that. You can use it to measure memory use, or communication volume, or anything else you like. It can be defined in multiple variables, like number of nodes and edges, or input size and approximation factor, or whatever else you might need. And all of n², n²+n log(n), √n log(n), 42, and 1/n are in O(n²), because O isn't Θ, ffs. </rant>
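To make the last claim concrete, here is a quick numeric sanity check. The constants C = 2 and n₀ = 10 are just one witness pair I picked; O-membership is an asymptotic statement, so scanning a finite range is only illustrative, not a proof:

```python
import math

# Each f below is in O(n^2): f(n) <= C * n^2 for some constant C and
# all n >= n0. Here C = 2 and n0 = 10 work for every example from the
# comment above.
candidates = {
    "n^2":           lambda n: n**2,
    "n^2 + n log n": lambda n: n**2 + n * math.log(n),
    "sqrt(n) log n": lambda n: math.sqrt(n) * math.log(n),
    "42":            lambda n: 42,
    "1/n":           lambda n: 1 / n,
}

C, n0 = 2, 10
for name, f in candidates.items():
    assert all(f(n) <= C * n**2 for n in range(n0, 10_000)), name
print("all five are bounded by 2*n^2 for n in [10, 10000)")
```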

Same with economics; when someone says in the comments for a rant piece that homo economicus is a core concept of all modern economics because that's what they learned in that one 101 class they took... Grr...

It must be quite fun for you to read so many armchair financiers here reconstructing modern economics from first principles on the one hand, while throwing in gems like “economics is not a science” because “not all participants are rational actors as usually assumed” on the other :)

That's very apt. Except it stops being fun pretty quickly.

The Murray Gell-Mann Amnesia Effect

That's an excellent talk from Michael Crichton [0]. Plus, this was another rather relevant piece for some of the comments too:

> Often, the article is so wrong it actually presents the story backward, reversing cause and effect. I call these the "wet streets cause rain" stories.

[0]: https://web.archive.org/web/20141025035056/http://www.thegre...

> It happens every time there is a biology topic as well.

Let’s add cryptography and quantum complexity theory while we’re at it :)

Would that be a sign that people who understand these concepts should participate more and debunk what's wrong?

I'm sure we can all do with a bit of education here.

Few researchers have that kind of time.

But I suppose they have the time to make disparaging remarks about everyone else’s understanding, without any overtures at correcting it?

Edit: This seems to be an unpopular comment already, so let me sidestep potential criticism with the following clarification: I work in cryptography, both professionally and academically, and I make every attempt to help improve understanding of that discipline here on HN. My comment here is directed at the current top comment in the thread. An open rant about some group’s poor understanding of a highly technical area to that group does not improve their understanding, and it will just isolate everyone who reads the comment who doesn’t have a firm grasp of the field (which by the nature of the rant will imply most people reading it).

The feeling of exasperation is valid, but share it with colleagues, or augment it with some attempt at guidance. Otherwise, what are you ranting about except to yell at something that isn’t going to change, while everyone must sit around and be witness to your condescension?

It would be nice if HN users could claim expertise and its level. When people have relevant expertise, their comments would come first.

for example:

    field: physics
    specialty: quantum optics, spectroscopy
    level: researcher, PhD

They could get upvoted more by including credentials when talking about complex issues. I think that's how it works on Quora.

But there is -- though not always -- more of a culture of anonymity here.

Also as someone who has done research (quantum chemistry), which comments specifically are you referring to? The only ones I see making incorrect statements are those that are already expressing their uncertainty and asking for further clarification.

Can you be more specific and educate us?

Yes, please. It doesn't look like there are many comments on this particular thread that are completely off.

Also, CS people are probably just as good talking about the CS aspect of quantum computing as quantum physicists are regarding the quantum aspect of it. no?

> Also, CS people are probably just as good talking about the CS aspect of quantum computing as quantum physicists are regarding the quantum aspect of it. no?

No, but that seems like an unfair comparison. Quantum physicists are by definition highly trained scientists; CS people range from teenagers hacking on WordPress, to well-trained engineers without much science background, to highly trained scientists.

Or put another way, that depends on what you mean by "CS People".

The top end of your range of computer science people is maxed out at engineers without much science background?

Does the name Turing ring a bell? Just my opinion, but I consider him one of the greatest minds of his century. He worked on a lot of things, but some would say he fits pretty well in CS. And there are more like him who just didn't happen to become famous to the general public. I could give a dozen more examples of people's contributions, profound insights, and fundamentally important basic research results that might make you extend at least the top of your range a bit.

No. Reread my comment. It hasn't been edited, and it explicitly mentioned highly trained scientists: the last three words of the first paragraph. Your characterization of my comment is just a plain and obvious misreading.

Moreover, as a scientist, I don't think highly trained scientists are "higher" than highly trained engineers. They're different and not well-ordered. So your "maxes out at" verbiage is not agreeable to me.

Wow, I must admit I somehow did not see the last fragment of that sentence. Reading it again, it appears those were, unfortunately, exactly the few words someone would want to miss in order to earn a promotion from merely mistaken to 180 degrees wrong, with special recognition for offensive qualities.

I apologize with no reservations. I try to read precisely before responding, but clearly did not take care to do so in this case.

Ironically, after reading and thinking more, we may actually have similar perspectives on the subject. I'm curious what you mean by "not well-ordered"? However, if you don't feel like replying, no problem; I would understand that.

In any case, take care. Sorry for the misunderstanding.

> I’m curious what you mean by “not well-ordered”?

Just that researchers aren't better/smarter/etc. than engineers and vice versa. They are different jobs, both essential in their own way, both difficult in their own way.

> Sorry for the misunderstanding.

No worries! :)

I'm just an idiot and the password is really swordfish2

Wow, it really is.

Same thing happens with deep learning and AI threads.

Oh well, how timely. I was talking about quantum optics yesterday, and now I'm curious about textbook references; any suggestions? Even fundamentals, not just up-to-date QC research.

I would rather they just stick with arXiv and then, when they have something actually working practically, post how it actually works. Sort of how Apple does it in their machine learning journal.

"Soon we hope to falsify the strong Church-Turing thesis: we will perform computations which current computers cannot replicate."

Yeah, no. This is a ridiculous statement and it is embarrassing to hear it from Google. Quantum computers are not hyper-Turing machines, and there is no reason to suspect they ever will be.

While it is true that computability should remain intact, this comment seems to misunderstand what the strong Church-Turing thesis purports (EDIT: at least the one Google is probably referring to). I'm not sure if it's completely agreed upon, but the strong Church-Turing thesis that I am familiar with is: "Every realistic model of computation is polynomial time reducible to probabilistic Turing machines" (https://arbital.com/p/strong_Church_Turing_thesis/), or equivalently, "A probabilistic Turing machine can efficiently simulate any realistic model of computation" (An Introduction to Quantum Computing; Phillip Kaye, Raymond Laflamme, Michele Mosca).

In other words, the efficiency of every realized model of computation needs to be similar to that of a Turing machine for the thesis to hold. As soon as machines capable of quantum computation are realized, the thesis is broken, as they give superpolynomial speedup over classical Turing machines (assuming things like integer factorization are not in P).

EDIT: I guess it should be noted that there are probably other 'strong Church-Turing theses' hanging around, but I'm fairly certain the folks at Google are referring to this one, which I too am most familiar with. If OP is referring to one which does not require similar efficiency, then the quotation does seem kind of ridiculous. I also agree that the part of the quote that says "current computers cannot replicate" needs to be interpreted with some notion of efficiency as well.

From watching a couple of Scott Aaronson videos, I was under the understanding that Grover's algorithm shows that you don't get an exponential speedup over classical Turing machines from quantum computation in general, but only for specific problems that exhibit structure such as integer factorization à la Shor's algorithm. Here's a quotation from Lecture 10 of his Quantum Computing series [1]:

"So the bottom line is that, for "generic" or "unstructured" search problems, quantum computers can give some speedup over classical computers -- specifically, a quadratic speedup -- but nothing like the exponential speedup of Shor's factoring algorithm."

[1] https://www.scottaaronson.com/democritus/lec10.html
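For anyone who wants to see that quadratic speedup concretely, here is a small state-vector simulation of Grover's algorithm (a toy sketch of mine; the register size and marked index are arbitrary choices):

```python
import numpy as np

# Grover search over N = 2^3 = 8 items for one "marked" index.
# Roughly pi/4 * sqrt(N) iterations are optimal -- the quadratic
# speedup the lecture describes (a classical search needs ~N/2 probes).
N, marked = 8, 5
state = np.full(N, 1 / np.sqrt(N))            # uniform superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # = 2 for N = 8
for _ in range(iterations):
    state[marked] *= -1                       # oracle: flip marked amplitude
    state = 2 * state.mean() - state          # diffusion: invert about the mean

prob = state[marked] ** 2
print(f"P(marked) after {iterations} Grover iterations: {prob:.3f}")  # ~0.945
```

With only 2 queries the marked item is found with ~95% probability, versus a 2/8 = 25% chance for 2 classical random probes.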

Right. But even having one individual problem that gives an exponential speedup will break the thesis.

According to Wikipedia:

"the [Church-Turing] thesis has several possible meanings:

1) The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the Strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics."

What is meant by "non-recursive functions" here?

From my knowledge a non-recursive function would be easier than a recursive one, since a non-recursive one will not call itself while a recursive one can. But the quoted phrase seems to imply it must be something more difficult instead...

Problem is, Wikipedia's page https://en.wikipedia.org/wiki/Non-recursive_function does not define it either; it just redirects you to the recursion page (though not recursively, so Wikipedia did not go for the joke), which does not define it either.

"Recursive" in this context refers to the recursive functions of computability theory [1], which are exactly the functions a Turing machine can compute; it's the second and third meaning in your link.

A "non-recursive function" would therefore be more powerful than that: conversely, it is a function which cannot be computed using a Turing machine. Alternatively, these are also known as super-recursive functions [2].

[1] https://en.wikipedia.org/wiki/Computable_function [2] https://en.wikipedia.org/wiki/Super-recursive_algorithm

Thanks for the explanation! Neither of the two linked articles mention the term "non-recursive" by the way, so it's thanks to your response I know it's a synonym of super-recursive, not thanks to wikipedia.
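The terminology really is unfortunate: "recursive" in computability theory has nothing to do with a function calling itself. A trivial sketch of the distinction (both definitions below compute the same total, computable function, i.e. "recursive" in the logic sense, whether or not the code self-calls):

```python
# Programming-language sense of "recursive": the function calls itself.
def fact_recursive(n: int) -> int:
    return 1 if n == 0 else n * fact_recursive(n - 1)

# Same mathematical function, no self-call anywhere.
def fact_iterative(n: int) -> int:
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Both are Turing-computable, hence "recursive" in the computability
# sense; a "non-recursive function" in that sense (e.g. the halting
# function) cannot be written in Python at all.
assert fact_recursive(10) == fact_iterative(10) == 3628800
```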

I too was skeptical. Is there any computation that a quantum computer can perform which cannot be simulated by a classical computer with an RNG?

No; just that simulating some problems classically is believed to incur a superpolynomial slowdown.

Correct me if I'm wrong, but isn't BQP ?= BPP considered a separate problem from the strong Church-Turing thesis?

In this case the "strong Church-Turing thesis" (usually also known as the "extended Church-Turing thesis"[0]) is usually written as the following:

> A probabilistic TM can efficiently[1] simulate any realistic model of computation.

Taken from [0].

Note that this statement is actually false if two things are true: (a) BPP ≠ BQP (this is unknown, at the moment; we don't even know if BPP ≠ P, but it's not difficult to show that P ⊆ BPP ⊆ BQP) and (b) that BQP is physically realizable (e.g. we can actually make physical quantum computers)---both statements are, overall, what Google is claiming.

Why? Because (b) would mean that there is a realistic model of computation which models BQP and (a) would mean that there is at least one problem in BQP which is not polynomially reducible to a problem in BPP; therefore it follows that a probabilistic TM (which models BPP, by definition) cannot efficiently simulate the particular model of computation given by a model of BQP (i.e. a quantum computer).

On the other hand, if we find that BPP = BQP (which is widely believed to not be true) then we're back to square one and the strong Church-Turing thesis would still hold.


[0] https://en.wikipedia.org/wiki/Church–Turing_thesis#Variation...

[1] "Efficiently" here means that any model is polynomial-time reducible to a standard model of probabilistic computing.

Additionally, the "probabilistic" aspect comes from the "fact" (i.e. observation) that we can make "good" random number generators in real life (say, by measuring background radiation), but it's unclear if there exist pseudo RNGs that are "good enough," algorithmically, to derandomize BPP. That is, such that BPP = P (the claim of "good enough" PRNGs is stronger than BPP = P, in fact, see https://mathoverflow.net/questions/2272/pseudorandom-generat...).

That is, we do not know if P ≠ BPP.

There are two cases. BQP contains something BPP doesn't: then the thesis is false as long as we can realize a quantum computer, since there would exist something solvable in polynomial time (with bounded error) on a machine in the universe which cannot be done as efficiently on a classical Turing machine.

BQP is a subset of BPP (equal or ~~strict~~): then the status of the thesis is unknown. There is still some chance that it is false, either via a superpolynomial speedup somewhere higher up the hierarchy (not too sure about this one; my complexity theory is not sharp enough to rule this in or out given BQP ⊆ BPP), or because there exists another realizable model of computation, other than quantum computing, that beats the randomized Turing machine.

EDIT: It appears that BPP is (trivially) a subset of BQP, so the strict inclusion of BQP in BPP is impossible, though BQP = BPP is possible.

It is certainly a separate problem. Just as the original comment says: I don't think anyone seriously argues that quantum computers will violate the CT thesis.

(Except Penrose, perhaps.)

I think it has to be. The former problem is about computational complexity; the latter is about computability.

I don't know much about this topic -- I skimmed through Scott Aaronson's "Quantum Computing since Democritus" and got lost by the last third. As far as I remember, it was all about computational complexity. I don't recall anything about computability, but correct me if I'm wrong.

I just had to look up the strong Church-Turing Thesis:

The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the Strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics.


I don't have much justification, but I believe it's false. There is a lot more to the universe than computers. Computers cannot simulate even elementary physical systems accurately.

Doesn't it take supercomputers to accurately simulate even a few atoms?

I think the biggest cultural "lie" of the last couple of decades is "The Matrix". Since that movie, educated people, including software engineers, take for granted that the universe can be simulated by computers.

They write silly papers like "we are almost certainly living in a simulation". Granted, that argument is sound if you believe it's even possible to simulate the universe with a computer.

But I actually take that as further evidence that it's NOT possible to simulate the universe with a computer.

Anyway, I haven't thought about these things in a long time, but I would say that just because something is impossible on a Turing machine doesn't mean it's impossible in the universe!

> I don't have much justification, but I believe it's false. There is a lot more to the universe than computers. Computers cannot simulate even elementary physical systems accurately.

Correct me if I'm wrong, but wouldn't a system that computes a non-recursive function be some sort of box where, given the same inputs, you invariably obtain the same outputs, yet there is no mathematical or algorithmic expression that could, in any finite amount of time, predict the same result? To find yourself in that situation, you need a physical process that is intrinsically non-computable; being unable to compute the sum of computable processes is not enough.

A problem that I see in simulating the universe (i.e. the sum of computable processes) is that in quantum mechanics you can't simply isolate yourself from the system under scrutiny. I.e. to compute and predict the state of the universe, a computer would have to take into account its own internal present and future state, which was the aim of the computation in the first place.

You can simulate a lot of atoms with a modern supercomputer. The only problem is just how many atoms there are, even in a tiny grain of sand. However, that doesn't mean all hope is lost. You just have to get really clever: there are a lot of repeating patterns in a grain of sand, so you don't really need to simulate all or even most of it. The same holds true for most structures.

Wouldn't an incomputable physical process have serious implications beyond CS? How would a physicist even model such a thing?

edit: I guess not, since all you need is access to the real numbers, which are a strict superset of the computable numbers; but this doesn't seem like an "interesting" violation of the SCT.

What makes you think physics can model everything? That's the point.

Math is FULL of functions that can be described -- that have properties X and Y which are useful for proving theorems -- which nobody has thought about how to compute.

(Obviously, there are the ones related to incomputability, like the Gödel function, but there are others too... I'd appreciate help from others here :) )

And it might not even be useful or interesting (for the purposes of mathematics) to compute them. Not all math is constructive.

Likewise you can describe physical processes without computing them.

If SCT is false, then, for example, it may be possible for a quantum computer to use an uncountable set of inputs to compute in finite time whether an arbitrary Turing machine will halt.

That said, you're right, if all that is required is real numbers, then hypothetically an analog computer is all that would be required. But in a truly quantum world where SCT holds, an "analog" computer is only a computable approximation.

(Full disclosure: I guess....)

Quantum computers (assuming we can build one) can access computational resources far beyond what classical computers can: imagine a classical computer so big and hot that it collapses into a black hole, or one that takes so long to run the computation (e.g. a classical computer the size of the solar system) that the heat death of the entire universe comes first. These are the physical limitations of classical computation, and a quantum computer will (if we can build one) surpass them.

> These are the physical limitations of classical computation

Classical computation (i.e. the Turing machine) is a mathematical model, and isn’t bound by the physical limitations you imagine. In particular, a Turing machine (which, after all, requires an infinite tape) is already not physically possible.

From the OP: "computations which current computers cannot replicate." I was just explaining a bit more why this is so.

If it matters at all, I've just finished a PhD in quantum computing.

I don't think that's really true. The difference between a quantum computer and a classical computer is that a classical computer's N bits are in exactly one of 2^N states at any time, whereas a quantum computer's N qubits are described by amplitudes across all 2^N basis states at once. However, this has a major caveat: you can never know the full state of the system. You only get to measure one value at a time, and you can never re-create or copy a previously obtained state.

Still, some clever people have figured out how to achieve speedups (quadratic for unstructured search, and superpolynomial for structured problems like factoring) by exploiting properties of the problem to get around the limitations of measurement in a quantum system.
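A small numpy sketch of that caveat (the random state here is arbitrary, just for illustration): N qubits carry 2^N amplitudes, but a measurement samples only a single N-bit outcome, and the amplitudes themselves are never directly readable:

```python
import numpy as np

rng = np.random.default_rng(0)

# N qubits are described by 2^N complex amplitudes...
N = 4
amps = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
amps /= np.linalg.norm(amps)                 # normalize: sum |a_i|^2 = 1

# ...but measuring yields a single N-bit outcome i, sampled with
# probability |a_i|^2. You never get to read the amplitudes themselves.
probs = np.abs(amps) ** 2
outcome = rng.choice(2**N, p=probs)
print(f"{2**N} amplitudes, one observed outcome: {outcome:0{N}b}")
```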

Whenever I hear that "quantum computers will forever be specialized processors", I think about how we must've thought the same about the very first computers, which were only really good at doing math calculations. Even GPUs have gotten way more general-purpose than what we thought they could achieve even two decades ago.

I think that if quantum computers become mainstream enough we'll find ways to use them for more general-purpose stuff.

Why not?

How does one get hired to be a researcher at google? What job title or level would this be? How does being a researcher differ from being a regular SDE? What background are they looking for here? Do these people write code on a daily basis?

"Soon we hope to falsify the strong Church-Turing thesis: we will perform computations which current computers cannot replicate. "

WTF, has Google gone completely insane? This is so wrong from every angle...

See the above thread given in [0]. It sounds weird, but they are talking about the strong CT thesis (e.g. that all realizable models are polynomial-time-reducible to BPP), not just the usual one (that no realizable model can solve halting-hard problems).


[0] https://news.ycombinator.com/item?id=15785215

As alluded to above, “Strong CT” confusingly refers to several very similar, yet different, concepts. I think this is throwing people off. Quantum computers will almost certainly not disprove the CTD principle [1], which is also known as “Strong CT”.

[1] https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93...

This is still a theoretical problem, not one solvable by building a faster computer.

You're right that part of the problem is, indeed, theoretical (it remains to be shown that BPP ≠ BQP, even if it is widely believed to be true), but part of the problem is also physical (we need to build a model of BQP---i.e. a quantum computer---to show that there is a physically realizable model of it).

Is this some pre-launch marketing for Google?

I wonder if they're still able to meet their promise to announce a 50-qubit quantum-supremacy computer by 2017, because the year is ending very soon. Was it delayed, or did IBM crash their party when they announced that even 56 qubits are not enough for quantum supremacy? (Not sure if IBM's test was enough to prove that, though.)

If we get a useful quantum computer, what then? How will machine learning improve? As far as I know, QC speeds up integer factorization, but I don't see how this transfers to machine learning.

Quantum computers are expected to be especially efficient at certain optimization problems.

That means they can optimize a solution for something with less data. Right now our AI is only good at solving problems for which we have millions and millions of data points. But we don't have that much data for every problem in the world. Quantum computers could potentially help us create an AI that can work with fewer data points.

Not an expert, but I believe most of the benefits will come from quantum annealing, which is capable of escaping local minima in optimization problems.

Simulated annealing is only guaranteed to find the global minimum as time approaches infinity. Does quantum annealing have a better guarantee?
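For reference, the classical simulated annealing being compared against fits in a few lines. The objective, cooling schedule, and constants below are arbitrary choices of mine; quantum annealing replaces the thermal uphill hops with tunneling through barriers:

```python
import math
import random

random.seed(1)

# Classical simulated annealing on a 1-D objective with two basins:
# a local minimum near x = 1.13 and the global one near x = -1.30.
# High temperature T lets the walker accept uphill moves and hop
# between basins; the geometric schedule T -> 0 then freezes it.
f = lambda x: x**4 - 3 * x**2 + x

x = 1.1                                    # start inside the *local* basin
best_x, best_f = x, f(x)
T = 2.0
while T > 1e-3:
    cand = x + random.gauss(0, 0.5)
    delta = f(cand) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand                           # downhill always; uphill w.p. exp(-dE/T)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    T *= 0.999                             # geometric cooling

print(f"best x = {best_x:.2f}, f = {best_f:.2f}")  # global minimum is near (-1.30, -3.51)
```

And to the question above: classical simulated annealing is only guaranteed to converge under impractically slow (logarithmic) cooling, while quantum annealing's guarantee (via the adiabatic theorem) depends on the spectral gap staying large enough, which can be just as impractical; so neither has an unconditional better guarantee.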

Why is this being upvoted?

It's literally just 20 random quantum computing papers with an unrelated heading and a Google banner, because they got flagged with a tag somewhere.

The papers aren't random; the names in blue are Google researchers.

I'd rather use Firefox Quantum than Google Quantum AI; yeah, I know it's unrelated, and hence the article too.
