I'm Scott Aaronson, quantum computing/computational complexity researcher. AMA
649 points by ScottAaronson 17 days ago | 356 comments
Hey HN,

We recently recorded a podcast (https://blog.ycombinator.com/scott-aaronson-on-computational-complexity-theory-and-quantum-computers/) where I discussed my research, AI, and advice for nerds in general or people who want careers in science.

We covered many but not all of the questions submitted over the internet, so AMA!




Hi Scott,

Shtetl-Optimized's tagline is famously "Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once". What phrase do you think should replace 'trying all the possible solutions at once' in the public consciousness as a succinct description of the mechanisms of a quantum computer? Or is this topic simply too complex to be distilled into a neat synopsis while retaining accuracy?


A quantum computer is a device that exploits constructive and destructive interference among exponentially many amplitudes, which are numbers that are closely related to probabilities but can be positive, negative, or even complex.

If you feel that sentence wasn't clear enough, and it would take at least a few more paragraphs to flesh it out ... well, duh, what did you expect? :-D

For a SLIGHTLY longer account, see my attempt to explain quantum computing in 35 seconds or fewer, which Maclean's magazine challenged me and others to do in response to Justin Trudeau's quantum computing explanation: https://www.scottaaronson.com/blog/?p=2694

When I did a piece for the New York Times, I managed to get an explanation that I was reasonably happy with into ~6 paragraphs: https://www.nytimes.com/2011/12/06/science/scott-aaronson-qu...

Given that quantum mechanics is, famously, one of the most counterintuitive things that humanity ever discovered, I don't think it's that big of an ask for people to read 6 paragraphs about QC before they decide they basically know what it's about. :-)
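The one-sentence definition above can be sketched numerically. Here is a minimal toy in plain Python (a hypothetical two-path setup, not any specific experiment): quantum mechanics adds amplitudes before squaring, so two paths to the same outcome can cancel, whereas classical probabilities only add.

```python
import math

# Two paths leading to the same outcome, with opposite-sign amplitudes.
amp_path_a = 1 / math.sqrt(2)
amp_path_b = -1 / math.sqrt(2)

# Quantum rule: add the amplitudes first, then square to get a probability.
p_quantum = abs(amp_path_a + amp_path_b) ** 2   # destructive interference

# Classical rule: square each amplitude into a probability, then add.
p_classical = abs(amp_path_a) ** 2 + abs(amp_path_b) ** 2

print(p_quantum, p_classical)  # 0.0 vs ~1.0
```

The outcome that classical probability says happens with certainty is one a quantum computer can steer to probability zero; that sign-based cancellation is the resource the definition is pointing at.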


Here's my attempt at a two-sentence over-over-over simplification that at least gets people away from the "magic bit-string that contains your answer." (It also harkens back to an old Einstein quote, so may be attractive to science writers.)

Quantum computing is a technique that lets you sample a problem's answer-space using "loaded dice," such that the problem's correct answers correspond with probability spikes in your dice throws. Right now, we only know how to usefully "load" those dice for certain problems, and it's pretty hard to do.
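The "loaded dice" picture can be made concrete with a toy sampler (the distribution below is invented purely for illustration; no actual quantum algorithm is being simulated):

```python
import random
from collections import Counter

# A hypothetical "answer space" of 8 outcomes, two of which are correct.
answer_space = list(range(8))
correct = {3, 5}

# "Loaded" distribution: probability spikes on the correct answers.
weights = [0.4 if x in correct else 0.2 / 6 for x in answer_space]

random.seed(0)
samples = random.choices(answer_space, weights=weights, k=10_000)
counts = Counter(samples)

# The two spikes dominate the throws.
top_two = {x for x, _ in counts.most_common(2)}
print(top_two)  # {3, 5}
```

The hard part, as the comment says, is that for a real problem we only know how to "load" the dice this way for certain algorithms (Shor, Grover, etc.), and the loading is done with interference rather than with classical weights.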


Do you mean all the possible solutions to a problem when you say 'answer space' ?


That's how I would interpret it, yes


Wow, that was nice and succinct.


"Loaded dice", I have not heard that before... Is it the same as biased dice?


Yes


My favorite quote of yours is that quantum computers "have a profile of abilities so strange that no sci-fi writer would have had the imagination to invent it" - it's a great quote to inspire people to dig deeper into the (literally beyond classical imagination) concepts of quantum mechanics!


In your 35 second blurb, and the New York Times article, it seems like the point you're making is that interference is the main way that quantum computers work. But what makes quantum interference special over other kinds? You can get interference in sound waves, light waves, radio waves, etc. You also mention that amplitudes can be complex, but the same is true of other kinds of waves; complex numbers are used for discussing impedance in electronics.

These are even relatively simple to work with; back in high school, I set up my basement as a darkroom, set up a sandbox for isolation, borrowed a laser from my physics teacher, bought a kit online with a beam splitter, mirrors, lenses, and film, and made some holograms of various objects utilizing light interference.

You could set up an apparatus in which light goes through a beam splitter, reflects off mirrors to travel via different paths, and is recombined and interferes in the end to produce an interference pattern. You could probably encode a lot of information in the exact length of the different paths, perhaps in an array of mirrors which could be actuated to produce slightly different path lengths in different parts of the beam (after the beam is expanded), and use the interference to make calculations.

Other than the smaller scale, and greater difficulty of working with it, what is special about quantum interference that would make it more amenable to solving problems that are NP-complete than some apparatus producing similar kinds of interference with light?

Also, has it been proven (or argued sufficiently convincingly) that quantum computation at scale is actually possible? I'm wondering if there could be an issue where it requires more computation to construct a quantum computer than the computation you get out, or require a non-constant number of quantum computers (with respect to the size of the problem) to actually get reliable enough results out, or something of the sort.

I think this is somewhat like the questions of whether certain automata are Turing-complete (https://en.wikipedia.org/wiki/Wolfram%27s_2-state_3-symbol_T...), when a sufficiently complex process is needed to encode the problem into the automata that it could be argued that the computation was not actually carried out by the automata itself (I don't actually know if that question was answered; Wikipedia references a mailing list thread that has a lot of discussion, but I haven't seen any authoritative conclusion).

Given that empirically, only extremely simple quantum computers have been able to be constructed, what makes us think that there isn't some kind of tricky scaling issue like this, where the additional complexity of building, running, or verifying the results of quantum computers will negate the benefits?


The difference between QC and all those other examples of interference is that, in the case of QC, the interference happens in configuration space rather than ordinary 3-dimensional space. And configuration space is enormous; it has a dimension that grows exponentially with the number of particles. The number of paths that could interfere with each other to produce a given amplitude is likewise exponential (in that case, in the number of computational steps).
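A quick back-of-the-envelope calculation shows how fast configuration space outgrows ordinary 3-dimensional space (a sketch; the 16-bytes-per-amplitude figure assumes double-precision complex numbers):

```python
# The state of n qubits lives in a 2^n-dimensional configuration (Hilbert)
# space, so even writing down one state vector blows up exponentially.
n_qubits = 30
dimension = 2 ** n_qubits

# One complex amplitude = two 64-bit floats = 16 bytes.
bytes_needed = dimension * 16

print(f"{n_qubits} qubits -> {dimension:,} amplitudes, "
      f"{bytes_needed / 2**30:.0f} GiB just to store the state")
```

At 30 qubits you already need about 16 GiB; each additional qubit doubles it, which is why classical simulation of the interference runs out of room so quickly.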

No, of course no one has proven that it can work: presumably, the only proof that will convince everyone will be the actual construction of the machines! But in the 1990s, the theory of quantum error-correction convinced almost everyone that, as far as current physics can say, the difficulties (though staggering) seem to be "merely" difficulties of engineering. As I discussed in another answer, a deep reason why QC could never be scaled would be MUCH more interesting scientifically than a mere success in scaling it (which would "merely" confirm what physicists already believe). And of course, with the ongoing efforts of Google and others to demonstrate "quantum supremacy" with 50-70 qubits, we're likely to get experimental results that are relevant to your questions within the next few years.


Out of all your replies, I THINK that this is the one that helped me grok why QC is different from somehow simulating quantum math in a classical computer. The idea of being able to tap into more than 3 dimensions sounds like something very fundamental, kind of like relativity, and a key aspect of how our universe works that at least I was never aware of (and probably a lot of other people!)

Would you consider writing an in-depth article on configuration space and how it applies to QC (and possibly other research) and sharing it here on HN someday?


Pretty much any intro to QC (and in particular, any of the intros I've written, and linked to elsewhere on this thread) will make the point about Hilbert space (that's what it's called) having a dimension that grows exponentially with the number of particles in your system. This is because every possible classical configuration of the system is its own orthogonal "direction" in Hilbert space, and the full state can be an arbitrary superposition (i.e., complex linear combination) of those directions.
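A minimal sketch of that idea in plain Python (the amplitudes below are arbitrary, chosen only for illustration): a state is a complex linear combination over the classical configurations, normalized so the squared magnitudes form a probability distribution.

```python
import math

# One complex amplitude per classical configuration of 2 qubits:
# |00>, |01>, |10>, |11> -- each is its own orthogonal "direction".
amplitudes = [1 + 1j, 0j, -1 + 0j, 2j]

# Normalize so the squared magnitudes (probabilities) sum to 1.
norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
state = [a / norm for a in amplitudes]

# Measuring yields configuration k with probability |amplitude_k|^2.
probs = [abs(a) ** 2 for a in state]
print(sum(probs))  # ~1.0: a valid probability distribution
```

For n qubits the list would have 2^n entries, which is exactly the exponential growth of Hilbert-space dimension the answer above describes.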

Ok, I'll try another attempt at explaining the stuff :P Not sure if it's correct, because I'm not a parent :P nor a quantum scientist actually, just dabbled a tiny bit in QC at one point :) So you can choose whether to believe my words or not :)

Imagine you are a parent/teacher in a room full of happy kids, playing their kid games. Some kindergarten playground or something. They're generally all doing some kind of stuff, and doing it in parallel. Is this "doing work in parallel"? Every one of them is doing something totally else, one kid is building a castle, the other is throwing bricks at it and destroying it ("interference cancelling each other's work"). One is digging holes in a sandbox, another just kicked the sand inside, filling the holes back.

Now, you are just one, insignificant adult in this room. Imagine you would want them to do something for you. Can you just shout at them, "do me some parallel computation"? "Build me a castle of bricks"? Meh, sure you can, but they'll look at you funny, maybe a few of them will start, but their attention will be soon diverted by others, and anyway they'll soon get bored and start fudging around.

But here comes the fun part - if you're a smart and creative teacher/parent/..., you can actually do much better: you can "trick" them into doing your work; you have to either find some kind of a "system", or a "fun game", that they will like, that will fit their abilities and sensibilities, so that they'll choose to generally more or less contribute in the direction you want them to. You have to find a way of doing the task that will be "compatible" with them. Then, collectively, you can actually have them get your work done! But if you don't find the trick - sorry, no free lunch for you :) But you can still keep enjoying watching in awe and wonder how they're having fun, the little buggers... erm, sweethearts :P

In a somewhat similar way, in QC, you have to invent a system that can trick all the qubits, who have their particular, peculiar ways of living and behaving, to contribute to some particular result that will be meaningful and useful to you. Otherwise, they'll totally do some kind of "parallel work", but the result will be just irrelevant mess. To make the challenge even more tricky, you're actually outside the room when the work is happening. You don't see the "calculation" ([wavefunction] vector) each kid... umm, qubit is contributing, you only see the one final result. Nah; that would be too easy still; you can only see the shape of the result's shadow (just the length of the final vector).

(edit: ah, and I forgot the most important thing: if I'm not wrong, each extra kid is actually contributing exponentially more work; if you have N qubits, you are trying to trick 2^N amplitudes to work for you)

Sorry for still being very vague and handwavy :)

Hmm, one more vague analogue could be to "computer proof systems/theorem provers", e.g. Idris, or trying to prove/enforce something with GADTs. You have this set of rules/mechanisms; now, you have to sit and squeeze and tear your brain in different ways to invent how to force those limited rules to encode the thing you want to prove. Not easy. But sure a challenging and potentially fun brainteaser :)


As someone still trying to understand the essence of quantum computing, your analogy to me can be summed up as "coherence only comes from coherence." Which to me is too general, and describes computing as a whole. How coherence is determined in the particular case of quantum computing, as opposed to classical computing, is the meat and potatoes that I'm looking for.


Sorry, don't think I can do more at this point; I once managed to understand Shor's algorithm from some book, but I didn't reinforce it, and now I'm left only with a vague recollection of the main a-ha moment... Though I actually don't really get what you mean by coherence here (esp. in the area of classical computing).


By coherence I mean something intelligible that can be acted on (i.e. computer code and its results).


“Quantum Computing is the exploitation of quantum state evolution to perform computation.”


A quantum computer is a high-tech ouija board.




I'll just point to this https://www.smbc-comics.com/comic/the-talk-3 which was posted as an answer to a similar question I had on a previous discussion about a blog post of his.


That was fantastic and enlightening, thank you.


Ditto, fun read thanks for sharing.


Exactly the question I would love answered.

Edit: this seems roughly answered in the reply to a question by user r4um


I don’t see where it’s answered?


Hi Scott,

Do you think there is a significant chance that quantum will never take off (i.e. there are non-obvious limitations that will prevent quantum architectures like superconducting qubits / trapped ions / quantum dots /... from ever outperforming classical supercomputers)? Related, what in your opinion is the best indicator (or would be the best indicator if demonstrated) of the potential of quantum devices?


Yes, I do think there's a significant chance of that. If it happens, my main interest would be to understand WHY. What are the non-obvious limitations that you mention? What is true about the world that makes it seem to have this exponential explosion of amplitudes, yet makes it impossible or infeasible to harness them for computation?

The depressing possibility, of course, is that we never succeed in building useful QCs, but we also never learn anything deep about why we failed: it was just too complicated, too messy, and then at some point the funding ran out.

But I like to proceed on the assumption that the world is ultimately comprehensible (what other choice does one have in science? :-) ). On that assumption, if QC can never work, then there must be a deep reason that's not articulated in any of the existing physics books: either a breakdown of quantum mechanics itself, or else some new principle on top of QM that "screens off" or "censors" QC. Needless to say, the discovery of that principle would itself be a revolution in science -- indeed, I'd personally be more excited about it than a "mere success" in building scalable QC! (But my own bet is on the "boring, conservative" possibility, that QC can ultimately work.)

If we see the milestone of "quantum supremacy" achieved in the next few years -- i.e., a 50-70 qubit quantum computer used to solve some artificial sampling task many orders of magnitude faster than we know how to solve it classically -- that will obviously be one strong indicator that the potential of QC can be realized. An even better indicator would be the use of a quantum error-correcting code, like the Kitaev surface code, to keep encoded qubits alive for longer than the underlying physical qubits are staying alive for (or better still, to perform 1- and 2-qubit gates on them).
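The "encoded qubits outliving physical qubits" criterion has a simple classical analogue, the 3-bit repetition code (a toy sketch only; real quantum codes like the surface code must also handle phase errors, which no classical repetition code can):

```python
# Assumed per-bit flip probability per time step (illustrative value).
p = 0.05

# A single physical bit fails with probability p.
p_physical = p

# An encoded bit, decoded by majority vote over 3 copies, fails only if
# at least 2 of the 3 copies flip.
p_encoded = 3 * p**2 * (1 - p) + p**3

print(p_physical, p_encoded)  # 0.05 vs 0.00725
```

For p below 1/2 the encoded failure rate ~3p^2 beats p, and concatenating or enlarging the code pushes it down further; the quantum threshold theorem plays the analogous role for surface-code qubits, which is why keeping encoded qubits alive longer than physical ones is such a telling milestone.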


There's a famous study somewhere showing that people interpret phrases describing probabilities differently.

By "significant chance" do you mean something like 10% or something like 70%?


He means (-40 + 17i)%.

Sorry, couldn't resist :-)


Yes, that's PRECISELY what I meant.


I think you have to find the "perdec" symbol on your keyboard. That way you'll get a percentage when you square it.


I would also like to know the answer to this. Popular culture has seemingly latched onto the phrase "quantum computer" and decided it's the next logical step for computing in general, without ever really defining what it is or thinking about it too clearly.


Well, never is a long time. I guess the more interesting question (to me) is whether engineering challenges will be easily overcome so as to make quantum computer components cheap and ubiquitous, or if there's some innate difficulty to their production that will make them uncommon for everyday personal use.


People struggled for 40 years to make a blue LED.

Even if quantum computers required near zero temperatures, superconductors and such stuff, there is no reason why you couldn't have all that in a no-serviceable-parts-inside box if the economic incentives were strong enough.


Well the question really is economic incentives. If it is too hard to do (say, doesn't scale) or can be simulated efficiently-ish on a classical computer -- no one will do it.


I love your optimism.


I also second this question. Do you believe we will be able to construct real machines that can run Shor's or Grover's algorithms in a practical way, and how long is it likely to take to achieve that? (The parent's question puts it much better than mine; I would like to hear the answer.)


You were said to be a skeptic of the quantum computing company D-Wave. Then you started believing, and then went back to skepticism. What is your current position: do you think it works? What would you like to see from them?

Also, what is your take on Max Tegmark's quantum suicide experiment. Would it work? If yes would that imply that each of us should expect to live a really long time subjectively?


My position on the technical fundamentals never changed much: namely, D-Wave is building devices that could be interesting from various engineering perspectives, but that as far as most of us can tell, are not getting speedups over existing computers that are clearly attributable to quantum computation (as opposed to building special-purpose hardware that's, essentially, very fast at simulating itself). If you want quantum computing speedups, I think you're going to need qubits of much higher quality, and ultimately error correction or at least error mitigation. In principle, D-Wave could do that, and I applaud any steps they take in that direction. However, I'm personally much more excited right now about the experimental efforts in superconducting quantum computing that are happening at Google, IBM, Intel, and Rigetti -- all of which use qubits with orders-of-magnitude better coherence times than D-Wave's qubits. In some sense, D-Wave optimized for being able to say that they had 2000 qubits as quickly as possible, rather than for the qubits actually doing what we want.

On a more sociological level, D-Wave earned a lot of bad blood with the academic QC community by making false, inflated, and overhyped claims (with a primary offender being its founder, Geordie Rose, who's since left the company). And I certainly took them to task for those sorts of things on my blog. Then the D-Wave folks met with me, John Preskill, and other academics, and pledged to improve in how they communicated, so I was nicer to them for a while. Then they went back to egregious hype about speedups that weren't real, so I criticized them again. Nothing more to it than that. :-)

Regarding quantum suicide: no, I do NOT recommend killing yourself any time anything happens in your life that makes you unhappy, on the theory that other versions of you will survive, in other branches of the quantum-mechanical wavefunction where the bad event didn't happen. This is partly because, even assuming you accept the Many-Worlds Interpretation, "your" moral concern and responsibility presumably extend only to those branches that are in "your" future -- you have no contact with the other branches! And partly it's because I take it as almost an axiom of rationality that, if a metaphysical belief leads you to do "obviously insane" things with your life, then it's probably time to look for a better metaphysical belief. :-) (I wouldn't say the same about scientific or mathematical beliefs.)


Another problem with the quantum suicide thought experiment is that there are plenty of branches where you end up alive but horribly disabled.


Yet another problem is that no matter how small the measure of those branches is, you'll end up in them anyway.

Death from natural causes qualifies too.


Am I hearing this right: you think the whole multiverse concept is... metaphysics at best?


I think he was saying that whether you should morally care about the other branches counts as metaphysics.


Yup.


You simply can't call something 'physics' if it's not testable. :)


That word 'testable', is very loaded. :) But I get what you mean. Are there things that we can't test that do exist?


>Are there things that we can't test that do exist?

There's many reasons to believe that objects that exit our light cone continue to exist after they do, even though they could never have any future interaction with us to confirm that. (Say a spaceship leaves Earth at near the speed of light in a straight line, and then enough time passes that the space between the ship and Earth is expanding so fast that the spaceship or any kind of signal from the spaceship would have to travel faster than light to return to Earth, which is impossible. Believing that the spaceship disappears when it exits our light cone requires believing in unnecessarily more complicated physics.)


Scientifically speaking, no. A scientific hypothesis must be falsifiable, and to be falsifiable it must be testable. I guess in some sense you could claim that there are hypotheses that are testable, but which we do not have the capacity to test. But then, is the claim that "one day in the future, we will be able to test this other claim" itself falsifiable? I'd argue not (it's a recognizable but not decidable claim, in the computational sense, and I think that for a claim to be falsifiable, it must be decidable).


> A scientific hypothesis must be falsifiable

This is certainly one understanding about what science should be (although not a scientific one interestingly enough). Personally I prefer Thomas Kuhn's demarcation, which by my understanding concentrates more on whether a scientific program is producing interesting predictions which turn out to be true.


I'm not speaking about science as a whole or a scientific program, but about a scientific claim. CERN is certainly not falsifiable, but it produces predictions which (often) turn out to be true. It does so by devising falsifiable claims and then testing those claims.

In other words, the method to create interesting predictions which turn out to be true is to create interesting predictions, then test those predictions, and update your understanding of the world based on them. Once your world-model is good enough, your predictions will often be true. And, perhaps, eventually your predictions will be so often true that they become uninteresting, so you must move on to other questions.


Fair enough. I think Kuhn was referring to things like the world-models, and you're referring to finding out whether the predictions of the model match reality.

The heliocentric model of the solar system made less accurate predictions than the geocentric model for years, because the geocentric model was mature and had had lots of tweaks applied to it. In that time, you could have asked the heliocentric model to make a prediction, and shown that it was wrong compared to the geocentric model. You would have been wrong to conclude that heliocentrism was wrong though, it just hadn't matured as a theory enough yet.

All models are wrong, but some are useful.


> Are there things that we can't test that do exist?

Lots of people think so (e.g. unmeasurable things predicted by theory like parallel universes, but also things like evil or God or the color purple), but by definition it's hard to be very sure, or to transfer your own confidence in such things to others.

Lots of these kinds of questions reduce to quibbling about definitions; and also by definition, if we can't test the thing then the universe isn't going to punish us either way for believing or not.


> if we can't test the thing then the universe isn't going to punish us either way for believing or not.

If we can’t test the thing then what we are discussing is faith, not science.

Nothing wrong with faith and beliefs but I think it’s important to differentiate between these things and science because often times science is used as a basis for untestable beliefs and then people really start to think that those untestable beliefs are actually backed by scientific research.


Do you have any empirical evidence for any particular QM interpretation? If not, does that make them unscientific?


I think the various interpretations of QM, until one is proven (or we otherwise come to one definition)... they all lie within philosophy.

The scientific method gives rise to the fact-based, ever-improving, testable body of work we call science, but that doesn't stop people from creating their own religions and beliefs based on it.


I think there’s a grave misunderstanding about the multi-universe interpretation. The scales at which the uncertainty principle comes into play are vanishingly small. I think they’re small enough that they don’t sum to larger-scale variances. The larger level probably asymptotes to the one reality.


> This is partly because, even assuming you accept the Many-Worlds Interpretation, "your" moral concern and responsibility presumably extend only to those branches that are in "your" future -- you have no contact with the other branches!

Would you say that the only moral way to implement quantum suicide is with a Doomsday Device that would destroy the entire world, thus ensuring your actions won't affect anybody else even in the worlds where you die?


No, I wouldn't recommend that either. :-)


This is slightly off-topic (I'm going to be that "this is sorta more a comment than a question..." guy for a second), but I just want to say that Scott's blog is one of my favorite blogs on the whole internet. If only there were more like it!


Thanks!!!


I know that me-too type posts are frowned upon here on HN, but in this case I think an exception is warranted. Too many scientists ensconce themselves in the ivory tower and treat the rest of the world with attitudes ranging from indifference to outright disdain. I also want to thank you for not following that model.

On a totally unrelated note, I've been trying to wrap my brain around coherent states and the photon-number/phase uncertainty relationship (e.g. http://hitoshi.berkeley.edu/221a/coherentstate.pdf). Do you know of any simple intuitive stories one can tell about that like one can with position-momentum uncertainty? I know this isn't really in your wheelhouse, but people who both understand this stuff and are willing to field questions like this are exceedingly rare (see above paragraph).

(FWIW, and for the benefit of lurkers, this question was prompted by the discussion on this blog post: http://blog.rongarret.info/2018/05/a-quantum-mechanics-puzzl.... Also FWIW, that's my blog.)


A lot of machines that purport to solve an NP-hard problem in a short period of time have some trick that makes them impractical, like requiring the ability to reduce the noise floor in a signal exponentially or pump an exponential amount of energy into a system. Has it been shown that such physical resources are more-or-less interchangeable with running time for the purposes of complexity theory?


No, I think this is more a question for physics than for complexity theory -- complexity theory basically just takes the computational model and the resources you care about as inputs, then uses math to study how much of the resources are inherently required to solve a given problem.

Absent an ultimate theory of fundamental physics, we're unlikely to have a full answer to your question -- e.g., to be able definitively to rule out the possibility of "hypercomputers" solving NP-hard problems in polynomial time. What we can do is

(1) to explain the failure (often, the forehead-bangingly obvious, don't-point-to-the-exponential-elephant-in-the-room failure) of all EXISTING proposals along these lines, and

(2) to point to deep discoveries in fundamental physics -- most notably, the Bekenstein bound https://en.wikipedia.org/wiki/Bekenstein_bound -- which seem to constrain any future quantum theory of gravity to have a form that would rule out these sorts of hypercomputers (for example, by limiting the amount of energy that can be pumped into a finite region, without causing the region to collapse to a black hole, and by likewise ruling out computer components that are smaller than 1 Planck length across or that do more than 1 step per Planck time).
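For a sense of scale, the Bekenstein bound can be evaluated directly (a sketch; the 1 kg / 1 m example is chosen arbitrarily). In bits, the bound S <= 2*pi*k*R*E/(hbar*c) becomes I <= 2*pi*R*E/(hbar*c*ln 2):

```python
import math

hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
c = 2.997_924_58e8        # speed of light, m/s

def max_bits(radius_m, energy_j):
    """Bekenstein upper bound on bits storable in a sphere of the
    given radius containing the given energy."""
    return 2 * math.pi * radius_m * energy_j / (hbar * c * math.log(2))

# Example: 1 kg of mass-energy (E = m c^2) inside a 1-meter-radius sphere.
E = 1.0 * c**2
print(f"{max_bits(1.0, E):.3e} bits")  # on the order of 1e43
```

Huge, but finite: any "hypercomputer" that tried to beat the bound by packing in more energy would, per the argument above, collapse the region into a black hole instead.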

I've often speculated that ultimately, the hardness of NP-complete problems in the physical world might come to be seen as analogous to the impossibility of faster-than-light signalling or perpetual motion machines---i.e., something that we simply take as primitive and then use to explain other phenomena in physics. But while the hardness of NP-complete problems sometimes gets used in that way already, I also think we have a lot more work to do before the situations are truly parallel. (For starters, we could prove P!=NP. :-) )


What good is oracle separation? The recent separation between BQP and PH, for example, doesn't seem to mean much. Of course such results may spur the discovery of new proof techniques, or new ways of thinking about a problem, but is the result itself useful? I must be missing something, like a way of stringing together oracle separations to produce a real separation... or is it just the case that oracle separations are considered some kind of suggestive evidence that a real separation exists?

Edit: This post[1] says two classes are equal with no oracle if they are equal with respect to every oracle, but apparently only because "every oracle" includes the null oracle...

[1] https://www.scottaaronson.com/blog/?p=451


I think oracle separations are currently only useful as barrier results, showing that certain proof techniques can't solve the full problem.


Here's a modern viewpoint: there are many different things we can mean in saying that one model of computation (e.g., quantum) has abilities that exceed another model's (e.g., the polynomial hierarchy). So, we could be talking about the number of computational steps, the first thing that complexity theory tried to understand and also (by a twist of fate) nearly the hardest. But we could also be talking about several other things, like communication complexity for a distributed problem, or query complexity: the number of accesses that need to be made to a very long input, which is more fancifully called an "oracle." For each of these settings, we could be talking about decision problems, promise problems, relation problems, or exact or approximate sampling problems.

Think of all these separation problems as terrifying monsters that we have to defeat. Some monsters, like P vs. NP with no oracle, are clearly the bosses of the entire game. But in the meantime, we just try to slay whatever monsters we can, because that's how we build up our stamina points and learn the ins and outs of the game. And also, almost all these monsters live in dungeons that you can only reach by slaying easier monsters first.

For now, unrelativized separations, like P!=PSPACE, are monsters that we almost never have the tools to defeat. They're like 30 levels ahead of us in the game---or maybe 300 levels; one thing about this game is that it never tells you how many levels you still need to clear.

Oracle separations---and communication complexity separations for that matter---are easier monsters. But the point I want to make is that they're clearly, unequivocally part of the same game as the harder monsters that we're trying to get to.

This is particularly clear in the case of oracle separations from PH. When theoretical computer scientists proved in the 1980s that there's an oracle relative to which PH!=PSPACE, they did so only by proving the inability of small constant-depth circuits of AND, OR, and NOT gates, so-called AC0 circuits, to compute the n-bit PARITY function. This is universally considered one of the most important lower bounds on circuit size ever shown. (Circuit size lower bounds, of course, are the dungeon that P vs. NP is the boss of.)
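One concrete property behind PARITY's hardness for AC0 is easy to check directly (an illustrative sketch, not the lower-bound proof itself): parity is maximally sensitive, in that flipping any single input bit flips the output, whereas a small constant-depth circuit can only be that sensitive on limited inputs.

```python
def parity(bits):
    """PARITY of a bit list: 1 if an odd number of bits are set."""
    return sum(bits) % 2

# An arbitrary 8-bit input (any input works the same way).
x = [1, 0, 1, 1, 0, 0, 1, 0]

# Count how many single-bit flips change the output.
flips = sum(
    parity(x[:i] + [1 - x[i]] + x[i + 1:]) != parity(x)
    for i in range(len(x))
)
print(flips, len(x))  # 8 8: every flip changes the answer
```

The actual Hastad-style proofs use random restrictions rather than sensitivity alone, but this "every bit matters" behavior is the intuition for why no shallow circuit of AND/OR/NOT gates can compute PARITY on n bits.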

The connection is this: a PH algorithm that queries an exponentially-long oracle string can be seen as just a massively scaled-up AC0 circuit reading its n-bit input (with no oracle). Conversely, an AC0 circuit is just a massively scaled-down version of relativized PH. There's a giant monster that you have to fight with giant swords, and a tiny monster that you have to fight with tiny swords, but the two battles are isomorphic. Win one and you win the other.

In the same way, proving an oracle separation between BQP and PH is exactly the same challenge as giving an ordinary computational problem (with no oracle) that quantum computers can solve in polylogarithmic time, by querying the n-bit input in superposition, but that's not in AC0.

Of course, eventually we'd like to prove that interesting problems like 3SAT have no polynomial-size circuits of any kind, thereby establishing P!=NP. For now, though, a little beyond AC0 is the frontier: the most complicated place where we still know how to prove lower bounds on circuit size. And for problems that have the sort of structure that comes with being solvable in quantum logarithmic time (basically, being approximable by low-degree real polynomials), we didn't know until a month ago even how to put them outside AC0, let alone the classes slightly beyond AC0. So an important monster has been slaughtered in the frontier region, and we can now advance to a previously unexplored corridor, in the huge sprawling dungeon where somewhere P!=NP is lurking.


Hey, Scott! Love your book, Quantum Computing Since Democritus. There is one section which confused me, in the Quantum chapter (https://www.scottaaronson.com/democritus/lec9.html). I have two questions:

1) You say applying the unitary operation is the quantum analogue of "taking a coin and flipping it" - what do you mean by this? Could we think of, for example, a pair of slits as applying this transformation on a photon qubit being sent through them, so it's now in superposition with regard to which slit it went through?

2) What does it mean when you say there are two paths to |1>? I understand this section is getting at the physical underpinnings of this mathematical model, but I can't quite wrap my head around how there are two "paths". My mind is stuck thinking of the Bloch sphere as a state machine you can deterministically hop around by applying unitary transformations.

P.S. I did a talk called Quantum Computing for Computer Science which covers everything up to the one-bit Deutsch Oracle problem in 1.5 hours; I found that presenting quantum algorithms as running on a "unit circle state machine" (basically 2d Bloch sphere by restricting states to real numbers) was a very effective way to explain the subject to software engineers! (https://www.youtube.com/watch?v=F_Riqjdh2oM)


1) Yes, a unitary transformation like the Hadamard gate maps the state |0> to |0>+|1>, while mapping |1> to |0>-|1>. In either case, if you then just measured immediately in the {|0>,|1>} basis without doing anything else, you'd see |0> or |1> with equal probabilities, so it would have the effect of a coin flip. But of course, in other cases---e.g., if you measured in a different basis, or if you applied the Hadamard to a state that wasn't just a |0> or |1> basis state---you could see that Hadamard is not just a coin-flipping transformation, because it's able to produce interference.

2) When we talk about the different "paths" that contribute to a given amplitude, it's just a fancy way of saying that we can organize the matrix multiplications in such a way that the amplitude we want is a giant sum. So for example, suppose we apply Hadamard twice in sequence to the initial state |0>. The first Hadamard maps |0> to (|0>+|1>)/sqrt(2). The second Hadamard maps |0> to (|0>+|1>)/sqrt(2) and |1> to (|0>-|1>)/sqrt(2). So by linearity, it maps (|0>+|1>)/sqrt(2) to

((|0>+|1>)/sqrt(2) + (|0>-|1>)/sqrt(2))/sqrt(2) = (1/2+1/2)|0> + (1/2-1/2)|1>.

So in this case, we could say that there are "two paths leading to |0>," both of which contribute 1/2 to its final amplitude (so that the amplitude is 1). There are also "two paths leading to |1>," but one contributes 1/2 to its amplitude and the other contributes -1/2, so the two contributions interfere destructively and the final amplitude of |1> is 0.
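(The arithmetic above is easy to check numerically; here's a small sketch, not from the original answer, using NumPy with the standard conventions for the Hadamard gate and the |0> state.)

```python
import numpy as np

# Hadamard gate: maps |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])  # the state |0>

# Applying Hadamard twice returns (up to floating-point error) to |0>
after_two = H @ H @ ket0
print(after_two)  # close to [1. 0.]

# The four "paths" 0 -> mid -> end: each contributes H[end, mid] * H[mid, 0]
for end in (0, 1):
    contributions = [H[end, mid] * H[mid, 0] for mid in (0, 1)]
    print(end, contributions, sum(contributions))
# Paths to |0>: +1/2 and +1/2 (constructive, amplitude 1)
# Paths to |1>: +1/2 and -1/2 (destructive, amplitude 0)
```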

This is sometimes called the "Feynman" or "sum-over-paths" picture of quantum mechanics. As you can see, though, it's just a different way of looking at exactly the same math, namely multiplication of matrices and vectors.

So then why use the sum-over-paths picture at all? Well, a few reasons: physicists like it because it often gives them more insight into what's going on, into what are the more and less important contributions to a given process, and it can also make calculations easier. Meanwhile, computer scientists like the picture because it lets us simulate a quantum computer by a classical computer, still using exponential time but now using only a linear amount of memory, rather than the exponential amount of memory we'd need if we tried to store all 2^n amplitudes at once.
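(To illustrate that last point, here's a toy sketch of my own, not from the answer above: computing one output amplitude of a circuit of single-qubit gates by recursing over the intermediate basis states. The recursion visits all paths, so it takes exponential time in the circuit depth, but it only ever stores one path at a time, so the memory is linear in the depth rather than exponential in the number of amplitudes.)

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)  # Hadamard gate

def amplitude(gates, start, end):
    """Sum over all paths start -> ... -> end through the list of gates,
    applied left-to-right. Memory use is linear in len(gates)."""
    if not gates:
        return 1.0 if start == end else 0.0
    first, rest = gates[0], gates[1:]
    # Branch over the basis state occupied right after the first gate
    return sum(first[mid, start] * amplitude(rest, mid, end)
               for mid in (0, 1))

print(amplitude([H, H], 0, 0))  # ~1.0: the two paths interfere constructively
print(amplitude([H, H], 0, 1))  # ~0.0: the two paths cancel
```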


The cleanest explanation I read of information / state linearity properties of quantum operators. Thanks! (Working through Brian Hall's Quantum Theory for Mathematicians and Frederic Schuller's course at the moment, both highly recommended).


Hello,

As for "careers in science", what can someone later in life (I just turned 60, can't believe it) still do? I have a CS degree from ages ago, and have a scientific mindset (runs in my family, e.g., my sister was a research scientist [mathematician] before retirement). I don't want to retire anytime soon. But I'm unlikely to get into grad school. I can take MOOCs of course, and do. My interest in science tends toward neurology and embodiment.


I admire your determination to start a scientific career at age sixty! I once faced the "converse" problem---how to start doing research at age 14 or 15, before I'd found any mentor to help me---and it was one of the great revelations of my life that, at least in theoretical fields, the barriers standing in the way are more internal than external. So long as you have the time, and enough income to live off, you can start reading textbooks and arXiv papers. You can email the authors your questions. You can go to conferences, with or sometimes without registering (just don't tell anyone I said so ;-) ). You can sit in on classes at a nearby university (ask the professor; most are fine with it). You can talk to the professors. You can start working on a problem that interests you. If you get somewhere, you can write it up and submit it to a conference or journal. You can offer your services as a research assistant. The gates are open.

Of course there's a chicken-and-egg problem here, where the stronger your research record, the more busy researchers will want to talk to you, and the more they'll talk to you, the stronger you'll be able to make your record. Just like with dating or anything else, I guess, it does take persistence to break this cycle. :-)

And of course, you'll need to be considerate of people's time, and you'll need to decline to take it personally if some people are too busy to answer you. Just plow ahead and ask others.

Lastly, let me strongly suggest finding some other people who started their research careers later in life, and asking their advice. They'll surely be able to think of things I didn't!


In our quantum mechanics class, there was an 80-year-old woman with a degree in statistics taking the class with us. You can do anything.


I'd just first like to say that I love reading your work. I'm always delighted when I see an update on Shtetl-Optimized and I admire how you are both simultaneously rigorous and funny in your papers and posts. Your PnP survey stands out in my mind as a truly fun and insightful read in particular.

Two questions:

1. Any meaningful updates you'd make to the PnP survey today?

2. As a total aside, I'm curious for your thoughts on blockchains. Not specifically proof of work as a BFT system per se, but more broadly curious to what degree you think "trustless" transactions and data processing might or might not be transformative.

Apologies in advance if you've recently provided thoughts on either of these topics and I've sadly missed them!


1. The P vs. NP survey is only 2 years old, so all the edits I'd make to it now would be rather minor ones: for example, including some more recent circuit lower bounds of Ryan Williams and others, some more no-go results for Geometric Complexity Theory, and Raz and Tal's BQP vs. PH breakthrough (which required a new circuit lower bound, though not of a kind that can evade the natural proofs barrier).

2. See here for my answer to someone who asked me for my thoughts on blockchain just a couple weeks ago: https://www.scottaaronson.com/blog/?p=3861#comment-1768247


Thanks for the reply! And apologies I didn't realize the autocorrect on my phone turned "PvNP" to "PnP". I'll read up on the recent works you mentioned here. Appreciate the link to your comment on Blockchain technology as well.

If you still have time to answer questions.. is there any recent work in the cryptographic or blockchain space you see as standout? I haven't referred recently to any citations or updates, but found Ben-Sasson, Bentov, Horesh and Riabzev's recent work (https://eprint.iacr.org/2018/046) on ZK-STARKs intriguing.


As a programmer and practitioner, I'm curious about what kinds of skills and training it takes to program quantum computers.

Can you shed some insight into what's really different about the tools and task of programming a quantum computer versus using classical programming languages and tools? Do you think quantum computer programming will rapidly become standard training for CS undergrads, or do you expect it to remain a niche skillset like FPGAs, etc., since it will only supplement and not replace classical computers?

Also, nice to meet you. Your essays have been inspiring over the years.


Thanks!!

I imagine that programming QCs will be a lot like programming classical computers, except with an additional body of technical knowledge that one needs to master. In that respect, it will be a lot like 3D graphics programming, or crypto programming, or AI programming, or compiler programming. And much like with those other types of specialized programming, even in a world filled with useful QCs, I imagine that only a minority of programmers would really need to understand how to interface with them.

Everyone: OK, I'm going to sleep now, since I need to catch a flight tomorrow morning. I'll try to answer a few more questions on the plane, but then I'll probably call it a day (or rather, two days :-) ). No additional questions please. Thanks for all the interesting questions!


Hi Scott. I did not listen to the podcast (yet), so sorry if you answered this already, but what, in your opinion, could be the greatest impact quantum computing has on society? I know it's a broad question, but I'm curious to hear a few bullet points.


Short answer: I think the greatest impacts might be in simulating quantum physics and chemistry themselves, and thereby giving a new window into nature that could have applications to drug design, materials science, batteries, high-temperature superconductors, and more. But I could be as badly wrong as (e.g.) someone speculating about the impacts of classical computers in the 1940s, who could see applications to weather prediction and other physical simulation problems but would've totally missed the creation of Hacker News.

Long answer: See my recent blog post https://www.scottaaronson.com/blog/?p=3848 about exactly this!


What is your opinion on how advances in QC may advance our understanding of human consciousness? I remember that in your QCSD book you make a few remarks about Penrose's microtubules theory, the one that tries to address two questions about the human mind:

1. Are mind processes Turing-computable?

2. If yes, at which level in the brain does the consciousness emerge?

Penrose's answers to that (as far as I know) are not generally accepted by the scientific community. What is your intuition about these two questions?


I don't want to prejudice an actual answer here, but it's worth noting Mr Aaronson has produced a fantastic paper, almost uniquely substantive in its field, commenting on several aspects of possible relations between quantum physics and the mind, see https://www.scottaaronson.com/blog/?p=1438

See also his blog posts on IIT, etc., such as: https://www.scottaaronson.com/blog/?p=1799


Hi Scott

There are non-quantum physical systems that exhibit positive and negative amplitudes and interference.

Can I factor large numbers by throwing rocks in a lake and measuring the water height at the right place? Why not?


Not Scott, and this has nothing to do with interference, but it is possible to "calculate" certain things using physical phenomena like water levels. It's basically a variation of "analog computing".

https://en.wikipedia.org/wiki/Analog_computer

An analog computer or analogue computer is a form of computer that uses the continuously changeable aspects of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved.

I don't think it has much to do with QC, but analog computing is interesting in its own right.


Thanks! This is indeed related and part of my inspiration for this question.

Some other interesting references:

- The waterfall argument in paragraph 6 of the mind-boggling "Why Philosophers Should Care About Computational Complexity". https://www.scottaaronson.com/papers/philos.pdf

- "NP-complete Problems and Physical Reality" https://www.scottaaronson.com/papers/npcomplete.pdf


So, basically, yes. Particularly if you consider most concepts in mechanical engineering. What you're talking about is the real implementation of what we call physics, just working as expected, automagically.



Lakes don't have the required quantum properties, I think, but this might work:

http://math.mit.edu/~bush/?page_id=484


What would be your advice for older (25+) people who want to get into science? Is it even possible? Or should I just accept that the train has left and focus on something else? Can you develop your math/logic/critical thinking skills at that point?

How about if you never excelled at these topics in school? Is hard work enough, or do you think some people are born with these talents?


See this previous answer: https://news.ycombinator.com/item?id=17429093

Sure, inborn talent matters. Hard work matters. And what about the less-appreciated converses of those two qualities: namely, acquired talent and inborn propensity for hard work? :-)

We could also mention drive to seize opportunities, judgment in picking the right problems to work on, social skill to get potential collaborators excited about those problems, and of course luck. And probably 200 other things I forgot. They all matter.

Maybe the key is, instead of struggling against the profile of abilities that fate handed you, to find a subject or problem that's an optimal fit for that profile. Had Darwin been forced to spend his life as a mathematician, Gödel as a biologist, or Einstein as an experimentalist, you would never have heard of any of them.


Practice makes perfect. Some people are born with the ability to learn some topics faster than others but over time hard work will always bear results. You can learn math/logic/etc the same way any college student learns.


The brain is able to change well into adulthood ("neuroplasticity"), and that includes mathematical/scientific/abstraction centers. There are plenty of folks who didn't get a great start in STEM, but through hard work and dedication, they pushed through the inherent frustration in learning STEM.

While some people might be born with a proclivity for these activities, I wouldn't say any individual could not get into science. For the truly uninitiated, check out Planet Earth, Blue Planet, or Cosmos. For the novices, check out your local astronomy club, ask scientists you know to explain their work to you, and don't be afraid to ask follow-up questions. Get into reading science articles in the popular press, and use those to find links to the real research articles, which will be VERY hard to read for beginners. Feel free to skim those, look at graphs/charts/evidence, and read the abstract/conclusions rather than the intro/methods and the whole thing. Finally, check out some MOOCs or local community classes/continuing education.


Good uplifting answer. But it is for what can lead to a hobby, rather than a career. Even certificates from MOOCs won't lead to a job in science, researcher or not.


>> Even certificates from MOOCs won't lead to a job in science, researcher or not.

That depends on how tightly you define "science". If you mean "science in traditional academia, working for/at a major research university or research consortium" then you are almost certainly correct. But if you expand the definition to include the corporate world, and roles that maybe aren't pure research, then I would argue that you can get a job doing science with less "paper credentials" than one might expect.

Whether or not that would/could apply to anything related to QC, I'm not sure, as I don't work in (or even really near) the QC domain. But to pick one example: in terms of machine learning / AI, I've definitely seen it. But maybe AI/ML is an exception to everything else just because it is (at the present) such an empirical / observation / experiment based domain.

Outside of all of that is the notion of "create your own job". If you want to be a researcher in Field X, start a company related to Field X and hire yourself. And, no, I don't intend that to be a glib answer, and I certainly acknowledge that it A. isn't easy, and B. is probably harder / easier in some domains than in others.


I'm not saying you can't, but you have to be aware of the demands graduate school makes on students, including working long hours on course work first and then actual research. If you are older with more commitments, it's just harder to commit to. It can also be pretty isolating, especially given that most of your fellow graduate students will be a bit younger than you (as well as being quite immature, as many have never had an actual career and come straight from college). By the way, what I've explained isn't great, it's terrible, but it's the system that exists now.

Of course, this is just a warning, just be aware of what you're getting into.


I should add, professors who advise you to go to graduate school have a financial incentive for you to go to grad school if either 1) you will be their students (you are a means for them to get grant money and it helps inflate their credentials, not to mention you will be their cheap labor) or 2) they are currently your professor in undergrad or taught you previously (you going to graduate school also helps up their credentials).

Even barring these possibilities, professors who have tenure or tenure track positions just by virtue of statistics are extremely lucky: many smart people write their theses and do substantial work, but there are a small number of actual positions relative to the number of PhD's awarded. For example, in Physics, I think there may be dozens of openings in the US every year while there are thousands of newly minted PhD's per year, and the majority of good positions as you can imagine go to graduates from a handful (O(1) number) of schools; anything below the top 10 is significantly less well poised for tenure track professorships. Thus, you have to take a professor's advice with a grain of salt, for they're potentially influenced by survivorship bias.


Does the recent result on BQP not being in PH relative to an oracle do anything to your priors about the power of quantum computers relative to classical computers? If you had to distill why that oracle separation works, what would you say is the main trick from the perspective of "where does the power of quantum computers come from"?


Honestly, almost all of the technical innovation in this breakthrough had to do with classical circuit complexity---once you know my Forrelation problem, there's almost no further input you need about quantum computation. (Well, a slight amount, since Raz and Tal had to modify Forrelation a bit to get their proof to go through.)

For an attempt at a popular summary of what the circuit lower bound innovations consisted of, see my blog post:

https://www.scottaaronson.com/blog/?p=3827

or, of course, their paper.

No, this doesn’t much change my priors about the power of quantum computation—for one thing, because we all (or at least I :-) ) were already pretty damn confident that Forrelation was not in PH. On the other hand, I was not expecting that the separation could be proved right now—certainly, not without first proving some weaker separations like BQP vs. AM.


What do you think would be the best route into Quantum Computing? Through physics, maths or computer science?


That depends on you and your interests! From the very beginning and up through the present, the majority of people in quantum computing have gotten there via physics. But my own background is in theoretical computer science---I've only gradually picked up little bits of physics through osmosis (what's a boson, what's a Hamiltonian, etc. :-) ) and there are still embarrassing gaps in my knowledge. And computer scientists and mathematicians, such as Peter Shor, Gilles Brassard, Umesh Vazirani, and Dorit Aharonov, have clearly made crucial contributions to the field, and we plan to continue doing so!

By now, there are large interdisciplinary programs in quantum information science (at Waterloo, Caltech, MIT, Maryland, Berkeley, CWI Amsterdam, Singapore, Oxford, and elsewhere), as well as smaller programs like the one we've been building at UT Austin -- where in some sense, the work of blending math, CS, and physics into the smoothie of quantum information science has already been done. So one obvious option for a student interested in this field would be to seek out one of those programs -- they typically have courses and research opportunities even at the undergraduate level.


Hi Scott, thank you for writing your blog all these years. Your Busy Beaver essay ignited my passion for computer science, especially in algorithm analysis, logic, undecidability, and probability theory. I used to be someone who only thought in code; thanks to you, I now also think in math.


Thanks; that made my day!!!


In your mind what is the solution to the "Dice Room" paradox you describe in "PHYS771 Lecture 17: Fun With the Anthropic Principle" (https://www.scottaaronson.com/democritus/lec17.html)?

I take it you don't believe in the SIA? Is this paradox irreducible? Is the world going to end soon?


There are at least two plausible solutions to the Dice Room, depending on whether you adopt SSA or SIA. I wish I knew something more insightful to say about it than that, but I still don’t know the right way to think about indexical probabilities. Do you?


Hi Scott, I'm astounded by the accomplishment of AlphaZero in quickly becoming a chess master without chess specific programming. Could a program of the same kind be adapted to infer or deduce the rules of chess from a large set of valid games? Or is that a different kind of problem?

If so, could it be adapted to learn the rules when we're not clear on them either, like those for the games of love or politics?


You should probably wait for an AMA with someone who's actually an expert in AI, and ask them this question.

Since you asked me, though, I did a quick Google search and found the following paper:

https://arxiv.org/pdf/1711.09667.pdf

where indeed they use machine learning to induce the rules of chess from a large number of played games (and then learn to play better than any human). It doesn't surprise me at all that this would be feasible with current tools, although I haven't studied the paper yet, and would be curious to know how many games you need before you've learned all the rules around, e.g., castling, en passant, promotion of pawns, and perpetual check.

My guess is that for a machine to learn the rules of the games of love and politics will take somewhat longer. :-)


>> where indeed they use machine learning to induce the rules of chess from a large number of played games (and then learn to play better than any human).

They didn't induce the rules of chess! What they did was learn an evaluation function for chess moves, from scratch, i.e. without giving it the rules or any hand-crafted features.

So they learned a classifier for good/bad board states, which was later used in an alpha-beta minimax search. However, the classifier didn't learn the rules of the game, only a mapping from board positions to labels, good/bad. This is not made explicit in the text, but minimax typically has a rule-base to generate moves as successor states for the search; so the rules of the game were hand-crafted, but did not contribute to the learning of the evaluation function.

It's extremely hard to induce a set of rules with neural nets, or indeed any statistical machine learning algorithm. I mean, imagine representing the rules of chess as a function; let alone trying to learn that function from data.

(Full disclosure: I study algorithms that learn rules from data; they're not neural nets).


Aha, thanks for that extremely useful clarification! I was also confused from the abstract and intro about exactly what the inputs and outputs were. Yes, the clear impression was created that they learned the rules---in the sense that they could then generate moves that would (usually? almost always?) obey the rules---when it sounds like really they just learned an evaluation function that encodes lots of implicit knowledge about the rules, of course, but that still requires a piece of code that knows the rules explicitly as a "guardrail" when generating moves.

Since you obviously know this subject, what are the best current results in the direction of inducing the rules of chess from played games?


On further thought, we should distinguish two problems:

(1) "Induce" the rules, in the sense that after training on millions of played games, you now have a neural net or whatever that plays mostly or entirely according to the rules even though it can't articulate them.

(2) Output an explicit description of the rules.

I could easily believe that (2) is beyond the current abilities of AI, even if it turns out that (1) is doable.


>> Since you obviously know this subject, what are the best current results in the direction of inducing the rules of chess from played games?

To be honest, I don't think there's much work on inducing the rules of chess, in particular. It's probably considered a) easy enough to do by hand and b) too hard to machine-learn.

>> On further thought, we should distinguish two problems:

(1) - Yep. The most likely approach would be a classifier trained to label moves as legal/ illegal. The resulting model would be a vector of numerical parameters so not a traditional rule base. It would also only be correct within some margin of error, probably not 0, limiting its uses (e.g. it wouldn't make sense to train it to play and then pit it against a player with a correct rulebase; they wouldn't be playing the same game).

>> I could easily believe that (2) is beyond the current abilities of AI, even if it turns out that (1) is doable.

Also yes. Exactly on point in fact.

When we're talking about learning rules, we're talking about learning automata, the subject of inductive inference, an older branch of machine learning (well, ish) that fell out of favour after a bunch of theoretical results showed it was basically impossible to learn any interesting class of automata from examples (the most famous is Mark E. Gold's result from Language Identification in the Limit; only finite languages can be learned from finite examples, anything else is only learnable "in the limit", from infinitely many examples, or an all-knowing oracle, etc).

Modern machine learning starts with Valiant's A Theory of the Learnable, which introduced PAC learning and a relaxation in the assumptions of inductive inference, about what should (and, therefore, can) be learned.

In short, the difference is that inductive inference attempted to learn complete definitions of various automata, whereas modern machine learning attempts to approximate them; well, strictly speaking it's about approximating functions, not automata as such.

So yeah, pretty much, like you say: (2) is, in principle, not possible whereas (1) might even be possible in practice.

Now, normally this is where I'd plug my own research, but I've already written plenty :)


Dah. Hang on, what am I talking about- chess has a finite number of moves so the "language" is finite. That means it's learnable in polynomial time, from finite examples ... in principle :)


Right, but even if chess were infinite, we could still imagine an algorithm that would output the smallest explicit rule set that was consistent with all the games it had seen so far, and that would quickly (though how quickly?) converge on the correct rules in the case of chess, even if it would be up against undecidability when trying to do the same thing for a completely arbitrary game.


It depends on how you mean "converge". It would certainly tend towards the full ruleset, learning more of them, or more accurate versions of them, the more examples it saw. It would not really terminate though, at least according to the learnability results from inductive inference.


Results from inductive inference show that you’re not going to converge in general. In the specific case of chess, however, my point was that we, because we know the complete list of rules, could plausibly look in from the outside and say after some finite point: “ah yes, now the system has gotten all of them.”


Oh, I see. Yes, for sure. That'd be the active learning framework [1]. I get the feeling it's not very popular because it inserts a human in the process. Many machine learning people are rather allergic to anything that a) introduces "human bias" to the learning process and b) reduces the potential for end-to-end automation.

The article you quoted above is a good example. Its main claim is that an evaluation function was learned without knowledge of rules or hand-crafted features, i.e. without explicit human participation.

It's a political issue, really. And a silly one---there are domains where very useful background knowledge is available: physics, language, mathematics, etc. There's no reason not to use it.

In chess, for example, it might be possible to learn a decent set of rules just by carefully choosing the examples to feed to the learner. Or, indeed, evaluating the learning so far, therefore acting as an all-knowing oracle.

___________________

[1] Well, strictly speaking active learning is where the system asks the user for examples/ counterexamples, but there's different options and what you suggest would fit in there.


Most of the rules of chess are trivial and thus should be deducible from observing less than one complete game. Rare things like castling might take a couple of games.

Love and politics have rulesets that are many orders of magnitude more complex; so complex that we don't even know how to write them all down.


>> Most of the rules of chess are trivial and thus should be deducible from observing less than one complete game. Rare things like castling might take a couple of games.

You'd think so. Try googling "chess rules induction".


How do you feel about the state of higher education? Specifically, do you think expensive degrees, underpaid PhD/Postdoc positions, and shortage of tenured jobs may discourage students from joining academia and push them into industry instead? It feels to me our best brains are at Facebook and Google doing cutting-edge research on how best to manipulate our mammal brains into buying ever more crap...


Yes, I'm (not surprisingly :-) ) a fan of increasing federal funding for academic research. Equally important, enough of the funding needs to be in unrestricted, competitively awarded grants for basic science, rather than tied up in huge projects that are someone's hobbyhorse. And dramatically cutting NSF graduate fellowships---something that the NSF will likely do if Trump gets his way with slashing the NSF budget---would obviously be a huge step in the wrong direction.

In quantum computing in the US, we have no right to complain right now about the level of federal funding, since almost everyone else has it worse! For us, the limiting factor has instead tended to be tenure-track faculty positions in CS and physics departments. All the funding and brilliant students in the world wouldn't matter if, outside of a few elite universities, the CS departments all said "this is really physics," the physics departments all said "this is really CS," and both treated it as a passing fad that might disappear in a couple years. Fortunately, though, the situation has steadily improved and I expect that to continue.

On the whole, I think the fact that Google, Microsoft, and other companies hire so many CS PhDs is good, not bad. That's a big part of what lets us admit lots of talented PhD students, and honestly tell them that their career prospects afterward are good. On the other hand, I think it's also true, as you say, that society could better capture the value of a lot of these people by trying to keep them in the open academic world.


For sure, more research is being done at the corporate level, and you're absolutely right about why that sucks (public institutions mean public knowledge). In some cases it's the only way to do it, though, unless we step up and fix academia (i.e., provide more funding for science).


Do you think the state of the art in quantum computing is already more advanced than we realize, in the same way that the state of the art in cryptography was when R, S, and A thought they had discovered RSA?


If I knew the answer, I couldn't tell you. :-)

More seriously: people have mooted this possibility for as long as I've been in this field (~20 years). But keep in mind that, when Cocks and Williamson at GCHQ discovered what would later become known as RSA and Diffie-Hellman key exchange---so, 3-4 years ahead of the open world---cryptography essentially didn't yet exist as an academic subject. Almost all the action was still closely tied to the intelligence community. So, no surprise that a not-yet-existing discipline had fallen behind!

By contrast, quantum computing has been openly studied for decades and has thousands of people working on it all over the world. The central thing that causes me to be skeptical of the "million-qubit quantum computer sitting in the NSA's basement" hypothesis, is that we pretty much know who the best people are, and we haven't noticed any effort to vacuum them all up analogous to the Manhattan Project.

Like, it's no secret that the NSA and DoD, and other military and intelligence agencies around the world, are interested in this field and fund a good deal of work on it. In fact my main grant right now (the Vannevar Bush Faculty Fellowship) comes from the Office of Naval Research. But if the secret world is light-years ahead of the open world, then they'd also need to be executing a giant cover operation of pretending to care about what we in academia are doing! :) So at what point does it become an unfalsifiable conspiracy theory?


Hi Scott

1. What do you think is the potential for emergence of a third computing model, apart from classical and quantum, which is sufficiently different in its fundamental computing paradigm from these? Do you believe biological substrates offer opportunity for computing which classical/QC cannot replicate?

2. If we achieve quantum supremacy, who do you think will be the biggest winners and losers, in terms of professional skill set and in terms of business models for private companies? Will physicists become much more influential if their ability to model and deliver new evidence (and thereby theory) and convert them into commercial products is advanced?

3. Would it be fair to say that the D-Wave machine is more an analog computer than a quantum computer? And do you think there is possibly a class of computing problems that might be better/more efficiently solved by analog computers than digital ones, without requiring QC?


1. Quantum computing is the most powerful model of computation we have based on currently known physics---in the sense that anything more powerful would need to be based on new physics. From the standpoint of theoretical computer science, a biological computer is "just" a different way to implement classical computation, typically with very slow speed but very enormous parallelism. It might someday have practical advantages, but unless we're totally mistaken about biology, it's not going to solve any problems efficiently that are outside BPP (i.e., classical probabilistic polynomial time). For that you need a quantum computer.

2. Yes, if quantum computing is successful, I imagine that might be good for the careers of many of the people involved in quantum computing.

3. Regarding D-Wave, there's this weird tendency to get fixated on words and definitions (but is it "really" QC or "really" analog?) even after you've explained the reality of the situation. See https://news.ycombinator.com/item?id=17426023 for my current summary.

Again, I'm skeptical that analog computers will ever be able to solve any problems outside BPP---for that I think you need a QC. (I.e., if not for quantum mechanics, I would have believed in the Extended Church-Turing Thesis. :-) ) Whether analog computers will ever again compete with digital ones on the constant factors, leaving aside the asymptotics, for problems that people actually care about, is a harder and more complicated question to which I don't know the answer.


Hi Scott,

Do you think we will see a Quantum Winter (or even several), similar to how we've had several AI Winters?

We see tremendous amounts of funding in academia (and also industry) to build ambitious projects that still have significant issues to overcome (both theoretical and engineering) before being any close to being implemented. At the same time there (still) seems to be much misunderstanding, such as Vivek Wadhwa's recent post[0] or various reports about an unhackable internet based on quantum information. I am afraid that this may lead to people having unrealistic expectations on what can be built and how fast it can be built, with the consequence that we will see less and less funding and investments in the nearby future.

[0]: https://www.scottaaronson.com/blog/?p=3645


Yes, when you look at the history of AI, a "quantum winter" is an obvious worry---i.e., a situation where the hype about QC becomes so unmoored from the reality as finally to cause the popular narrative to switch its polarity. Picture a pointy-haired boss who's now pouring money into QC, for no better reason than that it's new and faster and it's the future and people are talking about it and it could speed up his company's data mining by trying all possible answers in parallel. If something spooked that boss, one could imagine him joining a stampede for the exits with no greater understanding, canceling good research along with bad. If it happened to AI, then why not to us?

The fear of a quantum winter is one reason---if any reasons were needed besides the truth!---why I've spent so much effort on my blog over the past decade trying to counter irresponsible QC hype. Like, if anyone ever comes to me and says "you lied to me! it turns out that scalable QCs are really hard to build, and a lot more basic science needs to be done, and even if you did manage to build them, as far as anyone knows they'd only give you exponential speedups for a few special problems, not for most of the stuff my company cares about," I'll have a pretty enormous record that I can point to when I reply, "I WAS SCREAMING THAT AT THE TOP OF MY LUNGS AND YOU DIDN'T WANT TO HEAR IT!" For all the good it will do. :-)

Then again, maybe the worry is overblown. Some people claim that we've now passed the point where there will never again be an AI winter, any more than there will be an "electricity winter." The train just has too much momentum. Likewise, so long as the pressure continues to get more and more computing power and stave off the end of Moore's Law, it could be that QC will continue to entice people, regardless of the naysayers and regardless of how hard the engineering problems turn out to be.

I honestly don't know. I feel like I have a hard enough time understanding and communicating the truth about where this field stands in the present---and sometimes, trying to advance it by a small increment---without also prognosticating its future. :-) The latter involves all sorts of questions of economics, politics, and psychology that I have no special expertise about.


Hi!

So, what do you do when you're not flipping qubits around?

Got any cool stories you wanna share?


When I'm not flipping qubits around, I eat, sleep, blog, answer emails, play with my two kids, get depressed reading the news about US politics, and then get even more depressed reading people saying mean things about me and my nerdy friends on Twitter and Reddit.


Awwww man! That sucks, but screw those naysayers. They ain't the ones running point on quantum computing.

(I don't really have much to ask you since anything I could comprehend is easily googlable and I don't wanna waste ur time. Just wanna say keep up the good work and thanks!)


Thanks!!!


Hiya, Scott! I recently heard the Rationally Speaking podcast episode where you were the guest (yes, I'm catching up with the RS archives during my long commute). I've also started reading QCSD (hey, if it's going to be our decade's GEB, we may as well give it an acronym. :: grin:: )

I've already corrected my very mistaken understanding of how QC works, as in the tagline of your blog. Are there any other concepts that we need to shake off that would make explaining these concepts easier for the layman? Do you have any words of wisdom for the masses?


Sure, you could shake off the idea (if you haven't already...) that quantum entanglement means communication faster than light. This is, interestingly, exactly the same kind of error as the one that says that a quantum computer is just like a classical computer but with exponential parallelism. Namely, you look at the resources that would be needed to simulate a quantum system using a classical system (faster-than-light communication in the one case, exponential parallelism in the other). You then confuse those with the resources that the quantum system itself provides you.

In reality, quantum mechanics is carving out a third profile of abilities, which is neither as weak as the classical profile, nor as strong as the thing that people mistakenly overcorrect to once you tell them that the classical profile is inadequate. E.g., you can violate the Bell inequality but NOT send instantaneous signals; you can solve factoring in polynomial time but probably NOT NP-complete problems. As I like to say (someone already quoted it elsewhere), it's a sufficiently strange state of affairs that no science-fiction writer would have had the imagination to invent it.
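To put a number on the "third profile" Scott describes, here's a minimal numpy sketch (not from the thread; the state, observables, and angle choices are the standard CHSH setup) showing the singlet state violating the Bell/CHSH inequality, whose classical bound is 2:

```python
import numpy as np

# Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def obs(theta):
    """+1/-1-valued spin measurement along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def corr(a, b):
    """Quantum correlation E(a,b) = <singlet| A(a) (x) A(b) |singlet>."""
    return np.real(singlet.conj() @ np.kron(obs(a), obs(b)) @ singlet)

# Standard CHSH measurement angles
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, -np.pi / 4
S = corr(a, b) + corr(a, b2) + corr(a2, b) - corr(a2, b2)

print(abs(S))  # ~2.828, i.e. 2*sqrt(2): above the classical bound of 2
```

Note what this does NOT give you: the correlations exceed anything classically possible, yet neither party's local measurement statistics carry any signal, which is exactly the "stronger than classical, weaker than people's overcorrection" point.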


Quantum information theorists such as Patrick Hayden and Leonard Susskind suggest that information can be neither created or destroyed. Do you ever think about such things? If so, how does it enter into your work?


Yes, a fundamental property of quantum information undergoing unitary evolution (meaning, no measurements, no discarding stuff into the environment, etc.) is that it can be neither created nor destroyed. This fact enters into my work, or the work of anyone else in quantum information, sort of like how the number '2' enters into our work---i.e., so thoroughly that it would be hard to pick it out as a separate ingredient!


How do you feel about the Axiom of Choice?


It feels really uncomfortable that, if you have infinitely many people wearing red or blue hats that they can't see, then they can all guess their own hat color with only finitely many of them being wrong. Not to mention Banach-Tarski and a hundred other strange phenomena. This all militates toward rejecting AC.

But then, if we reject AC, we can have infinite sets that are incomparable (i.e., they're not isomorphic and neither is larger than the other). So pick your poison!

Thinking about such things for too long makes me feel grateful that I spend most of my time in the finite world (or, let's say, the world of continuous parameters that we only ever measure to finite precision, so that the statements we care about can ultimately be phrased arithmetically). In this world, because of e.g. the Shoenfield absoluteness theorem, anything that can be proved with the help of AC can also be proved without it.


Thanks for the answer!

For other people reading, here's the blue/red hats puzzle that Scott was referring to (edited from https://en.m.wikipedia.org/wiki/Hat_puzzle#Prisoners_and_hat...):

> A countably infinite number of prisoners, each with an unknown and randomly assigned red or blue hat, line up in a single file line. Each prisoner faces away from the beginning of the line, and each prisoner can see all the hats in front of him, and none of the hats behind. Starting from the beginning of the line, each prisoner must correctly identify the color of his hat or he is killed on the spot. The prisoners have a chance to meet beforehand, but once in line no prisoner can hear what the other prisoners say. The question is, is there a way to ensure that only finitely many prisoners are killed?

The solution is that it is possible, and you can look on the Wikipedia page to find out how.

Even more paradoxical is that the prisoners can still save all but finitely many of themselves even if there are infinitely many possible hat colours. For example you could allow the hat colours to be specified by three real numbers giving an RGB value.


As a follow-up, what is your position on intuitionist/constructivist logic?


I don't begrudge others their sincerely held faiths! But I confess that I haven't yet seen the need for nonstandard logics for anything I've personally been interested in.


Does it make more sense to use an analogy where we have infinitely many people with a strongest 'thought' (a statement they hold as most true), they can all guess their own thought with only finitely many of them being wrong?

My apologies if that sounds ridiculous; I'm honestly not that attuned to making much sense, even to myself.

> Thinking about such things for too long makes me feel grateful that I spend most of my time in the finite world

Hmm.


Are you collaborating with the National Labs at all, i.e., http://quantum.lanl.gov/q_computing.shtml


I had close colleagues at LANL, but many of them (Leonid Gurvits, Howard Barnum, Manny Knill) have since left. I'll probably visit Argonne and Sandia sometime in the next few years to give talks, and to learn about what they're doing in quantum computing. I don't think I've written papers yet with anyone from those labs, but, uhh ... "there are no strangers, just coauthors you haven't coauthored with yet!" :)


You wrote this:

> For example, breaking almost any cryptographic code can be phrased as an NP problem. So if P=NP—and if, moreover, the algorithm that proved it was “practical” (meaning, not n^1000 time or anything silly like that)—then all cryptographic codes that depend on the adversary having limited computing power would be broken.

Can you explain this reasoning more precisely? The class P contains difficult problems that require O(n^googolplex) algorithms, so are not solvable in practice. The fact that P=NP would not make these problems any easier.


He explicitly says «not n^1000 time or anything silly like that» in the sentence you quote; n^googolplex would be way more silly.


Yup


My point exactly. The class P contains this silly stuff.


He's already covered that; not sure what's left to explain: he's said that if P=NP and if we get there with a practical running time algorithm (e.g. one that solves 3SAT in O(n^4) or something, and with reasonable constants too), then such-and-such consequences. So what are you asking?


The second part of the claim seems much, much stronger than the first part, but he makes it sound like it's a minor detail. I am confused as to why.


The claim was that "all cryptographic codes that depend on the adversary having limited computing power would be broken."

Here's a (very slightly) more rigorous justification:

If P=NP, then any NP problem is in P with at most a polynomial slowdown. That is, if there's an algorithm taking T steps on a non-deterministic Turing machine, we can solve it on a deterministic Turing machine in f(T) steps, where f is a polynomial. Presumably, a "practical" algorithm would be one for which f has a low degree.

The kinds of algorithms we're concerned about in cryptography (and plenty of other fields) already have low time complexity. For example, generating or verifying an HMAC is O(n) in the length of the input. So if we had a way to solve NP problems with a low-degree polynomial slowdown, we could break HMACs in low-degree polynomial time.

It doesn't matter that there are O(n^1000) problems out there that would still be realistically unsolvable, because those problems don't have practical applications in the first place.
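To make the "fast verification" point concrete, here's a toy sketch (my own illustration, not from the thread; the MAC construction and 2-byte key are deliberately unrealistic). Checking a candidate key is one linear-time hash, so key recovery is an NP-style search; brute force stands in here for the hypothetical practical P=NP algorithm:

```python
import hashlib
from itertools import product

def mac(key: bytes, msg: bytes) -> bytes:
    """Toy MAC: verifying a (key, msg, tag) triple is one linear-time hash."""
    return hashlib.sha256(key + msg).digest()

msg = b"transfer $100"
secret = b"xz"                      # tiny 2-byte key, for illustration only
tag = mac(secret, msg)

# The attacker's problem "find a key consistent with (msg, tag)" is in NP:
# each candidate is checkable in linear time. Here we brute-force the 2-byte
# keyspace; a practical P=NP algorithm would do the equivalent search in
# low-degree polynomial time even for realistic 16- or 32-byte keys.
found = next(
    bytes(k) for k in product(range(256), repeat=2)
    if mac(bytes(k), msg) == tag
)
print(found)  # recovers b"xz"
```

The asymmetry being exploited is exactly the one in the parent comment: verification is cheap by design, so the only thing protecting the key is the assumed hardness of the search.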


I believe section 1.2.1 of Scott's P ?= NP survey addresses your question:

https://www.scottaaronson.com/papers/pnp.pdf


I took a grad-level quantum computing class which I didn't quite have the physics background for, and the lecture that lost me, about 3 weeks in, was on the different kinds of physical gates quantum computers use.

My recollection is that there were 3 necessary gates for a quantum functionally complete set: one gate was classically functionally complete; the other two are where I got lost. Can you explain like I'm 5 (or explain like I'm an undergrad with minimal QM knowledge) what these other gates are doing?

Thanks!


Actually, if you already have a classically functionally complete gate (say, the Fredkin or Toffoli gates), then you only need one other gate to get a functionally complete set for quantum computing. This one other gate could be the Hadamard, which I discussed in other answers. The Hadamard gate is needed to put your machine into a superposition of states, and then also to create interference between the different branches of the superposition. Of course, if you have no superposition and no interference, then it isn't much of a quantum computer!
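The two Hadamard behaviors Scott mentions, creating a superposition and then interfering its branches away, can be checked in a few lines of numpy (a sketch of mine, using the standard gate matrix):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0])                   # the |0> state

once = H @ ket0    # equal superposition: amplitudes (1/sqrt2, 1/sqrt2)
twice = H @ once   # applying H again: the |1> amplitudes cancel

print(np.round(once ** 2, 3))   # measurement probabilities [0.5 0.5]
print(np.round(twice ** 2, 3))  # back to [1. 0.]: destructive interference
```

The second application is the key point: the two paths to |1> arrive with amplitudes +1/2 and -1/2 and cancel, something no classical probabilistic gate can do.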

If your question was instead to explain the concepts of superposition and interference themselves, then unfortunately that would take more time. However, you could try some essays that I wrote a while ago

https://www.scottaaronson.com/writings/highschool.html

https://www.scottaaronson.com/blog/?p=208

in addition to the resources that have been linked to elsewhere on this thread.


What advice would you give for productivity/getting things done?


I'm like the worst person on earth to be giving anyone else advice about that!! Do you have any idea how much time I waste obsessively reading the news, or worrying about people saying mean things about me on social media, rather than doing research or anything else useful for the world? I suppose my advice would be: don't do what I do. As my former PhD adviser, Umesh Vazirani, likes to tell people, "concentrate on the high-order bits."


> how much time I waste obsessively reading the news

This is oddly comforting. As Tim Ferriss said in Tools of Titans, every successful person is dysfunctional in some way. I guess the trick is to work around your own personal deficiencies, and that's something everyone must figure out on their own.


It's actually really encouraging to hear that Scott has many of the same vices as me.


I hear that. But followup question! What do you think of commitment devices (like Beeminder!) for imposing discipline on oneself?


Hi Scott,

Is there any chance advances in quantum computing will affect cryptocurrencies? I.e., might it be possible to shortcut mining or mess with the quorum of the distributed ledger?

I'm curious because cryptocurrency is so hyped; it would dismay me if advances in quantum computing were to disrupt the current landscape, or, even better, improve it in some way. Entanglement is already deployed and in use for data transit; I'm curious how annealing or other physics might be a game changer.


See this recent survey article: https://arxiv.org/abs/1710.10377

Briefly, a fully fault-tolerant quantum computer could give a square-root speedup for Bitcoin's proof-of-work, and could completely break its elliptic-curve-based signature scheme. Both issues could in principle be fixed by migrating Bitcoin to quantumly harder problems, though in practice doing so could open up security holes of its own. This hasn't been done, but I think maybe there are other cryptocurrencies trying to use quantum-secure crypto from the outset? Googling just now, I found something called "Quantum Resistant Ledger" (https://theqrl.org/) -- does anyone here know more about what's out there?

Let me stress that none of this is a concern with the QCs of the near future, which will have at most a few hundred decent qubits and no error correction, and which will not be able to threaten Bitcoin or any other cryptography.
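The square-root speedup for proof-of-work comes from Grover's algorithm, and the amplitude mechanics behind it are small enough to simulate directly. Here's a statevector sketch of mine (16 items, so 4 qubits; the oracle and diffusion steps are the textbook ones, not anything Bitcoin-specific):

```python
import numpy as np

# Grover search over N = 16 items: ~pi/4 * sqrt(N) iterations instead of
# the ~N guesses a classical brute-force search would need on average.
N, marked = 16, 11
state = np.full(N, 1 / np.sqrt(N))               # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))  # = 3 for N = 16
for _ in range(iterations):
    state[marked] *= -1                  # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: invert about the mean

print(state[marked] ** 2)  # probability of measuring the marked item (~0.96)
```

For mining, the "marked item" would be a nonce whose hash clears the difficulty target; the point is that the speedup is quadratic, not exponential, which is why this is a nuisance for proof-of-work rather than a break, unlike Shor's algorithm versus the elliptic-curve signatures.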


Hi Scott,

Do you think that chaotic systems would be better analysed by quantum computing over classical computing? More generally, is BQP powerful enough to deal with chaotic systems in the same way P is for linear systems?

Oh, and I think your review of "Enlightenment Now" was a bit too rosy. When he analyses the data he's superb, but he seems to lambast people he heavily disagrees with. It's a tad disheartening.


No, I don't think that QCs will help much for simulating chaotic systems, except insofar as those systems are also quantum-mechanical. For (e.g.) predicting the weather, there may be small quantum speedups that you can get here and there, from Grover's algorithm and faster gradient computation and so forth, but at a fundamental level, as far as anyone knows, you still need to just iterate the partial differential equation from one time step to the next, same as a classical computer does.

Stepping back, note that "quantum" and "nonlinear" are two completely different concepts -- in fact, at the level of amplitudes (i.e., the Schrodinger equation), quantum mechanics is the one example we have in physics of a perfectly LINEAR theory! Alan Sokal had a lot of fun with that point in his "Social Text" parody article: http://www.physics.nyu.edu/sokal/transgress_v2/transgress_v2...

On the other hand, this is perfectly compatible with quantum systems having chaotic phenomena at the level of observables (like the positions and momenta of particles), and in fact there's a whole field called "quantum chaos" that studies such situations.


Why is P=NP/P!=NP so difficult to prove?

I just wanted to say that I think your writing helped me appreciate Michael Cohen and learn what an amazing person he was. The more I read about him, the more I want to be like him. What qualities do you think helped him contribute and be such a great person? What do you think a lackluster programmer could do to be more like him?


If you have not seen it, you may be interested in his 116-page survey of the problem.

https://www.scottaaronson.com/blog/?p=3095


Thank you for posting that! That is a great overview and exactly what I was looking for.


If I use a quantum random number to pick my actions does that mean there's a universe where I took each option?

This terrifies me


Relax, my friend. If you accept the Many-Worlds Interpretation at all, then you're constantly splitting into parallel copies, millions of times per second, with every little decoherence event that takes place in your brain. Whether or not you use a quantum random number generator is totally beside the point. :-)

(But if it makes you feel less terrified, you could also choose one of the interpretations according to which "only our branch is real." As far as anyone can tell today, there are no consequences for any experiment we'll ever be able to do.)


Why would this only apply to using a quantum random number to make a decision?

Also, what do you find terrifying about the existence of multiple universes where every possible outcome of a decision you make plays out?


nvm i didn't want you to answer my question anyway


How far are we from "emergence" in terms of AI ecosystem ? Will quantum computing pave the way for it ?


Sorry, I don't know what "emergence" means in this context. If you mean AGI, I think (hope?) that we're still quite some ways away from it. Yes, quantum computing could help with AI -- for example through Grover's algorithm, which lets you solve many search, optimization, and planning problems in roughly the square root of the number of steps you would need classically. But it's a complicated story: many of the problems we care about will still be asymptotically hard even for quantum computers; conversely, as the spectacular recent achievements of deep learning and reinforcement learning (not to mention our own brains? :-) ) remind us, there's a great deal that can be done even with classical computers---often, outside the regime where we understand theoretically why the methods work. If you check back in 5 years or so, I'm optimistic that we'll know more about the applications of quantum computing to AI than we do right now.


"not to mention our own brains"

Is this a known known that our brain doesn't use QC?


No, it's an unknown known. :-)


Hi Scott. I've read your 'Why Philosophers Should Care About Computational Complexity' - I found it to be a very nice read. Is your book 'Quantum Computing Since Democritus' worth getting my hands on? Is it not going to be too repetitive given that I've read your article?


Thanks!

Given your particular situation, it sounds to me like you should not only buy QCSD, but buy 200 copies of it to give all your friends and family. :-)

More seriously, the overlap between QCSD and WPSCACC is actually relatively small---e.g., WPSCACC just has one short section about quantum mechanics, whereas that's at least half of QCSD. QCSD is also written in a much chattier style (but at the same time, it has more space to exposit basic material). I can't guarantee that you'll like QCSD, but I hope that gives you a sense for the diff.


I think you've written that QM is in part probability math using complex numbers. I've also read that human decision making doesn't map well to classical probability. Do you know if QM/"complex probability" has been used to build better models of human decision making?


Every few months there's another paper on the arXiv trying to do exactly that. Pretty much without exception, I've found the papers to be terrible -- leaping immediately to QM without first considering more prosaic stories for whatever human behavior they're trying to explain. It's like quantum mechanics is a hammer, and explaining human decision-making is a steak, and all these people want to use the hammer to slice the steak for some weird reason, their strongest argument being that the problem of slicing the steak doesn't map well to their bare fingers. It's amazing what some people think they can get others to swallow just by using the word "quantum"! :-)


What would be the best way to understand the relation between amplitudes and probabilities in the “non-quantum” world? In your NYT piece you say that it would be weird to give a “√(-1) probability of rain tomorrow”, but are there relatable uses or cases for amplitudes outside of quantum physics?


It's not obvious why you'd want to talk about complex amplitudes, whose squared absolute values are probabilities, outside the context of quantum mechanics. Maybe a better way to say it is: once you're talking about such amplitudes, essentially by definition you ARE talking about quantum mechanics. :-)

Having said that, in classical probability theory, it's sometimes useful to look at the (positive) square roots of probabilities, for example to get a distance measure between probability distributions. See here:

https://en.wikipedia.org/wiki/Bhattacharyya_distance
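The square-root-of-probabilities idea is short enough to sketch directly (my illustration, following the standard definition on that page):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Distance between discrete distributions via their sqrt-probability overlap."""
    # Bhattacharyya coefficient: inner product of the "amplitude-like" vectors
    bc = np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))
    return -np.log(bc)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

print(bhattacharyya_distance(p, p))  # ~0: identical distributions overlap fully
print(bhattacharyya_distance(p, q))  # small positive value (~0.006)
```

Note the coefficient is exactly an inner product of vectors of square roots of probabilities, which is why it's the natural classical cousin of the fidelity between quantum states.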

Also, we've often been able to use quantum tools to prove new results even about classical theoretical computer science. See here for a beautiful survey, though one that's already a decade out of date:

https://arxiv.org/abs/0910.3376

In this way, the mathematical tools that we've developed in quantum information can "pay rent" in classical CS, even if we counterfactually imagined that our world wasn't quantum-mechanical at all. It would take some time to go through an example of this, but (e.g.) it was shown that if a certain classical error-correcting code existed (called a 2-query locally decodable code of subexponential size), then an even better quantum error-correcting code would also exist, but the latter was something that people already knew how to rule out.

In these situations, some people would argue that we're not "really" using quantum physics or amplitudes; we're just taking MATH that was developed for that purpose, and repurposing it for something else. But it would be weird to say in a talk, "I'm now going to introduce a unit vector of complex numbers that's just an ordinary vector, nothing physical at all about it, then apply the following tensor product of 2x2 and 4x4 unitary matrices to it...," when everyone knows full well that all your intuition about this came from quantum states. Indeed, I confess that even when I'm doing linear algebra that has nothing to do with QM---i.e., my vectors really are just vectors of real or complex numbers, not amplitudes---I sometimes slip up and use the physicists' notation for quantum states (called the Dirac ket notation).


Hi Scott,

David Deutsch, in his book "The Fabric of Reality", gives an example of a problem that is presumably easily solvable by a quantum computer. On the other hand, he explains that the total number of particles in the universe is smaller than the number of computations that will take place. The question that he poses is: where was the problem actually solved? His answer is that there are parallel universes, and the computation is performed in these universes. He also stipulates that the "weird" quantum behavior of particles, such as spooky action at a distance, could all be explained by the multiverse. What is your opinion about the multiverse and the potential of quantum computers to solve quantum physics problems?


I found the relevant passage.

"Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?"


If advanced quantum computing were available in smartphones and other small devices today, what applications would be possible that are not currently? (Or which would be greatly improved)

Would it affect the things many people do most on their phones like messaging, news and social media


As far as I know, it would have zero effect on any of that ... unless you need to simulate quantum physics or chemistry, factor large integers, calculate discrete logarithms, or possibly solve some large optimization problems while doing your messaging, news, and social media. :-)


Hey there Scott!

I just wanted to say I wrote a student paper on quantum computational complexity last year and I felt like your papers made up almost 90% of my references. Thank you for making my learning about the subject possible!

On to my question: I'm working on a capstone project right now that's using quantum computing to create a small video-game. I'm using the 5-qubit quantum experience from IBM and I was wondering if you had any cool ideas/suggestions for small game experiences that could use actual quantum computing resources to teach people about the properties of quantum mechanics?

Thanks again for all your work in understanding the quantum world, Scott.


That's a tough one! So far, I've only seen ONE game meant to teach quantum mechanics that I thought actually worked, in terms of being (1) actually about QM rather than some vague analogy, and (2) fun to play. It's this one:

http://quantumgame.io/

Notably, this game doesn't even try to teach about entanglement (which, no surprise, is hard to keep track of in your head!). It deals only with a single photon passing through a network of beamsplitters and phaseshifters: a situation that has one foot in quantum physics and one foot in classical physics (the macroscopic state of a laser beam obeys exactly the same math). But the puzzles are really clever!

If you're just trying to create an educational game, why bother using the Quantum Experience? Won't it just introduce enormous errors and delays (~30 seconds per run when I tried it), complicating and obfuscating whatever you're trying to teach? Why not just make some puzzles that force people to reason about small, idealized quantum systems, along the lines of the successful example above?


That game is really quite cool, and certainly in the same vein as what I was thinking of doing - thanks so much for the suggestion (I've already spent too much time solving puzzles)!

I am looking at using something like the Quantum Experience for two reasons: 1) Because giving people real quantum computing results is really cool, and 2) Because it's my capstone project and the work has to have a certain threshold of technical exploration. That being said, I'm not well-versed in alternatives that could maybe just simulate quantum properties without having to go through an actual quantum computer.

I have been looking into creating a small quantum neural net (with either real or simulated qubits) to show how that differs from a neural network running on classical computing. I've been trying to work that into a game idea but I'd be lying if I said I've come up with anything compelling yet. If you got any cool game mechanics/ideas you think quantum computing resources would lend themselves to then I'm all ears!


Do you believe there is now a quantum Moore's Law in place? I've seen graphs showing quantum chips from Google, IBM, and Intel plotted on a log scale that are suggestive. (These exclude adiabatic quantum computers which are different beasts.)

If there is do you think we're perhaps less than 10 years from QC capable of breaking common number theory based asymmetric cryptographic algorithms like RSA or elliptic curve for at least lower key strengths? That's what these graphs suggest.

(I know breaking crypto is not by any stretch the only or the most valuable thing you can do with QC but it's the one that gets the most press and it's relevant to my current work.)


I think it's too early in the field, and there's too much basic research still to be done, to talk usefully about a "Moore's Law." For godsakes, we're not even sure yet whether superconducting qubits or trapped ions or something else (or a hybrid) will be the way forward!

Yes, you can make plots of the number of qubits, coherence times, etc. as a function of year -- and if you listen to talks by John Martinis, Chris Monroe, or the other leading experimentalists, you'll often see such plots. But at the very least, you need to look at both dimensions (qubits and coherence time) -- not just at "number of qubits," which will be severely misleading! And even if you do, there are very few data points to use for extrapolation, since it's really only within the last ~6-7 years that people have even gotten qubits to work well in isolation, let alone scaling them up. So it's really hard to extrapolate.

Like, I'm hopeful that within the next decade, we'll have systems with a few hundred qubits that will be good enough to do some useful tasks that are classically intractable (such as quantum simulation), though they certainly won't be threatening public-key crypto yet. But I'm not sure even about that. And I'd prefer to see what happens with this before speculating about the timescale for the next step, of building a full universal QC (the kind that would break our existing public-key cryptosystems)!


Even if the number of qubits doubled every year from here on out, it would be 15+ years until we had enough working space to run Shor's algorithm on modern cryptographic key sizes.

Back of the envelope:

- It takes 9n error-corrected qubits to break an n-bit ECDH key [1]

- Each error-corrected qubit requires ~2500 physical qubits [2][3]

- Typical ECDH key size is 256 bits [4]

- This year would be the year of ~64 physical qubit machines. [5][6][7]

- log_2(256 * 9 * 2500 / 64) ~= 16.4 years

Note that every one of the quantities in the estimate is subject to future research. E.g. the error-corrected qubit size is smaller when using lattice surgery, but not enough to really move the needle on the time estimate.

[1]: https://arxiv.org/abs/1706.06752

[2]: See section VI of https://arxiv.org/abs/1805.03662

[3]: https://docs.google.com/presentation/d/e/2PACX-1vReeRxH80Ruu...

[4]: https://crypto.stackexchange.com/a/47337/7860

[5]: https://ai.googleblog.com/2018/03/a-preview-of-bristlecone-g...

[6]: https://www-03.ibm.com/press/us/en/pressrelease/53374.wss

[7]: https://newsroom.intel.com/press-kits/quantum-computing/#49-...
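The arithmetic in the estimate above can be reproduced in a few lines. (All the input numbers are the parent comment's assumptions, taken from its references, not established figures.)

```python
import math

# Assumed quantities from the comment above (all subject to future research):
key_bits = 256               # typical ECDH key size
logical_per_key_bit = 9      # error-corrected qubits needed per key bit
physical_per_logical = 2500  # physical qubits per error-corrected qubit
current_physical = 64        # ~64-physical-qubit machines this year

# Total physical qubits needed to run Shor's algorithm on a 256-bit key:
needed = key_bits * logical_per_key_bit * physical_per_logical  # 5,760,000

# Years of annual doubling to get from 64 qubits to that many:
years = math.log2(needed / current_physical)
print(years)  # ~16.46, i.e. 15+ years even with yearly doubling
```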


Do you have any thoughts on the IEEE Quantum Computing Nomenclature Working Group?

http://standards.ieee.org/develop/wg/QCN-WG.html


No, I hadn't heard of it before your comment! Please no one tell the physicist David Mermin about this, or he'll picket the group with his long-running campaign to change the spelling of qubit to "Qbit." :-)


Is it pronounced kew-bit or kuh-bit or kwu-bit?


"cue-bit." I.e., the name of the letter Q, followed by "bit."


Depends on how you pronounce 'Q', I guess...


Hi Scott,

This may be an ill-formed question, but it's something I've been thinking about for a long time:

Do you think the human mind is equivalent to a Turing machine, or somehow above it? Assuming we have infinite tape/memory and time.


There are really two questions here.

The first one is, can the human brain be simulated by a Turing machine in its input-output behavior (to a suitable precision, given appropriate initial data, yada yada)? Note that, even though you specified "infinite tape/memory and time," from the outset I'm going to outlaw "simulations" that simply cache what the human would do in response to every possible stimulus in a gargantuan lookup table. For that kind of simulation trivially exists, and its existence tells us nothing interesting. I'll insist instead that the simulation be "reasonable"---so, at a minimum, that it simulate the brain without an exponential blowup in size.

I don't know for certain, but as a believer in the physical Church-Turing Thesis, my guess is going to be yes, this is possible. I.e., I guess that the laws of physics are fully computable---that there's nothing in them like what Roger Penrose wants---and I see no evidence that the brain can violate the laws of physics that govern everything else in the universe.

(Even here, though, there remains the extremely interesting question of whether, even in principle, one could scan a specific person's brain well enough to "bring additional copies of the person into being," without simply killing the person if one tried. My friends in the futurist and singularity movements expect that the answer is yes, but if one needed to scan all the way down to the molecular level, then the No-Cloning Theorem of quantum mechanics would certainly present an obstacle. For my thoughts and speculations about this question, see my "Ghost in the Quantum Turing Machine" essay, which was referenced elsewhere on this thread: https://www.scottaaronson.com/papers/giqtm3.pdf )

Anyway, the second question is whether, even if we agree that a human brain can be simulated by an appropriate Turing machine, there's some special sauce of consciousness that makes there be something that it's like to be us, but nothing that it's like to be a conventional Turing machine. I.e. there's Chalmers' "hard problem of consciousness."

Here I'm going to plead ignorance, with my extenuating circumstance being that we're up against arguably the ultimate mystery of existence. :-)

Yes, I feel like there's something that it's like to be me. (Though if I were a philosophical zombie, just a glorified Turing machine made of meat, I could've told you exactly the same thing, so you should take whatever I have to say about this with a grain of salt. :-) )

And yes, I take it as axiomatic that there's similarly something that it's like to be you---or for that matter, to be a chimpanzee or a dog (but an ant? a bacterium? unclear). No, I don't understand it. No, I don't know what properties of a computational process are either necessary or sufficient to cause there to be something that it's like to inhabit that process; I don't know how we should build an AI either to ensure that it's conscious or to ensure that it isn't. In practice, we'd probably have to extend a presumption of consciousness to anything that behaved sufficiently similarly to us---that, famously, was Turing's point in 1950. But even here there are many uncertainties: for example, would you still take a machine to have "passed the Turing Test" if you could perfectly predict everything it would say given a copy of its source code---not just in principle but in practice?


Hi Scott,

I am interested in attending graduate school for a PhD in Computer Science and am beginning to think about the areas of research I'd like to get involved in.

Below are a few sporadic questions.

1) I really like mathematics, so what are the foundational areas of mathematics in quantum computing? What areas of mathematics should researchers know very well?

2) Are there any exciting connections between machine learning and quantum computing that you know about?

3) What is a problem that really excites you about the field at this very moment?

4) Can you direct me to good books, papers, or resources for the absolute beginner in quantum computing?


Hi Scott,

Of the two main possible applications of QC, i.e. computational chemistry and cryptography, it's often just cryptography that is mentioned in the media. The Wikipedia entry on QC, in its "Potential" section, discusses mostly cryptography, while only two sentences are dedicated to chemistry. Cryptography is also used as the toy problem for QC prototypes, i.e. factoring integers.

However, as I understand it, QC-for-cryptography will break existing techniques, and will not (from an application standpoint) bring anything new to the table.

Therefore, my question: does QC have a PR-problem?


It has multiple PR problems, including the one you mentioned. :-)

Everyone who follows QC knows that simulating quantum chemistry could be commercially important, while breaking RSA is not. And as far as I can tell, no one cares anymore about doing tiny demonstrations of Shor's algorithm, to factor 21 into 3x7 or whatever. Over the past 5 years, the experimental interest has shifted to (1) demonstrating sampling-based quantum supremacy (which has nothing to do with Shor's algorithm), (2) demonstrating the building blocks of quantum error correction, and (3) the prospect of doing useful simulations.

(A central reason for this is that, until you have a full fault-tolerant QC with millions of physical qubits, you almost certainly can't run Shor's algorithm in a way that will outperform a classical computer, even if you wanted to. By contrast, people are excited right now that they might be able to learn something new for physics or chemistry with just a few hundred good physical qubits.)

So anyway, we all know all of this, but popular expositions still like to concentrate on breaking public-key crypto because of its wow-factor (and, of course, the undisputed theoretical importance of Shor's algorithm, and its possible eventual security importance). In addition, there's often a time-lag problem, where the people working in the field have one set of concerns, and then the broader discussion is still stuck in the world of 1997.


Hi Scott,

what would you say are the main reasons why you, in your mid-thirties, are a professor at a prestigious university?

Sheer determination and perseverance? Less partying and more learning? Your upbringing?

I’m asking because sometimes there’s an interesting story behind it, like with Matt Might (http://matt.might.net/articles/tenure/) who found himself in an objectively bad situation — a child believed to die soon — making him reevaluate priorities and becoming successful as a side effect.


Well, the "less partying" part sounds accurate. :-) In grad school, for example, I had some good friends, but very little "party life" and certainly very little dating life---not out of choice, just because I didn't even know how to start. As a result, I was often severely depressed. On the positive side, though, I was more productive at research than I've ever been since! I had unlimited time to think, and also enormous drive to succeed so that at least something would be going right in my life. Just as important, though, I loved actually working on research problems (still do), as opposed to just the recognition you get from other people after you've solved one. I could form an infatuation with a particular problem that would last weeks or months.

Now that I have tenure and a wife and two kids, and get to travel the world and study what interests me, I'd like to tell myself a story according to which "all the hard work and sacrifice ultimately paid off." On the other hand, I know people who had fantastic social lives all throughout college and are now also highly successful---an instance of the general phenomenon that there's no justice in the universe. :-D

Anyway, if you're interested, I told some more of my story in my Scientific American interview with John Horgan - https://blogs.scientificamerican.com/cross-check/scott-aaron...


I have so many questions for you, but lets just go with one.

Can you explain to me the concept of "logical qubit", and the whole debacle about qubit quality? I'm still getting confused again and again over this.


Qubits lose their quantum superposition states as they interact with their environment---effectively getting "measured" by their surroundings. This is called decoherence, and is the central engineering obstacle to building useful QCs.

The longer a qubit lasts while maintaining its quantum coherence, and (especially) the more operations you can do on it while keeping it coherent, the better that qubit's quality.

In practice, we will never be able to build qubits of perfect quality (i.e., ones that maintain their coherence forever except when we deliberately measure them). In the 1990s, some people thought this would be a fatal obstacle to scaling up QCs. But then two closely-related discoveries, called quantum error-correction and quantum fault-tolerance, dramatically changed the picture. These discoveries showed that, even if your physical qubits fall short of perfection, as long as they have a high enough quality, you can glom a bunch of them together into a single "logical qubit"---that is, a qubit that lives in the collective state of multiple physical qubits, and that can still be recovered even if any small number of those physical qubits lose their coherence. Furthermore, one can do an arbitrarily long quantum computation on these encoded (logical) qubits, continuously monitoring to see which of the physical qubits have suffered errors and correcting those errors (but not monitoring in a way that would collapse the logical qubits!). In this way, one can in principle build a reliable quantum computer out of unreliable parts---a generalization of John von Neumann's famous discovery from the 1950s, which showed that the same was true of classical computation.


Any recommended books on complexity theory?


A classic from 1994 is "Computational Complexity" by Papadimitriou.

Two good sources for newer material (from the past quarter-century) are "Computational Complexity: A Modern Approach" by Arora and Barak, and "The Nature of Computation" by Moore and Mertens.

A personal, idiosyncratic favorite is "Gems of Theoretical Computer Science" by Uwe Schöning.


Hi Scott,

If it turned out to be true that advances in advertising technology like profiling and microtargeting (see, e.g., [1]) could effectively deliver the likelihood of electoral victory to their highest-paying and most ruthless practitioners, would this be something to worry about? And if so, what action should we take in order to preserve democratic ideals?

1: https://medium.com/join-scout/the-rise-of-the-weaponized-ai-...


Scott, what do you think is going to be the successor to the scientific paper/academic journal? What do you think of platforms like distill.pub and Fermat's Library and the impact they might have?


I'd seen Fermat's Library but haven't used it. I hadn't heard of distill.pub before your comment.

Honestly, I don't see scientific papers going away anytime soon. What's the alternative to an individual or group setting out clearly in writing what they discovered, the evidence that it's true, and the background and context to the discovery, putting the writeup where interested people can easily find it, and then taking full responsibility for it, so others can cite and build on the work? As far as I'm concerned, that's all a "scientific paper" means---regardless of whether it appears in a prestigious journal or just the arXiv or somewhere else on the web, and whether it otherwise follows normal academic writing conventions or flouts them. Even an answer on a website like MathOverflow, were it sufficiently well-written and sourced, could as far as I'm concerned be cited as a research paper and given academic credit.

But while "papers," for some definition, are probably here to stay, I'm optimistic about deep reform to the system of journals. The first step, of course, should be for academics to shake off the yoke of the truly brazen predatory publishers like Elsevier. Many of us have already pledged never again to review for or submit to those publishers, at least until they fundamentally change their practices. For more about this, see my review of "The Access Principle": https://www.scottaaronson.com/writings/journal.html

As the next step, we should break free even of society journals, insofar as they put research results behind a paywall. Everything---certainly if it was funded by the taxpayers---should be freely accessible on the web. (In math, CS, and physics, we already put essentially all our papers on the arXiv, where they're freely available, and have been doing that for ~25 years. But the fact that the paywalls even exist still rankles---and other fields, like biology, have yet to catch up.)

Ultimately, we might converge on the model of journals as "arXiv overlays": that is, stamps of approval that particular arXiv preprints have been peer-reviewed (which is pretty much the only service that journals now provide anyway). Or maybe we'll even handle peer review in some other way entirely. E.g., people keep trying to experiment with peer reviews being public, but it keeps failing---possibly because, lo and behold, most academics don't want their frank commentary on the importance of each other's work to be made public with their names on it! :-)

As it happens, my friend and colleague Michael Nielsen used to work in quantum computing, but now spends full time at Y Combinator thinking about the future of scientific communication. He wrote a book that had lots of interesting ideas for how we could improve things, and surely there are many other good ideas waiting to be proposed, possibly taking advantage of recent or near-future technologies. While the concept of a "paper" (or treatise, or monograph, or note, or other unit of written research) strikes me as mostly determined by the nature of science itself, as far as I'm concerned almost everything else is up for grabs.


Hey Scott, thanks for doing this.

What do you think of the Copenhagen Interpretation especially in light of the delayed choice quantum eraser experiment[0]?

I know interpretation of QM is something actual working scientists try to avoid (I know I do such for my little corner of physics) but given how much interpretation informs intuition it's worth considering once in a while.

[0] https://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser


I confess I get annoyed when people make arguments about the interpretation of quantum mechanics using complicated thought experiments involving lasers and mirrors. Why not just talk about it in the modern way, in terms of qubits and states? :-)

For some of my recent thoughts about interpretation of QM, see this blog post, and especially the discussion in the comments section: https://www.scottaaronson.com/blog/?p=3628


I'm a laser physicist, so I think exclusively in terms of lasers and mirrors, but I get your point.

Hi Scott, I first stumbled across your blog while I was across the pond doing my Master's in Delft; it looks like you've been relatively successful since then.

I was wondering how you feel that success/celebrity has affected you. Does it actively drive you to something other than research -- say, are you viewing it as a building block to maybe publishing a book or so? Or does it feel like a distraction from research in that "oh why do I have these arguments online" way that we can often fall into?


There’s no question that celebrity can get to your head in this line of work. Every single time I get off my private plane somewhere, I’m mobbed by quantum complexity theory groupies, not to mention the Hacker News paparazzi, and the women shrieking and throwing their panties at me ... oh god, the women are so persistent! Don’t they realize I’m married?

But I do try hard to “keep it real”—or better, “keep it complex”—by setting aside some quality time just for me and some pen and paper. As I like to say, it’s all about the amplitudes.

(Seriously: I do have a book, Quantum Computing Since Democritus! And I have been approached by people asking me to sign it. But typically only in a few highly selected places, like Cambridge, MA or Berkeley, CA. :-) )


What are some of the most interesting research problems in your field? How promising would you say topological quantum computers are to allowing for fault-tolerant systems?


I'm halfway through the first chapter of Neilsen and Chuang's book. I'm enjoying reading about the subject and am at the quantum parallelism part.

Can you explain why Grover's algorithm has a runtime of root N? It seems like the runtime should be log2(n) because of exponential qubits or 1 because there must be a way for all the qubits to interfere.

Also, what resources do you recommend for self-study? Are there quantum computing meetups in San Francisco that you can recommend?


The reason why the running time of Grover’s algorithm involves sqrt(n) has to do with the Pythagorean theorem—or if you like, the fact that quantum mechanics is based on the 2-norm, in contrast to classical probability theory which is based on the 1-norm. Classically, each time you pick one item out of N to query, you can add ~1/N probability to the marked item—so the probability of having found the marked item after T queries goes like T/N. Quantumly, you can add ~1/sqrt(N) amplitude to the marked item with each query, so the amplitude on the marked item after T queries goes like ~T/sqrt(N), and hence the probability of observing the marked item when you measure goes like ~T^2/N.

A fundamental result from the 1990s, called the BBBV Theorem, shows that not even a quantum computer can solve the unordered search problem any faster than Grover’s algorithm solves it. I won’t prove the theorem in this comment :-), but the intuition is simply that quantum mechanics is a norm-preserving and linear theory. So you actually need to do something to gradually put more and more amplitude onto the marked item; you can’t just instantly and magically give an amplitude of 1 to whichever branch of your superposition happened to hit the marked item.
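(To see the sqrt(N) behavior concretely, here is a minimal simulation of Grover's algorithm as plain linear algebra on the N-dimensional state vector: each iteration is the oracle reflection followed by the "diffusion" reflection about the uniform superposition. This is a sketch for illustration, not tied to any quantum library.)

```python
import numpy as np

def grover_success_prob(N, marked=0, iterations=None):
    """Probability of measuring the marked item after Grover iterations."""
    if iterations is None:
        # Optimal iteration count is about (pi/4) * sqrt(N)
        iterations = int(round(np.pi / 4 * np.sqrt(N)))
    state = np.full(N, 1 / np.sqrt(N))      # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1                  # oracle: flip marked amplitude
        state = 2 * state.mean() - state     # diffusion: reflect about mean
    return state[marked] ** 2                # Born rule: |amplitude|^2

# After only ~25 iterations on N = 1024 items, success is near-certain,
# versus the ~1024 queries a classical brute-force search would need.
print(grover_success_prob(1024))
```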

I’m not sure if there are QC meetups in SF (does anyone else?). But certainly nearby Berkeley is one of the centers of the world for QC—home to Umesh Vazirani’s group, the Simons Institute for Theory of Computing, and now also the startup Rigetti.


That makes some sense with the extra dimension providing more space to store information about all the information. I appreciate the detailed response with some jumping off points. Thank you!


There is a Bay Area Quantum Computing meetup (I'm one of the organizers!)

https://www.meetup.com/Bay-Area-Quantum-Computing-Meetup/

Next one will likely be at the end of July. Hope to see you there!


Looks like I went to the Bloomberg one. Was both really over my head and very fascinating.


Why don't we call qubits just "qits", since "quantum binary digit" doesn't really make sense? Do people in the field refer to them in other ways?


A qubit is a linear combination of |0> and |1>, and a qutrit is a linear combination of |0>, |1> and |2>. I remember Scott Aaronson once suggested building QCs with qudits (or qu-n-its with n>2).


No, they're called qubits. It's literally a quantum-mechanical bit, in the sense that it's a superposition of the |0> and |1> states---i.e., a quantum state that returns a bit when you make a complete measurement on it.

There are also qutrits, which are quantum trits (superpositions of |0>, |1>, and |2>), "qubytes" (collections of 8 qubits), and so on.

This terminology is now 30 years old, and in probably thousands of books and tens of thousands of papers. It's not going to change.

The earlier name for "qubit"---the name that Feynman, for example, would have recognized---was "spin-1/2 particle." But while there was some resistance, I think that even within particle physics, condensed-matter physics, etc., they'd now typically say qubit rather than spin-1/2.

In any case, would you pronounce "qit" like "kit" or "kwit"? :-)


Sure it makes sense - a qubit is canonically a quantum spin system, which has two eigenstates, up and down.


What about superposition? Or is that not considered a 'state'?


They're states but not eigenstates. It's like the difference between RGB colour and greyscale. In both cases there are infinitely many possible colours, but in greyscale they're all mixtures of two "primary" colours (black and white), whereas in RGB they're mixtures of four (black, red, green and blue).

In a qubit the infinitely many superposition states are all mixtures of just two eigenstates.


Thank you! This makes perfect sense.


A superposition is indeed a state: a linear combination of the basis states.

Further, (and anyone, please correct me where I'm wrong), the eigenfunctions (which could actually be called eigenstates) of an operator ARE the basis set, as they are orthonormal. (Right?)


One can justify the name using the fact that the state space is 2-dimensional.


When do you think will we have the first quantum computer that can solve useful problems faster than the classical competition, for example in quantum chemistry?


I hope in 5-10 years, but really I don’t know, and no one else does either.


In my opinion, you're one of the most entertaining and approachable writers on mathematical topics like QC. How did you get good at writing?


It’s funny: when Philip Roth passed away recently, I was rereading some of his stuff, and thinking to myself, “why am I so terrible at writing?”

If I have any tips, I guess they’d be bend-over-backwards honesty, willingness to make an ass of yourself, practice, and more practice.


For those wanting to do undergraduate research (e.g., an REU) in QIT, what's your advice? What do they need to take on such an endeavor? What are researchers looking for in prospective students? And finally, would a researcher take on a student who hasn't had much in terms of coursework but has been teaching themselves?


Scott, I loved your book (QCSD). What are the chances that overcoming noise in quantum gate systems as they get larger is a practical impossibility beyond some threshold - for instance in the same way that inverting a gaussian convolution requires super-exponentially less noise above a given sampling density / stdev ?


If anything like that turned out to be true, and fundamental (rather than just an engineering limitation), I’d regard it as a shocking discovery that would overturn our current understanding of QM. After all, physics is local—each particle couples mostly to its near neighbors—so there doesn’t seem to be any inherent reason why noise per qubit needs to keep increasing as you integrate more and more qubits.

For more, see my other answers on this thread about QC skepticism.


Hi Scott,

If quantum computing ever becomes commonplace, and quantum computer programs widely written and understood, do you envision that the teaching of quantum mechanics itself will be significantly impacted? Or instead will we simply see quantum complexity theory begin to redefine the undergraduate computer science curriculum?


I think that the teaching of QM has already been impacted by quantum information—and I hope it gets impacted more, regardless of if and when we get useful QCs! To my mind, it’s just infinitely clearer to start with the basic rules and properties of QM—states, unitary transformations, measurements, tensor products, entanglement, density matrices, etc.—illustrated with the simplest systems to which those rules apply, namely collections of qubits (or more generally, finite-dimensional Hilbert spaces). This already lets the students fully understand no-cloning, quantum teleportation, quantum key distribution, basic quantum algorithms like Bernstein-Vazirani and Simon, and other cool things from quantum information, and it already lets them explore the conceptual questions (the measurement problem and so forth) that interest most of them. After this material has been mastered, and only after, one could see how it gets applied to real physical systems like the harmonic oscillator, the hydrogen atom, or a particle in a 1D potential well. So, all the mathematical complications of infinite-dimensional Hilbert spaces—which are not essential to understanding QM itself—could be deferred to this part of the course. This is the reverse of the historical order in which the ideas were discovered in the 1920s, but I think it’s the much more logical order in which to learn them—even if quantum information weren’t a big thing that people now care about.

If you want to see a recent text by a bona fide physicist that presents QM in exactly this way—what I think of as simply the “modern” way—then check out Lenny Susskind and Art Friedman’s remarkable “The Theoretical Minimum” series. Or you could look at pretty much any introductory book or course lecture notes about quantum information, including mine. We do it this way as well, except that we never do get to the harmonic oscillator or the hydrogen atom. :-) I can say from experience that the material is then totally accessible to any bright undergrad who’s done math up through linear algebra.


OK, now I'm out of excuses for having not read Quantum Computing since Democritus ;-) (and The Theoretical Minimum as well, why not both?).

Thanks for the enlightening response!


QM is often taught with a second course heavily focused on perturbation theory, covering the Stark and Zeeman effects on the hydrogen atom and the like. Finite-dimensional Hilbert spaces are mostly treated as spin, which then gets coupled to angular momentum for spin-orbit coupling. So I would envision a branching course sequence: an introductory course as we already have it, followed by either or both of the usual second-semester course for people who want the material on atoms and molecules, and one that sticks to the finite-dimensional aspects relevant to computing. The physicists might need all 3, while the quantum computer scientists might only need 2. The courses could be cross-listed across departments.


you’ve written about P vs NP in the past. What are your current thoughts on P vs NP as of today? Do you have any speculation as to whether it’s one or the other, and why? Do you think there have been any major recent breakthroughs (say, in the past 5 years) to move us closer to a solution?


Hello Scott, I have gathered from YouTube that you enrolled as a CS major in order to make your own video game. How does that square with being a prodigious math geek who started learning calculus at age 11, and who is now one of the pioneering quantum theorists?


Hi Scott -

What was the best part/most favorite thing about your sabbatical?

Can you explain why Forrelation is a guiding light for quantum deep learning these days?

As a bonus - it would be nice to see a walkthrough - commentary of the recent oracle separation proof of Tal and Raz and how they used Fourier transforms specifically.


My favorite part of my sabbatical in Tel Aviv was having some actual time to do research. My second-favorite part was the hummus.

Forrelation is not a “guiding light for quantum deep learning.” I’m not even sure if there’s such a thing as “quantum deep learning” that’s sufficiently well-developed to deserve the name. You can use Forrelation as a separating example for some other quantum learning problems, e.g. k-means clustering, but so far that mostly just shows that you can artificially shoehorn an exponential quantum speedup into those tasks, rather than that the tasks will give us useful quantum speedups in “real life.”

In the Forrelation problem, I conjecture that the only thing that really matters about the Fourier transform is that it’s a unitary matrix all of whose entries have small absolute values. As such, it lets you relate two Boolean functions in a way that a quantum computer can notice, but that’s extremely “global” in nature and doesn’t show up locally. Raz and Tal may have used some additional technical properties of Forrelation in their proof (now that I write that, I’m actually not sure—I’ll need to check!). But if they did, then I conjecture that another proof could avoid the use of those properties.
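To make the "small entries" point concrete, here's a toy NumPy sketch (my own illustration; the quantity computed is the forrelation Φ(f,g) = 2^(-3n/2) Σ_{x,y} f(x)(-1)^(x·y) g(y) as I recall it from the literature). The Walsh-Hadamard matrix used below is exactly a unitary all of whose entries have the same small absolute value 2^(-n/2); choosing g to follow the sign of f's transform produces a highly forrelated pair, while a random pair scores near zero. The parameters (n, the random seed) are arbitrary.

```python
import numpy as np
from itertools import product

n = 6
N = 2 ** n

# Walsh-Hadamard matrix: entries (-1)^(x.y) / 2^(n/2). It's unitary, and every
# entry has the same small absolute value 2^(-n/2) -- conjectured above to be
# the only property of the Fourier transform that really matters here.
xs = np.array(list(product([0, 1], repeat=n)))
Hn = (-1.0) ** (xs @ xs.T) / np.sqrt(N)

rng = np.random.default_rng(0)
f = rng.choice([-1, 1], size=N)          # a random Boolean function, +-1 valued
g = np.where(Hn @ f >= 0, 1, -1)         # g tracks the sign of f's transform

# Phi(f,g) = 2^(-3n/2) * sum_{x,y} f(x) (-1)^(x.y) g(y) = f^T Hn g / N
phi = f @ Hn @ g / N
phi_random = f @ Hn @ rng.choice([-1, 1], size=N) / N
print(phi, phi_random)   # phi is large; a random pair is typically near zero
```

The "global vs. local" point is visible here too: no small subset of the values of f and g reveals the correlation, since it only shows up through the transform of the entire truth table.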


Just as a quick followup: Avishay Tal has confirmed for me that, indeed, their proof still goes through if you replace the Fourier transform by any other unitary matrix with bounded entries.


Thanks, Scott! I thought you’d missed this. Maybe we’ll get to have you give a colloquium here at Tulane someday soon!


Hi Scott,

Have you seen https://arxiv.org/abs/1806.09729?

I would call that quantum deep learning, or more accurately, foundations of quantum deep learning.


No, I hadn't seen that paper; thanks! I don't want to comment on it without having studied it, except to mention that it doesn't make any reference to Forrelation.

Hi Scott,

What do you think of radical Mathematical Platonism (e.g. a la Tegmark)? As a computational physicist I tend to tell people that in my hands a computer is like a telescope, one that lets me see into mathematics---but that seems to tacitly assume the reality of mathematical objects.


How long do you think it will be until we have a quantum computer that out-competes traditional computation?

What do you think about Topological Quantum Computation?

How do you think we can solve the problem that many people pursuing their PhDs, and also adjunct professors, are vastly underpaid?
