The Google Willow Thing (scottaaronson.blog)
673 points by Bootvis 19 hours ago | 343 comments





Man, reading this makes me feel so small. Being a "software engineer" consuming APIs and updating database rows seems laughably childish compared to whatever the hell it is I just read. I can't even imagine why I should bother trying to understand it. It's completely inaccessible. Only an elite few get to touch these machines.

> I can't even imagine why I should bother trying to understand it.

Well, maybe you should just try for the hell of it and see how far you get? Becoming fit seems impossible to a morbidly obese 45-year-old, and it is if their expectations are unreasonable; but if they make those expectations more modest and break the work down into manageable routines, they can get somewhere eventually.

Find some papers, fill many gaps, dedicate a few years in your spare time, in 6 months you'll be 6 months closer than you were.

Whether there's a reason or not, idk, it's something to do, be curious. Don't forget that by dedicating their life to something, they're naturally not dedicating their life to other things, things that you might be able to do, like climbing mountains, making pizza, or coming up with witty banter in social situations.


> in 6 months you'll be 6 months closer than you were

Not to be morbid, but…in 6 months you’ll also have 6 months less of your life left to do those things.

In the past few years, I have felt (whether due to my aging, the chaos of current events, the mindless march of technology, who knows) that our time here on earth is a gift that we squander at our peril.

I think some would find that bleak and grim, but I consider it an existentialist battle cry. Make a list of those things that someday you’d like to say you’ve done, or know, or have built, or have given to others…and do one of them.


> Not to be morbid, but…in 6 months you’ll also have 6 months less of your life left to do those things.

The time passes anyway.


> The time passes anyway.

All the more reason to spend it on things that matter to you. The opportunity cost of six months spent deep in abstract CS papers is six months not spent teaching your daughter to play guitar, visiting that place you always dreamt of seeing, finally finishing that book on your nightstand, etc.


These things are not mutually exclusive...

Oh, but they are. We get fewer of those six month periods than we like to think we will.

Not if you teach your daughter quantum computing instead of guitar

You can definitely teach your daughter both.


Must be nice living a life free from the constraints of time.

You are awake for at least 16 hours a day; you're telling me you can't find 4 hours a week to read a paper? That's 4/112 hours, or around 3.5% of your week...

I guarantee that's more time than most people will spend teaching their kids any musical instrument.

Just spend a week mapping out what you do and how long it takes you, and I'm pretty sure you can find double-digit hours spent somewhere, maybe even right here on HN.


My bucket list says otherwise.

Your revelations echo those of Seneca in "On the Shortness of Life". You take your life seriously, which is something I immensely respect.

> ...that our time here on earth is a gift that we squander at our peril.

Yes, but it's up to each individual to decide on their own definition of 'squandering'. Ultimately everything we do is in service of our own search for meaning in life, and learning for its own sake can absolutely fulfil that role.


Don’t stress your inevitable death. But also don’t live like you’re immortal.

Learning for the sake of learning is a good thing

Why? There has to be some pleasure or goal derived from it. I don't think I'd particularly enjoy learning to speak Swahili just because I'm learning.

For me, "aha" moments often make me feel like I've come closer to understanding myself and the universe. I suppose it's a kind of pleasure, but it's not directly a practical goal. It can definitely happen with learning a new language, because the language may encode and communicate information in ways I've never thought about before.

Got it. I guess we're very different then, I don't feel like I'm getting any closer to "understanding myself and the universe" nor is it a goal for me. Perhaps it was when I was younger.

Yeah, we are probably different. I suppose my primary goal is to love and be kind. But trying to figure out what the hell is going on is a strong second place.


This is very rare, so much so that I can't remember the last time it happened, but I was inspired by your words.

Thank you for writing them.


MSR has a very clear and accessible tutorial on quantum computing for anyone interested in getting up to speed with the fundamentals: https://www.youtube.com/watch?v=F_Riqjdh2oM .

> things that you might be able to do, like [...] coming up with witty banter in social situations

Well... guess it's time I start learning quantum computing then


As an autistic that brute-forced the "witty social banter" skill early on and has recently turned 40... I kinda wish I'd learned quantum computing instead tbh.

any tips?

Read a lot of popular books for conversational ideas, follow the news for topical application of them, and attempt cognitive & emotional empathy for per-person personalization of the application.

Simple in theory, juggling plates and knives in practice...


Mirroring goes a long way.

Reddit refugee I take it?

Technically yes, but I've been here for a few years. Why?

I have a treadmill though. Even though I don't use it. I can't get a quantum computer.

Here's a free quantum treadmill

https://www.quantumplayground.net/#/home

You don't need a "real" quantum computer to mess around with quantum computing and learn how it works any more than you need a supercomputer to play around with algorithms and learn how they work.
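For example, here's a minimal sketch using Qiskit's local simulator (assuming the qiskit and qiskit-aer packages are installed; everything below runs on an ordinary laptop):

    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator

    # Build a 2-qubit Bell state: Hadamard, then CNOT entangles the qubits.
    qc = QuantumCircuit(2, 2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure([0, 1], [0, 1])

    # Run on a purely classical simulator -- no quantum hardware involved.
    counts = AerSimulator().run(qc, shots=1000).result().get_counts()
    print(counts)  # roughly half '00', half '11'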


Most experts in that field do not have access to a quantum computer. For the longest time it was a very theoretical field. Having access to a physical machine will not help you with 99% of the knowledge you can acquire in that field right now.

Everyone forgets people have been doing quantum computing research for decades.

Shor's algorithm is from 1994.


Not a computer, but you can definitely get a quantum devkit (an "emulator") and dabble with it.

How can you get it?

Also, Azure lets you run quantum code.

You can in general start with these search keywords: qiskit caterpillar yosys.


I think you can. A very simple one: an optical one, with a single qubit. It's several thousand dollars of equipment, typical of a university photon-entanglement lab.

https://en.wikipedia.org/wiki/KLM_protocol


Clicked on this article to learn a bit more about this newly released chip but instead got extremely motivated to pursue intellectual interests intensely

I haven't tried this yet myself, but have you tried plugging it into GPT or Claude or Perplexity and asking Qs? I've made some progress on things this way, much faster than I would have the usual way. Apply the usual precautions about hallucinations etc. (and maybe do the "ask multiple AIs the same thing" thing). We don't yet have a perfect tutor in these machines, but on balance I've gained from talking to them about deep topics.

Same, but kind of; I'm so far removed from higher up engineering stuff like quantum stuff, nuclear fusion stuff, LHC stuff, astronomy stuff, AI stuff that I just scan it, grab a coffee, raise an eyebrow and go "Interesting", then go about my day and wonder what the fuck I'm supposed to be doing at work again. Oh right, implement a component, same thing I've been doing for the past decade or so.

Thing is, I don't know how to get out without on the one side giving up my comfort zone (well paid, doable work), and on the other side gaining responsibility / being looked at as an expert in any field (that's where impostor syndrome and responsibility aversion come in). I really need a holiday lol.


A few years ago I made peace with the fact that my space of ignorance is humongous and will only get exponentially bigger, even in the domain of my own work, which is software engineering. It liberated me from the pressure or burden of having to learn or know everything, and enabled me to focus on things that I truly like to pursue.

I’ve made a list of 4-5 things I want to be extremely good at in 10 years compared to where I am today. Now I just spend time on those. I occasionally wander into something new just for the sake of diversion.


To be fair, probably everyone who "touches these things" had extensive graduate work on the topic, or put in a lot of long years toiling away at far less exciting and remunerative grunt work. They weren't just super bright CS undergrads who got entry level jobs you couldn't.

This was me yesterday after reading the official Willow release.

Spent yesterday afternoon and this morning learning what I could. I'm now superficially familiar with quantum coherence, superposition, and phase relationships.

In other words, you got this. Now I gotta learn linear algebra. brb.


I gave a plug for this yesterday, but if you want to try the Quantum Katas from Azure Quantum, it runs in the browser and covers this stuff. See lesson 3 for linear algebra. <https://quantum.microsoft.com/en-us/tools/quantum-katas>

One thing I did forget to mention is that you can play with this stuff in a 'familiar to software developers' way in our VS Code playground at <https://vscode.dev/quantum/playground/>. It's a 'code first' approach leveraging VS Code integration, and the playground is pre-populated with a bunch of common quantum algorithms.

You can also install the extension in VS Code directly (<https://marketplace.visualstudio.com/items?itemName=quantum....>); you don't need to run it in the browser, but even in the browser it has a fully working language service, debugger, evaluator, quantum simulator, package management, etc. It's all written in Rust and compiled either to Wasm for the browser and VS Code extension, or to native code for the Python package. (I'm thinking about doing a video on how we build it, as I expect it will be interesting to this type of crowd. Let me know if so.)

Disclaimer: I work in Azure Quantum on the product mentioned. AMA.


> Now I gotta learn linear algebra.

Linear algebra does seem to be a hard wall I've seen many smart software engineers hit. I'd honestly love for someone to study the phenomenon and figure out why this is.


Curriculum pacing? Linear Algebra may be showing up at the wrong time during the 4 years.

Linear algebra is a relatively straightforward subject. I won't say easy, because there are bits that aren't, and I struggled with it when I was first exposed to it in college. But when I revisited it in graduate school, I didn't have a problem.

So, I really agree that it's about timing and curriculum. For one, it appears somewhat abstract until you really understand geometrically what's happening.

So, I surmise that most non-mathematicians don't have quite the mathematical maturity to absorb the standard pedagogy when they first approach the subject.


It seems wild to me, since linear algebra is at the heart of basically all modern mathematics (alright, maybe not all, but so many things exhibit linear behavior that it's still useful). Don't most CS majors require some math today? Linalg is like first-year college, right?

I enjoy engineering. I hate math.

I would recommend mathacademy.

Also good starting places (but still, I only understood a tiny bit of what was there).

https://podcast.clearerthinking.org/episode/208/scott-aarons...

https://quantum.country


It's also way harder to make money doing "not laughably childish" stuff. The more accessible and human-connected the work gets, the more likely people are to recognize it and pay you. People criticize yes-men getting rewarded, but you have to be nearly clinically insane to recognize the value of a no-machine like a partial prototype quantum supercomputer.

We are all small in every domain in which we are not an expert, which is approximately all of them. The "computer" domain has expanded wildly over the last 50 years to include so many specialties that even within it one cannot possibly acquire expertise in everything. And of course "computers" do not include (although they do impact!) the vast majority of domains.

If you want to go deeper into quantum computing, I can highly recommend Scott Aaronson's own book, "Quantum Computing since Democritus"[0]. Although I have a background in physics and math, I found his style lively and engaging, with truly unique and compact recapitulations of things I already knew (two come to mind: his description of Cantor's diagonalization argument, and the assertion that QM is the natural consequence of "negative probabilities" being real. The latter is a marvelous insight that I've personally gotten a lot of use out of).

It's also useful to understand the boundary of what quantum computing is. At the end of the day what we'll see are "QaaS" APIs that give us the ability to, for example, factor products of large primes. You won't need to know Shor's algorithm or the implementation details; you'll just get your answer exponentially faster than the classical method. I would not expect desktop quantum computers, languages designed for them, or general user software designed to run on them. (Of course, eventually someone will make Doom run on one, but that's decades in the future.)
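To make the "you won't need to know Shor's algorithm" point concrete, here is a classical toy sketch (my illustrative choice of N = 15 and a = 7) of the number theory such a service would hide from you; the quantum computer's only job is finding the period r exponentially faster:

    from math import gcd

    N, a = 15, 7                      # N = number to factor, a = chosen base
    r = 1
    while pow(a, r, N) != 1:          # find the period r of a^x mod N
        r += 1                        # (this search is Shor's quantum step)
    # With an even period r, the factors fall out via gcd:
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    print(p, q)                       # prints: 3 5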

https://www.alibris.com/booksearch?mtype=B&keyword=quantum+c...


When Corridor Digital was analyzing the really impressive time-spaghetti effect in Loki season 2, they said: "there are two kinds of CGI artists. The kind that use the buttons and the kind that program the buttons."

> I can't even imagine why I should bother trying to understand it

Why should you? I agree with your sentiment; super advanced quantum physics is probably out of reach for 99% of the population (I'm estimating here, but I think it's reasonable to assume the 99th percentile is about the average IQ of the physics PhDs who can actually understand this stuff at a deep level). You can probably make the effort to understand something about what's going on there, but it will be very superficial. Doing advanced quantum physics takes a huge amount of effort and an incredible capacity for learning complex things. And even the advanced physics guys don't and can't understand a bunch of very elementary things about reality, so it's not as if the feeling of not understanding stuff ever goes away.


To be fair you don't need to know advanced quantum physics to understand a lot of quantum computing, including Aaronson's discussion of this paper. It's pretty accessible with undergraduate-level linear algebra and computer science concepts.

People feel like this about domains they don't know all the time. Don't allow your ignorance of an area to trick you into thinking you can't learn that area. It's just new to you!

Source: teaching beginners piano for years. Of course sheet music looks like gobbledygook, until you've spent a bit of time learning the basic rules!


The theory of quantum computation is accessible with a good understanding of linear algebra. Anyone working in machine learning or computer graphics should feel encouraged by that, and take a look at the theory.

Quantum error correction is one of those “wow” moments, like Euler's identity; it is worth making the effort to get there.
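As a small taste of how far plain linear algebra gets you, here is a minimal NumPy sketch (my example, not from the thread): a qubit is a unit vector in C^2 and a gate is a unitary matrix:

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)        # the |0> state

    # The Hadamard gate is a 2x2 unitary matrix.
    H = np.array([[1,  1],
                  [1, -1]], dtype=complex) / np.sqrt(2)

    psi = H @ ket0                 # applying a gate is a matrix-vector product
    probs = np.abs(psi) ** 2       # Born rule: amplitudes -> probabilities
    print(probs)                   # [0.5 0.5] -- an equal superposition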


I feel like I have a fairly good understanding of linear algebra, or at least I did when I studied it at uni but that was a while ago.

Any suggestions for where I can learn more about the theory of quantum computation?


e^(iπ) + 1 = 0: e, i, π, 1, 0. Beauty.

e^(iτ) = 1; true beauty.

As with most novel hardware since the dawn of computing, there are ways to emulate it.

https://www.ibm.com/quantum/qiskit


Damian Conway did it in Perl back in the late 90's:

https://metacpan.org/pod/Quantum::Superpositions

(Sadly, the Perl module has to do classical calculations underneath to get the quantum-computing code/functions to execute, but it lets you experiment with silly little QC toys - "Look, I factorised 15 in parallel!!!")


As hard as it may seem to you to tackle that, it's harder to convince others like you that tackling it can be like child's play. Not just QM/QC (which, btw, is being successfully taught to high schoolers) but any "advanced" topic. I hope we'll be able to look back and laugh at how backwards education was "back in the day", and at how dumb people were to think that some were an "elite few", while the reality is that the "elite few" were the "lucky few" who were not deprived of learning to think, either by having the right people around them or the right context to find it by themselves.

Totally; education across the globe is currently severely under-optimized. At this point there are troves of research on what actually works when it comes to learning, yet educational systems that implement these techniques are still incredibly scarce.

Most education systems, particularly within the US, have very little interest in educating for critical thought and reasoning. They exist as both day care and labor exploitation pipelines.

They buried the lede: Google doesn't have 2 logical qubits to rub together. 105 physical "qubits" make a "single" logical qubit after coalescing or whatever. I'm really annoyed because I've kinda followed this since the mid-90s, and this is the first time I'm hearing that "it'll probably take millions of physical qubits to crack 256 bit".

To me the whole endeavor smells like a bait and switch or something. I remember about 10 years ago Canada or someone had at least a few hundred qubits, if not close to 1,000 of them, but those were physical qubits and don't represent anything, really. Google's 105 finally make a "fast enough" single logical qubit, or at best half of a pair.


Silicon Valley figured out decades ago that the trick to keeping R&D dollars flowing is to balance on the verge of getting amazing results (AI/AGI; defeating cryptography) while not proving these efforts fruitless. The end result needs only be plausible and fantastic at the same time.

I’d urge you to not feel small.

First of all, the formalism/practice gap is real: taking API calls and updating a database correctly has a mountain of formalism around it. And it is not easy to get right! Communicating sequential processes, distributed-systems theory, and a bunch of other topics carry a huge formalism. It is also the case that many (most?) working software engineers have internalized much of that formalism: they “play by ear” rather than read sheet music, but what matters is whether it sounds good.

Second, whether it’s quantum computing or frontier machine learning or any other formalism-heavy topic? It’s eminently possible to learn this stuff. There’s a certain lingering credentialism around “you need a PhD” or whatever, I call BS: this stuff is learnable.

Keep hacking, keep pushing yourself on topics you’re passionate about, but don’t consign yourself to some inferior caste. You’re just as likely as the next person to be the next self-taught superstar.


You captured my sentiment very well. I have the same feelings about AI.

I'm wondering if it's time for me to switch professions and give up compsci / software altogether.


The complexity in AI, for common use-cases, is well encapsulated right now.

Many pre-trained models and libraries that hide most of the complexity.


I know absolutely nothing about quantum, but I did watch this Bloomberg video this week, which was very helpful for understanding it in layman's terms.

https://youtu.be/1_gJp2uAjO0?si=--XzbVlAT3w9hw2L


You can also look for Veritasium's videos on quantum computing, if you are a fan of his videos.

We recently launched our self-paced, mathematically rigorous course on quantum computing! This comprehensive introduction is designed to help you grasp foundational concepts like superposition and entanglement with mathematical clarity and depth.

Here’s a sneak peek from Lecture 6: https://youtu.be/6rf-hjyNl4U

You can sign up via: https://quantumformalism.academy/mathematical-foundations-fo...


Some people say higher education is a privilege, but a few times during college while grinding out difficult classes it felt more like a huge burden.

Being part of a highly educated elite group like that has some huge benefits, but they have also shackled themselves to an insanely specialized and highly difficult niche. I can't imagine the stress they are under and the potential for despair when dedicating years to something with so much uncertainty.


Thank you. Seeing this as the top comment actually makes me feel better as I now know that I am not alone.

Start with the handy precis that Mr A leaves at the top in yellow. You can drop that into conversation right now with some confidence! Mr A has quite some clout hereabouts let alone elsewhere, so that seems reasonable.

You can't know everything but knowing what you don't know and when to rely on someone else to know what you don't know and to confidently quote or use what they know that you don't know, is a skill too.

"Critical thinking", and I suspect you do know how to do that and whilst this blog post might be somewhat impenetrable it is still might be useful to you, even as just general knowledge. Use your skills to determine - for you and you alone - whether it is truth, false or somewhere in between.

Besides, a mental workout is good for you!


I think the jargon makes it seem scarier/less approachable than it is. There's tons of jargon in every new branch of anything interesting that you first notice, but it's not as impenetrable as it appears (at least for a high level understanding imo).

I had exactly the same thought. I read it after struggling for hours, making notes, etc., with a very hard (to me) part of my code, and it made my problem seem pretty trivial.

If it makes you feel any better, it is almost impossible to do anything practically useful with quantum computation at the moment. This is a field for researchers, people who are okay with not seeing the fruits of their work materialize for decades, if ever. By not understanding it, you're not missing anything more than what you're missing by not understanding, e.g., string theory.

And yet it pays the bills and runs the world, doesn't it.

React 19 is out and it's time to hit the docs, eh, to solve the same exact problem we've had for the last 30 years in a slightly better (or NOT) way.


> Only an elite few get to touch these machines.

For now. People were saying the same thing about mainframes in the '60s.


AWS Braket is on the AWS Free Tier.

The machines you program today were once only accessible by "elites" too though?

Anything that someone else does that you don't understand or can't do always sounds super important, impressive, and enviable. Usually. But you have to realize that he spends his life doing and thinking about this. And I will stipulate he's gifted in this area. I've never heard of him, but I managed to download his CV.

I will also note that sometimes I read HN links and think one thing, then read the comments and find that people know enough to take issue with what is being said and even call it out.


Let's talk about things that actually matter - where to invest in post-quantum world?

I'll keep this short.

- Google’s Willow quantum chip significantly outpaces current supercomputers, solving tasks in minutes that would otherwise take billions of years.

- Hypothesis: Accelerating advancements in tech and AI could lead to quantum supremacy arriving sooner than the 2030s, contrary to expert predictions.

- Legacy banking systems, being centralized, could transition faster to post-quantum-safe encryption by freezing transfers, re-checking processes, and migrating to new protocols in a controlled manner.

- Decentralized cryptocurrencies face bigger challenges: hard forks are difficult to coordinate across a decentralized network.

- Transitioning to quantum-safe algorithms could lead to longer transaction signatures and significantly higher fees, eroding trust in the system.

- If quantum computers compromise current cryptography, tangible assets (e.g., real estate, stock indices) may retain more value compared to digital assets like crypto.

Thoughts?


Literally none of this is correct.

> - Google’s Willow quantum chip significantly outpaces current supercomputers, solving tasks in minutes that would otherwise take billions of years.

billions of years you say? Just what kinds of "computing tasks" we talkin about here?


Okay, explain why and how it is not correct then?

I genuinely want to learn and figure out what is the truth and what is the best route of action when it comes to securing a portfolio.


The problem it solved would take a septillion years to do on a conventional computer but no one other than a quantum researcher cares about that problem.

How about solving a problem someone who's not a quantum researcher would care about? Give me traveling salesman with n=10. Or factor a 10-digit number. Something.

Until then quantum computers are in the same category as commercial fusion. Long on “breakthroughs”, zero on results.

Look at cancer researchers for a nice contrast. The annual number of “breakthrough that could cure cancer!1!” announcements has dropped to near zero, while steady, real progress is being made all the time.


> but no one other than a quantum researcher cares about that problem.

The better question may be why the rest of us don’t care about the same things. If we sat here in 2014 and pondered why neural net folks cared about particular problems, I’m not sure where we’d be. Faith and Vision are truly spiritual things, even in tech.


By 2014, NN folks, of whom I had been one for a decade by that point, were solving real-world problems at above-human capability.

We'd been managing around-human capabilities in certain tasks since around 2004.

There were actual industrial applications in 1994.

You'd have to go back to the 1950s to find the type of research in neural networks that was of no practical application to anyone but NN researchers.


So go back to the 1950s and the point stands. What does it matter when the actual start date is in terms of the point being right or wrong?

> So go back to the 1950s and the point stands.

Yes. "Quantum computers will revolutionize computing by year 2100" is a claim I can take seriously. "Quantum computers will revolutionize computing real soon now" is a claim I am not taking seriously.

And yes, the 1950s AI researchers also boasted that it would take only months, what ended up taking many decades: https://en.wikipedia.org/wiki/Dartmouth_workshop


So quantum computers will be relevant sometime around 2075.

How much money should we be putting into something with a time horizon longer than the working careers of everyone here?


Nobody wants your money

I'd like a rebate on my taxes then.

Well I'll be dead by that timeline so it matters a little.

Because when a new technology is demonstrated with a really esoteric use case, it comes off as useless for anything practical.

> I’m not sure where we’d be.

They didn't solve any new problems. They took old ideas and just cranked the dial all the way up to 11. Then an entire market formed around building the hardware necessary to crank it even further. This all represents a revolution in computing power not in what neural net folks were doing at all.

> Faith and Vision are truly spiritual things, even in tech.

Yea, but money isn't.


Because it's all just a scaled-up perceptron, isn't it?

The reality is that we had to solve vanishing / exploding gradient problem. That wasn't accomplished by getting faster compute, but by devising better architectures and using activation functions with properties we needed.

That's how ANNs got out of the "somewhat useful" category and into the "really useful" one. Hardware helped move them even further. But without the work of computer scientists and mathematicians we would never have got to the point where the hardware matters.


Because I have shit to do and this doesn’t relate to any of that shit?

It’s a question of overpromising and underdelivering. I’m not anti science. The scientists should definitely go forth and science. It’s the marketing they are bad at. The boy who cried wolf is not a training manual.

Google says the next step is to find a problem with real life application. From https://blog.google/technology/research/google-willow-quantu...

> The next challenge for the field is to demonstrate a first "useful, beyond-classical" computation on today's quantum chips that is relevant to a real-world application.


I always assumed the obvious application was for Google to make a QaaS (Quantum as a Service) for factoring products of large primes. One obvious big customer would be intelligence agencies, especially the NSA.

The only problem with this business model is that once you've factored one number, you kill your market: people will stop using pre-quantum crypto. (The obvious retort is that the NSA would have harvested a ton of RSA/EC-encrypted traffic by then and would keep cracking ciphers going back decades. Unfortunately, old secrets are a rapidly depreciating asset class.)

If you can get a quantum computer to take discrete logarithms on secp256k1 or mess with SHA256, then you can either get $$$ very quickly or you nuke almost the entire crypto market. For the former, you'd have to keep your discovery secret but just be unusually good at minting new coins.

Getting crypto coins to move over to post-quantum seems to me to be a much harder problem than e.g. rushing out a new version of TLS or SSH.

The key to Satoshi's original coins is a rapidly _appreciating_ secret at the moment, but paradoxically also one that might immediately crater if someone actually discovers a generic way to break the crypto involved.

I'm not an expert on this angle of things but: as far as I know, Shor's quantum algorithm breaks both RSA (factoring) and DSA (finite-field discrete logarithms). But I'm not sure if it works the same way against elliptic curves - or at least you'd probably need a bigger computer to attack the same level of security.

It's not clear to me if a quantum computer could effectively attack SHA256, either: Shor definitely does not help, Grover cuts the search space from 256 to 128 bits but that's still not practical to iterate over.
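A back-of-the-envelope check on the Grover point (my numbers, purely for illustration):

    # Grover square-roots the search space; it does not make it small.
    queries = 2 ** 128                          # sqrt(2**256) preimage search
    rate = 10 ** 12                             # generous: 1e12 queries/second
    years = queries / rate / (3600 * 24 * 365)
    print(f"{years:.1e} years")                 # ~1.1e+19 years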


Partially true. If you copy a ton of encrypted data today, you can maybe decrypt it tomorrow

That's why it won't be a public market.

> I always assumed the obvious application was for Google to make a QaaS (Quantum as a Service) for factoring large primes.

We are decades and decades away from the technology to be able to do that. The problem for Google is trying to find a practically useful use case a bit sooner than that.


Can you imagine? Send us a number, we send you the factors. And a bill. With lots of zeroes

This will never be relevant for solving travelling salesman problems.

More relevantly, that line of experimentation is specifically to refute the notion that something unexpected will happen with the physics to break the computational scaling.

Nobody with any credibility is claiming that the current experiments are useful for anything practical.


Note there's a caveat: problems in CS can be reduced to other problems in CS. If we solved SAT... well, no one cares about SAT itself, but traveling salesman obviously reduces to it.

(disclaimer: I don't think that's what is going on here, I'd have to dig into it more)


They didn’t solve any kind of CS problem. As far as I can tell the problem they solved is “what is this complicated quantum system going to do” by building the complicated quantum system and seeing what it did.

Then they claim it would take a gazillion years to simulate on a conventional computer. Which I’m sure is true.


> If we solved SAT, well, no one cares about SAT

Really? SAT is the question "I have a set of constraints. Is it possible to obey all of them at once?"

If it were impossible to use that question to answer any other questions, I'm pretty sure there would be a lot of interest anyway.

It's kind of like how a lot of people care about the determinant of a matrix, which is exactly the same question set against a much more restrictive set of possible constraints.
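For a concrete sense of what SAT asks, here is a toy brute-force check (my example; clauses in CNF, with a positive integer for a variable and a negative one for its negation):

    from itertools import product

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    clauses = [(1, 2), (-1, 3), (-2, -3)]

    def satisfiable(clauses, n):
        # Try every assignment of n booleans; return the first one that
        # satisfies every clause, or None if no assignment does.
        for bits in product([False, True], repeat=n):
            value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
            if all(any(value(l) for l in clause) for clause in clauses):
                return bits
        return None

    print(satisfiable(clauses, 3))  # (False, True, False)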


The argument in favor of the Everettian multiverse (“where else could the computation have happened, if it wasn’t being farmed out to parallel universes?”) seems illogical to me. Aren't these parallel universes running the same computation at the same time, and thus also "farming out" part of their computations to us? If so, it's a zero-sum game, so how could there be an overall performance gain for all the universes?

That's not really how MWI works. There isn't some fixed number of universes; actually, there aren't really distinct universes (or timelines) at all. Attempting to count them is about like trying to measure the length of a coastline; they blend together once you zoom far enough in.

Running the quantum computer causes 'new timelines' to be created. Though so would ordinary atoms just sitting there; the tricky thing about quantum computers is making it so the split is temporary.

So the quantum computer gets split into multiple versions of itself, does some computation in each, and merges the results. This isn't map-reduce; there's a strictly limited set of ways in which you can do the merger, all of which are weird from a classical perspective.

You can argue for MWI based on this, because the computations that got merged still had to happen somewhere. It's incompatible with Copenhagen, more so the bigger and longer-lasting the computation gets. It's not, strictly speaking, incompatible with pilot wave theory; but pilot wave theory is MWI plus an additional declaration that "Here, you see this timeline here? That's the real one, all the others are fake. Yes, all the computation needed to instantiate them still happens, but they lack the attribute of being real."

Though that makes PWT incompatible with computationalism, and hence with concepts such as mind-uploading. Which is a bullet you can choose to bite, of course...


I don't think this is accurate. There is no classical computation happening here, and there is no reason you have to have "room" for the classical computation to happen. That seems to be assuming that the universe is a classical substrate and a classical algorithm has to be instantiated somewhere.

Quantum computing isn't classical computing. It isn't a Turing machine. It is a fundamentally different kind of information processing device, making use of non-classical physical phenomena.

I'm not a defender of Copenhagen, but the wave collapse interpretation has no difficulty explaining quantum computation. A quantum computer creates extremely large, complexly entangled wave functions that upon collapse result in states that can be interpreted as solutions to problems that were encoded in the sequence of operations that setup the entangled wave function. The Everett interpretation is easier to think about in my opinion, and I prefer thinking in terms of MWI when I try to make sense of these results. But it is not necessary.

Computer science is the study of universal Turing machines and their application. But Turing machines are only “universal” in the sense that they can represent any statement of mathematical logic, and that can (we believe) be used to simulate anything we can dream up. But there are intrinsic performance limitations of Turing machines, studied by algorithmic theory, which are artifacts of the Turing machine itself, not physical limitations of the universe we live in. That searching an unordered list with a serial processor takes O(n) time, for example. Grover showed that there are non-Turing machine quantum processes that could be used to perform the same computation in O(sqrt(n)) time. That doesn't mean we need to go looking for "where did that computation actually happen". That doesn't even make sense.


> Turing machines are only “universal” in the sense that they can represent any statement of mathematical logic

That's not quite right. TM's in general are not universal. The "universality" of TMs has to do with the existence of a universal TM which is capable of emulating any other TM by putting a specification of the TM to be emulated on the universal TM's tape. The reason this matters is that once you've built a universal TM, you never have to build any more hardware. Any TM can then be emulated in software. That result is due to Turing.

The relationship between TMs and mathematical logic is of a fundamentally different character. It turns out that any system of formal logic can be emulated by a TM (and hence by a UTM), but that is a different result, mainly due to Godel, not Turing. There is also the empirical observation (famously noted by Eugene Wigner [1]) that all known physical phenomena (with the exception of quantum measurements) can be modeled mathematically, and hence can be emulated by a TM, and hence can be emulated by a UTM. But it is entirely possible that a new class of physical phenomena could be discovered tomorrow that cannot be modeled mathematically.

But here's the thing: no one has been able to come up with any idea of what such a phenomenon could possibly look like, and there is a reason to believe that this is not just a failure of imagination but actually a fundamental truth about the universe because our brains are physical systems which themselves can be modeled by mathematics, and that is (empirical) evidence that our universe is in some sense "closed" under Turing-equivalence (with quantum randomness being the lone notable exception). That is the kind of universality embodied in the idea that TM's can emulated "anything we can dream up". It's called the Church-Turing thesis [2] and unlike the universality of universal TM's it cannot be proven because a counterexample might be discovered at any time.

[1] https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

[2] https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis


I appreciate the detail, but for the record I don’t think we disagree. There’s a lot that I had to oversimplify in my already long comment.

> … our brains are physical systems which themselves can be modeled by mathematics…

How do you know this? What is the model? Can an AI come up with the Incompleteness Theorems? It can be proven in ZFC that PA is consistent. Can an AI or Turing Machine or whatever do the same?

EDIT: I’m equating “our brains” with consciousness.


> How do you know this?

I don't. But it seems like a plausible hypothesis, and I see no compelling evidence to the contrary.

> Can an AI or Turing Machine or whatever do the same?

Can you?


Unless you're a Cartesian dualist, the mind maps to the brain which is governed by physical laws that are, in principle, fully modeled by mathematical theory that can be simulated on a computer.

> ...physical laws that are, in principle, fully modeled by mathematical theory that can be simulated on a computer.

Consciousness so far appears to be something that can’t be modeled mathematically. Can a Turing Machine conclude that, while it can’t prove the consistency of the axiomatic system it works under, embedding that system in a larger system could make a consistency proof possible?

Aren’t you assuming superdeterminism? What if consciousness is not “computable” in any meaningful way? For example, suppose the theoretically most efficient mathematical model of consciousness requires more variables than the number of particles in the observable universe.


I think you are very confused about consciousness, which is nothing more than what an algorithm feels like from the inside. There is nothing mysterious about it.

But we are now way off topic.


> Consciousness so far appears to be something that can’t be modeled mathematically.

That doesn't tell us anything, though: almost every physical phenomenon we can now model mathematically looked impossible to model at some point in history. Just because something's hard is not a reason to expect it's magic.

If consciousness is somehow un-model-able, that will most likely be for a different reason, where the premise itself is somehow flawed, like how we can't bottle phlogiston or calculate the rate of air turning into maggots.


> all known physical phenomena (with the exception of quantum measurements) can be modeled mathematically

Of course you can model quantum measurements! And Wigner certainly knew so.


> Of course you can model quantum measurements!

No, you can't. You can statistically model the results of multiple quantum measurements in the aggregate but you cannot model the physical process of a single measurement because there is a fundamental disconnect between the physics of quantum measurements and TMs, namely, TMs are deterministic and quantum measurements are not. There is a reason that the Measurement Problem is a thing.


A statistical model is still a model. Lots of physics works that way. Newtonian mechanics might be deterministic, but because you often don't have perfect information on initial state that's not a useful model.

For example in statistical mechanics you work with ensembles of microstates. That doesn't mean thermodynamics is fake and only F=ma is real. Models are tools for understanding the behaviour of systems, not a glimpse into god's simulation source code where the hidden variables are.


For QM it can be shown that there are no hidden variables, at least no local ones [1]. Quantum randomness really cannot be modeled by a TM. You need a random oracle.

[1] https://en.wikipedia.org/wiki/Bell%27s_theorem


> For QM it can be shown that there are no hidden variables, at least no local ones

If we assume that the experimenter has free will in choosing the measurement settings, so that the hidden variables are not correlated with the measurement settings, then it can be shown.

https://en.wikipedia.org/wiki/Bell%27s_theorem#Superdetermin...

But if we are less strict on the free-will assumption, then even local hidden variables are back on the menu.


That is indeed what I was referring to. To clarify, plenty of classical physical models work only with distributions too. You don't need a random oracle because your model doesn't predict a single microstate. It wouldn't be possible or useful to do so. You can model the flow of heat without an oracle to tell you which atoms are vibrating.

Sure; the 'problem' is that while the Schrödinger equation is deterministic, we can only 'measure' the amplitudes of the solution. Is the wavefunction epistemic or ontological?

No, this has nothing to do with ontology vs epistemology. That's a philosophical problem. The problem here is that a measurement is a sample from a random distribution. A TM cannot emulate that. It can compute the distribution, but it cannot take a random sample from it. For that you need a random oracle (https://en.wikipedia.org/wiki/Random_oracle).

This is not really about quantum computing. A classical probabilistic Turing samples from a random distribution:

"probabilistic Turing machines can be defined as deterministic Turing machines having an additional "write" instruction where the value of the write is uniformly distributed"

I remember that probabilistic Turing machines are not more powerful than deterministic Turing machines, though Wikipedia is more optimistic:

"suggests that randomness may add power."

https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
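For a concrete feel for that extra "write" instruction, here is the classic example (my illustration, not from the thread): Miller-Rabin primality testing, whose only non-determinism is drawing uniformly random witnesses:

    import random

    def is_probably_prime(n, rounds=20):
        # Miller-Rabin: the error probability shrinks like 4**-rounds.
        if n in (2, 3):
            return True
        if n < 2 or n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1            # n - 1 = d * 2**s with d odd
        for _ in range(rounds):
            a = random.randrange(2, n - 1)  # the PTM's uniform "write"
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                # a witnesses that n is composite
        return True

    print(is_probably_prime(2**61 - 1))     # True; 2**61 - 1 is prime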


Power is not the point. The point is just that probabilistic TM's (i.e. TMs with a random oracle) are different. For example, the usual proof of the uncomputability of the halting problem does not apply to PTMs. The proof can be generalized to PTMs, but the point is that this generalization is necessary. You can't simply reduce a PTM to a DTM.

The problem is about physics, not Turing machines. You don't need to make a random choice as part of your physical model, the model only makes predictions about the distribution. You can't represent the continuous dynamical manifolds of classical or quantum mechanics on a TM either, but that's ok, because we have discrete models that work well.

I am asking myself:

Does a probabilistic Turing machine need aleatory uncertainty? (I would have called this ontological, but (1) disagrees.)

Epistemic uncertainty would mean here:

We don't know which deterministic Turing machine we are running. Right now, I see no way to use this in algorithms.

(1) https://dictionary.helmholtz-uq.de/content/types_of_uncertai...


"God does not play dice with the universe" said Einstein.

But he hasn't met my Dungeon Master...


Right I did hijack the thread a bit, but for me, the distribution is more than enough. The rest is just interpretation.

Well, no. The measurements are the things that actually happen, the events that comprise reality. The distribution may be part of the map, but it is definitely not the territory.

Hmm... Turing machines are "universal" in the sense that they can be used to model the behavior of any finite process that can compute a computable function. So if a process exists that can compute something, it must be modelable as a Turing machine, and its operation must be analyzable as a finite series of elementary instructions that are analogous to those of a Turing machine. So I don't see how it doesn't make sense to ask the question "if this process has a better asymptotic behavior than is theoretically possible, where did the computation take place?" I would be more inclined to believe that someone is not counting properly, than it being some kind of hypercomputer. Or even more simply that the process is just physically impossible.

> if a process exists that can compute something, it must be modelable as a Turing machine

Yes.

> and its operation must be analyzable as a finite series of elementary instructions that are analogous to those of a Turing machine

Analyzable, sure. MWI is fine as an analytic tool in the same way e.g. virtual particles and holes are. Nothing here requires that any of these analytic tools are physically corresponded to.


I didn't say I favored the MWI. My point is simply that the question of where computation took place shouldn't be dismissed. Like I said, I would be more inclined to think someone is not counting properly. As a simple example, a parallel program may run in half the time, but that doesn't mean it executed half as many operations.

> the question of where computation took place shouldn't be dismissed

Sure. But it also shouldn’t be glorified as adding anything to an old debate—every QCD calculation has this dynamic of lots of classical “computations” happening seemingly simultaneously.


There isn’t hidden computation going on though.

Computation isn't hidden in a parallel program, either. But if you only count the running time, and not the total instructions retired or the computer's energy usage, you're not getting the full picture.

And there isn’t parallel computation going on either. I think there’s a fundamental misunderstanding of how quantum compute works here. There is no merge-reduce as another commenter is claiming. Even in MWI, the decoherence is one-way.

A Turing Machine operates serially one instruction at a time. But you could obviously multiply its output by multiplying the machines.

The question is then how could we efficiently multiply Turing machines? One way could be by using rays of light to do the computations. Rays of light operate in parallel. Light-based computers don't need to be based on quantum mechanics, just plain old laser-optics. They multiply their performance by using multiple rays of light, if I understand it correctly.

SEE: https://thequantuminsider.com/2024/03/19/lightsolver-laser-c...

So using multiple rays of light is a way to multiply Turing Machines economically, in practice, I would think.

I assume quantum computers similarly just multiply Turing Machine-like computations, performing many computations in parallel, similarly to light-based computers. That does not mean that such computations must happen "somewhere else" than where the quantum processes occur. And that does not require multiple universes, just good old quantum mechanics, from Copenhagen.


Parallel computers don't "multiply their performance" in computation theoretic terms. If you took half as long by computing on two Turing machines simultaneously you didn't decrease the time complexity of the algorithm. You still churned through about the same number of total operations as if you had done it serially.

The distinction between the total operations and the total time is important, because your energy requirements scale with the time complexity of what you try to compute, not with the total time it takes.

An optical computer, for example, has a limit on how densely it can pack parallel computation into the same space, because at some point your lenses overheat and melt from the sheer intensity of the light passing through them. It's possible QCs are subject to similar limitations, and despite the math saying they're capable of computing certain functions polynomially, that doing so requires pushing exponential amounts of energy into the same space.


Good points. I just wanted to draw attention to the idea that Turing Machine is just one way of doing "computing". Whatever is proven about Turing Machines is proven about Turing Machines, not about all mechanical devices, like quantum computers, and "light-computers", in general.

I'm wondering: is there a universal definition of "computing"? Saying that "computing is what Turing Machines do" somehow seems circular. :-)


Well, Turing machines are supposed to be that definition. The Church-Turing thesis still hasn't been shown to be false. Proving something of Turing machines that doesn't hold for one of those classes of devices would mean refuting it. Basically we'd need to find some problem that, memory not being an issue, one class of computer can solve in finite time while another can't solve in any finite amount of time.

“Any process can be modeled by a Turing machine” is not the same as “Every process must be isomorphic to a Turing machine, and therefore share the same limitations.”

If I have a magic box that will instantly calculate the Nth digit of the busy beaver number for any size, that can be modeled by a Turing machine. AFAIK there is no constant-time algorithm, though, which is what our magic box gives us. So a Turing machine can’t match the performance of our magic box. But nowhere is it written that a Turing machine puts an upper bound on performance!

That’s what quantum computers are. They solve certain problems faster than a classical computer can. Classical computers can solve the same problems, just more slowly.


They are in fact the same, since Turing machines are mathematical models. If you had such a magic box, capable of computing things without regard for their time complexity, we might need to revise our assumptions about whether Turing machines are in fact capable of universally modeling all forms of computation. But like I implied on the sibling comment, I'm expressing skepticism that quantum computation is actually physically realizable, to the extent that functions can be computed in fewer operations (i.e. more efficiently) than is predicted by classical computing.

Well, you are free to believe whatever you want. But at this point, disbelief in quantum computers is akin to flat-earth belief. There are thousands of labs across the world that have performed entanglement experiments, and dozens of labs that have built working quantum computers specifically.

I didn't say I don't believe in quantum computers. What I said is I don't believe in quantum computing as a practical technology. Right now "quantum computers" are just research devices to study QC itself. Not a single one has ever computed anything of use to anyone that couldn't have been computed classically.

According to this ( https://arxiv.org/pdf/2302.10778 ), there are no branches, just decoherence (minus the "collapse").

A paper proposing a radically different formulation of quantum mechanics, with only a half-dozen non-author citations, and so heavy with jargon that it's hard to figure out what the author is claiming.

Sorry, my crank meter is dialed to 11 right now and I don't think this is worth spending further time on.


Why does the computation have to "happen somewhere"? I understand the desire to push the boundary of the classical reasoning, but I don't think it's something to be taken as an axiom, and I haven't heard about any proof either.

The easy comparison for devs: quantum computing is like git branching.

The big question being the PR approval process.


No, that's not a good comparison for MWI; it's more continuous.

That sounds like you have a narrow understanding of git; there is no way it is not continuous.

git branches are not continuous

You mean as opposed to discrete? (And not in the usual sense of continuous)

There is nothing stopping you putting probability distributions in git and branching them.


Git operates on commits (branches are really nicknames for specific commits with some extra features), which are discrete steps.

That’s what’s stopping you from doing anything continuous in git.


I think we are just not connecting in what we are saying, nbd.

Maybe our consciousness also branches, and all branches are approved.

Ah, so it's all pointers after all...

Thank you. This makes sense.

I’d prefer to say we just go back in time the moment it collapses. How can you disagree with me? No proof of anything and we can all contrive untestable solutions.

No wait, it’s actually God who exists in the wave collapse, and His divine intervention does the computation.


> I’d prefer to say we just go back in time the moment it collapses. How can you disagree with me? No proof of anything and we can all contrive untestable solutions.

That's close to retrocausality which is a serious theory/interpretation of quantum mechanics.


> “where else could the computation have happened, if it wasn’t being farmed out to parallel universes?”

Right. I’m definitely a layman here, but as a CS generalist this rings out as narrow-minded to me, biased towards the way we physically designed and then mathematically formalized “computation”.

Not that I have anything against multiverses but.. if I had to choose between “Turing-style computation happened, but needed parallel universes to do so” and “what happened is something counter-intuitive and insufficiently understood about the universe we inhabit” I would bet on the latter.


I'm an actual physicist here. Not in quantum computers, but that was my advisor's field and I've kept paying attention in the years since. Your intuition here is correct, I believe. Classical computer science deals with information processing limits of devices constructed using the classical physics laws of Newton and Maxwell. Quantum physics introduces new elements that permit new kinds of information processing machines, which have no obligation to be constrained by classical law.

> Quantum physics introduces new elements that permit new kinds of information processing machines, which have no obligation to be constrained by classical law.

Question then becomes, how do you build such non-constrained machines? Also, how do you confirm that such machines—or even small scale prototypes—are not constrained by classic laws of physics?


> Question then becomes, how do you build such non-constrained machines? Also, how do you confirm that such machines—or even small scale prototypes—are not constrained by classic laws of physics?

You mean, like... transistors? ;-)

https://en.wikipedia.org/wiki/History_of_the_transistor (search for quantum)


By doing the math of many physics calculations to design a setup that will make these quantum phenomena stable enough to be useful, then building it and seeing if prediction matches reality.

I don't know of any computer scientists, including the quantum-information-theorists, who think that QTMs can compute something that classical TMs cannot. That is, QM cannot solve undecidable problems.

Rule #1 is that technologists must pretend that they understand what they are doing. Woo-woo is fine as long as you can get your peers to play along with you.

But to emphasize one quote from the blog: "the new experiment doesn’t add anything new to this old debate" about multiverse interpretations vs. something else. An equally viable and perhaps conceptually simpler interpretation of the experiment is that the qubits are temporarily put in a 'superposition' of many different bitstrings, some operations are performed, and then measurement collapses this superposition back into a single definite bitstring. No multiverse required.

> No multiverse required.

True, but no exciting plot twists either. :(


Multiple universes are a "deus ex machina" that Hollywood applies a lot these days.

> Aren't these parallel universes running the same computation at the same time, and thus also "farming out" part of their computations to us? If so, it's a zero-sum game, so how could there be an overall performance gain for all the universes?

The result is the same in all universes. There's a kind of scatter-gather/map-reduce feel - each universe computes part of the problem, then you add up all their results to get the final one (in all universes).

The argument works for me, although I already believed the conclusion so I'm biased.


You just need to devise a system where the vast majority of universes give the right answer for this to work.

Start by at least having a universe for each possibility so that every code path is computed. Then add a mechanism to create many more universes when a correct result is hit.

So each wrong result has 1 universe and the correct result alone has 2^300 universes. Run this. You now end up with the correct result 99.99999% of the time.

I’m not arguing for or against the above take fwiw; it’s just easy to see how this could happen in many worlds. Effectively error correction just becomes a mechanism to create more universes for correct answers than incorrect answers and it all works. In fact it’s very reasonable to think of quantum error correction this way (it really is a mechanism to favour correct answers being observed, and in many worlds that means it creates more correct-answer universes).


> add a mechanism to create many more universes when a correct result is hit

A simple count has little to do with the outcome observed; as I understand it, each branch in a split is not equally likely. If you finely sub-divide a low-probability branch into 2^300 sub-branches, so what. Instead, you need to merge multiple high-probability branches back into a single outcome that is observed "99.99999% of the time".


In many worlds the probability observed can absolutely be thought of as the ratio of worlds with that outcome vs another. Any specific numbers here are just as an example.

This feels like taking quantum bogo sort seriously.

I think the "parallel universe" analogies are misleading at best. I actually think most "narrative" explanations of quantum physics are as well. For a long time I used to imagine quantum physics with the same language we talk about superheroes and time travel and consciousness and whatever.

But then I actually saw some of the math (very basic Heisenberg uncertainty equations) and good comparisons to classical wave mechanics and wave interference. Everything made more sense.

As a layman with some very rudimentary understanding of the math, I would ignore ALL talk about "farming out" and "parallel universes" and similar concepts. I think those ideas are charged with too many interesting but fantastical elements which you probably don't find in actual quantum mechanics.


> I think the "parallel universe" analogies are misleading at best. I actually think most "narrative" explanations of quantum physics are as well.

This is inherent to the problem of attempting to describe something that fundamentally does not work like the macroscopic world, using vocabulary rooted in that macroscopic world.


As a layperson with a math background who has dabbled in quantum computing, I agree.

It would help to call the "Everettian" interpretation by the original title given by Everett: the theory of the universal wave function.

Ignore the techno-babble about the multiverse. The way I understand Everett is simply as taking the math literally and seriously to its logical conclusion.


I think the minimum amount of understanding needed to grasp MWI is understanding the double slit experiment. The interference pattern is real, thus the waves are real, but we only experience one measurable outcome where a photon lands in a specific location. The probabilistic part is not which outcome happens but which outcome your reality follows.

The main issue with this line of argument is that you don't see people who like other interpretations claiming quantum computers won't work. Saying quantum computers imply many worlds is the classic mistake of wanting to show A=>B and arguing B=>A instead of ~B=>~A. If quantum computers were inconsistent with collapse interpretations, you'd expect people who think collapse interpretations are correct to be confidently predicting quantum computers will never work. But they don't; they just think the state inside the computer isn't collapsing.
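To spell out the logic in symbols (just restating the commenter's point that an implication is equivalent to its contrapositive, not its converse):

    $(A \Rightarrow B) \iff (\lnot B \Rightarrow \lnot A)$, whereas $(A \Rightarrow B)$ does not follow from $(B \Rightarrow A)$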

I'm often tempted to argue quantum computers at least clearly favor ontic interpretations (ones where quantum states are real). Because good luck computing things by subjectively having a computer instead of actually having a computer. But, as far as I know, you don't see QBists (quantum Bayesians) having any qualms with quantum computers. I think because if you're already biting the bullet of interpreting diagonally polarized light as subjective, and Bell tests as subjective, then interpreting a computation as subjective isn't fundamentally different. It's just more in your face about it.


Quantum computers do defeat hidden-variable models, which would have to hide the computation somewhere. But yeah, Bell and EPR ruled those out years ago anyway.

I don't see people who believe hidden variable models, like pilot wave, claiming quantum computers won't work. So I don't think quantum computers disprove hidden variable models.

I do agree quantum computers disprove local hidden variable models. Because they can run Bell tests, and local hidden variable models can't violate the Bell inequalities.


> local hidden variable models can't violate the Bell inequalities

Superdeterminism is an interpretation of quantum mechanics where they can:

https://en.wikipedia.org/wiki/Bell%27s_theorem#Superdetermin...


Last time I checked pilot wave theory was not developed enough to permit modeling of quantum computers, or at least no one had done the work yet to apply the theory to that use case. It is very undeveloped.

I’m not sure the sibling comment is correct, so I’ll take another stab at it.

First, I also think the “this favors the Everett interpretation” argument is incoherent, but for different reasons. I won’t be defending that view. But I can try to answer your question about the nature of computation and “where” that computation is taking place.

A quantum computer is essentially a random number generator, but with a non-uniform distribution that can be programmatically controlled.

Let’s start with the random part: a qubit can be either a 1 or a 0. When it’s first initialized, reading it will give you either a zero or a one with 50% probability. (I am oversimplifying to the point of error! This is technically incorrect, but please excuse me for the purpose of a simplified explanation that makes sense to computer scientists rather than quantum physicists.) In the Copenhagen interpretation the wave function randomly collapses in a way that can be measured as a 0 or as a 1, with 50% probability for either outcome. In the Everett interpretation there are two universes (really, two disjoint sets of universes): one where you see a 0, and the other where you see a 1, and your likelihood of ending up in either is 50%.

For example, with three qubits you can read a random value between 0 and 7, where each qubit provides one bit of a classical 3-bit binary value. If you're making a D&D simulator, you can take three of these registers (9 qubits total) and read them to get three values between 0 and 7, then increment each and add them together to get the simulated value of three 8-sided dice, for the attack value of some game weapon.

But so far we're assuming even and independent odds for a 0 or 1 in each qubit. A quantum gate operation lets you modify that probability distribution in interesting ways. Specifically you can take multiple qubits and entangle their state such that the eventual outcome (still random) is constrained to a non-uniform probability distribution. (The details don't matter, but it has to do with the fact that you can add wave amplitude, which is complex-valued and therefore can constructively or destructively interfere.)
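To make the interference point concrete, here is a minimal classical sketch (Python with numpy; my own illustration, not from the hardware): applying a Hadamard gate twice returns a qubit to |0> with certainty, because the two amplitude contributions to |1> have opposite signs and cancel.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    state = np.array([1.0, 0.0])   # qubit starts in |0>
    state = H @ state              # equal amplitudes on |0> and |1>
    print(np.abs(state) ** 2)      # -> [0.5 0.5], a fair coin

    state = H @ state              # the two paths to |1> cancel out
    print(np.abs(state) ** 2)      # -> [1. 0.], back to |0> with certainty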

So instead of reading three separate 1d8 registers and adding them together, we can use quantum gate operations to perform the addition: entangling the state of a new 4-qubit register (large enough to hold the sum of two 3-qubit registers) such that it represents the sum of two separate, independently chosen random values between 0..7. The combined sum will be a random value between 0 and 14, but with the value 7 being most likely, 6 or 8 next most likely, etc. (You can see the probabilities here: https://anydice.com) We then add in the third 3-qubit register, getting a 5-qubit result. For good measure, we then add a constant 3 using quantum gate operations so the result will be what we expect for adding 3 dice rolls, which start counting at 1 rather than 0. Now we have a great random number generator for D&D games. It's a 5-bit register that when you read it will give you a value between 3 and 24, with exactly the probabilities you would expect of three rolls of a fair 8-sided die.
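As a sanity check on those numbers, the target distribution can be brute-forced classically in a few lines (a sketch, not anything you'd run on the quantum device); this is the measurement statistics the entangled 5-qubit register is engineered to reproduce:

    from collections import Counter
    from itertools import product

    # Enumerate all 8^3 = 512 outcomes of three fair 8-sided dice.
    counts = Counter(sum(roll) for roll in product(range(1, 9), repeat=3))
    for total in sorted(counts):                        # totals run from 3 to 24
        print(f"{total:2d}: {counts[total] / 8**3:.4f}")  # peaks at 13 and 14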

You can interpret what this means in Copenhagen vs Everett, but there isn't a way in which it makes a material difference. I will say that the Everett interpretation is easier to think about: you set up the entangled operations such that there simply isn't a universe in which the register reads 31, or some value not representative of a 3d8 dice roll, and if you count the number of possible universes under the Born rule, holding the rest of the multiverse constant, the counts match the 3d8 dice probability distribution. So you don't know which universe you will end up in when you finally read the register and, under the many worlds interpretation, split off into one of many possibilities. But you know it will be a good 3d8 dice roll for your D&D game.

Now imagine you have two 1,024-qubit registers (each bit random), and you run a full set of shift-adders to multiply them together into a 2,048-qubit register. If you read the result, it'll be a random non-prime number. Why? Because it's necessarily the product of two 1,024-bit numbers. What's interesting is that you can kinda do the reverse, as Peter Shor figured out how to do in the 90's: start with a 2,048-bit register, run the shift-adder in reverse (really, the method of continued fractions, but I'm going for an intuition pump here), then read the two 1,024-qubit registers. What you will get is two values that when multiplied together will result in the input: the factors. If you put "60" in the input, you will maybe get the outputs (15, 4), or (5, 12), or (2, 30), or (1, 60). It's random which will pop out, but what you get WILL be a factorization of 60.

Now put your bank's SSL public key in the register, and read the factors out. The factors give you the private key.

I don't think any talk of multiverses is needed to understand this. It's simply mucking around with the probability distribution of a random number generator in clever enough ways that the random number generated, whatever it is, will be A solution (perhaps one of many) to your problem. You do this by destructively interfering the register with wave functions that represent non-solutions, and keep doing this until only solutions are left as possible values. Then you read your random number register and know the result is drawn from the solution set.
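For the curious, here's a toy sketch of Shor's reduction from factoring to order finding (my own illustration; the function name is made up). The brute-force loop below is exactly the exponentially hard step that the quantum computer replaces with interference over a superposed register, so this only works for tiny N:

    from math import gcd

    def factor_via_order_finding(N, a):
        g = gcd(a, N)
        if g != 1:
            return g, N // g          # lucky: a already shares a factor with N
        r = 1
        while pow(a, r, N) != 1:      # order finding -- the quantum step in Shor
            r += 1
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            return None               # unlucky choice of a; retry with another
        p = gcd(pow(a, r // 2, N) - 1, N)
        return p, N // p

    print(factor_via_order_finding(15, 7))  # -> (3, 5)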


Finally, someone mentioning Shor and his algorithm.

Shor’s algorithm is where the rubber meets the road for me. Synthetic benchmarks like what Google made just aren’t that useful imo.

There’s even a case to be made that all the work needed to make quantum computing reliable negates the supposed advantage and makes it no more scalable than conventional computing. I don’t subscribe to that, but a successful Shor’s algorithm attack on RSA (with enough bits) would certainly prove that wrong.


> and your likelihood of ending up in either is 50%.

I don’t quite agree with this characterization. You end up in both universes with 100% certainty, but there will then be two of you, one in each universe. Where the likelihood comes back in is when one of the two yous wonders which branch they are located in and tries to predict (retrodict?) it based on information collected prior to the branching; then it is a 50/50 likelihood. Of course, if they are able to look at the outcome of the experiment, it will become clear in which branch they are located.

This concept of probability in the context of MW is known as “self-locating uncertainty”: https://www.journals.uchicago.edu/doi/10.1093/bjps/axw004


This is philosophy and probably a waste of time to pursue further. Both statements can be correct depending on your point of view.

As a layman I really hope this is correct because it makes a lot of sense. The multiverse thing is supposed to help thinking about it but I just get stuck on some problems of intuition if I try to do it like that. Great comment!

Thanks!

Most of the debate over interpretations boils down to the mind projection fallacy. For each of us one interpretation is so clearly easier to think about, and everyone else must be bonkers for thinking otherwise. In reality one makes sense to me, the other makes sense to you. That’s fine.


The larger the devices we are able to build where quantum effects still dominate, the more it hurts any notion of a 'physical collapse' suggested by some Copenhagen branches.

Other, weaker forms of Copenhagenism scarcely even qualify as a real metaphysical interpretation of what is going on. I do think building big things in superposition does seem suggestive that there is no 'collapse upon interaction with observer' effect in metaphysical reality.


> the more it hurts any notion of a 'physical collapse' suggested by some copenhagen branches

Why? The mathematics of quantum mechanics describes perfectly well what is going on in these systems, to exactly the same level of accuracy. There is no predictive difference between Copenhagen and Everett interpretations.

This is a philosophical argument, not a physical one.


I'm not saying it is a purely empirical argument, hence my extensive mention of metaphysics. There are many, many, many theories that can match existing observation, such as superdeterminism or superdeterminism where there is a human-form God right outside of our causal observable universe that is using some magical force to determine where all particles are.

The principle of parsimony, imo, favors an Everettian explanation - and constructing larger contraptions with QM effects puts increasing parsimony pressure on any account in which observers like humans are not simply getting entangled with the system they are observing when we see 'collapse.' The mathematics of QM does not describe collapse, how it occurs from wavefunction evolution, or why observers are relevant.


I am also curious if Mark Everett, Hugh’s son, has an account here.

In many worlds, he certainly does.

EELS!

It’s not necessary for there to be parallel universes when all possible paths are already taken within this universe.

Parallel universes enter the conversation because we (non physicists) can't identify any scenario explainable with classical physics that allowed those paths to be taken in the allotted time and space.

Have a look at e.g. the path integral. It's a formulation for calculating the results of the probabilistic mechanics of quantum theory. I like to use the word "possible" instead of "probable" when explaining it, sometimes. When a car is hurtling down a road, it cannot really diverge from its "classical" path too much because it is already highly localized and entangled. There's a single car we're talking about. But a single isolated particle is not even so localized as to be able to be called a particle; it is more so a wave of possibility of being measured in a certain state, such as a location. (I'm still a student though, so I do not yet fully understand how particle location and space-patch entanglement connect.)

But, anyway, as I grasp it so far... nature already makes it possible to "take multiple paths" when a system is coherent, i.e. isolated from entanglement with the environment during that path, i.e. where the "which path" is not already decided via e.g. measurement. For example, the Lamb shift is the consequence of the fact that the universe really doesn't know which paths of particle creation and annihilation, if any, the force carriers between the electron and the proton took. So, it is able to take all of them. Why should our expectation win out over how nature says it can work? It seems like a mental contortion is needed - but nature really does operate on possibility, as a consequence of the more fundamental fact that there is no one / nothing in the entire universe, including the particle itself, that "knows" what state it's in or is going to be in, during a certain span of the system's evolution.

The moment there is such a record keeper, the influence of "possibility" diminishes.


The Everettian multiverse is a bit like seeing some bubbles and foam in the surf and concluding that there are "many oceans." Yes, there are many surfaces of water, but calling each bubble an ocean is a bit disingenuous.

Like the bubble analogy, new Everettian universes are local, small, and the total amount of probability density (water in this analogy) is conserved.


> Having said that, the biggest caveat to the “10^25 years” result is one to which I fear Google drew insufficient attention. Namely, for the exact same reason why (as far as anyone knows) this quantum computation would take ~10^25 years for a classical computer to simulate, it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!!

I don't understand that part, can someone explain? There should be plenty of problems that take a long time to solve, but are trivial to verify? Like for example factoring extremely large numbers that are the product of a few very large primes? Maybe not on the order of 10^25 years, but still?


This computation is... handwaves, most handwaving I've done in a while... initializing a random state, and it's well understood that checking whether this state was randomly initialized isn't computable in a reasonable timeframe on classical computers. ends extreme handwaving

This nerd sniped a bunch of us, because it sounds like "oh we proved P!=NP". The keys to understanding are A) hanging onto the "this" in "this computation" (this is easier when you're familiar with the computation and its contextual history in the field) B) remembering prime factors as a plausible application of QC.

Then faced with the contradiction of B, it's neatly resolved by "yes, but the quantum computer isn't big enough to do prime factorization yet"

As noted somewhat sideways in the blog, if someone has a computation that is A) not classically computable in a reasonable timeframe B) is computable on a miniscule quantum computer C) can be verified on a classic computer in a reasonable timeframe, a lot of researchers will be excited.


Does this mean that the problem of not being able to verify QC results will go away in the scenario where we have a large enough QC to solve an NP problem?

Correct, you nailed it.

FWIW:

There were some really cool comments on the last article re: Willow.

One of them was a reference to an apparently well-known "roadmap" of quantum scaling that got written up a few years back.

Apparently the Willow result was the 2026 baseline projection.

So their message was "well... not too big a deal, we achieved the 2026 milestone in ~2025." They also said that the same roadmap would have that achieved in 15-20 years.


The hardware is progressing, but we have an issue: we don't have algorithms to run on quantum computers. Besides Shor's algorithm, useful for breaking RSA, we have nothing.

Just vague ideas like: it could be useful for quantum simulations or optimisation or maybe ...

If tomorrow we had a fully working quantum computer, what would we run on it? We are in a vacuum.

The only hope is a breakthrough in quantum algorithms. Nothing in sight, not much progress on this side.

Oh yes, Zapata Computing, the best funded company in quantum algorithms just went under this year.


> Oh yes, Zapata Computing, the best funded company in quantum algorithms just went under this year.

It's kinda hard to make money by developing algorithms for imaginary magic computers.


For one, we could just start simulating quantum chemistry, though at that point it's more like "actually running quantum chemistry" rather than simulating.

Will AI take that use case though?

Sorry but you need to cite some sources... The fact that random adtech devs on HN have no algorithms useful to them to run on a QC does not mean much to me.

You’ve provided me with a beautiful framing to understand some of the most obtuse and self-serving arguments I see on here: “random adtech devs”.

laugh as we may, they get the money.

No-nonsense roll-up of what algorithms a quantum computer can run in principle, along with their big-O speedups (which don't necessarily reflect practical speedups): https://quantumalgorithmzoo.org/

Here is another paper about its applicability on a range of problems: https://cacm.acm.org/research/disentangling-hype-from-practi...

This is not true.

There are plenty of quantum algorithms.

https://en.wikipedia.org/wiki/Quantum_algorithm


Qualification: Quantum Algorithms that provide superpolynomial speedup, have an application, and don't need quantum memory or rely on quantum data.

Related:

Willow, Our Quantum Chip

https://news.ycombinator.com/item?id=42367649


>> But for anyone who wonders why I’ve been obsessing for years about the need to design efficiently verifiable near-term quantum supremacy experiments: well, this is why! We’re now deeply into the unverifiable regime that I warned about.

Can anybody explain to me why it is hard to find a problem that can be solved only by a basic quantum computer within a short timespan and can be easily verified by a normal computer? I thought there were so many algorithms out there for which one direction is fast and the reverse takes ages.


Summary: It's a real result. The cool part is that more qubits seem to live longer rather than shorter; the bad part is that the results are not explicitly verifiable, only through extrapolation.

Can someone just give it to me straight: should I sell my crypto positions and move to stock indices and real estate? Yes or no?

No nuance, just yes or no.


No.

This is progress towards quantum computing but nowhere near progress towards a real practical quantum computer that could break Bitcoin's algorithms. If progress continues it could be an issue in the future; check back in next time Google publishes a paper.


Asking the real questions here.

I got that ‘Google has been talking about Willow for ages, this isn’t new’ blah blah blah. The problem is the public only started talking about it yesterday.


No, won’t all your stocks and financial information be gone also?

Also, Willow can’t even factor 15 = 5x3, so you are good for a very long time.


no

not for the next 5 years for sure.


Before invoking parallel universes, how about comparing the system to nature's mind-boggling number of particles in the macroscopic world? A single gram contains about 10^23 ≈ 2^76 particles. Google's random circuit sampling experiment used only 67 qubits, which is still well below 76. I wonder why, since the chip had 105 qubits and the error correction experiment used 101 qubits.

Did Google's experiment encounter problems when trying to run RCS on the full 105 qubits device?

Before saying that the computation invoked parallel universes, first I'd like to see that the computation couldn't be explained by the state being encoded classically in the particles of the system.


Somehow the universe knows how to organise the sand in an egg timer to form an orderly pile. Simulating that with a classical computer seems impossible - yet the universe "computes" the correct result in real time. It feels like there is a huge gap between what actually happens and what can be done with a computer (even a quantum one).

The universe also computes Pi perfectly every time and nobody is surprised or calling side universes for help explaining it.

The universe does not calculate the digits of Pi. We do.

I think they mean that Pi is part of many formulas in physics.

It's a good question why that is so. But I wouldn't draw from that the conclusion that the universe somehow "calculates Pi", and then puts it in all the forces it "has" so that it turns out in our formulas. That is a rather fantastical way of thinking, and I do see its poetic appeal. A bit like "God doesn't play dice, or does he?"

What is calculation anyway we may ask. Isn't it just term-rewriting?


> It's a good question why that is so

Pi is just a description used for calculating perfect/near-perfect spheres. A sphere is nature's building block, since every point on its surface is the same distance from the centre.


> yet the universe "computes" the correct result in real time

Does it? In what sense the result is "correct"? It's not because it's perfectly regular, or unique, or predictable, or reproducible. So what's "correct" about it?

Completely out of my depth here, but maybe there is a difference between evolution of a physical system and useful computation: and maybe there's much less useful computation that can be extracted from a physical system than the entire amount of computation that would be theoretically needed to simulate it exactly. Maybe you can construct physical systems that perform vast, but measurable, amounts of computation, but you can extract only a fixed max amount of useful information from them?

And then you have this strange phenomenon: you build controlled systems that perform an enormous amount of deterministic, measurable computation, but you can't make them do any useful work...


It does seem to, and can anyone credibly say they aren't out of their depth in these waters? (the sandpile thing is not original, it dates back many years). Taking the idea that the "universe is a simulation" [0], what sort of computer (or other device) could it be running on? (and how could we tell we're living in a VM?)

From the same school of thought, to simulate the path of a single particle seems it should require a device comprised of more than a single particle. Therefore, if the universe is a simulation, the simulator must have more than the number of particles in the universe.

[0] https://en.wikipedia.org/wiki/Simulation_hypothesis


If the universe is just the universe, it needs only the number of particles in the universe.

"In what sense is ground truth correct?"

In the tautological sense.


> Somehow the universe knows how to organise the sand in an egg timer to form an orderly pile. Simulating that with a classical computer seems impossible

Is it really?

There's only ~500,000 grains of sand in an egg timer.

I don't know anything here, but this seems like something that shouldn't be impossible.

So I'm curious. Why is this impossible?

What am I missing?


Maybe it's not that hard to simulate, but let's start by looking at just two of the sand grains that happen to hit each other. They collide; how they rebound is all angles, internal structure, Young's modulus; they have electrostatic interactions, and even van der Waals forces come into play. Sand grains aren't regular; consider how determining the precise point at which two irregular objects collide is quite a challenge (and this isn't even a game, so approximations to save compute time won't do what the real world does 'naturally').

So while we can - for something as simple and regular as an eggtimer - come up with some workable approximations, the approximation would surely fall short when it comes to the detail (an analytical solution for the path of every single grain).


When the output looks the same as the original we would say that the simulation was successful. That is how computer games do it. We're not asking for the exact position of each grain, just the general outline of the pile.

An image of something is likely the simplest model of that thing, and it has A LOT less information than a 3D model of arbitrary resolution would have.

The real issue is that the sand isn't orderly sorted. At a micro level, it's billions and trillions of individual interactions between atoms that create the emergent behavior of solid grains of sand packing reasonably tightly but not phasing through each other.

Where's the performance on common useful tasks?

What's the largest number it can factor using Shor's algorithm? What's the largest hash it can compute a pre-image for using Grover's algorithm?


I think the record for Shor's remains at 15.

In simple terms, if I understand quantum computing, and please correct me if I'm wrong, the big benefit is parallel computing at a massive scale, whereas classical computing is serial in nature. If yes, likely both methods are useful. But a very useful use case for quantum computing is AI training to create the models. Training currently consumes a lot of GPUs, but QC has a nice chance to impact such a use case. Did I get it right?

> the big benefit is parallel computing at a massive scale

The problem with this line of reasoning is that, even though a quantum system might have many possible states, we only observe a single one of those states at the time of measurement. If you could somehow prepare a quantum system such that it encoded the N equally-likely solutions to your classical problem, you would still need to rerun that experiment (on average) N times to get the correct answer.

Broadly speaking, quantum computing exploits the fact that states are entangled (and therefore correlated). By tweaking the circuit, you can make it so that incorrect solutions interfere destructively while the correct solution interferes constructively, making it more likely that you will measure the correct answer. (Of course this is all probabilistic, hence the need for quantum error correction.) But developing quantum algorithms is easier said than done, and there's no reason to think a priori that all classical problems can be recast in this manner.
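As a minimal worked example of that amplification (my own sketch in Python/numpy, not tied to any particular hardware or to Google's experiment): one Grover iteration over a four-item search space boosts the marked item from probability 1/4 to certainty.

    import numpy as np

    n_items = 4        # 2-qubit search space with one "correct" answer
    marked = 2         # index of the answer we want to amplify

    state = np.full(n_items, 0.5)     # uniform superposition, amplitude 1/sqrt(4)

    # One Grover iteration:
    state[marked] *= -1               # oracle: flip the sign of the solution
    state = 2 * state.mean() - state  # diffusion: reflect amplitudes about the mean

    print(np.abs(state) ** 2)         # -> [0. 0. 1. 0.], the solution is certain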


I think that the big challenge is to recast any classical computations as quantum computations with a superpolynomial speedup.

I think that all classical problems can be cast as quantum computations because quantum computation is just computation - I believe that one can implement a Turing machine using quantum gates, so arbitrary computation is possible with quantum gates.

The superpolynomial speedups are the thing... I wonder if these will be limited to a class of computations that have no physical realization - just pure maths.


Ahaha! Read the subtitle of the blog, literally at the top: "If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel."

>it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!!

This claim makes little sense. There are many problems that are much easier to verify than to solve. Why isn't that approach ever used to validate these quantum computing claims?


That's what the author is saying. Researchers in this field should, for credibility reasons, be solving test problems that can be quickly verified. As to why this isn't done:

(1) They're picking problem domains that are maximally close to the substrate of the computation device, so they can hit maximum problem sizes (like the 10^25-year instance here). For many (all?) fast-verifiable problems they can't currently handle impressively large problem sizes. In the same way that GPUs are only really good at "embarrassingly parallel" algorithms like computer graphics and linear algebra, these quantum chips are only really good at certain classes of algorithms that don't require too much coherence.

(2) A lot of potential use cases are NOT easy to validate, but are still very useful and interesting. Weather and climate prediction, for example. Quantum chemistry simulations are another, as are nuclear simulations for the Department of Energy. Cryptography is kinda exceptional in that it provides easily verifiable results.


I would add one more to this, which I would argue is the main reason:

(0) For a quantum algorithm/simulation to be classically verifiable, it needs additional structure; something that leads to a structured, verifiable output despite the intermediate steps being intractable to simulate classically. That additional structure necessarily adds complexity beyond what can be run on current devices.

To pick an arbitrary example I'm familiar with, this paper (https://arxiv.org/abs/2104.00687) relies on the quantum computer implementing a certain cryptographic hash function. This alone makes the computation way more complex than what can be run on current hardware.


Isn't weather prediction extremely easy to validate? What am I missing other than looking out the window tomorrow?

Maybe a working quantum algorithm for weather prediction would outperform currently used classical simulations, but I wouldn't expect it to be bang on every time. Inputs are imperfect. So at best you could benchmark it, and gain some confidence over time. It could very well be good enough for weather prediction though.

Also I doubt that a quantum algorithm is possible that provably solves the Navier-Stokes equations with known boundary and initial conditions. At least you need some discretization, and maybe you can get a quantum algorithm that provably converges to the real solution (which alone would be a breakthrough, I believe). Then you need some experimental lab setup with well controlled boundary and initial conditions that you can measure against.

In any case the validation would be at a very different standard compared to verifying prime factorization. At most you can gain confidence in the correctness of the simulation, but never absolute certainty.


At scale, yes. But this would still be solving toy problems with fewer variables, fewer dimensions.

And they’re not actually solving weather problems right now, I think. That was just an example. What they are actually solving are toy mathematical challenges.


To the extent that they are useful, they will be easier to validate.

For example: "Calculate whether we'll have El Niño this year"

The validation will not need to be run on a machine, but on the real world. Either we have El Niño or we don't.


I think he was making that exact point in this blog.

Because we don't currently know a problem like this that both has a quantum algorithm we can run on this type of device with expected exponential speedup and has a fast classical verification algorithm. That's exactly the author's point; he has been advocating for quite a while for research into such an example, which would be better to use.

Depends on what you mean by "this type of device." Strictly speaking there are many efficiently verifiable quantum algorithms (including Shor's algorithm, the one that breaks RSA). But if you mean "this particular device," then yes, none are simple enough to run on a processor of this scale.

They likely mean on any of the current era of NISQ-like devices (https://en.wikipedia.org/wiki/Noisy_intermediate-scale_quant...) like this one or quantum annealers.

Do we have any proof or evidence that such a problem even exists?

> Why isn't that approach ever used to validate these quantum computing claims?

Hossenfelder’s linked tweet addresses this head on [1]. We need four orders of magnitude more qubits before a QC can simulate anything real.

In the meantime, we’re stuck with toy problems (absent the sort of intermediate test algorithms Aaronson mentions, though the existence of such algorithms would undermine the feat’s PR value, as it would afford cheap takedowns about the QC lacking supremacy).

[1] https://x.com/skdh/status/1866352680899104960


In terms of physical simulations, surely you can limit the size of the system to fit your exact complexity needs?

For example, isolate two molecules in a vacuum and predict their future state. Now make it 5, now 100, etc...


That's pretty much the kind of problem they're using here. The problem is that to verify the simulation, you need to run the simulation on a classical device, and that requires time that is exponential in the number of particles being simulated. They've verified that the classical and quantum simulations agree for n = 1, 2, 3, ..., and now here's a quantum simulation for an n that would take 10^25 years to do classically.

That's not true; there are enough interesting problems for NISQ. Quantum chemistry, for example.

Could it be that it's no accident that these kinds of problems are chosen? Somehow we can get from a quantum system an amount of computation that goes well beyond what a classical system can perform, but we can't seem to extract any useful information from it. Hmm.

The question is not whether a problem is easier to verify than to solve but whether there is a problem that is provably faster (in the complexity sense) on a quantum computer than a classical computer that is easy to verify on a classical computer.

Right, factoring and discrete logs both come to mind; is Google's quantum computer not able to achieve measurable speedups on those versus classical computation?

Not a large enough speedup to factor something that couldn't be done on a classical computer.

Well, really it can't run them at all, but a more-general computer this size which could, still wouldn't be large enough.


That's exactly correct, this chip cannot do either of those things yet. Which is why they use this toy-ish random circuit sampling problem.

Google's chip is not general enough to perform any quantum algorithm.

It is perfectly general, but the error rate is too high to operate all qubits simultaneously for more than a few tens of gates without error. This is why error correction is needed but then you need orders of magnitude more physical qubits to deal with the overhead.

Then why can they perform an RCS evaluation but not some other algorithm? Is it that RCS requires the fewest qubits by a huge margin?

He argues this point exactly.

For reference, Aaronson is widely regarded as one of the foremost members of the field.

Try "I don't understand this claim"?


> No doubt people will ask me what this means for superconducting qubits versus trapped-ion or neutral-atom or photonic qubits

I laughed at this. If I understood more than literally 2 words of that, then yes - no doubt I would ask about that.


It’s the implementation of the quantum computer - what are the qubits and gates and things made of?

It is like we are still figuring out whether it’s better to use vacuum tubes or semiconductors or what. Google used superconducting circuits for Willow, which can be fabricated with computer-chip-style lithography etc, but needs to be kept extremely cold and the connectivity is obviously etched permanently into the circuit. Other technologies have different pros and cons.


Thank you, much appreciated

Dumb question: can someone explain the following?

Imagine a ball falling on the ground.

Simulating the O(10^23) atoms in each one with a classical computer would take (say) 10^23 times the amount of work of simulating a single atom. Depending on the level of detail, that could easily take, you know, many, many years...

We don't call the ball a supercomputer or a quantum computer just because it's so much more efficient than a classical computer here.

I presume that's because it can't do arbitrary computation this quickly, right?

So in what way are these quantum computers different? Can they do arbitrary computations?


Great question. The device is fully programmable. Arbitrary one-qubit operations and arbitrary two-qubit operations between adjacent qubits can be performed. Theoretically these are 'universal for computation', meaning that a large enough device could compute anything computable. You can't program Quantum Tetris or whatever on a bouncy ball :).

But nevertheless, many of these 'beyond-classical' demonstrations feel a bit arbitrary in the way you describe, and there's good reason for this. Logical operations are still quite noisy, and the more you apply, the more output quality degrades. To get the most 'beyond-classical,' you run the thing that maps most readily to the physical layout and limitations of the hardware.

As things improve, we'll see more and more demonstrations of actually useful computations. Google and others have already performed lots of quantum simulations. In the long run, you will use quantum error correction, which is the other big announcement this week.


So isn't this the same as turning a classical computer on and letting it run on whatever garbage is on the RAM at that time, and when some nonsense shows up on the screen breathlessly exclaim that it would take several millennia to get the same result with an abacus, despite the fact that something was "computed" only by strict adherence to the definition of the word? It's not like it takes a quantum computer to produce a meaningless stream of data.

That's a great analogy, and I basically agree with it. But there would be some ancient, abacus-wielding mathematicians who would be impressed by this fast-but-useless computer. One might take it as a sign that once/if the computer can be controlled properly, it might be quite useful.

This might have been part of the history of classical computers as well, except that it turns out to be pretty easy to do classical operations with very high fidelity.


Yeah... But since the device is not doing anything meaningful, there's no way to tell if it actually is computing anything, rather than being a very expensive and very complicated random number generator. You don't need a quantum computer to generate a stream of meaningless numbers, so a machine capable of generating one doesn't demonstrate that it's computing quantumly.

Re: Noise

There are some probabilistic programs that we run that not only don't need determinism, but are actively harmed by it.

For example, deep learning training would probably work fine with 1% destructive noise, as long as there were a massive increase in compute.


Thank you!

The key difference is that the problem being solved is a math problem. It can be written down on paper.

A ball falling on the ground can be converted into a math problem. To get the conversion exactly right you will need to write down the exact state of the ball. But you will invariably incur small inaccuracies while doing this. For example, maybe the mass you write down is off by 1 part in a trillion. The math problem is the ground truth, so any conversion inaccuracies are now errors in the ball. In practice these inaccuracies will prevent even the original ball from solving the written down problem much better than you could with a computer.

In the case of random circuit sampling, the written down problem is a tensor network [1] (that happens to also be a shallow quantum circuit). Fundamentally, a tensor network just specifies a bunch of matrix multiplications to do. It's not even that big of a problem: only a few kilobytes of information (whereas the exact state of a ball would be gargantuan). All you have to do is perform the specified multiplications, interpret the result as a probability distribution, and sample from it. The obstacle is that these multiplications create intermediate values that are really really large. The quantum computer bypasses this obstacle by executing the tensor network as a circuit.

[1]: https://en.wikipedia.org/wiki/Tensor_network
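To see where the blowup comes from, here is a minimal statevector simulator sketch (my own illustration in Python/numpy, not how serious simulators or the tensor-network contraction actually work): the state of n qubits is a vector of 2^n complex amplitudes, so memory doubles with each added qubit, and n = 67 would already need on the order of 2^71 bytes.

    import numpy as np

    def apply_1q_gate(state, gate, qubit, n):
        # Apply a 2x2 gate to one qubit of an n-qubit statevector.
        psi = state.reshape([2] * n)
        psi = np.tensordot(gate, psi, axes=([1], [qubit]))
        psi = np.moveaxis(psi, 0, qubit)
        return psi.reshape(-1)

    n = 3                                           # fine at 3; hopeless at 67
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                  # start in |000>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for q in range(n):
        state = apply_1q_gate(state, H, q, n)
    print(np.abs(state) ** 2)                       # uniform over all 8 bitstrings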


The architecture/design of these quantum computers can do arbitrary computations. The specific machines we are currently able to build don't have enough qubits that remain coherent long enough to solve the problems we care about. We've built the aeolipile, now we need to scale it up to a steam turbine capable of moving ships and trains.

What does quantum computing need to move forward? Will just throwing a lot of money at the thing allow it to scale? Or are there fundamental problems blocking it that require new physics or new material sciences?

It's hard to say. For example, Google's paper talks about some rare (once an hour) bursts of strong errors. Are they fundamental, or do they have an easy fix? We don't know.

One obvious problem is cooling. We can't cool a million qubits in a single fridge, so we will need to split them between fridges and have them communicate. Also, the wiring is really complicated already and hard to scale (one reason IBM uses a heavy-hex layout is to have more space).

Another problem is connectivity. For transmon qubits connectivity is fixed. Applying a gate to two far-apart qubits requires a lot of swaps, which is expensive. This is less of a problem for ions or cold atoms because they can couple any two qubits; but they likely wouldn't be able to for large numbers of qubits. A rough illustration of the swap overhead is sketched below.
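A back-of-the-envelope sketch of that routing cost (my own toy model, assuming a linear chain and the usual 3-CNOT SWAP decomposition):

    # Hypothetical cost model: on a fixed linear topology, making qubits
    # i and j adjacent takes |i - j| - 1 SWAPs, each built from 3 CNOTs.
    def swap_overhead(i, j):
        return max(abs(i - j) - 1, 0) * 3

    print(swap_overhead(0, 10))  # 27 extra CNOTs just to route one 2-qubit gate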

Another thing is classical control, because the classical data needs to be processed at high speed. Some companies develop specialized hardware for that.

None of these is necessarily fundamental, but these problems need to be solved, in addition to the usual scaling (it is hard to manufacture these devices, and it becomes harder with each qubit).


I was told by a Microsoft researcher that it will unlock currently unsolvable problems in chemistry, weather modeling, materials science, fusion and plasma physics, drug development… the list went on but it was really long. Most advances he cited would result from improved simulations, iirc.

I don’t recall enough of the conversation (circa 2019) to remember anything about the properties of the problems it helps solve, so I can’t help generalize. Sorry.


Here is a perspective from another MS researcher: https://www.youtube.com/watch?v=WY3htdKUGsA&t=1564s&ab_chann...

Essentially, they argue that unless a strong algorithmic breakthrough happens (e.g. a cubic speedup instead of a quadratic one), the only practical problems for which a quantum computer will be useful are those where you get an exponential speedup: simulation of quantum systems (and breaking RSA encryption, if you count that). Even those are challenged by approximate classical simulation using deep learning. There will be some quantum models for which quantum supremacy will be useful and deep learning won't; the question is which classes of systems.


I think we are now at the point where it is just time and money. It's not a physics problem; it's an engineering problem.

I'm sure there are lots of complicated problems ahead, but I don't think we are waiting for someone to discover "new" physics.


What does homeopathy need to move forward? What does perpetual motion machinery need to move forward?

This is not totally fair; it is possible given certain independence assumptions. But they are likely not physically realizable.

It is almost certain that even within our understanding of quantum systems it is simply not possible to have a quantum computer (much less create one). The assumptions necessary to produce a completely uncoupled system are likely invalid; we have yet to produce a compelling experiment to indicate otherwise. [1]

[1] https://arxiv.org/pdf/1908.02499


From the paper:

> Goal 3: Create distance-5 surface codes on NISQ circuits that require a little over 100 qubits.

> The argument presented in the next section asserts that attempts to reach Goals 1-3 will fail.

Goal 3 is exactly what Google has recently achieved with Willow. Actually, they did even better, reaching a distance-7 surface code (with a little over 100 qubits). Q.E.D.

To be clear, I do think your article is interesting and was valid at the time, but it goes to show how fast the field is advancing.


IMHO, we're still a long way from anything truly useful. The problem Google used to demonstrate quantum supremacy feels more like a glorified random number generator. Even if quantum computers can generate results faster, processing and storing the data still takes a lot of time. It’s hard not to draw a parallel with claims about "instantaneous quantum communication," where entangled particles appear to defy the speed of light — it seems impressive at first, but the practical value remains unclear.

Isn't instant communication impossible, even with entanglement?

Still no real potential applications beyond factoring large integers (of dubious use) and doing some obscure quantum physics simulations.

QC companies are selling quantum AI and quantum finance and other hand-wavy stuff. Yet I don't see any algorithms that have a proven advantage in these domains over classical ones running on clusters of GPUs.


It becomes more and more interesting

I just want to know when this thing is going to put me out of work forever with its insane and crazy math skills.

Unless your job involves factoring numbers into primes (or a few other limited math problems) probably never.

There are two great lanes of progress happening in engineering these days: 1. Quantum Computing, and 2. AI.

If they can find some synergies between the two, that could be THE major development. Make better AI by using Quantum Computers, and make better Quantum Computers by applying AI.


QC – nowhere near practical applications

AI – being practically applied in many areas where it doesn't seem to be working well

I really do hope these aren't our best lanes of progress


I don't think it has crazy math skills at all. That's what classical computing is really good at - give it a problem of arbitrary length and a computer should be able to string together instructions to yield you a sum.

Quantum computing, especially right now, is simply focused on getting reliable, reproducible input/output behavior. It will be a really long time before it has insane and crazy math skills, and when it does, traditional CPU architectures will probably outperform it.

TL;DR - if the Texas Instruments calculator didn't put you out of a job, neither will quantum computers.


Aren’t matrix calculations a perfect field for quantum computing? i.e. AI would progress extremely fast, wouldn’t it?

No, not really. There is no speedup for general matrix multiplication, and generally it won't be practical for any problem with large amounts of input and output data. The closest thing is the HHL algorithm for solving big linear systems, which requires a bunch of caveats on the matrix, and even then it needs to be a subroutine of another quantum algorithm, since it outputs a quantum state and not the full vector.

Could bitcoin miners use this to "guess" the next block?

Mining is a SHA-256 guess-and-check lottery, which will be safe from a quantum computer.

Bitcoin private keys will become vulnerable with knowledge of the public keys.

There is a way to mitigate this: because a bitcoin address is a hash of a public key, a bitcoin protocol change can occur whereby the public key becomes a secret and the signature is a zero-knowledge proof (ZKP) of the prescribed transformation of secret -> address. These signatures are going to be big, however, and so the fees will be very high.
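A toy sketch of why the hash indirection helps (plain Python hashlib; real Bitcoin uses SHA-256 followed by RIPEMD-160, and the key bytes below are placeholders): only the hash appears on-chain, and finding a preimage of a hash is not something Shor's algorithm breaks (Grover only gives a quadratic speedup).

    import hashlib

    public_key = b"\x02" + b"\x11" * 32          # placeholder compressed pubkey
    address = hashlib.sha256(public_key).hexdigest()
    print(address)   # only this commitment is published until the key is revealed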


Not really. The primary issue with Bitcoin and quantum computers is that older Bitcoin transactions contained a full public key (and not a fingerprint / wallet address), so a quantum computer could use that to forge a signature on wallets including Satoshi's (this would be obvious and detectable if someone started moving very dormant coins). For the larger network there are some other points where it becomes an issue, but before then you'd need to fix HTTPS web traffic and PGP, too.

https://www.deloitte.com/nl/en/services/risk-advisory/perspe...


Yes, but I'm pretty sure the moment this is achieved bitcoin's price will collapse.

And the investment required to produce such a machine is undoubtedly in the XX billions.


"Yes, but I'm pretty sure the moment this is achieved bitcoin's price will collapse."

Not necessarily. There are two interpretations of the question: 1. QC would be very efficient and mine efficiently. 2. QC would break SHA and would be able to reverse the hashing function in O(1).

In scenario 1, the difficulty would increase. The global mining rate always stays the same, and the voting power would be distributed amongst the holders of the new compute; this has happened before with ASICs. Usually there's some graduality to it, and the capital is distributed so that there is never a 51% monopoly. It's especially relevant how big the jump is: if the new computer is stronger than all of the existing miners combined, then they get 100% theoretically (although with malice). In that case there would probably be a fork or, as you put it, BTC would collapse. However, if you have that power, holding BTC is probably not that important anyway. The actual compute is worth more.

In scenario 2, yes, BTC would crash, but then again the actual compute power would be more impactful. BTC would crash, but so would encryption, and planes, and the world.


> BTC would crash but so would encryption, and planes and the world.

Sad to see Bitcoin advocates use this dismissive argument.

Centralized systems will update their software as the threat increases. Meanwhile, there are no serious proposals for a quantum-resistant Bitcoin. Some are estimating the update will require a hard fork and take a year to roll out.


IIRC Bitcoin is quantum-safe because SHA-256 is quantum-safe.

"except now with the PR blitz from Sundar Pichai on down"

I definitely read this on first pass and thought 'damn, the CEO is hitting his own quantum supercomputer with bug reports'. That's cold. It just came out.


I just want to know if the stock movement is justified or not.

Stock movements aren't generally justified.

I want the ability to answer such questions with an accuracy slightly over 50%.

return false

You never know the reason for a stock move. It could be that people see Oracle's poor results today as a reason GCP will make more money.

If you're a journalist, you can simply phone a few institutional investors who make up the bulk of this kind of trading and with whom you have a trusting relationship, and they'll tell you.

If they all mostly agree, that's your answer.

And it's almost always what you assumed anyways from the news, because this stuff isn't rocket science.

So for all practical purposes, yes actually you usually do know, whenever a stock movement is large enough that it's clearly outside the normal noise of day trading.

I mean, can you prove the reason with 100% mathematical certainty? No. But can you be 99% sure? Of course.


You don't need a quantum computer to know everything from now until the end of times has already been priced in.

Oh wow it was up 6.2% at one point this morning from yesterday, now 4.6%.

Depends on which universe you're talking about.

Surprisingly sparse on actual information considering all this humble-bragging (actually, not even so humble) about how he spent 20 years teaching peasants to catch fish, only to be forced yet again to hand fish out with his own bare hands! Well, he didn't hand out much.

The post reiterates some facts from the original statement, which are pretty vague for most, I believe. The only useful clarification is that simulation results are indeed unverifiable, lol (as some might have suspected, but still nice to have a definitive confirmation from somebody who is supposedly an expert on this).

Then it addresses the cringeworthy "Everettian multiverse" statement discussion. Granted, it indeed was one of the most discussed things on the previous thread, but I have honestly assumed that it's so obviously moot that one can simply ignore it. Everyone knows that at least one of top-3 threads on HN comments must be either a lame joke or some sort of bikeshedding argument.

And that's pretty much it. "This blogger said this usual generic words, that reporter asked for an interview, but I declined, also, kudos to Google team for amazing work, it's unclear if it's any good, but it surely isn't bad!" Uh, ok, thanks for the clarification, man.

I get it that this post was written in a hurry, but given all that "fish-handing" stuff and all these commenters in this very thread complaining about how they don't know the difference between "trapped-ion or neutral-atom" qubits (as if this distinction were the very essence of the post, to which the author paid much more attention than to his responses to NYT journalists), it just doesn't deliver.

...So, what did I expect? Well, I didn't expect anything, but let's state the obvious. Google's benchmark was to produce samples from some very specific (and unverifiable) random distribution (which, BTW, he kinda says, but waaay less clearly than it could have been said). Obviously, nobody cares about that. Everyone cares about when they'll be able to run Shor's algorithm on Google's chip, factor huge semiprimes, and deprecate RSA into oblivion. Obviously. Some may wonder why that's not possible on this Willow thing; others may suspect it has something to do with the fact that you need a ton of "physical" qubits to emulate each logical qubit because of error correction. Also, it is widely advertised that the very thing that's special about Willow is its vastly better (and somehow "more promising to scale") error correction.

So what people really want from a generous and skillful fisherman is obviously some critical, appropriately-speculative, ELI5-style analysis of the updated state of the art. What does Willow have, what does it need to become practical, and what are some realistic projections for when we can get there? Where is all of that? Where is the fucking fish?!
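For what it's worth, here's the kind of back-of-envelope I mean (a sketch only; the constants are rough numbers that float around the literature, not anything Google has claimed for Willow):

```python
# Illustrative only: the surface-code rule of thumb is roughly
# 2*d^2 physical qubits per logical qubit at code distance d.
def physical_qubits_needed(logical_qubits: int, code_distance: int) -> int:
    return logical_qubits * 2 * code_distance ** 2

# Gidney & Ekera's oft-cited 2019 estimate for factoring RSA-2048 uses
# on the order of ~6,000 logical qubits (their full estimate lands
# around 20 million physical qubits once all overheads are included).
for d in (17, 27):  # plausible code distances, assumed for illustration
    print(d, physical_qubits_needed(6_000, d))
# -> 3,468,000 and 8,748,000 physical qubits. Willow has 105.
```

That gap between ~10^2 and ~10^6-10^7 physical qubits, not gate speed, is the crux of any realistic projection.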


Damn he's funny.

For 20 years I’ve been trying to teach the world how to fish in Hilbert space, but (sigh) I suppose I’ll just hand out some more fish.


[flagged]


> Anyone can write about quantum computers as if they are remotely qualified

He's the quantum guy. [^2]

You're applying classical computing intuitions (where verification is indeed usually much faster than computation) to quantum computing, where this relationship doesn't necessarily hold. The fact that verification can be as hard as computation in quantum computing is actually a major challenge that Aaronson has written about extensively.
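To make that concrete, here's a toy sketch of linear cross-entropy benchmarking, the metric used in these random-circuit sampling experiments. The catch is that `p_ideal` has to come from classically simulating the circuit, which at Willow's scale is exactly the ~10^25-year computation (the toy numbers below are made up for illustration):

```python
import numpy as np

def linear_xeb(samples, p_ideal, n_qubits):
    """Linear XEB fidelity: F = 2^n * <p_ideal(x)> - 1, averaged over
    the device's output bitstrings. F ~ 1 for a faithful device,
    ~ 0 for uniformly random noise."""
    return (2 ** n_qubits) * np.mean([p_ideal[x] for x in samples]) - 1

rng = np.random.default_rng(0)
p_ideal = rng.dirichlet(np.ones(8))             # pretend 3-qubit ideal probs
device = rng.choice(8, size=10_000, p=p_ideal)  # samples from a "good" device
noise = rng.integers(0, 8, size=10_000)         # samples from pure noise
print(linear_xeb(device, p_ideal, 3))  # noticeably above 0
print(linear_xeb(noise, p_ideal, 3))   # ~ 0
```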

N.b. I've been here 15 years, but I sometimes have to take a step back and adjust my approach, because I catch myself using this place as a quick break to let out frustration with something else. When I do, I find the HN guidelines helpful, almost a joy. [^1] They're written conversationally and are more meditations than rules.

[^1] https://news.ycombinator.com/newsguidelines.html

> Be kind. Don't be snarky. [...] Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

"Other threads in these comments talk about some rick and morty multiverse type of thing, just stay on the sidelines guys"

> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

"Anybody can write about quantum computers as if they are remotely qualified"

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize.

From Aaronson's actual post: "...for the exact same reason why this quantum computation would take ~10^25 years for a classical computer to simulate, it would also take ~10^25 years for a classical computer to directly verify the quantum computer's results!!"

He goes on: "...this is why I've been obsessing for years about the need to design efficiently verifiable near-term quantum supremacy experiments."

[^2]

• Received the [2020 ACM Prize in Computing](https://awards.acm.org/about/2020-acm-prize) for groundbreaking contributions to quantum computing

• [ACM Fellow (2019)](https://www.acm.org/media-center/2019/december/fellows-2019) for contributions to quantum computing and computational complexity

• Named [Simons Investigator (2017)](https://www.simonsfoundation.org/mathematics-physical-scienc...)

• Won the [Alan T. Waterman Award (2012)](https://www.nsf.gov/news/news_summ.jsp?cntn_id=123406), NSF's most prestigious young researcher award

• Received [Presidential Early Career Award](https://www.nsf.gov/awards/PECASE/recip_details.jsp?pecase_i...) for Scientists and Engineers (2009)

• Awarded [Sloan Research Fellowship](https://news.mit.edu/2009/sloan-fellows-0217) (2009)

• Won multiple Best Student Paper Awards:
  - Danny Lewin Best Paper at [STOC](https://www.sigact.org/prizes/student.html) for quantum computing proofs in local search (2004)
  - Best Paper at [Complexity Conference](https://computationalcomplexity.org/conferences.php) for quantum advice limitations (2004)
  - Best Paper at Complexity Conference for quantum certificate complexity (2003)


Ah so the guy knows a thing or two.

I can see that there could be an argument that the edge cases where P = NP holds could be precisely this new type of computing.

Re: MWI, the author seems to have dismissed the relevance of that discussion anyway; it just seems to be one of those pop topics that comes up whenever possible and is accessible to laymen (like discussions about gender popping up whenever a random sex-gene discovery appears)


> No doubt people will ask me what this means for superconducting qubits versus trapped-ion or neutral-atom or photonic qubits,

I only wonder what it means for cryptography.

The letters "crypt" don't appear in the text.


quantum computing is entering its "mid-to-late 1930s" - it's still quite some time away from its Turing-and-Enigma moment

but it has already passed the "1920s" of "only radios" - analog, non-digital devices

stability tech is almost here (check Quanta Magazine); next is the scaling up


As noted in the article, it could be a very sharp transition from "largest factorization performed so far is 3x7=21" to "we can factor real-world RSA."

If you want to make a classical computing analogy, it's like we're struggling to make transistors with more than a single 9 of reliability, and obviously you can't do complex computation with a circuit where every step gives garbage 1-in-10 times.

Except it's not obvious. 90% reliability could probably be made to work with silicon transistors using bog-standard error-correcting logic at the hardware level. Quantum error is a bit more troublesome to work with, but there are no known theoretical reasons error correction wouldn't work at existing error rates. We just need better algorithms, which might very well exist.

Or the next generation of chips could offer a few more 9s of reliability, and even with existing error correction just a few more sigma of reliability would put us over the tipping point of being able to build reliable large-scale systems.
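To put a number on the classical half of that claim (a sketch assuming independent errors, which is what makes "bog standard" redundancy work):

```python
# Triple modular redundancy: a majority vote over 3 independent copies
# fails only when at least 2 copies err at once.
def majority_error(p: float) -> float:
    return 3 * p**2 * (1 - p) + p**3

p = 0.10  # a single "9" of reliability per unit
for level in range(4):
    print(f"level {level}: error = {p:.1e}")
    p = majority_error(p)  # concatenate the scheme on itself
# 1.0e-01 -> 2.8e-02 -> 2.3e-03 -> 1.6e-05: any p below 1/2 improves,
# and each added level of concatenation compounds the gain.
```

The quantum threshold theorem plays the same role, just with a much less forgiving threshold and far more overhead.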


There were mechanical computers before the 20th century that had more complexity (in terms of total information) and were more useful than quantum computers are.

mechanical computers were, at best, calculators

universal computation was not a thing yet


Not true. Programmable looms and player pianos existed. They weren't Turing machines, but they were certainly more sophisticated than mere calculators. And of course there's the Analytical Engine, even if it was never built. These technologies were far more influential (both culturally and within the context of CS itself) and practical than QCs are. It's possible that if electricity had taken a bit longer to develop, we would've seen honest-to-goodness mechanical Turing machines in the early 20th century. It's not like we just didn't know how to design and build the machines; they were just very complex and expensive.

So if you want to make analogies between quantum and classical computers, QCs aren't even at the level of early 19th century technology. They're still trying to figure out what's even necessary to build devices of a practical size.


Nothing, yet.

This is how miracles work: they are just physical laws fast-forwarded. I like how this explains the ancient miracles. So maybe previous generations had access to these things and it was lost along the way.

You need more science

Oh there's enough science here. But to understand if someone needs more science, we should reach the conclusion scientifically. Etc etc


