
IBM casts doubt on Google's claims of quantum supremacy - furcyd
https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/
======
knzhou
Following the quantum supremacy controversy is very frustrating. It gets
constantly framed as a decisive step that changes everything, which is wrong.
But then there's also an opposite and _equally_ wrong reaction that any
challenges to the claim prove that quantum computing doesn't work at all.

The reality is that quantum computing technology has been improving at the
same steady pace for decades, and quantum supremacy was recently invented as a
concrete but not very useful goal that people could realistically aim for in
the near term. That is a totally standard and legitimate way of guiding
technological progress.

The problem is that quantum supremacy is vaguely defined (what does
"infeasible on any classical computer" mean? what computers can you use, and
for how long?) but it is presented to the public as a sharp boundary -- which
means that every company in the field has a huge incentive to claim they are
the first. This IBM claim is not a fundamental objection. Even granting that
everything they say is correct, Google could _still_ achieve quantum supremacy
by their standards through incremental improvement, slapping a few more qubits
on. So why even bother disputing Google's claim? Because it will set Google
back a few years, and that's enough time for IBM to make a claim of its own.
That's why academics, who have no stake in which company gets the hype, will
just consider this whole episode rather distasteful.

~~~
sanxiyn
Quantum supremacy is a big deal. It is goddamn experimental evidence against
the Extended Church-Turing Thesis. If you never believed ECT (for example, all
physicists seem to think ECT is obviously false), it may not matter to you,
but it is still a serious claim.

Yes, Google probably can achieve quantum supremacy by slapping on a few more
qubits. But those few more qubits have, in fact, not been slapped on yet. So
quantum supremacy is very much not achieved, and Google should not claim it
is.

~~~
calf
How does quantum computation conflict with the ECT? I think Aaronson has
talked about this, but I don't remember his position.

~~~
Cybiote
The original Church-Turing thesis concerns itself with computability, and
quantum computers do not violate it.

The Extended version goes on to say that a probabilistic Turing machine can
efficiently simulate all realistic models of computation. Quantum computers
very likely violate that efficiency claim, and this supremacy result is strong
evidence in support.

------
akjssdk
The main takeaway here is figure 1 in this article: they show that for
increasing circuit depth, computation time (on a classical computer) scales
linearly. Google claims, on the other hand, that the classical calculation
would scale exponentially. This is the basis for the graph in the Google blog
[1], which seems to suggest that the quantum computer can easily reach points
(such as qubits=50, cycles=25) that the classical computer would never be able
to reach. This is not true, if IBM is right. Their linearly scaling graph
shows that they are not cherry-picking their input, I would say.

[1]: [https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html](https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html)
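
A toy cost model of the scaling dispute (a sketch assuming a full
Schrödinger-style state-vector simulation; the real estimates depend on the
algorithm chosen):

    def classical_sim_time_units(n_qubits, depth):
        # One pass over all 2^n amplitudes per circuit layer: linear in
        # depth (IBM's figure 1), but exponential in the qubit count.
        return depth * 2 ** n_qubits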

~~~
Symmetry
There are many problems where the best known quantum algorithm is
asymptotically better than the best known classical algorithm but, as far as
I've heard, nobody has ever found a case where the best quantum algorithm can
be proved better than _every possible_ classical algorithm. So there's always
a danger of a faster classical algorithm turning up, no matter what problem
you attack.

~~~
hktuotroi
Isn't quantum factoring proven to be exponentially faster than the best known
classical algorithm?

The only open question is that we don't know whether a better classical
factoring algorithm exists.

Wikipedia:

> _On a quantum computer, to factor an integer N, Shor's algorithm runs in
> polynomial time. This is almost exponentially faster than the most efficient
> known classical factoring algorithm, the general number field sieve, which
> works in sub-exponential time_
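
For concreteness, the asymptotics behind that quote (standard textbook
figures; the Shor bound is the commonly quoted one without fast integer
multiplication):

    \[
    T_{\text{Shor}}(N) = O\!\left((\log N)^3\right), \qquad
    T_{\text{GNFS}}(N) = \exp\!\left(\left(\tfrac{64}{9}\right)^{1/3}
        (\ln N)^{1/3} (\ln \ln N)^{2/3}\,(1+o(1))\right)
    \]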

~~~
wongarsu
Yes, for factoring integers the best known quantum algorithm is better than
the best _known_ classical algorithm. The catch is that we don't know if a
better classical algorithm exists but just wasn't discovered yet.

Compare this for example to sorting. We have proven that any sorting algorithm
working with comparisons can at best be O(n*log(n)) fast, it's impossible for
a faster classical algorithm to exist.
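
For reference, the standard counting argument behind that bound: a comparison
sort is a binary decision tree that must distinguish all n! input orderings,
so its worst-case depth (number of comparisons) satisfies

    \[
    \text{depth} \;\ge\; \log_2(n!) \;=\; n \log_2 n - n \log_2 e + O(\log n)
    \;=\; \Omega(n \log n),
    \]

where the middle step is Stirling's approximation.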

~~~
logicchains
>We have proven that any sorting algorithm working with comparisons can at
best be O(n*log(n)) fast, it's impossible for a faster classical algorithm to
exist.

You can have a faster classical algorithm if it's distributed across n threads
for a size-n array (see the sketch below). For each index i in an array arr,
spawn a thread that counts the number of values in arr that are less than
arr[i] (an O(n) linear scan); call this x, then do outputArray[x] = (arr[i],
1), where outputArray is initialised to all zeros. If outputArray[x] is
already occupied, instead increment the second term (the count) in the tuple.
Once all threads are finished, outputArray will contain the sorted values and
the count of duplicate values (so this only works if !(x < y) && !(y < x)
implies y == x). Each thread does O(n) work, so total work is O(n^2), but
because the threads all run in parallel, the runtime is only O(n).
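
A minimal sketch of that rank ("enumeration") sort in Python. It breaks ties
by index instead of keeping (value, count) tuples, which gives every element a
distinct slot and avoids races; note that CPython threads won't actually run
compute-bound work in parallel, so this only illustrates the structure:

    from concurrent.futures import ThreadPoolExecutor

    def rank_sort(arr):
        # Each element's final index is the number of elements that must
        # precede it: strictly smaller values, plus equal values at an
        # earlier index (the tie-break). O(n) work per element, O(n^2)
        # total work, O(n) wall-clock with n truly parallel workers.
        n = len(arr)
        out = [None] * n

        def place(i):
            rank = sum(
                1 for j in range(n)
                if arr[j] < arr[i] or (arr[j] == arr[i] and j < i)
            )
            out[rank] = arr[i]  # ranks are distinct, so no write conflicts

        with ThreadPoolExecutor(max_workers=n) as pool:
            list(pool.map(place, range(n)))
        return out

    print(rank_sort([3, 1, 2, 1]))  # [1, 1, 2, 3]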

~~~
dragontamer
For the most part, classical algorithms assume a fixed amount of compute
power.

"Total work" is the important thing to minimize in practice. If total work
scales at O(n^2), then in practice the job scales at O(n^2). It's infeasible
to keep building more CPUs and threads as n grows.

------
mellosouls
Repeating my question from the other thread:

If IBM can cast doubt by describing a 2.5-day classical run, why not just do
that instead, and blow the claim out of the water?

There were a couple of responses (thanks) but I still don't understand - it
looks like it's not as easy as they imply.

~~~
hktuotroi
They didn't say 2.5 days on a laptop. They probably mean on something like the
Summit supercomputer. That is expensive.

Google implied their supremacy over all of the world's computers combined.

~~~
ehsankia
These calculations of 2.5 days and 10,000 years are based on specific
parameters, right? Could they just prove it on a smaller version of the
problem instead? Can they do a version of the problem that takes maybe 1
minute with IBM's algorithm but 1 year by Google's estimate?

~~~
joshuamorton
Aaronson's blog goes into this. The problem is exponentially difficult to
simulate in the number of qubits: each extra qubit doubles the storage
required. 25 qubits can be done on a laptop. 50 requires a supercomputer. 53
is just small enough to fit on the biggest supercomputer. 55 doesn't fit.

There's also an alternative algorithm that requires less space, but it takes
longer.
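
A back-of-the-envelope version of that doubling (a sketch assuming one 8-byte
single-precision complex amplitude per basis state; exact figures depend on
the simulation method):

    def statevector_bytes(n_qubits, bytes_per_amplitude=8):
        # Full state-vector simulation stores 2^n complex amplitudes.
        return 2 ** n_qubits * bytes_per_amplitude

    for n in (25, 50, 53, 55):
        print(n, statevector_bytes(n) / 1e15, "PB")
    # 25 qubits -> ~0.27 GB (laptop RAM)
    # 50 qubits -> ~9 PB; 53 -> ~72 PB (Summit-scale storage)
    # 55 qubits -> ~288 PB (more than Summit's ~250 PB file system)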

------
ajay-d
IBM's head of research had a pretty confident statement the other day: "I’m
convinced there are more quantum computers working here than the rest of the
world combined, in this building," Dr. Gil said.

[1] [https://www.nytimes.com/2019/10/21/science/quantum-computer-physics-qubits.html](https://www.nytimes.com/2019/10/21/science/quantum-computer-physics-qubits.html)

------
CJefferson
This has been a consistent problem with quantum computers: researchers come up
with a quantum computer that can basically do one thing and compare it against
a general piece of software not tuned for that problem. Once the software is
tuned for the exact problem, it comes much closer in performance to the
quantum computer. The same thing arises with D-Wave.

~~~
tarsinge
Yes, "tuned" with 250 petabytes of storage.

~~~
CJefferson
To be fair, that exact computer is, I believe, what Google originally compared
against.

------
andrewla
Also of note is Gil Kalai's updated take on the experiment [1]. The heart of
the complaint is that the quantum computer's solution to this problem relies
on a calibration process, which is done on a classical computer and requires
resources orders of magnitude greater than the quantum portion of the
computation:

> The Google experiment actually showed that a quantum computer running for
> 100 seconds PLUS a classic computer that runs 1000 hours can compute
> something that requires a classic computer 10 hours.

EDIT: Just as a note, lest you dismiss Kalai as an outsider/crackpot, you can
look at the researchers who show up in the comments on the blog post.

[1] [https://gilkalai.wordpress.com/2019/10/13/the-story-of-poincare-and-his-friend-the-baker/](https://gilkalai.wordpress.com/2019/10/13/the-story-of-poincare-and-his-friend-the-baker/)

~~~
zaroth
Crucial question - is the calibration a one time effort where the same
calibration can be used to calculate any number of results within the
calibrated space? In other words, can you amortize the calibration cost across
1,000 runs and come out an order of magnitude ahead?

~~~
readams
"John Martinis explicitly confirmed for me that once the qubits are
calibrated, you can run any circuit on them that you want."

[https://www.scottaaronson.com/blog/?p=4372](https://www.scottaaronson.com/blog/?p=4372)

~~~
andrewla
Just saw this! As Aaronson mentions, coupled with the IBM work, if it does
work out, it should be possible to verify that the 53-qubit samples are
actually valid through the cross-entropy test, which would be very exciting.

------
snops
The linked paper shows the "classical system" is the Summit supercomputer [1],
using 128 petabytes of disk. Still, it's remarkable that IBM's choice of a
different simulation algorithm, one that spends disk space to save computation
time, cuts the estimate from 10,000 years to only 2.5 days.

[1] [https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/](https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/)

------
jaredtn
Summary: Google overhyped their results against a weak baseline.

This seems to be commonplace in academic publishing, especially new-ish fields
where benchmarks aren't well-established. There was a similar backlash against
OpenAI's robot hand, where they used simulation for the physical robotic
movements and used a well-known algorithm for the actual Rubik's Cube solving.
I still think it's an impressive step forward for the field.

~~~
a_imho
Also AlphaZero vs Stockfish, the setup has/had a big question mark over it as
well.

------
nevi-me
I'm a layman on this, but reading IBM's rebuttal, it seems that they're saying
"you can't say that a conventional supercomputer would take 10,000 years,
because you haven't accounted for spilling to disk and other optimisations".

I understand that some of these optimisations might be in the processing
logic, but doesn't this end up being like comparing apples to oranges?

I understand that there's also what seems to be technicalities around what is
"quantum supremacy", but I assume that on a 'processor against processor'
basis, this is still significant. Am I oversimplifying things?

~~~
sanxiyn
No, the point is that spilling to disk lets the classical algorithm run in
time linear in circuit depth, so there is no exponential speedup there. I
elaborated on this in detail here:
[https://news.ycombinator.com/item?id=21334025](https://news.ycombinator.com/item?id=21334025)

------
readams
Scott Aaronson now has a detailed analysis and covers the IBM claims as well:
[https://www.scottaaronson.com/blog/?p=4372](https://www.scottaaronson.com/blog/?p=4372)

The high-order bit is that the IBM claims do not cast much of a shadow over
the claims of quantum supremacy; the scaling still shows the same exponential
difference.

~~~
dang
There's a thread discussing Scott Aaronson's post here:
[https://news.ycombinator.com/item?id=21335907](https://news.ycombinator.com/item?id=21335907)

------
dang
There are three threads. The others are:

Google's post
[https://news.ycombinator.com/item?id=21332768](https://news.ycombinator.com/item?id=21332768)

Scott Aaronson's post
[https://news.ycombinator.com/item?id=21335907](https://news.ycombinator.com/item?id=21335907)

------
d0ne
If the worst case to simulate on a classical computer is 2.5 days (per IBM's
blog), then they should either:

1. Actually perform the experiment per their approach, compare the results,
and provide tangible proof in the coming days, or

2. Retract their claims.

In either case, exciting times ahead.

[Edit: Yes, I read their published paper [1], which only claims this is
possible to achieve and does not perform the same experiment.

1. [https://arxiv.org/pdf/1910.09534.pdf](https://arxiv.org/pdf/1910.09534.pdf)]

~~~
reikonomusha
They would need the world's most powerful supercomputer to do that, and it may
not be available for them to use.

------
kerng
Seeing this published in Science Magazine is interesting by itself.

It's as if IBM and Google are, at a high level, working on different problems.
IBM is focused on research and public discussion of the problem space. Google,
on the other hand, is focused on marketing, using fancy terms like "supremacy"
that can easily be misunderstood by the public.

------
nappy-doo
Isn't Google's result published in Nature -- a peer reviewed journal?
Shouldn't IBM just do the same with their refutation?

~~~
sanxiyn
I am sure this exact rebuttal has already been sent to Nature, is under
review, and will be published in the next issue.

------
unnouinceput
Quote 1: "The Google group reiterates its claim that, in 200 seconds...."

Quote 2: " ...by tweaking the way Summit approaches the task, it can do it far
faster: in 2.5 days."

Well, it's still 200 seconds vs 2.5 days, so I'd say the claim kinda stands. I
know I would definitely buy the computer with 200 seconds vs the one with 2.5
days.

~~~
theon144
Not sure it's that clear-cut, as per the blog post [0] linked elsewhere in
this thread [1]:

> Recall that a quantum supremacy demonstration would be an experiment where a
> quantum computer can compute in 100 seconds something that requires a
> classical computer 10 hours. (Say.)

> The Google experiment actually showed that a quantum computer running for
> 100 seconds PLUS a classic computer that runs 1000 hours can compute
> something that requires a classic computer 10 hours.

0: [https://gilkalai.wordpress.com/2019/10/13/the-story-of-poincare-and-his-friend-the-baker/](https://gilkalai.wordpress.com/2019/10/13/the-story-of-poincare-and-his-friend-the-baker/)

1: [https://news.ycombinator.com/item?id=21334403](https://news.ycombinator.com/item?id=21334403)

~~~
Veedrac
According to Scott Aaronson this claim is incorrect.

> Next he said that the experiment is invalid because the qubits have to be
> calibrated in a way that depends on the specific circuit to be applied.
> Except, this too turns out to be false: John Martinis explicitly confirmed
> for me that once the qubits are calibrated, you can run any circuit on them
> that you want.

------
papln
Part of this controversy is reminiscent of the Higgs boson "God particle"
nonsense, where an unscientific catchphrase created by scientists got picked
up by the media and a misinterpretation overshadowed the real scientific
issue.

------
nickpsecurity
I'm with them. My two problems when reading the original argument:

1. They rigged the benchmark to be something quantum-specific before saying
classical computers couldn't do it. I get why. I'd just rather it be something
they both could do, classical having great algorithms, and the quantum one
outperforming it many-fold. An old example was factoring. I'm sure there are
other examples out there. Maybe try to simplify them for low-qubit circuits.

2. The biggest gripe: basing the argument on what classical computers can't
do. "Can't" is an assumption that's been disproven about so many things so
many times. Who knows what new algorithms people will come up with down the
line, doing what we previously couldn't do. I want an argument that's instead
based on what can and has been done vs the new thing being done. I similarly
preferred constructivist logic to classical, since the latter assumes things
don't exist. The universe surprises me too often for that stance.

Another thing about these is the comparison to supercomputers. Me personally,
I don’t care if the quantum computer outperforms supercomputers. I’m looking
at practical QC like anything else: does it do the job more cost-effectively
than alternative solutions? That cost includes hardware, any
leasing/licensing, maintenance, specialists, etc. If it does, then they’ve
achieved something for me. If it doesn’t, then QC was an implementation detail
for a system that didn’t make the cut.

FPGAs are already an example of how this might play out. They're friggin'
amazing, with all kinds of potential. I'd love to have a desktop loaded with
them, with libraries using them to accelerate my apps. Their being invented
and improved wasn't enough for that, though. The duopoly, especially as
supported by patents, has kept prices of high-performance FPGAs way too high.

It would be better if they were priced more like flash memory with the
synthesis software being more like mass-market A/V software. We’d see an
explosion of use. Instead, the legal system allowed the inventors to keep
competition and innovation at low levels to maximize profit. I expect the same
with practical QC for at least 20 years. I’ll still be using a mix of NUMA,
GPU’s and FPGA’s while early adopters mess around with quantum computers.

~~~
sanxiyn
Your biggest gripe can't be solved. P vs PSPACE is a great unsolved problem in
computer science (almost as big as P vs NP), and since a PSPACE machine can
simulate a quantum computer, any solution to your gripe immediately leads to a
resolution of P vs PSPACE.
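
Spelling that out (using the standard fact that classical polynomial space
suffices to simulate quantum computation, i.e. BQP ⊆ PSPACE):

    \[
    \mathrm{BQP} \subseteq \mathrm{PSPACE}
    \;\Longrightarrow\;
    \bigl(\,\exists\, L \in \mathrm{BQP} \setminus \mathrm{P}
    \;\Rightarrow\; \mathrm{P} \neq \mathrm{PSPACE}\,\bigr)
    \]

so a proof that some quantum algorithm beats every polynomial-time classical
algorithm would settle P vs PSPACE.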

~~~
nickpsecurity
That's disproven, or at least side-stepped, given there's already a problem
addressing my biggest gripe: factoring. It's been researched and optimized to
death on classical computers. Quantum should get a massive speedup. Such a
demo of quantum supremacy would be quite convincing, even if not 100% proven.

QCs aren't big enough for that yet. Surely there are other problems that are
ultra-optimized for classical computers, with lots of effort going into
improving them, that might still have a QC algorithm beating them by a
QS-supporting speed-up. Something that fits in these smaller machines. Again,
the claim that QC did what CC "likely couldn't" would be validated by the
effort CC researchers put into doing it without the same results.

I doubt their setup had nearly as many person-years of R&D thrown at solving
it as factoring, sorting, strings, BLAS, certain assembly routines,
optimization techniques, etc. There's probably something in those with an
effective alternative that's QC-able.

------
ace_33
Is 200 seconds not faster than 2.5 days??

I don't see the argument here

~~~
altares
Supremacy does not mean simply faster; that much is already demonstrated. It
means impossible to do on a classical computer.

~~~
djhaskin987
I'm sure you mean infeasible. "Impossible" goes back to a problem's
decidability. If a problem is undecidable, there's no way you're going to find
its solution by switching computational models; classical computers can do
anything a quantum computer can do, just "slower".

~~~
akvadrako
Not if they use up all the energy in the visible universe or they need so much
power they collapse and form a black hole.

Real quantum computers should be able to do calculations that are that
powerful, assuming we can prove the best classical algorithms are really
exponential.

------
justicezyx
I believe this is a healthy exchange.

A boom-and-bust cycle appears to waste more resources and progress more slowly
overall than steady refinement.

------
akimball
Embarrassing for IBM (the snarky quote of Dario Gil) and for Science (the
snarky headline).

------
comnetxr
There seems to be a lot of confusion here about the complexity, for example
calling the simulation here "in the linear regime" because of IBM's result.
This is inaccurate.

A source of this confusion is that we need to discuss space and time
complexity simultaneously. In the algorithms for quantum simulation (and many
other algorithms), there is a trade-off between space complexity and time
complexity. ELI5: you don't have to store intermediate results if you can
recompute them when you need them, but you may end up recomputing them a huge
number of times.

For the quantum circuit, the standard method of computation has memory
complexity exponential in the number of qubits (store 2^N amplitudes for an
N-qubit wavefunction) and time complexity D*2^N, i.e. linear in circuit depth
and exponential in the number of qubits. For example, under the IBM
calculation, 53 qubits at depth 30 use 64 PB of storage and a few days of
calculation time, while 54 qubits use 128 PB and a week of calculation time.
Adding a qubit doubles the storage requirements AND the time requirements.
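
A sketch of where those costs come from: a minimal single-qubit-gate update on
a dense state vector (a real simulator adds two-qubit gates and much more, but
the cost structure is the same):

    import numpy as np

    def apply_single_qubit_gate(state, gate, target, n):
        # The state is 2^n complex amplitudes (exponential memory).
        # One gate touches every amplitude once, so a depth-D circuit
        # costs O(D * 2^n) time: linear in D, exponential in n.
        psi = state.reshape([2] * n)
        psi = np.tensordot(gate, psi, axes=([1], [target]))
        psi = np.moveaxis(psi, 0, target)
        return psi.reshape(-1)

    n = 3
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0  # start in |000>
    psi = apply_single_qubit_gate(psi, H, 0, n)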

Under Google's estimation of the run time, they were assuming a space-time
tradeoff. There is a continuous range of space-time tradeoffs: max memory (as
IBM uses), max recomputation (store almost no intermediate results, just add
each contribution to the final answer and recompute everything), and a range
of in-between strategies. While I don't know the precise complexities, the
time-heavy strategies will have time complexity exponential in both N and D
(2^(N*D)) and constant space complexity. That's why Google's estimate for the
time complexity is so drastically different from IBM's.

Side note: IBM also uses a non-standard evaluation order for the quantum
circuit, which exploits a trade-off between depth and number of qubits. In the
regime of a large number of qubits but relatively small depth, you can again
classically simulate using an algorithm that scales as N*2^D rather than
D*2^N, using a method that turns the quantum circuit on its side and contracts
the tensors that way. In the regime of N comparable to D, the optimal tensor
contraction corresponds to neither running the circuit the usual way nor
sideways, but something in between. None of these tricks fundamentally changes
the ultimate exponential scaling, however.
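
To put the two orderings side by side (a toy comparison of the asymptotic cost
models named above, ignoring constant factors):

    def standard_order_cost(n, d):
        return d * 2 ** n  # simulate layer by layer: linear in depth

    def sideways_order_cost(n, d):
        return n * 2 ** d  # contract across the circuit: linear in qubits

    # Wide, shallow circuits favor the sideways contraction; narrow, deep
    # ones favor the standard order; near n ~ d neither is optimal.
    print(standard_order_cost(53, 20), sideways_order_cost(53, 20))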

As an extra step, you could also run compression techniques on the tensors
(e.g. SVD, throwing away small singular values), turning the space-time
tradeoff into a three-way space-time-accuracy tradeoff. You wouldn't expect
too much gain from compression, and your accuracy would quickly go to zero if
you tried to do more and more qubits or longer-depth circuits with constant
space and time requirements. However, the _real_ quantum computer (as Google
has it now, with no error correction) also has accuracy that goes to zero with
larger depth and number of qubits. Thus, one can imagine the next steps in
this battle as follows: if we say that Google's computer has not at this
moment beaten classical computers with 128 PB of memory to work with, then
Google will respond with a bigger and more accurate machine that will again
claim to beat all classical computers. Then IBM will add in compression for
the accuracy tradeoff and perhaps will again beat the quantum machine.

So this back-and-forth can continue for a while, but the classical computers
are ultimately fighting a losing battle, and the quantum machine will triumph
in the end, as exponentials are a bitch.

------
perseusprime11
I wish IBM had chosen their words more carefully here.

------
andrewla
We should probably amend the link to point to IBM's actual blog post,
[https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/](https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/)

~~~
dang
Ok, changed from [https://www.sciencemag.org/news/2019/10/ibm-casts-doubt-googles-claims-quantum-supremacy?rss=1](https://www.sciencemag.org/news/2019/10/ibm-casts-doubt-googles-claims-quantum-supremacy?rss=1). Thanks!

------
c3534l
I don't trust anything IBM says. They have a long history of lying and
misleading that continues to this day. It's about as trustworthy as Facebook
saying you shouldn't be concerned about privacy on Facebook.

~~~
rahuldottech
Do you have any sources that elaborate on your claims of IBM being misleading?

~~~
thesz
[https://pdfs.semanticscholar.org/9725/c12506f1d77a3e3b1587cd...](https://pdfs.semanticscholar.org/9725/c12506f1d77a3e3b1587cd25a06dfc9e3dd7.pdf)

Quotes: "Fear, uncertainty and doubt (FUD) is a tactic used in sales,
marketing, public relations,[1][2] politics and propaganda." and "FUD was
first defined (circa 1975) by Gene Amdahl after he left IBM to found his own
company, Amdahl Corp.:"FUD is the fear, uncertainty, and doubt that IBM sales
people instill in the minds of potential customers who might be considering
Amdahl products.""

------
rokalakt1337
Is IBM still relevant?

~~~
jfengel
They are doing a whole bunch of research, including in quantum computing. It's
hard to tell what's really "relevant" in research; if you knew, it wouldn't be
research. IBM stands as good a chance as anybody to produce a real-live
quantum computer, and they've hired the people who are in a position to judge
what Google really has and hasn't achieved thus far.

------
IshKebab
Not really though - they just said it wasn't quite as supreme as Google said.

~~~
iherbig
IBM's response to Google called "On 'Quantum Supremacy'" has this to say:

"Because the original meaning of the term “quantum supremacy,” as proposed by
John Preskill in 2012, was to describe the point where quantum computers can
do things that classical computers can’t, this threshold has not been met."

In other words, "quantum supremacy" is not being used as just a buzzword.

So yeah, IBM is casting doubt on Google's claims of "quantum supremacy"
because "supremacy" doesn't just mean "better than everything else."

~~~
excalibur
But it's still pretty fuzzy. If we agree that 10,000 years on a classical
system is supremacy but 2.5 years isn't, where's the cutoff? 25 years? 250? An
average human lifespan?

Edit: Thanks guys for pointing out where I misread the article, leaving
mistake intact so the replies make sense.

~~~
htfu
2.5 _days_, not years.

A still fuzzy and arbitrary, but IMO somewhat elegant, cutoff would let
feasibility be set by the market and by technological development: something
isn't worth throwing compute at _now_ if the timescale of the work is slow
enough that future classical efficiency gains would get there (much) quicker.

~~~
AlexCoventry
For market forces to be meaningful, the calculation would need to provide
value to someone. At the moment the desired output is simply "a distribution
which is hard to simulate classically," so assessment in terms of market value
would be very premature.

~~~
htfu
But that goes for any raw benchmarking; it's busywork by design.

~~~
AlexCoventry
I'm pretty cynical about this specific experiment, but that's taking it to
another level.

The ROI was terrible, but it wasn't mere busywork when Galileo was mucking
around with balls and ramps.

~~~
htfu
Haha, sorry, I should've been much clearer: I meant computer performance
benchmarking specifically.

You perform busywork, usually of a kind (or at least set up in a way)
favorable to your product if it's for marketing purposes, on the presumption
that it will in some manner transfer to real work.

The more synthetic the benchmark, the less likely that is, but it's still
indicative of something, and I don't see what you think is so different here.

------
2OEH8eoCRo0
Judging entirely by my experience with IBM: put up or shut up. Every
interaction I've had with IBM in the last few years has been shit, to the
point that all of my confidence in the entire organization is gone. Regain my
confidence. Put up or shut up, IBM.

------
cjbenedikt
Pretty rich of IBM to voice any criticism after its "Watson" disaster.

------
nautilus12
Saying that it evokes "white supremacy" is kind of a pathetic move.

~~~
JohnJamesRambo
Well the word creeps me out and I wish they would choose another.

It isn't even the right word to use, since the intended meaning is a quantum
computer that can do something a classical computer can't do at all, not just
do it better.

Quantum Possible sounds better to me.

~~~
Deukhoofd
"It isn’t even the right word to use since the intended meaning is a quantum
computer that can do something that a classic computer can’t do at all, not
just better at it."

No, that's exactly what it means in Googles research. It means it can do it
faster than a classical computer, not that it's impossible for a classical
computer to do.

~~~
JohnJamesRambo
[https://en.wikipedia.org/wiki/Quantum_supremacy](https://en.wikipedia.org/wiki/Quantum_supremacy)

First line- "Quantum supremacy is the potential ability of quantum computing
devices to solve problems that classical computers practically cannot."

We can argue about what "practically" means, but I read it as a very, very
long time. No one worries about someone breaking RSA-2048 with a classical
computer, because it is "practically" impossible.

> _In putting together our video, we estimated the age of the Universe to be
> 13,751,783,021 years or a little over 13.75 billion years*, therefore if you
> tried to break a DigiCert 2048-bit SSL certificate using a standard modern
> desktop computer, and you started at the beginning of time, you would have
> expended 13 billion years of processing by the time you got back to today,
> and you would still have to repeat that entire process 468,481 times one
> after the other into our far far distant future before there was a good
> probability of breaking the certificate. In fact the Universe itself would
> grow dark before you even got close._

[https://www.digicert.com/TimeTravel/math.htm](https://www.digicert.com/TimeTravel/math.htm)

This may not be the case with Quantum Possible computation.

------
poelzi
I know they don't understand λ_spm. I know their quantum orbit is totally
fucked up. I know that they don't understand the short magnetic field
condition. I know their particle sizes are all off (the reason you can't unify
QM and GR, btw). They are lacking the relativistic micro-curvature surrounding
the nucleus. I know they don't understand superconductors correctly; that's
why the fractional quantum Hall effect is a mystery. Superposition does not
mean what they think (over-interpretation).

Most of the ideas about quantum computers can never be realized. You can run
certain optimizations when you can model some applications as energy states,
but for a very limited spectrum of applications.

