Bitcoin’s race to outrun the quantum computer (decrypt.co)
73 points by benmunster1 57 days ago | hide | past | web | favorite | 87 comments

Just to be clear, this has barely anything to do with any cryptocurrency organization. It is a really regrettable framing for an event that should be of great interest to anyone dealing with cryptography, not just the fairly restricted group of cryptocurrency enthusiasts.

If scalable quantum computers can be built (which seems probable, as we are progressing fast in the number of qubits we can keep together), then certain restricted types of public key encryption will be broken by quantum hardware (all currently used public key encryption actually). We know other public key encryption algorithms exist that can run on classical computers and still not be broken by quantum computers. In many ways they are less tested and less practical, so for a while NIST has been sponsoring the development of such quantum-resistant running-on-classical-computers public-key algorithms. The usual name for this is post-quantum encryption.

> which seems probable, as we are progressing fast in the number of qubits we can keep together

Are we? Every so often I check back to see how well they're doing with actually running Shor's algorithm. In 2012 they managed to find that 21 = 3 x 7. And today... 21 = 3 x 7. That doesn't sound like progressing fast.

Quantum annealing is doing well for itself, but it isn't any sort of threat to encryption.

We definitely are getting closer. Look at any graph of "qubit lifetime" and you will see orders-of-magnitude improvements. (a quick google search gives me page 9 of https://hpcuserforum.com/presentations/tuscon2018/QCOverview...). We still need a couple more orders of magnitude before the qubits are useful, but the signs of exponential progress are unmistakable.

I would not take any experiment performing Shor's algorithm today or in the past particularly seriously, since they would probably be showing something fine tuned for the particular instance of the problem (e.g. 21). We do not have anything that can be called an "error-corrected logical qubit", and we need this before we can make serious claims about running algorithms like Shor's.

So if we extrapolate out the current curve 2 more orders of magnitude, it looks good. Therefore we can conclude it's looking possible? ;)

For historical precedent, see "Moore's Law".

Talking to experts in the field, we're doubling the number of functional qubits (i.e. accounting for error-correction requirements) about every 12 months right now. Looks like the current number is 16 qubits, where many algos get "really interesting" around 1000 qubits (and are functional even before that). So 5-6 years until a potential total transformation in computing paradigms. Quantum is in a similar place as deep learning in 2012.
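Taking the comment's numbers at face value (16 qubits today, doubling every 12 months, "really interesting" at around 1000), the implied timeline is easy to check. A quick sketch; all figures here are the commenter's estimates, not measured data:

```python
import math

# Hypothetical figures from the comment above, not measured data.
current_qubits = 16
target_qubits = 1000
doubling_time_years = 1.0

# Years needed if the doubling trend held exactly.
years = math.log2(target_qubits / current_qubits) * doubling_time_years
print(round(years, 1))  # about 6 years, matching the "5-6 years" claim
```

Of course, as the replies below point out, this assumes the exponential trend continues unbroken.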

Exponents don't continue forever in the real world.

It's anyone's guess to how far we go, but generally extrapolating a curve far off the current reading is a case of innumeracy.

Obviously. However, Moore's Law held for 40+ years. We are at 16 qubits in quantum vs. single silicon chips with 1.2 trillion transistors. Clearly some room for growth, especially given the billions of dollars in annual investment being poured into quantum.

There are some areas where 100x improvement is almost inevitable: self-driving cars, facial recognition, etc. You don’t need to be a genius to see it.

?? Facial recognition is very good already; how would we define a 100x improvement? A 100x reduction in error rate? NIST benchmarked current implementations at a 0.2% error rate (1 in 500) with massive databases. You're saying that you're really confident we'll reach a 1 in 50,000 error rate?

More generally: nothing is inevitable, especially when we're talking about orders of magnitude. Maybe it happens as quickly as we'd hope, maybe it doesn't. Maybe it doesn't even happen in the near to intermediate future at all. That goes for self-driving cars, quantum computers, general artificial intelligence, or whatever.

Moore's law was inevitable, until it wasn't.

Yes, two or more orders of magnitude of improvement are very much expected by most researchers in this field, simply because the current state of the hardware is infantile compared to what we know is possible (it just takes a lot of work to put all the different improvements in a single experiment).

That's a completely different argument. "Researchers think we'll get two more orders of magnitude in a reasonable time" is quite different from saying "if we extrapolate this curve, we'll get there in 5 years". After all, exponents describe growth until they don't; eventually we reach physical or economic limits (Moore's law has tapered off; bacteria in a dish of agar show exponential growth until they don't anymore; etc.)

I highly doubt we'll get there in 5 years, myself.

We can conclude that if we're optimists.

We conclude it's looking possible because fundamental physics says it should be possible.

Damn, looks like I should stop using 21 as my PK then.

Not sure what algorithm you use for encryption, but 21 doesn't look like a prime. Maybe run a Miller Rabin test on it? ;)
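For the joke's sake, here is a minimal Miller-Rabin sketch. This is the small-base deterministic variant; the bases 2, 3, 5, 7 are known to be sufficient for all n below 3,215,031,751:

```python
def is_probable_prime(n: int) -> bool:
    """Miller-Rabin test; bases 2, 3, 5, 7 are deterministic
    for all n < 3,215,031,751."""
    if n < 2:
        return False
    if n in (2, 3, 5, 7):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in (2, 3, 5, 7):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True

print(is_probable_prime(21))  # False: 21 = 3 * 7
```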

It's the modulus of my RSA PK, so the product of two large primes rather than a prime itself. Of course, I can't say what the factors are, otherwise anyone could crack it even if they don't have a quantum computer!
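With the factorization public, the joke key really is broken: anyone can reconstruct the private exponent from the factors. A toy sketch with n = 21 (all numbers illustrative and absurdly small; this is exactly what Shor's algorithm would hand an attacker for a real-sized modulus):

```python
# Toy RSA with the joke modulus n = 21 = 3 * 7.
p, q = 3, 7
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # 12, only computable if you know p and q
e = 5                    # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent, recovered from the factors

message = 2
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message  # decryption round-trips
print(ciphertext, d)  # 11 5
```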

Just XOR 21 times, problem solved!

Always with the same mask, to make sure it is applied well.

That list is still going though I don’t have it handy (I’ll try to update with the mailing list).

There are a number of problems that are well established with strong security proofs vs classical and quantum algorithms - the problem is that the key and message sizes are fairly terrible for practice currently.

On the plus side all the current symmetric ciphers are still pretty solid against quantum machines (Grover’s search is a sqrt improvement so would simply require a doubling in key size, but I suspect in practice not necessary)
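The "sqrt improvement" works out as follows. This is a sketch of the standard security-margin arithmetic, not a statement about any concrete machine:

```python
import math

# Grover's algorithm searches an unstructured space of size N in
# roughly sqrt(N) quantum queries. For a k-bit symmetric key, N = 2^k.
k = 256
classical_work = 2 ** k
grover_queries = math.isqrt(classical_work)  # exactly 2^128 here

assert grover_queries == 2 ** (k // 2)
# So AES-256 retains ~128-bit security against Grover, and doubling a
# 128-bit key to 256 bits restores the original margin.
print(k // 2)  # effective post-quantum security bits: 128
```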

We seem to be getting better at using quantum computers for hard math. What I'm curious about is when we'll be able to use them for easy math. How many qubits do we need to run Doom Quantum?

Quantum computers are conjectured to outperform classical computers on a very restricted set of problems. These problems happen to be extremely important (e.g. protein folding and some forms of linear algebra), but for tasks that are already efficiently solved by classical computers you probably will never use a quantum computer (maybe in many decades this will change, when it becomes as "trivial" to construct quantum CPUs as it is to construct classical CPUs)

Or the subset of users who need what a quantum computer can offer add something analogous to a GPU to their general purpose computer.

> which seems probable, as we are progressing fast in the number of qubits we can keep together

Probable? Ridiculous:

× There is still no hardware implementation of a single logic gate despite, what, 40 years of research?
× The stability of qubits is pathetic.
× There is (almost) no software stack.
× The number of qubits is ridiculous, and 40 years of research can't even produce a consensus on how many qubits are needed to beat a CPU on at least one task. Even if quantum could reduce complexity on some algorithms (I strongly doubt that, and quantum speedup has never been shown empirically), the constant factor has no way to be handled with 50 qubits; more like 1,000,000 qubits at minimum.
× No sign of the needed breakthrough in error-correcting codes.

I consider quantum speedup to be theoretically flawed, based on an interpretation of quantum mechanics needing "parallel worlds", which necessitates a misunderstanding of causality to make sense.

I consider quantum speedup to be flawed in practice too, as orders upon orders of magnitude of progress are needed and the trend is: no progress or very slow progress.

Thus quantum computing is like vaporware, which takes funds that would be far better allocated to e.g. medicine, AGI, or ASICs.

While I agree that QC is far from cracking any encryption today or in the near future; this

> theoretically flawed, based on an interpretation of quantum mechanics needing "parallel worlds"

is patently false. There is nothing in QC that would stop working under the good ole' Copenhagen interpretation.

In fact, if there was even a theoretical possibility that a QC experiment would work only in some QM interpretations, they would no longer be interpretations, but falsifiable hypotheses.

Copenhagen postulates collapse, no? If collapses turn out to happen at problematic times or because of problematic conditions, it could potentially doom quantum computers.

In contrast, many worlds says basically: There is no collapse. Ever.

The fact that no one has found what triggers collapse yet is to me evidence that Copenhagen is false.

Pilot wave theory, on the other hand, is afaiu completely equivalent to many worlds, but less philosophically convincing.

Where many worlds says there is only the wave function, pilot wave says there is a wave function but also the material world. And material world is a sort of "view" of the wave function. Continuously rederived from it. But the interesting thing is, this material world has no causal consequences. All causality stems from the wave function and its evolution. This is the compromise to get same predictions as many worlds.

Various interpretations postulate collapses under specific conditions, while other formalisms postulate something mathematically/observationally equivalent under mathematically/observationally equivalent conditions. In particular, "problematic collapses" are a thing all quantum computing researchers have to deal with in their work. It is actually one of the largest subfields of this research program (practical quantum error correction).

> The fact that no one has found what triggers collapse yet is to me evidence that Copenhagen is false.

It's just the same as "no one has found what triggers a split between worlds, and why the physical reality we observe is following this particular one of the many worlds".

Well, if Occam's razor is not enough, you must know that both the many-worlds interpretation of QM and the orthodox view of QM are refuted. The one remaining is the statistical one, that is: QM is a map, not the territory; QM descriptions are descriptions of the optimal use of our knowledge (or lack of it) of a system. Sufficient knowledge of the environment allows use of classical mechanics. Quantum paradoxes do not exist because phenomena such as two simultaneous antithetic states (e.g. a cat both dead and alive) are just intermediate QM calculation results that have no reality but are useful for calculations. Sadly, QM is invaded by BS artists and "philosophers" who prefer fun magic to serious rigorous truths. Source: https://www.google.com/url?sa=t&source=web&rct=j&url=https:/...

It is a travesty to put forward such unfounded claims, and then point to as source a paper by the great Anthony Leggett on the use of condensed matter systems to probe the quantum measurement problem.

What you are claiming is merely the proposition of "hidden variables" aka "local realism", a theory which has been thoroughly rejected by experimental measurements [1,2,3].

[1] https://www.nature.com/articles/nature15759

[2] https://doi.org/10.1103%2FPhysRevLett.115.250401

[3] https://doi.org/10.1103%2FPhysRevLett.115.250402

I think it is rather misleading to state "40 years of research" the way you are doing it. It was only in the late 90s that there were theoretical reasons to believe quantum error correction is feasible. And your goalpost of logical gates completely disregards the enormous (more than 6 orders of magnitude) exponential progress in the lifetime of physical qubits (see sibling comments for references). We still need a couple more orders of magnitude before things work out at the "logical gate layer", but it is dishonest to disregard the progress up to now.

There's a very intuitive inductive argument that places quantum computers ahead of classical computers.

Basically that quantum interactions are more fundamental to the universe than more terse ones, stochastic randomness or presence/absence.

If a computing device (in a universe) is built to compute using more fundamental substance (to that universe) it should stand to reason it is more efficient.

If you model a photon, does your model ever move as fast as the photon?

I would like to understand what you mean but I can't, could you rephrase / exemplify?

"Basically that quantum interactions are more fundamental to the universe than more terse ones, stochastic randomness or presence/absence." What does more fundamental means? Presence, absence and randomness are included in the set of quantum interactions they are not of a different nature.

" If a computing device (in a universe) is built to compute using more fundamental substance (to that universe) it should stand to reason it is more efficient." What does fundamental substance mean? I don't understand this paragraph.

"If you model a photon, does your model ever move as fast as the photon?" My model does not move. If you meant that my mental visual simulation of a photon moving is slower than a photon moving, yes obviously, it's slowness and lack of accuracy is just a limitation of my brain. How is that related to quantum speedup?

Help me understand you; I would really like to believe in quantum speedup, but the explanation needs to be sound.

I'll reiterate that I was just hoping to share my intuition behind the subject but I'll try and point you in the right direction.

I mean that the relationship between Newtonian mechanics and Quantum mechanics is such that the matter described by Newtonian mechanics is composed of matter whose precise behavior is exclusively described by quantum mechanics, that is that underlying matter is fundamental to the macro behavior.

I'm by no means a physicist, but a formal treatment of this kind of relationship (and its predicted consequences) is captured in Constructor Theory, which David Deutsch (who may have been early in identifying this) was able to formalize in the much more precise language of physics.

Intuitively, if you took a whiffle ball (or pair of whiffle balls, or some other ensemble of whiffle balls sufficient to represent a system's state) in your hands, and a little obstacle course representing, say, beam splitters, and with each in your hands somehow made these classical implements behave like photons, you would have little hope of doing this at light speed.

Well thanks for the effort, that was interesting.

In general I have issues understanding many QM explanations, as I disagree with all interpretations of QM except the statistical one. You might be interested to read this Nobel prize winner's paper, which tests the limits of quantum mechanics and refutes common interpretations in a sound manner. https://www.google.com/url?sa=t&source=web&rct=j&url=https:/...

The only truly safe encryption is quantum encryption. Any classical encryption algorithm should be breakable with quantum computers, even if we haven’t figured it out yet like with Shor’s.

Citation needed? Symmetric encryption algorithms like AES (with at least 256 bit key) are considered safe even against quantum computers, based on some reasonable math/qm assumptions.

Yeah, so far as I know the only algorithm that provides an asymptotic speedup over a classical computer (for AES or similar symmetric-key crypto) is Grover's algorithm. That would reduce a 256-bit key to 128 bits of security, which is hardly disastrous.

All of that seems to only be “medium” secure in face of future quantum computing.

“It's been estimated that 6,681 qubits would be required to use Grover's algorithm to break AES-256 encryption.”


Not only that, there are asymmetric encryption algorithms not vulnerable to Shor's algorithm, we're just not using them yet (because they're slower and haven't been as well-studied as RSA or ECC).

Obvious counterexample: one-time pads.

Even if you don’t go that extreme, there’s no indication that e.g. AES is breakable.

What do you mean with quantum encryption? QKD (Quantum Key Distribution)?

Though the concept seems nice, it has quite a few issues in practice. Such as distance between coupled devices, and the need to have a dedicated physical communication line.

Oh and before you think you can just chain several devices to get longer distances: this opens you up to classical MitM attacks.

Source: https://www.nature.com/articles/npjqi201625

> Take the Bitcoin blockchain: an unencrypted public key is sent along with every bitcoin transaction, and left unencrypted during the time it takes for the network to confirm the block, around ten minutes.

My understanding is that it remains unencrypted forever. That’s why it is a public key. As long as the coins are not moved to another account that public key stays a valuable target.

Edit: As DennisP pointed out, my understanding was wrong; indeed, only the hash of the target is published until you make a transaction from an account.

Nitpick: What people like to call "Satoshi's coins" were actually mined in transactions to pubkeys rather than to pubkey hashes.

Early Bitcoin transactions did not use addresses.

Example from block 1:


You'll see at the bottom that the opcode is a PUSH / CHECKSIG rather than the later DUP / HASH160 / PUSH / EQUALVERIFY / CHECKSIG format.

So this isn't true. (blockchain.info derives an address but actually the pubkeys are right there in plain sight. Have at it!)

Most transactions are indeed made to pubkey hashes though, yes.

So then if Satoshi destroyed the private keys instead of holding on to them just in case for an event like this, it's possible that someone might take the million or so BTC just sitting around?

Depending on how easy it is, all those addresses that were abandoned containing 50 BTC are also up for grabs.

Would a coin without public addresses be better suited against such future?

Not if Satoshi's coins have never moved. A bitcoin address is a hash of a public key. The public key isn't revealed until the first time funds are transferred out of that address.

Nitpick: What you're calling "Satoshi's coins" were actually mined in transactions to pubkeys rather than to pubkey hashes.

Early Bitcoin transactions did not use addresses.

Example from block 1:


You'll see at the bottom that the opcode is a PUSH / CHECKSIG rather than the later DUP / HASH160 / PUSH / EQUALVERIFY / CHECKSIG format.

So this isn't true. (blockchain.info derives an address but actually the pubkeys are right there in plain sight. Have at it!)

Thanks, I had no idea!

Okay, thanks!

It's worth mentioning that Bitcoin addresses aren't exposed on the blockchain as public keys, but as hashes of the public keys. If you want to steal the coins you'll need to turn ripemd160(sha256(publicKey)) into a private key. Good luck.
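The commit-then-reveal property is easy to demonstrate. The sketch below uses plain SHA-256 as a stand-in for Bitcoin's actual RIPEMD160(SHA256(pubkey)) construction (ripemd160 support in hashlib varies by OpenSSL build), with a made-up pubkey:

```python
import hashlib

# Illustrative only: a made-up 33-byte "compressed public key".
pubkey = bytes([0x02]) + bytes(32)

# Bitcoin addresses commit to RIPEMD160(SHA256(pubkey)); plain SHA-256
# stands in here since ripemd160 isn't available in every hashlib build.
commitment = hashlib.sha256(pubkey).digest()

# The commitment is deterministic, so the owner can later prove
# knowledge of the pubkey, but the hash itself reveals no invertible
# structure an attacker could work backwards from.
assert commitment == hashlib.sha256(pubkey).digest()
print(len(commitment))  # 32 bytes on the chain, pubkey stays hidden
```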

Yes, but the public key is exposed when you send from that address. If a quantum computer is quick enough, it could get your private key and issue its own transaction, racing you to get into a block.

Cracking a public key in 10 minutes is much harder than cracking a public key (at all). Considering we haven't cracked anything yet, I wouldn't be worried about that. Also, if you're transferring thousands of bitcoin and want to be safe, you could always privately send the transaction to a miner rather than broadcasting it. In that case the tx would have 1 confirmation before the public knows about it, requiring an attacker to also pull off a 51% attack.

A QC factors the current public key algorithms in constant time. More qubits just means you can factor bigger keys. How big that constant is will depend on how the QC works.

Sending privately to a miner would help, but you'd end up with a very centralized system since you would want to send to the biggest miner, to minimize the time until that miner produces a block. You can't have miners sharing the transaction, even privately among themselves, since if they did that then one of them could have a QC and you wouldn't have any way to know who stole your money.

There had better be enough money available then; after all, if it costs you $1M to steal $100, then you've just lost money.

It doesn't need to be quick if you still have coins at that address.

Wrong, due to P2PK transactions. But also, no matter which hash you use as an address, the person has to reveal their public key in a digital signature to transact the coins. If an individual has access to a quantum computer which solves the ECDLP fast enough, they can run a node which monitors transactions, conceivably recover your private key, and then exploit a race condition by transacting the same coins with a higher fee to steal them.

It's impossible to use a hardware wallet without exposing the master public key to an untrusted device (not hardware wallet).

The only solution would be to run a Bitcoin node on the hardware wallet itself, which is not possible right now.

So you have to treat the public key as a public key.

That would involve your computer being already compromised. If that's the case AND you have enough bitcoins to make running a quantum computer worthwhile, you're not securing your coins properly.

The problem includes far more than bitcoin... your bank, for example.

Not really. Your bank (and the whole web) can easily switch to a quantum resistant encryption algo when time comes.

For Bitcoin this is much harder.

It's not much harder for bitcoin...

The community will just do a blockchain snapshot of balances at an agreed upon block and start a new distributed ledger with quantum resistant encryption

Snapshots have been done hundreds of times

Everybody will need new private keys.

What do you do with old coins? Satoshi's one for example? Or lost coins that nobody has the key for?

Do you set a threshold day, after which all unclaimed coins are just marked destroyed forever? If not, how do you know someone claiming some coins didn't use a quantum computer to get the key?

>Do you set a threshold day

yeah pretty much.

>What do you do with old coins? Satoshi's one for example? Or lost coins that nobody has the key for?

If those coins haven't been touched for decades, despite widespread announcements that pre-quantum cryptography is broken (presumably it wouldn't happen overnight), it's safe to say that nobody is going to claim them.

Don't treat any address differently.

You won't know that. You will provide the tools, and also new accounts will have assurance and regenerate confidence in the system.

So we have moved from concluding that bitcoin use is an irreparably flawed concept to a method of maintaining viability of the concept.

It's easy to add a quantum resistant algorithm, but as it's much more expensive to verify and takes more block space, the transaction fees will be much higher. Transitioning is a huge political problem as well.

It'll be politically easy. Nobody wants someone else able to steal their money.

> The community will just do ...

This is the funny thing about decentralized services. They require centralized action. Not saying it is good or bad, but it is... different.

It requires community consensus; that's the opposite of centralized.

Anyone can make a snapshot, assigning value to it is not centralized

I said centralized action. Community consensus sounds nicer, but simultaneous action is required.

I don’t imagine this in other settings... “I only do breaking changes on my api so let’s have everyone change their client at midnight”.

Imagine if we all changed implementations of SQL at once!

Actually, we changed the meaning of the $() function inside the devtools console across all browsers at the same time. That was fun. :)

No, that's not necessary.

People just add flags in their mining protocol that only trigger when a threshold is reached. The last Segwit changes needed a signaling threshold closer to 90% just to trigger the next change.

We are only assuming that consensus would be reached quickly given the scenario presented. It would be irresponsible to design it to need simultaneous action. People would have to consider stopping use of the bitcoin network for X,000 blocks while consensus is being reached, and only until it is reached.

> Ah, but with the right quantum computer, able to process information at speeds exponentially faster than today’s supercomputers? Suddenly, what seems uncrackable becomes child’s play, able to be broken in under 10 minutes.

Cringe. Quantum computers are not "able to process information at speeds exponentially faster than today's supercomputers". They're able to solve a very specific subset of problems which are hard for classical computers. A very specific subset, called BQP, mostly having to do with things like finding prime factors. NP-hard problems are probably uncrackable with quantum computers.

I get that a quantum computer can run faster with more states. But can someone with quantum computing experience explain how a quantum computer can address exponential computation? Does it reduce exponential computation into polynomial?

My answer to the above question is no. Assuming a quantum computer has 10 qubits, its running time for exponential algorithms is still exponential. A simple example: 10 qubits give 2^10 states, which is only about 10^3. Still exponential.
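To make that concrete: n qubits give a 2^n-dimensional state space, but the speedup you can extract depends entirely on the algorithm. A quick comparison of textbook asymptotics (operation/query counts, not hardware estimates):

```python
import math

n = 128  # bits of key / problem size

# Unstructured search: classical brute force vs. Grover.
classical_search = 2 ** n
grover_search = 2 ** (n // 2)   # quadratic speedup, still exponential

assert grover_search == math.isqrt(classical_search)

# Factoring is the exception: Shor runs in time polynomial in the bit
# length, roughly n^3 gate operations instead of anything like 2^n.
shor_ops = n ** 3
print(shor_ops < grover_search)  # True: only here is the gap dramatic
```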

Will breaking classic computer public key encryption reveal any secrets that were encoded prior to them being obsolete?

Should we be recording encrypted streams and saving them for a few years until we can break them? Is there any value in that?

What do you think the NSA is doing?

The public blockchains make all this more feasible since they have the community keep the data around in original state for them!

So much cheaper than recording SSL/TLS traffic, for example.

Also that is why they would find it important to exfiltrate the data from a company (or government) before it can be re-encrypted with something better.

Yes, if this ever works and is affordable.

Suppose our adversaries have a machine which can do Shor's algorithm at the scale needed to break modern public keys for say $1M and an hour, and they have been recording encrypted sessions.

For sessions encrypted using RSA key exchange, it is enough for them to spend $1M and wait one hour and then they can decrypt everything that they've recorded, using one particular key. So e.g. a typical HTTPS site only has one key for months or even years, if they've recorded the encrypted data they can read all of it for $1M.

Where Forward Secrecy (e.g. ECDHE) was used, the cost is $1M (and an hour) for each session, because the keys change each time so each fresh session needs the expensive algorithm.
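Under the comment's hypothetical $1M-per-run price tag, the arithmetic is what makes forward secrecy matter. A sketch; both numbers below are illustrative assumptions, not estimates:

```python
# Hypothetical cost model from the comment above: one Shor run breaks
# one key for $1M. These numbers are illustrative only.
cost_per_shor_run = 1_000_000
recorded_sessions = 10_000

# RSA key exchange: every session was wrapped with the site's one
# long-lived key, so a single run decrypts the whole archive.
rsa_cost = cost_per_shor_run * 1

# Forward secrecy (e.g. ECDHE): each session used fresh ephemeral
# keys, so every recorded session costs a separate run.
fs_cost = cost_per_shor_run * recorded_sessions

print(fs_cost // rsa_cost)  # 10000x more expensive for the attacker
```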

What a deceptive article.

They take a crypto conference happening at a prestigious institution and sprinkle in some bitcoin punditry to make them seem related.

"The amount of time the universe itself has left—around 0.65 billion billion years."

Is this true?? I've never heard this claimed before.

I’ve never seen it framed that way, but they may be referencing the estimated timeframe for the “heat death” of the universe:


Yeah, that was my assumption, but I've never seen that specific time frame proposed, and it just seems like the kind of thing I would have heard of if it was a known fact.

You can estimate this because there is a known amount of energy around in the visible universe (energy, mass, etc.). We know from inflation that this number changes at a given rate. We also know that time and entropy are connected, so when the entropy of the known universe is at its maximum, it would be essentially meaningless to say time could continue, i.e. ‘the end of time.’

This is a general problem with public keys, not just Bitcoin!!

One of the problems with current PKI is weakness in the face of quantum computers, leading to a new crop of algorithms being submitted to NIST, etc.

I wanted to ask whether the following simple scheme, based just on cryptographic hashes, can be used CONFIDENTLY, SECURELY and RELIABLY in many situations where Asymmetric Key cryptography is used today, and in many others too, such as providing provably random polling etc. It is very similar to a One Time Pad but uses a cryptographic hash function to generate the OTP codes.

Here is the scheme: Everyone generates a random private key K[p] and stores it just like any asymmetric private key (encrypted with some non-stored derived key).

They use any cryptographic hash function that hasn’t had a serious preimage attack (perhaps even MD5?), hash it n (e.g. 10,000,000) times to get h[p][n], and publicly commit to that number. This is like a public key.

The hashes are long enough that it’s infeasible to reverse them. Key strengthening can be achieved by jumping a few onion layers between transactions.

If you start running out then you post a new public key, signed with one of your remaining onion layer codes. Any verifiers store the original public key per participant, and then can replace them with the new public key if it was properly signed by the old one, etc.
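The core of the proposed scheme is a classic hash chain (Lamport's S/KEY one-time passwords work the same way). A minimal sketch, using SHA-256 and a small n for illustration:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Prover: random secret, hashed n times; the final hash is the
# "public key" everyone commits to. n is tiny here for illustration.
n = 1000
secret = secrets.token_bytes(32)
chain = [secret]
for _ in range(n):
    chain.append(H(chain[-1]))
public_commitment = chain[-1]

# To authenticate, reveal the previous link; the verifier hashes it
# once, checks it against the stored commitment, then replaces the
# commitment with the revealed value for next time.
revealed = chain[-2]
assert H(revealed) == public_commitment  # verifier accepts
public_commitment = revealed             # one onion layer peeled
assert H(chain[-3]) == public_commitment
```

Each link can be revealed only once, which is why the scheme needs a large n and a re-keying step when the chain runs out, exactly as described above.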

Use case: generating provably random numbers by mutually distrusting parties

Participants gradually reveal their hashes, one onion layer per transaction. Each provably random seed is a function of the alphabetically smallest/largest three of those hashes at the next onion layer. If not all of them reveal the hashes in time, they gossip, verify and agree on which ones are the smallest/largest three before some cutoff point, like “most reported that most reported”. That leaves tons of bits of entropy coming from everyone!

Use case: Authenticator Apps

The hash h[p][n+1] would be a hash of some substring of h[p][n] with enough bits that finding all chosen preimages (by an eavesdropper of the previous code) would be infeasible in advance. Perhaps 10 alphanumeric characters is enough. Also, when displaying the code to enter, the authenticator app can tell the user a number from 1-100 indicating to the verifier how many onion layers to peel, making it harder to precompute the preimages. Or the user would have to enter the entire hash via the network-connected computer scanning a QR code, NFC or something. From a security standpoint, this method seems superior to the HOTP and TOTP schemes used in authenticator apps today, since there is no need to trust the verifier with any secret keys (https://www.ietf.org/rfc/rfc4226.txt). Also there is no need to synchronize clocks, since the client simply lets the server know how many times to run the hash, and increments that number every time.

Use case: Signing Payloads

Participants reveal a payload and commit to an HMAC signature by using the cryptographic key at the next onion level, which at that point would be known only to them. All these signatures are collected into a blockchain block / merkle tree timestamp / similar thing, and it is sent to the participant before they reveal the onion key they used to sign it.

Use case: Off the Record Messaging

The Blockchain or Merkle tree is private between a few parties only, so once the next onion level is revealed, no participant can prove the payload was generated by a given participant; since all the onion hashes were known, any of them could generate a new valid tree with any payload history. They can only prove it to each other, or, given enough “witnesses” attesting to that tree, people might trust them on the basis of consensus of (presumably) mutually distrusting parties, but that’s not the same thing as cryptographic proof. But that is true of any OTR conversation.

Use case: Restoring Access

This can be used instead of Shamir Secret Key sharing. The server would have to store keys for every participant, and M of N participants would just sign that they approve authorization of some new session, some new key, or whatever. These signatures could be easily checked by anyone who has the public keys of the M participants who signed it.

Use case: Decrypting payloads

Not sure how one would do this one, to be honest. With PKI, someone could encrypt a payload that can only be decrypted by a private key holder. I see how to do signatures and HMAC, but not the other way.

Finally a good use-case for bitcoin: a bounty for the first quantum computer to crack it.

That bounty is currently at 185 billion USD. Of course, it would be pretty difficult to sell all the bitcoins without slippage, but I guess one could cash in at least several tens of millions.

Not really. BTC has no value other than its ability to be exchanged for USD. If BTC is cracked no one will trade their USD for it anymore and it will be worthless.

The first person to assume custody of a large amount of bitcoin will sell and be incredibly rich

They can siphon off a lot before other people catch on to people’s complaints and realize the best practices were followed

Yes, but anyone smart enough to run a quantum computer to solve the private keys and steal bitcoin is also likely to be smart enough to first examine the blockchain for balances that would not get complained about. Apparently, something like 20% of all BTC is just stranded in inadvertently lost or destroyed keys. Starting with 'accounts' after the closely watched genesis block which have seen no transactions for maybe 5-8 years should net a lot of funds without being noticed.

Whilst quantum computing is an interesting topic, gotta say I think bitcoin's challenges are far more mundane and pressing than that.

No.1 challenge is how to stop centralized exchanges from extensive market manipulation.

Decentralized crypto currencies are never going to achieve their goals, whilst players like Bitfinex and Tether are in a position to "print" unlimited amounts of money and use them to purchase coins like BTC.
