If scalable quantum computers can be built (which seems probable, given how fast we're progressing in the number of qubits we can keep coherent), then certain types of public key encryption will be broken by quantum hardware (in fact, all currently deployed public key encryption). We know other public key algorithms exist that run on classical computers and still resist quantum attack. In many ways they are less tested and less practical, which is why NIST has for a while been sponsoring the development of such quantum-resistant, running-on-classical-computers public-key algorithms. The usual name for this is post-quantum cryptography.
Are we? Every so often I check back to see how well they're doing with actually running Shor's algorithm. In 2012 they managed to find that 21 = 3 x 7. And today... 21 = 3 x 7. That doesn't sound like progressing fast.
Quantum annealing is doing well for itself, but it isn't any sort of threat to encryption.
I would not take any experiment performing Shor's algorithm today or in the past particularly seriously, since it would probably be showing something fine-tuned for the particular instance of the problem (e.g. 21). We do not have anything that can be called an "error-corrected logical qubit", and we need this before we can make serious claims about running algorithms like Shor's.
Talking to experts in the field, we're doubling the number of functional qubits (i.e. accounting for error-correction requirements) about every 12 months right now. It looks like the current number is 16 qubits, where many algorithms get "really interesting" around 1,000 qubits (and functional even before that). So 5-6 years until a potential total transformation in computing paradigms. Quantum is in a similar place as deep learning in 2012.
It's anyone's guess how far we'll go, but generally, extrapolating a curve far beyond the current reading is a case of innumeracy.
More generally: nothing is inevitable, especially when we're talking about orders of magnitude. Maybe it happens as quickly as we'd hope, maybe it doesn't. Maybe it doesn't even happen in the near to intermediate future at all. That goes for self-driving cars, quantum computers, general artificial intelligence, or whatever.
Moore's law was inevitable, until it wasn't.
I highly doubt we'll get there in 5 years, myself.
There are a number of problems that are well established, with strong security proofs against classical and quantum algorithms; the problem is that the key and message sizes are currently fairly terrible in practice.
On the plus side, all the current symmetric ciphers are still pretty solid against quantum machines (Grover's search is a square-root improvement, so it would simply require a doubling in key size, though I suspect even that isn't necessary in practice).
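To make the square-root arithmetic concrete, here's a quick sketch (the function name is mine, not from any library): Grover's algorithm searches an unstructured k-bit keyspace in on the order of 2^(k/2) queries, which is why doubling the key size restores the original security margin.

```python
import math

def grover_queries(key_bits: int) -> int:
    # Grover needs on the order of sqrt(2^k) = 2^(k/2) quantum queries
    # to search an unstructured k-bit keyspace.
    return math.isqrt(2 ** key_bits)

# AES-128: ~2^128 classical guesses, but only ~2^64 Grover queries.
assert grover_queries(128) == 2 ** 64
# AES-256 under Grover is back to ~2^128, i.e. AES-128's classical margin.
assert grover_queries(256) == 2 ** 128
```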
× there is still no hardware implementation of a single logic gate despite, what, 40 years of research?
× the stability of qubits is pathetic.
× (almost) no software stack.
× the number of qubits is ridiculous, and 40 years of research can't even produce a consensus on how many qubits are needed to beat a CPU on at least one task.
Even if quantum computing could reduce complexity on some algorithms (I strongly doubt that, and quantum speedup has never been shown empirically), the constant factor can't be handled with 50 qubits; more like 1,000,000 qubits at minimum.
× no sign of the needed breakthrough in error-correcting codes.
I consider quantum speedup to be theoretically flawed, based on an interpretation of quantum mechanics needing "parallel worlds", which requires a misunderstanding of causality to make sense.
I consider quantum speedup to be flawed in practice too, as orders of magnitude upon orders of magnitude of progress are needed, and the trend is no progress, or very slow progress.
Thus quantum computing is like vaporware, taking funds that would be far better allocated to e.g. medicine, AGI, or ASICs.
> theoretically flawed, based on an interpretation of quantum mechanics needing "parallel worlds"
is patently false. There is nothing in QC that would stop working under the good ole' Copenhagen interpretation.
In fact, if there was even a theoretical possibility that a QC experiment would work only in some QM interpretations, they would no longer be interpretations, but falsifiable hypotheses.
In contrast, many worlds says basically: There is no collapse. Ever.
The fact that no one has found what triggers collapse yet is to me evidence that Copenhagen is false.
Pilot wave theory, on the other hand, is, AFAIU, completely equivalent to many worlds, but less philosophically convincing.
Where many worlds says there is only the wave function, pilot wave says there is a wave function but also the material world. And material world is a sort of "view" of the wave function. Continuously rederived from it. But the interesting thing is, this material world has no causal consequences. All causality stems from the wave function and its evolution. This is the compromise to get same predictions as many worlds.
It's just the same as "no one has found what triggers a split between worlds, and why the physical reality we observe is following this particular one of the many worlds".
What you are claiming is merely the proposition of "hidden variables" aka "local realism", a theory which has been thoroughly rejected by experimental measurements [1,2,3].
Basically that quantum interactions are more fundamental to the universe than more terse ones, stochastic randomness or presence/absence.
If a computing device (in a universe) is built to compute using more fundamental substance (to that universe) it should stand to reason it is more efficient.
If you model a photon, does your model ever move as fast as the photon?
"Basically that quantum interactions are more fundamental to the universe than more terse ones, stochastic randomness or presence/absence."
What does "more fundamental" mean? Presence, absence and randomness are included in the set of quantum interactions; they are not of a different nature.
" If a computing device (in a universe) is built to compute using more fundamental substance (to that universe) it should stand to reason it is more efficient."
What does fundamental substance mean?
I don't understand this paragraph.
"If you model a photon, does your model ever move as fast as the photon?"
My model does not move.
If you meant that my mental visual simulation of a photon moving is slower than a photon moving: yes, obviously. Its slowness and lack of accuracy are just limitations of my brain.
How is that related to quantum speedup?
Help me understand you; I would really like to believe in quantum speedup, but the explanation needs to be sound.
I mean that the relationship between Newtonian mechanics and quantum mechanics is such that the matter described by Newtonian mechanics is composed of matter whose precise behavior is described exclusively by quantum mechanics; that is, the underlying matter is fundamental to the macro behavior.
I'm by no means a physicist, but a formal treatment of this kind of relationship (and its predicted consequences) is captured in Constructor Theory, which David Deutsch (who may have been early in identifying this) was able to formalize in the much more precise language of physics.
Intuitively, if you took a whiffle ball (or a pair of whiffle balls, or some other ensemble of whiffle balls sufficient to represent a system's state) in your hands, along with a little obstacle course representing, say, beam splitters, and somehow made these classical implements behave like photons, you would have little hope of doing it at light speed.
In general I have issues understanding many QM explanations as I disagree with all interpretations of QM except the statistical one.
You might be interested in reading this Nobel prize paper, which tests the limits of quantum mechanics and refutes, in a sound manner, some common interpretations.
“It's been estimated that 6,681 qubits would be required to run Grover's algorithm to break AES-256 bit encryption.”
Even if you don’t go that extreme, there’s no indication that e.g. AES is breakable.
Though the concept seems nice, it has quite a few issues in practice, such as the distance between coupled devices and the need for a dedicated physical communication line.
Oh and before you think you can just chain several devices to get longer distances: this opens you up to classical MitM attacks.
My understanding is that it remains unencrypted forever. That’s why it is a public key.
As long as the coins are not moved to another account that public key stays a valuable target.
Edit: As DennisP pointed out, my understanding was wrong: only the hash of the target is published until you make a transaction from an account.
Early Bitcoin transactions did not use addresses.
Example from block 1:
You'll see at the bottom that the opcode is a PUSH / CHECKSIG rather than the later DUP / HASH160 / PUSH / EQUALVERIFY / CHECKSIG format.
So this isn't true. (blockchain.info derives an address but actually the pubkeys are right there in plain sight. Have at it!)
Most transactions are indeed made to pubkey hashes though, yes.
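A hedged sketch of the two output formats mentioned above. Simplified: real Bitcoin computes HASH160 as RIPEMD160(SHA256(pubkey)); a truncated double-SHA256 stands in here so the snippet needs only the stdlib, and the pubkey bytes are made up.

```python
import hashlib

def hash160_simplified(pubkey: bytes) -> bytes:
    # Stand-in for Bitcoin's HASH160 = RIPEMD160(SHA256(pubkey));
    # truncated double-SHA256 keeps this stdlib-only.
    return hashlib.sha256(hashlib.sha256(pubkey).digest()).digest()[:20]

pubkey = bytes.fromhex("02" + "11" * 32)  # a made-up compressed pubkey

# P2PK (block 1 era): the raw pubkey sits in the scriptPubKey itself,
# exposed to a future quantum attacker from day one.
p2pk_script = [pubkey.hex(), "OP_CHECKSIG"]

# P2PKH: only a 20-byte hash is on chain; the pubkey is first revealed
# when the output is spent.
p2pkh_script = ["OP_DUP", "OP_HASH160", hash160_simplified(pubkey).hex(),
                "OP_EQUALVERIFY", "OP_CHECKSIG"]
```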
Depending on how easy it is, all those addresses that were abandoned containing 50 BTC are also up for grabs.
Would a coin without public addresses be better suited against such future?
Sending privately to a miner would help, but you'd end up with a very centralized system since you would want to send to the biggest miner, to minimize the time until that miner produces a block. You can't have miners sharing the transaction, even privately among themselves, since if they did that then one of them could have a QC and you wouldn't have any way to know who stole your money.
The only solution would be to run a Bitcoin node on the hardware wallet itself, which is not possible right now.
So you have to treat the public key as a public key.
For Bitcoin this is much harder.
The community will just do a blockchain snapshot of balances at an agreed-upon block and start a new distributed ledger with quantum-resistant encryption.
Snapshots have been done hundreds of times.
What do you do with old coins? Satoshi's one for example? Or lost coins that nobody has the key for?
Do you set a threshold day, after which all unclaimed coins are just marked destroyed forever? If not, how do you know someone claiming some coins didn't use a quantum computer to get the key?
yeah pretty much.
>What do you do with old coins? Satoshi's one for example? Or lost coins that nobody has the key for?
If those coins haven't been touched for decades, despite widespread announcements of pre-quantum cryptography being phased out (presumably it wouldn't happen overnight), it's safe to say that nobody is going to claim them.
So we have moved from concluding that Bitcoin is an irreparably flawed concept to a method of maintaining the concept's viability.
This is the funny thing about decentralized services. They require centralized action. Not saying it is good or bad, but it is... different.
Anyone can make a snapshot, assigning value to it is not centralized
I can't imagine this in other settings... “I only do breaking changes on my API, so let's have everyone change their client at midnight”.
Imagine if we all changed implementations of SQL at once!
Actually, we changed the meaning of the $() function inside the devtools console across all browsers at the same time. That was fun. :)
People just add flags in their mining protocol that only trigger when a threshold is reached. The last Segwit changes needed a threshold closer to 90% just to trigger the next change.
We are only assuming that consensus would be reached quickly in the scenario presented. It would be irresponsible to design it to need simultaneous action. People would have to consider pausing use of the bitcoin network for X,000 blocks while consensus is being reached, resuming only once it is.
Cringe. Quantum computers are not "able to process information at speeds exponentially faster than today's supercomputers". They're able to solve a very specific subset of problems that are hard for classical computers: the class BQP, which mostly comes down to things like finding prime factors. NP-hard problems are probably uncrackable with quantum computers.
My answer to the above question is no. Assume a quantum computer has 10 states; its running time on exponential algorithms is still exponential. A simple example: 2^10 is about 10^3, which is still exponential.
Should we be recording encrypted streams and saving them for a few years until we can break them? Is there any value in that?
The public blockchains make all this more feasible, since the community keeps the data around in its original state for them!
So much cheaper than recording SSL/TLS traffic, for example.
Also that is why they would find it important to exfiltrate the data from a company (or government) before it can be re-encrypted with something better.
Suppose our adversaries have a machine which can do Shor's algorithm at the scale needed to break modern public keys for say $1M and an hour, and they have been recording encrypted sessions.
For sessions encrypted using RSA key exchange, it is enough for them to spend $1M, wait one hour, and then decrypt everything they've recorded using one particular key. A typical HTTPS site, for example, keeps one key for months or even years; if they've recorded the encrypted data, they can read all of it for $1M.
Where Forward Secrecy (e.g. ECDHE) was used, the cost is $1M (and an hour) for each session, because the keys change each time so each fresh session needs the expensive algorithm.
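Sticking with the hypothetical $1M-per-key figure above, the difference forward secrecy makes to a "record now, decrypt later" attacker is just this arithmetic:

```python
COST_PER_KEY = 1_000_000  # hypothetical cost of one Shor run against one key

def decryption_cost(recorded_sessions: int, forward_secrecy: bool) -> int:
    if forward_secrecy:
        # ECDHE: every session used a fresh ephemeral key, so every
        # recorded session needs its own expensive Shor run.
        return recorded_sessions * COST_PER_KEY
    # RSA key exchange: break the server's one long-term key and
    # every recorded session opens at once.
    return COST_PER_KEY

assert decryption_cost(10_000, forward_secrecy=False) == 1_000_000
assert decryption_cost(10_000, forward_secrecy=True) == 10_000_000_000
```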
They take a crypto conference happening at a prestigious institution and sprinkle in some bitcoin punditry to make them seem related.
Is this true?? I've never heard this claimed before.
One of the problems with current PKI is weakness in the face of quantum computers, leading to a new crop of algorithms being submitted to NIST, etc.
I wanted to ask whether the following simple scheme, based just on cryptographic hashes, can be used CONFIDENTLY, SECURELY and RELIABLY in many situations where asymmetric-key cryptography is used today, and in many others too, such as providing provably random polling etc. It is very similar to a One Time Pad but uses a cryptographic hash function to generate the OTP codes.
Here is the scheme:
Everyone generates a random private key K[p] and stores it just like any asymmetric private key (encrypted with some non-stored derived key).
They use any cryptographic hash function that hasn’t had a serious preimage attack (perhaps even MD5?), hash it n (eg 10,000,000) times to get h[p][n], and publicly commit to that number. This is like a public key.
The hashes are long enough that it’s infeasible to reverse them. Key strengthening can be achieved by jumping a few onion layers between transactions.
If you start running out then you post a new public key, signed with one of your remaining onion layer codes.
Any verifiers store the original public key per participant, and then can replace them with the new public key if it was properly signed by the old one, etc.
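A minimal sketch of the commitment chain described above, assuming SHA-256 as the hash (the function names and the example parameters are mine):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def hash_chain(seed: bytes, n: int) -> bytes:
    """Apply h n times: the result h^n(seed) is the public commitment."""
    v = seed
    for _ in range(n):
        v = h(v)
    return v

seed = b"32 bytes of real randomness here"   # the private key K[p]
n = 10_000
public = hash_chain(seed, n)                 # publish this, like a public key

# To authenticate, peel one onion layer: reveal h^(n-1)(seed).
revealed = hash_chain(seed, n - 1)
assert h(revealed) == public                 # anyone holding the commitment can verify
```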
Use case: generating provably random numbers by mutually distrusting parties
Participants gradually reveal their hashes, one onion layer per transaction. Each provably random seed is a function of the alphabetically smallest/largest three of those hashes at the next onion layer. If not all of them reveal their hashes in time, they gossip, verify and agree on which ones are the smallest/largest three before some cutoff point, like “most reported that most reported”. That leaves tons of bits of entropy coming from everyone!
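A sketch of that commit-reveal flow, assuming SHA-256 and four hypothetical participants (the "smallest three" rule follows the description above; a real protocol would reveal chain values rather than raw secrets):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Phase 1: each party publishes a commitment h(secret) before seeing anyone else's.
secrets = [b"alice-entropy", b"bob-entropy", b"carol-entropy", b"dave-entropy"]
commitments = [h(s) for s in secrets]

# Phase 2: everyone reveals; each reveal is checked against its commitment.
assert all(h(s) == c for s, c in zip(secrets, commitments))

# The shared seed mixes the three lexicographically smallest reveal hashes,
# so no single party controls the result.
smallest_three = sorted(h(s) for s in secrets)[:3]
seed = h(b"".join(smallest_three))
assert len(seed) == 32
```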
Use case: Authenticator Apps
The hash h[p][n+1] would be a hash of some substring of h[p][n] with enough bits that finding all chosen preimages (by an eavesdropper of the previous code) would be infeasible in advance. Perhaps 10 alphanumeric characters is enough. Also, when displaying the code to enter, the authenticator app can tell the user a number from 1 to 100 indicating to the verifier how many onion layers to peel, making it harder to precompute the preimages. Or the user would have to enter the entire hash via the network-connected computer scanning a QR code, NFC or something. From a security standpoint, this method seems superior to the HOTP and TOTP schemes used in authenticator apps today, since there is no need to trust the verifier with any secret keys (https://www.ietf.org/rfc/rfc4226.txt). Also, there is no need to synchronize clocks, since the client simply lets the server know how many times to run the hash, and increments that number every time.
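A sketch of the verifier side under those assumptions (SHA-256, a "layers to peel" count sent by the client; class and variable names are mine). The verifier stores only the latest accepted chain value and never holds a secret:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class ChainVerifier:
    """Holds only the last accepted chain value; never sees the seed."""
    def __init__(self, commitment: bytes):
        self.last = commitment

    def verify(self, revealed: bytes, layers: int) -> bool:
        v = revealed
        for _ in range(layers):      # "how many onion layers to peel"
            v = h(v)
        if v == self.last:
            self.last = revealed     # walk the anchor back along the chain
            return True
        return False

# Client side: the chain h(seed), h^2(seed), ..., h^100(seed).
chain = [b"secret seed"]
for _ in range(100):
    chain.append(h(chain[-1]))

server = ChainVerifier(chain[-1])       # server learns only h^100
assert server.verify(chain[97], 3)      # reveal h^97, say "peel 3 layers"
assert not server.verify(chain[97], 3)  # replays no longer verify
```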
Use case: Signing Payloads
Participants reveal a payload and commit to an HMAC signature using the cryptographic key at the next onion level, which at that point would be known only to them. All these signatures are collected into a blockchain block / merkle tree timestamp / similar thing, and it is sent to the participant before they reveal the onion key they used to sign it.
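One way to sketch that sign-then-reveal flow with the stdlib (the onion key here is a stand-in for the still-secret chain value h^(n-k-1)(seed), which only the signer knows at commit time):

```python
import hashlib
import hmac

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# The key one onion layer deeper than anything yet revealed.
onion_key = h(b"stand-in for the still-secret chain value")

payload = b"transfer 5 coins to bob"
tag = hmac.new(onion_key, payload, hashlib.sha256).digest()

# The tag gets timestamped (block / merkle tree) first; only afterwards is
# onion_key revealed, letting anyone recompute and check the HMAC.
assert hmac.compare_digest(
    tag, hmac.new(onion_key, payload, hashlib.sha256).digest())
```

The ordering matters: once the onion key is public, anyone can forge tags with it, so the timestamp must predate the reveal.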
Use case: Off the Record Messaging
The blockchain or Merkle tree is private between a few parties only, so once the next onion level is revealed, no participant can prove the payload was generated by a given participant: since all the onion hashes are known, any of them could generate a new valid tree with any payload history. They can only prove it to each other, or, given enough “witnesses” attesting to that tree, people might trust them on the basis of consensus among (presumably) mutually distrusting parties, but that's not the same thing as cryptographic proof. But that is true of any OTR conversation.
Use case: Restoring Access
This can be used instead of Shamir Secret Sharing. The server would have to store keys for every participant, and M of N participants would just sign that they approve authorization of some new session, some new key, or whatever. These signatures could easily be checked by anyone who has the public keys of the M participants who signed.
Use case: Decrypting payloads
Not sure how one would do this one, to be honest. With PKI, someone could encrypt a payload that can only be decrypted by a private key holder. I see how to do signatures and HMAC, but not the other way.
They can siphon off a lot before other people catch on to people’s complaints and realize the best practices were followed
The No. 1 challenge is how to stop centralized exchanges from extensive market manipulation.
Decentralized cryptocurrencies are never going to achieve their goals while players like Bitfinex and Tether are in a position to "print" unlimited amounts of money and use them to purchase coins like BTC.