
How do you know when enough computational effort / energy has been devoted to mining, i.e., how do you know that there is no attacker out there who is working harder than all the honest users combined?


You can calculate how much capital an attacker would need to purchase the necessary mining equipment. When that amount of capital is more than large nations would be willing to commit, then you know it's pretty safe. Currently, the amount of capital is in the tens of billions, which is already rather out of reach for most governments to allocate for this kind of thing. Yes a government like the US could theoretically do it, but it would be incredibly difficult to make that fly politically.
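The back-of-the-envelope math here can be sketched as follows. All the numbers (network hashrate, rig performance, rig price) are illustrative assumptions, not current market figures:

```python
# Rough attack-capital estimate. All figures below are illustrative
# assumptions, not real market data.
NETWORK_HASHRATE_EH = 600   # assumed honest network hashrate, EH/s
RIG_HASHRATE_TH = 200       # assumed hashrate per ASIC rig, TH/s
RIG_COST_USD = 4_000        # assumed cost per rig, USD

# To mount a 51% attack the attacker must slightly exceed the honest
# hashrate, i.e. buy roughly the network's worth of hardware again.
rigs_needed = NETWORK_HASHRATE_EH * 1_000_000 / RIG_HASHRATE_TH
capital_usd = rigs_needed * RIG_COST_USD

print(f"rigs needed: {rigs_needed:,.0f}")
print(f"hardware capital: ${capital_usd / 1e9:,.1f}B")
```

Under these assumed numbers the hardware alone runs to roughly $12B, which is how one arrives at "tens of billions" (electricity, facilities, and supply-chain constraints would push the real figure higher).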


That is a lot of assumptions about an attacker's means and willingness to invest in the attack. German cryptographers made that same mistake during World War II: they knew how to attack the Enigma cipher in principle, but assumed, incorrectly, that none of the Allied powers would be willing to make the necessary investment. Tens of billions of dollars is within reach for many governments, corporations, and even some individuals. It is not hard to imagine what might motivate an attack, whether political (perhaps to disrupt the Western financial system once more institutional investors have taken on large Bitcoin positions) or financial (imagine building a large short position in Bitcoin futures, then executing a 51% attack).

Moreover, why assume that the attacker must match the total work being done by honest miners, when the attacker might first take down the largest miners to reduce the required capital investment? Blackouts in China recently cut the hashrate of several large mining pools significantly. An attacker willing to invest tens of billions of dollars in an attack would have little difficulty sabotaging mining operations, especially given how concentrated and centralized those operations have become.

Really though, in what other context would we accept a security argument that comes down to assumptions about what an attacker is willing to do? Cryptographic security demands that the attack effort grow exponentially (or super-polynomially, in the case of public-key cryptography) in the honest parties' effort, so that we can make the attack effort larger than the number of atoms on our planet (or in our galaxy, or the universe), which renders all speculation about the attacker's motives or willingness to invest in an attack moot. With Bitcoin you can never make such a claim until most of the world's available energy is consumed by honest miners, the only point at which the attack actually becomes impossible (at least if we assume that the honest parties cannot be corrupted).


It would be easy for existing users to see how much energy the cheaters are spending based on the blocks that are getting published. A block gets published every 10 minutes on average so it wouldn't take a long time to become obvious.
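The way observers infer spent work from published blocks can be sketched as follows: each block mined at difficulty D represents about D * 2**32 expected hashes, so block timestamps reveal the hashrate behind them. The difficulty value used here is an illustrative assumption:

```python
# Inferring network hashrate from published blocks. Each block at
# difficulty D represents ~D * 2**32 expected hashes; dividing by the
# observed block interval gives an implied hashrate. The difficulty
# below is an illustrative assumption, not a live value.
DIFFICULTY = 8.0e13          # assumed current difficulty
TARGET_INTERVAL_S = 600      # Bitcoin targets one block per 10 minutes

hashes_per_block = DIFFICULTY * 2**32
implied_hashrate = hashes_per_block / TARGET_INTERVAL_S  # hashes/second

print(f"implied network hashrate: {implied_hashrate / 1e18:.0f} EH/s")
```

This is why a hidden majority miner becomes visible quickly: the moment their chain is published, the work embedded in it is directly measurable.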


So you only know how hard the honest parties must work to prevent an attack after a successful attack is detected?


If you assume all the current mining capacity is in use, and everyone mining right now is actually dishonest and collaborating with each other as a single party (worst case), then at most you just need to match the current capacity to prevent an attack. All you need is to make sure that more than 50% of miners aren't collaborating to benefit the same party at the same time.

If you are assuming that the attackers are also hiding "dark" mining capacity, then it's harder to say what capacity you would need to stop an attack, but they would be choosing not to run that hardware at a significant opportunity cost for no reason other than to create a false sense of security.

You seem to be implying that this is a strange model but I don't think so. It's no different from how you won't know how big of a military you need until after you get attacked. Your opponents could be hiding their military capacity. Etc. It's not clear that it is possible to avoid this problem in any trustless system.


Assuming that all mining capacity is visible is the sort of assumption that cryptographers typically avoid because you can easily be blindsided. Why shouldn't an attacker hide their equipment? Cryptographers also typically avoid speculating on the attacker's motivation, so talking about the opportunity cost is a pretty weak argument. Maybe the attacker just wants to cause mayhem in the system regardless of the cost, and they are idling their equipment until they are ready to attack.

It is also worth remembering that someone trying to attack Bitcoin is not limited to amassing enough of their own mining equipment; they can also try to reduce the work being done by honest miners by sabotaging honest mining operations, which at some point will likely be cheaper than trying to buy more mining rigs. So the honest miners need to do more than just match their own estimate of the capacity of would-be attackers. Honest miners must exceed attackers by enough of a margin to absorb attempts by an attacker to reduce the honest mining capacity, and that margin is also difficult to estimate.

I do not know what counts as "strange" but I would certainly say that Bitcoin's security model is inefficient and backward by the standard used in cryptography. Honest parties have no advantage over the attacker and thus are left devoting ever-growing amounts of energy to mining, continuously raising the bar for the attacker (and also raising transaction costs to pay for all that energy, not to mention creating an environmental disaster in the process). No competent military commander would pursue a strategy that depends purely on numerical superiority; armies set up bases in locations that are tactically advantageous (e.g. on a hilltop) and focus their planning (offensive or defensive) on strategically valuable targets (bridges, supply lines, etc.).


I don't think this comparison with "the standard used in cryptography" is really accurate or useful.

Cryptography tries to avoid speculating on the attacker's motivation, but establishing trust from nothing is a different problem than sending private messages on a public channel. It's not clear the former can be done any other way than by creating incentive structures to make the cost of cheating greater than some known amount.

> Honest parties have no advantage over the attacker

Honest parties have the advantage that it's easier for them to agree (and they are rewarded for agreeing). An honest party who wants to play honestly is implicitly supported by every other honest party, whether they are explicitly allied or not. A dishonest party, by contrast, doesn't benefit just by being dishonest: they need to get >50% of all miners to agree to benefit them specifically, or else amass all that capacity for themselves while outcompeting every honest party put together (not just their specific enemy, but every honest party).

> and thus are left devoting ever growing amounts of energy to mining, continuously raising the bar for the attacker

Why would the energy usage be "ever growing"? Bitcoin might become more popular and therefore might have bigger security demands expected of it in the future, but that should eventually reach a cap. From that point only cheapening electricity generation would increase the expected usage.

> No competent military commander would pursue a strategy that depends purely on numerical superiority;

That is certainly what they'd do if there was literally no other way to achieve an advantage (which is the case in this kind of landscape). Everything else is just a way to make their forces work more efficiently.


Modern cryptography is about a lot more than sending private messages over public channels. The entire field of secure multiparty computation is about mutually distrustful parties participating in a distributed computation of some kind. Bootstrapping trust from "nothing" is part of that, and the theoretical work in that field establishes the limits and feasibility of accomplishing that goal. There are settings where "trust from nothing" (more precisely, when only one party is honest) is possible to achieve with cryptographic security -- for example, if aborting the protocol (stopping the computation) is an acceptable outcome (this is unavoidable if there are only two parties).

"Honest parties have the advantage that it's easier for them to agree (and they are rewarded for agreeing)."

I should clarify that when I used the term "advantage" I was referring to something analogous to mechanical advantage. A basic tenet of cryptographic security definitions is that the honest parties should have an advantage over the attackers in the sense that a small increase in the work done by the honest parties results in a large increase in the work that must be done to attack the system. Note also that cryptographic security definitions assume the opposite of what you said: that the honest parties are not coordinated and operate independently, but that the attackers do coordinate with each other (in fact we just refer to "the adversary," which has full control over all the attacking parties, with full access to their internal state, inputs, outputs, randomness, and whatever else).
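The asymmetry I mean can be sketched with a toy cost model (the cost functions are illustrative, not precise measurements of any real primitive):

```python
# Toy model of "mechanical advantage" in cryptography: lengthening a
# key by one bit raises the honest parties' cost roughly linearly,
# while doubling the brute-force cost. Cost functions are illustrative.
def honest_cost(bits):
    # key generation / encryption scales roughly linearly in key length
    return bits

def attack_cost(bits):
    # brute force must try ~2**bits keys
    return 2**bits

for bits in (64, 65, 128):
    print(f"{bits} bits: honest ~{honest_cost(bits)}, attack ~{attack_cost(bits):.1e}")
```

Bitcoin has no such leverage: to raise the attack cost by X hashes per second, honest miners must themselves add roughly X hashes per second. Honest and attacking work scale one-for-one.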

You mention incentive structures, but incentive structures are not very reliable. You need to make a lot of assumptions about motivation before an incentive structure makes sense, and those assumptions can easily fail because of something external to the system. What if a group of miners who collectively control 51% of the mining capacity of Bitcoin realized that they could open short positions in Bitcoin futures and make a fortune if they triggered a panic by e.g. executing very obvious double-spending attacks? Historically incentive structures have failed to prevent wars, crime, and other unwanted behaviors.

"Bitcoin might become more popular and therefore might have bigger security demands"

One does not follow from the other. Security requirements have nothing to do with popularity and everything to do with the ability of some attacker to succeed in attacking a system. Even if nothing about Bitcoin changes, a change elsewhere in the world may create an opportunity to profit from attacking Bitcoin (and there is no reason to think anyone other than the attacker would be aware of that opportunity). The attacker may not be rational at all; maybe the attacker just wants to create chaos, even at great expense. Like I said, incentive structures are unreliable and make for a poor approach to security on the Internet.


> A basic tenet of cryptographic security definitions is that the honest parties should have an advantage over the attackers in the sense that a small increase in the work done by the honest parties results in a large increase in the work that must be done to attack the system.

I think a more accurate claim would be that cryptography aims to achieve the best security guarantees that can be assured mathematically. In the case of establishing consensus the best guarantees you can achieve are just different from what you can achieve in those other situations. That doesn't mean that Bitcoin is somehow going against some kind of cryptographic standard like you are implying here. It is using cryptographic technologies to achieve the most secure possible solution (as far as we have discovered) to the problem which it aims to solve. It just happens to be a different problem than those other problems and therefore the nature of the solution is different.

> Note also that cryptographic security definitions assume ... that the attackers do coordinate with each other

That's just one possible scenario which you might consider. Again I think it is misleading to imply this is some kind of "cryptographic litmus test" or something.

> You mention incentive structures, but incentive structures are not very reliable.

Right, but there are just no stronger guarantees you can make when it comes to this problem.

> One does not follow from the other. Security requirements have nothing to do with popularity and everything to do with the ability of some attacker to succeed in attacking a system.

I think you are interpreting what I said backwards. Popularity defines what level of security is required. The level of security defines what attacks can be resisted. The more popular the coin, the bigger need it will have to resist attacks. And this is accommodated in the design: the higher the market price of the coin, the bigger the reward will be for miners (thus funding more mining, increasing the cost of an attack).



