Monero Declares War on ASIC Manufacturers (ccn.com)
139 points by Osiris 5 months ago | 135 comments



You can actually see one of the pull requests about this already. [1]

My favorite part about it is the response to criticism that the changes may break the underlying cryptography they are using:

> We do not have cryptographers familiar with this kind of thing, sadly. [2]

1. https://github.com/monero-project/monero/pull/3253

2. https://github.com/monero-project/monero/pull/3253#issuecomm...


It not only creates a bias (which decreases entropy), but it's an operation that would easily be implemented faster on an FPGA than on a CPU. I mean, it's basically a single LUT, and there's nothing more efficiently done on an FPGA than LUTs.

Basically, I think it actually weakens the FPGA-resistance of Cryptonight.

Fortunately, the main idea of the changes would discourage ASICs being made for Monero. But it doesn't seem to discourage FPGAs, which are just one "resynthesize" away from accommodating any code changes. And as long as the community becomes accustomed to regular changes, any issues in the cryptography can be fixed later.


I'd prefer to optimize for FPGAs over GPUs anyway.


FPGAs cost roughly 10x what ASICs do, so this argument is unimportant. The reason for using ASICs is lower cost / lower compute resources.


I am confused by this: aren't ASICs the costliest option until some (high) number are fabricated and put into use?

If FPGAs can implement these changes efficiently (more efficiently than CPU miners) and be re-synthesized to implement further changes, doesn't that advantage them?


Parent was presumably leaving unsaid "... at scale", aka enough to move a chunk of the hash power.


Thanks, yes, I totally meant at scale. The main cost of new ASIC chips is design and fabrication; they serve a very narrow, specific purpose and all that cost is sunk. But after that, the per-unit production cost is negligible.
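
A back-of-the-envelope sketch of that amortization (all numbers are hypothetical):

    # Rough sketch of how one-time ASIC NRE (design + tape-out) costs amortize with volume.
    # All numbers are made up for illustration; real NRE and unit costs vary widely.

    def per_unit_cost(nre_cost, unit_cost, units):
        """Total cost per device once one-time engineering costs are spread over the run."""
        return nre_cost / units + unit_cost

    asic_nre, asic_unit = 5_000_000, 50   # hypothetical: $5M design/tape-out, $50/chip
    fpga_nre, fpga_unit = 0, 1_500        # off-the-shelf FPGA board: no NRE, ~$1,500/unit

    for n in (1_000, 10_000, 100_000):
        print(n, per_unit_cost(asic_nre, asic_unit, n), per_unit_cost(fpga_nre, fpga_unit, n))
    # At small volumes the ASIC is far more expensive per unit; at scale it wins easily.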


Do you happen to know the break-even costs, given the difficulty/profitability ratio at current market electricity prices?


"How would you decide when to use an FPGA and when to create an ASIC? That depends. FPGAs waste a large amount of silicon compared with an ASIC, so the cost floor, which depends in large part of the surface area of silicon required for the chip, is often an order of magnitude higher than you’d want it to be. But fabricating an ASIC isn’t cheap either."

[1] https://spectrum.ieee.org/tech-talk/computing/hardware/lowbu...


ASICs are only costly from an R&D perspective. Which means once someone has done all the R&D, they can be a monopoly and get really cheap hashrate, and since they are an incumbent monopoly it's much harder for another company to step in and compete.

FPGAs don't suffer from this problem.


I mean, a serious FPGA lab would simply keep their FPGA techniques secret until they build a sizable advantage. There's a lot of tech that would be built up: a custom memory controller, an implementation of various Cryptocoin PoW systems, and of course, the Bill of Materials for the ideal power-efficiency for various designs, etc. etc.

Something like an FPGA + Interposer to HMC would be a huge R&D effort, and just as centralizing.

BTW: None of the designs I've talked about are strictly speaking commodity. They'd require at a minimum, custom PCBs. Maybe more advanced techniques for the best technology (again: Custom Interposer to HMC + FPGA interface + all the Verilog / VHDL code to make it happen).

As long as an FPGA-based shop kept their FPGA PCB secret, as well as their code secret, and their Device Drivers secret (You'd probably run Linux / Windows to talk to the FPGA over PCIe) then they're basically going to be ahead of the rest of the competition. Eventually, the competition would catch up, but a constant R&D effort into newer designs (ie: testing HBM2 vs HMC vs RLDRAM3 vs QDRIV, building relationships with suppliers, etc. etc.) could lead into a sustainable business edge.

Heck, early on in Monero's life, some dude got to like 40%+ of the entire network's Hash Rate by simply writing better CPU code and keeping it for himself, and then spending hundreds-of-thousands of dollars on AWS: https://da-data.blogspot.com/2014/08/minting-money-with-mone...

> By the 14th of May, we were 45% of the total hashing power on the coin. Things started to get a little exciting

FPGAs would be that on steroids. There are way fewer Verilog / FPGA engineers. It also would require custom PCBs and hardware engineers to make it all happen. I wouldn't even know who to ask to design a Hybrid Memory Cube interposer and fit it on an FPGA, for example.


A quick google search turns up at least one devboard with a beefy FPGA and HMC:

https://www.xilinx.com/products/boards-and-kits/dk-u1-vcu110...

Not cheap, and probably not available in bulk, but cheaper than custom PCB R&D.


Not cheap, but it would be insane for cryptomining purposes if you somehow got a sizable supply.

16.5 MB of internal RAM. 18MB of QDR-IV SRAM. 125MB of RLDRAM 3. 2GB of HMC.

That's plenty of RAM to pretty much crush Ethash, Cryptonight and more.


Generally, ASIC coins see centralization because those who control the ASICs can control a large portion of the hash power.


> We do not have cryptographers familiar with this kind of thing, sadly.

Heh, free money incoming if you've got a crypto background. Cryptocurrency is a nested fractal of incompetence.


That's an unfair statement. The Monero Research Lab has several full-time PhD'd cryptographers who are paid for by crowdfunding from the community, just none who are specialised enough that they can comfortably sign off on the change. I think it's a good thing that those researchers don't overextend and act like they are all-knowledgeable.


> Cryptocurrency is a nested fractal of incompetence.

It works just fine if you substitute "Software development" for the 1st word.


As a Bitcoin miner since 2010, the hostility toward ASIC-based mining rig manufacturers has rarely made sense to me. ASICs strengthen your network. The requirement to have specialized hardware to mine profitably, pushing miners toward the lowest joules-per-hash physically achievable, will make a PoW-based cryptocurrency much more secure with respect to majority attacks (so-called 51% attacks). Cryptocurrency devs should welcome and embrace ASIC miners.

By far, the majority of GPU farms mine Ethereum. This means any GPU-mineable coin other than Ethereum is trivially vulnerable to majority attacks. The only thing preventing these attacks from happening is the financial incentives against performing such a purely destructive attack.

ASIC-mineable coins are safe from such attacks from GPU miners.

People point out that the high barrier of entry into ASIC manufacturing can lead to monopolies, such as Bitmain which is a quasi- (not quite) monopoly with its 70-80% market share in Bitcoin with the S9. But first and foremost, manufacturing monopolies don't matter to the function of Bitcoin. A manufacturing monopoly isn't a hashrate monopoly: Bitmain cannot attack Bitcoin. They themselves only directly own <10% of the hashrate.¹

¹ Not to be confused with Bitmain's mining pools which represent more than a third of the hashrate. People usually misunderstand the function and nature of mining pools. End-users choose to mine at whatever pool they want. If Bitmain's pools started sending malicious mining jobs to stratum clients, the community would react and the end-users would promptly abandon them. So in practice Bitmain cannot do whatever they want for however long they want with their pools' hashrate. They enjoy the privilege of representing this hashrate, and this privilege could evaporate overnight.


Increasing collective hash-power doesn't necessarily increase the decentralization guarantees of cryptocurrencies. It's more important that hashpower is distributed among many people.


My point wasn't about decentralization. It was about hashrate providing security. That's all.

(Since you talk about decentralization, being GPU-mineable doesn't necessarily increase decentralization either. Bitmain could put 100 MW of GPUs into mining some random coin.)


You don't even have to buy the GPUs. Just rent them from AWS, Azure, and Google Cloud for the time it takes to carry out your attack.

Digiconomist [0] estimates that the current Ethereum mining cost is 1.7 billion a year, or 4.65 million a day, or 194,000 an hour, or 3,234 a minute.

Multiply by 5 for cloud on-demand premiums and you could dominate the Ethereum network for an entire day for 23.25 million. You could also do it for free if you manage to do it with stolen credit cards. I'm amazed it hasn't already happened to a cryptocurrency.

https://digiconomist.net/ethereum-energy-consumption
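
The arithmetic behind the numbers above:

    # Reproducing the back-of-the-envelope attack-cost estimate.
    annual_mining_cost = 1.7e9                      # Digiconomist estimate, USD/year
    per_day = annual_mining_cost / 365              # ~4.66M
    per_hour = per_day / 24                         # ~194k
    per_minute = per_hour / 60                      # ~3,234
    cloud_premium = 5                               # assumed on-demand markup over miners' costs
    attack_cost_one_day = per_day * cloud_premium   # ~23.3M
    print(round(per_day), round(per_hour), round(per_minute), round(attack_cost_one_day))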


There are probably cooler things you can do with 24 million stolen dollars than dominate the Ethereum network. Actually, using the hashpower to wash the stolen money through legitimate mining sounds cool enough.

What kind of cool stuff can you actually do when you dominate the Ethereum network?


That's one of the reasons Google Cloud has fairly small quotas on everything now, and you have to request permission for more. They'd almost certainly stop approving you way before you got to $23m of GPUs, assuming they even have that many free in your zone.


Wait. Are you saying that if Google were to know that your intent was to legally mine Ethereum (or some similar proof-of-work currency) on a large scale, they would refuse to rent you GPUs? This seems extremely unlikely. Why would it be in Google's interest to turn down any paying customer, much less one willing to pay millions of dollars a day?


Is that the kind of client you want? One that takes roughly $10 billion a year's worth of GPU capacity (6 x 1.7) for only a single day, all to give you bad PR afterward because it crashed the market of a popular cryptocurrency?

Sure, $23 million seems fun, but handling that many GPUs is expensive, and if they don't have other markets to support that capacity, it's a big loss for them (and if they actually have the market, then that single $23 million isn't worth much either).


Then split it across all 3 cloud providers, across all regions/AZs, and across thousands of accounts.


In Monero, the idea is that hash power should be spread throughout rather than consolidated and fought over by high-power people.

I see your point about ASICs raising the bar but wouldn't that put contributions out of reach of all but high-wealth or dedicated miners? Which is basically what has happened to Bitcoin. It is completely uneconomical for anyone but people with free to very cheap electricity and lots of ASICs.

I believe the point of Monero is that power shouldn't be consolidated to a single few.


«I see your point about ASICs raising the bar but wouldn't that put contributions out of reach of all but high-wealth or dedicated miners?»

Not at all. ASICs don't increase the gap between low-hashrate and high-hashrate miners any more than GPUs do. It's all proportional: if you have x% of the global hashrate, you get x% of the revenues. This is one of the biggest misconceptions in the mining world. You see journalists claiming that ASICs make mining tough for smaller miners, but this has absolutely nothing to do with ASICs. It all comes down to electrical costs: large miners can afford to set up shop in areas with cheap electricity while small miners have to make do with whatever rates they pay where they live/work. The difference between $0.03/kWh in Douglas County, WA (where many industrial miners are located) and $0.20-0.30/kWh in SoCal is huge. That's why large miners profit more. It would be exactly the same if the coin were GPU-mineable.
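
A toy comparison of why electricity rates, rather than hardware class, separate large and small miners (the rig numbers and revenue per TH/s below are hypothetical):

    # Revenue is proportional to hashrate share; only the power bill differs by location.
    # Rig numbers are hypothetical (roughly S9-like: 14 TH/s at 1,400 W); $/TH/day is assumed.

    def daily_profit(hashrate_ths, power_w, usd_per_kwh, usd_per_ths_day=0.15):
        revenue = hashrate_ths * usd_per_ths_day            # same for everyone per TH/s
        electricity = power_w / 1000 * 24 * usd_per_kwh     # differs by location
        return revenue - electricity

    print(daily_profit(14, 1400, 0.03))   # industrial miner in Douglas County, WA
    print(daily_profit(14, 1400, 0.25))   # hobbyist paying residential SoCal rates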


Because not everyone can afford ASICs, which puts the mining power solely in the hands of those who can afford the hardware.

This once again reduces the playing field to a small, powerful group of players and essentially no one else can participate. Doesn't this defeat the purpose of a decentralised currency based on not having "trusted nodes"?


But they're pushing the cost of graphics cards up in such a way that does the same thing, no?


Realistically the cost of entry is more like $2,000 which is not a small, powerful group of players.


Yeah I've totes got $2k spare to use on just getting an ASIC, which likely won't have any resale value, and is only valuable for a very finite time

Comparatively, I do have a nice graphics card (which I can use for gaming and stuff), and I can rent from cloud providers at price points I can afford if I wanted more access to cards.


It's not, though. The manufacturer bears the R&D costs of the ASIC, but they don't have to sell the chips to the public at all, or they can run them on their own schedule before releasing them for sale.


Three problems with ASICs:

(1) Their higher capital cost, limited availability, and limited "appeal" promotes effective centralization of mining.

(2) Due to #1, mining rewards go to fewer and fewer hands, promoting inequality and centralization of wealth.

(3) ASICs increase the overall wastefulness of PoW currencies since they're e-waste. Mining ASICs are worthless for any other purpose, while CPU-mineable coins at least promote the production of things that are otherwise useful. If a miner liquidates a bunch of CPUs and GPUs they will see use on the market, while ASICs are junk.


The natural state of an ASIC mining arms race is that any advantage forces centralization. For example, if you have the most cost-efficient miners, you can acquire more miners with your profits. Your competitors fall behind with each iteration. This forces a monoculture of mining machines as miners are forced to use the best-available miner. This leaves the entire system incredibly vulnerable to back-doors and state interference from that manufacturer.

Similarly, locations with the cheapest power force centralization of location, which forces centralization of state regimes presiding over miners.

Monero doesn't suffer from these centralizing forces as applying generalized computing power can be done with spare cycles and is still profitable at the margin. In other words, you might not buy a computer to mine Monero, but if you have one, it might make sense to use your spare cycles for mining. This gets you a widely distributed polyculture, which is the best kind of decentralization.


«Your competitors fall behind with each iteration»

I don't think so. Competitors will always try to catch up given how profitable it is to sell mining rigs. We certainly have been going through a phase over the last 2 years where Bitmain has been dominant, but today we have 3 manufacturers who have miners just as efficient as the Bitmain S9 (0.10 J/GH):

• Canaan who just released the Avalon 821 (0.11 J/GH)

• Ebang E10 (0.10 J/GH)

• Halong is promising a miner just as efficient in "2 months"


That's the argument from theory. OTOH some people are (over)reacting to perceived "abusive" behavior from Bitmain.


Have these trivial attacks happened?


You care about a safe front door even if nobody broke in through it yet.


Empirically, this is wildly untrue. I was just reading how indoor shops in Singapore often don't even have doors. If you're in the building, you're in the shops.

Here in the US, it's very common to have a front door which is never locked.

You care about having a safe front door to the extent that you anticipate future attacks. If you don't, then you don't.


Yes, anticipation of future attacks is reason to warrant security measures, and that's what I tried to convey.

More specifically, the fact that a trivial attack didn't happen yet is not always an argument against the need to anticipate that attack.


I think this is short-sighted.

Just like mining pools were not an expected development, humans will always figure out a hack to make more money wherever there's money. Especially when there is no legal implication for doing so.

These Monero developers think they're doing good for their network, but all this is doing is weakening their network based on the limited knowledge they have about what's currently possible. I would even go further and say this is arrogance.

There are many ways to compromise a network, and the best way to protect against this is to strengthen it, instead of making foolish decisions based on some conspiracy theory that nobody has ever actually seen play out.

For example, if someone figures out a way to bring together the Coinhive JS mining library and a clever web exploit scheme to create the ultimate global Monero mining worm, it could compromise the entire network and they will have no choice but to hard fork. The only way to protect against this sort of attack is to strengthen the network. Only then do you get to worry about centralization.

I know this "miner centralization" is a controversial topic, but what I know for sure is that people are making decisions to permanently change protocols that are already working, in order to "fix" some hypothetical situation that may or may not happen.

If humans were so good at prediction, we would not have great depressions. The best way to deal with these issues is to solve them when they actually happen. I think this is best for all cryptocurrencies, because IF some sort of centralization actually does end up happening, people can always fork and take a huge chunk of the network with them.

The people who think we can't recover from this hypothetical centralization are making assumptions that are more likely to be false than true.


Relevant quote: "Moving forward, developers will seek to protect the network’s ASIC resistance by slightly modifying its PoW algorithm at every scheduled hard fork, which generally occurs twice annually. These changes will not be noticeable to ordinary XMR users, but they will alter the network’s hashing algorithm enough that Cryptonight ASIC miners would become obsolete following every fork."

We'll see how this works out. It's hard to tell without any technical details about what these changes to the algorithm will be. I'm not quite sure, though, about amateurs mucking with the details of hash functions; it smells like going against the rule of "don't roll your own crypto".

Also, and in the full understanding that this ship has sailed, it's stupid to call a planned update everyone goes along with a "hard fork".


>it's stupid to call a planned update everyone goes along with a "hard fork"

No, a hard fork indicates a set of consensus rules that is incompatible with the current set of consensus rules. This occurs when the domain of valid blocks expands. When the domain of valid blocks decreases, it's a soft fork.


I understand the first part of this, but I don't know what you mean by the "domain of valid blocks".

Anyway, I understand the issue of an incompatible change. The point is, it's dumb to use the same term for an incompatible change everyone goes along with (like here) with an incompatible change that causes two different chains to be kept up by two different communities (like Ethereum vs. Ethereum Classic).

The difference is between calling a fork-shaped thing a "fork" and calling a knife-shaped thing a "fork".


There is no reason why the upcoming Monero hardfork couldn't cause a split.

If a faction disagrees with the scheduled update, they could very well cause 2 different chains, in the exact same way that Ethereum and Ethereum Classic happened.

There is no free lunch. A hard fork is a hard fork. It is perfectly possible that you will be incorrect in your prediction that the hardfork will be uncontroversial.


>I don't know what you mean by the "domain of valid blocks".

Blocks are considered valid by a consensus set if they have certain attributes. Consider each attribute an axis in a multi-dimensional space: one axis for the block size [1], one for the type of PoW function it has [2], one for the sign of the R and S values used in an ECDSA signature [3], another for the required difficulty [4].

For example, Bitcoin went through a soft fork when the domain of valid blocks was reduced in BIP66, decreasing the number of domains DER signatures could exist in [3]. Bcash /increased/ the domains in which blocks would be considered valid on its chain by changing the difficulty-retargeting algorithm used in Bitcoin [4] and 8X'ing the block size [1].

As a result, if you turn on a Bitcoin node running ~0.6-0.7, it won't sync to the Bcash chain but will sync to the Bitcoin chain (with implementations such as Bitcoin Knots, bcoin, or Bitcoin Core), because Bcash is a hard fork of Bitcoin.


Some good thinking about forks and upgrades: http://www.truthcoin.info/blog/forks-and-splits/


That makes clear that the meaning of "fork" has changed over time in exactly the way I said. It then goes on to try to rationalize the current usage, which I don't happen to agree with.


He means hard forks make block validity more restrictive whereas soft forks do not.


You got that backwards. Hard forks make blocks that are not considered valid by previous versions. Soft forks make blocks that are still considered valid by previous versions. A soft fork does stuff like changing the Bitcoin block size limit to 512KB: in that case, all new blocks would still be seen as valid blocks by anyone still running the old version.
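
A minimal sketch of that distinction, using block size as the only consensus rule (the sizes are illustrative):

    # Toy consensus rule: only block size matters.
    OLD_MAX = 1_000_000   # old nodes accept blocks up to 1 MB

    def old_rules_valid(block_size):
        return block_size <= OLD_MAX

    # Soft fork: new rules are stricter (max 512 KB). Every new-rules block is
    # still valid under the old rules, so old nodes keep following the chain.
    def soft_fork_valid(block_size):
        return block_size <= 512_000

    # Hard fork: new rules are looser (max 2 MB). Some new-rules blocks are
    # invalid under the old rules, so old nodes reject the new chain.
    def hard_fork_valid(block_size):
        return block_size <= 2_000_000

    print(old_rules_valid(400_000), soft_fork_valid(400_000))      # True True
    print(old_rules_valid(1_500_000), hard_fork_valid(1_500_000))  # False True -> old nodes split off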


> the rule of "don't roll your own crypto".

That's an extremely important principle when doing the routine work of protecting sensitive information, but less relevant when you're making hash modifications that are going to be obsolete in a few months anyway. By the time anyone's done serious cryptanalysis, it's probably too late.

Anyway, the worst case scenario is that things get screwed up and you need to do a premature hard fork that rolls back some transactions. Not the end of the world.


> premature hard fork that rolls back some transactions. Not the end of the world

That sounds pretty bad. Especially if mucking with the hash function is now business as usual. (I.e. there's a chance of this happening after every hard fork)


Talking of ships, the term “hard fork” is just as correct as your use of “crypto”: technically correct.


> the rule of "don't roll your own crypto".

There is no such rule in general. In many situations it might be better to use available solutions, but in others it might be better to do something different. It depends on your attack model and what you are trying to achieve.


> There is no such rule in general.

Actually, the general rule is, "Do not roll your own crypto."

> It depends on your attack model and what you are trying to achieve.

It depends on so much more than those two items, which is why people are told, "don't roll your own crypto."

There is almost a 1-to-1 relationship between looking at homemade crypto and finding severe vulnerabilities such as oracles, input reuse, incorrect encryption modes for the purpose of use, use of broken crypto, and a million other gotchas that most people don't know they need to know.


Not all "own crypto" is "homemade".


You're playing with semantics. You didn't address any of the points about crypto, choosing instead to superficially focus on a single word.

However, "own crypto" is indeed "homemade." Crypto is not independent from the purpose of use.


The only people who say things like "you can do things differently" are those who don't understand the fundamentals of crypto. (I've been one in the past)

Because once you do get to a point where you understand (You don't have to be a crypto guru or actually invent a new crypto algorithm, but just to a point where you understand how the fundamental bits and pieces work), you will find yourself saying the same thing: "Don't roll your own crypto".


You are wrong in assuming I don't know about cryptography and its fundamentals. I'm not a professional expert, but I do have formal theoretical and implementation experience.

The "don't roll your own crypto" is simply wrong in general. For comparison, most often injecting chemical agents into your veins in high concentration is highly detrimental, so most often the rule "don't inject highly concentrated chemicals into your blood" is valid, but sometimes the chemical is an antidote and is the only way to save a life. Therefore, in general, the rule "don't inject concentrated chemicals into your blood" isn't valid. It's the same with cryptography. There are tradeoffs at play and one needs to know what they are trying to achieve. Custom cryptography is a (rarely used) tool in the box, but it is rightfully there.

Maybe this is a misunderstanding. I certainly would tell children never to inject chemicals into their veins, but I wouldn't say that so absolutely to an adult. I treat people here with respect and as adults. That's why I nitpick and say "don't roll your own crypto" isn't true in general.


The reason people say "don't roll your own crypto" is not because rolling your own crypto is something reserved for super-intelligent elites, but it's because if you use a cryptosystem that's been around and running in public for a long time, it's "empirically proven" (as well as mathematically proven) that it can't be easily broken. Rolling your own crypto can work, and maybe it's even mathematically provable, but unless there's a very strong reason for rolling your own, all the risk that comes along with it is not justifiable.

This is why people criticize companies like Telegram for running their own crypto. They could have just used the existing cryptosystems for key exchange, etc., but they decided to go with their own, and since there are no other public systems that implement that specific algorithm, nobody can be sure what the agenda is.

Same reason why people don't trust certain algorithms released by NSA even though they're completely public and open source. No matter how provable an algorithm is, things are super subtle in crypto-land and you never know what might be hiding behind the scenes or what unknown vulnerability there may be.

I get your analogy about chemicals, but cryptography is its own special beast with its own inherent quirks which I described above, so I don't think the analogy fully translates to this context.


I agree with most of your arguments. But using single examples like Telegram in this case is not suited for arguing about the general validity of this rule.

If I were lazy, I could say that every cryptographic system is custom to the creators, hence the general rule is wrong.

Most often using established cryptographic systems might be the right thing, but it is not always. I'm all for putting big warning signs on the idea of custom cryptography, but I do oppose strict rules against them, for they have use-cases. One could argue that those who have these rare use-cases and needed capabilities do know that custom cryptography can be used and what the dangers and caveats are. I prefer to stay with the full truth upfront.


I agree with your general sentiment too.

The only reason I took issue was because of the adjective "general".

I still think the rule is "generally" true, but people shouldn't be dogmatic about it when they need to find a special solution for a special problem.

The reality is most problems that require cryptography are not that special and there really is no good reason to roll your own crypto. I think in this particular case this is even more true because people's real money is on the line.


> The only reason I took issue was because of the adjective "general".

Now, that is an interesting and valuable insight for me.

I regularly teach mathematics, where I need to insist on formal nitpicks when they change the result of the reasoning, and I'm used to people not understanding formal nitpicks right away, so I'm used to patiently insisting and using analogies, different angles, etc. However, I don't teach in English.

When I say a rule is true in general, then that means for me that rule is always true no matter what. If the word "general" in this context is understood differently, then that explains things.


Not sure if you're accusing me of having an incorrect understanding of the word "general" or not, but when someone says "X is simply wrong in general", most people understand it as "Generally, X is simply wrong". Even using your claim of:

> When I say a rule is true in general, then that means for me that rule is always true no matter what

and interpreting the exact sentence you posted above:

> The "don't roll your own crypto" is simply wrong in general

I can't see how you can interpret it anything other than:

> The "don't roll your own crypto" is simply wrong no matter what.

If you wanted to say what you're claiming to have said, you should have said "'The don't roll your own crypto' is not always correct".

Since you said you teach math, I'm sure you understand inverse/converse/contrapositive relations.


I'm not accusing you of anything.

When I wrote

> The "don't roll your own crypto" is simply wrong in general

then I meant the negation of

> The "don't roll your own crypto" is true in general

I should have used the much clearer

> The "don't roll your own crypto" is not always correct

I differentiate between "A is not true" and "not(A) is true". The latter I use when the negation of A is a tautology, the former I use when A may or may not be true.

In the case of "A is wrong in general" this led to

"A is wrong in general"

-> "A is not true in general"

-> "not(A is true in general)"

-> "not(A is true in all cases)"

-> "There are cases where not(A) is true".


If you are talking about, say, an EpiPen, then that is injecting medical-grade chemicals that someone else manufactured and packaged. "Rolling your own" EpiPen, including manufacturing and purifying the chemicals needed, manufacturing the delivery device, and then shooting yourself with that, would probably result in death. You appear to be totally clueless about the various ways that any of these processes could fail, resulting in your death. And that is most likely true of your understanding of crypto too. Don't roll your own crypto, and please, please, do not roll your own injectable medicine.


> Don't roll your own crypto, and please, please, do not roll your own injectable medicine.

But some people must and did, right? Else we would neither have injectable medicine nor crypto.

Instead of simply ruling it out, I argue for pointing to the risks and for guidelines on when it is appropriate and when it is not.

We need people working in all areas, including cryptography and manufacturing of injectable medicine. That's why I oppose the strict, overly-simplified rule in this case.

I never argued and don't argue for a random web-store one-man-show to develop their own crypto, nor for them to craft their own injectable medicine.


No. They don't. A whole group of people get together and the end result is good crypto or safe medicine. Sure, one person might have the idea. By the time it's included in, e.g. a Java Crypto library or OpenSSL, or your arm, it's been thoroughly reviewed by tens or hundreds of people. For example, "RSA" is the initials of the three people who invented it, by trying for some time to implement an idea published by two others two years earlier.

Only a fool would say, "Bah, I know what I'm doing! Ship it!"


I completely agree. Scientifically proven methods like independent verification are very important. Yet, it could very well be custom "own" crypto that is verified and then used.


OK, so now if Bitmain develops an ASIC they just won't tell anyone about it. Apart from the increase in difficulty (which is increasing all the time anyway), how would you know?

The ASIC companies are already de facto mining operations. Bitmain pre-mines on the customer's hardware and only ships once they've taken their profit and spiked the difficulty up. BFL did the same thing back in the day.


> OK, so now if Bitmain develops an ASIC they just won't tell anyone about it.

"Moving forward, developers will seek to protect the network’s ASIC resistance by slightly modifying its PoW algorithm at every scheduled hard fork, which generally occurs twice annually. These changes will not be noticeable to ordinary XMR users, but they will alter the network’s hashing algorithm enough that Cryptonight ASIC miners would become obsolete following every fork."


A cat-and-mouse game where you have no idea if you're successful or not? Doesn't sound winnable to me. And remember that those changes will have to be choreographed well in advance so that other miners can implement the changes too - at which point Bitmain will know the changes as well. And you're going to do a major enough shakeup of the algorithm to break ASICs every 6 months, and it's not going to piss off the miners?

This is a pretty tall order IMO.


Not always. A friend got two A3 miners and they seem rather new because they still copiously outgas formaldehyde.


I'm surprised I haven't heard about any big FPGA mining operations. It would work nicely with this change, plus it allows mining coins with different algos depending on what is most profitable at the moment (along with running some wallet recovery / vanity address generator services).


Usually newer PoW algos are very memory-heavy to make FPGA mining impractical.


But FPGAs can interface with whatever RAM you want.

If you have a memory-heavy problem, then buy expensive, high-quality RAM like RLDRAM3 or QDR-IV. In effect, think of RLDRAM3 as the "ASIC" and use the FPGA to interface with it.


Yes, but then you end up competing directly with the price/performance of off-the-shelf hardware. PC memory bandwidth is the best deal in computing because it's amortized across so many units.

I think it would be difficult for an FPGA to beat a $200 CPU and $60 MB that has 30-50 GiB/s RAM bandwidth.

(Numbers are approximate)


QDR-IV and RLDRAM3 are off-the-shelf parts. Go to any supplier like Digikey (https://www.digikey.com/product-detail/en/cypress-semiconduc...) and you can order yourself some high-speed RAM today.

An FPGA or ASIC would be needed to talk to these special RAM chips, but that's a problem that can be solved.

> I think it would be difficult for an FPGA to beat a $200 CPU and $60 MB that has 30-50 GiB/s RAM bandwidth.

It's not bandwidth that's the problem with a lot of these chips. It's latency. QDR-IV achieves 7.5 nanoseconds of latency, while the L3 cache of Ryzen or Intel Skylake achieves 10 nanoseconds, and DDR4 has latency measured in 50 to 100 nanoseconds, depending on the CPU that's accessing it.

So yes. An off-the-shelf QDR-IV chip will achieve BETTER latency numbers than your off-the-shelf computer, by significant margins.

Bandwidth is basically a solved problem with regards to ASICs or FPGAs: just add another channel. Talk to 10 or 20 chips at a time to get 10x or 20x the bandwidth. Latency is the hard problem, which is why I'm focusing on it.
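
A rough way to see why latency, not bandwidth, bounds a Cryptonight-style core (the iteration count and latencies below are assumptions for illustration, not measurements):

    # If each hash requires N dependent random accesses to a 2 MB scratchpad,
    # the per-core hash rate is bounded by roughly 1 / (N * access_latency).
    # The iteration count and latencies are order-of-magnitude assumptions.

    ITERATIONS = 500_000   # assumed dependent memory accesses per hash

    def hashes_per_second(latency_ns):
        return 1e9 / (ITERATIONS * latency_ns)

    for name, latency in [("DDR4 (~80 ns)", 80), ("CPU L3 (~40 ns)", 40), ("RLDRAM3/QDR-IV (~10 ns)", 10)]:
        print(name, round(hashes_per_second(latency), 1), "H/s per core")
    # Lower-latency memory directly multiplies the per-core rate; bandwidth can be
    # scaled separately by adding channels/chips, as noted above.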


Commodity DDR4 costs $10/GiB on Amazon.

The part you linked costs $16,497/GiB at Digikey.

That's 1600x more expensive, not even counting board fabrication costs.

https://www.amazon.com/Crucial-Single-PC4-19200-288-Pin-Memo...


Something like Ethash would obviously use DDR4 to store the DAG.

But something like Cryptonight (which hits 2MB EXTREMELY hard) would use QDR-IV or RLDRAM 3.

An ASIC / FPGA designer can use whatever part they want, for whatever part of the algorithm they want, for as many "cores" as they want. If DDR4 is the best answer to some problem in cost/effectiveness (ie: Ethash's DAG storage), then DDR4 should be used.

In the case of Ethash: there is a 16 MB inner loop, which would likely be best if it were run out of RLDRAM3. The DAG portion (which can be 4GBish, maybe 8GB to be safe) would be run from DDR4.

EDIT: I initially thought it was a 128kb inner loop. Double-checking the specs, it's 16MB, so RLDRAM3 seems like a good bet at that size.

Mix-and-match parts as you see fit and imagine the best combination. That's the advantage of FPGAs and ASICs: they literally can choose any part to better match a problem. Spend expensive memory only on the portions where it is necessary.


> there is a 16 MB inner loop, which would likely be best if it were run out of RLDRAM3

RLDRAM isn't stocked at Digikey, but it does list some prices. Far more reasonable at only 92x more expensive than commodity DDR4.

Here is a 16 MiB RAM SRAM chip for $169 with lower random-access latency than you're likely to obtain going out to a separate package. As a bonus, it comes with its own CPU cores: AMD RYZEN 5 1500X 4-Core 3.5 GHz (3.7 GHz Turbo) Socket AM4 65W YD150XBBAEBOX

I'd love to be proven wrong. What combination of FPGA and specialty RAM beats that?

https://www.newegg.com/Product/Product.aspx?Item=N82E1681911...


> Here is a 16 MiB RAM SRAM chip for $169 with lower random-access latency than you're likely to obtain going out to a separate package. As a bonus, it comes with its own CPU cores: AMD RYZEN 5 1500X 4-Core 3.5 GHz (3.7 GHz Turbo) Socket AM4 65W YD150XBBAEBOX

Ryzen may have 16 MB of L3, but it can't actually hold a 16MB dataset. Ryzen's cache structure is 8MB + 8MB, so it's physically impossible for even 4 cores to get more than 8MB in high-speed memory. I'm sorry, but 8MB + 8MB does NOT allow a 16MB dataset to fit.

If somehow a 16MB dataset were split across the 8MB+8MB L3 of the two CCXs, you'd incur a major cross-CCX latency of 130 nanoseconds. In fact, I'm almost certain the CPU would rather pull from main memory than attempt to keep cross-CCX L3 caches coherent.

http://www.sisoftware.eu/2017/04/05/amd-ryzen-review-and-ben...

Some select quotes:

> > Going inter-CCX increases the latency by 3 times to about 130ns thus if the threads are not properly scheduled you can end up with far higher latency and far less bandwidth.

> > Within the same compute unit (sharing L3), the latency is ~45ns

So let's be frank: Ryzen's 8MB+8MB L3 structure will NEVER hold a complete 16MB dataset. It's just insanely inefficient due to the 130ns cross-CCX penalty, and the memory controller would rather pull at ~70ns to 95ns latency from main DDR4 instead (yeah, the cross-CCX penalty is worse than an access to main memory!!)

But if we drop the problem size down to 8MB, then we get a more reasonable 45ns latency from Ryzen's L3 cache. Because now we're truly executing out of L3.

> I'd love to be proven wrong. What combination of FPGA and specialty RAM beats that?

Consider that RLDRAM3 has a latency of 7.5ns (tRC worst-case to the same bank) and has a power-usage of 2 Watts.

That tends to beat Ryzen's Cross-CCX latency of 130ns, DDR4 Latency of ~90ns, Ryzen L3 latency of 45ns, and power-usage of 65 Watts.

Consider that RLDRAM3 is available in many different sizes, all the way up to 64MB per chip. 8 RLDRAM3 chips working together will also increase bandwidth and capacity by 8x. So Bandwidth isn't an issue either.

Here's a price point btw if you want a readily available supply: https://www.arrow.com/en/products/mt44k16m36rb-125eatr/micro...


Oh, a note: "Latency" is a bit ambiguous because I don't really know how the sisoftware code works. If they measured latency by performing one operation, then perhaps RL + tRC (15+7.5 nanoseconds) would be closer to the expected performance of the RLDRAM3.

If they measure latency by randomly jumping around Ryzen's L3 memory and determining the delay, then RLDRAM3's performance would almost never stall and you effectively can get an answer on every clock cycle (~1ns), with a worst-case performance of waiting tRC if you visit the same memory bank (7.5ns). That's why I focused on tRC last post.

Thinking about it now however, maybe the 20ns figure for RLDRAM3 is a better comparison against Ryzen's measured 40ns L3 latency. It really depends on how SiSoftware's benchmark is coded.

In any case, it's clear that RLDRAM3 has incredibly good latency metrics, rivaling those of Ryzen's L3 cache. An expected 22.5ns of realistic latency compared to 40ns measured means RLDRAM3 is still way faster than Ryzen.

Plus, you know, the ability to hold a 9MB dataset or larger. An ASIC would be able to chain up as many RLDRAM3 chips as it has pins, so you could probably hold up to 640MB or so (assuming you have enough pins for 10x RLDRAM3 chips).


Yes, they can interface with RAM, but usually at a slower speed than a GPU can interface with colocated RAM (I believe, this is a bit out of my expertise)


Your statement is probably true for strictly 3 GPUs: Vega 56, Vega 64, and Titan V. Because those three GPUs use next-generation stacked-die HBM2 memory.

Alas: an FPGA or ASIC could ALSO use HBM2 memory (although looking into the RAM types available: Hybrid Memory Cube looks like a better fit for the Cryptocoin Proof-of-Work problems).

The main issue is that stacked-memory + interposers require advanced manufacturing techniques and are incredibly expensive right now. But if the future makes this technology cheaper, then I'd expect FPGA + Hybrid Memory Cube to eventually become the ideal platform for Proof-of-Work mining. AMD really did a good job at turning HBM2 into a commodity (well, until HBM2 supplies dried up anyway).

-------------

If you're talking about commodity GPUs, well... QDR-IV and RLDRAM3 have way lower latency than GDDR5, DDR4, and GDDR6. So I'd expect RLDRAM3 + FPGAs or ASICs to beat any GDDR5- or GDDR6-based GPU at memory-hard problems.


For this kind of workload it makes sense to optimize the DRAM array topology for the problem at hand, which you cannot do on a GPU. That obviously means you cannot use a ready-made FPGA devboard (which usually have quite narrow memory interfaces); you have to design your own board.


Back when I was trading, we poked at FPGAs... but they were 10x or 100x slower than ASICs.

They were "cool" in that you can 100% predict the speed.. like guarantee algo X takes Yns to complete... which is very useful for some things, but the brute power was really not that impressive compared to a fast intel CPU with no or a very thin OS.

I don't think the profit margin can support that kind of efficiency loss, you are back to near CPU levels..


Back in the Bitcoin days FPGA mining had little or no advantage over GPUs.


Actually, FPGAs had 10× better energy efficiency. This was huge. E.g. the popular Spartan6 LX150 used in many miners was rated at 50 joules per gigahash, compared to 500 J/GH for an HD 5970. I used to run both GPU and FPGA mining operations. That said, they were comparable to GPUs in terms of $/GH.
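
To put those J/GH figures in dollar terms (the electricity rate is an assumption):

    # Electricity cost per terahash at the efficiencies quoted above.
    # $0.10/kWh is an assumed electricity rate.

    def usd_per_th(joules_per_gh, usd_per_kwh=0.10):
        joules_per_th = joules_per_gh * 1000   # 1 TH = 1000 GH
        kwh_per_th = joules_per_th / 3.6e6     # 1 kWh = 3.6 MJ
        return kwh_per_th * usd_per_kwh

    print(usd_per_th(500))  # HD 5970 GPU:    ~$0.014 per TH
    print(usd_per_th(50))   # Spartan6 LX150: ~$0.0014 per TH
    # Same $/GH to buy, but a 10x smaller power bill per hash for the FPGA.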


That's not how I remember it. I remember a significant increase in hash rate and reduction in power consumption when FPGA mining became common.


> Moving forward, developers will seek to protect the network’s ASIC resistance by slightly modifying its PoW algorithm at every scheduled hard fork, which generally occurs twice annually.

I presume they've considered this, but this seems like a dangerous approach. How will they decide which change occurs, how far advance will it be decided, and how will they keep this decision secret? Think of the potential advantage to a party who uses foreknowledge of the exact change to have ASIC's ready at the official announcement. If done on a small scale, it's possible that no one would ever know. If done on a large scale, it would be obvious after the fact that a leak had occurred, but the repercussions would likely destroy the currency.


Everything is open and transparent, so a "leak" is absolutely guaranteed. It appears the plan is to merely make some sort of change every 6 months such that an ASIC manufacturer hardly has time to develop a change, tape out, and actually mine for longer than a month or two. This should make it prohibitively expensive.


Couldn't you defeat ASICs pretty definitively (at least for a long time) by making the algorithm dependent upon a gigantic completely random substitution table? It would at the very least require ASICs to have giant amounts of ROM or RAM (with a memory controller etc) which would multiply ASIC cost.

AFAIK for today's ASICs a table larger than a few megabytes would be a strong deterrent. Go really big (e.g. 64MB) and you'd have some margin.

This would also be completely deadly to GPUs, especially if combined with branches and complex algorithms.

Edit: even worse: perturb the table using entropy from the block chain, making it impossible to burn into ROM and requiring tons of on-board RAM or a memory controller.
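
A minimal sketch of the idea (purely illustrative, not a vetted PoW design): derive a large table from recent chain data, then force every hash to make data-dependent lookups into it.

    # Illustrative only: a toy "table-hard" hash, NOT a vetted PoW design.
    import hashlib

    TABLE_SIZE = 64 * 1024 * 1024   # 64 MB table, as suggested above

    def build_table(block_entropy: bytes) -> bytearray:
        """Expand recent chain data into a large pseudo-random substitution table."""
        table = bytearray(TABLE_SIZE)
        seed = block_entropy
        for i in range(0, TABLE_SIZE, 32):
            seed = hashlib.sha256(seed).digest()
            table[i:i + 32] = seed
        return table

    def table_hash(table: bytearray, data: bytes, rounds: int = 10_000) -> bytes:
        """Hash that is forced to make data-dependent lookups into the big table."""
        state = hashlib.sha256(data).digest()
        for _ in range(rounds):
            idx = int.from_bytes(state[:8], "little") % TABLE_SIZE
            state = hashlib.sha256(state + bytes(table[idx:idx + 32])).digest()
        return state

    # Usage (hypothetical names): tbl = build_table(latest_block_hash)
    #                             h = table_hash(tbl, candidate_header)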


This is exactly what “memory-hard” hashing algorithms like scrypt, Argon2, and balloon hashing already do.

You can tune their parameters to use GBs of memory if you wish.
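
For instance, Python's standard library exposes scrypt with tunable cost parameters; memory use is roughly 128 * r * N bytes, so the working set can be dialed up as the parent suggests:

    import hashlib, os

    salt = os.urandom(16)

    # scrypt memory use is roughly 128 * r * N bytes.
    # N=2**14, r=8 -> ~16 MiB; raise N (and maxmem) for GiB-scale working sets.
    digest = hashlib.scrypt(b"block header bytes", salt=salt, n=2**14, r=8, p=1, dklen=32)
    print(digest.hex())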


I don't get it.

A cryptocurrency which will hard-fork twice yearly, with all miners accepting that hard fork? If you regularly trust changes that are not backed by computation but come from a central authority, why don't you just use a traditional bank backed by a national currency?


The little known secret of ALL cryptos, is that they are all fundamentally backed by a SOCIAL consensus, and not just a technical definition.

If everyone in the world just decides that a new fork of bitcoin IS bitcoin, then it really does become bitcoin.

This exact thing happened with Ethereum. A bunch of people in the community just decided to hardfork, and then it happened. And now the hard forked ethereum is the real ethereum.

Computing power doesn't stop your "real" version of the old currency from becoming worthless, and the "new" "forked" version of the same currency from becoming worth more money.


I agree this is super important and often missed.

People think the blockchain prevents bad things from happening, but it’s the exact opposite:

The blockchain allows EVERYTHING.

It permits all possible universes with all possible transactions. I can go back to the first block and pretend Satoshi never mined more than the first one, and I have all the bitcoins.

That reality is just as valid as any other. This is the central idea of blockchain tech: you assume all realities will exist. Then you code ways for people to do business despite inevitable lack of consensus.

Coincidentally this is exactly what young hippies miss about anarchism when they are trying to run consensus processes. They think anarchism is about obtaining consensus. But it’s really the opposite: anarchism is about what happens when you DONT have consensus. If you have consensus it doesn’t matter what political system you’re in: monarchy, communism, libertarianism, it’s all the same when there’s consensus.

This all became clear to me after reading The Dispossessed (RIP Ursula K. Le Guin). An anarchist syndicate is differentiated (from e.g. a co-op) by how people react when someone DISSENTS.

If you dissent from the co-op you have essentially no rights. Unless you have a voting quorum the others can use the police to stop you operating against policy.

In a syndicate, a dissenting member has the same rights as the entire rest of the syndicate. They have equal claim to the facilities, the resources, even the staff. They stand peer to peer with the syndicate itself.

Bitcoin is the same. A fork has all the same capabilities as the long chain.


Because the traditional bank backed by a nation-currency has no privacy features.

Please read up on Monero.


It's very uncivil to assume that people who disagree have not "read up" on whatever topic. I have read up on Monero, and know what it's offering. To me, those features have no value, and thus, this hard fork happening quite often is not worth it.


Why did you ask the question (“why don’t you use...”) if you already knew the answer then? The fact that you asked the question in such a broad way suggested you hadn’t done the basic research.


Why would you assume all miners accept the hard fork? They don't have to.


I am a bit confused by the reaction and commentary. ASICs are kind of the natural next step, so it is not surprising at all for any serious miner to go there. Hence, this appears to be an overly reactive, really short-term move, and I wonder about the reasoning behind it.

The number of units needed to break even is not that high -- here is one random example, a Google search away, that seems right to me from what I can recall: http://www.deepchip.com/downloads/fpga-vs-asic.pdf. A reasonable volume, or am I missing something?

Consider FPGAs as the efficient R&D step (due to their re-usability) towards an ASIC. An ASIC will generally be faster and a considerably more cost-effective option per unit, plus more compact (so more units per space, although there are other factors to account for here). Usually you might also end up with something less power-hungry, although I really doubt one will see such a benefit here.

Hence Monero is going against this step as a way to mitigate centralization and the introduction of more barriers to market entry. I think this is a bit late. For instance, there is a great incentive for miners to join pools -- see a Bitcoin analysis: http://www.jbonneau.com/doc/SBBR16-FC-mining_pool_reward_inc...


Ironic that the plan to keep mining decentralized requires centralization itself. Same thing happened with SIA earlier when another company beat SIA's founders to the market with an ASIC.


How trustworthy are these people?

I have not heard anything bad about the Monero people, but given the number of bad actors in the cryptocurrency space they should probably describe what steps they are taking to ensure that nothing like what I'm about to describe can occur.

> Moving forward, developers will seek to protect the network’s ASIC resistance by slightly modifying its PoW algorithm at every scheduled hard fork, which generally occurs twice annually. These changes will not be noticeable to ordinary XMR users, but they will alter the network’s hashing algorithm enough that Cryptonight ASIC miners would become obsolete following every fork

Suppose an insider is conspiring with one of the ASIC makers. The insider gets the modifications to the ASIC maker before they go public. The ASIC maker gets a head start on making new ASICs.

During the time between the modifications going public and other ASIC makers getting chips out, the conspiring maker does NOT sell their chips. They use them themselves, splitting the results of their mining with the insider.


There are no "insiders" with Monero. Just like many FOSS projects, everything is developed in the open and entirely transparently. The first time anyone has visibility on PoW changes is when someone comes up with specifics and opens a PR, and as can be seen on the current PR there's little chance that the first pass is the one that will go in.


What are these scheduled hard forks? Is it a property built into Monero's algorithm, or is it a process centrally planned by the governance?


Monero has scheduled hard forks for software upgrades.


The developers have a release cycle that includes periodic hard forks. AFAIK there is nothing algorithmic to entice or force a fork like the Ethereum difficulty bomb does.


The scheduled hard fork is announced weeks in advance, and high-profile clients like exchanges are "strongly encouraged" to upgrade. If you run your own XMR full node on an old version past the fork date, you get a bright red warning message telling you to get the new version.


That sounds extremely centralized.


Anyone can be a Monero contributor and can work on the hard fork.


I am not too sure, but someone in the Monero space told me that it is some kind of "voting on Monero's direction" thing. Basically, by forking, the community decides what the developers should do.


Is it possible to synthesize cryptographic hashing function circuits randomly?


A cryptographic hash is a one-way function that produces repeatable, not random, output.


This is not what I meant :) my mistake, I will rephrase my question:

Is it theoretically possible for a computer program to synthesize new cryptographic hash functions which are significantly different from one another (i.e. not just changing constants or the number of iterations)?


One would need to define what is meant by "significantly different" as well as the exact purpose of the hash functions. I.e., I suggest not starting with the implementation but with what you are trying to achieve. Putting the cart before the horse, so to speak, is a recipe for creating bad crypto.


It seems to me the only reliable way to favour general purpose computers over specifically manufactured hardware would be to change the algorithm periodically.


I wonder how nicehash will handle the upcoming hard forks as there are other cryptocurrencies running on the original cryptonight algorithm.


An alternative is memory-hard PoW algorithms like Equihash. To calculate a hash you need a lot of memory, which makes ASICs infeasible.


I disagree.

I've outlined my main points in a post on Reddit: https://www.reddit.com/r/Monero/comments/7x82yp/technical_cr...

In short: a memory-hard PoW algorithm (such as Cryptonight) can be gamed by using exotic low-latency memory like RLDRAM3, QDR-IV, or Hybrid Memory Cube.

It's not as many orders of magnitude better as with BTC, but it'd probably be on the order of 1x to 10x more power-efficient in the case of RLDRAM3. Although, due to the costs of developing an ASIC and buying expensive RAM like RLDRAM3, it would likely cost a bit more than commodity hardware.

You'd make it up eventually in energy savings however.

The recent Hybrid Memory Cube RAM looks incredibly fast and power-efficient however. I wouldn't be surprised if HMC + FPGAs or ASICs completely destroys any "ASIC-resistance". Fortunately for the cryptocoin community, HMC is incredibly exotic, expensive, and rare at the moment.


It's interesting that you didn't consider internal SRAM; 2 MB of SRAM is less than 10 mm2 in a 10 nm process so an entire CryptoNight core might be ~12 mm2. This would be roughly twice the area efficiency of x86.


QDR-IV is SRAM.

I think buying QDR-IV and externally interfacing would be easier. But I haven't thought too much about this aspect of the problem.

In effect, QDR-IV already exists and is already mass produced. So if you wanted SRAM, there it is. Already to order from numerous suppliers. I don't see much reason why you'd want to design it into an ASIC when its a ready off-the-shelf part.

Overall, I think the optimal design would use RLDRAM3 anyway. RLDRAM3 is cheaper and uses less Watts than QDR-IV. The tRC latency issue on RLDRAM3 doesn't seem to affect the expected workload of a memory-heavy Cryptocoin, so QDR-IV (aka SRAM) wouldn't have much of an advantage over RLDRAM3 anyway.


Thanks. It's really interesting. It seems that I posted a stupid comment. Sorry for that.


No one seems to know about RLDRAM3 yet in the cryptocoin community. Prior to my post a few days ago, I couldn't find any information on the subject (which is why I wrote something up on Reddit).

I'm trying to bring up the issue actually, because I think it's the obvious elephant in the room. In effect: ASICs are ALREADY made to solve the memory problem; they're called high-speed, low-latency RAM (i.e. RLDRAM3, QDR-IV, or HMC).

But it seems like most people in the cryptocoin community are only familiar with DDR3, DDR4, or GDDR5 RAM. I think the "exotic" low-latency stuff is still not really well known yet.


I know Peter Todd has made some comments about how it would be pretty easy to build ASICs for memory-hard hashing algorithms, but my few seconds of Googling hasn't turned it up. Plenty of old hands will acknowledge that ASIC resistance may be a pipe dream for any coin which amasses sufficient economic value.


Look at Vertcoin and their adaptive tuning of the scrypt hash parameters based on difficulty.


QDR-IV has lower latency than the L3 cache of CPUs. And you can achieve arbitrarily higher bandwidth by simply talking to more QDR-IV chips.

The harder the memory-problem becomes, the better it is for QDR-IV SRAM compared to CPUs or GPUs. An ASIC or FPGA designer can use any RAM that exists in the world, but CPU / GPU users are limited to DDR3, DDR4, the L1/L2/L3 caches, GDDR5, and maybe HBM2 (AMD Vega / NVidia V100)

Any memory-hard problem is basically solved by these exotic low-latency / high-bandwidth RAM chips plus an FPGA or ASIC interfacing with them.


No, it just means you need an ASIC attached to a lot of memory slots. You could probably use rejected DRAM, with a few bad bits, and save money.


Memory-hard Crypto-coin PoW algorithms are usually latency-bound (ex: Cryptonight). So DDR4 RAM would be too slow compared to the L3 cache or L2 cache of CPUs.

So this simple solution actually won't work on the harder algorithms. It might work for Ethash or something scrypt based, but things have gotten harder since then.


This is awful. I would like to be able to buy a graphics card for its intended use (gaming) at a reasonable price.


So not a war on TSMC etc. like I thought this article was about, but a war on Bitcoin ASIC designers.


Trying to defend mining from ASICs makes some sense, keeping recent news in mind. Monero could really gain momentum (more than they have by now) by becoming the alternative to online advertising, by being the coin mined in browsers around the world.

Salon.com already tried it, and if it works out for them, it could become a pretty great alternative to ads. It would also change the dynamics of online content, as page-reloads would mean a dip in hashing power.


http://coinhive.com/ also offers this but users do not seem to like it. They offer two miners, one which only operates with the user's consent and another one without explicit consent. Either one seems to have been banned by my ad blocker, although the one which requires consent has a note which clearly states that it should not be in any filter, because the user wanted to run the script and blocking it might break the website.

They even have a pretty cool use case for it: requiring x amount of hashes as a captcha replacement, with a nice UI and such (you can find it by clicking on login on their page and disabling your ad blocker). Unfortunately, it does not work really well for many of my use cases. When I'm on my phone it is fairly slow. When I'm on an old phone (iPhone 5 or 5s, for instance), it is so slow that many users would just close the page because they believe it is broken, but it probably is a decent defense against brute-force attacks because it still requires quite a bit of work to try a login.
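
A hashcash-style sketch of that captcha idea (illustrative assumptions only; Coinhive's real scheme submits Cryptonight shares rather than SHA-256 preimages):

    # Toy proof-of-work "captcha": the client must find a nonce such that
    # SHA-256(challenge + nonce) has a required number of leading zero bits.
    import hashlib, os

    def pow_value(challenge: bytes, nonce: int) -> int:
        return int.from_bytes(hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest(), "big")

    def solve(challenge: bytes, difficulty_bits: int = 18) -> int:
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while pow_value(challenge, nonce) >= target:   # slow: ~2**difficulty_bits tries on average
            nonce += 1
        return nonce

    def verify(challenge: bytes, nonce: int, difficulty_bits: int = 18) -> bool:
        return pow_value(challenge, nonce) < (1 << (256 - difficulty_bits))

    challenge = os.urandom(16)      # server-issued per login attempt
    nonce = solve(challenge)        # costly on the client, especially an old phone
    print(verify(challenge, nonce)) # cheap for the server to check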


The biggest problem I see with Coinhive is that they've systematically failed to prevent malicious uses of their service. The Coinhive script is frequently injected into compromised web sites, or into all web sites by locally installed malware, and Coinhive's payout scheme -- fully automated payouts every two hours -- makes it very easy for a malicious user to "get away" with their profits.

(Coinhive claim to disable accounts associated with abuse, but even if they do that promptly, the lack of any meaningful holding period means that abuse can still be quite profitable.)



