My favorite part about it is the response to criticism that the changes may break the underlying cryptography they are using:
> We do not have cryptographers familiar with this kind of thing, sadly. 
Basically, I think it actually weakens the FPGA-resistance of Cryptonight.
Fortunately, the main idea of the changes would discourage ASICs from being made for Monero. But it doesn't seem to discourage FPGAs, which are just one "resynthesize" away from accommodating any code changes. And as long as the community becomes accustomed to regular changes, any issues in the cryptography can be fixed later.
If FPGAs can implement these changes efficiently (more efficiently than CPU miners) and be re-synthesized to implement further changes, doesn't that advantage them?
FPGAs don't suffer this problem
Something like an FPGA + Interposer to HMC would be a huge R&D effort, and just as centralizing.
BTW: None of the designs I've talked about are strictly speaking commodity. They'd require at a minimum, custom PCBs. Maybe more advanced techniques for the best technology (again: Custom Interposer to HMC + FPGA interface + all the Verilog / VHDL code to make it happen).
As long as an FPGA-based shop kept their FPGA PCB secret, as well as their code secret, and their Device Drivers secret (You'd probably run Linux / Windows to talk to the FPGA over PCIe) then they're basically going to be ahead of the rest of the competition. Eventually, the competition would catch up, but a constant R&D effort into newer designs (ie: testing HBM2 vs HMC vs RLDRAM3 vs QDRIV, building relationships with suppliers, etc. etc.) could lead into a sustainable business edge.
Heck, early on in Monero's life, some dude got to like 40%+ of the entire network's Hash Rate by simply writing better CPU code and keeping it for himself, and then spending hundreds-of-thousands of dollars on AWS: https://da-data.blogspot.com/2014/08/minting-money-with-mone...
> By the 14th of May, we were 45% of the total hashing power on the coin. Things started to get a little exciting
FPGAs would be that on steroids. There are way fewer Verilog / FPGA engineers. It also would require custom PCBs and hardware engineers to make it all happen. I wouldn't even know who to ask to design a Hybrid Memory Cube interposer and fit it to an FPGA, for example.
Not cheap, and probably not available in bulk, but cheaper than custom PCB R&D.
16.5 MB of internal RAM. 18 MB of QDR-IV SRAM. 125 MB of RLDRAM3. 2 GB of HMC.
That's plenty of RAM to pretty much crush Ethash, Cryptonight and more.
Heh, free money incoming if you've got a crypto background. Cryptocurrency is a nested fractal of incompetence.
It works just fine if you substitute "Software development" for the 1st word.
By far, the majority of GPU farms mine Ethereum. This means any GPU-mineable coin other than Ethereum is trivially vulnerable to majority attacks. The only thing preventing these attacks from happening is the financial incentive against performing such a purely destructive attack.
ASIC-mineable coins are safe from such attacks from GPU miners.
People point out that the high barrier of entry into ASIC manufacturing can lead to monopolies, such as Bitmain which is a quasi- (not quite) monopoly with its 70-80% market share in Bitcoin with the S9. But first and foremost, manufacturing monopolies don't matter to the function of Bitcoin. A manufacturing monopoly isn't a hashrate monopoly: Bitmain cannot attack Bitcoin. They themselves only directly own <10% of the hashrate.¹
¹ Not to be confused with Bitmain's mining pools which represent more than a third of the hashrate. People usually misunderstand the function and nature of mining pools. End-users choose to mine at whatever pool they want. If Bitmain's pools started sending malicious mining jobs to stratum clients, the community would react and the end-users would promptly abandon them. So in practice Bitmain cannot do whatever they want for however long they want with their pools' hashrate. They enjoy the privilege of representing this hashrate, and this privilege could evaporate overnight.
(Since you talk about decentralization, being GPU-mineable doesn't necessarily increase decentralization either. Bitmain could put 100 MW of GPUs into mining some random coin.)
Digiconomist estimates that current Ethereum mining costs $1.7 billion a year, or $4.65 million a day, or $194,000 an hour, or $3,234 a minute.
Multiply by 5 for cloud on-demand premiums and you could dominate the Ethereum network for an entire day for $23.25 million. You could also do it for free if you manage to do it with stolen credit cards. I'm amazed it hasn't already happened to a cryptocurrency.
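A quick sketch of that arithmetic (the $1.7B/year figure is the one quoted above; the 5x cloud premium is the commenter's own assumption):

```python
# Back-of-the-envelope only: the annual figure is the Digiconomist estimate
# cited above, and the 5x on-demand premium is an assumption, not a quote.
annual_mining_cost = 1.7e9                 # USD/year
per_day    = annual_mining_cost / 365      # ~ $4.66M
per_hour   = per_day / 24                  # ~ $194k
per_minute = per_hour / 60                 # ~ $3.2k

cloud_premium = 5                          # assumed markup for renting GPUs on demand
one_day_majority = per_day * cloud_premium # ~ $23M to out-hash the network for a day

print(f"per day: ${per_day:,.0f}, per hour: ${per_hour:,.0f}, per minute: ${per_minute:,.0f}")
print(f"one rented day of majority: ${one_day_majority:,.0f}")
```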
What kind of cool stuff can you actually do when you dominate the Ethereum network?
Sure, $23 million seems fun, but handling that many GPUs is expensive, and if they don't have other markets to support that usage, it's a big loss to them (and if they actually do have the market, then that single $23 million isn't worth much either).
I see your point about ASICs raising the bar but wouldn't that put contributions out of reach of all but high-wealth or dedicated miners? Which is basically what has happened to Bitcoin. It is completely uneconomical for anyone but people with free to very cheap electricity and lots of ASICs.
I believe the point of Monero is that power shouldn't be consolidated to a single few.
Not at all. ASICs don't increase the gap between low-hashrate and high-hashrate miners any more than GPUs do. It's all proportional: if you have x% of the global hashrate, you get x% of the revenue. This is one of the biggest misconceptions in the mining world. You see journalists claiming that ASICs make mining tough for smaller miners, but this has absolutely nothing to do with ASICs. It all comes down to electricity costs: large miners can afford to set up shop in areas with cheap electricity, while small miners have to make do with whatever rates they pay where they live/work. The difference between $0.03/kWh in Douglas County, WA (where many industrial miners are located) and $0.20-0.30/kWh in SoCal is huge. That's why large miners profit more. This would be exactly the same if the coin were GPU-mineable.
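A minimal sketch of that point: same machine, same hashrate share, different power price. The per-day revenue and power draw below are round hypothetical numbers, not figures from this thread.

```python
# Illustrative only: the revenue and power numbers are assumed, not measured.
def daily_profit(revenue_per_day, power_kw, usd_per_kwh):
    """Profit = revenue minus electricity for 24h of continuous mining."""
    return revenue_per_day - power_kw * 24 * usd_per_kwh

revenue = 10.0   # USD/day for some fixed share of the network hashrate (assumed)
power   = 1.4    # kW drawn by the miner (assumed)

for rate in (0.03, 0.10, 0.25):   # Douglas County vs. typical vs. SoCal-ish rates
    print(f"${rate}/kWh -> ${daily_profit(revenue, power, rate):.2f}/day")

# The revenue term scales linearly with hashrate share for everyone;
# only the electricity term separates large and small miners.
```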
This once again reduces the playing field to a small, powerful group of players and essentially no one else can participate. Doesn't this defeat the purpose of a decentralised currency based on not having "trusted nodes"?
Comparatively, I do have a nice graphics card (which I can use for gaming and stuff), and I can rent from cloud providers at price points I can afford if I wanted more access to cards.
(1) Their higher capital cost, limited availability, and limited "appeal" promote effective centralization of mining.
(2) Due to #1, mining rewards go to fewer and fewer hands, promoting inequality and centralization of wealth.
(3) ASICs increase the overall wastefulness of PoW currencies since they're e-waste. Mining ASICs are worthless for any other purpose, while CPU-mineable coins at least promote the production of things that are otherwise useful. If a miner liquidates a bunch of CPUs and GPUs they will see use on the market, while ASICs are junk.
Similarly, locations with the cheapest power force centralization of location, which forces centralization of state regimes presiding over miners.
Monero doesn't suffer from these centralizing forces as applying generalized computing power can be done with spare cycles and is still profitable at the margin. In other words, you might not buy a computer to mine Monero, but if you have one, it might make sense to use your spare cycles for mining. This gets you a widely distributed polyculture, which is the best kind of decentralization.
I don't think so. Competitors will always try to catch up, given how profitable it is to sell mining rigs. We certainly have been going through a phase over the last 2 years where Bitmain has been dominant, but today we have 3 manufacturers with miners just as efficient as the Bitmain S9 (0.10 J/GH); a quick conversion of these J/GH figures to wall power follows the list:
• Canaan who just released the Avalon 821 (0.11 J/GH)
• Ebang E10 (0.10 J/GH)
• Halong is promising a miner just as efficient in "2 months"
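Rough conversion of those J/GH figures to wall power, assuming an S9-class hashrate of about 14 TH/s (that hashrate is my assumption, not something stated above):

```python
# What a J/GH efficiency figure means at the wall, for a given hashrate.
def wall_power_watts(j_per_gh, hashrate_th):
    gh_per_s = hashrate_th * 1000        # 1 TH/s = 1000 GH/s
    return j_per_gh * gh_per_s           # J/s == W

for name, eff in [("Bitmain S9", 0.10), ("Avalon 821", 0.11), ("Ebang E10", 0.10)]:
    print(f"{name}: ~{wall_power_watts(eff, 14):.0f} W at an assumed 14 TH/s")
# 0.10 J/GH at 14 TH/s -> ~1400 W; 0.11 J/GH -> ~1540 W.
```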
Here in the US, it's very common to have a front door which is never locked.
You care about having a safe front door to the extent that you anticipate future attacks. If you don't, then you don't.
More specifically, the fact that a trivial attack didn't happen yet is not always an argument against the need to anticipate that attack.
Just like mining pools were not an expected development, humans will always figure out a hack to make more money where there's money, especially when there are no legal implications for doing so.
These Monero developers think they're doing good for their network, but all this does is weaken the network based on the limited knowledge they have about what's currently possible. I would even go further and say this is arrogance.
There are many ways to compromise a network, and the best way to protect against them is to strengthen it, instead of making foolish decisions based on some conspiracy theory that nobody has ever actually seen played out.
For example, if someone figures out a way to bring together the Coinhive JS mining library and a clever web exploit scheme to create the ultimate global Monero mining worm, it could compromise the entire network and they would have no choice but to hard fork. The only way to protect against this sort of attack is to strengthen the network, and only then do you get to worry about centralization.
I know this "miner centralization" is a controversial topic, but what I know for sure is that people are making decisions to permanently change protocols that are already working, in order to "fix" some hypothetical situation that may or may not happen.
If humans were so good at prediction, we would not have great depressions. The best way to deal with these issues is to solve them when they actually happen. I think this is best for all cryptocurrencies, because IF some sort of centralization actually does end up happening, people can always fork and take a huge chunk of the network with them.
The people who think we can't recover from this hypothetical centralization are making assumptions that are more likely to be false than true.
We'll see how this works out. Hard to tell without any technical details about what these changes to the algorithm will be. I'm not quite sure, though, about amateurs mucking with the details of hash functions; it smells like going against the rule of "don't roll your own crypto".
Also, and in the full understanding that this ship has sailed, it's stupid to call a planned update everyone goes along with a "hard fork".
No, a hard fork indicates a set of consensus rules that is incompatible with the current set of consensus rules. This occurs when the domain of valid blocks expands. When the domain of valid blocks decreases, it's a soft fork.
Anyway, I understand the issue of an incompatible change. The point is, it's dumb to use the same term for an incompatible change everyone goes along with (like here) with an incompatible change that causes two different chains to be kept up by two different communities (like Ethereum vs. Ethereum Classic).
The difference is between calling a fork-shaped thing a "fork" and calling a knife-shaped thing a "fork".
If a faction disagrees with the scheduled update, they could very well cause 2 different chains, in the exact same way that Ethereum and Ethereum Classic happened.
There is no free lunch. A hard fork is a hard fork. It is perfectly possible that you will be incorrect in your prediction that the hardfork will be uncontroversial.
Blocks are considered valid by a consensus set if they have certain attributes. Consider each attribute an axis in a multi-dimensional space: one axis for the block size, one for the type of PoW function, one for the sign of the R and S values used in an ECDSA signature, another for the required difficulty.
For example, Bitcoin went through a soft fork when the domain of valid blocks was reduced in BIP66, which restricted the encodings DER signatures could take. Bcash /increased/ the domain in which blocks are considered valid on its chain by changing Bitcoin's difficulty-retargeting algorithm and 8x'ing the block size.
As a result, if you turn on a Bitcoin node running ~0.6-0.7, it won't sync to the Bcash chain but will sync to the Bitcoin chain (with implementations such as Bitcoin Knots, bcoin, or Bitcoin Core), because Bcash is a hard fork of Bitcoin.
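A toy rendering of the "domain of valid blocks" framing, using block size as the only validity axis (the 1 MB / 8 MB numbers echo the Bitcoin/Bcash example above; everything else is simplified):

```python
# One validity axis: block size. Old rule, a soft-forked rule (domain shrinks),
# and a hard-forked rule (domain expands).
OLD_LIMIT  = 1_000_000      # bytes, original consensus rule
SOFT_LIMIT =   500_000      # stricter rule: every block it accepts, old nodes accept too
HARD_LIMIT = 8_000_000      # looser rule: accepts blocks that old nodes reject

def old_valid(size):  return size <= OLD_LIMIT
def soft_valid(size): return size <= SOFT_LIMIT
def hard_valid(size): return size <= HARD_LIMIT

big_block = 4_000_000
print(old_valid(big_block), hard_valid(big_block))      # False True -> old nodes fall off the new chain
small_block = 400_000
print(old_valid(small_block), soft_valid(small_block))  # True True -> old nodes follow along
```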
That's an extremely important principle when doing the routine work of protecting sensitive information, but less relevant when you're making hash modifications that are going to be obsolete in a few months anyway. By the time anyone's done serious cryptanalysis, it's probably too late.
Anyway, the worst case scenario is that things get screwed up and you need to do a premature hard fork that rolls back some transactions. Not the end of the world.
That sounds pretty bad. Especially if mucking with the hash function is now business as usual. (I.e. there's a chance of this happening after every hard fork)
There is no such rule in general. In many situations it might be better to use available solutions, but in others it might be better to do something different. It depends on your attack model and what you are trying to achieve.
Actually, the general rule is, "Do not roll your own crypto."
> It depends on your attack model and what you are trying to achieve.
It depends on so much more than those two items, which is why people are told, "don't roll your own crypto."
There is almost a 1-to-1 relationship between looking at homemade crypto and finding severe vulnerabilities such as oracles, input reuse, incorrect encryption modes for the purpose at hand, use of broken crypto, and a million other gotchas that most people don't know they need to know.
However, "own crypto" is indeed "homemade." Crypto is not independent from the purpose of use.
Because once you do get to a point where you understand (you don't have to be a crypto guru or actually invent a new crypto algorithm, just understand how the fundamental bits and pieces work), you will find yourself saying the same thing: "Don't roll your own crypto".
The "don't roll your own crypto" is simply wrong in general. For comparison, most often injecting chemical agents into your veins in high concentration is highly detrimental, so most often the rule "don't inject highly concentrated chemicals into your blood" is valid, but sometimes the chemical is an antidote and is the only way to save a life. Therefore, in general, the rule "don't inject concentrated chemicals into your blood" isn't valid. It's the same with cryptography. There are tradeoffs at play and one needs to know what they are trying to achieve. Custom cryptography is a (rarely used) tool in the box, but it is rightfully there.
Maybe this is a misunderstanding. I certainly would tell children never to inject chemicals into their veins, but I wouldn't say it so absolutely to an adult. I treat people here with respect and as adults. That's why I nitpick and say "don't roll your own crypto" isn't true in general.
This is why people criticize companies like Telegram for running their own crypto. They could have just used the existing cryptosystems for key exchange, etc., but they decided to go with their own, and since there are no other public systems that implement that specific algorithm, nobody can be sure what the agenda is.
Same reason why people don't trust certain algorithms released by NSA even though they're completely public and open source. No matter how provable an algorithm is, things are super subtle in crypto-land and you never know what might be hiding behind the scenes or what unknown vulnerability there may be.
I get your analogy about chemicals, but cryptography is its own special beast with its own inherent quirks which I described above, so I don't think the analogy fully translates to this context.
If I were lazy, I could say that every cryptographic system is custom to the creators, hence the general rule is wrong.
Most often using established cryptographic systems might be the right thing, but it is not always. I'm all for putting big warning signs on the idea of custom cryptography, but I do oppose strict rules against them, for they have use-cases. One could argue that those who have these rare use-cases and needed capabilities do know that custom cryptography can be used and what the dangers and caveats are. I prefer to stay with the full truth upfront.
The only reason I took issue was because of the adjective "general".
I still think the rule is "generally" true, but people shouldn't be dogmatic about it when they need to find a special solution for a special problem.
The reality is most problems that require cryptography are not that special and there really is no good reason to roll your own crypto. I think in this particular case this is even more true because people's real money is on the line.
Now, that is an interesting and valuable insight for me.
I regularly teach mathematics, where I need to insist on formal nitpicks when they change the result of the reasoning, and I'm used to people not understanding formal nitpicks right away, so I'm used to patiently insisting and using analogies, different angles, etc. However, I don't teach in English.
When I say a rule is true in general, then that means for me that rule is always true no matter what. If the word "general" in this context is understood differently, then that explains things.
> When I say a rule is true in general, then that means for me that rule is always true no matter what
and interpreting the exact sentence you posted above:
> The "don't roll your own crypto" is simply wrong in general
I can't see how you can interpret it anything other than:
> The "don't roll your own crypto" is simply wrong no matter what.
If you wanted to say what you're claiming to have said, you should have said "'The don't roll your own crypto' is not always correct".
Since you said you teach math, I'm sure you understand inverse/converse/contrapositive relations.
When I wrote
then I meant the negation of
> The "don't roll your own crypto" is true in general
I should have used the much clearer
> The "don't roll your own crypto" is not always correct
I differentiate between "A is not true" and "not(A) is true". The latter I use when the negation of A is a tautology, the former I use when A may or may not be true.
In the case of "A is wrong in general" this led to
"A is wrong in general"
-> "A is not true in general"
-> "not(A is true in general)"
-> "not(A is true in all cases)"
-> "There are cases where not(A) is true".
But some people must and did, right? Else we would neither have injectable medicine nor crypto.
Instead of simply ruling it out, I argue for pointing to risks and guidelines for when it is appropriate and when it is not.
We need people working in all areas, including cryptography and manufacturing of injectable medicine. That's why I oppose the strict, overly-simplified rule in this case.
I never argued and don't argue for a random web-store one-man-show to develop their own crypto, nor for them to craft their own injectable medicine.
Only a fool would say, "Bah, I know what I'm doing! Ship it!"
The ASIC companies are already de-facto mining operations. Bitmain premines on the customers' hardware and only ships once they've taken their profit and spiked the difficulty up. BFL did the same thing back in the day.
"Moving forward, developers will seek to protect the network’s ASIC resistance by slightly modifying its PoW algorithm at every scheduled hard fork, which generally occurs twice annually. These changes will not be noticeable to ordinary XMR users, but they will alter the network’s hashing algorithm enough that Cryptonight ASIC miners would become obsolete following every fork."
This is a pretty tall order IMO.
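To make the idea concrete, here is a toy sketch of what a per-fork PoW tweak could look like, with SHA-256 standing in for Cryptonight and made-up fork heights; it is not the actual Monero plan, just an illustration of why software and FPGA miners adapt trivially while a fixed-function ASIC does not:

```python
import hashlib

# Hypothetical fork schedule: activation height -> tweak constant (all values made up).
FORK_TWEAKS = {0: b"v0", 1_560_000: b"v7", 1_788_000: b"v8"}

def active_tweak(height: int) -> bytes:
    # Latest tweak whose activation height is at or below the current height.
    return FORK_TWEAKS[max(h for h in FORK_TWEAKS if h <= height)]

def pow_hash(block_blob: bytes, height: int) -> bytes:
    # SHA-256 is only a stand-in for the real memory-hard function.
    return hashlib.sha256(active_tweak(height) + block_blob).digest()

# Same block contents, different fork era -> completely different hashes,
# so hardware hard-wired around the old constant becomes useless.
print(pow_hash(b"block", 100) != pow_hash(b"block", 1_600_000))  # True
```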
If you have a memory-heavy problem, then buy expensive, high-quality RAM like RLDRAM3 or QDR-IV. In effect, think of RLDRAM3 as the "ASIC" and use the FPGA to interface with it.
I think it would be difficult for an FPGA to beat a $200 CPU and $60 motherboard with 30-50 GiB/s of RAM bandwidth.
(Numbers are approximate)
An FPGA or ASIC would be needed to talk to these special RAM chips, but that's a problem that can be solved.
> I think it would be difficult for an FPGA to beat a $200 CPU and $60 MB that has 30-50 GiB/s RAM bandwidth.
It's not bandwidth that's the problem with a lot of these chips; it's latency. QDR-IV achieves 7.5 nanoseconds of latency, while the L3 cache of Ryzen or Intel Skylake achieves 10 nanoseconds, and DDR4 has latency measured at 50 to 100 nanoseconds, depending on the CPU that's accessing it.
So yes. An off-the-shelf QDR-IV chip will achieve BETTER latency numbers than your off-the-shelf computer, by significant margins.
Bandwidth is basically a solved problem with regard to ASICs and FPGAs: just add another channel. Talk to 10 or 20 chips at a time to get 10x or 20x the bandwidth. Latency is the hard problem, which is why I'm focusing on it.
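A rough way to see why latency rather than bandwidth is the wall: in a serially dependent scratchpad walk, each read needs the previous result before the next address is known, so one chain can't exceed roughly 1/latency accesses per second. Using the latency figures quoted in this subthread:

```python
# Upper bound on memory-dependent accesses/second for ONE serial chain.
# Real hardware pipelines many independent chains, but each chain individually
# cannot beat 1/latency.
latencies_ns = {
    "QDR-IV (worst-case tRC)": 7.5,
    "Ryzen L3 (same CCX)":     45,
    "DDR4 (typical)":          90,
    "Ryzen cross-CCX":        130,
}
for mem, ns in latencies_ns.items():
    print(f"{mem:>25}: ~{1e9 / ns / 1e6:.0f} M dependent reads/sec")
```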
The part you linked costs $16,497/GiB at Digikey.
That's 1600x more expensive, not even counting board fabrication costs.
But something like Cryptonight (which hits 2MB EXTREMELY hard) would use QDR-IV or RLDRAM 3.
An ASIC / FPGA designer can use whatever part they want, for whatever part of the algorithm they want, for as many "cores" as they want. If DDR4 is the best answer to some problem in cost/effectiveness (ie: Ethash's DAG storage), then DDR4 should be used.
In the case of Ethash: there is a 16 MB inner loop, which would likely be best if it were run out of RLDRAM3. The DAG portion (which can be 4GBish, maybe 8GB to be safe) would be run from DDR4.
EDIT: I initially thought it was a 128 kB inner loop. Double-checking the specs, it's 16 MB, a size for which RLDRAM3 seems like a good bet.
Mix-and-match parts as you see fit and imagine the best combination. That's the advantage of FPGAs and ASICs: they literally can choose any part to better match a problem. Spend expensive memory only on the portions where it is necessary.
RLDRAM isn't stocked at Digikey, but it does list some prices. Far more reasonable at only 92x more expensive than commodity DDR4.
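For scale, working backwards from the multipliers quoted here (the implied ~$10/GiB for commodity DDR4 just falls out of the 1600x figure; it is not a quoted price):

```python
qdr4_per_gib = 16497                 # USD/GiB quoted above for the Digikey QDR-IV part
ddr4_per_gib = qdr4_per_gib / 1600   # ~ $10/GiB implied by the "1600x" figure
rldram3_per_gib = ddr4_per_gib * 92  # "only" 92x commodity DDR4

print(f"DDR4 (implied): ~${ddr4_per_gib:.0f}/GiB")
print(f"RLDRAM3:        ~${rldram3_per_gib:.0f}/GiB")
print(f"QDR-IV:         ~${qdr4_per_gib}/GiB")
```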
Here is a 16 MiB SRAM chip for $169 with lower random-access latency than you're likely to obtain going out to a separate package. As a bonus, it comes with its own CPU cores: AMD RYZEN 5 1500X 4-Core 3.5 GHz (3.7 GHz Turbo) Socket AM4 65W YD150XBBAEBOX
I'd love to be proven wrong. What combination of FPGA and specialty RAM beats that?
Ryzen may have 16 MB of L3, but it can't actually hold a 16 MB dataset. Ryzen's cache structure is 8 MB + 8 MB, so it's physically impossible for even 4 cores to get more than 8 MB into high-speed memory. I'm sorry, but 8 MB + 8 MB does NOT allow a 16 MB dataset to exist.
If somehow the 8 MB + 8 MB were split across L3 between two CCXs, you'd incur a major cross-CCX latency of 130 nanoseconds. In fact, I'm almost certain the CPU would rather pull from main memory than attempt to keep cross-CCX L3 caches coherent.
Some select quotes:
> > Going inter-CCX increases the latency by 3 times to about 130ns thus if the threads are not properly scheduled you can end up with far higher latency and far less bandwidth.
> > Within the same compute unit (sharing L3), the latency is ~45ns
So let's be frank: Ryzen's 8 MB + 8 MB L3 structure will NEVER hold a complete 16 MB dataset. It's just insanely inefficient due to the 130 ns cross-CCX penalty, and the memory controller would rather pull from main DDR4 at ~70 to 95 ns latency instead (yeah, the cross-CCX penalty is worse than an access to main memory!!).
But if we drop the problem size down to 8MB, then we get a more reasonable 45ns latency from Ryzen's L3 cache. Because now we're truly executing out of L3.
> I'd love to be proven wrong. What combination of FPGA and specialty RAM beats that?
Consider that RLDRAM3 has a latency of 7.5ns (tRC worst-case to the same bank) and has a power-usage of 2 Watts.
That tends to beat Ryzen's Cross-CCX latency of 130ns, DDR4 Latency of ~90ns, Ryzen L3 latency of 45ns, and power-usage of 65 Watts.
Consider that RLDRAM3 is available in many different sizes, all the way up to 64MB per chip. 8 RLDRAM3 chips working together will also increase bandwidth and capacity by 8x. So Bandwidth isn't an issue either.
Here's a price point btw if you want a readily available supply: https://www.arrow.com/en/products/mt44k16m36rb-125eatr/micro...
If they measure latency by randomly jumping around Ryzen's L3 memory and timing the delay, then RLDRAM3 would almost never stall under the same test: you effectively get an answer on every clock cycle (~1 ns), with a worst case of waiting tRC if you revisit the same memory bank (7.5 ns). That's why I focused on tRC last post.
Thinking about it now however, maybe the 20ns figure for RLDRAM3 is a better comparison against Ryzen's measured 40ns L3 latency. It really depends on how SiSoftware's benchmark is coded.
In any case, it's clear that RLDRAM3 has incredibly good latency metrics, rivaling those of Ryzen's L3 cache. An expected ~22.5 ns of realistic latency, compared to the measured 40 ns, means RLDRAM3 is still way faster than Ryzen.
Plus, you know, the ability to hold a 9 MB dataset or larger. An ASIC would be able to chain up as many RLDRAM3 chips as it has pins for, so you could probably hold up to 640 MB or so (assuming you have enough pins for 10x RLDRAM3 chips).
Alas: an FPGA or ASIC could ALSO use HBM2 memory (although looking into the RAM types available: Hybrid Memory Cube looks like a better fit for the Cryptocoin Proof-of-Work problems).
The main issue is that stacked-memory + interposers require advanced manufacturing techniques and are incredibly expensive right now. But if the future makes this technology cheaper, then I'd expect FPGA + Hybrid Memory Cube to eventually become the ideal platform for Proof-of-Work mining. AMD really did a good job at turning HBM2 into a commodity (well, until HBM2 supplies dried up anyway).
If you're talking about commodity GPUs, well... QDR-IV and RLDRAM3 have way lower latency than GDDR5, DDR4, and GDDR6. So I'd expect RLDRAM3 + FPGAs or ASICs to beat any GDDR5- or GDDR6-based GPU at memory-hard problems.
They were "cool" in that you can 100% predict the speed.. like guarantee algo X takes Yns to complete... which is very useful for some things, but the brute power was really not that impressive compared to a fast intel CPU with no or a very thin OS.
I don't think the profit margin can support that kind of efficiency loss; you are back to near-CPU levels.
I presume they've considered this, but this seems like a dangerous approach. How will they decide which change occurs, how far advance will it be decided, and how will they keep this decision secret? Think of the potential advantage to a party who uses foreknowledge of the exact change to have ASIC's ready at the official announcement. If done on a small scale, it's possible that no one would ever know. If done on a large scale, it would be obvious after the fact that a leak had occurred, but the repercussions would likely destroy the currency.
AFAIK, for today's ASICs a table larger than a few megabytes would be a strong deterrent. Go really big (e.g. 64 MB) and you'd have some margin.
This would also be completely deadly to GPUs, especially if combined with branches and complex algorithms.
Edit: even worse: perturb the table using entropy from the block chain, making it impossible to burn into ROM and requiring tons of on-board RAM or a memory controller.
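A minimal sketch of that "perturb the table per block" idea, assuming the table is regenerated from the previous block hash via repeated hashing (the 64 MB size and the function names are illustrative, not a concrete proposal):

```python
import hashlib

TABLE_BYTES = 64 * 1024 * 1024   # "go really big", e.g. 64 MB

def build_table(prev_block_hash: bytes) -> bytes:
    """Regenerate the lookup table from chain entropy, so it can't be burned into ROM."""
    chunks, counter = [], 0
    while counter * 32 < TABLE_BYTES:
        chunks.append(hashlib.sha256(prev_block_hash + counter.to_bytes(8, "little")).digest())
        counter += 1
    return b"".join(chunks)[:TABLE_BYTES]

# Every new block changes prev_block_hash, so any precomputed or hard-wired
# copy of the table is stale; the miner needs real writable RAM to hold it.
```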
You can tune their parameters to use GBs of memory if you wish.
A cryptocurrency which will hard-fork twice yearly, with all miners accepting that hard fork? If you regularly trust changes backed not by computation but by a central authority, why don't you use a traditional bank backed by a national currency?
If everyone in the world just decides that a new fork of bitcoin IS bitcoin, then it really does become bitcoin.
This exact thing happened with Ethereum. A bunch of people in the community just decided to hardfork, and then it happened. And now the hard forked ethereum is the real ethereum.
Computing power doesn't stop your "real" version of the old currency from becoming worthless, and the "new" "forked" version of the same currency from becoming worth more money.
People think the blockchain prevents bad things from happening, but it’s the exact opposite:
The blockchain allows EVERYTHING.
It permits all possible universes with all possible transactions. I can go back to the first block and pretend Satoshi never mined more than the first one, and I have all the bitcoins.
That reality is just as valid as any other. This is the central idea of blockchain tech: you assume all realities will exist. Then you code ways for people to do business despite inevitable lack of consensus.
Coincidentally this is exactly what young hippies miss about anarchism when they are trying to run consensus processes. They think anarchism is about obtaining consensus. But it’s really the opposite: anarchism is about what happens when you DONT have consensus. If you have consensus it doesn’t matter what political system you’re in: monarchy, communism, libertarianism, it’s all the same when there’s consensus.
This all became clear to me after reading The Dispossessed (rip Ursula K Leguin). An anarchist syndicate is differentiated (from e.g. a co-op) by how people react when someone DISSENTS.
If you dissent from the co-op you have essentially no rights. Unless you have a voting quorum the others can use the police to stop you operating against policy.
In a syndicate, a dissenting member has the same rights as the entire rest of the syndicate. They have equal claim to the facilities, the resources, even the staff. They stand peer to peer with the syndicate itself.
Bitcoin is the same. A fork has all the same capabilities as the long chain.
Please read up on Monero.
The number of units needed to break even is not that high -- here is one random example, a Google search away, that seems right to me from what I can recall -- http://www.deepchip.com/downloads/fpga-vs-asic.pdf. A reasonable volume, or am I missing something?
Consider FPGAs as the efficient R&D step (due to their re-usability) towards an ASIC. An ASIC will generally be faster and a considerably more cost-effective option per unit, plus more compact (so more units per space, although there are other factors to account for here). Usually you might also end up with something less power hungry, although I really doubt one will see such a benefit here.
Hence Monero is going against this step as a way to mitigate centralization and the introduction of more market entry barriers. I think this is a bit late. For instance, there is a great incentive for miners to join pools -- see a bitcoin analysis http://www.jbonneau.com/doc/SBBR16-FC-mining_pool_reward_inc...
I have not heard anything bad about the Monero people, but given the number of bad actors in the cryptocurrency space they should probably describe what steps they are taking to ensure that nothing like what I'm about to describe can occur.
> Moving forward, developers will seek to protect the network’s ASIC resistance by slightly modifying its PoW algorithm at every scheduled hard fork, which generally occurs twice annually. These changes will not be noticeable to ordinary XMR users, but they will alter the network’s hashing algorithm enough that Cryptonight ASIC miners would become obsolete following every fork
Suppose an insider is conspiring with one of the ASIC makers. The insider gets the modifications to the ASIC maker before they go public. The ASIC maker gets a head start on making new ASICs.
During the time between the modifications going public and other ASIC makers getting chips out, the conspiring maker does NOT sell their chips. They use them themselves, splitting the results of their mining with the insider.
Is it theoretically possible for a computer program to synthesize new cryptographic hash functions which are significantly different from one another (i.e. not just changing constants or the number of iterations)?
I've outlined my main points in a post on Reddit: https://www.reddit.com/r/Monero/comments/7x82yp/technical_cr...
In short: a memory-hard PoW algorithm (such as Cryptonight) can be gamed by using exotic low-latency memory like RLDRAM3, QDR-IV, or Hybrid Memory Cube.
It's not as many orders of magnitude better as it was with BTC, but it'd probably be on the order of 1x to 10x more power-efficient in the case of RLDRAM3. Although, due to the costs of developing an ASIC and buying expensive RAM like RLDRAM3, it would likely cost a bit more than commodity hardware.
You'd make it up eventually in energy savings however.
The recent Hybrid Memory Cube RAM looks incredibly fast and power-efficient however. I wouldn't be surprised if HMC + FPGAs or ASICs completely destroys any "ASIC-resistance". Fortunately for the cryptocoin community, HMC is incredibly exotic, expensive, and rare at the moment.
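To make concrete what kind of workload these exotic memories are aimed at, here is a toy version of a Cryptonight-style scratchpad walk (only the 2 MB scratchpad size comes from Cryptonight; everything else is heavily simplified for illustration):

```python
import hashlib

SCRATCHPAD_BYTES = 2 * 1024 * 1024   # Cryptonight uses a 2 MB scratchpad

def toy_memory_hard_hash(data: bytes, iterations: int = 100_000) -> bytes:
    # Fill the scratchpad deterministically from the input...
    seed = hashlib.sha256(data).digest()
    pad, c = bytearray(), 0
    while len(pad) < SCRATCHPAD_BYTES:
        pad += hashlib.sha256(seed + c.to_bytes(4, "little")).digest()
        c += 1
    pad = pad[:SCRATCHPAD_BYTES]

    # ...then do a long chain of reads/writes where each address depends on the
    # previous value. This serial dependence is what makes latency, rather than
    # bandwidth, the limiting resource.
    state = int.from_bytes(seed[:8], "little")
    for _ in range(iterations):
        addr = state % (SCRATCHPAD_BYTES - 8)
        word = int.from_bytes(pad[addr:addr + 8], "little")
        state = (state * 6364136223846793005 + word) & (2**64 - 1)
        pad[addr:addr + 8] = state.to_bytes(8, "little")

    return hashlib.sha256(bytes(pad[:4096]) + state.to_bytes(8, "little")).digest()

print(toy_memory_hard_hash(b"example block header").hex())
```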
I think buying QDR-IV and externally interfacing would be easier. But I haven't thought too much about this aspect of the problem.
In effect, QDR-IV already exists and is already mass-produced. So if you want SRAM, there it is, ready to order from numerous suppliers. I don't see much reason why you'd want to design it into an ASIC when it's a readily available off-the-shelf part.
Overall, I think the optimal design would use RLDRAM3 anyway. RLDRAM3 is cheaper and draws less power than QDR-IV. The tRC latency issue on RLDRAM3 doesn't seem to affect the expected workload of a memory-heavy cryptocoin, so QDR-IV (i.e. SRAM) wouldn't have much of an advantage over RLDRAM3 anyway.
I'm actually trying to bring up the issue, because I think it's the obvious elephant in the room. In effect, ASICs are ALREADY made to solve the memory problem: they're called high-speed, low-latency RAM (i.e. RLDRAM3, QDR-IV, or HMC).
But it seems like most people in the cryptocoin community are only familiar with DDR3, DDR4, or GDDR5 RAM. I think the "exotic" low-latency stuff is still not really well known yet.
The harder the memory problem becomes, the better it is for QDR-IV SRAM compared to CPUs or GPUs. An ASIC or FPGA designer can use any RAM that exists in the world, but CPU / GPU users are limited to DDR3, DDR4, the L1/L2/L3 caches, GDDR5, and maybe HBM2 (AMD Vega / NVidia V100).
Any memory-hard problem is basically solved by these exotic low-latency / high-bandwidth RAM chips plus an FPGA or ASIC interfacing with them.
So this simple solution actually won't work on the harder algorithms. It might work for Ethash or something scrypt based, but things have gotten harder since then.
Salon.com already tried it, and if it works out for them, it could become a pretty great alternative to ads. It would also change the dynamics of online content, as page-reloads would mean a dip in hashing power.
They even have a pretty cool use case for it: requiring x number of hashes as a captcha replacement, with a nice UI and such (you can find it by clicking on login on their page with your ad blocker disabled). Unfortunately, it does not work very well for many of my use cases. When I'm on my phone it is fairly slow, and on an old phone (an iPhone 5 or 5s, for instance) it is so slow that many users would just close the page because they believe it is broken. It probably is a decent defense against brute-force attacks, though, because it still requires quite a bit of work to try a login.
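The captcha idea boils down to a small client-side proof-of-work that the server can check with a single hash. A minimal sketch, with the difficulty parameter and the use of SHA-256 being my own illustrative choices rather than Coinhive's actual scheme:

```python
import hashlib, itertools

DIFFICULTY_BITS = 20   # ~1M hashes expected per attempt; tune down for slow phones

def solve(challenge: bytes) -> int:
    """Client: grind nonces until the hash has DIFFICULTY_BITS leading zero bits."""
    target = 1 << (256 - DIFFICULTY_BITS)
    for nonce in itertools.count():
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int) -> bool:
    """Server: a single hash confirms the login attempt paid its proof-of-work."""
    h = hashlib.sha256(challenge + nonce.to_bytes(8, "little")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - DIFFICULTY_BITS))

challenge = b"per-session random challenge"
nonce = solve(challenge)          # slow on the client, especially an old phone
print(verify(challenge, nonce))   # True, and nearly free for the server
```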
(Coinhive claim to disable accounts associated with abuse, but even if they do that promptly, the lack of any meaningful holding period means that abuse can still be quite profitable.)