DDoSCoin: Cryptocurrency with a Malicious Proof-Of-Work [pdf] (usenix.org)
81 points by gwern 298 days ago | 21 comments



I'm currently working on something similar. It's a blockchain for verifiable exploits against open source software. The idea is that you can represent bug bounties as a series of smart contracts and have exploits run deterministically against a test case within a virtual machine. That way researchers are always guaranteed payment for working exploits and vendors can be forced to release patches on time.

I wrote more about the protocol and rationale here if anyone is interested: http://roberts.pm/exploit_markets It's still very much in the brainstorming phase so any feedback at all is appreciated.


Every full node would need to run the full VM, which has nightmare security implications, not to mention scalability issues.

You are far better off just doing multisig + escrow here.


That's true. The problem would be if you get attackers who have exploits to break out of virtual machines. What I can say on that front is that you might be able to design the problem out by creating a DSL for specific kinds of exploits and then using that for the first version.

You could always have a panel of humans that approve or deny exploits on the network, but that kind of thing defeats the point of using the service. What I'm trying to accomplish is to build something without the need for human intervention, because with a human-run service you open yourself up to politics and bureaucracy, and suddenly it's the same as a regular bug bounty program.

You've given me something important to think about.


Validation of work could be done in DMZ'd infrastructure. Google's servers are hardened against guest escalation, so all you'd really need is a container each for guest wallet, target service, and attacker VM.

Absolute privacy is absolutely irrational.


Very interesting. I need to read up on smart contracts to better understand your post but the first question that comes to mind is: if you use virtual machines to run the code and test the exploits, against which platform(s) would those VMs be built? Would the idea be to standardize on one virtual infrastructure (and therefore class of exploits) or would you allow just about anything?

I ask because I can imagine some (or many) PoCs would require hardware and software of a different platform than usual, e.g. qemu+x86_64 vs ARM vs MIPS. So I see a somewhat complex infrastructure problem that this would need to solve.

Perhaps the VM for running the exploit and exploited code in question should be wrapped around the binary that the vendor distributes, with an intention that it be e.g. ELF on x64 Linux 4.x, statically compiled?


True. What I can imagine is that when the smart contract is being set up, the vendor chooses from a list of templates for the platforms / environments used to set up the virtual machines. If they're running their own software on their own infrastructure, they can always provide an image that reflects what they intend to run in production.

(Thanks for the feedback, by the way. It's hard to stand out for anything these days.)


So the intent here is not to run on Ethereum or an existing smart-contract oriented blockchain, is it? Because compiling random real-world programs like Firefox to run on that VM isn't ever going to work, and I don't think there are any zero-knowledge or witness encryption approaches which will make it possible.

All the binaries you might want to offer bounties on are going to be hundreds of GBs in size and growing rapidly. (Think of the Debian repositories alone: they easily fill 8 DVDs, and that's ignoring the daily/weekly/monthly snapshots used by many software projects, including the web browsers, which are the biggest target of bug hunting.) Is this something that needs to be decentralized?


That's a fascinating concept, and something akin to what you are proposing is certainly needed in that particular niche. I read your description page and agree with most of your points regarding the rationale behind the project.

I wonder if you might lose the attention of potential contributors/supporters by the inclusion of an underlying threat of reprisal implied by your wording.

You are right about so much, and both industry and "researchers" have much to gain from a model that guarantees payment for work within agreed timeframes, etc. While consumers would certainly benefit from a better patch release process, I'm not so sure you can force a company to ship a patch by a certain date to avoid disclosure; that leads to bad patches, more bugs, and resources devoted to the wrong things. Obviously, this is great for the researcher who understands the bigger picture. One might use the word blackmail. If not blackmail, then it sounds like a great way to abuse bounties in general, and make it easier to take advantage of, and profit off, the bounty vendor. It could also make "responsible disclosure"/bounties (not conflating the two, but they are the equivalent of "silver or lead") a more profitable endeavor than less legal activities doing the same thing.

Imagine a researcher taking advantage of a systemic, nuanced design issue by reporting a bug that exploits one facet of said issue, then iterating: get paid, disclose bug1 (or not), report bug2... Someone will realize what is going on at some point, but the payment/disclosure windows will prevent them from properly fixing the issue. Each patch released to meet the deadline is a potential new attack surface.

The inclusion of a paragraph regarding a zero-knowledge exploit market is definitely relevant and germane, but in the context of what is ostensibly a proposed method for implementing smart contracts between historically antagonistic parties, it comes off as a threat.

Neat project. I harbor no illusions about the ethics of certain prominent entities in this space, and if your project helps make it so that ethics are not necessary for a decent system to exist, then you have done something good.


Something I thought about: because you're representing vulnerability disclosures on the blockchain, you can cryptographically prove when bugs were first reported and when they were fixed.

So re: forcing the vendor to patch software.

When the contract is formed, the vendor can specify a section for collateral that they get back progressively through good behavior (i.e. fast patches). If the vendor fails to continually patch software within an acceptable time-frame, the smart contract can be made to automatically use some of the vendor's collateral to hire security auditors (or donate the money to a pre-defined DAO for researchers). A sketch of this follows below.
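To make the collateral idea concrete, here's a minimal sketch in Python of the payout logic such a contract might encode. The deadline, slash fraction, and names are all hypothetical, invented for illustration; they aren't tied to any existing smart-contract platform.

```python
from dataclasses import dataclass

# Hypothetical parameters -- placeholders, not part of any real protocol.
PATCH_DEADLINE_DAYS = 30   # agreed window for shipping a fix
SLASH_FRACTION = 0.25      # share of remaining collateral forfeited per missed deadline

@dataclass
class BountyContract:
    vendor_collateral: float      # coins the vendor locks up at contract creation
    researcher_fund: float = 0.0  # slashed collateral earmarked for auditors / a DAO

    def settle_disclosure(self, days_to_patch: int) -> None:
        """Release or slash collateral based on patch turnaround."""
        if days_to_patch <= PATCH_DEADLINE_DAYS:
            return  # good behavior: the collateral stays with the vendor
        penalty = self.vendor_collateral * SLASH_FRACTION
        self.vendor_collateral -= penalty
        self.researcher_fund += penalty  # redirected to auditors or a researcher DAO

contract = BountyContract(vendor_collateral=100.0)
contract.settle_disclosure(days_to_patch=45)  # missed the 30-day deadline
print(contract.vendor_collateral, contract.researcher_fund)  # 75.0 25.0
```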

This signals to customers that the vendor is willing to stand behind a certain level of quality assurance for the patch release cycle. Whether or not vendors will abide by this, I have no idea. But it may be that in the future good researchers will only want to work with vendors whose responsibilities are represented by a smart contract that will protect them from bad experiences. So vendors may end up being brought on board with this too.

I'm not sure what you mean by the attack issue, by the way. You can specify within the contract a reasonable window between when an exploit is disclosed to the vendor and when it is revealed publicly, to give the vendor time to patch any problems. Obviously any researchers who want to work on that software will have to agree to the terms of the smart contract before they start. So everybody is in agreement on disclosure time-frames, payment rates, etc.

Thanks for the awesome feedback :)

By the way, you just gave me another idea. Originally the section for private exploit markets was more focused on the deep web. But it just occurred to me that there is actually a legitimate reason to use zero-knowledge proofs in this protocol for the bug bounties: the existing model depends on waiting for an exploit to be fixed before a researcher is paid, and they might not want to wait for that.

So to prove to the network that an exploit works without disclosing it (and thus frustrating the vendor), you use a zero-knowledge proof. The network can then validate that the exploit works without getting access to it, so that payment can be released earlier, even before the vendor has finished the patch.
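A real zero-knowledge proof that an arbitrary exploit works is far beyond a short sketch, but the simpler building block of a hash commitment shows half of the idea: the researcher publishes a digest on-chain immediately, which fixes the disclosure time without revealing the exploit, and reveals the preimage later. A toy Python sketch, with the caveat that a commitment alone proves nothing about whether the exploit actually works (that's what the zero-knowledge machinery would have to add):

```python
import hashlib
import os

def commit(exploit: bytes) -> tuple[bytes, bytes]:
    """Commit to an exploit without revealing it: only the digest goes on-chain."""
    salt = os.urandom(32)  # random blinding so the digest leaks nothing
    digest = hashlib.sha256(salt + exploit).digest()
    return digest, salt    # digest is public, salt stays with the researcher

def reveal_matches(digest: bytes, salt: bytes, exploit: bytes) -> bool:
    """Later, anyone can check the revealed exploit against the old commitment."""
    return hashlib.sha256(salt + exploit).digest() == digest

exploit = b"proof-of-concept payload"
digest, salt = commit(exploit)                # publish digest at disclosure time
assert reveal_matches(digest, salt, exploit)  # reveal once the patch has shipped
```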


Will you be using Docker?


Something like it. Whether or not Docker will work best for this I have no idea.


Tldr on page 4:

* The miner generates a random nonce

* It initiates a TLS 1.2 connection to the victim server, and uses the nonce as `client_random`

* The server generates an ephemeral public key (e.g. using ephemeral DH or ephemeral ECDH), and responds with a 32-byte `server_random`

* The server sends its certificate chain as well as its public key, and signs the DH key exchange parameters along with `client_random` and `server_random`. This signature is the basis of the proof of work

* The client computes the SHA256 hash of the DH params, the signature, and the nonce, and does the usual difficulty check; if it passes, the result is used to make a block (see the sketch below)
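For concreteness, here's a minimal sketch of that final validation step in Python. The difficulty target is a made-up constant and the inputs are dummy byte strings; the paper's exact serialization of the handshake fields and its difficulty encoding will differ.

```python
import hashlib

TARGET = 2**240  # hypothetical difficulty target; a real chain adjusts this dynamically

def ddoscoin_pow_valid(dh_params: bytes, server_signature: bytes, nonce: bytes) -> bool:
    """Hash the signed handshake material with the miner's nonce and
    check it against the difficulty target."""
    digest = hashlib.sha256(dh_params + server_signature + nonce).digest()
    return int.from_bytes(digest, "big") < TARGET

# Toy call with dummy bytes; real inputs come from the recorded TLS handshake.
print(ddoscoin_pow_valid(b"dh-params", b"server-signature", b"nonce"))
```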


I'm not sure this currency will take off, but how about the following: instead of the network paying for DDoSing TLS connections, let it pay for relaying TLS connections.

The principle would be the same as in DDoSCoin, but miners relay connections for certain users until one results in a signature of the given difficulty. After obtaining such a signature, the miner would be able to withdraw a certain amount of coins from the user's account. Probability dictates that miners would withdraw roughly evenly across all users.
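As a rough sketch of how that payout lottery could work (the difficulty target and the one-signature-per-connection model here are invented for illustration): each relayed connection yields a fresh server signature, a signature "wins" when its hash falls below the target, and so expected withdrawals end up proportional to the traffic relayed per user.

```python
import hashlib
import os

TARGET = 2**240  # hypothetical target: P(win) = 2**-16 per relayed connection

def signature_wins(signature: bytes) -> bool:
    """A relayed connection's signature pays out when its hash beats the target."""
    digest = hashlib.sha256(signature).digest()
    return int.from_bytes(digest, "big") < TARGET

# Simulate 100,000 relayed connections with random stand-in signatures.
wins = sum(signature_wins(os.urandom(64)) for _ in range(100_000))
print(f"winning signatures: {wins} (expected ~{100_000 * TARGET / 2**256:.1f})")
```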


A bit like Torcoin, basically.


Hey, I wrote that :)

The big challenge for any kind of decentralized "proof of bandwidth" scheme is verifiability. How do you know two actors are not colluding to claim they are producing bandwidth? And how do you resolve conflicts when a client thinks he transferred 1 GB but the host thinks he transferred 10 GB?

Our solution was to use a verifiable shuffle, where all relays and clients submit their public keys into a matrix, which the shuffle then transforms into routing paths. The result is that the client gets a "path" (e.g. a Tor circuit, but it could be any routing path) that is privately addressable but publicly verifiable. So each node on the path only knows the IP address of its neighbor node, but all nodes on the path can sign bandwidth calculations with a group signature.

That said, it was two years ago for my senior thesis... hardly a work of art :P I'm still pursuing the ideas in one form or another.


Link to the Torcoin paper, for other people like me who hadn't heard about it: https://petsymposium.org/2014/papers/Ghosh.pdf


Off topic, but I'm interested: what software is used to create these papers? Almost all research papers have the same very aesthetically pleasing layout that is hard to recreate.



Can anyone think of defences not listed in the paper?


I can't think of more defences, but they briefly mention the possibility for websites to mine DDoSCoins against themselves, and I think it's an interesting perspective. I don't know the algorithms involved in the key exchange well enough, but if they support bulk optimizations like RSA does (it's faster to do thousands of signatures in a single batched operation than one at a time), then the server has a noticeable advantage over clients - to the point where it could drive the difficulty insanely high and make it unprofitable to mine DDoSCoins.


Making it unprofitable would be the best defense against for-profit DDoS attacks IMO. If using a botnet to mine regular coins yields more than DDoSCoins, the incentive is gone.



