I wrote more about the protocol and rationale here if anyone is interested: http://roberts.pm/exploit_markets It's still very much in the brainstorming phase, so any feedback at all is appreciated.
You are far better off just doing multisig + escrow here.
You could always have a panel of humans that approve or deny exploits on the network, but that kind of thing defeats the point of using the service. What I'm trying to accomplish is to build something that works without the need for human intervention, because with a human-run service you open yourself up to politics and bureaucracy, and suddenly it's the same as a regular bug bounty program.
You've given me something important to think about.
Absolute privacy is absolutely irrational.
I ask because I can imagine some (or many) PoCs would require hardware and software of a different platform than usual, e.g. qemu+x86_64 vs ARM vs MIPS. So I see a somewhat complex infrastructure problem that this would need to solve.
Perhaps the VM for running the exploit and exploited code in question should be wrapped around the binary that the vendor distributes, with an intention that it be e.g. ELF on x64 Linux 4.x, statically compiled?
(Thanks for the feedback, by the way. It's hard to stand out for anything these days.)
All the binaries you might want to offer bounties on are going to be hundreds of GBs in size and growing very rapidly. (Think of just the Debian repositories - they easily fill 8 DVDs, and that's ignoring the daily/weekly/monthly snapshots used by many software projects, including the web browsers, which are the biggest target of bug hunting.) Is this something that needs to be decentralized?
I wonder if you might lose the attention of potential contributors/supporters by the inclusion of an underlying threat of reprisal implied by your wording.
You are right about a lot of this, and both industry and "researchers" have much to gain from a model that guarantees payment for work within agreed timeframes, etc... While consumers would certainly benefit from a better patch release process, I'm not so sure you can force a company to ship a patch by a certain date to avoid disclosure; that leads to bad patches, more bugs, and resources devoted to the wrong things. Obviously, this is great for the researcher who understands the bigger picture. One might use the word blackmail. If not blackmail, then it sounds like a great way to abuse bounties in general, and to make it easier to take advantage of and profit off of the bounty vendor. It could also make "responsible disclosure"/bounties (not conflating them, but they are the equivalent of "silver or lead") a more profitable endeavor than less legal activities doing the same thing.
Imagine taking advantage of a systemic, nuanced design issue by reporting a bug that one can exploit via one facet of said systemic issue; iterate, get paid, disclose bug1 (or not), report bug2... Someone will realize what is going on at some point, but the payment/disclosure windows will prevent them from properly fixing the issue. Each patch released to meet the deadline is a potential new attack surface.
The inclusion of a paragraph regarding a zero-knowledge exploit market is definitely relevant, and germane, but in the context of what is ostensibly a proposed method for implementing smart contracts between historically antagonistic parties, it comes off as a threat.
Neat project. I harbor no illusions about the ethics of certain prominent entities in this space, and if your project helps make it so that ethics are not necessary for a decent system to exist, then you have done something good.
So re: forcing the vendor to patch software.
When the contract is formed, the vendor can specify a section for collateral that they get back progressively through good behavior (i.e. fast patches). If the vendor fails to continually patch software within an acceptable time-frame, the smart contract can automatically use some of the vendor's collateral to hire security auditors (or donate the money to a pre-defined DAO for researchers).
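To make the mechanics concrete, here is a toy Python model of that collateral idea. All names, numbers, and the slashing rule are hypothetical, just a sketch of how deadlines could translate into automatic penalties, not an actual smart contract:

```python
from dataclasses import dataclass

@dataclass
class BountyContract:
    """Toy model: vendor posts collateral; each disclosed exploit starts a
    patch deadline; missing a deadline slashes part of the remaining
    collateral to a pre-agreed beneficiary (auditors or a researcher DAO)."""
    collateral: float
    patch_window: int            # days allowed per patch
    slash_fraction: float = 0.25
    slashed_to_beneficiary: float = 0.0

    def report_patch(self, disclosed_at: int, patched_at: int) -> None:
        # Slash a fraction of the *remaining* collateral on a missed deadline,
        # so repeated failures cost progressively less in absolute terms.
        if patched_at - disclosed_at > self.patch_window:
            penalty = self.collateral * self.slash_fraction
            self.collateral -= penalty
            self.slashed_to_beneficiary += penalty

    def close(self) -> float:
        """Good behavior: remaining collateral returns to the vendor."""
        refund, self.collateral = self.collateral, 0.0
        return refund

c = BountyContract(collateral=100.0, patch_window=30)
c.report_patch(disclosed_at=0, patched_at=20)   # on time, nothing slashed
c.report_patch(disclosed_at=50, patched_at=90)  # 40 days > 30, slashed
```

In a real deployment the "good behavior" signal would be attested on-chain (e.g. by the researcher confirming the patch), which is its own oracle problem; the sketch only shows the incentive structure.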
This signals to customers that the vendor is willing to stand behind a certain level of quality assurance for the patch release cycle. Whether or not a vendor will abide by this, I have no idea. But it may be that in the future good researchers will only want to work with vendors whose responsibilities within the industry are represented by a smart contract that protects them from bad experiences. So vendors may end up being brought on board with this too.
I'm not sure what you mean by the attack issue, by the way. You can specify within the contract a reasonable time-frame for exploits to be disclosed privately before they are revealed publicly, giving the vendor time to patch any problems. Obviously, any researchers who want to work on that software will have to agree to the terms of the smart contract before they start, so everybody is in agreement on disclosure time-frames, payment rates, etc.
Thanks for the awesome feedback :)
By the way, you just gave me another idea. Originally the section on private exploit markets was more focused on the deep web, but it just occurred to me that there is actually a legitimate reason to use zero-knowledge proofs in this protocol for bug bounties: the existing model makes a researcher wait for an exploit to be fixed before they are paid, and they might not want to wait for that.
So to prove to the network that an exploit works without disclosing it (and thus frustrating the vendor), you use a zero-knowledge proof. The network can then validate that the exploit works without getting access to it, so payment can be released earlier, even before the vendor has finished the patch.
* The miner generates a random nonce
* It initiates a TLS 1.2 connection to the victim server, and uses the nonce as `client_random`
* The server generates an ephemeral public key (eg. using ephemeral DH or ephemeral ECDH), and responds with a 32-byte `server_random`
* The server sends its certificate chain as well as its public key, and signs the DH key exchange parameters along with `client_random` and `server_random`. This is the proof of work
* The client computes the SHA256 hash of the DH params, the signature and the nonce, and does the usual difficulty check; if it passes, it is used to make a block
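The final difficulty check in the last step can be sketched as follows (the byte strings are stand-ins; in practice they would be taken from the actual handshake messages):

```python
import hashlib

def check_pow(dh_params: bytes, signature: bytes, nonce: bytes,
              difficulty_bits: int) -> bool:
    """Hash the DH params, the server's signature and the miner's nonce,
    then check for at least `difficulty_bits` leading zero bits."""
    digest = hashlib.sha256(dh_params + signature + nonce).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Note that because the server's signature covers `client_random` (the nonce), the miner cannot grind this hash locally: every nonce attempt requires a fresh handshake, and therefore a fresh signature from the victim server, which is what ties the work to load on that server.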
The principle would be the same as in DDoSCoin: miners relay connections of certain users until one results in a signature of a given difficulty. After obtaining such a signature, the miner would be able to withdraw a certain amount of coins from that user's account. Probability dictates that miners would end up withdrawing roughly uniformly across all users.
The big challenge for any kind of decentralized "proof of bandwidth" scheme is verifiability. How do you know two actors are not colluding to claim they are producing bandwidth? And how do you resolve conflicts when a client thinks it transferred 1 GB but the host thinks it transferred 10 GB?
Our solution was to use a verifiable shuffle: all relays and clients submit their public keys into a matrix, which the shuffle then transforms into routing paths. The result is that the client gets a "path" (e.g. a Tor circuit, but it could be any routing path) that is privately addressable but publicly verifiable. Each node on the path knows only the IP address of its neighbor nodes, but all nodes on the path can sign bandwidth calculations with a group signature.
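The topology that comes out of the shuffle can be illustrated with a toy sketch. The cryptographically verifiable shuffle itself is replaced here by a plain seeded permutation, so this only shows the resulting neighbor-only knowledge, not the verifiability property:

```python
import random

def build_path(pubkeys, rng):
    """Stand-in for the verifiable shuffle: permute the submitted keys
    into a routing path. (A real deployment uses a verifiable shuffle so
    anyone can check the permutation was produced honestly.)"""
    path = list(pubkeys)
    rng.shuffle(path)
    return path

def neighbor_view(path, i):
    """The node at position i learns only its adjacent nodes' addresses."""
    prev = path[i - 1] if i > 0 else None
    nxt = path[i + 1] if i < len(path) - 1 else None
    return prev, nxt

path = build_path(["A", "B", "C", "D", "E"], random.Random(7))
views = [neighbor_view(path, i) for i in range(len(path))]
```

Each entry in `views` is the complete routing knowledge of one node: at most two addresses, while the full path ordering stays hidden from any single participant.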
That said it was two years ago for my senior thesis... hardly a work of art :P I'm still pursuing the ideas in one form or another.