I had the same thought as hackuser when reading the article, and then it was quickly followed by your point. I think an important first step would be to get certain things classified as arms. Once that's done, the usual mechanisms may be able to handle them appropriately, such as prohibiting the purchase or sale of certain types of arms within or across borders, etc.
This would of course open up a whole new can of worms in the US, as we are constitutionally guaranteed the right to bear arms, but that just makes it hard, not impossible (and could possibly even serve to provide some much needed nuance to that discussion in the US).
That said, I haven't put a lot of thought into this, so a well reasoned criticism could completely change my stance.
I was thinking less of the knowledge being considered an armament, and more that an actual program that takes advantage of it would be one. I don't consider the scientific knowledge required to create a gun an armament, nor even specific schematics, but governments may view it differently (indeed, they weren't happy about the 3D printable gun).
Also, I don't think this concept is limited specifically to exploiting bugs. I think a program meant to access and catalog a person's social media accounts while hiding its accesses as much as possible, but run from a third party's location, might be considered an armament. Same with something designed to DoS a service. If the purpose is to cause harm, it might be an armament. I am aware there's probably a fine line here, and one that would inevitably be abused. I'm not sure how to deal with that, or whether the negatives there outweigh the possible positives overall.
And fundamentally it's the knowledge that matters. Programmers are "expensive" but not that expensive. Give any decent off-the-shelf code monkey the specifics of a vulnerability and he can give you exploit code.
Which means restricting the exploit code is quite useless. But restricting the knowledge itself doesn't work because the same knowledge is necessary to mitigate the vulnerability and to test that the mitigation is effective.
These days, exploiting vulnerabilities in most interesting code actually seems to be quite fiddly thanks to all the mitigation techniques, and requires a bunch of specialist knowledge and tools that aren't exactly trivial to come by. The knowledge is already restricted, just for commercial rather than legal reasons.
Which is still knowledge. If you have the information you can make the tools.
I mean obviously in reality the line between "information" and "software" is non-existent, because software is just a type of information. But if you insist on trying to draw a line anyway, it still fails: it's still possible to convey everything of significance using natural language, and the skillset required to convert plain-language instructions into software is not rare enough to be prohibitive.
> thanks to all the mitigation techniques, and requires a bunch of specialist knowledge and tools that aren't exactly trivial to come by
Eh, specialist knowledge, yes. Restricted, no. Getting documentation on how chips and software work has always been somewhat restricted: just be a Linux person and try to get documentation from Broadcom on how their wifi/lan chips work, for example.
> we are constitutionally guaranteed the right to bear arms
It doesn't extend to all arms; e.g., you don't have a right to own anti-aircraft guns, weaponized anthrax, or even fully automatic rifles. Which side of the line exploits fall on is of course a question, but if I'm right that their only civilian use is illegal harm to others (e.g., you don't use them to protect your home or hunt deer), then it's simpler.
Yes, and that's what I meant about making it hard, not impossible. That said, there are uses of exploits which can be said to be for the purpose of protecting property. I might conceivably want to use an Android or iOS exploit to liberate some of my data from my phone if some apps are less forthcoming with that data than I would like.
> I might conceivably want to use an Android or iOS exploit to liberate some of my data from my phone if some apps are less forthcoming with that data than I would like.
A great point that I should have thought of. I wish I could edit my original post and add that consideration.
I can draw a conceptual line: Ban using exploits on other people's equipment. But practically, I don't see how to stop that without criminalizing distribution, in which case I can't get my data from my phone (or install a 3rd party OS) without the vendor's permission.
> I can draw a conceptual line: Ban using exploits on other people's equipment. But practically, I don't see how to stop that without criminalizing distribution, in which case I can't get my data from my phone (or install a 3rd party OS) without the vendor's permission.
I don't understand what the problem is supposed to be. You don't need laws against knives because there are already laws against assault and murder and there is no harm in having a knife you use to cut carrots. Then you prosecute people for the bad things they actually do.
The justifiable laws against specific weapons are for the exceedingly dangerous ones like plutonium and smallpox. That isn't this.
> You don't need laws against knives because there are already laws against assault and murder
A good point. In this case it's so hard to catch perpetrators that, to stop the crimes, it could be necessary to ban the weapons or their distribution (if that is even a practical option).
Are there other similar situations, where perpetrators are so hard to catch that you have to ban the means? Counterfeiting is all I can think of, and they don't ban color printers; they just put tracking tech in them. Also, color printers are dual-use: They have many legitimate uses, exploits have very few.
> The justifiable laws against specific weapons are for the exceedingly dangerous ones like plutonium and smallpox. That isn't this.
Weapons that help foreign governments oppress large parts of their population might qualify, though clearly not all exploits fit that description.
> Are there other similar situations, where perpetrators are so hard to catch that you have to ban the means?
The nearest thing is clearly DMCA 1201. The problem of course being that DMCA 1201 is an epic failure. DRM circumvention tools are widely available to pirates, meanwhile it regularly subjects honest people to a choice between breaking the law and having it interfere with their legitimate activities.
> Also, color printers are dual-use: They have many legitimate uses, exploits have very few.
Exploits seem to have more legitimate uses than illegitimate ones. The only illegitimate use that comes to mind is wrongfully breaking into systems, which is the mirror image of the legitimate use of rightfully breaking into systems, in case you somehow get locked out (or some malicious third party locks you out).
Then on top of that, sysadmins require exploits to verify that a patch actually prevents the exploit. And proof of concept exploits are sometimes the only way to convince a vendor to fix a vulnerability. And academics need to study the newest actual exploits in order to keep up with what currently exists in the wild.
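To be concrete, verifying a patch often just means re-running the proof of concept against the patched service and checking that it no longer misbehaves. A rough Python sketch of that idea (the target address and payload are placeholders; a real PoC is specific to the vulnerability):

    # Hypothetical patch-verification harness. The target address and payload
    # are placeholders; a real proof of concept is vulnerability-specific.
    import socket

    TARGET = ("192.0.2.10", 8080)  # documentation address, stands in for the patched host

    def run_poc(host, port, payload=b"A" * 4096):
        """Return True if the service still misbehaves when fed the PoC payload."""
        try:
            with socket.create_connection((host, port), timeout=5) as s:
                s.sendall(payload)
                s.recv(1024)
        except OSError:
            # A crash, reset, or hang suggests the vulnerability is still present.
            return True
        return False

    if __name__ == "__main__":
        print("still vulnerable" if run_poc(*TARGET) else "patch appears effective")

A real harness would also have to distinguish "service is down" from "service was exploited", but the point stands: you need the exploit itself to run the test.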
> Weapons that help foreign governments oppress large parts of their population might qualify, though clearly not all exploits fit that description.
Smallpox is inherently dangerous. Some exploits could be specifically dangerous in the sense that some very sensitive systems could be vulnerable to them, but only in the same sense that a fire axe could be used to break down some doors leading to very sensitive areas. The problem then is not that the public has access to axes, it's that there aren't enough independent security layers protecting sensitive systems.
And you can't fix that problem by banning tools because a high value target with bad security will fall to a state-level attacker regardless. The only answer is to improve the security of sensitive targets.
I think the equivalent (or much worse, actually) for exploits is something that is self replicating and disruptive. For example, a bug in the BGP routing protocol (or a certain percentage of the common implementations) that propagates bogus routes and disrupts some or all traffic for affected systems and spreads. Something that disrupted a large enough chunk of global traffic would not only be horrendous in its own right, but would also make dissemination of any fix quite problematic.
Then again, I assume it's probably good practice to somewhat lock down how BGP functions in your routers (if that makes sense; I'm not that familiar with it), but a certain incident from last year[1] leads me to believe that's either not possible, hard to do, or that people just don't do it.
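From what I can tell, the usual way to "lock down" BGP is ingress prefix filtering: only accept from each peer the routes it's actually authorized to announce. A rough Python sketch of the idea (the peers and prefixes are made up; real routers express this as prefix-list/route-map configuration):

    # Sketch of BGP ingress prefix filtering. Peers and prefixes are made up;
    # a real router would express this as prefix-list / route-map config.
    from ipaddress import ip_network

    # Prefixes each peer is allowed to announce to us (hypothetical).
    ALLOWED = {
        "peer-as64500": [ip_network("198.51.100.0/24")],
        "peer-as64501": [ip_network("203.0.113.0/24")],
    }

    def accept_route(peer, prefix):
        """Accept an announcement only if it falls inside the peer's allow-list."""
        announced = ip_network(prefix)
        return any(announced.subnet_of(ok) for ok in ALLOWED.get(peer, []))

    assert accept_route("peer-as64500", "198.51.100.0/25")  # legitimate more-specific
    assert not accept_route("peer-as64500", "8.8.8.0/24")   # bogus announcement, dropped

The catch, which that incident suggests, is that every network has to maintain filters like this for its own peers, and plenty apparently don't.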
If you have the tooling to keep smallpox around without killing yourself, you can also keep Ebola around, if you go to the effort of finding it. Really dangerous stuff, and it's going to be costly.
The problem here is I can make and keep 'digital smallpox' on my home PC, and it is surprisingly easy to find exploits for many pieces of equipment. Are you planning on watching every computer? Every person in the world?
Take a lesson from the failed war on drugs: where there is significant profit motivation, people will do what is necessary to make massive amounts of money. And there are massive amounts of money in blackhat work.
Which is exactly what I mean by "there aren't enough independent security layers protecting sensitive systems." We've known that BGP has terrible security for many years.
Fixing it is hard because it requires a lot of independent parties to agree on what to do and update their routers. In theory this is the sort of thing a government could help with, by funding research into solutions and/or providing cash incentives to early adopters.
But the market also solves these things eventually, since successful attacks are bad for business. In that case it just requires the attacks to actually happen first.