The meat of the paper is in Sections 2 (where the unintended power dynamics of some modern academic crypto research projects are discussed) and 4 (where Rogaway provides suggestions for important practical projects academic cryptographers should tackle). Sections 1 and 3 are written for an audience of academics who might be less familiar with the political implications of crypto than the typical HN reader.
Essentially, Rogaway is trying to convince mathematicians to embrace the practical and political impact of their work.
Colin Percival gets a nice mention towards the end of the paper. I'd be over the moon if I were him. Congrats, Colin!
Dr. Rogaway is morally responsible both for the use of his encryption for cyber ransom and for the use of his encryption to allow private democratic and economic discourse.
If you think the negative effects of publicly available encryption outweigh the positive effects, it would be immoral to be working on making better encryption publicly available.
Obviously, there is a complicated moral calculus behind which effects are foreseeable/likely relative to your other available actions and their foreseeable/likely effects.
"I am not optimistic. The figure of the heroic cryptographer sweeping in to save the world from totalitarian surveillance is ludicrous. And in a world where intelligence agencies stockpile and exploit countless vulnerabilities, obtain CA secret keys, subvert software-update mechanisms, infiltrate private companies with moles, redirect online discussions in favored directions, and exert enormous influence on standards bodies, cryptography alone will be an ineffectual response. At best, cryptography might be a tool for creating possibilities within contours circumscribed by other forces."
Exactly. Gotta deal with those "other forces," which are social and legal. Otherwise, the opponents' vast resources work around the few who try to resist with crypto. That doesn't stop many technical people from thinking crypto will magically save the day.
Cryptography rearranges power: it configures who can do what, from what.
I'd argue that the reverse is really the issue that needs more attention. Online systems that do not provide strong cryptography rearrange power, as compared to their offline equivalents.
It was not feasible to scan all phone calls for keywords in 1970, since that required effort from humans to do the patching and listening. The power dynamic changed when our industry brought those calls into a centralized, trivially-storable clear-text format. Encrypting the conversations is simply a partial return to the status quo of a few decades ago.
What a powerful tool incrementalism is: the whole population has forgotten that we used to be private individuals.
The unification of the human race by the Internet now threatens the relevance of overgrown governments and banks. That's why they fight us.
Tech has created more extreme possibilities on both sides. On the privacy side, it's possible to exchange messages at a distance in an unreadable and nearly undetectable way. On the surveillance side, it's possible to eavesdrop on nearly everyone (except the very sophisticated).
We can't (and don't want to) go back in time. The real question is: which danger is greater, conspiracy or oppression?
I completely agree with your post. I have an idea for how to resolve this via networking topology. Right now, TCP/IP seems to me to be an engine for centralizing power: Limited hop count and hierarchical address assignment leads to star topologies, leading to economies of scale that again support centralization.
I propose a network protocol stack that encourages a mesh topology, where it actually makes economic sense to physically link my home to 2 or more of my immediate neighbors. I surmise that all my neighbors (or all the neighbors of the person I'm communicating with) would have to be my adversary in order to spy on my communications (See secret splitting on Wikipedia). I feel that mass surveillance doesn't scale with this topology.
I've been working for some time on designing such a networking protocol stack... What do folks here think? Is this worth my time?
It's a whole category of research really. Papers like Herd at Sigcomm and Vuvuzela at SOSP are the two latest I've seen and following references there should be helpful. I think if you look at Herd there are a few tricks in there to lower the cost of all of the chaff with the superpeers (or whatever they call them, I read it a while ago). A hybrid system that mixes meshnet schemes for local peer to peer traffic with secret sharing schemes and mixnets for more disparate networks seems workable to me. The question is what benefits does the meshnet provide over the mixnet style schemes?
> The question is what benefits does the meshnet provide over the mixnet style schemes?
My Isochronous grid/mesh protocol is designed to operate at the network layer. The TCP/IP Internet has:
* High and Unbounded Latency
* Wasteful, Underused Links
* Low Redundancy
* A Tendency to Centralize Power
* Choke-point Surveillance and Censorship
* Disaster Vulnerabilities
* Tragedy of the Commons
I think a mesh network with non-centralized per-byte pricing can make a big dent in all of these.
A meshnet built on top of a starnet is like trying to build a road network on top of a train network: It's not economically feasible and ultimately pointless.
I'd check the literature on that, typically under the data center track at networking conferences.
Ignoring this fact makes "mesh" seem like the answer, but the real answer, especially pertaining to emergence, is continual improvement in encryption, etc., not a replacement of the entire construct of the Internet, which is itself emergent.
Thanks for the reply!
You state this as a fact, but I've spent many hundreds of hours trying to prove to myself that it's not a fact. I think with packet switched networks, you are probably correct. Instead, I've been designing an Isochronous network protocol.
If you could help me out with more concrete details on why all non-centralized networks are incapable of running at scale, it could save me a lot of time! :-)
Take a single computer at the edge of town A. It's the only machine in town A that can connect to the next town B, because of the distance between town A and town B. All traffic in town A now has to route through this machine to reach town B. How will a single machine achieve this?
Even worse, what if the two towns are too far for any connection other than a centralized style connection (large wires on a pole).
In the case where there is only one link between two towns, then the owner(s) of the switches at either end of that link will be able to charge a monopoly price for the bits that get sent across it. Market forces will soon encourage others to create additional links between the two towns.
In the bootstrapping phase of my plan, the case of a single link between two cities would be impossible: Network participants would create tunnels through the IP Internet (with the obvious downside of higher latency and cost).
Back to my original question: Should I be spending my time on this? You claimed that crypto was a better route because mesh doesn't scale. I'm not a crypto genius, but I do consider myself a reasonably proficient systems software engineer. I feel that if I could design a scalable mesh network protocol stack, many of the problems we're discussing become tractable. What do you think?
So. Maybe an interesting question is: What sorts of applications and protocols will work in a mesh topology? That set might be interesting. For example, you could imagine big chunks of Nextdoor working well in a mesh topology, since it's already a geo-limited social graph by design.
After that, the next step might be something as open and decentralized as Ethereum.
I only have Comcast as an option for broadband Internet. This is the direct result of the protocol's topology. This is power and control that no amount of protocols written on top of TCP/IP can break. We can keep wanting to have decentralized or non-centralized services, but I don't see it actually happening on the TCP/IP Internet: The economies of scale are too powerful to compete against.
If we tried to layer a road network exclusively on top of a rail network, we'd just have a less efficient rail network.
What's funny is that you could talk to any number of law enforcement officials who believe that the moral failing is on the cryptography community for not providing a "backdoor" into encrypted communications. Or to restate, "Please, Apple, think of the children!"
Morality is, unfortunately, subjective. Part of the argument is in convincing your opponent that your morality is superior to theirs. Or, perhaps, that their stance violates their own sense of morality.
The first part is: "regardless of what you think, if something you do rearranges power, then it will become political". This is more a statement of fact than anything else.
The second part is: "now that I hope to have convinced you that crypto is necessarily political, here's the moral stance I would prefer for you to follow."
I completely agree with you that law enforcement officers will see it that way. I posit that that is a matter of limited perspective. They see backdoors as a means to catch criminals, and do not consider the implications for surveillance, democracy and freedom at all.
This paper is not aimed at them, it is aimed at the cryptographers, who could hopefully more easily understand that perspective.
Law enforcement will naturally seek to expand their power by any means available to them, in the same way that various branches of government or political parties will. Hence we have separation of powers.
Genuine question: What do people who believe this use as their reason for refraining from harming others for personal gain? I understand that simple intuitive preferences against seeing others suffer will often work, but what do you do about instances where either the rewards of screwing someone over are very great or where your intuition tells you that you would get a lot of pleasure from seeing someone in pain?
I know that I don't really have anything to support my belief that there exists an objective morality which is hard-to-determine.
This is an interesting and deep question! The game theory of life. I don't know the answer or if there is a clear answer, but I imagine there's generally a lot of risk involved in harming others for personal gain, and basic fear of individual or group retaliation (either mob or police) is a significant deterrent.
If you "believe" there's an objective morality, isn't that just another subjective position? You can't justify it, so why don't you abandon the notion and go with your selfish whims?
It's a flawed dichotomy. Morality is in your perception like how you know to respect other people's personal spheres. It's not based on rule based rationality or some axiomatic selfishness.
If you want to sleep well at night, you refrain from screwing people over in terrible ways. Probably you screw people over to the extent that you can get away with it in a moral-cultural sense, unless you're particularly saintly.
There were also any number of people who justified the use of the Atomic Bomb because it lessened the predicted number of losses incurred in the Pacific theater of war. This didn't stop the creators of the bomb from believing it was still better to opt for peace instead of its continued use to lessen the total number of casualties in future wars.
I've worked with police who privately express feelings and take actions diametrically opposed to how they behave in a professional context.
That is only true if there are no universal preferences: things like self ownership, all things being equal, is always preferable to slavery.
If I want to kill all other people and I successfully kill everyone but myself, my preference has become universal but fulfilling that preference has not become any more moral.
Personally, I don't think morality is all that subjective, but more an emergent property of community identity (and thus inherently shared rather than inherently personal). I would say that morality is, fortunately, relative.
So do you include the future and the past? If you include the future, it is extremely difficult, if not impossible, to determine the universality of a principle. If you don't include the future, that means it is possible for the universality of a principle to change. Then as soon as one person is born who doesn't share a principle, the moral character of that principle vanishes?
I'm not sure what point you are trying to make by bringing in the concept of the perfect circle. I don't see at all how this relates to morality. The concept of a perfect circle is not universally shared currently, and certainly isn't historically shared universally. Even if that concept were universal, it would not make imperfect circles immoral.
Yes. I think the point of confusion is around "universality". I'm not talking about a unanimous agreement on preference between people born and unborn, I'm talking about attributes that define kindness (the set theory kind, not the Disney princess kind). I was hoping to make that clear with the perfect circle bit, but I wasn't aware of contention on the issue... pretend I said square, I'm pretty sure that is a safe universal definition. I have no opinion on the morality of geometry, but all squares share universal attributes. All humans share universal attributes as well, by way of biological imperatives - life is preferable to death, all things being equal and outside of coercion.
I think that covers mental illness, war, dying in a fire while rescuing kittens, emo cutting, etc.
If you could stop the pain without dying, would you still kill yourself?
If you could escape depression without dying, would you still kill yourself?
If you could find meaning in your existence without dying, would you still kill yourself?
If you could go to heaven (or whatever magic place you believe is better than your present situation) without dying, would you still kill yourself?
If you could kill yourself without dying, would you still kill yourself? (Weird I know, but for the sake of completeness)
Did I miss any good reasons to kill yourself?
Or is your 'universal attribute' some sort of 'ideal' that doesn't actually ever happen and thus is completely useless for defining the set of 'human' and 'not human', let alone 'moral' and 'immoral'.
I don't think that existence is subjective, so universal preference either exists or it doesn't. It has been a long time since I've sat in a philosophy class though, so I'd welcome a correction.
So if I were to tell you that I really did prefer to be a slave, would you say I am not human? You are excluding as 'human' any individual who is not capable of understanding the concept of 'ownership'. Do you seriously believe that babies and the severely mentally handicapped are not human?
If you are operating with such a divergent notion of 'human' (and thus 'morality', 'universal' and 'preference') then there is no point in having any further discussion with you or considering anything you have to say.
You won't end up with any single seed value you can match a given instance against, but with multidimensional matrices that represent multidimensional scales.
Why? I can't just live morally and let my "opponent" do what they think is right?
I feel very strongly that people should decide what they think is right and fight for it.
I feel no need to convince other people that my morality is superior to theirs.
> I feel no need to convince other people that my morality is superior to theirs.
Those two statements are in direct contradiction. Fighting for your morality is convincing others of its superiority. Those two things are the same thing.
Unless your morality includes letting others come to their own conclusions about what is right. In that case, trying to convince others that they should let others pick their own morals would be self-contradictory.
I believe ISIS would agree with you whole-heartedly
I suspect ISIS would not agree with that statement, but with one that is slightly different:
"I feel very strongly that people should decide that what I think is right and fight for it."
So no, that implication really isn't there.
The lack of will to reform is not total amongst LEOs. The lack of political will to do many seemingly sensible things in this country is a moral failure that every citizen shares.
I suspect that you're right that it is a minority of LEOs that actually abuse their power (possibly a 49% minority, but possibly even as low as 10% or even 1%), but it's clear that there are many more - an overwhelming majority, who actively defend or passively ignore those who do. (I'm sure I only ever see the bad ones, but just about every statement by the Fraternal Order of Police makes me seethe with rage...)
Total lack of will to reform: no.
It seems to me there's a tremendous lack of will to reform abuses and criminal acts by LEO. Perhaps not "Total", but so marginally less than "Total" as to be functionally equivalent (in that "not all men" kind of failure to acknowledge reality...).
Police misconduct is real here and all over the world as well, and will also continue to be fought. There are no quick fixes, and it's important to be outraged and voice one's outrage when we see it, but let's not give up and paint all LEOs with one brush, as their existence most certainly contributes a net positive to society.
It always was.
Many developers like to stay out of politics. Concentrating on difficult technical problems is hard enough; adding in politics is therefore adding unnecessary complexity. As the wonderful Tom Lehrer put it in his song "Wernher von Braun":
"Don't say that he's hypocritical,
Say rather that he's apolitical.
'Once the rockets are up, who cares where they come down?
That's not my department,' says Wernher von Braun."
Not only is cryptography an inherently political tool, almost all software is political.
Software does not exist in a vacuum; the entire point of most software is that it has an impact on business, society, and the world. With the discovery of the General Purpose Computer, this impact can be very large.
It's easy to see why cryptography disrupts existing power structures. It should be similarly easy to see how software already overturned the traditional power structures in places like the stock market, manufacturing, and retail.
So please, consider what impact your software might have when you are writing it, or whether someone already has a goal in mind for it. Maybe, in some cases, it's better to walk away. It's a hard question, but the answer is not to say "I'm staying out of politics". To quote Quinn Norton and Eleanor Saitta from their talk at 30c3, there is "no neutral ground in a burning world".
I didn't know about that, and reading it right after seeing the jury duty article on the front page today is chilling to say the least.
Hackers are now routinely the foot soldiers of the cyber-war of everyone against everyone - we need to think more about our own rules instead of following orders.
Computer scientists (and any self-respecting scientists) HAVE to separate their ethics from the interests of state institutions.
In my opinion, cryptographers and computer scientists have avoided morally questioning their work for too long. Reality is now catching up with them, in the form of surveillance techniques. The paper argues this for cryptography; in my view it is even more generally applicable.
No one else understands the implications of an experiment or a new methodology. I was at a meeting recently discussing ethics committees for computer science. A medical expert gave his opinion and said (paraphrasing): I fail to see the problem with digitalisation, we have had medical records on paper for years, now they are on a computer, what is the difference?
I don't mean to say that every scientist should do this individually. They should discuss this with colleagues, and with an ethics committee, which should contain subject matter experts, but also ethicists.
If this article has any content that warrants discussion, it is how out of touch with reality the social sciences are.
... and sadly, yet another proof of how necessary Snowden was.