The Moral Character of Cryptographic Work (ucdavis.edu)
240 points by cscheid on Dec 3, 2015 | 93 comments

For anyone here who doesn't know who the author is, Phil Rogaway is one of the most important academic cryptographers; he's responsible for OAEP, PSS, OCB, UMAC, FPE, and the constructions behind XTS, the universal standard for disk encryption.

The meat of the paper is in Sections 2 (where the unintended power dynamics of some modern academic crypto research projects are discussed) and 4 (where he provides suggestions for important practical projects academic cryptographers should tackle). Sections 1 and 3 are written for an audience of academics who might be less familiar with the political implications of crypto than the typical HN reader.

Essentially, Rogaway is trying to convince mathematicians to embrace the practical and political impact of their work.

Colin Percival gets a nice mention towards the end of the paper. I'd be over the moon if I were him. Congrats, Colin!

This is a very good paper and it's disappointing that it didn't get higher on HN. That being said, I disagree with some of the paper's points. I don't believe that cryptographers should take on moral responsibility for how they perceive their work may or may not be used. If a malware writer uses XTS to lock up a hard drive, should Dr. Rogaway be morally responsible for that because he helped create the constructions behind XTS? I would argue "no," because there is a moral separation between idea and implementation. We should not burden cryptographers with moral baggage that belongs to the people who implement, or who set policy to implement.

There is a clear distinction between a moral responsibility and a legal liability. The moral duties of scientists versus policy makers have little to do with the moral separation between idea and implementation.

Are you arguing that the scientists who developed the first atomic bomb share no moral responsibility for the (positive or negative) effects of its use?

Dr. Rogaway is morally responsible both for the use of his encryption for cyber ransom and for the use of his encryption to allow private democratic and economic discourse.

If you think the negative effects of publicly available encryption outweigh the positive effects, it would be immoral to be working on making better encryption publicly available.

Obviously, there is a complicated moral calculus behind which effects are foreseeable/likely relative to your other available actions and their foreseeable/likely effects.

It's a nice paper. Turns out, he grudgingly believes what I've been saying all along:

"I am not optimistic. The figure of the heroic cryptographer sweeping in to save the world from totalitarian surveillance is ludicrous. And in a world where intelligence agencies stockpile and exploit countless vulnerabilities, obtain CA secret keys, subvert software-update mechanisms, infiltrate private companies with moles, redirect online discussions in favored directions, and exert enormous influence on standards bodies, cryptography alone will be an ineffectual response. At best, cryptography might be a tool for creating possibilities within contours circumscribed by other forces."

Exactly. Gotta deal with those "other forces," which are social and legal. Otherwise, the opponents' vast resources work around the few who try to resist with crypto. That doesn't stop many technical people from thinking crypto will magically save the day.

From the abstract:

Cryptography rearranges power: it configures who can do what, from what.

I'd argue that the reverse is really the issue that needs more attention. Online systems that do not provide strong cryptography rearrange power, as compared to their offline equivalents.

It was not feasible to scan all phone calls for keywords in 1970, since that required effort from humans to do the patching and listening. The power dynamic changed when our industry brought those calls into a centralized, trivially-storable clear-text format. Encrypting the conversations is simply a partial return to the status quo of a few decades ago.


Incrementalism is such a powerful tool that the whole population has forgotten we used to be private individuals.

The unification of the human race by the Internet now threatens the relevance of overgrown governments and banks. That's why they fight us.

I wouldn't say they do it because they feel threatened—how often do bureaucrats genuinely feel threatened? Most likely they are just legitimately trying to do their jobs, and the natural tendency will be to seek any power available that seems relevant to that task. Does any individual or organization ever voluntarily cede power?

Good point. And even longer ago, all conversations required meeting, and were private unless an eavesdropper was physically nearby.

Tech has created more extreme possibilities on both sides. On the privacy side, it's possible to exchange messages at a distance in an unreadable and nearly undetectable way. On the surveillance side, it's possible to eavesdrop on nearly everyone (except the very sophisticated).

We can't and (don't want to) go back in time. The real question is "which danger is greater: conspiracy or oppression?"

> private unless an eavesdropper was physically nearby.

I completely agree with your post. I have an idea for how to resolve this via networking topology. Right now, TCP/IP seems to me to be an engine for centralizing power: Limited hop count and hierarchical address assignment leads to star topologies, leading to economies of scale that again support centralization.

I propose a network protocol stack that encourages a mesh topology, where it actually makes economic sense to physically link my home to 2 or more of my immediate neighbors. I surmise that all my neighbors (or all the neighbors of the person I'm communicating with) would have to be my adversary in order to spy on my communications (See secret splitting on Wikipedia). I feel that mass surveillance doesn't scale with this topology.
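The secret-splitting idea above can be sketched with plain XOR shares: send one share over each neighbor link, and only someone who taps every link can read the message. A minimal toy sketch (function names are my own invention, not any particular library's API):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list[bytes]:
    # n-of-n splitting: n-1 random pads, plus the secret XORed with all of
    # them. Any n-1 shares are uniformly random and reveal nothing.
    pads = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    return pads + [reduce(xor_bytes, pads, secret)]

def recover(shares: list[bytes]) -> bytes:
    # XOR all shares back together to cancel the pads
    return reduce(xor_bytes, shares)

msg = b"meet at dawn"
shares = split_secret(msg, 3)   # one share per neighbor link
assert recover(shares) == msg
```

Note this is the all-or-nothing (n-of-n) variant, which matches the scenario where *all* neighbors would have to collude; a threshold scheme like Shamir's would be needed if you wanted any k of n links to suffice for delivery.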

I've been working for some time on designing such a networking protocol stack... What do folks here think? Is this worth my time?

I think the typical approach to the bottleneck issue with regard to traffic analysis attacks is that the machines on the edges can act as mixes. They can essentially launder traffic from within their respective meshes so that any intermediary between them can't do attribution. Of course, you have to trust the mix! So what then? Then communication to them has to be encrypted and onion routed, and moreover, continuously sent (even if what is encrypted is the message, "No data here dummy, this is just chaff") and then that has to be sent along, all so the mix doesn't know that you're actually communicating anything.
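The layered ("onion") wrapping described above can be sketched in a few lines: the sender adds one encryption layer per mix, and each mix peels exactly its own layer. Here XOR with a SHA-256 counter-mode keystream stands in for a real cipher; this is a toy illustrating the layering only, not secure crypto, and all names are invented:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a stand-in keystream (toy only)
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def wrap(message: bytes, hop_keys: list[bytes]) -> bytes:
    # Sender adds one layer per mix, innermost layer first,
    # so the first hop's layer ends up outermost.
    onion = message
    for key in reversed(hop_keys):
        onion = xor(onion, keystream(key, len(onion)))
    return onion

def peel(onion: bytes, key: bytes) -> bytes:
    # Each mix strips exactly its own layer
    return xor(onion, keystream(key, len(onion)))

keys = [b"mix1-key", b"mix2-key", b"mix3-key"]
onion = wrap(b"No data here dummy, this is just chaff", keys)
for k in keys:
    onion = peel(onion, k)
assert onion == b"No data here dummy, this is just chaff"
```

A real mixnet layer also hides the next-hop address and pads every message to a fixed size, so a mix learns nothing but its immediate neighbors; the chaff messages described above get wrapped identically to real traffic.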

It's a whole category of research really. Papers like Herd at Sigcomm and Vuvuzela at SOSP are the two latest I've seen and following references there should be helpful. I think if you look at Herd there are a few tricks in there to lower the cost of all of the chaff with the superpeers (or whatever they call them, I read it a while ago). A hybrid system that mixes meshnet schemes for local peer to peer traffic with secret sharing schemes and mixnets for more disparate networks seems workable to me. The question is what benefits does the meshnet provide over the mixnet style schemes?

Thanks for the pointers, I'll look them up!

> The question is what benefits does the meshnet provide over the mixnet style schemes?

My Isochronous grid/mesh protocol is designed to operate at the network layer. The TCP/IP Internet has:

* High and Unbounded Latency
* Wasteful, Underused Links
* Low Redundancy
* A Tendency to Centralize Power
* Choke-point Surveillance and Censorship
* Disaster Vulnerabilities
* Tragedy of the Commons

I think a mesh network with non-centralized per-byte pricing can make a big dent in all of these.

A meshnet built on top of a starnet is like trying to build a road network on top of a train network: It's not economically feasible and ultimately pointless.

I see. I'm not sure all of these things are fundamental to TCP/IP itself; some may instead be economic and regulatory outcomes. Something to think about. It's not my area so I don't have specific cites, but data centers are effectively meshes. I know there has been work on different ways to transit data within them other than stock TCP/IP. Network coding, for instance, is a pretty cool way to splat data among a whole bunch of interconnected people, and UDP to all your peers is a good medium to do it over. There's also work on multipath TCP (MPTCP, among others) to help utilize otherwise idle links.
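The network-coding idea mentioned above can be shown with the classic two-source "butterfly" example, sketched here from scratch: a shared bottleneck link carries the XOR of two packets, and each sink combines it with the packet it already heard directly, so one coded transmission serves both sinks.

```python
def xor(p: bytes, q: bytes) -> bytes:
    # XOR two equal-length packets
    return bytes(x ^ y for x, y in zip(p, q))

a = b"packet-A"       # from source 1
b_ = b"packet-B"      # from source 2
coded = xor(a, b_)    # the bottleneck link carries one coded packet

assert xor(coded, a) == b_   # sink 1: heard `a` directly, recovers `b_`
assert xor(coded, b_) == a   # sink 2: heard `b_` directly, recovers `a`
```

Without coding, the bottleneck would have to carry both packets in turn; with it, one transmission suffices, which is the throughput gain network coding offers dense meshes.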

I'd check the literature on that, typically under the data center track at networking conferences.

Unfortunately, the other thing that doesn't scale with this topology is the network itself. A centralized network needs to be trusted for transporting data over vast expanses of underpopulated areas, or even urban bottlenecks (e.g., LA communicating with SF).

Ignoring this fact makes mesh seem like the answer, but the real answer, especially pertaining to emergence, is continual improvements in encryption, etc., not a replacement of the entire construct of the Internet, which is itself emergent.

> A centralized network needs to be trusted for transporting data over vast expanses...

Thanks for the reply!

You state this as a fact, but I've spent many hundreds of hours trying to prove to myself that it's not a fact. I think with packet switched networks, you are probably correct. Instead, I've been designing an Isochronous network protocol. If you could help me out with more concrete details on why all non-centralized networks are incapable of running at scale, it could save me a lot of time! :-)

Just a quick thought experiment (because my networking expertise is limited):

Take a single computer at the edge of town A. It's the only machine in town A that can connect to the next town B, because of the distance between town A and town B. All traffic in town A now has to route through this machine to reach town B. How will a single machine achieve this?

Even worse, what if the two towns are too far for any connection other than a centralized style connection (large wires on a pole).

If you arbitrarily define any large wire on a pole as being "centralized", then sure, but I wouldn't agree with that definition. For example, I can lease long distance point to point dark fiber for my personal use, and run whatever combination of wavelengths and protocols that I want on it.

In the case where there is only one link between two towns, then the owner(s) of the switches at either end of that link will be able to charge a monopoly price for the bits that get sent across it. Market forces will soon encourage others to create additional links between the two towns.

In the bootstrapping phase of my plan, the case of a single link between two cities would be impossible: Network participants would create tunnels through the IP Internet (with the obvious downside of higher latency and cost).

Back to my original question: Should I be spending my time on this? You claimed that crypto was a better route because mesh doesn't scale. I'm not a crypto genius, but I do consider myself a reasonably proficient systems software engineer. I feel that if I could design a scalable mesh network protocol stack, many of the problems we're discussing become tractable. What do you think?

I think "mesh doesn't scale" is a pretty valid networking assumption, in the general case.

So. Maybe an interesting question is: What sorts of applications and protocols will work in a mesh topology? That set might be interesting. For example, you could imagine big chunks of Nextdoor working well in a mesh topology, since it's already a geo-limited social graph by design.

I think the answer to your question is here today, if not that popular yet. Lambda.

After that, the next step might be something as open and decentralized as Ethereum.

You mentioned a tunnel. That's exactly what I mean. If the Internet itself wants to constrain and control us, we can just create a new Internet inside of it, at their (e.g., ISPs', backbones') expense. The result is a system of protocols and communication standards that can be distributed across trusted hardware, which lends itself well to a future, more open and distributed Internet, at the point when these types of things become illegal on the existing Internet. In other words, work with what we have for now, building up the necessary tools and infrastructure from the inside.

I want a mesh network that does to the TCP/IP Internet what the road network has done to the train network. The road networks reduced the barriers to entry in so many industries.

I only have Comcast as an option for broadband Internet. This is the direct result of the protocol's topology. This is power and control that no amount of protocols written on top of TCP/IP can break. We can keep wanting to have decentralized or non-centralized services, but I don't see it actually happening on the TCP/IP Internet: the economies of scale are too powerful to compete against.

If we tried to layer a road network exclusively on top of a rail network, we'd just have a less efficient rail network.

They lead one to ask if our inability to effectively address mass surveillance constitutes a failure of our field. I believe that it does. I call for a community-wide effort to develop more effective means to resist mass surveillance.

What's funny is that you could talk to any number of law enforcement officials who believe that the moral failing is on the cryptography community for not providing a "backdoor" into encrypted communications. Or to restate, "Please, Apple, think of the children!"

Morality is, unfortunately, subjective. Part of the argument is in convincing your opponent that your morality is superior to theirs. Or, perhaps, that their stance violates their own sense of morality.

Yes, that is certainly a tension here. I think it's easier to think of the paper as two broadly different subparts.

The first part is: "regardless of what you think, if something you do rearranges power, then it will become political". This is more a statement of fact than anything else.

The second part is: "now that I hope to have convinced you that crypto is necessarily political, here's the moral stance I would prefer for you to follow."

Morality is subjective at the edges. We all share a common morality, otherwise our society would not function. Most of the common morality is captured by laws.

I completely agree with you that law enforcement officers will see it that way. I posit that that is a matter of limited perspective. They see backdoors as a means to catch criminals, and do not consider the implications for surveillance, democracy and freedom at all.

This paper is not aimed at them, it is aimed at the cryptographers, who could hopefully more easily understand that perspective.

I agree. The opinion of law enforcement is irrelevant in the sense that we should not defer authority on the matter to them. Law enforcement might similarly have thoughts on how to run the judicial branch or alter the Constitution so as to better "catch criminals" (or some other excuse), yet that is no reason to grant it to them.

Law enforcement will naturally seek to expand their power by any means available to them, in the same way that various branches of government or political parties will. Hence we have separation of powers.

> Morality is, unfortunately, subjective.

Genuine question: What do people who believe this use as their reason for refraining from harming others for personal gain? I understand that simple intuitive preferences against seeing others suffer will often work, but what do you do about instances where either the rewards of screwing someone over are very great or where your intuition tells you that you would get a lot of pleasure from seeing someone in pain?

I know that I don't really have anything to support my belief that there exists an objective morality which is hard to determine.

"Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." The Categorical Imperative and its weaker brethren like the Golden Rule are remarkably widespread.

> What do people who believe this use as their reason for refraining from harming others for personal gain?

This is an interesting and deep question! The game theory of life. I don't know the answer or if there is a clear answer, but I imagine there's generally a lot of risk involved in harming others for personal gain, and basic fear of individual or group retaliation (either mob or police) is a significant deterrent.

As I get older I've come to believe less and less that morality is subjective. I believe the NAP or non-aggression principle is as close to objective, universally-preferable morality as it gets. Certainly some people have other senses of morality, but they cannot be universally-preferable.

They don't mean "subjective" as in "determined by arbitrary selfish choice," they mean "not resolvable in objective, axiomatic, universal ways."

If you "believe" there's an objective morality, isn't that just another subjective position? You can't justify it, so why don't you abandon the notion and go with your selfish whims?

It's a flawed dichotomy. Morality lives in your perception, like the way you know to respect other people's personal space. It's not based on rule-based rationality or some axiomatic selfishness.

If you want to sleep well at night, you refrain from screwing people over in terrible ways. Probably you screw people over to the extent that you can get away with it in a moral-cultural sense, unless you're particularly saintly.

Recognizing that other people have different opinions than you doesn't stop you from preferring or advocating your own. Objective morality is nice to the extent that you can have an intellectual discussion and come to a resolution. More often you just fight it out.

Because people are not pure rational creatures. Most decisions are governed by emotion, and most people have empathy, so harming others directly is abhorrent. The rest is post-hoc justification.

Beauty is subjective, but that doesn't stop people in society from coalescing around norms.

> What's funny is that you could talk to any number of law enforcement officials who believe that the moral failing is on the cryptography community for not providing a "backdoor" into encrypted communications.

There were also any number of people who justified the use of the atomic bomb because it lessened the predicted number of losses in the Pacific theater of the war. This didn't stop the creators of the bomb from believing it was still better to opt for peace instead of its continued use to lessen the total number of casualties in future wars.

Part three of the paper contains his justification for opposing mass surveillance.

I disagree. People are good about segmenting their lives. Few people have value commitments that are stronger than their ability to pay the mortgage.

I've worked with police who privately express feelings and take actions diametrically opposed to how they behave in a professional context.

> Morality is, unfortunately, subjective.

That is only true if there are no universal preferences: self-ownership, for example, all things being equal, is always preferable to slavery.

Universality doesn't necessarily have anything to do with morality.

If I want to kill all other people and I successfully kill everyone but myself, my preference has become universal but fulfilling that preference has not become any more moral.

Personally, I don't think morality is all that subjective, but more an emergent property of community identity (thus it is inherently shared rather than inherently personal). I would say that morality is, fortunately, relative.

As a foundation for morality it does, violation of a universal principle being an immoral act. Your hypothetical situation depends on the dimension of time, I'd say that universality has no such constraint: the concept of a perfect circle exists regardless of a physical example ever having existed.

> Your hypothetical situation depends on the dimension of time

So do you include the future and the past? If you include the future, it is extremely difficult, if not impossible, to determine the universality of a principle. If you don't include the future, that means it is possible for the universality of a principle to change. Then as soon as one person is born who doesn't share a principle, the moral character of that principle vanishes?

I'm not sure what point you are trying to make by bringing in the concept of the perfect circle. I don't see at all how this relates to morality. The concept of a perfect circle is not universally shared currently, and certainly isn't historically shared universally. Even if that concept were universal, it would not make imperfect circles immoral.

> So do you include the future and the past?

Yes. I think the point of confusion is around "universality". I'm not talking about a unanimous agreement on preference between people born and unborn, I'm talking about attributes that define kindness (the set theory kind, not the Disney princess kind). I was hoping to make that clear with the perfect circle bit, but I wasn't aware of contention on the issue... pretend I said square, I'm pretty sure that is a safe universal definition. I have no opinion on the morality of geometry, but all squares share universal attributes. All humans share universal attributes as well, by way of biological imperatives - life is preferable to death, all things being equal and outside of coercion.

If "life is preferable to death" were a universal attribute of all humans resulting from a biological imperative, no one would ever choose to end their life. I suspect any other proposed "universal attribute" -- at least one that takes the form of a universal preference -- will fail to correspond with reality in the same way.

> all things being equal and outside of coercion.

I think that covers mental illness, war, dying in a fire while rescuing kittens, emo cutting, etc.

I don't think it covers any cases of voluntary suicide not motivated by perceived impacts on others, irrespective of mental illness. Unless you define coercion so broadly that the whole statement becomes meaningless.

>> all things being equal

If you could stop the pain without dying, would you still kill yourself?

If you could escape depression without dying, would you still kill yourself?

If you could find meaning in your existence without dying, would you still kill yourself?

If you could go to heaven (or whatever magic place you believe is better than your present situation) without dying, would you still kill yourself?

If you could kill yourself without dying, would you still kill yourself? (Weird I know, but for the sake of completeness)

Did I miss any good reasons to kill yourself?

Please provide a single example of when "all things being equal and outside of coercion" (using your extremely broad definition of 'coercion') a human made a choice to live rather than die.

Or is your 'universal attribute' some sort of 'ideal' that doesn't actually ever happen and thus is completely useless for defining the set of 'human' and 'not human', let alone 'moral' and 'immoral'.

Sure, you woke up and didn't kill yourself. As did most every human being that has ever existed, because humans don't do that outside of very strange circumstances that are out of their control.

Sure, but whether or not there are universal preferences is also subjective.

> are

I don't think that existence is subjective, so universal preference either exists or it doesn't. It has been a long time since I've sat in a philosophy class though, so I'd welcome a correction.

Whether a preference is universal is so impractical to ascertain with certainty (especially since you have said you want to include the preferences of people in the past) that any belief about the existence of universal preferences is inherently subjective.

I'm not talking about whimsical preference, I'm talking about group-membership-defining attributes - preferences that make one a human being and that every other human has or will share. And yes, this would define a human being even in a world where no human ever existed or will exist.

So... you are talking about preferences that 'define a human being', but then those preferences are only tautologically universal (since you are excluding any possible exception). This seems absurd.

So if I were to tell you that I really did prefer to be a slave, you would say I am not human? You are excluding as 'human' any individuals who are not capable of understanding the concept of 'ownership'. Do you seriously believe that babies and the severely mentally handicapped are not human?

If you are operating with such a divergent notion of 'human' (and thus 'morality', 'universal' and 'preference') then there is no point in having any further discussion with you or considering anything you have to say.

Nope, exceptions get their own subclass. Babies would be baby-humans, the mentally ill would be - well you get the idea. These classes are actually a predictable reality given an accurate definition of "human being", a component of the whole. Imagine a dependency graph, with the edge cases as leaf nodes. So the definition of human isn't related to a single entity, or the averaging of the qualities of the entire population - it is related to the attributes that give rise to the entire structure - seed values. Again, this has nothing to do with time, so "give rise to" is more like a logical conclusion than a sequence of events.

You should be aware that there's a significant risk you'll have to turn every single human who ever lived into its own class, and even then you'll run into the Halting problem while trying to identify each case of each classification.

You won't end up with any single seed values you can match any given instance against, but with multidimensional matrices that represents multidimensional scales.

> Morality is, unfortunately, subjective. Part of the argument is in convincing your opponent that your morality is superior to theirs. Or, perhaps, that their stance violates their own sense of morality.

Why? I can't just live morally and let my "opponent" do what they think is right?

I feel very strongly that people should decide what they think is right and fight for it.

I feel no need to convince other people that my morality is superior to theirs.

> I feel very strongly that people should decide what they think is right and fight for it.

> I feel no need to convince other people that my morality is superior to theirs.

Those two statements are in direct contradiction. Fighting for your morality is convincing others of its superiority. Those two things are the same thing.

They're not entirely the same thing. For example, if you think people have a right to privacy, one way to fight for it is to write cryptographic software, so everybody who already agrees with you can protect themselves. That doesn't require you to convince anyone who disagrees.

If you fight to win (in at least as much as keeping your freedom to encrypt), you will have to convince your government at some point. The topic won't go away and if they care strongly enough you might find yourself in your own private full-on intelligence war, and maybe in prison. At some point tech cannot help you (rubber-hose cryptography).

Law is not the only moral battlefield.

Could you elaborate? I think I do not understand your point.

That isn't fighting for your morality, though. That's just making crypto software.

> Fighting for your morality is convincing others of its superiority.

Unless your morality includes letting others come to their own conclusions about what is right. In that case, trying to convince others that they should let others pick their own morals would be self-contradictory.

In what sense are you fighting for your morality, then? It isn't contradictory to hold that view, but if you hold that view, then inherently you would not fight for it.

> I feel very strongly that people should decide what they think is right and fight for it.

I believe ISIS would agree with you wholeheartedly.

> I believe ISIS would agree with you whole-heartedly

I suspect ISIS would not agree with that statement, but with one that is slightly different:

"I feel very strongly that people should decide that what I think is right and fight for it."

When your opponent condones torture, indefinite imprisonment of innocents, war crimes, and genocide, there is little point in having an open dialog or argument with them.

There are many people that don't condone those things but still think law enforcement should be able to get into people's computers or read communications with a warrant. It's more nuanced than you would like to believe.

Are you implying that "law enforcement officials" have moral standing, especially in the US? Torture, racism, fraud, corruption, brutality, are what springs to mind, along with a total lack of will to reform. To hell with that.

I would say that it is precisely the moral standing of LEOs that makes the torture, racism, fraud, corruption and brutality so much more egregious of an offense than when committed by any other armed gang.

> Morality is, unfortunately, subjective.

So no, that implication really isn't there.

The lack of will to reform is not total amongst LEOs. The lack of political will to do many seemingly sensible things in this country is a moral failure that every citizen shares.

Would you agree that well over 99% of LEOs either actively or passively ignore abuses by their colleagues? As evidenced by things like this: http://henrycountyreport.com/blog/2015/12/01/leaked-document...

I suspect that you're right that it is a minority of LEOs that actually abuse their power (possibly a 49% minority, but possibly even as low as 10% or even 1%), but it's clear that there are many more - an overwhelming majority, who actively defend or passively ignore those who do. (I'm sure I only ever see the bad ones, but just about every statement by the Fraternal Order of Police makes me seethe with rage...)

Torture, racism, fraud, corruption: yes.

Total lack of will to reform: no.

OK, take "Total" out and see if you're still happy with that answer:


It seems to me there's a tremendous lack of will to reform abuses and criminal acts by LEO. Perhaps not "Total", but so marginally less than "Total" as to be functionally equivalent (in that "not all men" kind of failure to acknowledge reality...).

There's a very strong political will to "reform" torture by the CIA (i.e. stop it), just as there was when the police did it [1]. Racism, fraud, corruption have existed and will continue to exist, and will continue to be fought viciously for many lifetimes to come, both in the US and throughout the world.

Police misconduct is real here and all over the world as well, and will also continue to be fought. There are no quick fixes, and it's important to be outraged and voice one's outrage when we see it, but let's not give up and paint all LEOs with one brush, as their existence most certainly contributes a net positive to society.

1. https://en.wikipedia.org/wiki/Waterboarding#By_U.S._police_b...

> This makes cryptography an inherently political tool.

It always was.

Many developers like to stay out of politics. Concentrating on difficult technical problems is hard enough; adding in politics is therefore adding in unnecessary complexity. As the wonderful Tom Lehrer put it in his song "Wernher von Braun"[1],

    Don't say that he's hypocritical,
    Say rather that he's apolitical.

    "Once the rockets are up, who cares where they come down?
    That's not my department," says Wernher von Braun.

The problem with this is similar to the problem of abstaining from a vote: it's absolutely not a neutral position. Choosing to abstain from politics in general, like choosing not to vote, is de facto a vote for the status quo and majority rule.

Not only is cryptography an inherently political tool, almost all software is political.

Software does not exist in a vacuum; the entire point of most software is that it has an impact on business, society, and the world. With the advent of the general-purpose computer, this impact can be very large.

It's easy to see why cryptography disrupts existing power structures. It should be similarly easy to see how software already overturned the traditional power structures in places like the stock market, manufacturing, and retail.

So please, consider what impact your software might have when you are writing it, and whether someone already has a particular goal in mind for it. Maybe, in some cases, it's better to walk away. It's a hard question, but the answer is not to say "I'm staying out of politics". To quote Quinn Norton and Eleanor Saitta from their talk[2] at 30c3, there is "no neutral ground in a burning world".

[1] https://www.youtube.com/watch?v=QEJ9HrZq7Ro#t=16

[2] https://www.youtube.com/watch?v=DWg2qEEa9CE

Posted a few days ago at https://news.ycombinator.com/item?id=10655418, but got so little discussion that we won't treat it as a dupe but have instead merged the threads.

Huh, thank you. I missed it back then, and assumed that when the submission went through it meant that the dupe detector okayed it.

This is an important message to consider, and not just for cryptography. Everyone can benefit from thinking about the moral and social consequences of what problems they choose to work on, who they do them for, and what values the institutions they contract with hold.

The biggest thing I took away from this was reading the slides and seeing the FBI's suicide letter to Martin Luther King.

I didn't know about that, and reading it right after seeing the jury duty article on the front page today is chilling, to say the least.

I completely agree that cryptography researchers should evaluate their work against their moral values. I feel the same about pretty much all engineering... I've been focusing on trying to design moral networking protocols.

This was one of the factors (certainly not the only one) that made me move out of the field of cryptography - I was doing work related to attempting to break a major cryptosystem, and I realised that I wasn't completely sure what the ethically right course of action was in the slim chance that I succeeded. My background was in pure mathematics, and up until I moved into cryptography, it seemed obvious that openness of information was an obvious good. Once in the crypto field, however, it became a much more ambiguous issue.

Cyber-security in general is political. The author is a cryptographer, so it is natural that he formulated this for his own area, but it is too narrow.

Hackers are now routinely the foot soldiers of the cyber-war of everyone against everyone; we need to think more about our own rules instead of just following orders.

Here we have what I think is a display of an intelligent mind specialized in one area, funded by a state institution, and weak at resolving moral conflicts. Computer scientists (and any self-respecting scientists) HAVE to separate their ethics from the interests of state institutions. Phrases in the paper like "where the cryptographer has a duty to serve the public and keep their self-interest in check" indicate this. I recently read a paper on designing systems with security exceptions for law enforcement, calling them "exception requirements" or something to that effect. This is the sort of thing a good study of ethics can help to resolve.

  Computer scientists (and any self-respecting scientists) HAVE to separate their ethics from the interests of state institutions.
Why? The paper argues exactly the opposite in its first part, describing the atomic-bomb scientists and the Russell-Einstein Manifesto.

In my opinion, cryptographers and computer scientists have neglected to question the morality of their work for too long. Reality is now catching up, in the form of surveillance techniques. The paper argues this for cryptography; in my view it applies even more generally.

Why do scientists/cryptographers have to judge ethics for themselves? I'm not sure what you're asking. Bertrand Russell judged the bomb to be a danger to humanity and chose to express his own views on it.

Medical ethics committees have ethicists, but the main advice is from medical experts. I would not want anybody else judging the ethics of medical experiments, as no one else has the expert knowledge. I see absolutely no reason why this should be different for computer scientists and cryptographers.

No one else understands the implications of an experiment or a new methodology. I was at a meeting recently discussing ethics committees for computer science. A medical expert gave his opinion and said (paraphrasing): I fail to see the problem with digitalisation; we have had medical records on paper for years, now they are on a computer, what is the difference?

I don't mean to say that every scientist should do this individually. They should discuss this with colleagues, and with an ethics committee, which should contain subject matter experts, but also ethicists.

It is the responsibility of the inventor to think ahead as far as possible about how an invention may do good and/or harm.

Everyone had those talks 30 years ago, when crypto was labeled a munition by the USA.

If this article has any content that warrants discussion, it is how out of touch with reality the social sciences are.

... and sadly, yet another proof of how necessary Snowden was.


So I guess we have a moral imperative to fork Chrome to actually enforce cert pinning even against locally-installed roots, then?

Enforce cert pinning? No. Notify when it is overridden by locally-installed roots? Yes.
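For concreteness, here is a minimal sketch of the "notify rather than enforce" idea, using whole-certificate fingerprint pins. Everything here (the pin store, function names, the fake certificate bytes) is hypothetical; real browsers pin SPKI hashes of keys in the validated chain rather than hashing whole certificates, but the decision logic is analogous:

```python
import hashlib

# Hypothetical pin store: SHA-256 fingerprints of certificates we expect
# for a given host, independent of which roots the OS has installed.
PINS = {
    "example.com": set(),
}

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(host: str, der_cert: bytes) -> bool:
    """Return True if the presented certificate matches a pin for this
    host.  A browser that notifies rather than enforces would warn the
    user on a False result (e.g. a locally-installed root intercepting
    traffic) instead of hard-failing the connection."""
    pins = PINS.get(host)
    if pins is None:
        return True  # no pin recorded for this host; nothing to check
    return fingerprint(der_cert) in pins

# Example: pin a (fake) certificate blob, then verify presented certs.
fake_cert = b"-- pretend DER bytes --"
PINS["example.com"].add(fingerprint(fake_cert))
print(check_pin("example.com", fake_cert))          # True: matches pin
print(check_pin("example.com", b"different cert"))  # False: would notify
```

The enforce-vs-notify distinction lives entirely in what the caller does with a `False` result; the pin check itself is identical either way.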

...as opposed to Firefox, Edge, and Safari, which all do the same?
