Why we have to boycott RSA (erratasec.com)
271 points by techinsidr on Jan 4, 2014 | hide | past | favorite | 69 comments

> I mention this because people on Twitter are taking the stance that instead of boycotting RSA that we should attend their conference, to represent our views, to engage people in the conversation, to be "ambassadors of liberty". This is nonsense. It doesn't matter how many people you convince that what the RSA did is wrong if that doesn't change their behavior. If everyone agrees with you, but nobody boycotts RSA's products/services, then it sends the clear message to other corporations that there is no consequence to bad behavior. It sends the message to other corporations that if caught, all that happens is a lot of talk and no action. And since the motto is that "all PR is good PR", companies see this as a good thing.

DO BOTH. This is the real world. People have to compromise to send a unified message. Don't refuse to help one group who shares your goals because they have a different idea of how to achieve it. If you are in a position where you can boycott and voice your opinion to their faces, do it. Maybe you're right and they don't give a shit about what you say. Who cares? Let the other people there know, and let them know that there are more of you out there.

It would take too much secret coordination, but the coolest thing would be if all the world's encryption experts/academics colluded to talk at RSA's conference with seemingly-plausible topics, but then have everyone just deliver a speech on RSA's actions before leaving the podium. Then again, getting in would require writing legitimate papers that RSA could still publish in their proceedings to make the conference look successful.

I can't imagine anyone voluntarily using any of RSA's products after the patent expired. RSA SecurID was pretty mediocre, too (acquisitions being the way of horrible companies with cash and no products).

They had a huge brain drain as soon as the market picked up at all post-dotcom period, too.

I think it may be useful to think in terms of contract law as well, where companies looking to integrate with each other set restrictions on the security technologies that each other is allowed to use under the terms of the contract. This could cover both interoperability and end-user and admin access (e.g. SecurID).

I'd go further. I think there needs to be a class action suit brought by customers who purchased a security solution and got snake oil. I'm sure the RSA license limits liability but I think there's a case to be made that this isn't just negligence but willful criminal acts and the limitations should be set aside. The case itself would probably be pretty damaging ("Tell us, what did you think the $10m was buying?"). I think RSA would go pretty far to avoid a trial.


"II. The 2008 Amendments to the FISA

While the underlying actions were pending in district court, and partially in response to these suits, Congress enacted the FISA Amendments Act of 2008, Pub. L. No. 110-261, 122 Stat. 2435, codified at 50 U.S.C. § 1885a. Among the amendments is § 802, an immunity provision and related procedures that are triggered if the United States Attorney General certifies to one or more of five conditions. In such case, no civil action may be maintained “against any person for providing assistance to an element of the intelligence community.” § 802(a)."

This to me says such an action would not even get off the ground let alone them having to answer the "what did you think the $10m was buying" question.

You should also paste the five conditions for completeness:

1. any assistance by that person was provided pursuant to an order of the court established under section 103(a) directing such assistance;

2. any assistance by that person was provided pursuant to a certification in writing under section 2511(2)(a)(ii)(B) or 2709(b) of title 18, United States Code;

3. any assistance by that person was provided pursuant to a directive under section 102(a)(4), 105B(e), as added by section 2 of the Protect America Act of 2007 (Public Law 110–55), or 702(h) directing such assistance;

4. in the case of a covered civil action, the assistance alleged to have been provided by the electronic communication service provider was—

A) in connection with an intelligence activity involving communications that was—

i) authorized by the President during the period beginning on September 11, 2001, and ending on January 17, 2007; and

ii) designed to detect or prevent a terrorist attack, or activities in preparation for a terrorist attack, against the United States; and

B) the subject of a written request or directive, or a series of written requests or directives, from the Attorney General or the head of an element of the intelligence community (or the deputy of such person) to the electronic communication service provider indicating that the activity was—

i) authorized by the President; and

ii) determined to be lawful; or

5. the person did not provide the alleged assistance.

I'm not sure RSA would want to make that defense. Right now they're claiming that they made the changes to keep the NSA as a customer and because that customer demanded them and they didn't see the harm.

To argue they were actively working with the NSA on an intelligence operation, while possibly granting them immunity there, would likely cost them even more customers.

RSA is a subsidiary of EMC. This means that a boycott of Greenplum, Pivotal, VMware, Isilon, Mozy, and MANY others would probably be included.

I just don't see an effective boycott of this scale happening -- especially when most of their customers just care about the product cost and benefit. Also, it can probably be argued that trying to secure your systems against a targeted intrusion from the NSA using technical means is pointless and a waste of money (throwing money at the EFF might be more effective).

Having said that, is there a good alternative to SecurID? The only thing that seems to come close is CRYPTOCard, but it looks like they have closer ties to the NSA than RSA does. A YubiKey also looks nice, but I don't like how it needs to be plugged in as a keyboard -- a device that is kept physically separate from the login machine would be ideal. OTP apps on a multi-purpose device (a mobile phone) also aren't something I consider secure.
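For context on what those OTP apps actually compute: most of them implement TOTP (RFC 6238), which is just HMAC-SHA1 over a 30-second time counter with a shared seed. A minimal sketch (not any vendor's implementation, just the RFC algorithm):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """TOTP per RFC 6238: HOTP (RFC 4226) applied to a time-based counter."""
    counter = int((time.time() if for_time is None else for_time) // step)
    # HMAC-SHA1 over the 8-byte big-endian counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T=59s, 8 digits
print(totp(b"12345678901234567890", for_time=59, digits=8))  # -> 94287082
```

This also shows why the SecurID-style model worried people after the 2011 breach: every shared-seed scheme, TOTP included, is only as secure as whoever stores the seeds.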

I wholeheartedly support public shaming. It's about perception management.

RSA is no longer regarded as invincible, infallible.

I met some cocksure RSA shirts (sales engineers?) waiting for their espresso. I cheerfully congratulated them on their employer's security fobs getting pwned [1]. I'm sure they were visiting our company for damage control. They tried to play it off "Gee, whatever are you talking about?"; I'm sure everyone got the memo.

I just looked them in the eye and said "Don't play stupid. I read the news." Their demeanor completely changed. Real contrition.

Like everyone, I have some skin in this game. As an election integrity enthusiast, I'm exhausted by the techno utopian stooges promoting crypto as a cure all.

The more the public understands the emperor has no clothes, the safer we'll be.

[1] http://en.wikipedia.org/wiki/SecurID#March_2011_system_compr...

>is there a good alternative to SecurID?

Twitter, Tumblr, Facebook, MIT, Stanford, Sony, Arbor Networks, 37signals, Twilio, (and many more) all use Duo Security as an alternative to RSA SecurID. https://www.duosecurity.com/success-stories

Duo 2FA is easily the most secure[1], easiest to use[2], and most developer friendly multi-factor solution[3].

[1] https://www.duosecurity.com/security

[2] https://www.duosecurity.com/product

[3a] Almost all of Duo is open source. https://github.com/duosecurity

[3b] Duo's C development libraries and SSH/PAM packages are available in the official repos for major distributions like Debian/Ubuntu, RHEL/CentOS/Fedora, SUSE/SLES, etc. http://packages.debian.org/search?keywords=duo+security&sear...

[3c] Duo's REST APIs kick ass: https://www.duosecurity.com/api

The only downside to Duo is Jon Oberheide's previous collaborations w/ Charlie Miller, ex-NSA'er & advocate of the "no more free bugs" movement.

Minor nitpick, but if we're shaming companies in the security community, I think it's worth calling out some of the security celebrities whose stances contribute to the privacy-destroying activities of the NSA. He and people like the grugq are enablers of the government's destruction of our privacy.

I agree with you though. Duo is one of the best alternatives out there. Jon's collaborations w/ Miller were years ago. Perhaps I'm being a bit too grudgy.

Gemalto FOBs with Duo's backend. This seems very reasonable. Thank you!

In terms of effectiveness:

Boycotting all EMC subsidiaries > boycotting just RSA > doing nothing

Nothing wrong with keeping it small scale and only boycotting RSA, if the alternative is doing nothing at all.

My bad, the parent comment's logic contains multiple fallacies:

It's a false dichotomy: an all-or-nothing boycott, and inside the "all" option, another false dichotomy of "limited alternatives." Sounds like rationalization BS for doing nothing to me.

If the options were so "limited," why isn't there more crowdfunding of open source hardware &| software security modules?

Since you feel there are other alternatives, do you know of any hardware tokens that haven't been influenced by the NSA?

A device with a broken usage model (USB or writable SmartCard interface) isn't acceptable since it defeats the entire purpose of using it in the first place.

Would you care to further explain how USB or writable smartcards are "broken" for this application? ISTM that with stingy enough trust models (i.e. the device shouldn't trust the host nor should the client trust the host or the device...) something could be done.

> Also, it can probably be argued that trying to secure your systems against a targeted intrusion from the NSA using technical means is pointless and a waste of money (throwing money at the EFF might be more effective).

You're assuming that the NSA is unwilling to do things that are illegal. The (legitimate) fear is that your competitors may have ties to the NSA, who can then target you illegally, which can greatly benefit your competitor and damage your business.

Good security has now been made vastly more expensive because the government has to be assumed to have built nearly every high-budget attack vector they could. The cost of defending your business from this is very high, and we're all going to end up paying for it.

> You're assuming that the NSA is unwilling to do things that are illegal.

> Good security has now been made vastly more expensive

Most (all?) companies won't be able to outspend the NSA when it comes to security. So, my point is that trying to compete with them by spending more is pointless. We won't pay for it, because we can't afford it.

Lobbying seems to be the only course of action left which might be effective.

> won't be able to outspend the NSA

You don't need to outspend the NSA to have good security. You just need to spend more than you used to. One example, for instance, is Google now has to encrypt all traffic on its own internal lines.

No problem: NetApp or ZFS-based solutions, Asana, Xen, Tarsnap and open source soft tokens. There are plenty of alternatives that include more alternatives (i.e. solving the problem by modifying requirements) than a narrowly-defined class of product, so there is no such thing as "only X options." It's the false dichotomy fallacy.

The choice is to make an effort instead of making excuses, because doing nothing has the appearance and effect of condoning the behavior. Silence gives consent.

A wise (wo|)man will make more opportunities than (s|)he finds. - Francis Bacon

The Aladdin eToken looks pretty nice. It seems that their FIPS 140-2 Level 3 model is basically a full-blown HSM.

However, it's the same company that makes CRYPTOCard (do a search for "fortezza" to read up on the history).

Edit: The parent to this post originally said "Aladdin Etoken" instead of "open source soft tokens".

Open source software tokens aren't really acceptable since you need to run them on a multi-purpose device which has several paths to exploit. It's not a "false dichotomy" if there aren't any reasonable options. I'm not willing to settle for a device that doesn't solve the only problem it exists for.

If you're looking for something that is not another RSA-style OTP solution, check out LoginTC. https://www.logintc.com/docs/connectors/

The approach leverages the smartphone for what it is, a connected device with rich user experience interaction. https://www.logintc.com/docs/platform/multi-factor-flow.html

Full disclosure: I'm a co-founder at Cyphercor, the builders of LoginTC.

Didn't RSA do this before EMC bought them? It's not clear how much, if any, involvement EMC has in RSA.

Should we boycott Redis and RabbitMQ?

I assume that EMC probably gutted the management of RSA when they bought them. So, if a boycott is going to happen, you should probably go after the primary decision makers.

And no, I don't think a boycott of EMC is practical.

"I don't think a boycott of EMC is practical."

It's practical for me.

The message delivered is not just to EMC, but to the technology industry as a whole. While it might not hurt EMC if RSA sales drop, I'm willing to bet there are companies that would be seriously hurt if the public perception among their customers is that you're in bed with the NSA.

Branding this as a boycott implies that this is an expression of protest, that this is a moral issue. I agree that it is, but a lot of people don't. The morality of the NSA, and of cooperating with the NSA, is a matter of national debate.

However, it is not a matter of debate that the RSA backdoor of BSAFE was and is not open merely to the NSA. It is an objective fact that anyone can take advantage of a backdoor like this. As such, even if you think that the NSA is right, even if you think that cooperating with the NSA is correct, this is not the way to do it.

It might make business sense to do business with a security company that cooperates with the NSA. It does not make business sense to do business with a security company which is proven to produce vulnerable software.

Whether or not it's an ethical problem is subjective. The fact that it's a business problem is objective.

This comment misses the mark:

> Also, it can probably be argued that trying to secure your systems against a targeted intrusion from the NSA using technical means is pointless and a waste of money

The BSAFE backdoor does not simply make companies vulnerable to targeted intrusion from the NSA. It makes every technology which uses Dual_EC_DRBG vulnerable to any hacker who knows how to use the vulnerability. This is a pseudorandom number generator, which means that it affects almost every primitive cryptographic operation.

A company which would introduce such a vulnerability for the NSA may or may not be an ethical company, but it certainly is not a company qualified to provide security.

EDIT: It looks like I messed up my understanding of the way in which Dual_EC_DRBG was broken. See the responses to my post for details.

Yes. RSA should lose customers because RSA didn't do their job properly -- they didn't select algorithms that were in their customers' best interest, despite all the facts being in the open.

> However, it is not a matter of debate that the RSA backdoor of BSAFE was and is not open merely to the NSA. It is an objective fact that anyone can take advantage of a backdoor like this.

This is not accurate. You need to know the private key for the generator, and this is not publicly known.

But, how many people have access to said private key? Will it ever be leaked, as some pieces of sensitive data have been? You can't trust keys that aren't yours to control, because while we can probably safely assume that NSA has better security than you or I or the companies we work for, it also has much higher capability attackers than most of us ever see in our lifetimes. The value of this particular private key is probably the highest of any known single private key in existence.

And, what about further down the road? 10 years, maybe 20, when this type of key is predictably breakable with large enough resources? A 1024-bit RSA key is breakable for about $10 million today, according to a study that was linked in a previous discussion about the state of quantum computing a couple days ago.

There are too many ways this one key could end up compromising potentially millions of locks.

" … because while we can probably safely assume that NSA has better security than you or I … "

This is the same NSA that has no idea what or how many documents Snowden exfiltrated as a contractor sysadmin?

Would you bet your company's confidential data (and possibly future existence) on the assertion that Snowden didn't have access to that private key? Or that other less politically motivated NSA contractors didn't have access to that private key, and which they could have sold for profit instead of publicly whistleblowing for ethical reasons?

I've seen no evidence that the key has been compromised, nor evidence that any important NSA keys have ever been compromised. I must assume they have different practices for their keys than for their data gathering practices.

While I've never seen it spelled out this way, I've always been under the assumption that the reason the NSA had so many outside contractors doing particularly dirty work was perhaps because they knew it was illegal and unconstitutional, and wanted it to happen outside the agency itself. But, I may be misinterpreting. It may have simply been a cost-cutting measure in which they failed to account for the lower level of loyalty to the state and higher level of loyalty to the constitution and individual rights than they were accustomed to from "company men".

> While I've never seen it spelled out this way, I've always been under the assumption that the reason the NSA had so many outside contractors doing particularly dirty work was perhaps because they knew it was illegal and unconstitutional, and wanted it to happen outside the agency itself.

It has nothing to do with "doing the dirty work" as the contractors are still working as agents of the government and are therefore held to the same limitations.

Rather there's a ton of seemingly-good reasons.

HR because you can't hire enough of the types of geniuses you need from the free market on government pay (it's hard, though not impossible, to justify paying a civil servant gobs and gobs of extra cash).

Additionally though there's the political reason: NSA can seem "smaller" by shifting headcount from government employees to contractors. It's not true, of course, but it doesn't have to be true for most of the people who would care.

Plus, it's just hard to scale government org. structure up and down as needed. Where you need to be flexible and the mission is not "inherently governmental" then turning to contractors is a popular way to adapt to changing situations.

But in no case are contractors held to more lax rules. In fact it's the opposite: they legally must comply with all restrictions on government action since they are (contractual) agents of the government, but they also have to comply with government ethics rules pertaining to the fact that they are not civil servants.

These rules are often annoying in their own right (e.g. a contractor is technically required to include the fact that they are a contractor and their company in any email signatures, must announce on the phone that they are a contractor and don't speak for the government, on and on).

In this post Snowden era, any time I hear the phrase "I must assume … ", I automatically have to wonder just how well founded that assumption is any more.

You're _probably_ right.

A year ago I would have said you were "probably right" if you told me the NSA wasn't recording metadata for almost every phone call, email, and website visit.

I don't disagree with you, really.

I think we both agree that any company that is willing to compromise its users to any entity, for money or otherwise, is not a company that should be entrusted with security. I will never deploy an RSA product, and will encourage my customers to choose other options (we support 2FA in our products, as of a couple of months ago, so we have the ability to determine what potentially millions of users choose, though realistically only a few hundred of our users have enabled 2FA, thus far; we don't support RSA).

So, yeah, it's also possible that the NSA's super secret input data they used for this RNG will be revealed or will be compromised by some powerful attacker (China, for instance, who would have very high incentive to compromise a large percentage of major corporations in the US in one fell swoop).

You're correct--I foolishly hadn't actually gone over how the Dual_EC_DRBG was broken until after writing this post. In general a broken random number generator breaks many parts of a cryptosystem, but the particular way in which Dual_EC_DRBG is broken only allows someone with a constant "key" which corresponds to the constants in the NIST standard to predict future generated numbers.

> vulnerable to any hacker who knows how to use the vulnerability

I must have missed this part of the news cycle. Can you explain how anyone other than the NSA can take advantage of Dual_EC_DRBG? Did P and Q or whatever it was get leaked as well?

> Can you explain how anyone other than the NSA can take advantage of Dual_EC_DRBG?

Dual_EC_DRBG is a random number generator. If you use any random number generator long enough without re-seeding, it will eventually cycle (start producing the same sequence of bits again), and a discernible pattern will probably emerge long before that. If you can identify that pattern, then you can predict what will be generated.

To understand this: a really crappy random number generator simply takes the seed and increments it to produce the next random number, wrapping around to 0 when it hits 32 bits. So if I give you the numbers 56, 57, 58, produced by this generator, you can predict the number 59 will be generated next (and the number 55 was generated before). If these numbers were, say, the random values used to pick the primes for an RSA key pair, you could very easily reconstruct the private key. Obviously nobody is using such an obvious "random" number generator.
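That crappy counter generator takes only a few lines of Python to sketch, and it makes the prediction step concrete (a deliberately broken toy, of course, not any real DRBG):

```python
class CrappyPRNG:
    """Toy generator: each output is the previous one plus 1, mod 2^32."""
    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next(self):
        out = self.state
        self.state = (self.state + 1) & 0xFFFFFFFF
        return out

# An observer who sees a few outputs can predict every future one exactly.
gen = CrappyPRNG(56)
observed = [gen.next() for _ in range(3)]   # [56, 57, 58]
predicted = observed[-1] + 1                # the attacker's guess
print(observed, predicted, gen.next())      # prediction matches: 59
```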

In practice, most DRBG cycles are very large, and it takes many generated outputs for a pattern to emerge. This provides security because even if you get a few randomly generated bits, you won't be able to predict which bits were generated next. Even if you know the cycle, it would take many, many bits to be able to know where you are on the cycle and pick what would happen next.

Bruce Schneier explains much more authoritatively than I can[1] how Dual_EC_DRBG can be broken--that after collecting only 32 bytes of generated bits, you can predict what the generator will generate next.

HOWEVER, what I didn't know before re-reading Schneier's article, is that in order to predict future numbers from the previous 32 bytes, you need to know the proper constants (unknown) which correspond to the constants set forth in the NIST's Dual_EC_DRBG. So only the person generating those constants (in tandem with the unknown constant "key") can break the standard. So when I said that anyone can break the standard, I was wrong. Color me embarrassed.
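To see how a trapdoor like that can work, here is a toy analogue in Python, with modular exponentiation standing in for the elliptic-curve point multiplications of the real Dual_EC_DRBG. All the numbers here are made up for illustration; the point is only the structure: whoever chose the constant `e` (and kept its inverse `d`) can turn a single observed output into the generator's next internal state.

```python
# Toy Dual_EC_DRBG-style trapdoor. In the real standard the roles of g and Q
# are played by the curve points P and Q; the suspicion was that Q = e*P for
# a secret e known only to whoever generated the constants.
p = 10**9 + 7                 # small prime modulus (illustrative only)
g = 5                         # stands in for the public point P
e = 65537                     # the designer's secret
d = pow(e, -1, p - 1)         # the trapdoor: d = e^-1 mod (p-1), Python 3.8+
Q = pow(g, e, p)              # published "innocent-looking" constant

class BackdooredDRBG:
    def __init__(self, seed):
        self.state = seed
    def next(self):
        out = pow(Q, self.state, p)         # output r_i = Q^s_i = g^(e*s_i)
        self.state = pow(g, self.state, p)  # next state s_{i+1} = g^s_i
        return out

gen = BackdooredDRBG(seed=123456789)
r0 = gen.next()                      # one output, visible on the wire

# Attacker with the trapdoor: r0^d = g^(e*s0*d) = g^s0, the next state.
recovered_state = pow(r0, d, p)
prediction = pow(Q, recovered_state, p)
print(prediction == gen.next())      # True: all future output is predictable
```

Without `d`, recovering the state from `r0` is a discrete-log problem; with it, one 4-byte-ish peek at the output stream hands you the whole future of the generator, which matches Schneier's description of the attack.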

[1] https://www.schneier.com/blog/archives/2007/11/the_strange_s...

Do we have a customer list of RSA? We should at least try warning them about it. Many of them probably aren't even aware of this. What banks use RSA's products?

Conservatively: all of them.

I'm far more concerned about the overlap between the name of the organization and the name of the algorithm.

The political debate over "working inside the system" is certainly important to have. But the organization that makes those hardware tokens used all over the place could vanish, and it would be a minor systems integration inconvenience.

The reputation hit to a fundamental algorithm is going to be confusing programmers for a long time. I don't even know how to start measuring the cost of that.

I think the solution to that problem should be that if a programmer doesn't understand the difference between RSA the company and RSA the algorithm or the difference between a random number generator and an asymmetric algorithm, for God's sake don't let them anywhere near any crypto code.

Of course that probably won't happen, since programmers who don't know what they're doing implementing crypto seem to be as popular as ever.

Ahh, yes I wasn't clear enough. There are two distinct issues here. I observed more than one reaction to the original news, where a tech journalist type was clearly experiencing "reasonably informed confusion" about RSA.

And then, the degree of "knowing what you're doing" is important too, because I'm pretty sure I have a better background in algebra than some professional cryptographers, but human blind spots can get pretty subtle.

The difference between a PRNG and an asymmetric cipher is easy to understand. The cognitive load of associating RSA the company with RSA the algorithm (and ECDRBG the PRNG with ECC the PKI for that matter) is difficult to overcome even when you're aware of the potential bias.

From what I've seen, this whole brouhaha has done more damage to the reputation of elliptic curves than it did to the RSA algorithm.

Forget about banks. Police, hospitals, all sorts of more impactful places.

I don't personally have much exposure to RSA, just SSH keys and SSL certs. Are these compromised?

At first I thought your handle 'digisign' was the name of that Dutch certificate authority (DigiNotar) that got hacked last year. That would have been so funny :)

"In some cases the companies had no choice (Verizon)"

This is how wrong so many people are. Verizon's CEO has flat out said "they are our largest customer" (i.e - go fuck yourself).

From the article:

"Sadly, I haven't spoken at RSA in many years. Had I been accepted to talk this year, I'd certainly be canceling it."

What does this have to do with the core part of the article: that people should boycott RSA for helping the NSA so willingly in surveillance?

That's kind of the definition of sour grapes.

A boycott is symbolic, and that is important. But I doubt it will be effective in changing their corporate priorities. RSA makes its money from government contracts, or from other government contractors, not from privacy-minded individuals like us.

Instead, I propose that it be unlawful for companies which have been thoroughly hacked to bid on government cybersecurity contracts, at least for some period of time. After the SecurID hack, RSA should have been blacklisted for, say, a year. BSAFE should not be anywhere near a government or defence network.

PS: The analogy to Vichy France isn't great. It was not a matter of French technocrats collaborating just to save their jobs; it was real counter-revolutionaries fighting to bring down the Third Republic from within.

Via this logic, shouldn't we boycott Yahoo, Google, and Facebook too?

Why would you boycott companies that spent millions of dollars fighting the NSA because of an allegation that another company took millions of dollars to help the NSA?

Well, these companies are giving information to the NSA one way or another. It's a slippery slope argument, but anyone who doesn't refuse to give up the information, e.g. Lavabit, is complicit in aiding the NSA. Whether they get paid for it or not seems irrelevant.

I think that a reasonable person would consider {Apple, Google, Lavabit, ...} receiving a National Security Letter coercion, and therefore not "complicit".

"Complicit" would be Verizon or AT&T, who to this day still sell phone call metadata to the NSA.

Those companies appear to be hoping the NSA toothpaste goes back in the tube. It won't.

Until they deliver open source client software that uses end-user-controlled strong encryption, they are not making their users secure. But that means putting their users out of the reach of law enforcement, too, and they are scared to do that.

They need to face the choice: Enable real privacy, or lose your customers.

No organization on the planet has done more to get end-to-end strong crypto, as free as is feasible from the influence of certificate authorities, into the hands of as many people as Google has. Google is the Internet's leading deployer and evangelist of forward secrecy and the pathbreaking pioneer in identity pinning.

That's all great, but in terms of actually keeping users' data out of the secret policeman's file, Skype was the first and best, until they got their balls cut off.

Google does an excellent job keeping everyone's data out of the hands of black hats, but that's not quite the same thing as making it impossible for governments to snoop.

Instead of bitching about the NSA, how about a Gmail client I can audit, a key exchange I can trust, and routine encryption of all emails?

How about PFS and strong payload crypto for hangouts?

I suspect they are hoping for a scenario where customers will just trust them about government snooping. That would enable their security people to cooperate in situations they deem worthy of cooperation. But I think that trust is gone forever. After all, the courts have ruled metadata collection is legal, so why wouldn't Google cooperate with a legal operation?

The millions they have spent are simply PR and damage control.

I like the idea of the smug superiority of a Bank of America lifer lecturing me about Google's complicity.

Nice comeback.

10-10, shots fired.

That was mean Thomas.

I initially read this as boycotting RSA products like BSAFE, rather than the conference.

Aside from their SecurID products, do people use many RSA products?

What about EMC?

Should they bear any of the burden or only the subsidiary?

What about those companies that use RSA products and services?

These are just questions.

Not advocacy.
