SecureDrop (washingtonpost.com)
316 points by brianmwaters_hn on July 7, 2014 | 95 comments

If the leaker visits this page from a regular browser to copy the onion URL before opening the Tor Browser, the whole thing is only as safe as SSL, since there will be a trail of the SSL connection just before the visit to SecureDrop. And they don't even explain how to avoid that.

OPSEC is hard.

(Securedrop dev here) This is a really good point. Unfortunately, we're "as safe as SSL" no matter what, unless the source has a separate way to verify the .onion address on the SSL-protected page. They can use the SecureDrop directory for that (and we're working on other schemes as well), but it's not automated so only a handful of very cautious sources would likely do this.

I'm not sure how we could explain how to avoid it - where would the explanation go? Visiting that page would be just as much of a correlation, no? It's kind of a chicken-and-egg problem, unless the source is already using Tor.

Avoiding the "trail of the SSL connection" also suggests we should be doing something to combat website fingerprinting, which we have discussed but do not have a clear solution for yet.

Our current thinking is that just visiting the landing page is not enough to prosecute a source. We can do better, and are working on it, but it's difficult.

A few things that may be helpful:

1. Make the entire site available under `ssl.washingtonpost.com` (ideally without the `ssl.` prefix).

That way, the domain won't be as suspicious as it is right now. I suspect that this is more or less the only content hosted on the domain.

2. Include an iframe for all (or a random subset of) visitors, loading this particular url (hidden).

By artificially generating traffic to the endpoint it will be harder to distinguish these from other, 'real' requests.

Use a random delay for adding the iframe (otherwise the 'pairing' with the initial http request may distinguish this traffic).
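The randomized-iframe idea in point 2 could be sketched roughly like this on the server side (a hedged illustration only: the landing-page URL, the 10% sampling rate, and the 10-60 second delay window are all invented for the example, not taken from any real deployment):

```python
import random

# Assumed landing-page URL for illustration.
LANDING_URL = "https://washingtonpost.com/securedrop"

def decoy_iframe_snippet(probability: float = 0.1) -> str:
    """For a random subset of visitors, return a JS snippet that injects a
    hidden iframe to the landing page after a random delay; otherwise ''."""
    if random.random() >= probability:
        return ""
    delay_ms = random.randint(10_000, 60_000)  # 10-60 s, to break pairing
    return (
        "<script>setTimeout(function(){"
        "var f=document.createElement('iframe');"
        "f.style.display='none';"
        f"f.src='{LANDING_URL}';"
        "document.body.appendChild(f);"
        f"}},{delay_ms});</script>"
    )
```

A page template would append `decoy_iframe_snippet()` to each response, so a fraction of ordinary readers generate cover traffic to the landing page at uncorrelated times.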

3. Print the link, URL and info block on the dead trees (the paper), as others have suggested.

4. Add HSTS headers (http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security)
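As a concrete illustration of point 4, an HSTS response header looks like this (the one-year max-age here is just a common example value, not a recommendation from this thread):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

A browser that has seen this header once will refuse to load the site over plain HTTP for the stated period, so a source's later visit can't be silently downgraded to an unencrypted connection.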

Also, if you can swing https://washingtonpost.com?page=securedrop, the request will just look like it's to https://washingtonpost.com since query parameters are encrypted with ssl.

So is the rest of the URL, it could just as well be https://washingtonpost.com/securedrop

Oh right, paths are too. Sorry!

> Include an iframe for all (or a random subset of) visitors, loading this particular url (hidden).

Or, since the content of this page is mostly text, it could be included in the HTML of all washingtonpost.com home page requests with very small overhead, and shown with a non-tracked javascript action (link/button), so it is all client-side and indistinguishable from a normal request to the home page.

Definitely! The challenge is getting the news orgs to change their entire site, which often involves a lot of complex, entrenched infrastructure and sometimes involves reluctant third parties such as ad networks.

We're working on a best practices guide for deployments [0]. I'll make sure these suggestions go in there. Feel free to take a look and comment if you're interested!

[0] https://securedrop.hackpad.com/SecureDrop-Deployment-Best-Pr...

> Unfortunately, we're "as safe as SSL" no matter what, unless the source has a separate way to verify the .onion address on the SSL-protected page.

Print it in the physical newspaper. The German computer magazine c't prints their PGP key fingerprints in the masthead.

We've been working on this with some of our deployment partners for a while now :D Great idea! I didn't know anybody else did it, it's cool to hear about c't.


> I'm not sure how we could explain to avoid it - where would the explanation go?

You could put the instructions on pages that many people visit regularly, true security through obscurity. For example, put the instructions in abbreviated form in a box in the footer of your front page (or in the footer of every page).

Glad you are doing this. You should just stick this link/info in the footer of all Washington Post pages.

Good idea, but, like many of these ideas, easier said than done.

Print a QR Code for SecureDrop in every issue of the newspaper. Hell, feature it as part of a story announcing SecureDrop the first time you print it. Then just print it in a consistent position with minimal explanation from then on.

This may be one of the rare cases where the use of a QR Code is justified.

Only if they visit the page just before. Seems plausible they would read about it, set it up and then drop their documents at a later date as a default behavior.

I agree it would probably be a good idea to put a warning about such a problem though.

There's this hard tradeoff that most people are willing to make, between making things more 'secure' and making things useable by the general public. I just wish that attention would be paid to the security side of things.

Ultimately, we can write descriptive documentation - but getting it read and understood is hard. Cryptoparties are, again, a great idea, but getting the non-technical user involved is damned hard.

IMHO these things always come down to "how do we make it easy for the public, whilst keeping it REALLY secure". How does security become a general piece of education, much akin to math, or at least history?

I don't think SecureDrop is designed to be usable by the general public.

I'm happy to agree with you. Equally I feel that with a (small) amount of love, it could be used by whistleblowers! To me it's almost ready for that.

Embed the page as iframe and scrub the referrer on every page a viewer visits.

That should make it hard enough to correlate any data, I guess they have enough visitors.

I don't see how that would help. The threat model here - the reason to use Tor - is that they could be compromised and forced to log, and through Tor they would not know the leaker's IP.

You only need the two "leak at time X, IP Y loaded this page at time X-5" datapoints to break this.
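The two-datapoint correlation described here can be made concrete with a toy example (all IPs and timestamps invented): with sparse traffic, pairing "landing-page hit at time T" with "submission at T+delta" narrows the source to a single address.

```python
# Hypothetical access log: (client IP, time in seconds) for landing-page hits.
landing_page_hits = [("10.0.0.5", 1000), ("10.0.0.9", 4000)]
submission_time = 1005  # time an upload was observed at the hidden service

# Any IP that hit the landing page shortly before the upload is a suspect.
suspects = [ip for ip, t in landing_page_hits if 0 < submission_time - t < 60]
print(suspects)  # only one IP fits the window
```

With only two visitors in the log, a single suspect remains; the iframe proposals above try to defeat this by flooding the log with decoy hits.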

An embedded page is not fetched by someone else.

Either you misunderstood me, or I don't quite understand how that would not help.

My suggestion is to embed an iframe to the posted URL on every page on www.washingtonpost.com. Every article, everything. I'd assume this would blast the logs enough that if you look at "time X-5" you'll have too many data points to actually make something out of it. Because everyone who reads an article on wapo will have also visited that page. So yes, that embedded page would be loaded by every single viewer of any page on washingtonpost.com.

Edit: I just realized that there is a huge unfixable flaw in this approach. The request for an article in the logs will always show up shortly before the request for the SecureDrop page. Even if you iframed a random article on the SecureDrop page too, you could see from the logs that it was loaded before the actual article, essentially rendering this thing useless :/

So... Nevermind... I guess.

(Securedrop dev here) We often suggest ideas like this to deployment operators, and others as well. For example, we encourage deployments to mirror the Tor Browser Bundle so sources don't have to go to Tor's (monitored) website to get it. We encourage them to use SSL everywhere so the "trail to the landing page" is harder to spot. We encourage the exact "hidden iframes" idea you propose here. And we encourage them to deploy on a path, not on a subdomain (because hostnames are visible even with TLS). At least WaPo is doing the last one right!

Generally, it is very difficult to convince the operators of sites like the Washington Post to do things like this, but we're working on it!

Uuuh, hi there! Thanks for the effort you all put into making leaking safer for sources.

Other possible approach: load the landing page everywhere and show it with Javascript when the user clicks their way to it. I think it's an improvement on the iframe without drawbacks. How does it sound?

Hard to verify that there are no ajax shenanigans.

It's a hard problem :/

Downloading Tor from an unofficial source sounds like a recipe for trojans, though... I don't think most people will have Erinn's signature to verify.

It shouldn't matter where you're downloading the TBB binary, since you're going to verify the signature before trusting it, right? Surely you wouldn't just assume it was legitimate, and then install it.

Business idea: a signature database with a web interface. Download from anywhere, then look up the signature in the database to verify it's authentic.

How about some simple cookie tracking and an iframe that loads a random number of seconds after the page loads (like 10 - 60)? That might spam the logs randomly enough that it couldn't be tracked. However, I think measures such as including the SecureDrop page as a part of the root domain only under SSL would be the simplest solution in this case.

Ah ok, I didn't get the "on all pages" bit, sorry.

I still believe it would not be enough, since such a thing could be silently disabled by WaPo if ordered to do so.

The point of SecureDrop is that they could not deanonymize the source even if they tried.

Maybe you could use JS to randomize the timing of the iframe load after the article load?

Wouldn't matter: the GET for the article from a particular IP would still show up before the GET for SecureDrop. The actual timing is irrelevant here if there's always an article visit and then a SecureDrop request.

I guess you could randomize if you load the iframe or not. Then you couldn't be sure if a visit was an actual visit or an iframe that was randomly triggered (with a random delay).

But for this to be useful you'd still need to instruct sources to randomly browse the page before going to SecureDrop. Which might work if you force them to click a link on the main-page to get to the SecureDrop page.

But if they go directly to /securedrop it will fail again because the GET /securedrop will show up as the first request from that IP, giving away that the visit was intentional.

So my current idea would be to randomly generate the actual /securedrop path in a non-predictable manner per client. Maybe something simple like securedrop-sha1(...). Then link to that from WaPo's main page, forcing everyone to go through WaPo.com. But then you still have the problem that you must make sure sources don't access this link from history or something.
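A minimal sketch of that per-client path idea, assuming a hypothetical server-side secret and using HMAC-SHA1 so the path can't be predicted by an observer (a plain securedrop-sha1(...) of a public value would be guessable); the function name and "securedrop-" prefix are invented for illustration:

```python
import hashlib
import hmac
import secrets

# Hypothetical server-held key; anyone without it cannot predict paths.
SERVER_SECRET = secrets.token_bytes(32)

def securedrop_path(client_token: str) -> str:
    """Derive an unpredictable per-client path like 'securedrop-<40 hex chars>'."""
    digest = hmac.new(SERVER_SECRET, client_token.encode(), hashlib.sha1)
    return "securedrop-" + digest.hexdigest()
```

The main page would link each session to its own `securedrop_path(session_id)`, so a bare `GET /securedrop` never appears in the logs; as the comment notes, this still leaks intent if the source later revisits the link directly.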

Quite a lot of work, for still flawed security.

Please correct me if I'm wrong but, right now, at home, I visited that site. Hardly suspicious at all, since it's on HN front page. I could write down the .onion url on a piece of paper (or just print the page, as reference) and then later follow the instructions posted there, at a semi-anonymous Internet cafe, without having to visit that page, right?

I accidentally clicked the down arrow for your comment. Up-voted another to (somewhat) make up for this. Sorry.

That's like saying John Smith went to a bank and withdrew money at 1pm on Jan 1. Then the bank was robbed at 1:10pm on Jan 1; therefore John Smith robbed the bank.

I don't think you can connect visiting the info page and the very next SecureDrop file upload.

The threat here isn't only proof that is acceptable in court:

* Your actions could put you on a shortlist of people to be more thoroughly investigated.

* Your actions could tip off the people whom your information threatens; maybe they stop communicating with you (or worse) to shut off the leak.

* Per the Snowden release, the NSA tracked the communications of people within something like 3 degrees of their targets. With standards that low, it's not a stretch to think someone would track everyone visiting the Washington Post's secure drop box.

That is a poor analogy of the threat. Basically the problem is about attracting adversarial resources. Any suspicious activity will attract more attention and thus make it more likely the adversary will find real evidence.

I wrote up an analysis of exactly this problem last year: http://grugq.github.io/blog/2013/12/21/in-search-of-opsec-ma...

It all depends on traffic.

And "a group of 100 IPs including a coffee shop near NSA employee John Smith's home" is enough.

It's not about proving that John Smith robbed the bank, but raising suspicion so that he will be investigated.

The difference between a court of law and a court of force.

A Tor user at Harvard was successfully tracked when he sent a bomb threat, since he was the only user on the Harvard LAN using Tor at the time the threat was issued.

That wasn't proof, of course, but it didn't need to be proof, just a good lead for law enforcement to kick-start their investigation.

Enjoyed, thanks. Particularly liked "Let's call it half a win."

If memory serves, there were several people who had been or were using Tor at the time the threat was sent. When he was questioned by the police, however, he confessed.

That's possible, but it doesn't really change the point. By bootstrapping associations of identity-masking technologies with possible identities, you allow "normal" law enforcement investigative techniques to unmask the identity.

Or if the submitter accidentally leaves their cell phone on en route to or while at said public location ...

The leaker can always visit the SSL site via Tor, which would solve the problem.

If anyone from WaPo visits here, you've got some typos on that page:

"Download and install the Tor browser bundle from Download and install the Tor browser bundle from https://www.torproject.org/" should be "Download and install the Tor browser bundle from https://www.torproject.org/"

"You will be provided with a codename that you will use it to log in to check for replies from The Post." should not have the word "it".

Otherwise, great work! I'm really glad that you're doing this and featuring it prominently on your home page.

I worry that the Washington Post has unintentionally created a honeypot for leakers. I wonder if the Post has the resources to sufficiently secure it:

The requirement for security is to make successful attacks more expensive than they are worth for the attackers. (There is no perfect security, of course.)

How much is information leaked to the WP worth? It's information that can change the course of history; it could make war or peace; it could be worth billions or even trillions of dollars; it could simply change the course of the stock market or of one stock and be worth billions to an individual.

If I ran a state intelligence service, with the fate of my nation and all my citizens in my hands, I would be irresponsible not to invest in monitoring the Washington Post (and the NY Times, and others') "secure" tip line. If I ran an unscrupulous business, it would be worth it, if only for the information relevant to the stock market. EDIT: Also, the information can change the course of elections and be a target of unscrupulous politicians.

I find it hard to believe that the Washington Post or any news organization has the resources to protect assets that valuable.

In case you don't have Tor installed and want to know what it looks like: https://imgur.com/GbwKfuG,D2aWi25,glApNg3

Very refreshing to see a big, red warning in the screenshot about the fact that Javascript is enabled! Usually you see the same thing when Javascript is disabled, asking you to enable it.

(SecureDrop dev here) Glad you like it! It's hard to tell people who get excited about fun UX ideas that they can't use JS, but from my experience as a browser security engineer, eliminating JavaScript (and plugins, which the TBB does already) dramatically reduces the browser's (unfortunately enormous) attack surface.

Agreed with you completely. Every time a new web app is posted to HN and it doesn't work without enabling Javascript, a small circle of security-conscious people complain about it. The responses from other people are along the lines of:

"Are there really people that browse the internet without enabling Javascript in 2014?"

"Well, 0.01% of your users have Javascript disabled, you can safely ignore them"

"Javascript is an important part of the web, if you have it disabled, you have no right to complain"

We need more people like you to advocate secure browsers without using Javascript.

In August 2013, the FBI injected a Javascript exploit with a MITM attack to uncloak the real IP addresses of people accessing Silk Road over Tor: http://arstechnica.com/security/2013/08/attackers-wield-fire...

[edit] Nerdier link with exploit demo: http://resources.infosecinstitute.com/fbi-tor-exploit/

This is a different deployment of the same product [1]. Which, incidentally, was originally created by Aaron Swartz. The Wikipedia page[2] has a list of well-known deployments.

[1] https://pressfreedomfoundation.org/securedrop

[2] http://en.wikipedia.org/wiki/SecureDrop

Thanks for pointing that out. I just watched "The Internet's Own Boy", the documentary about Aaron, and it is positively incredible how many projects Aaron created or played a critical role in creating. An unthinkable shame that he left us so soon — one can only imagine all the things he had left to create.

Thanks for posting the links, I am guessing the deployment list will grow.

Does anyone know what the codenames are like? If they are easy enough to remember, then they may be easy enough to brute-force?

I think this is a great concept, yet perhaps too little, too late (journalists should know PGP, and drop boxes like these should have been common already). I also worry a bit because of the Washington Post's track record with leaks; off the top of my head:

- Washington Post was Snowden's first choice, but they put up enough demands for Snowden to move to The Guardian. [1]

- Washington Post, according to Assange, had access to the "Collateral Murder" video a whole year before WikiLeaks published their edited video. [2]

- Washington Post employs op-ed columnists that call for assassination of "criminally dangerous" leakers like Assange [3]

[1] http://nymag.com/daily/intelligencer/2013/06/nsa-leaker-shop... [2] http://www.abc.net.au/foreign/content/2010/s3040234.htm [3] http://www.washingtonpost.com/wp-dyn/content/article/2010/08...

EDIT: More information on SecureDrop: https://pressfreedomfoundation.org/securedrop and source here: https://github.com/freedomofpress/securedrop

Securedrop dev here. We tried to balance the memorizability of codenames (aka Diceware passphrases) with their length. The current minimum length is 8 words from a list of 6969 words, so you get math.log(6969**8, 2) ≈ 102 bits of entropy, which is quite good. Additionally, the codenames are stretched with scrypt, which affords an extra (approx.) 14 bits of entropy (that's our current work factor).
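The entropy arithmetic above can be checked directly (parameters taken from the comment: 8 words drawn uniformly from a 6969-word list):

```python
import math

words = 8
wordlist_size = 6969

# Each uniformly chosen word contributes log2(wordlist_size) bits.
entropy_bits = words * math.log2(wordlist_size)
print(round(entropy_bits))  # ~102 bits
```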

We are continuing to discuss and debate this trade-off. Other ideas welcome!

> Does anyone know what the codenames are like? If they are easy enough to remember, then they may be easy enough to brute-force?

I don't know what they're like, but if you take a list of 5000 common words and use 4 random entries for each codename, there are 625,000,000,000,000 possible combinations. Brute-forcing the entire space at 100,000 tries per second would take ~200 years.
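A quick check of those numbers (4 words, a 5000-word list, 100,000 guesses per second, all taken from the comment above):

```python
wordlist_size = 5000
words = 4
rate = 100_000  # guesses per second

combinations = wordlist_size ** words     # 625,000,000,000,000
seconds = combinations / rate
years = seconds / (60 * 60 * 24 * 365)
print(combinations, round(years))         # ~198 years, i.e. roughly 200
```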

Edit: I made a toy jsfiddle version: http://jsfiddle.net/SwWZ9/10/

The wordlist is just a random sampling of English nouns (I couldn't find a quick source of common nouns long enough). It may contain profanity, watch out!

Your codename seems to be a collection of random words, the number of which you get to specify.

Tor hidden services are not bulletproof. Just as a really simple example, you can do network traffic analysis to find network nodes with one-way traffic to hosts without a correlated public service and deduce if a hidden service is nearby.

There are several exploits which have been used in the past to expose Tor hidden services, and several papers on theoretical ways to expose them. Many of these attacks can be used in reverse to expose the origin of a connection to a hidden service.

In the [not so] extreme case, the govt can always issue a National Security Letter to WaPo and scoop up any data it wants directly from the hidden service servers, similar to its Silk Road and Freedom Hosting takedowns.

The FBI TOR Exploit [ http://resources.infosecinstitute.com/fbi-tor-exploit/ ]

Heartbleed used to reveal Tor hidden services [ https://blog.torproject.org/blog/openssl-bug-cve-2014-0160/ ]

Hot or Not: Revealing hidden services by their clock skew [ http://www.cl.cam.ac.uk/~sjm217/papers/ccs06hotornot.pdf ]

Tor Hidden Service Passive De-Cloaking [ http://blog.whitehatsec.com/tor-hidden-service-passive-de-cl... ]

If all Post correspondents used SecureDrop to submit their stories that would be a start.

One would have to assume that all the traffic going to the server is logged by the NSA and anyone else who can manage it. If the traffic volume is low then timing correlation with even a large pool of suspects is simple. An active attacker can differentiate between the SSL connection from a web browser and one from a tor node, so the background SSL traffic to the Post would not provide cover.

I think it could be improved by using a mix network (eg mixminion) accessed over tor, rather than just tor.

Unfortunately the mixmaster/mixminion networks are currently too small to provide meaningful complexity. Large scale adoption by, eg, newspapers, is not technically hard and would significantly complicate the adversary problem.

I'd love to see more discussion of bitmessage and Pond (https://pond.imperialviolet.org/)

cf http://www.syverson.org/

This is brilliant, and a smart move for the WP, despite some of the criticisms below. I think it's a much needed, if romantic, idea that harkens back to the transparency of Wikileaks, and gives WP a great little heads up over some of the other papers. I wouldn't be surprised to watch the others follow suit soon.

Random question: has anyone attempted to build a Tor-like system (or bridge to the actual Tor network) using WebRTC?

Assuming you were able to avoid the "JavaScript crypto problem", would this be a good or bad idea?

Sometime in the near future, I predict that the US will require some form of photo ID before using an internet kiosk. As usual, the spin will be to protect the children.

USA is pretty low on the list of countries I could imagine implementing something like this. Given Russia's, China's, and a large portion of SEA countries' internet censorship track records...

I'd put the USA pretty high on that list. They've implemented plenty of their take-downs over the past year, and are more capable of introducing something like this than any SEA state.

I think many people living in the US are unaware of just how bad the rest of the world has it, sometimes.

That's not the point at all. The USA claims to be a bastion of democracy and freedom. Therefore it has significantly higher standards to live up to than countries like Russia and China.

I have a better idea. Make it so that some traffic receives higher priority than others, and force content providers to have to pay to play. Then limit competition at the ISP level so that to succeed you have to pay a monopoly to carry your traffic in a timely manner.

No need for something as heavy as what you propose.

South Korea has pretty much already implemented this with the majority of its major websites requiring their SSN equivalent to register.


Fortunately, they can't do that for all the open/WEP/WPS wireless APs everywhere.

They've done a pretty good job of scaring people into securing their APs (which is also a legitimate thing in most cases); just publishing some stories about people having ISP service cut off due to freeloaders doing bad stuff would probably be enough; wouldn't even need to try to prosecute some.

>They've done a pretty good job of scaring people into securing their APs

How is this even remotely a bad thing? It's trivial to MITM people on unsecured networks - I can't think of a single consumer router that actually does DHCP snooping to prevent it either.

I think the technology confuses two things: 1. Encrypted traffic between device and wireless hotspot 2. Restricted access to the wireless hotspot (you need a password or it won't give you service)

I want to allow anonymous access, but let the traffic be encrypted. Is there a technical reason why this is not implemented?

I'm very sad by the culture (and moreso, the legal necessity) of restricting wireless access. I want to share, and have at times relied on anonymous wifi to help me get home.

You can run an access point with all the benefits of WPA2/AES, but make the password really simple. Setting your SSID to "PasswordIsBacon" or just using the same SSID and password is a fairly easy way to share access, without running a completely insecure, unencrypted network.

That's "easy to share" which is a much greater hurdle than "publicly accessible". I want strangers to be able to use my wifi in the middle of the night from outside my home. I want devices to connect without any questions or hassle.

A short walk with Wigle shows literally dozens with WPS on, and usually 4-5 with WEP, plus a couple of open ones that aren't paid. WPS is a massive gaping vulnerability as long as you can stay nearby for a few hours, while WEP gives the illusion of security to clueless people but is worthless (yay RC4... worst algorithm ever.).

No, for your convenience, you only need to identify yourself in the case that you exit the kiosk without using any sort of web service account that can be used to identify you ;)

Good thing criminals have no way to obtain a fake photo-ID.

That's what "National Strategy for Trusted Identities in Cyberspace (NSTIC)" is for:


It's a national smartcard identity program that the Obama admin has been pushing for a while.

There's always McDonald's, except, you are probably on camera.

If you depend on your anonymity, do not use Tor.

Wow, Tor is still a thing? We have confirmation that security agencies have taken over exit nodes and injected spyware before to track targets. I'm surprised anyone uses it. It's like the security lottery.

Exit nodes are irrelevant for hidden services like WaPo's SecureDrop, the connection never leaves the Tor network.

The NSA leaks reveal that for the most part, Tor is still secure if you're using a sufficient number of intermediary nodes.

If anything, the real concern here is the implicit encouragement to use local library computers, which would be much easier for a government agency (or cybercriminal) to infect with malware and observe.

(Securedrop dev) That's not an implicit encouragement, despite it being your interpretation. Library computers, in my experience, do not typically allow you to install software on them, such as the Tor Browser Bundle, which is needed to access SecureDrop.

The explicit encouragement that is clearly written on the landing page is to use a personal computer (not a work computer) and a public network (e.g. a coffee shop).

Apologies, you're completely right. I think I got that impression from something someone else said in this thread.

“The American Library Association (ALA) opposes any use of governmental power to suppress the free and open exchange of knowledge and information or to intimidate individuals exercising free inquiry…ALA considers that sections of the USA PATRIOT ACT are a present danger to the constitutional rights and privacy rights of library users.”



Tor isn't some magic wand you can wave to get security, but it helps. The core Tor software's job is to conceal your identity from your recipient, and to conceal your recipient and your content from observers on your end. By itself, Tor does not protect the actual communications content once it leaves the Tor network. This can make it useful against some forms of metadata analysis, but this also means Tor is best used in combination with other tools. https://blog.torproject.org/blog/prism-vs-tor

I want to ask for citations but I think I will skip. But no, you are wrong.
