
Shoring up Tor - user_235711
http://newsoffice.mit.edu/2015/tor-vulnerability-0729
======
noondip
> The researchers’ attack requires that the adversary’s computer serve as the
> guard on a Tor circuit.

This is a great reason to run your own Tor relay, even if it's just a private
bridge for you and your friends. You can even use a pluggable transport of
your choosing -- I picked obfs4 to make otherwise identifiable Tor connections
look like random noise to my service provider.
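
For anyone curious what that setup looks like, a minimal torrc sketch (the address, fingerprint, cert= value, and obfs4proxy path are placeholders; check the bridge's own output for the real bridge line):

```
# Server side (the private bridge):
BridgeRelay 1
ORPort auto
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ExtORPort auto
PublishServerDescriptor 0   # keep the bridge out of the public list

# Client side, using the bridge line the server generates:
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Bridge obfs4 192.0.2.1:443 FINGERPRINT cert=CERTSTRING iat-mode=0
```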

~~~
TylerE
Does that really help? "Random noise" over TCP/IP doesn't really exist in the
wild.

Now, if you made it look like well-encrypted HTTPS traffic, _that_ might be
borderline non-suspicious.

~~~
zmanian
Yes, this is the pattern of obfs4 usage.

It makes Tor traffic look like HTTPS traffic to a bridge node.

~~~
scintill76
I didn't readily find details of this particular cloaking, but I'd be
skeptical of how well it can work if your adversary is more determined to
snoop on you than your bridge is to camouflage itself[1]. Is obfs4 sending an
SNI header for the fake SSL session? If so, an observer can connect to the
named server himself and see things like:

a) the SNI header is faked, e.g. there is no DNS entry for that name;

b) it's not really an HTTPS server;

c) it doesn't have enough discoverable content to really justify you spending
hours using it every day;

d) nothing else on the web links to this server.

If it's not sending SNI, that's probably unusual these days too, so they know
your client is non-standard.
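
A rough sketch of the first two checks: given the SNI name a suspected bridge presents, an eavesdropper can probe whether a real HTTPS site actually answers for it. The hostname would come from the observed handshake; this is illustrative, not obfs4-specific.

```python
import socket
import ssl

def probe_sni(hostname, port=443, timeout=5):
    """Run the observer's checks: (a) does the name resolve at all,
    (b) is a verifiable HTTPS server really answering for it?"""
    checks = {"dns_resolves": False, "tls_ok": False}
    try:
        socket.gethostbyname(hostname)
        checks["dns_resolves"] = True
    except socket.gaierror:
        return checks
    # The default context verifies the certificate chain and hostname,
    # so a successful handshake means the server can prove the name.
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname):
                checks["tls_ok"] = True
    except OSError:
        pass
    return checks
```

Checks (c) and (d), content depth and inbound links, are judgment calls a human or crawler would make, so they're omitted here.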

I don't know, maybe I'm overthinking and/or considering things you don't care
about as part of your threat model. I suppose some of this could be overcome
by using a client certificate whitelist as a plausible reason to not serve
anything to snoopers, but then you and/or the bridge are somewhat identifiable
as users of an obscure SSL feature. And since the client certificate is in
plain text[2], you're either trackable by a static certificate, or
identifiable as something unusual if using many rotating certificates all
coming from the same IP.

[1] Maybe this is weird logic, but in other words: if you are paranoid enough
to be trying to obfuscate your Tor usage, I don't see why you should believe
this can be secure enough (unless obfs4 has addressed these issues somehow).
But maybe the point is more about web filters just smart enough to block
standard Tor than about constantly evolving spies bent on unraveling _your_
communications.

[2] [https://security.stackexchange.com/questions/24577/client-certificate-in-ssl-handshake-insecure](https://security.stackexchange.com/questions/24577/client-certificate-in-ssl-handshake-insecure)

------
w8rbt
“Anonymity is considered a big part of freedom of speech now...” That's a
great way to express the importance of Tor.

~~~
nickpsecurity
People don't get that, though. Too vague. If anyone asks, show them the value
of anonymity with real examples on this page:

[https://www.torproject.org/about/torusers.html.en](https://www.torproject.org/about/torusers.html.en)

It also lists plenty of situations that ordinary people, police, and military
might relate to. Hard to argue with what you might do yourself. ;)

------
Hello71
paper:
[http://people.csail.mit.edu/devadas/pubs/circuit_finger.pdf](http://people.csail.mit.edu/devadas/pubs/circuit_finger.pdf)

------
nickpsecurity
"Researchers mount successful attacks against popular anonymity network — and
show how to prevent them."

This could've been the title of many, many articles about Tor, and you'll see
plenty more. It has attracted so many attacks, owing to the difficulty of its
goal, that it should only be used as one step in a series of
anonymity-enhancing methods.

------
rc4algorithm
Journalists need to start making a stronger distinction between Tor end-use
and hidden services. End-use is solid - hidden services are experimental.

That said, the fact that this research can identify what hidden service a user
accesses skirts that line. On the other hand, that doesn't sound too different
from a typical timing attack (something that Tor doesn't try to prevent).

------
ikeboy
So, the claim is that any guard can identify the sites visited by Tor users
who connect through it, 88% of the time? That's very bad, if true, because
the guard also knows the user's IP address.

Edit: glancing through the paper, it may also require monitoring the webpage,
in which case it's less severe. I'll wait until there's a good writeup by the
Tor Project itself.

From the paper:

> Indeed, we show that we can correctly determine which of the 50 monitored
> pages the client is visiting with 88% true positive rate and false positive
> rate as low as 2.9%
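
For a sense of scale on those numbers, a quick Bayes sketch. The TPR (0.88) and FPR (0.029) are from the quote above; the base rates are assumptions for illustration, not from the paper.

```python
def positive_predictive_value(tpr, fpr, base_rate):
    """P(page really is monitored | classifier flags it), via Bayes' rule."""
    true_pos = tpr * base_rate
    false_pos = fpr * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# If only 1% of observed circuits visit a monitored page, most alarms
# are false:
print(positive_predictive_value(0.88, 0.029, 0.01))  # ~0.23
# If half of them do, the classifier is very reliable:
print(positive_predictive_value(0.88, 0.029, 0.5))   # ~0.97
```

So how severe the 2.9% FPR is depends heavily on how common visits to the monitored pages actually are.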

