
Deanonymizing Tor Circuits - zbentley
https://www.hackerfactor.com/blog/index.php?/archives/868-Deanonymizing-Tor-Circuits.html
======
gzer0
I think a lot of comments here are forgetting that it takes months to be
considered a trusted guard node by Tor. Unless you have hundreds of these,
plus a lot of time and patience, and somehow knock off all of the
non-malicious guard nodes while having your "aged" guard nodes set up, this
attack will be very difficult to conduct.

~~~
Santosh83
Which the larger countries do. And the main point of Tor is to evade serious
adversaries isn't it? If you're not anonymous from nation states when using
Tor, then what will protect whistle-blowers and investigative journalists and
dissidents? VPNs can be ordered to turn over data by courts and you can never
be sure a "no logs" VPN does what it says. Tor is the only viable alternative
and we know it can be at least seriously compromised by the bigger nations.

~~~
tutfbhuf
Has there been any known case where a person using Tor was tracked down and
put into court, without a human mistake that led to his de-anonymization?

~~~
mirimir
Yes. See
[https://news.ycombinator.com/item?id=22215126](https://news.ycombinator.com/item?id=22215126)

Not one person. Hundreds, maybe thousands. Because of an exploited
vulnerability, and a failure to pay attention to an upsurge in suspicious
relays.

Unless you consider that the mistake was using Tor directly, rather than
through a VPN service.

~~~
belorn
[https://blog.torproject.org/tor-security-advisory-relay-
earl...](https://blog.torproject.org/tor-security-advisory-relay-early-
traffic-confirmation-attack)

A bit more detail would be welcome about those "Hundreds, maybe thousands" of
persons. Clearly the security researcher did find a vulnerability in the
protocol and confirmed it by unmasking users, but I don't see the support for
people getting "tracked down and put into [court]".

More importantly, what purpose is there in referencing a five-year-old,
patched security vulnerability? The same year the above vulnerability was
found in Tor, HTTPS was also broken by Heartbleed. The only fool-proof
security is security in so many layers that it becomes statistically unlikely
to cause a negative impact.

~~~
mirimir
The issue isn't that the CMU people exploited the "relay early" bug. It's that
the FBI subpoenaed all their data, tracked down onion services, and served
malware.

This is just the outcome from busting one site, PlayPen:

    
    
        At least 350 U.S.-based individuals arrested
        25 producers of child pornography prosecuted
        51 hands-on abusers prosecuted
        55 American children successfully identified or rescued
        548 international arrests, with 296 sexually abused children identified or rescued
    

[https://www.fbi.gov/news/stories/playpen-creator-
sentenced-t...](https://www.fbi.gov/news/stories/playpen-creator-sentenced-
to-30-years)

~~~
belorn
While that linked article does not directly talk about relay early, this
article does: [https://blog.torproject.org/did-fbi-pay-university-attack-
to...](https://blog.torproject.org/did-fbi-pay-university-attack-tor-users)

The article says that they likely did not have a warrant and simply hired the
researcher directly.

But fair enough: the claim that hundreds did get tracked down and arrested,
due to a bug found by a security researcher hired by the FBI, is true.

People selling zero-day vulnerabilities are a major problem for security.
Sadly, it is something which new threat models need to take into
consideration. The Tor Project did change how the Tor Browser bundle updates,
and I suspect this event also caused some ripples in the project regarding
how much they can trust researchers.

------
noident
First: it seems like the author has come up with a number of useful custom
modifications to the Tor daemon to mitigate common attacks against onion
operators. Why not work with the Tor Project to upstream some of the more
useful ones, as well as the additional debugging information? Reading past
Hacker Factor posts, is it just me, or is there a tone of slight disdain
towards the Tor project?

Second:

>However, there's a second attack. The attacker can run one or more hostile
guard nodes. If he can knock me off enough guards, my tor daemon will
eventually choose one of his guards. Then he can identify my actual network
address and directly attack my server. (This happened to me once.)

A guard node is going to get a lot of traffic from both onion services and
normal Tor users. How can an evil guard node tell which IP belongs to the
author's server?

Overall, this is startling to read. History suggests that it is only a matter
of time before an onion service gets unmasked by the latest attack on the Tor
network.

~~~
Iv
Onion services are incapable of resisting a determined government-scale
attacker. Just filter Tor traffic to random groups of users for like 10
seconds, find one such groups that prevents the target onion service from
being routed. Bissect until you locate a precise node.
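
That bisection step amounts to a plain binary search. The sketch below is a
hypothetical simulation, not real tooling: `service_reachable_while_blocking`
is an invented stand-in for the 10-second filter-and-probe step.

```python
# Hypothetical sketch of the bisection attack described above. An adversary
# who can selectively block Tor traffic per subscriber halves the candidate
# set each round and checks whether the target onion service stays reachable.

def find_host(candidates, service_reachable_while_blocking):
    """Binary-search for the subscriber hosting the onion service.

    candidates: list of subscriber IDs the adversary can block.
    service_reachable_while_blocking(group): blocks Tor traffic for `group`
        for ~10 seconds and reports whether the service stayed up.
    """
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if service_reachable_while_blocking(half):
            # Service stayed up while `half` was blocked: host is elsewhere.
            candidates = candidates[len(candidates) // 2 :]
        else:
            candidates = half
    return candidates[0]

# Simulated run: 1000 subscribers, with the host hidden at ID 421.
hosts = list(range(1000))
target = 421
probe = lambda group: target not in group  # toy stand-in for the real probe
assert find_host(hosts, probe) == target
```

Ten rounds suffice for 1000 subscribers, which is why this scales so well for
an adversary with ISP-level blocking capability.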

IMO, Tor is to be used to escape censorship as a reader, or to host services
anonymously under governments that are not tech-savvy (which are becoming
increasingly rare).

In the end, Tor only buys 10 to 20 years of freedom until governments figure
out what to outlaw and filter. Censorship is a political problem, technical
solutions provide a temporary hotfix, but the political problem has to be
solved at one point.

~~~
Ajedi32
> Just filter Tor traffic to random groups of users for like 10 seconds, find
> one such group that prevents the target onion service from being routed.
> Bisect until you locate a precise node.

That only works if you have the ability to filter out Tor traffic in the first
place. (In which case, why not just preemptively block all Tor traffic in your
country? Problem solved.) But Tor has systems in place designed specifically
to prevent that sort of blocking by disguising Tor traffic as other, more
common traffic types (like HTTP).

There's also no guarantee that the Tor service in question is running on a
host that's under your government's control. Cloud hosting is pretty common
these days.

~~~
Iv
Yes, you can ban Tor altogether and be done with the whole thing. It is
doable, and there is no technical workaround for it.

The scenario I am proposing is actually worse: you use the feeling of
anonymity Tor provides to uncover opponents. Yes, my tactic assumes you have
a backdoor into all of the ISPs' infrastructure. I don't see that as a crazy
requirement for most countries, even democracies.

Disguising Tor traffic does not work well, and China deployed, several years
ago already, tech that recognizes unusual streams.

Yes, you could be cloud hosting from outside the country. In the case of
China, though, that's likely not to work, as most encrypted traffic crossing
the border gets dropped (unless, I guess, it is HTTPS from a whitelisted
source).

------
matheusmoreira

      ExcludeExitNodes {br}
    

Why are Brazilian IP addresses associated with abuse? Lots of sites refuse to
respond to my browser because of this.

~~~
deoxykev
Looks like it primarily comes from two ASNs in Brazil: CLARO S.A. (AS28573)
and TELEFÔNICA BRASIL S.A (AS18881). Interestingly, it looks like Brazil is
right up there with Russia when it comes to the total number of risky ASNs.

source: [https://www.recordedfuture.com/asn-blocklist-
analysis/](https://www.recordedfuture.com/asn-blocklist-analysis/)

------
aasasd
> _many bots will only do one HTTP connection at a time (single threaded)_

Is it only service admins who do such detective work, or do admins of
intermediate nodes also try to filter out bots? I wonder if simple API calls
wouldn't get caught in the crossfire.

~~~
Ajedi32
Intermediate nodes can't see HTTP connections. Tor traffic is encrypted.
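
A toy model of why: the client wraps the payload in one layer per hop, and
each relay can peel only its own layer. The XOR "cipher" and fixed keys below
are stand-ins invented for the sketch, not Tor's real cell crypto.

```python
# Toy onion layering: a middle relay removes its own layer and still sees
# ciphertext, never the HTTP payload. XOR stands in for a real stream cipher.

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

payload = b"GET / HTTP/1.1\r\n"  # the HTTP request the exit will forward
keys = {"guard": b"G" * 16, "middle": b"M" * 16, "exit": b"E" * 16}

# Client wraps the payload: exit layer first, guard layer outermost.
cell = payload
for hop in ("exit", "middle", "guard"):
    cell = xor_layer(cell, keys[hop])

# Guard peels its layer, then middle peels its layer -- still not plaintext:
at_middle = xor_layer(cell, keys["guard"])
seen_by_middle = xor_layer(at_middle, keys["middle"])
assert seen_by_middle != payload  # one layer (the exit's) remains

# Only the exit's peel reveals the HTTP request:
assert xor_layer(seen_by_middle, keys["exit"]) == payload
```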

------
Thorrez
Why would people spend effort trying to deanonymize an Internet Archive
server? Are they trying to deanonymize every single hidden service?

And why would they try to DDoS the services once deanonymized? I don't see
how anyone would gain anything from that. If you DDoS some big website you
get famous from all the news articles being written about you, but DDoSing a
hidden service won't make you famous.

~~~
hackerfactor1
I don't know this attacker's reason. I only see the attack.

But I can offer some baseless suspicions:

I'm forwarding traffic from Tor to the Internet Archive. During the last
French election, someone DoS'ed most of the French media outlets. As a
result, lots of French people used Tor to access the current news from the
Internet Archive's collection. Shortly after that, someone tried to DDoS my
onion service.

With all of the voting and elections and the impeachment vote coming up, I'm
expecting attacks since the Internet Archive stores lots of information that
the current administration tried to remove from the Internet.

Then again, it could be a "researcher", or just someone seeing if it can be
done. Perhaps they will decide what they want to do after their attack
succeeds.

~~~
incompatible
Perhaps countries that have already blocked the Internet Archive? But Tor
users can already access it through Tor without a hidden service.

If the IA itself was DoSed then wouldn't your relay stop working?

~~~
gruez
random guess: the servers are reached via anycast. if you DDoS from the US,
you'll take down access from the US, but not from other regions.

~~~
oofabz
It's not much of a DDoS if it's from a single country.

~~~
az656
That’s pretty invalid — I often see large attacks that are almost entirely
made up of bots from one country.

~~~
lima
I've even seen 100+ Gbps from a single ASN in a single country.

------
Santosh83
Also, is it not weaker from an anonymity standpoint for the tor daemon to
choose the same nation for the entire Tor circuit? I just fired up Tor
Browser, navigated to the Tor Blog, and this is what I get:

[https://imgur.com/a/j7JZFVl](https://imgur.com/a/j7JZFVl)

~~~
Astarte
This has been mentioned on the Tor IRC several times, and also in some other
places. No one cared...

------
quotemstr
Why do you suppose the Tor project has been so resistant to mitigating this
class of attack? In particular, the proposal to require that rendezvous nodes
be publicly listed seems sensible to me, and I wasn't immediately able to find
any Tor bugs on the subject.

~~~
Astarte
That's a good question indeed. I don't have an answer, but some people got
similar results when pointing out issues internally. Only when those issues
were published on the mailing list, where everyone can read them, did things
start to gain some traction.

"In April 2018 a Tor core member — the most active Tor Project person on that
closed mailing list — made an attempt to initiate a “do not do” relay
requirements list to improve and streamline the handling of malicious Tor
relay reports. (I’m not mentioning his name since he does not want to be
publicly associated with bad-relays handling for safety reasons.)
Unfortunately also this attempt failed since no Tor directory authority
operator answered. (Tor directory authorities are required to enforce any Tor
network wide rules unless it is part of the tor code itself.)

Starting with June 2019, after multiple reports about suspicious relays
remained with no reaction I stopped sending them to the list. Occasionally I
sent some suspicious relay groups to the public tor-talk mailing list instead
— which ironically was more fruitful."

[https://medium.com/@nusenu/the-growing-problem-of-
malicious-...](https://medium.com/@nusenu/the-growing-problem-of-malicious-
relays-on-the-tor-network-2f14198af548?source=---------5------------------)

Even more ironic: the very person who reported that issue and similar ones
(also on Twitter) got his Twitter account closed shortly afterwards (see the
other post on that site). So he has a much smaller audience than before.
Coincidence? Tinfoil hattery? Maybe. But certainly fishy.

------
ur-whale
I wonder if WireGuard is dynamic and lightweight enough to be hacked into
re-implementing something like Tor that dynamically shuffles data paths in
flight?

~~~
mirimir
Good point!

Actually, that works well enough with OpenVPN. I've managed a crude setup
that switches two-hop nested VPN chains at 10-minute intervals.[0]

I should also have mentioned the Orchid app for Android (and perhaps soon for
iOS).[1] It implements dynamic multi-hop routing, and reputation-based route
selection. As I understand it, commercial VPNs are providing OpenVPN servers
for the network, and I would hope that each hop uses a different provider.
Users buy bandwidth with supposedly anonymous Ethereum-based cryptocurrency.

Orchid may provide privacy and anonymity that's at least comparable to
do-it-yourself nested VPN chains, and perhaps comparable to Tor, in that
you're distributing information and trust among multiple providers who are
paid anonymously. And it's obviously far less work than nested VPN chains.
But as with Tor, there's the downside of trusting a complex system, which you
likely won't understand well.

There's also the issue that current smartphones are so horribly insecure that
using such tools arguably provides merely an illusion of security, privacy and
anonymity.

0)
[https://github.com/mirimir/vpnchains](https://github.com/mirimir/vpnchains)

1) [https://www.orchid.com/](https://www.orchid.com/)

------
Santosh83
Interesting related post:
[https://www.reddit.com/r/encryption/comments/exjk9e/the_majo...](https://www.reddit.com/r/encryption/comments/exjk9e/the_majority_of_tor_nodes_are_in_the_14_eyes/)

------
mirimir
Good work!

> However, there's a second attack. The attacker can run one or more hostile
> guard nodes. If he can knock me off enough guards, my tor daemon will
> eventually choose one of his guards. Then he can identify my actual network
> address and directly attack my server. (This happened to me once.)

I also worry about malicious guards. For local machines, I only hit Tor via
nested VPN chains. So even if I'm using a malicious guard, the attacker will
just get the IP of the exit VPN server.

For onion services, I typically use VMs in dedicated servers. And so I can put
at least one VPN-gateway VM between the onion server VM and the Tor guard.

You could also get all of the guards from tor's state file, and hard-code
them in torrc with EntryNodes. That wouldn't prevent an attacker from taking
the site offline, but at least they couldn't become your guard.
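
A minimal torrc sketch of that guard-pinning idea. The fingerprints below are
placeholders, not real relays; `StrictNodes 1` makes tor fail closed rather
than fall back to an unlisted guard:

```
EntryNodes $0123456789ABCDEF0123456789ABCDEF01234567,$FEDCBA9876543210FEDCBA9876543210FEDCBA98
StrictNodes 1
```

Without StrictNodes, EntryNodes is treated only as a preference, and tor may
still pick another guard if none of the listed ones are reachable.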

~~~
Operyl
Your last point falls flat if they’re already running a public guard though.

~~~
mirimir
Why?

It "falls flat" if you're using a malicious guard. But then, you're already
screwed, in that case.

~~~
Operyl
Because they could already be running a node that's indexed and has the guard
flag.

------
bitcoinz
Can Bitcoin full nodes running over Tor also be attacked similarly? I think
yes, but I'm not sure, and not every full node over Tor follows the same
configuration.

~~~
contingencies
All crypto services should be run through a decentralized, extra-codebase
voting scheme to generate a confidence interval. Mature services like Bitcoin
should have multiple clients and client versions available, geographically
distributed across a range of networks, frustrating identification and
creating an aggregate mechanism for rapidly detecting anomalous output and
mitigating unknown attacks. I implemented this ~8 years ago.

~~~
bitcoinz
8 years ago? Can you share any link which has details about the
implementation?

~~~
contingencies
Sorry no, it was internal work at Kraken.

~~~
bitcoinz
lol

------
bureaucrat
If you need a location-hidden service, it's wise to run your own guard.

~~~
hackerfactor1
Running your own guard is stupid unless you open it for the world to use.

If you're the only person using the guard, then the guard offers you zero
anonymity.

And if lots of people use your guard, then make sure it doesn't violate your
ISP's terms of service. (Most ISPs have a clause about residential customers
not running public services.) Also, have a plan in place for when (not if) you
receive legal notices about copyright infringement, child porn distribution,
and other acts that could be criminal in your country/city.

~~~
mirimir
As long as you're anonymous enough about it, I don't see why running your own
[private bridge] is any less anonymous than using an unpublished bridge, or a
Snowflake proxy.

An adversary with lots of intercepts could certainly figure it out. But
otherwise, how would anyone know?

And at least, it protects you from malicious guards.

Also, your point about violating a residential ISP's ToS is troubling,
because nobody in their right mind ought to be running any sort of Tor relay
from home. It's a near-sure way to get your IP address on many blocklists.

And about getting notices: that only happens for exit relays, not for guards
and middle relays.

Edit: Actually, I meant running your own unpublished bridge, not guard. In the
bridge torrc:

    
    
       ExitRelay 0
       BridgeRelay 1
       BridgeDistribution none
       PublishServerDescriptor 0
    

And in the client torrc:

    
    
       UseBridges 1
       UpdateBridgesFromAuthority 0
       Bridge [transport] IP:ORPort [fingerprint]

~~~
Astarte
A malicious guard is just a malicious node. It can also be used as some other
hop, and there can be non-malicious nodes without a guard flag. I think there
has been at least one publication taking a closer look at what malicious
middle nodes can do.

I'm not familiar with bridges or the Snowflake proxy, but I think this would
work:

Public bridges are public so no one cares about those. Now you run your own
private bridge. First of all running your own leads directly back to you.
Second it puts you on the list of even more paranoid people. Since you know
and connect to that private bridge one can assume you trust that bridge for
whatever reason which indicates some kind of "personal" relationship to that
bridge.

The private bridge now connects to the second hop. This is a malicious one.
The operator sees an IP which does not come from an official relay in the
consensus. I don't know if a node knows he is in the middle (at least a guard
and exit must know they are at the beginning and end of a chain, I guess?),
but if he does he would now know that a private bridge is connecting to it. So
you could enumerate private bridges.

If someone runs dozens of nodes, which is actually happening, this looks like
a viable option. Correct me if I'm wrong.

~~~
mirimir
Good questions :)

> First of all running your own leads directly back to you. Second it puts you
> on the list of even more paranoid people.

It doesn't point to "me", at least in meatspace or even as Mirimir. It points
to some anonymous persona, created specifically for that purpose. On its own
Whonix instance, through its own nested VPN chain, and using its own multiply
mixed Bitcoin. All totally disposable.

And to be clear, I'd use a _different_ anonymous persona for the onion service
itself, created specifically for that purpose. With all the features described
above.

> Since you know and connect to that private bridge one can assume you trust
> that bridge for whatever reason which indicates some kind of "personal"
> relationship to that bridge.

There are numerous private bridges, and many of them have only a few users.
Perhaps even just one user.

> The private bridge now connects to the second hop. This is a malicious one.
> The operator sees an IP which does not come from an official relay in the
> consensus. I don't know if a node knows he is in the middle (at least a
> guard and exit must know they are at the beginning and end of a chain, I
> guess?), but if he does he would now know that a private bridge is
> connecting to it. So you could enumerate private bridges.

Sure. Authoritarian regimes do that all the time.

But here's the thing. My Tor client will still only use that bridge. So it
can't be tricked into using a malicious bridge. And I can change private
bridges frequently, if I like. It's not at all hard to configure them.

------
mirimir
This finally hit the tor-dev list.[0] And Mike Perry's post is somewhat
alarming.[1]

Some excerpts:

> The "Oddly, sometimes the connection would succeed" sentence is a red flag
> sentence. If you are inclined to be paranoid, there is indeed a way to hide
> a real attack in what looks like a simple ntohl() bug here.

> This "sometimes" connection behavior is often seen in tagging attacks, where
> the adversary abuses Tor's AES-CTR mode stream-cipher-style properties to
> XOR a tag at one end of a circuit, and undo that tag only if the other
> endpoint is present. In this way, only the connections that actually succeed
> are those that the adversary is _certain_ that they are in both positions in
> the circuit (to perform Guard discovery, or if they are the Guard relay, to
> confirm deanonymization).

> If you want to hide your tagging attack as what looks like a simple ntohl()
> bug here, you send your intro2 with the reverse IP address. Then, when your
> middle node suspects a candidate rend cell (via timing + circuit setup
> fingerprinting, to have a guess), it can confirm this guess by undoing the
> tag by XORing the cipherstream with ntohl(ip) XOR ip.

> <snip>

> Aka a correctly performing rend cell tag hidden in what looks like a very
> common networking bug.

> This cipherstream tagging weakness has had a few proposals to fix, most
> recently:
> [https://gitweb.torproject.org/torspec.git/tree/proposals/295...](https://gitweb.torproject.org/torspec.git/tree/proposals/295-relay-
> crypto-with-adl.txt)

> BUT DON'T PANIC: There is also an alternate explanation for the "sometimes
> succeed" red flag in this particular case, other than a tagging attack.

> <snip>

> So most likely, this is just a poorly written Tor client, _but_ there still
> is the possibility that it is an attack cleverly disguised as a poorly
> written Tor client.. :/

> It may be a good idea for Neal's/our monitoring infrastructure to keep an
> eye on this behavior too, for this reason, to test for the side channel
> usage + rend XOR "correction" vs just dumb bug that is sometimes connecting
> by getting lucky (and thus never properly reverses the rend IP address). If
> this is indeed just a bug, when these rends do succeed, the IP address
> should never be correct.

> The way to do that would be to build rend circuits using 3rd hops that you
> (the service operator) control, so that that 3rd hop can check if the rend
> succeeds because the TLS connection happened to be open (benign behavior) or
> because the reversed ntohl() got corrected somehow (attack).

0) [https://lists.torproject.org/pipermail/tor-
dev/2020-February...](https://lists.torproject.org/pipermail/tor-
dev/2020-February/014146.html)

1) [https://lists.torproject.org/pipermail/tor-
dev/2020-February...](https://lists.torproject.org/pipermail/tor-
dev/2020-February/014154.html)
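
The tagging weakness quoted above comes down to CTR-mode malleability: XORing
a tag into the ciphertext at one hop flips the same bits in the plaintext,
and only a colluding endpoint that knows the tag can undo it. Here's a toy
model, with a plain XOR keystream standing in for AES-CTR; none of this is
Tor code:

```python
# Toy demonstration of stream-cipher tagging. A malicious guard XORs a tag
# into a cell; an honest endpoint decrypts garbage (circuit fails), while a
# colluding endpoint removes the tag first and the circuit succeeds.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)        # stands in for the AES-CTR keystream
plaintext = b"rendezvous cell!"   # 16-byte toy cell
tag = b"\x5a" * 16                # the adversary's fixed, nonzero tag

ciphertext = xor(plaintext, keystream)
tagged = xor(ciphertext, tag)     # guard tags the cell in transit

# Honest endpoint: decryption yields garbage, so the connection fails.
garbled = xor(tagged, keystream)
assert garbled != plaintext

# Colluding endpoint: undo the tag, then decrypt. The connection succeeds
# exactly when the adversary holds both ends of the circuit.
recovered = xor(xor(tagged, tag), keystream)
assert recovered == plaintext
```

This is why the "sometimes the connection would succeed" pattern is a red
flag: success correlates with the adversary controlling both positions.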

~~~
mirimir
Also see [https://github.com/mikeperry-
tor/vanguards](https://github.com/mikeperry-tor/vanguards)

> Even after deployment of the new v3 onion service protocol, the attacks
> facing onion services are wide-ranging, and still require more extensive
> modifications to fix in Tor-core itself.

> Because of this, we have decided to rapid-prototype these defenses in a
> controller addon in order to make them available ahead of their official
> Tor-core release, for onion services that require high security as soon as
> possible.

------
jimbo1qaz
The embedded images are ugly because of the downscaling (the URL contains
&size=600).

~~~
hackerfactor1
Fixed.

