Hacker News | Astarte's comments

>So you'd need lots of guards

Can't edit my other reply. The operator in the OP runs 68 nodes. I did not look at each one, but it looks like most, if not all, of them are guard nodes. This article mentions someone running at least 10% of the guard capacity: https://medium.com/@nusenu/the-growing-problem-of-malicious-... Tor's design has obviously failed here: anyone can become a guard, you just need some patience. Other nodes, like DirAuths, bridges, and fallback directories, can also discover Tor users.
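To see why "some patience" is enough: here's a toy back-of-the-envelope calculation (my own simplification, not from the article) of how the risk compounds as a client rotates guards over time, assuming guard selection is roughly proportional to capacity.

```python
# Toy estimate: if a fraction f of total guard capacity is malicious,
# a client picking guards weighted by capacity lands on a malicious one
# with probability ~f per selection. Over n guard rotations, the chance
# of having used at least one malicious guard is 1 - (1 - f)**n.

def p_compromised(f: float, n: int) -> float:
    """Probability of at least one malicious guard after n rotations."""
    return 1 - (1 - f) ** n

# With 10% malicious guard capacity (the figure from the linked article)
# and a fresh guard every few months, a few years adds up quickly:
for n in (1, 4, 8, 12):
    print(n, round(p_compromised(0.10, n), 3))
```

So an attacker holding a static 10% of guard capacity eventually sees most long-term clients; they don't need to win on the first draw.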


Yes, that is an issue for Tor. But it's still arguably worse for I2P, because all peers can connect to each other.

But regardless, that's why I never use Tor or I2P from home, except through nested VPN chains. I don't worry so much about VPSes, because I'm very careful to isolate myself from them. For dedicated servers, however, which are more expensive, and where I invest more work in setup, I also use nested VPN chains.


>Some combination of obfuscation, encryption, mixing, and plausible deniability seems to be the best bet.

Go on.


I.e., talk in code, use a VPN, use shared communication streams like Tor, and make your behavior look legit and boring. Of course, I am not speaking as a professional opsec specialist agent.

As far as you know.


>But it's disingenuous to claim that even using a private guard (which isn't possible, as far as I know)

I have been thinking about this for a while, too. There is some Tor fork which allows non-exit nodes to exit; it was posted on tor-talk a while ago. For a private guard, you would need to change the local consensus file to include the private guard. Then you would also need to control the next hop, so that it recognizes your guard as the first hop and connects you to the third hop. I don't see why this wouldn't work in principle.


Huh. That is an interesting idea.

So you could have Tor exits that aren't published.

That would get around the CAPTCHA plague for Tor users.

Another option that I've considered is IPv6. Relays with both IPv4 and IPv6 must publish their IPv4 address in order to be approved for use. But as far as I know, there's no reason why they couldn't preferentially push exit traffic through IPv6, and indeed use a different IPv6 address for each circuit.
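To be clear, stock Tor has no per-circuit exit-address option; this is hypothetical. But the mechanics are cheap: a relay with a routed /64 has 2^64 addresses to mint from, assuming the OS lets it bind arbitrary addresses in the prefix. A toy sketch (using the documentation prefix 2001:db8::/32):

```python
import ipaddress
import secrets

def random_addr_in_prefix(prefix: str) -> ipaddress.IPv6Address:
    """Pick a uniformly random host address inside an IPv6 prefix.

    Toy sketch only: illustrates how a (hypothetically patched) relay
    with a routed /64 could mint a fresh exit source address per circuit.
    """
    net = ipaddress.IPv6Network(prefix)
    offset = secrets.randbelow(net.num_addresses)
    return net[offset]

addr = random_addr_in_prefix("2001:db8:1234:5678::/64")
print(addr)  # a different address in the /64 on each run
```

Blocklisting individual addresses becomes useless against that; a blocker would have to ban the whole prefix.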


A malicious guard is just a malicious node. It can also be used as some other hop, and there can be malicious nodes without the Guard flag. I think there has been at least one publication taking a closer look at what malicious middle nodes can do.

I'm not familiar with bridges or the Snowflake proxy, but I think this would work:

Public bridges are public, so no one cares about those. Now say you run your own private bridge. First of all, running your own leads directly back to you. Second, it puts you on the list of even more paranoid people. Since you know and connect to that private bridge, one can assume you trust that bridge for whatever reason, which indicates some kind of "personal" relationship to it.

The private bridge now connects to the second hop, which is a malicious one. The operator sees an IP which does not belong to an official relay in the consensus. I don't know whether a node knows it is in the middle (at least a guard and an exit must know they are at the beginning and end of a circuit, I guess?), but if it does, it would now know that a private bridge is connecting to it. So you could enumerate private bridges.

If someone runs dozens of nodes, which is actually happening, this looks like a viable attack. Correct me if I'm wrong.
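The enumeration step above is just set subtraction. A toy illustration (all IPs below are made-up documentation addresses, not real relays):

```python
# A malicious middle relay compares the source IPs of incoming
# circuit-extension connections against the relay IPs listed in the
# public consensus. Anything not in the consensus is, by elimination,
# a bridge (public or private) or a misbehaving client.

consensus_relay_ips = {"198.51.100.7", "203.0.113.40", "192.0.2.99"}

incoming_connections = [
    "203.0.113.40",    # known relay extending a circuit
    "198.51.100.7",    # known relay
    "198.51.100.222",  # not in the consensus -> bridge candidate
]

bridge_candidates = [ip for ip in incoming_connections
                     if ip not in consensus_relay_ips]
print(bridge_candidates)
```

Run across dozens of relays over months, that passive filter would accumulate a sizable list of bridge candidates.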


Good questions :)

> First of all running your own leads directly back to you. Second it puts you on the list of even more paranoid people.

It doesn't point to "me", at least in meatspace or even as Mirimir. It points to some anonymous persona, created specifically for that purpose. On its own Whonix instance, through its own nested VPN chain, and using its own multiply mixed Bitcoin. All totally disposable.

And to be clear, I'd use a different anonymous persona for the onion service itself, created specifically for that purpose. With all the features described above.

> Since you know and connect to that private bridge one can assume you trust that bridge for whatever reason which indicates some kind of "personal" relationship to that bridge.

There are numerous private bridges, and many of them have only a few users. Perhaps even just one user.

> The private bridge now connects to the second hop. This is a malicious one. The operator sees an IP which does not come from an official relay in the consensus. I don't know if a node knows he is in the middle (at least a guard and exit must know they are at the beginning and end of a chain, i guess?), but if he does he would now know that a private bridge is connecting to it. So you could enumerate private bridges.

Sure. Authoritarian regimes do that all the time.

But here's the thing. My Tor client will still only use that bridge. So it can't be tricked into using a malicious bridge. And I can change private bridges frequently, if I like. It's not at all hard to configure them.
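For what it's worth, pinning a client to one private bridge really is just two torrc lines (the address and fingerprint below are placeholders, not a real bridge):

```
# Client torrc: use only this one private bridge as the entry point.
UseBridges 1
Bridge 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567
```

Swapping bridges is just editing that `Bridge` line and reloading tor.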


>Wouldn't your onion service uptime become correlated with the guard's uptime?

Yes. This happens on a fairly regular basis. DDoSing a Tor node is, in most cases, just an attempt to figure out someone's IP address or guard node. If you run a big darknet site dealing in things like drugs, CP, or fraud, and you want to stay around for a while, you need to run lots of nodes; otherwise you will probably be pwned within hours. There is no point in doing that for legal onion sites like Facebook's, because everyone knows their real operators. Now, when you run lots of nodes and at the same time a big darknet website, you are in the perfect position to run traffic-correlation attacks yourself. There is a tutorial available which suggests using deceptive methods to spoil such tracing efforts: for example, when website A goes down, you also shut down your site, or when there is a big blackout of AWS US, etc.
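The uptime-correlation attack itself is embarrassingly simple. A toy sketch (fabricated data; real attacks have to cope with noise, churn, and far more candidates):

```python
# Compare an onion service's observed up/down samples against each
# candidate relay's uptime, and score how often the two agree. The
# relay whose downtime tracks the onion's is the likely guard.

def match_score(onion_up: list[int], relay_up: list[int]) -> float:
    """Fraction of sampling intervals where both were in the same state."""
    agree = sum(a == b for a, b in zip(onion_up, relay_up))
    return agree / len(onion_up)

onion = [1, 1, 0, 1, 0, 0, 1, 1]          # onion service reachable?
relays = {
    "relayA": [1, 1, 0, 1, 0, 0, 1, 1],   # tracks the onion exactly
    "relayB": [1, 1, 1, 1, 1, 0, 1, 1],
    "relayC": [0, 1, 1, 0, 1, 1, 0, 0],
}

best = max(relays, key=lambda r: match_score(onion, relays[r]))
print(best, match_score(onion, relays[best]))
```

The deceptive countermeasure mentioned above (shutting down whenever an unrelated site or AWS region goes dark) works precisely by injecting false agreements into this score.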


This has been mentioned on the Tor IRC several times, and also in some other places. No one cared ...


>They wouldn't go through the trouble to identify and blacklist exit nodes and present users with a captcha otherwise.

Do you really think they care about the 1% (made-up number) of Tor users using their service legitimately? Or do they just want to avoid being attacked or having their service abused?


That's a good question indeed. I don't have an answer, but some people got similar results when pointing out issues internally. Only when they published those on the mailing list, where everyone can read them, did things start to gain some traction.

"In April 2018 a Tor core member — the most active Tor Project person on that closed mailing list — made an attempt to initiate a “do not do” relay requirements list to improve and streamline the handling of malicious Tor relay reports. (I’m not mentioning his name since he does not want to be publicly associated with bad-relays handling for safety reasons.) Unfortunately also this attempt failed since no Tor directory authority operator answered. (Tor directory authorities are required to enforce any Tor network wide rules unless it is part of the tor code itself.)

Starting with June 2019, after multiple reports about suspicious relays remained with no reaction I stopped sending them to the list. Occasionally I sent some suspicious relay groups to the public tor-talk mailing list instead — which ironically was more fruitful."

https://medium.com/@nusenu/the-growing-problem-of-malicious-...

Even more ironically, the very person who reported that issue and similar ones (also on Twitter) got his Twitter account closed shortly afterwards (see the other post on that site). So he has a much smaller audience than before. Coincidence? Tinfoil hattery? Maybe. But certainly fishy.


They could still figure out your guard and attack it directly.


>While Tor browser is very well hardened, relative to Firefox

Some say it is one of the most attacked browsers ...

>However, we have no clue how many users in authoritarian regimes have been pwned by similar malware, over what we'd call human rights issues.

Maybe not as many as you believe. The OP is talking more about traffic correlation. The FBI's attack came from the browser; that can also aid correlation attacks, but it was irrelevant in the FBI's case. In authoritarian regimes, you can just attack from the network side and log each IP which tries to connect to a Tor node. Then you visit those people personally. Or, as in so many authoritarian regimes, you just block Tor completely. Neither a firewall, nor Tails, nor Whonix will protect you against traffic-correlation attacks.


You can use unregistered obfuscating bridges when the Tor protocol is blocked. I'm not sure how effective that is, though, since I've never needed to use them.
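For reference, an unlisted obfs4 bridge is configured on the client like this (address, fingerprint, and cert are placeholders; the obfs4proxy path varies by distro):

```
# Client torrc: reach Tor through an unlisted obfs4 bridge, so the
# wire traffic doesn't look like the (blockable) Tor protocol.
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Bridge obfs4 192.0.2.20:8443 0123456789ABCDEF0123456789ABCDEF01234567 cert=EXAMPLECERTSTRING iat-mode=0
```

Since the bridge isn't published anywhere, a censor can't blocklist it ahead of time; they'd have to detect the obfuscated traffic itself.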


I'm not sure about their security either, see my other post below.

