
Someone Created a Tor Hidden Service to Phish My Tor Hidden Service - jstanley
http://incoherency.co.uk/blog/stories/hidden-service-phishing.html
======
tyingq
Not much you can do other than fight a war of escalation.

If you want to give the phisher lots to do, create, say, 10 versions of your
site where the html/css/js are tightly intertwined, such that html version "a"
only works with css/versiona.css and js/versionb.js. Mismatches create a site
you can't view, interact with, etc.

Then, for each version, put your countermeasures in your main js and css
files, and tweak your site so that it doesn't work at all without those main
js and css files. Add subresource integrity to each version of the main
html that pulls them in. Obfuscate each version of js and css.
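
As a rough sketch of generating those integrity values (python; the
filenames are hypothetical):

    import base64
    import hashlib

    def sri_hash(path):
        # SRI value: algorithm name plus the base64 of the file's digest
        with open(path, "rb") as f:
            digest = hashlib.sha384(f.read()).digest()
        return "sha384-" + base64.b64encode(digest).decode()

    # Hypothetical versioned asset; emit the tag for the version "a" html
    tag = '<script src="js/versiona.js" integrity="%s" crossorigin="anonymous"></script>'
    print(tag % sri_hash("js/versiona.js"))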

And vary up the countermeasures per version. Like displaying the bitcoin
address with css (selector:after {content: 'abc'}), with an image, dynamically
with obfuscated javascript, etc.

Round robin which version of the page is served. This will make caching
anything hard for the phisher, and the variations of countermeasures might get
them to find a softer target.

Shitload of work, though, just because some ahole wants to skim money.

~~~
grubles
Javascript is disabled out of the box in Tor Browser if I remember correctly.
At least it should be.

~~~
tyingq
Nope..
[https://www.torproject.org/docs/faq.html.en#TBBJavaScriptEna...](https://www.torproject.org/docs/faq.html.en#TBBJavaScriptEnabled)

------
ageisp0lis
Isn't this connected to crimewave's "Rotten Onions"? The author doesn't seem
to make the connection:
[https://twitter.com/campuscodi/status/917231902033104896](https://twitter.com/campuscodi/status/917231902033104896)

~~~
firloop
crimewave claims to donate 25% of their stolen coins to charity:
[https://www.reddit.com/r/onions/comments/4c9xlp/robbin_the_h...](https://www.reddit.com/r/onions/comments/4c9xlp/robbin_the_hood_25_of_the_money_i_steal_from/?st=j8qlmbd0&sh=ab728ca4)

~~~
nrki
Interestingly, the smsprivacy guy sent a payment to a faked bitcoin address,
and that address has since paid an address associated with the Human Rights
Foundation:
[https://blockchain.info/address/1GM6Awv28kSfzak2Y7Pj1NRdWiXs...](https://blockchain.info/address/1GM6Awv28kSfzak2Y7Pj1NRdWiXshMwdGW)

~~~
jstanley
This is a fake human rights foundation.

------
Mizza
The guy who runs the phishing servers, "crimewave" (everything is automated;
all hidden services have these BTC phisher clones), actually posts on r/onions
sometimes. I had a good discussion about possible counter-measures with him.

~~~
devhead
do tell

~~~
Mizza
Here's one of the threads from last year:
[https://www.reddit.com/r/onions/comments/45k2ux/mitm_phishin...](https://www.reddit.com/r/onions/comments/45k2ux/mitm_phishingonion_cloner/czygyem/)

I appear to be suggesting a) sending BTC addresses as images and b) a
third-party visual login verification service.

------
ris
So clearly the random garble in smspriv6fynj23u6.onion isn't recognizable
enough for a user to spot a bogus one. And so as long as an adversary is
willing to spend the same amount of money mining their bogus key as you are in
generating the original, you're never going to win.

My suggestion would be to, during the mining process, search the "random"
portion of the domain for "recognizable strings" (probably just any dictionary
words) and keep mining until a domain is found with a suitable level of
"recognizability". This way you get the extra "strength" of having mined for a
significantly longer string at only a fraction of the cost whilst an adversary
would have to search for a much more specific string to mimic you
convincingly.
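
A rough sketch of the filter (python; the wordlist path and the candidate
address are just examples):

    # Keep mining until the random portion contains a dictionary word.
    def recognizable(candidate, words, min_len=5):
        return any(w in candidate for w in words if len(w) >= min_len)

    with open("/usr/share/dict/words") as f:
        words = {w.strip().lower() for w in f}

    # Each candidate is the 16-char part of a freshly mined v2 address.
    print(recognizable("smsprivacyfynj23", words))  # True: contains "privacy"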

This is somewhat similar to facebook having "corewwwi" following their domain
- not words they were specifically looking for I'm sure, but notable, and so
an imitator would have to brute-force it too.

Consider it to be a bit like having "correcthorsebatterystaple" on the end of
your domain (yeah I know it's too long).

~~~
jstanley
You could equally say that a user isn't going to be able to spot the
difference between "news.ycombinator.com" and "news.ycombinator.io".

I think the answer is not to blindly trust convincing-looking strings and to
actually ensure that you're browsing the correct site.

~~~
noobermin
Whenever talking about security, it's all about probability. The probability
of a user distinguishing "io" from "com" is _higher_ than them distinguishing
"frh2dj3" and "frh2di3".

~~~
jstanley
But if you generate a string at random, it's more likely to be "frh2dj3" vs
"swnohl6", which is just as easy to spot as "io" vs "com".

------
the_stc
This is why we need SSL certs for .onions. DigiCert was doing this, but they
told me it is on hold. Maybe when V3 hidden services are out?

With an SSL cert, you get a cert for your .onion PLUS your clearnet domain.
Then users can access the .onion and see your real enterprise's name.

This should be doable even for extrajurisdictional companies like mine. The
key is that your clearnet site should be a TCP-level proxy to the hidden
service. That way the SSL keys are never on an easily-discoverable system so
only IP addresses can be logged, not any contents. Slight plug: Here's the
tech design for our system: [https://medium.com/@PinkApp/pink-app-trading-latency-for-ano...](https://medium.com/@PinkApp/pink-app-trading-latency-for-anonymity-and-other-techniques-815ee21c6da4) (ignore the clickbait title).
This should work just fine for .onions too, even with the clearnet entry
point. For a DarkNet Market, most small buyers are probably safe visiting over
clearnet.
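
A minimal sketch of the relay idea (python; the ports and upstream address
are hypothetical, and in practice the upstream side would be reached through
the local Tor client):

    import socket
    import threading

    UPSTREAM = ("127.0.0.1", 8443)  # hypothetical local tunnel into Tor

    def pipe(src, dst):
        # Relay raw bytes; no TLS termination, so no keys live on this box.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def handle(client):
        upstream = socket.create_connection(UPSTREAM)
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
        pipe(client, upstream)

    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 443))  # binding to 443 needs root
    server.listen(16)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()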

~~~
wybiral
What's the purpose of running a hidden service if you're clearly identifying
yourself?

Couldn't Tor users still visit your "clearnet" domain privately (it is just an
onion-routing proxy, after all)?

~~~
the_stc
The point is to remain user-friendly while hiding the actual webservers,
databases, and anything else that LE might want to take. An onion-routing
proxy is easily set up, torn down, and moved around quickly, making it a
harder target for persistent surveillance.

~~~
runeks
How are .onion service providers supposed to communicate out to their users
whether or not their site uses a certificate?

And, presuming they are able to do this, why not just use that communication
channel to communicate the correct .onion URL to the user in the first place
(thus removing the need for a certificate authority)?

EDIT: Perhaps it would make sense to create a separate URL type for Tor
services whose keys are signed by a certificate authority? So the URL would
become e.g. secure.smspriv6fynj23u6.onion, and the Tor browser would reject
sites prefixed with “secure.” that don’t have their key signed by a
certificate authority. This way, an attacker must register with a certificate
authority in order to phish a “secure.” Tor site.

~~~
wybiral
> Perhaps it would make sense to create a separate URL type for Tor services
> whose keys are signed by a certificate authority?

The onion IS a proof of key. If you use the whole onion address (which is a
hash of the public key) then Tor requires that the hidden service be able to
prove they own the private key. It's like a built-in CA.

The problem to me is that knowing someone has a key isn't as interesting as
knowing that the person is a trusted source. And being anonymous takes some of
the responsibility away.

It makes more sense to me that someone just use an HTTPS clearnet site and
users who want to protect their own IP address can access it from Tor (it
works just fine).

Protecting the site owner's identity and then wanting to prove that identity
to stop phishing attempts seems at odds to me.

------
bflesch
I'm not a tor expert, but could this be solved by using something like
namecoin for DNS within tor? So there would be a "proper" domain system for
the .onion routes?

The whole confusion comes from the fact that tor domains have this random
string added to the name you actually want to take, right?

~~~
koolba
The string isn’t random. It’s a public key with the private key of the service
able to decrypt content sent to it.

Anyone can generate a key with a vanity prefix, it just takes computing power
and the longer the match, the greater the computing power involved.
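
Back-of-envelope (python; the hash rate is made up):

    # Each base32 character carries 5 bits, so a k-char prefix matches
    # one candidate key in 32**k on average.
    RATE = 1e7  # hypothetical hashes per second on one machine
    for k in (7, 8, 14):
        tries = 32 ** k
        print(k, "chars:", tries / RATE / (3600 * 24 * 365), "years")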

~~~
flashmob
Facebook's hidden service address is particularly impressive:
[https://facebookcorewwwi.onion/](https://facebookcorewwwi.onion/)

It seems like everything except the 'i' is a prefix; a lot of computing must
have gone into generating it.

One tool to do it is called 'Shallot'
[https://github.com/katmagic/Shallot](https://github.com/katmagic/Shallot)

The readme includes a table of estimated computing time required. A 15-char
prefix like Facebook's is not even on the table, and a 14-char prefix is
estimated to take 2.6 million years. There is also Scallion, a GPU-based tool,
which should be an order of magnitude faster:
[https://github.com/lachesis/scallion/blob/gpg/README.md](https://github.com/lachesis/scallion/blob/gpg/README.md)

Also, technically, the onion addresses are not public keys, but derived from
a public key. It's actually a hash of the public key.

It appears that the hashing algorithm used is SHA1. Source: The last few lines
of the easygen function
[https://github.com/katmagic/Shallot/blob/master/src/math.c](https://github.com/katmagic/Shallot/blob/master/src/math.c)
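
A sketch of the derivation (python; pass in the DER-encoded RSA public key):

    import base64
    import hashlib

    def onion_address(der_pubkey_bytes):
        # v2 onion = base32 of the first 80 bits of SHA1(public key)
        digest = hashlib.sha1(der_pubkey_bytes).digest()
        return base64.b32encode(digest[:10]).decode().lower() + ".onion"

    # Vanity tools like Shallot just regenerate keys until the
    # resulting address starts with the wanted prefix.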

~~~
tylersmith
FWIW I've struggled to get keys generated by Shallot to persist very long but
haven't found the cause. We've had to fall back to a non-vanity address. If
anybody knows what I'm doing wrong please let me know!

~~~
jstanley
What do you mean by "struggled to get keys to persist"?

It shouldn't make any difference how the keys were generated.

~~~
tylersmith
I mean on the network. It stops resolving after a few hours.

~~~
jstanley
I don't think the way you've generated the key is likely to be the source of
the problem, although I don't have any good ideas about what the problem might
be, beyond the obvious (is the server still online? is tor still running? is
it still using the correct config?)

------
wybiral
IMO the risk here is that onion link lists and search engines can't be
trusted. How else are people getting these bogus onions? There's no authority
on these "hidden services" and hiding identity is kinda the point of Tor so
phishing is easy.

Onion addresses are basically hashes of the public key of the hidden service.
Trusting only the first few characters to match (for these "vanity onions")
when identifying the service is just a bad idea...

~~~
peteretep
Is this significantly different from the original site being at
anonymoussms.com and me registering anonymoussms.io and doing the same thing?

I don’t think I buy that this attack isn’t just as possible on the real web.

------
larkeith
Excellent writeup, and a good example of what types of attacks to expect as we
move forward towards a more decentralized web.

~~~
ewams
We are moving towards a more centralized web, not decentralized. The
technology industry, and the Internet, are consolidating. Look at the news for
all the companies being purchased by facebook, google, microsoft, amazon,
apple, hpe and cisco. Also look at the ISPs: Time Warner and Spectrum,
Verizon, AT&T, etc. are all consolidating. It will continue to become harder
to hide, or be anonymous, as we go down this path, because not only will the
data be held by just a few players, but so will the transmission lines.

~~~
jstanley
The more the "mainstream" web becomes centralised, the more technical people
are being pushed towards working on tools for decentralisation and anonymity.

It's happening, just not in the places that most people tend to look.

~~~
cookiecaper
It's not happening in a meaningful way. The technology for a decentralized web
is already here -- it's just the normal web. Our artificial legal barriers are
what keep it centralized, and those will be an issue regardless of technical
innovation.

Legalize scraping and fix copyright law so that users can truly assert
ownership over the content they generate, and this will quickly become a non-
issue.

P2P technology is cool and it has its uses; I'm even developing a
decentralized distribution thing as a side project. _But_ it is more work,
which means in the typical web browsing use case, it is slower and less
convenient than a conventional 1:1 conversation with a stable endpoint.

Suggesting that everyone introduce four hops of latency or that we all
participate in a multi-billion device DHT (which all just translate to "much
slower" to the typical user) is just not a practical solution to these
problems.

The good and correct solution is to look to the root cause and fix it. That
root cause is the incentive structure, including legal and financial
arrangements, that heavily encourages the "AOLization" of the web and allows
the AOLizers to use the courts to clobber the hackers that try to re-liberate
it.

~~~
zrm
> Suggesting that everyone introduce four hops of latency or that we all
> participate in a multi-billion device DHT (which all just translate to "much
> slower" to the typical user) is just not a practical solution to these
> problems.

DHTs can actually scale _really well_.
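
Rough numbers (python): a Kademlia-style lookup takes O(log2 n) hops, so even
billions of nodes stay manageable:

    import math
    print(math.ceil(math.log2(4e9)))  # ~32 hops for four billion nodes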

But systems don't have to be fully distributed to be less centralized. For
example DNS stakes out a strong middle ground.

The problem with e.g. Facebook is that it's a closed system. You can't "apt-
get install" a Facebook daemon and run your own Facebook server whose users
can still talk to the people using facebook.com.

Part of the reason for that is the law but most of it is just network effects.
Most people don't use GNU social because most people don't use GNU social.

~~~
cookiecaper
>DHTs can actually scale really well.

Sure, DHTs are efficient, but they're not _cost-free_. You are still
introducing a lot of unreliable, inconsistent, and slow hosts into the mix,
and potentially having to traverse many of them to get access to the entirety
of the content you're seeking. This is not a pleasant user experience, without
even getting into the significant privacy and security tradeoffs, which can
only be mitigated by making the system do more hops, more crypto, more
obfuscation (which means, slower still). Octopus DHT is cool, but it is by no
means a speed demon.

>But systems don't have to be fully distributed to be less centralized. For
example DNS stakes out a strong middle ground.

There isn't an obviously-better "less-centralized" solution for DNS as far as
I know. See Zooko's Triangle. [0]

>The problem with e.g. Facebook is that it's a closed system. You can't "apt-
get install" a Facebook daemon and run your own Facebook server whose users
can still talk to the people using facebook.com.

Saying "it's a closed system" is forfeiting the point. I can send packets to
facebook.com and then turn around and send packets to any other destination.
Why can I not send packets in such a sequence that the packets obtained from
Facebook are then transmitted to some other place that makes it more
convenient to use them? Because if I do that, Facebook will sue the crap out
of me, as they've done to others. There is no real technical barrier
preventing this, it's purely legal.

Facebook, Google, et al are not in their position _just_ due to network
effects. They've both sued small companies because they both know that if
people can get the same data through competing interfaces or clients, if it's
simple and easy to multiplex the streams and move the content around, the
consumer won't need their company specifically anymore. They'll be relegated
to replaceable backend widgets. That's their nightmare!

Facebook and Google are middlemen, brokers between what the user really wants
and the people who are providing it. They are terrified of a world where their
brokerage is unneeded, and they work hard to make sure that you don't realize
it.

Twitter had the same realization about multiplexed streams, leading to their
infamous crippling of third-party clients. Craigslist had this realization in
their brutal about-face with Padmapper, after coming to their senses and
noting that it posed a serious threat to their business. The entity that
controls the user's attention controls the game.

It is at this point practically illegal to use a third-party exporter to read
out and easily transfer the content from your Facebook page to another site.
Even if it's 100% original content that you own completely from a copyright
perspective, you can't run a program to read it out because the Copyright Act
has been interpreted to mean that loading someone's HTML into your computer's
memory could be an act of copyright infringement (this is called "the RAM Copy
doctrine").

It's also usually illegal to download that page with an unapproved browsing
device such as a crawler or a scraper; this is exceeding authorized access
under the Computer Fraud and Abuse Act. You agree to all of this when you
agree to the site's Terms of Service, but your agreement is not necessarily
needed for these provisions to be effective.

Why are there no easy "Try NewFace.com Services, We'll Copy Your Friend
List, Post History, and Photo Albums right over!"? Because you'll get sued and
left owing Facebook $3 million if you try to do that. [1] :)

Once you throw something into the Google or Facebook black hole, they make it
very difficult to pull it back out again. That's not an accident, and it's
naive to just attribute it all to organic "network effects". The competition
is dead not because no one else wants to compete for these users, but because
they'll be sued to death if they do it in a way that's accessible to the mass
market.

[Note: I know that both Google and Facebook have buried deep in the innards of
the user configuration a mechanism that allows you to request the generation
of a crudely-formatted, multi-volume zip archive representing some or all of
your account data, and that you can receive some email some hours later
delivering this data in chunks. This is not a practical way to move data for
most people, because even _if_ you get someone to go through all this pain,
the amount of time it takes to build, process, collect, and upload these
archives ensures it is essentially a one-way thing. It can and should be a
free-flowing exchange of information, which the internet can already easily
facilitate. The only barriers are artificial, legal barriers.]

[0]
[https://en.wikipedia.org/wiki/Zooko%27s_triangle](https://en.wikipedia.org/wiki/Zooko%27s_triangle)

[1]
[https://en.wikipedia.org/wiki/Facebook,_Inc._v._Power_Ventur...](https://en.wikipedia.org/wiki/Facebook,_Inc._v._Power_Ventures,_Inc.)

------
ezoe
> How can customers protect themselves?

> SMS Privacy customers should make sure they're browsing either
> smsprivacy.org using HTTPS, or, if using Tor, smspriv6fynj23u6.onion is the
> only legitimate hidden service. Anything else is almost certainly harmful in
> one way or another.

Posting this very important information, which must not be modified by a
man in the middle, on a non-TLS web site.

I really wonder about the author's ability to operate a privacy-centric SMS
service on the Tor network.

Or am I reading the web page through a malicious proxy?

~~~
jstanley
Absolutely right. It's shameful that I still don't have HTTPS on my blog. I've
got no excuse.

------
pbhjpbhj
If you can get js to execute on the server, can't you get info that
disambiguates the server?

Is there a way to make the server run a bitcoin miner, for fun really...?

~~~
jstanley
The JavaScript I talked about ran in browsers when accessed via the phishing
site, it didn't run on the server.

------
finnn
I wonder what the requests look like coming through the proxy. Is there a
distinct user agent, or some other header sent with the request?

~~~
jstanley
It passes through the client User-Agent unchanged. I've not yet looked at
anything else although I doubt it will do anything but pass through the client
headers.

It might be interesting to send confusing or contradictory request headers and
see how it reacts.

~~~
finnn
Yeah, I'd definitely try sending it all sorts of weird things. Can you set up
a path that will respond with a ton of data? Maybe a bunch of things that it
will try to parse? etc.

~~~
jstanley
"GET /headers" will dump the request headers, but note this has gone through
one layer of munging by nginx and another by Mojolicious, so don't draw any
conclusions about what you see in the response without bearing that in mind!

I'd be interested to hear if you spot anything surprising.

EDIT: And in case you haven't seen it, "torsocks" is a tool that lets any (?)
other program speak Tor, e.g. it works with both curl and nc.
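
The same idea in python, if you prefer (rough sketch; needs the requests and
PySocks packages installed, and a local Tor client listening on port 9050):

    import requests

    proxies = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS resolved via Tor
        "https": "socks5h://127.0.0.1:9050",
    }
    r = requests.get("http://smspriv6fynj23u6.onion/headers", proxies=proxies)
    print(r.text)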

~~~
finnn
Cool! Looks like the Accept-Encoding header is being set to "gzip, deflate,
sdch" regardless of what I send.

------
phire
Regarding the delay when generating addresses:

It's probably just doing an RPC call to an actual bitcoin node, which (in my
experience) can actually take a few seconds depending on how fast a computer
the node is running on.
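
Something like this, presumably (python sketch; the credentials are
hypothetical and the port is bitcoind's default):

    import requests

    # JSON-RPC call to a local bitcoind; credentials are hypothetical.
    resp = requests.post(
        "http://127.0.0.1:8332/",
        json={"jsonrpc": "1.0", "id": "0", "method": "getnewaddress", "params": []},
        auth=("rpcuser", "rpcpassword"),
    )
    print(resp.json()["result"])  # fresh address; the call itself can be slow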

------
DyslexicAtheist
thanks for disclosing this [https://smsprivacy.org/info-leak](https://smsprivacy.org/info-leak)
in such a proactive way. Deloitte, Equifax & Co could all learn something from
you.

------
campuscodi
The fake site was most likely created by the Rotten Onions crew.

------
nomadiccoder
I like the simplicity of this website design!

------
vmp
Maybe reducing the entropy by almost half for a vanity key/domain name wasn't
such a good idea.

~~~
nsuser3
What are you talking about?

~~~
vmp
In the address, 7 out of 16 characters are fixed.
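
The arithmetic (python):

    total = 16 * 5        # 80 bits in a v2 address (base32 = 5 bits/char)
    fixed = 7 * 5         # 35 bits pinned by the "smspriv" prefix
    print(fixed / total)  # 0.4375, i.e. "almost half"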

------
XR0CSWV3h3kZWg
0.002 BTC seems generous these days!

Interesting post, I wonder how common this is.

~~~
krisives
Very common. Look up rancid tomato proxy.

~~~
jstanley
Google: No results found for "rancid tomato proxy".

DuckDuckGo: this HN thread is the only result.

Did you get the name slightly wrong?

~~~
tyingq
Try: rotten onions cloner

~~~
XR0CSWV3h3kZWg
If that doesn't work try:

Putrid walnut skimmer

------
PhisherPrice
Hit it offline with a denial of service attack.

~~~
jstanley
Note that since the phishing site proxies requests to the legit site, this
also has a high risk of knocking the legit site offline.

~~~
PhisherPrice
Maybe with a layer 7 attack, but not with a layer 3 attack. The author also
described the difference between the proxy request and a legit request, so he
can just block the proxy requests. Try using slowhttptest. You don't even need
a botnet.

~~~
sp332
I think the article described a different response, but not a different
request? Anyway, they mentioned that some requests were cached, so you could
request a cached object to avoid hitting the original host, if you wanted a
layer 7 attack.

