
Onion names reserved by the IETF - finnn
https://blog.torproject.org/blog/landmark-hidden-services-onion-names-reserved-ietf
======
duskwuff
It's the sensible, pragmatic thing for them to do. Allowing .onion to be
allocated as a "real" TLD would just be disruptive and confusing at this
point.

The implications with regard to SSL certificates are interesting, though, and
I'm curious how long it'll take for SSL providers to start supporting that. :)

~~~
jlgaddis
What are the benefits to using SSL/TLS certificates on a hidden service?

Perhaps a better question is: _are_ there any benefits other than just
providing an additional layer of encryption that a potential attacker would
have to defeat -- there's already end-to-end encryption when using hidden
services (even if there isn't any encryption at the application layer)?

ETA: I just remembered that hidden services use 1024-bit RSA keys, and there
have been some arguments lately that that may not be enough bits. For some
sites, using (at least) a 2048-bit key may be necessary.

~~~
jandrese
Well, it would encrypt the traffic from your TOR endpoint to the application.
Usually this is on the same box so it's not a big deal, but not always.

~~~
dogma1138
All traffic to TOR hidden services is encrypted end to end; SSL can add an
additional level of authentication, though.

If it's not a hidden service then you can't really use a .onion address
anyhow.

~~~
jlgaddis
You're correct, but I think his point was that the Tor endpoint (i.e. the host
connected to the Tor network) and, e.g., the host actually serving up the
content aren't necessarily one and the same (although they usually are).

In those instances, an SSL certificate would provide encryption all the way
from the "Tor client", through the Tor network, the rendezvous point, and the
Tor endpoint, to the actual application server. Without additional encryption
in use at the application layer, the link between the Tor (hidden service)
endpoint and the actual server would not be encrypted and, thus, vulnerable.

To (perhaps) explain better, this is similar to how Cloudflare offers SSL for
all sites: while the path from the end user to Cloudflare is (or can be)
encrypted, the link from Cloudflare back to the origin server isn't
necessarily encrypted. Alternatively, think of the link from an
SSL-terminating device to the backend web servers. Again, in most cases this
is a non-issue, but there certainly are some instances in which it would
apply (and this becomes more likely the bigger a site (hidden service) is).
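
To make that concrete, a hidden service split across two boxes looks roughly
like this in torrc (10.0.0.5 is just a stand-in for a separate backend host):
Tor terminates the circuit locally and forwards plain TCP to whatever
HiddenServicePort points at.

    # torrc -- Tor terminates the hidden-service encryption on this box...
    HiddenServiceDir /var/lib/tor/hidden_service/
    # ...and forwards plain TCP to the backend. If 10.0.0.5 (a made-up
    # example) is a different machine, that last hop crosses the network
    # in cleartext unless the application adds TLS itself.
    HiddenServicePort 80 10.0.0.5:80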

------
telescope7
Measuring the Leakage of Onion at the Root: “A measurement of Tor’s .onion
pseudo-top-level domain in the global domain name system”

[[https://www.petsymposium.org/2014/papers/Thomas.pdf](https://www.petsymposium.org/2014/papers/Thomas.pdf)]

------
banthar
Isn't SSL on .onion domains redundant? It makes sense for onion -> open web,
but shouldn't onion -> onion connections be already both authenticated and
encrypted?

~~~
johnmaguire2013
I believe the exit node may still be able to view traffic in plaintext. This
is part of the reason that running an exit node is so "dangerous" in the US.

edit: Though with a quick Google, I'm led to believe that an exit node only
matters when you are leaving the onion network (i.e. when traffic exits to
the regular Internet), and thus it sounds like SSL on a hidden service would
indeed be superfluous to me.

However, SSL also proves authenticity, not just encryption. It would let you
know that the hidden service you are accessing is indeed who you think it is.

~~~
icebraining
_However, SSL also proves authenticity, not just encryption. It would let you
know that the hidden service you are accessing is indeed who you think it is._

So do .onion addresses; they're a hash of the public key from the key pair
you get when you generate a new one, and the client verifies that the server
it's connecting to does in fact control the associated private key.

By giving up readable domains, the Tor hidden services system eliminates the
need for external authentication mechanisms like CAs; the address is all you
need.

[https://www.torproject.org/docs/hidden-
services.html.en](https://www.torproject.org/docs/hidden-services.html.en)
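
For the curious, the current (v2) derivation is short enough to sketch in
Python; this is only illustrative, assumes the third-party "cryptography"
package, and follows the documented scheme: SHA-1 the DER-encoded public key,
keep the first 80 bits, base32-encode.

    import base64, hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the kind of 1024-bit RSA key a hidden service uses.
    key = rsa.generate_private_key(public_exponent=65537, key_size=1024)

    # DER-encode the public key (PKCS#1 RSAPublicKey), hash with SHA-1,
    # and keep the first 80 bits (10 bytes)...
    der = key.public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.PKCS1)
    truncated = hashlib.sha1(der).digest()[:10]

    # ...then base32-encode them: that's the 16-character onion name the
    # client can check against the key the server proves it holds.
    print(base64.b32encode(truncated).decode().lower() + ".onion")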

~~~
johnmaguire2013
If a .onion's key were to be bruteforced or stolen, however, an attacker
would also need to steal the SSL private key in order to continue to appear
authentic.

I'm not saying Tor doesn't cover authenticity, but that SSL provides an
additional authenticity check on top of that.

edit: On the topic of bruteforcing, the linked Stack Exchange post leads me
to believe it's not entirely infeasible.

Additionally, stealing the .onion's key would likely expose the SSL private
key as well (as you'd likely have access to the server at that point), unless
the .onion's key is exposed due to misconfiguration or another form of human
error.

I also think, lastly, that the point about the browser understanding it's
dealing with a secure connection and enforcing general browser SSL rules has
merit.

edit 2: Forgot the link -
[https://security.stackexchange.com/questions/29772/how-do-
yo...](https://security.stackexchange.com/questions/29772/how-do-you-get-a-
specific-onion-address-for-your-hidden-service)

~~~
ikeboy
_14: 2.6 million years_

~~~
johnmaguire2013
With a single core.

~~~
ikeboy
So a million cores still takes years. What would you consider infeasible, may
I ask?
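
Back-of-the-envelope, taking the quoted figure at face value and remembering
that each base32 character is 5 bits:

    # Quoted single-core figure: a 14-character prefix takes ~2.6M years.
    years_14_single_core = 2.6e6
    cores = 1e6

    # A million cores brings a 14-character prefix down to ~2.6 years...
    years_14 = years_14_single_core / cores

    # ...but the full 16-character address is two more base32 characters,
    # i.e. 32**2 = 1024 times the work: still on the order of 2,700 years.
    years_16 = years_14 * 32 ** 2
    print(years_14, years_16)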

Also, you're wrong about bruteforcing the domain implying you could decrypt
traffic if not for SSL. If you bruteforce (even for millions or billions of
years), you won't get the same key. You'll get _a_ key that shares the first
80 bits of its hash with the other key used. So you can use it to MITM or
impersonate the site, but you can't use it passively to decrypt connections
to the onion.

------
pcl
I've always thought that these addresses should have a different scheme, not
a different TLD. For example, _onion://aoeusnth_ instead of
_[http://aoeusnth.onion](http://aoeusnth.onion)_

Is there any reason in particular why the TLD approach was settled upon
instead of a scheme-based approach?

~~~
rys
The reason it has to be done at the DNS level, rather than at the URI scheme
level, is because any protocol can be routed over TOR.

~~~
pcl
Well, there's nothing keeping a well-defined tor scheme from including the
protocol information in it, is there? For example, I could imagine specifying
a tor URI in my git config: _onion:ssh:pcl@aoeusnth_ or _onion:http:aoeusnth_

~~~
vodik
The point is it's still HTTP.

Think of TOR as acting like a VPN or point-to-point tunnel. You can
conceptually think of it as another network interface plugged into your
network. What you choose to route over it is your own policy. It doesn't
affect how any other protocols function.

I can still access regular sites over TOR. I can also access regular websites
over a VPN. openvpn+[http://](http://) isn't exactly useful either for the
same reason.

And there are other special TLDs. Your multicast DNS domain (e.g. .local) is
also special. Your DNS resolver sees the TLD and resolves it specially. But
once again, doing multicast DNS doesn't impact http, git, ssh, etc. So it
would be silly to have to write mdns+http:// as well.

And if you were to join them, then you have to describe what kind of
behaviour should happen if, for example, on
openvpn+[http://foobar.tld](http://foobar.tld) you hit a hyperlink to
[http://baz.tld](http://baz.tld). Do I rewrite this to prepend openvpn+?
Fail? etc.
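
For what it's worth, the pieces already compose that way today: you leave the
protocol and URL alone and just point the client at Tor's SOCKS proxy. A
sketch, reusing the made-up aoeusnth address from upthread and Tor's default
SocksPort of 9050:

    # ~/.ssh/config -- reach a hidden service via Tor's local SOCKS5 proxy.
    # Requires OpenBSD-style netcat for the -X/-x proxy options.
    Host aoeusnth.onion
        ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p

    # git then uses an ordinary ssh URL, no special scheme required:
    #   git clone ssh://pcl@aoeusnth.onion/repo.git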

------
splitdisk
Now we just need to find a Tor user with enough money to buy EV
certificates...

~~~
detaro
like facebook?

[https://facebookcorewwwi.onion/](https://facebookcorewwwi.onion/)

------
ape4
What's Facebook (privacy enemy) doing in there.

~~~
elros
They're not the enemy of privacy in general. They're the enemy of privacy
between you and them.

If they can make sure that not only can _they_ get your data, but also that
no one else can get it, that's a win in their book.

------
vegabook
Tor gets an upvote from the establishment. Is that furthering the cause of
privacy? So we'll now get more exit nodes?

Personally, I believe that nothing less than a wholesale transport-layer
alternative to the internet is necessary to maintain communications freedom.
It's not far-fetched to suggest that rooftop antennae running peer-to-peer
mesh networks will gather momentum in coming years. Not to replace the
internet, just as a backup, to keep centralised government interests and
moneymen at bay.

~~~
DasIch
It's not just far-fetched to suggest people will start seriously running
peer-to-peer mesh networks, it's ridiculous. You won't be able to convince
the people who know about them to run them, and you'd need to convince far
more people than that to actually get mesh networks to run reasonably well.

This is ignoring the technical and legal challenges, not to mention the fact
that people have tried this for a very long time now and have failed to get
anywhere significant for just as long.

~~~
vegabook
If your criterion for "getting anywhere significant" is to be even within
two orders of magnitude as performant as the current internet, then you are
correct. If, however, the aim is to build a distributed, kbps-class backup
communications system with no single point of failure or control, one that is
primarily community-based and NOT controlled by any corporation, ISP, or
government, then there are many such networks already in existence. They're
based on wifi, and completely legal. Clearly they're hobby projects, but that
doesn't make them uninteresting, or any less potentially useful in low-delta
scenarios of societal breakdown and/or centralised oppression.

~~~
DasIch
I'm not talking about bandwidth, I'm talking about coverage alone. In
Berlin, for example, there are many people who are into Freifunk and such
networks. There are many more people into technology and the culture and
politics associated with it. Nevertheless they can't even get decent
coverage.

The network is worthless not because it works badly, but because they fail
to create an accessible one to begin with.

~~~
vegabook
> many people into Freifunk .....

I rest my case. There is demand for such a thing even if it's far from
perfect. Not everybody is willing to trust the increasing encroachment by the
authorities into the mainline internet. They're willing to experiment to
ensure that technologies are being worked on that permit independence from
potential authoritarianism (corporate or government).

