The Tor Project: Building the Next Generation of Onion Services (torproject.org)
394 points by ashitlerferad on May 24, 2016 | 122 comments

What I think really needs to happen is for the Tor group to make setting up hidden services much simpler.

Maybe I'm just stupid, but there didn't seem like an easy "type a command and we will set this all up for you" kind of way to do it.

Getting it set up, getting it to run as a daemon, and getting the service to work on multiple ports (so you can serve :80 and :22 for web and SSH) seemed like a nightmare to me.

It's sad because I'm very interested in hosting a tor relay/service to make sure I can get to my important documents, even if I need to travel to another country that blocks services like dropbox and google docs.

Hm, the problem with this kind of tool is that if you're not willing to read the documentation and get a good understanding of what you're doing, you might end up thinking you're secure instead of actually being secure, which is the worst-case scenario.

Is there a way to configure the tool to not be secure? Why is the secure configuration not the default? Can a secure default be easier to set up?

Because normally, the context here is that you're trying to set up an existing internet service over the Tor network. Your web server, for example, typically doesn't know anything about Tor, and will happily serve up pages to normal internet users unless you configure it not to.

Services designed for Tor don't have this issue and can be secure by default. Ricochet[1], for example, advertises itself as a hidden service automatically and doesn't communicate outside the Tor network.

1: https://ricochet.im/

Sure, but you could always have something like a script that takes a Vagrant VM or a docker container and turns it into a hidden service on Tor. The script would take care of making sure the only access to the VM is through Tor and that the VM learns nothing (under normal operations, I am not even thinking about patching side-channel attacks and escalation-to-host attacks here) about the host's identity or location. I am thinking something like:

vagrant up --provider=tor my-service

Where my-service is any Vagrant node (a config file for setting up a generic VM with whatever software / conf you specify) and the vagrant command outputs the Tor hidden service address in the last line, after loading the VM locally on top of VirtualBox or similar.

The Tor control port protocol lets applications set up a hidden service automatically; Bitcoin Core recently shipped support for this, automatically using hidden services for incoming connections to your Bitcoin Core node.
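For context, the control-port feature described here is the ADD_ONION command, which creates an ephemeral hidden service without touching torrc. A rough transcript of a session (the ServiceID and key below are placeholders, not real values):

```
AUTHENTICATE "controlpassword"
250 OK
ADD_ONION NEW:BEST Port=80,127.0.0.1:8080
250-ServiceID=exampleonionaddr
250-PrivateKey=RSA1024:MIICX...placeholder...
250 OK
```

The service exists only as long as the control connection (or until DEL_ONION), which is exactly what a program like Bitcoin Core wants.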

That may work with a UI, but I'm running this on a server without X, without anything and I'd like to be able to manage it like I do ufw.

You may like https://stem.torproject.org/tutorials/over_the_river.html , then! The sample code there can create permanent or ephemeral/short-lived hidden services. It uses the `stem` library (not to be confused with `nyx`, the relay monitor built on top of stem, formerly called `arm`).

Hm? Did you respond to the wrong comment?

The functionality Peter Todd is talking about is totally transparent and involves no user interaction.

I think I may have been confused, my bad. I probably didn't read it correctly when I first saw it.

The real problem is that you _shouldn't_ be running bare Tor in front of a hidden service, at least not if you really want to be private. You need something like Whonix[1] to protect you from all kinds of server information leaks.

1: https://www.whonix.org/wiki/Hidden_Services

At least on Debian-based systems, running Tor as a daemon is trivial.

There is even apt-transport-tor to fetch updates over Tor.

The hidden service config is pretty simple as well.

We should write more functionality for servers that lets them set up hidden services via Tor.

It would be useful if someone wrote a wizard that could install a personal disk server without the user needing to know what software is involved or how to install it. A single click where, for example, ownCloud is installed and linked to a Tor hidden service address, and the user is given the address and/or a USB stick with Tor Browser and a bookmark to the service. It would be outside the scope of the Tor project, and more in line with a useful native Debian package.

The Tor hidden service setup process seems extremely simple to me.

The first Google result I get for "tor hidden service instructions" is https://www.torproject.org/docs/tor-hidden-service.html.en which explains the two config lines you need to add to create a hidden service.

Literally all you have to do is add this into your config file.

  HiddenServiceDir /hidden/service/path
  HiddenServicePort 80
If you're hosting anything at all this shouldn't be even remotely difficult.

How do I set up multiple ports? How do I set up multiple services? How do I get multiple .onion domains?

By spending 5 seconds on google, or alternatively by using your intuition.

Googling "hidden service multiple ports" instantly answers every single one of your questions, and the answers are rather obvious.

Who would've thought that adding a new port is as easy as just adding another port definition?

Honestly, if this is a problem you shouldn't be trying to host hidden services by yourself anyway. Even if Tor took literally one click to set up, the other software will still fuck you, like Apache's mod_status.

What you are saying is "it takes 5 minutes for somebody who knows what they're doing", which is precisely what OP would like to change. You want more less-tech-savvy people to be able to create those nodes.

No, it's all very well and clearly explained in the official Tor Project documentation, in the section on hidden services. That link was already posted in this thread, yet the original poster still complains about insufficient documentation. His subsequent question about multiple ports was also clearly answered in the same place.

I think the problem here is simply that some people refuse to read documentation, even after they are provided with a direct link to it. Sounds about right from my personal experience with online tech communities.

When you use VLC, OpenOffice or Firefox, you don't read the documentation. I think that's what OP targets.

Setting up hidden services should not be that easy, unless it's the application itself doing it.

Making these things too easy results in very real damage when somewhat clueless people think they're capable of operating these things by themselves.

Speak for yourself, I've read VLC and LibreOffice documentation...

No, I'm not saying it takes 5 minutes. I'm saying it takes 5 seconds.

You google "hidden service multiple ports" (sans quotes), and the very first result answers every single one of his questions.

You only need to scroll down the page to see the super self explanatory config examples.

  HiddenServiceDir /usr/local/etc/tor/hidden_service/
  HiddenServicePort 80

  HiddenServiceDir /usr/local/etc/tor/other_hidden_service/
  HiddenServicePort 6667
  HiddenServicePort 22
If that doesn't tell you how to set up multiple hidden services and multiple ports, you should seriously get someone else to set up the hidden service for you. Some people just aren't competent enough to do it, just as not all of us are heart surgeons.

If you can't configure Tor, you certainly won't be able to sufficiently harden the applications that you're trying to hide.

That is rather hostile, but in any event that is only one of the things I asked. There are two other questions.

How do I get multiple .onion domains and how do I back up my keys so I can keep my domain?

Still, there are a lot of things that aren't very evident. It's just a bad experience in general.

We can have obscure documentation accessible only on the Tor website, or installing Tor could give you a nice command interface like the one UFW presents.

One will lead to better configurations, the other will lead to mistakes and loss of data.

> How do I get multiple .onion domains

Repeat HiddenServiceDir

> how do I back up my keys so I can keep my domain?

Backup the contents of HiddenServiceDir
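A small aside on why the key material is the domain: for current (v2) services, the .onion name is derived from the RSA public key, specifically the first 80 bits of the SHA-1 digest of the DER-encoded key, base32-encoded. A stdlib-only sketch (the input below is dummy bytes standing in for a real key):

```python
import base64
import hashlib

def v2_onion_address(der_pubkey: bytes) -> str:
    """v2 .onion name: base32 of the first 80 bits (10 bytes) of the
    SHA-1 digest of the DER-encoded RSA public key."""
    digest = hashlib.sha1(der_pubkey).digest()
    label = base64.b32encode(digest[:10]).decode("ascii").lower()
    return label + ".onion"

# Dummy bytes stand in for the real DER blob read from the private_key file:
addr = v2_onion_address(b"placeholder-not-a-real-der-key")
print(addr)  # a 16-character base32 label + ".onion"
```

So backing up HiddenServiceDir's private_key file is backing up the domain itself; anyone holding that file can recreate the address.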

I'm very excited about a number of innovations being deployed in next-generation onion services.

The distributed random number generator is very cool.

The blinded ed25519 public keys for the rendezvous servers are also super awesome.

Funding tor not only protects people from surveillance but advances computer science.

I was curious to see if it is possible to donate funds towards the operation of "safe" (e.g., non-government-controlled) exit/bridge nodes. According to the donation FAQ for the Tor Project[1], it appears that funds are not used for infrastructure.

If there were a way to fund exit nodes without running one myself I would definitely be interested in participating. If not, this might be a great idea for a crowdfunding campaign.

[1] The Tor Project spends about $2.5 million annually. About 80% of the Tor Project's spending goes on staffing, mostly software engineers. About 10% goes towards administrative costs such as accounting and legal costs and bank fees. The remaining 10% is spent on travel, meetings and conferences, which are important for Tor because the Tor community is global.


You can support Riseup who run exit node(s).


If you want to donate towards the operation of "safe" exit/bridge nodes, consider donating to NoiseTor: http://noisetor.net/.

In addition to NoiseTor that @garrettr_ mentioned there is torservers.net [0]. Both are mentioned [1] as ways to support infrastructure by the Tor Project.

[0] https://www.torservers.net/donate.html

[1] https://blog.torproject.org/blog/support-tor-network-donate-...

Donate to torservers.net. They are well known, frequent hacker meetings and partner up with other organizations in this space.

I think the Tor project would agree with me in saying that donations are all well and good, but the best way to contribute is to operate a high-capacity node.

Really, you'd turn down say, a thousand dollars of donations toward exit nodes and bridges?


This is not about moral philosophy, but practical matters. Tor's anonymity depends on diverse ownership of the running relays. As it stands, the organizations accepting donations to run Tor relays (torservers, noisetor, etc) already control a sizeable chunk of the total relays, and that's why the Tor project would rather encourage people to run their own.

Of course, many people can't or don't want to run an exit node. In that case, it's much better to donate to those organizations than to do nothing. But the Tor exit relays are not soup kitchens, and increased security for the Tor network due to more diversified operator group is not easily convertible to a dollar value.

Perhaps the answer here is to have a donation-receiving autonomous corporation on Ethereum that then provisions the nodes automatically.

Note: I'm aware that'd be a lot of effort to set up and might not even work, but the idea seems fascinating in theory.

The provisioned nodes would still be running on easily-bugged machines in large datacenters, from the perspective of government-level actors.

Yes, the Tor Project effectively did this for years, since no organization or organizational structure existed to take your sanctimonious "unit of caring" and turn it into geographically disparate non-colluding exit bandwidth. Sometimes the real world, or "territory," is more complicated than the "map" you find over at Less Wrong. Take a minute to ponder this in between the daily Neoreactionary Discussion Group and the hourly Why Aren't More Women Rationalists/Rationalist Pickup Artistry thread.

I almost didn't want to dignify this with a response because of the incredibly unnecessary tone it was written in. (Perhaps my original post came off as more matter of fact and arrogant than it was meant to. It's not a rhetorical question, I'm surprised to hear that nobody has found a way to turn money into diverse exit nodes.)

>Yes, the Tor Project effectively did this for years, since no organization or organizational structure existed to take your sanctimonious "unit of caring" and turn it into geographically disparate non-colluding exit bandwidth.

So this seems like a solvable problem, in one way or another. Some ideas that immediately come to mind:

- Perhaps people could be incentivized to be exit node operators for a small amount of money every month? (Estimated likelihood: Not that great, but it's worth a try.)

- I suspect that many technically savvy people would like to run an exit node, but are afraid of being the first person to have to take the things that happen on their exit node to court. Perhaps any time somebody inquires about donating money for exit nodes, they could be redirected to a legal fund, set up in advance, for anybody who gets sued in a precedent-setting case over their exit node. A quick Google search shows this doesn't exist, and I'm sure it would calm some nerves if it had gained a sizable sum over the years. (Estimated likelihood: Honestly, I think at first it wouldn't do much and might even do damage, because it wouldn't be very much money. But over time, depending on how often people are willing to donate, it might significantly help with somebody's hypothetical legal fees.)


From the Tor FAQ:

"Will EFF represent me if I get in trouble for running a Tor relay?

Maybe. While EFF cannot promise legal representation for all Tor relay operators, it will assist relay operators in assessing the situation and will try to locate qualified legal counsel when necessary. Inquiries to EFF for the purpose of securing legal representation or referrals should be directed to our intake coordinator by sending an email to info@eff.org . Such inquiries will be kept confidential subject to the limits of the attorney/client privilege. Note that although EFF cannot practice law outside of the United States, it will still try to assist non-U.S. relay operators in finding local representation."

So as a practical matter the EFF would probably step in for a precedent-setting case, but it would be much better if there were a legal fund just for this that promised to step in for a precedent-setting case.


>Sometimes the real world, or "territory," is more complicated than the "map" you find over at Less Wrong.

Well, yeah. Duh. Speaking of reality being more complicated than you've imagined...

>Take a minute to ponder this in between the daily Neoreactionary Discussion Group and the hourly Why Aren't More Women Rationalists/Rationalist Pickup Artistry thread.

LessWrong political opinions by affiliation and sample sizes on the 2016 survey: [charts not reproduced here]

And for the sake of intellectual honesty: [charts not reproduced here]
Moreover, I wasn't linking to LessWrong's opinion on charity; it was Eliezer Yudkowsky's opinion on charity. I'm particularly annoyed about him getting slapped with the Neoreactionary stick when his stated public opinion is that he thinks Neoreaction is stupid and that, if he were still moderating the main LessWrong site, he'd ban them all as part of cleanup:

(Eliezer Yudkowsky can be pretty uncharitable with his critics, I don't endorse that.)



Sadly many people live in countries where running an exit node would get them imprisoned rather fast.

OnionTip[1] allows you to send BTC to relay operators that publish their bitcoin address as part of their contact details.

[1]: https://oniontip.com/

How can/does Tor propose to handle government level subversion (which must surely be happening and continue to happen with ever-increasing depth) where "sponsored" computers begin to form a majority of worldwide exit and relay nodes, with modified Tor running on them that looks actively for attacks, and leaks of information?

Current evidence suggests it's doing OK for now. The slides from the Snowden leaks showed the NSA was unable to compromise the core infrastructure by controlling relay and exit nodes, excepting a few cases. However, there are attacks a government-level entity can mount that Tor explicitly does not protect against, such as large scale passive scanning for traffic confirmation. It is not believed to be possible to beat such monitoring without compromising latency.
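To make "traffic confirmation" concrete, here is a toy sketch (purely illustrative; real systems and any agency tooling are far more sophisticated): an observer who can timestamp packets on both the entry and exit side bins the timestamps and correlates the two series, scanning over candidate network delays. The flow that really is the same traffic stands out sharply against an unrelated flow, without reading any content.

```python
import random

def bin_counts(timestamps, width=0.1, horizon=10.0):
    """Histogram packet timestamps into fixed-width time bins."""
    bins = [0] * int(round(horizon / width))
    for t in timestamps:
        i = int(t / width)
        if 0 <= i < len(bins):
            bins[i] += 1
    return bins

def pearson(xs, ys):
    """Pearson correlation of two equal-length count series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_lag_correlation(xs, ys, max_lag=5):
    """The observer doesn't know the network delay, so scan candidate lags."""
    n = len(xs)
    return max(pearson(xs[:n - lag], ys[lag:]) for lag in range(max_lag + 1))

random.seed(7)
entry = sorted(random.uniform(0, 9) for _ in range(400))  # packets at entry
exit_ = [t + 0.3 for t in entry]                          # same flow, delayed
other = sorted(random.uniform(0, 9) for _ in range(400))  # an unrelated flow

match = best_lag_correlation(bin_counts(entry), bin_counts(exit_))
decoy = best_lag_correlation(bin_counts(entry), bin_counts(other))
print(match, decoy)  # the true flow correlates far more strongly
```

Any defense has to destroy exactly this correlation, which is why padding, chaff, or added latency keep coming up as the (costly) countermeasures.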

That evidence is a few years old now, how far they've come in that time is a complete unknown.

This is a textbook example of FUD.

Since the NSA's core mission is being ahead of everyone else, a claim that its capabilities have increased a lot in the last few years is not FUD.

Without any specific evidence, it's still FUD.

What would specific evidence for "we don't know how much progress they have made since the last time we had concrete data about them" look like?

The FUD is implying they've made significant progress. You're right, "we don't know" isn't really a falsifiable statement, in this context.

Is there a certain time period after which we can say it's not FUD? Like if we go 10 years without any additional clarification after the Snowden slides, is it still automatically FUD?

I'm confused -- are you asking when an unsubstantiated claim is not FUD?

Not all educated guesses and reasoned estimations are FUD.

None are, actually.

No claim of present NSA ability was stated. Noting the age of the data in question is a fact. Stating that an absence of new data creates a state of unknowing is a fact.

Fear of an oppressive government hacking the connection and arresting you. Uncertainty about agencies' capabilities, given that they operate at top-secret levels with unlimited budgets. Doubt that they would stop working on or advancing the state of the art in defeating the security.

It seems a little unreasonable how much you are being downvoted, the parent comment uses uncertainty to cast doubt on the idea that TOR is safe from the NSA. That can easily invoke fear.

Unfortunately that is largely what we have to go off with the NSA.

And you just provided a textbook example of how the term FUD is usually applied.

Maybe the NSA was unable to compromise the core infrastructure, but the FBI was able to compromise enough to hack into a web forum and log the location data for its users, then impersonate the administration of said forum for two weeks or so, before arresting nearly 1,000 individuals.

They did that by attacking the forum software/the server itself, not tor and the underlying transport layer in general.

It's another layer entirely and does not give information about their capabilities in attacking tor itself (except maybe through the fact that if they had to do that, then yeah, tor is probably still relatively secure if used properly).

Can Tor be considered to be on the transport layer, though? I thought it worked with TCP, so it would need to be at least OSI level 5, IIRC?

> so it would need to be at least OSI level 5 iirc

You're right; Tor actually resides in the application layer, I think (above TLS).

Although conceptually speaking it is itself a transport layer I guess.

OSI levels are cute, especially once you try to interpret IP over HTTPS ;)

Tor is a layer 4 overlay network.

Layer 5 would be HTTP, layer 6 would be the content (JPEG, MP4, etc.), and layer 7 would be the application serving or sending the HTTP requests/content.

The combination of watering hole attacks and internet-scale packet timing collection is a pretty big problem for the security of Tor users.

Fortunately Internet wide timing attacks are mostly a Five Eyes and domestic Chinese capability. Chaff, padding etc can help here.

Compromising the servers of target services and using that as a platform to distribute anonymity-stripping malware is also a problem. The Firefox codebase that TBB is based on isn't awesome from a security point of view. Hopefully the Firefox codebase can catch up from a security perspective and give them something better to work with.

Internet-scale packet timing collection only works when the traffic is not altered while routing through Tor. It's quite a headache when {proxy}.appspot.com is used, or technical documents are exclusively routed through Google Cache, or even CoralCDN for that matter.

There are even users who configure BitTorrent to use TCP instead of UDP so that it's very difficult to write DPI rules to parse out the Tor traffic. Couple this with meek, VPNs and traffic-shaping tools and it's quite bothersome for them.

Timing correlation attacks would be much more difficult if Tor users were more willing to tolerate higher latency. But they're not. Everyone wants to have their cake and eat it too.

If they're going to use random numbers to enhance security, they should make sure that at worst, if the numbers are predictable and controlled by an attacker, it's no worse than the current security.

Does anyone know if their protocol does that?

The randomness will be used to defend against knowing in advance what nodes are responsible for the HSDir entries in the hashring (allowing DoS and statistics gathering). If an attacker knew the next numbers, then this protection would be broken (but none of the other important protections would be broken).

Right, and this answers the parent's question. Currently, the layout of the HSDir is deterministic and therefore predictable by anyone, which allows for a number of potential attacks. See "Non-Hidden Hidden Services Considered Harmful" for context [0].

In the context of using the distributed randomness protocol to randomize the DHT layout, if the protocol were somehow broken then the worst case outcome would be that the DHT layout would again be predictable, which is no worse than the status quo today.

Disclaimer: IIRC there are some proposals to use the distributed randomness protocol for other things besides randomizing the DHT; I cannot speak to how those proposals might be affected if the distributed randomness protocol is flawed.

[0]: https://conference.hitb.org/hitbsecconf2015ams/wp-content/up...
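To make that worst-case argument concrete: descriptor positions today are a deterministic function of public inputs, so anyone can precompute them; mixing in a shared random value removes the precomputability, and if the randomness protocol failed you'd merely be back to deterministic. A loose sketch (not Tor's exact hash construction, just the shape of it):

```python
import hashlib
import os

def descriptor_position(service_id: str, time_period: int,
                        shared_rand: bytes = b"") -> str:
    """Illustrative position of a service's descriptor on the HSDir hash
    ring: a hash over the service identity, the time period, and
    (optionally) a network-wide random value."""
    data = service_id.encode() + time_period.to_bytes(4, "big") + shared_rand
    return hashlib.sha1(data).hexdigest()

# Today (v2): fully deterministic, so next week's positions can be
# precomputed and malicious HSDirs parked there in advance.
future = descriptor_position("exampleonionaddr", 17000)

# Next gen: mix in a shared random value agreed on by the directory
# authorities only at the start of each period, so nothing is precomputable.
sr = os.urandom(32)  # stands in for the authorities' joint random output
unpredictable = descriptor_position("exampleonionaddr", 17000, shared_rand=sr)
print(future, unpredictable)
```

If `sr` were ever predictable or attacker-controlled, the second call degrades to the first: the deterministic status quo, no worse.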

Given that, to my knowledge, they still have no way to ensure the exit nodes are not controlled by a single party, I have no idea why this would be any different.

Tor's design document also warns that a nation-state-level adversary who can view the whole network defeats any guarantees of anonymity. There's no way to know GCHQ/NSA aren't running 90%+ of bridges and exit relays, in addition to having a total network overview at the backbone level.

Tor was designed for strong anonymity guarantees in nations that aren't in the Five Eyes alliance, i.e., China, Russia, etc.

> There's no way to know GCHQ/NSA aren't running 90%+ of bridges and exit relays

Yes there is. There are currently 857 exit nodes. The Tor Project only has to personally know who runs 86 of them to rule out the claim that the NSA runs 90%+ of the exits.

In fact, since ~90% of traffic exits through the top ~260 relays, they'd only need to know 27 of the people who run those.

There are ~7,000 internal relays and ~2,500 bridges. https://metrics.torproject.org/networksize.html

This guarantee doesn't scale very well considering the combined 5 Eyes intel budget is ~60 billion USD and throwing just 0.1% of their budget at negating Tor would completely overwhelm the network with hundreds of thousands of stooge relays, plus they have the added benefit of global backbone cable traps. Tor can't give any kind of guarantee against a global passive adversary (5 Eyes) which is why they specifically warn against believing otherwise.

Tor has defenses against a large number of new nodes coming online.

Which really only work if the attacker is stupid or trolling.

Is there some trustable way that one could financially support a server w/o unintentionally supporting a compromised server?

Noisebridge runs a tor exit relay and is registered as a 501c3 charity.

Interesting. Can you give an example of how a security-enhancing protocol can end up degrading security?

>> "RSA BSAFE is a FIPS 140-2 validated cryptography library offered by RSA Security. From 2004 to 2013 the default random number generator in the library contained an alleged kleptographic backdoor from the American National Security Agency (NSA), as part of its secret Bullrun program."


A VPN or ssh accessed with a weak password.

Disclaimer: My knowledge of the Tor architecture is very rudimentary

It would be nice to see some new TCP/IP protocols that handle point-to-point and cross-network communication more flexibly. Take a p2p router (let's say Gnutella2), but pared down to only do addressing and routing of traffic. Then another protocol on top to handle name resolution, secrets and tunnels. Then maybe TCP on top of that just to make tunneling arbitrary applications easy. Everything written with IPv6/ICMPv6 in mind as the parent protocol to be more future-proof. In this way, we can have both a reusable framework for p2p networks (the first layer) and a repurposable protocol for doing name, auth and secret management/tunneling.

I believe the second thing is already handled by tor, but I don't know if separating the secrecy from the routing exists currently. Those different layers could be reused for different purposes, while also being written with a "new Tor" use-case in mind.

I wonder if seif project's ideas could be helpful here: https://github.com/paypal/seifnode. I remember Crockford talking about using microphone and camera noise to generate random numbers.

Running a Tor node should be a form of payment. A user with no particular skills, requesting help from an open source community, could "donate" his bandwidth and machine in return. And this form of contract should come with ease of use.

Bandwidth and machine time is not the biggest hindrance for running a tor relay or exit node. The muddy legality in most countries is.

I still really don't understand why people keep developing Tor over I2P. I2P is clearly the better protocol, offering complete untraceable anonymity and a chance to escape the stigma of Tor...

Tor is a solution for both anonymity & privacy and censorship evasion. I2P is oriented primarily towards anonymity and privacy.

I2P has an attractive anonymous service design and can run applications like BitTorrent over it. But it is also developed by basically three people in New Zealand.

Tor has more funding because its censorship-evasion features are attractive to funders. It has successes in the anonymity feature set like SecureDrop, a vibrant academic community with conferences etc., lots and lots of review from the external crypto and security community, and a deep well of technical talent.

Respectfully, no tool - be it I2P's garlic routing, Tor's onion routing or anything else - could ever provide "complete untraceable anonymity"; that is a huge (and potentially very harmful) misunderstanding of what these techniques can do. I strongly encourage you to learn more about them to correct that misconception.

Both projects have designs which have inspired each other and have relative advantages and disadvantages. Technically, I like I2P, but I accept I may be somewhat biased there. Practically speaking, Tor has a much larger anonymity set because it is far more widely used and receives more support, with very well-established volunteer outproxies. I would never criticise anyone for contributing to either: Tor in particular has the widest practical impact of any tool in this space.

This distributed random idea is a very impressive achievement; I'm glad to see it work in the wild! Congratulations.

I'm not sure what you mean about "stigma". Any reasonably effective solution in such a politically-charged space as the anonymity and privacy of human communication is likely to become controversial to some degree.

Anything that replaces Tor will get the stigma of Tor.

I've heard that I2P tends to add experimental features that don't have any rigorous analysis of the privacy impact. So there's that.

> offering complete untraceable anonymity

Your argument falls apart the moment you claim this.

I think Tor has more marketing and mindshare than I2P, and that's why you see Tor more than I2P. I would like to see a more in-depth comparison of the two; do you know of a good one?

Isn't I2P still "peer-to-peer" by default? That is, the fact your IP is connected to I2P is broadcast to everyone. That makes every disconnection an opportunity to trace you, directly and by elimination. It's especially bad with torrents, which are probably the most popular use of I2P.

How does I2P defend against traffic analysis attacks?

My understanding of distributed commit/reveal RNGs is that they need some sort of incentive mechanism. Otherwise, it's trivial for an attacker to flood the network with lots of commits and only reveal the ones that give him a useful outcome.
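The attack being described can be sketched in a few lines: participants commit to values by publishing hashes, the output is the XOR of the revealed values, and an attacker who is allowed to withhold reveals gets to choose among several candidate outputs. A simplified illustration, not Tor's actual protocol:

```python
import hashlib
import os

def commit(value: bytes) -> bytes:
    """Publish a hash first so the value can't be changed later."""
    return hashlib.sha256(value).digest()

def xor_all(values):
    """The group's output: XOR of every revealed value."""
    out = bytearray(len(values[0]))
    for v in values:
        for i, b in enumerate(v):
            out[i] ^= b
    return bytes(out)

# Commit phase: everyone publishes commitments.
honest = [os.urandom(32) for _ in range(3)]
attacker = [os.urandom(32) for _ in range(2)]
commitments = [commit(v) for v in honest + attacker]

# Reveal phase: the attacker waits for the honest reveals, then chooses
# WHICH of its committed values to reveal, picking among candidate outputs.
candidates = [xor_all(honest + [a]) for a in attacker]
chosen = min(candidates)  # whichever outcome suits the attacker
print(chosen.hex())
```

Restricting participation to a small, known set of parties (and treating a missing reveal as misbehavior) is one way to blunt this, which matches the reply about the directory authorities.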


As far as I understand, the distributed randomness will only be distributed on the 11 trusted directory servers (where you get your node manifest from). So you don't need to worry about malicious nodes killing the randomness.

I can't access the website because it's using HSTS and my browser says their certificate is invalid. There is no option to bypass the browser security warning. I'm at a public library. Anyone know what's going on?

Maybe your clock is wrong, or maybe they're man-in-the-middling the connection for network-surveillance reasons.

It works for me. The cert is issued by DigiCert Inc and the sha1 fingerprint is DE:20:3D:46:FD:C3:68:EB:BA:40:56:39:F5:FA:FD:F5:4E:3A:1F:83

I have a completely different cert, issued by Cisco Umbrella Secondary SubCA ash-SG:

sha1 - 3B:AE:49:04:9E:6A:3D:BE:96:08:60:F0:9B:6B:2F:03:4F:E9:8C:43

Cisco Umbrella seems to be some type of security product for networks. Are you using a computer belonging to your employer or with employer software installed? They could be MITMing you. It seems odd that the Tor project would be using a Cisco product like that.

What about the cert for Hacker News, or my website https://throwpass.com ?

Other certs validate, it might be site-specific from my employer (I'm on my employer's network)?

OpenDNS/Cisco Umbrella is basically a DNS-level security service that analyzes your DNS queries, blocks known malware domains, etc.

For some high-risk domains - depending on some settings - it will also switch to MitM'ing the connection to take a closer look at the traffic and block it on that level if necessary. It might also just be necessary to show the "This domain is blocked" page when you're requesting a site via https. Usually, your employer would pre-install their CA certificate, which would bypass the HSTS warning, but I suppose this might be a BYOD setting (or they just forgot/didn't like the idea of Cisco being able to MitM all the things).

Are you using firefox? I had an issue recently where my max tls version had been set to 1. Min version was also 1, max is supposed to be 3. Check about:config.

Same here, I'm at work though, so it's probably my company.

Looks fine here, maybe try archive.is?

I don't understand why these TOR guys can't rent like 10-20 cheap VPSs all around the world and do their testing there. They are describing getting 11 nodes like some sort of struggle.

VPSs are truly cheap now, you can get one for $3.52 per year:


Didn't read that way to me. It read like they normally use VMs on their computers to have a "testing Tor net", but decided to set up actual distributed nodes for testing this. More like "hey look, this is nifty" rather than "ugh, it was so hard to set this up".

Quite a few VPS providers and ISPs will block TOR services out of the gate.

Here's their overview page: https://trac.torproject.org/projects/tor/wiki/doc/GoodBadISP...

Yep, in the earlier Tor days it was obscure enough to be doable, but I haven't run an exit node in so long that I have no idea who would actually host one now. On top of that, the bandwidth used can eat budgets super fast!

>On top of that, the bandwidth used can eat budgets super fast!

Bandwidth isn't expensive though, unless you need "premium" bandwidth.

Case in point: I've used petabytes of bandwidth for scanning this year and probably spent less than $2k total on both the scanning hardware and the bandwidth. Realistically I've only spent a few hundred dollars on the BW itself.

And good luck even maxing a gigabit line with Tor, it's not easy.

I've hosted plenty of exits; there's no lack of friendly ISPs.

Any good recommendations of VPS providers who are friendly with exit nodes?

Most VPS hosts on ecatel, maxided.com, hostkey.

Thanks for the recommendations!

Where would the funds to run the servers come from?

Why do you believe these "cheap" servers would be secure?

Funds? That's $35 for 10 nodes per year. I'm pretty sure a team of developers can handle that.

And who cares if they are insecure? It's for testing only. The code is open source anyway.

Can you show me where I can get a $3.50-per-year VPS?

Read the root comment in this chain

Where can you get a VPS for $3.50/year? Please tell us.

Read the root comment in this chain.

> They are describing getting 11 nodes like some sort of struggle.

Where are they doing that? I see nothing the like in the article. Only that this was the first time they did a test of that scale, not that there was anything preventing them from doing it earlier.

And they also explicitly prohibit using Tor, it's right in their ToS => https://quadhost.net/policies/terms-of-service/

Many of the cheap (read: sub $10/year) OpenVZ VPS offerings prohibit ANY type of Tor traffic, even use of a client (such as torsocks) - I've used many of them, and they are quick to detect and suspend based on traffic analysis.

The security of the Tor network depends on diversity of relays and exit nodes. If the Tor project ran all nodes, then that is low sysadmin diversity (but high network and jurisdiction diversity) and thus lower security.

In addition to what other replies have said, using their own computers has the advantage of testing on a variety of systems, environments, and connections. VPS would be fairly monolithic.
