That same paper walks through the challenges of dealing with it and doesn't find any satisfactory solutions.
As I wrote in our post on the topic, there's a trade-off between security, anonymity, and convenience. CloudFlare provides security to our customers. We believe in the importance of anonymously accessing the Internet. Unfortunately, that means we have to sacrifice some convenience. If you haven't read it, I encourage you to do so:
The two long-term solutions we proposed — blinded tokens or CloudFlare supporting .onion addresses — could, we believe, reduce the inconvenience, but they'll require help from the Tor developers. While public posts like this are discouraging when it comes to finding a better solution, I'm encouraged by private conversations we've had with Tor developers who acknowledge this is a hard problem and want to find solutions.
Surely you can see how, given the amount of outreach they do to educate regular people about the positive uses of Tor, putting forth unfounded statements like that might be perceived negatively.
I am glad that CloudFlare has put effort into this problem and as a Tor user I appreciate it (though obviously, this problem goes far beyond Tor, as mentioned in the article - your systems will have the exact same problems with large-scale IPv4 NAT, so really it's not optional for a CDN provider).
But please, stick to facts when presenting the case.
You seem to be engaged in goalpost shifting. CloudFlare have no incentive to make this figure up. They aren't going to give random people on the internet root access to their servers to recalculate the figure themselves. By claiming they aren't "sticking to the facts" all you do is show a closed mind.
Tor proxies tons of bad stuff. Everyone who has run a big web site knows this. Remember you only need one or two bad guys with fast enough tools to generate a flood of malicious traffic that completely overwhelms thousands of legit web browsing users. It's just so trivial for a minority of bad actors to end up dominating traffic profiles. So, I believe Cloudflare.
Tor guys love to talk about journalists, whistleblowers etc. That must be a really tiny amount of their overall traffic compared to people who just want to torrent, be assholes on forums etc. Just because they love to "educate" anyone who disagrees with them doesn't mean they're right.
I have run a Tor exit node, so I have some intuitive idea of the amount of malicious traffic. CloudFlare are full of shit.
The rest of your comment is engaging in the exact same goalpost shifting you accuse me of, suggesting that because there are one or two bad guys it's really no big issue that they block thousands of legitimate users.
Also you completely ignored the most important point of my comment, which is that this problem is not restricted to tor!
If you wish to further the conversation, please actually respond to my points. Thank you.
VM = virtual machine (local or remote VPS)
VPS = virtual private server

Let's say that I have a box with a couple of quad-core Xeons, 64GB RAM, and a 100Mbps uplink. I can easily run 150-200 Debian VMs, each running one or more Tor processes.
I see a certain hypocrisy in claiming to protect your customers, and at the same time enabling criminal operations through allowing them use of your infrastructure.
(For the record and because you tell me this at every point of contact, I know your main business is reverse-proxy. I don't care. You run DNS infrastructure, you are responsible for it.)
When I reported it, they said they had informed the attackers of my report, which is sort of like having the police tell a gang you snitched on them, and could have enabled retaliation.
When I asked them if they had indeed leaked my personal contact information in this report, they responded with this:
As indicated at https://www.cloudflare.com/abuse/form and to which you expressly agreed: "By submitting this report, you consent to the above information potentially being released by CloudFlare to third parties such as the website owner, the responsible hosting provider, law enforcement, and/or entities like Chilling Effects"
And then when I followed up again, they responded with this:
Again, to re-iterate, by submitting a report at https://www.cloudflare.com/abuse/form, you expressly agreed: "By submitting this report, you consent to the above information potentially being released by CloudFlare to third parties such as the website owner, the responsible hosting provider, law enforcement, and/or entities like Chilling Effects"
In this case, it appears that we chose not to forward your report to the website owner. However, we reserve the right to, and you should assume that this should happen when making reports to us.
I don't know what the legal implications of this are; suffice it to say that protecting DDoS attackers for free while asking legitimate sites to pay (in my case, the $6000/mo plan would be needed) feels a hell of a lot like extortion.
As for liability, ISPs aren't liable for hosted content, but there are exceptions (DMCA, child porn), and Cloudflare absolutely has legal liability here. They're not just linking to that content like a BitTorrent tracker does; they're literally serving it through their nginx servers.
The real issue here is that the web is becoming increasingly centralized, which means that we're becoming more dependent on the internal processes of a small handful of venture capital corporations for the web to work. Regardless of Cloudflare's current policy on Tor (they seem to be trying, which is good), they could also just arbitrarily change that policy anytime they want, and that is a scary situation for the future of the web. It's a single point of failure for a large chunk of the web, for political manipulation and for advertiser and government spying. Tor users (your last chance at a private web) being unable to access major swaths of the web just happens to be the first sign of the implications of this. It's no surprise to me at all that we've seen so much interest in distributed web technologies lately (IPFS, ZeroNet).
Also moving away from Cloudflare is just a DNS update away.
I don't really get them either, surely they have enough paying customers to be able to afford some basic data hygiene.
I'd really like to find some ways to make it more costly for them to keep the scum on their networks than to send out stupid "we r a reverse proxy!!!" replies, that clearly have nothing to do with anything.
But yeah, keep complaining about Tor, good job CloudFlare /s
People are criticizing CloudFlare for inconveniencing Tor users - a tool which, among other things, can be used to fight censorship.
At the same time, people are calling them out on their abuse policy which essentially boils down to "We won't take down sites based on content unless we receive a court order telling us to."
That's interesting, to say the least.
The other one is a for-profit organization whose CEO's rationalization for taking money from internet scum is that if he doesn't take it, someone else will.
And to be clear, you are not quoting their abuse policy correctly. They say they will MITM phishing and malware sites to insert warning banners, and take down child porn.
They will however not tell a scammer "Hi, please take your business elsewhere" based on some misguided "CloudFlare will save the internet"-fantasy.
Below is a direct quote from the article you cited. It refutes your underlying premise, namely that CloudFlare is in it for the money and doesn't care. The knee-jerk reaction that corporations take money for services, therefore they're evil, needs to die in a fire.
> LulzSec and other problematic customers tend to sign up for our free service and we don't make a dime off of them. When they upgrade they usually pay with stolen credit cards.
The point is: Tor is trying to provide a useful service that requires it to be very hard to block abuse, yet, as you say, they publish their list of exit nodes and everyone is free to block them.
CloudFlare, on the other hand, could extremely easily respond to abuse requests the same way almost every other legitimate network service provider does: investigate abuse by their users and terminate malicious users.
You misunderstood: corporations taking money for services is excellent.
Corporations continuing to provide service to criminals, even after they've been notified of and have acknowledged the fact that they are hosting problematic customers mixed in with their legitimate ones, is extremely problematic.
They're not evil, they have responsibility.
I'm completely sympathetic to your problem - Tor is used by lots of spammers - totally understandable to try to prevent this spam from hitting your customers.
But most of the services I run can't really be affected by this sort of spam (no public comment systems, for example). I use CloudFlare on a few of my domains; if I don't care about bot traffic and just want to turn this CAPTCHA system off entirely, is there a way to do so with just a Pro account? I appreciate the anti-DDoS protection, and automatically kicking these CAPTCHAs on only at extremely high volumes of traffic could be appropriate, but enforcing them otherwise is just too much.
I have security level set to minimum, but recently spun up a hidden service because users were still getting CAPTCHA'd over Tor.
The vast majority of pages CAPTCHA'd by CloudFlare on Tor (which makes secure web browsing so cumbersome) are completely read-only, while some may have a comment system hosted by a third party like Disqus or Facebook (and are therefore protected already). Other sites should put the CAPTCHAs somewhere other than the front page: on login forms and other pages where write/spam access is actually enabled.
I can think of very few sites that are in need of a total ban on anonymous users/bots or that are write-access enabled by default without any login process.
You'd think they could be far more sophisticated about reputation, though, and adjust it in real time so that IPs are trusted by default and marked down temporarily for bad behaviour.
The token scheme looks interesting, though, as long as it's truly anonymous and tokens are different on each request to avoid tracking.
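For the curious, the blinded-token idea can be demonstrated with a textbook RSA blind signature. This is a toy sketch with tiny teaching parameters (a real scheme needs full-size keys, proper padding, and double-spend tracking); the point is only that the signer vouches for a token it never sees, so the signed token can't be linked back to the issuing session:

```python
import hashlib
import math
import secrets

# Toy RSA blind signature. Parameters are classic textbook values
# (p=61, q=53) chosen purely for illustration.
p, q = 61, 53
n, e = p * q, 17
d = 2753  # e * d ≡ 1 (mod φ(n)), φ(n) = 3120

def blind(token: bytes):
    """Client side: hash the token and blind it with a random factor r."""
    m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n  # the signer only ever sees this
    return m, r, blinded

def sign_blinded(blinded: int) -> int:
    """Signer side: signs without learning the underlying token."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """Client side: strip the blinding factor to get a real signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(token: bytes, sig: int) -> bool:
    """Anyone with the public key (n, e) can check the signature."""
    m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
    return pow(sig, e, n) == m
```

Later, presenting (token, sig) proves the signer vouched for you, but the signer can't link that pair to any signing session, and using a fresh token per request prevents cross-request tracking.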
I wonder if they could do a plugin for the Tor browser so that they don't have to wait for Tor?
Because it sounds like you want to preserve an incorrect system and are pushing this problem on Tor.
At CloudFlare we've not explicitly treated traffic from Tor any differently; however, users of the Tor browser have been more likely to have their browsing experience interrupted by CAPTCHAs or other restrictions. This is because, like all IP addresses that connect to our network, we check the requests that they make and assign a threat score to the IP. Unfortunately, since such a high percentage of requests coming from the Tor network are malicious, the IPs of the Tor exit nodes often have a very high threat score.
With most browsers, we can use the reputation of the browser from other requests it’s made across our network to override the bad reputation of the IP address connecting to our network. For instance, if you visit a coffee shop that is only used by hackers, the IP of the coffee shop's WiFi may have a bad reputation. But, if we've seen your browser behave elsewhere on the Internet acting like a regular web surfer and not a hacker, then we can use your browser’s good reputation to override the bad reputation of the hacker coffee shop's IP.
The design of the Tor browser intentionally makes building a reputation for an individual browser very difficult. And that's a good thing. The promise of Tor is anonymity. Tracking a browser's behavior across requests would sacrifice that anonymity. So, while we could probably do things using super cookies or other techniques to try to get around Tor's anonymity protections, we think that would be creepy and choose not to because we believe that anonymity online is important. Unfortunately, that then means all we can rely on when a request connects to our network is the reputation of the IP and the contents of the request itself.
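Stripped of detail, the decision logic described above boils down to something like the following sketch (the threshold and score scale are illustrative assumptions, not CloudFlare's actual values):

```python
# Illustrative sketch of the challenge decision described above.
# Scores and threshold are made up; the real system is far more involved.
THREAT_THRESHOLD = 50  # assumed cutoff for a "bad" IP

def should_challenge(ip_threat_score, browser_reputation=None):
    """Decide whether to show a CAPTCHA for one request."""
    if ip_threat_score < THREAT_THRESHOLD:
        return False  # clean IP: let the request through
    # A good browser reputation can override a bad IP (the coffee-shop
    # case). Tor Browser intentionally carries no such reputation, so
    # browser_reputation is None and the bad exit-node IP wins.
    if browser_reputation is not None and browser_reputation >= THREAT_THRESHOLD:
        return False
    return True
```

A Tor user behind a high-threat exit node hits the last branch on every request, which is exactly the CAPTCHA-on-every-page experience described in this thread.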
Look, please correct me if I'm misunderstanding or taking your words out of context.
But what I hear you saying is that CloudFlare is fundamentally opposed to user privacy at a business and an architectural level.
I.e., if you don't agree to let CloudFlare track you around the web (perhaps by simply declining cookies) CloudFlare is likely to degrade your user experience to the point of being borderline unusable and then point the blame at you for coming from a bad network neighborhood.
Edit: 97% is a real number, not an exaggeration, based on numbers from the report linked in the article.
Similarly, if you block cookies/supercookies/etc to avoid being tracked ... you'll be challenged every view.
Is there really a constant DDoS attack on all of these sites from users with no cookies?
One could set up an IDENT-like service that delivers a hash for the source's route, and that would enable better scoring, but it could also be used as a tracking measure... you can't have one without the other.
Even then, it would take either the user allowing cookies, or the Tor system changing its exit nodes.
Listed those in other responses.
The need for incentive you mention is silly. The website operator [much like a job searcher with a resume] wants to be in front of as many non-malicious people as possible. And while you might argue 0.04% of malicious traffic comes over Tor, I've operated sites where 20%+ came over some sort of proxy with poor IP reputation.
You know what?
Fuck it. I'll just build my own site that doesn't use Cloudflare for such a purpose.
It all depends on what you do. I had a customer a few years ago that was forced to geo-block IP addresses from China, most African nations and Bulgaria. The nature of that customer's business made that an easy solution.
A company like Cloudflare serves everyone without a lot of context. If your site serves a Tor-heavy niche, it's not the right solution.
It seems like CloudFlare is not absolutely committed to this position because they're willing to explore things like the blinded tokens approach.
Comment spam isn't a CloudFlare-level problem. If sites want to allow anonymous comments then they get the consequences of anonymous comments (or have their own CAPTCHA for them); if they want to require account registration and some vouching or proof of work or payment to get an account then they can have that as well.
DoS is a CloudFlare problem, but you don't need historical IP reputation for that, you only need what that IP address is doing right now.
They're not in a position to do it accurately.
Order of requests for that IP in the last n minutes, timing of requests, request headers order, type of content requested, captcha content timings, specific-for-site content requested, etc
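A minimal sketch of collecting the short-window, per-IP signals listed above (field names and derived features are my own illustration, not anyone's production schema):

```python
from dataclasses import dataclass, field

# Illustrative container for short-window, per-IP behavioral signals:
# request timing, content requested, and header ordering.
@dataclass
class RequestWindow:
    timestamps: list = field(default_factory=list)     # request times
    paths: list = field(default_factory=list)          # content requested
    header_orders: list = field(default_factory=list)  # header-order tuples

    def features(self):
        """Summarize the window into scoring features."""
        gaps = [b - a for a, b in zip(self.timestamps, self.timestamps[1:])]
        return {
            "request_count": len(self.timestamps),
            "mean_gap": sum(gaps) / len(gaps) if gaps else None,
            "distinct_paths": len(set(self.paths)),
            "distinct_header_orders": len(set(self.header_orders)),
        }
```

A classifier fed features like these can score behavior in the current window rather than relying on months-old IP reputation.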
So, essentially, Cloudflare relies on defense in depth as a security company, but as a side effect of this, Tor [and IP anonymity services in general] is affected.
Fair enough, but you may want to seriously consider just giving your customers the ability to ignore IP reputation altogether [except during a DDoS], since it isn't just Tor but also VPNs, etc. that are impacted by this sort of strategy.
The seeming disconnect is that the vast majority of our customers ask us to provide them a way to block Tor entirely. And we've resisted that because we believe the anonymity Tor provides is a good thing. Same reason we don't allow the vast majority of customers to entirely block traffic from an entire country, even though it's one of our top customer support requests.
We weren't using Cloudflare but our own systems, which used IP threat rating services like MaxMind, but eventually we had to totally prevent anything important from being done on the site via anonymous proxies. Bids, listing creation, and payments of any kind had to be completely blocked from those sources. People were using Tor to create fake listings under fake users with stolen credit cards that we were then paying chargeback fees for. Using Tor to bid up their own auctions. Direct messages soliciting users to take the transactions off site.
Blocking those systems was one of the most effective things that we had to do and our users were vocally happier about it.
> In addition to CloudFlare’s automatic detection, you can easily add an IP address, IP ranges or entire countries to your Trust and Block list.
Umm, the vast majority is non-paying, I take it, since I believe it's available on every paid plan?
> A low security setting will challenge only the most threatening visitors. A high security setting will challenge all visitors that have exhibited threatening behavior within the last 14 days.
I'm guessing you mean the "Essentially Off" option, which implies Cloudflare basically stops providing security?
Couple that with most clients passing all the same fingerprintable data (browser brand and version, OS version, etc), and you can't uniquely identify different clients coming from the same IP with any level of accuracy.
There is no solution where everyone wins. Simple as that.
You should take this into account as well, since your customers are presumably not interested in losing business.
* facepalm *
How would you solve this? Abuse from Tor IPs is a known and documented problem. If you have a solution, I'll bet CloudFlare has a job opening.
Any company that currently bases their offering on IP-based reputation better be working on different solutions to the problem or they're not going to stay relevant for long. This is an existential issue for CDNs.
On top of that, CloudFlare as a free service has helped lots of small, controversial, hated-by-some-government websites stay alive. They protect >ANYONE< that is being DDoS-ed, usually for free.
If keeping freedom of speech (as in "being online") for all those users they protect is, to you, "at the expense of a minority", I think you are totally biased.
I don't see this in any practical fashion. I can visit a CloudFlare hosted site from the regular internet for hours (even scrape automatically) with no problems; the first time I hit the same site through Tor, it gets a double or triple CAPTCHA.
Perhaps it should be a blacklist instead of a whitelist. Defaults matter.
A blacklist would do nothing to solve this, since the fundamental problem is the way Tor and VPNs work, by aggregating traffic into exit nodes at specific IPs.
Edit: And upon further thought, it most likely is a blacklist. A bunch of malicious requests go out from one IP, so that IP is blocked. Because it's an exit node, it also blocks a bunch of other legitimate people.
I mean blacklist "Tor" to give them CAPTCHAs, instead of having to whitelist them to not give them CAPTCHAs.
> I can visit a CloudFlare hosted site from the regular internet for hours (even scrape automatically) with no problems
Ah but this is not the same. Try doing so from an IP which is also sending malicious traffic, and you will see the same issue.
Much like I don't care which bus gets me from point A to point B, or whether I'm the only one on the bus... it's the experience of the trip between points that matters.
You're fine with the anonymity industry throwing their weight around at the expense of a handful of CDNs, though. I understand Tor's need to protect users. That's why they need to find a way to treat malicious traffic differently from normal traffic. Then adopting them might be a good business decision.
I have barely ever seen any legitimate Tor traffic, it has all been spam bots or other kinds of traffic I would rather avoid.
But that's all utter nonsense! These botnets only access stuff inside the Tor network (i.e. the C&C servers that the operators want to hide); they don't use Tor to access clearnet content. Even a small botnet will have more nodes than there are Tor exit nodes (966 as of right now), so what possible benefit could there be for the botnet operator in doing that?
This is no different than a network or AS that is spammer friendly, botnet friendly, carder friendly, etc. All of those networks eventually end up on blacklists or Spamhaus lists and their efficacy goes down. Eventually, the network dies out and the criminals move somewhere else. Yes, it's a game of whack-a-mole, but it's proven to work well.
I know Tor doesn't want to be in the network regulation business, but they need to be if they want their product to thrive. Otherwise, goodbye Tor.
Bitcoin has the same issue: there are lots of legit uses for it, but to make it a good, widely used currency, a reputation system is going to emerge, and at that point you've already erased half the benefits of using Bitcoin. In the meantime, though, Bitcoin is used by a bunch of people purely interested in speculation, or as a way of avoiding taxes/money laundering laws/etc. There are people using it for legit reasons, just like with Tor, but my bet is that it's mostly for reasons nobody in the Bitcoin community likes to admit.
You don't blame mask manufacturers for malicious people wearing masks.
It's like city guards banning everyone with a mask from entering and issuing IDs to them. Then they're using those IDs to determine what they should and shouldn't see in the city, tracking them everywhere "across cities" etc.
In the interest of privacy, it is best to instead use the dynamic nature and types of the requests to figure out what the behavior is like.
Going with the mask analogy, they should instead check if a person is brute forcing lock combinations. Maybe even condition on the fact that they're wearing a mask.
That's what they're doing. They are seeing brute forcing come from a bunch of IPs and they're blocking those. What do you expect them to block on? The people using the anonymous service voluntarily identifying themselves on every request (cookies, browser fingerprinting, or pretty much anything else coming from the client side that can be faked)?
For example, if you fail to log in to a site, apply a 2^(attempts)-second timeout from that IP for that page only. You could also integrate a combination of request headers. Sure, it's still IP-based reputation, but it doesn't persist and is much less intrusive.
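That 2^(attempts) idea is plain exponential backoff. A rough sketch, keyed per IP for simplicity (a real version would key on the (IP, page) pair and expire stale entries), with state kept in memory purely for illustration:

```python
import time
from collections import defaultdict

# Hypothetical sketch of per-IP exponential backoff on failed logins.
failed_attempts = defaultdict(int)   # ip -> consecutive failed logins
locked_until = defaultdict(float)    # ip -> unix time when lock expires

def allowed(ip, now=None):
    """Is this IP currently allowed to attempt a login?"""
    now = time.time() if now is None else now
    return now >= locked_until[ip]

def record_failure(ip, now=None):
    """Double the lockout on every consecutive failure: 2s, 4s, 8s, ..."""
    now = time.time() if now is None else now
    failed_attempts[ip] += 1
    locked_until[ip] = now + 2 ** failed_attempts[ip]

def record_success(ip):
    """A successful login clears the penalty; nothing persists."""
    failed_attempts.pop(ip, None)
    locked_until.pop(ip, None)
```

The key property is that the penalty decays to nothing on its own: there is no long-lived reputation to haunt the next person who gets that IP.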
Most sites require specific cookies on consecutive requests, and such blocking should be on the app side only.
There are solutions in each case and all of them are harder than IP-based blocking. However, in the interest of privacy, they should adopt these more nuanced solutions.
If they're requesting a specific type of content, like images or some weird request that queries the DB, these would be grouped together.
What I'm saying is gather more information for each request and use it more wisely to expire IP reputation quicker - within minutes as opposed to months.
The DDoS problem is actually easier than the rest because you need a large volume of requests to do anything. Usually these requests are very similar, come in rapid succession and come from the same bunch of IPs.
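A sliding-window counter is enough to capture that "what is this IP doing right now" signal without any persistent reputation. The window size and threshold below are arbitrary illustrations:

```python
from collections import deque

# Sketch of flagging a flood using only current behavior: a short
# sliding window of request timestamps per IP, no historical score.
WINDOW_SECONDS = 10
MAX_REQUESTS = 100

class RateTracker:
    def __init__(self):
        self.timestamps = {}  # ip -> deque of recent request times

    def is_flooding(self, ip, now):
        """Record a request at `now` and report whether this IP is flooding."""
        q = self.timestamps.setdefault(ip, deque())
        q.append(now)
        # drop everything older than the window
        while q and q[0] <= now - WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_REQUESTS
```

Once the flood stops, the window empties within seconds and the IP is clean again, unlike a months-long reputation score.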
Going with the mask analogy again, it's like you see 1000 masked people rush into a bar and block the entrance with their bodies.
Is the solution really to ban wearing masks everywhere?
The flaw in this analogy is that in this case the mask makes every person completely indistinguishable from every other person wearing the mask. In this case, one ID is issued to every person wearing the mask.
When 90%+ of the people with this ID are criminals and vandals, blocking anyone with this ID is a pretty obvious and effective way to prevent crime and vandalism. It seems pretty reasonable to me when presented this way.
As I said elsewhere, if you see 1000 masked people rush into a bar and block the entrance with their bodies, is the solution to block all masked people from going to all establishments?
Clearly, if this happened IRL, people would just put a limit on the number of masked people entering that bar until there wasn't a group of 1000 of them trying to get in.
IRL, the bar would call the police and anti-riot forces would move in with crowd control equipment. Tear gas would be launched at the masked people, and a lot of them would be hauled to the police station, where their identity would be recorded and a background check would be performed. It's not pretty but it's reasonable.
I have used Tor out of a legitimate wish for privacy. I have cursed Cloudflare and Google in passing to myself for their captchas presented to me when I've browsed through Tor.
Captchas in general are a royal pain in the butt, but they are among the most effective at protecting sites from abuse, so even though they annoy me at times, I hold the view that they are a net positive.
If you want to help preserve anonymity, I think the best course of action is not to focus on Cloudflare, but instead to help maintain one or more communities on onion sites. The change must come from within. Once it has been shown that an onion site is able to provide useful services over time with privacy but with the same level of protection from abuse and bad people, then, in my view, it is time to reach out and educate the wider 'net on how this can be done.
It's best to imagine it's a walk-up bar rather than one with a door.
Try wearing a mask into a petrol station or convenience store. They've already performed the assessment of 'potential sale vs getting robbed', and decided the risk factor they'd like to accept.
Now imagine a convenience store that demands you to tell them where you've been this month and doesn't let you in otherwise.
It's really a shame. An opinionless platform offering anonymity cannot flourish in an opinionated world. At some point if these things want to succeed, they need to play by the rules of the world that they exist in. But I don't think anyone's figured out a common set of systemic restrictions that Tor, 4chan, etc. can implement that avoid taking away their primary affordance: freedom.
Your premise seems to be that you can't be bothered to protect your networks so you want to put that responsibility on someone else. It's called intermediary liability and it's terrible because the intermediary has all the wrong incentives.
You demand that the intermediary eliminate malicious traffic but they suffer much less than individual users if they also eliminate non-malicious traffic, so they set up a system with a high rate of false positives and harm many honest people. YouTube does this with Content ID. Spam registries do this with innocent small mail servers. CloudFlare does this with Tor.
What you're doing is called externalizing costs. It's generally recognized as antisocial behavior. So if you're going to claim benefits to yourself at the expense of other people, at least recognize that you're doing it.
Remember his preface - cranky old-school network operator.
Let's say you have a hundred networks all connected together into some sort of "inter-net" system. If one AS starts sending out malicious traffic, what makes more sense:
1. That AS starts policing their users.
2. The other 99 ASs have to deal with the malicious traffic.
You're expecting the other 99 groups that are being targeted by the one group to bear the cost of dealing with that group's malicious users. Who exactly is externalizing costs here?
In a system without any real rules or authority, I think "those adversely affected choosing to block the bad actor" is a fairly democratic solution to the problem. You either play nice or you get voted off of the island.
That's the part which is adverse to the rest of your argument. You're not voting off the bad actor, you're voting off everyone in the bad actor's country.
We know how to deal with this problem. You go to a website, you sign up for an account, it can be pseudonymous but to get it you have to put up some collateral. Money/Bitcoin, proof of work, vouching by an existing member, whatever you like. Then if your account misbehaves you forfeit your collateral.
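The proof-of-work flavor of that collateral is essentially hashcash: the account is only issued after the client burns CPU finding a nonce whose hash falls below a target. A minimal sketch, with the difficulty kept deliberately tiny for demonstration:

```python
import hashlib
import itertools

# Hashcash-style proof-of-work as account collateral (illustrative only).
DIFFICULTY_BITS = 12  # real deployments would tune this much higher

def mint(account_name: str) -> int:
    """Client burns CPU searching for a valid nonce (~2^12 hashes here)."""
    target = 2 ** (256 - DIFFICULTY_BITS)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{account_name}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def valid(account_name: str, nonce: int) -> bool:
    """Server verifies with a single hash."""
    digest = hashlib.sha256(f"{account_name}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - DIFFICULTY_BITS)
```

The asymmetry is the point: minting costs the client real work, verifying costs the server one hash, and forfeiting the account on abuse means the spammer has to pay the work again for every new identity.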
But this isn't a CloudFlare-level problem. They're trying to solve it at the wrong layer of abstraction. Identity isn't a global invariant, it's a relationship between individuals. Endpoints identify each other with persistent pseudonyms. The middle of the network should have nothing to do with it.
The bad actor is the organization or person responsible for administering the network where the abuse is originating.
When I'm being attacked by someone's VPS, I report them to their host. After the fourth time I report them only to have their host pass along my report but take no further action, the host becomes, maybe not a bad actor but, a "bad citizen".
My choices are to allow them to externalize the costs of their lack of enforcement (or decision not to enforce) and attempt to find a way to block the specific actor under their purview, or just to block that host and accept whatever collateral damage that occurs. (And yes, sometimes that "bad citizen" may end up being most of a country - it doesn't change the equation for me.)
It's the only method I have to exert any pressure on the host to act responsibly. If enough people agree with me, then it quickly becomes "their problem" rather than "my problem" as they get blackholed from everywhere on the internet.
The bad actor is the individual who acts bad. The Post Office is not a bad actor for delivering letters.
> allow them to externalize the costs of their lack of enforcement
Tor is not an enforcement agency. Neither is CloudFlare. The costs of bad actors are your costs. You have the technical ability to retaliate against common carriers for not allowing you to push those costs onto them, but that doesn't make you right to do it in any sense other than might makes right. And you should realize that in doing it you're knowingly hurting innocent people.
I subscribe to the idea that it's an ISP's responsibility to police its own network for abuse and my responsibility to police mine.
You apparently subscribe to the idea that it's my responsibility to just accept whatever shit you fling at me and it's my problem to deal with and yet somehow I have a responsibility or moral obligation to still provide services to you and your customers.
I suspect I'm never going to agree with you.
The second one is the only one that works without massive collateral damage.
Identifying bad people is only an abstraction over identifying bad acts and it leaks like a sieve. A reformed thief is entirely capable of buying an apple without incident, because not all acts by bad people are bad acts. But a thief has no reputation as a thief until after they steal for the first time. The only way to stop bad things is to detect bad things.
But the true failure of reputation systems is that as soon as multiple people share an identity they disintegrate entirely. Innocent people get blamed for malicious acts of other people through no fault of their own and with no ability to prevent it. The only way reputation systems can work at all is if people can prevent other people from using their identities.
Which means that IPv4 addresses can't be identities, because we don't have enough of them for them not to be shared.
And forcing common carriers to stop doing business with anyone who has ever done anything bad has another problem. It imposes the death penalty for jaywalking. You send spam once -- or get falsely accused of sending spam -- and you're blacklisted. It puts too many innocent people into the same bucket as guilty people and then the innocent people fight you alongside the guilty. It creates the market for these VPN services because too many servers are wrongly using IP addresses as identities. Then the bad people also use the VPN services and bypass your "security" because it was never security to begin with, so you block the VPN services which destroys those and they're replaced with others you haven't blocked. Meanwhile the real bad people also use botnets which are unaffected, so you aren't actually blocking the bad people, you're only blocking the one IP address that they share with the good guys.
You don't want this fight. Most of the people you're fighting are innocent. People need to learn to detect bad acts, not "bad IP addresses."
If they actively ignore death threats or don't pass them along to the police when they come to their attention, they become responsible.
"The costs of bad actors are your costs."
Cloudflare is sick and tired of getting attacked by people from the Tor network. I don't blame them for the ban. It's costing them money, and Tor isn't going to raise the funds to pay them for the lost bandwidth and customer revenue.
We used to have a bigger problem with mail spam because server operators constantly would leave anonymous relaying open. How did we stop a great deal of it? By blacklisting the IP until the problem is fixed. It has worked out pretty well.
"And you should realize that in doing it you're knowingly hurting innocent people."
I guess we have to determine which 'innocent' people are more important: The ones getting their websites attacked and hacked anonymously, the people that can't access those websites because they are down/DOS attacked, or the random people that want to use the TOR network.
The Post Office reads your mail?
> We used to have a bigger problem with mail spam because server operators constantly would leave anonymous relaying open. How did we stop a great deal of it? By blacklisting the IP until the problem is fixed. It has worked out pretty well.
Only if you're willing to disregard innocent people.
> I guess we have to determine which 'innocent' people are more important: The ones getting their websites attacked and hacked anonymously, the people that can't access those websites because they are down/DOS attacked, or the random people that want to use the TOR network.
The "random people" who use Tor aren't doing it because it's trendy. They're doing it because it's the only way they can access the internet. Or because not using it would get them stoned to death by religious fundamentalists or imprisoned by an oppressive government.
In that context, it's a lot of votes for Tor to find a solution to this problem.
Really, I'm surprised at CloudFlare's restraint here. A lot of their customers probably couldn't care less about Tor, but they've been putting a lot of effort into trying to avoid blocking Tor users (actually blocking, not inconveniencing) or compromising their anonymity.
Based on what I've seen in the thread of the bug report that spawned this debate, most of the browsing activity that these CAPTCHAs get in the way of is read-only. The only ways (nominally) read-only requests can cause harm are DDOS and exploiting vulnerabilities in the server software. Tor doesn't have enough bandwidth to be a big contributor to a DDOS, and sticking CAPTCHAs before some users is at best a probabilistic solution, as it only avoids exploits that:
(a) are untargeted (scanning the whole internet - if a human attacker cared about your site in particular then they could easily switch to one of the following methods to circumvent the CAPTCHA);
(b) use Tor rather than going to more effort to get access to less tainted IPs (VPS, botnets...) - assuming that the attack itself doesn't gather bad reputation (in cases where CloudFlare can detect malicious traffic by inspection, it can do better than IP blocking);
(c) don't use a service that farms CAPTCHA solving out to humans - which increases the attacker's cost but not by much.
Since the harm reduction is so minor, I suspect that for most sites, if the administrators had even a small incentive to support Tor users and the time to think about it, they would not choose CloudFlare's coarse-grained CAPTCHA approach. Rather, they'd make sure to have their own CAPTCHAs before anonymous write actions and before user registration - which they should be doing anyway - and leave read-only access alone. And the benefits of Tor to users in repressive countries should be enough to provide that small incentive, if they cared.
But they don't care. They don't want to change anything (like adding CAPTCHAs) unless there's a problem, and if CloudFlare can reduce that problem without their having to think about it, then that's the path of least resistance and they'll go with it even if there are consequences. I suspect most site owners, if asked about Tor, would say "just block it", which is why CloudFlare has - admirably - gone out of its way to make doing so in its UI difficult. This is (a large portion of) who CloudFlare represents and I agree with you that they're showing restraint.
But here's where I differ: I don't think mass apathy counts as "votes for Tor to find a solution to this problem". While the magnitude of harm is of course completely different, that's like saying that in the case of discrimination against a minority group, since most majority group members just want the issue to go away, they're voting for the minority to "find a solution" - when the only real solution is for the majority to change and stop discriminating. I mean, maybe they will vote that way in actual elections, but apathy votes don't reflect the "wisdom of the crowd" as much as others do; the minority shouldn't just consider themselves overruled and give up.
CloudFlare is already "defying" those votes to some extent, and if there is no good solution that can make both parties happy, I'd say it would be the right thing to do for them to go a little further and open up a little more for Tor users, even if it's not what their customers would decide in a knee-jerk reaction. I hope that this blinded CAPTCHA idea will turn out to be such a solution, though. It's not ideal, since having any kind of CAPTCHA blocks potentially-legitimate automated traffic, but I think it's a good enough compromise for now - sites that care could still turn it off entirely. I hope the Tor developers won't let the perfect be the enemy of the good.
Oh, and - I think there is one act for which CloudFlare deserves some blame: signing up those customers in the first place with the promise to provide "security" at the CDN level. It's not that what they do is useless, but given the fundamental limits of (all) "web application firewalls" that only see the application from the outside and thus can only guess heuristically what is an attack, less technical customers are probably misled somewhat about the necessity and benefit of them. Most people, including less technical site administrators or owners, don't even understand the difference between DDOS and "real" attacks, let alone what WAFs do or, say, what concerns apply to Tor in particular. I'm not sure what CF could do to fix this short of not advertising security at all, and that would undersell what they do provide. Even so...
My Linode IP address was assigned to me years ago. I do not use it maliciously, do not share it with other people, and have never used it for tor. Yet find myself regularly blocked for no logical reasons when I proxy my web-request through it.
It's not as if CF set out to screw over Tor users; by the nature of Tor they'd have no way to do it with any kind of ease. Tor traffic just happens to have a whole lot of bad actors using it, and that causes the reputation of those IPs to go down.
Copyright in the context of the internet has costs that can only be paid by innocent people. It will either have many false positives or many false negatives. So the question is whether those costs should be paid by the innocent people who benefit from the system that created those costs or the innocent people who don't.
Huh? Isn't this exactly what CF is attempting to do? And Tor traffic tends to be abusive so the good is caught up with the bad, but it's all in the name of protection.
No. The real bad people have botnets and can cycle through a million random IP addresses every time you block one. A real solution needs to be secure against someone you don't yet know is bad.
This is going to get a lot worse as we've run out of IPv4 addresses. ISPs are going to start to NAT many users behind one IP address. Some already have. Then you have the same issue as Tor where one user is malicious but shares the same IP address as a thousand innocent users. IP blocking isn't going to work anymore so you might as well find an alternative solution now.
The honeypot space for spam is now systems other than email. Social media. Blogspam. Web advertising. Dating services. SMS.
(I'm not sure precisely which, but pick from among there and you'll likely turn up the issues.)
Email is fairly well defended at this point, though not without considerable collateral damage (small / self-hosted email is quite difficult, most of us rely on a small number of high-volume providers who may present a considerable privacy risk).
That is exactly why there is a Tor. Tor is for enabling anonymous communication. Deciding who can do what or why would limit use, and that would limit its ability to provide anonymous communication.
Why should Tor get a pass on this when network operators need to protect their networks from abuse? The choices are either 1) let the abuse continue, 2) try to block the specific attack traffic in question by heuristics, which is often difficult or impossible, or 3) block abusive IP addresses.
The right to privacy is not the same thing as the right to access. If someone doesn't want to allow you anonymous access, it is their right to block you. Anyone using CF and not whitelisting Tor is effectively saying: if you want to visit my site, you need to verify that you aren't a bad actor. If you don't want to do that for privacy reasons, then you can't visit.
It's like selling alcohol (cigarettes, porn, gambling), but not to minors.
You can say "it's OK for this crowd not OK for this crowd". Otherwise you'd probably claim that said regulation would go against the very thing that the merchant is trying to do: make as much money as possible.
It's not like that at all, because selling to everyone other than minors requires identity. Sure, you just need to verify the subject is over 18, i.e. you don't need to know name, birth date or address. But you DO need to know identity to issue the credential that provisions the token proving an age over 18.
And that breaks Tor's raison d'être.
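The "blinded tokens" idea floated elsewhere in the thread is aimed at exactly this gap: identity can be checked once at issuance, yet the issuer cannot later link a presented token back to that identity. A toy RSA blind-signature sketch (the key is far too small for real use; illustrative only, not production crypto):

```python
import math, random

# Toy RSA key (absurdly small; real keys are 2048+ bits).
p, q = 1009, 1013
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def blind_sign_demo(m: int) -> int:
    # User: blind the token m with a random factor r before sending it out.
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n

    # Issuer: signs the blinded value; it never sees m itself,
    # so it cannot later recognize the unblinded signature.
    blinded_sig = pow(blinded, d, n)

    # User: strip the blinding factor to recover a valid signature on m.
    return (blinded_sig * pow(r, -1, n)) % n

m = 4242  # the token's value (stands in for "holder passed the identity check")
sig = blind_sign_demo(m)
assert pow(sig, e, n) == m  # anyone can verify; issuer can't link sig to issuance
```

The identity check happens once, at issuance time; afterwards the token verifies against the public key alone, which is what makes it compatible with anonymous presentation.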
Do you have a proposal for filtering out that kind of traffic while allowing other traffic? I can't think of a way off the top of my head and I'm sure the people behind Tor would at least consider a logical solution.
Edit: This sounds kind of confrontational, but I don't mean it like that. I honestly would like to hear of a potential solution to this because I really can't think of one.
How? If you think this was possible without completely compromising the Tor protocol, don't you think it would have been done already?
Well, the thing is, how do you regulate totally anonymous users? How would you know what the 0s and 1s flowing through Tor are doing?
I think you don't understand what Tor is or how it works. Tor is a way to anonymize its users. You have no way to analyze a packet until it reaches an exit node, and you have no way to analyze that packet if it's done over https, and you have no way to block an ip from that exit node because it comes from another node where plenty of other ips are coming from. If you start blocking this node, then the spammer can choose a different path and come from another node, or just change his exit node.
tl;dr: what you are saying goes against Tor's principles.
> what you are saying goes against Tor's principles
That's why it will probably never be cleaned up. That's also why more and more people will probably block access from Tor. CloudFlare says they get a 95% attack rate from it. A blog post the other day said FotoForensics gets about 91% attacks from Tor.
No one is going to put up with 91% attacks for long. And if that means Tor becomes its own walled garden that doesn't 'interact' with the public internet, so be it.
Agree to disagree :)
> if that means Tor becomes its own walled garden that doesn't 'interact' with the public internet, so be it
Most websites are not using Cloudflare and have no idea how to block a range of IPs. So no, Tor is not going to become its own walled garden.
I know very well how it works thank you very much. It doesn't mean I have to like it.
Since you're apt to throw around unfounded accusations, I'll join the party. I have more experience than you do at running large networks, dealing with fraud, dealing with abuse, dealing with malware, and dealing with law enforcement/government agents. The Internet has enough problems with the well-run networks that actually care. We couldn't care less about Tor users that might be blocked.
I promise you that CloudFlare isn't the only group that offers to block Tor as an option (or just has it blocked by default).
Cloudflare just has to get smarter here. It's their business to do dynamic filtering and balance this stuff, not Tor's.
It's not their business, given Cloudflare doesn't make shit off Tor: it loses bandwidth and time fighting malice instead. It's actually their business to block it. Tor needs a reputation system or some other method to deal with this stuff.
The blacklisted-IP lifetime problem is real though. It's a problem I've had to raise several times with our product and network teams. People would see an abusive IP and just ban it... without a TTL or lifetime. This really upset me, as they seemed to think that was OK not just for the time being, but that it was good enough. When I describe IPv6 to them, their faces just melt as it sinks in that they can't just keep banning IPs and must do something higher up the stack to detect and block fraud.
At least this will reduce legitimate users' annoyance, instead of their being blocked indefinitely.
My own experience: tried accessing wikialpha from my work LAN, 'blocked' by an endless captcha (for a few months already), while opening the said wiki from my home network, all is perfectly fine...
Well, at least now I know why. It still does NOT make it OK from an end-user's perspective.
What does malicious percentage of traffic in this case mean? Malicious sessions? IPs used? Packets? Users?
> Based on data across the CloudFlare network, 94% of requests that we see across the Tor network are per se malicious. That doesn’t mean they are visiting controversial content, but instead that they are automated requests designed to harm our customers.
CloudFlare's CAPTCHAs are an attempt to deal with that reality, but they're heavy-handed. Worse, they're at the wrong level: the protected site may have already verified that the user is legitimate, but CloudFlare imposes its block when the user's source IP changes again.
CAPTCHAs belong at the application layer, not the transport layer.
The ironic thing is actually that by applying any kind of "network regulation" the Tor project would abandon its own primary purpose. The only way it can continue to exist is actually if it doesn't practice any kind of censorship of its users.
By your logic, gun owners would be able to shoot whatever they want. The primary purpose of a gun is to shoot things. To paraphrase: "To regulate what you can and can't shoot would go against the primary purpose of a gun and the gun manufacturers can't exist that way."
Cloudflare know of the problem and refuse to do anything about it.
The facilitator of much abuse isn't anonymity per se, but impunity. The ability to act without consequence.
Finding a way to imbue reputation across an anonymised connection seems to be one way to operate. Not an easy problem. There's some work toward solutions, though none are yet widespread.
I'm not sure if you know what you're talking about. From your comment, it's crystal-clear that you don't understand what their service is.
Plus, calling a free service based largely on volunteers, universities and non-profits a "product" is derogatory.
This response seems a bit of a childish knee-jerk reaction from the Tor project, which could've been worded more maturely.
Tor exit nodes were far more likely to contain malicious requests
Risk averse companies may wish to block all Tor traffic
The article then goes on to suggest that it is perfectly reasonable to use the word 'block' to mean showing a captcha. In common usage, block means block: deny requests, not attempt to determine whether a user is human with a captcha or some other method. That's not blocking; it's annoying and potentially pointless, but it's certainly not simply blocking users, and it's disingenuous to describe it as such.
All of that adds up to a response which seems to be more interested in scoring points than finding a solution for legitimate Tor users. I'm not sure I'd describe it as immature, but it's not a very constructive response, to an article which went out of its way to be Tor friendly and propose solutions. It would be much easier for cloudflare to really block Tor traffic, they would probably suffer very little from doing so.
> Tor exit nodes were far more likely to contain malicious requests
They also were far more likely to issue requests in general. This data point has no meaning. But that's statistics for you :)
My plan is to continue to do so through that ticket as I've made various commitments there (some of which, like whitelisting, we've already rolled out). It's worth reading the entire ticket to get a sense of the conversation. We are in no way finished improving the situation.
The net effect is that it's not saving you any CPU usage or bandwidth (if anything, it's costing you more as we still request the actual page after the Captcha system runs), it's making customers like me abandon your customer's sites out of frustration, and it's eroding the last line of defense we have against invasive tracking.
I'm sympathetic to the problem you're trying to solve, but surely there must be a better way for simple GET requests.
This doesn't just affect people like me as a user. Having experienced this, I would be averse to deploying or recommending CloudFlare in its current state.
If anything, it's been surprising to me to learn just how many sites are using CloudFlare ;)
I don't have a site handy that's triggering it right this moment, but here's a somewhat recent screenshot I grabbed of the captcha wall hitting my VPN: http://i.imgur.com/OnvK05l.png ; I received this for simply trying to view a product page with an ordinary GET request. Ironically, I couldn't solve the captcha, despite being human >_<
I have the following suggestions for CloudFlare:
1. Can you provide better documentation for your customers about what Tor is, and reasons for/against white/blacklisting Tor? For example, when a customer selects to Block or Captcha Tor, a tiny link could show up somewhere that says something like "This affects users who seek privacy, find out more."
2. In addition to better docs, can you setup something that lets site operators view the site as a Tor user? The screenshot on this site does not do the Tor+Captcha experience justice:
If you provided a "view site as Tor user" demo (JS and non-JS versions), then site operators might be more reluctant to enable Captcha for Tor users.
3. The latest CloudFlare blog post on Tor says "you can do a lot of harm just with GETs." I wish you would give more thought to the idea of a read-only option for non-whitelisted Tor users. If GET requests are harmful (I'm skeptical), reducing the harm of GET requests seems like a much easier problem than the overall problem. After all, what good is a CDN that can't handle lots of requests for static content?
I also have the following suggestions for the Tor community:
1. Continue to improve the Tor user experience to gain users! The more people use Tor, the harder it is to ignore (by CloudFlare and others) and the safer it gets. Acquiring more users is one of the best ways to fight back against Captchas. One way to get lots of new users is Firefox integration.
2. Fix hidden services. It's awesome that CloudFlare wants to give the option to setup hidden services for their customers. Make that possible for them!
3. If CloudFlare doesn't want to provide some sort of read-only mode, build that functionality yourselves! For example: when the Tor Browser detects a CloudFlare captcha, it could give the user the option to read a read-only cache from some other CDN.
I'll make sure the product team sees that suggestion.
2. In addition to better docs, can you setup something that lets site operators view the site as a Tor user?
That seems like an enormous amount of work when anyone can just get the Tor Browser and test it out.
3. The latest CloudFlare blog post on Tor says "you can do a lot of harm just with GETs." I wish you would give more thought to the idea of a read-only option for non-whitelisted Tor users. If GET requests are harmful (I'm skeptical), reducing the harm of GET requests seems like a much easier problem than the overall problem.
If you read the Trac thread you'll see that I've answered that. In short, I don't want to do it because that diverts engineering resource away from the right thing to work on (which is reduce the need for CAPTCHA).
I agree, but thinking about GET-only requests is one approach to reducing the need for CAPTCHA. For example, maybe CloudFlare could have better Tor defaults for sites that are serving only static content, and default to Captcha for sites that are POST-heavy (just a high level idea).
Basically, most of the time, GETs have nice properties: idempotent, pure, etc. I think a solution to the captcha problem could take these into account.
I do not think the other solution proposed in the CF blog post, using proof of work with some sort of blinded tokens, is going to work well. A hashcash style proof-of-work is easily defeated with a botnet or FPGA, and reputation-based systems are an ongoing research area.
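For context, a hashcash-style puzzle is just "find a nonce whose hash has N leading zero bits": verification costs one hash, but solving costs ~2^N hashes on commodity hardware — and far less wall-clock time per puzzle on an FPGA or a botnet that parallelizes the search, which is exactly the asymmetry being criticized. A minimal sketch (illustrative, not any deployed scheme):

```python
import hashlib
import itertools

def solve(challenge: str, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(challenge:nonce) has
    `difficulty_bits` leading zero bits. Expected cost: 2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """One hash, regardless of difficulty — the server's side is cheap."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve("example-challenge", 12)  # ~4,096 hashes on average
assert verify("example-challenge", nonce, 12)
```

The difficulty only tunes the attacker's cost linearly in CPU time, while specialized hardware hashes orders of magnitude faster than a phone or laptop — so any difficulty low enough for legitimate users is cheap for a well-equipped attacker.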
It's possible there is a silver bullet that we haven't found yet. Have Tor or CloudFlare considered putting out a call for research into the problem?
To be honest I'm not interested in solving the CAPTCHA problem just for Tor. That doesn't make a lot of sense. What I am working on is an overall solution so that the need for CAPTCHAs at all is diminished.
I like that idea, but my worry is it will take years to reach that point, and in the meantime Tor/VPN users will just have to suffer. I'd rather see some short-term fixes now and long-term solutions on the horizon.
I admit: I have not read the entire Trac thread, so I'm not sure what your current roadmap is.
Sure, but most site operators aren't going to get the Tor browser to test it out -- especially if they don't realize that that is something they should do.
By "view site as Tor user" I simply meant having a way for site operators to interact with the captcha page that CloudFlare presents users. That should be easy to setup; it can even be just a static HTML page that's linked from the documentation on Tor. I didn't mean that you should display the page through Tor.
Alternatively, you can create a dummy CloudFlare instance with /0 under Captcha and put the URL to this in the docs, but this won't let users try the JS-disabled captchas.
The only difference between this demo and the actual Tor experience is that over Tor, these pages load more slowly. But people are more annoyed at seeing the captcha in the first place, not the fact that they sometimes load slowly...
It's not that simple.
reCAPTCHA makes an on-the-fly decision about the strength of the CAPTCHA served depending on the visitor. In the case of Tor there's no visitor information other than IP address (and whatever the browser gives as User-Agent etc.).
So, I can dummy up a CAPTCHA page trivially, what I can't do is dummy up the experience of a Tor Browser user hitting a reCAPTCHA. The way to do that is run the Tor Browser.
edit: Thanks for the dialogue thus far.
You seem keen on locking out malicious use of Tor; when can we expect you to lock out malicious use of CloudFlare?
On the CloudFlare blog, if someone's name is on it, they wrote it.
> A report by CloudFlare competitor Akamai found that the percentage of legitimate e-commerce traffic originating from Tor IP addresses is nearly identical to that originating from the Internet at large. (Specifically, Akamai found that the "conversion rate" of Tor IP addresses clicking on ads and performing commercial activity was "virtually equal" to that of non-Tor IP addresses).
Actual data from the report:
• Comparison of Tor and non-Tor traffic:
  - Of legitimate requests, non-Tor IPs accounted for 99.96 percent of requests, while Tor exit nodes accounted for 0.04 percent.
  - Of malicious requests, non-Tor IPs accounted for 98.74 percent of requests, while Tor exit nodes accounted for 1.26 percent.
• Tor exit nodes were far more likely to contain malicious requests:
  - 1:11,500 non-Tor IPs contained malicious requests.
  - 1:380 Tor exit nodes contained malicious requests.
• However, traffic from Tor exit nodes yielded a conversion rate virtually equal to non-Tor IPs:
  - Conversion rate for non-Tor IPs was 1:834.
  - Conversion rate for Tor exit nodes was 1:895.
> However, traffic from Tor exit nodes yielded a conversion rate virtually equal to non-Tor IPs
You just described every busy IP address: if you handle more requests, you are more likely to handle a malicious one. This is the problem with IP based reputation.
The only thing that can be drawn from that data is that Tor makes IP-based reputation tools ineffective. The thing is, for many people that may be enough to justify what Cloudflare is doing.
> Based on data across the CloudFlare network, 94% of requests that we see across the Tor network are per se malicious.
How can Cloudflare assert that 94% of all requests over Tor are malicious, when Akamai seems to be saying that less than 1% of Tor IPs contain malicious requests?
Starting with a blanket blaming statement isn't that great either. Conversion rates and e-commerce sorta go together, so I would say it's fine to entangle them logically. Tor exit nodes entangle users with each other through an IP. If someone's lens is limited, they'll run the risk of blanket blaming the legitimate traffic as well.
That's comparing apples to oranges though. A lot of different people send requests from tor exit nodes. That might be comparable to some corporate networks, but many IP addresses are used by only one person (or a family).
Intuition would suggest that tor traffic is more likely to be malicious than average traffic, but suppose there are 500 tor exit nodes in the world, and 1 malicious tor user: then 1 in 500 tor-exit node IPs would have sent a malicious request!
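That base-rate effect is easy to quantify: if every user, Tor or not, turns malicious with the same small probability, an IP shared by many users gets flagged far more often than a single-user IP. A quick sketch with illustrative numbers (the per-user rate is borrowed from the 1:11,500 non-Tor figure above; the 1,000-users-per-exit-node figure is an assumption for the sake of the example):

```python
# Probability that an IP "contains at least one malicious request",
# assuming each user behind it independently turns malicious with probability p.
def prob_ip_flagged(users_per_ip: int, p: float) -> float:
    return 1 - (1 - p) ** users_per_ip

p = 1 / 11_500  # illustrative per-user rate, taken from the non-Tor 1:11,500 figure
print(prob_ip_flagged(1, p))     # single-user IP: ~0.000087, i.e. 1 in ~11,500
print(prob_ip_flagged(1000, p))  # IP shared by 1,000 users: ~0.083, i.e. 1 in ~12
```

So identical per-user behavior alone makes a busy shared IP look orders of magnitude worse on any per-IP metric — which is the apples-to-oranges problem with comparing per-IP malice rates across Tor and non-Tor addresses.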
One of my sites enjoys a ridiculous number of fraudsters trying to make purchases, many - but very much not all - from the tor network.
The easy solution is to punish everyone and ban tor exit nodes from access, and woo, a significant reduction in my fraud rate.
The way I justify this to myself is that the site only accepts payment via PayPal and/or credit cards, and paying with those in itself gives up a good amount of privacy.
For sites that don't make a profit and have to use unpaid time to clean up the mess from some tor nodes, I really don't know what the solution is.
It definitely sucks for legitimate users.
Edit: one more difficulty is that I don't know if I was targeted by one or two lazy-yet-determined fraudsters who only use tor, and so make tor look worse than it is with their repeated attempts. No idea even where to begin with that one.
The argument against 3DS is it kills conversion rates (meaning: lots of legitimate customers don't complete that extra challenge, who would otherwise have made the purchase). But for legitimate customers using TOR, I wouldn't be surprised if that number were very different :) I know I wouldn't mind whipping out a 2fa device every time I purchased something over TOR. I mean, fair play, right?
3DS is available today in Europe, Asia and Africa! :'( but not ubiquitously in the USA. Yet. It's getting there.
More info: https://en.wikipedia.org/wiki/3-D_Secure
3D Secure redirects to the bank's site (not in an iframe! a real window with a visible address bar) where you enter a one-time code from SMS!
For example I have 3D secure enabled, and all my online purchases (in my own country) always require to type in the password on the bank's gateway site.
If I see a site that allows me to make a purchase without going through 3Dsecure I'd be worried and probably call my bank.
The one who has to fear fraudsters is not the card owner; it's the merchant! Without 3D Secure, they have to pay the money back! + an extra fine, for good measure. And, of course, the product / service is already gone.
Prepaid credit cards are essentially anonymous, as far as I know.
My understanding is that you can only buy prepaid cards after showing ID in many jurisdictions, and many other places require you to register them with ID in order to use the cards.
However, in order to make certain purchases online, it's often been the case that you need to "Register" the card with the provider, and supply details that would match billing information for the selling party. I can't see any reason why you couldn't fudge that, although shipping information for real goods would leak information.
One of the way scammers get "cash" is to buy Amex gift cards with stolen credit cards. Then use those Amex gift cards to buy more amex gift cards until they feel confident the trail is murky enough. Or they use a combination of store gift cards to buy Amex/Visa gift cards.
Yes and no.
Yes: many fraudsters are ridiculously lazy, and fraud rates go down when the tor block is in place.
No: plenty of fraudsters access from elsewhere (and I put them through a separate fraud-detection-SaaS)
A simple answer would be that the original analysis is flawed: they've forgotten that they wrote a script to block Tor exit IPs; Tor intentionally provides a list of these IPs to the public.
Might be worth noting that Tor users are often the target of National Security Letters, that Cloudflare, based on their own transparency report, has received National Security Letters, and as such would be unable to say whether those letters impacted code on the topic.
That said, based on what I know, the answer is to whitelist the Tor IPs, give Tor users a global session that the user has the option to opt into (it would likely make sense for Tor to publish what the impact of this is, and for Cloudflare to link to that from the captcha page), and always let users know a global session is set, in case the user believes that by using Tor they reset the session; resetting it via Cloudflare would be meaningless. The general gist, though, is that humans are not bots and don't behave as bots, and Cloudflare treats every request from an IP the same, which is a poor way to block bots.
That's an option CloudFlare is offering to their customers now.
> give Tor users a global session that the user has the option to opt into (it would likely make sense for Tor to publish what the impact of this is, and for Cloudflare to link to that from the captcha page), and always let users know a global session is set, in case the user believes that by using Tor they reset the session
Has this been researched or suggested by the Tor project at all? I think it's fairly dangerous to suggest CloudFlare starts offering something like this before it has been vetted.
Should the Tor network be doing some amount of self-policing? (Can it?)
I know this might be against some of its principles, but it seems that Tor is there to create privacy, not to be used for criminal activity. And yes, criminal activity differs based on jurisdiction, but I think fraud is generally something everyone agrees should not be allowed.
I think Tor is great, but I don't find it at all surprising or unlikely that 94% of traffic (not users) is malicious (spam, vulnerability scanning, scraping, etc) because it's likely that malicious traffic is automated while legitimate traffic is not.
That said, I'd also like to hear more about CloudFlare's methodology.
Here's hoping that, given they truly do appear to care about Tor users, they'll revisit the situation and find a better solution.
Here's a link to Cloudflare's blog post and the related comments on HN:
So's the Tor project's.
So's the view of the website operators receiving this traffic.
Cloudflare's post acknowledges the fact that there are at least three major points of view on this problem. The Tor project, by contrast, increasingly strikes me as taking a petulant tone by refusing to acknowledge that and acting (implicitly if nothing else) as if their view is the only one.
To be honest, the core problem here is not Cloudflare. The core problem is that their customers don't really want Tor traffic. Cloudflare is, to my eye, bending over backwards for Tor compared to what I'd expect from a corporation, however it may feel to Tor. I would suggest the Tor project and its users, however annoyed they may be at their day-to-day experience, are ill-advised to take a petulant tone here, lest Cloudflare indeed give their customers the ability to whitelist and blacklist Tor as a whole... because I completely agree with Cloudflare that effectively nobody is going to whitelist it.
Second, volume counts do not equal session counts, and I find it very hard to believe that a non-abusive human session looks the same as an abusive session. If true, then it's Cloudflare that's abusing users and exploiting the situation, not Tor.
Also, Tor users are not blocked, but flagged, which provides data to Google. And Cloudflare's clients likely don't even know about the issue, since according to Cloudflare they're flagging IPs, not Tor.
To your specific question, the parent comment's points were well reasoned, and your response read as completely orthogonal. If it wasn't, then yes, the onus is on you to demonstrate the relevance.
You are not doing the Tor project any favors here.
Cloudflare does not make the world better for everyone, but it does a great job of making the world better for those who have access to resources, such as dedicated IPs and venture capitalists.
Capitalism has failed, and it's starting to force that fail onto the Internet in a big way. We need to be smarter about how we approach building trusted infrastructure. All that starts with how we approach building infrastructure companies.
If it's an infrastructure service model and it ain't bootstrapped and sustainable, don't use it.
The hard fact is that capitalism is a complex type of game theory, with the objective of winning and making more money. If there are those that think that building the best infrastructure we can for all is dependent on building companies that make VCs and limited partnerships even more money, I will do whatever is in my power to dispel those beliefs.
I do this because I believe our future is dependent on it, not because I'm sad about it. If anything, I don't trust the current process.
Unless they attempt to change, sorry, but they are a bad company, and potentially evil if the data is being used to dox Tor users via an NSL.
> eastdakota: I work for CloudFlare. We don't get anything from Google for using reCAPTCHA.
I think you might have gotten a username confused.
Actually, I got it wrong as well, because it's really CloudFlare.