
Quantifying the Impact of “Cloudbleed” - eastdakota
https://blog.cloudflare.com/quantifying-the-impact-of-cloudbleed/
======
camel_gopher
The wording of the post seems to be tilted towards limiting Cloudflare's
potential liability as opposed to objectively quantifying the impact. For
example, the author mentions that the 100th largest website handles fewer than
10B requests per month, which positions it near the median of the table of
anticipated leak likelihoods.

I'm not surprised at this approach as the CEO has a background in law. I think
that, combined with his hubris, shaped the tone of this post from an objective
post mortem to a goal of minimizing the damage to Cloudflare itself.

I met Mr. Prince once at a tech conference. I mentioned that I followed him on
Twitter, at which he freaked out a little and said, "Oh, I hate it when I
run across people who tell me that". When I heard him tell his name to someone
else there, he said his last name was "Prince, like King". I suspect that his
attitude is prevalent at Cloudflare, and may have shaped not only the source
of this issue, but the response to it as well, since CEOs generally influence
the culture.

~~~
tenkabuto
> his attitude

I don't quite follow from your description of his reaction. What sort of
attitude were you attempting to describe/how'd you perceive his attitude to
be?

~~~
fredsir
I think his attitude, as described, was one of manipulation for his own
perceived self-importance and gain. If you mention you follow him on Twitter,
the response is "oh, that's so beneath me and I hate it"; if you don't know
him, he is "Prince, like a king, and an important one at that, so remember me."
Like a person you wouldn't want to meet in a bar.

That is how I understood it anyways.

------
koolba
> 3) after a review of tens of thousands of pages of leaked data from search
> engine caches, we have found a large number of instances of leaked internal
> Cloudflare headers and customer cookies, but we have not found any instances
> of passwords, credit card numbers, or health records; and

Downplaying how bad it was that customer cookies were leaked is what I found
particularly egregious in the original response. A session cookie is _almost_
as bad as a password leak. Usually it'll let you do everything except change
the password (as most sites require the current one as extra validation).

~~~
chipperyman573
Session cookies can be reset much more easily than asking a user to reset
their password. Session cookies also aren't used on multiple sites. It's still
a big deal, but leaking a session cookie is much less dangerous than leaking a
password.

~~~
hueving
>Session cookies can be reset much more easily than asking a user to reset
their password.

This statement is making an assertion about the behavior of vastly different
software backends and the skill-set of the people administering the systems.
Things that may appear trivial to you may not be for the sysadmin in charge of
servers running healthcare applications that have an opaque internal state.

------
geforce
Was talking with colleagues about cloudbleed and S3 problems yesterday.

I don't feel like many people are actually concerned about the implications of
having an internet that isn't an internet anymore, but merely a handful of big
companies hosting everyone.

Or maybe it's me who doesn't understand.

~~~
stickfigure
What are you proposing? We should all serve off of our DSL lines?

~~~
ezequiel-garzon
Colocation, dedicated servers, even VPS. There's plenty of middle ground
between serving off your home server and AWS, Cloudflare et al.

~~~
thehardsphere
And to be blunt, I'd even claim that the middle ground is superior to AWS,
Cloudflare et al. for most people.

I have limited experience with AWS, but they always seem to be having
unacknowledged problems that ruin everything. That's the impression I get from
people at the office who have to use it to support a particular customer.
Nothing we do ever seems to run properly on it, even though our software is
fine everywhere else. We're trying to push that customer off to using their
own hardware, and they agreed with us that it's necessary but for different
reasons.

Cloudflare, I don't even understand why anyone would use it. I understand the
benefits it claims to provide, but you can get those without having an
external MITM proxy that can spew information all over the Internet with one
bug.

------
tptacek
Not everything Cloudflare has written about this breach has been good, but
this is solid. In particular, I'm satisfied that commercially reasonable
efforts have been made to ensure there wasn't deliberate exploitation of the
bug.

What remains now is an accounting of how some of the most sensitive C code on
the Internet was tested prior to the discovery of this bug (by Cloudflare's
own accounting, the underlying error seems to have been present long before it
became symptomatic), and what they're doing about that now.

~~~
hueving
>I'm satisfied that commercially reasonable efforts have been made to ensure
there wasn't deliberate exploitation of the bug.

How would you classify people using all of the crawled data though? It seems
sort of hand-wavy to claim nobody deliberately exploited the bug if they used
leaked session tokens in crawled data to get access to user accounts.

------
markonen
One aspect I haven't seen mentioned yet is that there is a geographic
dimension to what ends up in search engine caches; their crawlers end up
talking to the nearest Cloudflare point-of-presence so any leaked data is also
going to be from requests served in that same general area.

For example, my Cloudflare-using site is in the 1B to 10B requests a month
bracket, meaning 112–1,118 anticipated leaks per Cloudflare's calculations.
It's enough that you'd expect to possibly find some in the search engines, but
we haven't.

One potential explanation is that our usage is heavily concentrated within a
geographic area (the Nordics), and this just isn't where the big search
engines are doing their web crawling from.

~~~
eastdakota
Yes, that's correct and not something our analysis has taken into account.
Search crawlers are more likely to hit certain PoPs. If your content was less
likely to be in one of those PoPs then you're less likely to have leaked data
to one of them.

We also didn't take into account that the distribution of the ~6,500 sites
that triggered the bug wasn't even across our infrastructure. We distribute
load for any site across some fraction of all the servers in any given PoP.
Depending on whether you're on a cluster of servers with more or fewer
vulnerable sites, you'd have been more or less likely to leak data.

For the general case the stats are directionally correct and conservative (we
believe they, if anything, overstate the risk). But a particular
customer's situation could deviate from the expected probabilities for a
number of reasons.

------
mixedbit
The problem is that CloudFlare encourages a fundamentally insecure
architecture. A CDN should be used to host static files on a separate, cookie-
less subdomain. With such an architecture, the consequences of any data leak
from a shared CDN infrastructure are limited.

With CloudFlare, all requests are passed through the CDN. With dynamic,
authenticated requests the benefits of this are dubious: the responses can't
be cached anyway, and an additional HTTP(S)-level hop is required for each
request (along with many additional IP-level hops). The drawbacks are now
obvious: sensitive, unencrypted data of thousands of different sites resides
in the same shared, unisolated process memory. With such an architecture we
will always be one C pointer-manipulation bug away from another leak.

~~~
problems
The reason CloudFlare encourages this is that it isn't meant for use as a
traditional CDN - they're not really a CDN at all. You don't upload content to
them. They're meant as a cache and anti-DDoS proxy more than anything else.

Look at what happens when anyone gets hit with a large-scale DDoS attack
these days: the first thing people will tell them, myself included, is
"CloudFlare". How do you handle a massive spike in load on a tiny, usually
empty server? CloudFlare is the go-to tool for this purpose, especially for
people who can't afford to pay Akamai, BlackLotus, Amazon or others. And
they're impressively cheap for small businesses too. You can't hide from a
DDoS attack or save your tiny DigitalOcean box if your web server isn't hidden
away behind a service like this.

I have nothing against CloudFlare; they've proven themselves extremely
trustworthy in many ways, and even their handling of past security issues has
been very responsible. But this time that's not the case - the downplaying of
the issue is really damaging their reputation in my eyes. I'm a paying
customer and I'm quite disappointed to see this.

~~~
mixedbit
> They're meant as a cache and anti-DDoS proxy more than anything else

You can't cache responses to authenticated requests, so in the case of a DDoS
attack against an HTTP endpoint that requires authentication, the best
CloudFlare can do to save the back-end is drop requests, which makes the
attack successful. I really can't see the benefit of passing authenticated
traffic through CloudFlare.

~~~
problems
That assumes all the users performing the attack are authenticated - an
extremely unlikely scenario. All said users would also have to be able to pass
a captcha, which gets engaged when a site is under active attack.

> the best CloudFlare can do to safe the back-end is to drop requests, which
> makes the attack successful

It can also drop the attack requests, which generally aren't from
authenticated users, and pass the real traffic to the backend.

Only in the case of real, authenticated user traffic is CloudFlare not a
solution. In cases of high unauthenticated or fake-user traffic, or attacks
not operating on HTTP at all, CloudFlare will solve the problem perfectly.
Some of the nastiest attacks, like reflection attacks, don't even touch HTTP
but will quickly knock most servers offline, and may even lead many server
providers to nullroute you to protect their other customers. It'll also bail
you out in cases of high real-user, unauthenticated traffic, like being
posted on HN, reddit, etc.

------
saurik
It was my understanding that people were finding numerous examples of leaked
data that included Authorization headers, which don't always leak _passwords_
but which still leak session keys. Maybe if espadrine from this linked comment
reads this, he could explain more?

[https://news.ycombinator.com/item?id=13719455](https://news.ycombinator.com/item?id=13719455)

FWIW, as an example, Grindr uses CloudFlare: do a Google search for
"authorization: grindr3" and you will find a URL which (no longer cached but
you can still get snippets) contained an authenticated grindr request, which
would be enough to have had temporary access to that person's account.

[https://costumla.com/wild-west-costumes-for-men.html](https://costumla.com/wild-west-costumes-for-men.html)

"... U Edge Certificate Authority1 0 U San Francisco1 0 U California�1p ���U�
N��U � � �f"�0! %���U@ @GET /v3/profiles/ _[REDACTED]_ HTTP/1.1 CF-RAY:
33282514b8d957d7 FL-Server: 15f76 Host: grindr.mobi X-Real-IP: _[REDACTED]_
Accept-Encoding: gzip Client-Accept-Encoding: gzip X-Forwarded-Proto: https
Connect-Via-Https : on Connect-Via-Port: 443 Connect-Via-IP: 104.16.85.62
Connect-Via-Host: grindr.mobi CF-Visitor: {"scheme":"https"} CF-Host-Origin-
IP: _[REDACTED]_ Zone-ID: 22252132 Owner-ID: 2607399 CF-Int-Brand-ID: 100
Zone-Name: grindr.mobi Connection: Keep-Alive X-SSL-Protocol: TLSv1.2 X-SSL-
Cipher: ECDHE-RSA-AES128-GCM-SHA256 X-SSL-Server-Name: grindr.mobi X-SSL-
Session-Reused: . SSL-Server-IP: 104.16.85.62 X-SSL-Connection-ID:
15d1b8de6025864d-DFW X-SPDY-Protocol: 3.1 authorization: Grindr3 ... accept:
application/json user-agent: grindr3/3.0.13.16790;16790;Free;Android 6.0 CF-
Use-OB: 0 Set-Expires-TTL: 14400 CF-Cache-Max-File-Size: 512m Set-SSL-Name:
grindr.mobi CF-Cache-Level: byc CF-Unbuffered-Upload: 0 Set-SSL-Client-Cert: 0
Set-Limit-Conn-Cache-Host: 50000 CF-WAN-RG5: 0 CF-Brand-Name: cloudflare CF-
Age-Header-Enabled: 0 CF-Respect-Strong-Etag: 0 Set-Proxy-Read-Timeout: 100
Set-Proxy-Send-Timeout: 30 CF-Connecting-IP: _[REDACTED]_ Set-Proxy-Connect-
Timeout: 90 Set-Cache-Bypass: 0 Set-SSL-Verify: 0 CF-Force-Miss-TS: 0 Set-
Buffering: 0 CF-Pref-OB: 1 Set-Keepalive: 1 CF-Pref-Geoloc: 1 CF-Use-BYC: 0
CF-IPCountry: _[REDACTED]_ CF-IPType ..."

edit: I have spent the last fifteen minutes pulling the search snippet (edit:
and now an hour; but all on my phone, so this is harder than it should be, and
I also am distracted by other stuff). The way you do this is by walking
through the parts you can see to get nearby context. (I do this a lot to pull
content purged from or inaccessible to or simply updated in Google's cache.)

In so doing, while I haven't been able to recover the session key (likely too
long and unique for a snippet), I _have_ pulled the X-Real-IP address field of
this Grindr user and a profile identifier they were checking out (both of
which I redacted above, but which you could trivially get yourself now using
that context).

CloudFlare: if you think there isn't private data that was leaked, _OR EVEN
PRIVATE DATA STILL ACCESSIBLE_ , you are a bunch of fucking idiots. 1)
Clearing the cache isn't sufficient, as for anything built from short plain
text words we can pull the snippet. 2) IP addresses and session keys count as
"private data". 3) GET requests actually _are_ often sensitive information :/.

~~~
eridius
I don't see anywhere in the post where they claim private data wasn't leaked.
In fact, they explicitly state that authorization cookies were leaked. What
they said they didn't find was passwords, CC info, health records, SSNs, and
customer encryption keys.

They even have a paragraph talking specifically about cookies in GET requests:

> _This is not to downplay the seriousness of the bug. For instance, depending
> on how a Cloudflare customer’s systems are implemented, cookie data, which
> would be present in GET requests, could be used to impersonate another
> user’s session. We’ve seen approximately 150 Cloudflare customers’ data in
> the more than 80,000 cached pages we’ve purged from search engine caches.
> When data for a customer is present, we’ve reached out to the customer
> proactively to share the data that we’ve discovered and help them work to
> mitigate any impact. Generally, if customer data was exposed, invalidating
> session cookies and rolling any internal authorization tokens is the best
> advice to mitigate the largest potential risk based on our investigation so
> far._

~~~
tedunangst
It's still weird to enumerate the things not found while making no mention of
examples, like dating-site messages, that _were_ found. They're obscuring that
by focusing on internal headers, etc. Why does the table with "0 health
records" not have an entry for "X private correspondence"?

~~~
eridius
There's 3 classes of information:

1. Information where improper disclosure is illegal. For example, health
records. Or credit card details, which would presumably be a violation of PCI
DSS.

2. Information that can be actively exploited, but can also be fixed so the
previous disclosure is harmless. This means passwords, authentication tokens,
etc.

3. Information that is merely private in nature.

Cloudflare is focusing on the first two items. The third one is hard to
quantify; what one person may consider private, another person might not care
about. And there's not much that can be done about this kind of disclosure
(beyond scrubbing caches, which they're doing anyway). Also, it's difficult to
automatically identify this type of content (whereas cookies, passwords,
credit card numbers, etc. are pretty easy to detect), and Cloudflare probably
doesn't want its employees spending their time reading through all of the
cached data they can find looking for private info. It's a lot of work and a
huge waste of time since it won't affect anything, and it's private info:
chances are nobody's going to see it normally, and having employees read your
private messages doesn't help anyone and is itself a violation of your
privacy.

~~~
79d697i6fdif
_2. Information that can be actively exploited, but can also be fixed so the
previous disclosure is harmless. This means passwords, authentication tokens,
etc._

I wouldn't call the disclosure harmless. It's unknown if anyone made use of
the leaked information before Cloudflare knew, so accounts should be treated
as compromised unless it's shown otherwise.

Also, leaking user credentials for any system that handles payments or health
info would itself breach PCI/HIPAA, which broadens the scope of systems
effectively breaking the law.

Another thing to keep in mind is that many (most?) token-based authentication
systems don't invalidate tokens. So any tokens captured will be valid until
they expire, and they can't be "changed" without invalidating every
outstanding token (by changing the server key).
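A minimal sketch of the stateless-token problem described above, assuming an HMAC-signed (JWT-style) scheme where the server stores no per-token state. All names and keys here are illustrative, not any real system's:

```python
import hashlib
import hmac

# Hypothetical stateless token scheme: the server keeps no per-token state,
# so it cannot revoke a single leaked token individually.
SERVER_KEY = b"original-server-key"

def issue_token(user_id: str, key: bytes) -> str:
    sig = hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def is_valid(token: str, key: bytes) -> bool:
    user_id, _, sig = token.partition(".")
    expected = hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

leaked = issue_token("alice", SERVER_KEY)
other = issue_token("bob", SERVER_KEY)

# A leaked token stays valid as long as the signing key is unchanged...
assert is_valid(leaked, SERVER_KEY)

# ...and the only way to kill it is rotating the server key, which
# invalidates *every* outstanding token, not just the leaked one.
NEW_KEY = b"rotated-server-key"
assert not is_valid(leaked, NEW_KEY)
assert not is_valid(other, NEW_KEY)
```

This is why, with stateless tokens, "reset the leaked session" effectively means rotating a key and logging everyone out.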

~~~
eridius
No I mean after it's fixed, the previously-disclosed information becomes
harmless. Obviously anyone who exploited it before you reset your
password/tokens may have caused you harm.

> _Another thing to keep in mind is that many (most?) token-based
> authentication systems don't invalidate tokens._

In my experience, changing your password generally invalidates all outstanding
tokens. And yes, this does mean invalidating all of them instead of just the
leaked one, but that's not usually a big deal.

------
remus
It seems to me like the point of this post was to clarify whether the
vulnerability was being actively exploited prior to the announcement, and I
think it does that well.

Just think how easy it would have been to exploit this bug: buy a load of
random domains, host some plausible-looking but malformed content on them,
then set up a few thousand bots to hit those sites and harvest the responses.
Keep that plugging away for a few months and you'd easily collect a lot of
supposed-to-be-private info.

That would have been an order of magnitude worse than having to sift through
caches for scraps. Not to say that what's happened is 'good', but it could
have been a whole lot worse.

------
advisedwang
Disappointing that their response is "damage control" and downplaying impact.

~~~
AwesomeBean
Why? As long as they come along with facts and data, then that's a good thing.

~~~
79d697i6fdif
They only sampled a few thousand leaked responses out of over a million. The
margin of error on their conclusions is 2.5% because they didn't use enough
data, not even close.
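For context, the sample size implied by the blog's stated 99% confidence and 2.5% margin of error can be back-computed with the standard proportion formula (assuming the worst case p = 0.5); the arithmetic below is mine, not a figure from the post:

```python
import math

# Standard sample-size formula for a proportion: n = z^2 * p(1-p) / E^2.
# z = 2.576 is the two-sided normal quantile for 99% confidence.
z = 2.576
margin = 0.025  # 2.5% margin of error
p = 0.5         # worst-case variance assumption

n = (z ** 2) * p * (1 - p) / margin ** 2
print(math.ceil(n))  # 2655, consistent with "thousands of pages" sampled
```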

------
matt2000
But also, we gotta start writing stuff like this parser in safer languages
where this class of bug simply can't happen, right? Every time one of these
breaches occurs, it's basically the same thing. "Data wasn't what we expected,
we read a bunch of extra junk that had nothing to do with the input." It's
kind of crazy it's acceptable at all.

At this point, these kinds of problems are on us as a community, because we
keep using unsafe tools. Every time we choose one of these languages we are
implicitly trading security for performance (a.k.a. money).

~~~
tedivm
If things aren't written in a "safe language" then they should at least be
tested. Valgrind should have picked this issue up.

------
nikisweeting
I've linked to this post from the top of the pirate/sites-using-cloudflare
README. I'm glad that it's a refreshingly level-headed response that isn't
placing the blame on Google or other caches. It's still downplaying the
potential impact slightly, but at least it's a more thorough response than
some of the HN comments we've been seeing from CF employees.

My advice to friends and family currently is to use their "log out all active
sessions" buttons, and to reset only crucial passwords. As TechCrunch
mentioned in its coverage, many sites will likely pay for identity-theft
insurance to cover the chance of a leak, rather than force password resets and
lose their users' trust. It's a risk analysis that each site needs to make
individually, and it may be the right call to eat the losses of ~10 possibly
compromised users instead of forcing millions to reset their passwords.

------
79d697i6fdif
_In total, between 22 September 2016 and 18 February 2017 we now estimate
based on our logs the bug was triggered 1,242,071 times._

Wow, so just as bad as we thought.

 _We did not find any passwords, credit cards, health records, social security
numbers, or customer encryption keys in the sample set._

BUT WAIT, THERE'S MORE

 _The sample included thousands of pages and was statistically significant to
a confidence level of 99% with a margin of error of 2.5%._

Oh, so it could actually be as high as 2.5% leaking encryption credentials.
And if none of the data was found to leak anything sensitive where the fuck is
the dataset? I've been around way too long to take a "study" like this at face
value without third party verification.
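One back-of-envelope way to frame what a clean sample can and can't show is the "rule of three": if n sampled items show zero occurrences, the ~95% upper bound on the true rate is about 3/n. The sample size below is a hypothetical placeholder, not a number from the post:

```python
# "Rule of three": with zero occurrences observed in n independent samples,
# the true per-item rate is below ~3/n at roughly 95% confidence. A clean
# sample therefore *bounds* the leak rate; it cannot prove the rate is zero.
n = 5000  # hypothetical number of sampled pages; NOT a figure from the post
upper_bound = 3 / n
print(f"~95% upper bound on per-page rate: {upper_bound:.4%}")
```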

I also enjoy the straight up lie at the end:

 _We are continuing to work with third party caches to expunge leaked data and
will not let up until every bit has been removed._

That sounds great, right? Well, it's too bad that a lot of 'third parties' are a
box sitting on the corporate network edge that hasn't been touched in 5 years.
Deleting all of this data from third party caches is not physically possible.
In fact it might actually make things worse because it's destroying evidence
of which credentials were leaked.

~~~
dahdum
1.2 million page views over 5 months is almost nothing for the amount of
traffic going through Cloudflare.

~~~
mikeash
It seems to me that the absolute number is what's relevant, not how it
compares to the total amount of traffic. That's 1.2 million potential data
leaks. That it's "out of a bazillion" doesn't change that.

~~~
user5994461
Correction: It's 1.2 million definitive leaks.

The only question is what is in these leaks exactly.

------
lima
Just consider all the private crawlers whose caches _weren't_ purged.

Anything short of invalidating any and all tokens, cookies and sessions is
irresponsible, and CloudFlare should communicate it as such.

------
tyingq
These exact-looking numbers are based on extrapolating from a look at 1% of
all requests, for the sites they think were leaking, over a 10-day period.

They don't get into the full detail of how they knew which sites to look for.
For example, if I created a site with these settings, used it for a week, and
then deleted it...is my site counted? What if I changed the settings back to
normal? Does it slide under their radar? Did they really look for every site
that had those settings at any time, or just any site that had those settings
at the specific time they looked?

To me, it still feels like there's a window where a bad actor could have
created a site that triggers this behavior, and then pumped it with a scraper,
over and over.

~~~
caf
Yeah, exactly - this _"look at a 10 day period and extrapolate"_ idea is
based on the assumption that the rate of hits to the triggering pages was
constant over the whole time, but in the case of looking for a malicious actor
that's essentially begging the question. If there was unnatural traffic in
there, you _can't_ assume its rate would have stayed constant over the whole
period.

------
bifrost
I'm glad to see an article that isn't all puppies and kittens. This wasn't a
fun thing to deal with and certainly wasn't anticipated. The CloudFlare team
did a great job responding to it, IMHO.

"Of the 1,242,071 requests that triggered the bug, we estimate more than half
came from search engine crawlers."

This is very important to sort out. Most people don't think about security,
much less the storage of credentials or identifying data by search engines, so
this is a huge part of the incident response.

I think that what we should take away from this is that even though the bug
existed it was responded to in a reasonable manner.

------
rattray
tl;dr excerpted:

In total, between 22 September 2016 and 18 February 2017 we now estimate based
on our logs the bug was triggered 1,242,071 times.

1) we have found no evidence based on our logs that the bug was maliciously
exploited before it was patched;

2) the vast majority of Cloudflare customers had no data leaked;

3) after a review of tens of thousands of pages of leaked data from search
engine caches, we have found a large number of instances of leaked internal
Cloudflare headers and customer cookies, but we have not found any instances
of passwords, credit card numbers, or health records;

and 4) our review is ongoing.

~~~
tyingq
>we now estimate based on our logs the bug was triggered 1,242,071 times

Based on looking at 1% of their log entries for 10 days of the total ~150 day
period.
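The extrapolation described above reduces to simple scaling arithmetic. The sample trigger count below is a hypothetical chosen so the result lands near the blog's headline figure; only the 1% sampling rate and the 10-of-~150-day window come from the thread:

```python
# Scale a sampled trigger count up to 100% of traffic and the full
# ~150-day exposure window.
sample_triggers = 828        # hypothetical triggers seen in the sample
sampling_rate = 0.01         # 1% of log entries were inspected
window_days, total_days = 10, 150

estimate = sample_triggers / sampling_rate * (total_days / window_days)
print(f"estimated total triggers: {estimate:,.0f}")  # 1,242,000
```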

------
ryanlol
>In total, between 22 September 2016 and 18 February 2017 we now estimate
based on our logs the bug was triggered 1,242,071 times.

So uh, Cloudflare has logs of all page loads since September at the very
least? And I guess with response sizes, since that seems like the most
reasonable way for them to come up with such a number.

Awesome.

~~~
user5994461
I doubt that they have any logs; they have explained before that they only
keep 4 hours of logs because the data volume is too great. FYI: traffic is
over 4M requests per second.

I guess they only kept a counter of requests. A bit of maths and you get an
estimate for the number of leaks.

------
pron
Their messaging is a bit strange. They write:

> For the last twelve days we've been reviewing our logs to see if there's any
> evidence to indicate that a hacker was exploiting the bug before it was
> patched. We’ve found nothing so far to indicate that was the case.

right after they explain:

> Cloudbleed is different. It's more akin to learning that a stranger may have
> listened in on two employees at your company talking over lunch. ... you
> can't know exactly what the stranger may have heard, including potentially
> sensitive information about your company.

So they say they've found no evidence after explaining that the situation is
analogous to a case where no evidence can be found even if information were
stolen.

------
Karupan
I'm not going to comment on the methodology of CloudFlare's analysis, but the
fact that they are logging millions of unencrypted HTTP requests (cookies,
headers and all) is scary! Wasn't the point of SSL/TLS that intermediate
proxies cannot peek into requests?

~~~
jgrahamc
We're not logging "cookies, headers and all".

~~~
Karupan
My bad! Just re-read the article and it did say information about requests and
responses. Thanks for the clarification.

------
_kst_
A theoretical question.

There seems to be some question about whether any passwords were actually
leaked, but in principle some password data could have gotten out.

What would actually have been leaked, a cleartext password or the
encrypted/hashed password stored on an affected server (or both)?

I have accounts on a number of sites that were affected by Cloudbleed, as
indicated by
[http://www.doesitusecloudflare.com/](http://www.doesitusecloudflare.com/),
and I've already changed my passwords on those sites. But the passwords I use
are randomly generated and should be practically unguessable given a hashed
version. Would I have been at risk if I had left my passwords alone?

~~~
orthecreedence
> What would actually have been leaked, a cleartext password or the
> encrypted/hashed password stored on an affected server (or both)?

Depends on the app. The apps I write will do a PBKDF2 w/ 20K rounds on a
password _before_ sending to the server, which then hashes that hash again
before storing/comparing. I suspect most apps just sent the username/password
plaintext over SSL, meaning the plaintext passwords would have been leaked if
the app used Cloudflare to terminate SSL.

That said, if someone got the PBKDF2ed password hash from the SSL payload,
they could just as easily use that to log into an account, but they'd have to
at least do a little bit of work to break the hash if they wanted to actually
crack the password.

In other words, you were absolutely right to change all your passwords.
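A rough sketch of the client-side pre-hash orthecreedence describes, assuming PBKDF2-SHA256 with 20K rounds and an email-derived salt (both illustrative assumptions, not any known app's scheme):

```python
import hashlib

# Client-side pre-hash: run PBKDF2 over the password before it ever crosses
# the wire, so a TLS-layer leak exposes only the derived hash, never the
# reusable plaintext password.
def client_side_hash(password: str, email: str) -> str:
    # Derive a per-user salt from something stable, e.g. the email address
    # (an illustrative choice; real schemes may fetch a server-issued salt).
    salt = hashlib.sha256(email.lower().encode()).digest()
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 20_000)
    return dk.hex()

# What the server receives (and then hashes again before storing):
sent = client_side_hash("correct horse battery staple", "user@example.com")
print(sent)
```

Note the caveat from the comment still applies: the derived hash is itself the credential, so a leak of it still grants login; pre-hashing only protects the plaintext password from being reused elsewhere.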

------
gwu78
If users were discerning based on the inner workings of this organization,
they should be switching their nameservers away from Cloudflare. Quantifying
the number of domain names CF has in its zones before and after this reported
incident would be interesting. Sadly, I doubt there will be much change.

------
jlg23
TL;DR: "Since this is just a sample, it is not correct to conclude that no
passwords, credit cards, health records, social security numbers, or customer
encryption keys were ever exposed."

------
HalfwayToDice
My website gets 75,000,000 total requests per month through CloudFlare, so
they estimate 6-11 leaks.

I am happy with the way CloudFlare have responded to this.

------
79d697i6fdif
Tagging along on the top reply here, but does anyone else notice some serious
HN gaming going on in this thread?

Accounts commenting that have made 10 comments in 5 years; people defending
CF who, going through their profiles, are clearly linked to CloudFlare or may
even be employees; top replies suddenly being near the bottom, replaced by
posts supporting CloudFlare with statements that don't match what was actually
in the blog post.

Just seems really, really fishy to me.

~~~
komali2
I doubt anything more nefarious is happening than a flood of CF employees
hitting the thread. Is that gaming?

~~~
JumpCrisscross
It's, at the very least, disingenuous.

