
In this case you would have been unaffected if you had used CloudFlare for DNS, rather than using Dyn for DNS with a CNAME from CloudFlare to it. Using CloudFlare for DNS is our preferred configuration.
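If you're not sure which configuration a zone is on, here's a minimal sketch in Go that checks where a hostname's CNAME lands (the hostname is hypothetical; dynect.net is the domain Dyn's managed DNS answers under):

    // Check whether a hostname is CNAMEd off to Dyn: if the canonical
    // name lands on dynect.net, resolution depends on Dyn being up.
    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        host := "support.example.com" // hypothetical CNAMEd hostname
        cname, err := net.LookupCNAME(host)
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        if strings.HasSuffix(cname, ".dynect.net.") {
            fmt.Printf("%s CNAMEs to Dyn (%s); a Dyn outage breaks it\n", host, cname)
        } else {
            fmt.Printf("%s resolves via %s\n", host, cname)
        }
    }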

Sorry about this: just posted a blog post. It appears to be a Dyn issue, but it affects a lot of our customers (including support.cloudflare.com, which is Zendesk).

https://blog.cloudflare.com/the-internet-is-a-cooperative-sy...


The affected zones are those CNAMEd from CloudFlare to Dyn.

Is there an example hostname that's configured on Dyn and not resolving?

We should have it fixed very shortly; we're 99.9% sure of the cause.

Sorry -- our ops and engineering teams are looking into this and will update/resolve as quickly as possible.

https://www.cloudflarestatus.com/incidents/m8hf8q8vs81d


Good luck, this is really rough. I encountered a 101 error on a random site before I saw this and was a bit baffled.

What does this mean for startups/the tech industry? I assume not very much as long as it is limited to Greece, but more so if it spreads to Portugal, Spain, and Italy, right?

CloudFlare | https://www.cloudflare.com/ | San Francisco, CA; London, UK; Singapore, SG | VISA, ONSITE

CloudFlare is building a better Internet -- performance and security optimization at the edge. Our long-term goal is to give every site the same performance, security, and reliability that major sites like Google and Facebook achieve, without any specialized network hardware or complicated administration. We enhance over 2 million sites, including this one.

We're hiring for a variety of roles -- we started the year at 128 people, will be at 175 by the beginning of August, and hope to end around 256. This is a perfect time to join -- product-market fit is established, but there's a lot of great engineering, product, sales, and support work to be done. We've publicly said we're profitable and on track for long-term success.

You may wish to check out our blog to see the kinds of engineering work we do. (https://blog.cloudflare.com/).

https://www.cloudflare.com/join-our-team has a listing of positions.

We're always hiring for operations/SRE, sales, general systems engineering (mostly Go, nginx, and networking, as well as DNS at scale), and web development.

Specific roles we're keen to fill include:

1) Billing engineer -- someone to take the lead as we build a new billing system.

2) VP Engineering -- continuing to build and scale a great engineering team.

3) Principal Engineer -- owning the WWW stack which we use for control and administrative functions internally and for customers, and managing a move to a modern microservices model.

We've recently opened a Singapore office and are hiring sales/support/operations personnel there.

If you're interested, please apply through the https://www.cloudflare.com/join-our-team link.


What I'd love to see is "how did network log processing happen in 1985, 1990, 1995, 2000, 2005, 2010, and 2015, and how will it look in 2020". The ratios between network traffic, processing, etc. are all changing, and DDoS and the like are evolving.

My perception is that it's getting harder to monitor large systems at the same time as it's getting easier to just push raw packets (in a distributed, CDN-type sense). The complexity of running enterprise networks in the 1990s came from lots of non-IP legacy protocols, router bugs, etc., and from the limitations of silicon pushing a lot of things into Layer 2 vs. Layer 3 processing. There were weird discontinuities around Cisco slacking off around 2000, too.


So interesting... flow (first NetFlow) has been around since the mid-90s, but it's always been a volume problem relative to the (single) computers of the day.

The big issue we kept hearing from folks in the netops community is that the current 'cheat' is to take flow (now sFlow and IPFIX as well) and just instantly aggregate it into tsdb-like summaries.
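As a toy illustration of that cheat (the Flow struct and its fields are invented here; real collectors parse the sFlow/IPFIX wire formats), summing into per-dimension counters throws away the raw records the moment they arrive:

    // Toy version of the "instant aggregation" cheat: collapse raw flow
    // records into per-source byte counters on arrival and keep only a
    // top-N summary. (Struct and field names are invented.)
    package main

    import (
        "fmt"
        "sort"
    )

    type Flow struct {
        SrcIP, DstIP string
        DstPort      uint16
        Bytes        uint64
    }

    func topTalkers(flows []Flow, n int) []string {
        bytesBySrc := make(map[string]uint64)
        for _, f := range flows {
            bytesBySrc[f.SrcIP] += f.Bytes // per-flow detail is gone after this
        }
        srcs := make([]string, 0, len(bytesBySrc))
        for s := range bytesBySrc {
            srcs = append(srcs, s)
        }
        sort.Slice(srcs, func(i, j int) bool {
            return bytesBySrc[srcs[i]] > bytesBySrc[srcs[j]]
        })
        if len(srcs) > n {
            srcs = srcs[:n]
        }
        return srcs
    }

    func main() {
        flows := []Flow{
            {"10.0.0.1", "10.0.1.1", 443, 1200},
            {"10.0.0.2", "10.0.1.1", 53, 300},
            {"10.0.0.1", "10.0.1.2", 443, 800},
        }
        fmt.Println("top talkers:", topTalkers(flows, 2))
    }

Once the flows have been folded into those counters, the original records can't be reconstructed for later forensics.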

The challenge is that you don't actually know in advance which flows you'll want the details for -- whether you're investigating perf, availability, security, or just efficient ops.

So that's the current big problem -- how to get, store, and analyze thousands to tens of thousands to millions (for big networks) of flow records per second.

The 'old' problem from 1996 to 2006 of devices not being able to export has mostly gone away. There are bugs every so often (like the Juniper 'sampled' issue that ras called out, which got resolved last year), but newer Junipers and Ciscos can do line-rate NetFlow/IPFIX at ~100-200k flows/sec per device, or sometimes per card. Which feeds back into point #1: it's not that useful to have extra resolution if your systems are just doing top N on 10 fixed aggregation dimensions...

What I'd really like to see the silicon do is tcpstats, app decode, and latency data. All of these have applications for performance and security (even perf data can hint at botnets vs. humans, for example, and of course URLs let you look for SQL injection and distinguish app vs. network problems).

The load-balancer folks and some of the "Network Packet Brokers" have this on their roadmaps, or the beginnings of it, but it'd be better to have it on Broadcom...

Or, an even more basic thing: not just the per-packet sampling + sFlow that Broadcom and other chips do today, but also a primitive for "flow sampling" -- capturing all packets for 1/Nth of flows (say, 1/1000th) -- so sampled full-flow decodes could run on a daemon on, say, Cumulus or Arista.
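A minimal sketch of that primitive in Go, assuming hash-based selection on the 5-tuple (the tuple fields, the 1/1000 rate, and the choice of FNV are all illustrative):

    // Hash-based "flow sampling": instead of keeping 1-in-N packets
    // (sFlow style), hash the 5-tuple and keep every packet of roughly
    // 1/N of the flows, so a daemon sees whole flows it can decode.
    package main

    import (
        "fmt"
        "hash/fnv"
    )

    const N = 1000 // sample ~1/1000th of flows

    type FiveTuple struct {
        SrcIP, DstIP     string
        SrcPort, DstPort uint16
        Proto            uint8
    }

    func keepFlow(t FiveTuple) bool {
        h := fnv.New32a()
        fmt.Fprintf(h, "%s|%s|%d|%d|%d", t.SrcIP, t.DstIP, t.SrcPort, t.DstPort, t.Proto)
        return h.Sum32()%N == 0 // the same flow always hashes the same way
    }

    func main() {
        t := FiveTuple{"192.0.2.1", "198.51.100.7", 51512, 443, 6}
        fmt.Println("keep all packets of this flow:", keepFlow(t))
    }

Because the keep/drop decision is a pure function of the tuple, every packet of a selected flow gets captured, which is what makes full TCP/app decodes possible on the sampled subset.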


I know these guys really well -- Avi ran a local ISP in Philadelphia, where I grew up, and back when the Internet was NSF-controlled he helped me get access through a local POP, and then supported a FreeNet project (coincidentally set up by Eric S. Raymond!). He also invested in HavenCo, was really the main "adult supervision" there, and is the smartest and most innovative networking guy I know.

A couple of their founders were early people from CloudFlare, too.

I've met them, and it's a very solid product and solid team. If you want good visibility into complex, large-scale networks, this is a tool to watch.

Funding announcements are boring, but I hope they'll post more to HN in the future about how they're doing high-volume analytics.


And I lived on/ran a datacenter from one of Churchill's anti-aircraft castles.

https://en.wikipedia.org/wiki/Principality_of_Sealand


Guessing you have some interesting stories about Sealand. I know it's asking a lot and I understand if you don't have the free time, but do you have any you don't mind sharing with us?

1) Having a dog on a platform like that (basically a 5,000-square-foot trailer park with no "park") is pretty crappy for both the dog and the humans unfortunate enough to be stuck with it (not the owners of the dog).

2) 30-year-old canned meat products are in fact still nominally edible.

3) Spending all day on the Internet from a random building in the middle of nowhere is actually not THAT different from my daily life today, just with less traffic and higher-latency Internet.


