I was lead developer on Arbor Network's DDoS product in the early 2000s (I left in 2005 to start Matasano Security). My information on this is surely dated, but people seem to still be using the same terminology now as then.

You can break down DDoS into roughly three categories:

1. Volumetric (brute force)

2. Application (targeting specific app endpoints)

3. Protocol (exploiting protocol vulnerabilities)

DDoS mitigation providers concentrate on 1 & 3.

The basic idea is: attempt to characterize the malicious traffic if you can, and/or divert all traffic for the target. Send the diverted traffic to a regional "scrubbing center"; dirty traffic in, clean traffic out.

The scrubbing centers buy or build mitigation boxes that take large volumes of traffic in and then do heuristic checks (liveness of sender, protocol anomalies, special queueing) before passing it to the target. There's some in-line layer 7 filtering happening, and there's continuous source characterization happening to push basic network-layer filters back towards ingress.
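
To make "protocol anomalies" concrete, here's a toy sketch in C of one such check: flagging TCP flag combinations that never occur in legitimate traffic. The function and the specific combinations chosen are illustrative, not from any real product:

    #include <stdbool.h>
    #include <stdint.h>

    /* TCP flag bits, as laid out in the TCP header's flags octet. */
    #define TH_FIN 0x01
    #define TH_SYN 0x02
    #define TH_RST 0x04
    #define TH_PSH 0x08
    #define TH_URG 0x20

    /* Toy protocol-anomaly check: flag TCP segments whose flag
     * combinations never occur in legitimate traffic. Real mitigation
     * boxes run many such checks in parallel, often in hardware. */
    static bool tcp_flags_anomalous(uint8_t flags)
    {
        if ((flags & (TH_SYN | TH_FIN)) == (TH_SYN | TH_FIN))
            return true;    /* SYN+FIN: classic scan/attack pattern */
        if (flags == 0)
            return true;    /* NULL scan: no flags at all */
        if ((flags & (TH_FIN | TH_PSH | TH_URG)) == (TH_FIN | TH_PSH | TH_URG))
            return true;    /* XMAS scan */
        if ((flags & TH_SYN) && (flags & TH_RST))
            return true;    /* SYN+RST is meaningless */
        return false;
    }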

You can do pretty simple statistical anomaly models and get pretty far with attacker source classification, and with tracking targets so you can be selective about what needs to be diverted.
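
For flavor, a toy sketch (in C; the bucket count, smoothing factor, and threshold are all made up) of the kind of per-source model I mean: track an exponentially weighted mean and variance of each source's packet rate, and flag sources that jump well above their own baseline:

    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-source rate model. A real system would key on
     * more than a hashed source address, but the shape of the
     * classifier is about this simple. */
    #define BUCKETS 65536
    #define ALPHA   0.05   /* EWMA smoothing factor (assumed) */
    #define K_SIGMA 4.0    /* std-devs above baseline = anomalous (assumed) */

    struct src_stats {
        double mean;   /* EWMA of observed packets/sec */
        double var;    /* EWMA of squared deviation */
    };

    static struct src_stats table[BUCKETS];

    static bool source_anomalous(uint32_t src_ip, double rate_pps)
    {
        struct src_stats *s = &table[src_ip % BUCKETS];
        double dev = rate_pps - s->mean;
        bool anomalous = s->var > 0.0 && dev > K_SIGMA * sqrt(s->var);

        /* Update the baseline after testing against it. */
        s->mean += ALPHA * dev;
        s->var  += ALPHA * (dev * dev - s->var);
        return anomalous;
    }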

A lot of major volumetric attacks are, at the network layer, pretty unsophisticated; they're things like memcached or NTP floods. When you're special-casing traffic to a particular target through a scrubbing center, it's pretty easy to strip that kind of stuff off.
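
Concretely: reflection floods arrive as UDP from the abused service's well-known source port, because the attacker spoofed the victim's address when querying open servers. Once the target's traffic is already flowing through the scrubbing path, stripping them is roughly one comparison. A toy sketch in C (the port numbers are the real service ports; the drop-everything policy is a simplification):

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified reflection-flood classifier keyed on UDP source port. */
    static bool is_reflection_flood(uint8_t ip_proto, uint16_t udp_src_port)
    {
        if (ip_proto != 17)     /* 17 = UDP */
            return false;
        switch (udp_src_port) {
        case 123:    /* NTP (monlist amplification) */
        case 11211:  /* memcached */
        case 1900:   /* SSDP */
            return true;
        default:
            return false;
        }
    }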




It sounds like we were working on the same problem at the same time and came to roughly the same conclusion (see my sibling comment about eBay's DDoS mitigation system). :)


>"The scrubbing centers buy or build mitigation boxes that take large volumes of traffic in and then do heuristic checks (liveness of sender, protocol anomalies, special queueing) before passing it to the target. There's some in-line layer 7 filtering happening, and there's continuous source'

Were these heuristics done in hardware then? ASICs, FPGAs? Could you elaborate on what the "liveness of sender" and "special queueing" heuristics are?


Yeah, custom hardware (ASIC/FPGA, depending). Liveness is trying to detect things like Slowloris [0], using timeouts, SYN cookies (which ask the client to do some minor work), etc.

[0] - https://en.wikipedia.org/wiki/Slowloris_(computer_security)
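
A rough sketch of the SYN cookie idea, not the exact Linux scheme: the server encodes the connection 4-tuple and a coarse timestamp into the SYN-ACK's initial sequence number instead of keeping state, and only a client that really received the SYN-ACK can echo it back in its final ACK. That round trip is the "minor work" that proves liveness and defeats spoofed-source SYN floods. The hash here is a toy:

    #include <stdint.h>

    /* Toy hash, purely illustrative. */
    static uint32_t mix(uint32_t a, uint32_t b, uint32_t secret)
    {
        uint32_t h = a * 0x9e3779b9u ^ b ^ secret;
        h ^= h >> 16;
        return h * 0x85ebca6bu;
    }

    /* Illustrative SYN-cookie construction: top bits carry a coarse
     * timestamp so stale cookies can be rejected; low bits bind the
     * cookie to the connection 4-tuple via a keyed hash. */
    static uint32_t syn_cookie(uint32_t saddr, uint32_t daddr,
                               uint16_t sport, uint16_t dport,
                               uint32_t minute, uint32_t secret)
    {
        uint32_t ports = ((uint32_t)sport << 16) | dport;
        return (minute << 24) | (mix(saddr ^ daddr, ports, secret) & 0xffffff);
    }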


It was silicon (or, at least, optimized general compute) in the mid-2000s, but who knows anymore? It could all be userland TCP/IP on Linux today. High speed network processing got weird.


It's a mix depending on what market segment you're looking at. I watch it from afar. There's still a lot of silicon use, especially for accelerating TCP/IP or decryption. I also found one recently that you all might enjoy, with slides on using a GPU:

http://on-demand.gputechconf.com/gtc/2017/presentation/s7468...


Thanks for the link. This is really interesting. Might you know if the talk that accompanied this is available somewhere?


A10 actually lists the number of FPGAs in their mitigation appliances for sizing purposes.

"Select Thunder TPS models have high-performance FPGA-based Flexible Traffic Acceleration (FTA) technology to detect and mitigate up to 60 common attack vectors immediately in hardware — before data CPUs are involved. "


>"High speed network processing got weird."

I was curious about this statement. Can you elaborate, weird how?


It shifted from hardware-intensive (ASICs, FPGAs) to software, so we can now do high-speed packet mangling on commodity hardware. Initially it was pretty involved with DPDK etc., but it's much easier as of late with XDP+eBPF (see the sketch after the links below).

e.g.

https://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/

https://netdevconf.org/2.1/papers/Gilberto_Bertin_XDP_in_pra...

https://people.netfilter.org/hawk/presentations/OpenSourceDa...
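
For a feel of what that looks like, here's a minimal XDP sketch in the spirit of those tutorials: the same UDP source-port check from upthread, but run in the NIC driver before the kernel stack ever sees the packet. Compile with clang -O2 -target bpf and attach with ip link; the ports and policy are illustrative, and VLAN/IPv6 handling is omitted:

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/in.h>
    #include <linux/ip.h>
    #include <linux/udp.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int drop_reflection(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* Bounds checks are mandatory: the verifier rejects the
         * program if any header read could run past data_end. */
        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end || ip->protocol != IPPROTO_UDP)
            return XDP_PASS;

        struct udphdr *udp = (void *)ip + ip->ihl * 4;
        if ((void *)(udp + 1) > data_end)
            return XDP_PASS;

        __u16 sport = bpf_ntohs(udp->source);
        if (sport == 123 || sport == 11211)   /* NTP, memcached */
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";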


Not the GP, but I worked in the DDoS space for a spell a few years ago, helping develop the company's 3rd generation product. Their 1st generation was ASIC-based; 2nd generation a manycore CPU (Tilera) running a custom OS mostly written in assembly; 3rd generation used the next generation of that CPU (Tile GX) which provided lots of dedicated highly-parallel network processing hardware (including a programmable coprocessor), some of which was designed following feedback from our CTO.

The Tile GX (including the hardware) was available for general-purpose use from Linux (which we ran), but could also be programmed directly to do lots of packet classification even before the packets got to the CPU and main memory (which we did). The Cavium network processor worked similarly.


What happens when it doesn't work? For instance, why does something like Mirai happen? The first D is too D?


Yeah, I don't know. The biggest Mirai traffic spike involved a pretty simple volumetric GRE attack; GRE is its own IP protocol, so I mean it's trivial to filter, but also lots of middleboxes won't even forward it in the first place. There was some confusion about how bad the Mirai attack was because the propagation code for Mirai, independent of the DDoS attacks, managed to crash some routers.

It's definitely not the case that all DDoS attacks can be reliably cleaned up in an ISP scrubbing center.
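
("Trivial to filter" because GRE rides directly on IP as protocol 47, so the rule is a single field compare on the IP header, the kind of thing any router ACL or scrubbing box applies at line rate. A toy illustration in C:)

    #include <netinet/in.h>   /* defines IPPROTO_GRE == 47 */
    #include <stdbool.h>
    #include <stdint.h>

    /* One field compare on the IP header's protocol byte. */
    static bool is_gre(uint8_t ip_protocol)
    {
        return ip_protocol == IPPROTO_GRE;
    }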


You call up Krebs and the FBI and they'll dox/arrest the attacker.


No.



