IP 192.168.1.75.50950 > 22.214.171.124.1900: UDP, length 95
IP 192.168.1.71.1026 > 192.168.1.75.50950: UDP, length 249
I'm not sure what the moral is here. But if you ever see some UDP packets from a weird port to a weird port, maybe it's this SSDP case.
If they didn't keep it the same, the OS wouldn't always be able to tell which connection a packet belongs to, because you can have multiple connections open with the same computer at the same time to the same dest port, and the only differentiation would be the source port.
UDP does not have connections.
> and the only differentiation would be the source port.
... or some request/session/flow id in the application layer protocol. Some protocols use UDP in this way, some don't; UDP itself doesn't care.
UDP does not have connections, but the OS does have a concept of UDP connections to a degree, in the form of packet filtering/routing. Point being, if you send a DNS request (for example), the source IP/port and dest IP/port are how the OS will decide which packets to route back to you when the DNS server responds. If the responding DNS server changes the source port, the OS will not route that packet to the original socket because the source port does not match. You can still make it work, but you would have to be already listening for packets from that port (one way or another), so you would have to know beforehand that it is going to use a different port.
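Roughly, in Python - a minimal sketch, with a placeholder resolver address (192.0.2.53) and a dummy payload standing in for a real DNS query:

    import socket

    # connect() on a datagram socket doesn't negotiate anything; it just tells
    # the kernel to deliver only datagrams arriving from 192.0.2.53:53 to this
    # socket (and lets us use send()/recv() instead of sendto()/recvfrom()).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect(("192.0.2.53", 53))   # placeholder resolver, not a real one
    sock.send(b"\x00")                 # dummy payload, not a real DNS query

    # If the server answered from a different source port, the kernel would not
    # deliver that datagram to this socket: recv() would simply time out.
    sock.settimeout(2.0)
    try:
        response = sock.recv(512)
    except socket.timeout:
        response = None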
> ... or some request/session/flow id in the application layer protocol. Some protocols use UDP in this way, some don't; UDP itself doesn't care.
You still have to get the packets though, and the OS has no idea about any application layer routing. If you want to get UDP packets from a bunch of different ports, you have to be listening on those ports.
Edit: It's true I was playing a bit loose with the terminology (UDP is connectionless), but the behavior of packet routing and how changing the source port would mess with that is what I was getting at. If you want to be more correct, replace "connection" with "socket" in my original comment.
The filtering is completely optional.
> Point being, if you send a DNS request (for example), the source IP/port and dest IP/port are how the OS will decide which packets to route back to you when the DNS server responds.
That depends on how the requesting resolver has configured the socket.
> If the responding DNS server changes the source port, the OS will not route that packet to the original socket because the source port does not match.
> You can still make it work, but you would have to be already listening for packets from that port (one way or another), so you would have to know beforehand that it is going to use a different port.
Yes, obviously you have to know the application protocol you are trying to speak and how it uses UDP before you try to speak it.
> You still have to get the packets though, and the OS has no idea about any application layer routing.
Which is why application layer routing is called application layer routing.
> If you want to get UDP packets from a bunch of different ports, you have to be listening on those ports.
No, you listen on local ports, not on remote ports.
> If you want to be more correct, replace "connection" with "socket" in my original comment.
Well, technically, some minor details would be more correct - but the fundamental assumption that you can only receive datagrams from one remote address/port with a given socket is just completely and utterly wrong, and not just in the sense that it's a theoretical possibility, but it's a perfectly normal use case. To take an obvious example, a common configuration for an OpenVPN server is to accept authenticated packets from any remote address and automatically switch to changing remote addresses for the sending direction, so when the client changes addresses, the OpenVPN session just keeps going.
As long as you don't connect() a datagram socket in the BSD sockets API, you will receive datagrams from any remote address (and you'll have to specify remote addresses using sendto() when transmitting).
And your explanation is at the very least misleading, bordering on wrong. Pretty much no one (except maybe where the protocol spec explicitly requires such behaviour) would implement a client that opens a socket per server/per request but binds them all to the same local address/port.
Either you use one socket for all requests, in which case you don't connect(), so you receive all the responses, and thus would also receive datagrams from addresses/ports that you didn't send to, and instead you would do the matching of responses to requests in userspace, even if potentially based on the sender's address/port.
Or you use one socket per server/per request and let the OS assign you a free port per socket, in which case the local address/port is perfectly sufficient for the OS to route received datagrams to sockets. In the latter case, it's common to simplify your code by letting the OS handle the filtering of source addresses, but that's all it really is, filtering--actual routing based on remote addresses by the OS is not what normally happens and not why varying the response source port would not work with many protocols.
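To make the first design concrete, a minimal sketch (invented server addresses, dummy payloads):

    import socket

    # One unconnected socket for all requests: bind once, let the OS pick a
    # free local port, and match responses to requests in userspace.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 0))

    for server in [("192.0.2.1", 53), ("192.0.2.2", 53)]:  # invented addresses
        sock.sendto(b"\x00", server)                       # dummy payload

    # No connect() was called, so recvfrom() hands us datagrams from ANY remote
    # address/port - including ones we never sent to. Filtering is our job.
    sock.settimeout(2.0)
    try:
        data, sender = sock.recvfrom(2048)
        print("datagram from", sender)
    except socket.timeout:
        pass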
Sounds exactly like UPnP to me!
- listens on the multicast group 239.255.255.250:1900;
- responds over unicast, hence the ephemeral source port (see the probe sketch below).
So we use DPI to drop SSDP DDoSes.
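For reference, a rough sketch of that discovery exchange - an M-SEARCH probe to the standard SSDP multicast group, with unicast replies landing on our ephemeral source port:

    import socket

    # Standard SSDP discovery request, per the UPnP spec.
    MSEARCH = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",
        "ST: ssdp:all",
        "", "",
    ]).encode()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    sock.sendto(MSEARCH, ("239.255.255.250", 1900))  # multicast group, port 1900

    try:
        while True:
            # Replies are unicast, one per device, sent back to whatever
            # ephemeral source port the OS picked for us above.
            data, sender = sock.recvfrom(2048)
            print(sender, data.splitlines()[0])
    except socket.timeout:
        pass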
A scheme to strong-arm the adoption of BCP 38 is key to stopping these attacks from growing. IoT has shown us that expecting device updates to disable these UDP protocols is a lost battle.
Easily done: "Follow some standards and RFCs or get put on a global blacklist of companies to not do business with."
How would this even be possible? Home routers have to NAT everything. Normally you have to set up reverse NAT to get ports forwarded to the LAN.
Now, let's say I hook up a printer to a switch in that configuration. Is it smart enough to not respond to UPnP coming from globally routable addresses?
This is why ingress filtering is important.
WTF? Refusing to respond to UPnP requests from globally routable addresses would be utterly idiotic. Why should it be impossible to print from some machine just because it has a globally routable address?
Because it's an Internet.
Sure, it's uncommon behavior and not what most people want, but let's not completely give up on the notion of being a peer on the network.
The printer serves up (printing) just like a web server serves up web pages. You should be able to run a web server and participate as a peer, globally.
The security standard we use for everything in our network is: if it would be insecure if hooked directly up to the Internet, it is broken.
Firewalls encourage poor security by creating a false sense of security and leading to developer and system administrator complacency. IMHO it would be better to get rid of them and let the insecure junk burn. To prevent DDoS exploitation, the best approach would be to have grey hats take the latest exploits and mass-brick exploitable devices.
We'd learn our lesson and then we'd have secure devices.
(I don't think firewalls are a good solution in general, but I would agree that they might be the least-bad way to handle crappy embedded/IoT-type devices).
I interpreted this as routers that themselves have UPnP implementations, probably intended to advertise themselves to clients on the local network, but that listen on all network interfaces by mistake.
I'm glad to see miniupnp is still in active development: https://github.com/miniupnp/miniupnp but I can't work out whether it's vulnerable by default.
If a device is listening to UPnP on the WAN interface, the fault is not on UPnP but on whoever configured it to be open on the WAN. IMO, all of these zeroconf protocols should be limited to responding back only to the local segment and not allowed to traverse gateways.
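A toy sketch of that rule - the /24 below is a placeholder for whatever segment the device actually sits on:

    import ipaddress

    # Respond only to the local segment: before answering an SSDP query, check
    # that the sender falls inside our interface's subnet.
    LOCAL_SEGMENT = ipaddress.ip_network("192.168.1.0/24")

    def should_respond(sender_ip: str) -> bool:
        return ipaddress.ip_address(sender_ip) in LOCAL_SEGMENT

    print(should_respond("192.168.1.71"))    # True: same segment
    print(should_respond("22.214.171.124"))  # False: off-segment source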
I don't see how it is at all reasonable to shift blame from a protocol that assumes the world can be trusted to the unattainable goal of "every single network in the entire world should only generate trusted data: then the problem would be solved".
> Internet providers should internally collect netflow protocol samples. The netflow is needed to identify the true source of the attack. With netflow it's trivial to answer questions like: "Which of my customers sent 6.4Mpps of traffic to port 1900?". Due to privacy concerns we recommend collecting netflow samples with largest possible sampling value: 1 in 64k packets. This will be sufficient to track DDoS attacks while preserving decent privacy of single customer connections.
OMFG. Do you want deanonymization attacks? Because this is how you get deanonymization attacks :/. The right form of solution here is not to encourage ISPs to log even more of our traffic (a practice I wish were illegal), but to try to kill off UPnP through every form of leverage possible (even if it breaks things).
I'd say this is "so disappointing", but I guess I shouldn't expect much from the company that tried its damnedest to argue that nothing of importance was leaked from Cloudbleed, even when you could still recover Grindr requests complete with IP addresses that they had managed to leak well after they tried to claim that data had been scrubbed :/.
A) IP spoofing
On IP spoofing I've said plenty already: https://idea.popcount.org/2016-09-20-strange-loop---ip-spoof... There are two major points:
- We will always have DDoS-vulnerable UDP protocols. In the past we had DNS. Then we had NTP. Now we have SSDP. The next one is going to be some gaming protocol. We should fix them as we go, but a more comprehensive solution is to actually fight the spoofing.
- Even without using amplification, with IP spoofing it's possible to launch a direct attack, which will be untraceable. We regularly see 150Mpps+ packet floods going _direct_ from the attackers to our servers. The ISPs are clueless. There is no way for anyone to trace the true source of the attack (without netflow, that is).
This brings us to the second point - netflow. You say - the ISPs are incompetent, they do not have netflow, and this is _good_. No, it's not good. The ISPs can track you / deanonymize you anyway, but when I ask them: "hey guys, I see this 150Mpps flood from your network, can you do something about it?" they say - "no, we can't identify the source because the IPs are spoofed". Yes, I hereby recommend that each of the ISPs should take care of their network. Be able to answer historical questions about DDoS. That means the netflow collection point will have statistical metadata about customer connections (1 in 64k connections will have saved data - source port/ip, dest port/ip, length, packets, bytes). This might be used to attack your privacy - but the ISP can do much worse things anyway. Doing netflow right will allow us to finally trace the IP spoofing.
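To make that concrete, a toy model of the sampled record - the Packet shape and field names are mine, not any real NetFlow schema:

    import random
    from dataclasses import dataclass

    SAMPLING_RATE = 64 * 1024  # 1 in 64k packets

    @dataclass
    class Packet:  # invented shape, just enough for the sketch
        src_ip: str
        src_port: int
        dst_ip: str
        dst_port: int
        length: int

    def maybe_sample(pkt: Packet) -> Packet | None:
        """Keep metadata for roughly 1 in 64k packets - enough to answer
        'which customer sent 6.4Mpps to port 1900?' after the fact."""
        if random.randrange(SAMPLING_RATE) != 0:
            return None
        return pkt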
I really think that DDoS is a threat to the internet as we know it. Think about the centralization that it causes: can your server sustain a trivial 100Gbps SSDP attack? I really think that doing netflow right will allow us to keep the decentralized internet.
The problem is that the Internet as designed simply supports this, and you can't fix it unless you fix the entire Internet at once; that is harder and less realistic than poking at the problem anywhere else, specifically due to the nature of the attack: it is an amplification attack... so I only need to find--somewhere, anywhere--a smattering of Internet that still supports spoofing, and use that to launch my attack.
> The ISPs can track you / deanonymize you anyway...
They can, yes; the question is how much they do and if they should: I believe that it should be illegal for them to do this, and in a more perfect world on a more perfect network I believe it should be impossible for them to do this. The idea that you seriously think that not only is it OK that they do this but that they actively should do more of it, in all honesty, sickens me: we should be striving for a world where the list of reasons an ISP "should" track you--the list of reasons people feel they have to--is empty.
> That means the netflow collection point will have statistical metadata about customer connections (1 in 64k connections will have saved data - source port/ip, dest port/ip, length, packets, bytes).
No: that's not what this article says, and that's not how NetFlow works. You are proposing logging 1 out of every 64k packets, not 1 out of every 64k connections. Connections are made up of multiple packets--at least 4--and are sometimes made up of many, many packets (the average I read was ~100 packets per connection, though I'm sure that falls into some inverse power law). So you are logging way more connections than 1 out of every 64k, and there are known attacks even on networks like Tor that are based on having this NetFlow data to correlate connections.
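Back-of-envelope numbers for that point, using the packets-per-connection figures above:

    # Sampling 1 in 64k *packets* touches far more than 1 in 64k *connections*,
    # because every connection gets that many chances to be sampled.
    RATE = 1 / (64 * 1024)

    for pkts in (4, 100, 1000):
        p = 1 - (1 - RATE) ** pkts
        print(f"{pkts:>4} pkts/conn -> about 1 in {1 / p:,.0f} connections sampled")

    # At ~100 packets per connection, roughly 1 in 655 connections leaves a
    # record - about 100x more than the headline 1-in-64k figure suggests.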
> Think about the centralization that it causes: can your server sustain a trivial 100Gbps SSDP attack?
The only force of centralization I'm seeing in either my experiences or your presentation is the marketing that comes out of CloudFlare and the, as far as many of us can tell, bending of the truth as to what even constitutes an attack that is used to up-sell existing CloudFlare customers. It is one of the reasons why the only websites you ever really notice being attacked are ones behind CloudFlare, because CloudFlare really really wants random other people to notice that they are "helping".
FWIW, I have absolutely been the target of SSDP attacks, and have been concerned about this protocol for a long, long time, and it isn't obvious to me how "let's centralize more" is the real answer to the problem: if you really want to protect yourself from a DDoS attack, the obvious solution is to decentralize, not centralize... the more centralized you are, the more you have a target that can actually be taken down. As a visceral demonstration of this, I dare you to use an SSDP attack to take down Bitcoin: the only reason this attack is a conceptual problem in the first place is that people like to centralize things.
For the great majority of internet users, buying more capacity around the globe to sustain a 100Gbps SSDP attack is not an option. If you run a mildly controversial website, you don't expect to pay much for idle bandwidth. You can go for hosted solutions, but then you will be charged for attack traffic. What I'm proposing is a solution to this problem - how can we make the internet safer for the most common use case? I propose: netflow (to identify the spoofing boxes), flowspec (as a stopgap measure), and BCP38 (the fundamental fix) will get us a long way.
If we were to design HTTP from scratch we could discuss how to make it truly decentralized. This sounds like an academic discussion though.
Your second argument is that DDoS is not a real problem. I don't know how to assess that. Dyn was down. Krebs went down. These are facts. I'm definitely not the guy who shouts "we are all doomed! buy product A or you will go down". All I say is: this is what I see, this is what happened, here are the numbers. Read the data and assess it yourself, I guess!
- stats for amplifications https://blog.cloudflare.com/reflections-on-reflections/
- numbers for some random syn flood https://blog.cloudflare.com/a-winter-of-400gbps-weekend-ddos...
- numbers from some unexplained L7 event https://blog.cloudflare.com/the-porcupine-attack-investigati...
A long time ago, there was a proposal (itrace, its latest draft was https://tools.ietf.org/html/draft-ietf-itrace-04; see also http://ccr.sigcomm.org/archive/2001/jul01/ccr-200107-paxson....) to make these attacks easier to trace, by having routers probabilistically emit ICMP packets towards the supposed target or source of a packet. From what I recall, as DDoS attacks moved from IP spoofing to zombies using their real IP address, the working group sort of lost its purpose and died.
What am I missing? For now, I'm getting the info I need from sFlow but I want to get rid of that ASAP.
You might be happy to hear that the ISP I work for can definitely identify where that 150Mpps flood came from :) We're even doing some automated outbound mitigation in order to be good net citizens. CloudFlare's blog articles definitely helped us improve our network-level DDoS mitigation, by the way! Thanks for that.
Quake 3 engine game servers have already been used in amplification attacks.
Attacking every single protocol that dares to respond to a query is a pretty stupid approach IMO. Look how well it's worked so far.
Additionally, unless we switch DNS to TCP only, root and authoritative name servers are always going to provide an amplification factor, and there are still more than enough of them for devastating attacks.
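The arithmetic is simple; the sizes below are assumptions for illustration, not measurements:

    # A small query that elicits a large answer multiplies attacker bandwidth.
    query_bytes = 64       # assumed size of a short EDNS0 query
    response_bytes = 3000  # assumed size of a large (e.g. DNSSEC-signed) answer

    print(f"amplification factor ~ {response_bytes / query_bytes:.0f}x")  # ~47x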
Agree with you about monitoring, but that wouldn't be necessary if we got serious about enforcing and blacklisting ISPs that drop the ball on BCP 38.
23% of the address space, about 33% of ASes. It has improved a lot, but there's still some way to go.
It's just that there is no incentive to stop lazy ISPs from allowing everything since that's easier.
If all ISPs did this, there would be no ACL issue and BCP 38 would solve the problem. No need to make it harder than it is.
I suspect it might reduce out-of-network traffic a bit, too.
Or ISPs could check source IPs at edge routers maybe?
Only on HN. Haha.
To impose fixes upstream, you'd have to do DPI on all data, which is not allowed under some laws (i.e. net neutrality).
RFC2827, which should fix the problem where SSDP can be used for DDoS, was published in 2000: https://tools.ietf.org/html/rfc2827
Is ingress filtering on layer 3 considered DPI?
I would not consider comparing the source address of packets crossing an ingress link to be 'deep'. I consider that check to be very shallow. It needn't even be every packet; merely picking a random (actually random) packet from the set and testing it for conformity is a good quality-control measure that SHOULD be taken.
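A sketch of that spot check, BCP 38 / RFC 2827 style - the prefix table and sampling probability are invented for illustration:

    import ipaddress
    import random

    # On a customer-facing ingress link, sample packets at random and verify
    # the source address belongs to the prefixes assigned to that customer.
    CUSTOMER_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]

    def conforms(src_ip: str) -> bool:
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in CUSTOMER_PREFIXES)

    def spot_check(src_ip: str, probability: float = 0.001) -> None:
        if random.random() < probability and not conforms(src_ip):
            print(f"non-conforming source {src_ip}: likely spoofed")

    spot_check("203.0.113.9")  # outside the prefixes -> flagged if sampled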
What would the comparison be against? Routers are supposed to know which links are on the other side of all down-stream connections so that they can effectively route.
Ever since creating it and just checking on some networks, I'm surprised by how many devices are actually using it. I probably saw this in Wireshark before as well, but probably overlooked it because you're never really looking for it. I wonder if many other such protocols are often used but easily missed...
More on the SSDP servers
Since we probed the vulnerable SSDP servers, here are the most common Server header values we received:
104833 Linux/2.4.22-1.2115.nptl UPnP/1.0 miniupnpd/1.0
77329 System/1.0 UPnP/1.0 IGD/1.0
66639 TBS/R2 UPnP/1.0 MiniUPnPd/1.2
12863 Ubuntu/7.10 UPnP/1.0 miniupnpd/1.0
11544 ASUSTeK UPnP/1.0 MiniUPnPd/1.4
Supposed Reddit comment from the author: https://www.reddit.com/r/AskReddit/comments/5nqq3c/serious_p...
I personally feel that it is. (Maybe this was already obvious to others - I just haven't talked about it out loud to anyone prior...)
Well,now... there's my problem right there... Jus' don' know much 'bout them japanese now doncha.
<rocks back and forth with thumbs on the straps of my filthy coveralls, spits...>
Yep yep yep is what I always say...
<heads back into dilapidated datacenter behind squeaky screen door only holding on by one hinge>
Every single thread of this nature has a similar comment, and I really want to know (i.e., I want to hear this fully fleshed out, because I think your concerns are valid and worth exploring): is this demonstrative of a new (or in some way more valid) notion of the word "hacker" in "Hacker News"?
My sense of that word, and of the culture underlying it, is that a critical part of its critique is that obscurity, specifically in its implications for security (and thus, perhaps, civility and peace and justice), is subject to deprecation in the information age, precisely in favor of styles of disclosure like this: where the pudding for the tasting is provided as the proof.
Have I missed something very important?
Are there good reasons to believe that obscurity (ie, keeping secret the means and methods of attack) is likely to be a viable defense in favor of civility and justice in the age to come?
I meant to reply to this: https://news.ycombinator.com/item?id=14660862
Sorry about the confusion.
Edit: For the downvoters: this isn't just my opinion; please read https://en.wikipedia.org/wiki/Responsible_disclosure
Please resist commenting about being downvoted. It never does any good, and it makes boring reading.