1. DNS query for TXT record for example.com
2. DNS reply with HTML content
1. DNS query for A record for example.com
2. DNS reply with x.x.x.x
3. TCP SYN to port 80
4. TCP SYN/ACK
5. TCP ACK
6. HTTP GET
7. HTTP reply with HTML content
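For the curious, the whole TXT-record flow at the top fits in a couple of lines of client code. This is only a sketch: it assumes the third-party dnspython package, and example.com is a placeholder for a domain that actually publishes its HTML in a TXT record.

    # Sketch of the two-step flow: one TXT query, and the reply already carries the HTML.
    # Assumes the third-party dnspython package; example.com is a placeholder domain.
    import dns.resolver

    answer = dns.resolver.resolve("example.com", "TXT")   # 1. DNS query for TXT record
    html = "".join(                                        # 2. DNS reply with HTML content
        part.decode("utf-8", "replace")
        for rdata in answer
        for part in rdata.strings
    )
    print(html)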
Again, I am only half serious, but this is an interesting thought experiment...
Edit: oddtarball: DNSSEC would solve spoofing. And updates should take no longer than the DNS TTL to propagate: the TTL is under your control; you could set it to 60 seconds if you wanted. It is a common misconception that many DNS resolvers ignore the TTL. Some large web provider (was it Amazon? I forget) ran an experiment and demonstrated that across tens or hundreds of thousands of clients worldwide, 99% of them saw DNS updates propagated within X seconds if the TTL was set to X seconds. Only <1% of DNS resolvers were ignoring it.
(Why? Lots of captive portal wifi hotspots (think hotel/train etc) seem to allow DNS resolutions before stopping your other traffic.)
* DNS uses port 53 which is the same as the atomic number for Iodine ;)
Not that a court would agree with my logic, of course.
Calling it "access control" has always been confusing which is why people started calling it Machine Address Code or Ethernet Hardware Address instead.
Good luck explaining that to the judge.
Of course, I just tethered my phone and got way better service than their crappy $10/day wifi.
Good. Still, it needs to be pointed out. This idea is an awesome hack that shows how you can piggyback on existing infrastructure and make it do something it was never intended to do.
But it absolutely, terribly sucks at anything practical. Actually, it's a non-solution. Here's why.
> There are way fewer network round trips:
> 1. DNS query for TXT record for example.com
> 2. DNS reply with HTML content
Let me show an exactly equivalent alternative implementation of the above concept.
1. HTTP GET x.x.x.x/example
2. HTTP reply with HTML content
I know you're half-serious with this idea, but I'm going to play along. So to continue with the interesting thought experiment... if people were to start actually using DNS records to smuggle websites, they'd quickly overwhelm the capabilities of the DNS network, so the reliability and free hosting would quickly go out of the window, along with all hope of ever having anything even resembling consistency in the Internet.
So yeah; a nice hack, but kids, don't try to deploy it at scale ;).
The steps are not exactly the same. Any sensible ISP gives you at least two redundant DNS servers with your DHCP response, and most public DNS providers also give you multiple redundant servers. When you do a DNS lookup, your OS or browser handles failover between the DNS servers automatically, client side. When a site is accessed by IP address, as you've demonstrated, HTTP offers no client-side failover mechanism built into web browsers to fall back to a different IP.
It's additionally important to note that architecturally, DNS servers are far more scalable than most HTTP servers. They don't run anywhere near as much code per request and don't require the overhead of TCP or HTTP.
Note that I'm also not encouraging using DNS instead of HTTP for serving websites; I'm just pointing out that DNS is a more reliable technology and has client-side failover mechanisms, so the pros which mrb listed are very real.
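To make the failover point concrete, here is roughly what a stub resolver does, sketched by hand: try one server, and move on to the next if it times out. It assumes the third-party dnspython package, and the two resolver IPs are just examples of the redundant servers you'd get from DHCP.

    # Rough sketch of client-side DNS failover: try resolvers in order, fall back on timeout.
    # Assumes the third-party dnspython package; the resolver IPs are only examples.
    import dns.exception
    import dns.resolver

    RESOLVERS = ["8.8.8.8", "1.1.1.1"]

    def lookup_txt(name):
        for server in RESOLVERS:
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            r.lifetime = 2.0                     # give up on this server after 2 seconds
            try:
                return r.resolve(name, "TXT")
            except dns.exception.Timeout:
                continue                         # dead server: fail over to the next one
        raise RuntimeError("all resolvers failed")

    for rdata in lookup_txt("example.com"):
        print(rdata)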
Exercise for the reader (the proxy solution): write a server called txtdns.com that returns the content of TXT records as HTML. The path would look like http://txtdns.com/example.com - and the key is that the server is only accessing DNS, even though your client is using TCP and HTTP.
(and probably easier, to boot; given the right scripting language/libraries, that should be doable in 10-30 lines of code or so)
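Something like the following, maybe. It's only a sketch: it assumes the third-party dnspython package and Python's built-in http.server, and txtdns.com is just the hypothetical name from the parent that you'd point at whatever host runs this. The client speaks HTTP to the proxy, but the proxy itself only ever does DNS lookups.

    # Sketch of the proxy: answer GET /<domain> by fetching <domain>'s TXT records
    # and returning their contents as HTML. Error handling is minimal on purpose.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import dns.resolver

    class TxtDnsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            domain = self.path.lstrip("/")                 # /example.com -> example.com
            try:
                answer = dns.resolver.resolve(domain, "TXT")
                body = "".join(
                    part.decode("utf-8", "replace")
                    for rdata in answer
                    for part in rdata.strings
                )
                status = 200
            except Exception:
                body, status = "lookup failed", 502
            data = body.encode("utf-8")
            self.send_response(status)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    HTTPServer(("", 8080), TxtDnsHandler).serve_forever()

Run locally, http://localhost:8080/example.com would then return whatever HTML example.com happens to publish in its TXT records.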
1. TCP SYN to port 80
2. TCP SYN/ACK
3. TCP ACK
4. HTTP GET
5. HTTP reply with HTML content
The TCP Fast Open proposal gets around this by using a cookie, so that the first connection requires a normal three-way handshake, but subsequent connections between the same client and host can use an expedited handshake that eliminates a round trip.
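For illustration, here is roughly what a TCP Fast Open client looks like on Linux. Everything here is an assumption for the sketch: it needs a Linux kernel with TFO enabled (net.ipv4.tcp_fastopen), a server that also supports TFO, and the host and port are placeholders.

    # Rough, Linux-only sketch of a TCP Fast Open client.
    # sendto() with MSG_FASTOPEN hands the request to the kernel together with the connect:
    # on first contact the kernel does a normal handshake and caches a TFO cookie,
    # and on later connections the request rides in the SYN, saving a round trip.
    import socket

    MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)  # 0x20000000 is the Linux value

    request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.sendto(request, MSG_FASTOPEN, ("example.com", 80))
    print(sock.recv(4096))
    sock.close()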
- It could take multiple days to update the website for the entire world
- It would be very easy to spoof the entire site
- It would probably slow down the rest of the queries the DNS server would be responding to at the time.
Also, updating DNS can be a pain for sites that aren't managing their own records.
This is not hardware. This is software-defined networking, to which you should apply one of the rules of good cloud design: expect failure. And this is a feature: IPs change. Deal with it. ;)
Further, we have the TTL issue noted above, and a very interesting thing happens there. Say you use a default of 300 seconds or more for your FQDN in a DNS record that is a CNAME to an ELB FQDN (or a set of ELB FQDNs, if you have multiple ELBs). You are then going to hit conditions where that 300 seconds is still ticking down when the ELB FQDN's own TTL (say 60 seconds) expires, and/or the ELB's IP itself has changed. In that span, the IP you resolved may be assigned to another ELB, and traffic meant for your platform hits some other platform. So perhaps one second into your 300-second TTL the AWS ELB TTL has expired and the IP is assigned to nothing: your traffic fails to connect. Then the IP is assigned to some other FQDN and your traffic hits some other ELB. (Your church patrons now get porn, perhaps.) The flip side is also true, and interesting to watch in the logs.
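If you want to see this coming, one quick check is to resolve the name and compare the TTLs along the CNAME chain. A rough sketch, assuming the third-party dnspython package (the hostname is a placeholder):

    # Sketch: print the TTL of each record in the answer, CNAME(s) first, then the final A records.
    # If your CNAME's TTL is much larger than the target's (say 300s vs 60s), cached answers
    # can outlive the IPs they point at, which is the failure mode described above.
    import dns.rdatatype
    import dns.resolver

    answer = dns.resolver.resolve("www.example.com", "A")
    for rrset in answer.response.answer:
        print(dns.rdatatype.to_text(rrset.rdtype), rrset.name, "TTL:", rrset.ttl)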
How to take advantage of this feature? Oh, that is fun. Marketing? Route all such identifiable traffic to good, bad, or ugly ends?
This situation should be expected.
We tell clients that we've launched their site, but that the DNS changes might take up to 48 hours to propagate.
Realistically, from our office and to most of the world it's probably live within 5 minutes. One of our local ISPs happens to be one of those irritating ones that just ignores your TTL and caches records for days at a time.
Sometimes one of their servers will end up with the new record and one with the old. That, combined with people's home routers caching records (again, sometimes ignoring TTLs), can lead to fun situations where the site might load fine for a couple of hours (hit the good ISP server, local router cached) then the old site for a couple of hours (hit the bad server, local router cached...).
I used to try to explain it to people, but after having enough people freak out about how their site switched back, it's not live yet, etc., etc... I just tell them it's going to take 48 hours. If it's visible earlier it's a pleasant surprise, and if it takes two days I don't get any panicked phone calls.
I already did this many years ago. It works well.
I also do not use DNSSEC (unencrypted DNS packets), opting instead for DNSCurve (encrypted DNS packets).
What is still missing from the DNS world is a server that can handle pipelined (TCP) DNS queries (multiple lookups over the same connection). I think the spec allows for it, but no one ever implemented it as far as I know.
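To make that concrete: DNS over TCP frames each message with a two-byte length prefix, so nothing stops a client from writing several queries back-to-back before reading a single answer. Whether the server actually answers them all on the one connection is the open question. A client-side sketch, assuming the third-party dnspython package (the names and resolver IP are placeholders):

    # Sketch: pipeline two DNS queries over one TCP connection.
    # Each DNS-over-TCP message is framed with a 2-byte big-endian length prefix.
    # Assumes the third-party dnspython package; names and server IP are placeholders.
    import socket
    import struct
    import dns.message

    def frame(query):
        wire = query.to_wire()
        return struct.pack("!H", len(wire)) + wire

    q1 = dns.message.make_query("example.com", "TXT")
    q2 = dns.message.make_query("example.org", "TXT")

    sock = socket.create_connection(("8.8.8.8", 53))
    sock.sendall(frame(q1) + frame(q2))          # both queries written before reading anything

    def read_response(sock):
        (length,) = struct.unpack("!H", sock.recv(2))
        data = b""
        while len(data) < length:
            data += sock.recv(length - len(data))
        return dns.message.from_wire(data)

    for _ in range(2):
        print(read_response(sock))
    sock.close()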
In your thought experiment, that would be "HTTP/1.1 pipelining".
I use HTTP pipelining every day via command-line utilities, and where "web browsing" is concerned I find it hard to live without.
You can still see how it worked: http://whois.domaintools.com/isinterneton.com
It defined a few nameservers,
<title>Is Internet On</title>
If you want more stuff you have to use TCP, and that is not ideal.
As far as the 65535 limit goes, from RFC 2671:
4.5.5. Due to transaction overhead, it is unwise to advertise an architectural limit as a maximum UDP payload size. Just because your stack can reassemble 64KB datagrams, don't assume that you want to spend more than about 4KB of state memory per ongoing transaction.
Here is an extension that "fixes" it:
I have not seen that spec extension implemented in the wild.
Then again, I only just found the extension, so I have not really looked for it either.
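Whatever that extension is, the mechanism in the RFC quoted above (EDNS0 itself) is widely deployed: a client can advertise a UDP payload size well beyond the classic 512 bytes and often avoid the truncate-then-retry-over-TCP dance visible in the dig output elsewhere in this thread. A rough sketch, assuming the third-party dnspython package (the name and resolver IP are placeholders):

    # Sketch: advertise a 4096-byte EDNS0 UDP payload so larger TXT answers can
    # come back over UDP instead of being truncated and retried over TCP.
    # Assumes the third-party dnspython package; name and resolver IP are placeholders.
    import dns.message
    import dns.query

    query = dns.message.make_query("example.com", "TXT", use_edns=0, payload=4096)
    response = dns.query.udp(query, "8.8.8.8", timeout=3)
    print(response)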
Why say that at all? Is it a way to fend off ridicule? Or does this show a lack of confidence in the idea and what you are saying?
Reminds me of comments that start "Am I the only one who thinks that..."
I've gotten out of the habit of apologizing for things that I say or prefacing them with phrases such as that. The reason is that I found that it was a lazy way to not give as much thought to what I was saying and whether I needed to vet my thoughts more.
Or downvoting as they have done with my comment.
That's because nobody came to this thread because they wanted to read your meta-discussion.
You could have just alert'd, too, but no. Harlem Shake. Bravo.
The rickroll was the first bit I noticed o_0
The point here is that:
1. DNS TXT records can contain HTML, including scripts and whatever.
2. Domain registrants can publish arbitrary TXT records.
3. TXT records can appear in pages generated by web sites which serve, for instance, as portals for viewing domain registration information, including DNS records such as TXT records.
4. Thus, such sites are vulnerable to perpetrating cross-site scripting (XSS) attacks on their visitors if they naively paste the TXT record contents into the surrounding HTML.
5. The victim is the user who executes a query which finds the malicious domain which serves up the malicious TXT record that is interpolated into the displayed results. The user's browser executes the malicious code.
Thus, when you are generating UI markup from pieces, do not trust any data that is pulled from any third-party untrusted sources, including seemingly harmless TXT records.
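As a minimal sketch of that rule, assuming the third-party dnspython package and Python's standard html module: escape at the exact point where the untrusted record gets interpolated into the page.

    # Sketch: escape TXT record contents before splicing them into HTML, so a record
    # containing <script> is shown as text instead of being executed by the visitor's browser.
    # Assumes the third-party dnspython package; the domain is a placeholder.
    import html
    import dns.resolver

    rows = []
    for rdata in dns.resolver.resolve("example.com", "TXT"):
        raw = b"".join(rdata.strings).decode("utf-8", "replace")
        rows.append("<li>" + html.escape(raw) + "</li>")   # &lt;script&gt;... renders harmlessly

    print("<ul>" + "".join(rows) + "</ul>")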
Edit: I found my data and have a grep running on it, will share what turns up.
Edit2: Somewhat less exciting than I remember:
$ fgrep -- '>' *
Only takes 5 minutes to create an account.
Embed a live DIG result (do a diggle):
$ host -t TXT jamiehankins.co.uk
;; Truncated, retrying in TCP mode.
jamiehankins.co.uk descriptive text "<iframe width='420' height='315' src='//www.youtube.com/embed/dQw4w9WgXcQ?autoplay=0' frameborder='0' allowfullscreen></iframe>"
jamiehankins.co.uk descriptive text "v=spf1 include:spf.mandrillapp.com ?all"
jamiehankins.co.uk descriptive text "<script src='//peniscorp.com/topkek.js'></script>"
jamiehankins.co.uk descriptive text "google-site-verification=nZUP4BagJAjQZO6AImXyzJZBXBf9s1FbDZr8pzNLTCI"
Why is mandrillapp.com (a transactional email startup) included?
You can defend your own websites from these kinds of attacks by setting up a Content Security Policy and using the 'httponly' flag on auth cookies.
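A minimal sketch of what that looks like on the response side, using Python's built-in http.server purely for illustration; the policy and cookie values are examples, not a recommendation for any particular site.

    # Sketch: send a Content-Security-Policy header (blocks injected inline scripts) and
    # mark the auth cookie HttpOnly (keeps document.cookie away from any script that does run).
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>whois results here</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Security-Policy",
                             "default-src 'self'; script-src 'self'")
            self.send_header("Set-Cookie", "session=example; HttpOnly; Secure; Path=/")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), Handler).serve_forever()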
Imagine being logged in as your hostmaster account on http://your-registrar.com/, and having a malicious website redirect you to http://your-registrar.com/webtools/nslookup-tool.php?domain=....
But yes, XSS is a serious problem. Even if it's done on a site that handles no valuable info (sites that display whois normally handle very valuable info), it can be used to launch attacks against other sites.
Firefox needs to show the 'play' icon for the audio tag.
It's the users' resistance to the slightest inconvenience that makes security so hard.
Whitelist places you trust. Keep things blocked that you don't like. If that breaks the experience, walk.
Therefore I treat my desktop as a security research one. Of course I would not do that on my desktop if I were really working with crackme binaries ;)
The last part that you edited out was a question I would have raised, but it seems like you also think it would not hold.
Even though this setup is not secure, it's more secure than many everyday usage patterns. In a way, at least.
There are some nut-jobs or bad-ass people out there not using Google, going with security-enhanced phones and DDG. That does not make the average user, or the 95th percentile, badly behaved.
It only makes us less security-sensitive, and less attractive as targets.
There are a few that are "13h.be/x.js" that look like someone tried this out before.
Never trust user input.
Edit: See http://www.dnswatch.info/dns/dnslookup?la=en&host=jamiehanki... for the actual code.
Never trust any input. I think this is a case where people assume that it isn't pure user input because it would have already been parsed/checked/verified.
"Oh, it's in the DNS system so it must be safe" is worse than "well, it came from our database so it should be fine". Don't even trust something coming out of your own database. You never know what various input-checking bugs might have accidentally let in over time.
Never trust your program's output
You should have two sets of sanitization, one that sanitizes incoming data, and one that sanitizes outgoing data.
mike@glue:~$ dig +short chaos txt version.bind @126.96.36.199
"<iframe width='420' height='315' src='//www.youtube.com/embed/dQw4w9WgXcQ?autoplay=1' frameborder='0' allowfullscreen></iframe>"
I put this in my named.conf:
version "<iframe width='420' height='315' src='//www.youtube.com/embed/dQw4w9WgXcQ?autoplay=1' frameborder='0' allowfullscreen></iframe>";
This site is vulnerable:
Although it takes a minute before it kicks in. I did report it to them at the time, but never got a response.
"I acknowledge the code just written does not trust its input, under penalty of being whipped by a wet noodle."
But I guess folks would just click through.
$ dig txt jamiehankins.co.uk
jamiehankins.co.uk. 33 IN TXT "<script src='//peniscorp.com/topkek.js'></script>"
jamiehankins.co.uk. 33 IN TXT "<iframe width='420' height='315' src='//www.youtube.com/embed/dQw4w9WgXcQ?autoplay=0' frameborder='0' allowfullscreen></iframe>"
Cleverness aside, it is practical when looking for XSS vulnerabilities because it's very obvious when you've succeeded in injecting your code.
Similarly I imagine something like the CFAA (18 USC 1030) probably has broad enough clauses to make this sort of action technically illegal, at least in some cases? But I'm out of my depth on that one.
CA 502c just says:
"(3) Knowingly and without permission uses or causes to be used computer services" amongst other very broad subsections
Because that is all this is.