There's another AWS outage, & presently the top comment is talking about us as barbarians who have stumbled into fancy hot baths & are amazed, but have no idea how to keep them running. And a wonderful follow-up reply[1] talks about living in an apartment during a storm versus living in a cave during a storm. It paints another stark image of how much drift there has been in the world, how much more built up it is, and how we ourselves are not necessarily more advanced, smarter, or wiser.
It's work like this (Mess with DNS). This is the stuff. Revealing, experimenting, inviting people in. Tech that illuminates & shows off, that is there to explain & help create understanding. This is the stuff, this is what keeps humanity powerful & competent & connected. Tech does a lot for us, but when it helps us become better wiser more creative people, when it reveals itself & the world: that holds a very dear place in my heart, is the light & heat in a vast cold and dark universe. I love this project. It's a capital example of revelatory technology, of enlightening technology.
Humans individually are pretty useless. Abandon a random human in a jungle and they will likely perish soon no matter how smart and well educated they are.
The strength of humanity is teamwork, working together to build things other groups can build things upon. Abandon 100 random humans in the same jungle and they will build a town.
It is possible for the two following statements to be simultaneously true:
* the ability of collaborating groups of humans to achieve/produce scales super-linearly with the number of humans[1]
* the growth of human population is causing problems, and is likely to cause more problems in the future
One reason is the scarcity of resources[2]; another is that "humanity" as a whole is not collaborating with all of itself.
[1] actually, I don't even think this is true, beyond some limit - but it's true for small groups
[2] which could be mitigated somewhat by fairer allocation of resources, or by process changes to focus more on fundamental needs; but, still the fact remains that the resources that we have access to on Planet Earth are limited, and access to extraterrestrial resources are extremely expensive
What is a fairer way to allocate resources than you produce for me and I produce for you?
Seems to me any other system is open to being gamed. Sure there are people born into generational wealth. But those are like one in a million and generational wealth doesn't typically last more than a handful of generations as the number of descendants grows exponentially.
Random people would have the most varied set of skills. A single person can have skills that are useless for surviving in the jungle, but if any of the 100 people has a good enough idea of what to do, the rest can help.
Even non-random groups like your coworkers or immediate neighbors can have unexpected skills that will make you feel dumb.
I'm not sure -- but I do think it would be interesting to see how that would turn out. Australia was sort of founded in this fashion. I think there's a bit more nuance though.
I do think though that empowering individuals is key.
Teamwork is still the work of many individuals, and I think a person's upbringing & disposition & the capabilities they've developed are hugely influential on what kinds of teams are possible in the world. The world of computing today gives users interesting capabilities, but only shallowly, only on the surface; it denies us the view below, denies us the freedom to see, understand & explore, and humanity being kept so yoked restrains human growth, restricts what I see as a key part of our better nature from getting a chance to come out & thrive.
Sure, we are not going to all learn how to build apartment buildings; we will take much for granted. But many people do learn some home repair, or try their hand at fixing appliances. Sometimes just to save some money, but sometimes because it's interesting, & because there are videos showing them how, because they can. But computer/information tech, in my view, has created a highly resistant, unrepairable, unviewable digitalia that is anathema to this basic human engagement with the world about us. It is not just a built environment, but a built environment which resists real understanding, which prevents human empowerment.
Creating an accessible world, one where humans have a strong locus of control, where they have flexibility & options to experiment, to play, to try, to explore is absolutely capital to me. Humanity loses who humanity was when/if we view the world as prebuilt, as a creation of some wider us, that we are but tiny figures upon. Yes there are many things that we have to rely on groups for, but that ability to learn about the world, to understand it, to investigate & understand & experiment in the pieces of it we so choose - that spirit is the lifeblood of this planet, and it's that attitude & disposition that produces highly functional teams & groups. Which is something we will, best I can tell, always need.
To speak to technology & its revelatory potential, to put it in scope here, I think it's important to review Ursula Franklin's dichotomy of technology. She divides tech into work-related & control-related: work-related tech helps individuals do things, control-related tech regulates systems. Going further, she divides tech into holistic & prescriptive technologies - prescriptive technologies break down work into fixed, predictable, deliberate steps & processes, while holistic technologies amplify the capabilities & prowess of the tool-bearer. There's a lot of tech on this planet, but even "creative" tech like a photo-sharing site is mechanistic in nature, follows limited & fixed flows, & affords only superficial control to its users. Whereas tech like Mess with DNS amplifies human understanding, gives us the power to explore & test out what is possible, lets us set our own rules. This world is in need of techno-spiritual healing - computers are widely used but rebuff understanding; they have become overwhelming instruments of control rather than empowerment. I look forward eagerly to a shift, to revelatory technology that serves different ends, that seeks a holism. Mess with DNS is "just" a little playground for some tech, hardly an attractive application on its own, but I believe that individuals everywhere would be much better off - that teams would be much richer as a result - if tech worked to open up the engine bay & allow some monkeying around.
Julia Evans's cool stuff aside (and it is _very cool_, we need all the high quality didactic material we can get!), all this info _is_ on the net. I'm always surprised when I see engineers (like in that linked post) who don't understand how to do things like regional failovers, DNS load balancing, load balancing strategies, load shedding, circuit breaking, AZ balancing/failover, etc. These are pretty standard concepts in the world of high reliability net services, writing the code is the easiest part! I guess that says a lot about the problem domain I'm in and how different reliability guarantees tend to be in other problem domains.
I've never seen anything at all as interactive & playful as this. Nothing that comes close. All in one, designed to create the experience of DNS. It's in the name: Mess with DNS. That makes it far far far & away different
And I think that makes all the difference. I tend to believe very strongly in hands-on experience, think that seeing things happen yourself & getting to play is by far the best way to learn, incomparably so.
There's a theory of education called Constructivism[1] that is broadly similar. Adherents include folks like Seymour Papert[2], creator of Logo and an inspiration for One Laptop Per Child (which I think is the most interesting & innovative software environment we've ever created, vastly under-appreciated). Projects like Logo are supposed to create that hands-on feedback, to make programming not just writing scripts & having programs run, but a way to see the code really execute, to create more interactive modes.
With software eating the world, it is so so so important to me not just to create knowledge, to tell tales of what software is, but to let people have the experience themselves. To create playgrounds to meddle, to mess around. I wish so much that applications could actually show & explain what they are doing, what's inside of them, could reveal their workings, but we're so far away from that Enlightened world, we've fallen into such deep shadows imo.
(Side note, I see things very differently, but I also am disappointed folks would downvote your perspective like this. As for the lack of knowledge/experience, I'd say that most engineers don't have familiarity because there's not a lot of opportunities to set up & learn systems work; most coders spend their time coding, not setting up bits of infrastructure to run code on. You yourself also say "writing the code is the easiest part", which underscores just how complex/inter-related/particular all the systems/infrastructure stuff is, how probable it is engineers might not feel fully competent or brave enough to engage.)
> I've never seen anything at all as interactive & playful as this. Nothing that comes close. All in one, designed to create the experience of DNS. It's in the name: Mess with DNS. That makes it far far far & away different
Oh absolutely! I don't mean to diminish this. The ability to interact and play also works very well for my own learning.
> There's a theory of education called Constructivism[1] that is broadly similar. Adherents include folks like Seymour Papert[2], creator of Logo and an inspiration for One Laptop Per Child (which I think is the most interesting & innovative software environment we've ever created, vastly under-appreciated). Projects like Logo are supposed to create that hands-on feedback, to make programming not just writing scripts & having programs run, but a way to see the code really execute, to create more interactive modes.
+100
> With software eating the world, it is so so so important to me not just to create knowledge, to tell tales of what software is, but to let people have the experience themselves. To create playgrounds to meddle, to mess around. I wish so much that applications could actually show & explain what they are doing, what's inside of them, could reveal their workings, but we're so far away from that Enlightened world, we've fallen into such deep shadows imo.
You bring up a good point overall about the lack of interactive materials for engineers/students/interested folks. I also suggest opening up any cloud provider (cheap for playing around is probably better!) and trying these things with services like Traefik (which are easy to configure/play with). Try to do some multi-region failover stuff, observe what happens with different load balancing strategies, that sort of thing. It reminds me a lot of watching videos about setting up IP networks, stuff like Cisco certification material.
You've given me some food for thought on educational materials for sure.
> As for the lack of knowledge/experience, I'd say that most engineers don't have familiarity because there's not a lot of opportunities to set up & learn systems work; most coders spend their time coding, not setting up bits of infrastructure to run code on. You yourself also say "writing the code is the easiest part", which underscores just how complex/inter-related/particular all the systems/infrastructure stuff is, how probable it is engineers might not feel fully competent or brave enough to engage.
Yeah this stuff isn't easy and operational work is often a different skillset than writing code.
This tool is so neat! One thing I've learned from it is my ISP (sonic.net) seems to be doing queries to _.example.com. For instance:
$ dig @50.0.1.1 nelson.lily6.messwithdns.com a
Results in two queries being answered by the messwithdns server. One for nelson.lily6.messwithdns.com as expected, but also one for _.lily6.messwithdns.com.
Any guesses what that naked underscore query is for? Not every nameserver does it (Cloudflare, Google, Quad9, and Adguard all don't). But Sonic isn't the only one that does.
I've asked on Twitter and the best guess right now is it has something to do with RFC 2782 or RFC 8552. But those are about using _ to make unique tokens that aren't likely domain names, things like _tcp or _udp. What would a naked _ mean?
I wrote it because I wanted more specific advice about how qname minimization should work, and I deliberately aimed it at an ideal world, ignoring obvious interoperability problems. I hoped that this would provoke discussion and get people working towards a more realistic algorithm. But that did not happen until years later.
So the early implementations of qname minimization had to invent their own ways of working around the inevitable interop problems, and some of those solutions were quite creative.
I think the bare _ version is trying to avoid querying delegation points directly, so that it still gets a referral as it would have done using the full qname. And the _ also avoids problems with negative responses, which are often implemented very badly - it is common to make a mess of the distinction between NXDOMAIN and NODATA.
After reading through the draft I think I don't understand the argument about user privacy.
Does QNAME minimization try to prevent the scenario where a malicious party has set up a DNS tracker that responds with the same A/AAAA entries for a specific subdomain, in the sense that e.g. "session-id.actualserver.company.tld" resolves to the same entries as "actualserver.company.tld"?
How would a client detect this before actually resolving it? I mean, if TTL is 0, no client will cache the results and therefore the minimization aspects are kind of irrelevant because the client has to resolve all over again, right?
I think my questions are about the conditions under which a client tries to resolve "_" before resolving the actual domain, which I am assuming is what the draft proposed... because to me this scenario would require that the very same party also owns the HTML/actual links in the page, so I don't understand what it's trying to prevent, since that party could just read their Apache logs to gain better datasets.
The scenario is that you want to resolve alice.example.com but you don't want the root servers or the .com servers to know any more information than they need to.
Historically you would send the whole query to all servers. Even the root servers would see the entire fully-qualified domain name (alice.example.com) even though all they're going to do is refer you to the .com servers. With QNAME minimization the root servers only know that you want something under .com and the .com servers only know you want something under .example.com and so on.
Now suppose the root servers don't do any kind of encryption but example.com supports DNSCurve or some other opportunistic encryption and so do you. Your ISP used to see the query going to the root servers or the .com servers and know the FQDN even if the query to example.com was encrypted. Now they don't.
Likewise, if someone is sitting on the root servers watching all the queries from everyone, they used to see FQDNs, now they only see top level domains.
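To make that concrete, here's a tiny sketch of my own (not the exact algorithm from the RFCs, which also chooses query types carefully) of the question names a minimising resolver ends up sending at each delegation level:

    # Rough sketch of QNAME minimisation: expose one extra label per
    # delegation level instead of sending the full name everywhere.
    def minimised_qnames(fqdn):
        labels = fqdn.rstrip(".").split(".")
        return [".".join(labels[-i:]) + "." for i in range(1, len(labels) + 1)]

    for qname in minimised_qnames("alice.example.com"):
        print(qname)
    # com.                -> sent to the root servers
    # example.com.        -> sent to the .com servers
    # alice.example.com.  -> sent to the example.com servers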
I hadn't considered that an ISP could keep its own map of all zones and simply match observed DNS traffic to those zones, since it also knows which server is responsible for each.
> Your ISP used to see the query going to the root servers or the .com servers and know the FQDN even if the query to example.com was encrypted. Now they don't.
In practice your recursive resolver either is your ISP (in which case this helps nothing) or is outside of your ISP (and your ISP can't see its queries). The only realistic privacy leak it addresses is leaking subdomains to the root servers and other delegating servers higher up the chain, and to their network operators.
As others answered, something called qname minimization. Others gave detailed explanations, so I'll try to be shorter.
In DNS, the recursive resolver sends the entire FQDN each time to every step.
Now realize, like every company, DNS operators want to collect and sell your data.
So imagine a 'bigsite.com' that does a lot of things. And you like, say porn.bigsite.com. Without this minimization, everyone from the root to verisign to bigsite knows what you queried for.
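If you run your own recursive resolver you can control this directly; e.g. in Unbound it's a one-line option (and on by default in recent versions, if I remember right):

    # unbound.conf
    server:
        qname-minimisation: yes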
But I wish a service existed that made domain names easy enough to use that the average person could manage them. IMO you shouldn't have to learn DNS and TLS in order to securely use a domain name. If I want to sign up to have Fastmail host my email, why do I have to manually copy and paste a bunch of DNS records? Fastmail already knows exactly what records need to be set. I should be able to OAuth redirect over to my domain registrar and approve giving Fastmail control over a subdomain of my choosing, and Fastmail should be able to use a simple open protocol to update the records.
The oauth flow you just mentioned exists, I just did it to confirm my domain in cloudflare with google workspace: google did an oauth flow and I got a cloudflare popup asking me to add a dns record for an hour. It was very cool.
My friend, it took me a few hours to find it -- there's zero documentation from Cloudflare about Cloudflare supporting it, but it is supported [1]; GoDaddy, luckily, is a lot more vocal about it [2]. Here's the spec:
Wow, I'm still reading the spec but on the surface this appears to be almost exactly what I've been looking for for over a year[0] and somehow unable to find. I really appreciate you taking the time.
And yeah I hear what you're saying about ideas haha.
My problem with this spec is it requires Service Providers and DNS Providers to know about each other. It's essentially formalising the status quo of cookie cutter setups for big name providers.
Yeah, I read the website and the entire spec. I think it's pretty good, but it's built by big names for big names. There's nothing wrong with that, but I'm concerned it might not be appropriate for things like quickly pointing a simple A Record at a self-hosted open source service. Maybe I'm wrong. I'm having a good discussion with the spec developers here: https://github.com/Domain-Connect/spec/issues/64
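For anyone curious what the plumbing looks like: as I read the spec, a Service Provider first discovers the DNS Provider's Domain Connect endpoint from a TXT record (the hostname below is a made-up placeholder), and then either calls that API directly or bounces the user through the provider's consent UX to apply a record template:

    $ dig +short TXT _domainconnect.example.com
    "domainconnect.example-dns-provider.net"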
In my personal experience I find that zone files work quite well as a universal format for that.
To pick up your Fastmail example: Fastmail could generate a matching zone file for your domain and let you download it. You could then upload it to any domain service provider that supports importing zone files.
It's obviously not as hassle-free as something like your OAuth example, but it uses the infrastructure that is already there.
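A rough sketch of what a provider-generated fragment for the Fastmail case could look like (the hostnames and values here are made-up placeholders; the real ones would come from the provider):

    $ORIGIN example.com.
    $TTL 3600
    @    IN  MX   10 in1.mail.provider.example.
    @    IN  MX   20 in2.mail.provider.example.
    @    IN  TXT  "v=spf1 include:spf.provider.example ?all"

Any host that can import BIND-style zone files should be able to take a fragment like this as-is.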
Incidentally, just an hour ago I was setting up a mail server on a Digital Ocean droplet, and had to manually copy and paste 20+ DNS entries because Digital Ocean doesn't support zone file upload (only download). So, the zone file seems like a good enough solution if only everyone would use it.
That's a good idea, but it would require all the registrars agreeing on a few different protocols and people doing the hard work of implementing them reliably at many, many, many different participants. Since lots of those participants are competitors (e.g., many registrars provide hosting, email service, etc), I think it would be very hard to get enough momentum that places like, say, GoDaddy would feel obligated to participate.
It seems like a pretty useful feature one of the big boys could offer to differentiate themselves. Or I could see a new entrant in the domain seller space marketing this as a main feature.
But it only works if it has significant compliance. If a new entrant offers the service, there's little reason for other places to implement it, because only a tiny percentage of their customers will be using it. And the big boys have a disincentive because they already offer things like email and web hosting. Making it easy for people to buy those services elsewhere will cut in to their revenue.
See @matthewaveryusa's comment above[0]. Looks like it already exists and is supported by GoDaddy, Google, CloudFlare, 1and1, and others. Still reading the spec but it looks pretty good.
This is a neat tool! FYI, make sure the domain is registered with Safe Browsing in advance. If one subdomain is cataloged as malicious by google the entire domain can be flagged. It can be a pain to deal with.
You need multiple subdomains to be flagged in order to cause the eTLD+1 domain to be flagged. But then since this is open for anyone to change, I imagine it's really easy to cross that threshold.
This is a real risk. When people start adding CNAME's or A's that point to known phishing sites, it's very easy for Google to notice and block.
hypothetically, what happens if a domain is catalogued as malicious? Also, who catalogues it? If you haven't bought the domain from Google, the only thing that Google can do is not show the domain on google search results. Did I miss anything?
> If you haven't bought the domain from Google, the only thing that Google can do is not show the domain on google search results. Did I miss anything?
I would imagine they might also show warnings in Chrome.
Indeed. Google basically gives this service away to browsers. It costs money if you want to build a commercial service using it, but if you give away browsers, no problem.
You can switch it off, but you probably shouldn't, even if you're sure you would spot a phishing scam, actually maybe even especially if you're sure you would spot the scam.
The service is capable of being quite nuanced since it works on (hashes of) HTTP path segments, so e.g. it can say OK this site https://some.example/ seems fine except the /cgi-bin/crapscript.php/fake-bank/ pages are clearly a fake bank, and so if your browser tries to visit those pages it gets flagged. But equally it can say OK, everything in bogus.example is bogus, fakebank.bogus.example, harrods.bogus.example, www.news.bogus.example, it's all bogus, warn for all of it.
You can't get the actual list, because if you could of course that mostly helps bad guys. Your browser does a bunch of hash lookups, and it has a fancy tree structure, so it can rule out e.g. OK everything starting FE43 is fine, everything in FD9 is fine etc. If that tree can't rule out a hash it calls Google, who have much finer grained hash data that wouldn't fit in your browser. Also periodically the browser fetches delta updates to the tree from Google.
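A very rough sketch of that flow (purely conceptual - not the actual Safe Browsing wire format, and URL canonicalisation plus the host/path-suffix expansions are glossed over; the network call is a placeholder stub):

    import hashlib

    # Local database: a set of truncated (e.g. 4-byte) SHA-256 prefixes.
    local_prefixes = {bytes.fromhex("fe43a1b2")}

    def fetch_full_hashes_from_google(prefix: bytes) -> set:
        # Stub for the finer-grained lookup against Google's service.
        return set()

    def looks_suspicious(url_expression: str) -> bool:
        digest = hashlib.sha256(url_expression.encode()).digest()
        # If no local prefix matches, the URL is definitely not on the list.
        if digest[:4] not in local_prefixes:
            return False
        # Otherwise ask for the full-length hashes behind that prefix and
        # only warn if the full hash really matches.
        return digest in fetch_full_hashes_from_google(digest[:4])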
Google's safe browsing list has never caught a phishing site for me. Since it's public, phishers obviously check their site against it before sending it to you.
> You can switch it off, but you probably shouldn't
You really should disable it, because Google cannot be allowed to be the gatekeeper of the internet. The list contains tons of non-malicious URLs [0] and Google has absolutely no incentive to remove them. And even if you hound them enough to do so, the same broken process that added it in the first place will just add it again. Any browser that enables this list by default is actively making the web a worse place and engaging in mass defamation.
> It does NOT contain any malware. Use a browser that is free of Google Shit Browsing security service crap (which is based on tons of noname antivirus "engines", look at VirusTotal if interested).
On the security aspect, I wonder how this site affects services that do domain ownership verification [1], where they assume that only a person who owns the domain can edit DNS records. I think Let's Encrypt's ACME protocol [2] does this for SSL certs too. This site does create a subdomain for every user, so maybe these issues don't apply.
Generally a dot is used as a barrier for these, because otherwise you'd need an infinite (and changing) list of where users are allowed to register subdomains - .ac.uk vs. .com, etc. Not to mention that some of these domains have changing policies, and some act both as delegation points and as regular domains.
If you don't trust across separator boundaries you're mostly safe. That is, mytxt.foo.com shouldn't be blindly trusted for my.subdomain.foo.com, nor should mytxt.subdomain.foo.com be trusted for foo.com.
IMO the biggest concern is with organizations that blacklist domains for various reasons, because they are not eager to just build very fine-grained blacklists.
One inconvenience is that although RFC8657 explains how to tell a CA that it must use particular methods, the most obvious public CA (Let's Encrypt) has not shipped RFC8657 support. So you can write a CAA record which says "Only Let's Encrypt may issue" or indeed say "Only Sectigo may issue" but you cannot write a record which says e.g. "Only Let's Encrypt may issue, and they must use the tls-alpn-01 method". Or rather, you can write that record but it won't work.
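For reference, the record in question would look something like this (the validationmethods parameter is the RFC 8657 part; as noted, Let's Encrypt will currently ignore it rather than enforce it):

    example.com.  IN  CAA  0 issue "letsencrypt.org; validationmethods=tls-alpn-01"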
Now, there are a bunch of things you could do about that, and I believe this cool toy does one of the obvious ones: don't have any certificates for the problematic domain. The web site isn't in the domain you can mess with. But it would be nice if Let's Encrypt got to this. I check periodically; so far, each time, somebody has already pestered them about RFC 8657 recently, so I don't pile on since that's unhelpful.
This is a really great resource. I wrote a DNS server in C# once upon a time; it was hard, and I wouldn't suggest it to anyone unless the benefit adds up to $millions. I could have killed for a tool like this - instead I spent a tonne of time in PCap and NetMon :(
It's out there on my GitHub if folks are interested.
Ironically 53 comments just before I added this comment...
A month ago, I scripted https://github.com/moretea/browsers-with-fake-dns as an alternative to editing /etc/hosts. It's a Docker container with a BIND DNS server, and Chrome/Firefox reachable via webvnc.
Neat project! Setting up your own DNS server for a throwaway domain is definitely a pain, especially if you've never done so and use anything other than PowerDNS really, so this is useful for messing around with.
I do hope the author has set some limits on the DNS records you can freely enter. One annoying trick DDoS attackers use is setting up DNS records that are as large as possible for their botnets' amplification attacks, so allowing arbitrarily large records on your domain may be problematic and may draw nasty complaints against your domain. I'd recommend anyone running a free subdomain service (or something super cool like this!) to consider this in their configuration. We can't have nice things because of these bad people :(
CoreDNS, which is commonly used in Kubernetes as a caching DNS server, also supports standard (RFC 1035) zone files and is very easy to configure. Written in Go, with just a few system library dependencies. I use it for LAN domains + cache + DoT client and it works nicely. I would probably not use it for big production deployments, but it actually even supports master-slave transfers. :) Maybe worth having a look at this too.
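For the curious, a setup like the one described is only a few lines of Corefile (the zone name, file path, and upstream are placeholders for whatever you use):

    # Corefile
    lan.home. {
        file /etc/coredns/db.lan.home    # plain RFC 1035 zone file
    }
    . {
        forward . tls://9.9.9.9 {
            tls_servername dns.quad9.net
        }
        cache 300
    }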
I've seen jvns take a similar path to me in engineering over the years, almost uncannily. The difference mostly is that I stored it all in my head, and they take the time to write it up for everyone.
Same with DNS. DNS is such a freakin black box, mostly because outside of RFCs, it's some good ol boys club of 'consultants' that don't want to share information. You should see the mailing lists, it's a giant pissing contest.
Back on point, I always wanted to distill this information down to make it for everyone, but always hit some small hurdle like... making a website about it.
That Julia takes the time to do this and share this is invaluable. It's like a better version of me exists out there, and I'm happy for it.
Love how this just drops you straight into a workspace where you can start experimenting - no sign up required! And the live view of requests is really neat too.
Julia Evans continues to do so many cool projects! The blog, the zines, now this, such great work! It always amazes me when one person can create so many useful things.
The tone of your comment is pretty inappropriate. The whole point of this is to help people learn about DNS, including the author, who happens to be one of the most humble and helpful persons on the internet.
No volume of books can be adequately substituted for doing something, which this project enables handily.
I'm sure you'll be down voted to oblivion but maybe consider a more constructive approach, like opening a PR and helping the authors out.
Note that this sentence was about browser-based integration tests. Browser automation has come a long way, but even on very frontend-fluent teams I've been on we had a few flakey tests, and browser-based integration tests are sometimes flakey in ways that are difficult and tedious to debug! Not understanding why doesn't necessarily indicate any lack of understanding of DNS.
But maybe it increases the odds of a "Let's understand Playwright!" post in the future!
Someone is missing knowledge, admits it, and this somehow inflames you? They created a free tool. Nowhere do they claim that this is a comprehensive replacement of a full O'Reilly book.
I don't really agree with the tone of your comment, and why would you cite a section of the article where the author was talking about a front-end testing framework?
Great job reading the article: she's talking about frontend E2E testing which literally has nothing to do w/ the mechanics of DNS. Every one of these frameworks I've used _is_ a bit flaky too, so this should be completely unsurprising to anyone who actually knows anything about this.
Some of my fondest memories were learning programming and then infrastructure engineering in bits and pieces while so many “veterans” at the time pissed and moaned about how the One True Way to learn was reading O’Reilly books.
A decade into my career, I’m pretty sure I out-earn nearly all of them despite them having a solid decade on me. Of course, income is a fallible indicator, and to the extent that it’s accurate, I don’t think the difference is “reading books vs Googling” but rather (if I had to guess) some handicap that correlates with bitching about how other people learn on the Internet.
[1] https://news.ycombinator.com/item?id=29568078