If you want a startup/side-project idea, here it is. I can tell you from 10+ years in website hosting and maintenance that most laypeople, designers, marketers, and even some technical people will never understand DNS. (and they shouldn't have to)
A small business wants a website at SquareSpace, a Cloudflare CDN, email at Google Apps, landing pages on a subdomain with Unbounce, their blog on yet another subdomain, and DKIM/SPF records for the email newsletter system.
Setting this up is not easy for most people. Most people aren't even sure where to do it, let alone how to do it. (Is your registrar handling your DNS? Sometimes...)
If you had a 1- or 2-click tool that set these up for people, maybe wrapped around some domain search/affiliate tools, I think you could make some money.
I actually enjoy setting up that kind of stuff. It's easy and kind of relaxing compared to the stress of developing software. I'm not trying to be a sysadmin, but configuring routine services can be soothing after a hard day. None of the services you mentioned share a unified API to automate the process, though, so it seems like there would need to be a specific script for each provider.
I agree that domain registrars should make it easier for people to simply specify which services they are signed up for and offer one-click auto-configuration/population of DNS records. Some registrars do this to the extent that they will fill in Google's MX and CNAME records if you specify you are a Google Apps user.
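To sketch what the "script per provider" idea might look like: most of such a tool is just a table of per-service record templates plus provider-specific glue to push them through each DNS host's API. A rough Python sketch (the service keys, helper function, and the Squarespace placeholder are hypothetical; the Google MX hosts are the classic set from Google's docs, so verify them before relying on this):

    # Hypothetical per-service DNS record templates for a "one-click" setup tool.
    # Provider-specific scripts would read these and push them via each DNS host's API.
    RECORD_TEMPLATES = {
        "google_apps_mail": [
            # Classic Google Apps MX hosts (check Google's current docs before using).
            {"type": "MX", "name": "@", "value": "ASPMX.L.GOOGLE.COM.", "priority": 1},
            {"type": "MX", "name": "@", "value": "ALT1.ASPMX.L.GOOGLE.COM.", "priority": 5},
            {"type": "MX", "name": "@", "value": "ALT2.ASPMX.L.GOOGLE.COM.", "priority": 5},
        ],
        "squarespace_site": [
            # Placeholder target: the real value comes from Squarespace's setup docs.
            {"type": "CNAME", "name": "www", "value": "<squarespace-target>"},
        ],
    }

    def records_for(services):
        """Flatten the templates for the services a customer selected."""
        return [record for service in services for record in RECORD_TEMPLATES[service]]

    print(records_for(["google_apps_mail", "squarespace_site"]))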
I disagree that technical people should not have to understand DNS. DNS isn't that complicated in most scenarios, and having a basic understanding of it can help alleviate many of the headaches that come up when people don't understand the relationship between the ownership of their domain, their domain registrar, and the DNS records that point to various services, plus basic concepts around TTLs and caching, etc.
>> I disagree that technical people should not have to understand DNS
I think that any good technical developer/sysadmin has at some point set up their own BIND server or similar, if only to play with it for a few hours before uninstalling. This is something that is often missing from developers who went through a CS program just to get a degree in an industry that pays well, without really having an interest in the field.
Good developers/sysadmins, who actually have an interest in the work they do, have an innate curiosity that leads them to try their hand at things like a DNS server. For no reason other than to learn something new and understand a topic they didn't know anything about before the experimentation.
Of course, you can learn about DNS without actually configuring a DNS server from scratch, but the real "aha" moments come from working with the server itself rather than only learning the difference between A/CNAME/MX et al. FQDN syntax, how a DNS master syncs to its slaves, SOA serial numbers, etc.: there are so many interesting things to discover.
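To make a couple of those pieces concrete, here is a tiny made-up zone (example.com, documentation IPs) loaded with the third-party dnspython library. The trailing dots are the FQDN syntax mentioned above, and the first number in the SOA is the serial that slaves compare against the master when deciding whether to pull a fresh copy of the zone:

    import dns.zone  # third-party: pip install dnspython

    # A made-up zone: note the trailing dots (fully qualified names) and the
    # SOA serial (2024010101), which slaves compare to detect changes on the master.
    ZONE_TEXT = """
    $ORIGIN example.com.
    $TTL 3600
    @    IN SOA ns1.example.com. hostmaster.example.com. (
             2024010101 ; serial
             7200       ; refresh
             900        ; retry
             1209600    ; expire
             3600 )     ; minimum
    @    IN NS    ns1.example.com.
    @    IN MX 10 mail.example.com.
    @    IN A     203.0.113.10
    """

    zone = dns.zone.from_text(ZONE_TEXT, origin="example.com.")
    for name, ttl, rdata in zone.iterate_rdatas():
        print(name, ttl, rdata)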
Although I have repeatedly set up and configured BIND servers over the years, and manage DNS zones monthly if not weekly, I disagree that a good developer should have to know these things very well, if at all.
As a developer you should know the basics of networking and the protocols you will be using to communicate, as well as their drawbacks, but as far as a developer is concerned, DNS is generally a means to an end. They just need to know where to send a message. I feel like you could be a great developer and not have a freaking clue how to configure a BIND server or manage a zone file; they're fairly unrelated to creating, maintaining, or deploying code (unless you're getting into containers and auto-provisioning).
> they're fairly unrelated to creating, maintaining, or deploying code
Actually it's very useful to know and understand potential misconfigurations.
e.g. if your 3rd party integration starts failing, it's useful to know the signals that imply their sysadmin messed up a zone transfer.
This sort of 'hands-on' knowledge that you only really get from messing around with the daemons yourself reduces downtime and emergency maintenance from hours to minutes.
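One concrete example of such a signal: if a zone's authoritative nameservers disagree on the SOA serial for longer than the refresh interval, a transfer to a slave has probably failed. A rough check with the third-party dnspython library (the domain name is illustrative, and it assumes the nameservers answer plain UDP queries):

    import dns.message
    import dns.query
    import dns.rdatatype
    import dns.resolver  # third-party: pip install dnspython

    def soa_serials(domain):
        """Ask each authoritative nameserver for the zone's SOA serial."""
        serials = {}
        for ns in dns.resolver.resolve(domain, "NS"):
            ns_name = str(ns.target)
            ns_ip = dns.resolver.resolve(ns_name, "A")[0].address
            query = dns.message.make_query(domain, dns.rdatatype.SOA)
            response = dns.query.udp(query, ns_ip, timeout=5)
            for rrset in response.answer:
                for rdata in rrset:
                    serials[ns_name] = rdata.serial
        return serials

    serials = soa_serials("example.com")
    print(serials)
    if len(set(serials.values())) > 1:
        print("Serials disagree -- a zone transfer may have failed on a slave.")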
I was lucky enough to work on a migration from BIND 4 (yes, four) to BIND 9 during my training.
I have run my own resolver at home ever since and in retrospect learning how DNS works was one of the most interesting things I learned back then.
The thing is, when DNS fails for some reason, everything that sits on top fails, too. So as a network admin / sysadmin, it is very helpful to know. Heck, even as a regular Internet user, it is helpful.
"Email is so important to the functioning of the Internet that it gets its own record type."
Well that is one way of looking at it. Probably the appropriate way if you're teaching, but if you want to be critical then it reeks of poor design for a particular service to get special treatment by a fundamental part of the internet. It isn't that MX records aren't needed. They just shouldn't only be useful for email.
That's what SRV records are for. MX is just older than that convention.
Edit: And it wasn't as bad of a design decision back when changes to important network protocols were a thing that happened, not a case of the sky falling.
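To spell out the difference: an MX record only carries a preference and a mail host, while an SRV record carries priority, weight, port, and target under a _service._proto owner name, which is why it generalizes to any service. A small illustration with the third-party dnspython library (the record values are made up):

    import dns.rdata
    import dns.rdataclass
    import dns.rdatatype  # third-party: pip install dnspython

    # MX: just a preference and a mail host.
    mx = dns.rdata.from_text(dns.rdataclass.IN, dns.rdatatype.MX, "10 mail.example.com.")
    print(mx.preference, mx.exchange)

    # SRV: priority, weight, port, target -- the generic version of the same idea,
    # published under a name like _submission._tcp.example.com.
    srv = dns.rdata.from_text(dns.rdataclass.IN, dns.rdatatype.SRV, "10 5 587 mail.example.com.")
    print(srv.priority, srv.weight, srv.port, srv.target)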
MX records were designed and implemented waay back when it was assumed that every new service would implement its own super-specific (and probably binary) layer 4 protocol rather than just defaulting to JSON over HTTP(S). In Internet time, SMTP is _ancient_.
SMTP is ancient and we've been piling crap on top of it for 20 years to keep it working (and all the OSI layer 5, 6 and 7 things that depend on mail continuing to flow). SPF, DKIM, DMARC, IMAP daemons that present email over TLS1.2, clamAV and realtime malware scanning/filtering, spamassassin, bayesian filtering and analysis of mail body contents, email-to-SMS gateway, SMS-to-email gateways, I could probably make this list 4x as long.
But it doesn't seem to be going away anytime soon.
I've added a new acronym to my personal mail servers over the weekend: SRS (fixes SPF when doing basic forwarding to another email address, e.g.: to gmail).
And now I'm not even sure that was a correct move, since Gmail explicitly states[1] that you shouldn't change the envelope sender (which SRS does).
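For anyone who hasn't met SRS: the forwarder rewrites the envelope sender into an address at its own domain, so the receiving server's SPF check is evaluated against the forwarder rather than the original sender's domain. The rewritten address looks roughly like the sketch below (the hash and timestamp fields are simplified here and vary by implementation):

    # Rough shape of an SRS0 rewrite: the forwarder wraps the original sender
    # inside an address at its own domain. Hash/timestamp fields are simplified.
    def srs0_rewrite(orig_local, orig_domain, forwarder_domain, hash_="HHH", timestamp="TT"):
        return f"SRS0={hash_}={timestamp}={orig_domain}={orig_local}@{forwarder_domain}"

    # alice@example.org forwarded through mydomain.example:
    print(srs0_rewrite("alice", "example.org", "mydomain.example"))
    # -> SRS0=HHH=TT=example.org=alice@mydomain.example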
Anything that breaks DKIM, so that the signature coming from your SMTPd is no longer valid and no longer matches the public key published in your DNS records, will make your mail MUCH more likely to be flagged as spam.
SRS[1] doesn't break DKIM, because the sender rewrite happens on my server, before DKIM.
That's the thing, in theory I went from:
spf: fail
dkim: pass
for forwarded mails, before SRS, to:
spf: pass
dkim: pass
so it should be better, right?
But as I said, they explicitly mention NOT changing the envelope sender (maybe they have an exception for SRS cases, but it's not documented anywhere).
Oh yeah, and now I'm also responsible for any real spam forwarded to Gmail, since the From is @mydomain.
It made more sense when you could assume each machine was tended by a sysadmin who was responsible for running it, so having a quick way to reach that person in the Net's distributed "phone book" was a fairly obvious thing to do. Spam... existed, actually, but it was solved by social mechanisms. ("Hey, tell that loser to knock it off or AUP him off your system, wouldja?") DNS is old.
We no longer think of DNS as a "phone book" style directory system, and we no longer assume that every reachable machine is cared for individually.
I assume that if email were coming along today it would look more like Internet telephony, which is built on things like E.164/ENUM to encode addresses into DNS, using A, SRV, and NAPTR as the record types.
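For reference, that scheme (ENUM) encodes a phone number into DNS by reversing its digits under e164.arpa and publishing NAPTR records at the resulting name that map the number to URIs (SIP, mailto, ...). A small sketch of just the encoding step:

    # ENUM encoding: reverse the digits of an E.164 number and hang them off e164.arpa.
    def e164_to_enum_domain(number: str) -> str:
        digits = [c for c in number if c.isdigit()]
        return ".".join(reversed(digits)) + ".e164.arpa"

    print(e164_to_enum_domain("+1-555-0100"))
    # -> 0.0.1.0.5.5.5.1.e164.arpa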
Learning networking/tcp/dns has been a pain for me for years. I can never wrap my head around it properly despite many attempts.
I blame it on not having easy access to throwaway playgrounds.
I recently found this project http://mininet.org/ which promises throwaway network playgrounds. Hopefully it will help me finally learn networks for good.
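In case it helps anyone else, the throwaway part really is small; Mininet's standard Python API example boils down to something like this (assumes Mininet is installed and it typically needs root):

    # Minimal Mininet playground: two hosts behind one switch, then a ping test.
    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo

    net = Mininet(topo=SingleSwitchTopo(k=2))
    net.start()
    net.pingAll()   # h1 <-> h2 connectivity check
    net.stop()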
Although I appreciate that you're sharing something interesting, I can't believe that the problem with tinkering with networking, TCP, and DNS was a lack of playgrounds.
Any home LAN is a playground. Internet is a playground. GNS3 for Cisco stuff. Linux itself is a playground (I have been playing today with StrongSwan, Quagga and SoftEther!). If you just want a network simulator, there's Packet Tracer. Of course you could also just fire up tcpdump and/or Wireshark and have a look. Many of the things I've mentioned are free :-)
Absolutely. Within that playground you can have a distributed wonderland, running VirtualBox, KVM or XEN where you spin up Linux hosts rapidly. Now your playground can start to imitate real life.
It sounds like the poster you're replying to needs to solidify the fundamentals before the playground under their feet can be appreciated: read through TCP/IP Illustrated, one of the many introductory guides to DNS, etc. TCP/IP Illustrated Vol. 1 is great because of how simple it makes networking feel, and it does so while showing you packet captures (another skill worth picking up).
Add to that GNS3, which I've seen people use to design and sketch out company networks (just need a Cisco account to download the IOS images).
We could go on. The point is that your (free) playground is right under your nose.
Almost always you'll want to redirect a bare domain like iskettlemanstillopen.com to www.iskettlemanstillopen.com. Registrars like Namecheap and DNSimple call this a URL Redirect. In Namecheap you would set up a URL Redirect like this...
I prefer my domains to be naked (as opposed to www.), but I typically redirect all www-traffic in my web server (NGINX). Is this the wrong approach?
I prefer naked domains, but the problem I always have is that you can only use A records on them, which makes using them with a bunch of things a pain in the butt. I host my personal website as just a tumblr blog (yes, go ahead and laugh, but I find it less of a chore to deal with than wordpress), so I need a CNAME to make that work; I just have a very basic redirect on the naked domain to my blog subdomain.
If it were me, I would set the IP (A record) on the root, and then use a CNAME to alias www to the root. Then you don't need any redirects, and you don't need to worry about the server doing the redirects going down and taking your site with it.
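In zone-file terms that's just these two entries (illustrative values, loaded here with the third-party dnspython library; note the CNAME goes on www rather than the apex, since a CNAME can't coexist with the other records required at the root):

    import dns.zone  # third-party: pip install dnspython

    # Illustrative records only: an A record at the apex, and www aliased to it.
    SNIPPET = """
    $TTL 3600
    @    IN A     203.0.113.10
    www  IN CNAME example.com.
    """
    zone = dns.zone.from_text(SNIPPET, origin="example.com.", check_origin=False)
    for name, ttl, rdata in zone.iterate_rdatas():
        print(name, ttl, rdata)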
Every time you request something (a web page, an image, a css file, etc) from a server, your browser sends any cookie data that had previously been set by that server as part of the request header.
Cookies can be set for specific subdomains only, but if they are set for the "unprefixed" domain they will also be sent for all subdomains (just the nature of how browsers handle cookies).
Since cookie data is rarely needed just to serve static assets (images, CSS files, etc.), you can shave some time off each request if you serve them from a subdomain that is different from the web page's subdomain... but of course your web page has to actually be at a subdomain (e.g. www.example.com) in order for this to work.
Hence, setting up your main web pages to be at a subdomain (such as www.) gives you the ability to then serve static assets from different subdomains without browsers having to send cookie data on each request.
Cookies are scoped to a domain name and all subdomains under that domain name. For example, if you set a cookie on `ycombinator.com` that cookie will be presented to `news.ycombinator.com` as well.
Using `www.example.com` as your cookied domain allows you to avoid sending your cookies to `cdn.example.com`, shaving off a few bytes of incoming bandwidth per request. Whereas if you use `example.com` as your cookied domain, those cookies will be passed to `cdn.example.com`. To avoid that, you'd have to set up your CDN on a completely different domain like `examplecdn.com`.
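A quick way to see the difference is in the Domain attribute of the Set-Cookie header; a small sketch with Python's standard http.cookies module (the domain names are illustrative):

    from http.cookies import SimpleCookie

    # A cookie with an explicit Domain is sent to every subdomain of example.com,
    # including cdn.example.com; a host-only cookie (no Domain attribute) set on
    # www.example.com stays off the CDN hostname.
    broad = SimpleCookie()
    broad["session"] = "abc123"
    broad["session"]["domain"] = "example.com"   # goes to *.example.com too
    print(broad.output())

    narrow = SimpleCookie()
    narrow["session"] = "abc123"                 # host-only: just the setting host
    print(narrow.output())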
It is surprising how many companies in technology and related spaces don't have any redirect on the bare domain name and assume everyone types 'www.' before it; without the www prefix, you simply get a 404.
That's server-side at least, which is easily remedied with a little education. Much harder is the long tail of exotic browsers used by millions; I hate 2345 in particular.
OpenNIC is a user controlled Network Information Center offering a democratic and non-national alternative to the traditional Top-Level Domain registries. http://wiki.opennicproject.org/HomePage
I used OpenNIC for a while and it mostly worked. I started to notice some problems resolving .today URLs and had to ask for help in their IRC, and the response was not reassuring. At least one operator does not regularly update or even monitor their servers; I saw downtime lasting weeks for one server (fortunately I had configured my router to use two, and the second kept working throughout). The .today TLD had existed for several years but had not been added to the DNS servers I used. This isn't a complaint, just an observation after a year of regular use.
I like the idea behind the project, but from personal experience I decided that I had to switch to another provider that had more robust infrastructure and regularly patched their machines. DNS is too critical and I felt the risk of having my DNS requests hijacked by a compromised machine was too great.
(Also, who down-voted the parent comment? Bizarre, maybe that was accidental?)
Honestly, I've done quite a bit of DNS handling, mangling, and manhandling, and never once really thought about it or been bothered by it, nor have I had any reason to look into how the underlying protocol works. Which, I guess, means it's a good protocol, because it doesn't leak abstractions.
My 2¢