Benchmarking DNS response times of TLDs (bunnycdn.com)
273 points by twapi 16 days ago | 139 comments



We have a client who insisted on using a .house domain. It kept triggering alerts on our uptime monitoring service with DNS errors, so we had to reduce the sensitivity of that specific test.

We also had a client who had to change their TLD from .healthcare to .org.uk because (a) people were confused by all these new TLDs and kept adding .co.uk, .org, etc., and (b) some NHS systems in the UK (one of their target audiences) refused to accept emails that finished with .healthcare, saying that it was a malformed email address.

I think in the vast majority of cases these new TLDs aren't worth it until the general population's understanding catches up. I still advise clients to use www rather than a plain domain name for websites, because I've come across a significant minority of people who need that "www" as a clear signal that it's a website.


I'm interested in the NHS systems which didn't recognise the newer TLDs. I work for NHSX - could you drop me an email with the details? My contact details are in my profile. Thanks!


Sure - I'll drop them an email to see if they can remember. They changed towards the middle of last year, and it was reported to them by NHS staff rather than them experiencing it first-hand, so their memories may be a bit hazy.


Friendly reminder: you don't really know whether this person truly works for the NHS, as they claim. Don't divulge sensitive information to random people.


Perhaps in general, but in this case the email he's pointing to is an nhs.uk address.


> I still advise clients to use www rather than a plain domain name for websites

And you should. Not only can't you put a CNAME at the zone apex without resorting to nasty tricks, but serving from the bare domain is also a security risk when dealing with cookies.


Could you (or someone) explain this? (I'm not arguing, I'm interested.)


There's plenty about the DNS side elsewhere in the thread already. For the security argument: consider that you run on `example.com` and at a later stage add `[blog|forum|support|...].example.com`. Suddenly cookies from `example.com` might leak to those subdomains. If you put cookies on `www.example.com`, they won't leak to those.
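
To make the scoping concrete, here is a minimal sketch using Python's standard http.cookies module (cookie names and values are made up):

    from http.cookies import SimpleCookie

    # Host-only cookie: no Domain attribute, so the browser returns it only to
    # the exact host that set it (e.g. www.example.com), never to subdomains.
    host_only = SimpleCookie()
    host_only["session"] = "abc123"
    print(host_only.output())   # Set-Cookie: session=abc123

    # Domain cookie: Domain=example.com makes the browser send it to
    # example.com AND every subdomain (blog.example.com, forum.example.com, ...).
    shared = SimpleCookie()
    shared["session"] = "abc123"
    shared["session"]["domain"] = "example.com"
    print(shared.output())      # Set-Cookie: session=abc123; Domain=example.com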


Not if you set the Domain attribute of the cookie properly; this is a poorly-written-software problem, not a second-level-domain problem.


True, but simple mitigations can be powerful ...

Sidenote: if leaking cookies to your own subdomains is a risk, one might already have other problems. Point is: I explained the potential risk; evaluating it is something one has to do oneself.


You can only put a CNAME record at the zone apex if there are no other record types there, as a CNAME is supposed to be a synonym.

If you had records such as:

@ IN A 127.0.0.1

@ IN CNAME example.org.

The server could not answer anything sensible as you now just introduced an ambiguity.

The "nasty tricks" I was referring to is basically records such as ALIAS that you may find at some providers. The way they work is roughly:

- Given a record `@ IN ALIAS example.org.`.

- Pull records at `example.org` (maybe with an AXFR request but usually just individual records).

- Merge our domain APEX with `example.org` records we just pulled.

- Reply to requester with the merged zone, discarding the ALIAS record.
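
As a rough illustration only (not any particular provider's implementation), the flattening step amounts to something like this sketch with dnspython, with `example.org` as the assumed target:

    import dns.resolver  # pip install dnspython

    def flatten_alias(target: str) -> list[str]:
        """Resolve the ALIAS target's A records so they can be served directly
        at the zone apex, with no CNAME ever appearing in the answer."""
        answer = dns.resolver.resolve(target, "A")
        return [rr.address for rr in answer]

    # What the authoritative server would hand back for the apex:
    print(flatten_alias("example.org."))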

Regarding cookies, the issue is that they are passed to subdomains so cookies that you set for `example.org` are also accessible from `*.example.org`.


unless I am hosting separate sites/IPs on subdomains, or have a complex cookie situation, I think the www. prefix seems vestigial/legacy and completely unneeded. I am actually in the opposite camp of not including it at all and redirecting to non-www. it sounds awfully ambiguous in speech ("double-u double-u double-u dot", "all the double-us dot", "dub dub dub dot", etc.) and most times multiplies the syllabic length of a "catchy" address. I even had an older client who said "stop" instead of "dot", and another who used @www. in an email address. I also think it looks silly and unprofessional on business cards, on the side of vans, etc., especially when http:// and/or a trailing forward slash is also included (cringe). if it MUST exist, DNS does not always have to be involved, it could just be a simple http 301 forward or vhost config. we don't add :80 or :443 to the end of a web address. let the browser handle all that.


It's worth making it exist and doing a 301 redirect. There are some people who will still type it, and others who will link it without checking, and they'll just think your site is broken. You're fighting against the grain for what's essentially a couple of lines of config.

It's not much different than using TLS for everything, but still handling and redirecting http:// traffic (also doing a 301 redirect).


It's also useful to set hsts for the whole domain.

I'm opinionated enough to say that if you type http://example.org/foo, I'm just going to redirect that to https://www.example.org/, but reasonable people could disagree (especially since Chrome has been going back and forth between displaying the actual URL and something that's vaguely similar to it).


That would just be a giant middle finger to people who manually type URLs.


Now that Chrome is hiding the "www." it's going to look more and more legacy. I still use the "www." and will probably continue to, but for business cards and things like that I always use the bare domain.


The article https://bjornjohansen.no/www-or-not (2017) persuaded me to still use www, however, opinions vary:

* https://www.yes-www.org/why-use-www/

* https://dropwww.com/why


It depends on your audience, but setting it up just so you don't cringe could well be alienating a bunch of your potential user base. Seems like a strange trade-off.

I think people here often forget that there's a mass of people out there who are nowhere near as technically literate and wouldn't have the faintest idea of what "DNS" stands for.


What's so "nasty" about using ALIAS?

> Regarding cookies, the issue is that they are passed to subdomains so cookies that you set for `example.org` are also accessible from `*.example.org`.

This is only true for cookies that are not set as `HostOnly` as far as I'm aware.


> What's so "nasty" about using ALIAS?

DNS server providers implement this differently, but the two "common" methods I know of are either aggressive caching or caching on request. In both cases the server becomes the client, breaks GeoIP, and creates complexity where errors can easily slip in.


Re: "... only true for cookies that are not set as 'HostOnly'"

Technically, you're right -- but that implies a commitment to vigilance in setting that attribute on every instance of every cookie across every page or endpoint on your apex domain. Further, if you ever want to support some use of a subdomain, you have to deal with the ability of any page therein to set cookies on ".example.com". And when you inevitably miss any instance of any of these, you're unlikely to notice, because everything will still function normally. So in practice, using the apex paints you into a corner and makes the domain less useful.

There's also the issue of DDOS attacks and related problems, for which the fastest and simplest mitigation is updating DNS entries for your CNAME(s). Why take that option off the table?


> What's so "nasty" about using ALIAS?

It's not a standard and therefore not a generally applicable solution.

When something not standard becomes common practice and is expected, here comes the pain.


> When something not standard becomes common practice and is expected, here comes the pain.

I think that's the case with most popular features before they become a standard. Personally, I've used ALIAS on AWS without a hitch for years.


The standard should be amended for the modern web then. AWS's (and I'm sure many others') implementations of this for their various load balancing, CDN services, etc. are critical to the health of the web now.


I don't really see the issue. It changes nothing from the DNS client's perspective.


From the DNS client's perspective, there is a positive and a negative.

Positive: since the server with an alias record actually returns an A/AAAA, a client doesn't have to contact any more servers to get the results.

Negative: the servers for the CNAME target may be faster (relevant if the name is used beyond the A/AAAA TTL but sooner than the TTL that would be set for a CNAME), or the servers for the target may be providing much finer targeting than is possible by proxying.


Noob question: let's say I am fine with everything going to www. or other subdomains, mainly because I like using a CNAME to point to another server. I still need to set some apex record, no? If the apex A record is blank, I don't have anything to do the redirect to www. Or what am I missing? Thank you!


Yes, you still need an apex record with an actual IP address. And ideally something listening at that IP address that will redirect to the www address on both http and https.
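
For a picture of what that "something listening" can be, here is a bare-bones sketch in Python (www.example.com is a placeholder; in practice you'd use your web server's redirect directive, and covering HTTPS also needs a certificate for the apex):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToWWW(BaseHTTPRequestHandler):
        """Answer every request with a 301 to the www host, preserving the path."""
        def do_GET(self):
            self.send_response(301)
            self.send_header("Location", "https://www.example.com" + self.path)
            self.end_headers()
        do_HEAD = do_GET

    if __name__ == "__main__":
        # Port 80 needs elevated privileges; HTTPS on 443 additionally needs a cert.
        HTTPServer(("", 80), RedirectToWWW).serve_forever()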


ALIAS is not a valid record.


I'm interested in this as well.


So maybe we should define a DNS SRV record (https://tools.ietf.org/html/rfc2782) for WWW, and work to get the browsers using it?

e.g. _http._tcp.example.com and _https._tcp.example.com

It seems there was such a draft: https://tools.ietf.org/html/draft-andrews-http-srv-02
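
Resolving such a record is already trivial on the client side; a dnspython sketch (the `_https._tcp` owner name follows the RFC 2782 convention and example.com is a placeholder, so don't expect a real answer):

    import dns.resolver  # pip install dnspython

    def https_srv_targets(domain: str) -> list[tuple[str, int]]:
        """Return (host, port) pairs ordered by SRV priority, then weight."""
        answer = dns.resolver.resolve(f"_https._tcp.{domain}", "SRV")
        ordered = sorted(answer, key=lambda rr: (rr.priority, -rr.weight))
        return [(str(rr.target).rstrip("."), rr.port) for rr in ordered]

    print(https_srv_targets("example.com"))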


SRV support seems to have been (unfairly, I think) rejected by HTTP vendors and standard bodies. There is a new standard, however, in the works:

https://tools.ietf.org/html/draft-nygren-dnsop-svcb-httpssvc...

This one might have a larger chance of being accepted, since it provides some things which HTTP standard bodies and HTTP client vendors want. It doesn’t (at this stage, anyway) provide for the load-balancing “weight” field from the SRV record, but it does support MX-style priority numbers, and also port numbers. Very interesting, to say the least.



One of my DNS hosting providers supports an ANAME record to provide CNAME-like behaviour for the apex/naked domain: https://tools.ietf.org/id/draft-ietf-dnsop-aname-01.html - found it just today while trying to configure static website hosting on S3 & CloudFront.


Use of CNAMES also leaves the door open to simple DDOS mitigation.


Even .org.uk isn't foolproof; I've had mine for 20 years and still occasionally get people leaving off the .uk. The most recent culprit is the Google Android keyboard, which will autocomplete my email address... and leave off the ".uk"!


As the former holder of yahoo.org.uk, I can also confirm that ordinary people don't understand the difference between .org.uk and .co.uk.


why is it .co.uk anyway? like, other countries have .de, .nl... at a certain point i thought it was somebody owning co. or reselling subdomains.


Like many countries, .UK had a second-level set of categories for different types of organization. There's also .net.uk, .sch.uk, .ac.uk, .gov.uk, .nhs.uk, .mod.uk, .police.uk, ...

.AU, .NZ, .JP, .MX, .BR, .TH... it's not uncommon.


>some NHS systems in the UK (one of their target audiences) refused to accept emails that finished with .healthcare

Not surprising at all; I've seen e-mail address validation code that (among other things) used a fixed list of TLDs. I could imagine some spam filters doing a similar thing.


I've had e.g. foo@mysub.mydomain.com rejected by one of the world's top-3 shipping/courier services. This was on a .com domain registered many years ago, so it wasn't a TLD issue. And the subdomain and its MX was also established long ago. But their software finally did accept the mail address after I removed the "mysub." part. Crazy!


I know of someone who was in charge of a ccTLD and used just that TLD for personal e-mail (i.e. no dots). That would throw off a lot of validators, but I suppose it was also good for avoiding all kinds of robots that scan webpages for email addresses.


And I've seen validation that assumed that TLDs were 2 or 3 characters only.


Oh yes, a simple [a-z]{2,3} in regexp. I'm pretty sure I've seen exactly this on many different occasions
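
A quick illustration of why that kind of pattern breaks (made-up addresses):

    import re

    # The sort of validator being described: assumes the TLD is 2 or 3 letters.
    naive = re.compile(r"^[^@\s]+@[^@\s]+\.[a-z]{2,3}$")

    for addr in ("alice@example.com",
                 "bob@example.org.uk",
                 "carol@clinic.healthcare"):
        print(addr, "accepted" if naive.match(addr) else "rejected")
    # carol@clinic.healthcare is rejected even though it is perfectly valid.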


Or their 'modern' version: ([a-z]{2,3}|info)


There were many broken web forms out there that didn't support perfectly legitimate ccTLDs, and that was before the floodgates were thrown open. There are lots of silly ideas out there on how to validate an email address, and almost nobody uses the right one.

(To wit: Deliver to it, and if it doesn't send, that's the user's problem)


> I've come across a significant minority of people who need that "www" as a clear signal that it's a website

Now more than ever. Before they opened the tld floodgates you could be pretty sure [word].[commonTld] could be put into your browser.

So yeah, guess we have to keep using those 4 letters or come up with alternative solutions. Funniest I've seen was an ad using http:// instead.


as with all things, this depends on your target audience.

If you're targeting the general population for something like healthcare, finance, or any other everyday task, then yes, stick to the old-school TLDs.


Would love to hear if the same applies to SEO and/or spam filters.

I have a .works domain, which looks nice, but it was a huge mistake and wish I had gone for a .com:

- people often misspell it as .work, or just generally get confused

- one customer actually couldn't receive my emails because some layer in their email stack was blocking this domain

- .work exists, and somebody else owns [name].work, which is a potential security risk...


> Would love to hear if the same applies to SEO and/or spam filters.

I'm the founder of an email monitoring service.

I can confirm that lesser known TLDs can impact deliverability, but not much. It mostly happens with smaller email services that use white/black lists for spam filtering. These are often self-hosted email services. We've also seen 'enterprise' email security products that use hardcoded TLD lists.

Larger services such as Google, MS, Yahoo, etc don't seem to suffer from this, as they use ML based spam detection.

We've also seen email address validators that do not accept fancy TLDs, which can make it difficult to register to services with a fancy TLD email address.


I’ve heard anecdotally that getting .xyz email delivered is much harder than .com, some servers just blanket reject. I expect this will change as the domains become more popular.


I managed a hosted email platform in the past and can confirm one of our spam filters was a simple whitelist of all known TLDs, which was manually updated every now and then. We eventually dropped it when new TLDs were being announced every few weeks. I can imagine some providers still use these kinds of lists and aren't updating them because they receive no complaints.


> aren't updating them because they receive no complaints.

I wonder why...


There are still enough sites that actually call email addresses with some TLDs invalid when signing up. So I can imagine some of them simply have a fixed list when sending/receiving email and just reject all the rest.


Seeing so much spam from it, I'm not surprised. I'm thinking of starting to do the same.

I wonder why spammers decided to use it; I suspect it's very trivial to get a domain there.


I blacklist .xyz, .ninja, and a few others. Nobody complains.


I had to blacklist .top and .xyz registrations for a free service because they were being used exclusively for spam. xyz in particular I think is just so cheap (often $0.99 to $2) that it's tempting for spammers to register a bunch of throwaway domains.


Reminds me of .info a decade ago. Guaranteed spam, malware, or phishing.


My work blocks xyz and the only xyz I’ve ever wanted to visit was the google/alphabet thing.


This is one of those things you simply don't expect to test when you're benchmarking performance. I've been trying to cut away chunks of 20-50ms on a side project I've been working on, hosted on a .io domain, and I'm seriously considering switching because of this. Great article for sure. I also wonder where the trade-off lies between a vanity domain and a more performant one. For a CDN this makes perfect sense, but would it be better to launch with a less notable but more performant URL?


> For a CDN this makes perfect sense

I'd say the inverse is more likely. If you're going to fire a single request to a domain only you are using and you're running a full local resolver, it may make a difference.

For a public CDN: your browser already has the file cached. If it doesn't, then it has the domain cached. If it doesn't, then the DHCP-provided resolver has it. If it doesn't, then at least it already has the TLD nameserver available immediately, and the TLD can serve that response from a very hot cache. It's the CDN's job to make sure this happens.


With a vanity domain, you can fully control the TTL values. All my sites use a vanity domain because it doesn't tie me to a particular CDN, and they have 86400 TTL.

When you have a vanity domain like cdn.example.com, the recursive resolver already knows the nameservers for example.com, so this actually reduces the additional DNS lookups.


I humbly suggest this is not a good use of your time.


Not if you like nerding out on this stuff. Then it's more like a hobby or pleasant diversion.

But I agree one needs to zoom out and put things into its proper perspective.


Nope, I would kill to be able to cut 50ms via DNS tweaks across a large fleet of >100 sites for a major brand.


On the first request. All subsequent requests will be cached.

Oh, and if the requestor has a large DNS cache upstream, it's already done.

Oh, and if the browser used a pre-fetch, that's already done.

Oh, and if you have already invested your branding effort across 100 sites, maybe you don't want to re-do all that?

Oh, and if you need to cut 50ms from your first-time page load, have you considered dropping all the trackers and analytics JS loads? Can you deliver your first page without any JS at all? Can you do it without a database lookup?

Those are all things you should do before killing anyone.


If you haven't already, take a look at font delivery improvements. Serve font files locally, subset them, and use variable fonts and WOFF2. I'm a micro-performance enthusiast myself and it's my #1 optimization with the biggest gain.


Thanks, I will have a look at your font suggestions.


Switch to .dev then? It's not listed on this graph but its performance should be similar to .app.

Disclaimer: I run .dev.


Although I can see what BunnyCDN is trying to get at, personally I've gotten under 10ms with .com (even without caching).

I find this a bit hard to believe without more people justifying or backing this work up.

So yeah, don't worry about your TLD. .io is perfectly fine and companies and people internationally use it.


.io doesn't have a good history though outside of this - https://hn.algolia.com/?query=io%20domain&sort=byPopularity&...



I'd like to point out that these are empty platitudes. Imagine going to your boss to justify not moving tlds and saying, "well sdan on hackernews said it was fine!"


if anyone does tell their boss, let me know ;) haha


I don't think you can draw any conclusions at all from this. They appear to have measured how far it is from their CDN pops to the anycast instances of the TLDs.

Edit: I see what they did. They picked a random nameserver for each TLD. This means that those using only anycast servers win as they always hit the same POP. .org uses only some anycast instances.


> For each top-level domain, our system picked a random nameserver published for each of the top-level domains and queried a random domain name that we picked for it. We then grouped the results by region and logged the data every 10 seconds.

Am I misunderstanding what they're doing or is this completely misleading? If they're only testing one randomly chosen nameserver, the results are much less likely to be a good indication of the speed of an average request for that TLD. Why not average across all of them?

Also, as they kind of suggest near the end, caching is probably good enough that this is very rarely a problem anyone needs to worry about.


I think they are saying that on each probe, a random nameserver and then a random site are chosen. So overall it's randomizing over all nameservers and sites, to make the median a more useful metric for the TLD overall.

In other words, what they wrote suggests it's not just one fixed nameserver chosen per TLD; rather, one is chosen randomly each time they make a request.
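
If that reading is right, each probe might look roughly like this sketch with dnspython (nameserver IPs and probe domains are placeholders; the real system presumably does more bookkeeping):

    import random
    import time

    import dns.message
    import dns.query  # pip install dnspython

    def probe_once(nameserver_ips: list[str], probe_domains: list[str]) -> float:
        """Time a single query against a randomly chosen TLD nameserver."""
        ns_ip = random.choice(nameserver_ips)   # random published nameserver
        qname = random.choice(probe_domains)    # random domain under the TLD
        query = dns.message.make_query(qname, "NS")
        start = time.monotonic()
        dns.query.udp(query, ns_ip, timeout=2.0)
        return (time.monotonic() - start) * 1000.0  # milliseconds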


This methodology doesn't really consider what recursive resolvers tend to do either.

Recursive resolvers will keep track of performance of the authoritative servers they use and send more queries towards the servers where they get faster responses.

For popular TLDs like .COM/.ORG, it's almost guaranteed your recursive resolver will have enough data to pick a fast authoritative. If you're using a .bike domain though, I'd guess it makes some difference.


I recommend that all DNS servers be in-bailiwick servers with glue.

This recommendation is taken from Dan Bernstein, see https://cr.yp.to/djbdns/notes.html, section "Gluelessness"

It not only improves reliability but can help reduce the number of queries and speed things up.
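
A rough way to check a zone's published NS names for this property (a dnspython sketch; it only compares names and doesn't verify that the parent actually publishes glue):

    import dns.resolver  # pip install dnspython

    def in_bailiwick(zone: str) -> dict[str, bool]:
        """Map each NS name to whether it sits inside the zone itself."""
        zone = zone.rstrip(".").lower()
        result = {}
        for rr in dns.resolver.resolve(zone, "NS"):
            ns = str(rr.target).rstrip(".").lower()
            result[ns] = ns == zone or ns.endswith("." + zone)
        return result

    print(in_bailiwick("example.com"))  # e.g. {'a.iana-servers.net': False, ...}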


I mean, that would probably be true if DNS weren't distributed and highly cacheable. But seeing that it is, I would be surprised if you saw anything other than statistical anomalies on a domain with even limited traffic and poorly configured TTLs.


A little off topic, but do some of you folks have experience with BunnyCDN, the company behind the post?

Are they fine?


I've been watching them for years. They are from Eastern Europe and started really small (they are not so small nowadays), but they deliver extended (!) features which many other CDNs either don't have or only offer in a 5-10x more expensive premium price range. Also their policy is really relaxed.

Awesome value for the money. You get much more for lower pricing.


I don’t have experience with them but I did find that they started as a side project and shared it on Reddit 4 years ago:

https://reddit.com/r/SideProject/comments/419hz0/bunnycdn_th...


I'm using their bulk pricing for video streaming (from a B2 backend). Works great, support is awesome (fast response, credit when issues occur) and the UI is a breath of fresh air. Everything just works exactly as expected. I can't recommend them enough.


Using them for serving f.lux updates - super reliable and beats cloudfront prices by a mile.


I've been using them for several years. Saved our local news startup a boatload compared to CloudFront. What we pay in a year for BunnyCDN would equal one month on CloudFront.


They are great! We have been using them for a couple years now. Highly recommend.


Same question. I was wondering if I should submit BunnyCDN itself as an HN topic.


Yeah, they are doing an amazing job.


Thanks!


Independently reproduced the tests for .com and .blog. Found that many of the TLDs at the bottom of BunnyCDN's list have one thing in common: CentralNic. Avoid using any TLD that uses it for its backend infrastructure. https://www.ctrl.blog/entry/dotblog-tld-performance.html

Interestingly enough, the .top domain gives what it promises: it's at the top of the worst performing TLDs list


> The biggest shockers were the .info and .org domains that showed really poor performance especially in the 85 percentile range, despite being one of the oldest and well established top-level domains with millions of registered domains each. After some further investigation it appears 4 out of 6 of their nameservers are performing extremely poorly which is the reason for the poor results.

I always thought that with all the money collected from ten million .org domains they would have an army of nameservers to keep latency low; instead they actually only have 6 nameservers and are performing poorly? Sure, it probably won't impact real-world performance, but I'm still disappointed that they seem to be doing only the bare minimum. I wonder where those hundred million bucks actually go?


Most TLDs run the bare minimum number of nameservers required by ICANN, especially the newer TLDs. However, hundreds of nameservers scattered around the globe wouldn't help much either, because they just need to have faster responses to the mass of recursive resolvers.


Hundreds of nameservers scattered around the globe with anycast absolutely helps.

The main thing between a DNS request and a DNS response is network round trip time. Actually processing the request should be trivial (especially for .org, but even at .com), the zone file may be large compared to most, but it's all static records, with batched updates. I remember when internic would do updates at midnight, but you might not make it in the batch; mostly I see 5-20 minute delays on changes now.


Damn, the India ccTLD is the second fastest ccTLD out there - almost on par with .us. Not sure where the tests were run - probably in EU/US, so this is actually really good.

It pretty much outperforms .com/.net/.org

#MakeInIndia


How is Make in India even relevant to resolution/caching?


It is operated by Neustar (an American company) under contract from NIXI.

https://en.m.wikipedia.org/wiki/Neustar


That's something for which we might want to see the raw data: to sort by region, see other percentiles, analyze whether it's BunnyCDN's connectivity?

Also, it might be nice to be able to re-do the tests; better yet: have a website that's auto-updated so that we see results today (maybe TLD X had an incident when they were measuring?). Of course, that's not something you can ask a random stranger on the internet to do for you.


> The biggest shockers were the .info and .org domains that showed really poor performance especially in the 85 percentile range, despite being one of the oldest and well established top-level domains with millions of registered domains each

Why do I imagine this being spun into some upbeat marketing exercise for .org?


Friendly reminder for people on HN reading this:

I know this is actually quite interesting, but before you start worrying about the latency of the name servers of your TLD, you might want to do something about the metric ton of JavaScript on your site and the 25 different 3rd party servers from which you side load most of it. Also those 6 additional servers from which you load a bunch of TTF fonts. Especially if all your site does is just display some text and two or three pictures.


To add to this, one thing people tend to forget is that HTML has a DNS prefetch option (rel=dns-prefetch) for the offsite JS you absolutely must have.

Of course I agree with goliath, which is why I try very hard to write pure HTML5+CSS3 with no JS unless absolutely necessary. It is very rarely necessary. When it is, I very rarely need one of the crazy frameworks; plain JS works pretty well.

Beyond that, this is why Adblock Plus and uMatrix are two must-have addons for Firefox. Once you build your asset rule list up with only the couple of JS files needed to run a site, the same site that takes forever for the average visitor can actually be fairly speedy when none of its JS loads.

Now, for the original topic: if you are on Windows, check out GRC's DNS Benchmark, and if on *nix, check out namebench.


Thanks for the tip. I'd heard about it, but never read anything on it. Here is a bit of info on dns-prefetch: https://developer.mozilla.org/en-US/docs/Web/HTTP/Link_prefe...


You can go a step further and `preconnect` to a particular URL if you know it. This includes DNS prefetching but also goes ahead and initializes an HTTP connection.


> rel=dns-prefetch

This is amazing! How could I have missed this? How are the loading times affected when using this option, if you care to share?


It largely depends on how you use it. For example, if you are using standard Google Fonts, you can preconnect to the font files' hostname so the browser already has DNS resolved when the CSS file refers to the font files.


It helps more when the offsites are in another geographic location, like if for some reason you are loading something from the EU on a server in the US. Benefits can be marginal otherwise, so I just suggest people play around with it for any offsite requests they have.


Beware: even some text-only browsers have added prefetching; you need to patch the source to turn it off.

And then there are browsers that have an internal stub resolver. Horrible.

https://www.reddit.com/r/chrome/comments/bgh8th/chrome_73_di...

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-...

https://www.chromium.org/developers/design-documents/dns-pre...

https://www.ghacks.net/2019/04/23/missing-chromes-use-a-pred...

https://www.ghacks.net/2013/04/27/firefox-prefetching-what-y...

https://www.ghacks.net/2010/04/16/google-chrome-dns-fetching...

I have been doing "DNS prefetching" since before this term existed.

I do non-recursive lookups and store the data I need in custom zone files or HOST file. I get faster lookups than any "solution" from any third party.

It is sad how much control is taken from the user, always with the stated goal of "making the web faster".

In many cases they are making it slower. The irony of this blog post by a CDN about TLD latency is that some CDNs actually cause DNS-related delay by requiring excessive numbers of queries to resolve names, e.g., Akamai.

Users have the option to choose for themselves the IP address they want to use for a given resource. If they find that the connection is slow, then they can switch to another one. Same idea as choosing a mirror when downloading open source software. Some users might want this selection done for them automatically, others might not


> And then there are browsers that have an internal stub resolver.

You mean internal caching resolver? Every application has an internal stub resolver, even if it's just using getaddrinfo, which builds and sends DNS packets to the recursive, caching resolvers specified by /etc/resolv.conf or equivalent system setting. But getaddrinfo is blocking, and various non-portable extensions (e.g. glibc getaddrinfo_a, OpenBSD getaddrinfo_async) are integration headaches, so it's common for many applications to include their own async stub resolver. What sucks is if an internal stub resolver doesn't obey the system settings.
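
For instance, even a plain Python script goes through that stub path, and the call blocks until it has an answer:

    import socket

    # getaddrinfo goes through the system's name-resolution configuration
    # (hosts file/nsswitch, then the resolvers in /etc/resolv.conf or the
    # platform equivalent) and blocks until it has an answer; this is the
    # stub resolver every application has.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "example.com", 443, proto=socket.IPPROTO_TCP):
        print(family, sockaddr)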


As a user, I prefer gethostbyname to getaddrinfo. The text-only browser I use actually has --without-getaddrinfo as a compile-time option, so I know I am not alone in this preference. The best "stub resolvers" are programs like dnsq, dq, drill, etc. They do not do any "resolution"; they just send queries according to the user's specification.

As a user, I expect that the application interfacing with the resolver routines provided by the OS will respect the configuration settings I make in resolv.conf. Having to audit every application for how it handles DNS resolution is a headache.

https://www.xda-developers.com/fix-dns-ad-blocker-chrome/

I recall there were earlier experiments with internal DNS resolution in Chromium, e.g., code was added then removed.

Browser DNS caches are another annoyance but that is not what I meant.


> As a user, I prefer gethostbyname to getaddrinfo

On many systems (e.g. OpenBSD) they're implemented with the exact same code. glibc is something of an outlier given its insanely complex implementations interacting with RedHat's backward compatibility promises. Many of the code paths are the same[1], but getaddrinfo permits stuff like parallel A and AAAA lookups, and minor tweaks in behavior (e.g. timing, record ordering) often break somebody's [broken] application, so I'm not surprised some people have stuck to gethostbyname, which effectively disables or short-circuits a lot of the optimization and feature code.

But, yeah, browsers in particular do all sorts of crazy things, even before DoH, that were problematic.

[1] As a temporary hack to quickly address a glibc getaddrinfo CVE without having to upgrade (on tens of thousands of deployed systems) the ancient version of glibc in the firmware, I [shamefully] wrote a simple getaddrinfo stub that used glibc's gethostbyname interfaces, and dynamically loaded it as a shared library system wide. It worked because while most of the same code paths were called, the buffer overflow was only reached when using getaddrinfo directly. Hopefully that company has since upgraded the version of glibc in their firmware. But at the time it made sense because the hack was proposed, written, tested, and queued for deployment before the people responsible for maintaining glibc could even schedule a meeting to discuss the process of patching and testing, which wasn't normally included in firmware upgrades and nobody could remember the last time they pushed out rebuilt binaries. glibc was so old because everybody was focused on switching Linux distributions, which of course took years to accomplish rather than months.


As a user, I admire your work.


No kidding. I'm using a >1Gbit fibre line and the majority of my page load time is still spent downloading recursive, pointless JavaScript dependencies, or doing a billion CORS OPTIONS requests and waiting for the responses. DNS latency doesn't even factor into the end-user experience; it's dominated entirely by front-end design decisions.

If this was a concern it’s an admission that JavaScript developers have optimized to the best of their ability. Which is just sad.


> a billion CORS OPTIONS requests

When I first saw the CORS headers that large sites send with every effin request, I thought I had gone mad, only to learn that their use is encouraged... I remember the times of monstrously sized cookies; this is the same, except this won't go away with the rise of server-side sessions.


Do you have a moment to talk about The Great Saver, uMatrix?


Disabling a lot of the bloat makes random pages break, which is an even worse experience.


Would I recommend uMatrix to my non technical friends? Absolutely not.

For technical people it's a matter of self-selection. It's worse for those of us who can't be bothered to check which scripts break the page and decide whether to enable them. It's great for the others. Personally I can't imagine using the web without uMatrix (and uBlock Origin). For the sites that really break no matter what, if I really need them, I either open them in a Vivaldi private window (I only have uBlock in that browser) or, if all else fails, I start Chrome and close it immediately after I've done what I had to do.


But it is compensated for by the sites that actually work better when you strip their non-local JS from them. I've lost track of the number of times I've come to the HN comments and read about how unreadable the page was due to all the popups and ads and modal dialogs when I just read the text. You're absolutely not wrong that some sites get worse, but it's not one-sided in that direction.


uBlock Origin is probably the better middle way for most. It should accomplish what you describe most of the time yet breakage is the exception instead of the rule.


Those random pages are not worth the visit then.


Nothing wrong with preflight checks. What’s wrong is that a SPA will make 100 calls to a web server for data that isn’t even that complex.

So much poor API design out there that most would be better off with server side rendering.


> So much poor API design out there that most would be better off

...fixing the API design.


Many inexperienced developers don't understand the difference in latency between a local memory call and a remote HTTP API call, so treat the two as equal in terms of architectural concerns.


It's been a truism for many years now: "slow" is often not your side of the pipe, especially with high-bandwidth connections.

>If this was a concern it’s an admission that JavaScript developers have optimized to the best of their ability. Which is just sad.

I mean sure, but "is this the best you can do?" requires you to know what the goals are, and I suspect the issues raised in these comments are not on hardly any of the lists. New Relic and its third-party cool-usage-graphing friends are way way higher in priority.


Absolutely agree with this. The first time I installed the uMatrix extension, half the sites were broken because they wouldn't load unless I greenlighted access to all the third-party servers. Only 30% of the sites I've come across didn't need tinkering with uMatrix to work.

PS: uMatrix by default disables loading of resources (iframes, scripts, etc.) if they're not served by the first-party domain (i.e. the domain itself).


And if they weren't using 60-second TTLs everywhere, they might actually benefit from the caching built into DNS!


Are you suggesting this might be a premature optimization? :)


ccTLDs like .ru kinda make sense being slower on a global scale because they mainly target a national audience, but a lot of the worst offenders are gTLDs where your audience can be anywhere on the planet.


I once read that the person running the .co registry was a k user and released his own k-like language. I wonder if that has anything to do with the speed of the .co TLD


Was this tested with different geographic points to account for latency across the earth?


I find this a bit hard to believe. Whose servers were you running this test on? Which regions exactly?

Are there any other sources that back up this claim?

Also interesting that .in (even though it's an Indian TLD) is faster...


The company I used to work for switched from .io to .dev, because the former had issues with availability.


.in is technically-operated by Neustar.


Off topic: My first thought was "Andrew Huang has a CDN?" but then remembered he spells his nickname differently. Never mind.


Well, this is disappointing. As others have stated, clearly there usually are other areas to optimize for performance (easier targets like too much JavaScript or tracker-code crap) than this. But what I find disappointing is that I was under the belief that things like a TLD were merely cosmetic and didn't have an impact on underlying performance. Domain names were supposed to be sticky labels on more important underlying infrastructure. I always thought that choosing a .COM, .NET, .WHATEVER didn't matter for delivering digital assets across the internets to one's customers/users... well, if BunnyCDN is onto something, color me surprised, and slightly saddened.

I've owned my own {$last_name}.CC domain name, and have used it as the basis for my personal/family email for over a decade, and am quite aware of how it has been treated, at least on the spam front. Admittedly, I've had very, very minor issues compared to many others who have used other seemingly "non-traditional" TLDs/ccTLDs. (I attribute the low issue rate to having used G Suite for many years. Spam fighting: one of the few good things I like about Google.) While the .CC TLD was originally based on a country code, it has been marketed (and I guess managed) for many years now as a generic TLD. My friend gave me the idea to use it after he set his domain name up for his then-nascent (ahem) computer consultancy... And, since the .COM and .NET versions[0] of my $last_name were not available at the time, I went with a nice, short and sweet .CC domain. Back then I also felt a mild sense of edginess for using something different than "boring old" .COM or .NET. But I never figured there would be DNS/name-resolution performance issues... It's just a process of resolving some arbitrary text name to IP addresses, etc., so why should there be an issue, right?

Beyond the admittedly minor spam issues that I've had with using a "non-traditional" TLD, the biggest headache by far is having to educate people that the world has more than just .COM and .NET domain names. Having to emphasize (both by spelling it out verbally and through bold text or caps in writing) that my email address ends in .CC and NOT .COM is quite annoying. After so many years, it still has not lessened much... at least not in the U.S., where I live and work. However, strangely/unexpectedly, outside the U.S. this issue is vastly less of a thing. During the last 2 years I've traveled to many parts of the world for my dayjob, and wow, my {$last_name}.CC domain name is pretty much not an issue outside of the U.S. So this whole time it's my own fellow Americans who lack the literacy in this regard. From my experience, in the minds of typical, layperson Americans there exist only .COM, .NET, .ORG, maybe sometimes .UK... But everything else might as well be .SPAM or .FAKE ;-)

So, after the spam, after all the years of annoyance of emphasizing the spelling of the TLD, and now there could be possible performance issues!?! Man, this whole domain name thing sucks (to say nothing even of the sleazy business/marketing side of the racket). I mean, weren't domain names supposed to make things easier than having to remember arbitrary IP numbers/addresses? I think we need a new method or system.

[0] By the way, the .ORG version of my domain name was available years ago, but I wasn't a non-profit, so I felt I didn't want to grab it, happily allowing any true, legitimate non-profit to scoop it up. My mental jury is still out on whether that was a wise decision or not. Because of the recent news about the .ORG registry, maybe I'll make a separate post on this topic.


> [0] By the way, the .ORG version of my domain name was available years ago, but I wasn't a non-profit, so I felt I didn't want to grab it, happily allowing any true, legitimate non-profit to scoop it up. My mental jury is still out on whether that was a wise decision or not. Because of the recent news about the .ORG registry, maybe I'll make a separate post on this topic.

Not that it matters, but in my mind .COM stood for commercial businesses, .NET stood for networks/ISPs, and .ORG stood for the rest. Do you have any real relation to the Cocos Islands? That's less acceptable to me than putting yourself in .ORG.


I share your perception of what .COM, .NET, and .ORG stood for. And, to answer your question, I have no relation to the Cocos Islands... further, if I were somehow negatively impacting the inhabitants of said region, I would never have obtained the domain name. (I'm not that kind of person.) Besides, the domain name was for my personal/family email only, that's it - not any commercial entity or network service, etc. - and there was no .FAMILY (or similar TLD) back then.

Nevertheless, to clarify why I had obtained a domain name using that TLD... While yes, .CC was originally (and for a very short period) intended for the Cocos Islands, it quickly shifted focus (by those same owners of the NIC/registry that controlled the TLD) and was promoted for international registration; in fact the marketing I saw pushed it as the next .COM for computer hobbyists, etc. And I wasn't the only one who bought that line; see https://en.wikipedia.org/wiki/.cc#Usage

EDIT: Typo corrections.


Sorry, I'm aware everyone was/is doing it, and it's encouraged and acceptable. I see that's not clear from my message; I didn't mean to imply you made a bad choice.

I just would have ignored the intent of .org or .net before a country tld.


I appreciate that clarification; all good, no worries. And, yes, having learned from my .CC experience all these years now, going forward I will ignore the intents of .org, .net, etc. before looking at ccTLD. Cheers!



