Xip.io - a magic domain name that provides wildcard DNS for any IP address (xip.io)
312 points by qrush on June 7, 2012 | hide | past | web | favorite | 121 comments

What is the use case that requires a name like http://test.&lt;ip&gt;.xip.io rather than the bare IP? I've been using the latter on iPhones and iPads with no problem.

Honest question.

You can't test multiple sites on the same host using just the IP.

"You can't test multiple sites on the same host using just the IP."

I'm not getting what the problem is. Isn't this what ports are for? Or even just running the websites on different folders?

Sometimes vhosts are convenient, sometimes they're even mandatory. For example, with XMPP servers, multi-user chat and any components must live on a subdomain. So if your main server is running on example.com then the MUC server is, say, conference.example.com and component "foo" is foo.example.com. No way around it short of hacking the source (and, if I'm not mistaken, violating standards.)

This is just one situation where I can see this come in really handy during development.

> For example, with XMPP servers, multi-user chat and any components must live on a subdomain.

This is only necessary if you want users outside your domain to access your component. While you probably want to do so for MUC, you might not necessarily want to bother for your user directory or gateways. I've run many servers over the years and long since stopped creating a host/subdomain for each component.

Interesting, this must be a shortcoming of OpenFire then. With OpenFire I haven't found a way around having the MUC and extension subdomains accessible via DNS, regardless of whether requests are coming from the same domain or not. Is this not necessary with other XMPP servers? Which ones are you using, if I may ask?

It is indeed a shortcoming of OpenFire; one that won't be fixed [1].

As far as the XMPP protocol is concerned, the concept of sub-domains doesn't matter. It's useful for human users when configuring servers though.

Prosody for example allows running a multi-user chat service on example.com. And there's an undocumented feature which lets you have user@example.com be a user, and room@example.com be a chatroom.

[1] http://issues.igniterealtime.org/browse/OF-162

I've used jabberd2 and ejabberd, neither of which had that constraint. Can't remember from my experiments with OpenFire or Prosody.

Not a problem with ejabberd.

Depends on what it is you're testing. You could be testing a vhost setup, in which case different ports or paths wouldn't suffice. More specifically, you could be testing an application's behavior when given different vhosts. Make sense?

This is the best way to get a "natural" development environment (i.e. as close to production as possible). Also, it's a pain in the ass having to start and stop various apps and remember ports, which is what Pow is great for in the first place.

Ports are a pain in the ass compared to persistent names.

Which is when you add them to /etc/hosts

Can't do that on an iPhone or iPad

Actually you can test two at a time by using an alias in /etc/hosts.

In /etc/hosts: &lt;dev-ip&gt; site1.com site2.com

Then you have to edit /etc/hosts to try two more sites.

The idea that the user cannot access /etc/hosts on the iPhone is reason enough to jailbreak it. Denying access to /etc/hosts means the iPhone cannot connect to any website that requires a hostname, even when the iPhone can connect to the internet, unless the iPhone can access some DNS server (which of course Apple probably wants to control). That is a ridiculous limitation. Neither the internet nor the web requires DNS to work, but the iPhone requires DNS to access the web.[1]

1. assuming the website is using virtual hosts or otherwise requires a hostname

Well this service helps a lot, not only where access to /etc/hosts is restricted but also for other devices, just for convenience.

This would support multiple vhosts.

Multiple sites of course, though that can be solved with ports.

Some things are harder to do without a hostname - cookies, for instance, can be tricky.

I have dealt with weird edge cases in frameworks/proxies/whatever where sometimes the hostname turned into an IP somewhere along the way. You generally don't want this to happen in production, and performing all your testing with names is a good way to observe it not happening.

I could see how it'd help if you are debugging a service that parses a subdomain out of the main URL for a particular purpose (think, for example, a SaaS that provides a domain like 'yourcompany.mysasservice.com').

Other than that, ports will do.

I have a dd-wrt router with DNSmasq functioning as the DNS server for local hosts. DNSmasq resolves external domains using Google DNS. With this setup, domain names like 192.168.X.X.xip.io and 127.X.X.X.xip.io won't resolve, and I believe there is something wrong with my DNSmasq setup. Has anyone else run into similar issues?

(Update) I solved the problem myself. The DNSmasq config has the stop-dns-rebind option enabled, which filters out DNS results in private IP ranges from upstream servers for security reasons. The DNSmasq documentation describes it as follows:

Reject (and log) addresses from upstream nameservers which are in the private IP ranges. This blocks an attack where a browser behind a firewall is used to probe machines on the local network.

In case you run into this issue, just comment out this option in dnsmasq.conf and restart dnsmasq.

Or you can add rebind-domain-ok=xip.io to dnsmasq.conf. Not that I would do that myself, as I still don't see what value it provides.

Thanks for this tip! I guess this is more secure than just opening up all rebind from the wild.

I run unbound as my recursive DNS resolver and it too strips those results out for security purposes.

I'm not exactly sure why it is so hard to connect to a local machine on your network. Either determine your local IP address or use your network computer name.

If you're creating an application where connecting on the "root domain" matters it can be problematic. For example, imagine you were creating some URL rewrites using apache's mod_rewrite and they worked for http://some.domain.com/rewrite-goes-here/ — you would have to do a bunch of extra work (or an extra set of rules even) to make that also work for a bare IP address.

When you're testing on your LAN using a PC/mac or whatever you can do a local DNS modification on the machine (eg. /etc/hosts) but when you're testing from an iPad or some other device this is either impossible or prohibitively difficult.

The other option is to set up a DNS server on your LAN, which is a headache all its own - this is a very simple and elegant way of circumventing these issues. Awesome stuff.

You really should develop your apps so that they are path agnostic. And the mod_rewrite rules can be fixed with a simple RewriteBase declaration (RewriteBase /subdirectory). I've never found this a major problem that requires a DNS server to fix.

Reimplemented in ~30 lines of ruby as a powerdns pipe backend.


powerdns is a solid dns server and very extensible!

    $ host whatever.&lt;ip&gt;.ipq.co
    whatever.&lt;ip&gt;.ipq.co is an alias for 1h9u9ze.ip.ipq.co.
    1h9u9ze.ip.ipq.co has address &lt;ip&gt;

For those who missed the other announcement, Pow 0.4.0 has xip.io support built-in: http://37signals.com/svn/posts/3191-announcing-pow-040-with-...

Does that actually work? On my machine pow has ipfw configured so that it only forwards requests to the loopback address, not to 10.0.1.whatever, so project.&lt;lan-ip&gt;.xip.io fails.

Filed an issue: https://github.com/37signals/pow/issues/293

Edit: a) I should RTF man page for ipfw, b) nevertheless my OS X behaved strangely. I rebooted and now everything works.

Anyway, great project, have been a happy user of pow master for quite a while.

I've also made use of lvh.me (local virtual host), a URL whose DNS points at It's good for testing subdomains on localhost.

localhost.microsoft.com and localhost.yahoo.com also used to resolve to And, of course, this wreaked utter havoc with cookies (since cookies are accessible across the entire domain). Zalewski discusses this in The Tangled Web.

DNS hackery is really dangerous.

I gather that xip.io has the same issue with cookies.

Yes, but xip.io doesn't host any production services with which test cookies might interact.

I like http://localtest.me because it's the only one I've seen with a valid wild card SSL...makes testing and switching between HTTP and HTTPS easy.

Hmm... Why wouldn't I just type the IP in?

Because then you can only serve one site from that address.

With this you could serve as many different sites as you want based on the first part of the domain name.

To expand on this, this problem can usually be solved by editing /etc/hosts. But you can't do that on some platforms such as iOS.

By the way, a neat trick is to assign an alias to your network interface in order to avoid the trouble of DHCP giving you a different IP address each time you connect. For example, on Mac OS X:

  $ sudo ifconfig en1 alias &lt;spare-address&gt; netmask
This address will only be reachable from hosts that have a route to it, which can be achieved for example by also giving them an alias on the same subnet. Still, it comes in handy at times.

(Obviously you want to be sure you're using a vacant address.)

(And of course, when you have control over the DHCP server there are more elegant ways of achieving this, such as binding your MAC to a static IP address.)

Edited for clarity.

What is the difference between an alias and a static IP address? Also don't most routers attempt to give IPs back to the MAC that last had them?

An alias is an additional static ip.

And most routers try to minimize IP variations, but it's not perfect. Mine doesn't persist IP<->MAC associations when I reboot it.

What is the difference between an alias and a static IP address?

Not sure what you mean. If you're referring to the bit about binding MAC to an address, I should probably have said "fixed IP" rather than "static IP", sorry about that.

Also don't most routers attempt to give IPs back to the MAC that last had them?

I'm hopping between different networks (with different DHCP servers) quite a lot, maybe it's less useful when you're always on the same network.

Ah, no Wilya picked up on why I was confused. Sounds like a great tip, thanks!

Ah, for virtual hosts, got it. Thanks.

This would be great for testing multi-tenanted cloud applications. For example:

tenant1.&lt;ip&gt;.xip.io tenant2.&lt;ip&gt;.xip.io tenant3.&lt;ip&gt;.xip.io

They all resolve to the same IP address, but now the web application at that address knows which tenancy is being targeted.
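That dispatch is just string surgery on the Host header; here is a minimal sketch (the tenant names and base suffix are illustrative, not anything xip.io itself does):

```python
def tenant_from_host(host, base_suffix):
    """Extract the tenant label from a Host header such as
    'tenant1.' for base_suffix ''."""
    host = host.split(':', 1)[0].lower()   # strip any :port
    suffix = '.' + base_suffix.lower()
    if not host.endswith(suffix):
        return None                        # not one of our tenant names
    prefix = host[:-len(suffix)]
    # Only the label immediately left of the suffix names the tenant.
    return prefix.split('.')[-1] or None
```

A request for tenant2.&lt;ip&gt;.xip.io would then route to tenant "tenant2", while a bare hit on the base suffix maps to no tenant at all.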

You can do this with pow if you're testing locally - tenant1.mydomain.dev, tenant2.mydomain.dev etc, and the URLs are a bit cleaner.

Couldn't you accomplish this with djbdns' dnsrewrite or pdns_recursor's lua scripting?

Why anyone would want to write a DNS server (=something that needs to be very fast) in Javascript is beyond my comprehension. The ASCII art is probably better work than the DNS server.

Or unbound's python scripting. Or if GeoScaling slightly improves its excellent smart subdomain service, 2 lines of php script.

You guys are missing the point. It's intended to be used with Pow. This way you don't have to bother with manually starting & stopping servers or remembering ports or whatever. This domain allows you to access your sites in the same way from any device on your network, not just your dev machine. Handy! And it works. What's not to like then?


I created a clone of xip.io which doesn't have any of the DNS faults:


I run a kinda related service that does instant dns records: http://ipq.co/

And I wrote a Ruby DSL to easily integrate with a real dns server (powerdns). Makes it trivially easy to write things like xip.io


I don't get it - it's pretty easy to set up a wildcard DNS entry on one's own server, right? Why would we go through this?

I run a simple djbdns setup locally with a caching resolver that passes specific domains to my dns server proper and the rest up the chain. Took about seven minutes to configure properly. This seems overly complex.

Nice hack.

Of course reverse dns doesn't work :-) I suppose it kinda sorta could if you tracked where a request came from and what IP you sent it and if you got a reverse lookup you could undo that, but still it is clever!

Reverse DNS is controlled by the company that owns the actual IP address. There's no way for a random website to change responses for it (unless they own the IP range, or were delegated control)

Sort of. Which is to say it is when you don't lie, but you can lie if you know what you want.

When you reverse map an IP you look up b4.b3.b2.b1.in-addr.arpa., where b1 - b4 are bytes 1 through 4 (in reverse order) of the IP address; so a.b.c.d becomes d.c.b.a.in-addr.arpa. The interesting bit is you send this to some DNS resolver; typically in the 'generic' world your machine got the address of a resolver (and maybe a backup) from the DHCP server that gave it the IP address. When that DNS server sees this request, what it is supposed to do is either tell you to 'go fish' and here is the IP of a server that can help, or 'recursively resolve' by forwarding on your request. Now if you run a vanilla BIND or djbdns setup you will get short-circuited by it recognizing a 'private' address and not resolving it; if it did try, the root servers would tell you to go away as well. But if you recognized it as a private address and sent it back to xip.io DNS servers on a lark, they could "pretend" to be authoritative for the domain and return you a CNAME record that pointed back to your fake name.

I admit it is a hack on top of another hack but as long as we're writing custom DNS servers why not go all in? :-)

Isn't it controlled by whoever is running the .arpa domain?

Yes and no. There is nothing [1] preventing any DNS server from responding authoritatively to a request that it is presented with, except a moral correctness to the protocol.

[1] If you ever wondered how OpenDNS or your ISP sends you spammy web pages when you try to resolve something that doesn't exist, or how the hotel hijacks your browser into giving you a login page, this is it. You look for google.com, it notes you haven't logged in, and it returns the address of its paywall as the answer.

Oh except if you are running dnssec in which case it is a lot harder to lie about what you are authoritative for. But on my dns servers at home they all think they are authoritative for 10.in-addr.arpa. so that they will answer queries for that network.

Why is this better than adding an entry to my hosts file?

/etc/hosts can't do wildcards
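That said, if you already run dnsmasq somewhere on the network, a wildcard is one configuration line; the domain and address below are placeholders:

```
# dnsmasq.conf: answer anything under .dev.example with a single LAN address
# (exact-match /etc/hosts entries can't express this)
```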

How are you going to add an entry to the hosts file on an iPhone?

Buy (or dig out of your closet) an old Wifi router and install Tomato on it. Its web interface lets you edit its hosts file which is then active for all the devices connected to it.

Except many of us don't have Tomato/DD-WRT capable devices, not to mention I can't be bothered setting up an additional router to my current two, flashing and securing it and then switching my devices over to it just to test something when this solution does essentially the same thing.

I know you meant your question rhetorically, but I believe you can add a proxy server to the iPhone and handle any hostname trickery there.

Edit: Proxy settings can't be global, but are assigned individually for each WiFi connection.

Or you can use xip.io and save yourself a lot of trouble.

Yeah but then you're into jailbreak-warranty-invalidation territory. Great for personal devices, not so great on corporate.

Why would adding a proxy server invalidate your warranty? Adding a proxy is built in to iOS.

Oh whoops.. misread parent as installing a proxy server, not configuring one to be used in the settings.

jailbreak it

Because it's a PITA to show other people how to make entries to their hosts file if you want for someone else to look at your project. I'm not telling you anything you don't already know, but this is why I would use this service over just making entries to my own hosts file.

This is pretty handy until we're all switched over to IPv6, at which point we'll need to "solve the problem once and for all" again.

Isn't the idea of IPv6 that we have plenty of addresses? Thus it wouldn't (read: shouldn't) be hard to get another address bound to your virtual host.

In fact doesn't the whole idea of binding multiple sub-domains to the same IP address (on the same port) kind of go by the wayside when it's free/cheap/not-going-to-blow-up-the-internet to get another IP?

Yes, you could indeed throw another address at your virtual host. But the point I was trying to make is that the website/tool in question won't work with IPv6 addresses, as they are not valid in a URL :)

I wish one of the localhost-to-web projects like showoff.io or localtunnel would allow wildcard hostnames like this.

I hope people trust 37signals.

We have no proof that the code on the repo is related in any way to the code the actual site is running.

You're right. I'm probably just running xip.io to steal all your sensitive development-mode form data!

Information you have access to:

External ip addresses of requestees. Internal ip addresses of requestees.

To some, this is very useful...

Can you be specific instead of "to some?"

Aha! Give me back my "foo:bar"s!

You have no proof that you aren't running a rootkit logging all your keystrokes right this second ;)

You have no proof that you're not living in the Matrix right now!

^This disturbs me :(

If you click on the "custom DNS server" link on the site, you will get a surprise

hah, they wrote it in node.

Nice, although were I to write one, I'd probably just use PowerDNS and its pipe backend. Probably only some 30 lines of Perl/Ruby/whatever. Example: http://wiki.powerdns.com/trac/browser/trunk/pdns/modules/pip...

Trust them with what? All you'd be telling 37signals with this is your development server's internal IP address and the names of any subdomains you might be using. No actual site traffic will go to them.

If they do switch where xip.io points, they could see what you're doing with your dev server if your local DNS lets the resolution expire.

That said, I do trust them :)

Why not use mDNS/Bonjour?

looks very useful

I've identified several technical problems with this domain, and this isn't an example of how to properly operate DNS. 37signals is setting an absurdly low TTL on these records (10 minutes; the answers never change, I absolutely do not understand the logic behind this TTL), which means every 10 minutes you're re-resolving a local address, through a CNAME (so two DNS round trips, and in my case this resolution took between 115ms and 230ms, not small change):

    [~]$ dig foo.
    foo.	600	IN	CNAME	foo.daze1.xip.io.
    foo.daze1.xip.io.		600	IN	A
Concerningly, ns-1.xip.io is also broken; it does not serve NS records for its own zone, instead relying upon the SOA record and the upstream glue, which I'm shocked works:

    [~]$ dig +short NS xip.io
The nameserver delegation from nic.io is also broken:

    xip.io.			86400	IN	NS	ns-1.xip.io.
    xip.io.			86400	IN	NS	ns6.gandi.net.
    ;; Received 86 bytes from 2001:678:5::1#53(b.nic.io) in 60 ms
Oh, well that's interesting, Gandi is a backup for their custom daemon, eh? So did they implement AXFR, IXFR, and notify and such to Gandi? Well, let's ask Gandi:

    [~]$ dig @ns6.gandi.net. SOA xip.io
    ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 3222
Oh, guess not. The long and short of this is for DNS purposes, a custom daemon is almost never the answer. This could have been accomplished with BIND fairly easily, and the zone would be functional as well.

It's a cool idea, but there are some other problems too, which I just want to list to help the developers - I'm not trying to rain on a parade.

As in the parent comment, a CNAME is returned for arbitrary names;

  % dig foo.     
  foo.	600	IN	CNAME	foo.a2eo0.xip.io.
  foo.a2eo0.xip.io.	600	IN	A
but only if the request is of type A. Requests of other types return invalid NXDOMAIN responses - invalid because they contain no SOA in the authoritative section. CNAMEs are supposed to be returned for all records of any type for a given name, not doing so is dangerous as it can poison caches. Not returning the CNAME even for a query of type "CNAME" is particularly harmful.

Responding with no name would be bad on its own, but saying that no name exists is clearly wrong and can be used to poison caches (the NXDOMAIN is cacheable). Note that most browsers and clients will now perform an AAAA lookup prior to the A lookup - poisoning their own cache if they happen to have a copy of the SOA for xip.io in cache (the SOA record hints to the negative cache lifetime).

It's not clear that using an intermediate CNAME does anything useful - why not just return an A record with a billion-second TTL value. As-is, it merely adds a round-trip (the CNAME and A are not returned in one pass by ns-1.xip.io).

Additionally, ns-1.xip.io does not mark the "authoritative answer" bit in any responses - which will cause issues with some resolvers.

But, still a neat idea. Question for the developers;

It's clear that the intermediate CNAME represents an encoding of the IP address, e.g.;

  foo.	600	IN	CNAME	foo.a2eo0.xip.io.
here "a2eo0" is an encoding of the IP address, but then;

  foo.	600	IN	CNAME	foo.k201s.xip.io.
are you using some kind of cipher?

PS. Everybody please use for IP addresses in examples and documentation, and 2001:db8::/32 for IPv6. See RFC3330/5735 and RFC3849. It's good karma ;-)

Source code for encode/decode is found here:


Thanks! It reads like a 36-ary encoding of an IP address in host byte order, rather than network byte order, which is why it seems to jump around so much.

Interestingly, it encodes as 0.xip.io, but then refuses to answer for 0.xip.io. The reason isn't obvious to me from reading the code; perhaps some kind of overflow condition is triggered by the right shift.

Hold on - who cares? This isn't meant for use in production, right? Or am I missing something... from what I can tell the purpose here is that I can set up a domain that will resolve to an address on my LAN without having to modify, say, /etc/hosts on my Android device (which I wouldn't even know how to do) or set up a DNS server on my LAN (which is of course possible but a lot more long-winded than the solution proposed here).

I can't see how it matters whether or not there are problems with this from the perspective of being a "correct" DNS server so long as it works for its intended purpose (testing things on your local network from a bunch of different devices).

Edge cases and niggling "works ok for me" problems really do matter for something that's intended to be used with testing.

Otherwise it's easy to rat-hole for a long time trying to determine why your test isn't working, when it turns out it was a problem between your DNS resolver and an upstream domain.

Problems between DNS resolvers and DNS authoritative servers are classically intermittent; they usually depend on the ordering of a chain of steps to occur. For example, I might get a resolution failure for a xip.io record if one of the following sequences occurs;

  Client asks resolver for AAAA - resolver gets NXDOMAIN from ns-1.xip.io, caches it
  Client asks resolver for A - resolver responds with NXDOMAIN
but if the queries happen in the reverse order, things are fine.

or, another example;

  Client asks an AD DNS server to perform resolution, server chokes on lack of the AA bit in authoritative answer.
And so on.

But then in either case if a tester fires up nslookup or dig, everything works on the command line, and so they may spend quite a while trying to figure out why their library routine for connecting to their service isn't working.

What I took away from this, though, is that they just want to be able to load a web application they're developing on their iPad/iPhone or otherwise "restrictive" device that doesn't allow you to easily make local DNS modifications (such as /etc/hosts files).

I honestly don't understand what you've written above (although I re-read it a couple of times, I guess I'm just not knowledgeable enough about DNS for it to make sense) but can you see those issues impacting the ability of someone to load an application on their iPad in order to test it out?

I guess the problem might arise that people start to use this "not as originally intended" and get into all sorts of strife but for the particular scenario they were originally intending it for it seems perfectly adequate, no?

The issues reported so far could absolutely cause errors across any device, including an iPad. The problems that the DNS setup will cause affect resolvers - which can be a combination of software in your browser, your c library, local caching daemon, on your cable/dsl modem, in your ISP, and a public provider.

Many crufty resolvers - on things like wifi routers in particular - don't deal well with the lack of an AA bit, or a REFUSED answer. So a tester could easily end up with "works for me" and "not for me" reports that are really just down to the particulars of their network and resolver software, whether they have IPv6 enabled, and so on.

Edited to add: Again, I don't mean to rain on the developers' parade. It's a great idea.

Writing DNS implementations is hard, and requires a certain kind of technical archeology to get to grips with the detail. DNS is a tricky protocol, chaotically and ambiguously documented. I've helped write 3 different ones - and I still get things wrong. And that said; anyone interested in writing hardcore DNS implementations that have to operate on the scale of microseconds per query should drop me a line.

The state that xip.io is in right now could, theoretically, result in DNS failures 50% of the time on any device.

Actually, a low TTL is ideal for testing purposes. While you're correct that a query will never give a different answer, a low TTL ensures that the name won't linger in any resolver's cache for very long, which makes it less easy to discover. This also makes the arbitrary string chosen as a subdomain particularly ephemeral, which is important when testing name-based virtual hosts. Why leave a testing domain stuck indefinitely in the cache of a resolver I don't control? I'd rather have it disappear when I'm not using it.

It depends on what you're working on. If you're developing a SaaS application where an individual instance should be providing group features based on the hostname, suddenly host names becomes a development detail. Though I agree that in most web applications this isn't the case.

I'm sorry, I edited my comment (the broken zone is more troubling to me than the impetus for using it) and made yours look out-of-place. You responded to something accurate the first go, and I agree with you.

I find it interesting that we are required to do a second round-trip for the CNAME when it is available in the same bailiwick and could be sent as part of either the ADDITIONAL SECTION or as part of the ANSWER SECTION. Would you happen to know what the RFC has to say on this? I understand that CNAMEs are not allowed to exist with other records, but returning it in the ADDITIONAL SECTION shouldn't be a cause for concern.

The reason why it may not be serving NS records for itself is because looking at what is available on Github the server is started on port 5300, so I am assuming that there is some sort of DNS resolver/cache sitting in front of it that may be stripping them out. Same thing with it not responding with "Authoritative answer" bit set...

Although that is simply speculation, maybe they did put the node service directly on the internet.

Once a CNAME exists for a name, no record of any other type may exist for that same name (it's an override for all types).

But for a query like this, a server is allowed to return both a CNAME and its relevant target(s) ... as long as they are within-bailiwick. It can go right into the answer section, e.g.;

  % dig example.allcosts.net @ns-22.awsdns-02.com.            
  example.allcosts.net. 300     IN      CNAME   at.allcosts.net.
  at.allcosts.net.      3613    IN      A
this is permitted because the original type of the query was "A", so we can include it as an answer, and it will avoid a round-trip on behalf of the recursor. That's all regular RFC1035 behaviour.

It's more common to use the additional section to include details about the target(s) of MX, SRV and NS records. That's more of a "I know you asked for an MX record, but you're going to need this A / AAAA record too pretty soon, so here it is in the additional section" kind-of thing. The additional sections in the responses to the following queries should be illustrative;

  dig NS  ns_example.allcosts.net @ns-22.awsdns-02.com
  dig MX  mx_example.allcosts.net @ns-22.awsdns-02.com
  dig SRV srv_example.allcosts.net @ns-22.awsdns-02.com
Something I forgot to mention; Being that DNS is the chaotically documented protocol that it is, I'm glad they launched early with a minimally viable product. It's the best way to get feedback like this for free! I think the real scope for real-world error is something like 1 in 100 users experiencing a problem as-is. Most resolvers are hyper tolerant of any amount of DNS crud, because they've been beaten on so much by poor implementations over the years. But the 1% of the time it breaks will cause you hours of pain in debugging.

In fact, returning both the CNAME and the A in the initial response is required. Returning just the CNAME and setting NOERROR tells a recursor 'the target name exists but I do not have an A record for it'. Luckily, all recursors I am aware of are stubborn and will then ask for the A anyway.

This is a good example of where things get tricky in DNS. A resolver could never really infer non-existence of the A record from mere non-presence in an answer like that.

Although RFC1034 outlines that a server typically would do that, it also says that it shouldn't include data that it's not authoritative for.

So a conflict arises when you CNAME to a sub-delegated child zone. E.g.

  foo.example.com IN CNAME baz.example.com
That response may come from a server that's authoritative for example.com - and so "baz.example.com" is technically in-bailiwick from the point of view of a resolver who has made only this query.

However baz.example.com may itself be delegated to other nameservers, and so is "really" out of bailiwick. But the response won't signal this to resolvers at this stage (though in theory could via the additional section).

The simplest reason why resolvers ignore it though is that there's no SOA in the response from which to derive the negative caching time - so it wouldn't know how long to cache that non-existence - and almost all resolvers are caches.

Even if it is required, some authoritative DNS server implementations don't; so far I have found that the BIND that came with FreeBSD 8.0 doesn't, nor does tinydns.

So recursors would have to account for possibly broken implementations and try the query anyway.

Ah, here is where implementation becomes important... not all of the authoritative DNS servers I have tested actually have this behaviour. So far I have found that BIND and tinydns don't send the A record even though it is in bailiwick for the CNAME.

Hopefully they will solve this problem at some point, but if you want an alternative fast, you could consider using services like DynDNS, No-IP, etc.

I use the following script which I found somewhere (can't remember where) and modified: https://gist.github.com/2894514

You can run it:

    avahi_publish.py service1 service2

    curl http://service1.local:8000/bla
It's linux only of course :)

Patches welcome.

I can't patch your registrar misconfiguration, but, thanks for the response.


(The author of the software wrote a comment here: "So you're just here to shit on things?", which he has since deleted.)

I genuinely and honestly cannot log into your Gandi account and fix your nameserver delegation, so that means I'm just here to shit on things? That's a logical leap for you? You are delegating xip.io to a nameserver that is refusing queries for your zone; that's seriously broken and can result in resolution failures, making your clever hack worthless.

I don't know why I bother providing feedback, since people from your school of thought (I'm looking at the 37signals community as a whole, here, which you're being a shining steward of) just get defensive and take your software being broken personally. You wrote a poor DNS server. Read the spec, study BIND's or NSD's source to understand the years of work that went into this before you, and understand the problems I've pointed out. I just get annoyed when people flagrantly misimplement DNS, because that starts trends, like Heroku suggesting for a long time that you use a CNAME for @ (don't do that).

I'm not making this up: http://i.imgur.com/zFNkV.png

> You wrote a poor DNS server. Read the spec, study BIND's or NSD's source to understand the years of work that went into this before you, and understand the problems I've pointed out.

Why? xip.io exists and works. If he had to read hundreds of pages of technical specs or thousands of lines of source to implement it, it wouldn't exist.

Feel free to make your own spec-compliant or better-working version though; Sam's done the same for RVM, and I'm sure he wouldn't have any problem with better software existing.

Bravo. I sometimes wish HN had an option to filter out 37 Signals items...

sscheper, I love your average on HN (-.2). In response to jsprinkles: I do believe this was just a hack and not exactly intended for public consumption but someone decided "hey that's pretty cool let's chuck it on the web". Which has the obvious results of the opinions of hundreds :P

This was an announcement; "someone" in this case is a co-worker of the author. The author of xip.io (sstephenson) came on thread to discuss this and a related announcement about Pow.

This is a hack for devices where the user cannot access /etc/hosts?

Running a local DNS server on these devices is also not possible?

Can a user access ifconfig and change interface settings, e.g. adding an alias?

In terms of networking, these devices appear to be crippled. Yet they do not have to be if they're built using code from the BSDs.

I can't really see the problem it's trying to solve, but I guess some people need it.

I think it really speaks to the impoverished startup environment in Chicago that this ends up as an un-monetized throwaway product.

In Silicon Valley an idea like this could lead to a helluva exit with backing from incubators like YC.

That's funny. I would interpret it the other way around. It doesn't say much for SV if they have to make money off of redundant little .js hacks like this. They have nothing better? That's the problem with SV. Lots of stupid money, conniving VC and no standard for what constitutes an actual business. They can take anything and spin it into a "company" just to reach a "helluva exit". What a joke. It is all going to implode.
