Hacker News
Adhole.org is shutting down (adhole.org)
124 points by genzer 14 days ago | hide | past | favorite | 40 comments

> ...grew out to become a worldwide multi-node adblocking DNS network with thousands of concurrent users around the globe

This is hard because DNS going down means the Internet is down for anyone who isn't savvy enough to check and change a failing DNS upstream. And when your users are global, it might mean some really angry users if you can't fix emerging issues for hours on end. I run a public content-blocking DNS stub resolver. We made the decision (and subsequently paid the price upfront in engineering effort) to host it on Fly.io (DoT), Deno Deploy (DoH), and Cloudflare Workers (DoH), specifically to avoid sysadmin tasks for what's a free offering; the code is licensed under MPLv2 in case anyone wants to run their own stub resolver on those platforms.

We haven't had much downtime, if any at all, in the one and a half years we have been operational. It costs us $1 per 2 tps (transactions per second: ~6M requests) per month. The free tier of these platforms should cover personal workloads (3 to 5 devices: ~1M requests) just fine.
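For anyone curious what a content-blocking stub resolver does at its core, here is a minimal sketch in Python (the blocklist entries and function names are illustrative assumptions, not the actual service's code): extract the queried name from the DNS wire format, and either synthesize an NXDOMAIN answer for blocked names or let the caller forward the query upstream.

```python
# Minimal sketch of a content-blocking DNS stub resolver.
# BLOCKLIST is a made-up example; real deployments load millions of entries.
import struct

BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def qname(packet: bytes) -> str:
    """Extract the first question name from a DNS message (RFC 1035 labels)."""
    i, labels = 12, []          # skip the 12-byte fixed header
    while packet[i]:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode("ascii"))
        i += 1 + n
    return ".".join(labels)

def nxdomain(packet: bytes) -> bytes:
    """Turn the query into a response with RCODE 3 (NXDOMAIN)."""
    flags = 0x8183              # QR=1, RD=1, RA=1, RCODE=3
    return packet[:2] + struct.pack(">H", flags) + packet[4:]

def handle(packet: bytes):
    """Return a synthesized block response, or None to forward upstream."""
    name = qname(packet)
    if name in BLOCKLIST or any(name.endswith("." + b) for b in BLOCKLIST):
        return nxdomain(packet)
    return None
```

The same logic works regardless of transport; a DoH or DoT front end just hands the decoded wire-format message to `handle` and serializes whatever comes back.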

> I will make sure to keep the domain registered and under my control for the next year, to prevent adhole.org from being abused by DNS hijackers/spoofers.

I think they could, if they wanted to, transfer ownership of the domain to another public content-blocking resolver after a year has passed. There are two or three such providers with a solid enough business behind them to support such a migration.

> it might mean some really angry users

> a free offering

You see, anything free should be treated as a privilege, not expected to last forever. Have you considered what you can do when the free providers either shut down or start charging fair value?

Yes, but that isn't how support works. If you can avoid angry, misinformed people gumming up your support channels, all the better.

Or, if you’re done with the free project, you can just close the support channels.

One thing that could work for warning users of an impending shutdown is to use the captive-portal detection that browsers and operating systems ship with, redirecting users to a page with information before shutting the service down. This way the administrators can inform their users without having to mess with regular DNS traffic and commit MITM crimes just to get people to switch DNS providers.
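As a sketch of that idea (the notice URL is a made-up placeholder and the probe paths are the common Android/Apple/Windows ones; none of this is anything Adhole actually ran): the resolver answers the OS's connectivity-check domains with the address of a tiny web server that returns a redirect instead of the expected success response, and the client pops up the notice page.

```python
# Hypothetical captive-portal notice server for a shutting-down DNS service.
from http.server import BaseHTTPRequestHandler, HTTPServer

NOTICE_URL = "https://example.org/shutdown-notice"   # placeholder
# Connectivity-probe paths used by common platforms.
PROBE_PATHS = {"/generate_204", "/hotspot-detect.html",
               "/connecttest.txt", "/ncsi.txt"}

def portal_response(path: str):
    """Pick the status and headers to send for a request path."""
    if path in PROBE_PATHS:
        # Anything other than the expected 204/"Success" body makes the
        # client assume it is behind a captive portal and show the page.
        return 302, {"Location": NOTICE_URL}
    return 204, {}

class PortalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, headers = portal_response(self.path)
        self.send_response(status)
        for name, value in headers.items():
            self.send_header(name, value)
        self.end_headers()

# To serve: HTTPServer(("", 8080), PortalHandler).serve_forever()
```

Note that pointing the probe domains at this server is itself a DNS rewrite, so it's a deliberate, one-off exception rather than ongoing traffic tampering.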

> $1 per 2tps DNS traffic

What are tps? I'm not familiar with the term and couldn't turn up much. T per second?

transactions per second I assume

>Since the endpoints even made it into XDA articles, I will make sure to keep the domain registered and under my control for the next year, to prevent adhole.org from being abused by DNS hijackers/spoofers.

Is nobody else thinking that one year is too soon here? The cost of domain renewal is very little.

Actually, it's a really nice move to keep this domain up and under control for one year! Moreover, I guess that people relying on this service will soon start to see the errors, go to the website, understand the situation and move on... so one year is quite enough, I think. It's not like it was a service people used once a year...

While it is a nice move, I think the best option is to crowdsource retaining the domain for five years or so. Since the cost of a domain renewal isn't that much, I personally would be willing to chip in for two years of that cost. Because if that domain comes up for grabs, it will most likely be caught during the auction period by someone looking to get ad money.

> ...grew out to become a worldwide multi-node adblocking DNS network with thousands of concurrent users around the globe… but that also meant more server load, which meant upgrading servers, more incident (reports) and as such more work. In other words; the project has become a victim of its own success.

Isn't this a problem that most hobbyist projects that get big have to deal with in some capacity? To me it seems like a case of deciding between monetizing it somehow and building a business around it, with multiple other developers taking part in development and maintenance; donating or selling the entire project to some company or foundation; or pulling the plug, as we see here.

This reminds me of the journey of the creator of icanhazip [1]. He created it and maintained it through several milestones _on his own_ until he could no longer carry on. Luckily it ended on a good note, as Cloudflare decided to "buy" it.

[1]: https://major.io/2021/06/06/a-new-future-for-icanhazip/

Pretty much as you described it. And you can't blame these guys for one second. Who would pay hundreds, if not thousands, of dollars for upkeep without getting any revenue from it?

Kinda sad that OSS and "community-friendly" projects like these mostly end up like this or bought and monetized. But it's totally understandable.

> donating/selling the entire project to some company or foundation

It's not a 100% clean service: you are arbitrarily blocking access to certain resources, so I would guess companies wouldn't want to be associated with it.

> having a business around it

For a service like this, it could mean a major breach of trust for its users. Maybe there's a business model that doesn't involve meddling with the DNS that I don't see.

but people sign up voluntarily, right? So the users are actually the ones arbitrarily blocking their own access.

What company would associate itself with blocking advertising? Worse, it could end up like Adblock Plus.

Probably any that:

  - has the resources to support a project like this
  - has no connection to the ad industry and therefore doesn't care about pushback from it
  - views ads as a security risk and a vector for attacks (in addition to other questionable content) on their corporate devices

But you have a point about Adblock Plus, which sort of sold out, if memory doesn't fail me. They had a good run, but nowadays I just use uBlock Origin.

Yes, but there seems to be plenty of competition in this space. Entering that pool probably didn't look attractive.

No, I think it's a problem of over-engineering and subsequently ending up with fragile systems. If the author had a service that didn't take up much time for them to maintain because of the various problems that appeared, they probably wouldn't close down the project.

One case worth mentioning: The Pirate Bay. One of the largest websites in the world (or it was, at least), with the least amount of technical focus. The website hardly changed and never made the owners any money; they never focused on the technology, but rather built the simplest thing they could for the smallest amount of money they could. They had the largest adversaries of the time, but still, the website is up and running and has been basically since day 1. I think their trick is that they never really cared about the technology itself, and only cared about making information free.

> If the author had a service that didn't take up much time for them to maintain because of the various problems that appeared, they probably wouldn't close down the project.

Is this even possible in our modern-day world, where there are constantly breaking updates and security risks that need fixes (look at the recent Log4j debacle, for example)?

Because while sites like TPB and even HN don't outwardly change often (for example, if the UI works it's generally left that way, without a redesign every year), there is no doubt that they still take attention and effort to maintain, keep running and more importantly, keep running securely.

Of course, if you're talking about domain complexity (which you need to deal with) vs accidental complexity (which you introduce because of either lacking knowledge or chasing after the latest shiny technologies), then I fully agree with you in that regard! That's why I rather enjoyed the "Choose Boring Technology" talk: http://boringtechnology.club/

That really depends on your stack. A plain LTS Linux distro + bind9 + zonefile-formatted blocking data + automatic security updates is pretty hands-off to me.
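For the curious, blocking with stock bind9 is typically done with a response policy zone (RPZ); this is an illustrative sketch (the file paths and blocked domains are made up, not anything Adhole used):

```
// named.conf fragment: consult a local response-policy zone first
options {
    response-policy { zone "rpz.blocklist"; };
};

zone "rpz.blocklist" {
    type master;
    file "/etc/bind/db.rpz.blocklist";
};
```

```
; db.rpz.blocklist: the zonefile-formatted blocking data
$TTL 2h
@                      IN SOA  localhost. root.localhost. (1 6h 1h 1w 2h)
                       IN NS   localhost.
ads.example.com        CNAME .   ; NXDOMAIN for this exact name
*.tracker.example.net  CNAME .   ; ...and for an entire subtree
```

In RPZ semantics, a CNAME pointing at the root (`.`) rewrites the answer to NXDOMAIN (a `CNAME *.` would yield NODATA instead), so updating the blocklist is just editing a zonefile and reloading.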

That's funny, because it's not unheard of for even simple unattended updates to break something, for example: https://blog.kronis.dev/everything%20is%20broken/debian-and-...

(the tone of that blog post of mine is a bit vitriolic and the advice isn't exactly serious, but the fact of the matter is that sooner or later things will break)

OK, fair enough. I guess you can minimally complicate this by updating an exactly identical machine/boot drive first, and then immediately alerting if health checks fail on it. But it really doesn't seem that bad to me. I've run a VPS that's been self-updating continuously since Feb 2019, and I haven't had many breaking issues with the OS.

This is a false premise. Supporting the same use case at different scales definitely comes at different engineering costs. The Pirate Bay backend was using some fairly weird optimization hacks to support their load.

I've never used Adhole but thank you, maintainers of adhole, for holding out this long. I hope you move on to more fulfilling, rewarding and thankful endeavours.

nextdns.io seems to be similar and I recommend it

Seconded; never had any issues with it. One of those cases where paying someone to provide an honest service is worth more than fiddling around with self-hosting something myself.

I pay for NextDNS since the day they started taking payments. Great service, never had any issues.

worth every penny

I used Adhole for a long time before switching to NextDNS for granular control.

Thanks so much for developing and hosting Adhole!

First I’m hearing of this. Was there revenue associated with it? Were people even able to subscribe to it? It’s definitely an established business model, with other similar services. It’s possible to ask for funds here.

Have they thought about... monetizing with ads?


This is an article (in Dutch) with the same news, but the owner responds in the comments:

  I started it at the time because I noticed in my circle of acquaintances that there was interest in adblocking at DNS level, but that they didn't have the knowledge to set up and maintain a Pihole themselves. Five years ago, the world looked a little different and there were almost no public services that offered this (for free) and since I had a spare server, I took the chance.

  During the process, I learned a lot. From Docker to Ansible, but also about DNS itself. Especially DNS amplification attacks were big troublemakers in the beginning, which Pi-hole couldn't handle. Logical, too, because Pi-hole is actually not meant to be used publicly; the developers make that very clear in their documentation. At the time I tried to work around this with all kinds of iptables rules, and that worked reasonably well, but support for things like DNS over TLS or DNS over HTTPS was missing in Pi-hole. Again logical: normally there is no need to encrypt your DNS requests on your own trusted LAN.

  A year or two ago I switched to AdGuard Home as the backend, since AdGuard does support these features and also has some basic security features on board, like rate limiting. That's also when I moved everything to an Ansible playbook so I could easily reinstall everything with one push of a button, e.g. when buying a new node.

  I often bought new nodes during Black Friday or Cyber Monday on sites like Lowendspirit. Some nodes were sponsored by providers themselves, because they liked the idea.

  Now, after five years, I am stopping. Lately I put more energy into it than I got satisfaction out of it. In addition, the servers were bursting at the seams, making the latency of each request far too high. You noticed this while surfing, and I don't want to do that to anyone. Bigger servers are an option, but the money has to come from somewhere. By the way, I would have preferred horizontal scaling over vertical, but the number of (affordable) providers offering anycast IPs is scarce (BuyVM is one of them).

  Fortunately, there are now plenty of other services that offer the same thing for next to nothing, so hopefully ex-Adhole users will not fall into a deep hole.

  In the end, the 'iron' has to be paid for. Part of the servers were sponsored; the rest came from donations and my own pocket. Upgrading was easier said than done, because with 6 locations and therefore 6 servers, all costs are multiplied by 6. Going to providers for an upgrade on an already sponsored server was something I didn't do (something about a gift horse). And asking for donations was never the intention of this project. I saw it as a hobby, and a hobby costs money, but it must remain fun. By the way, this was not the main reason to stop; it was really the time it took and the lack of satisfaction I got from it.

  IP addresses would sometimes change for various reasons: for example because a provider closed a location, a migration to another node, or simply a switch to another provider. That was irritating, because all users then had to change the IP address of their DNS. I think if you really want to do it right, you have to have your own IP range (and that is very expensive).

  I have no tips about incidents. However, since the new intermediate certificate at Let's Encrypt, I have had problems getting DoH and DoT to work. In the end, I did not succeed. Why remains a mystery to me; the chain was correct.
