Requiring proof-of-work (PoW) could gate simple requests, rejecting them until the client includes a nonce that satisfies the challenge. Unfortunately, this collective PoW could burden power grids even more, wasting energy, money, and computation just to get requests through. Such is life. It would be a lot better to just upgrade the servers, but that's never going to be sufficient.
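To make that concrete, here is a minimal sketch of a hashcash-style check, assuming the server issues a per-request challenge and only accepts the request once the client supplies a nonce whose SHA-256 hash over challenge+nonce has enough leading zero bits. The challenge string, difficulty, and function names are made up for illustration; this is not any particular service's scheme.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/bits"
	"strconv"
)

// leadingZeroBits counts the leading zero bits of a SHA-256 digest.
func leadingZeroBits(sum [32]byte) int {
	n := 0
	for _, b := range sum {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// validPoW reports whether SHA-256(challenge || nonce) has at least
// `difficulty` leading zero bits: cheap for the server to verify,
// expensive (about 2^difficulty hashes on average) for the client to find.
func validPoW(challenge, nonce string, difficulty int) bool {
	sum := sha256.Sum256([]byte(challenge + nonce))
	return leadingZeroBits(sum) >= difficulty
}

func main() {
	const challenge = "example-challenge-issued-per-request" // hypothetical
	const difficulty = 16                                    // ~65k hashes on average; illustrative only

	// What a client would have to do before the server accepts its request.
	for i := 0; ; i++ {
		nonce := strconv.Itoa(i)
		if validPoW(challenge, nonce, difficulty) {
			fmt.Println("found nonce:", nonce)
			break
		}
	}
}
```

The asymmetry is the whole point: verification is one hash for the server, while every client, legitimate or not, pays the search cost up front.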
What do you suppose we as website owners should do to prevent our websites from being DoSed in the meantime? And how do you suppose we convince/beg the corporations running AI scraping bots to be better users of the web?
If I'm being honest... I expect the websites to keep returning errors, and I hope that those who employ you at least start to understand what's going on.
Maybe because it's an overly simplistic LRU cache, in which case a different eviction algorithm would be better.
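To illustrate why a plain LRU falls over here: a crawler that touches every URL exactly once evicts the entire hot set, so every real visitor afterwards misses the cache. A toy sketch (hand-rolled LRU, tiny capacity, made-up paths), which is the usual argument for scan-resistant policies such as LFU- or ARC-style eviction:

```go
package main

import (
	"container/list"
	"fmt"
)

// lruCache is a deliberately simplistic LRU: a map plus a recency list.
type lruCache struct {
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> element holding the key
}

func newLRU(capacity int) *lruCache {
	return &lruCache{
		capacity: capacity,
		order:    list.New(),
		items:    map[string]*list.Element{},
	}
}

// Touch records an access, evicting the least recently used key when full.
func (c *lruCache) Touch(key string) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(string))
	}
	c.items[key] = c.order.PushFront(key)
}

// Has reports whether key is still cached.
func (c *lruCache) Has(key string) bool {
	_, ok := c.items[key]
	return ok
}

func main() {
	cache := newLRU(3)

	// A few "hot" pages that real visitors keep hitting.
	for _, k := range []string{"/", "/about", "/feed.xml"} {
		cache.Touch(k)
	}

	// A crawler then walks every URL exactly once...
	for i := 0; i < 1000; i++ {
		cache.Touch(fmt.Sprintf("/archive/page-%d", i))
	}

	// ...and the hot set is gone: every real visitor now hits the backend.
	fmt.Println(cache.Has("/"), cache.Has("/about"), cache.Has("/feed.xml")) // false false false
}
```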
It's funny really since Google and other search engines have been crawling sites for decades, but now that search engines have competition, sites are complaining.
As of this year, AI has given people superpowers, doubling what they can achieve without it. Is this gain not enough? One can use it to run a more efficient web server.
What human problem? Do tell -- how have sites handled search engine crawlers for the past few decades? Why are AI crawlers functionally different? It makes no sense, because they aren't functionally different.
Especially for image data libraries, why not provide the images as a dump instead? There's no need to crawl 3 million images if the download button is right there. Put the file on a CDN or Google and you're golden.
There are two immediate issues I see with that. First, you'll end up with bots downloading the dump over and over again. Second, for non-trivial amounts of data, you'll end up paying the CDN for bandwidth anyway.
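The first issue can at least be softened if the dump endpoint honors conditional and range requests, so well-behaved clients only re-download when the file actually changed and can resume instead of restarting. A minimal Go sketch with a made-up path and route, using http.ServeContent (which handles If-Modified-Since and Range); it does nothing for clients that ignore caching headers, and it doesn't change the bandwidth bill:

```go
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Hypothetical dump file; in practice this would more likely live on a CDN.
	const dumpPath = "/var/data/images-dump.tar"

	http.HandleFunc("/dump/images.tar", func(w http.ResponseWriter, r *http.Request) {
		f, err := os.Open(dumpPath)
		if err != nil {
			http.Error(w, "dump unavailable", http.StatusInternalServerError)
			return
		}
		defer f.Close()

		info, err := f.Stat()
		if err != nil {
			http.Error(w, "dump unavailable", http.StatusInternalServerError)
			return
		}

		// ServeContent emits Last-Modified and honors If-Modified-Since and
		// Range, so clients that bother to send those headers re-download only
		// when the dump changed and can resume interrupted transfers.
		http.ServeContent(w, r, "images.tar", info.ModTime(), f)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```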
Search engine crawlers generally respected robots.txt and limited themselves to a trickle of requests, likely based on the relative popularity of the website. These bots do neither: they will crawl anything they can access and send enough requests per second to drown your server, especially if you're a self-hoster running your own little site on a dinky server.
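For the request-rate half of that, one stopgap on a small server is per-client rate limiting in front of anything expensive. A minimal sketch using golang.org/x/time/rate token buckets keyed on client IP; the limits, burst size, and lack of eviction for stale entries are simplifications for illustration, not a hardened setup:

```go
package main

import (
	"log"
	"net"
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

// limiters holds one token bucket per client IP. A real setup would also
// evict stale entries and probably key on something smarter than the IP.
var (
	mu       sync.Mutex
	limiters = map[string]*rate.Limiter{}
)

func limiterFor(ip string) *rate.Limiter {
	mu.Lock()
	defer mu.Unlock()
	if l, ok := limiters[ip]; ok {
		return l
	}
	l := rate.NewLimiter(rate.Limit(2), 10) // ~2 req/s with a burst of 10 (made-up numbers)
	limiters[ip] = l
	return l
}

// rateLimit wraps a handler and returns 429 when a client exceeds its budget.
func rateLimit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr
		}
		if !limiterFor(ip).Allow() {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", rateLimit(mux)))
}
```

Of course, this only blunts the per-IP flood; crawlers spread across many addresses need something more than this sketch.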
Search engines never took my site down, these bots did.
I'm specifically talking about Forgejo, which is written in Go but calls out to git for some operations. And the effect that was worse than pegging all the CPUs at 100% was the disk filling up with generated zip archives of all the commits of all public repositories.
Sure, we can say that Forgejo should have had better defaults for this (the default was to clear archives after 24 hours), and that your site should be fast, run on an efficient server, and not expose any even slightly expensive public endpoints. But in the end that is all victim blaming.
One of the nice parts of the web for me is that as long as I have a public IP address, I can use any dinky cheapo server I have and run my own infra on it. I don't need to rely on big players to do this for me. Sure, there are griefers and trolls out there, but generally they don't bother you. No one was ever interested in my little server, and search engines played fair (and to my knowledge still do) while still allowing my site to be discoverable.
Dealing with these bots is the first time my server has been consistently attacked. I can handle them for now, but it's one more thing to deal with, and suddenly the idea of easy self-hosting on low-powered hardware is no longer so feasible. That makes me sad. I know what I should do about it, but I wish I didn't have to.