Why are you doing a separate request for every /8? I feel like this would be the first thing that would kill the site, if it weren't for the fact that you're on plain HTTP: the browser talks HTTP/1.1, so it only opens 6 concurrent connections per domain.
If you want to monitor how well the server holds up to the hug of death, check out its http://ipv4.games/statusz page. For example, you could poll that and calculate (or chart) how many messages per second it's handling.
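A minimal polling sketch of that idea. The statusz format isn't documented here, so the `messages` counter name and the "counter: value" layout are my assumptions; adjust the regex to whatever the page actually serves:

```python
import re
import time
import urllib.request

STATUSZ_URL = "http://ipv4.games/statusz"  # status page mentioned above


def read_counter(text: str, field: str = "messages") -> int:
    """Extract a hypothetical 'messages: N' counter from the statusz text."""
    m = re.search(rf"{field}\s*[:=]\s*(\d+)", text)
    if not m:
        raise ValueError(f"no '{field}' counter found")
    return int(m.group(1))


def rate(prev: int, curr: int, interval: float) -> float:
    """Messages per second between two successive polls."""
    return (curr - prev) / interval


def poll(interval: float = 10.0) -> None:
    """Poll statusz forever and print the message rate."""
    prev = None
    while True:
        with urllib.request.urlopen(STATUSZ_URL) as resp:
            curr = read_counter(resp.read().decode())
        if prev is not None:
            print(f"{rate(prev, curr, interval):.1f} msg/s")
        prev = curr
        time.sleep(interval)
```

Charting is then just a matter of logging the printed rates somewhere and plotting them.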
I wanted to see if the server could handle being slammed with requests like that :) but yeah collapsing it into one request would be a lot better. I’ll probably do that soon
In case anyone's wondering, I wrote the web server clayloam is using. It's called redbean. It started off as just my own hobby project last year, here on Hacker News, but it grew into something more and lots of people loved it enough to help. So it's very exciting to see it being used in production to handle hundreds of http messages per second. https://redbean.dev
I have been wasting so much time claiming up to 11 networks over the past week. All for you lot to steal all but 2 of them from me in as many hours. Give me a break. :)
I am a network engineer. Mainly I have been triggering HTTP requests from routers I control.
In Cisco IOS you can configure the source IP address for HTTP requests using the 'ip http client source-interface' configuration-mode command. Then, from exec mode, you can:
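A sketch from memory of what that pair of commands looks like; the interface name and the claim path are placeholders, not the site's actual URL:

```
! configuration mode: pin the source interface for HTTP client traffic
conf t
 ip http client source-interface Loopback0
end

! exec mode: trigger an HTTP GET from that source address,
! discarding the response body
copy http://ipv4.games/CLAIM-PATH null:
```

Loop over your loopbacks (or whatever interfaces hold the addresses you want to claim) and repeat.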
I had an almost identical idea to this website a while ago but never acted on it, props to the dev.
Here is how you win the IPv4 games, in order of most to least effective:
1) Have a large online following that is willing to visit your claim link or a page where you can embed an iframe / img / etc that points to your claim link.
2) Rent someone else's (consensual) botnet through a residential proxy service. This is the approach I just used, and it cost me a few dollars for access to a massive amount of distributed IPv4 space.
3) Abuse cloud / serverless offerings as far as they will go, unlikely to win more than a few blocks this way.
4) Own IPv4 space.
Other, less ethical approaches: possibly exploit the system by sending an XFF header the developer forgot to block (the server is probably just checking the socket address, so that's unlikely to work here), spin up a Vultr VPS in the same DC and probe for a way to connect from a local address, hijack BGP space, run your own botnet... I'm reminded of an old exploit in WordPress XMLRPC.
From what I can see the current rankings are just me and mike fighting for the same proxy space (the vote goes to the most recent visit per IP), and everyone else falls into buckets 3 & 4.
Basically I did a 1&2 combo: I run a small anti-bot service for a few friends' sites and started redirecting a particularly aggressive scraper to the claim URL.
I took approach #3 for 5 blocks. Surprisingly, that's good enough to get on the leaderboard, at least till someone keeps a simple script running longer than me.
I do wonder what an IPv6 version of this would look like: how it'd work, and how active it'd be.
I am option 4 but it's never going to get me very far up the leaderboard. So I just grabbed one of the funny numbers in one of the /8s and called it a day.
Not using HTTPS opens up a bunch of new possibilities of how to cheat...
Can you send an HTTP request spoofing the source IP address? I bet you could with enough attempts, because you only have to successfully guess the server's TCP initial sequence number once...
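Back-of-envelope on what "enough attempts" means, under my assumptions that the server's initial sequence number is a uniformly random 32-bit value and that any blind guess landing inside the receive window completes the handshake:

```python
# The server's initial sequence number (ISN) is a 32-bit value.
ISN_SPACE = 2 ** 32


def expected_attempts(window: int = 1) -> float:
    """Expected number of blind guesses before one lands inside the
    server's acceptable window (window = receive-window size, in
    sequence-number space). Geometric distribution: 1/p = 2^32/window."""
    return ISN_SPACE / window


# With zero window tolerance you expect ~4.3 billion spoofed packets;
# a 64 KiB receive window cuts that to ~65,536.
```

Still a lot of packets to push blind, and any ISN randomization or ingress filtering on the path makes it worse.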
I managed to claim 64 out of 256 blocks using proxies from Bright Data[0] and PacketStream[1]. I claimed 49616 IP addresses within those 64 blocks. Unfortunately, the website doesn't tell you how many IP addresses someone claimed in total. Cool project!
Had some fun with this. I used fireprox[0] to grab a ton of AWS IPs, and some proxy vendors for some other random ranges. Sadly my ASN has only /24s in disparate ranges so it wouldn’t make a dent for most of them.
In this thread there is a comment which mentions using AWS API Gateways for scraping. What are other good ways to get many different IPs for scraping, besides residential proxies?
I expect you could do an img tag or iframe, buy cheap ad traffic, and win. Tor is an option, but last time I looked the exit-node count was only in the thousands. You could probably also use any feed-submitter or URL-preview function (Google Docs insert URL, Facebook insert URL, etc.).
- Static IP blocks from their ISP (some still lease IPs for surprisingly cheap).
- Releasing/renewing their NAT box's DHCP lease on carriers that don't pin assignments (these pools are usually around a /22, i.e. 1024 addresses; most will be in use at any given time, so you can't pick one at random, but you should be able to collect a couple dozen).
- Customers of ISPs that use CG-NAT (cheap wired) or NAT64 (some wireless providers), similar to the above just 1 translation layer deeper.
- IP space you control (that's how I have 23.0.0.0/8 for the moment)
- BGP hijacking IP space you want to control (though hopefully in the world of RPKI this is getting harder and harder to do)
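The release/renew approach above can be sketched roughly like this. Assumptions: a Linux host running `dhclient` as root, a carrier that actually rotates leases, and a hypothetical claim endpoint; the `claim_url` helper and its query format are mine, not the site's documented API:

```python
import subprocess
import time
import urllib.request

IFACE = "eth0"  # the interface facing the carrier


def claim_url(name: str) -> str:
    """Build a claim URL; the endpoint shape here is hypothetical."""
    return f"http://ipv4.games/claim?name={name}"


def cycle_lease(iface: str) -> None:
    """Release the current DHCP lease and request a new one. Whether you
    actually get a different address depends entirely on the carrier's pool."""
    subprocess.run(["dhclient", "-r", iface], check=True)  # release
    subprocess.run(["dhclient", iface], check=True)        # renew


def run(name: str, rounds: int) -> None:
    for _ in range(rounds):
        cycle_lease(IFACE)
        time.sleep(5)  # let routing settle before claiming
        with urllib.request.urlopen(claim_url(name)) as resp:
            print("claim returned", resp.status)
```

Each successful round claims whatever address the carrier handed you; the duplicates you draw from the pool just re-stamp your earlier claims.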
The IPv4 Unicast Extensions Project has lofty goals, and many of them, like this particular one, seem too lofty.
Some of the space they are after, like 240/4, was always just "reserved", and you won't find as much resistance against reallocating it, since reserved space is intended for exactly this kind of proposal. They just need to convince folks it's reasonably likely to be the best use of the space, overriding any significant unauthorized usage or alternative proposals as less valid use cases, and it could reasonably be done. Real-world implementation of the change might be a different story, but they could at least get consensus that this is the intended use going forward.
Other spaces, like 127/8, were actually assigned for use in the wild, not just reserved for future use, and despite 127.0.0.1 being the most common address, others in the assigned space beyond the /16 they wish to preserve were definitely in use as well. This is especially true in networking-infrastructure contexts, which are exactly the things that would need to change most universally for the proposal to work. It's unlikely such a breaking standards change would get any consensus; IMO that proposal is likely to never leave draft status.
David Täht, one of the authors of that draft, has even expressed regret over the 127/8 proposal, as it brought the project a lot of negative attention (https://github.com/schoen/unicast-extensions/issues/16), and they have since let the draft expire, which is why your link is to an archived copy.
Source spoofing wouldn't get you far enough into the connection to make the claim and BGP hijacking is prevented on Vultr (you have to file a ROA and update RPKI before they'll accept the advertisement).
Tonight I discovered I could create 128 t2.micros from my AWS account, no questions asked. Very, very worrying. Much happier with Hetzner's initial limit of 25.
This got me wondering: in practice, how hard would it be to spoof a source IP on the internet? I assume it requires some control over a Tier-1 ISP network (so that the spoofed packet would not be filtered upstream)?
Though apparently it doesn't help in this case, because it's HTTP over TCP, which requires a handshake.