ultrahax's comments

Tangentially, I've faced some interesting challenges getting a multi-gigabit Wireguard VPN operating through my 2Gb Frontier connection.

My UDM Pro seems to top out around ~800 Mbit per UDP stream, pegged at 100% CPU on a single core. It likely can't keep up with the interrupt rate, given it's ksoftirqd doing the pegging. I replaced the UDM Pro with a pfSense machine.

Then I started getting 100% packet loss at the edge of Frontier's network after a couple of minutes of sustained near-line-rate UDP throughput. In the end, after trying and failing to explain this to Frontier's tech support, I reached out to their engineering management on LinkedIn, and got put in touch with the local NOC director. Turns out some intermediate hop is rebooting after a few minutes, and they're "in contact with the manufacturer". Haven't heard back in a few months.

tl;dr: as >1Gb connections become more common, other bottlenecks will become apparent!


> I reached out to their engineering management on LinkedIn, and got put in touch with the local NOC director.

I hate that this is a thing. I'm dealing with a similar potential issue on Charter Spectrum right now. Specifically it's an issue that's called out here https://blog.cloudflare.com/ip-fragmentation-is-broken/ (failing the IPv4 fragmentation test http://icmpcheck.popcount.org/ ).

How on earth is one supposed to get past the front-line tech support in 2023?


You're not supposed to.


You could look for a better ISP. The larger problem is that in the US it's completely normal for there to be no actual choice, or for your "choice" to be between two equally huge uninterested corporations who know they don't need to be better than each other to keep the same revenue.

Separating the last mile infrastructure from the ISP can make it possible to have a natural monopoly for everybody's last miles, but widespread competition for ISPs. That might be really hard to pull off in the US, but I think it'd be worth striving for.


> Separating the last mile infrastructure from the ISP can make it possible to have a natural monopoly for everybody's last miles, but widespread competition for ISPs. That might be really hard to pull off in the US, but I think it'd be worth striving for.

Or even better, the model we have in France. The last mile is a monopoly for a limited time only (2-3 years). So if you build a connection to some place that didn't have one, you can profit off the exclusivity for a while, and you're incentivised to be good to consumers because they can switch, but they'll probably only do so if you're shit/too expensive.


It'd require sensible regulation, so the Republicans simply won't stand for it, and it's not one of the Democrats' main issues, so they couldn't be bothered.


The problem is how satisfied people are when they get to just blame the other side and not bother with any further thought. As long as people like you reward that mentality then it will never be fixed.


Yes, it must be me to blame, not the bad faith actors in office. Would you like to collect the two cent payment I offer for exposure to my wrongthink now or later?


Through LinkedIn evidently


Smells like Sandvine traffic shaper falling over or something.


Citibank did this to me. Venmo'd my architect for some work he was doing on my house, kablammo, account closed, no notice. Was just lucky it was an account set up specifically for work on that house and not my that's-where-my-paycheck-goes account.


Hi Rob! Miss the DW days. Hope you’re well.


I've had many an adventure with secure gateways on various Xenon Call of Duty games. Nice to meet the person behind them!


“The Farthest” by PBS has interviews with the people who built the probes; it’s a pretty great watch: https://www.pbs.org/the-farthest/


“Physics package” is usually a phrase meaning the explodey bits of a nuclear weapon, so that lines up.


I have to be careful what I say here, given I work for ATVI; I'm former Demonware myself.

I do feel safe in saying though that the login queues are _definitely_ not hype.

They're designed to constrain a quite-complex distributed system to a login rate that has been load-tested thoroughly, i.e. they know it'll work at that rate.
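
I can sketch the general shape, though: it's basically a rate-limited gate in front of the real login flow. A toy Python sketch (nothing to do with the actual service; the names and numbers are made up):

    import time
    from collections import deque

    class LoginQueue:
        """Toy admission gate: release waiting players at a fixed, load-tested rate."""

        def __init__(self, admits_per_second):
            self.rate = admits_per_second
            self.waiting = deque()              # player ids, FIFO
            self._last_tick = time.monotonic()
            self._budget = 0.0                  # fractional admits accumulated

        def enqueue(self, player_id):
            self.waiting.append(player_id)
            return len(self.waiting)            # queue position shown to the player

        def tick(self):
            """Call periodically; returns the players admitted this tick."""
            now = time.monotonic()
            # Accumulate admission budget, capped so an idle period can't cause a burst.
            self._budget = min(self._budget + (now - self._last_tick) * self.rate,
                               max(self.rate, 1.0))
            self._last_tick = now
            admitted = []
            while self._budget >= 1.0 and self.waiting:
                admitted.append(self.waiting.popleft())
                self._budget -= 1.0
            return admitted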


They handle online services for pretty much any Activision-published game, with carve-outs for Blizzard and King - they still do most of their backend stuff themselves, with some collaborations.


I’m not sure your information wrt Call Of Duty is correct.

To my knowledge, the client timestamps their inputs and sends them to the server; the server will then rewind the state of the world to the time of the input before applying it. RTT isn’t an input. Each snapshot from the server includes the server world timestamp of that snapshot; the client will gently lerp its clock to match this per frame.
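
Very roughly, the shape is something like this (a toy Python sketch of the idea, not actual engine code; all names and constants are made up):

    import bisect

    CLOCK_LERP_RATE = 0.05   # fraction of the clock error corrected per snapshot (made up)

    class LagCompensator:
        """Toy server-side rewind: keep a short history of world snapshots and
        resolve each input against the snapshot closest to the client's timestamp."""

        def __init__(self, history_len=64):
            self.times = []        # snapshot timestamps, ascending
            self.snapshots = []    # e.g. {player_id: (x, y)} per snapshot
            self.history_len = history_len

        def record(self, t, world_state):
            self.times.append(t)
            self.snapshots.append(world_state)
            if len(self.times) > self.history_len:
                self.times.pop(0)
                self.snapshots.pop(0)

        def world_at(self, t):
            """Return the recorded snapshot closest to timestamp t."""
            if not self.times:
                return None
            i = bisect.bisect_left(self.times, t)
            if i == 0:
                return self.snapshots[0]
            if i == len(self.times):
                return self.snapshots[-1]
            nearer_right = (self.times[i] - t) < (t - self.times[i - 1])
            return self.snapshots[i] if nearer_right else self.snapshots[i - 1]

    def lerp_client_clock(estimated_server_time, snapshot_server_time):
        """Client side: nudge the local estimate of server time toward the
        timestamp carried in each snapshot, rather than snapping to it."""
        error = snapshot_server_time - estimated_server_time
        return estimated_server_time + error * CLOCK_LERP_RATE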

Source: I’ve been a COD engine developer for the last ~15 years or so.


Oh, I remember you from T5 debug messages. You are most certainly more knowledgeable about this topic than me.

My info might be outdated, but I've noticed that on routes with asymmetric latency there seems to be a large bias that comes from assuming upstream latency == downstream latency. It might just be the clock not getting adjusted (even most NTP implementations make this assumption), but it's also been since ~T7 that I last checked. Conditioning the network to add ~40ms to downstream latency could actually reproduce this behavior.
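
To illustrate the symmetric-latency point with toy numbers (this is just the textbook NTP-style offset estimate, nothing COD-specific):

    # The classic clock-offset estimate assumes the one-way delay is RTT/2 each way.
    def ntp_offset(t0, t1, t2, t3):
        # t0: client send, t1: server receive, t2: server send, t3: client receive
        return ((t1 - t0) + (t2 - t3)) / 2.0

    # Perfectly synced clocks, but 10 ms upstream vs 50 ms downstream:
    up, down = 0.010, 0.050
    t0 = 100.000
    t1 = t0 + up        # server receives
    t2 = t1             # server replies immediately
    t3 = t2 + down      # client receives

    print(ntp_offset(t0, t1, t2, t3))   # ~ -0.02: a 20 ms phantom offset from a 40 ms asymmetry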

People don't really realize how hard of a problem sub-10ms clock sync can be on cursed networks.


When I worked at IBM circa 2006, you'd get a written warning called a "clean desk violation" if you left your workstation unlocked.

I wrote a little daemon that'd l2ping my Nokia brick phone; if it didn't get a response for 30 seconds it'd invoke xscreensaver. Saved me a lot of paperwork.
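
It was basically just this, reconstructed from memory (the original is long gone; the address is obviously made up):

    #!/usr/bin/env python3
    # Pings the phone over Bluetooth L2CAP; locks the screen when it wanders off.
    # Needs root for l2ping (BlueZ).
    import subprocess, time

    PHONE_BDADDR = "00:11:22:33:44:55"   # hypothetical address of the Nokia
    GRACE_SECONDS = 30

    last_seen = time.time()
    while True:
        alive = subprocess.call(
            ["l2ping", "-c", "1", "-t", "2", PHONE_BDADDR],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
        if alive:
            last_seen = time.time()
        elif time.time() - last_seen > GRACE_SECONDS:
            subprocess.call(["xscreensaver-command", "-lock"])
        time.sleep(5)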

I currently work at a Call of Duty studio. My favorite hacks (not super high tech, but the ones that had the most impact for the least code, and the ones I feel I can talk about...):

* Put together a little box that polls various knobs on a USB MIDI device to mangle the traffic going across its two interfaces. Allows for real-time latency / jitter / packet loss testing: https://twitter.com/ultrahax/status/1200902654882242562

* The studio LAN game browser didn't work across subnet boundaries (the studio is a couple of class B's). Wrote a little daemon that'd take game discovery packets from one subnet, mangle the src addr, and send them on their merry way (rough sketch below). Everyone can see everyone's games, happy producers.
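
The second one, give or take, looked like this (a scapy-flavoured reconstruction rather than the original code; the port and addresses are made up):

    #!/usr/bin/env python3
    # Listens for game-discovery broadcasts on one subnet and replays them onto
    # another, keeping the original sender as the source so replies go back to it.
    # Needs root (raw sockets).
    from scapy.all import IP, UDP, Raw, send, sniff

    DISCOVERY_PORT = 3074           # assumption: whatever port discovery actually uses
    IFACE_A, IFACE_B = "eth0", "eth1"
    BCAST_B = "10.1.255.255"        # directed broadcast for the other subnet (made up)

    def relay(pkt):
        payload = pkt[Raw].load if Raw in pkt else b""
        # Re-emit the packet on the other subnet, spoofing the original source
        # address so game clients reply directly to the real host.
        send(IP(src=pkt[IP].src, dst=BCAST_B) /
             UDP(sport=pkt[UDP].sport, dport=DISCOVERY_PORT) /
             Raw(load=payload),
             iface=IFACE_B, verbose=False)

    sniff(iface=IFACE_A, store=False, prn=relay,
          filter=f"udp dst port {DISCOVERY_PORT}")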


These made me unreasonably happy.

