
I had this problem too! Jellyfin behind a reverse proxy over Wireguard. For intercontinental visitors (high latency), there would be an initial burst of reasonable transfer speed, but within seconds it would slow to an unusable crawl. It took a long time to identify the problem as TCP congestion control.

Try changing Linux's default congestion control (net.ipv4.tcp_congestion_control) on your Jellyfin and reverse proxy servers to 'bbr'. I don't understand the details; there might be negative consequences [1], and there might be better congestion algorithms. But for me, this completely solved the issue. Before, connections would stall out to <10%, sometimes even 1%, of line rate, even in quiet/optimal network conditions.
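
A rough sketch of the change, in case it helps. This assumes a reasonably modern kernel with the tcp_bbr module available; the 99-bbr.conf filename is just my choice:

    # check which algorithms the kernel currently allows
    sysctl net.ipv4.tcp_available_congestion_control

    # fq is commonly paired with bbr (older kernels required it for pacing)
    sysctl -w net.core.default_qdisc=fq
    sysctl -w net.ipv4.tcp_congestion_control=bbr

    # persist across reboots
    echo "net.core.default_qdisc=fq" >> /etc/sysctl.d/99-bbr.conf
    echo "net.ipv4.tcp_congestion_control=bbr" >> /etc/sysctl.d/99-bbr.conf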

Also, Caddy enables HTTP/3 by default. I force it to HTTP/2.
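
From memory, the Caddyfile bit for this is a global option (assuming a recent Caddy v2 that has the servers/protocols option; check the docs for your version):

    {
        servers {
            # advertise only HTTP/1.1 and HTTP/2, so HTTP/3 (QUIC) is never offered
            protocols h1 h2
        }
    }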

I should probably investigate using later versions of bbr, though.

[1] https://news.ycombinator.com/item?id=37408406

My issue ended up being Jellyfin's auto-bandwidth negotiation being sorta shitty. I set my remote client (web browser, over Headscale) to 10 Mbit and TV shows now play very quickly, though they could maybe be a bit faster.


Curious, what was your process for debugging/diagnosing this? How did you reach the conclusion that it was packet congestion?


Wish I could say it was more sophisticated than slow trial and error. I tried changing many different aspects: MTU, forcing different routes/peering through different VPSs, various reverse proxy configurations.

I guess what started leading me down the right path was a more methodical approach to benchmarking the different legs of the route with iperf: client <-> reverse proxy, reverse proxy <-> Jellyfin server. I tested those legs separately, w/ and w/o Wireguard, both TCP and UDP. The results showed that the problem exhibited at the host level (nothing to do with Jellyfin or the reverse proxy), and only for high-latency TCP. The discrepancies between TCP and UDP were weird enough that I started researching Linux sysctl networking tunables.
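
Concretely, the per-leg tests looked something like this (iperf3 syntax; hostnames/addresses are placeholders):

    # on the far end of whichever leg is under test
    iperf3 -s

    # TCP, long enough to see the ramp-up and then the stall
    iperf3 -c proxy.example.com -t 30

    # UDP at a fixed offered rate, for comparison against TCP
    iperf3 -c proxy.example.com -u -b 50M -t 30

    # then the same pair again over the Wireguard interface's inner address
    iperf3 -c 10.0.0.2 -t 30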

There might be something smart to say about the general challenges of achieving stable high throughput over high-latency TCP connections, but I don't have the knowledge to articulate it.
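
One rule of thumb I did run across while researching, hedging since I may be mangling it: for Reno-style loss-based congestion control, the Mathis formula estimates steady-state throughput as roughly

    throughput ≈ (MSS / RTT) * (C / sqrt(p))

where C is a constant around 1.22 and p is the packet loss rate. At any fixed loss rate, doubling the RTT halves the achievable rate, which would explain why only the high-latency paths fell apart. bbr sidesteps this by estimating bottleneck bandwidth and RTT directly instead of treating every loss as congestion.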



