
I've seen concurrency in excess of 500 from Meta's crawlers to a single site. That site had just moved all their images, so all the requests hit the "pretty URL" rewrite and fell through to a slow dynamic request handler. It did not go very well.

I was hoping they would extend it for a bit longer, not because I want to be running old versions but because 3.0/3.1 have some massive performance regressions that have yet to be fixed.

Have a look at HAProxy's latest release announcement for some OpenSSL 3.x commentary.

With some luck they will get a handle on this before 1.1.1 expires.


Direct hit to the heart *cries in BGP and big enterprise switches*


Norway. Most of the FTTH plans seem to be symmetrical, anything from 100 Mbps to 1 Gbps. Some GPON-based networks hold back upstream at 500 Mbps max, but they are fairly rare; most networks are active Ethernet anyway, not GPON.

FTTH is quickly taking over, with over 70% coverage in homes passed. Pretty sure I saw that less dense (rural?) areas had hit 60% last year, according to some government report.

DSL is practically dead and "Fiber to the Cabinet" never really happened here. Coax is shrinking. Those still on these technologies are of course getting asymmetrical down/up speeds. Some "Fiber to the Building" exists, but mostly with copper Ethernet to the housing unit, so it's practically full FTTH.

Things that could be better:

- Pricing: it's not exactly great in most places, although smart HoAs can usually get decent pricing if they actually try.

- Open access: the networks are very rarely open access, and local monopolies are rife.

I've been on 1Gbps/1Gbps open access FTTH since ~2014, coming from 300/20 cable.


If nginx decided to support kTLS, they could use sendfile for encrypted traffic as well. Unsure if it is worth it just to make sendfile work, however.
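
For the curious, here is roughly what the kTLS plumbing looks like, following the kernel's TLS documentation. This is only a sketch, not nginx code: the helper name and the way the key material arrives are made up for illustration; in a real server the keys, IV and record sequence number come from the TLS library after the handshake.

    /* Sketch: enable kernel TLS (kTLS) on an already-handshaked TCP socket
     * so that sendfile() can be used for encrypted traffic. Illustrative only;
     * key material would come from the TLS library in a real server. */
    #include <linux/tls.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/sendfile.h>
    #include <sys/socket.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282   /* socket level for kernel TLS */
    #endif
    #ifndef TCP_ULP
    #define TCP_ULP 31    /* attach an upper layer protocol to a TCP socket */
    #endif

    static int enable_ktls_tx(int sock,
                              const unsigned char *key, const unsigned char *iv,
                              const unsigned char *salt, const unsigned char *rec_seq)
    {
        struct tls12_crypto_info_aes_gcm_128 ci;

        /* Attach the "tls" ULP so the socket produces TLS records. */
        if (setsockopt(sock, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;

        memset(&ci, 0, sizeof(ci));
        ci.info.version = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(ci.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        /* Hand the negotiated session keys to the kernel for the transmit path. */
        return setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci));
    }

    /* After this, sendfile(sock, file_fd, &offset, count) pushes file data
     * through the kernel, which encrypts it into TLS records on the way out,
     * so the plaintext never has to be copied into userspace. */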


I was going to mention kernel TLS hopefully enabling sendfile for mostly-HTTPS workloads, as that’s the direction everything is heading anyway, and without it we don’t get zero-copy for those connections.

Now I’m more curious about the actual threshold where not having sendfile begins causing noticeable performance problems… at what point before you become Netflix?


If your cache can face-tank an HTTP DDoS, you don't need fragile fingerprinting techniques to distinguish bad from good, which reduces user impact (fewer accidentally blocked users). The lower the cost of filling that 100 Gbit NIC with TLS cache traffic, the more boxes you can afford. Internet exchanges are surprisingly cheap to connect to.

Of course, sharing resources between a couple of services would be good, as NICs and switch ports are still a long way from free.


What about HTTP/2?


I have no idea about the economics in this area, but it kind of baffles me that they add these proprietary, closed-source and buggy "accelerators" instead of improving the cores a bit. A bit more L1 cache would go a long way for networking.

Many switched from MIPS to ARM in the past 10? years, but the cores remain mostly just as anaemic as they were.


The problem I have with THP is that while it initially looks great on our workload (yay! a core saved per server!), it often starts to degrade badly after several days or even many weeks, depending on memory fragmentation and pressure.

It keeps getting better, maybe one day..


It was declared ready for "experimentation" about a year ago, so it's not very mature. If you are on current upstream versions it is not too bad; I'm using it here and there on not-very-important things.

It is a bit hit and miss with kernel/nftables versions on release distributions. I probably would not use it with any kernel older than 4.10, for example, so any current LTS kernel is out.


That example is made needlessly complicated by being compressed down to a one-liner to show off maps.

It does look nicer when properly formatted as part of a rule file, however.

The docs need some work.


Yes, I see that now and yes those docs definitely need work.


A warning if you want to try out BBR yourself:

Because BBR relies on pacing in the network stack, make sure you do not combine BBR with any qdisc ("packet scheduler") other than fq. You will get very bad performance, lots of retransmits and in general not very neighbourly behaviour if you use it with any of the other schedulers.

This requirement is going away in Linux 4.13, but until then blindly selecting BBR can be quite damaging.

Easiest way to ensure fq is used: set the net.core.default_qdisc sysctl parameter to "fq" using /etc/sysctl.d/ or /etc/sysctl.conf, then reboot. Check by running "tc qdisc show"

Source: bottom note of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


A related warning: your NIC driver can break the qdisc layer (or at least mislead it).

Prior to 4.12 the virtio-net driver always orphans skbs when they're transmitted (see start_xmit()). This causes the qdisc layer to believe packets are leaving the NIC immediately, until you've completely filled your Tx queue (at which point you will be paced at line rate, but with a queue-depth delay between the kernel's view of when the packet hit the wire and when it actually did).

After looking at the code -- even in 4.12 enabling Tx NAPI still seems to be a module parameter.

(I'm not sure which other drivers might have the same issue -- my day job is limited to a handful of devices, and mostly on the device side rather than the driver side)


That is good to know. I just deployed BBR on some pilot virtio-backed VMs yesterday and I missed this.

As far as I can tell, the Actual Hardware I'm running my other BBR pilots on is doing the right thing.

File under: BBR - still a gotcha or two ;-)


To try it out, make sure that your Linux kernel has:

CONFIG_TCP_CONG_BBR

CONFIG_NET_SCH_FQ (not to be confused with FQ_CODEL)

Put these into /etc/sysctl.conf:

net.core.default_qdisc=fq

net.ipv4.tcp_congestion_control=bbr

Reboot.


I haven't tested this, but you should be able to sysctl -p to reload the config instead of rebooting.


Just loading the sysctl values will not switch the packet scheduler on already existing network interfaces, but it will start using BBR on new sockets.

Switching the scheduler at runtime using tc qdisc replace is possible, but then you need to take extra care depending on whether the device is multi-queue or not. Instead of explaining it all here, just rebooting is probably simpler.
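
For completeness, here is a minimal sketch (not from this thread) of opting a single socket into BBR with the TCP_CONGESTION socket option instead of changing the system-wide default. The fq/pacing caveat above still applies on kernels before 4.13, and unprivileged processes can only select algorithms listed in net.ipv4.tcp_allowed_congestion_control.

    /* Sketch: select BBR for one socket via TCP_CONGESTION rather than
     * flipping net.ipv4.tcp_congestion_control globally. Illustrative only;
     * requires the tcp_bbr module (CONFIG_TCP_CONG_BBR) to be available. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* Ask the kernel to use BBR for this socket only. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, "bbr", strlen("bbr")) < 0)
            perror("setsockopt(TCP_CONGESTION)");

        /* Read the option back to confirm which algorithm is actually in use. */
        char cc[16];
        socklen_t len = sizeof(cc);
        if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, &len) == 0)
            printf("congestion control: %s\n", cc);

        close(fd);
        return 0;
    }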


