Ephemeral ports aren't assigned to inbound connections; they're used for outbound connections. So, for the client-to-nginx connection, both the server IP and port are fixed (the port will be either 80 or 443) - only the client IP and port change, so for a collision all you need is for a client to re-use the same port on its side quickly.
For the nginx to node connection, both IPs and the server port are fixed, leaving only the ephemeral port used by nginx to vary. You don't have to worry about out-of-order packets here though, since the connection is loopback.
Note that only the side of the connection that initiates the close goes into TIME_WAIT - the other side goes into a much shorter LAST_ACK state.
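The 4-tuple point is easy to demonstrate with a quick stdlib-only Python sketch (the addresses here are just a throwaway loopback server, not anything from the setup above): when two clients hit the same server IP and port, the only thing left to distinguish them is the client-side ephemeral port.

```python
import socket

# Throwaway server on loopback; server IP and port are fixed,
# analogous to nginx listening on 80/443.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(2)
host, port = server.getsockname()

# Two client connections to the same endpoint: their 4-tuples
# differ only in the client's ephemeral port.
c1 = socket.create_connection((host, port))
c2 = socket.create_connection((host, port))
print(c1.getsockname())  # same IP both times
print(c2.getsockname())  # different ephemeral port
assert c1.getsockname()[1] != c2.getsockname()[1]

for s in (c1, c2, server):
    s.close()
```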
Sockets from client to nginx are uniquely identified by the client IP and the client port. Does nginx create a new socket to node.js on each client request?
Can there be more than one node.js instance running? Is that the main goal of nginx here, or are there additional benefits?
> Edit, ok: "nginx is used for almost everything: gzip encoding, static file serving, HTTP caching, SSL handling, load balancing and spoon feeding clients" http://blog.argteam.com/coding/hardening-node-js-for-product...
Excellent article on the subject: http://www.speedguide.net/articles/linux-tweaking-121
net.core.rmem_max / net.core.wmem_max
net.ipv4.tcp_rmem / net.ipv4.tcp_wmem
net.ipv4.tcp_no_metrics_save / net.ipv4.tcp_moderate_rcvbuf
# Retry SYN/ACK only three times, instead of five
net.ipv4.tcp_synack_retries = 3
# Try to close things only twice
net.ipv4.tcp_orphan_retries = 2
# FIN-WAIT-2 for only 5 seconds
net.ipv4.tcp_fin_timeout = 5
# Increase syn socket queue size (default: 512)
net.ipv4.tcp_max_syn_backlog = 2048
# One hour keepalive with fewer probes (default: 7200 & 9)
net.ipv4.tcp_keepalive_time = 3600
net.ipv4.tcp_keepalive_probes = 5
# Max packets the input can queue
net.core.netdev_max_backlog = 2500
# Keep fragments for 15 sec (default: 30)
net.ipv4.ipfrag_time = 15
# Use H-TCP congestion control
net.ipv4.tcp_congestion_control = htcp
Be very careful and test it yourself.
Scroll down to 'Networking' and read the notice.
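Before pasting any of these into sysctl.conf, it's worth seeing what your kernel is currently running with. A minimal sketch (Linux only; it reads /proc/sys directly, and the parameter list is just a sample of the ones above):

```python
from pathlib import Path

# A sysctl name maps to a /proc/sys path by replacing '.' with '/'
PARAMS = [
    "net.ipv4.tcp_fin_timeout",
    "net.ipv4.tcp_synack_retries",
    "net.ipv4.tcp_max_syn_backlog",
    "net.ipv4.tcp_congestion_control",
]

for name in PARAMS:
    path = Path("/proc/sys") / name.replace(".", "/")
    try:
        value = path.read_text().strip()
    except OSError:
        value = "<unavailable>"  # not Linux, or param not compiled in
    print(f"{name} = {value}")
```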
Btw, the dropped clients have to do with recycle -- reuse is far 'safer', protocol-wise.
Have you tried using upstream keepalive? http://nginx.org/en/docs/http/ngx_http_upstream_module.html#... This should help keep down the number of connections, and thus ephemeral port and TCP memory loading.
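For reference, upstream keepalive needs both the keepalive directive and HTTP/1.1 on the proxy side - roughly like this (the upstream name and the node.js port are made up for illustration):

```nginx
upstream node_backend {        # hypothetical name
    server 127.0.0.1:3000;     # hypothetical node.js port
    keepalive 32;              # idle connections kept open per worker
}

server {
    location / {
        proxy_pass http://node_backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the default "close"
    }
}
```

Without the last two directives nginx speaks HTTP/1.0 to the upstream and closes the connection after every request, which is exactly what burns through ephemeral ports.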
As for node.js, core only ever holds a connection open for one pass through the event loop, and even then only if there are requests queued. If you have any kind of high-volume TCP client in node, this will also cause issues with ephemeral port exhaustion and thus TCP memory loading. Check out https://github.com/TBEDP/agentkeepalive in that case. On TCP memory load issues in general, this is a helpful paper: http://www.isi.edu/touch/pubs/infocomm99/infocomm99-web/
I would guess it's to allow long connections for ssh or similar without timeouts, but there are other ways to prevent timeouts without it eating all those resources.
I set it to 1800 myself, we'll see how that goes.
I noticed memory use went down after that.
Also note that initcwnd is set to 10 by default on all current OSen.
Sadly we have to wait even longer for initrwnd support (minimum 2.6.38 kernel)
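If you want to inspect or pin these yourself, iproute2 exposes them per route - a sketch (needs root; the gateway and device below are placeholders, and initrwnd only exists on >= 2.6.38 kernels):

```shell
# Show the current default route (initcwnd is only printed
# when it has been set explicitly)
ip route show default

# Example: set initcwnd/initrwnd on the default route
# (substitute your own gateway and interface)
ip route change default via 192.0.2.1 dev eth0 initcwnd 10 initrwnd 10
```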