TCP connections are basically identified by an (ip, port) tuple.
Also, you can set the file descriptor limits to whatever you want.
i.e. have apache listen on 127.0.0.* and set up ifcfg-lo-range with
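The fd-limit point above can be shown from code as well as from the shell. A minimal sketch using Python's `resource` module (my illustration, not from the article): the soft limit can be moved anywhere up to the hard limit without privileges, while raising the hard limit needs root or a `limits.conf` change.

```python
import resource

# Inspect the per-process open-file limit (RLIMIT_NOFILE).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# An unprivileged process may set its soft limit anywhere up to the
# hard limit; raising the hard limit requires root (or limits.conf).
new_soft = min(soft, 256) if soft != resource.RLIM_INFINITY else 256
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
assert resource.getrlimit(resource.RLIMIT_NOFILE)[0] == new_soft

# Restore the original soft limit so nothing else is affected.
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```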
I believe the full identifier is a quad: (source ip, source port, dest ip, dest port).
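The quad is easy to demonstrate with a self-contained loopback sketch (my example, not from the article): two connections to the same server endpoint coexist because their source ports differ, so each full (source ip, source port, dest ip, dest port) quad stays unique.

```python
import socket

# A throwaway local server; port 0 lets the kernel pick a free port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(2)
dst = server.getsockname()

# Two clients to the SAME (dest ip, dest port)...
c1 = socket.create_connection(dst)
c2 = socket.create_connection(dst)

# ...share their destination but get distinct source ports,
# so the 4-tuples differ and both connections can coexist.
assert c1.getpeername() == c2.getpeername()
assert c1.getsockname() != c2.getsockname()

for s in (c1, c2, server):
    s.close()
```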
;; ANSWER SECTION:
yahoo.com. 92 IN A 184.108.40.206
yahoo.com. 92 IN A 220.127.116.11
yahoo.com. 92 IN A 18.104.22.168
yahoo.com. 92 IN A 22.214.171.124
yahoo.com. 92 IN A 126.96.36.199
yahoo.com. 92 IN A 188.8.131.52
Isn't it limited by the amount of memory?
You can add NICs or virtual IPs and bind your client instances to specific IP addresses instead of INADDR_ANY.
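A hedged sketch of the bind-before-connect idea above, using loopback so it runs anywhere (in practice you would bind to a secondary NIC or virtual IP): explicitly binding the client socket to a chosen local address, instead of letting the kernel pick via INADDR_ANY, gives each additional local IP its own pool of ephemeral source ports against the same destination.

```python
import socket

# A throwaway local server to connect to.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket()
# Bind the client to a specific local IP before connect();
# port 0 still lets the kernel choose an ephemeral source port.
client.bind(("127.0.0.1", 0))
client.connect(server.getsockname())

# The connection's source address is the one we chose, not INADDR_ANY.
assert client.getsockname()[0] == "127.0.0.1"

client.close()
server.close()
```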
I really appreciate the walkthrough of the Apache Bench (ab) results and the learning process, even though it didn't get them to their objective. I've been thinking about using ab myself, and these are great things to know.
I took down a few notes while reading the article:
- mentions use of the apache bench (ab) command for load testing
- mentions use of ganglia tool
- mentions configuring HAProxy for multi-core using nbproc setting
- mentions the 'parallel' tool for running commands in parallel
- simulate long-running requests by adding a small delay on the server side rather than the client (a workaround for ab's deficiencies)
- have the server also send back responses of varying length to simulate varying load
- pdsh tool for running parallel remote shell (ssh) sessions
- vegeta tool, which got them to their scalability / tipping-point objective
- nodejs (used for their backends) had a default request timeout of 2 mins
- used dmesg to learn that haproxy was running out of memory (at around 1.2 million conns)
- pdsh to run vegeta tool on multiple machines (acting as clients) - script included in article
- mentions the haproxy maxconn setting, verified by checking the /proc limits for the haproxy pid
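The server-side-delay and variable-response-length tricks in the notes above can be sketched in a few lines. This is my own Python stand-in (the article's backends were node), and the `delay` and `size` query parameters are illustrative names, not the article's actual API: the server sleeps and pads the body, so even a simple client like ab exercises long-running requests of different weights.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Crude query parsing for e.g. /?delay=0.2&size=1024
        query = self.path.partition("?")[2]
        params = dict(p.split("=") for p in query.split("&") if "=" in p)
        time.sleep(float(params.get("delay", 0)))      # server-side delay
        body = b"x" * int(params.get("size", 16))      # variable-length body
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

start = time.monotonic()
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/?delay=0.2&size=1024").read()
elapsed = time.monotonic() - start

assert len(body) == 1024   # response length controlled by the client
assert elapsed >= 0.2      # request held open by the server
server.shutdown()
```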
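The last note, verifying a process's effective fd limit via /proc, can be scripted. A sketch that inspects its own pid (`/proc/self`) so it is self-contained; substitute the haproxy pid in practice, and note the non-Linux fallback to `resource.getrlimit`:

```python
import resource
from pathlib import Path

def max_open_files(pid="self"):
    """Return the soft 'Max open files' limit for a pid via /proc."""
    limits = Path(f"/proc/{pid}/limits")
    if not limits.exists():  # non-Linux fallback: ask for our own limit
        return resource.getrlimit(resource.RLIMIT_NOFILE)[0]
    for line in limits.read_text().splitlines():
        if line.startswith("Max open files"):
            soft = line.split()[3]  # soft-limit column
            return None if soft == "unlimited" else int(soft)
    return None

print(max_open_files())
```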