1. All is disclosed on the page. Varnish is indeed mentioned on the page as the crucial piece that made it all happen. But mea culpa, the title "Apache vs Litespeed" does need the Varnish bit added; I'll correct it shortly.
2. You're assuming that Varnish/VCL caches the first server-side result indefinitely, i.e., with an unbounded TTL. That is not the case, so Apache is still doing work behind the scenes, albeit less work than it's used to. Whatever hosting conditions we imposed on Varnish/Apache were also imposed on LSWS. It's fair game (for better or worse). (A rough sketch of the Varnish setup is included further down.)
3. Actually, Litespeed is able to handle more than 100 concurrent connections. What choked it, apparently, is its inability to manage the 100 concurrent PHP requests well; it actually ran out of memory during the experiments. Again, Litespeed was given the same amount of memory and ran under the same conditions as Apache/Varnish. While the environment/conditions are not optimal, both Varnish/httpd and Litespeed started equal. At first we ran with -c 10 and -n 10000, but both setups performed fairly well, so we upped it to -c 100 (the invocation is sketched further down). Check out the load average from the Litespeed node below.
4. One of the goals of the experiment was to install stock Litespeed and stock Apache (RPM). No configuration changes were made to either. Litespeed has static object caching (20MB) enabled by default; varnishd was given a 16MB cache (again, see the sketch further down). Stock installs are common in the Web hosting business, which is the scope/interest here.
5. Based on experience with client portals, Varnish seems to perform better as a dedicated cache than Nginx used purely as a cache. It could just be perception, though. We use Nginx and/or Varnish where each makes sense. They're both free and open, so why not!
6. Couldn't agree more. Just for the record, we didn't select the hardware and environment to favor either web server. We had a vacant server, network, and resources and went from there. The fact that this environment did not produce good results for Litespeed is purely coincidental.
FYI, Litespeed Tech noticed the benchmark and mentioned that they're working on a cache module that will ship with the Litespeed web server. We'll see.
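To make points 2 and 4 more concrete, here is a rough sketch of the kind of varnishd invocation and sanity check involved. It is illustrative only: the listen/backend addresses, the hostname, and the 120-second TTL are assumptions (120s is varnishd's stock default_ttl), not the exact command line used in the benchmark.

# Small in-memory cache with a finite default TTL
#   -s malloc,16M -> 16MB of object storage (the figure from point 4)
#   -t 120        -> cached objects expire unless the backend overrides it,
#                    so Apache still sees refresh traffic (point 2)
varnishd -a 0.0.0.0:80 -b 127.0.0.1:8080 -s malloc,16M -t 120

# Quick check that nothing is cached forever: the Age header grows toward
# the TTL, then drops back to a small value after a backend refetch.
curl -sI http://varnish-node.example/ | grep -i '^Age'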
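For point 3, the -c/-n flags quoted above are ApacheBench-style options. Assuming ab was the client, and assuming -n stayed at 10000 for the heavier run (neither is spelled out here), the two load levels would look roughly like this, with the URL as a placeholder:

# First pass: light load; both stacks coped fine
ab -c 10 -n 10000 http://target-node.example/index.php

# Second pass: 100 concurrent clients; this is where the Litespeed node
# ran out of memory servicing PHP (see the load-average capture below)
ab -c 100 -n 10000 http://target-node.example/index.php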
[root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
19:45:37 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
19:45:40 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
19:45:43 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
19:45:46 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
19:45:49 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
19:45:52 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
19:45:55 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
19:45:58 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
19:46:01 up 0 min, 0 users, load average: 0.00, 0.00, 0.00
19:46:05 up 0 min, 0 users, load average: 2.88, 0.60, 0.19
19:46:09 up 0 min, 0 users, load average: 5.45, 1.17, 0.38
19:46:12 up 0 min, 0 users, load average: 7.82, 1.73, 0.57
19:46:15 up 0 min, 0 users, load average: 7.82, 1.73, 0.57
19:46:18 up 1 min, 0 users, load average: 10.08, 2.30, 0.76
19:46:21 up 1 min, 0 users, load average: 10.08, 2.30, 0.76
Segmentation fault
Segmentation fault
Segmentation fault
-bash: /usr/bin/uptime: Cannot allocate memory
-bash: fork: Cannot allocate memory
[root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
-bash: fork: Cannot allocate memory
[root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
-bash: fork: Cannot allocate memory
[root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
-bash: fork: Cannot allocate memory
[root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
-bash: fork: Cannot allocate memory
[root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
-bash: fork: Cannot allocate memory
[root@lsws_node /]# while [ true ]; do uptime; sleep 3; done
19:46:40 up 1 min, 0 users, load average: 18.36, 4.66, 1.56
19:46:43 up 1 min, 0 users, load average: 17.53, 4.72, 1.60
19:46:46 up 1 min, 0 users, load average: 17.53, 4.72, 1.60
19:46:49 up 1 min, 0 users, load average: 16.13, 4.64, 1.59
19:46:52 up 1 min, 0 users, load average: 14.84, 4.56, 1.58
19:46:55 up 1 min, 0 users, load average: 14.84, 4.56, 1.58
19:46:58 up 1 min, 0 users, load average: 13.65, 4.49, 1.57
19:47:01 up 1 min, 0 users, load average: 13.65, 4.49, 1.57
19:47:04 up 1 min, 0 users, load average: 12.56, 4.41, 1.56
19:47:07 up 1 min, 0 users, load average: 11.55, 4.34, 1.55
19:47:10 up 1 min, 0 users, load average: 11.55, 4.34, 1.55
19:47:13 up 1 min, 0 users, load average: 10.62, 4.27, 1.55
19:47:16 up 1 min, 0 users, load average: 10.62, 4.27, 1.55
19:47:19 up 2 min, 0 users, load average: 9.77, 4.20, 1.54
19:47:22 up 2 min, 0 users, load average: 8.99, 4.13, 1.53
19:47:25 up 2 min, 0 users, load average: 8.99, 4.13, 1.53
19:47:28 up 2 min, 0 users, load average: 8.27, 4.06, 1.52
19:47:31 up 2 min, 0 users, load average: 8.27, 4.06, 1.52
19:47:34 up 2 min, 0 users, load average: 7.61, 3.99, 1.51
19:47:37 up 2 min, 0 users, load average: 7.00, 3.92, 1.50
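For anyone re-running this, a slightly extended loop that also samples free memory makes it easier to pin down the moment the node tips into the allocation failures above. A minimal sketch along the same lines (the 3-second interval matches the loop used here):

while true; do
    uptime
    # second line of free -m is the Mem: row; print totals in MB
    free -m | awk 'NR==2 {print "mem(MB) total=" $2 " used=" $3 " free=" $4}'
    sleep 3
done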