
Nxweb – Fast and Lightweight Web Server - dillon
http://nxweb.org/
======
glossyscr
_Why?_

While Nxweb looks very promising, my first question would be 'Why should I use
it over, e.g., Nginx?' It would be helpful to have a direct comparison to other
servers on the landing page.

EDIT: OK, there is a link to some odd benchmarks that include performance
comparisons to Nginx and others, but they're hard to interpret (Nginx 141 req/s
and Nxweb 200 / 121 req/s, with no explanation of when it's 200 and when it's
121); moreover, they compare it to Mongoose, which is an ORM/ODM.

~~~
sanxiyn
Mongoose is a web server. See
[https://github.com/cesanta/mongoose](https://github.com/cesanta/mongoose).
It's obviously not
[https://github.com/Automattic/mongoose](https://github.com/Automattic/mongoose).

~~~
glossyscr
Thanks, this is the first time I've heard of the Mongoose web server.

------
susi22
If CloudFlare can handle many thousands of sites [1] with nginx+lua then I'm
not sure if it's worth it to go the C route.

[1] [https://groups.google.com/d/msg/openresty-en/aoBL22H8fP4/bJ3LrHHfGAAJ](https://groups.google.com/d/msg/openresty-en/aoBL22H8fP4/bJ3LrHHfGAAJ)

------
jedisct1
H2O is also written in C, is also easy to embed, and supports HTTP/2.
[https://h2o.examp1e.net/](https://h2o.examp1e.net/)

------
joosters
They discount using CGI, which is fair enough, but why not use FastCGI? It's a
sensible enough protocol, there are libraries for most languages and there's a
good chance that your existing web server supports it.

Technically, there's no good reason why a FastCGI based system would be
significantly slower than a custom reimplementation like this.

~~~
geocar
> Technically, there's no good reason why a FastCGI based system would be
> significantly slower than a custom reimplementation like this.

An HTTP server speaking to a FastCGI application will:

• read the HTTP message

• decode HTTP

• encode FastCGI

• write to application

The FastCGI application will then:

• read the FastCGI message

• decode the FastCGI message

• do application stuff

• encode the FastCGI response

• write to the web server

The webserver then resumes:

• reading the FastCGI response

• decoding the FastCGI response

• writing the HTTP response

Meanwhile, an in-process HTTP server (like nxweb) system will simply:

• read the HTTP message

• decode HTTP

• do application stuff

• write the HTTP response

Less code simply runs faster; it's obvious to me why the in-process design wins.
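The extra encode/decode steps above are easy to picture in code. Below is a minimal sketch (not nxweb's code, nor any particular library's) of what wrapping one response payload in a FastCGI record involves: an 8-byte header plus an extra copy of the payload, on top of the HTTP framing the front-end server already handled. The header layout and record-type values follow the FastCGI 1.0 spec; the `fcgi_encode` helper name is invented for illustration.

```c
#include <stdint.h>
#include <string.h>

/* FastCGI 1.0 record header: 8 bytes, all fields one byte wide,
 * multi-byte values stored big-endian. */
typedef struct {
    uint8_t version;         /* FCGI_VERSION_1 == 1 */
    uint8_t type;            /* e.g. FCGI_STDOUT == 6 */
    uint8_t request_id_b1;   /* request id, high byte */
    uint8_t request_id_b0;   /* request id, low byte */
    uint8_t content_len_b1;  /* payload length, high byte */
    uint8_t content_len_b0;  /* payload length, low byte */
    uint8_t padding_len;
    uint8_t reserved;
} fcgi_header;

/* Wrap `len` payload bytes in one FastCGI record, writing header plus
 * payload into `out` (which must hold len + 8 bytes). This header
 * prepend and extra copy happen for every chunk crossing the socket,
 * in both directions -- work an in-process handler never does. */
size_t fcgi_encode(uint8_t type, uint16_t request_id,
                   const void *payload, uint16_t len, unsigned char *out)
{
    fcgi_header h = {
        .version        = 1,
        .type           = type,
        .request_id_b1  = (uint8_t)(request_id >> 8),
        .request_id_b0  = (uint8_t)(request_id & 0xff),
        .content_len_b1 = (uint8_t)(len >> 8),
        .content_len_b0 = (uint8_t)(len & 0xff),
        .padding_len    = 0,
        .reserved       = 0,
    };
    memcpy(out, &h, sizeof h);
    memcpy(out + sizeof h, payload, len);
    return sizeof h + len;
}
```

Decoding on the other side mirrors this, and both peers still have to serialize and parse the CGI-style name/value pairs carrying the request headers, which is where most of the duplicated effort goes.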

~~~
iMerNibor
I'd imagine encoding/decoding will be really insignificant compared to
generating the requested page or fetching data from a database in most, if not
all, cases

~~~
geocar
Most websites do not see more than 100 requests per second.

In those cases you are correct: parsing and de-parsing is insignificant
compared to the amount of energy the computer is using to heat the room.

However, in order to serve a trillion requests per day you need around 30
machines with a custom web server, or 300 machines with FastCGI: in this
situation the cost differs by an order of magnitude.
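The arithmetic behind those fleet sizes is easy to check: a trillion requests per day is about 11.6M requests per second, which spread over 30 machines is roughly 386k req/s per box (plausible for a tuned in-process C server), versus roughly 39k req/s per box across 300 FastCGI machines. A quick sanity check (the helper names are just for illustration):

```c
/* Requests per second implied by a given daily volume. */
double req_per_sec(double req_per_day)
{
    return req_per_day / 86400.0;  /* seconds per day */
}

/* Per-machine load when the fleet has `machines` boxes.
 * per_machine(1e12, 30)  ~= 385,802 req/s
 * per_machine(1e12, 300) ~=  38,580 req/s */
double per_machine(double req_per_day, int machines)
{
    return req_per_sec(req_per_day) / machines;
}
```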

~~~
jerf
Many people observe that as miles-per-gallon gets better and better, it begins
to become a deceptive measurement in a way, because going from 10 to 20 mpg is
a much, much larger change than going from 30 to 40, or even from 80 to 140.
It seems people get a better sense of what's going on by measuring gallons per
mile. When you start doing that, it becomes clearer that going from .0001
gallons per mile to .00001 gallons per mile, as large as it may be in orders
of magnitude, still isn't that big a deal. Either way you're looking at your
cost-of-fuel being effectively zero for all practical use cases, because your
costs will be dominated by something else.

Similarly, I've noticed that people tend to get a little silly about web
server requests-per-second. It really gets to the point you probably ought to
be talking about seconds per request, or perhaps rather, microseconds per
request or something.

Because A: as you start talking about these fast servers, you need to
contemplate whether _your_ code can even run in, say, 2.5 microseconds; who
cares whether your webserver takes 2 or 25 microseconds to handle a minimal
request if your minimal response requires 8 milliseconds (i.e. "8000
microseconds")? 8ms would actually be pretty decent performance for a wide
variety of non-trivial web requests.

And B: As the webservers get faster and faster, you really need to start
wondering what corners they cut to push their reqs/s number up. I can make a
blazingly fast webserver that would actually kill nginx's performance stone
dead for a "return a constant JSON string response" task... the trick is that
I'm not even going to look at the incoming web request, I'm going to just
receive a socket, blast out my answer as a constant string buffer without even
reading from the socket, and discard the socket. (If you're feeling
particularly saucy, hook that up to a user-space TCP stack so you can drop the
work of properly setting up and tearing down TCP connections.) There aren't
that many real-world tasks for which that is a good solution (though, non-
zero!), but it'll look like pure awesomesauce on the benchmark!

 _Properly_ handling HTTP is a non-trivial problem, and even more so if it's
going to be hooked up to a program rather than a static file system or
something similarly easy. I actually start getting _nervous_ about web servers
that show excessively high numbers. If your performance is much better than
nginx, rather than me cheering for joy, I actually have a lot of questions
about how you did that exactly, and what my website's security profile looks
like with your way-faster server. I'm not saying these questions are
completely unanswerable; perhaps there is a way to safely do a much faster web
server. I'm just saying that rather than my default response being celebration
and "Oh wowzers _cool_!", my default reaction is a healthy dollop of
skepticism.

~~~
JoeAltmaier
Re: gallons of gas. There's the old puzzle: your spouse gets 100MPG in that
super-hybrid-mobile. The salesperson wants to upgrade you for $1000 to the
super-duper-hybrid-mobile at 200MPG! Double the mileage!

You suggest instead that you get the old truck serviced and replace the plugs,
distributor and tailpipe. Estimated cost $1000, and should get you from 10MPG
to 11MPG. Which is the better deal? Assuming you both drive about 100 miles
per week.
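Working the numbers makes the answer plain: at 100 miles per week, doubling 100 MPG to 200 MPG saves half a gallon per week, while nudging the truck from 10 MPG to 11 MPG saves about 0.9 gallons per week, nearly twice as much for the same $1000. A tiny check (the function name is hypothetical):

```c
/* Weekly fuel burn -- thinking in gallons rather than MPG makes the
 * comparison honest, as the grandparent comment suggests.
 * Hybrid upgrade saves 1.0 - 0.5  = 0.50 gal/week;
 * truck tune-up saves 10.0 - 9.09 ~= 0.91 gal/week. */
double gallons_per_week(double mpg, double miles_per_week)
{
    return miles_per_week / mpg;
}
```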

~~~
zzzcpan
Web servers are not like that. Micro-optimizations only work for benchmarks
and very specific load patterns that almost no people have.

~~~
hueving
Isn't it precisely like that? The point of the exercise is that even when you
are getting really large MPG improvements (e.g. 100 to 120), the best gain
comes from improving the really slow component of the pipeline (e.g. the truck
from 10 to 11).

~~~
zzzcpan
Well, no. A web server's role is more like the taxi ride home after a 12-hour
flight. From that perspective, MPG doesn't matter at all.

------
ramr
[https://github.com/facebook/proxygen](https://github.com/facebook/proxygen)
C++, used by Facebook in production. We have been using it in a high-
performance RTB application and it has performed remarkably well.

------
ktRolster
A lot of ad-tech companies build ad-servers in C, because the latency is so
crucial in that context.

~~~
pjmlp
Or they don't know any better.

I took part in a few projects that migrated high-throughput servers handling
mobile network traffic from C++ to Java.

~~~
faint_coder
Or maybe THEY didn't know any better way to optimize their C++, and switched
to Java instead.

~~~
pjmlp
Given that I remember the days when C and C++ compilers generated code worse
than a junior Assembly programmer, I always find such comparisons interesting.

Not that they aren't true; rather, their validity depends a lot on the
programmer's skillset and the compilers being used.

------
chx
Technical prowess is one thing, support is another. There are 20 times as many
OpenResty questions on Stack Overflow (although still very few) as nxweb
questions, and the few nxweb questions there are from years ago. I am not sure
why this is suddenly on the Hacker News front page.

------
giancarlostoro
Which Python is supported, 2 or 3? For some it makes a big difference. I
really want to play with this. Also, which OS, only Linux? I am trying to find
this on the site but I'm not seeing it; maybe adding it to the front page or
to an FAQ would help (which requires creating an FAQ page or section). Thanks!
Looks interesting otherwise.

~~~
sanxiyn
Python 2, since
[https://bitbucket.org/yarosla/nxweb/src/tip/src/lib/modules/...](https://bitbucket.org/yarosla/nxweb/src/tip/src/lib/modules/python.c)
uses PyInt_FromLong, which was replaced by PyLong_FromLong in Python 3. On the
other hand, it doesn't look hard to port.

------
RUG3Y
Looks cool. It must have Python 3 for me to use it; I would definitely try it
out if that's supported.

------
amelius
Does it support HTTP 2, or will it in the future?

~~~
mp3geek
[https://groups.google.com/forum/?hl=en#!topic/nxweb/8NAnQ0Im...](https://groups.google.com/forum/?hl=en#!topic/nxweb/8NAnQ0ImYSE)

Unlikely, and given his attitude I'm not going to waste my time trying nxweb.

~~~
lox
> "No plans so far. Why would you need it?"

Yeah, nope. Check out H2O if you haven't already
[https://h2o.examp1e.net/](https://h2o.examp1e.net/).

------
thenomad
The templating engine is an interesting, and slightly curious, addition here.

It looks significantly more flexible than anything nginx offers without having
to bolt on a server-side language like PHP - unless nginx has something
similar in its millions of modules that I'm not aware of.

(I know about and love nginx SSIs, but the templating here looks more flexible
than them.)

------
iso-8859-1
Other than being C and not C++, how does it compare to CppCMS (not a CMS)?
[http://cppcms.com/wikipp/en/page/main](http://cppcms.com/wikipp/en/page/main)

------
ex3ndr
Some questions:

1) Why do you think that Java is slower than C++? A server-side JIT can
compile much more optimized code, since it really knows what and how to
optimize.

2) What about security? Almost half of the recent security problems came from
native-code issues.

~~~
Ace17
"A server-side JIT can compile much more optimized code, since it really knows
what and how to optimize."

While this seems perfectly plausible, would you happen to know some benchmark
backing this claim? Thanks.

------
22klinda
I would like to see how well it performs against a web server like Cowboy.

------
arca_vorago
I'm curious about its security features, which are one of the main reasons I
have been using Hiawatha.

------
elcct
I remember playing with it some time ago. Pretty cool thing.

------
known
Good initiative.

------
niksmac
I am so glad to see nginx is there to give Nxweb the competition it deserves.

------
Ace17
Again?

------
bigdubs
Seems cool, but I'm curious whether teams have investigated golang for these
use cases, specifically whether the throughput is sufficiently high and the GC
pauses sufficiently small.

