
Nginx-1.14.0 stable version has been released - collinmanderson
http://nginx.org/
======
zedpm
The mirror module is exciting. Sometimes it's nice to have no-consequences
testing of production traffic in a staging environment. Unless you have
something like Envoy and its shadowing functionality [0] to handle the
mirroring, you end up using a tool like GoReplay [1] to duplicate the traffic
to another environment and ignore the responses. This looks like a cleaner and
simpler way to accomplish the task.

[0] [https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/ro...](https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/route/route.proto.html?highlight=shadow#route-routeaction-requestmirrorpolicy)

[1] [https://goreplay.org/](https://goreplay.org/)
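
For the curious, the nginx side is a minimal sketch like this (the upstream names and the /mirror location are placeholders):

    # Duplicate each request to a staging backend as a fire-and-forget
    # subrequest; the mirror's responses are discarded, so production
    # traffic is unaffected.
    location / {
        mirror /mirror;
        proxy_pass http://production_backend;
    }

    location = /mirror {
        internal;
        proxy_pass http://staging_backend$request_uri;
    }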

------
rlpb
Ubuntu 18.04 is scheduled to be released tomorrow and includes nginx 1.14.0.

~~~
currysausage
I've been using the Nginx-provided repos for a few years now without issues:
[https://nginx.org/en/linux_packages.html#stable](https://nginx.org/en/linux_packages.html#stable)
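
On Ubuntu, the setup there boils down to adding an apt source entry along these lines (codename varies by release):

    deb http://nginx.org/packages/ubuntu/ bionic nginx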

------
LinuxBender
It appears that the default build does not check for libc support of Full
RELRO and PIE. Are there any plans to add checks for this, or is it assumed
that everyone sets the right CFLAGS and LDFLAGS? I know that Debian, Ubuntu,
Gentoo, Alpine and Fedora package build specs do this by default today.

The reason I ask is that I see a lot of people build this themselves and run
it from Docker. I am concerned that they are not getting the various libc
protections that should be enabled on internet-facing daemons, e.g. stack-protector, FORTIFY_SOURCE, full RELRO, PIE, SSP buffer limits, etc.
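
For comparison, a hardened manual build might pass something like this to nginx's configure script; this is a sketch of roughly what the distro packaging does, and exact flags vary by toolchain:

    # Common GCC/ld hardening options, similar to Debian/Fedora defaults.
    ./configure \
        --with-cc-opt="-O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 -fPIE" \
        --with-ld-opt="-pie -Wl,-z,relro -Wl,-z,now"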

~~~
LinuxBender
I forgot to mention: if you want to check your existing daemons for these
protections, either apt/yum install "checksec", or grab the script from its
maintainer [1] to check running daemons or files.

[1] [https://www.trapkit.de/tools/checksec.html](https://www.trapkit.de/tools/checksec.html)
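
Typical usage looks like this (flag syntax differs slightly between the original script and distro packages):

    # Check a single binary for RELRO, stack canary, NX, PIE, etc.
    checksec --file=/usr/sbin/nginx

    # Or check all running processes:
    checksec --proc-all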

------
TechTeam12
>http2_push /static/css/main.css;

Silly question, but what's the use case for HTTP/2 Push? Their example
with pushing doesn't make sense to me. Why would you want to push static
content?

~~~
niftich
In 2016, the Chromium team at Google produced a document [1] that examines
use cases for HTTP/2 Push, talks about deployment models, and analyzes whether
it's worth it. In this particular case, you'd push static content because you
know it will be needed later, and this way the information arrives in the HTTP
header instead of in the payload's content body, so by the time 'main.css' is
needed, the UA's HTTP cache may already be populated with the file.

That being said, I fail to see how, in the general case, setting up static
pushes in the server software's config is useful [2][3], and I wish that more
implementations converged on a common way of describing what to push [4], so
that tools could be built around discovering dependencies, and around
interpreting that manifest to execute the push.

[1]
[https://docs.google.com/document/d/1K0NykTXBbbbTlv60t5MyJvXj...](https://docs.google.com/document/d/1K0NykTXBbbbTlv60t5MyJvXjqKGsCVNYHyLEXIxYMv0/edit?pref=2&pli=1)
[2]
[https://news.ycombinator.com/item?id=14077955#14081237](https://news.ycombinator.com/item?id=14077955#14081237)
[3]
[https://news.ycombinator.com/item?id=12719563#12722383](https://news.ycombinator.com/item?id=12719563#12722383)
[4] [https://github.com/GoogleChromeLabs/http2-push-manifest](https://github.com/GoogleChromeLabs/http2-push-manifest)
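
To make the static case concrete, the directive quoted upthread sits in a location block like this (paths are illustrative):

    # Push the stylesheet the page will need, so it may already be in the
    # UA's cache by the time the HTML references it.
    location = /index.html {
        http2_push /static/css/main.css;
    }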

~~~
zzzcpan
Pushes are probably best implemented in a caching layer, not by manually
describing what to push. A web server should not just cache resources, but
also learn what kinds of resources are often requested with each page, and just
push those the next time someone makes a request. And some sort of push
prediction policy should be configurable.

~~~
niftich
It's not sensible for pushes to be implemented in a caching layer, because
pushes are effectively manual overrides of the User-Agent's own caching;
conversely, the User-Agent's cache is perfectly appropriate as a cache, and
doesn't _need_ HTTP/2 Push to work. HTTP/2 Push is effectively the server
declaring it knows better, so it primes the UA's cache to avoid additional
roundtrips.

Nginx does have a module [1] and a corresponding configuration option to scan
outgoing responses for Link preload headers; once it has learned of a preload
being declared by a resource, it will push that resource thereafter. Nginx's
writeup on the feature also admits that statically configuring pushes in the
server config is not terribly useful -- it's quite often the wrong place to
specify relationships between resources.

[1] [https://www.nginx.com/blog/nginx-1-13-9-http2-server-push/](https://www.nginx.com/blog/nginx-1-13-9-http2-server-push/)
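
Enabling that is a one-line switch; nginx then converts "Link: ...; rel=preload" headers from the upstream into pushes (upstream name here is a placeholder):

    location / {
        proxy_pass http://upstream_app;
        # Turn upstream "Link: </x>; rel=preload" headers into HTTP/2 pushes.
        http2_push_preload on;
    }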

~~~
zzzcpan
If you can predict with high enough accuracy what resource is going to be
requested by the client next, I don't see why pushing it would be a bad idea.
Speculation is how we hide latency after all.

And if you think about it, static pushes in general have very limited
usefulness, almost nonexistent. Imagine some URL becomes popular and
almost all of the requests to it come from people who have never visited the
website before. It would make sense for a web server to learn what kinds of
resources clients request along with that URL and start pushing those resources
to people ahead of time.

~~~
pas
For that, it's easier to parse the pushed content. If it's HTML, then catch
stylesheets, JS, and some other static <img src=.../> things. It doesn't have
to be flawless; after all, it's just a speed-up. (And if you want a speed-up,
write nice markup.)

Similarly, it should be the backend behind the reverse proxy that knows which
page has just been rendered, and knows about the user's session (is it brand
new, or not new but still in need of pushes because it's old and that
particular page's background has changed since then, etc.).

And in the case of an Angular/React/SPA thing, the "bundler/compiler" should
create a list of things to push for various URLs. Or the Angular/React team
should talk with the Nginx team to figure out how to speed things up. (In the
case of SSR - server side rendering - the NodeJS server can emit the necessary
Link headers, for example.)
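
In the SSR case, that amounts to the backend emitting a response header like:

    Link: </static/css/main.css>; rel=preload; as=style

which a proxy with preload scanning enabled can then turn into a push.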

~~~
zzzcpan
Come on, how is parsing things easier than gathering some very basic stats?

~~~
pas
Gathering stats requires keeping them somewhere. Making inferences.
Documenting the inference engine. Explaining the magic to users. That sounds a
lot more complicated than explaining which HTML tags will be parsed.

Proxies are already complicated as is. Caching proxies more so. (Think of how
Varnish has a - probably Turing complete - DSL to decide what to serve and/or
cache and when, and how.)

~~~
vlovich123
Parsing HTML content won't get you the full benefit an inference engine would.
An inference engine could easily learn that 90% of the users hitting your
landing page are going to log in & end up on their home screen, so it would
push the static resources for the home screen too. Similarly, it might know
that it already pushed those resources earlier in the session & only push the
new static resources that are unique to you once you log in (saving the
round-trip of the client NACKing the resource). Doing it via stateless HTML
parsing is never going to work because you have no idea of the state of the
session. That doesn't mean there's not a place for a mixture of approaches (&
yes, you could teach the HTML parsing about historical pushes, but then you
get back to the concern you raised about storing that data somewhere).

The HTML parsing approach is probably great from an "80% of the benefit for
20% of the effort" standpoint on small-scale websites (i.e. the majority). A
super-accurate inference engine might use deep learning to train what to serve
on a very personalized level, if you have a lot of users & the CPU/latency
trade-off makes sense for your business model (i.e. more accuracy for a larger
slice of your population). A less accurate one might just collect statistics
in a DB & make cheap, less accurate guesses from that (or use more "classic
ML" like Bayes) if you have a medium number of users, or if the CPU usage
makes more sense and you're OK with the maintenance burden of a DB. It's a
sliding scale of tradeoffs IMO, with different approaches making sense
depending on your priorities.

~~~
pas
Yes, I agree that of course a hypothetical ML/AI outperforms any naive and
simple solution. But usually magic technology is required to do that,
otherwise it wouldn't be magic :)

That said, a simple heuristic could work, like: "after requesting a URL, the
server got these requests on the same HTTP/2 connection in less than 1 second,
and those were static assets served with Expires headers".

~~~
vlovich123
Yes, like I said, there's a sliding scale of effort/reward & HTML parsing is
at one extreme end of it.

------
sandstrom
Anyone with experience migrating an HTTP service from nginx to Caddy or Træfik?

Did it work out well or did you end up having to revert back to nginx? If so,
what was missing?

[https://traefik.io/](https://traefik.io/)
[https://caddyserver.com/](https://caddyserver.com/)

~~~
ComputerGuru
I only have one question: why?

I can understand picking one over the other at the start, but what motivation
could you possibly have to actively ditch nginx altogether?

~~~
joshribakoff
Nginx struggles at basic stuff like load balancing to microservice backends,
because of trivial things like DNS caching, when running inside container
orchestration platforms.

[https://serverfault.com/questions/240476/how-to-force-nginx-...](https://serverfault.com/questions/240476/how-to-force-nginx-to-resolve-dns-of-a-dynamic-hostname-everytime-when-doing-p)
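
The usual workaround is to configure a resolver and put the hostname in a variable, which forces nginx to re-resolve it at request time instead of caching the IP from startup (resolver address and service name below are placeholders):

    resolver 10.0.0.2 valid=10s;
    location / {
        # Using a variable makes nginx consult the resolver per request.
        set $backend "http://api.default.svc.cluster.local";
        proxy_pass $backend;
    }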

With the new Ingress in Kubernetes & the Let's Encrypt plugin, you probably do
not need nginx anymore if you're adopting containers. In fact, it can be a
hindrance to adopting container orchestration systems.

~~~
e12e
Sounds like it's more haproxy vs traefik or nginx vs Caddy.

If you don't need a web server, you should probably be using haproxy already ;)

~~~
joshribakoff
HAProxy, nginx, and Kubernetes all implement layer 4 & layer 7 load balancing.

If you aspire to writing cloud-native applications, there aren't very
compelling reasons to run reverse proxies, in my opinion. If it's possible to
offload that responsibility to the cloud platform instead of running your own
infrastructure, that is highly desirable for some people.

------
sho
gRPC support is very welcome! I've used gRPC internally but have felt a bit
uncomfortable exposing a server directly to the internet for outside client
use. Not to mention the difficulty of deploying in a downtime-free manner.

gRPC + TLS in nginx will allow outside connections in a way I'm comfortable
with. Great improvement!
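
The config for that is pleasantly small; something along these lines terminates TLS and proxies gRPC to an internal backend (cert paths and backend address are placeholders):

    server {
        listen 443 ssl http2;
        ssl_certificate     /etc/nginx/certs/example.crt;
        ssl_certificate_key /etc/nginx/certs/example.key;

        location / {
            # Forward gRPC traffic to a plaintext internal backend.
            grpc_pass grpc://127.0.0.1:50051;
        }
    }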

~~~
munjal116
Agreed, added benefit -- much easier to expose gRPC services through
Kubernetes Ingress.

------
EmilStenstrom
They really need proper release notes; it's very hard to find a list of the
changes in this new version...

~~~
Piskvorrr
[https://nginx.org/en/CHANGES-1.14](https://nginx.org/en/CHANGES-1.14)

^^^ Hard to find, though.

~~~
richardwhiuk
That just says:

    *) 1.14.x stable branch.

which doesn't mean anything...

~~~
judofyr
It's because Nginx uses even minor version numbers for stable releases (very
few changes) and odd minor versions for mainline releases (production-ready,
but frequently improved/changed). 1.14.0 is exactly the same as 1.13.12.

------
notaplumber
Updated chroot(2) patch from OpenBSD ports:
[https://raw.githubusercontent.com/rnagy/nginx_chroot_patch/m...](https://raw.githubusercontent.com/rnagy/nginx_chroot_patch/master/nginx-1.14.0-chroot.patch)

------
collinmanderson
> nginx-1.14.0 stable version has been released, incorporating new features
> and bug fixes from the 1.13.x mainline branch - including the mirror module,
> HTTP/2 push, the gRPC proxy module, and more.

------
andonisus
gRPC support is amazing! I just wish this had been announced a week earlier,
as I spent a good amount of development time creating a service that bypasses
our NGINX server to make gRPC connections to the desired microservices.
It will be nice to just target a service by name directly instead of having
to query Consul for its host IP address and port.

