I recently experimented with having nginx rewrite images from png/jpeg to webp for clients that support it. I ended up with a solution where a Lambda, triggered by new files added to a bucket, re-encoded them as webp alongside the originals. When a request came into nginx, it would examine the URL and the client's Accept header, first try to fetch the webp file from S3, and then fall back to fetching the original from S3.
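A minimal sketch of that negotiation in nginx config — the bucket name, paths, and resolver address here are placeholders I'm assuming, not the original setup:

```nginx
# Map the client's Accept header to an optional ".webp" suffix.
map $http_accept $webp_suffix {
    default        "";
    "~*image/webp" ".webp";
}

server {
    listen 80;
    resolver 1.1.1.1;  # required because proxy_pass below uses variables

    location ~* ^/images/(?<name>.+\.(?:png|jpe?g))$ {
        proxy_intercept_errors on;
        # S3 answers 403 or 404 for a missing key depending on permissions.
        error_page 403 404 = @original;
        # Try the Lambda-generated webp first (e.g. "/images/foo.png.webp").
        proxy_pass https://my-bucket.s3.amazonaws.com/images/$name$webp_suffix;
    }

    location @original {
        # Fall back to the original png/jpeg object.
        proxy_pass https://my-bucket.s3.amazonaws.com$request_uri;
    }
}
```

For clients that don't send `Accept: image/webp`, the suffix is empty and the first `proxy_pass` fetches the original directly.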
I was somewhat surprised that nginx was capable of doing it efficiently, given the nginx configuration format and all the moving pieces.
It can also serve as a proxy server, but we already have the finest proxy server in the world as open source: HAProxy.
I urge anyone to learn its admittedly obscure but simple config file switches and be amazed at how many layers this software can operate on.
When you really need to performance tune your frontend in real-time, you will appreciate HAProxy and what it offers.
> You can use both NGINX Open Source and NGINX Plus as the gateway to S3 or a compatible object store.
They do mention this further down the page, but in 8 months, when it randomly breaks, you have to hope you remember that it needs to be periodically restarted to keep working.
This is by far the stupidest paywalled feature ever, because it amounts to downtime extortion.
It may or may not be able to replace Nginx depending on your use case. For me Caddy has replaced everything I used to use Nginx for and more.
So if you need this capability for free, check it out. Not only that, it handles SRV record resolution too.
Assuming the patch is valid, do they decline it citing the paid feature or do something like making a straw man argument against it?
set $proxy_url xxx;
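For context, a sketch of how the variable trick is usually applied: when the upstream host appears in a variable, nginx re-resolves it per the resolver's TTL instead of caching the IP once at startup. The resolver address and bucket host below are examples, not a specific deployment:

```nginx
server {
    resolver 10.0.0.2 valid=30s;  # example address; use a resolver reachable from the host

    location / {
        # Host in a variable => resolved at request time, not config load time.
        set $proxy_url "my-bucket.s3.amazonaws.com";  # example host
        proxy_pass https://$proxy_url$request_uri;
    }
}
```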
The difference is that SeaweedFS can support both read and write, with asynchronous write-back, while Nginx supports read-only caching with a 1-hour TTL.
As for caching, that is totally configurable to whatever you want; the example configuration is set to 1 hour, but that is arbitrary. In fact, one of the interesting things is all of the additional functionality that can be enabled because the proxying is being done by NGINX.
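For example, the TTL is just whatever `proxy_cache_valid` says; a sketch with an illustrative cache path, zone name, and bucket:

```nginx
# Define a cache zone on disk (path, size, and zone name are illustrative).
proxy_cache_path /var/cache/nginx/s3 keys_zone=s3_cache:10m
                 max_size=10g inactive=48h;

server {
    location / {
        proxy_cache       s3_cache;
        proxy_cache_valid 200 302 24h;  # cache successful responses for a day
        proxy_cache_valid 404      1m;  # but only briefly cache misses
        proxy_pass        https://my-bucket.s3.amazonaws.com;
    }
}
```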
Regarding read and write, that can be enabled for AWSv2 signatures, but it is more difficult to do in AWSv4 signatures. I have an idea about how to accomplish it with v4 signatures, but it will take some time to prototype it.
What is "asynchronous write back"?
There are two ways to cache: write-through and write-back. You are using write-through, which has to write to the remote storage before returning. Write-back only writes to the local copy, which is much faster to return; the actual updates to remote storage are executed asynchronously.
I replaced it with Varnish, with the files publicly available on the (cheaper) S3-compatible Scaleway. I guess a simple Nginx would have worked the same at that point. My goal was mostly to minimize the bandwidth cost (bandwidth is not metered on my server).