Hacker News

Ah nice, I didn't realise you meant application proxies/gateways. Network ones are so quick due to their ASICs etc!

I personally would still say 50ms is super, super slow for an application gateway - a well designed one using e.g. nginx/openresty, Lambda@Edge, or simply another application server can easily do that job while adding <0.1ms of processing time (assuming no additional network calls or heavy work), plus maybe 0.3ms for connection establishment if it hasn't been optimised to use persistent connections.

If it is e.g. making a DB request to check auth, I would highlight that this _is_ backend processing time, not inherent or unoptimisable overhead. e.g. it's totally feasible to do auth checks without making any async calls - you just need a bit of crypto and some memory for tracking revoked tokens. That does add a bit of complexity, but it's likely worth it for the super hot path.
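A minimal sketch of that idea, assuming HMAC-signed tokens and an in-process revocation set (how the set gets populated - e.g. a pub/sub feed - is out of scope; the secret and names are illustrative):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
	"sync"
)

var (
	secret = []byte("demo-secret") // in reality, loaded from config or a KMS

	revokedMu sync.RWMutex
	revoked   = map[string]struct{}{} // kept up to date asynchronously, off the hot path
)

// sign produces "payload.hexmac" - a stand-in for a real token format.
func sign(payload string) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(payload))
	return payload + "." + hex.EncodeToString(mac.Sum(nil))
}

// verify does the whole auth check locally: constant-time MAC comparison
// plus a map lookup. No network or DB call on the request path.
func verify(token string) bool {
	i := strings.LastIndex(token, ".")
	if i < 0 {
		return false
	}
	payload, sig := token[:i], token[i+1:]
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(payload))
	want := hex.EncodeToString(mac.Sum(nil))
	if !hmac.Equal([]byte(sig), []byte(want)) {
		return false
	}
	revokedMu.RLock()
	_, isRevoked := revoked[payload]
	revokedMu.RUnlock()
	return !isRevoked
}

func revoke(payload string) {
	revokedMu.Lock()
	revoked[payload] = struct{}{}
	revokedMu.Unlock()
}

func main() {
	t := sign("user42")
	fmt.Println(verify(t)) // true
	revoke("user42")
	fmt.Println(verify(t)) // false
}
```

The complexity the comment mentions lives in keeping the revocation set synchronised across instances, not in the per-request check itself.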

BFFs would not really need to add anything beyond ~1ms or so, but you do hit the lowest-common-denominator problem - you have to wait for the slowest thing to complete, even if everything is happening in parallel.

BFFs definitely help by simplifying client-side code, but at the cost of increased overall latency and potentially reduced resilience, compared to what could be achieved by decoupling unrelated components.

As such, I wouldn't expect the Atlassian products to use BFF patterns - for them it's better to throw 1k requests down a single HTTP/2 or HTTP/3 connection and render each part of the page as it becomes available. I have heard their frontends are very complex, which would probably support that assessment.



Ocelot - localhost => 40-50 ms on a workstation.

Gateways can add a lot of functionality. Even GraphQL can be used as a gateway.

It's not all "dumb forwarding", and I would be very surprised if you could find any sub-ms benchmarks.

Amazon has a one-million-dollar award if you get the page to load in under 10 ms. So that's what you were expecting by default from a SaaS in your previous comment.

It's still unrealistic.


That just says that Ocelot consumes quite a bit of your latency budget. Maybe the features it brings are worth it to you, but it's definitely nowhere close to the limit of what's achievable.

e.g. Envoy (which replaced Ocelot in Microsoft's .NET microservice reference architecture) has a significantly lower latency cost [1]

Your reference to an Amazon reward is interesting, as it's quite easy to get pages to load in under 10ms in the right conditions. Perhaps you can provide a link to more information?

1. https://docs.microsoft.com/en-us/dotnet/architecture/microse...


It's from someone who worked at the parent company I work for (Montreal), and they had that reference from someone working at Amazon.


At the end of the day, all that middleware type stuff is part of your backend - it is not inherent overhead.

If you really want to focus on performance, you can choose not to use anything like that off the shelf and do it all in a fraction of a millisecond. It actually isn't difficult - you just need to avoid getting locked into a dependency on something heavy.

For my company's backend, our entire middleware stack, auth checks included, adds around 1-2ms, and that includes hitting a DB server to check for token revocation. That's all there is between the end user and our application code, plus network latency. We didn't do anything particularly clever or special - we just avoided frameworks and heavy magic products: Go's net/http, the chi router, and Lambda@Edge.




