I did some work to improve performance on a dashboard several years ago. The way the statistics were queried was generally terrible, so I spent some time setting up aggregations and cleaning that up, but then... the performance was still terrible.

It turned out that the dashboard had been built on top of WordPress. The way it checked whether the user had permission to access the dashboard was to query all users, join the meta table that held the permissions as a serialized object, run a full-text search to find which users could access the page, and return that entire list. Then it checked whether the current user was in the list.

I switched it to only check permissions for the current user, and the page loaded instantaneously.
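For the curious, a minimal sketch of the two approaches, assuming standard WordPress APIs (get_users() with a LIKE meta_query versus reading just the current user's meta). The meta key and capability names are placeholders; the thread doesn't show the actual plugin code.

    // Roughly the slow pattern described above: fetch every user whose
    // serialized meta value mentions the permission, then check membership.
    // 'dashboard_permissions' and 'view_dashboard' are made-up names.
    $allowed_ids = get_users( array(
        'fields'     => 'ID',
        'meta_query' => array( array(
            'key'     => 'dashboard_permissions',
            'value'   => 'view_dashboard',
            'compare' => 'LIKE',  // substring match against serialized data
        ) ),
    ) );
    $can_view = in_array( get_current_user_id(), $allowed_ids );

    // The fix: only look at the current user's own meta row.
    $perms    = get_user_meta( get_current_user_id(), 'dashboard_permissions', true );
    $can_view = is_array( $perms ) && in_array( 'view_dashboard', $perms, true );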




If I look at traces of all the service calls at my company within our microservices environment, the "meat" of each service -- the part that's actually fetching data from a database or running an intense calculation -- is a fraction of the latency. Oftentimes it's between 20 and 40 ms.

Everything else is network hops and what I call "distributed clutter": authorizing via a third party like Auth0 multiple times for machine-to-machine tokens (because "zero trust"!), multiple parameter store calls, hitting a dcache, and, if a serverless function is involved, cold starts, API gateway latency, etc...

So for 20-40 ms worth of actual work, we get about a 400 ms to 2 s backend response time.

Then if you're loading a front-end SPA with JavaScript... fuggedaboutit.

But DevOps will say "but my managed services and infinite scalability!"



Exactly like that.



