
Things are not quite that simple. You can't say "because my Go app serves a single request in 50us under no load, it will serve 20,000 requests per second per core at 100% load". You'd be surprised: it won't.
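
For the curious, here's a rough Go sketch of the gap I mean: a handler that takes about 50us per request, the naive per-core estimate you'd derive from that latency, and the number you actually measure by hammering it with concurrent clients. The 50us delay, 64 workers, and 2-second window are made-up values just to illustrate, not anything from the post.

    // Toy load test: compare a naive per-core estimate derived from
    // single-request latency with throughput measured under concurrent load.
    // The 50us handler delay, 64 workers, and 2-second window are
    // illustrative assumptions.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/http/httptest"
        "runtime"
        "sync"
        "sync/atomic"
        "time"
    )

    func main() {
        const perRequest = 50 * time.Microsecond

        // Handler that "costs" roughly 50us per request.
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            time.Sleep(perRequest)
            io.WriteString(w, "ok")
        }))
        defer srv.Close()

        // Naive estimate: one request per 50us, per core.
        naive := float64(time.Second) / float64(perRequest) * float64(runtime.GOMAXPROCS(0))
        fmt.Printf("naive estimate: %.0f req/s\n", naive)

        // Measured: 64 concurrent clients hammering the server for 2 seconds.
        const window = 2 * time.Second
        deadline := time.Now().Add(window)
        var served int64
        var wg sync.WaitGroup
        for i := 0; i < 64; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for time.Now().Before(deadline) {
                    resp, err := http.Get(srv.URL)
                    if err != nil {
                        return
                    }
                    io.Copy(io.Discard, resp.Body)
                    resp.Body.Close()
                    atomic.AddInt64(&served, 1)
                }
            }()
        }
        wg.Wait()
        fmt.Printf("measured:       %.0f req/s\n", float64(served)/window.Seconds())
    }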

Modern machines are like a networked cluster in themselves. You have to do a ton of work tuning both kernel and "hardware" parameters and identifying bottlenecks, with debugging tools that are close to nonexistent.

There is one truth here: we used to get more performance by scaling "horizontally". Maybe it's time to "scale within"?




That's a fair point. I've updated the post to read, "That translates to thousands of requests per second per core" instead of saying it's linear scaling. Thanks for the feedback!


That’s nice. But I didn’t really take it literally.

Do you have some benchmark results by any chance? Although it’s a bit of a can of worms, and I would understand if you didn’t want to get into it at this time.


I've only done some light benchmarking so far. I had it running on a two-core DigitalOcean machine with sustained write load to test for race bugs and it was replicating 1K+ writes per second. But honestly I haven't even tried optimizing the code yet. I'm mainly focused on correctness right now. I would bet it could get a lot faster.

https://twitter.com/benbjohnson/status/1351590920664313856
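
Something along these lines (a sketch, not the actual test; the table, schema, worker count, and the mattn/go-sqlite3 driver are stand-ins): a few goroutines doing sustained inserts against a SQLite file for a fixed window and reporting writes per second.

    // Sketch of a sustained SQLite write-load test: a few goroutines
    // inserting rows for a fixed window and reporting writes/sec.
    // Table name, schema, worker count, and the driver are placeholders.
    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "sync"
        "sync/atomic"
        "time"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "bench.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // SQLite allows a single writer at a time; one connection avoids
        // "database is locked" errors while the goroutines race on the Go side.
        db.SetMaxOpenConns(1)

        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (id INTEGER PRIMARY KEY, v TEXT)`); err != nil {
            log.Fatal(err)
        }

        const window = 10 * time.Second
        deadline := time.Now().Add(window)
        var writes int64
        var wg sync.WaitGroup

        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                for n := 0; time.Now().Before(deadline); n++ {
                    _, err := db.Exec(`INSERT INTO kv (v) VALUES (?)`, fmt.Sprintf("w%d-%d", id, n))
                    if err != nil {
                        log.Println(err)
                        return
                    }
                    atomic.AddInt64(&writes, 1)
                }
            }(i)
        }
        wg.Wait()

        fmt.Printf("%.0f writes/sec\n", float64(writes)/window.Seconds())
    }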


Sharding is not really scaling for many workloads, and I think that despite your good intentions you may be misleading others. I'm glad you like the setup, but people flocking to SQLite scares me.



