
I love the scalability "argument". The capabilities of a single thread of processing on a modern x86 CPU are underestimated by several orders of magnitude in most shops. The unfortunate part is that the more you chase scalability with the cloud bullshit, the more elusive it becomes. If you want to push tens of millions of transactions per second, you need a fast core being fed data in the most optimal way possible. Going inwards and reducing your latency per transaction is the best option for complex, synchronous problem domains. Going outwards will only speed things up if you can truly decouple everything in the time domain.
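
To make it concrete, here's a minimal sketch of the kind of microbenchmark I mean, in Go. The "transaction" is a made-up balance update rather than any real workload; the only point is the order of magnitude a single core reaches when it's fed hot, contiguous data:

    package main

    import (
        "fmt"
        "time"
    )

    // A deliberately tiny stand-in for a "transaction": debit one
    // account, credit another. Real transactions do more work, but
    // the access pattern (hot data, contiguous in memory) is what
    // keeps the core fed.
    type account struct{ balance int64 }

    func main() {
        const numAccounts = 1 << 20 // power of two, so we can mask below
        const numTx = 50_000_000

        accounts := make([]account, numAccounts)

        start := time.Now()
        for i := 0; i < numTx; i++ {
            from := &accounts[i&(numAccounts-1)]
            to := &accounts[(i+1)&(numAccounts-1)]
            from.balance--
            to.balance++
        }
        elapsed := time.Since(start)

        fmt.Printf("%d tx in %v (%.0f tx/s)\n",
            numTx, elapsed, float64(numTx)/elapsed.Seconds())
    }

A loop like this should report tens of millions of "transactions" per second on one recent core; the gap between that and a typical networked service is all serialization, syscalls and network hops.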


Judging by the occasional post on HN, even a lot of experienced software engineers ("experienced" being my own assessment) seem to have no good handle on what a single thread and a single machine should be capable of handling, what the latency should look like, and what it should cost as a result. The posts I mean are the ones saying "look how we handled the load for a moderately active web app, which is 90% cacheable, with only five nodes in our k8s cluster".

I really don't know why that is. My guess would be that few people have really built something from beginning to end with the most boring tech possible (starting out with PHP+MySQL like everyone did 15 years ago). Or they always operate at levels of abstraction where everything is slow, so they have simply gotten used to it, like their text editor not running smoothly unless it's on an i9, because the text editor is now a pile of abstractions running on top of Electron, when vim was able to run smoothly decades ago. It's both sad and an opportunity at the same time, because you can be the one with the correct gut feeling of "this should really be doable with a single machine, and if it's not, we're doing something fundamentally wrong".


I think you're right on the money with people having gotten used to it. Once I truly started harnessing the power of Vim combined with shell scripts and terminal multiplexers, my patience for many other programs and tasks decreased even further.

We have the computing power to run complex physical simulations or AI training runs on a normal home computer, but for some reason we use programs that take 100 times longer to start than old software despite not having more features, and websites that sometimes even come with loading screens. Electron isn't even as bad or slow as many people think, but somehow developers still manage to throttle it so hard that I might as well just use a website instead.

As someone who is just starting with nodejs and web development, I find a lot of the tech feels nice but sometimes also unnecessarily abstract. Sure, it tends to make a lot of the code very elegant and simple, but every additional framework adds another guide and set of documentation to look at, another config file that has to be correctly referenced by an already existing one so npm knows what to do, another couple of seconds of build time, and another source of bugs and build problems. Then of course you need that trendy package for easier import maintenance - which IDEs handled automatically in the past, but now we've got to use an editor running in a hidden web browser, one that started from scratch in terms of features but is just as slow in return.


Once I get our current product in a good spot WRT maintainability and sales pipeline, I am planning to spend some time (~6 months) looking at developing ultra-low latency developer tooling. I feel like I can deliver these experiences through a browser using some clever tricks.

Going through my current project has really worn me out with regard to tolerating UX slowdowns in developer tools. I am getting so incredibly tired of how shitty Visual Studio performs on a Threadripper with effectively infinite RAM. I've had to turn off all of the nice features to get our codebase to be even remotely tolerable. And don't get me wrong: this is not a plea to Microsoft to fix VS. I think everyone involved can agree that fundamentally VS is unfixable WRT delivering what I would consider an "ultra-low" latency user experience (mouse/keyboard inputs show on display in <5ms). Any "fixing" is basically a complete rewrite with this new objective held in the highest regard.

Current developer experiences are like bad ergonomics for your brain. This madness needs to end. Our computers are more than capable. If you don't believe me, go play Overwatch or CS:GO. At least 2 independently-wealthy development organizations have figured it out. Why stop there?


> mouse/keyboard inputs show on display in <5ms

That's not possible, at least if you measure the full time from actuating the key to the screen showing the updated content. The record holder in that regard is still the Apple II with 30ms[0].

But I agree, modern software should react much faster. It's kind of weird how on one hand we have hardware that can run VR with 120Hz and extremely low input latency, and at the same time some programs that only need to display and change a couple bytes in a text file need 4 seconds to even start.

[0]: https://danluu.com/input-lag/


I am referring mostly to the latency that I do have control over in software, i.e. the time between a user input event becoming available for processing and the view state responding to the user.
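
For what it's worth, here's a minimal sketch of how that budget could be measured, in Go. applyToView and presentFrame are hypothetical stand-ins for a real editor's model update and frame handoff; input-device and display scanout delays sit on top of whatever this measures:

    package main

    import (
        "fmt"
        "time"
    )

    // inputEvent carries the timestamp at which the event became
    // available to the application, per the definition above.
    type inputEvent struct {
        receivedAt time.Time
        key        rune
    }

    // Hypothetical stand-ins for a real editor's model update and
    // frame presentation.
    func applyToView(ev inputEvent) { /* mutate view state */ }
    func presentFrame()             { /* hand the frame to the compositor */ }

    func handle(ev inputEvent) {
        applyToView(ev)
        presentFrame()
        // Software-controllable latency: event available -> frame
        // handed off. Everything outside the process comes on top.
        if latency := time.Since(ev.receivedAt); latency > 5*time.Millisecond {
            fmt.Printf("budget blown: %v\n", latency)
        }
    }

    func main() {
        handle(inputEvent{receivedAt: time.Now(), key: 'a'})
    }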


> Or they always operate at levels of abstraction where everything is slow, so they have simply gotten used to it

That's a lot of it, I believe. One startup I dealt with was honestly proud of how they were scaling up to hundreds of requests per second with only a few dozen AWS VMs to handle it.

I'm old enough to know how ludicrous those numbers are, having built systems handling more load on a single desktop machine with 90s hardware.

But I've come to realize some of the younger developers today have actually never built any server implementation outside of AWS! So the perspective isn't there.
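
If that's you, the most boring possible baseline is worth running once. A sketch, not anyone's production setup: a single-binary Go server with zero tuning. Hammering it with a load generator like wrk on commodity hardware typically reports tens of thousands of requests per second, not hundreds:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // net/http already spreads connections across all cores;
        // nothing here is tuned at all.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello"))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }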


Yup. At the last place I worked, a colleague and I would joke that given the traffic our ecommerce platform actually had, we should be able to run the whole thing on a single Raspberry Pi. I think he even ran some numbers on it. What we actually had was two containerized app servers and a separate large RDS instance, plus SES, SQS, SNS, ELB and all the other bits and pieces AWS buy-in gets you. The cloud bills for the traffic we handled were ridiculous.
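
The back-of-envelope goes something like this (the traffic figure is made up, since I no longer have the real one; assume 500k page views/day):

    500,000 views/day / 86,400 s/day ≈ 6 req/s average
    6 req/s x 10 (peak-to-average factor) ≈ 60 req/s at peak

Mostly cacheable pages at that rate are plausibly within a Raspberry Pi's reach, which is what made the joke sting.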



