X is justifiably slow (2022) (zeux.io)
115 points by todsacerdoti 3 months ago | 199 comments



Yes, even during my career computers became faster by a factor of ~100 or so, yet they don't feel much faster, if at all. Yes, they're much more _capable_, but usually even less responsive than in, e.g., the Windows 95 days.

I think just having the intuition of how fast computers can work is what the industry is sorely lacking. I am usually able to achieve very good performance for the stuff that I'm working on, despite the fact that it often requires a stupid amount of computation, just because I know how much a computer can actually do in a given amount of time when it's tuned correctly.


I think some of this is rose-tinted nostalgia.

For many people, Win 95 was paired with 486s with 8 MB of RAM or early underpowered Pentiums with hardly any more. Opening the Start menu would sometimes take seconds, like the computer had to ponder for a while. Applications frequently had to sit waiting as the spinning rust slowly volunteered some data.

Things are much better now.


I use an overpowered (for my type of job) modern i7 computer with 32 GB of RAM and a modern 1 TB SSD.

It still takes several seconds to open the Start menu, and the same for Calculator. Explorer sometimes takes a minute, so I have a different file manager open constantly.

The trick is to combine Windows with any type of enterprise DLP software. It consistently delivers this horrible experience.

But aside from poor enterprise setups, try opening the Win 11 Start menu on a computer that has internet access: just veeery slow. Suddenly you will miss those times of Win 95 loading Start menu icons one by one, with a soundtrack of a scrubbing HDD.

I don't miss those times, purely because of the huge difference in stability. 20 years ago or so, systems crashed all the time and filesystems had to be repaired constantly. I even clearly recall how often you'd get a kernel panic on a random Linux distro when you tried early Docker.


Yeah Windows 11 is a bag of delays and buggy interfaces (and telemetry)

It's frankly ridiculous how everything "kinda seems to work" but not really


Yeah. Re the

>yet computers don't feel much faster

it rather depends on the computer. The slowest I've experienced was a cheap laptop with Windows Vista where just right-clicking on an icon to see the menu could take like 30 seconds. I'm now on an M1 MacBook Air where everything is pretty much instant. That said, the other instant computer I had was a Psion 3, which came out 33 years ago, so it's down to the individual design I guess. Windows has always been kind of bad, to varying degrees.


The most important part of speed these days is NVMe, not just an SSD. If you are using a standard SATA SSD, things will still be much slower.


I think something is very wrong when a Linux installation gets more responsive and a Windows installation gets slower as time passes and software gets updated.

I can still run Debian 12 with KDE on a standard SSD way faster than a Win 11 installation on an NVMe drive, and that Debian installation is 8 years old and runs tons of applications at any given time.


On Linux, the difference between a good SSD and an NVMe drive is noticeable if you pay attention, but not really important. I say that as someone who cares about performance and does pay attention.


100%.

Even just loading files off the old spinning disks took ages. Loading screens for a game could take 5-10 minutes.

Even just booting into my Linux box took 3-5 minutes and optimizing the boot time was a whole thing you spent a lot of time on.

I remember the day I received my first SSD and installed it in my computer, it was like Christmas morning. Things are way faster now.


I had a friend who supported a Windows desktop environment in the '90s. He would constantly complain about how rebooting a workstation would take over 20 minutes. The amount of scripts and other tooling was nuts.

My MacBook goes to sleep when I shut the lid and is right back on when I open it up. Try doing that on my early-'90s Toshiba Satellite.

We live in a magical world now of high speed, multi-core and NVMe hardware. I have no desire to ever go back.


> My MacBook goes to sleep when I shut the lid and is right back on when I open it up. Try doing that on my early-'90s Toshiba Satellite.

Even the first Mac Portable[1] had fast sleep/wake. As did the Radio Shack Model 100[2] from the early 1980s. Early Apple laptops could spin down the hard drive (vs. modern Macs where you can't easily shut off I/O-intensive background daemons like mdworker, syspolicyd, photoanalysisd, etc.; fortunately SSDs mitigate the issue somewhat.)

[1] https://en.wikipedia.org/wiki/Macintosh_Portable

[2] https://en.wikipedia.org/wiki/TRS-80_Model_100


That reminds me that 10 years ago, I could close a (Windows, Linux) laptop and open it a couple of days later and it would instantly turn on, using the amazing S3 sleep. Doing the same these days and... the laptop doesn't turn on at all, because the battery is dead (caused by modern standby). Yay, progress!


It honestly gives me a warm fuzzy feeling thinking about how blazing fast personal computers are these days.

It's instantaneous compared to the '90s and even the 2000s. It wasn't until 2010-2012 that I remember switching to an SSD, which I feel was the turning point.


I had a netbook around this time, and putting an SSD in it was a HUGE upgrade (as was the upgrade from 2 to 4 GB of RAM).

I still have it, and when I put a 32-bit version of Debian on it a couple of years ago, it was molasses. Somehow I used it for years with no complaints.


When I upgraded my Windows laptop (running Windows NT) from something like 64 MB to 128 MB, it made a huge difference. Most of the benefit was that it stopped using swap. Sadly, I was borrowing the DIMM from my boss and had to give it back. It was like flying back in coach after getting there in first class.


Have you used Windows 11?

Like literally, I have a Pentium MMX 200 MHz (64 MB RAM, 8 GB CF card) machine right next to my Surface Pro 9 (i5-1235U, 16 GB RAM, 2 TB SSD). The Pentium (running Windows 98 SE) demolishes my Surface Pro 9 (running Windows 11) in opening applications and UI latency. I can literally open about 30 copies of Windows Explorer before Windows 11 opens a single one. Honestly, the only thing that's really slower is booting, since it takes forever in the BIOS stage.

Modern software is just slow. We've given up all of the advantages of our orders of magnitude faster I/O.


Booting itself took several minutes. And I had one machine where Word took multiple minutes to start; I never knew why. Maybe if I'd defragged for the umpteenth time.

---

And significantly less reliable.

Remember how common crashes were? (Including BSoD!)

Like, you can solve 90% of memory crashes by using a memory-managed language, but that of course has costs.


I clearly remember that booting Windows 95 on my 386SX was under one minute. I remember measuring this.


The 386SX ran at 16 MHz, was seven years old at the release of Windows 95 and was slightly below the stated minimum system requirements. By contrast, I had a 166 MHz Pentium (released four months after Windows 95) and I remember it being slow enough that I would go do something else while it booted.

I see references that mention that installing fonts can significantly impact boot time. Maybe the installed software and devices had a significant impact.


There was a side-by-side comparison video (on Twitter, can't find it) of Win 11 on modern hardware and Win 98 on then-modern hardware: cmd, Explorer, Notepad, Paint and Calculator all started instantly on the older Windows, while on 11 every app has a startup delay, and then also a "paint the UI so it's all shown and responsive" delay.

Also, hardware input lag definitely isn't better, and is many times worse than in the past: https://danluu.com/input-lag


Win95 keeps the start menu entirely in memory. I dare you to prove that it can take a second on any 386+ machine.


Windows 95 running on a modern day computer would be bloody fast.

The fact of the matter is the hardware has gotten faster and the software has gotten slower, cancelling each other out at best and slowing down at worst.


Imagine if the code had barely changed from 95 to now but we took all of the hardware improvements.


I'm old enough to remember when speed was the defining factor for everything. Benchmarks were a critical part of selling, everyone knew the MHz of their CPU, speed was the most important thing; it was the only thing.

And I'm not talking about techies, I'm talking about the man in the street.

Today, all computers and all programs are "fast enough". Most people can't name a benchmark, much less use it for comparison. Computer sales seldom focus on MHz. Your phone is either fast or slow. But actual measures are not what phone users care about.

These days it's all about capability. Most of which is assumed to exist as table stakes, and which didn't even exist in Windows 95 days.

Maybe people under-estimate how fast their PC is, but equally they grossly underestimate how much work it -is- doing.

Some, but not huge, amounts of time are spent on making things "as fast as they can be". Because customers are paying for more capability, not more speed.


So this isn't just me. I thought I was getting more impatient, but performance does seem to be degrading as it's no longer considered to be a feature. There is absolutely ZERO reason why a modern operating system or some rudimentary site with 10K users should not be blazing fast.

I have to constantly close my browser on a 12-core Intel Mac MINI just because four open sites get its fan spinning like it's a Hoover Dam turbine.


> There is absolutely ZERO reason why a modern operating system or some rudimentary site with 10K users should not be blazing fast.

This just isn’t true: optimizing for speed is always a trade-off and, when an application is fast enough, it can be wasteful to focus on speed rather than other factors


I think we all know this, but have different definitions of "fast enough".


Almost no website or application in 2024 is fast enough. HN itself is the only website I can think of which is "fast enough".


As a general rule, the people who make this decision don’t agree; which is why we’re surrounded by slow websites.


I agree. But it doesn't help when engineers are always whispering in their ear about the supposed evils of premature optimization and that things are fast enough anyway.


When I see vintage computers (Amiga, Atari) boot up straight into a REPL in an instant, I have to admit I feel confused.

Modern uses have forced a lot of layers for genericity and security...


There was nothing instant about the Amiga. It only has the bootloader in ROM. Booting to Workbench is a good 30-60 seconds from floppy, and it's not much faster with an HDD (Kickstart has some weird hardcoded wait loops). Loading new drawers (folders) is still slow (drawing one icon at a time) even on the fastest accelerators (Vampire) and an SSD.


People who lived through the time consistently report that PCs had to reach about 300MHz before they became as responsive as a 25MHz Amiga. Part of this is the design of the OS: Amiga gave user-input interrupts the absolute highest priority.


It is not only that. I remember reading an article where someone did actual measurements. Some of the delay is attributable simply to how keyboards work: the amount of key travel needed before an electrical keypress signal is generated, and so on, plus the whole processing chain that follows. That is where these older computers are much, much faster, because USB, or worse, Bluetooth, only allows for so much low latency. There are also insane amounts of signal-processing steps before the keypress even arrives as an interrupt at the CPU, let alone is turned into an actual OS input event, and then it still has to go through various layers of application software. And that is just the input side; the whole thing then has to lead to screen updates, and that is another level of technocraziness.


The Amiga keyboard uses pretty much the same architecture as PC PS/2: a microcontroller in the keyboard talking over serial (~10 kbit/s), pushing keys as soon as they are pressed, and another microcontroller on the motherboard generating an interrupt per key. USB is polled at 125 Hz. While yes, PS/2 and the Amiga keyboard will have lower latency, does ~7 ms make that big of a difference?
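
For rough numbers, a back-of-envelope (the per-link figures are assumptions, not measurements):

    # Rough input-latency budget for the two keyboard paths discussed above.
    usb_poll_hz = 125                        # default full-speed USB HID polling rate
    usb_interval_ms = 1000 / usb_poll_hz     # 8 ms between polls
    usb_avg_wait_ms = usb_interval_ms / 2    # ~4 ms average wait for the next poll

    serial_bps = 10_000                      # ~10 kbit/s keyboard serial link
    bits_per_scancode = 10                   # 8 data bits plus framing (assumption)
    serial_ms = 1000 * bits_per_scancode / serial_bps   # ~1 ms to ship one keypress

    print(f"USB polling adds ~{usb_avg_wait_ms:.0f} ms on average (worst case {usb_interval_ms:.0f} ms)")
    print(f"A ~10 kbit/s serial keyboard ships a keypress in ~{serial_ms:.0f} ms")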


Musicians can feel latencies of 1ms. It makes a difference.


> Amiga gave user-input interrupts the absolute highest priority

That's a pretty bad idea in general. Handling priorities properly is absolutely essential for the stability of a system. I'm not familiar with the Amiga, but systems weren't known for their stability back then; whole-OS crashes were much, much more common.


A typical operating system should not have stability issues just because of badly prioritized interrupts. You may have severe performance issues, sure, but if you're crashing because someone didn't handle an interrupt in time, someone designed something wrong. Maybe it's not the software's fault and some hardware engineer made bad choices, but this is generally not true today, since no mass-market OS actually guarantees interrupt latency.


Athlon + Windows 2000 was, for me, the first time a PC did not feel outright sluggish relative to the Amiga 500.


Cold booting my 7 MHz A600 to a fully loaded, functional and responsive desktop takes 17 seconds from hard drive (I just measured). Pretty decent, I'd say.

There's plenty about Amiga that is and/or feels instant. My workhorse is a 14 MHz A1200 and I use it at least once a week, so I get plenty of opportunity to compare. For its intended use cases, most things feel very snappy. Then there are of course areas where it doesn't stand a chance compared to a modern PC, even if the workload is "Amiga sized". Decompression and picture downsampling, for example.


The Amiga booted into the Workbench GUI. It didn't have a native "repl" as such, although you could open a window for a Rexx script interpreter. And if you booted from a floppy drive, that wasn't fast at all.


When people are used to virtually everything waiting on multiple round-trips over cellular connections, there's no point in optimizing local performance. An extra hundred milliseconds of lag in the UI gets attributed to the slowness of the connection.

Plus web and mobile developers have put a ton of work into animations and things like that to make slowness feel natural, which lowers expectations even further. Nobody expects a device to respond quickly. You expect to have to wait a bit for round-trips or animations or both.


> An extra hundred milliseconds of lag in the UI gets attributed to the slowness of the connection.

This is looking at the problem wrong.

I notice your shittily optimised application not just because it’s slow, but because it’s draining the battery.

This is a problem, and developers need to wake up to it. Crappy performance uses more energy because the machine is doing unnecessary work to an end that could be achieved more efficiently, often much more efficiently.

So, please, invest time in performance.


Alternate take: in a world where our entire society is trying to reduce energy usage, to minimize emissions, and so on, it strikes me as insanity that we do not demand every ounce of performance out of the hardware that we have, and equivalently, aim to minimize unnecessary demands on the hardware at the same time.

Burning up a laptop CPU and torching racks of servers and routers in some data center just because the web is full of shitty ui frameworks should be intolerable to consumers and providers. Efficiency really needs to be a goal of all system designers.


What UI framework do you like to use?


On the other hand, in a world of slightly non-responsive software, something that does respond instantly has a subconscious psychological impact on us and builds affinity.


Indeed.

I cover this in "Your Database Skills Are Not 'Good to Have'": https://renegadeotter.com/2023/11/12/your-database-skills-ar...

Specifically when I cite THIS: https://designingforperformance.com/performance-is-ux/


> Imagine if SELECT field1, field2 FROM my_table was faster than SELECT field2, field1 FROM my_table.

possibly it might not be smart enough to reorder an index on (field1, field2), or it had some weird internal constraint on tuple ordering, or it was simply different enough to go down a different query plan sometimes, or maybe there’s something around the actual physical ordering on disk?

but yeah postgres and SQLite are modern marvels that we take for granted… myISAM was not a good time, or at least people tended to violate the correctness/visibility rules it promised (iirc) or something like that. The fact that you can just open up a stable, well-tested sql instance that runs against disk, or drop Postgres into most transactional use-cases, and generally not have to worry about unduly fighting the DB itself, is underappreciated.
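
An easy way to convince yourself that the column order in the SELECT list doesn't change the plan on a modern engine (a quick sketch using Python's built-in sqlite3; the table and index names are made up):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE my_table (field1 INTEGER, field2 INTEGER)")
    con.execute("CREATE INDEX my_index ON my_table (field1, field2)")

    # Both orderings should produce the same query plan.
    for sql in ("SELECT field1, field2 FROM my_table",
                "SELECT field2, field1 FROM my_table"):
        print(sql, "->", con.execute("EXPLAIN QUERY PLAN " + sql).fetchall())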


That's probably a useful perspective for someone that makes software... But developers and most other folks interact and reason about computers and software pretty differently. Generally, people have practical problems that computers can solve, and doing so with as few steps, things to learn, or any other hassle is what they really care about. Anything more than that is a nice bonus, but pretty unnecessary. Unless something is really sluggish, they just don't really care much. Some of it is that lots of things on computers justifiably require a bit of a wait, and it takes a nontrivial bit of understanding to know if it's justified. Hell most junior devs don't even have that intuition tuned accurately.

IMO Product Managers should be steering resources based on what end users want rather than what we feel they should want. Now if we could just get more of them to listen to actual users more than marketing people that are more interested in growing their list of feature bullet points than making software that's useful to anybody at all.


It's just that single-core performance has sort of plateaued (as expected). We just can't make it 2x faster every couple of years as we used to, and it is not really perceptible otherwise, especially since most software besides artificial benchmarks won't max out the computing capacity of CPUs; it's mostly waiting for memory and I/O.

What we can increase is parallelism, but most problems simply can’t be parallelized for a good enough percentage of the workload, and Amdahl’s law can’t be circumvented.

Nonetheless, many other things can be scaled, like resolution (think how much bigger 4K@120fps is than HD@60fps), file sizes, etc., so the workload does increase and not everything cancels out, hence the perceived slowdown in certain cases.


> There is absolutely ZERO reason why a modern operating system or some rudimentary site with 10K users should not be blazing fast.

I find dismissive statements like this to be extremely useless to the conversation. I agree that a modern operating system could be blazing fast. But if that’s not the case, then there obviously is a reason.

If you know the reason why software in general has gotten slower (compatibility? bad devs? more features?), and have already dismissed it as an invalid or insufficient reason, then share what you think that reason is.

If you, like me, don’t know what that reason is, then let’s approach it with a problem-solving attitude and try to find out.


Virtually everything in a modern development stack contributes to the problem. Buses are optimized for bandwidth at the cost of latency. CPUs are broadly optimized for throughput at the cost of latency. OS design is optimized for throughput and isolation, again at the cost of latency. Language runtimes are optimized for feature-richness, again at the cost of performance. Languages are optimized for quality of life features that carry hidden performance costs, like virtual functions and exception-based control flow. Developers compound this with dozens of layers of indirection and abstraction until finally, eventually there's a message on screen for the user.

There isn't a singular cause to brainstorm. It's everything in the entire stack optimizing just a bit in other directions that produces the end result of disappointing, sluggish computers despite their actual capabilities. This discussion isn't going to tread any new ground either. All of these things are known, because there are people who can't afford the overhead of modern computing, like the HFT firms that are all on FPGAs now, and real-time embedded systems. It's not hard to do; it's just tedious and expensive, the way development used to be for everyone.


But then you go play a game on windows, and everything works instantly. When performance is critical to have a good experience, it can be done, and pretty well.

Most desktop software just doesn't give a shit about performance.


Games have always been one sector where squeezing out every ounce of performance is the goal. And, unlike web devs, game developers test their products on inferior hardware, not just the latest and greatest video cards.


I'm fairly sure you would be equally upset with your Word document taking multiple seconds to load were it lower bandwidth. Or if you couldn't have a 4K screen at 120 Hz, which is severalfold more data than HD@60Hz.


Capitalism's constraints won't even allow adequate bug fixing. There sure as hell isn't time or money to optimize things.


UI latency has gotten horrendous, both desktop and web.

That is what people are experiencing: 500-3000 ms delays (or more) for basic UI interactions, frozen UIs, jerky/laggy autocomplete and UI renders. Like the classic 'button is a different button by the time you see the old button and click on it'.

On incredibly lightly loaded and overpowered hardware.

Everyone has been focused on some core algorithm, and completely ignoring the user experience. IMO.


I remember learning that you get about 100 ms for a basic UI interaction before a user perceives it as slow. And you get about 1 s for a "full page navigation"; even if it's an SPA, users are a bit more understanding if you're loading something that feels like a new page.

Getting under 100ms really shouldn't be hard for most things. At the very least it should be easy to get the ripple (or whatever button animation) to trigger within the first 100ms.


It is mental just how different the video game and (web/desktop front-end) realms are.

In the former, one can have a complicated and dynamic three-dimensional scene with millions of polygons, gigabytes of textures, sprites, and other assets being rasterised/path-traced, as well as real-time spatial audio to enhance the experience, and on top of that a real-time 2D UI which reflects the state of the aforementioned 3D scene, all composited and presented to the monitor in ~10 ms. And this happens correctly and repeatedly, frame after frame after frame, allowing gamers to play photorealistic games at 4K resolution at hundreds of frames a second.

In the latter, we have 'wew bubble animation and jelly-like scroll, let's make it 300 ms long'. 300 ms is rubbish enough ping to make for miserable experiences in multiplayer games, but somehow this is OK in UIs.


Agree it's like two separate worlds. Games and web aren't 1:1 tho in relation to whether visual responsiveness is blocking a task.

Games need ultra-responsiveness because rendering frames slower is essentially blocking further user interaction, since players are expected to provide a constant stream of interaction. Being 'engaged' is essentially requiring constant feedback loops between input/output.

On the web, the task of reading a webpage doesn't require constant engagement like in games. UI should behave in more predictable ways, where animation is only there to draw association, not provide novel info. Similarly, UI animations typically are not (or should not be) blocking main-thread responsiveness and should be interruptible, so even low frame rates don't break the experience in the same way.

But still, your point stands; it's crazy what we've come to accept.


I also expect my everyday tools to be responsive; e.g., if a "desktop" application lags while typing, I'm uninstalling that shit (if there is an alternative, sigh).


I find VS Code unusable for this reason - typing is like communicating with an app in a parallel world.


It’s a good thing we don’t talk about Eclipse, hah.

How can a UI framework be abused so heavily that it’s that frustrating to try to use?


> It is mental just how different the video game and (web/desktop front-end) realms are.

There is absolutely nothing mental about it, and I'm saying this as someone who's worked on a couple of game projects myself.

Somehow people making these comparisons are never willing to put their money where their mouth is and give random web pages the same level and amount of access to system resources as they give to those "photorealistic games at 4K resolution at hundreds of frames a second".


Usually these issues are caused by doing work in the UI thread (easy, when it’s ‘cheap’) synchronously.

All UI frameworks end up doing the UI interactions on a single thread, or everything becomes impossible to manage, and if you're not careful, it's easy to not properly set up an async callout or the like with the correct callbacks.

It is easy to make a responsive, <100 ms UI; it's often harder to keep it that way.
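
A minimal sketch of the pattern (using Python's Tkinter purely for illustration; the same idea applies to any single-threaded UI toolkit): give immediate feedback on the UI thread, run the slow work on a worker thread, and hand the result back via a queue that the UI thread polls.

    import queue
    import threading
    import time
    import tkinter as tk

    results = queue.Queue()

    def slow_work():
        time.sleep(2)                      # stand-in for a network call or heavy computation
        results.put("done")

    def on_click():
        label.config(text="working...")    # immediate feedback, still on the UI thread
        threading.Thread(target=slow_work, daemon=True).start()

    def poll_results():
        try:
            label.config(text=results.get_nowait())   # pick up finished work
        except queue.Empty:
            pass
        root.after(50, poll_results)       # keep polling without blocking the UI thread

    root = tk.Tk()
    label = tk.Label(root, text="idle")
    label.pack()
    tk.Button(root, text="Do the thing", command=on_click).pack()
    root.after(50, poll_results)
    root.mainloop()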


The only time I see delays like that is when there is something that has to happen, like a network data fetch, database lookup, etc. I've written a ton of GUIs in JavaScript/Python and they're all indistinguishable from the C++ Qt apps I've written: basically faster than a human can hope to perform another operation, short of queuing up keyboard commands via a "keyboard only" interop (say, in Emacs). When I've seen slowness, the latency was always, per what I said before, a data fetch in some format from a slow database or network connection.


No doubt I'm picking an easy target here, but it takes like 700 ms to switch back and forth from the Chat tab to the Calendar tab in the Teams desktop client on my boring work-issued laptop (i.e. commodity hardware). This is repeatable, first time, every time. It doesn't even bother to animate anything or give feedback that a UI interaction has occurred until 500+ ms after the click.

Some things do run very quickly, for sure, but so many of the high touch pieces of code out there from big name corps have some of the worst performance. Hundreds of millions of people use Teams, and many of them use it a lot throughout the day. You must just be getting lucky in what apps you use on a regular basis.


I think they’re referring to actual implementation, which in their experience would require some sort of massive architectural stupidity to produce a slow UI on desktop.

And you’re referring to your (and most other people’s) daily experience, which is of a major software firm producing daily used software with a super slow UI on desktop.

With a little cynicism, these two views are quite compatible.


I wonder to what degree people are actually experiencing this.

I'm on a 5-year-old Intel MacBook, and I think in my daily experience, core software (browsers, Emacs, Keynote, Music) is pretty snappy.

I do routinely work with some extremely frustratingly slow software, but it's pretty stark how much its performance stands out as, well... exceptionally bad.


> button is a different button by the time you see the old button and click on it

This is somewhat exacerbated by, ironically, UI mostly being async these days. Back then, if an app was slow, it would usually block the UI thread outright, so you couldn't click anything until processing was done. But these days the UI is usually "responsive" in the sense that you can interact with it, and lag instead manifests as the UI getting out of sync with the actual state (and constantly trying to catch up with it, causing the problem you describe).


I think this problem is a bit overblown, as we tend to get very angry at the relatively few offending apps and fail to notice the cases where things just work. So it might just be some human bias over-amplifying things.

Like, there are websites that absolutely suck, but that’s mostly due to some idiotic management decision to add 4 different tracking bullshit libraries, and download 6 ads per click. Thinking about the regular software I use.. could it be better? Certainly. But it is very far from unusable.


UI lag is a problem, for sure -- and so are some of the dynamic layouts that are meant to "solve" it[1].

One thing I've noticed lately is mouse lag. Like, say, in Ye Olde Start Menu on Windows: Move the mouse to the Start menu, and press the mouse button down. Nothing happens. Release the button, and then: Something happens.

The menu is triggered on button-up events, not button-down events. This adds a measurable delay to every interaction.

Same with Chrome when clicking on a link: Nothing happens until the button is released. This adds a delay.

I mean: Go ahead and try it right now. I'll wait.

And sure, it might be a small delay: After all, it can't be more than a few milliseconds per click, right[2]? But even though the delay is small, it is immediately obvious when opening the "Applications" menu in XFCE 4, where the menu appears seemingly instantly as soon as the mouse button is pushed down.

[1]: It took me more than three tries to pick a streaming device from Plex on my phone yesterday. I'd press the "cast" button, and a dynamic list of candidate devices would show up. I'd try to select the appropriate candidate and before my thumb could move the fraction of an inch to push the button, the list had changed as more candidate devices were discovered -- so the wrong device was selected.

So I then had to dismantle the stream that I'd just started on the wrong device, and try (and fail) again.

[2]: But even small delays add up. Let's say this seemingly-unoptimized mouse operation costs an average of 3ms per click. And that 100 million people experience this delay, 100 times each, per day.

That's nearly an entire year (347 days) of lost human time for the group per day, or 347 years lost per year.

Which is 4.4 human lifetimes per year lost within the group of 100 million, just because we're doing mouse events lazily.
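
The arithmetic holds up, for what it's worth (a quick check using the assumed numbers above):

    delay_ms = 3                 # assumed cost per click, from above
    clicks_per_day = 100
    people = 100_000_000
    lifetime_years = 79          # assumed average lifespan

    lost_ms = delay_ms * clicks_per_day * people
    lost_days_per_day = lost_ms / 1000 / 86400
    print(f"~{lost_days_per_day:.0f} person-days lost per day")             # ~347
    # days lost per day is numerically the same as years lost per year
    print(f"~{lost_days_per_day / lifetime_years:.1f} lifetimes per year")  # ~4.4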


> Same with Chrome when clicking on a link: Nothing happens until the button is released. This adds a delay.

UI buttons have activated on the mouse-up event since forever ago. There's a reason: to give the user the choice of backing out of an action even after the user has pressed the corresponding button. They can just move the pointer off the button, release, and the action will not be committed.

John Carmack made this same point recently. John Carmack is wrong. Maybe the trigger on your gun in Doom needs to respond on mouse down, for the immediacy and realism -- that's how real gun triggers work, no takesie-backsies once it's pulled. But especially for potentially destructive UI actions such as deleting or even saving over a file -- or launching the nuclear missiles -- you want to give the user every opportunity to back out and only commence the action once they've exhausted all those opportunities.

It's been that way since the 1984 Macintosh -- since a time people remember as having much snappier UIs than now.

Besides which, worrying about the few milliseconds lost each time a button waits for the release is pennywise and pound-foolish; there are much larger, more egregious sources of lag (round trips to the server, frickin' Electron, etc.) we should work on first.


The mouse event thing is a bit misleading.

Mouse interactions have (at least) 5 different events: hover, down, up, click and double click. The interactions you describe all happen “onclick”, which requires a down and up event to happen consecutively.

I get your point, that small delays add up, but mouse events aren't a great example, IMO. Each of the events has a purpose, and you want to make sure that a button is only activated with an onclick, not just an ondown.
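
For illustration, here's how the distinction looks in a toolkit (a sketch using Python's Tkinter; the widget is made up): press and release are separate events, and a "click" effectively means the release happened while still over the widget, which is what gives the user the chance to back out.

    import tkinter as tk

    root = tk.Tk()
    button = tk.Label(root, text="Pretend button", relief="raised", padx=20, pady=10)
    button.pack(padx=40, pady=40)

    # Fires as soon as the mouse button goes down over the widget.
    button.bind("<Button-1>", lambda e: print("down (could show a pressed state immediately)"))

    # Fires on release; only treat it as a "click" if the pointer is still over the widget.
    def on_release(event):
        inside = 0 <= event.x <= button.winfo_width() and 0 <= event.y <= button.winfo_height()
        print("click" if inside else "released outside: action cancelled")

    button.bind("<ButtonRelease-1>", on_release)
    root.mainloop()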


I appreciate the clarity in nomenclature.

> Each of the events have a purpose and you want to make sure that a button is only activated with an onclick, not just an ondown.

That's stated as if it is a rule, but why is that a rule?

And if it is a rule, then why does Chrome -- for example -- handle this inconsistently?

Clicking on a different Chrome tab sure seems to happen with ondown, but clicking on the "reply" button below the text box I'm typing into works with onclick.


Doesn't Windows do the same when clicking a window titlebar?

Maybe it's so if you're going to start dragging the window/tab you can see what is in it.


Clicking anywhere in a background window (including the titlebar) in Windows 10 responds by immediately raising and focusing the clicked-on window when the mouse button is first pushed down.

The inconsistency is bizarre, since some here say that clicking-and-releasing before a resultant thing is allowed to happen is a hard-and-fast rule of GUI implementation that has been in place for decades, but that just doesn't seem to be the case at all.


There are no hard and fast rules, only conventions that are context dependent.

Buttons normally only respond to the “on click” event. This lets you move off the button if you change your mind mid-click.

Window focus could be (I haven’t tested) an “on down” event because you might want to see what’s behind it while doing something else before you release the button (like drag it around). But focus used to be “on hover”, where just moving your mouse over a window brought it to the foreground. “On up” wouldn’t make sense because if you wanted to do something like move the window around, you couldn’t as you’ve now released the mouse.

It all depends on what you’re trying to do and the OS. Each OS has a design language that “tries” to bring some consistency to event handling. But ultimately, it’s up to the application to handle many things.


I occasionally click a button only to change my mind mid-click. So I can move the mouse pointer off it and then let go of the button, in effect avoiding the operation. This ability to back out is good for command-like events. Changing tabs, not so much perhaps, but it's probably traditionally done for consistency.

X window was the odd one out in the old days that would show a context menu on mouse down. Made it feel a bit unrefined.


Which, if any, GUIs have ever activated anything decisive on mousebutton-down events? IIRC, everyone uses mousebutton-up events as the imperative factor.


XFCE 4 is that way for most things that I've tried right now: The menu panels and desktop context menus work on button-down.

Switching Chrome tabs works on button-down. Activating the three-dot menu in Chrome also works on button-down (which is good) but then it needlessly fades from 0% to 100% opacity (which is a crime against nature).

I didn't even have to go digging in the memory hole to find this. It's right here in front of me.


Sure, many menus open on mousebutton-down events, but nothing is activated or started until the mousebutton-up event.

> Switching Chrome tabs works on button-down.

True; the same in Firefox. That is interesting. Perhaps it’s because simply selecting a tab is considered a safe and reversible operation. I did use the word “decisive” for a reason.


In fact, specifically for context menus this is a feature: In KDE ones, you can button-down to open the context menu and then release above the menu entry to activate it, making the entire process a single-click affair. In standard Windows ones it takes two full clicks, which is one of those tiny inconveniences that drives me crazy when using it. Of course, the Win way also works in KDE.


Windows does this for top-level menus for the same reason.

Or at least it did in classic Win32 UI. You can still see this in action if you open, say, Disk Management. But in Win11 Notepad (which is modern XAML), holding mouse down will open the drop-down submenus, but you won't actually be able to activate an item by releasing the mouse button while hovering over it, so it seems that someone partially copied the design without understanding its purpose.


Same in macOS: a long press can substitute for a right click in some things I've tried. So obviously you can't use mouse-down for that unless you have a "time out", which defeats the purpose. Button presses are more complicated than what people expect, especially on embedded systems.


Don't games do this all the time?


I’m not sure games count as GUIs as pertaining to this discussion. GUIs in games would qualify, sure.


> The menu is triggered on button-up events, not button-down events.

I just tried this on Firefox and can confirm similar behaviour. Some of the things I clicked on appear to have some alternate long-press function despite having a pointer device with multiple interaction modes.

It seems we have condemned our desktops to large latencies on the off-chance someone might try and interact with them using touch.


Mouse click considered complete only on mouse-up rather than on mouse-down was a thing before web browsers even existed, much less before touch UIs. Windows worked that way for as long as I can remember, and IIRC so did macOS.


> Computer sales seldom focus on mhz.

For good reason. Clock speed isn't the sole factor in performance. A PowerPC 970 can clock up to 2 GHz, but it only achieves 8 instructions per cycle (on average!) for arithmetic instructions. Modern mobile CPUs clocked at a fraction of 2 GHz achieve much higher throughput for arithmetic. The branch predictor in the PPC chip, while impressive for its time, is simple in comparison to those of modern processors. A CPU without pre-fetching is going to have much worse latency and throughput than any CPU with it. So on, so forth.

Overall, it's naive at best to focus on clock speed for performance. I used to fixate on it when I was a teenager, but not anymore.


8 instructions per cycle is a lot, especially on average! I checked and the PPC 970 could fetch, decode, and issue at most 8 ALU ops per cycle, but it could only retire 5 instructions per cycle, so its average IPC is probably less than 4.

https://www.anandtech.com/show/1702/2


8 RISC instructions per cycle is much easier to achieve than CISC ones.


If only the PPC 970 had been able to sustain an IPC of eight... it would've been a true miracle to run at 2 GHz in 130 nm (later 90 nm). An IPC of ≥8 for more than a few cycles is something you'll only see during unrealistic micro-benchmarks (e.g. those used to reverse engineer micro-architecture limits) on the widest of modern out-of-order CPU cores. To really sustain such an IPC in a real-world application workload you still need unusual applications and hardware, e.g. a really wide VLIW DSP and *well* *optimised* assembler code.


It's the same with cars. Manufacturers have long since made cars that are "fast enough" and it's less of a differentiator than it used to be.


Not really; top speed hasn't been a selling point for cars since maybe the bootlegging days. They've been optimizing for either power or gas mileage for a long time.


No, they are optimized for status signalling.


Recently I've been playing around with Dear ImGui, and it's so fast that now all other pieces of software feel sluggish in comparison. It starts up near instantly and is incredibly snappy. Even through the Python bindings it's still very fast.

Meanwhile I find it very challenging to build performant web apps without digging myself into a framework-shaped hole, even when the app shouldn't be doing anything particularly complicated.

Maybe part of it is a skill issue on my end, but it really feels like a team of smart engineers could help bridge the performance gap between native and web. Or maybe create an island that sits in the middle.
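
For anyone curious what that looks like from Python, here is roughly the shape of an immediate-mode loop (a sketch based on the pyimgui bindings with a GLFW backend; exact module paths and setup details may differ by version): the whole UI is re-declared every frame, which is part of why it feels so immediate.

    import glfw
    import imgui
    from imgui.integrations.glfw import GlfwRenderer
    from OpenGL import GL as gl

    imgui.create_context()
    glfw.init()
    window = glfw.create_window(800, 600, "demo", None, None)
    glfw.make_context_current(window)
    impl = GlfwRenderer(window)

    while not glfw.window_should_close(window):
        glfw.poll_events()
        impl.process_inputs()

        imgui.new_frame()
        imgui.begin("Hello")          # the entire UI is rebuilt each frame
        imgui.text("Immediate mode: no retained widget tree to get out of sync")
        if imgui.button("Click me"):
            print("clicked")
        imgui.end()

        gl.glClear(gl.GL_COLOR_BUFFER_BIT)
        imgui.render()
        impl.render(imgui.get_draw_data())
        glfw.swap_buffers(window)

    impl.shutdown()
    glfw.terminate()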


The performance hole is the JavaScript. Drop back to rendered HTML and some simple CSS rules for display, avoiding redraws and never having a single reflow, use system fonts, and host resources on the same domain as your HTML. You will be amazed at the speed!


But pure JavaScript in itself is rarely the problem; there are plenty of CPU-demanding websites that are responsive and performant.

IMO the problem is what JS enables: making lots of very slow Ajax calls for every small interaction. Which works great in development machines but sucks when a real network is involved.

But your point stands, dropping to server-rendered HTML makes it fast again.


It's not only the language itself, but also the way it lends itself to the developer.

I've been using Gleam in Vue[0] for my task app Nestful[1] and it surprisingly reduced a lot of the jank, because... it's sync.

Awaits are very often used for things they shouldn't be used for, and they have compounding overhead.

[0] https://github.com/vleam/vleam

[1] https://nestful.app/


I've only done htmx and HTML + jQuery (& JS) interfaces for the web, but both were very reactive and hard to tell from native. What gets you is data fetches over networks: they can seem native when local and glacial over a slow network connection. I don't think it's necessarily the React/htmx/etc. frameworks; it's usually the architecture. JavaScript + DOM can be really fast, at least on modern hardware.


Pure React will be plenty fast. Really, try it. It's when websites add 6 tracking libs, use some less-than-adequate developer who writes questionable code, and load 3 ads per user interaction that they slow down, the same way whatever immediate-mode UI would, even were it written in hand-optimized assembly.


> Meanwhile I find it very challenging to build performant web apps without digging myself into a framework-shaped hole, even when the app shouldn't be doing anything particularly complicated.

This so much.

I semi-recently made an Android app with Flutter and a custom storage solution of just simple mmap calls. It cold starts instantly, even after a long time of not being opened.

I then made a web thing, PWA style offline-first. It has a noticeable loading phase when you start it up, even though the entire thing is just static files on disk. Looking at the performance tab in chrome dev tools makes me want to cry. Things take way too long. And don't get me started on Indexeddb. (I realize that profiling incurs overhead, but it's still embarrassing. Indexeddb isn't even instrumented and it takes 20+ms to fetch one record out of an index)

SQLite on the frontend was trending for a while recently. I tried using that to get around terrible Indexeddb, but the startup time was so bad. Several seconds just to initialize the library and read the database file. It's pretty quick to query once running, but a non-starter if you want to show users their data quickly upon starting the app. It seems that even just chugging through the Wasm takes a significant amount of time...
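
For contrast, the mmap approach really is about this small (a hedged Python sketch, not the parent's Flutter code; the file name and record layout are made up). The OS page cache does the heavy lifting, so a warm read is effectively a memory copy:

    import mmap
    import os
    import struct

    PATH = "records.bin"
    RECORD = struct.Struct("<q32s")      # made-up fixed-size record: int64 id + 32-byte payload

    # Write a few fixed-size records once.
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            for i in range(1000):
                f.write(RECORD.pack(i, b"payload-%d" % i))

    # "Opening the database" is just mapping the file; reads are then memory accesses.
    with open(PATH, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        offset = 42 * RECORD.size
        print(RECORD.unpack(mm[offset:offset + RECORD.size]))   # microseconds once cached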


It's hard to say after all this time, but I think I was frequently waiting on everything in the Win 95 days, while now even a small delay is annoying and stands out.

Pressing a key to start a launcher, typing and using enter within a second does not seem to have been possible back then.


That's true, loading times improved drastically with SSDs. Responsiveness isn't just about loading times, though. E.g. if you're typing text, some web sites (thankfully not this one) manage to introduce a measurable delay between a key press and when the letters appear. Computers in the Win 95 era were too slow to do anything complicated on each key press, so usually there was no extra processing and thus no extra delay. Also, few remember, but CRTs weren't limited to 60 Hz, and I remember using e.g. 100 Hz normally (with a lower resolution), and the difference was quite noticeable.


If there's just a measurable delay then it's usually tolerable. I _hate_ when the broken JS causing that delay also skips characters if you type too fast, has a setting to wipe the form "on-load" (after you've half filled it out because the site is too slow), or otherwise mangles legitimate text input.


My computer boots in about 10 seconds. That's probably about average? In the Windows 95 days that would have seemed almost impossible.

Most applications and games launch in single-digit seconds. I don't recall this being the norm in the Windows 95 days either.


Just spinning rust turning at 5400 rpm would have made that all but impossible. With modern NVMe devices, waiting for data to load isn't really a bottleneck. To get close to today's 1200 MB/s+ speeds, we had to span 16 7200 rpm drives in RAID-0.

With all of that, it does feel wrong to have to wait for apps to load. How much of that is the calling home to validate the app has been signed?
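
The RAID math roughly works out; back-of-envelope (the per-drive figure is an assumption, not a measurement):

    nvme_mb_per_s = 1200       # modest by current NVMe standards
    hdd_seq_mb_per_s = 75      # assumed sustained sequential rate of a 7200 rpm drive
    print(f"~{nvme_mb_per_s / hdd_seq_mb_per_s:.0f} striped drives to match {nvme_mb_per_s} MB/s")   # ~16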


Why not find out by turning off your wifi and starting the app?


But back in those days, you had confidence that the event loop didn't randomly drop things, and you could type ahead of what was on screen with confidence.


It's easy to see how people would come to expect everything to be slow. Python is the most popular programming language both in schools and in general.

I can't think of a slower language. Then all your IPC is done through JSON, as if everything were a website hundreds or thousands of miles away, instead of something more sensible such as shared memory.
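
To illustrate the gap (a hedged Python sketch; the payload size is arbitrary and timings vary by machine): a JSON round-trip serializes and re-parses the whole payload, while shared memory just hands the other process a view of the same bytes.

    import json
    import time
    from multiprocessing import shared_memory

    values = list(range(1_000_000))

    # JSON round-trip: serialize, "send", deserialize.
    t0 = time.perf_counter()
    decoded = json.loads(json.dumps(values))
    print(f"JSON round-trip:        {time.perf_counter() - t0:.3f} s")

    # Shared memory: both processes would map the same block; no copying, no parsing.
    payload = b"".join(v.to_bytes(8, "little") for v in values)
    t0 = time.perf_counter()
    shm = shared_memory.SharedMemory(create=True, size=len(payload))
    shm.buf[:len(payload)] = payload            # writer side
    first = bytes(shm.buf[:8])                  # reader side would attach by name and read
    print(f"shared-memory hand-off: {time.perf_counter() - t0:.3f} s")
    shm.close()
    shm.unlink()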


Java? Its speed, or the lack thereof, was a common joke in the 90s and 00s.


But that changed a long time ago. Now, the main bottleneck with Java is memory usage, but 20 years of JITing have turned Java into a language more than fast enough for most tasks.

Won't beat Rust on most benchmarks, of course.


The gap between Java and [C++,Rust,Obj-C,Zig?,D?] is approximately equivalent to 5-8 years of CPU hardware improvements these days.

Software engineer time is still often the primary cost metric, but Java didn't ever get actually fast: things like constant pointer chasing and poor cache utilization still hurt it significantly in "regular" code. So, too, do the lack of compile-time optimizations and the limited thoroughness and comprehensiveness of the JIT.


The culture around Java doesn't help either, with its love for insane amounts of abstraction and overengineering.


Culture definitely hurts it a lot. It seems like the answer to any question involving Java and anything performance-related is "you don't, you're stupid, the JIT fixes everything, never measure, never benchmark, premature optimization is the root of all evil which means never, ever attempt to optimize anything ever".

There are pockets of people doing Java + performance, but they are far & away the exception, and they are frustratingly insular about what they do & why they do it. And yeah, it tends to often go against every bit of guidance from things like Effective Java & similar.


That is indeed the case, but at the same time, even early Java (back when it was still a bytecode interpreter without JIT) was considerably faster than Python is today. Python is kinda slow by design - just look at the descriptor protocol for an example.


Running a Java program is still slow as hell in 2024.

It still takes many seconds just to start the program.

Maybe the JRE is simply a pain to start?


It has a lot of upfront costs, but that is separate from "hot" performance once everything is loaded.


> having the intuition of how fast computers can work

I wish I never developed it. It turned what used to be mild annoyances into questioning my career choices.


I think some of us see the past through distorted lenses.

Anything, from booting the computer to opening any random application, is much faster on my Win 11 machine than on my old Win 2000 one (with good hardware for the time).

I still remember the hourglass on the icon when opening so many applications (let alone several together...).


You are right: power-usage concerns and SSDs have done wonders in making some operations seem instant. But on the other hand, there's much to complain about in how invasive web tech has become as online apps replace native ones. You don't launch Office anymore, you connect to it, making a lot of local optimizations moot.


Things were pretty slow in the Windows 95 era too. It’s easy to look back on that era with rose tinted glasses but even back then people were constantly moaning that software bloat outpaced computer specifications.

After more than 3 decades of writing software, I can confidently say that computer specifications are a lot like a person's house or flat. The more space you have, the more ways you find to fill that space. And the less space you have, the more ways you find to optimise for that space.


I can only agree to this. 500 times faster and still slower than Windows 3.1 for almost the same set of features.


Making OSes more secure definitely slowed them down. I remember that there was a noticeable difference between a fresh install of Windows XP and an installation that had been updated with the latest service packs after everyone became aware of just how vulnerable the OS was to attacks.


AFAIK this isn’t because of the security updates but because the filesystem becomes fragmented and spinning disk is awful at random seek. Defrag helps a little but nothing like having big files laid out contiguously.


I had a Dell 333p from 1992 running Windows 3.11 via DOS 6. For many years I would dig it out and see it beat newer PCs from boot-up into editing a Word document. And Word 2.0 already had a lot of the features that I suspect are still the only ones most people use these days.


My GNU/Linux system using sway window manager feels at least as fast as Windows 95. I actually have been able to appreciate speed increases with computer upgrades. That is, until I open a web browser...


My GNU/Linux system using LXQt feels faster than I remember Windows 95 being. I attribute this mostly to higher refresh rate on the monitor, and optical switches and faster polling on the keyboard and mouse. Modern gaming hardware is good even if you don't use it for games.


Another aspect is that a lot of the enhancements bought by insane hardware speed are not that crucial, to my mind, when I use a computer. When booting NT5... I realize that just a bit of fading or slide-in once in a while is just about right. Even a limited UI framework, with some flicker... doesn't bother me, and there's a strange feeling of being bare-metal / lean / primitive that felt good the last time I used it.


They're even less capable. Lots of basic consumer software, like email clients and word processors, is less functional than the corresponding applications of 25 years ago.


> Yes, they're much more _capable_, but usually even less responsive than e.g. in Windows 95 days

Hard disagree: all applications back then needed splash screens to reassure the user something was happening while the application was loading components into RAM. Today, most applications load within seconds.

I booted an old computer from 2009 just the other day, and everything felt sluggish from boot to launching applications.


Casey Muratori's Performance-Aware Programming is one of the few courses spreading this message today. We definitely need more of this. JS devs coming into the industry would think Slack's 7-second channel switch is the norm and not due to poor architecture.


This is just Windows, tbh. Clicking around my iPhone, everything loads essentially instantly.


Because they got smaller at the same time as they got faster.

We pretty much locked in a latency pattern (e.g., an acceptable upper bound for user path completion speed) such that we're happy with the speed things go at. Much faster and it feels weird.

We just want to continue to shrink it down.


I dunno man, after we moved to SSDs I felt like things sped up dramatically.


Memory latency hasn't improved much since Windows 95, and lots of stuff chases pointers, though L3 caches now are bigger than all the RAM those machines had, and L2 is matching or close.


Sorry, but I seem to have lived in a fundamentally different universe than you. I remember that my 386 took longer to boot to DOS (which is not even an operating system by today's standards) than my current laptop takes to come up.

When Windows 95 was introduced, it took so long to boot that it wasn't even worth it for me. I remember that a popular magazine did an April Fools' joke that the new Intel CPU was so fast it could boot Windows within a few seconds.


Honestly, we ALL know that the reason modern computing sucks is JavaScript. And until we replace it with something fundamentally less awful, it'll eat every last CPU second we have.


The fact that we took JavaScript out of the browser and started allowing it to be used server-side was a sad day for me. Like, who let the patients run the sanitarium with that decision?


I think people often don't go the extra mile to make things resource-efficient or snappy (they may or may not have good reasons not to, of course).

It starts with choosing the right algorithms, using an index instead of doing a linear search, minimizing network traffic, ... but sometimes there are just unnecessary sources of slowness.

I was tinkering with a react-native app the other day which shows a map and lets the user click on a building, and then highlights the building and shows some information. There was a noticeable delay after clicking which made it not much fun to use. So I spent a couple of evenings (hobby time, not company time) integrating a proper spatial database with a fast spatial index and precaching a lot of data, so I didn't have to hit the network. It got snappier, and I could query a lot of geometry much faster, but it still felt slow. Then I looked into the source of the libraries, and in some react-native glue code I found that the event handler was waiting after the first tap for a double-tap, and thus would only register the tap after a couple hundred ms. One tiny change and it became blazing fast.

I believe most apps could do most things instantaneously. Google can find any document on the net in a few ms, after all. But of course not everybody has the time and money (or even skills) to polish their apps that much. If the customer will buy your app when it is good enough then why should you put in more work (besides from a personal sense of craftsmanship)?
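
A coarse grid index is often all it takes to make that kind of tap lookup instant (a hedged Python sketch with made-up data; a real app would use an R-tree or a spatial database, as above): bucket geometries by cell so a tap only checks a handful of nearby candidates instead of scanning every building.

    import random
    from collections import defaultdict

    CELL = 0.001                                   # grid cell size in degrees (assumption)

    # Fake "buildings": id plus a bounding box (minx, miny, maxx, maxy).
    buildings = []
    for i in range(100_000):
        x, y = random.random(), random.random()
        buildings.append((i, (x, y, x + 0.0005, y + 0.0005)))

    # Build the index once: register each building in every cell its bbox touches.
    index = defaultdict(list)
    for bid, (minx, miny, maxx, maxy) in buildings:
        for cx in range(int(minx / CELL), int(maxx / CELL) + 1):
            for cy in range(int(miny / CELL), int(maxy / CELL) + 1):
                index[(cx, cy)].append((bid, (minx, miny, maxx, maxy)))

    def tap(x, y):
        # Only the buildings in the tapped cell need a precise hit test.
        for bid, (minx, miny, maxx, maxy) in index[(int(x / CELL), int(y / CELL))]:
            if minx <= x <= maxx and miny <= y <= maxy:
                return bid
        return None

    print(tap(0.5, 0.5))                           # a few comparisons instead of 100,000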


One day I checked top and noticed that I had a Python process using 60 GB of RAM. Slightly worrying. It turned out I had a Jupyter notebook open from a week ago that was casually consuming 60 GB with a huge dict.

So part of the reason may be that developers tend to have access to hardware which isn't representative of a typical user. I probably could've written a more clever algorithm at the time, but why bother for a one-off?


They don't know how; that's the entire problem. Six months in a crappy boot camp doesn't teach you a single thing about computers qua computers. The industry has been suffering from mediocrity for so long that even today's managers don't know what performant software looks like. But as long as people get paid, nobody cares. It's depressing as all hell.


It is. I'm contracting for a company right now, and they have a number of scripts to automate various day-to-day things. More than once I've taken a look at them and been able to 100x their speed with a little common sense: caching some things, not pulling ALL the data when it's not needed, precalculating stuff outside of hot loops, etc. Really basic stuff.

No one cares anymore. I don't get it.
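
The kind of fix being described is usually as mundane as this (a hedged Python sketch with made-up stand-in functions; the "slow" and "fast" versions do the same work):

    import functools
    import time

    def fetch_user(user_id):
        time.sleep(0.005)                  # stand-in for a network or database call
        return {"id": user_id, "discount": 0.1}

    def load_tax_table():
        time.sleep(0.02)                   # stand-in for loading an invariant config
        return {"rate": 0.2}

    def total(order, user, taxes):
        return order["amount"] * (1 - user["discount"]) * (1 + taxes["rate"])

    orders = [{"user_id": i % 10, "amount": 100} for i in range(100)]

    # Slow: refetches the same users and reloads an invariant table inside the hot loop.
    def process_slow(orders):
        return [total(o, fetch_user(o["user_id"]), load_tax_table()) for o in orders]

    # Fast: hoist the invariant out of the loop and cache repeated lookups.
    @functools.lru_cache(maxsize=None)
    def fetch_user_cached(user_id):
        return fetch_user(user_id)

    def process_fast(orders):
        taxes = load_tax_table()           # loaded once
        return [total(o, fetch_user_cached(o["user_id"]), taxes) for o in orders]

    for fn in (process_slow, process_fast):
        t0 = time.perf_counter()
        fn(orders)
        print(fn.__name__, f"{time.perf_counter() - t0:.2f} s")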


Is that app you are playing with for your company? I think it makes perfect sense that most people do not want to do free work to make things snappy if the company does not prioritize it on company time.


1)

Most web programs are slow because of the ad business.

You need to be evaluated. Are you a robot? Which cohort do you fit into? Which ads should it feed you?

They capture every bit of information about you so you can be tracked more, or so your data can be sold.

Only then do they display what you wanted to see.

Corporations do not focus on providing the info you would like to see. Therefore X/Meta/YouTube will not have a good subscription UI / behavior.

The corporations focus on suggestion algorithms so they can spoon-feed you the data they want to monetize for advertisers.

2)

Big corporate projects are built from thousands of sub-projects and thousands of abstractions tied together with duct tape. Layers upon layers, built by engineers who were not happy with the outcome, but who met the deadline.


There are lots of technical reasons why web apps are slow, and this is a technical site so, as a community, the discourse tends to over-index on "react/electron/python/whatever is slow".

But these are the messy human + economic reasons why things are actually slow.

I'd add some nuance to (2): even small corporate projects are often built by companies that are explicitly incentivized to build as quickly as possible; any other barometer of success goes out the window.

If you use software developed by companies or projects that are outside (1) and (2), you'll find it is actually pretty fast, subjectively.


One of the funniest things around this is that whenever I pay for any sort of official content service (comics, manga, books, etc.) and use the official 'cloud reader' of the respective content owner, the site is super effing heavy, often fails to load images, and page turns feel like molasses, with image load speeds comparable to late-90s dialup.

As opposed to this, whenever I don't feel like bothering with that and just type in 'Read %THING% online free', the resulting sites tend to be just html pages where the pages are linked in as images, and the whole thing is blazing fast; I can scroll without bother.

I'm quite sure the former sites are built by a small army of engineers and are some kind of orchestrated microservice thing running region-replicated on some cloud-provider-hosted Kubernetes cluster, loaded to the brim with DRM authorization.

While the latter sites are probably running on some kid's old surplus gaming PC tucked into the corner of their bedroom, on some PHP site thrown together over a weekend.

Yet the latter is infinitely more usable.


It's a consequence of the industry moving to the feature-factory model. It takes time to learn how to understand, profile, and improve performance. And it's not only performance, it's everything. Doctors spend something like 10 years learning, yet there's an expectation that a fresh bootcamp developer can build a full-stack application.


If you have time, begin with the physics. How many bits have to change, how many have to move, how many have to be computed? Do they really have to move? Do they really have to be computed? Do they really have to change? Do they have to change now, and every time? Can they remain in situ? This works at the CPU-register scale and at the availability-zone/continental/global scale. While modern hardware provides a wealth of capabilities WRT SIMD, multithreading, and of course the revolution in storage over the last 20 years… sometimes you can design out a lot of the heavy lifting.
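
A rough back-of-envelope in that spirit (assumed round numbers, orders of magnitude only):

  // How much data actually has to move to redraw an entire screen?
  const bytesPerPixel = 4;
  const frameBytes = 2560 * 1440 * bytesPerPixel;   // ~14.7 MB per full-screen frame
  const memBandwidth = 30e9;                        // ~30 GB/s is a modest figure for modern RAM
  console.log(memBandwidth / frameBytes);           // ~2000 full-screen rewrites per second
  // If the UI still lags, the bottleneck is rarely "the bits had to move".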

My first program ran on a 2MHz CPU with 64KB RAM and (floppy) disk access time measured in seconds. I think something of the art and science has been lost when even a wonderful “toy” computer runs at 2.4GHz.

Often when I ask “what is the system really doing when X is slow?” people (including developers who built it) have little idea.


The Xorg foundation/whoever should have given Elon a C&D the instant he tried to name his hellsite X with also basically the exact same logo.

E: this article is deeply confusing and I should probably bill the owner for my time


I’m not even sure what the argument is here. The author appears to be saying that ‘X’ is slow despite a lot of effort having already gone into making it fast, so whatever remains must be irreducible hard work.

I assume he’s using X as a generic placeholder instead of formerly-known-as-Twitter, but the argument applies in either case.

In my experience this is never the case. There’s always ridiculously low hanging fruit. On the ground and rotting, in fact.

I’ll use Jira as an example. Their online version takes a solid minute to open an empty form. A minute! Do you have any idea how much computing power this represents!? I can install Windows Server 2022 into a virtual machine in less time than this! I can read 42GB of data from disk, or download a DVD from the Internet.
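
For scale, those comparisons check out with assumed round numbers:

  // Roughly what "a minute" buys you on typical hardware (assumed figures):
  console.log(42e9 / 60 / 1e6);        // 42 GB in 60 s  = 700 MB/s, an ordinary SATA SSD
  console.log(4.7e9 * 8 / 60 / 1e6);   // a 4.7 GB DVD in 60 s = ~630 Mbit/s download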

Someone working for Atlassian was here making excuses: customers implement many complex customisations, they have security rules, etc, etc…

“The system is doing a lot of hard work” is what he was trying to say.

I tried a new, empty tenant. No data, no customisation of any kind.

Took nearly a minute to show an empty form. That’s not “hard work”, that’s the baseline. It can only get worse from there!


> The author appears to be saying that ‘X’ is slow despite a lot of effort having already gone into making it fast, so whatever remains must be irreducible hard work.

This is the literal opposite of what the author is saying though? The article even closes with him expressing annoyance because people who claim that X is justifiably slow have almost certainly not done enough analysis to say so.


This is what I mean that the article's core argument is a bit hard to follow.


X is slow because management told everyone to work on something else the moment we got a prototype barely functioning


This is a bad article, primarily because multiple people, myself included, have no idea what this person is talking about. I thought he was talking about Twitter. Someone else suggested X11, and a third group says he’s talking in the abstract.

If you fail at making it clear what the topic of your writing is, then you failed as writer. I shouldn’t have to guess what the subject under discussion is.

Now, assuming he is talking in general, the whole article is just content-free, because the writer is reduced to literally just brainstorming anything and everything, and saying, “See? There's lots of reasons for something to be slow!” Well, fuck dude, I think we all know that.


I remember having built a PC for Windows NT 4. It was fast. However, it had good hardware for that time. Adaptec UW SCSI hard disk, Matrox video card and 128MB of RAM.


No matter how fast CPUs get, shitty programmers and their managers with enshittification strategies will make software slow again.

What new features does windows 11 have compared to win XP? Now compare the requirements needed to even start the OS.

Web is even worse... 2kB of usable information means tens of megabytes of downloaded crap, and that's even before the ads. Why does a simple site with a sidebar need megabytes of javascript?!


You can only optimize at your current level of abstraction. Writing JavaScript? Have fun, there's a floor to your performance, because you're on the 28th floor of the abstraction skyscraper, and you can only go up. Part of the issue is that developers don't know any better, the other part is that everyone suddenly decided that four virtual machines crammed inside each other like a matryoshka doll is the only valid way to write software anymore.


JS by itself isn't that bad. You could calculate pi digits reasonably fast using JS.

The problem is usually really poor code: blocking work, a million rerenders for every interaction, insane bloat, or just the absolutely massive pile of abstractions pulled in through packages. Plus, HTML + CSS rendering is really slow.
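
As a rough illustration of the first point, a deliberately naive number-crunching loop in plain JS gets through a hundred million iterations in well under a second on a modern machine (a dumb pi estimate, not a good algorithm, and timings obviously vary):

  // Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
  const terms = 100_000_000;
  let sum = 0;
  const start = Date.now();
  for (let k = 0; k < terms; k++) {
    sum += (k % 2 === 0 ? 1 : -1) / (2 * k + 1);
  }
  console.log(4 * sum, `${Date.now() - start} ms`);   // ~3.14159..., typically a few hundred ms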


Back in 2022, Twitter was still Twitter, and "X something something" had a very different default meaning.

thanks Elon


Actually, the name change took effect in 2022, and is mentioned in the article:

> As should be obvious from the framing, X here is a variable, not a web site formerly known as Twitter.


name change was in July 2023


Yes, you are right, sorry. I thought that (2022) in the submission indicated that the article must be from 2022 (though it seems actually to be from 2024), indicating that the mention of the name change in the article must mean that it had already happened by 2022; and then, though I checked Wikipedia, I confused the date of acquisition with the name change.


I only realized this post wasn't about Twitter when I reached the end.


The rebranding from Twitter to X, then, has been more successful than anticipated. It should have been X Windows that one mistook the topic for.


I guess people stopped associating "X11" with "slow" at some point in the early 00s.


X11 is indeed what I first thought this was about.


If you already know Twitter as X, it's the opposite.


I'm just going to keep calling it Twitter.

Usually, I will call people however they prefer to be called, even when it seems silly.

But renaming Twitter to "X" crosses the line into confusing. I'm not sure it should've been allowed.


Me too. It makes very general points but it could apply to Twitter, which is IMHO what makes this so ambiguous. The fact that X is actually slower than I think it could be (remember the "light" version that didn't need JS at all?) adds to that ambiguity.


Same, I kept thinking "Are there really that many people still on Twitter to make it slow or do they just lack funding for infra".


Me too. I'm not sure I would have clicked if the tagline weren't, effectively, clickbait.


Nuh, elon screwed it up for everybody


What did you think after the first two sentences?


Is the post not deliberately ambiguous for humorous effect?


I thought this was about X11, not twitter.


The site currently shows a footnote:

> As should be obvious from the framing, X here is a variable, not a web site formerly known as Twitter.


I know. I read that. I didn't say it was about twitter. I'm saying that many commenters, and the article itself, seem to think people will confuse it with twitter, but I confused it with X11.


Same. Coincidentally, wayland is pretty fast.


Is X not X11? I read the article thinking it was that.


> As should be obvious from the framing, X here is a variable

Not only did I misunderstand the article, the author tops it off by calling me a dummy at the end..


It probably would make more sense to put that at the top of the article.


At the very least, X is definitely the most overloaded product name in tech.



Pro gives X a run for its money.

  Xtreme iAir Pro+ | AI for My Blockchain


The same way flat earth as a debate topic has been misappropriated, so has the concept of a variable.


I think this is the first time the Twitter rename has caused me significant confusion with an unrelated article.


> As should be obvious from the framing, X here is a variable, not a web site formerly known as Twitter

This could be one of the most needlessly confusing articles I've seen yet.


Is the world going to have to agree on a new generic variable/placeholder name, just because of Twitter...

I think X on its own should never be Twitter, it should always be referred to as x.com to eliminate any ambiguity.


> it should always be referred to as x.com

I vote on "the website previously known as Twitter".


How about just "X Twitter", as in "ex Twitter"?


I'm tickled that "Xitter" is gaining traction. Use the Chinese pronunciation of X.


How about TWPKAT? It looks a bit like twat


In CS, foo, bar, and baz are the usual metasyntactic variables.


"How can we get a 10y improvement here?"


Yes, I read the 30 other posts about it, too.


Not an article about Twitter.


This is from 2024? Or was this updated?



