Hacker News
Slow Software (inkandswitch.com)
438 points by Thibaut 3 months ago | 260 comments

Input latency, I can understand. Web latency, at least you know what you are paying for (trackers, fonts, bloated stylesheets, MBs of JS libraries...). But core software latency, that is pure madness.

Android share menu: what on earth takes 500ms-2000ms to display a menu??

Android scrolling: I could have smoother scrolling if I rendered the view on a wall in Doom III. Why on earth is the scrolling process not prioritized above everything else?? iOS got this right.

Microsoft Outlook on Mac: it can easily take 2000ms to close(!) a window, are you kidding me?

My new 2018 Touchbar Mac does not "feel" any faster than my old 2013 retina Mac, while the paper specs and benchmarks show at least a 100% increase in computational power. They can both just as easily choke on some unwanted Skype update or stupid WebEx video stream (yes, I work in corporate).

Amen, the Android share menu. Seems like it tries to go through the entire history of my previous shares to see what is most popular, then asks every app that enables sharing to list their respective sharing targets. Some of these apps also check which of their own sharing targets are most popular (like the SMS app and Messenger). Finally it renders a long list that to me feels random each time and not always relevant. /End rant

I recently learned I could long-press share targets and pin them. So the few I actually use, out of the couple dozen in the list, actually show up first now.

Which, infuriatingly, is disabled in some tier-1 Google apps, including YouTube and Google News.

Great tip! Thanks!

Woah, TIL. Thank you!

Office on Mac is garbage - someone told me that it remains single-threaded on Mac, which would explain this.

I find my Pixel (v1) surprisingly fast given its age and have no plans to replace it currently, although the share menu slowness is weird. It happens any time the device has to enumerate the list of installed apps—like loading the apps page in Settings—which, bizarrely, it doesn't seem to cache.

From what I know, the GUI rendering has to happen on the main thread, not another spawned thread, on OS X. I'd guess that's a contributing factor.

I also work in corporate - my 2017 Touchbar feels insanely slower than my partner's 2013, or sometimes even my late 2011.

Not to mention the awful keyboard and non-upgradable parts.

Outlook is a nightmare - heck, the Office suite has been awful essentially since its rewrite from being PPC-only for 2008.

Not only did we have to wait way too long for a native Intel version / Universal Binary - but when it arrived, it was slow and clunky on both pieces of hardware.

But worse than office - our entire team is now being migrated to cloud-based email that virtualizes itself in an IE window, and responds to clicks and scrolls about 40% of the time.

It looks awful, it performs awfully, and I've never experienced quite so terrible a piece of software. Weird, virtualized Office 365 - unless you're on Windows, I couldn't imagine this working for you.

> Microsoft Outlook on Mac: it can easily take 2000ms to close(!) a window, are you kidding me?

On my new 8th-gen i7, restoring a Skype for Business window (from the task bar, so it's already running), I've sometimes measured in excess of 10 seconds for it to restore. No other application I use exhibits such horrible performance.

I can see the typing latency in the latest version of Outlook, too, when composing an email. And don't even get me started on web-based Outlook, the performance is abysmal.

Not sure what's going on at Microsoft lately, but something very, very wrong has been happening for 3-4 years now.

Microsoft Teams is similarly bad. Also, multiple seconds to switch views in app. I really, really hate it.

On the other hand, my 2018 MBP feels MUCH faster. I think it's the faster read write on the SSD.


Can you expand on this?

I work at a large company that is not one of the famous silicon valley tech companies. One of the worst parts of working there is the heap of various enterprise anti-virus software they install on our computers. It brings huge typing and disk access latencies. Even opening files in vim with FZF is slow. I can't explain why but this makes programming so much less pleasant. I really just want to work somewhere without anti-virus.

You too, huh?

Last employer moved to McAfee "because active malware protection". Basically, AMP is a set of rules you can apply to disk accesses per application -- like for instance, "no application can delete PDF files from My Documents" (this is one of the "anti ransomware" rules).

It wasn't too bad with just the signature-based virus scan, but the updater and AMP were horrendous. The PCs (3.6GHz 8-core Xeon workstation with SSD, 16GB+ RAM and a ludicrously powerful 3D card) went from booting in 30 seconds to taking 15 minutes to boot. Eclipse took another five to start. When AMP was deployed to the JIRA server, JIRA refused to start (Atlassian Support suggested AV exceptions which IT refused).

IT response: close out any AV related ticket with "You will not be receiving a hardware upgrade and the AV is mandatory."

Six weeks later, IT was outsourced and the response became "we don't have permission to change AV settings" (BigCo politics).

Four weeks after that, the electronics lab was crippled when AMP flagged LabVIEW as malware.

A fortnight after, half the technical team handed their notice in.

It wasn't the only reason this FTSE100 was constantly outrun by its competitors, but it was certainly a contributing factor.

Similar environment here.

I was thankfully able to strip McAfee out because it was monstrously terrible.

Still, corporate IT has enforced a browser plugin and tray app called Triton AP-Endpoint and Triton Forcepoint Endpoint.

Its sole purpose is to block you from moving any sensitive data to external drives. I, up until now, have had 0 problems moving any materials to any drives anywhere. I don't think it works very well. It does, however, chew through my 2015 MB Pro battery and cause the fan to run nearly continually and at times even overheat.

I think I could remove it, too—but am mildly concerned they'll get a notification and come start inspecting things.

At this point I'm kinda glad they went all the way, because now they don't exist anymore. I just hope the idiots don't ruin another company like that.

You can't have viruses in a company that doesn't exist, I guess.

I'm currently picking the OS stack for all machines at my company. All guidance, even from people I respect, points towards antivirus protection. Yet I lean towards nixing that. I know it's hugely ineffective. In fact, it opens up holes of its own[1].

And yet... it's like scaffolding in NYC[2]. Absolutely useless[3]. But if you are all for removing it, and a brick falls and hurts someone, heads will roll. Quite a quandary I - and other C-levels - face.

[1] https://www.computerworld.com/article/3089872/security/secur... [2] Another contrarian passion of mine. [3] Bricks do fall and hurt people. But no more than scaffolding itself falls with the same effect.

I would recommend against antivirus. If you force developers to use an annoying configuration of a specific OS, you make it much harder to hire top talent. Why would someone work at your company when they could go to Google and use their favorite Linux tools running on their favorite model ThinkPad?

It totally makes sense to lock down machines that can access production, but for development? Just let people use what they like. You'll have less work for IT, happier developers, and an easier time recruiting talent.

Microsoft Defender typically doesn't hurt performance or security much. The alternative is to run Mac or Linux stacks instead of a Windows stack, of course.

Microsoft Defender destroys WSL performance unless you create exclusions.
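For reference, a sketch of what such exclusions can look like (assumptions: Windows 10's built-in Defender and WSL 1 with the distro filesystem under %LOCALAPPDATA%\Packages; run from an elevated PowerShell prompt, and adjust paths to your own setup):

```powershell
# Exclude the WSL distro filesystem and the WSL launcher process
# from Defender's real-time scanning.
Add-MpPreference -ExclusionPath "$env:LOCALAPPDATA\Packages"
Add-MpPreference -ExclusionProcess "wsl.exe"
```

Note that exclusions are a real security trade-off: anything dropped under an excluded path is never scanned.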

Then there’s NTFS behind that. Just use a VM. Ten times faster.

For what I am using it for, a VM is not faster than WSL. Maybe in the early days, but not now.

Ten times faster at what? Do you have any benchmarks?

Writing and reading lots of small files on disk, which is basically what 90% of Unix workloads are.

While running a Linux stack may still work, your Mac info is outdated by at least three years. Macs entered the zoo in 2015.

For practical purposes, that's not true.

Macs do not allow running unsigned software by default, and no one runs antivirus on Mac, ever. So even if they were commonly being infected, which they aren't, the person above would have organizational indemnity if someone were infected because they're following industry best practices by not running antivirus on Mac.

If you want, you can further restrict Macs to only App Store software, which is heavily sandboxed. Then you can go even further by not allowing the individual users to install software on their own, if you really want to be draconian about it.

Unless someone is being individually targeted, running very outdated software, or is intentionally trying to get themselves infected, it will not happen. Even if all three conditions are true, it's still very unlikely.

Anyone who says otherwise is just fear mongering. That same level of fear mongering could point to the dozens of pieces of malware that have been released for Linux.

I say this as someone who uses a Linux laptop for work and a Windows desktop at home. I don't have a dog in this fight. I do, however, try to stay very informed about the state of software security.

> Macs do not allow running unsigned software by default, and no one runs antivirus on Mac, ever.

My company-issued MBP is running something called "Cylance Protect" (and "TrendMicro" earlier). And also something called "Forcepoint DLP". I have no control over any of that, software just appears and disappears. I think it's done by something called "Jamf".

I don't really care either way. The only thing I actually use on the Mac is Chrome for email/calendaring/vidconf and some intranet sites. Actual work is all on Linux servers via ssh (and even that has "ClamAV" antivirus running). So I'm just using it as an expensive terminal/chromebook.

It sounds like we might work together; the only difference is that I do most of my dev work on my MBP so I consider it an expensive Linux machine instead of a chromebook :)

Haha I just made another comment to a similar effect. We had McAfee but I stripped it out because it was just so terrible.

We also have the Forcepoint nonsense.

> and no one runs antivirus on the Mac, ever.

I work at a large company (also not one of the famous silicon valley tech companies), and if you get a MacBook, it is managed remotely and has Bitdefender installed.

I sometimes forget how much big companies enjoy spending money. I once worked for a large company where every developer was issued a full copy of Microsoft Office, even though most of them worked inside a fullscreen Linux VM all day. Outlook was the only piece of Office that my coworkers and I used, and I would have been happier using a tab open to a webmail provider inside the VM than having to use Outlook to connect to Exchange. That was far from the only unnecessary software they paid for. Why use Git when you can pay for Perforce!

If you're working in the game industry, and your repository includes a lot of huge binary files that aren't mergeable, Perforce is a godsend.

Also, p4merge is the best merging utility for any VCS, and I always install it alongside git if I work in Windows.

>p4merge is the best merging utility for any VCS

What makes it "the best"? Honest question, I found it kludgy and settled for KDiff3.

p4merge is indeed awesome. I work exclusively in Linux environments, but p4merge always gets installed. The experience has actually gotten better in that regard since git started supporting it straight as one of its 'mergetools'.

> every developer was issued a full copy of Microsoft Office

Enterprise volume licenses. It probably cost them less than the time/money they'd lose if you spent a few minutes trying to figure out how to open some file sent by non-devs.

> Why use Git when you can pay for Perforce!

Nice graphical tooling and sane developer experience.

I consider Perforce "views" to be burdensome and not so sane. The rest of the experience may be less jarring than git for many.

> Why use Git when you can pay for Perforce!

Perforce integrates very nicely with a whole bunch of third-party tools in a way git does not, and is on the whole a lot easier to use for most people than git (and I'm saying this as someone who doesn't like Perforce at all).

Really? My experience is the opposite - all the tools integrate with git, not perforce.

Said as someone who likes perforce. Having hundreds of developers working with large binary files was incompatible with git until very recently.

I would have to echo this; no problem with Perforce, but saying its ecosystem of plugins is more viable than git's?

Maybe 5+ years ago?

In many enterprises you are only allowed to connect laptops into the network if an anti-virus is present, that includes company issued Macs.

And this is why I used to have a laptop in my drawer I used for logging into the proper intranet, booking holidays and organising my pension.

All my actual work was done on Unix/Unix-like machines on our own old network, something we clung on to after acquisition.

The privilege escalation vulnerabilities that have been found in Mac OS in the last few years have made me distrust its security. It's not like Apple can't create secure OSs (see iOS), they just don't seem to care nearly as much about macs. All the "security" changes (that I've seen) they've made have been implementing an iOS-esque walled garden with the premise being that trusting Apple is better for security (which is probably true, but I think many users will turn these limitations off as soon as they hit them).

I used to really like Mac OS X, but nowadays it feels much less polished and much more annoying. It might just be nostalgia, but I remember Leopard being more responsive and having fewer pop-ups (Screw you, iCloud! I don't want to synchronise my files!).

Being individually targeted is a possibility if you work for a high-profile or important software company

Additionally, not sure about you, but the reason I like to develop on Macs is due to the fact that I can test *nix software on them. That means I am installing and running *nix binaries either via browser or through package managers such as pip. There is absolutely a non-zero risk that Linux malware will somehow find its way into my development environment. I am not trying to fear monger, as I do believe Macs are still generally the safest, but don’t let them lull you into a false sense of security.

> and no one runs antivirus on Mac, ever.

I work at a small company. We have ESET NOD32 on all Macs. It is possible to install unsigned software (I sometimes install development tools from small companies who do not have a signing cert, if I'm comfortable with their reputation). It's also possible something could sneak through in a PDF, Word doc, homebrew/ports install, etc.

>Macs have entered the zoo in 2015.

What "zoo"?

To this day, Macs are practically virus-less, which they always were (99.999% of the scares in the media were for trojans, and even those at worst affected something like 1-5% of the total user base) -- nothing like the good ole Windows (XP and pre) days where after 1 day of surfing the web you'd have a few viruses.

And of course if you go with the default options (gatekeeper, signed packages, etc) you have even less to worry about.

It's also not about "market share" -- Macs had 1/4 the market share they have now in 1990-1997, but there were tons of viruses for them under the old OS.

It's not like the original (pre-many security features were introduced) OS X was specially hardened or anything, but it was much more secure than Mac OS and the old Windows versions just by having a basic UNIX-style design.

> To this day, Macs are practically virus-less

Exactly what I'm talking about. Not to this day, but to some time back in 2015. Right now any trojan toolchain on the black market comes with a Mac-targeted package.

Trojans always existed. But trojans aren't viruses, and if you don't get your stuff from shady websites you don't have much to worry about (and if you just get signed and sandboxed App Store stuff, even less or nothing to worry about).

I believe you mean “finished” not “out of date”.

Actually I chose my words carefully. Even though malware guys started targeting Macs about three years back, actual infestations are uncommon.

>>And yet... it's like scaffolding in NYC[2]. Absolutely useless[3]. But if you are all for removing it, and a brick falls and hurts someone, heads will roll.

This is a question of shifting liability and sharing responsibility. If a brick falls when you knew the facade needed maintenance, then the liability falls solely on the building's owner. If the scaffolding falls, then the liability is borne by the scaffolding company, or at least shared.

If people you respect are pointing towards antivirus protection, you might also want to inquire whether they are saying this purely out of technical reasons (i.e. surface attack area, which could be debated), or if there are financial risk management factors tipping in this direction.

Since you're picking the OS for everybody in your company, which presumably includes multiple departments and staff who are non-technical, it seems like madness that you'd let them run amok without some level of antivirus.

But -- leave the poor developers alone. One hopes that the company was capable of hiring technical staff practicing basic day-to-day security hygiene.

Eh, I’d like to see some data on NYC scaffolds. They do collapse and hurt people, but the sheer volume of pedestrians and construction work makes me skeptical that it’s just as bad to have them as not have them. Not to mention the fact that it gives construction workers a way to not block the sidewalk with equipment and personnel.

Keep in mind the reason guidance exists for products that aren’t really all that necessary is largely due to the massive sales teams behind those products, plus (as you mention) cover-your-ass concerns

> I really just want to work somewhere without anti-virus.

Not trying to be hostile, but why aren't you? I've never worked anywhere that required anti-virus, so I know there are jobs out there that don't require it. In recent years I've gone so far as to take the stance that I won't use company computers at all, only my own, and I still haven't had any problems finding work.

Unless you have strict restrictions on switching jobs (eg. H1B, can't move for reasons, bad network connections so no remote work, etc.) nothing should keep you from finding better working conditions.

I worked in such companies, where you need to sign that you assume all legal consequences of a virus being introduced into the company network via your computer.

A police officer in my country told my sister not to plug her USB stick (which contained some camera footage) into his computer, because HER USB stick would get infected with viruses from HIS work computer!! This was in an actual police station!

I've never had to sign such an agreement. I'm US based, maybe this is something done elsewhere?

Germany based customers.

Oh yes, those companies where progress takes place regardless of, not thanks to IT support...

That doesn’t really work for large companies unless your employer plans on suing you for $100mm+ after a breach.

But big companies have internal security teams which basically handle all of this behind the scenes (until you get road blocked weeks/months when they come out of the woodwork to make your new product secure - a very necessary annoyance)

You don't sign, and leave right when they ask you to, laughing at them?

My computer was proper, those were customer sites.

No, most people would rather install an IT certified anti-virus on their systems and keep the customer, than lose the business opportunity.

Such a question can only be countered one way: "if the company installs anti-virus, does the head of IT assume all legal consequences of a virus being introduced?"

Didn't think so

I went into construction, and then industrial alpinism with tree-trimming inclination starting 2016.

But before I did that, the second-to-last place I tried to work was this hostile environment where everything was Windows- and MS-based, as far as what we were meant to use for work. I couldn't bring my own lappy.

I ended up writing an AutoHotkey script that would get mouse scrolling about 80% sane and manage my clipboard.

I set up a VM on our HPC cluster, on which I'd do my actual work by way of VNC and sometimes SSH. The LAN was OK, so it ended up being less laggy than Windows on my local machine.

But I suppose a local Q frontend to a VM hosted on my work lappy would have worked too. Virtualize the AV away, yeah.


> I ended up writing an AutoHotkey script that would get mouse scrolling about 80% sane and manage my clipboard.

I used to run AlwaysMouseWheel to fix up focus scrolling, but forgot to set it up after my last reinstall. Thanks for the reminder! :)


As for the other stuff, ugh. I've made it a general rule not to work anywhere where I don't get root on my own box.

This is no longer needed in Windows 10.

My main work desktop is still on 8.1 and will probably remain so. Laptop is 10 and yeah, no issues there.

What is industrial alpinism?

I go up a rope or down a rope. Up a tree or a building, down a building/wall or a well. Do some tree trimming, sometimes construction tasks. Mounting stuff, disassembling stuff, sending it down, or up another rope. Fun stuff.

Do an internet search for TreeUp.

I think it means workers who, for example, climb on industrial chimneys for maintenance.

Basically anywhere where you're paid for your alpinist skills. Mounting/dismounting things up there. There was an example like cleaning up the walls of an old fortress from grass.

Pff, my entire hard drive is scanned every Friday. Friday is a slow day on my work laptop (Win10 with McAfee)...

At home I run KDE Neon. When people see me use that laptop (1-year-old Asus, Core i5, 8 GB RAM, standard SSD) they always comment on how snappy and fast everything is and ask me what laptop I use. Even my neighbor with his brand-new Win10 desktop with an NVMe drive and new i7 CPU.

Place I worked at the laptops often had one core pegged at 100% load by some disk monitor that was competing with the AV software. I left very quickly and you should too.

Why not campaign for a change to different AV software?

> Why not campaign for a change to different AV software?

Depending slightly on the company, that is often a complete and utter waste of time.

That kind of change would either never happen or take years to happen. I haven't got time for that.

Years? I guess we have worked at different types of companies.

Do you have enough resources (and permissions) to run VirtualBox? That way you can run a VM and get complete control. You can do things inside the VM and anti-virus will not be in the picture.

I do this in my case but for different reasons and not performance.

I've tried to do this, but I think the anti-virus software breaks performance by doing strange stuff to the disk images whenever they're being written.

On the other hand, I just started at a Fortune 500 and the IT configuration has been non intrusive even as a developer. The only time I had issues (I got locked out and had to ping a support guy) was when I was using WSL trying to authenticate through the proxy and probably hammering it with weird requests which was flagged as suspicious behavior. Fair enough. But working somewhere that has their act together is a dream, and I’ve been able to be productive immediately because of good decisions in IT.

Vim uses an on disk file as a buffer. It's the .swp file. That's how you can open files larger than the available memory, and recover files when it crashes.

Same here. It's faster to compile in a VM because the VM doesn't suffer from the antivirus bollocks. Insane.

Install VMware and work in a VM. The AV should only check one file, at opening. With an SSD the overhead will be minimal.

I think this article places too much emphasis on input devices and hardware constraints, and not enough on software architecture. JIRA doesn't really feel any faster just because your input devices are fast; the application level latency dwarfs the input latency by a large margin.

I would take measurements, but JIRA prohibits benchmarking for some reason... ¯\_(ツ)_/¯ they're probably just trying to save everyone else the embarrassment of seeing how incredibly fast JIRA is compared to their own sluggish offerings, right?

I am confident that JIRA's application latency has nothing to do with Java (backend) or JavaScript's (frontend) garbage collector.

Many companies just feel no impetus to write fast software, or to use commensurately powerful hardware.

Jira is in a league of its own. With the latest stable Firefox on a MacBook Pro from 2014, I get a full 10 seconds of loading time for the notifications sidebar. Every click in Jira takes between 5 and 15 seconds. It is absolutely nuts, and it seems like many people just don't understand how insanely bad it is (but obviously many engineers do ;)

Meanwhile my browser can load, render, and scroll a 4000 line colored diff in less than half a second (e.g. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...)

Our team is responsible for deployment/operations of our self-hosted Jira so a few months ago I decided to make it faster, and oh what a rabbit hole that was:

When you look at Jira using Chrome DevTools, there are these big batch.js and batch.css files (up to 5MB in total) that are different for every type of Jira view (dashboard, agile board, issue detail, issue search, …), so the first few page loads might be a bit slow, but then they should all be cached, as they don't change until you update your Jira, and everything should be smooth. Except that wasn't what I saw; they seemed to be reloaded every hour or so.

Naturally I blamed Jira, and wasted hours Googling for batch.js not being cached, but eventually came to the conclusion that it can't be Jira's fault, and it isn't. It turns out it's Google Chrome's cache backend that's used on Linux (and only there). There are three issues with it:

1. it's limited to ca. 320MB even if you've got 1TB of free space

2. entries are evicted by age times size, ignoring number of hits

3. media files such as 2MB YouTube fragments use the same cache

The result is that watching YouTube for a while evicts all cached batch.js. To make this bearable, I enabled gzip in the reverse HTTP proxy in front of Jira, which brought batch.js down to 500KB so that it's smaller than the YouTube fragments and isn't evicted sooner than they are. Still, a few hours of watching YouTube without visiting Jira evicts it. Increasing the cache size using "google-chrome --disk-cache-size=2000000000" helps as well.

Oh and here's the link to Chromium issue tracker: https://bugs.chromium.org/p/chromium/issues/detail?id=617620 :-)
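For what it's worth, the gzip change described above looks roughly like this, assuming the reverse proxy is nginx (the post doesn't say which proxy was used):

```nginx
# Compress Jira's large batch.js / batch.css responses so they take up
# less room in Chrome's size-sensitive disk cache.
gzip on;
gzip_min_length 1024;                        # skip tiny responses
gzip_comp_level 5;
gzip_types text/css application/javascript;  # text/html is compressed by default
gzip_proxied any;                            # compress proxied backend responses too
```

The same directives work in an http, server, or location block; other proxies (Apache's mod_deflate, HAProxy) have equivalent settings.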

I suspect the root of the problem is that 5MB JS/CSS blob itself rather than its caching. It's absolutely nuts, and it's most definitely not needed for what Jira does.

At least we're getting paid well for watching load indicators. Atlassian should make a step forward and show funny cat gifs to increase satisfaction.

Is your Jira heavily customized? I haven't seen this level of slowness, and I use it daily. I know that customizations can be the root of evil in Jira.

It looks pretty standard ... it does have GitHub integration enabled, and custom fields and workflow for each project, which seems like a pretty standard thing for organizations/users to do.

But my key anecdote is about loading the notifications sidebar after the page has loaded, there shouldn't be any customizations in there. But I dunno, maybe Jira gives managers a bit too much rope with which to hang their development teams.

And this level of open-ended configurability is why many vendors prohibit benchmarking. While Atlassian is exceptional in some regards, this is not one of them.

If Jira is slow crap when used in the way real people use it, then isn't this a meaningful fact worth sharing?

Are you on Jira Cloud? I've never seen a self-hosted Jira instance that slow.

Yes, it's the Atlassian-hosted version. I have heard that they did a complete rewrite for the on-premise version, but will probably never be able to replace the old codebase ...

Ah interesting, I think this is the disconnect I always have, I've never used their cloud based version.

This reinforces for me that JIRA should always be self hosted.

We host our own Jira on-prem behind an Apache server and it's pretty fast. I am actually surprised by how fast it is.

In previous job we used jira cloud and it was terrible.

Try Tempo for JIRA. I'm seeing 60-second load times for pages sometimes. I’m going to have to open a timesheet entry for filling in timesheet entries.

In my company we use a PLM system for technical drawings. Jira is an absolute dream compared to that.

> I think this article places too much emphasis on input devices and hardware constraints, and not enough on software architecture.

No it does not: it clearly makes the point that due to device, hardware, OS, and framework latencies your application has very little time budget in which to respond and still feel fast to users. This will necessitate considerable effort in terms of architecture. I don't see how that's under-emphasising it.

To have a useful discussion on architecture within the article they'd have to talk about the specifics of different software and applications, which obviously vary considerably according to the requirements for said software and applications.

The danger with that sort of thing is that some people will say, "well, if we use architecture X then our application will feel fast," disregarding many of the detailed requirements of their domain that mean X won't always (or possibly even ever) perform well.

To some extent that's unavoidable but I think this article was absolutely on point in that it makes clear the constraints within which software developers must operate, and sensibly leaves architecture as the responsibility of those software developers.

I wrote some Python to convert a long YAML file into Jira epics/stories/deploys to avoid having to use the UI for anything aside from checking what I have on my todo list. It has already saved me more time than it took to write.
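A script like that can be surprisingly small when driven through Jira's REST API. A minimal sketch (assumptions: Jira's documented POST /rest/api/2/issue endpoint; the URL, auth, and helper names here are illustrative, not from the original comment):

```python
import json
from urllib import request

JIRA_URL = "https://jira.example.com"  # hypothetical instance


def build_issue_payload(project_key, summary, issue_type, description=""):
    """Build the JSON body Jira expects when creating an issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "issuetype": {"name": issue_type},
            "description": description,
        }
    }


def create_issue(payload, auth_header):
    """POST the payload; Jira's response includes the new issue's key."""
    req = request.Request(
        JIRA_URL + "/rest/api/2/issue",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": auth_header,
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Feeding it a parsed YAML document is then just a loop over `build_issue_payload` calls, one per epic/story.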

That's great for you, but I'd bet no one else can effectively use your scripts. The beauty of the UI is that anyone can use it almost immediately.

The beauty of a shit-filled moat is that anybody can wade in without having to be able to jump it.

Why does this matter?

There's no obligation to always make software useful for other people. There's a lot of space between doing something manually and selling a product.

> There's no obligation to always make software useful for other people.

The problem, which you basically acknowledge, is that software that isn't useful generally doesn't get used. If you write nice custom scripts to make your own workflows more efficient, that's great... for you. But to make any interface "useful" (whether it's a UI, an API, etc.), it doesn't just need to be "useful", it also has to be accessible. That should help explain why even crappy UIs are so ubiquitous. They are moderately useful, but extremely accessible.

Everyone in engineering can use it if they want; it's in our “fun tools” repo. I have better things to do than navigate through the ticket creation UI 20 times.

> I have better things to do than to navigate through ticket creation UI 20 times.

Thank you for being that sort of person.

I know some people who would take great pleasure in knowing that they would get paid for spending just that time, no matter if they achieved very little doing it.

But to be fair, plain Jira can be pretty fast; it's just when you get into an enterprise environment and people think they need 2000 different fields that things fall apart.

I've never seen JIRA be "pretty fast", so I can't agree. I also think "2000 different fields" is an unfair exaggeration, if we're going to focus on being fair. The 10 to 20 fields that people might use per issue should not prevent JIRA from being fast with the right architecture.

Same. JIRA is just slow, even in a basic configuration, which always annoys me because JIRA is 99% read-only. They could generate most of the pages and cache them until anything changes, or even pre-generate them on change. But every time you view an issue, it takes multiple seconds again.

This is a common 'but sometimes' issue in software engineering.

Caching works for 99.9% of cases, 'but sometimes' I click on an issue and it was updated just before my page rendered.
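The cache-until-changed idea, with that 'but sometimes' race handled by invalidating on every write, might look like this toy in-memory sketch (just the pattern, not anything Jira actually does):

```python
# Toy render cache for a mostly-read-only tracker: pages are rendered
# once and reused until the underlying issue changes. Invalidating on
# every write handles the "updated just before my page rendered" race:
# the next read simply re-renders. Illustrative only.

class IssueCache:
    def __init__(self, render):
        self.render = render  # function: issue_id -> rendered page
        self.pages = {}

    def get(self, issue_id):
        if issue_id not in self.pages:  # miss: render once, then reuse
            self.pages[issue_id] = self.render(issue_id)
        return self.pages[issue_id]

    def on_issue_changed(self, issue_id):
        # Hook every write path through here so readers never see stale pages.
        self.pages.pop(issue_id, None)
```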

I've setup Jira (on premise) at my workplace myself. Runs behind an nginx reverse proxy and it is indeed pretty fast!

We don't use much stuff and it's a bit older version 7.8, but I cannot complain so far.

Installed a plain Jira software instance on Docker, that wasn’t terribly slow.

Why shouldn't 2000 fields be fast? Shouldn't all of what Jira does boil down to straightforward SQL queries?

Or even 200,000, etc. I've never used it, but at a glance it feels like it's just a Google Sheets tracker with unnecessary visual stuff on top.

Our fresh install was noticeably slow. Even if it were fast (it's not), they sell the thing as a customizable solution. Few people get everything they need from a vanilla install. That's kinda the whole selling point.

Don’t know about the enterprise version, but the cloud offering loads and renders like molasses.

We have been using jira as a very simple issue tracker with 3 users and the standard backlog, selected for development, done workflow.

Load times are atrocious. It's the slowest website I have ever seen. Maybe facebook is slower, but I don't use that as often.

I've said it before and I'll say it again - one of the greatest boons from the new wave of VR is not the tech itself (fun though it is) but the focus on latency as a first-class metric. I've always been sensitive to microstuttering and so find it irritating when a game that's "running at 90fps" (or a document that's scrolling) still has perceptible judders.

is 'judder' a synonym for 'jank'?

‘judder’ refers to dropped frames in a fixed-frame-rate application, frames that are not dropped but are rendered too late, or, more generally, variance in frame rate of a program that should be smoothly animating.

I think ‘jank’ would refer to the same thing, once it lasts longer than a few frames.

E.g. the program runs consistently otherwise, but somehow freezes for 0.5s every minute or so.

My intuitive interpretation has been that `jank` is related to frame dropping but with time perception being constant (like sampling being insufficient yet played back at the correct times), whereas `judder` also has a sort of rubberbanding effect on time, where time is perceptually compressed or dilated as playback is delayed or "catches up", with frame dropping happening upon e.g. a deadline being missed. Here, with frames 2 and 4 being costly to render:

    normal: F1....F2....F3....F4....F5....F6
    jank:   F1....F2..........F4..........F6
    judder: F1....F2........F3....F4......F6
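A toy model of that distinction (all numbers and names invented, not from any real compositor): jank drops slow frames but keeps the survivors on the fixed time grid, while judder shows every frame and lets late ones push the whole timeline back.

```python
# Toy model of jank vs. judder for a fixed 60 Hz frame budget.
# Frame "costs" are render times in ms; anything over budget is late.
# Purely illustrative -- no real compositor works this simply.

INTERVAL = 16.7  # ms per frame at ~60 Hz

def jank(costs):
    """Drop late frames; survivors stay on the fixed time grid."""
    shown = []
    for i, cost in enumerate(costs):
        if cost <= INTERVAL:
            shown.append((i, i * INTERVAL))  # (frame index, presentation time)
        # late frames are skipped entirely: motion stutters,
        # but perceived time stays constant
    return shown

def judder(costs):
    """Show every frame; a late one delays everything after it."""
    shown, t = [], 0.0
    for i, cost in enumerate(costs):
        t += max(INTERVAL, cost)  # a slow frame stretches the timeline
        shown.append((i, t))
    return shown
```

With costs like `[10, 40, 10, 40, 10, 10]`, jank shows frames 0, 2, 4, 5 on schedule, while judder shows all six but finishes well behind real time.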

This use of 'judder' is what I would call 'jitter'. 'Jank' I'd just call 'skipped/dropped frames'. I never realised how much this terminology varies between groups, I think it's fascinating!

No, "judder" is like a shaking or vibrating, "jank" just means bad quality.

Jank has a more specialized meaning here, related to delays in UI rendering that cause frames to be dropped intermittently which reduces the smoothness of motion.

Judder is a perceived vibration or motion aliasing caused by the incoherence of consecutive frames at a fixed sample rate.

Further reading from Dan Luu, who wrote about this as well. [0]

He goes into why the Apple 2e is so quick, the iOS Rendering pipeline, and general complexity of computing input.

I find it really interesting that latency is so high on a lot of devices in 2018.

0. https://danluu.com/input-lag/ (2017)

I like how the site is an example of very fast software, such a simple design and it renders immediately; a single file load with the tiniest of javascript to make the videos work. A full page load in 27ms with only 14kb transferred!

> A full page load in 27ms with only 14kb transferred!

Nitpick: not really a full page load. The text part is 14 KiB, but once the website finished downloading the images and videos it's 14 MiB. However, because it does so via lazy loading (same for the PNGs), the text-parts are instantly rendered, and because it downloads all of the videos you have no buffering latency when clicking.

So yeah, near-perfect example of how to design a website for low-latency while still maintaining rich media (the thumbnail PNGs could have been JPGs to save more data, and more importantly: lazy-load even faster).

The only nitpick I have is that the sidebar comments are built in a weird way. They're nested in the middle of the paragraphs inside small tags. It would be a little nicer if they split the sidebar text into aside tags that come after the paragraph.

It's refreshing to see a site that loads quickly though. That would've been a bit hypocritical otherwise :p

Though, if I were to critique it, I would say that the JS was not really necessary for such a simple case for video. There's only one <source> (I guess H.264 is considered ubiquitous now) and the big central play button just gets in the way of the video (and on most platforms, the default styling of the <video> element puts a big play button over top of it anyway).

To be fair, the site is an HTML-only site and gets served via Cloudflare. Not a fair metric against a "dynamic" site.

Also, most webservers/frameworks/whatever will favor throughput over latency.

dynamic sites are a choice, and achieving that richer level of interaction comes with its own costs. For every legitimately dynamic page out there (going all the way up to full-on applications), there’s many more that have no good reason to be anything other than static content that could be served from a cdn. Going pure html for a page that doesn’t need more is precisely the sort of design choice that goes with the mentality they’re advocating.

Optimising for throughput rather than latency is, again, a design choice, and the article serves as an indictment of taking that too far.

This site is fast, why should I care how it got fast?

Ok, what about Hacker News? But the article discusses news websites like WaPo that really don't need to be very dynamic.

My least favorite type of latency is when you start typing and the application takes at least 1-2 seconds to catch up with you. This still happens to me often enough to be a normal occurrence on both my desktop computer and phone.

This is rather painful for me, especially when you don't notice and slow down for it to catch up, and it begins to drop characters.

I'm not talking latency with the keyboard - I'm talking voice input when the receiver hits latency. If I'm using voice, I generally can't see the screen.

It's a source of infuriating spell-checking. When C int becomes nt.

You know what, I'm actually more okay with this than perceptible, constant latency. If nothing is happening then I know I can just stop looking for characters to appear and keep typing. A tiny part of me loves to "get ahead" and just keep going because I know it'll be magical when it does arrive. If it's just always laggy, though, it's disorienting and disconnects me much worse.

Now if it's always doing that -- like a 1-2 second delay more than a few times in 10 minutes, for instance, it's terrible. But if it's a one-off, I don't mind it.

What is extremely annoying is when I get used to this "getting ahead" dance, but every once in a while Chrome would just ignore what I typed and I would have to go back and retype it.

> A tiny part of me loves to "get ahead" and just keep going because I know it'll be magical when it does arrive

Same here! My old laptop did this and it was a thrill to see how much you could write before it started coming in at lightning speed.

this is unfortunately my almost daily experience using slack...

I work with Embarcadero Rapid SQL and I got used to these hiccups already, sometimes up to 10 seconds delay. I don't want to start on how crappy software is in terms of user experience, but I see people getting their jobs done with it and not complaining just one bit. Should I think I got spoiled with snappy fast barebones editors? I'm getting better at ignoring the ugliness and I feel it's a good exercise of willpower.

> I don't want to start on how crappy software is in terms of user experience, but I see people getting their jobs done with it and not complaining just one bit.

Most people, including technical folks, don't complain (unless extremely agitated) because they know they can do nothing about it but accept it. Doesn't stop making them miserable, though.

> Rapid SQL

> 10 seconds delay

Oh the irony.

This still exists with today's computers and systems? I remember back in school (a long time ago) we had computers that were so slow it took 7 seconds for the backspace key to start doing something on some of them. This was in the 286 age with Windows 3.11, I believe. I haven't seen it much since then. What in the world are you using to have this experience?

My work laptop is an i7 with 16GB RAM and I sometimes see this delay in inputs when running Visual Studio. I type a line of code then wait a few seconds for it to appear.

Weird thing is when it does this, I'm not using all the memory and the CPU is normally sitting way below 50%.

I've no idea why it sometimes just gets unusably slow.

I had the same thing, I uninstalled GitLens which fixed it (or at least made it rarer). Might be one of your plugins?

CPU sitting below 100% indicates that the program is spending time waiting for data fetches from disk or memory.

is there any tool out there that monitors IO calls from one specific process?

I get extremely annoyed when applications simply freeze without them doing any important cpu task.

resource monitor in windows can do that

It doesn't monitor latency caused by the CPU doing fetches from RAM (to the cache), though? Depending on the app, those can be a real killer.

Or it does not use all cores.

What’s worse is when the application drops events during that time.

Main offender for me is chrome, so much lost typing in new tabs and windows

The Windows 10 Start Menu on my machine loves to eat the first one or two typed characters

> Android and iOS both make substantial use of "long press" to access context menus, which require that the user wait hundreds of milliseconds in the middle of their command gestures.

> A related source is delays for disambiguation. For example, on mobile Safari there's a default 350ms delay between when the user taps a link and when the browser begins fetching the new page, in order to tell the difference between a link click and a double-tap zoom.
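The disambiguation logic behind that 350ms delay can be sketched like this (a toy model, not WebKit's actual code; the point is that a lone tap cannot be dispatched until the window has expired):

```python
# Toy sketch of the tap vs. double-tap disambiguation described above.
# The 350 ms window models the behaviour; this is not WebKit's code.

DOUBLE_TAP_WINDOW_MS = 350

def classify_taps(timestamps_ms):
    """Return (gesture, dispatch_time_ms) pairs for a sorted tap stream.

    A lone tap can only be dispatched as a 'click' once the window has
    expired with no second tap -- that wait is the latency users pay
    on every ordinary link.
    """
    events, i = [], 0
    while i < len(timestamps_ms):
        t = timestamps_ms[i]
        nxt = timestamps_ms[i + 1] if i + 1 < len(timestamps_ms) else None
        if nxt is not None and nxt - t <= DOUBLE_TAP_WINDOW_MS:
            events.append(("double-tap-zoom", nxt))  # can fire immediately
            i += 2
        else:
            events.append(("click", t + DOUBLE_TAP_WINDOW_MS))
            i += 1
    return events
```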

I really wish mobile devices had one or two modifier buttons on the side. That way you could have "right click", maybe even positioning a cursor without clicking, all sorts of crazy stuff, like being able to "mouse over" a link in a mobile browser.

Apple has 3D touch, where if you press firmly on the screen, the phone makes a tactile "click" and triggers a different action than normal tap. Annoyingly, Apple mapped 3D touch to a totally new set of actions instead of replacing long press.

Apple has slowly started merging the two, because it makes it easier for them to support devices with and without the feature.

We can't do that. Buttons, and headphone jacks, are so yesterday. /s

> I really wish mobile devices had one or two modifier buttons on the side.

Look, I feel you in terms of lost functionality, but I don't when it comes to trying to explain how these systems work to my family. Android was notoriously bad because the back stack behaved totally sane to programmers, and insane to everyone else. When Android had a "menu" button on the bottom too, it was another source of confusion. "What does the menu button do?" Thinking back further, I know for a fact my mom didn't even know her phone had a trackball on it (hiya Android 2.x era), but she was mystified when I told her what it was and what it did.

I'd rather someone fix the design problem of mapping taps to different things than add more buttons that are impossible to explain.

> it was another source of confusion. "What does the menu button do?

How is that impossible to explain?

> I know for a fact my mom didn't even know her phone had a trackball on it (hiya Android 2.x era), but she was mystified when I told her what it was and what it did.

And? You're just saying such things are impossible to explain for you, or impossible to explain to people you know.

But consider how complex the world is. The alphabet has 26 letters, all of which are impossible to explain. We have complex books, simple books, some people even drive cars with all sorts of levers and buttons and many dozen street signs to learn. They learn instruments and raise children, all sorts of complicated stuff, but they can't learn what a button does?

How about we simplify English, remove 95% of the words, because some people never use them?

I keep coming back to thinking that it's the gap between dev and "mainstream" hardware.

Forget about phones for the moment (lol, Javascript performance on a $99 Android).

My PC is a 6-core, 32GB of RAM beast. I suspect that most (non-PC gamers) people's machines look closer to https://www.officeworks.com.au/shop/officeworks/p/acer-aspir....

Dual core, 8GB of RAM, spinning hard drive (!)

I can tell you from experience that Windows 10 isn't a fun experience on anything other than a fast SSD.

The machine you posted is pretty typical of PC gamers according to the Steam user survey (https://store.steampowered.com/hwsurvey?platform=combined). 8GB+ of RAM only got to 50% of users a couple of years ago. I suspect most non-PC gamers' machines are even worse; the Firefox survey has 30% of people still on 4GB: https://data.firefox.com/dashboard/hardware .

> I can tell you from experience that Windows 10 isn't a fun experience on anything other than a fast SSD.

Agreed, I think MS are dogfooding exclusively on surface books.

>The machine you posted is pretty typical of PC gamers according to the steam user survey

The linked laptop still has an ancient 1366 x 768 screen, a resolution used by only 12.62% of all players that have taken the survey.

Something similar happened to a designer I was working with: The guy had his Apple Retina display Super-duper whatever, which he used to design.

At some point, he went to a developer screen to see how his designs were looking once implemented in code, and his first comment was "Why are the colors looking so bad?" ... Well, apparently the colors he chose looked great on his retina display but did not look that good on the average hardware.

After that day, we had to give him a second "normal" monitor so that he tested his designs for the layman user.

You know what's really funny, we had this exact same issue with developers.

Yeah, your UI runs fast against a local copy of that API with a local ElasticSearch instance backing it with 0% load on a $3000 MacBook; now test that bitch on a $100 Android running on 2G against AWS.

Windows certainly doesn't do a good job of running well on lower-end systems. It gets fast and reliable once you hit a certain baseline, but it's still hardly competition for Linux if you don't care about gaming.

Hardware retailers are just grabbing cash from average users. You cannot prove how well a system performs, so they just make up numbers and fancy words. Product series exist merely to nudge prices; the guy who buys the cheapest stuff is the biggest loser.

Consumers would be served better if they used Linux on their cheap systems. You only need a browser anyway. I used an early Asus Eee PC netbook for some time with Ubuntu. It wasn't really snappy, but it was on par with what my parents were used to, more reliable in the long term, and three times cheaper.

But like Linus said, Linux won't take over the desktop market unless it is widely sold with machines out of the box.

The cheap androids aren't so shabby these days. A Moto G5 play is a quad core 1.4GHz with 2GB of ram. No speed demon, but is reasonable. $109.99 on Amazon, unlocked.

Javascript/web performance on Android is universally bad but truly awful on the low end. A Moto G5 pulls ~14 on speedometer 2.0 which puts it in line with an iPhone 5/5s (5-6 year old devices). An iPhone XS scores around 123 in this benchmark for comparison, ~9 times faster.

I concur, having just purchased a Nokia 5 (for €105) and being surprised it felt so much slower than my iPhone SE. I guess it's a sign that speed/$ is flattening out. The good part about it is devices will stay competitive longer.

> I can tell you from experience that Windows 10 isn't a fun experience on anything other than a fast SSD.

Indeed, I've spent £40 on upgrading a whole load of friends' laptops to SSDs (fast SSDs are pretty cheap these days!), and the difference is night and day on windows 10.

If I want something to run fast I do all of the testing on ancient hardware or artificially constrained VMs.

I'm on 4gb ram with a spinning hard drive and I develop on it. I'd like something a bit faster but it's not a huge deal to use visual studio code, webpack etc.

Well, that's all good, but it doesn't explain how we got there. I think Moore's law resulted in a resource curse for PC and mobile. Unlike with the shared computers of the past and the cloud services of today, you pay not for the resources you use but for the resources you own. Under Moore's law those grew exponentially for quite a while, so software simply followed the trend, growing exponentially in overhead too. Technical reasons are probably secondary, since we still have real-time computing and high-performance computation in their own respective niches. It's not that all software is inherently bad; it really depends on what it is for. I believe the socio-economic reasons for the current state of software are the most interesting and the least researched.

> When dragging items on the screen, for example, users perceive latencies as low as ~2ms

This seems doubtful, given that even a state-of-the-art 240Hz screen only refreshes every 4.16ms. I guess they could compare dragging on a screen to dragging a physical object, but that would still be comparing 4.16+ms latency to 0ms latency, which doesn't explain the 2ms figure.

> This seems doubtful

Ever tried playing a fast paced game with VSync turned on?

True, the next frame can start getting drawn before the last one, but every desktop compositor I know of uses vsync.

You could say that with a 4.16ms refresh interval, it takes ~2ms on average until the next frame?
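Spelled out: an event lands at a uniformly random point within the refresh interval, so on average it waits half of it.

```python
# Average latency a fixed refresh rate adds to any event: the event
# arrives at a uniformly random point in the interval, so it waits
# half the interval on average.

def avg_refresh_wait_ms(hz):
    interval_ms = 1000 / hz
    return interval_ms / 2
```

`avg_refresh_wait_ms(240)` is about 2.08ms, which matches the ~2ms reading; at 60Hz it's about 8.33ms.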

When I read the headline I thought this was going to be something like slow food or the slow movement in general https://en.wikipedia.org/wiki/Slow_movement_(culture) and arguing for software to be slower.

I'm glad I was wrong.

My understanding of Slow Food is that it isn't necessarily about "slowness" so to speak. More about keeping things close to nature and not interfering with them / overprocessing too much. Something similar would make sense for software (e.g. using lower level languages, less extraneous JS on websites etc)

turn on, tune in, NOP out

I spent a decade working in engineering at Intel. Basically all design work (as of 2016, and for 10+ years prior) is done over VNC to Linux servers in data centers.

The experience was barely tolerable over 100mbit Ethernet to an on-site data center, and anything less was fairly abominable for just about anything other than working in a black and white terminal.

The majority of the work is fairly graphical and most engineers rarely have the luxury of connecting to an on site data center.

Over the years the situation both improved and worsened:

Pluses:

- Circa 2010, some of us were lucky enough to get gigabit Ethernet connected to our desks.
- Circa 2010-2015, there were improvements to the VNC protocol, like JPEG and Zlib compression, that helped a lot in bandwidth-constrained situations (nearly all of them).
- Circa 2014, a lot of us got 802.11ac-capable office APs and laptops, often pushing 300+ Mbps reliably.

Minuses:

- The company shut down a bunch of datacenters and set up "hub" sites, making most of us work over high-latency WAN links even in the office.
- More and more work seemed to get organized across sites, making many of us remote to far-off datacenters even if we were local to a hub.

No one in engineering management or IT seemed to take the problem seriously. No wonder the company has floundered so much.

To me an unresponsive interface completely spoils flow and dramatically reduces my productivity.

The "user-hostile" section is kind of silly. Poor/lazy coding is also user-hostile. Requiring 20 megabytes of JavaScript frameworks because you're too lazy to figure out how to solve your feature requirements is user hostile.

Maybe call that other stuff "monetization"

On a related subject, I've noticed that my local gas station updated their software for their fuel pumps and it is terrible. For some reason the latency is very high, so high that I can't reliably type in my PIN. It literally takes 500ms - 1000ms for key-presses to register, plus it doesn't seem to cache them very well so if you go too fast it drops the key-press altogether. Finally, they changed the font to a fancy script that is difficult to read on the low resolution display.

At first I thought it must be a hardware change or something on the back-end is slower. There is another gas station of the same brand just 3 miles away and it is still running the old version of the pump software, which is fast and user friendly. In fact, it makes me want to drive that extra bit in order to not have to put up with the slow software.
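Dropped keypresses like that are what you get when input handling keeps only a single "last key" slot that a slow UI polls, instead of queueing; a toy model of the difference (pure speculation about the pump's firmware, but the pattern is common):

```python
from collections import deque

# Toy model of keypad input handling. A one-slot register loses keys
# typed faster than a slow UI polls them; a queue keeps them all.
# Speculation about the pump -- just an illustration of the pattern.

class OneSlotKeypad:
    """Each press overwrites the previous unread key."""
    def __init__(self):
        self.last = None
    def press(self, key):
        self.last = key
    def poll(self):
        key, self.last = self.last, None
        return key

class QueuedKeypad:
    """Every press is retained until the UI gets around to reading it."""
    def __init__(self):
        self.keys = deque()
    def press(self, key):
        self.keys.append(key)
    def poll(self):
        return self.keys.popleft() if self.keys else None
```

Type a 4-digit PIN faster than the UI polls and the one-slot version keeps only the last digit, while the queued version plays all four back.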

Related, dadgum.com's article "How much processing power does it take to be fast?" commenting on an arcade machine playing Defender with very low latency 30-40 years ago - https://prog21.dadgum.com/68.html

Somebody once posted a well known news website with an alternate link that loaded instantly. Anyone remember this? I don't remember if it was WSJ, or NYT or something else.

Maybe the USA Today site when accessed by EU IPs? https://eu.usatoday.com/

A couple news sites have 'text-based' versions. Became popular as a way to give people in disaster areas with spotty signal a chance to find things out.

These are some I found:

[0] https://lite.cnn.io/en

[1] http://thin.npr.org/

Oh my God. It's like 1993 all over again, only with a 1000x faster "modem" and it is wonderful.

how do I get my browser/OS to modify link URLs when I go there so it prepends the right "lite" URL?

In theory, AMP pages should all load instantly, and most news websites have AMP pages.

A lot of HNers hate AMP for other reasons, though; I think mostly Google's insistence on using their CDN for it, and for whitelisting mostly their own scripts for use in AMP.

AMP loads "instantly" because Chrome preloads it, not because they're very fast. A simple page like [1] loads over a dozen JS files (including stuff like "amp-analytics"), which expand to over 1MB of code that has to be parsed.

[1] https://www.bbc.co.uk/news/amp/business-44802666

Fair, but keep in mind that AMP loads scripts async, so the page is finished loading and usable well before stuff like amp-analytics is done being downloaded.

> In theory, AMP pages should all load instantly

In practice, AMP pages take a minimum of 5-10 seconds to load for me. Maybe Google is punishing me for my uBlock/uMatrix/Pi-Hole setup.

Likely not what you had in mind but news.yahoo.com is amazingly quick to load for me.

Why aren't ads a data point in the infographic?

Ads are a necessary component of the web atm. They typically insert a delay between the user's action (clicking a button, scanning paragraphs of text with one's eyes) and the desired behavior (watching a video, comprehending which text is the article vs. advertising pictures and/or text).

So a Youtube app running on Fuchsia could become a poster child for "anti slow software" based on the author's guidelines. Yet this would only deliver the user more quickly to the problem of ad latency -- a problem which is orders of magnitude worse UX than the problems listed in the article.

It seems like inside baseball to make ad latency an externality to the core problems of slow software.

While comparing Samsung and Apple mobile device latencies, the article gives these tapping latency examples (videos slowed 16x):

- Opening a settings tab on an iPhone 6s: ~90ms of latency.
- Toggling a setting on a Samsung S3: ~330ms of latency.

I agree latency is evil; I hated Android a while ago because of this. Apple always felt really fast compared to other OSes. BUT it seems normal that toggling a setting proceeds a bit slower than just opening a tab, no? It seems like a bad example.

Isn't that just ridiculously slow animations for the most part? I still use an ancient OnePlusX that I got when it came out and I've disabled all UI animations, toggling most settings (with legitimate exceptions like activating the wifi hotspot feature) feels almost instant, certainly nothing close to 300 ms. Admittedly, I haven't used any iPhone in many years, so I can't really compare.

> Isn't that just ridiculously slow animations for the most part?

Agreed, and as the recent example with the iPhone calculator proved, Apple aren't exactly immune to this either.

Can you just disable animations on Apple devices the way you can on Android?

Yes, I think low power mode disables almost all animations on Apple devices.

Wouldn't that also disable other features as well (push notifications, maybe?)? Animations are something I disable permanently just to have a better user experience, I wouldn't like to sacrifice anything else.

Yes it would. But you can only reduce effects:

Settings -> General -> Accessibility -> Reduce Motion.

The only thing I dislike is the slightly counter-intuitive quick fading effect when minimizing or switching between apps. Outside of that though, any iDevice feels snappier with that option enabled (== effects reduced).

At first glance I thought it was the sw equivalent of the "slow food" movement. No such luck.

Link is blocked at work by McAfee Web gateway

URL category is pornography.

I have an HP EliteBook with an i5 processor, 8GB RAM, and an SSD that’s about 70% empty (only 30% space is being used). It runs the latest Windows 10 image from work and is slow as molasses. Almost every action I take, be it a mouse click or hitting a key or switching between applications, takes a few seconds or much longer. I thought it’s a McAfee issue, but the CPU usage is above 50% almost all the time and this usage is across many processes (whose names I don’t understand), not just McAfee.

Since it’s a work image of Windows, it has policies set to prevent me from changing many things.

Where do I even start troubleshooting this issue and finding the culprits? Is it just a CPU usage issue and/or some kind of I/O issue? I haven’t yet tried using something like Process Explorer (from the sysinternals tools) to get a clearer idea of what’s happening (though I’m not sure if that’d help).

I’m thinking of putting Linux on it as an alternative.

Any and all suggestions are welcome and appreciated.

Check out Bruce Dawson's blog, he works for Google and he writes articles on investigating slow application performance on Windows, e.g.


Over the last few years I’ve written over forty blog posts that discuss ETW/xperf profiling. I’ve done this because it’s one of the best profilers I’ve ever used, and it’s been woefully undersold and under documented by Microsoft. My goal has been to let people know about this tool, make it easier for developers and users to record ETW traces, and to make it as easy as possible for developers to analyze ETW traces. [..] The purpose of this page is to be a central hub that links to the ETW/xperf posts that are still relevant.

Some of my favorite blog posts are those that tell a tale of noticing some software that I use being slow, recording a trace, and figuring out the problem.

They go back a few years, he recently tweeted that Windows Performance Analyzer / ETW Trace Viewer is now available in the Microsoft Store - https://twitter.com/BruceDawson0xB/status/106039652215040819...

McAfee intercepts every IO read/write afaik, which makes it horribly slow

In my experience on a similar configuration, most of the time such slowness is because of an I/O bottleneck.

Antivirus, Windows app optimisations, and device manufacturers' software running on predefined schedules and choking the disk might be the problem.

You should check Process Explorer next time you have the issue. In the meantime, maybe check the Task Scheduler to see if some heavy read/write tasks are scheduled.

You could try swapping the ssd with a new one with a fresh install of windows or even try dual booting a fresh install of windows to see if it's a software issue after all.

I seriously doubt the vast majority of the population cares.

I mean, I do. I hate websites that take many seconds to completely load when I know they could take less than 1 second without the bloat.

Hardcore desktop gamers and developers usually are also very performance conscious but that is minority.

Sites keep piling up hits while adding bloat, which is totally counterintuitive. Why?

> I seriously doubt the vast majority of the population cares.

They do. Just because they can’t put it into words doesn’t mean that they don’t care. In the early days of iOS (and to a lesser extent today, with Android finally realizing that latency is important), many people preferred iOS to Android precisely because the former “felt smoother”.

I care. One of my worst slow system experiences is my "new" Xfinity X1 "Entertainment System." Latency to a remote control button press is on the order of a second or more. Worse than that, it will put prompts up on screen and not be ready for the input. For example, when I finish watching a recorded program it puts up a "Delete" prompt. If I press the OK button when this prompt appears, nothing happens. I have to wait a second or two and press the OK button again. When I open the list of recorded programs it can take from 2-4 (or more) seconds to display anything. The delays are long enough that I am often left wondering if it registered a button press or if I need to press again. I have to wait a minimum of 4-5 seconds to see if the system is catching up or if I need to repeat an action. It's a constant irritation when using the system. It baffles me that their flagship product is so non-performant. I guess that when you have no real competition there is no motivation to produce a better system.

I had an LG "Smart" TV for a year. It was horrible. Suppose that I want to change the channel to number 54:

1. Pick up remote. Type "5", "4", "OK". Put down remote.

2. Wait for 5 seconds. A "5" appears on the screen.

3. Wait for 2 seconds. A "4" appears on the screen.

4. Wait for 2 seconds. Channel switches.

I sold it and got a 40" screen for my PC instead. The PC actually boots faster than the "Smart" TV.

Smart TVs are generally terrible. It's usually better to just use an Nvidia Shield or an Apple TV for the smart functionality.

Manufacturers like LG and Samsung are terrible at UX/UI, and then you have others using Android TV, which is great, but pairing it with crappy low-performance SoCs, like Sony does.

Comcast DVRs are indeed terrible. I'd at least go TiVo, but these days if you have a computer hooked up to your TV and a nice wireless keyboard, you can run YouTube TV and it is _much much better_.

I think you're largely referring to sites whose value is their content rather than their workflow, i.e. sites you interact with to consume content.

Most of those sites are "free". People are willing to wait for things that they want to consume and are available for free, up to some limit.

If we're talking about websites that offer some sort of utility -- i.e. it does something for you and you need to interact with it often -- then responsiveness is likely going to be a much bigger factor.

If Gmail loads your inbox and emails in hundreds of milliseconds but Yahoo takes several seconds, and a person needs to respond to lots of emails during the day, I'm sure the person would develop a preference for the former, all other things being equal.

> People are willing to wait for things that they want to consume and are available for free, up to some limit.

Right, they are going to wait 4-6 seconds and then maybe go back if the page doesn't load.

But that's an insane amount of time which is why I think most people don't really care about bloat until they think "hey this isn't working".

But I concede that you have a point with the tools vs websites to consume information. There is no difference for me personally, loading and reading a website is part of using it.

I just did a test from the main page to here.

Clicking on the link to comments took about 4-5 seconds to render the comments.

Doing a back then forward action in the browser appeared to be almost instantaneous (tens of milliseconds).

I have noticed that many websites have become significantly slower these days. It could, of course, be due to the national monitoring scheme that is now required here in Australia.

Using Tor was a little slower to get access, by about a second; otherwise back and forward are the same.

> We hope this material is helpful for you as you work on your own software.

Sadly, as interesting as the material in the article is (great to learn about the measured latencies of the hardware part), I fail to see much that is "actionable" for a run-of-the-mill software developer.

It seems the only advice is "don't download ad / tracking / social media related stuff", but even that is not exactly in the developer's circle of influence. Who's going to make Google Analytics smaller to download? (Except, well, Google?) Is any developer really in a position to say "great news, our pages now load xxx ms faster! However, you won't be able to compute your KPIs for this semester, is that a problem?"

Also, is "using a language without GC" accessible today for a web frontend developer? (Through some Rust / wasm / whatever magic?)

The author doesn't say you should use a language without GC, but minimize its effects. There are techniques to avoid GC churn by reducing allocations and the subsequent cleanups.
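As a minimal sketch of one such technique (all class and method names here are illustrative, not from any real codebase): hoist a mutable scratch object out of a hot loop so the loop body allocates nothing.

```java
// Hypothetical sketch: reducing GC churn by reusing one mutable scratch
// object instead of allocating a fresh one per iteration.
public class GcChurn {
    static final class Coordinate {
        int x, y, z;
        void set(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
    }

    // Allocating version: one short-lived object per iteration (GC pressure).
    static long sumAllocating(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            Coordinate c = new Coordinate();
            c.set(i, i, i);
            total += c.x + c.y + c.z;
        }
        return total;
    }

    // Reusing version: the scratch object is allocated exactly once.
    static long sumReusing(int n) {
        long total = 0;
        Coordinate c = new Coordinate();
        for (int i = 0; i < n; i++) {
            c.set(i, i, i);
            total += c.x + c.y + c.z;
        }
        return total;
    }

    public static void main(String[] args) {
        // Both produce identical results; only the allocation profile differs.
        System.out.println(sumAllocating(1000) == sumReusing(1000)); // prints true
    }
}
```

The trade-off is that the reused object is mutable shared state, so this pattern needs care around threads and callers that hold references.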

I heard an anecdote about how Minecraft got much slower when Notch (the original developer) turned it over to a team of employees. The new team did some refactoring, e.g. instead of calling functions like

  doWork(int x, int y, int z)
they refactored that into

  doWork(Coordinate c)
and that's when Minecraft started eating RAM like some sort of delicious candy, because now each time you deal with a new Coordinate, it's one more object to garbage-collect. The old method may not have been particularly pretty, but plain ints are allocated on the stack and thus reduce GC pressure.

(BTW, can anyone confirm or deny that anecdote?)

Was `Coordinate` a class or a struct? C# structs generally [0][1] don't force use of the garbage-collected heap.

[0] https://blogs.msdn.microsoft.com/ericlippert/2010/09/30/the-... (ignore the usual comments telling the reader they are wrong for wondering about whether the garbage-collected heap is used)

[1] https://jacksondunstan.com/articles/3453

Minecraft is a Java application.

Derp, of course! Mention of Microsoft threw me off :-P

In the JVM, there are of course no structs, but I'd expect the escape analysis optimisations in the HotSpot JIT to eliminate the allocation and avoid any GC churn. If that isn't happening, I'm curious as to why.
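For what it's worth, here is a minimal sketch of the pattern in question (illustrative names, not Minecraft's actual code): the `Coordinate` below never escapes `sum()`, which should make it a candidate for HotSpot's scalar replacement after warm-up.

```java
// Hypothetical sketch: a short-lived object that never escapes the
// method. HotSpot's escape analysis may scalar-replace it, i.e. keep
// x, y, z in registers with no heap allocation at all.
public class EscapeDemo {
    static final class Coordinate {
        final int x, y, z;
        Coordinate(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
    }

    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            // Candidate for scalar replacement: c is only read locally.
            Coordinate c = new Coordinate(i, i, i);
            total += c.x + c.y + c.z;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(4)); // prints 18
    }
}
```

Whether the optimization actually fires depends on JIT inlining decisions, which is one plausible reason the anecdote's refactor could still have increased allocation in practice.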

I don't know about that particular anecdote, but the gist sounds very reasonable. I've seen/solved the exact same general thing in python on more than one occasion.

Yeah, that's true. After beta 1.8, coordinates started getting packaged as a class which MCP calls 'BlockPos'.

This. My last contract was working on digital TV UI software that was built using Node.js, so we did a lot of work optimizing our code and architecture to reduce latencies and load times. I ended up going down the rabbit hole of how V8, libuv and Node's GC work. It was a great learning experience.

You really can influence your app's performance if you have the attitude and determination.

Oh I absolutely agree that it can be done, I just wish the article had gone into more detail about how that's done...

I recently switched to an OpenBSD machine with no mouse. I have a large screen and use tmux -- no X windows -- and a clicky gamer's keyboard. It's soooo nice.

The only downside so far is that about ~60% of the WWW sucks through Lynx... I have a separate machine on my desk that's basically a Firefox and VSCode kiosk now. But I've gotten a dead-tree hardcopy book on Vim...

Only since working full-time with gcc hogging all my CPUs and most of my RAM for up to 10 minutes at a time (and slowing down the rest of the computer) have I fully appreciated having worked in dynamic languages for many years. Yes, that problem might be solvable with a beefy build box.

The difference is interpreted vs. compiled, not dynamic vs. static typing. You can use a C interpreter to instantly run C code without a compile step.

Of course you're right, but "dynamic" is equally often used to mean interpreted scripting languages. And I don't like the term "scripting languages" either. Apparently naming is hard; who knew.

At the app level, I'd say it's about responsiveness not latency. Instagram had a post a while back about how they cheat a bit to make their app feel responsive even if actual latency was high.

From this perspective, one could say this article puts too much focus on the raw numbers: how many ms to a response. As techies, we like that: cheating is cheating, and numbers are important. But really we need to look harder at how we can make users perceive that things are better than the raw numbers suggest.
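A minimal sketch of the kind of "cheat" being described is an optimistic update: reflect the user's action immediately, then reconcile with the server later. All names below are illustrative assumptions, not Instagram's actual code.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of an "optimistic update": flip the UI state
// immediately so perceived latency is near zero, then reconcile with
// the (slow) server response in the background, rolling back on failure.
public class OptimisticUi {
    static String state = "not liked";

    static CompletableFuture<Void> like(CompletableFuture<Boolean> serverAck) {
        final String previous = state;
        state = "liked (pending)"; // instant feedback for the user
        // Reconcile when the server answers; roll back if rejected.
        return serverAck.thenAccept(ok -> state = ok ? "liked" : previous);
    }

    public static void main(String[] args) {
        like(CompletableFuture.completedFuture(true)).join();
        System.out.println(state); // prints "liked"
    }
}
```

The raw round-trip latency is unchanged; only the perceived latency drops, which is exactly the distinction the comment is drawing.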

I hope devs can fix the racing wheel latency first, so we can enjoy those racing games.

Which racing wheel have you tried? Was it implemented as a HID device (subject to the polling rate limitations described in the article) or with its own driver?

I didn't buy a single one, since I have seen many YouTube videos and all seem to have latency (yep, HID devices). By latency I mean: when you turn the wheel, it takes about 1/3 of a second to be reflected in the gameplay. It is very hard to pinpoint the slowest part -- probably the game itself -- but I really wish that would be solved.

There's slow as in laggy, and slow as in tempo.

I remember how I loved my slow HP 48: the input buffer was still listening, and I could easily think and keep typing operations while the screen was busy. It never felt "slow".

You will never massage and cram all the remote cruft efficiently enough to get back the responsiveness of an Apple II. Today’s software is turtles all the way down from the library to the sub-library to the JIT to the application and OS and hardware and they each need 5-25 milliseconds to even wake up. That’s before you even hit the network.

Things don’t get bad, they get worse.

> Today’s software is turtles all the way down from the library to the sub-library to the JIT to the application and OS and hardware and they each need 5-25 milliseconds to even wake up.

Where do you get these numbers? I have programs that run from start to finish in 5 ms, including tons of OS syscalls.

I think many of us forgot how efficient OSes are because of the shitshow that app developers put on top.

I have a Go program that parses command line arguments, loads and parses a text file, then interprets that code on a faux-CPU. Total time 24ms start to finish for 107 faux-CPU steps (sorting 12 numbers of bunches of 3) according to `time`.
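For comparison, a crude in-process way to take the same kind of measurement (a sketch with a stand-in workload, not the commenter's actual program):

```java
// Hypothetical sketch: measuring the wall-clock time of a small workload
// in-process, similar in spirit to wrapping the binary with `time`.
public class Timing {
    // Stand-in workload whose duration we want to measure.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long sum = workload();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("sum=" + sum + " took " + elapsedMs + " ms");
    }
}
```

Note that unlike `time`, this excludes process startup (and, for the JVM, runtime initialization), which is often the dominant cost for short-lived programs.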

Mine is also a Go program that reads several files and does some basic computations on top of it. I wrote about the `time` measurements here: https://blog.bethselamin.de/posts/latency-matters.html


We detached this subthread from https://news.ycombinator.com/item?id=18508236.

You went beyond incivility and crossed into harassing another user in this thread. I won't ban you for it because I saw your apology below, but please don't do anything like this on HN again.

A single comment that was half as aggressive as your first one would already have been more than enough, even if it wasn't completely off topic.


Years ago I disabled github notifications and ignored pull requests because honestly, 95% of them are crap. So many of them degrade performance, break unrelated features, or are incompatible with certain platforms. I don't have the time or inclination to sift through them all to figure out which 5% are good, especially since I'm not getting paid for it. Sorry.


As the maintainer of ripgrep, I'd like to respectfully ask you not to harass other maintainers of open source projects about how they spend their free time. I've been where ggreer is and it's not pleasant. I completely understand their position, and a little empathy on your part would go a long way. In an ideal world, the maintenance status of a project would be mentioned in the project README, but speaking from personal experience, this is seemingly difficult to do in practice.

In what way is updating README.md difficult? Please have as much respect for contributors and potential contributors as you seem to have for maintainers. They're all people after all and their free time is equally important.

In theory, I agree. But again, speaking from personal experience as the maintainer of several somewhat popular projects, it is actually emotionally difficult to update the maintenance status. It appears trivial from the outside, and to some extent I agree with what you're saying about being respectful of others' time, but we should nevertheless do our best to be kind to others. Calling out a maintainer in an off topic comment is not being kind.

You are completely right. I apologize.

I respectfully disagree with you. I wasted perhaps 10 hours of my life or more on this PR (pointlessly rebasing it a few times). 95 other PRs are outstanding. There are many ways to improve this situation. I pointed two of them out:

    1. Declare that you won't accept (or even review) them, preferably at the top of README.md *
    2. Give a second person merge status.
I have some empathy for the maintainer but I also have empathy for the prospective contributors who spent a chunk of their free time working on https://github.com/ggreer/the_silver_searcher/pulls

I've done my bit here to stop wasting people's time globally. I don't consider a maintainer's time more valuable than a contributor's time; that's elitist nonsense.


* The likely side-effect is an unblessed fork (or people not wasting their time; or if they do, well, they didn't read the README.md and can't be too annoyed at anyone but themselves).

Come on dude, I just want to enjoy my long weekend. Please stop bothering me by creating spurious issues on my GitHub repo.

OK sure, but would you consider merging one PR that explains that people should not expect PRs to get reviewed? I'm happy to do the work if you'll review and merge it.

That way we can avoid unfortunate incidents such as this in future.

Other people want to modify my project, usually in ways that break it, and somehow I am obligated to provide my time and skills to them for free? I don't understand where you're coming from at all.

You don't want to waste your own precious time but are more than happy for many other people to waste theirs?

Also you merged commits starting from 24 Dec 2011 and ending on 18th August. Presumably you saw value in those commits?

ggreer: I have no grudge; hopefully ag will continue to be a success for its users, but it'd be lovely if people could contribute to making it even more of a success. I believe there are some easy ways to achieve that (expanding the core team, if you were amenable, or somehow occasionally explaining why PRs are not getting looked at), and some not so easy ways (CI with good test suites and benchmarks).

I apologise for my overly aggressive comments. I did not handle it well (at all) that I really hoped to help out on ag and got frustrated by the experience.

Have a nice weekend.
