Android share menu: what on earth takes 500ms-2000ms to display a menu??
Android scrolling: I could have smoother scrolling if I rendered the view on a wall in Doom III. Why on earth is the scrolling process not prioritized above everything else?? iOS got this right.
Microsoft Outlook on Mac: it can easily take 2000ms to close(!) a window, are you kidding me?
My new 2018 Touchbar Mac does not "feel" any faster than my old 2013 retina Mac, while the paper specs and benchmarks show at least a 100% increase in computational power. They can both equally easily choke on some unwanted Skype update or stupid WebEx video stream (yes, I work in corporate).
I find my Pixel (v1) surprisingly fast given its age and have no plans to replace it currently, although the share menu slowness is weird. It happens any time the device has to enumerate the list of installed apps (like loading the apps page in settings), which bizarrely it doesn't seem to cache.
Not to mention the awful keyboard and non-upgradable parts.
Outlook is a nightmare - heck, the Office suite has been awful essentially since its rewrite away from being PPC-only for Office 2008.
Not only did we have to wait way too long for a native Intel / Universal Binary version, but when it arrived, it was slow and clunky on both architectures.
But worse than office - our entire team is now being migrated to cloud-based email that virtualizes itself in an IE window, and responds to clicks and scrolls about 40% of the time.
It looks awful, it performs awfully; I've never experienced quite so terrible a piece of software. Weird, virtualized Office 365 - unless you're on Windows, I couldn't imagine this working for you.
On my new 8th gen i7, restoring a Skype for Business window (from the task bar, so it's already running), I've measured it at sometimes in excess of 10 seconds to restore. No other application I use exhibits such horrible performance.
I can see the typing latency in the latest version of Outlook, too, when composing an email. And don't even get me started on web-based Outlook, the performance is abysmal.
Not sure what's going on at Microsoft lately, but something very, very wrong has been happening for 3-4 years now.
Last employer moved to McAfee "because active malware protection". Basically, AMP is a set of rules you can apply to disk accesses per application -- like for instance, "no application can delete PDF files from My Documents" (this is one of the "anti ransomware" rules).
It wasn't too bad with just the signature-based virus scan, but the updater and AMP were horrendous. The PCs (3.6GHz 8-core Xeon workstation with SSD, 16GB+ RAM and a ludicrously powerful 3D card) went from booting in 30 seconds to taking 15 minutes to boot. Eclipse took another five to start. When AMP was deployed to the JIRA server, JIRA refused to start (Atlassian Support suggested AV exceptions which IT refused).
IT response: close out any AV related ticket with "You will not be receiving a hardware upgrade and the AV is mandatory."
Six weeks later, IT was outsourced and the response became "we don't have permission to change AV settings" (BigCo politics).
Four weeks after that, the electronics lab was crippled as LabVIEW got detected as malware by AMP.
A fortnight after, half the technical team handed their notice in.
It wasn't the only reason this FTSE100 was constantly outrun by its competitors, but it was certainly a contributing factor.
I was thankfully able to strip McAfee out because it was monstrously terrible.
Still, corporate IT has enforced a browser plugin and tray app called Triton AP-Endpoint and Triton Forcepoint Endpoint.
Its sole purpose is to block you from moving any sensitive data to external drives. Up until now, I have had zero problems moving any materials to any drives anywhere, so I don't think it works very well. It does, however, chew through my 2015 MacBook Pro battery and cause the fan to run nearly continually, at times even overheating.
I think I could remove it, too—but am mildly concerned they'll get a notification and come start inspecting things.
And yet... it's like scaffolding in NYC. Absolutely useless. But if you are all for removing it, and a brick falls and hurts someone, heads will roll. Quite a quandary I - and other C-levels - face.
Another contrarian passion of mine.
Bricks do fall and hurt people. But no more often than the scaffolding itself falls, with the same effect.
It totally makes sense to lock down machines that can access production, but for development? Just let people use what they like. You'll have less work for IT, happier developers, and an easier time recruiting talent.
Macs do not allow running unsigned software by default, and no one runs antivirus on Mac, ever. So even if they were commonly being infected, which they aren't, the person above would have organizational indemnity if someone were infected because they're following industry best practices by not running antivirus on Mac.
If you want, you can further restrict Macs to only App Store software, which is heavily sandboxed. Then you can go even further by not allowing the individual users to install software on their own, if you really want to be draconian about it.
Unless someone is being individually targeted, running very outdated software, or is intentionally trying to get themselves infected, it will not happen. Even if all three conditions are true, it's still very unlikely.
Anyone who says otherwise is just fear mongering. That same level of fear mongering could point to the dozens of pieces of malware that have been released for Linux.
I say this as someone who uses a Linux laptop for work and a Windows desktop at home. I don't have a dog in this fight. I do, however, try to stay very informed about the state of software security.
My company-issued MBP is running something called "Cylance Protect" (and "TrendMicro" earlier). And also something called "Forcepoint DLP". I have no control over any of that, software just appears and disappears. I think it's done by something called "Jamf".
I don't really care either way. The only thing I actually use on the Mac is Chrome for email/calendaring/vidconf and some intranet sites. Actual work is all on Linux servers via ssh (and even that has "ClamAV" antivirus running). So I'm just using it as an expensive terminal/chromebook.
We also have the Forcepoint nonsense.
I work at a large company (also not one of the famous Silicon Valley tech companies), and if you get a MacBook, they are managed remotely and have BitDefender installed.
Also, p4merge is the best merging utility for any VCS, and I always install it alongside git if I work on Windows.
What makes it "the best"? Honest question; I found it kludgy and settled on KDiff3.
Enterprise volume licenses. It probably cost them less than the time/money they'd lose if you spent a few minutes trying to figure out how to open some file sent by non-devs.
Nice graphical tooling and sane developer experience.
Perforce integrates very nicely with a whole bunch of third party tools in a way git does not, and is on the whole a lot easier to use for most people than git (and I'm saying this as someone who doesn't like Perforce at all)
Said as someone who likes perforce. Having hundreds of developers working with large binary files was incompatible with git until very recently.
Maybe 5+ years ago?
All my actual work was done on Unix/Unix-like machines on our own old network, something we clung on to after acquisition.
I used to really like Mac OS X, but nowadays it feels much less polished and much more annoying. It might just be nostalgia, but I remember Leopard being more responsive and having fewer pop-ups (Screw you, iCloud! I don't want to synchronise my files!).
Additionally, not sure about you, but the reason I like to develop on Macs is that I can test *nix software on them. That means I am installing and running *nix binaries either via the browser or through package managers such as pip. There is absolutely a non-zero risk that Linux malware will somehow find its way into my development environment. I am not trying to fear monger, as I do believe Macs are still generally the safest, but don't let them lull you into a false sense of security
To this day, Macs are practically virus-less, which they always were (99.999% of the scares in the media were for trojans, and even those at worst affected something like 1-5% of the total user base) -- nothing like the good ole Windows (XP and pre) days where after 1 day surfing the web you'd have a few viruses.
And of course if you go with the default options (gatekeeper, signed packages, etc) you have even less to worry about.
It's also not about "market share" -- Macs had 1/4 the market share they have now in 1990-1997, but there were tons of viruses for them under the old OS.
It's not like the original (pre-many security features were introduced) OS X was specially hardened or anything, but it was much more secure than Mac OS and the old Windows versions just by having a basic UNIX-style design.
Exactly what I'm talking about. Not to this day, but to some time back in 2015. Right now any trojan toolchain on the black market comes with a Mac-targeted package.
This is a question of shifting liability and sharing responsibility. If a brick falls when you knew the facade needed maintenance, then the liability falls solely on the building's owner. If the scaffolding falls, then the liability is borne by the scaffolding company, or at least shared.
If people you respect are pointing towards antivirus protection, you might also want to inquire whether they are saying this purely for technical reasons (i.e. attack surface, which could be debated), or if there are financial risk management factors tipping in this direction.
Since you're picking the OS for everybody in your company, which presumably includes multiple departments and staff who are non-technical, it seems like madness that you'd let them run amok without some level of antivirus.
But -- leave the poor developers alone. One hopes that the company was capable of hiring technical staff practicing basic day-to-day security hygiene.
Not trying to be hostile.. but why aren't you? I've never worked anywhere that required anti-virus, so I know there are jobs out there that don't require it. In recent years I've gone so far as to take the stance that I won't use company computers at all, only my own, and I still haven't had any problems finding work.
Unless you have strict restrictions on switching jobs (eg. H1B, can't move for reasons, bad network connections so no remote work, etc.) nothing should keep you from finding better working conditions.
But big companies have internal security teams which basically handle all of this behind the scenes (until you get roadblocked for weeks/months when they come out of the woodwork to make your new product secure - a very necessary annoyance)
No, most people would rather install an IT certified anti-virus on their systems and keep the customer, than lose the business opportunity.
Didn't think so
But before I did that, the one-before-last place I tried to work was this hostile environment where everything was Windows- and MS-based, as far as what we were meant to use for work. I couldn't bring my own lappy.
I ended up writing an AutoHotkey script that would get mouse scrolling about 80% sane and manage my clipboard.
I set up a VM on our HPC cluster, on which I'd do my actual work by way of VNC and sometimes SSH. The LAN was OK, so it ended up being less laggy than Windows on my local machine.
But I suppose a local Q frontend to a VM hosted on my work lappy would have worked too. Virtualize the AV away, yeah.
I used to run AlwaysMouseWheel to fix up focus scrolling, but forgot to set it up after my last reinstall. Thanks for the reminder! :)
As for the other stuff, ugh. I've made it a general rule not to work anywhere where I don't get root on my own box.
Do an internet search for TreeUp.
At home I run KDE Neon; when people see me use that laptop (1-year-old Asus, Core i5, 8GB RAM, standard SSD) they always comment how snappy and fast everything is and ask me what laptop I use. Even my neighbor with his brand new Win10 desktop with an NVMe drive and a new i7 CPU.
Depending slightly on the company, that is often a complete and utter waste of time.
I do this in my case but for different reasons and not performance.
I would take measurements, but JIRA prohibits benchmarking for some reason... ¯\_(ツ)_/¯ they're probably just trying to save everyone else the embarrassment of seeing how incredibly fast JIRA is compared to their own sluggish offerings, right?
Many companies just feel no impetus to write fast software, or to use commensurately powerful hardware.
Meanwhile my browser can load, render, and scroll a 4000 line colored diff in less than half a second (e.g. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...)
When you look at Jira using Chrome DevTools there are these big batch.js and batch.css files (up to 5MB in total) that are different for every type of Jira view (dashboard, agile board, issue detail, issue search, …), so the first few page loads might be a bit slow, but then they should all be cached, since they don't change until you update your Jira, and everything should be smooth. Except that wasn't what I saw; they seemed to be reloaded every hour or so.
Naturally I blamed Jira, and wasted hours Googling for batch.js not being cached, but eventually came to the conclusion that it can't be Jira's fault, and it isn't. Turns out it's Google Chrome's cache backend that's used on Linux (and only there). There are three issues with it:
1. it's limited to about 320MB even if you've got 1TB of free space
2. entries are evicted by age times size, ignoring the number of hits
3. media files such as 2MB YouTube fragments use the same cache
The result is that watching YouTube for a while evicts all the cached batch.js. To make this bearable, I enabled gzip in the reverse HTTP proxy in front of Jira, which brought batch.js down to 500KB so that it's smaller than the YouTube fragments and isn't evicted sooner than they are. Still, a few hours of watching YouTube and not visiting Jira evicts it. Increasing the cache size using "google-chrome --disk-cache-size=2000000000" helps as well.
Oh and here's the link to Chromium issue tracker: https://bugs.chromium.org/p/chromium/issues/detail?id=617620 :-)
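If you want to sanity-check how much the gzip trick saves on your own instance, here's a rough sketch using Java 11's built-in HttpClient (the Jira URL below is a placeholder, not a real endpoint). HttpClient does not transparently decompress responses, so the body length roughly reflects what actually travels over the wire for each Accept-Encoding value:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class BatchJsSize {
        public static void main(String[] args) throws Exception {
            // Placeholder: point this at your own Jira instance's batch.js.
            String url = "https://jira.example.com/s/batch.js";
            HttpClient client = HttpClient.newHttpClient();

            for (String encoding : new String[] { "identity", "gzip" }) {
                HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                        .header("Accept-Encoding", encoding)
                        .build();
                // No automatic decompression here, so body().length is (roughly)
                // the on-the-wire size for this Accept-Encoding value.
                HttpResponse<byte[]> response =
                        client.send(request, HttpResponse.BodyHandlers.ofByteArray());
                System.out.printf("%-8s -> %,d bytes (Content-Encoding: %s)%n",
                        encoding, response.body().length,
                        response.headers().firstValue("Content-Encoding").orElse("none"));
            }
        }
    }

If the gzip request comes back an order of magnitude smaller (say ~5MB down to ~500KB), it's also far less likely to be evicted before the next run of YouTube fragments, per the eviction behaviour described above.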
But my key anecdote is about loading the notifications sidebar after the page has loaded, there shouldn't be any customizations in there. But I dunno, maybe Jira gives managers a bit too much rope with which to hang their development teams.
This reinforces for me that JIRA should always be self hosted.
At a previous job we used Jira Cloud and it was terrible.
No it does not: it clearly makes the point that due to device, hardware, OS, and framework latencies your application has very little time budget in which to respond and still feel fast to users. This will necessitate considerable effort in terms of architecture. I don't see how that's under-emphasising it.
To have a useful discussion on architecture within the article they'd have to talk about the specifics of different software and applications, which obviously vary considerably according to the requirements for said software and applications.
The danger with that sort of thing is that some people will say, "well, if we use architecture X then our application will feel fast," disregarding many of the detailed requirements of their domain that mean X won't always (or possibly even ever) perform well.
To some extent that's unavoidable but I think this article was absolutely on point in that it makes clear the constraints within which software developers must operate, and sensibly leaves architecture as the responsibility of those software developers.
There's no obligation to always make software useful for other people. There's a lot of space between doing something manually and selling a product.
The problem, which you basically acknowledge, is that software that isn't useful generally doesn't get used. If you write nice custom scripts to make your own workflows more efficient, that's great... for you. But to make any interface "useful" (whether it's a UI, an API, etc.), it doesn't just need to be "useful", it also has to be accessible. That should help explain why even crappy UIs are so ubiquitous. They are moderately useful, but extremely accessible.
Thank you for being that sort of person.
I know some people who would take great pleasure in knowing that they would get paid for spending just that time, no matter if they achieved very little doing it.
Caching works for 99.9% of cases, but sometimes I click on an issue and it was updated just before my page rendered.
We don't use much extra stuff and it's a slightly older version (7.8), but I cannot complain so far.
Load times are atrocious. It's the slowest website I have ever seen. Maybe facebook is slower, but I don't use that as often.
E.g. the program runs consistently otherwise, but somehow freezes for 0.5s every minute or so.
Judder is a perceived vibration or motion aliasing caused by the incoherence of consecutive frames at a fixed sample rate.
He goes into why the Apple IIe is so quick, the iOS rendering pipeline, and the general complexity of computing input.
I find it really interesting that latency is so high on a lot of devices in 2018.
0. https://danluu.com/input-lag/ (2017)
Nitpick: not really a full page load. The text part is 14 KiB, but once the website has finished downloading the images and videos it's 14 MiB. However, because it does so via lazy loading (same for the PNGs), the text parts are rendered instantly, and because it downloads all of the videos you have no buffering latency when clicking.
So yeah, near-perfect example of how to design a website for low-latency while still maintaining rich media (the thumbnail PNGs could have been JPGs to save more data, and more importantly: lazy-load even faster).
It's refreshing to see a site that loads quickly though. That would've been a bit hypocritical otherwise :p
Also, most webservers/frameworks/whatever will favor throughput over latency.
Optimising for throughput rather than latency is, again, a design choice, and the article serves as an indictment of taking that too far.
I'm not talking latency with the keyboard - I'm talking voice input when the receiver hits latency. If I'm using voice, I generally can't see the screen.
It's a source of infuriating spell-checking, like when C's int becomes nt.
Now if it's always doing that -- like a 1-2 second delay more than a few times in 10 minutes, for instance, it's terrible. But if it's a one-off, I don't mind it.
Same here! My old laptop did this and it was a thrill to see how much you could write before it started coming in at lightning speed.
Most people, including technical folks, don't complain (unless extremely agitated) because they know they can do nothing about it but accept it. That doesn't stop it from making them miserable, though.
> 10 seconds delay
Oh the irony.
Weird thing is when it does this, I'm not using all the memory and the CPU is normally sitting way below 50%.
I've no idea why it sometimes just gets unusably slow.
I get extremely annoyed when applications simply freeze without doing any important CPU task.
> A related source is delays for disambiguation. For example, on mobile Safari there's a default 350ms delay between when the user taps a link and when the browser begins fetching the new page, in order to tell the difference between a link click and a double-tap zoom.
I really wish mobile devices had one or two modifier buttons on the side. That way you could have "right click", maybe even positioning a cursor without clicking, all sorts of crazy stuff, like being able to "mouse over" a link in a mobile browser.
Look, I feel you in terms of lost functionality, but I don't when it comes to trying to explain how these systems work to my family. Android was notoriously bad because the back stack behaved totally sane to programmers, and insane to everyone else. When Android had a "menu" button on the bottom too, it was another source of confusion. "What does the menu button do?" Thinking back further, I know for a fact my mom didn't even know her phone had a trackball on it (hiya Android 2.x era), but she was mystified when I told her what it was and what it did.
I'd rather someone fix the design problem of mapping taps to different things than add more buttons that are impossible to explain.
How is that impossible to explain?
> I know for a fact my mom didn't even know her phone had a trackball on it (hiya Android 2.x era), but she was mystified when I told her what it was and what it did.
And? You're just saying such things are impossible to explain for you, or impossible to explain to people you know.
But consider how complex the world is. The alphabet has 26 letters, all of which are impossible to explain. We have complex books, simple books, some people even drive cars with all sorts of levers and buttons and many dozen street signs to learn. They learn instruments and raise children, all sorts of complicated stuff, but they can't learn what a button does?
How about we simplify English, remove 95% of the words, because some people never use them?
My PC is a 6-core, 32GB-of-RAM beast. I suspect that most (non-PC-gamer) people's machines look closer to:
Dual core, 8GB of RAM, spinning hard drive (!)
I can tell you from experience that Windows 10 isn't a fun experience on anything other than a fast SSD.
> I can tell you from experience that Windows 10 isn't a fun experience on anything other than a fast SSD.
Agreed, I think MS are dogfooding exclusively on Surface Books.
The linked laptop still has an ancient 1366 x 768 screen, a resolution used by only 12.62% of all players that have taken the survey.
At some point, he went to a developer screen to see how his designs were looking once implemented in code, and his first comment was "Why are the colors looking so bad?" ... Well, apparently the colors he chose looked great on his retina display but did not look that good on the average hardware.
After that day, we had to give him a second "normal" monitor so that he could test his designs for the layman user.
Yeah, your UI runs fast against a local copy of that API with a local ElasticSearch instance backing it with 0% load on a $3000 Macbook, now test that bitch on a $100 Android running on 2G against AWS.
Hardware retailers are just grabbing cash from average users. You cannot prove how well a system is performing, so you just make up numbers and fancy words. Product series exist merely to nudge prices; the guy who buys the cheapest stuff is the biggest loser.
Consumers would be served better if they used Linux on their cheap systems. You only need a browser anyway. I used an early Asus Eee PC netbook for some time with Ubuntu. It wasn't really snappy, but it was on par with what my parents used to use, more reliable in the long term, and 3 times cheaper.
But like Linus said, Linux won't take over the desktop market unless it is widely sold with machines out of the box.
Indeed, I've spent £40 on upgrading a whole load of friends' laptops to SSDs (fast SSDs are pretty cheap these days!), and the difference is night and day on windows 10.
This seems doubtful, given that even a state-of-the-art 240Hz screen only refreshes every 4.16ms. I guess they could compare dragging on a screen to dragging a physical object, but that would still be comparing 4.16+ms latency to 0ms latency, which doesn't explain the 2ms figure.
Ever tried playing a fast paced game with VSync turned on?
I'm glad I was wrong.
The experience was barely tolerable over 100mbit Ethernet to an on-site data center, and anything less was fairly abominable for just about anything other than working in a black and white terminal.
The majority of the work is fairly graphical and most engineers rarely have the luxury of connecting to an on site data center.
Over the years the situation both improved and worsened:
- circa 2010 some of us were lucky enough to get gigabit Ethernet connected to our desk.
- circa 2010-2015 there were improvements to the VNC protocol like JPG and Zlib compression that helped a lot with bandwidth constrained situations (nearly all)
- circa 2014 a lot of us got 802.11ac capable office APs and laptops, often pushing 300+ mbps reliably.
- The company shut down a bunch of datacenters and set up “hub” sites, making most of us work over high-latency WAN links even in the office.
- More and more work seemed to get organized across sites, making many of us remote to far-off datacenters even if we were local to a hub.
No one in engineering management or IT seemed to take the problem seriously. No wonder the company has floundered so much.
To me an unresponsive interface completely spoils flow and dramatically reduces my productivity.
Maybe call that other stuff "monetization"
At first I thought it must be a hardware change or something on the back-end is slower. There is another gas station of the same brand just 3 miles away and it is still running the old version of the pump software, which is fast and user friendly. In fact, it makes me want to drive that extra bit in order to not have to put up with the slow software.
These are some I found:
A lot of HNers hate AMP for other reasons, though; I think mostly Google's insistence on using their CDN for it, and for whitelisting mostly their own scripts for use in AMP.
In practice, AMP pages take a minimum of 5-10 seconds to load for me. Maybe Google is punishing me for my uBlock/uMatrix/Pi-Hole setup.
Ads are a necessary component of the web atm. They typically insert a delay between the user's action (clicking a button, scanning paragraphs of text with one's eyes) and the desired behavior (watching a video, comprehending which text is the article vs. advertising pictures and/or text).
So a YouTube app running on Fuchsia could become a poster child for "anti slow software" based on the author's guidelines. Yet this would only deliver the user more quickly to the problem of ad latency-- a problem which is orders of magnitude worse UX than the problems listed in the article.
It seems like inside baseball to make ad latency an externality to the core problems of slow software.
I agree latency is evil; I hated Android a while ago because of this. Apple always felt really fast compared to other OSes. BUT it seems normal that toggling a setting proceeds a bit more slowly than just opening a tab, no? It's like it's just a bad example.
Agreed, and as the recent example with the iPhone calculator proved, Apple aren't exactly immune to this either.
Settings -> General -> Accessibility -> Reduce Motion.
The only thing I dislike is the slightly counter-intuitive quick fading effect when minimizing or switching between apps. Outside of that though, any iDevice feels snappier with that option enabled (== effects reduced).
Url category is pornography
Since it’s a work image of Windows, it has policies set to prevent me from changing many things.
Where do I even start troubleshooting this issue and finding the culprits? Is it just a CPU usage issue and/or some kind of I/O issue? I haven’t yet tried using something like Process Explorer (from the sysinternals tools) to get a clearer idea of what’s happening (though I’m not sure if that’d help).
I’m thinking of putting Linux on it as an alternative.
Any and all suggestions are welcome and appreciated.
Over the last few years I’ve written over forty blog posts that discuss ETW/xperf profiling. I’ve done this because it’s one of the best profilers I’ve ever used, and it’s been woefully undersold and under documented by Microsoft. My goal has been to let people know about this tool, make it easier for developers and users to record ETW traces, and to make it as easy as possible for developers to analyze ETW traces. [..] The purpose of this page is to be a central hub that links to the ETW/xperf posts that are still relevant.
Some of my favorite blog posts are those that tell a tale of noticing some software that I use being slow, recording a trace, and figuring out the problem.
They go back a few years, he recently tweeted that Windows Performance Analyzer / ETW Trace Viewer is now available in the Microsoft Store - https://twitter.com/BruceDawson0xB/status/106039652215040819...
Antivirus, Windows app optimisations, and device manufacturers' software running on predefined schedules and choking the disk might be the problem.
You should check Process Explorer next time you have the issue. In the meantime, maybe check the Task Scheduler to see if some heavy read/write tasks are scheduled.
I mean, I do. I hate websites that take many seconds to completely load when I know they could take less than 1 second without the bloat.
Hardcore desktop gamers and developers usually are also very performance conscious, but that is a minority.
Sites keep piling up hits while adding bloat, and it's totally counterintuitive. Why?
They do. Just because they can’t put it into words doesn’t mean that they don’t care. In the early days of iOS (and to a lesser extent today, with Android finally realizing that latency is important), many people preferred iOS to Android precisely because the former “felt smoother”.
1. Pick up remote. Type "5", "4", "OK". Put down remote.
2. Wait for 5 seconds. A "5" appears on the screen.
3. Wait for 2 seconds. A "4" appears on the screen.
4. Wait for 2 seconds. Channel switches.
I sold it and got a 40" screen for my PC instead. The PC actually boots faster than the "Smart" TV.
Manufacturers like LG and Samsung are terrible at UX/UI, and then you have others using Android TV which is great but using crappy low performance SOCs like Sony does.
Most of those sites are "free". People are willing to wait for things that they want to consume and are available for free, up to some limit.
If we're talking about websites that offer some sort of utility -- i.e. it does something for you and you need to interact with it often. Then its responsiveness is likely going to be a much bigger factor.
If gmail loads your inboxes and emails in hundreds of milliseconds but yahoo took several seconds, and a person needs to respond to lots of emails during the day, I'm sure the person would develop a preference for the former if all other things are equal.
Right, they are going to wait 4-6 seconds and then maybe go back if the page doesn't load.
But that's an insane amount of time which is why I think most people don't really care about bloat until they think "hey this isn't working".
But I concede that you have a point with the tools vs websites to consume information. There is no difference for me personally, loading and reading a website is part of using it.
Clicking on the link to comments took about 4-5 seconds to render the comments.
Doing a back then forward action in the browser appeared to be almost instantaneous (tens of milliseconds).
I have noticed that many websites are being significantly slower these days. It, of course, could be due to the national monitoring scheme that is required now here in Australia.
Using Tor was a little slower to get access, by about a second; otherwise back and forward are the same.
Sadly, as interesting as the material in the article is (great to learn about the measured latencies of the hardware part), I fail to see much that is "actionable" for a run-of-the-mill software developer.
It seems the only advice is "don't download ad / tracking / social media related stuff", but even that is not exactly in the developer's circle of influence.
Who's going to make Google Analytics smaller to download? (Except, well, Google?)
Is any developer really in the position to say "great news, our pages now load xxx ms faster!! However, you won't be able to compute your KPIs for this semester, is that a problem?"
Also, is "using a language without GC" accessible today for a web frontend developer? (Through some Rust / wasm / whatever magic?)
doWork(int x, int y, int z)
(BTW, can anyone confirm or deny that anecdote?)
 https://blogs.msdn.microsoft.com/ericlippert/2010/09/30/the-... (ignore the usual comments telling the reader they are wrong for wondering about whether the garbage-collected heap is used)
In the JVM, there are of course no structs, but I'd expect the escape analysis optimisations in the HotSpot JIT to reduce it down to avoiding any GC churn. If this isn't happening, I'm curious as to why.
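For anyone curious about the JVM side, here's a minimal sketch of the non-escaping-allocation case (the Point class and loop counts are made up purely for illustration, not taken from the anecdote above). HotSpot's escape analysis is on by default; comparing runs with -XX:-DoEscapeAnalysis while watching GC output (e.g. -verbose:gc) or an allocation profiler is a rough way to see whether the allocation in the hot loop is actually being eliminated:

    public class EscapeDemo {
        // Small immutable value-like class, standing in for the "struct".
        static final class Point {
            final int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        // The Point never escapes this method, so the JIT is free to
        // scalar-replace it (keep x and y in registers/locals) instead of
        // allocating anything on the GC heap.
        static long sum(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                Point p = new Point(i, i * 2);  // candidate for scalar replacement
                total += p.x + p.y;
            }
            return total;
        }

        public static void main(String[] args) {
            long result = 0;
            // Enough iterations for the JIT to compile and optimise sum().
            for (int i = 0; i < 10_000; i++) {
                result += sum(100_000);
            }
            System.out.println(result);
        }
    }

Whether this kicks in for any particular real-world method depends on inlining and on the object genuinely not escaping, which is presumably why the "if this isn't happening, I'm curious as to why" caveat matters.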
You really can influence your app's performance if you have the attitude and determination.
The only downside so far is that about 60% of the WWW sucks through Lynx... I have a separate machine on my desk that's basically a Firefox and VSCode kiosk now. But I've gotten a dead-tree hardcopy book on Vim...
From this perspective one could say this article puts too much focus on the raw numbers, how many ms to a response. As techies, we like that: cheating is cheating. Numbers are important. But really we need to look harder at how we can make users perceive that things are better than the raw numbers.
I remember how I loved my slow HP 48; the input buffer was still listening, and I could easily think and keep typing operations while the screen was busy. It never felt "slow".
Things don’t get bad, they get worse.
Where do you get these numbers? I have programs that run from start to finish in 5 ms, including tons of OS syscalls.
I think many of us forgot how efficient OSes are because of the shitshow that app developers put on top.
You went beyond incivility and crossed into harassing another user in this thread. I won't ban you for it because I saw your apology below, but please don't do anything like this on HN again.
A single comment that was half as aggressive as your first one would already have been more than enough, even if it wasn't completely off topic.
1. Declare that you won't accept (or even review) them, preferably at the top of README.md *
2. Give a second person merge status.
I've done my bit here to stop wasting people's time globally; I don't consider a maintainer's time more valuable than a contributor's time. That's elitist nonsense.
* The likely side-effect is an unblessed fork (or people not wasting their time, or if they do, well, they didn't read the README.md and cannot be too annoyed at anyone but themselves).
That way we can avoid unfortunate incidents such as this in future.
Also you merged commits starting from 24 Dec 2011 and ending on 18th August. Presumably you saw value in those commits?
I apologise for my overly aggressive comments. I did not handle very well (at all) the fact that I really hoped to help out on ag and got frustrated by the experience.
Have a nice weekend.