Most machines I work with these days take minutes to get rolling.
Okay, I know that systems are bigger and more complicated now; buses have to be probed and trained, RAM has to be checked, network stuff needs to happen, etc., etc., but minutes? This is just industry laziness, a distributed abdication of respect for users, a simple piling-on of paranoia and "one or two more seconds won't matter, will it?"
A cow-orker of mine used to work at a certain very large credit card company. They were using IBM systems to do their processing, and downtime was very, very, very costly for them. One thing that irked them was the boot time for the systems, again measured in minutes; the card company's engineers were pretty sure the delays were unnecessary, and asked IBM to remove them. Nope. "Okay, give us the source code to the OS and we'll do that work." Answer: "No!"
So the CC company talked to seven very large banks, the seven very large banks talked to IBM, and IBM humbly delivered the source code to the OS a few days later. The CC company ripped out a bunch of useless gorp in the boot path and got the reboot time down to a few tens of seconds.
When every second is worth money, you can get results.
A headless Linux box can come up in seconds (UEFI fast boot + EFI stub), or in under a second if you're in a VM and don't have to deal with firmware startup. Booting to a lightweight window manager only adds a few seconds on top.
Here's a random video of a guy booting an R740; it takes 1:51 just to get into the BIOS: https://www.youtube.com/watch?v=CSJNTdKdTJI
IBM's QPI-linked dual servers boot even slower, as one technician explained to me. Presumably you can make coffee during the wait.
Also, ECC failures usually cause a machine check. I'm not sure if you can control this on modern machines; it might be all or nothing.
five seconds, year 2008: https://lwn.net/Articles/299483/
two seconds, year 2017: https://www.youtube.com/watch?v=IUUtZjd6UA4&t=17
The only differences between the system I used for benchmarking and my regular desktop were autologin, Chromium in the startup script, and enabling the benchmarking tool. I'll probably poke around with it tonight and see if it's any different on my new laptop. This one has a proper NVMe drive, as opposed to the SATA M.2 SSD in my old laptop.
My Windows desktop at work takes forever to get running. Even after you log in and the desktop is visible it's another minute until it's usable.
That's an exceptional case though - the GUI OS was hardwired in ROM. The Amiga was an otherwise comparable machine that had to load much of the OS from disk, and it did take its time to boot to desktop.
Not really exceptional.
It took maybe a minute to load off floppy disk. That is STILL shorter than the POST time for every machine I work with these days, with the possible exception of the Raspberry Pi in the closet.
I'm using a Dell E7440 and it's pretty quick to boot from powered off. I have a bunch of stuff turned off in the BIOS. It's my machine on my home network, not a corporate machine with all the corporate stuff.
But maybe that's the lever we need to get change: 30 seconds extra for 1000 people over 250 working days a year is over 2000 person hours being spent waiting for machines to boot.
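Sanity-checking that figure:

    # 30 extra seconds x 1000 people x 250 working days
    extra_seconds = 30 * 1000 * 250
    print(extra_seconds / 3600)   # ~2083 person-hours per year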
And that wait time for the corporate stuff is something that real people talk about. Here are a few twitter threads about people in different NHS organisations.
Some people are waiting 3 to 10 minutes, a few are waiting even longer(!!) https://twitter.com/griffglen/status/1066043840497360897?s=2...
It's nuts. The memory was fine the last time it was tested (30 minutes ago, on the last reboot). Let's just train some buses, probe some address spaces and go, okay?
On a floppy that would take a while to load. Off a hard drive, sure, not too bad.
[Note: this is the early 80s. A computer with a large amount of memory might have 64K in this period. I think a 64K ROM cost about four dollars, and 64K of RAM was about fifty bucks]
The Atari ST's operating system (TOS, and no I don't want to talk about what that stands for) was written in C and assembly language.
Initially the ST was going to have 128K of ROM; we wanted to one-up the Macintosh (which hadn't shipped yet, but there were rumors and we had copies of Inside Macintosh that were simply fascinating to read) and put both our OS and a version of BASIC in ROM. Most home computers at the time came with some version of BASIC, and the Mac did not; we were hoping that would be a differentiator. Trouble was, nobody had actually sized our software yet (the only things even remotely running were on the 8086, not the 68000 we were going to use, and Digital Research wasn't exactly forthcoming about details anyway).
So mid-October (the project started in earnest in July 1984, and FWIW we shipped in late May 1985, a whole new platform starting from zero in less than ten months) we realized that just the OS and GUI would be 128K, and that the BASIC we were thinking of using was like 80K (but could probably be shrunk). So the hardware guys added two more ROM sockets, for 192K of ROM. A month went by. Wups! -- it turned out that the OS and GUI would be like 170K, with little hope of shrinkage. No, make that 180K. Would you take 200K?
The code topped out at 210K or so, and that wouldn't even fit into the six ROM sockets we now had. No chance in hell of getting another 64K of ROM -- that stuff costs real money -- so we shrunk the code. The team from Atari came from a background of writing things that fit into really tiny amounts of ROM, so we went about this with a fair amount of glee. We got about 1K per programmer per day of tested code savings by ripping out unused functions, fixing all the places where people had "optimized" things by avoiding the expense of strlen or whatever, and coding some common graphics calls with machine trap instructions instead of fatter JSRs. For about a week, the hallway in engineering was full of people calling out to other offices, "Wow, get a load of this stupid routine!" and in a codebase that had been tossed together as quickly as GEM/TOS had been, there was no lack of opportunity for improvement. We found a fair number of bugs doing this, too.
Additionally, the C compiler we were using was not very good, and even its "optimized" code was terrible. Fortunately it had an intermediate assembly language stage, so we wrote some tools to optimize that intermediate code (mostly peephole stuff, like redundant register save/restores) and got a relatively easy 10-12 percent savings. I think we had a few hundred bytes of ROM left on the first release.
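For flavor, here's a minimal sketch of the kind of peephole pass described, spotting a register save immediately followed by its restore and deleting the pair (the 68k-ish mnemonics and the Python are mine, not the actual Atari tooling):

    # Drop a push of a register that is immediately popped back
    # ("move.l d0,-(sp)" followed by "move.l (sp)+,d0").
    # Real passes handled many more patterns than this one.
    def peephole(lines):
        out = []
        for line in lines:
            if out and out[-1].startswith("move.l ") and out[-1].endswith(",-(sp)"):
                reg = out[-1].split()[1].split(",")[0]
                if line.strip() == "move.l (sp)+," + reg:
                    out.pop()          # the save/restore pair cancels out
                    continue
            out.append(line)
        return out

    print(peephole(["move.l d0,-(sp)", "move.l (sp)+,d0", "rts"]))  # ['rts']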
I remember that 192K pretty darned well. Though they're fun to talk about, I honestly don't miss those days much; today I wrote the equivalent of
void *p = malloc( 1024 * 1024 * 1024 );
It's interesting that the original plan was to get a BASIC in there, because IMHO that really was a weak point of the ST for its target market -- which I guess included my 13-year-old self. At least for the first couple of years.
As strange as it might sound, there's a large artificial delay added between the time the service knows the answer to a customer's search and the time the answer is sent to the customer's web browser.
The reason for that delay is that without it, customers do not believe the service performed an exhaustive search, and they do not complete the transaction!
My current Windows 10 machine boots faster than my monitor turns on (<5 seconds from power on to login screen).
As to it being the user's fault, that's rather subjective. Video cards, printers, and other drivers taking forever to load is really a hardware/OS issue.
I guess it just comes down to priorities. I'm sure if PC specs included "boot time (lower is better)," we'd see boot times drop quickly.
Minutes? That sounds exaggerated.
How powerful was your Atari ST relative to other machines of its time, compared with how the machines you work with these days stack up against what's currently available?
Because I'm not even on a particularly new machine and from a powered off state I'm logged into Windows and starting applications within 5 seconds. And for example that's at 7680x2880 resolution, not 640x400.
POST on a Dell M640 is about three minutes. Other Dell systems are similar. POST on the workstations that I use is in the range of 1-2 minutes. This is before the OS gets control (once that happens, it's usually 15-20 seconds to a usable system).
The ST was a pretty decent performer at the time (arguably a little faster than the original Macintosh, for instance). Both the ST and the Macintosh took about the same amount of time to boot from floppy disk (though the OS was in ROM for the vast majority of STs that were built).
With that said, POST on my Z820 workstation probably takes a minute, even with option ROMs disabled, but that's still maybe half the time it takes a Gen8 HP MicroServer with a quarter the RAM to do the same.
On the other hand, my old IBM POWER6 server sets local records for boot (IPL) time: in "hyper" boot mode, with "minimal" testing, it still takes slightly longer than the MicroServer to turn control over to the OS; the default "fast" boot mode takes maybe five minutes to POST; and, well, I could very nearly install Windows 10 on a fast, modern desktop PC in less time than it takes to do a full, "slow" POST.
As for simply booting such a desktop, even with all BIOS and Windows fast boot options disabled and a display connected to both internal (AMD) and external (NVIDIA) GPUs, my Kaby Lake NUC takes no more than ten seconds to boot to the Windows 10 logon screen from a fast (Samsung 970 Pro) SSD.
I also have hibernation disabled, and I've never noticed any obviously large file that might be a hibernation image on my drives (I even removed a drive after a shutdown, so there was no restart during which it could have been deleted).
Still, it isn't much longer for me to restart after fully shutting down (with a more recent system and SSD), just more time in Windows and less in the BIOS (and shutdown is instant with "fast startup" turned off, which is better for me since I usually pull the plug after turning it off). About 20 seconds (counting myself, not timed).
Still not as fast as DOS + Windows 3.1, IIRC, but not too bad for turning it on once or twice a day. I have also noticed the many things that have delays now in places where there weren't any 30 years ago, but I don't think boot times are the best example of this. I might appreciate the Twitter rant if not for the completely incorrect diversion about Google Maps (you can drag parts of the route to make changes and get exact distances, better than paper maps and with much more detail about what is nearby).

IMO, computer interfaces should have a tool focus, doing basic tasks quickly and reliably so that users can learn to use them like they would a physical tool (while also not making users do extra work that could be done quickly and reliably). Now everything tries to use the network all the time, adding random delays as well as compromising privacy.
Who cares about boot time, which happens once in a while, versus actually using the interface?
My Ryzen boots within 8 seconds on a B450 Pro motherboard.
POST time is crazy bad. It's almost like the engineers working on it don't care.
Maybe I've just gotten really lucky...
Not implying that it's bad software, I'm just curious because it sounds unusual.
The example I gave above happens regularly, as I use Deepin Linux as my typical daily driver while I'm working. However, if the need to open an Adobe suite tool comes up, I can quickly swap over. Discord works fine for me on both platforms and my phone.
All in all, I don't really like Discord all that much. It's not the best at anything. But it has the advantage of being both convenient and feature-rich overall. There are better solutions out there, but none that are as convenient and also free.
I-frames are only sent, say, once or twice a second.
When a channel is switched, the TV has to wait for the next I-frame, since P-frames only encode differences from earlier frames (and B-frames from both earlier and later frames); without an I-frame, there is nothing to apply those differences to.
If you are aware of a way to do efficient video compression that avoids this problem, tell the HN audience; the really smart people who developed the video codecs apparently have not found one. ;-)
Otherwise complain to your cable provider that they do not send more I-frames to decrease the time to switch between channels (which would increase the necessary bandwidth).
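To put rough numbers on it, taking the "once or twice a second" figure above at face value:

    # Channel-change latency from GOP structure alone (assumes one I-frame
    # per second, as mentioned above; real GOP lengths vary, often longer).
    iframe_interval = 1.0                 # seconds between I-frames
    average_wait = iframe_interval / 2    # on average you tune in mid-GOP
    worst_wait = iframe_interval          # you just missed an I-frame
    print(average_wait, worst_wait)       # 0.5 1.0 seconds before video can start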
(smart systems both cache a whole bunch of this stuff and revalidate their caches on the fly while they are tuning - first tunes after boot might be slower as these caches are filled)
You can apply the encoded difference to a grey (or black!) screen, the way (IIRC) VLC does in such cases. This means that the user immediately gets a hint of what's happening onscreen, especially since the audio can start playing immediately (also, often the P/B-frames replace a large portion of the on-screen content as people move around, etc.). Surely it isn't any worse than analog TV "snow".
If it looks too weird for the average user, make it an "advanced" option - 'quick' channel changes or something.
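A toy illustration of the grey-canvas idea (the frame data here is made up, and real decoders apply motion compensation, not plain addition):

    import numpy as np

    # Start from a neutral-grey guess at the I-frame we missed, then let each
    # P-frame residual nudge the picture toward the real content.
    h, w = 480, 640
    canvas = np.full((h, w), 128, dtype=np.int16)    # grey stand-in frame
    residual = np.random.randint(-16, 16, (h, w))    # fake P-frame delta
    canvas = np.clip(canvas + residual, 0, 255)
    # Regions that get fully rewritten (moving objects, scene cuts) look right
    # immediately; the static background converges as more deltas arrive.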
But even in boxes with multiple tuners (DVRs) your solution would require tying up at least three tuners (current channel plus one up and down) which would cut down the number of simultaneous recordings that are possible. I doubt many people would like that tradeoff.
However, the biggest issue is that most boxes simply don't have more than one MPEG decoder in them.
Some of the channels I like used to be difficult to tune with my previous cable box, because it would not correctly coordinate tuning with the infrastructure, so I'd have to retune. If I left the box on such a channel and turned it off, the next time I'd use the box the screen would be black.
In the old days all the channels were analog, and used 6MHz each (may vary in your region), and channel changes were much faster.
How do you plan to obtain I-frames without decompressing them?
That's 50 cents the shareholders can pocket, and everyone has already inured themselves to the slower experience.
Isn't progress great?
BOM costs make or break mass market hardware products. You don't just add 50 cents of BOM to a mass market item without a real good reason.
I guess the question is, why is that so?
IMHO, a valid "real good reason" is fixing a product/technological UX regression. However, it seems American business practices have settled on shamelessly selling the cheapest acceptable product for the highest acceptable price. If cheaper means a little crappier and enough customers will put up with it, cheaper it is. I'm dissatisfied with it because it usually means the stuff I buy is less durable or lacking on some fit-and-finish area.
50 cents x 4 tuners, along with the other increases, is likely $5+ of added BOM cost, which could make or break a consumer product. But your point is also true: it improves the UX.
This is where innovation and Apple come in: you need to market the product with a feature that masses of consumers believe in and are willing to pay for. (Lots of people, including those on HN, often mistake innovation for invention.)
There is nothing "American" about these business practices; they're the same for European, Chinese, or Korean manufacturers. They could very well have put this feature in, but I am willing to bet $100 it wouldn't make a difference to consumers' purchase decisions. So why add $5 or more for a feature they can't sell?
But Apple has the ability to move consumers, and to charge more (packaging this feature along with others) to demand a premium. And if Apple successfully markets this feature, say with some sort of brand name like "QuickSwitch", it is only a matter of time before other manufacturers copy it.
It has nothing to do with “American Business”. Just a fact of life in a competitive market.
Is it worth spending an extra 2-5 million on tuner chips so 100 million set-top boxes can change channels faster? You tell me.
That said, it's true that there still may not be a practical solution which is better for the user than letting them wait a second.
I notice this happening when IGMP forwarding is broken in my router: channels will only play for a second or two after being switched to, and then stop. Switch times are pretty good.
This would allow rapid channel surfing, something I haven't been able to do on any recent TV.
This is a bad comment/reply - upstream wasn't complaining about the codec but about the usability (of the devices).
This can impact how someone feels about the change, but does nothing to solve the time to change problem.
One thing it does do is confirm the change is in progress. That is a subset of the time to change problem.
Many current UXs here give a poor indicator of successful input, or none at all.
Quite a few people may see their feelings about the time to change improve, because they can divert their attention away from the change knowing it will eventually happen.
A black screen is also a sign that a change is in progress. But this is exactly what my parent fortran77 complained about (https://news.ycombinator.com/item?id=21835676).
Turns out getting something like that manufactured in the quantities we were looking at is a nightmare - so it didn't happen.
Edit: Clearly it would have been easier to have one button for "Beer, burger and porn" - but that has only occurred to me now.
Mercifully, pressing the TV Source button triggered a different app that didn't crash when I pressed the off button, and in what must be the software engineering achievement of the decade, the off button turned off the screen.
However, in most cases, at least in mid-range rooms, the TV is barely bigger than my laptop so it just doesn't make sense to use it.
It might not be your ad-blocker or script-blocker; it might be your DNS settings.
Or just let me use an HDMI port.
I don't watch channels anymore, and I don't want to pay for your pay-per-view content.
"In fact, we found semen on 30% of the remote controls we tested."
Nowadays, that problem is solved by reducing actual live content to a strict minimum; everything else can be on-demand.
The cheapest would be to just use a constant neutral-grey I-frame whenever the channel flips, and update that until a real I-frame comes along, while playing the channel audio immediately. Ugly video for a second, but high-action scenes fill in faster. I'd bet that most people could identify an already-watched movie or series before an I-frame comes in, at least 80% of the time.
More expensive would be to cache incoming I-frames of channels adjacent to the viewed channel, and use the cached image instead of the grey frame. It would look like a digital channel dropping update frames during a thunderstorm for a second.
Prohibitively expensive (back then) would be to use multiple tuners that tune in to channels adjacent to the viewed channel, and then swap the active video and audio when the channel up or channel down buttons are pressed. Drop the channels that just exited the surfing window, and tune in to the ones that just entered it. Surfing speed limited by number of tuners.
Televisions still don't do this, even after more than a decade of digital broadcast, and multiple-tuner, multiple-output DVR boxes.
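For what it's worth, the bookkeeping for such an adjacent-channel pre-tune window is simple; a sketch with made-up numbers:

    # With one tuner on the viewed channel and N spare tuners, keep the
    # nearest neighbors on the dial warm so up/down switches are instant.
    def tuner_window(current, spare_tuners, n_channels):
        window = [current]
        step = 1
        while len(window) < 1 + spare_tuners:
            window.append((current + step) % n_channels)
            if len(window) < 1 + spare_tuners:
                window.append((current - step) % n_channels)
            step += 1
        return window

    print(tuner_window(50, 4, 100))   # [50, 51, 49, 52, 48]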
I've always suspected the reason it's slow is that you press the remote button, the DVR sends that to the provider, the provider has to verify that you can do what you're asking it to do, then a response comes back, and only then can the change start.
I'm not sure who all is using it now, but I used to work on the setup for Switched Digital Video. If nobody in your neighborhood was watching a certain channel, it would stop getting broadcast. That freed up bandwidth for other things, like internet. Once you tuned to such a channel, a request would go to the head-end, which would quickly figure out whether the channel was being broadcast in your area. If not, a request would go to the content delivery system to start feeding it to the QAM; the head-end would then obtain what frequency the channel was on and finally relay that back to the set-top box, which would tune and start the decoding process.
Rather impressive tech, but again, it would add a bit more latency to switching to that particular channel.
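A schematic of that tune sequence (names and numbers here are hypothetical; real SDV uses dedicated signaling protocols, not Python objects):

    # The extra round trip to the head-end is exactly where the added
    # channel-switch latency comes from.
    class HeadEnd:
        def __init__(self):
            self.active = {}                      # channel -> QAM frequency
            self.free_freqs = [555, 561, 567]     # MHz, made-up values

        def tune_request(self, channel):
            if channel not in self.active:        # nobody nearby is watching
                self.active[channel] = self.free_freqs.pop()  # start feeding a QAM
            return self.active[channel]           # relay frequency to the box

    head_end = HeadEnd()
    print("tune decoder to", head_end.tune_request("HBO2"), "MHz")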
Nowadays there's youtube - but it's hardly the same thing.
Nowadays there are many layers of varying quality, and software reactivity is a function of so many things.
i don't know anyone who does that, but it is damn trivial.
(former middleware engineer)
> i don't know anyone who does that, but it is damn trivial.
Getting this standardized in an international standard is far from trivial.
Many of the strongest AND weakest consumer technology experiences are influenced by standards in some way.
Positive examples: plugging in a USB headset, electrical sockets, SMS.
Negative examples: Trying to hook up your laptop in a random conference room, transferring a large file locally between 2 devices from different vendors.
Maybe, with ever-increasing complexity and capability, the annoyances fall between industry actors, and solutions would first need multiple parties acknowledging the problem, followed by successful coordination.
Pretty much anything cable and satellite TV companies do is against the user. There is very little (if any) innovation in the industry, and that's why they will eventually die.
That said, I'm not sure that counts for the latency by itself. Pretty sure it doesn't. :(
The answer: put an ad in there.
And I'm not even talking about the Netflix "app" that's on there. Holy s#!t that's slow. Or the TV-guide. They now resort to teletext because that's much faster... I mean...
A counterexample: in 1983, enter two search terms, one of them slightly misspelled or misremembered, hit f3: "no results", spend 10 minutes trying to find the needle in the haystack, give up and physically search for the thing yourself.
Enter two search terms slightly incorrectly now: most of the time it will know exactly what you want, may even autocorrect a typo locally, and you get your accurate search results in a second.
When things were faster 30+ years ago (and they absolutely were NOT the vast majority of the time, this example cherry picked one of the few instances that they were), it was because the use case was hyperspecific, hyperlocalized to a platform, and the fussy and often counterintuitive interfaces served as guard rails.
The article has absolutely valid points on ways UIs have been tuned in odd ways (often to make them usable, albeit suboptimally, for the very different inputs of touch and mouse), but the obsession with speed being worse now borders on quixotic. Software back then was, at absolute best, akin to a drag racer - if all you want to do is move 200 meters in one predetermined direction, then sometimes it was fine. Want to go 300 meters, or go in a slightly different direction, or don't know how to drive a drag racer? Sorry, you need to find a different car/course/detailed instruction manual.
Want to give me fancy autocorrect? Fine. But first:
* Make a UI with instant feedback, which doesn't wait on your autocorrect
* Give me exact results instantly before your autocorrect kicks in
* Run your fancy slow stuff in the background if resources are available
* Update results when you get them... if I didn't hit "enter" and got away from you before that.
It's not that complicated. We've got the technology.
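A minimal sketch of that pattern, with made-up program names: exact matches are published instantly on every keystroke, while the fancy matcher runs in the background and updates the results whenever it finishes:

    import threading, difflib

    PROGRAMS = ["notepad", "notepad++", "paint", "powershell", "putty"]

    def exact_results(query):
        return [p for p in PROGRAMS if query in p]       # instant, never blocks

    def fancy_results(query, publish):
        publish(difflib.get_close_matches(query, PROGRAMS))  # slow path, arrives later

    def on_keystroke(query, publish):
        publish(exact_results(query))                    # feedback first
        threading.Thread(target=fancy_results, args=(query, publish)).start()

    on_keystroke("notepda", print)   # [] now; ['notepad', 'notepad++'] when ready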
And also, there's still no fucking reason a USB keyboard/mouse should be more laggy than their counterparts back in the day.
Either way I'm not sure it rises to the level of indignation shown here.
There's no good reason for a lag after hitting the "Start" button.
There's no good reason for a lag in the right-mouse-button context menu in Explorer (this was a "feature" since Windows 95, however).
I could go on for a long time, but let's just say that Win+R notepad is still the fastest way to start that program, because at least the Win+R box wasn't made pretty and slow (and it still has a history of sorts).
The search box behaves in truly mysterious ways. All I want it to do is bring up a list of programs whose name contains the substring that I just typed. It's not a task that should take more than a screen refresh, much more so in 2019. And yet, I still have no clue what it actually does - if it works at all.
Check out this specific part of his postings: https://i.imgur.com/Roz80Nd.png
The main idea I got from his rant was that we have mostly lost the efficiencies that a keyboard can provide.
In light of that, I think it's less that we've "lost" keyboard-driven efficiency as much as knowingly sacrificed it in favor of spending UI/UX dev time on more generally desired/useful features. The nice thing about being the type of power user who wants more keyboard functionality is that you can often code/macro it yourself.
We could really do better on the latter category here in the 21st century.
I can't agree with this enough. The whole of web development in general really grinds my gears these days. Stacking one half-baked technology on top of another, using at least 3 different languages to crap out a string of html that then gets rendered by a browser. Using some node module for every small task, leaving yourself with completely unauditable interdependent code that could be hijacked by a rogue developer at any moment. And to top it all off now we're using things like Electron to make "native" apps for phones and desktops.
It seems so ass-backwards. This current model is wasteful of computing resources and provides a generally terrible user experience. And it just seems to get worse as time passes. :/
It’s funny, in a way, because the “problem” with straight HTML is that it was straight hierarchical (and thus vertical) and so a lot of space was wasted on wide desktop displays. We used tables and the later CSS to align elements horizontally.
Now on phones straight html ends up being a very friendly and fast user experience. A simple paragraph tag with a bunch of text and a little padding works great.
I've been using Sublime all week and it feels like an engineering masterpiece. Everything is instantly responsive. It jumps between files without skipping a beat. My battery lasts longer. (I don't want to turn this into an editor debate, though. Just a personal example.)
If you would've asked me a month ago, I would've said that engineers cared too much about making things performant to the millisecond. Now, I would say that many of them don't care enough. I want every application to be this responsive.
I never realized how wasteful web tech was until I stopped using it. And I guess you could say the same for a lot of websites – we bloat everything with node_modules and unnecessary CSS and lose track of helping our users accomplish their goals.
I have been arguing about this since maybe 2016 or earlier. On HN, the echo chamber was about how much faster VSCode is compared to Atom ("I can't believe this is built on Electron!"). I tried every VSCode update, and while it was definitely faster than Atom, it was nowhere near as fast as Sublime. And every time this was brought up, the answer was that people felt no difference between VSCode and Sublime, or that VSCode is fast enough that it didn't matter.
The biggest problem of all problems is not seeing it as a problem.
I hope VSCode will continue to push the boundary of Electron apps; they had a WebGL renderer, if I remember correctly, that was way faster. Not sure if there are any more things in the pipeline.
The speed of feedback (how fast you can go through the basic cycle of writing code and testing that it works as expected) is the speed at which you can develop, and the editor is a pretty critical part of that.
Since I haven't done much with Java itself, the build times weren't as impactful on me.
What made a big change was how IntelliJ, despite being a pure Swing GUI, had an order of magnitude lower latency, from things as simple as booting the whole IDE to operating on, well, everything.
Then I switched from Oracle to IBM J9 and I probably experienced yet another order of magnitude speedup.
You can do likewise with VS Code or other environments, except maybe some plugins are not installed by default.
In the end it boils down to: how do we define an IDE? And even if it is about bundled capabilities, I would still be able to create a "dedicated" (would not need much modification) Linux distro and declare it to be an IDE.
It was easier to distinguish IDE from other things in the MS-DOS era.
More relevant to the article, I fully agree with the author's frustration at trying to do two parts of the same task in Google Maps; it's entirely infuriating.
Edit: duplicate submission: one directly on Twitter, this one through the threaded reader. The other submission has >350 comments: https://news.ycombinator.com/item?id=21835417
In 1998, I used https://www.mapquest.com/ to plan a road trip a thousand miles from where I was living, and it was, at the time, an amazing experience, because I didn't need to find, order and have shipped to me a set of paper road maps.
In the 1970s, when I had a conversation with someone on the phone, the quality stayed the same throughout. We never 'lost signal'. It was an excellent technology that had existed for decades, and, in one particular way, was better than modern phones. But guess what? Both parties were tied to physical connections.
Google Maps is one product, and provides, for the time being, an excellent experience for the most common use cases.
> amber-screen library computer in 1998: type in two words and hit F3. search results appear instantly
So that's a nice, relatively static and local database lookup, cool.
I wrote 'green screen' apps in Cobol for a group of medical centers in the early and mid 90s. A lot of the immediate user interface was relatively quick, but most of the backend database lookups were extremely slow, simply because the amount of data was large, a lot of people were using it in parallel, and the data was constantly changing. Also: that user interface required quite a bit of training, including multi-modal function key overlays.
This article has a couple of narrow, good points, but is generally going in the wrong direction, either deliberately or because of ignorance.
1) I can't manipulate data in my result set well enough in Google Maps.
2) Searches are too slow.
3) Mousing is bad.
Now, you can argue that those are related.
The first two are an argument for moving away from full-page post/response applications to SPA-style applications where the data is all in browser memory and as you manipulate it you're doing stuff on the client and pulling data from the server as needed, desktop style.
The latter? I don't know why he had to go back to DOS guis. Plenty of windowed UIs are very keyboard friendly. Tab indexes, hotkeys, etc.
> GUIs are in no way more intuitive than keyboard interfaces using function keys such as the POS I posted earlier. Nor do they need to be.
This is where he loses me. I remember the days of keyboard UIs. They almost all suffered from being opaque. You can't say "the problem is opaque UIs" when that describes the vast majority of keyboard-based UIs.
While there are obviously ways to create exceptions, GUIs are intrinsically more self-documenting than keyboard inputs, because GUIs require that the UI be presented on the screen to the user, and keyboard inputs do not.
I think the part you quoted is an interesting problem. I agree with his statement that GUIs are not necessarily intuitive; however, I do believe they are easier to pick up than keyboard inputs. Seeing older people try to navigate websites shows me how much of the "intuitiveness" I take for granted is actually just experience. I think the focus, though, was that the "intuitiveness" of a GUI is not worth the loss in efficiency, especially when we have the capability to combine the two interfaces into one system.
One thing he said stood out to me: that including a mouse interface on a primarily keyboard-based system is much better than trying to add keyboard functionality to a primarily mouse-based interface.
Most Millennials I know who are technical absolutely love mice because they grew up using them, and most of them have extensive PC gaming experience to boot.
I’m the Linux/CLI junky among them and even I don’t find mousing cumbersome—to use someone else’s words, it’s an amazing, first class input device. Same goes for trackballs. By comparison touchscreens are a joke, there’s no depth of input like with a mouse(RMB,LMB,etc) and every action requires large physical movements.
I’m used to seeing people fly through menus with a mouse at speeds people here expect to see only from keyboard shortcuts. Just because mousing is cumbersome for you doesn’t mean it’s universally true at all. I know keyboard shortcuts are fast, but it’s a lot to memorize compared to menus, which typically have the same basic order: File|Edit|View|...|Help
I guess it just depends on what the program requires of your inputs. When it comes to software development, window switching and maneuvering around websites, keyboards are precise and rapid, where the mouse can only do one thing at a time before needing to travel to the next input.
The other important part about ditching the mouse, is that when you're predominantly typing and using both hands on the keyboard, switching over to the mouse takes a non-trivial amount of time. You have to move your hand over there, figure out where the cursor is on the screen, then do what you need to do with it. When you're doing it hundreds of times a day, it adds up.
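Back-of-envelope, with numbers that are nothing more than a guess:

    # Cost of hand-to-mouse round trips (both figures are assumptions).
    seconds_per_switch = 1.5     # move hand, find cursor, act, return to home row
    switches_per_day = 300       # "hundreds of times a day"
    print(seconds_per_switch * switches_per_day / 60, "minutes/day")   # 7.5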
 https://news.ycombinator.com/item?id=15643663 (posted two years ago)
Additional: anyone know of a good F# library for the gui.cs framework? Before I manually write bindings I figure I can throw a quick query out there.
Now on the desktop side, this is also why I am a CLI junky who lives in a terminal. CLI apps don't get in the way, because most of the time you are just dealing with text anyway. There are many websites that ought to be a cli app at least via api. This is also one of my criticisms of companies trying to force people into the browser by disallowing api clients.
It was the constant bloat and spying that finally spurred me to go gnu/linux only many years ago, and things are only getting better since then. It requires a change in how you do your computing, yes. It may not be easy (higher learning curve), but the rewards are worth it.
Windows 10 does a little trick to speed up boot - when you perform a shutdown, Windows 10 saves a mini-hibernation image to the hibernation file. When you perform a normal boot, it can start up quite fast. This gives a noticeably shorter boot time, especially on spinning rust drives (I know, I know, $CURRENT_YEAR). However, if you perform a "reboot" instead of "shutdown + power on", you get the full-length boot, which takes notably longer.
 assuming the hardware setup is sufficiently unchanged
I've been rocking an M.2 SSD for quite some time now, and Win10 _always_ takes a considerable 1-2 minutes to shut down.
Likewise, my Windows machine hibernates so fast that boot time feels like I'm just waking it up from sleep.
Thanks to the advent of SSDs, applications are also quite peppy to startup. Music, movies, pictures, all so fast to use.
The section about Google Maps follows a form of criticism that is widespread and particularly annoys me, namely: popular service 'x' doesn't exactly fit my power-user need 'y', therefore x is hopelessly borked, poorly designed, and borderline useless.
There is always room for improvement, but all software requires tradeoffs. One of the things that makes a product like google maps so powerful is that it makes a lot of guesses about what you are actually trying to do in order to greatly reduce the complexity and inputs required in order to do these incredibly complicated tasks.
So yes, sometimes when you move the map some piece of data will be removed from the screen without your explicit consent, and yeah, in that moment that feels incredibly annoying. But balance that against the 100s or 1000s of times you used google maps and it just worked, perfectly, because it reduced the number of inputs needed to use it to the bare minimum.
Google Maps doesn't need to fit every use case perfectly, and while it's fine to talk through how your hyper-specific use case could and should work, remember all the times that it seamlessly routed you around traffic from your office to your house in one touch while you were already hurtling down the highway at 70 mph.
That's a common use case. The problem with Google maps (and the problem with a lot of modern software) is, as you say, it makes a lot of guesses.
The definition of a good user interface is "to meet the exact needs of the customer, without fuss or bother"*
Google Maps is great for finding directions to a very specific place. But after mapping those directions, doing almost anything else destroys that route. If I have to (and I do) open multiple map tabs, or repeatedly enter the same route info after making a search (if I'm on a phone) it is not a good UI.
Am I missing something? This use case is already supported! You choose start & end points, then you start the trip (which "saves" them) - you can now search for and add as many waypoints as you desire.
For example, do a map of San Francisco to New York City. Now you want to visit the world's largest ball of twine, so you add a waypoint, and start typing "Ball of Twine" and a drop-down will appear with a few choices, pick the one you want and it'll add to the map. You can re-order them as needed to optimize your route.
You still need to know the name or address of the waypoint you want to add, but that's the case with paper maps and is a good use of browser tabs to search for it.
One important factor in a good UI is that it is discoverable! If you build an amazing feature but forget to inform the user about it, you've wasted the work.
The above is not a foregone conclusion, but it was how I thought until I read your and parent's postings about it, and I'm a techie. For every 1 techie that doesn't know about a feature, there are 1,000 users (or something like that).
I would posit that the discoverability difficulties are present whether someone is in a TUI or a GUI.
If you hit start before, it starts talking a lot, and that's annoying.
To be fair, I've never really wanted this feature much, so I haven't tried to find it.
(There's a less blurry mkv at http://paste.stevelosh.com/1983.mkv for those that want it.)
I go to `maps.google.com`. The page loads a search box, with my cursor focused inside it. Then it unfocuses the search box while some other boxes pop in. Then it refocuses the search box. Is it done thrashing? Can I type yet? I wait for a few seconds. I would have already entered in my query by now in 1983. I sigh. This bodes well.
I guess it's as done as it's ever gonna be. I search for "rochester ny to montreal qc". I wait for the screen to load. It finds me a route, which is actually good. Step one done.
Now I want to find a restaurant somewhere in the middle. Let's try just browsing around. I find somewhere roughly in the middle — Watertown seems like a good place to stop.
I zoom in on Watertown. I wait for the screen to load. I look around the map and see some restaurants, so I click one. Now I want to read the reviews, so I scroll down to find the "See All Reviews" link. My scroll wheel stops working after I scroll more than an inch or two at a time, until I move it out of the left hand pane and back inside it. I sigh, wiggle my mouse back and forth repeatedly to scroll down and click on the link.
A whirl of colors — suddenly the map zooms in on the location. Why does it do this? I wanted to read the reviews, not look more closely at the map! Now that the map is zoomed in, a hundred other points of interest are suddenly cluttering the map. I wanted to read reviews about this restaurant, and suddenly 3/4 of my screen is filled with text about other places. I sigh.
I ignore the garbage now cluttering most of my screen and read some reviews. This place seems fine. I click the back arrow, then click Add Stop to add it to the route. I wait for the screen to load. Suddenly my screen whirls with color and zooms out, losing my view of Watertown. I sigh.
My trip is now 8.5 hours instead of 5.5, because it added the new stop at the end. AlphaGo can win Go tournaments, but I guess it would be too much to ask for Google to somehow divine that when I add a stop in the middle of a 5.5 hour trip, I might want to visit it on the way by default. I sigh and manually reorder the stops.
Let's also find a gas station somewhere before Montreal, because I like to get gas before I get into the city so I don't have to deal with it once I'm in. Cornwall seems like a good place to stop.
I zoom in on Cornwall. I wait for the screen to load. I don't see any gas station markers, but that's fine, there's a button that says "Gas stations" on the left! I click it and the screen goes blank. I wait for the screen to load. I've suddenly been whisked away to downtown Montreal instead of looking around where I'm currently centered on the map. Guess I should have read the heading above the buttons first. I sigh.
I click "back to directions". I wait for the screen to load. The map does not return to where I was previously, it just zooms to show the entire route, throwing out my zoomed-in application state. I think back to Gravis' tweet of "gmaps wildly thrashes the map around every time you do anything. Any time you search, almost any time you click on anything" and I sigh.
I rezoom in on Cornwall. I wait for the screen to load. The gas station button didn't work, but surely we can search, right? I don't see a search box on the screen, so I roll the dice and hit Add Destination. This gives me a text box, so I try searching for "gas stations" and pressing enter. This apparently didn't search, but just added one particular gas station to the route. It also zoomed me back out, throwing away my previous zoomed in view.
I rezoom in on Cornwall. I wait for the screen to load. I notice the gas station it picked happens to be across the US/Canada border from the route. That clearly won't work. I sigh and remove the destination. This zooms me back out (I wait for the screen to load), throwing away my previous zoomed in view.
I rezoom in on Cornwall. I wait for the screen to load. I click Add Destination again and this time notice that when my cursor is in the box, there's a magnifying glass icon — the universal icon for "search" — right next to the X icon (which will surely close the box). It even has a tooltip that says "Search"! Aha! That was well-hidden, UI designer, but I've surely defeated you. I click the magnifying glass icon and it… closes the input box. I… what? I sigh, loudly. It has also zoomed me out, throwing away my previous zoomed in view. I wait for the screen to load.
I rezoom in on Cornwall. I wait for the screen to load. Okay, apparently I can't search to try to find routes. I guess I'll resort to browsing around the map again. I notice what looks like a gas station called "Pioneer" and click on it. Cool. But then I realize this is on a bit of a side street. Surely I can find a gas station along the main road. Let me just cancel out of this location by pressing X.
My entire route is completely gone. All that time I just spent, flushed down the toilet. To add insult to injury: this is the one time that it didn't automatically zoom me out and lose my view of the map. It just threw away all of my other state.
Fuck this. I'm with Gravis.
After you accidentally lost your route, you could have just used a built-in feature of your browser to get yourself back to where you were.
EDIT: The rest of your post was entirely accurate. Google Maps is a slow, stuttery mess on literally every platform I've ever used it on recently. At least the back button works...
This is all much worse on the Android app as well, where it makes the assumption that your use case is to get from where you are right now to somewhere else. Trying to get from point A to B, where neither is where you are now, is unnecessarily frustrating.
That strikes me as a fantastic assumption. I wonder what percentage of routes involve the user’s current location? I bet it’s high!
It just worked for the default case but when you needed something else it was straightforward to do that.
I open the app, click my destination, and then click "Directions". The very next thing is both of those boxes, with "current location" defaulting to the start location. I can then change that if I want.
It optimizes for my most common use case, but allows me to do it otherwise, too. I don't think I could design this better.
That seems pretty decent UX wise?
* What about stops along the way?
* What about saving the results for later?
* What if you want to do some other mapping task in the middle of all this?
* Are the directions given feasible?
- Open Maps
- Search destination. It autocompletes after about 5 characters
- Select destination
- Screen changes to infobox about the location. There is a prominent "Directions" button
- Press "Directions"
- It changes to a route view, the Start is autocompleted to Current Location but obviously editable
- Press into start location edit box
- I can type location or "Choose on map"
This process requires essentially the minimum possible information from me (I want directions, from A, to B). What is frustrating about it?
Compare this to the original that they "simplified" away:
- Open app in navigation mode (step 1)
- it shows two boxes, where you are going from and where you are going to
- fill said boxes. There is a button next to "from" to choose your current location. (steps 2 and 3)
- click get directions (step 4)
Compared to the current "simple" version, it is immediately clear, and there are fewer steps and fewer things you need to know.
1) Open app
2) Search for start
3) Select start
4) Search for destination
5) Select destination
6) Click directions
Here's the parent's way:
1) Open app
2) Search destination
3) Select destination
4) Select directions
5) Search for start
6) Select start
They're the same process.
1. Open app in navigation mode (there was a separate icon for that)
2. Accept default start or type if you don't want the default.
3. Point at destination
4. Type destination and enter
Besides, it was immediately obvious when I opened the app for the first time on my first smartphone; it just made sense, and it still does when I think about it.
Edit: I reread https://news.ycombinator.com/item?id=21836204
I exaggerated wildly and can get it down to 5 steps. It is by definition discoverable since we have all discovered it, but I hold that it is still not obvious or self-explanatory in any way.
Gmaps right now works like this:
1) Open App
2) Click search box
3) Either select a destination from the list that pops up or start typing and actually search. Once that's done the route pops up with your travel time.
4) Click start
If you want to change your start:
4) Select the starting location
5) Search or select from the list that pops up, and your route and travel time are shown.
6) Click start
It's not rocket science. It's all obvious from the UI.
I see no reason that supporting this thing that old mapping software used to support would elevate it "above" other use cases. If you just want a single route, you do one search and you see the result and you never click the "add" button, no problem.
I should dig out an old Delorme Street Atlas CDROM and install it in a VM, to get some sense of how many clicks it took to do the things I used to do. I don't think it was many. It was definitely pickier about address entry; that's one place Google has absolutely improved. But aside from that, it was way more powerful at pretty much everything else.
And your answer to someone asking "Why do you think that use case is that common?", your first line literally just talks about your use case from your point of view:
> This is exactly what _I_ usually want to do with maps. _I_ have a route _I_ want to plan, and _I_ want to do more than one thing along my route, or see what else is in the area. It's futile in Google Maps.
I'm not saying that wouldn't be useful, it's just that maybe not that many people need it... I guess it was built with the idea that you would just open more tabs to search other things?
Based on what I've seen from Google product design, this is a pretty bold assumption.
While Google has access to unfathomable amounts of data collected from users, it's more than happy to eschew that if the data conflict with higher-level product or company strategy decisions, which generally are much less motivated by raw user data.
On a 4-5 hour road trip, I want to take the kids to see a castle or something somewhere around 1/2 to 3/4 of the way. Even just wanting to have lunch somewhere other than Hilton Park or Newport Pagnell would be such a use case.
I have also wanted it for visiting someone - I'm going to their house, what is my most convenient option for buying some wine and/or flowers on the way?
I have wanted it when I've been away from home with a big time gap between finishing my planned activities (or having to check out of my hotel) and my train or plane departure. What is the best way to spend a few hours anywhere on the route from here to the airport/station?
Everything is oriented around the model of "reserve a hotel", "reserve a flight", like you really are on rails like a European.
Today's online maps aren't up to the freedom that motorists have to make small deviations from a route. For instance if I drive from here to Boston I am likely to stay at a hotel en-route, that could be anywhere from Albany to Worcester. I don't have strong feelings about where, but it might be nice to find a good deal or find a place that I think is cool.
Thus I am interested in searching along a tube around my route, not clicking on cities like Springfield and running a search at each one.
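A sketch of that corridor search under simplifying assumptions (flat-plane distances and made-up coordinates; a real implementation would use geodesics and a spatial index):

    import math

    # Distance from point p to segment a-b, then keep POIs within `radius`
    # of any segment of the route polyline.
    def point_segment_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def along_route(pois, route, radius):
        return [poi for poi in pois
                if min(point_segment_dist(poi, a, b)
                       for a, b in zip(route, route[1:])) <= radius]

    route = [(0, 0), (5, 0), (9, 3)]             # simplified route polyline
    hotels = [(4, 1), (7, 4), (2, -6)]           # candidate stops
    print(along_route(hotels, route, radius=2))  # [(4, 1), (7, 4)]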
That's why desktop web search is less valuable to Google than mobile web search, mobile web search is less valuable to them than map search, map search is less valuable than voice search and voice search while driving is their holy grail because there the ranking game is completely winner takes all. A second page hit on desktop has a better chance at getting traffic than the second place overall in voice while driving. (And those sweet "while driving" hits will almost always be followed by actual business transactions, whereas the old desktop is just a mostly worthless page view)
Afaik Google is far from allowing businesses to directly bid for that coveted number one slot (it would ruin their ability to keep the balance between attracting advertisers and attracting eyeballs), but the result is even better for them: when businesses "bid by proxy", via buying other ad products in the hope/fear that it might be a factor in the ranking, they don't just get the winner's money. I'd absolutely say that drivers are very high on Google's audience priority list; it's just that nobody on that list is a customer.
The visiting example for me is normally a non-car use case. If going by car, I would probably pick these things up close to home and carry them all the way.
Of course, that's where Google makes their money from the service. Google Maps isn't a public good, it's a line of business.
I use Google Maps almost daily and this is also my complaint. It's not a hyper-specific use case. Google Maps are good for navigating from point A to point B when you are sure of both, but they suck at being a map. For instance, lack of always-on street names and weird POI handling makes them problematic to use when you want to explore the area you're in.
We would study the map ahead of time, based on the map figure out our plan of action by either making mental notes or notes in notepad, or notes on a map and eventually execute our plan based on the information we have selected.
We no longer need to do that. We can decide "I want to do something around X" , go to X and when we want to do something specific ask maps "Where can I find Y around X"?
Ability to drop pins removed the need to study map to complete most of the tasks. When one stumbles upon something interesting while reading a book, watching a show, scrolling through eater, one can drop a pin on a map so next time that person is in the area the pin is there!
Studying a map ahead of time and marking it up (on the map itself, as we did with paper maps and dry-erase or permanent markers) is a more efficient interface. There's a forgotten principle in UI: users are very good at mentally filtering out noise and focusing on the relevant parts; that's what our sense of sight is optimized for. Having to actively search whenever you need to know something is an inferior experience, both in terms of efficiency and because of the missing context.
(Also, dropping permanent pins is AFAIK impossible in the Google Maps proper; it's a feature of "my maps", which is hidden somewhere and has weird interactions with Google Maps.)
You are thinking about it as a synchronous workflow. Study map->create a plan->execute a plan. This workflow was the only workflow because it was impossible to execute a search when needed.
Google maps is optimized for a modern workflow. "I'm here. I need X. How do I get there?" With pins that workflow is asynchronous.
For example, I use pins for restaurants. I find/read something about a place I want to try at some point. I drop pins. Next time I happen to be in the area, I see the pins that I dropped. That may happen tomorrow or three months from now. My alternative is Yelp with its sync workflow - search and analyze the results of a search, or rely on my memory of what place should be around where.
I hate when I need a restaurant or gas station ALONG MY ROUTE and yet years later no maps have this ability. It's insane.
It has been a cornerstone of digital navigation since the things were invented. To claim that it's an edge-case ignores history and instead highlights how _you_ use the tools.
IMHO it was more obvious that Google wants you to 'actively search' for $waypoint items while en route instead of pre-planning: "hey google, show me restaurants near me"
That gives them a better way to monopolize advertising and force money from companies that want to stay relevant and appear in those types of searches.
To add to the "things are getting worse" narrative, we implemented this properly back in the days when sat navs were still relatively exciting things. Last I saw the algorithm was to do a lightweight route plan through nearby search results and find the ones that made the smallest difference to your arrival time at your final destination. I don't think the google maps search API does that yet, although I haven't worked in the area for quite a while.
Give it a go.
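That detour-ranking approach is simple enough to sketch (travel_time below is a stand-in for a real routing call, and the coordinates are made up):

    # Score each nearby search result by how much it delays arrival at the
    # final destination, and surface the least-detour options first.
    def travel_time(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5  # fake "minutes"

    def rank_stops(origin, destination, candidates):
        direct = travel_time(origin, destination)
        def detour(stop):
            return travel_time(origin, stop) + travel_time(stop, destination) - direct
        return sorted(candidates, key=detour)

    print(rank_stops((0, 0), (10, 0), [(3, 4), (5, 1), (0, 9)]))
    # [(5, 1), (3, 4), (0, 9)] -- least added arrival delay first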
Bret Victor has a great essay on building information software called “Magic Ink”:
> Information software, by contrast, mimics the experience of reading, not working. It is used for achieving an understanding—constructing a model within the mind. Thus, the user must listen to the software and think about what it says… but any manipulation happens mentally. Except possibly for signaling a decision, such as clicking a “buy” button, but that concludes, not constitutes, a session. The only reason to complete the full interaction cycle and speak is to explicitly provide some context that the software can’t otherwise infer—that is, to indicate a relevant subset of information. For information software, all interaction is essentially navigation around a data space.
Of course, guessing poorly is a problem, but that’s an issue with execution.
The problem is guessing poorly, and making it cumbersome for the user to override your guess.
The thing with Google Maps was that it was actually reasonably good and intuitive on mobile until sometime 5 or 7 years ago when someone decided it had to be "simplified".
The old version was easy: you enter "to" and "from", and it gives you a route.
I think it also had multiple entry points so you could choose "navigate", "browse" and "timeline" or something directly from the system menu.
The "simplified" version removed all that + the timeline feature I think and replaced it with one search box.
The timeline came back after a while, as did a number of other features they removed, but it still isn't as easy or intuitive as the early versions, and it still annoys me every time I want to get a route from A to B (as opposed to from where I am now to B).
Compare this to Windows 95 that I disliked for a few months until I got used to system wide drag and drop and realized it was in fact better than Windows 3.1.
All Google stuff ended up doing this when they started trying to standardize their "design language" across their services. They developed a very annoying habit of hiding every useful function or bit of relevant contextual information inside a poorly marked hamburger menu somewhere. It's extremely annoying from a discoverability perspective and I strongly suspect any UX designers involved lost a lot of arguments for a decision like that to get codified.
Is there a single person who prefers the monochrome GMail UI where you can't easily visually parse one thread from another, or the "new and improved" functionality where you need to click at least 2 or 3 times to even SEE what address you're sending to or from, or to change the subject?
I've stuck with Android until now, but now that I can replace the keyboard on the iPhone, I gave it a chance and I'm super happy with it.
The iPhone 7 I had was noticeably laggy with Maps and other heavy apps.
Meanwhile my Samsung S10e (equivalent to the XR) has more than sufficient performance and a better screen, at a lower price.
My point is that my current iPhone is the first phone since my Samsung S2 that hasn't disappointed me by being slow more or less immediately after unboxing it.
So you are telling me that wanting to see the name of a given street without having to zoom 10000x (and even then, sometimes...), or figuring out how to get directions to and from somewhere, are "power user" needs?
Give me a break. Google Maps was way easier to use as a map before. Now it prioritizes ad revenue at the expense of what users actually want to do.
There are no "power users" in this new world.
Edit: In fact, I just confirmed it now. Opened up Google Maps and started looking at small roads. For the third road I checked, Maps wouldn't show a name no matter how far I zoomed in, and I had to drag the view as described above.
Note that none of the larger roads leading to the roundabout at the top left is named, while some (but not all) of the smaller streets' names are shown. Instead you have the "D509" label copy/pasted haphazardly, but that's not the actual name of the boulevard that would be used in a postal address, so it's of very limited use (and even leaving those labels in, there's plenty of room to add the actual street name).
Here's Open Street Map for the same map at a similar zoom level: https://svkt.org/~simias/up/20191219-175553_map-osm.png
OSM doesn't have all the bells and whistles of GMaps, but as far as the map itself goes, it's vastly superior IMO.
Yes he does. You touch anything and the state is erased. Don't know exactly which of the search results you want to go to? Tough shit, the interface works against you.
And still, getting directions used to be simpler. Now you have to decipher unlabeled hieroglyphs, and the interface keeps changing. You can't even get used to it. You are constantly being nudged to do shit that is not what you really want to do, such as "exploring your neighborhood".
> And I've never had an issue with Google maps hiding street names when there was ample room to show them, but if you're more than a little zoomed out, there isn't that room. Showing only major roads is a decent trade off, as is the mouse scroll wheel for zoom in/out.
Lucky you, I guess. I have this problem all the time. A better trade-off: if I searched for "Market Street", show that label! That would be a start. And frequently, labels aren't shown even when there is plenty of space.
Oh, and why not show the scale of the map by default? Is this also a "power user" feature? I thought it was a crucial piece of information when reading a map...
> Or maybe it's my own failure of imagination: how would you improve in this particular area?
Easy. Revert to the interface circa 2010. It had none of the above problems.
I should be able to get directions without having a GPS fix. If the GPS signal is lost, I really need those street names, NOW, without touching my phone.
None of the above is about power users. And none of this is innate to today's hardware. It's a matter of prioritization.
And consumers, though initially swayed by shiny objects, do eventually respond to good design and good engineering. Indeed Google itself found its early success partly through clean and thoughtful design, at a time when other search engine websites were massively cluttered and banner ads were the bane of the Web.
They keep changing the interface faster than I can learn it.
The way to enable complex expression of tasks in a user interface is composition, i.e. microtasks that you can combine in different ways.
That's what makes Excel great: each cell is a single function, but you can combine them to build things the developer never knew were possible.
Being able to store state (a location) and then operate on that state is a pretty basic building block of a composable map UI; see the sketch below.
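To make that concrete, here's a toy sketch of that building block. The names (Place, pin, pinsNear) are hypothetical, not any real maps API:

    interface Place { name: string; lat: number; lon: number; }

    // A pin stores state; later operations consume it. Because each
    // microtask takes and returns plain values, they compose freely.
    const pins: Place[] = [];

    function pin(loc: Place): Place {
      pins.push(loc);
      return loc;
    }

    // Operate on stored state: pins within radiusKm of a center point.
    function pinsNear(center: Place, radiusKm: number): Place[] {
      return pins.filter(p =>
        Math.hypot(p.lat - center.lat, p.lon - center.lon) * 111 <= radiusKm);
    }

    // Drop a pin today, search around it months later.
    const noodles = pin({ name: "noodle place", lat: 47.61, lon: -122.33 });
    console.log(pinsNear(noodles, 2).map(p => p.name)); // ["noodle place"]

Because the stored location is just a value, any later operation (routing to it, searching near it, sharing it) can take it as input, which is exactly the asynchronous pin workflow described upthread.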
I can't, because the autocomplete/dropdown/prediction for saved locations is disabled if you don't enable Google's device-wide "Web and App Activity" spyware function. This means I have to type my address every time I want directions home, even though I manually saved it. It's hot garbage.
Another very common use case that is impossible: getting directions to a place and then looking at the Street View so I'll know what it looks like. I have to remember to check Street View before the search, or redo the search entirely. Again, hot garbage, and inexcusable for a company spending bazillions of dollars on UX people.
Then I manually pan everything back to where I actually wanted it, and had it, in the first place, and lo and behold, it's showing me every kind of restaurant instead of Chinese restaurants. I have to click the "search this area" button, which of course conveniently wasn't on the original screen, to get the results I wanted in the first place.
Then I click on a restaurant to look at the pictures and read the reviews, and when I'm done I naturally click the back button and it's all gone. I'm back on some other screen, maybe an empty map or one of the screens I was on previously that I didn't want, but it's almost never the list of Chinese restaurants in the small geographic area I was researching in the first place.
And that's just one example of one problem I regularly have with Google Maps. It's a horrible horrible user experience if you aren't using it in the way they think you should be using it.
I don't think this is a power user use case. Everybody wants to look up Chinese restaurants at a family or friend's house at some point in their life. Why is this so fucking hard?
> popular service 'x' doesn't exactly fit my power user need 'y'
Being a power user isn't a function of geekness, or a mark of belonging to some niche. Being a power user is a function of frequency and depth of use.
My wife is a power user of a particular e-commerce seller backend, a certain CAD software, and Excel, all due to her job. She is not technical, but when you spend 8 hours a day each day in front of some piece of software, you eventually do become a power user. Teenagers of today are power users of Snapchat, because they use it all the time.
Software being "power user friendly" isn't about accommodating existing power users; it's about allowing power users of that software to appear. It's about leaving room for growth, letting people do more with less effort. Software that doesn't leave that room becomes a toy, inviting only shallow, casual interaction; it's not all it could be. And it's worst of all when software that was power user friendly becomes less so over time - it takes back the value it once gave to people.
This was a quality of life improvement for the vast majority of users, even if it wasn’t for the minority who used it the most.
All these threads about software being worse and "perceptually slower" than it used to be are about regressions. Google Maps and the other tools mentioned aren't pushing the envelope. They aren't bleeding edge. They were science fiction made manifest 10-15 years ago, and since then they have actually decayed in utility, ergonomics, and performance. Meanwhile, all the money invested in all that software and hardware should have produced the opposite outcome.
What I do have a good memory of, though, is GMail. I've been using it for 10+ years now, and it really does keep getting slower and heavier over time, while offering no extra functionality to compensate.
Google may track latency metrics for all their services, yet somehow, what they ship is some of the most bloated software out there. I guess they don't look at, or don't care about, those metrics.
It shouldn't be like this, in theory. Computers only ever get faster (occasional fixes for CPU bugs notwithstanding), so making software slower requires active work. So does removing or breaking useful features.
I've seen this a lot with my mother in particular: she's certainly not a power user, but she knows enough to get by, and she struggles with software that tries too damn hard to guess what she wants instead of letting her just tell it what she freaking wants.
Apple seems to get it right pretty consistently, which is why I keep their stuff. But when it does manage to go wrong, holy shit debugging it is an absolute nightmare.