Almost everything on computers is perceptually slower than it was in 1983 (2017) (twitter.com/gravislizard)
639 points by zdw on Dec 19, 2019 | 431 comments



Discussed yesterday: https://news.ycombinator.com/item?id=21835519


I had an Atari ST in a closet, and decided to get rid of it a while back. I pulled it out to test it. The boot sequence, all the way to a desktop with a mouse that moves, takes less than one second. If you boot from a hard drive, maybe another second. For a while I just kept hitting the reset button, marveling at the speed at which the ST came up.

Most machines I work with these days take minutes to get rolling.

Okay, I know that systems are bigger and more complicated now; buses have to be probed and trained, RAM has to be checked, network stuff needs to happen, etc., etc., but minutes? This is just industry laziness, a distributed abdication of respect for users, a simple piling-on of paranoia and "one or two more seconds won't matter, will it?"

Story time.

A cow-orker of mine used to work at a certain very large credit card company. They were using IBM systems to do their processing, and minimizing downtime was very, very, very important to them. One thing that irked them was the boot time for the systems, again measured in minutes; the card company's engineers were pretty sure the delays were unnecessary, and asked IBM to remove them. Nope. "Okay, give us the source code to the OS and we'll do that work." Answer: "No!"

So the CC company talked to seven very large banks, the seven very large banks talked to IBM, and IBM humbly delivered the source code to the OS a few days later. The CC company ripped out a bunch of useless gorp in the boot path and got the reboot time down to a few tens of seconds.

When every second is worth money, you can get results.


Corporate Windows installations are unnecessarily slow because they run antivirus and all kinds of domain membership stuff in the boot and logon path. A clean install with fast boot and without that gunk takes seconds.

A headless Linux box can come up in seconds (UEFI fast boot + EFI stub), or in even less than a second if you're in a VM and don't have to deal with the firmware startup. Booting to a lightweight window manager would only add a few seconds on top.


Headless, schmeadless: an RPi with the Slowest Disk In The Universe comes up to a functional GUI in under a minute. Slower boot (and, correlated, slower work) on far beefier hardware is down to pure crud. Sure, you need a bazillion updaters, two for each program, and three antiviruses checking each other - in other words, "users have fast computers, SHIP IT!"


If your host machine is a real server (e.g. a PowerEdge), it'll do a self-test. This already takes tens of seconds. If you want fast bootup times, you need to be the BIOS. The example in the top post is stuff that either loads data off a flash chip (the way a BIOS does) or off a disk (which requires some bootstrapping).


Newer Dells like the R640 take minutes to boot, not seconds.

Here's a random video of a guy booting an R740; it takes 1:51 just to get into the BIOS: https://www.youtube.com/watch?v=CSJNTdKdTJI


Some companies' servers (e.g. Dell's) can boot pretty fast these days. If you want to see something that boots slowly, try an IBM HS22 blade (3+ minutes of self-initialization after insertion, 2.5+ minutes from the initial BIOS screen to storage initialization and boot start).

IBM's QPI-linked dual servers boot even slower, as one technician explained to me. Presumably you can make coffee during the wait.


What does self-testing cover? RAM? That's something the OS could do as a background task, incrementally making memory available during boot after a chunk has been scanned.


Memory, peripherals, temperatures, fans, disks, network boot. RAM needs to be learned in, hence DDR first boots take a bit. https://github.com/librecore-org/librecore/wiki/Understandin...


DRAM testing is not really something you can do incrementally. Localized, bit-level failures are only one kind of failure; things get really interesting when you have disturbances (e.g., a shorted address line or something causes writes to some location to also change bits in another location).

Also, ECC failures usually cause a machine check. Not sure if you can control this on modern machines, it might be all or nothing.
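
For illustration, a minimal sketch of the classic "walking address" style of test that catches that second kind of failure - the point being that a write to one location has to be checked against every other location that shares an address line, so the whole region needs to be under test at once rather than scanned chunk by chunk in the background. The function and constants are made up for the sketch; this isn't what any particular firmware actually does:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PATTERN     0xAAAAAAAAu
    #define ANTIPATTERN 0x55555555u

    /* Returns NULL on success, or the first address that misbehaved. */
    volatile uint32_t *test_addr_lines(volatile uint32_t *base, size_t nwords)
    {
        size_t mask = nwords - 1;       /* assumes nwords is a power of two */
        size_t off;

        /* Seed a known pattern at offset 0 and every power-of-two offset. */
        base[0] = PATTERN;
        for (off = 1; (off & mask) != 0; off <<= 1)
            base[off] = PATTERN;

        /* A write here must not disturb any of the other offsets... */
        base[0] = ANTIPATTERN;
        for (off = 1; (off & mask) != 0; off <<= 1)
            if (base[off] != PATTERN)
                return &base[off];      /* address line stuck or shorted */
        base[0] = PATTERN;

        /* ...and a write at each offset must leave all the others intact. */
        for (off = 1; (off & mask) != 0; off <<= 1) {
            base[off] = ANTIPATTERN;
            if (base[0] != PATTERN)
                return &base[0];
            for (size_t other = 1; (other & mask) != 0; other <<= 1)
                if (other != off && base[other] != PATTERN)
                    return &base[other];
            base[off] = PATTERN;
        }
        return NULL;                    /* address lines look OK */
    }

    int main(void)
    {
        static uint32_t buf[1024];      /* stand-in for the region under test */
        puts(test_addr_lines(buf, 1024) ? "fault detected" : "address lines look ok");
        return 0;
    }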


I'm actually relatively confident that you could get it down to 8-9 seconds to the basic X environment with an NVMe SSD and maybe even less if you were to cache some application states.


Things are possible if one puts one's mind to it.

five seconds, year 2008: https://lwn.net/Articles/299483/

two seconds, year 2017: https://www.youtube.com/watch?v=IUUtZjd6UA4&t=17


And that 5 second boot time was on an Asus EEE 701, which had a 630 MHz Intel Celeron M and just 512 MB of RAM.


A few years ago I went about trying to get my laptop to boot as fast as possible. Got it down to 9 seconds from POST completion to X with Chromium running.

The only differences between the system I used for benchmarking and my regular desktop were autologin, Chromium in the startup script, and the enablement of the benchmarking thing. I'll probably poke around with it tonight and see if it's any different on my new laptop. This one has a proper NVMe drive as opposed to the M.2 SSD in my old laptop.

My Windows desktop at work takes forever to get running. Even after you log in and the desktop is visible it's another minute until it's usable.



> The boot sequence, all the way to a desktop with a mouse that moves, takes less than one second.

That's an exceptional case though - the GUI OS was hardwired in ROM. The Amiga was an otherwise comparable machine that had to load much of the OS from disk, and it did take its time to boot to desktop.


Even when the OS was loaded off of a hard drive, the boot time was still about 2-3 seconds (I did measure this).

Not really exceptional.

It took maybe a minute to load off floppy disk. That is STILL shorter than the POST time for every machine I work with these days, with the possible exception of the Raspberry Pi in the closet.


Are the POST times modifiable?

I'm using a Dell E7440 and it's pretty quick to boot from powered off. I have a bunch of stuff turned off in the BIOS. It's my machine on my home network; it's not a corporate machine with all the corporate stuff.

But maybe that's the lever we need to get change: 30 seconds extra for 1000 people over 250 working days a year is over 2000 person hours being spent waiting for machines to boot.

And that wait time for the corporate stuff is something that real people talk about. Here are a few twitter threads about people in different NHS organisations.

Some people are waiting 3 to 10 minutes, a few are waiting even longer(!!) https://twitter.com/griffglen/status/1066043840497360897?s=2...

https://twitter.com/bengoldacre/status/1038329028623716358?s...

https://twitter.com/dannyjpalmer/status/1123604293251158016?...


For database servers with multiple TB of RAM, 10-15 minutes is not unusual for POST.

It's nuts. The memory was fine the last time it was tested (30 minutes ago, on the last reboot). Let's just train some buses, probe some address spaces and go, okay?


All of Atari ST TOS is 192KB.

On a floppy that would take a while to load. Off a hard drive, sure, not too bad.


About 60-70 seconds, IIRC. It's been a while since I measured it. We did some optimizations, other folks did a better job (e.g., optimally skewing physical sectors so that they'd be hit nearly immediately after a head seek).


Hah, I didn't notice it was you who I was responding to. How nice of me to tell you the size of the OS you helped write ;-)


That deserves a story, too. Why not?

[Note: this is the early 80s. A computer with a large amount of memory might have 64K in this period. I think a 64K ROM cost about four dollars, and 64K of RAM was about fifty bucks]

The Atari ST's operating system (TOS, and no I don't want to talk about what that stands for) was written in C and assembly language.

Initially the ST was going to have 128K of ROM; we wanted to one-up the Macintosh (which hadn't shipped yet, but there were rumors and we had copies of Inside Macintosh that were simply fascinating to read) and put both our OS and a version of BASIC in ROM. Most home computers at the time came with some version of BASIC, and the Mac did not; we were hoping that would be a differentiator. Trouble was, nobody had actually sized our software yet (the only things even remotely running were on the 8086, not the 68000 we were going to use, and Digital Research wasn't exactly forthcoming about details anyway).

So mid-October (the project started in earnest in July 1984, and FWIW we shipped in late May 1985, a whole new platform starting from zero in less than ten months) we realized that just the OS and GUI would be 128K, and that the BASIC we were thinking of using was like 80K (but could probably be shrunk). So the hardware guys added two more ROM sockets, for 192K of ROM. A month went by. Wups! -- it turned out that the OS and GUI would be like 170K, with little hope of shrinkage. No, make that 180K. Would you take 200K?

The code topped out at 210K or so, and that wouldn't even fit into the six ROM sockets we now had. No chance in hell of getting another 64K of ROM -- that stuff costs real money -- so we shrunk the code. The team from Atari came from a background of writing things that fit into really tiny amounts of ROM, so we went about this with a fair amount of glee. We got about 1K per programmer per day of tested code savings by ripping out unused functions, fixing all the places where people had "optimized" things by avoiding the expense of strlen or whatever, and coding some common graphics calls with machine trap instructions instead of fatter JSRs. For about a week, the hallway in engineering was full of people calling out to other offices, "Wow, get a load of this stupid routine!" and in a codebase that had been tossed together as quickly as GEM/TOS had been, there was no lack of opportunity for improvement. We found a fair number of bugs doing this, too.

Additionally, the C compiler we were using was not very good, and even its "optimized" code was terrible. Fortunately it had an intermediate assembly language stage, so we wrote some tools to optimize that intermediate code (mostly peephole stuff, like redundant register save/restores) and got a relatively easy 10-12 percent savings. I think we had a few hundred bytes of ROM left on the first release.
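
For illustration, here is a tiny sketch of the kind of peephole pass being described: dropping a register push that is immediately undone by a pop of the same register. The 68000-style mnemonics and the single pattern are illustrative assumptions; the actual tool worked on the compiler's real intermediate output and did considerably more than this:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char prev[256] = "", line[256];

        while (fgets(line, sizeof line, stdin)) {
            char r1[8], r2[8];
            /* "move.l d0,-(a7)" immediately followed by "move.l (a7)+,d0"
               does nothing useful: drop both lines. */
            if (sscanf(prev, " move.l %7[ad0-7],-(a7)", r1) == 1 &&
                sscanf(line, " move.l (a7)+,%7[ad0-7]", r2) == 1 &&
                strcmp(r1, r2) == 0) {
                prev[0] = '\0';         /* delete the held push and this pop */
                continue;
            }
            fputs(prev, stdout);        /* the held line survived the window */
            strcpy(prev, line);         /* hold the current line for the next pass */
        }
        fputs(prev, stdout);            /* flush whatever is still held */
        return 0;
    }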

I remember that 192K pretty darned well. Though they're fun to talk about, I honestly don't miss those days much; today I wrote the equivalent of

    void *p = malloc( 1024 * 1024 * 1024 );
and I didn't even feel bad.


Funny, the EmuTOS guys are _still_ finding stuff in the original DRI sources to clean up. They've managed to stay within the 192KB budget as well.

It's interesting that the original plan was to get a BASIC in there, because IMHO that really was a weak point of the ST for its target market -- which I guess included my 13 year old self. At least for the first couple years.


Several years ago I did consulting for a large travel booking service that advertises on TV.

As strange as it might sound, there's a large artificial delay added between the time the service knows the answer to customer's search and the time the answer is sent to the customer's web browser.

The reason for that delay is that without it, the customers do not believe that the service performed an exhaustive search, and they do not complete the transaction!
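
For illustration only, a minimal sketch of how such a floor on response time could be wired up; run_search(), send_results(), and the 3-second figure are hypothetical stand-ins, not the service's actual code or numbers:

    #include <stdio.h>
    #include <time.h>

    #define MIN_PERCEIVED_MS 3000   /* anything faster "can't have been thorough" */

    static void run_search(void)   { /* stand-in for the real fare search */ }
    static void send_results(void) { puts("here are your flights"); }

    static long elapsed_ms(const struct timespec *a, const struct timespec *b)
    {
        return (b->tv_sec - a->tv_sec) * 1000L + (b->tv_nsec - a->tv_nsec) / 1000000L;
    }

    int main(void)
    {
        struct timespec start, now;
        clock_gettime(CLOCK_MONOTONIC, &start);

        run_search();                         /* often done in well under a second */

        clock_gettime(CLOCK_MONOTONIC, &now);
        long remaining = MIN_PERCEIVED_MS - elapsed_ms(&start, &now);
        if (remaining > 0) {                  /* stall so the search feels exhaustive */
            struct timespec pause = { .tv_sec  = remaining / 1000,
                                      .tv_nsec = (remaining % 1000) * 1000000L };
            nanosleep(&pause, NULL);
        }
        send_results();
        return 0;
    }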


Also, it stops the client from hammering at the server... but this is a very inelegant way to do it, and devs would rightfully complain of such crude measures, so it might be necessary to frame it differently ;)


Have you ever booted an Amiga 500 from floppy disks? I don't know how long it takes, but it's one of the slowest things I can ever remember doing.


> Most machines I work with these days take minutes to get rolling.

My current Windows 10 machine boots faster than my monitor turns on (<5 seconds from power on to login screen).


Login screen is hardly the end of the boot up process. I have had work laptops where I would login then go get a water because it’s going to take several minutes to finish loading.


My machine definitely doesn't take multiple minutes after login to load fully. I didn't count after that point because it's really the user's (or administrator's) fault after that point, and because I have a password, which means I'd need to actually time startup, pause, and time login. Also, when do you consider it "done"? How can I accurately measure that?


I consider the boot process done when the system is responsive. I can normally open Notepad in under a second, so IMO that's a good benchmark for done.

As to it being the user's fault, that's rather subjective. Video card, printer, and other drivers taking forever to load is really a hardware/OS issue.


IMO, if there's a meaningful delay beyond the login prompt, it's the fault of the user/administrator. Not in 100% of cases; there's slower hardware, etc. But most of the time, if logging in is slow, it's because you have too much set to start with the machine at once, and those things are all fighting over resources that they could otherwise have exclusive access to.


My Pixel 3 boots rather fast compared to my Macbook pro, or my Windows desktop that I hardly ever use.

I guess it just comes down to priorities. I'm sure if PC specs included "boot time (lower is better)," we'd see boot times drop quickly.


> Most machines I work with these days take minutes to get rolling.

Minutes? That sounds exaggerated.

How powerful was your Atari ST compared to other machines at the time versus the machines you work with these days compared to other machines available?

Because I'm not even on a particularly new machine and from a powered off state I'm logged into Windows and starting applications within 5 seconds. And for example that's at 7680x2880 resolution, not 640x400.


> Minutes? That sounds exaggerated.

POST on a Dell M640 is about three minutes. Other Dell systems are similar. POST on the workstations that I use are in the range of 1-2 minutes. This is before the OS gets control (once that happens, it's usually 15-20 seconds to a usable system).

The ST was a pretty decent performer at the time (arguably a little faster than the original Macintosh, for instance). Both the ST and the Macintosh took about the same amount of time to boot from floppy disk (though the OS was in ROM for the vast majority of STs that were built).


In addition to POST, many servers and workstations have option ROMS (NIC, storage) that, if enabled, can add seconds to boot time; if unused, these can typically be disabled in firmware setup.

With that said, POST on my Z820 workstation probably takes a minute, even with option ROMs disabled, but that's still maybe half the time it takes a Gen8 HP MicroServer with a quarter the RAM to do the same.

On the other hand, my old IBM POWER6 server sets local records for boot (IPL) time: in "hyper" boot mode, with "minimal" testing, it still takes slightly longer than the MicroServer to turn control over to the OS, the default, "fast" boot mode takes maybe five minutes to POST, and, well, I could very nearly install Windows 10 on a fast, modern desktop PC in less time than it takes to do a full, "slow" POST.

As for simply booting such a desktop, even with all BIOS and Windows fast boot options disabled and a display connected to both internal (AMD) and external (NVIDIA) GPUs, my Kaby Lake NUC takes no more than ten seconds to boot to the Windows 10 logon screen from a fast (Samsung 970 Pro) SSD.


Doesn't Windows do a type of hibernate rather than a full shutdown?


I see someone mentioned that further down in this thread, but shutdown and reboots are similarly quick for all of my pc's. Even my 12 year old laptop (with SSD) boots Windows 10 in _well_ under a minute, and shuts down and reboots much faster.

I also have hibernation disabled, and I've never noticed some obviously large file that might be a hibernation state on my drives (even e.g. removing a drive after a shut down so there was no restart where it could've been deleted).


I thought I had hibernate disabled, but it seems Windows 10 is sneaky about that:

https://superuser.com/questions/1096371/why-is-windows-hiber...

Still it isn't much longer for me to restart after fully shutting down (with a more recent system and SSD), just more time in Windows and less in the BIOS (and shutdown is instant with "fast startup" turned off, which is better for me since I usually pull the plug after turning it off). About 20 seconds (counting myself, not timed).

Still not as fast as DOS + Windows 3.1, IIRC, but not too bad for turning it on once or twice a day. I have also noticed the many things that have delays now in places there weren't 30 years ago, but I don't think boot times are the best example of this. I might appreciate the Twitter rant if not for the completely incorrect diversion about Google Maps (you can drag parts of the route to make changes and get exact distances, better than paper maps and with much more detail about what is nearby). IMO, computer interfaces should have a tool focus, doing basic tasks quickly and reliably so that users can learn to use them like they would a physical tool (while also not making users do extra work that could be done quickly and reliably). Now everything tries to use the network all the time, adding random delays as well as compromising privacy.


I took a look at a recording of the Atari ST GUI on YouTube; it didn't seem that fast at resizing a basic folder view window.

Who cares about boot time, which you do once in a while, versus actually using that interface?

https://www.youtube.com/watch?v=A1b9kUP0WtI


I've used 4.3BSD on a VAX 11/780 - and it's remarkable to me how similar the experience is; even vi startup times are close. It's weird. I guess some things only go so fast. Similarly, my OS X 10.4 (or 10.5 desktop) laptop boots only marginally slower than my OS X 10.14 laptop.


The latest AMD CPUs are particularly bad at this. I got a 3600, and for half a year now there have been known problems with extremely slow booting. The latest BIOS update made them a bit better, but it's still at completely unacceptable levels.


I just put a Ryzen 5 3400G in a B450 board and it hits the login screen from cold start in like 3 seconds (and no there's not still a bunch of stuff churning in the background - it's all but done loading at that point).


That's very board specific I think, some boards from the first Zen generation also had that problem, but my MSI board boots very quickly


This. MSI B450 + 3600, and I'm still on BIOS 7B85v19. The SSD is an older 850 EVO and it boots to desktop in under 30 seconds.


No issue with my 3700. Boots in a second or two.


BIOS update makes me think that it's a motherboard problem.

My Ryzen boots within 8 seconds on a B450 Pro motherboard.


Great story.


An SSD can make a world of difference. Most of the time spent during boot-up is in executing random reads (of about 4 KB) from storage, and SSDs are an order of magnitude faster there.


Sorry, most of a box's boot time is the BIOS and other stuff. The servers I run at work take 3-4 minutes to boot, the last twenty seconds or so is the OS coming up. The consumer PCs I use have a similar ratio.

POST time is crazy bad. It's almost like the engineers working on it don't care.


It's almost like a server is expected to spend approximately 0.000% of its lifetime doing POST.


In normal operations sure, but in downtime that’s another story.


In that case, you shouldn't be looking for an answer to "how can I make a faster (rebooting) horse," but rather "is there a way to make this redundant, so that a single node offline won't critically endanger the system?"


UEFI Fast boot improves it significantly on modern hardware.


In all 25 years I've used computers, I can't recall having a PC that took more than a minute to POST. Ever. My current PC's fast boot is fast enough that it looks like the computer turns on and goes straight to Windows.

Maybe I've just gotten really lucky...


All of my machines are the same. From the time I plug in the power cable and hit the go button I can expect to see my desktop in around 10 seconds. It's fast enough that when I'm in discord with coworkers and need to switch OS it doesn't really affect the workflow at all.


I'm curious about why a workplace would settle on Discord. Are you in game development, is there a killer feature that Discord has compared to the competition, or was it just what people were comfortable with at home?

Not implying that it's bad software, I'm just curious because it sounds unusual.


I can think of several reasons. #1 for us, is that we're a marketing agency serving a lot of the gaming industry, so working with content creators or industry folk who sometimes default to discord is convenient. That said, Discord is a free quick and easy tool that everyone can install, on any os, and be up and running anywhere in the world for comms in less than a minute. In the box, there's video conferencing, screen sharing, chat, and more.

The example I gave above happens regularly, as I use Deepin Linux as my typical daily driver while I'm working. However, if the need to open an adobe suite tool comes up, I can quickly swap over. Discord works fine for me on both platforms and my phone.

All in all, I don't really like discord all that much. It's not the best at anything. But it has the advantage of being both convenient and feature-rich overall. There are better solutions out there, but none are as convenient or free.


Pretty much any server in existence (vendor HW) takes minutes to boot; it's painful having to test BIOS options on most servers.


SSD + Windows 10 + Fast Boot = seconds of boot time


I have Win 10, fast boot, and two NVMe drives striped in RAID 0. Boots in maybe 8 seconds.


Kind of related: does anyone else notice how long it takes to change channels on the TV these days? It used to be instantaneous when cable first came out and at some point it became this laggy experience where you'll press buttons on the remote and the channel takes forever to change. I hate it and it's one of the reasons I don't have cable any more.


The central reason is that modern video codecs use I-frames and P-frames (sometimes also B-frames, though as far as I am aware these are not used for TV broadcasts); see https://en.wikipedia.org/w/index.php?title=Video_compression...

I-frames are only sent, say, once or twice a second.

When a channel is switched, the TV has to wait for the next I-frame, since P-frames (and B-frames) only encode the difference to the previous I-frame (or to the previous and next I-frame in the case of B-frames).

If you are aware of a possibility for efficient video compression that avoids this problem, tell the HN audience; the really smart people who developed the video codecs apparently have not found a solution for this. ;-)

Otherwise complain to your cable provider that they do not send more I-frames to decrease the time to switch between channels (which would increase the necessary bandwidth).
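
For a rough sense of the numbers being described: if a channel change is roughly "fixed tuning/demux overhead plus the wait for the next I-frame", the I-frame wait alone averages half the I-frame spacing and can be as long as the full spacing. A small back-of-the-envelope sketch (the 0.3 s overhead figure is an assumed illustration, not a measured value):

    #include <stdio.h>

    int main(void)
    {
        const double tune_overhead_s = 0.3;          /* tuning + demux, assumed */
        const double gop_s[] = { 0.5, 1.0, 2.0 };    /* I-frame spacings to compare */

        for (size_t i = 0; i < sizeof gop_s / sizeof gop_s[0]; i++)
            printf("I-frame every %.1fs: avg switch %.2fs, worst %.2fs\n",
                   gop_s[i],
                   tune_overhead_s + gop_s[i] / 2.0,  /* expected wait: half the spacing */
                   tune_overhead_s + gop_s[i]);       /* worst case: the full spacing */
        return 0;
    }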


It's actually worse than that - first you have to tune, then you have to wait for a PAT (which has an index of PMTs in it), then you have to wait for a PMT (which contains pointers to the audio, video and ECM streams), and then you have to wait for an ECM (the encrypted key for the stream); at that point you have decrypted video and can start looking for I-frames ....

(smart systems both cache a whole bunch of this stuff and revalidate their caches on the fly while they are tuning - first tunes after boot might be slower as these caches are filled)


> When a channel is switched, the TV has to wait for the next I-frame, since P-frames (and B-frames) only encode the difference to the previous I-frame (or to the previous and next I-frame in the case of B-frames).

You can apply the encoded difference to a grey (or black!) screen, the way (IIRC) VLC does in such cases. This means that the user immediately gets a hint of what's happening onscreen, especially since the audio can start playing immediately (also, often the P/B-frames replace a large portion of the on-screen content as people move around, etc.). Surely it isn't any worse than analog TV "snow".

If it looks too weird for the average user, make it an "advanced" option - 'quick' channel changes or something.


Anyone old enough to remember the revolution of interlaced GIF files on the web? ;)


I remember when a family member first got satellite - the tuner would change instantly, and the show would "pepper in" over a second or so until the next full frame. There's no technical reason the change can't be displayed immediately - it might not be pretty, but whether that's preferable is subjective.


If my memory is correct that was the first directv/ussb system. I think I remember reading they were using something like a pre-finalized version of mpeg2.


Unless I'm very mistaken about modern digital transmissions, cable coming into the house is still based on a broadcast system, which means you're getting everything all the time. The frames are all there for every channel, they're just not being read. I don't know how much processing or memory it would take to read and store frames for the channels surrounding the one you're on, but I imagine it's possible.


While everything is usually on the cable (there are exceptions) the STB only has a limited number of tuners, which means it can't "see" everything at once. The channels that are nearby in number may or may not be on the same physical channel, which would mean the STB is effectively blind to them.

But even in boxes with multiple tuners (DVRs) your solution would require tying up at least three tuners (current channel plus one up and down) which would cut down the number of simultaneous recordings that are possible. I doubt many people would like that tradeoff.

However, the biggest issue is that most boxes simply don't have more than one MPEG decoder in them.


Actually that's a good point. My Sky TV box lets me record 6 streams plus watch a channel at once; however, I rarely have more than 1 or 2 recordings at a time, so on average I have +/-2 channels available to hold in a buffer.


Feel free to find a chipset on the market that can decode hundreds of channels of Full HD H.264 simultaneously...


Only your current transponder.


You wouldn’t have to actually decode it, just receive it all and buffer everything after the last key frame. That eliminates waiting for the next key frame.


receiving it all _is_ the problem. nobody would pay the price for an RF front end with the bandwidth to digitize the entire OTA TV portion of the spectrum. they're spread from 54MHz to 806MHz. that's 752MHz of analog bandwidth. that's huge. (i'm not even sure you could buy that frontend for love or money. ok, maybe you take advantage of the gaps and just have 3-4 frontends. now there's a correspondingly large amount of PCB space taken up, more interference issues, increased defect rate, etc)


Essentially waiting for I-frames, as the mini-GP noted?


Many channels are broadcast, but some are a hybrid. Observe that not everyone needs every channel every second. You can set aside some over-committed bandwidth in an area for the less-in-demand channels and deliver to that area the channels actually desired. This required a change to the CableCard architecture, which has been rolled out for a while now.

Some of the channels I like used to be difficult to tune with my previous cable box, because it would not correctly coordinate tuning with the infrastructure, so I'd have to retune. If I left the box on such a channel and turned it off, the next time I'd use the box the screen would be black.

In the old days all the channels were analog, and used 6MHz each (may vary in your region), and channel changes were much faster.


probably not much because you don't need to decompress them, just keep in volatile memory


> probably not much because you don't need to decompress them, just keep in volatile memory

How do you plan to obtain I-frames without decompressing them?


The decompression could be done lazily when you switch to that channel.


The same way that switching to a channel does it. But instead of waiting for the next one to arrive, you first scan your 0.5 s buffer.


That's a big part of it. But the other part, which can actually contribute more time than the decoding, is simply slow software on the STB.


There are some terrible middleware implementations out there. I remember hearing about some early attempts from DirecTV at building a DVR. They were encoding everything in XML and sending it through IPC spaghetti. IME, the level of talent in the consumer electronics space is much lower than at, say, a big N company. You have a lot of EE-turned-SW-guy types, or Java-trained CS grads who don't understand performance. Now that the industry is slowly dying, it's losing even more talent.


Cache the most recent I-frame on the network and have the STB pull + display the cached I-frame till it catches up? This would enable fast channel scanning/flipping at the very least ...


IIRC Microsoft did this in their IPTV platform.


The tuning chip costs maybe 50 cents or something. Just have 3 or 4 of them and pretune the next few channels.


> The tuning chip costs maybe 50 cents or something. Just have 3 or 4 of them and pretune the next few channels.

That's 50 cents the shareholders can pocket, and everyone has already inured themselves to the slower experience.

Isn't progress great?


50 cents per unit times several hundreds of millions of units adds up to real money in a hurry.

BOM costs make or break mass market hardware products. You don't just add 50 cents of BOM to a mass market item without a real good reason.


> BOM costs make or break mass market hardware products. You don't just add 50 cents of BOM to a mass market item without a real good reason.

I guess the question is, why is that so?

IMHO, a valid "real good reason" is fixing a product/technological UX regression. However, it seems American business practices have settled on shamelessly selling the cheapest acceptable product for the highest acceptable price. If cheaper means a little crappier and enough customers will put up with it, cheaper it is. I'm dissatisfied with it because it usually means the stuff I buy is less durable or lacking on some fit-and-finish area.


I think you and your OP are both correct.

50 cents x 4, along with the other increases, is likely $5+ of BOM cost increase, which could make or break a consumer product. But your reason is also true, as it improves UX.

This is where innovation and Apple come in: you need to market the product with features that masses of consumers believe in and are willing to pay for. (Lots of people, including those on HN, often mistake innovation for invention.)

There is nothing "American" about these business practices; it is the same with any European, Chinese, or Korean manufacturer. They could very well have put this feature in, but I am willing to bet $100 it wouldn't make a difference to consumers' purchase decisions. So why add $5 or more for a feature they can't sell?

But Apple has the ability to move consumers, and to charge more (with this feature as part of a package) to demand a premium. And if Apple successfully markets this feature, say with some sort of brand name like "QuickSwitch", it is only a matter of time before other manufacturers copy it.


Go look at all the failed hardware-based kickstarters for why BOM costs matter.

It has nothing to do with “American Business”. Just a fact of life in a competitive market.

Is it worth spending an extra 2-5 million on tuner chips so 100 million set-top boxes can change channels faster? You tell me.


Based on my recent interview with a company that does a huge amount of livestreaming, the really smart people at least have a few ideas about this.


Sure. The answer is then, probably, "it's not cost-effective to do that in COTS consumer equipment."


Remember, decoding is much cheaper. And if you own or have some control over the client, there is flexibility about how you get things.

That said, it's true that there still may not be a practical solution which is better for the user than letting them wait a second.


No need to reinvent video encoding. At least my local provider seems to fix this by having all the channels streamed as multicast continuously and then having the TV box request a small bit of video over normal TCP to do the channel switch immediately and only later syncing to the multicast. That allows you to change channels quickly at any time and starting to watch at whatever the latest I-frame was.

I notice this happening when IGMP forwarding is broken in my router and channels will only play for a second or two after being switched to and then stopping. Switch times are pretty good.


Or, the TV UX designers can realize this and make the TV respond instantly by switching, e.g., to the channel logo plus the name of the current programme (it is available from the EPG) and then replacing it with video 0.5 s later.

This would allow rapid channel surfing, something I haven't been able to do on any recent TV.


> If you are aware of a possibility for efficient video compression that avoids this problem, tell the HN audience etc...

This is a bad comment/reply - upstream wasn't complaining about the codec but about the usability (of the devices).


I explained that an important reason why the switching time is like that lies in how modern video codecs work. Taniwha gave an important addition: https://news.ycombinator.com/item?id=21836542


Why not have the previous and next channels' frame data loaded in the background? This would enable quick switching, even if it costs a bit more in hardware resources.


No. It's not a codec problem. They can leave it on the last decoded frame and fade it nicely to, say, the average color for the 1 second without going to black, and you don't have to be a super decoder genius to implement something that's a little less aesthetically jarring.


joncrane was complaining about the time to change channels. This approach does nothing to decrease this time.


Exactly. It does placate users with some indication of activity.

This can impact how someone feels about the change, but does nothing to solve the time to change problem.

One thing it does do is confirm the change is in progress. That is a subset of the time to change problem.

Many current UXes here do not give a very good indicator, or any indicator, of successful input.

Quite a few people may see their feelings about the time to change improve because they can divert their attention away from the change, knowing it will eventually happen.


> Exactly. It does placate users with some indication of activity. [...] One thing it does do is confirm the change is in progress.

A black screen is also a sign that a change is in progress. But this is exactly what my parent fortran77 complained about (https://news.ycombinator.com/item?id=21835676).


There is no making all the people happy without just changing quick, eh?


You can show the program name and what's on, on that black screen; this info is already available from the EPG.


That's actually a nice spiff. Doesn't entirely fix the problem, but could help with surfers looking for something interesting.


I think what parent also meant is input delay from the remote.


I complained about the style of the article elsewhere in the thread, but this doesn't exactly disprove the point that it's objectively worse now.


Error correction also adds some latency.


My favorite "the world is terrible" curmudgeon observation is how awful hotel TVs (and hotel TV remotes) are. Every single hotel room TV I've had the displeasure of using in the last N years takes more than 20 seconds to turn on after multiple button-presses, starts out on a terrible in-house channel at high volume regardless of the last-watched channel+volume, takes multiple seconds to change channels, and has no real TV guide option to let you see what's on. This plus HDMI ports are regularly inaccessible and it's nearly impossible to use streaming sticks (roku) due to web-based wifi portals that only work half the time if you have an ad-blocker enabled.


Careful what you wish for - many years ago I worked for an interactive TV company that focused on the hospitality market. One large chain of hotels seriously asked if we could do a remote control with three additional large buttons at the top for "Beer", "Burger" and "Porn".

Turns out getting something like that manufactured in the quantities we were looking at is a nightmare - so it didn't happen.

Edit: Clearly it would have been easier to have one button for "Beer, burger and porn" - but that has only occurred to me now.


Sounds like a legit app, though.


While in Japan, the TVs would often turn on the moment you entered the room. This was fine, as I would mute them or turn them off. At one hotel, I managed to bug one out such that volume didn't work anymore. No worries, I'll just turn it off. Except when I went to turn it off, it crashed the TV app and it automatically restarted. All the wires were integrated so they couldn't be tampered with and the unit had no real buttons. I thought I was going to have to sleep with the same four very excitable adverts on rotation blasting into my ears!

Mercifully, pressing the TV Source button triggered a different app that didn't crash when I pressed the off button, and in what must be the software engineering achievement of the decade, the off button turned off the screen.


In the hotels I stayed at in Europe it's usually a standard non-smart TV plus a custom board that connects to it over HDMI. Sometimes the whole thing is enclosed in some kind of plastic shroud that clips over the TV but nothing a bit of "percussive maintenance" can't fix. From there, the HDMI port is accessible.

However, in most cases, at least in mid-range rooms, the TV is barely bigger than my laptop so it just doesn't make sense to use it.


The usual problem I have is that I need to switch to DHCP based DNS to register my MAC address to the room number, then switch back so the hotel can't screw with my DNS lookups.

It might not be your ad-blocker or script-blocker; it might be your DNS settings.


Yes, pleease, just provide Miracast or Chromecast so I can watch content from my phone on the big screen.

Or just let me use an HDMI port.

I don't watch channels anymore, and I don't want to pay for your pay-per-view content.


I'm in a room right now that has an HDMI cable on a little plug in front of the TV. Unfortunately I never remember to bring my USB-C to HDMI adapter when I stay here.


Hotel remotes. You all triple-sanitize them, right? And then burn them....


I don't generally bother. I like to challenge my immune system once in a while to keep it in fighting condition.


Uhhh.

"In fact, we found semen on 30% of the remote controls we tested."

http://m.cnn.com/en/article/h_990ff64f90f909f55b916692ee340d...


When working on digital set-top boxes a decade or two ago, channel change latency was one of the toughest problems. Eventually we (collectively, the whole industry) gave up and settled for 1 second as acceptable. Channel surfing quietly went away. When your I-frame only comes so often, there's not a whole lot you can do.

Nowadays, that problem is solved by reducing actual live content to a strict minimum; everything else can be on-demand.


Maybe there's not a lot you could have done while keeping the hardware cheap. I can think of a few ways to improve the user experience of channel surfing without waiting for an i-frame every time.

The cheapest would be to just use a constant neutral-grey i-frame whenever the channel flips, and update that until a real i-frame comes along, while playing the channel audio immediately. Ugly video for a second, but high-action scenes fill in faster. I'd bet that most people could identify an already-watched movie or series before an i-frame comes in, at least 80% of the time.

More expensive would be to cache incoming i-frames of channels adjacent to the viewed channel, and use the cached image instead of the grey frame. Looks like a digital channel dropping update frames during a thunderstorm for a second.

Prohibitively expensive (back then) would be to use multiple tuners that tune in to channels adjacent to the viewed channel, and then swap the active video and audio when the channel up or channel down buttons are pressed. Drop the channels that just exited the surfing window, and tune in to the ones that just entered it. Surfing speed limited by number of tuners.

Televisions still don't do this, even after more than a decade of digital broadcast, and multiple-tuner, multiple-output DVR boxes.


These days they should be able to guess what 20 channels you might change to and start recording those in the background.

I've always suspected the reason it's slow is because you press the remote button, the DVR sends that to the provider, the provider has to verify that you can do what you are asking it to do, then a response comes, then the change can start.


The slowness can also be because the channel you are trying to watch is not even there.

I'm not sure who all is using it now, but I used to work on the setup for Switched Digital Video. If nobody in your neighborhood was watching a certain channel, it would stop getting broadcast. That freed up bandwidth for other things, like internet. Once you tuned to a channel, a request would go to the head-end, which would quickly figure out if the channel was being broadcast in your area. If not, a request would go to the content delivery system to start feeding it to the QAM and then obtain what frequency the channel was on, and finally relay that back to the set-top box, which would tune and start the decoding process.

Rather impressive tech but again, this would add a bit more latency to that particular channel switching.


And that's one of the reasons things seem worse: there are massive, clever efforts to provide an infinity of options, but they're not completely seamless. Compared to the do-one-thing-and-one-thing-only appliances of yesteryear, it necessarily looks bad, especially when looking through the memory's rose-tinted glasses.


Nope, the STB has everything it needs to decrypt the channels you have access to. Tuning live TV channels happens 100% on the box. The only exception (a technology called Switched Digital Video) is just a way to save bandwidth on the cable, nothing to do with conditional access.


If I unplug my internet connection, my box won't let me do much at all. If I press pause or fast forward it just tells me the operation isn't authorized.


That's a weird system, certainly not the standard way of doing things on a traditional cable TV system.


I think it's pretty normal these days. The cable companies sell the data on what people watch, where the rewind, pause, fast forward, etc...


No, it's absolutely not normal for a regular cable DVR to tie authorization for DVR functionality to an Internet connection.


My last two have been like that. Try unplugging your internet connection, then playing something you recorded earlier. If you have an AT&T or DirecTV DVR, it won't play.


Can't there be a dedicated channel or connection that's always tuned, that just broadcasts I-frames from all channels, so that the box has the latest frames for all channels and can start playing instantly when switching channels?


I remember when this happened, when it went "digital", and I know why it takes a second to switch channels, but it ruined the experience for me. While at the time there were ways to watch video on the computer or from a DVD or VHS, I still liked to "channel surf" every once in a while. But that requires switching channels fast, for a random amount of time based upon what was on and how long it took your brain in that moment to absorb it. And sometimes you'd stop and go back. But with digital, most of the time the load time was longer than the time I'd be on that channel in the first place. It'd take minutes to go through 10 channels as opposed to seconds. Channel surfing was its own activity for me back then - and it was ruined.

Nowadays there's youtube - but it's hardly the same thing.


I don't know who can afford cable any more to be honest. The only people I know who pay for cable are in the 45+ age range and they only use it because they just leave it on 24x7 or to watch the weather.


This I do agree with. I haven't had cable for years and years, but when I'm at someone else's house I am baffled by how insanely slow it is. That would drive me NUTS on a day-to-day basis.


Analog TV was real-time at the speed of electrons, minus some layers of inertia (capacitance, say).

Nowadays there are many layers of various quality, and software reactivity is a factor of so many things.


To be more accurate, the speed with which electrons move through the wire is rather low (which does not matter, of course, because the signals are carried by the electromagnetic forces which propagate very quickly).


Also, TVs kind of started having a black screen in between channel switches. At night that becomes really hard on the eyes.


Modern cable systems are more akin to satellite broadcast systems than they are to the terrestrial broadcast systems of yore. There's an order of magnitude more content on cable these days. When you tune a channel now, instead of simply tuning to a frequency and demodulating the signal, content is spread across many frequencies, each of which carries a QAM-modulated MPEG transport stream that carries many programs. It takes longer to tune QAM than baseband analog or even PSK, so the physical frequency change takes longer than it used to. Once the frequency is tuned, the decoder may need to check provisioning in order to be able to decrypt whatever program you've requested, so that adds to the time it takes.


It takes time to pick up the MPEG2 scheme and start decoding.


all they need is to create a low rate channel with slowly updating snapshots of all the programs being transmitted.

i don't know anyone who does that, but it is damn trivial.

(former middleware engineer)


> all they need is to create a low rate channel with slowly updating snapshots of all the programs being transmitted.

> i don't know anyone who does that, but it is damn trivial.

Getting this standardized in an international standard is far from trivial.


You may well be hitting a deeper point here.

Many of the strongest AND weakest consumer technology experiences are influenced by standards in some way.

Positive examples: plugging in a USB headset, electrical sockets, SMS.

Negative examples: trying to hook up your laptop in a random conference room, transferring a large file locally between 2 devices from different vendors.

Maybe with ever-increasing complexity and capability, the annoyances fall between industry actors, and solutions would first need multiple parties acknowledging the problem, followed by successful coordination.


"they" usually write requirements for set top middleware and control the broadcast. the problem, i think, is not that it is difficult (Musk launched a massive electric car company and spacecraft to ISS in the relevant time frame), but that "they" don't care about the users.

pretty much anything cable and satellite tv companies do is against the user. there is very little (if any) innovation in the industry, and that's why they will eventually die.


The Ericsson MediaRoom IPTV platform (used by AT&T Uverse in the US) does something similar to this, but it's only used to show a small window to preview the highlighted channel while you're scanning the guide.


It's not only switching channels, it's even just turning the thing on. Also, switching between inputs takes a long time, a large part of which is spent in the menu. The only thing I ever do is switch between the AppleTV and PS4 and adjust the volume, because the PS4 is orders of magnitude louder than the AppleTV, yet some of that always feels super clunky if I have to use the TV remote.


It's patently absurd that a modern cable box chugs just when flipping through the channel guide. The provider has control over the software and the hardware so it should be butter smooth all the time.


Doing that would take extra work, and cost more money. Why should they bother, when the dinosaurs who still watch cable TV aren't going to pay them any more for this butter-smooth experience? The people who care about such things have all moved to on-demand streaming services.


It is interesting to see just how much more information is in a second of video than it used to be.

That said, I'm not sure that accounts for the latency by itself. Pretty sure it doesn't. :(


The problem is that MPEG video frames, MPEG audio frames, streaming latency, HDMI decoding, etc. all introduce a delay, for reasons.

The answer: put an ad in there.


It depends on the TV or tuner box. Some are very fast and some are very slow. It is something you have to test before buying if you can.


Yeah, it sucks though that most TV providers supply the lowest-specced box they can provide. For my parents it was bad when they bought a 4K television; the box "supports" that, though since some firmware update it starts lagging every 20 seconds. Dropped frames and the usual video "smudge". I hope they get a (FREE) update soon, because this is just a broken promise.

And I'm not even talking about the Netflix "app" that's on there. Holy s#!t that's slow. Or the TV-guide. They now resort to teletext because that's much faster... I mean...


Yes. They can fade out the old screen with "snow" until the next MPEG stream starts to decode.


Also navigating through the "guide" is simply horrendous.


To call this rose-tinted glasses when considering how things worked in 1983 is a massive understatement.

A counterexample: in 1983, enter two search terms, one of them slightly misspelled or misremembered, hit f3: "no results", spend 10 minutes trying to find the needle in the haystack, give up and physically search for the thing yourself.

Enter two search terms slightly incorrectly now: most of the time it will know exactly what you want, may even autocorrect a typo locally, and you get your accurate search results in a second.

When things were faster 30+ years ago (and they absolutely were NOT the vast majority of the time, this example cherry picked one of the few instances that they were), it was because the use case was hyperspecific, hyperlocalized to a platform, and the fussy and often counterintuitive interfaces served as guard rails.

The article has absolutely valid points on ways UIs have been tuned in odd ways (often to make them usable, albeit suboptimally, for the very different inputs of touch and mouse), but the obsession about speed being worse now borders on quixotic. Software back then was, at absolute best, akin to a drag racer - if all you want to do is move 200 meters in one predetermined direction then sometimes it was fine. Want to go 300 meters, or go in a slightly different direction, or don't know how to drive a drag racer? Sorry, you need to find a different car/course/detailed instruction manual.


All of this is absolutely no excuse for the ridiculously high latency everywhere now.

Want to give me fancy autocorrect? Fine. But first:

* Make a UI with instant feedback, which doesn't wait on your autocorrect

* Give me exact results instantly before your autocorrect kicks in

* Run your fancy slow stuff in the background if resources are available

* Update results when you get them... if I didn't hit "enter" and got away from you before that.

It's not that complicated. We've got the technology.

And also, there's still no fucking reason a USB keyboard/mouse should be more laggy than their counterparts back in the day.
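
For illustration, a minimal sketch of that ordering - cheap local matches printed immediately, the slow "fancy" search running on a background thread and reporting when it finishes. The data and function names are made up; this is the pattern, not any particular product's code:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static const char *items[] = { "notepad", "notes", "network settings", "node" };
    static const char *query   = "note";

    static void instant_local_results(void)
    {
        /* Cheap substring filter: runs in microseconds, never blocks the UI. */
        for (size_t i = 0; i < sizeof items / sizeof items[0]; i++)
            if (strstr(items[i], query))
                printf("local:  %s\n", items[i]);
    }

    static void *slow_search(void *arg)
    {
        (void)arg;
        sleep(1);                      /* stand-in for a network or index lookup */
        printf("remote: note-taking apps (refined result)\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        instant_local_results();                          /* immediate feedback */
        pthread_create(&worker, NULL, slow_search, NULL); /* slow stuff in background */
        /* ...the UI stays responsive here; results update when the worker is done... */
        pthread_join(worker, NULL);
        return 0;
    }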


Windows 10 search and macOS's spotlight do something like this. It's irritating when results reorder themselves milliseconds before I hit enter.

Either way I'm not sure it rises to the level of indignation shown here.


The only thing Windows 10 UI rises to is a paragon of lagginess - and I say that about the UI I generally like, and one that runs on 10-year-old equipment just fine.

There's no good reason for a lag after hitting the "Start" button.

There's no good reason for a lag in the right-mouse-button context menu in Explorer (this was a "feature" since Windows 95, however).

I could go on for a long time, but let's just say that Win+R notepad is still the fastest way to start that program, because at least the Win+R box wasn't made pretty and slow (but it still has a history of sorts).

The search box behaves in truly mysterious ways. All I want it to do is bring up a list of programs whose name contains the substring that I just typed. It's not a task that should take more than a screen refresh, much more so in 2019. And yet, I still have no clue what it actually does - if it works at all[1].

[1]https://www.maketecheasier.com/fix-windows-10-start-menu-sea...
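
As a rough sanity check of the "screen refresh" budget, here's a small timing sketch: a plain strstr() scan over ten thousand made-up program names, compared against a 16.7 ms frame. The names and counts are synthetic, and this is obviously not how the real Start menu index works; it just shows that the substring-filter part is nowhere near the bottleneck:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define N 10000

    int main(void)
    {
        static char names[N][64];
        const char *query = "note";
        int hits = 0;

        for (int i = 0; i < N; i++)                    /* fake "installed programs" */
            snprintf(names[i], sizeof names[i], "program-%d%s",
                     i, (i % 97 == 0) ? "-notepad" : "");

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++)                    /* the actual "search" */
            if (strstr(names[i], query))
                hits++;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("%d matches in %.3f ms (frame budget at 60 Hz: ~16.7 ms)\n", hits, ms);
        return 0;
    }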


Just install Classic Shell (now Open-Shell). Resistance is futile.


It would not be a big deal to measure/guess your reaction time and wind back changes you had no chance to see.


In fact I’m pretty sure Safari does this in its Omni-bar. Why it can’t be system wide I don’t know.


I agree that a lot of the use cases back then were hyperspecific and the software were drag racers. However, much of what he talks about is productivity tools, and I believe those fit the same description. I occasionally work with many different screens using a TN3270, some of which look very different and have very different purposes; however, they have a common interface with similar keystrokes. This makes navigating even unfamiliar screens a breeze. He talks about the commonality of keyboard language compared to that of GUIs, and I think that is an excellent point of his.

Check out this specific part of his postings: https://i.imgur.com/Roz80Nd.png

The main idea I got from his rant was that we have mostly lost the efficiencies that a keyboard can provide.


So to some degree that's preaching to the choir around here; I still use emacs extensively and adore it, but I'd also never wish it upon anyone who doesn't value work efficiency over low cognitive load and nice aesthetics. In my experience, at least, that's most people.

In light of that, I think it's less that we've "lost" keyboard-driven efficiency as much as knowingly sacrificed it in favor of spending UI/UX dev time on more generally desired/useful features. The nice thing about being the type of power user who wants more keyboard functionality is that you can often code/macro it yourself.


Software in the 80s was: extremely low throughput but excellent latency.

We could really do better on the latter category here in the 21st century.


Part of that was the widespread use of low-resolution, high-refresh-rate, zero-latency (CRT) monitors, and the use of hardware interrupts instead of a serial bus for input.


> one of the things that makes me steaming mad is how the entire field of web apps ignores 100% of learned lessons from desktop apps

I can't agree with this enough. The whole of web development in general really grinds my gears these days. Stacking one half-baked technology on top of another, using at least 3 different languages to crap out a string of html that then gets rendered by a browser. Using some node module for every small task, leaving yourself with completely unauditable interdependent code that could be hijacked by a rogue developer at any moment. And to top it all off now we're using things like Electron to make "native" apps for phones and desktops.

It seems so ass-backwards. This current model is wasteful of computing resources and provides a generally terrible user experience. And it just seems to get worse as time passes. :/


I’ve gone back to making simple HTML pages for all sorts of projects. Even trying to avoid CSS when possible.

It’s funny, in a way, because the “problem” with straight HTML is that it was straight hierarchical (and thus vertical) and so a lot of space was wasted on wide desktop displays. We used tables and the later CSS to align elements horizontally.

Now on phones straight html ends up being a very friendly and fast user experience. A simple paragraph tag with a bunch of text and a little padding works great.


Related: I was bored last Sunday, so I decided to install Sublime Text. I'm normally a VS Code user, but VS Code is built on Electron and it felt a little sluggish for certain files, so I wanted to give something else a try.

I've been using Sublime all week and it feels like an engineering masterpiece. Everything is instantly responsive. It jumps between files without skipping a beat. My battery lasts longer. (I don't want to turn this into an editor debate, though. Just a personal example.)

If you would've asked me a month ago, I would've said that engineers cared too much about making things performant to the millisecond. Now, I would say that many of them don't care enough. I want every application to be this responsive.

I never realized how wasteful web tech was until I stopped using it. And I guess you could say the same for a lot of websites – we bloat everything with node_modules and unnecessary CSS and lose track of helping our users accomplish their goals.


If I remember correctly, this is like the third comment I have read on HN in years that shows Sublime Text is faster than VSCode.

I have been arguing about this since maybe 2016 or earlier. On HN the echo chamber was all about how much faster VSCode is compared to Atom - "I can't believe this is built on Electron", etc. With every VSCode update I tried it, and while it was definitely faster than Atom, it was nowhere near as fast as Sublime. And every time this was brought up, the answer was either that people felt no difference between VSCode and Sublime, or that VSCode was fast enough that it didn't matter.

The biggest problem of all problems is not seeing it as a problem.

I hope VSCode will continue to push the boundary of Electron apps. They had a WebGL renderer, if I remember correctly, that was way faster; not sure if there are any more things in the pipeline.


VSCode WebGL renderer link - it was enabled in one of the latest versions as an experimental feature for the terminal: https://code.visualstudio.com/updates/v1_41#_new-experimenta...


I used to choose Eclipse over IntelliJ for similar reasons years ago. When it came to refactoring and other integrated features IntelliJ won, but Eclipse could do partial compilation on save and run a unit test automatically within moments, whereas the same thing on IntelliJ invoked a Maven build that often took minutes.

The speed of feedback - how quickly you can go through the basic cycle of writing code and testing that it works as expected - is the speed at which you can develop, and the editor is a pretty critical part of that.


Interestingly enough, I had kind of the opposite experience.

Since I haven't done much with Java itself, the build times weren't as impactful on me.

What made a big change was how IntelliJ, despite being a pure Swing GUI, was an order of magnitude lower in latency, from things as simple as booting the whole IDE to operating on, well, everything.

Then I switched from Oracle to IBM J9 and I probably experienced yet another order of magnitude speedup.


Weird to compare an IDE and a text editor, though.


VS Code [1] is a text editor. Visual Studio [2] is the IDE.

[1] https://code.visualstudio.com/

[2] https://visualstudio.microsoft.com/vs/


VS Code has Intellisense, an integrated debugger, and much more. What makes it not an IDE?


VS Code and Sublime Text are comparable, though: they're both extensible text editors, not real IDEs.


VS Code != VS. It's a text editor, not an IDE (unless you consider things like VIM to be IDEs because you can use plugins to do IDE-like stuff with them.)


That's kind of a marketing distinction. VS is not a kind of monolithic spaghetti monster system: internally, it is made of components, which provide a text editor, indexing for various languages, etc.

You can do likewise with VS Code or other environments, except maybe some plugins are not installed by default.

In the end it boils down to: how do we define an IDE? And even if it is about bundled capabilities, I would still be able to create a "dedicated" (would not need much modification) Linux distro and declare it to be an IDE.

It was easier to distinguish IDE from other things in the MS-DOS era.


What makes it not an IDE to you? VS Code has first class Intellisense, an integrated debugger, ...


Reminds me of John Carmack complaining that it was quicker to send a packet of data internationally than to draw it on screen: https://www.geek.com/chips/john-carmack-explains-why-its-fas...

More relevant to the article, I fully agree with the author's frustration at trying to do two parts of the same task in Google Maps; it's entirely infuriating.

Edit: duplicate submission: one directly on Twitter, this one through the threaded reader. The other submission has >350 comments: https://news.ycombinator.com/item?id=21835417


> in 1998 if you were planning a trip you might have gotten out a paper road map and put marks on it for interesting locations along the way

In 1998, I used https://www.mapquest.com/ to plan a road trip a thousand miles from where I was living, and it was, at the time, an amazing experience, because I didn't need to find, order and have shipped to me a set of paper road maps.

In the 1970s, when I had a conversation with someone on the phone, the quality stayed the same throughout. We never 'lost signal'. It was an excellent technology that had existed for decades, and, in one particular way, was better than modern phones. But guess what? Both parties were tied to physical connections.

Google Maps is one product, and provides, for the time being, an excellent experience for the most common use cases.

> amber-screen library computer in 1998: type in two words and hit F3. search results appear instantly

So that's a nice, relatively static and local database lookup, cool.

I wrote 'green screen' apps in Cobol for a group of medical centers in the early and mid 90s. A lot of the immediate user interface was relatively quick, but most of the backend database lookups were extremely slow, simply because the amount of data was large, a lot of people were using it in parallel, and the data was constantly changing. Also: that user interface required quite a bit of training, including multi-modal function key overlays.

This article has a couple of narrow, good points, but is generally going in the wrong direction, either deliberately or because of ignorance.


This seems to be conflating several separate problems

1) I can't manipulate data in my resultset well enough in google map.

2) Searches are too slow.

3) Mousing is bad.

Now, you can argue that those are related.

The first two are an argument for moving away from full-page post/response applications to SPA-style applications where the data is all in browser memory and as you manipulate it you're doing stuff on the client and pulling data from the server as needed, desktop style.
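
As a sketch of that SPA style (the /api/places endpoint and the Place shape are made up for illustration): fetch the result set once, keep it in memory, and let further narrowing happen on the client with no extra round trips.

    interface Place {
      name: string;
      category: string;
    }

    let cache: Place[] = [];

    // One network hit to load the whole result set...
    async function loadResults(query: string): Promise<void> {
      const res = await fetch(`/api/places?q=${encodeURIComponent(query)}`);
      cache = await res.json();
    }

    // ...then refining it is a pure in-memory operation, desktop style.
    function refine(substring: string): Place[] {
      const needle = substring.toLowerCase();
      return cache.filter((p) => p.name.toLowerCase().includes(needle));
    }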

The latter? I don't know why he had to go back to DOS guis. Plenty of windowed UIs are very keyboard friendly. Tab indexes, hotkeys, etc.

> GUIs are in no way more intuitive than keyboard interfaces using function keys such as the POS I posted earlier. Nor do they need to be.

This is where he loses me. I remember the days of keyboard UIs. They almost all suffered from being opaque. You can't say "the problem is opaque UIs" when that describes the vast majority of keyboard-based UIs.

While there are obviously ways to create exceptions, GUIs are intrinsically more self-documenting than keyboard inputs, because GUIs require that the UI be presented on the screen to the user, and keyboard inputs do not.


The beginning of his rant was very much all over the place, but towards the end he very much homed in on the loss of keyboard efficiency. I'm sure everyone who has experienced keyboard efficiency can agree.

I think the part you quoted is an interesting problem. I agree with his statement that GUIs are not necessarily intuitive; however, I do believe they are easier to pick up than keyboard inputs. Seeing older people try to navigate websites shows me how much of the "intuitiveness" I take for granted is actually just experience. I think the focus, though, was that the "intuitiveness" of a GUI is not worth the loss in efficiency, especially when we have the capability to combine the two interfaces into one system.

One thing he said stood out to me, which was that including a mouse interface on a primarily keyboard-based system is much better than trying to add keyboard functionality to a primarily mouse-based interface.


Mousing is bad. It really is a cumbersome tool. I do a lot with my workspaces so that I can avoid using it.


Hmmm

Most Millennials I know who are technical absolutely love mice because they grew up using them, and most of them have extensive PC gaming experience to boot.

I’m the Linux/CLI junky among them and even I don’t find mousing cumbersome—to use someone else’s words, it’s an amazing, first class input device. Same goes for trackballs. By comparison touchscreens are a joke, there’s no depth of input like with a mouse(RMB,LMB,etc) and every action requires large physical movements.

I’m used to seeing people fly through menus with a mouse at speeds people here expect to see only from keyboard shortcuts. Just because mousing is cumbersome for you doesn’t mean it’s universally true at all. I know keyboard shortcuts are fast, but it’s a lot to memorize compared to menus, which typically have the same basic order: File|Edit|View|...|Help


Interesting! I am definitely in your first group. Born in the 90s, grew up using a mouse and playing mouse driven PC games, yet I far, far prefer the concise precision of a keyboard.

I guess it just depends on what the program requires of your inputs. When it comes to software development, window switching and maneuvering around websites, keyboards are precise and rapid, where the mouse can only do one thing at a time before needing to travel to the next input.

The other important part about ditching the mouse, is that when you're predominantly typing and using both hands on the keyboard, switching over to the mouse takes a non-trivial amount of time. You have to move your hand over there, figure out where the cursor is on the screen, then do what you need to do with it. When you're doing it hundreds of times a day, it adds up.


I grew up with a mouse too and am excellent with it. I was born in the 90’s for reference. However recently on the job I have learned to use keyboard to navigate mainframe menus and editors and it’s very nice. While I don’t find a mouse bad at all, it is noticeably less efficient in some key use cases. Many productivity tools could benefit from a focus on keyboard interfacing.


I never disagreed with that.


After reading this headline I felt like I was having déjà vu. I wasn't - it was posted 11 hours ago:

[1] https://news.ycombinator.com/item?id=21831931

[2] https://news.ycombinator.com/item?id=15643663 (posted two years ago)


I marked this one as a dupe and was about to merge the threads, but the discussion here actually seems to be better than yesterday's. There's clearly a major appetite for this topic, so perhaps we'll break with normal practice and leave this thread up.


I agree. I went to click this link thinking it was the link I saw yesterday that looked interesting but hadn't clicked at the time; now I'm doubly excited to find that there are multiple threads covering the same thing.

Additional: anyone know of a good F# library for the gui.cs framework? Before I manually write bindings I figure I can throw a quick query out there.


Seems everyone commenting on the post from earlier today totally missed the ultimate point of his post, which was UI bloat and failing to take advantage of keyboard efficiencies. It's almost like the large majority of them only read the title...


The other one is still on the front page…


Coming from the ops side and as a user, I blame a focus on abstractions away from fundamentals that lure new and old developers alike into overcomplicating stacks.

For example, I would say some significant percentage of both websites and Electron-style apps don't need anything other than pure HTML5+CSS3, with JavaScript only if necessary - not "here are 5 different JavaScript frameworks!" Of course there are cases where a framework is applicable, but I'm talking about the vast majority of cases. Then on top of all that come 20+ advertising and spying scripts per page, and pages become unusable without a good JS blocker (uMatrix is my favorite). Some of this is technical and some of it is business; either way, developers need to focus on the fundamentals and push back against dark patterns.

Now on the desktop side, this is also why I am a CLI junky who lives in a terminal. CLI apps don't get in the way, because most of the time you are just dealing with text anyway. There are many websites that ought to be a CLI app, at least via an API. This is also one of my criticisms of companies trying to force people into the browser by disallowing API clients.

It was the constant bloat and spying that finally spurred me to go gnu/linux only many years ago, and things are only getting better since then. It requires a change in how you do your computing, yes. It may not be easy (higher learning curve), but the rewards are worth it.


I was driving across country a few weeks ago, became tired and had to check in at a roadside motel. It was 3AM and the clerk took about 15 minutes to complete the check-in. She apologized several times and said "this was so much faster back years ago when we had DOS. There's too many clicks, screen freezes and confirmation with the newer computers". She was older, so I'm assuming by newer she means POS systems built within the past 20 years.


One that got worse and then better again at least in my experience is boot up time. Old computers running MS-DOS I remember booting up very quickly (unless you waited for the full memory self test), then for quite a long time it took forever to get into Windows no matter what you did to try to speed it up. More recently things start up pretty quickly again; I think it's mainly hardware (solid-state disks being the major improvement), but I do think Windows seems to do a little better software-wise too. (Linux and BSD I haven't used as a desktop in a very long time so I'm not sure where those are now. OSX I don't have much of a sense of, partly because it just doesn't need to fully reboot as often as Windows.)


>but I do think Windows seems to do a little better software-wise too

Windows 10 does a little trick to speed up boot - when you perform a shutdown, Windows 10 saves a mini-hibernation image to the hibernation file. When you perform a normal boot, it can start up quite fast[1]. This gives a noticeably shorter boot time, especially on spinning rust drives (I know, I know, $CURRENT_YEAR). However, if you perform a "reboot" instead of "shutdown + power on", you'll get the full-length boot, which takes notably longer.

[1] assuming the hardware setup is sufficiently unchanged


This exact hibernation feature is what I feel is making my shutdowns slower!

I've been rocking an M.2 SSD for quite some time now and Win10 _always_ takes a considerable 1-2 minutes to shut down.


I get 11 seconds to shut down, 11 seconds to boot, on a 256G NVMe ssd. I'm curious what's going on on your machine.


Wouldn't even dumping literally all ram contents take only a few seconds on an NVMe drive? It's probably something else.


That's the first thing I disable after every Windows 10 update (it keeps getting reset for some reason). If you have a dual-boot setup, it can result in a corrupted partition if you try to access your Windows 10 partition from another OS while fast start is enabled.


I run separate partitions for OS and storage on dual boot machines, so I can avoid the risk of one OS corrupting the other.


Nope; if the hibernated OS has the storage volume mounted, and you edit that volume in another OS, separate or not you will end up with corruption, just as if two machines had mounted it simultaneously.


I feel that (partially due to this hibernation file, partially due to SSDs) Windows boots are actually faster than ever now. Even a reboot seems to take under 30 seconds. I remember just the RAM test on my Pentium 1 took several minutes, and the boot itself often took a few more.


Maybe, but Ubuntu still boots faster on the same hardware, doing a full boot sequence.


My linux laptop sleeps so efficiently that I just never shut it down. It can sleep for days, at which point I've almost definitely plugged it into something. It sleeps in my backpack with no temperature issues. The SSD makes working with traditionally quite heavy tooling lightning fast.

Likewise, my Windows machine hibernates so fast that boot time feels like I'm just waking it up from sleep.

Thanks to the advent of SSDs, applications are also quite peppy to startup. Music, movies, pictures, all so fast to use.


Still hasn't caught up to a c64. A flick, loaded.


If you keep your computer in suspend-to-RAM mode, it will do that and still use less power over time than the C=64 and its damn power brick.


There are some good points here, but of course I am going to ignore those to talk instead about the stuff I disagree with!

The section about google maps follows a form of criticism that is widespread and particularly annoys me, namely => popular service 'x' doesn't exactly fit my power user need 'y', therefore x is hopelessly borked, poorly designed, and borderline useless.

There is always room for improvement, but all software requires tradeoffs. One of the things that makes a product like google maps so powerful is that it makes a lot of guesses about what you are actually trying to do in order to greatly reduce the complexity and inputs required in order to do these incredibly complicated tasks.

So yes, sometimes when you move the map some piece of data will be removed from the screen without your explicit consent, and yeah, in that moment that feels incredibly annoying. But balance that against the 100s or 1000s of times you used google maps and it just worked, perfectly, because it reduced the number of inputs needed to use it to the bare minimum.

Google maps doesn't need to fit every use case perfectly, and while its fine to talk through how your hyper specific use case could and should work, remember all the times that it seamlessly routed you around traffic from your office to your house in one touch while you were already hurtling down the highway at 70 mph.


The example for maps is not a "hyper specific use case": "The process you WANT: pick your start and end. now start searching for places in between. Your start and end are saved. When you find someplace interesting, add it to your list. Keep doing that, keep searching and adding."

That's a common use case. The problem with Google maps (and the problem with a lot of modern software) is, as you say, it makes a lot of guesses.

The definition of a good user interface is "to meet the exact needs of the customer, without fuss or bother"*

Google Maps is great for finding directions to a very specific place. But after mapping those directions, doing almost anything else destroys that route. If I have to (and I do) open multiple map tabs, or repeatedly enter the same route info after making a search (if I'm on a phone) it is not a good UI.

*https://www.nngroup.com/articles/definition-user-experience/


> The process you WANT: pick your start and end. now start searching for places in between. Your start and end are saved. When you find someplace interesting, add it to your list. Keep doing that, keep searching and adding

Am I missing something - this use-case is already supported! You choose start & end points, then you start the trip (which "saves" them) - you can now search and add as many waypoints as you desire.


That's what I was thinking, I do that all the time.

For example, do a map of San Francisco to New York City. Now you want to visit the world's largest ball of twine, so you add a waypoint, and start typing "Ball of Twine" and a drop-down will appear with a few choices, pick the one you want and it'll add to the map. You can re-order them as needed to optimize your route.

You still need to know the name or address of the waypoint you want to add, but that's the case with paper maps and is a good use of browser tabs to search for it.


I've been using Google Maps since it launched, and had no idea this is how it works!

One important factor in a good UI is that it is discoverable! If you build an amazing feature but forget to inform the user about it, you've wasted the work.


I discovered this by myself - I mean, the search button is right there! I must have been in the middle of a trip & searched for something, which pops up an "Add Stop" button. I think it's pretty discoverable.


I'm glad you discovered this, but let's look at a strawman grandma: "the search bar is where you type where you want to go, therefore if I search, it will turn off my current navigation / reroute my destination to whatever I am searching for".

The above is not a foregone conclusion, but it was how I thought until I read your and parent's postings about it, and I'm a techie. For every 1 techie that doesn't know about a feature, there are 1,000 users (or something like that).

I would posit that the discoverability difficulties are present whether someone is in a TUI or a GUI.


I'm specifically referring to the Google Maps mobile app - it doesn't have a search bar, but has buttons that come and go depending on your current context. If a button is visible in your context, you can bet it works in that context. As an example, the layer, compass and search buttons are present when you're not navigating, but once you start, the layer and compass buttons are replaced by the audio chatiness button. The search button is still present, and the implication is that its usable in my current context: that's great discoverability in my book.


I only hit start when I start driving, and then I don't look at the screen much, since I'm driving.

If you hit start before, it starts talking a lot, and that's annoying.

To be fair, I've never really wanted this feature much, so I haven't tried to find it.


I just recorded a two minute video me of me trying to do this. Maybe it will help you understand why people like Gravis (and myself) are frustrated. The video is at http://paste.stevelosh.com/1983.webm but it's a little blurry, so I'll narrate it.

(There's a less blurry mkv at http://paste.stevelosh.com/1983.mkv for those that want it.)

I go to `maps.google.com`. The page loads a search box, with my cursor focused inside it. Then it unfocuses the search box while some other boxes pop in. Then it refocuses the search box. Is it done thrashing? Can I type yet? I wait for a few seconds. I would have already entered in my query by now in 1983. I sigh. This bodes well.

I guess it's as done as it's ever gonna be. I search for "rochester ny to montreal qc". I wait for the screen to load. It finds me a route, which is actually good. Step one done.

Now I want to find a restaurant somewhere in the middle. Let's try just browsing around. I find somewhere roughly in the middle — Watertown seems like a good place to stop.

I zoom in on Watertown. I wait for the screen to load. I look around the map and see some restaurants, so I click one. Now I want to read the reviews, so I scroll down to find the "See All Reviews" link. My scroll wheel stops working after I scroll more than an inch or two at a time, until I move it out of the left hand pane and back inside it. I sigh, wiggle my mouse back and forth repeatedly to scroll down and click on the link.

A whirl of colors — suddenly the map zooms in on the location. Why does it do this? I wanted to read the reviews, not look more closely at the map! Now that the map is zoomed in, a hundred other points of interest are suddenly cluttering the map. I wanted to read reviews about this restaurant, and suddenly 3/4 of my screen is filled with text about other places. I sigh.

I ignore the garbage now cluttering most of my screen and read some reviews. This place seems fine. I click the back arrow, then click Add Stop to add it to the route. I wait for the screen to load. Suddenly my screen whirls with color and zooms out, losing my view of Watertown. I sigh.

My trip is now 8.5 hours instead of 5.5, because it added the new stop at the end. AlphaGo can win Go tournaments, but I guess it would be too much to ask for Google to somehow divine that when I add a stop in the middle of a 5.5 hour trip, I might want to visit it on the way by default. I sigh and manually reorder the stops.

Let's also find a gas station somewhere before Montreal, because I like to get gas before I get into the city so I don't have to deal with it once I'm in. Cornwall seems like a good place to stop.

I zoom in on Cornwall. I wait for the screen to load. I don't see any gas station markers, but that's fine, there's a button that says "Gas stations" on the left! I click it and the screen goes blank. I wait for the screen to load. I've suddenly been whisked away to downtown Montreal instead of looking around where I'm currently centered on the map. Guess I should have read the heading above the buttons first. I sigh.

I click "back to directions". I wait for the screen to load. The map does not return to where I was previously, it just zooms to show the entire route, throwing out my zoomed-in application state. I think back to Gravis' tweet of "gmaps wildly thrashes the map around every time you do anything. Any time you search, almost any time you click on anything" and I sigh.

I rezoom in on Cornwall. I wait for the screen to load. The gas station button didn't work, but surely we can search, right? I don't see a search box on the screen, so I roll the dice and hit Add Destination. This gives me a text box, so I try searching for "gas stations" and pressing enter. This apparently didn't search, but just added one particular gas station to the route. It also zoomed me back out, throwing away my previous zoomed in view.

I rezoom in on Cornwall. I wait for the screen to load. I notice the gas station it picked happens to be across the US/Canada border from the route. That clearly won't work. I sigh and remove the destination. This zooms me back out (I wait for the screen to load), throwing away my previous zoomed in view.

I rezoom in on Cornwall. I wait for the screen to load. I click Add Destination again and this time notice that when my cursor is in the box, there's a magnifying glass icon — the universal icon for "search" — right next to the X icon (which will surely close the box). It even has a tooltip that says "Search"! Aha! That was well-hidden, UI designer, but I've surely defeated you. I click the magnifying glass icon and it… closes the input box. I… what? I sigh, loudly. It has also zoomed me out, throwing away my previous zoomed in view. I wait for the screen to load.

I rezoom in on Cornwall. I wait for the screen to load. Okay, apparently I can't search to try to find routes. I guess I'll resort to browsing around the map again. I notice what looks like a gas station called "Pioneer" and click on it. Cool. But then I realize this is on a bit of a side street. Surely I can find a gas station along the main road. Let me just cancel out of this location by pressing X.

My entire route is completely gone. All that time I just spent, flushed down the toilet. To add insult to injury: this is the one time that it didn't automatically zoom me out and lose my view of the map. It just threw away all of my other state.

Fuck this. I'm with Gravis.


You know, the back button works in Google Maps and does exactly what you might think it does: take you back to the state you were just in.

After you accidentally lost your route, you could have just used a built in feature of your browser to get yourself back to where you were.

EDIT: The rest of your post was entirely accurate. Google Maps is a slow, stuttery mess on literally every platform I've ever used it on recently. At least the back button works...


I guess years of using single page Javascript webapps where the back button is a complete shitshow has trained me to not even consider trying it. I'm impressed it actually works on Google Maps.


Why do you think that use case is that common? And why do you think google maps should elevate that particular use case above other competing use cases?


I hadn't even thought of this specifically, but after it was mentioned in the initial post I realized just how much that is my use case most of the time and how often I am fighting with google maps to accomplish what are relatively simple tasks like adding an additional point along a route. If you are trying to add multiple additional points on a route it gets even worse.

This is all much worse on the Android app as well, where it makes the assumption that your use case is to get from where you are right now to somewhere else. Trying to get from point A to B, where neither is where you are now, is unnecessarily frustrating.


> This is all much worse on the Android app as well, where it makes the assumption that your use case is to get from where you are right now to somewhere else.

That strikes me as a fantastic assumption. I wonder what percentage of routes involve the user’s current location? I bet it’s high!


Yep. But it used to be even better when it made that assumption clear by adding a pre-filled box with your current location.

It just worked for the default case but when you needed something else it was straightforward to do that.


Doesn't it? It gives me two boxes, "current location" and "destination" and I can change either.


I can't reply to eitland for some reason, but yes, I get both those boxes.

I open the app, click my destination, and then click "Directions". The very next thing is both of those boxes, with "current location" defaulting to the start location. I can then change that if I want.

It optimizes for my most common use case, but allows me to do it otherwise, too. I don't think I could design this better.


Straight away after you open Google Maps on a mobile?


No, I open it, look for the destination, then press directions and can edit the starting point.


Then we agree. I find that utterly annoying since I've seen how simple it could be but it seems many people disagree with me :-)


On android I just (from london) typed "Washington DC to New York" and it instantly popped me up directions for the other side of the world, with two editable boxes.

That seems pretty decent UX wise?


That is such a narrow use case. Were you actually planning that trip I am sure you would find the UX lacking.

* What about stops along the way?

* What about saving the results for later?

* What if you want to do some other mapping task in the middle of all this?

* Are the directions given feasible?


I just tested this and it works as follows;

- Open Maps

- Search destination. It autocompletes after about 5 characters

- Select destination

- Screen changes to infobox about the location. There is a prominent "Directions" button

- Press "Directions"

- It changes to a route view, the Start is autocompleted to Current Location but obviously editable

- Press into start location edit box

- I can type location or "Choose on map"

This process requires essentially the minimum possible information from me (I want directions, from A, to B). What is frustrating about it?


That is 9 or so manual steps and it doesn't become clear until step 7 or so that it can even be done! There's nothing intuitive about this and when someone knows how it works that must be because they've either learned it from someone or kept on experimenting with it until they figured it out.

compare this to the original that they "simplified" away:

- Open app in navigation mode (step 1)

- it shows two boxes, where you are going from and where you are going to

- fill said boxes. There is a button next to "from" to choose your current location. (steps 2 and 3)

- click get directions (step 4)

Compared to the current "simple" version it is immediately clear, and there are fewer steps and fewer things you need to know.


You didn't break down your steps like the parent did. Here's your way:

1) Open app 2) Search for start 3) Select start 4) Search for destination 5) Select destination 6) Click directions

Here's the parent's way:

1) Open app 2) Search destination 3) Select destination 4) Select directions 5) Search for start 6) Select start

They're the same process.


No, it was literally 4 actions (3 if you accepted the default starting point) and the same amount of typing as the current solution. I didn't summarize anything.

1. Open app in navigation mode (there was a separate icon for that)

2. Accept default start or type if you don't want the default.

3. Point at destination

4. Type destination and enter

Besides it was immediately obvious when I opened the app for the first time on my first smartphone, it just made sense and still does when I think about it.

Edit: I reread https://news.ycombinator.com/item?id=21836204

I exaggerated wildly and can get it down to 5 steps. It is by definition discoverable since we have all discovered it, but I hold that it is still not obvious or self-explanatory in any way.


It's immediately obvious to me how to use it now. And if you accept the default start then both solutions are still the same.

Gmaps right now works like this:

1) Open App 2) Click search box 3) Either select a destination from the list that pops up or start typing and actually search. Once that's done the route pops up with your travel time. 4) Click start

If you want to change your start:

4) Select the starting location 5) Search or select from the list that pops up and your route and travel time are show. 6) Click start

It's not rocket science. It's all obvious from the UI.


Obviously it's fewer steps if you combine some steps into 1 when paraphrasing.


The point is it was one step. There was a separate entry point (or whatever you'd call it) in the main Android menu that took me straight to this.


This is exactly what I usually want to do with maps. I have a route I want to plan, and I want to do more than one thing along my route, or see what else is in the area. It's futile in Google Maps.

I see no reason that supporting this thing that old mapping software used to support would elevate it "above" other use cases. If you just want a single route, you do one search and you see the result and you never click the "add" button, no problem.

I should dig out an old Delorme Street Atlas CDROM and install it in a VM, to get some sense of how many clicks it took to do the things I used to do. I don't think it was many. It was definitely pickier about address entry; that's one place Google has absolutely improved. But aside from that, it was way more powerful at pretty much everything else.


Not trying to just nitpick, but if we assume that the Google Maps team uses research and data from users when planning features, then the feature you're after may actually not be very common.

And your answer to someone asking "Why do you think that use case is that common?", your first line literally just talks about your use case from your point of view:

> This is exactly what _I_ usually want to do with maps. _I_ have a route _I_ want to plan, and _I_ want to do more than one thing along my route, or see what else is in the area. It's futile in Google Maps.

I'm not saying that wouldn't be useful, it's just that maybe not that many people need it... I guess it was built with the idea that you would just open more tabs to search other things?


> if we assume that the Google Maps use research and data from users when planning features

Based on what I've seen from Google product design, this is a pretty bold assumption.

While Google has access to unfathomable amounts of data collected from users, it's more than happy to eschew that if the data conflict with higher-level product or company strategy decisions, which generally are much less motivated by raw user data.


I have not conducted a survey, but they strike me as reasonably common desires.

On a 4-5 hour road trip, I want to take the kids to see a castle or something somewhere around 1/2 to 3/4 of the way. Even just wanting to have lunch somewhere other than Hilton Park or Newport Pagnell would be such a use case.

I have also wanted it for visiting someone - I'm going to their house, what is my most convenient option for buying some wine and/or flowers on the way?

I have wanted it when I've been away from home and have a big time gap between finishing my planned activities (or having to check out of my hotel) and my train or plane departure. What is the best way to spend a few hours that is anywhere on the route from here to the airport/station.


I think there is an anti-car bias in Google Maps and similar services.

Everything is oriented around the model of "reserve a hotel", "reserve a flight", like you really are on rails like a European.

Today's online maps aren't up to the freedom that motorists have to make small deviations from a route. For instance if I drive from here to Boston I am likely to stay at a hotel en-route, that could be anywhere from Albany to Worcester. I don't have strong feelings about where, but it might be nice to find a good deal or find a place that I think is cool.

Thus I am interested in searching along a tube around my route, not clicking on cities like Springfield and running a search at each one.


Google is in the directory business. Ultimately they don't want us to make the most informed decision, they want us to "feel lucky" and trust The Algorithm. Because the more we "feel lucky", the bigger the fear of businesses to get punished by The Algorithm for insufficient ad spending.

That's why desktop web search is less valuable to Google than mobile web search, mobile web search is less valuable to them than map search, map search is less valuable than voice search and voice search while driving is their holy grail because there the ranking game is completely winner takes all. A second page hit on desktop has a better chance at getting traffic than the second place overall in voice while driving. (And those sweet "while driving" hits will almost always be followed by actual business transactions, whereas the old desktop is just a mostly worthless page view)

AFAIK Google is far from allowing businesses to directly bid for that coveted number-one slot (it would ruin their ability to keep the balance between attracting advertisers and attracting eyeballs), but the result is even better for them: when businesses "bid by proxy" by buying other ad products in the hope/fear that it might be a factor in the ranking, Google doesn't just get the winner's money. I'd absolutely say that drivers are very high on Google's audience priority list; it's just that nobody on that list is a customer.


My "big time gap" example is explicitly a non-car use case.

The visiting example for me is normally a non-car use case. If going by car, I would probably pick these things up close to home and carry them all the way.


Wouldn't agree with regard to Apple Maps being "anti-car" when there isn't even a bike mode. Walking- and car-mode are unusable for cycling, with car-mode taking bad routes for cyclists and walking mode giving directions way too late, when a cyclist is already on/past the crossing.


> Everything is oriented around the model of "reserve a hotel", "reserve a flight"

Of course, that's where Google makes their money from the service. Google Maps isn't a public good, it's a line of business.


Google Maps shows search results annotated with how much time they add as a detour.


Search for destinations along your route is (finally) a feature in the gmaps mobile app. It isn't in the web-app. Apparently Google agrees that it is a desirable feature. Why does the UI-limited, small screen experience surpass the rich desktop experience?


You can do it on the desktop app, via the add a stop feature. If you add a stop and then enter a search term ("coffee shops"), you'll see matching results along your route. (IIRC it doesn't show you the added travel time like the mobile app does, though. Maybe that's what you're referring to?)


Finally? I've been searching for things along my routes since at least the 2017 Eclipse as that's the first time I can remember using the feature on my trip from BC to Oregon.


Because if you plan to travel far you often want to plan a stop along the way to eat, use the toilet, refill and maybe rest a bit in some hopefully interesting place.


Because it's a map. That's what you do with maps. You browse them, you study them, you mark them up.

I use Google Maps almost daily and this is also my complaint. It's not a hyper-specific use case. Google Maps are good for navigating from point A to point B when you are sure of both, but they suck at being a map. For instance, lack of always-on street names and weird POI handling makes them problematic to use when you want to explore the area you're in.


I disagree. That's what we used to do with a map because we did not have information available at our fingertips.

We would study the map ahead of time, based on the map figure out our plan of action by either making mental notes or notes in notepad, or notes on a map and eventually execute our plan based on the information we have selected.

We no longer need to do that. We can decide "I want to do something around X" , go to X and when we want to do something specific ask maps "Where can I find Y around X"?

The ability to drop pins removed the need to study the map to complete most of the tasks. When one stumbles upon something interesting while reading a book, watching a show, or scrolling through Eater, one can drop a pin on the map so that next time that person is in the area, the pin is there!


Repeatedly searching and dropping pins in Google Maps is like eating your dinner by pulping it and drinking through a straw.

Studying a map ahead of time and marking it up (on the map itself, as we did with paper maps and dry-erase or permanent markers) is a more efficient interface. There's this forgotten principle in UI that users are very good at mentally filtering out noise and focusing on the relevant parts; that's what our sense of sight is optimized for. Having to actively search whenever you need to know something is an inferior experience, both in terms of efficiency and because of missing context.

(Also, dropping permanent pins is AFAIK impossible in the Google Maps proper; it's a feature of "my maps", which is hidden somewhere and has weird interactions with Google Maps.)


> Repeatedly searching and dropping pins in Google Maps is like eating your dinner by pulping it and drinking through a straw.

You are thinking about it as a synchronous workflow. Study map->create a plan->execute a plan. This workflow was the only workflow because it was impossible to execute a search when needed.

Google maps is optimized for a modern workflow. "I'm here. I need X. How do I get there?" With pins that workflow is asynchronous.

For example, I use pins for restaurants. I find/read something about a place I want to try at some point. I drop pins. Next time I happen to be in the area, I see the pins that I dropped. It may happen tomorrow or three months from now. My alternative is Yelp with its sync workflow - search and analyze the results of a search, or rely on my memory of what place should be around where.


That's what Google's "My Maps" does. It allows you to mark up a map. I've used it for planning trips frequently to mark places of interest and then used that map to plan my trip using regular ol' Maps for the turn-by-turn.


Gas station or McDonald's on the route to some far destination. I've done this in the android app and it works but it can be a little awkward.


Because it's so simple. I've been wanting it for YEARS. What's more simple than planning a trip? What else do you need a map for if you're not planning to go from point A to point B?

I hate when I need a restaurant or gas station ALONG MY ROUTE and yet years later no maps have this ability. It's insane.


Selecting/Creating waypoints has been a digital mapping-feature of every service/tool I have used since digital mapping/navigation became a thing. From garmin hand-held off-road GPS topo navigation systems to units used for fishing to hiking/biking.

It has been a cornerstone of digital navigation since the things were invented. To claim that it's an edge-case ignores history and instead highlights how _you_ use the tools.

IMHO it was more obvious that Google wants you to 'actively search' for $waypoint items while en route instead of pre-planning: "hey google, show me restaurants near me"

That gives them a better way to capitalize on advertising and force $ from companies that want to stay relevant and appear in those types of searches.


Search along route is surprisingly hard to get right. Results near to the route geographically are not necessarily near in terms of disruption to your route (other side of a river, wrong side of a highway etc.)

To add to the "things are getting worse" narrative, we implemented this properly back in the days when sat navs were still relatively exciting things. Last I saw the algorithm was to do a lightweight route plan through nearby search results and find the ones that made the smallest difference to your arrival time at your final destination. I don't think the google maps search API does that yet, although I haven't worked in the area for quite a while.
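
A minimal sketch of that ranking, assuming you can cheaply plan start -> candidate -> end for each nearby result; routeDuration() below uses straight-line distance as a stand-in for whatever travel time the real routing engine would return:

    type LatLon = { lat: number; lon: number };

    // Stand-in cost function: straight-line distance through the stops.
    // A real implementation would ask the routing engine for travel time.
    function routeDuration(stops: LatLon[]): number {
      let total = 0;
      for (let i = 1; i < stops.length; i++) {
        const dLat = stops[i].lat - stops[i - 1].lat;
        const dLon = stops[i].lon - stops[i - 1].lon;
        total += Math.sqrt(dLat * dLat + dLon * dLon);
      }
      return total;
    }

    // Rank candidates by how much extra they add over the direct start -> end route.
    function rankByDetour(start: LatLon, end: LatLon, candidates: LatLon[]): LatLon[] {
      const direct = routeDuration([start, end]);
      const added = (c: LatLon) => routeDuration([start, c, end]) - direct;
      return [...candidates].sort((a, b) => added(a) - added(b));
    }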


The Android app will show you how much each search result along the destination will add to your route. It also shows the gas prices if you're looking for fuel.


That's how I plan virtually every multi-stop journey. And adding this functionality would not take away from the usability of gmaps for single-leg journeys with no stops.


How would not erasing information when you move the map compromise the current use case, though?


Because especially on mobile, screen space is insanely valuable, and if you want to show new information from the new area the map has been moved to you often need to remove "stale" information.


Sure, but it could just as easily be a button to clear this information explicitly or it could just be hidden, but not deleted.


it wouldn't, which is why the complaint in these tweets ("drag the map even a pixel? it erases all your results and closes the infobox you were looking at") is, in fact, not something that happens.

give it a go.


Making guesses isn’t a problem. The ideal information software requires no interaction at all! You open your maps app in the morning, and it instantly brings up how to get to your next calendar appointment. If you’re making a long drive, it automatically suggests a gas station along the route. The more our software can infer, the better!

Bret Victor has a great essay on building information software called “Magic Ink” [1]:

> Information software, by contrast, mimics the experience of reading, not working. It is used for achieving an understanding—constructing a model within the mind. Thus, the user must listen to the software and think about what it says… but any manipulation happens mentally. Except possibly for signaling a decision, such as clicking a “buy” button, but that concludes, not constitutes, a session. The only reason to complete the full interaction cycle and speak is to explicitly provide some context that the software can’t otherwise infer—that is, to indicate a relevant subset of information. For information software, all interaction is essentially navigation around a data space.

Of course, guessing poorly is a problem, but that’s an issue with execution.

[1] http://worrydream.com/MagicInk/#interactivity_considered_har...


It's not guessing poorly that is the problem.

The problem is guessing poorly, and making it cumbersome for the user to override your guess.


You can search along the route if you are navigating, however I agree it could be improved in that you don't get to see any details of a place or add it to saved places (you either add a stop or don't). I don't think Maps is bad for this reason but I agree it is possible to improve.


I can do that today. What app are you using?


Edit: yes. Rewrote half of it.

The thing with Google Maps was that it was actually reasonably good and intuitive on mobile until sometime 5 or 7 years ago when someone decided it had to be "simplified".

The old version was easy: you enter "to" and "from", and it gives you a route.

I think it also had multiple entry points so you could choose "navigate", "browse" and "timeline" or something directly from the system menu.

The "simplified" version removed all that + the timeline feature I think and replaced it with one search box.

The timeline came back after a while as did a number of other features they removed but it still isn't as easy or intuitive as the early versions and it still annoys me every time I want to get a route from A to B (as opposed to from where I am now to B).

Compare this to Windows 95 that I disliked for a few months until I got used to system wide drag and drop and realized it was in fact better than Windows 3.1.


>The thing with Google Maps was that it was actually reasonably good and intuitive on mobile until sometime 5 or 7 years ago when someone (probably a UX designer : ) decided it had to be simplified.

All Google stuff ended up doing this when they started trying to standardize their "design language" across their services. They developed a very annoying habit of hiding every useful function or bit of relevant contextual information inside a poorly marked hamburger menu somewhere. It's extremely annoying from a discoverability perspective and I strongly suspect any UX designers involved lost a lot of arguments for a decision like that to get codified.


I'm convinced this is the entire purpose of Google PMs; take a good product, make it "prettier", strip away functionality and usability, move to another department, get replaced by someone whose goal it is to make the product even prettier-er, strip away even more functionality, wash/rinse/repeat.

Is there a single person who prefers the monochrome GMail UI where you can't easily visually parse one thread from another, or the "new and improved" functionality where you need to click at least 2 or 3 times to even SEE what address you're sending to or from, or to change the subject?


Performance has also degraded disastrously. My phone is kind of a dinosaur at this point (Galaxy 7), but the lag when scrolling around the map is seconds, and elements delay-load so that misclicks are constant.


Changed to an iPhone XR a few weeks ago. It's the cheap model of iPhone, and it still feels way faster than my latest Androids, including the Samsung S7 edge and other flagship phones (and yes, I bought them new).

I've stuck with Android until now, but now that I can replace the keyboard on iPhone I gave it a chance and I'm super happy with it.


You are comparing a 3 gen old device to the previous gen iPhone.

iPhone 7 I had is noticeably laggy with maps and other heavy apps.

Meanwhile my Samsung S10e (equivalent to the XR) has more than sufficient performance and has a better screen for a lower price.


I tried to hint at this being a problem also when I bought them new, but I can see that it wasn't too clearly written.

My point is that my current iPhone is the first phone since my Samsung S2 that hasn't disappointed me by being slow more or less immediately after unboxing it.


> popular service 'x' doesn't exactly fit my power user need 'y', therefore x is hopelessly borked, poorly designed, and borderline useless.

So you are telling me that wanting to see the name of a given street without having to zoom 10000x (and even then sometimes...) or figuring out how to get directions to and from somewhere are "power user needs"?

Give me a break. Google Maps was way easier to use as a map before. Now it prioritizes ad revenue at the expense of what users actually want to do.

There are no "power users" in this new world.


Getting directions is simple. Even the author doesn't take issue with that specific functionality. And I've never had an issue with Google maps hiding street names when there was ample room to show them, but if you're more than a little zoomed out, there isn't that room. Showing only major roads is a decent trade off, as is the mouse scroll wheel for zoom in/out. Or maybe it's my own failure of imagination: how would you improve in this particular area?


There are definitely cases where I am so zoomed in that only a couple of roads are visible on the entire screen, yet there is no name on a road and I have to drag the view down the road multiple screen widths until I can see the name. (This isn't a trade-off for advertising, it's just a flaw.)

Edit: In fact, I just confirmed it now. Opened up Google Maps and started looking at small roads. For the third road I checked, Maps wouldn't show a name no matter how far I zoomed in, and I had to drag the view as described above.


It's really not about not having enough room. An example that I found immediately zooming at some random location on Google Maps: https://svkt.org/~simias/up/20191219-174804_map.png

Note that none of the larger roads leading to the roundabout at the top-left are named, while some (but not all) of the smaller street names are there. Instead you have the "D509" label copy/pasted haphazardly, but that's not the actual name of the boulevard that would be used on a postal address, so it's of very limited use (and even leaving those labels in, there's plenty of room to add the actual street name).

Here's Open Street Map for the same map at a similar zoom level: https://svkt.org/~simias/up/20191219-175553_map-osm.png

OSM doesn't have all the bells and whistles of GMaps, but as far as the map itself it's vastly superior IMO.


I think the D509 type of issue is just a trade-off. At least when it happens in the US: it is the "official" designation and can be used for post, while the local name changes from town to town. That is less of an issue when navigating within the same town, more of an issue when navigating through towns: go 30 miles along a "CR" county road and it might change names 4 times. Rather confusing for directions to make it look like you have to travel down 4 different roads for that one leg of the journey. I suppose Google could show both, though, depending on zoom level.


> Getting directions is simple. Even the author doesn't take issue with that specific functionality.

Yes he does. You touch anything and state is erased. Don't know exactly which one of the search results you want to go to? Tough shit, the interface works against you.

And still, getting directions used to be simpler. Now you have to decipher unlabeled hieroglyphs, and the interface keeps changing. You can't even get used to it. You are constantly being nudged to do shit that is not what you really want to do, such as "exploring your neighborhood".

> And I've never had an issue with Google maps hiding street names when there was ample room to show them, but if you're more than a little zoomed out, there isn't that room. Showing only major roads is a decent trade off, as is the mouse scroll wheel for zoom in/out.

Lucky you, I guess. I have this problem all the time. A better trade-off: if I searched for "Market Street", show that label! That would be a start. And frequently labels aren't shown even when there is plenty of space.

Oh, and why not show the scale of the map by default? Is this also a "power user" feature? I thought it was a crucial piece of information when reading a map...

> Or maybe it's my own failure of imagination: how would you improve in this particular area?

Easy. Revert to the interface circa 2010. It had none of the above problems.


Exactly. It basically forces you to walk with the map open instead of just doing something like 'keep walking straight until you see road X'. Such a damning thing when you are walking 15-20 minutes to a place in a new city or even in an unknown area of your own city.


Granted, some good ideas.


I've given up on street names. The problem of unlabeled street names is worst in cities (where you generally need directions the most) because the GPS has a high chance of being lost due to tall buildings and other disruptions.

I should be able to get directions without having the GPS. If the GPS is lost, I really need those street names, NOW, without touching my phone.


Don't get me started on names of bus stops


It's not just about 'power users', and you can see this easily by looking at car dashboard controls for standard consumer cars over the last couple of decades. Old controls had the property that you could in most cases feel and adjust them without looking, and they responded instantly. Today one is often forced to interact with a touchscreen which has to be looked at while driving, responds poorly and erratically to touch, and often buries basic options in deep menus. A simple thing like lowering stereo volume, previously instantaneous, can now have significant latency.

None of the above is about power users. And none of this is innate to today's hardware. It's a matter of prioritization.

And consumers, though initially swayed by shiny objects, do eventually respond to good design and good engineering. Indeed Google itself found its early success partly through clean and thoughtful design, at a time when other search engine websites were massively cluttered and banner ads were the bane of the Web.


Two points:

They keep changing the interface faster than I can learn it.

The way to enable complex expression of tasks in a user interface is by composition, i.e. have microtasks that you can then combine in different ways.

That's what makes excel great - each cell is a single function but you can combine them to build stuff the developer never knew was possible.

Being able to store state (a location) and then operate on that state is a pretty basic building block of a composable map UI.
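
To make the composition idea concrete, here's a toy sketch (TypeScript; all names and coordinates are invented for illustration) of what "store a location, then operate on it" could look like as small, combinable pieces:

  // Microtask 1: store a location as named state.
  interface Place {
    name: string;
    lat: number;
    lon: number;
  }

  const saved = new Map<string, Place>();

  function save(name: string, p: Place): Place {
    saved.set(name, p);
    return p;
  }

  // Microtask 2: operate on stored state (great-circle distance, haversine formula).
  function distanceKm(a: Place, b: Place): number {
    const R = 6371; // Earth radius in km
    const rad = (d: number) => (d * Math.PI) / 180;
    const dLat = rad(b.lat - a.lat);
    const dLon = rad(b.lon - a.lon);
    const h =
      Math.sin(dLat / 2) ** 2 +
      Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
    return 2 * R * Math.asin(Math.sqrt(h));
  }

  // Composition: the user combines the pieces in ways the developer never planned.
  const home = save("home", { name: "Home", lat: 48.85, lon: 2.35 });
  const office = save("office", { name: "Office", lat: 48.86, lon: 2.29 });
  console.log(`commute: ${distanceKm(home, office).toFixed(1)} km`);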


>But balance that against the 100s or 1000s of times you used google maps and it just worked, perfectly, because it reduced the number of inputs needed to use it to the bare minimum.

I can't, because the autocomplete/dropdown/prediction for saved locations is disabled if you don't enable Google's device-wide "Web and App Activity" spyware function. This means I have to type my address every time I want directions home, even though I manually saved it. It's hot garbage.

Another very common usecase which is impossible: getting directions to a place and then looking at the street view so I'll know what it looks like. I have to remember to check street view before the search or redo the search entirely. Again, hot garbage, and inexcusable for a company spending bazillions of dollars on UX people.


I have a problem rather frequently with Google maps. I zoom in to a small 3x3 mile area and search for something like Chinese restaurants. What usually happens is Google either zooms out to the entire city (a 100x100 mile area) or it zooms away completely back to my home city. That's fucking annoying.

Then I manually place everything back to where I actually wanted it and had it in the first place, and lo and behold it's showing me every kind of restaurant instead of Chinese restaurants. I have to click the search this area button which of course conveniently wasn't on the original screen to get the results I wanted in the first place.

Then I click on a restaurant to look at the pictures and read the reviews and when I'm done I naturally click the back button and it's all gone. I'm back to some other screen, maybe an empty map or one of the screens I was on previously that I didn't want, but it's almost never the list of Chinese restaurants in the small geographic area I was researching in the first place.

And that's just one example of one problem I regularly have with Google Maps. It's a horrible horrible user experience if you aren't using it in the way they think you should be using it.

I don't think this is a power user use case. Everybody wants to look up Chinese restaurants at a family or friend's house at some point in their life. Why is this so fucking hard?


There are many excellent points made in response to your comment, but I'd like to point out one thing:

> popular service 'x' doesn't exactly fit my power user need 'y'

Being a power user isn't a function of geekness, or a mark of belonging to some niche. Being a power user is a function of frequency and depth of use.

My wife is a power user of a particular e-commerce seller backend, a certain CAD software, and Excel, all due to her job. She is not technical, but when you spend 8 hours a day each day in front of some piece of software, you eventually do become a power user. Teenagers of today are power users of Snapchat, because they use it all the time.

Software being "power user friendly" isn't about accommodating existing power users; it's about allowing power users of that software to appear. It's about allowing the room for growth, allowing to do more with less effort. Software that doesn't becomes a toy, inviting only shallow, casual interaction. It's not all it could be. And it's the worst when software that was power user friendly becomes less so over time - it takes back the value it once gave to people.


The Office 2007 redesign was panned for similar reasons. Power users couldn’t understand why the biggest, most prominently placed buttons were for copy and paste, when everyone knew that you could use ctrl+c and ctrl+v instead. Turns out, only the power users knew that and the vast majority of users clicked 4 times for edit-copy, edit-paste. Now they only had to click twice.

This was a quality of life improvement for the vast majority of users, even if it wasn’t for the minority who used it the most.


Indeed, but even without getting into the details of UX tradeoffs, etc... Google Maps was science fiction in 1983. The whole thread misses the fact that the functionality we have in our pockets today would have absolutely blown our minds in 1983. How can it be called perceptually slower?


Simple. Because it got slower over time.

All these threads about software being worse and "perceptually slower" than it used to be are about regressions. Google Maps and the other tools mentioned aren't pushing the envelope. They aren't bleeding edge. They were science fiction made manifest 10-15 years ago, and since then they have actually decayed in utility, ergonomics and performance. Meanwhile, all the money invested in all the software and hardware should have given us the opposite outcome.


While utility and ergonomics can be debated, I'd be surprised if performance of Google Maps in particular has regressed. Google keeps track of latency metrics for every one of its thousands (millions?) of services. If the services involved in Google Maps began to regress in latency, it would have been noticed and addressed. You can be sure that many performance improvements have been implemented, and if you had access to the 2005 and 2019 versions, you would be amazed at how capable and fast today's is compared to then.


I don't have access to 2005 vs. 2019 versions to Google Maps to compare, but I really don't care about latency as much as I care about local performance. Google Maps is getting noticeably heavy over time.

What I have good memory about though is GMail. I've been using it for 10+ years now and that really keeps getting slower and heavier over time, while offering no extra functionality to compensate.

Google may track latency metrics for all their services, yet somehow, what they build is some of the most bloated software out there. I guess they don't look at, or don't care about, those metrics.


I get your point, and it has its merit, but at some point you'd expect things to slow down, right?


My experience makes me expect it, on the theory that popular software sucks and companies building it tend to have broken incentive structures.

It shouldn't be like this in theory. Computers only ever get faster (occasional fixes for CPU bugs notwithstanding). So making software slow down requires active work. So does removing or breaking useful features.


Google has an alternate interface to maps called "My Maps"[1] with the kind of editing and composition features the author misses. I use it all the time to make maps for road trips with lots of points of interest and complicated routes.

[1] https://www.google.com/maps/d/u/0/


I tried mymaps some time ago, but there was no way (or i couldn't figure it out) to have gmaps use it for navigation. The two seemed to be quite separate products.


You had me until you started minimizing power users, like pretty much every software developer today. I don't see the guesswork as a good thing; Google Maps hasn't "just worked" for me anywhere near 100% of the time in years. "One step forward, two steps back" would almost perfectly describe GMap's "smartness"


My biggest complaint about Google Maps is that you can't read them when printed. I've been criticized for even wanting to print them, but the option is there, with unreadable results.


I like his point about not knowing what to look for. On a paper map, you can see all of the points of interest for a specific location. This is great for planning trips. On google maps, you might not see what's around until you zoom in, but you don't know where to zoom in the first place.


Except the guesses need to be good. I make it a point to avoid criticizing applications for not being power usery enough but there's a line you can draw where it becomes clear that the guesses are too frequent and consuming too much computing power, just to be ignored.

I've seen this a lot with my mother in particular, who is certainly not a power user but knows enough to get by, struggle to use software that's trying too damn hard to guess what she wants, instead of letting her just tell it what she freaking wants.


It's like how Microsoft announced they were going to use advanced AI to predict when would be a good time to reboot people's PCs for updates instead of, you know, just fucking letting them pick.


Or the myriad of ways I struggle with my Apple products that are almost universally black boxes in which stuff goes and functionality comes out, without any way to change or debug behaviors.

Apple seems to get it right pretty consistently, which is why I keep their stuff. But when it does manage to go wrong, holy shit debugging it is an absolute nightmare.


What bugs me about these nostalgic rants is that somehow the convenience of today should be mixed with the frugal interfaces of the past.

So you couldn't be bothered to find a better route planner and defaulted to Google Maps? Which you got for free, instantly available? And now you're unhappy because it doesn't do exactly what you want? Please spend some effort on your tool selection before you spend time ranting. And let's turn this around: Did you at some point in 1990 wish you'd brought a map? How did you resolve that? Google Maps on a pocket computer would have been a marvel back then!


His general point is that the point and click interface is a blunt tool and there is room for specialisation. The POS example in the tweet-rant is ugly and non-intuitive but the perfect tool for the job. I'm not sure how I'd improve Google Maps exactly because I haven't really thought about it, but I do find it frustrating enough that I end up breaking out the good old BC Backroad Mapbook from time to time[0].

0. https://www.backroadmapbooks.com/


I don't understand the purported problem this man is facing when using Google Maps. Like, step by step, what is going on such that Google Maps erases what you're doing? I've never had a problem as described, and seeing these other comments makes me question whether I'm using Google Maps in a drastically different way than the common person, or whether the people complaining about this issue are using it weirdly, or whether there's some different version of Google Maps that we are using.

If I open up Maps, type in the location I want to go to, click 'directions' and add my starting location, I can click and drag all over and move the map around, zoom in and out to look at the route, and my info does not disappear. I click 'add a destination' if I want to add a destination. If I want distance from the center of town, I change the starting location of my directions to the center of town.

For the people who face issues with Google Maps, can you please describe what you are trying to accomplish and the steps you take that result in the info on screen disappearing? I'm genuinely curious what I might be doing differently to have such a seamless experience vs. the awful experience described.


It is absolutely true. I don't know of any current computer that comes close to the low latency of the 8-bit Apple //e for example. Here's a survey:

https://www.extremetech.com/computing/261148-modern-computer...

That survey shows that the Apple //e latency from keypress to screen display of 30 milliseconds is something current computers don't even approach, even though their processors are far faster.

Here's another article: https://www.pcgamer.com/the-latency-problem-why-modern-gamin...

Of course there are reasons for this. We demand a lot more functionality. But there are costs to the functionality. In addition, most systems are not designed to minimize latency or jitter.

I believe we could do a lot better, but it would require that more hardware and software developers care about it.


In aggregate, search is orders of magnitude faster because it's so much more accurate. When was the last time you had to think about your search terms?

The other day I was thinking about the Baader-Meinhof phenomenon, but I couldn't remember the name of it. So I googled "when you suddenly notice something everywhere" and Baader-Meinhof was first result. Go back to 1983 (maybe even 2003) and let me know how long it takes to structure your search terms so you get the right answer.


> When was the last time you had to think about your search terms?

Literally every day because Google has become so ridiculously bad at guessing what I want. I'm trying to tell it what I want, but it keeps throwing out words or adding synonyms or whatever else the fuck it does.


"amber-screen library computer in 1998: type in two words and hit F3. search results appear instantly"

Bullshit. The search results appear instantly if you are searching a small text file in an editor. But if your app is actually fetching the data from somewhere, say a dBase database on a network disk mounted from a Netware server, good luck. You type your words, you hit the search key, and then you wait. There is no indication that anything is actually happening, no spinning things, no progress indicator, nothing. You cannot do anything while you wait either (multitasking is not a thing in DOS, remember?), you just sit there and wait, hoping for the best.

And this is an application that is trying to keyword search few megabytes of data over a local network. Not an application doing a fuzzy search on hundreds of terabytes of data across half a planet...


Exactly so. I used those computers back in the day. I actually designed library software in the 80s. Users today would go crazy waiting for the hard drive a 10Mbps Ethernet connection away to slowly pull data out of a dBase database.


This is a very hard rant to read. The examples either aren't factually true or are poorly explained and the "solution" is largely nebulous.

I feel like this topic has merit and with a well written article with real examples (both 2019 and 83) it could be something special, but this isn't that.

I'd be interested to know how many upvoted based on the title/what they expected this to be, rather than after having tried to read it. Most of the replies from the older thread are about the title alone (or being critical of the content).


I upvoted it, but I have mixed feelings about it. I agree with the overall sentiment that it's ludicrous how much slower everything feels on computers these days: we're able to do more complex things than ever, which is great, but it's hard to rationalize why even the simple things that we've been doing forever have become so sluggish.

My first job out of college was to migrate an internal inventory management system from an in-terminal keyboard-based application (which nobody could be hired to maintain, as they stopped teaching IBM RPG to college grads twenty years before) to a webapp. As part of gathering requirements I learned how to use it, and god damn was it fast. I watched salespeople and warehouse stockers alike fly through the menus so fast that they didn't even need to wait for screens to draw before advancing (and the screens were snap-quick to draw!), they just conjured the paths they needed from muscle memory and batched up a dozen keypresses and boom, done. I felt awful imposing this new laggy webapp on them, though I did make sure to at least keep the familiar keyboard shortcuts.

But at the same time I think the author is somewhat overselling the idea that keyboard interfaces are just as intuitive as mouse interfaces. To use the application above as an example, when we showed the new webapp to people who only had cause to use this system occasionally, they were able to navigate it more easily and professed to prefer it over the old one. Contrast this with people who used the system all day, who grumbled at the loss of productivity and need to re-learn things.

Keyboard shortcuts are great for power users who have cause to master an application thoroughly. For occasional users who don't need to master it and don't want to learn anything, a mouse will get them up and running faster. And, of course, there's still no excuse for how much lag and slowness we willingly endure in our modern UIs.


I feel like a large part of the unintuitiveness of keyboard interfaces is not actually an intrinsic quality of keyboard interfaces since they're so often conflated with command-line or TUI interfaces. There might be an alternative but yet undiscovered paradigm which combines the best of both worlds: keyboard control and intuitive, beautiful, graphical widgets.

To motivate this with a somewhat blunt example, what if we had a typical GUI but named each control (button, field, etc) with a short code (1-2 letters), in the style of Vimperator/Tridactyl/Vimium? You lose none of the intuitiveness and discoverability, but suddenly you can select the control much more quickly and precisely than clumsily trying to point a mouse pointer onto it.

And that's just a silly example I came up with off the top of my head. There may be much better keyboard-centric yet intuitive paradigms.
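
A rough sketch of what those hint codes could look like in a browser UI (TypeScript; everything below is hypothetical and not taken from Vimium or any real extension):

  const LETTERS = "asdfghjkl";

  function makeCodes(count: number): string[] {
    // Single letters first, then two-letter combinations.
    // (A real implementation would avoid codes that are prefixes of other codes.)
    const codes: string[] = [...LETTERS];
    for (const a of LETTERS) for (const b of LETTERS) codes.push(a + b);
    return codes.slice(0, count);
  }

  function showHints(): void {
    const controls = Array.from(
      document.querySelectorAll<HTMLElement>("button, a, input, select, textarea")
    );
    const codes = makeCodes(controls.length);
    let typed = "";

    // Draw a small label next to each control.
    const labels = controls.map((el, i) => {
      const tag = document.createElement("span");
      tag.textContent = codes[i];
      tag.style.cssText =
        "position:absolute;background:#ff0;font:bold 11px monospace;padding:1px 3px;";
      const rect = el.getBoundingClientRect();
      tag.style.left = `${rect.left + window.scrollX}px`;
      tag.style.top = `${rect.top + window.scrollY}px`;
      document.body.appendChild(tag);
      return tag;
    });

    const cleanup = () => labels.forEach((l) => l.remove());

    const onKey = (e: KeyboardEvent) => {
      typed += e.key.toLowerCase();
      const i = codes.indexOf(typed);
      if (i >= 0) {
        controls[i].focus();
        controls[i].click();          // activate the chosen control
        cleanup();
        document.removeEventListener("keydown", onKey, true);
      } else if (!codes.some((c) => c.startsWith(typed))) {
        cleanup();                    // no possible match: abort
        document.removeEventListener("keydown", onKey, true);
      }
    };
    document.addEventListener("keydown", onKey, true);
  }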


>To motivate this with a somewhat blunt example, what if we had a typical GUI but named each control (button, field, etc) with a short code (1-2 letters), in the style of Vimperator/Tridactyl/Vimium?

Excel basically does this. If you hold down alt it hovers the companion hotkey above each function on the ribbon. It is very handy for learning your way around.

I think this is actually the big challenge with keyboard interfaces is that there is a learning curve. The advantage of mouse (or even mores touchscreens) is that a literal monkey can figure it out. But they're also limited in what they can do and how quickly they can do it. Meanwhile, keyboard based inputs take some time to learn but become extremely powerful once you're adept at them. But for non-power users who aren't that comfortable navigating a computer, it can feel like trying to learn a musical instrument for them. We nerds can flit from one interface to another because a lot of the general mechanics and muscle memory can carry over. But for people who haven't trained that skill it's much harder to learn these things, similar to playing video games. People who have never done it have no clue what they're doing when you put a controller in their hands.


I mostly agree with your point: keyboard interfaces are often harder to learn than the standard graphical mouse interface.

But I'd like to argue that a mouse also takes a bit of time to grok when you first encounter it. A simple interface like "each thing on the screen has a two-letter code; type the code to select it" doesn't sound intrinsically harder to explain to me than teaching someone how to use a mouse the first time. Especially when you consider that there's often a lot of subtlety in when to use the right or the left button. I find this is something that often trips up people that are encountering a mouse for the first time.


> no clue what they're doing when you put a controller in their hands.

May I agree to that?

I am a power user by any measure (CS major, and I program for a living) but I still fail to grasp the controls of the simplest video game, to the point that, after hours of play, I still confuse (say) jump and attack.


Older versions of Excel, that had traditional menus, did this by underlining the access key so you could see it immediately at a glance without first holding down alt and waiting. And so did every other Windows program (and OS/2, and GTK, and Qt).


I'm not sure if you're aware that standard GUIs (common 90s-style GUIs based on IBM's CUA[1] design) already do this. Every control can be assigned an access key[2] that allows you to access it with alt + some letter, with the letter indicated using an underscore.

The trend lately seems to be to hide the underscore until you hold down alt for a while, which makes the feature much more difficult to use when you don't know every shortcut by heart.

[1] https://en.wikipedia.org/wiki/IBM_Common_User_Access

[2] https://docs.microsoft.com/en-us/windows/win32/uxguide/inter...
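
As a small sketch of the same idea on the web side: HTML already has a standard accesskey attribute, and the mnemonic letter can be kept permanently underlined instead of hidden behind an alt-press. The helper below is hypothetical, and note that browsers differ on which modifier actually triggers the access key (Alt, Alt+Shift, etc.):

  // Give a labelled control an access key and underline the mnemonic letter,
  // the way 90s CUA dialogs did.
  function assignAccessKey(el: HTMLElement, key: string): void {
    el.accessKey = key;                      // standard HTML attribute
    const label = el.textContent ?? "";
    const i = label.toLowerCase().indexOf(key.toLowerCase());
    if (i >= 0) {
      // Rebuild the label with the mnemonic letter underlined.
      el.textContent = "";
      el.append(label.slice(0, i));
      const u = document.createElement("u");
      u.textContent = label[i];
      el.append(u, label.slice(i + 1));
    }
  }

  // Usage: underline the "S" in a (hypothetical) Save button and let the
  // browser's access-key modifier press it.
  const save = document.querySelector<HTMLButtonElement>("#save-button");
  if (save) assignAccessKey(save, "s");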


I was aware of this as a thing that programs often did but wasn't aware that this was a standard. Thanks!

That said, I find that even when programs support this for accessing menus, they often do not support clicking buttons and focusing text boxes using it, so it could be taken further.


They can, and they often do (or used to). It works with both buttons and text boxes – the latter by connecting the text box to an adjacent label.

Have a look at some older piece of UI in Windows and hold down alt for a while. All we need for this to be taken further is for the programmer to pay a bit of attention to the existing feature and make use of it.

For a non-Windows example, I just looked at the XFCE appearance settings dialog. Hold down alt for a few seconds and you'll see that every single control has an access key: all the tabs, all the buttons, all the checkboxes, all the dropdowns.


I'm on Linux and I quickly tried some of my most commonly used software, such as Firefox and Deluge, but they did not seem to be supporting this, unfortunately. Only the menus get their access keys highlighted.


I appreciated his complaints, but not his solution. Only a programmer would think that going back to keyboards is a good idea.

Most people are happy waiting for a few seconds for their webpage to load, if it means they can do it without learning anything new.

Programming is a profession, and we design software for people who have other interests than software. Many of us would be better off if we got better at empathizing with people who are not like us.


> Only a programmer would think that going back to keyboards is a good idea.

He goes too far and his suggestions are nebulous.

But it sure would be nice to go back somewhat to the keyboard.

There are so many apps with broken tab-orders; so many common operations without common shortcuts; so many badly tuned completion engines; bad interactions with autofill, etc. There's a whole lot of stuff that I could do faster on the keyboard, but I am being constantly trained to not dare assuming I can fly through and the right things will happen. In any application that I am not positive will do the "right things", I am slow and tentative.

It's bad for accessibility, too. A vision deficit or dexterity deficit impacts mouse use harshly.

We need to go back to making the keyboard experience good. Not just in individual applications, but across the board. While we're at it, we should realize that being too free in design choices has negative impacts to user. There was a time that Apple really cared about this stuff, for instance, and usability on Apples excelled because you knew that there was a lot of effort in the developer community to do the right things and conform to common standards.


And you have to have 3 browsers because 2 of them are broken on some field. I wonder how often updates cratered green consoles back in the day. Curses could be tricky at times.

The path for many commercial UIs seems to be to map out complex processes into the most linear common path so that anyone off the street could do it. All that mouse action kneecaps productivity, and as soon as you come to an exception you enter a hell of popups or bazaar of UI elements.

Then of course they chop out all the keyboard shortcuts.

Great UX is sort of like fusion, it’s always 30 years away.


Great UX is worse than fusion - at least once we get fusion done it'll be worked out, but UX will always be on the brink of regression.


>I appreciated his complaints, but not his solution. Only a programmer would think that going back to keyboards is a good idea.

Or anybody that has to deal with frequent input work -- anybody at a POS, a factory control center, an air traffic control tower, a ship, a library, and thousands of other such cases.

Those people would very much want to go back to keyboards if they had used such an interface, and would very much resist having it taken away in favor of a mouse-based interface.

And most of us are not that unlike those people for many of our program uses; we just switch between different programs, many of which could be modelled like the programs mentioned above (like TFA describes) and be far easier/faster to use.


That's true and actually that's a good point because I sometimes design software for people who key in stuff at a warehouse. I should ask them if they'd like those features.

It's funny I read the article without thinking I could apply the results at my job!

But going that extra mile is hard. In most shops like mine, what people need is more features and fixes, and elegant UI isn't a high priority. It's not all laziness, there is wisdom in satisficing.


>Most people are happy waiting for a few seconds for their webpage to load, if it means they can do it without learning anything new.

Ignorance is bliss. Show them a fast way and they don't want to go back. Unless your software has a "fast" and a "slow" mode you can't make assumptions that those users are "happy" with that delay.


A better rant is “24-core CPU and I can’t move my mouse” [1].

The thing I want to say when I hear programmers ranting about software performance is: Dude, you’re part of the problem! It’s like the guy who complains about traffic while sitting in his car on the freeway. You’re part of the problem and can help fix it. Everyone who put a spinner in rather than fix the underlying performance issue is part of the problem. Everyone who chose a slower interpreted language to sacrifice runtime speed for development speed is part of the problem. Everyone making a web request for something they can cache or compute faster locally is part of the problem. Anyone who makes their web site 2MB of Javascript around 20K of content is part of the problem. Collectively we are all doing this to ourselves little by little, and as experts and programmers we have the power to correct it, so do your part and correct it in your own software rather than just complaining!

1: https://randomascii.wordpress.com/2017/07/09/24-core-cpu-and...


I found the list great and it fit the twitter format well. I usually don’t like twitter but it is what it is and this seems like what it is good for.

I disagree about mice but everything else hit the nail on the head for how frustrating it is to be using basically supercomputers and still be sitting around waiting on code bloat.


I upvoted it because I've been writing and saying this for the last 20 years.

Keyboard buffer. Do you know what a keyboard buffer is?


It's the thing that used to let me connect to a network in Windows 7 by rapidly typing win, s, n, f to open the start menu -> settings -> network -> Foobar Example VPN – without having to wait for Windows to catch up between the steps. The commands are buffered as you type and executed when the system is ready.

Of course, this doesn't work anymore in Windows 10. Now you have to wait for each bit of mysterious UI to load before you can click on to the next step.
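
A rough sketch of the old type-ahead behaviour, just to illustrate the idea (TypeScript, hypothetical names; this is not how Windows implements it):

  // Keystrokes typed ahead of the UI are queued and replayed as soon as the
  // next screen is ready, instead of being dropped.
  type Command = string;

  class TypeAheadBuffer {
    private queue: Command[] = [];
    private ready = false;

    constructor(private execute: (cmd: Command) => void) {}

    // Called on every keypress, whether or not the UI has caught up.
    press(cmd: Command): void {
      if (this.ready) {
        this.execute(cmd);
      } else {
        this.queue.push(cmd);          // buffer it instead of dropping it
      }
    }

    // Called when the UI finishes loading the next screen.
    markReady(): void {
      this.ready = true;
      while (this.queue.length > 0) {
        this.execute(this.queue.shift()!);
      }
    }

    markBusy(): void {
      this.ready = false;
    }
  }

  // Usage: the user can type the whole sequence in one burst; each step runs
  // once the UI has actually caught up.
  const buffer = new TypeAheadBuffer((cmd) => console.log("executing", cmd));
  ["win", "s", "n", "f"].forEach((k) => buffer.press(k));
  buffer.markReady(); // drains the queue in order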


[flagged]


[flagged]


This is a discussion about the perception of computer speeds. Why be uncivil?


I didn't intend to be uncivil to begin with.

Check the PS, it's meant to mirror the parent's tone (and exact words) as an anti-example.


Computers jumped the shark in 1998. I remember dual-booting NT4 and BeOS on a PII 300 MHz with 64MB of RAM. Connected to a 256 kbps SDSL modem, it’s the best computing experience I’ve ever had. You could do everything you can on a modern machine. Indeed, even more, because the modern versions of software like OneNote and Word are neutered compared to the ‘97 versions.

It feels like all of this effort has been spent making computer interfaces worse. The only improvement I can point to between MFC and a modern web UI is DPI scalability. Besides that, there are tons of regressions, from keyboard accessibility to consistency in look, feel, and operation.


Yes, that late 90s/early 2000s nirvana. I could perfectly browse the web and write documents and send emails on a PowerMac G3 without too much fuss. Only things like photo/video/audio manipulation were really lacking from machines at the time compared to today.


I don't know if it's the baby duck syndrome, rose tinted glasses, or 'back-in-my-day-ism', but I agree with you wholeheartedly.

Computing for me peaked in about 2002-2005 (I'm young, sorry), coasted okay until about 2010, then began gaining weight until today where I have a sense that computing latency is like a type 2 diabetic man having his midlife crisis. Either he loses some weight, or he has a heart attack and dies.

I agree on your point re: number crunching for rendering videos, manipulating photos, audio DSP etc.


MFC UIs will scale if you use dialog units and configure the manifest to disable auto-scaling, won't they? I guess I'll soon find out what the problems with it were, as I'm working on a Winforms app that currently runs in auto-scaled blurry mode.


I've considered using a PowerBook G3 as my main computer, but there are two practical limitations: can't get new batteries, and no support for modern cyphers/encryption.


The reason for this is, among other things, that systems programmers wrote the UIs in 1983. Today random twenty-year-old web-muppets write the UIs.

The systems programmers of 1983 were used to low-level programming, and most of them had probably written code in assembler. Web programmers seldom have that deep an understanding of the computer.

At least, this is true from my own personal experience.


I agree with much of this mainly because as a teenager in the ‘80s, I witnessed the speed and efficiency of secretaries navigating Wordperfect’s non-GUI. The POS ui in the tweet thread is similar. There’s certainly room to rethink the use of GUIs everywhere for everyone.


I distinctly remember watching the Netscape Navigator logo for long periods of time waiting for a page to load.


You should have turned images off.


I wonder if there was an "images are ruining the internet" panic like the modern-day "Look, my blog is just HTML, why can't everything be that way?" rants.


I just remember wishing it was faster. Now that it's faster I still wish it was faster.


The web, not the internet.

Yes, there was. For email too.


Even reading that stupid post is slower because the poster can't be bothered with capitalization, punctuation, or complete sentences.


Yeah, this looks like a collection of badly written twits. Who types like that?


I think this is a case of selective memory. I'm restoring and programming for an old Mac Plus, and it's anything but blindingly fast. Booting, opening an application, hell, even dragging a window has visible refresh of the contents behind it. Windows used to take forever to boot (it still isn't that fast), and anything requiring networking was orders of magnitude slower.


For those interested in metrics, Dan Luu wrote a really cool article that measured input latency on computers from different times: https://danluu.com/input-lag/


As I scrolled through this, using my mouse to click the heart on a number of the tweets, I really understood the point he makes about actions that don't need to use a mouse. When I program in vim and run things on the command line, I don't use my mouse. Even when I browse Reddit, I don't (often) use my mouse because of the RES extension, which lets me browse with keyboard shortcuts. I haven't really thought about how much easier things generally are when I'm not using my mouse, and I wish there were a similar extension for other websites. Does such an extension exist for Twitter, or for HN?


I worked at a Fry's for about a month back in 2014, when they were phasing out the blazing fast POS system he mentions and moving to a web-based one. Nearly every employee hated it, and it made everything much slower.


This post is GOLD! I'd prefer WordPerfect or MS Word in DOS over any of them in Windows. Same with 1-2-3, SuperCalc, Excel. The Windows versions are so kludgey. I could write 3x as fast as I can now.


Or Wordstar.. :)

Remember when you could see and edit embedded format codes in your word processor? Surely that never happened!


Speed is always the #1 feature: https://varvy.com/pagespeed/wicked-fast.html


I got a Raspberry Pi 4 with 4G of RAM. It is so much faster at booting and logging in than my Mac Pro, by far. And that Mac Pro has a whole lot more cores and memory, etc.

How much LESS am I getting out of the Raspberry Pi? I'm not exactly sure. I just know I can reboot in seconds and get back online while my Mac is still showing that stupid grey logo screen.

Of course, the Raspberry Pi can't come close to running Logic Audio and 5 virtual machines simultaneously doing other things in the background as I generally do on that Mac. But the boot up to useful state speed metric favours the much lower spec'd device by far.

For daily stuff like going from OFF to editing an office document, the Mac is sloth slow. It's a very interesting and wide divergence of competencies when it comes to performance and ability.

We are definitely witnessing the later part of the law of diminishing returns when it comes to updating and upgrading computer equipment.

I can't say whether or not we've reached the apex, but we sure are close if we haven't. Today, anyone can have a super computer. But nearly nobody does.


"There's no reason for Twitter to use a mouse. There's nothing mousey about this website, not a damn thing twitter: i need to navigate through a linear list and perform one of four actions on discrete items, almost all text-based"

He's apparently never bothered to learn Twitter's built-in keyboard shortcuts. You can do almost everything on Twitter without touching a mouse.


Any reference? Closest I can find on Twitter's site itself is this [1] but the second shortcut (l to like) doesn't work for me.

[1] https://help.twitter.com/en/using-twitter/how-to-tweet

Edit: OK, it works, when you use default keyboard controls (e.g. tab) to select the tweet first. I'd assumed it would work from a 'single tweet' page without having to do anything else first. Overall, looks promising, but not as nice as the keyboard controls in gmail, for example.


As somebody who actually uses a computer from 1983, I question the conclusion.

The speed of modern PCs is surprisingly bad, relatively speaking, but this mainly just means they aren't as ridiculously much quicker as they probably should be.

(The main issue cited is not meaningful, in my view, as computers from 1983 just couldn't do graphical maps at interactive frame rates at all.)


I don't know, I can compile code perceptually instantly and get optimizations which perceptually instant compilers didn't do in 1983. I know that Turbo Pascal was very fast when it came to compiling Pascal, and I certainly appreciate the effort which went into making that happen, but Turbo Pascal didn't turn string handling code into SIMD operations after turning a recursive function into an unrolled loop.

Partly, this was due to the fact the IBM PC did not have SIMD operations in 1983, but the rest is because modern compilers run complex optimization passes in tiny fractions of a second.

Also, I'd like to see this person say that operations on computers were perceptually instant while logged into a PDP-11/70 or (let's be a tad generous) a VAX-11/780 with a half-dozen other people all trying to use it at the same time. Yes, faster computers existed in 1983. Didn't mean the likes of you got to use them.


This also reminds me of the conspiracy perpetrated by the industry to remove the antiglare treatment of laptop screens and at the same time reduce the screen area by changing the aspect ratio. All this, of course, was done solely for the benefit of the consumer. (Oh, that, and a wild ornament around the keyboard.)


I used to know a computer music academic who used to attempt to run FFT convolution and other DSP processes on an Atari ST.

One time he started a process, went away on holiday for two weeks, and it was still running when he got back.

These days it would be much faster than real time - not just native, but in a web browser.


Well, I think it's much more valid to claim it's much slower than in 2000. Take, for instance, Microsoft Office products. On a business-class desktop, i3-i5, they are much slower than they were in 2000 in all aspects. Software has become slower as computers have zoomed in speed.


I agree; I think this is generally more accurate. Browsing the web, writing documents, and navigating around (modest-sized, not enterprise-sized) spreadsheets feels much slower than it did in the late 90s/early 2000s. I won't go back as far as 1983, but maybe 2003?


We have a whole generation of programmers who believe that developer productivity is the most important thing, but this ceases to be true as soon as you have any users at all.


The author must have lived in a different 1983 from me. I remember Commodore BASIC freezing up for a minute to perform garbage collection on its 30K of heap if you created too many temp strings.

The C-beams glittering in the dark were pretty cool, though.


I agree with a lot of this.

Every app wants you to "experience" its data solely in its walled garden with no ability to cross-compare data. Say I want to find a coffee shop on the way to somewhere but also one which appears in a list of great coffee shops on a separate Reddit post I found. Pretty much impossible on mobile, and a major pain on desktop.

At the very least, give me the ability to hide records/items/instances I've already seen as I'm working my way through several searches or lists. Often times searching things on Yelp and Google feels like re-reading the same list, just differently ordered over and over again.
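
A toy sketch of the "hide what I've already seen" idea (TypeScript; the names and storage key are made up): keep a persistent set of dismissed IDs and filter every subsequent result list through it.

  interface SearchResult {
    id: string;
    name: string;
  }

  const SEEN_KEY = "seen-place-ids";

  function loadSeen(): Set<string> {
    return new Set(JSON.parse(localStorage.getItem(SEEN_KEY) ?? "[]"));
  }

  function markSeen(id: string): void {
    const seen = loadSeen();
    seen.add(id);
    localStorage.setItem(SEEN_KEY, JSON.stringify([...seen]));
  }

  // Apply the filter to any list of results, from any search, any ordering.
  function hideAlreadySeen(results: SearchResult[]): SearchResult[] {
    const seen = loadSeen();
    return results.filter((r) => !seen.has(r.id));
  }

  // Usage: dismiss a result once and it stays hidden in every re-ordered
  // version of the same list you scroll through later.
  markSeen("coffee-shop-42");
  console.log(hideAlreadySeen([{ id: "coffee-shop-42", name: "Brew" },
                               { id: "coffee-shop-7", name: "Grind" }]));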


I got mad just yesterday at a new transit system interface with a map. It has moving icons to represent the present locations of buses. I clicked on a single point in the map, and it popped open a big black box of information. So big that it covered the icons of interest. Hmmm. If I could move the box to one side, I could leave it open ... nope, the box won't move.

Bad enough. So I wanted to close the box. That took a minute, because you couldn't just click anywhere inside, you had to click at one single ... and unmarked ... location. Here? Here? Here?


When comparing to amber/green screen interfaces, hatred of the mouse is odd. Sure, for a simple search interface the text interface was fine -- search field, arrow down through results. But for anything more complex you end up having to tab between a dozen or more fields. ERP interfaces were especially tedious in this regard, so the author's nostalgia for the "solved" problems of that age isn't completely warranted, and the mouse is at worst just as bad.


In 1983 there was no network delay to edit a document. Nowadays it's "in the cloud!"


The "things shifting around" may be very deliberate.

I've noticed that many advertising-supported pages (say trade publications such as 'FierceWireless') are a disaster on mobile with ads and other intrusive pop-ins causing page elements to move all around to the point where it isn't worth trying to click anything because it won't be there by the time your finger gets there -- but "oops!" you clicked on an ad so the cash register goes KA-CHING!


The maps feedback is spot on -- as a consumer I want a decision support tool that helps me run searches reliably and quickly. I think G wants to show me a maximum of three things at once so they can optimize ad clicks.

G maps had the option to be excel and instead chose to be the bottom shelf of the cereal aisle. It's fine to treat your users like consumers instead of power users, but that opens a hole in the market for a city-aware, mobile-friendly GIS tool.


I wanna know more about these super fast library computers. The public library I worked in in 1999 used computer terminals that were just doing full-screen telnet sessions to a remote Internet host. It was like connecting to a BBS from my home modem.

Random other info: One time the application on the remote host crashed and dropped me to a Unix shell. ls showed the directory had around 50000 files. The system's name was "DS Galaxy 2000".



Ah, the same Nickle's Worth who said (iirc), "Europeans call me by name. Americans call me by value."


Reading through the comments, it would seem that the article is a false alarm. The count of upvotes suggests the opposite.

My opinion is that the baseline for what we accept as good (and quick) service has been slowly lifted by the industry. At the same time, specific design practices were implemented to ease our frustration about slow service and to make it at least "feel" quick.


It's a common trend. A headline with a popular opinion says something a typical HN user is inclined to agree with without even reading it, so there will be a bunch of upvotes and some "me too" comments (disguised as personal anecdotes). But those who actually clicked the link will find a number of issues with its content and will say so here. And here we are: lots of upvotes, and comments suggesting it isn't really worth attention.


I feel like it's a behavior encouraged by Google's Material Design or derived projects: slide details in and out of the screen (Google Maps pin details, for example), hide input boundaries (no border on search text input fields), tons of sporadically placed spinners.

I really dislike that tendency in design and behavior and find it counterproductive.


Sorry, but in 1983 I was using CLOAD and CSAVE, and it took very long minutes to save & load anything (on cassette!).


If you read the article, he's talking about the UI experience for certain narrow applications, not I/O on '80s 8-bit home computers.


I was doing advanced Sprite manipulation on a TI-99/4A

And also saving and loading to audio cassette. :-(


I won't go back to 1983, but I have an old Mac with Snow Leopard, and TextEdit is way more responsive on the old iMac than on my new MacBook Pro. I know there was some change due to security, but I definitely feel this one. And I won't even talk about the Adobe suite...


Jesus Christ on a cracker people, are you all really this stupid? The difference between 1983 and 2017 is the magnitude of information and functionality. In 1983, a whole MB of ram was what super computers ran on. Your average desktop typically had around 2 KB of ram. My desktop today runs 16 GB, that's 8 million times more data.

First of all, when data sets start to get that big it becomes a monumental task to organize just the execution order of the compiled code. Second, the main thing the bitch from the article is complaining about not being able to do is something you just can't do from the phone UI. Google has a trip planner app that specifically helps locate interesting landmarks between the start and destination points, and even plan where to stop for gas or find a hotel for the night all from a single tab. For having spent so long writing his rant, I'm surprised he never tried googling a trip planner app. It would have been faster and prevented him from proving what an idiot he is.


Well, in 1983 I was waiting all day for my program to load from floppy. Then maybe I had to swap disks out and wait again. But yeah, if you had one of those awesome 10 Meg drives, things would load fast!

Then you would run out of memory if you were doing anything ambitious.


In December of 2018, the average news article took 24 seconds to load, consumed 3.58MB of your data plan, and Google scored it a Speed Index value of 11,721 (a Speed Index of 3,000 or lower is considered good).

It has gotten worse this year.


So what's with the punctuation (no upper/lower case, no commas, no periods, etc.)?


When I worked on an IBM mainframe, everything we did had a guaranteed response time built-in. We had to plan and engineer the required response time. Seems like we have forgotten to care about response time.


This especially drives me crazy about my phone. I'm not talking about using it as a web browser or handheld computer, just basic phone functionality like making a call. It's so damn slow! WHY?


Bloat


I wish everything was optimized for speed of interaction above all other metrics.

Make it so.


The human condition in a nutshell. We have to relearn the same lessons every 20 to 30 years and probably much more often in the software world. Progress is mostly sideways instead of forward.



The guy hates the commoditized web. Get in line. It's not about what you want, it's about what the service you're using is trying to sell you.


Ironically, this rant is posted in chunks on Twitter, making it hard to actually read it in a fluid manner. How about a blog post with paragraphs instead?


Has anyone ever also noticed how cars are always heavier than the bikes we used when we were children? And how much more air you need and how much longer it takes to fill a car tire with a hand pump?

Also, remember when none of us was a Portuguese author and we used paragraphs rationally and wrote things down without splitting them into 200-letter-long strings, because we used mediums made for rants and did not need a separate app to wrap it all into something vaguely resembling text?

Also has anyone noticed that time passes and things change?


The point of the rant was that in IT, and from a user perspective, they keep changing for the worse rather than the better.

Calling this sort of thing out, when the entire industry keeps professing they're "making the world a better place", is valuable IMHO. I happen to agree with a lot of the examples.


Your comment is ad hominem (dismissive) and does not contribute to the conversation. You can dislike the style/syntax of the article, but the point it's making isn't "things change", it's that "things change for the worse", complete with several examples which other commenters here either support or disagree with; helpful comments engage with the article's central points in good-faith conversation.


I am not confident you are using ad-hominem correctly. From the Merriam Webster online dictionary:

“1 : appealing to feelings or prejudices rather than intellect an ad hominem argument

2 : marked by or being an attack on an opponent's character rather than by an answer to the contentions made made an ad hominem personal attack on his rival”

(Here on HN, I think that #2 is the most common usage.)


I prefer to see it as continuing the argument in the same tone as the original - ie not particularly truthful and set in tone and content to support a skewed point of view and appeal to sentiment rather than anything else.


Hey, the purpose of a computer is to consume electricity while running IA-mandated corporate policy enforcement and virus scanning software. That we provide users with monitors and input devices so that they can use any remaining spare CPU cycles on whatever machines just happen to be located near their desks to help themselves do their "jobs" is just icing on the cake.

They should be grateful if opening a modestly-sized word document takes less than 10 seconds.


Isn't it obvious? It's because webapps get data from very far away, and back in the day the data was right there.


I like programming in C, but it seems that C in GUI dev is slowly dying.

GTK is truly the last bastion, and even it took a hit with GTK 3.0


Absolutely agree. I remember when hospital clinical data was captured by keyboard hotkeys with no mice. It was quick to learn and lightning fast to capture and search. Along came progress in the form of a Visual Basic frontend. Network traffic escalated, frozen screens became common, data was corrupted; the computer that was supposed to save time sucked up the energy of busy professionals. This was 30 years ago and it is still in use.


As Pauli said, “That’s not even wrong.”


I don’t know, I had a Commodore 64 back in the 80s and loading those games was painful


So you hate the mouse. The mouse is a benevolent dictator. It tries to help. You want to know who Stalin is? Fucking touchscreens. You're blind? Fuck you. You have Parkinson's? Off to the death camp you go. You want to type in your pocket while paying zero attention to the poison screen, half of which is taken up by an idiotic keyboard on which you will continuously mistype and be moronically autocorrected to the point where you WILL LONG FOR THE GOOD OLD DAYS OF TYPING WITH T9 - enjoy the bullet to the back of your head, you sexual deviant. You will drag, pinch, stretch, lick the glossy oh-so-fragile glass screen, you will fucking obey the tech giants that want your eyes glued to the screen so you can see more ads, until you lose it and start loving the uncaring big brother and all the pain he brings you. Mice are evil. Pshaw.


Do you need a hug?


Actually, honestly, yes.


This is two years old - a lot of stuff in this rant already sounds totally obsolete. Early web apps were crap and we're still learning how to do stuff that works on both phones and PCs but the sentiment expressed here seems off base now.


I got really annoyed when new motherboards started not including PS/2 ports for keyboards and mice.

Not surprisingly, my railgun accuracy in Quake 3 started to decline after that.

I blame the ports.


Explained before the advent of computers as Jevons’ Paradox


Before web tracking, webpages would load instantly.


> Suppose during my regular game development everyday life I've installed Photoshop recently and I want a look at a screenshot of the game - someone reported a problem or something.

> So I double-click on the thing ... one ... two ... three ... four ... five ... six ... it's about seven seconds before I can actually see the image. [...]

> So that's really slow and I'm going to talk about that for a bit, but there's an element of severe irony to this which is that as soon as I double click this thing within one second it draws an image and it's a pretty high resolution interesting image. It's just not the image that I care about.

> So obviously it's not hard to start a process and draw an image on the screen in much less than seven seconds. They just don't manage that.

> Now. I gave a speech a year ago that started this same way. That was 2016. It's now 2017 and of course a new version of Photoshop has come out and of course what will align directly with my point in the next few slides: they've made it worse. And the great way in which they've made it worse is: say there's some operation that you want to do maybe once in a while like create a new image... So I'm going to go to file, new ... Urghh. And that menu takes - it probably takes about a second to come up.

> And you might think "Oh, well, you know it was just all these assets were cold or something.. Maybe they come off the hard drive. It'll be faster next time." And it's like: "Well, let's test that out. Nope." Like every time. I'll use the keyboard shortcuts. File. New. Nyeaarghh. [...]

> Imagine if the people who programmed this were trying to make VR games. Everybody would be vomiting everywhere all the time.

> Well what machine am I running this on? It's actually a pretty fast machine, it's a Razer Blade laptop with a pretty high-end i7 in it, and you can talk about how fast the CPU is or the GPU is and some arbitrary measurement and I'm going to discuss CPU speeds here and I want to say in advance that none of what I'm about to say is meant to be precise or precise measurements. I'm making a general point.

> And the general point is that the CPU of this thing would have been approximately the fastest computer in the world when I was in college. Or the GPU would have been the fastest computer in the world in the year 2000 or thereabouts.

> Now you might say "That's a really long time ago. This is ancient Stone Age." Well. Photoshop was released in 1990 before either of those dates. And Photoshop 6, which I used heavily during my earlier days in game development is from the year 2000. And this is what it looks like. This is a screenshot of Photoshop 6. It's got all the same UI that Photoshop has today. It's got all these same control widgets and it's got our layers and channels and all that stuff. Today the UI is a different colour but apart from that it's essentially the same program.

> Now I don't doubt that it has many more features. But you have to ask how many more features are there and what level of slowdown does that justify?

- Jonathan Blow, Reboot Develop 2017: https://www.youtube.com/watch?v=De0Am_QcZiQ&t=155


What is this? Are these tweets? It's like reading a programming language where vast amounts of syntax is optional. This style of blogging is making the world worse.


Yeah. It is a tweet thread, which is such a bad idea that 3rd party apps have appeared to make reading them easier (threadreader for example). Unfortunately threadreader and its ilk can't make the content any better formatted or grammatically correct..er.


Twitter is a great (as in terrible) example. I only enable JavaScript for a few websites, so - when I click on a link to Twitter - I see the Twitter page being rendered, and then this is all erased and Twitter puts up a screen demanding that I click on a link to see the content. After this click and a further page load I get to see a few hundred characters (the actual message) and a ton of rubbish, all of which was downloaded and rendered the first time round.


This is a website that 'rolls up' tweets. Here's the original tweets: https://twitter.com/gravislizard/status/927593460642615296


We've changed the URL to that from https://threadreaderapp.com/thread/927593460642615296.html. The community is divided on which interface it prefers, so we usually break the tie by linking to the original source, which the site guidelines call for anyhow.

https://news.ycombinator.com/newsguidelines.html


oops you pressed a key, your results are erased

Every damn time!


Thread reader loads fast because it doesn't force you to watch the page "booting up" and then its equivalent of the Windows hourglass (AJAX spinners). Maybe we should try a browser without JavaScript...


>Mice are bad. Mice are absolutely terrible.

Bah humbug, I hate modern technology!

This luddite author should go into goat herding if they hate modern tech so much.


The computer mouse was invented back in the 60s; it's hardly modern technology.


Doesn't that make the author even more wrong?


Thank G-d. I have been saying this for years. Code is:

- Getting less stable
- Getting less speedy
- Getting less efficient

I blame the fact that old programmers and young programmers do not have the appropriate relationship.



