Almost everything on computers is perceptually slower than it was in 1983 (tttthreads.com)
498 points by dredmorbius on Nov 7, 2017 | 341 comments



This could be a copy-paste of rants I have written. I think marketing has ruined pretty much everything. Everything has a subscription now, and it's in the cloud.

My experience is that everything is slower than when I first started using computers as a child in the '90s, and it works worse. You don't own anything anymore, upgrades routinely take away features, and nothing interfaces with anything else.

There are a lot of reasons why building an app on a web framework makes sense, but the fact that I routinely lose access to applications I've paid for because the company has gone out of business or stopped supporting them drives me insane. I can still install Excel 97 if I want to, but I have 0 confidence that anything I've bought from Google will work a year from now.

I mean, there is perceptible lag in character entry that never used to happen. Sure, my computer used to freeze inexplicably, but Atom crashes often enough that I don't consider that a win, and text has a noticeable delay before it appears on the screen when using Atom. Text entry is something I can implement flawlessly on a microcontroller and the majority of applications I use on a daily basis get it wrong.


You do, in fact, have a choice; you always did; everyone does. There are decent, completely offline solutions (Libreoffice vs GDocs), and you can self-host a lot of things and run a private "cloud" on a Raspberry Pi or an old laptop (https://yunohost.org/#/ ; https://cloudron.io/ ; etc). The tradeoff is compatibility and not being trendy. However, that was always a problem, in both the 80s and the 90s. Mac vs PC? Amiga vs anything?

A lot has changed since then, and a lot has become more comfortable. It's also slower, more complicated, and much more annoying. But you do have the power to simply stop using whatever you dislike[1].

So: don't buy from Google. Build your own cloud. If you don't like an application or it's unstable (Atom), use something stable (vim?). It _will_ be uncomfortable, but it's there.

[1] I'm well aware of the cost of, for example, refusing to use Facebook, but you can still refuse to use it, whereas life without WeChat in Asia is becoming nearly impossible.

EDIT: Sorry for possible misunderstandings, I mixed up 3 different topics in a way-too-short answer. My point is: there are alternatives, there are choices. If you don't like the user interface of something, protest: use something else. There _are_ options, but they are not the most visible, not the most comfortable choices, and many of them will involve setting services up for yourself.


> There are decent, completely offline solutions (Libreoffice vs GDocs)

LibreOffice is slow. On a ~2010 business-class Dell laptop that still is in perfectly good physical condition, running Debian 9, with a handful of tabs in Firefox and a mosh running:

    adamantoise:~ geofft$ time libreoffice --terminate_after_init
    
    real    0m8.103s
    user    0m1.696s
    sys     0m0.520s
The rant here isn't about cloud vs. local, it's about performance and functionality. We had the cloud in the late 1980s; it was called dialing up to a mainframe, and it worked fine because software was designed to be fast in the presence of slow (by today's standards) connections. One of the examples of that -- library card catalogs running on amber-screen terminals -- was explicitly brought up in the rant.


> LibreOffice is slow. On a ~2010 business-class Dell laptop that still is in perfectly good physical condition, running Debian 9, with a handful of tabs in Firefox and a mosh running:

LibreOffice is monolithically designed, so it shouldn't be representative of the general performance of modern software.

The reason is that StarOffice was envisioned as a desktop environment; therefore, all the components of the suite are loaded on first startup.

  $ libreoffice --writer &
  $ time libreoffice --calc --terminate_after_init
  
  real	0m0.104s
  user	0m0.016s
  sys	0m0.008s
If anything should be compared, it should be Microsoft Office, which has always had a very fast startup (AFAIK, due to component preloading).

I think the article is very misguided; the problem is the expectation that everything should be faster, even in contexts that are inherently slower (networks, i.e. the internet), or with features that will inevitably slow down the experience (e.g. browser plugins).

Using the slowest software - LibreOffice, Atom, Firefox - as a reference is nothing but cherry-picking.

From this perspective, the sibling comment is correct - if you want faster alternatives, you have them.


re 1: Slow - compared to google docs? Are you serious?

EDIT: not startup time, who cares about startup time? When you load a few-hundred-page doc, or a massive Excel file.

re 2: I only wanted to give examples, but you're right, my wording is off.

I wanted to say there are fast and good interfaces: there's vim for the terminal, there's Rainloop for webmail, miniflux for RSS reading - these are all keyboard-oriented, all working decently and as fast as they can. In the case of web-connected ones, the speed of going back and forth is certainly an issue.

Local - as in not making connections on the network - in my opinion, will always have a speed advantage though.


> re 1: Slow - compared to google docs? Are you serious?

Slow compared to decent standards of what a word processor needs. For reference, WordPad perceptually opens instantly for me, and has felt like it opened instantly since Windows 95, I'm pretty sure. I am saying that both LibreOffice and Google Docs are examples of software that take way too long to do simple things because they're busy doing complicated things that I don't actually care for. (And arguably this isn't really their fault per se, they're both designed to compete with / follow the expectations set by Microsoft Office of what an office suite is, and those expectations are that all the complicated things are available).

> Local - as in not making connections on the network - in my opinion, will always have a speed advantage though.

Depends what you're doing with the network. I do just about all of my work in the terminal with a mosh connection to a cheap server a couple hundred miles away, and I never feel perceptual slowness from my machine not being local, unless I am literally in the subway or in an elevator. And I actually gain a lot of perceptual speed because I can close my laptop and switch to my phone instantly and see the exact screen where I was.

If you're streaming binaries from the network or trying to pump animations on a 2048-by-1536 true-color retina display, yes, remote is going to be slow. Half the tweetstorm's point is that you shouldn't be designing software to need that in the first place.


If your word processing requirements are very lax you should probably just be using Markdown + an editor.


And if they're not, LaTeX is great.


> And if they're not, LaTeX is great.

And you can use pandoc to get the best of both worlds.
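For anyone who hasn't tried it, a minimal sketch (the file name is just a placeholder, and the PDF route assumes a working LaTeX install, since pandoc hands that step to a LaTeX engine):

    $ pandoc notes.md -o notes.pdf     # Markdown in, PDF out via LaTeX
    $ pandoc notes.md -s -o notes.tex  # or keep the standalone LaTeX source

Handy when a collaborator insists on something that doesn't read Markdown.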


> When you load a few-hundred-page doc, or a massive Excel file.

LibreOffice on even midsized spreadsheets is quite slow. Slower than opening them in Google Docs, in fact, which borders on shameful.


> EDIT: not startup time, who cares about startup time? When you load a few-hundred-page doc, or a massive Excel file.

It was an example - the slowness (to me) is present for the entire interface. It was just easy to demonstrate. (Also in both LO and Google Docs I can type plain text into an existing document just fine.... I'm curious what you find slower in Google Docs than LO.)

My point is that this rant is not, as far as I can tell, a "local software good, cloud software bad" rant. There are reasons to believe that, e.g. debuggability and computing freedom, but UI speed is not one of them.


re 1: Slow - compared to google docs? Are you serious?

I can't speak for the poster above yours, but I would be serious. In my experience, on startup time, Google Docs way outperforms OpenOffice. It opens in single-digit seconds; OpenOffice takes 10s+.

I still prefer offline software, but the bloat can be awful. At least web tech is designed with web-browsing latency expectations. I find most desktop software these days has far larger startup and usage latency.


Not everyone is prepared to host, maintain, and secure a server cluster of OSS.


Having a choice is not something to take for granted. You need to work for it. You need to maintain the right to it. Those who are not prepared either learn it or live with whatever is provided by others.


This argument generalizes to everything, though.

Do you not like owning a car/tractor/printer running proprietary software that you're not permitted to modify? I guess you should have bought a 20 year old version and upgraded it yourself.

Do you want a choice of food that you trust won't make you sick? Then you probably should have worked for that lettuce from planting to harvest, not just complained about safety standards.

Total independence from the actions of others isn't impossible, but no one posting comments online is practicing it. And if "working for it" can include learning to use solutions other people have built, then it can also include writing this article on what's changed and what's missing.

If you live in a society, talking about your preferences isn't an alternative to working for them, it's part of that work.


I merely stated that you need to do, act, and work for choices and for freedom. Taking freedom of choice for granted is a spoilt belief.


This really does smack of elitism. Not everyone simply has the time to do that.


If you follow popular technology you get a slow stream of annoying technology that's predatory by nature. When I switched to the free software side, my experience sped up and I had finer control over everything. My files are mine and not a company's, non-free JavaScript is disabled so the web is faster, and my system is lightning fast since it's devoid of Windows telemetry. Encryption keys and offline password managers can be a hassle, but the freedom overshadows the difficulty for me. Or maybe it's Stallman Stockholm syndrome.


> I can still install Excel 97 if I want to

That's survivorship bias. There are plenty of things that you can no longer install, and for the things you can still install, it's mostly because someone maintains their API and whatnot (i.e., the company hasn't gone out of business or hasn't decided to stop maintaining it). Look at games: GOG has an R&D department to make sure you can run the games on their platform.

I'm pretty sure there's a ton of people here that can talk about some old computer or VM that they have to keep up because of old legacy stuff.

> I mean, there is perceptible lag in character entry that never used to happen. Sure, my computer used to freeze inexplicably, but Atom crashes often enough that I don't consider that a win, and text has a noticeable delay before it appears on the screen when using Atom. Text entry is something I can implement flawlessly on a microcontroller and the majority of applications I use on a daily basis get it wrong.

Who forced you to use Atom? It was your decision to use something still heavily in development, built on new technologies (not meant to be faster or more stable, just easier to develop with). It's like buying a bicycle and complaining it doesn't go as fast as the car you had before.


GOG only makes easy what the community and nerds have been doing for years. GOG made a business out of it because they pay people to "maintain" the "API". Their products are not under active maintenance nor development.

Now, if there WAS a company paying for all that maintenance it would be Microsoft, with their almost religious commitment to backwards compatibility.


> Their products are not under active maintenance nor development.

Isn't it the same? Someone who directly updates the app to a newer API, or someone who updates an API to make sure the older apps keep running - it's pretty much the same in the end. Someone has to do that work.

Someone pays for it, whether it's nerds with their time, or Microsoft with their backward compatibility support.

Even for Microsoft, at the end of the day, you pay that price through your next Windows license, and still, they may decide not to keep supporting some of it (16-bit programs are no longer supported on 64-bit Windows, for example).

It's not different from a subscription model, except that instead of paying a fortune to support most things (which may not include what you actually need), you pay only to keep support over the product you actually use and need.

I'm not saying subscription models or cloud applications are amazing, but being able to keep running your old apps is not the norm; it's mostly made possible because someone pays for it for you. At least in a subscription model, that relationship is pretty direct: you pay for the software to keep working, instead of hoping that Windows invests in that old tech to make sure it keeps running.


"but being able to keep running your old apps is not the norm"

Only for the cloud cohort. Not all of us drink the kool-aid! Try living in a computing environment where the technology-rug isn't pulled out from under you every three months.

Neither Windows nor Linux has had core changes to its API. The fundamentals of application programming haven't changed in decades. Why shouldn't we allow old apps to run?


I've got a good-sized collection of vintage hardware and software. Microsoft stops supporting something? Oh well, I've got my older versions still available. They don't get updates, but for the most part, I didn't expect them to when I bought them.

> you pay only to keep support over the product you actually use and need.

And you pay it over and over again. Until you don't need it (which might be because they removed some feature you were relying on, with no recourse).

> but being able to keep running your old apps is not the norm

Losing access to my old apps isn't the norm, unless I choose to let go of control over my own computer's software.


Exactly. When somebody uses Atom, he cannot complain about performance compared to compiled apps. And I'm saying this as a huge fan of Atom.


We've even managed to slow down things like Excel 97. I played with a 486 laptop recently, and was completely amazed at how fast things like Word felt (once they'd loaded off the ancient hard disk).

The most immediately notable part was the typing, which felt completely instantaneous. I haven't found out exactly why, but the switch from serial to USB keyboard interfaces (and the lag therein) as well as the various windowing pipeline "improvements" over the years seem to be in the mix.


Relevant post on keyboard latency here: https://danluu.com/keyboard-latency/

Takeaway: today’s keyboards are slow. Even gaming keyboards marketed as fast are slow. There’s more latency just in today’s keyboards than the entire end-to-end latency in an Apple II system from 1977.


Old PS/2 keyboards are faster than USB because they were interrupt-driven. That said, I think the cause underlying the perceived slowness of modern platforms is more a bloat-and-inefficiency problem. Things are totally out of control. The slight advantage of PS/2 over USB is just an anecdote.


There were recently some benchmarks of USB keyboards and they all add quite a lot of latency, even the so called gaming keyboards.


On PCs, keyboards were never "serial" as in "the serial port" (mice were), but they were always "serial" in the same way that the USB port is "serial". Perhaps it's better said that the switch from a special-purpose interface to USB is what introduced lag.


I know that on microcontrollers, if you want to reduce jitter (sometimes by up to an order of magnitude), you disable USB. I've noticed the jitter in my interrupt loops on ARMs goes way down when the micro doesn't have to service USB.

But then you have to use an STLink to upload programs.
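(For reference, with the open-source stlink tools that's roughly the following - the firmware name and flash address here are just the usual STM32 placeholders:

    $ st-flash write firmware.bin 0x8000000   # flash over SWD, no USB needed on the target

so the target's USB peripheral can stay disabled the whole time.)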


The word processor on my 8-bit computer (BBC Micro) is ready to type into within about 2 seconds of switching the power on.


I feel like this is the core of how "Open Software" is aiming to combat this. By moving software away from the consumer-driven model and putting users in closer touch with developers, we are supposedly developing tools together that better fit our needs. Though OSS arguably struggles to meet this theoretical ideal, especially with the current lack of support most OSS devs have.

My life improved considerably when I switched from Sublime to vim, Excel to VisiData, Word to LaTeX (though, I am unsure if LaTeX is completely OSS), etc. I invested the time in learning the language of the tool, but it pays off in droves.


There's no difference between OSS and proprietary software for the users, except price and the (most often theoretical) ability to modify the software and redistribute it.

Don't get me wrong: I use OSS wherever possible, contribute where I can, and love the freedom to tinker with the software's innards. But UI just isn't OSS' strong suit. Mainstream Linux desktops are no faster than MacOS, and quality as perceived by end users is far lower.

Some of that is due to missing hardware support, and some of it is a result of the OSS' "bazaar model" of development simply not being as good a fit for UIs as it is for CLI applications. But in any case, the forces at play in the market are the same for OSS and proprietary software.


Sorry to say, but the big name DEs are off chasing the UX tail of the big name commercial stuff, come hell or high performance costs...


> I am unsure if LaTeX is completely OSS

The TeX sources are widely available and (infamously) well documented. The LaTeX macros are merely widely available. I forget what licence they're under, though.

What makes you unsure they're open source?


> This could be a copy-paste of rants I have written. I think marketing has ruined pretty much everything. Everything has a subscription now, and it's in the cloud.

Well, shrink-wrapped software only needed to be purchased once to own it, and it was amenable to piracy. It also didn't let companies collect, expropriate, and sell off your private data. Hell, once upon a time, people even wanted to take EULA provisions to court as illegitimate, since you couldn't use a product you'd already paid for if you didn't "agree" to a post-purchase "contract".

Something had to be done to make profits sustainable!


I would blame Scrum and the culture of storyboards. Go and take a look at any storyboard: 99% of it is feature addition. Not speed improvements, not security. It is some "scrum master" who comes up and says that 10% of our customers have said feature X in our app would be awesome, and that based on this POOMA-ed projection we will get a 10% improvement in retention and a 100% improvement in some other random statistic - completely ignoring the fact that you are negatively impacting things like speed and responsiveness, which the other 90% of the user base gives a crap about and which was not captured in this random survey of yours.


> I think marketing has ruined pretty much everything.

Actually, in this case, it was technological advancement coupled with those companies' desire to make profits that 'ruined pretty much everything'. Marketing had nothing to do with it.

In fact, do us a favour and re-read your post, then tell us what parts were ruined by marketing?


The cloud is DRM. I think that and not scalability or anything else is the primary driver for most apps. It's a way of putting your code where users can't directly access it and charging a monthly fee.


Atom is pretty terrible, but you have options. VSCode is also based on the web stack and is very fast (though Sublime is even faster).


Ugh. Guys think bigger than just web stacks. Pluma/Gedit, notepad++, TextMate... stop sucking off JS


Kate, Sublime, Vim, Emacs, 4coder, UltraEdit, BBEdit, Bluefish...


After using various IDEs for brief periods of time, I was always coming back to Kate because there's no input lag there.


Sounds like you need a new editor, friend. Can I suggest Sublime Text?


I wonder how much of this is a result of UI designers deliberately putting tiny delays/animations into their UIs to make them more "usable"?

For example, it was a common piece of advice that if clicking a button opens up a dialog box, there should be a brief animation showing the box expanding from the direction of the button, so the user would associate the clicking of the button with the appearance of the box and understand where it came from. This does actually improve usability the first few times it happens, but over hundreds of iterations these tiny delays really slow down the overall experience of using the interface.


Just last week I discovered I could disable animations in Android through the hidden developer options. It's fabulous. The only initial quirk was no visual feedback when a picture is successfully taken, so it felt like I missed the button, but I've gotten used to it already.


!!!!!

That is amazing!!! I had no idea developer meant "interested user".

> Make sure developer options are enabled. If they're not, go to Settings > About phone, then tap on Build number several times to enable it

> Go to Settings > Developer options, and scroll down to Window animation scale, Transition animation scale, and Animator duration scale.

> Tap on each of the animation options and turn them off.

https://lifehacker.com/disable-animations-on-android-to-impr...
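If you have adb set up, the same three switches can also be flipped from a shell - these are the global settings those developer options write to (use 0.5 instead of 0 if you still want a hint of feedback):

    $ adb shell settings put global window_animation_scale 0
    $ adb shell settings put global transition_animation_scale 0
    $ adb shell settings put global animator_duration_scale 0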


There are also various visual feedback options, such as a CPU load indicator, or a debug renderer for screen touches which allows you to see if your touchscreen is misbehaving and how (happens to me when I plug my charger, for example).


This is the best thing I've learned all day.


On my first try I accidentally tapped "Android version" instead of "Build number" :)


on 7.x, that leads to an easter egg.


:( I am still on Android 6.

I thought the Google Nexus 5 phone would be getting updates often...but I guess not.


If you're comfortable with a custom ROM, LineageOS[1][2] is very stable and snappy on the Nexus 5.

[1] https://wiki.lineageos.org/devices/hammerhead

[2] https://download.lineageos.org/hammerhead


8.0 too


Wow, that makes a huge difference to usability. My phone feels like it's several times as fast now. Who knew that it felt slow and laggy on purpose!

It's really good!


> Just last week I discovered I could disable animations in Android through the hidden developer options. It's fabulous. The only initial quirk was no visual feedback when a picture is successfully taken, so it felt like I missed the button, but I've gotten used to it already.

Are you talking about this: https://www.cnet.com/how-to/speed-up-your-android-by-adjusti... ?

I set mine to "0.5", and everything's much snappier, but they still play so I still get some feedback.


You can also speed them up a bit. Makes the phone seem a lot snappier without breaking apps that seem to rely on animations for timing (I'm looking at you, messenger's "share" feature which doesn't have a send delay if you disable animations)


Another bonus is how much longer your battery lasts without animations. I'm pretty sure battery saver just turns off tactile response and disables animations for the most part (in addition to changing the GPS update speed if you are using location services).


Try setting just "Animator duration scale" to 0.5x while leaving the others off. This will fix the camera problem, as well as makes things like closing apps in the "recent" list less janky, and you still get the rest of the benefit.


It's a real pity that there isn't a .25x setting, or even a .125 setting; the .5x setting feels nice, but still a bit slow-ish. I understand that the intent here is to simulate slow phones (try using the phone with the 10x setting — it's horrid!), but it'd be nice to simulate a fast phone instead!


Even on the 0.5x setting things like the refresh pinwheel spin a little too fast, so 0.25x might be a bit much. It just controls too many different animations.


Does anyone know if you can do the same with iPhone?


Settings -> General -> Accessibility -> Reduce Motion


Oh nice, thanks! Unfortunately it seems that apps still animate.


Yeah and I still see a lag from opening apps.


If you have root you can write a MobileSubstrate hook that calls UIView setAnimationsEnabled:NO in a hooked constructor of UIView (then call super.)


Completely disabling animations would break many apps, because a lot is going on in completion blocks of animations. Setting animation durations to a very low value would work though (that’s also what you have to do for UI tests).


Only if you jailbreak


I've done this as well. It reminds me of when I upgraded PalmOS devices in Ye Olde Days and home would appear instantly when you tapped it.


There is a similar setting in Windows, in case anyone is interested.

HKEY_CURRENT_USER\Control Panel\Desktop\MenuShowDelay
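For example, to zero it out from a command prompt (the value is a string of milliseconds; you may need to sign out and back in for it to take effect):

    reg add "HKEY_CURRENT_USER\Control Panel\Desktop" /v MenuShowDelay /t REG_SZ /d 0 /f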


Wow! Didn't know this existed, thanks for the tip!


This is simply amazing! Thanks for sharing!


A lot. Just take a look at the desktop switching animation in OS X that's been there now for years. I bring that up anytime anyone tries to suggest Apple has decent design sense. No sane UI designer would put something like that in without the possibility of disabling it (like their other shitty animations). To be clear, any UI designer who hasn't understood the absolute fact that after a hundred or a thousand times, any animation becomes despised by a large fraction of one's user base shouldn't be working in this field. A lot of these unqualified people also apparently make a lot of videogames with un-skippable cut scenes. Same idea. Same stupidity and lack of any forethought whatsoever about the user's experience. It's almost like they're making this software with only the manufacturer's goals of looking 'shiny', 'sleek', or whatever dumb adjective of the day people who don't understand tech use to describe it. Oh, wait a second ...


A few years back, when I had to use OS X, I tried to use native workspaces. The workspace-switching animation was already too long for me the first time. In Windows there are (were?) at least all those registry edits one can make to do whatever one pleases. Also, Windows has the "Adjust for best performance" visual effects setting. But I couldn't find anything for OS X.

An example of the way I use workspaces:

- one workspace for terminal, browser with documentation and some text editor

- second workspace for a full-blown IDE, another browser window with documentation and the developed application

It's funny that there are easy solutions to the cut scene problem. If the user skips the cut scene, there could be a little message saying that pressing some key will replay it. And/or put a list of all the cut scenes up to that point somewhere in the menu. But most of the time cut scenes are there to bring the cinematic experience and are an easy and lazy thing to do...


You can cut down on those pointless MacOS animations if you turn on "Reduce Motion" in the accessibility settings.


That's a huge frustration with current web pages for me, and with iOS. Animations that stall interaction rather than enable it serve exactly the opposite function. And if you're ever in a situation where interactions get chained from user requests, like the bug in the iOS calculator, you've gone way off the path with your UI.


Settings -> General -> Accessibility -> Reduce Motion on iOS is a really nice way to improve the subjective interface speed.


That had zero effect on the interaction-halting animations for me.

I'm not opposed to animation, I'm opposed to animation that makes the phone unresponsive and not listen to input, or delays the display of information for more than 200-300ms. Unfortunately the "Reduce Motion" only seems to be for some very minor things.


>I wonder how much of this is a result of UI designers deliberately putting tiny delays/animations into their UIs to make them more "usable"?

I disable most if not all of the animations on my Android phone and Windows desktops for that very reason. The animations make for great tech demos but also make everything feel so much slower when trying to get things done.


Who knows how to completely disable all animations in macOS?

I've tried many from this list https://apple.stackexchange.com/questions/14001/how-to-turn-...

but it seems that most of the commands don't actually have any effect


Maybe those animations could be sped up with continued use. Then new users get the visual connection between their actions and the response, while experienced users don't have to wait as long for the animation to play.


No other UI infuriates me more than the McDonalds ordering UI at their locations. Everything is all over the place and the animations make it 100x slower.


You inspired me to look up how to turn off animations in MacOS; this has some answers: https://apple.stackexchange.com/questions/14001/how-to-turn-... although I can't vouch for their accuracy.


I think animations, as most program them, slow down the UI exactly as you say.

But when their timing is tight, and the whole thing happens in under, say, half a second, you get the benefits of that context association and it still feels really cool and high-tech. People just set their animations too slow. (Seriously, nothing should ever take longer than 500ms.)


I don't have a link, but in referencing interactivity tests for some remote software development I did some years back, the maximum time between pressing a button and perceiving its effect to be considered "instant" is around 200ms. You don't want to go slower than that if you want it to "feel" fast & responsive.


You're talking about Nielsen thresholds, and the limit for "instantaneous" is actually ~100ms.[0] Anything slower is considered to have taken some time.

Now, if that wasn't depressing enough - the Nielsen research is from the early 90s. In the intervening 25 years, users have certainly not become any less impatient.

0: https://www.nngroup.com/articles/response-times-3-important-...


Even better. Does that give the animation enough time to show though? At that speed I'd guess you might as well make something instant


At 60 frames per second? Sure, that's enough for 12 frames.


People just set their animations too slow. (Seriously, nothing should ever take longer than 500ms.)

They could start with 500ms and subtract 100ms from each UI activity after each time it is used until the delay is 0ms.

The user would learn the interactions on the screen first and then not be bothered by the animations later. As a bonus, the user might actually think the program "sped up after a 'break-in' period".


Now that's an awesome idea! It'd be fine to actually just store that setting in memory, and reset with every restart: I wouldn't mind a half-second delay on my first window zoom after restart if I knew that it'd drop down to zero four zooms later.


I think a lot of this is true!


The front-end developer at one of the companies I was working for nearly fainted when I told him I like Craigslist's web interface. The only criticism he could muster was "But it looks like the 90s". And then he proceeded to tell me off "because you don't want to learn how to use a new interface". Of course I don't want to; it is a skill with zero transferability. Learning how to be productive with an interface that only works with 'a' given app, for 'a' duration of six months until some bozo changes the whole UI because some focus group decided that "it would look so pretty if ..." (and that is the best case scenario; mostly it is because some UI/UX guy was inspired while sacrificing chickens to a graven image of Steve Jobs), is bunk. The two most hated things to come out of Redmond in the last 20 years were the '07 rejig of the Office UI (so you had muscle-memoried a bunch of keystrokes that made you super productive - well, F U buddy, we are going with this weird ribbon thingie for no discernible reason; I fixed that by recording a bunch of macros that did what Office 03 did) and the Win 8 UI. I mean, which diseased mind thought that would be a good idea? And if that was not retarded enough, they forced it on server users.


Craigslist has an amazing interface. It's so easy to use. It's ugly, but who cares! I don't care if it's pretty. It does exactly what I want. There was a great article on this a few years ago that I think is perfect: https://m.signalvnoise.com/why-the-drudge-report-is-one-of-t...


And honestly, to make it not be ugly would really only require some tasteful CSS, not a redesign or rewrite in some JavaScript framework.


Craigslist UI/UX is great. The problem is that simple/super-clean interfaces don't require designers. So pretty much every designer will be against them out of principle.

Of course, every site can't be craigslist.


The Office ribbon was/is far superior to the original, flat UI for novice and intermediate users. By flat, I don't mean aesthetics but rather the layout. Every menu option and button had the same visual importance of every other button/option which makes it difficult for novice and intermediate users to find the right one. It was fine for power users because they will put in the effort to learn the interface no matter how crappy it actually is. And power users usually get upset when the interface changes because they now have to relearn it. But that doesn't mean it was a bad change in absolute terms, it is a lot better for non-power users which are the vast majority of users.


Having to build pixel perfect pretty UIs is one of the main reasons I've been holding back on getting a new job.


In the name of usability, we have practically neutered these computing machines. I think optimizing for user experience is partly to blame for this. You don't want users to have a confusing experience the first time they use the application, and you want the application to look good and be inviting. But that may be at cross purposes with making applications that allow open-ended exploration. Exploration implies that the user is driving the process, which suggests that the UI is not actively trying to constrain them to the usage patterns it is designed for.

I think pretty much everything in computing is like this. By setting our sights on enabling users rather than training operators, we have not only limited what they are able to do, we have actually turned ourselves into users, and we're basically unable to program without the help of frameworks, libraries and Stack Overflow.


> In the name of usability, we have practically neutered these computing machines.

Except usability is getting worse too. There is now less and less content on the average screen and more and more whitespace and "design chrome". Buttons no longer look like buttons, hamburger menus are legion, and many new features are found behind undiscoverable gestures that must be mastered.


I have a theory that by optimizing the UI/UX for the first-time user, things have gotten worse for the power user. In the past, when most of our apps were desktop apps, there was a hidden assumption that the user would be using the app for years and therefore had time to build up muscle memory. Currently, when the bulk of the apps on the App Store are never downloaded, and those that are are barely opened twice, it makes sense to optimize to make the whole thing easy for the first-time user. That by definition makes it harder for the power user. Notice that the really responsive websites are ones that cater to a group of people who by definition are more likely to be power users: HN, GitHub, etc.


There is a big dose of fashion and fad-following in usability. The things you mention are manifestations of poor decisions made in the name of usability, but actually in the interest of being pretty.

I think even making the right decisions in terms of usability, tethered to a few short-sighted user stories, will result in this situation. Usability should be in service of a higher purpose, which should be enabling the user to understand how to use something powerful and flexible that does not constrain them only to what is envisioned by the developer or the UI architect. That's what has happened here with Maps. You can perform User Story #1 or #2 with zero friction. But there is no way to do what @gravislizard wants to do, which Google didn't anticipate or specifically design for.


Not only is there a lot of fashion and fad-following, but there is a distinct lack of measurement and metrics. You want this animation or to change the color on this control? Fine--show me the empirical data that demonstrates that it improves usability or any of our KPIs. None of this, "I'm an artist and I subjectively feel this will be good for users." That's total BS. Show me the measurement that justifies your design request, or let's A/B test it.


> I think pretty much everything in computing is like this.

But then there's Dwarf Fortress, Blender, Vim, Emacs, ...

Accounting for Sturgeon's Law, things aren't quite so bleak.


Did we forget how Emacs used to mean "Eight Megabytes And Constantly Swapping?"

I'm saying that as an Emacs user for almost 20 years now. Even today there are things that can bring Emacs crawling to its knees. The moment you start adding auto-complete to a big project, or if you have your files hosted on some NFS server, you're SOL.


The autocomplete is somewhat easy to fix in principle. Just offload the autocompletes to a daemon, for example.

No, I haven't bothered with it. :( I did set up GLOBAL, which is quite nice. Though many languages nowadays would benefit greatly from a presentation compiler. Which seems silly, at the extreme. It isn't like projects have gotten more complicated in scope. They have ballooned in implementation complexity, though. :(


But vim/emacs are ancient, and even Blender is from the nineties. I would say the situation is pretty bleak if we need to rely forever on these ancient beasts because we can't make good software anymore.


Ancient, yes. Still actively developed and maintained. Maybe if contributing to existing projects were more of a thing than trying to reinvent them, we'd have better software.


Yes, exceptions are exceptional.


I simply cannot agree with the take that making things easier to use "neuters" them. Enabling people to actually use these things makes them more powerful, not less.

I think far too often, people get caught up with the false machismo of, "This is hard to use, yet I can use it, so I'm a badass and you're not."


I agree with you about false machismo. I'm trying not to say that hard-to-use tools are essentially good. I am saying that some essentially good tools are not very new-user-friendly, and that worrying excessively about the new-user experience has been detrimental overall.

One reason for that is the "new user experience" for a more powerful system will imply learning some things that are not directly relevant to whatever problem you are trying to solve. Excel worksheets, layers in Illustrator, "command mode" in vi and Blender, etc., are more like gas/brake pedals than they are like "do you want directions to this destination?" I don't see how you can avoid thrusting some learning on new users of a powerful system. Powerful systems are going to have powerful metaphors that are probably not totally intuitive at first.

You can do more stuff with Illustrator than Omnigraffle. It's a more powerful tool. That brings with it a longer or steeper learning curve. But, if you just want to make a few charts, Omnigraffle is easier to use.

I do feel that, over time, we have become very gun-shy about making complex applications that have richer models like this though, preferring to make applications with simpler UX that aim directly at smaller problems. And I think this is partly because UX is more straightforward for smaller problems.


But that depends on easier for whom; the issue is that if a system is very easy for beginners it will be crap for an expert, and vice versa. This is the reason a lot of applications used to have a beginner/expert mode switch. When I use a mobile app, it is usually beginner-only, so every time I have to go through the same steps and explanation wizards. I don't want that; most of these things, for experts, would be one form while for beginners they are 20+ pages. Literally.

Error messages are very generic and are passed immediately to support (more work, more time for an expert who probably would understand the real error).

Nothing wrong with that but it is not easier to use for me once I am an expert; it is harder and more annoying.

I would like expert mode in many apps.


Last year, in a fit of nostalgia, I finally bought an Apple IIe at a garage sale - a system I had wanted 30 years ago. Somehow in moving it back to my house, I dislodged the floppy controller card, leaving the system to boot from ROM. I turned it on, it beeped, and there was the prompt. I perceived it as instantaneous. I know that from that point I could have written a BASIC program and saved it to a cassette tape. The anecdote falls apart once you actually start trying to boot ProDOS from disk and have to rely on something besides solid state.

This experience got me thinking. What has changed in the meantime? Computers have become communication devices and communications devices have become computers (though even this ancient Apple came to me with a modem installed). Developer time has outstripped hardware costs, and somehow an hour of developer time is worth the same as wasting a million hours of the users' time (1 million users x 1 hour or whatever formula you wish).

I think we all recognize this problem, and it's already too expensive to fix. The hardware and software space are too federated, too balkanized, too complicated to ever integrate a system into a cohesive whole in the way the Apple II was.


> Developer time has outstripped hardware costs, and somehow an hour of developer time is worth the same as wasting a million hours of the users' time

Key point here. It's hard to get the biz guys and product managers to agree with spending developer hours on performance improvements, because it's hard to measure the effect it has on business metrics. Does making your application take 750ms to launch rather than 8 seconds really bring in more sales? Who knows? Nobody tries to measure it either, so we'll never fix the problem.

Feature cram, on the other hand, is easier to justify, which is why every application eventually ends up with a plug-in interface, theme-able, skin-able, able to read E-mail, and able to interact with Facebook and Twitter.


And I'm not even talking specifically about application performance, though what you're saying is still absolutely spot on. Think of all the time wasted because of poor or incomplete documentation. And I sympathize with anyone writing documentation. We're definitely still in the era of move fast and break things.

These costs which accrue to everyone because no one can afford to take it on themselves, I would call externalities in the same sense we do when talking about pollution.


The core point of the entire thread can be summed up by one of the tweets; "I believe well designed keyboard interfaces and well designed GUI interfaces have exactly the same learning curve."

I agree with this. A properly designed keyboard interface is faster than any mouse. On the other hand, a properly designed mouse interface can be fast too. Both need to be applied where they make sense.

I can also resonate with the GMaps example; people find it rather ridiculous that I prefer to use pen and map to plan routes but GMaps simply does not cover the complex demands of holiday routes with family.


IMO Google Maps became completely awful when they made that new version of it, and they've been doing everything in their power to continue making it worse. Sure the scrolling is a lot smoother, but it's missing just a ton of features that the old version had. Plus the problems mentioned in the tweet storm, everything disappearing as soon as you change one item. PLUS that stupid side-bar that takes up half the screen now.

Edit: Here's another good one I just ran into -- I can't sign out of just one Gmail account. I have to sign out of all my Gmail accounts, and sign back into the ones I didn't intend to sign out of.


I suspect most of gmaps' problems are there because it's an advertising company. They don't want you to plan a route, they want you to click on local search ads: https://support.google.com/adwords/answer/3246303?hl=en

Note that if they clear your pins so you have to re-search for something, that's potentially another CPC payment to them...

You can sign into multiple Gmail accounts? How on earth does that work, does it involve the menacing prospect of "linking" them?


I'm not really sure how it works under the hood, not being terribly familiar with how these login systems work, but you can basically (in your browser) click "add an account" and it will open a new tab and you can sign into the account. So right now I have 3 tabs open for gmail with 3 different accounts.


> Edit: Here's another good one I just ran into -- I can't sign out of just one Gmail account. I have to sign out of all my Gmail accounts, and sign back into the ones I didn't intend to sign out of.

This seems like it is a shitty behavior inherent with OAuth. I have half a dozen Microsoft accounts for various things, and you just can't sign out of one and sign into another, unlinked account - things go all sideways and the auth providers that are trying to read your cookies get really confused and go into 401 redirect loops. It's better to burn it down and open a new set of incognito tabs instead.


This is a really interesting idea. It suggests that there is an entire missing world of terminal-like web apps.

I suppose that search is effectively a text based app, so that's at least one mainstream terminal-like app.


A good GUI can be navigated with the keyboard, and a good CLI can be navigated with aid from the mouse, so the dichotomy just doesn't make sense.


Given the speed of modern computers, I'm fascinated by the fact that so many processes complete in human-scale time (seconds-minutes), rather than milliseconds or years+, depending on problem complexity. If times for compute tasks have some power law distribution (a big if...), I'd expect a very small part of that distribution to overlap with the 'seconds' range.

Then again, computers do so many things in the millisecond range and faster that maybe what we observe IS only a small fraction of the total.


It's a relief to do a bit of Go from time to time, render a complete html template in 40 ms (with almost 2 MB of JSON data) instead of spending forever figuring out the JS framework du jour to do the same thing, only with it feeling a lot more involved, heavyweight, and slow - even if said framework claims to be faster than all the other JS frameworks.

I think I want to do back-end or terminal-based interfaces again. Native interfaces. Mmmmm


> Native interfaces. Mmmmm

If you decide to work on native GUIs, please don't create your own toolkit on the assumption that you can do better than the OS. You'll almost certainly break at least one thing: accessibility for blind people.


This has nothing to do with Go and a lot to do with choosing the right tool for the job. Maybe Go is the best tool for you in this job, but a given js framework can serve someone else just as well if it is in their expertise area. Don't confuse your expertise with language/ecosystem pros and cons.


This has nothing to do with choosing the right tool for the job, but with performance, which is what this discussion is about. Practically all JS frameworks, having multiple levels of (near-inscrutable) abstraction, naturally degrade performance and certainly contribute to the trend the submitted Twitter thread is talking about.


The right tool for the job != whatever tool is in your expertise area.


I've had a similar experience on a current project.

Even after disabling any and all caching and minification (read: basically parsing all templates per request, a bit like PHP), the entire render process takes less than 20ms, to the point that I've simply left out caching during some production runs.

I also keep the interface free of any expensive rendering precisely because I don't really need all of the niceties of JS frameworks.

Sure, the page could be fancier, but vanilla HTML5/CSS3/JS can do the job too. (And document.querySelector has replaced jQuery for me.)


I'd love to see more terminal interfaces. I've been looking for a modern TUI library for a while.


It's pretty much an availability heuristic / complexity (or cost) constraint.

If the operation cannot be done quickly -- in a few seconds at most for reasonably interactive work, in a few minutes to, perhaps, a few days for batch -- then it's very, very very rarely done.

If it occurs in less than about 1/10 second, there's not much incentive to try to speed it up, and inefficiencies that bump process time up to even a few seconds tend to creep in. That's despite the fact that there are psychologically measurable and outcomes-measurable differences between interfaces with even 1/10s vs. 1/1000s response delays.

The Jevons Paradox plays into this, balancing against Gresham's. Stuff that's cheap (clock time) happens; stuff that's non-discernible (under perceptible / outcome limits) tends not to get constrained.


You hit it on the head. The tasks we pick for computing are the ones that can be completed in seconds.

There is plenty of interesting computation we could and would do if what now requires a year would only require a second.


I think the opposite direction is actually more telling. Developers tolerate bloat and BS up to the level of seconds.

So there is plenty of computation being done that only requires milli- or microseconds, but due to development practices it now stretches out into the annoying time frame.

(Prime example: the growth of web pages to match increases in processing, RAM, and internet speeds, means that the web is actually slower to use these days.)


When the web was young and most people were on dialup, having access to a T1 meant that everything happened nearly instantaneously. All the sites were optimized for dialup levels of complexity. Now that everyone has broadband, there is no equivalent. The slowness is either because the page is so heavyweight in terms of memory or processing power (note that more money does not buy much single-threaded performance), or because it's making so many network requests that everyone faces similar latency.

Even the fastest computer and the highest-end Internet access can't get you such a huge advantage in performance that it can make the web feel snappy again.


> Even the fastest computer and the highest-end Internet access can't get you such a huge advantage in performance that it can make the web feel snappy again.

Until you install an adblocker and disable javascript. An enormous amount of modern-web overhead is adtech and user tracking cancer.

If you don't believe me, install privacy badger/ghostery and ublock origin, visit any popular news/entertainment website, and look at the number of blocked elements/requests as you load a few articles. All of those take time and suck up resources.


All programs grow to outstrip available computing resources?


I've experienced this. My first computer managed about 100 flop/s, or 3 Gflop/year. A modern PC can do that year's work in a millisecond! Yet when my current computer struggles to display a web page there isn't much interesting computation going on.


We're usually not computationally bound. Often it's disk or network I/O, and that basically means tasks are content-bound. We tune for maximum content within acceptable human-scale time. The moment we take less than a second to load something, it means we can find more things to load.


I think it's mostly a cognitive bias, we don't even notice processes that take milliseconds, and we either optimise or simplify (approximate) problems that would have taken years to make solutions more practical.


> and we either optimise or simplify (approximate) problems that would have taken years to make solutions more practical.

Or we just don't do them.


Software engineers optimize for human scale time.


I kinda feel that UI developers should work with crappy, old PCs. Then they could optimize to the point where things are barely usable to them, but then they'd be snappy for everyone else.


Luckily, Google Chrome recently added device emulation for slower devices and network conditions. Hopefully tooling like that will make it into many developers' workflows.


No? Getting on a BBS in 1983 at 300 baud meant text files "downloaded" at about the same speed you could read them. Word processors were crazy painful to use, and incredibly slow. A text editor was just a line-by-line replacement. My TRS-80 would slow to a crawl even making the most basic (ha!) of programs.

Over the years, I've often thought about why computers never seem to get faster - mostly it is because people have a tolerance for response speeds, and that is unchanging. So software sits somewhere inside that tolerance range, because why be ultra fast when most people don't really care that much?


Yeah. For me in 1983 loading something meant 5 mins waiting for the cassette to play through. Switching to a different program meant repeating the process again.


One of the computers I had was cassette-tape based, and the tape "reader" broke. I was rather shocked to find I could use my parents' cassette player and connect its output to my computer, and it would (generally) read tapes just fine. As a bonus, I figured out how to keep the tape player's speakers on at the same time so I could listen to my programs being read in :D


The author went back a tad far, but not too much. The Amiga came out in 1985 and the Apple IIGS in 1986. The user experience of those machines would be more comparable to a modern PC with a thousand times the clock speed and memory.


I had an Amiga and the user experience was not comparable to a modern PC, except if you squint really hard. Like really hard.


Given that the whole article is about how some people do care, your dismissive comment is as pointless as it is incorrect.


Sure, some small number of people care, definitely. And there are options for them. And everyone would like it if things were faster, but they just don't care enough to put their money towards a fast, but less feature-filled, option.

Plenty of UX research confirms what I have stated. Why does the average webpage take N seconds to load, with a standard deviation of D? Because that's the range most people are OK with; going faster has rapidly diminishing returns, while going slower lets you get away with a lot of inefficiencies that reduce cost.


In reality, so many people have a problem with it that they are actively interfering with the code of the webpages to the point of endangering the business model of much of the web.


It's a tragedy of the commons. It's very rare for one piece of software to be solely responsible for slowness, but each individual piece contributes just ever so slightly.

I happen to know quite a bit about front-end technologies, so I'll speak to those. Bootstrap is around 100 kilobytes. One hundred thousand bytes - after gzip and minification. By itself, is it slow? Not very. Bootstrap fans will point you to endless benchmarks and tests that show its impact on performance. Same with React: one hundred and forty five thousand bytes, after gzip and minification [0]. According to React fans, React is blazing fast!

But development happens, so you throw in a few hip libraries and frameworks and suddenly Slack takes one billion bytes of RAM. Whoops. "But it's not React's fault!" Sure it's not. If React is the only bloated thing on your site, it will work great. But chances are that if you've got React, you've also got Bootstrap, and some visualization things, and some code from Stackoverflow that iterates over your DOM in O(n!^n!). And all of that is how things get slow.

Now, I might be a bit biased, because I spent years of my life working on a CSS framework 100x smaller than Bootstrap, but I think that if everyone spent time optimizing the size of things to be 100x faster we could get back to snappy UIs. Yes, it would be hard, and yes, it would require compromises, but the result just _feels_ good. There's something about a webpage loading in 250ms, or a button reacting as soon as you tap it, that just feels nice. Maybe it means not using React; maybe it means you don't use as many nice-to-have frameworks, but I think it's an achievable goal.

[0] React fans will point out that if you don't need to interact with the DOM, this gets smaller. Yes, this is true, but obviously for most webpages you kind of need DOM interactions.


Sure but tbf, we didn’t even have gpus in 1983, we had text and ascii, and a terminal. And a math coprocessor if you were lucky. And roads, and the aqueduct. And sewers. And a VT100

But other than that, what did IBM ever do for us?


I've written many sites with React, Bootstrap, and a dozen or three other third party dependencies, but I've never run into a case of a button not responding when clicked or things taking a noticeably long time to load (assuming that the server processing or sheer size of the payload over the network isn't the cause).

I think a lot of it boils down to "don't do work you don't have to." But there's nothing about a framework that causes that -- or saves you from it.


Great work on the sites you've made! Unfortunately, most of the time when I see a simple webpage not responding or lagging heavily, it's made with React/Bootstrap.

It's particularly tragic when the page is only text and images.


You probably weren't updating 50 different ad networks on every mouse event.


> some code from Stackoverflow that iterates over your DOM in O(n!^n!)

OK, I'm curious. What's a legitimate problem where the most naive solution is O(n!^n!)?


There was one I helped suggest fixes for on StackOverflow a while back. The program searched DOM nodes, found its target, split the node, inserted the two new nodes, and continued searching in the new node it had just inserted.

The result of this was truly horrific performance. If one node contained thousands of matches, the repeated node removal, copying, and inserting took more than 30 seconds. 30 seconds on a 4 GHz machine.
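
Something like the following is a hypothetical reconstruction of that pattern versus a batched fix (names invented, not the actual StackOverflow code):

    // Hypothetical reconstruction: wrap every match by splitting the live text
    // node and re-inserting pieces one at a time.
    function highlightSlow(node: Text, needle: string): void {
      let idx = node.data.indexOf(needle);
      while (idx !== -1) {
        const rest = node.splitText(idx);            // node now ends before the match
        const tail = rest.splitText(needle.length);  // 'rest' is now exactly the match
        const mark = document.createElement("mark");
        mark.textContent = needle;
        rest.replaceWith(mark);                      // one DOM mutation per match
        node = tail;                                 // keep scanning the freshly inserted tail
        idx = node.data.indexOf(needle);
      }
    }

    // Doing the string work off-DOM and swapping the result in once avoids the
    // repeated splitting/copying/inserting described above.
    function highlightFast(node: Text, needle: string): void {
      const frag = document.createDocumentFragment();
      const parts = node.data.split(needle);
      parts.forEach((part, i) => {
        frag.appendChild(document.createTextNode(part));
        if (i < parts.length - 1) {
          const mark = document.createElement("mark");
          mark.textContent = needle;
          frag.appendChild(mark);
        }
      });
      node.replaceWith(frag);                        // one DOM mutation total
    }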


That was a joke, honestly I doubt anything would be that slow. There are a few algorithms that are O(n!) though.


> I spent years of my life working on a CSS framework 100x smaller than Bootstrap

Interesting! Can you share that?


http://mincss.com (and yeah, homepage needs some work, my friends say it sounds like too much of a corporate product and not a FOSS CSS framework)


Thanks! I will investigate it in the future where I would normally have considered Bootstrap.


"but I think that if everyone spent time optimizing the size of things to be 100x faster we could get back to snappy UIs."

Quite possibly. Who's going to pay for it, though? And who's going to make sure the frameworks are still easy to use and not buggy?


This! I once took over programming a point of sale system that our (internal to the company) users were complaining about. Turns out there was no concept of tab sequence; you had to use the mouse for everything. I asked the previous programmer to show me how to use the form he created, and he started typing a phone number using the top row of the keyboard instead of the numpad. But hey, it had lots of javascript!


To be fair, I only started using the numpad after I had a temp job in data entry. I hate laptops that don't have them now, but seeing as a lot of laptops leave them off, most people must not think numpads are that important.

Not paying attention to tab order in an enterprise app is a cardinal sin though.


Not paying attention to tab order is also bad for accessibility, e.g. for blind users.


When we bought MacBooks I bought a bunch of USB numeric keypads to plug in to their desktop monitor. Made a lot of folks doing data entry very happy.


> and he started typing a phone number using the top row of the keyboard instead of the numpad

As I'd expect any good programmer to...? As a 100+ wpm touch typist (including symbols + numbers) and former finance professional, I use the top row of the keyboard for numbers.

How is having 8 fingers available that don't have to move much going to be slower than using three that have to move up and down?

I get the point that the programmer wasn't behaving like the users; that's a good critique, but I'm picking up the vibe from this comment that you think the numpad is inherently better for that sort of thing? No way...

Edit: huh, seeing the chain of comments here, I guess I'm in the minority? I don't see how someone can consider them a touch typist (type quickly without looking at the keyboard) and not be able to reach and use numbers and symbols, but maybe a lot of folks here are like that? How do you program with numbers, and ampersands, and parentheses, and asterisks and the equal sign and underscores and all that, having to look down at the keyboard???

It just seems obvious to me that a skilled typist would be much faster using the top numbers than the numpad.


For those of us who have been fortunate enough to avoid a job that involved an adding machine or cash register, the number pad is awkward AF to use. Conversely, the top keyboard row is in range for touch typing.


I've avoided that job, but back when I still had a keyboard with a numpad I'd use that if I had to fill in numbers repeatedly. I can't touch type numbers accurately, it's just a bit out of reach.


People that do any kind of data entry or work in finance learn to use it too. The number pad is not just for cashiers.


Yeah, which is why I mentioned adding machines.


I dunno, I've used the number pad by default since I was a little cutter begging to sneak into my mom's classroom and play on the Apple II that was in there. I might be a weird one; I can't switch myself over to using WASD controls in video games either; I've got to use the arrow keys, and then hope that I can bind the other important commands to keys in that vicinity.


I've always found it easier to type long strings of numbers on the number pad. I don't know why, it's not as though I ever deliberately trained myself to use it, but for me the number pad is much faster than using the top row.


Your fingers don't have as far to travel, and there's no seeking involved (hand doesn't move)


Less ambiguity (three narrow columns, hardly any horizontal movement), and the most frequently entered keys (1, 2, 0) are not at the extreme, hard-to-reach ends of the movement range.


Everything is much closer together, you don't have to move your hand only your fingers. It is clearly the better input device for pure digit sequences.


Not for left-handed people. My primary hand is better/closer to the top row. My non-primary hand struggles with the keypad, especially if I don't use it very often - which I don't.

Worse, if you're on a laptop a lot then this habit is kicked out the door. Enabling a numpad on the keyboard is less efficient than just using the top row.


> Not for left handed people.

I'm left handed; the numpad is freakin' awesome for number entry. Back when I still played games like Descent and WoW, it was also very efficient to setup the mouse for my left hand and leave my right hand on the number pad. Aim and fire with mouse, maneuver with numpad (in Descent), or action bar items on numpad (in WoW).


I think that using phones trains the muscle memory for the number pad (although upside down). I have found the same thing though, where I instinctively can use the num pad much faster than I should be able to, with relation to the amount of time I use it.


I don't get your point about typing phone numbers. When I was being taught to touch-type in first and second grade, my teachers told me that I should type on the top row, not the numeric keypad. Not sure why. Maybe because it takes less time to move to the top row and back? Anyway, to me, typing numbers on the top row is not an obvious problem.


I mentioned that because I thought it showed how he'd never really watched one of the users in their job- once you get used to the numpad it's a lot faster. You can type in a long number very quickly without taking your eyes off the screen.


The numpad is a considerably faster input method, and it can be performed one handed. For a point of sale system, this is critical: the clerk can juggle an item while typing the phone number (or price, or whatever)


For a POS system, input is probably all numbers -- look up customer by phone, make choices from a menu, enter quantities. Letters are for new customers and special instructions.

I worked at a print shop that switched from DOS to Windows for its POS software, and it was an order of magnitude slower. At least we could play solitaire though.


The touch-typing course was probably developed 50 years ago for typewriters, which don't have numeric keypads.


Because most accountant and office types will use the numeric keypad and go blazing fast. That's probably one reason for the complaints. God help any PC tech who screws up their settings and doesn't enable num-lock at boot.


> Maybe because it takes less time to move to the top row and back?

Presumably this is it. I find typing on numpad is much faster and simpler, but 1) laptops typically don't have one and 2) on normal keyboards it takes my hands away from the main keyboard. Then I switched to a Kinesis Advantage2 and now the numpad is integrated into the main keyboard, using a foot pedal to toggle between main keys and integrated keys.


It depends on the length of the number you're typing and the context.

In PoS, you're not likely to be typing many letters most of the time, so left hand for function keys, right hand for numbers is pretty good. You can move your hands if you move to a search.


When most of the stuff that you type is numbers, the numpad makes more sense than the top row (also, many non-English keyboard layouts have the top-row numbers shifted).

For POS and similar applications it actually makes sense to design the UI such that it can be used with the numpad alone, for example by using +, -, *, / as function keys for common operations.
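
Something like the following is enough to get that behavior in a browser-based POS (a minimal TypeScript sketch; the action names are invented):

    // Numpad-only form: operator keys act as function keys, digits fall
    // through to the focused field as usual.
    function addLineItem()    { console.log("add line item"); }
    function voidLineItem()   { console.log("void line item"); }
    function changeQuantity() { console.log("change quantity"); }
    function lookupCustomer() { console.log("look up customer"); }
    function submitOrder()    { console.log("submit order"); }

    const actions: Record<string, () => void> = {
      "+": addLineItem,
      "-": voidLineItem,
      "*": changeQuantity,
      "/": lookupCustomer,
    };

    document.addEventListener("keydown", (e) => {
      const handler = actions[e.key];
      if (handler) {
        e.preventDefault(); // keep the operator characters out of the input field
        handler();
      } else if (e.key === "Enter") {
        submitOrder();
      }
      // plain digits are not intercepted, so they land in the focused input
    });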


The teacher was just trying to focus your practice on one specific skill instead of another.


That is until you are entering dozens or hundreds of numbers.


The web feels slower than it was 10 years ago too.

It's fascinating how we tend to over-engineer and bloat things.


The web is hundreds of times more feature-rich than 10 years ago too, though. But sites like HN are some of the few that are still built like they were 10 years ago. The rest all add rich JS frameworks (for supposedly faster interaction after the first load), a few billion social links, ads, cookie banners, "sign up for our newsletter" or "pay our subscription pls" popups when you first land on them, rich fonts, animations, images, etc.

I mean it's a lot nicer if you look at it from a distance after it's loaded, but it's not as snappy as it used to be.


I agree modern websites look a lot nicer... Or do they?

They all look the same: big images, not much text, the top bar, yeah this newsletter thing to upsell with these crappy (but effective) marketing techniques...

Somehow I feel old-school websites (HN, Reddit, and so on) are more sticky, more addictive, more unique. They focus on what we really want: content and communication with others.


Another thing I realized it's nonsense lately: object oriented programming.

It's often over-engineered bloat that only works for trivial Programming 101 courses (yeah, the famous Employee or Bike class). But in 10+ years of programming, I found that OOP is just a complete mess (you end up with awful classes in your code like Service, Manager, AbstractFactory, and so on).

I wish we could just use variables and functions, that's all we need really. :)


It depends on what kind of development you're doing.

For most web and business programming I've done, at the end of the day I'm mostly just taking data and transforming it into other data. Most of the time, looking at these programs as a series of functions through which data gets piped to obtain the result works well. As a result I think that functional programming is a good fit for a ton of code that is written using OOP. For fun, I tried converting some old side projects to F#. Once I'd gotten used to the language, I ended up being able to express the same ideas more clearly, and with fewer lines of code. Functional programming isn't magic, but it can be very effective when used in the right places.
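
A rough sketch of that "data piped through functions" shape, with invented types and field names (TypeScript here just for illustration, not F#):

    interface Order  { customer: string; total: number; shipped: boolean; }
    interface Report { customer: string; spend: number; }

    // Pure transformation: orders in, aggregated report rows out.
    function toReport(orders: Order[]): Report[] {
      const totals = new Map<string, number>();
      for (const o of orders.filter((x) => x.shipped)) {
        totals.set(o.customer, (totals.get(o.customer) ?? 0) + o.total);
      }
      // Map -> plain report rows, ready for rendering or export
      return [...totals].map(([customer, spend]) => ({ customer, spend }));
    }

    toReport([
      { customer: "acme", total: 10, shipped: true },
      { customer: "acme", total: 5,  shipped: true },
      { customer: "xyz",  total: 7,  shipped: false },
    ]); // => [{ customer: "acme", spend: 15 }]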

Lately, I've been getting into game development as a hobby. And when I'm simulating a miniature world with hundreds or thousands of stateful actors, I find that OOP works really well - in that the way the code models the game world aligns well with my mental model of the game world. You absolutely can program games using a functional style - I've just found OOP to be a better fit here for my uses. It's still useful to use a functional approach where you can, though, even inside an OOP game code base. John Carmack has written and spoken about this.


> I wish we could just use variables and functions, that's all we need really.

You still can. You need decent data structures too, but I never in over 20 years really "got" OOP. It seemed needlessly complicated. Functions and data always seemed to do the job for me.


OOP as most people get to know it seems to be a bleak application of the original concept.

And the original was not aimed at dealing with the internals of a single program sitting on a single CPU, but at grand simulations running on massive clusters. There each "object" could very well be a process of its own, running on its own dedicated hardware.

Effectively OOP became another one of those buzzwords on the bingo board...


You can: try FP languages. My personal recommendations (from opposite ends of the FP spectrum) would be Clojure and Haskell. F# is pretty nice too.


Hit counters, guestbooks, web rings, autoplay midi files, under construction gif animations, cursor animations, HTML frames, etc. We've been adding unnecessary garbage to webpages since almost the very beginning of WWW.


I'll add another thing to your list - segment! It's so weird to me to see a homepage load, look into my dev tools and find that my browser is talking to Facebook.


And yet I find myself increasingly reaching for tools to remove that complexity: Outline.com, Reader Mode, Pocket (despite its many, many flaws).

Not infrequently: stripping out content, running it in Markdown, and generating a static document (PDF, ePub, text) that I can just fucking read.


Noscript or Umatrix?


Neither of those are an option on Chrome/Android, and I don't think they apply on Firefox/Android (not sure about disabling JS).

Firefox does offer uBlock, at least, which is a massive relief. I also run (on the router) a large and, after significant modularisation on my part, flexible dnsmasq blocklist that addresses another large set of issues.

On desktop (Linux, MacOS), I have and use: NoScript, uBlock, uMatrix, and Stylish, as well as Reader Mode.

It's still often less aggravating to grab source or rendered text and create a standalone Markdown text, for various reasons.

I'm looking into the notion of a browsing mode that presumes the website designer is an idiot and that the HTML markup is at best a hint at what the semantic structure ought to be -- see concepts such as POSH (Plain Old Structural HTML), and ... a few other things. Work in (very slow) progress.


Could have sworn I have seen a version of Noscript that worked with Firefox on Android.


Checking: there are multiple javascript toggles, but not "Noscript" specifically AFAICT. "NJS" seems to be among the closest to that.

https://addons.mozilla.org/en-US/android/search/?q=javascrip...


Ah, it never went beyond the alpha stage.

https://noscript.net/nsa/

Only reason I bumped into it was because I was toying with Tor for Android, and their fork(?) of Firefox had it bundled.


It doesn't need to be slower.

You can still use server-rendered templates and have things load extremely fast. Most sites don't need 3 MB of JS to show structured text.
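
A minimal sketch of that, assuming plain Node with no framework at all (the article data is made up):

    import { createServer } from "node:http";

    const articles = [
      { id: 1, title: "Almost everything is perceptually slower than it was in 1983" },
      { id: 2, title: "In defense of the numpad" },
    ];

    // Render the whole page on the server; the browser gets HTML and nothing else.
    createServer((req, res) => {
      const items = articles
        .map((a) => `<li><a href="/article/${a.id}">${a.title}</a></li>`)
        .join("");
      res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
      res.end(`<!doctype html><title>Front page</title><ul>${items}</ul>`);
    }).listen(8080);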


Over-design perhaps, but I would say that the web is under-engineered, as engineers would find the inefficiencies and get around them.


I think what the guy is actually experiencing is a strong feeling of nostalgia for older times. And I understand that. I see it in myself more and more as the years go by. We were simply born in a certain era, which holds strong memories for us. We hold sentiment for old things (programs) because somehow they're connected with the best memories from the time we were much younger. And the best memories are usually bound to the ages around 20-30. Ask a senior to tell you a story from his time and there is a high probability that the story will be from his younger years rather than his older ones.

My point is that maybe today's software is not all that bad. We just don't feel the same needs as the 20-year-old guy somewhere at Google who programmed it. And it works the opposite way too. Show a 20-year-old guy VisiCalc and see what he thinks about it. Or check teens' reactions to Windows 95.

https://www.youtube.com/watch?v=8ucCxtgN6sc


But maybe it is. On nearly every metric that matters to power users software is worse.

Control? We have less.

Options? Less.

Ownership? Not that we ever had it, but now you don't even get a disc. Licenses are even more restrictive.

The pace of 'upgrades'? Way, unnecessarily faster

And when the business dies now? Now you're fucked.

There are stories of mechanics running their shops today with 40-year-old dinosaurs, where their biggest logistical issue is getting old parts to repair when it breaks. It was sad that the magazine article I read that story in laughed at the "backwards" proprietor, when in fact he is a hero for standing up to the current trend of users-as-serfs cloud everything.


> The pace of 'upgrades'? Way, unnecessarily faster

Not only that, but the availability of easy upgrades has incentivized companies to sell unfinished software and hope to patch fast enough in response to user complaints, instead of even bothering to actually finish the product as marketed.


"Options? Less."

How so, given that more people are able to write software, and it's easier to create something than ever before?

"And when the business dies now? Now you're fucked."

How is that any different than before?


You've answered your own question... more people are learning to program, from fewer and more professionally run sources, which preach a gospel that prizes engagement, conversions and ease-of-use over empowerment, functionality, or choice. Plus, this larger and more mainstream community, in which 'coders' are pulled in off the street with little technical knowledge (let alone passion) and never grokked complex apps anyway, produces what it knows. The end result is a sea of gimped, appliance-like 'apps' that do little more than funnel users towards business objectives.

And when the business died, their software still ran. Hard-won expertise kept it running. No stupid CEO or developer could stop you, and neither could they pull the rug out from under your feet with a sunset or some other shitty move. You bought your software and that, mercifully, could be the end of your relationship with that publisher.


This doesn't really address my point. You still haven't explained anything regarding choice, or any of your other points.


> On nearly every metric that matters to power users software is worse.

Citation needed.


Citation: myself and the other power users I have talked with. WTF is this, wikipedia?? Robot.


You worded your initial response like you were referencing actual stats about software. I'm sure you think you and your colleagues are reliable sources, but to the rest of the world, they aren't.


Every single one of those statements was subjective. Don't be fooled by my use of the word 'metric'. It is, frankly, amazing that people here expect one to have facts and citations for every little nitty point.


It's not about having a "fact or citation for every nitty point"... It's about having even a tiny grain of self-awareness before making a statement about the global state of software in 2017, and perhaps having the self-awareness to know that, generally speaking, your anecdote ("citation") is meaningless.

I guess at some point I need to stop being surprised that HN comments are really no better than Reddit's.


I'm going to challenge those citations, then.


Well, the only way a 20-year-old is getting weirded out by a Win 95 interface is if (1) he has never seen a Windows machine, because Win 10 basically reverted to the same Win 95 interface, or (2) he never worked on a desktop/laptop with KDE, Cinnamon or the other DEs inspired by Win 95. And Win 95's UI rocks IMHO (and I say that as a person closer in age to those annoying pipsqueaks in that video). You go to a button labeled "START". It has clearly labeled folders called "Accessories", "Programs", "Internet" etc. and then you get to where you want to go. For 90% of UIs, a tree like that is a perfectly good data structure. The more annoying thing is why software like Office takes two orders of magnitude more RAM while being essentially 90% similar in functionality to Office '97.


> You go to a button labeled "START". It has clearly labeled folders called "Accessories", "Programs", "Internet" etc and then you get to where you want to. For 90% of the UIs, a tree like that is a perfectly good data structure.

The strict hierarchy of the start menu was admittedly pretty nice, right up until (I think) Windows XP, where they started having the weird bifurcation at the parent level.


Yes I get the rosy-nostalgia bias but ... these apps run demonstrably faster. That's not nostalgia.


> My point is, that maybe todays software is not all that bad.

Well, as an interface designer who is supposedly in the age category where his best memories are still being created I feel a bit insulted, because from my point of view, a lot of things are pretty bad. Don't get me wrong, things are also a lot better, but there is more to life than HD video, better color fidelity, novelties made possible by virtual reality, and the interconnectedness of the web - even when these are all big improvements over the past! But it does feel like the computer is being domesticated into the new TV.

As far as interfaces themselves go, the main culprit (from a design POV) is almost always the priority of touch-first design, with mouse a distant second, and keyboard input only existing for things that could not be removed through dumbing things down, like input forms.

Now touch interfaces are great in some areas, but any time I have to select/copy/paste text I am painfully reminded of their limits. In general it feels like 90% of the time I am struggling to do things that would be trivial with mouse and/or keyboard. And let's not even go into the lack of tactile feedback[0].

Even more insulting is that it is not that hard to make an interface that supports different modes of input, or where the potential keyboard input is easy to understand. There even exist modern quasi-innovations like react-select, which uses a select field for mouse and touch and lets you type to autocomplete among available options for keyboard power-users[1].

The opening line about perceptual slowness also rings true. In my childhood we had long loading times because of slow hard drives or CD-drives. Now we have them for the sake of saving development costs and adding ad revenue. The internet is a bloated mess, which from a content perspective it doesn't need to be at all; many websites work better if you turn off JavaScript[2].

Then there is web development itself, starting with a reliance on bloated, overly generic modules, all the way down to the metal, where there is an unawareness that there is even such a thing as an ever-increasing memory/processor performance gap, or that this might also affect JavaScript code.

Well, it does, and not even browser vendors themselves seem to be fully aware of it. For example: on all platforms that I've tried, a simple radix sort is between two and eighty times faster than the built-in sorting algorithms when sorting numbers, for all array types, regardless of whether it is in-place or a sorted copy[3]. (In the use-case for which I investigated it, the improvement is five- to ten-fold, allowing me to do interactive animations where before I had to resort to slow renders.) The difference makes sense for plain Arrays, which cannot assume integer values, but why the heck are the typed arrays so slow? They're plain contiguous memory arrays; compared to all the other browser complexities this is about as simple as it gets!
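
For reference, a minimal LSD radix sort over 8-bit digits looks something like this (a sketch of the general technique, not the code behind the linked benchmarks):

    // Four counting-sort passes over 8-bit digits, ping-ponging between buffers.
    function radixSortUint32(input: Uint32Array): Uint32Array {
      let src = input.slice();
      let dst = new Uint32Array(input.length);
      for (let shift = 0; shift < 32; shift += 8) {
        const counts = new Uint32Array(257);
        for (let i = 0; i < src.length; i++) {
          counts[((src[i] >>> shift) & 0xff) + 1]++;
        }
        for (let i = 1; i < 257; i++) counts[i] += counts[i - 1]; // prefix sums = start offsets
        for (let i = 0; i < src.length; i++) {
          dst[counts[(src[i] >>> shift) & 0xff]++] = src[i];
        }
        [src, dst] = [dst, src]; // swap buffers between passes
      }
      return src; // after four passes, src holds the fully sorted data
    }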

I could go on for a while but the point is: it's not that things were better in the past. It's just that a number of these things should have gotten better and instead seem to have regressed. And I know it's a complex combination of many reasons, but it's still saddening to see.

[0] http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...

[1] http://jedwatson.github.io/react-select/

[2] http://idlewords.com/talks/website_obesity.htm

[3] https://run.perf.zone/view/Radix-sort-Uint8Array-loop-vs-fil..., https://run.perf.zone/view/Radix-sort-Uint8Array-100-element... https://run.perf.zone/view/Radix-sort-Uint8Array-loop-vs-fil..., https://run.perf.zone/view/Radix-sort-Uint8Array-100-element..., https://run.perf.zone/view/Radix-sort-Uint8Array-loop-vs-fil..., https://run.perf.zone/view/uint32slice0sort-vs-1000-items-ty..., https://run.perf.zone/view/uint32slice0sort-vs-radix-sort-10...


Computers are about control and wanting to return to the old days of aiding the power user has nothing to do with nostalgia.


What's really annoying and frustrating to me is that, even with a browser extension/plugin like Vimium, websites and webapps seem to almost go out of their way to break my ability to use my keyboard.

A lot of the changes that break usability for me are purely cosmetic changes that, e.g. replace a `button` with an `img` or `svg`.


The corporate culture to release a product in time rather than having it perform fast ruined the IT world and is infecting the FOSS ecosystem too. Just throw away all VM-constrained or interpreted languages (or use them only for prototyping) and the problem is almost solved. Then make everything work offline unless it is absolutely necessary to do the opposite (backups, updates etc). Rediscovering the lost art of optimization would also help a lot.


This does not work. The economic incentives are biased towards fast releases of mediocre software. Whoever delivers faster and cheaper tends to win.

Quality is detrimental to the business. Good software does not require support. Paid support earns money.

An initial release that is stuffed with features takes longer to develop (the competition gets the customers) and has fewer incentives for later upgrades (less income after initial release).

Software cannot really improve unless quality standards become mandatory. The liability disclaimer should go. Engineers don't get a free pass if their products don't work. Why should software be treated differently?


Funny to have a rant on UX on a medium not suitable for long-form posts.

It is sadly very similar in the desktop realm. You can't copy text from most parts of the interface. There can be an error and you have to retype it to find out what it's all about. The computer already has this written down; it's just silly. That is the one thing that is usually better about webapps. Of course there are silly designers who want to expose their users to the same limitation by blocking copy or the context menu. But usually they don't, because it's more work.

That's why the CLI will likely never die. By default you can manipulate the data to your heart's content. It may be slow and sometimes half-assed, but it still will be faster. You don't have to retype anything. It is, a bit sadly, the closest thing we have to a data-oriented system - one where you can manipulate all available information without massive hurdles. Only with some hurdles.

What else: Oberon, Plan 9 interfaces (Acme!), PowerShell, AppleScript (? - when I was using OS X at work a few years ago I couldn't find a reference manual for the language), and certainly more throughout history.

We can’t do anything useful in 2d and yet we are working on VR/AR interfaces where undoubtedly we will make same mistakes (and more!).


> By default you can manipulate the data to your heart’s content.

http://blog.vivekhaldar.com/post/3996068979/the-levels-of-em...


Recently I discovered how well sapGUI works from the UX perspective. It is essentially a glorified graphical 3270 terminal emulator which runs applications that very often rely on the user entering cryptic but human-readable IDs (there is lookup functionality integrated into essentially all such fields, but frequent users typically do not use it that much).

Another interesting fact is that the 3270/sapGUI/dynpro model is almost a perfect match for how HTML forms work (with the inline lookups and such being the small part that would require some trivial JS/AJAX). I don't understand why today's web apps universally try to reinvent the wheel by being "single page", emulating desktop WIMP interfaces and such things when simple HTML would work perfectly fine.

Edit to add another point: For our anime conventions I wrote our own online ticketing system. It has a web interface for customers and general administration, but the on-site box office uses an ncurses interface which intentionally has a very ISPF-ish feeling to it; the general consensus of the staff seems to be that it is significantly more efficient and intuitive than the web-based systems used by other local conventions.


3270 is a really fascinating evolutionary dead end. I don't have first-hand knowledge (wish I had!), but as far as I can tell it is a bit like high-level ncurses/dialog(1)/forms built directly into the terminal layer. Of course, being very proprietary, very enterprisey tech, there isn't that much information about how it worked in practice, but it certainly feels like a very different path for UIs.


3270 allows the user to edit screen contents without interaction with the host computer, and allows the host computer to mark screen regions as user-editable or not. Interaction with the host mostly consists of sending (text) framebuffer contents back and forth.

SAP's dynpro is an abstraction layer on top of that which presents a somewhat higher-level view (forms and fields, not framebuffers and characters) and also allows 3270's behavior to be emulated on normal character-oriented terminals (obviously by ncurses-like software running on the host). This abstraction is sufficiently high-level that sapGUI can look like a somewhat modern desktop application instead of an obvious 3270 emulator. (The Windows version of sapGUI allows you to render parts of the UI as embedded HTML or ActiveX components, but that capability does not seem to be used much outside of SAP's development tools.)

From a programmer's point of view it is not that different from a web 1.0 application built on a semi-modern form handling library (e.g. WTForms or whatever your all-in-one webapp framework provides). Probably the only difference is that for web frameworks you will have if(form_submitted_and_valid()) somewhere, while in SAP the body of that conditional and the rest of the controller implementation are always two distinct procedures.


This lends itself to supporting functional physical design. I'm getting tired of the screenification of everything. I don't want a touchscreen in my car. I want a bunch of knobs that I can reach for, feel, and use without looking at them. I cannot do this with a touch screen.


I strongly agree. A physical thing is just more expressive, and when designed right, that can be leveraged. Cars are the example ne plus ultra of how touchscreens aren't always a great idea.


> And it's worth noting that HYPERTEXT, specifically, is best with a mouse in a lot of cases. Wikipedia would suck on keyboard.

A decade ago I would navigate web pages quicker than now. I was using Opera, which had keys for navigating from hyperlink to hyperlink (spatially, using shift+arrow keys). It also had keys for navigating to the logical previous and next page--that is, not navigating history, but following "link rel=prev" and "link rel=next" links. Unfortunately, the web evolved to make this harder. Everything is a hyperlink and no one uses link rel tags anymore.


It's possible that the vimium extension for chrome will give you what you want. The 'f' shortcut gives something similar to what you want.


The thing that horrifies me about all of this is that instead of criticizing or fixing issues, most people adapt what they do and even how they think to match all the deficiencies in software. Often, users simply can't imagine systems that work fundamentally differently and better. Often, when deficiencies are pointed out, people start defending the mess and pointing out all the clever (half-assed, really) workarounds and hacks they came up with. They are proud of all the time they spent (wasted) learning about obscure and counter-intuitive software functionality. They are proud of the barely-working plugins/extensions/add-ons they found and set up. All of that to do things that should be trivial to begin with.

One counter-point, though.

> I make no secret of hating the mouse.

If you look at the original uses of the mouse it was great. Especially in systems like Xerox Star. Star allowed people to perform complex tasks with almost no learning curve.

https://www.youtube.com/watch?v=Cn4vC80Pv6Q

(Note how they weren't shy of using keyboard either. There are dedicated hardware buttons for standard commands like copy, find, repeat, "properties" and even for common text editing actions. Meanwhile, our keyboards don't have dedicated keys for undo, redo, cut, copy and paste - operations that are used in almost every application today.)

Trouble is, we lost most of the driving ideas behind Xerox-style interfaces: using a predefined set of generic, powerful commands, object-oriented UI, uniformity of representations. Modern systems have those things only as vestigial traits and in very limited contexts.

I don't think there have been any quantum leaps in conceptual UI design since the Xerox PARC days. There have been some minor improvements in very specialized apps and significant regression in software that's used universally. For example, phones and tablets almost completely lost drag-and-drop functionality and the generic file UI.

For example, drag-and-drop is a very powerful concept, because it allows you to perform actions by combining things you already know about - and those "things" figure out how to interact in the best way possible. So, for example, instead of having N "Print" buttons in N applications you can have a single drag-file-onto-printer-icon action that does different things based on the type of the file. [BTW, this is also the key idea behind the original notion of OOP.] Unfortunately, that's not how it works in modern UIs. They don't use either the keyboard or the mouse to their full potential.


People get used to doing things a certain way. When they have to overcome a terrible interface, an attachment forms. It's no different than the pride of those that live in physically harsh environments.

Habituation is a powerful thing.


I think a lot of the perceived slowness comes from the fact that so much more is reliant on the network now - and more often than not requires several hops over the public internet to some data centre somewhere.

As a reminder - network is really really slow compared to pretty much anything else: https://people.eecs.berkeley.edu/~rcs/research/interactive_l...

I'd not be surprised if the author's library search terminal from the 80s had a local copy of the index, or at least the index was stored on a server on the same lightly-loaded LAN as the terminal.

My home laptop is hugely, hugely faster than ones I've had a decade ago for local workloads, but my network connection's latency and bandwidth have not improved nearly as fast.


Network latency can be one reason, but I found the lack of optimisation to be a bigger factor. As long as a programme somehow works, many developers won't try to optimise it. That gives us Electron apps which are easily 10x slower than a comparable native app (pgAdmin is a prime example).


The day I sold my Atari ST, I pulled it out of the stack of old computers, connected it to a monitor, hooked up a disk drive and turned it on.

Poof: Desktop with a working mouse cursor in under one second.

I booted a nearby PC, and then hit reset until it booted. In the time that the "hot shit" 386 box took to get to a DOS prompt, the ST booted about fifty times. To something with graphics.

Modern system boot times depress me. We have database servers that take 15 minutes to POST. Doing updates or diagnosing hardware-level issues with these things is g l a c i a l l y slow. I work with message queueing systems that are not shy about several seconds of latency -- what are these stupid things doing? The datacenter they're running in is less than a hundred microseconds wide, and everything is high-end XEON CPUs with SSD and more cache than the first 30 computers I used, put together.

Sigh.


Everything is faster when you build it into ROM. The cost is flexibility. Should you wish to run another OS on that ST, you would have to bootstrap it through TOS.


I've since built systems based on flash that boot in tenths of a second, with over a thousand times the memory of that ST. I'll point out that the PC BIOS the ST was competing with was also in ROM, and only spent the last couple seconds of its execution loading stuff from disk -- I think the test was more than fair.


On a modern system, you could use flash RAM to get the same speed with all the flexibility advantages.


Indeed and unlike the 90s, SSDs are cheap now. :)


I'm still thankful for https://tinyapps.org even well after it has gone nearly read-only!

To qualify for TinyApps, a program must:

1. Not exceed 1.44mb

2. Not be adware

3. Not require the VB/MFC/.NET runtimes. Also, preference is given to apps which are 100% self-contained, requiring no installation, registry changes, etc.

4. Preferably be free, and ideally offer source code. Shareware will only be listed if there is no freeware alternative.


Fortunately we still have a choice. That's also one of the main reasons I always use the command line when it makes sense. In most cases you get instant or near-instant results. Our lives are too precious to waste them sitting by the computer and waiting for results - only because some people decided we should do it this way, and other people followed, and still other people had to implement it, dealing with mediocre infrastructure and additional complexity, usually with mixed results.

Nobody will steal this freedom of choice from me.


> i posit that nobody wants autocomplete-style live DB lookups. They don't fit the mold that autocomplete fits in.

1. I live-code an example in about:blank with devTools window open.

2. I roll my own little jQuery-like closure for the sake of convenience.

3. It works.

4. I close the browser and forget about the thing.

5. Later, I decide I wanna test out a feature of the Web Animation API, so I open my browser to about:blank again.

6. I remember that I used a little closure last time-- as I type the var name Chromium pops up an "autocomplete-style live DB lookup" menu. At the bottom is My Little Closure from last time!

7. I move the mouse down to the relevant line in the menu. Or, I use keyboard arrows to navigate to it. Or I keep typing and narrow down the menu options.

Useful? Check. Discoverable? Check. Obtrusive? Negative -- in the case that I don't ever want a previously typed expression, that menu option is put at the bottom, out of the way of the more common JS internals and DOM methods.

Default settings in a terminal (or terminal-based GUI) generally require the user to either type something to get into a history mode, or type tab to do completion. But both of those options are less discoverable-- they require me to know ahead of time that I want to retrieve something. With devTools it shows me what can be retrieved so that I know what's available even if I'm a neophyte.


It's 2017 and I wait for my computer. I wait on Linux, I wait on Windows. This sucks hard. It is still complicated to search for stuff or to morph data from one representation into another. The web is even worse. And let's not talk about sharing data or other computer resources with others.

When looking back at what people invented in the 80s till today, I hardly see much progress.

I don't care much about how much RAM an application takes, I care about speed. The next big OS should have a golden rule: never make the user wait. That's the ultimate goal, which all OSes fail to meet today.

What I want in the end is mostly some text laid out in a visually pleasant way, some images, some videos. I thought multiple times about writing a program which would fetch all the data from all the websites I'm interested in, messengers, email accounts and so on and store it somewhere – so that I could later view it (with whatever program) without any fucking delay.


This puts to more eloquent words the very same thing that I've been ranting about to peers and coworkers for years. Cheers for that; I'm going to share this with them.


> i posit that nobody wants autocomplete-style live DB lookups

Amen. I abhor this. I always get tripped up on these "smart" autocomplete entries.


Awesome. Looks like a lot of people are frustrated with this like I am. More importantly, there is nothing wrong with me :)


Perception of speed means a lot. I once was evaluating reporting apps back when people did reporting on the desktop. One app would render all the pages and then start displaying them. Another app would display each page as it rendered them. Which was faster? Overall, the first. Which felt faster? The second.


> one of the things that makes me steaming mad is how the entire field of web apps ignores 100% of learned lessons from desktop apps

Burns me up too, but I think this is by design. Meaning, web app developers generally don't want to listen, even when they're bringing that to the desktop in the form of "native" apps.


Feels like this is the cycle of computing.

First came the big irons.

Then came the desktop.

Then came mobile.

Then came the web.

And none of the later ones seems particularly interested in learning from those that came before.

Maybe it is willful ignorance, maybe it is youthful hubris...


The computer store I used to work at switched from a dedicated cash register to a PC-based QuickBooks POS system and it was a POS (piece of sh).

Windows-based GUI app. You couldn't ring up anything without looking at the screen, constantly taking your hand off the keyboard to move to the right input box.


The argument about function keys resonates with me. I still use F5 (refresh) and F2 (rename) but can't remember when I last used any of the other keys. Instead of using F6 for a function in a programme I now have to use something like CTRL+Shift+K. Why? Why not utilise the keyboard properly?


About the Google Maps thing: does anybody know an alternative where I can measure things (like the length of a named road or the width of a building) and count things (for example, I draw a polygon and I want to know the number of houses within it)?


OsmAnd is pretty good for measurements and stuff.

http://osmand.net/


The point of sale system shown in the tweetstorm is this one, and it is highly recommended if you're ever in a situation where you need to sell stuff.

http://keyhut.com/pos.htm


I wonder if there is a business model for a company to recreate the 1995 native desktop UX for popular web services (scraping, API). Relatedly, how much various web SaaS tools can be... de-webbed without significant loss of functionality?


If this resonates with you check out "The Humane Interface" by Jef Raskin.

https://en.wikipedia.org/wiki/The_Humane_Interface


Gmaps is so far ahead of anything available in 1983 that a comparison is almost silly.


Except for actual, physical maps, with indices for finding locations, and all kinds of markings to denote things of interest (such as restaurants, museums, etc.)


Physical maps don't necessarily have:

- satellite imagery

- reviews for destinations

- automatic re-routing

- traffic congestion

- toll/public-transit information

- On a mobile/gps enabled device you get your location as well

- etc.

Comparing physical maps to Google Maps is one-dimensional: you can see a specific area (if you had the foresight to buy it beforehand!) with roads and mile markers.


Do people actually use satellite imagery other than to appreciate how cool it is to look at one's house from above?

Reviews mostly average around 3 stars for most places, because there are two types of reviews: 1 star - waiting times were horrible, the meal was cold and the waiter was rude; 5 stars - the best meal I ever had, everyone was cheerful and helpful.

Automatic re-routing is good, especially with traffic information. However, it is miserable where there are unknown closed roads. It will constantly reroute you to the closed road. You can't pick a road segment and remove it from consideration. Then you are back to the experience of physical maps, only slightly worse, because you don't know the road numbers - up to this point they were irrelevant.

Traffic congestion is a big plus.

Public-transit is also a great thing.


> Do people actually use satellite imagery other than to appreciate how cool is to look at one's house from above?

Definitely! I used it when trying to find a good spot to view the recent eclipse - I wanted to find an open place that would not be obstructed by buildings, trees or mountains. I've used it to understand better how water drains through my community and across my property.

> Reviews are mostly around average 3 stars for most places.

That is exactly what I'd expect!

> Because there are two types of revies

That seems to be hyperbole.

> Automatic re-routing is good. Especially with traffic information. However it is miserable where there are unknown closed roads.

That's definitely true. Paper maps would not have this information either. In a worst-case scenario, you could use Google Maps like a paper map - disable routing and try to plan a route manually!


My CPC took 30min to load a game from datasette.


One way to improve this is to do like Gmail and provide keyboard shortcuts in addition to the point-and-click interface. (It has to be explicitly enabled in settings, but it didn't have to be that way.) Like the author is claiming, it takes very little time to get used to and you get more productive quickly. The only problem I've run into (also covered by the OP) is that input focus might be different from what you think, and you end up quickly doing unintended things to your email. Luckily there's an undo function for that.
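
The usual guard for that focus problem looks something like this (shortcut actions invented; a sketch, not Gmail's actual code):

    // Single-key shortcuts that stand down whenever focus is inside a text
    // field, which is exactly the "focus is somewhere you didn't expect" failure.
    const shortcuts: Record<string, () => void> = {
      j: () => console.log("next conversation"),
      k: () => console.log("previous conversation"),
      e: () => console.log("archive"),
    };

    document.addEventListener("keydown", (e) => {
      const t = e.target as HTMLElement;
      const typing =
        t.isContentEditable || t.tagName === "INPUT" || t.tagName === "TEXTAREA";
      if (typing || e.ctrlKey || e.metaKey || e.altKey) return; // don't eat real typing
      shortcuts[e.key]?.();
    });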


Ugh, now I am reminded of how YouTube hijacks the Home button to mean "start playing the video from the beginning". This makes it impossible to scroll back to the top of the page quickly using a keyboard.


No doubt. Microsoft Office took tasks that were 1 button and made them 5+ clicks. And this feature creep expanded everywhere, with companies thinking they are smarter than you, so they try to train "you" to work their sites.

Also why I don't like Apple computers: you use DOS/Windows/Linux, get to a Mac, and if you don't know the secret gestures or commands, it's just painful.

Its like how Gnome 3 took over and had to redesign everything, then finally came back to Gnome Classic.

Or Windows said TILES is the future, deleted your start button, then went "My Bad" and gave you a start button again.

Directories are disappearing, now its "Search Boxes" everywhere.. Good luck if you know the asset file, and the file doesn't have the name "asset" in it. Or if the damn file has permissions for you to even open it if you find it.

Don't even get me started on Android phones. I hit the dialer, and I don't get a keypad when I need it; I'm in a menu tree now and the dialer wants to display current contact info... grrr.

Yeah, KISS is no longer a thing, it's all whiz-bang, "we can do it better than you." NoSQL is awesome! Oh wait, back to SQL we go. Perl SUCKS! Python 2, er, 3 now.

Get off my yard.


This rant has some good points, but completely misses other very important points. The main one being that those "really fast interfaces" were only fast or even useable if you spent months to years learning how to use them. They were also horribly limiting. (For example, with no mouse and no way to change where the input is focused, you can't re-sort the lists by a different field.)

I worked for a major cash register manufacturer writing DOS-based software for point of sale (PoS) systems in the 90s. One thing our customers (big stores) constantly reiterated was that training costs were very high and retention very low, so they wanted simpler-to-use systems, but with no less functionality.

The big wins tended to be combinations of hardware that made the process easier (think grocery store scanners), and better UI on screen that was easier to understand and use. We ended up making a graphical (but still DOS-based) UI along with some specialized hardware (an LCD screen with function keys around it so the command name was right next to the key, but could change per screen). It's been used for the past 20 years by the likes of Walmart, Target, and the USPS, so I guess it worked. (I haven't worked on it in 20 years, so I'm sure it's changed from how it was when I was there.)

It may very well not be as fast to load any given screen or respond to any given command. But it costs less to train people to use it, and it's easier to understand. I believe that it also helped improve checkout speed, as well, which was important to some of the bigger stores.

I feel the pain the OP has, but I don't think their rant is entirely justified. There are good points (especially about maps!), but they're also seeing the past through rose-colored glasses.


I certainly agree. The one case where this isn't necessarily true is a fresh boot. Granted, my reference point starts from the 90s, and that's assuming you're booting from an HDD (not an SSD, flash, or restoring cached state from quick/fast boot).


I disagree with that. Even spinning HDDs are several orders of magnitude faster than even the main RAM of computers from 20 years ago.


RAM from computers from 20 years ago had seek times of milliseconds? I find that very hard to believe.


Not in latency, by a very long stretch (500ns 30 years ago).


Not in latency but definitely in throughput. I remember being amazed 20 years ago at 3MiB/sec throughput when copying some files around. Today, even a spinning disc can sustain 160MiB/s and peak at 240MiB/s (personal anecdotal evidence) before looking at RAID or hybrid SSD options.

Sure, latency is still pretty bad. But a well-designed application is going to minimize individual seeks required, and therefore minimize the total aggregate latency cost.

It's the same way across the internet; a well designed application is going to minimize the latency cost across a socket by minimizing the amount of synchronization (seeking) required to continue work. TCP connection, TLS handshake, HTTP processing and routing, application processing to disk/database, multiple queries/seeks to handle the request... it all adds up.
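
A toy illustration of that last point (fetchJson and the URLs are placeholders): three sequential awaits pay three round trips, while issuing the independent requests together pays roughly one.

    async function fetchJson(url: string): Promise<unknown> {
      const res = await fetch(url);
      return res.json();
    }

    async function slowPage() {
      const user   = await fetchJson("/api/user");   // round trip 1
      const orders = await fetchJson("/api/orders"); // round trip 2
      const prefs  = await fetchJson("/api/prefs");  // round trip 3
      return { user, orders, prefs };
    }

    async function fastPage() {
      // The requests don't depend on each other, so issue them concurrently.
      const [user, orders, prefs] = await Promise.all([
        fetchJson("/api/user"),
        fetchJson("/api/orders"),
        fetchJson("/api/prefs"),
      ]);
      return { user, orders, prefs };
    }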


Well, this is a cute bit of nostalgia, but it used to take 6.5 seconds to redraw the terminal screen at my local library, which was connected to some distant system at 2400bps.


Software fatigue for sure

It is so bad on iOS with an iPhone that I am trying to figure out how to live without one.

Doing anything productive on iOS takes 12 clicks, 6 button pushes and 9 minutes.


As an app developer I have been guilty of this in the past as well. I really try to make interfaces instantaneous nowadays so you can keep hammering away while in the background things like fetching and saving happen.

Things like naive saving (always continue immediately, assuming the insert or update really happened), command queues and aggressive preloading can make applications really quick again even though they're just an interface for an API.
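
A minimal sketch of that naive-saving/command-queue idea (endpoint and types invented): apply the change to the UI immediately, queue the write, and let a background loop flush it.

    type Command = { url: string; body: unknown };

    const queue: Command[] = [];
    let flushing = false;

    function saveOptimistically(cmd: Command, applyToUi: () => void): void {
      applyToUi();     // the user sees the result instantly
      queue.push(cmd); // the network catches up in the background
      void flush();
    }

    async function flush(): Promise<void> {
      if (flushing) return;
      flushing = true;
      while (queue.length > 0) {
        const cmd = queue[0];
        try {
          await fetch(cmd.url, { method: "POST", body: JSON.stringify(cmd.body) });
          queue.shift(); // drop the command only once the server confirmed it
        } catch {
          await new Promise((r) => setTimeout(r, 1000)); // retry after a short pause
        }
      }
      flushing = false;
    }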


"GOD FORBID i click on anything while its loading" hahaha that's how you distinguish experienced from not experiences users


But, you know, clearly AI is going to replace the job of people making maps in the future. Because software just keeps getting better.


Oh, man. That Google Maps peeve absolutely speaks to my soul. Every single time!



"Everything" seems a bit much. There are so many things you just could not do at all in 1983, like HD video or 3D engines or even rendering Mandelbrot fractals.


The weirdest thing is when navigating to an HD video is slow, individual folders take entire seconds to open up, then when you finally get to the video file and open it, the video runs smoothly and perfectly. What on earth is the file navigator doing??


"Things that could be done in 1983" getting worse is fairly defensible.


Why can't we have HD video and textboxes that can keep up with typing? Why do we have to choose?


Totally agree. Even though machines and networks are faster than ever, web apps are getting slower every day. :(


Ehhh?

* I remember spending hours sitting there while my new 2GB hard drive formatted to FAT32. Or defragmenting every week... because that disk format was rather subpar compared to modern file systems.

* I remember the 10+ Floppy Disks that needed to be manually swapped in and out to install... something. I think Microsoft Word. Or maybe it was Windows. In any case, Floppies were slow as all heck, and when CDs came out everyone was amazed by their speed.

* I remember being able to "see" the flood fill algorithm in "Paint" applications line-by-line fill up regions. Computers didn't have the memory to fill up those regions in one pass back then and had to resort to slower algorithms.

* I remember waiting multiple minutes to download a 5MB file in the 90s. 56kbps downloads 5MB in 11 minutes, for example. Even small pictures were unreasonable to distribute on the internet, let alone audio or movies. Flash was popular because a lot of it was rendered vector art and could fit into the ~3MB or 4MB size needed to be practical.

* I remember the dial-up sound. It took multiple minutes to CONNECT to the internet, and you couldn't use your phone while you were on the internet. To check your email took at least 5 minutes before you saw your first email. ~2 minutes dialing in, a few more minutes downloading the emails, and then finally you could begin reading them

------------------

Seriously, did the author here ever use 90s technology? I remember searching through 3 CDs worth of information to write reports in the 90s.

Yes, CD-rom based encyclopedias. You'd juggle CD-roms constantly to do things that are as simple as "Wikipedia" or "Google" today.

We are way, way wayyyyy faster than the 90s. I can imagine that one would be jaded by the past, but... speed? Nah man, we're way faster today than back then. Its not even a close contest.

------------------

I mean seriously, when did people stop using Floppies? It was like, 2004 or 2005 if I remember correctly. And there was a good chunk of time in there where we spent many minutes burning CD-ROMs (when our data was too big for Floppies: MS Word documents easily over the 1.44 MB limit...) but Flash Drives were way too expensive to be practically used.

I think some people used those "Jazz" and "Zip" drives as a "floppy replacement", but really it was burning CDs / DVDs (which had a significant delay to spin up and write). Modern Flash drives are instant. Modern cloud-storage is instant.

I cringe at the thought of returning to the 90s. I have a 30MB Powerpoint sitting on my desktop that I've been working on between my home and work. Do you have any idea of how much time it would have taken to transfer that back and forth in the 90s?


Do you also remember that the Word that you installed from 10 floppies was a lot more responsive than the Word you subscribe to now?

You are saying hardware is a lot faster now. And you are correct. What he is saying is that applications are a lot slower, in spite of the faster hardware. And he is also correct.


Word hasn't gotten much slower or faster in my experience. Honest.

Word has a bunch of additional features that are nice to have. We have spell-check and grammar check today, layouts that don't explode, decent "styles system" (kinda like LaTeX's tagging: you can change a style throughout your document pretty easily under modern Word)

I'd take Word 2016 any day over Word 97. Granted, I'm one of those "strange" people who see absolutely nothing wrong with the modern Ribbon (way better than menus inside of menus, that change from system to system. My school's toolbars for Word 97 looked completely different than my home computer's toolbar... it was kind of ridiculous)


git is very fast


Was originally a tweet-storm -> https://twitter.com/gravislizard/status/927593460642615296, if anyone is confused by the stream-of-consciousness style.

It is frustrating that I've got a machine on my desk more powerful than supercomputers of a generation ago, and on a regular basis it locks up and can't refresh the screen as fast as my relatively slow typing...


> Was originally a tweet-storm -> https://twitter.com/gravislizard/status/927593460642615296, if anyone is confused by the stream-of-consciousness style.

I don't mean to detract from the point, but this piece would be helped so much by properly fleshing it out into an article. Presenting it as a sequence of 92 tweets combines the worst of both worlds.


I thought the choice of medium was itself an ironic comment on the crappiness of modern computer usage. Tweetstorms are essentially unreadable, even with a tool like this thread aggregator.


Precisely my response on first encountering the content.

Wherein I discovered that this particular tweet aggregator could be initiated by a third party (others ... cannot).

https://plus.google.com/+TimWesson/posts/VSGs2BddARD


Yeah, but now you can buy an SGI Onyx or some other model of Cray and turn it into a super-computing kegerator.

When we built out the LucasArts Presidio complex, ILM had several of these husks of rendering past sitting on the loading dock, waiting for the scrap heap.

I still regret to this day I didn’t take one.


I wouldn't sweat it too much. I had a working Octane workstation complete with SGI monitor and SGI keyboard. It's not easy to figure out a good use for it unless you can use the tanky case for something (ceramic grills over the case fans!). I even got Shake to work on it, but it ran at about the same speed as a 1GHz PIII.


Is it all the corporate mandated spyware? Because it's really fun when your work computer goes into a CPU spinup because of this.


I'd created the Tttthreads variant because tweetstorms are (ironically) another horribly bad reading interface.


Not only that, but due to an existing bug/oversight in Windows, you can actually freeze your mouse pointer on a machine with 24 cores sitting idle.

(Great read.)

https://randomascii.wordpress.com/2017/07/09/24-core-cpu-and...

We have gigantic, steamrolling, bullet-train machines capable of going nearly infinite miles per hour. And everyone's "design" solution... is to add their own stops along the way, so the train has to come to a complete stop, [un]load some passengers, and start chugging along again. And surprise surprise, a gigantic train takes a while to accelerate back up to speed.

We've built faster and faster trains, but filled them with so many stops along the way that it's sometimes faster to take the bus.


Similarly, with Gnome on top of Wayland the mouse is hitched to the main refresh loop...


Holy crap. That seems like such an oversight for something so crucial.


I enjoy that this applies pretty directly (almost aesthetically) to deeply-pipelined CPU architectures. Though for general computing, I am pretty sure it's still a win.


I’m constantly stunned by the fact that Chrome basically allocates a gig of memory to each open tab...

Can anyone ELI5 why this is?

I typically have thirty tabs open at any given time.


Are you counting shared memory multiple times? Try hitting shift-esc in Chrome and checking the memory usage there. My heaviest tab (Gmail) is only around 250M.

Edit: I mean most computers don't even have 30 GB of RAM, but if you do there's an easy way to check. Open a bunch of tabs and note your system's total RAM usage. Then close Chrome and see how much gets freed up. If it's 30GB then you have a problem :)
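If you want the lazy version of that experiment, a tiny sketch (assuming psutil is installed; run it once with Chrome open and once after quitting, and diff the two numbers):

    import psutil

    # Snapshot of total RAM currently in use, system-wide.
    used_gib = psutil.virtual_memory().used / 2**30
    print(f"RAM in use: {used_gib:.1f} GiB")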


Only 250M. ONLY 250M.

Gives up and dies

I don't mean to be dramatic but this is a huge part of the problem. How is it that the exact same application (email) ran perfectly well on systems with 1M of RAM (and no virtual memory!) and now it takes two-hundred-and-fifty-fucking-megs?


Because... it's not at all the same application? Email used to keep everything on disk and load up one message at a time. Now everything is on the server and has images embedded, so it all gets cached into RAM. If I switch to the "HTML version" of Gmail, the tab takes only 3MB to list my emails.

There's also inflation of the content itself. That same HTML view uses over 100MB to render one email with a lot of pictures.


Wow - I was going to make a counterargument about how alpine is perfectly capable of keeping everything on disk (and how I usually don't care about images), but my alpine process, which has been running for a bit over a month and is connected via IMAP to a remote server with about 220,000 messages in the inbox, is using 250 MB virtual and 100 MB resident if I'm reading ps right.

So maybe Gmail is actually pretty efficient!?


I'm sure the Gmail Android app is nice and light. I'm guessing the RAM bloat in the browser is from the JIT keeping lots of profiling data and cached compiled code, which looks good in benchmarks but does get a little "heavy".


Full Unicode support, including input methods for languages that can't be typed directly on a QWERTY keyboard, takes a lot more than 1MB. Seeing as most of the world's population doesn't speak English, this is a very important improvement.


Well said.


250MB is around 100'000 pages of text. Or some 150 full HD images. Why such amounts of data needs to be in memory at all times to read/reply to emails, I have no idea.
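Back-of-envelope (assuming ~2.5 KB per page of plain text; the image figure only works for compressed JPEGs of around 1.7 MB each, since raw 24-bit 1920x1080 frames are ~6 MB apiece):

    total_bytes = 250 * 1024 * 1024
    print(total_bytes / 2500)                 # ~105,000 pages of plain text
    print(total_bytes // (1920 * 1080 * 3))   # only ~42 uncompressed full-HD frames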


Especially when all heavy lifting is done on the server.


It's the same functionality, but not the same application. Mail clients of old were standalone compiled programs that only did one thing. A Chrome process is almost a mini operating system with an API layer on top.


An OS in a sandbox on top of an OS on top of a CPU...


I'm running mutt, and right now it's only consuming 8.5MB (out of an image size of 12MB). That still feels large to me, but my first computer had only 16K of RAM (I've written larger emails) so I might still have a bias.


It was doing much, much, much less.


Was it though? Gmail is just a frontend to a system of services running in a data center. All Gmail does on our computers is display data, accept commands, and relay those commands to a server.


That's a pretty gross oversimplification of things. One of the sibling comments has a pretty good comparison of what Gmail is doing that most 80s email clients weren't.


OK checked.

htop reports that the system, idle with nothing running, consumes 996MB of memory -- one tab open in Chrome jumps that to 1.77GB (out of 16GB).

Now I have 6 open tabs... CNN, HN, reddit, reddit, reddit, Netflix - and just to display these tabs it's allocated 2.5 gigs... I just can't understand how displaying 6 websites requires 2.5GB of RAM.

I'll admit to being stupid on this topic - but I can do maths, and that just doesn't appear to be logical - and I am not complaining, I'm trying to understand. So can anyone ELI5 why displaying the default pages of some of the fastest sites on the internet, reddit and HN, would consume the resources they do? I'm literally curious.

http://i.imgur.com/Lm3iexB.jpg

WHY being the operative term here... I just want to know WHY a browser is so memory-intensive for displaying fucking text. The whole point was that in 1983 we had a VT100 or somesuch, and they were fast as heck... but now I have to pre-load a shit-ton of ads or something that consume my local resources and ruin my experience? Do we need to punch a designer in the face?

http://media.fakeposters.com/results/2009/07/16/z22e8wsvrf.j...


Sometimes being fast means using lots of RAM. https://hacks.mozilla.org/2017/10/the-whole-web-at-maximum-f...
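A toy illustration of the general trade-off (nothing to do with the linked article's internals, just the principle that caching spends RAM to buy speed):

    from functools import lru_cache

    @lru_cache(maxsize=None)   # the cache is a pure memory cost...
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(300)   # ...and a pure speed win: instant here, effectively never finishes without it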


Hmm... I may be - I’ll check when I get to work. I may have made a mindflake, as I was pissed that Chrome had so much allocated when it should be small given its job...


Turns out building a standards-compliant DOM renderer and JavaScript engine is actually pretty complex.

When you throw in the heaps and heaps of different features that must follow volumes of specs, backwards compatibility with previous implementations, and cross-platform compatibility, I think it’s pretty safe to say that your average modern browser is one of the most complex programs installed on your machine, if not the most complex.

Its job isn’t small at all.


250 megabytes for a simple email client.


Versus, say, 1GB for Outlook and variable amounts for Mail.app depending on usage. It's not 1993 — these days email includes a full text search engine, rendering complex content, etc. Since even a phone comes with considerably more than 250MB of RAM, if you don't want to use the resources you paid for to make things faster or better, wouldn't that be an argument for using pine or mutt in a terminal window?


> if you don't want to use the resources you paid for to make things faster or better, wouldn't that be an argument for using pine or mutt in a terminal window?

I rather think that pine, mutt or gnus in a terminal window (or even X) would be faster and better than a web client.

I read my mail in emacs using notmuch these days, and the full text search engine is a couple orders of magnitude faster than Google's Inbox; the complex content rendering is faster than Firefox or Chrome; everything is better and nothing is worse.


Outlook is using 88MB for me, after running all day. Not bad for a mail client that is supposedly bloated and slow.


Thankfully, with webmail these all happen on the server, not consuming client memory. Oh, wait.


It's not an email client. It's a web browser using the features of a web browser to give you email client features. There is no expectation that it have a similar footprint as a binary tailored for the job of being an email client.


I agree with this guy that having to use a mouse is often a chore and inefficient. Google search used to have a feature where you could hit Tab after a search and it would auto-select the results in order, but that seems to be gone. We need more features like that. Keyboard control supremacy will come back; it is inevitable.


My terminal responds more quickly than it did in 1983. In every other way computers have subsumed televisions as entertainment devices, and have performance characteristics appropriate to their main uses.

Because of Moore's law advances, the industry has learned to design for the next-generation CPU. Before Moore's law was understood, the industry designed for the current hardware, and everything was extremely fast.

In a way I wish RIM had won the smartphone wars, but after the patent loss and the removal of the trackwheel, the devices were far less usable.

Apple's foray into Skeuomorphism offers us a history of the competitive landscape for usability and design. Skeuomorphism is a way of avoiding the learning curve associated with abstractions. If the software is meant to let the user take notes, making the screen yellow with lines on it helps the most abstraction-resistant users grok what is going on.

Most adults can't do basic algebra, so it should be no surprise that the dominant mobile software platform succeeded only after substantially dumbing down the UI to the point where it was essentially abstraction free.

With Jony Ive taking more control of iOS we are finally breaking free of the Skeuomorphic training wheels. Google's material design is an ambitious attempt to create an abstraction that retains some of the more textural and familiar "material-ness" of real-world objects, without resorting to crude mimicry. I wouldn't say it's close to perfect, but it is promising.

So consider the need for a fast GPU and high res graphics in a mobile device to be (in a sense) a tax that we must pay to help the small percentage of abstraction-phobic luddites understand how to write a note or add a meeting to the calendar. It also makes the device useful for gaming and for watching TV, which many people enjoy doing and which create massive revenue for content companies.

Apple came to dominate the smartphone market because it understood that consumers wanted an all-purpose device that didn't shove too many abstractions down anyone's throat, had decent video games, would let you watch TV and movies, and did not feel like using a computer.

With Android, Google has mostly stayed behind by about one year to cut costs and capture market share, but has increasingly been focusing on delivering a top-tier experience and offering/supporting top-tier hardware.

Movies, shows, and songs make up 99% of most users' stored data. So of course the device needs to work with that data. Any comparison to 1983 cannot consider such massive data because it was unfathomable. Back then we thought calendars, note taking, simple spreadsheets, etc., were what computers were good for. Nobody realized that they would become handheld TVs with a built-in gossip rag with custom content about all of our friends.

We will someday reach a point where a mobile device reaches "peak dopamine", meaning that no further improvements will be necessary to make the device more entertaining or more addictive. We have a long way to go, and a lot more transistors will be needed in the hardware to reach peak dopamine, so we can expect the trend to continue. UI responsiveness figures in somewhere, but is obviously not the main driver of hardware cost increases.


Wow this comment is getting a lot of downvotes. I'm curious what I said that offended.


tl;dr we have a network layer? what is this garbage


it's all about the command line! right!? that thing rules!!


There's a ton here, virtually all of which I agree with strongly and have long harbored deep frustrations over.

I've addressed a number of them in "The Tyranny of the Minimum Viable User", arguing that this is a case of Gresham's Law and market interactions, in a domain with a tremendous range of human capabilities. The result though is a disenfranchisement of more capable users.

https://www.reddit.com/r/dredmorbius/comments/69wk8y/the_tyr...

There's the whole matter of computer inputs, and the tremendous and persistent utility of keyboards.

There's my immense frustration with mobile computing, where a near-perfect set of physical characteristics (form factor, self-supporting-but-removable cases, stow-away keyboards) is sabotaged at every possible point by atrocious and user-hostile OS and application design and lock-down, crippled storage and capabilities, gratuitously thwarting advanced use. The lack of form-factor and compatibility standardisation means that keyboard- and case-pairing to devices requires not manufacturer- but model-specific compatibility. Warranties are not honoured (Logitech). Devices aren't updated (Samsung/Google).

https://ello.co/dredmorbius/post/lqgtwy_rhsfbdh5cdxb1rq

On the failure of systems to cross-reference, I give you Pocket, which absolutely and deliberately stymies advanced use and actively gets worse the more you use it. I've submitted long and detailed feedback to Pocket (now part of Mozilla), and whilst acknowledged, there's been absolutely no movement on any of these, even the simplest, such as incremental search through my copious tag classification. It literally takes several minutes to swipe through this by hand. The application -- intended for long-form reading -- has no search-in-page functionality.

Whiskey. Tango. Foxtrot.

https://www.reddit.com/r/dredmorbius/comments/5x2sfx/pocket_...

I'd really like to know how the hell to clobber the tech industry with a cluestick, because the present model is absolutely not working.


Nice rant, but sadly, as he says, this is just how it is. There are very few pieces of software I use that aren't absolutely chock-full of brain-dead stupid behaviors... not just mildly annoying or difference-of-opinion-style issues, but downright dumb/broken. Gmaps is a good example; my blood pressure always rises while trying to use it.


No, the people on HN are the ones who could change it. Many here develop software. The current mantra of many is to avoid premature optimisation. Who cares about a function that takes 50ms? Well, in reality this function will at some point run within other functions and needs to be called 10 times, adding half a second to the page load. Microservices are also really nice to develop, but you'll almost always pay with performance. Even inter-datacentre latency adds up if you need 100 calls before you can show the result to the user.
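The arithmetic really is that blunt. A sketch with made-up but plausible numbers:

    rtt_ms = 5          # hypothetical inter-datacentre round trip per call
    calls = 100         # sequential calls before anything is shown to the user
    print(rtt_ms * calls, "ms added to the page load")   # 500 ms, before any actual work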


Back in the mid-eighties, rendering very simple three-dimensional scenes using ray tracing took a great deal of time.

"...images generated using some of the above improvements to ray tracing. They all took approximately 50 minutes each to compute on a VAX 780." - From http://www.cs.yorku.ca/~amana/research/cones.pdf

Example image found here: http://www.cs.yorku.ca/~amana/research/images/spheres.jpg

A DEC VAX 780 cost over a hundred thousand dollars - http://www.computerhistory.org/revolution/mainframe-computer...



