
Almost everything on computers is perceptually slower than it was in 1983 - zeveb
https://mobile.twitter.com/gravislizard/status/927593460642615296
======
mesozoic
Ironic that he had to type out the entire thing in 140-character segments,
each of which bounced all over the world, rather than sharing a simple text
file write-up.

------
gozur88
From a purely UI perspective that may be true (neglecting things like
incompatible file systems, the lack of networking outside large institutions,
and the difficulty of doing simple things like sending email attachments), but
that would be true irrespective of the adware. Text-based UIs are more
efficient than GUIs, and mobile GUIs are a study in inefficiency.

On the other hand, it could take weeks to learn a nontrivial UI well enough to
make it fast.

------
Deimorz
This was already posted and had a large discussion here earlier today:
[https://news.ycombinator.com/item?id=15643663](https://news.ycombinator.com/item?id=15643663)

------
jstewartmobile
There's probably already a phrase for this, but I wonder how much of this is
due to powerful machines making more room for bad programmers?

This guy is beating on GUIs, but I'm not sure that's the problem. I've used
rock-solid GUIs on machines running at 1 MHz with less than 1 MB of RAM that
absolutely _screamed_, performance-wise, and were a joy to work with.

~~~
srean
> making more room for bad programmers?

That reminded me of

[https://en.wikipedia.org/wiki/Braess%27s_paradox](https://en.wikipedia.org/wiki/Braess%27s_paradox)

and

[https://en.wikipedia.org/wiki/Wirth%27s_law](https://en.wikipedia.org/wiki/Wirth%27s_law)

------
zeveb
An easier-to-read version is here:
[https://tttthreads.com/thread/927593460642615296](https://tttthreads.com/thread/927593460642615296)

(also, for some reason Twitter thinks that Firefox on Linux is a mobile
browser … why Twitter why?)

~~~
Finnucane
That didn't help. It's still unreadable, but the problem isn't the formatting.

I guess it depends on what we mean by faster. Certain kinds of interfaces were
more responsive when they didn't have to deal with a lot of graphics overhead.
If you were working on a DECwriter with a 300-baud connection to a mainframe,
it wasn't that fast. Need data from the disk drive? Way slower. And of course,
in 1983, no PCs had any kind of multitasking.

~~~
horsawlarway
Not to mention his entire rant on maps losing search items when you pan is
just wrong.

Or fuck it, let's talk about the fact that I have an application with zero
install time that will literally display the roads/geography/business
locations of basically every civilized place in the whole freaking world, and
it's almost always reasonably up to date.

And the biggest irony of all, he chooses to format his entire fucking lunatic
rant in messages capped at 140 characters, when there are lord knows how many
better options available.

This man is walking proof that nostalgia is real, and that people are
generally idiots whenever they talk about "The good ol' days"...

------
dpark
> _Search far and wide. Search for cities and then click around inside them.
> Read reviews. Do street view._

> _When you're all done, you go back to your plotted trip and start laying
> out the chosen locations and optimizing your path._

> _You can do this with a paper map. You can't do this with gmaps. So you just
> don't do it._

Yes, it’s so easy to “click around cities” and “do street view” on a paper
map. It’s amazing that Google Maps even exists given the magic paper maps we
already had.

~~~
naikrovek
You missed the point entirely. Nicely done.

------
DrScump
I've never heard that from anybody who _actually used_ computers in 1983.

Latency problems in the web/app world are the result of
adware/bloatware/trackers/etc. and/or poorly implemented frameworks and
stacks, not modern hardware or OSes.

~~~
flohofwoe
I did (well, nearly; I started in 1984 on 8-bit home computers), and the only
thing that was dramatically slower than today in normal day-to-day tasks was
loading data from cassette tapes, floppy disks, or via modem.

Since I've been on computers with hard disks (an Amiga 3000, early to
mid-'90s), it's definitely true that responsiveness either hasn't improved
much or has gotten worse (I remember how confused I was when I first saw an
application with a splash screen to hide its long startup time). This sort of
carelessness has been spreading like a plague ever since.

~~~
yetihehe
I still remember adding a kind of splash/waiting screen to one email
application because The Most Important Feature was executing too fast and
users wouldn't believe it had really worked. Sometimes latency is caused
purely by management.

~~~
jackhack
That exists now in TaxCut software: when it says "checking your return for
possible errors or red flags" or similar, it runs a blinking-lights animation
with arrows for about 10 seconds. The check itself was probably done before
Windows could even blit the image to the screen.

------
henvic
Indirectly related...

I am the one-man team for the CLI (Command-Line Interface) for a new cloud
computing PaaS.

This is a small project (10+ employees, mostly software engineers) within a
larger company (500+ employees).

Sometimes I have a [really] hard time dealing with management and co-workers
because they expect me to mimic the behavior of our GUI console (web-based)
with a level of fidelity that is often just inappropriate for the CLI, or
simply not what you want in a CLI.

Most of this is caused by a lack of experience in both using and writing CLI
tools (I happen to have far more experience than them in this field, having
been a heavy Unix user since I was a kid, along with a deeper understanding of
Unix programming in general). I wish they would at least read "The Art of Unix
Programming" by Eric Raymond, to better respect some of the design decisions I
made.

I am very open to criticism and even enjoyed the unexpected help I received
from the project's UX/UI designer, who was enlisted to help me out (except for
things I had clearly decided against but which were more or less forced on
me).

Here is an incomplete list of things I had to deal with related to
performance/usability:

* (lesser problem) backlash against the language: Go was my choice for this, while many on the team would rather have used Node.js

* lack of understanding that I have to support a vast number of configurations: BSD/macOS/Linux/Windows + different shells + different terminals (and it gets worse/weirder when you talk about Windows), and that some things that look nice on one system might not look the same in most terminals. It is also often difficult to accurately detect which terminal is in use (I am looking at you, Windows Subsystem for Linux, which pretends to be xterm-256color while not supporting much of it).

* colors, colors, colors. The designer wanted colors everywhere: foreground, background, alpha-RGB. You cannot imagine how hard it was to convince them that we were better off sticking with the standard 16 ANSI colors, and that an alpha channel didn't even make much sense, at least while I am working alone

* Explaining why we should stay away from fancy Unicode icons "that only render nicely on macOS iTerm"

* Explaining why colors look different in other terminals (say, Hyper)

* Explaining that I have no control whatsoever over the user's screen background, can't even detect it reliably, and that it wouldn't be nice to do so anyway

* Animations, animations, animations. Why do they expect animations everywhere? How do I explain to them that if I open a URL in the browser, it opens almost instantaneously and needs no animation because the user wouldn't even see it (and then being told to just add a small delay before opening so it appears to happen... fast)?

* Decorations, decorations, decorations. I lost this battle, and the error-handling mechanism now prints a weird symbol on every line of an error. I am actually stripping the decorations away a few pieces at a time and intend to eventually get rid of them entirely by being a rebel and removing them kind of "accidentally" (especially when they get in the way).

* Lack of understanding of Unix programming patterns (filter programs, pipes, redirections, etc. -- see The Art of Unix Programming) [and why I don't accept passwords passed as arguments, even though that would be... soooo easy to use™]

* No respect for spacing in a listing of entries: it is hard to defend that tabular data must be presented as tabular (even though it might get repetitive), or at least have easily parsable separators. One example was a proposal for listing services alongside their projects that would print the project name only on the first service's line [assuming the project above for all the others]. This is a deal-breaker for piping or filtering the output most of the time, and people just want me to ignore that, and feel disappointed with me when I do the right thing (ignore their opinion instead).

* I was told to add heuristics to a status system due to the lack of proper feedback about remote actions such as deploying/restarting/stopping services. This could never work very well, for many reasons: concurrent calls, requests that occasionally took longer or completed in practice in 'the wrong order', and so on. It took more than six months until I got a decent API change that let me trust the status, and six more months until a more CLI-like approach of not doing more than one thing at once was agreed upon (instead of always trying to replicate the blocking GUI behavior).

Many of the things above I managed to ignore, did in a way that was easily
revertible, or just accepted for the sake of my well-being [and
wished/planned/plan to move to a saner approach soon].
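
The conservative color policy argued for above (16 ANSI colors, and only
when clearly safe) is often gated roughly like the sketch below. This is an
assumption about a common approach, not the actual CLI's code; it follows the
informal NO_COLOR convention and only colorizes a real terminal:

```go
// A sketch of conservative terminal color detection: emit the 16 basic
// ANSI colors only when clearly safe, plain text otherwise.
package main

import (
	"fmt"
	"os"
)

// colorEnabled errs on the side of no color: it requires stdout to be a
// real terminal, NO_COLOR to be unset, and TERM to claim capability.
func colorEnabled(isTerminal bool, env map[string]string) bool {
	if _, set := env["NO_COLOR"]; set { // https://no-color.org convention
		return false
	}
	if term := env["TERM"]; term == "" || term == "dumb" {
		return false
	}
	return isTerminal // never colorize pipes or redirected output
}

func main() {
	stat, err := os.Stdout.Stat()
	isTerminal := err == nil && stat.Mode()&os.ModeCharDevice != 0
	env := map[string]string{}
	if term, ok := os.LookupEnv("TERM"); ok {
		env["TERM"] = term
	}
	if _, ok := os.LookupEnv("NO_COLOR"); ok {
		env["NO_COLOR"] = ""
	}
	if colorEnabled(isTerminal, env) {
		fmt.Println("\x1b[32mok\x1b[0m")
	} else {
		fmt.Println("ok")
	}
}
```

Note that this only decides whether to color at all; it says nothing about
the background-color detection problem, which, as above, has no reliable
answer.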

------
jbergens
Yes, some things feel slower today, but not all programs, and it's not always
an apples-to-apples comparison. For example, you should not compare programs
that edit local files, with no network involved, against those that use a
network connection to send or receive data. You could just as well say that it
took hours to get flight prices in 1983, since you had to go to a travel
agency and ask for help; now we can do it from our phones. And there's another
difference right there: there were a limited number of travel agencies and
agents. Maybe only 100 users on the mainframe asked about flights at the same
time; today it may be 10,000 users at once. But if we are talking about UIs,
then it is also interesting to look at how many things you can show on the
screen and change quickly.

I will go on a little rant here. About 5 years ago I wrote a little web
application for phones. I think the Samsung Galaxy S4 was a common phone at
the time, and it is a lot slower than the modern phones of 2017. We used
Angular 1 (React had not yet been released publicly) and created a
single-page application with offline storage in the browser using IndexedDB.
The response time for most things was fast enough not to cause any real
complaints from our users. Switching between views was fast, and changing the
contents of lists was also quick. Actually saving to the server took some time
(anywhere from a few seconds up to about 30s), and we had to build a progress
bar to keep users calm, but they only had to see it once every 15 minutes or
so. One reason this worked was that the offline storage was much faster than
the network (3G or 4G GSM) to the server; another was that a mobile screen can
only hold a limited amount of data at one time. It was still HTML, CSS, JS,
and the DOM, all running on a slow phone. These things _are_ slower than some
more native technologies, but you can get enough performance for many apps
even with HTML and JS if you avoid networks or hide the latencies.

Another project, even earlier, used server-generated HTML and a relational
database. When we ran it, including the server, on our developer laptops,
which were much slower than today's laptops, the response times for simple
save calls were below 100ms. It was so fast that we feared users wouldn't
understand that the data had been saved. We added a simple timestamp, shown
when the save was done, to make it clearer. This was once again without a
network (for our trials).

Some UIs are harder to get good performance from. A word processor with
WYSIWYG might have to reflow almost all the text when something at the top of
the screen changes, and games might have to repaint a lot of small items on
the screen at once. For these types of systems HTML+CSS might be a bad fit,
but for most enterprise and productivity applications it should be possible to
get good enough performance today. We don't always get it, partly because we
are optimizing for development time instead. Developing an offline storage
solution takes more time, and if you make many network calls in a way that
freezes the UI, the users will notice.

And as I started with: if you only have 100 users, the backend might be very
responsive even if you don't optimize it much. If you have over 10,000 users
at the same time, you might have to do a lot of optimization on the backend as
well.

------
jlebrech
CPUs are faster, but they also have the NSA inside of them.

There was a point where MHz stopped making a computer faster; it was
multitasking, I reckon. At 100MHz a DOS application/game was blazingly fast.
But nothing was bound to the network, either; it would just spin up a CD-ROM
to get the information, and at 150KB/s that was faster than most internet
connections.

