My experience is that everything is slower than when I first started using computers as a child in the '90s, and does its job worse. You don't own anything anymore, upgrades routinely take away features, and nothing interfaces with anything else.
There are a lot of reasons why building an app on a web framework makes sense, but the fact that I routinely lose access to applications I've paid for because the company has gone out of business or stopped supporting them drives me insane. I can still install Excel 97 if I want to, but I have 0 confidence that anything I've bought from Google will work a year from now.
I mean, there is perceptible lag in character entry that never used to happen. Sure, my computer used to freeze inexplicably, but Atom crashes often enough that I don't consider that a win, and text has a noticeable delay before it appears on the screen when using Atom. Text entry is something I can implement flawlessly on a microcontroller and the majority of applications I use on a daily basis get it wrong.
A lot has changed since then, and a lot has become more comfortable. It's also slower, more complicated, and much more annoying. But you do have the power to simply stop using whatever you dislike.
So: don't buy from Google. Build your own cloud. If you don't like an application or it's unstable (Atom), use something stable (vim?). It _will_ be uncomfortable, but it's there.
I'm well aware of the cost of, for example, refusing to use Facebook, but you can still refuse to use it, whereas life without WeChat in Asia is becoming nearly impossible.
EDIT: Sorry for possible misunderstandings; I mixed up three different topics in an answer that was way too short. My point is: there are alternatives, there are choices. You don't like the user interface of something, protest: use something else. There _are_ options, but they are not the most visible, not the most comfortable choices, and many of them will involve setting services up for yourself.
LibreOffice is slow. On a ~2010 business-class Dell laptop that is still in perfectly good physical condition, running Debian 9, with a handful of tabs in Firefox and a mosh session running:
adamantoise:~ geofft$ time libreoffice --terminate_after_init
LibreOffice is monolithically designed, so it shouldn't be taken as representative of the general performance of modern software.
The reason is that StarOffice was envisioned as a desktop environment; therefore, all the components of the suite are loaded on the first startup.
$ libreoffice --writer &                            # first launch loads the whole suite's shared components
$ time libreoffice --calc --terminate_after_init    # with the suite already running, time the startup of a second component
I think the article is very misguided; the problem is in the expectation that everything should be fast, even in contexts that are inherently slower (networks, i.e. the internet), or with features that will inevitably slow down the experience (e.g. browser plugins).
Using the slowest software as a reference - LibreOffice, Atom, Firefox - is nothing but cherry-picking.
From this perspective, a sibling comment is correct - if you want faster alternatives, you have them.
EDIT: not startup time - who cares about startup time? What matters is when you load a few-hundred-page document, or a massive Excel sheet.
re 2: I only wanted to give examples, but you're right, my wording is off.
I wanted to say there are fast and good interfaces: there's vim for the terminal, there's Rainloop for webmail, miniflux for RSS reading - these are all keyboard-oriented ones, all working decently and as fast as they can. In case of web-connected ones, the speed of going back and forth is certainly an issue.
Local - as in not making connections on the network - in my opinion, will always have a speed advantage though.
Slow compared to decent standards of what a word processor needs. For reference, WordPad perceptually opens instantly for me, and has felt like it opened instantly since Windows 95, I'm pretty sure. I am saying that both LibreOffice and Google Docs are examples of software that take way too long to do simple things because they're busy doing complicated things that I don't actually care for. (And arguably this isn't really their fault per se, they're both designed to compete with / follow the expectations set by Microsoft Office of what an office suite is, and those expectations are that all the complicated things are available).
> Local - as in not making connections on the network - in my opinion, will always have a speed advantage though.
Depends what you're doing with the network. I do just about all of my work in the terminal with a mosh connection to a cheap server a couple hundred miles away, and I never feel perceptual slowness from my machine not being local, unless I am literally in the subway or in an elevator. And I actually gain a lot of perceptual speed because I can close my laptop and switch to my phone instantly and see the exact screen where I was.
If you're streaming binaries from the network or trying to pump animations on a 2048-by-1536 true-color retina display, yes, remote is going to be slow. Half the tweetstorm's point is that you shouldn't be designing software to need that in the first place.
And you can use pandoc to get the best of both worlds.
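For example - a sketch of what I mean, with made-up filenames, and the PDF route assumes a LaTeX engine is installed - you keep the document itself as plain text and only generate the heavy formats on demand:
$ pandoc notes.md -o notes.docx    # hand a Word file to whoever insists on one
$ pandoc notes.md -o notes.pdf     # needs a LaTeX engine (pdflatex or similar) installed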
LibreOffice on even midsized spreadsheets is quite slow. Slower than opening them in Google Docs, in fact, which borders on shameful.
It was an example - the slowness (to me) is present for the entire interface. It was just easy to demonstrate. (Also in both LO and Google Docs I can type plain text into an existing document just fine.... I'm curious what you find slower in Google Docs than LO.)
My point is that this rant is not, as far as I can tell, a "local software good, cloud software bad" rant. There are reasons to believe that, e.g. debuggability and computing freedom, but UI speed is not one of them.
I can't speak for the poster above yours, but I would be serious. In my experience, on startup time, Google Docs way outperforms OpenOffice. It's open in single-digit seconds; OpenOffice takes 10s+.
I still prefer offline software, but the bloat can be awful. At least web tech is designed with web-browsing latency expectations in mind. I find most desktop software these days has far larger startup and interaction latency.
Do you not like owning a car/tractor/printer running proprietary software that you're not permitted to modify? I guess you should have bought a 20 year old version and upgraded it yourself.
Do you want a choice of food that you trust won't make you sick? Then you probably should have worked for that lettuce from planting to harvest, not just complained about safety standards.
Total independence from the actions of others isn't impossible, but no one posting comments online is practicing it. And if "working for it" can include learning to use solutions other people have built, then it can also include writing this article on what's changed and what's missing.
If you live in a society, talking about your preferences isn't an alternative to working for them, it's part of that work.
That's survivorship bias. There are plenty of things you can no longer install, and for the things you can still install, it's mostly because someone maintains their API and whatnot (i.e. the company hasn't gone out of business or decided to stop maintaining it). Look at games: GOG has an R&D department to make sure you can run the games on their platform.
I'm pretty sure there's a ton of people here that can talk about some old computer or VM that they have to keep up because of old legacy stuff.
> I mean, there is perceptible lag in character entry that never used to happen. Sure, my computer used to freeze inexplicably, but Atom crashes often enough that I don't consider that a win, and text has a noticeable delay before it appears on the screen when using Atom. Text entry is something I can implement flawlessly on a microcontroller and the majority of applications I use on a daily basis get it wrong.
Who forced you to use Atom? It's your decision to use something still heavily in development, built on new technologies (not meant to be faster or more stable, just easier to develop with). It's like buying a bicycle and complaining it doesn't go as fast as the car you had before.
Now, if there WAS a company paying for all that maintenance it would be Microsoft, with their almost religious commitment to backwards compatibility.
Isn't it the same? Someone directly updating the app to a newer API, or someone updating an API to make sure older apps keep running, is pretty much the same thing in the end. Someone has to do that work.
Someone pays for it, whether it's nerds with their time, or Microsoft with their backward-compatibility support.
Even for Microsoft, at the end of the day, you pay that price through your next Windows license, and still, they may decide not to keep supporting some of it (16-bit programs are no longer supported on 64-bit Windows, for example).
It's not different from a subscription model, except that instead of paying a fortune to support most things (which may not include what you actually need), you pay only to keep support for the product you actually use and need.
I'm not saying subscription models or cloud applications are amazing, but being able to keep running your old apps is not the norm and is mostly made possible because someone pays for it for you. At least in a subscription model, that relationship is pretty direct: you pay for the software to keep working, instead of hoping that Microsoft invests in old tech to make sure it keeps running.
Only for the cloud cohort. Not all of us drink the kool-aid! Try living in a computing environment where the technology-rug isn't pulled out from under you every three months.
Neither Windows nor Linux has had core changes to its API. The fundamentals of application programming haven't changed in decades. Why shouldn't we allow old apps to run?
> you pay only to keep support for the product you actually use and need.
And you pay it over and over again. Until you don't need it (which might be because they removed some feature you were relying on, with no recourse).
> but being able to keep running your old apps is not the norm
Losing access to my old apps isn't the norm, unless I choose to let go of control over my own computer's software.
The most immediately notable part was the typing, which felt completely instantaneous. I haven't found out exactly why, but the switch from serial to USB keyboard interfaces (and the lag therein) as well as the various windowing pipeline "improvements" over the years seem to be in the mix.
Takeaway: today’s keyboards are slow. Even gaming keyboards marketed as fast are slow. There’s more latency just in today’s keyboards than the entire end-to-end latency in an Apple II system from 1977.
But then you have to use an STLink to upload programs.
My life improved considerably when I switched from Sublime to vim, Excel to VisiData, Word to LaTeX (though, I am unsure if LaTeX is completely OSS), etc. I invested the time in learning the language of the tool, but it pays off in droves.
Don't get me wrong: I use OSS wherever possible, contribute where I can, and love the freedom to tinker with the software's innards. But UI just isn't OSS' strong suit. Mainstream Linux desktops are no faster than MacOS, and quality as perceived by end users is far lower.
Some of that is due to missing hardware support, and some of it is a result of OSS's "bazaar model" of development simply not being as good a fit for UIs as it is for CLI applications. But in any case, the forces at play in the market are the same for OSS and proprietary software.
The TeX sources are widely available and (infamously) well documented. The LaTeX macros are merely widely available. I forget what licence they're under, though.
What makes you unsure they're open source?
Well, shrink-wrapped software only needed to be purchased once to have something, and it was amenable to piracy. It also didn't let companies collect, expropriate, and sell off your private data. Hell, once upon a time, people even wanted to take EULA provisions to court as illegitimate, since you couldn't use a product you'd already paid for if you didn't "agree" to a post-purchase "contract".
Something had to be done to make profits sustainable!
Actually, in this case, it was technological advancement coupled with those companies' desire to make profits that 'ruined pretty much everything'. Marketing had nothing to do with it.
In fact, do us a favour and re-read your post, then tell us what parts were ruined by marketing?
It is frustrating that I've got a machine on my desk more powerful than supercomputers a generation ago, and it locks up and can't refresh the screen as fast as my relatively slow typing on a regular basis...
I don't mean to detract from the point, but this piece would be helped so much by properly fleshing it out into an article. Presenting it as a sequence of 92 tweets combines the worst of both worlds.
Wherein I discovered that this particular tweet aggregator could be initiated by a third party (others ... cannot).
We have gigantic, steamrolling, bullet train machines capable of going nearly infinite miles-per-hour. And everyone's "design" solution... is to add their own stops along the way so the train has to come to a complete stop, [un]load some passengers and start chugging along again. And surprise surprise, a gigantic train takes a while to accelerate back up to speed.
We've built faster and faster trains, but filled them with so many stops along the way that it's sometimes faster to take the bus.
When we built out the LucasArts Presidio complex, ILM had several of these husks of rendering past sitting on the loading dock, waiting for the scrap heap.
I still regret to this day I didn’t take one.
Can anyone eli5 why this is?
I typically have thirty tabs open at any given time.
Edit: I mean most computers don't even have 30 GB of RAM, but if you do there's an easy way to check. Open a bunch of tabs and note your system's total RAM usage. Then close Chrome and see how much gets freed up. If it's 30GB then you have a problem :)
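(For the curious, here's a rough way to sum it up on Linux without eyeballing the system total - assuming the processes are actually named "chrome", and keeping in mind that RSS double-counts memory shared between Chrome's many processes, so it overstates the real footprint:)
$ ps -o rss= -C chrome | awk '{sum += $1} END {printf "%.1f MB\n", sum/1024}'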
Gives up and dies
I don't mean to be dramatic but this is a huge part of the problem. How is it that the exact same application (email) ran perfectly well on systems with 1M of RAM (and no virtual memory!) and now it takes two-hundred-and-fifty-fucking-megs?
There's also inflation of the content itself. That same HTML view uses over 100MB to render one email with a lot of pictures.
So maybe Gmail is actually pretty efficient!?
htop reports that the system, idle with nothing running, consumes 996MB of memory -- with one tab open in Chrome it jumps to 1.77GB (out of 16GB).
Now I have 6 open tabs... CNN, HN, Reddit, Reddit, Reddit, Netflix - and just to display these tabs it has allocated 2.5 gigs... I just can't understand how displaying 6 websites requires 2.5GB of RAM.
I'll admit to being stupid on this topic - but I can do maths, and that just doesn't appear to be logical - and I am not complaining, I'm trying to understand. So can anyone ELI5 why displaying the default pages of some of the fastest sites on the internet, Reddit and HN, would consume the resources they do? I'm genuinely curious.
WHY being the operative term here... I just want to know WHY a browser is so memory-intensive for displaying fucking text. The whole point was that in 1983, we had a VT100 or somesuch and they were fast as heck... but now I have to pre-load a shit-ton of ads or something that consume my local resources and ruin my experience? Do we need to punch a designer in the face?
When you throw in the heaps and heaps of different features that must follow volumes of specs, backwards compatibility with previous implementations, and cross-platform compatibility, I think it's pretty safe to say that your average modern browser is one of the most complex - if not the most complex - programs installed on your machine.
Its job isn't small at all.
I rather think that pine, mutt or gnus in a terminal window (or even X) would be faster and better than a web client.
I read my mail in emacs using notmuch these days, and the full text search engine is a couple orders of magnitude faster than Google's Inbox; the complex content rendering is faster than Firefox or Chrome; everything is better and nothing is worse.
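(If you haven't tried it, the same queries work straight from the shell too - the addresses here are made up:)
$ time notmuch search from:alice@example.com and subject:invoice    # full-text search over the whole mail store
$ notmuch count tag:unread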
For example, it was a common piece of advice that if clicking a button opens up a dialog box, there should be a brief animation showing the box expanding from the direction of the button, so the user would associate the clicking of the button with the appearance of the box and understand where it came from. This does actually improve usability the first few times it happens, but over hundreds of iterations these tiny delays really slow down the overall experience of using the interface.
That is amazing!!! I had no idea developer meant "interested user".
> Make sure developer options are enabled. If they're not, go to Settings > About phone, then tap on Build number several times to enable it
> Go to Settings > Developer options, and scroll down to Window animation scale, Transition animation scale, and Animator duration scale.
> Tap on each of the animation options and turn them off.
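(If adb is set up, the same three settings can, as far as I know, be flipped from a shell instead of tapping through the menus - 0 disables the animation, 0.5 halves it:)
$ adb shell settings put global window_animation_scale 0
$ adb shell settings put global transition_animation_scale 0
$ adb shell settings put global animator_duration_scale 0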
I thought the Google Nexus 5 phone would be getting updates often...but I guess not.
It's really good!
Are you talking about this: https://www.cnet.com/how-to/speed-up-your-android-by-adjusti... ?
I set mine to "0.5", and everything's much snappier, but they still play so I still get some feedback.
An example of the way I use workspaces:
- one workspace for terminal, browser with documentation and some text editor
- second workspace for a full-blown IDE, another browser window with documentation and the developed application
It's funny that there are easy solutions to the cut-scene problem. If the user skips a cut scene, there could be a little message saying that pressing some key will replay it. And/or list all the cut scenes up to that point somewhere in the menu. But most of the time cut scenes are there to bring the cinematic experience, and they're an easy and lazy thing to do...
I'm not opposed to animation, I'm opposed to animation that makes the phone unresponsive and not listen to input, or delays the display of information for more than 200-300ms. Unfortunately the "Reduce Motion" only seems to be for some very minor things.
I've tried many from this list https://apple.stackexchange.com/questions/14001/how-to-turn-...
but it seems that most of the commands don't actually have any effect
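(The commands in question are along these lines - commonly suggested defaults-write tweaks - and whether they still do anything on a current macOS is exactly the problem:)
$ defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false
$ defaults write NSGlobalDomain NSWindowResizeTime -float 0.001
$ defaults write com.apple.dock expose-animation-duration -float 0.1 && killall Dock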
I disable most if not all of the animations on my Android phone and Windows desktops for that very reason. The animations make for great tech demos but also make everything feel so much slower when trying to get things done.
But when their timing is tight, and the whole thing happens in under, say, half a second, you get the benefits of that context association and it still feels really cool and high-tech. People just set their animations too slow. (Seriously, nothing should ever take longer than 500ms.)
Now, if that wasn't depressing enough - the Nielsen research is from the early '90s. In the intervening 25 years, users have certainly not become any less impatient.
They could start with 500ms and subtract 100ms from each UI activity after each time it is used until the delay is 0ms.
The user would learn the interactions on the screen first and then not be bothered by the animations later. As a bonus, the user might actually think the program "sped up" after a "break-in" period.
Of course, every site can't be craigslist.
I think pretty much everything in computing is like this. By setting our sights on enabling users rather than training operators, we have not only limited what they are able to do, we have actually turned ourselves into users, and we're basically unable to program without the help of frameworks, libraries and Stack Overflow.
Except usability is getting worse too. There is now less and less content on the average screen and more and more whitespace and "design chrome". Buttons no longer look like buttons, hamburger menus are legion, and many new features are found behind undiscoverable gestures that must be mastered.
I think even making the right decisions in terms of usability, tethered to a few short-sighted user stories, will result in this situation. Usability should be in service of a higher purpose, which should be enabling the user to understand how to use something powerful and flexible that does not constrain them only to what is envisioned by the developer or the UI architect. That's what has happened here with Maps. You can perform User Story #1 or #2 with zero friction. But there is no way to do what @gravislizard wants to do, which wasn't anticipated by Google or specifically designed for.
But then there's Dwarf Fortress, Blender, Vim, Emacs, ...
Accounting for Sturgeon's Law, things aren't quite so bleak.
Saying that as an Emacs User for almost 20 years now. Even today there are things that can bring Emacs crawling to its knees. The moment you start adding auto-complete to a big project, or if you have your files hosted in some NFS server, you're SOL.
No, I haven't bothered with it. :( I did set up GLOBAL, which is quite nice. Though many languages nowadays would benefit greatly from a presentation compiler. Which seems silly, at the extreme. It isn't like projects have gotten more complicated in scope. They have ballooned in implementation complexity, though. :(
I think far too often, people get caught up with the false machismo of, "This is hard to use, yet I can use it, so I'm a badass and you're not."
One reason for that is the "new user experience" for a more powerful system will imply learning some things that are not directly relevant to whatever problem you are trying to solve. Excel worksheets, layers in Illustrator, "command mode" in vi and Blender, etc., are more like gas/brake pedals than they are like "do you want directions to this destination?" I don't see how you can avoid thrusting some learning on new users of a powerful system. Powerful systems are going to have powerful metaphors that are probably not totally intuitive at first.
You can do more stuff with Illustrator than Omnigraffle. It's a more powerful tool. That brings with it a longer or steeper learning curve. But, if you just want to make a few charts, Omnigraffle is easier to use.
I do feel that, over time, we have become very gun-shy about making complex applications that have richer models like this though, preferring to make applications with simpler UX that aim directly at smaller problems. And I think this is partly because UX is more straightforward for smaller problems.
Error messages are very generic and are passed immediately to support (more work, more time for an expert who probably would understand the real error).
Nothing wrong with that but it is not easier to use for me once I am an expert; it is harder and more annoying.
I would like expert mode in many apps.
This experience got me thinking. What has changed in the meantime? Computers have become communication devices and communications devices have become computers (though even this ancient Apple came to me with a modem installed). Developer time has outstripped hardware costs, and somehow an hour of developer time is worth the same as wasting a million hours of the users' time (1 million users x 1 hour or whatever formula you wish).
I think we all recognize this problem, and it's already too expensive to fix. The hardware and software space are too federated, too balkanized, too complicated to ever integrate a system into a cohesive whole in the way the Apple II was.
Key point here. It's hard to get the biz guys and product managers to agree with spending developer hours on performance improvements, because it's hard to measure the effect it has on business metrics. Does making your application take 750ms to launch rather than 8 seconds really bring in more sales? Who knows? Nobody tries to measure it either, so we'll never fix the problem.
Feature cram, on the other hand, is easier to justify, which is why every application eventually ends up with a plug-in interface, theme-able, skin-able, able to read E-mail, and able to interact with Facebook and Twitter.
These costs which accrue to everyone because no one can afford to take it on themselves, I would call externalities in the same sense we do when talking about pollution.
Over the years, I've often thought about why computers never seem to get faster - mostly it is because people have a tolerance for response speeds, and that is unchanging. So software sits somewhere inside that tolerance range, because why be ultra fast when most people don't really care that much?
Plenty of UX research confirms what I have stated. Why does the average webpage take N seconds to load, with a standard deviation of D? Because that's the range most people are OK with, and going faster has rapidly diminishing returns, while going slower lets you get away with a lot of inefficiencies that reduce cost.
I agree with this. A properly and well designed keyboard interface is faster than any mouse. On the other hand, a properly designed mouse interface can be fast too. Both need to be applied when it makes sense.
I can also resonate with the GMaps example; people find it rather ridiculous that I prefer to use pen and map to plan routes but GMaps simply does not cover the complex demands of holiday routes with family.
Edit: Here's another good one I just ran into -- I can't sign out of just one Gmail account. I have to sign out of all my Gmail accounts, and sign back into the ones I didn't intend to sign out of.
Note that if they clear your pins so you have to re-search for something, that's potentially another CPC payment to them...
You can sign into multiple Gmail accounts? How on earth does that work, does it involve the menacing prospect of "linking" them?
This seems like it is a shitty behavior inherent with OAuth. I have half a dozen Microsoft accounts for various things, and you just can't sign out of one and sign into another, unlinked account - things go all sideways and the auth providers that are trying to read your cookies get really confused and go into 401 redirect loops. It's better to burn it down and open a new set of incognito tabs instead.
I suppose that search is effectively a text based app, so that's at least one mainstream terminal-like app.
Then again, computers do so many things in the millisecond range and faster that maybe what we observe IS only a small fraction of the total.
I think I want to do back-end or terminal-based interfaces again. Native interfaces. Mmmmm
If you decide to work on native GUIs, please don't create your own toolkit on the assumption that you can do better than the OS. You'll almost certainly break at least one thing: accessibility for blind people.
Even after disabling any and all caching and minification (read: basically parsing all templates per request, a bit like PHP), the entire render process takes less than 20ms, to the point that I've simply left out caching during some production runs.
I also keep the interface free of any expensive rendering precisely because I don't really need all of the niceties of JS frameworks.
Sure, the page could be fancier, but vanilla HTML5/CSS3/JS can do the job too. (And document.querySelector has replaced jQuery for me.)
If the operation cannot be done quickly -- in a few seconds at most for reasonably interactive work, in a few minutes to, perhaps, a few days for batch -- then it's very, very very rarely done.
If it occurs in less than about 1/10 second, there's not much incentive to try to speed it up, and inefficiencies that bump process time up to even a few seconds tend to creep in. That's despite the fact that there are measurable psychological and outcome differences between interfaces with even 1/10s vs. 1/1000s delays or responses.
The Jevons Paradox plays into this, balancing against Gresham's. Stuff that's cheap (clock time) happens, stuff that's non-discernable (under perceptible / outcomes limits) tends not to get constrained.
There is plenty of interesting computation we could and would do if what now requires a year would only require a second.
So there is plenty of computation being done that only requires milli or microseconds, but due to development practices now stretches just out into the annoying time frame.
(Prime example: the growth of web pages to match increases in processing, RAM, and internet speeds, means that the web is actually slower to use these days.)
Even the fastest computer and the highest-end Internet access can't get you such a huge advantage in performance that it can make the web feel snappy again.
If you don't believe me, install privacy badger/ghostery and ublock origin, visit any popular news/entertainment website, and look at the number of blocked elements/requests as you load a few articles. All of those take time and suck up resources.
Or we just don't do them.
I happen to know quite a bit about front-end technologies, so I'll speak to those. Bootstrap is around 100 kilobytes. One hundred thousand bytes - after gzip and minification. By itself, is it slow? Not very. Bootstrap fans will point you to endless benchmarks and tests showing how little impact it has on performance. Same with React: one hundred and forty-five thousand bytes, after gzip and minification. According to React fans, React is blazing fast!
But development happens, so you throw in a few hip libraries and frameworks and suddenly Slack takes one billion bytes of RAM. Whoops. "But it's not React's fault!" Sure it's not. If React is the only bloated thing on your site, it will work great. But chances are that if you've got React, you've also got Bootstrap, and some visualization things, and some code from Stackoverflow that iterates over your DOM in O(n!^n!). And all of that is how things get slow.
Now, I might be a bit biased, because I spent years of my life working on a CSS framework 100x smaller than Bootstrap, but I think that if everyone spent time optimizing the size of things to be 100x faster we could get back to snappy UIs. Yes, it would be hard, and yes, it would require compromises, but the result just _feels_ good. There's something about a webpage loading in 250ms, or a button reacting as soon as you tap it, that just feels nice. Maybe it means not using React; maybe it means you don't use as many nice-to-have frameworks, but I think it's an achievable goal.
 React fans will point out that if you don't need to interact with the DOM, this gets smaller. Yes, this is true, but obviously for most webpages you kind of need DOM interactions.
But other than that, what did IBM ever do for us?
Quite possibly. Who's going to pay for it, though? And who's going to make sure the frameworks are still easy to use and not buggy?
I think a lot of it boils down to "don't do work you don't have to." But there's nothing about a framework that causes that -- or saves you from it.
It's particularly tragic when the page is only text and images.
OK, I'm curious. What's a legitimate problem where the most naive solution is O(n!^n!)?
The result of this was truly horrific performance. If one node contained thousands of matches, the repeated node removal, copying, inserting, repeat, took more than 30 seconds. 30 seconds on a 4 GHz machine.
Interesting! Can you share that?
Not paying attention to tab order in an enterprise app is a cardinal sin though.
As I'd expect any good programmer to...? As a 100+ wpm touch typist (including symbols + numbers) and former finance professional, I use the top row of the keyboard for numbers.
How is having 8 fingers available that don't have to move much going to be slower than using three that have to move up and down?
I get the point that the programmer wasn't behaving like the users; that's a good critique, but I'm picking up the vibe from this comment that you think the numpad is inherently better for that sort of thing? No way...
Edit: huh, seeing the chain of comments here, I guess I'm in the minority? I don't see how someone can consider them a touch typist (type quickly without looking at the keyboard) and not be able to reach and use numbers and symbols, but maybe a lot of folks here are like that? How do you program with numbers, and ampersands, and parentheses, and asterisks and the equal sign and underscores and all that, having to look down at the keyboard???
It just seems obvious to me that a skilled typist would be much faster using the top numbers than the numpad.
Worse, if you're on a laptop a lot then this habit is kicked out the door. Enabling a numpad on the keyboard is less efficient than just using the top row.
I'm left handed; the numpad is freakin' awesome for number entry. Back when I still played games like Descent and WoW, it was also very efficient to set up the mouse for my left hand and leave my right hand on the number pad. Aim and fire with mouse, maneuver with numpad (in Descent), or action bar items on numpad (in WoW).
I worked at a print shop that switched from DOS to Windows for its POS software; it was an order of magnitude slower. At least we could play solitaire, though.
Presumably this is it. I find typing on numpad is much faster and simpler, but 1) laptops typically don't have one and 2) on normal keyboards it takes my hands away from the main keyboard. Then I switched to a Kinesis Advantage2 and now the numpad is integrated into the main keyboard, using a foot pedal to toggle between main keys and integrated keys.
In PoS, you're not likely to be typing many letters most of the time, so left hand for function keys, right hand for numbers is pretty good. You can move your hands if you move to a search.
For POS and similar applications it actually makes sense to design the UI such that it can be used only with numpad, for example by using +,-,*,/ as function keys for common operations.
It's fascinating how we tend to over-engineer and bloat things.
I mean it's a lot nicer if you look at it from a distance after it's loaded, but it's not as snappy as it used to be.
They all look the same: big images, not much text, the top bar, yeah this newsletter thing to upsell with these crappy (but effective) marketing techniques...
Somehow I feel old-school websites (HN, Reddit, and so on) are more sticky, more addictive, more unique. They focus on what we really want: content and communication with others.
It's often over-engineered bloat that only works for trivial Programming 101 courses (yeah, the famous Employee or Bike class). But in 10+ years of programming, I've found that OOP is just a complete mess (you end up with awful classes in your code like Service, Manager, AbstractFactory, and so on).
I wish we could just use variables and functions, that's all we need really. :)
For most web and business programming I've done, at the end of the day I'm mostly just taking data and transforming it into other data. Most of the time, looking at these programs as a series of functions through which data gets piped to obtain the result works well. As a result, I think that functional programming is a good fit for a ton of code that is written using OOP. For fun, I tried converting some old side projects to F#. Once I'd gotten used to the language, I ended up being able to express the same ideas more clearly, and with fewer lines of code. Functional programming isn't magic, but it can be very effective when used in the right places.
Lately, I've been getting into game development as a hobby. And when I'm simulating a miniature world with hundreds or thousands of stateful actors, I find that OOP works really well - in that the way the code models the game world aligns well with my mental model of the game world. You absolutely can program games using a functional style - I've just found OOP to be a better fit here for my uses. It's still useful to use a functional approach where you can, though, even inside an OOP game code base. John Carmack has written and spoken about this.
You still can. You need decent data structures too, but I never in over 20 years really "got" OOP. It seemed needlessly complicated. Functions and data always seemed to do the job for me.
And the original was not aimed at dealing with the internals of a single program sitting on a single CPU, but grand simulations running on massive clusters. There each "object" could very well be a process of its own, running on its own dedicated hardware.
Effectively OOP became another one of those buzzwords on the bingo board...
Not infrequently: stripping out content, running it in Markdown, and generating a static document (PDF, ePub, text) that I can just fucking read.
Firefox does offer uBlock, at least, which is a massive relief. I also run (on the router) a large, and after significant modularisation by me, flexible dnsmasq blocklist that addresses another large set of issues.
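(The mechanism is just dnsmasq's address= directive; a minimal sketch with placeholder domains - the real list is generated and far longer:)
$ cat /etc/dnsmasq.d/blocklist.conf    # illustrative entries only
address=/ads.example.com/0.0.0.0
address=/tracker.example.net/0.0.0.0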
On desktop (Linux, MacOS), I have and use: NoScript, uBlock, uMatrix, and Stylish, as well as Reader Mode.
It's still often less aggravating to grab source or rendered text and create a standalone Markdown text, for various reasons.
I'm looking into the notion of a browsing mode that presumes the website designer is an idiot and that the HTML markup is at best a hint at what the semantic structure ought to be -- see concepts such as POSH (Plain Old Structural HTML), and ... a few other things. Work in (very slow) progress.
The only reason I bumped into it was that I was toying with Tor for Android, and their fork(?) of Firefox had it bundled.
You can still use server rendered templates and have things load extremely fast. Most sites don't need 3mb of JS to show structured text.
My point is that maybe today's software is not all that bad. We just don't feel the same needs as the 20-year-old guy somewhere at Google who programmed it. And it works the opposite way too. Show a 20-year-old VisiCalc and see what he thinks about it. Or check teens' reactions to Windows 95.
Control? We have less.
Ownership? Not that we ever had it, but now you don't even get a disc. Licenses are even more restrictive.
The pace of 'upgrades'? Way, unnecessarily faster
And when the business dies now? Now you're fucked.
There are stories of mechanics running their shops today with 40-year-old dinosaurs, where their biggest logistical issue is getting old parts to repair when it breaks. It was sad that the magazine article I read that story in laughed at the "backwards" proprietor, when in fact he is a hero for standing up to the current trend of users-as-serfs cloud everything.
Not only that, but the availability of easy upgrades has incentivized companies to sell unfinished software and hope to patch fast enough in response to user complaints, instead of even bothering to actually finish the product as marketed.
How so, given that more people are able to write software, and it's easier to create something than ever before?
"And when the business dies now? Now you're fucked."
How is that any different than before?
And when the business died, their software still ran. Hard-won expertise kept it running. No stupid CEO or developer could stop you, and neither could they pull the rug out from under your feet with a sunset or some other shitty move. You bought your software and that, mercifully, could be the end of your relationship with that publisher.
I guess at some point I need to stop being surprised that HN comments are really no better than Reddit's.
The strict hierarchy of the start menu was admittedly pretty nice, right up until (I think) Windows XP, where they started having the weird bifurcation at the parent level.
Well, as an interface designer who is supposedly in the age category where his best memories are still being created I feel a bit insulted, because from my point of view, a lot of things are pretty bad. Don't get me wrong, things are also a lot better, but there is more to life than HD video, better color fidelity, novelties made possible by virtual reality, and the interconnectedness of the web - even when these are all big improvements over the past! But it does feel like the computer is being domesticated into the new TV.
As far as interfaces themselves go, the main culprit (from a design POV) is almost always the priority of touch-first design, with mouse a distant second, and keyboard input only existing for things that could not be removed by dumbing things down, like input forms.
Now touch interfaces are great in some areas, but any time I have to select/copy/paste text I am painfully reminded of their limits. In general it feels like 90% of the time I am struggling to do things that would be trivial with mouse and/or keyboard. And let's not even go into the lack of tactile feedback.
Even more insulting is that it is not that hard to make an interface that supports different modes of input, or where the potential keyboard input is easy to understand. There even exist modern quasi-innovations like react-select, which uses a select field for mouse and touch and lets you type to autocomplete among available options for keyboard power-users.
Well, it does, and not even browser vendors themselves seem to be fully aware of it. For example: on all platforms that I've tried, a simple radix sort is between two and eighty times faster than the built-in sorting algorithms when sorting numbers, for all array types, regardless of whether it is in-place or a sorted copy. (In the use case for which I investigated it, the improvement is five- to ten-fold, allowing me to do interactive animations where before I had to resort to slow renders.) The difference makes sense for plain Arrays, which cannot assume integer values, but why the heck are the typed arrays so slow? They're plain contiguous memory arrays; compared to all the other browser complexities this is about as simple as it gets!
I could go on for a while, but the point is: it's not like things were better in the past. It's just that a number of these things should have gotten better and instead seem to have regressed. And I know it's a complex combination of many reasons, but it's still saddening to see.
https://run.perf.zone/view/Radix-sort-Uint8Array-loop-vs-fil..., https://run.perf.zone/view/Radix-sort-Uint8Array-100-element..., https://run.perf.zone/view/Radix-sort-Uint8Array-loop-vs-fil..., https://run.perf.zone/view/uint32slice0sort-vs-1000-items-ty..., https://run.perf.zone/view/uint32slice0sort-vs-radix-sort-10...