Seriously the worst image viewer I've ever seen.
P.S. not from MS
"git-torrent is lowing your system down, do you want to disable it?"
Another says: careful what you wish for; you don't want everything to be locked down like Apple, do you?
Finding that file via Explorer search takes 10 minutes. Via dir, it somehow takes 10 seconds or less.
And of course, the one time this bit me was when I issued ‘rm -rf *’, suddenly realized the command was taking waaaay too long, ctrl-c’ed it, and felt the blood drain from my face as I realized I had just lost a quarter of my MP3s.
Not a bad first text editor though. Cut my programming teeth with it.
The most impressive, simple piece of software I've tried is a search tool called Everything.
I thought search was just hard and slow. Everything indexes every drive in seconds and searches instantly. I imagine it must be used by law enforcement to deal with security by obscurity.
1. A large hires screen so I can see lots of context
2. Lots of disk space
3. Online documentation available
4. Protected mode operating system
6. Collaboration with people all over the world
The productivity destroyers:
1. social media
>The productivity destroyers:
> 1. social media
stares at HN page
Having a real mode operating system (DOS) means that an errant pointer in your program could (and often did) crash the operating system requiring a reboot. Worse, it would scramble your hard disk.
My usual practice was to immediately reboot on a system crash. This, of course, made for slow development.
With the advent of protected mode (DOS extenders, OS/2), this ceased being a problem. It greatly sped up development, and protected mode found many more bugs more quickly than overt crashes did - and with a protected mode debugger it even told you where the bug was!
I shifted all development to protected mode systems, and only as the last step would port the program to DOS.
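To make the failure mode concrete, here's a minimal sketch (my own illustration, not from the comment above) of the kind of errant pointer being described:

```c
int main(void)
{
    int *p;   /* the errant pointer: never initialized */

    /* Under real-mode DOS this write silently scribbles wherever p
     * happens to point - possibly over DOS itself or the interrupt
     * vector table - and the crash surfaces much later, far from
     * the bug. Under a protected-mode OS the same instruction traps
     * immediately, and the debugger points at this exact line. */
    *p = 42;

    return 0;
}
```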
> 1. social media
2. Project Managers
We've made trade-offs in the computer space: input latency and screen rendering (also in terms of latency) have suffered badly at the hands of throughput and protocol agnosticism (USB et al.). An early-80s microcomputer would do something like this:
Poll the keyboard matrix for a key press.
Convert the key press coordinate to ASCII.
Read the location of the cursor.
Write one byte to RAM.
Results will be visible next screen refresh.
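As a rough sketch (the memory-mapped addresses below are invented for illustration, not any real machine's map), that entire path fits in a few lines of C:

```c
/* Hypothetical memory-mapped I/O for an early-80s micro;
 * all addresses here are made up for illustration. */
#define KBD_STATUS ((volatile unsigned char *)0xC000) /* bit 7: key ready */
#define KBD_DATA   ((volatile unsigned char *)0xC001) /* matrix position  */
#define SCREEN_RAM ((volatile unsigned char *)0x0400) /* text framebuffer */

static unsigned cursor;  /* cursor location; step 3 just reads this */

/* Stand-in for the real lookup table mapping a matrix position
 * to an ASCII code. */
static char matrix_to_ascii(unsigned char code)
{
    return 'A' + (code & 0x1F);
}

void key_to_screen(void)
{
    if (*KBD_STATUS & 0x80) {                 /* 1. poll for a key press */
        char ch = matrix_to_ascii(*KBD_DATA); /* 2. convert to ASCII     */
        SCREEN_RAM[cursor++] = ch;            /* 3+4. read cursor, write
                                                 one byte to RAM         */
    }
    /* 5. the video circuitry shows it on the next screen refresh */
}
```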
A modern PC and OS would do something more like this:
The keyboard's microcontroller will poll the keyboard matrix for a key press.
Convert the key press location to an event code.
Signal to the host USB controller that you have data to send.
Wait for the host to accept.
Transfer the data to the USB host.
Have the USB controller validate that the data was correctly received.
Begin DMA from the USB controller to a RAM buffer.
Wait for the RAM to be ready.
Transfer the data to RAM.
Raise an interrupt to the CPU.
Wait for the CPU to receive the interrupt.
Task switch to the interrupt handler.
Decode the USB packet.
Pass it to the USB keyboard driver.
Convert the USB keyboard event to an OS event.
Determine what processes get to see the key press event.
Add the event to the process's event queue.
Task switch to the process.
Read the new event.
Filter the key press through all the libraries wrapping the OS's native event system.
Read the location of the cursor.
Ask the toolkit library to draw a character.
Tell the windowing system to draw a character.
Figure out what font glyph corresponds to that character.
See if it's been cached, rasterize glyph if it's not.
Draw the character to the window texture.
Signal to the compositor that a region of the screen needs to be redrawn.
Create a GPU command list.
Have GPU execute command list.
Results will be visible next screen refresh.
I could drag this out longer and go into more detail, but I don't really feel like it.
I'm sure people who actually work on implementing these things can find inaccuracies with this, but it should give an idea how much more work and handshaking between components is being done now than in the 70's/80's. Switching to gaming hardware isn't enough to get down to ye olde latencies.
Yet there is 1000 Hz USB, and optimizations for all of the above, and e2e lag might soon become a solved problem; then, if we're lucky, it'll percolate down to consumer stuff eventually.
Anecdote: I can't count the number of times I have seen a team change a color, update a logo, or move an image a few pixels, resulting in happy clients/customers and managers sending a congratulatory company-wide email. Meanwhile, teams solving difficult engineering problems may have garnered a quiet pat on the back, if they were lucky.
IMO not just that, but also that the sale happens very early - before people get a chance to discover the UI is garbage. What's worse, in work context, a lot - probably most - software is bought by people other than the end users. Which means the UI can be (and often is) a total dumpster fire, but it'll win on the market as long as it appeals to the sensibilities of the people doing the purchase.
I think at this point we're trading performance for a bunch of ultimately worthless bling bling. No one's added a damn thing that improves user productivity.
That sounds like an extremely unhealthy business environment. It'll also leave you with just the worst engineers who cannot find a better job. A company that doesn't do this should be able to run circles around one that does.
Your friend is wrong. It's an imperfect proxy, but looking at programs that do work, speed is a good proxy for quality, because speed means someone gives a damn. There are good programs that are slow, but bad programs all tend to be bloated.
Of course "speed" is something to be evaluated in context. In a group of e.g. 3D editors, a more responsive UI suggests a better editor. A more responsive UI in general suggests a better program in general.
> this (speed) is never a measure of a program's quality. Is this universally accepted?
Universally? No. It all depends on who you ask. Companies tend to say speed isn't, but the truth is, a lot of companies today don't care about quality at all - it's not what sells software. If you ask users, you'll get mixed answers, depending on whether the software they use often is slow enough to anger them regularly.
Especially when you’re doing the same task “template” on a day to day basis, even 1 second per input adds up quickly.
In many cases I'm happy with simple and slow, as long as it's fast enough.
Regardless of anything else, this is 100% happening to me on a regular basis. And the ironic thing is that I think it is caused by the attempt to speed up getting some results onscreen. But it’s always 500ms behind, so it “catches up” while I’m trying to move the mouse to click on something.
HTML was designed for static documents, it boggles my mind that things like nodejs were created. It's not a secret.
HTML techs can't even run efficiently on a cheap smartphone, which is the reason apps are needed for smartphones to be usable.
Every time I'm talking to someone for job offers, I state that I want to avoid web techs. No js, no web frameworks. I prefer industrial computing, to build things that are useful. I don't want to make another interface that will get thrown away for whatever reason.
Today the computing industry has completely migrated towards making user interfaces, UX things, fancy shiny smoothy scrolly whatnots, just to employ people who can't write SQL. Companies only want to sell attention. This is exactly what the economy of attention is about.
All I dream about is some OS, desktop or mobile, that lets the user write scripts directly. It's time you encourage users to write code. It's not that hard.
It is. Try teaching coding to someone non technical, especially someone that doesn't want to learn, and by the time you get them to understand what a variable is, you will fully understand that coding is not for everyone.
What he suggested wasn't viable in terms of productivity either. One may be a programmer but not want to spend time administering the insignificant parts of the system.
I never understood the culture of elitism in system micro-administration by hobbyist crowd.
That's largely the fault of ads. Some well placed JS stuff is lightning fast, even on mobile.
> I despise the web.
The web used to be great. I think you're despising something else.
> HTML was designed for static documents, it boggles my mind that things like nodejs were created. It's not a secret.
Someone already told me we were heading this way in 2005, using JS to write apps inside web pages. It boggled my mind then, but it hasn't really boggled since. My main worry (and sadness) was that JS was such an utterly shitty programming language back then. It was something you loved to hate, writing JS functionality for web pages was almost like a boast; look at the trick I can make this document do by abusing this weird little scripting language.
But that has changed, oh boy has it changed. Almost all the warts in JS can be easily avoided today. With the addition of the `class` keyword (standardizing the already possible but hacky class-like constructs), the arrow functions, and the extreme performance increase in current engines, it's actually become one of my favourite languages to code for. But don't worry I don't use it to write bloated web apps :)
> HTML techs can't even run efficiently on a cheap smartphone, which is the reason apps are needed for smartphones to be usable.
That's not the reason why apps are "needed". It's simply because it allows for more spying on the user. A website can only do it when it's loaded; an app can do it all the time, periodically, on boot, or whenever. They get a neat device-global tracking ID (and more than enough fingerprinting info, just in case), which makes tracking super easy and the advertisers happy. They don't have to bother with that EU cookie-permission-banner law, because apparently the EC didn't realize that apps are being used to do everything the advertisers want that websites can't. Cookies are child's play compared to the trackers they can insert at elbow depth.
And the few apps which are native because web-app performance doesn't suffice tend to actually be about performance, and are not so bad in the bigger picture.
You see the same thing on the web though, all the bloat and slowness and shit is caused by ads and tracking. Normally we fight against industries that are a net negative on society. Except that ads happen to be equivalent to propaganda, and tracking happens to be equivalent to surveillance, so somehow there's not a lot of push from the powers to get rid of these things, because of how convenient it is this industry just builds the infrastructure for them. They especially like it when the tracking is sent over unsecured HTTP.
That said, there should be sufficient work for a qualified engineer to write code for industrial applications, no? Many web devs can only write PHP/JS framework code. If you know how to program industrial controllers, or have similar qualities and experience with various industrial systems, I doubt you're going to have to explain to anyone that you're not a web dev ...
Part of the issue stems from the "strong data coupling" that's all the rage. Everything on the page should correlate at any given point in time. Add a character to a search box and the search results should be updated. The practical effect of this is that any single modification could (and often does) rewrite the contents of the entire page.
The other thing the article brings up is that developers and designers often disregard input flow. This may be partly driven by not having sufficiently dynamic tooling (Illustrator can hardly be used to design flow patterns, for example).
These two issues have a unifying quality: Websites must be "instagrammable", which is to say look good in single snapshots of time, and the dynamics take a serious back seat.
I thought the entire point of react was that it _doesn’t_ rewrite the entire page (DOM diffing)?
Not because I particularly doubt that expectations on the wider web are different to HN. Just that the crowd here isn't going to be able to easily understand the people out there. Anyone who expects interaction was weeded out long ago.
To get meaningful answers about what kind of UI people prefer, you'd have to sit a lot of them in front of several different interfaces, show them around, and then let them use those interfaces for prolonged amount of time, and then - only then - ask which one they prefer. But this almost never happens in the wild, so the market is completely detached from what people want.
They won't get it with React. And I'm referring to both how fast the webapp runs and development speed. It gets too complicated too quickly, even for relatively small sites.
I should do a thorough writeup of the infinite loop issue in React.
Most usage of React is for web apps. For most of those, a native UI makes no sense and has massive distribution implications.
Like the first Mac retailed for $2500 US. Go spend $2500 on a PC today, you'll have a great time.
Granted, economies of scale make this kind of a dumb argument. But it has a bit of truth to it. People are just less willing to spend as much on their machines, as well as push much more limited platforms like mobile to their limits. We should definitely deal with that as developers, don't get me wrong - but not having to deal with the optimizations they dealt with 40 years ago doesn't make me unhappy.
I have a top of the line Intel processor that’s less than 2 years old (launched, not bought). A 970 Evo Pro that’s one of the fastest drives around. 32 GB RAM (don’t remember the speed but it was and is supposed to be super fast).
Explorer takes a second or two to launch. The ducking start menu takes a moment and sometimes causes the entire OS to lock up for a second.
The twitter rant is spot on.
There’s so much supposed value-add BS that the core usage scenarios go to shit.
And this is coming from a Product Manager. :-)
Anyway, the referencing problem is painful. I feel it often. Google Maps or Apple Maps: try to plan a vacation and mark interesting places on it to identify the best location to stay. Yup, gotta use that memory. Well, isn’t that one of the rules of UX design: don’t make me think?
Regarding OSes: storage has gotten so much faster and CPUs haven’t, so storage drivers and file systems are now the bottleneck. We need fewer layers of abstraction to compensate. The old model that IO is super slow is no longer accurate.
I'm writing this on an AMD Phenom II, running Debian and StumpWM, that's over 10 years old. I've upgraded the hard drive to an SSD, and the memory from 8 GB to 16 GB (4 GB DIMMs were very expensive when I first built it), and it's as fast as can be.
My work computer is much newer, has twice as much memory and a newer Intel processor, and I really can't tell the difference except for CPU bound tasks that run for a long time, like compiling large projects.
Fellow X220 user here... a solution for this exact problem - where the system runs out of memory and you sit there staring and waiting until it churns long enough to do stuff again - is to run earlyoom.
It will kill off the firefox process (or whichever is the main memory hog) early, which is also annoying but less so than having to wait minutes until you can use your computer again.
Anyway, while every version of Windows I have used has become inescapably crudded up and slower over time, on Linux, even the old laptop, the only thing that got slower over time was the web browser. Which has mostly to do with webpages becoming heavier.
Actually Win95, because I can't remember if this also happened on Windows 3.11 and the like.
I am a first-year CS student. When I got my first laptop recently, I went crazy and installed Debian (I had some prior experience with the command line); it didn't work very well on the laptop. All DEs except Enlightenment (yeah, I even tried it) had lots of display-related glitches due to the cheap hardware.
Then I moved on and installed Fedora. Nothing to tweak from the CLI. Just changed a few settings from the GUI, and peace of mind even on relatively obscure hardware.
It has been vastly simplified and worth it for anyone in IT / CS related fields.
2 GiB total is not a lot
This is one of the things that made me ditch Windows when it came out, but I was pretty sure they would have fixed it now. Now I'm convinced Windows 10 is part of an authoritarian experiment in getting populations to gradually submit to a worse quality of life.
Privately it was so easy to ditch. (I still have it on dual-boot but rarely use it, so every 2-3 boots it needs to update for long minutes while I wait. Meanwhile Mint updates the kernel during operation while I barely notice at all.)
Modern hardware introduces a significant amount of latency; it's important to differentiate throughput from latency. A modern computer would crush an Apple IIe in throughput a million times over, maybe more, but that doesn't mean its pipeline is shorter.
I see latency as a silent killer, of sorts. For instance, if you introduce a tiny bit of mouse latency, users won't notice the additional latency, but they will sense that their mouse doesn't feel quite as good. Give them a side by side comparison, and I bet most will be able to tell you the mouse with slightly less latency feels better.
This extends to everything. Video games with lower latency appear to have better, smoother controls. Calls with less latency result in smoother, more natural conversations. Touch screens with less latency feel more natural and responsive.
(I only have anecdotal evidence of this, but I am absolutely convinced of it.)
They are literally not. At all. You're way off. For anyone who cares about latency, you gotta be sub 50ms at least. For anyone doing generic not latency-sensitive work, maybe you can get away with 100ms, but that's stretching it.
200-250ms is the (purposefully built-in) latency with which an autocomplete may appear while typing. Not the latency for a single character or mouse click!
Where do you get 200ms latency anyway? That's a lot
I disagree. I have such a PC (64 GB of RAM, Quadro GPU, SSD, etc.) and I absolutely do notice things being slow, even things like Word, Excel, and VS code, let alone resource-intensive professional software.
I know from experience, the most godlike PC you can possibly build does virtually nothing to make common applications less laggy.
The common denominator there is browser tech & I think that will improve with time. And network-delivered services like Google Maps & Wikipedia are best compared to CD and DVD-ROM based services like MapPoint and Encarta, which had their own latency and capacity challenges.
In the meantime, you can still use tools like vim for low-latency typing. And it’s kind of interesting to see a Java GUI (IDEA) perform as well as it has (https://pavelfatin.com/typing-with-pleasure/).
Browser-based apps are a shitshow though, but I figure that's mostly out of anyone's control. I chalk that up to the browser being fundamentally a poor place for most applications, even ones that are tightly coupled to a server backend.
I bet at the time you barely noticed though.
I recently read a history of early NT development, and then installed NT4 in a VM to play with, choosing a FAT disk. It is /extremely/ responsive. Much more so than the host OS, Windows 10.
The NT4 and 95 shells were tight code. They were replaced a few years later by the more flexible "Active Desktop". This was less responsive.
In later releases, Windows started to incorporate background features, such as automatic file indexing. File indexing is IO intensive and hammers your CPU cache.
When I was regularly using NT4 (years ago), I had an impression that there was some overhead caused by registry searches. If this was ever a thing, improvements in raw computing power have conquered it.
If anyone else wants to try, NT4 and VC++ cost me next to nothing on amazon. For a good editor, get microemacs. Python2.3 works. (Don't let it near an open network.)
It's hard to find an excuse, considering:
- Adobe has vast resources
- Photoshop is a mature piece of software
- It's image editing, not a complex video game (look at what something like Red Dead Redemption 2 can accomplish with every frame, @ 60FPS)
Both Adobe’s, and their customers.
At a certain level, when a graphic designer complains that Photoshop is too slow, they don’t push back against Adobe for optimizing poorly, they just buy a new computer.
On the laptop I'm typing this on, Windows Explorer often takes several seconds to open.
And why should they? Today's smartphones are much more powerful than the most powerful supercomputer of 1983. Computers have been powerful enough for most practical purposes for years, which means most people select on price rather than power. And then a new OS or website comes along and decides you've got plenty of power to waste on unnecessary nonsense.
Please stop blaming the consumers, they have very little freedom of choice.
> as well as push much more limited platforms like mobile to their limits.
I don't think anyone has really pushed any recent smartphone to their limits. I haven't checked if any demoparty maybe had a smartphone compo, but if they didn't, then yeah nobody has really tried.
The C64, Amiga and early x86 PCs have been pushed to their limits though, squeezing out every drop of performance. And there still exist C64 scene weirdos that work to make these machines perform the unimaginable.
Smartphones haven't been around long enough and have been continuously replaced by slightly better versions, that really nobody has had time to really find out what those machines are capable of.
> but not having to deal with the optimizations they dealt with 40 years ago doesn't make me unhappy.
I used to have to deal with such optimizations and I totally get that. It's freeing and I occasionally have to remind myself what it means that I don't have to worry about using a megabyte more memory because machines have gigabytes. Except that a megabyte is pretty huge if you know how to use it.
But not having to deal with the optimizations also means that new developers never learn them, and they will be forgotten. And that's bad, because there's still a place for these optimizations: 95% of the code doesn't matter, but for that 5% of performance-critical stuff, if you've only learned the framework, then you're stuck and your app's gonna suck.
It's kinda weird to optimize code nowadays though. At least if you're writing JS. It's not like optimizing C or machine code at all. If you're not measuring performance, 99% sure you'll waste time optimizing the wrong thing. Sometimes it feels like I'm blindly trying variations on my inner loop because sometimes there is little rhyme or reason to what performs better (through the JIT). Tip for anyone in this situation: disable the anti-fingerprinting setting in your browser, which fuzzes the timing functions. It makes a huge difference for the accuracy and repeatability of your performance measurements. Install Chromium and only use it for that, if you worry about the security.
Going from low clocked memory to high clocked memory can cost a bit of money (last I looked, it was like a 30-50% premium going from 2666 to 3200 to 3600MHz). As well, if you're comfortable, tightening the CAS timings on your memory can see noticeable improvement in memory bound applications. I personally have measured a 25% performance increase once my memory profile for 3200 was set correctly (mostly a Ryzen thing) and just upgraded to 3600 and haven't tested, but in my larger projects with tons of in-memory code I'm noticing improvements.
Iterating over a loop can be a world of difference depending on what is happening in the loop and what vector instructions your CPU supports, and how well it is supported. As well as your CPU's clock, L1/L2 cache sizes... basically everything.
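For a hedged illustration of how much "what is happening in the loop" matters, the two functions below (my own example, not from the comment) do identical arithmetic over the same data, but their memory access patterns interact very differently with the cache and prefetcher:

```c
#include <stddef.h>

/* Row-major order: consecutive addresses. The prefetcher keeps up and
 * the compiler can vectorize the inner loop. Largely limited by memory
 * bandwidth, which is where RAM clocks and CAS timings show up. */
long sum_by_rows(const int *m, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            s += m[i * n + j];
    return s;
}

/* Column-major order over the same data: each access jumps n*4 bytes,
 * so for large n nearly every load misses cache. Same number of
 * additions, dramatically worse wall-clock time. */
long sum_by_cols(const int *m, size_t n)
{
    long s = 0;
    for (size_t j = 0; j < n; j++)
        for (size_t i = 0; i < n; i++)
            s += m[i * n + j];
    return s;
}
```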
I hate software.
- it's not bloody obvious how they work - randomly clicking on meaningless icons, trying to uncover functionality.
- then just as you get used to it, they change it!
My biggest feature request would be a key stroke to hide all the floating crud that is obscuring my view of the map!
But dude, DESIGN. The design. Look at those rounded corners.
Selling a software release is a one-time payment. Selling a support subscription is recurring revenue. And if you make your software horrible enough to use without the support subscription, it is automatically immune to piracy.
As a practical example, I don't know anyone who uses the free open source WildFly release. Instead, everyone purchases JBoss with support. It's widely known that you just need the paid support if you want your company to be online more than half of the day. And as if they knew what pain they would be causing, their microservice deployment approach was named "thorn-tail".
Remember when software was stored on floppies? It took a while to load. And every application came with different behavior and key bindings.
The computers are faster, can do more stuff, and monitors have higher frame rates. But for many applications that aren't games, latency and non-responsive UIs are a growing problem.
I couldn't type up handwritten notes reliably, because half a page in, I would fill up the buffer and characters would get dropped.
I don't think I could have typoed this, there must be a spell checker somewhere that I haven't disabled...
The article argues that the keyboard is better interaction hardware than the mouse. Google Maps doesn't work exactly as he wanted. Popups everywhere, etc.
15 years ago I would have been waiting 20 minutes for a single song to download on a hard wired PC.
Well, Moore's law is falling by the wayside. If they want to start doing more with less, the software guys are going to have to stop using interpreted languages, GC, and passing data as JSON rather than as binary - all that overhead that's de rigueur but doesn't directly go to getting the job done.
"What Andy giveth, Bill taketh away"
But the main problem seems to be a lack of a clear architecture in many systems. These systems have often accumulated so much technical debt that nobody understands why they are slow. Profiling and optimization might remove the worst offenders but usually don't improve the architecture.
Basically, in the software industry, we use the hardware gains to cover up our organizational deficits.
You can write seven layers of lagging crap in c if you like.
- extra branching in the parsing code (the parser cannot predict anymore what the next field will be; they could be in any order)
- extra memory allocations, decreased memory locality (due to variable-length/optional fields, and also the tree-like structure).
So if your data consists of a single object composed of a fixed set of little-endian integer fields, you're comparing the above costs to the cost of a single fread call* with no memory allocations.
* many other data formats provide similar flexibility, text-based ones (XML) and also binary-based ones (IFF, protobuf, ISOBMFF, etc.)
Don't write it as such though; you must write the endianness-decoding code (which the optimizer should trash anyway on little-endian architectures - e.g. LLVM does).
Still not very readable.
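For concreteness, a minimal sketch of that fixed-layout case (the Record fields are invented for the example): one fread, no allocations, and portable little-endian decoding that a good optimizer collapses to plain loads on little-endian targets:

```c
#include <stdint.h>
#include <stdio.h>

/* Fixed layout: three little-endian u32 fields, no tags, no
 * variable-length parts. Field names are made up for the example. */
typedef struct {
    uint32_t timestamp;
    uint32_t user_id;
    uint32_t flags;
} Record;

/* Portable little-endian decode. On a little-endian target an
 * optimizer like LLVM recognizes this shift-and-or pattern and
 * collapses it to a single 32-bit load, so the "decoding" is free. */
static uint32_t read_le32(const uint8_t *p)
{
    return (uint32_t)p[0]
         | (uint32_t)p[1] << 8
         | (uint32_t)p[2] << 16
         | (uint32_t)p[3] << 24;
}

int read_record(FILE *f, Record *r)
{
    uint8_t buf[12];
    if (fread(buf, sizeof buf, 1, f) != 1)  /* one read, no allocations */
        return -1;
    r->timestamp = read_le32(buf + 0);
    r->user_id   = read_le32(buf + 4);
    r->flags     = read_le32(buf + 8);
    return 0;
}
```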
Twitter takes that away because it offers a UX that makes publishing your random ideas too easy. People with low self-control will create threads like this. With hundreds of likes comes self-validation, so they keep doing it.
The same people who are telling me that their computers are slow are the same people who need a flashy animated button for every single action and the same people who refuse to understand that passwords are not just a formality.
To each his own.
Computers have gotten much faster in terms of raw speed and throughput, yet that hasn't translated into much of an improvement in basic UI interactions and general functioning.
I'm here trying to write in LibreOffice a several-page document with minimal formatting, using a high-spec CAD laptop/workstation -- and every damn keystroke is laggy!
My muscle-memory arrow-keys & quick moving around to edit portions of sentences, merge/split lines -- all rendered useless - because I need to wait for the cursor to catch up.
A part of my mind keeps wandering off to whether I should go setup another old DOS box and load XYWrite - which was feature-rich, always lightning-fast and never laggy and worked great. Of course, the lack of printer drivers...
In every area, the software developers just squander more of the processing power than the incredible continuing hardware advances provide.
Anyone have any advice on software that at least attempts to work closer to the metal and lets us see the performance that we should see from modern hardware (for all values of modern)?
I got out of the software industry because of this trend to building on multiple layers of squishy software, instead of requiring efficiency - this framework compiles to that pcode, talking to the other API, which gets thunked down to the other ..... where the h*!? is the hardware that does some actual computing?
It seems like it happened for the same reason that high-fire-rate rifles took over military use -- because they figured out that most troops couldn't actually shoot straight, so it worked better to just let them spray bullets in the general direction than to require & teach real skills.
Similarly, this whole morass seems designed to make it easy for mediocre programmers, and programmers that learn something more serious like Haskell are considered exceptional, while the bulk of stuff is written for
I also totally agree with his complaint about things disappearing when you try to click on them. Most of his rant is just about how crap Google Maps is, though.
Even many frameworks for iPhone and Android that are essentially web apps are terrible, and make every app slow and prone to missing clicks.
On the latest and most powerful iPhone, no less.
If you are creating a product as a web app only, you are telling me you do not care about UI enough.
Programs have never been free of bugs or issues, but we never had the situation where every app, even though it technically works, is either sluggish, breaks somehow, or requires the user to learn intricate timings to use a simple UI.
Developers took the easy way out and used these frameworks because it's simple and, they feel, good enough.
And here we are.
The profession needs to actively fight Moore's law in order to keep our jobs relevant, and find more "work" to do - most of which is not only poorly engineered, but culturally destructive.
If you care about this, there's a tiny community of developers that actually care about reversing it: https://handmade.network/
Computer latency: 1977-2017 (https://danluu.com/input-lag/)
1. The "release early, iterate often" culture - as it encourages half-assed software to flood the marketplace.
2. Poor or non-existent incentives for proper code maintenance.
That is blaming the wrong aspect of agile development. Software should fail early. It is wasteful to go through a lengthy process only to find out the final product either is incompatible with the market or someone else made a tool that already dominates the niche market.
The problem most of the time is not realising you are actually selling a prototype, so what should be a proof of concept ends up in production.
Google Maps has improved the primary complaint here. You can now search along your route.
They say: type two words and push F3. Well, you can implement a telnet (or SSH) service which provides such a program.
Or, better maybe, what I thought of as an "SQL Remote Virtual Table Protocol": you can access remote data using local SQL, allowing you to cross-reference data, both within the same data source and across different ones.
Of course, there is still going to be network latency regardless of what you do. But many local programs are still slow (as many comments mention), also due to doing too many things, I think. (Maybe network latency makes it a bit slow even if you use telnet to implement the old interface, but not as slow as with HTML, which is just bad for this kind of thing.)
Modern user interfaces, I think, are also bad, and make things slow.
I hate touch screens, and hate the mouse slightly less. Command buttons and toolbar icons are bad and the keyboard is better, I think. There are some uses for the mouse, but it is way overused.
Don't get all defensive: take this as a boatload of opportunities to make things better.
I woke up the other day to find my mouse broken, and believe me, on macos, it's very hard to do anything without the mouse. I had to look up all sort of crap from my phone just to find out how to reboot the thing.
Is it? I tried your example on Windows and I could shut off the computer easily (alt-f4), then I opened up a browser (Windows button, type chrome), navigated to your post, wrote this message and logged in without touching the mouse. I've found that you can navigate most websites without a mouse, as you can just move to the links by using the browser search and then click them with ctrl-enter.
Edit: I even managed to go back and edit this message without touching the mouse.
And there are many ways to restart the Mac using shortcuts:
I found it funny that this appeared on Twitter, a website which always slows down my browser, especially in VMs.
The best-selling computer in history is still alive:
New cases: https://shop.pixelwizard.eu/en/commodore-c64/cases/90/c64c-c...
New keycaps: https://www.indiegogo.com/projects/keycaps-for-your-commodor...
New software: https://csdb.dk/
It's happening boys, back to the future!
SSDs are fast.
ROM is faster. That's what microcomputers booted off in 1983.
Whereas an IBM PC booting into DOS in 1986 took, sure, seconds, but a lot more seconds. You could read a lot of the messages as they scrolled by during boot.
To get to a BIOS configuration screen now, you need to independently research the key that will bring it up and memorize it. Then you have to frantically mash it during the whole very brief boot process, because there's only a split second during which it will actually work. It used to just be a boot message. When you saw the message, you had time to hit F12 or whatever.
Windows now by default has "quick startup", which effectively logs the user out, kills their apps, and hibernates.
Beware if you dual boot and want to access the Windows files, or if your machine does not handle hibernation well.
Actual startup probably takes more like 20-40 seconds
This is not true. Are you still using a platter hard drive? (If an SSD, have you looked up benchmarks for it?)
My ~5 year old laptop used to cold boot Windows 7 in less than 10 seconds (once I'd disabled most autostarting programs, at least). It currently cold boots Ubuntu in ~5 or so; most of that time is spent displaying the UEFI and Grub splash screens. This is made possible _almost entirely_ by a Samsung Evo; I'm looking forward to getting an M.2 drive when I replace the computer.
Internally my computer tells me the process takes about 5 seconds from OS start to the graphical environment, but in reality there are several steps. For example, this doesn't account for the time between hitting the power button and the OS itself starting to run, or entering the full-disk-encryption password and unlocking the volume.
I would be surprised if a full restart actually took so short. Maybe not loading a menu or unlocking a volume is sufficient to explain the difference?
I will not start to count the seconds and fight over which OS boots faster, but it is certainly much faster than it was in the nineties. Boot times are certainly one thing where modern computers have significantly improved. Everyone who compares an instant-on 8-bit is oversimplifying things. Try booting to, e.g., GEOS on one of those.
Startup finished in 8.878s (firmware) + 1.666s (loader) + 1.592s (kernel) + 3.265s (userspace) = 15.403s
graphical.target reached after 3.176s in userspace
Why? By the time you're logging in, booting is already finished.
My Chromebook boots in seconds, with full GUI and everything, and is usable. My Windows desktop boots in seconds and is usable. I'd say anyone claiming the Apple II was faster is comparing apples (ahem) to oranges. In no way did an Apple II provide a faster user experience for anything compared to modern machines.
> I make no secret of hating the mouse. I think it's a crime. I think it's stifling humanity's progress, a gimmick we can't get over.
Does the world's typical computer user today hate the mouse, and prefer a keyboard-only interface (CLI)? No -- in fact, command-line interfaces are less discoverable and harder to use, starting out. Even as a programmer, I struggle to remember the flags to many common command-line utilities.
Sure, the author's example of a cashier's checkout console might be great as a text-only interface -- cashiers use it day-in, day-out, and can learn all the keyboard shortcuts in a day. But what about the self-checkout machines that shoppers use maybe once a week? Would you rather have every person have to learn a list of keyboard commands while navigating a two-color interface?
Does the modern web poorly serve the author, who's good enough with technology to master any UI? Sure!
But the modern web works better for the billions who otherwise would not have started using it in the first place
We need to start talking about expected utility.
For software that's used briefly and once in a blue moon, it's perhaps not worth the effort to make the UI particularly ergonomic. Most web pages fall into this category - the random e-commerce shop or pizza delivery service you're using today. It would be nice if the UI wasn't actively user-hostile, but it's not critical.
The problem is with software used regularly, for extended periods of time. Like, during a work day. A very large part of the world's population interacts with software at work. A lot of them sit in front of a small set of programs 8+ hours a day, day in, day out. For example, a word processor + e-mail program + IM + e-commerce platform manager + inventory manager. That software needs to be as ergonomic as possible, otherwise it's literally wasting people's life (and their employers' money). Such software needs to be keyboard-operable, otherwise it's just making people suffer.
A lot of software falls into this category. If you're doing a startup that is meant to, or even conceivably can be used in a business, you probably have some full-time users. You probably want those full-time users. If so, then for the love of $deity make it more like that old DOS POS than the hip mobile-on-desktop web garbage. Otherwise you're wasting people's health, money and sanity.
Not to disagree with the general point you're making, but autocompletion of commands using the tab key is how CLIs get discoverability, and it's kind of cool.
Whenever I "don't know" I just 'cmd -<tab><tab>' and suddenly I am presented ith a list of options that I can filter by continuing to type the option I suspect I need, or tab to the one that I see on screen. Then if that requires an argument <tab><tab> it let's me select the, for example, file that is needed as the argument.
You assume you already know which `cmd` to type. Most users don't.
It ran very well on a 33 MHz 68040 with 32 to 64 megabytes of RAM.
Seems like a pretty good tradeoff to me.
I'll take a slow autocomplete box of all the world's knowledge over a lightning-fast lookup of my local files in a single directory any day.
YouTube does this in video descriptions and comments. If I'm scrolled down there to read comments, maybe I want to actually just read them and not click to read them?
Reddit does this. I don't know what reason I have for reading a thread of comments other than reading comments, so why do I have to keep clicking read more?
Twitter evidently does this. If I'm reading a thread, why do I need to click to read more? And after a couple dozen posts, click again. In this case, it also seems to expand the unread posts above the point where you're currently scrolled to, so you have to scroll back up and manually figure out where exactly the last post you read is and where the new stuff begins.
Many shops do this, by cutting product descriptions at a few lines so you can't read what the product is all about without clicking read more.
And they do the same thing with reviews.
I'm really tired of clicking read more over and over again in places where reading is the whole point!
1. In an attempt to improve perceived performance on initial load, a decision was made not to load all content at once. In the case of a Twitter thread containing potentially hundreds of items that's reasonable; for product descriptions, less so.
2. The widget being used to display a product description on the product page itself is also used elsewhere on the site, but in a context where space is constrained to fit a grid. They got round that with a read more link.
3. Sadly the most likely for product descriptions, in an attempt to determine customer interest an arbitrary cut off was chosen for how much of a description is shown. Metrics are then tracked on which descriptions are expanded, and taken as a proxy for customer interest in those products.
In an age where web pages are several MB large, bandwidths surpass 100 Mbps, and GPUs alone have 11 GB of RAM, we can't render more text on a screen.
This page right now contains around ~8 kB of pure text, and everything is already expanded, as opposed to most other comment sites. I'm aware that formatting, layout, data modelling, messaging, etc. increase that amount, and that's fine; I'm just baffled that it's possible to have a slower experience with perceivably the same amount of brain-data as 20 years ago, but with hardware that is magnitudes better.
We shouldn't lose this much to UI fluff.
If we wanted to do some work around getting more text to users and improve the reading experience of sites with at least a portion of them designed for that purpose, we certainly could.
I think web devs have collectively broken their brains if they think this.
EDIT: Checking my preferences, there is an option on the bottom that actually is disabled and says the opposite: "Use new Reddit as my default experience". So I guess that's on by default and you have to disable it.
Edit: another of my favorites is a cogwheel icon for the contextual settings, another cogwheel icon on the other side of the screen for the account settings, then a hamburger menu for navigation, then an icon to display all apps (like the numpad icon), and then another menu when I click my profile picture.
Every time I'm looking for a preference it's like it's Easter! :)
Comments don't have ads. They are a feature that costs money but doesn't bring in any.
I think what I really want is something more akin to org mode, where it's expanded/contracted by default (configurable) and when I hit a single key, it expands portions to reveal more detail. I often find when I'm in the middle of a thread, I start to think that I'm wasting my time and want to collapse that thread in some way so I can start searching for where I want to reinsert myself. I rarely want to read every single comment on one of these services. Basically, I think the intention is on the right track, but the execution is poor. Getting it right would be tricky, but I hope someone tries and sets a better bar than the one we have.
Tweets are actually structured as a tree similar to Reddit comments, but while Reddit essentially displays a depth-first traversal, Twitter opted for a breadth-first traversal so it can show all the immediate replies. Almost every "show more" is for going one level deeper in that tree.
(With some caveats/custom rules, at least - if a subtree only has 1 reply it'll often be shown in-line, then there's tweet chains like this one)
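A toy sketch of that difference (the types and fields here are invented for illustration): Reddit's rendering is essentially a recursive depth-first walk, while Twitter shows one breadth level at a time and hides deeper subtrees behind "Show more":

```c
#include <stdio.h>

/* Toy comment tree, purely for illustration. */
typedef struct Comment {
    const char *text;
    struct Comment **replies;
    int n_replies;
} Comment;

/* Reddit-style depth-first display: follow each reply chain all the
 * way down before moving on to the next sibling. */
void show_depth_first(const Comment *c, int depth)
{
    printf("%*s%s\n", depth * 2, "", c->text);  /* indent by depth */
    for (int i = 0; i < c->n_replies; i++)
        show_depth_first(c->replies[i], depth + 1);
}

/* Twitter-style breadth-first display: show every immediate reply,
 * and hide anything deeper behind a "Show more" that descends one
 * level per click. */
void show_breadth_first(const Comment *c)
{
    printf("%s\n", c->text);
    for (int i = 0; i < c->n_replies; i++) {
        const Comment *r = c->replies[i];
        printf("  %s%s\n", r->text,
               r->n_replies ? "  [Show more]" : "");
    }
}
```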