x264 is kinda absurdly good at compressing screencasts; even a nearly lossless 1440p screencast only averages about 1 Mbit/s. The only artifacts I can see are due to 4:2:0 chroma subsampling (i.e. color bleed on single-pixel borders and such), but that has nothing to do with the encoder, and would almost certainly not happen in 4:4:4, which essentially nothing supports as far as distribution goes.
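To put that 1 Mbit/s into perspective, here's a rough back-of-the-envelope sketch (assuming 2560x1440 at 30 fps with 8-bit samples; the numbers are mine and purely illustrative) of the uncompressed data rate and of how much chroma data 4:2:0 discards relative to 4:4:4:

```python
# Back-of-the-envelope numbers; resolution, frame rate, and bit depth are assumptions.
width, height, fps, bit_depth = 2560, 1440, 30, 8
pixels = width * height

# Samples per pixel: 4:4:4 keeps full-resolution chroma (3 samples/px);
# 4:2:0 stores chroma at half resolution in both dimensions (1 + 2 * 0.25 = 1.5).
raw_444 = pixels * 3.0 * bit_depth * fps   # uncompressed bits per second
raw_420 = pixels * 1.5 * bit_depth * fps

encoded = 1e6                              # ~1 Mbit/s, from the observation above
print(f"uncompressed 4:4:4: {raw_444 / 1e6:5.0f} Mbit/s")
print(f"uncompressed 4:2:0: {raw_420 / 1e6:5.0f} Mbit/s")
print(f"compression ratio vs 4:2:0 raw: ~{raw_420 / encoded:.0f}:1")
```

That works out to a compression ratio north of 1000:1 on this kind of content, with barely visible artifacts.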
> I used a Satechi USB-C power tester and measured an 8% peak power savings using UASP. That means you'd get 8% more runtime on a battery if you do a lot of file transfers.
No no, even better! Peak power consumption is lower, but the same work is completed much more quickly due to increased throughput, so the energy required for the same work is decreased dramatically. Between the performance increase and the lower power usage I wouldn't be surprised if this reduces energy use by 50 %.
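A quick sketch of the arithmetic behind that guess (the 8% figure is from the measurement quoted above; the 2x speedup is an assumption for illustration, not a measurement):

```python
# Rough energy arithmetic for "lower peak power AND finishes sooner".
baseline_power_w = 1.0   # normalized power draw of the non-UASP transfer
uasp_power_w = 0.92      # 8% lower peak power (measured, per the quote above)
speedup = 2.0            # assumed: UASP finishes the same copy twice as fast

baseline_energy = baseline_power_w * 1.0          # energy = power * time (normalized)
uasp_energy = uasp_power_w * (1.0 / speedup)

print(f"energy relative to baseline: {uasp_energy / baseline_energy:.0%}")
# -> 46%, i.e. roughly the ~50% reduction guessed at above
```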
Yeah. This kind of savings comes up in interesting places elsewhere: if you make your algorithm run twice as fast but use twice the memory, your time-integrated memory use (measured in gigabyte-seconds) stays the same. So if you can make good use of that memory while your algorithm isn't running, you're saving CPU and not really using more RAM.
For something like the Pi this is unlikely to matter, but in data centers with big, latency-insensitive distributed workloads, "using memory for less time" can be a real win. (The battery example with the Pi really is the perfect example though, because there you're usually interested in the time-integrated value more than the peak draw.)
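To make the gigabyte-seconds point concrete, a tiny sketch (all numbers invented for illustration):

```python
# Illustration of "time-integrated memory use" (gigabyte-seconds).
slow = {"runtime_s": 60.0, "peak_mem_gb": 2.0}   # original algorithm
fast = {"runtime_s": 30.0, "peak_mem_gb": 4.0}   # 2x faster, 2x the memory

for name, job in (("slow", slow), ("fast", fast)):
    gb_seconds = job["runtime_s"] * job["peak_mem_gb"]
    print(f"{name}: {gb_seconds:.0f} GB*s")      # both come out to 120 GB*s
```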
It depends a lot on the workload, though. It's hard to draw solid conclusions from that kind of statistic, beyond “it uses less CPU and less overall power during IO activity”; you'll save some amount of energy, but how much depends on what you're doing.
Some of the successful large-socket-count servers, specifically the SGI UltraViolet (UV) series (now sold by HPE, AFAIK), actually use NUMAlink: a much-updated version, but the basics remain the same as on the MIPS and Itanium systems.
Enter Germany, where it was decided that DAB is not a federal matter, so you basically have 16 small states with entirely different stations available on DAB. That threw away the one real advantage DAB could have had, just to auction off the same frequency band a couple more times.
CRTs are "dumb" devices, they literally just amplify the R/G/B analog signal while deflecting a beam using electromagnets according to some timing signals. As far as input lag goes, they're the baseline. For fast motion they have some advantages at leat over poor LCD screens as well, since non-strobing LCDs quite literally crossfade under constant backlight between the current image and the new image; we perceive this crossfading as additional blurring. A strobing LCD on the other hand shifts the new image into the pixel array and lets the pixels transition while the backlight is turned off. The obvious problem - it's flickering.
LCDs that aren't optimized for low latency will generally buffer a full frame before displaying it; coupled with a slow panel, these will typically have 25-35 ms of input lag at 60 Hz. LCDs meant for gaming offer something called "immediate mode" or similar, where the controller buffers just a few lines, which makes the processing delay irrelevant (<1 ms). The image is effectively streamed through the LCD controller directly into the pixel array.
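A rough sketch of where those numbers come from at 60 Hz (the line count assumes a 1440p panel, and the slow-panel response time is an illustrative guess):

```python
# Rough numbers behind "full-frame buffering vs. a few lines" at 60 Hz.
refresh_hz = 60
frame_ms = 1000 / refresh_hz                    # ~16.7 ms per refresh
lines = 1440                                    # active lines on a 1440p panel (assumed)

full_frame_buffer_ms = frame_ms                 # controller holds one whole frame
few_lines_buffer_ms = frame_ms * 4 / lines      # "immediate mode": ~4 lines buffered (assumed)
slow_panel_response_ms = 12                     # assumed sluggish pixel transition

print(f"full-frame buffer + slow panel: ~{full_frame_buffer_ms + slow_panel_response_ms:.0f} ms")
print(f"few-line buffer:                ~{few_lines_buffer_ms:.2f} ms of processing delay")
```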
> Animations and intentional delays. It can't be overstated how much faster a machine feels when something like MenuShowDelay is decreased to 0, or the piles of animations are sped up.
These animations effectively increase the input lag significantly. Even with them turned off there are extra frames of lag between a click and the updated widget fully rendering.
(Everything below refers to a 60 Hz display)
For example, opening a combo-box in Windows 10 with animations disabled takes two frames; the first frame draws just the shadow, the next frame the finished open box. With animations enabled, it seems to depend on the number of items, but generally around 15 frames. That's effectively a quarter second of extra input lag.
A menu fading in takes about 12 frames (0.2 seconds), but at least you can interact with it while it's partially faded in.
Animated windows? That'll be another 20-frame delay, a third of a second. Without animations you're down to six, again with some half-drawn weirdness where the empty window appears in one frame and is filled in the next. (So if you've noticed pop-ups looking slightly weird in Windows, that's why.)
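For reference, converting the frame counts above into wall-clock time on a 60 Hz display:

```python
# Convert the frame counts mentioned above into wall-clock time at 60 Hz.
FRAME_MS = 1000 / 60    # ~16.7 ms per frame on a 60 Hz display

cases = {
    "combo-box, no animation": 2,
    "combo-box, animated":     15,
    "menu fade-in":            12,
    "window, no animation":    6,
    "window, animated":        20,
}
for name, frames in cases.items():
    print(f"{name:24s} {frames:2d} frames = {frames * FRAME_MS:4.0f} ms")
```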
I assume these two-frame redraws are due to Windows Widgets / GDI and DWM not being synchronized at all, much like the broken redraws you can get on X11 with a compositor.
> USB is polling with a fairly slow poll interval rate (think a hundred or so ms).
The lowest polling rate typically used by HID input devices is 125 Hz (bInterval=8), while gaming hardware usually defaults to 500 or 1000 Hz (bInterval=2 or 1). Most input devices aren't a major cause of input lag, although curiously even a number of new products implement debouncing incorrectly, which adds 5-10 ms; rather unfortunate.
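For full-speed USB HID endpoints, bInterval is specified in 1 ms frames, so the polling rate falls out directly (high-speed devices use a different, exponential encoding, which I'm ignoring here):

```python
# Polling rate from a full-speed USB HID endpoint's bInterval (in 1 ms frames).
def full_speed_polling_hz(b_interval: int) -> float:
    return 1000 / b_interval            # one poll every b_interval milliseconds

for b in (8, 2, 1):
    print(f"bInterval={b}: {full_speed_polling_hz(b):6.0f} Hz, "
          f"worst-case added lag ~{b} ms")
```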
> For example, opening a combo-box in Windows 10 with animations disabled takes two frames; the first frame draws just the shadow, the next frame the finished open box. With animations enabled, it seems to depend on the number of items, but generally around 15 frames. That's effectively a quarter second of extra input lag.
This isn't usually what I think of when I think of "latency." Latency, to me, is the time between when the user provides input and when the system recognizes the action.
This becomes especially problematic when events get queued up and the extra latency causes an event to attach to something that is now in a different state than the user perceived it to be when they made the input. For example: double-clicking on an item in a window you're closing, right after telling the system to close the window. You saw the window as open, but your event's processing was delayed until after the window finished closing, so now you've "actually" clicked on something that was, at the time, behind the window.
On the other hand, the type of latency you're talking about—between when the system recognizes input, and when it finishes displaying output—seems much less troublesome to me.
We're not playing competitive FPS games here. Nobody's trying to read-and-click things as fast as possible, lest something horrible happen.
And even if they were, the "reading" part of reading-and-clicking needs to be considered. Can people read fast enough that shaving off a quarter-second of display time benefits them?
And, more crucially, does cutting that animation time actually let users read the text faster? Naively you'd assume it would, but remember that users have to move their eyes to align with the text before they can start reading it. If the animated version "snaps" the user's eyes to the text more quickly than the non-animated version does, then in theory the user with the animated combo-box might actually be able to select an option faster!
(And remember, none of this matters for users who are acting on reflex; without the kind of recognition latency I mentioned above, the view controller for the combo-box will be instantly responsive to e.g. keyboard input, even while the combo-box's view is still animating into existence. Users who already know what they want don't need to see the options in order to select them. In fact, such users usually won't bother to open the combo-box at all, instead just tabbing into it and typing a text prefix to select an option.)
The "latency caused incorrect handling" is IMO the worst thing in all the "modern desktop is slow" complaints.
I can deal with 9 seconds of latency if I can mentally precompute the expected path taken and the results match it. That comes from being familiar with what you're doing, and can be compared to using Vi in edit mode with complex commands.
I can't deal with 400 ms of lag if the result is that a different action than the one I wanted gets executed.
I disagree. What you describe is a problem of interaction design founded on bad assumptions; with good interaction design I don't have to show the user that the computer is doing something for the user to be able to tell it happened. This is a problem of the system not showing its state transparently and relying on the user to notice a change in hidden state indicated by a transient window.
Windows Explorer gets your particular example right: When you copy a bunch of files into a folder, it will highlight all of the copied files after it is done, so it doesn't matter if you saw the progress bar or not.
Ehhh, not really. There was some FPGA mining in Bitcoin, but it was always niche rather than at scale.
FPGAs are a lot better than GPUs from the perspective of energy efficiency, but the FPGAs themselves are very expensive to purchase and a lot more specialized than GPUs. ASICs showed up before FPGAs ever really took off, and even if ASICs hadn't shown up, it's not clear that FPGAs would have been superior to GPUs on a total-cost basis except in niche circumstances.
What do you mean by "more specialized"? FPGAs can implement any logic circuit, whereas GPUs are optimized for highly parallel arithmetic computations. I don't understand how that makes FPGAs "more specialized"; it seems like they are in fact more general.
Historically, voice encryption was politically meant only for state use, with strict controls; us plebs got either no voice encryption at all or only very weak encryption. Compared to encryption on the internet, this state of affairs has persisted longer in telecommunications. Even in new communication standards, the encryption options generally offer security that is weak or irrelevant compared to the modern standard of end-to-end encryption.