I'll spend some time looking into iTerm2's latency. I'm sure there are some low-hanging fruit here. But there have also been a handful of complaints that latency was too low—when you hit return at the shell prompt, the next frame drawn should include the next shell prompt, not the cursor on the next line before the new shell prompt has been read. So it's tricky to get right, especially considering how slow macOS's text drawing is.
If I could draw a whole frame in a reasonable amount of time, this problem would be much easier! But I can't. Using Core Text, it can easily take over 150ms to draw a single frame for a 4k display on a 2015 MacBook Pro. The deprecated Core Graphics API is significantly faster, but it does a not-so-great job at anything but ASCII text, doesn't support ligatures, etc.
Using layers helps on some machines and hurts on others. You also lose the ability to blur the contents behind the window, which is very popular. It also introduces a lot of bugs—layers on macOS are not as fully baked as they are on iOS. So this doesn't seem like a productive avenue.
How is Terminal.app as fast as it is? I don't know for sure. I do know that they ditched NSScrollView. They glued some NSScrollers onto a custom NSView subclass and (presumably) copy-pasted a bunch of scrolling inertia logic into their own code. AFAICT that's the main difference between Terminal and iTerm2, but it's just not feasible for a third-party developer to do.
Holy cow! I wonder if iTerm2 would benefit from using something like pathfinder for text rendering. I mean, web browsers are able to render huge quantities of (complex, non-ASCII, with weird fonts) text in much less than 150ms on OS X somehow; how do they manage it? Pathfinder is part of the answer for how Servo does it, apparently.
I'm guessing that you're somehow preventing Core Text from taking advantage of its caching. Apple's APIs can be a bit fussy about their internal caches; for example, the glyph cache is (or at least used to be) a global lock, so if you tried to rasterize glyphs on multiple threads you would get stalls. Try to reuse Core Text and font objects as much as possible. Also check to make sure you aren't copying bitmaps around needlessly; copying around 3840x2160 RGBA buffers on the CPU is not fast. :)
For a terminal, all you really need to do for fast performance is to cache glyphs and make sure you don't get bogged down in the shaper. Pathfinder would help when the cache is cold, but on a terminal the cache hit rate will be 99% unless you're dealing with CJK or similar. There's a lot lower hanging fruit than adopting Pathfinder, which is undergoing a major rewrite anyway.
(I'm the author of Pathfinder.)
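The cold-cache/warm-cache point is easy to see in miniature. Here's a toy sketch of the glyph-cache idea in Python; every name in it is an illustrative stand-in, not Core Text API:

```python
from functools import lru_cache

render_calls = 0

def render_bitmap(font, size, ch):
    # stand-in for the expensive rasterization step (the CTFontDrawGlyphs
    # path, in Core Text terms); we just count how often it runs
    global render_calls
    render_calls += 1
    return (font, size, ch)  # pretend this tuple is a bitmap

@lru_cache(maxsize=4096)
def glyph(font, size, ch):
    # rasterize each (font, size, character) once, then reuse the bitmap;
    # in a terminal the working set is tiny, so hit rates approach 99%
    return render_bitmap(font, size, ch)

# drawing a screenful of mostly-repeated ASCII touches the rasterizer rarely
for ch in "hello world " * 100:
    glyph("Menlo", 12, ch)
```

After the loop, the rasterizer has run once per distinct character (8 times here) while the cache served the other ~1,200 lookups.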
I've confirmed that I always use the same NSFont in the attributed string used to create the CTLineRef.
If you're curious, the text drawing code is in drawTextOnlyAttributedStringWithoutUnderline:atPoint:positions:backgroundColor:graphicsContext:smear: and is located here: https://github.com/gnachman/iTerm2/blob/master/sources/iTerm...
If you're able to spare some cycles, please get in touch with me. firstname.lastname@example.org.
"(I'm the author of Pathfinder.)"
I keep thinking that sometime soon the magic of HN will wear off, or we will hit "peak hackernews" or something like that.
This draws the curves directly in the pixel shader.
TextEdit - http://i.imgur.com/RIDBuKP.png
Xcode - http://i.imgur.com/PYFLOxH.png
SublimeText - http://i.imgur.com/ZhnQR1v.png
CotEditor - http://i.imgur.com/J9TPiO6.png
I switched to iTerm2 recently after repeated Terminal.app slowdowns and crashes, and have no issues doing the exact same things I did in the previous application. Good work.
Wow, what were you doing? I think I've seen maybe one Terminal.app crash in the past few years, and I'm a very heavy terminal user.
I have no idea what the cause of the instability was but I couldn't fix it and couldn't tolerate it.
With regards to the shell prompt, I'm almost positive that Terminal.app is using some heuristic to read the prompt after a <return>. A little experiment with a tunable spinloop and a fake prompt in C suggests that Terminal.app waits for about 1ms after a return character before updating the screen: a delay of 950us produces an "instant" prompt, while a delay of 1050us shows the cursor at the start of the next line. As the article notes, a 1ms delay is not really noticeable, and that kind of delay only has to happen in a handful of situations.
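The spinloop experiment described above is easy to reproduce. Here's a sketch in Python rather than C (the prompt string, delay values, and round count are arbitrary choices); run it in the terminal under test and watch whether the frame after each newline shows the next prompt or a bare cursor:

```python
import sys
import time

def busy_wait_us(us):
    # spin rather than sleep(): sleep granularity is too coarse near 1ms
    deadline = time.perf_counter() + us / 1_000_000
    while time.perf_counter() < deadline:
        pass

def fake_prompt(delay_us, rounds=3):
    # emit a fake prompt, "accept" a return, stall, emit the next prompt
    for _ in range(rounds):
        sys.stdout.write("fake$ ")
        sys.stdout.flush()
        time.sleep(0.2)            # stand-in for the user pressing return
        sys.stdout.write("\n")
        sys.stdout.flush()
        busy_wait_us(delay_us)     # the "slow shell" computing its prompt

if __name__ == "__main__":
    # try values around 1000 us to find the terminal's cutoff
    fake_prompt(950)
```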
There are a lot of reasons, mostly font rendering, Unicode handling, and all the effects, while pure text is simply copying memory areas and nothing more.
You only need to be as fast as one screen refresh period, 1/60th of a second usually, plenty of cycles on modern CPUs. Many games that are very complex graphically do it without problem.
Rendering 60fps (or even 1,000fps) is not remotely the same as achieving low latency.
Even if a game is rendering at 60fps, input latency (the time between a user clicking a button and something happening on the screen) is often over 100ms.
This article breaks it down... there's a link to it in the original story about terminal latency posted by OP:
Your reply also reads like games are getting away with introducing 100s of milliseconds of additional latency, which just isn't the case for the most part. There is hardware and kernel buffering that userland software can do little about, admittedly, that can easily add up to getting you above 100ms as you say. Even there, some improvements can be made:
Gsync reduces latency by getting rendered frames displayed as quickly as possible, by tying refresh rates to frame rates. "Time warp" reprojects rendered scenes shortly before refresh to help reduce latency between head movements and screen movements in VR - but this is also effectively "improving" throughput and worst case latency, by ensuring there's always a new head-rotated frame - even if the main scene renderer wasn't able to provide a full scene update yet. There's some talk of going back to single buffering and chasing the beam, although I'm skeptical if that'll actually happen for anything but the most niche applications.
Moreover, text mode display hardware does not absolve one of Unicode handling.
Will you be releasing a beta version for users to test, or will any updates go straight to the main release version?
On a related note, I am big into latency analysis and driving down latency in interactive systems. I'm quite familiar with the touchscreen work cited at the top, and having played with the system I can attest that <1ms latency feels actually magical. At that level, it really doesn't feel like any touchscreen you've ever used - it genuinely feels like a physical object you're dragging around (the first demo of the system only let you drag a little rectangle around a projected screen). It's amazing what they had to do to get the latency down - a custom DLP projector with hacked firmware that could only display a little square at a specified position at thousands of FPS, a custom touchscreen controller, and a direct line between the two. No OS, no windowing system, nada. After seeing that demo, I can't help but believe that latency is the one thing that will make or break virtual reality - the one thing that separates "virtual" from "reality". I want to build a demo someday that does the latency trick in VR - a custom rig that displays ultra-simple geometry that has sub-millisecond latency to human head movement. I will bet that even simple geometry will feel more realistic than the most complex scene at 90 FPS.
On the one hand, I have to agree that Terminal.app is quite good and very impressive. I don't bother with a third party terminal application and I do everything in the terminal.
However, one of the very valuable things about working in the terminal is the safety and immunity that it provides. No matter what bizarro virus attachment you send me, I can text edit it in a terminal without risk. There's nothing you can paste to me in irc that will infect or crash my computer.
Or at least, that's how it should be.
But the trickier we get with the terminal - the more things it does "out of band" and the more it "understands" the content of the text that it is rendering, the more of this safety we give up.
Frankly, it bothers me greatly that the terminal would have any idea whatsoever what text editor I am running or that I am running a text editor at all. It bothers me even more to think that I could copy or paste text and get results that were anything other than those characters ...
Make terminals fancy at your peril ...
I'm not sure what you mean by this. Terminal.app doesn't know that you're running a text editor. It does know that you're running a process called 'vim', which is kind of magic, but not too much (ps has always been able to show what processes are attached to a given tty, for example). If you're referring to the parent comment's "full mouse support for apps like vim", they just mean it supports "mouse reporting" control sequences, which date back to xterm. If anything, Terminal.app is late to support this (only in the latest release, whereas alternatives like iTerm have supported it for ages).
> It bothers me even more to think that I could copy or paste text and get results that were anything other than those characters ...
Well, the terminal has to interpret escape sequences for colors and such in order to display them, so why shouldn't it also preserve that metadata when copying and pasting? Like any other rich text copy+paste, it will only be kept if you paste into a rich text field; discarded if you paste into a plaintext field.
That said, there are a few 'odd' things Terminal.app supports: e.g. printf '\e[2t' to minimize the window. (This also comes from xterm.)
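For illustration, the interpretation step is small for the common case. A minimal sketch of parsing SGR color sequences (only the `\e[...m` family; a real terminal handles far more):

```python
import re

# CSI ... m (Select Graphic Rendition): the escape sequences behind colors
SGR = re.compile(r"\x1b\[([0-9;]*)m")

def plain_text(s):
    # what a plaintext paste target should receive
    return SGR.sub("", s)

def sgr_codes(s):
    # the metadata a rich-text copy could choose to preserve
    return [m.group(1) for m in SGR.finditer(s)]
```

For example, `plain_text("\x1b[31mwarning\x1b[0m")` yields just `"warning"`, while `sgr_codes` recovers the `31` and `0` parameters a rich-text paste could keep.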
I'm afraid there aren't many of us who care about the subtle details of interaction that mean the most to one's experience. Working in audio, I know the difference between hitting a button and hearing a sound 40ms afterwards versus 4ms afterwards. I would much prefer the 4ms, even if it means sacrificing half of the system's features.
I feel like such a product will never reach the market, because the market will think they need lots of features, which results in a sacrifice of latency and other UI consistencies. There's always some developer writing the weakest link of an otherwise perfect system. For example, the CPU/GPU hardware, kernel, and browser's accelerated rendering are all engineered with millions of man-hours to be as blazingly fast as possible, and then a web developer comes along and puts a single setInterval() call in their online game or something, and all the optimization benefit goes to the trash. Or, because animation is a trend in UI design right now, developers purposely put in hundreds of milliseconds of delay between common actions like minimizing windows, switching desktops, scrolling, opening/closing apps on mobile, etc.
Basically in order for your dream of true virtual reality to be achieved, the principles and respect for low latency has to be maintained across the whole system's stack, especially the higher-level parts.
I've dabbled in music, and I can fully agree with the audio latency comment. Human hearing is exquisitely tuned for latency (probably since it's integral to direction-finding), so even the slightest delay is noticeable. Hitting a note in an orchestra even a few ms late makes you stick out like a sore thumb.
iTerm2 is vastly better on mac, but still far inferior to gnome-terminal.
And: what's with "disk contention"? How do terminals cause that, and how is it relevant to the discussion?
At the end of the day, there is a trade-off to be made. Terminals (or any program, really) can have 1-frame input latency (typically 1/60 sec) if they give up v-sync and accept tearing, or they can have a worst-case 2-frame input latency with v-sync, and then you're looking at 2/60 sec, or ~33ms.
The way I understand it, triple buffering adds latency in exchange for higher framerates when you can't hit the display's refresh rate.
Double buffering renders the next frame while displaying the current.
That results in one frame of latency from input.
Triple buffering adds another frame to the queue, resulting in a 2 frame lag.
With double buffering the framerate gets cut in half if it cannot meet vsync, with triple buffering it can also get cut in thirds. So double buffering is 60 -> 30, where the frame lasts 2 refreshes. Triple is 60 -> 40, where one frame is displayed for 1 refresh and another is displayed for 2.
Nowadays it's probably better to use adaptive vsync, which simply disables vsync when the framerate drops. This will reintroduce tearing, which might be preferable in fast action games.
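The 60 -> 30 vs 60 -> 40 arithmetic above can be modeled in a few lines. This is a simplification (it ignores render-ahead queues and treats render time as constant), but it captures why double buffering quantizes framerate to integer divisors of the refresh rate while triple buffering doesn't:

```python
import math

def effective_fps(render_ms, refresh_hz=60, buffers=2):
    refresh_ms = 1000.0 / refresh_hz
    if render_ms <= refresh_ms:
        return refresh_hz                      # hitting vsync either way
    if buffers == 2:
        # double buffering: a missed vsync means waiting a whole refresh,
        # so every frame occupies an integer number of refreshes
        return refresh_hz / math.ceil(render_ms / refresh_ms)
    # triple buffering: the renderer is never blocked (there's always a
    # spare back buffer), so throughput is simply 1/render_time
    return 1000.0 / render_ms

# a 25 ms frame: 60 -> 30 with double buffering, 60 -> 40 with triple
```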
"In triple buffering the program has two back buffers and can immediately start drawing in the one that is not involved in such copying. The third buffer, the front buffer, is read by the graphics card to display the image on the monitor. Once the image has been sent to the monitor, the front buffer is flipped with (or copied from) the back buffer holding the most recent complete image. Since one of the back buffers is always complete, the graphics card never has to wait for the software to complete. Consequently, the software and the graphics card are completely independent and can run at their own pace. Finally, the displayed image was started without waiting for synchronization and thus with minimum lag.
Due to the software algorithm not having to poll the graphics hardware for monitor refresh events, the algorithm is free to run as fast as possible. This can mean that several drawings that are never displayed are written to the back buffers. Nvidia has implemented this method under the name "Fast sync"."
Triple buffering uses more display buffer memory, but roughly the same cpu/gpu load as vsync-off. It's great for latency. It makes a whole lot of sense. It's been around for like 20 years. But you rarely see it used ...
I think it's funny to have the suckless project page for st go on and on about how XTerm is clunky, old, and unmaintainable, when the result of this small and clean minimalist terminal is a clear loser in terminal performance, which subconsciously and consciously detracts from the experience.
XTerm has logic for handling partial screen updates and window obscuring that other terminals don't bother with, because it was written in an era when these weren't mere 10-50ms delays but 100+ms delays. Anyone who used dtterm on a Sun IPX knows what I'm talking about.
I'm also a Terminal.app user.
> alacritty and terminal.app are fast enough that they’re actually limited by the speed of tmux.
Initial testing has shown it not to (noticeably) impact perf in our highly unscientific benchmarks.
I'm a native Windows user these days, so I can't use it quite yet, and as such, have fallen behind the times on news.
When new Mac users ask for general app recommendations, they often seem to get immediately steered away from Terminal.app and into iTerm. I'd understand this phenomenon if T.app was horrible, but it's rather good!
(I have a theory about the long shadow of Internet Explorer causing the use of stock OS apps to subconsciously feel passé)
And iTerm still has feature edges e.g. truecolor, or better multiplexer support (Terminal only supports vertical pane splitting).
I'm 90% sure that's not accurate. It's had tabs for quite a while.
I don't think that's true. I definitely had tabs in Lion (10.7) and I'm pretty sure they're at least as old as Leopard (10.5) and maybe older.
Here I don't really see what you mean, since linux-st is the terminal emulator with the lowest latency (comparable only to alacritty), so it looks like it's simple (though not that simple) and quite fast (on Linux).
If you double-check the plots, you will notice that "linux-st" isn't performing badly at all. The author also suspects XQuartz as one reason for the higher latency, which makes absolute sense.
It's handy for keeping "tail" or "watch" commands or similar visible — the same reasons people use tmux, tiling window managers, so on.
The UX isn't perfect, but it's useful enough that I've stuck with iTerm despite the lower performance (and bugs — it's pretty buggy, and the main author rarely seems to address Gitlab issues).
iTerm has other nice features. It can run without a title bar (saves space), it does cmd-click-to-open-file, and it has a lot of customization options. I don't really use most of the features; the tiling aspect is the main feature I rely on.
My setup is to run MacVim on the left half of the monitor and then iTerm2 on the right half. iTerm is then split into generally three horizontal splits.
I love the tmux integration, I used tmux before anyway and it's honestly not that different if you used tmux's built in mouse support but focus follows mouse in terminal panes is a nice touch.
Displaying images inline is alright I guess but I don't actually use it that much.
There's a bunch of stuff listed on their features page that sound useful but I don't actually use (yet). Idk I suppose I haven't noticed any appreciable difference in speed.
As far as I know, in Terminal you can't use cmd as the meta key, which immediately kills it for me as an emacs user (furthermore, in iTerm2 you can set it up so that left cmd = meta and right cmd = cmd, which I find very useful).
In my view, simplicity often leads to better performance as a side effect -- but of course there are many exceptions.
Nevertheless, I wouldn't start optimising software unless the software is really unusable. Optimising software to look good in rare corner cases is not a good idea imho, if the price is adding a lot of complexity.
It's a really helpful benchmark, IMO, as it's the main problem I see with different terminals. On a chromebook, most SSH clients are effectively useless because if you accidentally run a command that prints a lot of output (even just 'dmesg'), the terminal locks up for a huge amount of time, seconds or even minutes. You can't even interrupt the output quickly.
I appreciate that it's a different problem to the latency that the OP is trying to measure, but as a benchmark, it's actually very useful.
> The closest thing that I care about is the speed at which I can ^C a command when I’ve accidentally output too much to stdout, but as we’ll see when we look at actual measurements, a terminal’s ability to absorb a lot of input to stdout is only weakly related to its responsiveness to ^C.
Given how easy it is to accidentally spew something that I don't want to wait for, even if it is spewing quickly, I'm squarely with him in not caring about the speed of display. Slow it down to just faster than my eyes can make sense of it, and make ^C fast, and my life will be better.
when SSHed into a remote machine, if I run a command that spews a lot of text, how quickly does the terminal respond to ^C, stop printing text, and return me to the prompt.
Based on my findings, ^C is highly related to the speed of the output, because the process running in the shell may be way ahead of the terminal's parsing/rendering. Imagine you run `cat foo`: the shell could take around 1 second to send the output over to the terminal, and the terminal might then take 10 seconds to parse and render it. So after 1 second a ^C will actually do nothing, because the `cat` call has already finished. This is the case with Hyper; it hangs due to slow parsing and too much DOM interaction (as hterm was not designed for this sort of thing).
There's actually a mechanism for telling the process to pause and resume (sending XOFF/XON), which allows the terminal and shell to stay completely in sync (^C very responsive). However, this only really works well in bash, as oh-my-zsh, for example, overrides it with a custom keybinding. Related links:
Original PR: https://github.com/sourcelair/xterm.js/pull/447
Post-PR bug: https://github.com/sourcelair/xterm.js/issues/511
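A toy model of why ^C responsiveness tracks the backlog, and what XOFF/XON (the DC3/DC1 control bytes, 0x13/0x11) change. Everything here is a simulation, not real tty code: the "producer" stands in for `cat`, the terminal renders one queued chunk per tick, and the numbers are arbitrary:

```python
XON, XOFF = b"\x11", b"\x13"   # DC1/DC3, the software flow-control bytes

def backlog_at_ctrl_c(flow_control, total=1000, high_water=8, ctrl_c_tick=10):
    """Returns (chunks still queued for rendering, producer still alive)
    at the moment ^C arrives. The producer is effectively instantaneous;
    the terminal parses/renders one chunk per tick."""
    queue = produced = 0
    for _ in range(ctrl_c_tick):
        # producer runs ahead: unbounded, or held at the XOFF high-water mark
        while produced < total and (not flow_control or queue < high_water):
            queue += 1
            produced += 1
        if queue:
            queue -= 1             # terminal renders one chunk this tick
    return queue, produced < total

# without flow control the producer exits almost immediately, so a ^C at
# tick 10 kills nothing and hundreds of chunks still have to render out;
# with XOFF/XON the producer is still alive and dies instantly
```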
If it instead takes a long time, there are probably large buffers between:
* If you're talking about ssh to a faraway machine, the TCP layer is probably responsible. (I'm not even sure if there's anything you can do about this; the buffer (aka "window size" in TCP terminology, plus the send and receive buffers on their respective ends) is meant to be at least the bandwidth-delay product, and as far as I know, the OS doesn't provide an interface to tell it you don't need a lot of bandwidth for this connection. It'd be nice if you could limit the TCP connection's bandwidth to what the terminal could sink.)
* If you're talking about something running on the machine itself, it's probably an over-large buffer inside the terminal program itself.
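On the TCP case: there is one partial knob. Shrinking the socket's receive buffer before connecting caps the window this end advertises, which bounds how far the remote sender can run ahead of what the terminal has rendered. A sketch, assuming Linux semantics (the kernel roughly doubles the requested value); ssh itself doesn't expose this, so treat it purely as illustration:

```python
import socket

def small_window_socket(rcvbuf=4096):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # must happen before connect(): the window scale factor is negotiated
    # in the SYN, so the advertised window can't usefully shrink afterwards
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf)
    return s

limited = small_window_socket()
stock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
limited_buf = limited.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
stock_buf = stock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
```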
If this becomes popular enough, then zsh will figure out how to offer the feature that bash already does, and terminals will happily adopt it. Then everyone's lives are better! :-)
I agree totally with the speed of display updates not really being important - if my terminal is spewing hundreds of pages of text, it doesn't matter whether it's redrawing at 50fps or 5fps. My hunch is that the slowest terminals are the ones that insist upon drawing every single character of output. They then end up with a huge buffer of text that needs to be rendered even though the ^C may have been sent and the noisy program has been terminated.
I trust the author's benchmarks over your experience. (Doubly so since my experience matches the author's.)
So why not (say) buffer it up somewhere, show a preview, show an "I would be spewing output right now; press <Space> to see the last screen, press Ctrl-C to stop, press 's' to just spew output" message. The speeds and amounts of data for which this happens could be completely configurable.
You can always put a limit on how much to buffer up (I'm not saying that this system should completely buffer output files until you run out of disk space), and sometimes 'spewing' is actually what we want.
The vt220 had a 'slow scroll' speed which would buffer text and scroll it at a viewable speed, limited by the quite small memory on the screen. It also had a 'pause' key which would pause the display and then continue the output (again limited by the terminal's memory). See also PC 'scroll lock' key.
The way to tackle this is not to "buffer output up". In fact, the way to tackle this is the opposite of filling up buffers with output.
It is to decouple the terminal emulation from the rendering. Mosh runs the terminal emulator on the remote server machine, and transmits snapshots of state (using a difference algorithm for efficiency) at regular intervals over the network to the client, which renders the terminal state snapshots to the display on the local client.
Various tools can and do deal with the spew, though. Offhand: script (typescript), screen, and tmux. If you're running a session through a serial terminal emulator (e.g., minicom), that would be another instance.
I'm not going to claim these are particularly mainstream uses (though I've made use of each of them, and been grateful for the ability to do so). But they do exist, and I suspect there are others.
             stdout [MB/s]   idle 50 [ms]
urxvt             34.9           19.8
xterm              2.2            1.9
rxvt               4.3            7.0
aterm              6.0            7.0
konsole           13.1           13.0    (stops moving when printing large file)
terminator         9.1           29.4    (stops moving when printing large file)
st                23.0           11.2
alacritty         45.5           15.5
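For reference, the stdout-throughput column can be approximated with a few lines. This is a sketch: the chunk size and line length are arbitrary, you must run it inside the terminal being measured, and piping it to a file measures only the pipe:

```python
import sys
import time

def stdout_throughput_mb_s(megabytes=10):
    chunk = (b"x" * 79 + b"\n") * 1024          # ~80 KiB of plausible lines
    writes = megabytes * 1024 * 1024 // len(chunk)
    out = sys.stdout.buffer
    start = time.perf_counter()
    for _ in range(writes):
        out.write(chunk)                         # terminal must absorb this
    out.flush()
    return (writes * len(chunk) / 1e6) / (time.perf_counter() - start)
```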
Side note: I was supervising some kid who absolutely couldn't believe that the reason his 'workstation was locking up' was that he was catting giant logfiles in a second pane of his single-process GNOME Terminal. I told him to use XTerm or something else, since they spawn one process per window, and he tried for a while, but went back and continued to complain, because he just couldn't believe that the terminal could get bogged down; further, he missed his pretty anti-aliased fonts.
Latency for input: https://github.com/pavelfatin/typometer
For me this is a great illustration of how much latency there is in the GUI. Not sure if everyone can feel it, but to me console mode is much more immediate and less "stuffy".
I'm a fast typist, but this is 2017; computers are fast, right? Nope. On the newest, maxed-out Dell XPS there is a HUGE latency difference between the pure Linux console and any graphical one (be it GNOME, Windows, or Mac).
Typing and working with pure text is really fast and you INSTANTLY feel and see the difference. Try it.
It definitely has slow throughput at catting files, e.g. if I cat the output of "seq 100000". The latency seems better though. Probably not as good as text mode.
I honestly don't know what text/console mode even is. I know there is a VGA "spec" -- I think all graphics drivers for PC-compatible devices have to support VGA and text mode? Or is it part of the BIOS?
The easiest way to turn it off is to add "nomodeset" to your kernel commandline.
The correct term, by the way, is not "text mode", but rather "a virtual console" or "virtual tty" or such.
Hopefully that gives you enough search terms to learn more.
The confusion arises because people, as you have done, erroneously conflate the kernel virtual terminals with "text mode". In fact, the norm nowadays is for kernel virtual terminals to use graphics mode. It permits a far larger glyph repertoire and more colours, for starters, as well as things like software cursors that can be a wider range of shapes and sprites for mouse pointers.
The point being that it's not measuring latency, nor throughput of the rendering, but rather throughput of the emulation.
People notice the extra latency though. I sure do. I remember what it's like to have a CRT getting photons displayed nearly immediately after a keystroke. That's exactly what makes an old 286 feel snappier than a 2017 macbook pro while typing.
I have to look into Kmscon which seems promising.
Estimating, it takes at least 500 - 1500 ms. I have no idea what's happening here... if it's an X thing, a driver thing, etc.
I don't recall it being fast on any machine I've used recently. It's at least 100x slower than it should be to be usable -- it should be around 5 to 15 ms, or even less.
Specifically I run kmscon on one virtual terminal + tmux for scrolling/tabs, then an X server running chromium on another. I'm not a web dev, but I am continuously having to switch between the two for tracking merge requests, ticket status, testing via our frontends etc.
It's not bad, but kmscon is pretty much abandoned at this point, and I don't know of any way to have a setup like this run nicely on multiple monitors. It was meant to be just an experiment in an über-minimal setup; I was planning to switch back to my previous i3 + st based setup after a month or so, but now it's been most of a year and I'm still using it for some reason.
I think the big thing I really enjoy about this setup is the complete lack of window management. Even the minimal window management I had to do with i3 (1-2 terminals running tmux + 1-2 browser instances) is gone. It feels like that's removed a small unnoticed stress point from my work day. If I ever get round to setting up a window manager again I think I'm going to try and keep it limited to 1 terminal + 1 browser instance and rely entirely on their internal tab support.
Are you running a distro or did you build this setup yourself?
What role does kmscon play? I think you could just run raw tmux in one VT and X in another? Although to me it seems slow to switch between the two.
It seems like Linux should support the multi-monitor setup as I said in a sibling comment -- maybe I will take some time to investigate it.
Was latency once of your considerations, or was it mainly lack of window management?
If you have time a screenshot would be helpful :)
kmscon has better handling for colors, fonts, etc. than the linux console; that's the only real reason I'm using it. On my laptop's builtin display I have no delay switching between any of the linux console/kmscon/X; when I plug in an external monitor I do get ~1-2 second delay switching from the linux console/kmscon -> X, no delay the other way.
There does appear to be some bug with switching from X -> kmscon, it just shows a black screen, but I've gotten used to switching X (VT 3) -> linux console (VT 1) -> kmscon (VT 2) which seems to work around that. There's also another bug where the Ctrl key seems to get stuck down in kmscon when switching to it sometimes, has only happened ~4 times in the last 8 months and I can fix it by just running `systemctl restart kmsconvt@tty2` and attaching to my tmux session again.
Since I'm not doing any frontend changes I don't ever really need to look at both my terminal and browser at the same time, so I haven't taken the time to see if I can have different VTs displaying on different monitors. I prefer the portability of a laptop over having the most productive single-location setup.
Latency was not at all a consideration, it was purely an exercise in how minimal a setup I could have. I spent a couple of weeks without having X installed and using command line browsers when I needed to, but using GitLab and JIRA through command line browsers was a real pain (and if I recall correctly some stuff was impossible).
A screenshot is difficult since it's multiple VTs and I don't think kmscon has any kind of builtin screenshot support. Just imagine a full-screen terminal with tmux, you hit Ctrl-Alt-F2, now it's a full screen browser; that's basically it.
One other thing I do have setup to make life a little easier are some little aliases syncing the X clipboard and tmux paste buffer so I can copy-paste between X and my terminal. And I have DISPLAY setup in the terminal so things like xdg-open and urlview can open stuff in my web browser.
That is, the X server would only know about one monitor. But the kernel would know about both, and it could run processes connected to a TTY which writes to the second monitor. Rather than a TTY connected to an xterm connected to the X server. (I think that is the way it works)
This goes back to my question: is text mode part of the graphics driver or part of the BIOS? I assume the BIOS has no knowledge of dual monitors, but my knowledge is fuzzy there.
rxvt versions newer than 2.7.1 and, more recently, rxvt-unicode seem to have some other issues that make them really slow (particularly non-bitmap font rendering), but rxvt-unicode supports mixing bitmap and other (Terminus) fonts which seems to be a reasonable solution.
Copyright (C) 2016 Microsoft Corporation. All rights reserved.
Loading personal and system profiles took 542ms.
I think it doesn't display if your profiles take less than 500ms to load.
EDIT: Just tested with a clean profile with the line "Start-Sleep -m xxxx" for various values of xxxx, and the message has shown up with times just above 500ms but not below.
Hyper, which currently uses a fork of hterm, is in the process of moving over to xterm.js due to the feature/performance improvements we've made over the past 12 months. Hyper's 100% CPU/crash issue, for example, should be fixed through some clever management of the buffer and by minimizing changes to the DOM when the viewport will completely change on the next frame.
I'd love to see the same set of tests on Hyper after they adopt xterm.js and/or on VS Code's terminal.
Related: I'm currently in the process of reducing xterm.js' memory consumption in order to support truecolor without a big memory hit.
> on my old and now quite low-end laptop
Trust me, that's not a low-end laptop. Either it has the shittiest CPU ever and a terribly mismatched amount of memory, or the author's view of what is high-end or low-end is skewed; in either case, what's low-end nowadays would be ≤4GB RAM. 16GB is LOTS, useful for developers who run large builds and/or VMs regularly.
I very much like the rest of the article though, would love to see some latency improvements here and there!
For user-facing terminals/workstations, I would consider ≤8GB low-end ("unusable"). 16GB would be mid-low ("usable"), 32GB would be mid-high ("comfortable"), ≥64GB would be high ("good").
For servers, ≤64GB is low-end, ≥512GB being high-end.
That 8GB would be considered low-end does not mean that no one uses it, though. Some people might still rock 2GB laptops, or use the original 256MB Pi 1 as a light desktop.
- 8GB is easily available and what most cheap laptops sport,
- 16GB is either default or an addon for cheaper laptops,
- 32GB is a premium that is not always available, and
- 64GB is usually only available in huge workstation or gamer "laptops", although there are some decent-size Dells with it.
While those are not my choice of metrics, they do seem to support the "low", "mid-low", "mid-high" and "high" labels I personally added. At most, you could argue that ≤8GB is low, 16GB is mid and ≥32GB is high. 10 years ago, 16GB would have been high-end for a laptop, but no more.