Please note that I am not attacking wezterm here; my criticism is meant to be constructive, because terminals are important. I think wezterm is a very cool project and I wish it all the best, but why not make a terminal that actually (correctly) implements those escape sequences? (We are talking about a couple dozen controls here.) Given the development history (3+ years, 3k+ commits) I would say that this is not a resource issue but a prioritization issue.
For an example of a GPU-accelerated terminal with much less polish, less unicode/ligature support and less portability, but one that takes fundamental correctness very seriously (coupled with excellent performance and very low latency), in a tenth of the source code and (presumably) a mere fraction of the development resources spent on it, take a look at zutty. Note: if you like wezterm, you will probably NOT want to switch to zutty, because it's a much more basic program.
I've run vttest a few times; one of the biggest issues I have with it is that it isn't a unit test. It relies on a human to know what correct output looks like and to infer, from what is on the screen, what the test was trying to do.
If you can concisely describe specific conformance issues, then I'd really appreciate it if you could file a github issue for each one. Please don't just file an issue that says "run vttest", as that isn't very actionable.
> We have an automated regression testing setup to run VTTEST in Zutty and verify that the output is a pixel-perfect match of the pre-approved video output. You can thus expect the terminal output to be correct – be it driven by tmux, emacs (with org-mode, helm, magit, etc.) or whatever else.
Which sounds like it might be useful for you? https://github.com/tomszilagyi/zutty
Just before the section you quoted, it says:
> Zutty passes the subset of VTTEST screens that we care about
FWIW, wezterm passes the subset of vttest screens that I care about too :-p
More seriously though, I can't use anything from zutty as it has an incompatible license, and that approach still doesn't resolve the main issue that I have with vttest, which is that it requires a human to interpret the display and reverse engineer what's happening from the code.
esctest is a much more reasonable target for conformance testing:
For example, you could open a discussion on what you're seeing in vttest and compare that with OP's comment. If you don't see any problems in vttest, then say so and proceed from there.
As someone who maintains packages, I'd be willing to hear feedback from any source, because being aware of issues is better than not being aware at all. I can't speak for how wezfurlong ranks this set of potential issues-- whether they're even worth investigating, for example-- but from the perspective of a person giving feedback, I would find his response unmotivating.
In particular I really struggle to understand the point of the ssh interface given that ssh already supports multiplexing built-in with ControlMaster.
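For reference, ControlMaster multiplexing takes only a few lines of ssh_config; this is a minimal sketch, and the host name and socket path here are just examples:

```
# ~/.ssh/config — minimal ControlMaster sketch; host/path are examples.
# The directory used by ControlPath must already exist.
Host myserver
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 10m
```

With this, the first connection opens a master socket, and subsequent ssh/scp sessions to the same host reuse it, skipping the handshake.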
What happened to "do one thing and do it well"? Make me an ultra-fast minimalist terminal, then I'll use tmux, ssh and kermit if I need them. Focus on handling all the many, many corner cases that you need to implement for accurate terminal emulation.
Well, I guess I'm turning into a bit of a hater here; I'm just frustrated that it's not the project for me. A hardware-accelerated Rust terminal hyped me a little too much, I suppose.
ssh support is present because Windows' pty story, while better than 5 years ago, isn't great. The integrated ssh support allows bypassing that layer when running wezterm on Windows and connecting to a real unix system.
serial port support is present because I do a lot of embedded work on multiple systems and want a consistent environment.
Multiplexing is there because that is actually what you're building when you add multiple tabs, windows and panes.
You can, of course, not use any of those features and still potentially enjoy using the ones that you do need/want. Or just not use it; I don't mind!
Alas, it does not.
I start to feel a little insecure about my own abilities. I've mostly focused on building small components or little systems from the ground up (in C) with minimal or no libraries. That works and is rewarding, until the project becomes unmanageable because of some aspect I'd done poorly, and I start a new project that focuses on exploring the particular kind of subsystem that I didn't know how to do properly.
It also seems like the following is a pattern. I checked out that intimidating Rust project using git. It's something like 50MB to download. Then I run "cargo build". I wait 5-10 minutes and it is still "fetching". At some point, it goes on to compile 550 (!) transitive dependencies. A little while later, the build failed, could not compile libc, unicode-xid, cfg-if, proc-macro2. Not sure, maybe I have the wrong version of "cargo" installed?
I get back to my small current hobby project, which is somehow more rewarding to me since I can still check it out on a fresh Linux or Windows box, run "build.sh" and run it. That whole process can complete in 20 seconds.
Am I too stubborn? Or should a terminal emulator really not be such a massive project? (No offense meant.)
Eh, Rust doesn't have OOP. What do you mean? I get that Rust is a bit hard to read if you aren't used to it, but apart from that it's fine for me. Don't forget that Rust's verbosity also stems from dealing with the borrow checker, which in turn gives you memory safety. Almost every C project has memory bugs somewhere.
The thing with cargo: in general cargo is really painless; the first build does take a long time, but after that build times are OK for most projects. You only run into problems with huge projects like Servo.
build.sh scripts have to be maintained by the author, while `cargo run` magically works in basically any Rust project.
>Am I too stubborn? Or should a terminal emulator really not be such a massive project? (No offense meant.)
Depends; this is not only an emulator but also a multiplexer. A simple terminal emulator that supports basic commands is a small program--look at suckless st. But once you get into supporting ligatures, UTF-8, etc., you get a lot more complexity, which in turn adds code and dependencies.
I imagine properly supporting UTF-8 and ligatures in C would be a huge pain.
FWIW, in my experience so far with wezterm, most people that have run into issues with old rust versions are not running rustup at all, so I feel like rust/cargo should also have a mechanism to specify an appropriate version.
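For what it's worth, rustup does let a project pin its toolchain via a `rust-toolchain.toml` file at the repo root; a minimal sketch (the version number is just an example):

```toml
# rust-toolchain.toml — rustup reads this and installs/uses the pinned
# toolchain automatically when you run cargo inside the repo.
[toolchain]
channel = "1.70"
```

This only helps users who actually run rustup, though; it doesn't stop a distro-packaged cargo from attempting the build with whatever version it ships.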
Perhaps it's just visually jarring to you because it's in phosphor. I wonder if some fonts that look fine on paper don't translate well to computers.
I don't have factual data, but this seems obvious to me. Especially on lower DPI screens. For example, a serif font looks just too busy on a screen, whereas I really love them in a book.
I've noticed that on non-high-DPI screens and at "reasonable" sizes (say, between 10 and 14 points), whenever a font has an elaborate design it tends to be unpleasant on the screen, because everything that doesn't line up with the pixel grid gets blurry or shows random colors at the edges.
I'm somewhat surprised that bitmap fonts have fallen out of favor. While we may get somewhat used to seeing not perfectly sharp fonts and after a while they don't seem blurry anymore, comparing them side by side with a bitmap font is striking. Especially on dark backgrounds, all the different hues are much more subtle, yet better defined.
I'm running terminus on a computer I sometimes use with a 24" FullHD screen and I find the absolute sharpness of it is very, very pleasant to look at. It is both thin AND without blurriness or color fringes, even on dark backgrounds.
I've been using it for a while now, can't say it makes a big difference(both positive or negative), but it kind of looks nice which is the point in the end.
I do it all the time in Vim; it makes it easier to visually ignore comments in the code. For anything that's not a comment, it's pretty ugly.
The glyphs are just randomly cursive or print. I wasn't complaining about serifs, I was complaining about randomly mixing cursive and print.
The codebase is flexible: I could write my own frontend if I wanted, or use the termwiz crate (included in the codebase) to build CLI apps using the same nicely layered architecture.
Another issue I have with Alacritty is that it looks ugly on Wayland (because of the lack of client-side window decorations); I'm not sure whether wezterm has solved that issue (building against GNOME or something).
I love Alacritty, but for my setup it doesn't work. Looking really hard for a fast and snappy replacement.
The downside is that this requires installing the multiplexer on the remote machine, whereas iTerm works with the much more prevalent tmux install.
Not sure what this means in terms of performance; I've never had any issues running remote tmux in Alacritty or in iTerm (via `tmux -CC`).
I don't have any use for running tmux locally, but I would sure love to be able to have actual windows for remote sessions (I use i3 and would prefer to manage all windows through that).
I work with a few different hosts on SSH and having an active connection for each as a labelled window (tab) in the terminal itself that I can just open up a terminal anywhere to come back to is great.
Personally I use the `tmuxinator` helper tool[^1], which lets you define configurations as YAML files to be launched/attached to with short aliases (e.g. `mux s myconfig`)
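A minimal tmuxinator project file looks roughly like this; the names, paths, and commands are illustrative:

```yaml
# ~/.config/tmuxinator/myconfig.yml — each entry under "windows"
# becomes a tmux window running the given command.
name: myconfig
root: ~/projects/myproject
windows:
  - editor: vim
  - server: ssh user@somehost   # hypothetical remote host
  - shell:                      # empty value: just an interactive shell
```

`mux s myconfig` (or `tmuxinator start myconfig`) then creates the session, or attaches to it if it already exists.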
Literally no shame in that. GUIs with multiple windows and tabs were invented for a reason.
The time invested will pay off by giving you a cross-network, cross-client, always-ready, instant-on, state-preserving, scriptable replica of what you're doing by arranging windows on your computer.
I can mosh to my VPS and have tabs and windows and panes left running as I left them. The connection doesn't drop if I change WiFi hotspots or switch to my mobile hotspot. This works from my laptop, desktop, phone, Raspberry Pi, whatever the client.
And doesn't require hardware acceleration /s
Or having a persistent state across long times, or work from a specific tty from another host...
Or be able to split the screen to follow multiple outputs...
I mean, you can live without them, but once you really meet them, you can't go back.
Proper ligature support requires terminals to support N bytes of input mapping to M character cells... and unlike the N:1 or N:2 cases, the N:M case doesn't allow a hard-coded list of how many cells an input string covers, it varies from font to font. And that risks software designed for one font breaking in a terminal that uses a different font, if it has a different set of ligatures, or the ligatures are different sizes.
Ligatures are handled at rendering time, partly because the terminal emulation and model layer doesn't know about fonts (it can run "headless" in a multiplexer where there are no fonts), and partly because the shaper library doesn't output information about cells but rather about which glyphs to render in which positions--that information doesn't easily map to the terminal cell model.
For extra pain, some font designers have ligatured sequences that map to a single glyph that may be several cells wide, while others use alternative glyph fragments for each component of the ligature, and others may emit two blank glyphs, followed by a triple wide glyph with a negative x offset of almost two blank glyphs in width. It's difficult to map this information to cells.
If you see the term "full width" character in old terminal related specs and documents, they are talking about 2 cell CJK glyphs/characters.
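The simple N:1/N:2 part of this is just a width lookup; here's a rough sketch in Python using the standard library's East Asian Width data (ligatures, combining marks, and emoji are exactly the cases this ignores):

```python
import unicodedata

def cell_width(ch: str) -> int:
    # "W" (wide) and "F" (fullwidth) code points occupy two terminal
    # cells; everything else is treated as one cell in this sketch.
    # A real terminal also has to handle zero-width combining marks
    # and the font-dependent ligature cases discussed above.
    return 2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1

print(cell_width("a"))    # narrow ASCII: 1 cell
print(cell_width("漢"))   # CJK "full width": 2 cells
```

The hard-coded 1-or-2 answer is what breaks down for ligatures, where the cell count depends on the font rather than the code points.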
I can understand your concerns around resource usage. Extraterm doesn't follow a minimalist philosophy; it's more of a swiss army chainsaw one. But as an application it still needs to make you think "what I'm getting out of this is worth the resources". Features you can't find elsewhere, plus becoming a platform for plugins and deeper integration with other software, is the broad idea here.
You can convert between these forms.
Surely even the worst of the worst motherboard-embedded graphics can cope with displaying some text in a window? It's something computers have managed to do since their invention?
Today you want to be able to render high dpi text on a 4k display, possibly on multiple monitors in multiple terminals at the same time.
And you'd better have more than 240 FPS (terminal render updates per second), because some displays run at 240 Hz or higher and you don't want your terminal to produce flicker on your screen.
I'm sorry, what?!
I use SSH a lot every single day. Not once have I thought "sheesh, this is slow".
Before SSH became the default, I spent even more of my life on Telnet. Same again, never had a problem.
I also sometimes do console connections to switches and routers.
The only lagging I've ever seen in a terminal window is when I've been on the end of a poor internet connection!
And yes I've done it on 4k displays and multiple monitors and multiple terminals.
I've never observed any flicker either (and yes I use non-standard "modern" fonts).
I've never had problems with Microsoft Word or Notepad either!
I'm sorry but I fail to see why a terminal emulator needs the same power as Photoshop or your favourite game.
Frankly I would put a GPU-accelerated terminal emulator in the category of "product solving a problem that doesn't need to be solved".
I know it's not an issue for many, and they'll tell you "excuse me, what?", but I can tell you, it's a thing.
When I think about it, there's no smooth movements in my usage. Scrolling is at least one line at a time, cursor movement is at least one character width at a time. So I would be hard pressed to notice latency.
Btw, 240 FPS is a very minimal jump from 120 FPS; it's much less of a difference than 60 to 120. I can't imagine ever being able to tell if my terminal is getting drawn faster than ~50 Hz. Well, maybe if I was scrolling more than 50 lines per second, but then I probably wouldn't care how it looks.
> but I would probably not care how it looks.
Me too. It's not about aesthetics; I don't care much about that. It's about how it feels: slightly higher latency gives me an uncomfortable feeling during computer interaction.
If I had to decide between something that looks very pleasant, but is slow and something that looks crappy but is ultra fast in terms of latency, then I would always pick the latter.
- Also: only this way can we lay the groundwork to get terminals to look like the hilarious parodies of computer interfaces in movies.
if for nothing else, it's a much nicer nohup(1).
because once i learnt it, it doesn't matter which terminal emulator i use, i have a full window manager solution _anywhere_.
but it's not a zero sum game for me, i have multiple windows to multiple hosts, not one giant tmux mess. in those windows i have tmux'ed sessions, which makes working on a given host much nicer for me.
as a result i have dozens of terminals.
but, i never work on more than one project or task at a time. so there is really no need for more than one terminal-window to be on my screen at any one time.
in other words, using tmux locally helps me avoid clutter on my desktop.
I will give it a fair trial
here is my non-scientific benchmark, latest macOS:
new wezterm window: `find /`: 70% CPU
new kitty window: `find /`: 30% CPU
the kitty window is visually faster as well...
No hard numbers, but simply resizing a window is visibly slow, and "accidentally" catting a 100k-line txt file in the terminal takes several seconds. Both are instant in iTerm and Terminal on my machine.
Also, I'd assume this has nothing to do with the terminal emulator (might be a wrong assumption). I thought the shell/TUI just has to position the cursor on the right-hand side and emit the correct Unicode?
The terminal itself doesn't need to be RTL, but the text that is entered or printed.
Here are screenshots of Konsole and Alacritty:
The colorscheme used in vim in that screenshot is my personal vim colorscheme which is leaning on the terminal color scheme, with the doc comment color explicitly selecting that orange color, for its added visual terror factor. (Really, I just like my Rust doc comments to be rust colored).
> No offence, but will installing this on "Windows 10" lead to any instability. Can I uninstall it without breaking something. I ask because, I'm currently on my second desktop. Recently bought after my eight-year old Linux desktop gave up the ghost.
> As I said, I bought a new PC intending to put Linux on it. On the first, trying to enable boot-from-usb, to create a recovery image. Couldn't enter the bios. After enable safe-mode in Windows, I ended up with a black screen. Because certain combinations of graphic card, monitor and UEFI are incompatible. You might think all I need to do is reboot, but no, the recovery utility puts the PC in permanent safe-mode.
> On the second PC, I tried to enable usb boot in the UEFI but ended up with "corrupt bios". So now I don't dare touch it for fear of ending up with a black box again. How the f*ck can they design a combination of Windows and the BIOS such that there is no way of recovering from a bad configuration? I never had such problems on my eight-year-old PC. There was no way to break the BIOS with a few key combinations. I tried reading up on UEFI; it seems to be an exercise in confusion. Sorry for the rant.
But then I saw their latest comment and you're absolutely right ahah. Good call