https://news.ycombinator.com/item?id=27834649 (from today)
There are several fast Rust-based, GPU-accelerated terminal projects that are open source and under active development:
Fig ( https://fig.io/ ) isn't Rust, but it's worth noting as a YC-backed competitor terminal product that has autocomplete similar to Warp.dev. Their website is eerily similar to warp.dev's website, to the point where I can't help but imagine one was the inspiration for the other.
I really thought 'Alacritty' came next! It's cool there are a few. If someone's looking for something popular and robust though, and not specifically interested in the cloud features of OP, Alacritty is pretty mature, actively developed, and also Rust, if you care.
2021-07-14T17:05:56.423Z WARN window::os::windows::window > EGL init failed with_egl_lib failed: with_egl_lib(libEGL.dll) failed: no compatible EGL configuration was found, atioglxx.dll: LoadLibraryExW failed, fall back to WGL
2021-07-14T17:05:56.500Z ERROR wezterm_gui::frontend > Failed to create window: The OpenGL implementation is too old to work with glium
Genuinely curious. I've never run into a situation where the standard terminals felt "sluggish"... barring a slow internet connection while sshing.
The latest video for it is here: https://www.youtube.com/watch?v=99dKzubvpKE
I used Urxvt for about fifteen years, it didn't feel slow at all. Kitty just _feels_ faster.
Are there any benefits I'm missing that come from it being implemented in Rust?
That said, it's particularly bizarre for a closed source product, where the language is completely irrelevant ¯\_(ツ)_/¯
Also, I like how simple HN is, but sometimes I wish for "tags", at least for project languages - that would keep the title "cleaner", while still including the language for anyone interested. Would be cool to search HN posts by language too.
1) This is a new codebase that we can study and contribute to. Knowing it is in Rust is relevant, so those of us who write Rust know whether we can dig into it.
2) Rust makes some guarantees about the properties of programs written in it. These guarantees are of interest to some, to the point where many have adopted Rust versions of common utilities (ls/exa, grep/ripgrep, etc.).
edit: just saw it is closed source. Bit of a bummer, and nullifies my points to a certain extent. But I will leave them up since I think they apply to other projects that are more open.
What this experience (and others like it) have taught me is that far more people should care, and that we as an industry should be investing dramatically more effort into completely rewriting low-level C projects that were designed and developed thirty or more years ago. Little of the code I saw would pass modern code review for a variety of reasons.
Software engineering tools and practices have improved dramatically in those thirty years, but our foundations are built on frankly completely shoddy infrastructure that’s barely held together with duct tape and fishing line. Our industry profits almost exclusively off of building more and more on top of what’s already been built, but I’m convinced it’s becoming increasingly important to spend more of our collective effort on fixing, replacing, and repairing the stuff that lies below.
None of this is meant to malign the efforts of the developers of `bash` or any of the other venerable yet aging tools at the bottom of our stacks. There’s no blame to be had for building a tool with the knowledge and best practices of the time and for being hesitant to “fix what ain’t broke” in the thirty years since. But frankly, these things are a lot more broke and waiting to fail than most of us realize.
Further, none of this is to say that things are on the dire verge of collapse or that any one program is going to be the thing that dooms us all. Simply that our foundations are suspect and are in increasing need of repair, particularly as we build ever-higher on them. Community efforts like those to improve OpenSSL in the wake of severe exploits have been invaluable and we need so much more of it. Preferably before exploits are discovered.
So I for one applaud efforts to rewrite and rethink these programs, and especially to develop them in modern languages with stronger protections, infinitely fewer footguns, and better testing infrastructure.
We wrote up a pretty detailed technical design if you're interested in checking out how we built and why we made the decisions we made:
Having written terminal code both in compiled and interpreted languages, and on machines about a thousand times slower than a typical modern machine, speed here is also much more down to algorithm choices - especially for rendering - than language.
I find the focus on FPS pretty odd as well, as really high update speeds for a terminal typically only come into play if you dump a huge amount of data to the terminal by accident, in which case scrolling is easily sped up by skipping lines that will never be visible at all.
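As a minimal sketch of that scroll-skipping idea (the names here are mine, not from any real terminal): of a burst of output arriving between two paints, only the last screenful can ever be visible, so everything before it can be committed straight to scrollback without being rendered.

```rust
/// Of a burst of lines arriving between two paints, only the final `rows`
/// lines can appear on screen; the rest go straight to scrollback unrendered.
fn visible_tail(burst: &[String], rows: usize) -> &[String] {
    let skip = burst.len().saturating_sub(rows);
    &burst[skip..]
}

fn main() {
    let burst: Vec<String> = (0..10_000).map(|i| format!("line {}", i)).collect();
    let screen = visible_tail(&burst, 50);
    assert_eq!(screen.len(), 50);
    assert_eq!(screen.last().unwrap(), "line 9999");
    println!("rendered {} of {} lines", screen.len(), burst.len());
}
```

The point being that the cost of a huge accidental `cat` is then bounded by the screen size, not the data size.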
I do expect terminals I use to be low latency for normal operations, but that's a very different consideration.
Not objecting to your choice of Rust. Just like some others found it odd to highlight it near the top of your main page.
That said, I'll note that unlike Warp it will actually run on Linux...
Writing a tty which has to support specific shells isn't confidence-inducing.
How did you paint yourself into that specific corner exactly?
I'd be more careful about the messaging then, something like "you can use any shell you want, we provide enhanced features for bash/zsh/fish and are looking to expand that in future".
This is a terminal emulator not a shell language so this should be compared to the likes of Xfce4-terminal, Konsole, Alacritty, Kitty, st, etc...
Which is true... but owing mostly to the fact that C actually has a stable ABI.
It's the same deal with every popular language that was the new hotness at the time: "written in Python", "written in Ruby", "written in Go" from years ago, which I see less of now.
Nothing wrong with this, it's just a straightforward way of tailoring information to the target audience to maximize engagement with an ad, whether the authors did it consciously or unconsciously (maybe they like Rust a lot and mention it often).
"All cloud features are opt-in. Data is encrypted at rest."
The fact that this even needs to be stated makes it a hard no, especially for a terminal emulator.
Opt-in and "encrypted at rest" AKA actually encrypted are the correct defaults.
I'm also uninterested unless both the client and the server are made open-source, because terminals are a bread-and-butter program and there's just no way I'm getting locked into someone else's cloud for that.
But the specific sentence you've quoted is a strange thing from which to recoil, from where I'm sitting.
All details here: https://www.warp.dev/privacy
We are particularly excited about sharing our Rust UI framework since there aren't a lot of great options for building native UIs in Rust right now. We want to make it cross-platform first though.
Druid as it exists today is not a great fit for what you're trying to do, as drawing on mac is all based on Core Graphics and Core Text. GPU-based drawing is the future, and we're working on that (piet-gpu), but it's not yet in a really usable state.
In any case, hope this goes well. It's clear to me that Rust-based UI is the most viable path to an Electron alternative in the future, and the more evidence we have for that, the better.
You're exactly right--that's why we didn't pick Druid: we needed something that rendered on the GPU. We actually talk more about this decision in our "How Warp Works" blog post here: https://blog.warp.dev/how-warp-works/
Developing in the open would make it easier to trust that you're not doing something nefarious. When I think about the kind of data that goes through my terminal on a day-to-day basis, I have to _know_ for sure that my terminal can be trusted. That makes Warp a non-starter for me until it's open source, which is really a shame because it looks fantastic.
Your plan should have been to wait until something was downloadable before you announced it.
Anything useful you produce will be community-maintained eventually, and your cloud product will vanish once you complete your amazing journey; anything in-between seems like rent-seeking. Meanwhile, your competitors are Free Software and can out-develop you.
True. And a terminal that requires an e-mail login? I doubt they are going to build a social product where users can co-op or date in the terminal.
It's rare enough for me to have a need for sharing a screen, but sharing a single type of window would cover just a tiny proportion of those rare cases.
My tech stack is atypical enough in many respects that maybe others will find it more useful, but to me this seems like a couple of useful but easily copied features (the auto-complete), and some stuff I'd never use.
Email is required only for the closed beta. Warp will be offline-first. All details here: https://www.warp.dev/privacy.
I spend 1/3rd of my day in terminal emulators, and anything that reduces the number of stupid (normally memory corruption) crashes that interrupt my work is a legitimate value proposition for me.
Edit: This isn’t an endorsement of this particular product, to be clear. I don’t know anything about it, except that being written in a fast, memory safe language endears me to it.
I use urxvt on Linux, and I don't have any problems with it.
Edit: I should also mention that I see plenty of non-crashing, non-hanging bugs with iTerm2 that look like memory misuse to me: split-workspace mode (tiled with a text editor) causes wonky graphical errors, and I've seen what looked like buffer reuse artifacts when switching between monitors.
I cannot recall any crashes in alacritty, gnome-terminal, xfce-terminal or the legacy windows console for other terminals I've used a decent amount.
Warp is a fork of Alacritty, their team was very gracious in collaborating and reviewing Warp's initial designs.
People who care about security.
Those people tend to care more for their term being open source than it being in Rust. Don't trust those who claim otherwise.
Can you explain your objection in more detail? It seems like a terminal offering some kind of real-time collaboration with team members is a reasonable thing to offer, and that if you have that feature then it's good that it's opt-in with encrypted data.
Unicorns (but they probably don't exist).
> the report says that they have "recently reconfirmed" the lair of one of the unicorns ridden by the ancient Korean King Tongmyong, founder of a kingdom which ruled parts of China and the Korean peninsula from the 3rd century BC to 7th century AD.
Kind of seemed to me like just another project.
I guess the kids these days have to start at volume 10 with "hard no!" or "best ever!" to be heard. THAT is a fad.
EDIT: silent downvotes for saying this?
What if the project turns out to not be a fad, and a month or six later more people are using it and loving it? How does a "hard no" person back down from their bold rejection? This is what I was talking about.
I'm pleased that there is more innovation in the terminal emulator space. For a tool used by just about every developer, not a ton has changed since the launch of the VT100 in the '70s.
Fig adds autocomplete to your existing terminal. Soon, we're launching the ability for you to build your own visual apps and shortcuts (like these: https://fig.io/videos/old-sizzle-reel.mp4) and share them with your team or the community.
Rather than building our own terminal, we integrate with the terminals you already use (even the one embedded in your IDE). This means everyone on a team can collaborate but keep using their existing terminal and shell setup.
We are excited to integrate Fig with Warp.
E.g. Sixel, ReGIS, html output, terminals treating the scrollback buffer as a live document where commands can be re-executed. AmigaOS let you auto-complete filenames using a file requester.
The hard part is becoming so seamless that it integrates into people's workflows to the point of being something they're not willing to live without, and none of the above really succeeded at that.
The terminals I use have Sixel and ReGIS enabled, and I have a shortcut to output images to the terminal, but I still often forget and open a separate image viewer anyway. (It is awesome, though, to be able to display an image from a remote server directly over the ssh connection - but that depends on a utility script on said server.)
Instead we revert to the lowest common denominator because it's good enough and other things, like latency and being ubiquitous across platforms matters more.
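For the curious, the Sixel format mentioned above is just escape-wrapped pixel data; here's a hypothetical minimal sketch (my own, not from any terminal's source) that builds the payload for a small solid-red block:

```rust
/// Build a minimal Sixel payload: a 6-pixel-tall red strip, 6 columns wide.
/// Framing: DCS ("\x1bP") + 'q' enters sixel mode, ST ("\x1b\\") leaves it.
fn red_strip_sixel() -> String {
    let mut s = String::new();
    s.push_str("\x1bPq");       // DCS + 'q': enter sixel mode
    s.push_str("#0;2;100;0;0"); // define color register 0 as RGB(100%, 0%, 0%)
    s.push_str("#0");           // select color register 0
    s.push_str("~~~~~~");       // '~' sets all 6 vertical pixels of a column; 6 columns
    s.push_str("\x1b\\");       // ST: end of sixel data
    s
}

fn main() {
    let payload = red_strip_sixel();
    // Printing `payload` to a Sixel-capable terminal (xterm -ti vt340,
    // mlterm, ...) would draw the strip; elsewhere it's just escape noise.
    println!("{} bytes of sixel data", payload.len());
}
```

Each sixel character encodes a column of six pixels as (byte - 0x3F), which is part of why the format never felt ergonomic enough to become a seamless default.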
We've been following fig and are excited to see more innovation in this area!
I’m Zach, from Warp. Excited to get feedback on what we’ve been working on. Happy to answer any questions!
* How do you expect to make money from developers using Warp?
* What's the source code license?
* Where's the source code?
We may add paid features for enterprises around collaboration, and other things on the server that would cost us money to support.
Great product! I signed up for the beta, excited to try it out.
You can find more details about why we decided to build our own UI framework in the "How Warp Works" blog post we wrote here: https://blog.warp.dev/how-warp-works/
Next up will be Linux, followed by Windows.
Yet when the subject of terminal emulators comes up, there are often comments from other HN users insisting that Windows terminal emulators are molasses compared to MacOS, or that this emulator is much faster than that. I don't get it.
You might be a particular type of person... I don't remember Windows Cmd being not slow, ever. It feels extremely sluggish. It supposedly has something to do with Windows ConHost, but I don't know about the details.
In fact, even most terminals on Linux are unbearable to me. Not only do the antialiased vector fonts tire the eyes on sub-4K screens - most of the terminals I've tried feel slow as well, though not as slow as Windows Cmd.
Personally I use xterm with bitmap fonts whenever I can, it feels sooo good. It has both crisp font rendering as well as being reasonably fast/responsive. (It's probably not the fastest ever in throughput if you want to display /dev/urandom, but that's not my benchmark).
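For anyone wanting to try the same setup, a minimal ~/.Xresources sketch (the XLFD below is just an example pointing at the classic misc-fixed font; substitute any bitmap font you have installed, and reload with `xrdb -merge ~/.Xresources`):

```
! Example only: a classic bitmap font instead of an antialiased vector face
XTerm*font: -misc-fixed-medium-r-normal--13-120-75-75-c-70-iso10646-1
! A larger scrollback, so big dumps don't scroll history away
XTerm*saveLines: 10000
```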
Yet people seem intent on inventing slow ways of doing it that then gets people to reinvent all kinds of complex ways of making it faster.
The performance differences between different terminals are indeed absolutely astounding (but rarely matters much, because the things that are much faster/slower are things you rarely do on purpose, like dumping thousands of lines to the terminal at once)
I suppose that terminals like Gnome's* are slow or feel "jittery" (i.e. inconsistent latency) for example because everything needs to go through a separate terminal server process. Then, there is desktop compositing in the way. There might even be latency added by GPU acceleration (handling double buffering badly vs. naively sending pixel updates to the display server?). Then, vector fonts are just terrible for programming on normal** displays - either anti-aliased (mushy, eye-straining) or not (jagged-y).
Another issue is historically bad / incompatible VT-100 emulation. I don't know what it is and I'm not a fetishist for organically grown terminal cruft, but in any case I've always had the most trouble with Gnome and KDE (missing characters, wrong colors, wrong drawing positions) while xterm works just fine, only the occasional "tput reset" needed after dumping binary data.
* Just talking experiences; I'm probably referring to something based on VTE? Don't use Gnome regularly.
 "normal" meaning sub-4k / ~100dpi, soon will have to be referred to as "older"
This, I think, sufficiently explains the slowness of terminals on Windows.
I started with computers more than 30 years ago, and have been a "power user" and coder for essentially all of that. In all that time and experience, I've never found any terminal to be slow, or found one to be noticeably "faster" than another.
That's pretty much where most of the terminal performance difference is on most platforms. Latency can be an issue on some, but really this was a mostly solved problem as far back as the Amiga.
I have a toy terminal written in Ruby + a tiny C extension to interface to raw xlib that currently draws individual characters and redraws the entire screen on scrolling, and even that is fast enough for most normal usage (but indeed slow at printing megabytes of text)
But getting a terminal fast enough, including on the "print megabytes of text" test boils down to a few simple principles (and you'll be "fast enough" without doing all of them even on pretty slow hardware):
* Render your text to a buffer, and only render to screen at intervals.
* Use non-blocking IO and read as much as you can into suitably sized buffers (aka: reduce pointless context switches)
* Scroll the bitmaps using whatever OS/toolkit provided functionality, rather than re-rendering the text like my stupid Ruby term.
* If you have multiple lines in your buffer, scroll once and render all of the new lines at once.
* If vector fonts, prefer to pre-render glyphs to a buffer rather than re-rendering every time.
That's about what I remember from rewriting parts of the AROS (AmigaOS replacement) terminal handling code a decade ago (does not do all of the above, but is still more than fast enough).
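A rough sketch of the batching principles above (all names hypothetical, not from AROS or any real terminal): buffer incoming lines and only "paint" once per frame interval, so a burst of N lines costs a handful of scroll-and-render passes instead of N.

```rust
use std::time::{Duration, Instant};

/// Buffer incoming lines; only repaint once per frame interval.
struct Screen {
    pending: Vec<String>,
    last_paint: Instant,
    frame: Duration,
    paints: usize,
}

impl Screen {
    fn new(fps: u64) -> Self {
        Screen {
            pending: Vec::new(),
            last_paint: Instant::now(),
            frame: Duration::from_millis(1000 / fps),
            paints: 0,
        }
    }

    fn push(&mut self, line: String) {
        self.pending.push(line);
        if self.last_paint.elapsed() >= self.frame {
            self.paint();
        }
    }

    /// One scroll by `pending.len()` rows, then render all new lines at once.
    fn paint(&mut self) {
        self.pending.clear();
        self.paints += 1;
        self.last_paint = Instant::now();
    }
}

fn main() {
    let mut screen = Screen::new(60);
    for i in 0..100_000 {
        screen.push(format!("line {}", i));
    }
    screen.paint(); // final flush so nothing stays buffered
    assert!(screen.paints < 10_000); // vastly fewer paints than lines pushed
    println!("100000 lines pushed, {} paints", screen.paints);
}
```

A real terminal would render `pending` into an off-screen buffer here instead of discarding it, but the throughput win comes from the same coalescing.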
If a terminal doesn't paint on a per-frame basis, then dumping reams of text to that terminal will slow the program to the speed of printing; this can easily be a 100x slowdown versus only updating the view when the monitor wants more pixels.
Most of them don't do this, so the ones that do are within their rights to advertise it as a feature.
I know, totally unrelated except for the Rust part, but the annoying thing is that a search for "warp" always shows Star Trek stuff, and searching with "rust" brings up a lot of rusted-metal solutions, so I use warp+rust :)
(Off to work on my next project that will speed up note taking - probably going to name it after a technology in my favourite tv series)
A lot of work happens in the terminal but it is not in repo or documented, and each team member kinda develops their own thing. I see Warp standardizing and making the non-code writing part of development
How does this handle really long outputs like with [tail -f], or pagers like [less]?
Also, does opening up vim “just work”?
* log in to AWS
* push commits (though that's mostly in Emacs now)
* tail logs from remote services
* use ssh
* random grepping/cat/awk/sed one-off stuff
I don't see any of those benefiting from real-time collaboration. The use-cases presented on the landing page don't make sense to me either - when my entire team is debugging production, we usually fan out and all look at different things rather than needing to fan-in and all do/look at the same thing. And if I want to chat I'll have Slack open next to the terminal.
This is where the real-time collaboration comes in. Easy to teach novices how to use powerful shell commands to make things easy.
If I want to save a session, I use tmux. If I want to share that session, I use tmate.
I can also self-host tmate on my own machines which makes it even more attractive.
I think some of the session stuff is nice, but it's not for me.
It scrolls at 2 fps.
However, if you work closely with larger companies, I'm sure there's some killer feature that will hook the engineering managers, even if developers are split on using it.
Huh? What about Linux? Why not use something like gfx-rs instead? It's too bad Apple refuses to support Vulkan, but at least you could use something that could help it actually work on multiple platforms. Using Metal will make it unportable, unless you plan to support multiple GPU APIs.
Also, there is no source in the Github link: https://github.com/warpdotdev/warp
How will that impact the quality of the product in comparison to, say, iTerm2?
I see the team has some strong credentials, which honestly looks overpowered for "just" a terminal app. Unfortunately, that makes me suspicious of their goals for "just" a terminal app, which has a relatively well-established expected feature-set.
zsh has a similar facility.
Banter is always useful
There is no download link, no "sign-up to hear more". With all of the thought and polish that went into this I'm surprised to see the ball dropped at the most important step!
In my case the "Request access" button was blocked by uBlock Origin.
the typo did not help
This would be interesting if the source was actually available.