Telemetry should always be opt-in. Yes, that means vendors will get much less data. It's on them to deal with it.
On a related note, I wonder how long it takes until one of the vendors of popular CLI tools or desktop apps gets fined for a GDPR violation. I wonder how much existing telemetry already crosses the "informed consent" requirement threshold. I'll definitely be filing a complaint if I find a tool that doesn't ask for it when, by law, it should.
For usage data: it lets developers focus on the features that matter and know which ones they can remove.
For example, I don't collect any data in my app, but that also means I'm afraid to remove any feature that's slowing me down, because I have no idea how people actually use the app.
As for why this would sometimes be better as opt-out: on iOS crash reports are opt-in, and only about 20-30% of users have them enabled. That is fine for huge programs, or for ones with little surface area.
Your point is basically "surveillance data is useful". And, well, yes. There would be zero debate over surveillance if there were literally no desirable reasons to have it.
The most recent crash reporting system I worked with was little more than a stack trace without any user data recorded at all. Not even IP addresses of the report.
We didn’t care who was crashing and we didn’t collect any PII at all. It was a simple report about where in our code the crash occurred.
It was very useful for fixing bugs. No surveillance or PII involved.
For me, I'd love to enable telemetry for some of my better-liked FOSS apps - but even with those, the question immediately arises: "What are you sending?"
Without some way to monitor it: are they sending filenames? Are they sending file contents? How much is it? Etc., etc.
To answer those questions I'd need some sort of router-level monitoring of all telemetry-specific traffic, so I could individually approve types of info... and that seems difficult. But the days of blanket allowances from me are long gone, thanks to the bad actors.
1. All exfiltration of data must be under my direct control, each time it happens. You can collect all the data you want in the background, but any time it is transmitted to the company, I must give consent and issue a command to do it (or click a button).
2. All data that is exfiltrated must be described in detail before it is exfiltrated. "Diagnostic data" isn't good enough. List everything. Stack trace? Crash report? Memory dump? Personal info (list them all out)? Location information? Images (what are they, screenshots? from my camera?) Time stamps from each collection. If it's nebulous "feature usage data" then list each activity that is being logged (localized in my language). Lay them all out for me or I'm not going to press that Submit button.
3. I need to be able to verify that #2 is accurate, so save that dump to disk somewhere I can analyze it later (a sketch of such a dump follows this list).
4. The identifiers used to submit this data should be disclosed. Is a unique user id required to upload? Do you link subsequent uploads to the same unique id? Is that id in any way associated with me as an account or as a person in your backend? I want you to disclose all of this.
5. For how long do you retain the data I sent? How is this enforced? Is there a way for me to delete the data? How can I ensure that the data gets deleted when I request it to be deleted?
6. Do you monetize the data in any way, and if so, am I entitled to compensation for it?
I don't know of many (if any) data collection schemes that meet this bar, yet.
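For illustration, a dump satisfying #2 and #3 might be nothing more than a plain file on disk that the user can open and read before pressing Submit. Something like this (purely hypothetical field names):

  {
    "report_type": "crash",
    "stack_trace": "...",
    "app_version": "1.4.2",
    "os_version": "...",
    "timestamp": "2021-07-10T14:03:22Z",
    "client_id": null,
    "personal_info": [],
    "retention_days": 30
  }

Anything the vendor can't express in a readable schema like that, they shouldn't be sending.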
But some of these are impossible, e.g. how do you localize a stack trace?
Bad actors - spammers - ruined that. The trust is gone, and wow, is it a terrible idea to accept SMTP connections from any random IP address.
Legitimate senders have to "opt in" to the SMTP sending system by getting a static IP of good repute, or else using a forwarding host which has those attributes.
That, and additionally, if the data is stored, it's a question of when, not if, it'll become a part of some data breach.
To developers and entrepreneurs feeling surprised by the pushback here: please take an honest look at the state of the software industry. Any trust software companies enjoyed by default is gone now.
I work with UL a lot, and they have lists of standards and specifications that help us meet the safety requirements of electronic devices. These standards are then used to meet the customers' demand for a high level of safety. Customers in my field do not even consider products without UL listing. This strategy better informs the consumer, while the standards are kept by an independent firm whose incentives are aligned with maintaining its credibility.
I am not deep in the software field but I can imagine that groups like the EFF or similar orgs have a standard. The issue is that the consumers of these products don’t seem to care about this outside of the privacy advocate world.
Now that I think about it, it's safer to simply not use software that opts users into studies (like telemetry analysis) without informed consent.
That's the thing, isn't it? They'll never know. They can't; it takes deep technical knowledge to even be able to conceptualize what data could be sent, and how it could potentially be misused.
Which is to say, it's all a matter of trust. Shipping software with opt-out telemetry (or one you can't disable) isn't a good way to earn that trust.
Well, not exactly. The original comment was that "surveillance data" is useful to the user and their installation. For instance in getting better response to your error report. Or, I'd say, in keeping your software patched and up to date (check for updates is included in the OP as suppressed by 'do not track', since it necessarily reveals IP address).
Even if none of these things that make it useful to the actual users were in effect... surveillance would still be happening, because it lets the vendor monetize the users' data.
Useful to whom and for what seems relevant, instead of just aggregating it all as "sure, it's useful in some way to someone or it wouldn't happen."
I find tools that don't do this are generally more powerful because they allow for deep expertise and provide a ton of payoff if you put in the effort.
E.g: Vim. 80%+ of users probably don't use macros. Hell, I use them <1% of the time. But I'm sure glad they're there when I need them.
No, you can't remove it. Even though I use it rarely, its existence might be the very reason I use the tool at all - so that when I need the feature, it's available.
This came up with Audacity. I have my set of standard filters I run all the time; they don't bring much benefit, but they're there and they're nice. They would sit at the top of any usage statistics. Then there are the filters I need for a special effect, or to repair something really broken. Those I use hardly ever, but when I do, they make the difference.
Or, talking command line: `ls` without options I use a lot (well, actually a lie - I have some aliases in my shell rc); sometimes I use `-a` or `-l`. This doesn't mean maintainers should remove `-i`, since once a year or so I need it to compare inodes with log entries or something, and then it's important that the flag exists.
You need qualified information about what features are important. Not unqualified statistics.
I get that this might not be a popular sentiment, but resources are finite. If we have a situation where we can't maintain both features, which one do we focus on? Usage metrics can absolutely be beneficial there.
Also relevant: is there a maintenance cost, or is it code that rarely needs attention and doesn't compete with important new work?
I see how usage telemetry can be useful in deciding whether or not it's worth it to keep supporting a feature, but I offer two counterbalancing points:
1. What people may be worried about - what I myself am worried about - is the methodology creep; it's too easy to end up having telemetry drive feature removal decisions, as in, "monthly report says feature X is used by less than 1% of users, therefore let's schedule its removal for the next sprint". The problem here is, telemetry alone will likely lead you astray. It's useful as a data source, not as the optimization function of product development.
2. If a feature you're worrying about has significant use, you most likely already know it without telemetry - all it takes is following on-line discussions mentioning your product (yes, someone might need to do it full-time). If removing the feature will have major impact on your maintenance budget, and non-telemetry sources don't flag this feature as being actively used, you can just axe it - revenue hit from lost userbase you've missed is unlikely to be big.
From this follows that the telemetry is most useful for deciding the fate of features that aren't used much, and don't cost much to maintain. At which point, I wonder, do you really have such low margins that you can't afford to carry the feature a while longer? I'm strongly biased here, because I'm going only by my personal experience - but I'm yet to see a software company that doesn't have ridiculous amounts of slack. Between the complexity, management mess-ups, piles of technical debt and the nature of knowledge work being high-variance, having a feature slow your current development down by half won't have much long-term impact.
The question thus is, are the gains from usage telemetry really worth the risk and potential ethical compromise? Would those gains be significantly lessened, if the telemetry was opt-in, and the company put more work into getting to know the users better? I suspect the answer is, respecting your users this way won't hurt you much, and may even benefit you in the long run.
 - Other than one outsourcing code farm I briefly worked in (my boss loaned me for a couple weeks to a friend, to help him meet a tight deadline), but those kinds of companies don't make product decisions; they just close tickets as fast as possible.
 - And hopefully leading some devs to notice the need for a refactoring, so that the feature doesn't become a prolonged maintenance burden.
Fair enough. Still, there are two separate steps here: collecting crash reports and sending them. What if the app asked whether it can send the report, letting you optionally review it first? Many programs do that today; I think it's an effective compromise. Additionally, the app could store some number of past crash reports, and the support channels (a form, a button, an address in a help file...) could prompt you to check for, or automatically call up, those past reports, giving you the choice to include them. The way I see it, the app should make opting in near-zero friction, but still have users opt in.
It won't solve the problem of bad support requests completely, but nothing ever does - random people will still write you with problems for which you have no data (e.g. network was down when crash occurred), or for which no data exists (because requester is a troll).
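As a minimal sketch, that two-step flow could look like this (send_report is a hypothetical upload helper; everything else is standard library):

import json
import pathlib
import tempfile

def send_report(report: dict) -> None:
    ...  # hypothetical upload helper, e.g. an HTTPS POST to the vendor

def offer_crash_report(report: dict) -> None:
    # Step 1: always write the report to disk so the user can inspect it.
    path = pathlib.Path(tempfile.gettempdir()) / "mytool-crash.json"
    path.write_text(json.dumps(report, indent=2))
    print(f"Crash report written to {path}")
    # Step 2: transmit only after an explicit yes.
    answer = input("Review it, then send to the developers? [y/N] ")
    if answer.strip().lower() == "y":
        send_report(report)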
> For usage data, it allows developers to focus on features that matter and know which ones you can remove.
I accept this as an argument in favor, though personally, I don't consider it a strong one. I feel that "data-driven development" tends to create worse software, as companies end up optimizing for metrics they can measure, in lieu of actually checking things with real users, and thus tend to miss the forest for the trees.
Picking good metrics is hard, especially in terms of usage. The most powerful and useful features are often not the ones frequently used. Like, I may not use batch processing functionality very often, but when I do, it's critical, because it lets me do a couple day's worth of work in a couple minutes.
So, for me, can usage telemetry improve software? Shmaybe. Is it the only way? No, there are other effective - if less convenient - methods. Is the potential improvement worth sacrificing users' privacy? No.
> on iOS crash reports are opt-in, and only about 20-30% of users have them enabled. That is fine for huge programs or ones with little surface.
I feel the main reason this is a problem is because of the perverse incentives of app stores, where what you're really worried about is not crashes, but people giving you bad reviews because of them. Mobile space is tricky. But then, forcing everyone into opt-in telemetry doesn't alter the playing field in any way.
It's not such a strange thing given that they paid for the software.
And if sending a crash report means receiving money, then more users will send them.
I think the only way crash reporting can work, outside of support contracts, is as a favor by the user to the vendor. But, to maximize the amount of such favors, the vendor would have to treat users with respect - which is pretty much anathema to the industry these days.
What bug bounties also have is a big barrier to entry. You generally need to be at least marginally competent in software development, and do plenty of leg work, to make money with them. Turning regular crash reports into bug bounties removes that barrier, amplifying the spam problem.
For example, I used to work developer relations on TensorFlow. We wanted to make the framework accessible to enterprise data scientists. The problem was that these users were not familiar with the tools that we commonly used to get feedback - GitHub issues, the mailing list, etc.
Most of them were using TensorFlow on Windows via Jupyter, which wasn't well-represented among the users that we had frequent contact with.
It was really hard to understand the universe of issues that prevented most of these users from getting beyond the "Getting Started" experiences. Ultimately, these users are better served by easier to use frameworks like PyTorch, but I think a big reason that TensorFlow couldn't adapt to their needs is that we didn't understand what their needs were.
Another big problem is that it takes a certain level of technical sophistication to know how to send maintainers a useful crash report. If you rely on this mechanism, you will have a very biased view of your potential user base.
Having good intents does not justify skipping consent. The “opt-out” mentality is a very slippery slope, since you’re already stating that consent does not need to be explicit (hint: it’s not consent if not given explicitly AND freely).
Yet another reason to prefer free as in freedom software, which can be forked.
To bring the original simile full circle, let’s say a person signs a contract with another person, and then that person forces themselves on them. I doubt the presence of a clause in the contract saying “I agree to have sex with X” would absolve them of guilt.
This is rather a reason to prefer free software licenses. Culture has yet to catch up to this, but in the long run I hope the collective consciousness learns to distrust and avoid complicated proprietary software licences.
Why not add a splash screen on program startup that informs users of upcoming plans so they can intervene? Like: "Hey, we are planning to remove feature X to speed up the program - do you agree?"
And this is because actual usage metrics don't really translate to opinions. I have features in programs that I use maybe once every two years, but then I really need them. Then there are other features I use daily and I really hate them with a passion.
Your telemetry data will show you only that such features see little use. They won't tell you how much value is derived from that use, or what the effects of removing such functionality will be on the suitability and viability of your application.
Default telemetry is demonstrable harm for negative gains.
1. See: https://old.reddit.com/r/dredmorbius/comments/69wk8y/the_tyr...
You have basically just given a justification that crash reporting can be conducted based on legitimate interest instead of consent, and as such does not require opt-in.
Many people mistakenly believe consent is the only possible justification for data processing under GDPR, whereas there are actually six lawful bases (consent, contract, legal obligation, vital interests, public task, and legitimate interests), and you can ask a lawyer which one applies to a given data processing flow.
Note that whereas I do believe that crash reporting can indeed be considered legitimate interest, I wouldn't consider plain telemetry ("phone home without a technical good reason") to fall under that umbrella...
That's one reason (besides privacy) why I have Netguard running as a firewall on our Android phones and set to block traffic by default for each app, unless the app's creator convincingly explains why their app should be allowed to access the net.
I want the software to be streamlined, have no features except what I'll use, and for the community to be specifically people like me. I want other people to not use the software and use up dev bandwidth.
And I love it when telemetry biases the stats towards me. That way all devs will eventually be making software for people just like me.
Of course, I would opt-in too, with the same mindset but different use cases, and the software would provide equally for both of us. Add in a few more people like us, and we'd end up with a quality tool, offering powerful and streamlined workflows. Those who don't like it would start using a competing product, and tailor it towards their needs. Everyone wins.
Reality of course is not that pretty, but at face value, it still beats software optimized to lowest common denominator, serving everyone a little bit, but sucking out the oxygen from the market, preventing powerful functionality from being available anywhere.
 - It's a mistake that's much easier to make when you're flooded with data from everyone, rather than having a small trickle of data from people who bothered to opt in.
My dream is that everything is above the adequacy threshold for everyone else so that they don't build their own equivalent tool but that everything is also past my pleasantness threshold. I think the most effective means of doing this is to focus existing products into being past my pleasantness threshold while ignoring others since high switching costs keep most people on the same path they were before, and because things like medical research they don't really get to re-optimize.
I understand that this sounds sarcastic, but it is not.
For Support: follow these steps:
Step 1 - opt in to telemetry (diagnostic) data reporting
How much are you willing to pay me for my app usage data? Oh, nothing? Well then, buzz off.
I know the answer to that question. None. Never remove a feature.
A problem I encountered was also localisation. Once you localise your program, adding any string multiplies the work across every supported language. In that case, removing features can give you a lot of slack.
Sorry, but if devs require THAT tight a connection with end users just to MAINTAIN software, they should probably stop and leave. It's impossible to figure out a new feature from such a reactive approach; they'd have to resort to more traditional ways of interacting with end users anyway, making coverage analysis totally redundant.
Tighter user connection is suitable for enterprise software, not for general deployment.
And why so many worries about removing working(!) features?
As it is, letting the industry standardize on a DNT opt-out is just making telemetry more established as a standard practice, making it harder to argue that it should, in fact, be opt-in.
The problems we have with tracking on the web are in big part because it was an established practice before appropriate legislation against it was drafted. In the CLI space, we have an opportunity to nip it in the bud, because it's not - as of yet - standard practice for console tools to silently spy on you.
 - And while we're at it, standardizing on a browser-provided consent UI, instead of each site providing its own, with its own dark patterns. It's the same idea.
We've already been there and it was basically shelved because users were indifferent and companies wanted the data regardless.
Definitely agreed, and I'd want to have some form of strict liability for data breaches, based on what kind of information has been leaked. Currently, a company holding data about me (e.g. name, email address, phone number, credit history) causes a large amount of risk to me, but themselves carry no risk in case of a data breach. They are the ones who can decide to collect less information, keep shorter retention policies, or restrict access to prevent a breach, but they have no incentives to do so.
Yes, this would be a complete up-ending of many business models, but if your business model relies on collecting data without collecting the associated risk, it's a business model that society shouldn't allow to exist.
What many of today's software authors want/expect is free testing.
"To me, this seems to not just be admitting defeat, it's ensuring defeat right from the start."
While I do not use any of the example programs mentioned, it seems like these environment variables would be appropriate if the user wants to toggle between tracking and no tracking. However, for users who would never want to enable tracking, "no tracking" should be a compile-time option. It would not surprise me if that is not even possible with these programs. How is the user supposed to verify that "Do Not Track" is being honoured?
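For example, a hypothetical build-time switch (assuming an autotools-style project) could strip the reporting code out entirely:

  ./configure --disable-telemetry

with the source guarded so the network code isn't even compiled in:

  #ifdef ENABLE_TELEMETRY
      telemetry_send(event);   /* hypothetical reporting call */
  #endif

Then there is nothing left at runtime to honour or dishonour.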
Many of these apps and tools are open source and free. While I assume everyone wants to provide the best experience, it's hard for me to justify being angry about bugs and problems in tools that I got for free rather than bought.
Secondly, the industry realized that going fast, releasing often, measuring results, and improving over time is a winning strategy. No matter how often we as users complain that "they changed something again", we still want to get things fast. Deploying a new version once per year is not something we would really like in most cases.
And a fast development cycle inevitably comes with bugs - but they can be fixed quickly, not in the next year. Even if you spend two months testing your app, it will still contain bugs that surface the moment the first real user touches it.
This really needs to be a feature of the telemetry tools in the first place. Because, ultimately, most telemetry is being implemented by startup engineers who are burning the midnight oil to complete the telemetry JIRA ticket before going back to the long list of other stuff they have to implement.
I have experienced this from all three sides - as a software engineer implementing telemetry, as a product manager consuming telemetry, and now as a founder who is building a tool to collect telemetry in the most respectful manner possible.
Thank you for taking being respectful to users seriously.
I'd be very interested in learning how your consent flows look, and what other aspects of your product are driven by the goal to "collect telemetry in the most respectful manner possible". I couldn't see much on it on the landing page, so if you have a moment, could you provide additional information, either here or in private?
You cannot set up Bugout telemetry in your codebase without first defining your consent flow.
We have a library of consent mechanisms that you can chain together like lego blocks to build these flows. For example, our Python consent library is here: https://github.com/bugout-dev/humbug/blob/main/python/humbug...
Consent is calculated at the time each report is sent back. This means that your users can grant and revoke their consent on a per-report basis, which is the only respectful way to do things.
We are also building programs which will deidentify reports on the client side, before any data is even sent back to our servers. This work is still in the early stages, but here's v0.0.1 of the Python stack trace deidentifier: https://www.kaggle.com/simiotic/python-tracebacks-redactor/e...
I would really love to hear any feedback you have.
I like the design for your consent pipeline, and the code itself is very readable.
I have some further questions:
1. You say:
> You cannot set up Bugout telemetry in your codebase without first defining your consent flow
How is it enforced? Is it just an API limitation that I could work around by defining my consent block as below?
def much_consent_so_informed() -> ConsentMechanism:
    def mechanism() -> bool:
        return True  # "consents" without ever asking the user
    return mechanism
2. You say:
> Consent is calculated at the time each report is sent back. This means that your users can grant and revoke their consent on a per-report basis, which is the only respectful way to do things.
Correct. I like how you think about this. I assume the SDK user will be ultimately responsible for prompting the end-user for consent; I wonder if you have any "best practices" documents for the software authors, so that they don't have to reinvent respectful consent flow UX from scratch?
3. You say:
> We are also building programs which will deidentify reports on the client side, before any data is even sent back to our servers.
I don't see any code in that Kaggle notebook you linked (I'm not very familiar with Kaggle, I might be clicking wrong). Should I assume your approach is based on training a black-box ML model? Or do you use some heuristics to identify what data to cut?
Here is a recipe for adding error reporting (reporting of all uncaught exceptions) in a Python project. The highlighted line shows that, when you instantiate a reporter, you have to pass a consent mechanism:
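Roughly, the recipe looks like this (a minimal sketch; exact names are from memory and may differ in the current humbug release):

import sys

from humbug.consent import HumbugConsent
from humbug.report import HumbugReporter

def ask_user() -> bool:
    # a ConsentMechanism: any callable returning bool
    return input("Send crash reports to the developers? [y/N] ").strip().lower() == "y"

consent = HumbugConsent(ask_user)      # <-- the mandatory consent argument
reporter = HumbugReporter(
    "my-project",
    consent=consent,
    bugout_token="<access token>",     # placeholder
)

def report_uncaught(exc_type, exc_value, tb):
    reporter.error_report(exc_value)   # sends only if the consent check passes
    sys.__excepthook__(exc_type, exc_value, tb)

sys.excepthook = report_uncaught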
We allow you to create a consent mechanism that always returns true:
> consent = HumbugConsent(True)
But even with that mechanism, we ultimately respect BUGGER_OFF=true:
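Something to this effect (a sketch; the real check lives inside the consent machinery):

import os

def bugger_off() -> bool:
    # global opt-out: if the user sets BUGGER_OFF=true, no report is ever sent,
    # regardless of what the configured consent mechanism returns
    return os.environ.get("BUGGER_OFF", "").strip().lower() == "true"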
Of course, someone can always create their own subclass of HumbugConsent which overrides that check. We don't have a good way to prevent this, nor would we want to restrict anyone's freedom to modify code.
re: Kaggle and stack trace deidentification
We started by crawling public GitHub issues for Python stack traces and built up a decent-sized dataset of these.
Our emphasis is on building simple programs that we can reasonably expect to run on any reasonable client without using an exorbitant amount of CPU or memory. For this reason, we aren't using black box ML models. Rather, we analyzed the data and came up with some simple regex based rules on how to deidentify stack traces for our v1 implementation.
Apologies for the link not working. It seems I had to publish a version of the notebook. This link should work now: https://www.kaggle.com/simiotic/python-tracebacks-redactor
Actually, we started this work on a livestream if you're interested in watching: https://youtu.be/TFKe614Ml1M
Again, really appreciate your engagement and feedback. Thank you!
Let's imagine the ls command with telemetry. What happens when you make an error like this?
$ ls all-the-pr0n
ls: cannot access 'all-the-pr0n': No such file or directory
> Telemetry should always be opt-in
Opt in needs to be very precise and spell out exactly what is being shipped. For a lot of command line tools, telemetry is going to create more problems than it is worth.
> I wonder how much of existing telemetry already crosses the "informed consent" requirement threshold.
This is the question we all have to wonder about.
I don't question that analytics can be helpful. I do question the degree to which it is, relative to other methods of gaining the same insights (such as better QA, user panels, surveying people, etc.).
I also don't think it would be horrible. Inconvenient, yes. But horrible? People used to ship working software before opt-out analytics became a thing.
If you're less into respect and more into manipulation, offer them a meaningless trinket. A sticker on the app's home screen saying "I helped", or something.
This proposal is terrible and comes at the problem from the exact wrong direction. If someone wants to come up with a "export GO_AHEAD_AND_SPY=yes" envvar that enables telemetry, fine.
GDPR notwithstanding, I'm of the firm belief that any kind of telemetry in software should be strictly opt-in and require informed consent. I say "should" - it's an ethical view, not a legal one.
Title: "Diagnostic Data"
Explanation: "Send all Basic diagnostic data, along with info about the websites you browse and how you use apps and features, plus additional info about device health, device activity, and enhanced error reporting."
And this is ON by default.
"websites I browse"...
In the FOSS world, we typically have distros between the applications and the users. If the applications honor the variable, then that's all the control that is required. A distro can implement an opt-in model by defining the variable with a value of 1 in the base system, so that it's present right from boot.
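For example (assuming a Debian-style base system that reads /etc/environment at login), the distro could ship:

# /etc/environment - set by the base system; users opt in by removing or overriding it
DO_NOT_TRACK=1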
My issue isn't with the point of control - it's with the default. Telemetry of all kinds should be opt-in. People shouldn't have to worry that they're constantly being watched. They shouldn't have to hope that every single telemetry stream is operated by competent and careful software engineers, guided by honest and law-abiding managers. You know how this industry works; it's a rare case where a data collection scheme doesn't overreach, accidentally ingest too much, leak data, or turn malicious and pass it to bad actors.
The reliable and practical way is to have an ad blocker at the kernel level, similar to browser ad blockers.
Not everyone shares your views.
The software you write is yours. My data is not. You have every right to include or not include features, but you have no right to take my data without my permission. Your rights end where mine begin.
How would GDPR help with anonymous data? Say you have a CLI that sends back the frequency of usage of all top-level commands daily. If the user doesn't log into the tool, or that information isn't sent, then all the developer would have is an IP address. If they discard that, how would it land under the remit of GDPR?
I'm curious because I think it's easy for small developers to try and jump on this bandwagon. The big companies will all have vetted their telemetry strategy with their legal teams and have compliance reviews in place, as well as people who will handle cleanup from data spills. Bob is less likely to have this for his popular CLI tool.
I think it wouldn't, given proper handling of the IP address.
Where I'd expect your Bob to land in trouble is in mishandling crash reporting, in particular wrt. logging. It's very common for log files to accidentally acquire passwords or PII, or potentially other secrets protected by different laws. To be safe here, you'd have to ensure no user-provided data, or data derived from user input, ever enters the log files - which may include things like IP addresses and hostnames, names of files stored on the machine, etc.
Why not simply fork or choose not to install code you don’t like, rather than forcing your beliefs about what does or does not constitute acceptable code on the developers?
Firefox does it too. And it keeps a log of past crashes on about:crashes where you can decide which ones to submit.
If you click on the details button, you can see almost everything outside of pure hexdumps of RAM.
There's no need for automatic sample submission if you respect your users' privacy.
If it’s so important to the project the devs can ask users, educate them, and receive informed consent. There are plenty of ways a dev can force a user’s attention for a few minutes to hear their “pitch” and if the users still don’t want to opt in after hearing the reasons perhaps the reasons aren’t nearly as compelling as the devs believe them to be.
Here's the problem: people are idiots. You can manage or visit any GitHub issues page for a major project for ten minutes and recognize that even our industry is not immune from this. People also, overwhelmingly, use the defaults. When presented with the option to turn on tracking, most people won't, despite the fact that for most developers it's a legitimate good which benefits the user over the long term.
You can say "well, if people want to be idiots, that's their right". But idiocy never remains in isolation. If they refuse to update the app, then update Windows and it stops working, users don't throw their hands up and say "oh well, that's my bad". They don't complain to Microsoft. They complain to the app devs. That becomes a ticket, which is written far too often from a place of anger and hate. It's triaged by, usually, overworked volunteers.
Telemetry is not all bad. There is no "ensuring defeat" right from the start, as if its some war. Most developers just want to deliver a working project; telemetry enables that. Giving users the ability to opt-out, maybe even fine-grained control over what kinds of telemetry is sent, is fantastic.
Devil is in the details. Unless you are very careful, even a basic crash report may leak PII (commonly, through careless logging).
> Here's the problem: people are idiots.
I know what you're referring to, but I have a similarly broad and fractally detailed counter-generalization for you: companies are abusive. They consider individual customers cattle, to be exploited at scale. They will lie, cheat, and steal at every opportunity, skirting the boundaries of what's legal and treating occasional breaches into outright fraud as a cost of doing business.
Yes, I know not all companies are like that - just like not all users are technically illiterate. But the general trend is obvious in both cases.
What this means is, I don't trust software companies. If a company asks me to opt into telemetry, with only a generic "help improve experience" blurb, I'm obviously going to say no. It would be stupid to agree; "help us improve the experience" is the single most cliché line of bullshit in the software industry. There's hardly a week without a story of "some well-known company selling data to advertisers". Introduction of GDPR revealed the true colors of the industry - behind each consent popup with more than two switches there is an abusive, user-hostile company feeding data to a network of their abusive partners. So sorry, you have to do better than tell me it's in the long-term benefit of the users - because every scoundrel says that too, and I have no way of telling you and them apart.
And now for the fractal details part:
> Idiocy never remains in isolation. If they refuse to update the app, then update Windows and it stops working, Users don't throw their hands up and say "oh well that's my bad". They don't complain to Microsoft. They complain to AppDevs.
Yes, idiocy on both ends. There's a reason why users refuse to update the app. It's because developers mix security patches, bugfixes, performance improvements, and "feature" updates in the same update stream - with the latter often being a downgrade from the POV of the user. I'm one of those people who keep auto-update disabled, because I've been burned too many times. I update on my own schedule now, because I can't trust the developers not to permanently replace my application with a more bloated, less functional version.
(Curiously, if usage telemetry is so useful, why does software so often get worse from version to version?)
Secondly, if the user updates Windows and your app stops working, it's most likely your fault. Windows cares deeply about not breaking end-user software, historically it bent over backwards to maintain compatibility even with badly written software. It's entirely reasonable to expect software on Windows to remain working after Windows updates, or even after switching major Windows version.
> Most developers just want to deliver a working project; telemetry enables that.
Telemetry does not enable that. Plenty of developers delivered working projects before telemetry was a thing. What enables delivery of working projects is care and effort. Telemetry is just a small component of that, a feature that gives the team some data that would otherwise require more effort to collect. Data that's just as easy to lead you astray as it is to improve your product.
> Giving users the ability to opt-out, maybe even fine-grained control over what kinds of telemetry is sent, is fantastic.
Yes, and all that except making the telemetry opt-in is even more fantastic. You want the data? Ask for it, justify your reasons for it, and give people reasons to trust you - because the average software company is absolutely not trustworthy.
An error-handling branch without a counter on it will not get through code review. That an incident was detected through user reporting and not telemetry/alerting is a deeply embarrassing and career-limiting admission in a postmortem. That logs were insufficiently detailed to reproduce the problem will be a serious and high-priority defect for the team. Something like an entire app without any crash reporting is gross negligence on the part of the Senior VP on whose watch it happened.
I'm not really remarking whether this is good or bad, you're free to think this is a bad move, but from my perspective it is definitely the way the industry moved. Among my colleagues, releasing without a close eye on thorough telemetry is some childish cowboy amateur-hour shit.
Like if you’re Amish or under some weird historic preservation regime or working near a delicate billion-dollar scientific instrument. Perhaps you really need carpentry done with hand tools. You can find a contractor who wants to do that. You don’t hire a normal firm and then get mad that they failed to seek consent before plugging in their table saw.
This approach leads to fewer vendors.
Not sure why this is not obvious?
> would that be a bad thing?
Not saying it would be.
I like the idea, but the execution leaves a lot to be desired. I can understand why some Homebrew devs think it's just an attempt from someone to pad their resume. It's essentially a single person setting up a website, then submitting a bunch of untested pull requests to a bunch of projects.
I imagine this would work much better if a large distro like Debian would adopt this first. They have the credibility and weight necessary for such a project, they can make it much more useful by asking for the desired setting during OS setup, and they can make sure it's universally respected via patches in the packaging process. From there it would have a chance at wide adoption.
That all said, there has to be a path for unknowns to contribute good ideas back FOSS and I do think this is actually a pretty good idea that deserves to gain attention.
> 1. User runs a cli command with —telemetry-disable flag
> 2. CLI sees the flag, and sends one telemetry request to note that it was disabled
Wow. The first thing the "Hey, please don't send telemetry" flag does is send telemetry.
I can't. That suggestion seems to be the product of a desperate reach for an ad hominem attack.
There are clear, plausible, and obvious reasons for the project's existence that require no hidden motive to understand or explain.
I'd agree OS-level integration for this stuff would be better, supposing OS manufacturers would also elect to include their own telemetry under such an umbrella.
I'd love it to gain more traction. It was an idea and I thought it would be better an idea and a website than just an idea.
It was strange to see it get labeled as a marketing attempt during my attempts to gain some traction, considering I'm not selling a damn thing.
I have severe focus issues, so it had to be a one-day project unfortunately, which is why a couple of the patches weren't tested very well. Treasure your flow state when you get it.
Ultimately I didn't invest any more time into it as developers who put default-on spyware in their apps don't actually want more people opting out. It's a doomed concept.
Indeed, because it assumes developers who are doing opt-out tracking will respect this voluntarily.
I disagree with the poster above about Debian adopting this. They shouldn't adopt DO_NOT_TRACK, they should ban any package that does tracking by default from their repo; in distros that already keep non-free software out of the standard repository this wouldn't be that much of a leap. This seems necessary, as there's no sign that a "please don't track me?" flag will work better in the terminal than it does now in the browser.
Debian maintainers are amazing and they go one step further: they patch it out when they distribute it. They also patch out old version time bombs and all manner of other phone-home.
As an open source software developer, I do have some sympathy for the upstream devs here, and some frustration with distro maintainer policies. I'm not interested in getting a bunch of bug reports for issues that were fixed 4 years ago, or introduced because Debian maintainers "patched out" something they shouldn't have.
I understand that upstreams might get frustrated that their bugfixes haven't made it to stable distribution releases, but it's important to understand that expecting otherwise (except with manual, per-fix intervention) is generally against the principle of having a stable distribution release in the first place, and exactly why users are choosing to use stable distribution releases.
> Debian maintainers "patched out" something they shouldn't have
Debian sets its own policy about what is and isn't acceptable, in order to give users consistent behaviour across all packages. Again: Debian users want this consistency; users who don't want this use other distributions (like Arch for example, which aims for the opposite). An example is this topic: Debian maintainers generally patch out telemetry-by-default.
> I'm not interested in getting a bunch of bug reports for issues that were fixed 4 years ago, or introduced because Debian maintainers "patched out" something they shouldn't have.
I agree with this part. Distribution users should be reporting bugs to their distribution bug trackers in the first instance, and only sending reports upstream in cases that the bugs are confirmed to be relevant to upstream.
That's unnecessarily harsh. A distro maintainer's primary responsibility is making the distro as a whole work together, and sometimes that means choices that are not optimal for individual programs/libraries on their own. But packaging itself already reduces the burden on upstream a lot, by preempting build-related support requests from users as well as many compatibility-related ones.
Sometimes upstream's interest is also not aligned with the user's interest (e.g. the topic of this thread), and there the distro will tend to choose the user's interests - that's a good thing.
As for time bombs specifically, those don't make much sense when the software is installed via a repository that has an update mechanism. Not wanting bug reports for old versions is no excuse for planned obsolescence.
If you think a distro is increasing your support burden it is quite acceptable to tell users from that distro to use the distro's bug tracker.
Sadly, this is in itself adding a large support burden.
I think distributions should be doing a much much much better job of informing their users how they should report issues.
tracker.debian.org seems to be almost impossible to find on google for example (If I search i3-wm debian it's not in the first 5 pages!).
> I have severe focus issues, so it had to be a one-day project unfortunately, which is why a couple of the patches weren't tested very well. Treasure your flow state when you get it.
This isn't really an excuse. Imagine how somebody on the other side of a broken PR sees this. It 100% reads as "I don't care about your project enough to do the work and am instead doing this solely for my personal satisfaction".
Those probably didn’t help your cause.
Creating more social and reputational consequences for individual worker bees who make such commits on the job is also on my to-do list.
Ultimately the opt out vars are token efforts by developers anyway. These projects only do the bare minimum of opt-out-ability because they have to be able to point to the opt-out setting as justification for their shipping of spyware-by-default. Making it easier to opt out isn't something they want, regardless of how much I do or don't mask my contempt for such unethical, user-hostile practices.
The way the Audacity thing is playing out is instructive. Many devs simply feel entitled to take over your machine as if it is theirs and your double-click is a blank check. My PRs against autoupdates have run in to similar developer resistance.
Wow. This is a terrible approach for a worthy problem. You want to make the options into "lose my livelihood via being fired or lose my livelihood via social and rep consequences"? That is awful.
No joke, in my mind this "solution" is on the same level as monstrous cancel culture. You put tracking in an OSS project that sends two anonymous IDs and maybe an IP address (which is easily spoofable anyway)? Time to cancel your career and your livelihood.
If someone else telling the truth about one's actions on their webpage is a threat, perhaps people should be more measured in the actions they undertake.
What I'm describing is reporting, not cancellation.
Humans are the ones responsible for their actions.
What do you mean? The closed PRs are all for free and open source software. You seem to misunderstand their intentions and have a lot of entitlement.
If we talk about morality and ethics, I think this is worse than implementing tracking, isn't it?
I know you guys want to change the world, and I absolutely agree that there is too much tracking in the world, but FFS, let's just take a step back and think about YOUR actions and their consequences.
Not everyone has the luxury of quitting a job whose morality they disagree with. I know we are talking about OSS, but a lot of developers live on donations and sponsorships, or have even been bought out by larger companies, and to keep their jobs they have to do said implementations.
Clicking on your HN profile, I see “I am available for hire”. It was clear to me that’s what the maintainers were referring to: you marketing yourself, which you’d have been able to do with greater effectiveness as the originator of a feature adopted by popular open-source projects (had it worked).
Note I’m not claiming that’s what you did, I’m explaining what you found strange.
I'm not sure that's true at all.
It's one Homebrew dev.
At the very least, there should be two different switches - "ads" and "everything else".
If you don't want telemetry or crash reporting, that's fine - you may not care about helping the developers of an open-source project improve their software and that's your personal choice. Similarly, you may want to manually install security patches, while I want them to be automatically installed so I have one less thing to worry about.
There may even be some crazy person that wants ads. I don't, but I'm not going to try to take away their freedom to choose them, which is what this proposal would do - an all-or-nothing switch for non-essential network access.
Give me granularity. There's no (user-facing) reason not to.
It's on the developers of the software to make these settings granular and off-by-default. If I want to help a developer, I'll go out of my way to do it and flip one of those switches if that's what it takes.
Personally, I don't think this goes far enough; I think programs should require explicit consent simply to connect to the network, and most should request permission to connect only to certain domains. But that's just me ;).
why? privacy advocates in these discussions always jump to this position and assume everybody is going to agree just because they've invoked the magic p-word. telemetry is useful, and most people don't really care enough to change the defaults one way or the other.
assuming that "because privacy" is not an argument that will sway me, can you explain why i should default to the less-useful option just for the sake of appeasing the people most likely to change the defaults?
No, I can't. I think that ethical principles like respect for users' privacy are more important than collecting data to fix bugs/features. Shareholders may disagree, of course; this is where developer agency and collective bargaining can come in, but that's a longer discussion.
> telemetry is useful, and most people don't really care enough to change the defaults one way or the other.
This sentence is correct, but it isn't enough to justify making telemetry opt-out. I don't think this is a situation where the apathy of the majority can overrule the rights of the minority:
- People can't always choose to use or avoid software; they have schools, employers, an inability to make informed consent, etc. that can prevent them from using your software with consent to its terms.
- Privacy is often a need, not a want. People--including their future selves--are often at-risk, and need software to respect their vulnerability by default (as auditing all transmitted data for all software is beyond unrealistic).
- A person should not have to justify having privacy. Others should have to justify taking it. That's how rights like privacy work; privacy is something we have by default until it is infringed upon.
- Rights like privacy, speech, and information access without censorship (from books to newspapers to the Internet) aren't driven by will of the majority. They're driven by the fact that preserving them for the minority is necessary.
Users have lots of things that would be useful to developers, but that doesn't mean developers are entitled to them.
I'm just going to point out that you accused privacy advocates of magical assertions, then did the same thing :)
(there is a point of view which says that you make good products through thoughtful and rigorous design, rather than poring over databases of events to try and reverse-engineer how people are actually using your product)
* Figuring out which features are not being used, and being able to change things accordingly. This also covers UI decisions - oh, we added this new toolbar; is anyone actually using it?
* Prioritizing localizations of a particular country / language.
* An actual positive feedback loop for products that don't generate revenue. Most open source project maintainers just receive bug reports, feature requests, and sometimes hate mail. Telemetry information which roughly shows that even though there are some people complaining loudly about the software, there are 50k other users happily using it, is very very motivating.
* Crash reporting / exceptions monitoring - bugs are inevitable, and this gives so, so much information. In my experience, very few users actually report bugs. Many might even just rate your app badly, and then ignore any attempts at following up - especially since they have deleted the app.
There are two problems with this view.
The first is that it's generally user-hostile. If you mean your software to be useful to users, then you need to see what your users' needs are, and how they're actually using your software - not how you think they should be. There are people who design pieces of software as opinionated, beautiful gems that are not meant to be used by others (which includes software that pays lip service to users but is so user-hostile that it's effectively not for them anyway) - and telemetry, crash reports, auto-updates, and more generally a feedback loop from user to developer, are not for them.
The second is that it requires a lot of discipline. Look at how user-hostile most applications (both open-source and proprietary) are anyway - do you really think that their developers are going to have the discipline to carefully engineer a good user experience? As an idealist, I wish that they did, but as a realist, I know that in most cases that will never happen, and so telemetry and crash reports (crashing is part of a bad user experience) will get you a slightly better piece of software than nothing at all.
Thoughtful and rigorous design includes user research and testing.
In my experience it takes more discipline to use telemetry responsibly. I've seen equivalent data used to justify removing 1 feature and making another more prominent. It seems to go hand in hand with UI churn. And it seems to lead to dismissing user feedback.
Crash reports and event streams are different. But there's no reason not to ask consent anyway.
Developers might think telemetry a good thing - then decent ones would care to convince the user that it's a good thing too. Some users will agree, some will not, but asking is just a basic act of respect and decency.
I think regularly updating the package formulae for homebrew is actually necessary to carry out the functionality most users expect from homebrew.
And the proponent of the standard got in an argument with homebrew about supporting the env variable. (Although not necessarily for suppressing automatic updates? Which might be a violation of the standard, to suggest you support it, but only support it incompletely?)
Perhaps the proposed standard needs some more consultation and fine-tuning (as is common with standards for a reason) before trying to strong-arm projects into adopting it.
It may be that both you and I think that's not a great idea, or at least needs more fine-tuning as a proposed standard.
> This is a proposal for a single, standard environment variable that plainly and unambiguously expresses LACK OF CONSENT by a user of that software to any of the following:
> ad tracking
> usage reporting, anonymous or not
> automatic update phone-home
> crash reporting
> non-essential-to-functionality requests of any kind to the creator of the software or other tracking services
I don't think fine tuning would help. People who think they're entitled to collect user data without consent don't want to make it easy to opt out.
This seems like one of those internet debates where what we're talking about keeps changing in pursuit of "winning" rather than enlightening.
I was also unimpressed with the tone of the issues/PRs opened. As an adult, one of the many lessons we learn is that if you want people to do something you want, you should be friendly, constructive, and positive - not calling out people on bad behaviors. I definitely don't want to encourage ignoring issues and letting people get away with things, but in this context a change in attitude would have brought much more success IMO
Also I found it really weird that this has just bubbled up to the front page when the original PRs for this project were opened in 2019. Why is this popping back up now?
Unfortunately, the way this guy conducted himself was terrible. His tone and behavior in the issues and insistence that it's a standard just because he made a webpage were really off-putting. I can see why no one was champing at the bit to endorse this.
They don't want to make it easier to opt out; the opt out only exists to deflect blame.
The majority of "getting someone else to do something" in our society is done with negative "coercion" (for lack of a better term) rather than positive.
Most people go to work only because a policeman will come to their house and make them homeless at gunpoint if they don't.
The percentage used to be even higher, but excommunication doesn't carry the sting it once did.
Ultimately, the end result of all legislation and regulation of industry is the implied threat of negative consequences, up to and including state-sanctioned violence, for people who violate laws. Calling out bad behaviors is a common and traditional technique for getting new laws passed.
Shame and bad PR is a powerful tool. Last November I shamed Apple into dropping unencrypted OCSP. Perhaps one day I can shame developers into not spying on their users without consent.
Fact: the bad PR of shipping nonconsensual spyware is the only reason there is an opt-out lever at all. If we can amplify that, we can make it being enabled a default.
$ sudo ip netns add jail # create a new net namespace with no access
$ sudo ip netns exec jail /bin/bash # launch a root shell in the new namespace
# su - your-normal-username # become non-root
$ ping google.com # see if network is accessible?
ping: unknown host google.com
$ ifconfig -a
lo Link encap:Local Loopback
LOOPBACK MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
$ audacity & # See if you can track me without a network audacity...
An actual effective approach to privacy is to use a firewall to block all unwanted connections (allowlist only), and a DNS sinkhole like pi-hole.
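A rough sketch with nftables (the addresses are placeholders; the pi-hole handles DNS):

$ sudo nft add table inet egress
$ sudo nft add chain inet egress output '{ type filter hook output priority 0 ; policy drop ; }'
$ sudo nft add rule inet egress output oifname "lo" accept
$ sudo nft add rule inet egress output ip daddr 192.168.1.2 udp dport 53 accept   # pi-hole
$ sudo nft add rule inet egress output ip daddr 203.0.113.7 tcp dport 443 accept  # an approved host

Everything not explicitly allowed simply never leaves the machine.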
IMO we should get more aggressive when it comes to mitigating tracking, and use tools that actually threaten the status quo. A tool like AdNauseam was blocked by Google on their store because it actually worked - that says a lot more than an env var string they'll happily ignore.
Saying an imperfect solution is worthless is a little like throwing the baby out with the bath water.
IMO it's not even an imperfect solution. I'd say it's performative, and past DNT efforts have failed, so why are we trying this again?
Maybe a tool that lets one easily report software for GDPR violations? That would have more teeth.
The _vast_ majority of applications this relates to are not going to have marketing and advertising departments. They're unlikely to have more than a handful of maintainers. Most are likely just someone's side project.
> The industry can't even manage with web standards, imagine trying to get scummy tracking companies to comply.
We're not talking about a new standard that's even remotely in the same league as web standards in terms of complexity, nor are we talking about getting every scummy company to comply. A lot of command line applications already have an opt-out, so all this proposes is making that opt-out option the same across them.
This isn't any different to other common environment variables like http_proxy -- not every application supports it but enough do that it is still useful.
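For instance, curl picks up the lowercase form with no per-application configuration at all (assuming a proxy is actually listening on the address given):
$ http_proxy=http://127.0.0.1:3128 curl http://example.com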
> IMO it's not even an imperfect solution. I'd say it's performative, and past DNT efforts have failed, so why are we trying this again?
As of yet, no effort has been made with regards to DNT in this field. You're conflating running untrusted web applications with running trusted (and often open source) applications locally on the command line. They're two very different fields, and as already said, DNT options already exist for the latter, just with every application having its own preferred variable name. All this proposal seeks to do is standardise that name.
> Maybe a tool that lets one easily report software for GDPR violations? That would have more teeth.
To reiterate my original comment: the existence of one doesn't prevent the existence of the other. Why pick when we can have both?
Ahh, my big issue is that I don’t mind tracking in genuinely honest software. Want to know how many people are still running your app on 32-bit machines and that’s it? Be my guest!
It does seem like DO_NOT_TRACK isn’t supporting honest software, though. I can’t enable it for Audacity and disable it for Homebrew, for example.
alias brew="DO_NOT_TRACK=1 brew"
# or, wrapping the real binary in a function:
brew() { DO_NOT_TRACK=1 "$(which brew)" "$@"; }
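And the split the grandparent actually asked for (tracking refused everywhere except Homebrew) is just as short, assuming tools only read the variable from their environment; env does a PATH lookup, so the function doesn't recurse:
export DO_NOT_TRACK=1                        # refuse everywhere by default
brew() { env -u DO_NOT_TRACK brew "$@"; }    # strip it for the one tool you trust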
Seriously, your concern is true for any software you run. You install software--say AWS CLI--on trust. But maybe it also installs a non-CLI keylogger, right? You'd have to do some serious investigation to know.
The point, I think, is to have a standard to make it easier on the user to select the option across multiple applications.
It should, imo, be opt-in instead of opt-out (i.e., tools should behave as if DO_NOT_TRACK=1 by default).
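Either way, the check costs a tool author almost nothing. A minimal sketch in POSIX sh under the current opt-out semantics (send_usage_ping is a hypothetical function; how exact values are interpreted is up to each implementation):
case "${DO_NOT_TRACK:-}" in
    ""|0) send_usage_ping ;;   # unset or 0: telemetry allowed under opt-out semantics
    *) ;;                      # any other value: the user said no
esac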
Agree on the software installs, and the risks attached. Modern OSes sandbox software wrt IO permissions, IMO we should be doing that sandboxing at the network level too.
Disagree with you on the opt-in. It should be neither: the software vendor should be the one asking permission, either explicitly or as a prompt on a firewall.
Lack of consent.
It’s very sad how this “you consent unless otherwise stated” ideology keeps growing and growing.
“We’re not spying on you, it’s just that you never expressed not wanting to be spied upon”.
Here’s a very educational video on how giving consent works: https://youtu.be/oQbei5JGiT8
Do you want to enable Feature X?
1. Yes.
2. Ask me again later.
Somehow, "no" is just not in their vocabulary. Imagine if "Silicon Valley" was a guy asking a woman on a date, and the only options he understood were "yes" AND "ask again later".
It's all about expectations: consent for everything is unworkable.
Equating some basic telemetry to rape is idiotic and offensive.
The variable should be named DO_TRACK.
> devlinzed - We shouldn't accept a future where spyware is the norm and rely on this environment variable. We know how that turned out with HTTP Do Not Track. As long as spyware is bundled with Homebrew, Homebrew shouldn't be used or recommended or normalized. The solution is for Homebrew to remove its spyware by having explicitly opt-in telemetry only.
> Homebrew blocked devlinzed on Nov 15, 2019
What we should work towards is privacy-conscious tracking, used sparingly only for monitoring critical pieces of the software and not all user actions. Flag/reject software that violates this. Then there is no need to opt out over privacy concerns.
Privacy-conscious tracking begins with asking for permission to disclose my personal information (e.g. my IP address) before anything ever goes on the wire.
They don’t need your IP address to track usage or health metrics. Most of it can be collected anonymously. We should encourage software to simply not collect personal information at all.
I don't even like to use the word "tracking" any more, as it's lost all meaning. Not all "tracking" is identical: some is highly problematic, some is a little bit problematic, some is just fine. When you lose sight of any and all nuance and difference, the conversation becomes pointless.
It's just that these topics attract people on horses so high they need spacesuits, asserting all sorts of things in absolute terms, to the point where it can appear you're in the minority.
Though honestly, most users do not really care about the checkbox that says "I agree to give you access to all my personal information and sell it to everyone" when they click install, and it's such a sad situation. GDPR had great potential; it's sad it hasn't been able to live up to it.
Most of those checkboxes are not worth anything under GDPR, because people don't give a clear, informed consent when they have no chance of understanding what is being asked.
The law is not the problem. Lack of enforcement is.
Indeed, IP addresses are considered personal data in some cases – which only really means that you need to follow the GDPR: have a legal basis for processing, do not process the data for reasons other than that for which you have a legal basis, delete it as soon as you no longer need it, implement protective measures, etc.
Given the massive backlash against the GDPR and "cookie walls" by newspaper publishers, it's doing a pretty good job. Can you imagine a company like Apple whipping app vendors into shape regarding data collection without GDPR pressure?
I'd love to see them do something about it. And I hope they do.
They actually do, yes. I can see an auto-update "phone home" as legitimate interest, to be able to quickly revoke insecure software, but usage analytics are clearly opt-in only.
It defines consent as “freely given, specific, informed and unambiguous” given by a “clear affirmative action.”
Pre-ticked boxes, for instance, are explicitly not allowed.
> Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her, such as by a written statement, including by electronic means, or an oral statement. This could include ticking a box when visiting an internet website, choosing technical settings for information society services or another statement or conduct which clearly indicates in this context the data subject’s acceptance of the proposed processing of his or her personal data. Silence, pre-ticked boxes or inactivity should not therefore constitute consent. [...]
Sometimes you gotta play the hand you're dealt.
Anyway, out here in the real world, there's legislation that forbids opt-out tracking.
C'mon, it's such a minor change. Getting these PRs in means _you are_ a major player (at least to some) and you might be able to drive change in a positive way. Why not jump on the opportunity?
The guy also seems quite rude for someone with "Rude people blocker" in his bio.
Sad to see such, ah, well-socialized people in open source projects.
> I would rather see the different packager managers sit together and work on that standard.
And especially big players should get involved, like the debian/ubuntu apt people, the fedora people, the nixpkg people, pip, node and all the others.
It’s almost as if any of the mentioned package managers had opt-out tracking code embedded in its CLI.
1) It's not a standard at all. Some guy made a webpage and used the word "we". Even in the GitHub issues there was debate about what having this variable set would mean (e.g. should a call to a formatting service be blocked).
2) One maintainer pointed out that being an early adopter would force them into advocacy that they aren't prepared to do. Judging from how argumentative Sneak was being, there's probably an element of "we don't want to be the vanguard for this bully and his variable"
I do think it's very much a minimal-effort PR. It's hardly a PR at all: at the very least the tests and documentation should have been updated, and actually writing out some more rationale would help as well. I certainly wouldn't send a PR like this, at least not without the text "this is a proposal, and if accepted I will update the tests and documentation as well".
Could be wrong, though; the maintainer may simply have needed a vague excuse not to merge it. Haven’t done a deep dive: I couldn’t be bothered to dig through the state of the repository at the time of the PR, two years ago, on mobile.
I think opening PRs makes a lot of sense, but smaller projects, or projects that don't yet have an opt-out mechanism, are the most likely to adopt it in the early days. Once you gather some momentum around those, bigger projects might follow suit.
If this had restrained itself to the definition of tracking used by the original failed DNT effort, I might think very highly of it, but attempting to shoehorn the more stringent “offline by default” viewpoint into the framing stolen from DNT makes this proposal unlikely to be widely adopted.
It’s really unfortunate that this good technical idea was combined with that overreach, as once the idea is rejected by others for going too far, any similar idea will likely be rejected without consideration. Oh well.
Tangentially, do all the adblock extension users realize that their IP address is constantly divulged by all the rules updates their extensions are running in the background? This proposal would, if fully implemented through the entire software stack, result in browsers launched on the user’s machine refusing to allow any background network connections, which would break all ad blockers as they depend critically on that behavior. I’m not convinced that users prioritize “don’t divulge my IP address” over “block ads”, which seems to be a conceptual nail in the coffin of overreach here.
Expanding that to mean “do not reveal the user’s IP address” is a significant expansion of the original meaning, having little to do with the protection against being individually tracked that the DNT header was designed for.
Asking Homebrew to disable analytics has nothing to do with targeted advertising opt-outs, but supports the ideological goal of the authors in promoting their new idea that revealing the existence of the user is “tracking”. That expansion exceeds the scope of the definition of “tracking” as used to reference targeted advertising and marketing, something that is not applicable to the Homebrew project’s existing flag in any way.
Attempting to promote their beliefs by aliasing those two concepts together - tracking protection against targeted advertising, versus disabling telemetry and auto update - is therefore overreach to me, especially when presented without this context to projects such as Homebrew and others in pull requests, and I think it will ultimately lead to their effort’s failure.
To me, this is all a direct consequence of their attempt to ride the coattails of the failed DNT effort by reusing its key phrase “do not track”. They should have used a different name. In a few seconds, I can think of a pretty great one that still acronyms to DNT, so clearly creativity wasn’t the obstacle here. Instead they chose to anchor to “Do Not Track” and it’s too late now to change that. We’ll see what happens.
It was a lot more amorphous than that. Some of the people behind DNT did have that particular meaning in mind, but others had different sets of things they wanted to prevent. There was enough disagreement on this point that they didn't manage to get consensus on anything more than "this specification does not define requirements on what a recipient needs to do to comply with a user's expressed tracking preference" -- https://www.w3.org/TR/tracking-dnt/
I feel like many people aren't aware that popular open source tools already collect usage data.
More useful would be something like this that lets users opt in or out of the various TYPES of tracking that are done.
For example, I don't care about application usage telemetry. If you want to know that it's hard for me to remember a particular subcommand and that I'm often looking up help for it, I don't care if you know that.
If you want to show me "we're hiring" horsepoo, well, you can go away. Opt out. Same for "please donate, I'm not rich enough yet" messages. And straight ads can just die.
Options to tweak those things granularly would be welcome, if we're just daydreaming about things that will never get adopted.
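Since we're daydreaming anyway, the granular version could be as small as a category list in the one variable (entirely hypothetical syntax; nothing parses anything like this today):
export DO_NOT_TRACK=ads,nags,donations   # hypothetical: block those, keep usage telemetry
export DO_NOT_TRACK=all                  # hypothetical: block everything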
Why do you feel entitled to judge someone's humble request for a donation, when they develop the free software that you use often enough to be annoyed by it?
"hey I'm gonna give away my time and effort for free" is not how you get money. it's how you get geek cred or build up a resume or whatever. all fine, of course. do what you want.
if you want to charge for your code, charge for it and stop asking for donations every time I clone or build your stuff.
Have a donation button somewhere so people CAN donate, of course. But asking feels very... beggy, and it drives me nuts, especially when it's from people who make $500k/yr at their day job.
I don’t know who hurt you in FOSS, but you sound really mean.
Neither of those require network requests, so it seems to me that they're a completely different sort of thing than this effort intends to address?
It's not clear to me why they would think this. My guess is that you always get this attitude if you offer software as a service, and many people have worked on such projects, even though the vast majority of software is not offered this way (indeed, 96% of IT is on-premises spend). I think the people who have worked on those projects have carried the attitude that usage information is their information over to other domains, without realizing that they are different domains.
For example, it claims one of the banned things is 'automatic update phone-home'. However, for brew they are only trying to disable analytics; brew will still "phone home checking for updates", and there is no way to disable that (at present).
Personally, I'd want to see something more fine-grained. I want automatic updates (or at least a notification that one is available); I can't imagine using a package manager that didn't provide that.
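For what it's worth, Homebrew does document per-concern switches roughly this fine-grained (see man brew), though whether they cover every request the client makes, I can't say:
export HOMEBREW_NO_ANALYTICS=1     # disable usage analytics only
export HOMEBREW_NO_AUTO_UPDATE=1   # skip the automatic 'brew update' before other commands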
If a user cannot be bothered to open a ticket themselves, or at least opt in to telemetry to demonstrate the issue, then is it really an issue that developers should fix? If an app crashes in the woods and no one bothers to report it, is it a bug? (Joking, partly.)
For commercial apps the dynamic is a bit different, since customers can and do expect things to be fixed without any action on their part.
In these cases, filing a ticket is often pretty useless. What's needed are widespread stats to determine severity, frequency, correlations, etc.
In fact, I'd say the overlap between bugs that can be meaningfully reported in a ticket and those best detected through crash reports is pretty small. So you need both.
Why do you think developers shouldn't fix crashes? Shouldn't developers themselves decide what they want to fix?