Hacker News

I disagree that telemetry is inherently bad. As product engineers, telemetry is often our only visibility into whether or not a system is functioning healthily. How else can you detect difficult-to-spot bugs in production?


> our only visibility into whether or not a system is functioning healthily.

Your problem here is viewing the end user's setup as part of your system.

It's the user's private system -- why should you have any visibility into how it is functioning?


They said a system, not their system.

Car computers report telemetry to mechanics, and given that digitization allows for economies of scale, this isn't that different.


> Car computers report telemetry to mechanics

Yes -- and they shouldn't.


Yes, they should. It assists with repairs, increases safety, leads to recalls, and in cars with GPS units, even reports road emergencies and saves lives.

You can feel uncomfortable that this is happening, which is an entirely okay opinion to have, but when it comes to forcing that opinion on others, please don't.

Imagine if we were debating the qualities of buttons because the Amish were uncomfortable using them.


Sure, but the Amish have the right to choose whether to use the button or not. You should have the option to opt out.

Or at least have that option until society says “well, given how many lives are saved, we need to collect this data from everyone”. But you’d get a say in that conversation as well.


You can opt out, the exact same way.

Don't buy a car that uses telemetry, just like you don't buy clothes that use buttons.


As a software engineer I disagree. You are saying that you want to collect my personal information so you can fix your bugs. I don't see it being a valuable trade. I'll just find someone who can fix their bugs without tracking me.


>You are saying that you want to collect my personal information so you can fix your bugs.

How do you define personal information? Let's use Chrome as an example. Recording what website I visit is clearly personal information. What about recording how many tabs I have open, how much RAM each tab is using, and when each tab was last viewed? Is that personal information to you? I personally don't value keeping that private, and it is probably a valuable piece of information that could help the developers improve what has been one of the biggest user complaints about Chrome almost since its release.

I think that is generally OP's point. Each piece of data exists on a spectrum in value for both the user and the developer. Data should be kept private when it has value to the user. There is little harm in sharing the data with the developer when the user would deem it low value and the developer would deem it high value.


It's pretty easy to underestimate which information is relatively static and could be used to track you. Number of tabs: low possible range and pretty variable, even for tab hoarders, so it's low-entropy information. Amount of RAM used by each open tab: that should be statistically significant, and I'm pretty sure it could be used to identify people if there are enough tabs open for a long enough period. When each tab was viewed: every (not-)clicked tab is a bit of information, and you don't need much to narrow down a person. Interesting reading on de-anonymizing seemingly anonymized data: https://www.wired.com/2007/12/why-anonymous-data-sometimes-i...
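The point about bits of information can be made concrete: a trait shared by a fraction p of users reveals -log2(p) bits of identifying information, and roughly 33 bits are enough to single out one person among ~8 billion. A minimal sketch of that arithmetic, with completely made-up prevalence numbers and an (unrealistic) independence assumption, purely for illustration:

```python
import math

WORLD_POP = 8_000_000_000  # bits needed to single out one person: log2(N)

def self_information_bits(p: float) -> float:
    """Bits of identifying information revealed by observing a trait
    shared by a fraction p of the population (0 < p <= 1)."""
    return -math.log2(p)

# Hypothetical prevalence estimates, purely illustrative.
signals = {
    "has 40-49 tabs open": 0.01,             # assume 1% of users
    "this exact per-tab RAM profile": 0.05,  # assumed prevalence
    "this tab last-viewed pattern": 0.001,   # assumed prevalence
}

# Treating the signals as independent (they usually aren't), the
# revealed bits simply add up.
total = sum(self_information_bits(p) for p in signals.values())
needed = math.log2(WORLD_POP)
print(f"bits collected: {total:.1f}, bits to uniquely identify: {needed:.1f}")
```

Even these three "harmless" traits add up to around 21 bits here; a handful more and the anonymity set shrinks to one.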


Telemetry isn't okay simply because it can't be used to track someone. The number of tabs I have open isn't identifiable information, but it's still my private information, and should not leave my computer without my advance consent. Using my computer hardware to transmit my usage activity (even my unidentifiable usage activity) without my consent is a dick move.

My usage data is mine, as is my hardware and network connection.


You are going beyond my example by saying this information can be used to track you. This is the only information collected in my example. It is not associated with any other information, so there is no value in trying to de-anonymize it.

Perhaps it is better if I approach the question from a different angle. What is the downside of someone having this specific information about you? Can you think of a single negative repercussion from someone knowing how many tabs you have open? That is the fundamental point here.

The idea that all information related to a user should inherently be private just seems like a needlessly draconian standard, and one that didn't exist in the pre-digital age. The privacy value of each piece of information can vary wildly. Some of it deserves protecting. Some of it doesn't.


Personal information is a bit nebulous. Do we consider the list of function calls in a stack trace "personal information"?


If I sent the stack trace to you, no. Otherwise, yes. It's my stack trace after all.

(Perhaps "private" not "personal" is a better term here, but stack traces can expose personal information too, if they include details about function arguments.)


For me, the point is really about control.

These companies know people don’t actively want to be surveilled which is why they sneak this shit in instead of being upfront about it.

If it was so great for consumers it would be an opt in not an opt out hidden behind a series of dark patterns.

Even Apple switches Siri back on after every OS upgrade.


+1 to this. As long as proper privacy concerns are addressed and the data gathering is imperceptible to the product experience, telemetry signals are immensely valuable for improving the product in a variety of ways.


Many users care more about their privacy than your product.


Certainly true, but it doesn't counter the original claim: Anonymized telemetry collection with proper privacy considerations can have a net positive impact on the product.


I would agree to the telemetry if all code was FLOSS and everyone could see what exactly was being transferred.


So why does $product need to send telemetry data via Google? Why can highly complex software that runs most of the world's internet infrastructure (Linux) work without telemetry? Why is telemetry not opt-in, or reliant on reports when a bug causes an issue, like Firefox crash reports? I'd rather have privacy and buggy software than bug-free software in exchange for no privacy at all.


>So why does $product need to send telemetry data via Google?

Because Google is responsible for most of the software on said product. Who would be receiving that telemetry data if it wasn't Google?

>Why can highly complex software that runs most of the world's internet infrastructure (Linux) work without telemetry?

First, this is a false premise because it ignores the potential that telemetry could help improve this software but most Linux distros have decided against it for other reasons. Secondly, it ignores that some distros do in fact include telemetry.

>Why is telemetry not opt-in

It probably should be when it comes to something that has potential to invade privacy, but we have to be realistic that practically no one will actively turn on telemetry if it is initially set to off. That drastically decreases the value of the collected data and it basically turns into nothing more than something customer service can tell someone to turn on while trying to troubleshoot a specific issue.

>or reliant on reports when a bug causes an issue, like Firefox crash reports?

Telemetry isn't just about bugs. It is also about guiding future development, knowing what features are used, knowing the workflow for users, etc. It can provide value beyond crash reports.

>I'd rather have privacy and buggy software than bug-free software in exchange for no privacy at all

This is completely fair. I would generally agree with you and bet that most HN readers would too. However this is not a binary choice. Not all telemetry is inherently bad. Not all loss of privacy is inherently damaging. This is a complicated issue that will involve compromises and anyone sticking to a complete extreme of it being all bad or all good isn't going to offer anything productive to this conversation.


> Because Google is responsible for most of the software on said product. Who would be receiving that telemetry data if it wasn't Google?

Depends; on Android, maybe. On my Android device, not really: I don't use Google software, with the exception of the core Android system without Google Play services. On iOS, the HTML-based web, or desktop systems, I see no need for Google to exist. If you need telemetry, run your own damn telemetry server instead of feeding the FAANG privacy nightmare even more.

> First, this is a false premise because it ignores the potential that telemetry could help improve this software but most Linux distros have decided against it for other reasons. Secondly, it ignores that some distros do in fact include telemetry.

Distros may, but Linux itself does not. The fact that the majority of Linux distros work just fine without telemetry shows that large-scale software development and deployment work just fine without needlessly invading people's privacy.

> It probably should be when it comes to something that has potential to invade privacy, but we have to be realistic that practically no one will actively turn on telemetry if it is initially set to off.

So, given a fair and free choice, everyone will choose against telemetry? And that doesn't make you ask yourself, "are we the baddies?"

> That drastically decreases the value of the collected data and it basically turns into nothing more than something customer service can tell someone to turn on while trying to troubleshoot a specific issue.

So, where's the problem here? That sounds EXACTLY like how a good telemetry system should work. If the bugs don't bother the users, there's no need to invade their privacy to fix them; if they do bother them, telemetry can be a tool to help. There's no need to generate "valuable data" except to invade people's privacy.

> Telemetry isn't just about bugs. It is also about guiding future development, knowing what features are used, knowing the workflow for users, etc. It can provide value beyond crash reports.

Why is it any of your effing business what my workflow is like? If I need a feature, I request it. This shit is only accepted because the majority of users lack a meaningful understanding of how deeply app and web developers invade their privacy.


>Depends; on Android, maybe. On my Android device, not really: I don't use Google software, with the exception of the core Android system without Google Play services. On iOS, the HTML-based web, or desktop systems, I see no need for Google to exist. If you need telemetry, run your own damn telemetry server instead of feeding the FAANG privacy nightmare even more.

The article is specifically about the mobile OSes and the default apps and services. I'm not sure why your general complaint about third parties using FAANG tracking is relevant here, but I have no argument against it.

>Distros may, but Linux itself does not. The fact that the majority of Linux distros work just fine without telemetry shows that large-scale software development and deployment work just fine without needlessly invading people's privacy.

You are doing the same thing again. You are assuming a level of "work just fine" without having a comparison for what it would look like with telemetry. Ignoring the privacy issues for a second, can you say definitively that Linux would see no technical improvements from developers having access to telemetry data?

>So, given a fair and free choice, everyone will choose against telemetry? And that doesn't make you ask yourself, "are we the baddies?"

Because the benefits of telemetry are widespread while the downsides are localized. The incentive for an individual user to participate is low and isn't well understood so they will default to off. Expand that to everyone and you end up with the tragedy of the commons.[1] It has nothing to do with skulls on a cap, it is basic individualized economic incentives playing out that lead to less than ideal results for the whole.

>So, where's the problem here? That sounds EXACTLY like how a good telemetry system should work. If the bugs don't bother the users, there's no need to invade their privacy to fix them; if they do bother them, telemetry can be a tool to help. There's no need to generate "valuable data" except to invade people's privacy.

>Why is it any of your effing business what my workflow is like? If I need a feature, I request it. This shit is only accepted because the majority of users lack a meaningful understanding of how deeply app and web developers invade their privacy.

Once again you are returning to bugs. This is about more than just bugs. Very few pieces of software are published and then abandoned beyond bug fixes. Today most software needs to constantly evolve and add new features. Maybe you are the type who will request those features from a developer in official channels, but that isn't common.

Also, most users will simply decline when presented with the option to submit a bug report. They just don't see a strong enough or immediate enough connection between a bug report and the bug being fixed. I would bet any developer who has spent time informally talking to their users has heard complaints about their software that were never voiced through official channels. That is just the nature of things. A developer will get more valuable data if they don't leave the sending of this information up to the whims of the user in the moment a bug report screen appears in front of them.

[1] - https://en.wikipedia.org/wiki/Tragedy_of_the_commons


Your arguments all ultimately focus on the value telemetry generates for the company, not for the user. These two should theoretically coincide, but in practice, they don't. Telemetry may be a fine thing in the abstract, but it's mostly used in a very hostile manner.

People would be more comfortable with telemetry if they could trust it's being used only to fix bugs and improve workflows. The reality is far from that, though. Telemetry's main use in end-user software is to provide data to direct various aspects of development that ultimately boil down to: how can we extract more money from our users? That's part of the reason we get dumbified apps full of questionable design decisions and user-hostile anti-features. Instead of asking people what software they want, "data-driven" companies are just setting up a control system around their users, with changes in the software being meant to influence behavior towards better monetization.

Until that gets fixed, I'm going to keep preemptively blocking any and all telemetry. I'm also very happy that GDPR forced companies to surface a lot of hidden surveillance, and that I can just dismiss all these notifications knowing I'm legally opted out by default. To the extent I am in fact opted in - i.e. companies literally breaking the law - I yearn for the day DPAs in member states get serious about issuing fines. Until then, the next time I spot telemetry enabled by default, so help me God I'm filing a GDPR complaint.


> You are doing the same thing again. You are assuming a level of "work just fine" without having a comparison for what it would look like with telemetry. Ignoring the privacy issues for a second, can you say definitively that Linux would see no technical improvements from developers having access to telemetry data?

"Works just fine" in this case means "is the backbone of the global internet infrastructure". Could it potentially be better with telemetry? Maybe. Could it potentially be better if Linus Torvalds personally surveilled all my interactions with any technology, no matter how private? Likely. Could it become better if I stuck a probe up my butt to measure frustration when using any product? Sure. What an asinine argument: of course telemetry can make software better in some cases, but the global invasion of every computer user's privacy is not a worthwhile trade-off for some bugfixes and giving POs some rough idea of user interaction to ignore anyway.

> Because the benefits of telemetry are widespread while the downsides are localized. The incentive for an individual user to participate is low and isn't well understood so they will default to off. Expand that to everyone and you end up with the tragedy of the commons.[1] It has nothing to do with skulls on a cap, it is basic individualized economic incentives playing out that lead to less than ideal results for the whole.

The downsides are that my privacy, and the privacy of millions of users who frankly do not understand the implications, is invaded for some fringe benefit to the developer. It's not a tragedy-of-the-commons situation but abusive behavior from developers targeting users who don't know any better. Thought experiment: if every person on the planet magically gained a deep understanding of how telemetry works, what would the vast majority choose to do? Get it out of their lives as much as possible. Would you give someone detailed data on where you take your car, at what speed, at what time, with the added "benefit" of governments gaining access to that data, so that you use 5% less wiper fluid?

> Once again you are returning to bugs. This is about more than just bugs. Very few pieces of software are published and then abandoned beyond bug fixes. Today most software needs to constantly evolve and add new features. Maybe you are the type who will request those features from a developer in official channels, but that isn't common.

This has nothing to do with bugs. I don't need Google or Mozilla to know how I use my web browser. It's none of their fucking business in any way, shape, or form. If it crashes enough, I will either complain or use a different product. If they want to know what improvements they should make or how they should evolve their product, they can ask me: openly, freely, and with consent. If 99.999% of users do not care to answer, then that's fine. Just because you can invade my privacy to improve or evolve your product doesn't mean you should, or should be allowed to. In fact, it should be fucking illegal without explicit, well-informed consent.

> Also, most users will simply decline when presented with the option to submit a bug report. They just don't see a strong enough or immediate enough connection between a bug report and the bug being fixed. I would bet any developer who has spent time informally talking to their users has heard complaints about their software that were never voiced through official channels. That is just the nature of things. A developer will get more valuable data if they don't leave the sending of this information up to the whims of the user in the moment a bug report screen appears in front of them.

This is just insane. If a user doesn't care enough about a bug or a crash to fill out a bug report or voice their opinion on it, why do you think you can just invade their privacy instead? Just because almost everyone can't be bothered to answer surveys on the phone, should survey designers go and analyse everyone's trash instead, without asking? It's valuable data after all, and most people don't answer surveys. Why don't we just go ahead and track everyone's movements while we're at it? I'm sure we could improve traffic flow with that valuable data. Just because most people wouldn't like that doesn't mean we can't invade their privacy because we think we know better.

God, I hope the EU gets its shit together with the GDPR someday and fines devs and companies like that out of existence.


This argument does not hold, because you can compare Google to Apple (in this case, based on the article): if it were true, then Apple, which gathers less data, would have an inferior product (more bugs, slower feature development, etc.) than Google. I see the competition, which is Apple in this case, doing relatively fine without (presumably) gathering as much data, so I absolutely don't buy this claim.


Funny enough, you say my argument doesn't hold, but your reasoning actually falls perfectly in line with my comment.

The argument isn't that all telemetry is good or that we should accept any level of it.

The argument is that all telemetry is not inherently bad.

As the article states, Apple does telemetry too. If you are okay with Apple and not Google, you are agreeing with me that this is a nuanced issue and that the specific level of telemetry needs to be debated. If you are taking the stance that all telemetry is bad, you need to find another company to champion besides Apple.


I agree with you that not all telemetry is bad, and as a software engineer I understand its value. What I am trying to say is that this kind of argument has been used by the likes of Google, Apple, Facebook, etc. as an excuse to collect an excessive amount of data (sometimes even illegally), and that's the reason I pushed back against it. As you correctly mentioned, this is a complex issue. For example, there is no way for most users to differentiate between what could be useful and what is an unnecessary violation, or between what privacy breach is severe and what is not. Until we have a practical solution to these problems, I won't trust those companies to play ethically and only use my data in good, harmless ways. As for Google, it's worth keeping in mind that we are talking about a company that intentionally misleads users on occasion to collect their data.


> I'd rather have privacy and buggy software than bug-free software in exchange for no privacy at all

Unfortunately, nobody offers bug free software in exchange for no privacy. It’s still buggy.


We’re increasing the risk exposure for every user for our own trivial convenience. It is inherently bad, just like other forms of widespread surveillance that is often motivated by some seemingly good cause, like catching terrorists.


Telemetry is inherently bad if it's not done with the informed, opt-in consent of the end user whose data it's (mis)appropriating, oftentimes silently.

There's no issue with opt-in telemetry, where the user says "yes, it's okay to track me".

Invisible, silent, always-on telemetry is actually just spyware that's been mislabeled.

Ultimately it's not the telemetry that's at issue: it's the unethical and selfish behavior of the software/device manufacturer.

No sane or reasonable person thinks that an EULA is informed consent.
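The opt-in model described here is cheap to implement. A minimal sketch (the file name and settings schema are hypothetical, invented for illustration) where the absence of a recorded choice always means "no":

```python
import json
import pathlib

# Hypothetical location for the user's recorded choice.
CONFIG = pathlib.Path("config.json")

def telemetry_consented() -> bool:
    """Consent must be explicit and recorded; a missing or malformed
    setting means 'no', never 'yes' (opt-in, not opt-out)."""
    try:
        settings = json.loads(CONFIG.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return False
    return settings.get("telemetry_consent") is True

def send_event(event: dict) -> bool:
    """Returns True only if the event was actually queued for upload."""
    if not telemetry_consented():
        return False  # dropped on the client; nothing leaves the machine
    # upload_queue.put(event)  # transport elided in this sketch
    return True
```

The key design choice is that the check happens client-side before any data leaves the machine, and that only an explicit `true` counts as consent, so an upgrade that wipes or garbles the config file silently reverts to "off" rather than "on".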


Once upon a time fixing bugs in production didn't happen because the product got all the bugs out before production. If it had bugs in production, the product failed.


You used the phrase "once upon a time", a common opening for fairy tales, which seems apropos for describing a magical land where products achieved a 100% bug detection rate before release. I suppose this might have been true 50 years ago, at the dawn of the electronic calculator, but that is now an age of legend...


I've often wondered about this commonly repeated belief that software of ~30 years ago was less buggy than software today, because it doesn't really line up with my memories. There's definitely part of it that comes from a standard "back in my day", rose-tinted glasses sort of thing.

But I actually think a lot of it comes from the fact that modern software can be easily patched, whereas older software couldn't. It is easy to believe that software today is buggier because of just how many patches we get for it. But back in the day, any bugs that existed in the product were not as visible, because we weren't getting weekly updates where the patch notes say "Bug fixes."

How many massive vulnerabilities existed in major products of the day, and continued to persist unnoticed by all of us because of the relative impossibility of patching them out?

On top of that, modern software is simply more complex -- often times an order of magnitude more complex. (Whether this increased complexity is always needed/appropriate is a separate question.) I'm not sure what metric you would use to be able to do a "bugs per complexity unit" sort of comparison between then and now, something that attempts to control for increased complexity, but my intuition is that it would be pretty flat.


When that was true, several decades ago, products generally had upwards of 2 years of design/architecture/engineering effort and definition prior to another 3-5 years of development.

It still (sometimes) happens for medical, aerospace and other transportation software that interfaces with hardware where safety is a concern.



