Am I the only one who thinks this is kind of petty? It's not like Audacity is trying to read your email (or even rip off your tracks). The telemetry I'm reading about is literally:
1. error reporting - the user has to click a button to share crash logs. It's basically a macro to help the user create a support ticket.
2. version checking - no PII being kept, literally just helping the developer get an idea of what versions they need to be supporting--you do want them to support the version you're running, right? Well, if not, you can turn it off.
The former is probably not a big deal if you have to press a button -- nor would I expect it to require an EULA clause; you could be informed of the data collection at the time you press the button.
The latter though? Somehow the industry got by for decades without intimately knowing what version every single user of the software was running. I think they'll survive without it.
Anyways, how is it "petty" to make a version of the software for which there is clear demand? No one stomped on their birthday cake, they just took an OSS piece of software and modified it, compiled it, and released it. In what world is there any malice in doing the exact thing an open source license exists to allow you to do?
> Somehow the industry got by for decades without intimately knowing what version every single user of the software was running
This is the same argument that was used about seatbelts and the internet. You might have survived without it before, but that doesn't mean it isn't going to be better with it in the future.
Seatbelts are a pretty thoroughly disingenuous argument here. No one is going to die because a company making a DAW doesn't know how many people are using what versions.
So far no one in the many replies to my post has given even a single reason why the world is substantially better with data collection on versions. The argument is either this kind of attempt to substitute in an Obvious Good Thing without explaining how this thing is like it, or that "it's just a harmless bit of data collection, as a treat."
I think data collection should probably be held to a higher standard than that. Come up with why it makes my life better first, then pitch me on it. Otherwise you're just pissing in cornflakes and calling it breakfast.
> Somehow the industry got by for decades without intimately knowing what version every single user of the software was running.
I guess this changed long before Audacity. Today many programs, including open source tools, bundle some sort of telemetry.
I think that instead of opposing Audacity we should talk about other projects in a similar situation; we should treat them the same and think about a more general solution. There are already discussions like this, e.g.: https://consoledonottrack.com .
In our current societies, this is how you push for a more general solution:
Claw back the privacy on high-profile applications, and get enough attention to either push the application to change or make the fork the go-to. This reminds other companies that people do care, and that the cost of tracking is taking privacy from users who will then leave when they can.
I value privacy, but version numbers seem like an odd place to make that stand since they're one of the least unique bits of information, especially in this century where automatic updates are increasingly the norm.
> Anyways, how is it "petty" to make a version of the software for which there is clear demand?
I don't think it's clear that there is any demand for a version that doesn't collect crash reports or version usage. The outrage was about the vague clause about information necessary for law enforcement or similar.
I'm perfectly happy to see that code (if it exists) being thrown out, but to remove user-confirmed crash reporting?
I’d expect every app I use, OSS or not, to have automated crash reporting (a confirmation dialog is a nice touch) and version usage tracking. It’s possible to maintain software without it, just like it’s possible to maintain software without other feedback like a bug reporting system or a profiler.
The clause is there because of the data collection, not independent of it. As people have been so quick to point out: "you will find this clause in everything that collects data as standard boilerplate."
If it weren't collecting version info what possible reason could it have to have a clause in the EULA about giving info to the police? You have to be phoning home for that to even mean anything.
I can see why the developer wants it, but a person wanting something is not inherent evidence of it being "good". I want lots of things that are bad for me, and even some things that are bad for other people.
Here's the thing: In order to track version numbers, this developer now subjects its users to potential police surveillance. Even if that surveillance is "harmless" on its own, they have still made that bargain. And I feel like we're well past the point of credibly taking any given individual act of surveillance as being isolated and unentwined with more problematic kinds.
Right, but that's just boilerplate you'll find on any such agreement. It means that they may be compelled legally to provide whatever information they collect and that you should be aware of that fact. It's not like they're secretly plotting ways to steal your secrets.
Count me on Team Crash Report, for sure. Anyone who's worked on any kind of project like this knows how valuable live user telemetry is. These features make software better for all of us. If you really don't like them don't use them and carefully audit the opt-out mechanism to make sure it works. Don't throw poop on the walls.
It is not about stealing secrets. In almost every privacy-focused discussion, one side builds up this argument about "stealing secrets" just so they have something to argue against.
Software should not collect information in the first place if law enforcement, litigants, or other authorities could end up demanding it. If the information is interesting to a third party, then the collection filter is not fine-grained enough.
Live user telemetry does not have to mean Personal Data. If I know that 80% of users who download version 1.2.3 got a crash within 5 minutes, which living person can I identify with that? If, however, I get download logs of IP addresses, browser identity tags, file names, Windows profile names, user directory names (and so on), then that crash report is providing unnecessary personal data.
If I have access to the crash reports, can I do business intelligence gathering? Can I discover information which gives stock market insights? If the answer is yes, then you are collecting too much information.
The only reason to not publish all crash reports openly on the web for anyone to download should be undiscovered security vulnerabilities. The data itself should be inert.
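To make the distinction concrete, here is a rough sketch of the kind of crash record that stays anonymous and aggregate; the field names are invented for illustration and are not Audacity's actual schema:

    # What an anonymized crash record might contain. Nothing here identifies a living person.
    anonymous_crash_report = {
        "app_version": "1.2.3",
        "os_family": "Windows",                        # OS family only, no build fingerprint
        "minutes_since_start": 4,                      # enough to compute "crashed within 5 minutes"
        "crash_signature": "NullDeref in ExportFLAC",  # stack hash or symbol name, not user data
    }

    # Fields of the kind the parent objects to; collecting any of these turns the
    # report into personal data:
    #   - IP address
    #   - browser/device identity tags
    #   - file names or project paths
    #   - Windows profile or user directory names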
I assume you have not read these types of privacy policies before, but it's extremely common for web sites and online services to disallow children under 13, at least in the US, because of COPPA compliance. In general, it is illegal for commercial entities operating in the US to collect data on children under 13 (although in some cases there are some exceptions). See for example the GitHub privacy policy, which includes a similar clause: https://docs.github.com/en/github/site-policy/github-privacy...
This is a perfect example of how the less money you charge, the worse your users will behave. Audacity is collecting crash reports. Plenty of software collects crash reports. Crash reports are necessary for building reliable desktop software.
For some reason free (gratis) software attracts the most entitled users ever. If I were in charge of Audacity I'd be inclined to charge a $1 "distribution fee" just to weed these users out.
I don't think this is an example of entitled users.
I think it is rather an example of an external company (Muse Group) not understanding the community behind the piece of software they have taken over.
Why is the crash report collecting information about children, and not filtering out the information so only information about the software is collected?
While that is possible, I strongly suspect that in Audacity's case it is because they are outsourcing the data collection to Google's analytics services, and Google uses the personal data as payment for the service.
They could do the data collection themselves and choose not to store personally identifiable information, in which case they can remove the legal boilerplate since it won't be needed. This suggestion naturally already exists in the GitHub issue.
I'm not sure what you mean, these laws aren't strictly concerning telemetry. Did you mean that telemetry is bad because children under 13 years old could accidentally use it? If so, that's the purpose of the law -- to prevent that. You can sue a company that is found to be unlawfully collecting data on children.
Generally, no, that's not what that law in the US is acknowledging. It doesn't make any special consideration for any definition of "spyware" or any other similar concept, it talks about all kinds of data collection, including ones that would be otherwise voluntary and beneficial for an adult. There might be some other US law that talks about that, but COPPA doesn't.
It’s audio editing software. Kids should be allowed to use audio editing software freely. If the law requires no-one collect data on kids, stop collecting data from kids rather than tell kids they can’t use the software. That way, adults and kids have the ability to use the software.
Leaving the data collection on all the time makes data collection part of the terms of use of the software. Which makes data collection part of the business model. Which makes the software spyware. It is always watching you.
If I followed you around all day, you'd label me a stalker. Even if I didn't approach you, didn't proactively threaten you, and didn't tell anyone else what I knew about you, you could still legally bring some level of force against me. How is constant telemetry any different?
If you think that collecting anything from your users that would put you in violation of COPPA should a child decide to use a local application against local data on a local computer is appropriate in any way, you probably ought to think again. There is no justifiable need, period.
I'm not sure what you mean. It can still be operated offline, in which case there is no telemetry sent. The analytics is just another service that you can use.
Audacity is not a website or online service. It's a completely offline audio editing program that has worked fine without telemetry for over twenty years. My kids were using it just fine, and now they're suddenly not allowed to and the only difference is the telemetry.
Yeah, I'm kinda confused by the huge amount of anger surrounding this. Yes, the text is scary, and it's scary because being made aware that governments do have the power to make pretty much anyone turn over information they have on you isn't fun. But this really isn't a problem specific to Audacity.
This still requires the adversary to coerce your friends/family members into snitching on you - it involves effort and risk for them and doesn't scale.
Compromising a telemetry server is a one-off operation, would work at scale and is much less risky as the targets have no way to detect it.
That said, I would say good luck troubleshooting your server if you don't collect any logs whatsoever. I wonder how you would even protect against brute-forcing and DDoS attacks if you never stored IP addresses for any amount of time.
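For what it's worth, even the minimal version of that protection holds client addresses for some window of time, if only in memory; a rough sketch (the threshold and window are arbitrary, not from any real server):

    import time
    from collections import defaultdict

    WINDOW_SECONDS = 60   # arbitrary example values
    MAX_REQUESTS = 100

    # In-memory only: addresses live here for at most WINDOW_SECONDS and are
    # never written to disk or to long-term logs.
    _recent = defaultdict(list)

    def allow_request(client_ip):
        now = time.monotonic()
        # Keep only timestamps inside the window, then decide.
        hits = [t for t in _recent[client_ip] if now - t < WINDOW_SECONDS]
        if len(hits) >= MAX_REQUESTS:
            _recent[client_ip] = hits
            return False  # rate-limited
        hits.append(now)
        _recent[client_ip] = hits
        return True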
The issue here is that the server you downloaded the desktop app from is one. You can reduce the amount of this you have to deal with by shipping a native app, but you can't get rid of it entirely as long as you plan to host a web site or a download of something, or if you plan to let users communicate useful things back to you (such as their hardware specs, OS version, crash reports, usage patterns, etc).
I'm personally not confused, just disappointed. Sadly I've seen far too many FOSS discussions that become overrun with irrationally paranoid rhetoric, sometimes bordering on the reactionary. This stuff is nothing new. You'd think that with the ability to quickly check the code and recompile it to get rid of any unwanted bits, that would make this kind of attitude go away, but for whatever reason it only seems to make it worse.
It's usually what happens when a software project has a lawyer involved. Copyleft spooks people, anonymous contributions that may or may not be licensed spook people, lack of a privacy policy spooks people, etc.
In my opinion, if it's desired to have FOSS driven by individual contributors, the legal education aspect for each contributor is just as important as the contributors knowing how to code. Sadly I think some projects are way behind on that.
The division happened when they added telemetry to Audacity. Forking it is the only move forward. Time will judge the projects on their own merits. In the meantime, we can all at least rest well knowing the chance to defend against greed is available to us because of FOSS.
I understand that you feel upset that they added something that you didn't want, but you don't have to continue adding to the division and cynicism. Forking is not the only move, and I would actually suggest against it -- what you want is simply a build with the telemetry disabled. I don't think you want to throw away any other new features that aren't related to the telemetry (and in fact, you may still be able to indirectly benefit from it that way if it leads to some valuable product insights for them). So characterizing this as greed doesn't make much sense. If they were getting super rich off this and not making any other improvements, then maybe you could say that, and I would join you in saying hey, something's not right here, but that doesn't seem to be the case.
More exhausting than developing towards an entirely different (and often redundant) feature set without any help from upstream? That's what is usually meant by "fork." If you want the minimal effort option and you don't care about new features or security fixes at all, you can just stick with an old version, no fork is necessary there either.
Complacency about these things is how our freedoms get eroded. At some point these organizations need to be called out for such behavior to send a message to the rest of the corporate world looking to sink their claws into OSS acquisitions.
This specific complaint seems to be about data privacy, not about a freedom being eroded.
Also, since you can fork it, as has been previously mentioned, there seems to be no purpose in objecting to this type of FOSS acquisition. Worst case, the project ends as it was before the acquisition, with no corporate support or funding whatsoever, at which point it seems it won't make any difference whether there was a complaint or not.
> When do we share your information with others? ...When the law requires it. We follow the law whenever we receive requests about you from a government or related to a lawsuit.
Maybe I'm nitpicking here, but isn't Mozilla saying they will share any data they've already collected with law enforcement (which should be just basic telemetry stuff), while the Audacity EULA says it will actively collect data if compelled by law enforcement? Doesn't that imply collecting any other type of data LE wants? Again, maybe the exact wording makes no real difference, but the way it is phrased can make it enough of a valid concern to "justify" a fork imo.
I think the issue is more that the group that bought Audacity has raised several red flags, of which this is one.
However, Audacity is cool and all, but I wonder if it would be better to create a simplified frontend based on Ardour, the way GarageBand is for Logic.
Has it though? What are the red flags? The only thing they've done wrong IMO is using Google Analytics and they changed that pretty much as soon as this whole thing started.
The telemetry has always been opt-in (though they didn't communicate that very well) and their privacy policy is just standard stuff.
I haven't been following it closely, but in another thread on Audacity earlier today there was a threat by the Muse group saying that it was illegal to use their public API to download public domain content and that they would collaborate with China to physically track down developers who posted code that used the API. Or something like that.
Again I haven't followed it. Honestly it looks a bit to me like the playbook you see when a project is sold to a private equity firm.
Audacity is designed as an audio editor rather than a DAW. I'd be surprised if a DAW like Ardour had destructive editing, spectral analysis, or a sample-level waveform view. From the UI perspective, I don't know if it has the ability to "open" an audio file and turn it into a single-file project, or if you can disable tempo and beat-based timings altogether (since podcasts and such lack a concept of tempo).
Telemetry creates a liability for the user for no benefit (and as Windows 10 demonstrates, the amount of telemetry is anti-correlated with software quality).
Even if we assume there is no malicious intent from either the developer or their infrastructure provider (their initial telemetry attempt was using Google Analytics, which is definitely malicious), it can still be co-opted by a malicious actor who is able to observe the network traffic or compromise the telemetry infrastructure, putting the users at risk.
And an otherwise sane friend of mine thinks Alexa is dope. I guess it's always a matter of where you draw the line, I for my part, want to be in total control of who gets sent what concerning ALL my devices, and I think this should be my right as a citizen and customer.
> their initial telemetry attempt was using Google Analytics which is definitely malicious)
What's wrong with Google Analytics? As long as the ad retargeting bits aren't enabled (they aren't by default), it's just a regular anonymous analytics solution.
> What's wrong with Google Analytics? As long as the ad retargeting bits aren't enabled (they aren't by default), it's just a regular anonymous analytics solution.
This assumes you trust Google? Why would you trust an entity that has a business incentive to stalk people, is able to hide it (given how many factors go into ad targeting, it's impossible to reliably prove from the outside which data was used to target an ad), and is big enough to successfully fob off regulators and get away with it (their current data processing consent flow violates the GDPR, for example)?
Muse Group, a Russian for-profit company that seems to have a shell headquarters in Cyprus (see https://www.crunchbase.com/organization/muse-group), recently acquired Audacity as part of their expansion into the broader audio-production world.
As a first action, they changed their Contributor License Agreement, making a future change from the GPL to a closed-source license possible. (It also allows for dual-licensing a paid version.) https://github.com/audacity/audacity/discussions/932
They currently say they're not going to do that, but if they wanted to (and Muse Group is for-profit), they could, without the contributors having any recourse. (They have already confirmed a cloud service for Audacity, which to me already reeks of "we want to have closed-source tools that use our open-source contributors' code".)
Having a CLA isn't a problem in and of itself. For example, as they correctly state, the FSF requires a CLA because they want to license all their stuff as "GPL 3 or above", which is only possible using some CLA mechanism. (It's also easier for them to defend GPL-related lawsuits if they are the copyright holders.)
What Muse Group forgets to mention is that they are a for-profit company (https://opencorporates.com/companies/cy/HE411908) while the FSF is a foundation (specifically a 501(c)(3) non-profit organization), which shifts their incentives a lot.
Now, barely a month later, they add the option of collecting data from users, and people aren't happy. It is, at the very least, very tone-deaf/stupid of them to add telemetry only one month after (IMO) showing their hand for what we can expect in the future.
Thanks, I was definitely missing some of this context. Forking is probably the appropriate response if the contributing community no longer trusts the acquiring company. In an active open-source project, the value of the software lies with the contributors anyway, so as long as they're unified, Muse Group doesn't have much power or recourse.
That said, I personally think this fork is a clumsy attempt thus far. If there truly is a rift so wide as the parent comment claims, it probably calls for a total rebrand. The way it's currently positioned makes it look like the author is simply attempting to apply pressure to get Muse to change its behavior, but based on this context I doubt that's going to be a viable long-term strategy.
It's not clear what sort of thing you're referring to; it's generally not possible to make a service with accounts and billing that simultaneously doesn't store your information. The whole point of it is that you want it to store your information.
Whatever telemetry GitHub collects is probably the least interesting stuff they have on me. What's way more revealing about me as a person is all the stuff that GitHub is collecting and storing about me because storing that information is exactly what GitHub is for: code I've written, interactions with people in issue discussions, billing information related to my account, projects I've starred, etc.
Yes, they probably also know that I usually connect from an IP address in Chicago. And, while storing that information isn't strictly necessary, I'm also more-or-less 100% confident in saying that anyone who might want to keep tabs on me already knows I live in Chicago, and that would have been true 30 years ago, too. (Well, 30 years ago it was another town, but still.)
I suppose a lot of this is maybe just a halo effect due to all the (quite legitimate) upset about surveillance capitalism? Which is fair. And perhaps we want to say that it's easier to try and shut down all telemetry than it is to distinguish legitimate telemetry from abusive telemetry. Lord knows, if we try to carve out any concept of legitimate telemetry, adtech companies will immediately go about probing it for loopholes.
I do wonder if a company with a nasty non-compete would subpoena GitHub to get where and what time you logged on to see if you were at work and if you used a work laptop for a project.
If the maintainer had waited out the initial flurry of attention, they probably could have chosen the name they wanted without much drama at all. Instead, they overreacted, deleted the first thread, and then immediately started editing and deleting user comments in the second. Ouch.
If my interpretation of the terms is correct, by using Audacity you agree that it can collect any data the authorities request. The data will be hosted in the EU region but shared with Russia and the USA too, because they have branches in those countries.
That's not what I expect from software that requires no remote server to function.
The government can compel Audacity (or rather Muse Group) to do anything, regardless of your consent to an EULA. The only remedy is Muse Group fighting a hypothetical order in court.
I couldn't agree more. As someone who has used Audacity for many, many years and finds himself recommending it to people who have little to no prior computer audio experience I couldn't be more happy that there's some momentum towards improving the user interface and stability. It's not a bad program, but elements of the UI feel very dated and in my (admittedly edge-case) use I have to tip-toe to not cause crashes.
The solution is a competent dev or two updating it to HIG standards, running some usability tests, copying the best in class design, etc. Telemetry is secondary at best, not necessary.
Looks like they removed auto-updates, or at least automatic checking for updates. I think that's pretty neat. As mentioned in other comments, this is an advantage of open source: things can be removed. I only wish this were more common amongst other projects. Imagine Firefox forks with all the telemetry and other annoyances removed. With closed source software, good luck getting anything removed.
In this I use the royal "you", I'm agreeing with the parent (it's early, my words often fall out wonky in the morning)
While I do understand the underlying cause and indeed hate things that violate my privacy as much as the next nerd.. you guys know most software phones home for update checks right?
Hell I'll tell you what. Unplug your router from the WAN side and wait a minute. See that popup there on all of your devices telling you you're offline? There's a connectivity check that's been giving out the exact same information the entire time you've had your OS installed
I really do appreciate the vigilance and don't want to discourage the raw energy on display at all but this one might not be the cause to go all in on
On the other side of my own argument it definitely won't hurt to chuck them a GDPR request in a month or two to see what they are gathering :)
> Am I the only one who thinks this is kind of petty?
Their new changes restricted use by individuals under 13. This is probably because they could run into trouble with GDPR with the personal data they are storing on users, which they have no good reason to store except that they can.
Audacity is used in public schools. Forking to keep the project usable by children learning the craft is not petty, it’s a worthwhile thing to do.
Ruining community trust so you can unnecessarily collect private information on your users, that is petty.
I wish we could add telemetry to the rr debugger without risking this sort of blowback. We have no idea how many people are using rr, and how much, so when we talk to hardware vendors and other groups we depend on, we have no leverage. This is a real problem.
It would be a big boost to free software if some org like Freedesktop had a standard telemetry library, data collection policy, data collection and publication service, and a distro-wide master switch to opt into data collection, and it was socially acceptable to use that framework.
To minimize blowback, you need to show that you respect users' privacy:
- Communicate clearly and discuss openly before writing code, asking for feedback from the community.
- Make very clear from the beginning that this is opt-in (Audacity failed this, IMO)
- Give specific examples of the actual benefits of telemetry in concrete terms (e.g. "leverage with hardware vendors"), not just "make $product better".
- Avoid dark patterns (the screenshot Audacity has now added to the pull request has a highlighted "yes" button - that alone will make some people hate you, and anything worse than that will rightfully make most privacy-conscious people hate you)
- Avoid controversial third party providers (this likely amplified the outrage in the Audacity case a lot)
- If you screw up and/or get unlucky and a shitstorm starts, back down. Even if Audacity actually fixes the issues to a point where everyone would consider it reasonable had it been like that from the start, trust has been lost to the point where any telemetry feature will be controversial.
Audacity screwed up in multiple ways:
- poor communication in general
- Still doesn't intend to make the data collection opt-in. (The update check sends OS and version information, and that part is supposed to be opt-out.) The privacy policy also makes clear that it wasn't supposed to be opt-in (uses legitimate interest instead of consent, tells people who are under 13 to not use the app).
- introduced this shortly after other controversial changes (change of ownership, CLA requirement)
- collecting data "For legal enforcement"
- "If you are under 13 years old, please do not use the App." (following from COPPA limitations - it should be "please disable analytics" instead)
Just ask the first time it is run: "Hello, we would like to know how many people are using our software. You can either click this link using xyz analytics, or send us an email at xyx@xyz.com. Thanks a lot for your support."
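A minimal sketch of that kind of first-run prompt (the marker-file path and counter URL are placeholders, not anything any real project uses):

    import pathlib
    import webbrowser

    # Hypothetical marker file so the question is only asked once.
    FIRST_RUN_MARKER = pathlib.Path.home() / ".myapp-first-run-done"

    def first_run_prompt():
        if FIRST_RUN_MARKER.exists():
            return
        FIRST_RUN_MARKER.touch()
        print("Hello! We would like to know how many people are using our software.")
        print("You can open our counter page in your browser, send us an email")
        print("at xyx@xyz.com, or simply do nothing.")
        choice = input("Open the counter page now? [y/N]: ").strip().lower()
        if choice == "y":
            # The network is only touched if the user explicitly says yes.
            webbrowser.open("https://example.org/counter")  # placeholder URL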
This reminds me of the Linux Counter project. When you installed Slackware, you automatically had an e-mail from Patrick Volkerding or Harald Alvestrand asking you to e-mail Harald to explain how you used Linux ("I use Linux at home" / "I use Linux at work" / "I use Linux at school").
This personalized request always felt so warm and casual to me, and I appreciated the thought that Harald Alvestrand cared to know that I, personally, was using Linux.
If there's no reason for the user to share the data, 99% won't, if only because it's the quickest option to get it out of the way.
My suggestion would be to give a reason, however small, to send that email. For example, "Would you like to vote on which of these four issues from our roadmap you would like us to prioritize?" (best asked after a bit of usage, rather than on first installation).
Maybe so, but as a user I have no incentive to want to allow it. And there's no checks or balances, since it's completely unilateral on the part of the software vendor. My experience has been that there's continuous creeping growth on what gets collected and stored for eternity.
Any nonvolatile bit is a public bit. It's only a matter of time.
That's roundabout and not compelling to a user. They have no idea what is gathered, how it's used, or any idea if their issues will be solved or when. It's like buying lottery tickets with payoff in bug fixes, except you don't know what they cost. Bad deal. The software house always wins.
It's especially bad for rr, since it doesn't otherwise have any reason to talk to the Internet (I see people mentioning Firefox telemetry, but you know, Firefox is a browser, you expect it to talk to the net).
The best I can think of is to incentivize it other ways; e.g., telemetry only for bug reporting, or a "you ping us, we give you a nice hat" or something.
Mozilla does it. Nobody is forking Firefox over just that, so I don't think everyone is against it.
If it's opt-out, part of the install process, and users know what kind of data will be used, it should be fine.
Audacity seems an exception here. They are broadly following these kinds of rules, however there is blowback which other projects don't seem to be getting. Perhaps there are underlying issues in the community?
As far as I can tell (and correct me if I'm wrong) while other open-source projects might be able to get away with opt-in telemetry, Muse Group has simultaneously (a) raised the ire of other open-source developers with the bungled response to e.g. https://news.ycombinator.com/item?id=27740550; (b) had technically correct but really optically horrid legal language about potential uses of the Audacity data, which implied a blanket granting of any metadata to any buyer of the company to use as they see fit, even if GDPR wouldn't let them do that; (c) didn't make it clear from the get-go that telemetry would be opt-in, nor make any sort of foundational commitment that it would only ever be opt-in; and (d) hasn't yet reacted quickly to describe their ownership structure in the interest of transparency.
It's a perfect storm of mistrust and crisis mismanagement. And the sad thing is that there are talented folks like Tantacrul, Head of Design at Muse Group, who could really benefit the community with opt-in telemetry guiding their product decisions, who have now been thrust into a crisis management role by this botched rollout, and are unprepared to give the transparency now required. Did nobody else at the company learn from WhatsApp's debacle on this?
It's especially sad because while few would shed a tear for WhatsApp's missteps, Muse Group's holdings in MuseScore and Audacity are literally the lifeblood of hobbyist and student music creators. Mistrust in that software can easily lead to "you know what, I heard bad things about Muse stuff, I won't pick it up" and that would rob the world of so, so much creativity. Muse Group, rein in your lawyers, swallow your pride, and act like the stewards of the future of music that you are.
WhatsApp is probably not the right comparison? It is not like there was ever going to be an opt-out in WhatsApp, and Facebook's reputation and history of abuses is in a different league from pretty much anyone else's.
I think there's a massive gulf between an employer's behavior and an employer's transparency; lack of transparency is not in itself a reason to condemn a person for working at a place. There's no actual evidence that Muse Group is actually installing this telemetry in bad faith, nor any actual evidence that they wouldn't be willing to modify the details so it's acceptable. They seem to be absolutely horrible communicators, but ones with a product that does good in the world (and yes, I think that their efforts to thread the needle between the litigious whims of the music industry and the effort to democratize access to sheet music and the tooling around it are societally necessary). So I think it's appropriate to be critical of how they handled the situation but still support the idea that the people working there can be doing so in good faith.
That's true, if the employee truly didn't know what their employer was doing.
At this point, however, the age restriction should have given it away. If it's not allowed for children, it's because of COPPA. And if you're trying to avoid COPPA, you're doing something nefarious. The employee should know that by now.
That is the opt-out for crash reporting; the version checking doesn't show a dialog before it phones home. Version checking is configurable in the settings, but it will run at least once, I guess.
LOL. Both Firefox Desktop and Mobile have forks that disable telemetry, the auto updater and even the periodic check for tracker and malware blocklist updates.
> Of course that is not the same as actual usage, but pretty close.
How would you know that? It seems quite plausible that there are statistically significant differences between the subset of users who enable it and everyone else.
1. Collect the telemetry to a local file. The file must not be encrypted, must be human-readable, and must contain only anonymized data. I should also be able to confirm all this by just reading the file.
2. Bug the user every time rr finishes, or something like that. Ask them to send the telemetry by running a curl command that you will print out right there (see the sketch after this list).
3. Have a sensible log collection policy on the telemetry servers and explain that too.
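A minimal sketch of what steps 1 and 2 could look like; the file path, endpoint, and field names are made up for illustration and are not anything rr actually does:

    import json, os, platform, time

    TELEMETRY_FILE = os.path.expanduser("~/.rr-telemetry.jsonl")   # hypothetical local file
    UPLOAD_URL = "https://telemetry.example.org/rr"                # hypothetical endpoint

    def record_run(exit_status):
        # Step 1: append one anonymized, human-readable record; nothing leaves the machine.
        record = {
            "timestamp": int(time.time()),
            "rr_version": "5.5.0",        # whatever version string the build embeds
            "os": platform.system(),      # e.g. "Linux"; no hostname, no username
            "exit_status": exit_status,
        }
        with open(TELEMETRY_FILE, "a") as f:
            f.write(json.dumps(record) + "\n")

    def print_upload_hint():
        # Step 2: ask the user to send the file themselves, via a command they can read first.
        print("Anonymized usage records are kept in %s (plain text)." % TELEMETRY_FILE)
        print("If you'd like to share them with the developers, run:")
        print("    curl --data-binary @%s %s" % (TELEMETRY_FILE, UPLOAD_URL))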
No, we do not need a "standard telemetry library". A standard crash reporter might be OK. It should ask the user if they want to submit a crash report, and give them a ticket number which allows them to visit a web site and see what's been done about it.
You can get rr in many ways: distro packages, Github release downloads, cloning the repo and building it yourself. It's difficult to track all that.
Also it matters how much people use rr. If a lot of people download it, use it once and never use it again that's important information (especially if you also report when it worked and when it didn't work).
Downloads are an incredibly non-useful statistic. Appreciate your disinterest in providing information but it’s disingenuous to say this is a worthwhile piece of data.
If you engineered a way to do it while maintaining user privacy, it would be a service to lots of FOSS developers.
Edit: How about routing it via something people trust, such as a trusted proxy like Mozilla (if they would host such a thing) and over the Tor network (if that could be bundled easily).
It doesn't sound like they've done anything different. They're just small enough that nobody can be bothered and people haven't written poorly informed criticisms of their decisions all over Reddit and Hacker News.
As seen, forcing or tricking others into fulfilling your desires is a violation of trust and rightly condemned. The best you could do is ask them to install a separate package to enable it.
I see "rr" as a little different because it is a developer tool. There is going to be a little more sensitivity about accidentally sending proprietary information.
Audacity is an end application and really isn't likely to accidentally send something sensitive short of packing up entire audio files (which would be stupidly obvious if it were being done).
Filenames/filepaths can be quite sensitive sometimes, and are occasionally collected by telemetry (I do not know if that is so in this case). For example, I know that Audacity is used for forensics, where that would be considered quite problematic.
2. Users pretty much never tell maintainers what is going wrong for them: no messages (much less proper bug reports), no contributions in any way!
3. Something invariably goes wrong for users that do the above.
4. Users immediately hit social media and try to tar the entire project, developer/company reputation, whatever. “Zero stars”, doesn’t work in their use case. You have surely seen these “reviews” before, and mindless tweets.
THIS is why I am torn on telemetry to some degree. IF you demand no auto-feedback in the name of privacy, THEN you should help out voluntarily OR not use free software! Yet software maintainers can’t count on this at all. They risk being hanged in the public square for every bug.
You got this part wrong. Users contribute to the project and/or draw contributors to the project. This happens to proprietary projects like Twitter as well, where key features were invented by users. https://qz.com/135149/the-first-ever-hashtag-reply-and-retwe...
What if the telemetry data went through an intermediary you trust? I've been thinking about this and I am similarly torn on the issue. But if I knew the data was going to pass through some organization I trust, like the EFF for example, I'd be much more comfortable opting in. This would be a sort of guarantee that the data I'm giving away is needed to improve the software and is not going to be used for anything else.
I think that makes sense. I am not in favor of a free-for-all on the implementation but I can definitely see the benefits of (say) having a bunch of anonymous crash logs telling you the things that users won’t, and such a thing should at least go through a reasonable 3rd party.
No. Just no. When I'm running a local application against local data on a local machine, there's no need for anything related to any of that to leave the machine for any reason.
You don’t “owe” feedback but you should stay off Twitter/reviews/etc. if you haven’t at least tried to contact the developer first when there is a problem.
As I see it, your options are:
- Decide the software is not for you, for some reason, and silently uninstall it, or...
- Try to make the software better, working with the maintainer somehow (tell them what is wrong, contribute a fix or a bug report, etc.). People would be surprised how sometimes a fix is really simple but the use case may be really obscure, and it is literally just a matter of finding out that the problem exists.
My problem is that people seem to employ a “3rd option” of just deciding all by themselves that software must be poorly implemented and trash, and worthy of public scorn, because they can. If developers go long enough without any real feedback, while enduring negative “press”, you’d better believe they will at least consider something like a telemetry tracker to tell them what the heck is going on. At least that way, they can find these problems and actually fix them, to preserve their reputation.
Yes, some people are difficult, but that's neither here nor there. That they exist is not a justification to violate the trust of a larger group. FLOSS did fine before mandatory telemetry.
You can find plenty of complaints online about software which gathers data from its users too. Putting telemetry in your software doesn’t stop people from complaining about it.
I find this trend against reasonable telemetry in OSS ridiculous and naively idealistic. I could also divine that perhaps 90%+ of Audacity users use a closed source application that uses invasive telemetry.
The way the free software community treats its own is reminiscent of progressives in America: constantly eating your own kind while letting your opponents flourish.
There’s a reasonable amount of telemetry, and without it the free, OSS side can’t compete with closed source products on a level playing field.
I’ll even go further and suggest that light telemetry should be on (and able to be disabled) by default.
An application universally ridiculed for its inclusion of in-depth telemetry. It's time to challenge your preconceptions: find the roadmap items, features or bugs informed by the gathering of the telemetry data.
Here is what's really happening:
* Migrating the codebase to C#
* Iterating upon the existing app design based on the latest guidelines for Fluent Design and WinUI
That last one is probably going to be obsolete by the time they are finished. What's the value here that telemetry is delivering?
This is the story all over the industry. Fancy dashboards everywhere, but the people shaping products haven't made an evidence-based decision in years.
And part of it stems from the idea that a local application working on a local machine against local data ought to be forbidden to children. If there is anything at all about the telemetry that even raises a question about COPPA under those circumstances, you're doing it wrong.
More people are worried that the new TOS conflicts with the code license, which seems like fair criticism (why does it need to enforce a minimum age? why are we withholding information for law enforcement?). Had they approached this from a more transparent/secure perspective it would have been a lot more understandable, but Open Source has always been about voluntary contribution, which is how it sustains an otherwise suicidal business model. Until recently, there was no incentive to add telemetry, but Audacity has recently undergone a bit of a takeover (maybe "change in management" is more apt), which gives plenty of people reason for concern. I'm certainly going to be sticking to the forks on my machine, but it's ultimately up to the users, and to a greater degree, the volunteers.
Shouldn't this have a separate name, to avoid confusion and filesystem/repository clashes? I don't know if there is a trademark involved, but that's a potential legal issue the project will want to keep clear of.
If there isn't a leading suggestion for the new name yet, I offer "Temerity". It's a close synonym for "Audacity", and highlights both the boldness of this new project, and the recklessness of Muse Group's changes. It also cleverly alludes to (avoiding) "Telemetry", which is a distinguishing feature of the fork.
Sneedposting is a 4chan meme indeed, but this time there is nothing particularly nefarious about it; it's an adult joke from an old episode of The Simpsons, based on a storefront sign gag.
Any idea why the naming poll has attracted so many trolls? How did 4chan types even find out about it? I know that people on 4chan like to disrupt any public vote they can, but this doesn't seem like a relevant target at all.
Many of the people on /g/ are regular programmers. You may be underestimating how much overlap there is between readers of this website and /g/. Remember that there is more to 4chan than just /b/ or /pol/.
Random anecdote. Every person IRL that I've ever met that uses 4chan has been seriously weird. I stay away from the site entirely because I don't know what the fuck is going on over there but it produced (or caters to) some of the weirdest people I've ever met.
A big portion of its users were unhappy with the addition of telemetry and welcomed the fork; someone made a thread because the guy who did the fork added "sneedacity" as an option to the poll (due to sneed being a joke, not for negative reasons). The drama started when the maintainer shut the poll down while "sneedacity" was winning (despite the fact that he added it, not trolls or someone else) and started to ban everyone who asked questions regarding this. This is hardly a "4chan raid" since "sneedacity" was added to the poll by the maintainer himself, who also made the poll publicly available. People participating in a public poll is not a "raid".
He then claimed that he was called 70 times and was sent 3000 mails as harassment, but never showed any proof and deleted some of his tweets regarding it. Interestingly, the 4chan threads he posted on his Twitter as proof of the alleged harassment didn't contain any personal information or people requesting his personal information. He was even caught lying about the phone calls in the comments here, which led to him deleting his posts: https://postimg.cc/BPXX26W1 https://web.archive.org/web/20210706000825/https://twitter.c... . I don't know about you, but refusing to provide proof, changing the story when asked, and deleting posts when it gets noticed looks suspiciously like pretending to be a victim.
He's putting some vile stuff on his GitHub bio currently (it changes often), so I don't know what's going on. It's a shame the needless drama this guy caused with his inability to manage a project is getting more attention than the current situation with Audacity.
Voting and naming things really gets them going. Remember Boaty McBoatface, or naming a chunk of the space station after Stephen Colbert? Also, it's a long weekend and people are bored.
I have no horse in this race, but wouldn't it be better to keep crash reports in there? Of course it should be disabled by default, and nothing would ever be uploaded automatically. That way individuals can raise issues and attach the crash reports if they need to.
I think the reaction is completely overblown; the telemetry and crash reporting are already opt-in. The auto-update checker (opt-out) only reveals the OS, as part of the HTTP user agent, and the IP, as part of the TCP connection.
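For concreteness, an update check like that usually amounts to a single HTTP GET; a rough sketch of everything the server learns (the URL is a placeholder, not Audacity's real endpoint):

    import platform
    import urllib.request

    UPDATE_URL = "https://updates.example.org/audacity/latest-version"  # placeholder endpoint
    APP_VERSION = "3.0.2"  # the locally installed version

    def check_for_update():
        # The server sees this User-Agent string (app version + OS family)
        # plus the client's IP address from the TCP connection itself.
        ua = "Audacity/%s (%s %s)" % (APP_VERSION, platform.system(), platform.release())
        req = urllib.request.Request(UPDATE_URL, headers={"User-Agent": ua})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode().strip()  # e.g. "3.0.4"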
Stop. Just stop. Software needs to reach out to the internet to provide some much-needed functionality such as auto-updating or crash reporting.
Branding any of this as telemetry and then creating a bunch of immediately abandoned forks is a joke.
Before you say “well make this opt-in” - that doesn’t work for crash reporting nor does it really work for auto-updating.
Things change. The internet needs Javascript, distributing desktop apps using electron makes a lot of sense and developers need automated feedback. Please stop.
No, you stop. It absolutely does NOT need this functionality. Is it nice? Sure. But it should be opt-in and prompted, not opt-out and automatic.
> that doesn’t work for crash reporting
It does.
> nor does it really work for auto-updating.
Good.
> The internet needs Javascript
No it doesn't. The ad economy depends on it to bypass consent. Do not confuse that with the ad economy requiring it to function, and it doesn't mean the internet requires it either. Again, is it useful? Of course; there are many instances where it can (keyword: can) enhance the experience, but it is not necessary for 99% of applications (no, your infinite scroll SPA is not a necessary function). Is it necessary? Absolutely not.
> and developers need automated feedback
No they dont.
The only things you're referring to are necessary for are forcing things by your users without their knowledge or consent, and for superfluous flair and form. It's possible to build opt-in, privacy respecting, informed consent software diagnostics and telemetry without forcing this nonsense on them, especially with respect to the topic at hand, which is not a web application, but an offline native desktop application.
Do you prefer the web from 1995? No youtube, no github, no google docs, no google maps, no real-time in-browser messengers/chats, no hangouts/google_meet/jitsi, etc. etc.?
Half of those aren't really needed and are better served by native apps anyway.
However the Internet isn't the Web, so certainly the Internet doesn't need Javascript.
Also neither does the web, but for sites that can provide functionality which wouldn't be possible without javascript it can be a nice optional feature. Though all of the examples you gave either do not need javascript (youtube, github) or are better as native apps (google docs, google maps, IMs).
Native apps meant "for Windows and Mac only". While JavaScript has its problems, nontechnical members of my family are happy to run Ubuntu and get done everything they need to get done, because the browser can do it, because of JavaScript.
And instead of making cross-platform native software toolkits to ease that burden, the industry decided to reinvent the entire von Neumann architecture in interpreted JavaScript and dub it “assembly,” whatever that means. Then since you’re already in a browser at that point, why not phone home and say hi, and anyone who dares to instead suggest that software operate without an Internet connection is told “just stop” because the “market has spoken” and absolutely no progress was made in software engineering until the pointless backlog that nobody looks at was pointlessly filled by pointless, automated telemetry.
Industry practices collapse under the weight of their own complexity and people argue on HN about it with absolutely no context on the real issue, just presupposed bugbears like “privacy”. Video at 11.
JavaScript is the single worst thing to happen to computing in its history. Maybe not the language itself - it’s fine enough for its purposes - but the industry deciding that its purposes overlap with a wildcard glob is a huge mistake that we will be paying for long after everyone in this conversation is deceased. It’s annoying that you credit that situation to JavaScript and laud it given what else comes with doing everything in a browser (enabling the surveillance economy, inadvertent network dependencies, computers running at about 5% ability, etc etc)
> enabling the surveillance economy, inadvertent network dependencies, computers running at about 5% ability, etc etc
I can give you computers running slower. As for the surveillance economy, I am not sure why you think that installing someone else's crap on your computer is preferable compared to running that crap in your browser and blowing most of it away every time you leave the site (or blowing it away completely by clearing your browser's storage). As for network dependencies, websites are now perfectly capable of running offline, while native apps may need network access for doing anything useful as much as a browser app does.
As a developer, I would rather write once and run everywhere. As a user, I would rather not install hundreds of megabytes of someone else's "apps" that could have been a web page on my computer, other than the bare necessities that I am comfortable with. Nor do I particularly care to update those installed apps locally, downloading hundreds of megabytes more or having the risk to run into software version issues. Having vim on my machine is fine. Having Facebook, or Twitter, or Reddit is not.
No, they don’t. If they did, we wouldn’t be having this conversation.
Since you’ll probably pedant me with several names I can already think of, it’s worth remembering that existing does not imply easing burden nor being useful beyond a README and a weekend jog.
The Linux community can only blame itself; it is the only platform without a full stack for native development, unless we are speaking about X Athena Widgets, left behind by UNIX several decades ago.
Each distribution is a snowflake, and now GNOME even considers UI design tooling something worth killing in the name of GtkBuilder with raw XML, which is exactly what UI/UX designers have asked for.
So everyone pays the cost for a handful of Linux users?
I mean, even if you were fine with that, nowadays pretty much everything is available on Linux too, and cross-platform development tools for native applications have existed for decades already.
TBH you could actually do YouTube and GitHub just fine with no JavaScript. Browsers have had native video player support for a while and do not require JavaScript to function. You could even embed ads into a video stream and make them indistinguishable from a normal video file.
GitHub almost works as-is without javascript. Pretty much the only stuff that doesn't work is stuff that gets loaded after the initial page load, and that could be changed to load with the page.
You mistake everyone using javascript to do those things as the only way it's possible to do those things. That's certainly not true.
You can use HTML, Canvas, and CSS to do just about everything you mentioned.
Video is easy. In pure HTML: <video autoplay loop muted playsinline src="..."></video>. No Javascript (or even CSS) required. You can certainly expand on that as well.
Github does not require Javascript to function. You can toggle JS off and still use it pretty much as is, and the features that do break are trivial to implement without it.
Interactive realtime chat does not require javascript. https://github.com/kkuchta/css-only-chat , and the same principles can be applied to online docs and editors as well.
Javascript is the norm because it has inertia behind it - largely due to circumstance more than any inherent natural benefit, not because it has some secret sauce that the internet needs that couldn't be easily replaced should it be Thanos-meets-Tron-Crossover Snapped out of existence.
A much stronger argument in favor of Javascript would not be pointing to all the good things you can do with it, but rather to how well optimized and streamlined doing those things with Javascript is these days. Just to clarify, I'm not saying Javascript is the problem, but rather the abuse of it, and that abuse would still be an issue if other technologies replaced it one day.
I use the internet with JS toggled off by default as a result of the rampant abuse of dark patterns (for example, the Google Search Suggested Alternatives box that is served below the sponsored result(s) and always "conveniently" expands after a short delay to push the first actual result down the page and replace it with yet more sponsored content), 3rd party/cross-site abuses (malware), as well as the overabundant and omnipresent ad spam. Sites that break as a result of my opt-in JS browsing are usually a sign they're not worth my time. Sometimes I'm proven wrong, but sites that can fall back gracefully and display some basic info without JS are usually the worst offenders.
TL;DR: Yes, in a way, I do prefer "the web from 1995", where Javascript isn't everywhere, and only present in places that I specifically whitelist, because it's all too often abused to negatively impact my experience instead of enriching it.
Software doesn’t need auto updating. The “move fast and break things” industry needs auto updating. Other than some very few exceptions (e.g. a web browser with a huge attack surface may need auto updates to protect the user).
Similarly, crash reporting being opt-in works very well. On crash you present the user with a crash report, they review it - and can send it if they desire to. Throw in a lil' "don't ask again" checkbox and you're good to go.
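A bare-bones sketch of that flow; the dialog text and settings path are made up, not taken from any particular application:

    import json
    import pathlib

    SETTINGS = pathlib.Path.home() / ".myapp-crash-settings.json"  # hypothetical settings file

    def maybe_offer_crash_report(report_text):
        prefs = json.loads(SETTINGS.read_text()) if SETTINGS.exists() else {}
        if prefs.get("never_send_crash_reports"):
            return  # the user ticked "don't ask again"; do nothing

        # Show exactly what would be sent and let the user decide.
        print("The application crashed. This report can be sent to the developers:")
        print(report_text)
        answer = input("Send it? [y]es / [n]o / n[e]ver ask again: ").strip().lower()

        if answer == "e":
            prefs["never_send_crash_reports"] = True
            SETTINGS.write_text(json.dumps(prefs))
        elif answer == "y":
            upload_report(report_text)  # only reached on explicit consent

    def upload_report(report_text):
        # Placeholder: a real app would POST this to the project's crash endpoint.
        pass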
Yes it does. Bugs need to be fixed, improvements need to be made. That’s the lifeblood of most user-facing software. Without it you end up with a huge user base running legacy versions, reporting the same bugs and asking for the same features.
> Bugs need to be fixed, improvements need to be made.
Those are my needs, not the software's. The software has no needs. And if I don't mind the bugs and aren't interested in "improvements," I'm not actually concerned with the software's feelings.
When I say this, I'm assuming I'm not talking about networking software, but if you had your way, every piece of software on my computer would be networking, and reporting my usage (however it opaquely defines that) to strangers. I don't want to have a lifelong relationship with the person who wrote the sudoku program I installed from my distro's repo. I really don't want to get owned five years from now because that person needed to "improve" it when I didn't ask.
That’s a minor inconvenience at the bug-screening level. I’ve used audacity for over a decade, updating it once every year or two - either when i got a new computer, or when it occurred to me there’s probably a new version out there.
That’s your opinion, not a fact. I’m happy for auto update to exist, but it should always be at the discretion of the user if they want to take part in it.
It's like this because it is an overwhelming benefit to someone who is selling something. Not because it is necessarily overwhelmingly what people want.
Some other market outcomes I'm unhappy with: ad tracking, excessive plastic packaging, cheap goods - expensive repairs, all sodas are at least twice as sweet as they need to be.
When there are options, I choose otherwise. If an option I've come to rely on changes to be something I don't prefer - i'll raise a stink.
When the market speaks, people who insist opt-out telemetry and auto-update are necessary then insist that the market just doesn't know what's good for it. This is Audacity forking. It could fail, but I don't know that anyone has the kind of allegiance to Audacity as a project (one that is recently under new ownership) to not follow the fork. The new ownership didn't buy the users, and if somebody else will take better care of the project, why not go with them?
The project lead said he had no say about the telemetry and couldn't turn it down. For me, that makes it a business decision and not a technical one. This is to monetize, not to help me.
It will fail because the number of users outraged about this is far lower than the threshold needed to create a sustainable community. Check back in a few months and see.
It’s crash reporting and update checking. Stop getting so outraged over such minor things.
This is funny because usually whenever I see that mentioned to excuse negative behavior it isn't really about the market speaking but about the market staying silent :-P
The market has been given plenty of choices. What we’ve seen repeatedly is that the number of people who will actually do anything _more_ than speak loudly is extremely small.
More people think open source means “I can be VP of product without showing up to work” than “I can roll up my sleeves and help make the hard thing I want happen”.
WHENEVER I had (or left) auto-updates on, I came to regret it, from Windows updates to Android apps. The forms it usually takes (all of them personally encountered):
- system instability
- just plain data deletion and/or loss
- features not working at all, or quarter-baked
- a previously free feature moved to premium or behind a paywall
- incompatibility with previous files
- just plain resetting my previous settings and customizations to defaults
- arriving with the wrong language (based on keyboard layout or location vs my OS-chosen one)
- somebody was frelling bored, so they decided to move a function's place in the menus or change its keystroke - just "because", or "it boosts engagement", or some similar crappy excuse
- plain spyware/malware delivered as an "update" (with the app being sold off to spammers/scammers between updates)
- the app/program sold in the meantime, with the new owner preferring to siphon all the telemetry possible (all options conveniently on by default)
- messing up my file or action associations
...and probably 12 other things that don't cross my mind at the moment.
I will bloody update my hardware and my installed (or portable-d) software whenever the frell I please. I cannot stand the mommy-gloves and mommy-stance I am subjected to.
I do not have the will nor the time to battle with that. My largest "attack surface", in any/all meanings of the word, comes from auto-updating. So, off with your hea-- auto-updates.
I've been blocking/filtering all auto-updates and all Internet access to all apps/programs/OSes wherever possible. Whenever possible, I use portable versions that don't require installation, and I have a huge software library; letting things run amok is not an option.
I've been doing it for a decade now (in different, but ever enlarging scope), and I've never been happier. Nothing breaks. Things work. AS THEY SHOULD. And nobody forcefeeds me the crap 95% of the "common folk" are subjected to.
Yes, this is a rant. And the topic is a sore one. I'm just so, so, so bloody sick of it.
So you keep track of critical CVEs for the entire stack of all software you use, and make quick and informed decisions about which components really do need to be updated immediately?
Almost all of the software in my library/catalog is offline-only, as in: an online component is not required, nor wanted. I'd specifically choose a local/offline variant over a fancy online-required one; open source over freeware over commercial, etc. (I have only addressed the auto-updates, not all the other points of my "system", nor its exceptions, and certainly not its finer points.)
I thought it self-evident in my post that I expect the software in question to notify me of a new version, provide the cumulative change log, and offer me an update (plus even an auto-update, but only as an option).
Software that did that (let's call it "polite" software), got its net access (for limited aforementioned uses). And if it was a piece of software I particularly found useful and non-clashing with my stances, I sometimes even enabled the telemetry (because it might help improve it).
What I cannot stand is being forcefed those. They break everything when I least expect it, and when I most need the things to function and function fast (e.g. auto-update on program start VS offered update on close). Most (90? 95%?) of my catalog is specific (and offline) use only, so eventual issues usually do not affect me.
Often used software I update on a more regular schedule, but still - almost exclusively manually (by downloading the new portable version). OS is updated once I'm done with a project or whatever I was doing in that duration - meaning when I can temporally afford to be frelled in the posterior by Microsoft's updates, and spend an hour fixing stuff... such as figuring why the bloody gamepads are now suddenly preventing my monitors from going to stand-by, thus messing up my presence & tracking software, thus messing up my home automation recipes. (If you are impacted, it's the 2019-timestamped XBox driver; revert to something ancient [find it somewhere], and "lock it" down by listing device identifiers via group policy.)
To sum up, in simplified terms: any frell-up resulting from an auto-update impacts me 100% (and costs me time, nerves, and the will to live), versus an extremely low chance of being affected by the critical security vulnerability du jour (especially taking into account the security precautions I practice).
Constantly-used and online-required software has its own, different set of "rules" (security, precautions, guidelines, etc.). Don't mistake my ire at a certain type of an unfortunately too common (mis)use of a feature (in my specific circumstances and use cases and scenarios) as an irrational blanket refusal of absolutely everything under that category.
(Note: typing on a phone, so due to editing troubles my chain of thought might be all over the place.)
You're really splitting hairs here, the "polite software" you're talking about also needs to "phone home" to check and offer the update, which probably leaks about as much information as you manually refreshing the project's web site every week.
I am not. Please re-read. While privacy is always an issue, notice I even directly said I'd willingly enable telemetry for some software (as a means of disambiguation and explanation of my stance), amongst other caveats.
However, what I do not stand for is auto-updates without my explicit knowledge and explicit consent, because it regularly messes up my... well, my "anything and everything". (Insert specific xkcd instance about breaking-workflow here.)
Therefore, due to personally-experienced and ever-widening common abuse of the mechanic (and ever-encroaching march on my privacy and rights on & from all fronts), I am less and less willing to even consider giving the pieces of software I actually enjoy the benefit of the doubt (aka useful telemetry).
Please notice - my original reply was (exclusively) addressing the specific portion of one of the preceding user's comments "pushing" for the auto-update mechanism as mandatory (and not the Audacity issue/attempt/approach specifically). From my perspective, it seems that you and the previous commenter are treating my replies as a full and all-encompassing description of the whole of my approach and practices (and I have specifically said that isn't the case in the comment above this one).
Additionally, there is an assumption that I don't use any (or a combination, or all) of the myriad additional layers of protection available to any semi-advanced user [e.g. Tor, a purpose-configured browser, a proxy, a VPN, etc.] while obtaining the software in the first place (and/or subsequent versions), and that the software in question couldn't be "gotten" from alternative and directly unrelated locations if need be. If true, that assumption would be in error.
My use of Audacity has not changed in 15 years, and yet I keep updating it, and it keeps breaking on me! It’s like I have a compulsion to get the latest (though I’m on Arch and the latest is over a year old, and I’m fine with that).
The users cannot be trusted, the users are dumb, we know better than the users, we must steer the users, the goals of the project are more important than any one user's, etc. Take your pick at this point.
> Software needs to reach out to the internet to provide some much-needed functionality such as auto-updating or crash reporting.
Crash reporting is more of a problem than telemetry. Any report that contains the contents of memory may end up leaking confidential data. That data may be considerably more sensitive than an analysis of how the software is being used. The end user should be informed of the risk, be able to review the data being shared, and have the right to block it.
As for updates, there are distribution models where this is handled by a third party. In some cases, a list of packages is downloaded so the only way for the third party to know whether something is installed is when an update is retrieved. In other cases, a list of software is sent to the third party but it is a trusted third party (e.g. Google, rather than a random app developer).
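As a rough illustration of that first model (URL and index format invented for the example): the client pulls one public package index and does the comparison locally, so the mirror only ever sees "someone downloaded the index", not what is installed.

    # Sketch of a privacy-friendlier update check: fetch the whole index, compare locally.
    import json, urllib.request

    INDEX_URL = "https://example-mirror.invalid/packages.json"  # hypothetical mirror

    def updates_available(installed: dict[str, str]) -> dict[str, str]:
        with urllib.request.urlopen(INDEX_URL) as resp:
            index = json.load(resp)  # e.g. {"audacity": "3.0.2", "wxwidgets": "3.1.5"}
        # Naive "version differs" check; real package managers compare versions properly.
        return {name: index[name]
                for name, version in installed.items()
                if name in index and index[name] != version}

    # Example: nothing about this dict ever leaves the machine.
    print(updates_available({"audacity": "2.4.2"}))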
> but it is a trusted third party (e.g. Google, rather than a random app developer).
I'd trust a random app developer way more than an ad company whose data processing consent flow is intentionally annoying and still does not comply with the GDPR.
Some app developers are going to be better at respecting privacy than Google, while some are going to be worse. The problem arises when you have to assess each one: people either won't or run the risk of relying upon misinformation.
The kind of analytics data a typical telemetry system will collect is pretty benign in isolation. It only becomes a risk when it's in the hands of an adversary with a global view of the internet, such as Google, Facebook or large-scale analytics providers like Mixpanel (as their libraries are integrated everywhere and they have a ton of traffic they can use to correlate and deanonymize users).
Wrong, users absolutely do need their software to not crash and to stay relevant. The user doesn’t have to update it (so no silent upgrades), but showing something when a new version is released is a great thing for moving your user-base off legacy versions containing bugs fixed in later versions.
Show a button to check for updates on any version timestamped older than X months where the button hasn't been clicked in a while. Show the button on the crash dialog. Don't open a single socket without approval. Let the user decide.
This is a fundamental principle of human interaction -- ask first before touching someone's stuff.
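A sketch of that policy, with thresholds and file locations that are purely illustrative - the point being that no socket is opened until the user clicks:

    import pathlib, time

    BUILD_TIMESTAMP = 1_700_000_000  # stamped in at build time (example value)
    LAST_CHECK_FILE = pathlib.Path.home() / ".myapp_last_update_check"  # hypothetical
    MONTH = 30 * 24 * 3600

    def should_show_update_button() -> bool:
        now = time.time()
        build_is_old = (now - BUILD_TIMESTAMP) > 6 * MONTH
        last_check = float(LAST_CHECK_FILE.read_text()) if LAST_CHECK_FILE.exists() else 0.0
        not_checked_recently = (now - last_check) > 3 * MONTH
        return build_is_old and not_checked_recently  # also shown on the crash dialog

    def on_update_button_clicked() -> None:
        LAST_CHECK_FILE.write_text(str(time.time()))
        # ...only at this point would the application actually contact an update server.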
The problem nowadays is that, with everything connected to everything else, some bugs need fixing fast.
Imagine if there is a buffer overflow in Audacity, meaning someone can take over your machine with a malformed sound file. I'd want to know about that ASAP -- not after a month.
On average, I believe most users benefit from an automated check for updates. It should be possible to disable it, but we should first worry about keeping users safe.
They should still ask first, at first run. I actually enable update checks for a lot of things, if I'm given the option up front. I'll submit crash reports and even periodic telemetry if I know that the crash reporting system doesn't download and run arbitrary code, shows me everything that will be submitted, and allows me to omit needlessly identifying details (e.g. custom kernel strings).
Some projects (KDE) make it really hard to submit crash reports, by requiring an account, etc. It must be optional, it must be easy, and it must be transparent. Then users will volunteer crash reports and stats. Respect breeds respect.
"moving your user-base" shows your bias here. The developer doesn't get to move their user base, the users get to decide what version of software they wish to run.
You don't need to perpetually run the latest release to "stay relevant". All of my workstations are still on Catalina and they're fine.
Nobody should need YOUR crash reports. First line defense should always be internal crash vulnerability identification and closure. Information from users can then take two forms: user initiated reporting and opt-in automated user reporting.
Nowhere in that paragraph was there any technical reason why someone concerned with both getting crash fixes and avoiding crash reporting themselves should be told they can't have both.
Also, opt-in silent updates don't inconvenience anyone. The user gets whichever situation they are most comfortable with.
Opt-in crash logs and opt-in silent updates should both be logged for easy user perusal.
--
None of this inconveniences developers or users.
--
The problem with any centralization of non-opt-in unlogged data gathering is it creates completely unnecessary hazards and trust problems.
Once an organization is pulling data, that data (1) incentivizes the organization to use it in ways the subjects may never have considered, (2) incentivizes the company to increase the amount of data they pull, for good or ill, (3) is a potential target for security breaches, and (4) is a potential target for governments to coercively surveil users by accessing the data or encouraging more data to be pulled.
None of the problems in the last paragraph will be apparent to users until they have already suffered damage. So concern about non-opt-in unlogged data gathering is extremely prudent.
Stop. Just stop. Software does not need to violate a user's privacy to provide functionality that isn't really needed at all, such as auto-updating or crash reporting.
Calling any of this anything other than telemetry data is factually incorrect.
The movement around forking is NOT just over this new privacy policy; it is the culmination of several actions by the new owners of the project that go against some very basic norms of open source stewardship. Further, they are more or less giving the Linux maintainer community an "FU" over their use of a custom/modified version of wxWidgets, which makes packaging the 3.x version of Audacity problematic for many Linux distributions for security reasons.
As to the statement that "opt-in does not work with crash reporting": sure it does. Many applications (including Microsoft's and Mozilla's) have a dialog that ASKS the user whether they would like to submit crash reporting data to the vendor, on a per-crash basis. There is LITERALLY zero technical reason for automatic, opt-out crash reporting.
I was more responding to the - in my opinion ridiculous - notion that even apps that don't require the internet to function at all should be expected to phone home by default, and that auto updates are an essential.
Nevertheless, it's good to know they've chosen an opt-in construction for Audacity!
Software is a tool that does a job for me. When I buy or download the software it does that job and should continue doing that job in the same way indefinitely. It never needs to change. It never needs to connect to the internet.
Obviously this applies to offline tools and not web browsers.
> creating a bunch of immediately abandoned forks is a joke.
This part is not inherently true. Only time will tell how Audacity reacts, and they could very well have an UnGoogled-Chromium/Codium situation on their hands.
The user crsib appears to be the main contributor to audacity, so I would have to imagine others would have to pick up the slack for a fork to be viable.
so he/she will have to constantly monitor these files going forward for the fork to stay stable/similar to audacity/audacity. I'm not sure whether he/she/others are interested in doing that for the long term, but I guess time will tell.
Full disclosure: I'm the creator of the tool that I linked to
Picking two applications with fewer global users than a small town or a large village doesn’t make a great point. They might as well have been abandoned - no relevant mass of people use them.
Your point is worth thinking about: Audacity needs a critical mass of users. But does it have enough original users to keep from being considered abandoned already?
Edit: to be clear, other software is often used instead of audacity. It isn't as well known as say photoshop, or maybe even inkscape.
Auto-updating: Why do you need auto-updating? Why not stay on an old version, or rely on package managers which do that for you? Isn't that a freedom the user should expect when using software?
Crash reporting: Why not just generate a crashreport-date.txt and allow the user to save it and email it to you? Why should they be forced to report it?
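Something like the sketch below, assuming an illustrative file name and no networking at all - whether (and how) the report ever reaches the developer is entirely the user's call:

    # Write the crash details to crashreport-<date>.txt and tell the user where it is.
    import datetime, platform, sys, traceback

    def write_crash_report(exc: BaseException) -> str:
        stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
        path = f"crashreport-{stamp}.txt"
        with open(path, "w") as f:
            f.write(f"Python {sys.version} on {platform.platform()}\n\n")
            f.write("".join(traceback.format_exception(type(exc), exc, exc.__traceback__)))
        print(f"A crash report was saved to {path}.")
        print("Attach it to a bug report or an email if (and only if) you want to share it.")
        return path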
> Crash reporting: Why not just generate a crashreport-date.txt and allow the user to save it and email it to you? Why should they be forced to report it?
Nobody does this. It’s such an utterly naive statement and exactly in keeping with others in this discussion. It also doesn’t catch non-terminating errors.
> Why not stay on old version or rely on package managers which do that for you?
Why would you want this? This also doesn’t work for Windows.
Because maybe an old version works well enough for your needs. Every piece of software having its own auto-updater is the reason package managers don't catch on on Windows. Then you see every single program shipping bulky scheduled tasks and start-on-boot auto-updaters.
Oh please, this project was doing fine for two decades with regular bug reports and it suddenly needs to collect data straight after a hostile takeover by a company with a shady privacy policy? An offline audio editor doesn't need to be connected to the net.
My networking programs need to update for security reasons, but anything that isn't networking can update when I say it can, which may be never if I don't see a tangible benefit in the update.
There are many ways to reduce bugs. Extensive coverage-guided testing, like SQLite does, is one. Automated fuzzing is another. Writing in languages with strong type systems that let you statically verify the program's behavior is another. Collecting crash reports is another, and it's certainly one valid way, but it's not the only way.
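To make one of those alternatives concrete, here's a tiny property-based test using the hypothesis library; the function under test is a toy stand-in I made up, not anything from SQLite or Audacity:

    # Property-based testing: hypothesis generates hundreds of inputs per run.
    from hypothesis import given, strategies as st

    def clamp_samples(samples: list[float]) -> list[float]:
        """Toy DSP routine: clamp every sample into [-1.0, 1.0]."""
        return [max(-1.0, min(1.0, s)) for s in samples]

    @given(st.lists(st.floats(allow_nan=False, allow_infinity=False)))
    def test_clamp_stays_in_range(samples):
        out = clamp_samples(samples)
        assert len(out) == len(samples)            # no samples dropped
        assert all(-1.0 <= s <= 1.0 for s in out)  # the property we promise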
It's pretty clear that Audacity's users, in aggregate, aren't that concerned about needing crashes fixed - or at least are less concerned about it than about their privacy. Maybe Audacity already is sufficiently bug-free for their needs!
While I also think that the "telemetry" argument is overblown, and I think automated updates are basically table stakes, and I wholeheartedly agree with you re JS and Electron, I think it's entirely reasonable for users to say that they don't find automated crash reporting valuable, and I don't think that it's particularly reasonable for developers to say "We know better than you."
I completely disagree with your assessment, Audacity just crashed twice for me yesterday.
Also, with SQLite, it seems the main people reporting bugs are people who have paid support contracts with drh, which most likely requires sending identifying information.
I’ve found several bugs in SQLite, one of them pretty fundamental and obvious. It’s a complete nightmare to report them, forcing you to use some utterly terrible forum software with limited searching, visibility and a shockingly poor UI. Genuinely the worst and most confusing thing I’ve had to use in the last decade. Sourceforge issue trackers are a step up from this. They also offer no convenient way to test your system using the latest RC/beta/alpha/whatever without manually staying on-top of the releases page and updating it.
I don’t think the rest of the comment warrants much discussion, but I wanted to point that out.
Are those crashing bugs, though? I'm making a very specific and nuanced claim here - that crash reporting is not a required mechanism for delivering high-quality software.
If SQLite is low-quality software, but the manner in which it is low-quality does not involve crashes, then that does not contradict my claim. There are a lot of ways software can have bugs other than crashes: incorrect behavior, confusing defaults, missing features, inaccurate or nonexistent documentation, poor API design, etc. Automated crash reporting can't help with any of that. If you discovered that "SELECT * FROM users WHERE paid = 1" returns unpaid users too, that's a really bad bug, but no automated crash reporter could possibly tell the SQLite developers about it.
I do agree with you that users of software at scale don't report bugs, so you can't rely on them reporting bugs; I'm just pointing out that there's no reasonable automated way around that except in the case of crashes, and that there are other ways to address the specific case of crashes.
There is an unreasonable way around it: add an analytics framework that tracks every click and mouse movement and keystroke your users do. There are web analytics libraries that do exactly this and let you replay your users' behavior. I hope we all agree that Audacity should not do that.
(By the way, if I go to https://sqlite.org , there's a "Support" link at the top that tells me to use their forum. You can then click on the forum link, click "New Thread," and click "Remain Anonymous," and there's apparently a way to post things. I guess this forum does actually use Fossil internally according to the message at the bottom, but it seems really straightforward to me, so I am surprised at your claim that this is confusing....)
> forcing you to use some utterly terrible forum software with limited searching, visibility and a shockingly poor UI
You mean Fossil? We use it, it works great for us, doesn't get in the way, doesn't require constant rtfm. Also you could just clone the repo if you want to test. It exports to git.
If Audacity branded itself as an "offline audio editing program" I would agree with you, but its headline is (and has been) "Free, open source, cross-platform audio software." I think using the internet to check for updates and report crashes is perfectly within that mission and, on balance, not using the internet in those situations seems to go against the mission.
Should there be a way to disable that communication? Yes, and there is.
Beyond all that - if you are seeking the level of anonymity that would require evading program update checks or crash submissions - then you should probably lock down your network as your biggest worry is probably not your open source audio software.
I guess the question is: are online telemetry and crash reporting reasonably considered and expected parts of "doing its stuff" for an offline audio editing program?
And if so, how far down that slope is it reasonable to go before someone claims that crypto-mining is also an essential part of offline audio editing software?
First off - I don't divide programs into "online" and "offline", because what's the point?
It's not like the program will stop working when it tries to send a crash report without an internet connection (if handled correctly).
>I guess the question is: are online telemetry and crash reporting reasonably considered and expected parts of "doing its stuff" for an offline audio editing program?
For me - yes.
Yes, I do consider crash reports part of software maintenance efforts, and I have nothing against sending critical data, e.g. where it crashed, plus basic information like OS, hardware, and GPU/sound driver versions.
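For the sake of argument, the kind of payload I mean is roughly this (field names, the crash location, and the version string are illustrative; nothing user-identifying is included):

    # A minimal, reviewable crash payload: crash location plus coarse environment info.
    import platform

    def minimal_crash_payload(crash_location: str) -> dict:
        return {
            "crash_location": crash_location,  # e.g. "ExportFLAC.cpp:412" (made-up example)
            "os": platform.system(),           # "Windows" / "Linux" / "Darwin"
            "os_version": platform.release(),
            "machine": platform.machine(),     # "x86_64", "arm64", ...
            "app_version": "3.0.2",            # illustrative
        }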
It's possible to do this in a way that doesn't reveal your IP - run the update-checking server as a Tor onion service and embed a little client that knows how to connect to the Tor network. Then your entry node doesn't know what you're connecting to, and the remote update server only knows the IP of the relay talking to it, not your actual IP.
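Roughly like this, assuming a Tor client already running locally (SOCKS on 127.0.0.1:9050), the requests library installed with SOCKS support, and a made-up onion address - the update server only ever sees a Tor circuit, never your real IP:

    import requests  # needs requests[socks] / PySocks for the socks5h scheme

    ONION_UPDATE_URL = "http://exampleupdateservice.onion/latest-version"  # hypothetical
    TOR_PROXIES = {
        # socks5h = let Tor resolve the .onion name instead of your local DNS.
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    def latest_version() -> str:
        resp = requests.get(ONION_UPDATE_URL, proxies=TOR_PROXIES, timeout=60)
        resp.raise_for_status()
        return resp.text.strip()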
Still, a good question is how you're getting updates in the absence of automatic updates (and for software whose goal is parsing binary file formats, chances are high that if you aren't getting updates, you have a pretty big privacy problem already). If you have no auto-update code but you're telling users to download new versions from, say, GitHub, then GitHub gets their IP addresses anyway.
The reaction is completely overblown. As a software developer myself, I know how important crash reports are for figuring out which bugs are impacting the largest number of users. If everyone moved to this version without telemetry, we would end up with a far buggier version of Audacity, because the developer would have no clue which bugs to prioritize. Audacity runs on multiple OSes, and with the huge number of hardware devices available, it's impossible for the maintainer of an open source project to test them all. I really hope that this project doesn't catch on, for the sake of a bug-free Audacity.
How did Audacity gain its popularity over the past 15 years without any of this stuff?
Did you use computers 15 years ago? Linux desktop and open source tools? Nothing had this crap back then, and some of it was pretty good. Audacity itself is an example!
Open source software is not a commercial product. You don't need to prioritize and optimize for engagement or market share or any such crap. Just fix what you want, fix what you can, interested capable users will contribute attempted fixes for their own problems and you incorporate them if they make sense to the project.
I don't think telemetry has ever resulted in good software, it just pushes everything towards being an iPad with one button on it. Only skilled tasteful developers make good software, and that was true long before telemetry existed.
Audacity does a lot of useful stuff and it's free + cross-platform, and that's a major source of its popularity. But it's always had usability and stability issues for as long as I've used it (and that's at least 15 years).
I'm aware, I use ubuntu, and my server provisioning scripts remove all the useless bloatware, abrt and apport and landscape etc etc, of which there seems to be a bit more each year. On my personal systems I use Arch linux and customize my kernel.
Actual fixes overwhelmingly come from individuals manually reporting weird lockups and backtraces, often taking the initiative to bisect the issue to a particular commit.
But if this project does catch on, it indicates that users prefer a pro-privacy Audacity to a bug-free Audacity. That might not be the decision you (or I) would prefer, but I think it's a legitimate decision for users to make and we ought to respect that.
It indicates bias in reporting too. There are a few people I know using Audacity and they have not heard of this whole issue (yet?) - and I really don't expect they will care if they do. So I expect we'll have some hardline pro-privacy users keeping this fork while a majority keeps doing their work using the original upstream.
Here we'll only hear the strong pro-privacy views because that's the news.
>I have no horse in this race, but wouldn't it be better to keep crash reports in there?
Audacity has been around for 21 years, has generated a strong following from a wide range of users, and is one of the most widely-loved FOSS projects. It achieved said status without telemetry for 21 years.
People keep saying the telemetry thing is "overblown", but there is clearly a large number of us who simply don't want it. It doesn't need it, and the past 21 years have demonstrated that.
> I believe when you make crash reports / telemetry disabled by default then you lose a lot of reports,
Perhaps I should have been clearer. I'm thinking of something similar to Firefox's crash reports: if a crash happens, prompt the user, show them the report, and let them decide. Perhaps even take out the networking features entirely and open the default mail client with an attachment, or provide the path of the report, so it can be attached to a GitHub issue.
> Perhaps I should have been clearer. I'm thinking of something similar to Firefox's crash reports: if a crash happens, prompt the user, show them the report, and let them decide.
Honestly, in my opinion this is completely reasonable. This is exactly how many different software do this including Firefox and Android (by default anyway). As others have mentioned the blowback on this is, in my opinion, entirely unreasonable.
Having said that, Sentry integration is another matter.
> Having said that, Sentry integration is another matter.
How? I only know Sentry as the backend tool you use to analyze the uploaded crashdumps? Or do you mean that a "crash report" could be something smaller than a minidump? (although I personally think a minidump is not a bad compromise)
> wouldn't it be better to keep crash reports in there?
Different people have different needs and different levels of sensitivity to this sort of thing.
Personally, I don't want my offline audio editor talking to anything without asking. But since Audacity doesn't do that, this will be what I use instead.
If Audacity were to ask before sending data, a good chunk of people wouldn't be upset. But it doesn't, so here we are.
Also — and I've said it before — automatic "telemetry" and crash reporting is just lazy. It also lacks context. If you want to know what your users' experience is like, just ask them.
As someone who's worked at a mobile app company, crash reports were described to me repeatedly as "I was blind but now I see".
Is it reasonable to expect Audacity developers to just ask all of their users "Hey, how are things? Are things crashing? How and why?" I'm not sure if you've ever tried to diagnose a user's crashing issues, but, especially with non-technical users, you're likely to get responses like "A dialog popped up but I closed it, I don't remember what it said", or "I wasn't doing anything and it just crashed out of nowhere".
Automatic crash reporting is like night and day; sometimes, you can even find and fix problems before your users get a chance to report the crash (if they even bother to do so, which most don't).
> If Audacity were to ask before sending data, a good chunk of people wouldn't be upset. But it doesn't, so here we are.
> Just to reiterate, telemetry is completely optional and disabled by default. We will try to make it as clear as possible exactly what data is collected if the user chooses to opt-in and enable telemetry. We will consider adding the fine-grained controls that some of you have asked for.
In other words, Audacity is asking before sending data and people are upset regardless, apparently due to just general ignorance about the situation thanks to some people blowing it out of proportion and describing it inaccurately.
Ask who? In-product surveys would be ignored or trigger this same outrage, and requests for an email address at download time simply result in throwaway accounts and other garbage input.
What successful ways can you envision that a software developer ought to be asking their users for their experience, when the software is available for free without restriction upon download or use?
How would you ensure that users who are satisfied with the way Audacity is working will take an equivalent amount of time to report their satisfaction as users who aren't satisfied with the way Audacity is working, so that your feedback is representative of the userbase as a whole rather than of the vocal subset that have problems? Would you advise Audacity to stop trying to measure or assess its users and their opinions, and instead simply do whatever they think is best for Audacity without harvesting user data?
If Audacity truly accepted the view that the product should never report a single byte of information back to them and that they should never attempt to collect any data whatsoever about their users, then they would be forced to make decisions about Audacity without those users' data (including their opinions!), even if those decisions could end up being disliked or contentious among a subset of the users. At that point I expect they would likely determine that product autoupdate and feature usage metrics are a valuable feature for the product and not of concern to most users, and then ship those — regardless of the cries of outrage from the subset that hold contradictory opinions — because, after all, that subset demanded not to exist to Audacity, and so their wish is fulfilled.
How can the vocal contingent of Audacity users who say "my existence should remain unknown to those who make the software I use" expect to have their opinions considered relevant, when they specifically demand not to be considered as existing at all?
To me, your comment overflows with cynicism and not much else.
All of the things you bring up are solved problems. They were solved years ago and the feedback methods continue to be used today by quality software companies. Just because Audacity and its owners choose the lazy route doesn't mean it's the only route.
Programmers and the companies they work for should stop trying to normalize telemetry. It's not normal.
I have seen no effective solution for this, in any product yet to date, that both addresses my questions and satisfies the terms of "users do not exist" demanded by those who are against all telemetry or telemetry-capable functions.
Either products make decisions without telemetry, or they make decisions with telemetry. Either products make decisions without user input, or they make decisions with user input. You do not name any specific feedback methods that do work as a solution to this, and certainly I see none in use in projects such as Homebrew or Audacity that are cited as acceptable to you.
What "quality software companies" offer a feedback method that does not stir outrage among those who incorporate it into their products and/or efforts, when their feedback is not treated with priority and urgency? Every software company I've seen, small or large, has angry users who complain that their opinion has not been treated as correct by software companies, and those users congregate on forums and reiterate their pet peeves every day or week or month, in the futile hopes that this clamor will somehow influence the software company (which, generally, it does not). You do not address at all my concern about biased-towards-complaints data resulting from in-product surveys or feedback methods, and you do not explain how feedback methods unsupported by telemetry can measure the true amplitude of complaints rather than being biased by how loudly the squeaky wheels squeak.
You have failed to provide any supporting evidence for your argument that these are 'solved problems', and instead framed your unsubstantiated view as if it's fact. That isn't a viable approach at Hacker News, and I hope you'll take the time to correct it.
How do you ask your users about their experience with the software without collecting potentially personally identifying information? Most Audacity users are not going to be on a mailing list or forum. Would it be better to occasionally pop up a dialog asking for feedback? How do you account for response bias in that feedback?
> 1. error reporting - the user has to click a button to share crash logs. It's basically a macro to help the user create a support ticket.
> 2. version checking - no PII information being kept, literally just helping the developer get an idea of what versions they need to be supporting--you do want them to support the version you're running, right? Well, if not, you can turn it off.
What's the big deal?