While I have been cautious and tried to make sure that no code could run by mistake (or via false environment variable / ifdef checks down the line), I have to say that Muse Group's intent seems clearly to have been that the code can be built without any of the tracking.
All networking-related features seem to have been optional and could be disabled via environment variables such as `HAS_NETWORKING` or `HAS_SENTRY_REPORTING`.
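If those flags are honored at configure time, a disconnected build might look something like this - a hypothetical sketch, since the exact mechanism (plain environment variables vs. CMake cache options) may differ from what the build system actually expects:

```shell
# Hypothetical configure step -- flag names taken from the comment above;
# whether they are environment variables or CMake options may differ.
cmake -S audacity -B build \
      -DHAS_NETWORKING=Off \
      -DHAS_SENTRY_REPORTING=Off
cmake --build build
```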
I've detailed this more in my comment in the GitHub repository's issue to make sure others will read about it, too.
Personally, I think this is a classic miscommunication, and it seems as if Muse Group's intent was to be as legally compliant as possible. Of course, a dialog where end users could opt in to everything would've been nicer than enabling/disabling it via build environment flags - but that could've been a simple, rational discussion.
My current temporary (and building) fork is available on GitHub, too, and I would love to see this project governed by a community of developers rather than a single owner who can potentially dictate the integration of tracking mechanisms on others.
To be clear, this legal compliance is with the laws and government of Russia, where the parent company is domiciled.
>"For the purposes of this Notice, WSM Group with registered office at Moskovsky pr-t,40-1301, Kaliningrad, Russia, 236004 (“Audacity“, “us“, “we“, or “our“) acts as the data controller for the Personal Data that is collected via the App and through the App."
>"All your personal data is stored on our servers in the European Economic Area (EEA). However, we are occasionally required to share your personal data with our main office in Russia and our external counsel in the USA."
Audacity doesn't have incriminating data about its users.
Are you seriously worried that the KGB is gonna request your extradition because of the filenames of some mp3s you were editing when a crash occurred?
Example: Verizon's "Message+" for the Mac crashes for almost any reason, including "you just selected an old conversation." I allowed its auto crash report to work, several times.
I gave them a negative rating on the Play Store, citing this. They wrote and asked me for details on "the problem" so they could fix it. I told them to start testing their software.
If you're a developer on the other end of these things for some company, please speak up. How many are actually looked at?
I think correlating those automated crash dumps with reviews in a software distribution channel would have been non-trivial, and not something I would have spent time on, though.
In this case, though, I think you would have said about the review "Oh, yeah, that's the embarrassing bug where it crashes when you do something totally ordinary. Fixed (or fixing) that."
That said, I'd say I fixed 95% of the crashes, even the trivial ones. For the remainder, the cause was difficult enough to understand (i.e., out of memory and we got caught holding the stick) that the risk of change outweighed the fix.
I think this is a culture-clash question. Open source software generally does no automated crash reporting. Some software like the Rust compiler will print out a crash report and ask you to copy/paste it into a GitHub issue on your own; you can decide not to. The general cultural assumption is that if you want the software to get better, you have to participate, and if you don't participate, the developers don't owe it to you to fix your crashes. In fact, with the software I mentioned above, you generally don't even get it directly from the developer; you get it from a redistributor like a Linux distro, who may have modified it. Therefore it may not even make sense to send crash reports directly to the original developers, and the social contract is between you and the redistributor.
Some open-source software like VLC does prompt you to send crash reports if you want, but it doesn't send them by default, and their data collection terms say nothing about law enforcement (https://www.videolan.org/privacy.html).
Some open-source software that conceptually operates at the scale of traditional "shrink-wrapped" software does even more telemetry - Firefox, for instance - but even so, it doesn't say anything about collecting data for law enforcement: https://www.mozilla.org/en-US/privacy/firefox/
There's also a world of closed-source software where there isn't any culture of participation and where asking a user to file a bug makes little sense. This is the world of Windows and macOS and mobile apps, all of which do automated crash reports. This is the world of web analytics tools that track exactly where your mouse cursor pointed so developers can build more intuitive websites, and more generally, the world in which software runs online and server-side because it helps developers. The social contract here is that the developers improve the software and fix bugs without your active participation, and in turn the software needs to send them data about how you use it - both crash reports and general usage data - on its own, without you doing anything.
The real question is which world Audacity now lives in. A clause like "data necessary for law enforcement" is perfectly standard for websites, and probably they copied and pasted it from some standard terms of service document. But can you imagine Emacs or Python having such terms?
And yet, Fedora, Ubuntu, and KDE do. So what you're saying doesn't paint a full picture: even if upstream of those projects you mentioned doesn't take the automated bug reports, there is likely some downstream that is using them to aggregate crash data and turn those into bug reports. It is not feasible, on a large scale, to manually comb through crash dumps and ask people to figure out how to use gdb. Open source projects don't have any special sauce here.
Also, it seems very strange to complain about companies collecting data that they're required to collect by law. "Forking" there is a fool's errand, any other company legally doing business in the same areas will be following the same laws.
Fedora's tool does not automatically report bugs - it just streamlines the process for users who choose to report. It handles generating stack traces locally or remotely (configurable), deals with core dumps and submits them if so chosen (again configurable), and tries to tie crashes to existing bug reports on submission so that devs aren't flooded.
It even has some pretty great functionality to help make sure no private/sensitive data leaks into the report: before you finally choose to submit, it lets you go through things line by line, flagging entries based on a default or custom list of keywords that may indicate sensitive data (such as passwd, key, etc.).
And it's all done on a case-by-case basis, so for each crash you can choose whether to submit any data or not, in case you're running one app that you'd be concerned about leaking sensitive data versus another.
Thanks for clearly phrasing this, because I think this is exactly what I'm complaining about.
Audacity is, in fact, not required to collect anything by law, and for a very long time, it did not. It's a piece of software that runs on your computer. It doesn't even require you to have a network connection. It doesn't require the developers to even know that you exist.
No law (in any slightly free country, at least) prevents the existence of software that doesn't phone home. WordStar wasn't illegal. Emacs and Python and ffmpeg aren't illegal.
It is true that once you start collecting data, legal requirements start to apply, and you need to comply with requests for that data. That's a really good reason not to collect it in the first place.
If all you have is a stack trace, and you have nothing personally identifying (including filenames), then the data you have is useless for any other purpose besides fixing the bug. You're protected from legal requests, you're protected from illegal requests, you're less of a target of hacks, and so forth.
Fedora's ABRT, for instance, goes out of its way to ensure that any crash data they collect is suitable to make public: https://abrt.readthedocs.io/en/latest/ureport.html#ureport That's a pretty good way of squaring this circle. If the data is okay to make public, then by definition it's not a problem if anyone (law enforcement or otherwise) wants to see it, and you don't have to rely on their good intentions.
Ubuntu disables Apport by default in release versions, according to the page you link: https://wiki.ubuntu.com/Apport#Why_is_apport_disabled_by_def...
KDE doesn't claim to collect information about crash dumps; it just collects statistics about how it's used.
It is, of course, harder to debug things from just a list of symbols. And as you say, it's not feasible to teach everyone how to use gdb. So there's a tradeoff. You acknowledge that the software isn't going to be as good as possible in terms of bug-free-ness (or you find other ways to address that problem, like fuzzing or static analysis), but in exchange, you make the software better in terms of how it treats the user's privacy.
Audacity's more comparable peer would be Adobe Photoshop. I'd expect crash reports in a GUI app like either of these. I'd also expect that anyone with data on a server would need a clause like "data necessary for law enforcement".
I would not expect that law enforcement regularly hounds Adobe for data from crash reports.
I do actually think Firefox is a very interesting comparison here, because despite coming from the open-source culture, they collect extensive telemetry and they do work in a model where they can't expect user participation. But they in fact have no clause about data necessary for law enforcement.
(I'm also not defending Firefox as perfect, and a complication of Firefox is that it's designed to be used with an internet connection anyway. Audacity is an app that makes perfect sense to use offline. So was Photoshop, for that matter.)
In any case, the clause itself is not the interesting part; the interesting part is the implication that they have enough data to be interesting to law enforcement. Adobe and Facebook certainly do; whether or not it's used, it's there, and there are so many other reasons to be concerned about the data existing (rogue employees, hacks/leaks, etc.). The developers of Emacs clearly don't. I would guess the developers of Firefox try to avoid it, which is why Firefox doesn't need such a clause.
I definitely agree that this is what people are upset about, it just seems like an overreaction.
After all, that's probably not why they added the line – it's probably just part of the boilerplate used by their new lawyers.
The FSB, on the other hand, does; and I wouldn't put it past them to try stuff that even the original KGB probably would not have dared.
EDIT: I am worried about governments coming after some people, but I am not worried that Audacity's telemetry or ToS will be pivotal to their success in doing so. Just look at Assange – they got him on falsified rape charges, not his history of torrents or website visits.
`SentryHelper.h` is required for `ADD_EXCEPTION_CONTEXT` macro, which can be empty.
`HelpMenus.cpp` has a syntax error in preprocessor directives and an extra comma.
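A minimal, hypothetical version of such a header - assuming the macro is only ever used as a statement, and that the reporting path targets the Sentry native SDK - could simply expand to a no-op when reporting is disabled:

```cpp
// Hypothetical SentryHelper.h stub -- a sketch of the idea, not Audacity's
// actual code. With reporting disabled, the macro compiles to nothing.
#ifndef SENTRY_HELPER_H
#define SENTRY_HELPER_H

#ifdef HAS_SENTRY_REPORTING
// Assumption: a real build would forward the pair to the Sentry SDK,
// e.g. via sentry_set_extra(key, sentry_value_new_string(value)).
#include <sentry.h>
#define ADD_EXCEPTION_CONTEXT(key, value) \
    sentry_set_extra((key), sentry_value_new_string((value)))
#else
// Reporting disabled: expand to a harmless no-op statement so existing
// call sites like ADD_EXCEPTION_CONTEXT("file", name); still compile.
#define ADD_EXCEPTION_CONTEXT(key, value) do { } while (0)
#endif

#endif // SENTRY_HELPER_H
```

The `do { } while (0)` form keeps the macro usable anywhere a statement is expected (including unbraced `if` bodies), which is why an "empty" macro can satisfy the call sites without touching them.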
I thought I had amended those fixes into the latest commit, but I seem to have forgotten to push them. Originally I just injected the macro via command line, as it didn't do anything when reporting was disabled.
Can you please confirm that your HEAD contains the ":bug: Fixes" commit? I'll upload the built Arch package to the GitHub releases as soon as it's built again.
I was using the "audacity-git" AUR package's PKGBUILD file (with the repo URL replaced by mine) to generate my package, and it seemed to work without issues with those fixes applied.
There was also a "pthread_cleanup_pop(1)" patch in the PKGBUILD for the alsa library, which made me wonder why it hasn't been pull-requested upstream... but no idea what's up with that, tbh.
Wouldn't it be much easier to replace the import/include/whatever of the low-level networking bindings with a stub library where all functions just immediately return a networking error, as if the computer were offline?
That would make it all but trivial to keep up with upstream code changes, and keep the disconnected version up to date.
e.g. call python, call netcat, call perl, call wget, ...
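The stub-library idea could be sketched roughly like this - a minimal example of the technique, not anything from the Audacity tree. Linking such a stub in place of the real networking layer makes every connection attempt fail as though the network were down:

```cpp
// net_stub.cpp -- hypothetical stand-in for the low-level networking layer.
#include <cerrno>
#include <sys/socket.h>

// Same signature as connect(2); when this definition is linked in place of
// the real call, every attempt fails immediately, as if the machine
// were offline.
extern "C" int connect(int sockfd, const struct sockaddr* addr,
                       socklen_t addrlen) {
    (void)sockfd;
    (void)addr;
    (void)addrlen;
    errno = ENETDOWN;  // "Network is down"
    return -1;
}
```

The same pattern would be repeated for `send`, `recv`, `getaddrinfo`, and friends. Since application code already has to handle these error paths (networks really do go down), no call sites need patching - which is what would make such a fork cheap to keep in sync with upstream.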
They deliberately and unnecessarily put themselves in a position where legal compliance could mean betrayal of their users.