> We offer Customers a choice around these practices. If you want to exclude your Customer Data from helping train Slack global models, you can opt out. If you opt out, Customer Data on your workspace will only be used to improve the experience on your own workspace and you will still enjoy all of the benefits of our globally trained AI/ML models without contributing to the underlying models.
Why would anyone not opt out? (Besides not knowing they have to, of course…)
What's baffling to me is why companies think that when they slap AI on the press release, their customers will suddenly be perfectly fine with them scraping and monetizing all of their data on an industrial scale, without even asking for permission. In a paid service. Where the service is private communication.
I am not pro-exploiting users' ignorance for their data, but I would counter this with the observation that slapping AI on a product suddenly makes people care about the fact that companies are monetizing their usage data.
Monetizing user activity data through opt-out collection is not new. Pretending that this phenomenon has anything to do with AI seems like a play for attention that exploits people's AI fears.
I'll sandwich my comments with a reminder that I am not pro-exploiting users' ignorance for their data.
Sure - but isn't this a little like comparing manual wiretapping to dragnet? (Or comparing dragnet to ubiquitous scrape-and-store systems like those employed by five-eyes?)
Most people don't care, paid service or not. People are already used to companies stealing and selling their data up and down. Yes, this is absolutely crazy. But was anything substantial done against it before? No, hardly anyone was raising awareness of it. Now we keep reaping what we were sowing. The world keeps sinking deeper and deeper into digital fascism.
Companies do care: why would you take on additional risk of data leakage for free? In the best-case scenario nothing happens, but you also don't get anything out of it; in the worst-case scenario, extremely sensitive data from private chats gets exposed and hits your company hard.
Companies are made up of people. Some people in some enterprises care. I'd wager that in any company beyond a tiny upstart you'll have people all over the hierarchy who don't care. And some of them will be responsible for toggling that setting... Or not, because they just can't be arsed to, given how little they care about the chat histories of people they'll likely never even interact with being used to train some AI.
I mean, I am in complete agreement, but at least in theory the only reason for them to add AI to the product would be to make the product better, which would give you a better product per dollar.
Because they don't seem to make it easy. It doesn't seem like I, as an individual user, have any say in how my data is used; I have to contact the Workspace Owner. When I do, I'll be asking them to look at alternative platforms instead.
"Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at feedback@slack.com with your Workspace/Org URL and the subject line “Slack Global model opt-out request.” We will process your request and respond once the opt out has been completed."
I'm the one who picked Slack over a decade ago for chat, so hopefully my opinion still holds weight on the matter.
One of the primary reasons Slack was chosen was because they were a chat company, not an ad company, and we were paying for the service. Under these parameters, what was appropriate to say and exchange on Slack was both informally and formally solidified in various processes.
With this change, beyond just my personal concerns, there are legitimate concerns at a business level that need to be addressed. At this point, it's hard to imagine anything but self-hosted as being a viable path forward. The fact that chat as a technology has devolved into its current form is absolutely maddening.
> We offer Customers a choice around these practices.
I'm reminded of the joke from The Hitchhiker's Guide to the Galaxy: maybe they will put a small hint in a very inconspicuous place, like inserting it into the user agreement on page 300 or so.
“But the plans were on display…”
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well, the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’”
Seriously, for your sake, don't do this whole "I am the champion of Apple's righteousness" shtick. Apple doesn't care about privacy. That's the bottom line, and you lack the authority to prove otherwise.
Because you might actually want to have the best possible global models?
Think of "not opting out" as "helping them build a better product". You are already paying for that product, if there is anything you can do, for free and without any additional time investment on your side that makes their next release better, why not do it ?
You gain a better product for the same price, they get a better product to sell. It might look like they get more than you do in the trade, and that's probably true; but just because they gain more does not mean you lose. A "win less / win more" situation is still a win-win. (It's even a win-win-win if you take into account all the other users of the platform).
Of course, if you value the privacy of these data a lot, and if you believe that by allowing them to train on them it is actually going to risk exposing private info, the story changes. But then you have an option to say stop. It's up to you to measure how much you value "getting a better product" vs "estimated risk of exposing some information considered private". Some will err on one side, some on the other.
How could this make Slack a better product? The platform was very convenient for sharing files and proprietary information with coworkers, but now I can't trust that Slack won't slip in some "opt out if you don't want us to look at your data" "setting" in the future.
I don't see any cogent generative AI tie-in for Slack, and I can't imagine any company that would value a speculative, undefined hypothetical benefit more than they value their internal communications remaining internal.
> Of course, if you value the privacy of these data a lot, and if you believe that by allowing them to train on them it is actually going to risk exposing private info, the story changes. But then you have an option to say stop. It's up to you to measure how much you value "getting a better product" vs "estimated risk of exposing some information considered private". Some will err on one side, some on the other.
The problem with this reasoning, at least as I understand it, is that you don't really know when or where the training on your data crosses the line into information you don't want to share until it's too late. It's also a slippery slope.
> Think of "not opting out" as "helping them build a better product"
I feel like someone would only have this opinion if they've never ever dealt with anyone in the tech industry, or any capitalist, in their entire life. So like 8-19 year olds? Except even they seem to understand that profit-absolutist goals undermine everything.
This idea has the same smell as "We're a family" company meetings.
I for one consider it my duty to bravely sacrifice my privacy at the altar of corporate profit so that the true beauty of LLMs trained on emojis and cat gifs can bring humanity to the next epoch.
> Think of "not opting out" as "helping them build a better product"
Then they can simply pay me for that. I have zero interest in helping any company improve their products for free -- I need some reasonable consideration in return. For example, a percentage of their revenues from products that use my data in their development. I'm totally willing to share the data with them for 2-3% of their revenues; that seems acceptable to me.
Yep, much like just about every credit card company shares your personal information BY DEFAULT with third parties unless you explicitly opt out (this includes Chase, Amex, Capital One, but likely all others).
For Chase Personal and Amex you can opt out in the settings. When you get new credit cards, these institutions default to sharing your data. For Capital One you need to call them and have a chit-chat explaining that you want to exercise the restriction advertised in their privacy policy, and they'll do it for you.
PG&E has a "Do not sell my info" form.
For other institutions, go check the settings and read the privacy policies.
I don't see the point of Rocket Money. They seem like they exist to sell your info.
You should keep track of your own subscriptions. My way of doing this is to have a separate zero-annual-fee credit card ONLY for subscriptions and I never use that card for anything else. That way I can cleanly see all my subscriptions on that credit card's bill, cleanly laid out, one per line, without other junk. I can also quickly spot sudden increases in monthly charges. I also never use that card in physical stores so that reduces the chance of a fraud incident where I need to cancel that card and then update all my subscriptions.
If you want to organize it even more, get a zero-annual-fee credit card that lets you set up virtual cards. You can then categorize your subscriptions (utilities, car, cloud/API, media, memberships, etc.) and that lets you keep track of how much you're spending on each category each month.
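For what it's worth, the "spot sudden increases" step is easy to automate once all subscriptions live on one card. A minimal sketch, assuming your issuer lets you export the statement as a CSV with date, merchant, and amount columns (the file name and column names below are made up):

```python
# Hypothetical sketch: flag subscription price increases from a card-statement CSV export.
# Assumes columns "date" (YYYY-MM-DD), "merchant", and "amount" -- adjust to whatever
# format your card issuer actually provides.
import csv
from collections import defaultdict

def monthly_charges(path):
    """Group charges by merchant and month: {merchant: {"YYYY-MM": total}}."""
    totals = defaultdict(lambda: defaultdict(float))
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            month = row["date"][:7]  # "2024-05-17" -> "2024-05"
            totals[row["merchant"]][month] += float(row["amount"])
    return totals

def flag_increases(totals):
    """Print any merchant whose latest monthly charge is higher than the month before."""
    for merchant, by_month in totals.items():
        months = sorted(by_month)
        if len(months) >= 2 and by_month[months[-1]] > by_month[months[-2]]:
            print(f"{merchant}: {by_month[months[-2]]:.2f} -> {by_month[months[-1]]:.2f}")

if __name__ == "__main__":
    flag_increases(monthly_charges("subscriptions_card_statement.csv"))
```

Run it against each new monthly export and eyeball whatever it prints.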
What I don't understand is why it's opt-out and not opt-in. From a legal standpoint, it seems that if there is an option not to have something done to you, that should be the default. For example, people don't have to opt out of giving me all their money when they pass by my house, even if I were to try to claim it's part of my terms of service.
I'm willing to bet that for smaller companies, they just won't care enough to consider this an issue, and that's what Slack/Salesforce is counting on.
I can't see a universe in which large corpos would allow such blatant corporate espionage, for a product they pay for no less. But I can already imagine that trying to talk my CTO (who is deep into the AI sycophancy) into opting us out is gonna be arduous at best.
I'd be surprised if any legal department in any company that has one doesn't freak the f out when they read this. They will likely lose the biggest customers first, so even if it is 1% of customers, it will likely affect their bottom line enough to give it a second thought. I don't see how they might profit more from an in-house LLM than from their enterprise-tier plans.
Their customer support will have a hell of a day today.
…a choice that’s carefully hidden deep in the ToS and requires a special person to send a special e-mail instead of just adding an option to the org admin interface.
Why would anyone not opt out? (Besides not knowing they have to, of course…)
Seems like only a losing situation.