Wow! I thought nothing would surprise me anymore about Adobe. But this is a new low even for them.
This is ripping off artists and accessing your customers' content without authorisation, executed at the highest level. It boggles my mind how this is even legal anywhere, especially in EU countries. How is this an opt-out and not an opt-in?
Seems like Content Analysis is turned on by default for the whole Creative Cloud suite. And, to make it worse, you have to go to their site, log in to your account and then opt out? Oh boy, I smell a gigantic lawsuit waiting to happen.
So you use their software and they can use YOUR IP, because YOU gave THEM permission, but god forbid you try to pirate THEIR IP. Then all bets are off and their lawyers will hound you...
I had a friend who was "being audited" by "Adobe and Autodesk". He had been using a student account as a college student, a family member was running a business, and as he put it, "in one Google ad last year I had put my own name along with that family member's". The point of the audit was "to assess if there was any violation of copyright and if they were using only licensed software in their machines", failing which they would be liable to pay the full price of the software with back payments.
I told the friend to tell them to "fuck off"... that was last year.
I'd love to drop them, but they're industry standard. Good open-source alternatives such as GIMP somewhat lack the AI features, which have most likely been trained illegally!
Yeah, this was E&Y and also KPMG harassing the poor guy. We wrote them a simple "please fuck off" letter, and I am certain they backed off.
The guy was traumatized to the point that he was asking whether, after formatting his HDD, records of "pirated Windows" would remain... oh boy.
scummy way of making money.
The problem is, all these proprietary software companies, from Tally in accounting to Microsoft with Excel/Word and Adobe, pay the colleges/schools to have their product used in education. I am talking about India here specifically. The students "learn" that piece of software; that is precisely the idea behind the "student discount" or "free for students".
That is what "most" companies do, while the really scummy ones, Adobe et al., then go after these poor souls.
I'm in the EU, and currently content analysis is toggled off for me. How new is this option? If it's from the past couple of months I definitely didn't touch it; if it's older, I may have turned it off myself at some point.
> I'm in the EU, and currently content analysis is toggled off for me
I'm also in the EU and checked this (very well-hidden) option for the first time after reading the thread - and it is defaulted to off.
Yet more evidence for the usefulness of GDPR. At times like this I'm genuinely grateful for these EU regulations that protect the individual against the encroachment of corporations on their private lives and personal data.
Agreed, I'm trying to see if there's anything in my profile that could have been confusing, but I'm a paying customer so they definitely have the correct country on record for me and I'm paying in €.
I'm in the US and this option is off for me. As the page loads now, it is very clear about what they want to do with this setting. I don't ever remember reading this page, but had I read it there's no way I would ever turn this on, and if it was on by default I would definitely have turned it off. So I have "no recollection, Senator" of whether I turned it off myself or whether it defaults to off. Just anecdotal stories from some rando on the interwebs.
I used to be a heavy Adobe user, but stopped some years ago. Recently, a project required me to use Lightroom again, so I had to sign up again (with a different email this time). I checked this account, created ~2 months ago, and the setting is on by default. I'm not in the US.
To me this is industrial espionage rather than copyright infringement. If I post a picture online and you use it in some way I didn't give permission for, maybe there's an infringement case, but that's not very interesting.
Here, we have effectively malware that is taking stuff that was never shared, from a personal or business cloud, and using it for some other purpose. That's a much bigger deal, because it involves stuff nobody agreed to make public.
I strongly suspect Microsoft does this too with Office. I'm always catching them trying to make excuses to upload images for "connected experiences" (industrial espionage), and I imagine in the end they just do it. This is a big problem, for example when working as a contractor and dealing with private client data. Big vendors stealing data through their software is a risk I don't know how to manage.
This continues to be a point of confusion in copyright. You can't put a photo online and tell me how I can consume it. You can tell me I can't distribute it to anyone else.
The discourse around this stuff seems so insane to me. Am I wrong on this? It feels like the cat is out of the bag and the only way to have generally good outcomes is to go ham on open source models. IMO there's a zero percent chance that generative AI can be meaningfully controlled and any efforts to restrict training will only result in fewer people having access to the good models. They will still be created, only by people who don't care about the ethics at all, or big corpos who seek to collect rent.
I think the only way I see a good outcome is that the FOSS models are so good there's no money to be made competing.
I have absolutely no problem with generative models, not even for producing 'illegal' images and texts, but the copyright infringement by big players is disgusting. Microsoft (Copilot), OpenAI, Adobe, etc. will wholesale breach copyright and not give a damn, but god forbid you copy a line of their code; they will use all their might to crush you. If they want to use copyrighted stuff for their models, they should license it.
This comment is a bit hard to unpick. It's not clear that training violates copyright. However, plagiarism (created by any means, AI or not) is already covered by existing IP law.
(The difficult part is telling whether AI generated stuff infringes on the copyright of stuff in the training data. And the fact the onus is on the end-user of AI tools sucks...)
However, if you're arguing that copyright law should be extended to prevent use in training, that's a much tougher sell and one where I'm not 100% sure of my position.
Training a model essentially creates a derivative work from the input data. GPL-licensed code requires keeping the same license for derivative works. BSD-style licenses at the very least require retaining the copyright statement.
It feels pretty clear to me that the trained model and code generated from it would have to adhere to the original license.
In a way ML is a bit like lossy compression. Would you think JPEG encoding and decoding an image strips the result of its original copyright?
There is a point where a derivative work is so transformative that the original copyright no longer matters. In the US you would argue via fair use, where the precedent is "The more transformative the new work, the less will be the significance of other factors, like commercialism, that may weigh against a finding of fair use" [1], and other jurisdictions have equivalent ways to exempt works that are different or transformative enough. So the big question is how transformative a given model is. And at least for language models I'm not sure we even understand Transformer models enough to give a qualified answer.
I think this is true, but currently you can coax e.g. stable diffusion to generate art in the style of a specific person, and if you do things right, it will even have a smudge-y version of their monogram/signature in the bottom right.
Pretty clear indication that something is not yet 'transformative enough', IMO.
Same happens with Copilot and such - you can get it to spit out tens of lines of code verbatim pulled from someone else's project, with comments and all.
These kinds of things just don't sit right with me.
Producing art in the style of someone else is perfectly fine. It's been done for centuries.
What is not fine is claiming that this is from the artist whose style is being copied. That makes your work a counterfeit.
To me, the legal protections for artists already exist: copyright offers protections against verbatim copying, laws against counterfeiting protect against original work made to be passed as if it were from the artist.
Whether these laws are still adequate in the face of AI-generated content is debatable and my guess is that we are in for interesting tests and debates around the subject.
Artists should have the means to protect their work. It's a condition to create a society where art can flourish in public. Everyone benefits.
However, artists should not be able to suppress the means of creation. Technology has always been a boon to Art. It can be the cause of huge shifts and some types of art may disappear in favour of new ones, but it's never hindered our innate need for creating art.
> I think this is true, but currently you can coax e.g. stable diffusion to generate art in the style of a specific person, and if you do things right, it will even have a smudge-y version of their monogram/signature in the bottom right.
You're conflating two things here:
1. SD can generate art in the style of a specific person
2. It also adds signatures to some pieces
These are independent things. Sometimes it does 1 without 2 and sometimes 2 without 1. And the signature can appear on the wrong artist's work, or be a weird mix of different signatures.
It's just that it's been trained on "paintings with signatures" so it includes them. It's not the damning evidence some claim it to be.
This is almost certainly false. Derivative work is a category of copyrightable material where an existing work was further creatively transformed in order to create said derivative work. For example painting a moustache on the Mona Lisa.
That creates a derivative work, which has a separate copyright on the creative addition (the moustache).
Training an AI has been found to be not creative work. The AI weights, as a result, are not a derivative work, and likely cannot be copyrighted.
Your GPL example is an unexpectedly poignant one, as well, though it will be to your chagrin.
If you publish your GPL code, and I learn from it and produce code that does the same thing, that is not restricted by copyright. Furthermore, if it's found that the expression of the code itself cannot be separated from the function, the expression is not copyrightable in the first place, and the GPL is redundant.
It's only in the instance that I reproduce your code exactly and the function is separable from the creative expression that your copyright is valid.
Regarding your JPEG compression, it's possible that a ML model will produce outputs that, of themselves, are copyright infringement. A recognizable picture of Ariel, for example. That doesn't mean that the training, the model, or anything else it produces is infringing.
> This is almost certainly false. Derivative work is a category of copyrightable material where an existing work was further creatively transformed in order to create said derivative work. For example painting a moustache on the Mona Lisa.
The Mona Lisa has been in public domain for several centuries, so it's a pretty bad example.
1. Github Copilot will sometimes produce non-trivial inputs it was trained on as outputs
2. Those outputs will not conform to the license
That is, in my mind, a clear violation. This could easily be solved by making training (or transformation or whatever) explicit in the license.
Edit: I single out Github copilot because such examples were found. I'm not sure you can find such examples for things such as Stable Diffusion or DALL-E, but the operating principle is the same.
> I'm not sure you can find such examples for things such as Stable Diffusion or DALL-E
DALL-E 1 will sometimes replicate watermarks (without being prompted for it [1]). I'm not sure anyone has ever shown a 1-to-1 replication like what's shown with Copilot, but it's easy to get image models to create clearly derivative works (just ask for the Mona Lisa, or even a well-known modern work).
It isn't copyright infringement to train an AI on information that you obtained legally.
Example: If I put my book on mybook.com, and you download it (legally), you can read it, learn from it, and produce works in a similar style, all without my consent, and copyright offers no tools to restrict that.
The only tool copyright offers to protect against that is distribution.
A.I.s are subject to copyright (unlike a human being like yourself), so the A.I. which has been taught is an infringement of the copyright of the artist, because the A.I. is a derivative copyrightable work.
Since copyright doesn't govern your brain's "wetware", the comparison to human behavior is irrelevant.
Two issues with that. Training an AI has thus far been regarded in courts as non-creative, and thus the AI model/weights are not copyrightable, nor does the model sufficiently represent the original work, so it's not a derivative work.
I think you are confusing the issue of whether the A.I. output is creative with whether training the A.I. itself is creative enough for the A.I. software to be copyrightable.
The law has a very broad definition of creativity when a human is involved, even a human taking a picture of the Mona Lisa is considered creative.
Since humans train the A.I., and it takes creativity to design the training scheme, the A.I. would appear to me to clearly be a creative work and copyrightable.
You brought up a separate issue of whether the model infringes the original work, and whether it's okay because the model doesn't resemble the original.
I don't think that's precisely the accurate legal standard. What's relevant is whether the copying into the model constitutes fair use.
One of the considerations of fair use is whether the use is transformative. But that's only one consideration a court will look at in determining whether the A.I. company can succeed in a fair use defense. I don't believe this is settled law.
But if there's any broad point I want to make, it's that the law does not ever consider a machine or piece of software "like a human" or say "If it's okay for a human to do it it's okay for a piece of software to do it."
I am not confusing that issue. I am stating outright that it is not sufficiently clear that a human is responsible for the neural net weights. The gradient descent may be considered automatic.
Section 313.2 of the U.S. Copyright Compendium states, "The (copyright) office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author".
Moreover, the Copyright Act's definition of a computer program is “a set of statements or instructions to be used directly or indirectly in a computer to bring about a certain result”. Neural net weights are clearly neither statements nor instructions, so it's unlikely the weights would gain copyright as a computer program.
It's indeterminate whether the SSO doctrine[1] is operable on neural net weights, as it can be difficult to determine in which order weights are used. If those weights were to gain copyright protection, they may not be enforceable on similar sets of weights.
Regarding whether resembling the original work is the legal standard for being considered an infringing work: the test applied by a court is called Substantial Similarity[2], and it's extremely unlikely that neural network weights would be substantially similar to training data when considering the idea-expression divide.
Copying into the model... are you suggesting that if I have an image that's represented by an array in memory, and I copy that array from one memory location to another in my pc, that act of copying is the infringing act?
Fair use may even be a bit non-sequitur in this case. If I look at copyrighted art, I don't have a fair use case, because I haven't incorporated the art into anything, though the act of viewing has changed my brain. Training a neural net on training data may not even be considered incorporation.
Your brain is immune from copyright lawsuits as being a derivative work because your brain is not a tangible medium of expression.
Your thoughts are not derivative works under the copyright act because your thoughts, seeing as they are not contained in a tangible medium of expression, are not works at all.
Computers are clearly tangible mediums of expression.
We can argue whether a particular piece of software is copyrightable or whether a dataset is protected under the copyright act's definition of compilations.
But it should not be up for debate whether whatever is happening in your brain when you learn something is in any way relevant in terms of copyright law.
I would also add that the idea-expression dichotomy is irrelevant here.
If the dataset was merely ideas like "A boy goes to Wizard school" then you might win on that ground. But if you feed it the complete works of Harry Potter, no court is going to claim that the data the machine created based on Harry Potter is an "idea".
How is open source supposed to compete with commercial players that can get a lot more data through unethical means (like, say, Adobe using all the pictures their users sync with Lightroom)?
Face recognition had this problem for a long time: motivated companies with the right product, the right connections or the right amount of money can get much better datasets than what is available to researchers. Generative AI will face the exact same problem, along with the other difficulties of funding such models
I don't disagree with you, but this discussion was about Adobe using its customer's photos without their consent, right? (no, I do not consider the small print to count here. Not on a moral level)
I feel like the discussion of ethical use of ML models, while a valid one, comes a couple of steps after that. Like, the hypothetical existence of good and fair FOSS models won't make this transgression by Adobe magically irrelevant, will it?
A few updates ago, they made it very difficult to save files to your local system. I wonder if this has anything to do with that. It is at the least a very dark UI pattern to get people to use their cloud storage offerings. Add to it that they use any data in the cloud storage as training data just really goes over the top.
Ah. So people will continue to use this because it's the "industry standard", which gives Adobe the monopoly to fuck people around, all the while these same people will not use/improve/suggest/fund open-source alternatives. The classic chicken-or-egg problem...
I am so glad the Blender project is slowly being accepted. I have a friend who laughed at me 7-8 years ago when I recommended Blender to him. Recently we were talking about some stuff and he brought up Blender himself, as in "this is something I should test". I see that as an absolute win...
Hopefully many other programs like Kdenlive, Darktable, or Inkscape get a Blender-like movement behind them so that they can become capable alternatives.
I swapped to using RawTherapee years ago and never looked back, at least for my workflow it's miles better than Lightroom. Blender is best in class when it comes to FOSS creative software, a real exemplar of the best case.
Inkscape on the other hand is like shoving forks in my eyes every time I try to use it. I started working on a web-based SVG editor just to cover my own use cases. I'm just using the browser renderer because the Inkscape frontend is a total joke. There's tearing when I try to resize a rectangle, and it updates at like 8 fps. It's actually embarrassing that software can be that slow doing something so basic. It's incomprehensible to me: I can render and display staggeringly realistic game worlds at 240 fps, and the very same hardware struggles to resize a rectangle.
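For the curious, "just using the browser renderer" really does mean almost no rendering code of your own. Here's a minimal sketch of the idea in TypeScript (the element IDs and wiring are purely illustrative, not from my actual editor): you only mutate SVG attributes on pointer events, and the browser's native SVG renderer handles every repaint.

    // Minimal sketch: resize an SVG <rect> by dragging a handle circle.
    // Element IDs (#canvas, #rect, #handle) are hypothetical; the point is
    // that we only mutate attributes and the browser does all the painting.
    const svg = document.querySelector<SVGSVGElement>("#canvas")!;
    const rect = document.querySelector<SVGRectElement>("#rect")!;
    const handle = document.querySelector<SVGCircleElement>("#handle")!;

    // Convert pointer coordinates into SVG user space.
    function toSvgPoint(evt: PointerEvent): DOMPoint {
      const pt = new DOMPoint(evt.clientX, evt.clientY);
      return pt.matrixTransform(svg.getScreenCTM()!.inverse());
    }

    let dragging = false;

    handle.addEventListener("pointerdown", (evt) => {
      dragging = true;
      handle.setPointerCapture(evt.pointerId);
    });

    handle.addEventListener("pointermove", (evt) => {
      if (!dragging) return;
      const p = toSvgPoint(evt);
      const x = Number(rect.getAttribute("x"));
      const y = Number(rect.getAttribute("y"));
      // Mutate attributes only; repainting is the browser's job.
      rect.setAttribute("width", String(Math.max(1, p.x - x)));
      rect.setAttribute("height", String(Math.max(1, p.y - y)));
      handle.setAttribute("cx", String(p.x));
      handle.setAttribute("cy", String(p.y));
    });

    handle.addEventListener("pointerup", () => { dragging = false; });

With that approach the updates stay smooth because there's no custom rasterization at all, which is exactly what makes Inkscape's sluggishness so baffling to me.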
To be honest with you, I've only been experiencing those funny things with Inkscape >1.1 or so; before that, it was flawless even on the crappy PC I have. Still, I think it's great and nothing comes close to it regarding SVG.
The Inkscape UI has become way better, though, and I can live with the sluggish performance. But have you ever tried to use the resulting SVG in an interactive way? It is hell, and I gave up. Partly that's because SVG is bad for it in general, partly because Inkscape makes it worse with weird output (relying heavily on the viewBox, for example).
Yeah! Should have mentioned that one too. I literally had it open and was doodling when I wrote the post. It's not perfect but I still prefer it to Photoshop most of the time. Unfortunately it's pretty useless for vectors.
I hope to come back to this thread and find someone tempering this with reality, because right now this is as bad and immoral as it gets (speaking as a photographer). You'd expect at least a fake sense of consumer/artist protection from a company like Adobe that NEEDS these customers to survive.
But hey, another reason to hate the prison guard landlord.
True, but look at their earnings: when they brought their cloud crap into their product lines, their revenue jumped through the roof. Clearly people en masse don't care enough, despite a very vocal minority (like me) who have been cursing at cloud-anything from day 1.
I disagree (but also, IANAL). They use the private images to train their models. That means the image is part of a dataset, and this is subject to licensing. The image is, however, part of your private collection, so the license, beyond the wording on the privacy page, is sketchy at best. Also, it will collide with GDPR and CCPA rules on data processing.
Data models are subject to licensing too, and using images without proper licenses “poisons the well”.
Usually the copyright on a data set is held by the person who accumulated the data, but mere compilation is not enough to bestow copyright. The arrangement and selection itself needs to be sufficiently creative or original.
Putting all your photos in a folder does create a dataset, but doesn't meet the threshold of creative input for that dataset to be copyrightable as a compilation.
Now, you do hold the copyright on all of your photos. This means that you can restrict their transmission to others. You can even transmit them to someone as part of a contract that they not engage in a particular activity, like training an AI.
But if they proceed to train an AI using them, they've committed a breach of contract, not copyright. And I'll tell you, I'm not a lawyer myself, but instead a SME who works with a lawyer, and in order to receive relief for breach of contract, a judge is going to look to said contract in order to calculate damages. If you have a contract that says that something is a breach, but no language that assists a judge in calculating damages, you shouldn't assume you're going to get any relief in excess of, perhaps, an injunction.
Can you imagine a better business model? You have to ignore even the least trace amount of ethics, but at this point I don't expect anything else from big tech anymore (or everyone else, to be honest).
I assume this is "Lightroom" and not "Lightroom Classic" (both of which are available via Adobe CC, but Classic is offline, whilst the other was always full-cloud).
Historically I've used Lightroom Classic, but I've always been worried about Adobe killing it off one day.
I think I might go back to Phase One's Capture One[1]. I tried it a while ago and found it pretty decent; despite the name, it's not just for Phase One camera users.
I moved from LR Classic to Capture One because I was getting fed up with Adobe's business practices. I'm much happier with C1, even more after I switched from Nikon to Fujifilm and RAW processing for Fuji X-Trans was so much better on C1.
I used to be a customer for the LR + PS bundle, got grandfathered when they turned LR into LR Classic but then Adobe tried to force me to upgrade to a more expensive plan multiple times, I stopped using Adobe completely and get all my photo editing needs from C1 + Affinity Photo. It's been a bliss to be away from Adobe.
Photoshop has been improving its "remove background"-like features quite a bit lately, sporting "AI" in the descriptions. I am not as outraged if that is the main use case. I wonder if Google or Apple uses some of the images they hold for similar purposes. I can easily argue, though, that once they have the data they can use it to train all sorts of features, including generating art, which to me is far more upsetting.
If it was opt-in, and creators would be compensated for the use of their work, it would be ok. As implemented it is just... well... on-brand for Adobe I guess.
The expectation of cloud storage is that it's yours. They are not supposed to help themselves to your files in any shape or form. If the big cloud providers start breaking this unspoken promise to save on content acquisition costs, people will lose trust in the cloud. Even if this is legally OK (which is questionable), it is extremely short-sighted.
In general I have always assumed this to be true, but iirc Adobe explicitly said their storage was private and now I have some photos I synced to my phone possibly wrapped up in this.
Well, the issue of access is not a technical one for the manufacturer, only a moral one. Fines rarely work for now: they are so low and take so long to happen that management can comfortably sail away on their golden parachutes. Add jail time and confiscated property and it will be a different topic.
Basically we all have to trust forever some money-focused multinational corporations that they will keep being nice 'because its the right thing to do', or some similar marketing mantra. Look at any corporation and lawsuits around it and you will see very little morality anywhere.
For me, cloud is dead for most of my purposes, regardless of data stored. The idea looks nice on paper but then so does communism.
They are probably using those to train a Stable Diffusion/DALL-E/Imagen-like model, as that's the current state of the art AFAIK. As for whether Google or Apple do it too, unless some employee makes a throwaway account and shares, we don't know. If I had to guess, though, I would say Google probably does, along with publicly crawlable images. Apple most likely does too, unless they just bought the data from some other service, because they too have ML-powered image capabilities and they couldn't have achieved that without obtaining a sufficiently large dataset first.
Now I don't see anything that implies that this is being used for training AI generated work... but I don't see any reason it can't be.
This has me seriously concerned not only for the future of art but for all created work, and for who actually has the rights to it depending on what you use to make it.
What particularly bothers me is how quickly this was just accepted by society. I saw so many people defending it as "fun" because they just don't understand how it actually works.
I was really hoping that society would push back against AI generated things... that was proven wrong very very quickly.
> Adobe may analyze your Creative Cloud or Document Cloud content to provide product features and improve and develop our products and services.
Nothing in that document seems to indicate this is opt-in.
> Let's say that you access Creative Cloud or Document Cloud via a personal account and prefer that Adobe doesn't analyze your content to develop and improve our products and services. In that case, you may turn off content analysis at any time from your Adobe account.
I'm not a professional photographer, so I don't have their needs. But Google Picasa still works for my photos. I use Dropbox to sync them off my phone onto my PC.
And Krita and Photopea have replaced my need for Photoshop after I dumped Adobe several years ago. And then there are mobile apps like Procreate and Infinite Painter. I also own the Affinity products, which are quite nice, but I admittedly don't use them much.
I lost trust in Adobe years ago, especially when every new release for the yearly Adobe Max was full of new bugs. Seeing news like this reminds me to never go back.
Things like this are the main reason I went to Linux for everything except certain games, and locally installed open source for office and photo post processing.
I really miss the days of locally installed OS's, local software suites, when you did not need online accounts and cloud computing for the most mundane tasks. And I really hope those days will come back, or at the very least things like Ubuntu, LibreOffice and Darktable will stick around for quite a while.
Funny thing though, the old versions of Lightroom and Photoshop, the offline ones, still work perfectly fine under Win 10. So if you have a license key, I don't see how Adobe could kill those.
If I sign an exclusive licence contract with my client that guarantees the originality of my work (e.g. a logo), with no option to sell the copyright to others, and Adobe breaks it just like that, it's great grounds for a lawsuit.
I assume they either make money or plan to make money from it. And as long as it’s optional I guess it’s fine BUT why not give a little cut to those who provide images in the form of a reduced fee?
Doesn't feel very clear-cut. If the images aren't kept after they are fed into training, then it's hard to say where any PII is stored.
Selling privacy for money is legal, that’s what every store membership card where you get a member rebate does (in which case they of course associate all your purchases with the account).
This is honestly very convenient. If I ever accidentally delete a picture I can just ask Adobe for a backup. Very consumer-conscious decision on Adobe's part.
I've switched to C1 when Adobe went to subscription-only, pretty happy with it. They are about to make substantial changes to their licensing though, in what looks like an effort to push users to their subscription model. No more brand-specific versions (up until now you can get a version that works with only Fujifilm or Sony cameras for a bit more than half the price), no more upgrade pricing (just a "loyalty scheme") and even less in terms of updates than you used to get with a perpetual license. You're already paying more for the perpetual version if you update every year, I guess that's going to get substantially worse.
I guess soon it'll be either a subscription or barely-usable open source like darktable, which is a shame for hobbyists. I'd love to use open source, but darktable just isn't there, by a long shot. It's godawfully slow with 26MP images on a brand-new M2 Macbook (C1 is blazingly fast in comparison), the UI is unstructured and very spartan, lots of UI flicker and glitches that become super annoying after a while, no dehaze feature, shadow detail suffers in comparison to what C1 does there, etc. I bet a well-executed LR Classic/C1 competitor from Affinity would be a tremendous success.
If you can cope with the totally different workflow, Darktable works really well too. At least from what I can tell looking at the same RAW files being post-processed by me using Darktable and by my dad using Lightroom/Photoshop.
It is really good, though, that there are alternatives to Adobe out there.
From what I understood, one big difference which makes this one worse is that it is using unpublished photos. Which can include works in progress, finished work under non-disclosure agreements, and even intimate things like personal nude pictures. It's more than just lacking consent or opt-in or licensing; these are files which shouldn't be available to these models (or to the public in general) at all.
Yep, the problem is they are providing the illusion of privacy and violating it. They should just make everyone’s cloud storage public access for viewing if they want to treat it like public data.
This is a key question. What Adobe's doing is incredibly anti-creator but arguably better because you can at least opt out. There's a lot of cognitive dissonance and hypocrisy about this right now even in communities that should know better.
This appears to be true for now unless you're syncing Lightroom Classic with Adobe Cloud. But I think it's safe to say that Adobe would not have a problem farming your local catalogs for images in the future without warning.
Specifically, what's the issue here? Seems a lot of people are hating on Adobe here, but as far as I'm aware it's fairly standard for companies to process your data to train their models. At least I see this kind of opt-in / opt-out everywhere.
Is it that this is an opt-out not an opt-in, or is that it's even an option to begin with?
If we as consumers want companies like Adobe to provide us with cool AI tools then it seems this is unfortunately just what's required. And if you don't like it, you can opt-out. So I'm guessing the issue is that it's an opt-out not an opt-in? Which seems kinda minor and would perhaps violate GDPR in the EU.
Well, you say obviously, but people in this thread are complaining about how Adobe is ripping off artists, which doesn't seem particularly relevant to the issue of acceptable data consent practices.
I agree it should be opt-in rather than opt-out by default (and believe the current default is a violation of GDPR where it applies in the EU), but they are allowing you to opt out and they are explaining how they'll use the data, so this seems like a relatively minor issue to me, honestly. Because yes, unless you're extremely naive, your assumption should always be that every time your data is sent to a server or put on some cloud, it will be used in ways that you didn't consent to and potentially even disapprove of. Obviously I don't agree with that, but I'm sure there are companies doing exactly this without giving you an opt-out and without even telling you they're doing it.
tweets from artists:
https://twitter.com/CrownePrints/status/1610441583899418624?...
https://twitter.com/SamuelDeats/status/1610365369134333955?t...