IANAL, but that sounds like harassment. I assume the legality depends on the context (did the artist previously date the subject? are you coworkers? etc.). Lots of states have laws against harassment and revenge porn that seem applicable here [1], and I don't see why such laws wouldn't apply to AI-generated art as well. It's the distribution that's really the issue in most cases: if you paint secret nudes, keep them in your bedroom, and never show them to anyone, it's creepy, but I imagine not illegal.
I'd guess that Stability is concerned with their legal liability. Also, perhaps they are decent humans who don't want to make a product that is primarily used for harassment (and whether they are decent humans or not, I imagine it would affect the bottom line eventually if they develop a really bad rep, or if a bunch of politicians and rich people are targeted by deepfake harassment).
^ A lot, but not all, of those laws seem pretty specific to photographs/videos that were shared with the expectation of privacy. I'm not sure how they would apply to a painting/drawing, and I certainly don't know how the courts would handle deepfakes that are indistinguishable from genuine photographs. I imagine juries might tend to side with the harassed rather than a bully who says "it's not illegal cause it's actually a deepfake, but yeah, I obviously intended to harass the victim."
Because AI lowers the barrier to entry. Using your example, few people have the drawing skills (or the patience to learn them), or would take the effort to make a picture like that; the barrier is much lower when it takes five seconds of typing out a prompt.
Second, the tool will become available to anyone, anywhere, not just a localised school. If generating naughty nudes is frowned upon in one place, another will have no qualms about it. And that's just things that are about decency, then there's the discussion about legality.
Finally, when person A draws a picture, they are responsible for it - they produced it, not the party that made the pencil or the paper. But when AI is used to generate it, is all of the responsibility still with the person who entered the prompt? I'm sure the Ts and Cs say so, but there may still be lawsuits.
Right, these are the same arguments against uncontrolled empowerment that I imagine mass literacy and the printing press faced. I would prefer to live in a society where individual freedom, at least in the cognitive domain, is protected by a more robust principle than "we have reviewed the pros and cons of giving you the freedom to do this, and determined the former to outweigh the latter for the time being".
You seem to be very confused about civil versus criminal penalties....
Feel free to make an AI model that does almost anything, though I'd probably suggest that it not make porn of minors, as that is criminal in most jurisdictions; short of that, it's probably not a criminal offense.
Most companies are only very slightly worried about criminal offenses; they are far more concerned about civil trials, where the standard of evidence is far lower. An AI creator writing "Hmm, this could be dangerous" in an email - that's all you need to lose a civil trial.
Why do you figure I would be confused? Whether any liability for drawing porn of classmates is civil or criminal is orthogonal to the AI comparison. The question is if we would hold manufacturers of drawing tools or software, or purveyors of drawing knowledge (such as learn-to-draw books), liable, because they are playing the same role as the generative AI does here.
Because you seem to be very confused about civil liability for most products. Manufacturers are commonly held liable for how users use their products; for example, look at any number of products that have caused injury.
Surely those are typically cases where the manufacturer was taken to have made an implicit promise of safety to the user and their surroundings, and the user got injured. If your fridge topples onto you and you get injured, the manufacturer might be liable; if you set up a trap where you topple your fridge onto a hapless passer-by, the manufacturer will probably not be liable towards them. Likewise with the classic McDonald's coffee spill liability story - I've yet to hear of a coffee vendor being held liable over a deliberate attack where someone splashed someone else with hot coffee.
Photoshop also lowers that barrier to entry compared to pen and pencil. Paper also lowers the barrier compared to oil on canvas.
Affordable drawing classes and YouTube drawing tutorials lower the barrier to entry as well.
Why on earth would manufacturers of pencils, papers, drawing classes, and drawing software feel responsible for censoring the result of combining their tool with the brain of their customer?
A sharp kitchen knife significantly lowers the barrier to entry to murder someone. Many murders are committed every day using a kitchen knife. Should kitchen knife manufacturers blog about this every week?
I agree with your point, but I would be willing to bet that if knives were invented today, rather than having been around a while, they would absolutely be regulated and restricted to law enforcement, if not military, use. Hell, even printers, maybe not if invented today but perhaps in a couple years if we stay on the same trajectory, would probably require some sort of ML to refuse to print or "reproduce" unsafe content.
I guess my point is that I don't think we're as inconsistent as a society as it seems when considering things like knives. It's not even strictly limited to thought crimes/information crimes. If alcohol were discovered today, I have no doubt it would be banned and made Schedule I.
> Hell, even printers, maybe not if invented today but perhaps in a couple years if we stay on the same trajectory, would probably require some sort of ML to refuse to print or "reproduce" unsafe content.
Fun fact: many scanners and photocopiers will detect that you're trying to scan/copy a banknote and will refuse to complete the scan. One of the ways they do this is by detecting the EURion constellation, a distinctive pattern of five small circles printed on many banknotes.
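The core idea behind that kind of detection is pattern matching on the relative geometry of the circles, which has to work regardless of how the note is scaled or rotated on the scanner bed. Here's a toy sketch of one way to do that, comparing sorted pairwise distance ratios (a scale- and rotation-invariant signature). The reference coordinates below are made up for illustration, NOT the real EURion layout, and real scanner firmware presumably does something far more robust:

```python
from itertools import combinations
import math

def distance_signature(points):
    """Scale-, rotation-, and translation-invariant signature:
    all pairwise distances, sorted, normalized by the largest."""
    dists = sorted(math.dist(a, b) for a, b in combinations(points, 2))
    return [d / dists[-1] for d in dists]

def matches(candidate, reference, tol=0.05):
    """True if two point patterns share (approximately) the same signature."""
    return all(abs(c - r) <= tol
               for c, r in zip(distance_signature(candidate),
                               distance_signature(reference)))

# Hypothetical 5-circle layout -- NOT the actual EURion coordinates.
REFERENCE = [(0.0, 0.0), (1.0, 0.2), (0.5, 1.0), (1.5, 1.2), (0.8, 2.0)]

# The same pattern scaled 3x and translated still matches,
# because the signature only depends on distance ratios.
scaled = [(3 * x + 10, 3 * y + 5) for x, y in REFERENCE]
print(matches(scaled, REFERENCE))  # True
```

In a real pipeline the circle centers would first come from image processing (e.g. blob detection), and you'd check every 5-point subset of detected circles against the known pattern.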
If yes, why doesn't the same law apply to AI? If no, why are we only concerned about it when AI is involved?