
Who owns copyright to Deep Dream images? - somerandomness
https://plus.google.com/+AndreasSchou/posts/hspwf3iTqx8
======
kolme
This is just a neural network fed some dog pictures; it's not really
"thinking", and it doesn't "create" anything. In other words, it's not (yet)
artificial general intelligence. As I see it, it's just software that
produces an output given an input. In essence, a tool.

So I guess, unless otherwise stated in the license of said software,
ownership lies with the operator.

Maybe one can argue that the engineers who "fed" the NN hold the rights to
the output.

But the software is not clever enough for me to see here a case similar to
that of Wikipedia, the photographer, and the monkey.

~~~
rtkwe
It is affected by that case though because of the policy and guidelines it
spawned.

> “Because copyright law is limited to ‘original intellectual conceptions of
> the author,’ the Office will refuse to register a claim if it determines
> that a human being did not create the work,” said the US Copyright Office in
> its latest compendium of practices published Tuesday. “The Office will not
> register works produced by nature, animals, or plants.”

and

> In new guidance the USCO has ruled that only works created by a human can be
> copyrighted under US law, which excludes photographs and artwork created by
> animals or by machines without human intervention.

Source [0]

This feels like it falls under 'by machines without human intervention.'
There are no meaningful contributions from a human.

All that said, this will eventually be an issue (hopefully) when machine
intelligence actually takes off or human uploads become viable. Though maybe
by then we'll be less obsessed with every idea having to have an owner, but
that's neither here nor there.

[0] [http://www.theguardian.com/technology/2014/aug/22/monkey-
bus...](http://www.theguardian.com/technology/2014/aug/22/monkey-business-
macaque-selfie-cant-be-copyrighted-say-us-and-uk)

~~~
jsprogrammer
>There are no meaningful contributions from a human.

Humans provided the training data. Humans wrote the source. Humans compiled
and ran the program.

~~~
Houshalter
So the copyright belongs to Google, even if you run it on your own
photographs on your own machine? That doesn't seem right.

~~~
sparkie
Consider if you took two copyrighted pictures and combined them in some way in
Photoshop. We can lay claim to the combined work, but we may not have the
original authors' consent to distribute their work.

Now consider if you trained the NNet on the same two images, such that it
was highly overtrained and basically produced a combined replica of the
inputs. This is essentially the same as doing it manually in Photoshop. That
a computer did it does not take ownership away from the creators of the two
images.

An NNet isn't trained with two images though, but millions. Do we abandon
copyright because of scale? Should the NNet operator be required to keep
the entire training set so that copyright can be traced? Do we invent an
entire new industry for determining the probability that a particular image
was used to train an NNet (and by how much it affected it), such that its
owner can claim royalties on anything the NNet produces?

The question isn't about Google versus the operator; it's about whether or
not we're going to continue investing in the madness of copyright for
machines designed to mimic human brains, and if so, when it will apply to
ourselves - for we can't archive our own training set.

~~~
swhipple
> Do we abandon copyright because of scale?

This is an interesting question. I wonder if, due to the number of items in
the training set and the minimal impact of each individual creative work, it
would be considered fair use.

If your training set significantly consists of images from someone else's
training set in the same domain, you might have a conflict. But for arbitrary
images, it may be analogous to search engines indexing (and learning from)
copyrighted material, which is generally protected.

------
michael_storm
The article (or the parts I've read so far; it's long and dense) assumes two
things:

1) "An AI" should be considered a person. (This one is more subtle; I don't
believe the author says it outright.)

2) "An AI", being a person, should be considered the author of its output.

Both of these suppositions are quite debatable, but the article doesn't bother
to debate them:

    
    
      The law as it is currently configured cannot vest
      ownership of the copyright in a procedurally generated
      work in the work’s author-in-fact, because the work’s
      author-in-fact—a generative software program—has no legal
      personhood. Intuition and the principle of transitivity
      both suggest that the programmer of generative software
      is the logical owner of the copyright in the works
      generated by his or her software.
    

Whose intuition? Not mine. My intuition says that Deep Dream is a tool, and
that by running the tool, I am the author-in-fact of the work. Or maybe that
doesn't require enough human creativity for a court, and if not, fine -- the
creators of the images that my instance of Deep Dream sourced to create its
nightmare-scape are the authors-in-law. But Deep Dream is not the author, and
the programmers of Deep Dream are not the authors.

That's _my_ intuition (possibly quite wrong!). The article would do better to
distinguish intuition from fact.

~~~
eridal
> "An AI", being a person, should be considered the author of its output.

I guess "copyright holder" is a better term.

I can think of record labels, or Hollywood studios, which are not human but
are definitely "persons" that hold tons of copyrighted material, and some of
them even created that material.

I can't see why a program that creates digital works couldn't hold the
copyright to its output.

------
aaronbrethorst
FTA:

    
    
        Q: So, Andy, who owns the copyright to DeepDream images?
        
        [...]
    
        A: Basically, "no one does," because AIs don't have legal
        personhood.
    

Meanwhile, William Gibson 31 years ago:

    
    
        "That's a good one," the construct said. "Like, I own your
        brain and what you know, but your thoughts have Swiss
        citizenship. Sure. Lotsa luck, AI."
    

Gibson was very, very right when he said "The future is already here — it's
just not very evenly distributed." He just neglected to mention that the
future is also really friggin' weird.

~~~
mhink
Where is that Gibson quote from? I'm intrigued. :)

~~~
primaryobjects
That's from Neuromancer, when the hologram (like a chat bot) is having a
conversation with the main character.

------
SilasX
>Q: And [who owns the copyright to DeepDream images], according to [my law
school mentor]?

>A: Basically, "no one does," because AIs don't have legal personhood.

Wait, what? So if you build a tool that helps you build the final product, the
tool counts as the creator, since the tool doesn't have legal personhood? Do
we argue that easels and spirographs don't have legal personhood so if you use
that stuff in your work then no one has the copyright?

Reminds me of the quip: "Dismissing a graphic artist's work because it was
computer-generated is like dismissing Michelangelo's work because it was
brush-generated."

~~~
FigBug
If you used a spirograph to create art, you would own the copyright. If you
created a spirograph that was robot-controlled and could automatically create
art, then nobody would own the copyright. I don't see how an easel could
create art on its own.

~~~
blintzing
If I write a computer program (say a Processing sketch) that generates an
image, who owns the copyright to the image?

~~~
Retric
Unless you're directly influencing the output, you own the copyright to the
program, not the output. Basically, if your code generates a random image you
don't get copyright, which _is_ useful, as generating, say, every English
haiku is actually possible. It's under ~(15,831 syllable candidates) ^ 17.
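
For a sense of scale, you can take the comment's figure at face value (the
~15,831 syllable candidates is the commenter's own estimate, not a verified
count of English syllables) and compute the upper bound directly:

```python
# Upper bound on the 5-7-5 haiku space, assuming the parent comment's
# ~15,831 syllable candidates per slot (that figure is the commenter's
# estimate, not a verified count).
candidates = 15_831
syllables = 5 + 7 + 5  # 17 syllable slots in a haiku

total = candidates ** syllables
print(f"~10^{len(str(total)) - 1} possible haikus")
```

Enormous, but finite - which is the point: "possible" here means enumerable
in principle, not that anyone would ever store the results.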

------
imh
I think it's a much more interesting question to ask who owns the output if it
is trained on copyrighted data. It works by iteratively tweaking the image to
look more like whatever it looks most like. It's not hard to imagine that a
simpler model that has only seen one picture of a dog might iterate a mildly
dog-like image into exactly that training image. Clearly that output should
belong to whoever owns the original photo. So then what about the grey areas
beyond that simple thought experiment?
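
The one-picture thought experiment can be sketched directly. This is a toy
stand-in, not the actual Deep Dream code: the "model" below has memorized a
single training image, and "looking more like whatever it looks most like" is
reduced to gradient descent on squared error against that image (all names
are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "model" that has effectively memorized one training photo.
training_image = rng.random((8, 8))
# A mildly dog-like starting image.
image = rng.random((8, 8))

# Iteratively tweak the image toward what it "looks most like" -- here,
# plain gradient descent on the squared error to the memorized photo.
for _ in range(500):
    grad = 2.0 * (image - training_image)  # d/d(image) of ||image - target||^2
    image -= 0.05 * grad

# After enough steps the output is a near-copy of the training image.
print(np.abs(image - training_image).max())
```

With one image in the "training set" the output converges to that image
almost exactly - the copyright concern in its starkest form. The grey areas
start when millions of images each contribute a little.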

------
jfoster
In the eyes of the law, wouldn't the images be considered derivative works of
all the data the Neural Net was trained with?

~~~
tantalor
And the input image.

------
rl3
Obviously presuming Deep Dream images to be the output of a strong AI or an
otherwise sentient being is absurd, but it does raise a very good point.

That being: where do you draw the line in terms of what constitutes electronic
sentience? The short answer is nobody knows. We only have vague ideas, and for
all we know those may be completely wrong.

It becomes difficult in that the cognitive architecture of artificial
intelligence is not bound by biological limitations, and thus may be quite
alien in nature relative to mammalian cognition. As such, our existing ethical
frameworks may be completely unsuitable if applied to artificial beings.

What I personally find unsettling about the Deep Dream images is their
striking visual similarity to what a human sees on hallucinogenic drugs. The
underlying algorithms are themselves heavily rooted in biological
intelligence. It's not a huge leap of logic to say that if you were
somehow able to excise part of a living human's visual cortex and hook it up
such that it could receive input and generate output, you might see similar
results.

It's also worth noting that attempting to grow human brain tissue to any large
degree of scale or complexity within a laboratory environment would be
considered a highly unethical horror show, despite the fact that we almost
certainly have the technology to do such a thing today.

By the same token, one could do the exact same thing in digital form and the
perception would be entirely different. This brings up a lot of issues,
including the possibility that artificial agents of the future could
potentially suffer in silence on unimaginable scales, double standards with
regards to artificial versus biological intelligence, as well as fundamental
questions about what constitutes artificial suffering and sentience in the
first place.

------
vernie
Do blur filters have legal personhood?

~~~
jtmcmc
Exactly. Calling this NN AI (as in general / strong AI) is ludicrous.

------
visarga
A neural net is just software, like Photoshop. Who owns the images created
with Photoshop? The one who is paying for the electricity.

------
rm999
I tried reading that article, but for a topic that I find interesting
(especially as someone who has studied AI and machine learning) I hoped it
would be more approachable and easier to read. Can anyone explain the last few
sentences for me?

>The increasing sophistication of generative software and the reality that all
creativity is algorithmic compel recognition that AI-authored works are less
heterogeneous to both their human counterparts and existing copyright doctrine
than appearances may at first suggest. AI authorship is readily assimilable to
the current copyright framework through the work made for hire doctrine, which
is a mechanism for vesting copyright directly in a legal person who is
acknowledged not to be the author-in-fact of the work in question. Through
this legal fiction, the machinic creativity of generative code can be
recognized for what it really is—something other than (but owing to) the human
creativity of its coder

~~~
sp332
AIs should be considered legal "persons".
[https://en.wikipedia.org/wiki/Legal_personality](https://en.wikipedia.org/wiki/Legal_personality)
This is not treating them like humans, but for example businesses can be legal
persons as well. It's an entity that can own things and enter into contracts.
Instead of making some new, complex law about AI owning copyrights, we can
recycle the "work-for-hire" doctrine.
[https://en.wikipedia.org/wiki/Work_for_hire](https://en.wikipedia.org/wiki/Work_for_hire)
That's where a person makes something (e.g. a book) as their job, and the
person who hired them ends up owning the copyright.

That way, the AI is "hired" to produce images for the operator, and the
operator owns the images.

------
dlss
This analysis is insane. It's just a very advanced photoshop filter.

------
visarga
And this comment is copyrighted by my "Left and Right hands", because it was
them that typed, letter by letter. You know, my hands are so smart, they have
lots of sensors and neural networks in them.

