Let's not get carried away. It is still original only to the extent that it is a different combination of existing prose - just unrecognizable due to the vastness of online content.
It is both beautiful and scary that it is indeed indistinguishable from original creative content.
No, you're wrong. It's literally creativity. Machine learning models work precisely because they fill in gaps where the data doesn't exist.
Any content generated by ML is very unlikely to be anything other than original.
Imagine linear regression. Then pick a random point on that line. How likely is it that the point you picked coincides with an actual data point? Very unlikely.
ML is simply a multidimensional version of this. Some 1000 dimensional surface and the result you see is simply a point on that surface. The likelihood of that point on the surface to coincide with an actual data point is astronomically low.
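To make the analogy concrete, here is a minimal Python sketch (numpy only, with made-up toy data): fit a line to noisy points, sample a random point on the fitted line, and check whether it lands on any of the training points.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=50)   # noisy observations

slope, intercept = np.polyfit(x, y, deg=1)            # fit the trend line

x_new = rng.uniform(0, 10)                            # random point on the line
y_new = slope * x_new + intercept

# Does the generated point coincide with any actual data point?
coincides = np.any(np.isclose(x, x_new) & np.isclose(y, y_new))
print(coincides)  # almost certainly False: the model's output is "original"
```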
Nobody is being carried away here. What you are seeing is raw creativity by an AI. It is a highly simplified version of human creativity, but the deep fundamentals are identical.
In fact all intelligence is simply a huge surface in thousands of dimensions or more: a giant curve-fitting methodology. The only gap between ML and human intelligence as of now is the number of neurons, the training algorithm, and the actual template for the equation describing the model. But in essence we have the fundamentals down at a high level.
I agree with what you say, and I would add that humans benefit from embodiment. AIs can't experiment like us; we discovered the sciences by investigating our complex environment, while they have MuJoCo and ATARI. We're also a whole society; AIs are usually individuals in isolation. We benefit from evolution, not just from learning, while AIs don't have dicks. It takes a whole branch of industry to make the chips they run on.
That doesn't mean AIs can't have robotic bodies or access complex simulations; they could also have a society or be integrated into ours, and evolutionary techniques could be part of that process. For example, AlphaGo got a good enough environment and evolutionary selection of agents, and it surpassed human abilities very fast. It's only a matter of time until they can have all our advantages.
But that's how humans learn language to begin with.
I mean, look at this sentence.
> But overall, I think the point of the article is that the AI was able to produce something fairly coherent on its own.
The AI not only combined words. It figured out the context of the conversation and the intention of the comment it's responding to, and made an appropriate response in the English language.
This is true. But here's a thought experiment: what would happen if, say, an illustration-drawing AI (just a more fitting example than prose) were trained only on art from before the 20th century? It would surely excel at 19th-century or medieval-style illustrations, and yes, it would produce original ones. But would we ever get a 20th-century art style from it? By which I obviously mean even an alternative 20th-century style. There are different tiers, or layers, of creativity; that's my point, I guess.
Is this fundamentally different from training a human exclusively on the same and expecting them to advance the field of art by a hundred years? These sorts of progressions in humanity are slow and steady with occasional exceptions that produce leaps.
They may be slow, especially by machine standards, but they do occur. Their pace isn't really relevant to my point.
And humans are trained on the same; unlike AI, we don't have any external supply of art to rely on : ) At any given time, all we have is what we've already created so far
I would actually say that humans have an enormous extra set of data that we, as people, are "trained" on. We walk around in our daily lives, seeing things constantly, and that influences our perception of art. Art is always a product of the broader context it was made in (social, environmental, etc). Something that gets accepted or praised today might very well not have been 200 years ago.
One of the interesting things about these new big models is that they dramatically broaden the context in use. The models learn both the textual representation of a concept and the artistic/visual representation, as well as the relationship between the two domains.
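As a rough illustration of that "relationship between the two domains" idea, here is a minimal CLIP-style contrastive objective; the encoders and the image/caption pairing are assumptions for the sketch, not any particular model's actual training code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # image_emb, text_emb: (batch, dim) embeddings of matching image/caption pairs
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature   # pairwise similarities
    targets = torch.arange(logits.size(0))            # the i-th image matches the i-th caption
    # pull matching pairs together, push mismatched pairs apart, in both directions
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```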
That's true, but this is exactly the insurmountable limitation of AI's creativity.
It can generate art out of art. It can do it exquisitely well. But it lacks experiential/social inspiration. For the AI, that component can only come from recycling.
(Without the atmosphere of industrializing 19th-century Central Europe, we have no Kafka. You can't simply generate Kafkaesqueness out of pre-existing literature.)
And that's why I refuse to believe AI can be creative in a meaningful sense of the word.
Obviously it's not the AI's "fault", so to speak, but that's kind of beside the point :)
PS. I can imagine - now we're going far into the realm of sci-fi - truly sentient AIs producing art that's genuinely creative. Art actually stemming from a self-conscious AI's psychological experience. But then it would probably be utterly incomprehensible to us : ) "If a lion could speak, we could not understand him", as the philosopher remarked.
The current generation of AI can already generate original art; if provided with feedback from the wider population, it would rapidly develop original style(s). This is because AI was made as a tool and it functions as a tool. Nobody has trained an AI to advance art styles on its own.
On the contrary, I wouldn't expect 19th-century artists locked in their rooms to produce a new cohesive art style. You can't even get artists to reliably create 19th-century art at scale. They quickly get sidetracked and keep complaining about the bars on the windows.
Generation of a "style", even for humans, requires a feedback loop where what was created is incorporated, mimicked, and expanded upon.
An AI whose training only rewarded images that mimicked 19th-century art would never develop a "style" outside of that. But if you took a model trained on 19th-century art, provided a critic network that rewarded 20th-century art (as well as 19th-), and re-trained the network, I suspect it would develop a 20th-century "style", as variations that would previously have been rejected under strict 19th-century criteria are now accepted.
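Roughly like the following sketch, with hypothetical `generator` and `critic` networks standing in for the models described above; the critic's score stands in for "accepted under the broadened criteria", and the update is a simplified REINFORCE-style step, not a specific published method.

```python
import torch

def retrain_with_broader_critic(generator, critic, optimizer, steps=1000, batch=16):
    for _ in range(steps):
        z = torch.randn(batch, generator.latent_dim)   # random variations
        images, log_prob = generator.sample(z)          # assumed sampling API
        reward = critic(images)                         # high for 19th- OR 20th-century style
        loss = -(reward.detach() * log_prob).mean()     # reinforce variations the critic now accepts
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```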
And it's this expansion aspect that seems crucial to me.
Endless rehashing of inputs can lead to seemingly creative outcomes, but that doesn't mean any actual expansion occurs. New works are created/discovered, but are new underlying principles discovered?
I recall seeing a Two Minute Papers episode about an AI built to be curious, or specifically, to explore routes in a maze where it couldn't predict the outcome.
The easiest way to block it was to present an animated wall. The AI would just stare at it and not go anywhere.
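For reference, the curiosity mechanism being described is usually prediction-error based, roughly like this sketch (the `forward_model` is an assumed placeholder): the agent is rewarded wherever its prediction of the next observation fails, which is exactly why an endlessly unpredictable animated wall keeps paying out reward and traps it.

```python
import torch
import torch.nn.functional as F

def curiosity_reward(forward_model, state, action, next_state):
    # forward_model predicts the next observation from (state, action)
    predicted_next = forward_model(state, action)
    # intrinsic reward = how surprised the model is by what actually happened
    return F.mse_loss(predicted_next, next_state, reduction='none').mean(dim=-1)
```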
The generation side is actually probably easier than the critic side. I don't know of any network that is capable of determining whether something is aesthetically pleasing in an abstract sense; instead, they always compare to known examples.
Humans do the same - recombine the same limited vocabulary in new ways to address new goals. We rarely invent original meanings that could not have been expressed before.
I believe the point of the original comment is to make AI that comes up with its own original style, not one that "reinvents" an already existing style after first being trained on a different already existing style.
1) I question this premise. People who revolutionize a field don't do this with feedback from the public. They do it by having a very unique perspective that they bring into being by sheer force of will - usually against great pushback.
2) If we assume your premise is true, the public does not have a preconceived notion of what the art should be like. Presumably you would show them a style of art that they have never thought about before and they would respond positively to it.
The fact that we can say a person revolutionized the field is specifically because their work, at some point in time, got recognized by a larger audience. The fact that an artist had pushback is also context that is used to determine something is revolutionary; you cannot have subversive art if there is nothing to subvert.
I don't agree. For example, could Shostakovich's symphonies have been composed in a world without WW2 and Stalin? Those were major factors influencing what people valued in an artistic experience, as well as influencing the composer directly.
Bach composed for the religious faithful, Mozart for the nobility / people who could go to the opera, Beethoven for the romantic middle class who owned a piano, and Shostakovich for the oppressed masses in Soviet Russia. The historical context shapes artistic criteria.
You can't make new art styles in a vat (brain in a vat), the process is connected to the world.
You can turn that around - why didn't 19th century painters use 20th century styles?
Art evolved in relation to social and historical circumstances. At the very least you would need to give it all the context of the 20th century beforehand (except for the art itself).
I don't know how AIs work or what their limitations are, but I'm pretty sure "we" tend to think much more highly of ourselves than we actually deserve. Humans like to perceive themselves as sacred, inscrutable beings, while mostly we just reshuffle what's already there, produced by natural complex processes. Maybe AI can't show anything remarkably innovative (yet?), but that also applies to the average you and me, who have literally lived this life for so long. I doubt that anyone taught solely from a part of an internet snapshot could compare to the best [large groups] of us, or to these AIs at least. Pretty sure we'd just be cavemen drawing recognizable pictures, not geniuses.