I think it is not all that difficult to argue that math is art.
Sure, some questions posed by mathematicians (such as "Is God's number 20?") can be proven by computers with rote brute force, but the actual creative process of playing with a given system, adding and removing constraints, and seeing what emerges is very much a creative and artistic process.
One of the easiest ways to see this is to read a good math paper. They're rare, but a good piece of math can spark the same sort of feelings that a good piece of art does.
But I mean it in the sense of sensory inputs that evoke certain emotions (awe, fear, joy...) in the human brain - that should encompass most things people immediately associate with the term.
I personally think the definition in Wikipedia captures my sentiment quite well:
> Art is a diverse range of human activities in creating visual, auditory or performing artifacts (artworks), expressing the author's imaginative, conceptual ideas, or technical skill, intended to be appreciated for their beauty or emotional power.
Since I limited my discussion to art meant to be consumed for "consumer-grade" entertainment purposes, I'd focus more on the last part. I just think that it is possible, maybe even likely, that by using ML, we may at one point be able to "pinpoint" what humans perceive to have "beauty and emotional power" and generate that. Have the networks learn the same rules that artists learn indirectly, so to speak. And while this is probably a really hard problem, I don't see why this should be the "last" problem to be solved - we seem quite a bit closer here than we are in many other domains.
If we ever have strong AI, reverse engineering human nature and an AI having the experiences of a human lifetime seem to be in the same ballpark of difficulty.
Of course, human nature may be passé at that point, especially for AI readers, and ersatz poetry more popular (just as electronic instruments have displaced "real" physical instruments in popular music).
So even the author does not know if they are human or AI?
This means that if that happens, a LOT of other changes will be occurring as well. And it is unlikely to happen in the next decade.
Like royal families today.
I think the last jobs remaining will be the cleaning jobs. House cleaning, for example: try having a robot dust your precious glasses collection, or just fold your clothes.
What scares me is when AI start to generate "art" for other AI.
My heart, why come you here alone?
The wild thing of my heart is grown
To be a thing,
Fairy, and wild, and fair, and whole
It seems to me that there could be flashes of sentience in technology that occur well before that time... before we understand that there's someone listening. That would be a lonely lot indeed.
In the story, a poet AI has been produced, and it's put to the test by asking it to compose poems on extremely specific subjects and following almost impossible rules. For example:
"a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter s!"
Here you can find the requests and the machine's results:
I wonder how long till we'll be able to do it for real.
Meaning, on the other hand, is a much larger vector to handle, and that's the real test of quality here. The OpenAI GPT-2 seemed to have meaning nailed down -- these poems clearly do not.
That would involve human labor. The NN learns that by itself from having enough data thrown at it.
> The OpenAI GPT-2 seemed to have meaning nailed down -- these poems clearly do not.
This is derived from GPT-2-small, so we already know that the state of the art is better than what we see here.
And there is so much that could be done. I have a laundry list here: https://www.gwern.net/RNN-metadata#improvements
I'd like to suggest that you also strip the ends of the books, as they also contain boilerplate. In addition, I'd suggest stripping out introductions. The early results you got from Shakespeare sounded like they may have been taken partly from the introductions, which weren't written by Shakespeare at all, but by much later authors.
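To make that concrete, here's a minimal sketch of the kind of preprocessing I mean, assuming Project Gutenberg-style e-texts with the usual START/END markers (the exact marker wording varies between files, so treat the regexes as illustrative, not exhaustive):

```python
import re

# Illustrative patterns for the standard Project Gutenberg markers; real
# files vary, so these regexes are a sketch rather than a guarantee.
START_RE = re.compile(r"\*\*\* ?START OF TH(IS|E) PROJECT GUTENBERG EBOOK.*?\*\*\*", re.I)
END_RE = re.compile(r"\*\*\* ?END OF TH(IS|E) PROJECT GUTENBERG EBOOK", re.I)

def strip_boilerplate(text: str) -> str:
    """Keep only the body between the START and END markers, if both exist."""
    start = START_RE.search(text)
    end = END_RE.search(text)
    body = text[start.end():end.start()] if start and end else text
    return body.strip()
```

Introductions and editorial front matter sit inside the markers, so they would still need a second pass (e.g. skipping everything before the first chapter heading), but even this step removes the license text at both ends.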
I also noticed that you ran out of memory at one point and reduced the neuron count as a result. You might want to consider doing some quick runs on AWS (or one of their competitors), where you can get plenty of memory (and also faster machines). That way you won't have to compromise your NN architecture for lack of resources.
Something else to consider is using some other optimization techniques like GA or GP to optimize the NN architecture or NN parameters, and also to maybe have multiple NN's vote on the results. Such metaheuristic and ensemble techniques have shown promising results.
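For the voting part, a minimal sketch of majority voting over several independently trained models (the models here are stand-in prediction functions, purely for illustration):

```python
from collections import Counter

def ensemble_predict(models, x):
    """Return the label that the most models agree on for input x."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Toy stand-in "models" that disagree on odd inputs:
models = [lambda x: x % 2, lambda x: x % 2, lambda x: 0]
```

For generative models like these poetry NNs, voting is less direct than for classifiers; one common adaptation is averaging the per-token output distributions of several models before sampling.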
Yet another thing to consider is using something called Dynamic Subset Selection to effectively train on the most difficult portions of the training data. I have not used this technique with NN's, but it's worked well with GP, and saves a lot of time.
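The core idea can be sketched in a few lines: each round, bias the training subset toward the cases the model currently gets most wrong, while keeping a random slice for coverage. The `losses` values would come from evaluating the current model; everything here is a sketch of the selection step only, and the 80/20 split is an arbitrary illustrative choice:

```python
import random

def select_subset(examples, losses, k, hard_frac=0.8):
    """Pick k training cases: mostly the highest-loss ones, plus a random refresh."""
    ranked = sorted(range(len(examples)), key=lambda i: losses[i], reverse=True)
    n_hard = int(k * hard_frac)
    hard = [examples[i] for i in ranked[:n_hard]]          # hardest cases
    rest = [examples[i] for i in ranked[n_hard:]]          # everything else
    return hard + random.sample(rest, k - n_hard)          # random refresh
```

Recomputing `losses` every generation is what makes the subset "dynamic": as the model masters cases, they drop out of the hard set and new ones rotate in.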
There are a lot of hyperparameter optimization methods, but HO is only worthwhile if you can afford a lot of runs and usually delivers relatively small gains compared to scaling up your model/dataset. Right now, it seems like it would be a better approach to continue scaling up the Transformer and/or switching to Transformer-XL than it would be to attempt hyperparameter tuning of GPT-2-small finetuning training.
The Emperor Wu (the great Wu)
Use it as a hook and claim the first hip-hop song by a major artist co-written with an AI.
The bard does so brilliantly (though, of course, it's really Lem himself who wrote it).
 - https://en.wikipedia.org/wiki/Cyberiad
 - https://www.cse.wustl.edu/~jbuhler/cyberiad.html
Is that an intrinsic issue of the NN or with how information is extracted from it?
Also Ravana’s jaw? Lol. Try Raktabija