void feel(x: Noun)
void feel(x: Adj)
I feel[a] hunger, and I feel[b] sad. I feel[a or b, but not both] hunger and sad.
[a] transitive verb; a physical sensation, acting on the object (in this case, "hunger")
[b] intransitive verb; an emotional state, described by the adjective (in this case, "sad")
EDIT: Clarity. Also, IANAL (I am not a linguist), and generally you don't want to take linguistic advice from an engineer like me.
Something about it being the same number of words and the least letters' difference; I can't rule out the possibility of a 'y'/'e' error.
When these neural networks get an input image and spit out labels like "bird" or "car", they haven't actually recognized which parts of the image are a car, nor what pieces it's made of. Instead they have memorized some textures and simple shapes which go with the label. It provides the kind of knee-jerk reaction that allows your brain to make you jump when a large object approaches fast, or think there's a tiger hidden in the dirty laundry when you turn around in a dark room.
That's why, when you reverse the process, it doesn't create meaningful images, but clumps of relatively common textures found in the training set. It lacks the hierarchy of concepts that allows you to identify objects and distinguish them from the background, which a baby learns in their first two years.
So all of these garbage outputs get classified as cars, because they fall in a region of input space the network has no real information about.
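The "reversing the process" idea above can be sketched in a few lines. This is a toy, linear stand-in of my own (not the actual networks under discussion): take a fixed "car" scorer and do gradient ascent on the input to find the image that excites it most. What you recover is just the detector's own weight template, its favorite textures, not a coherent car.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 16
# Fixed "learned" weights standing in for memorized texture detectors.
w_car = rng.normal(size=n_pixels)

def car_score(x):
    # Linear "logit" for the class "car".
    return w_car @ x

# Start from faint noise and climb the score's gradient.
# For a linear scorer, d(score)/dx is just w_car.
x = rng.normal(size=n_pixels) * 0.01
for _ in range(100):
    x += 0.1 * w_car

# The recovered "image" is essentially a scaled copy of the weights.
corr = np.corrcoef(x, w_car)[0, 1]
print(round(corr, 3))  # close to 1: the input has collapsed onto the template
```

Real feature-visualization work does the same climb through a deep nonlinear network, which is why the results look like clumps of training-set textures rather than objects.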
The AI must be pretty close if they can already match the output of a confused human brain.
So, yes, it's no wonder that they're alike in some ways, and it's disturbing, too.
Does such a picture even exist? One that I would see once and then not sleep for a week? Or that would, upon seeing it, make me start crying without explanation?
Seeing something like this created by AI would be very impressive: "prepare to cry when you see this picture (guaranteed!)"
Most kids are able to correctly identify things in their environment by age 1-ish, since they're using words to describe those things around that age. Specially trained AIs are about at that level. Based on the difficulty of captcha tasks, I think it's safe to say a 1yo could identify cars, signs, and store fronts about as well as Google's AI (were it actually in that situation, the 1yo probably couldn't conceptualize that some of the pictures in an on-screen grid represent cars or whatever). That's the level we're at, and that's with AI that's specially trained to recognize those things.
Having jokingly brought that up, it's the best reason for hybridizing implants. "We should all just learn to get along" still applies.
This could fall under process, or generative art.
On a side note: a vast portion of art has been purposefully void of meaning for hundreds of years. "l'art pour l'art"
In addition to the work itself, the framing or presentation is essential to complete the artistic product. Printing the images out on nice material and framing them would help.
John Cage’s 4’33” is (IMO) a brilliant performance piece. It requires the context of a stage, of live musicians to be effective. It doesn’t work so well as a recorded piece!
Computer generated works don't have that, or any message.
The process of computer-generating a number of pieces might itself be some form of art, but more in the sense of the art of programming than the art of painting.
I think these are art, since they affect me in the ways which I typically associate with art.
Art has a quality that is not merely personal and subjective. Interpretation is personal and subjective, but if art were as well, then anything would do, because there would be no objective-ish yardsticks to help judge between two random pieces. Postmodernism tried to explore the boundaries of how little is needed for the subject still to be art, which interestingly can be considered an art in itself. But the underlying current even there is that there is a human idea, motivation, or vision behind the art that is created. Computer generated pieces lack that; however, the process of computer generation probably does qualify as art. If an artist used a computer to generate images and made collages out of the selected generated images, that would be art, much like Warhol created art out of emblems of ordinary or entertainment figures.
I think the fundamental question here is why art requires a "human" element to actually be art, and what that "human" element actually is.
My opinion is that humans aren't really as special as we want to think we are, and that fundamentally there's no real difference between how humans create "art" and how machines create "art"; in the end, we're both drawing on our inputs, applying some subjective evaluation (possibly based on other inputs), and producing a corresponding output. How that "subjective evaluation" step happens might seem different enough to warrant different classifications, but I'm not really convinced it actually is.
Conversely, we can only understand, for example, animals' art to the very limited extent we understand animals: that is, we don't really have much of a clue if an animal doodling with pebbles in the sand is making art and trying to make a beautiful (to him) arrangement of stones, or just moving pebbles around for fun.
If we saw alien art, would we even know if it's art or something else? We would have to know the alien species, their culture, and how they live in order to even make educated guesses.
It might be that the concept of art carries over across species, but we can only see in other species' art what is similar to our own.
If a monkey draws a monkey face we can understand that because we have similar eyes and we could draw a human face. But if we don't share even basic senses with some entity we can't possibly understand what visual art might be for them.
So, a machine producing "art" is something we can only understand as a reflection of what a human would produce. We judge the machine's output as if it were created by a human. But because we know it's not human-made, just a compilation of random values and preset rules, it lacks the context we might call the human element and becomes void to us.
At a minimal scale, the "human element" could even just be the story of the person who painted the image, even if the image is simple and not very skillfully refined. But if his paintings are a testament to how, after great hardships, he ended up living on a small island and started painting, then we can reflect on his story by looking at his pictures and try to see what parts of his life might be captured on the canvas.
Whether largely unintended consequences or not, the abstract forms of the man in the tie are interesting enough to not look out of place in a gallery, and the renderings of sheep and clocks are clearly the products of a mind (even though it's a human one) trying to get a response from the viewer attuned to the idea of androids dreaming of electric sheep, melting clocks being an iconic surrealist thing, and failure to interpret "one (1) single clock" being funny. The grassy hillside, sans sheep, not so much. A quick play with the algorithm suggests a lot of its outputs are unrecognisable forms with no correspondence to anything, so he'd already thrown away a lot of uninteresting results to get to that point.
To intentionally use an exaggerated example: I can urinate on a wall, tell nobody else about it, give it no further meaning or consideration, have done it for no particular reason other than my need to urinate, and call it art (regardless of whether the setup premise is offensive to some).
I can look at this image from the site:
And I can decide that is art which represents my emancipation from indentured servitude in another life when I was a coal miner for BigCorp on Mars. Or any other seemingly ridiculous meaning I choose to give it.
Pissing on a wall and arbitrarily calling it art does not make it art. That style of thinking worked during the Dada movement https://en.wikipedia.org/wiki/Dada but is no longer something that would work today, unless you were doing a live art installation, in which case you are committing to some kind of commentary on some factor of the world anyways.
Artists almost always create art with meaning, and in the rare cases they don't, they are still attempting to make genuinely thought-provoking works of art, even if it's simply showing off their skills. If you go to an art gallery there is always the artist's thesis statement or message written out at the entrance.
You are correct that art is subjective though. What an artist envisions and what their audience gets out of it has been shown time and again to not line up in many cases. And in a way, that is the beauty of art - our shared experiences while appreciating it.
These AI pictures would wind up being called art if someone purposefully produced them as such and if necessary, provided some meaning or context to them all.
I get what you're trying to say. That sort of thing isn't common in contemporary art, especially not in a gallery setting. But that's a tiny portion of all the art in the world.
For example - I know an artist in Tucson who drinks his own piss. That's the performance. He might have a personal meaning behind it, but he doesn't explain it to anyone. And he does plenty of other meaningless things intentionally.
He's not world renowned or anything, but he is known among some artists in Tucson, he considers himself an artist, and that's how he makes his living. He also has an art degree.
> You are correct that art is subjective though.
That's my argument against (most) graffiti.
Personally, I think that a helluva lot of modern art has a high BS level, and welcome the uproar that a computer could bring to it.
I'll also add that scarcity will be simply built-in. Most folks won't initiate such art - and such art has to be initiated. There will still be a personal selection process going on with a human behind the scenes. Just because the computer can make so many doesn't mean we'll see that many at all. I imagine it will eventually edge out some art forms: Making advertising graphics for movies and print, for example. Making logos. And so on.
Up one meta-level. The artwork isn't the sonata, or the portrait, or the sculpture. The artwork is the algorithm and input data.
It's the same logic with e.g. books: if there are 100 great books, people are going to love those books and some subset of people are going to find a book and say, "wow, this book was really meant for me". If there are 1,000,000 great books, then 1) there's going to be a wider selection for people who enjoy more niche aspects in books who wouldn't otherwise find those, and 2) there's a much higher chance of one of those million books really resonating with any random individual.
The real struggle is in curation and recommendation: which of these million books would John Smith _most_ like (ideally more than, say, the original 100 books), instead of requiring him to peruse through them all on his own.
"snakes on a plane"
"snakes with a plan"
> What always fascinated me is how those images look almost exactly like the hallucinations you get on some psychedelics... The AI must be pretty close if they can already match the output of a confused human brain.
> Traditionally, with computer generated stuff, you could clearly see the math in the algorithms (the sine waves and fractals and whatnot). With AI generated stuff it looks... natural... It's a computer no longer letting you see how he thinks.
The paper "Deep Image Prior" by Dmitry Ulyanov et al. gives compelling evidence that the structure of convolutional neural networks already encodes strong knowledge about the appearance of natural images, independent of any specific parameters (learned weights). Independence from parameters means it's independent from what task the network was trained to accomplish, and of the training algorithm.
This helps explain (IMO) why a neural network with "wrong" weights (meaning, the training process did not fully meet the goal of the project) still produces images that look like plausible activations of the human visual cortex, rather than harsh mathematical patterns. The convolutional network structure is biased towards natural-looking images.
third-party blog post: http://mlexplained.com/2018/01/18/paper-dissected-deep-image...
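The structural-bias claim can be illustrated with a tiny, one-dimensional analogue (my own sketch, not the paper's code): convolution alone, with random, never-trained weights, already pushes outputs toward spatially correlated, "natural-looking" signals. I use the absolute value of the random weights here so the smoothing effect is easy to measure.

```python
import numpy as np

rng = np.random.default_rng(1)

noise = rng.normal(size=256)          # 1-D stand-in for a white-noise image
kernel = np.abs(rng.normal(size=9))   # random, untrained convolution filter
out = np.convolve(noise, kernel, mode="same")

def lag1_corr(x):
    # Correlation between neighboring "pixels": a crude smoothness measure.
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(f"noise lag-1 correlation: {lag1_corr(noise):.2f}")  # near 0
print(f"conv  lag-1 correlation: {lag1_corr(out):.2f}")    # clearly positive
```

The input is maximally unnatural (independent pixels), yet the convolved output has the neighbor-to-neighbor correlation that natural images exhibit, before any training has happened. Deep Image Prior stacks many such layers and exploits exactly this bias.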
The CAN images in that same Christie's article are much better, quite beautiful. But the author of CAN is full of shit in this interview answer:
> ‘An interesting question is: why is so much of the CAN’s art abstract? I think it is because the algorithm has grasped that art progresses in a certain trajectory. If it wants to make something novel, then it cannot go back and produce figurative works as existed before the 20th century. It has to move forward. The network has learned that it finds more solutions when it tends toward abstraction: that is where there is the space for novelty.’
The algorithm has grasped that art must move forward, so it paints abstract?! Or could it be that feeding random numbers into a black box neural network algorithm is never going to give you a human likeness... No, it must be that the AI just doesn't want to be Rembrandt.
With bullshit meters at this level, the next AI winter must be just around the corner.
It's hard not to anthropomorphize the neural net and feel a little pity/laughter at its struggle to paint even remotely accurate pictures.
- we must not give it any decision-making abilities, because it will fail terribly on some mundane task
- SkyNet is not for tomorrow, or the day after
Then it was quiet again. My attorney had taken his shirt off and was pouring beer on his chest, to facilitate the tanning process. “What the hell are you yelling about?” he muttered, staring up at the sun with his eyes closed and covered with wraparound Spanish sunglasses. “Never mind,” I said. “It’s your turn to drive.” I hit the brakes and aimed the Great Red Shark toward the shoulder of the highway. No point mentioning those bats, I thought. The poor bastard will see them soon enough.
Which makes me wonder how useful it would be to use different languages for teaching ML about the world.
Maybe an understanding across different languages might help it differentiate between objects with more accuracy? Though I'm probably making this sound far simpler than it actually would be.
If it can compare between different language models, then the likelihood of interpreting things the right way could be increased. Google seems to be doing something like this for Google Translate, using Bible translations.
> a dog on a bun on dog on a bun on a dog on a bun on a dog on a bun on
or maybe I've completely misunderstood it and in a way it's passed my art turing test?
Visual information processing is the reasoning skill that enables us to interpret meaning from the visual information we gain through our eyesight; perhaps this is not the same for all of us, and this AI example illustrates that well?
Change http://aiweirdness.com/post/177091486527/this-ai-is-bad-at-d... to http://aiweirdness.com/post/177091486527/this-ai-is-bad-at-d... and it actually loads and looks proper.
I .. ended up with way more than would
fit in this one blog post ..
Enter your email and I’ll send you them
(and if you want, you can get bonus
material each time I post).
Probably doesn't qualify as a full-on dark pattern, but I am annoyed enough to say that I dislike this approach.
Is this Web 2.0, 3.0, or higher? I'd like to go back to the 1.0.. anyone got a D/L link so I can reinstall the good version?
Given this, you certainly don't have to subscribe to get more of these pics.
It's just someone trying to build an email distribution list for their (side?) gig. Feel free to enjoy the content without opting in.
How is that not shady marketing? This is bad press. Bad article. This is killing reader morale. We should not encourage people like this to build their gigs. I felt I wasted my time reading this in the first place.
Because you’ve already seen the good content! This blog highlights the most amusing/weird examples of stuff generated by AI. This particular post has 17 - I counted - examples. They’re not presented as a listicle or one-by-one slides, there’s not an advert between each one. This is exactly the sort of content we should be encouraging.