I was expecting some clickbait/spam (the layout of the website has that feel) but this was surprisingly super in-depth and 100% matches up with my experience doing prompt engineering.
There's a fine line between being so descriptive that the AI hits an edge case and can't get out of it (so every attempt looks the same) and not being descriptive enough (so you can't capture the output you're looking for). DALL-E is already incredibly fast compared to public models, and I can't wait for the next order-of-magnitude improvement in generation speed.
Real-time traversal of the generation space is absolutely key for getting the output you want. The feedback loop needs to be as quick as possible, just like with programming.
As someone who makes very weird and experimental stuff, I find DALL-E is like a Segway and CLIP is like a horse (especially with those edge cases that tend to compound/get worse if you aren't clever). It's a shame compute costs aren't much different between the two (correct me if I'm wrong) - I don't think there is much of a purely artistic process with DALL-E, although I do like to use DALL-E Mini thumbnails as start images or upscale testers.
>Real-time traversal of the generation space is absolutely key for getting the output you want.
I've been sketching out a two-person browser game where a pair of prompters can plug things in together in real time :D
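Not claiming this is how that game works, but the shared-session part is small enough to sketch. A minimal, hypothetical relay using Python's websockets package (assuming a recent version; the host, port, and handler are all made up): every prompt fragment one player sends is broadcast to both players, and the front end would then fire the combined prompt at whatever generator they're using.

    import asyncio
    import websockets

    PLAYERS = set()

    async def handler(websocket, path=None):
        # each connected prompter joins the shared session
        PLAYERS.add(websocket)
        try:
            async for fragment in websocket:
                # relay every prompt fragment to everyone, including the sender
                websockets.broadcast(PLAYERS, fragment)
        finally:
            PLAYERS.discard(websocket)

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run until interrupted

    if __name__ == "__main__":
        asyncio.run(main())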
Another interesting thing with prompt engineering is that attempt #1 with prompt x might yield something you don't want, but attempt n might yield something you do :)
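DALL-E doesn't expose a random seed, but with an open pipeline the "attempt #1 vs attempt n" effect is literally just the seed changing. A rough sketch using Hugging Face's diffusers library (the model id, prompt, and step count are just example values):

    import torch
    from diffusers import StableDiffusionPipeline

    # example model id; any text-to-image pipeline with a seedable generator behaves the same way
    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

    prompt = "a lighthouse made of stained glass"  # prompt x stays fixed across attempts
    for attempt in range(8):
        generator = torch.Generator("cuda").manual_seed(attempt)
        # fewer inference steps = rougher images but a much faster feedback loop
        image = pipe(prompt, generator=generator, num_inference_steps=25).images[0]
        image.save(f"attempt_{attempt:02d}.png")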
That's an open-source recreation based on DALL-E 1, so it's different from DALL-E 2. If you want the latter, look for DALLE2-pytorch, but note that it hasn't been fully trained yet.
My prompt of 'penguin smoking a bong' does not disappoint on either, although Hugging Face more accurately portrayed the act of smoking, while Replicate gave me images of penguin-shaped bongs.
Hang in there — I only got my invitation a couple days ago. They're still rolling out invitations at a steady pace. But, just as a side note, one of the first things they tell you is that they own the full copyright for any images you generate.
You definitely have to play around with prompts to get a feel for how it works and to maximize the chance of getting something closer to what you want.
When did you sign up? I just signed up, and it sounds like it takes a year to get access, probably longer now. It's a bit frustrating: I didn't sign up when it came out because I didn't need it at the time, and now that I do, I'm facing a year-long wait. These kinds of waitlist systems encourage everybody to sign up for everything on the off-chance that they might need it later. I wish they'd just gone with a simple pay-as-you-go model (with free access for researchers and other special cases who request it), like Copilot does.
I signed up just over a month ago and from what I've seen, it looks like you won't have to wait more than two months to get your invite. A lot of people who signed up around the same time as me have already received their invites, so it looks like they're speeding things up and getting ready for a public launch soon.
I don't think the provider of an AI image generator service can simply decide they own the copyright to the output; only courts can settle that (and they decided the person who set up cameras for monkeys didn't own the copyright to the monkey photos). Perhaps they can require you to assign the copyright, though the output may not even be copyrightable in the first place.
Courts only decided the monkey couldn't copyright the photo (the PETA case).
The copyright office claimed works created by a non-human aren't copyrightable at all when they refused Slater, but that was never challenged or decided in court. It's not a slam dunk, since the human had to do something to set up the situation and he did it specifically to maximize the chance of the camera recording a monkey selfie.
If I set up a Rube Goldberg machine to snap the photo when the wind blows hard enough, how far removed from the final step do I have to get before it's no longer me owning the result? That's the essence of the case, had it gone to court, and probably the essence here too.
My guess is the creativity needed for the prompt would make the output at least a jointly derived work regardless of any assignment disclaimers--pretty sure you can't casually transfer copyright ownership outside a work for hire agreement, only grant licenses--but IANAL and that's just a guess.
I've played with Midjourney for a while and just got my invite to DALL-E last night. One thing I think is really cool about Midjourney is the ability to give it image URLs as part of the prompts. I can't say I've had tremendous success with it, and it still feels a little half-baked, but I wish DALL-E had something along those lines (unless it does and I'm missing it). It's much easier to show examples of a particular style than to try to describe it, especially if it isn't something specifically named in the AI's training set.
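As far as I know DALL-E doesn't take reference images in the prompt, but the "show an example instead of describing it" idea is roughly what img2img pipelines do. A hedged sketch with diffusers (recent versions; the model id, URL, prompt, and strength value are placeholders):

    import requests
    from io import BytesIO

    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

    # placeholder URL for the style reference you'd otherwise struggle to describe in words
    url = "https://example.com/style-reference.jpg"
    reference = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((512, 512))

    # strength controls how far the result is allowed to drift from the reference image
    result = pipe(prompt="a harbour town in this style", image=reference, strength=0.6).images[0]
    result.save("styled_harbour.png")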
DALLE2 isn't as flexible as the more open Colab notebooks here; you can do "variations" of an image, but you can't edit an image except through inpainting, so it's hard to generate "AI art" style images of the kind Midjourney and Diffusion are good at.
It also won't allow uploading images with faces in them.
I'm also waiting but only put myself on the waitlist recently. I want to use it to generate synthetic image datasets from text descriptions. Very curious to explore the depth of what can be generated.
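For what it's worth, the open pipelines already make the synthetic-dataset idea fairly mechanical. A sketch, assuming diffusers and made-up class descriptions (model id and counts are examples, not a recommendation):

    from pathlib import Path

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

    # made-up label -> text description pairs; in practice these come from your dataset spec
    descriptions = {
        "rusty_bicycle": "a photo of a rusty bicycle leaning against a brick wall",
        "garden_gnome": "a photo of a ceramic garden gnome on an overgrown lawn",
    }

    for label, prompt in descriptions.items():
        out_dir = Path("synthetic") / label
        out_dir.mkdir(parents=True, exist_ok=True)
        for i in range(50):  # images per class; the seed doubles as the file index
            generator = torch.Generator("cuda").manual_seed(i)
            pipe(prompt, generator=generator).images[0].save(out_dir / f"{i:04d}.png")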
I use both too. DALL-E has heavy restrictions. It's basically G-rated, so no horror, and no real-world stuff like "Donald Trump with a mohawk".
MJ falls apart when you ask for fine detail. It's a bit of the AI cliche where you have to describe the colour, shape, etc. in detail to mold what you want. Asking for a "monkey, gorilla, and chimp riding a bicycle" might give you a chimp riding a monkey-gorilla as a bicycle.
DALL-E is a lot better with words. It seems to "smooth" some stuff: asking for a bone axe will still show regular axes.
But MJ is probably the best choice if you want to do landscapes and stuff, especially horror/dystopian themes.
Google's internal models (Imagen and Parti) are much better. It looks like DALLE2 is just not big enough to accurately draw faces, which are very detailed things.
"This person doesn't exist" uses StyleGAN which can definitely do faces, but can't do general pictures.
AFAIK the only lawsuit that tests this so far was a kind of weird case where the programmer was trying to register his algorithm as the creator of the image, as a "work-for-hire". The copyright office's reasoning, however, banged on about the necessity of "human authorship":
> The Office also stated that it would not “abandon its longstanding interpretation of the Copyright Act, Supreme Court, and lower court judicial precedent that a work meets the legal and formal requirements of copyright protection only if it is created by a human author.”
It's as much engineering as SEO is. Though with 'prompt engineering' it's the human brain trying to coax something out of the black box; ironically, an algorithm might be better at generating the prompts after being given points in its parameter space that fit the aesthetic direction the user wants to explore.
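As a toy version of that idea: treat an image the user already likes as the "point in parameter space", generate candidate prompts by combining modifier lists, and let CLIP rank which phrasing sits closest to it. A sketch assuming OpenAI's clip package; the word lists and file name are invented:

    import itertools
    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # placeholder path: an output the user already liked and wants more of
    reference = preprocess(Image.open("liked_output.png")).unsqueeze(0).to(device)

    subjects = ["an abandoned greenhouse", "a flooded cathedral"]
    styles = ["oil painting", "35mm photograph", "isometric render"]
    moods = ["at dusk", "in thick fog"]
    candidates = [f"{s}, {st}, {m}" for s, st, m in itertools.product(subjects, styles, moods)]

    with torch.no_grad():
        image_features = model.encode_image(reference)
        text_features = model.encode_text(clip.tokenize(candidates).to(device))
        image_features /= image_features.norm(dim=-1, keepdim=True)
        text_features /= text_features.norm(dim=-1, keepdim=True)
        # cosine similarity of every candidate prompt against the liked image
        scores = (text_features @ image_features.T).squeeze(1)

    for score, prompt in sorted(zip(scores.tolist(), candidates), reverse=True)[:3]:
        print(f"{score:.3f}  {prompt}")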