A prompt engineering guide for DALLE-2 (dallery.gallery)
242 points by keveman on July 13, 2022 | 62 comments



I was expecting some clickbait/spam (the layout of the website has that feel) but this was surprisingly super in-depth and 100% matches up with my experience doing prompt engineering.

There's a fine line between so descriptive that the AI hits an edge case and can't get out of it (so every attempt looks the same) and not being descriptive enough (so you can't capture the output you're looking for). DALL-E is already incredibly fast compared to public models and I can't wait for the next order-of-magnitude improvement in generation speed.

Real-time traversal of the generation space is absolutely key for getting the output you want. The feedback loop needs to be as quick as possible, just like with programming.


I'm surprised at the artistic skill of the person who wrote the book, in contrast with the terrible web UI skill of the person who designed the site.


Wouldn't surprise me too much if they were the same person but had vastly different amounts of experience with the different media.


As someone who makes very weird and experimental stuff, DALL-E is like a Segway and CLIP is like a horse (especially with those edge cases that tend to self-engorge/get worse if you aren't clever). It's a shame compute costs aren't much different between the two (correct me if I'm wrong) - I don't think there is much of a purely artistic process with DALL-E, although I do like to use DALL-E Mini thumbnails as start images or upscale testers.

>Real-time traversal of the generation space is absolutely key for getting the output you want.

I've been sketching around a two-person browser game where a pair of prompters can plug things in together in real-time :D


Another interesting thing with prompt engineering is that attempt #1 with prompt x might yield something you don't want, but attempt n might yield something you do :)


Great document.

Damn I am salivating to get access to Dall-E for some projects. Been on the waiting list for quite a while.

I've been experimenting with Midjourney, which is amazing for spooky/ethereal artwork, but it struggles with complex prompts and realism.



That's an open source recreation based on DALL-E 1. It's different from DALL-E 2; if you want that, look for DALLE2-pytorch, but note that it hasn't been fully trained yet.


My prompt of 'penguin smoking a bong' does not disappoint on either, although Hugging Face more accurately portrayed the act of smoking, while Replicate gave me images of penguin-shaped bongs.


Replicate is a newer version being trained on the same data set, so it should theoretically catch up soon; no guarantees, of course.


DALL-E mini/Craiyon is fantastic, but it doesn't compare to DALL-E 2 at present when you're talking about photorealism.

That said, some styles (comic book spreads) seem to come out better on Craiyon. And DALL-E 2 does not know what a Crungus is.


Given that Crungus has now entered the Internet, the next version will certainly know what a Crungus is.


Is this significantly different from Dall-E2?


The model is roughly 4 orders of magnitude smaller.


That's a nice way of saying it's 10000 times worse. It's just worlds apart.


Idk, it's pretty damn good at a lot of things, still. It's definitely very useful. Mega, at least. Mini is ok.


Hang in there — I only got my invitation a couple days ago. They're still rolling out invitations at a steady pace. But, just as a side note, one of the first things they tell you is that they own the full copyright for any images you generate.

You definitely have to play around with prompts to get a feel for how it works and to maximize the chance of getting something closer to what you want.


When did you sign up? I just signed up, and it sounds like it takes a year to get access, probably longer now. It's a bit frustrating because I didn't sign up when it came out because I didn't need it at the time, but now I'm afraid of waiting a year when I do. These types of waitlist systems encourage everybody to sign up for everything on the off-chance that they might need it later. Wish they just went with a simple pay-as-you-go model (with free access for researchers and other special cases who request it), like how Copilot does it.


I signed up just over a month ago and from what I've seen, it looks like you won't have to wait more than two months to get your invite. A lot of people who signed up around the same time as me have already received their invites, so it looks like they're speeding things up and getting ready for a public launch soon.


Consider yourself lucky; I signed up in May and I'm still waiting.


I signed up on April 8th and I am still waiting too.


I don't think the provider of an AI image generator service can decide they own the copyrights to it (perhaps they can require you to assign the copyrights, though it may not even be copyrightable?); only courts can decide that (and they decided the person setting up cameras for monkeys didn't own the copyrights to the monkey photos).


Courts only decided the monkey couldn't copyright the photo (the PETA case).

The copyright office claimed works created by a non-human aren't copyrightable at all when they refused Slater, but that was never challenged or decided in court. It's not a slam dunk, since the human had to do something to set up the situation and he did it specifically to maximize the chance of the camera recording a monkey selfie.

If I set up a Rube Goldberg machine to snap the photo when the wind blows hard enough, how far removed from the final step do I have to get before it's not me owning the result anymore? That's the essence of the case, had it gone to court, and probably the essence here too.

My guess is the creativity needed for the prompt would make the output at least a jointly derived work regardless of any assignment disclaimers--pretty sure you can't casually transfer copyright ownership outside a work for hire agreement, only grant licenses--but IANAL and that's just a guess.


DALL-E needs human input to start generating, the monkey pressed the shutter all on its own.


I've played with Midjourney for a while and just got my invite to DALL-E last night. One thing I think is really cool about Midjourney is the ability to give it image URLs as part of the prompts. I can't say I've had tremendous success with it, and it still feels a little half-baked, but I wish DALL-E had something along those lines (unless it does and I'm missing it). It's much easier to show examples of a particular style than to try to describe it, especially if it isn't something specifically named in the AI's training set.


You can upload an image to DALL-E, edit it and add a prompt to it as well.


DALL-E 2 isn't as flexible as the more open Colab notebooks here; you can do "variations" of an image, but you can't edit an image except through inpainting, so it's hard to generate "AI art"-style images of the kind Midjourney and Diffusion are good at.

It also won't allow uploading images with faces in them.
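For what it's worth, here's a rough sketch of that variations-vs-inpainting split as it's exposed in OpenAI's Python client (filenames and the prompt are placeholders, and it assumes API access rather than the web UI):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # "Variations": new images in the spirit of the original; no prompt involved.
    variation = client.images.create_variation(
        image=open("source.png", "rb"),
        n=2,
        size="1024x1024",
    )

    # Inpainting: only the transparent region of the mask is regenerated,
    # guided by the prompt; the rest of the image is left alone.
    edit = client.images.edit(
        image=open("source.png", "rb"),
        mask=open("mask.png", "rb"),  # transparent where the edit should happen
        prompt="a red lighthouse on the cliff",
        n=1,
        size="1024x1024",
    )

    print(variation.data[0].url, edit.data[0].url)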


Just got mine last night. I think they have been scaling up invites over the past few days.


Same here - I got mine 2 days ago. Signed up when it first dropped.


I'm also waiting but only put myself on the waitlist recently. I want to use it to generate synthetic image datasets from text descriptions. Very curious to explore the depth of what can be generated.


Get familiar with CLIP regardless! I have very little interest in DALL-E as an artist/prompter but as a futurist it is quite exciting.
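If you want to poke at CLIP directly, here's a minimal sketch of scoring candidate images against prompts with the Hugging Face transformers port (the model name, file path, and prompts are just placeholders):

    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("candidate.png")  # any generated image you want to score
    prompts = ["a penguin smoking a bong", "a penguin-shaped bong"]

    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # Higher probability = CLIP thinks the image matches that prompt better.
    probs = outputs.logits_per_image.softmax(dim=-1)
    print(dict(zip(prompts, probs[0].tolist())))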


I use both too. DALL-E has heavy restrictions. It's basically G-rated, so no horror. And no real-world stuff like "Donald Trump with a mohawk".

MJ falls apart when you ask for fine detail. It's a bit of the AI cliche where you have to describe the colour, shape, etc. in detail to mold what you want. Asking for a "monkey, gorilla, and chimp riding a bicycle" might give you a chimp riding a monkey-gorilla as a bicycle.

DALL-E is a lot better with words. It seems to "smooth" some stuff, though. Like asking for a bone axe will still show regular axes.

But MJ is probably the best choice if you want to do landscapes and stuff, especially horror/dystopian themed.


OpenAI's clear content policy is quite interesting to me. It's reasonable but clearly controlling.


They’re trying to walk a fine line. Maximizing revenue while avoiding regulation.


Nice! I was wondering why there are example images of real-looking people, but it seems this is allowed now:

https://www.vice.com/en/article/g5vbx9/dall-e-is-now-generat...


Hmm I signed up 2 days ago and it still says "Please don't share images of realistic faces." when you sign up.


Yeah, I saw that too, but it doesn't seem to be in the terms of use?


Based on this, an interesting project would be paraphrasing any regular prompt into a prompt that works for DALLE-2.
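As a toy illustration of that idea, a first pass could be as simple as bolting guide-style modifiers onto a plain prompt (the modifier lists below are illustrative, not taken from the guide):

    import random

    STYLES = ["digital art", "oil painting", "35mm photograph", "pencil sketch"]
    QUALITY = ["highly detailed", "studio lighting", "4k", "trending on artstation"]

    def dallify(prompt, seed=None):
        """Rewrite a plain prompt into a more DALL-E-friendly one."""
        rng = random.Random(seed)
        style = rng.choice(STYLES)
        extras = ", ".join(rng.sample(QUALITY, k=2))
        return f"{prompt}, {style}, {extras}"

    print(dallify("a penguin smoking a bong", seed=0))
    # e.g. "a penguin smoking a bong, oil painting, 4k, studio lighting"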


I really don't understand how people can appreciate something like this. To me it's just filling the world with literally mindless garbage.


I really don't understand how someone wouldn't find this incredibly fascinating as well as intensely fun.


Sure, because having a synthetic intelligence that seems to understand complex concepts to create coherent visual art is something humans are used to.


Mindless garbage is what the majority of humans create in every field.


It's more similar to photography/fishing than other art forms.


Dall-E still has a lot of work to do on face construction.

Maybe that’s a feature not a bug.


It seems to be by far the best of any drawing AI besides the "this person does not exist" series, but those are quite specialized.

You could be right though. It does "digital art" well, but realistic faces poorly, and they slap down lots of restrictions to avoid deepfaking.


Google's internal models (Imagen and Parti) are much better. It looks like DALLE2 is just not big enough to accurately draw faces, which are very detailed things.

"This person doesn't exist" uses StyleGAN which can definitely do faces, but can't do general pictures.


Are there samples of faces by the Google models? The websites don't seem to show any. Though their 20B samples are incredibly impressive.


There are animal faces. Google employees have been tweeting a lot more image samples, though I don't remember if any have human faces.

(Its output seems to be a lot more aligned to the input than DALL-E2, but also less "artistic" and more like it just did exactly what you said.)


I think they’re not training on faces on purpose.


You are probably right. Having used it, I sometimes get images with white polygons covering the faces of people, as if they have been blanked out.


Can anybody recommend a prompt engineering resource for language models?

Interesting topic


Perhaps https://arxiv.org/pdf/2102.07350.pdf

Also Gwern has done a lot on this.


This is great, lots of good ideas in the deck.


There are some shared Google Docs in the DALL-E 2 Discord community about this too.


What's the copyright situation for images from dalle/imagen?


AFAIK the only lawsuit that tests this so far was a kind of weird case where the programmer was trying to register his algorithm as the creator of the image, as a "work-for-hire". The copyright office's reasoning, however, banged on about the necessity of "human authorship":

> The Office also stated that it would not “abandon its longstanding interpretation of the Copyright Act, Supreme Court, and lower court judicial precedent that a work meets the legal and formal requirements of copyright protection only if it is created by a human author.”

https://www.copyright.gov/rulings-filings/review-board/docs/...


This would be super useful if I actually had access :P


When did you sign up? They seem to be opening the gates more:

https://mobile.twitter.com/sama/status/1547212678644371457


I signed up on day 2, still no access :')


Yea, I signed up months ago and still no access. Be patient.


Calling this "engineering" is just beyond parody.


It's as much engineering as SEO. Though with 'prompt engineering' it's the human brain trying to coax something out of the black box; ironically, an algorithm might be better at generating the prompts after being given points in its parameter space that fit the aesthetic direction the user wants to explore.
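As a toy illustration of that idea (all the modifier axes below are made up), something as dumb as a grid sweep around a subject the user already liked would be a starting point, with the human only ranking the results:

    from itertools import product

    subject = "abandoned lighthouse"  # a prompt the user already liked
    axes = {
        "medium":   ["matte painting", "ink drawing", "photograph"],
        "mood":     ["dystopian", "serene"],
        "lighting": ["golden hour", "moonlit"],
    }

    # Enumerate every combination of modifiers as a candidate prompt.
    for medium, mood, lighting in product(*axes.values()):
        print(f"{subject}, {medium}, {mood}, {lighting}")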



