Hacker News

this is well articulated. another key point: DALL-E 2 uses roughly 70% fewer parameters than DALL-E 1 while offering far higher quality.

from wikipedia (https://en.wikipedia.org/wiki/DALL-E):

DALL-E's model is a multimodal implementation of GPT-3 with 12 billion parameters which "swaps text for pixels", trained on text-image pairs from the Internet. DALL-E 2 uses 3.5 billion parameters, a smaller number than its predecessor.
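The ~70% figure follows directly from those two counts; a quick check (using the 12 billion and 3.5 billion values quoted above):

```python
# Parameter counts from the Wikipedia excerpt above
dalle_1_params = 12e9   # DALL-E 1: 12 billion
dalle_2_params = 3.5e9  # DALL-E 2: 3.5 billion

# fraction of parameters removed relative to DALL-E 1
reduction = (dalle_1_params - dalle_2_params) / dalle_1_params
print(f"{reduction:.1%}")  # → 70.8%
```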
