AuraFlow v0.1: an open source alternative to Stable Diffusion 3 (fal.ai)



Passes the "woman on grass" test ; - )

Seriously though, there are some minor hand issues and a rare missing body part. "Correct anatomy, no missing body parts." seems to fix it mostly. Still pretty good for an early 0.1 announcement.

Following full sentences is pretty good. Although this: "A photo of a table. On the table there's a green box on the right, a red ball on the left. There's a yellow cone on the box." keeps putting the cone on the table.

Not trained on naked bodies though - generates blob monsters instead.


Can you give me your prompt that generates passable humans? Even stuff that worked on SD3 generates flesh demons for me in the playground linked in the post.


You're right. Turns out I've just been lucky and got 5/5 good results. Every time I try now, I get blob demons as well. The joys of random generation...


Is there any model that can actually generate a realistic naked human body? I thought they were all deliberately avoiding the subject in order to steer clear of compliance issues.


https://civitai.com/models has plenty of fine tuned models.


Go to CivitAI; many models there can generate naked bodies. They are SD finetunes that you can download and run locally.


Well over 50% of all SD models in existence have a focus on NSFW. It's more like "Which models DON'T generate realistic hardcore porn?"


Sure, there's lots of people really into that. Discords for apps like DrawThings have NSFW sections where people share models/processes/results.


Which is not surprising, since porn is a multi-billion-dollar market. So there are hobby enthusiasts optimizing for their own kinks, but also professional players who want a share of the cake.


Professional players don't like being bankrupted or going to jail.


Oh boy you know nothing.

There are hundreds of models for every lewd aspect you can think of. It's the main area of "research", and generating porn has really supercharged SD development.


Prompt adherence is great. I copied a few prompts from Ideogram (which also adheres to prompts well) and the results were good until they involved female bodies. This one, for example, https://ideogram.ai/g/ENMWd7PrQ32dIWSF91uMJQ/2, comes out showing that the training data didn't have enough naked bodies. Prompt adherence is very, very good otherwise. You can try the top images of the day/hour from Ideogram to test.


Just FYI, the ideogram.ai link is behind a login.


Not only that, but the only ways to sign up are either Google or Apple. What dystopia is this?


Worth checking out. Ideogram has the best rendering of English text out there. Nightcafe (https://creator.nightcafe.studio/) struck a deal with them recently and added support for the Ideogram model in their tool as well, if you want to try that. I also highly recommend Nightcafe for general SD online image creation.


I'll try Nightcafe, thanks!


In case you missed it, the authors were pretty smart to include that folded section in the middle, "Prompt for prompt-enhancement". I slapped that into gpt (https://chatgpt.com/share/2e53403e-4bd7-4138-ac34-55378e2ed3...) and made a few prompts. Ran those on their online demo. Initial impressions:

  - prompt adherence is really good
  - it's somewhere between SD15 and SDXL at creating pictures of text 
  - aesthetic quality is good, but leaves some to be desired
Gonna play more with it in ComfyUI.
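If you'd rather script it than use the web demo, here's a minimal sketch, assuming diffusers' AuraFlowPipeline integration and a "fal/AuraFlow" Hub id (check the model card for the exact names):

  # Hedged sketch: assumes diffusers ships AuraFlowPipeline and the weights
  # are published as "fal/AuraFlow" on the Hugging Face Hub.
  import torch
  from diffusers import AuraFlowPipeline

  pipe = AuraFlowPipeline.from_pretrained(
      "fal/AuraFlow", torch_dtype=torch.float16
  ).to("cuda")

  image = pipe(
      prompt="a cat that is half orange tabby and half black, "
             "holding a martini glass with a ball of yarn in it, art nouveau style",
      num_inference_steps=50,
      guidance_scale=3.5,
  ).images[0]
  image.save("auraflow_test.png")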


AIs are still not able to understand negations.

Try "ramen without egg" or "ramen with no egg" and it will show ramen WITH egg.

Or "man without striped shirt" will give "man WITH striped shirt"


It's not trained for it, because that use case is handled differently. It would be mostly a waste of time to train the concept compared to other things you want to achieve. Instead you put things you don't want in the negative prompt. This example doesn't expose the option, but you can try it here for a different model: https://huggingface.co/spaces/gokaygokay/Kolors

Set the seed to 0 and the prompt to "man in a loud shirt" - you get flowers. Set the negative prompt to "floral shirt" - now no flowers.

Sentence processors can definitely understand negation (any non-trivial LLM can), but it would be a waste of time to train that into the image generators vs. making other things better.
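For reference, outside a web UI this is just an extra argument on the pipeline call. Rough sketch with a generic diffusers pipeline (the model id and prompts are placeholders, not the Kolors demo's actual backend):

  # Hedged sketch of negative prompting with a diffusers pipeline.
  import torch
  from diffusers import StableDiffusionXLPipeline

  pipe = StableDiffusionXLPipeline.from_pretrained(
      "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
  ).to("cuda")

  generator = torch.Generator("cuda").manual_seed(0)
  image = pipe(
      prompt="man in a loud shirt",
      negative_prompt="floral shirt",  # concepts to push the sampler away from
      generator=generator,
  ).images[0]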


Why can't an image generator run a tiny automatic text-to-text rewrite first, to apply these special linguistic rules?


Apps / interfaces to those models can totally add that. But it's not necessary to add that to the model itself.
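As a toy illustration of that idea (nothing these models actually ship, and split_negations below is a hypothetical helper), the rewrite could be as simple as moving negated phrases from the user's prompt into the negative prompt; a real app might ask an LLM to do this instead:

  # Toy sketch of the "rewrite first" idea: move "without X" / "with no X"
  # phrases into the negative prompt before calling the image model.
  import re

  def split_negations(prompt: str) -> tuple[str, str]:
      """Return (positive_prompt, negative_prompt)."""
      negatives = []

      def grab(match: re.Match) -> str:
          negatives.append(match.group(1).strip())
          return ""

      positive = re.sub(r"\b(?:without|with no)\s+([^,.]+)", grab, prompt)
      return positive.strip(" ,."), ", ".join(negatives)

  pos, neg = split_negations("a bowl of ramen without egg")
  # pos == "a bowl of ramen", neg == "egg"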


Some do, but generally people just learn to use the tools.


That's what the negative prompt is for. Stable Diffusion also isn't like LLMs; LLMs certainly understand negation.


I did mean AIs in general, so I have edited my original post.

> That’s what negative prompt is for.

This is what I mean by it "not understanding negations". You need a whole separate prompt just to say you want e.g. "ramen without egg", instead of saying it in a single prompt that it understands.


You are not correct. LLMs understand "ramen without egg". Image gen models generally do not as this is not how images are described.

If you want to generate ramen without egg, you'll want _negative weighted_ prompts. "eating ramen, (egg:-1)"


Negative weighted tokens don't do what you think they do. Sometimes they act like a negative prompt, other times they don't.

Likewise, zero-weight tokens don't act like the token is absent from the prompt.


They generally do. It is difficult to differentiate between "negative tokens don't do that" and "prompt adherence is shaky in general".

It's entirely possible that the image model draws eggs in ramen but doesn't know that the egg is an egg, and therefore any attempt to interact with it via the egg token is futile. Generally speaking, though, thing:-1 should reduce the presence of thing for well-understood concepts. It's a better tool for second-pass alterations of an image.


You're not correct about AIs in general. Both chat LLMs and sentence embeddings can handle negation just fine. (Ask any chat model "what clothes would a person wear if they weren't wearing a hat".) Here it was simply not trained for that purpose. Maybe it wasn't worth it, maybe the creators thought the negative prompt is enough, maybe the time was better spent on other things. Either way, it's not AIs in general, and it's not a tech limitation.


>AIs are still not able to understand negations.

AIs are able to understand negations, just ask an LLM a question. Text-to-image models are the ones that struggle the most with this, they usually do not have a very nuanced understanding of text.


Fails on “piano keyboard” (shows a full piano) and “close up of piano keyboard” (bizarre duplicate keyboard monstrosity).

It’s a difficult prompt. Nobody gets the grouping of black keys right. Maybe someday?



> The prompt comprehension is incredible! #auraflow

> "a cat that is half orange tabby and half black, split down the middle. Holding a martini glass with a ball of yarn in it. He has a monocle on his left eye, and a blue top hat, art nouveau style "

Plus an image that somewhat resembles that prompt. The cat has a human-like hand with a chopped-off thumb and 6 fingers in total, differently colored eyes, a branch in front of its face, and the ball of yarn is somehow floating in mid-air.


These are somewhat valid issues, but given the currently available open models, this is a massive improvement. The human-like hand and the changing styles on the sides of the head aren't even bad - those are valid artistic choices you'd see in similar illustrations - they're just badly executed here.


Somewhat resembles? Come on.


So, now that this is released, are we no longer going to have pedantic people complaining that this "isn't real open source"?

Here is your model, complainers.

I'm not really sure why you'd be so insistent on that, as opposed to just fine tuning the "totally not open source, but instead just open weights" models.

But go ahead, I guess.

Now we can get back to talking about capabilities, usage, and results, as opposed to arguing about the definition of words.





