So the play button creates variations of the image: the current prompt goes through a language model, which produces an adjusted prompt heavily inspired by it, and that new prompt plus the current image are used to generate the next image.
You can see the current prompt of an image by clicking the edit button
So it's using vector embeddings for each image. As you swipe, you build up an aggregate preference vector, which is weighted more heavily by downloads and plays. There's a time decay on the preference vector, so older likes matter less.
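A minimal sketch of that aggregation idea (the action weights and half-life here are made-up illustrative values, not the app's actual ones):

```python
# Hypothetical sketch: weighted mean of image embeddings, where downloads
# and plays count more than likes, and older interactions decay away.
ACTION_WEIGHTS = {"like": 1.0, "play": 2.0, "download": 3.0}  # assumed weights
HALF_LIFE_DAYS = 30.0  # assumed: interactions lose half their weight every 30 days

def preference_vector(events, now):
    """events: list of (embedding, action, timestamp_in_days)."""
    dim = len(events[0][0])
    total = [0.0] * dim
    weight_sum = 0.0
    for emb, action, t in events:
        decay = 0.5 ** ((now - t) / HALF_LIFE_DAYS)  # exponential time decay
        w = ACTION_WEIGHTS[action] * decay
        for i, x in enumerate(emb):
            total[i] += w * x
        weight_sum += w
    return [x / weight_sum for x in total]  # weighted mean of embeddings

prefs = preference_vector(
    [([1.0, 0.0], "like", 0.0),        # an old like
     ([0.0, 1.0], "download", 90.0)],  # a recent download
    now=90.0,
)
# the recent download dominates the old like
```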
Still tuning the trade-off between image quality and your preferences (purely optimising for your preference vector would probably lead to some crap images).
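One simple way that trade-off could be expressed (an assumed formulation, not necessarily what the app actually does): blend a base quality score with cosine similarity to the preference vector, where pushing the blend weight to 1.0 is the "purely optimising for preferences" failure mode.

```python
import math

def cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def score(candidate_emb, quality, pref_vec, alpha=0.6):
    # alpha is a hypothetical tuning knob: 1.0 optimises purely for the
    # preference vector, 0.0 ignores preferences entirely
    return alpha * cosine(candidate_emb, pref_vec) + (1 - alpha) * quality
```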
But maybe I can weight it a little more towards preferences.
Actually, joking aside, with an infinite stream of results built up from preferences, it would be interesting to see what my "type" actually was, particularly given I don't feel I have a "type".
Probably won't explore it, but I have to admit it could be interesting.