That's because they're leveraging BFL models (almost assuredly Kontext) - it's mentioned in the release notes.
The input image is scaled down to approximately 1 megapixel, snapped to the closest supported aspect ratio.
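A minimal sketch of what that downscaling likely amounts to, assuming a simple area-based scale factor (the exact resampling and aspect-ratio buckets Kontext uses are not public, and `scale_to_megapixel` is a hypothetical helper, not their API):

```python
import math

def scale_to_megapixel(width: int, height: int,
                       target_pixels: int = 1_000_000) -> tuple[int, int]:
    """Return (width, height) with area close to target_pixels,
    preserving the original aspect ratio."""
    factor = math.sqrt(target_pixels / (width * height))
    if factor >= 1:
        # Already at or below the target area: leave unchanged.
        return width, height
    return round(width * factor), round(height * factor)

# A 12 MP photo (4032x3024) would come out near 1155x866.
print(scale_to_megapixel(4032, 3024))
```

In practice the model likely rounds each side to a multiple of some patch size and picks from a fixed list of aspect-ratio buckets rather than scaling freely like this.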
I ran some experiments with Kontext and added a slider so you can see a before/after of the isolated changes it makes without affecting the rest of the image.
https://specularrealms.com/ai-transcripts/experiments-with-f...