I'm a film colorist and have done the film print out method plenty of times.
You don't need to print to film to dither and reduce banding; it's trivial to add grain to achieve the same effect (either digitally or via film grain scans that are overlaid).
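Concretely, the digital route is just noise added before the quantize step: randomizing which code value each pixel rounds to is what breaks up the banding in smooth gradients. A minimal NumPy sketch (the function name and grain strength are illustrative, not any particular grading tool's API):

    import numpy as np

    def add_grain(frame, strength=0.004):
        # Monochromatic Gaussian grain: one noise field shared across
        # channels, similar in effect to an overlaid grain scan.
        noise = np.random.normal(0.0, strength, frame.shape[:2])
        return np.clip(frame + noise[..., np.newaxis], 0.0, 1.0)

    # A shallow gradient that would band badly at 8 bits...
    ramp = np.linspace(0.2, 0.3, 1920)
    frame = np.tile(ramp, (1080, 1))[..., np.newaxis].repeat(3, axis=-1)

    # ...dithers cleanly once grained, then quantized.
    out = np.round(add_grain(frame) * 255).astype(np.uint8)

Scanned grain plates accomplish the same thing, just overlaid rather than generated.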
The film outs are just a different method of getting a film tone, curve, and grain - though honestly I've done some where people were unable to identify which was the digital master and which was the post-film-out version. Much of the time it's just an ego boost for the director and DP, and a competent colorist can easily recreate it. That said, it's fun to do and means less work on my end, so I don't discourage it.
I use a YubiKey as the 2FA for my Bitwarden, then store all the TOTP codes with the passwords in the same vault. Quite convenient, and it also adheres to the principles of MFA.
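Worth spelling out what that setup actually stores: Bitwarden's authenticator keeps the TOTP secret itself, and every rotating code is derived from it, so anyone with vault access holds both the password and the codes - which is why the YubiKey gating the vault is doing the real MFA work. A rough illustration using the pyotp library (the secret here is freshly generated, purely for demonstration):

    import pyotp

    # The shared secret is what sits next to the password in the vault.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    print(totp.now())               # current 6-digit code, rotates every 30s
    print(totp.verify(totp.now()))  # how the server side checks a code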
This has been Adobe Firefly's value proposition for months now. It works fine and is already being utilized in professional workflows with the blessing of lawyers.
That sounds actively harmful. Often we want storyboards to be less specific, so as not to have some non-artist decision maker ask why the final shot doesn't look like the storyboard.
And when we want it to match exactly in an animatic or whatever, it needs to be far more precise than this, matching real locations etc.
I hadn't thought about that in movie context before, but it totally makes sense.
I've worked with other developers who want to build high-fidelity wireframes, sometimes in the actual UI framework, probably because they can (and it's "easy"). I always push back against that in favor of using a whiteboard or Sharpies. The low fidelity brings better feedback and discussion, focused on layout and flow, not spacing and colors. Psychologically it also feels temporary, giving permission for others to suggest a completely different approach without thinking they're tossing out more than a few minutes of work.
I think in the artistic context it extends further, too: if you show something too detailed it can anchor in people's minds and stifle their creativity. Most people experience this in an ironically similar way: consider how differently you picture the characters of a book depending on whether or not you watched the movie first.
I think of it in terms of the anchoring bias. Imagine that your most important decisions are anchored for you by what a 10-year-old kid heard and understood. Your ideas don’t come to life without first being rendered as a terrible approximation that is convincing to others but deeply wrong to you, and now you get to react to that instead of going through your own method.
So if it’s an optional tool, great, but some people would be fine with it, some would not.
I guess this will give birth to a new kind of filmmaking. Start with a rough sketch, generate 100 higher-quality versions with an image generator, select one to tweak, use that as input to a video generator which generates 10 versions, choose one to refine, etc.
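A sketch of that loop, with hypothetical stand-ins for the generators since no specific models or APIs are named here:

    import random

    # Hypothetical stand-ins for an image and a video generator; the
    # real APIs are an assumption, not something from the thread.
    def generate_images(prompt, n):
        return [f"{prompt} / still {i}" for i in range(n)]

    def generate_videos(still, n):
        return [f"{still} / clip {i}" for i in range(n)]

    def pick(candidates):
        return random.choice(candidates)  # stands in for a human choice

    sketch = "rough storyboard panel: rainy alley, neon sign"
    still = pick(generate_images(sketch, n=100))  # 100 stills, keep one
    clip = pick(generate_videos(still, n=10))     # 10 clips, keep one
    # ...then tweak, regenerate, and narrow down again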
AI training doesn't care anymore. Huge amounts of training data are now intentionally created synthetic data, generated by LLMs to train bigger LLMs. The larger the model, the less it matters.