
I'm a film colorist and have done the film print out method plenty of times.

You don't need to print to film to dither and reduce banding; it's trivial to add grain to achieve the same effect (either digitally or via overlaid film grain scans).
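To make that concrete, here's a minimal sketch (illustrative only, not any grading tool's actual pipeline) of how adding a little noise before quantization breaks up banding on a smooth ramp:

```python
import numpy as np

# A smooth ramp quantized down to 8 bits produces hard steps (banding).
# Adding a small amount of noise ("grain") before quantizing dithers
# the step edges so they no longer line up into visible bands.

rng = np.random.default_rng(0)
gradient = np.linspace(0.0, 1.0, 1920)  # smooth ramp across a 1920-px width

def quantize(x, bits=8):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

banded = quantize(gradient)  # each quantization level forms a flat band
grain = rng.normal(0.0, 1.0 / 255.0, gradient.shape)  # ~1 code value of noise
dithered = quantize(np.clip(gradient + grain, 0.0, 1.0))

# Banding shows up as long runs of identical values with few transitions;
# dithering multiplies the number of transitions, breaking up the bands.
print(np.count_nonzero(np.diff(banded)), np.count_nonzero(np.diff(dithered)))
```

The same idea scales up to 2D images, where the grain additionally masks the remaining quantization error perceptually.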

The film outs are just a different method of getting a film tone, curve, and grain - though honestly, I've done some where people were unable to identify which was the digital master and which was the post-film-out version. Much of the time it's just an ego boost for the director and DP, and a competent colorist can easily recreate the look. That said, it's fun to do and means less work on my end, so I don't discourage it.


Conscripts explicitly don't serve in Ukraine. The article even notes that.


The parent said, "Russia does not have forced conscription."

The parent was corrected.


This was trained to be run at FP8 with no quality loss.


The model description on Hugging Face says: model size 12.2B params, tensor type BF16. Is the tensor type different from the training precision?
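For what it's worth, the parameter count and the stored dtype are independent axes: a repo can ship BF16 weights even if inference targets FP8. A rough back-of-the-envelope for weight memory (using only the 12.2B figure above; the dtype byte sizes are standard):

```python
# Same parameter count, different storage cost per precision.
params = 12.2e9
bytes_per = {"fp32": 4, "bf16": 2, "fp8": 1}

for dtype, nbytes in bytes_per.items():
    gib = params * nbytes / 2**30  # weight memory in GiB, weights only
    print(f"{dtype}: ~{gib:.1f} GiB")
```

So the BF16 checkpoint is roughly twice the size of an FP8 one, without any change to the 12.2B count.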


The best part of caving as a sport and science is that you can find and explore virgin systems here on Earth still.


There's a really good documentary movie about this too: https://www.imdb.com/title/tt0435625/


Bitwarden has a separate 2FA app, so your TOTP codes aren't in the same vault as your passwords (you can store them together, but shouldn't).


Why shouldn't you?

I use a YubiKey as the 2FA for my Bitwarden, then store all the TOTP codes with the passwords in the same vault. Quite convenient, and it still adheres to the principles of MFA.


If your one Bitwarden store were compromised in any way, it would be game over, since it also contains the 2FA codes.
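To see why, note that a "TOTP code" in a vault is really a stored shared secret; anyone who can read the vault can generate valid codes forever. A minimal RFC 6238 sketch in stdlib Python (the function name and defaults here are mine, not Bitwarden's):

```python
import base64
import hashlib
import hmac
import struct
import time

# Minimal TOTP per RFC 6238: SHA-1, 6 digits, 30-second step.
# The only input that matters is the base32 secret - possession of the
# secret is possession of the second factor.

def totp(secret_b32, for_time=None, digits=6, step=30):
    key = base64.b32decode(secret_b32, casefold=True)
    t = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", t), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T = 59 s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # -> "287082"
```

Stored next to the password, that secret turns "something you know + something you have" back into a single thing you know.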

If you were to use two apps / two stores, there is another hurdle.


That is exactly why I do it.


This has been Adobe Firefly's value proposition for months now. It works fine and is already being utilized in professional workflows with the blessing of lawyers.


That sounds actively harmful. Often we want storyboards to be less specific, so as not to have some non-artist decision maker ask why the final shot doesn't look like the storyboard.

And when we do want it to match exactly, in an animatic or whatever, it needs to be far more precise than this, matching real locations, etc.


I hadn't thought about that in movie context before, but it totally makes sense.

I've worked with other developers who want to build high-fidelity wireframes, sometimes in the actual UI framework, probably because they can (and it's "easy"). I always push back against that, in favor of using a whiteboard or Sharpies. The low fidelity brings better feedback and discussion, focused on layout and flow rather than spacing and colors. Psychologically it also feels temporary, giving permission for others to suggest a completely different approach without feeling they're tossing out more than a few minutes of work.

I think in the artistic context it extends further, too: if you show something too detailed, it can anchor in people's minds and stifle their creativity. Most people have experienced this in an ironically similar way: consider how differently you picture the characters of a book depending on whether you watched the movie first.


I know you weren't implying this, but not every storyboard is for sharing with (or seeking approval from) decision makers.

I could see this being really useful for exploring tone, movement, shot sequences, cut timing, etc.

Right now you scrape together "kinda close enough" stock footage for this kind of exploration, and this could get you "much closer enough" footage.


I think of it in terms of the anchoring bias. Imagine that your most important decisions are anchored for you by what a ten-year-old kid heard and understood. Your ideas don't come to life without first being rendered as a terrible approximation that is convincing to others but deeply wrong to you, and now you get to react to that instead of going through your own method.

So if it’s an optional tool, great, but some people would be fine with it, some would not.


Absolutely. Everyone's creative process is different (and valid).


I guess this will give birth to a new kind of filmmaking. Start with a rough sketch, generate 100 higher-quality versions with an image generator, select one to tweak, use that as input to a video generator which generates 10 versions, choose one to refine, etc.


Ghost does all that.


If someone needs blood transfusions, they have larger concerns than PFAS (which is likely at a similar baseline anyway).


AI training doesn't care anymore. Huge amounts of training data are now intentionally created synthetic data, generated by LLMs to train bigger LLMs. The larger the model, the less it matters.

