I know that's /s, but I'm seriously astounded by how many great-at-video cameras come out from Sony, Canon, Nikon, et al. that still don't have rudimentary features like genlock or easy timecode integration. Much less the ability to automatically upload video clips and still frames somewhere, in full resolution.
Sometimes I use my iPhone instead of my $3000 camera rig just because I can get the clip somewhere within seconds instead of the 2-3 minute rigamarole of SD card ingestion.
IIRC, iPhones (and every other smartphone) record variable frame rate (VFR) video, whereas professional cameras can record constant frame rate (CFR) video.
I think that VFR videos need to be re-encoded into CFR videos in order to work with all the footage shot by different devices. It sounds to me like with the Genlock feature it could actually be possible to record CFR video on an iPhone and also synchronize the iPhone with other devices so that neither video nor audio drifts relative to the other devices. But that's just my speculation, as I couldn't find any details about the Genlock feature.
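For what it's worth, fixing the VFR half after the fact is just a re-encode. A minimal sketch with ffmpeg, assuming it's on PATH; the filenames and the 30 fps target are placeholders:

    import subprocess

    def to_cfr(src: str, dst: str, fps: int = 30) -> None:
        """Re-encode a (possibly VFR) clip to a constant frame rate by duplicating/dropping frames."""
        subprocess.run(
            [
                "ffmpeg", "-i", src,
                "-vsync", "cfr",    # newer ffmpeg builds spell this -fps_mode cfr
                "-r", str(fps),     # target constant frame rate
                "-c:v", "libx264",  # must re-encode video; stream copy would keep the VFR timestamps
                "-c:a", "copy",     # leave audio untouched
                dst,
            ],
            check=True,
        )

    to_cfr("iphone_clip.mov", "iphone_clip_cfr.mp4")

That only solves the frame-timing half, though, not the drift/sync half, which is presumably what Genlock plus timecode is for.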
I would love to know how the team working on 28 Years Later handled synchronization of the multiple iPhones they were using to shoot some scenes of the movie, and whether they got a helping hand from Apple, which perhaps allowed them to use some internal APIs to access hidden features of the camera stack...
One of the big use cases for Genlock these days is when you're doing virtual production with LED walls; you want to make sure the screen refresh of the wall is locked to the shutter of the camera. It's almost like 'vsync' in video game settings; without it you risk seeing tearing in the backdrops.
The focus was naturally on scenes and "vibes" that couldn't be achieved with traditional film cameras. Boyle and Mantle weren't afraid to use whatever was best for the job, which included drones and other camera systems as well. More on their iPhone camera rigs for anyone who's interested:
Would it be possible to point out more details about where Apple got the math wrong and which inaccurate approximations they use? I'm genuinely curious and want to learn more about it.
It's not that they deliberately made a math error; it's that it's a very crude algorithm that basically just blurs everything outside what's deemed the subject with some triangular, Gaussian, or other computationally simple kernel.
What real optics does:
- The blur kernel is a function of the shape of the aperture, which is typically circular at wide apertures and hexagonal at smaller ones. Not Gaussian, not triangular; and because the kernel is a function of the depth map itself, it doesn't parallelize efficiently
- The amount of blur is a function of the distance to the focal plane and is typically closer to a hyperbola; most phone camera apps just use a constant blur and don't even account for this
- Lens aberrations, which are often thought of as defects, but if you generate something too perfect it looks fake
- Diffraction at the sharp corners of the mechanical aperture creates starbursts around highlights
- When out-of-focus highlights get blown out, they blow out more than just the center area; they also blow out some of the blurred area. If you clip and then blur, your blurred areas will be less than blown out, which also looks fake
Probably a bunch more things I'm not thinking of, but you get the idea (rough sketch of a couple of these points below)
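To make a couple of those points concrete, here's a toy numpy sketch; my own assumptions, not Apple's pipeline, and the 26 mm / f1.8 / pixels-per-mm constants are invented. It uses a flat disc kernel instead of a Gaussian, a thin-lens circle-of-confusion curve for the radius, depth-binned layers, and clipping only after the blur so blown-out highlights bleed into their bokeh:

    import numpy as np
    from scipy.ndimage import convolve

    def disc_kernel(radius: int) -> np.ndarray:
        # Flat circular kernel, roughly a wide-open aperture (not Gaussian, not triangular)
        if radius < 1:
            return np.ones((1, 1))  # in focus: no blur
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        k = (x**2 + y**2 <= radius**2).astype(float)
        return k / k.sum()

    def coc_radius_px(depth_m, focus_m, f_mm=26.0, f_number=1.8, px_per_mm=200.0):
        # Thin-lens circle of confusion; falls off roughly hyperbolically with depth
        f = f_mm / 1000.0
        aperture = f / f_number
        coc_m = aperture * f * np.abs(depth_m - focus_m) / (depth_m * (focus_m - f))
        return coc_m * 1000.0 * px_per_mm

    def fake_bokeh(img_linear, depth_m, focus_m, n_layers=8):
        # Bin the depth map into layers and blur each with its own disc kernel
        radii = coc_radius_px(depth_m, focus_m)
        out = np.zeros_like(img_linear)
        edges = np.linspace(radii.min(), radii.max() + 1e-6, n_layers + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (radii >= lo) & (radii <= hi)
            if not mask.any():
                continue
            k = disc_kernel(int(round((lo + hi) / 2)))
            blurred = np.stack(
                [convolve(img_linear[..., c], k, mode="nearest") for c in range(img_linear.shape[-1])],
                axis=-1,
            )
            out[mask] = blurred[mask]
        # Clip *after* blurring, in linear light, so bright bokeh stays blown out
        return np.clip(out, 0.0, 1.0)

Even this toy version doesn't parallelize as nicely as one separable Gaussian over the whole frame, which is presumably a big part of why phones don't bother.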
The Adobe one has a pretty decent ML model for picking out those stray hairs and keeping them in focus. They actually have two models, a lower quality one that keeps things on device and a cloud one that is more advanced.
re: parallelization, could a crude 3D-FFT-based postprocessing step achieve a slightly improved result relative to the current splat-ish approach while still being a fast-running approximation?
i.e. train a very small ML model on various camera parameters vs resulting reciprocal space transfer function.
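Not an answer to the learned-transfer-function part, but the FFT angle on its own looks roughly like this (a hedged sketch, all names mine): quantize the depth map into a handful of layers, convolve each layer with its own kernel in frequency space, and composite back to front. The FFT cost doesn't grow with kernel radius, which is where splatting hurts:

    import numpy as np
    from scipy.signal import fftconvolve

    def fft_layered_blur(img, masks_back_to_front, kernels):
        # img: (H, W) single channel; masks: boolean depth layers, farthest first; kernels: matching 2D PSFs
        out = np.zeros_like(img, dtype=float)
        for mask, k in zip(masks_back_to_front, kernels):
            m = mask.astype(float)
            layer = fftconvolve(img * m, k, mode="same")                # premultiplied layer, blurred
            alpha = np.clip(fftconvolve(m, k, mode="same"), 0.0, 1.0)   # blurred coverage
            out = out * (1.0 - alpha) + layer                           # "over" composite, back to front
        return out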
I second Apple having great IPS monitors and iPad displays. I couldn't notice any IPS glow or backlight bleeding and/or dimming at the edges when using an iMac. I can't believe manufacturers like Dell, LG or Samsung can't (or don't want to?) replicate the quality.
Isn't everything too small at 5K on a 27" monitor? Or do you use scaling?
I accept that the organic material in the pixels has a limited lifespan and that each pixel will eventually lose brightness. That's just a limitation of the technology.
But I'd like an OLED monitor to somehow mask/compensate for this degradation by e.g. adjusting the voltage/brightness of individual pixels according to their cumulative wear so that it's invisible to the user. That way, I would observe no signs of burn-in at, say, 30% brightness, but after years of cumulative usage, the monitor would get less and less bright (i.e. the wear would appear uniform).
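Pure speculation on my part about how a vendor might implement that, but the bookkeeping itself is simple; a toy model, all constants invented:

    import numpy as np

    class WearCompensator:
        def __init__(self, shape, decay_per_unit=1e-7):
            self.cumulative = np.zeros(shape)      # per-pixel emission accumulator
            self.decay_per_unit = decay_per_unit   # invented luminance loss per unit of emission

        def drive_levels(self, target):
            # target in [0, 1]; return the drive level needed to display it uniformly
            efficiency = 1.0 / (1.0 + self.decay_per_unit * self.cumulative)
            drive = np.clip(target / efficiency, 0.0, 1.0)  # worn pixels get driven harder
            self.cumulative += drive                         # which in turn wears them faster
            return drive

Once the most-worn pixel needs a drive level above 1.0, the only way to stay uniform is to lower the global peak, which is exactly the "slowly gets dimmer over the years" trade-off you describe; and since driving worn pixels harder accelerates their wear, the firmware would have to be conservative about it.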
What I'm primarily concerned about is temporary image retention: the outline of a white PDF document that was open for hours still being visible after switching to a dark IDE. I'm not sure whether 3rd-gen QD-OLED or WOLED panels are resistant to that kind of image retention.