
With the introduction of ProRes RAW and Genlock, it seems like iPhones can finally be seriously considered for professional movie capture.


> Genlock

Finally caught up to the Amiga I see


I know that's /s, but I'm seriously astounded by how many great-at-video cameras from Sony, Canon, Nikon, et al. still ship without rudimentary features like genlock or easy timecode integration, much less the ability to automatically upload video clips and still frames somewhere in full resolution.

Sometimes I use my iPhone instead of my $3000 camera rig just because I can get the clip somewhere within seconds instead of the 2-3 minute rigamarole of SD card ingestion.


28 Years Later was filmed using the iPhone 15 as the sensor/capture device.


IIRC, iPhones (and every other smartphone) record variable frame rate (VFR) video, whereas professional cameras can record constant frame rate (CFR) video.

I think that VFR videos need to be re-encoded into CFR videos in order to work with all the footage shot by different devices. It sounds to me like the Genlock feature could make it possible to record CFR video on an iPhone and also synchronize the iPhone with other devices so that both video and audio stay in sync without drifting. But that's just my speculation, as I couldn't find any details about the Genlock feature.
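(For what it's worth, the re-encode step I mean is the usual "conform to CFR" pass editors do today. A rough sketch, assuming ffmpeg is installed; the 30 fps target and file names are just placeholders:)

    import subprocess

    def conform_to_cfr(src, dst, fps=30):
        # Force a constant output frame rate; ffmpeg duplicates or drops
        # frames as needed. Newer builds spell -vsync cfr as -fps_mode cfr.
        subprocess.run([
            "ffmpeg", "-i", src,
            "-r", str(fps),
            "-vsync", "cfr",
            "-c:v", "libx264",   # video has to be re-encoded; audio can be copied
            "-c:a", "copy",
            dst,
        ], check=True)

    conform_to_cfr("iphone_clip.mov", "iphone_clip_cfr.mp4")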

I would love to know how the team working on 28 Years Later handled synchronization of the multiple iPhones they used to shoot some scenes, and whether they got a helping hand from Apple, which perhaps allowed them to use some internal APIs to access hidden features of the camera stack...


One of the big use cases for Genlock these days is virtual production with LED walls: you want the screen refresh of the wall locked to the shutter of the camera. It's almost like vsync in video game settings; without it you risk seeing tearing in the backdrops.


Only a few scenes were filmed with iPhones.


The focus was naturally on scenes and "vibes" that couldn't be achieved with traditional film cameras. Boyle and Mantle weren't afraid to use whatever was best for the job, which included drones and other camera systems as well. More on their iPhone camera rigs for anyone who's interested:

https://www.ign.com/articles/28-years-later-danny-boyle-goes...

https://www.motionpictures.org/2025/06/how-28-years-later-dp...

https://www.wired.com/story/danny-boyle-says-shooting-on-iph...

Related: Driver POV footage for F1 was shot with Apple-created custom iPhone rigs as well.


You have it backwards.

Most scenes were filmed with iPhones, only a few were filmed with other cameras.


No support for timecode-over-Bluetooth (for Timecode Systems/Atomos) is a dealbreaker for us.


Could you give more detail on where Apple gets the math wrong and which inaccurate approximations they use? I'm genuinely curious and want to learn more about it.


It's not that they deliberately made a math error; it's that the algorithm is very crude. It basically blurs everything outside whatever is deemed the subject with a triangular, Gaussian, or other computationally simple kernel.

What real optics does:

- The blur kernel is a function of the shape of the aperture, which is typically circular wide open and hexagonal when stopped down. It's not Gaussian, not triangular, and because the kernel varies with the depth map itself, the blur doesn't parallelize efficiently as a single convolution

- The amount of blur is a function of the distance to the focal plane, typically closer to a hyperbola; most phone camera apps just use a constant blur and don't even account for this

- Lens aberrations, which are often thought of as defects, but if you generate something too perfect it looks fake

- Diffraction at the sharp corners of the mechanical aperture creates starbursts around highlights

- When out-of-focus highlights get blown out, they blow out more than just the center; they also blow out some of the blurred area. If you clip and then blur, your blurred areas will be less than fully blown out, which also looks fake

Probably a bunch more things I'm not thinking of, but you get the idea (rough sketch of the first and last points below).
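To make those two points concrete, here's a toy gather-style version in Python/NumPy. Purely illustrative, not what any phone actually ships: the kernel is a disc whose radius comes from the depth map, so it can't be expressed as one fixed convolution, and the blur has to happen on the linear HDR values before clipping if you want blown-out bokeh to look right.

    import numpy as np

    def synthetic_defocus(img, depth, focus_depth, max_radius=8):
        # img: single-channel linear (pre-clipping) image; depth: same shape.
        # Circle-of-confusion radius per pixel -- a crude stand-in for the
        # thin-lens relation, which is closer to a hyperbola in distance.
        h, w = depth.shape
        radius = np.clip(np.abs(depth - focus_depth) * max_radius, 0, max_radius)
        ys, xs = np.mgrid[-max_radius:max_radius + 1, -max_radius:max_radius + 1]
        out = np.empty_like(img, dtype=np.float64)
        for y in range(h):                  # O(H*W*r^2): the per-pixel kernel
            for x in range(w):              # is why this doesn't map to one FFT
                r = max(1.0, radius[y, x])
                disc = (ys ** 2 + xs ** 2) <= r ** 2   # aperture-shaped, not Gaussian
                yy = np.clip(y + ys[disc], 0, h - 1)
                xx = np.clip(x + xs[disc], 0, w - 1)
                out[y, x] = img[yy, xx].mean()
        # Blur the HDR values first, then clip: clip-then-blur dims the
        # bokeh discs around blown-out highlights, which looks fake.
        return np.clip(out, 0.0, 1.0)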


The iPhone camera app does a lot of those things. The blur is definitely not a Gaussian blur; you can clearly see a circular aperture shape.

The blurring is also a function of distance; it's not constant.

And blowouts are pretty convincing too. The HDR sources probably help a lot with that. They are not just clipped then blurred.

Have you ever looked at an iPhone portrait mode photo? For some subjects they are pretty good! The bokeh is beautiful.

The most significant issue with iPhone portrait mode pictures is the boundaries, which look bad. Frizzy hair always ends up as a blurry mess.


The Adobe one has a pretty decent ML model for picking out those stray hairs and keeping them in focus. They actually have two models: a lower-quality one that keeps things on device and a more advanced cloud one.


Any ideas what the Adobe algorithm does? It certainly has a bunch of options for things like the aperture shape.


Re: parallelization, could a crude 3D FFT-based post-process achieve a slightly better result than the current splat-ish approach while still being a fast-running approximation?

i.e., train a very small ML model mapping camera parameters to the resulting reciprocal-space transfer function.
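For example, one common non-ML baseline along these lines is layered defocus: quantize the depth map into slabs, blur each slab with its own uniform disc kernel via FFT convolution, and composite. Rough NumPy/SciPy sketch; the layer count and kernel scaling are made up:

    import numpy as np
    from scipy.signal import fftconvolve

    def layered_fft_defocus(img, depth, focus_depth, n_layers=8, max_radius=8):
        # img: single-channel linear image. Within each depth slab the kernel
        # is constant, which is what makes FFT convolution (O(N log N) per
        # layer) applicable; the price is banding at slab boundaries.
        edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
        num = np.zeros_like(img, dtype=np.float64)
        den = np.zeros(depth.shape, dtype=np.float64)
        for i in range(n_layers):
            weight = ((depth >= edges[i]) & (depth <= edges[i + 1])).astype(float)
            if not weight.any():
                continue
            mid = 0.5 * (edges[i] + edges[i + 1])
            r = max(1, int(round(abs(mid - focus_depth) * max_radius)))
            ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
            kern = ((ys ** 2 + xs ** 2) <= r ** 2).astype(float)   # disc aperture
            kern /= kern.sum()
            num += fftconvolve(img * weight, kern, mode="same")
            den += fftconvolve(weight, kern, mode="same")
        return num / np.maximum(den, 1e-6)

A small model predicting the per-layer kernel (or its Fourier-space transfer function) from camera parameters could, in principle, slot in where the hard-coded disc is.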


Thanks!


I second that Apple makes great IPS monitors and iPad displays. I couldn't notice any IPS glow, backlight bleeding, or dimming at the edges when using an iMac. I can't believe manufacturers like Dell, LG, or Samsung can't (or don't want to?) replicate that quality.

Isn't everything too small at native 5K on a 27" monitor? Or do you use scaling?


I accept that the organic material contained in the pixels has limited lifespan and that the brightness of each pixel will eventually wear out. That's just the limitation of the technology.

But I'd like an OLED monitor to somehow mask/compensate for this degradation by e.g. adjusting the voltage/brightness of individual pixels according to their cumulative wear so that it's invisible to the user. That way, I would observe no signs of burn-in at, say, 30% brightness, but after years of cumulative usage, the monitor would get less and less bright (i.e. the wear would appear uniform).
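Roughly what I have in mind, as a toy model (the constants and the linear wear model are made up; I'm not claiming any panel does exactly this):

    import numpy as np

    MAX_DRIVE = 1.0      # panel's maximum drive level (hypothetical units)
    WEAR_RATE = 1e-7     # efficiency lost per drive-hour (made-up constant)

    def compensated_frame(target, wear_hours):
        # target: desired relative luminance per pixel; wear_hours: accumulated
        # per-pixel drive-hours. Worn pixels get driven harder so output stays
        # uniform; once any pixel runs out of headroom, dim the whole frame
        # instead of letting burn-in become visible.
        efficiency = np.clip(1.0 - WEAR_RATE * wear_hours, 0.05, 1.0)
        drive = target / efficiency
        scale = min(1.0, MAX_DRIVE / drive.max())
        return drive * scale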

What I'm primarily concerned about is temporary image retention: the outline of a white PDF document that's been open for hours staying visible after switching to a dark IDE. I'm not sure whether third-gen QD-OLED or WOLED panels are resistant to this kind of image retention.

