Why movies look weird at 48fps, and games are better at 60fps (accidentalscientist.com)
433 points by jfuhrman on Dec 24, 2014 | 125 comments



I liked the article, but, as a game developer who does not specialize in graphics, I really liked one of the comments:

Joe Kilner - One extra issue with games is that you are outputting an image sampled from a single point in time, whereas a frame of film / TV footage is typically an integration of a set of images over some non-infinitesimal time.

This is something that, once stated, is blatantly obvious to me, but it's something I simply never thought deeply about. What it's saying is that when you render a frame in a game, say the frame at t=1.0 in a game running at 60 FPS, what you're doing is capturing and displaying the visual state of the world at a discrete point in time (i.e. t=1.0). Doing the analogous operation with a physical video camera means you are capturing and compositing the "set of images" between t=1.0 and t=1.016667, because the physical camera doesn't capture a discrete point in time, but rather opens its shutter for 1/60th of a second (0.016667 seconds) and captures for that entire interval. This is why physical cameras have motion blur, but virtual cameras do not (without additional processing, anyway).

This is obvious to anyone with knowledge of 3D graphics or real-world cameras, but it was a cool little revelation for me. In fact, it's sparked my interest enough to start getting more familiar with the subject. I love it when that happens!
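To make that concrete, here's a toy sketch (my own illustration, not from the article; the resolution, dot speed, and shutter behaviour are all made-up assumptions) of the difference between a single-instant sample and a shutter-interval integration:

    import numpy as np

    FPS = 60
    FRAME_TIME = 1.0 / FPS      # ~0.0167 s
    SHUTTER = FRAME_TIME        # assume the shutter is open for the whole interval

    def render(t):
        # Stand-in renderer: a 64-pixel strip with a dot moving at 300 px/s,
        # evaluated at the single instant t.
        img = np.zeros(64)
        img[int(300 * t) % img.size] = 1.0
        return img

    def game_frame(t):
        # Discrete sample: the world exactly at time t -- no motion blur.
        return render(t)

    def camera_frame(t, subsamples=16):
        # Integrate over the shutter interval [t, t + SHUTTER]: average many
        # instants, which smears the moving dot into a blur.
        times = t + np.linspace(0.0, SHUTTER, subsamples, endpoint=False)
        return np.mean([render(ti) for ti in times], axis=0)

    print(np.count_nonzero(game_frame(1.0)))    # 1 bright pixel
    print(np.count_nonzero(camera_frame(1.0)))  # several dimmer pixels: motion blur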


Game renderers do sample time too, for per-object motion blur[1] and sometimes full-scene blur or AA. To push the idea further, research has been done on 'frameless' renderers, where you never render a complete frame but instead sample ~randomly at successive times and accumulate into the framebuffer. At low resolution it feels weird but very natural, a kind of computed persistence of vision: https://www.youtube.com/watch?v=ycSpSSt-yVs . I love how, even at low res, you get valuable perceptual information.

[1] Some renderers even take advantage of that to increase performance, since you get a more human-oriented feel while rendering less precisely.
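For anyone wondering what "accumulate into the framebuffer" means in practice, here's a minimal sketch of the frameless idea (my own toy interpretation of the linked video, with made-up resolution and shading):

    import numpy as np

    H, W = 64, 64
    framebuffer = np.zeros((H, W))   # old samples persist here

    def shade(y, x, t):
        # Stand-in for shading pixel (y, x) at time t.
        return 0.5 * (np.sin(t + 0.1 * x) + 1.0)

    def frameless_update(t, samples_per_tick=256):
        # Instead of rendering a whole frame at time t, refresh only a random
        # subset of pixels; the rest keep their values from earlier instants.
        ys = np.random.randint(0, H, samples_per_tick)
        xs = np.random.randint(0, W, samples_per_tick)
        framebuffer[ys, xs] = [shade(y, x, t) for y, x in zip(ys, xs)]

    for tick in range(100):
        frameless_update(tick / 1000.0)
    # The framebuffer now mixes samples from many different moments, which is
    # what gives the "computed persistence of vision" look in the video.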


Speaking of "frameless" rendering, I noticed during Carmack's Oculus keynote (https://www.youtube.com/watch?v=gn8m5d74fk8#t=764), he talks about trying to persuade Samsung to integrate programmable interlacing into their displays in order to give dynamic per-frame control over which lines are being scanned.

This would give you the same "adaptive per-pixel updating" seen in your link, though primarily to tackle the problems with HMDs (low-persistence at high frame-rates).


This AnandTech overview of nVidia's G-Sync is worth reading (meshes a bit with what Carmack mentioned about CRT/LCD refresh rates in that talk): http://www.anandtech.com/show/7582/nvidia-gsync-review

It's a proprietary nVidia technology that essentially does reverse V-Sync. Instead of having the video card render a frame and wait for the monitor to be ready to draw it like normal V-Sync, the monitor waits for the video card to hand it a finished frame before drawing, keeping the old frame on-screen as long as needed. The article goes into a little more detail; they take advantage of the VBLANK interval (legacy from the CRT days) to get the display to act like this.


Weird, I missed this part. Vaguely reminds me of E. Sutherland's fully lazy, streamed computer-graphics generation, since they had no framebuffer at the time.


Fantastic technique. Can't believe it's been almost 10 years since this video. Do you know if there is any follow-up research being done?


I did search for related research a while back with no results. Tried to leverage reddit too, someone asking the very same was told this : http://www.reddit.com/r/computergraphics/comments/12gs2a/ada...


I always use this fact as a kind of analogue to explain position-momentum uncertainty in physics. From a blurry photo of an object, you can easily measure the speed, but the position is uncertain due to the blur. From a very crisp photo, you can tell exactly where it is, but you can't tell how fast it is moving because it is a still photo.

It's a good way to start building an intuition about state dependencies.


Welcome to Heisenberg's uncertainty principle[0] in the macroscopic world!

[0] http://en.m.wikipedia.org/wiki/Uncertainty_principle


Would it not be the observer effect instead?


True that pointing a camera at someone can change their behavior.


I hear it can even add ten pounds...


Another way of saying this is that a drum beat has no definite pitch because it's too short. It's exactly the same property of Fourier transforms that is behind the uncertainty principle.
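For reference, the underlying Fourier bound (the standard Gabor/time-bandwidth limit, with $\Delta t$ and $\Delta f$ the standard deviations of the signal's spread in time and in frequency; not something stated in the thread):

    \Delta t \, \Delta f \;\ge\; \frac{1}{4\pi}

A drum hit confined to a few milliseconds therefore has to spread its energy over a wide band of frequencies, while a well-defined pitch needs a long duration.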


There's also a broader picture -- that this effect arises because we use discrete, independent words to describe inter-dependent phenomena. The concepts of momentum and position are not good 'factorizations' of reality, so trying to talk about it in those terms leads to structural problems when you increase precision.

It's the same kind of problem you get when you try to talk about objective/subjective morality, or whether science can prove things.

Another example I give:

The Sun revolves around the Earth -- true from Earth's perspective. The Earth revolves around the Sun -- true from the Sun's perspective. They both revolve around their center of mass -- true in the Newtonian model. They both follow straight geodesics in spacetime -- true in the Einsteinian model.

Everything is true in some approximation. The idea is to increase your precision so that you can find truths that incorporate more information and thus provide greater insight and generality. If your language doesn't have enough 'bandwidth' to carry the information of -- that is, mimic the structure of -- your experiences, you have to develop more structured abstractions, or you'll lose clarity and expressive power.

So, for instance, you wouldn't have to prove that unicorns don't exist: you just have to show that providing more specific information doesn't result in a lack of clarity in a general model. A theory of three-legged unicorns offers no advantages over a model of n-legged unicorns because unicorns don't exist.

But now I've gone off the deep end...


I don't think position or momentum are bad descriptions of a system. Also, it's not "factorizing" the system into two parts, since they are related by a transformation (i.e. they are interchangeable, which doesn't happen in a factorization). It's important to have good knowledge of the subject before you try to interpret its meaning, and especially when you try to extrapolate from it.

I'm saying this also because the initial analogy wasn't so strong to begin with. A bunch of incorrect analogies can build up a wrong picture of a theory that is hard to dislodge from people's minds. Which is why it's good to reason from principles and not analogies (analogies are useful for other purposes, I believe -- building bridges between the understanding of two different fields -- provided they're made precise and clear in their shortcomings).


Position and momentum are bad descriptions of a quantum system precisely because they are not independent. This is not really a statement about physics, but about language. It just happens to apply to physics because the purpose of physics is to develop useful language. And we make a better description in this case by saying things like 'position-momentum', using an actual equation, or by using a different concept like energy or wave-vector. All of which are better than position and momentum alone.

The extrapolation here really has little to do with quantum physics. I am also using QM as an analogy for linguistics. The theme is about how language captures information and how information-processing systems select output based upon linguistic structure. It's a very young and poorly understood subject, so I don't think it's very likely to get good theoretical work in a forum like this. These are primitive times, after all.


If you know the direction of motion of a blurry object, isn't the location of the object at one of the leading edges? I thought the problem was more that you have no idea of the features of the object.


Indeed, motion blurring preserves almost all* information of a picture, given some assumptions (e.g. brightness stays constant, path is predictable).

*A linear blur of a 2D picture acts as a 1D box filter, whose frequency response is a sinc: information is completely lost only at spatial frequencies that are multiples of 1/d in the direction of motion, where d is the linear displacement; everything else is merely attenuated.
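In symbols (the textbook transfer function of a length-d box blur along the motion direction, added here just to back up the footnote):

    H(f) \;=\; \frac{\sin(\pi f d)}{\pi f d} \;=\; \operatorname{sinc}(f d)

which vanishes only at f = k/d for integer k >= 1 and merely attenuates every other spatial frequency.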


Most movies use a 180-degree shutter angle, which means the shutter is open half the time. http://www.red.com/learn/red-101/shutter-angle-tutorial So you get motion blur for half the frame time, and no light on the film for the other half of the time. The Hobbit movies (at least the first one) used a 270-degree shutter angle, so even at half the frame time, they got 3/4 as much motion blur in each frame as a normal movie. That might contribute to the odd feeling viewers had. http://www.fxguide.com/featured/the-hobbit-weta/
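A quick check of those numbers using the standard shutter-angle relation (exposure time = shutter angle / 360 x frame time); the values below just reproduce the figures from the comment and links:

    def exposure_time(fps, shutter_angle_deg):
        # Standard relation: exposure = (shutter angle / 360) * (1 / fps)
        return (shutter_angle_deg / 360.0) / fps

    print(exposure_time(24, 180))   # 0.0208... = 1/48 s  (typical movie)
    print(exposure_time(48, 270))   # 0.015625  = 1/64 s  (The Hobbit at 48fps)
    # (1/64) / (1/48) = 0.75, i.e. 3/4 the per-frame motion blur of a normal movie.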


I believe this was done by PJ for a couple of reasons. 1) Practical: increasing the shutter speed means either increasing the amount of light on set, using faster film stock (or a higher ISO on your RED camera), or some combination of both. 2) This shutter speed was a compromise between the exposure times of a 180-degree shutter shooting at 24 vs. 48 fps, and would still retain some of the blur so that 24fps screenings would appear relatively 'normal'.


The RED camera is quite noisy, especially with the 5K sensor.


Indeed. I had to laugh though, because first I read it as a sound person would and wondered why you had made that comment here. The camera uses internal fans to cool down the sensor between takes and it is really loud, like a hair dryer. So when you start shooting that often means stopping because of some sound which wasn't obvious over the background noise of the camera cooling itself down. Not cool.


This.

The article wanders on and on, but is simply grasping at the much more learned aesthetic repulsion of motion blur.

24 and 25 fps (aka 1/48th and 1/50th) motion blur have defined the cinematic world for over a century.

Video? 1/60th. Why the aesthetic revulsion? While I am certain this is a complex sociological construct, there certainly is an overlap with lower budget video soap operas of the early 80's. Much like oak veneer, the aesthetic becomes imbued with greater meaning.

The Hobbit made a bit of a curious choice for their 1/48th presentation in choosing a 270° shutter. An electronic shutter can operate at 360°, which would have delivered the historical 1/48th shutter motion blur.

Instead, the shutter ended up being 1/64th, triggering those all-too-unfortunate cultural aesthetic associations with the dreaded world of low-budget video.

It should be noted that there are some significant minds that believe in HFR motion pictures, such as Pixar's Rick Sayre. However, a disproportionate number of DPs have been against it, almost exclusively due to the motion blur aesthetic it brings, and the technical challenges of delivering to the established aesthetic within the constraints of HFR shooting.


Not sure about how movies are filmed, but you don't have to shoot video frames at 1/FPS. That's just the slowest you can shoot. If you're shooting in broad daylight, each frame could be as quick as 1/8000, for example.

Shooting at the slowest shutter speed possible should make the most fluid video.


Worth noting that directors use high speed film to portray a feeling of confusion. The lack of motion blur gives that sense to the scene. E.g. the opening scene of Saving Private Ryan uses this effect.


The effects in the battle sequences of SPR are not a result of high-speed film per se, but the result of altering the effective shutter speed of the camera to reduce motion blur (switching from 180 degree shutter to 90 or 45 situationally). The scenes were still shot at 24fps. If you have a digital video camera with manual shutter control, set the framerate to 24fps and set the shutter to 1/200 and you have instant SPR.

This effect is now very common for action scenes in movies and also tons of music videos. Very easy to spot once you are aware of it.


You are actually speaking of the same thing: "high-speed film", like a "high-speed lens", doesn't actually affect the framerate, but rather how quickly it can produce an image from a given light source. It's simply more sensitive.

Now, you're correct that the actual effect is achieved by changing the shutter speed, but the loss of light is often compensated for by using a faster film, since using a larger aperture has a more significant effect on the scene in the form of DOF.


They dropped frames too, right? Isn't it a combination of high speed frames at maybe 20fps?


I don't think so. That effect is the extreme sudden movements of the camera. It makes it look stuttery.


As far as I know, the "gold standard" is shooting at half the inverse of the FPS (eg. 1/60s exposures for 30 frames per second). This is how film cameras traditionally work, the so-called 180-degree shutter: http://luispower2013.wordpress.com/2013/03/12/the-180-degree...


I'm pretty sure that 24 FPS footage with a 1/24 second shutter speed would be completely unusable except as an extreme blur effect.


On the contrary. That's desired. It makes the motion smoother. Lots of photographers with high-end DSLRs have been asking for 1/24 when shooting at that framerate.


The corollary is that it should be possible to produce a movie-like quality in games, by over-framing and compositing a blur between frames. The result would have actual motion blur and update at say 30 fps, but without the jerkiness we normally associate with that frame rate.


Sure, if you can render at, say, 120 Hz or more and composite the 4 frames together into your single output frame, you will get a single improved frame. But even at 4 renders per frame you'll still get artifacts; I imagine you'd need at least double that to make it worthwhile. And even that only gives you ~8ms to time-step and render the entire scene, minus compositing and any other full-frame post effects. Hitting the ~16ms required for 60fps is already pretty difficult.

Now, in video games we do have methods to help simulate inter-frame motion blur. The most commonly used is to build a buffer of inter-frame motion vectors; this is often generated from game state, but can also be re-derived by analysing the current and previous frames to some effect. Then you do a per-pixel blur based on the direction and magnitude of the motion vectors, which often works to good effect.
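A rough numpy sketch of that motion-vector blur, just to illustrate the idea (real engines do this in a post-process shader with depth-aware weighting, clamping, and proper filtering; the array shapes and sample count here are assumptions):

    import numpy as np

    def motion_blur(color, motion, samples=8):
        # color:  (H, W, 3) current frame
        # motion: (H, W, 2) per-pixel motion vectors in pixels (x, y), e.g. from game state
        # For each pixel, average samples taken backwards along its motion vector.
        H, W, _ = color.shape
        ys, xs = np.mgrid[0:H, 0:W]
        out = np.zeros_like(color, dtype=np.float64)
        for i in range(samples):
            t = i / float(samples)  # 0 .. just under one frame back in time
            sy = np.clip((ys - motion[..., 1] * t).astype(int), 0, H - 1)
            sx = np.clip((xs - motion[..., 0] * t).astype(int), 0, W - 1)
            out += color[sy, sx]
        return out / samples

    # Usage: blurred = motion_blur(current_frame, velocity_buffer)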


You wouldn't need to render the whole frames. You'd get even better results from randomly sampling over a number of frames to contribute to nearby pixels. In effect you're trading less work for more noise, and noise is kind of what you want here (so long as it accurately reflects what is happening).

So, you could render 16 samples, but only do 1/4 of the pixels on each so you'd get better motion blur for only a small increase in work over doing 4 temporal samples. The extra work corresponds to a bit of bookkeeping and the tweening of 16 frames instead of 4 and you'd still be rendering the same number of samples overall.


Each game frame is a snapshot taken with an infinitely small shutter duration but displayed for 1/30s or 1/60s (vs one movie frame, which has a shutter duration of, e.g. 1/48s and displayed for 1/24s).

So over-framing game frames will not produce motion blur, it'll simply merge two still images together. You need to simulate motion blur (usually as a post-process). This of course takes more time to render, potentially lengthening the frame times.


I don't see any theoretical difference. Provided our sampling rate is sufficiently high, merging together "snapshot" images should give exactly the same effect as motion blur.

Though in practice it would be difficult to render more than a few snapshot frames between the display's refreshes, and with a low sampling rate there would be noticeable errors, particularly if you take a screenshot of a fast-moving object.


That's not entirely accurate. It's true that a camera will capture the image during a certain interval of time instead of a definite point in time (obviously), but the length of that exposure time is not necessarily tied to the framerate.

For instance, if you have a digital camera where you can select the framerate (pretty common these days), and if the exposure time were simply the frame period, it would mean that the image at 30fps would be exposed twice as long as at 60fps, and the resulting picture would look very different.

Of course you can mitigate that by changing the aperture and other parameters, but in my experience you can in practice select the exposure time and framerate independently on digital cameras. With a very sensitive sensor and/or good lighting you can achieve very short exposure times, much shorter than the frame period. If you're filming something that moves, and unless you want blur on purpose, you probably want to reduce the exposure time as much as possible in order to get a clean image, just like in video games.


...and it will look very awkward if there is motion in the frame if you're not shooting very close to 1/(2 * framerate). There is a very small tolerance window, outside of which the picture will look mushy (if your camera lets you shoot very close to 1/framerate) or jerky (< 1/(3 * framerate)). Controlling exposure, if you want to maintain a constant aperture for depth-of-field reasons, is done using neutral density filters (including "variable ND" crossed polarizers) and adjusting the sensitivity/gain/ISO, not shutter speed.


This article is detailed and scientific.

However, anecdotally speaking, the concern I have with evaluating high-frame rate in film is that we have very little context—most of us have only ever seen Peter Jackson's Hobbit films in HFR. In other words, I have never seen how other directors' work would be affected by HFR.

Speaking exclusively about the Hobbit series in HFR, I too observed an uncanny valley that traditional films intrinsically avoid with their low frame rate. The Hobbit films felt more like a play than a film. A play with elaborate stage effects, but a play nonetheless.

In fact, my chief criticism of Jackson's directing with HFR is that the feeling of watching a play is amplified by how he mixes the sound and directs his extras. The extras routinely just mumble nonsense in the background, leaving only the character you're intended to be focused on speaking clearly. It's the same thing you see in a play when there is background dialog, and it's completely unnatural. You find yourself sometimes distracted by the characters in the background and realizing they're not actually doing anything meaningful or having real conversations. For example, in the most recent film, I found myself more distracted by the unnatural audio in early scenes (such as the beach scene) than the HFR video.

Combine that with the poor acting by the minor characters in the first 45 minutes of the recent film and I think HFR gets a bad rap in large part because the Hobbit films alone are our point of reference.


I think that you are spot-on regarding how The Hobbit comes off more as a play than a movie.

Another important issue is the unwitting association we have built up that inversely correlates frame rate and production quality. The movie tradition started as soon as video technology became just barely capable of supporting the medium. The enormous cost of moving beyond that ancient standard has kept movies at 24Hz for more than a human lifetime. Later, cheaper, consumer-oriented tech targeted getting the best quality out of 59.94Hz television displays, so of course they used all 59.94Hz available to them.

Because of this quirk of history, we've had a frame-rate and quality dichotomy, with movies on one side (very high quality, 24Hz) and soap operas/TV sport/live reporting/home movies (low to abysmal production quality, 60Hz) on the other, for everyone's entire lifetime.

When MotionFlow frame rate upscaling TVs first came to market, I randomly polled people I found looking at them at BestBuy about what they thought. Even when watching super-expensive Hollywood productions, the most common response was that it "looked like a high school play." It was ludicrous to look at the enormous scenes, world-shaking special effects and top-bill actors on the screens and say "Yep. High school production." But, people did. And, they couldn't explain why. I think the lifetime spent unconsciously building up a mental model that high frame rate leads to low quality is probably a major factor in people's reaction to 48&60Hz movies.


All you need as proof that HFR looks weird is to watch the first Captain America (from the new series of Marvel films).

I personally think most of the issues come from clear motions - you can see they are using props just because they are too light and don't move like a real object would. There isn't enough blur to fool your brain.


Agree. Assuming the lukewarm reception of HFR for "The Hobbit" hasn't consigned us to another decade or so of 24fps, I am hopeful that the right director and cinematographer will eventually come along and exploit its aesthetic possibilities.

Imagine if "The Matrix" had been shot with the simulations in HFR and the real world in 24fps. Or if Kubrick had a chance to essentially invert the gauzy, candle-lit settings of "Barry Lyndon" and present a world in 8K hyper-reality.


Just riffing on your Matrix example, the now 'classic' example of using a temporal shift to aid story telling is the use of 45 or 90 degree shutter in "Saving Private Ryan" during combat scenes. Changing the shutter angle on a motion film camera is analogous to changing the shutter speed, in this case opting for a faster shutter during the action sequences. This reduces motion blur the same way shooting at a higher frame rate would, albeit keeping the 24p.


While not a fan of The Hobbit's use of HFR (I thought it worked well in mostly CGI action scenes and not at all everywhere else), I too am waiting for someone to use it in the right context.

I don't think Barry Lyndon would work; period pieces, I believe, for the most part work better at 24. But The Matrix very well might. With The Matrix it might work best if it was a mix of HFR and 24: HFR when they are in the matrix and 24 when they are out (of course that brings another set of problems in projecting mixed frame rates in the same movie).

HFR I believe would work really well in sci-fi that is meant to look and feel a bit sterile (think THX1138, The Island, etc.). Or in heavily CGI scenes.

The big problem with The Hobbit is that, since it is the first, there is still a bit of working out the kinks and bugs (lighting to me was a big issue in the first HFR film). Give it time and it might prove an indispensable creative tool.


Maybe I'm in the minority here, but I thought HFR dramatically improved the Hobbit films. The first 5 minutes of each movie felt very odd, as if everyone was moving in fast motion without actually moving any faster (sorry, that's the best way I can describe it), but after those first 5 minutes, the movie looked completely natural and amazing. Especially the 3D, it looked so good that I want every single 3D movie to be produced in HFR now, and I lament the fact that there's no way to get HFR at home.

As for feeling like a play, I guess I didn't pay attention to the background characters much because I never noticed any of this nonsense mumbling that you're talking about.


Well, if you have a PC set up to play videos, there's a system that'll interpolate any video to a higher frame rate: http://www.svp-team.com/

I watched ST TNG using that and it certainly made it feel a bit more realistic.


Most modern TVs tend to have motion smoothing available as an option as well (that page says higher-end TVs but I've seen it on plenty of cheaper TVs as well, though I don't know when that was written). But it's nowhere near as good as HFR, especially in scenes with a lot of motion it tends to make things blurrier.


It depends on the person. For me, the weird effect you had for the first 5 minutes never subsided, and I disliked it all the way till the very end. I felt like it should be out of sync with the sound, because everyone moved in such an unnatural way, but it never was, no matter how hard I looked.


Building on this, I saw the Hobbit the first time in 48fps, and the second time in 24fps.

The panning around at the beginning of the movie was HORRIBLE in 24fps. Visually displeasing. The HFR effect disappeared for me after a few minutes and I could enjoy the film for what it was.


Watch the non-main characters in any recent animated Disney film for the same effect.

They're almost stationary, occasionally blink, have blank looks on their faces, and, if they're the focus of your attention, look _completely unnatural_.

(Source: Parent of small children who started to look at anything else in a scene of Frozen after the 10th time.)


The trouble I have with everyone's evaluation of The Hobbit's HFR footage is that I am apparently the only one in the world who thinks it looks absolutely phenomenal. I think it's a positively game-changing technology, and I hope this is just a short transitional generation of nay-sayers. The fact that I can watch a camera pan in a theater without it looking completely terrible and strobing is eye-opening.


I keep hoping that someone will release an HFR film that is not also in 3D. I saw an extensive demo of HFR at Siggraph a few years back and it was amazing. HFR can be so crisp and clear that you don't even NEED 3d. I have several friends who don't have stereo vision due to various vision issues. They won't go to see the Hobbit films in HFR, since it's tied to 3D, which is worse than useless to them. The industry needs to give HFR a chance to stand on its own.


I believe the play-like feeling can be attributed to the fact that 90% of it was filmed in front of a blue screen. Considering all the goofy camera tricks used to make the characters look proportionally correct, it must be very difficult to act in that kind of situation and have it come off feeling natural. Props to Andy Serkis for rising above the rabble.


Serkis has had a lot more practice.


I saw the first two Hobbit movies in HFR 3D, but the third in 24fps 3D. The third still looked like a play. I attribute it to the frequent use of infinite focus to reduce 3D-induced headaches, or if not that, a stereo separation that was too wide. The sets and locations felt very finite and contained, rather than expansive and overwhelming as in LoTR.


Desolation of Smaug has this issue as well. The only time the film felt truly deep and expansive to me was when Bilbo first stepped into Erebor and you got to see how massive the pile of gold was as it panned through the main chamber. Then it got closed up and tight again shortly thereafter. The few wider shots that followed reeked of bad CGI compositing (especially when Bilbo falls off the platform with the lever when Smaug knocks it down)

I chalked it up to the cinematography just not meshing well with the way the film was shot.


Just my opinion here, but for context you can include virtually any television program that originated as NTSC and was not shot on film and telecined. Just about any soap-opera for example. A more recent example would be NBC's Peter Pan Live, which looked like it was done in native 60p to my eye. Paradoxically it was a play but really would have benefitted greatly from 24p.


On TV you see high frame rate content every day (well, if you still watch traditional television, that is). It's just that big Hollywood movies (and fictional films in general) were not shot in HFR before. But all sports content, many documentaries and most news are 50 to 59.94 Hz (depending on where you live) on traditional TV (not when viewed over the web, of course).


Avatar 2 will likely be 60FPS http://www.es.com/Products/60Frames.html

At the opposite end is Tarantino, who hates digital and is shooting his next film in 70mm. They should be released around the same time, so we will have the best of digital and film to compare.


An article on framerate popped up on HN last year, about the same time, about last year's Hobbit movie. Something that article brought up was lighting: at a higher frame rate, more light hits our eyes. Directors (both full and of Photography) will need to adjust how they light movies - which probably means using less light. Same for things like makeup - they'll need to adjust to viewers being able to see more.


I didn't think of it at the time, but when my son and I went to see the film in imax over the weekend, I felt the need to excuse myself in the first few minutes to get a drink of water. I felt uneasy and my eyes kinda watered, but I eventually got used to it and was okay. I thought that I was feeling some type of way because the 3d never really "works" for me. I guess the frame rate had something to do with it.


> At 48Hz, you’re going to pull out more details at 48Hz from the scene than at 24Hz, both in terms of motion and spatial detail. It’s going to be more than 2x the information than you’d expect just from doubling the spatial frequency, because you’re also going to get motion-information integrated into the signal alongside the spatial information.

I had a conversation with a friend at Pixar about exactly this topic.

The issue goes beyond just pulling more spatial information out of a shorter timeframe; it's also that all the current techniques for filmmaking assume 24fps.

Everything has a budget of time and money, and when you, say, make 1000 extra costumes for a shot, you cut corners in certain ways based on your training as a costume designer. Your training is based on trade techniques, which are based on the assumption that the director of photography (DOP) and director are viewing the work at 24fps with a certain amount of spatial detail. Doubling the frame rate means some of those techniques need to be more detailed, whereas others might be completely useless.

Given everything that goes into a shot (hair/makeup, set design, lighting, costume design, props, pyrotechnics, etc), it's unlikely everyone working on a high-fps film is going to be aware of exactly which techniques do and do not work. As a result, you get lots of subtle flaws exposed that don't work with twice the detail. The sum of these flaws contribute heavily to making the shot look 'videoish'.


Did your friend say what frame rate Pixar uses for its animated films?


Last year, I was told by one of the technical managers for animation at Pixar that they're all in 24 fps. They've used all the increased processing power since the '90s to increase detail, so the result is that the time it takes to render each frame has barely changed since Toy Story. Each frame of Monsters University took an average of 29 hours to render.

This article has some good info: http://venturebeat.com/2013/04/24/the-making-of-pixars-lates...

They seem to choose being film-like over being realistic when they have a choice. In one of their most recent short films, they actually lowered the framerate for a slow-motion effect to simulate a classic film technique.


> Monsters University took an average of 29 hours to render

Wow, when I was doing CGI work in the mid-90s our budget was generally 20 minutes per frame. The highest quality work we did was around 1 hour per frame, and this was for TV adverts and idents -- still, we didn't have Pixar's budgets to play with.


The most interesting point made in the article (for me) is that the presence of noise/grain - which effectively reduces real detail in an individual image - can actually improve the perceived detail across time with high frame rates.

At first, I thought this extra "detail" could be explained as an illusion (since noise/grain can mask a lack of resolution), but then I read the abstract quoted near the end of the article:

"...visual cortical cell responses to moving stimuli with very small amplitudes can be enhanced by adding a small amount of noise to the motion pattern of the stimulus. This situation mimics the micro-movements of the eye during fixation and shows that these movements could enhance the performance of the cells"[1]

So if I understand right, since the biological systems are tuned to extract extra detail via supersampling across time, and a small amount of noise/grain can enhance that ability (mimicking natural movement of the eye), it actually helps our visual system extract more real detail.

It seems counterintuitive to add noise for more detail, but the explanation is fascinating.

[1] Stochastic resonance in visual cortical neurons: does the eye-tremor actually improve visual acuity? – Hennig, Kerscher, Funke, Wörgötter; Neurocomputing Vol 44-46, June 2002, p.115-120
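A toy illustration of the stochastic resonance effect described in that abstract (this is my own minimal threshold-detector model, not the paper's simulation; the signal, threshold, and noise levels are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 2000)
    signal = 0.8 * np.sin(2 * np.pi * 5 * t)   # sub-threshold: never reaches 1.0
    THRESHOLD = 1.0

    def detections(noise_std):
        # A crude "neuron" that fires whenever signal + noise crosses the threshold.
        noisy = signal + rng.normal(0.0, noise_std, t.size)
        return np.count_nonzero(noisy > THRESHOLD)

    print(detections(0.0))   # 0: with no noise the sub-threshold signal is invisible
    print(detections(0.2))   # > 0: a little noise produces crossings clustered at the peaks
    print(detections(5.0))   # many crossings, but they no longer track the signal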


I wonder how that is related to the dithering algorithms in music production -- the ones that add noise when downsampling an audio track.


Actually, these are two completely distinct phenomena.

Dithering exists in both the visual and the audio realms, as corysama pointed out in their linked article [1]. The term's use is identical in both realms, but neither is really related to stochastic resonance.

Dithering is a technique of adding noise to disguise sampling boundaries. It hides unrealistic artifacts from digital downsampling of analog signals, but it does not add more realistic detail. Critically, it does not increase the signal-to-noise ratio.

The phenomenon being described here occurs on a biological level. As described above, the additional noise improves your visual acuity. That is, you can extract /additional/ details with the noise added which could not be resolved before. This is due to a nonlinear change invoked in the system receiving the signal, not from a change to the signal itself. So, while adding the noise initially decreased the signal-to-noise ratio, the overall effect on the entire system is an increase in signal-to-noise ratio.

Here the mechanism, explained in the Hennig paper, appears to be purely biological, but stochastic resonance is not strictly a biological phenomenon; it can also be seen in the electrical domain [2].

[1] http://loopit.dk/banding_in_games.pdf
[2] http://en.wikipedia.org/wiki/Stochastic_resonance
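To make the dithering side concrete, here's a tiny sketch of coarse quantization with and without dither (a toy example of mine, not from either link; the ramp and level count are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 16)          # a smooth ramp
    levels = 4                             # deliberately coarse quantizer

    plain    = np.round(x * (levels - 1)) / (levels - 1)
    dithered = np.round(x * (levels - 1) + rng.uniform(-0.5, 0.5, x.size)) / (levels - 1)

    print(plain)      # long runs of identical values: visible banding
    print(dithered)   # the same coarse levels, but the steps are broken up by noise;
                      # averaged over many samples the ramp comes back, yet no single
                      # value is any more precise than before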



Did you just explain why some people experience visual snow? http://en.wikipedia.org/wiki/Visual_snow


Wow, I never knew. I have this at a low level. Always have.

Here's a mildly interesting thing: Say you are at a stop light, and you want to get the jump on the other guy.

Jiggle your eyes rapidly around the light, and you will pick up the change more quickly than other people do.

Eyes do not have frame rates. They sample in a staggered fashion -- some rods/cones before others, the inner part of the eye at higher priority, the outer at lower priority -- with movement getting priority over detail overall.


I personally love HFR and have gone out of my way to watch all three Hobbit films in HFR (I traveled to Paris, the only place in France where they have it in HFR).

When people complain about 48fps being weird, I just feel like they're not used to it. It does look weird, but after 20 minutes it looks amazing. I'm personally tired of not understanding anything in action movies that use 24fps. It is kind of a luxury for the eyes to have 48 fps, and I predict that in a few years we'll have the same debate we have with consoles now (60 fps is better than 30 fps).

We got used to 24 fps, and so we're making justifications for why it looks better, when it clearly doesn't if you take a step back.


I'm not surprised by average viewers being taken aback by HFR. What surprises me is how many ostensibly serious film critics are flat-out rejecting it. I just can't believe that's anything other than clearly being on the wrong side of history, like people who probably complained about audio in films, color films, or CGI.


Just because it's new doesn't mean it's better. And complaining doesn't automatically place one on the wrong side of history.

See 3D films. It keeps reappearing and with it the usual stream of complaints and people dismissing those complaints. However, audiences end up rejecting it after a while, for their own fuzzy reasons that may or may not be related to the complainers' arguments. This is already happening to the current 3D technology, I wonder why...

HFR fits this perfectly. It's new, but it isn't clear if it will hold with audiences. It's the audiences that decide if it's better.

As for CGI... That's a bad example. CGI is generally a good thing, but too much of it isn't.


I don't think it's better because it's new. I think it's better because it's a higher frame rate. It's not some new stylistic choice, it's just more information. It's better in the same way that a higher bandwidth Internet connection is better, or a phone with longer battery life.

I also don't think that 3D film is better in the same absolute sense, although I actually do think 3D is pretty good in theaters. The dizziness complaints about 3D are very valid because of physical implementation details, but I don't think the same about similar complaints about HFR.


People are extremely defensive about the 24 frames, so I feel for you trying to take a stance.

When I watched the Hobbit I thought to myself that some of the breathing movements in the CGI were amazingly fluid, and they kind of reminded me of a computer game.

And then I thought, what a loss to Hollywood, that beautifully fluid movements have come to be associated with computer games and not films.


You're trying to make an opinion a fact. "Better" has always been subjective when it comes to the senses, and I suspect it always will be. See the whole ridiculous audiophile cottage industry that can sell people $900 wooden volume knobs for their hi-fi systems.


That's fair. 'Better' is complex.

Perhaps 'closer to reality' is a better statement. Reality doesn't render at 24fps.


Subjectivity is not the same thing as placebo. There has to be a perceptible difference to have a real preference.


People need to take a step back and remember that film is art, it's not always meant to be the best possible depiction of reality.

48 fps can be better, but 24 fps can too, it's another tool for a director to use.

The issue is when people start trying to prescribe 48 fps, or 24 fps, as inherently superior; it's all situational.


We can still watch an old Chaplin movie and love it. Because it's good. But if new movies were released in 16fps, nobody would watch them (if it's just one movie people might go because of the novelty). Especially for action movies, there is no excuse not to reach 48, or better 60 fps, since we now have the technology to do this.

Peter Jackson and James Cameron are pioneers, and they will have the knowledge and the technique before everyone else when everyone starts making 60fps movies.


In action movies, they often use an even faster shutter to reduce motion blur. The choppy motion makes hits look faster and harder. Lately I've noticed they'll even drop several frames from the middle of a punch to make it look harder.


I brought a friend to watch The Hobbit in HFR. He is very much a couldn't-care-less and very non-technical person. After the show he asked me why the movie didn't look like a movie but like TV video. He said the mountains looked plain, not as grand as in LOTR.


When something is clearly better than a previous technology, you "get used to it" pretty quickly.

Nobody complained when the first LCD monitors started replacing CRTs.

When then first "retina" displays appeared, the improvement over lower DPIs screens was clear, and nobody complained it was "too sharp".

It's clear that in the HFR case the issue is not so clear-cut: the fact that there are discussions about it (and articles like this one that try to give a scientific basis to the fact that many do not like HFR movies) shows that it's not just a matter of "getting used to it".


Actually, if you look back via the web archive, at the time when Retina appeared, a lot of people complained about it.

Doesn’t really bring any advantage, takes too much processing power, people can’t see that sharp anyway, "too sharp images hurt my eyes", etc.

It was exactly the same as with 3D or CGI, even though it is definitely useful.


Have you (or anybody else here) tried interpolation? What's your opinion of it?


So, basically, at 24FPS things are blurry enough that you can't see the fine details, which means that special effects and costumes look realistic.

Increase the frequency to 48FPS and the blur goes away, meaning that we can see the fine detail, and suddenly sets look like sets, costumes look like costumes, and CGI looks like a computer game.


To be fair, CGI creatures look like a computer game regardless. A very pretty computer game, but still.

Sometimes it seems like people want to believe that CGI is a whole lot better than it actually is. People raved about Gollum in the LOTR films, but go back and look at it. Even at 24fps, it's not great. It certainly doesn't look real.


I agree with this but I think it is more complicated than people "wanting to believe CGI is better than it is", it is more like some sort of innate suspension of disbelief we all have as long as what we are seeing is as good or better than what we've seen before.

I lived through the entire CGI revolution in movies, from the early days of Tron and the "Genesis effect" sequence of Star Trek II to now. The way the CGI in all the movies in between looks to me now is very different from how it looked to me when first viewed, and as a graphics nerd I was always interested in and up to date on how the technology was done. I don't believe my brain "wanted" to believe CGI was better than it was then; I just had no context for how it would look when it was done even better, and as the goalposts moved, what came before looked increasingly bad in comparison.


CGI backgrounds are very good though. Most people are not aware of just how much of Fight Club is CGI.

https://www.youtube.com/watch?v=Dlpr6CnKDFM


Incidentally, have you re-watched Fight Club recently? I remember when I watched it in 1999 I was completely blown away by the "impossible" cuts. I watched again a couple of months ago, and all I could think was that it looked like a Half Life 2 cut-scene. The standard for CGI rendering has gotten a lot higher. :)


go watch Interstellar. first movie where it was really, really hard to spot the transitions to CGI.

also, you'd be surprised how much CGI now is being used in TV series. the showreels are on vimeo, some amazing work.

example, game of thrones, season 4: https://www.youtube.com/watch?v=jK73bCuJXc8


I have wondered about this as well. Maybe the audience does not care so much, and thus it makes no sense to stay with only the stuff that can be pulled off within budget. That said, there are more and more things that can be done well every year. I think something like Gravity looked pretty cool, and it will be interesting to see if it still looks good a few decades down the road.


There weren't many organic things in Gravity. Mechanical things are much easier to get to look right. (The easiest would be something made of plastic, I guess.)


Could that be due to the brightness difference between seeing it in the theatre and watching it on your TV? TVs tend to be set brighter.


> suddenly sets look like sets

This really puzzles me. I was an extra in LOTR 3, and was on some of these sets. They _don't_ look like sets. The Weta people put insane energy and time into making them look and feel realistic: dirt on the floors, dirt on the costumes, peeling paint, heavy chain mail (even though it's electroplated plastic, it's still heavy!)... the list could go on and on and on.


It's not just that the lack of realism jumps out. Everyone instinctively knows movies look strange compared to real life, but this dream-like quality is part of what makes them so seductive.


Exactly, though movies often aim not at realism but something better. How to achieve that?

So far, it's taken a lifetime of trial and error in every aspect of movie making to find what works (which happened to be at 24fps). 48fps might take the same.


Immersion. It doesn't have to be real, it just has to take you there fully enough for you to have an experience intended by the creator.



I disagree with both explanations:

1/ The "soap opera effect" explains the 48 fps issue.

2/ The lack of motion blur in games is the reason why higher fps are better (see https://www.shadertoy.com/view/XdXXz4 for a great visualisation).


There's also a high-level processing aspect. The brain excels at extracting relevant information, which includes discordant information. Back in the day, a solo violin was tuned slightly off to allow the audience to hear it over the orchestra. Barthes also came up with the "punctum" idea, whereby an odd detail in a picture will generate an impression. What I'm saying is that higher-level processing is probably responsible for a number of "impressions" that might have little to do with fps.


Most serious FPS gamers swear by screens that have a higher update rate than 60hz.

In the past this was achieved by setting your CRT to a low resolution and upping the refresh rate. More recently you can get TN LCD panels that offer 120 or 144hz update rates.

Moving the mouse in small quick circles on a 144hz screen compared to a 60hz screen is a very different experience. On a 60hz screen you can see distinct points in the circle where the cursor gets drawn. With 144hz you can still see the same effect if you go fast enough, but it is way smoother.

This makes a huge difference for being able to perceive fast paced movements in twitch style games and is the reason there has been a shift to these monitors across every competitive shooter.

My thoughts on this is that this behavior is similar to signal sampling theorems. Specifically the Nyquist theorem talks about how you have to sample at at least 2x the max frequency of a signal to accurately represent the frequency. For signal generation this means that you have to generate a signal at at least twice the rate of the max frequency you want to display. If you want to accurately reconstruct the shape of that signal you need 10x the max frequency (for example two samples in one period of a sine wave makes it look like a sawtooth wave, ten samples makes it look like a sine wave).

So, if you're moving your mouse cursor quickly on a screen or playing a game with fast paced model movement even if your eyes can only really sample at something like 50-100hz the ideal monitor frequency might be 1000hz. There's a lot of complexity throughout the system before we can get anything close to this (game engines being able to run at that high of a framerate, video interfaces with enough bandwidth to drive that high of a framerate, monitor technology being able to switch the crystals that fast, etc.).

Yes, 48fps movies typically look less cinematic, but I think this is a flaw of movie-making technology and not of the framerate. The fight scenes in the Hobbit sometimes look fake because you can start to tell that they aren't actually beating up the other person. This detail is lost at 24fps, which is why they have been able to use these techniques.


2 samples of a sine wave does not result in a sawtooth wave being reproduced by the DAC. A perfect sawtooth wave actually contains infinitely high frequency content and thus can't be perfectly represented digitally.

Check out this video http://xiph.org/video/vid2.shtml which was recently posted here. Also the wikipedia page on sawtooth waves has an animation showing additive synthesis of a sawtooth wave: http://en.wikipedia.org/wiki/Sawtooth_wave


I just upgraded to a 144hz monitor and a 290 this Christmas, to celebrate them working on Mesa. And yeah, they work pretty flawlessly, even in my now 3 monitor setup.

And Quake. Holy shit. Playing that game at 144hz makes it feel incredibly real; even if it's blocky and pixelated, the movements are incredibly organic and the camera turning feels like a head turning rather than spinning around on Google Maps.


Wait a second.. can you explain the 10x the max frequency part to accurately reconstruct the shape of the signal?

It's my understanding that you just need 2x (two points in a sine wave) to construct a unique wave. If you're getting a sawtooth, it means that you're sampling a wave that is composed of very high frequencies, and you're accurately sampling it, so a DAC can reconstruct it uniquely.


There's some discussion of it at the beginning of this article: http://www.ni.com/white-paper/10669/en/


What that whitepaper is saying is that "if you only sample 2xMaxFreq and then connect the dots with straight lines it doesn't really look like a sine wave so buy 5x as much instrument from us". That's a total cheat as that sawtooth graph they show is only possible if you allow higher frequencies. If the signal is bandwidth limited at the frequency of the sinewave the points you sample at 2xFreq only have one possible solution for the graph (the sinewave again). There are some great videos about this recently by xiph's monty:

https://www.xiph.org/video/

So if you sample 2xMaxFreq you have samples that describe the full signal and can reconstruct it exactly. So if our eyes really are 100Hz we can't see anything above 50Hz. That seems to align well with the ~50/60Hz threshold for flicker free viewing. Apparently higher framerates are only useful for when we have fast movement across the field of view which would be the case for FPS:

https://en.wikipedia.org/wiki/Flicker_fusion_threshold#Visua...


I just finished going through a Fourier transform course. The technical answer is that you don't interpolate the samples with lines, but with the sinc function. The sinc function is built from a sine (sin(x)/x), so it naturally reconstructs band-limited waves. In this case 2x the max frequency is enough to reproduce the signal exactly. Using linear interpolation in the whitepaper is a blatant lie.

>So if our eyes really are 100Hz we can't see anything above 50Hz.

I'm not sure this follows as we're not perceiving waveforms when light hits our eyes, but we're perceiving intensity of energy hitting our receptors.
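A quick numerical check of the sinc-interpolation point above (standard Whittaker-Shannon reconstruction over a finite window; the 1 Hz tone and 2.5 Hz sampling rate are just example values):

    import numpy as np

    fs = 2.5                      # sampling rate, a bit above 2x the 1 Hz tone below
    n = np.arange(-50, 51)        # finite window of sample indices
    samples = np.sin(2 * np.pi * n / fs)

    def reconstruct(t):
        # Whittaker-Shannon: sum of samples weighted by shifted sinc kernels.
        return np.sum(samples * np.sinc(fs * t - n))

    ts = np.linspace(-2.0, 2.0, 9)
    err = max(abs(reconstruct(t) - np.sin(2 * np.pi * t)) for t in ts)
    print(err)   # small, and it shrinks as the window grows: the reconstruction is
                 # the smooth sine itself, not a connect-the-dots sawtooth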


This paper has a lot of false information in it. The sawtooth wave example is just not correct. There is exactly one band-limited (i.e. no frequencies greater than half the sampling frequency) waveform that corresponds to a set of samples. In the case of a sine wave sampled at twice the frequency, that solution is the exact sine wave that was produced. The video I linked to above has a demonstration of this signal reconstruction, using an analog oscilliscope to show that sine waves are reconstructed perfectly when sampled at only 2x the fundamental frequency.


Ah ok, so here I think is the slight confusion.

If you make the constraint/assumption during reconstruction that you are rebuilding a time-domain signal composed of a series of sinusoids, then you're in the clear at just 2x sampling. For example, Figure 2 in the article states that 2x sampling only provides frequency information, and not amplitude and shape. This is true if we assume that we're trying to directly reconstruct -any- periodic signal; if we sample at only 2x the signal's fundamental frequency, we are in fact stuck.

This can certainly cause confusion. So I think the usual way (I just dinker with DSP for funsies and a little bit at work, so I might have got it mangled) to deal with this confusion is to remember that sawtooth and square (and whatever) signals are chock-full of high harmonics that must also be sampled at or beyond the Nyquist limit for you to be able to reconstruct them.


I see the same arguments arise about HFR as I do with stereoscopy, and the rhetoric follows the same path as the switch from vinyl to digital music formats: it is no longer art. It feels like you lose the artistic effect when you add a multiple of the information reaching your brain. The reality is that artists need to learn how to be mindful of the new medium, and the old tricks they used to overcome older mediums' defects need to be removed from the process (e.g. overuse of makeup). I am excited because we have a bright future with better media technology, and pioneers like James Cameron are leading the way.


Film is art, it's important people remember that, HFR is just another tool - it shouldn't be forced upon people.

You won't see people claiming 3D is an inherently superior format to film in; we shouldn't see the same for HFR.

Conversely if a director feels it's best for their film to use HFR, in full or in parts, people shouldn't be jumping on their back about it until they've seen the end product.


James Cameron (Titanic, Avatar, etc.) wants to get frame rates up to at least 48FPS. He considers that more important than resolution, pointing out that higher resolution only benefits the first three rows in a theater.

With the low 24FPS frame rate, pans over detailed backgrounds look awful. This is a serious constraint on filmmaking. Cameron's films tend to have beautifully detailed backgrounds, and he has to be careful with pan rates to avoid "judder". "The rule of thumb is to pan no faster than a full image width every seven seconds, otherwise judder will become too detrimental."(http://www.red.com/learn/red-101/camera-panning-speed)

There are some movies from the 1950s and 1960s where this is seriously annoying. That was when good color and wide screen came in, and films contained gorgeous outdoor shots of beautiful locations. With, of course, pans. Some of the better Westerns of the period have serious judder problems. Directors then discovered the seven-second rule. Or defocused the background slightly, if there was action in the foreground. Some TVs and DVD/BD players now have interpolation hardware to deal with this.

The author's analysis of the human visual system is irrelevant for pans. For pans, the viewer's eyes track the moving background, so the image is not moving with respect to the retina.
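Putting rough numbers on that rule of thumb (my own arithmetic, assuming a 1920-pixel-wide frame for illustration):

    fps = 24
    pan_seconds = 7          # "no faster than a full image width every seven seconds"
    width_px = 1920

    frames_per_pan = fps * pan_seconds          # 168 frames for a full-width pan
    px_per_frame = width_px / frames_per_pan    # ~11.4 px of background shift per frame
    print(frames_per_pan, round(px_per_frame, 1))
    # At 48fps the same pan speed halves the per-frame jump (~5.7 px), which is a big
    # part of why higher frame rates make fast pans look so much less juddery.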



> Add temporal antialiasing, jitter or noise/film grain to mask over things and allow for more detail extraction. As long as you’re actually changing your sampling pattern per pixel, and simulating real noise – not just applying white noise – you should get better results.

This could be a viable alternative to supersampling for antialiasing. Rather than averaging multiple subsamples for each pixel fragment, this suggests that if a single subsample were taken stochastically, the results could be as good, or even better, so long as the frame rate stays high enough.

Antialiasing doesn't quite have the same impact on rendering performance in modern games that it used to, mainly due to new algorithms such as SMAA and the increased cost of postprocessing relative to rasterisation, but this could nonetheless lead to tangible performance improvements.
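A minimal sketch of that idea: one jittered sample per pixel per frame, blended over time (this is a toy accumulation scheme of mine, not an existing API; real TAA-style implementations also reproject history with motion vectors and clamp it):

    import numpy as np

    rng = np.random.default_rng(2)
    H, W = 4, 4
    history = np.zeros((H, W))
    ALPHA = 0.1                      # exponential blend weight per frame

    def shade(y, x):
        # Stand-in for shading a subpixel position: a hard diagonal edge.
        return 1.0 if (x + y) > 4.0 else 0.0

    for frame in range(64):
        jy, jx = rng.uniform(0.0, 1.0, size=2)   # one random subpixel offset per frame
        sample = np.array([[shade(y + jy, x + jx) for x in range(W)] for y in range(H)])
        history = (1.0 - ALPHA) * history + ALPHA * sample   # accumulate across frames

    print(history.round(2))   # pixels along the edge converge to fractional coverage,
                              # i.e. the edge gets antialiased without supersampling
                              # within any single frame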


Does anyone know of a good demo of different frame rates that I can view on a laptop? Is this even possible with LCDs?


Can it be that the description of 24 fps as "dreamy" is subjective? Because usually my dreams don't have such an effect. I like plays, and I liked the 48 fps Hobbit.

Maybe it's like how, in the days of monochrome media, black-and-white dreams were the norm, but today they are the exception.


I dug into high FPS film when I read that 24 fps was designed to be viewed in a dark theatre, where human eyes blur images due to switching between rods and cones.

Most of us no longer watch content in darkness. James Cameron is of the opinion that improving FPS is more significant than moving up from HD. I figured I should trust the professional who devotes his life to this.

To truly evaluate high FPS movies and video content, you have to watch it for a while.

The SmoothVideo Project (SVP) is pretty awesome. Needs some good hardware, made by volunteers, and needs some work to get set up well.

It struggles in scenes with lots of detail, but panning scenes are incredibly beautiful.

Going back is a bit difficult.


>>>To truly evaluate high FPS movies and video content, you have to watch it for a while.

Simply doubling the framerate of existing film is the wrong approach. To truly evaluate high FPS the director must take the framerate into account during filming.

I'll use SVP/InterFrame with low FPS sports, homemade video and occasionally anime but NEVER live action film. It cheapens the whole experience and undoes everything the director intended.


I don't quite understand. If a video is playing at 41fps, then your eye can sample each frame twice, with a difference of one microtremor to increase resolution. But if a video is playing at 83fps, you only get one sample per frame with no added benefit from the microtremor. The article states the opposite: that the latter framerate allows for a higher perceived resolution. Can anyone explain?



Is anyone else redirected to a 403 error on a completely different site (broadbandtvnews) when following the link?


But... UbiSoft said some games are better and more "cinematic" at 30fps. Derp!



