Joe Kilner - One extra issue with games is that you are outputting an image sampled from a single point in time, whereas a frame of film / TV footage is typically an integration of a set of images over some non-infinitesimal time.
This is something that, once stated, is blatantly obvious to me, but it's something I simply never thought deeply about. What it's saying is that when you render a frame in a game, say the frame at t=1.0 in a game running at 60 FPS, what you're doing is capturing and displaying the visual state of the world at a discrete point in time (i.e. t=1.0). Doing the analogous operation with a physical video camera means you are capturing and compositing the "set of images" between t=1.0 and t=1.016667, because the physical camera doesn't capture a discrete point in time, but rather opens its shutter for 1/60th of a second (0.016667 seconds) and captures for that entire interval. This is why physical cameras have motion blur, but virtual cameras do not (without additional processing, anyway).
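To make that concrete, here's a minimal sketch in Python (the scene, speed, and sample count are made up for illustration, not any engine's actual method): the "game" frame samples a single instant, while the "film" frame averages sub-instants across the 1/60 s shutter interval and smears the moving object.

    import numpy as np

    WIDTH = 64
    FPS = 60
    SPEED = 600.0  # object speed in pixels per second (illustrative value)

    def scene_at(t):
        """Instantaneous image: one bright pixel at the object's position."""
        img = np.zeros(WIDTH)
        img[int(SPEED * t) % WIDTH] = 1.0
        return img

    t = 1.0
    game_frame = scene_at(t)  # a single instant: perfectly sharp, no motion blur

    # "Film" frame: integrate 16 sub-samples across the 1/60 s shutter interval.
    shutter = 1.0 / FPS
    film_frame = np.mean([scene_at(t + shutter * i / 16) for i in range(16)], axis=0)

    print(np.count_nonzero(game_frame))  # 1   -> a sharp point
    print(np.count_nonzero(film_frame))  # ~10 -> smeared along the motion path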
This is obvious to anyone with knowledge of 3D graphics or real-world cameras, but it was a cool little revelation for me. In fact, it's sparked my interest enough to start getting more familiar with the subject. I love it when that happens!
Some renderers even take advantage of that to increase performance, since you get a more human-oriented feel by rendering less precisely.
This would give you the same "adaptive per-pixel updating" seen in your link, though primarily to tackle the problems with HMDs (low-persistence at high frame-rates).
It's a proprietary nVidia technology that essentially does reverse V-Sync. Instead of having the video card render a frame and wait for the monitor to be ready to draw it like normal V-Sync, the monitor waits for the video card to hand it a finished frame before drawing, keeping the old frame on-screen as long as needed. The article goes into a little more detail; they take advantage of the VBLANK interval (legacy from the CRT days) to get the display to act like this.
It's a good way to start building an intuition about state dependencies.
It's the same kind of problem you get when you try to talk about objective/subjective morality, or whether science can prove things.
Another example I give:
The Sun revolves around the Earth -- True from Earth's perspective.
The Earth revolves around the Sun -- True from Sun's perspective.
They both revolve around their center of mass -- True in Newtonian model.
They both follow straight geodesics in spacetime -- True in Einsteinian model.
Everything is true in some approximation. The idea is to increase your precision so that you can find truths that incorporate more information and thus provide greater insight and generality. If your language doesn't have enough 'bandwidth' to carry the information of -- that is, mimic the structure of -- your experiences, you have to develop more structured abstractions, or you'll lose clarity and expressive power.
So, for instance, you wouldn't have to prove that unicorns don't exist: you just have to show that providing more specific information doesn't result in a lack of clarity in a general model. A theory of three-legged unicorns offers no advantages over a model of n-legged unicorns because unicorns don't exist.
But now I've gone off the deep end...
I'm saying this also because the initial analogy wasn't so strong to begin with. A bunch of incorrect analogies can build up a wrong picture of a theory that is hard to dislodge from people's minds. Which is why it's good to reason through principles and not analogies (analogies are useful for other purposes, I believe -- building bridges between the understanding of two different fields -- provided they're made precise and clear about their shortcomings).
The extrapolation here really has little to do with quantum physics. I am also using QM as an analogy for linguistics. The theme is about how language captures information and how information-processing systems select output based upon linguistic structure. It's a very young and poorly understood subject, so I don't think it's very likely to get good theoretical work in a forum like this. These are primitive times, after all.
*A linear motion blur of a 2D picture is a 1D box filter, whose frequency response is a sinc: information is completely lost only at spatial frequencies that are multiples of 1/d in the direction of motion, where d is the linear displacement, and is otherwise attenuated.
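For anyone who wants to see those spectral zeros numerically, here's a quick sketch (assuming a blur of d = 8 pixels, an arbitrary choice): the motion-blur kernel is a length-d box, and its magnitude response drops to (nearly) zero at multiples of 1/d cycles per pixel.

    import numpy as np

    d, N = 8, 512  # blur length in pixels and FFT size (illustrative values)
    kernel = np.zeros(N)
    kernel[:d] = 1.0 / d  # box filter: average over d pixels along the motion

    response = np.abs(np.fft.rfft(kernel))
    freqs = np.fft.rfftfreq(N)  # spatial frequency in cycles per pixel

    # Magnitude is ~0 at multiples of 1/d = 0.125 cycles/pixel, attenuated elsewhere.
    for k in range(1, 4):
        idx = np.argmin(np.abs(freqs - k / d))
        print(freqs[idx], response[idx])  # 0.125, 0.25, 0.375 -> all ~0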
The article wanders on and on, but is simply grasping at the much more learned aesthetic repulsion of motion blur.
24 and 25 fps motion blur (i.e. 1/48th- and 1/50th-second exposures with a 180° shutter) has defined the cinematic world for over a century.
Video? 1/60th. Why the aesthetic revulsion? While I am certain this is a complex sociological construct, there certainly is an overlap with lower budget video soap operas of the early 80's. Much like oak veneer, the aesthetic becomes imbued with greater meaning.
The Hobbit made a bit of a curious choice for its 48fps presentation in choosing a 270° shutter. An electronic shutter can operate at 360°, which at 48fps would have delivered the historical 1/48th-second motion blur.
Instead, the shutter ended up being 1/64th, triggering those all-too-unfortunate cultural aesthetic associations with the dreaded world of low-budget video.
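For reference, the exposure time implied by a shutter angle is just (angle / 360°) divided by the frame rate; a quick check of the numbers above:

    def exposure(shutter_angle_deg, fps):
        return (shutter_angle_deg / 360.0) / fps

    print(1 / exposure(360, 48))  # 48.0 -> 1/48 s, the historical blur, at 48 fps
    print(1 / exposure(270, 48))  # 64.0 -> 1/64 s, what The Hobbit ended up with
    print(1 / exposure(180, 24))  # 48.0 -> 1/48 s, the classic cinema look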
It should be noted that there are some significant minds that believe in HFR motion pictures, such as Pixar's Rick Sayre. However, a disproportionate number of DPs have been against it, almost exclusively due to the motion blur aesthetic it brings, and the technical challenges of delivering to the established aesthetic within the constraints of HFR shooting.
Shooting at the slowest shutter speed possible should make the most fluid video.
This effect is now very common for action scenes in movies and also tons of music videos. Very easy to spot once you are aware of it.
Now, you're correct that the actual effect is done by changing the shutter speed, but the loss of light is often compensated for by using a faster film, since using a larger aperture has a more significant effect on the scene in the form of DOF.
Now, in video games we do have methods to help simulate inter-frame motion blur. The most commonly used is to build a buffer of inter-frame motion vectors; this is often generated from game state, but can also be re-derived by analysis of the current and previous frames to some effect. Then you do a per-pixel blur based on the direction and magnitude of the motion vectors, which often works to good effect.
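A rough sketch of that per-pixel velocity blur in Python (the function name, sample count, and the assumption of a grayscale frame plus a ready-made motion-vector buffer are all illustrative, not any particular engine's implementation):

    import numpy as np

    def motion_blur(frame, motion_x, motion_y, samples=8):
        """Blur each pixel along its motion vector.

        frame: HxW grayscale image; motion_x/motion_y: HxW per-pixel motion in pixels.
        """
        h, w = frame.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        out = np.zeros(frame.shape, dtype=float)
        for i in range(samples):
            t = i / (samples - 1) - 0.5  # step from -0.5 to +0.5 along the vector
            sx = np.clip(xs + motion_x * t, 0, w - 1).astype(int)
            sy = np.clip(ys + motion_y * t, 0, h - 1).astype(int)
            out += frame[sy, sx]  # gather samples along the motion direction
        return out / samples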
So, you could render 16 samples, but only do 1/4 of the pixels on each so you'd get better motion blur for only a small increase in work over doing 4 temporal samples. The extra work corresponds to a bit of bookkeeping and the tweening of 16 frames instead of 4 and you'd still be rendering the same number of samples overall.
So over-framing game frames will not produce motion blur, it'll simply merge two still images together. You need to simulate motion blur (usually as a post-process). This of course takes more time to render, potentially lengthening the frame times.
Though in practice it would be difficult to render more than a few snapshot frames between the display's refreshes, and with a low sampling rate there would be noticeable errors, particularly if you take a screenshot of a fast-moving object.
For instance, if you have a digital camera where you can select the framerate (pretty common these days) and the exposure time were simply the frame period, it would mean that the image at 30fps would be exposed twice as long as at 60fps and the resulting picture would look very different.
Of course you can mitigate that by changing the aperture and other parameters, but in my experience you can in practice select the exposure time and framerate independently on digital cameras. With a very sensitive sensor and/or good lighting you can achieve very short exposure times, much shorter than the frame period. If you're filming something that moves, then unless you want blur on purpose, you probably want to reduce the exposure time as much as possible in order to get a clean image, just like in video games.
However, anecdotally speaking, the concern I have with evaluating high-frame rate in film is that we have very little context—most of us have only ever seen Peter Jackson's Hobbit films in HFR. In other words, I have never seen how other directors' work would be affected by HFR.
Speaking exclusively about the Hobbit series in HFR, I too observed an uncanny valley that traditional films intrinsically avoid with their low frame rate. The Hobbit films felt more like a play than a film. A play with elaborate stage effects, but a play nonetheless.
In fact, my chief criticism of Jackson's directing with HFR is that the feeling of watching a play is amplified by how he mixes the sound and directs his extras. The extras routinely just mumble nonsense in the background, leaving only the character you're intended to be focused on speaking clearly. It's the same thing you see in a play when there is background dialog, and it's completely unnatural. You find yourself sometimes distracted by the characters in the background and realizing they're not actually doing anything meaningful or having real conversations. For example, in the most recent film, I found myself more distracted by the unnatural audio in early scenes (such as the beach scene) than the HFR video.
Combine that with the poor acting by the minor characters in the first 45 minutes of the recent film and I think HFR gets a bad rap in large part because the Hobbit films alone are our point of reference.
Another important issue is the unwitting association we have built up that inversely correlates frame rate and production quality. The movie tradition started as soon as the technology became just barely capable of supporting the medium. The enormous cost of moving beyond that ancient standard has kept movies at 24Hz for more than a human lifetime. Later, cheaper, consumer-oriented tech targeted getting the best quality out of 59.94Hz television displays, so of course it used all 59.94Hz available to it.
Because of this quirk of history, we've had a frame rate & quality dichotomy with movies on one side (very high quality & 24hz) and soap operas/TV sport/live action reporting/home movies (low to abysmal production quality and 60Hz) on the other for the entire life of everyone.
When MotionFlow frame rate upscaling TVs first came to market, I randomly polled people I found looking at them at BestBuy about what they thought. Even when watching super-expensive Hollywood productions, the most common response was that it "looked like a high school play." It was ludicrous to look at the enormous scenes, world-shaking special effects and top-bill actors on the screens and say "Yep. High school production." But, people did. And, they couldn't explain why. I think the lifetime spent unconsciously building up a mental model that high frame rate leads to low quality is probably a major factor in people's reaction to 48&60Hz movies.
I personally think most of the issues come from motion being too clearly visible - you can see they are using props just because they are too light and don't move like a real object would. There isn't enough blur to fool your brain.
Imagine if "The Matrix" had been shot with the simulations in HFR and the real world in 24fps. Or if Kubrick had a chance to essentially invert the gauzy, candle-lit settings of "Barry Lyndon" and present a world in 8K hyper-reality.
I don't think Barry Lyndon would work; period pieces, for the most part, I believe work better at 24. But The Matrix very well might. With The Matrix it might work best as a mix of HFR and 24: HFR when they are in the Matrix and 24 when they are out (of course that brings another set of problems in projecting mixed frame rates in the same movie).
HFR I believe would work really well in sci-fi that is meant to look and feel a bit sterile (think THX1138, The Island, etc.). Or in heavily CGI scenes.
The big problem with The Hobbit is that, since it was the first, there was still a bit of working out of kinks and bugs (lighting, to me, was a big issue in the first HFR film). Give it time and it might prove an indispensable creative tool.
As for feeling like a play, I guess I didn't pay attention to the background characters much because I never noticed any of this nonsense mumbling that you're talking about.
I watched ST TNG using that and it certainly made it feel a bit more realistic.
The panning around at the beginning of the movie was HORRIBLE in 24fps. Visually displeasing. The HFR effect disappeared for me after a few minutes and I could enjoy the film for what it was.
They're almost stationary, occasionally blink, have blank looks on their faces, and, if they're the focus of your attention, look _completely unnatural_.
(Source: Parent of small children who started to look at anything else in a scene of Frozen after the 10th time.)
I chalked it up to the cinematography just not meshing well with the way the film was shot.
The opposite is Tarantino, who hates digital and is shooting his next film in 70mm. They should be released around the same time, so we will have the best of digital and film to compare.
I had a conversation with a friend at Pixar about exactly this topic.
The issue goes beyond just pulling more spatial information out of a shorter timeframe; it's also that all the current techniques for filmmaking assume 24fps.
Everything has a budget of time and money, and when you, say, make 1000 extra costumes for a shot, you cut corners in certain ways based on your training as a costume designer. Your training is based on trade techniques, which are based on the assumption that the director of photography (DOP) and director are viewing the work at 24fps with a certain amount of spatial detail. Doubling the frame rate means some of those techniques need to be more detailed, whereas others might be completely useless.
Given everything that goes into a shot (hair/makeup, set design, lighting, costume design, props, pyrotechnics, etc), it's unlikely everyone working on a high-fps film is going to be aware of exactly which techniques do and do not work. As a result, you get lots of subtle flaws exposed that don't work with twice the detail. The sum of these flaws contribute heavily to making the shot look 'videoish'.
This article has some good info: http://venturebeat.com/2013/04/24/the-making-of-pixars-lates...
They seem to choose being film-like over being realistic when they have a choice. In one of their most recent short films, they actually lowered the framerate for a slow-motion effect to simulate a classic film technique.
Wow, when I was doing CGI work in the mid-90s our budget was generally 20 minutes per frame. The highest quality work we did was around 1 hour per frame, and this was for TV adverts and idents -- still, we didn't have Pixar's budgets to play with.
At first, I thought this extra "detail" could be explained as an illusion (since noise/grain can mask a lack of resolution), but then I read the abstract quoted near the end of the article:
"...visual cortical cell responses to moving stimuli with very small amplitudes can be enhanced by adding a small amount of noise to the motion pattern of the stimulus. This situation mimics the micro-movements of the eye during fixation and shows that these movements could enhance the performance of the cells"
So if I understand right, since the biological systems are tuned to extract extra detail via supersampling across time, and a small amount of noise/grain can enhance that ability (mimicking natural movement of the eye), it actually helps our visual system extract more real detail.
It seems counterintuitive to add noise for more detail, but the explanation is fascinating.
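Here's a toy illustration of the stochastic-resonance idea (a sketch with made-up numbers, not the model from the paper): a periodic stimulus too weak to cross a detector's threshold becomes detectable once a little noise is added, because the noise pushes it over the threshold preferentially near the stimulus peaks; too much noise drowns it out again.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 2000)
    signal = 0.4 * np.sin(2 * np.pi * 5 * t)  # sub-threshold stimulus
    threshold = 0.5                           # detector only "fires" above this

    def firing_correlation(noise_level):
        noisy = signal + rng.normal(0, noise_level, t.size)
        spikes = (noisy > threshold).astype(float)
        # crude measure of how well the spike train tracks the stimulus
        return float(np.mean((spikes - spikes.mean()) * signal))

    print(firing_correlation(0.0))  # 0.0 -- never crosses threshold, stimulus invisible
    print(firing_correlation(0.2))  # clearly positive -- noise reveals the stimulus
    print(firing_correlation(2.0))  # back near zero -- too much noise swamps it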
 Stochastic resonance in visual cortical neurons: does the eye-tremor actually improve visual acuity? – Hennig, Kerscher, Funke, Wörgötter; Neurocomputing Vol 44-46, June 2002, p.115-120
Dithering exists in both the visual and the audio realms, as corysama pointed out in their linked article. The term's use is identical in both realms, but neither is really related to stochastic resonance.
Dithering is a technique of adding noise to disguise quantization boundaries. It hides unrealistic artifacts from the digital quantization of analog signals, but it does not add more realistic detail. Critically, it does not increase the signal-to-noise ratio.
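A minimal example of the distinction, with an arbitrary 1-D ramp and a deliberately coarse quantizer: dither breaks the banding up into noise, but the error against the original signal doesn't shrink -- no new detail is created.

    import numpy as np

    rng = np.random.default_rng(1)
    signal = np.linspace(0, 1, 1000)  # smooth ramp
    levels = 4                        # deliberately coarse quantizer

    def quantize(x):
        return np.clip(np.round(x * (levels - 1)), 0, levels - 1) / (levels - 1)

    plain = quantize(signal)
    dithered = quantize(signal + rng.uniform(-0.5, 0.5, signal.size) / (levels - 1))

    # Dither trades structured banding for unstructured noise; mean error doesn't improve.
    print(np.mean(np.abs(plain - signal)), np.mean(np.abs(dithered - signal)))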
The phenomenon being described here occurs on a biological level. As described above, the additional noise improves your visual acuity. That is, you can extract /additional/ details with the noise added which could not be resolved before. This is due to a nonlinear change invoked in the system receiving the signal, not from the change to the signal itself. So, while adding the noise initially decreased the signal-to-noise ratio, the overall effect on the entire system is an increase in signal-to-noise ratio.
Here the mechanism, explained in the Hennig paper, appears to be purely biological, but stochastic resonance is not strictly a biological phenomenon; it can also be seen in the electrical domain.
Here's a mildly interesting thing: Say you are at a stop light, and you want to get the jump on the other guy.
Jiggle your eyes rapidly around the light, and you will pick up the change more quickly than other people do.
Eyes do not have frame rates. They sample in a staggered fashion: some rods and cones fire before others, the inner part of the eye gets higher priority than the outer, and movement gets priority over detail overall.
When people complain about 48fps being weird I feel like they're just not used to it. It does look weird, but after 20 minutes it looks amazing. I'm personally tired of not understanding anything in action movies that use 24fps. It is kind of a luxury for the eyes to have 48 fps, and I predict that in a few years we'll have the same debate we now have with consoles (60 fps is better than 30 fps).
We got used to 24 fps and so we're making justifications on why it looks better when it clearly doesn't if you take a step back.
See 3D films. It keeps reappearing and with it the usual stream of complaints and people dismissing those complaints. However, audiences end up rejecting it after a while, for their own fuzzy reasons that may or may not be related to the complainers' arguments. This is already happening to the current 3D technology, I wonder why...
HFR fits this perfectly. It's new, but it isn't clear if it will hold with audiences. It's the audiences that decide if it's better.
As for CGI... That's a bad example. CGI is generally a good thing, but too much of it isn't.
I also don't think that 3D film is better in the same absolute sense, although I actually do think 3D is pretty good in theaters. The dizziness complaints about 3D are very valid because of physical implementation details, but I don't think the same about similar complaints about HFR.
When I watched the Hobbit I thought to myself that some of the breathing movements in the CGI were amazingly fluent and they kind of reminded me of a computer game.
And then I thought, what a loss to Hollywood, that beautiful fluent movements have come to be associated with computer games and not films.
Perhaps 'closer to reality' is a better statement. Reality doesn't render at 24fps.
48 fps can be better, but 24 fps can too, it's another tool for a director to use.
The issue is when people start trying to prescribe 48 fps, or 24 fps, as inherently superior; it's all situational.
Peter Jackson and James Cameron are pioneers, and they will have the knowledge and the technique before everyone else when everyone starts making 60fps movies.
Nobody complained when the first LCD monitors started replacing CRTs.
When the first "retina" displays appeared, the improvement over lower-DPI screens was clear, and nobody complained it was "too sharp".
It's clear that in the HFR case the issue is not so clear-cut: the fact that there are discussions about it (and articles like this one that try to give a scientific basis to why many do not like HFR movies) shows that it's not just a matter of "getting used to it".
Doesn’t really bring any advantage, takes too much processing power, people can’t see that sharp anyway, "too sharp images hurt my eyes", etc.
It was exactly the same as with 3D or CGI, even though it is definitely useful.
Increase the frequency to 48FPS and the blur goes away, meaning that we can see the fine detail, and suddenly sets look like sets, costumes look like costumes, and CGI looks like a computer game.
Sometimes it seems like people want to believe that CGI is a whole lot better than it actually is. People raved about Gollum in the LOTR films, but go back and look at it. Even at 24fps, it's not great. It certainly doesn't look real.
I lived through the entire CGI revolution in movies, from the early days of Tron and the "Genesis effect" sequence of Star Trek II to now. The way the CGI in all the movies in between looks to me now is very different from how it looked when first viewed, and as a graphics nerd I was always interested in and up to date on how the technology was done. I don't believe my brain "wanted" to believe CGI was better than it was then; I just had no context for how it would look when it was done even better, and as the goalposts moved, what came before looked increasingly bad in comparison.
also, you'd be surprised how much CGI now is being used in TV series. the showreels are on vimeo, some amazing work.
example, game of thrones, season 4:
This really puzzles me. I was an extra in LOTR 3, and was on some of these sets. They _don't_ look like sets. The Weta people put insane energy and time into making them look and feel realistic: dirt on the floors, dirt on the costumes, peeling paint, heavy chain mail (even though it's electroplated plastic, it's still heavy!)... the list could go on and on and on.
So far, it's taken a lifetime of trial-and-error in every aspect of movie making to find what works (which happened to be at 24fps). 48fps might too.
1/ The "soap opera effect" explains the 48 fps issue.
2/ The lack of motion blur in games is the reason why higher fps are better (see https://www.shadertoy.com/view/XdXXz4 for a great visualisation).
In the past this was achieved by setting your CRT to a low resolution and upping the refresh rate. More recently you can get TN LCD panels that offer 120 or 144hz update rates.
Moving the mouse in small quick circles on a 144hz screen compared to a 60hz screen is a very different experience. On a 60hz screen you can see distinct points in the circle where the cursor gets drawn. With 144hz you can still see the same effect if you go fast enough, but it is way smoother.
This makes a huge difference for being able to perceive fast paced movements in twitch style games and is the reason there has been a shift to these monitors across every competitive shooter.
My thought on this is that this behavior is similar to signal sampling theorems. Specifically, the Nyquist theorem says that you have to sample at at least 2x the max frequency of a signal to accurately represent that frequency. For signal generation this means that you have to generate a signal at at least twice the rate of the max frequency you want to display. If you want to accurately reconstruct the shape of that signal you need 10x the max frequency (for example, two samples in one period of a sine wave makes it look like a sawtooth wave, while ten samples makes it look like a sine wave).
So, if you're moving your mouse cursor quickly on a screen or playing a game with fast paced model movement even if your eyes can only really sample at something like 50-100hz the ideal monitor frequency might be 1000hz. There's a lot of complexity throughout the system before we can get anything close to this (game engines being able to run at that high of a framerate, video interfaces with enough bandwidth to drive that high of a framerate, monitor technology being able to switch the crystals that fast, etc.).
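A quick numerical illustration of that 2x-vs-10x intuition (just a sketch with made-up numbers; note it deliberately uses naive linear interpolation rather than the ideal sinc reconstruction the sampling theorem assumes, which is why the near-2x case looks so rough):

    import numpy as np

    f = 5.0  # signal frequency in Hz (illustrative)
    t_fine = np.linspace(0, 1, 10_000)
    true = np.sin(2 * np.pi * f * t_fine)

    def max_error(samples_per_second):
        ts = np.linspace(0, 1, int(samples_per_second) + 1)  # includes both endpoints
        vals = np.sin(2 * np.pi * f * ts)
        rebuilt = np.interp(t_fine, ts, vals)  # naive linear reconstruction
        return np.max(np.abs(rebuilt - true))

    print(max_error(2 * f + 1))  # large (around 0.8-0.9) -- the shape is badly mangled
    print(max_error(10 * f))     # ~0.05   -- visually faithful
    print(max_error(100 * f))    # ~0.0005 -- effectively exact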
Yes, 48fps movies typically look less cinematic, but I think this is a flaw in movie making technology and not of the framerate. The fight scenes in the hobbit sometimes look fake because you can start to tell how they aren't actually beating up the other person. This detail is lost at 24fps and is why they have been able to use these techniques.
Check out this video http://xiph.org/video/vid2.shtml which was recently posted here. Also the wikipedia page on sawtooth waves has an animation showing additive synthesis of a sawtooth wave: http://en.wikipedia.org/wiki/Sawtooth_wave
And Quake. Holy shit. Playing that game at 144Hz makes it feel incredibly real; even if it's blocky and pixelated, the movements are incredibly organic and the camera turning feels like a head turning rather than spinning around on Google Maps.
It's my understanding that you just need 2x (two points in a sine wave) to construct a unique wave. If you're getting a sawtooth, it means that you're sampling a wave that is composed of very high frequencies, and you're accurately sampling it, so a DAC can reconstruct it uniquely.
So if you sample 2xMaxFreq you have samples that describe the full signal and can reconstruct it exactly. So if our eyes really are 100Hz we can't see anything above 50Hz. That seems to align well with the ~50/60Hz threshold for flicker free viewing. Apparently higher framerates are only useful for when we have fast movement across the field of view which would be the case for FPS:
>So if our eyes really are 100Hz we can't see anything above 50Hz.
I'm not sure this follows as we're not perceiving waveforms when light hits our eyes, but we're perceiving intensity of energy hitting our receptors.
If you make the constraint/assumption that during reconstruction you're rebuilding a time-domain signal composed of a series of sinusoids, then you're in the clear at just 2x sampling. For example, Figure 2 in the article states that 2x sampling only provides frequency information, and not amplitude and shape. This is true if we assume that we're trying to directly reconstruct -any- periodic signal. Then, if we sample at only 2x of the signal's fundamental frequency, we are in fact stuck.
This can certainly cause confusion. I think the usual way (I just tinker with DSP for fun and a little bit at work, so I might have it mangled) to deal with this confusion is to remember that sawtooth and square (and whatever) signals are chock-full of high harmonics, each of which must also satisfy the Nyquist criterion -- be sampled at more than twice its own frequency -- for you to be able to reconstruct the signal.
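A small sketch of that point with made-up numbers: a square wave's energy sits at its fundamental and the odd harmonics above it, and every one of those harmonics must also sit below half the sample rate, or it folds back as an alias.

    import numpy as np

    fs, f0 = 1000, 5  # sample rate and fundamental frequency, Hz (illustrative)
    t = np.arange(0, 1, 1 / fs)
    square = np.sign(np.sin(2 * np.pi * f0 * t))

    spectrum = np.abs(np.fft.rfft(square)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1 / fs)

    # Strong components at 5, 15, 25, 35, ... Hz: sampling fast enough for the 5 Hz
    # fundamental alone is not fast enough for the harmonics that give it its shape.
    print(freqs[spectrum > 0.05][:5])  # [ 5. 15. 25. 35. 45.]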
You won't see people claiming 3D is an inherently superior format to film in; we shouldn't see the same for HFR.
Conversely if a director feels it's best for their film to use HFR, in full or in parts, people shouldn't be jumping on their back about it until they've seen the end product.
With the low 24FPS frame rate, pans over detailed backgrounds look awful. This is a serious constraint on filmmaking. Cameron's films tend to have beautifully detailed backgrounds, and he has to be careful with pan rates to avoid "judder". "The rule of thumb is to pan no faster than a full image width every seven seconds, otherwise judder will become too detrimental."(http://www.red.com/learn/red-101/camera-panning-speed)
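To put rough numbers on that rule of thumb (assuming a 1920-pixel-wide frame, which the article doesn't specify): a full-width pan over seven seconds at 24 fps shifts the background by only about 11 pixels per frame, while a one-second pan jumps 80 pixels between frames, which is where the judder comes from.

    width_px, fps = 1920, 24  # assumed 1080p-wide frame at the standard film rate

    def pan_step(seconds_per_width):
        return width_px / (seconds_per_width * fps)  # background shift per frame, px

    print(pan_step(7))  # ~11 px/frame -- the "safe" rate from the rule of thumb
    print(pan_step(1))  # 80 px/frame  -- a visibly juddery pan at 24 fps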
There are some movies from the 1950s and 1960s where this is seriously annoying. That was when good color and wide screens came in, and films contained gorgeous outdoor shots of beautiful locations. With, of course, pans. Some of the better Westerns of the period have serious judder problems. Directors then discovered the seven-second rule. Or defocused the background slightly, if there was action in the foreground. Some TVs and DVD/BD players now have interpolation hardware to deal with this.
The author's analysis of the human visual system is irrelevant for pans. For pans, the viewer's eyes track the moving background, so the image is not moving with respect to the retina.
This could be a viable alternative to supersampling for antialiasing. Rather than averaging multiple subsamples for each pixel fragment, this suggests that if a single subsample were taken stochastically, the results could be as good, or even better, so long as the frame rate stays high enough.
Antialiasing doesn't quite have the same impact on rendering performance in modern games that it used to, mainly due to new algorithms such as SMAA and the increased cost of postprocessing relative to rasterisation, but this could nonetheless lead to tangible performance improvements.
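Here's a rough sketch of the idea in Python (the toy scene and numbers are made up): classic supersampling averages many jittered sub-samples per pixel within a single frame, while the stochastic alternative takes one jittered sub-sample per pixel per frame and lets accumulation over successive frames (or the eye, at a high enough frame rate) do the averaging.

    import numpy as np

    rng = np.random.default_rng(2)
    W = H = 256
    ys, xs = np.mgrid[0:H, 0:W].astype(float)

    def shade(x, y):
        """Toy scene: 1.0 inside a circle, 0.0 outside (a hard, aliasing-prone edge)."""
        return ((x - 128) ** 2 + (y - 128) ** 2 < 90 ** 2).astype(float)

    # Single centered sample per pixel: pure staircase edge.
    aliased = shade(xs + 0.5, ys + 0.5)

    # Classic supersampling: average 16 jittered sub-samples per pixel, every frame.
    ssaa = np.mean([shade(xs + rng.random((H, W)), ys + rng.random((H, W)))
                    for _ in range(16)], axis=0)

    # Stochastic alternative: one jittered sample per pixel per frame, 16 frames,
    # averaged over time instead of within a single frame.
    temporal = np.mean([shade(xs + rng.random((H, W)), ys + rng.random((H, W)))
                        for _ in range(16)], axis=0)

    print(np.unique(aliased).size)   # 2               -- edge pixels are all-or-nothing
    print(np.unique(ssaa).size)      # many (up to 17) -- edge coverage is graded
    print(np.unique(temporal).size)  # many (up to 17) -- same grading, spread over frames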
Maybe it's like dreams in the days of monochrome media: black-and-white dreams were the norm, but today they are the exception.
Most of us no longer watch content in darkness. James Cameron is of the opinion that improving FPS is more significant than moving up from HD. I figured I should trust the professional who devotes his life to this.
To truly evaluate high FPS movies and video content, you have to watch it for a while.
The SmoothVideo Project (SVP) is pretty awesome. It's made by volunteers, needs some good hardware, and takes some work to get set up well.
It struggles in scenes with lots of detail, but panning scenes are incredibly beautiful.
Going back is a bit difficult.
Simply doubling the framerate of existing film is the wrong approach. To truly evaluate high FPS the director must take the framerate into account during filming.
I'll use SVP/InterFrame with low FPS sports, homemade video and occasionally anime but NEVER live action film. It cheapens the whole experience and undoes everything the director intended.