H.261 - first standard DCT-with-motion-compensation codec
MPEG-1 - B frames, variable rates and sizes
MPEG-2/H.262 - interlacing, 4:2:2 and 4:4:4 subsampling, other minor features
H.263 (FLV) - mainly low bitrate improvements, introduction of intra prediction
MPEG-4 part 2 (DivX/Xvid, etc.) - more prediction modes, some very advanced and little-used features (3D shape coding?)
H.264 - different transform, even more prediction modes and features
H.265 - not really familiar enough with this one to say
A demo which lets you select various codecs to try out. You can, for example, play 720p VP9 video in Safari with the WebAssembly build of the decoder:
It includes 8K and HDR color support (10 bit in v1 via the Main 10 profile, and 12 and 16 bit in v2).
In addition it moved from 9 prediction modes to 26.
MP4, the media container, is MPEG-4 Part 14: https://en.wikipedia.org/wiki/MPEG-4_Part_14
MPEG-2 part 4 is some conformance testing specification.
We used this library two years ago in HS for some low-latency VR streaming. And yes, it's a little CPU-intensive so our smartphones got rather hot.
We tried other streaming protocols (such as H.264) but all of them introduced noticeable latency that made our system disorienting. Only JSMPEG was fast enough for our purposes. It's a fantastic library for any low-latency streaming! Highly recommend it.
Anyways, self-plug for our old project: https://rmj.us/motorized-live-360-video/. Basically, the smartphone's gyroscope controls a remote video camera that streams a live feed to the user's headset.
Curious what exactly you tried - H.264 is a codec, and there's a bunch of ways of delivering it to the client (HLS, WebRTC, some have built WebSocket-based streaming, ...), and I'd expect that the main latency is hidden there, not in the decoding?
And I think you're right on the encoding latency. I believe that H.264 buffers a little before deciding how to compress the frames, whereas MPEG1 doesn't? I could be completely wrong, but my gut is telling me that MPEG1 is basically independent, slightly-modified JPEG frames.
Not that simple, but not that far off.
Basically MPEG1 has key-frames (“JPEG” frames) and (forward) delta-frames (diffs).
From my understanding H264 has many improvements, including reverse delta-frames.
My guess is that disabling those in the encoder will improve real-time capabilities.
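Worth spelling out why those reverse (B) frames hurt real-time use: the decoder can't display a B-frame until the later reference frame it points at has arrived, so the encoder has to hold frames back. A toy sketch of that reordering delay (illustrative only, not real codec code; the function name is mine):

```javascript
// Toy model of encoder output delay caused by bidirectional (B) frames.
// A run of consecutive B-frames must be buffered until the next I- or
// P-frame reference arrives; with only I/P frames (the MPEG-1-style
// low-latency case) nothing is held back.
function reorderDelayInFrames(frameTypes) {
  let worst = 0;
  let run = 0;
  for (const type of frameTypes) {
    if (type === 'B') {
      run += 1;
      worst = Math.max(worst, run);
    } else {
      run = 0; // an I- or P-frame flushes the pending B-frames
    }
  }
  return worst;
}

// With no B-frames, each frame can be emitted as soon as it's encoded:
reorderDelayInFrames(['I', 'P', 'P', 'P']); // 0
// A typical IBBP pattern forces a two-frame wait:
reorderDelayInFrames(['I', 'B', 'B', 'P']); // 2
```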
I didn't realize that you could disable those features in the encoder. I'll have to look into x264 tunables that lower latency. Might be interesting.
We had a lot of fun by having someone hold the device mounted on a pole, looking down on the viewer, and following them around the room. This provided a weird, quasi-video-gamey third-person perspective, like in GTA or some other game.
And these guys did that with a car: https://www.youtube.com/watch?v=nIRUavithF8
Virtual embodiment requires a separate camera in addition to the VR. What they do is have you don a VR helmet, and in that you see the environment around you - but actually it's the camera mounted on your face. Then, however, the perspective begins to move. Without you moving yourself, the view you see moves, turns around, and you can see yourself. At this point, you basically feel disembodied. But then the perspective is slowly moved, with your real body still in view, into a 'virtual avatar' of some sort. From that point, everything you see is from that avatar's perspective, and as far as your brain is concerned, that IS your body.
They have used this to transfer the perspective of large, imposing men into the virtual bodies of small women, then they have large, imposing male VR characters come in and begin shouting at them. As they look down, they see their thin arms, their short stature, their lack of muscle, etc, and they get legitimately scared of the huge male figure confronting them. Tests afterward showed that the men who underwent this experiment (men who had previously been abusive to their partners) showed a marked improvement in their ability to recognize fear in the face of others, an impairment common to most abusers.
It's a fascinating topic, and one of the leading researchers using it, a Dr. Metzinger, has proposed that a VR Code of Ethics be considered. I don't personally think we really have the ability to competently form such a code since we're pretty early on and don't really know how things will affect people, but it is an issue that's being considered. Any time potential censorship of things like this is proposed, I always consider the case of actors in films and plays. They're already far more "immersed" and doing things more "interactively" than any technology is likely to enable us to do. They use real guns (loaded with blanks), shoot them at real people, see blood packs explode, see those real people they personally know crumple to the floor or wail in pain, etc... and they're fine.
I guess nowadays this strategy would work much better, considering that 4G is now the norm.
I remember downloading similarly-encoded (and much worse) music videos at the time, pausing and resuming over multiple nights on services like KaZaa. Good times.
On my new Lenovo X1 I measured 4% CPU usage with Chrome under Windows. Sure, the X1 has a new CPU, but I didn't expect this performance increase. Is this a Windows optimization?
I think in the case of the X1, the GPU memory must be really good, and WebGL is able to offload computation to the GPU.
Runs surprisingly smoothly, even though a native decoder is clearly superior. However, there are still nice uses, like adding HEIF support to the browser:
What is its power use, versus native decoding?
None of those restrictions remain now, so this is more just an interesting proof of concept at this point.
Remember: We need full stack!
One disadvantage I discovered: it stops playing when you go to another tab
There is also a site to test features and their impact on performance: https://jsmpeg.com/perf.html
I have a very practical solution for you: bypassing the autoplay restriction for ads.
Otherwise sites would use GIFs or something like this, which is much less efficient.
Which is a perfectly valid reason to build something. But this project is 6 years old (almost to the day, according to GitHub) and has over 3700 stars, so it seems to be a bit more than a mere novelty.
Thankfully this is Hacker News, not Practical News.
> JSMpeg.Renderer.WebGL.IsSupported() ? new JSMpeg.Renderer.WebGL(options) : new JSMpeg.Renderer.Canvas2D(options);
Not sure why you didn't check whether WebGL was supported.
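For what it's worth, the snippet quoted above is the standard feature-detection-with-fallback pattern: probe for a WebGL context once, and use Canvas2D if the probe fails. A minimal standalone sketch (the helper names here are mine, not JSMpeg's; the context factory is injected so it can run outside a browser, where it would be canvas.getContext):

```javascript
// Hypothetical helpers sketching WebGL feature detection with a
// Canvas2D fallback. In a browser, createContext would be
// canvas.getContext.bind(canvas).
function detectWebGL(createContext) {
  try {
    // Older browsers exposed WebGL under the 'experimental-webgl' name.
    return !!(createContext('webgl') || createContext('experimental-webgl'));
  } catch (e) {
    // Some browsers throw instead of returning null when WebGL is disabled.
    return false;
  }
}

function pickRenderer(hasWebGL) {
  return hasWebGL ? 'WebGL' : 'Canvas2D';
}

// No WebGL available (the probe returns null), so we fall back:
pickRenderer(detectWebGL(() => null)); // 'Canvas2D'
```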