
MPEG claims its new standard H.265 halves video bandwidth with no quality loss - Suraj-Sun
http://www.itwire.com/business-it-news/technology/56199-new-mpeg-standard-halves-video-bandwidth-with-no-quality-loss
======
pwthornton
This technology could be used to cut down on file sizes, to raise video
quality, and to increase resolution. A few thoughts:

1\. 1080p streaming video from Apple, Netflix and others looks pretty good,
but it suffers from more compression artifacts when compared to Blu-ray. It
simply doesn't look as good. With a more efficient compression technology,
streaming video could have similar file sizes to today's H.264 video but look
more like Blu-ray.

2\. I already own a laptop that does above 1080p video. When Apple releases a
Retina iMac or Thunderbolt display it will be around 5k. Other manufacturers
will be there too. 4k or so TVs and projectors are on their way. Streaming
technology makes a lot more sense than a new physical format for higher
resolution video. In order to realistically deliver 4k video (or 5k video,
like The Hobbit is being shot in) over IP, we would need better compression
than H.264.

3\. File sizes could be cut down, allowing people to consume more video
without going over their caps. This would also allow mobile devices to store
more high-quality video content for watching on the go.

The best-case scenario would be that we get a combo of all three. 4k-5k video
content is still a ways off for home use, but when it does come, H.265 sounds
like the way to go.

In the next few years, this technology could be used to cut down on file sizes
some, while also upping the quality of video. This is what we saw with Apple's
1080p video, which uses High Profile H.264, whereas Apple's 720p video uses
Main Profile. Yes, the 1080p videos are bigger, but not by much.

~~~
jonknee
Nitpick, but Thunderbolt supports a maximum of 20 Gbit/s, which can't drive
5k. It maxes out at 10 megapixels, which is about 4k. 5k is nearly 14
megapixels.

[http://superuser.com/questions/441395/what-is-the-maximum-re...](http://superuser.com/questions/441395/what-is-the-maximum-resolution-that-a-thunderbolt-monitor-can-display)

[http://en.wikipedia.org/wiki/Red_Digital_Cinema_Camera_Compa...](http://en.wikipedia.org/wiki/Red_Digital_Cinema_Camera_Company#Recording_formats_2)
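A rough back-of-envelope (assuming 24-bit colour at 60 Hz and ignoring
protocol overhead): 10 megapixels x 24 bits x 60 Hz is roughly 14.4 Gbit/s,
whereas ~14 megapixels would need roughly 20 Gbit/s, right at the link's
ceiling before any overhead.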

~~~
alttag
Doesn't that assume no compression in the transfer? Surely, if streaming
compression is getting better, there's some lossless compression that can be
used for display signals?

~~~
tedunangst
What does your video card/monitor do when it's asked to display a frame that
can't be compressed?

------
sp332
There are some details on the actual compression techniques here
[http://en.wikipedia.org/wiki/High_Efficiency_Video_Coding#Fe...](http://en.wikipedia.org/wiki/High_Efficiency_Video_Coding#Features)
although at first glance I don't see anything especially different from H.264
<http://en.wikipedia.org/wiki/H.264/MPEG-4_AVC#Features>

eta: For historical and comparative purposes, here's DarkShikari's evaluation
of an early prototype of an encoder.
<http://x264dev.multimedia.cx/archives/360>

~~~
gillianseed
Hmmm.... I'm not very versed in video encoding technology, but from what I
gather x264 has been adding features outside of the 'official specs' for
pretty much all of its existence, and looking at the list I directly recognize
things like higher bit depth and CABAC from x264 settings.

So I'm wondering: is there some new secret sauce in H.265 which makes it
really much better than, say, x264, or is it just a new standard created
around extra features which encoders like x264 have already implemented?

~~~
Jabbles
H.265 has some quite innovative features over H.264: for instance, instead of
dividing the picture into rows of macroblocks, it's divided into quadtrees,
which allows the compression algorithm to make more use of spatial similarity.
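A minimal Python sketch of that partitioning idea (the split rule, names, and
threshold here are invented for illustration; a real HEVC encoder decides
splits by rate-distortion cost):

    import numpy as np

    def split_ctu(block, x, y, size, min_size=8):
        # Leaf if we've hit the minimum size or the block is "flat" enough.
        # The variance test is a crude stand-in for the encoder's real
        # rate-distortion decision.
        if size <= min_size or block.var() < 100.0:
            return [(x, y, size)]
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                sub = block[dy:dy + half, dx:dx + half]
                leaves += split_ctu(sub, x + dx, y + dy, half, min_size)
        return leaves

    # Partition one 64x64 coding tree unit of a synthetic frame.
    ctu = np.random.randint(0, 256, (64, 64)).astype(float)
    print(split_ctu(ctu, 0, 0, 64))

Busy regions end up as many small coding units while flat regions stay as a
few large ones, which is where the spatial-similarity win comes from.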

~~~
agumonkey
I naively wonder if it cannot be generalized to octrees: to embed the inter-
frame analysis into a single 3d sliding adaptive octree.

~~~
pbhjpbhj
> _embed the inter-frame analysis into a single 3d sliding adaptive octree_ //

I've no idea what this means [yet] but it sounds too awesome to ignore. H266
here we come?!?

------
gmartres
I'm participating in GSoC and my project is an HEVC decoder for libav:
<https://github.com/smarter/libav/tree/hevc> (the decoder is contained in the
libavcodec/hevc* files). It currently only decodes I-frames and doesn't
include the in-loop filters.

Reference encoder: <http://hevc.kw.bbc.co.uk/trac>

Samples: ftp://ftp.kw.bbc.co.uk/hevc/hm-8.0-anchors/bitstreams/

Latest draft of the spec: [http://phenix.it-sudparis.eu/jct/doc_end_user/current_docume...](http://phenix.it-sudparis.eu/jct/doc_end_user/current_document.php?id=6465)

------
VMG
Assuming this is true across the board and not only for some edge cases, will
this actually mean that video files get smaller? It seems to me that this is a
case of Jevons paradox[1] where increased efficiency leads to higher
consumption. Example: x264 led to 720p-encoded videos with higher file sizes
rather than smaller files with lower-resolution video content.

<http://en.wikipedia.org/wiki/Jevons_paradox>

~~~
porsupah
I'd say that's a good possibility, for now. Consider when Apple rolled out
1080p downloads - the filesizes were only slightly larger than for the
previous 720p versions, representing quite an efficiency boost. Adopting H.265
would seem to be able to chop the sizes down.

However:

\- tablets and phones are unlikely to be able to take advantage of H.265 until
the requisite GPU support arrives. This may hold up widespread adoption for a
year or two.

\- for how long will the video arena remain at 1080p? I hear Sky (UK satellite
broadcasting arm of the Murdoch empire) is trying to nudge toward 4K
broadcasting.

~~~
jpdoctor
> representing quite an efficiency boost.

It's a somewhat backwards way of thinking about it. They chose the filesize
and then set the dials for the encoding. They could have chosen the 720p to be
smaller, larger, or the same size compared to 1080p.

------
peterwwillis
Sweet! Half the bandwidth, double the copyright infringement.

I still remember the days when downloading movies was only practical because
someone had compressed them down to two 150-megabyte videos. When I got my
first 'high quality' 580-megabyte copy of The Matrix, I was thrilled.

Encoding used to be an art form. Now people just use whatever codec they want
with default settings to get that 50GB Blu-ray movie down to a couple
gigabytes and call it a day.

~~~
bradwestness
Yeah, I'm mostly curious how this codec stacks up against WebM/VP8 or whatever
other free codecs are out there. It'd be nice to see something take off that's
less encumbered by licensing issues.

~~~
0x09
Every generational codec announcement from MPEG manages to attain "50%
improvement", so it really remains to be seen. It certainly is a more advanced
codec than H.264, which itself is already rather better than VP8 in its
present form.

Regarding the other thing, MPEG is running two tracks for a royalty-free spec,
one based on existing patent-free tech and one on grants from H.264 patent
holders, which they say they will decide between sometime this year. An
option like that from MPEG might not take off, but it can't hurt.

~~~
josephlord
I think that you probably get 50% at the generation's introduction and another
50% over 10 years as the encoders are improved (and more hardware is thrown at
the problem).

A double track on licensing makes some sense for a profile that can be served
or provided free, but I expect the efficiency benefits would mean that for
commercial uses you would pay. That would mean that most hardware, and thus
most decoders, will support both.

Maybe people will use the non-free one for mobile.

------
shmerl
They surely plan to lock the industry into their new closed codec for a long
time to come, since the patents on H.264 will eventually expire. Will anyone
come out with improved open codecs to counter that, for the sake of the open
Web?

~~~
rwmj
Possibly new codecs won't be needed. After all storage is increasing
exponentially, and even bandwidth is going up slowly. For most users it
doesn't matter if a movie fits in 600 MB or 300 MB.

~~~
shmerl
Bandwidth is still far from perfect in many cases (especially on mobile). So
it matters a lot.

------
colinshark
I'm pretty libertarian, but for things like standards and formats, I really
think the government should be stepping in and taking control. Standards,
formats, and basic internet access are the new "roads" of the modern world.
Commerce can flourish when we aren't fighting over them.

Even for something R&D heavy like video codecs. How much money are we dumping
into the NSA right now? Use some of that.

~~~
WiseWeasel
Which government institution should decide on the video codec for everyone to
use? How would they know when it's time to switch? I cannot see that working
any better than MPEG and MPEG-LA.

~~~
bzbarsky
Well, for cryptographic hashes the relevant institution in the US is NIST and
they switch at a point when there start to be worries about the previous hash
being subject to successful attacks sometime in the future.

There's no reason in principle that the same approach, again with NIST as the
relevant institution, could not be used for video codecs.

It would be better than the MPEG-LA because the patent situation could be made
much simpler (e.g. automatic patent licenses would be granted to all
implementors of the standard).

~~~
WiseWeasel
Other countries also use video codecs.

~~~
bzbarsky
Sure. Unfortunately, the UN doesn't seem to be all that interested in this
sort of problem, from what I've seen.

------
podperson
The benefit will probably go first to someone like Apple or Google who can
both supply streaming content and control the software (and ideally hardware)
on devices (I imagine for low-powered devices you'll want hardware decoding,
so this will prevent Apple from, say, adding support to existing AppleTVs).

I guess we can all complain when the iPhone 5 doesn't support it.

~~~
Jabbles
Google is heavily pushing VP8, which is supposedly royalty free. I'd be
incredibly impressed if Apple could make a hardware decoder/encoder for HEVC
for their next iPhone (or whichever one comes after the standard is
finalised), but we won't know until then :)
<http://www.webmproject.org/tools/vp8-sdk/>

~~~
podperson
It will be interesting to see what happens with VP8/WebM. Really it looks like
Google tried to stymie Apple (which committed itself to H264) first by trying
to back Adobe/Flash and then VP8 (and announcing that H264 support would be
dropped from Chrome, which AFAIK it hasn't been on any platform). Thus far I
don't see VP8 achieving much and Google may just end up sticking with MPEG
standards.

~~~
taligent
Google is a few years too late with VP8.

It would have had an opportunity to take off when H.264 was in its infancy.
But now there is simply no use for it.

------
ck2
So it's doing it with half the minimum block size and looking far forward (and
backward) in the stream.

The CPU requirements must be intense. If it can't be done with hardware-
accelerated decoding on current hardware, it sounds like it will tie up
multiple cores?

Maybe they like the idea of making everyone rebuy hardware.

------
brittohalloran
If it's really that much better they should give themselves more credit than
.001

~~~
protomyth
I do love the difference between a marketing organization and standards
organization.

------
Jabbles
This claim is pretty accurate. Since the standard isn't finalised yet, it will
be a while before the hardware is developed to make it widely used in mobile
devices, as it's improbable that a software encoder could be made that uses
little power.

~~~
ksec
Well at least this time it will be much faster then H.264, Hardware decoder
are already in the work with many things could be reused fro H.264 HP
Decoding. So unless there are any major changes HEVC decoder will be coming
much quicker. Some Video Decoder IP has already begin to list HEVC decode as a
feature.

------
mark-r
Half the bandwidth with no quality loss is a pretty bold claim. Bold enough to
be unbelievable - it's not like the existing codecs are doing a horrible job.
Data rates are easily measurable, so I'd guess that "no quality loss" is an
exaggeration. Anybody have any data on this?

------
mistercow
Hmm, still no overlapping blocks, apparently. Do other people find block
artifacts less distracting than I do, and that's why nobody's trying to fix
them for image and video compression?

~~~
sp332
The deblocking filter is basically overlapping blocks.
<http://en.wikipedia.org/wiki/Deblocking_filter>

~~~
mistercow
Well, no, it's not - at least not in H.264. I have not read details yet on how
it works in H.265, but in general deblocking is a completely different
approach to solving block aliasing.

Deblocking works (very roughly) like this: look at the edges of the blocks and
see if there's a sharp edge there. Now check how strong an edge there is at
the same place in the original input. Depending on these two relative edge
strengths, blur the block edge. That is, if there's a strong edge in the
output, but not in the input, blur a lot. If the input _does_ have a strong
edge, blur less. H.264 also uses some other heuristics to decide how strong
the edge filter should be, and happens to do the filtering on the encoder side
as well as the decoder side, which allows for better interframe compression.
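A toy Python illustration of that blur-the-artificial-edges idea (not the
actual H.264/H.265 filter; the thresholds are invented, and local smoothness
is used as a crude stand-in for the edge-strength decision described above):

    import numpy as np

    def deblock_edge(row, b, alpha=20.0, strength=0.5):
        # The block boundary sits between row[b-1] and row[b].
        p, q = row[b - 1], row[b]
        step = abs(q - p)                        # edge strength across the boundary
        local = max(abs(row[b - 1] - row[b - 2]),
                    abs(row[b + 1] - row[b]))    # detail just inside each block
        # A large step across the boundary with smooth samples on either side
        # looks like a compression artifact, so blend it; a very large step
        # (>= alpha) is assumed to be a real edge and is left alone.
        if local < step < alpha:
            avg = (p + q) / 2.0
            row[b - 1] += strength * (avg - p)
            row[b] += strength * (avg - q)
        return row

    row = np.array([50, 52, 53, 70, 71, 72], dtype=float)  # boundary at index 3
    print(deblock_edge(row, 3))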

So while this, in a vague mathematical sense, does provide overlapping
information between blocks in a way that can be analogized to overlapping
blocks, that information is far cruder than true time-domain aliasing
cancellation (TDAC).

But to answer my own previous question, the reason they don't use overlapping
blocks appears to be that the concept is very difficult to reconcile with
motion compensation.

~~~
sp332
If you encode overlapping blocks, how would you render them? Just average the
edges together?

~~~
mistercow
Sort of, but you vary the weighting to fade the blocks into each other.

Or technically speaking, what you do is multiply the blocks by a window
function before you do the Fourier-related transform. If you choose the
windowing function carefully, you can even set it up so that all you have to
do is add the overlapping areas together.

This is especially easy to understand in one dimension, which is more or less
how MP3, Vorbis and AAC do it. Block boundary effects are so noticeable with
audio that unless they are corrected very robustly, the quality is
unacceptably choppy.

The technique generalizes basically without alteration to two dimensional
data, but I've never seen an image or video algorithm that used it. JPEG just
ignores the blocking issue entirely, and video algorithms seem to rely
exclusively on deblocking filters. As I said, I think this has to do with the
fact that TDAC doesn't trivially generalize to motion compensation.
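A minimal 1-D Python sketch of that overlap-add idea (assuming a sine window
at 50% overlap, which has the add-the-overlaps-together property, and
skipping the actual transform/quantization step entirely):

    import numpy as np

    N = 8                                    # block length
    hop = N // 2                             # 50% overlap
    k = np.arange(N)
    # Sine window: the squared windows of adjacent blocks sum to 1 in the overlap.
    window = np.sin(np.pi * (k + 0.5) / N)

    signal = np.random.rand(32)
    out = np.zeros_like(signal)

    for start in range(0, len(signal) - N + 1, hop):
        block = signal[start:start + N] * window  # analysis window (before the transform)
        # ... forward transform, quantization, inverse transform would go here ...
        out[start:start + N] += block * window    # synthesis window, then just add

    # Every sample covered by two blocks is reconstructed exactly: no seams.
    print(np.allclose(out[hop:-hop], signal[hop:-hop]))   # True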

------
MattSayar
>...by 2015, [video] is predicted to account for 90 percent of all network
traffic.

So with this new standard, are they hoping to cut that down to 45%?

~~~
seles
Actually with this new standard they are hoping to cut it down to 81.8%
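(That seems to assume the video share halves while all other traffic stays the
same: 45 / (45 + 10) is about 81.8%.)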

------
jpdoctor
I'm assuming it's proprietary like the other MPEGs - any word on how
problematic the licensing will be?

~~~
wmf
It's likely MPEG-LA will handle the licensing and I'd expect a similar cost
structure. Newer codecs have tended to be cheaper, although at this point I
suspect pennies per unit are pretty irrelevant.

------
zapt02
No "visible" quality loss. We know what that means.

