I think the success of Spotify, Pandora, and Rhapsody is proof that consumers don't care about quality. I don't know exactly what bit rate they're streaming at, but it sounds pretty bad on mobile, so I assume it's something quite low. Even though I recognize the crappiness of it, I sometimes listen to them in the car (my truck has a crap stereo anyway, so no big deal there).
In short: Cool article, but the suggestion that consumers will stop it because they want bigger files seems weird.
Or imagine Amazon goes with pure VBR, and then Apple runs an ad claiming their sound quality is "better" because their bitrate never dips below 128kbps. It's bullshit, but how is an average consumer supposed to figure this out? They'll probably err on the side of caution and buy the CBR version, since "it can't be any worse than the VBR one, but I don't lose bits and it's the same price!"
The whole article was talking about streaming vs. downloading. Streaming is _fine_, and Netflix will probably get away with their compression, but will Amazon/Apple be able to do that with downloads? He doesn't think so. People are fine with Spotify/Pandora because there is no perceived ownership of the songs they're streaming. People who actually buy and download audio or video have money in the game, so they want "the best," and any loss of that is viewed as Amazon/Apple screwing them over.
IIRC, Apple also sold AAC as requiring lower bandwidth for the same quality, but I don't recall when that was.
0 - https://youtu.be/dPCNUExWR6I?t=3271
(Not sure if the video clarifies this point since I cannot watch it right now.)
But let's say Amazon decides to go with VBR to save space and download speed, now there's an easy way for Apple to attack Amazon. "We never dip below a certain quality. Amazon does. We care. Amazon doesn't"
Maybe that ad/slogan works, or maybe it doesn't. But if you're Amazon would you be willing to risk some weird consumer backlash over it? And if you're Apple it is an easy point to attack, and if it doesn't work, no harm no foul (and then secretly also switch to VBR and announce it at the next Apple conference!)
I have noticed that older, pre-2.0 versions of Mixxx would sometimes get glitchy (warbly) when using the pitch-correction feature (key lock, I think, is the name in the UI), but I don't think it's related to VBR vs. CBR. I never thought to check, though.
The only time I've noticed an audible sound quality problem when DJing was with low bit rate (usually 128kbps) files.
The warbling definitely sounds like our pre-2.0 time stretching algorithm to me. The new one in 2.0 is much better.
Anyway, I really appreciate your work on Mixxx. I don't get to do audio or music work much these days, but I do occasional DJ gigs (mostly for free for non-profit orgs and such, but sometimes they're paying gigs), and it's been a whole lot of fun.
And, to be clear, the article is guessing that Amazon or Apple are imposing a minimum bit rate to ensure some lower bound on file size. There isn't really any solid evidence that either company is making decisions based on trying to make file sizes bigger to convince consumers they're getting "more value." I took exception to that premise, which is why I commented above; I find it weird that the article suggests it. I believe they are trying to maximize audio quality at smaller file sizes, and the evidence seems to indicate that is what is happening.
And, we've come full circle to the point of my initial comment. I don't think the argument he's making about maintaining large file sizes is backed by evidence or a particularly good understanding of consumer behavior/preference. I do think his guesses about the Netflix algorithm are interesting, but his digression into consumer behavior is less so, IMHO.
That has led to a consumer revolt of sorts, for people who care about audio. That's not a large enough group to make a difference to the big record companies, but it is large enough that catering to them is a good business model in itself.
128kbps mp3 files do sound pretty bad, especially on the high end and during very dynamic parts. It is audible to me, regardless of the quality of the original recording and its mastering.
And, while we're talking, I'll point out that the loudness wars began roughly 20 years ago, so you may need to go back to an even earlier mastering to get the really good stuff. Certainly it was more pronounced through the late 90s and 00s, but I remember the first few records I bought that suffered badly from over-compression, and they were released around 1994-1996. The multi-band compression that has killed many great recordings began to see usage around that time as well, though it peaked much later (it was expensive to start with).
I'm not sure if it has ever happened with bitrates in audio files, but the megapixel war comes to mind. 36 MP in a point-n-shoot? Must be better than the 12 MP in a Canon 5D DSLR since 36 is way more than 12.
You can tell if you play them next to each other, on an expensive club sound system. Other than that, you probably can't.
That said, a 128kbps mp3 through a really loud system tends to sound harsh. It is definitely audible, even to untrained ears. But, anything above 192kbps, or so, is fine and any flaws that might be audible in an ideal listening environment are lost in the general noise of the club environment.
A 128kbps threshold is a holdover from the days of MP3s, WinAMP, ripping your CDs, and technically savvy early-adopter consumers. The average consumer is content to stream and doesn't know or care about bitrate, and the phrase "not even 128kbps" would be meaningless to most readers.
Free Pandora, maybe, but not Spotify. Spotify's bitrates are pretty good for most content and very good if you have a paid account with the high-bitrate setting turned on.
Some content is poorly recorded, mixed, and/or mastered, and will sound bad at any bitrate.
Movies were different. I did the same thing but with 720p and 1080p versions of movies. The majority wanted 1080p even if it meant a smaller library.
That is, after a certain threshold, quality is no longer a concern. Since ears are typically less discerning than eyes, that threshold is higher for video than for audio.
As an aside, this http://test.tidalhifi.com/intro is an interesting little test.
While I haven't kept that abreast of video encoding tech, I'm surprised people are saying animated shows will take less bandwidth. Typically video codecs use DCTs, which are great at roughly approximating complex patterns but do a bad job with solid blocks of color and hard contrast. The result is ugly artifacts, especially on edges. Perhaps things have greatly improved with H.264.
I suggest reading it again.
I've definitely noticed this - on full 1080p Blu-ray files the bitrate swings enormously, from 2Mbit/s up to 50Mbit/s (so much so that my Chromecast's wifi can't keep up in high-action scenes).
So I definitely think they will be able to make improvements to their 1080p content (and especially their 4K content - I don't think 25Mbit/s CBR is enough).
Apple? It will take them another 50 years to start using free codecs.
UPDATE: A post on IETF blog about standardized free video codec effort: https://www.ietf.org/blog/2015/09/aiming-for-a-standardized-...
UPDATE: See here: http://xiphmont.livejournal.com/67752.html
"You see, it would probably be difficult to sell those VBR files — some of which were quite a bit lower than 256 Kbps and a few even lower than 128 Kbps — because customers might perceive a loss of value."
The vast majority of customers do not care about the actual Kbps, as long as the sound quality remains above certain standards. Just market the different quality levels at different prices and most people would never think twice about it (and most would choose the cheapest version).
It's this generation's "warm sound".
(I'm very skeptical given that the higher base level of MP3/AAC encoding that the big music stores use these days has eliminated most of the obvious artifacts in lossy music. In fact, I don't think I've heard any artifacts in retail music since the Napster days!)
(Why the hell am I still paying for XM?)
> They all have the same server farms. Owned by Amazon, no doubt. And there aren’t any technical hurdles. It’s just more computation.
Does Apple use Amazon's servers? I thought Apple ran its own hardware/data centers. I've definitely heard war stories of Apple towing trucks full of racks into the desert so they could bump their capacity for cheap.
Page 41, Second paragraph:
"The keys, and the file’s metadata, are stored by Apple in the user’s iCloud account. The encrypted chunks of the file are stored, without any user-identifying information, using third-party storage services, such as Amazon S3 and Windows Azure."
"Each file is broken into chunks and encrypted by iCloud using AES-128 and a key derived from each chunk’s contents that utilizes SHA-256. The keys, and the file’s metadata, are stored by Apple in the user’s iCloud account. The encrypted chunks of the file are stored, without any user-identifying information, using third-party storage services, such as Amazon S3 and Windows Azure."
But seriously, could they not be using Amazon until they finish building out their own farm?
There used to be a group of people who wanted a codec that matched lossless quality at 256kbps to 320kbps. Personally, I think MPC (Musepack) accomplished it, and it's patent-free as well, since it's based on MP2. But the codec never caught on in the hardware world. Meanwhile, AAC does about as well at 256kbps despite being more complex.
That was in the Napster -> iTunes download era. Time flies, and both groups of people lost interest, mainly for the same reason: both wanted to store as many music downloads as possible. The 1st group didn't mind a little quality loss; the 2nd group wanted near-perfect quality at 256kbps. But HDD prices dropped to the point where the 1st group doesn't mind storing files at ~256kbps and the 2nd group will simply store them as lossless FLAC.
Then we come to the age of streaming, and that doesn't necessarily mean only Apple Music, Spotify, and the like - the largest music streaming service is probably YouTube. People don't download anymore; they just click and play on YouTube.
It turns out, I think, that we have reached the stage of "good enough," whether for audio or video. With video, we can get a huge improvement if we smooth out the noise/grain details.
Our broadband speeds continue to improve - we will have G.fast and VDSL2, the next generation of DSL broadband tech. Most kids and youngsters of this generation don't care as much about audio/video quality; they'd rather have instant, easy access.
Would Netflix really want to throw so much money at transcoding so many ways? Are there tricks to do this at a reasonable cost? Like grabbing a random 10 seconds at each of 15 points in a movie and trying just that? Working through the top n most popular titles first? Sorting by the biggest existing files?
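For what it's worth, the "random 10 seconds across 15 points" idea is easy to sketch. This is purely hypothetical - just one way to spread probe windows across a title's runtime so a cheap complexity estimate covers the whole movie instead of one unlucky stretch:

```python
import random

def sample_probe_windows(duration_s: float, n_probes: int = 15,
                         probe_len_s: float = 10.0, seed: int = 0):
    """Pick n_probes non-overlapping probe windows spread across a title.

    Divide the runtime into n_probes equal slots and pick a random
    window start inside each slot, so the probes cover the whole movie.
    Returns the window start times in seconds.
    """
    rng = random.Random(seed)
    slot = duration_s / n_probes
    starts = []
    for i in range(n_probes):
        lo = i * slot
        hi = max(lo, (i + 1) * slot - probe_len_s)
        starts.append(round(rng.uniform(lo, hi), 1))
    return starts

# A 2-hour movie: 15 probe windows, so ~150 s gets test-encoded
# instead of the full 7200 s.
windows = sample_probe_windows(7200)
print(len(windows))  # 15
```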
What they really care about is things that are multiplied by the number of users they have. Saving 5% of bandwidth on 100k users is very meaningful to them. So doing extra transcodes to figure out what to send is very valuable.
They've come and given talks at cloud dev meetups. Just an unexpected bump in bandwidth, or a delay in server response time causing traffic to back up, is enough to knock over their servers, so they do things no one else would even think of - like having their clients upload code to their servers to batch requests together in the optimal format for each client - just to reduce bandwidth and load on their network.
I would bet money that Amazon ran into this same conundrum with the unconstrained VBR mode of the LAME MP3 encoder, which they use.
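If so, the obvious fix for that conundrum is constrained VBR: let the encoder pick per-scene bitrates for a quality target, then clamp them to a floor and ceiling. A toy Python sketch of the clamping step (the 128/320 numbers are illustrative, not anything Amazon actually uses):

```python
def constrained_vbr(per_scene_kbps, floor_kbps=128, ceiling_kbps=320):
    """Clamp a pure-VBR bitrate allocation into a constrained-VBR one.

    per_scene_kbps: bitrates a quality-targeted encoder chose per scene.
    The floor avoids the "dips below 128kbps" marketing problem the
    thread describes; the ceiling bounds file size.
    """
    return [min(max(kbps, floor_kbps), ceiling_kbps) for kbps in per_scene_kbps]

# What a quality target alone produced vs. the constrained result:
scenes = [96, 180, 410, 250]
print(constrained_vbr(scenes))  # [128, 180, 320, 250]
```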
Unless there is a noticeable effect on quality or ease of streaming, consumers won't care.
You have to remember that most people can't/won't tell the difference between Blu-ray and DVD, let alone a bitrate change. More importantly, TVs ship with lots of silly effects that actively fuck with the picture (sharpening, aspect-ratio stretching, oversaturation, active motion, and other horrid "enhancements").
The only reason Netflix et al. are a thing is the content, not the platform (just look at how shit iTunes is to use). You can make the most wonderful interface in the world, but if there is no content, there is no point.
Streaming does cost money, but that's not the main cost of business. Most cost comes from licensing the content in the first place. (Then paying all your staff to do fancy things)
Seriously, bandwidth is pretty cheap compared to the cost of buying the license to broadcast a top-rated movie. (A high-ranking movie is easily a few million dollars; a custom TV series is anywhere between $1 million and $30+ million for a season.)
If Netflix can make my film stream at a high resolution faster and buffer less, over time I'll notice the buffering more on other platforms and just use Netflix instead.
That's exactly what's done by illegal release groups (pirated content), which are very picky about time-to-release and do their best to encode quickly while keeping good viewable quality.
When it comes to encoding, even for a library as large as Netflix's, the time spent encoding is always less than the time/bandwidth saved while sharing/streaming.
As of now, Netflix 1080p content (not re-transcoded per title) is delivered at a bitrate of 5200-5900kbps with no difference between content types (animated or live action - here I compared BoJack Horseman to The Ridiculous 6).
Meanwhile, many (or even all) high-quality release groups encode animated Blu-rays at around 3000kbps in 1080p (around 5500kbps for very high quality), while live action is encoded at around 11Mbps. The same difference applies when the content is capped from TV and then encoded.
Stuff that is released via publicly tracked torrents (i.e. for mass-market audiences) often targets around 1.5-2 GB for a feature-length movie in 1080p. In contrast, releases aimed at Usenet (the technical crowd) are often 6-7 GB and sometimes as large as 11-13 GB for the same movie.
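Those sizes translate directly into average bitrates, which makes the gap concrete. A quick back-of-the-envelope in Python (110 minutes is my assumed runtime for a typical feature):

```python
def avg_bitrate_mbps(size_gb: float, runtime_min: float) -> float:
    """Average bitrate implied by a file size and runtime.

    1 GB = 8000 Mbit here, using decimal units as release sizes
    usually do.
    """
    return size_gb * 8000 / (runtime_min * 60)

# A 2 GB torrent release of a 110-minute movie vs. a 7 GB Usenet one:
print(round(avg_bitrate_mbps(2.0, 110), 1))  # 2.4 (Mbps)
print(round(avg_bitrate_mbps(7.0, 110), 1))  # 8.5 (Mbps)
```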
On the other hand the situation is much more equitable for audio. Lossless audio torrents are pretty common even in the torrent world, and due to the typically greater number of torrents available the overall selection of lossless files is probably at least as large as on Usenet.
I would assume that private trackers tend towards higher-quality releases.
While consumers only care if the quality is "acceptable", it's pretty easy to tell the difference between a crisp 4K picture and a 1080p picture, and also easier to see encoding or bitrate artifacts.
So I think this is probably an attempt to improve the margins a bit on content delivery costs without sacrificing quality.
Netflix has also embraced 4K content with its original series, so it is in a unique position to leverage the shift to higher resolution content for maximum profitability.
Or not: http://www.cnet.com/news/why-ultra-hd-4k-tvs-are-still-stupi...
With this method, you let the encoder figure out the bitrate for the quality target you want to hit, then you can use that bitrate in a constant quality encode if you like.
That's part of the author's point: if Netflix can halve their bandwidth without users noticing, they'll see massive savings. Apple and Amazon will also want in on the savings (being competitors and all).
Netflix (and streaming services in general) on the other hand needs known bitrates with known constraints so their bandwidth estimation can work correctly without hiccups. Your personal video collection does not.
Whereas your personal video collection probably isn't being streamed over any link slower than several hundred MBit/s, which is more than enough for anything short of intermediate codec bitrates, plus significant buffering doesn't count against any data caps.
More seriously: the bitrate does not affect a player's compatibility with the container or video/audio format. For instance, if VLC can play H.263, it doesn't matter whether you use Handbrake to output 5100kbps or 100kbps.
I don’t know. It’s hard to predict because consumers… well… we’re fucking stupid.