Further, I think RED has a pretty close relationship with Nvidia. For RED customers, it'd stink to buy a beefy Mac Pro and not be able to edit 8K REDCODE RAW as well as they could on other OSes with better Nvidia support.
Especially when you consider that by the time the Mac Pro finally ships, it will probably be up against Zen 2 Threadripper (at least 32 cores, likely more), Nvidia GPUs, and PCIe 4.0 SSDs at a significantly lower price point. Not having solid REDCODE RAW support would be a huge miss for Apple.
I don't think this lawsuit will have any bearing on REDCODE hardware decode on macOS, especially since it would likely be RED who is responsible for that support.
LOL are they marketing to children?
Which is to say 90% yes.
(I am one)
What does that mean? Is it lossless like lossless webp or png, or is it lossy and well performing under some metric like PSNR? "no perceivable loss in quality" can mean anything, including a lossy codec.
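For reference, PSNR is about as simple as those metrics get: it's just a log-scaled mean squared error. A minimal sketch in Python/NumPy (the function name and the 8-bit peak default are my own choices, not anything from a spec):

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # truly lossless: the images are identical
    return 10.0 * np.log10(peak ** 2 / mse)
```

A codec can score a high PSNR and still look wrong (or vice versa), which is exactly why "no perceivable loss in quality" is a marketing phrase and not a measurement.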
There are a lot of reasons to prefer lossless over lossy, but there is always the "good enough" point with storage media. Film is not lossless, so it doesn't matter if the digital storage is. What matters is if the lossiness in encoding is good enough to work with at the same level as film.
For example, there was a study a few years ago at McGill with trained listeners on the effects of bitrate in MP3 and M4A audio encoding, which found that only a small fraction of mastering engineers preferred lossless over lossy-encoded audio (interestingly, some professional mixers and musicians found lossy encodings preferable to lossless, even for jazz and classical music). But trying to convince audiophiles that those codecs are comparable to lossless is a losing battle.
I can transcode FLAC to MP3 fast enough to stream it on my phone without issue. And if I decide to switch to Ogg or Opus or anything else later, I don't run into re-encoding issues down the line.
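For what it's worth, that transcode step is a one-liner if ffmpeg is installed. A sketch, with placeholder file names:

```python
import subprocess

# Re-encode a lossless FLAC master to MP3 for the phone. Because the
# source is lossless, switching to Opus/Ogg later starts from the same
# master and never stacks two generations of lossy artifacts.
subprocess.run(
    ["ffmpeg", "-i", "master.flac",
     "-codec:a", "libmp3lame", "-b:a", "320k",
     "phone.mp3"],
    check=True,
)
```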
Film/TV more generally, I think, tolerates slightly lossy codecs like REDCODE or ProRes, as there are often only 1-2 intermediate steps that could introduce extra loss. (E.g. the part of editing that is just 'cutting' is pass-through, but color grading would require a second encoding step.)
That's a lot of data. Now, RED at the time didn't have a way of recording to large disk arrays (unlike the Alexa), so they used their own proprietary SSD packs.
This limited the amount of storage, and therefore shooting time.
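To put rough numbers on "a lot of data", here's a back-of-the-envelope sketch, where the sensor size, bit depth, and frame rate are illustrative assumptions rather than RED's actual specs:

```python
# Uncompressed 4K RAW, assuming a 4096x2160 Bayer sensor with one
# 12-bit sample per photosite, shooting at 24 fps.
width, height = 4096, 2160
bits_per_sample = 12
fps = 24

bytes_per_frame = width * height * bits_per_sample / 8
bytes_per_minute = bytes_per_frame * fps * 60
print(f"{bytes_per_frame / 1e6:.1f} MB/frame")    # ~13.3 MB
print(f"{bytes_per_minute / 1e9:.1f} GB/minute")  # ~19.1 GB
```

At roughly 19 GB a minute, a 2007-era SSD pack holds only a handful of minutes of footage.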
So they needed a way of doing more than RLE compression.
This meant that they had to start chucking away some data. With standard JPEG, you throw away 3/4 of the colour information (kinda - the image is converted to a luma/chroma colour space first, and it's the two chroma channels that get subsampled) and then compress the rest.
The problem? In VFX, those chroma channels are crucial for "pulling a key" (green/blue screen work; the less clean the key, the more manual cleanup needed, which costs $$$). So all that 4K resolution will be useless, because in practice the bit that the VFX team needs will be < HD res.
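To make the subsampling concrete: 4:2:0 keeps one chroma sample per 2x2 block of pixels. A minimal NumPy sketch (not JPEG's actual code):

```python
import numpy as np

def subsample_420(chroma):
    """Keep one chroma sample per 2x2 pixel block, i.e. discard 3/4 of
    the colour-difference data. The luma channel is left untouched."""
    h, w = chroma.shape
    blocks = chroma[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))  # average each 2x2 block to one sample

# A 4096x2160 chroma plane comes out 2048x1080, which is why the colour
# detail a keyer relies on ends up below HD even on a "4K" frame.
```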
So RED used JPEG 2000, which uses wavelets to compress things. Roughly speaking, instead of storing a per-pixel value, you group chunks of the image together and store local averages plus the differences between neighbouring pixels, repeated across scales.
This doesn't reduce the resolution so much, and doesn't produce square artefacts like old-school JPEG. The problem is that it's quite CPU intensive. To the point that it would take >30 seconds to decode a frame.
GPUs make it trivial to do in real time now, but back then, it was a massive faff.
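To make the "averages and differences" idea concrete, here's one level of a Haar transform - the simplest possible wavelet, not the fancier one JPEG 2000 actually uses:

```python
import numpy as np

def haar_level(signal):
    """One level of a 1-D Haar wavelet transform: per-pair averages
    (the coarse band) and differences (the detail band). Smooth image
    regions produce near-zero details, which quantize and compress well."""
    pairs = signal.reshape(-1, 2).astype(np.float64)
    low = (pairs[:, 0] + pairs[:, 1]) / 2   # local average
    high = (pairs[:, 0] - pairs[:, 1]) / 2  # local difference
    return low, high

low, high = haar_level(np.array([10, 12, 200, 202, 50, 50, 48, 46]))
print(low)   # [ 11. 201.  50.  47.]
print(high)  # [-1. -1.  0.  1.]
```

Recurse on the coarse band and you get the multi-scale decomposition; throwing away small detail coefficients is where the (gentle, non-blocky) loss happens, and undoing all those levels per pixel is where the decode cost comes from.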
Also, RED are masters of bullshit and marketing. There is quality loss; it's just that they never tell you that.
Minor nitpick: they’re miniPCIe SATA (not NVMe) SSDs in a fancy housing with some fancy pin remapping for the outer connector
This is new to me, I didn't think it was that simple. They harped on in the old days about how revolutionary the minimags were.
Edit: I had a look at https://www.indiewire.com/2018/05/cannes-2018-camera-cinemat... and annoyingly I have to admit that ARRIRAW looks to be much more popular than I expected among smaller features. Like 10/32 specifically mention it, and some of the others are ambiguous.
> i.e. no /percievable/ loss in quality.
would be pretty redundant otherwise.
The only reason to describe a system as visually or perceptually lossless is because the encoding is lossy, even if very good, like an MP3 or JPEG at their highest quality settings.
For VFX, no.
That extra resolution and, crucially, colour dynamic range mean that less time and money has to be spent cleaning up green-screen masks.
It also means that there is more scope to do things like Day-for-night, change the colour temperature, and have full artistic freedom to change the look and feel of a movie without reshooting.
As for frame rate, I'm pretty sure we know from experiments that the average human eye can see differences in frame rate all the way up to 150-160 fps, and that the trained eye can detect images shown for less than 1/200 of a second.
So however marginal the gain is, it makes sense for professionals who take pride in having the absolute best quality. These are the same people who will fine-tune encoder settings for individual scenes in a movie just to get that tiny improvement.
If so, it would seem you can game the system by documenting your invention well and only filing for a patent years later.
Also, if that’s true, comparing when you made the invention to when others filed for a patent is unfair.
That should be a concern for RED, as a comment on https://nofilmschool.com/red-camera-apple-patent-challenge says:
”We were using a SI2K (Ari Presler camera) in August 2006... The first two shots of my reel were done with the first version of that camera, it was a sensor with a lens connected to a PC through Gigabit Ethernet...
Patenting years later basically gives people the right to keep using your invention, if they started implementing it before the patent date.
But then a lot of patents seem to be frivolous.
It may not take a genius to do what they did (Apple's point) but:
- They were the first to do it.
- They are using it commercially in their own products; they are not just trolls.
- They didn't just patent an idea. They built a whole system around it. REDCODE is not just a compression algorithm. It is a compression algorithm optimized for a certain type of professional movie camera, one that they invented.
REDCODE is JPEG 2000 in a tar file, with some metadata goop.
They didn't invent it; they spent a lot of time trying to obfuscate it, and were very put out when the VFX industry reverse engineered it. What's worse, for a good few years the tools they made to support it were horrific. RED Rockets were fragile FPGA boards that cost $5k and broke within months. The cameras themselves used to have terrible colour and rolling shutter.
They were not the first to make digital cinema cameras, they weren't even the best or cheapest at the time.
They _are_ trolls; a big example is "REDCINE-X PRO", which is a carbon copy of The Foundry's Hero (well, its ancestor).
In short, RED are almost as annoying to deal with professionally as Apple (hint: I've done both, at the same time). RED have worse fanboys though.
So why does anyone use them?
When they first came out, they had some excellent promise. They were the first digital 4K movie camera. So if you wanted a digital end-to-end 4K workflow, for about 4 years RED was the only show in town. However, if you wanted to use normal lenses and dump direct to a normal plate format, the Alexa was what you really wanted.
Firstly, you couldn't buy or hire one; you had to know someone.
Secondly, you had to get a bunch of adaptors for _everything_, which drove up the cost significantly.
Thirdly, no global shutter meant that it wobbled horribly when you panned left and right.
Fourthly, its colour was off, and noisy as hell (visual noise). Some later generations are practically colourblind (cough Hobbit cough).
If they had actually produced and shipped the RED One when they said they would, my opinion would be different. If they'd shipped the Epic on price and spec, again, they could have transformed the film industry. But they didn't. By the time 4K digital workflows were practical for most people, RED had lost its shine.
Why do I personally dislike them?
Because, whilst there are some lovely people that work there, the level of cultism and secrecy is frustratingly annoying.
I'm trying to do VFX on a shot. I've been given a couple of TB of .r3d files, and it takes 30 seconds a frame to convert into a usable format. Even then I have to fiddle with it, because the colour profiles are all messed up, or some other stupid issue.
I talk to the RED team, and they try to sell me a $5k card that only works in a Mac (I was working at a Linux shop at the time).
Fortunately Nuke 5/6 had native .r3d handling, which meant that we could drop out to the farm and blast through all the footage at once, with decent conversion options.
Why did people use them?
They are much smaller than a film camera. It's practical to mount them on a moving rig, hand operate them, etc, without loss of quality.
Because there is no film, you can bump them, wobble them, sit them next to explosions and not worry that the film is warping.
Way more resolution to play with. This means you could push the ISO more and just halve the resolution to get usable footage. Which you had to do, making it basically an Alexa competitor.
The key difference between the RED One and the Alexa is that the Alexa couldn't record to an onboard device, while the RED could. Also, the resolution of the Alexa's sensor was lower, but the sensor size was correct, and it had a much greater usable dynamic range than the RED One.
The concept was nice, the company, and the fanboys can get to fuck.
That's exactly why people still buy Apple hardware at all - their friends/peers already have it. It's surely not due to superiority by any metric at that price point.
I challenge this assumption. The iPad Pro is by far the best computing device for a certain subset of the market, not to mention the extremely fast Apple-made CPU. The Apple ecosystem is also leading in some of their features. And privacy-wise they are a step ahead of their competitors.
But the iMac is competitive for a machine with its specs, especially if you add in the advantage of OSX (IMO, better than Linux or Windows for a large-screen desktop).
And the iPhone/iPad line are the only mobile devices with decent specs, decent apps, and not built on Adware and privacy leakage. Worth money if you can afford it.
If they're so much more than a patent, then they shouldn't need a patent.
"A patent may not be obtained though the invention ... if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains."
Apple is saying this one is just an obvious application of pre-existing patents.
> They were the first to do it.
The only reason for not doing this to video earlier was bandwidth and storage limitations. RAW footage is massive, and we might have forgotten just how bad the storage situation was over a decade ago.
As Apple says, RED has brought nothing to the table that an orangutan of average intellect would not have been able to come up with once storage media were finally capable of keeping up with RAW video footage.
Interestingly, BlackMagic recently removed CinemaDNG (basically multi-frame DNG, AIUI) from their cameras, citing patent threats from RED.
EDIT: What is AIUI by the way?
Edit: By the way, BRAW is a lossy format. Not knocking it (it looks like it's probably great for most purposes), but if you really want a lossless format like CinemaDNG, BRAW is not that.
Maybe the fact that he's reporting about the patent is linked to his own history with RED.
This is a controversial topic that has lately been the subject of a lot of discussion among people who work in the industry, owners of RED cameras, and other significant stakeholders.
1) They just discovered this British inventor's work and thought it was worth petitioning in order to save a few dollars for Final Cut Pro.
2) They plan to allow the iPhone/iPad to output to ProRes RAW and don't want to pay the significant royalties and so have been actively looking for prior art.
That's also valid for most patents from Apple, including the iPhone.
There's one often-ridiculed "design patent" on the rounded-rectangle-in-a-grid home screen layout. But these "design patents" are a lot closer to copyright than actual patents.
The mere existence of Android phones is proof that at least one of the following two beliefs of yours must be wrong:
- That trivial stuff is easily patented
- That Apple is willing to abuse the patent system for anti-competitive reasons.
Okay, bevelled rectangles.
Instead of pinching pennies like a Depression-era farmer, maybe Apple should spend some of that $250B in cash to acquire RED.
They'd get the patent and the royalties, AND could put RED sensors in their iPhones and charge another $250/unit.
All these patents are now holding back evolution far too much.
If that were remotely true, they would be filing lawsuits all over the place. In fact, the only time I can recall them suing anyone was Samsung, who is hardly an innocent party.
Of course, if you think these patents are junk that never should have issued, Apple is doing God’s work.
> Funny how a patent warmonger company like Apple tries to fight against others' patents
Implying that Apple is a particularly aggressive company when it comes to patent enforcement, which I think most of us can agree it is not; Apple is no Qualcomm or IBM.
They also had a habit of suing over trivial and obvious design patents. https://en.m.wikipedia.org/wiki/Smartphone_patent_wars
However, they are nowhere close to some of the worst offenders.