I believe the recently announced Afterburner card for the upcoming Mac Pro is an FPGA. [1] Maybe they're trying to make a reconfigurable accelerator? It would be amazing to be able to decode REDCODE RAW, ProRes RAW, etc., then turn around and reprogram the FPGA to accelerate encoding of H.264/H.265/AV1/etc. faster than a CPU could.
Further, I think RED has a pretty close relationship with Nvidia [2]. For RED customers, it'd stink to buy a beefy Mac Pro and not be able to edit 8K REDCODE RAW as well as they could on other OSes that have better Nvidia support.
Especially when you consider that when the Mac Pro finally ships, it will probably be up against Zen 2 Threadripper (at least 32 cores, likely more), Nvidia GPUs, and PCIe 4.0 SSDs at a significantly lower price point. Not having solid REDCODE RAW support would be a huge miss for Apple.
> it'd stink to buy a beefy Mac Pro and not be able to edit 8K REDCODE RAW as well as they could on other OSes
I don't think this lawsuit will have any bearing on REDCODE hardware decode on Mac OS. Especially since it would likely be RED who is responsible for that support.
> RED wanted to find a way to make digital compression visually lossless (i.e. no perceivable loss in quality).
What does that mean? Is it lossless like lossless webp or png, or is it lossy and well performing under some metric like PSNR? "no perceivable loss in quality" can mean anything, including a lossy codec.
Having worked a bit in the field (on the audio side of film), I can say "perceptually lossless" means that in a double-blind study, an audience would not be able to discern between the lossless and the lossy encoded versions. The only way to verify it is by testing with real people, and due to the nature of the business, it's easier to do it with professionals, who have a higher bar for perception.
There are a lot of reasons to prefer lossless over lossy, but there is always the "good enough" point with storage media. Film is not lossless, so it doesn't matter if the digital storage is. What matters is if the lossiness in encoding is good enough to work with at the same level as film.
There's a weird bar for confidence, to be frank. Most often I've found that people disregard studies using listeners with "golden ears" because those aren't average ears, or, when studies find that only those with "golden ears" can hear a difference, the audience tends to feel they're in that range.
For example, there was a study a few years ago at McGill with trained listeners on the effects of bitrate with mp3 and m4a audio encoding that found that only a slim number of mastering engineers preferred lossless over lossy encoded audio (interestingly enough, some professional mixers and musicians found lossy encodings preferable to lossless even for jazz and classical music). But trying to convince audiophiles that those codecs are comparable to lossless is a losing battle.
As someone with minor audiophilia, I'd say there is still a practical difference: when you have a lossless version, you can re-encode to any lossy format without losing quality over time.
I can transcode FLAC to MP3 fast enough to stream it to my phone without issue, and if I later decide to use Ogg, Opus, or anything else, I don't run into generational re-encoding issues.
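A toy sketch of why that matters, using a made-up "codec" that just quantises sample values (real codecs are vastly more sophisticated; this only illustrates the generational-loss effect):

```python
def lossy_encode(samples, step):
    """Hypothetical 'lossy codec': snap every sample to a grid of size `step`."""
    return [round(s / step) * step for s in samples]

master = [3, 14, 159, 26, 53, 58, 97, 93]   # pretend this is the lossless original

# Chaining lossy formats (say MP3 -> Ogg -> Opus) compounds the error:
chained = lossy_encode(master, step=8)      # first lossy release
chained = lossy_encode(chained, step=6)     # transcoded again
chained = lossy_encode(chained, step=10)    # and again

# Encoding each target straight from the lossless master pays the lossy
# cost exactly once, no matter how many formats you ship:
direct = lossy_encode(master, step=10)

err_chained = sum(abs(a - b) for a, b in zip(master, chained))
err_direct = sum(abs(a - b) for a, b in zip(master, direct))
print(err_chained, err_direct)              # 37 vs 23 here: chaining hurts more
```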
It's not a given, though, I imagine. For audio, for example, you could discard the inaudible parts of the spectrum and compress the rest losslessly. In that case only the first pass is lossy and quality doesn't degrade over successive generations. But I'm not sure the same exists for images.
In the VFX industry you don't round-trip lossy codecs; the first step is to decode to something lossless like EXR or TIFF, and those are what you pipe through your workflows. (With maybe low-res JPEG "proxies" to make iteration a bit quicker.)
Film/TV more generally, I think, tolerates slightly lossy codecs like RED or ProRes, as there are often only 1-2 intermediate steps that could cause extra loss. (E.g. the part of editing that is just 'cutting' is pass-through, but color-grading would require a second encoding step.)
Cinema cameras need to be able to record at 4K (real 4K, 4048x2028) at up to 32 bits per channel, 24 times a second.
That's a lot of data. Now, RED at the time didn't have a way of recording to large disk arrays (unlike the Alexa), so they used their own SSD pack things.
This limited storage space, and therefore shooting time.
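A rough back-of-the-envelope sketch of that data rate (the figures below are illustrative assumptions, not RED's actual sensor specs):

```python
width, height = 4096, 2160   # assumed DCI-4K photosite count, not RED's exact sensor
bits_per_sample = 16         # assumed raw readout depth per photosite
fps = 24

bytes_per_frame = width * height * bits_per_sample // 8   # ~17.7 MB per frame
bytes_per_second = bytes_per_frame * fps                  # ~425 MB/s sustained
gb_per_minute = bytes_per_second * 60 / 1e9               # ~25 GB per minute of footage
print(bytes_per_frame, bytes_per_second, gb_per_minute)
```

At roughly 25 GB a minute under those assumptions, even a large SSD pack fills up in well under an hour of shooting.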
So they needed a way of doing more than RLE compression.
This meant that they had to start throwing away some data. With standard JPEG, you convert to a different colour space (YCbCr) and throw away 3/4 of the chroma (colour-difference) information, then compress the rest.
The problem? In VFX the blue and green channels are crucial for "pulling a key" (green/blue-screen work; the less clean those channels are, the more manual cleanup is needed, which costs $$$). So all that 4K resolution would be useless, because in practice the bit that the VFX team needs would be < HD res.
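A minimal sketch of that chroma-subsampling step (toy numpy code, not JPEG's actual pipeline), just to show how much colour detail a keyer loses:

```python
import numpy as np

def subsample_420(chroma):
    """4:2:0-style subsampling: keep one chroma sample per 2x2 block,
    i.e. discard 3/4 of the colour-difference information."""
    return chroma[::2, ::2]

def upsample_nearest(chroma_small, shape):
    """Crude nearest-neighbour reconstruction back to full size."""
    big = np.repeat(np.repeat(chroma_small, 2, axis=0), 2, axis=1)
    return big[:shape[0], :shape[1]]

# A toy full-resolution chroma plane (e.g. Cb) with a sharp colour edge:
cb = np.zeros((4, 8))
cb[:, 3:] = 100.0                       # hard colour edge, as at a green-screen boundary
cb_small = subsample_420(cb)
cb_back = upsample_nearest(cb_small, cb.shape)
print(np.abs(cb - cb_back).max())       # 100.0: the colour edge has smeared by a pixel
```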
So RED used JPEG2000, which uses wavelets to compress things. Roughly speaking, instead of storing a per-pixel value, you group chunks of the image together and store the _change in frequency_, that is, the difference in colour between neighbouring pixels.
This doesn't reduce the resolution as much and doesn't produce square artefacts like old-school JPEG. The problem is that it's quite CPU intensive, to the point that it would take >30 seconds to decode a frame.
GPUs make it trivial to do in real time now, but back then it was a massive faff.
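For a flavour of the idea (not REDCODE's or JPEG2000's actual transform, which uses more elaborate wavelets than this toy Haar example):

```python
import numpy as np

def haar_1d(row):
    """One level of a Haar transform: store pairwise averages (smooth part)
    and pairwise differences (detail part)."""
    pairs = row.reshape(-1, 2)
    avg = (pairs[:, 0] + pairs[:, 1]) / 2.0
    diff = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return np.concatenate([avg, diff])

def haar_2d(block):
    """Apply the 1D transform across rows, then across columns."""
    rows = np.apply_along_axis(haar_1d, 1, block.astype(float))
    return np.apply_along_axis(haar_1d, 0, rows)

# A smooth 8x8 gradient block: most of the output coefficients come out
# tiny or zero, and small coefficients are exactly what compresses well.
block = np.tile(np.arange(8), (8, 1))
print(haar_2d(block).round(2))
```

Real wavelet encoders then quantise or drop the small coefficients, which is where the (small) quality loss comes from.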
Also, RED are masters of bullshit and marketing. There is quality loss; it's just that they never tell you that.
It means lossy. I think the actual situation, which the article kind of misstates, is that footage from cinema cameras like RED's is likely to undergo significant grading and postprocessing, which is where compression artifacts are likely to become a problem. Annoyingly, "Raw" as applied to video codecs seems to be a marketing term that means "you can grade it and still probably not notice compression artifacts."
Virtually all video capture is lossy, even on multi-million-dollar movies, because lossless capture is still fairly impractical in terms of storage, and there's generally not much advantage over a good lossy codec. But I don't think film people generally know that, and I suspect some of them would get upset if you tried to make them use a "lossy" camera.
You should define what lossy means for you, because your statement is incorrect for me. Shooting Raw with an ARRI Alexa is lossless; this is not the case when you shoot R3D (Raw for RED). R3D uses JPEG2000 compression and will degrade the data the sensor captured to some degree. This can have dramatic consequences when doing green-screen keying and/or shooting in low light.
Is the part you're disagreeing with "there's generally not much advantage over a good lossy codec"? There are definitely times it's not true, but I think I covered myself with "generally". Most footage is shot with adequate exposure and doesn't require keying etc.
I'm disagreeing with the "Virtually all video capture is lossy" part, which is simply not true. ARRI cameras, and especially the Alexa, are some of the most used cameras in the movie industry, and the Alexa Raw format is lossless.
Those cameras also record ProRes though, which I would expect to be used far more commonly. At least outside of $100m+ movies where money is no object.
Edit: I had a look at https://www.indiewire.com/2018/05/cannes-2018-camera-cinemat... and annoyingly I have to admit that ARRIRAW looks to be much more popular than I expected among smaller features. Like 10/32 specifically mention it, and some of the others are ambiguous.
Sure they can, but directors and DoPs will favour a Raw file format, e.g. ARRIRAW or R3D, over ProRes. Disclaimer: I work in the industry and I'm in touch with most of the camera vendors, either directly or indirectly, and with a lot of people using them too. I'm also a user.
Lossless unqualified would mean lossless: recoverable exactly bit-for-bit, like PNG, FLAC, or ZIP.
The only reason to describe a system as visually or perceptually lossless is because the encoding is lossy, even if very good, like an MP3 or JPEG at their highest quality settings.
I'm finding some of the disagreements under this comment frustrating. Movie people, please don't redefine what lossless actually means. If you encode, say, a cryptographic key or a text document (say, your employment contract) through the codec and then retrieve it and it comes back garbled or useless, it's lossy! It doesn't matter if you can't see a difference with visual data; it has changed the data. That is lossy. That is what lossy means. That is what it has always meant and that is what it always will mean. And if you try to change the meaning for a narrow context driven by marketing, well, words cease to have a meaning, don't they.
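That definition is mechanical enough to put in a few lines of code; a minimal sketch using Python's zlib as a stand-in for any genuinely lossless codec:

```python
import zlib

payload = b"an employment contract, a crypto key, any bytes at all \x00\xff\x7f"
restored = zlib.decompress(zlib.compress(payload))
assert restored == payload   # lossless: every bit comes back identical

# A lossy codec (JPEG, MP3, REDCODE, ...) cannot pass this test: decoding
# returns something perceptually close to the input, not the original bits.
```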
In audio compression, most codecs remove data outside of the typical human hearing range to improve the compression ratio - people can't hear that part anyway, so there's no reason to save it. I guess RED wanted to do the same thing for visual data.
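A crude sketch of that idea (a hard spectral cutoff; real perceptual codecs use far more sophisticated psychoacoustic models):

```python
import numpy as np

def drop_inaudible(samples, sample_rate, cutoff_hz=20_000):
    """Zero out spectral content above the nominal limit of human hearing."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0
    return np.fft.irfft(spectrum, n=len(samples))

# A 96 kHz capture containing a 30 kHz component nobody can hear:
rate = 96_000
t = np.arange(rate) / rate
audio = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 30_000 * t)
filtered = drop_inaudible(audio, rate)   # the 440 Hz tone survives, the 30 kHz one is gone
```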
Yes, and back when it was introduced with MP3, many people were really mad about it because it made quality much worse. Nowadays encoders are better, and with high enough bitrates you can't tell the difference from lossless.
Storing hundreds or thousands of hours of completely lossless 4K/5K/8K source material at high frame rates, or at native sensor resolution, is just way too expensive for essentially no marginal return; it amounts to audiophile-like purism/placebo magical beliefs, because there are fundamental psychovisual and psychoacoustic thresholds that no human can discern in a given playback venue (different for a large IMAX screen, a normal theater, a home theater, and a small phablet, and different again depending on an individual's double-blinded or bio-electrically/fMRI-measured thresholds). The goal of pseudo-"lossless" compression is to be very conservative about what to throw out, because transcoding to other formats can always (and must) throw away more data later. It really is marketing wank, because there is some compression, but the idea is that it's minimal enough not to matter in the final cut.
That extra resolution, and crucially the colour dynamic range, means that less time and money has to be spent cleaning up green-screen masks.
It also means that there is more scope to do things like Day-for-night, change the colour temperature, and have full artistic freedom to change the look and feel of a movie without reshooting.
It's a bit more nuanced than that. Resolution is meaningless unless you know the size of the image and the distance to the viewer, so keeping a "lossless" copy makes sense for people professionally buying very expensive hardware. It's no different than keeping the original reels of a movie so you can scan them at higher and higher resolutions as technology improves. In other words, the final product will always, at best, be limited by the format the source is kept in. That is not to say that even amateur videophiles have a use for it; they just want the best and mimic the professionals.
As for frame rate I'm pretty sure we know from experiments that the average human eye can see the difference in frame rate all the way up to 150-160 fps, and that the trained eye can detect images shown for less than 1/200 second.
So however marginal the gain is, it makes sense for professionals that take pride in having the absolutely best quality. It's the same people that will finetune encoder settings for the individual scenes in a movie just to get that tiny improvement.
"We were using a SI2K (Ari Presler camera) in August 2006... The first two shots of my reel were done with the first version of that camera, it was a sensor with a lens connected to a PC through Gigabit Ethernet...
There's a difference between 'first to file' and 'first to invent' in patents. Most (all?) territories use the former, but the US only changed in 2013. I'm not sure how that applies retrospectively. More detail at https://en.wikipedia.org/wiki/First_to_file_and_first_to_inv...
As I understand it, it matters for the "prior art" defense. Basically, if you patent someone else's work, that invalidates the patent. But if you can prove that you were the first to work on this, it doesn't really matter when you patented it.
Patenting years later basically gives people the right to use your patent, if they started implementing it before the patent date.
The USA and certain other countries allow a grace period between disclosure of an invention and the patent application for that invention before that invention disclosure becomes prior art. For the USA, the grace period is 12 months, for other countries, it's 6 months.
From an engineering perspective this does seem trivial, something that would be on the table in almost any pipeline that needs a speedup and allows for post-processing.
IANAL, but if we allow software patents, this one should be valid.
It may not take a genius to do what they did (Apple's point) but:
- They were the first to do it.
- They are using it commercially in their own products; they are not just trolls.
- They didn't just patent an idea. They built a whole system around it. REDCODE is not just a compression algorithm. It is a compression algorithm optimized for a certain type of professional movie camera, one that they invented.
> REDCODE is not just a compression algorithm. It is a compression algorithm optimized for a certain type of professional movie camera, one that they invented
Ah, no.
REDCODE is JPEG2000 in a tar file, with some metadata goop.
They didn't invent it, they spent a lot of time trying to obfuscate it, and were very put out when the VFX industry reverse engineered it. What's worse, for a good few years the tools they made to support it were horrific. RED Rockets were fragile FPGA boards that cost $5k and broke within months. The cameras themselves used to have terrible colour and rolling shutter.
They were not the first to make digital cinema cameras, they weren't even the best or cheapest at the time.
They _are_ trolls; a big example is "REDCINE-X PRO", which is a carbon copy of The Foundry's Hero (well, its ancestor).
In short, RED are almost as annoying to deal with professionally as Apple (hint: I've done both, at the same time). RED have worse fanboys, though.
I'm sorry but I don't understand, you seem to be saying that RED produce bad cameras, encode their files in a trivial way and then provide crap post-processing tools. They weren't even first to market.
You seem to be expecting consumers to make completely logical decisions on these matters. Past a certain saturation threshold, name recognition alone can be enough to tip the scales of consumer preference.
That's exactly why people still buy Apple hardware at all anymore - their friends/peers already have them. It's surely not due to superiority by any metric for the price point.
"That's exactly why people still buy Apple hardware at all anymore - their friends/peers already have them. It's surely not due to superiority by any metric for the price point."
I challenge this assumption. The iPad Pro is by far the best computing device for a certain subset of the market, not to mention the extremely fast Apple-made CPU. The Apple ecosystem is also leading in some of their features. And privacy-wise they are a step ahead of their competitors.
I disagree about Apple. Their laptops are junk hardware, agreed.
But the iMac is competitive for a machine with its specs, especially if you add in the advantage of OSX (IMO, better than Linux or Windows for a large-screen desktop).
And the iPhone/iPad line are the only mobile devices with decent specs, decent apps, and not built on Adware and privacy leakage. Worth money if you can afford it.
I really want to know what plane of existence you're on where MacBooks are junk. If all you care about is spec sheet per dollar then sure, you can do a little better with other consumer laptop vendors, but I don't think I would ever call them junk. Pricey for sure, but leagues above similar HP/Dell/Lenovo sets.
When it first came out, the RED One had some excellent promise. It was the first digital 4K movie camera, so if you wanted a digital end-to-end 4K workflow, for about four years RED was the only show in town. However, if you wanted to use normal lenses and dump direct to a normal plate format, the Alexa was what you really wanted.
A rundown:
Firstly, you couldn't buy or hire one; you had to know someone.
Secondly, you had to get a bunch of adaptors for _everything_, which drove up the cost significantly.
Thirdly, no global shutter meant that it wobbled horribly when you panned left and right.
Fourthly, its colour was off and noisy as hell (visual noise); some later generations are practically colourblind (cough, Hobbit, cough).
If they had actually produced and shipped the RED One when they said they would, my opinion would have been different. If they'd shipped the Epic on price and spec, again, they could have transformed the film industry. But they didn't. By the time 4K digital workflows were practical for most people, RED had lost its shine.
Why do I personally dislike them?
Because, whilst there are some lovely people that work there, the level of cultism and secrecy is frustratingly annoying.
I'm trying to do VFX on a shot, I've been given a couple of TB of .r3d files, and it takes 30 seconds a frame to get them into a usable format. Even then I have to fiddle with it, because the colour profiles are all messed up, or there's some other stupid issue.
I talk to the RED team, and they try to sell me a $5k card that only works in a Mac (I was working at a Linux shop at the time).
Fortunately Nuke 5/6 had native .r3d handling, which meant that we could drop out to the farm and blast through all the footage at once, with decent conversion options.
Why did people use them?
Size
They are much smaller than a film camera. It's practical to mount them on a moving rig, hand-operate them, and so on, without loss of quality.
Mobility
Because there is no film, you can bump them, wobble them, sit them next to explosions and not worry that the film is warping.
4K
Way more resolution to play with. This means you could push the ISO more and just halve the resolution and get usable footage, which you had to do, making it basically an Alexa competitor.
The key difference between the RED One and the Alexa is that the Alexa couldn't record to an onboard device, while the RED could. Also, the resolution of the Alexa's sensor was lower, but the sensor size was correct. It also had a much greater usable dynamic range than the RED One.
The concept was nice; the company and the fanboys can get to fuck.
> - They didn't just patent an idea. They built a whole system around it. REDCODE is not just a compression algorithm. It is a compression algorithm optimized for a certain type of professional movie camera, one that they invented.
If they're so much more than a patent, then they shouldn't need a patent.
Patents (software or not) must be non-obvious as well:
"A patent may not be obtained though the invention ... if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains."
Apple is saying this one is just an obvious application of pre-existing patents.
... Which has been done in photography for single frames far longer than this patent has been around. Adobe launched their DNG RAW format in 2004, for example.
The only reason for not doing this to video earlier was just a question of bandwidth and storage limitations. RAW footage is massive, and we might have forgotten just how bad the storage situation was over a decade ago.
As Apple says, RED has brought nothing to the table that an orangutan of average intellect would not have been able to come up with when storage mediums were finally capable of keeping up with RAW video footage.
Edit: By the way, BRAW is a lossy format - not knocking it, it looks like it's probably great for most purposes, but if you really want a lossless format like CinemaDNG, BRAW is not that.
RED's patent might not be valid, according to this video [1]. Also, RED continues to label its products as "Made in the USA" when they do not meet the requirements to be allowed to do so. In fact they may not even qualify to label the products "Assembled in the USA".
This is a controversial topic that has been the subject of a lot of discussion lately with people who work in the industry, owners of RED cameras, and other significant stakeholders.
1) They just discovered this British inventor's work and thought it was worth petitioning in order to save a few dollars for Final Cut Pro.
2) They plan to allow the iPhone/iPad to output to ProRes RAW and don't want to pay the significant royalties and so have been actively looking for prior art.
3) The IPR process is expensive to defend and has a high likelihood of cancelling at least some claims, so, like many patent battles is not about the patent at all but is about creating uncertainty and risk in order to alter the negotiating landscape.
Also of note, this is probably the same reason why Blackmagic dropped CinemaDNG [1][2] (Adobe's raw video format) and made their own version, Blackmagic Raw [3].
Correct - it was more of a legal issue. The cool thing about these cameras having FPGAs in them is that they can support new codecs without any changes to the hardware.
There's one often-ridiculed "design patent" on the rounded-rectangle-in-a-grid home screen layout. But these "design patents" are a lot closer to copyright than actual patents.
The mere existence of Android phones is proof that at least one of the following two beliefs of yours must be wrong:
- That trivial stuff is easily patented
- That Apple is willing to abuse the patent system for anti-competitive reasons.
If that were remotely true they would be filing lawsuits all over the place. In fact the only time I can recall them suing anyone was Samsung, who is hardly an innocent party.
Apple has filed more inter partes review petitions than any other company since the process became available a few years back. IPRs are very expensive for a patent owner to defend (low- to mid-six figures), so from the patent owner's perspective Apple definitely fits the description of warmonger.
Of course, if you think these patents are junk that never should have issued, Apple is doing God’s work.
> Funny how a patent warmonger company like Apple tries to fight against others' patents
Implying that Apple is a particularly aggressive company when it comes to patent enforcement, which I think almost all of us can agree they're not - they're no Qualcomm or IBM.
Apple has sued over plenty of patents before, including suing for "data detectors", e.g. linkifying onscreen text or adding actions based on regexp or other pattern matching - some trivial and obvious to most engineers, and with prior art in old email/news readers.
You are criticizing them for making use of a system they have no choice but to aggressively participate in or face destruction by someone who does. Patent right payments and lawsuits are in the billions.
[1] https://www.redsharknews.com/technology/item/6408-apple-s-ma...
[2] https://www.youtube.com/watch?v=bi79vUO0GMk