
> A task like editing an 8K RED RAW video file that might have taken a $5000 machine before can now be done on a $699 Mac Mini M1 or a fan-less MacBook Air that costs $999

That’s insanely great. Maybe I am exaggerating, but Apple’s M1 might be the best innovation in the tech industry in the past 5 years.



I did this test last night with my buddy's 6K RED RAW footage. I could play 6K in realtime at 1/4 quality, while my fully loaded MBP with an i7 + 32GB of RAM could only play back at 1/8 quality. Keep in mind real editors NEVER edit raw footage; they use proxies.

The really impressive part was that the Mac mini did NOT spin up the fan during playback; it was completely silent. The i7 MacBook Pro sounded like a jet turbine spinning up within 30 seconds. Awesome.


It's very easy to perform well on a specific benchmark when there's dedicated hardware for it.

I'm not saying this isn't good; it's great for people who edit video, but it isn't a general indicator of performance (which is reportedly good!).


> It's very easy to perform well on a specific benchmark when there's dedicated hardware for it.

I'd say the same about GPUs: great for people who game, but beyond a certain baseline, pretty pointless for most of us.

> I'm not saying this isn't good; it's great for people who edit video, but it isn't a general indicator of performance

Umm, it's also great for people who watch YouTube or Netflix. Zoom calls use the encoder and the decoder. Fundamentally, modern computers do a crapload of video (and audio!) decoding and encoding. Arguably, for most of us it's more important than having a high-performance GPU.

This is all trebly important when you are using Netflix, YouTube, HBO Max, Zoom, Skype, etc. on battery, where the specialized encoder uses about 1/3 the power of the CPU.


> […] the specialized encoder uses about 1/3 the power of the CPU.

As far as I know, dedicated HW versus the CPU is closer to 1/100 the power, although I admit I don’t have a source for this claim.


GPUs, unlike video decoders, are programmable and general-purpose.

As for Zoom, YouTube, and Netflix, existing hardware is more than good enough. No one is streaming 8K RAW for a conference call. Unless you're an editor, you won't see much of a benefit.


> As for Zoom, YouTube, and Netflix, existing hardware is more than good enough.

I didn't suggest these were jobs existing CPUs struggle with. I said that dedicated video encode/decode makes the machine much more efficient, which increases battery life.


But existing CPUs already use accelerated decode for these tasks, and have for years and years. Those hardware decode blocks just aren't powerful enough for 6K RAW video, but they are fine for YouTube, Netflix, and Zoom, and those workloads are already accelerated today.
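You can see this on your own machine with a rough sketch like the one below (assuming ffmpeg was built with VideoToolbox support and you have some local sample.mp4; the file name is just illustrative). It decodes the same clip with and without the hardware path; on an Intel Mac the hardware path typically goes through Quick Sync via VideoToolbox.

    # Hedged sketch: assumes ffmpeg with VideoToolbox support and a local sample.mp4
    import subprocess, time

    def decode(extra_args):
        start = time.time()
        subprocess.run(["ffmpeg", "-v", "error", *extra_args,
                        "-i", "sample.mp4", "-f", "null", "-"], check=True)
        return time.time() - start

    sw = decode([])                            # software decode on the CPU
    hw = decode(["-hwaccel", "videotoolbox"])  # macOS fixed-function decode block
    print(f"software: {sw:.1f}s, hardware: {hw:.1f}s")

Watching CPU usage in Activity Monitor during each run shows the efficiency gap more clearly than wall time alone.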

So there is literally no benefit.


> So there is literally no benefit.

If there were no benefit, Apple wouldn't be able to decode 8K video with a low-end Mac mini. People wouldn't be seeing vastly better battery life when viewing videos and using Zoom.


8K video decode isn't very useful when you can barely drive an 8K display, and certainly not for the average consumer.

As for the wonderful battery life with Zoom, I am fairly certain that this is because Intel CPUs are on a worse process node, which cripples the efficiency of their video decode.

The correct comparison would be with the 7nm or imminent 5nm Renoir APUs that have accelerated decode on an actually good process. Which is what you should compare M1 against, anyway.

But sure, if you want to compare them against obsolete Intel chips, you can, and you'll find improved battery life. It's just not a logical comparison, as Intel isn't the competition for the M1 chips; the competition is AMD. And AMD does have high-efficiency accelerated video decode on their laptop chips, and it even supports 8K decoding, though it's almost useless. It is less useless than on an M1 computer, though, because at least then you have enough I/O to actually run an 8K screen.


> 8K video decode isn't very useful when you can barely drive an 8K display, and certainly not for the average consumer.

You are talking in circles. This exact same hardware is used to decode lower-resolution video. The benefits reach down to 6K, 4K, 2K, 1080p, 720p, and so on, for any encoding you do.

> The correct comparison would be with the 7nm or imminent 5nm Renoir APUs that have accelerated decode on an actually good process. Which is what you should compare M1 against, anyway.

How exactly do you compare an unreleased product to an actual shipping one? Do we go to the land of hypothetical benchmarks where you just make up numbers for the unshipped product?

> And AMD does have high-efficiency accelerated video decode on their laptop chips, and it even supports 8K decoding, though it's almost useless.

Please share some details on these AMD-based $699 systems which can edit 8K video. No one is claiming you can't edit 8K video on other systems. The entire point is that you can do this on the cheapest system in Apple's lineup.


The 7nm Renoir APUs already came out. As for 5nm, we don't have those yet (they should release in a few months), but we do have processors of the same architecture on a different process.

The M1 Mac Mini can only edit 8K video at fairly low quality if you use the accelerated encode. If you're actually going to do real editing, you'll only be using the 8K decode, and for that you can look at literally any Renoir APU system.

The cheapest system with a Renoir APU capable of accelerated 8K decode is $340, so about half the cost of the Mac Mini.

As for this:

> You are talking in circles. This exact same hardware is used to decode lower-resolution video. The benefits reach down to 6K, 4K, 2K, 1080p, 720p, and so on, for any encoding you do.

It's only talking in circles if you ignore the rest of the comment. Accelerated encode and decode with similar architectural efficiency is already there. The main advantage Apple has here is that, for a few months, they have a more power efficient process.

As for encode, literally no one has a solid use case for hardware encoding above 1080p on a laptop. For streaming video, anything over 1080p is useless on a laptop, and for actual video encoding, no one uses embedded accelerated encode because it's inherently lower quality.

But sure, if for some absurd reason you want to edit video directly in 8K and don't care about the abysmal rendering times at high quality, you can buy a $340 Renoir SBC, enable hardware decode in your favorite video editing software, and be on your merry way with accelerated real-time decode of 8K, as long as your video files are H.264 or H.265.
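If you're not sure whether your footage is even in a codec the fixed-function block handles, a quick ffprobe check works. This is a rough sketch; it assumes ffmpeg/ffprobe is installed and "clip.mov" is a placeholder name.

    # Hedged sketch: print the video codec of a clip, so you know whether the
    # H.264/H.265 hardware decode path applies at all
    import subprocess

    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1", "clip.mov"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())   # e.g. "h264", "hevc", or "prores"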


I think many folks' experience with Zoom would show the opposite. Perhaps it's not very efficient software, but battery life often tanks, computers get very warm, and fans start spinning.

Any improvement to that is a very welcome change, if you ask me.


Even more than k8s?


Debatable, but for personal computing there is no contest.


The RAM limitation on the first gen M1s makes this claim a bit dubious.



Just for the future: you always have the option to formulate an argument in words rather than simply copying and pasting link "salvos" at each other.

I might even go so far as to say it might promote a better discussion!


That article is only looking at whether the CPU is fast enough to keep up with an import. It's basically a toy benchmark. You're going to be butting up against the memory limit in no time once you start actually editing.


That's nice. But what does CPU/GPU horsepower have to do with memory?

If I want to spin up a bunch of VMs to do pre-commit test builds in clean environments, and each needs RAM for the OS and user land, being able to edit a lot of raw video does nothing for me. I'm generally fine running macOS (or Linux), but sometimes I need to boot up Windows in a VM for specialized apps: how do I assign >16GB of memory to it if I only have 8-16GB of RAM? Even with fast storage, I'm not enamoured with the idea of relying on swap.


Setting aside that you are way off thread here...

> how do I assign >16GB of memory to it if I only have 8-16GB of RAM?

This is Apple's slowest/lowest-performance M-series CPU.

Complaining that the CPU they built for the MacBook Air and the lowest-end MacBook Pro doesn't have 32GB of RAM misses the entire picture. This is Apple's first and lowest-end M-series chip, and it's blowing away Intel chips with discrete GPUs and more RAM. Their higher-end processors, which will be coming out over the next couple of years, are likely to be much better... and will support 32GB of RAM. In fact since Apple is migrating the entire line-up, it's likely the next generation of CPUs will support discrete RAM, so the Mac Pro can offer systems with massive amounts of RAM, as the current Mac Pro does.

> I need to boot up Windows in a VM for specialized apps

Aside from getting ARM Windows running on the Mac hypervisor, Windows VMs seem pretty unlikely. Another possibility is someone porting or creating an x86 emulator to run on the hypervisor.

Aside from that, CrossOver by CodeWeavers or something like AWS WorkSpaces are your best bets.


> In fact since Apple is migrating the entire line-up, it's likely the next generation of CPUs will support discrete RAM

I've been wondering how much of the general-purpose performance boost of the M1 is due to having the RAM in the same package. That has to have benefits in power and latency. So if a future Mx chip supports discrete RAM, it may not seem quite as magical anymore. Then again, Apple's volume and margin are high enough that they could just build a single package with lots of RAM. You wouldn't be able to tinker with it, but it's not like Apple cares about that.

Makes you wonder if AMD or Intel will come up with a similar package for x86-based laptops.


As far as I understand chip design (which is not much), the fact that the memory is inside the same package allows Apple to do stuff that would never fly with unknown external memory.

They know the exact latencies and can distribute the memory between CPU and GPU as they please.

The loss in upgradeability buys a huge gain in speed and reliability.

My bet is that the next M processor will just have more of everything: more cores and more built-in memory. Maybe the one for the (i)Mac Pro will have upgradeable memory on top of the built-in memory. All of the laptops will only have the on-package memory.


Perhaps it'll be possible to use external memory as a first layer of swap. For most workloads the difference would be minimal.


Apple makes it very clear in their materials that their unified memory is a very big part of their performance boost.

> So if a future Mx chip supports discrete RAM, it may not seem quite as magical anymore.

I agree, but I also doubt they will be making a Mac Pro SoC with huge amounts of RAM aboard either. I'm not sure how common they are, but Apple supports up to a terabyte of RAM (maybe more). I could easily see SoCs with 64GB of RAM, but I'm struggling to see them putting 128 or 256GB+ on the SoC.

Maybe some kind of hybrid?

Very curious to see how they are going to work around this.


Doing a hybrid approach, which is basically swapping to off-chip RAM, shouldn't be too hard.


It would be interesting if Apple treated their off-chip RAM as a RAM disk. Could make for some intriguing possibilities. So you'd "swap" from the hot, on-chip RAM into slower GDDR RAM instead of to the SSD.


So far as I can tell from this thread, the open question was: can these computers edit 8K video?

The answer per that video seems to be yes, with limitations.

I’m not trying to assert anything about anyone’s needs.


Why are we even talking about 8K video? It's something that almost no one needs, and even fewer people have to edit. My guess is most people are still happy editing their FHD videos, something that works well on a five-year-old laptop.


You want to work with 6K or 8K so you can do crops and still output 4K or 6K. This means shorter productions.

8K also future-proofs the footage. In the future we'll want more than 8K for this.
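To put rough numbers on the crop headroom (standard UHD dimensions assumed; this is a worked example, not a spec):

    # Reframing headroom when shooting 8K for a 4K deliverable (UHD sizes assumed)
    src_w, src_h = 7680, 4320   # 8K UHD capture
    out_w, out_h = 3840, 2160   # 4K UHD deliverable
    print(f"punch-in available without upscaling: {src_w / out_w:.0f}x")  # 2x
    # any crop window of at least 3840x2160 still delivers native 4K pixels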


My point stands: Very few people edit 8K video, yet it is a popular benchmark.


It’s used because it’s a consistent workload that ensures a fair comparison, and it’s long enough to make sure the performance seen isn’t just a short burst. Someone who plays games, edits smaller videos or photos all day, uses heavy web apps, compiles code, etc., can apply the result of a large video render to their purchase decision even though their own work doesn’t push the battery as aggressively as possible.

Hopefully that helps. In your original example, you cited someone editing FHD doing fine with a five-year-old laptop; now we’ve talked about why larger formats are used, and why someone upgrading a laptop would look at a benchmark of an intensive process even if they themselves don’t plan to run that specific process.


Four years ago: “why do we care about 4K?” Eight years ago: “why do we care about 1080p?” And so on...


We'll see about that (with respect to pixel density and efficiency). I'm typing this on a 4K XPS 15, and while the display is great, the battery cost is extreme. There would be no meaningful advantage to an 8K display of the same form factor, so there is far, far less incentive for manufacturers to race to 8K.

There will be 8K TVs, sure, but let's be real: the step to 1080p was massive, and the step to 4K already couldn't fill those shoes.


> the step to 1080p was massive,

The step to 1080p, I'd say, could for some even have been seen as a downgrade. It was possible to run 1600x1200 back in the late '90s and early 2000s with CRTs. The concept of "high definition" was already familiar to PC users (gamers and professionals, that is).

4K is a nice upgrade, and I'd say many professionals were already using it with proper monitors.

8K? Eh, we'll get there.


We're talking about video editing, not screen resolutions.

The step to 1080p wasn't from 1600x1200; it was from PAL/NTSC (720x480 or 720x576). It was the biggest step by far.


I think diminishing returns will stop 8k from getting mass adoption.

Same thing happened to audio players with "better than CD quality". They never caught on because there was no need.

65" 4K TVs start at 74 Watts (max is 271 Watts). 65" 8K TVs start at 182 Watts and go all the way to 408 Watts. For what? An improvement you won't notice unless you get off the sofa?


Since you've been able to edit high-res footage (I haven't tried 8K, but 4K and 6K have been working fine) on commodity hardware that costs way less than $5000 for... years, via editing software that supports GPU-accelerated operations (like DaVinci Resolve), do you still think it's the best innovation in the tech industry? Add to that the fact that the M1 is proprietary, developed for only one OS by one company, and it feels less like innovation for the industry and more like Apple-specific innovation.

Also, the argument that only a $5000 machine could edit high-res footage is false even without the invention of GPU editing. Proxy clips have been around for as long as I've done video editing; even though the experience is worse, it's been possible to edit at reduced resolutions for a long time.
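For anyone unfamiliar with the workflow: a proxy is just a low-resolution stand-in you cut with, and you relink to the original media for the final render. Here's a minimal sketch of batch-generating proxies with ffmpeg (assuming ffmpeg is installed; the folder name and the ProRes Proxy profile choice are illustrative, not from any particular NLE's workflow).

    # Hedged sketch: create 1280-wide ProRes Proxy copies next to the originals
    import pathlib, subprocess

    for clip in pathlib.Path("footage").glob("*.mov"):
        proxy = clip.with_name(clip.stem + "_proxy.mov")
        subprocess.run([
            "ffmpeg", "-v", "error", "-i", str(clip),
            "-vf", "scale=1280:-2",                   # shrink, keep aspect ratio
            "-c:v", "prores_ks", "-profile:v", "0",   # ProRes Proxy profile
            "-c:a", "copy",                           # pass audio through untouched
            str(proxy),
        ], check=True)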



