


Yeah... because nobody else makes ARM-compatible chips.

x86 is not an open ISA. For all practical purposes, nobody besides Intel and AMD can make x86 chips.

Pretty much anybody can ask ARM for a license to make ARM chips.

Sure, Macs have special chips in addition, but that was true for their Intel offerings as well. You have never been able to just plop an AMD chip into a Mac, so it's not like that choice ever meant anything to consumers.


And what about disk controllers like the T2? How is that going to be emulated?

The ARM ISA versus the x86 ISA is probably the least complicated part of this equation. At least on x86 there were widely used graphics components that could be leveraged. What exactly do we know about the built-in GPU on the M1?


With Arm, Hackintoshes that are worth it (at least for VMs) may be possible in the future, if Windows 10 on Arm succeeds enough to make that hardware widely available. SBCs do not count.

(there's technically nothing preventing you from running macOS arm64 on non-Apple hardware in a VM)


Yes there is: the lack of third-party graphics drivers. That single issue has the potential to nobble any usable ARM-based Hackintosh. Someone might step forward and implement a graphics driver, but that's quite hard.


And while I have not seen anything on that subject (yet), a much deeper integration with their Secure Enclave is likely, and it's very much possible they extended the ISA and take advantage of that internally.


macOS arm64 doesn't require any instructions that aren't part of the standardized Arm ISA. As for the Secure Enclave, just don't expose the device in the device tree (that, plus stubbing AppleSystemPolicy, is enough).


That issue is actually quite interesting.

That's because if the GPU is reverse-engineered enough, you can simulate an Apple GPU on another machine _and_ run other OSes with a proper GPU stack on those Macs.


The little I’ve heard about the M1 makes it sound radically different in one respect: the system RAM is unified with the video RAM and shared, rather than partitioned the way it usually is on integrated video.

Now I don’t know anything about GPU stacks, but I could imagine that one change essentially requiring a rewrite of the entire stack if certain assumptions were baked into the old one.

The new model would seem to allow the developer to allocate a buffer full of stuff and rapidly alternate between CPU and GPU instructions on that buffer, without copying. If so, that’s a pretty huge win for performance.
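To make the zero-copy idea concrete, here's a minimal sketch using Metal's shared storage mode (my own illustration, not anything from the thread; it assumes you're on a Mac with a unified-memory GPU):

```swift
import Metal

// On unified-memory hardware, a buffer created with .storageModeShared
// is backed by one allocation visible to both the CPU and the GPU.
guard let device = MTLCreateSystemDefaultDevice(),
      let buffer = device.makeBuffer(length: 1024 * MemoryLayout<Float>.stride,
                                     options: .storageModeShared) else {
    fatalError("Metal device or buffer unavailable")
}

// The CPU writes directly into the buffer's memory...
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: 1024)
ptr[0] = 42.0

// ...and a GPU compute pass can then read and write that same memory,
// with no blit copy or managed-mode didModifyRange() step in between.
```

The win the comment describes is exactly this: the CPU and GPU alternate work on `buffer` without ever copying it across a bus.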


That's present on basically every Arm SoC with the CPU and GPU on the same die (and on Intel and AMD APUs too; the difference is that Apple is scaling UMA beyond what anyone has shipped in x86 land outside of consoles).

Currently, Apple and NVIDIA have the beefiest UMA CPU+GPU combinations available, and those are Arm solutions.


There are two parts to this issue: one is reverse engineering the Apple driver interface, the other is reverse engineering the third-party target hardware interface.

Most vendors of graphics hardware keep their interface secret, but not all. The problem is that the set of hardware that isn't secret won't overlap much with what people have or want to run, which is pretty central to what makes Hackintoshes attractive.



