Hacker News | krunkcoin's comments

PassMark is "more honest"? It represents a varied load??? No, sorry, it's just not good. Seriously, read their own documentation.

https://www.cpubenchmark.net/cpu_test_info.html

Right from the top it's amateurish stuff: their idea of an integer benchmark to measure "raw" CPU throughput (whatever that means) is to generate a bunch of random ints and add/subtract/multiply/divide them.

Very few programs do a high volume of either integer multiply or divide, and when they do, they generally aren't doing it on random numbers. This is the kind of thing that gives synthetic benchmarks their highly deserved bad rep. It might be even worse than Dhrystone MIPS, and believe me, in benchmark nerd circles, that is a fucking diss.
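For flavor, the kind of loop being described is roughly this. A toy reconstruction of my own, not PassMark's actual code:

```python
import random

def naive_int_benchmark(iterations, seed=42):
    """Toy version of the synthetic loop described above: random ints
    pushed through add/subtract/multiply/divide.  It exercises ALU
    throughput on data-independent operations and almost nothing else:
    no branchy control flow, no pointer chasing, no cache pressure."""
    rng = random.Random(seed)
    acc = 1
    for _ in range(iterations):
        a = rng.randrange(1, 1 << 16)
        b = rng.randrange(1, 1 << 16)
        acc = (acc + a - b) * a // b
        acc = acc % (1 << 32) or 1  # keep it bounded and nonzero
    return acc
```

Compare that with compiling a real program, where performance lives or dies on branch prediction and the memory hierarchy, neither of which a loop like this touches.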

If you look up Geekbench's docs, you'll find that it's all about real-world compute tasks. For example, one of the int tests in their suite is to compile a reference program with the Clang compiler. Compilers are a reasonably good litmus test of integer performance; they heavily stress the CPU features most responsible for high integer performance in this day and age. (Branch prediction, memory prefetching, out-of-order execution, speculation, that kind of thing.)

You claimed that PassMark reflects "complex" software, and Geekbench doesn't. However, I would be willing to bet that Clang alone is far more complex than all of PassMark's CPU benchmarks put together, whether you measure by SLOC or program structure.

Note that none of this has anything to do with Mac vs PC. PassMark is simply a bad benchmark that should not be used, period. That said, there are a bunch of warning signs that PassMark's ports to everything outside its native x86 Windows are probably quite sloppy, so it's even less useful for cross-platform comparisons.


You're way off-topic just because you wanted to be all dumb and tribal. Nobody claimed APFS was better than your favorite filesystem; simmer down.

But also, I wanted to let you know that even in your tribalism you're being stupid. APFS is not a clone of ZFS. Not even close! It has a fraction of ZFS's features. Why? Because it doesn't need them. Why? Because Apple is tightly focused on client devices, and those do not need the huge list of server-oriented ZFS features, or the overhead that comes with them. And that overhead is considerable! People are often advised that 8GB RAM is the bare minimum for a server running ZFS filesystems, and much, much more RAM is desirable for performance. Apple deployed APFS to iPhones with as little as 1GB RAM. ZFS was simply not an option.

APFS is its own thing. Deal with it.

Also, your whole schtick here sucks. Only one FS gets to be "FIRST!" at anything. Other filesystems which implement that feature are not necessarily "clones". To actually make the judgement that cloning occurred, you'd have to get real technical and look at both the algorithms and the on-disk layout, and if you actually did this there is no way in a million years you could come away thinking APFS is anything other than original work.

Ideas? Well duh, people build on other people's ideas all the time. If idea stealing was completely forbidden, once Gutenberg invented the printing press, nobody else could have built one, and where would humanity be now?


Outside of a handful of special ultra-unlocked iPhones Apple sells (or loans maybe? Don't remember the details) to security researchers, no, you cannot reliably boot your own OS on iPhone hardware. Apple's secure boot ROM only trusts bootloaders cryptographically signed by Apple's private signing key.

Short of finding a flaw in that ROM or a jailbreak good enough to take over from the booted iOS, there's no path to booting something else. While people have found such boot ROM and jailbreak flaws in the past, recent hardware and iOS have no known vulnerabilities good enough to take over (AFAIK). (And note also that Apple guards against rollback attacks on iOS devices: once you've upgraded a device to a new iOS version, there's no going back to an older version with a jailbreak.)

The complexity of Apple's hardware is not a barrier. Apple Silicon Macs run essentially the same hardware as iPhone, only more complex (because there's more stuff to support in the bigger SoCs they put in Macs). When Apple updated their secure boot to work well for Mac, they chose to explicitly support that which is forbidden on iPhones: they give users an interface allowing them to locally sign a bootloader and store a cryptographic hash in the Secure Enclave so that the boot ROM will trust it just as if the bootloader had been provided by Apple. Because this path exists, people have been reverse engineering Apple's SoCs and there's a working Linux distro (Asahi Linux) and a BSD port. Not everything works yet, but a surprising amount does.


It seems you're right -- I'm not in the "install other OSes" business, and I didn't know about the bootloader restriction. I was going off the fact that jailbreaks existed at some point; I didn't know that more recent iPhones have gotten harder or impossible to break.

And certainly, since the concept of hardware lockdown pretty much wasn't a thing twenty years ago, the Jornada probably puts up no intentional barriers.

I think the complexity issue holds -- it doesn't make it impossible, but surely it's easier to implement an alternative OS for something as simple as the Jornada. Although playing devil's advocate, the Asahi page only lists 8 people (compared to the 5 for the Jornada) so it can't be that much more complex. And it's irrelevant because of the bootloader thing.

It appears from some quick reading that the bootloader is a security technique -- that same lockdown that prevents alternative OSes also prevents malware. I understand if people disagree, but that seems like a reasonable trade-off to me.


1. The consequences of playing loud sounds with no protection are unlikely to involve open flames. You're just going to fry speaker coils, after which they won't sound good (or make much sound at all).

2. Apple doesn't implement TrustZone. They don't even implement EL3 (the exception level typically used for TZ) in their Arm core designs, just EL0 (userspace), EL1 (kernel), and EL2 (VM monitor). They appear to have completely rejected the concept of an above-the-OS supervisor responsible for security and other system maintenance tasks, which is the idea behind things like TrustZone and Intel's insane System Management Mode.

3. One of the fairly novel things about Apple's Arm Macs is their secure boot infrastructure. Although it's clearly derived from iOS secure boot, it has been greatly extended, in part because (contrary to what you assume) Apple doesn't believe Macs should have to be jailbroken when users want to run something not signed by Apple. See this for more details than you probably require:

https://support.apple.com/guide/security/startup-disk-securi...

So, no jailbreak is required to boot Asahi Linux, or any other unsigned OS. Note that while Apple's documentation refers to booting an unsigned "XNU kernel" (XNU being the macOS kernel), the binary doesn't actually have to be XNU. In Asahi's case, it's a bootloader which loads another bootloader which loads the Linux kernel. (IIRC - I might have that chain a bit wrong.)

Also, if you want to take the time to read through all of that link, you'll find Apple did put a lot of thought into making it possible to bypass while simultaneously keeping it quite robust against malware. Barring truly horrific implementation flaws, it should be impossible to remotely automate the bypass, and the procedure is designed to provide some level of warning to victims of social engineering. And, it's a per-OS preference, so you don't have to reduce the security of your macOS install at all if you just want to play around with Asahi on a different partition.

Even more interesting - the low-security "unsigned" boot path is technically still signed, in a very useful way. Attesting that you'd like to boot an unsigned "XNU kernel" stores the "XNU" binary's cryptographic hash in Apple's Secure Enclave. Every time you boot it, that hash is checked to make sure the binary hasn't been modified since you authorized it. Which means... if that binary is a loader which does its own cryptographic signature check on the next stage loader, which does its own check on the next stage (or Linux kernel etc.), you've built your own secure boot chain on top of Apple's, without having to ask Apple to sign anything. Pretty cool.
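The chain-of-trust idea in that last paragraph looks roughly like this. Purely illustrative Python using bare SHA-256 hashes; the real thing involves the Secure Enclave and proper signature verification, and every name here is made up:

```python
import hashlib

def measure(blob):
    """Hash standing in for a 'measurement' of a boot stage."""
    return hashlib.sha256(blob).hexdigest()

def boot_next_stage(blob, pinned_hash):
    """Each stage verifies the next before handing control to it.
    Because the first stage's own hash is pinned by the platform (in
    Apple's case, stored in the Secure Enclave), every later stage
    inherits that root of trust."""
    if measure(blob) != pinned_hash:
        raise RuntimeError("next stage modified since it was authorized")
    return blob  # a real loader would jump into it here

# Hypothetical three-stage chain: loader -> second-stage loader -> kernel.
kernel = b"linux kernel image"
stage2 = b"second-stage loader, pins: " + measure(kernel).encode()
stage1 = b"first-stage loader, pins: " + measure(stage2).encode()
# The boot ROM checks stage1 against the pinned hash; stage1 then
# checks stage2, and stage2 checks the kernel.
```

The real Asahi chain has its own loaders and (ideally) proper signatures rather than bare hashes; this just shows the shape of the argument.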


"MacOS on ARM handles memory differently, try it you might like it."

The only sense in which this claim is true is that macOS on Arm uses a 16KiB page size, while macOS on Intel uses a 4KiB page size. This is actually less space-efficient (more wasted RAM due to partially filled pages), although it has performance benefits (N CPU TLB entries cover 4x as much RAM).
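The space-efficiency point is easy to put numbers on. A quick sketch with made-up allocation sizes, counting internal fragmentation (the wasted tail of each allocation's last page):

```python
def page_waste(alloc_sizes, page_size):
    """Bytes lost to the partially filled last page of each allocation,
    assuming each allocation is rounded up to whole pages."""
    waste = 0
    for size in alloc_sizes:
        remainder = size % page_size
        if remainder:
            waste += page_size - remainder
    return waste

allocs = [100, 5000, 70000]        # hypothetical allocation sizes in bytes
w4k = page_waste(allocs, 4096)     # 4 KiB pages (Intel macOS)
w16k = page_waste(allocs, 16384)   # 16 KiB pages (Arm macOS)
# Bigger pages always waste at least as much per allocation, and
# usually more: the expected tail waste is half a page either way,
# but the page is 4x larger.
```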

If you needed 32GB RAM to avoid swapping on Intel hardware, you still do. There isn't anything which profoundly reduces memory use. Arm Mac hardware does tend to perform lots better while swapping than Intel, which fools some into thinking it is using less memory, but it isn't.

(The reasons for the better performance: one, Apple put in a hardware accelerator block for the memory compression feature which allows macOS to sometimes avoid swapping to disk, so light swapping which only invokes the compression-based pager is faster. And two, the SSD controllers integrated into Apple Silicon are sometimes much faster than the SSDs in the Intel Macs they replaced.)


re: four M1 variants, the clever thing is that from a certain perspective, they've only designed two. It's just that one of those two (M1 Max) was designed such that it could be scaled down to M1 Pro and up to M1 Ultra.

On scaling down, they simply designed and laid out M1 Max such that half the GPU cores, memory controllers, and media encoders were divisible from the rest of the chip, with the interface between the two a clean, straight cut line. To make M1 Pro masks, they likely just took the M1 Max mask artwork, cropped it, and did minor cleanup on the cropped edge to terminate all the dangling connections.

To make M1 Ultra, they just "glue" two M1 Max die together with advanced packaging and interconnect technology. Every M1 Max ships with an unused die-to-die interconnect block along one edge of the chip.


"There is already a living example of a custom Apple-designed external graphics card. Apple designed and released Afterburner, a custom "accelerator" card targeted at video editing with the gen 3 Mac Pro in 2019. Afterburner has attributes of the new Apple Silicon design in that it is proprietary to Apple and fanless. [3] It seems implausible Apple created the Afterburner product for a single release without plans to continue to upgrade and extend the product concept using Apple Silicon."

---

You don't understand what Afterburner is, and as a result have read far too much significance into it, and haven't noticed that they've already moved on even before the new Mac Pro is out.

Afterburner isn't a "graphics card". It's just an FPGA on a PCIe x16 expansion card. Although in principle you could download an arbitrary bitstream to the FPGA and use it for a really wide variety of purposes, in practice Apple uses it for exactly one function: accelerating ProRes. That's Apple's proprietary low-loss video codec designed for use in video editing software, most notably their own Final Cut package.

They needed Afterburner in the Intel Mac Pro because they didn't want to burn CPU cores on a software ProRes codec, and none of the silicon available in the Intel/AMD/Nvidia ecosystem offers a hardware ProRes codec.

They don't need Afterburner in Apple Silicon Macs because they simply integrated a ProRes codec into M1 Pro/Max/Ultra. I wouldn't be surprised if they were able to share a lot of source code between the two implementations. The M1 version is lots faster than the Afterburner version, which isn't surprising when you know about the penalties FPGAs pay for being reprogrammable in the field.

I suspect the M1 Pro/Max/Ultra project came first, and the Afterburner port was a quick side project. To me, it's not merely plausible that Afterburner is a dead end; that's just about all it can be.


"The CPU can't possibly get too cold" - Untrue. There are plenty of chips with what overclockers like to call "cold bugs".

Sequential logic (flipflops) has a setup time requirement. This means the combinatorial computation between any two connected pairs of flops (output of flop A to input of flop B) has to do its job fast enough such that the input of B stops toggling some amount of time before the next clock edge arrives at the flipflop. Violate that timing, and B will sometimes sample the wrong value, leading to an error.

Setup time is what most people are thinking about when they use LN2 or other exotic forms of cooling. By cooling things down, you usually improve the performance of combinatorial logic, which provides more setup time margin, allowing you to increase clock speed until setup time margin is small again.

But flops also have hold time requirements - their inputs have to remain stable for some amount of time after the clock edge, not just before. It's here where we can run into problems if the circuit is too cold. Imagine a path with relatively little combinatorial logic, and not much wire delay. If you make that path too fast, it might start violating hold time on the destination flop. Boom, shit doesn't work.
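To make the two constraints concrete, here's a toy single-path margin check with made-up delay numbers (real static timing analysis uses separate min/max delays per path, but the shape is the same):

```python
def timing_ok(clock_period, clk_to_q, logic_delay, setup, hold):
    """Toy single-path timing check (units: picoseconds).
    Setup: data launched at one clock edge must settle `setup` ps
    before the next edge arrives.  Hold: data must stay stable
    `hold` ps after the edge, so the path must not be *too* fast."""
    setup_ok = clk_to_q + logic_delay + setup <= clock_period
    hold_ok = clk_to_q + logic_delay >= hold
    return setup_ok, hold_ok

# Cooling speeds up the combinatorial logic.  A short path that met
# hold at 90 ps of logic delay can violate it at 20 ps:
warm = timing_ok(1000, 50, 90, 60, 80)  # meets setup and hold
cold = timing_ok(1000, 50, 20, 60, 80)  # meets setup, violates hold
```

Note the asymmetry: you can fix a setup violation by slowing the clock down, but a hold violation doesn't care about the clock period at all, which is why a cold bug can brick operation at any frequency.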


They did open up their hardware to other operating systems: they engineered a feature into their secure boot system which allows a Mac's owner to attest (with physical presence required) that they'd like to run a kernel not signed by Apple. That's the door Asahi Linux uses. It isn't there on iOS devices, even those also based on M1.

Everything else you're asking for would require them to spend significantly more money and/or engineering resources on supporting the port, and that just doesn't seem like a thing Apple's senior management would go for.


"There is no hardware in the M1's for x86 emulation. Rosetta 2 does on the fly translation for JIT and caches translation for x86_864 binaries on first launch."

This is not quite correct.

First, as I understand it anyways, Rosetta 2 mostly tries to run whole-binary translation on first launch, rather than starting up in JIT and caching results. It does have a JIT engine, but it's a fallback for cases which a static translation can't handle, such as self-modifying code (which is to say, any x86 JIT engine).

Second, there is some hardware! The CPU cores are always running Arm instructions, but Apple put in a couple nonstandard extensions to alter CPU behavior away from standard Arm in ways which make Rosetta 2 far more efficient.

The first is for loads and stores. x86 has a strong memory ordering model which permits very little program-visible write reordering. Arm has a weaker memory model which allows more write reordering. If you were writing something like Rosetta 2, and naively converted every memory access to a plain Arm load or store, the reorderings permitted by Arm rules would cause nasty race conditions and the like.

So, Apple put in a mode bit which causes a CPU core to obey x86 memory ordering rules. The macOS scheduler sets this bit when it's about to pass control to a Rosetta-translated thread, and clears it when it takes control back.
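To see what the mode bit guards against, here's a sketch (my own illustration) of the classic "message passing" litmus test. It enumerates every interleaving in which each thread's operations stay in program order, which is how both sequential consistency and x86's rules behave for this pattern. The outcome r1 == 1, r2 == 0 never appears; under Arm's weaker model, where the first thread's two stores may become visible out of order, real hardware can produce exactly that outcome.

```python
from itertools import combinations

def mp_outcomes():
    """Enumerate all interleavings of the message-passing test:
        T1: x = 1; y = 1        T2: r1 = y; r2 = x
    with each thread's ops kept in program order (true under
    sequential consistency, and under x86 ordering here, since T1 is
    all stores and T2 is all loads)."""
    t1 = [("w", "x"), ("w", "y")]
    t2 = [("r", "y"), ("r", "x")]
    outcomes = set()
    # Choose which 2 of the 4 slots belong to T1; order within each
    # thread is fixed.
    for slots in combinations(range(4), 2):
        schedule, i1, i2 = [], 0, 0
        for pos in range(4):
            if pos in slots:
                schedule.append(t1[i1]); i1 += 1
            else:
                schedule.append(t2[i2]); i2 += 1
        mem, regs = {"x": 0, "y": 0}, {}
        for op, var in schedule:
            if op == "w":
                mem[var] = 1
            else:
                regs[var] = mem[var]
        outcomes.add((regs["y"], regs["x"]))  # (r1, r2)
    return outcomes
```

A naive translation to plain Arm loads and stores would let (1, 0) happen even though the original x86 program could rely on it being impossible; that's the race-condition class the mode bit eliminates.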

The second mode bit concerns FPU behavior. IEEE 754 provides plenty of wiggle room such that compliant implementations can be different enough that they produce different results from the same sequence of mathematical operations. You can probably see where this is going, right? Turns out that Arm and x86 don't always produce bit-exact results.

Since Apple wanted to guarantee very high levels of emulation fidelity, they provided another CPU mode bit which makes the FPU behave exactly like a x86 SSE FPU.
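To make the "wiggle room" concrete, here's one mechanism by which two compliant implementations can disagree (not necessarily the specific x86-vs-Arm divergence in question, just an illustration): whether a multiply-add rounds once or twice changes the bits.

```python
from fractions import Fraction

# Two IEEE-754-compliant ways to evaluate a*b + c:
#   1. multiply, round to double, then add and round again
#   2. compute exactly and round once at the end (a fused multiply-add)
a = 1.0 + 2.0 ** -52   # exactly representable as a double
b = 1.0 - 2.0 ** -52   # exactly representable as a double
c = -1.0

separate = a * b + c                                     # two roundings
fused = float(Fraction(a) * Fraction(b) + Fraction(c))   # one rounding

# a*b is exactly 1 - 2**-104, which rounds to 1.0 as a double, so the
# separate version returns 0.0 while the fused version keeps -2**-104.
```

Both answers are "correct" per the standard; they're just different, which is exactly why a high-fidelity emulator can't tolerate the host FPU's defaults.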

