Nonsense. NVMe is an industry standard, M.2 supports it, just like many SSDs. A Samsung 970 Pro SSD does 3.5 GB/s (not Gbps; SATA II already did 3 Gbps well over ten years ago).
Incidentally, Samsung 970 Pro SSDs come at a fraction of Apple's upgrade costs.
I have a non-NVMe Samsung AHCI M.2 SSD from 2014 that gives me ~2.3GB/s sequential reads and ~1.2GB/s sequential writes, no NVMe required.
You can run macOS or Windows on 256 GB almost indefinitely; even as they grow, it would be five or more years before it becomes a headache. To keep 128 GB happy, you have to babysit/rebuild semi-regularly.
I'm yet to see an OS which doesn't bloat as it ages, even on mobile devices. 30 GB free today won't still be 30 GB free after the next major OS update, once you count rollback files, logs, and so on. OS bloat is also one of the few things you cannot offload to external storage.
And I'm someone that ran off of a 128 GB SSD for almost two years.
PS - Plus, the increased resale value will likely pay for the upgrade by itself.
As a counterpoint, my 2011 MacBook Air has 128GB, and works just fine. Photoshop, coding, and lots of personal projects.
Maybe 128GB isn't enough for gaming, but I don't think that's what the "mini" is for.
Can't use it to offload and edit photos while traveling, despite the convenient SD card slot, because the disk is too small.
Can't download an OS update; disk full. And no, I'm not a gamer. I've been fighting that small disk for at least the past 4 years.
Obviously it depends what you do with it. A 1.5TB HDD can fill up in a week under some use cases, so it is silly to cite that yours fills up. The fact that some people can use it without filling it up means it is still a viable option.
It doesn't change the fact that an OS install and updates take up a significant percentage of that 128GB, whereas they take up an utterly negligible percentage of a 1.5TB disk.
It's a shame that 256GB isn't the minimum in these new, expensive Mac minis.
AnandTech reported in their review of the 2015 MacBook that this was just NVMe. Has something changed since then?
This is just Apple again trying to pass off a standard as their own innovation to justify their proprietary incarnation.
In fact it's increasingly hard to find M.2 SATA these days with how overwhelmingly common PCI-E 3.0 x4 M.2 is.
And M.2 PCI-E isn't just stupid common, it's also cheap. Really, really cheap. 1TB of crazy fast PCI-E nand is $230: https://www.amazon.com/Samsung-970-EVO-1TB-MZ-V7E1T0BW/dp/B0...
What you're talking about is NVMe NAND—which is a standard protocol on top of PCIe. If you aren't aware, the PCIe m.2 NVMe SSDs, the PCIe "card" NVMe SSDs, and even the DIMM-slot SDRAM-speaking NVMe SSDs all still contain a flash controller behind their PCIe/SDRAM controller, which implements the standard NVMe protocol, acting as a translator between the host's decoded commands from the bus and the NAND's proprietary protocol. And, like all other flash controllers, the flash controllers in NVMe SSDs do a host of "enrichments" to the abstraction they expose, like opaque wear-levelling.
Whereas what I'm talking about—"direct PCI-e NAND"—refers to the concept of NAND memory sitting directly on the PCIe bus with no flash controller, just a PCIe controller. This would, if it existed, allow the host to directly program and erase NAND blocks—as if the host was the flash controller—by sending the NAND commands in the NAND's own proprietary command format, wrapped into PCIe packets.
Directly-addressable NAND has none of the "enrichment" a flash-controller provides to NVMe NAND, and directly-addressable NAND also speaks whatever dumb protocol the NAND's manufacturer makes it speak, requiring a special custom driver per NAND (or a host-side flash controller.)
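To make "the host as flash controller" concrete, here is a hedged Python sketch of the raw command sequencing a host-side driver would do. The opcodes follow ONFI-style READ conventions (0x00, address cycles, 0x30), while the bus object and all names here are invented stubs standing in for whatever transport carries the cycles:

```python
# Rough sketch of what "being the flash controller" means: the host driver
# itself sequences raw command/address/data cycles to the NAND die.
# The bus is a stub; 0x00/0x30 is the ONFI-style READ sequence.

PAGE_SIZE = 2048

class StubNandBus:
    """Stand-in for whatever carries the cycles (SPI, a PCIe wrapper, ...)."""
    def __init__(self):
        self.log = []       # every cycle sent, for inspection
        self.pages = {}     # simulated NAND contents, keyed by row address
        self._row = self._col = 0

    def cmd(self, opcode):
        self.log.append(("CMD", opcode))

    def addr(self, row, col):
        self.log.append(("ADDR", row, col))
        self._row, self._col = row, col

    def data_in(self, nbytes):
        page = self.pages.get(self._row, bytes(PAGE_SIZE))
        return page[self._col:self._col + nbytes]

def read_page(bus, block, page_in_block, pages_per_block=64):
    """Host-side ("software-emulated") flash controller issuing a raw read."""
    row = block * pages_per_block + page_in_block
    bus.cmd(0x00)          # READ, first cycle
    bus.addr(row, 0)       # row (page) and column (byte offset) address
    bus.cmd(0x30)          # READ confirm; die moves the page to its buffer
    return bus.data_in(PAGE_SIZE)

bus = StubNandBus()
bus.pages[64] = b"hello" + bytes(PAGE_SIZE - 5)
data = read_page(bus, block=1, page_in_block=0)
print(data[:5])  # b'hello'
```

Note what is missing: no wear levelling, no bad-block handling, no logical addressing. All of that would have to live in the host driver too, which is exactly the "enrichment" a discrete flash controller normally provides.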
Old embedded systems (like phones and cameras ca. 2002) used to contain directly-addressable NAND, soldered onto the motherboard. They mostly switched away from this to using eMMC chips—those being single BGA packages containing a flash controller and NAND—because they were gaining no advantage from having a separate flash controller, only inconveniences (like having to write their own storage drivers) and/or costs (of adding their own flash controller to the embedded board's BOM.)
Old iOS devices used to use eMMC. But, since the creation of the Secure Enclave, Apple has found an advantage to having a discrete flash controller: namely, that it can interact with the Secure Enclave. So, since the iPhone 6, Apple stopped using eMMC in favor of their own flash controller + directly-soldered NAND flash.
And now, Apple have done the same thing to the Mac Mini that they did to iOS devices: switched from an interface that expects to find a flash controller speaking a standard protocol (NVMe or SATA), to an interface where the flash controller is on the host side and expects to find directly-addressable NAND flash on the other side for it to manage.
Attaching an m.2 NVMe SSD to that interface, would be similar in effect to what you'd get by taking one of those old phones that uses directly-soldered NAND with a host-side flash controller, and instead of soldering in NAND, soldering an eMMC card in its place.
In short: you'd have a dead device.
And since they are using an off the shelf CPU, it's using PCI-E not something custom. Which means a flash controller is on the other side of the PCI-E bus.
They are probably still using nvme because there's no reason not to, but regardless there is still a nand controller between the pci-e bus and the nand flash.
There's nothing particularly incompatible with secure enclave + standard nvme. You'd still do encryption wherever you wanted.
Apple theoretically could have a slot for NVMe separate from the system drive, but there is no industry standard for "card with NAND on it that the T2 chip understands".
Yes, but you can achieve these things without a hardware flash controller. Old embedded devices did exactly this: they used the CPU as the flash controller.
It's simplest to understand if you look at a much older technology: floppy disk drives. There was a technology war in the 80s, between systems that had "direct-addressable" floppy disk drives, and systems that had "managed" floppy disk drives. The former (like the Apple II, and the IBM PC) had to use the CPU as a floppy drive controller, encoding individual commands to the drive's motor, managing the timings of these commands using the CPU's own clock. The latter (like the Commodore 64) could just send async commands, because there was another whole CPU inside the floppy disk drive itself, decoding those commands and managing the motor. That CPU was the "floppy disk controller."
Same was true of hard disks, before the ATA standard. The CPU needed to control the motor. This is why tools that could "low-level format" a hard disk existed: the hard disk's motor was directly-addressable to the CPU, so the CPU could literally position the disk head anywhere and write anything. (And a bad POKE to the disk's IO port could literally cause the disk's head to poke a hole in its platters!)
And this is exactly why, for the machines in the Apple and IBM lineages, the invention of a generic Disk Operating System was so foundational to making these systems easy to develop for. (A DOS is not an "Operating System" in the modern sense—though it does often contain an OS—but rather, a DOS, in the strictest sense, is a library kernel that sits beside your program, and provides your program with a software-emulated disk controller to speak to directly-addressable disk motors!)
When NAND came about, and was first used in embedded devices, it was also in directly-addressable form, with the CPU and RTOS using a set of "software-emulated flash controller" routines to speak to it. Actual hardware flash-controller chips came later!
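A toy illustration of such "software-emulated flash controller" routines (all names invented): a minimal flash translation layer doing the logical-to-physical remapping and naive wear levelling that a hardware controller would otherwise hide from the host.

```python
# Minimal sketch of a software FTL: NAND can't overwrite in place, so each
# write goes to a fresh physical block, and erases are spread across blocks.

class TinyFTL:
    def __init__(self, nblocks):
        self.phys = [None] * nblocks     # physical block contents
        self.erase_counts = [0] * nblocks
        self.map = {}                    # logical block -> physical block
        self.free = set(range(nblocks))

    def write(self, lba, data):
        # Pick the least-worn free block (naive wear levelling).
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.phys[target] = data
        if lba in self.map:              # retire and erase the stale copy
            old = self.map[lba]
            self.phys[old] = None
            self.erase_counts[old] += 1
            self.free.add(old)
        self.map[lba] = target

    def read(self, lba):
        return self.phys[self.map[lba]]

ftl = TinyFTL(8)
for i in range(20):
    ftl.write(0, f"rev{i}".encode())     # hammer one logical block
print(ftl.read(0))                       # b'rev19'
print(ftl.erase_counts)                  # wear is spread, not piled on one block
```

The point of the sketch: nothing here requires dedicated silicon. An RTOS on an embedded CPU in 2002, or a DOS on an Apple II for floppies, ran exactly this kind of logic in software.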
And so, I hope you understand now that what you've said here is completely untrue:
> They only way to get anything close to what you've made up is to not use PCI-E, either.
You could put a NAND flash chip directly behind a PCIe controller, stick it into a PCIe slot, and then have the OS—using a "software-emulated flash controller"—create commands, and have the CPU wrap them into PCIe packets and push them through the PCH over to that board, whereupon the PCIe packet is unwrapped to deliver exactly the same NAND commands—with the same timings†—that you'd get by having the NAND wired directly to the CPU over e.g. the SPI bus.
† You'd think PCIe has delays, but that's only if the PCIe bus is oversubscribed using something like a PCIe splitter daughterboard. When using PCIe normally, despite the packet-oriented nature, the signalling is still synchronous between the board and the CPU. This is achievable because PCIe doesn't use any routing that can impose latency, but instead, the PCIe host controller merely acts as a network switch, where established paths are electrically continuous. If your NAND chip has four input terminals, and the processor has given it four dedicated PCIe lanes, then sending PCIe packets over those lanes (assuming a high-enough PCIe clock frequency) is exactly equivalent to signalling directly to those input terminals. (And, in the more complex case, if your NAND has four input terminals, and you've given it one 4x PCIe lane, then (assuming a high-enough PCIe base clock frequency) the PCIe controller can still internally ser/des and get your packets there in time.) If this didn't work, then you wouldn't have things like Infiniband RDMA working over PCIe. [Also, this guarantee of electrical continuity is the big difference between something being attached by PCIe, and attached by Thunderbolt. The Thunderbolt controller is a router, and so a driver for a Thunderbolt peripheral can't be coded with precise timings in mind.]
Also, I know this works, because devices do this. Ever had a PCIe board with firmware that doesn't get updated ephemerally at boot by the OS, but instead can be reflashed permanently, like a BIOS? (Network cards will often have this architecture, because of the requirement for PXE boot.) Well, if the firmware can be permanently updated, then that firmware is likely held on NAND that's direct-addressable over the PCIe bus (with some board-internal routing given the memory region accessed, that it told the CPU about as part of its ACPI SSDT.) That's why firmware flashers need to be these bulky custom binaries: they've got (simplistic) software-emulated flash controller libraries embedded in them, so that they can generate the packets to feed, over the PCIe bus(!), to the board's NAND. (You think they're using EEPROMs—i.e. how the BIOS calls these "option ROMs"? Well, NAND flash is a type of EEPROM. And the other, older type is way too low-capacity for holding modern firmware images.)
(The motherboard's BIOS firmware is, these days, NAND flash as well; and "BIOS flasher" programs contain software-emulated NAND flash controller libraries, too. But the BIOS isn't usually sitting on the PCIe bus [but rather is just mapped into some memory region, i.e. is directly attached to the PCH] so this example—while more obvious—isn't as powerful.)
> And since they are using an off the shelf CPU, it's using PCI-E not something custom. Which means a flash controller is on the other side of the PCI-E bus.
Nope. Remember that the NAND in the Mac Mini isn't on any bus that the CPU can speak to. The NAND in the Mac Mini is a peripheral of the T2, not a peripheral of the CPU itself. And the T2 has whatever peripheral interfaces Apple wants it to have. There's no reason for the T2 to speak PCIe to the NAND; the NAND isn't even on a shared bus with anything else. It's just right there, soldered beside the T2 chip on the board, output terminals (likely) wired to input terminals.
The flash controller is part of the T2. (If you think of the T2 as a generic CPU, then it's a lot like the Commodore 1541's 6502, in that it would be running an RTOS and that RTOS would have "software-emulated flash controller" routines. It's probably not, though; it probably has a hard flash controller as part of its package. That flash controller has just been customized to be wired to the Secure Enclave part of the T2 package, such that reads and writes going through the T2's flash controller "flow through" the T2's Secure Enclave's encryption, without the T2's generic CPU needing to be involved at all.)
If you want, you can think of the entire T2 and NAND part of the board, as an NVMe device. This actually makes a fair amount of sense, since the T2 is probably on the PCIe bus, and the CPU is probably talking NVMe-over-PCIe to the T2 when it wants to write to the disk. It's just that the CPU is also talking a bunch of other protocols to the T2 as well, when it wants other things.
The best way to understand the Mac Mini's CPU's relationship with the T2, is that it's exactly like having a diskless Intel PC, with an iPhone plugged into it over a Thunderbolt cable. (Except, without the aforementioned Thunderbolt latency.) This iPhone has been modified to expose itself to the PCIe bus as, among other things, an NVMe device. When the host writes to this "storage device", it's really just talking to the iPhone's CPU, which can then do whatever it wants with that received command. It probably turns it into a write to its own storage, after doing a bunch of other stuff. Neither PCIe nor [standard] NVMe is involved in that side of the storage process, because the iPhone is not a user-expandable computer that needs to speak standard protocols to its own hardwired chips.
In this analogy-that's-not-really-an-analogy-at-all, the "iPhone's CPU" is the T2, and all the iPhone's chips have been plopped onto a motherboard shared with the Intel PC. However, these chips are still isolated from the Intel PC in exactly the same way they would be if it was just an iPhone-over-Thunderbolt. It's another whole computer in your computer, just like the article says. And the NAND is part of that computer.
But also: backups. make backups.
But that just depends on use-case.
There are plenty of scenarios where you just need the built-in storage to handle the OS, apps and miscellaneous storage while your large scale storage is external somewhere.
Why make people get (and pay for) more storage than they have use for?
But anyone who has a serious professional need for large storage (e.g. video editors), probably doesn't just want 2x-4x the internal storage, with the same bus-factor-of-1 fault-tolerance. No, they probably need either a Thunderbolt RAID array (that they can put into RAID5/6/10), or a SAN. And neither of those fit in a Mac Mini.
(Though, a Thunderbolt RAID array of m.2 SSDs might fit in a Mac Mini-sized enclosure. Does anyone sell that?)
Not excusing it, just explaining.
Edit: wow, really struck a nerve there. I take it back, Apple is in no way motivated to massively overcharge for storage upgrades.
In this case the engineering choice was probably driven at least in part by the desire to take advantage of the T2's line-rate encryption features, which so far have never been supported on removable/upgradeable storage media (at least as far as I am aware; the T2 drive encryption is a separate feature from FileVault, which is supported on removable media). Even on the iMac Pro, which technically has removable flash storage, to my knowledge no one has yet managed to produce a T2-compatible aftermarket third-party upgrade.
Thankfully, Thunderbolt 3 removes much of the advantage that internal storage media traditionally enjoy, at least in performance terms, so there are reasonable upgrade paths.
This is a choice, not a hard engineering limitation. An m2 SSD is tiny even compared to a single 2.5” drive.
They can also be mounted at an angle just like those SO-DIMM slots.
The M.2 standard even supports 12mm, 16mm, and 30mm width modules in addition to the 22mm that seems to have become the norm. Apple could still be a dick by using a 12mm or 16mm card while still supporting the standard.
Stand-off M.2 sockets are a standard part, because a number of mainboards put components under the M.2 slot.
I think there'd easily be room, given that the soldered-on NAND chips could be removed from the board.
edit: If you're wondering how that works, here's a hint: https://raw.githubusercontent.com/ThomasKaiser/Knowledge/mas...
Cap blew on the motherboard? Data gone. CPU kicked the bucket? Data gone. Spilled water on it? Data gone.
Not literally gone, of course, just practically. No chance of popping the drive out and trying it elsewhere to recover data. Nope. And since Apple never repairs, only replaces, you're not going to have any hope of saving your data through official repair support.
That's absolutely insane for consumer tech where backups are rare.
Are backups really rare? Apple couldn't have made it easier to plug in a backup disk. Plug one in and you immediately get asked if you want to use it with Time Machine. I can't imagine an easier backup solution.
I agree with the rest of your comment. Any component failure means losing all your data.
Here it's a bit more than just that: T2 is the secure boot system, but also the SSD controller (and a few other I/O things).
They could have put an m.2 slot though.
Or even a mac using the Shuttle SFF size
What happens three years down the line when the device is out of warranty and the storage fails? Between paying an absurd amount of money to replace the whole board and getting a new one I bet most people will go with the landfill option.
My Mac mini 2012 got a second life after I maxed out the ram and replaced the spinning rust. If a component fails I can easily replace it and squeeze a couple of years more out of it.
Tightly integrated hardware, and efforts to develop software that improves performance of long-discontinued products, do a lot to explain why Apple products are used much longer than alternatives. Compare a Mac mini to a budget mini tower, and it is difficult to argue that attaching an external Thunderbolt 3 drive in 2024 is worse for the environment, when the competing product will likely be in a landfill, due to hard-to-diagnose hardware issues and the lack of an easily understood recycling program.
I started recommending against in 2014 and nothing I have seen since has given me any reason to change my mind.
1. The macOS premium. It's impractical to run macOS on anything else.
2. Status symbol.
Anecdotally, I see a lot of people outside of HN comparing their experiences with the quality of a $1200+ USD Macbook to ~$500 USD laptops of other OEMs. Of course the Macbook wins over things half its price.
Personally I've had great experiences with Thinkpads and Latitudes, which have held up great and are very easy to repair or upgrade.
Apple makes amazing things. I would love something as slick as a Mac laptop, but I can't make myself get one.
Old Mac Minis didn't hold their value because they were upgradeable.
All macs hold their value longer.
This isn't the early 2000s. The Mac Mini has an SSD, and for the overwhelming majority of these machines it will easily outlive their useful life.
I was immediately dismayed to see that storage is not upgradeable/replaceable ... but now that the device has USB3 ports, would it not be possible to boot, and run, in a performant manner, from a USB3 thumb drive ?
It seems to me that you should be able to keep these new USB3 minis in service "forever" with a little nub sticking out the back ...
I guess I don't know what the state of installing OSX to boot from external media is, circa High Sierra ... ?
Much faster than the real slow hdd it came with at least, plus I don't have to open it up to clear out the data when I'm finally done with it.
I still have yet to figure out how to create a similar hassle-free "live" booting environment which works across various hardware for windows and linux.
What happens when the soldered-on SSD fails? Can you alter a setting in the PRAM or SMC to let the system completely disregard the failed SSD?
Also, in the documentation for the T2 chip, Apple strongly recommends using FileVault, because if you do not, the SSD, even though it is encrypted by default, gets decrypted upon being mounted.
To sum up, I guess it is possible to boot from whatever device you want.
What is the speed difference between the on-board PCIe connected SSD vs. the USB3 connected SSD ? Aren't both PCIe and USB3 faster than the SSD, making it the bottleneck ?
A flash drive would, of course, be terribly slow ... do they make USB "thumb drives" that are SSD (not flash) ?
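Rough numbers for the bus comparison (nominal link rates, subtracting only line-coding overhead; real protocol overhead costs a bit more): USB 3.0 Gen 1 tops out near 500 MB/s, so a ~3.5 GB/s NVMe SSD would be bus-limited there, while PCIe 3.0 x4 leaves the SSD itself as the bottleneck.

```python
# Back-of-the-envelope: raw line rate in Gbit/s -> usable MB/s after
# line coding (USB 3.0 Gen 1 uses 8b/10b, PCIe 3.0 uses 128b/130b).

def effective_MBps(gbps, coding_overhead):
    return gbps * (1 - coding_overhead) * 1000 / 8

usb3_gen1 = effective_MBps(5.0, 2 / 10)        # ~500 MB/s
pcie3_x4  = effective_MBps(4 * 8.0, 2 / 130)   # ~3938 MB/s
print(round(usb3_gen1), round(pcie3_x4))       # 500 3938
```

So over USB 3.0 the bus is the bottleneck for any decent SSD; over PCIe 3.0 x4 the SSD is.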
I might have sensitive hearing. Many modern gadgets are a pain in the ear.
This type of noise digs deep and ruins the mood.
I am sending it back.
This is my data point, might not represent all new mac minis.
edit: I can probably go for a low polling rate, but then I wonder about tomorrow. What other use case will stress the same circuit components?
The higher-end configs with i5 and i7 CPUs are not ECC-capable; only the entry-level i3 CPU is ECC-capable.
Edit: I mean the ECC logic is self-contained inside the module, with no support requirements on the host CPU.
You’re more likely to run into silent bitrot on disk than in RAM. Depending on OS and FS it’s pretty rare even then, but if your NAS is huge it becomes more likely, just by the law of averages.
I’m not aware of any solutions to detect bit rot in RAM other than ECC.
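For the on-disk side, a checksum manifest is the usual do-it-yourself bitrot detector (filesystems like ZFS and Btrfs do the per-block equivalent internally). A minimal sketch, with invented function names:

```python
# Build a manifest of file hashes, then later re-hash and diff: any file whose
# contents changed without you touching it is a bitrot (or tampering) suspect.

import hashlib
import os

def file_sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_manifest(paths):
    """Record the current hash of every file."""
    return {p: file_sha256(p) for p in paths}

def find_rot(manifest):
    """Files whose current hash no longer matches the recorded one."""
    return [p for p, digest in manifest.items()
            if os.path.exists(p) and file_sha256(p) != digest]

# usage: persist build_manifest(...) somewhere safe (e.g. as JSON on another
# disk), and run find_rot(...) against it periodically.
```

This only detects rot, of course; repairing it still requires a good backup or a redundant copy.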
Also, one major issue: I've heard about problems booting Linux on recent Macs, but that will probably be resolved soon.
But soldered storage... And storage is the wear item, d'oh!
If you've got the time to go slow, it's kind of therapeutic.
(OK, not quite <$100, but <$200)
I am biased, working in network engineering, but I really don't see a huge use case for 10GBaseT. It's almost unheard of in an ISP environment. Anywhere we need more than 1000BaseT we do everything optically.
Total cost €680 vs €899 for the Mac Mini - it actually makes the mini look like an incredible deal.
Yes graphics is better — the GPU on the Mac Mini is the Intel UHD 630 vs the UHD 620 on the Gigabyte Brix.
With regard to the CPU, the Gigabyte’s i7-8550U is slightly faster than the i3 in the Mac Mini, but slower than the i5 and i7 offered on the Mac Mini.
The lower TDP might be something that people who plan to run a server on it might care about. I have a desktop with a i7-5820K (which has a 140W TDP), and I realized that a DigitalOcean droplet was cheaper than keeping my computer running all day (even with the monitor turned off, idle power usage was 100W+, and my electric cost is over $0.20/kWh).
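The arithmetic behind that comparison, using the figures from the comment (100 W idle draw, $0.20/kWh; the droplet price tier is whatever you'd actually rent):

```python
# Monthly electricity cost of leaving a machine on 24/7.

def monthly_power_cost(watts, usd_per_kwh, hours=24 * 30):
    return watts / 1000 * hours * usd_per_kwh

desktop = monthly_power_cost(100, 0.20)
print(round(desktop, 2))  # 14.4  -> ~$14.40/month in electricity alone
```

At roughly $14/month just for idle power, a small always-on cloud instance can indeed come out cheaper than a high-TDP desktop left running.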
I consider 16 GiB of RAM a minimum, and it is significantly cheaper with the Gigabyte. Apple’s $200 charge for an upgrade from 8GB to 16GB (in the US) is a bit too much. You can buy 16 GiB (2x8GiB sticks) for that.
Having the flexibility to pick your own SSD is nice too. The 960 PRO is one of the fastest PCIe SSDs out there — I wonder if the SSD in the Mac Mini comes close to it in speed. Also: if you don’t care about SSD speed but want a lot more storage, you have the flexibility to get the slower 1TiB 860 EVO (or a similar SSD) for around $150 — whereas Apple charges $600 for an upgrade to 1TB on the Mac Mini — an exorbitantly high price.
Lastly, I use Arch Linux both on my home laptop, and on my work machine (whenever possible). And the fact is most people buy Macs for macOS; but I’m quite comfortable using Linux, and don’t really need macOS.
For some reason a TB3 NVMe enclosure is dirt cheap, around €40, and gives you full speed access to whatever SSD you like!
I just got a mini for myself - main driver being the OS, I can’t be productive on my windows pc. Got the i5 and 256gb for ~$1200, which gives me same power as a high end 15”MBP at a quarter of the price. Mac OS is worth the €200 markup alone.
And I can’t run my music hobby software on Linux, at least not easily.
Probably more. They all have a cost.
... You were saying?
> [Tim] Cook commented on the inaccuracy of estimated cost breakdowns on Apple products.
> "I haven't seen [them for Apple Watch], but generally there are cost breakdowns around our products that are much different than the reality. I've never seen one that is anywhere close to being accurate," Cook said.
My example was about the iPhone XS Max. Can you tell me with a straight face that Apple’s top model has a lower gross margin than their average?
Also, other producers can make perfectly comparable products at much lower price points. Add to that Apple’s logistics and supplier influence prowess and there’s no way Apple is paying more than smaller manufacturers for comparable hardware.