Hacker News
Mac mini Late 2018 Teardown (ifixit.com)
211 points by nnjeremy 3 months ago | 193 comments

A bit disappointing to see that the storage is not user-upgradable. I guess it matters less on a device like this, where you could more easily use external storage, but still. I wish Apple would stop soldering stuff in and just make things a centimeter taller if that were the tradeoff.

I think it's more because of the PCIe interface, and access control and encryption through the T2 chip. The PCIe interfaces SSDs that Apple's been using allow really fast speeds of upto 3Gbps, which is almost double those offered by other interfaces like M2 or even more faster than SATA. What's disappointing is the base storage. 128GB in 2019 is so ridiculous. I wouldn't otherwise mind a soldered SSD when I can just use the TB3 ports to connect an external drive with virtually no performance loss. But 256GB of internal storage is a necessity, especially when the OS itself takes up almost 20GB of a paltry 128GB.

> The PCIe interfaces SSDs that Apple's been using allow really fast speeds of upto 3Gbps, which is almost double those offered by other interfaces like M2 or even more faster than SATA.

Nonsense. NVMe is an industry standard, M.2 supports it, just like many SSDs. A Samsung 970 Pro SSD does 3.5 GB/s (not Gbps; SATA II already did 3 Gbps well over ten years ago).

Incidentally, Samsung 970 Pro SSDs come at a fraction of Apple's upgrade costs.
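To keep the units straight, a quick sketch (assuming 8 bits per byte; the 0.8 factor is SATA's 8b/10b line-code overhead, which PCIe 3.0 doesn't share):

```python
def line_rate_to_gbytes(gbit_per_s, encoding_efficiency=1.0):
    """Convert a raw line rate in Gbit/s to usable GB/s."""
    return gbit_per_s * encoding_efficiency / 8

# Samsung 970 Pro: ~3.5 GB/s sequential reads is 28 Gbit/s of payload
print(3.5 * 8)  # 28.0

# SATA II (3 Gbit/s) and SATA III (6 Gbit/s) with 8b/10b coding
print(line_rate_to_gbytes(3, encoding_efficiency=0.8))  # ~0.3 GB/s
print(line_rate_to_gbytes(6, encoding_efficiency=0.8))  # ~0.6 GB/s
```

So "3 Gbps" isn't a flattering number for an SSD interface; it's literally SATA II's line rate.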

M.2 wouldn't be a bottleneck. It supports up to PCIe 3.0 x4, which has a maximum bandwidth of roughly 4 GB/s in each direction.
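That figure falls out of the lane math. A back-of-the-envelope sketch (8 GT/s per lane and 128b/130b line coding are the PCIe 3.0 spec values; protocol overhead would shave off a bit more in practice):

```python
def pcie3_bandwidth_gb_s(lanes):
    """Usable PCIe 3.0 bandwidth per direction, in GB/s, after the
    128b/130b line-code overhead (protocol overhead not included)."""
    transfers_per_s = 8.0    # 8 GT/s per lane (PCIe 3.0 spec value)
    encoding = 128 / 130     # 128b/130b line coding
    return lanes * transfers_per_s * encoding / 8  # 8 bits per byte

print(pcie3_bandwidth_gb_s(4))  # ~3.94 GB/s for an x4 M.2 slot
```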

NVMe specifically, as M.2 is just the connector (which can also use the slower SATA interface) rather than the interface.

Actually, PCIe specifically. NVMe improves performance slightly, but it's mostly due to PCIe.

I have a non-NVMe Samsung AHCI M.2 SSD from 2014 that gives me ~2.3GB/s sequential reads and ~1.2GB/s sequential writes, no NVMe required.

The two I installed this week are using 70GB and 90GB all set up for development with my files. I think there are plenty of people who needn’t spend on an extra 128GB. Especially since the USB ports are plenty fast for bulk, non speed critical storage should you decide to archive something locally.

128 GB doesn't work because it doesn't scale well with age.

You can run macOS or Windows on 256 GB almost indefinitely; even as they grow, it would be five or more years before it becomes a headache. To keep 128 GB happy, you have to babysit/rebuild semi-regularly.

I have yet to see an OS which doesn't bloat as it ages, even on mobile devices. 30 GB free won't still be 30 GB after the next major OS update, what with rollback files, logs, and so on. OS bloat is also one of the few things you cannot offload to external storage.

And I'm someone that ran off of a 128 GB SSD for almost two years.

PS - Plus increased resale value will likely pay for itself.

> 128 GB doesn't work because it doesn't scale well with age

As a counterpoint, my 2011 MacBook Air has 128GB, and works just fine. Photoshop, coding, and lots of personal projects.

Maybe 128GB isn't enough for gaming, but I don't think that's what the "mini" is for.

I have the exact same machine, 2011 MacBook Air, and it's always bloody out of disk space. It's the only machine I am constantly shuffling data around on. Delete the photo library so I can upgrade Xcode, let it resync from iCloud next time I feel like having my photo stream.

Can't use it to offload and edit photos while traveling, despite the convenient SD card slot, because the disk is too small.

Can't download an OS update, disk full. And no, I'm not a gamer. I've been fighting that small disk for at least the past 4 years.

You can at least upgrade the storage on Airs of that era, and in the last couple of years adapter cards for normal M.2 SATA drives have become available.

> 2011 MacBook Air, and it's always bloody out of disk space

Obviously it depends what you do with it. A 1.5TB HDD can fill up in a week under some use cases, so it is silly to mention that yours fills up. The fact that some people can use it without it filling up means it is still a viable option.


It doesn't change the fact that an OS install and updates take up a significant percentage of that 128GB, whereas they take up an utterly negligible percentage of a 1.5TB disk.
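The thread's own numbers make the point. Taking the ~20GB OS footprint mentioned above (a rough assumption; it varies by release), the fixed overhead as a share of each disk size:

```python
os_footprint_gb = 20  # rough figure from the thread; varies by macOS release

# Fixed OS overhead as a share of total disk capacity
for disk_gb in (128, 256, 1500):
    share = 100 * os_footprint_gb / disk_gb
    print(f"{disk_gb:>5} GB disk: ~{share:.0f}% eaten by the OS")
```

Roughly 16% of a 128GB disk versus about 1% of a 1.5TB one.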

I think the reason it works well for me is that I don't keep any photos on the machine except for the ones I'm using in Photoshop on current projects. And even then they're on a teeny weeny USB that barely sticks out of the side, and doubles the storage.

Counter counter point, my 128gb 2013 mbp I use for development has 90gb of “system” cruft somehow and I regularly have to shuffle files around, and have been thinking I need to re-image it since I can’t dig deeper into what the “system” bucket is made up of easily.

I discovered that the Chrome browser had been storing every previous version as it updated them. I had over 20 GB of old versions of Chrome, which I recovered when I deleted them.

GrandPerspective does it for me. Really intuitive interface for figuring out what's taking up how much storage, and deleting it.

I have the exact same issue. I have 256gb in my MBP, but 162gb are used by system files. I can't seem to figure out what is being used and how to clear it out.
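For the mysterious "System" bucket, a command-line sketch may help before reaching for a GUI tool (the paths and depth here are illustrative; on recent macOS versions, Terminal needs Full Disk Access to count some system paths):

```shell
# List the 15 largest items one level below the home directory,
# sorted by human-readable size (BSD and GNU du/sort both support these flags)
du -xh -d 1 "$HOME" 2>/dev/null | sort -h | tail -n 15

# Same idea for the whole disk; run with sudo to see system paths
# sudo du -xh -d 1 / 2>/dev/null | sort -h | tail -n 15
```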

I’ve been very happy with Daisy Disk over the years.


Omni Disk Sweeper is free and works pretty well: https://www.omnigroup.com/more

https://whatsizemac.com - been invaluable over the years with my 256GB 2012 MBA.

On SSDs, you also want as much free space as possible, to avoid wearing out the few free blocks that get constantly rewritten due to write amplification.

Modern SSDs (last 5 years or so) will wear level by relocating infrequently written blocks.

90GB out of 120GB when I've just barely installed the apps and files I need mean I'll run out of space in less than 3 months.

It's a shame that 256GB isn't the minimum in these new, expensive Mac minis.

> The PCIe interfaces SSDs that Apple's been using allow really fast speeds of upto 3Gbps

AnandTech reported in their review of the 2015 MacBook that this was just NVMe. Has something changed since then?

This is just Apple again trying to pass off a standard as their own innovation to justify their proprietary incarnation.

It's NVMe to the T2 SoC, which acts as the storage controller (among many other tasks). It would be nice to have the raw NAND on a removable card, but the market for "raw-NAND-on-a-stick" would be entirely limited to these machines.

Come on guys, they are just ripping us off. There is no sane reason for removing the m.2 connector. SSDs can run PCIe over m.2 and there is no way they run something custom there. It's bullshit marketing that enables them to sell you overpriced SSD and memory upgrades, or make you replace your PC faster.

It's not that they couldn't make m.2 PCIe NAND; it's that nobody else is making m.2 PCIe NAND. The T2's storage controller wouldn't know what to do with any other (NVMe or SATA) m.2 SSD, since then it'd be the T2's storage controller talking to the SSD's storage controller.

That's incredibly false. Literally everyone makes m.2 PCIe NAND.

In fact it's increasingly hard to find M.2 SATA these days with how overwhelmingly common PCI-E 3.0 x4 M.2 is.

And M.2 PCI-E isn't just stupid common, it's also cheap. Really, really cheap. 1TB of crazy fast PCI-E nand is $230: https://www.amazon.com/Samsung-970-EVO-1TB-MZ-V7E1T0BW/dp/B0...

Er, no, you didn't understand my statement at all.

What you're talking about is NVMe NAND—which is a standard protocol on top of PCIe. If you aren't aware, both the PCIe m.2 NVMe SSDs, and the PCIe "card" NVMe SSDs, and even the DIMM-slot SDRAM-speaking NVMe SSDs, all still contain a flash controller behind their PCIe/SDRAM controller, which implements the standard NVMe protocol, acting as a translator between the host's decoded commands from the bus, and the NAND's proprietary protocol. And, like all other flash controllers, the flash controllers in NVMe SSDs do a host of "enrichments" to the abstraction they expose, like opaque wear-levelling.

Whereas what I'm talking about—"direct PCI-e NAND"—refers to the concept of NAND memory sitting directly on the PCIe bus with no flash controller, just a PCIe controller. This would, if it existed, allow the host to directly program and erase NAND blocks—as if the host was the flash controller—by sending the NAND commands in the NAND's own proprietary command format, wrapped into PCIe packets.

Directly-addressable NAND has none of the "enrichment" a flash-controller provides to NVMe NAND, and directly-addressable NAND also speaks whatever dumb protocol the NAND's manufacturer makes it speak, requiring a special custom driver per NAND (or a host-side flash controller.)

Old embedded systems (like phones and cameras ca. 2002) used to contain directly-addressable NAND, soldered onto the motherboard. They mostly switched away from this to using eMMC chips—those being single BGA packages containing a flash controller and NAND—because they were gaining no advantage from having a separate flash controller, only inconveniences (like having to write their own storage drivers) and/or costs (of adding their own flash controller to the embedded board's BOM.)

Old iOS devices used to use eMMC. But, since the creation of the Secure Enclave, Apple has found an advantage to having a discrete flash controller: namely, that it can interact with the Secure Enclave. So, since the iPhone 6, Apple stopped using eMMC in favor of their own flash controller + directly-soldered NAND flash.

And now, Apple have done the same thing to the Mac Mini that they did to iOS devices: switched from an interface that expects to find a flash controller speaking a standard protocol (NVMe or SATA), to an interface where the flash controller is on the host side and expects to find directly-addressable NAND flash on the other side for it to manage.

Attaching an m.2 NVMe SSD to that interface, would be similar in effect to what you'd get by taking one of those old phones that uses directly-soldered NAND with a host-side flash controller, and instead of soldering in NAND, soldering an eMMC card in its place.

In short: you'd have a dead device.

Ah, you're talking absolute nonsense. That's why I didn't understand. They didn't slap the NAND directly on PCI-E because that doesn't work. There has to be an interface chip, a controller if you will. The only way to get anything close to what you've made up is to not use PCI-E, either. And it doesn't even make sense, because NAND doesn't work as "directly accessible" anyway. It still needs to read and write in blocks. It still needs those batched up, it still needs queue depth to achieve its speed, it still needs garbage collection, etc...

And since they are using an off the shelf CPU, it's using PCI-E not something custom. Which means a flash controller is on the other side of the PCI-E bus.

They are probably still using nvme because there's no reason not to, but regardless there is still a nand controller between the pci-e bus and the nand flash.

There's nothing particularly incompatible with secure enclave + standard nvme. You'd still do encryption wherever you wanted.

The T2 chip is the flash storage controller. It is not talking to a soldered-on PCIe storage device, it is addressing the NAND directly.

Apple theoretically could have a slot for NVMe separate from the system drive, but there is no industry standard for "card with NAND on it that the T2 chip understands".

> And it doesn't even make sense because NAND doesn't work in "directly accessible" anyway. It still needs to read and write in blocks. It still needs those batched up, it still needs a queue depth to achieve its speed, it still needs garbage collection, etc...

Yes, but you can achieve these things without a hardware flash controller. Old embedded devices did exactly this: they used the CPU as the flash controller.

It's simplest to understand if you look at a much older technology: floppy disk drives. There was a technology war in the 80s, between systems that had "direct-addressable" floppy disk drives, and systems that had "managed" floppy disk drives. The former (like the Apple II, and the IBM PC) had to use the CPU as a floppy drive controller, encoding individual commands to the drive's motor, managing the timings of these commands using the CPU's own clock. The latter (like the Commodore 64) could just send async commands, because there was another whole CPU inside the floppy disk drive itself, decoding those commands and managing the motor. That CPU was the "floppy disk controller."

Same was true of hard disks, before the ATA standard. The CPU needed to control the motor. This is why tools that could "low-level format" a hard disk existed: the hard disk's motor was directly-addressable to the CPU, so the CPU could literally position the disk head anywhere and write anything. (And a bad POKE to the disk's IO port could literally cause the disk's head to poke a hole in its platters!)

And this is exactly why, for the machines in the Apple and IBM lineages, the invention of a generic Disk Operating System was so foundational to making these systems easy to develop for. (A DOS is not an "Operating System" in the modern sense—though it does often contain an OS—but rather, a DOS, in the strictest sense, is a library kernel that sits beside your program, and provides your program with a software-emulated disk controller to speak to directly-addressable disk motors!)

When NAND came about, and was first used in embedded devices, it was also in directly-addressable form, with the CPU and RTOS using a set of "software-emulated flash controller" routines to speak to it. Actual hardware flash-controller chips came later!
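To make the "software-emulated flash controller" idea concrete, here is a toy sketch (purely illustrative; not Apple's design or any real controller's): the host keeps its own logical-to-physical page map, honors NAND's erase-before-program constraint, and does naive wear leveling by always writing into the least-erased block with free pages.

```python
# Toy flash translation layer: the kind of bookkeeping a host CPU does
# when NAND is directly addressable and there is no hardware controller.
# Sizes and policies are illustrative only (no garbage collection here).

PAGES_PER_BLOCK = 4

class RawNand:
    """Pretend raw NAND: pages can only be programmed after a block erase."""
    def __init__(self, num_blocks):
        self.blocks = [[None] * PAGES_PER_BLOCK for _ in range(num_blocks)]
        self.erase_counts = [0] * num_blocks

    def erase(self, block):
        self.blocks[block] = [None] * PAGES_PER_BLOCK
        self.erase_counts[block] += 1

    def program(self, block, page, data):
        assert self.blocks[block][page] is None, "erase before write!"
        self.blocks[block][page] = data

    def read(self, block, page):
        return self.blocks[block][page]

class SoftFtl:
    """Host-side translation layer: logical page -> (block, page)."""
    def __init__(self, nand):
        self.nand = nand
        self.map = {}                      # logical page -> physical location
        self.next_free = {b: 0 for b in range(len(nand.blocks))}

    def write(self, logical, data):
        # Naive wear leveling: pick the least-erased block with a free page.
        candidates = [b for b, n in self.next_free.items() if n < PAGES_PER_BLOCK]
        block = min(candidates, key=lambda b: self.nand.erase_counts[b])
        page = self.next_free[block]
        self.nand.program(block, page, data)
        self.next_free[block] = page + 1
        self.map[logical] = (block, page)  # old copy becomes stale garbage

    def read(self, logical):
        block, page = self.map[logical]
        return self.nand.read(block, page)

ftl = SoftFtl(RawNand(num_blocks=4))
ftl.write(0, b"hello")
ftl.write(0, b"world")      # overwrite lands on a fresh page; map is updated
print(ftl.read(0))          # b'world'
```

A hardware flash controller just bakes this mapping, wear leveling, and garbage collection into a chip so the host can speak a simple block protocol instead.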

And so, I hope you understand now that what you've said here is completely untrue:

> The only way to get anything close to what you've made up is to not use PCI-E, either.

You could put a NAND flash chip directly behind a PCIe controller, stick it into a PCIe slot, and then have the OS—using a "software-emulated flash controller"—create commands, and have the CPU wrap them into PCIe packets and push them through the PCH over to that board, whereupon the PCIe packet is unwrapped to deliver exactly the same NAND commands—with the same timings†—that you'd get by having the NAND wired directly to the CPU over e.g. the SPI bus.

† You'd think PCIe has delays, but that's only if the PCIe bus is oversubscribed using something like a PCIe splitter daughterboard. When using PCIe normally, despite the packet-oriented nature, the signalling is still synchronous between the board and the CPU. This is achievable because PCIe doesn't use any routing that can impose latency, but instead, the PCIe host controller merely acts as a network switch, where established paths are electrically continuous. If your NAND chip has four input terminals, and the processor has given it four dedicated PCIe lanes, then sending PCIe packets over those lanes (assuming a high-enough PCIe clock frequency) is exactly equivalent to signalling directly to those input terminals. (And, in the more complex case, if your NAND has four input terminals, and you've given it one 4x PCIe lane, then (assuming a high-enough PCIe base clock frequency) the PCIe controller can still internally ser/des and get your packets there in time.) If this didn't work, then you wouldn't have things like Infiniband RDMA working over PCIe. [Also, this guarantee of electrical continuity is the big difference between something being attached by PCIe, and attached by Thunderbolt. The Thunderbolt controller is a router, and so a driver for a Thunderbolt peripheral can't be coded with precise timings in mind.]

Also, I know this works, because devices do this. Ever had a PCIe board with firmware that doesn't get updated ephemerally at boot by the OS, but instead can be reflashed permanently, like a BIOS? (Network cards will often have this architecture, because of the requirement for PXE boot.) Well, if the firmware can be permanently updated, then that firmware is likely held on NAND that's direct-addressable over the PCIe bus (with some board-internal routing given the memory region accessed, that it told the CPU about as part of its ACPI SSDT.) That's why firmware flashers need to be these bulky custom binaries: they've got (simplistic) software-emulated flash controller libraries embedded in them, so that they can generate the packets to feed, over the PCIe bus(!), to the board's NAND. (You think they're using EEPROMs—i.e. how the BIOS calls these "option ROMs"? Well, NAND flash is a type of EEPROM. And the other, older type is way too low-capacity for holding modern firmware images.)

(The motherboard's BIOS firmware is, these days, NAND flash as well; and "BIOS flasher" programs contain software-emulated NAND flash controller libraries, too. But the BIOS isn't usually sitting on the PCIe bus [but rather is just mapped into some memory region, i.e. is directly attached to the PCH] so this example—while more obvious—isn't as powerful.)

> And since they are using an off the shelf CPU, it's using PCI-E not something custom. Which means a flash controller is on the other side of the PCI-E bus.

Nope. Remember that the NAND in the Mac Mini isn't on any bus that the CPU can speak to. The NAND in the Mac Mini is a peripheral of the T2, not a peripheral of the CPU itself. And the T2 has whatever peripheral interfaces Apple wants it to have. There's no reason for the T2 to speak PCIe to the NAND; the NAND isn't even on a shared bus with anything else. It's just right there, soldered beside the T2 chip on the board, output terminals (likely) wired to input terminals.

The flash controller is part of the T2. (If you think of the T2 as a generic CPU, then it's a lot like the Commodore 1541's 6502, in that it would be running an RTOS and that RTOS would have "software-emulated flash controller" routines. It's probably not, though; it probably has a hard flash controller as part of its package. That flash controller has just been customized to be wired to the Secure Enclave part of the T2 package, such that reads and writes going through the T2's flash controller "flow through" the T2's Secure Enclave's encryption, without the T2's generic CPU needing to be involved at all.)

If you want, you can think of the entire T2 and NAND part of the board, as an NVMe device. This actually makes a fair amount of sense, since the T2 is probably on the PCIe bus, and the CPU is probably talking NVMe-over-PCIe to the T2 when it wants to write to the disk. It's just that the CPU is also talking a bunch of other protocols to the T2 as well, when it wants other things.


The best way to understand the Mac Mini's CPU's relationship with the T2, is that it's exactly like having a diskless Intel PC, with an iPhone plugged into it over a Thunderbolt cable. (Except, without the aforementioned Thunderbolt latency.) This iPhone has been modified to expose itself to the PCIe bus as, among other things, an NVMe device. When the host writes to this "storage device", it's really just talking to the iPhone's CPU, which can then do whatever it wants with that received command. It probably turns it into a write to its own storage, after doing a bunch of other stuff. Neither PCIe nor [standard] NVMe is involved in that side of the storage process, because the iPhone is not a user-expandable computer that needs to speak standard protocols to its own hardwired chips.

In this analogy-that's-not-really-an-analogy-at-all, the "iPhone's CPU" is the T2, and all the iPhone's chips have been plopped onto a motherboard shared with the Intel PC. However, these chips are still isolated from the Intel PC in exactly the same way they would be if it was just an iPhone-over-Thunderbolt. It's another whole computer in your computer, just like the article says. And the NAND is part of that computer.

Is encryption optional? In general I'm more worried about forgetting my password (and thus losing all my data) than about unauthorized access to my home desktop.

It provides an option to either display a long recovery key at first encryption (which you can store somewhere) or to archive it online with Apple in iCloud so that you have a second key available in the event that you lose your password.

But also: backups. make backups.

If you really don't care about unauthorized access, just write the password in sharpie on the box. I have a bunch of machines hooked up to test equipment that sit on their own network. I just write the username and password on the box and no one ever needs to worry about trying to find the sticky note or find me and ask or waste time guessing. The name of the computer, the username, and the password are all the same and reflect what the box is set up for.

I also endorse writing the "password when passwords don't matter" on the side of the machine. ISPs have been doing it with their gateway modems for years, and I've never heard of any threat model that calls that out as a problem.

I do this with wifi passwords on routers. Though I use masking tape.

There are better ways to solve the forgetting password problem than turning off encryption.

You can always write it on paper and store it in a physical location where it can be retrieved later, or use a password manager.

Why not use a password manager? I use one for every kind of username/password entry. Safe and reliable, at least relatively, depending on which manager you use.

If you need a password to log in to your system and decrypt your storage, a password manager is useless.

Password managers sync across devices and so if you need a password to log in to your system, then you can retrieve it from the other synced device/s. Or am I missing something?

It can use the iCloud account to encrypt the disk, rather than a separate password.

PCIe SSDs are still a common point-of-failure, aren't they? I feel like they should be replaceable.

Not any more so than other major components. Last SSD failure I saw personally was back in the Sandforce days - probably 5+ years ago.

> What's disappointing is the base storage.

But that just depends on use-case.

There are plenty of scenarios where you just need the built-in storage to handle the OS, apps and miscellaneous storage while your large scale storage is external somewhere.

Why make people get (and pay for) more storage than they have use for?

Because storage is cheap for everyone except Apple, which knows it can get a 60%+ markup on everything it sells.

Meh, Apple's certainly not alone in its profit margins on upgrades. It's a fairly universal strategy to keep the baseline price appealingly low. Dell, for instance, charges $400 to upgrade from 16GB to 32GB of RAM on some rack servers.

Apple's memory upgrades somehow still manage to be significantly more expensive than the significantly overpriced Dell upgrades.

That’s not what necessity means. I think most users of this device will be fine with 128gb. I think you underestimate cloud usage for this machine’s target market.

I'll bite. My parents have got a ~2014 Macbook Pro and 1 megabit upload on ADSL. They're just starting to use "cloud" services like OneDrive. If a large upload starts, no one else in the house can use the 'net. I'm dealing with ~100GB of photos that only have one backup now and it's a nightmare.

I wouldn’t want this little storage onboard, but recent macOS versions do include a feature to automatically shuffle infrequently used files up to iCloud.

Not justifying anything...but back when Steve was alive I had a conversation with him on this topic. I was at NeXT writing drivers (NeXTStep) and I had tested a few disk drives on my cube. When I suggested that we might certify them, Steve said that he wanted simplicity, and only wanted to provide a few well-tested and working choices - so he didn't want more than a few basic choices. He also said (disparagingly) that more sophisticated users would figure out how to change things. From his tone I got the impression that it was ok if they did that, but we weren't going to support them at all. Such users weren't our target market, since all my interactions with Steve were focused on beauty and simplicity (Steve wasn't technical at all, as has been well documented - and I can certainly confirm.)

Maybe Apple should build SSD storage thingies that are exactly the same size as the mini so they can be stacked. They're already stacking them for demos and it would definitely look cool. Especially if you could get Thunderbolt cables that exactly match 1-4 mini heights (since it has 4 Thunderbolt ports) + daisy chaining.

Other World Computing (OWC) actually sells a device called a miniStack which is essentially that: an external enclosure with the same form factor and design as the (last-generation) Mini. It still uses an older USB 3 connection though, so no daisy-chaining.

That’s been a thing from a lot of companies for past Minis (e.g. https://www.newertech.com/products/ministack_classic.php ); I see no reason why it won’t happen again.

Contemporary SSD storage only takes up as much space as a pack of gum, just like sticks of RAM. Even SSDs with terabytes of capacity! Any desktop computer, even a small one, has all the physical space needed to have upgradeable storage.

Yes, you could fit one m.2 SSD slot into a Mac Mini.

But anyone who has a serious professional need for large storage (e.g. video editors), probably doesn't just want 2x-4x the internal storage, with the same bus-factor-of-1 fault-tolerance. No, they probably need either a Thunderbolt RAID array (that they can put into RAID5/6/10), or a SAN. And neither of those fit in a Mac Mini.

(Though, a Thunderbolt RAID array of m.2 SSDs might fit in a Mac Mini-sized enclosure. Does anyone sell that?)

Those already exist but it's kind of missing the point. An NVMe SSD is the size of a stick of gum, I'd much rather plug one of those in than have to stack a box the size of the Mini on top of my Mini and connect it with TB3. In the end you've got something more expensive, probably slower, and less portable than the obvious solution.

You could put a 3.5” drive in a Mac Mini sized box. And with Thunderbolt you could have it double as a hub or dock.

Something that allows maybe 4 or more M.2 SSDs and 2 x TB3 controllers...

My understanding is that Apple pairs the individual hard drive in any given unit with the "Secure Enclave" that they use for TouchID within the same unit. So they are paired and one cannot be replaced without also replacing the other.

Not excusing it, just explaining.

Ideal solution would be small secure enclave storage for keys then user-upgradable main storage encrypted with those keys.

Of course, but then how could you charge 2-3x for storage upgrades that way?

Edit: wow, really struck a nerve there. I take it back, Apple is in no way motivated to massively overcharge for storage upgrades.

Console manufacturers never seem to have any issue with it, and hard drives are user upgradable

Has that changed in the current generation? On the XBox 360, you could physically stick any standard IDE drive in the drive bay, but the console wouldn't mount it unless it had a hidden file verifying that it was purchased from Microsoft, which charged well over market value.

IIRC they were also using a clever (sneaky) application of copyright law to enforce this. They used a few special sectors on the disk to store an image to which they hold the copyright. The Xbox checks to make sure the image is there in the right spot before it will use the drive. So you can't pass off another drive as valid without violating their copyright.

That was tried with early video game consoles, and did not succeed legally.

Console manufacturers have a different revenue model. They get money from games sold for their platform, this allows them to sell hardware with low margin or even at loss.

Is this not the idea behind the Mac App Store?

Not really, no. Most of Apple's money comes from hardware sales.

“Services”, as Apple likes to call the heading it reports App Store revenue under, is the fastest-growing business at Apple. While historically it’s true Apple has made its money on high-margin hardware, future growth seems much, much more reliant on services. I wouldn’t be so quick to discount the increasing importance of services revenue, such as the app stores, to Apple.

There is no Touch ID on the Mac Mini.

It still has a T2 chip.

That does make more sense than the size thing. Still wish Apple didn't do it, but that's a better excuse.

The trade-off isn't height--an M.2 slot takes up a relatively large amount of floor space on the motherboard. There's nowhere to put it: https://d3nevzfk7ii3be.cloudfront.net/igi/ZLFbiRsOdiemeWLA.h....

My previous Mac Mini had 2 HDD slots and upgradable RAM, so it IS possible, even for Apple. Apple has gone from a one-step-backwards, two-forwards company to a one-step-forwards, two-backwards one. The devices where they removed optical drives were groundbreaking. Nowadays their devices are riddled with compromises.

Given previous versions of this enclosure (with I believe the same exterior dimensions) managed to fit the computer plus two 2.5” old school hard disks, Apple clearly could engineer the interior of this device to fit a much, much smaller m2 SSD. This is a question of engineering choices, not physical space limitations.

In this case the engineering choice was probably driven at least in part by the desire to take advantage of the T2’s line-rate encryption features, which so far have never been supported on removable/upgradeable storage media (at least as far as I am aware - the T2 drive encryption is a separate feature from FileVault, which is supported on removable media). Even on the iMac Pro, which technically has removable flash storage, to my knowledge no one has yet managed to produce a T2-compatible aftermarket third-party upgrade.

Thankfully thunderbolt 3 removes much of the advantage internal storage media traditionally enjoys, at least in performance terms, so there are reasonable upgrade paths.

They had mobile processors.

It's very unlikely the TDP differences would prevent a design with an m2 SSD if Apple wanted one. TDP is 65 watts vs the 45 in the models with two relatively enormous 2.5 inch drives. This might vary depending on spec, but it's nowhere near enough to prevent an m2. We are talking about modern CPUs fabricated on more efficient processes vs those from 2012.

This is a choice, not a hard engineering limitation. An m2 SSD is tiny even compared to a single 2.5” drive.

There's plenty of room for an M.2 module, especially on the backside of the board which is how they're traditionally mounted on desktop ITX boards.


They can also be mounted at an angle just like those SO-DIMM slots.

The M.2 standard even supports 12mm, 16mm, and 30mm width modules in addition to the 22mm that seems to have become the norm. Apple could still be a dick by using a 12mm or 16mm card while still supporting the standard.

> They can also be mounted at an angle just like those SO-DIMM slots.

Stand-off M.2 sockets are a standard part, because a number of mainboards put components under the M.2 slot.

It doesn't take up that much space. Here's a Raspberry Pi sized board with an M.2 slot: https://www.cnx-software.com/wp-content/uploads/2018/11/RK33...

I think there'd easily be room, given that the soldered-on NAND chips could be removed from the board.

edit: If you're wondering how that works, here's a hint: https://raw.githubusercontent.com/ThomasKaiser/Knowledge/mas...

Because of T2 being the SSD controller, chances are those soldered SSD chips are actually raw flash, in which case even if they use M.2 slot, you won’t find any 3rd party raw flash M.2 drive (without controller) anyway.

The bigger problem isn't that it's not user-upgradable, it's that you can't access your data in the event of a failure of any other component.

Cap blew on the motherboard? Data gone. CPU kicked the bucket? Data gone. Spilled water on it? Data gone.

Not literally gone, of course, just practically. No chance of popping the drive out and trying it elsewhere to recover data. Nope. And since Apple never repairs, only replaces, you're not going to have any hope of saving your data with official repair support.

That's absolutely insane for consumer tech where backups are rare.

That's absolutely insane for consumer tech where backups are rare.

Are backups really rare? Apple couldn't have made it easier to plug in a backup disk. Plug one in and you immediately get asked if you want to use it with Time Machine. I can't imagine an easier backup solution.

I agree with the rest of your comment. Any component failure means losing all your data.

> I guess it matters less on a device like this where you could more easily use external storage, but still. I wish Apple stopped soldering in stuff and just made things a centimeter taller if that was the tradeoff.

Here it's a bit more than just that: T2 is the secure boot system, but also the SSD controller (and a few other I/O things).

They could have put in an M.2 slot, though.

I do wish they had gone with M.2. It really feels like the T2 is wasted in the Mac mini and just there to check a feature list. It doesn't even make much sense in a machine that is turned off instead of put to sleep.

T2 is Apple’s SSD controller. I suppose Apple does not trust controllers from other manufacturers and has enough data to see that with the T2 the storage is sufficiently reliable not to warrant an option to replace it, even at its own service facilities.

I assume they will have an expandable solution for the new Mac Pro, which should have been put into the mini. This lack of repairability is getting old.

They also of course charge $1400 on top for a 2TB SSD; a user-accessible storage medium would simply eat into the margins real quick. That was the tradeoff, and it's why the base models have so little storage.

Someone should sell a kit that replaces the bottom panel of the new Mac Mini with a slightly thicker one containing an "external" NVMe drive enclosure. It could be coupled with a piece which replaces the Mac Mini power plug assembly, which could then tap mains power, allowing the "external" enclosure to be a Thunderbolt 3 NVMe enclosure. (Could have thicker ones for thin profile and regular 2.5" SATA drives.)

Would it be feasible to order the model with the smallest built-in storage and put the home directories (/Users/*) on a fast external TB3 SSD?

It has 4 Thunderbolt 3 ports; you don't really need to upgrade the internal storage when you can plug in as much storage as you want without paying much, if any, performance penalty.

I have always thought a "Mac Midi" — an inch wider, 3x-4x the height, and able to take a short graphics card — with more options would have been a good basic Mac for a lot of Apple shops.

Or even a Mac using the Shuttle SFF size.

Apple can be proud of using 60 percent recycled plastic, but a computer that is not easily serviceable will end up in a landfill way sooner than it should.

What happens three years down the line when the device is out of warranty and the storage fails? Between paying an absurd amount of money to replace the whole board and getting a new one I bet most people will go with the landfill option.

My Mac mini 2012 got a second life after I maxed out the ram and replaced the spinning rust. If a component fails I can easily replace it and squeeze a couple of years more out of it.

I think holding Apple to higher standards for its environmental impact has a lot to do with why they innovate in this respect, and I think that it is appropriate, given their size as a manufacturer and their positioning as a premium product.

Tightly integrated hardware, and efforts to develop software that improves performance of long-discontinued products does a lot to explain why Apple products are used much longer than alternatives. Compare a Mac mini to a budget mini tower, and it is difficult to argue that attaching an external Thunderbolt 3 drive in 2024 is worse for the environment, when the competing product will likely be in a landfill, due to difficult to diagnose hardware issues and there not being an easily understood recycling program.

Where does the belief that Apple makes durable products come from? I have the exact opposite experience, and I have been fixing my own and my friends' computers for a long time.

I started recommending against them in 2014, and nothing I have seen since has given me any reason to change my mind.

My anecdotal evidence is the opposite. But I do not trust that to judge reliability in general. The prices for used Macs are consistently higher even after 4-6 years than for comparable PCs, and that is a much better statement about quality.

Two other reasons I can think of why Macs cost more, including used:

1. The macOS premium. It's impractical to run macOS on anything else.

2. Status symbol.

Anecdotally, I see a lot of people outside of HN comparing their experiences with the quality of a $1200+ USD Macbook to ~$500 USD laptops of other OEMs. Of course the Macbook wins over things half its price.

Personally I've had great experiences with Thinkpads and Latitudes, which have held up great and are very easy to repair or upgrade.

One does not get 5 year old Mac as a status symbol.

My wife's old MacBook (from before the whole unibody thing) is a tank. But I have fixed so many strange things in Macs. Chips becoming unsoldered (apple soldering is quite frankly not very good), heat issues and even broken SATA cables (!!!).

Apple makes amazing things. I would love something as slick as a Mac laptop,but I can't make myself get one.

OK, sure, but why do they need to manufacture everything out of CNC'd billets of aluminum? That seems extremely wasteful.

This is one of the least wasteful things they do. The cuttings are recycled, and aluminum is one of the few materials that is more energy efficient to recycle than to mine/manufacture from scratch.

They also do CNCing pretty efficiently. The same piece of aluminum that makes an iMac screen bezel makes two wired Mac keyboards.

You know exactly why: you don’t get the quality that Apple wants any other way. Supposedly they reuse the shavings, so it might not actually be as bad as you think.

And when the machine is decommissioned, the aluminum from the chassis is easily recyclable...

As a counterpoint, when I wanted to sell my Mac mini after 5 years of 24/7 use, I found a buyer within one hour. Try that with any other maintainable PC.

Your counterpoint is a bit weak. You sold a 5 year old Mac Mini, the OP literally just wrote how the old Mac Mini held its value precisely because it could be upgraded with replaceable components, while the updated Mac Mini is not. I doubt a current gen model will hold its value nearly as well as the older models because parts cannot be replaced down the line.

I've never had trouble using craiglist as an upgrade path for any Mac, even the everything-soldered to the board Macbook Airs.

Old Mac Minis didn't hold their value because they were upgradeable.

All macs hold their value longer.

The important thing is that the RAM is upgradable. New memory technologies in 5 years may let all storage use RAM slots.

I am still using my Mini from 2012, and it runs the latest OS, Mojave.

Sadly, my 2010 MBP 17" is not Mojave compatible :sob:

macOS Mojave Patcher Tool for Unsupported Macs http://dosdude1.com/mojave/

Some Google searches indicate to me SSDs don't have a worrisome failure rate. If it happens to be a widespread problem for Apple products, there's precedent for them eventually having a replacement program. [1] [2]

[1] https://www.apple.com/support/macbookpro-videoissues/ [2] https://support.apple.com/en-my/iphone/repair/service/batter...

I think he means the inevitable failure of all storage devices. They will eventually break down like anything. If they could be replaced, the machine would keep on living.

> What happens three years down the line when the device is out of warranty and the storage fails...

This isn't the early 2000s. The Mac Mini has an SSD, and for the overwhelming majority of these machines it will easily outlive their useful life.

I agree that Apple could make a more serviceable machine, but instead of throwing the Mini into the landfill, why not bring it to the Apple store so they can recycle it?

Because many countries don't have one.

And even when they do have one, it can be a day trip (or more) to visit. Onsite and courier options provided by other vendors such as Dell put Apple to shame.

"What happens three years down the line when the device is out of warranty and the storage fails?"

I was immediately dismayed to see that storage is not upgradeable/replaceable ... but now that the device has USB3 ports, would it not be possible to boot, and run, in a performant manner, from a USB3 thumb drive?

It seems to me that you should be able to keep these new USB3 minis in service "forever" with a little nub sticking out the back ...

I guess I don't know what the state of installing OSX to boot from external media is, circa High Sierra...?

I've booted my iMac from an external thunderbolt SSD for years. Once you set it to boot from an external SSD, it will happily continue doing so.

Much faster than the real slow hdd it came with at least, plus I don't have to open it up to clear out the data when I'm finally done with it.

I still have yet to figure out how to create a similar hassle-free "live" booting environment which works across various hardware for windows and linux.

Once you set it to boot from an external SSD, it will happily continue doing so.

What happens when the soldered-on SSD fails? Can you alter a setting in the PRAM or SMC to let the system completely disregard the failed SSD?

Yes, if the soldered-on SSD has failed you can boot from an external drive by holding down the alt/option key while the system powers on to bring up the boot selection menu.

I thought this no longer works with the T2 chip? Don't you run into issues with e.g. upgrading the firmware because it will only accept firmware updates from the internal drive?

I got a brand new mac with a T2 chip this week from my work. The first thing I did was do all the updates, including a security update for the OS, which required a restart. During the restart it was giving errors, so it was not able to boot properly. It would just give the error message and you could only restart again, just to have the same thing happen. A colleague more familiar with macs, brought up a menu, where you could choose which drive to boot from, which eventually fixed the problem, since it booted from the hard disk. In my case this menu had only the internal ssd, since I did not have anything else connected. I would guess that if I had an external hard drive it would show up as well.

Also on the documentation of the T2 chip, Apple strongly recommends to use file vault, because if you do not, the ssd even though it is encrypted by default, gets decrypted [1] upon being mounted.

To sum up, I guess it is possible to boot from whatever device you want.

[1] https://support.apple.com/en-us/HT208344

It remembers which drive it boots from so as long as you keep the external drive connected it should boot from it

Still, another dongle to carry (with Mac Mini it's not that important) and a loss of speed/added latency for e.g. video editing. With normal usage patterns even USB 3.0 external thumb/SSD drives were just fine.

"Still, another dongle to carry (with Mac Mini it's not that important) and a loss of speed/added latency for e.g. video editing. With normal usage patterns even USB 3.0 external thumb/SSD drives were just fine."

What is the speed difference between the on-board PCIe-connected SSD vs. the USB3-connected SSD? Aren't both PCIe and USB3 faster than the SSD, making it the bottleneck?

A flash drive would, of course, be terribly slow ... do they make USB "thumb drives" that are SSD (not flash)?
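Back-of-the-envelope: the answer depends on which link you use. A rough comparison of raw link ceilings (ignoring encoding and protocol overhead; the ~3 GB/s figure for a fast internal NVMe SSD is an assumption taken from the discussion above):

```python
# Rough link-bandwidth ceilings in MB/s, ignoring encoding/protocol overhead,
# to see which external interfaces would bottleneck a fast SSD.
def gbps_to_mb_per_s(gbps: float) -> float:
    return gbps * 1000 / 8

links = {
    "SATA III (6 Gbps)":       gbps_to_mb_per_s(6),   # 750 MB/s
    "USB 3.0 (5 Gbps)":        gbps_to_mb_per_s(5),   # 625 MB/s
    "USB 3.1 Gen 2 (10 Gbps)": gbps_to_mb_per_s(10),  # 1250 MB/s
    "PCIe 3.0 x4 (~32 Gbps)":  gbps_to_mb_per_s(32),  # ~4000 MB/s
    "Thunderbolt 3 (40 Gbps)": gbps_to_mb_per_s(40),  # 5000 MB/s
}
fast_ssd_mb_per_s = 3000  # assumed: a fast NVMe SSD, per the thread above

for name, ceiling in links.items():
    verdict = "link is the bottleneck" if ceiling < fast_ssd_mb_per_s else "SSD is the bottleneck"
    print(f"{name}: {ceiling:.0f} MB/s -> {verdict}")
```

So a USB3 thumb drive or even a USB 3.1 Gen 2 enclosure caps well below the internal SSD; only PCIe/Thunderbolt 3 paths get out of the drive's way.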

So use a tb3 enclosure with an nvme inside.

There are no bus-powered TB3 M.2 PCIe enclosures, and there never will be any[1]. This means you will have to buy a huge bulky externally-powered TB3 PCIe dock costing hundreds of dollars to accomplish this.

1. https://twitter.com/sephr/status/817883011165233152?s=19

No, it says there won’t be any that are sold unbundled.

The new mac mini makes a constant electrical noise when I move the mouse. It is audible and irritating >2 meters away from the mac. I have tried hubs, different ports and nothing else connected. It is ok with a wireless mouse, lower polling rate. The fan is quiet, not irritating.

I might have sensitive hearing. Many modern gadgets are a pain in the ear. This type of noise digs deep and ruins the mood. I am sending it back. This is my data point, might not represent all new mac minis.

edit: I can probably go for a low polling rate, but then I wonder about tomorrow. What other use case will stress the same circuit components?

Take it back, get a replacement.

I've had coil whine on a Windows PC before, but I fixed it by using a USB hub between. Your monitor might have one built-in, see if that fixes it. It also only occurred with one mouse for me, but it was my favorite one so I wanted to keep it.

That whine's almost 100% coming from the CPU as it's a common issue with Intel CPUs these days. It's the weirdest thing ever, but my Hackintosh makes it as well (i7-6700k) so you're not crazy. It also very notably comes and goes independent of workload and mouse polling rates, but no matter the case I'd still return/replace the Mini just because it's an option.

This is abnormal.. I would try to get a replacement from Apple

I really hope someone can figure out whether the Mac mini with the i3-8100B CPU can work with ECC memory. According to Intel the CPU is ECC-capable, but I’m not sure about the mobo. If it works with ECC, it would be a truly mini Mac Pro.

The higher-end configs with i5 and i7 CPU are not ECC-capable, only the entry-level i3 CPU is ECC-capable.

ECC is starting to show up inside the RAM chips themselves. Don't know if it's reached consumer devices like this Mac Mini though.

Edit: I mean the ECC logic is self-contained inside the module, with no support requirements on the host CPU.

What is your use case where ECC ram is that important?

I plan to ditch NAS and just connect external storage directly to the mini, and it needs to run 24/7. ECC should give me better reliability, no?

Yeah, but for a NAS it’s overkill in my opinion.

You’re more likely to run into silent bitrot on disk than in RAM, depending on OS and FS. Even then it's pretty rare, but if your NAS is huge it becomes more likely, just by the law of averages.

Bit rot on cold storage is at least detectable if using a filesystem with checksumming (e.g. ZFS).

I’m not aware of any solutions to detect bit rot in RAM other than ECC.
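The principle behind checksumming filesystems like ZFS is simple: store a hash alongside every block and re-verify it on read, so corruption the drive never reported still gets caught. A minimal Python sketch of the idea (not ZFS's actual Fletcher/SHA pipeline):

```python
import hashlib

def write_block(data: bytes):
    # Store the block together with its checksum, as a checksumming FS does.
    return data, hashlib.sha256(data).hexdigest()

def read_block(data: bytes, stored_checksum: str) -> bytes:
    # On read, recompute and compare; a mismatch exposes silent corruption.
    if hashlib.sha256(data).hexdigest() != stored_checksum:
        raise IOError("checksum mismatch: bit rot detected")
    return data

block, checksum = write_block(b"family photos")
assert read_block(block, checksum) == b"family photos"

# Simulate bit rot: flip a single bit in the stored block.
rotted = bytes([block[0] ^ 0x01]) + block[1:]
try:
    read_block(rotted, checksum)
except IOError as exc:
    print(exc)  # checksum mismatch: bit rot detected
```

With redundancy (mirrors/RAID-Z), the filesystem can go one step further and repair the bad copy from a good one, which is what a ZFS scrub does.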

What's a good and ergonomic way to attach something like 4-6 HDDs? I'm using HP Microserver Gen8 and it's a wonderful machine but it has its drawbacks. I'm thinking about replacing it, but it's hard to find something of its size and compatible with enough HDDs.

Also the major issue is that I've heard about problems booting Linux on recent macs, but probably it'll be resolved soon.

I have done this for 6 years, mini with two drobos connected to it running plex 24x7. No issues without ECC ram. I have since switched to a synology because it's easier to manage and won't have bit rot.

To clarify, I’m more concerned about silent data corruption than about crashing. Why does Synology not have bit rot? Because of ZFS? I thought ECC memory was almost required for ZFS?

There is nothing special about ZFS that makes it require ECC. It's a myth. See: http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-yo...

Good point. Thanks! :)

FWIW Synology uses BTRFS, not ZFS

More accessible than the previous gen (although oddly iFixit gives it the same score of 6/10), but it's harder than I'd hoped to get to the RAM.

The previous one was a horror. Security T6 screwdrivers are a nightmare to source. I have bought 5 sets and none have been correct. I resorted to attacking them with pliers and eventually had success.

We include good T6 security bits in our Pro Tech Toolkit: https://www.ifixit.com/Store/Tools/Pro-Tech-Toolkit/IF145-30...

There was a period where you were all sold out - it’s the first place I look as your kits are great.

Agreed. Hopefully it is only something you'd do once per device. I'd hate to work somewhere where they spend all day performing all those steps.

I've worked in one of those places. Once you work on the same machine a few times you get really good at it.

If you've got the time to go slow, it's kind of therapeutic.

Yes, but it is upgradeable again, so Apple listened a little to us raising a stink about it.

Replaceable RAM, finally, THANK YOU!

But soldered storage... And storage is the wear item, d'oh!

What I really like about the Mac mini is the internal power supply. I'd wish all the PC vendors would offer their mini PCs with internal power supplies too.

Curious as to which 10Gb Ethernet controller is being used, no mention in the teardown. Anyone know?

Aquantia, same as the iMac Pro.

Is there anyone else manufacturing 10G controllers at $100 end-user prices?

Yes, Tehuti Networks[1], which is used by some Thunderbolt 3 Ethernet adapters, like the Akitio Thunder3 10G [2].

(OK, not quite <$100, but <$200)

[1] http://www.tehutinetworks.net/?t=LV&L1=3&L2=0&L3=0&L7=156 [2] https://www.akitio.com/adapters/thunder3-10g-network-adapter

Not for Mac Minis, but you can get two port, Intel X520 series 10GbE (optical) PCI-Express cards for $85 to $110 each on ebay these days. 1310nm/LX SFP+ are really cheap, like $25 a piece.

I am biased, working in network engineering, but I really don't see a huge use case for 10GBaseT. It's almost unheard of in an ISP environment. Anywhere we need more than 1000BaseT we do everything optically.

What about connecting to storage, either an SSD NAS or a local disk array? At work we have regular old HDDs in RAID 1 and it saturates a 1 gig connection, so I'd imagine SSDs in some faster RAID would get way above that, and you don't have to redo wiring everywhere if you already have Cat6 installed.

Flash memory soldered to the main board also doesn't seem like a good idea, since this type of memory has limited durability (depending on the number of writes).

I wonder if it's hard to resolder it. It's not a problem physically for an experienced repair service, but maybe Apple locked it with software.

They say it’s got impressive specs, but I don’t find it impressive at all. This brick-sized Gigabyte (NUC) computer has far more impressive specs than the Mac Mini: https://www.gigabyte.com/Mini-PcBarebone/GB-BRi7-8550-rev-10

What's impressive about it? That's a mobile CPU, ~20% slower than the Mini base model. Slower RAM (2400 vs 2666). Two USB3.1 ports vs four in the mini, no Thunderbolt (10gbps vs 40gbps). Weaker GPU, can't output 4K. To top it off, it doesn't come with RAM (+€80 8gb) or an SSD (+€50 128GB).

Total cost €680 vs €899 for the Mac Mini - it actually makes the mini look like an incredible deal.

Those are fairly good points. I upvoted you.

Yes graphics is better — the GPU on the Mac Mini is the Intel UHD 630 vs the UHD 620 on the Gigabyte Brix.

With regard to the CPU, the Gigabyte’s i7-8550U is slightly faster[1] than the i3 in the Mac Mini, but slower than the i5 and i7 offered on the Mac Mini.

The lower TDP might be something that people who plan to run a server on it might care about. I have a desktop with a i7-5820K (which has a 140W TDP), and I realized that a DigitalOcean droplet was cheaper than keeping my computer running all day (even with the monitor turned off, idle power usage was 100W+, and my electric cost is over $0.20/kWh).
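The arithmetic behind that comparison is straightforward; using the commenter's own assumed figures (~100 W idle draw, $0.20/kWh):

```python
# Back-of-the-envelope annual electricity cost of leaving the desktop idling,
# using the assumed figures from the comment above: ~100 W idle, $0.20/kWh.
idle_watts = 100
price_per_kwh = 0.20
hours_per_year = 24 * 365

annual_kwh = idle_watts / 1000 * hours_per_year  # 876 kWh
annual_cost = annual_kwh * price_per_kwh         # 175.2 -> about $175/year
print(f"~${annual_cost:.0f}/year to keep it idling")
```

At roughly $175/year, a $5/month droplet ($60/year) does indeed come out cheaper.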

I consider 16 GiB of RAM a minimum, and it is significantly cheaper with the Gigabyte. Apple’s $200 charge for an upgrade from 8GB to 16GB (in the US) is a bit too much. You can buy 16 GiB (2x8GiB sticks) for that.

Having the flexibility to pick your own SSD is nice too. The 960 PRO is one of the fastest PCIe SSDs out there — I wonder if the SSD in the Mac Mini comes close to it in speed. Also: if you don’t care about SSD speed but want a lot more storage, you have the flexibility to get the slower 1TiB 860 EVO (or a similar SSD) for around $150 — whereas Apple charges $600 for an upgrade to 1TB on the Mac Mini, an exorbitantly high price.

Lastly, I use Arch Linux both on my home laptop, and on my work machine (whenever possible). And the fact is most people buy Macs for macOS; but I’m quite comfortable using Linux, and don’t really need macOS.

[1] https://www.cpubenchmark.net/compare/Intel-i3-8100-vs-Intel-...

When I looked at userbenchmark.org the i3 came out ahead. Apple’s SSDs are top notch, crossing 3GB/s. RAM is upgradeable on the mini so you can save a couple hundred, plus macOS memory management is so much better than Windows you’ll only feel any difference if doing heavy video/audio/3D workloads.

For some reason a TB3 NVMe enclosure is dirt cheap, around €40, and gives you full speed access to whatever SSD you like!

I just got a mini for myself - main driver being the OS, I can’t be productive on my windows pc. Got the i5 and 256gb for ~$1200, which gives me same power as a high end 15”MBP at a quarter of the price. Mac OS is worth the €200 markup alone.

The Brix uses a 15W processor while the Mac mini is 65W; that's going to result in significant performance difference.

I'm disappointed that the CPU isn't upgradable. I got my hopes up with it being a desktop part number (i3-8100 according to everymac.com).

Can you case-swap into the classic Silver body?

If it bothers you, you can remove the anodization chemically with household products. That gets you to silver, but easily scratched. Then you can re-anodize it using more household and hot tub chemicals and a current-limited power supply; a bench supply is fine. That gets you to silver and hard, but it will stain. Soak it in whatever dye you like to color it, or not, then boil it to seal off the pores in the anodization and you have the case of your dreams.

No, the LED and IR are in different locations which means the internal mounts are probably different too.

Soldered storage, soldered CPU, connectors on same board. Meh...

These are all par for the course for Apple’s products. The fact that the RAM is upgradable is quite surprising, since it’s a departure from the direction they went in for the earlier Mac Mini.

That thing has like $400 worth of hardware in it.

For your services, do you charge only the amount you need to live on, or as much as you think you can get?

Heh every post about Apple products has this comment, doesn't it?

How much would you value MacOS license at ?

Bingo. It's worth a few hundred dollars to me to never see MS Windows again.

And I can’t run my music hobby software on Linux, at least not easily.

I also used to run DR DOS decades ago to avoid MS then, too.

There’s more to it than the components. For instance;

- manufacturing.

- packaging.

- margin.

- profit.

- tax.

- warehousing.

- dev.

- r&d.

- marketing.

- distribution.

- stores.

- licensing.

- logistics.

Probably more. They all have a cost.

So does the iPhone XS Max. When has that ever stopped them? :)

This isn’t true for the Mac Mini, and it isn’t true for iPhone either. Apple’s margins are rarely over 40% on their consumer hardware.

That analysts are full of shit

They literally priced out the components. I would love to see any information you have that shows they are full of shit if you can provide any


> [Tim] Cook commented on the inaccuracy of estimated cost breakdowns on Apple products.

> "I haven't seen [them for Apple Watch], but generally there are cost breakdowns around our products that are much different than the reality. I've never seen one that is anywhere close to being accurate," Cook said

The iPhone margins are around 40%: https://www.statista.com/statistics/666993/iphone-estimated-...

My example was about the iPhone XS Max. Can you tell me with a straight face that Apple’s top model has a lower gross margin than their average?

Also, other producers can make perfectly comparable products at much lower price points. Add to that Apple’s logistics and supplier influence prowess and there’s no way Apple is paying more than smaller manufacturers for comparable hardware.

If only someone could figure out how to put components into a larger box, so that there is no need to solder things down. The motherboard is surely the same whether it is i7 or i3, so if there was some sort of a socket, you could replace a cpu like a bulb.
