The exploit as described doesn't "brick" the device; that would require permanently disabling it. A tethered restore would be all that's required to recover in this case.
There's physically no way to permanently "brick" an iPhone.
DFU mode boots entirely from read-only ROM, and from there, you can just restore everything via USB cable.
Same applies to Apple Silicon Macs. You can damage the system, recovery and emergency recovery volumes, but even then, you can still boot into DFU from ROM and re-initialize everything via another Mac.
This is in contrast to some PCs, where if you damage the BIOS (e.g. by suddenly losing power during a firmware update), your device may or may not be bricked. There have even been stories of people's computers being bricked via rm -rf /, due to it removing everything at /sys/firmware/efi/efivars/, which is actually stored inside the motherboard and sometimes contains things that the motherboard won't boot without.
> There's physically no way to permanently "brick" an iPhone.
There definitely are (if you count jailbroken iPhones). I've managed to brick one by removing all thermal throttling limits and subsequently damaging the motherboard with the world's shittiest watercooling setup.
Can't use DFU to restore if you've got damaged hardware
"Bricking" is typically used to describe software failures specifically. Physical damage isn't "bricking" in the traditional sense, it's just broken.
Besides, physical damage is the obvious exception to GP's statement. Nobody is reading this and thinking that a ROM DFU will recover a phone that is physically destroyed.
No amount of monkeying around with the software will brick the iPhone (botched updates, etc.). Once we get to hardware failures, the vectors are innumerable. Lightning strikes. Device drops from height. Botched water cooling. Magnifying glass lasers. Children's vomit...
"physically" is overstating it. Certainly during development it's possible, which means that -- conditioned on a bad enough bug -- it could hypothetically happen to customers too. Not that I think that's likely, of course, but it is physically possible.
I'm gonna walk through this because I have a bit of experience here on the computer side of things, but I'm not really making an excuse for the fact that the PC version of this is less user-friendly; from my perspective, I fully respect that Apple has done a good job with user experience where PC manufacturers have lagged. However, my main concern is devices turning to e-waste, and the important thing for that isn't UX; it's how plausible it is to recover once you've bricked a device. With that out of the way...
> This is in contrast to some PCs, where if you damage the BIOS (e.g. by suddenly losing power during a firmware update), your device may or may not be bricked.
I accidentally destroyed the firmware on a machine that did not have any recovery features, when flashing modified UEFI images, leaving it mostly inoperable. I wound up recovering it using flashrom and a Raspberry Pi. I think this counts as a hard brick, but thanks to the modular nature of PCs (e.g. most BIOS chips are socketed, so you can pull them out easily) it's not nearly as big of an issue as hard-bricking a device that's more integrated and locked down. It's not instant e-waste because no bricks are permanent.
(It's a little harder for laptops, but I did also flashrom a laptop in a similar fashion, in-circuit using a SOIC8 clamp. This was not a brick recovery but rather messing with coreboot.)
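For anyone curious what the external-programmer route actually looks like, here's a rough sketch of the kind of invocation involved; the SPI device path, spispeed, and filenames are placeholders, not the exact values from my recovery:

```python
# Rough sketch, assuming a Raspberry Pi with SPI enabled, flashrom installed,
# and the flash chip wired (or clipped) to the Pi's SPI header.
import subprocess

PROGRAMMER = "linux_spi:dev=/dev/spidev0.0,spispeed=1000"  # placeholder values

def flashrom(*args: str) -> None:
    subprocess.run(["flashrom", "-p", PROGRAMMER, *args], check=True)

# Pull a copy of whatever is currently on the chip first (even if it's corrupt),
# then write a known-good image back.
flashrom("-r", "corrupt_backup.rom")
flashrom("-w", "known_good.rom")
```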
Definitely not as much for the faint of heart, but a repair technician could do it for you. Alternatively, for PCs with socketed BIOS, you can buy a new EEPROM that's already flashed with the right firmware--they were readily available on eBay last I looked.
That was probably a decade ago or more by now. Many modern PC motherboards from many vendors have mitigations for this; it was a common enough pain point, after all. For example, my desktop PC has an embedded controller that can boot and rewrite the flash chip for you, using a copy of the BIOS from a USB stick. (Works even if the CPU isn't installed. Pretty cool.)
> There have even been stories of people's computers being bricked via rm -rf /, due to it removing everything at /sys/firmware/efi/efivars/, which is actually stored inside the motherboard and sometimes contains things that the motherboard won't boot without
EFI vars are stored in NVRAM, not the EEPROM. You can usually clear that a couple of ways:
- Use an integrated NVRAM reset system. Some machines have a way to do this listed in their manual. On desktop PC motherboards, it tends to be a jumper you set for a few seconds. Sometimes you will have an easier option, like a button somewhere, or even possibly a key combination at boot (Long time Macintosh fans probably have memorized the NVRAM reset key chord for Apple computers... I wonder if it still works on Apple Silicon.)
- Remove the CMOS battery for a few seconds. Usually easily accessible on a desktop. Usually a little less easy to get to on a laptop, but nothing absurd, usually just some screws.
Certainly it could be easier to recover from, but I'd say it's actually not very easy to brick a typical desktop PC in a particularly permanent fashion. The only time I've ever done it was because I was modifying my UEFI image intentionally. Screwing up EFI vars doesn't make most systems unbootable. I have corrupted my EFI vars quite a few times trying to do funny things. UEFI implementations do tend to be buggy, but not all of them are that catastrophically bad.
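If you're curious what's actually at stake with those EFI vars, here's a minimal sketch that just lists what efivarfs exposes, assuming the usual mount point; IIRC modern kernels mark most of these entries immutable by default, which is the mitigation that grew out of those rm -rf bricking stories:

```python
# Minimal sketch: list the UEFI variables that efivarfs exposes on Linux.
# Each entry is 4 attribute bytes followed by the variable's data.
import os

EFIVARS = "/sys/firmware/efi/efivars"  # assumed mount point

for name in sorted(os.listdir(EFIVARS)):
    size = os.path.getsize(os.path.join(EFIVARS, name))
    print(f"{name}  ({size} bytes)")
```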
--
Now... as for whether or not an Apple Silicon device can "physically" be bricked by software, the most obvious way to do that would be to wear the SSD down to the point where it can no longer be rewritten. I think the M4 Mac Mini finally no longer solders the SSD, and those Mac Minis do have a way to recover from this kind of brick (using another Mac to restore onto a new SSD), but there are many Macs where, if the SSD is destroyed, it's pretty hard to fix, since pairing a new SSD requires Apple tools that are hard to obtain. This is unfortunate because Apple has often had dodgy hardware choices around the SSD (e.g. the notorious TPS62180 buck converter) and doesn't always use SSDs that have the best reliability (IIRC they use a lot of Kioxia in the newer Apple Silicon devices, which are not considered bad by any means, but are generally considered less durable than e.g. Samsung SSDs).
Rather than becoming e-waste due to software issues, in recent years an Apple device is much more likely to become e-waste due to hardware issues, as a result of parts pairing and failure-prone components that are not modular even when they really can and should be. (Good on them for rectifying this lately, e.g. with the Mac Mini SSD, but it's a bit sad that it took this long. And on the note of that SSD... Apple, you really could've used a standard electrical interface for that.)
This is somewhat a testament to Apple's software and system design, but it's simultaneously a condemnation of their recent track record with repair, too. The best we can hope is that they don't go backwards from this point forward, because they've created a lot of devices that will become e-waste over time for almost no gain for anyone. (I strongly dislike but can understand the justification for something like parts pairing in iPhones and iPads, but much less so for similar mechanisms in computers.)
> Screwing up EFI vars doesn't make most systems unbootable. I have corrupted my EFI vars quite a few times trying to do funny things. UEFI implementations do tend to be buggy, but not all of them are that catastrophically bad.
For what it's worth, I have a laptop here that can be irrevocably (short of having a flash memory dump on-hand that can be flashed back) bricked just by messing around with EFI variables through fully intentional operations (i.e. operations that would be available to any program with Administrator privileges on Windows, or the root user on Linux).
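To give a flavor of what I mean by "fully intentional operations": on Linux, any root process can create or rewrite UEFI variables straight through efivarfs. The sketch below only creates a harmless variable with a made-up name and GUID; I'm deliberately not posting the specific vendor variables that kill this particular laptop.

```python
# Sketch only: create a new, harmless UEFI variable via efivarfs as root.
# The name and GUID are invented for the example; real-world bricks come from
# clobbering (or filling NVRAM with) vendor-specific variables, not from this.
import struct

EFIVARS = "/sys/firmware/efi/efivars"

# EFI_VARIABLE_NON_VOLATILE | BOOTSERVICE_ACCESS | RUNTIME_ACCESS
ATTRS = 0x1 | 0x2 | 0x4

name = "HNExample-7f3c2a10-9b4e-4d2b-8c1a-5e6f70812345"  # "<Name>-<VendorGuid>"
payload = struct.pack("<I", ATTRS) + b"\x01"             # 4 attribute bytes, then data

with open(f"{EFIVARS}/{name}", "wb") as f:               # needs root, single write
    f.write(payload)
```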
As far as I know, virtually all of the EFI vars will be stored in battery-backed NVRAM, so the usual solution is to just clear that by removing the CMOS battery. I am pretty sure the only exceptions are things you definitely cannot read or write from the host OS (e.g. BIOS passwords). It does require partially disassembling the laptop though, and I know there's at least a couple of random models of laptop that actually stop working if you clear the NVRAM (lol)
> Rather than becoming e-waste due to software issues, in recent years an Apple device is much more likely to become e-waste due to hardware issues
Apple not supporting their hardware after a short time is a software issue causing e-waste. I have a big box full of non-viable Apple hardware that works perfectly well; Apple just decided to stop supporting all those devices - a bunch of tablets, a couple of Apple TVs, an old Apple Watch, several laptops, etc.
Sure, other manufacturers do this too, but none as badly as Apple does IMHO.
If you had root access on your iPhone to do that, you would see a recovery message on your phone at the next reboot. Then, plugging it into your MacBook, you would get a popup asking if you would like to restore it. Bricks don't usually have software restore prompts.
From observation, "brick" has evolved, as things do in language. In practice, it rarely means the traditional definition you refer to, but the softer one used here.
And for that reason I wouldn't hassle laymen over it but among the HN crowd I expect a bit more care. An "anything goes" attitude makes communication more difficult.
"Soft brick" is the correct term that already exists.
Also, although HN readers probably have many devices in their homes, there are people out there who have only a phone and no computer. For them this would be pretty catastrophic. Hopefully they’d take their device to Apple or a third-party technician.
I appreciate the sentiment but I don't see that catching on. I think a variant of bricked makes sense as it basically means you can't use the device until you can figure out how to fix it. Which the "muddied" analogy doesn't really fit - it's usually possible to use muddy things if not necessarily pleasant.
"fix it" was the definition of the the old usage of brick though, with the "fix it" generally meant a hardware fix, like replaced components. if the fix is reinstalling software, then that means it's still a completely functional piece of hardware. there's nothing wrong with it. you don't say your car is broken because someone who can't drive sits in the driver seat! get off my lawn!
I remember this being referred to as "the OS needs to be reinstalled", a trivial thing that nobody bothered to give a name to, because it was frequent and inconsequential.
“Bricking” isn’t a rigorously defined term, it’s more like “realtime” in the sense that it comes with an implicit “(for this particular user in this particular scenario)”. For most users a device is bricked if it doesn’t turn on and work when you press the power button. For most readers here, using dev tools to re-flash a bootloader would be fairly easy but if USB stops working it might be game over. I’m sure there are a few around who could de-cap an ASIC and circuit bend it back to life.
Incorrect. Bricking means a device becomes a doorstop that cannot be resurrected or repaired by the user non-invasively. That's the whole point of the term.
I beg to differ. "Boat anchor" not only has the connotation of uselessness, but is also well-documented, such as in the official Jargon File by Eric S. Raymond:
> 1. Like doorstop but more severe; implies that the offending hardware is irreversibly dead or useless. “That was a working motherboard once. One lightning strike later, instant boat anchor!”
I contend that brick is a neologism based on this boat-anchor analogy. A brick is rather small, handheld, portable. No computer component was this way when the "boat-anchor" term was coined.
Indeed, many of my colleagues in the 90s based their trust and confidence in hardware on its volume and mass. If we could lift it, or throw it across the room, it was not worthy of respect. Those were the days of magnificent racks loaded with equipment that did comparatively very little!
I think you skimmed too fast. Widgets, allegedly, are eagerly evaluated when an app is opened, and the system keeps retrying failed attempts:
> After restarting, as soon as SpringBoard was initialized, the extension would be woken up by the system, since it had failed to produce any widget entries before, which would then start the process all over again.
The article claims that they had to manually cause the widget to crash so that the result doesn't get cached, right? So that's at least one line to trigger the reboot, one line (likely more) to register as a widget, and one to crash it.
Correct. The terminology is wrong. It's an annoying, repeated DoS that doesn't ruin the device permanently but could lose user data if it must be erased.
It would never have worked. So many of the things that owning xnu has made possible would never have happened on top of Linux. The things you can do when you know each and every customer of the stack, and you all belong to the same business with common objectives and leadership direction, just can’t be done in the open-source context.
For a long time we held the line on not having file operations originate from within the kernel (layering reasons). Eventually less sane heads prevailed.
One does not "hone in" on anything. To hone a thing is to make it sharper or more acute by removing parts of it with an abrasive. The word you are looking for is "home", as in a homing missile, etc.
Yes, this is a criticism. Hopefully it's twice as effective as being nice. 8)
Multiple dictionaries recognize the usage of "hone in" to mean "sharpening" your focus on something rather than "home in" which is to move towards something.
I went down a slight rabbit hole for this: apparently both are correct, although "hone in" doesn't seem to have a firm origin and has become institutionalized in our lexicon over time.
By the way, I don't mind the nit at all! English is not my first language and I slip up occasionally, so refreshers are welcome :-)
You knew what they meant, which is clear if you’re able to correct the use of language accurately. This isn’t a criticism per se, but an acknowledgment that language evolves and part of the way it does that is acceptance that “incorrect” usage, once common enough, is seldom reversed.
This makes some very naïve assumptions about the relationships between entities in a program; in particular, that you can make arbitrary assertions about the representation of already-allocated data structures across multiple versions of a component, that the program's compositional structure morphs in understandable ways, and that you can pause a program in a state where a component can actually be replaced.
By the time you have addressed these, you'll find yourself building a microkernel system with a collection of independent servers and well-defined interaction protocols. Which isn't necessarily a terrible way to assemble something, but it's not quite where you're trying to go...
Worth reading his son @snakesofself's comment below that video for additional context.
As a fan of the show, and what it did to advance the art of visual storytelling, learning more (and understanding less!) about the artists just makes the whole thing more interesting and more human.