He's a hacker's hacker.
His ebook has been made freely available in memory of Aaron Swartz.
- "I don't care, this is just my job. And I was told to do it by management." [what can I say? This sums up a lot of grunt coders I know]
- "What are the chances that anyone will find this?" [lack of appreciation for how smart and dedicated attackers can be]
- "So what if they do? It's not like it's useful." [lack of proper analysis]
- "How else are we going to run tests?" [poor design / fear]
- "Huh?" [absolutely oblivious about security]
I've worked on projects where we made the very conscious choice to leave doors like this open, but I doubt that most firmware shops are that intentional about it.
There seems to be a threshold under which a device ought to just do what you expect: what the manufacturer decreed it would do and no more, even if you own it and it could do more. I propose that this level varies widely across individuals.
† Granted, this capability is undocumented. But if it were documented on that scrap of paper that fell out of the packing materials, in 4pt type, using gray ink on not terribly white paper, would it be that different?
But on the other hand, I think we should be enjoying the relative freedom of today (and trying to preserve it for the future); it seems too many are trying to spin "security" as something beneficial, when what they are really saying is "we're making things secure against you and taking away your freedom so we can control what you do; it's also effective at securing against attackers, which is all we're going to promote". If this line of thinking continues we may see devices in the future that are even more locked-down and user-hostile.
(FYI I've worked with embedded systems for quite a while and also knew SD cards had firmware in them that could be modified, but never really investigated it - just put it in the back of my mind as one of those "I'm curious enough that if I had the time I'd have a go at it" things - along with several dozen others.)
It's a flaw if people are not aware of it. Most people see things like SD cards and USB sticks as "dumb" storage devices with no real ability to run software, and are totally unaware of the risks they can cause.
It can be a feature if people are aware of it.
People (well, most reasonably aware people) understand that a cellphone is basically a small computer, and it has a processor and memory and storage, and executes code, etc. And it's nice to be able to update/modify that code to change how the device operates, and somewhat obnoxious when you can't. (To a limit: it's also obnoxious and dangerous when someone else can change that code without letting you know.)
In the case of SD cards, many people assume that they are "dumb" devices. They don't realize that they have a processor (microcontroller) which executes code, and that it's not functionally equivalent to a floppy / Zip disk / CD / pick-your-favorite-dumb-storage-metaphor.
I don't think this is the last time we're going to run into this issue ... an increasing number of devices have embedded, potentially-reprogrammable microcontrollers (laptop batteries, power supply bricks, headphones, to name just a few) that could be used as attack vectors, or as platforms for cool hacks.
The solution, IMO, is not to just further obfuscate the programming method, but to make the code easier to inspect/validate and maybe even reflash, so that users can ensure that the devices are running what they think they're running.
Before Microsoft's big security push (whatever else you want to say about it, they made a huge effort) most of the above attitudes existed. Now you can't turn around without going through a security review... some more effective than others, but at least they're trying.
Armoring a system that will accept only signed updates isn't that hard (just check signatures and refuse updates that fail). This is different from armoring a system against hardware-level attacks, which bunnie and the NSA and a LOT of other people are good at.
Armoring a system against intentional holes is not an engineering problem, it's a people / attitude problem.
Armoring a system against bugs (buffer overruns, etc.) requires that you solve the people / attitude problem first, and then do meaningful security engineering. This might be really easy for a flash drive, which should have a really small attack surface.
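The "just check signatures and refuse updates that fail" flow can be sketched in a few lines. A minimal sketch in Python, using an HMAC as a stand-in for a real asymmetric signature (a real scheme would use RSA or Ed25519 so the device holds no signing-capable secret); DEVICE_KEY and the function names are assumptions for illustration:

```python
import hashlib
import hmac

# Hypothetical device-side check: accept a firmware image only if its
# MAC verifies. The control flow is the point: verify first, then flash.
DEVICE_KEY = b"burned-in-at-manufacture"  # illustrative key material

def verify_update(image: bytes, tag: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    # constant-time compare, so the check doesn't leak the tag byte-by-byte
    return hmac.compare_digest(expected, tag)

def apply_update(image: bytes, tag: bytes) -> str:
    if not verify_update(image, tag):
        return "rejected"       # refuse updates that fail the check
    # on a real device, flash_write(image) would go here
    return "flashed"
```

The hard part in practice isn't this check; it's key management and making sure no unauthenticated path around it exists.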
Is that true on extremely resource-constrained devices, e.g. the SD cards being discussed?
 HN Discussion at https://news.ycombinator.com/item?id=6195627
http://en.wikipedia.org/wiki/Commodore_1541 indicates that my memory is correct, but http://en.wikipedia.org/wiki/Commodore_64 says that the chip on the C64 itself is a bit different, though.
They're pin-compatible enough that in some devices you can swap them around and get things to work (e.g. putting a 6510 in a 1541 has a decent chance of working unless the GPIO pin register clobbers something important in the 1541 memory map; with the reverse you'll at least have problems, though a 6502 in a C64 might work if you only run things like cartridges and/or put the right voltage on the right pins to map the ROMs into place).
Also, the Amiga 500 keyboard had a 6502 compatible CPU with built in PROM and RAM as well (MOS 6570).
I assume it would be possible to, for instance, make every "delete" operation a secure delete operation...wherein data gets overwritten a specified number of times. Shortening the useful life of the device, sure, but if security matters, that's a small price to pay.
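A host-side sketch of that multi-pass overwrite, with the caveat that on real flash, wear leveling means logical overwrites may land on different physical cells; that's exactly why doing this in the controller firmware is attractive. Everything here is illustrative:

```python
import os

def secure_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents `passes` times, then unlink it.

    Caveat: on wear-leveled flash only the controller can guarantee
    which physical cells are overwritten; this shows the idea only.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # random fill each pass
            f.flush()
            os.fsync(f.fileno())        # push this pass to the device
    os.remove(path)
```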
Going further, what about a handler that serves out one set of data about what's on the device to any random person that plugs it in (like empty or with a few harmless photos or something), and another set of info to someone that has a key? Sure, for a high capability attacker, they might even know about this kind of firmware magic and know how to circumvent it, but it would make it very unlikely that some random person picking up your device would find anything that you want to keep secret.
Obviously, if your data is encrypted on the host system before writing to the card, that's reasonably safe...but for people in really dangerous situations, where torturing someone to obtain their key is not out of the question, making it seem like there's no data to obtain a key for is the best of all possible solutions.
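The decoy idea could be modeled roughly like this: reads return harmless blocks until the host presents an unlock token. A toy Python sketch; DeniableCard, the token, and the block maps are all hypothetical, and a real card would key this off something like a vendor command:

```python
# Toy model of a "deniable" controller: a random person plugging the
# card in sees only decoy data; the hidden store appears after unlock.
DECOY = {0: b"IMG_0001.JPG", 1: b"IMG_0002.JPG"}
HIDDEN = {0: b"real data 0", 1: b"real data 1"}

class DeniableCard:
    def __init__(self, unlock_token: bytes):
        self._token = unlock_token
        self._unlocked = False

    def unlock(self, token: bytes) -> bool:
        # toy comparison; real firmware would want constant-time checks
        self._unlocked = token == self._token
        return self._unlocked

    def read_block(self, n: int) -> bytes:
        store = HIDDEN if self._unlocked else DECOY
        return store.get(n, b"\x00" * 12)
```

As the comment notes, this only raises the bar against casual inspection; a capable attacker who images the raw flash can see both stores.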
Just today I was reading a similar article, but involving HDDs instead of microSD cards (and even with a PoC): http://spritesmods.com/?art=hddhack
There might very well be more micro-controllers in them that I don't know about. And these are quite run-of-the-mill rack mountable servers...
It’s as of yet unclear how many other manufacturers leave their firmware updating sequences unsecured. Appotech is a relatively minor player in the SD controller world; there’s a handful of companies that you’ve probably never heard of that produce SD controllers, including Alcor Micro, Skymedi, Phison, SMI, and of course Sandisk and
They managed to read out the embedded raw flash on one device, and when they searched for the vendor/device, the third link that popped up on Baidu brought them directly to a download for the Windows-based firmware-update tool (in Chinese, of course)... so much for a head start in analyzing the firmware :-).
Also, more fun would be "cryptolocker"-style disk-based malware. The capability exists elsewhere today, as mentioned in the article, and CryptoLocker has taken in $15 million USD and counting.
Also also: is there any HIDS yet for checksumming various chipset/peripheral firmwares?
TCG (TPM, TXT, ...) measured boot sort of includes firmware checksumming, but it's often turned off.
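The checksumming idea behind such a HIDS is simple at heart: hash each firmware blob and compare against a known-good manifest (measured boot does something similar by extending hashes into TPM PCRs). A minimal host-side sketch; the manifest contents and names are made up:

```python
import hashlib

# Hypothetical known-good manifest: component name -> sha256 hex digest.
# A real HIDS would load this from a signed, out-of-band source.
KNOWN_GOOD = {
    "sd_controller": hashlib.sha256(b"factory firmware v1").hexdigest(),
}

def check_firmware(name: str, blob: bytes) -> bool:
    """Return True only if the blob's hash matches the manifest entry."""
    digest = hashlib.sha256(blob).hexdigest()
    return KNOWN_GOOD.get(name) == digest
```

The practical catch, as the thread implies, is getting a trustworthy readout: firmware that can lie about its own contents defeats any host-side hash check, which is why hardware-rooted measurement exists.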
Not having finished the article yet, one of my initial thoughts: I guess my intuition was right. It's not time to throw away those optical disks (and drives) yet.
USB isn't to be trusted, either.
I hope he chooses the latter option.
But that's just me. Maybe I am the only one. If so, pay no mind.
To set the bar any higher reeks to me of elitism (i.e. creating an "exclusive" club of "hackers"), which doesn't seem like his intent.
 http://www.swissbit.com/images/stories/pdfs/S-300u_data_shee... (page 8).
If someone hands you an SSD in an external enclosure do you automatically suspect it too? A similar hack is known to work there, witness the number of SSDs that needed a firmware upgrade after their field release.
I do applaud the finding of how to do it and the proof that it really does work. It is nice work in that regard, and I have a few SD cards whose firmware I'd be happy to hack, for fun if nothing else (damn fake SDs; if they just advertised their real capacity they could at least be useful).
What a lot of us thought, however, is that the uC would be like those found in other single-purpose devices with similar interfaces (e.g. temp/humidity sensors): code living in a ROM whose mask is set at production.
Secondly, flash is a highly competitive product with narrow margins. Check out some other posts on his blog to get an idea, esp. the ones about the ghost runs.
It's only after you read up on the complexities of bad-cell management in flash that you get a sense of this problem, and that it involves complex on-device logic. In the end, the uCs become so high-spec that a firmware update feature is a no-brainer. Compare it to cell phones, which increase in complexity until one day they're capable of running Linux, at which point a floodgate of possibilities opens up.
(This is a streamdump, so don't expect seeking to work, and it might cause issues for your player)
http://30c3.ex23.de/fahrplan_d3.html (search for bunnie)
You can theoretically use something like  or  to connect the SPI bus to an ethernet controller.