If Apple subverts their updates, that's mostly interesting as a signal of their trustworthiness moving forwards. The coolest thing about this is that we know it's happening at all, I think.
Wait, what? That is not the recourse that open source provides.
The great thing about open source is that you don't need every person to read the code; you just need one person who can either catch user-abusive material or verify its absence.
Moreover, even if zero people read the code today, it is preserved so that state (or corporate) abuse can be revealed later, providing another disincentive to introduce abusive material.
In a word: no. With open source code, you could use software authored by the NSA, like SELinux, or you could even hire a manifestly untrustworthy party like Hacking Team to author some code and still be able to trust the code.
In Apple's case, there is a fairly good reason to trust Apple, because it would be a hell of a kabuki theatre production to have the FBI and Apple battle in a Supreme Court case while colluding in secret. But would you trust a defense contractor? A telco? Limit, or ideally eliminate, the need for trust. Fortunately, it is possible to reduce the need for trust to something less than having to trust groups or individuals.
In theory, we can perform the same analysis on the compiled program's bytecode. As the decompilation ecosystem gets better, we may come to read machine code or bytecode as transparently as source code.
Of course, your Apple EULA may bind you against decompiling the machine code -- but it can be argued that you're not 'reverse engineering', you're just doing a virus scan.
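To make the bytecode point concrete with a toy case: Python's own standard library already gives this kind of transparency for Python bytecode, no source required. (A minimal sketch; the function is just a stand-in.)

```python
# Python's standard-library disassembler: even without the source,
# a compiled code object can be read instruction by instruction.
import dis

def greet(name):          # stand-in for code we might not have source for
    return "hello " + name

dis.dis(greet)  # prints the LOAD/arithmetic/RETURN instruction stream
```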
The reproducible-builds projects go a long way towards preventing this by producing identical bytes from different compilation chains. Ultimately it's good to have a combination of static analysis, multiple toolchains, and 'many eyes' providing checks and balances for each other.
The bitcoin community, for example, uses Gitian to reproducibly build bitcoind. Both Bitcoin Core and Bitcoin Classic host repositories with signed hashes for the output of those builds.
(As I understand it, several altcoins do the same as well.)
Anybody can follow the published guides for how to perform such a build and compare their results with the published ones. Because the published hashes are signed, you have a reasonable degree of certainty that a variety of people are involved in the process, which in turn gives you greater confidence in the quality of the binary releases even if you don't want to compile them yourself. (And if you do compile it, you're free to add to the consensus that the binary build is good by PRing your own results.)
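For a rough idea of what that comparison step looks like, here is a sketch; the file names are hypothetical stand-ins, not the actual Gitian artifact names:

```python
# Sketch: verify the signature on a published build-assert file, then
# check that the hash of your own build appears among the signed hashes.
import hashlib
import subprocess

subprocess.run(
    ["gpg", "--verify", "bitcoin-build.assert.sig", "bitcoin-build.assert"],
    check=True,  # raises CalledProcessError if the signature is bad
)

my_hash = hashlib.sha256(open("bitcoind", "rb").read()).hexdigest()
published = open("bitcoin-build.assert").read()
print("reproduced OK" if my_hash in published else "MISMATCH -- investigate")
```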
Or, to put it differently: where do you get the certificate to check your build from? At extreme paranoia levels, you simply can never be sure you have the same software as everybody else, so the only safe alternative is reviewing your copy yourself.
(How do you know the computer is showing you the correct contents of your files? I haven't thought that through well enough yet.)
Here is some advice from Schneier on running secure software against a state-level adversary. However, even that is not immune from a black bag job.
Obviously, the mainstream way is hash-based file verification.
Which, again, not everybody needs to do -- only a small number -- in order to catch a bad actor in the act.
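In its simplest form it looks something like this (file name and digest here are made up):

```python
# Minimal hash-based file verification: compare a download against the
# digest the project publishes.
import hashlib

published = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
digest = hashlib.sha256(open("some-release.tar.gz", "rb").read()).hexdigest()
assert digest == published, "hash mismatch -- do not run this file"
```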
But I presume you are trying to make some bigger point. What is that?
Do you trust the developers? Okay.
Do you trust the developers, their infrastructure, AND the supply chain? Maybe a bitter pill to swallow.
Recommended reading: https://defuse.ca/triangle-of-secure-code-delivery.htm
The reason people mostly don't bother is that they can't also trust the hardware (in fact, our software is often more trustworthy than the hardware). Thus, the point is moot.
I'd say that is not sufficient because even in this case you trust someone: the manufacturer of the CPU on which the code would run.
It might surprise some people, but you can examine a piece of software to check whether it has a backdoor even if it is closed-source, by reading the disassembly. Sure, it requires some skill and is a bit time-consuming, but it's doable for an ordinary individual. Reverse engineering software is not as difficult as many think. As a matter of fact, a large number of people are reading the disassembly of widely-used software to find vulnerabilities to sell on black markets. So I think it's unlikely that Windows or iOS has maliciously planted backdoors.
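For a flavor of what reading disassembly looks like, here is a minimal sketch using the Capstone disassembly library; the byte string is a made-up toy function, not real product code:

```python
# Disassemble a blob of x86-64 machine code with Capstone
# (pip install capstone).
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

code = b"\x55\x48\x89\xe5\x89\x7d\xfc\x8b\x45\xfc\x5d\xc3"  # tiny toy function
md = Cs(CS_ARCH_X86, CS_MODE_64)

for insn in md.disasm(code, 0x1000):
    print(f"0x{insn.address:x}\t{insn.mnemonic}\t{insn.op_str}")
```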
On the other hand, it's tremendously difficult for an individual without a large budget to reverse engineer hardware, especially CPUs. So if I were them, I'd choose the CPU as the place to put a backdoor, because virtually nobody reverse engineers a modern CPU, and thus it would be very unlikely to be found.
By the way, contemporary CPUs can update themselves through microcode updates.
We may want to reach a point where we trust the things we use, but if we're using a security-grade definition of trust and we're honest with ourselves, I think every one of us would admit that we're using something(s) that we do not trust. There just isn't enough time to properly review, test, and verify everything.
I don't trust Apple's iOS software (unlike their OS X software), because I am not in a position to choose whether I trust them or not. The device decides.
If 3rd parties can audit software (including analysis of binary-only software), and can observe the software's behavior, and can watch the software's network traffic, then the chance of being caught violating user trust will generally be high enough to make the liability of being caught a genuine concern.
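The traffic-watching leg of that audit is cheap to set up. A toy sketch with scapy (requires root; the device address is hypothetical):

```python
# Log every new outbound TCP connection a device under audit opens.
# Requires scapy (pip install scapy) and root privileges.
from scapy.all import IP, TCP, sniff

DEVICE = "192.168.1.50"  # hypothetical LAN address of the device

def log_connection(pkt):
    if pkt[IP].src == DEVICE and pkt[TCP].flags == "S":  # new outbound SYN
        print(f"{DEVICE} -> {pkt[IP].dst}:{pkt[TCP].dport}")

sniff(filter=f"tcp and host {DEVICE}", prn=log_connection, store=False)
```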
However, if updates are automatic, encrypted, and platform DRM prevents 3rd-party audit/analysis, the chance of being caught starts to dwindle down towards zero. That entire trust ecosystem disappears, and what we're left with is absolute trust in a corruptible third party, and no mechanism with which to verify.
The latter is exactly what Apple has built. They have a backdoor: the means to push absolutely trusted software while preventing all third-party audit and analysis.
Unfortunately that doesn't really work if you have to trust your software _not_ to do tasks that you don't want done, like sending your personal stuff to a third party.
They of course can't use compilers either, and have to write the machine code by hand. And if you consider the CPU's microcode, which can be updated, to be code as well, what are they to do then?
So I agree: you need trust, and the fewer parties you need to trust, the less chance of getting bitten.
Personally, I prefer stability over "new features", turn off automatic updates, and read changelists carefully. Anything which doesn't have a good description and rationale of why it needs to be changed, and how that is relevant to my usage of the software, doesn't get changed. (And software which doesn't give me the option of doing so, doesn't get used either.)
Keep in mind that the environment for any PC is now interconnected; as such, a mechanism to deliver fixes quickly is actually more a solution than a problem.
For reference, take an unpatched XP machine, connect it to the internet, search for "how to get thin while eating", and your system will automatically install an "antivirus" for you ;)
>"I prefer stability over "new features", turn off automatic updates, and read changelists carefully. Anything which doesn't have a good description and rationale of why it needs to be changed, and how that is relevant to my usage of the software, doesn't get changed."
For that, you need to become an expert on anything that gets patched or updated.
Second, it may be more about how someone else would use your system than how you would... as in an exploit.
It's not a solution, it's an excuse for lazy testing.
Same as how better hardware supported sloppiness in engineering, turning a 15 MB CD/DVD burning program into a 500 MB+ software package (the software starts with an "N").
While there does exist a pressure to push a product before enough testing is done, to say that everything that's wrong is all down to one "simple" fact is just trying to reduce a complex system into something simpler to understand, and then patting yourself on the back as having "solved" it.
How do you test absolutely everything? In all the environments? With every possible user?
On the other hand, it can also be used as you mention; here I think of the software on calculators like the 12C... which could well have a bug... that you can't fix.
> It seems like we've regressed in terms of quality and the general principle of doing it right the first time.
Meanwhile, nothing changes.
The only successful agile method I've personally seen work is XP, and it requires enormous and near-universal discipline. In that respect it is no different from pre-Agile methods that worked.
Agile isn't about jettisoning good software practices. It's about making them smaller. And the signers of the Agile Manifesto weren't the first to think of it. All the ideas were in the literature, in the open, for decades before they coalesced. Even the concept of taking necessary activities and shrinking them is old -- Watts Humphrey beat everyone to it with the Personal Software Process, which is the CMMI shrunk down to the scale of a single engineer.
Disclaimer: I work for one of the most notoriously pro-agile shops of all -- Pivotal Labs. Before I came here I was skeptical. Now I'm notoriously pro-agile, with a generous dash of No True Agile Methodology.
In fact, CMMI mentions the importance of measuring, iterating, and fixing as soon as you can... which is the response to the premise that you can't "just do it right".
> Agile ideologues -> just doesn't lead to working software
It's a flawed idea; if you look at the core Agile values, all of them are focused on actually trying to reduce errors and improve quality.
On the other hand, it's a very broad generalization, and it leaves open the question: what does actually lead to working software?
I don't take much issue with the underlying ideas of the Agile manifesto - it's the ritualisation and thoughtlessness that it can lead to that irks me. The "anti Agile manifesto", many though its faults are, rings true in a lot of places, to my mind: "backlogs are really just to do lists" seems pretty much inarguable, for one.
> There is no way of guaranteeing doing it right the first time.
I'm old enough that I actually do remember those days. Back then we had "works best with Internet Explorer" and MS Office viruses. I don't want to go back to that world, rolling updates make the world better for everyone.
Those really were the days...
We didn't have to upgrade software all the time only because most computers weren't directly connected to the Internet and because the Internet was a much smaller, safer place.
As the security threats have increased, software has gotten better. Mobile OSes are better sandboxed than PCs or workstations ever were. A Chromebook is pretty secure too. But the threats are getting worse.
Perhaps someday we will get to the point where there is some critical software that's proven correct and no longer needs to change. But I think it will require open source hardware that doesn't change much either, and software that's written in newer, safer languages that make it easier to prove correctness.
That's not where we are today. People buy a new phone every 2-3 years, and they just came out with a new USB standard. When we get to the point where new phones are built using essentially the same components as old ones and the interconnects never change, perhaps things will slow down a bit and longevity might become a thing.
As to a key being a single point of failure, PGP allows for multiply signed documents. Couldn't Debian require packages to be signed by at least two keys?
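Mechanically this seems feasible: gpg can verify a detached signature file containing several signatures, and a policy script could count the distinct valid signers. A sketch, with hypothetical file names:

```python
# Count distinct keys that produced a valid signature over a file by
# parsing gpg's machine-readable status output, then enforce a
# two-signer policy.
import subprocess

def count_valid_signers(sig_path, data_path):
    result = subprocess.run(
        ["gpg", "--status-fd", "1", "--verify", sig_path, data_path],
        capture_output=True, text=True,
    )
    fingerprints = {
        line.split()[2]                      # fingerprint field of VALIDSIG
        for line in result.stdout.splitlines()
        if line.startswith("[GNUPG:] VALIDSIG")
    }
    return len(fingerprints)

if count_valid_signers("package.deb.sig", "package.deb") < 2:
    raise SystemExit("policy: need at least two valid signatures")
```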
Yes. The case that Apple is making is, in part, that the FBI is forcing them to use a back door by putting Apple's stamp of approval on it with their signing key. Whether or not you agree that Apple must create the back door, they are still being asked to approve its use. Apple says this is compelled speech and violates the First Amendment. Their brief has more details about it. The Tech Dirt summary has the most details. Here's one notable passage:
“The government asks this Court to command Apple to write software that will neutralize safety features that Apple has built into the iPhone in response to consumer privacy concerns... This amounts to compelled speech and viewpoint discrimination in violation of the First Amendment.”
> I think many people might argue that industry-standard systems for ensuring software update authenticity do not qualify as backdoors, perhaps because their existence is not secret or hidden in any way.
It's relative. That used to be somewhat hidden. Now it's very out in the open.
> Having access to a "secure golden key" could be quite dangerous if sufficiently motivated people decide that they want access to it.
Yeah. So let's not compromise Apple's existing security procedures by forcing that out in the open.
> I expect that in the not-too-distant future, for many applications at least, attackers wishing to perform targeted malicious updates will be unable to do so without compromising a multitude of keys held by many people in many different legal jurisdictions.
I hope this day comes soon. For now, let's continue fighting for our right to privacy.
As in write error-free software?
On the other hand, it also lacks a reference or numbers on how many times this has been exploited.
Why doesn't Apple create a mechanism which "only allows updates to be applied after the correct PIN is entered"?
Then an update created by the FBI, which would disable security mechanisms, could not be installed (without knowing the correct PIN).
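Purely as a sketch of the idea -- none of this is Apple's actual update path, and the PIN handling is deliberately simplified:

```python
# Conceptual gate: an update installs only if the vendor signature is
# valid AND the owner's PIN checks out. A real design would do the PIN
# check inside a secure element, not a plain hash like this.
import hashlib
import hmac

STORED_PIN_HASH = hashlib.sha256(b"salt" + b"123456").hexdigest()  # set by owner

def pin_ok(entered: str) -> bool:
    candidate = hashlib.sha256(b"salt" + entered.encode()).hexdigest()
    return hmac.compare_digest(candidate, STORED_PIN_HASH)

def apply_update(blob: bytes, vendor_sig_ok: bool, entered_pin: str) -> None:
    if not vendor_sig_ok:
        raise PermissionError("bad vendor signature")
    if not pin_ok(entered_pin):
        raise PermissionError("owner PIN required to install updates")
    print(f"installing {len(blob)} bytes")  # stand-in for the real installer
```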
Google certificates (issuing and validating)
Much machine learning runs on Google code, using Google's open-sourced software
Google Connect (Starbucks, other retailers)
Google domain registration
Google CDN serving code/fonts/libraries
Google self-driving cars are coming
I am not saying Google is bad. I'm not even criticizing this article -- I think it was well thought out, and there aren't that many people even saying this, although it is fairly intuitive. But the larger paradox is this:
Many people talk about how some database setup is sloppy and how they would have it replicated, maybe even with a cold backup in AWS Glacier on top of the backups, and yet the world is running on something like three stacks: Google, Apple, and Microsoft.
Outside of this community and a few other places, it is highly unlikely that someone runs their own OpenWrt router, has FreeBSD on their computer, fully encrypts all their email, runs their own mail server, doesn't own a cellphone, and uses a Garmin to navigate, AND that everyone in their network does this too, so that they are sufficiently insulated.
So... yeah, it's a mistake, and annoying. But in some sense we're expecting much more privacy, security, and reliability from our mobile devices than from our desktop systems. And I think that's an interesting shift in expectations.