I only have my own experience with this, so reproducing it requires a phone that is off and either has its battery removed or is kept in a Faraday (foil) shielded bag. Go to an area your government doesn’t want regular people to be (e.g. an unacknowledged military base), then turn on the phone.
I’ve done this many times, so I know how long it takes to power on both my iPhone and my Android to a “usable” state.
I can’t take my phone inside where I work; they have mobile phone detectors that set off alarms if you bring one near any door or the inner facility fence. I put my phones inside a foil cooler bag with ice packs so they won’t overheat in the car.
My guess is that there was a cell site simulator set up to take over any phone that comes into the area. I got the same result with my Android and iPhone: the phone boots, then there's a weird hang where all the indicators appear but I can't interact with the phone. After waiting at least a minute, I can use the phone.
I think this is why governments don’t like Chinese-developed 5G technology: it doesn’t have their default backdoors.
Absence of evidence is not evidence of absence, especially when searching for evidence left behind by competent adversaries (e.g. NSA, GCHQ, etc) who have a strong motivation to remain undetected.
But it is also not evidence of the thing for which there is absence of evidence.
EDIT:
> especially when searching for evidence left behind by competent adversaries (e.g. NSA, GCHQ, etc) who have a strong motivation to remain undetected.
No, there is no “especially”; absence of evidence means no basis for any affirmative belief, period, and that holds equally for any factual proposition. Arguing for “especially...” is exactly arguing for a case where absence of evidence becomes evidence of the thing for which there is an absence of evidence.
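To put it in rough Bayesian terms (a sketch in my own shorthand: B = "a backdoor exists", E = "we find evidence of it"):

    P(B | no evidence) = P(no evidence | B) * P(B) / P(no evidence)
    P(no evidence)     = P(no evidence | B) * P(B) + P(no evidence | not B) * P(not B)

With P(no evidence | not B) ≈ 1 (if there's no backdoor, you won't find evidence of one), the posterior can only move down from the prior, and only to the extent you expected to find evidence in the first place. Against a competent adversary, P(no evidence | B) is also ≈ 1, so the posterior barely moves: you end up roughly where your prior put you, which is no basis for an affirmative belief in either direction.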
In risk management you shouldn't ignore known unknowns like that; you should either adapt your threat model or accept the risk, not simply treat that risk as nonexistent until proven otherwise.
How could we know for sure? Basebands are 100% proprietary; we have no idea how they operate, and even less of an idea of how their operation might be subverted.
This is why I'm an open source advocate. It's not that open source automatically makes software/firmware trustworthy; it's that closed source guarantees the software/firmware can never be empirically verified as trustworthy.
And yet there have been plenty of long-standing security issues in Linux…
Why would you think that a bunch of people volunteering their time would be more motivated to look for security issues? And of those that are found, how many would be disclosed responsibly instead of being sold to the likes of NSO Group (Pegasus)?
>And yet there have been plenty of long-standing security issues in Linux…
• See the first half of my second sentence.
>Why would you think that a bunch of people volunteering their time would be more motivated to look for security issues
• So that they themselves aren't harmed by the vulnerabilities. I'm on a big tech red team; I routinely look for (and report) vulns in open source software that I use - for my own selfish benefit.
>And of those that are found, how many would be disclosed responsibly instead of being sold to the likes of NSO Group (Pegasus)?
• Not all of them, that's a fair point. But I'd rather have the ability to look for them in source than need to look for them in assembly.
• Keep in mind that the alternative you're proposing (that proprietary code can be more trustworthy than open source code) is pretty much immediately undermined by the fact that the entities who produce proprietary code are known to actively cooperate and collaborate with the adversary - look no further than PRISM for an example. Microsoft, for instance, didn't join reluctantly - they were the first ones on board and had fully integrated well before the second provider joined (Yahoo, IIRC).
• If you want to start a leaderboard for "most prolific distributor of vulnerable code", let's see how the Linux project stacks up against Adobe and Microsoft. I wouldn't even need to research that one to place a financial bet against "team proprietary".
> Why would you think that a bunch of people volunteering their time would be more motivated to look for security issues
I don't. I trust that bad actors are less motivated to insert malicious code, and I trust that transparency enforces good practices. All sufficiently complex code has unintended behavior; what matters to me is how you stop third parties from using my device beyond my control.
> And of those that are found, how many would be disclosed responsibly instead of being sold to the likes of NSO Group (Pegasus)?
What do you think everyone else does with their no-click exploits? Send them to Santa?
FOSS doesn't mean "volunteers." FOSS means that the source is viewable, legally usable, and that changes can be made and redistributed without permission from the author(s).
Volunteers can make closed source software, and massive corporations and governments can make FOSS.
Seems like some people really believe that FOSS is basically perfect when it comes to security: "It's FOSS, so people would find any serious vulnerabilities." Heartbleed, anyone?
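For anyone who's forgotten the details: Heartbleed was, roughly, a missing bounds check in OpenSSL's TLS heartbeat handler - the code trusted the attacker-supplied length field and copied that many bytes of its own memory back into the reply. A simplified sketch of the bug class (illustrative C, not the actual OpenSSL code; the names are made up):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified sketch of the Heartbleed bug class - NOT the real OpenSSL code.
       A heartbeat request says "echo back claimed_len bytes of my payload"; the
       vulnerable handler trusts that length instead of checking it against the
       number of payload bytes actually received. */

    struct heartbeat {
        uint16_t claimed_len;    /* attacker-controlled length field */
        const uint8_t *payload;  /* payload bytes as received */
        size_t actual_len;       /* how many payload bytes really arrived */
    };

    /* Vulnerable: copies claimed_len bytes even if fewer were received, so the
       reply can leak up to ~64 KB of whatever sits next to the payload buffer. */
    size_t respond_vulnerable(const struct heartbeat *hb, uint8_t *out) {
        memcpy(out, hb->payload, hb->claimed_len);  /* no bounds check */
        return hb->claimed_len;
    }

    /* Fixed: drop requests whose claimed length exceeds what was received
       (the same shape as the actual fix, which added a length check). */
    size_t respond_fixed(const struct heartbeat *hb, uint8_t *out) {
        if (hb->claimed_len > hb->actual_len)
            return 0;  /* silently discard the malformed request */
        memcpy(out, hb->payload, hb->claimed_len);
        return hb->claimed_len;
    }

    int main(void) {
        uint8_t payload[4] = "hi";  /* only 2 real bytes of payload */
        struct heartbeat evil = { .claimed_len = 4000, .payload = payload, .actual_len = 2 };
        uint8_t out[65536];

        /* respond_vulnerable(&evil, out) would copy 4000 bytes starting at
           payload[0], running far past the 4-byte buffer into adjacent memory. */
        printf("fixed handler echoed %zu bytes\n", respond_fixed(&evil, out));
        return 0;
    }

That missing check sat in widely deployed, widely readable open source code for roughly two years before it was found.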
As an aside, I wonder if there's a term for this kind of "nobody says...but some do" thing. Everyone sees their own reality, blah blah. I trust that you're speaking in good faith, but that doesn't account for everyone, and good faith doesn't magically resolve arguments.
I know it's a valid threat, but even in the cases that set this precedent, there was a team of 140 and they did not leverage a baseband exploit.