On the Neo900, the modem is connected via USB (bus; there is no physical connector), which means it doesn't have DMA. There is no feasible open-source baseband. OsmocomBB (http://bb.osmocom.org/trac/) is the closest thing to one, and it is relatively incomplete and works on a very limited range of mostly badly outdated hardware, none of which would really be reasonable to use in a phone to be manufactured today.
You'll find the modem on most smartphones is connected via USB - or rather its chip-to-chip version, HSIC. For SoCs where it's on-die - on the same bus/fabric - they will (if it's not an idiotic design) use an IOMMU of some sort, to prevent DMA from having access outside of its sandbox.
Even if it's a modem on the other end of USB - which will almost certainly be using DMA, but at least host-programmed DMA - that's no guarantee. Google "Evil USB". USB is an overly complex stack which has resulted, and will continue to result, in countless vulnerabilities, regardless of what you do with it.
OTOH, on a device like the Neo900 it is well known what kind of device is connected to the internal bus, and the software stack (at least on Linux) can easily be instructed to not accept anything that doesn't look and behave like the included modem should.
In a properly configured user OS, the modem would need to use some software vulnerability to exploit the USB stack, so the same principles apply there as with, say, OpenSSL, browser or the kernel. Secret zero-days aside, when some bug is found, it is patched and you upgrade the vulnerable component, just like on PC.
An easy way to see this is to recompile the USB keyboard driver to ignore keyboard descriptors with a particular address or vendor ID. If you do this, the pendrive can't do anything, because USB is host-controlled, as implemented by the OS. Without the OS initiating a conversation with the pendrive and saying "ok, I'll configure you as a keyboard and interpret responses from you as keystrokes", it can't happen.
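That host-side gating can be sketched in a few lines. This is a toy model only: the vendor IDs and helper name are invented for illustration, and a real implementation would live in the kernel's HID probe path or a udev rule rather than userspace Python.

```python
# Toy model of host-controlled USB enumeration: the OS decides whether a
# device that *claims* to be a keyboard actually gets bound to the HID
# driver. Vendor IDs below are arbitrary placeholders.
HID_CLASS = 0x03                          # USB HID interface class
ALLOWED_HID_VENDORS = {0x046D, 0x04D9}    # hypothetical trusted vendors

def should_bind_hid(vendor_id: int, interface_class: int) -> bool:
    """Bind a keyboard driver only for devices the host explicitly trusts."""
    return interface_class == HID_CLASS and vendor_id in ALLOWED_HID_VENDORS

# A pendrive masquerading as a keyboard (unknown vendor) never gets configured:
assert not should_bind_hid(0x1234, HID_CLASS)
assert should_bind_hid(0x046D, HID_CLASS)
```

The point is that the policy runs entirely on the host; the device can claim whatever it likes in its descriptors, but the host never has to act on those claims.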
As you point out, the main CPU on a phone does not implement HID autoconfig on the internal baseband bus.
Any USB link requires the host to maintain some persistent state in data structures mirroring what it thinks the state of the device is. There's no "DMA" in the sense that the device has direct access to the host - but that doesn't preclude something as mundane as a buffer overflow, use-after-free and so on.
"DMA", with appropriate IOMMU, is just fancy shared memory communication. You're just as likely to mess that up as a serial link. It's happened. A lot.
Sure it can. Just pop up a dialogue on the screen displaying a random character string and ask the user to type the string. The dialogue could simply instruct the user to unplug and cast suspicion upon a device that is pretending to be a keyboard.
There's an easy way to tell. Does the bus carry memory addresses? Then it supports DMA. Does it just send messages? No DMA to protect against.
The USB controller on a PC does support DMA. The OS device driver allocates buffers and passes them to the controller to fill. If it's properly programmed, it will only store data into those buffers. An IOMMU is there to prevent malicious kernel privileged code from bouncing through peripherals that support DMA to compromise other privileged code.
Messages on the USB bus side have no addresses in them and there is no DMA involved.
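The distinction drawn above - host-programmed DMA checked by an IOMMU versus a message-only bus - can be modeled in miniature. This is a deliberately tiny sketch, not any real IOMMU's interface: the driver maps specific buffer windows, and any device access outside them faults.

```python
# Toy model of host-programmed DMA behind an IOMMU: the driver maps
# specific buffers, and the IOMMU rejects any device access outside
# those mapped windows.
class ToyIOMMU:
    def __init__(self):
        self.mappings = []                # list of (base, length) windows

    def map(self, base: int, length: int):
        """Driver grants the device access to one buffer window."""
        self.mappings.append((base, length))

    def check(self, addr: int, length: int) -> bool:
        """True iff [addr, addr+length) lies fully inside one mapped window."""
        return any(b <= addr and addr + length <= b + l
                   for b, l in self.mappings)

iommu = ToyIOMMU()
iommu.map(0x1000, 0x100)                  # driver maps one RX buffer
assert iommu.check(0x1000, 0x80)          # in-window DMA is allowed
assert not iommu.check(0x2000, 0x10)      # anything else faults
```

A message-only link like the USB bus side simply has no `check()` to get wrong at this layer; the attack surface moves into the host software that parses the messages.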
"DMA" as the parent puts it implies a core in the same SoC, rather than external. It would be hair brained to let this have unfiltered access to the fabric. However, that's exactly how older SoCs used to do it - in fact it used to be in charge and the AP shoved behind the IOMMU.
These days nobody I know of is stupid enough to have that arrangement. So it's not really a choice of "DMA" vs USB. External isn't buying much, unless you distrust the fabric filter (IOMMU), which isn't necessarily paranoid... but a level beyond this kind of system decision.
Do you happen to know whether this is the case for the latest generation of Qualcomm SoCs such as the Snapdragon 810 / MSM8994?
I have been "Mr. Cry About Baseband Ownage From The Rooftops" for years around here, and even I have to admit that a lot of baseband implementation in modern smartphones uses this same USB connected model.
It's not universal, but a lot of USB-connected baseband is out in the world...
Still closed source and owned by the provider. I would love to see an open source baseband with a hard switch to disable it.
The closest way I can see to get something to be trusted is to start from a GNU-approved laptop and add to it whichever modem you had on the Neo900 via USB... just as you said, but then you have this situation...
For something to be secure... there has to be a secure chain of trust...
-who creates the CPU? What kind of microcode does it have?
-video controllers? Blobs? Drivers?
-anything with DMA: who created your memory controller?
-are you sure about the media you are using to install?
-who creates the HDD? Did someone "touch it" before you?
--what about the controllers on the HDD? On the RAM? Take a look at the latest technology and you'll see there is no way you can trust anything.
--anything with a controller: that controller is a CPU, and anything not being driven by the main CPU is an attack vector.
"The potential of reprogrammable computers that, at a low level, run the code you ask them to doesn't seem to get through to most of the HN crowd" Do not generalize, different people, interests and approaches are part of how nature works, and remember we are as a friend said the "nature virtual machines".
From the perspective of trying to guarantee security when you don't trust manufacturers, there are system-level invariants dictated by the laws of physics: you can be sure, to within measurement precision, that different chips from different manufacturers are not colluding wirelessly, and you can use signal analyzers on the bus to see what is being transmitted in typical operation. This won't save you from timebombs, but it gives you some idea of what's going on at least, and practically you can expect that unrelated corporations do not have the spare time/money/engineering smarts to invest to collaborate to spy on the users of heterogeneous devices far down the line. So you may not quite be able to trust your CPU, or your modem, or your RAM (well... maybe RAM is regular enough that you could sample and decap and borrow a microscope and see if things look fishy), but you can at least make sure they are behaving generally as CPUs or modems or RAM might at the boundaries between them. If you build a system with enough small chips doing simple tasks, though you'll end up with a slow, inefficient thing (think naïve implementations of a microkernel), you no longer need to trust the individual components much as there's little room for them to screw you over at a high level.
And of course it's not true that all of HN is the way I described--it's just a distressingly large, or distressingly vocal proportion. I know it's at least enough people that projects like OsmocomBB are tragically underfunded, underhyped, and under-hacked-on.
I'm not sure how readily available this kind of tooling is for the average Joe, and (correct me if I'm wrong) I also think it is a destructive process. Taking as a basis the Core 2 Duo P8400 in the X200, the libreboot laptop, which per Intel's page is built on a 45 nm process, Wikipedia notes (I know Wikipedia may not be a trusted source of information for this): "This attack is not very common because it requires a large investment in effort and special equipment that is generally only available to large chip manufacturers." Which still sounds logical.
about " but you can at least make sure they are behaving generally as CPUs or modems or RAM might at the boundaries between them" you know this is close to impossible, check for example how the RAM tests are performed where a set of patterns are tested and you have some degree of certainty  and for which they last concludes "It should be obvious that this strategy requires an exact knowledge of how the memory cells are laid out on the chip. In addition there is a never ending number of possible chip layouts for different chip types and manufacturers making this strategy impractical. However, there are testing algorithms that can approximate this ideal. " notice approximate to ideal.
"If you build a system with enough small chips doing simple tasks" and this is the path to take, a new set of trust-able cpus... but it's not one of the current options.
Is it possible to build a chip that only executes instructions encrypted by your key? I'm not talking about just decoding to L1 and executing plaintext there, but having a full pipeline that can only work on your encrypted instructions.
Of course it would be orders of magnitude slower than a modern CPU.
"His approach is clever and is known as a “glitching attack“. This kind of hardware attack involves sending a carefully-timed voltage pulse in order to cause the hardware to misbehave in some useful way" from http://rdist.root.org/2010/01/27/how-the-ps3-hypervisor-was-...
I like RockChip ChromeBooks, with no microcode and libreboot support. You can buy them brand new, unlike x200.
As a result, we just recently got cheap, widely accessible SDRs, and even they only came about through more or less an accident. And their performance renders them pretty much useless for even just GSM. Now serious SDRs like USRPs have been available for a long time, and their specs make it possible to run GSM and other advanced RF protocols. So if you pay the 2k+ required to get one of these functional SDRs and have the knowledge that is predominantly still only found in dead trees or lectures in fields quite unrelated to CS, you still won't be able to get anything resembling a GSM phone.
There are the obvious legal hurdles: you are not allowed to transmit willy-nilly, not even to just talk to a public GSM network. If you make sure no RF gets out into the real world, you can pick up horribly outdated, discarded GSM test equipment from eBay for yet more big bucks and use that.
In related news, they usually have a GSM(+) test network at the CCC congress, but they won't in 2016 because the regulatory agency just auctioned off the last bit of available test spectrum in the right band you could acquire through a lengthy, convoluted petition.
And the list just goes on and on and on.. remember you'll need to first communicate with a SIM (really, Java!) card that is your key into any network. They pretty much mandated the equivalent of a DRM dongle!
All the interfaces to this are open: the wire protocol to the SIM and the over-the-air protocols for authentication and security (what a phone needs to do) are specified completely by freely available 3GPP/ETSI specs.
The details of how the SIM computes the derived keys and authentication response are a black box. 3GPP suggests a couple of sample algorithm sets, but there's no way to know whether the card did that unless you have the secret key (shared between the card and a box deep inside the carrier's network).
However, the phone just passes the authentication-request parameters in an authenticate command to the card, passes the result of that in an authentication-response message back to the network, and obtains integrity and ciphering keys (for GSM) from a couple of files on the card (UMTS/LTE use some keys derived from these, but again, it's documented).
An alternative, used in some older US specs like late AMPS, CDMA, and (EIA-136) TDMA is to store the secrets in the phone, generally at manufacturing time. This means that there's no real boundary between the carrier's authentication widget and "your" phone.
Much better to have the identity contained in a SIM that belongs to the carrier, that your phone talks to at arm's length.
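The arm's-length relationship described above can be sketched as a toy model. To be clear about the assumptions: real cards run COMP128/MILENAGE-class algorithms behind an APDU interface, and the HMAC stand-in below is purely illustrative; the point being modeled is only that the phone is a pass-through and Ki never leaves the card.

```python
import hmac, hashlib

class ToySIM:
    """Stand-in for the carrier's card: holds the secret key and answers
    challenges. The real algorithm is a black box; HMAC is a placeholder."""
    def __init__(self, ki: bytes):
        self._ki = ki                       # never leaves the card

    def authenticate(self, rand: bytes) -> bytes:
        return hmac.new(self._ki, rand, hashlib.sha256).digest()[:8]

def phone_auth_roundtrip(sim: ToySIM, rand: bytes) -> bytes:
    """The phone is a dumb pipe: network challenge in, card response out."""
    return sim.authenticate(rand)

ki = b"\x00" * 16                           # provisioned into the card
sim = ToySIM(ki)
res = phone_auth_roundtrip(sim, b"\x01" * 16)
assert len(res) == 8                        # SRES/RES-sized output
```

The phone can forward challenges and responses all day without ever being able to derive Ki, which is what makes the SIM-as-separate-widget design a real boundary.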
Java seems to be for applications other than the plain (U)SIM, like prepaid account management. The card can ask the phone to add a menu to its interface, show messages, collect inputs, and other things. Maybe all plain USIM apps are written in Java these days instead of living in a masked ROM like they probably used to.
The most interesting thing your SIM card does is run arbitrary programs that can be uploaded to them, without your knowledge, by the carrier:
The SIM card is a full computer, with its own CPU and memory, that lives inside your phone and that you have no control over.
How many opportunities are there to work on secure communications software full time and still put food on the table?
But the FCC isn't overly concerned if you're doing WiFi or Bluetooth; their area is the broad analog strokes, correct bandwidth and correct power. As such, if you use something that has the correct filters in hardware, you'll be just fine, though technically breaking the law.
Their mailing list is fairly active.
The systems security of modern phones is surprisingly complex. Google and Apple both care very deeply about these problems, and both have extremely capable engineers working on them. Without getting too far into the weeds: they haven't ignored the baseband.
My objection is with the idea that you can look at a design, not see an IOMMU, and extrapolate from that the notion that the baseband has full access to the memory of the other chips in the design.
That's a reasonable assumption in a PC design. There may have been a point where it was a valid assumption for some phones, too. It's not with a modern phone design.
I would imagine that things have changed a little bit, but the baseband back then, and I imagine still now, is considered to be the ultimately trusted element of the system. I'd be surprised to hear that they've changed so much that the baseband doesn't still have full control over its host system.
Your statement is general to the point of misinformation.
The connection is usually HSIC, which is a chip-to-chip USB derivative.
The AP is responsible for setting up buffers for communication and manages its own host controller. But like I2C or even older UARTs, the AP remains in control of the communications.
Yes, basebands need more auditing and a security model more like modern APs (e.g., separation of privileges and exploit countermeasures like ASLR and non-exec). Yes, getting baseband access then lets you monitor regular voice and SMS comms. But no, it does not instantly compromise the AP so using the Signal app would still be secure.
Mobile phones have a readily available control/backhaul channel and there's a long history of carrier enforced device control and state mandated telecom surveillance effecting [sic] the design culture. Qualcomm obviously works with the NSA, if only to protect against infiltration by other intelligence agencies. So it's really a question of whether the NSA is willing to have their root kits require physical installation or not.
I have a hard time believing that HSIC is the entire extent of interconnection, with the processors being on the same die and all. Are you asserting that the baseband and application processors use completely independent RAMs and flashes? Independent memories seem more expensive (price + power) than a single shared bank with an MMU, but since the storage requirements of the baseband are known at design time, then perhaps not terribly.
If they aren't independent memories, then the term DMA actually still applies even if the interconnect protocol is not based on it. Mobile literature is quite inaccessible (another symptom), but everything I've seen refers to having an MMU as the advancement. I have a hard time believing that would be controlled by the application processor (leaving the baseband vulnerable), but please correct me with specifics if this is wrong.
Some of the cores on a complicated mobile device might have their own memories, and some of them might be isolated from the memories of other cores with silicon. I'm sure there are devices where there are insecure cores with no isolation at all --- just like there's a ton of C code that will read a URL off the wire into a 128 buffer on the stack.
The problem you're suggesting device designers have to solve --- allowing core A access only to a range of the total memory available "on the die" --- isn't a hard one.
From the suggestions you've made in your comments --- and I mean this respectfully --- I think you'd be very surprised by the hardware systems design in a modern mobile device. They are in some ways more sophisticated than the designs used for PCs.
So, the point of this subthread is that mobile devices are much more complicated than the simplistic ("no IOMMU? the baseband can read/write AP memory!") model proposed in the article. It makes an OK overall point (we should care about baseband security!) but uses a very flawed argument to get there.
I have no doubt mobile chipsets contain a surprising amount of complexity. I'd love to be surprised by it! But I've only ever run across vague references to various improvements, which are worth just as much as saying "Our code uses AES!".
It's obviously easy to restrict a core to certain memory ranges. But how is that restriction set? Is it fixed in the mask (leading to inflexibility), or is it set through registers? The bullet point is enough to satisfy a PHB's sense of security, but we know it's the details of those loose ends where the exploits lie. And the threat model of Qualcomm is quite different from the threat model of a phone's owner.
Simplistic points like the OP are really a symptom of this not knowing. Can you blame them for not knowing the exact vulnerability? It's like someone picking on a binary blob, saying it can contain a backdoor password. Well, the industry having moved from backdoor passwords to challenge-response isn't really a defense to the overriding point, is it?
What I'm able to find looking around amounts to Apple verifying that the firmware loaded matches what's expected--but that's simply checking the binary, and doesn't give users any assurance whether that baseband enables backdoors or not. I didn't find anything about mitigations present in Android.
Articles like this one (http://mobile.osnews.com/story.php/27416/The_second_operatin...) indicate that it's pretty easy for an attacker running a cell tower (so, organized crime, or governments, or a blackhat with a few thousand bucks) to get code execution on (some) baseband processors. How do phone vendors mitigate this? My (admittedly pedestrian) knowledge of typical SoC setup makes it seem like that would be very difficult to do in software.
The way manufacturers "mitigate" baseband to main CPU compromise is by using a protocol that allows no initiation from the peripheral device (baseband). It can only talk to the main CPU via a serial-like protocol, not access its memory directly.
Other routes, such as side channel leakage or possible flaws in the main CPU software that interpret messages received from the baseband are still a potential source of problems, but there is no such DMA capability in the HSIC protocol.
It is valid to have general distrust and annoyance with the spy agencies for their actions to create backdoors. But there is no technical basis for this article's claims, and the author should retract them.
Kind of like vmware provides nice interface for folder sharing, but in practice can just write directly to whatever files/memory they want.
However, that doesn't make for a very sexy headline.
Nice Freudian slip. "Muddle" to me sounded like one, probably because of its similarity to huddle, but a bit to my surprise it is a word, with an even more appropriate meaning "an untidy and disorganized state or collection."
Pretty sure the parent poster was getting close to the end of the day, and could almost feel its minty relief on their tongue, hence the slip.
Not enough to force SoC manufacturers to isolate baseband from the main memory and not enough to make baseband firmware FOSS and transparent.
There are lots of reasons GSM is hard or impossible to make work. What are the options? As more and more carriers in the USA provide WiFi dongles that connect over 3G, maybe it's better to just do that, and move off making calls directly from your phone completely?
For example, it might make sense to buy some phone, connect it to a device (or flash it with some software) that makes it essentially a portal for phone calls of sorts, and give it sandboxed access to your network. It's significantly harder for GSM backdoors to be effective if the entire device is sandboxed right? Maybe this way, as you roam around, you can somewhat securely communicate over IP to your call-making device, and make/receive calls?
[EDIT] - Thinking about it, the suggestion is moot, since all someone would have to do is write some software to replay messages, or leak messages or some other nefarious thing, and stick it on the baseband of the device -- even if it can't damage your network it's still quite insecure.
Maybe we should just give GSM up altogether, and start trying to move ourselves (and the world) to only communicating over IP (which we have a shot at securing, assuming modern crypto isn't completely broken)? What is the situation like with completely open source WiFi connectivity?
This is an obvious approach, and the one that I originally pursued when I became worried about baseband exploits, security and privacy. I was looking into using a "samsung galaxy player" (basically, a galaxy S4 with no mobile phone chip in it) and using USB modems to use the cellular network when it suited me.
The problem is, for a variety of weird reasons, a LOT of the realtime voice processing is also built into the baseband, along with the radio functions that we're all talking about here.
So a lot of voice quality and noise cancellation and other things that you would really miss are built into the baseband and difficult to replicate on the main, more general purpose, CPU.
LTE doesn't seem like a solution to this problem; it sounds like just a more efficient baseband. Even if it's easier to reverse engineer, use of it might still be outlawed (as is an issue with the Neo900).
Does LTE use some publicly accessible/modifiable spectrum that I don't know about or something?
Of course the transmission itself is still as proprietary and filled with regulations as all GSM standards; I just noted that it should be easier to somehow "sandbox" the communication via LTE now than with older technologies.
It's still not the solution and I'm fully aware of it.
> It would, in my view, be abject insanity not to
> assume that half a dozen or more nation-states (or
> their associated contractors) have code execution
> exploits against popular basebands in stock.
Still, even after Snowden, the whole world uses Microsoft, Apple, Intel, Google technology everyday.
This is not proof that your logic is wrong, but I am wondering how interested today's intelligence services are in protecting their own companies and citizens from data breaches. It might be that they are more interested in stealing as much data as possible themselves, in order to be able to negotiate with foreign states/services.
Indeed. Such design coupled with very obscure and closed baseband firmware is a security nightmare. One should ask, who was pushing for such an approach.
According to the iOS security white paper, the baseband firmware is part of the secure boot chain, and has its own secure boot chain.
This allows me to assume it's very hard to inject or replace the firmware with malicious code. Whether or not the firmware itself has a backdoor or whatever I don't know, but at least one major phone manufacturer knows this firmware is very important for security.
It's very interesting to just download a baseband firmware (usually called radio.img) from a random Android forum, unpack it, and run strings on the code you get. It'll usually be an ARM processor running a homebrew RTOS, which is concerning enough. When you search for NMEA and realize the baseband processor is running the GPS chip, that's when you start to get doubts. And when you finally realize there's a bunch of audio codecs and the baseband is controlling the microphones, that's when the full force of despair hits you. You don't own it, you don't control any part of it. Your Android doesn't either; it begs the baseband for a slice of its information.
> Testing DDR Read/Write: Memory map.
> Testing DDR Read/Write: Data lines.
> Testing DDR Read/Write: Address lines.
> Testing DDR Read/Write: Own-address algorithm.
> Testing DDR Read/Write: Walking-ones algorithm.
> Testing DDR Deep Power Down.
> Testing DDR Deep Power Down: Entering deep power down.
> Testing DDR Deep Power Down: In deep power down.
> Testing DDR Deep Power Down: Exiting deep power down.
> Testing DDR Deep Power Down: Read/write pass.
> Testing DDR Self Refresh.
> Testing DDR Self Refresh: Write pass.
> Testing DDR Self Refresh: Read pass.
> Testing DDR Self Refresh: Entering self refresh.
> Testing DDR Self Refresh: In self refresh.
> Testing DDR Self Refresh: Exiting self refresh.
Yes, Galaxy S4 modem, please provide unlimited access to all government agencies to all my phone's content and communications.
> Samsung Root CA cert1%0#
Fantastic, you also inject your own root certificate. Thanks.
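The strings-dumping exercise above is easy to reproduce in miniature. In this sketch the firmware image is a fabricated stand-in (a real radio.img would come from an actual firmware dump); the extraction logic just mirrors what the `strings` utility does: pull out runs of printable ASCII.

```python
import re

# Stand-in for a baseband image; a real one would be megabytes of ARM code.
blob = b"\x00\x7fELF\x01$GPGGA,\x02\x03AMR-NB codec\xff"

def extract_strings(data: bytes, min_len: int = 4):
    """Return all printable-ASCII runs of at least min_len bytes,
    like the `strings` utility does."""
    return [m.group().decode() for m in
            re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

hits = extract_strings(blob)
# NMEA sentences ($GPGGA) betray GPS handling; codec names betray audio paths.
assert any("GPGGA" in s for s in hits)
assert any("codec" in s for s in hits)
```

This is exactly the kind of low-effort inspection the comment describes: no disassembly needed, and the telltale strings already tell you what subsystems the baseband touches.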
If it could, anyone with a logic probe could grab un-DRM'd video data out of the RAM, which would make many people very unhappy.
Of course the same brand name (e.g. "Galaxy S6") covers many different models across the world, most using integrated Qualcomm chips. Honestly, looking at the list of variants and thinking about the conservatism of RF and telecom regulatory regimes, you'd have to be naive to think the whole ecosystem doesn't simply exist under the control of major intelligence agencies. Communications have always been regarded as dangerous.
An MPU has all of the same protection domains that an MMU does, with a few major differences:
- The total number of protected regions is very small (12 or so), such that the hardware cost is somewhat smaller than a TLB cache.
- To offset the small number of regions, the size of each region can be almost any power of two.
- The MPU does not perform address translation, again reducing the hardware cost.
Thus, the kernel can configure the peripheral's DMA engine to only allow access to a page or few.
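As a concrete instance of the power-of-two constraint in the list above, the ARMv7-M MPU encodes a region of 2^(SIZE+1) bytes in the SIZE field of its RASR register (per my reading of the ARMv7-M reference manual; the helper below is illustrative, not vendor code).

```python
def rasr_size_field(region_bytes: int) -> int:
    """Compute the ARMv7-M MPU RASR SIZE field for a region of the given
    size: regions must be power-of-two sized and at least 32 bytes, and a
    region of 2**(SIZE+1) bytes is encoded as SIZE."""
    if region_bytes < 32 or region_bytes & (region_bytes - 1):
        raise ValueError("MPU regions must be power-of-two sized, >= 32 B")
    return region_bytes.bit_length() - 2   # 2**(SIZE+1) == region_bytes

assert rasr_size_field(4096) == 11         # 4 KiB DMA window
assert rasr_size_field(32) == 4            # smallest ARMv7-M region
```

The coarse granularity is the trade-off mentioned above: far cheaper than a TLB, but you fence a peripheral into power-of-two windows rather than arbitrary page sets.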
> It can be safely assumed that this baseband is highly insecure. It is closed source and probably not audited at all. My understanding is that the genesis of modern baseband firmware is a development effort for GSM basebands dating back to the 1990s during which the importance of secure software development practices were not apparent. In other words, and my understanding is that this is borne out by research, this firmware tends to be extremely insecure and probably has numerous remote code execution vulnerabilities.
I've read in several places that basebands now widely use the OKL4 microvisor, based on the formally verified (fwiw) seL4 microkernel, and are much more secure than before. Does anyone know more about this?
Surely baseband processors are not based on seL4, since it currently has no realtime support (though it is in development: https://wiki.sel4.systems/seL4%200.0.1-rt-dev).
OKL4 is based on a kernel of the L4 family (source: https://en.wikipedia.org/wiki/L4_microkernel_family#Commerci...):
seL4 is another kernel of this family that has been formally verified.
However I'm thinking a different approach can be taken, suppose we abstract the different ways for communication a device has and use them as sockets or layers and then create an algorithm that distributes the communication through several channels.
For example two cellphones
-one against another using the light on one screen against the camera of the other
-the vibrator motor captured by the microphone
-introducing certain pattern of noise in Bluetooth communication by the other radios
-communication through steganography
-sending huge amounts of information (the more information, the more power needed to discriminate and understand it)
-abuse how things are not supposed to work (instead of sending packets in the correct order, use ping as a way of sending data by crafting the requests).
-Custom network stack
-use of a customized version of encryption with extra large keys (think 20480 bits instead of 2048)
-and last but not least, use an algorithm that mutates the distribution logic based on some time-dependent algorithm (in the same fashion viruses mutate themselves)... using, for example, your voice as the key
Don't think it's a wise idea.
This was briefly discussed recently: https://news.ycombinator.com/item?id=10794991
Summary: "if you could break a 4096-bit RSA key, you've probably found a fundamental weakness in RSA that means you should move to a different algorithm entirely"
In 2016 that's just the price you pay for using computers. You just have to live with it. Mitigate it the best you can, rely on the ol' "mossad or not mossad" strategy now and then, hope for the best, etc. If you have a strong need for increased security, well, god help you (spoilers: you will receive no help), you're going to pump a lot of effort into building something that will still have tons of vulnerabilities.
It's not obvious that FDE will totally mitigate hard drive firmware attacks because your boot loader and kernel are likely running from an unauthenticated partition (for instance, most Linux-based systems with LUKS today have all of /boot on a separate non-LUKS partition, and something like a GRUB boot sector too). If the hard drive understood how to recognize the kernel in /boot, it could inject a vulnerability into it. In a few presentations a couple of years ago, I walked through a real example of how a single bit flip in a binary can introduce (or re-introduce) a fencepost error that can compromise security, because the opcodes for extremely similar yet different conditional branches often differ by a single bit. One form of the conditional branch like jump-if-less may be the correct invariant, while another form like jump-if-less-or-equal may be the exploitable erroneous form of the same loop.
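The single-bit claim is easy to check against real x86 encodings (these are the standard short-jump opcodes; the specific fencepost example from the presentations is not reproduced here):

```python
# x86 short conditional jumps: "jl" (jump if less) is opcode 0x7C, while
# "jle" (jump if less-or-equal) is 0x7E. They differ in exactly one bit,
# so a single flipped bit turns a correct loop bound into an off-by-one.
JL, JLE = 0x7C, 0x7E
assert bin(JL ^ JLE).count("1") == 1
```

So a firmware-level attacker does not need to patch in whole routines; one well-placed bit flip in an existing binary can silently reintroduce an exploitable boundary error.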
I'm not knocking people's work on trying to get a handle on what firmware is running on their devices; I think that work is great. The trouble is that attackers are looking for exploits and persistence in every layer of the system, so you won't get a silver bullet against all attacks by shoring up one aspect.
Good stuff on this: Halvar Flake's "Why Johnny Can't Tell if He is Compromised", and Joanna Rutkowska's recent CCC presentation on persistent state.
Hard drive replaces your GRUB payload with a compromised one that records your encryption key. You lose.
You could put your bootloader and /boot on a USB stick, but then you're putting the same trust in the stick.
Unfortunately, hardware security is really difficult if your threat model does not assume your vendors are trusted. In fact, I don't think it's actually possible today to get verifiably trustworthy hardware under that threat model.
1. Put *#*#4636#*#* into the Dialer (or use any application available on the Market like 4636; it takes you to the same service screen)
2. Choose Phone Information
3. Press Turn off radio
(more discussion) http://android.stackexchange.com/questions/7133/how-do-i-tur...
In reality, the baseband should be connected more like a serial port (with some audio channels to the mic and speaker). In fact, it's treated very much that way in software - you interact with it by sending AT commands. But, as others have pointed out, it can send commands directly to the phone's CPU and access the phone's internal memory.
You can always walk around with a portable hotspot and leave the phone on airplane mode. Only use TextSecure/Signal and download a free-software reverse-engineered clone of the Google services app to use GCM. If you're going to all this trouble to avoid targeted spying, it's probably easier to just buy a Chromebook with RockChip instead and use OTR, or Signal whenever it comes out with a desktop client (if not already; I haven't kept up with the current status of the port).
1. Software. The phones run complex, low-assurance software in unsafe languages on inherently insecure architectures. A stream of attacks and leaks has come out of these. The high-assurance model was either physical separation with a trusted chip mediating, or a separation kernel plus user-mode virtualization of Android etc., so security-critical stuff ran outside it, with strong mediation of inter-partition communications.
2. Firmware of any chip in the system, esp. boot firmware. These are privileged, often thrown together even more hastily, and might survive a reinstall of other components.
3. Baseband standards. Security engineer Clive Robinson has detailed many times on Schneier's blog the long history between intelligence services (mainly British) and carriers, with the former wielding influence on standards. Some aspects of cellular stacks were outright designed to facilitate their activities. On top of that, the baseband has to be certified against such requirements, which gives extra leverage given the lost sales if there's no certification.
4. Baseband software. This is the one you hear about most. They hack baseband software, then hack your phone with it.
5. Baseband hardware. One can disguise a flaw here as leftover debugging functionality or whatever. Additionally, the baseband has RF capabilities that we predicted could be used in TEMPEST-style attacks on other chips. Not sure if that has happened yet.
6. Main SOC is complex without much security. It might be subverted or attacked; with subversion, it might just be a low-quality counterfeit. Additionally, the MMU or IOMMU might fail due to errata. The old MULTICS evaluation showed you can sometimes just keep accessing stuff all day until a logic- or timing-related failure allows access; they got in. More complex chips likely have similar weaknesses. I know Intel's do, and Intel fights efforts to get specifics.
7. Mixed-signal design ends up in a lot of modern stuff, including mobile SOC's. Another hardware guru, who taught me ASIC issues, said he'd split his security functions (or trade secrets) between digital and analog so the analog effects were critical for operation. That slowed reverse engineering, because his digital customers couldn't even see the analog circuits with their digital tools, let alone understand them. He regularly encountered malicious or at least deceptive behavior in third-party I.P. that similarly used mixed-signal tricks. I've speculated before on putting a backdoor in the analog circuits that modulates the power to enhance power-analysis attacks. Lots of potential for mixed-signal attacks that are little explored.
8. Peripheral hardware can be subverted or counterfeit, or have the same problems as above. Look at a smartphone teardown sometime to be amazed at how many chips are in it. There's analog circuitry and RF schemes as well.
9. EMSEC. The phone itself is often an antenna, from my understanding. There are passive and active EMSEC attacks that can extract keys, etc. Now, you might say "might as well record audio if they're that close." Nah: they get the master secret, and then they have everything in many designs. EMSEC issues here were serious in the past: old STU-III's were considered compromised (master key leaked) if certain cellphones got within about 20 ft of them, because the cell signals forced secrets to leak. I can't say how much better or worse this has gotten with modern designs.
10. Remote update. If your stack supports it, this is an obvious attack vector if the carrier is malicious or compelled to be.
11. Apps themselves, if the store review, permission model, and/or architecture is weak. How weak the first two are is debatable; the architecture definitely is. Again, better designs in niche markets used separation kernels, splitting each app between untrusted parts (incl. the GUI) inside the OS and a security part outside it. That would require extra infrastructure and tooling for mainstream use, though, plus adoption by providers. I'm not really seeing either from mainstream providers. ;)
That's just off the top of my head from prior work trying to secure mobile or hardware. My mobile solution, developed quite some time ago, fit in a suitcase due to the physical separation and interface requirements. My last attempt to put it in a phone still needed a trusted keyboard and enough chips that I designed it (never implemented) around the Nokia 9000 Communicator. Something with modern functions and form factor that deals with all of the above? Good luck...
All smartphones are insecure. Even the "secure" ones. I've seen good ideas and proposals, but no secure[ish] design has been implemented outside maybe Type 1 stuff like the Sectera Edge. Even that cheats, as far as I can tell, with physical separation and robust firmware, and it's also huge thanks to EMSEC and milspec requirements. A secure phone will look more like that or the Nokia. You see a slim little Blackphone, iPhone, or whatever offered to you as "secure"? Point at a random stranger and suggest they might be the sucker the sales rep was looking for.
Don't trust any of them. Ditch your mobile or make sure the battery is removable. Don't have anything mobile-enabled in your PC. Avoid wireless in general unless it's infrared, and even then it should be off by default.