Hacker News new | comments | show | ask | jobs | submit login
There are no secure smartphones (devever.net)
374 points by moehm on Jan 14, 2016 | hide | past | web | favorite | 118 comments



The folks at http://neo900.org/ are well aware of this and that phone is designed accordingly (details at http://neo900.org/faq#privacy). Hype-driven products like BlackPhone misrepresent their devices as being perfectly secure when this significant attack vector is completely unmitigated.

On the Neo900, the modem is connected via USB (bus; there is no physical connector) which means it doesn't have DMA. There is no feasible open-source baseband. OsmocomBB (http://bb.osmocom.org/trac/) is the closest thing to one, and it is relatively incomplete and works on a very limited range of mostly badly outdated hardware, none of which would not really be reasonable to use in a phone to be manufactured today.

It's incredibly difficult to get people to care and help with the lack of software for tasks like GSM communication. Somehow even among people who describe themselves as "hackers", most just want to run Android or iOS and buy/run closed-source apps, and are more interested in Javascript and employment than reverse-engineering and doing things that have never been done before. The potential of reprogrammable computers that, at a low level, run the code you ask them to doesn't seem to get through to most of the HN crowd.


> On the Neo900, the modem is connected via USB (bus; there is no physical connector) which means it doesn't have DMA.

You'll find the modem on most smartphones is connected via USB - or rather its chip-to-chip version, HSIC. For SoCs where it's on-die - on the same bus/fabric - they will (if it's not an idiotic design) use an IOMMU of some sort, to prevent DMA from having access outside of its sandbox.

Even if it's a modem on the other end of USB - which will almost certainly be using DMA, but at least host-programmed DMA - that's no guarantee. Google "Evil USB". USB is a overly complex stack which has and will continue to result in countless vulnerabilities, regardless of what you do with it.


"Evil USB" (or "bad USB") is possible thanks to the U part of USB - universal. If you connect a pendrive to your computer, it can easily say that it's a keyboard, because your computer cannot easily verify that you haven't just connected a keyboard. It would need to ask you in some trustworthy way to be sure, which sometimes can be problematic.

OTOH, on the device like Neo900 it is well-known what kind of device is connected to the internal bus and software stack (at least on Linux) can easily be advised to not accept anything that doesn't look and behave like the included modem should.

In a properly configured user OS, the modem would need to use some software vulnerability to exploit the USB stack, so the same principles apply there as with, say, OpenSSL, browser or the kernel. Secret zero-days aside, when some bug is found, it is patched and you upgrade the vulnerable component, just like on PC.


This is correct. The problem in this example is not the pendrive, but the automatic selection of device drivers based on the PC OS 100% trusting physical access.

An easy way to see this is recompile the USB keyboard driver to ignore keyboard descriptors with a particular address or vendor ID. If you do this, the pendrive can't do anything because USB is host-controlled, as implemented by the OS. Without the OS initiating a conversation with the pendrive and saying "ok, I'll configure you as a keyboard and interpret responses from you as keystrokes", it can't happen.

As you point out, the main CPU on a phone does not implement HID autoconfig on the internal baseband bus.


I picked a terrible example which doesn't demonstrate what I intended to.

Any USB link requires the host to maintain some persistent state in data structures mirroring what it thinks the state of the device is. There's no "DMA" in the sense that the device has direct access to the host - but that doesn't preclude something as mundane as a buffer overflow, use-after-free and so on.

"DMA", with appropriate IOMMU, is just fancy shared memory communication. You're just as likely to mess that up as a serial link. It's happened. A lot.


Yes, parsing of untrusted data, race conditions, etc. are still a problem in general. My complaint was with the uninformed article, not your analysis.


I'm sure a udev rule would be sufficient to whitelist only the trusted hardware. However, nothing can stop a malicious one to spoof serial numbers or other characteristics of a trusted hardware.


If you connect a pendrive to your computer, it can easily say that it's a keyboard, because your computer cannot easily verify that you haven't just connected a keyboard.

Sure it can. Just pop up a dialogue on the screen displaying a random character string and ask the user to type the string. The dialogue could simply instruct the user to unplug and cast suspicion upon a device that is pretending to be a keyboard.


There is no IOMMU in USB. You've got it backwards: the IOMMU in PC's is on the host side of the USB controller, not the device side.

There's an easy way to tell. Does the bus carry memory addresses? Then it supports DMA. Does it just send messages? No DMA to protect against.

The USB controller on a PC does support DMA. The OS device driver allocates buffers and passes them to the controller to fill. If it's properly programmed, it will only store data into those buffers. An IOMMU is there to prevent malicious kernel privileged code from bouncing through peripherals that support DMA to compromise other privileged code.

Messages on the USB bus side have no addresses in them and there is no DMA involved.


This is exactly what I'm saying.

"DMA" as the parent puts it implies a core in the same SoC, rather than external. It would be hair brained to let this have unfiltered access to the fabric. However, that's exactly how older SoCs used to do it - in fact it used to be in charge and the AP shoved behind the IOMMU.

These days nobody I know of is stupid enough to have that arrangement. So it's not really a choice of "DMA" vs USB. External isn't buying much, unless you distrust the fabric filter (IOMMU), which isn't necessarily paranoid... but a level beyond this kind of system decision.


How does one find out about this for a particular phone? I just tried it for one model and the SoC was on the Wikipedia page, but a cursory web search couldn't tell me what kind of modem setup is in it. The SoC in question was the Snapdragon 410. I suppose a dmesg or lsusb on a (rooted?) Android device might do it.


> For SoCs where it's on-die - on the same bus/fabric - they will (if it's not an idiotic design) use an IOMMU of some sort, to prevent DMA from having access outside of its sandbox.

Do you happen to know whether this is the case for the latest generation of Qualcomm SoCs such as the Snapdragon 810 / MSM8994?


"On the Neo900, the modem is connected via USB (bus; there is no physical connector) which means it doesn't have DMA."

I have been "Mr. Cry About Baseband Ownage From The Rooftops" for years around here, and even I have to admit that a lot of baseband implementation in modern smartphones uses this same USB connected model.

It's not universal, but a lot of USB-connected baseband is out in the world...

Still closed source and owned by the provider. I would love to see an open source baseband with a hard switch to disable it.


I have kind of confusion, your rant about people on HN not understanding that "re-programmable computers that, at a low level, run the code you ask them to" and the fact that no one has a complete ownership of all the parts inside that phone or any other option makes difficult to sustain an option as secure, because those options would comprise several different cpus.

The closest way I can see to get something to be trusted is to start from a GNU approved laptop like [2] and add to it whichever modem you had on the neo9000 via usb... just as you said, but then you have this situation...

For something to be secure... there has to be a secure chain of trust...

-who creates the cpu? which kind of microcode has on it?

-video controllers? blobs? drivers?

-anything with DMA, who created your memory controller?

-are you sure about the media you are using to install?

-who creates the hdd? did someone "touch it" before you?

--what about the controllers on the hdd? ram? take a look on the latest technology and you'll see there is no way you can trust anything [0][1].

--anything with a controller... this is a cpu... this is something not being driven by the main cpu it's an attack vector.

"The potential of reprogrammable computers that, at a low level, run the code you ask them to doesn't seem to get through to most of the HN crowd" Do not generalize, different people, interests and approaches are part of how nature works, and remember we are as a friend said the "nature virtual machines".

[0] http://bgr.com/2015/02/17/nsa-hard-drive-firmware-virus/

[1] http://recode.net/2015/02/17/nsa-can-hide-spyware-in-hard-di...

[2] https://www.fsf.org/news/libreboot-x200-laptop-now-fsf-certi...

edit: format


I agree completely that fully trusted hardware, although a laudable goal, is not achievable in today's reality. However, there are a lot of cool things you can do with hardware that looks like it runs your code, even if you don't know whether it contains backdoors or not.

From the perspective of trying to guarantee security when you don't trust manufacturers, there are system-level invariants dictated by the laws of physics: you can be sure, to within measurement precision, that different chips from different manufacturers are not colluding wirelessly, and you can use signal analyzers on the bus to see what is being transmitted in typical operation. This won't save you from timebombs, but it gives you some idea of what's going on at least, and practically you can expect that unrelated corporations do not have the spare time/money/engineering smarts to invest to collaborate to spy on the users of heterogeneous devices far down the line. So you may not quite be able to trust your CPU, or your modem, or your RAM (well... maybe RAM is regular enough that you could sample and decap and borrow a microscope and see if things look fishy), but you can at least make sure they are behaving generally as CPUs or modems or RAM might at the boundaries between them. If you build a system with enough small chips doing simple tasks, though you'll end up with a slow, inefficient thing (think naïve implementations of a microkernel), you no longer need to trust the individual components much as there's little room for them to screw you over at a high level.

And of course it's not true that all of HN is the way I described--it's just a distressingly large, or distressingly vocal proportion. I know it's at least enough people that projects like OsmocomBB are tragically underfunded, underhyped, and under-hacked-on.


"you can be sure, to within measurement precision"

I'm not sure how readily available are this kind of tools for the average joe, correct me if I'm wrong I do also thing is a destructive process, taking as a basis the Core 2 Duo P8400 that the X200 on the libreboot laptop which as per intel page is built on 45nm fabrication[0] and as Wikipedia mention (I know wikipedia could not be a trusted source of information for this) "This attack is not very common because it requires a large investment in effort and special equipment that is generally only available to large chip manufacturers. " [1] which still sounds logical.

about " but you can at least make sure they are behaving generally as CPUs or modems or RAM might at the boundaries between them" you know this is close to impossible, check for example how the RAM tests are performed where a set of patterns are tested and you have some degree of certainty [2] and for which they last concludes "It should be obvious that this strategy requires an exact knowledge of how the memory cells are laid out on the chip. In addition there is a never ending number of possible chip layouts for different chip types and manufacturers making this strategy impractical. However, there are testing algorithms that can approximate this ideal. " notice approximate to ideal.

"If you build a system with enough small chips doing simple tasks" and this is the path to take, a new set of trust-able cpus... but it's not one of the current options.

[0] http://ark.intel.com/products/35569/Intel-Core2-Duo-Processo... [1] https://en.wikipedia.org/wiki/Reverse_engineering#Reverse_en... [2] http://www.memtest86.com/technical.htm


To add to this madness, even if you design your own chips and software stack, the fab that printed your chips might have modified your design. Perhaps you can design a chip that's resilient to modifications?

Is it possible to build a chip that only executes instructions encrypted by your key? I'm not talking about just decoding to L1 and executing plaintext there, but having a full pipeline that can only work on your encrypted instructions.


How about implementing your CPU on a FPGA? It should be easier to verify the physical hardware hasn't been tampered with.

e.x. https://en.wikipedia.org/wiki/Amber_(processor_core)

Of course it would be orders of magnitude slower than a modern CPU.


"To add to this madness, even if you design your own chips and software stack, you the fab that printed your chips might have modified your design" True, changes can be so subtle, remember how the Playstation were hacked on the basis of altering the current provided to the CPU?

"His approach is clever and is known as a “glitching attack“. This kind of hardware attack involves sending a carefully-timed voltage pulse in order to cause the hardware to misbehave in some useful way" from http://rdist.root.org/2010/01/27/how-the-ps3-hypervisor-was-...

edit: typos


Yes, security is hard.

I like RockChip ChromeBooks, with no microcode and libreboot support. You can buy them brand new, unlike x200.


There just aren't the resources, hardware and information wise, you need. RF engineers don't seem to share the drive for free and open source technology that has been the common theme with software.

As a result, we just recently got cheap, widely accessible SDRs, and even they only came about through more or less an accident. And their performance renders them pretty much useless for even just GSM. Now serious SDRs like USRPs have been available for a long time, and their specs make it possible to run GSM and other advanced RF protocols. So if you pay the 2k+ required to get one of these functional SDRs and have the knowledge that is predominantly still only found in dead trees or lectures in fields quite unrelated to CS, you still won't be able to get anything resembling a GSM phone.

There are the obvious legal hurdles, you are not allowed to transmit willy-nilly, not even to just talk to a public GSM network. If you make sure no RF gets out the real world, you can pickup horribly outdated, discarded GSM test equipment from eBay for more big bucks and use that. In related news, they usually have a GSM(+) test network at the CCC congress, but they won't in 2016 because the regulatory agency just auctioned off the last bit of available test spectrum in the right band you could acquire through a lengthy, convoluted petition.

And the list just goes on and on and on.. remember you'll need to first communicate with a SIM (really, Java!) card that is your key into any network. They pretty much mandated the equivalent of a DRM dongle!


The SIM in 3G isn't really much like a DRM thingy, apart from containing a key; it's more like a securid keyfob. The only interesting thing it does (apart from storing constants like its serial number and variables like your phone book and text messages) is to generate authentication responses (to prove to the network that you have it) and derive session keys (so your phone can encrypt and authenticate what it sends over the air).

All the interfaces to this are open: the wire protocol to the SIM and the over-the-air protocols for authentication and security (what a phone needs to do) are specified completely by freely available 3GPP/ETSI specs.

The details of how the SIM computes the derived keys and authentication response are a black box. 3GPP suggests a couple of sample algorithm sets, but there's no way to know whether the card did that unless you have the secret key (shared between the card and a box deep inside the carrier's network).

However, the phone just passes the authentication-request parameters in an authenticate command to the card, passes the result of that in an authentication-response message back to the network, and obtains integrity and ciphering keys (for GSM) from a couple of files on the card (UMTS/LTE use some keys derived from these, but again, it's documented).

An alternative, used in some older US specs like late AMPS, CDMA, and (EIA-136) TDMA is to store the secrets in the phone, generally at manufacturing time. This means that there's no real boundary between the carrier's authentication widget and "your" phone.

Much better to have the identity contained in a SIM that belongs to the carrier, that your phone talks to at arm's length.

Java seems to be for applications other than plain (U)SIM like prepaid account management. The card can ask the phone to add a menu to its interface, show messages, collect inputs, and other things. Maybe all plain USIM apps are written in java these days instead of having a masked ROM like they probably used to.


"The only interesting thing it does (apart from storing constants like its serial number and variables like your phone book and text messages) is to generate authentication responses"

The most interesting thing your SIM card does is run arbitrary programs that can be uploaded to them, without your knowledge, by the carrier:

https://www.defcon.org/images/defcon-21/dc-21-presentations/...

The SIM card is a full computer, with its own CPU and memory, that lives inside your phone and that you have no control over.


These programs can't do much that the carrier couldn't do without your help: set up calls, send/receive SMS/USSD, sort of thing. It's not as if they ran on the application processor as root or had DMA to the app processor's memory.


>somehow even among people who describe themselves as "hackers", most just want to run Android or iOS and buy/run closed-source apps, and are more interested in Javascript and employment [read: eating, having shelter] than reverse-engineering and doing things that have never been done before.

How many opportunities are there to work on secure communications software full time and still put food on the table?


A LOT. But on the other side of the fence. LEO are paying like mad for secure communication solutions. And breaking into others.


* ethics not included


Good money are extremely good at tuning your ethics compass.


Not only is OsmocomBB incomplete, but it's illegal to use. I've heard it mentioned multiple times that the baseband and the full stack that communicates with the modem has to be verified by the FCC, in order to comply with regulation on RF bandwidth and power. Even if you got it working in your new cell phone, using it would be illegal. I'm not how true that actually is when you can flash router firmware to use illegal wifi channels, though.

http://bb.osmocom.org/trac/wiki/LegalAspects#Usingmodifiedph...


Well any RF equipment that you either modified or built yourself is per default illegal to use. You can buy a bluetooth-stack-on-a-chip and talk to it with your Arduino but once you sell that as a product you'll still require FCC certification.

But the FCC isn't overly concerned if you're doing WiFi or Bluetooth, their area are the broad analog strokes, correct bandwidth and correct power. As such, if you use something that has the correct filters in hardware, you'll be just fine though technically breaking the law.


Well, that's actually a problem: if you are forced to use something like OsmocomBB, law enforcement will be more than glad to see they have a new reason to have you in jail.


I'm not convinced that you're average beat cop is looking for OsmocomBB on your phone in order to trump up a reason to nab you. The tried and true method of following you for a couple blocks and interpreting traffic laws strictly is sufficient and requires much less knowledge and work.


There is also https://www.freecalypso.org/ which aims to produce libre firmware for the TI Calypso baseband chipset.

Their mailing list is fairly active.


Please remember that FreeCalypso is not free software in commonly used sense - it uses unlicensed code from leaks. People behind FreeCalypso don't care about law, so they don't mind.


The only problem with the Neo 900 is it's in the ballpark of $1000 and production is very limited because it relies on rare parts.


It's good to draw attention on baseband processors, but there are technical assertions in this post that are probably not accurate (lack of auditing and the notion that you can assess the security of a whole phone system by whether or not there's an IOMMU).

The systems security of modern phones is surprisingly complex. Google and Apple both care very deeply about these problems, and both have extremely capable engineers working on them. Without getting too far into the weeds: they haven't ignored the baseband.


The team behind Replicant reported that they found a baseband backdoor in Samsung Galaxies[0][1]. I think it's perfectly fine to extrapolate from that that you probably shouldn't trust the baseband, even if Google and Apple are looking into it. Until they have a concrete solution shipped, all the in-the-closet work on the problem is meaningless.

0: https://www.fsf.org/blogs/community/replicant-developers-fin... 1: http://redmine.replicant.us/projects/replicant/wiki/SamsungG...


I don't think you should trust the baseband.

My objection is with the idea that you can look at a design, not see an IOMMU, and extrapolate from that the notion that the baseband has full access to the memory of the other chips in the design.

That's a reasonable assumption in a PC design. There may have been a point, for some phones, where it was a valid assumption for phones. It's not with a modern phone design.


In my research on phones from the Unrevoked project (admittedly, 4+ years ago), this was the case: the baseband and the CPU shared the same memory. The baseband memory was carved out from the CPU such that the CPU could not access it, but the microcontrollers serving the baseband had CPU access, as I recall from the Qualcomm boot documentation: the chain of trust from CPU boot was established by the baseband processor, not the other way around.

I would imagine that things have changed a little bit, but the baseband back then, and I imagine still now, is considered to be the ultimately trusted element of the system. I'd be surprised to hear that they've changed so much that the baseband doesn't still have full control over its host system.


The baseband doesn't have full control over phone systems.


You should qualify which specific phone designs you are referring to (or if it's "all new phones", when the cutoff date was). After all, you are responding to someone who has researched this and found (like others) that indeed, the baseband in older phones did have full control.

Your statement is general to the point of misinformation.


Bear in mind that the baseband's host system doesn't necessarily need to be the same as the phone's core systems. You could just isolate the baseband and only communicate with it via a defined interface.


This is why I wish Replicant was more successful. They mention a CM issue to fix it, and nobody did. As an owner of several Samsung devices, the fact there is a wholly proprietary processor in my phone with complete control of it makes me not treat it like a computer.


The premise of the article is flat out wrong. Mainstream smartphones do not provide DMA access from the baseband to the application processor's memory.

The connection is usually HSIC, which is a chip-to-chip USB derivative.

https://www.synopsys.com/dw/dwtb.php?a=hsic_usb2_device

The AP is responsible for setting up buffers for communication and manages its own host controller. But like I2C or even older UARTs, the AP remains in control of the communications.

Yes, basebands need more auditing and a security model more like modern APs (e.g., separation of privileges and exploit countermeasures like ASLR and non-exec). Yes, getting baseband access then lets you monitor regular voice and SMS comms. But no, it does not instantly compromise the AP so using the Signal app would still be secure.


What is the impetus to trust assertions of independence given both processors are still on the same die?

Mobile phones have a readily available control/backhaul channel and there's a long history of carrier enforced device control and state mandated telecom surveillance effecting [sic] the design culture. Qualcomm obviously works with the NSA, if only to protect against infiltration by other intelligence agencies. So it's really a question of whether the NSA is willing to have their root kits require physical installation or not.


The original post talks about DMA being a method the baseband can use to compromise the AP, which is false for any mainstream design. Trust doesn't have anything to do with it; the protocol literally does not support DMA.


The conclusion of the original post isn't wrong though, despite the exact details being out of date by (a scant) 5 years.

I have a hard time believing that HSIC is the entire extent of interconnection, with the processors being on the same die and all. Are you asserting that the baseband and application processors use completely independent rams and flashes? Independent memories seem more expensive (price+power) than a single shared bank with MMU, but since the storage requirements of the baseband are known at design time then perhaps not terribly.

If they aren't independent memories, then the term DMA actually still applies even if the interconnect protocol is not based on it. Mobile literature is quite inaccessible (another symptom), but everything I've seen refers to having an MMU as the advancement. I have a hard time believing that would be controlled by the application processor (leaving the baseband vulnerable), but please correct me with specifics if this is wrong.


You can use this same kind of logic to suggest that any system is broken.

Some of the cores on a complicated mobile device might have their own memories, and some of them might be isolated from the memories of other cores with silicon. I'm sure there are devices where there are insecure cores with no isolation at all --- just like there's a ton of C code that will read a URL off the wire into a 128 buffer on the stack.

The problem you're suggesting device designers have to solve --- allowing core A access only to a range of the total memory available "on the die" --- isn't a hard one.

From the suggestions you've made in your comments --- and I mean this respectfully --- I think you'd be very surprised by the hardware systems design in a modern mobile device. They are in some ways more sophisticated than the designs used for PCs.

So, the point of this subthread is that mobile devices are much more complicated than the simplistic ("no IOMMU? the baseband can read/write AP memory!") model proposed in the article. It makes an OK overall point (we should care about baseband security!) but uses a very flawed argument to get there.


Actually the real problem I'm suggesting device designers need to solve is doing the bare minimum to make the public (ie decentralized vulnerability seekers) believe that the application processor is likely secure from the cell network. Doubly so because the entire history of their industry has been the exact opposite philosophy.

I have no doubt mobile chipsets contain a surprising amount of complexity. I'd love to be surprised by it! But I've only ever run across vague references to various improvements, which are worth just as much as saying "Our code uses AES!".

It's obviously easy to restrict a core to certain memory ranges. But how is that restriction set? Is it fixed in the mask (leading to inflexibility), or is it set through registers? The bullet point is enough to satisfy a PHB's sense of security, but we know it's the details of those loose ends where the exploits lie. And the threat model of Qualcomm is quite different from the threat model of a phone's owner.

Simplistic points like the OP are really a symptom of this not knowing. Can you blame them for not knowing the exact vulnerability? It's like someone picking on a binary blob, saying it can contain a backdoor password. Well, the industry having moved from backdoor passwords to challenge-response isn't really a defense to the overriding point, is it?


I can blame them for not knowing any vulnerability, and then proceeding to make new ones up on message boards, yes.


"New" one? Was shared memory communication actually never that prevalent?


Rather than asking us to take this on faith, do you know of any resources where it's possible to learn about the technical details hidden "in the weeds"?

What I'm able to find looking around amounts to Apple verifying that the firmware loaded matches what's expected--but that's simply checking the binary, and doesn't give users any assurance whether that baseband enables backdoors or not. I didn't find anything about mitigations present in Android.

Articles like this one (http://mobile.osnews.com/story.php/27416/The_second_operatin...) indicate that it's pretty easy for an attacker running a cell tower (so, organized crime, or governments, or a blackhat with a few thousand bucks) to get code execution on (some) baseband processors. How do phone vendors mitigate this? My (admittedly pedestrian) knowledge of typical SoC setup makes it seem like that would be very difficult to do in software.


For one example, see the Android kernel for Qualcomm HSIC baseband interface (baseband-qct-mdm-hsic.c)

https://git.sphere.ly/Lloir/android_kernel_htc_evitareul/tre...

The way manufacturers "mitigate" baseband to main CPU compromise is by using a protocol that allows no initiation from the peripheral device (baseband). It can only talk to the main CPU via a serial-like protocol, not access its memory directly.

Other routes, such as side channel leakage or possible flaws in the main CPU software that interpret messages received from the baseband are still a potential source of problems, but there is no such DMA capability in the HSIC protocol.

It is valid to have general distrust and annoyance with the spy agencies for their actions to create backdoors. But there is no technical basis for this article's claims, and the author should retract them.


There's a difference between "what it does" and "what it can do" however. I mean, whatever well-defined interface is used, hardware design may leave other options open and unused.

Kind of like vmware provides nice interface for folder sharing, but in practice can just write directly to whatever files/memory they want.


It can't DMA. It's not that it chooses not to.


It all depends on particular design. There are even some out there where baseband OS and Linux run under the same proprietary hypervisor. On modern phones it's pretty rare to be able to say with 100% certainty that the baseband OS doesn't have any access to user memory - and often you can say with 100% certainty that it can access other interesting stuff, like GPS or microphone, without any supervision from the user.


I think this is an instance where the truth is somewhere in the muddle. I'm sure there are engineers who care very much about securing cellular devices and making the attack surface as small as possible, probably working for Apple, Google, and perhaps for a few of those nation-state actors as well. I'm equally sure there are time crunched engineers who are more than happy to not go looking for problems when they have a piece of hardware that works (for their employer's intents and purposes), and that there are actors and entities out there who are dedicated to taking advantage of those insecurities.

However, that doesn't make for a very sexy headline.


"the truth is somewhere in the muddle"

Nice Freudian slip. "Muddle" to me sounded like one, probably because of its similarity to huddle, but a bit to my surprise it is a word, with an even more appropriate meaning "an untidy and disorganized state or collection."


It's how you properly process the mint for a mojito:

http://drinks.seriouseats.com/2011/05/cocktail-101-how-to-mu...

Pretty sure the parent poster was getting close to the end of the day, and could almost feel its minty relief on their tongue, hence the slip.


As I understand the author, the implication is that the baseband firmware is closed source. So it's possible that Google/Apple have done an audit, but in my experience (and per the author's suggestion) that's unlikely to have happened.


I believe he is saying that the baseband code is closed source even to Google and Apple. That's where much of the danger lies: it has extraordinary levels of access to the device and there is no accountability for its security.


I think the author is radically oversimplifying the access the baseband has, based on the erroneous notion that the baseband in a phone design is connected in the same fashion as a peripheral is in a PC design --- hence the "IOMMU" bit.


I'd very much ask you to go into the weeds here, because right now basebands implement specifications that are diametrically opposed to any notion of security from the "application processor's" and user's perspective. They trust the network unconditionally.


> Google and Apple both care very deeply about these problems

Not enough to force SoC manufacturers to isolate baseband from the main memory and not enough to make baseband firmware FOSS and transparent.


Semi-related: I feel like there should be an open source dumbphone project. Smartphones have a lot of bells and whistles with a large attack surface, but many people just want to make calls. Sure, this wouldn't fix the baseband issue, but it seems the safest way to ensure isolation between personal information and potentially hostile cellular blobs is to never put the information on the device in the first place. A small, open, dialer-only yet potentially extensible cellular platform would be really welcome, I think. Maybe something like this [1] using the rpi zero as a base with some 3d printed cases, reverse engineer what you can otherwise. Replicant is a noble project, but chasing android versions, screen sizes, video codecs, proprietary hardware locks etc is an uphill battle and kind of a distraction IMHO.

[1] https://www.raspberrypi.org/blog/piphone-home-made-raspberry...

edit: typo



Would really love to see this upvoted more. This basic truth should be common knowledge for privacy-minded or security-minded technologists/developers.

There are lots of reasons GSM won't/is hard to make work. What are the options? As more and more carriers in the USA provide wifi-dongles that are connected to 3G, maybe it's better to just do that, and move off making calls directly from your phone completely?

For example, it might make sense to buy some phone, connect it to a device (or flash it with some software) that makes it essentially a portal for phone calls of sorts, and give it sandboxed access to your network. It's significantly harder for GSM backdoors to be effective if the entire device is sandboxed right? Maybe this way, as you roam around, you can somewhat securely communicate over IP to your call-making device, and make/receive calls?

[EDIT] - Thinking about it, the suggestion is moot, since all someone would have to do is write some software to replay messages, or leak messages or some other nefarious thing, and stick it on the baseband of the device -- even if it can't damage your network it's still quite insecure.

Maybe we should just give GSM up altogether, and start trying to move ourselves (and the world) to only communicating over IP (which we have a shot at securing, assuming modern crypto isn't completely broken)? What is the situation like with completely open source wifi connectivity?


LTE is basically VoIP, a 180° change from the monstrosity of 3G, although the providers still manage to fail spectacularly at it.

https://media.ccc.de/v/32c3-7502-dissecting_volte


LTE is a protocol/standard with more efficient methodology/tech right? -- my main issue was the involvement that certain agencies have with the basebands put into mobile phones and the resistance that people trying to develop completely open-source basebands encounter.

LTE doesn't seem like a solution to this problem; it sounds like just a more efficient baseband. Even if it's easier to reverse engineer, use of it might still be outlawed (as is an issue with the Neo900).

Does LTE use some publicly accessible/modifiable spectrum that I don't know about or something?


LTE, when compared to 3G, drops a lot of complexity and basically defines a data link for IP to flow in. Telephony services are then implemented via IP.

Of course the transmission itself is still as proprietary and filled with regulations as all GSM standards; I just noted that it should be easier to somehow "sandbox" the communication via LTE now than with older technologies.

It's still not the solution and I'm fully aware of it.


Holy shit, you can play so much with VoLTE moved to the application processor.


"There are lots of reasons GSM won't/is hard to make work. What are the options? As more and more carriers in the USA provide wifi-dongles that are connected to 3G, maybe it's better to just do that, and move off making calls directly from your phone completely?"

This is an obvious approach, and the one that I originally pursued when I became worried about baseband exploits, security and privacy. I was looking into using a "samsung galaxy player" (basically, a galaxy S4 with no mobile phone chip in it) and using USB modems to use the cellular network when it suited me.

The problem is, for a variety of weird reasons, a LOT of the realtime voice processing is also built into the baseband, along with the radio functions that we're all talking about here.

So a lot of voice quality and noise cancellation and other things that you would really miss are built into the baseband and difficult to replicate on the main, more general purpose, CPU.


I think what the author was implying is that the code that runs the UMTS/LTE/whatever else stacks is still based on that nineties source, when the firmware was initially conceived for GSM.


The article asserts:

    > It would, in my view, be abject insanity not to
    > assume that half a dozen or more nation-states (or
    > their associated contractors) have code execution
    > exploits against popular basebands in stock.
To me this ignores the flip-side of the argument. If US intelligence services really thought the Chinese and Russians could remotely and invisibly hack all/most smartphones then no-one with access to sensitive information would be allowed to do work on one, unless they're very confident that they've managed to secure their devices without leaving a stone unturned.

Soz Hilz[0].

[0] http://media4.s-nbcnews.com/i/newscms/2015_10/913276/150303-...


Following your logic, any other sovereign non-US state must forbid its citizens and companies to use US technology in order to avoid the US stealing the data. At least they should forbid it to those organisations and citizens who, as you say, access sensitive information.

Still, even after Snowden, the whole world uses Microsoft, Apple, Intel, Google technology everyday.

This is not proof that your logic is wrong, but I am wondering how interested today's intelligence services are in protecting their own companies and citizens from data breaches. It might be that they are more interested in stealing as much data as possible themselves in order to be able to negotiate with foreign states/services.


Couldn't that same argument be used on routers/switches/other equipment that we find back-doors in all the time?


> Modern smartphones have a CPU chip, and a baseband chip which handles radio network communications (GSM/UMTS/LTE/etc.) This chip is connected to the CPU via DMA. Thus, unless an IOMMU is used, the baseband has full access to main memory, and can compromise it arbitrarily.

Indeed. Such design coupled with very obscure and closed baseband firmware is a security nightmare. One should ask, who was pushing for such an approach.


> For devices with cellular access, the baseband subsystem also utilizes its own similar process of secure booting with signed software and keys verified by the baseband processor.

According to the iOS security white paper, the baseband firmware is part of the secure boot chain, and has its own secure boot chain.

This allows me to assume it's very hard to inject or replace the firmware with malicious code. Whether or not the firmware itself has a backdoor or whatever I don't know, but at least one major phone manufacturer knows this firmware is very important for security.


That's just iOS being iOS, on Android the baseband firmware can usually be flashed using fastboot. However, there will presumably still be a bootloader on the baseband processor that checks the authenticity of what you just flashed to it.

It's very interesting to just download a baseband firmware (usually called radio.img) from a random Android forum, unpack it and run strings on the code you get. It'll usually be an ARM processor running a homebrew RTOS, which is concerning enough. When you search for NMEA and realize the baseband processor is running the GPS chip, that's when you start to get doubts. And when you finally realize there's a bunch of audio codecs and the baseband is controlling the microphones, that's when the full force of despair hits you. You don't own it, you don't control any part of it. Your Android doesn't either; it begs the baseband for a slice of its information.


> Testing DDR Read/Write.

> Testing DDR Read/Write: Memory map.

> Testing DDR Read/Write: Data lines.

> Testing DDR Read/Write: Address lines.

> Testing DDR Read/Write: Own-address algorithm.

> Testing DDR Read/Write: Walking-ones algorithm.

> Testing DDR Deep Power Down.

> Testing DDR Deep Power Down: Entering deep power down.

> Testing DDR Deep Power Down: In deep power down.

> Testing DDR Deep Power Down: Exiting deep power down.

> Testing DDR Deep Power Down: Read/write pass.

> Testing DDR Self Refresh.

> Testing DDR Self Refresh: Write pass.

> Testing DDR Self Refresh: Read pass.

> Testing DDR Self Refresh: Entering self refresh.

> Testing DDR Self Refresh: In self refresh.

> Testing DDR Self Refresh: Exiting self refresh.

Yes Galaxy s4 modem, please provide unlimited access to all government agencies of all my phones content and communications.

> Samsung Root CA cert1%0#

Fantastic, you also inject your own root certificate. Thanks.


Looking at the kinds and amounts of just standard software vulnerabilities in those firmwares that have been found in the past, I presume there's plenty of low hanging fruit still to be found there. It doesn't really need to be an intentional backdoor, it can be just good old buffer overflow with remote execution.


Don't modern ARM chips encrypt memory (for DRM reasons)? If that's in use (modulo the area the the baseband needs to read/write), it doesn't matter. An adversary can read the memory, but can't read the encryption key out of the CPU.

If it could, anyone with a logic probe could grab un-DRM'd video data out of the RAM, which would make many people very unhappy.


The Samsung i9300/i9500/etc use separate Exynos application processors coupled with stand-alone communication chipsets from Intel (originally Infineon). Check out the Replicant wiki for more info. The Replicant devices are unfortunately quite old (i9300 being the latest), but newer models do continue the trend (SM-G920H for the S6 international Exynos, I believe). You'll just be stuck on their proprietary OS, or need to fund a bunch of CM/Replicant development.

Of course the same branded phone (e.g. "Galaxy S6") has many different models across the world, most using integrated Qualcomm chips. Honestly, looking at the list of variants and thinking about the conservatism of RF and telecom regulatory regimes, you'd have to be naive to think the whole ecosystem doesn't simply exist under the control of major intelligence agencies. Communications have always been regarded as dangerous.


The article author is not well-informed. You can verify this yourself by physical inspection, via leaked schematics, by paying a teardown analyst or via examination of available documentation for the components used in modern smartphone designs.


There is something in between nothing and a full IOMMU. I've been working with a TMS570 processor lately whose DMA engine supports an IOMPU. This hardware is equivalent to the Cortex-R/M Memory protection unit.

An MPU has all of the same protection domains that an MMU does, except for a few major differences:

- The total number of protected regions is very small (12 or so), such that the hardware cost is somewhat smaller than a TLB cache.

- To offset the small number of regions, the size of each region can be almost any power of two.

- The MPU does not perform address translation, again reducing the hardware cost.

Thus, the kernel can configure the peripheral's DMA engine to only allow access to a page or few.


Based on what I've read from more authoritative sources (or is the author an authority in this area?), this information is outdated:

> It can be safely assumed that this baseband is highly insecure. It is closed source and probably not audited at all. My understanding is that the genesis of modern baseband firmware is a development effort for GSM basebands dating back to the 1990s during which the importance of secure software development practices were not apparent. In other words, and my understanding is that this is borne out by research, this firmware tends to be extremely insecure and probably has numerous remote code execution vulnerabilities.

I've read in several places that basebands now widely use the OKL4 microvisor,[1] based on the formally verified (fwiw) seL4 microkernel, and are much more secure than before. Does anyone know more about this?

[1] https://gdmissionsystems.com/cyber/products/trusted-computin...


> I've read in several places that basebands now widely use the OKL4 microvisor,[1] based on the formally verified (fwiw) seL4 microkernel, and are much more secure than before. Does anyone know more about this? > > [1] https://gdmissionsystems.com/cyber/products/trusted-computin...

Surely baseband processors are not based on seL4, since it currently has no realtime support (though it is in development: https://wiki.sel4.systems/seL4%200.0.1-rt-dev).

OKL4 is based on a kernel of the L4 family (source: https://en.wikipedia.org/wiki/L4_microkernel_family#Commerci...):

> https://en.wikipedia.org/wiki/L4_microkernel_family

seL4 is another kernel of this family that has been formally verified.


I have addressed, in a response to gue5t, how creating a chain of trust may not be a doable goal, as probably every part of the system is untrustable.

However I'm thinking a different approach can be taken, suppose we abstract the different ways for communication a device has and use them as sockets or layers and then create an algorithm that distributes the communication through several channels.

For example two cellphones

-one against another using the light on one screen against the camera of the other

-the vibrator motor captured by the microphone

-introducing certain pattern of noise in Bluetooth communication by the other radios

-communication through steganography

-sending huge amounts of information (the more information the more power needed to discriminate, understand)

-abuse how things are not supposed to work (instead of sending packets in the correct order, use ping as a way of sending data by crafting the requests).

-Custom network stack

-use of a customized version of encryption with extra large keys (think 20480 bits instead of 2048)

-and last but not least, use an algorithm that mutates the distribution logic based on a certain time-dependent algorithm (in the same fashion viruses mutate themselves)... using as the key, for example, your voice

edit: explanation


> extra large keys (think 20480 bits instead of 2048)

Don't think it's a wise idea. This was briefly discussed recently: https://news.ycombinator.com/item?id=10794991


Thank you for the comment, I learned something new today.

Summary: "if you could break a 4096-bit RSA key, you've probably found a fundamental weakness in RSA that means you should move to a different algorithm entirely"


Would this device be protected? http://www.gsmarena.com/blackberry_priv-7587.php


This is just a special case of the fact that there's no secure anything.

In 2016 that's just the price you pay for using computers. You just have to live with it. Mitigate it the best you can, rely on the ol' "mossad or not mossad" strategy now and then, hope for the best, etc. If you have a strong need for increased security, well, god help you (spoilers: you will receive no help), you're going to pump a lot of effort into building something that will still have tons of vulnerabilities.


Just get an FSF laptop for security. You can even trust the hard drive if you use software encryption of all your data. I'm not sure what remains insecure on that class of hardware?


They have a better story about firmware than other devices, but they're still running Linux and GNU software, which often has remotely exploitable vulnerabilities. Just today OpenSSH (which is neither Linux nor GNU but which you would obviously use on your FSF laptop) announced an extraordinarily serious remotely exploitable vulnerability. That's presumably not the last vulnerability of that severity in an operating system you install today onto your FSF laptop.

It's not obvious that FDE will totally mitigate hard drive firmware attacks because your boot loader and kernel are likely running from an unauthenticated partition (for instance, most Linux-based systems with LUKS today have all of /boot on a separate non-LUKS partition, and something like a GRUB boot sector too). If the hard drive understood how to recognize the kernel in /boot, it could inject a vulnerability into it. In a few presentations a couple of years ago, I walked through a real example of how a single bit flip in a binary can introduce (or re-introduce) a fencepost error that can compromise security, because the opcodes for extremely similar yet different conditional branches often differ by a single bit. One form of the conditional branch like jump-if-less may be the correct invariant, while another form like jump-if-less-or-equal may be the exploitable erroneous form of the same loop.

I'm not knocking people's work on trying to get a handle on what firmware is running on their devices; I think that work is great. The trouble is that attackers are looking for exploits and persistence in every layer of the system, so you won't get a silver bullet against all attacks by shoring up one aspect.

Good stuff on this: Halvar Flake's "Why Johnny Can't Tell if He is Compromised", and Joanna Rutkowska's recent CCC presentation on persistent state.


> You can even trust the hard drive if you use software encryption of all your data.

Hard drive replaces your GRUB payload with a compromised one that records your encryption key. You lose.

You could put your bootloader and /boot on a USB stick, but then you're putting the same trust in the stick.

Unfortunately, hardware security is really difficult if your threat model does not assume your vendors are trusted. In fact, I don't think it's actually possible today to get verifiably trustworthy hardware under that threat model.


Or you can just use WIFI and turn the baseband off like I do. The cell companies are all crooks anyways (in the US), and I don't want to do business with them.


On Android:

1. Put *#*#4636#*#* into the Dialer (or use any application available on the Market, like 4636 — takes you to the same service screen)

2. Choose Phone Information

3. Press Turn off radio

(source) http://forum.xda-developers.com/showpost.php?s=c8b62d54e971b...

(more discussion) http://android.stackexchange.com/questions/7133/how-do-i-tur...


for easier remembering and correctly formatted:

    *#*#INFO#*#*


Problem is, when you use VoIP over WiFi, you lose echo cancellation, because the echo-cancel hardware resides in the baseband and is not used in WiFi calls.


You may also consider echo cancellation software that is capable of doing server side echo cancellation that can handle the long echo tail. For example, take a look at the options provided by SoliCall.


Just use a headset that supports echo cancellation in hardware?


Indeed, if you have a second cell modem or smartphone, the modem can connect to the cellular network and provide an IP over Wifi, and the smartphone can use wifi to connect to it. Thus the smartphone is protected by its OS's firewall and any compromises stay on the cell modem, the same way they do with home internet connections.


Doesn't the baseband control wifi too anyway? You should see the amount of things the SoC/baseband does.


The baseband chip is a separate chip from the one providing wifi, bluetooth, nfc, etc. Essentially, the wifi/bt/etc chips fall on the computer side and can be controlled/manipulated by the phone's OS. The baseband chip is a standalone system that is controlled by the cell towers, and then is tied into the phone's CPU directly. Phone builders essentially buy a baseband chip that is certified on different network types and add it to their PCB.

In reality, the baseband should be connected more like a serial port (with some audio channels to the mic and speaker). In fact, it's treated very much that way in software - you interact with it by sending AT commands. But, as others have pointed out, it can send commands directly to the phone's CPU and access the phone's internal memory.


Interested in the answer to this question.


"...no secure smartphones." There's always a weakness somewhere. Makes me wonder why governments want backdoors. Just in case we manage to make a perfectly secure device? It's too onerous to ask a judge for permission to "hack" a phone through weaknesses rather than asking permission for an escrowed key?


How about those http://www.cryptophone.de/en/company/news/gsmk-introduces-ne... phones? Is a "baseband firewall" just a gimmick?


It's basically (on the older Samsung models) a Stingray detector, plus it will shut down the device if activity is detected in the baseband CPU while the application CPU is dormant, meaning shenanigans like 'stealth' SMS spamming or an evil OTA update are going on.

You can always walk around with a portable hotspot and leave the phone on airplane mode. Only use textsecure/signal and download a free software reversed clone of Google services app to use GCM. If going to all this trouble to avoid targeted spying probably easier to just buy a chromebook with RockChip instead and use OTR or Signal whenever it comes out with desktop client (if not already, I haven't kept up with their current status of porting).


Tor has a discussion (2014) of Android security, including baseband processors, https://blog.torproject.org/blog/mission-impossible-hardenin...


When I spoke to John Callas he said there was a serial link to the baseband on the Blackphone. I might be misremembering, but it certainly is an issue that designers are working on solving.


Why is it that the baseband has full access to main memory?


Because most chips and peripherals that work at a decently high speed on any computer use DMA. A parallel port on a computer, a thunderbolt connector, Ethernet cards, etc all use DMA for similar desires to speed up communication, and all can have the potential to be attacked. In order for a phone to give decent performance, DMA becomes more and more of a necessity.


They don't.


Integration, miniaturization, power management, cost reduction... it's the easiest way, so when you disregard proper isolation and security, then it actually seems like the way to go.


So what are the tools and knowledge required for one to tinker hardware enough to understand and build security for smart phones?


That's right. Focus on the baseband is kind of the new fad in mainstream ITSEC. These problems are long-known in high-assurance as the cert requires all things that can compute, store, or do I/O to be assessed. The reason is that, historically, these were all where attacks came in. I'm pretty tired but I can do at least a few points here.

1. Software. The phones run complex, low-assurance software in unsafe language and inherently-insecure architecture. A stream of attacks and leaks came out of these. The model for high-assurance was either physical separation with trusted chip mediating or separation kernels + user-mode virtualization of Android, etc so security-critical stuff ran outside that. There was strong mediation of inter-partition communications.

2. Firmware of any chip in the system, esp boot firmware. These were privileged, often thrown together even more, and might survive reinstall of other components.

3. Baseband standards. Security engineer Clive Robinson detailed many times of Schneier's blog the long history between intelligence services (mainly British) and carriers, with the former wielding influence on standards. Some aspects of cellular stacks were straight designed to facilitate their activities. On top of that, the baseband would have to be certified against such requirements and this allowed extra leverage given lost sales if no certification.

4. Baseband software. This is the one you hear about most. They hack baseband software, then hack your phone with it.

5. Baseband hardware. One can disguise a flaw here as debugging stuff left over or whatever. Additionally, baseband has RF capabilities that we predicted could be used in TEMPEST-style attacks on other chips. Not sure if that has happened yet.

6. Main SOC is complex without much security. It might be subverted or attacked. With subversion, it might just be a low-quality counterfeit. Additionally, MMU or IOMMU might fail due to errata. Old MULTICS evaluation showed sometimes one can just keep accessing stuff all day waiting for a logic or timing-related failure to allow access. They got in. More complex stuff might have similar weaknesses. I know Intel does and fights efforts to get specifics.

7. Mixed-signal design ends up in a lot of modern stuff, including mobile SOC's. Another hardware guru that taught me ASIC issues said he'd split his security functions (or trade secrets) between digital and analog so the analog effects were critical for operation. Slowed reverse engineering because their digital customers didn't even see the analog circuits with digital tools nor could understand them. He regularly encountered malicious or at least deceptive behavior in 3rd party I.P. that similarly used mixed-signal tricks. I've speculated before on putting a backdoor in the analog circuits modulating the power that enhances power analysis attacks. Lots of potential for mixed-signal attacks that are little explored.

8. Peripheral hardware is subverted, counterfeit, or has similar problems as above. Look at a smartphone breakdown sometime to be amazed at how many chips are in it. Analog circuitry and RF schemes as well.

9. EMSEC. The phone itself is often an antenna from my understanding. There's passive and active EMSEC attacks that can extract keys, etc. Now, you might say "Might as well record audio if they're that close." Nah, they get the master secret and they have everything in many designs. EMSEC issues here were serious in the past: old STU-III's were considered compromised (master leaked) if certain cellphones got within like 20 ft of them because cell signals forced secrets to leak. Can't know how much of this problem has gotten better or worse with modern designs.

10. Remote update. If your stack supports it, then this is an obvious attack vector if carrier is malicious or compelled to be.

11. Apps themselves if store review, permission model, and/or architecture is weak. Debatable how so except for architecture: definitely weak. Again, better designs in niche markets used separation kernels with apps split between untrusted stuff (incl GUI) in OS and security part outside OS. Would require extra infrastructure and tooling for mainstream stuff, though, plus adoption by providers. I'm not really seeing either in mainstream providers. ;)

That's just off the top of my head from prior work trying to secure mobile or in hardware. My mobile solution, developed quite some time ago, fit in a suitcase due to the physical separation and interface requirements. My last attempt to put it in a phone still needed a trusted keyboard & enough chips that I designed (not implemented) it based on Nokia 9000 Communicator. Something w/ modern functions, form-factor, and deals with above? Good luck...

All smartphones are insecure. Even the secure ones. I've seen good ideas and proposals but no secure[ish] design is implemented outside maybe Type 1 stuff like Sectera Edge. Even it cheats that I can tell with physical separation and robust firmware. It's also huge thanks to EMSEC & milspec. A secure phone will look more like that or the Nokia. You see a slim little Blackphone, iPhone, or whatever offered to you? Point at a random stranger and suggest they might be the sucker the sales rep was looking for.

Don't trust any of them. Ditch your mobile or make sure battery is removable. Don't have anything mobile-enabled in your PC. Just avoid wireless in general unless its infrared. Even then it needs to be off by default.


What does one need a secure smartphone for?


Between -sec, cloudcuckoolander designers, and architecture astronauts it feels like all the fun has vanished from tech.



