There was a great Defcon talk about this called The Secret Life of SIM Cards that I can recommend watching (they release the video for these some time after the conference).
The talk itself was about a group that runs an enormous camping trip (I hope that phrasing doesn't diminish it) called Toorcamp, a few thousand people who thought it would be fun to also put together their own cell network just for themselves. They bought and programmed SIM cards and hid puzzles in the programs on them.
I wrote my Master's thesis on software platforms for smartcard applications in 1999. An interesting platform running javacard apart from SIMs is/was the Java iButton from Maxim (then Dallas Semiconductor).
Also, all ATM cards which are smartcards (i.e. almost all of them in countries such as France and Norway) can also hold several applications. The banks just don't allow it. In theory you could, even with today's technology, buy a blank card (say, with a David Bowie picture if that's your thing) and have the bank, Visa, Mastercard, grocery loyalty programme, library card, frequent flyer applications etc. on it. Just carry one card! But no, everyone wants to own the card and have their logo on it. Sigh.
I'm in London and they are rolling out contactless applications for certain things now. Subway sandwich shops all take them. On London buses you can pay by swiping your debit card, and there is one bank that offers a combined debit card/Oyster card (the Oyster card is used on all London transport).
We've had them in Poland for a few years now; virtually every single place has contactless terminals. In the UK I managed to use mine at Greggs, and the lady working there had absolutely no idea what I'd just done; she was very confused after I told her that they have contactless terminals. Other than that, Subway and McDonald's have them.
One of the side effects of software eating the world is that the world becomes more exploitable. I expect that over time we may see the emergence of general 'software building codes' much like there are physical building codes, and more importantly liability associated with failing to provably meet such codes.
The current practice of 'random person implements firmware that controls this chip' and the 'no warranty etc. etc.' disclaimers will, I predict, be replaced by manufacturers who are willing to warrant their code.
I am getting tired of seeing physical building codes thrown around in comparison to software. I've looked at residential building codes, because I've done my own work like building a deck, framing and finishing my basement, and researching how to build concrete & steel suspended floors, or wood floors - seeing what their spans are - seeing how lightweight steel framing vs wood framing works - it's quite fascinating. (I was considering designing my own house, but the wife shot that down)
And there is simply NO COMPARISON in complexity to software. In physical building, you over engineer and get redundancy. You also know that almost everyone builds plumb and level, rectangular rooms, with easy span and shear calculations. You also know that wood follows a predictable strength pattern. So you put in enough nails of the right length, to enough wood of the right thickness, and you're good. How much intelligence do you think you need to follow a building code? Building materials and techniques change GLACIALLY compared to software. (A good thing!)
Whereas in software, one single bit flip will just fuck you up.
The complexity of software doesn't preclude standardized "building codes." Obviously the codes would have to be just as complex, but software could be used to check that those codes are met. It wouldn't be perfect, but current building codes for physical buildings also have their flaws (every building [vs design] needs to be physically inspected by a person [much easier to exploit than a machine]).
These already exist; for example, it is very difficult to build biomedical software that gets hooked up to a living thing, even if you use it only for research. In practice such a hard realtime guarantee can only be met by a device like an FPGA. All of this is codified.
The real issue here is why such codes don't exist for this particular problem.
Between writing and formally verifying HDL, logic synthesis (often using third-party or vendor-provided blocks or soft IP, mind you), behavioral and logic simulation (software!), layout, MDP, and building test harnesses that actually test the final hardware, I would actually trust a lower-level language compiled using a well-known compiler and running on a small RTOS on a proven CPU more than I'd trust a new design synthesized into FPGA or especially ASIC form.
You probably don't want a system where all the hardware and software for controlling mobile radios is readily accessible to and modifiable by anyone. You might think you do, but the first time you try to call an emergency service and you can't get through because one idiot somewhere in range of the same base station has screwed up his debugging code and jammed a control channel, you'll change your mind.
I've been the guy who drives around in a truck with a lot of mobile scanning equipment to try and figure out where the rogue device is. There is no magic button like in the movies, where they can immediately triangulate the source of the interference to within a 5 cubic metre box. You basically have to rely on simple physics and boots on the ground. The device you're hunting for isn't playing nicely, so any assumptions you could normally make based on things like which base stations it's in contact with won't necessarily be valid. Things are a bit smarter in modern networks than they were back when I worked in the field, but physics is still physics.
In short, there is a legitimate justification, born of experience, for every telecommunications regulatory authority in the known universe requiring this stuff to be certified before you can legally use it. This is also why in some jurisdictions agents acting for telecommunications regulators have certain legal rights to access private property.
Of course this only affects the radio equipment. I see no reason it should be necessary or possible for such software to have any control over other integrated peripherals such as cameras, speakers, microphones or local storage. And the primary concern is people who could modify the code and break the network, not preventing any legitimate audit to prove that devices are only doing what they say they're doing.
The FCC already regulates who can broadcast what over the airwaves. And if you're not licensed for that part of the spectrum, the source code will do you a fat lot of good with your radio experimentation.
Nobody is asking for some sort of hacker radio anarchy, here. They're asking to see the source code for machines they own, machines that reside in their pockets, machines that are responsible for storing and communicating their most sensitive personal data.
If you cause the device you own to operate according to your own will (i.e. the core concept of FOSS) instead of the will of the carrier, there is a strong likelihood it will cause a denial or degradation of service for everyone else.
Verizon has the right to transmit on spectrum allocated to it using consumer devices as its agents. It employs engineers and QA processes to make sure that any device transmitting on its spectrum plays well with others before it is allowed to leave RF-isolated testing facilities.
The public does not and should not have the right to transmit on Verizon's spectrum, even using devices they own which are legally and technically capable, except according to Verizon's carefully vetted programming. If they were able to run their own radio firmware, you'd have the situation described in the parent.
Cellular radios necessarily cannot be open source. The source could be released for inspection and audit, but it cannot be possible or permissible for you to run modified source on "your" radios.
Open source != free (libre) baseband hardware that you can directly modify the software on. Being able to compile and modify is fine and has no impact on the network if you're not able to run it on the primary network.
A country could enforce openness of the source code for imported software and firmware.
- If Toyota (or any car manufacturer) wants to import cars into my country, then they better show us the sources of their firmware and software (and let us re-compile it and re-install it, to make sure it corresponds to the embedded code). And let the papers compare the code quality of Toyota vs. BMW.
- If Microsoft (or any software vendor) wants to import software into my country, then they better show us the sources of their systems and applications (and let us re-compile and re-install them, to make sure they correspond to the binary code and don't contain backdoors for the NSA (or the MSI, or MI5, or whatever)).
- and so on.
And actually, citizens can do the same at their level, not letting enter their house any device or software whose code is not open source or even libre software (so they can recompile it and reinstall it on their hardware).
But a country has more weight than a few citizens who would be written off as lunatics, and it has more resources to analyse and validate the software and firmware.
This is essentially what Alexandria used to do: any ship coming to port with any books on board was required to give the library at Alexandria those books for as long as it took for their scribes to copy them. The library then gave the ship owners the copied version (because hey, data is data, what does it matter if you have the original or secondhand copy?)
This helped propel a lot of the world's best ancient thinkers, including Euclid, Archimedes and Eratosthenes.
> And actually, citizens can do the same at their level, not letting enter their house any device or software whose code is not open source or even libre software (so they can recompile it and reinstall it on their hardware).
I tried that, but at the moment it means you can't even own a cellphone, etc.
That's easy to say, but how will you financially protect the company developing the software/hardware if their competitors can just copy the work? Or, stated the other way round, how will you keep companies incentivised to develop new software/hardware?
Actually quite the opposite. If legislation were passed that required an entire industry to open source their firmware (automotive, voting machines, medical, etc), it would provide better IP protection. Did company A copy company B's code? Well it's open source ... go look! As opposed to being able to hide things in binaries.
But yes, this only applies if the competition must also open their kimonos. That seems practical for industries where regulation is required or provides reasonable benefits to society (automotive, voting machines, medical, etc).
I think it is a consequence of the fast improvements in functionality made. People would rather be on a feature rich beta branch, than a safe stable branch. I also expect this to change as the industry matures.
After some particularly huge disasters I've had to deal with coming onto new projects and with general consumer electronics, I'd go for the latter every time both as a consumer and company these days. Time is valuable and losing it to a feature packed unreliable mess is a big risk to that time.
You're right - there is a tradeoff between features and stability, but "consumers vs companies" is an oversimplification. It's about the probability of something breaking, and the consequences of what happens when it does.
For a company with a large workforce, the probability of something breaking is higher - many different machines being used in different ways, and the cost of a botched upgrade can run into millions, with people losing their jobs, and legal action being taken.
For consumers, the risk is typically lower and less costly if it goes wrong. If a Facebook upgrade causes the app to crash repeatedly, it means I can't post for a while, so I'll just upgrade and hope for the best. But if a personal finance or medical app goes wrong the consequences could be extremely serious, so stability is more important.
Extending that metaphor, a profession could evolve around zoning for permissible uses of code based on types of software and intended audience, much like urban planners do with zoning codes and city master plans.
> applying pretty much any "software building codes" to the web app industry would probably kill startup culture entirely.
Not necessarily. To use the analogy with the building industry, there are different requirements depending on what the building will be used for. Startups holding or processing financial information might have one set of regulations, whereas those hosting cat pictures might be less rigorous.
But at the same time, the consequences of account information leaked from the cat pictures site being used against the site holding financial information shouldn't be discounted (passwords, social engineering info, etc.).
To continue the (somewhat stretched) building analogy my house falling down doesn't affect yours, but someone stealing my access codes from my poorly secured building potentially could affect the security of yours.
This is very unlikely, because such embedded software operates at an extremely low level and is completely dependent on a particular hardware design (not just the chip, but the whole architecture). It's extremely hard to make that portable enough to be sellable, especially with the speed of technology advances today.
... The voice came from an oblong metal plaque like a dulled mirror ... The instrument (the telescreen, it was called) could be dimmed, but there was no way of shutting it off completely. (1.1.3)
Oceanians live in a constant state of being monitored by the Party, through the use of advanced, invasive technology.
It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide. In any case, to wear an improper expression on your face (to look incredulous when a victory was announced, for example) was itself a punishable offense. There was even a word for it in Newspeak: facecrime, it was called. (1.5.65)
Is the Google input box a door to the world or a window into your mind?
I am assuming that the RTOS has direct and full unrestricted access to the hardware such as the camera and microphone? If so then I would also assume that an over the air attack to silently suck data from the camera and microphone would be pretty easy for those with access to the RTOS (such as governments)?
I know there has been software to do just this in the past on some Nokia devices but I would assume (I am doing that a lot in this post!) it is just as possible in pretty much every mobile phone?
Anyone with knowledge of this care to comment on my assumptions?
I would also assume that an over the air attack to silently suck data from the camera and microphone would be pretty easy for those with access to the RTOS (such as governments)?
This is correct. The rule of thumb is this: If you need to avoid being tracked, do not under any circumstances carry a cell phone unless you have removed the battery. Even if it's powered off, it can still be activated to remotely track you as long as the battery is in it. This tactic was used in catching the recent serial killer Luka. http://en.wikipedia.org/wiki/Luka_Magnotta
From the article: "His cell phone signal was traced to a hotel in Bagnolet, but he had left by the time police arrived."
You can bet that, since he was on the run, his cell phone was off.
There are other examples besides Luka. Circumstantial evidence is very strong that law enforcement can track you if you've powered off your phone but haven't removed the battery.
(I feel so strange posting this comment, since those who would benefit from this advice are probably of dubious character.)
"There is reason to believe phones have been remotely hacked by law enforcement using carrier credentials to leave the cellular radio running and registering with the cell network even after the off button has been pushed and the phone appears to be off. Starting point for further reading: http://www.brighthub.com/electronics/gps/articles/51103.aspx "
tlb = Trevor Blackwell, one of the best electronics hackers in the world. You may know him as the creator of the first robot that walks like a human. http://paulgraham.com/anybots.html
Now I've presented circumstantial evidence and an appeal to authority, so of course feel free to doubt me. But don't be surprised to discover you've been tracked when carrying a powered-off cellphone with a battery in it.
I was skeptical the last time this subject came up, too, but someone pointed out that for an attacker with root access, it would be easy enough to reprogram the phone's power button to act like it was turning the phone off without actually doing so.
Given the fact that US law enforcement and security agencies are (now) indisputably known to be in the zero-day hacking business, the burden of proof is on those of us who've always claimed these stories were exaggerated.
Personally, I no longer feel confident about calling bullshit on much of anything. With the US government's infinite budget and unaccountable influence over device manufacturers and telcos, no hack is impossible.
That's the irritating thing -- that so much money is being spent for such poor results. The NSA isn't very cost-effective at preventing terrorist attacks, and healthcare.gov isn't very cost-effective at providing health insurance.
When confronted with my government's incompetence, I never know whether to be pissed off or relieved.
NSAs job isn't just to prevent terrorist attacks, it's also to gather and analyse information for the US government and to keep US communications secure. I don't know whether or not they're "cost effective", but I don't think anyone would doubt that they've given the US a huge strategic advantage.
It's purposely constructing a false reality to make a victim think they are going insane. I'm taking some rhetorical liberties by setting aside that second part and using the term to refer to the false information only.
>the baseband can control everything
Why would that be the case? Baseband in this context refers to a frequency range, e.g. taking data (audio, SMS, etc.) and modulating/demodulating it. It should take raw data at the bottom of the OSI model and prep it for the RF front end, and vice versa. I may absolutely be wrong, but it doesn't have any connection to the microphone other than receiving preprocessed digital audio from the application processor. Look at the interface for, e.g., a TI CC2500; that's the sort of device that's being discussed.
"Baseband" in this case refers to a dedicated CPU and software that handles all of the high-level cellular radio protocol work. It's where the logic would be implemented to handle the data coming from a chip like the cc2500.
"I feel so strange posting this comment, since those who would benefit from this advice are probably of dubious character"
I don't think there's too much social harm done in pointing this out. It seems to be so well known that it's referenced (visually or even verbally) in movies and TV shows like Breaking Bad, where the criminals take the batteries out of their phones as an extra precaution against being tracked.
How does airplane mode fit in with this? Until recently, didn't FAA regulations require cellphones to have an explicit means of halting ALL radio communication? If the phone's radio is still potentially active even when the device is "off", how could these baseband OSes get government certification?
There is no confirmation of whether the baseband processor can be reached while the device is off or in airplane mode.
And I join the crowd who think that is impossible. I bet someone would have noticed weird patterns if the baseband kept working despite the device being off (speakers picking up 2G interference, battery drain, interference with other devices, etc.).
The radio interface could listen and wait without even replying, i.e. it wouldn't make the GSM RFI speaker noise. If governments, carriers and law enforcement could all manage to use this so incredibly rarely that it's never been observed... then it could be real.
Given the types of people that would have to have access to or knowledge of this, though. For example, someone who suspects their partner of infidelity and happens to be on the police liaison team of a carrier, say...
I agree it's very unlikely, someone would have noticed it by now.
Or reply in some side channel, piggybacking on the next (expected, i.e. when the user has switched back to normal mode) UMTS radio packet, for example. I don't know the packet structure, but I expect there are areas that could be repurposed covertly. We did, after all, fit the entire SMS system into such a space.
"Airplane mode" is essentially and AT command sent to the baseband to disassociate and go to sleep, it doesn't disable the baseband CPU, DSP or anything else.
You could argue that "off" is the same thing; for instance, many Qualcomm devices boot with the BP first and can do a lot before the AP is even taken out of halt, without initializing the LCD display, backlight, etc.
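The "airplane mode is just a command, not a power cut" point can be illustrated with a toy model. The AT+CFUN functionality levels come from 3GPP TS 27.007 (1 = full functionality, 0/4 = RF disabled); the class and its behaviour here are purely illustrative, not any vendor's actual firmware:

```python
# Toy model of a baseband's AT command handling: "airplane mode" is a
# state change requested over the AP<->BP link, and the BP core itself
# keeps running. Real basebands implement hundreds of vendor-specific
# commands; this sketch handles only AT+CFUN.

class ToyBaseband:
    def __init__(self):
        self.cpu_running = True   # the BP core never stops in this model
        self.rf_enabled = True

    def at(self, cmd: str) -> str:
        cmd = cmd.strip().upper()
        if cmd.startswith("AT+CFUN="):
            level = int(cmd.split("=", 1)[1])
            if level == 1:
                self.rf_enabled = True        # full functionality
            elif level in (0, 4):
                self.rf_enabled = False       # radio off, CPU still alive
            else:
                return "ERROR"
            return "OK"
        return "ERROR"

bp = ToyBaseband()
print(bp.at("AT+CFUN=4"), bp.rf_enabled, bp.cpu_running)  # OK False True
```

The point of the sketch: after "airplane mode" the `cpu_running` flag is untouched, which is exactly the property the comments above are worried about.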
The addition of a mechanical power switch would probably make for a successful product nowadays. It shouldn't be too hard to implement on individual phones if you have nimble fingers (not sure though; it's been about a decade since I last dissected one).
I worked in telecoms a few years back. One of the things I worked on was an anti-spam system for SMS. It had the ability to blacklist, throttle and log messages based on a number of identifiers, including IMEI, phone number, tower ID, service centre ID and message content (typically binary patterns in non-text payloads, but there was nothing stopping it from being used to block messages containing certain text or keywords).
So if we were using these numbers to block messages then I'd absolutely expect government agencies to use them to monitor or track phones.
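A minimal sketch of the kind of rule matching described above. All the field names, identifiers and rules here are invented for illustration; a real carrier-side filter would also handle throttling windows, rule priorities, and binary SMS encodings:

```python
# Illustrative SMS filtering: each rule matches on identifiers (IMEI,
# sender, etc.) and/or a byte pattern in the payload, and the first
# matching rule's action wins.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Sms:
    imei: str
    sender: str
    tower_id: str
    smsc: str
    payload: bytes

@dataclass
class Rule:
    action: str                       # "block", "throttle" or "log"
    imei: Optional[str] = None
    sender: Optional[str] = None
    pattern: Optional[bytes] = None   # substring match on the raw payload

def apply_rules(msg: Sms, rules: list) -> str:
    """Return the action of the first rule whose set fields all match."""
    for r in rules:
        if r.imei is not None and r.imei != msg.imei:
            continue
        if r.sender is not None and r.sender != msg.sender:
            continue
        if r.pattern is not None and r.pattern not in msg.payload:
            continue
        return r.action
    return "deliver"

rules = [
    Rule(action="block", pattern=b"\x05\x00\x03"),  # a binary payload pattern
    Rule(action="log", imei="350000000000001"),     # a watched handset
]
msg = Sms(imei="000", sender="+447700900000", tower_id="T1", smsc="S1",
          payload=b"\x00\x05\x00\x03\x01")
print(apply_rules(msg, rules))  # block
```

The same machinery blocks on content or logs by handset, which is why the parent infers that identical identifiers make monitoring straightforward.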
Not necessarily your GPS; using it consumes more power (~30-100 mA @ 3.3 V) than detecting your location passively using the cell towers. It also takes longer to get a precise fix unless the phone is out in the open.
Then, when you get stopped by the cops, you have to explain why your phone is wrapped in foil, for which there can only be one possible reason. You might as well also carry a set of lock picks, a crowbar and an acetylene torch, perhaps with a set of scales, so that you can be processed that bit easier...
Not sure how it could be fully applied in the police interrogation room, having been apprehended in a dark alleyway with two large bin bags full of steaming skunk weed, freshly purchased off a Vietnamese gentleman insistent on counting every note of the £20,000 just handed to him. Maybe that is just an edge case though.
Perhaps a better idea (and market opportunity) could be a 'walkers rucksack' that has a pocket for your cellphone. This pocket could be lined specifically to act as a Faraday Cage, explicitly so that you can have easy access to your phone for maps etc., yet be fairly certain that your day strolling in the hills will not be interrupted by the office, the wife and other cold callers. Such a bag could be plausibly denied in a way that plain old tin foil could not be.
Don't feel bad about yourself. I had a history teacher in college who was a very prominent political critic of Hugo Chávez's government. Each time she was going to talk about politics, she took the battery out of her cell phone.
No, not really. The actual answer is more complex.
The high data rate required by the imaging module means that it doesn't run on a shared bus. There are a number of standards, some proprietary and others loosely defined, but they all use direct connections to the image processor (which is in many cases part of the SoC).
Probably not going to access this one, because the microphone wouldn't be on any sort of bus. It'll have a direct connection to an ADC, which in turn has a direct connection to an audio signal processor in the SoC.
This depends on the implementation. If they use I2C, there's a good chance they'll be on a shared bus on which the baseband processor is also located. However, accessing that data requires details about the specific sensor. If a particular model of phone is being targeted, this isn't too hard. For example, I know the iPhone 4 uses the LIS331DLH accelerometer. I can find the datasheet for that part and then write a simple driver to access its data.
If they use SPI, which isn't a bus-based protocol, there's little chance of accessing the information directly. SPI devices can be daisy chained or separately selected to put more than one on a single port, but in such a configuration there's only ever one master (which would be the CPU).
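To make the "find the datasheet, write a simple driver" step concrete, here is a sketch of reading one axis from an ST accelerometer like the LIS331DLH. The register addresses (WHO_AM_I = 0x0F, OUT_X_L/OUT_X_H = 0x28/0x29) are from my reading of the ST datasheet; double-check them before relying on this. A fake register map stands in for the real I2C bus so the logic is self-contained:

```python
# Sketch: assembling a signed 16-bit accelerometer sample from two I2C
# register reads. On real hardware, read_reg would issue transactions on
# /dev/i2c-*; here a dict plays the sensor.

WHO_AM_I = 0x0F       # should read 0x32 on a LIS331DLH (per datasheet)
OUT_X_L, OUT_X_H = 0x28, 0x29

def read_x_raw(read_reg) -> int:
    """Combine low/high output registers into a signed X-axis sample."""
    raw = (read_reg(OUT_X_H) << 8) | read_reg(OUT_X_L)
    return raw - 0x10000 if raw & 0x8000 else raw   # two's complement

# Fake sensor reporting -256 counts on X (raw value 0xFF00).
fake_regs = {WHO_AM_I: 0x32, OUT_X_L: 0x00, OUT_X_H: 0xFF}
print(read_x_raw(fake_regs.__getitem__))  # -256
```

That is the entire "driver" for one reading, which is why a baseband with bus access plus a known phone model is considered enough to pull sensor data.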
There's also the possibility of reprogramming a PINMUX to move access from the AP to the BP on the same SoC for something like SPI.
Essentially, most of the pins on the SoC are re-programmable to have different functions or connect to different logical blocks within the SoC.
Along these lines there's also bit-banging SPI or I2C, which should work fine for infrequent updates from an accel or compass (and now things like pedometers being added to devices.)
As for the microphone, if it's being read by the audio DSP, it's certainly accessible from the baseband, at least on Qualcomm. If nothing else you can load a DSP module that allows access to the raw PCM (or vocoder) data.
Same is probably true for the camera, which is usually on an HSIF or similar bus connected to the DSP.
TL;DR: yes, the BB CPU has full access to everything, including the "main" CPU and all running processes. Why not, right? :)
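The bit-banging idea mentioned a few comments up is simpler than it sounds: an I2C byte write is just eight data bits clocked out MSB first, plus a ninth clock for the slave's ACK. This sketch records the pin states into a list instead of driving GPIOs, so the waveform logic can be checked without hardware; it ignores open-drain behaviour, timing, and clock stretching:

```python
# Minimal bit-banged I2C byte write with the (SCL, SDA) pin states
# recorded as a trace rather than driven onto real pins.

def i2c_write_byte(byte: int):
    trace = []                       # list of (scl, sda) samples
    def clock_bit(sda: int):
        trace.append((0, sda))       # present data while clock is low
        trace.append((1, sda))       # rising edge clocks the bit in
        trace.append((0, sda))       # return clock low
    for i in range(7, -1, -1):       # 8 data bits, MSB first
        clock_bit((byte >> i) & 1)
    clock_bit(1)                     # 9th clock: release SDA for slave ACK
    return trace

trace = i2c_write_byte(0xA5)
print(sum(1 for scl, _ in trace if scl == 1))  # 9 clock pulses per byte
```

For slow sensors like an accelerometer or compass, toggling two GPIOs this way at a few kHz is plenty, which is why bit-banging is a plausible access path even without a hardware I2C controller.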
It is important to note that all smartphone chips are optimized first for low cost and low power usage.
To actually isolate the baseband processor+GPS+Cell radio+Mic+Speaker would require a second high-speed bus.
Most cell phone processor designs put both the baseband and application processor in the same package both for cost and power saving reasons. Since both processors are typically ARM cores they will easily interface to the same bus for memory and peripherals. Only having one external bus means fewer external components, which is typically the strongest factor relative to the total power and cost.
There is also the legacy element. The article notes that most of the BB code is at least a decade old by now. Unless that code got a major rewrite, it would not run on a new, isolated architecture.
That's not always the case. For instance, on Galaxy Nexus (CDMA), the radio is split from the AP, and are in fact manufactured by two completely different folks (the AP is TI OMAP, and the radio is VIA Telecom). I'd imagine the same thing with Mediatek, who is a large and growing player. You are right that Qualcomm does fuse their radio with their AP, though, and they do control quite a lot of the US market.
Well, good, if the radio and baseband do not live on-die with the AP. Market forces are pushing for a completely integrated design, but it's interesting from a security perspective.
The baseband is still considered the master CPU during boot - at least on the CDMA Nexus. So although there are some corner cases in terms of architecture, the security model is still completely broken. Send a payload to the baseband over the air, compromise it, and the entire phone is yours.
"I am assuming that the RTOS has direct and full unrestricted access to the hardware such as the camera and microphone?"
You're thinking small ... the baseband processor (typically) has DMA access to the processor itself. Never mind pedestrian stuff like peripherals...
Further, since carriers interface with the baseband processor for OTA updates, that means your carrier has (essentially) DMA access to your phone's CPU. I wish people appreciated just how deeply (as deep as deep gets, basically) your carrier can control the device in your hand, even if you have "rooted" it.
For one thing, putting the voice signal processing in the baseband means that it's not vulnerable to timing glitches from "noisy neighbor" apps running on the application processor. For another, as a practical matter, a lot of the baseband software started out as the entire software stack for the single processor in a dumb-phone/feature-phone, which necessarily included the voice processing. Simply leaving it there avoids the technical effort of doing a port.
Yep. I had no idea this was the case until I read an article that came out shortly after the original iPhone. The dialer app in iOS during those first few months was pretty buggy and had an occasional tendency to crash or lock up while on calls, especially when pressing the "hang up" button. Every time it locked up on me, it'd never impact a call in progress though (much to my annoyance when trying to hang up).
It made perfect sense once I read that the speaker and mic were wired straight to the baseband, and the state of the dialer application had zilch to do with what was going on in the baseband.
Way back when I wrote a couple of chapters for Android Application Development, I wrote about Android's RIL daemon and the underlying device-specific RIL libraries. It's gotten a lot more complicated, but the source code is open and includes what appears to be a reference implementation: https://github.com/android/platform_hardware_ril
I have not looked at this part of Android source code in depth in a while, but from a quick look it still looks very edifying about how this part of a smartphone works.
Coming from a background of developing audio hardware drivers for the Blackberry (I worked on the last generation and current generation before getting bored and leaving a year ago), I can tell you that even if the baseband were able to turn on auto-answering, (I have no idea if that's possible, by the way) it wouldn't know how to configure the microphone and speakers to allow for recording or playback unless it convinced the application processor to help.
If you are concerned about your Blackberry spying on you, there's a special "security plug" that you can insert into the headphone jack which will short all of the pins to ground, disabling the microphone. I assume other phones support this as well.
Re: the security plug, I'll believe you that it might work for your line of devices, but in the schematics for the relatively few phones I've looked at there was always active selection on the phone pins.
Think about it: you put normal headphones in the jack (no microphones, only tip, ring, sleeve): it will already short the mic input
I don't know where, but it sounds fairly simple to make: cut the connector off from an old headset, and solder all of the wires in it together. One of the wires (the uninsulated one) in there is ground, the other three are signal.
In the case of the baseband processor, the company that put code into it is not the company that put the phone together.
It is a complex trust situation. In this case, you can reduce the number of agents you have to trust by using the security plug. This is good for security.
The handset manufacturer could still be spying on you, but if the security plug actually works as advertised it would disable all attacks that would listen in on your mic. These attacks could be deliberate by any of the companies that have code in your handset, or it could be via an accidental weakness in any of this code that is exploited by a third party. This last kind of attack is what the linked article talks about, and a security plug would actually reduce the severity of such an attack.
In the hypothetical in which the phone has been created to act as a bug, surely it would be easy to detect the security plug and disable the microphone for normal uses while leaving it enabled for hidden use. The security plug is only secure if the wiring means that when plugged in, software cannot access the normal mic - if software can access it, it can trick the user into thinking anything.
And if you don't, you can crack the phone open and look at (much of) the wiring yourself. This is of course a lot harder with modern multi-layer PCBs, but I'd imagine still not impossible. You can at the very least take a multimeter to the microphone pins, and test whether they are indeed shorted or not.
Right, it's not like wifi adapters have independent processors of their own with closed-source, potentially buggy firmware that does DMA into main processor memory. :-)
It's also worth thinking about netboot (which comes in several flavors), in which the main processor's potentially buggy BIOS may be independently decoding and processing packets coming over physical wires.
As it happens, the wifi adapters I'm using don't have any firmware at all - most Atheros ones are old-fashioned peripherals with no processor of their own, and some of the remaining ones have official open source firmware. Good luck finding a baseband interface like that.
And on USB, don't forget SMM code emulating PS/2 input devices by parsing USB HID packets. I think part of the reason real-mode exploits were never very common was that the address at which DOS allocated memory depended on, for example, which TSRs you were running.
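For a feel of what that SMM code has to do, here's a hedged Python sketch of decoding a USB HID boot-protocol keyboard report. The 8-byte layout (modifiers, reserved byte, up to six keycodes) is from the HID spec; the keycode-to-character mapping below covers only a-z for illustration, not the full usage table:

```python
# Sketch: decode a USB HID boot-protocol keyboard report, the kind of
# packet SMM code must parse to emulate a legacy PS/2 keyboard.
# Layout (per the HID spec): byte 0 = modifier bits, byte 1 = reserved,
# bytes 2-7 = up to six concurrently pressed key usage IDs.

MODIFIERS = ["LCtrl", "LShift", "LAlt", "LGUI",
             "RCtrl", "RShift", "RAlt", "RGUI"]

def decode_report(report):
    """Return (modifier names, pressed characters) for an 8-byte report."""
    assert len(report) == 8
    mods = [name for i, name in enumerate(MODIFIERS) if report[0] & (1 << i)]
    keys = []
    for usage in report[2:]:
        if 0x04 <= usage <= 0x1D:          # usage IDs 0x04..0x1D map to a-z
            keys.append(chr(ord('a') + usage - 0x04))
    return mods, keys

# Left shift held down while 'a' (usage 0x04) is pressed:
print(decode_report(bytes([0x02, 0, 0x04, 0, 0, 0, 0, 0])))
```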
Remember that story a few months ago about how some government agency had replaced all its keyboards and mice in response to a malware infestation? A lot of folks (including some here) took this as Yet Another Show of Government Cluelessness. I found myself wondering instead if there was a world in which folks advised by government security experts (i.e., you-know-who) would have a good reason to do something like this and not say why.
There is. It's a world in which their opponents had a zero-day against the Windows USB driver, and a way into the government's supply chain. And in which you-know-who wants to play the same game themselves against opponents elsewhere.
I don't think you realize how much noise computers radiate. If someone wanted to emit a signal instead of noise, it would be quite possible to do so. (People are doing DX work by connecting an IO pin of their Raspberry Pi to a long wire.)
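To give a feel for why a bare IO pin can transmit at all: a square wave toggled at frequency f carries energy at f and at its odd harmonics (that's what projects like PiFM/rpitx exploit). A minimal sketch of the arithmetic, with illustrative frequencies:

```python
# Why a toggling GPIO pin radiates RF: by Fourier analysis, a square wave
# at f_hz has components at f, 3f, 5f, ... with amplitudes falling off
# roughly as 1/k. A pin toggled at 100 MHz therefore also emits at
# 300, 500, 700 MHz and so on.

def square_wave_harmonics(f_hz, n=5):
    """Return the first n odd-harmonic frequencies (Hz) and their
    relative amplitudes (1/k falloff) for a square wave at f_hz."""
    return [(k * f_hz, 1.0 / k) for k in range(1, 2 * n, 2)]

for freq, amp in square_wave_harmonics(100e6):
    print(f"{freq / 1e6:.0f} MHz  relative amplitude {amp:.2f}")
```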
"That complexity is exactly one of the reasons why it's not easy to write your own baseband implementation. The list of standards that describe just GSM is unimaginably long - and that's only GSM. Now you need to add UMTS, HSDPA, and so on, and so forth. And, of course, everything is covered by a ridiculously complex set of patents. To top it all off, communication authorities require baseband software to be certified."
This is HN.
I don't think implementing a replacement is all that daunting given enough time and money. I wonder if there's a business model that will pay for it?
For an example of an open-source GSM implementation that would allow one to build a base station, see http://en.wikipedia.org/wiki/OpenBTS . There are lots of videos about it on youtube where you can see it in action.
Often the RTOS is not exactly free, but not entirely closed either. A while back, I worked on the Nucleus RTOS by Mentor Graphics, which has a pretty impressive global footprint: http://en.wikipedia.org/wiki/Nucleus_RTOS. It used to be sold as an API (with source code) to customers who developed applications on top of it. I wrote portions of its networking stack (IPsec/IKE, SNMP, IPv6), and all of its customers have access to the source code. It is pretty well written, with very decent coding conventions, and can be compared to any good, well-known open source project (VLC, even the Linux kernel). Then there are others, such as Wind River's VxWorks, among the more popular ones; I am not sure of its licensing model, but it is well recognized and established in the embedded world. These just aren't as well known in the overall software community - they're more restricted to those in the embedded industry.
Nucleus has a very small footprint, with architecture-specific assembly neatly isolated from about 95% of the code; the rest is ANSI C. It has tasks, which are roughly equivalent to kernel-level threads in POSIX, but the implementation logic is quite different: RTOS constraints are handled by classifying interrupts at two levels. In terms of constraints, there is no dynamic loading, i.e. you have to build a single binary. But at the same time it was pretty fascinating, with the OS, networking stack, and drivers each in a separate folder building up one project. Lately they have added power management, an Android-like UI, and even some hypervisor support. Most importantly, it is small and consistent enough for a programmer willing to learn the entire stack, which helps with a much better picture from hardware to application. A couple of former colleagues (one of whom now works with QNX and has compared both) highlight the strengths and weaknesses of each, but it didn't feel like one was superior to the other. However, Nucleus severely lacks the certifications (and the ability to obtain them) needed to get into hard real-time industries such as aviation.
I would donate to somebody setting up a server that streams audio (and video, …) from all phones in reach. With bitcoin this could even be pulled off anonymously. I would hope that such a server streaming data from financial districts, one at a time, would finally lead to something changing about this. Donations would help buy antennas and rent space in financial districts.
The baseband processor may have unrestricted access to the entire address space of the device or to address space which the application processor (and the operating system it's running) implicitly trusts.
AFAIK, access to the baseband (and vice versa) is through a network inside the phone. Physically, these two computers are separated and communicate only through a well-defined network interface; there's no poking in the other computer's memory.
Most modern phones don't really have a physically separate baseband processor and application processor, they run a real-time microvisor (usually an L4 based system, and frequently OKL4 - see http://wiki.ok-labs.com/#OKL4Microvisor4.0). The microvisor runs multiple virtual 'cells' (which are ARM operating systems that think they are running as supervisor), and ensures that the hard realtime requirements of the radio driver are met even if the kernel in the application cell is stuck in a loop.
Because the application processor is actually running in a virtual cell and not on bare metal, it doesn't have full access to the hardware and can't interfere with the radio cell - but the radio cell might still have access to everything.
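A toy simulation of that guarantee, with everything (names, periods, the single-window scheduler) purely illustrative and not OKL4's actual API: under fixed-priority preemptive scheduling, the radio cell gets the CPU at every one of its deadlines even though the app cell never yields.

```python
# Toy model of a microvisor scheduling two cells: a hard-realtime "radio"
# cell that must run every RADIO_PERIOD ticks, and an "app" cell that is
# stuck in an infinite loop. Because the hypervisor preempts on a timer
# tick, the radio cell's deadline is met regardless of what the app does.

RADIO_PERIOD = 4  # illustrative: radio cell needs the CPU every 4 ticks

def schedule(total_ticks):
    """Return which cell ran at each tick under fixed-priority preemption."""
    timeline = []
    for tick in range(total_ticks):
        if tick % RADIO_PERIOD == 0:
            timeline.append("radio")   # higher priority: preempts the app
        else:
            timeline.append("app")     # app cell spins; runs otherwise
    return timeline

print(schedule(12))  # radio runs at ticks 0, 4, 8 - every deadline met
```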
If you want a mobile phone that you control, you need to buy something like a samsung galaxy player (contains no baseband processor, contains no mobile phone infrastructure) and then attach a USB modem to it (or carry a MIFI or whatever).
There's one problem, however, and that is all of the fancy noise cancellation and voice smoothing are actually done on the baseband proc, and userland implementations of this for VOIP apps are typically pretty crummy.
I'd be (pleasantly) surprised if devices like the Galaxy Player didn't still have binary blobs for the Wifi/BT and video hardware. Even NICs for otherwise fairly open PC/ATX machines all still have the proprietary blob firmware drivers.
The Mifi is the same thing as the baseband in your cell phone, and can even do SMS. Many of them are running Linux on the AP side now to support more advanced Wifi routing features.
If you were really paranoid, you might consider the possibility that it has a microphone, or a speaker that can be treated like a microphone. (Though I don't know of any that actually have a beeper, vibrator or piezo output.)
Also, they may have some form of e911-compliant GPS receiver, though whether the RF is hooked up for it I wouldn't know.
Well even a completely trustable cell radio is tracked with tower triangulation. The only way I see to fix this is to completely rearchitect the mobile network by getting rid of subscriber IDs, using anonymous payments for tower access, and then a mix network for transit privacy. That is to say, location data is a wash for the foreseeable future..
Surreptitious microphones and other sensors are indeed still a problem, but they seem easy to audit/remove in the short term, and if this model catches on and they become a real threat, the physical audits just have to go deeper.
What you do gain is a processor that can be trusted by the user (in the same way we all trust Intel CPUs), with the Mifi only seeing encrypted communications. Also we've moved the demarc point solidly between two separate physical devices - upgrade your pocket computer without involving your cell provider, and replace your communications ability without affecting your user environment.
Well first you're assuming that those that created the system consider known identity to be a misfeature, even beyond that necessary for payments.
Trustable/trusted doesn't mean trustworthy in the sense of an individual citizen's expectation of privacy, it means that you have given the entity your private information, in trade for service and convenience.
The irony of old spy tradecraft is that we all possess hardware that could conceal in plain sight anything that previously had to be hidden completely; the mere presence of electronics or recording/transmitting ability (including film) would once have immediately marked the person as engaged in some kind of espionage. Now we can all carry sophisticated sensors and communication devices in most places, and everything but cameras in many others.
What physical audit would tell you if the MEMS sensor in your device has been repurposed for audio pickup? (Assuming the capability isn't already in the firmware, or that simple observation of the signal output, including power fluctuations, could reveal the same information to the microprocessor.)
The only case I'm aware of where the courts have blocked this kind of surveillance involved an OnStar vehicle system using an analog cellphone, which could serve either its intended purpose or the government's, but not both simultaneously. This is not an issue with digital and IP-based systems, which can easily serve two masters.
Ah, your use of "trust" again: Intel CPUs have features that actively work against you, such as vPro. I would agree that a non-cellular PDA and Mifi are superior to an integrated device from a privacy and personal autonomy perspective.
> you're assuming that those that created the system consider known identity to be a misfeature
No, I'm just speaking from the perspective of system design, for what it would take to actually hide your location data - to put boundaries on the problem we're talking about.
> trusted ... means that you have given the entity your private information
Yes, which is why I used "trustable", which I'll admit isn't necessarily the best word either as I'm currently trusting my phone with call/text data, even knowing how broken it is. On the other hand, I personally don't have my phone setup to access my general files, because I simply don't feel that the thing in my pocket is actually my agent.
> physical audit would tell you if the MEMS sensor in your device has been repurposed for audio pickup
Well the point is that a Mifi (with untrusted baseband) shouldn't have these sensors.
> Intel CPUs have features that actively work against you, such as with vPRO.
And probably other ones that don't show up in marketing materials. Which is why I made an explicit parallel to trusting Intel CPUs to generally run the code we tell them, even if this isn't necessarily true. We have to define a boundary so that we can solve the problem we're talking about with a platonic ideal of a trustable CPU, while separately solving the problem of not having trustable CPUs.
I talked to a friend of mine who is an engineer at Qualcomm, and he said the article is exaggerated and outdated. Current basebands don't use REX OS anymore, and mitigation mechanisms have been put in place, so this piece seems like FUD.
I happened to be reverse engineering some firmware the other day, which has "AMSS" all over it; this was in new Sierra Wireless devices built on Qualcomm ARM926EJS baseband. It might not be in the latest and greatest, but it's still out there all over the place.
As someone who works closely on Qualcomm baseband processors, I can say that security is one of Qualcomm's top priorities. There are whole teams dedicated to security/vulnerability analysis. I'm not saying that the issues mentioned in the article did not occur, but I believe they probably occurred in older chips (a few generations old).
Views above are personal and do not reflect views of Qualcomm
Heck, I'm sure it's been longer than a decade. Previous generations of mobile phones used unencrypted FM. You could eavesdrop on them with just a television that had a UHF dial, on channels 70 to 83 (audio carriers 811-889 MHz)!
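The channel-to-frequency arithmetic is simple under the US NTSC channel plan (channel 14 starts at 470 MHz, channels are 6 MHz wide, and the aural carrier sits 5.75 MHz above the channel's lower edge), a quick sketch:

```python
def uhf_audio_carrier_mhz(channel):
    """US NTSC UHF audio (aural) carrier frequency in MHz for a channel.

    Channel 14 occupies 470-476 MHz; each channel is 6 MHz wide, and the
    aural carrier sits 5.75 MHz above the channel's lower edge.
    """
    assert 14 <= channel <= 83
    lower_edge = 470 + (channel - 14) * 6
    return lower_edge + 5.75

print(uhf_audio_carrier_mhz(70))  # 811.75 - bottom of the old cell band
print(uhf_audio_carrier_mhz(83))  # 889.75 - the top of the UHF dial
```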
I thought that the refrigerator was more about soundproofing (that way it wouldn't matter if it was listening or not.)
I think it depends on the model. I tested a couple of phones, including a blackberry, inside my fridge a few years before Snowden did his thing. We put them in and then tried to call them and none of them rang. But the fridge was one of those trendy stainless steel models. Perhaps a more ordinary fridge would have less effect on signal strength.
I've only seen fridges that are made with steel or aluminum. They would still leak RF around the door gaskets, but probably such a small amount that you'd have to be very close to a base station to be vulnerable, and it'd still be mostly soundproof.
The second operating system hiding in every mobile phone? Really?
There's a ridiculous number of operating systems hiding in every mobile phone. What do you think runs on the GPU? What about bluetooth, wifi and GPS? What about all those sensors? The camera interface? The video acceleration? The SIM card? The NAND flash?
The GPU, bluetooth, wifi, and GPS chips are not running their own operating system kernels. They have firmware microcode that gets loaded when their drivers are loaded, but they aren't running a completely separate dedicated realtime OS.
Some of the common Bluetooth and WiFi chips out there are definitely running their own realtime OS; the Bluetooth chips at least are apparently quite complicated (WiFi hardware designers seems to be less keen on doing everything in firmware).
Default register initialization values and functions to encode/decode and transmit/receive packets of data do not equal an operating system in my book. Maybe you draw the line at a different level of the stack than I do.
I'm totally willing to admit that I might be wrong about this, but I wasn't under the impression that Broadcom and Atheros and Intel were using ARM CPUs in their wifi/bluetooth/GPS chipsets.
The missing piece here is that WiFi/Bluetooth/GPS chipsets ARE usually using ARM CPUs internally. GPUs generally run a funky DSP-like core but there's still some kind of OS scheduling tasks and running code to interact with the main CPUs.
The cost of laying down a fully-fledged CPU has fallen to the point where it's simpler and less risky to use an off-the-shelf ARM core (or similar) instead of a big bunch of hard logic and coefficient tables. And most of those CPUs have some sort of runtime, which is an OS, depending on where you draw the line.
Maybe the future is in making calls over the Internet, not a private cellular network?
Or maybe the future is in open source software defined radio?
I never tried it, but I heard OpenMoko could run BSD.
In any event, I hope the future is one where I can read, modify and compile the source for my handheld's bootloader and operating system, as I currently can do with my laptop's bootloader and operating system.
It's from Frank Herbert's Dune :( Sad because I read this series (all six, I kid you not!! 1,2,6 are good 3,4,5 not so good) 25 years ago nearly and I had forgotten this specific detail. Time is indeed a cruel mistress and thanks for making me feel old :)
So maybe a relevant question as we move away from desktop computing is whether your mobile device can be identified through online activity, such as commenting, searching, email etc. This would be useful for locating dissidents.
This is all a bit over the top. Yes, the baseband may be compromisable, that doesn't mean that the operating system is. Your photos, data etc should be safe as long as there aren't further exploits (which of course exist).
Furthermore, I have yet to hear of a high-level operating system acting as a slave to the baseband. iOS or Android being initialised and commanded by a secondary baseband OS would just be a bizarre setup. That of course does not mean that the baseband doesn't pass commands to the high-level OS. Though if the interface is well shielded, exploiting it could be tough (correct me if I'm wrong, but I don't think baseband exploits exist for the iPhone 5/5s).
Now, I'm sure the NSA has some interesting possibilities that Angela Merkel would be all too keen to know about ;).
.. and by unfettered, we mean it has DMA access to the application processor. It's not a hook or an API call or some functions it can call - it has low level bit for bit access to manipulate the CPU that your OS (like android) runs on.
Further, your carrier (verizon, att, whatever) can push OTA updates and commands straight to baseband via the radio, bypassing the CPU and OS (like android) and manipulating the phone on a low level bit by bit basis.
Even the most well secured, rooted, reloaded phone has every piece of it totally owned by the carrier, via the baseband processor.
DMA has security modes, implemented by pretty much every standard baseband chip in use today (Qualcomm, Infineon, you name it). In the olden days, baseband exploits were used to crack iPhones; as far as I'm aware, this hasn't happened in the last two generations of iPhones. So no, you do not have unfettered access to all the data, only the "baseband" segment, unless you manage to hack the security settings for the baseband. For which you first have to hack the primary OS. I hope you get the theme.
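The gist of such a DMA security mode, sketched in Python (the single-window model and all names here are invented for illustration; real chips use IOMMUs/memory-protection units with more elaborate region tables):

```python
# Sketch of the idea behind a DMA "security mode": the baseband's bus
# accesses are checked against an allowed window, so it can reach only
# its own shared segment, not arbitrary application-processor memory.

class DmaWindow:
    def __init__(self, base, size):
        self.base, self.size = base, size  # the allowed "baseband" segment

    def access(self, addr, length):
        """Allow the access only if it lies entirely inside the window."""
        ok = self.base <= addr and addr + length <= self.base + self.size
        if not ok:
            raise PermissionError(f"DMA fault at {addr:#x}")
        return True

# Baseband may touch its 1 MiB shared segment at 0x8000_0000...
win = DmaWindow(base=0x8000_0000, size=0x10_0000)
print(win.access(0x8000_1000, 256))        # inside the window: allowed

# ...but a read of application memory elsewhere faults:
try:
    win.access(0x4000_0000, 4)
except PermissionError as e:
    print("blocked:", e)
```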
While the reasons stated by the gp might not be the best ones to justify his assertion, browsers are an environment for running arbitrary untrusted code, and have a lot of similarities with operating systems (like job control, memory management, hardware access, etc).