I dated a journalist once. She used some random free app for phone calls because recording calls isn't built into iOS and she needed to record calls. I suggested a small device for her to plug her headphones through, but she declined.
I'm sure there are a few journalists out there who take cybersecurity seriously, but I'd wager the vast majority are pretty trivially monitored.
I see your point; however, having worked in newsrooms, it really is about their beat and their threat model. My organization covers a wide range of beats, and folks covering national security or other sensitive topics have an entirely different workflow from those covering, e.g., housing.
I think being responsive to their needs and building trust will go much further. Also, designing a one-size-fits-all model will just mean that your reporters either ignore the guidance or find a way to work around it.
For instance, the most recent credible threat we have had against one of our reporters wasn't a state-level actor, but rather folks on the internet (trivially) finding their address and doxing/harassing them and their family. No amount of technology hygiene will change the fact that voter registrations are public records.
If someone gets access to the housing reporter's systems, that seems like a great way to move laterally or escalate privileges to reach the other reporters or the entire organization.
I don't envy your challenge. Security must make an attack more expensive for the attacker than it's worth. Even the housing reporter's data could be highly valuable; with inside knowledge, someone could make a killing on real estate. The value of the national security beat's information is astronomical.
I don't grasp why, with all the news about breaches, reporters still don't care.
Well, she wrote about scary stuff. Murderers, etc. Feature stories for one of the few fact-checked Canadian magazines left. Some stuff in The Atlantic about politics.
Was she getting leaks from NSA staffers? No. But it does feel kinda silly to me that journalists, generally speaking, have insecure setups by default. But I get it, it's a hard industry to squeeze a living out of these days.
But it also depends on what kind of journalism they're doing, right? Not all report on criminal activity or investigate the government. It's kinda like threat models: no need to be super secure if your work brings no risks to you, your organisation, or those you come in contact with.
Journalists from celebrity gossip reporters to foreign affairs correspondents need to take security seriously. Even gossip journalists receive information from sources that could get the source fired, blacklisted, or even put in jail (e.g. LA sheriffs leaking celebrity photos).
How likely is it that people are exploiting zero-days against reporters in any of those examples, though? That's why threat models are different for different types of journalism.
That's a fair point, although bad actors will also wait around for years for your work to become more interesting/relevant, if they think there's a chance of it.
I did help desk support at a news agency. We were constantly cleaning up malware from journalists' computers... The journalists were constantly downloading all sorts of sketchy files as part of their job. Basically, if you're leaking state secrets / embarrassing repressive governments, don't leave a digital trail that can be traced back to you. Just assume everyone (especially journalists on national security or human rights beats) has been hacked.
Yes! In our newsroom (which isn't perfect by any means) I have been testing Qubes for really sensitive/untrusted documents. We also open untrusted documents (from e.g. FOIA responses) on a machine live-booting from a CD.
However, it adds enough friction (especially with remote work) that it's hard to get it right 100% of the time.
If you want to share really sensitive documents, one way to ensure proper handling is to use a service like SecureDrop [0], which, for example, only accepts submissions over Tor and requires the use of a secure viewing station [1] (an air-gapped machine live-booting Tails with a coreboot ROM and the webcam/networking card physically removed) to decrypt/access leaks.
That being said, I don't think there's a perfect tech-only solution, because nothing stops folks from handling a file carelessly after they access it.
You could also use Dangerzone [0]. It opens a document in two Docker containers and converts it into a safe version. It was created by the director of infosec at The Intercept.
> I dated a journalist once. She used some random free app for phone calls because recording calls isn't built into iOS and she needed to record calls. I suggested a small device for her to plug her headphones through, but she declined.
Sounds like she dodged a potential honeypot and surveillance attempt.
Perfectly secure computers are an oxymoron. They don’t exist.
iOS is the least worst mobile option and it’s ridiculous to say Apple is lying about security if any exploits are found, ever.
If you look at e.g. how messaging works in iOS 14 [0] you’ll see that they do in fact work on making secure systems. But parsing and memory safety are hard. Like, really hard. The fact that NSO found exploits doesn’t mean Apple isn’t doing anything; Apple is clearly making it more and more difficult to find and abuse such exploits.
For the average person that isn’t being specifically targeted by sophisticated malware from companies funded by -governments-, iOS is pretty damn secure. Dealing with being attacked is a different threat model.
> But parsing and memory safety are hard. Like, really hard.
This doesn't have to be the case. Start by avoiding C and C++. Use Java (on Android) to write parsers. It is very hard to take a buggy parser written in Java and escalate it into a memory corruption attack.
If you really can't use a language like Java, write your parser in safe Rust using slices over Vec<u8>. Then run a fuzzer over it. You'll find a few runtime panics, but you're vanishingly unlikely to encounter memory corruption.
Buffer overflows and memory corruption can be almost entirely avoided these days, at a price.
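Not from any real project; here's a minimal sketch of the kind of bounds-checked parser the parent describes, written in safe Rust against a byte slice. The record layout ([1-byte tag][2-byte big-endian length][payload]) is made up purely for illustration; a malformed length yields None instead of an out-of-bounds read:

    // Parse a record laid out as: [1-byte tag][2-byte big-endian length][payload].
    // All slice access is bounds-checked; bad input returns None, never reads past the buffer.
    fn parse_record(input: &[u8]) -> Option<(u8, &[u8])> {
        let (&tag, rest) = input.split_first()?;
        let len_bytes: [u8; 2] = rest.get(..2)?.try_into().ok()?;
        let len = u16::from_be_bytes(len_bytes) as usize;
        let payload = rest.get(2..2 + len)?; // checked slice: too-short input -> None
        Some((tag, payload))
    }

    fn main() {
        // Well-formed record: tag 0x01, length 2, payload AA BB.
        assert_eq!(
            parse_record(&[0x01, 0x00, 0x02, 0xAA, 0xBB]),
            Some((0x01, &[0xAA, 0xBB][..]))
        );
        // Length field claims 65535 bytes but only one follows: parser just returns None.
        assert_eq!(parse_record(&[0x01, 0xFF, 0xFF, 0x00]), None);
    }

Running a fuzzer (e.g. cargo-fuzz) over something like this can still surface panics or logic bugs, but safe Rust rules out the heap and stack corruption that makes parser bugs exploitable in C.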
Yes, I imagine that in the future we'll be writing these sorts of tools in memory-safe languages like Rust.
In fact I believe that it's hubris to think that we can write massive, complex systems in unsafe languages and -not- overlook some bugs here and there. We had no choice but to use these languages before, but Rust, etc, give us alternate choices now.
>Perfectly secure computers are an oxymoron. They don’t exist.
Absolutely, but creating a platform that encourages or forces users to do the wrong thing is a regression from where we were ten years ago.
>iOS is the least worst mobile option
No. Devices running a FOSS operating system like the Pinephone are the least worst mobile option, people don't like it because it's not sexy and it's currently very inconvenient. The rest of the options are so bad that you're probably better off without a mobile phone at all.
RE: iMessage
You have everyone using exactly the same messaging client, so you have one piece of software to exploit and now you can attack everyone. The extreme lack of diversity makes these sorts of complex exploits much more profitable.
>iOS is pretty damn secure
Sure, if you don't do anything with it. But it encourages users to download unauditable closed apps and reassures them that doing so is totally safe, despite the fact that most of them are using 3rd party telemetry services run by data brokers.
>No. Devices running a FOSS operating system like the Pinephone are the least worst mobile option, people don't like it because it's not sexy and it's currently very inconvenient
Just because it's FOSS doesn't mean it's secure. If your problem is privacy then sure, the PinePhone is the least worst mobile option. If your problem is security, I don't see how a phone without a hardware-embedded key manager is a step up. It's not like the Linux kernel, or whatever messenger you decide to use, is free from zero-days either.
>But it encourages users to download unauditable closed apps and reassures them that doing so is totally safe, despite the fact that most of them are using 3rd party telemetry services run by data brokers.
And by the very same reasoning, your bicycle is safer than a car because it doesn't encourage you to drive 75 mph. I agree the world might be a lot better if we "return to monkey", but I don't think anarcho-primitivism is a solution.
Right, but it does mean you won't be forced to do things the wrong way because it makes Apple money.
>hardware embedded key manager
This means keeping copies of keys unencrypted (or encrypted with a key on the same device which is effectively the same) on the device. You're just a couple of exploits away from leaking the keys at that point, so many people argue that these make things worse, not better.
>It's not like the Linux kernel, or whatever messenger you decide to use, is free from zero-days either.
Sure but you can't even guess at which messenger I use. Attacking me means taking expensive professional time and focusing it on one person. As for zero days in the kernel, they seem to appear less often than for iOS but I could be missing some.
>anarcho-primitivism
There's nothing more primitive than flinging binary artifacts around the way you do on closed OSes. The FOSS OS approach where knowledgeable people protect those who aren't knowledgeable (without restricting their rights) is a significantly more advanced social structure.
>Right, but it does mean you won't be forced to do things the wrong way because it makes Apple money.
I don't understand this point. What's wrong with downloading binaries from a trusted distributor (Apple)? If you agree that just because it's FOSS doesn't mean it's secure, then downloading binaries is as "right" as you are going to get when it comes to mobile app distribution. It's no different than downloading binaries from apt.
>This means keeping copies of keys unencrypted (or encrypted with a key on the same device which is effectively the same) on the device.
No. The whole point of the Secure Enclave is that the keys never leave the hardware - they never touch main memory, and the keys can never be read out of the chip. You are never "a few exploits away" from getting the keys because there is no mechanism to read the keys at all. This also prevents attacks on the device itself - you cannot brute force an iPhone without the Secure Enclave locking you out. I'm not certain (and I really doubt) the PinePhone is resistant to physical attacks.
>Sure but you can't even guess at which messenger I use. Attacking me means taking expensive professional time and focusing it on one person.
The article is about journalists who were targeted by a state-sponsored cybersecurity firm. This is a moot point, not to mention security by obscurity doesn't work.
>The FOSS OS approach where knowledgeable people protect those who aren't knowledgeable (without restricting their rights) is a significantly more advanced social structure.
Except that, in practice, this is no different (and arguably worse) than just trusting Apple. It turns out knowledgeable people do not work for free, most other knowledgeable people don't read the code or recompile sources, and FOSS maintainers aren't always properly equipped to ship secure software. Heartbleed is the poster child for this.
I'm not saying that it's impossible for there to be secure FOSS code, but that it's incredibly difficult to ship secure code at all in any situation. For the non-technical person it's far easier to trust a platform that is hardened from the outset (like the iPhone), has a well-funded security team (like Apple's), and is recommended by other security professionals.
> No. Devices running a FOSS operating system like the Pinephone are the least worst mobile option, people don't like it because it's not sexy and it's currently very inconvenient. The rest of the options are so bad that you're probably better off without a mobile phone at all.
There's nothing about FOSS that makes something secure, and building secure software is so hard and expensive that my guess is that you need the sponsorship of a government or major corporation to do so. Some FOSS does have such sponsorship, but a lot doesn't.
IIRC I've even heard that OpenBSD, despite its reputation, may no longer be more secure than Linux due to Linux's manpower advantage. I don't even have to look up the numbers to know that Apple has a major security manpower advantage over the people making the Pinephone.
That's not to put down the Pinephone, but we have to be reasonable about what a project like that is and what it can (and cannot) achieve.
> There's nothing about FOSS that makes something secure, and building secure software is so hard and expensive that my guess is that you need the sponsorship of a government or major corporation to do so. Some FOSS does have such sponsorship, but a lot doesn't
The F/OSS community has a weird collective amnesia about exploits that rubs me the wrong way -- just because someone can look at it doesn't mean that someone is looking at it, or even that the person looking at it is going to fix it instead of exploit it. Heartbleed was sitting out in the open for 2+ years, despite OpenSSL being a very popular package available under a permissive license.
> The F/OSS community has a weird collective amnesia about exploits that rubs me the wrong way...
If you repeat something frequently enough, a lot of people will regard it as true. And a lot of people are extremely reluctant to reevaluate their judgements after they've made them, even in light of new information.
IIRC, the "FOSS is more secure" refrain started in the 90s/00s, when security was an afterthought even at companies like Microsoft and Apple and Linux was unusual enough to fly under the radar when there were a lot of big, high-profile worms circulating. But since then some closed-source commercial software has gotten much more secure, and FOSS has gotten more popular, but remains plagued by important projects that get by on shoestring resources.
>so you have one piece of software to exploit and now you can attack everyone. The extreme lack of diversity makes these sorts of complex exploits much more profitable.
The flip side is the lack of diversity makes patching easy. Good luck pushing an update patching a 0-day affecting 3-4 Android versions to 60% of devices.
To be fair it's probably the most secure environment for the average Joe, you're just saying that it's not perfectly secure, which would be impossible in this world.
You could do far better than iOS. Worse, though, is that it encourages very poor infosec when that's profitable for Apple, and often makes doing things correctly difficult or impossible.
It makes checking the hygiene of the apps you use impossible, makes building them from source artificially difficult and expensive, and pushes users towards services with serious flaws like iCloud Backup.
We could have taught people such things, but there’s no profit in that. We want to maximize the number of people using our devices and our software, so that we get richer, even if it means putting some fraction of these users in grave danger. That’s simply negligence. That it’s distributed across an entire industry doesn’t change the ethics. Selling people tools that put them at risk is much different from sharing FOSS.
Some things you say make sense, but suggesting that “people” can/should learn how to build from source is simply nonsense. Heck if I had to build my OS I’d stick to a feature phone instead.
>the marketing (lying) that iOS is secure is pretty intense.
I don't see how it's lying. If you are going to consider that iOS is not secure because they got owned by a couple of 0-days, then by that definition there isn't a secure piece of software on the planet.
But as a *platform* I am intimately convinced that iOS is far more secure than Android...
I agree that a few bad apps have been authorised by Apple to be published on the App Store, but when it happens on the Play Store it is not only one or two apps... it is usually 5 to 10 apps developed by the same developer and containing *the same* flaws.
Also, as demonstrated by AdGuard a few years ago (https://adguard.com/en/blog/popular-android-apps-are-stealin...), it is way easier to extract user information from random apps on Android than on iOS.
However, the Android API has improved over the last two years (and Android 12 is better than ever at securing user information).
> However, it is unlikely that Pegasus will be a problem for the vast majority of iPhone users. While the tool is used as intended against criminals by governments, the attacks against innocent people are seemingly against those who could be critics to a regime, including journalists and human rights activists.
Attacks against the freedom of others and critics of government are a much larger threat to ordinary people than if they were surveilled themselves.
Just because it was only used to target journalists, supposedly, does not mean someone could not also target random individuals. I doubt NSO has such control over their customers that the uses can't be expanded to almost anything, like blackmail, theft and harassment.
This, coupled with the fact that iMessage's E2EE has been backdoored by the non-E2EE iCloud Backup key escrow, is a good argument for leaving iMessage, FaceTime, and iCloud all turned off on a device.
I go one step further and leave the SIM card out, which means the SMS vulnerability path is closed too.
But then you are using SMS, which your cell carrier can absolutely see and intercept because it's unencrypted.
So in either case... turn off native messaging and use Signal or something if you are paranoid. You aren't really using the "phone" part anymore, so buy an iPod touch or something.
Also, iMessage is fully E2E if you disable iCloud Backup, which you can easily do in Settings.
It's not paranoia when it's true. While most people value the convenience of conventional phone calls and default messaging applications over true privacy, those who prefer privacy aren't being paranoid. Companies are monitoring communication to increase ad revenue; governments are monitoring communication to catch criminals, enable industrial espionage, and suppress dissent. It's only paranoia if it's delusional. We know that we're being spied on, even if we're not being individually targeted. Even democracies that supposedly value freedom engage in widespread surveillance in direct violation of their own laws.
I'm in the camp of pragmatic resistance to surveillance. I use browser plugins to block ads and cookies where it doesn't get in the way of reaching the content I want; I use Signal for messaging even though almost none of my recipients do; I disable location services except for things like Maps that actually need to know where I am; I turn off all the spyware I know about that's built into operating systems; etc. I'm not a tin-foil-hat-wearer; I'm not doing anything illegal that I need to hide; I'm just trying to push back in a small way against the erosion of privacy and rights that permeates everything electronic.
But the parent isn't paranoid. They really are watching. And we shouldn't be so complacent.
Yes, but if you are that paranoid and worried about it, the fact remains you should not carry an electronic device.
This person is so paranoid that they believe a cyberweapon developed by a private company in Israel, which uses previously-unknown bugs in the most sandboxed messaging system you can get on a phone, is going to be deployed against them, so they should not use the calling, texting, or any other "phone-like" functionalities of a phone.
They then distrust that the end-to-end encryption is in fact end-to-end, and think that using Signal or something is more secure. But if a bug was found in a system more sandboxed than Signal (iMessage, which has BlastDoor, which Signal does not), it is more than likely that Signal has its own zero-days in it, so you shouldn't be using that either.
That's paranoid, and if you are that paranoid (which, maybe you have a reason to be), your solution isn't well thought-through. You shouldn't be using a phone if you can help it.
> I disable location services except for things like Maps that actually need to know where I am
Fun fact: having systemwide location services on, even if you don't enable it for any apps, means that your location is sent in realtime to Apple/Google at all times (via Wi-Fi triangulation data). It's not just passive GPS reception.
If you want actual location privacy, you'll want to leave location services off systemwide on your smartphone, and consider getting an offline GPS receiver device. Good car satnav devices from China are like $60 now, and include continent-wide maps, though you lose realtime traffic info, being offline.
There is a way around this. If you use an Android distribution with UnifiedNlp (part of microG) and without Google Play Services, you can install only the location providers that you want to use for Wi-Fi and cell tower triangulation. Google would not be monitoring your location queries. Provider options include:
UnifiedNlp is preinstalled on Android distributions that include microG. CalyxOS is the only one of these that supports relocking the bootloader with the developers' key:
This falls under "close enough" for me. Even with systemwide location off, cell providers and your ISP still know where you are; there's simply no way to stop them from knowing. If I fire up an app and Android gives me a popup saying it won't work with location off, then at least I know which apps are asking for it, and can enable the very few that I want to share that with because I get something out of it (like navigation).
Paranoia is an irrational suspicion that you are being watched.
If you just don't want to be watched, either by people or by algorithms, and you have a rational understanding of what tracking/surveillance you are under, then you are actually not paranoid.
There is a degree of rational fear, rational expectation of being tracked. Your degree of fear though is irrational unless you are, in fact, a journalist in an authoritarian state.
You are saying that you are so paranoid, you don't trust iMessage to be End-to-End Encrypted because it has zero-click exploits developed as part of a cyberweapon that is explicitly targeted against high-profile journalists. You then think using Signal or something is more secure, even though if this was pulled off in iMessage (more sandboxed than any other messenger security-wise), your other messengers probably are also flawed and you shouldn't use any of them.
In fact, you shouldn't use a mobile device. And maybe for your situation, that is right and rational. But for most people, it's not.
No, he is right that you are using bad words because you disagree. I wouldn't have added this but the thread just keeps going.
Just because someone wants to be as secure as possible while using their electronic devices and you think they are being extreme doesn't mean that they are being paranoid. It has nothing to do with being paranoid. It could simply be because it is fun to try and secure your devices, or to gather knowledge on how to do so in case you need to apply the skill-set at work, or a thousand other reasons.
>you don't trust iMessage to be End-to-End Encrypted
I don't secure my devices as GP does, but I also do not trust for a second that iMessage is securely E2EE. It is not something you rarely hear when talking about the topic; in fact, it is a very common argument on HN that iMessage messages are saved unencrypted to iCloud.
>this was pulled off in iMessage (more sandboxed than any other messenger security-wise)
That is almost the opposite opinion of iMessage than what was posted by researchers yesterday on HN (well, Twitter originally). In fact they stated:
>"BlastDoor is a great step, to be sure, but it's pretty lame to just slap sandboxing on iMessage and hope for the best. How about: "don't automatically run extremely complex and buggy parsing on data that strangers push to your phone?!"
In short, "paranoid" is misused a lot like this. Just like "schizophrenia" (it is often used to mean having multiple personalities or many clashing opinions, but neither is correct usage).
> It could simply be because it is fun to try and secure your devices or to gather knowledge on how to do so
It could be the case, absolutely. But the OP doesn't sound like they're having fun; they are in earnest.
> do not trust for a second that iMessage is securely E2EE
Ask a security expert, and they will tell you it has been verified by just about everyone who has inspected it that this is, in fact, the case, including the EFF. But it is proprietary code, not open, which is a downfall.
> are saved unencrypted to iCloud
And it can be turned off with the flip of a switch in Settings if that's something you are worried about. For most people who aren't into opsec (like my grandma), losing all of their messages because someone stole their phone isn't worth it.
> "buggy parsing on data that strangers push to your phone?!"
Yes... Except that every other secure messenger also does the exact same thing. And they don't have BlastDoor sandboxing like iMessage does. Yes, BlastDoor has flaws, but at least it's there unlike other messengers which don't sandbox.
> Your degree of fear though is irrational unless you are, in fact, a journalist in an authoritarian state.
You are putting words in OP's mouth. OP never said he was fearful, only that he didn't want to be tracked.
Someone friendly could follow me around in real life and watch what I'm doing - and keep suggesting products to me based on getting to know me. I'm not going to be afraid but I am going to be freaking annoyed, and feel like my privacy is violated when he says he isn't going away.
I'm trying to say his game-plan for not being tracked is immensely flawed. He thinks a nation-state weapon could be used against him, so he switches to a third-party messenger which doesn't do the same degree of sandboxing for security. What could go wrong?
If you are worried about a threat that is that niche, and will almost certainly be patched soon, you shouldn't be using any messenger, logically speaking.
IMO you are putting words in their mouth and misrepresenting what OP was saying.
OP wasn't talking just about the Pegasus attack; they were talking about the key escrow not being held under end-to-end encryption on iCloud. That's not going to be patched any time soon, and there are other messengers which don't do this.
We aren't in the 1970's. It's cheap and easy to do dragnet surveillance, and it costs a fraction of a cent to store text communications and to perform speech-to-text on audio and video.
You don't have to be interesting, you just need to exist to be caught up in the dragnet.
The other end of the conversation escrows the key on any messenger. Otherwise how would you read the message? Unless you consider Snapchat, but that's not End to End Encrypted.
And are you really sure that Signal or your preferred messengers don't also have Zero-Click exploits? After all, they aren't sandboxed to the degree iMessage is with BlastDoor.
>"BlastDoor is a great step, to be sure, but it's pretty lame to just slap sandboxing on iMessage and hope for the best. How about: "don't automatically run extremely complex and buggy parsing on data that strangers push to your phone?!"
This is false. Snapchat has "snaps" protected, but text messages and group messages are not end to end encrypted.
Also, Signal putting your escrow keys in iCloud? I don't think you know what you are talking about. You can set iMessage to not put your keys in iCloud like I said above by turning off iCloud Backup which makes it fully End-to-End with your own key on your device, just like Signal.
If you are worried about the other party having their conversations being backed up, tell them to disable iCloud Backup. If you are this worried about the privacy of your communications, hopefully the other party would be as well.
And Signal and any other E2E messenger is absolutely storing copies of your key on the recipient's phone, just like iMessage would. If it didn't, there'd be no way to verify that a message was sent from the same sender.
Signal doesn't need keys for the messages you previously received. You received a message from Jim, it's received, done, no need to retain keys to decrypt the message from Jim.
You might be thinking, "But what about the next message from Jim?" but that message is encrypted with a new key so the previous key isn't useful, your Signal works out what that key will be and remembers it until it receives a message from Jim.
It's a ratchet, you can go forwards but you can't go backwards, if I didn't keep the message Jim sent me last week then even though you've got the encrypted message, and I've still got a working key to receive new messages, we can't work back to decrypt the old message.
You might also be thinking, "There must be a long term identity key so that I can tell Jim and Steve apart?". Indeed there is. But Signal doesn't use this to sign messages since that's a huge security mistake, instead this long term identity key is used to sign a part of the initial keys other parties will use to communicate with you.
This design deliberately means you can't prove to anybody else, who sent you anything or what they sent. Sure, you can tell people. You can dish the dirt to your spouse, your friends, the Secret Police, but you can't prove any of it cryptographically.
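A toy sketch of that forward-only ratchet, purely to illustrate the shape of the idea. The mixing function below is a stand-in I've invented for illustration, not Signal's actual KDF, which is HMAC-based and also interleaves Diffie-Hellman ratchet steps:

    // Toy illustration of a forward-only key ratchet. NOT real cryptography:
    // the scrambler stands in for the HMAC/HKDF steps a real Double Ratchet uses.
    fn mix(mut x: u64) -> u64 {
        // splitmix64-style mixing, used here only as a one-way-looking toy KDF
        x = x.wrapping_add(0x9E37_79B9_7F4A_7C15);
        x = (x ^ (x >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
        x = (x ^ (x >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
        x ^ (x >> 31)
    }

    // Each step derives a fresh message key plus the next chain key;
    // the old chain key is then discarded, so you can't step backwards.
    fn ratchet_step(chain_key: u64) -> (u64, u64) {
        let message_key = mix(chain_key ^ 0x01);
        let next_chain_key = mix(chain_key ^ 0x02);
        (next_chain_key, message_key)
    }

    fn main() {
        let mut chain_key: u64 = 0xDEAD_BEEF; // stands in for the post-handshake shared secret
        for i in 0..3 {
            let (next, message_key) = ratchet_step(chain_key);
            println!("message {i}: encrypted under key {message_key:#018x}");
            chain_key = next; // previous chain key is overwritten here
        }
    }

Because each chain key is overwritten after use, compromising the device today doesn't let an attacker go back and decrypt last week's ciphertexts; that is the forward secrecy property being discussed in the replies below.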
You are trying to say that iMessage does not have forward secrecy.
That's true, and is a perfectly legitimate reason to use Signal.
I'm saying that the OP was dissing iMessage because of the Pegasus zero-click exploit, and was saying that switching to Signal gives zero guarantees of protecting you from that, because it likely has its own zero-click exploits, especially because it doesn't attempt to sandbox the way iMessage does with the flawed BlastDoor.
>You can set iMessage to not put your keys in iCloud like I said above by turning off iCloud Backup which makes it fully End-to-End
"Fully" smells like a weasel word here. Either it is E2EE or it isn't. iMesssage isn't by default from what you are saying and if it requires the other end to also turn off icloud backup before it is E2EE then I'd go as far as stating that it is a completely useless attempt to be E2EE. In fact I'd argue Apple is full of sh*t if they actually ever stared that it is E2EE (but I have no idea if they did).
Comparing Signal to such a mess is... well at a minimum it is disingenuous.
The messages are fully end-to-end encrypted, we know that, the EFF has stated as such. However, iCloud Backup means copies of your messages that arrived after the end-to-end process are backed up online. For most people who buy iPhones, having their messages not be permanently lost if their phone is stolen is a fair trade. If you don't want copies of your messages backed up after they arrived through the end-to-end encryption process, then turn it off.
...or I could just use a truly-secure option that doesn't destroy my personal security model. Owning an iDevice presents a considerable security risk to my current setup.
There is no such thing as a "truly-secure option." As anyone truly concerned about security will tell you.
You will be forced to make compromises somewhere unless you want to live under a rock in the desert. You can't drive without a State ID, can't get a home loan without credit, can't work without a Social Security Number except under limited circumstances, can't make money without reporting to the IRS, and so on. It's entirely about what compromises you want to make, and the tradeoffs therein.
Like the nation you live in doesn't have its own Tax Authority with information on you, and doesn't have its own ID Number you need to use for working.
The technicals are different, the point is the same.
I don't count on a "truly secure" option existing, I just manage my risk by reducing the amount of Big Tech thumbs in my personal pie. Apple, much like Amazon, Microsoft, and Facebook, has no right to any of my personal information, end of story.
Apple needs to make it possible for users to choose other ways of sending and receiving messages and listening to music, or of choosing not to do either of those things if they don't want to. Obviously, you can currently install and use other applications that provide the same functionality, but you cannot uninstall or disable defaults.
The most shocking experience for me in trying to evaluate the Mac ecosystem (I bought a MacBook Air when they released the M1) is being in meetings where I'm using bluetooth headphones, take the headphones off and put them back on, and music.app automatically opens and comes to the foreground of my desktop. There is no supported way of disabling this user-hostile anti-feature. I looked on Google and StackOverflow, and all of the suggestions for how to disable it dating back to 2014 or whenever no longer work. Apparently, the likely answer is: turn off System Integrity Protection, reboot, rename or remove the file containing the application launcher, turn SIP back on, and hope that doesn't break anything else and that Apple doesn't revert your changes on the next system update.
That did not seem worth it. The fact that Apple Music can and has been used as an attack vector makes it even worse that it is so tightly integrated with the audio subsystem of the hardware as to take over your device thanks to movements you are making in the physical real world even when you may not be touching the device at all.
I just can't understand what the thought process was in making this a default behavior, let alone one that cannot be disabled.
>where I'm using bluetooth headphones, take the headphones off and put them back on, and music.app automatically opens and comes to the foreground of my desktop
I think your bluetooth headphones are sending a play command to your device when it's connected. I'm sure it's annoying, but I think your macbook is doing the right thing here.
> your bluetooth headphones are sending a play command to your device
Yep this is what's happening. I have a car bluetooth addon that I purchased that does the same thing -- it sends Play commands to the phone on-connect repeatedly until something starts playing.
By default the phone will open Apple Music but if I already had been playing music on the Spotify app, it'll just start playing that instead.
> I just can't understand what the thought process was in making this a default behavior, let alone one that cannot be disabled.
I do not get the bluetooth-automatically-starts Apple Music behavior.
I haven't tried but I just checked the iMessages preferences and you can disable being contacted via your phone number or email addresses, with check boxes for each. As Macs don't have phone numbers I think this would work? I do use apple messages (which is why I didn't try disabling it), but use WhatsApp and signal more than I use the default.
I have no idea how good the mac's security might be, just pointing out my experience.
I agree that Apple could do better with eliminating their bundled apps, but I use third party calendar, address book, reminders, photo, etc with no issues. And I hear quite a few people are willing to use chrome (ugh) as their default browser and safari doesn't get in the way.
Your headphones are probably sending a play command to your computer. There should be some software you can grab that captures that command and routes it to the application of your choosing, or disables it entirely.
Unfortunately it's not built in, but I think it's your headphones doing something nonstandard because my Sony XM4s and AirPods do not fire this behavior when I put them in.
I'm using XM4s and have never done anything to change whatever their own default behavior is. And this doesn't happen on any other device except the Macbook.
I guess it's worth looking into whether there is some way outside of the OS to force it to route requests to the application I already have open and foregrounded that plays sound, but I would expect that to be the default behavior. What is a "play" request to music.app even supposed to do when I have never intentionally opened the app and don't have a playlist set up? It doesn't actually play anything since there is nothing to play. It just opens the app and takes over my screen.
This is pretty easily fixed and took 5 seconds to google a solution. I have like 4 or 5 pairs of various headphones, from AirPods to Jabra devices, and they all do this when I take them off or put them on. The Bluetooth settings all have options to turn it off.
>If you remove the headphones or put them back on, this will pause or resume playback. If you're not wearing the headphones, make sure there's nothing else around the sensor because it may activate and resume playback.[1]
Nope, I’ve never had this happen to me on my M1 machine, Bluetooth or 3,5mm headphones. If you don’t have another media app focussed or active, pressing a media control button/key will open Music iirc.
No. It is generated by the headphones. Some Bluetooth devices, when activated, will try to restart playing and they send a “play” command to the host which responds with the most recent audio app. Many car audio systems do this, too.
> , or of choosing not to do either of those things if they don't want to. Obviously, you can currently install and use other applications that provide the same functionality, but you cannot uninstall or disable defaults.
With Apple Configurator you can disable Music and Messages. It’s not the most user-friendly method, but it is possible.
I think that might just be a bug. Or maybe something in your headphones is causing it to send a "play" command through Bluetooth? That will open the Music app if you have nothing playing already.
Given that the headphones cannot know if there's an app playing already, this should be configurable in the OS: i.e. allow selecting which app (or no one) to launch when receiving a Play command
Only allowing their own app to be associated with the default audio player is anti-competitive, at the very least
It should be configurable (it is, but only through Terminal), but it's also such a minor problem that I can't blame Apple for not wanting to create the API for third parties, design and build the UI, and then document, support, and maintain all of it for years to come. You have to pick your battles as a developer and being an OS dev is no different.
Happens on my non M1 Mac with Sony XM4s. I am pretty sure it is the headphones sending the play command to the computer. Apparently there is a setting in the Sony headphones app to disable this. But this did not work for me. Music.app still opens up everytime I remove the headphones and put them back on.
There are apparently 50k names on that list; last I checked, they had confirmed ~180 journalists among them. Spying on journalists is atrocious, but who are the other 49,800?
They're releasing more names including, "lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state"
I like the end of the article, where he says "it's concerning, but unless you happen to be a major critic of a government, you probably won't be a target of the spyware tool".
I wonder if there is a way to disable iMessage and iTunes usage.
With Windows Server I used to aim for balance in the attack footprint: if Microsoft provided the OS, the component services the server exists to provide should be third-party software (db, web server, etc.) to try and minimize one type of escalation vulnerability... while possibly opening up another, hopefully less bad, set of holes.
You can use a NextDNS configuration profile at https://apple.nextdns.io and a NextDNS account to block the device communicating with many Apple services.
A good way to disable iMessage and iTunes, though, is to simply not have an Apple ID. (This prevents the install of applications via the App Store, however.) You can of course set up the device with no Apple ID and then only add the Apple ID to the App Store (and not iTunes or iMessage/FaceTime/iCloud). This is what I do.
Would it actually have more resources than, say, Apple? I think if Apple cannot do it, I am unsure anyone else could. All supposedly secure smartphones are not actually secure, but they are at least obscure.
I think that one should probably buy an Apple phone (at least they control everything, rather than the cobbled-together Android clones) and disable basically everything except exactly what is needed. At least that reduces the surface area. And keep personal stuff on a separate phone.
Apple can do it (create a security focused phone), it just isn't anywhere near what they want to do. The instant security (or privacy for that matter) gets in the way of profit for Apple they will back away.
Apple is actually not in the business of selling the data of their users. They will also risk aggravating large players in favor of improved privacy. A recent example: App Tracking Transparency [1] which makes tracking an opt-in feature to be requested from the user. To no one's surprise users are happily declining when made this offer. Companies like Facebook aren't too happy about it. [2]
Privacy and security are related, but distinct. Apple has been pushing privacy, but we're talking about security here. Typically the tradeoffs around increasing security have to do with user experience, something Apple typically does not like to compromise on.
Well, keeping things private certainly rests on the security of devices and protocols. That being said, Apple investing heavily in making security unobtrusive isn't in itself a sign of weak security. A lot of it is just well engineered and thus unseen. But documented in parts for everyone to see: https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app...
If I somehow made it seem as if I thought Apple sells data, then that wasn't my intent (but neither Facebook nor Google sells their data either).
However, I do believe that Apple is only doing what you describe as a PR move. At the same time that Apple fights others' advertising and tracking, they are strengthening their own version of it. That users get something good out of it is strictly a side-effect. Promoting Apple because of this is in my opinion worse than promoting Facebook for their behaviour, as Facebook doesn't try to sell it as "protecting their users" as far as I know. Using an Apple phone is likely better than one Facebook had its hands on, but the thinking and ethics behind an Apple product are worse, as they are successfully being extremely disingenuous towards their users about protecting their privacy.
Or maybe it's because they're doing their best to make every iPhone the security-focused phone, while not doing anything that would anger the FBI enough to try to pass legislation. When you are that big of a company, the things you can get away with are much more restricted than a small company.
iOS seems the worst solution; for example, you are forced to use Apple's web engine, so a bug or zero-day in that engine will own all users. Apple would need to give users the ability to uninstall preinstalled stuff and replace it with safer or better alternatives.
The parent post is saying that many of these "secure phones" are, on paper, secure - but that's because companies like the NSO Group don't give them much attention. If they did become the focus of attention, they'd probably burst from a thousand leaks.
I agree. To break apart from the Android/Apple world, surely a team of people could disrupt the ecosystem. It wasn't that long ago that flip phones were state of the art. Somewhere in between then and now, we passed all the barriers to lose privacy.
BlackBerry and Blackphone didn't succeed in being profitable, but perhaps that was not the right time. Perhaps privacy was not so completely lost yet as to be relevant to the public. Perhaps there is enough of a market to sustain that model now?
Simple solution: just use "dumb phones" or burners
No non-open source "smart" phone is going to be secure enough. If you never store your data on your phone, you are safe from these hacks. Now you have to just protect from physical attacks :)
Except for physical attacks. No root of trust means that if your phone was ever stolen, installing a PIN guessing app is easy. Extracting the encrypted data for attacking it elsewhere is also easy.
These apparently exist in the criminal underworld (see the FBI's recent sting using such a project) and for state security organizations (developed by major defense contractors, afaik).
Those are always targeted extra hard since they tend to be used by criminals. See the recent "encrypted phones" (Encrochat, Anom, ...)
If you really care about security, maybe it's better to get a really dumb 4G phone and share its connection with a small-form-factor Linux tablet (but not one running Android).
Of course, inconvenient as hell, but much more secure, especially since you are not running the iOS/Android monoculture, so for anyone to target you it would require a customized effort.
But then you are vulnerable to physical attacks. You don't have hardware root of trust, so installing a PIN-guessing tool is easy. Extracting the encrypted data for attacking it on a computer outside the phone is also easy.
That's a little silly. The iPhone is a "cyber security focused smartphone" and Apple has billions in R&D money going into its phone. That's a nice thing to say but it doesn't really mean much unless you have some way to achieve that in a way that Apple's vast resources can't.
> have some way to achieve that in a way that Apple's vast resources can't.
I think "can't" here runs up against "choose not to". So far as we can tell opsec tends to be a pain in the ass in ways that are fundamental, not a problem with tools. Apple, like any other consumer focused company, doesn't lose focus of this.
The silly thing is that Apple advertises their phone as something cyber security focused, when it can be totally pwned in so many ways.
And you don't need Apple's resources to make something better; it's just that a more secure phone would have much worse UX. Just some examples for a much more secure phone, where you don't need Apple's budget:
- Runs some barebones Linux with minimal packages. An SMS app is an SMS app, not something that makes HTTP requests.
- app store is very heavily vetted
- forced updates, you can't dismiss update notifications.
- minimal attack interface, no smart connection features or accessories.
- Forced Updates? The FBI takes over the update server, forcibly sends out an update that sends all messages to the FBI immediately, and there's no way to stop it. That suggestion is idiotic. Or even better, install Pegasus on all the phones, have them be quietly reporting back to home for a few weeks, with journalists having no way to prevent updating.
- You forgot Hardware Root of Trust and Secure Enclave, like on an iPhone. Otherwise, the FBI can install a tool which just guesses PINs over and over while resetting the PIN attempts counter. It is not possible to build this protection in software only. You need chip-level hardware, and only iPhones in Fall 2020 and later have the Enclave set up to block repeated PIN attempts even if Apple-signed code is loaded. No other phone is safe from their own manufacturer like that.
In that case, you would still need to trust the mostly proprietary drivers and hardware. And if you aggressively remove features, I guess the question becomes why you would even need a phone. Maybe for some use cases it would be better to simply use a laptop.
The iPhone has never been a "cyber security focused smartphone", unless you define security being in focus as being at least a few steps down from profit, design, and usability.
An intelligence agency cannot have the following properties simultaneously:
(1) The ability to detect espionage from China and Russia
(2) The inability to access journalists' phones
If you want an intel agency to be able to thwart Chinese intelligence activities, you can't also publicly state you won't be looking closely into members of a profession who act a lot like spies.
We understand that the intelligence agencies can and do monitor a number of people associated with hostile foreign governments. For example, this is believed to be how "Tucker Carlson got surveilled by the CIA" -- he is believed to have contacted a surveilled Russian agent to discuss interviews with the Russian president.
This is called "incidental collection" and it's a touchy subject for sure.
But this subject is different than the DoJ directly surveilling journalists who leak, which is a problem, and governments surveilling their own citizens directly, not incidentally.
We can and should hold our government(s) to a standard of effective fire-walling of acceptable intelligence gathering and holding them accountable when they go beyond to surveil citizens directly, or indirectly through spying agreements.
We can make sure that the people who surveil Chinese or Russian "diplomats" are totally different than the people who execute search warrants against our citizens, and expect there to be zero crossover there.
> For example, this is believed to be how "Tucker Carlson got surveilled by the CIA" -- he is believed to have contacted a surveilled Russian agent to discuss interviews with the Russian president.
Yes, that happens all of the time but one difference here with Tucker is he was deliberately "unmasked." Normally when an American is caught up in foreign surveillance, their identity is blocked out or masked, "incidental collection" as you said. Someone purposefully unmasked it. And someone purposefully leaked it. The same thing was done to General Flynn.
That is not the case, and it seems like you've been misled. It's completely routine and legal for unmasking to occur, and in fact is integral to understanding the intelligence.
How could an analyst understand the conversation without knowing both parties?
My understanding is that some 10,000 legal unmaskings occur per year, and Gen. Flynn's and Tucker's unmaskings were both routine, legal, and integral to analyzing the intelligence.
When one considers the litany of crimes the disgraced lunatic Flynn committed, it's no wonder he got caught up in collection and that his identity was important to understanding the collection!
I have never heard of MVT (Mobile Verification Toolkit) before this article, but now I may just have to test it out; seems like an interesting project.