We at the Home Assistant Companion for iOS team have wanted to implement end-to-end encryption for our push notifications for a while now, but Apple has denied our request for the com.apple.developer.usernotifications.filtering [0] entitlement multiple times. Wondering if, with today's news, we could apply again and get it.
For context, we are sending ~35 million push notifications per month on iOS and ~67 million on Android; see more at [1].
We implemented APNS encryption for Firefox iOS without much trouble. Keys are negotiated out of band, and message decryption is done in a Notification Service Extension, which lets you pre-process incoming notifications. We did not need any special entitlements.
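The shape of that flow can be sketched in Python. To be clear, this is a toy for illustration only: the encrypt-then-MAC stream construction below uses only the stdlib, while a real deployment should use a vetted AEAD (e.g. AES-GCM or ChaCha20-Poly1305 from a proper crypto library). The server side encrypts the payload with the out-of-band key; the extension side verifies and decrypts before presenting the notification.

```python
import hmac, hashlib, json, os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by running HMAC-SHA256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_payload(key: bytes, payload: dict) -> dict:
    """Server side: encrypt-then-MAC the notification body."""
    plaintext = json.dumps(payload).encode()
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return {"nonce": nonce.hex(), "ct": ct.hex(), "tag": tag.hex()}

def decrypt_payload(key: bytes, blob: dict) -> dict:
    """Extension side: verify the MAC, then decrypt and display."""
    nonce = bytes.fromhex(blob["nonce"])
    ct = bytes.fromhex(blob["ct"])
    tag = bytes.fromhex(blob["tag"])
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    pt = bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
    return json.loads(pt)

key = os.urandom(32)  # negotiated out of band in the real flow
blob = encrypt_payload(key, {"title": "Motion detected", "body": "Front door"})
assert decrypt_payload(key, blob) == {"title": "Motion detected",
                                      "body": "Front door"}
```

Apple only ever sees the hex blob; the plaintext exists server-side and inside the extension.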
Not saying you are obviously compromised, but the simplest explanation after this news is that maybe they relay the authorization to NSA et al, and they OK'ed your case and not theirs for some reason...
The simplest explanation is that reviewer A was responsible for Home Assistant Companion's request and reviewer B was responsible for Firefox's request, and they judged the request differently. Or that implementation details made the two cases different. Or that the company policy changed over time between the two requests. "Apple can break Firefox's encryption so they happily allowed it" is certainly not the simplest explanation.
The simplest explanation is not that Apple is colluding with multiple national governments' intelligence agencies, which all specifically agreed to a conspiracy against Firefox's mobile browser notifications.
You clearly do not work in the field. There's an industry for that. It's not as uncommon as you think.
I consulted for a company that implemented access logs to telephone records for law enforcement. A friend works at Ring, just fetching data from the DB for the legal team after they receive pro forma requests.
And those are the regular cases, in place since the '80s, with judges from regular courts. If there's a law these companies must follow, they will implement the same systems they had for other laws. The only difference is that these new ones prevent them from discussing it. The rest is the banal day-to-day of legal teams and the industry that exists to sell them compliance services.
The filtering entitlement allows us to decrypt messages and, depending on the content, choose not to send any notification (for example, if a user sends an app-specific command, like asking for a location update). The example you linked requires that a notification be emitted at the end, which we don't want.
Zac also just let me know the other reason we need filtering: so we can properly unsubscribe users from notifications when one is received from a server they are no longer connected to.
For my understanding: you need that entitlement so you can send an encrypted invisible notification, which you can then decrypt locally in your app and push out again as a local notification that doesn't go over the network (i.e. not use APNS)? Or is doing this kind of thing just weirdly tied to that specific entitlement?
No, you do not need this just for decryption. The entitlement is only required if you want your notification extension to be able to silently eat the notification. Normally an extension must transform the notification, and then the system presents it to the user.
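Concretely, the entitlement being discussed is a single boolean in the extension's entitlements file (the key name is from the comment above; the fragment below is illustrative, not a complete plist):

```xml
<!-- Notification extension's .entitlements file (fragment) -->
<key>com.apple.developer.usernotifications.filtering</key>
<true/>
```

Without Apple approving that key for your provisioning profile, the system insists the extension hand back some visible notification.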
APNS is not a "let my server wake up my app in the background whenever and however often I like" mechanism.
Defer handling other things until either your extension or your app would have run anyway and do them at that time.
You can do quite a few things - it’s not a widely written about area of development (relatively speaking) but you have a surprising amount of stuff at your fingertips.
I built a finance app a few years ago that would take market data via a push notification, and then the transform extension would render it to a chart image and attach the image to the notification to avoid generating them server side.
> You can do quite a few things - it’s not a widely written about area of development (relatively speaking) but you have a surprising amount of stuff at your fingertips.
I would pay to commission a blog post on the topic if you’re willing to write it.
I unfortunately don't have time to do that, but I believe the Guardian did a similar trick some years ago for election result updates. Try googling around for that, it's very interesting reading.
Yup that is also a great way. Just send a message ID and fetch the actual content in the notification extension that can pre process incoming notifications.
I may be misreading the article, but it seems to allude to metadata- and timing-related analytic techniques, rather than the contents of the notifications:
“...for metadata related to push notifications to, for example, help tie anonymous users of messaging apps to specific Apple or Google accounts.”
So maybe it's more that they (or somebody) send some messages to the account they want to ID, then request the specific device identifier that received notifications for that app at all of those times?
Would obfuscating the content make much difference with respect to that category of technique?
They are not, currently, as we need to roll out E2EE on iOS and Android in lockstep: they both use the same mobile_app component, as well as the local push path that bypasses Apple and Google entirely, which we would also like to encrypt.
He even inspired Snowden to expose the illegal mass surveillance programs. IIRC Snowden reached a breaking point when James Clapper, then director of national intelligence, lied under oath to Congress when pressed about domestic surveillance by senator Wyden.
It's sad we don't hear more about people like this in positions of power.
The entire time period of the Bush admin is a microcosm of the unresolved issues of today: voting machines, government overreach and spying, security, encryption, copyright, bad behavior by corporate entities (M$ has a cohort).
That said, I also think things happen way before the first term. It requires consent of the Party to get on the ballot and hundreds of millions of dollars, increasingly trending towards billions, to run a campaign with a chance of winning, because in a democracy it's quite self evident that the person who spends the most money, must be the better person. Results don't lie!
And on top of all of this, if you aren't shaping up to be who the Party wants, then the completely independent, free, and honest media will demonize you. And even if this doesn't destroy you in the eyes of your own supporters, it'll rile up your opponent's base enough as they race to vote (for somebody they also don't even particularly care for) because if they don't, then you might win! That cannot be allowed to happen as it would obviously be the literal end of the world.
This is why it's ever more important for social media to be controlled, lest somebody angle-shoot around the traditional path to success - the media. If somebody's gaining traction on social media, then he's saying things that disagree with the powers that be. Since he's disagreeing with the powers that be, he is spreading misinformation by definition, so he must be censored. For our safety.
I would like to ask you a serious question: do you not think it is extremely weird that so many people will at least sometimes[1] agree that "democracy" is essentially fake, but then at other times take quite literally the opposite stance...praising it, defending it, singing its virtues, etc? Granted, it is theoretically possible that each individual is 100% consistent at all times, and I am simply observing people who are on different sides of the argument, but now and then I'll check into someone's post history and find evidence that pretty soundly rules that out (subjectivity noted).
A standard response to this is something along the lines of "Oh, that's people just being X (dumb, etc)", but I do not believe that is an even remotely accurate description of what is really going on. But for even more irony: on one hand, most people tend to think democracy, governance, and all the things downstream of it (ie: their literal experience here on Earth) is a very big deal (the passionate debate over what exactly the events of January 6 "were" is a prime example), yet it is almost impossible to get anyone to engage in a highly serious conversation about just what the fuck is going on here on Planet Earth, 2023. It is as if there is some sort of a yet to be discovered phenomenon in play.
Do you think I might be crazy? I very often genuinely feel like I am living in The Truman Show.
[1] I feel like this is a crucially important detail that is rarely investigated or even contemplated.
That's a very interesting observation that deserves much more thought than I can give in a quick reply.
But I think one major issue is that we're living through an obvious inflection point in history. When the US was competing against the USSR, we achieved numerous objectively incredible things. For but one example, we went from having never put a single man in orbit in 1962, to putting a man on the Moon, in 7 years! In modern times, with modern tech and knowledge, we struggle to replicate what was done 60 years ago.
And so I am exceptionally critical of our systems in modern times, but also look at them with glowing fondness at what they have achieved in the past. But socially we rarely distinguish between the two - Democracy is Democracy. And so this can appear to be cognitive dissonance when one loathes 'current democracy' yet holds dear 'classical democracy.' And I think many people also feel similarly, even if they may not actually even realize it.
But back to the inflection point, for the past ~80 years, and especially the past 30, the US has reigned relatively supreme owing to a large technological edge driving a large economic edge. But now China is equalizing technologically which is also driving their economy upward. It's already the largest in the world PPP adjusted, and will soon enough also be the largest in nominal terms as well. A country of 340 million simply will never be able to compete against a country of 1.4 billion. So they are set to become the world hegemon.
What will that mean? Well historically they've always been relatively insular, even when going through periods of great power, and I don't really expect that to change. In the early 15th century they were, by far, the greatest maritime power yet one does not find Chinese colonies in the Americas, India, Africa, or pretty much anywhere. But on the other hand, I think this change does scare many people, because what if China becomes the new US and just starts trying to put its tendrils into everything and turn everywhere into little clones of the Chinese political system?
And fear is such a strong driver of irrationality; it's the same reason people continually vote for people they don't like. I think this fear tends to cause a sort of rally-around-the-flag effect, where people can see that this flag is one they scarcely recognize, let alone truly love, yet it looks far less scary than the one they don't know at all. Drive that fear of the unknown into people, which our political types thrive at, and the people will come racing back, not entirely dissimilar to a woman in an abusive relationship.
I still feel this is not even scratching the surface, but it's at least a first effort! It's such an interesting question that one could easily write several books on the topic. And in the future, people looking back, probably will do exactly that!
This is all well and good, but I sincerely believe that it does not address the question: why are humans, including the genuinely much smarter than average folks here on HN, unable to have a very serious discussion about ~~whether~~ the degree to which "democracy" in the US (or Western countries in general), is fake?
I believe it is very fair to say that it is not a question of whether it is fake(!) at all (as a binary), but a question of how fake is it, and in what specific ways?
And yes, I can certainly appreciate why people would have an aversion to this, and the various other "just so" memes that are trotted out when the topic comes up, but the question remains: why is NO ONE capable of taking this topic very seriously?
I'd expect in this exact case the answer might be easier. It's because "fake" has an ambiguous meaning. It covers everything from the completely absurd (elections aren't even actually happening), to the improbable but possible (elections being rigged), to the self-evident (the people being elected, and their subsequent actions, being completely unrepresentative of the people or their interests).
And while all these topics (and many more) can be encompassed by 'fake', they all are quite radically different.
The problem is that society has been exceptionally divided, again owing to fear politics. And so even if an election was rigged, the people that benefited from it would basically demand a level of evidence that would never exist, even in cases of rigging. And by contrast, those that suffered from it would take the slightest irregularity as undeniable evidence of their rightness. And so it's quite unlikely that either side could ever reach a burden of proof that the other side would be happy with.
I think an important point in general is that democracy is what people think it is. If people think elections are fair or a viable means to change their political fate, then they're going to act accordingly, and vice versa if they don't. So personally I think having as absurdly transparent and open a system as possible is critical. Irregularities and oddities should be publicly emphasized across the aisle so (1) everybody can discuss things and be on the same page, and (2) they can be remedied and not repeated.
> I think an important point in general is that democracy is what people think it is.
I think you hit the nail on the head here - and, it doesn't apply only to democracy, it applies to everything. It's funny, because lots of people also know this, yet do nothing about it.
> If people think elections are fair or a viable means to change their political fate, then they're going to act accordingly, and vice versa if they don't.
Where "acting accordingly" is usually ~behaving on social media in the corresponding manner.
Gosh, I wonder why it seems like humanity has little control over its fate.
Penalties for whom? Clapper was bound by conflicting laws requiring both honesty and secrecy. This problem goes all the way to the root of government and legal system.
There are ways to respond to a question without violating secrecy and without being dishonest. It's like pleading the fifth: there are situations where it is acceptable to not tell the whole truth because it is counter to a law or personal right. In those instances the person needs to explicitly proclaim the basis on which they are obscuring the answer. Giving untruthful, absolute answers about the non-existence of something is inconsistent with the above.
Nine times out of ten, when there's a news piece about a senator advocating for privacy and constitutional rights with regards to tech, it's senator Wyden. He's on the senate intelligence committee and has a decent track record of getting shit done with bipartisan support, so he's not just virtue signaling for votes either (not to mention that he's basically unbeatable in state election with all the support he has in Oregon). He's 74 years old, I do hope someone will step up and carry the torch when he retires. It's a losing battle but it's still important that we have someone who is competent and well respected to fight it for us.
I know it's the Oregonian in me and getting to meet him as a kid where he spent a decent amount of time with my class, but he strikes me as a senator that Oregon can be proud of. I might not agree with him on everything, but in my personal opinion, he's advocating and pushing for change on what he personally believes in. Makes me wish my current senator was more like that.
> he's advocating and pushing for change on what he personally believes in
That's certainly a step above many of the grifters we have in government, but it's also not necessarily a good thing. People can truly believe in stuff that's harmful or flat out wrong.
Gosh I am so happy to have like the best senator in the senate next to Bernie Sanders in Oregon.
Oregon is an extremely based state. Y'all crap on PDX but the reality is that we have more freedom and less tyranny here than in any other state in the nation, and possibly in the world. PDX is "bad" because it's one of the only places in the world that hated the cops enough to actually muzzle them - and not living in fear of the boot is worth needing to deal with homeless people.
Want to smoke weed? Check (lowest prices in the world). Want to do psychedelics? (functionally legalized) Check. Want to shoot guns? (relatively lax gun laws for a blue state) Check. Want to not be spied on? As check as Ron Wyden can make it!
The tyranny of the masses is still a tyranny. I'd personally like to move to a state where all smoking, but at least weed smoking, is illegal. I really don't like second hand smoke, especially when it smells and hangs as much as weed smoke does.
It's already illegal to smoke weed in public, and cigarettes in most places. Frankly I don't think outright prohibition addresses that any better than the existing system. Nor do I see how having bodily autonomy is necessarily a tyranny of the masses.
In all seriousness, Utah sounds like your ideal, so long as you stay outside of Salt Lake City. I'm glad to no longer be a resident.
Not enough trees. Nor enough employment in my non-remoteable field.
Public smoking is a concern, but the smoke will leak even if smoked inside of a home. With edibles and inhalers I don't understand why people thought it was a good idea to legalize marijuana smoking.
> Nor do I see how having bodily autonomy is necessarily a tyranny of the masses.
Generalizing the principle of the saying that your right to swing your fist ends where someone else's nose begins.
Being able to smell from the outside that someone is smoking inside their home is not "second hand smoking". You'd need to be in the same room as them to get the necessary level of exposure.
When did the goalposts move from 'smelling cannabis in the air should be banned' to 'prolonged second hand smoke in the same room at cannabis dispensary levels has an effect on your blood vessels if you're a mouse'?
I don't agree with that. If blasting music can be a matter for legislation (nuisance laws and the like), then so can bothering people around you with the reek of smoking weed.
I recently visited Manhattan and walked many miles touring it. The smell was in many places, with people smoking out in public. It is offensive and rude.
If you take one of the world's most popular pastimes that personally, and want it legislated against on that basis, I have no time for your arguments.
Like - Manhattan. Yeah, you're gonna smell cannabis in Manhattan. It doesn't matter the tiniest bit what the laws are; you're gonna smell it. It's not something to take as a personal insult, and it's wild you insist that it is.
Anyway, weren't we talking about a serious issue? Like, is this why Americans are ok with all the domestic spying - they're too worried about sniffing a reefer on the wind? Ugh.
> If you take one of the world's most popular pastimes that personally, and want it legislated against on that basis, I have no time for your arguments.
Many popular activities are expected to be done where they do not impact others. In my local park, a man was arrested for masturbating in public, certainly a popular activity.
> It doesn't matter the tiniest bit what the laws are; you're gonna smell it.
Over the course of the last few decades, I smell it more in Manhattan and in many other places where it has been legalized. Apparently laws do matter. Not that I want any laws against it. I would just prefer that people not be assholes about it.
I did not interpret such sociopathic behavior as a "personal insult".
As for the more serious issue, I did not take the thread here. I replied to your comment, "OP is complaining that he might get a whiff coming from his neighbors house." I pointed out that one does not need to enter someone's house in order to be forced to inhale their smoke.
> Your sense of smell is subjective, and not a good reason for legislation.
You do understand that many tort suits, and outright laws, are over subjective harms, right? (trash in neighbors yards, loud sounds late at night, smells from chemical industries, etcetera) That laws such as disability protection laws exist?
> ""In this case, the federal government prohibited us from sharing any information," the company said in a statement. "Now that this method has become public we are updating our transparency reporting to detail these kinds of requests.""
When they were building the CSAM detector: "what if the government asks you to extend the detection to include other media such as political meme images?" "we would refuse".
Being prohibited from disclosure does not in any way refute their promise to refuse. It would make it hard to prove one way or the other, but that is not the same problem.
This is really the conclusion of the debate over whether privacy protections should be legal or technological.
The answer is both, which in particular means that they have to be technological. We need to prove their inability to defect with math because otherwise they can just lie about it.
What you need from the law is the right for everybody to use that kind of technology by default.
We can safely assume they are already doing it; it's just that laws are coming slowly to normalize this surveillance, so they can't tell us just yet. Vote for those laws to learn more.
Legitimately scary stuff but not surprising. Snowden risked everything to tell us what was going on and where things were headed yet here we are. At this point, it seems the only way to not be subject to this type of treatment by our governments is to completely unplug from the system, but of course, practically speaking, this isn’t feasible for the overwhelming majority of our society. So what are the alternatives here?
Are powerful mobile phones packed with Apps and constant notifications so necessary to a full, fun, enjoyable techy life, really?
I am legitimately surprised that more tech-heads didn't see this state of affairs (and all the other obvious drawbacks of The World's Most Featureful Spy Device, controlled end-to-end by a giant multinational, becoming ubiquitous in people's back pockets) as an obvious, absolute given, right from the very start of the whole smartphone trend. Instead we all seem to have bought into it hook, line, and sinker.
The really scary thing is that, even setting aside what you said, they're starting to become more and more necessary for bare-minimum existence. We're not quite there yet, but it's becoming harder and harder to simply exist without one of these things.
Conduct yourself on your phone the way you would in public in front of friends and family. Only text/browse with stuff you'd be okay with a stranger knowing. I've operated this way for many years for the exact reason that this article highlights.
Stop being wilfully ruled by war criminals and start prosecuting their crimes.
The civil means for wresting back control over our government exists - we have to have the courage to use it. That means, prosecuting our own war criminals.
After all, it is the criminals with the most blood on their hands who want to use the tools of the state to repress the public, from which they derive their actual power, and who are the only ones with the resources to actually do something effective about the criminals getting away with it.
These rights-violating mechanisms exist to protect the criminal ruling elite only.
Seriously, to clean up our government: prosecute our war criminals. The war crimes are real, the crimes against humanity are real, the human rights violations are real. What isn't real is the general public's stomach for the embarrassment they must experience in order to confront the fact of their own wilful rule by dyed-in-the-wool war criminals.
This discomfort at the fallacy of our own moral authority over nations considered to be 'worse human rights violators' has to be replaced with outrage at the actual human rights violations we are allowing to be committed in our name, or else we continue the slide into the abyss..
You have to be willing to live with something less feature-rich than what you can get on the latest iPhone 27 Max Pro(TM). And you have to be gutsy enough to click an "Install some other OS" button in your web browser with your phone plugged into a USB port.
Then to extend to services, a lot of it depends on your ability to deploy your own stuff. This can involve a lot of time reading how-to guides after you've installed Linux on a machine in your house. Given how much documentation is readily available online most people with a high school diploma can probably figure it all out, but you have to be motivated enough to refuse to be helpless.
Today you can purchase a Pixel 7, 7a, or 7 Pro and flash GrapheneOS on it. There's a lot you can get from F-Droid, but if you really want Google Play Store apps, GrapheneOS does a reasonable job of sandboxing them. Create a new Google account just for that installation of the Play Store.
Never sign into anything Google, Microsoft, Apple, Facebook, Twitter/X, LinkedIn, or whatever from your phone. Or at least if you absolutely have to, use a trusted web browser in Incognito or Private Browsing Mode.
Keep location tracking disabled for everything but your favorite maps app. Put your phone in Airplane Mode when you're traveling if you don't want cell towers to capture your location info. GPS reception still works.
WG Tunnel can get you to your server when you're not on your home network. Some people swear by Tailscale, but you have to trust them with your node info.
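For the WG Tunnel route, the phone side boils down to a small WireGuard config. All keys, names, and addresses below are placeholders; the point is that `AllowedIPs` controls how much traffic you route through your home server:

```ini
# Phone-side WireGuard config (illustrative values only)
[Interface]
PrivateKey = <phone-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
PublicKey = <home-server-public-key>
Endpoint = home.example.net:51820
AllowedIPs = 10.0.0.0/24    # only home-network traffic; 0.0.0.0/0 for everything
PersistentKeepalive = 25    # keep NAT mappings alive for incoming traffic
```

The matching server side just lists the phone's public key as a peer with `AllowedIPs = 10.0.0.2/32`.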
Syncthing works for backup for a lot of people.
For private maps I've been using Organic Maps with some success. Searching for places isn't necessarily trivial, but the navigation feature has always worked well for me.
For private comms you really need it to go both ways (you and the recipient). The weak point is likely to be the recipient's environment, but at least something like Signal gives you a chance.
Something like Fastmail works for email and calendar, since they're probably not building a profile on you and selling that to advertisers. DAVx5 is free from F-Droid for calendar sync.
Kagi works really well for search. Also, they probably haven't sold out to advertisers. DuckDuckGo is another option with another set of trade-offs.
For music you can serve FLAC files via minidlnad to VLC. minidlnad was a 3-minute tweak to a config file after I apt-got it. There are tons of options here.
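If it helps, the minidlna tweak is usually just pointing it at your music directory in `/etc/minidlna.conf` and restarting the daemon (paths and names below are examples):

```ini
# /etc/minidlna.conf -- example values
media_dir=A,/srv/music      # 'A' restricts this directory to audio
friendly_name=Home Music
inotify=yes                 # pick up new FLACs without restarting
```

VLC (or any DLNA client) on the phone then discovers the server automatically on the local network.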
Explore F-Droid for stuff that might do better for privacy, like Spotube, FreeOTP, Podverse, Librera FD, Cheogram, etc. I'm not claiming that the F-Droid apps will all give you perfect privacy, but in general they're probably better than a lot of the stuff that's pushed in the Play Store.
Check out e-books and audiobooks from your local library. Or copy them to your device via Syncthing after feeding your e-books through Calibre's DeDRM extension. The idea is to keep from having to contact license servers from your phone.
Give up on Apple or Google Pay, credit cards, and loyalty programs if you don't want your eReceipts collected and added to your consumer profile by companies that do that sort of thing.
None of this is a surefire way to give yourself perfect privacy, but it can greatly reduce the amount of your personal information that your government and/or corporations collect on you via your mobile device.
> You have to be willing to live with something less feature-rich than what you can get on the latest iPhone 27 Max Pro(TM). And you have to be gutsy enough to click an "Install some other OS" button in your web browser with your phone plugged into a USB port.
I agree with all of this, but realistically it's not just a simple matter of being willing to live with less features - this is a significant amount of work to investigate, implement, and upkeep for someone who is techy, let alone a less technically-inclined person.
I can barely get my family to use Signal, let alone install F-Droid or learn how to configure Syncthing.
Ultimately, this does indeed come down to "if you use a big product, you're likely being spied on", but this shouldn't be the individual consumer's burden.
This is an excellent reference. It is worth emphasising, though, that this does not make the device secure.
No matter what OS you put on it, there's still a proprietary baseband blob with execution permissions underneath. All of these devices are built compromised.
Absolutely! I was focusing on moving toward a generally more privacy-centric way of using a mobile device. Of course an insecure device can be made to neutralize any privacy-protecting measures I've described. However just because a device has a vulnerability doesn't necessarily mean that it will be compromised. In fact I'd be surprised if there is more than, say, a 1% chance that any given random Pixel 7 phone is actually compromised via the baseband code.
Also, that said, if you are personally targeted by your government for surveillance, all bets are off. I don't know how to defend against that, but a potential start would be to eliminate all electronic devices from your person and your house and then to set off a powerful EMP every time you walk through your door when coming back home.
It's refreshing that Google, the same company that makes Android, has recently called out baseband blobs for their poor security.
While I'm not convinced it's causing widespread exploitation, baseband blobs are definitely a problem, and hopefully some of the advocacy that Google's Android org puts on phone vendors can get us to a better place. And maybe efforts from organizations like Librem can push us toward modems with fully OSS firmware.
We are headed in a direction where you will need the Google Play store or Apple's store to do groceries, read messages from the government, use two-factor authentication, pay, show your ID, order food, and much more. Web sites are being phased out and so are physical / legacy alternatives.
I feel that way too. Which is why I feel it's important to push back by at least asking every business whether they accept cash, and always using cash when they do. If they don't accept cash, I always make it a point to mention that they're paying processing fees for that transaction that they could have avoided if they had accepted my cash instead. Simply raising the issue in a non-confrontational and casual way keeps them thinking about it, which can lead to some of them acting on those thoughts after it happens often enough.
Simply acquiescing without any mention of cash makes one complicit in the pernicious slide toward a surveillance-infused market.
Build parallel networks where sections of society can operate and associate outside of what the government has its hands in, or with technological guarantees of privacy and safety. I understand this is a tricky constraint to scale, but it's not impossible: current iterative solutions are at hand, and people have coordinated before around successfully building alternative societies in terms of communications, mutual aid, and safety provided to the public regardless of family. These are a threat to government and business, though, as they minimize people's reliance on those institutions, which is a kind of power that money alone has less control over (so they historically lean on violence, e.g. the Battle of Blair Mountain). I believe technology uniquely makes it possible to scale potential solutions because of how much it has cheapened unit cost and labor cost through automation, commodity hardware, and open source.
California, with the support of Gavin Newsom, is building "no go" zones for wildfire response. Sounds OK, except: there's a video recording of a local mayor at a wildfire update press conference, asking with deference when the main highway to his town would re-open, and the response from a tense and aggressive CHP leader was "maybe that road will be closed for six months, maybe next year" - no respect, instantly snapping at a mayor, on camera. How are these zones decided upon? "Immediate area" is not what was being done in that event.
much respect to rsync -- I believe it was a press CalFIRE event during one of the many ultra-serious fires, and I did not record that video. The interaction was distinct in my mind however, for the reason stated here. The town was a smaller town on the edge of a large forested area. The scope-creep and instant-authority of it all, as bad as that is, is dwarfed by the loss of life and property, flora and fauna during the wildfire. In the execution of emergency duties however, all the rules of engagement should be ON, IMHO.
If they use IP to deliver notifications, then the gov can demand they hand over the IP address a notification was delivered to. From there, location isn’t hard.
I should have been more specific. Although they could use IP geolocation, they can also get data from the cell carrier that delivered the notification to that IP address.
So a gov finds that IP address 7.8.9.0 received one of these notifications at 12:34. They then see that 7.8.9.0 is one of ATT’s addresses. They go to ATT and learn that address was used by their customer onionisafruit at 12:34 and the device was 5ms away from tower A.
That's hardly necessary. I think the attack goes like this:
You have captured the device of some group member, and you want to investigate his associates, but you don't know who they are. So you ask Google and Apple: make a list of all devices that have received a push notification sent by <list of messaging apps>, where those devices have received at least 200 notifications within 50ms of a notification received by this device. (You'd have to make one of Google or Apple share the target's timings with the other.)
That will give you a list of everyone who is in a group chat with your target, regardless of whether or not the messages were deleted or encrypted. Now you tell Apple/Google to give all the data on those accounts. You will probably find enough in their Gmail/location history/browsing history to identify nearly all associated people without ever bothering to look at IP addresses.
This also works if you get into a chat with your target. You send some messages and then have Google/Apple identify their device via timing, then identify all their associates.
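The correlation step in that attack is trivial to sketch. Assuming the providers can export per-device notification timestamps, finding co-members reduces to counting near-coincident deliveries; the 50ms window and 200-notification threshold below are the figures from the scenario above, and all names are hypothetical:

```python
import bisect
from collections import Counter

def likely_group_members(target_times, device_logs, window=0.050, threshold=200):
    """Count, for every other device, how many of its notifications arrived
    within `window` seconds of one of the target's notifications; flag any
    device clearing `threshold` as a probable co-member of the group.
    `device_logs` maps device id -> sorted timestamps (seconds)."""
    hits = Counter()
    for device, times in device_logs.items():
        for t in target_times:
            # count this device's notifications landing in [t - window, t + window]
            lo = bisect.bisect_left(times, t - window)
            hi = bisect.bisect_right(times, t + window)
            if hi > lo:
                hits[device] += 1
    return [d for d, n in hits.items() if n >= threshold]
```

The point is that nothing here needs message content; delivery timestamps alone separate group members from bystanders once you have enough samples.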
Just to make it crystal clear: we recently learned that the FBI served Twitter a search warrant for Trump's account, which gave them access to all of his Twitter followers. https://www.bbc.com/news/world-us-canada-66365643.amp
Protip: the harder a company pushes you to download their app, the more they have to gain from it. 99.999% of the time it's because they want access to as much of your data as they can sneak out of your device, usually for selling it.
One notable corollary is, the shittier the mobile browser webapp implementation is, the more they want to push people onto their app. See: Facebook, Twitter, Reddit, etc.
Exactly this. Never install a company's app unless you absolutely need to. Use websites instead whenever possible. If you do need to install an app, uninstall it as soon as possible even if you know you'll need it again at some point.
> the shittier the mobile browser webapp implementation is, the more they want to push people onto their app. See: Facebook, Twitter, Reddit
Yelp is the gold standard in this regard, blithely pretending that they can't show you any photos (or is it more than a few photos? I avoid yelp on mobile so much I can't recall). It's probably the right move for them, because the photos are 99% of the reason I ever want to use Yelp. Reviews can be outright lies or simply written by people ~~with no taste~~ whose tastes are not simpatico with mine, but photos don't lie*.
No, having a dumb phone is not enough. A malicious actor can pretend they need to deliver an SMS to you, which may result in the network disclosing your location (anywhere in the world). Mobile networks probably don't honour aggressive probing from just any peer, but that doesn't mean nobody can do this at scale. None of this is new.
This isn't necessarily true. When you install the Signal app on an Android phone that doesn't have Google Play Services installed, it receives push notifications using its own notification daemon instead of using Google's. This, of course, has significant battery life costs.
All I can say is that Signal uses more battery than any other app on my phone. Since my Pixel 7a's last complete charge 3 days ago, Signal has been in the foreground for only 2 hours, but it has been running in the background the entire time.
This reminds me, whatever happened to mesh networks? If you wanted to be out and about in public, you could simply carry a very anonymized device that had only more basic abilities. But among those abilities, you could certainly send messages and maybe even smaller-sized files - all over a mesh network. Feds could infiltrate it, but it wouldn't be nearly as trivial as it is right now. And users could rotate their devices. Furthermore, if the device in question wasn't a real phone, but rather something more generic (a wifi-capable device with a keyboard, virtual or physical), then it wouldn't even need to have an IMEI.
Apple AirDrop was basically this, but they neutered it at the request of the Chinese government. It still works, but it automatically turns itself off every 30 minutes, so you can't (for instance) opt-in to allowing people to automatically push uncensored news to your phone during your daily commute (without interacting with the phone every half hour).
(It isn't technically a mesh, since it doesn't support multi-hop routing. Still, it is peer to peer, and doesn't require a data connection.)
Apple also has an API called MultiPeerConnectivity[0] that handles this better than AirDrop. I’ve long wanted to try building a mesh network with this. Not sure about multi-hop, maybe that could be part of the business logic.
A better example is perhaps Apple's Find My network in which they explicitly said that locations of your Apple devices (including AirTags) would be transmitted over a mesh network and eventually to Apple's servers so you can see them on your iCloud console.
From an anti-censorship / user interface perspective, AirDrop is closer to the mesh network dream, but, yeah, from a network topology perspective, Find My is an actual mesh network, and AirDrop is not.
I'd love to see something with the characteristics of Find My, but open to arbitrary data transfers. The cell connectivity in my area is poor (due to monopolies not bothering to build out infrastructure, unless you believe there's no demand for network connectivity in the SF Bay Area...), and most places I drive to have guest wifi networks anyway.
I can easily imagine dropping my cell phone plan and only being available for high-latency text messages (and emergency calls to 911) when out and about.
They're still a thing, and more of a happening thing than ever because they're useful for IOT. There's a bunch of private LoRa network operators offering a mix of free and paid services. Amazon is already a large player in this space because of their delivery network.
That’s not how an AirTag works. Anyone can request any AirTag by simply knowing the public key (which is broadcast via BLE). Then you just need the private key which doesn’t need to be backed up by disabling keychain backups.
Some issues could be prevented if push messages added end-to-end encryption by default, something that shouldn’t be particularly hard to use if it was built into the dev tooling. Instead, developer recommendations like this one [0] suggest that you should put content into your push messages and optionally use a separate library to encrypt them. Clearly developers aren’t doing this, hence the opportunity for surveillance.
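As a sketch of building that in: assuming a key negotiated out of band between the app server and the device, the server puts only ciphertext into the push and the device decrypts before display, so the push relay never sees plaintext. The SHA-256 keystream below is a toy stand-in for illustration only; a real implementation would use a standard AEAD (roughly what RFC 8291 does for Web Push):

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy PRF-based keystream; illustration only, use a real AEAD in practice."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_push(key: bytes, plaintext: bytes) -> bytes:
    # Server side: fresh nonce per message, ciphertext goes into the push payload.
    nonce = secrets.token_bytes(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt_push(key: bytes, blob: bytes) -> bytes:
    # Device side: runs in the notification-processing extension, before display.
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))
```

If the tooling shipped something like this by default, the relay would be left with timing and destination metadata only, rather than message content.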
The timing would still give you away - with a privileged network position you can tell that a user sent a message to a messaging service, and that some set of users got notifications from that messaging service moments later. Observe that enough times and you'll have good confidence in the members of a group.
If you're trying to hide from that type of attack you need to send a fixed-rate stream of messages (most of which are dummy messages, with the occasional message containing genuine content - like a numbers station). Furthermore, every point in the chain also needs to avoid revealing which messages are genuine (by fetching the encrypted message from the server only when it receives a genuine notification, you're giving data away).
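A fixed-rate sender along those lines can be sketched as follows; the tick interval, the fixed payload size, and the `send` transport are all assumptions for illustration:

```python
import queue
import secrets
import threading
import time

def run_fixed_rate_sender(outbox: queue.Queue, send, interval=5.0, stop=None):
    """Every `interval` seconds, send exactly one payload: a real (already
    encrypted, fixed-size) message if one is queued, otherwise a dummy of
    random bytes with the same size. A network observer sees a constant-rate
    stream of identical-looking payloads either way."""
    while stop is None or not stop.is_set():
        try:
            payload = outbox.get_nowait()        # real message, pre-padded to 256 bytes
        except queue.Empty:
            payload = secrets.token_bytes(256)   # dummy, indistinguishable on the wire
        send(payload)
        time.sleep(interval)
```

The cost is exactly what's noted below: constant radio and battery use whether or not anyone is actually talking.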
The operator of the app could send messages at fixed intervals to make it more difficult to correlate the messages (more samples required to have confidence in the recipient). If they send dummy notifications they'd probably fall foul of Apple/Google's constraints around invisible-to-the-user notifications (I know Apple prohibits them, I assume Google does as well)
I can't see that frustrating this type of attack would be interesting to Apple/Google: it would push up power & radio bandwidth requirements for everybody pretty significantly.
In fact, at least on Android, the contents of most push notifications are not the actual messages to be displayed to the user, but just empty notifications letting the app know it must poll for something on the server or some other activity which may result in a notification.
It's all about the timing (and meta-data like which app), not about the contents.
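The wakeup-then-fetch pattern described above can be sketched like this (the payload shape and function names are illustrative, not any real push API):

```python
def on_push(push_payload: dict, fetch_pending, display):
    """Handle a content-free 'wakeup' push: the push itself carries no
    message text, so the push relay (Apple/Google) never sees content -
    only that *something* was delivered to this app, and when."""
    if push_payload.get("type") != "wakeup":
        return
    for msg in fetch_pending():  # authenticated TLS call to the app's own server
        display(msg)
```

Even here, the relay still learns the timing and the destination app, which is exactly the metadata at issue.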
Isn’t this somewhat defeated if the service is large enough?
E.g: if I get a signal notification and the notification has no data except “event happened, call server for updates” - and then you fetch updates as a batch - doesn’t the sheer number of people making that same generic batch update call somewhat mask it?
I’m curious where Apple prohibits dummy notifications, by the way - I used them for a financial app I worked on a few years back and never got dinged for it.
What you're talking about is achieving perfect privacy/security.
Even just E2EE on the notifications themselves would be an improvement over the current situation. It would make certain categories of data unavailable to eavesdroppers. The fact that it would not protect against 100% of all types of data/metadata exfiltration is not sufficient reason to oppose implementing it.
I think (reading between the lines on their docs) that you'll get throttled/dropped if you abuse the system by sending a regular push notification but do not notify the user. Apple doesn't like app developers using invisible notifications because it risks wasting device battery without the users being aware that their device is constantly being awakened by your app.
However, I was actually wrong more generally, because Apple does have a push notification type for this: Background Updates[1] are permitted to run invisibly. They say not to try sending more than 2-3 per hour, and that "the system may throttle the delivery of background notifications if the total number becomes excessive" - which sounds like you're permitted some unspecified small number between app launches.
These notifications seem to only be able to send a single boolean flag, so it doesn't seem like an awfully viable way of implementing a fixed rate message system (especially because you'd also want to be sending messages out on that same fixed rate to frustrate analysis)
Technically you can make malformed messages visible: store them in spambox and display their metadata or maybe even as a source of randomness, like random.org
If it’s metadata they’re after (according to the article) would it really matter if the push notifications themselves were encrypted? As long as you’re using Apple/Google’s servers to manage push notifications it seems like there would be some metadata that could be useful for surveillance purposes, encrypted or not.
Getting rid of all metadata is fundamentally hard, unless providers are willing to deploy PIR or anonymity networks. But I think it's a mistake to assume metadata means "just the timing of a message": these push messages may include a lot of detailed content that is being described in this article as metadata, and all of that stuff can and should be encrypted.
Additionally, with a little bit of work (well, really quite a lot) the push messages can be made to hide the source. This would make it harder to distinguish a Gmail or DoorDash notification from a WhatsApp notification.
Some apps actually do that. I know at least Rocket.Chat has an option to handle push that way. I'd like to believe other similar chat apps used by groups and communities have it too.
But as others have pointed out, just having the timestamp and target of the notifications already tells a lot.
Encryption wouldn’t help as the whole point would be to look for coincident timings. I.e. after activity from one user to a known service you see a push occur going to another user. If this pattern repeats you can build confidence they are in contact.
Encrypted messengers aren’t sending unencrypted push payloads, at least not deliberately.
A lot of apps don’t even put much in the push messages themselves at all, they are mainly an indicator to phone home for more information.
Consequently no gov has been getting meaningful info from the content of this stuff for many years - it will all be what you can infer from observed patterns, which is a lot.
I'm not sure I'd trust dating apps and weaker chat apps not to just be sending the contents of messages to a TLS push notification endpoint that Apple/Google could do whatever with before forwarding on to devices.
Differential privacy, meet notifications: just add random notifications as noise to everyone. If payload decrypts to junk, then drop/ignore as a faux-notification; else, trigger notification.
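A sketch of how clients could separate signal from noise while an observer can't: authenticate real payloads with a key the push relay never sees, so dummies fail the check and get dropped before display. (Key distribution and sizes here are illustrative; a real system would also pad real payloads to a fixed size so they match the dummies on the wire.)

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)  # shared out of band in a real deployment

def make_real(payload: bytes) -> bytes:
    # Real notification: payload plus a MAC under the shared key.
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def make_dummy(size: int = 64) -> bytes:
    # Dummy: random bytes of the same shape; the MAC check will fail.
    return secrets.token_bytes(size + 32)

def receive(blob: bytes):
    payload, tag = blob[:-32], blob[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        return payload  # genuine: surface a notification to the user
    return None         # noise: drop silently
```

The relay just sees uniform blobs at whatever rate the noise schedule dictates; only the device can tell which ones are real.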
Eh, what’s a few orders of magnitude increase in notification infrastructure overhead anyway? /s
I don't see why. The system operator knows to whom the message is being sent. They get a court order, ordering them to track messages sent to enumerated entities and they have to comply.
Metadata in this case apparently means Apple and Google are helping find “this real user connected to that real user at this time”. So governments may or may not be able to decrypt a push message payload, or data delivered because of that payload.
how will actual data not be more informative? you can easily infer what the appointment was because the phone call will mention the name of the doctor or office and you can look that up plus all the details they discuss
you'd still have to look up who the doctor they called is from the metadata; it's still info but absolutely not more informative than the real data
so this line of thought makes no sense, and glenn greenwald should be looked at very skeptically in general, he sounds smart but when you look at his logic closer it breaks down
>you can easily infer what the appointment was because the phone call will mention the name of the doctor or office and you can look that up plus all the details they discuss
You're assuming these things are mentioned. "Hi, I'd like to book/confirm an appointment with Dr. Jones." doesn't leak information about "abortion".
Yes, these things obviously depend on what information is transmitted. The point, however, is that metadata more reliably transmits sensitive information than does "the data".
no it doesn't, it'll contain a phone number to a clinic, then you'll have to look up what kind of clinic it is
you're telling me the receptionist answering the phone at the clinic won't mention the name of the clinic when someone calls?
and stick to objective points and don't fault me by calling it nitpicking because your argument falls apart when basic critical thinking is applied to it
This is tangential to a comment I read (probably on HN) perhaps a decade ago, when scandals were being reported that laptop webcams could (surprise!) be activated remotely and people/kids were being spied on (I think the article was about a school disciplining a child using evidence gathered at the child's home by a school-issued laptop's webcam).
Someone pointed out that, while being watched is creepy, the real damning information on people actually comes from being listened to.
Or applying for a job, or surveying local businesses for a story, or transposed the numbers, or…
It can simultaneously be true that metadata contains less information than real data and that metadata is still dangerous. But when one is known for breathless hyperbole, should we be surprised when that’s what we get?
The only way out of this mess is with new laws, and that will require new lawmakers. Any other solutions - relying on the kindness of corporations, toiling away with obscure technologies, going 'off the grid' - are all foolish or unrealistic for 99% or so of people and shouldn't even be considered.
The most promising starting point is probably at the state level.
Just because laws don't matter 100% of the time does not mean they don't matter. And the solution to better enforcement of laws is the same as the solution to passing better laws: elect better lawmakers.
This legal structure of governance already kills so many people unintentionally, it's unethical to keep trying to reform it when it was designed from flawed principles. Time for a full redesign.
I mean, I'm already a target for various things/reason. That's why I'm an advocate for viral movements so we hit a point where the movement's so large, we skip over the part of the process where killing us would be an effective means of stopping the movement.
But yeah, I'm not going to let threat of death keep me from plowing forward. Embracing death is part of death practice. I'm still going to move forward playfully and through harm reduction.
Personally, I think we have seen too much change within our system in my lifetime for me to think hundreds of thousands or millions of people should die in the off chance that we can change it to something remarkably better.
A bigger problem, IMO, is that companies are allowed to hold and exploit this data in the first place, with no obligations to act in their customers' interests. Google and Apple arguably govern our lives just as much as the actual government, but with fewer mechanisms for representation, transparency, or accountability.
But, who do you think sanctions this stuff in the first place? I think it's an insane expectation to think that government would sanction itself, when it is also requesting and enabling the ability to spy on citizens!
I think you've read the government's self-promotional material and believe it - that it's trying to do the best for its citizens, keep people safe, etc. - as opposed to seeing it for what it is, which is a mafia extortion racket that keeps its major beneficiaries out of public view.
The Libertarian party might fit our needs for privacy, but very few people belong to the party. As a liberal, I started listening to the Ron Paul (Libertarian, retired US Representative) podcast at least once a week. Maybe because I am older, but what he says mostly makes sense to me.
(Now I expect to get in trouble here because I mentioned a third party, that is fine with me.)
This, to me, is the more disturbing part of the article:
> "In this case, the federal government prohibited us from sharing any information," the company said in a statement. "Now that this method has become public we are updating our transparency reporting to detail these kinds of requests."
What is the point of transparency reports if they don't include major vectors of government surveillance?
IMO such gag orders shouldn't be legal when applied to dragnet surveillance. If you want to gag a company from notifying an individual they're being surveilled (with a warrant), then fine. But gagging a company from disclosing untargeted or semi-targeted surveillance, especially if it involves American citizens, seems like it should be unconstitutional on free speech grounds.
> But gagging a company from disclosing untargeted or semi-targeted surveillance, especially if it involves American citizens, seems like it should be unconstitutional on free speech grounds.
I see you have not read the Patriot Act, an Orwellian double-speak of a title if there ever was one.
The first "paper" I ever wrote was an anti-USA PATRIOT Act paper for a scholarship competition in 2003 when I was 17 where I was awarded $1,000. Literally the only thing I remember is what the acronym USA PATRIOT stands for.
Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism.
It really is one of the best double-speak bill titles ever.
Having data on illegal searches would require an insider leaking that information. Nobody has any semblance of a clue how much illegal data sniffing is happening, and it’s even more questionable since the USA and five eyes continues to degrade basic privacy.
You're missing the point: in this case they don't even need a warrant at all. And yes, it is because you would have to ask a judge about each and every person surveilled and then provide a reason. They wouldn't have any reason for the dragnet and would be denied.
Seems like a pretty open and shut case of unconstitutional restriction of speech in the US. Especially when you consider the wording of the Apple communication saying that they can talk about it openly now that it's public knowledge.
Given the US has a 4th Amendment-free zone within 100 miles of all national borders in the name of national security, I expect the same justification and level of oversight here.
This is a common misconception. The 100 mile radius does not waive 4th Amendment protection. A reasonable suspicion of immigration law violation is still required to detain, search and ultimately arrest individuals. To wit: please name a single instance of someone having their rights abused by this so-called "zone".
This article [0] lists several cases of warrantless searches, one of which was in Florida. Apparently that 100 mile radius isn't just from the Canadian border or the Mexican border, it's also 100 miles from any coast, which means that 2/3 of the population lives within that radius.
As far as "reasonable suspicion" goes, I'm increasingly unwilling to support the right of law enforcement to independently, without oversight, determine what is "reasonable".
> strong legal basis and case law going back to Terry
I frankly don't care what's legal or not at this point. The surveillance and police state has gotten out of control, and needs to be rolled back. If we constantly just accept past precedent as dictating our future, our rights will be chipped away one by one.
I don't want to live in a society where I can be stopped and asked for identification by law enforcement at any time. Most Americans don't, that's why we still don't have a proper national ID. I consider that to be a warrantless search regardless of what the law currently says.
> Arm yourself with knowledge, just as the Founding Fathers intended.
I find that most people who pretend to speak for "the Founding Fathers" are extremely ignorant of the actual motivations of these people who lived 200 years ago. I won't pretend to speak for them, but I will note that I strongly suspect that the smugglers and tax evaders who signed the Declaration of Independence would probably not be in favor of the ever-growing police state we have today.
Regardless, what they wanted is immaterial—they set up this country for us, and presumably expected us to lead it after their deaths.
> I frankly don't care what's legal or not at this point.
Oh, but you should - your freedom may depend on it.
> police state has gotten out of control, and needs to be rolled back
Maybe, but this is the world we presently find ourselves living in, and we can either choose to become empowered with knowledge about it, or throw a hyperbolic tantrum and wish for the moon.
> I don't want to live in a society where I can be stopped and asked for identification by law enforcement at any time.
You don't, at least not in the US. If you took more time to care about the laws you decry, you would know there is no such requirement, unless you have been suspected of a crime by a lawful sworn agent of the state. Which is a reasonable compromise in a society.
> smugglers and tax evaders who signed the Declaration of Independence ... would probably not be in favor of the ever-growing police state we have today
I agree. Those individuals knew well what an unchecked government can do, and took many reasonable precautions to safeguard against such infringements and tyranny. They were of course imperfect in their implementation, but the principles they set forth (freedom of speech, defense, religion, &c.) formed a society radically different from anywhere else on the planet today. Which is why I'm always puzzled when people disregard their hard work to take some agency's word and propaganda at face value, rather than consulting the original tenets which founded this great country.
> You don't, at least not in the US. If you took more time to care about the laws you decry, you would know there is no such requirement, unless you have been suspected of a crime by a lawful sworn agent of the state.
If you took the time to read the article I sent you, you would know that CBP asserts that it has the right to get onto any bus at any time and demand to see proof of citizenship for anyone on board.
You can wave the book at me all day long, but what actually matters is how the law is implemented in practice, and it's pretty clear that law enforcement does, in fact, claim the right to stop anyone at any time and ask for ID.
The format of this podcast is insufferable, like listening to two befuddled people in a retirement home exchange "witty" banter.
I looked it up though. This was 30 years ago. The court issued Border Patrol an injunction and protected students from discrimination. A perfect example of the legal system acting justly and prudently, which only supports my argument that "unbridled searches within 100 miles of the border" is hyperbole.
Not to get too far off on a tangent here, but I can't agree more. This style of podcast where a simple story is endlessly drawn out with unnecessary audio being inserted, useless details, and constant repetition without getting to the point makes getting any information at all feel like pulling teeth. I've seen it imitated in other podcasts too so the poison is spreading.
Not sure why this was downvoted. Even the quoted article states:
> Border Patrol, nevertheless, cannot pull anyone over without “reasonable suspicion” of an immigration violation or crime (reasonable suspicion is more than just a “hunch”). Similarly, Border Patrol cannot search vehicles in the 100-mile zone without a warrant or “probable cause” (a reasonable belief, based on the circumstances, that an immigration violation or crime has likely occurred).
If you're taking this view, any armed forces can do whatever they want and the constitution is just a piece of paper.
In practice, the evidence gathered by unlawful searches is going to be discarded in a court of law. In other words, there is no carve-out in penal law for "100 miles from the border".
> If you're taking this view, any armed forces can do whatever they want and the constitution is just a piece of paper
I don't understand how you reach this conclusion.
> In practice, the evidence gathered by unlawful searches is going to be discarded in a court of law
Yes, of course. What I'm talking about is the threshold for when evidence is considered "unlawful".
The "reasonable suspicion" threshold is intentionally an extremely low bar. Low enough that it's barely a meaningful threshold. In practice, it's incredible easy for any officer to make up some articulable suspicion for pretty much anything.
If I'm not mistaken they're called NSLs, and when challenged, their legality is reviewed by a secret court with secret laws that have secret interpretations of words. The whole thing, as far as I can tell, is an out-of-control nightmare, and our corrupt congress doesn't give a shit.
Actually quite a few members of congress do give a shit. Unfortunately they're the same members of congress maligned as MAGA extremists or whatever (in some cases that might be accurate, but it doesn't mean they're wrong about every political position they hold).
If you actually take a second to listen to Matt Gaetz, for example, you might be surprised to learn his (rather principled) positions are much closer to those of AOC than to President Orange, at least in some dimensions. He wants to require single-issue bills, and to completely eliminate FISA-702. Ironically, it seems like FISA will be reauthorized as part of an omnibus spending bill...
I meant Congress as a body doesn’t care, which IMHO is proven by the fact that decade after decade congress as a body does nothing to remedy these problems. Actually the 1984 nightmare just gets worse.
Support from members here and there is nice but in reality for the 20 years I’ve been paying attention has resulted in nothing.
This is why warrant canaries can be useful in privacy policies, at least for smaller/startup companies. The apple/google/microsoft/amazon/metas of the world would have had to remove the canary long ago, though.
No competent startup or small business would take on such a legal risk. And anyway, a sure conclusion can already be reached on the basis of reasoning about the complete and total lack of warrant canaries anywhere.
I think companies publishing whatever they can is a good thing. We would be worse off if they took the attitude of if we can't publish everything we might as well publish nothing.
But this is also a great reminder that there's a bunch of things they can't publish -- so "transparency reports" are of extremely limited value. Their greatest value is encouraging people to have a false sense of security.
I'm infinitely more cynical about corporations. For me, it's always about what they can do to mitigate any and all possible blame, regardless of circumstance, context, and the world itself. Always.
Nine times out of ten, the person saying this will turn around and complain about all the "political hacks" running things, referring to political appointees with no experience or background in the area of government they are tasked to run.
The term "unelected bureaucrats" applies to people like...I dunno, the director of the NIH and field office managers. Heck, even a police captain is an "unelected bureaucrat". Sheesh.
The director of the NIH is a prime example of a position the people should have direct control over. As is the police captain. Are you claiming otherwise? Have we really forgotten about 2020 so soon?
That’s part of the ploy. Give people a million menial jobs to elect so they feel exhausted by the process instead of demanding to have control over the real power.
See also the California senators, which have at this point been unilaterally appointed by Gavin rather than elected by the people. If that wasn’t bad enough, he appointed this latest one based on a personal promise made to put a Black woman in the seat, in exchange for some union to aid in his personal election campaign.
If anyone cared about civics, separation of power, or indeed democracy itself, there’d be rioting in the streets.
You're saying part of the problem is people overwhelmed by menial job elections, yet say people should elect police captains. Should they then also elect deputy police chiefs, police chiefs? Also, anyone that knows their civics would know that what Newsom did is covered in the US Constitution.
I was imprecisely using "captain" in the "person in charge" sense; Chief/Commissioner of Police would be the more accurate term, and yes, they should absolutely be elected.
As for your alleged lesson in civics, the actual matter is covered by the 17th amendment to the constitution, which states:
> When vacancies happen in the representation of any State in the Senate, the executive authority of such State shall issue writs of election to fill such vacancies: Provided, That the legislature of any State may empower the executive thereof to make temporary appointments until the people fill the vacancies by election as the legislature may direct.
- U.S. Cons. amend. XVII § 2, emphasis mine
So then the question is how has the CA legislature directed the executive thereof to make temporary appointments? The answer to that lies in the California Code:
> If a vacancy occurs in the representation of this state in the Senate of the United States, the Governor may appoint and commission an elector of this state who possesses the qualifications for the office to temporarily fill the vacancy until a person is elected at a statewide general election...
- Cal. Elec. Code § 10720, emphasis mine
So we're left with a very simple question: Was Laphonza Butler an elector of the state of CA at the time she was appointed to fill the vacancy by Gavin? If not, Gavin was operating outside his authority as granted by the CA Legislature, and accordingly in violation of the 17th amendment to the US Constitution.
And the answer to that is very simple, a resounding "No":
> Butler is a longtime California resident but now lives in Maryland. She owns a home in California also. The governor’s office said she would re-register to vote in California soon.
At least elected bureaucrats are theoretically accountable to the electorate. The gripe comes from things like the unelected bureaucrats at the US Department of Justice deciding that as part of implementing the Americans with Disabilities Act, there are only two limited and inadequate questions you can ask of someone with an apparently bogus service dog or else. That rule didn't come from the people who wrote the law.
In practice that shouldn’t matter, as the law states that any service animal can be turned away so long as the business provides accommodation to the human (which is the point of the limited questions).
The fact this rarely happens is more due to people not actually knowing the law and typically wanting to avoid potential conflict.
"people not knowing the law" can be a symptom of bureaucracy though. How many pages of law do you think exist to open a bagel shop or add a room to your house in SFO?
It's a remark about the broader topic of bureaucracy and how you can't blame people for not knowing the nooks and crannies of modern liberal legislation. You know, "We have to pass the bill so that you can find out what is in it.”
I'm not sure why you're being downvoted. That's been a common charge against our vast unelected bureaucracy, most of whom hold qualified immunity. We're trillions of dollars in debt, maybe it's time to peel some of it back a little.
Are they though? How about the FDA getting most of its funding by the companies they are supposed to regulate? It's comforting to just trust that bureaucracies are doing what's good for the country, but also naive.
>> The only teeth congress has with these bureaucracies is the power of the purse.
>Not true. Congress can make laws defining what those agencies are and are not allowed to do.
>And if the agencies go outside the bounds of those laws like some currently do?
>Then those who are victimized take it to court.
Right, the court isn't congress. My point was the only teeth congress has in regards to the bureaucracies is the power of the purse.
>successfully done all the time.
It depends on how you define successfully. I mean, they employ people, is that good enough? Do you think they would be more or less effective with a 20% haircut? I don't really know, but members of Congress probably don't either. Plus, it's bad politics to cut jobs come election time, right? Seems like a perverse incentive for the people charged with overseeing the bureaucracies.
Congress created these agencies, they can write laws that fundamentally change how they work, what they do, and what they focus on. They can even just disband these agencies. Congress has all of the power it needs. If they don't use it, maybe what you think should happen doesn't align with the majority of Congress.
You're assuming that the shadow government can't or won't institute regime change when it's threatened. The US Government killed a president, why wouldn't it blackmail congress as well?
you're right.... The CIA and, by extension, the US government as a whole (or any subgroup thereof) have never altered the outcome of elections anywhere for regime change, and have never instigated color revolutions for regime change.
If your belief is correct in that the Congress and President are coerced into doing what the shadow government wants, then they would have zero need for a revolution or regime change in the United States.
The CIA and FBI definitely didn't like Trump. You see this same thing with police departments at the local level. They don't like what a mayor says (like de Blasio), they get the blue flu and cause crime to increase and the mayor takes the heat for the increase in crime.
Push notifications are sent from an app server to an individual device, correct? And the device enrolls with the server for receiving push notifications.
Why isn't there key exchange happening at the time of enrollment? Why is it something apps have to manually do? We moved the web to https everywhere for a reason, why are apps behind the web in privacy?
Potentially stupid question - how is iMessage encrypted end to end if the notifications aren't?
Apps can still do what they want in the content of the notification. This includes encrypting the content however they'd like. By default, though, apps don't encrypt the content. And the metadata (what appleID is receiving notifications from what app) is still known to Apple.
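For illustration, here's a rough Python sketch of that approach on the sending side. Everything here is an assumption for demonstration purposes: the helper names are invented, the symmetric key is presumed to have been negotiated with the device out of band, and it uses the third-party `cryptography` package. On iOS, the matching decryption would live in a Notification Service Extension before the alert is displayed.

```python
# Hypothetical sketch: the app backend encrypts the notification body before
# handing it to APNs/FCM, so the relay only ever carries an opaque blob.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, negotiated with the device out of band
f = Fernet(key)

def wrap(payload: dict) -> bytes:
    """What Apple/Google would see: ciphertext, no readable content."""
    return f.encrypt(json.dumps(payload).encode())

def unwrap(blob: bytes) -> dict:
    """Run on-device (e.g. in a notification extension) after the push arrives."""
    return json.loads(f.decrypt(blob))

blob = wrap({"from": "alice", "body": "hi"})
assert b"alice" not in blob                          # relay can't read the content
assert unwrap(blob) == {"from": "alice", "body": "hi"}
```

Note this only hides the content; the metadata (which token got a push, and when) is still visible to the relay, which is exactly the problem discussed in this thread.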
"The source declined to identify the foreign governments involved in making the requests but described them as democracies allied to the United States"
Because the requests likely contain legal cladding to forbid disclosing the request, as is the case in Australia. A lot of people would be vindicated if it turned out one of the “democracies” making these requests was Australia.
Yep. Most likely to try and catch Chinese spies or other countries like India, Iran, Russia, and others as they continue to go after dissidents abroad.
Or to track US activists and resell the information to the US government, in exchange for data on other five-eyes citizens or access to other surveillance systems (US ones are obviously the best, from a military standpoint).
More like "or (insert country that shouldn't be doing something according to its own laws) to (do something against its own laws) for the purposes of (someone's profit)".
Sure and that applies (at least) to the EU (and friends), US, UK, China, Russia, Japan, South Korea, Singapore, Australia, New Zealand, Saudi Arabia, Israel, Turkey, India, etc.
Let me think: could it be the one country with a complicated situation, where most of the security-services apparatus is nominally allied but actually supporting forces opposed to the US (the Taliban, etc.), with a sclerotic political system defaulting to military dictatorship every other decade; or the long-standing allied democracies (plural) with a well-documented history of structural cooperation in matters of espionage and surveillance, particularly at the IT level? Which of the two would the US government rather let run surveillance on US citizens? Mmmh, I wonder!
Most likely group, since they info share and this is the standard end-around on laws prohibiting "domestic" surveillance; government has some other country run the surveillance on their nationals.
I know Pinephone isn't ready for daily use from all the threads here, but I just ordered one to get some stick time with it. Getting real tired of having to fight my phone to keep my data mine.
I just want the equivalent of Debian, but on mobile. I understand I'll have to give up a bunch of apps, but honestly I think it's worth it. As soon as it's possible, I'd like off this ride.
PostmarketOS (based on Alpine Linux rather than Android, and what's used in Pinephones): https://postmarketos.org/ (for some reason the site is currently down)
Does Waydroid work well on mobile Linux GUIs like Phosh and Plasma Mobile? If it does it could be real handy to sandbox some Android apps you need for work or whatever while still using a proper Linux base
Librem needs to do something PR-wise to fix the reputation they developed regarding massive product/delivery delays.
They exist in the frustrating spot of “I want to like them, but I can’t trust the purchase based off of everyone I know who tried getting burned, so now I’ll just look at a Pinephone because it’s easier”.
One question I have as someone who tries to maintain (some) data sovereignty: is there any way as an end-user to circumvent/mitigate this kind of surveillance — aside from abandoning iOS and Android completely?
Google-free Android will allow you (force you) to use alternative push servers. That could be your own server (using something like Unified Push) or querying your apps' servers directly. This comes at the cost of battery life, sometimes significantly so, but it does decentralise the notification system.
Of course, your data will still be in the hands of app vendors unless you choose your apps wisely.
You should also block analytics on the network level (using firewall apps or alternative means) because these days developers like to send analytics events for every button pressed, all associated with your phone's unique identifier. If the government can use push notifications for tracking, imagine the tracking they can do through Firebase Analytics or one of its many data hoarding alternatives.
You're suggesting a deviation from the norm (99.99% of users) by installing a custom operating system (which they will now also be on the hook to secure and update regularly) by developers with nothing to lose.
This will greatly increase scrutiny on you, or colloquially speaking definitely put you on a watch list, the opposite of what is allegedly desired. Rather, accept the plain fact electronic communications are subject to government surveillance and adjust your threat model accordingly. Don't try to fight the bear with a flyswatter.
> You're suggesting a deviation from the norm (99.99% of users)
Which still leaves you in a large enough group that it's not practical to deploy full-press individualized surveillance against all of them. A group which contains a fairly large number of people who're doing it just to piss off the spies, and an even larger number of people who happen to be of no interest to you as a particular spy deciding where to apply your resources.
As for mass surveillance of that group, that can happen, but there still aren't such good, cheap choke points to use. The cost per bit of actionable information is still relatively high even if the group is relatively rich in targets.
> by installing a custom operating system (which they will now also be on the hook to secure and update regularly)
... as opposed to the stock operating system, which may very well not get updated at all.
I get constant updates for GrapheneOS. And they're automatic.
> by developers with nothing to lose.
What the hell does that mean? They have reputations on the line, much more so than the faceless people doing the OS work inside the vendors. Some of them depend on this for their livelihoods.
> Which still leaves you in a large enough group that it's not practical to deploy full-press individualized surveillance against all of them.
Assuming no advances in technology obscured from public view, of course.
> Some of them depend on this for their livelihoods.
You sort of answered your own question there. Consider whether foreign nationals writing software in near destitution are susceptible to MICE, compared to Bay Area millionaires.
> This will greatly increase scrutiny on you, or colloquially speaking definitely put you on a watch list
Every last one of us is being constantly surveilled by the government. If there is any kind of "list" individuals can get on at this point, it's reserved for a very small number of people who are ignored or whose data is excluded.
AOSP is not a deviation from the norm. It's the thing Google ships, vendors install play services as separate apps on top, so there is nothing oddball about your device fingerprint just by not installing Google specific services like the push handler. Your traffic will look like any other android making web requests, but then those requests will only be tracked by the servers they target instead of the OS itself betraying you and sharing metadata about them with various 3rd parties. Running non-vendor ROM alone will not get you "on a list".
"Custom" ROMs also get OTA updates, so keeping up to date is as easy as it is on a vendor spyware ROM. In fact, you will usually get updates from the community well beyond when vendors stop support.
But that's tracking your web requests to search engine servers. The way those requests look depends on your browser, not which ROM you are running. You can set your user agent to whatever you'd like, at least in an Android or desktop browser.
Governments view legibility of their constituencies as a feature, not a bug. They want to be able to query the population like a database in order to manage it better. This is exactly like a product manager at a tech company who wants to know whether a certain feature is being used, and asks for more instrumentation in the next release of the product if needed. Over time the product (the population) becomes better and better instrumented.
Of course, the other side of the coin of better legibility is worse privacy. Their feature is your bug.
Are there ways to circumvent or mitigate what's happening? For you, personally, sure. You can turn on all the buried options, add VPNs, proxies, additional profiles/accounts, etc. And for a while it will work.
But you're defeating legibility by doing that, so you're fighting against a very strong opposing force. Over time, the bugs that reduce legibility coverage will be fixed. The options will go away, VPNs will be banned or at least instrumented well enough to nullify their utility, COPPA and porn age-verification laws will extend to make multiple or anonymous identities impractical, and so on. And the few of us who do manage to go online fully anonymously might as well be wearing a "CRIMINAL" hat, because the public will have been trained that only bad actors want privacy, but not to worry if they themselves have nothing to hide.
You can see this already happening with financial transactions. Try to conduct a significant low-legibility transaction (in other words, buy something big with cash). Your bank will ask why you want to withdraw $20,000. Cops might seize the cash, legally and without probable cause, while you're driving to the seller. And when the seller deposits the cash, the bank might file a SAR. This is all working as designed. You're being punished for adding friction to legibility.
Even on HN, where you think people would be ahead of the curve, the PR campaign against financial privacy and censorship resistance is winning. Mention The Digital Currency That Shall Not Be Named, and suddenly the Four Horsemen of the Infocalypse are in control. Why HNers are pro-VPN but anti-Bitcoin, when both stand for privacy and censorship resistance at the price of reduced legibility, is beyond me.
The battle to fight is not just protecting your own privacy. It's protecting your right to protect your privacy without being ipso facto declared a criminal for doing so. Turn on all the options, hold Bitcoin, use VPNs, pay with cash, delete cookies, etc. But above all, be an ordinary, conscientious, law-abiding citizen. Render unto Caesar what is Caesar's. Be average. Be unremarkable. Privacy should be the default. Not unsavory, not for those with something to hide. Just the default.
Oh boy. I was nodding in agreement while reading your comment, until this part:
> Why HNers are pro-VPN but anti-Bitcoin, when both stand for privacy and censorship resistance at the price of reduced legibility, is beyond me.
Neither VPNs nor BTC are "for privacy and censorship resistance". Maybe in some dystopian, neoliberal, every-man-is-an-island way. I think you were thinking of "overlay networks (Tor et al.) and communal economies", maybe? Those would fit with the rest of the claims.
The actual mechanisms don't matter that much. The point is not letting government or big corporations default to being the gatekeepers for (or monitors of) basic -- and legal -- social activities like communicating or transferring value. Information technology has shrunk the world, but our rights shouldn't also shrink.
I'm not talking about a choice of implementation. I'm pointing out you suggested a brick wall for a road. It's not picking a dirt road or asphalt; it's a wall. It's not a fitting concept.
But again, you are correct on the concepts. I would only add that corporations and government are not as separate as you think. They have that power because they must have that power. Capital will flow. Rentiers will get paid. And those rentiers either have connections or they are the intelligence agencies. Do you think Caesar could just fire the Praetorian Guard?
On iOS, all notifications must go via the centralized APNS, but on non-Google Android (eg Graphene) it is possible to run the device with the Google FCM stuff blocked off. Some apps will break, but stuff that runs in the background for polling or does non-Google notifications will continue to work.
The Reuters article says that the government is getting this data from Apple and Google, which means it doesn't matter if your phone displays or even receives the notifications, no?
> aside from abandoning iOS and Android completely?
These platforms are so opaque and completely controlled by US corporations (so we know they are beholden to NSLs etc). If you care about your data and privacy, the best suggestion is to avoid phone platforms completely for anything important.
Absolutely and confidently incorrect. Local notification settings have no bearing on this metadata, which is generated, collected and stored with your consent by using Apple/Google app stores.
Wait. Are you saying that even if I disable notifications on my phone, the app backends will continue to send notifications to Apple and Google only to be ignored? If this is the way it is intended to work, I find that hard to believe.
It's a huge problem for both privacy and the open source ecosystem that Apple and Google mandate use of their own notification system for apps to be included in their stores.
There were huge downsides for battery life before, and privacy is somewhat orthogonal since you’d be at risk from more companies and they’d all be subject to the same legal demands, so I think the answer has to be regulatory. In the EU, that seems possible but I’m not sure the U.S. government is currently functional enough to do anything about this.
It certainly had an impact when Apple and Google shipped platform notifications because each of those systems kept the radio active.
It’s possible that a better interface could be developed but it wouldn’t help privacy unless the implementers were in different legal jurisdictions: the same government which can subpoena or NSL Apple or Google could’ve asked e.g. Urban Airship for the same details. There’s also a challenge in that each implementation is a chance to make mistakes or fail to deliver promised privacy protections, and someone in a country which isn’t the United States might have stronger privacy laws but is also a legitimate NSA target. This kind of problem just doesn’t have simple solutions.
It's a much bigger nuisance and risk to have several smaller parties handle court orders; some of which could indeed be in other jurisdictions, by the way.
Before the platform notifications, every single app kept its own connection open; allowing (completely) third-party notification platforms would have a small or non-existent impact.
> It's a much bigger nuisance and risk to have several smaller parties to handle court orders; some of which could indeed be in other jurisdictions by the way.
I’m not sure this is true: a small company is less likely to have the legal resources or confidence to stand up for their customers’ rights. I’m sure you could find examples going either way at either size.
Being in a different country helps but only if the company has sufficient security to even notice if the NSA decides to take advantage of them being outside of the US. I would bet Apple and Google have that level of expertise but not everyone else.
It is driven entirely by battery life. Android used to allow 3rd party apps to receive push notifications, and it caused battery life to be terrible compared to Apple. Forcing a single path was done for that reason.
This complaint is nonsense. Android _still_ allows background applications; the only limitation added in that release is that such background applications have to show a notification that they are running (actually a feature, if you ask me). You are still allowed to listen on a gazillion sockets perfectly fine.
It's more problematic that some Android "skins" tend to kill background applications at random https://dontkillmyapp.com/, but at least, one cannot squarely blame Google for that one...
The "battery life" argument that they constantly use is also a very poor excuse. Even when Conversations (the Jabber client) didn't use push notifications at all and just listened on noisy XMPP sockets, it still had about the lowest power consumption of all Android messaging programs, lower than Google's own push notification client (Play Services).
Certainly I might imagine that if all 1,000 adware apps your average Android user installs all needed to be wired and listening to a socket in order to receive the latest offers (all in the legitimate interest of the user, of course) you might literally run out of memory. But even then there are many solutions (such as inetd like services) that do not require centralizing everything into Google.
> Android _still_ allows background applications, the only limitation they added in that release is that such background applications have to show a notification that they are running (actually a feature if you ask me). You are still allowed to listen on a gazillion sockets perfectly fine.
...I'm not even clear on what they're complaining about (the page github links to seems to have been changed, it describes the current state rather than what happened in 8), because this was actually a thing as far back as Android 2: you had to have one of those notifications up to prevent Android from killing your service.
Android's behavior with respect to background services changed over the years from "MAY kill" to "WILL kill", so for people who never honored the official docs (unfortunately most of them), this was a problem.
Additionally, Alarms and Broadcast Listeners were seriously crippled which made common workarounds much harder.
Pardon my ignorance but would block all push notifications stop this specific act of surveillance? I usually don't need any notifications' content on the screen apart from "you have a new message on <app>, go check it". Or is that what's being discussed here?
The article says that Google and Apple know about the push notifications being shown on the phone and governments can make these companies turn over customer data.
I'm not sure if it only covers (for example) the unified notification service on Android or whether Apple and Google know of notifications that don't make use of that API. It's not clear from the article.
I don't know about Apple, but on Android it's almost a cardinal sin to try to use other services, and they work a lot worse than GCM (because of all the artificial limitations that Google imposed over the years).
It does seem to be notifications on the phone, but (a) that's incredibly surprising and disturbing and (b) it's really unclear why or how that would work when a phone is disconnected from the network. In any event, Google inserting themselves into notifications would be tantamount to reading all my email, texts and everything else, so ... why wouldn't this be restricted to opt-in? Many questions.
Why didn't Apple pull the plug on these services as soon as the government started spying with them? Why didn't they rearchitect them to use E2E encryption? Do they actually have principles about privacy, or is it just a thing they want us to believe?
Apple uses “privacy” as a marketing term. They market themselves as protecting your “privacy” from advertisers unlike Google.
Apple openly complies with all data requests from government agencies and law enforcement. It is not a hard process for law enforcement to get someone’s iCloud data with a warrant.
A paranoid part of me has wondered if some of the text/phone spam we all receive is actually used to stimulate cellphones for tracking purposes.
If you have deeper access to the OS, then fingerprint unlock or FaceID also seem important for positive identification prior to, for example, a Predator strike.
Plus, you can always ask the carriers to which tower(s) a phone is connected and simply triangulate from there, without sending any (user) data to the phone.
It's important to know that the entire worldwide mobile phone network needs to have a reasonable estimation of the location of each device in order to work.
"Phone call for XYZ", "SMS for XYZ", "Establish TCP connection to XYZ". Every single device that hears this has to decode the message to the point that it can say "Nope, this isn't for me. Ignore". You've got billions of devices online at once, doing things that require messages to be sent to them. The network has to find a way to broadcast these messages to the tiniest geographic area that it possibly can, or else the whole thing breaks down. So yes, there are plenty of completely normal, standard ways that the network can make your phone say "I'm over here" without anything showing up on your screen.
(I worked at Motorola in infrastructure tech for many years)
It's fascinating that about half these comments appear to be from younger people unfamiliar with "USA PATRIOT" Act gag orders, FISA, Five Eyes, "least untruthful" responses, and related controversies that were big in the news 10-20 years ago.
Amusingly and sadly, the law was called PATRIOT in the usual "give a bad law a good name" fashion, but over time "patriot" has become a synonym for "traitor" in common use.
There’s probably some you’ve missed but yeah, I like the “they can’t do this because of * “ comments.
Reminds me of the Eufy issue where they said everything was encrypted except for push notification images.
Hard to pick the most appropriate Orwellian quote. "All tyrannies rule through fraud and force, but once the fraud is exposed they must rely exclusively on force." ~ George Orwell
Why would it be unusual for a generation that’s been under surveillance since they were in the incubator to not hold quaint and obsolete views of privacy?
If we held a poll, what percentage of privacy-loving HN parents don’t have tracking on their kids phone? 5%? 10%?
You do _not_ need push notifications in the first place, most definitely not for messaging programs anyway. The "saves battery" arguments are always very fluffy, and devices/clients that don't do push notifications (or at least don't force you to use them) sometimes even have better battery life than those that do.
Stop promoting and trusting Conversations. It is bad software which never did OTR verification properly before yanking it unexpectedly and without explanation. To my knowledge it has never been independently audited, let alone taken seriously enough by any infosec professionals to warrant such study.
AIUI, deanonymization happens due to pseudonymity. There are three pseudonyms: chat ID, push ID, phone number. Since all three are constant and linked, they can deanonymize the user. You need some sort of anonymous or confidential protocol to work around this.
It is ultimately ignorant to think one is not spied upon in daily comings and goings, when the entire human economy is based on data and the study of it (especially at scale), whether by government, private enterprise or sole evil individual.
With Apple/Google you get the comfortable padded jail cell with 24/7 guards to protect - and monitor you; the digital equivalent of having a police officer live with you. You can't go outside of the walled garden and you're told this is for good reason.
Without them, you're totally on your own; you better be prepared and know how to defend yourself. No one will care about your security and privacy. But don't for a second think you're not still under the all-seeing eye of panopticon surveillance, and possibly additional scrutiny therein.
They mention metadata in the article. Imagine sending a message to a Signal account at time X, then asking Apple a list of all users that received a Signal notification at that specific time.
This ^ approach, and modified forms of it, can be used to track lots of things, and has been for decades by some government agencies. You can use a method like this even if people are using encryption and lots of anonymous tunnels: you simply shape the traffic and watch where the shape of that traffic stops. It can track people in real time across almost any link, including things like Tor.
I had to anonymize some data while still keeping some details. You could imagine individual trees that needed to be put into groups of similar trees so individual details were lost.
Anyway, these "trees" were effectively user behavior across all our products. I was shocked that simply knowing *when* (to within a second or two) a person did two or more things, you could narrow it down to *one single person* out of hundreds of millions.
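To make the timing-correlation idea concrete, here's a toy Python sketch. All names and event data are invented; the point is only that intersecting two coarse timestamp observations across datasets shrinks the candidate set very fast.

```python
# Toy sketch: knowing roughly *when* someone did two things is often
# enough to single out one user among many. All data below is invented.
def candidates(events, t, window=2.0):
    """Users with an event within `window` seconds of time t."""
    return {u for (u, ts) in events if abs(ts - t) <= window}

# Invented event logs: (user_id, unix_timestamp)
log_a = [("alice", 100.0), ("bob", 100.5), ("carol", 300.0)]
log_b = [("alice", 205.0), ("bob", 400.0), ("carol", 301.0)]

# One observation narrows things a little...
assert candidates(log_a, 100.2) == {"alice", "bob"}
# ...intersecting two observations narrows it to one person.
suspects = candidates(log_a, 100.2) & candidates(log_b, 205.3)
assert suspects == {"alice"}
```

With hundreds of millions of users and many events per user, the same intersection logic converges on a single individual after only a handful of observations, which matches the parent comment's experience.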
Unless I’m mistaken - and I might be or it may have changed - Signal notifications on iOS just tell the app “hey, something happened, call the service and check for updates”.
I.e, the push notification itself contains little to nothing in terms of data/metadata.
You can also of course decrypt a notification by shipping an extension to do so, and maybe Signal does - it’s been awhile since I poked around it. I’d just be surprised if the Signal team didn’t analyze the issue to death and find the gaps.
What you've said is correct, but it doesn't stop the attack vector described.
If the question to Apple or Google is "who received a notification from Signal at 17:15 UTC?" then even if the notification is “hey, something happened, call the service and check for updates”, you've got your answer.
To defeat it, one would have to regularly send cover traffic (i.e. push messages saying "nothing happened"), and accept that notification of messages may be delayed until that regular period.
i.e. the app sends its push token to its back end, together with a "use by" date. The server sends a push by that time, even if there is nothing to send. In the case of receiving such a "nothing happened" push, the app gets a new token, and informs the back end server.
The constraint there is how frequently Apple / Google will allow pushes, and how well the respective central server can scale to sending all of those dummy notifications.
The cost for the mobile being extra data use, and extra battery from the forced wake ups. So it may have to be a configurable option in the app.
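A minimal sketch of that server-side scheduling logic, under the assumptions above (the function and field names are invented, not any real push API):

```python
# Cover-traffic idea: the backend promises to push to every token by its
# "use by" time, sending a dummy if nothing real is pending, so an observer
# can't distinguish real notifications from empty ones.
def schedule_pushes(deadlines, real_events, now):
    """deadlines: {token: use_by_time}; real_events: tokens with real data pending.
    Returns (token, kind) pairs that must be pushed at time `now`."""
    due = []
    for token, use_by in deadlines.items():
        if token in real_events:
            due.append((token, "real"))      # real message: send immediately
        elif use_by <= now:
            due.append((token, "dummy"))     # deadline hit: send cover traffic
    return sorted(due)

# Invented example: t1 has no traffic but its deadline passed; t2 has a real message.
pushes = schedule_pushes({"t1": 10, "t2": 50}, {"t2"}, now=20)
assert pushes == [("t1", "dummy"), ("t2", "real")]
```

The real constraints are the ones the comment above raises: how often the platform will accept pushes per token, and whether the backend can afford to send dummies to its entire user base every period.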
So do Apple / Google allow at least one notification per hour?
That doesn't make sense. I would expect Signal notifications to happen completely out-of-band with "normal" push notifications (e.g. NYT news alert). Otherwise that completely defeats the purpose of the service. Basically you're saying Apple/Google are MITM'ing Signal.
This is just how push notifications work on iOS and Android. The app requests a push token from the operating system, sends that to its backend and stores it against the user's identity. To send a push a message is sent from the backend to a push service maintained by Apple or Google, who then deliver the push to the phone in question. In the case of Signal, their backend cannot access the message content, so the notification does not contain this, i.e. it's not MITM.
On iOS in particular, background modes are finicky and you cannot generally have an app continuously poll for notifications in the background. Further, if every app did this, battery drain would be significant.
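The token flow described above can be sketched with three stand-in parties (all names hypothetical), showing what the platform relay does and does not see when the payload is just a wake-up marker:

```python
# Three parties in the push path; the platform relay never sees message
# content when the payload is just a "go fetch" marker.
class PushRelay:                      # stands in for APNS / FCM
    def __init__(self):
        self.seen = []                # what Apple/Google could log
    def deliver(self, token, payload):
        self.seen.append((token, payload))

class AppBackend:
    def __init__(self, relay):
        self.relay = relay
        self.tokens = {}              # user id -> push token
    def register(self, user, token):
        self.tokens[user] = token
    def notify(self, user):
        # Signal-style: no content, just a wake-up marker.
        self.relay.deliver(self.tokens[user], {"event": "check-for-updates"})

relay = PushRelay()
backend = AppBackend(relay)
backend.register("alice", "tok-123")
backend.notify("alice")
print(relay.seen)   # [('tok-123', {'event': 'check-for-updates'})]
```

Note that even with an empty payload, the relay still logs which token was woken and when, which is exactly the metadata at issue in this thread.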
I'm not so familiar with Signal, but could you explain why you would expect Signal notifications to happen out-of-band with normal push notifications?
Assuming Signal sends push notifications of some sort, as most messaging services do, that would make them vulnerable to the metadata-level attacks described in this thread.
What kind of "out-of-band" are you thinking of that would mitigate this issue?
Why: because otherwise the service, which is supposed to be private, is no longer private.
I dunno how it would work, maybe something like a third-party push? Why does everything have to be channeled through central service? A service like Signal could operate its own push channel.
no, that's not basically it. MITM to me means being able to read the data by placing yourself in the encrypted chain. that's not how push notifications work. they don't need to know the contents of the message
The notification is separate from the message. It absolutely is MITM, just for the notifications, which are messages themselves with real content (you have received a message from so-and-so).
I don't know what you think you are proving here. They did not view the contents of the message. A MITM "attack" would allow them to decrypt the content of the message. This is just metadata being used. It's no different from all of the other metadata uses that the TLAs have been employing. We've known for a long time (for me, since Snowden was the first time I ever even considered it) that metadata can tell us a whole hell of a lot about people, so much that the actual contents of the message are often irrelevant. With metadata alone, you can build up an entire network of people to investigate. You can do that investigation without ever decrypting anything. It's no different from the police following a suspect to see who they meet, and then following that person, and continuing until they find the big boss. They can then roll up the entire network in one fell swoop if they so choose.
Others have mentioned the timing attacks but also payloads are not encrypted unless the app developers remember to build that. This linked essay discusses both threats:
Thank you I was wondering about that. A couple of days ago I heard somebody mention that push notifications go through the backend and that it was a huge privacy issue, and I just couldn't believe that messaging apps that are "encrypted" would go through all that work just to then send the unencrypted message to Google's servers
I'm surprised hyper-private services like Signal haven't foreseen this as a potential vector and given you options to eg. exclude different details from push notifications (or warned you to disable them altogether if you're worried about it)
Fortunately, they did foresee this! The push notification only contains enough information to tell the phone that it should fetch the actual notification content from Signal's servers.
My Signal notifications on iOS just say 'Message received!', not sure what else is in the payload but nothing else is displayed... It seems unfathomable that they would push any unencrypted message content or information relating to who is messaging you through notifications that travel over third party servers, so I very much doubt there's much of interest in the payload...
Unless my memory is seriously off, Signal push notifications just tell your device to call and fetch. It’s not like they’re unaware and just sending you stuff in plain text.
The gist of it is that when Signal sends you a push notification, it's just a marker for "hey, you have updates". It doesn't contain unencrypted data that could be passed to another actor - Signal isn't stupid enough to do this. The app will then call out to pull down any updates.
Thus, we wind up in the following situation: the US govt could ask Apple for a list of people who got notifications at X/y/z time to try and tie it to someone who sent at those times, but Signal is so large and widely used that it'd be finding a needle in a haystack (and that's probably putting it lightly).
The news from this article is concerning, no doubt... but I'm not particularly worried about Signal is all.
This is yet another example of: If the data can be collected it will be used by governments
You can slow this down by making data explicitly built to be impossible to read in transit (eg e2e) and then deleting or never saving it, but the fact that data flows through multiple stops means each transition is an opportunity for third party observation
This is deterministic and is built into the structure of data production, transport, and consumption. This is part of the infrastructure and cannot be extricated.
See [1] for an overview of "state of the art" metadata-protecting communications protocols. There has been much research into this problem over decades and the effectiveness of such protocols very much depends on real world use cases and practicalities. For example, protocols may require 100 seconds to send a message to ensure adequate mixing, and then may be limited to always-transmitting-24/7 endpoints consuming much power, and then also requiring participants in the network to trust each other not to mount a denial of service attack.
[1] SoK: Metadata-Protecting Communication Systems, Sajin Sasy and Ian Goldberg, Cryptology ePrint Archive, Paper 2023/313, https://eprint.iacr.org/2023/313.pdf
This depends on how the app implements notifications, and which mechanism is used to disable them. I know FCM/Android, not APNS/iOS, so here's a breakdown:
1. The app registers a push token with their backend. This can happen without granting notification permissions, and without notifying the user. So the backend is free to start sending push messages immediately after registration, which is typically done on the first app launch.
2. The controls available in Android's per-app notification settings have nothing to do with push messaging. These allow the user to limit or change how the app displays notifications, regardless of the reason the app is displaying them. Some apps have additional options to disable push messages, but that preference must be communicated to the app's backend to prevent the backend from sending pushes in the first place. Some apps may consider Android's notification settings to determine this preference, but it's extra work to do so.
The concepts of "push messaging" and "notifications" are often used interchangeably, but at least on Android these are separate systems that are tied together with client code. The push messages may also contain notification data, and the official FCM client will display these automatically, so this confusion is understandable.
I’m no expert but in my experience developing mobile applications & push notifications, I’ve only registered a device for notifications (and subsequently sent notifications) if the user opted in. Based on my own experience, I would say if you didn’t enable notifications for a particular service or app, they don’t get sent.
Dunno how it is now but it used to be that Apple would tell you which push tokens (recipients) were rejected (app uninstalled, push disabled for your app, or you stored a bad token to begin with) and you were supposed to stop sending to them, with the implication that Apple would get upset with you if you kept sending to rejecting tokens for too long.
It'd be cool if Signal and other privacy-focused apps added an option to delay push notifications. That would obfuscate the connection between two accounts.
You'll never be a criminal with that level of opsec.
You have to randomly leave your phone at home for criminal and non-criminal things. That way, there's a plausible alibi that your phone was at home or on you at the time of the crime.
Yes. Every time someone changes lanes without signalling 200ft prior. Every time someone goes 56 instead of 55. Every time someone operates any kind of vehicle after having more than one drink. Any time someone is drunk in public (in many states). Probably a huge number of gun owners in states with legal cannabis. Any time someone walks across a street without a protected "walk" sign.
These are the ones I can brainstorm in 30 seconds.
If the government could enforce every law on the books with perfect accuracy and with 100% effectiveness, it would be intolerably oppressive.
Laws are often written with the expectation that enforcement will not be perfect, and that between impracticality and officer discretion, such laws will be a net positive without being silly.
We are coming up on a time when government surveillance and data-analysis technology (AI) will leave us unable to escape the panopticon. Laws or enforcement will have to adapt.
Statistically if everyone is 5% evil, the chances of someone being evil to you in the course of the day is pretty high. That sounds like the makings for a downward spiral and "don't be evil at all" is much safer for society.
Obviously there will be people who choose to be mostly evil regardless of what everyone else is doing, but society trying not to be evil in general is still the best case scenario.
It's time for a privacy bill of rights. You have to attach inalienable rights to people and then enforce them at the civil rights level.
These things are troubling now. In the post AGI world these are much more difficult problems because the data becomes training for purposes far beyond anything that could be foreseen in the data collection questions.
With respect to the US, I would be more worried about Apple and Google spying on users through push notifications. Americans have legal protections against government spying but they have basically zero protections against spying by so-called "tech" companies. Neither Apple nor Google can demand information about citizens from the government, but the government can demand this from Apple or Google, which they do, successfully, with increasing frequency. People share details of their lives with Apple and Google they would probably never share with the government but the government has little trouble getting it from these so-called "tech" companies, without any notice to the user, so sharing these details with Apple and Google is arguably even worse. The ability for people to fight against this sharing of information is nonexistent; it's up to the companies to resist. Given the number of users whose data they hold, that simply is not feasible. These companies do not care about peoples' privacy. They seek to profit from learning every detail of peoples' lives. Commercial surveillance.
When the government asks citizens for information it's usually for a specific purpose and can only be used for that purpose. When so-called "tech" companies collect information, it is for any purpose. They might assure users that "the information is only used to improve the software or service". What limits does this create, if any? How do we verify that the company is not using our information in ways that compromise our interests if we are not allowed to learn how the company is using the information? Imagine if the government assured people that the information it collects "will only be used to improve the government".
Not every computer user is a national security threat or even a common criminal, i.e., a person that the government has some need to spy on. That's not who I am referring to in this comment. These so-called "tech" companies spy on everyone. And they don't just want to know about one thing, for one purpose, they want to know everything for any purpose.
Are the legal protections against government spying actually worth anything though?
Eg. Parallel construction, FISA, etc etc.
But also, you're right in that if Google and Apple are able to do this, then isn't that "laundered" spying, in that various law enforcement and government agencies can buy this information?
I must be fundamentally missing something here. I thought all this data scooping was to find the bad guys. Are the bad guys really so stupid as to use Apple or Android (or any closed system) to communicate? Cryptonomicon was written 25 years ago.
> In a statement, Apple said that Wyden's letter gave them the opening they needed to share more details with the public about how governments monitored push notifications.
> "In this case, the federal government prohibited us from sharing any information," the company said in a statement. "Now that this method has become public we are updating our transparency reporting to detail these kinds of requests."
If Apple knew about this why wouldn't they limit their exposure to this user data?
My startup (LaunchKey, now part of TransUnion) encrypted the data in our push authentication requests as far back as a decade ago. This was painful until they expanded the size of the message, allowing for more encrypted data. It is possible to do so (I would use pub/priv EC keys now), but remember you are limited in the amount of data you can include, so you might need a "pull" to deliver all of the content necessary.
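For a sense of the size constraint: APNs caps a regular push payload at 4 KB, so a sender has to decide between inlining the encrypted content and falling back to a pull. A rough sketch of that decision (hypothetical function names, placeholder "encryption"):

```python
import json, os, base64

PAYLOAD_BUDGET = 4096   # APNs caps regular push payloads at 4 KB

def build_push(plaintext: bytes, encrypt) -> dict:
    """Ship the encrypted content inline if it fits the platform's
    payload budget, otherwise fall back to a 'pull' marker."""
    ciphertext = base64.b64encode(encrypt(plaintext)).decode()
    inline = {"type": "inline", "data": ciphertext}
    if len(json.dumps(inline).encode()) <= PAYLOAD_BUDGET:
        return inline
    return {"type": "pull"}          # device fetches content from the backend

fake_encrypt = lambda b: b[::-1]     # placeholder, not real crypto
print(build_push(b"short message", fake_encrypt)["type"])   # inline
print(build_push(os.urandom(8000), fake_encrypt)["type"])   # pull
```

Base64 inflates the ciphertext by about a third, so the effective plaintext budget is noticeably smaller than 4 KB once you subtract the rest of the JSON envelope.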
This is a question you would ask if you/they had already provided some evidence for the claim and it was deemed insufficient. Making a bold claim should come with some ability to justify it ready-to-go.
That's why we at Tuta do not send any information with the push on Apple and have built our own push notification for Android (we'd never use Google Push): https://tuta.com/blog/posts/open-source-email-fdroid
Completely unrelated, but sort of related. With all this surveillance and spying going on, what's a normal citizen to do?
For example; Cloud storage? Streaming music? Online note-taking?
Should the more technically-inclined, but average, person start looking at taking more and more of these things off-line given the state of mass surveillance going on and the crazy push towards all things AI?
Is this a timing side channel attack, where say I am a member of a Signal group, or have a Proton email client or Matrix/Element or something, are they sending patterns of beacon messages that may look normal, and then watching the traffic across mobile networks (or directly on platforms) that matches, and then narrowing endpoints that show it?
I guess you have to assume that any message in transit over a public network is public. Of course, you can use something like PGP to encrypt messages before sending them, provided that the recipient has your key. I know of a few people who do that.
Outside of that kind of thing, we're probably yelling everything out loud to anyone who wants to listen.
What sort of metadata or information can be gathered from a push notification from an app like iMessage? I know a timestamp is there and most likely the sender's phone number.
But is there some sort of sensitive info that these governments are trying to glean? Or is it more so they can build info maps and communication maps on targets?
This is not necessarily true. You’re assuming that all the info is in push notifications themselves.
E.g: if I get a push notification that is simply “you have a new event, poll the server”, and then I poll the server for (encrypted) batch updates, where exactly do you see the leak that ties an anonymous profile to an Apple ID? Given a large enough service, that same generic batch update endpoint would be getting hammered and I have to think it would effectively be camouflaged to a degree.
Granted, not every app is going to use this design - but if or when done properly I don’t see that much of an issue here.
Very delayed reply here, but it's a timing attack, I think.
If the government has access to telco resources (I think it's safe to assume that they can and do), then they can line up the timing of a chat message with the push notifications it triggers.
If we are chatting and the government doesn't know who I am, it will only be a matter of time before the number and timing of the push notifications I receive line up in a unique way to the messages you sent me. That would work for every member of the group.
Apple could bundle up multiple push notifications to obfuscate it a bit, but it would hurt real-time communications and wouldn't be that strong of a mitigation anyway.
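A crude sketch of that correlation (made-up numbers and hypothetical names): given send times observed at the telco and per-token delivery logs from the push service, the real recipient's token stands out quickly.

```python
def likely_recipients(send_times, notif_log, tolerance=2.0):
    """Rank push tokens by how many of the suspect's send times line up
    with a notification delivered to that token."""
    scores = {}
    for token, times in notif_log.items():
        scores[token] = sum(
            1 for s in send_times
            if any(abs(t - s) <= tolerance for t in times))
    return sorted(scores.items(), key=lambda kv: -kv[1])

send_times = [100.0, 4000.0, 9050.0]        # observed at the telco
notif_log = {
    "tok-A": [101.2, 4001.0, 9051.5],       # lines up every time
    "tok-B": [101.0, 7000.0, 12000.0],      # one coincidence
    "tok-C": [500.0, 8000.0],
}
print(likely_recipients(send_times, notif_log))
# tok-A scores 3, tok-B scores 1, tok-C scores 0
```

With every additional message the coincidental matches fall away, which is why even a handful of observations is enough against a real-time push channel.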
If you were able to do this, and you also had control of the person's ISP/cell network (not unusual for the threat model here), then one thing you could do is interfere with their communications, "shadowbanning" them from their friends/contacts. Say you used a particular app, like LINE, to speak to one particular friend who your "benefactors" didn't want you speaking with, they could drop connections between your device and that app's servers whenever they intercept a push notification from Google or Apple targeted to that app on your device. Effectively preventing the two parties ever communicating.
Depending on specifics, it seems it would be possible to do this cleverly, so the app still thinks it's connected, but just never receives these messages.
I'm not an expert on this, it just seems a plausible possibility. Best effort response to your question! :)
This would only work if the protocol doesn’t have the concept of retries, which it does. They’d have to block all communications which would be highly noticeable - especially since you’d get a flurry of messages any time you opened the app or migrated onto a Wi-Fi network.
I suppose it depends on which protocol, and which app, we're talking about, but...Interesting. Good analysis!
It's conceivable that connectivity checks flow to other servers than delivery traffic, and these are passed-through. Although addressing your more general critique of the "flurry" (good word! :)), requires noting that accomplishing this capability would involve compromising the app's servers. Such backdoors are again not outside the realm of possibility in the given threat model.
Do you see any possibilities for interference in the push interception capability described?
I know iMessage is E2E encrypted, and I wonder if that extends to the content shown within a push notification. Maybe the push notification servers receive the content encrypted, pushes it to the device, and then decrypted on-device?
Why on earth would you _still_ believe that iMessage is E2E encrypted? Even if it was _today_, _nothing_ prevents Apple from silently pushing an update that simply does not encrypt _your_ messages. And even if you trusted Apple on their word, they could still be sent a security letter (like here) from _some_ government forcing them to do it, and forcing them to be silent about it.
Simply put, sunk cost fallacy. Apple marketing has completely inundated and exhausted consumers by front-running a BS privacy narrative, to the extent that users of their products lack even the vocabulary to meaningfully reason about and discuss what privacy is and is not, instead believing that an opaque, closed-source platform could defend their individual interests from asymmetrically powerful forces at all.
> "The source declined to identify the foreign governments involved in making the requests but described them as democracies allied to the United States"
Oh look! The US end-running constitutional protections again via 5+Eye proxy governments. Who could ever have guessed.
> …a source familiar with the matter confirmed that both foreign and U.S. government agencies have been asking Apple and Google for metadata related to push notifications to, for example, help tie anonymous users of messaging apps to specific Apple or Google accounts.
In the past, Google, Apple, Amazon, Facebook, and a slew of other companies would have been broken up using anti-trust laws. These aren't just monopolies at this point, they are clusters of monopolies. This is leading us down a dark path.
It should only[0] be meta data, though. The push notification should signal the app that there is data to fetch, then the app goes and fetches it. The push notification itself should carry none of the data.
I so hate when people put words "only" and "metadata" in the same sentence...
They know you rang a phone sex line at 2:24 am and spoke for 18 minutes. But they don't know what you talked about.
They know you called the suicide prevention hotline from the Golden Gate Bridge. But the topic of the call remains a secret.
They know you got an email from an HIV testing service, then called your doctor, then visited an HIV support group website in the same hour. But they don't know what was in the email or what you talked about on the phone.
They know you received an email from a digital rights activist group with the subject line “Let’s Tell Congress: Stop SESTA/FOSTA” and then called your elected representative immediately after. But the content of those communications remains safe from government intrusion.
They know you called a gynecologist, spoke for a half hour, and then called the local abortion clinic’s number later that day.
You're using the internet after all, which isn't your network: it's someone else's! When you send a packet there is a header w/ information required for routing. Some call this the "outside of the envelope" if using the mail analogy. We can pass the buck by using a VPN, but this also adds a VPN org that we need to trust. On the other hand, it's not your network! Why do you think you have a right to absolute secrecy and anonymity on someone else's network?
No, it's just a case of facing reality. The internet is built by other people and we have to trust (or not) that they are going to honor the responsibility that entails, from security to ethics. The internet is also funded by learning as much as possible about users in general so using the internet is accepting that you will be tracked. Increasing personal security is good, but no silver bullet.
I'm not saying things shouldn't change, just that the reality we live in right now is that using the internet means you are tracked. Of course we shouldn't just accept that and not push back, and of course we should build things like the internet we had before social media "became the internet".
Being aware of the tracking and risks means people can make efforts to reduce the tracking, but it's almost becoming impossible to use the internet if you don't AGREE to the tracking in many cases, such as websites that won't risk GDPR violations and chooses to deny access to people blocking cookies entirely.
People who remember the old internet want it back, people who grew up with social media don't know what they're missing, and there's not much we can do to convince people to care about changing the DNA of the internet so that it's no longer perversely gobbling up all data.
This requires legislation, and a court system that upholds the law.
In the US, the courts just decided there's no right to privacy (despite what the 4th amendment says) as part of rolling back Roe v. Wade.
So, the path forward is to vote in legislators that respect basic human rights, followed by court packing (or just impeaching the judges that have been publicly accepting bribes and failing to recuse themselves on cases where they have a clear conflict of interest).
Since the above is supported by way more than 50% of the US population, the main obstacles are gerrymandering and ending the currently common practice of appointing blatantly corrupt judges to state supreme courts (and also restoring recently stripped powers to state governors, since they're elected via simple majority).
Exactly, and all of that is hard and slow. We live in the now, with the internet tracking our every move by current design. Pretending it isn't tracking us doesn't mean it actually isn't.
People are generally keeping themselves monitored as they use the internet. It's a panopticon with more steps. So it's no surprise governments are using the plaintext of anything they can find to track people.
And if people don't care about that because they are more focused on their pet political issue, it will never change, and silently get worse.
Push notifications don't signal an active line of communication like that though nor do they connect who's talking, only the means. In all your examples the equivalent would be "They know someone called you."
I don’t agree with them plagiarizing the EFF’s blog post[0] but I think it is a mistake to use “only”. Both can be damaging and neither is clearly more or less bad since so much depends on the circumstances – like if the police have compromised one party in a conversation, they already have the payload so the real risk would be things like location data. We should probably treat both of those as equivalent risks until enough specific details about a situation are available to say which is riskier.
But my intention was to point out that actual content wasn't being transmitted and that "only" meta data was gleaned since some people seem to think that chat messages are being scooped up. Other people have rightly pointed out that meta data is bad and why and I didn't feel the need to reiterate that.
If you wish for privacy then just get a Linux phone. It may not have the coolest features, but if you need more than a classic phone, Linux phones will do fine. Fewer apps means fewer distractions - a win-win situation.
Must be interesting to work on the teams responsible for compliance at Apple/Google. Would talking to someone about these kinds of orders qualify as treason under US law?
Great news considering we're now getting an extreme-right fascist government in Holland. Why not give them all our data on a platter, they can be trusted.
> I'm probably naive, but what insights could a government gleam from Push Notifications?
Looking at my own phone right now, it just got a push notification that my wife has arrived at home. That could be useful if you wanted to track my wife.
> And why aren't push notifications E2EE?
That's a great question. And I hope the answer is "we're on it, they will be E2EE in the next release."
If the notifications were to be truly E2EE, it would have to work something like this:
1. Generate a local key pair per app (never uploaded to Apple).
2. Each app can request its public key from iOS (or be provided with it via - (void)application:(UIApplication *)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)deviceToken andPublicKey:(NSData *)publicKey;).
3. App uploads token + public key to their own server.
4. Server encrypts notification payload with the public key before sending to APNS.
5. Apple forwards encrypted payload to device.
6. Device uses the bundle name to look up the local private key and uses it to decrypt the payload.
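As a rough illustration of steps 1-6 (simplified to a shared symmetric key registered with the app's own server, rather than the public-key pair described above, and using a toy HMAC keystream instead of real crypto):

```python
import hashlib, hmac, os

def keystream(key, nonce, n):
    """Toy HMAC-based keystream -- for illustration only, not real crypto."""
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

def encrypt(key, plaintext):
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key, nonce, ct):
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# 1-3. Device generates a secret and registers it (plus its push token)
#      directly with the app's own server -- Apple never sees the key.
device_key = os.urandom(32)
# 4. App server encrypts the notification before handing it to APNS.
nonce, blob = encrypt(device_key, b'{"from": "alice", "msg": "hi"}')
# 5. APNS forwards the opaque blob; all it can log is token + timestamp.
# 6. Device decrypts locally, e.g. in a notification service extension.
print(decrypt(device_key, nonce, blob))   # b'{"from": "alice", "msg": "hi"}'
```

Note even a scheme like this only hides content; the timing and recipient metadata discussed elsewhere in the thread still leaks.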
In this case, no. But as a data point it is useful at providing a named location and a timestamp. Presumably any governmental agency with access to the push notification stream can already determine my wife's home address. We could lie in the app and call some other place "Home" but I expect very few people are resorting to codewords in their mundane daily life.
Is anyone surprised? Why would there be pen registers, and tap and trace for phone calls and email, but not for other traffic? The ability of governments to do secret surveillance of such metadata is well established in law and jurisprudence, variously in various countries.
It is a Weird Nerd Thing to believe that old laws can't apply to new computer thing.
It's crazy to me that so much effort is being expended pretending that companies and the government are doing anything in the name of privacy, when we have all the proof by Assange and Snowden that they're doing realtime surveillance of ALL communications, 24x7 -- no matter what any laws say -- and we don't even talk about it any more. What's the point of any of this? All we can do is assume that our every position, purchase, and electronic communication is being tracked and saved, and act accordingly. The Constitution no longer matters, and there's no one coming to save us.
I think where we go wrong is to allow the conversation to revolve around what evil corporations are doing with our information, rather than what the evil government is doing with it. I believe the risk to our freedom is much greater from the latter. Of course governments can extract the information from corporations that have it, but let's keep the spotlight on the government itself, and use THAT as a reason to give corps less information about us.
Corporations showing me better-targeted ads is the least of my troubles.
> Of course governments can extract the information from corporations that have it, but let's keep the spotlight on the government itself, and use THAT as a reason to give corps less information about us.
Yep. Treating the two as distinct makes no sense. Corporate dragnet surveillance collecting forever-datasets isn't meaningfully different from the government doing the same thing, directly. People who fear government power ought to support outlawing corporate collection of the same types of things they don't want government collecting.
Granted that's relying on the government to prevent corporations from doing things in order to limit... the government (and, incidentally and IMO beneficially, also the corporations themselves). However, that's the only effective mechanism we've got—and the basis of all the other mechanisms we have available, ultimately, short of violence and strikes and such—and I think it's implausible that, even assuming a great deal of bad-faith behavior, such a move wouldn't significantly curb this activity.
The person you're replying to is making a statement about democratically accountable consolidation of power; not necessarily today's current (and broken) implementations of such things.
No non-broken implementation of such things is known to exist. Democracy itself is the tyranny of the majority even when majority rule is what is actually happening. Concentration of power has to be prevented because of this, not in spite of it.
> I think where we go wrong is to allow the conversation to revolve around what evil corporations are doing with our information, rather than what the evil government is doing with it.
I think it would be wrong to ignore either. Especially since most of the data the government gets is from corporations.
> Corporations showing me better-targeted ads is the least of my troubles.
You're right about that. That data sure isn't only used for ads. Companies use it to decide what services you're allowed to get and under what terms. The policies a company tells you they have are different from the polices they tell others they have. Companies use it to set prices so that what you pay can be different from what your neighbor does for the same goods/services. Companies even use that data to determine how long to keep you on hold when you call them.
Employers use it to make hiring decisions. Landlords use it to decide who to rent to. It's sold to universities who use it to decide which students to accept or reject. It's sold to scammers who use it to select their victims. Extremists use it to target and harass their enemies. Lawyers use it in courtrooms as evidence in criminal cases and custody battles. Insurance companies use it to raise rates and deny claims.
The data companies are collecting about you will cost you again and again in more and more aspects of your life. Ads are absolutely the least of your troubles.
“Better-targeted advertisements” is not the most nefarious way this information is used. That’s just one of the selling points to entice advertisers. It’s also been used extensively to determine content that you will find the most engaging, regardless of whether it’s to your benefit or not, so that ad-driven marketplaces may harvest and sell your attention.
If you have any contemporary examples of the way the government has used the same information, in a way that’s been more widely destructive, I would be curious to know more.
Wouldn't the exact opposite focus have a better effect? Going after the "evil corporations" would mean nobody was collecting the data in the first place, which would also take away the "evil government" as they have nobody to buy that data from.
Right now they just write fat checks to Google, Apple, Amazon and the telcos and badda bing, badda boom it's done.
A government can (in some cases) force a company to collect information they otherwise wouldn't have. The reverse is not true. So I do think the bigger danger here is the legal framework that not only permits this but keeps it secret, rather than the mere fact of information collection.
You already mentioned this higher in the thread, no need to repeat yourself.
For the record I agree with the grandparent post's question: at least, gov is _supposed_ to be controlled by the citizenry through elections, corporations are not. I can have ("should have") visibility into what the government is doing, corporations can hide (and do hide) as much real information as they can and there's no way for me to get to it.
Whether it's naive of me to think so or not is not what is being discussed here.
Assembly 2023 had a fantastic presentation[1] from @BackTheBunny (from X) about precisely this. When the US really wants to do something, the constitution is a parchment guarantee and the media runs cover for them. Many US gov agencies are basically supranational and extrajudicial.
I don't agree with everything he said but the information was well presented and enjoyable.
I don't think many people actually care much about privacy. There are a few, and they're loud. But look at what matters in politics -- both major political tribes in the US are only interested in privacy and protection from the government as it relates to their own interest, but they are perfectly happy to use that power against their perceived opponents.
Thirty years ago, one perceived element of moral superiority in the West was the revelation of extensive internal surveillance in places like East Germany, where governments spied on their own citizens. There used to be news items and documentaries mocking this behavior and intimating how backward and uncouth those governments were to stoop to furiously wiretapping irrelevant private conversations.
So, whether or not the world has changed enough to justify it, people still do care: when adequately informed about some magistrate furiously eavesdropping on private matters, people universally recognize this as antisocial, bizarre conduct.
It is my opinion that people do not care about privacy as much as they did in the Cold War-era times you mention (or the tail end of them, anyway). They've been shown how easy it is to trade their privacy for considerable convenience, and now they're in so deep that the idea of our governments tracking us seems remarkably mundane. Normalization is a helluva drug.
Great point. Convenience plays a hell of a role in a lot of society's issues. I go back to a song by Deee-lite where she sings "Convenience is the enemy" - I've always thought that was pretty pertinent in a lot of ways, this is just one more example.
Meh, collecting information is different from acting on it. My understanding, which could be wrong, was that people legitimately lived in fear of being found out by the Stasi. There isn't a good reason to fear the NSA based on current actions, that I'm aware of anyway.
I’m afraid the NSA regularly funnels information to the FBI and other domestic policing entities, and this has been widely documented [1]. The government even deigned to declassify proceedings from their special secret (!) court that decry the practice where NSA gives illegally-obtained surveillance to the FBI, which then manufactures a reason to go after someone using a technique known as “parallel construction,” concealing the surveillance source(s).
> I don't think many people actually care much about privacy.
People absolutely care about their privacy. If you don't believe me, try going outside and following someone in public with a video camera. They'll scream at you about how horrible and illegal what you're doing is. They'll probably call the police on you. Upset as they are, they ignore the fact that they've been filmed from the moment they stepped outside and have, in fact, been extensively tracked and recorded even while they were still inside their homes.
People don't understand the extent to which their privacy is being violated. It's mostly out of sight, out of mind. They also don't understand the impact the data they give up has on their daily lives. They aren't allowed to know when, or how much, that data costs them. The moment they are confronted with the reality of the situation, they suddenly care very much about their privacy. Mostly, they feel powerless against the invasion of their privacy.
While I believe that you can't solve (at least permanently) political problems with technology, and we need political action, you can prevent a good bit of surveillance with technology if you invest in setting it up.
E2EE for chats (Matrix, Signal, or XMPP) is pretty solid I think.
More shaky, Tor/reputable VPNs or some combo for browsing.
FOSS ROMs for phones (GrapheneOS), or a Librem 5/PinePhone if you can deal with not always having a working phone.
It's not a great situation, but it's not hopeless!
Unfortunately, the constitution isn't very clear on privacy. It should be. There should be a new amendment which makes it crystal clear that the Patriot Act, for example, is completely unconstitutional.
But what the Fourth Amendment says is that people and their property are protected against searches by the government wherever there is a “reasonable expectation of privacy.” That and some combination of other details imply a right to privacy, but it's not very explicit and is clearly limited. In light of this, the Supreme Court has actually ruled quite favorably towards a right to privacy, considering what's actually in the constitution.
> IX. The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.
> X. The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.
Operating a surveillance apparatus isn't an enumerated power of the federal government. The courts screwed up by reading its enumerated powers so unreasonably broadly that this even came up.
The only real way to fix this in the US is via election reform.
The GOP is trying to create an apartheid state where minority rural areas dictate the laws for the majorities that live in urban areas while they extract resources from those areas.
They know this is incredibly unpopular, so they don't even pretend they're trying to get the majority of the vote in most places. Instead, they've been trying to set vote thresholds to > 60% for ballot measures and stripping authority from all elected offices that aren't subject to gerrymandering.
It's also crazy to me that people frequently argue over which security app is best for communication, from privacy-maximalist viewpoints, while forgetting the major flaw we learned about from PGP: "can't decrypt, please resend unencrypted." It doesn't matter how good your encryption is if no one will use it. Pareto is a bitch. (This is a crack at Signal vs Threema or whatever app is hot this month that we'll discuss next month. But when usernames?)
You're kidding yourself if you think three letter agencies don't have LOS users on a list and have capabilities to spy on them on demand with tailored access.
Hey other states, can you elect a few more Ron Wydens? He's been doing a ton of the heavy lifting lately. Every time we hear about the intelligence community egregiously violating civil liberties, it's always Wyden.
I believe he sits on intelligence committees and has a security clearance so he gets briefed on all kinds of outrageous things he can’t publicly talk about. But he does his best with what he can.
In May 2017, Wyden co-sponsored the Israel Anti-Boycott Act, Senate Bill 720, which made it a federal crime, punishable by a maximum sentence of 20 years imprisonment,[88] for Americans to encourage or participate in boycotts against Israel and Israeli settlements in the occupied Palestinian territories if protesting actions by the Israeli government. The bill would make it legal for U.S. states to refuse to do business with contractors that engage in boycotts against Israel.[89]
https://en.wikipedia.org/wiki/Ron_Wyden#Israel
Wyden knows such a bill wouldn't pass specifically because of its unconstitutionality. This was about picking up media coverage by throwing red meat at voters.
Congress has been in a state of deadlock for too long to pass any actual laws, so this type of performative theater ahead of midterm elections is what passes for statesmanship.
Thanks for the link. Scary to see that the state is re-drafting these laws specifically to find loopholes in the constitutional definitions of freedom of speech. Check out this other loophole:
> The act does not apply to contracts worth less than $1,000, or to companies that offer to provide the goods or services for at least 20 percent less than the lowest price quoted by a business that has complied with the certification requirement.
So, a contractor is free to boycott as long as they cost the taxpayer a little bit less.
It’s already pretty much the law. You can submit your complaints to the Office of Anti-Boycott Compliance [1].
Foreign governments can’t force government contractors to comply with boycotts. This bill AFAIK simply closes the loophole of Palestine not technically being a foreign government.
That's not the same thing. This isn't about foreign government demands, it's about US states being legally able to discriminate against contractors who participate in BDS. (Edit: in fact it's about contractors who refuse to sign a pledge that they won't ever participate in BDS)
I am being sarcastic ;) the guy is supposed to be a freedom fighter for privacy/security, but he's trying to ban boycotts, the most basic form of protest and one integral to US democracy.
Your previous comment came off as very genuine. If clarity of statement is important, it might be worth ensuring your actual intent is made unambiguously clear somewhere in the message, if that message is otherwise ironic or sarcastic.
True, which is why it must be balanced with realistic judgements about the people you support and knowing what issues are truly important compared to what the current buzz is telling us is important.
True, but you can also refuse to excuse any behavior, nor give even an inch, and then look around after a while and realize you've won the wrong contest. You won the never giving an inch and remaining morally unblemished contest, and lost the making allies and getting anything done contest.
A lot of journalists and activists use encrypted communications to do their jobs without being unduly or unjustly persecuted (yes, the bad guys use them too!). Meanwhile, 12 US State Attorneys General just signed a letter and delivered it to the major news agencies (NYT, CNN, Reuters, AP, etc.) that warns against any "support to terrorist organizations" and specifically points out Hamas, but it is not very clear on what "support" or a "business relationship" means (sending a camera to do a report where the press is not allowed due to Israel's complete control of the media; echoes of US journalist access during the Iraq War), and it puts them on notice. Nothing is safe from Big Brother, anywhere, in any country.
At this point any “both sides are the same” argument should be seen as either incredibly misguided or intentionally malicious.
Look for representatives who represent more or at least some of our actual interests.
An ideologically united group which has been working to actively disrupt the election process and turn women into breeding property, combined with unlimited surveillance? Might not be the “same”.
I noted that Apple says the governments in question are allies of the United States. I wonder if this is a case of American intelligence outsourcing the surveillance of American citizens to foreign intelligence. If that is indeed the case, I’d expect a quid pro quo.
> I wonder if this is a case of American intelligence outsourcing the surveillance of American citizens to foreign intelligence. If that is indeed the case, I’d expect a quid pro quo.
Yet it is the US government who revealed it: "In a letter to the Department of Justice, Senator Ron Wyden said foreign officials were demanding the data from Alphabet's (GOOGL.O) Google and Apple (AAPL.O). Although details were sparse, the letter lays out yet another path by which governments can track smartphones." - https://www.reuters.com/technology/cybersecurity/governments...
Less "the government" and more "a member of government", the same member who has revealed and demanded accountability when discovering domestic government overreach.
>We should choose our congress critters carefully.
Agreed 100% and sadly, quite rare. I'm not going to start naming names, because that would devolve this into a political conversation about the parties. That isn't this. I suspect most people know who the criminals are. Now to see if they care.
I think people put way too much trust in political institutions, at least at the national scale, which are, for the most part, only really used to protect a certain class of people: the people who run them.
The problem with corruption is scale: when you have too large an institution, it's easier to hide intent. I don't see how you can police that by voting when so much of what goes on is not easily seen.
For every person who gets voted in to do the right thing, there are four others doing the wrong thing.
Indeed. But government is also a process and in this case I think it is fair to say that the process is leading to good outcomes (transparency, accountability).
It doesn't seem like enough. The PATRIOT act has been on the books for 20+ years now and we only rarely get a peek at what it's being used for. James Clapper (in)famously lied to Congress[1] and still got to keep his job, so I'm not sure about accountability either.
This is some wacko BS. Congress has tons of power that can impact your daily life. If you think it doesn't have that power, you're just not well read on the subject. Letting the us-vs-them divisiveness of modern politics give you the impression that Congress cannot do anything is a dangerous, and frankly sophomoric, interpretation.
Congress very much has too much power. If it was a fighting game character, it would be the overpowered character people would want banned.
Repeatedly, Congress has shown that its checks and balances have more power than the others'.
If Congress picks the Supreme Court, and there are multiple ways for a massed power to keep its power, then nobody else has any real power. The US system is actually rather poorly designed in that respect.
Wyden is far removed from the part of the government which engages in surveillance. He's the same person who was questioning James Clapper in Congress about mass surveillance before the Snowden leaks [1].
That's how they circumvent the ban on domestic spying. The US spies on Australians* and the Australians spy on US citizens, then they exchange the data. Easy.
Why do they need to confirm an already known fact: FAANG platforms are built to spy on users? We've known about this fact for at least a decade since the Snowden revelations.
Nothing has materially changed since then, technically, politically, legally, or even culturally. Yet people still believe for-profit corporations have their best interests in mind, thanks to clever marketing and groupthink, clutching to "encrypted apps" and empty "we value your privacy" double-speak: neither will defend you.
There is no privacy on proprietary closed source platforms - it is simply infeasible; it is trying to squeeze blood from a stone. I know this truth will likely trigger and upset people with their $1,000+ iPhones, MacBooks and other iToys, and this sunk cost fallacy is really pathetic to witness in grown adults.
From the companies not needing this, it does not follow that various governments don't need this.
My first thought is that this is looking like an especially fun (for the rest of us) popcorn session where someone in one government is shocked to discover that other governments pull the same stunts that they think should be reserved for "our people"… but then I looked up Senator Ron Wyden's Wikipedia page and he seems to be genuinely opposed to such shenanigans from everyone including the US.
In what way would ignoring a viable SIGINT source be incompetent?
Just thinking about my push notifications from yesterday alone: they revealed that I am clearly a developer or technologist (push notifications from Git/AWS/etc.), that I got a haircut (the time and location were revealed in the message, and I'm sure government-level agencies could have tracked which SportClips location the appointment belonged to), that I am interested in generative AI, and that I work out.
Another day might have yielded far more interesting facts, but those bits added to a record of my interests and habits can become quite powerful over time.
> Just thinking about only my push notifications yesterday
See, the gist in the letter is this sentence:
"As with all of the other information these companies store for or about their users, because Apple and Google deliver push notification data, they can be secretly compelled by governments to hand over this information."
Do you really think that a foreign government is interested in push notifications when issuing a demand to disclose data from a phone?
It seems silly to imagine they'd ask for anything less than everything they could get.
More information means more ways to home in on whatever allegation you're trying to prove. If it's investigative, then it gives more of a picture of what's happening.
I used to imagine EZPass data as innocuous, but now it's used routinely in criminal trials to show that a defendant was at a given place at a given time. Divorce attorneys also request it, as it can be used to illustrate patterns.
It would indicate that they're lonely and looking for a partner. If you were looking to turn them into an intelligence asset, you could have an officer approach and seduce them.
If it's Grindr instead of Tinder, or if they're married, you have a blackmailing angle. In a lot of countries it would be very effective.
There's no need for notification snooping when these apps are spamming requests to unique subdomains on analytics services and their own APIs. DNS snooping is a much easier method of getting that metadata.
Although I suppose one advantage of push notifications compared to DNS is that they're delivered even when the app isn't open, and more generally they can also serve as a liveness check (successful delivery means your device is online).
Push notifications would be most valuable for p2p metadata (e.g. iMessage key exchange handshake between two users) and, to the degree they can snoop on the message content, obviously that would be valuable.
Just because some apps use insecure but highly identifiable DNS lookups doesn’t mean everyone does, or that DNS-over-HTTPS will never be deployed (iOS shipped support in 2020 and Android was only a couple of years later). There’s a 0% chance that anyone smart would say they should rely on that alone and not develop other sources for that information, and intelligence agencies have hired many smart technical people.
In general, I agree DNS-over-HTTPS is a step in the right direction, in terms of eliminating the low-hanging fruit of snooping over the wire. But it's still the same major companies providing the resolvers. And if you're sending them an NSL for push notifications, you may as well send one for DNS too.
That’s usually untrue - for example, if I’m on Comcast but I use Firefox, my DoH requests go instead to Cloudflare who don’t log IPs – but also the larger point is that DNS isn’t complete enough: sometimes it’s unique companies but a lot of the time it’s just a shared endpoint. Push notifications don’t have that problem and happen every time, not just when a cache expires.
Cloudflare is one of the "major companies" I was alluding to. It's still an issue of centralized authorities that are accountable to governments. But I do trust Cloudflare more than my ISP or Apple, and in fact I route much of my traffic through them so I hope I'm right in giving them my trust.
It’s also a question of what information is available. In the United States, for example, it’s generally seemed to be the case that they can compel release of existing data but not changing systems to record new data or remove encryption. That’s not the case in every country, of course.
Suppose you use an anonymous app for messaging. The government sees the conversation ("good day to you") but doesn't know who is on one side (perhaps both).
So they ask Apple "who exactly sent or received on their phone a push notification for 'good day to you'?" Or perhaps "who sent or received push notifications from secure messaging app around 8:24:39.124 pm, 8:26:12.322, etc.?"
Apple tells them, and now they know the identity of the "anonymous" recipient. Replace "good day to you" with any text disliked by any former, current, or future government.
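The timing-correlation idea described above is simple enough to sketch in a few lines. Everything here is made up for illustration: the timestamps, the device IDs, the delivery logs, and the 5-second matching window are all hypothetical, not anything a real provider exposes.

```python
from datetime import datetime, timedelta

# Times the "anonymous" account was observed sending messages.
observed_sends = [
    datetime(2023, 12, 6, 20, 24, 39),
    datetime(2023, 12, 6, 20, 26, 12),
    datetime(2023, 12, 6, 20, 31, 5),
]

# Hypothetical per-device push-delivery logs: device_id -> delivery times.
delivery_logs = {
    "device-A": [
        datetime(2023, 12, 6, 20, 24, 40),
        datetime(2023, 12, 6, 20, 26, 13),
        datetime(2023, 12, 6, 20, 31, 6),
    ],
    "device-B": [
        datetime(2023, 12, 6, 19, 2, 11),
    ],
}

def correlation_score(sends, deliveries, window=timedelta(seconds=5)):
    """Count observed sends that have a push delivery within the window."""
    return sum(
        any(abs(d - s) <= window for d in deliveries)
        for s in sends
    )

scores = {
    device: correlation_score(observed_sends, deliveries)
    for device, deliveries in delivery_logs.items()
}
best = max(scores, key=scores.get)  # device whose deliveries line up best
```

Here device-A matches all three observed sends and device-B matches none, so device-A would be the prime suspect. A real analysis would need to handle delivery jitter and coincidental matches across millions of devices, but the more observation points you have, the fewer devices survive the filter.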
The content of almost every messaging app push notification is already encrypted. So I think it's mainly for knowing when someone receives a notification.
[0]: https://developer.apple.com/documentation/bundleresources/en...
[1]: https://threadreaderapp.com/thread/1721717002946191480.html