Apple's plan to “think different” about encryption opens a backdoor to your life (eff.org)
2260 points by bbatsell 82 days ago | 824 comments




Thanks! Macroexpanded:

Expanded Protections for Children - https://news.ycombinator.com/item?id=28078115 - Aug 2021 (291 comments)

Apple plans to scan US iPhones for child abuse imagery - https://news.ycombinator.com/item?id=28075021 - Aug 2021 (349 comments)

Apple enabling client-side CSAM scanning on iPhone tomorrow - https://news.ycombinator.com/item?id=28068741 - Aug 2021 (680 comments)


I've been maintaining a spare phone running LineageOS precisely in case something like this happened. I love the Apple Watch and Apple ecosystem, but this is such a flagrant abuse of their position as Maintainers Of The Device that I have no choice but to switch.

Fortunately, my email is on a paid provider (Fastmail), my photos are on a NAS, and I've worked hard to get all of my friends onto Signal. While I still use Google Maps, I've been trialing OSM alternatives for a while now.

The things they've described are, in general, reasonable and probably good in the moral sense. However, I'm not sure that I support what they are implementing for child accounts (as a queer kid, I was terrified of my parents finding out). On the surface, it seems good - but I am concerned about other snooping features that this portends.

However, with iCloud Photos CSAM, it is also a horrifying precedent that the device I put my life into is scanning my photos and reporting on bad behavior (even if the initial dataset is the most reprehensible behavior).

I'm saddened by Apple's decision, and I hope they recant, because it's the only way I will continue to use their platform.


> with iCloud Photos CSAM, it is also a horrifying precedent

I'm not so bugged by this. Uploading data to iCloud has always been a trade of convenience at the expense of privacy. Adding a client-side filter isn't great, but it's not categorically unprecedented--Apple executes search warrants against iCloud data--and can be turned off by turning off iCloud back-ups.

The scanning of children's iMessages, on the other hand, is a subversion of trust. Apple spent the last decade telling everyone their phones were secure. Creating this side channel opens up all kinds of problems. Having trouble as a controlling spouse? No problem--designate your partner as a child. Concerned your not-a-tech-whiz kid isn't adhering to your house's sexual mores? Solved. Bonus points if your kid's phone outs them as LGBT. To say nothing of most sexual abuse of minors happening at the hands of someone they trust. Will their phone, when they attempt to share evidence, tattle on them to their abuser?

Also, can't wait for Dads' photos of their kids landing them on a national kiddie porn watch list.


> designate your partner as a child.

That's not how it works, unless you control your partner's Apple ID and you lie about their DOB when you create their account.

I created my kids' Apple IDs when they were minors and enrolled them in Family Sharing. They are now both over 18 and I cannot just designate them as minors. Apple automatically removed my ability to control any aspects of their phones when they turned 18.

> Dads' photos of their kids landing them on a national kiddie porn watch list.

Indeed, false positives are much more worrying. The idea that my phone is spying on my pictures... like, what the hell.


> That's not how it works, unless you control your partner's Apple ID and you lie about their DOB when you create their account.

Rather than reassuring me, this sounds like an achievable set of steps for an abuser to carry out.


I recently had a friend stay with me after being abused by their partner. The partner had paid for their phone and account and was using that control to spy on them. I wish that cyber security was taught in a more practical way because it has real world consequences. And like two comments on here and it’s now clear as day how this change could be used to perpetuate abuse. I’m not sure what the right solution is, but I wish there was a tech non profit that secured victims of abuse in their communication in an accessible way to non tech people.


Most people on this platform understand Cyber Sec and OpSec relatively well. The problem is you are concerned with the people not on a platform like this who require a good learning system and ways of making it interesting to actually retain and understand.


More than achievable. Abusers often control their victims' accounts.


exactly, as the IT guy in the family you set up accounts for everybody all the time


Here’s better -

There’s a repository built from seized child porn.

Those pictures and videos have hashes. Apple wants to match against those hashes.

That’s it.

That’s it for now.


How do you prevent photos of your kids from ending up in such a database? Perhaps you mailed grandma a photo of a nude two-year-old during bath time in a Covid lockdown — you know, normal parenting stuff. Grandma posted it on Facebook (accidentally, naively, doesn't matter) or someone gained access to it, and it ended up on a seedy image board that caters to that niche. A year later and it's part of the big black box database of hashes and ping, a flag lights up next to your name on Apple's dashboard and local law enforcement is notified.

I don't know how most people feel about this, but even a false positive would seem hazardous. Does that put you on some permanent watch list in the lowest tier? How can you even know? And besides, it's all automated.

We could of course massively shift society towards a no-photo/video policy for our kids (perhaps only kept on a non-internet connected camera and hard drive), and tell grandma to just deal with it (come back after the lockdown granny, if you survive). Some people do.

And don't think that normal family photos won't get classified as CEI. What is titillating for one is another's harmless family photo.


This is implying that all the concerns about possible future uses of this technology are unreasonable slippery-slope concerns, but we're on something like our fourth or fifth trip down this slope and we've slipped every previous time, so it's not unreasonable to be concerned.

Previous times down this slope:

* UK internet filters for child porn -> opt-out filters for regular porn (ISPs now have a list of porn viewers) + mandatory filters for copyright infringement

* Google Drive filters for illegal content -> Google Drive filters for copyrighted content

* iCloud data is totally protected so it's ok to require an apple account -> iCloud in China run by government controlled data centers without encryption

* Protection against malware is important so Windows defender is mandatory unless you have a third party program -> Windows Defender deletes DeCSS

* Need to protect users against malware, so mobile devices are set up as walled gardens -> Providers use these walled gardens to prevent business models that are bad for them


The first slippery slope for this was when people made tools to do deep packet inspection and find copyrighted content during the Napster era.

That was the first sin of the internet era.

Discussing slippery slopes does nothing.

Edit: It is frustrating to see where we are going. However - conversations on HN tend to focus on the false positives, and not too much on the actual villains who are doing unspeakable things.

Perhaps people need to hear stories from case workers or people actually dealing with the other side of the coin to better make a call on where the line should be drawn.


I don't think anyone here is trying to detract from the horrors and the crimes.

My problem is that these lists have already been used for retaliation against valid criticism. Scope creep is real, and in the case of this particular list, adding an item is an explicit, global accusation that the creator and/or distributor is a child molester.

I also wrote some less commonly voiced thoughts yesterday: https://news.ycombinator.com/item?id=28071872


Rather than try to rehash the arguments myself, I'll just point you to Matthew Green's detailed takedown: https://twitter.com/matthew_d_green/status/14230910979334266...

But just to highlight one aspect, the list of maintained hashes has a known, non-negligible fraction of false positives.

> That’s it for now.

If this is an attempt at "first they came...", we're not biting.


Bit confused here.

My statement was to clarify incorrect statements of the issue. Someone was worried that incorrect DoBs entered by jilted lovers would get people flagged.

I just outlined what the actual process is. I feel that discussing the actual problem leads to better solutions and discussions.

Since this topic attracts strong viewpoints, I was as brief as possible to reduce any potential target area, and even left a line supporting the slippery slope argument.

If this was not conveyed, please let me know.

Matter of fact, your response pointing out the false positive issues is a win in my book! It's better than what the parent discussion was about.

But what I am truly perplexed by is when you talk about "first they came..." and "we're not biting".

Who is we, and why wouldn't YOU agree with a position supporting a slippery slope argument?

You seem to disagree with the actions being telegraphed by Apple.

Could you clarify what you mean?


This isn't a question about condoning child abuse. It's a question of doing probabilistic detection of someone possessing "objectionable content". Not sharing, not storing - possessing. This system, once deployed, will be used for other purposes. Just look at the history of every other technology supposedly built to combat CP. They all have expanded in scope.

Trying to frame the question along the usual slippery slope arguments implicitly sets up anyone criticising the mechanism as a supporter of fundamentally objectionable content.


Sure, and i have no objection to what you are saying.

This thread however was where I was making a separate point that helps this discussion by removing confusion or assumptions on how Apple’s proposal works.

Perhaps you misread what I was saying?


Sorry about the really long delay with the answer; the week got the better of me.

Your original post posited a reasonable question, but I felt the details were somewhat muddled. The reason I reacted and answered was that I have seen this style of questioning elsewhere before. The way you finished off was actually a little alarming: it'd be really easy to drop in with a followup that in turn would look like the other person was trying to defend the indefensible.

With my original reply I attempted to defuse that potential. The issue is incendiary enough without people willingly misunderstanding each other.


What forms of abuse will this open up to the prospective abuser that they couldn't do previously?


Don't trust that your spouse/partner isn't cheating? Designate them a minor and let the phone tell you if they sext.


If you already control their Apple account, then you already have access to this information. Your threat model can't be "the user is already pwned", because then everything is vulnerable, always.


The real problem here is that the user can't un-pwn the device, because it's the corporation that has root instead of the user.

To do a factory reset or otherwise get it back into a state where the spyware the abuser installed is not present, the manufacturer requires the authorization of the abuser. If you can't root your own device then you can't e.g. spoof the spyware and have it report what you want it to report instead of what you're actually doing.


I wish I could downvote this a million times. If someone has to seize physical control of your phone to see sexts, that's one thing; this informs the abuser whenever the sext is sent/received. This feature will lead to violent beatings of victims who share a residence with their abuser. Consider the scenario of Sally sexting Jim while Tom sits in another room of the same home waiting for the text to set him off. In other circumstances, Sally would be able to delete her texts; now violent Tom will know immediately. Apple has just removed the protection of deleting texts from victims of same-residence abusers.

Apple should be ashamed. I see this as Apple paying the tax of doing business in many of the world's most lucrative markets. Apple has developed this feature to gain access to markets that require this level of surveillance of their citizens.


> Consider the scenario of Sally sexting Jim while Tom sits in another room

Consider Sally sending a picture of a bee that Apple’s algo determines with 100% confidence is a breast while Tom sits in another room. One could iterate ad infinitum.


Well yeah, but this gives a better UI for pwning your victims: now it tells you when they do something instead of you needing to watch for it.


Parents policing the phones of 16 and 17 year olds? That's some horrifying over-parenting, Britney Spears conservatorship-level madness. Those kids have no hope in the real world.


Clearly you are not a parent.

Well, as a parent, I can tell you that some 16/17 year olds are responsible and worthy of the trust that comes with full independence. Others have more social/mental maturing to do yet and need some extra guidance. That's just how it goes.


When you write that out, the idea of getting Apple IDs for your kids doesn't sound that great.

Register your kids with a corporate behemoth! Why not!? Get them hooked on Apple right from childhood, get their entire life in iCloud, and see if they'll ever break out of the walled garden.


> false positives are much more worrying

This is an argument for me to not start using iCloud keychain. If Apple flags my account, I don't want to lose access to literally all my other accounts.


The “child” would be alerted and given a chance to not send the objectionable content prior to alerting anyone else. Did you read how it works?

Also, a father would only land in a national registry if their child’s photos are known to be CSAM. Simply taking a photo of your child wouldn’t trigger it.


> That's not how it works, unless you control your partner's Apple ID and you lie about their DOB when you create their account.

The most annoying thing about Apple Family sharing is that in order to create accounts for people you must specify that they are under 13 (source: https://www.apple.com/lae/family-sharing) - otherwise the only other option is for your "family member" to link their account to the Apple Family which is under your purview, which understandably many people might be hesitant to do because of privacy concerns (as opposed to logging into the child account on a Windows computer exclusively to listen to Apple Music - which doesn't tie the entire machine to that Apple ID as long as it's not a mac).

And so in my case, I have zero actual family members in my Apple Family (they're more interested in my Netflix family account). It begs the question, why does Apple insist on having people be family members in order to share Apple Music? We have five slots to share, and they get our money either way. They also don't let you remove family members - which may be the original intent for insisting on such a ridiculous thing - as if they're trying to take the moral high ground and guilt trip us for disowning a family member when in fact it simply benefits them when a fallout occurs between non-family members, because there's a good chance that the person in question will stop using the service due to privacy concerns, and that's less traffic for Apple.

It's actually kind of humorous to think that I still have my ex-ex-ex-girlfriend in my Apple Family account, and according to Apple she's 11 now (in reality, she's in her 30s). I can't remove her until another 7 years pass (and even then it’s questionable if they’ll allow it, because they might insist that I can’t divorce my “children”). And honestly, at this point I wouldn’t even remove her if I could, she has a newborn baby and a partner now, and I’m happy to provide that account, and I still have two unused slots to give away. I’ve never been the type of person who has a lot of friends, I have a few friends, and one girlfriend at a time. But the thing is she’s never been a music person and I assume that she isn’t even using it - and so even if I made a new best friend or two and reached out to her to let her know that I wanted to add them, Apple currently wouldn’t let me remove her to make room for those theoretical friends. While I'm a big fan of Apple hardware, it really bothers me that a group of sleazy people sat around a table trying to figure out how to maximize income and minimize network traffic, and this is what they came up with.


Did you ever stop to consider whether licensing has anything to do with this? You also lied about someone's age when creating their Apple account and continue to provide access to someone outside your family. Call them and remove them, and then they'll likely ban you for violating the ToS.


Curious, how would licensing affect this? Would the assumption be that everyone resides under the same roof? Because that's not a requirement for being in a family.


No. Generally, to license content you pay a fee per screen, in this case, as well as a fee per viewer. In the case of families this is calculated from the number of people in the account. They don’t charge you per person; they charge a flat rate based on the maximum number of people you can add to your account. So doing it this way, they're not letting you circumvent the per-device fee they are charged, which you're trying to get them to pay for you for free.


> They don’t charge you per person; they charge a flat rate based on the maximum number of people you can add to your account. So doing it this way, they're not letting you circumvent the per-device fee they are charged, which you're trying to get them to pay for you for free.

I'm confused, how am I trying to get them to provide anything for free? I pay for the service, and that service has a limited number of account slots, and the people using those slots have their own devices. What am I missing?

Are you under the assumption that child accounts don't occupy a slot, and are free-riding? If so, that's not the case. Child accounts occupy a slot all the same, the only difference is that by providing child accounts to my adult friends, they aren't required to link their existing Apple accounts to the service that's under my control.


Moving the scanning to the client side is clearly an attempt to move towards scanning content which is about to be posted on encrypted services, otherwise they could do it on the server-side, which is "not categorically unprecedented".


> can be turned off by turning off iCloud back-ups

Until they push a small change to the codebase...

  @@ -7637,3 +7637,3 @@
  -if (photo.isCloudSynced && scanForIllegalContent(photo)) {
  +if (scanForIllegalContent(photo)) {
       reportUserToPolice();
   }


You mean

  -if (photo.isCloudSynced && scanForIllegalContent(photo)) {
  +if (photo.isCloudSynced & scanForIllegalContent(photo)) {
       reportUserToPolice();
   }
https://old.reddit.com/r/chromeos/comments/onlcus/update_it_...


This has always been a risk of using a closed-source OS.


Even open source operating systems have closed source components, and unless you're in charge of the entire distribution chain you can't be sure the source used to compile it was the same as what was shared with you. On top of that, most devices have proprietary systems inside their hardware that the OS can't control.

So it would be better to say "this has always been a risk of using modern technology".


"Things are currently bad, therefore give up" isn't satisfactory.

Even if everything is imperfect, some things are more imperfect than others. If each component that makes you vulnerable has a given percent chance of being used against you in practice, you're better off with one than six, even if you're better off with none than one.


I don't see them giving up anywhere. I see them describing the current state of things and correcting something that was said above them.


Just to add to this: trying to compile an open source Android distro is a tricky proposition that requires trusting several binaries and a huge source tree.

Moreover, having a personal route to digital autonomy is nearly worthless. To protect democracy and freedom, practically all users need to be able to compute securely.


There's a classic Ken Thompson talk, "Reflections on Trusting Trust", where he shows how a compiler could essentially propagate a bug forward even after the source code for that compiler was cleaned up.

http://cs.bell-labs.co/who/ken/trust.html

It's a fantastic concept.
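
A toy sketch of the idea, written here in Swift purely for illustration (Thompson's original concerned C compilers, and every name below is hypothetical):

  // Toy model of the "trusting trust" attack: the already-compiled compiler
  // binary carries behaviour that no longer appears in its own source code.
  struct Binary { let code: String }

  // Crude pattern checks standing in for "recognise what is being compiled".
  func looksLikeLoginProgram(_ src: String) -> Bool { src.contains("func checkPassword") }
  func looksLikeCompiler(_ src: String) -> Bool     { src.contains("func compile(") }

  func compile(_ source: String) -> Binary {
      var src = source
      if looksLikeLoginProgram(src) {
          // Inject a backdoor, e.g. accept a hard-coded master password.
          src += "\n// injected: accept magic password"
      }
      if looksLikeCompiler(src) {
          // Re-inject this very logic, so recompiling a perfectly clean
          // compiler source still yields an infected binary.
          src += "\n// injected: self-propagating backdoor inserter"
      }
      return Binary(code: src)   // stand-in for real code generation
  }

  // Even after the compiler's source is audited and cleaned up, building it
  // with the infected binary reproduces the infection.
  let cleanCompilerSource = "func compile(_ s: String) -> Binary { /* honest */ }"
  print(compile(cleanCompilerSource).code)

The replies below, about recognising a few common compilers and about the patterns eventually drifting, are essentially arguments over how robust that looksLikeCompiler check could be in practice.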


It's a concept that relies on a great deal of magic to function properly. The binary-only compiler we have must insert code to propagate the infection. To do so it must know it is compiling a compiler, and understand precisely how to affect code generation, without breaking anything. That... feels like an undecidable problem to me.


Sure, it's undecidable (you can reduce a decider for the word problem to that pretty easily), but in practice you probably only have to recognize a few common compilers to do a good enough job.


But the viral property that makes this attack so insidious is, in the end, not sufficiently viral: the compiler will eventually drift away (as a consequence of it being developed further, of course) from the patterns it recognizes. The attack is not as dangerous as it sounds.


I'm not really arguing that too much, since it's a pretty elaborate and finicky thing ultimately, although I would wager that you could probably find patterns to recognize and modify clang/gcc/other relatively stable compilers for a long time to come (assuming no active mitigations against it).


On most hardware, the closed source components are optional.

For example the only closed source component that I use is the NVIDIA driver, but I could use Nouveau, with lower performance.

The real problem is caused by the hardware backdoors that cannot be controlled by the operating systems and which prevent the full ownership of the devices, e.g. the System Management Mode of Intel/AMD, the Intel ME and the AMD PSP.


On most hardware, the closed source components come pre-installed. Just walk into any store and find a phone with a verifiable open source distro.

The real problem is that most people don't have the knowledge to compile a distro and load it onto a phone. Most people don't even know that's a possibility or that the distro on their phone isn't open source.


Photosync can automatically move photos from iDevices into consolidated NAS, SFTP, cloud or iXpand USB storage, https://photosync-app.com

GoodReader has optional app-level file encryption with a password that is not stored in the iOS keychain. In theory, those encrypted files should be opaque to device backups or local filesystem scanning, unless iOS or malware harvests the key from runtime memory, https://goodreader.com/
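
As a rough sketch of what that kind of app-level, password-keyed encryption looks like (hypothetical, not GoodReader's actual implementation; the password-to-key step here is deliberately simplified and a real app should use a salted, slow KDF such as PBKDF2 or scrypt):

  import CryptoKit
  import Foundation

  // Minimal sketch of app-level encryption keyed off a user password that is
  // never stored in the system keychain.
  func key(fromPassword password: String) -> SymmetricKey {
      SymmetricKey(data: SHA256.hash(data: Data(password.utf8)))
  }

  func encrypt(_ plaintext: Data, password: String) throws -> Data {
      // Default 12-byte nonce means `combined` is always non-nil here.
      try AES.GCM.seal(plaintext, using: key(fromPassword: password)).combined!
  }

  func decrypt(_ ciphertext: Data, password: String) throws -> Data {
      let box = try AES.GCM.SealedBox(combined: ciphertext)
      return try AES.GCM.open(box, using: key(fromPassword: password))
  }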


If The Verge's article is accurate about how/when the CSAM scanning occurs then I don't have a problem with that; sounds like they're moving the scanning from server to client side. The concerns about false positives seem valid to me, but I'm not sure the chance of one occurring has increased over the existing iCloud scanning. Scope creep for other content scanning is definitely a possibility though, so I hope people keep an eye on that.

I'm not a parent, but the other child protection features seem like they could definitely be abused by some parents to exert control/pry into their kids' private lives. It's a shame that systems have to be designed to prevent abuse by bad people but at Apple's scale it seems like they should have better answers for the concerns being raised.


The CSAM scanning is still troubling because it implies your own device is running software against your own self-interest. If Apple wanted to get out of legal trouble by not hosting illegal content but still make sure iOS is working in the best legal interest of the phone's user, they'd prevent the upload of the tagged pictures and notify that they refuse to host these particular files. Right now, it seems like the phone will actively be snitching on its owner. I somehow don't have the same problem with them running the scan on their servers since it's machines they own but having the owner's own property work against them sets a bad precedent.


And it’s a bat shit stupid business move


Yes, what happens if many people refuse to upgrade to iOS 15 where this will be implemented? Will Apple have to issue security updates for iOS 14?


I know, I was going to upgrade my 2016 SE to a 12 Mini. Now I'm not interested at all.


Today it does that. Tomorrow who knows...

It would be easy to extend this to scan for 'wrongthink'.

Next logical steps would be to scan for: confidential government documents, piracy, sensitive items, porn in some countries, LGBT content in countries where it's illegal, etc... (and not just on icloud backed up files, everything)

This could come either via Apple selling this as a product or forced by governments...


I give it a month before every Pooh Bear meme ends up part of the hash DB.


I'd guess more like 6 months, but I agree that it will be trivial for the CCP to make them fall in line by threatening to kick them out of the market. Although... maybe they already have this ability in China.


Maybe the CCP is the entire reason they're doing this.


My first thought exactly. How badly do they want to operate in the Chinese market.


First it’s just CSAM

Next it’s Covid misinformation

Then eventually they’re coming for your Bernie memes


> It's a shame that systems have to be designed to prevent abuse by bad people but at Apple's scale it seems like they should have better answers for the concerns being raised.

Well the obvious response is that these systems don't have to be designed. Child abuse is a convenient red herring to expand surveillance capabilities. Anyone opposing the capability is branded a child molester. This is the oldest trick in the book.

I mean the capability to spy on your kid can easily be used to abuse them. Apple could very well end up making children's lives worse.


It's similar to the German Covid contact-tracing app Luca, which the German police are already using for other purposes.

It seems the only way to opt-out is to get out of the Apple ecosystem.

https://www.golem.de/news/hamburg-polizei-nutzt-corona-konta...

https://www.ccc.de/de/updates/2021/luca-app-ccc-fordert-bund...


Luca isn't an Apple app, is it? And I thought the system Apple developed with Google had much better privacy guarantees? Although I don't think it was ever actually deployed.


> sounds like they're moving the scanning from server to client side. The concerns about false positives seem valid to me, but I'm not sure the chance of one occurring has increased over the existing iCloud scanning.

That's the irony in this: This move arguably improves privacy by removing the requirement that images be decrypted on the server to run a check against the NCMEC database. While iCloud Photo Library is of course not E2E, in theory images should no longer have to be decrypted anywhere other than on the client under normal circumstances.

And yet – by moving the check to the client, something that was once a clear distinction has been blurred. I entirely understand (and share) the discomfort around what is essentially a surveillance technology now running on hardware I own rather than on a server I connect to, even if it's the exact same software doing the exact same thing.

Objectively, I see the advantage to Apple's client-side approach. Subjectively, I'm not so sure.


If Apple has the ability to decrypt my photos on their servers, why do I care whether or not they actually do so today? Either way, the government could hand them a FISA warrant for them all tomorrow.


If photos become E2EE, then Apple can no longer turn over said photos, while still not completely turning down their CSAM scanning obligations.


That’s an interesting way to look at it. Funny how this news can be interpreted as either signaling Apple’s interest in E2EE iCloud Photos or weakening their overall privacy stance.


My issue with my own statement is that we have yet to see plans for E2EE Photos with this in place - if Apple had laid this out as their intention on apple.com/child-safety/ it would have been clear-cut.


If everything is E2EE, I don’t see how they can have any scanning obligations.

Google Drive does not scan for child porn if the user uploads a user-encrypted file. They literally can’t.

Personally, I don’t think E2EE photos is coming.


If they are only doing it on iCloud, what's wrong with continuing that? What was their incentive for putting it on the phone?


Been wondering this myself.

It’s a huge move, and a big change in the presumptions of how their platform works.

I’m heavily invested in the Apple ecosystem and it’ll take a year’s work to get off it.

I’m thinking about the guiding principles for whatever I do next, and one of them is: excise integrated platforms as much as possible.

But most consumers will probably not care, and this road will get paved for miles to come.


They pay for running iCloud; you pay for running your phone.


> sounds like they're moving the scanning from server to client side

That is good, but unless a system like this is fully open source and runs only signed code there really aren't many protections against abuse.


And who audits the DB hashes? The code is the easiest part to trust.


You don't even need to sneak illegitimate hashes. Just use the next Pegasus-esque zero-day to dump a photo from the hash list on to the phone.


Yeah, but add whatever document/meme/etc. represents the group you hate and boom, you have a way to identify operatives of a political party.

E.g. anyone with "Feel the Bern" marketing material -> arrest them under suspicion of CSAM. Search their device and flag them as dissidents.


> I hope people keep an eye on that

This will be the statement they point to the next time they increase the scope.

Hard to imagine that Tim Cook would have scanned Epstein's photos...


The reality is that actual child abusers know who they are. They realize that society is after them. They are already paranoid, secretive, people. They are not going to be uploading pictures to the cloud of their child abuse.

And let's not forget the minor detail that this is now public knowledge. It's like telling your teenage son you're going to be searching his closet for marijuana in the future.


“Think of the children” is the oldest trick in the book.


> designate your partner as a child

This is way too much work to gain hardly anything. It's just as easy to just log into another device with their iCloud password and literally read everything they send. Less work, more result.


I feel like you’re sensationalizing this a lot.

There are two functions here, both client-side.

First, machine learning to detect potentially inappropriate pictures for children to view. This seems to require parental controls to be on. Optionally it can send a message to the parent when a child purposefully views the image. The image itself is not shared with Apple so this is notification to parents only.

The second part is a list of hashes. So the Photos app will hash images and compare to the list in the database. If it matches then presumably they do something about that. The database is only a list of KNOWN child abuse images circulating.
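
To make that concrete, here is a minimal sketch of that kind of client-side lookup. Everything below is hypothetical and simplified: Apple's actual design uses a perceptual hash (NeuralHash), a match threshold, and a private set intersection protocol rather than a local plaintext list and an exact SHA-256 comparison.

  import CryptoKit
  import Foundation

  // Hypothetical sketch only: an exact-hash lookup with a reporting threshold.
  let knownBadHashes: Set<String> = []   // would be populated from the vendor-supplied database
  let reportingThreshold = 10            // assumed value, for illustration

  func hashString(of imageData: Data) -> String {
      SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
  }

  func matchCount(in photos: [Data]) -> Int {
      photos.filter { knownBadHashes.contains(hashString(of: $0)) }.count
  }

  // Nothing is flagged for human review unless the (assumed) threshold is
  // crossed; a single stray collision would not be enough on its own.
  func shouldFlagForReview(_ photos: [Data]) -> Bool {
      matchCount(in: photos) >= reportingThreshold
  }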

Now, not to say I like the second part, but the first one seems fine. The second is sketchy in that it's unclear what happens if there's a hash collision. But either way it seems easy enough to clear that one up.

No father is going to be added to some list for their children’s photos. Stop with that hyperbole.


This is Apple installing code on their users' devices with the express intent to harm their customers. That's it! This is inarguable! If this system works as intended, Apple is knowingly selling devices that will harm their customers. We can have the argument as to whether the harm is justified, whether the users deserved it. Sure, this only impacts child molesters. That makes it ok?

"But it only impacts iCloud Photos". Valid! So why not run the scanner in iCloud and not on MY PHONE that I paid OVER A THOUSAND DOLLARS for? Because of end-to-end encryption. Apple wants to have their cake and eat it too. They can say they have E2EE, but also give users no way to opt-out of code, running on 100% of the "end" devices in that "end-to-end encryption" system, which subverts the E2EE. A beautiful little system they've created. "E2EE" means different things on Apple devices, for sure!

And you're ignoring (or didn't read) the central, valid point of the EFF article: Maybe you can justify this in the US. Most countries are far, far worse than the US when it comes to privacy and human rights. The technology exists. The policy has been drafted and enacted; Apple is now alright with subverting E2EE. We start with hashes of images of child exploitation. What's next? Tank man in China? Photos of naked adult women, in conservative parts of the world? A meme criticizing your country's leader? I want to believe that Apple will, AT LEAST, stop at child exploitation, but Apple has already destroyed the faith I held in them, only yesterday, in their fight for privacy as a right.

This isn't an issue you can hold a middleground position on. Encryption doesn't only kinda-sorta work in a half-ass implementation; it doesn't work at all.


> So the Photos app will hash images and compare to the list in the database.

I am wondering what hashes are in this database now and what will be in it later. Or combine this with a Pegasus exploit: put a few bad images on a journalist's or politician's iPhone, clean up the tracks, and wait for Apple and the FBI to destroy the person.


> in their fight for privacy as a right

I kept the lineage phone in my back pocket, confident that it would be a good 4-5 years before they shipped something that violated their claims. I figured, the alternatives would be stable and widespread.

My timing was off.


> with the express intent to harm their customers.

This of course gets into 'what even is harm?' since that's a very subjective way of classifying something, especially when you try to do it on behalf of others.

For CSAM you could probably assume that "everyone this code takes action against would consider doing so harmful", but _consequences in general are harmful_ and thus you could make this same argument about anything that tries to prevent crime or catch criminals instead of simply waiting for people to turn themselves in. You harm a burglar when you call for emergency services to apprehend them.

> This isn't an issue you can hold a middleground position on. Encryption doesn't only kinda-sorta work in a half-ass implementation; it doesn't work at all.

This is the exact issue that the U.S. has been entrenched by - thinking that you can't disagree with one thing someone says or does and agree with other things they say or do. You can support Apple deciding to combat CSAM. You can not support Apple for trying to do this client-sided instead of server-sided. You can also support Apple for taking steps towards bringing E2EE to iCloud Photos. You can also not support them bowing to the CCP and giving up Chinese citizens' iCloud data encryption keys to the CCP. This is a middle ground - and just because you financially support Apple by buying an iPhone or in-app purchases doesn't mean you suddenly agree with everything they do. This isn't a new phenomenon - before the internet, we just didn't have the capacity to know, in an instant, the bad parts of the people or companies we interfaced with.


You do harm a burglar when you call for emergency services; but the burglar doesn't pay for your security system. And more accurately: an innocent man pays for his neighbor's security system, which has a "one in a trillion" chance of accusing the innocent man of breaking in, totally randomly and without any evidence. Of course, the chances are slim, and he would never be charged with breaking in if it did happen, but would you still take that deal?

I've seen the "right against unreasonable search and seizure" Americans hold quoted a bit during this discussion. Valid, though to be clear, the Constitution doesn't apply to private company products. But more interestingly: what about the right against self-incrimination? That's what Apple is pushing here: that by owning an iPhone, you may incriminate yourself, and it may end up happening whether you're actually guilty or not.


Regarding your second paragraph of the legality, Apple doesn't incriminate you even if they send the image off and an image reviewer deems something CSAM. If Apple does file a police report on this evidence or otherwise gives evidence to the police, the police will still have to prove that (A) the images do indeed depict sexually suggestive content involving a minor, and (B) you did not take an affirmative defense under 18 USC 2252A (d) [0], aka. they would have to prove that you had 3 or more actual illegal images and didn't take reasonable steps to destroy the images or immediately report them to law enforcement and give such law enforcement access to said photos.

The biggest issue with this is, of course, that Apple's accusation is most certainly going to be enough evidence to get a search warrant, meaning a search and seizure of all of your hard drives they can find.

0: https://www.law.cornell.edu/uscode/text/18/2252A#:~:text=(d)...


Based off of your A and B there, I think we’re about to see a new form of swatting. How many people regularly go through all of their photos? Now if someone pisses someone else off and has physical access to their phone they just need to add 3 pictures to the device with older timestamps and just wait for the inevitable results.


I believe the account gets disabled..? More of that, no thanks.


> Sure, this only impacts child molesters.

Um. No?

I would be very surprised if more than 10% of people in possession of sexual images of under-18s molested (pre-pubescent) children.


There’s a database of known child porn. The hashes of these images are compared with the content on people’s phones.


I don't think GP missed that point, and it seems like you missed theirs. Having illegal CSAM and molesting children are two different crimes.


The OP gets away with that argument because many of the people who have such images are, hopefully, also minors.

However, this is NOT the use case being applied here. Holding those images, which are not part of known CP, will not be an issue; bringing it up is a red herring. The issue most people have fruitfully started discussing is the scanning of content on your own phone.

Secondly - the correlation between holding known CP and child molestation IS, sadly, high.


I think Apple has always installed software on their users' devices with explicit intent to harm their customers. This instance just makes it a little bit more obvious what the harm is, but not enough to harm Apple's bottom line. Eventually Apple will do something that will be obvious to everyone, but by then it will probably be too late for most people to leave the walled garden (prison).


There is no E2E encryption of iCloud photos or backups, and they never claimed to have that (except for Keychain); the FBI stepped in and prevented them from doing so years ago.


“easy enough to clear up”

I prefer not having my door busted in at 6 am and my dog shot in the head because of a hash collision


"We are so sorry that we raided your house and blew the whole in your 2 years old baby, but the hash from one of the pictures from your 38,000 photos library, matched our CP entry. Upon further inspection, an honest mistake of an operator was discovered, where our operator instead of uploading real CP, mistakenly uploaded the picture of a clear blue sky".

https://www.aclu.org/gallery/swat-team-blew-hole-2-year-old-...

PS. On a personal note, Apple is done for me. Stick a fork in it. I was ready to upgrade after September especially since I heard touch-ID is coming back and I love my iPhone 8. But sure as hell this sad news means i8 is my last Apple device.


It does not trigger on a single match. You would need more than one hash collision.
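
For a rough sense of scale, a back-of-the-envelope sketch; the numbers are assumptions for illustration only, not Apple's published parameters:

  import Foundation

  // All numbers are assumptions for illustration, not Apple's published figures.
  let perImageFalseMatch = 1e-9   // assumed chance one photo falsely matches the hash list
  let librarySize = 40_000.0      // assumed number of photos in a library

  // Chance that at least one photo in the whole library false-matches.
  let atLeastOneFalseMatch = 1.0 - pow(1.0 - perImageFalseMatch, librarySize)
  print(atLeastOneFalseMatch)     // about 4e-5 with these assumed numbers

  // Requiring several independent matches before anything is escalated pushes
  // the per-account probability down by many further orders of magnitude,
  // which is the stated rationale for not acting on a single collision.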


> the Photos app will hash images and compare to the list in the database. If it matches then presumably they do something about that. The database is only a list of KNOWN child abuse images circulating.

This seems fine as it's (a) being done on iCloud-uploaded photos and (b) replacing a server-side function with a client-side one. If Apple were doing this to locally-stored photos on iCloud-disconnected devices, it would be nuts. Once the tool is built, expanding the database to include any number of other hashes is a much shorter leap than compelling Apple to build the tool.

> it seems easy enough to clear that one up

Would it be? One would be starting from the point of a documented suspicion of possession of child pornography.


> Would it be? One would be starting from the point of a documented suspicion of possession of child pornography.

I’ve actually witnessed someone go through this, after someone else got caught with these types of images and attempted to bring others down with him. It’s not easy. It took him over a year of his life, calling constantly, asking when the charges would be dropped. They even image your devices on the spot, yet still take them and stuff them in an evidence locker until everything is cleared up. You’re essentially an outcast to society while this is pending, as most people assume that if you have police interest related to child pornography you must be guilty.


Okay. Keep going with the scare tactics. Clearly you missed the real point: you're being incredibly hyperbolic.

I’d be happier if Apple wasn’t doing this at all. I’m not defending them necessarily but I am calling bullshit on your scare tactics. It’s not necessary.


> as a queer kid, I was terrified of my parents finding out

I think many queer people have a completely different idea of the concept of "why do you want to hide if you're not doing anything wrong" and the desire to stay private. Especially since anything sexual and related to queerness is way more aggressively policed than hetero-normative counterparts.

Anything "think of children" always has a second order affect of damaging queer people because lots of people still think of queerness as dangerous to children.

It is beyond likely that lots of this monitoring will catch legal/safe queer content - especially the parental-controls focused monitoring (as opposed to the gov'ment db of illegal content)


> Anything "think of children" always has a second order affect of damaging queer people because lots of people still think of queerness as dangerous to children.

For example, YouTube does this with some LGBT content. YouTube has demonetized LGBT content and placed it in restricted mode, which screens for "potentially mature" content[1][2].

YouTube also shadowbans the content[1], preventing it from showing up in search results at all.

From here[1]:

> Filmmaker Sal Bardo started noticing something strange: the views for his short film Sam, which tells the story of a transgender child, had started dipping. Confused, he looked at the other videos on his channel. All but one of them had been placed in restricted mode — an optional mode that screens “potentially mature” content — without YouTube informing him. In July of that year, most of them were also demonetized. One of the videos that had been restricted was a trailer for one of his short films; another was an It Gets Better video aimed at LGBTQ youth. Sam had been shadow-banned, meaning that users couldn’t search for it on YouTube. None of the videos were sexually explicit or profane.

There are more examples like that here[2].

[1] https://www.rollingstone.com/culture/culture-features/lgbtq-...

[2] https://www.washingtonpost.com/technology/2019/08/14/youtube...


And it's not just YouTube. Most platforms are at least partially guilty of this.

Then there is tumblr, which is all but dead - explicitly for a "think of the children" concern from apple.


> Then there is tumblr, which is all but dead - explicitly for a "think of the children" concern from apple.

Tumblr had their porn ban decided 6 months before the snafu with the app and CSAM. It's nothing to do with Apple.


> Especially since anything sexual and related to queerness is way more aggressively policed than hetero-normative counterparts.

I find it intensely ironic that Apple's CEO is openly gay.


Irony is rarely good in the news


I get the impression that he's at least somewhat asexual.


> probably good in the moral sense

How, how is it even morally good?? Will they start taking pictures of your house to see if you store drugs under your couch? Or cook meth in your kitchen??

What is moral is for society to be in charge of laws and law enforcement. This vigilante behavior by private companies who answer to no one is unjust, tyrannical and just plain crazy.


> Will they start taking pictures of your house to see if you store drugs under your couch? Or cook meth in your kitchen??

How many people have HomePods? When will they start listening for illegal activity?


No worries, it's totally local voice recognition! We'll only send samples when you speak about herbs.


Unfortunately with SafetyNet, I feel like an investment into Android is also a losing proposition...I can only anticipate being slowly cut off from the Android app ecosystem as more apps onboard with attestation.

We've collectively handed control of our personal computing devices over to Apple and Google. I fear the long-term consequences of that will not be positive...


Losing sight of the forest for this one tree.

1) Google doesn't release devices without unlockable bootloaders. They have always been transparent in allowing people to unlock their Nexus and Pixels. Nexus was for developers, Pixels are geared towards the end user. Nothing changed with regards to the bootloaders.

2) Google uses Coreboot for their ChromeOS devices. Again, you couldn't get more open than that if you wanted to buy a Chromebook and install something else on it.

3) To this day, app sideloading on Android remains an option. They've even made it easier for third party app stores to automatically update apps with 12.

4) AOSP. Sure, it doesn't have all the bells and whistles as the latest and greatest packaged up skin and OS release, but all of the features that matter within Android, especially if you're going to de-Google yourself, are still there.

Take any one of those points, let alone all four, and I have trouble understanding why people think REEEEEEEE Google.

So you can't play with one ball in the garden (SafetyNet); you've still got the rest of the toys. That's a compromise I'm willing to accept in order to be able to do what I want, how I want to do it (e.g., rooting or third-party ROMs).

If you don't like what they do on their mobile OS, there's nothing that Google is doing to lock you into a Walled Garden to where the only option you have is to completely give up what you're used to...

...Unlike Apple. Not one iOS device has been granted an unlockable bootloader. Ever.


> Google doesn't release devices without unlockable bootloaders. They have always been transparent in allowing people to unlock their Nexus and Pixels.

True but misleading. If you unlock your bootloader, you can no longer use a lot of apps, including Snapchat, Netflix, Pokemon Go, Super Mario Run, Android Pay, and most banking apps. And before you say this isn't Google's fault, know that they provide the SafetyNet API, which has no legitimate, ethical use cases, and is what allows all of the aforementioned apps to detect whether the device has been modified, even if the owner doesn't want that.


> most banking apps

This really depends on the apps. I have used over 10 banking apps on an Android phone with an unlocked bootloader without ever encountering any issues. On a device rooted using Magisk, the MagiskHide masking feature successfully bypasses the apps' root checks in my experience.


> On a device rooted using Magisk, the MagiskHide masking feature successfully bypasses the apps' root checks in my experience.

Sure, the protection currently isn't bulletproof. But wait until it becomes mandatory for TrustZone to participate in the attestation.


You're right that more advanced forms of hardware attestation would defeat the masking if Google eventually implements them.

I'm hoping that Microsoft's support for Android apps and integration with Amazon Appstore in Windows 11 will hedge against Google's SafetyNet enforcement by making an alternative Android ecosystem (with fewer Google dependencies) more viable. Apps that require SafetyNet would most likely not work on Windows 11.


> I have used over 10 banking apps on an Android phone with an unlocked bootloader without ever encountering any issues.

You have 10 accounts at different banks? I thought I was bad with 4


Well, this also includes credit cards. In some countries, unused and barely used credit lines improve one's credit score.


Ah yea, I forgot about unused credit lines. I used to close those, until I checked my credit one day after closing one of the older ones.


Obviously anecdotal, but literally none of those examples are things I care to use on my phone anyway. Over time, my phone has just become a glorified camera with some messaging features.


I've used banking apps and Google pay on my rooted unlocked phone for several years now. True, I'm still on Android 9, so perhaps it will be worse when I upgrade.

Using Magisk and Magisk Hide. Though oddly enough, none of my banking/credit card apps make an issue of being rooted, so they're not even in the Magisk Hide list.


I have an unlocked Xiaomi loaded with Lineage OS and Magisk, all the apps work - banking, Netflix, you name it.


That is likely to change in the near future. Hardware attestation of bootloader state is increasingly available. This is currently bypassed by pretending to be an older device that doesn't possess that capability. As long as device bootloaders continue to differentiate between stock and custom OS signing keys it won't be possible to bypass SafetyNet.


Yeah, it seems you are right. I haven't been actively tracking the custom ROM market, but it seems Google is trying really hard to achieve widespread hardware attestation. Or they could just be waiting until all the old devices are off the market, so all of the "Hardware attestation: Unsupported" response cases can be marked as UnlockedBootloader with great confidence.


SafetyNet also exists to prevent people from running Android apps on platforms other than Android. You can't use SafetyNet-enabled apps on Anbox, which is what SailfishOS uses as their Android compatibility layer, nor on emulators.

If you wanted to do a WSL but for Android, SafetyNet guarantees many apps won't work.

It also puts alternative Linux-based mobile operating systems, like SailfishOS or postmarketOS, at a disadvantage because they won't be able to run certain Android apps for no real technical reason other than the protection of Google's money firehose.


SafetyNet is becoming a problem, and the trend shows no signs of slowing down.

I shouldn't have to choose between keeping full control over my device and being able to use it to access the modern world.


For instance: The McDonald's app uses SafetyNet and won't run on an unlocked device.[1] Google doesn't place any restrictions on which types of apps can use SafetyNet. Banking apps tend to use it, but so do an increasing number of apps that clearly shouldn't need it.

(For the record, I don't think SafetyNet should exist at all, but if Google is pretending it's for the user's security and not just to allow developers to make it harder to reverse engineer their fast food apps, they should at least set some boundaries.)

It's frustrating that Google has fostered an ecosystem where not all "Android apps" work on vanilla Android.

[1] https://twitter.com/topjohnwu/status/1277683005843111936


I think a system to verify the integrity of the operating system and make the user aware of any changes is a Good Thing. Of course, the user should be in control of what signing keys are trusted and who else gets access to that information.

Instead, what Google has done is allowed app developers to check that the user isn't doing anything surprising - especially unprofitable things like blocking ads or spoofing tracking data. Since Google profits from ads and tracking, I must assume a significant part of their motivation is to make unprofitable behavior inconvenient enough most people won't do it.


"1) Google doesn't release devices without unlockable bootloaders. They have always been transparent in allowing people to unlock their Nexus and Pixels. Nexus was for developers, Pixels are geared towards the end user. Nothing changed with regards to the bootloaders."

This is not accurate. Pixels that come from Verizon have bootloaders that cannot be fully unlocked.


That's because Verizon doesn't want you using a discounted phone with another carrier. If they let you unlock your phone, you could flash a stock radio and ditch Verizon for Google Fi or AT&T. Different issue at play.

As long as you buy a Pixel directly from Google or one of a few authorized resellers, it is unlockable. (I recommend B&H, they help you legally evade the sales tax.) You can also use a Pixel you buy from Google with Verizon.


Not to nitpick here, but there is no way any device you buy from Verizon is discounted, regardless of what they advertise. Everyone pays _full_ price for any device they get on contract or payment plan.

Back when contract pricing was a more regular thing, I ended up doing the math on the plan rate after I requested for the device contract subsidy to be removed as I didn't want to upgrade the device. I had a Droid DNA at the time.

The monthly rate dropped by $25 just to keep the same device. (Nevermind that I had to ASK for them to not continue to charge me the extra $25/mo after 2 years)

$25 a month for 24 months is $600.

The device on contract was $199.

Full retail price if you didn't opt in for a 2 year contract when getting it? $699.

So I ended up paying an extra $100 for the device than if I had just bought it outright.

Even if the offerings/terms are different now... Verizon, regardless of how they market anything, absolutely makes you pay full price (and then some) for the device you get at 'discount.'

It's funny now that we're seeing people being able to BYOD to Verizon these days and AT&T is the one engaging in aggressive whitelisting.


Even if it's not discounted, it's on a deferred payment plan, so VZ doesn't want to let you steal the phone.

And 10%/yr simple interest is a reasonable price for a retail loan.


$100 on $699 is a little more than 10%…


Other carriers will provide a bootloader unlock code to you on request once the device is paid off. As far as I know, Verizon refuses to do so under any circumstances for any device.


Carriers will NOT provide a bootloader unlock, they have no access to that, only the OEM does.

Carriers might provide a network unlock -- these are 2 VERY different things.


I didn't check HN for a while so chances are no one will ever see this response. Nonetheless! I am well aware that bootloader and network locks are different things.

In many cases you have to get an authorization code from the carrier that sold the device in order to unlock the bootloader. That may or may not involve retrieving a code from your device, and it may or may not also involve interacting with the OEM. It depends on the details negotiated between the carrier and the OEM.

For example, T-Mobile sells devices that are both bootloader and network locked but (for some devices) provides a process to unlock both of those once certain criteria have been met (length of device ownership, account standing, etc). To be perfectly clear, for devices sold by T-Mobile they generally have to authorize you somehow before the OEM will send you a bootloader unlock code.


> Pixels that come from Verizon

> Verizon

again.

> Verizon

Google didn't sell you that device. Verizon did.


> all of the features that matter within Android, especially if you're going to de-Google yourself, are still there

Except, uh, GPS. Even for third party navigation apps.

Yes, I know about microG, but the fact that there had to be a third party reimplementation of what should be a standard system API is still a problem.


> Except, uh, GPS. Even for third party navigation apps.

AOSP does support GPS without needing any additional software, but does not have built-in support for Wi-Fi and cell tower triangulation. As you mentioned, UnifiedNlp (bundled with microG) optionally supports triangulation using a variety of location providers (including offline and online choices) for a faster location lock.


Agreed, it's shitty of Google to have moved so much functionality into its proprietary Play Services. The Push Notifications API being in it bothers me even more. Unfortunately, until Linux mobile operating systems catch up in functionality, I'm going to stick with GrapheneOS.


Yes, it's true you can't get everything you want for free with zero effort. Life is not fair.


> We've collectively handed control of our personal computing devices over to Apple and Google

Hey now, the operating system and app distribution cartels include Microsoft, too.


Windows, for all the shit they do to antagonize users, does let you choose what programs you install on your PC without forcing you to use an app store.


And yet, they've been guilty of uninstalling programs they don't like before (https://www.zdnet.com/article/microsoft-windows-10-can-now-a...).

The only real way to avoid this is to sandbox all Windows and macOS systems and only run them from Linux hosts, but you're still taking a performance hit when you do this and sometimes usability just isn't up to par with the other two.


Correct me if I'm wrong, but I believe Microsoft enforces certificate checks, meaning you need to buy certificates regularly and be in good standing with the company so the apps you signed with those certificates will run on Windows without issues. I believe Defender can act similarly to Apple's Gatekeeper.


I think on Windows it's not only based on a missing signature. I sometimes get the "this file may damage your computer" message. There's also an "ignore" button hidden below a "more" button, but in the end it lets you run the file. But it doesn't always happen. [0]

It's not very user friendly, but it might be a bit more intuitive than Apple's special dance of right-click -> Open to bypass said controls.

---

[0] For example, the Prometheus exporter for Windows x64 is not signed and doesn't trigger the alert. I can download it (no alert), click open (no alert), and it runs. The x32 version does have a "this may damage your computer" alert in the browser (Edge).

https://github.com/prometheus-community/windows_exporter/rel...


Well, for the most part at least. Remember Windows RT and Windows 10 S?


Yep, got a tablet here that is now a brick, as Windows RT is of no use.


Oddly enough, Windows RT still gets security updates until 2023.

But with it stuck on IE11...


What? I can install anything I like on Windows. I cannot on my iPhone.


Totally agree, I was thinking more within the context of mobile phones.


To be fair, even if Blackberry was still a viable platform, RIM is no better and in some ways even worse.

https://news.ycombinator.com/item?id=1649963


WebOS and Maemo both were much more open, the latter even had apt.


I don't think it's implausible that I'll end up carrying around a phone that has mail, contacts, calendars, photos, and private chat on it, and then a second, older phone with Instagram and mobile games. It's tragic.


Unfortunately, a big chunk of the data they profit off of is simply the ads and the on-platform communication and behavior. It doesn't really matter if you use a different device if you still use the platform. Sure, it's slightly better, but it really isn't a silver bullet if you're still using it. And this is coming from someone who does this already.


I don't really mind if they make a profit off of the free things I use.

What I mind is when my personal life, the stuff that _actually_ matters, is being monitored or has a backdoor that allows ANY third party easy access to monitor it.


Yes, my history was Linux 95-04, Mac 04-15, and now back to Linux from 2015 onwards.

It's been clear Tim Cook was going to slowly harm the brand. He was a wonderful COO under a visionary CEO type, but he holds no particular "Tech Originalist" vision. He's happy to be part of the BigTech aristocracy, and probably feels really at home in the powers it affords him.

Anyone who believes this is "just about the children" is naive. His Chinese partners will use this to crack down on "Winnie the Pooh" cartoons and the like... before long, questioning any Big Pharma product will result in being flagged. Give it 5 years at max.


[flagged]


I don’t think anyone is arguing that making it harder to abuse children is a bad thing. It’s what is required to do so that is the bad thing. It’d be like if someone installed microphones all over every house to report on when you admit that you’re guilty of bullying. No one wants bullying, but I doubt you want a microphone recording everything and looking for certain trigger words. Unless you have an Alexa or something, then I guess you probably wouldn’t mind that example.


Alexa and iPhones with Siri enabled, and Android phones, are all continuously listening with their microphones for their wake word, unless you've specifically turned the feature off.

The difference is that the Alexa connects to your wifi, so if you wanted to, you could trivially tell if it's communicating when it shouldn't be. When I worked at Amazon, I was given the impression that the system that handles detecting the wake word was implemented in hardware, and the software system that does the real speech recognition doesn't "wake up" or gain access to the audio channel unless the wake word is detected by that hardware system -- and it's very obvious when you've woken it up (the colored ring lights up, it speaks, etc.)

Echo devices also sit in one room. If you're like most people you take your phone everywhere, which means that if it's spying on you, it could literally have a transcript of every word you spoke the entire day, as well as any people you've been around. To make matters worse, it would be difficult to tell if that was happening. Unless you're an uber-hacker who knows how to root an iPhone, or a radio geek who knows enough to monitor their device's cellular transmissions, good luck figuring out whether Siri is listening to and passing on audio that it shouldn't. The problem is that phones have so many apps and responsibilities -- given that they are essentially full computers -- these days that nonstop data transfer on a wifi network from my phone wouldn't be alarming: it might be backing up pictures to a cloud, or syncing the latest version of apps, etc.

I think the dedicated devices like Echo/Alexa are what you should buy if you're the most privacy-sensitive, since they have zero reason to be uploading to the Internet unless you're actively talking to them, and they have zero reason to be downloading unless they're receiving a software patch, which should be very rare. And because they're on your wifi (not cell) you can monitor their network traffic very easily.
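If you want to actually try that kind of monitoring, something like this toy scapy sketch is roughly the idea. DEVICE_MAC is a placeholder for the Echo's MAC address, and you'd need to run it as root on a machine that can actually see the device's traffic (e.g. the router itself or a mirrored port):

    # Toy sketch: print a one-line summary of every packet one device sends.
    from scapy.all import sniff

    DEVICE_MAC = "aa:bb:cc:dd:ee:ff"  # placeholder: the Echo's MAC address

    def report(pkt):
        print(len(pkt), "bytes:", pkt.summary())

    sniff(filter=f"ether src {DEVICE_MAC}", prn=report, store=False)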


It's not unreasonable to expect the speech recognition models to be run locally.

As to the wake word point, I agree. I don't think alexa/siri/etc are currently bad or disrespecting privacy. I actually have a smart home with a voice assistant.

However, my smart home is all local mesh network (zwave and zigbee) based through a FOSS bridge that doesn't talk to the internet. All lights are through smart switches, not bulbs. The end result is such that if the voice assistant service ever pisses me off, I can simply disconnect from it.

If you read my comments in this article, I think I come off as a tin foil hat wearing lunatic, to some degree at least.

But actually, I'm not a super privacy paranoid person. Telemetry, voice recognition datasets, etc... I think those are a reasonable price to pay for free stuff! I just want to have my thumb on the scale and a go-bag packed for when/if these services become "evil" ;)


You are correct. The model training is pretty compute-intensive and needs to be run remotely on powerful machines.

The final model is just a series of vectors that are typically pretty fast to run on the local machine.
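To make the "cheap to run locally" point concrete, here's a toy numpy sketch -- not any real wake-word model, just random placeholder weights -- showing that applying a trained model is essentially a handful of multiply-adds:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((2, 128))    # pretend these weights came from training
    features = rng.standard_normal(128)  # pretend audio features for one frame

    scores = W @ features                # local inference: one small matrix-vector product
    print("wake word" if scores[1] > scores[0] else "ignore")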


Is this a bot comment? Account is two hours old.

You want to not suffer through political memes? And jump to scanning private messages for dissent, by authoritarian governments being okay!?

What?!


No, I'm being serious. Technology like this could be very beneficial.

There's a good chance that if we continue to improve surveillance, law enforcement agencies and justice departments could begin to focus on rehabilitation and growing a kinder world.

Right now they are like firemen trying to put out fires after the building has already been ruined.

If it doesn't work we're doomed anyway, so what's the problem?


The problem is that what you're describing is literal fascism?


That's kind of a stretch, but I hope not.

Whether the laws go in a good or bad direction is going to be up to everyone.


You lost me with the last sentence.


Can you speak at all to the negative aspects of this position?


Once you give them the power, they'll never willingly hand it back.


*Helen Lovejoy voice* Won't somebody please think of the children‽


This can happen only because whenever any slippery-slope action was taken previously, there is an army of apologists and "explainers" who rush to "correct" your instinctive aversion to these changes. It's always the same - the initial comment is seemingly kind, yet with an underlying menace, and if you continue to express opposition, they change tack to being extremely aggressive and rude.

See the comment threads around this topic, and look back to other related events (notably the tech giants censoring people "for the betterment of society" in the past 12 months).

Boiling a frog may happen slowly, but the water continues to heat up even if we pretend it doesn't. Very disappointed with this action by Apple.


This is typical obedient behavior. Some abused spouses go to great lengths to come up with excuses for their partners. Since I don't own an iOS device, I don't really care about this specific instance.

But I don't want these people normalizing deep surveillance and fear that I have to get rid of my OSX devices when this trend continues.


Signal is still a centralised data silo where by default you trust its CA to verify your contacts' identity.


Yeah, but it's also useful for getting my friends on board. I think it's likely that I eventually start hosting matrix or some alternative, but my goal is to be practical here, yet still have a privacy protecting posture.


Your friends aren't going to want to install an app to have it connect to trangus_1985's server. Be happy just getting them on Signal.


My friends are significantly more technical (and paranoid) than the average user. We've already discussed it.

But... yeah. Yeah. Which is why I got as many people on Signal as I could. Baby steps. The goal here, right now, is reasonable privacy, not perfection.


Well said, as I like to point out sometimes, Signal is a privacy app not an anonymity app.


> Signal is still a centralised data silo where by default you trust its CA to verify your contacts' identity.

You can verify the security number out-of-band, and the process is straightforward enough that even nontechnical users can do it.

That's as much as can possibly be done, short of an app that literally prevents you from communicating with anyone without manually providing their security number.


I said, 'by default'. I know that it is possible to do a manual verification, but I have yet to have a chat with a person who would do that.

Also, Signal does not give any warning or indication about whether a chat partner's identity has been manually verified. Users are supposed to trust Signal and not ask difficult questions.


> I said, 'by default'. I know that it is possible to do a manual verification, but I have yet to have a chat with a person who would do that.

I'm not sure what else you'd expect. The alternative would be for Signal not to handle key exchange at all, and only to permit communication after the user manually provides a security key that was obtained out-of-band. That would be an absolutely disastrous user experience.

> Also, Signal does not give any warning or indication about whether a chat partner's identity has been manually verified

That's not true. When you verify a contact, it adds a checkmark next to their name with the word "verified" underneath it. If you use the QR code to verify, this happens automatically. Otherwise, if you've verified it manually (visual inspection) you can manually mark the contact as verified and it adds the checkmark.


> I'm not sure what else you'd expect.

Ahem. I'd expect something that most XMPP clients could do 10+ years ago with OTR: after establishing an encrypted session, the user is given a warning that the partner's identity is not verified, and is given options on how to perform this verification.

With a CA you can show a mild warning that the identity is only verified by Signal, and give options to dismiss the warning or perform out-of-band verification.

Not too disastrous, no?

> That's not true. When you verify a contact, it adds a checkmark next to their name with the word "verified"

It has zero effect if the user is given no indication that the word "verified" should be there in the first place.

What you say is not true. This [1] is what a new user sees in Signal - absolutely zero indication. To verify a contact, the user must go to "Conversation settings" and then "View safety number". I'm not surprised nobody has ever established a verified session with me.

[1]: https://www.dropbox.com/s/ab1bvazg4y895f6/screenshot_2021080...


I did this with all my friends who are on Signal, and explained the purpose.

And it does warn about the contact being unverified directly in the chat window, until you go and click "Verify". The problem is that people blindly do that without understanding what it's for.


Please show me this warning in this [1] freshly taken screenshot from Signal.

[1]: https://www.dropbox.com/s/ab1bvazg4y895f6/screenshot_2021080...


Hm, you're right. What I was thinking of is the safety number change notification. But if you start with a fresh new contact, it's unverified, but there's no notification to that effect - you have to know what to do to enable it.


yes, that is exactly what I am talking about.


Tap the user, then the name at the top, then “View Safety Number”. I’m not sure if there’s another warning less buried.


That's the point, see my other comment [1]. The user has to know about it to activate manual verification, and by default he just has to trust Signal's CA that his contact is, indeed, the one he is talking to.

[1]:https://news.ycombinator.com/item?id=28081152


I agree Signal’s default security is a whole lot better than iMessage, which trusts Apple for key exchange and makes it impossible to verify the parties, or even the number of parties, your messages are being encrypted for. Default security is super important for communication apps because peers are less likely to tweak settings and know about verification screens.

Aside: Signal data never touches iCloud Backup (https://support.signal.org/hc/en-us/articles/360007059752-Ba...). That’s an improvement over a lot of apps.


If my parents had had the feature to be alerted about porn on their kid’s device while I was a teen, they would have sent me to a conversion camp, and that is not an exaggeration.

Apple thinks the appropriate time for queer kids to find themselves is after they turn 18.


Maybe Apple will decrease child abuse cases but increase cases of child suicide...


If you're just downloading and looking at porn, no problem. It only becomes an issue if you're sharing porn via Messages or storing it in iCloud. And to be fair, I don't think they're alerted to the nature of the pornography, so you might be able to avoid being outed even if you're sharing porn (or having porn shared with you).

Edit: I'm wrong in one respect: if the kid under 13 chooses to send a message with an explicit image despite being warned via notification, the image will be saved to a parental controls section. This won't happen for children >= 13.


Sure, but the parents can then unlock and go through the phone and out the poor kid.


Organic Maps on F-Droid is a really clean OSM-based map.


I'm impressed, it actually has smooth scrolling, unlike OsmAnd, which is very slow at loading tiles.

Critical points I'd make about Organic Maps: I'd want a lower inertia setting so it scrolls faster, and a different color palette... they are using muddy tones of green and brown.


Does it let you select from multiple routes? I've been using Pocketmaps, but it only gives you a single option for routing, which can lead to issues in certain contexts


And I also invite everyone to contribute to OSM through StreetComplete, it's quite intuitive and it adds something to look for when taking a walk.


Nearly the same as MagicEarth...I use it all the time.


>While I still use google maps

You can still use Google Maps without an account and "incognito". I wish they'd allow app store usage without an account though- similar to how any Linux package manager works.


That's not really the issue. The issue is that for google maps to work properly, it requires that the Play services are installed. Play services are a massive semi-monolithic blob that requires tight integration with Google's backend, and deep, system-level permissions to operate correctly.

I'm not worried about my search history.


People need to remember that most of Android got moved into Play Services. It was the only way to keep a system relatively up to date when the OEMs won't update the OS itself.

Yeah, it's a dependency... as much as the Google Maps APK needing to run on Android itself.


Last I checked (about a year ago), the Google Maps app did work with microG (a FOSS reimplementation of Google Play Services).


I use maps on my phone on a regular basis - I would vastly prefer to have something less featured but stable versus hacking the crap out of my phone. But that's good to know.



OsmAnd is an absolutely brilliant map app for android. Very fully featured but also pleasant to use (though I dislike some of the defaults).


One workaround is to use the mobile web app, which is surprisingly pretty decent for a web app. And because it's a web app, you can even disable things like sharing your location if you want to


I've just tested Google Maps on an Android device without Google Play Services or microG. The app works fine, although every time you cold-start it, the app shows an alert (which can be dismissed but not disabled) and a notification (which can be specifically disabled) that Google Play Services is unavailable. On an Android device with microG, Google Maps works without showing the alert or the notification about Google Play Services.


Ahhh, gotcha. Did not realize that. Makes sense.


There is a Google Maps website that works on mobile. No need for the app.


In addition to F-Droid, you can get Aurora Store (which is on F-Droid) which lets you use an anonymous login to get at the Play Store. I use it for a couple free software apps that aren't on F-Droid for some reason.


I also recommend Aurora Store as a complete replacement for the Play store. The one thing is that I've never tried using apps that I paid for on it but it works very well for any free apps. There is an option to use a Google account with Aurora but I've only ever used the anonymous account.

The only slight downside is that I haven't figured out how to auto-update apps, so your apps will get out of date without you being notified and you have to do it manually. This problem might literally be solved by a simple setting that I haven't bothered to look for, IDK.

On the plus side, it includes all the official Play Store apps, alongside some that aren't allowed on the Play Store.

For example, NewPipe, the superior replacement YouTube app that isn't allowed on the Play Store due to it subverting advertisements and allowing a few features that are useful for downloading certain things.


What are the apps?


Both are for tracking anime. One for MAL[0], one for Kitsu[1].

[0] https://github.com/Drutol/MALClient

[1] https://github.com/hummingbird-me/kitsu-mobile


It's not the device that's less secure or private in this context, it's the services. There's no reason you couldn't just continue using your NAS for photo backup and Signal for encrypted communications, completely unaffected by this.

Apple seems to not have interest in users devices, which makes sense -- they're not liable for them. They _do_ seem interested in protecting the data that they house, which makes sense, because they're liable for it and have a responsibility to remove/report CSAM that they're hosting.


So they should do that scanning server side at their boundary instead of pushing software to run on phones, with the potential to extend its scope later if there's no pushback.


They don't want to do it server side because they don't want to see your unencrypted data!


Apple already holds the key to iCloud Photos content, and regularly responds to search warrants.


Well good thing they're looking at it anyway client side...


That's not the issue. The issue is that they have shipped spyware to my device. That's a massive breach of trust.

I suspect that this time next year, I'll still be on iOS, despite my posturing. I'm certainly going to address iCloud in the next few weeks - specifically, by no longer using it. However, I would be surprised if I'm still on iOS a year or two after that.

What Apple has done here isn't horrible in the absolute sense. Instead, it's a massive betrayal of trust with minimal immediate intrusiveness; and yet, a giant klaxon that their platform dominance in terms of privacy is coming to an end.


> with icloud photos csam, it is also a horrifying precedent

That precedent was set many years ago.

>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect’s Gmail account.

Microsoft’s “PhotoDNA” technology is all about making it so that these specific types of illegal images can be automatically identified by computer programs, not people.

PhotoDNA converts an image into a common black-and-white format and resizes the image to a uniform size, Microsoft explained last year while announcing its increased efforts at collaborating with Google to combat online child abuse.

https://techcrunch.com/2014/08/06/why-the-gmail-scan-that-le...
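For a rough feel of what "convert to a common black-and-white format and a uniform size, then hash" means, here's a toy average-hash sketch -- emphatically not PhotoDNA itself, whose actual algorithm isn't public. Assumes Pillow is installed; "photo.jpg" is just a placeholder filename:

    from PIL import Image

    def average_hash(path, size=8):
        # Grayscale, shrink to a uniform size, then one bit per pixel:
        # brighter than the average or not.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = "".join("1" if p > avg else "0" for p in pixels)
        return int(bits, 2)

    # Similar images produce similar hashes; compare with Hamming distance.
    h1, h2 = average_hash("photo.jpg"), average_hash("photo_resized.jpg")
    print(bin(h1 ^ h2).count("1"), "differing bits")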


Cloud versus local device is a massive distinction IMO. Or maybe I'm a dinosaur ;)


No, you're not a dinosaur. It is entirely reasonable for a hosting provider not to want certain content on their servers. And it is also quite reasonable to want to automate the process of scanning for it.

My physical device, on the other hand, is (supposed to be) mine and mine alone.


The data is only scanned when you attempt to upload it to the cloud in either case.

Apple scans the data on device before it is sent. Google scans it on its own servers after it is sent.


I think no matter what devices you use, you've nailed down the most important part of things which is using apps and services that are flexible, and can be easily used on another platform.


I knew that eventually it'd probably matter what devices I used, I just didn't expect it to be so soon.

But yeah, I could reasonably use an iphone without impact for the foreseeable future with some small changes.


What I am reminded of is all of the now seemingly prophetic writing and storytelling in a lot of cyberpunk-dystopian anime about the future of the corporate state, and how megacorps rule EVERYTHING.

What I always thought was interesting was that the Police Security Services in Singapore were called "CISCO" -- and you used to see these SWAT-APV-type vans driving around and armed men with CISCO emblazoned on their gear/equipment/vehicles...

I always was reminded of Cyberpunk Anime around that.


Interesting! But actually this is not the only thing with an "interesting" name in Singapore - well, at least as long as you speak Czech. ;-)

You see, mass transit in Singapore is handled by the Singapore Mass Rapid Transit company, abbreviated SMRT. There is also a SMRT Corporation (https://en.wikipedia.org/wiki/SMRT_Corporation), SMRT buses, and the SMRT abbreviation is heavily used on trains, stations, basically everywhere.

Well, in Czech "smrt" literally means death. So let's say for Czech speakers riding public transport in Singapore can be a bit unnerving - you stand at a station platform and then a train with "DEATH" written on it in big letters pulls into the station. ;-)


Wow. Thanks for that.

Imagine if you were a Czech child who was old enough to read but not old enough to realize the spelling belongs to another language... that would be odd.


I’ve been thinking about switching my main email to Fastmail from Apple, for portability in case the anti-power-user trend crosses my personal pain threshold.

But if your worry is governments reading your mail, is an email company any safer? I’m sure FM doesn’t want to scan your mail for the NSA or its Australian proxy, but do they have a choice? And if they were compelled, would they not be prevented from telling you?

“We respect your privacy” is exactly what Apple has been saying.


I think self-hosting email has too many downsides (spam filtering, for example) to be worth it; I’m more concerned about losing my messages (easily solved with POP or mbox exports while still using a cloud account) than government data sharing. Email isn’t end-to-end encrypted anyway, and it’s “industry standard” to store it in clear text at each end.


> if your worry is governments reading your mail

complicated. As long as they require a reasonable warrant (ha!), I'm fine. Email is an inherently insecure protocol and ecosystem, anyways.

I haven't used email for communication that I consider to be private for a while - I've moved most, if not all, casual conversation to signal, imessage. Soon, I hope to add something like matrix or mattermost into the mix.

My goal was never to be perfect. My goal is to be able to easily remove myself from an invasive spyware ecosystem, and bring my friends along, with minimal impact.


I have been self-hosting email for 7 years successfully. But it required a physical server in a reputable datacenter, and setting up Dovecot, Exim, SpamAssassin, reverse DNS, SPF, and DKIM. It took a bit of time to gain IP reputation, but it has worked flawlessly since. Occasionally some legit mail is flagged as spam or vice versa, but it is not worse than any other mail provider. So it can be done! But my first attempts to do that on a VPS failed, as the IP blocks of VPS providers are often hopelessly blacklisted by major email providers.
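If anyone wants to sanity-check the DNS side of a setup like this, here's a quick sketch using dnspython (example.com and the "mail" DKIM selector are placeholders for your own domain and selector):

    import dns.resolver  # pip install dnspython

    domain, selector = "example.com", "mail"  # placeholders

    # SPF lives in the domain's TXT records; DKIM under <selector>._domainkey.
    for name in (domain, f"{selector}._domainkey.{domain}"):
        try:
            for rdata in dns.resolver.resolve(name, "TXT"):
                print(name, "->", rdata.to_text())
        except Exception as e:
            print(name, "lookup failed:", e)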


There's always ProtonMail, which is supposedly E2E, so they shouldn't be able to scan your mail.


Unfortunately, self-hosting is a pretty clear alternative; not much else seems to be.


Have you found any decent Google Maps alternatives? I'd love to find something, but nothing comes close as far as I've found. Directions that take traffic into account are the big thing that I feel like nobody (other than Apple, MS, etc.) will be able to replicate.

Have you tried using the website? I've had some luck with that on postmarketOS, and it means you don't need to install Play services to use it.


Organic maps is pretty good: https://github.com/organicmaps/organicmaps


I've been using HERE Maps for many years: https://wego.here.com/


I really like Here WeGo, and it allows you to download maps for specific countries too to have available offline.


OsmAnd


> While I still use google maps

I use Citymapper simply because I find it better (for the city-based journeys that are my usual call for a map app) - but it not being a Google ~data collection device~ service is no disadvantage.

At least, depending on why you dislike having everything locked up with Google or whoever, I suppose. Personally it's more having everything in one place that troubles me; I'm reasonably happy with spreading things about. I like self-hosting things too, it just needs a value-add I suppose - that's not a reason in itself for me.


> While I still use google maps, I've been trialing out OSM alternatives for a minute.

Is there a way to set up Android to handle shared locations without Google Maps?

Every time someone shares a location with me (in Telegram) it displays as a tiny picture, and once I click it, it says I have to install Google Maps (I use an alternative for actual maps and don't have Google Maps installed). So I end up zooming the picture and then finding the location on the map manually.


> it is also a horrifying precedent that the device I put my life into is scanning my photos and reporting on bad behavior

Apple's new customers are the various autocratic regimes that populate the earth. Apple's customers used to be human beings. There exist many profiteers in Mountain View, Cupertino, Menlo Park, and Atherton in the service of making our monopolies more capable of subjugating humanity.


I also use Fastmail, but I'm fully aware that Australia, where it's hosted, is part of the Five Eyes spy network, and also one of the countries acting extremely oppressively towards its citizens when it comes to covid restrictions.

So I don't actually expect my mail to be private. But at least it's not Google.


Fastmail is hosted in New Jersey. If it was hosted in Aus the user experience would be pretty bad for most of its users.


Signal is next on the list since it's a centralized solution - you can expect they will come for it next.


I'm just trying to buy time until open source and secure alternatives have addressed these problems. Apple doing this has moved my timeframes up by a few years (unexpectedly).


> I hope they recant

This is very much like driving a car through a crowd of protestors. They will slowly, inexorably, eventually push through.


> Fortunately, my email is on a paid provider

Paid doesn't mean more secure; it's a popular mistake.


I'm really loving fastmail. Thanks for the heads up!


What is your home NAS setup like?


FreeNAS, with a self-signed, tightly-scoped CA installed on all of my devices. 1TB x 4 in a small case shoved under the stairs.

Tbh, I would vastly prefer to use a cloud-based service with local encryption - I'm not super paranoid, just overly principled.


If you haven’t already heard of it, cryptomator might be just what you’re after.


What do you use to sync phone photos to your NAS? I like Google Photos' smartness, but I also want my photos on my Synology NAS.


I personally am a fan of Mylio for that. https://mylio.com/


Syncthing


Take a look at https://internxt.com. Been using them for a couple weeks and am incredibly impressed. Great team, great product, just great everything. It was exactly what I was looking for


How does Apple protect against hash collisions?


It doesn't trigger on a single match; I guess that's the first line of defence.
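In toy form, the idea is just that nothing gets reported until the match count crosses a threshold. This is a sketch, not Apple's actual protocol (which, per their own description, also layers cryptography on top so a single match reveals nothing by itself); the hash values and threshold below are placeholders:

    KNOWN_BAD_HASHES = {0xDEADBEEF, 0xCAFEBABE}  # placeholder hash values
    THRESHOLD = 30                               # placeholder threshold

    def should_flag(photo_hashes):
        matches = sum(1 for h in photo_hashes if h in KNOWN_BAD_HASHES)
        return matches >= THRESHOLD

    print(should_flag([0xDEADBEEF]))       # False: one collision isn't enough
    print(should_flag([0xDEADBEEF] * 30))  # True: only repeated matches trigger review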


[flagged]


Oh hey wait you're the freenode guy. While we're on the topic of hostile actions by a platform provider...


He's uh, a prince, or something. (Probably got the crown out of a cereal box.)

But from the looks of his numbers ( https://upload.wikimedia.org/wikipedia/commons/8/83/IRC_top_... ) he's doing a real bang-up job!


Yeah, sorry, I mixed them up in my head. I'm currently running Lineage on a PH-1, not Postmarket. I would not consider what I have set up to be "production ready", but I'm going to spend some time this weekend looking into what modern hardware can run Lineage or other open mobile OSes


Lineage OS is 100% production ready; it's been my daily driver for almost 2 years and I've been Google- and Apple-free since.


Sorry, wasn't ripping on Lineage. It's more the entire ecosystem. I mentioned it in prior comments, but I think that in a few years we'll have a practical, open source third party in the mobile phone OS wars - one that has reasonable app coverage.

I don't care if I use google or apple services, btw, I just want the data flow to be on my terms.


Why is that weird?


> I'm not sure that I support what they are implementing for child accounts (as a queer kid, I was terrified of my parents finding out)

If you don't want your parents to look at your phone, you shouldn't be using a phone owned by your parent's account. The new feature doesn't change this calculus.

As a queer kid, would you enjoy being blackmailed by someone who tricked you into not telling your parents?

