> with icloud photos csam, it is also a horrifying precedent
I'm not so bugged by this. Uploading data to iCloud has always been a trade of convenience at the expense of privacy. Adding a client-side filter isn't great, but it's not categorically unprecedented--Apple executes search warrants against iCloud data--and can be turned off by turning off iCloud back-ups.
The scanning of children's iMessages, on the other hand, is a subversion of trust. Apple spent the last decade telling everyone their phones were secure. Creating this side channel opens up all kinds of problems. Having trouble as a controlling spouse? No problem--designate your partner as a child. Concerned your not-a-tech-whiz kid isn't adhering to your house's sexual mores? Solved. Bonus points if your kid's phone outs them as LGBT. To say nothing of most sexual abuse of minors happening at the hands of someone they trust. Will their phone, when they attempt to share evidence, tattle on them to their abuser?
Also, can't wait for Dads' photos of their kids landing them on a national kiddie porn watch list.
That's not how it works, unless you control your partner's Apple ID and you lie about their DOB when you create their account.
I created my kids' Apple IDs when they were minors and enrolled them in Family Sharing. They are now both over 18 and I cannot just designate them as minors. Apple automatically removed my ability to control any aspect of their phones when they turned 18.
> Dads' photos of their kids landing them on a national kiddie porn watch list.
Indeed, false positives are much more worrying. The idea that my phone is spying on my pictures... like, what the hell.
I recently had a friend stay with me after being abused by their partner. The partner had paid for their phone and account and was using that control to spy on them. I wish that cyber security was taught in a more practical way, because it has real-world consequences. After just two comments here, it's already clear as day how this change could be used to perpetuate abuse. I'm not sure what the right solution is, but I wish there was a tech non-profit that secured victims of abuse in their communications, in a way accessible to non-tech people.
Most people on this platform understand cyber security and OpSec relatively well. The problem is that the people you're concerned about aren't on a platform like this; they need a good learning system, and ways of making the subject interesting enough to actually retain and understand.
How do you prevent photos of your kids from ending up in such a database? Perhaps you mailed grandma a photo of a nude two-year-old during bath time during a Covid lockdown — you know, normal parenting stuff. Grandma posted it on Facebook (accidentally, naively, doesn't matter) or someone gained access to it, and it ended up on a seedy image board that caters to that niche. A year later it's part of the big black box database of hashes and, ping, a flag lights up next to your name on Apple's dashboard and local law enforcement is notified.
I don't know how most people feel about this, but even a false positive would seem hazardous. Does that put you on some permanent watch list in the lowest tier? How can you even know? And besides, it's all automated.
We could of course massively shift society towards a no-photo/video policy for our kids (perhaps only kept on a non-internet connected camera and hard drive), and tell grandma to just deal with it (come back after the lockdown granny, if you survive). Some people do.
And don't think that normal family photos won't get classified as CEI. What is titillating to one person is another's harmless family photo.
This is implying all the concerns about possible future uses of this technology are unreasonable slippery slope concerns, but this is roughly our fourth or fifth time on this slope, and we've slipped down it every previous time, so it's not unreasonable to be concerned.
Previous times down this slope:
* UK internet filters for child porn -> opt out filters for regular porn (ISPs now have a list of porn viewers) + mandatory filters for copyright infringement
* Google Drive filters for illegal content -> Google Drive filters for copyrighted content
* iCloud data is totally protected so it's ok to require an apple account -> iCloud in China run by government controlled data centers without encryption
* Protection against malware is important so Windows defender is mandatory unless you have a third party program -> Windows Defender deletes DeCSS
* Need to protect users against malware, so mobile devices are set up as walled gardens -> Providers use these walled gardens to prevent business models that are bad for them
The first slippery slope for this was when people made tools to do deep packet inspection and find copyrighted content during the Napster era.
That was the first sin of the internet era.
Discussing slippery slopes does nothing.
Edit: It is frustrating to see where we are going. However - conversations on HN tend to focus on the false positives, and not too much on the actual villains who are doing unspeakable things.
Perhaps people need to hear stories from case workers or people actually dealing with the other side of the coin to better make a call on where the line should be drawn.
I don't think anyone here is trying to detract from the horrors and the crimes.
My problem is that these lists have already been used for retaliation against valid criticism. Scope creep is real, and in case of this particular list, adding an item is an explicit, global accusation of the creator and/or distributor for being a child molester.
My statement was to clarify incorrect statements of the issue. Someone was worried that incorrect DoBs entered by jilted lovers would get people flagged.
I just outlined what the actual process is. I feel that discussing the actual problem leads to better solutions and discussions.
Since this topic attracts strong viewpoints, I was as brief as possible to reduce any potential target area, and even left a line supporting the slippery slope argument.
If this was not conveyed, please let me know.
Matter of fact, your response pointing out the false positive issues is a win in my book! It's better than what the parent discussion was about.
But what I am truly perplexed by is when you talk about "first they came..." and "we're not biting".
Who is we, and why wouldn't YOU agree with a position supporting a slippery slope argument?
You seem to disagree with the actions being telegraphed by Apple.
This isn't a question about condoning child abuse. It's a question of doing probabilistic detection of someone possessing "objectionable content". Not sharing, not storing - possessing. This system, once deployed, will be used for other purposes. Just look at the history of every other technology supposedly built to combat CP. They all have expanded in scope.
Trying to frame the question along the usual slippery slope arguments implicitly sets up anyone criticising the mechanism as a supporter of fundamentally objectionable content.
Sure, and I have no objection to what you are saying.
This thread however was where I was making a separate point that helps this discussion by removing confusion or assumptions on how Apple’s proposal works.
Sorry about the really long delay in answering; the week got the better of me.
Your original post posited a reasonable question, but I felt the details were somewhat muddled. The reason I reacted and answered was that I have seen this style of questioning elsewhere before. The way you finished off was actually a little alarming: it'd be really easy to drop in with a followup that in turn would look like the other person was trying to defend the indefensible.
With my original reply I attempted to defuse that potential. The issue is incendiary enough without people willingly misunderstanding each other.
If you already control their apple account, then you already have access to this information. Your threat model can’t be “the user is already pwned” because then everything is vulnerable, always
The real problem here is that the user can't un-pwn the device, because it's the corporation that has root instead of the user.
To do a factory reset or otherwise get it back into a state where the spyware the abuser installed is not present, the manufacturer requires the authorization of the abuser. If you can't root your own device then you can't e.g. spoof the spyware and have it report what you want it to report instead of what you're actually doing.
I wish I could downvote this a million times. If someone has to seize physical control of your phone to see sexts, that's one thing. This informs the abuser whenever the sext is sent/received. This feature will lead to violent beatings of victims who share a residence with their abuser. Consider the scenario of Sally sexting Jim while Tom sits in another room of the same home waiting for the text to set him off. In other circumstances, Sally would be able to delete her texts; now violent Tom will know immediately. Apple has just removed the protection of deleting texts from victims of same-residence abusers.
Apple should be ashamed. I see this as Apple paying the tax of doing business in many of the world's most lucrative markets. Apple has developed this feature to gain access to markets that require this level of surveillance of their citizens.
> Consider the scenario of sally sexting jim while tom sits in another room
Consider Sally sending a picture of a bee that Apple’s algo determines with 100% confidence is a breast while Tom sits in another room. One could iterate ad infinitum.
Parents policing the phones of 16 and 17 year olds? That's some horrifying over-parenting, Britney Spears conservatorship-level madness. Those kids have no hope in the real world.
Well, as a parent, I can tell you that some 16/17 year olds are responsible and worthy of the trust that comes with full independence. Others have more social/mental maturing to do yet and need some extra guidance. That's just how it goes.
When you write that out, the idea of getting Apple IDs for your kids doesn't sound that great.
Register your kids with a corporate behemoth! Why not!? Get them hooked on Apple right from childhood, get their entire life in iCloud, and see if they'll ever break out of the walled garden.
This is an argument for me to not start using iCloud keychain. If Apple flags my account, I don't want to lose access to literally all my other accounts.
The “child” would be alerted and given a chance to not send the objectionable content prior to alerting anyone else. Did you read how it works?
Also, a father would only land in a national registry if their child's photos are known CSAM. Simply taking a photo of your child wouldn't trigger it.
> That's not how it works, unless you control your partner's Apple ID and you lie about their DOB when you create their account.
The most annoying thing about Apple Family Sharing is that in order to create accounts for people, you must specify that they are under 13 (source: https://www.apple.com/lae/family-sharing). Otherwise, the only other option is for your "family member" to link their own account to the Apple Family that is under your purview, which understandably many people might hesitate to do because of privacy concerns (as opposed to logging into the child account on a Windows computer exclusively to listen to Apple Music, which doesn't tie the entire machine to that Apple ID as long as it's not a Mac).
And so in my case, I have zero actual family members in my Apple Family (they're more interested in my Netflix family account). It raises the question: why does Apple insist on people being family members in order to share Apple Music? We have five slots to share, and they get our money either way. They also don't let you remove family members - which may be the original intent behind insisting on such a ridiculous thing - as if they're trying to take the moral high ground and guilt-trip us for disowning a family member, when in fact it simply benefits them when a falling-out occurs between non-family members: there's a good chance that the person in question will stop using the service due to privacy concerns, and that's less traffic for Apple.
It's actually kind of humorous to think that I still have my ex-ex-ex-girlfriend in my Apple Family account, and according to Apple she's 11 now (in reality, she's in her 30s). I can't remove her until another 7 years pass (and even then it’s questionable if they’ll allow it, because they might insist that I can’t divorce my “children”). And honestly, at this point I wouldn’t even remove her if I could, she has a newborn baby and a partner now, and I’m happy to provide that account, and I still have two unused slots to give away. I’ve never been the type of person who has a lot of friends, I have a few friends, and one girlfriend at a time. But the thing is she’s never been a music person and I assume that she isn’t even using it - and so even if I made a new best friend or two and reached out to her to let her know that I wanted to add them, Apple currently wouldn’t let me remove her to make room for those theoretical friends. While I'm a big fan of Apple hardware, it really bothers me that a group of sleazy people sat around a table trying to figure out how to maximize income and minimize network traffic, and this is what they came up with.
Did you ever stop to consider whether licensing has anything to do with this? You also lied about someone's age when creating their Apple account and continue to provide access to someone outside your family. Call them and ask to have her removed, and they'll likely ban you for violating the ToS.
Curious, how would licensing affect this? Would the assumption be that everyone resides under the same roof? Because that's not a requirement for being in a family.
No, generally to license content you pay a fee per screen, and in this case also a fee per viewer. For families this is calculated from the number of people in the account. They don't charge you per person; they charge a flat rate based on the maximum number of people you can add to your account. So by doing it this way, they're not circumventing the per-device fee they're charged, which is the fee you're effectively asking them to cover for free.
> They don't charge you per person; they charge a flat rate based on the maximum number of people you can add to your account. So by doing it this way, they're not circumventing the per-device fee they're charged, which is the fee you're effectively asking them to cover for free.
I'm confused, how am I trying to get them to provide anything for free? I pay for the service, and that service has a limited number of account slots, and the people using those slots have their own devices. What am I missing?
Are you under the assumption that child accounts don't occupy a slot, and are free-riding? If so, that's not the case. Child accounts occupy a slot all the same, the only difference is that by providing child accounts to my adult friends, they aren't required to link their existing Apple accounts to the service that's under my control.
Moving the scanning to the client side is clearly an attempt to move towards scanning content which is about to be posted on encrypted services, otherwise they could do it on the server-side, which is "not categorically unprecedented".
Even open source operating systems have closed source components, and unless you're in charge of the entire distribution chain, you can't be sure the source used to compile it was the same as what was shared with you. On top of that, most devices have proprietary systems inside their hardware that the OS can't control.
So it would be better to say "this has been a risk by using modern technology".
"Things are currently bad, therefore give up" isn't satisfactory.
Even if everything is imperfect, some things are more imperfect than others. If each component that makes you vulnerable has a given percent chance of being used against you in practice, you're better off with one than six, even if you're better off with none than one.
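To put a toy number on that (the per-component risk here is completely made up, purely for illustration):

```python
# Chance that at least one of n vulnerable components is actually used
# against you, assuming each carries an independent (made-up) risk p.
p = 0.02                        # assumed per-component chance of real-world exploitation
for n in (0, 1, 6):
    risk = 1 - (1 - p) ** n     # P(at least one of n components is exploited)
    print(n, round(risk, 3))    # 0 -> 0.0, 1 -> 0.02, 6 -> ~0.114
```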
Just to add to this: trying to compile an open source Android distro is a tricky proposition that requires trusting several binaries and a huge source tree.
Moreover, having a personal route to digital autonomy is nearly worthless. To protect democracy and freedom, practically all users need to be able to compute securely.
There's a classic Ken Thompson talk about Trust where he shows how a compiler could essentially propagate a bug forward even after the source code for that compiler was cleaned up.
It's a concept that relies on a great deal of magic to function properly. The binary-only compiler we have must insert code to propagate the infection. To do so it must know it is compiling a compiler, and understand precisely how to affect code generation, without breaking anything. That... feels like an undecidable problem to me.
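For concreteness, here's roughly what that magic would have to look like, as a hypothetical Python sketch rather than anything Thompson actually wrote (it hand-waves the quine-style step needed to make the infection truly self-replicating):

```python
# Hypothetical sketch of the "trusting trust" idea. The compromised
# compiler *binary* pattern-matches on the source it is given:
#   1. the login program   -> quietly insert a backdoor
#   2. a compiler's source -> re-insert this whole trick, so even a cleaned
#      compiler source still produces an infected binary.
# All names and patterns below are invented for illustration.

BACKDOOR = '    if user == "root" and password == "letmein": return True\n'

def evil_compile(source: str) -> str:
    """Stand-in for the already-infected compiler binary."""
    if "def check_password(user, password):" in source:
        # Target 1: the login program gets the backdoor spliced in.
        source = source.replace(
            "def check_password(user, password):\n",
            "def check_password(user, password):\n" + BACKDOOR,
        )
    if "def compile(source):" in source:
        # Target 2: a compiler. The real attack would splice in a
        # self-replicating copy of this logic here (a quine-like trick);
        # if the compiler's source drifts away from the recognized pattern,
        # the infection silently stops propagating.
        pass
    return source  # pretend the result is then lowered to machine code
```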
Sure, it's undecidable; you can reduce a decider for the word problem to that pretty easily. But in practice you probably only have to recognize a few common compilers to do a good enough job.
But the viral property that makes this attack so insidious is, in the end, not sufficiently viral: the compiler will eventually drift away (as a consequence of it being developed further, of course) from the patterns it recognizes. The attack is not as dangerous as it sounds.
I'm not really arguing that too much, since it's a pretty elaborate and finicky thing ultimately, although I would wager that you could probably find patterns to recognize and modify clang/gcc/other relatively stable compilers for a long time to come (assuming no active mitigations against it).
On most hardware, the closed source components are optional.
For example the only closed source component that I use is the NVIDIA driver, but I could use Nouveau, with lower performance.
The real problem is caused by the hardware backdoors that cannot be controlled by the operating systems and which prevent the full ownership of the devices, e.g. the System Management Mode of Intel/AMD, the Intel ME and the AMD PSP.
On most hardware, the closed source components come pre-installed. Just walk into any store and find a phone with a verifiable open source distro.
The real problem is that most people don't have the knowledge to compile a distro and load it onto a phone. Most people don't even know that's a possibility or that the distro on their phone isn't open source.
Photosync can automatically move photos from iDevices into consolidated NAS, SFTP, cloud or iXpand USB storage, https://photosync-app.com
GoodReader has optional app-level file encryption with a password that is not stored in the iOS keychain. In theory, those encrypted files should be opaque to device backups or local filesystem scanning, unless iOS or malware harvests the key from runtime memory, https://goodreader.com/
If The Verge's article is accurate about how and when the CSAM scanning occurs, then I don't have a problem with that; it sounds like they're moving the scanning from server to client side. The concerns about false positives seem valid to me, but I'm not sure the chance of one occurring has increased over the existing iCloud scanning. Scope creep toward other content scanning is definitely a possibility though, so I hope people keep an eye on that.
I'm not a parent, but the other child protection features seem like they could definitely be abused by some parents to exert control over and pry into their kids' private lives. It's a shame that systems have to be designed to prevent abuse by bad people, but at Apple's scale it seems like they should have better answers for the concerns being raised.
The CSAM scanning is still troubling because it implies your own device is running software against your own self-interest. If Apple wanted to get out of legal trouble by not hosting illegal content but still make sure iOS is working in the best legal interest of the phone's user, they'd prevent the upload of the tagged pictures and notify that they refuse to host these particular files. Right now, it seems like the phone will actively be snitching on its owner. I somehow don't have the same problem with them running the scan on their servers since it's machines they own but having the owner's own property work against them sets a bad precedent.
It would be easy to extend this to scan for 'wrongthink'.
Next logical steps would be to scan for: confidential government documents, piracy, sensitive items, porn in some countries, LGBT content in countries where it's illegal, etc... (and not just on icloud backed up files, everything)
This could come either via Apple selling this as a product or forced by governments...
I'd guess more like 6 months, but I agree that it will be trivial for the CCP to make them fall in line by threatening to kick them out of the market. Although... maybe they already have this ability in China.
> It's a shame that systems have to be designed to prevent abuse by bad people but at Apple's scale it seems like they should have better answers for the concerns being raised.
Well the obvious response is that these systems don't have to be designed. Child abuse is a convenient red herring to expand surveillance capabilities. Anyone opposing the capability is branded a child molester. This is the oldest trick in the book.
I mean the capability to spy on your kid can easily be used to abuse them. Apple could very well end up making children's lives worse.
Luca isn't an Apple app is it? And I thought the system Apple developed with Google had much better privacy guarantees? Although I don't think it was ever actually deployed.
> it sounds like they're moving the scanning from server to client side. The concerns about false positives seem valid to me, but I'm not sure the chance of one occurring has increased over the existing iCloud scanning.
That's the irony in this: This move arguably improves privacy by removing the requirement that images be decrypted on the server to run a check against the NCMEC database. While iCloud Photo Library is of course not E2E, in theory images should no longer have to be decrypted anywhere other than on the client under normal circumstances.
And yet – by moving the check to the client, something that was once a clear distinction has been blurred. I entirely understand (and share) the discomfort around what is essentially a surveillance technology now running on hardware I own rather than on a server I connect to, even if it's the exact same software doing the exact same thing.
Objectively, I see the advantage to Apple's client-side approach. Subjectively, I'm not so sure.
If Apple has the ability to decrypt my photos on their servers, why do I care whether or not they actually do so today? Either way, the government could hand them a FISA warrant for them all tomorrow.
That’s an interesting way to look at it. Funny how this news can be interpreted as both signaling Apple’s interest in E2EE iCloud Photos and weakening their overall privacy stance.
My issue with my own statement is that we have yet to see plans for E2EE Photos alongside this; if Apple had laid this out as their intention on apple.com/child-safety/ it would have been clear-cut.
Yeah, but add whatever document/meme/etc. that represents the group you hate, and boom, you have a way to identify operatives of a political party.
e.g. Anyone with "Feel the Bern" marketing material -> arrest them under suspicion of CSAM. Search their device and flag them as dissidents.
The reality is that actual child abusers know who they are. They realize that society is after them. They are already paranoid, secretive, people. They are not going to be uploading pictures to the cloud of their child abuse.
And let's not forget the minor detail that this is now public knowledge. It's like telling your teenage son you're going to be searching his closet for marijuana in the future.
This is way too much work to gain hardly anything. It's just as easy to just log into another device with their iCloud password and literally read everything they send. Less work, more result.
First, machine learning to detect potentially inappropriate pictures for children to view. This seems to require parental controls to be on. Optionally it can send a message to the parent when a child purposefully views the image. The image itself is not shared with Apple so this is notification to parents only.
The second part is a list of hashes. So the Photos app will hash images and compare to the list in the database. If it matches then presumably they do something about that. The database is only a list of KNOWN child abuse images circulating.
Now, not to say I like the second part, but the first one seems fine. The second is sketchy in terms of what happens if there's a hash collision. But either way it seems easy enough to clear that one up.
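Roughly, the hash-list part as described sounds like the sketch below (hypothetical; the names and threshold behavior are invented, and the real system reportedly uses a perceptual hash like NeuralHash rather than SHA-256, which is exactly why collisions are a live concern):

```python
# Hypothetical sketch of the described check: hash each photo queued for
# iCloud upload and compare against an opaque list of known-CSAM hashes.
# SHA-256 is used only to keep the sketch self-contained; a perceptual
# hash would also match near-duplicates, unlike a cryptographic hash.
import hashlib
from typing import Iterable

# Opaque hash list shipped to the device (placeholder values).
KNOWN_HASHES: set[str] = {"a3f1...", "9bc0..."}

def matches_known_list(photo_bytes: bytes) -> bool:
    digest = hashlib.sha256(photo_bytes).hexdigest()
    return digest in KNOWN_HASHES

def flagged_count(photos: Iterable[bytes]) -> int:
    # In the announced design, nothing is reported until a threshold
    # number of matches accumulates, not on the first hit.
    return sum(matches_known_list(p) for p in photos)
```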
No father is going to be added to some list for their children’s photos. Stop with that hyperbole.
This is Apple installing code on their users' devices with the express intent to harm their customers. That's it! This is inarguable! If this system works as intended, Apple is knowingly selling devices that will harm their customers. We can have the argument as to whether the harm is justified, whether the users deserved it. Sure, this only impacts child molesters. That makes it ok?
"But it only impacts iCloud Photos". Valid! So why not run the scanner in iCloud and not on MY PHONE that I paid OVER A THOUSAND DOLLARS for? Because of end-to-end encryption. Apple wants to have their cake and eat it too. They can say they have E2EE, but also give users no way to opt-out of code, running on 100% of the "end" devices in that "end-to-end encryption" system, which subverts the E2EE. A beautiful little system they've created. "E2EE" means different things on Apple devices, for sure!
And you're ignoring (or didn't read) the central, valid point of the EFF article: Maybe you can justify this in the US. Most countries are far, far worse than the US when it comes to privacy and human rights. The technology exists. The policy has been drafted and enacted; Apple is now alright with subverting E2EE. We start with hashes of images of child exploitation. What's next? Tank man in China? Photos of naked adult women, in conservative parts of the world? A meme criticizing your country's leader? I want to believe that Apple will, AT LEAST, stop at child exploitation, but Apple has already destroyed the faith I held in them, only yesterday, in their fight for privacy as a right.
This isn't an issue you can hold a middleground position on. Encryption doesn't only kinda-sorta work in a half-ass implementation; it doesn't work at all.
> So the Photos app will hash images and compare to the list in the database.
I am wondering what hashes are in this database now and what will be added later. Or combine this with a Pegasus exploit: put a few bad images on a journalist's or politician's iPhone, clean up the tracks, and wait for Apple and the FBI to destroy the person.
I kept the Lineage phone in my back pocket, confident that it would be a good 4-5 years before they shipped something that violated their claims. I figured by then the alternatives would be stable and widespread.
> with the express intent to harm their customers.
This of course gets into 'what even is harm?' since that's a very subjective way of classifying something, especially when you try to do it on behalf of others.
For CSAM you could probably assume that "everyone this code takes action against would consider doing so harmful", but _consequences in general are harmful_ and thus you could make this same argument about anything that tries to prevent crime or catch criminals instead of simply waiting for people to turn themselves in. You harm a burglar when you call for emergency services to apprehend them.
> This isn't an issue you can hold a middleground position on. Encryption doesn't only kinda-sorta work in a half-ass implementation; it doesn't work at all.
This is the exact trap that the U.S. has become entrenched in - thinking that you can't disagree with one thing someone says or does and agree with other things they say or do. You can support Apple deciding to combat CSAM. You can not support Apple for trying to do this client-side instead of server-side. You can also support Apple for taking steps towards bringing E2EE to iCloud Photos. You can also not support them bowing to the CCP and giving up Chinese citizens' iCloud data encryption keys to the CCP. This is a middle ground - and just because you financially support Apple by buying an iPhone or in-app purchases doesn't mean you suddenly agree with everything they do. This isn't a new phenomenon - before the internet, we just didn't have the capacity to know, in an instant, the bad parts of the people or companies we interfaced with.
You do harm a burglar when you call for emergency services; but the burglar doesn't pay for your security system. And more accurately: an innocent man pays for his neighbor's security system, which has a "one in a trillion" chance of accusing the innocent man of breaking in, totally randomly and without any evidence. Of course, the chances are slim, and he would never be charged with breaking in if it did happen, but would you still take that deal?
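To put rough numbers on that deal (every figure below is an assumption for illustration, not Apple's data):

```python
# Back-of-the-envelope numbers for the quoted "one in a trillion" figure.
accounts = 1_000_000_000           # assumed order of magnitude of iCloud accounts
p_false_flag_per_account = 1e-12   # the claimed per-account, per-year false-flag rate
print(accounts * p_false_flag_per_account)   # ~0.001 wrongly flagged accounts per year

# The picture changes fast if the per-*image* error rate is higher and the
# match threshold is low: e.g. an assumed 1e-9 per image across trillions of photos.
photos_per_year = 4_000_000_000_000
print(photos_per_year * 1e-9)                # ~4,000 false matches per year before thresholding
```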
I've seen Americans quote the "right against unreasonable search and seizure" a bit during this discussion. Valid, though to be clear, the Constitution doesn't apply to private company products. But more interestingly: what about the right against self-incrimination? That's what Apple is pushing here: that by owning an iPhone, you may incriminate yourself, and it may end up happening whether you're actually guilty or not.
Regarding your second paragraph on the legality: Apple doesn't incriminate you even if they send the image off and an image reviewer deems something CSAM. If Apple does file a police report on this evidence or otherwise gives evidence to the police, the police will still have to prove that (A) the images do indeed depict sexually suggestive content involving a minor, and (B) you did not take an affirmative defense under 18 USC 2252A (d) [0], aka they would have to prove that you had 3 or more actual illegal images and didn't take reasonable steps to destroy the images or immediately report them to law enforcement and give such law enforcement access to said photos.
The biggest issue with this is, of course, that Apple's accusation is most certainly going to be enough evidence to get a search warrant, meaning a search and seizure of all of your hard drives they can find.
Based off of your A and B there, I think we’re about to see a new form of swatting. How many people regularly go through all of their photos? Now if someone pisses someone else off and has physical access to their phone they just need to add 3 pictures to the device with older timestamps and just wait for the inevitable results.
The OP gets away with that argument because many people who have such images are, hopefully, also minors themselves.
However, this is NOT the use case being applied for here. Holding those images, which are not part of known CP, will not be an issue; bringing it up is a red herring. The issue most people have fruitfully started discussing is the scanning of content on your own phone.
Secondly - the correlation between holding known CP and child molestation IS, sadly, high.
I think Apple has always installed software on their users' devices with explicit intent to harm their customers.
This instance just makes it a little bit more obvious what the harm is, but not enough to harm Apple's bottom line.
Eventually Apple will do something that will be obvious to everyone but by then it will probably be too late for most people to leave the walled garden (prison).
There is no E2E encryption of iCloud photos or backups, and they never claimed to have that (except for Keychain) - the FBI stepped in and prevented them from doing so years ago.
"We are so sorry that we raided your house and blew the whole in your 2 years old baby, but the hash from one of the pictures from your 38,000 photos library, matched our CP entry. Upon further inspection, an honest mistake of an operator was discovered, where our operator instead of uploading real CP, mistakenly uploaded the picture of a clear blue sky".
PS. On a personal note, Apple is done for me. Stick a fork in it. I was ready to upgrade after September, especially since I heard Touch ID is coming back and I love my iPhone 8. But sure as hell, this sad news means i8 is my last Apple device.
> the Photos app will hash images and compare to the list in the database. If it matches then presumably they do something about that. The database is only a list of KNOWN child abuse images circulating.
This seems fine as it's (a) being done on iCloud-uploaded photos and (b) replacing a server-side function with a client-side one. If Apple were doing this to locally-stored photos on iCloud-disconnected devices, it would be nuts. Once the tool is built, expanding the database to include any number of other hashes is a much shorter leap than compelling Apple to build the tool.
> it seems easy enough to clear that one up
Would it be? One would be starting from the point of a documented suspicion of possession of child pornography.
> Would it be? One would be starting from the point of a documented suspicion of possession of child pornography.
I've actually witnessed someone go through this after someone else got caught with these types of images and attempted to bring others down with him. It's not easy. It took him over a year of his life, calling constantly, asking when the charges would be dropped. They even image your devices on the spot yet still take them and stuff them in an evidence locker until everything is cleared up. You're essentially an outcast to society while this is pending, as most people assume that if you have police interest related to child pornography you must be guilty.
Okay. Keep going with the scare tactics. Clearly you missed the real point: you're being incredibly hyperbolic.
I’d be happier if Apple wasn’t doing this at all. I’m not defending them necessarily but I am calling bullshit on your scare tactics. It’s not necessary.