Wow, we made Apple blink. I did not see that coming. People like me who sold their iPhone and cancelled all services are probably a drop in the bucket, but it looks like the general public and the news picked up on this enough to get the ball rolling and make them step back. So now the question is: in which 15.X update do they just sneak it in?
Part of me wonders if the calculus includes either (a) the open letter signed by (effectively) ~10k verified developers [1], (b) the complaint letters from groups like the EFF [2, 3], or (c) expert and qualified input from their (presumably huge) marketing department...
G) Turns out they can't know if they're scanning for other forms of content, since they delegate computing the hash lists of banned content to government organizations like NCMEC anyway.
They are probably just waiting for several countries to spin up their databases of gay porn and blasphemous / regime critical memes. Shouldn't be too hard to find two of those countries working on a shared "child abuse" database now that the feature has been widely publicized.
There's so much censorship now worldwide, and it's increasing. Russia doesn't allow promotion of the gay agenda. Parts of the Islamic world view criticism of Islam as a killing offense.
China won't tolerate the flag of Taiwan, Falun Gong, or criticism of the Party. Afghanistan no longer allows music.
A decade ago, all those things were on the Internet worldwide.
Many of the closed WeChat accounts display messages saying that they had "violated" Internet regulations, without giving further details.
The account names have also been deleted and just read "unnamed".
"After receiving relevant complaints, all content has been blocked and the account has been suspended," the notice said.
The crackdown is the latest example of what some call growing intolerance toward the LGBT community.
Last year, Shanghai Pride week, modelled on Pride events in the West, was cancelled without explanation after 11 years of it going ahead.
In 2019, the Oscar-winning Freddie Mercury biopic Bohemian Rhapsody was released in Chinese cinemas, but references to the Queen singer's sexuality and AIDS diagnosis were censored.
In 2018, Weibo said all posts related to homosexuality would be taken down, although it backtracked after massive outrage.
I have a theory about this that isn't politically correct, but here goes. This has nothing to do with "conservative values" or homophobia or whatever you call sexual repression in places like Arkansas and Afghanistan. It's not based on religion or culture. It's just the CCP running an actuarial table.
The CCP is a blunt-force instrument. It realized some time ago that the one-child policy had left it holding the bag: a rapidly aging population to take care of, without enough young people to power the economy in 10-20 years. Not only that, you couldn't just repeal the policy and expect a baby boom. They took a look at Japan and realized they were about to hit a demographically driven, deflationary wall. So the party planners moved from repealing the one-child policy to actually offering cash bonuses for second children. This volte-face happened within a few short years. But it didn't work as well as expected. Their sudden magnanimous gesture didn't bump their five-year plan's crop of new Han, for a few reasons: a shortage of women (an unintended consequence of the one-child policy); more women in the workforce who don't want to have children; video game culture, which keeps young men at home instead of out impregnating girls (which is why they're now limiting screen time); gender fluidity and queerness, which suppress baby production; and of course western individualism, the great bugbear of "harmony," which encourages people to wait for love and financial stability before having children. At the end of five years of encouraging people to have babies, they don't have enough babies. So now they have to get harder on the edge cases.
It's probably safe to assume that gays and lesbians represent at least 10% of the Chinese population, as they do in most countries. So that's what, 120 million people? Let's say half of them are young enough to have at least one child. So we're talking about an extra 30-60 million children if you can somehow get a replacement rate.
That's what I believe all these recent moves by the CCP have been about. And banning test study programs? Same thing. No point having more babies if you're also getting overproduction of elites. They need construction workers and factory workers. And someone was told, we ain't gonna become Germany by bringing them in from Tajikistan, so take this data and figure out how to squeeze as much Han production as possible out of it in the next 5 years.
The CCP and other countries are cracking down on rainbow activism because rainbow activism is, in some sense, American imperialism and the enforcement and promotion of American interests and culture on their soil. And the line between "off-target" boners and the political ideology is hard to draw, so they'd rather overreach than underreach. History shows you can be really godawful to gay people and not have the society suffer much from it in terms of stability and the like. Those societies are unnecessarily nasty to people, but they did live.
I 100% agree with your assessment. I've thought the same thing for a while. The downfall of the West is already a foregone conclusion purely from demographics. CCP's hardline on reversing the inverted pyramid trend will catapult it as indisputable leader within the next 20-30 years.
I don't think their attempts at upping the birthrate are going to succeed. The CCP is fighting against the larger trend of decreasing fertility rates in all post-industrialized nations, while trying to promote stability... they started later and they got to the wrong side of the inverted pyramid faster than the west. Now, if they do have success with a baby boom it may come with unintended consequences. The party leadership is aging, worse, ossifying. It's going to get less and less easy to put down youth protests the more Xi starts to look like Fidel Castro. Maybe their technological cage can keep citizens too afraid to talk to each other, but they're too well connected to the rest of the world not to know what's going on. And you can't cut the cord. That's why the laws against video games, and the censorship of LGBTQ media, have to be pushed as a nationalistic marketing campaign rather than forced as Maoist diktats. (Although the idea of Mao banning video games is kinda funny). China will never be a superpower unless they have a liberal French-style democratic revolution. Then all bets are off. A democratic China might take over the world. Other than that, the worst the west has to fear from that quarter is losing our own souls in the process of doing business with a genocidal government. It hasn't stopped us from buying toasters on Amazon yet.
In the West we have already tried to force LGBTQ people to accept the behavior of their assigned sex. Result? Increased suicides, depression and drug use.
Obviously, after decades/centuries of failure we finally decided to recognize the reality of the facts: you cannot force people into a heteronormative life, nor to make children.
The French Revolution actually marked the end of France ever being considered a superpower again. The entire western world mocks their pathetic legal system.
WeChat seems like a pretty big part of life in China and with the know your customer requirements I imagine those accounts are links to individuals. It must be a nightmare to have one canceled.
You left out the "in public schools" part. You can still look at all the CRT stuff you want on the internet in the US. Stuff like White Fragility are top sellers in the US.
This is not even remotely the same as the Taliban banning music or China cracking down on LGBTQ stuff.
Yes, critical race theory is being taught in public schools[1]
> Christopher Rufo reported[2] that 30 public school districts in 15 states are teaching a book, Not My Idea, that tells readers that “whiteness” leads white people to make deals with the devil for “stolen land, stolen riches, and special favors.” White people get to “mess endlessly with the lives of your friends, neighbors, loved ones, and all fellow humans of color for the purpose of profit,” the book adds.
> There are plenty of other examples that prove racial essentialism and collective guilt are being taught to young students. In Cupertino, California, an elementary school required[3] third graders to rank themselves according to the “power and privilege” associated with their ethnicities. Schools in Buffalo, New York, taught students[4] that “all white people” perpetuate “systemic racism” and had kindergarteners watch a video of dead black children, warning them about “racist police and state-sanctioned violence.” And in Arizona, the state’s education department sent out[5] an “equity toolkit” to schools that claimed infants as young as 3 months old can start to show signs of racism and “remain strongly biased in favor of whiteness” by age 5.
30 school districts (out of how many?) use a book Rufo doesn't like.
Any reason why anyone, aside from Rufo, should care?
Half of the nation doesn't teach sex ed because skyperson doesn't like when people bang without signing an exclusive banging agreement in public first. Half the nation teaches kids that the Confederacy was formed to protect "states' rights", carefully omitting that the right in question was to own black people as livestock[1]. But hey, 30 districts use a white-people-bad, and that's the real problem.
And what does the last part of your comment (about AZ) have to do with anything? Telling educators that 5-year-olds can absorb shitty beliefs is now a cOnTrOvErSiAl tHeOrY?
I don't even know where to start here, let's set this one aside.
So, let's focus on this question first: assuming the book you mentioned is bad, what percentage of public schools use it, and in which way?
[1] TX is my go-to example. They were teaching that slavery was a "side issue" in the Civil War when I was living there in 2010-2017.
I'm not even going to bother dissecting the bullshit they peddle these days. Feel free to dig in.
Texas oversees 1,247 school districts. Tell me more about the problem of CRT in schools though.
Christopher Rufo defines critical race theory as everything he doesn't like, so of course he'd see phantom CRTs lurking behind every shadow and around every corner.
From here[1]:
> Christopher Rufo, a prominent opponent of critical race theory, in March acknowledged intentionally using the term to describe a range of race-related topics and conjure a negative association.
> “We have successfully frozen their brand — ‘critical race theory’ — into the public conversation and are steadily driving up negative perceptions,” wrote Rufo, a senior fellow at the Manhattan Institute, a conservative think tank. “We will eventually turn it toxic, as we put all of the various cultural insanities under that brand category. The goal is to have the public read something crazy in the newspaper and immediately think ‘critical race theory.’”
> According to recent reports, public and private elementary schools across the United States have used, as part of racial equity education, an illustrated children’s book called “Not My Idea,” in which a devil with a pointy tail offers the young reader a “contract binding you to whiteness.” The contract promises “stolen land,” “stolen riches,” and “special favors”; in exchange, whiteness gets “your soul” and power over “the lives of your friends, neighbors, loved ones, and all fellow humans of COLOR.”
CRT is something taught in law school, not in elementary schools. You have an indoctrination problem in the US but it definitely does not come from CRT.
I looked at some of the illustrations of "Not My Idea"... it's a bit weird and cringeworthy sometimes.
Still, compare that to the indoctrination in some of the schoolbooks:
> Christopher Rufo defines critical race theory as everything he doesn't like
Not quite. Rufo knows what CRT is and, importantly, what shares the same problems CRT does. That is to say, different branches of the same general ideology that operate on different topics. The CRT model is kinda like ideological lego (you can just plug in a topic). The demon is legion and has many names. Rufo's stuff is mostly about calling woke thought in general CRT, since the name stuck. I haven't yet seen Rufo label things that aren't wokeness or influenced by wokeness CRT.
It's the same kind of thing as when we call Catholicism, Eastern Orthodoxy and all the Protestant sects "Christianity" even though they are not the same, or how we still call Zen "Buddhism" even though it lacks the supernatural component of its older cousins. None of those things is the same, but they share a family resemblance, they have a character to them that makes them unique among the sea of ideologies, religions and philosophies. Rufo's nailing "CRT" to that family resemblance, because the family is there, the demon is real, and it needs a name to be talked about usefully, not a legion of them.
It seems he is.
I see a much bigger push to prevent any type of critical discussion about issues of slavery etc.
I'm also always surprised how little US citizens know about their own history. Ask somebody about "Birth of a Nation" and see what they can tell you about it.
Ask them if they know about the "Pro American Rally" in 1939 and see what they say.
These things are not taught in school and it's a shame.
I know what I'm talking about: I'm German, and I hated our history education in school because we discussed the 3rd Reich nearly every year. Yet, reflecting on it and seeing the same stupid ideas take hold again today, it was not nearly enough. We should have been taught more. The same holds for German colonial history (which was not covered, though that is slowly changing).
If you don't know your past, you are condemned to repeat it.
Critical race theory is a mythic story - the sort Yuval Harari describes as a religion. It sits alongside replacement theory, humanism, environmentalism, crypto anarchism, liberalism, Rand's objectivism, and so on.
These stories are lenses through which you can describe the world. They’re like software for the mind. And like software, they aren’t objectively right or objectively wrong. Each of these stories (dubiously) explains and obsesses over some aspects of the world, and ignores other aspects completely.
There is, though, a mythic anti-”Critical Race Theory” story created by the American Right, and the thing within it called “Critical Race Theory” is itself a mythic story - a particularly incoherent one, because a number of unrelated and opposing things from the real world are jammed into it, sharing only that they concern race, that they are disliked by the American Right, and that they are not actually Critical Race Theory.
> These stories are lenses through which you can describe the world. They’re like software for the mind. And like software, they aren’t objectively right or objectively wrong.
Actual critical race theory (like critical legal studies, from which it stems) holds that there are objective features of social structures, with tangible, material effects.
Like many hypothesized social phenomena, the complexity of the systems involved may make falsification difficult on a practical level, but the claims it makes are fact claims which are objectively true or false.
There are the International Centre for Missing & Exploited Children (ICMEC), Child Exploitation and Online Protection (CEOP), Parents and Abducted Children Together (PACT), Missing Children Europe (MCE), and more just in Europe.
They even cite researchers as contributing to the decision. Maybe flaws with perceptual hashes? It took a mere day to discover a hash collision after reverse engineering the system. Granted, the colliding image looked like a grey mess, but I wonder how far a malicious party might get after a year.
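For intuition on why collisions are findable at all, here's a toy average-hash, the textbook perceptual hash (not NeuralHash; assumes Pillow and numpy are installed). The whole point of a perceptual hash is that visually similar images land on the same bits, and that tolerance is exactly the property an attacker gets to lean on:

```python
# Toy average-hash sketch (NOT NeuralHash), just to show how perceptual
# hashes deliberately tolerate small changes to an image.
from PIL import Image
import numpy as np

def average_hash(path, hash_size=8):
    # Shrink to hash_size x hash_size, grayscale, then threshold each
    # pixel against the mean brightness to get a 64-bit fingerprint.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    mean = pixels.mean()
    return "".join("1" if p > mean else "0" for p in pixels.flatten())

def hamming(a, b):
    # Number of differing bits between two hashes.
    return sum(x != y for x, y in zip(a, b))

# Two lightly edited copies of the same photo usually differ by only a
# few bits; "matches" is just a threshold on that distance.
# h1 = average_hash("photo.jpg"); h2 = average_hash("photo_recompressed.jpg")
# print(hamming(h1, h2))
```

NeuralHash is a learned embedding rather than pixel averaging, but it aims at the same tolerance, which is why collisions were worth hunting for once the model was extracted.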
They also could have had a legitimate claim to non-technical people (at the FBI/NSA/blah) that "you sure you want this? this system will be abused...someone could write a hash collision". The Gov't agency scoffed because they didn't believe it.
So Apple called them out on it by letting the market decide. They now have that evidence.
I don't really understand the argument that was thrown out by many people: "Attackers will frame people by planting hash collisions on their phones!"
So...
(a) a bad actor is going to target someone, and has the resources to generate enough collisions that (b) look like CP but aren't CP, yet are close enough (c) to pass human review and cause an investigation, so (d) they need hash collisions that look like CP without being real CP? Or ????
If a bad actor wants to frame someone, it's easy to do this today - hack their system or home network, open it to the internet, place the photos, call the FBI and report an anonymous tip, with the URL where it is hosted and open to the internet. Don't need hash collisions.
Hacking someone's iPhone (How do you get the photos on there without forensic logs that they were added?) or iCloud so that you can place hash collisions that look like crap and fail to pass the review doesn't make sense and leaves too much of a trail. Oh? And someone won't notice thousands of photos just added to their phone?
A bigger threat would be deepfake CP. When that becomes a reality, it will be a mess, because an attacker could theoretically generate an unlimited amount of it, and it will be extremely difficult to tell if it is authentic. Those hashes wouldn't be in the CSAM database, but if the attacker put them out on a server to be taken down, they would get added eventually and then show up in a scan elsewhere.
But I'd be pretty horrified (and definitely call my attorney, Apple, and local FBI field office) if thousands of CSAM images just showed up in my iPhone photo stream.
Edit: Downvotes are fine, and I completely understand other arguments against this, but this one has just not made sense to me...
This is a very simple attack to pull off; email someone with the colliding image embedded in the MIME document. Set height and width equal to zero or one pixel. Even if the user never actually sees it or marks it as spam, it's still on their device and in their email. For bonus points, spoof the "From" as from Apple. At the very least, they're getting SWATted.
Then why have a client side scanner in the first place?
You're exposing the hashes to the world, you're not able to E2E encrypt anything if you need to scan server side (which you probably do, since trusting the client no matter what is generally bad in potentially adversarial situations), and you get all this negative press, loss of reputation, and potentially pressure from governments around the world to use this for other ends. Cui bono?
The second scan applies only to those images which are flagged as positive; those are then accessible to Apple.
It is applied to detect adversarial hashes. The rest of the images stay encrypted. So, indeed, on-device scanning is the only way to enable at least partial E2EE with CSAM detection.
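Roughly, as I read the published description, the flow looks like the sketch below (hypothetical helper names and stand-in stubs, glossing over the actual private set intersection crypto); the point is where each check runs:

```python
# Rough sketch of the described two-stage pipeline (stand-in stubs,
# not Apple's API): on-device match voucher, then server-side re-check.
THRESHOLD = 30  # roughly the match count Apple said is needed before review

def neural_hash(image_bytes):
    # Stand-in for the on-device perceptual hash.
    return hash(image_bytes) & 0xFFFF

def device_side(image_bytes, uploading_to_icloud, on_device_db):
    if not uploading_to_icloud:
        return None  # images not bound for iCloud Photos produce nothing
    # Real system: a blinded/private set intersection, so the device itself
    # can't tell whether it matched. Here we just record the claim.
    return {"matched": neural_hash(image_bytes) in on_device_db,
            "derivative": image_bytes[:16]}

def server_side(vouchers, private_db_check):
    matches = [v for v in vouchers if v and v["matched"]]
    if len(matches) < THRESHOLD:
        return "no action"  # below threshold, the server can open nothing
    # Second, server-only perceptual hash over the flagged images to weed
    # out adversarial collisions, then human review of visual derivatives.
    survivors = [m for m in matches if private_db_check(m["derivative"])]
    return "human review" if survivors else "no action"
```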
Yes, this was a PR failure by Apple. They rushed the announcement because of the leaks, and they thought that people would understand the system, which they did not. There is too much misunderstanding. The scanning, for example, is built so deep into the iCloud pipeline that one does not simply change the policy to scan the whole phone.
On a technical level I think you're correct. As a holistic approach to the problem, I still disagree. This is too cute for its own good. The PR misunderstanding is a symptom of that.
>The second scan applies only for those images which are flagged as positive, which are then accessible by Apple.
In the end, Apple's software is scanning all of the images, so why is it any more privacy-respecting to do it this way? I guess reasonable people can disagree on that; personally I wasn't fully aware of the cloud-side scanning either, and I don't think the public was either. This is similar to Snowden's revelations: if you were paying attention you probably already knew a lot of that, but the incident made everyone aware of it in a very blunt way.
>The rest of the images stays encrypted
I think this is unclear, Apple can still decrypt those other images, how else could you view them in a browser?
This goes back to what Stratechery said about capability vs policy.
You might want to reconsider holding your breath for that. No, they aren't doing E2E encryption; if/when they do, they will announce it. Until they announce something, everything else is pure speculation.
Surely the sender is also likely to get reported if they email the image with the perceptual hash collision. Lots of companies run these scans. E.g. if you send the image via gmail it will most likely be scanned by Google.
Assuming the system works as Apple's described it, then this attack doesn't work. The image needs to be added to the Photos library and the Photos library needs to be syncing with iCloud. Images in email aren't scanned, and an image that's zero or one pixel can't even be clicked on to be added to the library.
And, to at least respond to two obvious counter-arguments I've looked into:
"But it's just one line of code to change it to scan images (that aren't uploaded to iCloud) (that are anywhere on the device)!" No, it isn't; if you read the technical documentation and the more technically-oriented interviews Apple's given on this, there isn't just one hash that needs to be matches, there are two hashes, one on the device and one on the server. (I think Apple did a very poor job of communicating this to a general audience; it certainly wouldn't have alleviated all the concerns, but if it was understand as "client-server scanning" rather than "client-only scanning" it might have at least changed the tenor of the conversation.) That doesn't mean they can't do a combination client-server scan on every single image or even file on the device, but it makes it both more difficult to do and more difficult to hide.
"But what if the system doesn't work as Apple's described it?" Well, if you don't trust Apple to some degree, all bets are off. They already do ML-based image analysis of all photos in your photo library regardless of iCloud status and they've literally demoed this on stage during iPhone keynotes, so if Apple was going to secretly give government access to on-device scanning, a different technology -- one that works (questionably well) on all images, not just already-learned ones -- is literally already there. The only way you "know" what Apple is doing with your data on or off device comes from a combination of what they tell you and what third-party security researchers discover.
There is a huge difference between doing it secretly and doing it announced.
If they get caught doing this secretly in China, that would be a big blow. But if they are doing it openly and it is known that the government provides the hashes, they can wash their hands of it.
I think there's an unspoken assumption that photos are only the first part of this scanning exercise. IMO, Apple will likely end up scanning all content you choose to put on their servers.
Tangentially, I think keeping their servers clear of illegal material is actually Apple's main motivation. This, in turn, supports claims made by nay-sayers that Apple could scan for other types of images/content in more repressive countries (but not necessarily report the people who did it). However, this assumption also contradicts arguments that Apple will start scanning for pictures of (e.g.) drugs, or weapons. Such images are not inherently illegal and therefore of no interest to Apple.
> Tangentially, I think keeping their servers clear of illegal material is actually Apple's main motivation
My guess about this whole debacle is that - with pressure from the government to scan their cloud storage - this is the alternative scenario to avoid giving up (or being forced to give up) the "encryption" guarantees of their cloud. I'm not sure what technical process they have in place to "only decrypt with valid law enforcement requests" or allow account rescue, but it seems likely that not just any employee can view whatever they want, before or after this system.
Saying that I can maybe see how the pressures line up here doesn't mean I'm saying this is a good solution, though. Clearly, technically implementing this opens a can of worms that can't really be closed again and makes a lot of other scenarios "closer".
Also, evidently, people are a lot more comfortable with the idea of them actively scanning stuff people store in the cloud than transmitting the information in a side channel so they don't even need to handle decrypted data without a hit.
> My guess about this whole debacle is that - with pressure from the government to scan their cloud storage - that this is the alternate scenario to avoid giving up (or being forced to) the "encryption" guarantees of their cloud.
> I'm not sure what technical process they have in place to "only decrypt with valid law enforcement requests" or allow account rescue, but it seems likely that not just any employee can view whatever they want, before or after this system.
They have master keys that can be used to decrypt almost everything you upload. They can be compelled to decrypt and turn over information on anyone. Another (unsourced) comment in this thread indicated they do so 30,000 times per year. The new encryption scheme will effectively stop this for photos, and no doubt other files in the future.
Apple's side will begin using shared key encryption, which will require ALL ~31 keys to decrypt the offending images.
The decryption keys are only generated on-device, and only from a hash that results from a CSAM match. The other photos, simply won't have the decryption keys generated, so they don't even exist.
As an interesting side note, this means that a person who surpasses the CSAM threshold will still only reveal the images that actually match the CSAM database. Every other image, including those that have CSAM unknown to authorities remain encrypted. This is hardly a big win for the big scary government. They now have far less ability to search for evidence of any other crimes. You could upload video of your bank robbery to iCloud, and as long as your personal device remains secure, nobody will know.
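The "~31 keys" part is, as I read Apple's summary, a threshold secret-sharing construction: each flagged image's safety voucher carries one share of a per-account secret, and nothing can be opened until roughly 30 shares exist. A toy Shamir-style sketch of that property (toy parameters, nothing like Apple's actual construction):

```python
# Toy Shamir-style threshold sharing (illustrative only, not Apple's
# construction): any THRESHOLD shares reconstruct the secret; fewer
# shares reveal essentially nothing about it.
import random

PRIME = 2**61 - 1   # a Mersenne prime, fine for a toy example
THRESHOLD = 30      # shares needed to reconstruct

def make_shares(secret, n_shares, threshold=THRESHOLD):
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# key = random.randrange(PRIME)
# shares = make_shares(key, 50)           # e.g. one share per safety voucher
# assert reconstruct(shares[:30]) == key  # 30 shares: key recovered
# assert reconstruct(shares[:29]) != key  # 29 shares: (almost surely) not
```

Which matches the parent's point: below the threshold the shares are useless to the server, and above it only the matching images' derivatives become readable.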
> IMO, Apple will likely end up scanning all content you choose to put on their servers.
Even if that was a goal (and I would argue they have a hard stance against it), this system as built is not usable for that.
While they can scan locally, every step of recording, thresholds, and subsequent automated/manual auditing is built to require content to be uploaded to iCloud Photos.
I agree that the current photo scanning system won't work for other types of files. Further, I don't think they really even need to scan text files except to see if it's actually a filetype that should be scanned. And they can already easily scan music and such, but have shown no interest in it from a legal standpoint. Video seems like a prime candidate for the next generation of this tech. And I very much think they will be able to do that before the decade is over.
iCloud is trivial to hack. Recall the massive leak of celebrity nudes a few years back. As long as iCloud is forced on users (you cannot apply security updates to an Apple device without an Apple ID and an attached iCloud account), these attacks will be simple to do en masse, with very little risk to the perpetrators. Terrible security model, with a long history of spectacular and avoidable breaches which go totally unsolved.
Let’s ignore the fact that you just shifted the goalposts from “emailing a 1 pixel image SWATs them” to an extremely wide scope for a minute -
Even assuming it is true that iCloud is trivially “hackable” - and as I understand it, it was never clear how those leaks happened - how does uploading to iCloud help when the image specifically needs to be uploaded from the user’s phone along with the scanning metadata?
In fact, isn’t Apple’s proposed implementation here the _only_ cloud service that protects against your proposed attack? While other clouds scan stored data and can be triggered by your attack, Apple’s requires the upload to come specifically from a registered phone on their account; data stored on-cloud is never scanned.
You can choose to use an iPhone. Most people will never be targeted. Hell, you could use a phone with no security at all and odds are you’ll be safe. But if you have enemies or are a high-profile target, Apple has made you easy to destroy. iPhones themselves can be hacked with no-click attacks, and we know this because Macron’s phone was one of many listed in the recent fiasco. iCloud can be hacked, because legions of celebrities had their nudes published. If the device is not secure, even for the president of France, and iCloud is not secure for legions of celebrities, then people can and will be hit with this attack and have their lives permanently destroyed. Worse than having your sex photos leaked, worse than having all your calls and communications intercepted, you’ll become known as a pedo. You can’t undo that. To be sure, that probably won’t happen to most people, but political figures, those with enemies working in infosec, etc…. I want a secure phone, and the iPhone no longer meets that need for me and many others. You can do you.
The described attack is completely unnecessary. Just send colliding images to people on WhatsApp. They go right into Photos, and then to iCloud if that’s enabled. There’s no reason most people would assume that this is what was happening if somebody sent them some slightly weird looking images.
The response to this is “yeah, but then a human will review it and nothing will happen to the victim of the attack, because it’s just some slightly blurry ordinary images”. But that ignores two entirely likely harms that could result from it.
1) Law enforcement use it as the basis of getting a search warrant, but conveniently leave the bit about the alerts being false alarms off the warrant application.
2) The list of people who have had CSAM alerts is inevitably leaked to the public, and the victim has to spend the rest of their lives explaining to people like employers why Apple flagged them as possessing child sexual abuse material.
At the end of the day, all the gaslighting about “no potential for inadvertent harm” is bs, because it’s my device, so get lost. Go run your anti-privacy software somewhere else, imo.
- Law enforcement doesn't get _anything_ unless it triggers a large number of images that match the perceptual hash
- It also needs to match a -private- perceptual hash, that isn't distributed to devices, and so we don't have a reliable way of generating collisions for or even knowing that collisions are generated
I mean this whole thing is bad enough on its own without having to artificially manufacture extremely specific scenarios and extrapolating from there to invent hysteric conclusions.
Well if you think a tech giant that’s partnered in a surveillance program with the NSA isn’t going to share anything they feel like with the government, then go ahead as use that as the basis of coming up with your own opinions.
But you’re right though, the possibility of the list being leaked and ruining countless innocent lives is the much more likely of the two scenarios I described.
> if you think a tech giant that’s partnered in a surveillance program with the NSA isn’t going to share anything they feel like with the government
In that case, nothing is stopping them from scanning everything already uploaded anyway, and nothing is stopping them pushing code to your device to scan it without telling you about it. Nothing is stopping them or the government from making these "lists" anyway.
I'm not saying you (or anyone) should trust Apple, but if you already don't - then this changes literally nothing.
By my own personal assessment it changes the way I view them, because this is the most overtly anti-user, anti-privacy thing they’ve ever done. Perhaps I should have never had any level of trust for them, but you live and learn I guess.
Feel free to respond to my point about the alert catalog inevitably being leaked and ruining lives. Or you could just have a go at gaslighting me a little more if you prefer.
- image _also_ collides with private hash <- completely unclear how this happens
- Only the colliding images are looked at by apple and are determined to be innocent
- user goes on a list (This is an imagined scenario)
- User is reported to law enforcement even though the images are innocent (This is an imagined scenario)
- Law enforcement uses this hypothetical report to file a warrant (This is an imagined scenario)
- Law enforcement uses the hypothetical warrant to extract images that are completely innocent, and somehow build a case around this
- The "List", which is entirely a hypothetical of yours, "leaks" (This is an imagined scenario)
Which also requires:
- Apple does not counter the meaning of "the list"
- Apple is not sued for vast quantities of money
I expect the first argument is that none of this matters as long as "the idea" is out there, the reputational damage is already done. Except if that's true, then none of this is necessary at all, just make the accusation.
So, sure, continue to thread the needle between "They are automatically sending all information to the government, so promises are meaningless" and "This new process, on top of them potentially sending all information to the government, somehow makes it worse".
I mean, this is all a million times more difficult and less likely than just, like, sending them CP in the first place. Or uploading it to their Gmail or any other cloud they use. Or just send a report that they have it to the police without actually doing anything.
That’s absolutely not an accurate representation of what I’ve said, or how this system works.
All it requires is somebody to send somebody else a colliding image.
This will send an event to Apple. There is nothing imaginary about that, it is exactly how the system works.
Now that Apple has this information, the only thing left is for it to be leaked or compromised in some way.
This is much simpler than the scenario you’ve described, because those require the attacker to first commit the crime of possessing CP. It’s also possible to do without tipping off the victim in any way.
Apple, in case you didn’t know, is a company that had already been the source of a couple of the most notorious data breaches ever (and has somehow managed to so far avoid getting “sued for vast quantities of money” for them).
What you’re trying to do here is quintessential gaslighting.
Apple's proposed system does not scan all the pictures on the device, only images about to be uploaded to iCloud. A picture in an email would not be uploaded (whether visible or not).
Even if uploaded to iCloud (such as pictures sent via WhatsApp by default), and above the threshold, they would still be scanned by a second algorithm and subject to human review. So, your "very simple" attack fails on at least three counts.
I would expect the human review ("Swatting") to be transparent, unless it was actually CP or looked like CP, at which point the person would be taken for a ride... But this is a real threat with actual CP now.
And again, there is the whole fact that you received the email and there is a log that you received it.
"Human review" here means basically minimum wage person who's looking through a lot of abuse images on a daily basis & has significant fatigue. I view it more as a CYA thing than any meaningful protection mechanism.
Additionally, the images sent to review are significantly downscaled versions of the original & could easily be made to be ambiguous.
The most difficult challenge in this SWAT story is that Apple has a secret secondary hash that's checked on a collision. That's the part of the SWATting story that feels difficult on a technical level. However, there are also really smart people out there so it wouldn't surprise me if a successful attack strategy is developed at some point given time.
Sure. I mean, my main concern would be people planting actual illegal images. Not just colliding hashes. If anything, a handful of images that have hash collisions, once they made it to the authorities, would then be reviewed in detail and shown as a false positive, and then ignored in the future / whitelisted. Or investigated because someone is generating images to directly tamper with the program / frame people.
No one is going to be prosecuted on "significantly downscaled ... ambiguous" versions of original fake images with a hash collision that flagged a review and was handed to the FBI accidentally because a "minimum wage" fatigued person passed it on.
I get the counter-arguments, but the hash collision thing is just, sort of... weird? I even get the argument that an innocent hash collision may have your personal and private images reviewed by some other human - and that's weird. But I can't really see it going further (you'll be arrested and sentenced to life in prison from the HASH COLLISIONS!).
It's just using technical terms to scare people who don't understand hashes and collisions and probability, and not really founded on reason.
> once they made it to the authorities, would then be reviewed in detail and shown as a false positive, and then ignored in the future / whitelisted
Which typically will be a court case, or at least questioning by police. This can be quite a destructive event on someone's life. Also, there's no mechanism for whitelisting outlined in the paper, nor can I imagine a mechanism that would work (i.e. now you've got a way to distribute CP by abusing the whitelisting fingerprint mechanism or you only match exact cryptographic hashes which is an expensive CPU operation & doesn't scale as every whitelisted image would have to be in there).
Also, your entire premise is predicated on careful and fair review by authorities. At scale, I've not seen this actually play out. Instead, either the police will be overworked & not investigate legitimate cases (too many false positives) or they'll aggressively police all cases to avoid missing any.
I also don't understand the vector everyone seems worried about, considering that perceptual hashing isn't new, as far as I'm aware, and it hasn't seemingly led to any wave of innocent folks being screwed over by sham illicit images yet.
I think there _is_ an argument to be made about a system like this being used to track the spread of political material, and it's easy to see how such a system would be terrible for anyone trying to hide from an authoritarian power of some type, but that'd already require Apple is absolutely and completely compromised by an authoritarian regime, which isn't high on my list of concerns.
The main issue is that the system is basically a bit of snitching spyware dolled up with cryptographic magic to make it only work with really bad stuff, honest.
As recently posted on HN [1], one should be very wary of backdoors, no matter how much one believes only the right guys could ever use them. Once they're there, they're there: it's arrogant beyond belief to think that opponents of yours won't exploit them.
I won't use an apple product with this feature. I've been using apple's ecosystem since about 1994.
I understand your points on backdoors, but the argument that the "Cryptographic Magic" is the bad part, and that it will send people to prison immediately upon "HASH COLLISIONS", in my view takes away from the actually concerning points.
What most people are focused on is that this is spyware built in to the operating system. The fact that perceptual hashes have some weak points could also lead to problems, but that's not the main focus of most opposition to this system.
And they already are, Apple (and indeed all companies) have to follow all sorts of rules in all sorts of countries. Some of those rules are fairly harmless, some of them are about radio power levels, some of them are "don't show the Taiwanese flag in China".
And I also understand the concerns with this (and the potential downsides of the technology).
But a motivated bad actor has a lot easier time just putting an image of the Taiwanese flag or propaganda on someone's phone than trying to make it a hash collision that triggers... If an attacker is really after someone, I would expect them to put the actual material on that person's phone...
... It's like trying to frame someone for drugs in their car and going through the hassle of synthesizing a chemical that triggers drug dogs to react, but it isn't actually the drug. Wouldn't they just... buy drugs and stick them in the car?
> Wouldn't they just... buy drugs and stick them in the car?
Maybe they want to create too many false positives deliberately, or maybe they do it just because they want to see if and how synthesizing such a chemical can be done.
When putting pictures on someone's phone or computer, there are ways to add false positives as well as maybe doing false negatives sometimes (e.g. by encrypting the picture or using an uncommon file format so that it must be converted, or using complicated HTML/JS/CSS to render it instead of just a picture file).
Also, if someone has the picture or drug or whatever to find if it is what they are, can you accuse them (maybe they are the police) of liking pictures and drugs, too?
Yeah sorry, I was a little vague. I consider that willingness to comply as already having been compromised. And I think, to be fair, if companies like Apple are ever in the position of using technologies like this to track individual dissidents of an authoritarian regime, we are already royally screwed.
But enabled only for the US region initially. That is bad in itself, though, I think, insofar as it signals that the system can be switched on and off, and thus maybe configured, on a per-region basis, which does seem to open the door for government pressure and abuse.
They ruined their reputation for being relatively secure devices. I’m done with them, for good. Anybody who cares about security has likely jumped ship now too. Normies will eventually ditch them when they realize most tech people have bailed. Unfortunately the best alternative, google, is even worse. Hopefully the loss of demand causes a decrease in device sales across the board. Landlines still work. Smartphones and their consequences have been a disaster for the human race.
A more charitable take is that both are true. Apple blinked, and one of several large contributing factors (probably a very big one) is the upcoming iPhone launch.
When the tech news cycle is dominated by bad press about a move like this, and every tech nerd community is discussing this topic almost daily, it would be insane for Apple not to take some notice.
I know many are pessimistic about this, but Apple has learned from bad mistakes before. They rarely directly admit they're wrong, but conceding to pressure from the community is not unprecedented: see the return to the "Magic Keyboard" in their laptops.
Edit: removed the word “never” and replaced it with “rarely directly”.
Could we please not say silly, easily-disproven things like Apple "never admit[s] they're wrong", when 10 seconds of Googling instantly gives the lie to that assertion?
It might not happen every day, but it happens frequently enough that it's easy to find a lot of examples.
Ok, Apple rarely admit they are wrong, and when they do, they tend to spin it heavily. I don’t think this meaningfully changes the primary message of my comment though.
But that was GP's point: They listen to feedback and improve things, but typically quietly, rather than admitting a mistake. Exceptions do prove the rule.
Every single one of them came at the point where Apple could no longer withstand the bad press or the damage to sales figures, and so they admitted they were wrong. Or more like they didn't: they just accepted defeat.
The Mac Pro was a mess, and it took one of their largest customers telling them, right to Eddy Cue's face, that they would switch their whole studio away from Apple before anything was done.
I wondered if Apple would ever say that you want more than one mouse button. They started having an option to use each side of the mouse as a separate button (even if it looked like a single one and defaulted that way), but I've never heard that two buttons are better than one from Apple. With Steve gone, who even cares now?
When Apple was designing the Mac as the personal computer for everyone, a single-button mouse truly was a godzillion times better than a two-button one. There used to be an Apple guideline on how to move a mouse: not lifting it up into the air, and mapping the horizontal plane of the mouse to the vertical plane of the screen. Even that was difficult for some people.
People just don't realise how much trouble normal people have with a mouse. Of course, as we progress, I think a two-button mouse could make sense as the default. The role of the PC has also changed: the PC for everyone is now a smartphone.
In the early days of Macintosh they tried hard to make this true. The excessive and inconsistent use of click vs double-click in apps even then was confusing.
These days it's hard to find an app that can get by without Cmd+click, which is harder than clicking a right button, and which would be even easier if the right button were physically distinguishable. Long-press is super annoying, as is force click--I never want the action that comes up when I accidentally force-click. With the prevalence of touch phones, the two-finger tap might be the easiest of them to remember (if not as precise).
The primary input device to Apple hardware is the trackpad, so it’s kind of irrelevant. But two button mice have been well-supported since classic Mac OS circa 1997-1998.
This is the most likely outcome. Moreover, they won't even announce it.
Apple learned a valuable lesson here. Roll the panopticon out in secret, and don't announce it. They've done a very good job locking down their modern devices, enough that security researchers would have an exceptionally difficult time proving that they're doing it anyway.
GrapheneOS is still a viable option until Google ends upstream security updates in 2023. That's a solid two years for Purism, PinePhone, and anyone else working on linux phones to bring their performance and feature set up to modern standards.
The correct course of action is to buy a Pixel 5, run GrapheneOS for the next couple years, while donating to the main privacy-focused linux phone projects, and make the switch again in 2023.
That's a terrible idea. Apple knows that researchers are going through every bit of assembly code on the phone.
If all of a sudden they say "Hey we found some code that scans all your photos and sends information up to the cloud" how bad would that look? It's better to explain upfront rather than get found, because they will get found.
Is this still possible? My lay understanding is that the SEP decrypts kernelspace on-demand, loads the opcodes on a very temporary (and randomly allocated) basis, and makes dumping any meaningful amount of the contents out of RAM very difficult.
Not a jailbreaker at all, so happy to be completely wrong on this.
> The correct course of action is to buy a Pixel 5, run GrapheneOS for the next couple years, while donating to the main privacy-focused linux phone projects, and make the switch again in 2023.
Alternatively, get a Linux phone right now and donate time by contributing bugfixes.
Exactly, it's (sadly) a temporary victory at best.
Ten bucks says they take a page out of our politicians' handbooks and sneak it in as a small, vague footnote in a much larger, unrelated announcement once the initial bad press blows over.
Honestly, that's what I assumed they did in macOS 11.5.2, where they refused to give details of the change. It's likely the last update for many Intel devices.
It was a torpedo to the heart of the privacy marketing campaign they've spent who-knows-how-much-money building up over the past several years. That translates directly to wasted dollars, which they do care about.
Are you asking Apple to come out and say that they are supporting privacy against state-level actors? After snowyD, why would anyone believe that anyway?
I'm not advocating against or for - I'm only wanting to add more clarity to the discussion because I think it is an important distinction that is often left out.
Congrats on selling your iPhone so fast. I managed to purchase a System76 laptop in the meantime, typing from it now.
The cat is out of the bag. You and I have realized Apple can make these type of changes whenever they want.
I'm disappointed to report that Linux needs a lot of improvement to be viable for most people. I'm forging ahead, donating to open source software and hardware projects, filing bug reports, and generally bringing the Mac spirit to Linux. "It just works."
> I'm disappointed to report that Linux needs a lot of improvement to be viable for most people.
I'm continually surprised that people are continually surprised by this. I'm sure this post will get a lot of anecdotal replies; of course it can work fine for many people, but that's not the point. I used it exclusively on the desktop from 1995 to 2002. I could make it work for me, today, if I wanted to.
> You and I have realized Apple can make these type of changes whenever they want.
Was this not obvious? I'm pretty shocked that, for example, Docker can decide to change their license, and now in a few months a bunch of companies owe them money for every single user they have, even if the users don't do anything different. If they opened the license agreement and chose NOT to upgrade. If they never use the software again. But bam, terms changed, now you owe us money.
> I'm sure this post will get a lot of anecdotal replies
I tried Ubuntu last year, briefly. Almost everything worked fine, except ... OK, I'm a trackpad user. I have a couple Apple Magic Trackpads and I prefer to use those over a traditional mouse.
It worked fine for, like, a day? And then out of nowhere it just stopped responding. Reset or reinstalled everything I could think of and it still failed to work. OK, whatever, I bought a traditional bluetooth mouse. Which went to sleep every time it remained still for more than a split second, rendering it basically unusable (not a problem on Windows on the same machine).
Maybe a whiz could have fixed one or both issues, but googling completely failed me. This was pretty basic stuff, surely? And yet I couldn't even get bluetooth mice to work on the most mainstream distro.
(Now, admittedly, the trackpad doesn't work properly out of the box in Windows either, since Apple doesn't provide drivers that work outside of Boot Camp. But at least there are paid drivers that work really well.)
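For anyone who hits the same thing: wireless input devices cutting out after a moment of stillness is very often USB autosuspend (the built-in Bluetooth adapter usually hangs off an internal USB bus too), not something distro-specific. A quick, hedged check - needs root, and the paths just assume a normal sysfs layout:

```python
# Sketch: force USB devices (including most built-in Bluetooth adapters)
# to stay powered, since autosuspend is a common cause of wireless mice
# and trackpads "falling asleep". Diagnostic only; resets on reboot.
from pathlib import Path

for ctrl in Path("/sys/bus/usb/devices").glob("*/power/control"):
    try:
        previous = ctrl.read_text().strip()
        ctrl.write_text("on")  # "on" = never autosuspend; "auto" is the usual default
        print(f"{ctrl.parent.parent.name}: {previous} -> on")
    except (PermissionError, OSError) as exc:
        print(f"skipped {ctrl}: {exc}")
```

If the mouse behaves after that, the permanent fix is a udev rule or a power-management setting rather than a script like this.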
Anecdotally, I have installed Linux on older MacBooks and have had the Magic Mice working fine without any issues.
Anyway, I don't know. People have all sorts of annoying issues with Linux that end up being things that I have never encountered before. I usually use wireless mice with USB transceivers, which have never had problems and I feel like not everyone has such a bad experience (although I know many people who do). Perhaps it's because it's Apple hardware or something, but, admittedly, I have had MacBooks work perfectly on Linux before.
I think, however, that if you use Linux on a desktop rather than a laptop, you will find that the hardware experience is quite pleasant and does not have nearly the same number of issues. Between the two, Linux does manage to shine decently on desktops.
> did anything lead you to use non-bluetooth wireless devices
Not really. I just had them ["non-bluetooth wireless devices"] laying around and I think the non-Bluetooth ones are cheaper.
> I'm wondering if the future of Linux might be abandoning bluetooth entirely.
Abandoning Bluetooth just because your experience was unsatisfactory is not something I agree with at all, sorry. IME, Bluetooth support is completely fine for all the Bluetooth devices I have (like speakers or my cell phone)—and I know others who haven't had any issues either.
Besides, Android uses the Linux kernel. While I don't know if Android uses Bluetooth drivers from the mainline kernel, if it does, that would just make Android's life harder while pointlessly removing something that usually works fine.
> I'm disappointed to report that Linux needs a lot of improvement to be viable for most people. I'm forging ahead, donating to open source software and hardware projects, filing bug reports, and generally bringing the Mac spirit to Linux. "It just works."
IMO it would go a long way if we taught computer literacy and specifically Linux literacy to everyone in school. Microsoft and Google have all the school contracts and they bring people up on systems that "just work" (actually they have paid system administrators). If we taught people from a young age to use Linux and handle problems, then the small problems that often come up with open source need not be barriers to adoption. Even if every regular person can't always solve every linux problem, if they were familiar enough to use it and had at least one expert friend, they'd be fine.
That's not to say we shouldn't strive to make Linux "just work" for most people, but we can attack the problem from other angles as well.
I agree more tech literacy is needed. The learning curve for Linux seems so steep. For what it's worth, I took UNIX classes in high school, and know how to get around.
But Linux is just far more than I can handle, even just to install apps and configure settings. For example just trying to get my mouse not to stutter, I dig and find a solution. I copy and paste it into the terminal. It doesn't work and I try 3 different solutions. I'm curious about solutions but it wears my curiosity out pretty fast. I need to get work done.
In the sense that computers are like cars, I'm okay not knowing how exactly it works under the hood.
Yes this kind of thing is frustrating. And I may be very wrong, but I have been wondering, what if this is what open source is always going to be like. I am 100% pro open source and I think everything (cars, trucks, washing machines, factory machines) should be open source. But I guess if we got to that point, where the people producing the computers and the people writing the code were all on the same side, maybe things wouldn't break so often.
Also FWIW I have learned, even though I run debian, to check the Arch Wiki for tips on problems like that. One more place to check in addition to stack overflow.
Ya I wonder as well. To me it seems like Linux will always need a gooey center of tinkerers and builders. But it needs to establish mutual respect with people who can't use the CLI but need to get work done. Open source is important to the success of Earth.
Agree. And the grumpy, user-hostile developer archetype is a real problem. I’ve been very pleased with the diversity movements inside of, for example, Debian. My end notion is a world where no one strictly must work for survival; in such a case there would be lots of volunteers to keep the systems running. But for now it’s tricky to attract devs to unpaid work.
I mean I already have accepted the fact that they allow Signal type apps because if the FBI or whoever requests it they can just access your display driver layer and see what you’re typing locally, no need to ban encrypted messaging. I just keep remembering how Lavabit shut down because he refused to comply and otherwise had no choice in the matter. Computers and software should not be easily trusted.
S21 Ultra. Out of one place and into another, I know. But it's also the principle of the thing. I don't share my location data, stick with the bare minimum, and sideload stuff. I don't see hardware good enough to go completely off-grid from Google yet, so this was my compromise.
Understandable. I suggest donating to open source options as part of your compromise. Purism, PinePhone, Ubuntu Touch....or buying something from them if it's not an option.
Unpopular opinion but I mostly donate to charities that are confirmed to spend the money on more than administrative people. Open source donations I make for software I use. If an open source thing comes around that can replace my Samsung phone I'd buy in a heartbeat.
Yes! And because of this, this means you need to donate ahead of time so things can be built to replace your Samsung phone. After it's built well...what would they spend your money on? Donate early, donate often.
> I managed to purchase a System76 laptop in the meantime, typing from it now.
If I could just get a decent higher-than-1080 panel from System76, I might consider it for my next laptop. I've been spoiled by Macs. As it is I might go with a Framework laptop instead.
Is the System76 experience worthwhile so far? I'm about to invest in a Pangolin [0] this Labor Day weekend and could use an honest review from a fellow HN ex-Mac user.
I ordered a Pangolin yesterday as I've been waiting for an AMD laptop forever. This is my third System76 laptop, and the first two are still great.
My first was a Galago Pro, and my only complaint was that I went HiDPI. That was just a bad choice on my part. The software supports it fine, I just don't prefer it in a laptop unless it's literally a Mac-level Retina-quality panel. It wasn't, but it's half the price point.
My other is a Gazelle 15", which dual-boots Arch and Windows. I use it primarily as a gaming box when I travel (remember those days?). I spent a lot of raid nights on various MMOs from hotel rooms. It works great.
Really looking forward to the Pangolin. Mine is still in "building" status.
I see a lot of sh*t written about System76 because they are just rebranded Clevos. Well...yeah? They are, and they are fine. I would rather give money to a company like System76 a thousand times over than someone dripping with anti-consumer behavior like Dell.
System76 is incredible. I have nothing but good things to say. They are the future of Linux.
Re: Clevo - obligatory copy pasta from System76 Chief Engineer right here on HN [1]
"System76 UX architect here!
This vastly trivializes the work System76 does for months and sometimes years leading up to a product release. We don't simply take an off-the-shelf product that already exists, throw an OS on it, and sell it.
System76 works with upstream manufacturers (like, yes, Clevo for laptops) to determine what types of products to develop, including their specifications, design, etc. for months up to a release. These products do not exist before we enter into these conversations.
Once that has been determined, designed, and goes into production, we start on firmware. We ensure all components are working together and with the Linux kernel (often requiring changes to the components' low level interactions with the OS, since the upstream components themselves are often manufactured with the assumption they will be used by Windows).
Once that is complete, we test with Ubuntu and Pop!_OS specifically, ensuring the OS is working perfectly with the hardware. If there are any OS-specific changes to be done, we write that behavior into Pop!_OS and/or our "driver" which is preloaded on all machines (and available in any Ubuntu-based distro, Arch, Fedora, etc.), with the intent to upstream that into Ubuntu, GNOME, and/or Linux itself as quickly as possible. When this is more generic like ensuring HiDPI works great out of the box, this actually ends up benefiting competitors like Dell's XPS 13 probably as much as it benefits us, but we put in the effort to file the bugs, track them, write the code, and get it upstreamed.
Once all of that is complete, we finally offer it for purchase and market it with all of our pretty photographs, sales pages, etc.
What ends up happening, then, is Clevo offers a machine with a similar-looking chassis for sale as a barebones laptop. This is the result partially of the decision making System76 has made for what to produce in the first place. These products, however, do not contain any of the firmware or driver work that System76 has invested in. They do benefit from the nice photography and advertising System76 has done, and since they look similar, people assume they're going to get the same machine for cheaper "directly from the manufacturer."
Edit: regardless, this is a bit beside the point of the linked blog post, and is also becoming less and less true as we work on designing and manufacturing our products completely in-house."
Thank you for the quote, I hadn't seen that. I certainly didn't mean to trivialize their work (obviously? since I've voted with my wallet 3 times)
Their work on both the firmware front and with Pop!_OS should not be overlooked. And, it should be mentioned, if one is familiar with their whole product line - they now go far beyond just laptops for the open source community. And their powerhouses are absolutely not from some other OEM.
I've been using an Oryx Pro (v5) for a few years. It's mostly fine.
My main disappointment is that it doesn't "just work" with the kernels shipped by Ubuntu. System76 provides their own kernel packages, but they sometimes cause weird conflicts with other low level packages, and it can't hold its charge when suspended. I haven't attempted to debug the issue because I usually stay plugged in when I'm working, but I suppose I ought to engage tech support. They have been helpful in the past.
At the end of the day, I'll probably still give them my first look if I'm shopping for another laptop.
I also have had an Oryx Pro for several years. I stuck with System76's branded Linux, just do updates as they are available, and so far all has been well. I like that they keep CUDA working with very few hassles.
Compare this to my migration to a M1 MacBook Pro: I love it, but it is rough doing deep learning on it.
see my comments above in this thread. specific comments are welcome. i love talking about this thing :)
As a Mac user, the build is about what you would expect from a PC. It's not one of those nightmare shit PCs. It does its job.
Get that Pangolin before they sell out! Is there a Labor Day sale or something?
Also, what pushes you to get a Pangolin (AMD) instead of Lemur (Intel)? I'm curious because I don't really understand the meaningful difference and bought the Lemur when it was in stock.
As a designer, the only thing that I miss on Linux is graphics software at least on the level of Affinity Designer. Everything else is more than viable: Blender, Resolve, BitWig Studio, RawTherapee, Krita.
For web development it's perfect; my engineers run Arch/Manjaro on soon-to-be-abandoned Apple hardware.
Even my old MacBook Air is supported; Pop!_OS works perfectly, with properly mapped keyboard keys (keyboard backlight controls, audio, playback, etc.). The only thing I had to do was plug in a Thunderbolt/Ethernet adapter, click update in the Pop!_Shop, and restart; after that WiFi worked flawlessly.
Just a gut check -- what about wake from sleep using an external keyboard? That required me to dig through GitHub and install a system service. It's these touches I miss from macOS.
Before everyone starts high-fiving each other: even the press release itself brands this as a delay. I accept that it is a positive development, but don't read it as anything more than that: a temporary delay to let the furor die down.
"So now the question is in what 15.X update do they just sneak it in."
I think they cannot really sneak it in. Some people do read updated TOS.
And I think a "feature change" like this requires one.
And yes, the question will be whether the broad attention to the topic can be brought back to it if that happens. And they clearly say it will come, only with some "improvements" (but I cannot think of any improvements that can be made without abandoning the basic concept):
"we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features"
> I cannot think of any improvements that can be made without abandoning the basic concept
The concept of scanning for CSAM during the upload process, or the concept of making the user's device do it? Apple could do the former on their own servers and essentially nobody would be upset.
The basic concept of Apple running private pictures through an AI algorithm to detect something worthy of investigation.
The algorithms never work 100% right. That means there will always be humans in the loop, sorting through false positives etc., which in this context probably means very private pictures.
And to the end user it does not matter whether this happens on the server or on their device. The message is: your pictures are not private with Apple.
It matters very much with regard to the distinction between yours and theirs. I strongly believe the operating system of your device should never actively work against your interests, and checking whether you're using the device to commit a crime, then reporting it to the device manufacturer who will tell the government is potentially strongly against your interests.
If that's too abstract, it matters in that it's a foothold for further scanning of data on your device. Without this system, if a state wanted to pressure Apple to create spyware and add it to their operating system to look for something else, that would have been easy to refuse. With this system, the spyware is already there, and despite Apple's protests to the contrary, it's just a matter of a few tweaks to repurpose it.
> People like me who sold their iPhone are a drop in the bucket
I doubt it. They will bring this up again very soon; if they cancel it, they’ll get very bad press too.
And second, a company that once shows its intention to breach your personal privacy while disregarding its users’ best interests altogether won’t hesitate to do it again.
If the million customers they lose also happen to be people who build apps and influence the tech buying decisions of everyone they know, that could be a very big problem over time.
The developer mindshare really matters to Apple. As long as they are on Apple devices, some significant fraction of them will be creating software for Apple devices. Without 3rd party software, Apple is toast.
Do you believe that most developers are up in arms about this, ready to abandon the platform? I doubt more than a tiny fraction really care about this at all, and the hysterics on HN (threads brigaded by a small percentage of HN users) are not representative. Predictions of some massive sales drop -- or even a sales drop at all -- are deluded nonsense.
Apple is delaying this, I suspect, because it confuses the privacy message: If every time they talk about privacy someone can make them look like hypocrites by pointing out the on-device scanning, well that's just ugly for Apple. I suspect the on-device scanning element is going to be removed and it'll be on ingress to iCloud, which is what I said on my very first post to this. It was ill-considered to do it on device given Apple's privacy push.
Most? Probably not. But developers are more sensitive to privacy issues in general, because they work in the most privacy invasive industry that ever existed. They know how much corporations dig into your stuff, and they might have even implemented some of it. At least speaking personally, I know my purchasing behavior changed on Amazon when I worked for them and realized how much they tracked me.
More importantly, there is an oligopoly in tech and, up until now, only one of the manufacturers of tech devices cared about privacy. And to that extent, I wouldn't be surprised if the vast majority of developers who care about privacy have biased themselves towards Apple. And now they have a reason to treat Apple like every other privacy invading company.
> And now they have a reason to treat Apple like every other privacy invading company.
That’s right. I like the idea that on android, I can use stock tools like rsync to manage the photos/music on my phone. With Apple it’s been a constant battle against iTunes to do those basic things. The only reason I chose Apple last time I bought a phone is because I’m willing to sacrifice a bit of convenience for the perception of better security/privacy. If they lose on that front, then I’ll hop to whichever phone is easiest for me to use the way I want to.
> Predictions of some massive sales drop -- or even a sales drop at all -- are deluded nonsense.
I remember, when that recalcitrant dentist was dragged off a United plane, endless predictions of United's imminent demise, people would never fly United again, yada yada yada. Needless to say, nothing much happened.
> I suspect the on-device scanning element is going to be removed and it'll be on ingress to iCloud, which is what I said on my very first post to this. It was ill-considered to do it on device given Apple's privacy push.
Not sure about that. Apple's approach (private set intersection run on half client, half server) preserves more privacy than scanning it in the cloud, and leaves the door open for E2EE.
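For anyone curious, the core trick behind DH-style private set intersection is commutative blinding. Here's a toy Python sketch of that idea only; it is not Apple's actual PSI construction, and the modulus, exponents, and item names are all made up for illustration:

    import hashlib
    import secrets

    P = 2**127 - 1  # toy Mersenne-prime modulus, not a real parameter choice

    def to_group(item: bytes) -> int:
        # Hash an arbitrary item into the toy group
        return int.from_bytes(hashlib.sha256(item).digest(), "big") % P

    client_photos = [b"photo_hash_1", b"photo_hash_2"]          # hypothetical
    server_list   = [b"photo_hash_2", b"known_bad_image_hash"]  # hypothetical

    a = secrets.randbelow(P - 2) + 1  # client's secret exponent
    b = secrets.randbelow(P - 2) + 1  # server's secret exponent

    # Each side blinds its own items, then the other side blinds them again.
    # Because exponentiation commutes, only items present on BOTH sides end up
    # equal after double blinding; non-matching items are never seen in the clear.
    client_then_server = {pow(pow(to_group(x), a, P), b, P) for x in client_photos}
    server_then_client = {pow(pow(to_group(x), b, P), a, P) for x in server_list}

    print(len(client_then_server & server_then_client))  # 1: only the shared item matches

The real system layers threshold secret sharing and "safety vouchers" on top of this, so even a match is supposed to reveal nothing until enough of them accumulate.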
> The developer mindshare really matters to apple.
The developers are going nowhere so long as the money is there to be made from iOS.
Did they all quit when it was revealed that Apple was part of PRISM? Of course not. They barely blinked.
It doesn't matter if a billion developers sign a petition. It's empty. The vast majority of them will promptly capitulate if Apple goes forward. By vast majority I mean 99%.
Did you see all the people desperately fleeing from Australia due to the rise of the extremist police state there? Nope. Did you see the tens of millions of people flee from the US when the Patriot Act was passed or PRISM was revealed? Nope. Did you see everyone rapidly flood over to mainland Europe as Britain became a police state? Nope. How about the hundreds of millions of people fleeing out of China over the past decade with the rise of Xi and the entire demolition of all human rights in China? Nope. Perhaps you're spotting a predictable human nature trend in all of this.
All the tech employees of Google, Facebook, Amazon, Microsoft, Apple, etc. They've all quit in droves over the past decade over human rights abuses - including vast privacy abuses - and concerns for what these companies are doing? Whoops, no, the exact opposite. There was a price for their integrity as it turns out, and they grabbed the money, which is what most people do.
The iOS developers are going nowhere, either. Besides the fact that their livelihoods depend on it, there's no vast green field to run to from what's happening. Sure, some might change jobs, or switch industry segments, it will be a small group though. It's going on in just about every liberal democracy simultaneously. What huge platform are developers going to flee to that's better and is never going to feature forced on-device scanning spyware? It's coming for Android too, bet on it.
iOS is where all the profitable users are. While that holds true, developers will be there to sell things to the profitable users.
The principled indie developers might leave in protest, and that would be a terrible loss for the platform's soul, but realistically the vast bulk of money flowing through the App Store is not going to the indie developers.
Hmm, having experienced the influx of 1.5 million people into an area smaller than Alberta, and the social consequences in the last 5 years, I'd say you are being very glib. The shit show is just starting.
The fact that they are going to sneak this backdoor in through the back door later, against the wishes of their customers, is sufficient for me to decide that Apple has abandoned any values they may have once had.
I won't be purchasing any more iPhones, and I've had every one.
I think the internal backlash helped a lot. The initial move was very contrary to the culture Apple has cultivated over the years and it did not go over well with them.
>People like me who sold their iPhone and cancelled all services probably are a drop in the bucket
Yeah, very few have the financial independence to be able to do something like that. However, over the next few years when everyone gets their free upgrades, I imagine we'll see some slower, yet more robust, switching.
Most people finance their phones. To sell them means to pay them off which is extremely hard for most people. I had the luxury of paying off my phone, and buying another while waiting to sell my current phone. In that timeline it was about $2,000 in cashflow. So yeah, I'm blessed to be able to do so at the drop of a hat.
Gotcha, yeah, I was focusing on the "sell phone / cancel services" side and didn't even think of the prerequisite to be able to sell the phone. (and, i guess, buy a new one before you ship your old one off!)
I posted on HN [1] that I really wished they would stand their ground; it would make the decision on Apple a lot easier. Now they won't. Half of the people will forgive them, and the rest will most likely forget about it in a few years' time.
But to me, it did at least look like this scared the shit out of Apple PR.
"So now the question is in what 15.x update do they just sneak it in."
It would be nice if the general public and the news therefore now picked up on the issue of opaque "automatic updates". To date, they have been trained to always accept every update without question.
Conventional wisdom: Never, ever question any particular automatic update. Don't think, just enable. All updates are created equal.
Assumptions: Every update is for the user's benefit. Software companies never operate out of self-interest. There are no trade-offs. What's good for them is good for users and vice versa. There's no reason for end users to trial updated versions of software ("A/B test") before deciding to use them "in production".
Thought experiment: User installs Software from Company to perform "Function #1". Company makes changes to Software, described as "Fixes and other changes." Software now performs Function #1 plus Function #2. User allows new software to be installed automatically. (Automatic updates enabled per "expert" advice.) When User downloaded and installed Software she based her decision to use Software on its ability to perform Function #1; however, did she have any interest in Function #2? Did she agree to Function #2? Company says yes. Developers say yes. What does User say? The license agreement places no limits on what might comprise Function #2. It could be anything. It could have no benefit whatsoever for User. Moreover, Company is under no obligation to disclose Function #2 to User. With automatic updates, Company is free to keep changing Software. User originally chose Software for Function #1, but Software may look very different after many "automatic updates". Would she choose it again in its current state? Segue to a discussion of "lock-in".
My bet is they wait a few months and quietly announce that they’ll do the scanning on the iCloud servers instead. That would be relatively uncontroversial in the tech press and probably never make it to the mainstream press.
A "blink", or the iterative strategy of a skilled manipulator/abuser/groomer?
Push to the level of their target's discomfort, then back off – without respecting the "no!", pretending to hear only "not now".
Come back later from a superficially-different angle, when the victim's defenses are down.
Couch the next attempt, or the next, or the next, in some combination of feigned sweetness ("but I waited & changed things for you"), or esteem-attacks ("you're so nasty you deserve this"), or inevitability ("this is going to happen whether you like it or not, so stop fighting").
"We're sorry to inform you that Apple iCompliance AI Eyes, as of last night's updated version 16.4.2, has identified the following violations of laws & emergency edicts in your recent local photo roll:
• Unmasked proximity with non-household member
• Unlicensed gathering of more than 6 people
• Association with unidentifiable individuals (no faceprints on record)
These are your 2nd, 3rd, and 4th strikes so your phone has been disabled and you are now under arrest. Do not leave your immediate vicinity. As always, keep your phone on, charged, and in your personal possession at all times.
Due to large violation volumes, we are currently experiencing long physical arrest team hold times. Your arresting agents should arrive at your location in approximately... 5... hours.
Thank you for using Apple products, the leaders in Social Safety. 'Better Safe – Or You'll Be Sorry!'™"
It's a way to inch toward the unthinkable by obtaining concessions. You propose the unthinkable and after extreme push-back, the concession seems completely reasonable. However, if the concession would have been proposed alone instead of the unthinkable, it would have been rejected. It's an absolute deceptive manipulation technique.
Another example of manipulative Overton window shift at play: You may also recall deep state puppet Kathy Griffin showing Trump's severed head as the deep state testing the waters of a possible coup or assassination. The more you expose the public to the unthinkable, the more it becomes acceptable.
The door-in-the-face (DITF) technique is a compliance method commonly studied in social psychology.[1][2] The persuader attempts to convince the respondent to comply by making a large request that the respondent will most likely turn down, much like a metaphorical slamming of a door in the persuader's face. The respondent is then more likely to agree to a second, more reasonable request, than if that same request is made in isolation.[1][2] The DITF technique can be contrasted with the foot-in-the-door (FITD) technique, in which a persuader begins with a small request and gradually increases the demands of each request.[2][3] Both the FITD and DITF techniques increase the likelihood a respondent will agree to the second request.[2][3]
Foot-in-the-door (FITD) technique is a compliance tactic that aims at getting a person to agree to a large request by having them agree to a modest request first.[1][2][3]
This technique works by creating a connection between the person asking for a request and the person that is being asked. If a smaller request is granted, then the person who is agreeing feels like they are obligated to keep agreeing to larger requests to stay consistent with the original decision of agreeing. This technique is used in many ways and is a well-researched tactic for getting people to comply with requests. The saying is a reference to a door to door salesman who keeps the door from shutting with his foot, giving the customer no choice but to listen to the sales pitch.
Correct. If we could magically end the rape of children and stop the propagation of images of such, I think we'd all be on board. But when a system would realistically just push most of it elsewhere and violate a bunch of innocent people's privacy, people get nervous.
It's like being asked to have an invasive, detailed scan of your body shown to TSA personnel (and recorded) when you fly. The motive is apparently to prevent violence. But there were still incidents on flights when that was being done. It wouldn't have caught the box cutters on 9/11. The stated motive ceases to be part of the equation for me.
Pushing it elsewhere is still a win. You don’t want this to have easy cover. Keep in mind FB reported close to 17M pieces of CSAM last year. So if having the ability to detect it was such a deterrent, why are their numbers so high?
Apple's reported numbers were like 350 last year, compared to almost 17M for Facebook. You could theorize Apple has little CP, except in the Epic lawsuit we saw exec emails released that said the exact opposite. So clearly not much scanning has been going on.
I think it's actually impossible to use an AI to determine this type of thing. Even human courts sometimes get it wrong with images that are open to debate (for example, missing context) on whether they were abusive or not (parents bathing their kids, as one example). Then you have the whole can of worms that pedophiles will find sexual interest in images taken in non-abusive contexts but will use them nonetheless. How is an AI supposed to determine that? Even our laws can't handle that.
There are also images that can be abusive if made public but non-abusive if kept private. For example, what about a kid taking pictures of themselves nude (many kids don't have parentally moderated accounts) and putting them on iCloud (the iPhone does this automatically with default settings)?
It's a complete nuthouse and there's no way to do it.
(Also don't get me started on drawn artwork (or 3D CG) that people can create that can show all sorts of things and is fully or partially legal in many countries.)
They're not using an AI for that purpose. They're just computing a hash of the image and sending it along with iCloud uploads. The hashes are of images that have been identified by humans as containing child pornography.
It's also possible that a picture of an avocado will get flagged. A nude photo of someone who's shaved themselves is unlikely to look so much like an actual CSAM photo that it'll get flagged. It's certainly possible, but again, so's the avocado.
They're not hashes in the way md5sum or sha256 is a hash. They are _neural_ hashes, which don't try to fingerprint a specific image but rather tag any image deemed sufficiently similar to the reference image. Cryptographic hashes are effectively unique; neural hashes aren't.
To add, there are natural neural hash collisions in ImageNet (a famous ML dataset). Images that look nothing alike to us humans.
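To make that concrete, here's a rough toy comparison in Python. The "perceptual" hash below is a simple average hash over made-up pixel values, standing in for the general idea; it is not NeuralHash, and the inputs are hypothetical:

    import hashlib

    # Cryptographic hash: the tiniest change to the input produces a completely
    # different digest, so it can only match exact copies of a file.
    a = hashlib.sha256(b"holiday_photo_v1").digest()
    b = hashlib.sha256(b"holiday_photo_v2").digest()  # input differs by one character
    print(a == b)  # False: only exact copies match

    # Toy perceptual hash: similar-looking inputs land close together, and a
    # "match" means Hamming distance below some threshold, not exact equality.
    def average_hash(pixels):
        avg = sum(pixels) / len(pixels)
        return [1 if p > avg else 0 for p in pixels]

    def hamming(h1, h2):
        return sum(x != y for x, y in zip(h1, h2))

    original     = [10, 200, 30, 180, 60, 220, 90, 150]   # made-up grayscale pixels
    recompressed = [12, 198, 33, 181, 58, 222, 88, 149]   # slightly altered copy

    print(hamming(average_hash(original), average_hash(recompressed)))  # 0: treated as the same image

That fuzziness is the whole point (re-encoded or cropped copies still match), but it's also why collisions between unrelated images are possible in a way they effectively aren't for SHA-256.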
Apple already does virus scanning on device and has for a decade. I fail to see how you trust one proprietary binary file scanner to stay in its lane and not spy on you, but fear a different one with a different stated purpose.
Not really - there are a ton of ways to profile and monitor a Mac to determine what processes are calling into what, and spotting something activating this code would be trivial if it were happening.
You say that, but there are also tons of ways to hide such things (there is an entire field dedicated to side-channel leakage). The point is you don't know unless you have the source code (and can compile it yourself, I'm ignoring the trust of compilers at the moment). You can try to reverse engineer and analyse all you want, but it doesn't mean you know what the system is really doing...
It may be trivial to look under a given rock, but when there are hundreds of thousands of rocks, the likelihood of someone noticing something left under a rock is low.
Well I don't mean by doing it without telling everyone. I mean wait until it dies down, then just be like oh btw 15.X includes CSAM scanning as a change log bulletin.
Given the uproar on this change, and the fact that the uproar was big enough to cause this delay, can you imagine the bad press if they come back later and roll this out quietly?
Editing to add: I think this will play out more like the butterfly keyboard debacle. They won’t ever really acknowledge how bad the original thing was, but they’ll return to a solution the community finds palatable (server side scanning), and wrap it up in a bow like they did with the “magic keyboard, now on our laptops”.
They were trying to make a small press release about it and be done hoping no one would notice. Instead everyone got mad and called them out on it. That is where they attempted to sneak it in. Not everything is black and white and the terminology is sufficient to describe the situation.
Wait a sec - this wasn’t a ‘small press release’, it was a huge one, made several months in advance of the feature launching, accompanied by a bunch of interviews with execs.
If you look at the tone of all of it, they also honestly felt this was the most privacy preserving way to do CSAM scanning - likely so they can enable E2EE on everything in iCloud.
What they got very very wrong was the public reaction to it.
> What they got very very wrong was the public reaction to it.
What they got very wrong is that this fundamentally changes the nature of iPhones -- they are no longer user agents; in fact, they're actively scanning user data on behalf of another party. That change doesn't just make people feel uncomfortable, it opens the door for untold myriads of abuses.
It's one thing if iCloud servers scan user data - those servers never were assumed to be user agents. It's entirely different when user-owned devices do this.
What they got very very wrong is the idea that spyware running on the user's device is the most privacy preserving way to scan for illegal content hosted on their servers.
Apple has not announced E2EE for files stored on iCloud. If that is their intent, they likely would have had a somewhat improved public response by announcing it at the same time as the on-device spyware.
Their practice of surreptitiously pushing updates to older phones to kill the battery life would seem to fit that bill. Their updates aren't open either; people just take them at their word. If the law or EULA doesn't outright prohibit them from doing something, it's prudent to assume they already are.
Google or (worse) Facebook would have just put this in with no press release.
My read is that Apple is trying to placate governments by throwing them a bone without totally abandoning their privacy stance, hence the drive to put the scanning on the actual device so all your photos don't have to be shared as-is with Apple. That way they can encrypt iCloud but still keep the feds from accusing them (in bad faith) of being a child porn trading platform.
The alternative is to keep iCloud unencrypted and scan in the cloud, which is what they and everyone else already does.
One of the pieces of feedback they got though is that people are more okay with that than with scanning on the device. People expect that things that go to the cloud are no longer private unless you run some kind of local encryption that you control.
The reason they don't care about photos not sent to iCloud is the trading platform angle. It's one thing to let people store shit on their own devices. It's another thing to be a large scale and easy to use secure channel that people can use to share CP. The nightmare scenario for Apple would be to become a de-facto standard in the CP underworld for sharing content.
Sure people can use Tor, private VPNs and mesh networks, PGP, or many other things, but that requires a level of technical knowledge that greatly reduces the size of the audience.
Maybe a better alternative for Apple would be to add iCloud encryption but to stipulate that any sharing of content disables encryption for the shared content. That way they could scan shared content only. For unshared content iCloud is basically just an encrypted backup.
>My read is that Apple is trying to placate governments by throwing them a bone
You would believe this if you only read/watch Apple PR and ignore reality. The actual reality is that Apple always collaborates with all governments. The reality is that Apple did not announce any new end to end encryption that would use this feature and did not promise such a feature.
Apple isn't altruistic, but I do think they want to have strong privacy. They cooperate with governments to what seems to be the minimum extent required to do business in a given country. They can't change governments.
In China and other more authoritarian nations that means cooperating a lot. If you don't you do no business in China. In the USA they seem to be looking for a new minimum level of cooperation, or trying to enhance privacy a little without moving the needle.
In any case this particular strategy looks like a failure in the market. They might re-engineer it or backpedal or just toss the whole idea of encrypting iCloud.
>Apple isn't altruistic, but I do think they want to have strong privacy.
Maybe, if you define privacy as Google and FB not tracking you while only Apple does. You open an application on your Mac and that event gets sent, insecurely, to Apple; find a definition of privacy that is not contradicted by what Apple was doing.
What Apple is trying to do is minimise damage to their bottom line from bad PR. At the end of the day, more people must realise that corporations are beholden to one thing - money.
Morality barely matters and legality is something easily skirted around with the right legal argument. Apple, Google, etc. will continue working with governments so long as their profit can flow from those countries.
Then why wouldn't they say, "the government asked us to do this"? Apple is taking all the negative publicity this way.
My favorite theory is this:
Apple wants to create a chilling effect on reverse engineering iOS. They're starting to catch regulatory attention and lost their recent suit against Corellium. By putting neural hashes in the encrypted OS image, they can accuse anyone who reverse engineers iOS of:
1. Being in cahoots with child abusers
2. Knowingly possessing CSAM
I would hope that most courts would see through that paper thin logic, but the idea would be simply to introduce enough doubt that most reverse engineers don't want to risk it.
Did we? I suspect the real issue is that the original client-side proposal had a lot of holes. What if the bad guys upload CSAM which hasn't been tagged by NCMEC yet? What if they then delete it from the device (but keep it on iCloud)? Or what if they zip the CSAM images and back that up?
In order to be even semi-effective, the client-side scanning has to be more invasive, or they have to implement server-side scanning too. Apple may well be looking whether they can implement this scanning without even more backlash.
I don't think you're right. Apple didn't care about CSAM policing; it's just that they weren't opposed to it and were throwing governments a little bit of what they wanted to make them shut up. They thought this through very thoroughly. I read the whitepapers they put out on it. Clearly this wasn't done overnight by a confederacy of dunces. I think they thought they were picking a middle ground on surveillance and privacy (clearly they weren't, as they broke the 4th wall of "this is my damn iPhone you're scanning on!"). Anyway, they didn't want to be the police and they aren't going to be doing more to police your device, as you are pointing out. They never cared if they caught anyone; they were just trying to make the US government a bit happier.
Apple may have thought this through technically, but politically I find it very unlikely the proposed method is sufficiently effective scanning.
Sooner or later government and NCMEC will push them to complete the feature (understandably so from their POV), and when they do, Apple will have to expand scanning. Apple may have already been pressured to do so.
Sometimes choosing the middle ground is like choosing to drive on the white line in the middle of the road as a 'compromise' between the lanes.
In my opinion, Apple will break its brand if they actually go ahead and release "self-reporting to the authorities" device features.
Even if they somehow manage to make it do exactly the thing it is supposed to do and not slide into a feature for mass governmental control, I think a lot of people will lose trust in the devices they own.
Automatically scanning your stuff and reporting the bad things to the police is like the concept of God always watching. Even if you don't do anything wrong and you agree that the stuff they are watching for is bad and should be prevented, there is something eerie about always being watched and controlled.
The iPhones should mind their own business and leave the police work to the police.
That being said, I would be O.K. if the scan is performed on the device upon physical access by the authorities. You caught someone, You have enough evidence to investigate the person, you are allowed to search his/her home then maybe it can be O.K. to run a fingerprint scan on the phone to look for evidence.
> That being said, I would be O.K. if the scan is performed on the device upon physical access by the authorities. You caught someone, You have enough evidence to investigate the person, you are allowed to search his/her home then maybe it can be O.K. to run a fingerprint scan on the phone to look for evidence.
This was an unreasonable and unnecessary capability; no amount of compromise makes it a good idea.
I don't want my devices to tattle on me. They exist to serve me, not the state.
> That being said, I would be O.K. if the scan is performed on the device upon physical access by the authorities.
Let me call this Yet Another Insidious Concession (YAIC). The sell seems to be that "this has safe search built in, so in the event of seizure, authorities won't have to do a full search". But the actual situation is that authorities would never be satisfied with anything limiting their search on seizure; they would be happy for this to be added and would wind up using it primarily through some automatic mechanism.
> Automatically scanning your stuff and reporting the bad things to the police is like the concept of God always watching.
What I find interesting is how our perspectives shifted over time. 1984 was considered a dystopian nightmare and within that world Big Brother wasn't always watching but rather _could_ watch at any time. The simple ability to be able to tune in was enough to create the fear and have people report others. We're well past that to be honest and interestingly our interpretations have shifted as well to Big Brother _is_ always watching and that being the dystopian nightmare. In this sense we're there in some areas of our lives and it has shifted to "I have nothing to hide" or "what can I do?"
So it is interesting to see that essentially the window shifts further and further in that direction and as long as the shift is sufficiently slow the general public seems not to care. I wonder if there is a turning point.
I think it's critical to understand that a single law can shift that window drastically. If Apple can allow for a small shift in that direction – however distasteful – it may prevent the larger shift caused by anti-privacy legislation.
I don't know why more people didn't come to this conclusion. The choice between scanning the files before upload to the cloud or after upload to the cloud is largely a policy decision and not a technical one since Apple already controls both ends of that process.
If you don't trust them to scan things on your device why are you trusting them to scan things in the cloud and why are you trusting them with both the hardware and software you run?
1. They own their cloud, you own your device. Having your own device work against you is more problematic in principle than having another party you give your data to working against you. I don’t like the concept of devices that I own betraying me.
2. When scanning is done locally, it’s easier for them to secretly or openly modify the software to scan everything on your device, rather than just what’s just uploaded to iCloud. If they tried scanning everything (including non-iCloud images) in the cloud, the network traffic of uploading every image on your device to the cloud would be observable.
You seem to have missed my point. They control the entire ecosystem. If you are worried about them doing something secretly to violate your trust, you should assume they are already compromised, because the only thing stopping them is a policy decision and not a technical limitation.
The difference is, they promised not to look at personal files. If you are using their devices, you trust their promises. Now they broke their promise and a lot of people are losing that trust.
Where was this promise stated in these explicit terms, especially the definition of "personal files"? Because Apple is also promising that this only applies to file that are set to be uploaded to iCloud and the expectation of privacy is different for files uploaded to the cloud.
Either Apple's promises can be trusted or they can't. And if they can't, you can't trust anything about their devices.
Of course it is not an explicit promise and doesn't refer to "personal files" but I think it shows where OP is coming from. You can reasonably understand this message in the way they do.
"What happens on your iPhone stays on your iPhone" can't be literally true or else your phone wouldn't be able to communicate with any other devices. There is therefore an implicit exception for data that you choose to send elsewhere. As I said, Apple was only planning to scan photos you were sending to iCloud. Therefore these are already files that you agreed to send off your device and they shouldn't be assumed to be covered by that marketing slogan as you are the one choosing to contradict it.
You want "What happens on your iPhone to stay on your iPhone"? Turn off iCloud.
Are you asking for the out of the box default to be that your iPhone can't send an email or iMessage because that wouldn't "stay on your iPhone" either?
The marketing speak is marketing speak and obviously not literal.
Sending the email and messages is what the user does.
Auto-sending all your photos to a server owned by someone else (and without end-to-end encryption!) by default cannot be described as "what happens on your iPhone stays on your iPhone" in my opinion.
It has been a while since I have set up an iDevice from scratch. Isn't there a prompt during setup that asks the user whether to enable this iCloud backup? I believe you can even set up devices without entering an Apple ID at all. Wouldn't that make this scanning also a response to something the user does?
I don't know the answer, but regardless, any non-technical user will simply not understand the implications of using "the cloud". They don't know that there is no cloud, just other people's computers and that Apple has access to all photos. I really doubt it's explained well during the setup.
> In my opinion, Apple will break its brand if they actually go ahead and release "self-reporting to the authorities" device features.
I'm always astonished that some (random) people feel more confident in their brand analysis than the best people at this specific topic, paid thousands of dollars at Apple to do just that. Or maybe I'm the one who is wrong.
What makes you think that I am random, that I did brand analysis, that I am confident, that my confidence exceeds the confidence of the people Apple paid, and that those highly paid people concluded differently?
That comment looks like partly an uneducated guess, partly a fallacious appeal to the “authorities of branding”. I’m always amazed to see people who assume that employees of large companies cannot make mistakes or that they have unlimited power over the decision making.
You are probably not random, I just added the word to make the message clear.
There are no authorities of branding; it is just that there are people who are paid for that, and it is very unlikely that shareholders will let Apple do things that decrease the market value over a long period of time.
How do you explain, among many other things, for example, the Digg fiasco? Could they not have seen it coming?
People make mistakes everywhere. Teams make mistakes everywhere.
Of course it's much more likely that they know what they are doing and that they have weighed the pros and cons. But, then again, even if they still go forward with it, they have changed their course slightly. Do you believe it was part of their plan? (genuine question)
Do you trust that the code written by experts has no flaws? How about teams of experts? Do rockets not sometimes have bugs which cause them to fail? If a team of highly skilled programmers in N-version controlled critical systems can fail, why can't someone or some team in such unpredictable topics such as market analysis and brand perception fail as well?
Again: I share your sentiment that surely they have thought about this. It's just that I'm not so sure that they were as competent as you might think -- maybe because fundamentally it's a very reactive and unpredictable market!
> I'm always astonished that some (random) people feel more confident in their brand analysis than the best people at this specific topic, paid thousands of dollars at Apple to do just that. Or maybe I'm the one who is wrong.
I'm afraid you are wrong.
I'm also astonished that Google has allowed search to deteriorate to a point where competition actually has a chance.
I'm astonished that Microsoft cheapened their brand by serving me ads on my Windows Professional license.
The brand is ultimately in the minds of consumers, and if you followed the discussions, many of them (and the press) had the same impression: that it is inconsistent with Apple's positioning wrt privacy.
While armchair analysts get things wrong all the time, companies are not infallible either, including Apple.
You can make a mistake once or twice, but if a company like Apple keeps pushing for something (that could hypothetically kill its brand), it is unlikely that it will actually kill the brand.
How could a big bank like Lehman Brothers with their highly paid risk analysis people fail?
How could Rome fall?
The sun never set on the British Empire.
My partner and I were very close to joining the Apple ecosystem because we felt completely betrayed by G; now I'm researching alternative OSes to flash my phone. We both abandoned Google search and browsers. The only FAANG company I'm not actively avoiding is Netflix.
> That being said, I would be O.K. if the scan is performed on the device upon physical access by the authorities.
In the U.S., due to how the rules of evidence work, this would be of limited utility. You want the results of the scan to come from a trustworthy expert who can testify, who uses equipment and software that is tested and calibrated frequently. Because the suspect's own device won't have those calibration records, trying to use the results from such a scan would raise issues of foundation and authenticity.
If I were a prosecutor and I had a case like that come in, I would still send the device in to the lab for an expert forensic extraction.
(Edit to reply to a response below: Yes, theoretically, such an on-device scan could be used to establish probable cause, but that seems circular, since the authorities wouldn't be able to seize the phone in the first place unless they already had probable cause or consent from the owner, either of which would obviate the need to establish probable cause via the on-device scan.)
It would be used to get a warrant to trace the person's online activities and get his/her phone as evidence. This isn't going to be used to send out a SWAT team upon notice from Apple.
The argument made in several comments that this will result in SWATing isn't cited because it's an opinion on potential future events. A counter opinion, therefore, also would not need citation.
I guess the legislators and the tech companies can work something out on the practical issues instead of turning people's devices into always watching policemen.
> Automatically scanning your stuff and reporting the bad things to the police is like the concept of God always watching.
... like all cloud providers scan your stuff and report the bad things (and Apple has not been doing it, but now proposed doing it in a more privacy preserving manner).
There's a big difference though between cloud providers scanning files stored on their hardware that they own and Apple scanning files on your hardware that you own.
It is your own device actively checking on you for misbehaving. Completely different from cloud providers checking on renters for misbehaving in their infrastructure.
Your phone will listen to instructions from 3rd parties to check on you and report you to the police (with Apple as a dispatcher).
Well, no... the cloud provider (Apple in the case of iCloud) will check on you and potentially report you (just like all the other cloud providers do), but without scanning all your files in the cloud (unlike all the other cloud providers).
As I said, your phone doesn't even know if anything matched.
The point is, your phone follows instructions from 3rd parties on scanning your data and reporting back the results. It’s processing your files accordingly, with the intent to make sure you are not doing anything wrong according to those 3rd parties. Your own device is policing you according to the requirements of those 3rd parties.
I really wish people – especially those on HN – would take a broader look at what Apple is proposing and better understand the forces at play before being so critical of the tech. I understand the initial knee-jerk reaction on privacy grounds, but since there has been some time for people to learn all the facts, I remain amazed that they never come up in these threads.
First, you hear a lot of people, including Snowden (while contradicting himself), say this isn't really about CSAM. That point is absolutely correct. This is ALL about two things, each addressed here:
1. Legal liability, and the cost of processing as many subpoenas as they do.
Ultimately, Apple has the keys to the data they store on their servers. They could easily encrypt all the data using on-device keys, before uploading to ensure they can't actually see anything. But this would cause a huge backlash from law enforcement that would cause congress to pass legislation mandating backdoors. In fact, Apple (big tech) has been trying to hold off that legislation since at least 2019, when they met with the Senate Judiciary committee [1].
Quote from EFF article:
> Many of the committee members seemed to arrive at the hearing convinced that they could legislate secure backdoors. Among others, Senators Graham and Feinstein told representatives from Apple and Facebook that they had a responsibility to find a solution to enable government access to encrypted data. Senator Graham commented, “My advice to you is to get on with it, because this time next year, if we haven't found a way that you can live with, we will impose our will on you.”
Apple is doing exactly what Graham told them to do. They have come up with a system that manages to increase security for most users by ensuring that nobody - not even Apple - has the decryption keys for your data, while also satisfying law enforcement to the degree necessary to prevent really harmful anti-privacy legislation. They managed to do it in a really creative way.
It's not perfect of course. There are plenty of people with valid concerns, such as the potential for hash collisions and how a country like China might try to abuse the system and whether Apple would give into that pressure (as they did in the past). All of that is valid, and I'm glad to see Apple stop and examine all the complaints before pushing the release. But strictly on the topic of privacy, the new system will be a massive improvement.
2. User privacy. Yes, everyone thinks this is an invasion of privacy, but I just don't see how. The proposed on-device scanning solution provides MORE privacy than either the current iCloud system (in which Apple can be compelled to decrypt nearly all of your data) or the proposed [2] legislation – MORE privacy even for people found to meet the CSAM threshold!
It seems to me there must be a lot of misunderstanding surrounding the encryption mechanisms Apple has proposed. But having read the technical documents, my view (again, strictly from a privacy standpoint) is that it appears to be extremely sound.
Essentially, there are currently two parties that can decrypt your iCloud data with master keys – you and Apple.
In VERY greatly simplified terms the new system will set one master decryption key on your device. But Apple will now instead use shared key encryption, which requires ALL of the ~31 keys to be present to decrypt the photos. Apple will have one of those keys. The other 30 (the "threshold") keys will be generated by a hash (of a hash of a hash) of the match found in the CSAM database. If no match is found, then the shared key needed to decrypt that image is never generated. It doesn't exist.
One way to look at this is that it's the CSAM images that are the keys to unlocking the CSAM images. Without them, Apple cannot comply with a subpoena (for photos ... for now). Even people who meet the CSAM threshold, can only have the CSAM images decrypted. All other photos that have no match in the CSAM database cannot be decrypted without access to the suspect's phone.
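The actual construction differs from what I've described (and the numbers here are made up), but for a rough feel of how a "need K matches before anything can be decrypted" threshold works, here's a toy Shamir-style sketch in Python:

    import secrets

    P = 2**61 - 1        # prime modulus for the toy field
    THRESHOLD = 3        # Apple's reported threshold is around 30 matches
    SECRET = 123456789   # stands in for the key that unlocks the matched vouchers

    # Random polynomial of degree THRESHOLD - 1 whose constant term is the secret.
    coeffs = [SECRET] + [secrets.randbelow(P) for _ in range(THRESHOLD - 1)]

    def make_share(x):
        # Each "match" would contribute one share, i.e. one point on the polynomial.
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return (x, y)

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        total = 0
        for xj, yj in shares:
            num, den = 1, 1
            for xm, _ in shares:
                if xm != xj:
                    num = (num * -xm) % P
                    den = (den * (xj - xm)) % P
            total = (total + yj * num * pow(den, -1, P)) % P
        return total

    shares = [make_share(i) for i in range(1, 6)]
    print(reconstruct(shares[:THRESHOLD]) == SECRET)      # True: threshold reached
    print(reconstruct(shares[:THRESHOLD - 1]) == SECRET)  # False (overwhelmingly likely): too few shares

Below the threshold there is literally nothing to decrypt; above it, only the matched vouchers become readable. Whether you find that comforting is a separate question.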
On the flip side, Apple is bending to congress's demands by voluntarily sharing information with law enforcement. I can absolutely understand how this behavior could make even perfectly innocent people feel uncomfortable. But in the context of the understanding that you have more privacy for yourself, while exposing those who deal in CSAM (and are dumb enough to store it in their iCloud account), I have to push my logical understanding to overcome my natural but unwarranted discomfort. Anything that prevents the government from getting universal backdoor into everyone's phone is a win, in my opinion.
A great many people have attempted to explain why they are against this particular mechanism for detecting CSAM. I agree with you that Apple's implementation is technically impressive and probably the most private way of performing this action on device. However, I disagree that it's more private than the current cloud scanning. If the scanning happens client-side, then I have absolutely no control over what gets scanned and when. If the scanning is server-side, then I can simply not upload anything to the cloud and no scanning happens. I can't avoid client-side scanning like I can avoid server-side scanning.
I realize this is a simplification of the actual method Apple has implemented and as it currently stands it would only scan photos that are destined to be uploaded to the cloud. If it were guaranteed that would never change then I think a lot more people wouldn't have a problem with it. But it will be abused. Every[1] single[2] time[3] this sort of system is implemented "for the children" it gets abused. The slippery slope here is real and well-demonstrated in various countries around the world.
For my part I have come across an imperfect analogy that I feel accurately captures how I feel about Apple's solution. My phone is like my diary. There's nothing illegal in there. But there is stuff that is deeply personal, private, and even some that would be terribly embarrassing if the wrong person saw it. As long as I keep my diary to myself and don't let anyone see it I have nothing to worry about. If I were to send my diary off to someone else known to read diaries then it's my own fault as much as anything else if it gets read and intimate details of my life known.
If the phone and the server agree that enough CSAM matches have been detected, the matches get reviewed by Apple. If Apple agrees with the phone and the server, the matches get reviewed by the NCMEC. If the NCMEC agrees with Apple and the phone and the server, the NCMEC may get law enforcement involved.
Bad-faith actors want people to think that taking a picture of your baby in the tub will bring a SWAT team to your door the next day.
There is nothing "bad faith" about pointing out the mentality of total device surveillance behind this. Most people who are angered about this aren't worried about false positives of baby's first bath. They are worried about Apple scanning stuff on your phone as a proxy for police. What's next? Copyright infringement? Combing through texts for drug activity? Thought crimes against the CCP? It's not a slippery slope fallacy: all through history, governments have always wanted more power over their citizens, and to not even have your phone be safe from arbitrary scanning is just too damn far.
It's not Apple scanning your device. It's your device scanning itself, and you sharing the results of those scans to Apple with iCloud if you decide to use it. Don't want Apple to see your data? Just don't send it to them, it's that simple.
Look, I'll just say it directly because (slippery-slope fallacy aside) we all know this is how things will play out in a few years after this gets rolled out:
gov: "hey Apple, pedophiles are rampant rn, do you really need to wait before the devices upload photos to icloud before you use that on-device scanning? here's some cool proposed legislation to motivate the conversation. oh also here's some more images that are illegal."
This conversation will happen behind closed doors with much better wording and probably with real stakes for Apple. They've built a dream technology for control freaks in governments across the world, something that we in the technology sphere have traditionally fought against for decades. All it needs is an honest day's work to make it scan all the time instead of only on upload.
This is the fight to fight if you'd rather not deal with a future where, surprise, the scope has crept but it's now too normal to oppose. People will get vanned by less benign governments that interpret things far more mundane than pedophilia as undesirable.
> This conversation will happen behind closed doors
It did.
> with much better wording and probably with real stakes for Apple.
Pretty much. Quote from the infamous Lindsay Graham (2019): “My advice to [big tech] is to get on with it, because this time next year, if we haven't found a way [to get at illegal, encrypted material] that you can live with, we will impose our will on you.”
If Apple wanted to make the iPhone a sandwich, we could eat it. But until then you can't.
Apple could read your whole phone content and send it to governments without your consent if they wanted to, on-device scanning or not. Until they do, there is no breach of privacy.
If you believe Apple can implement one-party consent snitching on iPhone without telling anyone, then you shouldn't use closed source software in the first place, because there is zero reason to believe that they haven't already implemented this.
> Apple could read your whole phone content and send it to governments without your consent if they wanted to, on-device scanning or not. Until they do, there is no breach of privacy.
This misses the point. Apple has repeatedly assured its customers (I’m waiting for my Pinephone to get here so I no longer need to be one) that the devices they buy are theirs. This is how things always have been, and although you’re right that they could in theory slip some law enforcement update onto my phone, they’ve publicly refused to do so before and I trusted them as a result.
They’re not secretly slipping in a government backdoor though, they’re backpedaling on their promises and openly installing a backdoor on my phone for no reason. Why let this slide like it’s normal, or pretend like this isn’t a betrayal? They refused to unlock an iPhone owned by a mass shooter!
Plus, for many people, Apple and Android are basically the only options for modern computing. Their decisions in this market normalize stuff like this. I don’t want future devices to be forced to implement easily-abused crap like this because Apple convinced people this is a solvable problem with machine accuracy. We’re already seeing this for open questions like moderation and copyright enforcement, with horrific results that are now just normal.
> Apple has repeatedly assured its customers that the devices they buy are theirs
That is definitely not true. Apple has always been anti-ownership: making the OS closed source, preventing installation of other OSes, preventing sideloading applications, preventing self repair, preventing parts distribution, etc.
What they used to say is that what's on your device stays on your device, and that you decide what you share. And they stay true to that promise. If you don't want to share your pictures with Apple, you don't use iCloud Photos, end of story.
> openly installing a backdoor on my phone
A backdoor is defined by two characteristics: one, the user does not know of the existence of the backdoor; two, the backdoor allows the attacker to bypass device security in order to gain access. You know the scanning feature exists, and it does not allow Apple nor anyone else to gain access to, or information about, what's on your phone without your consent. It is not a backdoor in any sense of the word.
> It's your device scanning itself, and you sharing the results of those scans to Apple with iCloud if you decide to use it.
While technically correct, there is much power in the defaults that are set. iCloud photo sharing is on by default and provides some useful features, especially if you have multiple apple devices. Also apple doesn’t provide a good automatic way to do backups outside of iCloud.
And realistically speaking how many “normal” people will even be aware of how to turn this off and be willing to go to the effort? Will they have a choice? Sure, technically. Practically though the majority of people stick to the defaults.
I fully admit to being a bit pedantic here, but you are not logged into iCloud on a device by default. Once logged in, then yes, iCloud photos is enabled, but that seems like a fair assumption if you are using iCloud services.
> And realistically speaking how many “normal” people will even be aware of how to turn this off and be willing to go to the effort? Will they have a choice? Sure, technically. Practically though the majority of people stick to the defaults.
I fully agree with your point here, but also, how many "normal" people deal in CSAM? The vast majority of people just won't be affected by this in any noticeable way.
And learned-from-history actors think of all the previous "think of the kids" campaigns, and their subsequent scope creep: Oh, we sorta kinda extended the hash database to also include, say, Bitcoin. Because we can. No, there's nothing really wrong with it, and anyway, the innocent have nothing to fear, right? And also Winnie the Pooh and the tankman; the Chinese government asked very nicely. Oh, and the Afghan government also had some requests on what hashes they need reported. Yeah, that's the Taliban, but if you're innocent, you have nothing to fear, right? Brb, there's an email from Lukashenka asking for detections of anything white-red-white...
In what ways do you think those human steps can fail?
The method only returns positive on a hash-match with a known piece of CSAM from NCMEC.
The human process is looking at a picture somebody has already been convicted of child abuse for having, and comparing it with yours to see if they're the same.
Excuse me, but that is reporting to the authorities.
You’re simply describing some of the details of how it happens.
It’s nice that there are some manual review steps, but that hardly changes what’s happening. Especially since the human reviewers are almost surely going to pass the case through to the next step if there is any ambiguity.
I guess we're splitting hairs over "reporting" and "self-reporting." I think the idea is that there is a layer - yes, a fallible, human one - between detection and reporting.
So if a creep amasses a database of those even though individually they aren't "illegal", what do you do? You can't call it CP for one but not for the other. Especially if they're made publicly available on facebook. It's a lame slippery slope argument but it's gonna get hashed out in criminal defense cases where Apple is calling someone a pedophile.
If your phone were to detect illicit pictures, nothing would happen unless you sent those files to Apple, in which case they would have known that you had them anyway. There is nothing even remotely close to the truth in saying that your phone `self reports` to the authorities. Not only does Apple have no idea what photos you have on your phone nor what they are scanned against unless you send them your photos with iCloud, but it also isn't your phone reporting to the authorities but Apple, once they know for sure that you have illegal content on their servers.
Yes I get it, grammar nazi blah blah blah. I don't normally ever correct people's grammar (because who cares), but this one error is just so mindblowing to me, and the SHEER amount of times I see it makes me go absolutely nuts, because if you've spoken english for any length of time, you know about 1000 words with long-o soft-s that end in "oose", yet I see people saying "lose" as "loose" about.... 2 times a day, 1 if I'm lucky, and it makes no sense whatsoever.
It's just a typo that doesn't get corrected. People just aren't super duper proofreading, they're just watching for the red squiggles that indicate a typo.
You wouldn't be really mad if you spotted an "an" instead of an "and" in the wrong place, right? Because it's the same accident.
If you think about it in these terms, you wouldn't of had as much of a reaction ;)
No it's definitely not a typo lol, it's people who are reading it wrong in their head. The sheer amount of time that specific mistake is made very strongly leads me to believe it's most definitely not a typo.
Watch, now that I pointed it out you will see it everywhere.
This is only one combination of words for example, but you get the idea
While super rough/anecdotal... "loose my" has 38,000 results and "lose my" has 900,000 results. Literally 1 out of 25 times the word is used it's used wrong (if google is including roughly the same amount of items in its index)
The sheer amount of time that specific mistake is made
Don't you mean the shear number of times that mistake is made? I'm sure you have no insight in to the aggregate duration of time all these people took to incorrectly press 'o' on their keyboard.
You're making a point that I'm not ("someone is correcting someone else's spelling therefore I'll dig through every single word to find a mistake they make and call them a hypocrite").
I just said above I don't correct people's grammar because a) it doesn't matter and b) I make mistakes all the time.
This, on the other hand, is like seeing someone make the mistake (2 * 2 = 6). Even the most basic/out-of-the-math-loop person should not make this mistake. Or at the very least, it shouldn't be a common mistake made 1 out of ~25 times.
These are 2 very common words that sound completely different, and there's nothing tricky about them. That's why it makes no sense to me, and each time I see it it's maddening. It's one thing to mess up something like....... "if I was____" (improper subjunctive mood), I see that mistake all the time, but who cares because how is anyone going to remember that? But lose and loose are just so fundamental for knowing when to use a hard/soft "S" when preceded by "oo". Furthermore, how many words are there where a single "o" precedes a soft "s"...? I can't think of any other than "dose".... but there are tons that fit the bill otherwise: hose, lose, nose, pose, rose..............
> because if you've spoken english for any length of time, you know about 1000 words with long-o soft-s that end in "oose"
> it's people who are reading it wrong in their head
After speaking english (which is my second language and has nothing in common with the slavic language I usually speak) for over 15 years, I learned TODAY that those words are supposed to sound different.
Thank you and next time please consider that people like me exist.
They already collect as much information about you as possible. Whenever you turn on a Mac, you're essentially agreeing to a ton of data collection. It is naive to think that they will never share it with anyone.
> There are even more when it comes to actual search warrants.
Probably not much, since there is very little that would be within third-party custody that they would need a search warrant rather than one of various forms of subpoena (including administrative ones like NSLs) for, and the latter would already be counted in the “without a search warrant” count.
>That being said, I would be O.K. if the scan is performed on the device upon physical access by the authorities.
Why? You have physical access to the device. Just look at the data. The CSAM scan will only ID known imagery. What if this person has new content that is unknown to the database? This is a nonsensical justification.
I want my data to stay encrypted no matter what however I accept that there are people who can do bad things and access to their data could be valuable.
So it is a compromise, the data stays encrypted but the authorities can get an idea about the nature of the data.
Then it is up to the person to reveal their data as a part of a cooperation agreement. If there's a CSAM material match, the person can prove innocence by revealing the data(false positives) or the judges can increase the severity of the penalty for not cooperating.
Mileage may vary depending on your jurisdiction, however I think it is a good way to look at someone's stuff for offensive material without actually looking at unrelated stuff. So no match, no reason to try to get into the device.
What are you on about? The comment stated that the authorities have physical access to the device. That means that they can see whatever data you have because it's your device showing it to them.
Also, this idea is even more dystopian than the original concept. The original idea was only going to scan content the user chose to push to Apple servers. This suggestion is to scan the entire device just because johnny law has the device. That is soooo much worse of an idea.
Physical access doesn't mean access to a meaningful data if it is properly encrypted.
I didn't say anything about Apple servers. They can implement a protocol that works over a wired connection only, gives the device the instruction to match all the files on the device against the given list of fingerprints and returns the results. No match, no results. If there's a match, return just enough to tell what fingerprints matched and nothing more.
The difference is huge. Apple’s proposed system scans the photos of everyone who uses iCloud and reports them to the authorities (with Apple acting as a proxy). It’s a snitch/God system.
On my proposal, all photos on the device are scanned, scan results (hashes) are stored on device, but no attempts to pin you down are made. Once there’s a reason to suspect that you might be a criminal, only then are the scan results evaluated by the phone, upon the request of the authorities, to check on you as part of the investigation. This is a black-box system.
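To make that concrete, here is a minimal sketch of what such a wired-only check could look like, assuming plain cryptographic file hashes for simplicity (real CSAM matching uses perceptual hashes, and a real protocol would need authentication and auditing; every name below is made up for illustration):

    import hashlib
    from pathlib import Path

    def scan_device(photo_dir, fingerprint_list):
        """Hypothetical on-device handler for a wired request from the authorities:
        match local files against a supplied fingerprint list and return only the hits."""
        wanted = set(fingerprint_list)
        hits = []
        for path in Path(photo_dir).rglob("*"):
            if not path.is_file():
                continue
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in wanted:
                hits.append(digest)   # reveal only which fingerprints matched
        return hits                   # empty list => no match, nothing else is disclosed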
>Physical access doesn't mean access to a meaningful data if it is properly encrypted
And what is properly encrypted if you can take advantage of 0-day vulns to unlock the phone so you have access to the data in an unencrypted state? You're not playing this through to the end of what physical access to the device means.
These risks are valid for any device, it has nothing to do with Apple or the feature in question. The police can have a physical access to your devices with or without implementation of this feature.
I don't understand what you are arguing for; physical access is not a software thing, and it is guarded by exactly the same mechanisms that guard access to the apple (the fruit) in your fridge.
That's 100% not what the feature is about. People need to stop spreading this nonsense. Apple only gets the result of the scans when the files are sent to them via iCloud, and then they, Apple, not your device, would report to the authorities. Also, they already were looking for CP in your iCloud content and reporting to the authorities anyway, as was shown recently with the doctor who was caught with 2k images. So whether you were or weren't using iCloud, privacy-wise on-device scanning makes literally 0 difference.
It makes all the difference in the world on whether Apple scans their own servers for content they don't want versus your phone snitching on you.
I can choose to not upload things to Apple.
If my iPhone decides to scan my photos just because, I can do nothing.
I won't accept my device being a snitch under someone else's control. Companies change values and Apple already brazenly allows the CCP to snoop on Chinese iCloud data.
Except your phone is not a snitch. Snitching is sending information to a third party, which your phone doesn't do unless you explicitly accept it by using iCloud syncing.
You're basically upset that your operating system can read your files. Well good luck finding an operating system that cannot read your files.
Right now Apple says they only scan iCloud uploads. We can't verify that, but let's assume they aren't lying.
Why not scan your whole library? They can, they could make the argument it's to fight CSAM. I mean if you aren't uploading to iCloud because you're a pedophile, obviously we should scan your whole phone just in case. Think of the children!
You seem to be contradicting yourself. You have the option to turn off iCloud Photos, in which case your phone will not scan the photos because there is no point.
> Why not scan your whole library?
That may very well be the case in the future, but nobody can say for sure. But even if they do scan other files on your phone, it's unlikely they will do that for files that don't sync to iCloud. If this bothers you, you can always just sign out of iCloud completely. You can still retain access to the app store and music subscriptions separately.
No it is not, as zero information is sent to Apple unless you use their cloud service. Now if you consider that a cloud syncing service is spyware because it sends information to the provider, I don't know what to say.
For everyone who is saying that the technical community “won”... Nobody’s been fired. Rather, Apple’s senior execs publicly doubled down and basically labeled those of us who thought this scheme to be insane, as a bunch of uninformed idiots...
You can change a product decision in a day, but it takes a LONG time to change a culture that thinks these sort of insane product decisions make any sense whatsoever. Making a long-term bet on Apple has become precarious at best.
Yeah. I think some celebration is warranted, but the "disaster averted, my iPhone is back to being a trustworthy user agent" take seems to be a bit myopic when the core problem is that Apple possesses enough control over the devices it sells to be able to implement something like this by fiat. Sure, Apple backed down today, but until users are able to exercise the four essential freedoms they're living under the sword of Damocles.
Not by Apple, much as the media would love to have you think otherwise. The 'screeching' description comes from Marita Rodriguez, executive director of strategic partnerships at the National Center for Missing and Exploited Children, in a memo to Apple employees.
[Edit, since I can't reply anymore because I'm "posting too fast"] I didn't say you thought that personally; in a sub-thread about Apple senior execs applying negative labels to people, and this being one of the most insulting labels applied, I think it's important to note that this label came from somewhere else. (and maybe important to note "the technical community" were not labelled "screeching voices"; the memo said "the air will be filled with the screeching voices of the minority". It didn't say "everyone who disagrees is a screeching voice").
> [Edit, since I can't reply anymore because I'm "posting too fast"]
This means you've had your account sanctioned by Dang. You might want to send him an email and offer him your most profuse and sincere apology for whatever it is you did to anger him.
> but it takes a LONG time to change a culture that thinks these sort of insane product decisions make any sense whatsoever
Assume Good Faith. If there's one thing I learned about sensationalist stories or situations, it's that the position of "the other side" / reality is far more nuanced than the story tellers would like to make you believe. The result still may be poor, but the steps that brought people there are not nearly as insidious as it's normally presented.
Perhaps they need to be reminded of what happened when RSA/NSA backdoored Dual EC. Eventually a foreign power discovered it and used it for their own ends.
A system that is trivial to circumvent and which only catches people sharing pics that have already been added to a database is not going to move the needle on actual physical abuse of children.
Are you objecting to the general concept of CSAM scanning, something every online storage provider does?
I think it's important to note that Apple is late to this, and they're just playing catch-up in implementing this scanning. The only difference between Apple and others is that they do it completely (or partially) on-device rather than all on their servers. And they explicitly told people exactly what they're doing and how they're doing it.
It's kind of a funny situation where, by being more transparent than any other implementer of this same process, Apple has gotten itself into more strife.
CSAM scanning stops common proliferation of already identified materials and can help keep those from casually entering new markets. It does not protect children from being newly exploited by people using these services unless the services are also doing things they claim not to be doing.
In that case, Apple's claim of critical importance, if we interpret that as any more than rhetoric, doesn't mean what they imply it to mean.
Edit: I replied before you edited your comment. Leaving this one as is.
> It does not protect children from being newly exploited by people using these services unless the services are also doing things they claim not to be doing.
While I only know about this through anecdotes people share, I understand these systems generate leads so that agents (usually federal) can infiltrate groups and catch people uploading new material, thus preventing them from further victimizing minors.
That smacks of those harebrained terrorist entrapment plots where the federal and undercover agents are the only members of groups they've "infiltrated".
Scaling up detection also makes it more tempting for bad actors to seed more content to fabricate the appearance of an epidemic. We already have inaccurate gunshot detection systems, field drug tests, and expert bite mark analysis being used to convict people. Would a jury even be able to examine the evidence if it involved CSAM?
Yeah it's pretty bad where this is headed. It also weaponizes digital images that are very easy to find.
Not sure if you are honestly asking about the jury thing but judges do have the legal superpower to see illegal images and they will usually instruct the jury to vote based on what they saw.
The US federal prison system being one of the biggest bullies on earth, though, these cases rarely go to trial.
> Would a jury even be able to examine the evidence if it involved CSAM?
The prosecutor will hire expert witnesses who will explain that their extensive training and education has led them to believe that the evidence is certainly illegal material.
These systems rely on the premise that catching pdos and putting them out of circulation does* protect children, since many children are abused by relatives and such. I think it's a reasonable assumption!
You'd have to actually have a reasonable chance of catching pedophiles though. If the system is as trivial to bypass as turning off photo sync it isn't a very good justification.
>It's kind of a funny situation where, by being more transparent than any other implementer of this same process.
It isn't the same process, it's an on-device scanning process and Apple is the first to implement this. Had Apple said they were scanning iCloud directly, nobody would have batted an eye (I, for one, assumed they already did).
It’s just the continuation of the irony: Apple is also the only really popular consumer cloud provider with a proper e2e implementation, so they can’t scan photos in iCloud. It’s why they had to implement this process in the first place.
Some of Apple’s management must ponder that if only they didn’t go to such lengths in denying themselves access to user data, they’d have it so much easier—given no other takers of such a challenge among mid- to high-end device manufacturers, we all would probably have settled for e2e just never being a feature of commonly available cloud services. I wonder how successful the privacy-focused promotion had been for Apple; they seemed pretty OK making people want their devices for all the other reasons.
I stand corrected, they do have e2ee but not in Photos/iCloud at this point. Then I suppose the only reason for them to have gone for all this complexity is to invalidate an excuse law enforcement could use to access user data left and right. (LE: Show us all the photos so we know who’s bad! A: We have a reliable system that auto-reports, see paper, therefore this is unjustified and we’ll sue.)
I would not say it is "scanning" as the device doesn't know the final result (match or not). If I had to summarize the thing very broadly, to my understanding it will be like doing a "loose" SHA on every photo in the Photos app and sending these to Apple (if iCloud is enabled). Server side, Apple checks if the hashes match or not. Isn't this like "scanning in iCloud" but without Apple needing to have decrypted photos on their servers?
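To make the "loose SHA" idea concrete, here is a toy perceptual hash (an average hash; Apple's NeuralHash is a neural-network embedding, so this is only an analogy). Visually similar photos produce hashes differing in only a few bits, so the server-side check becomes a closeness test rather than an exact SHA comparison. All names below are illustrative and the code assumes the Pillow library:

    from PIL import Image

    def average_hash(path, size=8):
        """Toy perceptual ("loose") hash: visually similar photos give similar bits,
        unlike SHA-256, where a one-pixel change flips the whole digest."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return sum((1 << i) for i, p in enumerate(pixels) if p > avg)

    def hamming_distance(h1, h2):
        return bin(h1 ^ h2).count("1")

    # Hypothetical server-side check against a database of known hashes:
    def is_match(photo_hash, known_hashes, max_distance=5):
        return any(hamming_distance(photo_hash, k) <= max_distance for k in known_hashes)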
>I would not say it is "scanning" as the device doesn't know the final result (match or not)
I don't think this is a relevant distinction. If the device had known the result and just sent Y/N to Apple, what would change in your argument? Nothing. Your last sentence would be just as arguable.
>Isn't this like "scanning in iCloud" but without Apple needing to have a decrypted photos in their servers?
Note that Apple right now already has the decrypted photos since they have the decryption key. There's no evidence they are even considering E2EE right now, and since there are other legal requirements for scanning (e.g. terrorist material) I'm not sure this allows them to implement E2EE.
And I don't see how the client-side feature can remain as is. There are obvious cases that would be caught by the typical server-side scanning and won't be caught here, so once the door was opened, the government will pressure. For example:
* What happens when the NCMEC database updates? It could be that the phone rescans all your images. Or that apple keeps the hash and rescans that. Note that the second case is identical to some server-side scanning implementations.
* What happens when you upload something to iCloud, delete it from your phone and then the NCMEC database updates?
If Apple keeps the hash, it's just server-side again.
If Apple doesn't keep it but uses client-side scanning, the phone has to keep the hashes so you can't ever truly delete an image you've taken.
If there's no rescan, the bad guys get to keep CSAM on iCloud so long as they passed the original scanning with the older NCMEC database - surely the government wouldn't accept that.
(I considered scanning on download but I don't know if Apple/the government would like this compromise, since with that approach CSAM can remain on iCloud undetected so long as it's not downloaded, anyway it's not in the original papers).
Basically, either they scan more on the client side than they let on, or we end up in a world with both server-side and client-side scanning. The latter is arguably the worst of both worlds; the former has implications which need to be looked at.
Isn't the "interpreting" step the one that matters?
Apple takes a photo, runs it through some on-device transformations to create an encrypted safety voucher, then it gets "interpreted" once it's uploaded to the cloud and Apple attempts to decrypt it using their secret key.
Google uploads a raw photo, which itself is essentially a meaningless value in the context of identifying CSAM, and Google "interprets" it on the server by hashing it and comparing it against some database.
In both cases, the values that are uploaded by the respective companies' devices don't mean anything, in the context of CSAM identification, until they are interpreted on the server.
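If it helps, here is a heavily simplified sketch of that "interpret on the server" idea, using an ordinary symmetric cipher in place of Apple's private set intersection construction: the voucher is opaque unless the server already knows the matching hash. The key-derivation shortcut below is purely illustrative; in the described system the database is blinded so the device can't even tell whether its own hash is on the list. Assumes the `cryptography` package:

    import base64
    import hashlib
    from cryptography.fernet import Fernet

    def key_for_hash(image_hash):
        # Loud simplification: derive the voucher key directly from the image hash.
        return base64.urlsafe_b64encode(hashlib.sha256(image_hash).digest())

    def make_voucher(image_hash, metadata):
        # Device side: the uploaded value is opaque until the server can derive the key.
        return Fernet(key_for_hash(image_hash)).encrypt(metadata)

    def try_interpret(voucher, known_hashes):
        # Server side: only hashes already in the database let the server open the voucher.
        for h in known_hashes:
            try:
                return Fernet(key_for_hash(h)).decrypt(voucher)
            except Exception:
                continue
        return None  # non-matching vouchers stay meaningless to the server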
It doesn't matter what the purpose of this move is. It's not "catch-up" at all. The scanning will happen on the devices using the users' resources, before uploading. See also: https://news.ycombinator.com/item?id=28309202.
It’s catch-up in that Apple submits less than a thousand CSAM reports to NCMEC annually:
> According to NCMEC, I submitted 608 reports to NCMEC in 2019, and 523 reports in 2020. In those same years, Apple submitted 205 and 265 reports (respectively). It isn't that Apple doesn't receive more picture than my service, or that they don't have more CP than I receive. Rather, it's that they don't seem to notice and therefore, don't report.
> In 2020, FotoForensics received 931,466 pictures and submitted 523 reports to NCMEC; that's 0.056%. During the same year, Facebook submitted 20,307,216 reports to NCMEC.
I understand people's objection with Apple's on-device implementation of this scanning, but that didn't seem to be parent's objection. I was just trying to clarify.
If I were a US citizen, and I accepted that scanning cloud-stored photos for CSAM was necessary, I would greatly prefer Apple's approach because any such search coerced by the Government is protected by the Fifth Amendment. Whereas if the search occurs in the cloud, the Government can force Apple or Google to search for whatever they want, using whichever algorithm they want.
Yes, I am. I object to any and all forms of mass surveillance and spying.
The fact that cloud providers aren't building their systems in such a way that they are incapable of conducting such spying is cause for concern. The fact that they are conducting such spying is outrageous.
> Are you objecting to the general concept of CSAM scanning, something every online storage provider does?
Not really. I'm just saying it's laughable to think that CSAM scanning in its current form is critically important. Maybe it's critically important for feelgood and PR but not for actually preventing child abuse. It's almost like saying scanning for catalogued images of violence stops violence. If it only were that easy, damn, the world would be a good place.
Now as to what online providers should or shouldn't do, I can't say. But a part of me hopes that they continue the march towards egregious censorship and privacy violations so that people would eventually 𝐖𝐀𝐊𝐄 𝐓𝐇𝐄 𝐅𝐔𝐂𝐊 𝐔𝐏 and realize that there's no way to have freedom and privacy when you're handing your life to corporations and proprietary software. I really do wish for a world that is de-facto free software and fully controlled by the user.
As for iCloud, I have no skin in the game. I've never used an Apple product.
Unfortunately this clearly won't happen because of the staggering amount of people even on HN who see nothing wrong with having on-device scanners reporting back to HQ. I'm seriously baffled by the number of apologists and people who see nothing wrong with this or who flat out refuse to admit that this can lead to abuse.
Once the on-device scanning Pandora's box is open, it's trivial for governments to request more stuff to be added to the databases and Apple can't claim the defense of "we have no such ability currently" anymore.
If I was paranoid I'd wonder how much astroturfing is going on here.
> If I was paranoid I'd wonder how much astroturfing is going on here.
Ah, good old apophasis:
> a rhetorical device wherein the speaker or writer brings up a subject by either denying it, or denying that it should be brought up. - https://en.wikipedia.org/wiki/Apophasis
> Unfortunately this clearly won't happen because of the staggering amount of people even on HN who see nothing wrong with having on-device scanners reporting back to HQ.
This is a mischaracterization of many of the arguments.
short_sells: My goal here is a meta-goal: I am not trying to change your mind on this issue; rather I want you to acknowledge the strongest rational forms of disagreement.
This is a complex issue. It is non-obvious how to trade off the goals of protecting kids, protecting privacy, minimizing surveillance, catching predators, and dealing with the effects of false positives.
It is too simplistic (and self-defeating) to think that just because people disagree with a particular conclusion, they are not allies with many of your goals.
Yes, but we have seen in the past that privacy once lost is nigh impossible to regain, and it is also obvious that the scanning Apple is proposing is trivial to bypass.
So what is it actually trying to accomplish?
I really struggle to believe that they are trying to protect kids (out of the goodness of their hearts).
The only explanation I can think of is that this is some attempt at appeasing government agencies.
Below, I'm going to push back on what I see as overconfident and overgeneralized claims. Both of these can inhibit understanding and listening.
> but we have seen in the past that privacy once lost is nigh impossible to regain
Yes, this is a key argument in the mix.
Some follow up questions:
1. As I understand it, here on HN, we are an international audience of various ages. With that in mind, I don't know your contextual experience. Are you scoping this (a) to the internet era (roughly 2000 to present)? / (b) to particular countries?
2. The statement quoted above is stated as if it is a fact, but I hope you realize it is actually a prediction. What is the historical context for this prediction? How far out into the future are you predicting?
3. Can you pin down your prediction more precisely? What does "nigh" mean? (There is a lot of variation in what "approximately" means to different people. Often the 'exceptions' are quite informative.)
4. The argument, as written, is quite general, which makes it hard to discuss. Whose privacy and in what context? Chinese citizens searching the internet? Journalists doing investigative reporting? Americans shopping in surveilled supermarkets? (Think of this as an opportunity to explain)
5. Do you mean all of the above? If you do, yes, people say that online privacy has eroded in many senses. At the same time, the tools for encryption have become more powerful, understood, and used. My point: if you make a very general statement, it is only fair if you cover the full range here.
In summary, with the above questions, I want to both better understand you -and- push back too. Unfortunately, I don't find the discussion chain above (the ~3 ancestors) particularly persuasive. I say this even though I agree with some aspects of it.
So you know where I'm coming from: in almost all situations, I've found it is more effective to understand, discuss, explain, persuade rather than 'writing off' a group of people because you don't really understand them.
P.S. I've addressed your other points in a sibling comment.
This form of question, as written is unnecessarily limiting ...
(a) there doesn't have to be one thing that Apple was trying to accomplish
(b) there doesn't have to be one motivation
... so I'm going to respond to the spirit of the question with a set of explanations, all of which may be true (to some degree) at the same time.
- parents are fearful of their kid's online activities;
- parents are open to trying new ways to give their kids freedom with some guardrails;
- yes, many people at Apple do want to protect kids out of the goodness of their hearts. This fundamental instinct is widely shared, particularly among parents.
- putting the 'why' aside, many customers perceive value and will pay for it;
- Apple executives are mostly profit-seeking (with certain constraints such as: mental models, brand constraints, regulations);
- shareholders seek profits and generally have less loyalty to any particular company's 'values' -- meaning they will 'shop around' for the best performing companies;
- as a group, shareholders see mostly upside and little downside -- don't perceive significant direct harm from these changes (at least, not until this became a public relations issue);
- generally, corporations benefit from playing nice with the U.S. government;
- Apple, in particular, has quite publicly pushed back on law enforcement's calls for decryption;
- in particular, with heightened scrutiny of the large tech companies, olive branches are particularly useful;
- some at Apple may prefer to lead with a proactive solution rather than wait for imposed regulations;
- some at Apple see this as a proactive branding opportunity;
- some Apple engineers are at the top of their field regarding encryption, security, etc and may have deemed their offering the best practical option available;
- some at Apple certainly understand the risks but assess the balance of false positives and false negatives differently than you do;
My hope is to make it a bit easier to recognize the complexity here. Though it may be true that organizations act as one entity, it is not true that they have singular intent. Attempts to claim a singular intent or goal are subjective interpretations.
Note: the list above is presented sequentially, but I am not claiming any causal ordering. They would be better presented as a network/graph connected by topics and relationships.
No other company is scanning for CSAM on your phone, if you have proof of that please bring it forward. I would love to see evidence of that on my Microsoft, Linux, and Android devices. We'll be waiting. Everyone and their uncle knows everything in the cloud is scanned unless it's encrypted before being uploaded.
That "only" is doing a huge amount of work in your comment. Turning my device into something that snitches on me is a huge difference from scanning my uploads.
If you send any of those illegal photos to Facebook, Facebook will snitch on you. If you send them to iCloud (under this described system), Apple will snitch on you. It is no different to "scanning your uploads" because it /is/ "scanning your uploads". That it's the CPU in the phone doing most of the work and iCloud servers making the connection, vs Facebook's CPU and cloud doing all the work, makes zero practical difference to anything.
Arguing about where in the combined system the processing happens is shifting the deck-chairs on the Titanic, it's not making one part of the system more or less responsible than any other part.
Have you considered the number of client-side SDKs surreptitiously doing things on “your” mobile device?
One of the most common is “scanning” your location constantly, feeding your daily routines to marketing data aggregators, which is arguably more invasive of the average person’s privacy than CSAM flagging.
I’ve posted in the past a list of a short-list of offending SDKs frequently phoning home from across multiple apps from multiple developers. Since weather apps sending your location surprised people, I thought this problem would get more traction.
This Apple thing, where this is iCloud file upload client SDK doing a thing on upload, is an instance of this class of problem.
It’s not an Apple thing, it’s a client SDK thing, and the problem of trusting actions on “your” end of a protocol or cloud service is not a solved thing.
I fail to see any difference, when your data is as much "yours" whether cloud backup is turned on or off. Google and others scan data stored in Drive (and other cloud storage) not just for CP, but for copyright infringement, etc; even files not shared with anyone. Someone here just yesterday complained that Google in 2014 kept deleting personal files of ripped DVDs from their Drive. Given that many people use GDrive/OneDrive/iCloud as their main storage, where most of their PC files are stored in the cloud, I fail to see any logic in making a cloud vs personal device distinction.
When you hand it off to google it's now on their servers and as their user agreements point out your shit will be scanned and made available to law enforcement upon request and proper authority (whether that's a warrant or just Bill from computer forensics asking for it, I don't really know). When it's on your phone it is not being scanned. It's just like having to get a warrant to rifle through your house for evidence. The same should be required for personal devices. It being digital doesn't mean it should be any less safe than if it was on paper.
Would you argue there’s no difference between the police searching a safety deposit box (which is a bad example, since it’s actually quite private) versus constantly going through and indexing the contents of your home, trying to find illegal objects?
So the only way to opt out of an intrusive, sloppy and flawed algorithm now is based on you hoping Apple sticks to their word? The same company that gives the CCP unfettered access to Chinese iCloud data?
Also, people are annoyed about the fact that the scanning is done client-side, which is ironic because Apple are doing that in order to increase the security/privacy of the system. I can see why people are uncomfortable about client-side scanning, but there’s still some amusing irony in it.
If you compare it against the industry standard of in-cloud scanning, the privacy comes from the fact that an on-device model results in 0 Apple servers ever examining your photos (unless you have 30 CSAM flags on your iCloud account), whereas an on-server model results in Apple servers examining every single photo you upload.
You can argue it's better to have neither type of scanning, but if Apple considers it a critical business issue that their cloud is the preferred hosting site of CSAM[0], then they presumably have to pick one or the other.
You can also argue that on-device seems creepier or more invasive, even if it doesn't result in Apple examining your photos, which is a reasonable reaction. It certainly breaks the illusion that it's "your" device.
But it's a fact that the on-device model, as described, results in less prying eyes on your iCloud photos than the on-server model.
[0] I'm not claiming this is the case, just saying for example
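For what it's worth, Apple's technical summary describes the roughly-30-match threshold as being enforced with threshold secret sharing: each matching voucher carries one share of a per-account key, and the server can only reconstruct that key once it holds enough shares. A minimal Shamir-style sketch of the idea follows; the field size, share counts, and API are illustrative, not Apple's implementation:

    import random

    PRIME = 2**127 - 1  # Mersenne prime, our toy finite field

    def split_secret(secret, threshold, n_shares):
        # Random polynomial of degree threshold-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0; with fewer than `threshold` shares this yields garbage.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    # Each flagged photo's voucher would carry one share of the account's inner key;
    # below the threshold the server learns nothing about the key.
    account_key = random.randrange(PRIME)
    shares = split_secret(account_key, threshold=30, n_shares=100)
    assert reconstruct(shares[:30]) == account_key   # 30 matches: key recoverable
    assert reconstruct(shares[:29]) != account_key   # 29 matches: (almost surely) not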
> the privacy comes from the fact that an on-device model results in 0 Apple servers ever examining your photos (unless you have 30 CSAM flags on your iCloud account), whereas an on-server model results in Apple servers examining every single photo you upload.
Every single photo you upload is getting scanned -- it's just that Apple is doing the scanning on "your" device instead of their servers.
From the point of view of the privacy of your photos, I fail to see what the difference between the two is. I mean, if they did the exact same type of scanning on their servers instead of your device, the level of privacy would be identical.
In terms of general privacy risks, not to mention the concept that you own your devices, there is an enormous difference between the two, and on-device scanning is worse.
> From the point of view of the privacy of your photos, I fail to see what the difference between the two is.
Good point. The question is privacy "from whom". For me, privacy "from apple" means mostly from malicious individuals working for Apple (indeed, if you have an iPhone running proprietary Apple software, you could never truly have privacy from Apple the corporation).
There are documented cases of employees at various cloud services[0] using their positions of power to spy on users of said services. Performing on-server scanning implies that servers regularly decrypt users' encrypted data as a routine course (or worse, never encrypt it in the first place), which provides additional attack vectors for such rogue employees.
On the other hand, taking the on-device scanning as described, the on-device scanning process couldn't be possibly used as an attack vector by a rogue employee, since Apple employees do not physically control your device. Maybe an attack vector here involves changing the code and pushing a malicious build, which is a monumentally difficult task (and already an attack vector today).
For me, privacy means that I have control over who I disclose what to. But context matters. If I'm in my own house, I (should) have almost total control over disclosure. When I'm in someone else's house, I have very little control as I'm subjecting myself to their rules.
A smartphone is probably the most intimate, personal device most people will ever own, and it's the equivalent of their house. However, if you're using cloud services, then you're in someone else's house and are subject to their rules.
That's why, in my view, doing the scanning on-device is not only dangerous, but unethical. Doing the scanning on the servers is neither of those things.
I get the argument about rogue employees, but I don't find it persuasive. I'm told that Apple keeps your data encrypted on their servers, although they hold the keys. If that's so, then "rogue employees" are something that Apple can, and should, control.
None of this is an increase in security or privacy. What you described is merely a mitigation of a massive loss in security and privacy, when compared to other massive losses in security and privacy.
> What you described is merely a mitigation of a massive loss in security and privacy
If we're looking at the end result, a mitigation of a loss of privacy is an increase in privacy compared to the alternative, no?
I mean clearly what you're saying here is "scanning always bad!". I understand that, I really do. I'm saying that scanning was never not on the table for a large corporation hosting photos on their server. Apple held out on the in-cloud scanning because they wanted a "better" scanning, and GP's point is that it's ironic that the one cloud provider willing to try to make a "less bad" scanning option is the one most demonized in the media.
None of this is to argue that scanning is anything less than a loss in security and privacy. Yes, yes, E2EE running on free and open source software that I personally can recompile from source would be the best option.
> GP's point is that it's ironic that the one cloud provider willing to try to make a "less bad" scanning option is the one most demonized in the media.
I think that may be because it's far from clear that Apple's solution is "less bad".
> If we're looking at the end result, a mitigation of a loss of privacy is an increase in privacy compared to the alternative, no?
I guess you could say that in the same way that you can say that a gambler who just won $10 is "winning", even though their bank account is down $100,000. It only works if you completely lose all perspective.
You would think that, but facebook detects 20 million shares of this material per year. Presumably at least part of that group that shares the known material are also abusing children and creating new material, so this does look like an effective detection method for finding some of the people who abuse children.
I'm not convinced that's a good argument. Shouldn't every bag of coffee you buy include a camera that reports any illegal activity to the government? A lot of criminals drink coffee, after all!
We must mandate speed monitoring devices for every car sold. Each time the vehicle speed exceeds the speed limit we will deduct 100 social credit points from the owner's balance.
Hundreds of thousands of people die each year due to filthy criminals driving too fast. Introducing this measure will save lives.
If you are uncomfortable with this measure, you drive too fast.
Very good point. Actually, the next version of iOS should include a feature to make sure that you don't roll through a stop sign at 1MPH at 3 o'clock in the morning. STOP means stop, not kind of look both ways and carefully evaluate the situation! An Authorized Phone Receipticle built into your car will ensure that the phone's camera is facing forward so our Machine Learning Artificial Intelligence can detect stop signs, and the accelerometers will detect whether or not you stopped completely. If you didn't, your device will do some bullshit with bitcoins or something eventually leading to it sending a report to the Apple Safe Driving Headquarters ("ASDHQ") where totally trained technicians that definitely have health insurance and get paid more than $1/hour will check the data to see if it's a false positive. If it's not a false positive, and BTW, from exactly zero hours of real world testing we have determined that the false positive rate is less than 1 in 1 trillion, we'll report you to our government partners who will take care of your reeducation from there. Driving is a privilege, not a right!!!
Over ONE HUNDRED people die per year from people not coming to a complete stop at stop signs, and at Apple, we care!!!!!! We'll never let the screeching voices of the minority stop us from invading your privacy for no good reason. We like the screeching, it makes us feel important and relevant. As the inventors of a small computer with a battery in it, we consider ourselves better than God.
You joke but if everyone had a tag, the speed limit could adjust itself in real time, enabling you to go faster when it is safe to do so and go slower when it isn't.
In other words, we could improve the driving experience and save lives with a little bit of work if we had folks with a working brain in government and elsewhere.
"We found that more than 90% of this content was the same as or visually similar to previously reported content. And copies of just six videos were responsible for more than half of the child exploitative content we reported in that time period."
"we evaluated 150 accounts that we reported to NCMEC for uploading child exploitative content in July and August of 2020 and January 2021, and we estimate that more than 75% of these people did not exhibit malicious intent (i.e. did not intend to harm a child). Instead, they appeared to share for other reasons, such as outrage or in poor humor"
So a lot of this is memetic spreading, not pedos and child abusers sharing their stash of porn. And people don't react to child porn by spreading it like a meme on Facebook, so what are these pictures that get shared a lot? What's happening is people find a funny/hilarious/outrageous picture and share that. Funny moments might happen when kids play with pets, for example.
The other part is consenting teens sexting their own photos. And then there's some teens (e.g. 17-year-olds, which by the way is old enough to consent in some countries) getting accidentally shared along with adult porn by people who don't know the real age.
"Unintentional Offenders: This is a broad category of people who may not mean to cause harm to the child depicted in the CSAM share but are sharing out of humor, outrage, or ignorance.
Example: User shares a CSAM meme of a child’s genitals being bitten by an animal because they think it’s funny.
Minor Non-Exploitative Users: Children who are engaging in developmentally normative behaviour, that while technically illegal or against policy, is not inherently exploitative, but does contain risk.
Example: Two 16 year olds sending sexual imagery to each other. They know each other from school and are currently in a relationship.
Situational “Risky” Offenders: Individuals who habitually consume and share adult sexual content, and who come into contact with and share CSAM as part of this behaviour, potentially without awareness of the age of subjects in the imagery they have received or shared.
Example: A user received CSAM that depicts a 17 year old, they are unaware that the content is CSAM. They reshare it in a group where people are sharing adult sexual content."
So there's reason to think that the vast majority of this stuff isn't actually child porn in the sense that people think about it. It might be inappropriate to post, it might be technically illegal, but it's not what you think. And if it's not child porn, you can't make the case that it's creating a market for child abuse. By reporting it, you're not catching child rapists.
I don't have a reference handy, but I recall reading about the actual child porn that inevitably does get shared on every platform: much of it is posted by bots over VPNs or Tor. So its volume isn't representative of the amount of child abusers on the network, and reporting these accounts is not likely to lead to anything.
Also: In May 2019 the UK’s Independent Enquiry into Child Sexual Abuse heard that reports received by the National Crime Authority from the United States hotline NCMEC included large numbers of non-actionable images including cartoons, along with personally identifiable information of those responsible for uploading them. According to Swiss police, up to 90% of the reports received from NCMEC relate to innocent images. Source: https://www.article19.org/resources/inhope-members-reporting...
Out of those remaining 10%, how much leads to convictions? Very little. Sorry can't dig up a source right now. Point is: millions of reports lead to mostly nothing. Meanwhile, children continue to be abused for real, and the vast majority of those who want to keep producing, sharing, and storing such imagery surely have heard the news and will find a different way to do it.
Platform operators are incentivized to report everything, if they're playing the reporting game. Tick box "nudity or sexual conduct", tick box "contains minors"? Report it.
It doesn't help the discussion at all that everything involving nudity and minors (even cartoons) gets lumped together as "CSAM" with actual child porn produced by adults who physically abuse kids.
> Very little. Sorry can't dig up a source right now.
I don't think the NCMEC shares numbers about how many of their reports result in actions by law enforcement agencies. They probably also don't really know; it's kind of like a dragnet.
Also, under the current "I'll know it when I see it" CSAM doctrine in US courts, cartoons can actually be illegal, and cartoons depicting the abuse of children are usually banned on most big media sharing platforms in the US, and most companies in the US won't host you or let you serve ads if you have that material. So yeah, it's not only muslim totalitarians that are ok with banning cartoons and punishing people for drawing them or sharing them; Uncle Sam may also send you to jail and deprive you of your rights if you draw the wrong thing.
>I don't think the NCMEC shares numbers about how many of their reports result in actions by law enforcement agencies.
I'm not surprised. Quantifiable accountability can be extremely problematic if you aren't actually making meaningful impact on the problem that you are claiming to help solve.
I am so confounded as to why they are burning through so much goodwill on something they don't even seem to really benefit from. Has anyone explained this?
Pressure from governments, of course. Apple would rather control how this works than cede control to governments/legislation. Problem is that this still doesn't stop governments from stepping in anyway. Apple has simply laid their own framework instead.
It might only help a little bit, but it also won't hurt anyone so there's not really a reason not to.
Stupid child abusers who put known-pornographic images in their iclouds are still child abusers. The fact that fruit is low hanging isn't a reason not to pick it.
Not only that, but identifying those storing existing, known CSAM means that, unless they have previously unknown CSAM also, no additional harm has been done, as it's just file sharing.
It's only production of new/novel CSAM that harms children. Sharing of existing, known CSAM (what this system detects) doesn't harm anyone.
Regardless of the larger conversation, why do you think that adding friction wouldn't stop people from engaging in an activity? I mean, generally speaking A/B testing is used to find the path to most engagement and doesn't get this kind of pushback, but as soon as you want to make something harder you get people saying "but that won't do anything". It demonstrably does! 'Trivial to circumvent' in a world where "have you checked the power button and power cord" has been a common tech-support trope for over two decades, and where people regularly have to be talked through plugging something in!
I'm genuinely curious about your thoughts but to be clear my focus is on this very narrow, nigh nitpick tangent.
> Regardless of the larger conversation, why do you think that adding friction wouldn't stop people from engaging in an activity?
See war on drugs or piracy, or alcohol prohibition for instance.
Now there's a million ways to share pictures online in a manner that bypasses the few big platforms' interference, and you really don't need to be a genius to use (say) password-protected archives. That's how a lot of casual online piracy happens.
This thing does very little to prevent spreading CSAM pictures, and it does nothing to prevent actual child abuse.
Well, those things were actually pretty effective at stopping people. People clearly circumvented those restrictions, but from my perspective the number of people that smoke weed probably went up after it became easy to access in my state, and the quality and variety of products increased enormously. They weren't effective at achieving any broader aim, like stopping or slowing drug production, but frankly that wasn't even the point. Regardless, I think we can both agree that these broad bans decreased the number of people participating in that behavior. I think the shift in our society's viewpoints on them came about as an awareness that they were ineffective at achieving any moral or justifiable aim. I think the correct criticism of them isn't that they failed to prevent or impede a given behavior, but that they weren't worth the negative externalities (if you'll forgive the understatement).
Which brings me to your example of a password protected archive. I'm ignorant on the specifics so I'm just going to sketch a broad argument and trust that you'll correct me if my premise is incorrect or my argument otherwise flawed. Essentially, if something is opt-in instead of opt-out, a non-trivial portion of the population won't do it. Especially if it is even slightly technical; there are just a lot of people who will stop thinking as soon as they encounter a word they don't already know. So if the preventative measure is something that is not automatic and built in to whatever tool they are using to share images, then that preventative measure will not protect most of them. So to bring it full circle, I think it would do much to prevent the spreading of CSAM, because other bans have been effective and I don't think most people have the technical literacy to even be aware of how to protect themselves from surveillance. As you say, you don't have to be a genius, but I'd suggest you'd need to be above average, which rules out over half the population.
Also thanks for responding, I hope I'm not coming across as unpleasantly argumentative, I mean all this in the most truth-seeking sense of a discussion (I was going to say 'sporting sense of a debate' when I realized I had never been part of a formal debate team and that might not mean what I thought it meant, heh).
Sorry I haven't had the time to write a thoughtful reply.
Are you talking about a state where cannabis was de-criminalized?
I agree that ease of access does have an effect on people, but the effect of iCloud scanning is so marginal in the grand scheme of things that it's almost like fighting drugs by installing surveillance cameras in malls. They just go trade and smoke elsewhere. The friction is virtually zero, but the privacy concern of scanning on half a billion Apple devices is much worse.
It's worth keeping in mind that CSAM is already highly illegal and banned, whether Apple scans iCloud photos makes no difference on that front. So it's nothing like the difference between weed being de-criminalized or not.
Also, the fact is you already have to jump through hoops to obtain CSAM. It's very rare to stumble upon it being casually shared online (I think the last time I witnessed it must've been around 15 years ago on 4chan, and somewhere between 5 and 10 years ago in a spam attack on freenode). Trying to search for it on the clearnet is mostly going to yield nothing.
In general, people also tend to know when they're doing something highly illegal. And yet they still do it, just taking steps to try to avoid being caught. It's no different with CSAM. They will jump through hoops, and "don't store child porn on iCloud" really is the tiniest of hoops to jump through.
Password-protected archives were meant to be just one example of how to bypass scanning on cloud platforms, and one that happens to be widely used among casual pirates. Google Drive might be one of the biggest pirate services around these days. The bigger point I was trying to make is that there are countless ways to share files without exposing their contents to scanning, so this is no obstacle for people who are willing to jump through hoops to get CSAM.
Finally, one point I've had to try to make over and over again is that detecting the storage or distribution of old (catalogued) CSAM photos is only very tangentially related to actual abuse of children. Unfortunately, that keeps happening even if you destroy the internet and make sure no photo is ever stored in the cloud again.
I've said it before: child abuse and violence existed before cameras and internet, and will continue to exist. Detecting images of abuse or violence is not going to stop abuse or violence.
And if someone makes a system that is efficient at detecting all catalogued (=old) images of CSAM, then that might just create a larger market for "fresh" (uncatalogued) child abuse. Credit to nullc for realizing this.
And on the protecting-the-children front, there are much bigger problems than stashes of old CSAM, like grooming, or chatrooms where child prostitutes are forced to stream for an audience.
I'm betting a sizeable group has images from the database. They don't know which images are known, I think. But the fact that it's nearly trivial to circumvent does make me wonder.
I used to work on a system with a heavy amount of UGC (gaming network). We had a big team of people doing moderation of reported content, and the amount of stuff they had to deal with was monstrous. Lots of death threats, racism, evil images, SWAT-ing. They'd invite people to do ride-alongs to see the types of things that were reported, and it was brutal. A very tough job. A lot of those folks were ex-military, we needed really tough people.
They were kind of a weird org, the police of the network, and they kind of kept to themselves but carried a ton of weight. When they were looped into a meeting for a new feature -- for example, "Should we let users send videos to each other?" -- they'd list out the evil ways people would use them, and then list out the various CSAM scans we'd be obligated to do if we hosted the content on our servers. They didn't have the means to block features, but they did put a healthy fear-of-God into us, and made it so we didn't have a lot of big public debacles of kids seeing porn on the home screen of the device, or things like that.
I can imagine that Apple has a similar group. I can imagine they went to some executive, told them that this would be their best way to comply with the law, of course you don't need to access the images themselves, that would be a privacy violation and we don't do that! We just need the hashes, may we remind you this is CSAM we're talking about? Then it went up the chain and people were excited to "protect the kids" and protect the brand, hashes != user data, everyone wins!
I'm guessing this move was mostly misaligned priorities, and possibly some genuine incompetence or perhaps people not speaking up when they saw the potential here.
> I used to work on a system with a heavy amount of UGC (gaming network)
And here we have it: a situation where the content a user generates on what's ostensibly "their machine" is being treated more and more, by companies and the state, as equivalent to "UGC", user generated content, i.e. stuff explicitly being shared. Obviously, the insidious thing is that no one will be able to create anything without even the creation process falling under someone's "terms of use".
So this group of good, hardened souls that, from the description, could exist at Apple went to speak to their management about all the good things they need to do. Do they also speak up to their management about all the activities Apple is complicit in so as to keep its presence in the Chinese market?
There's no reason to obfuscate your message by using unnecessary abbreviations. What's wrong with avoiding ambiguous abbreviations and typing out "user generated content"? Letters on HN are free. Why be lazy when you can be clear and unambiguous?
Apple's brand is tarnished in my eyes. I already got rid of my iPhone and won't be returning. The fact that they even announced this is proof that they only ever cared about providing a facade of privacy.
I can imagine that the following is going to happen.
1. Apple announces the scanning feature (done)
2. People dislike it; backlash (happened)
3. Apple is worried about the brand, publishes statement that they are going to delay (happened)
4. People say "great", "wonderful" (ongoing)
5. Apple waits 5 months, releases this crap feature anyway
6. People have moved on, the media doesn't want to repeat the same story, not many people care anymore
7. DONE.
This is the same thing that Facebook did with its T&Cs. People complained, Facebook took them back, then re-introduced them 3 months later. Nobody cares anymore.
Most people don't care even at step 2. It's only technical people with concerns about privacy and corporate overreach behind this backlash. Most of that group will continue using iPhones no matter what out of convenience and unwillingness to stand up for their principles.
I stood my ground and actually took some action when something made me unhappy. This is the only way things change.
Just curious, what did you switch to? All the truly open source phones still have a lot of work that needs doing, which means the only real competitors are android devices with a custom rom, and that doesn't really fix privacy either.
I switched to a Pixel 5 running GrapheneOS. It does a good job of reducing how much the phone talks to Google, but it would be an outright lie to say that it's completely detached from Google. I'm comfortable with the measures GrapheneOS has taken, and the way it sandboxes Play Services seems quite clever.
The battery life is really good, not sure if that's just the hardware or whether the debloated OS means less power is consumed.
Not OP, but I started moving away from Apple a few months ago, well before this Apple CSAM debacle. This is a pretty big move for me because I am a developer who makes apps for both iOS and MacOS, so I pretty much need Apple software for work.
No longer buying iPhones or Macs. I was planning on upgrading to the Mac Mini with M1 chip later this fall but now I plan on building a hackintosh instead. I also no longer recommend Apple devices to friends/family.
I got myself a cheap Android phone which I have de-googled myself. I got this Android phone ($190 USD for a very good phone: 8 GB RAM, 12 GB storage):
KDE Connect[0] has a MacOS implementation that will restore some of the 'magic' to your ecosystem. If you find yourself missing AirDrop, Continuity Clipboard, or synced media controls, give it a try and see if it feels right. Even the earlier builds of this were a lifesaver when I was still hackintoshing, and it made it a snap when I was ready to switch to Linux. Best of luck to you in the future!
I am aware. What I meant was that I don't want to support Apple with my dollars directly. Since I need MacOS for work (app development), I will use a hackintosh. That way I can run Ubuntu on the side if needed. Might look into how MacOS operates in virtualbox.
If you're hosting on a Linux machine then you can use QEMU for a surprisingly sturdy Mac virtualization setup. I've heard legends on IRC of people with meticulously picked hardware who get perfect GPU acceleration working, but YMMV.
Yes but there are other tradeoffs that are introduced. Carrier-specific ROMs, difficulty upgrading to new android versions, generally more malware found in the Play store [1] and non-privacy-centric defaults [2] aren't really great to see either.
I'm not looking to start another android vs iOS debate here. There's plenty of that online that I and others can reference. I'm more interested in seeing how people are finding alternatives or what privacy tradeoffs they are willing to accept to avoid Apple's recent photo scanning move.
Does anyone have a success story of using a non-Android / non-iOS smartphone as their main phone?
I've been on a flip phone and never owned a smartphone but a recent role I'm taking requires having a smartphone for email / Slack access.
I know it's a matter of picking between two lesser evils, but has anyone avoided both and gone with an open Linux-based phone that has a really nice user experience and decent battery life? All of the research I've done leads to Linux / open phones just not being there yet, even phones that cost $900 like the Purism Librem 5. The Librem 5 looks like it came a long way in the last year and I am really rooting for them to succeed, but is anyone using a current one as of today, or does anyone know of a viable alternative?
Get a pixel 4a and put CalyxOS on it and you're good. Only use it for work and keep your flip phone for personal stuff for maximum security I guess. Going from a flip phone to ANY smart phone is going to be a massive upgrade in terms of capability, even the librem phone, so I don't really know what you're going on about there.
I'm still waiting for my Librem 5, ordered in early 2019. Although to be fair, I chose to be in a later release group, in order to benefit from design iterations, bugs being worked-out etc.
I say don't change your setup for them; get a dedicated device… consider a tablet. Having two devices is a pain, but it'll improve your work-life balance when you have to actively decide to carry it. You'll also never accidentally send something personal to a coworker.
I use an older iPhone in airplane mode. I get over a week of battery life. I forward calls to my personal if I'm away from wifi and on-call, and they do not get my personal phone number.
I was considering "upgrading" my existing phone instead of carrying 2 devices. They said I can use a personal phone and I wouldn't have to install anything company specific or have random audits done. In return they would pay half the monthly phone cost. Since I would be on-call a tablet might be too bulky to carry around, the idea with the smartphone from their POV would involve being able to respond by email or Slack in case a downtime event happens. Probably wouldn't technically need a phone number tho.
But yes, the above has flaws in that there's always a 0.01% chance I send something to the wrong person if my contacts co-exist in one app. I'm dumb when it comes to smartphone capabilities; maybe there's a way to 100% silo off two sets of contacts or profiles? It's something I'll need to research.
I've considered trying, since I'm buying a second line as a "distraction-free tool" for the daytime hours.
I think the easiest thing to do (aka the #1 method of de-Googling) is to run a version of Android built from AOSP with the Play Store removed and an alternate store like F-Droid enabled.
If you search "AOSP phone" or "de-google android" you could get places.
The other thing I have thought about is getting a Pine phone.
I had a Microsoft Lumia and used it into the ground until the screen died. It was a good phone, lacked some critical app support, but otherwise was a very solid smartphone alternative to iOS and Android. And you could even replace the battery! I can't say that if MS were still in the game things would be markedly better, but at least there would be a decent, if distant, 3rd-place smartphone platform.
We all have, but in the past. It isn't feasible now.
I mean, you can buy a "safe" smartphone, but first, you can't prove beyond reasonable doubt that it is actually safe and private, and second, you attract more attention because the same phones are bought by criminals.
I've been heavily considering getting a PinePhone, when a new iteration comes along. I only need my phone for calling and texting, anything on top of that is just icing on the cake. If I can run all my favorite GTK apps on it, then I'll consider it a slam dunk.
I was wondering how Apple would handle their imminent iPhone 13 announcement event in light of the public outcry over the photo scanning. Now we know the answer: they will pretend to listen until everyone buys a new shiny, and then quietly slip in the changes without making much noise after everyone forgets about them.
It may be easy to hide that as part of an update after mainstream focus settles down. It could be part of a "Various privacy improvements" changelog line and may not be easily noticeable. And if some security researcher discovers it many years later, they will say "we told everyone in 2021 we cared about children and that we would implement this".
Honestly the trust damage is already done. They announced it, the genie is out of the bottle. I can never trust Apple as much as I did before. They are no longer the benevolent dictator of the walled garden. They have proven their judgement is fallible. They have proven that their customers interests do not matter to them as much as their own agenda. I'm not sure how they can ever fully recover from this, or even promote themselves as a privacy focused brand. That will always ring hollow.
This reminds me a bit of the DNS-over-HTTPS hand wringing as well. DoH points out that applications can run their own DNS and you can't easily control that behavior at a network level like with conventional DNS. That is pretty troubling from a technically literate user perspective but it's not actually new. Applications could always hard-code DNS. DNS-over-HTTPS just made us think about it.
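To make that concrete, here's a minimal sketch (Python, using the requests library and Cloudflare's public DoH JSON endpoint purely as an example) of an application resolving names itself over HTTPS, regardless of whatever resolver the local network configured:

    # An application doing its own DNS over HTTPS, bypassing the network's resolver.
    # Cloudflare's JSON endpoint is used as an example; other DoH providers differ slightly.
    import requests

    def resolve_over_https(name, record_type="A"):
        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": name, "type": record_type},
            headers={"accept": "application/dns-json"},
            timeout=5,
        )
        resp.raise_for_status()
        # "Answer" is absent when the name doesn't resolve.
        return [answer["data"] for answer in resp.json().get("Answer", [])]

    print(resolve_over_https("example.com"))

To the network this is just another HTTPS request, which is exactly why it can't be filtered or redirected the way conventional port-53 DNS can.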
Similarly Apple has complete control of your device. They always have, it's actually part of the value proposition. There has been a lot of debate about what Apple should or shouldn't do here but the fact is they could have pushed this out silently and it could be scanning your entire device right now, we just don't know. We have to trust Apple at their word and after their announcement that they would push this kind of capability I'm not sure how we can ever trust them again.
As many have been saying for decades, the only way to have privacy when computing is by using publicly auditable open source software. No amount of pinky promises made by the marketing department can change that.
It doesn't matter if the end user doesn't audit everything themselves, that's impossible to do in a reasonable manner, but the constant auditing and testing by many independent entities gives, in my opinion, a better assurance of privacy than some marketing material saying they deeply care about me.
Somehow a large subset of the general public doesn't seem to understand this. And act all surprised when Apple, Google, Microsoft or whatever screws them over.
You don't deserve this personally, and I'm genuinely interested if you have more insight, but I feel the need to vent:
Really? What the fuck did Apple do, that you gave them total control to dictate _your_ walled garden in the first place?
I just don't get it. My best guess is sunk-cost-colored glasses. From the moment they launched iTunes (which for me was about 15 years ago) I understood their value was a walled garden as a platform. Users as a commodity. Valuation as overlords.
What I find most interesting is that they have apparently delayed both the iCloud photo upload scanning and the Messages parental control scanning which uses a completely different mechanism.
The latter when it flags something in a message to the child warns the child and asks if they want to view it. If the child says "yes" and is 13+ it shows it to them and that is the end of it. If the child is 12 or under, they are warned again and told that if they view it their parents will be notified. If they say "yes" again they are shown the image and their parents are notified.
Note that no one is notified unless the recipient is an under 13 child and the child explicitly decides to allow their parents to be notified.
I've seen much fewer objections to this. I don't see why they seem to be tying this and the iCloud scanning together in one release.
It helps them frame the feature as “for the children” rather than as a privacy violation; general confusion about the coupling of these features is desirable for their marketing.
don't worry, they're offsetting environmental harm by excluding chargers and other accessories from your purchase. You can pay more to get these items, because that's Apple's commitment to the 'environment'.
That affects iPhones not Macs, but there is actually an environmental benefit in the form of shipment density due to the omission. It makes sense to not ship chargers to most people who already have a way of charging their new phone.
A year later and Ampere GPUs are still near impossible to get. So much for my home ML machine... (literally only waiting on a GPU and been that way for a year)
Docker desktop for Linux is coming. They’ve already said so.
Your boss is going to end up paying for Docker. Giving your security department the ability to restrict what images you can pull is going to become a checkbox in your infosec audit.
On Linux you don't need Docker at all. You can use containerd directly (or via alternative tooling such as CRI-O or Podman). Kubernetes has already deprecated the Docker APIs and uses OCI APIs to spawn containers.
> Docker desktop for Linux is coming. They’ve already said so.
I can't officially say how Docker will implement DD on Linux but I have a hunch it will be optional.
If it's not optional then it means you will need to install DD on your servers to run Docker which I can't realistically see happening. I'm guessing DD on Linux will just provide a few quality of life enhancements that exist on Windows and Mac, such as the host.docker.internal address and a UI for certain things for the folks who prefer that.
Hell yes!!!! Consider System76, one of the few Linux-focused vendors. Their tech support is great. Not sure the scale is a match for them, 1,000 is a lot of laptops.
But Linux is a side project for Dell and Lenovo. They put up with it but don't really support it. Support a vendor who is all in on Linux.
Maybe consider the Framework laptop, which has made the rounds here a couple times recently. It makes good hardware choices for a productivity laptop (3:2 screen, good keyboard), is easily reconfigurable with USB4 modules for a custom array of ports, is designed to be easily upgraded and repaired, and is sold for a fair price. There's an argument to be made that a device designed for upgradability and repairability can reduce TCO. Mine is shipping this month.
Build quality is a big deal. I feel like the midline for PC makers is pretty low -- Dell has mostly been kinda trashy, though the XPS machines have a good feel to them (it's a shame they've been kind of lemony for us).
Absent the keyboard kerfuffle, Apple's build quality has traditionally been very high -- IME, on par with the IBM-era ThinkPads in the sense that the hardware lasts longer than technical viability. How is S76 here?
I've not used a System76 laptop, but I had a coworker who used them for a long while. He never had particularly kind things to say about the polish. I'm overall not too impressed by what I see on their website; I really want a higher-res display at least.
The coworker did eventually move to a Mac, but has recently expressed dissatisfaction and may move back to Linux. Not sure if that'll be with a System76 machine, though.
Lemur Pro:
Fit and finish is an A- for PCs (B compared to a Mac). Solid build, no super weird stuff on the Lemur Pro. The silkscreen for one of the USB ports says 3.1 but it's 3.2. The webcam has a little lip. The trackpad is pretty clicky (I've been used to Apple's synthetic click).
(while on Linux one can use the docker command-line tool without any Docker Desktop thing, on Mac, Docker Desktop wraps together running a Linux VM to run the Docker daemon & containers inside, making it the most straightforward way of running Docker on Mac)
Just going to assume this is due to how the hypervisor on macOS won't allow dynamic memory allocation to the runtime.
On a Mac, you need to allocate a fixed amount of RAM to Docker, which isn't the case on other platforms where it can dynamically allocate and release as much as your containers demand.
It's very painful in workloads that need Docker. Especially on MacBooks where you can't upgrade the RAM to compensate.
Seems like the unspoken story here isn't really about Apple, it's that the govts of the world want access to your phone data and its backups. They're already getting some of it from exploits, telcos and the cloud. They'll be back and will keep coming and it's hard to imagine any company will have the ability to stand up to them without seriously impacting their business.
And, I might add, they've created the infrastructure that allows governments to scan for other 'questionable content'. How long until they begin to scan for hate-speech? What if you download files from Wikileaks? Or a journalist gets their hands on classified materials? The government could know about this before they have a chance to release it.
Well, I personally feel that the trust was broken.
I've migrated to a self-built Android (that was quick and mostly painless, I would recommend CalyxOS and/or Graphene to anyone) and have a long-term plan to completely pull myself out of Apple ecosystem.
Also it was a good reminder to degoogle myself, so I did.
Apple doing this to users' phones is like the police entering every house in the city (or in this case, the COUNTRY), just in case someone has something illegal in their basement. Nothing to worry about if you have nothing to hide, right?
The presumption of guilt is the problem. Freedom means that I shouldn't even be suspected, and certainly not searched or monitored, unless there is a clear reason.
If there is a reason to suspect me, all of these tactics are fair game. But not before that.
This analogy doesn't accurately represent the technology, at least as I understand it.
In Apple's implementation, the device never knows if a particular picture is a CSAM match. That determination is made in iCloud when the server attempts to decrypt the safety voucher. Until that point, it's just an encrypted payload that the device can't interpret one way or the other.
In your analogy, where "your home" is the equivalent of "your device", the police never enter the home to determine whether you have anything illegal. Instead, there's some process that boxes up all your stuff into nondescript, anonymous boxes that can only be opened if someone has the key.
To determine illegality, you'd have to voluntarily send them off to the police (police = iCloud), where they only have a handful of keys - they have a "gun" key, a "knife" key, and a few other keys for boxes containing illegal items. But the boxes are nondescript, so the police don't know whether you have anything illegal until they insert the key and turn it. If the "gun" key successfully opens the box, the box contains a gun, and you are reported. If all the police's keys fail on a particular box, then whatever is inside must not be illegal and the police never learn its contents.
Needless to say, this analogy is tortured because it's hard to apply Apple's tech to a physical process, but the point is that whether something is "illegal" isn't able to be determined until you voluntarily ship it off to an entity that has the keys to unlock it.
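For what it's worth, here's a toy sketch of that "locked boxes" idea in Python. It is emphatically not Apple's actual protocol (which uses NeuralHash, blinded hashes / private set intersection, and threshold secret sharing); it only illustrates the notion that the server can open a payload solely when it already holds the matching key. All names and the fingerprint function are illustrative:

    # Toy model only: each photo ships as a "voucher" encrypted under a key derived
    # from that photo's fingerprint. The server holds keys only for fingerprints on
    # its blocklist, so non-matching vouchers stay opaque to it.
    import base64, hashlib
    from cryptography.fernet import Fernet, InvalidToken

    def key_for(fingerprint):
        # Derive a symmetric key from an image fingerprint (e.g. a perceptual hash).
        return base64.urlsafe_b64encode(hashlib.sha256(fingerprint).digest())

    def make_voucher(fingerprint, image_bytes):
        # Client side: the payload is unreadable without the fingerprint-derived key.
        return Fernet(key_for(fingerprint)).encrypt(image_bytes)

    def try_open(voucher, blocklist):
        # Server side: only fingerprints already on the blocklist yield a working key.
        for fingerprint in blocklist:
            try:
                return Fernet(key_for(fingerprint)).decrypt(voucher)
            except InvalidToken:
                continue
        return None  # no match: the server learns nothing about the content

The real design goes further than this sketch: the device itself can't tell whether a fingerprint matched, and nothing becomes decryptable until a threshold of roughly 30 matches accumulates.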
>In Apple's implementation, the device never knows if a particular picture is a CSAM match.
That's a distinction without a difference w.r.t the end result but I'll offer a more apt analogy regardless.
A better analogy would be the police installing a device in your house that's capable of seeing or hearing anything that happens and then claiming there's nothing to worry about. The device is only watching a specific door in your house and forwarding a hash of that information to their servers. Nevermind that it would only take a policy change and an OTA update that you have no visibility into, or chance of blocking, before it's watching your entire house in real-time.
But hey, you have several other doors to enter or exit your house from, and it's not like the camera actually knows anything, only the people on the other end do, so what's the big deal right?
I hope this is a case of Apple delaying it long enough to silently cancel the feature or completely change it to not be on-device scanning rather than Apple delaying it long enough to be well out of the news cycle and silently enable it at a later date. Both features mentioned have potential for abuse and create an adversarial relationship with ones own device so I'm not sure what they do to implement them without these remaining concerns.
The modus operandi of the surveillance state is to back off something newly intrusive just long enough for the outcry to die down and then slip it in when no one is looking. I'd expect this not to be canceled, but to find its way in to an update on a holiday weekend when no one is checking the news.
Well the day they announced it was also just a largely unremarkable day towards the start of a weekend but it still turned into a fiasco, so hopefully not.
I’d say very likely canceled quietly. Otherwise they are just taking the punishment twice, there isn’t a change they could make that would allow them to go live with this kind of thing at a later date and not get raked over the coals.
Yep, I did too, and set up a local+remote self-hosted solution instead. Probably not going back now; it was very convenient for the iPhone upgrade program with a new phone each year, but I no longer really want to use Apple services...
That doesn't make much sense. People would have cancelled their subscriptions during the month of August. There's no need to wait until the end of the month to know how many people are cancelling.
I was going to buy an iPhone before they announced this, because I'm tired of all the Google tracking stuff.
If they finally roll this out, I think my next phone will be a Nokia 1100. I don't have anything to hide, but I'm tired of everyone trying to track and sneak into your personal stuff, for whatever reason.
The most botched and nonsensical feature announcement ever.
> Here's an extremely sophisticated mechanism that has insane negative potential ramifications for our users and our brand, but that's ok because it's all in the name of catching pedos.
> PS: And pedos can turn off this mechanism with one toggle.
Since this sounds too idiotic to be real, people conclude it must be evil, by Occam's Razor.
I have a deeper fear: That it's actually that idiotic.
This is good but not good enough. Why? Because delayed means it might still happen. What we're aiming for is "cancelled".
Scanning your phone and your content is a terrible capability to exist and precedent to set. It's a small step to extend that system to, say, scanning for DMCA violations, pirated content, etc. And those will be justified by those activities supporting terrorism.
Yep, Apple currently only scans iCloud Mail attachments (I believe they confirmed this, though I don't have the source at hand), which explains the discrepancy.
There's a big difference, though, between scanning a library (which is closer to what Dropbox/OneDrive are doing) and scanning a messaging platform whose ultimate goal is communication between people. It does make sense that the latter catch way more.
If Apple had rolled those scans out on iMessage attachments (which they didn't plan to, though the feature looked to be designed for that far more than for a library scan), you would probably see comparable numbers (modulo those platforms' relative market shares).
One big problem, though, is whether those large numbers are actually actionable. I think I remember seeing a quote from Swiss (?) law enforcement complaining that the reports they got from NCMEC were nearly always unactionable.
There are possibly many reasons for this: differences in local laws, algorithms that wrongly report things, or other factors (maybe not enough information to identify the user of the account, etc.).
This is where one feature of Apple's design was a bit better: they didn't plan to report every single occurrence, but only once 30 such images were detected. Part of the reason was the imperfection of the NeuralHash mechanism, sure, but in the end it doesn't matter.
One can argue doing this would at least generate more actionable reports. It shouldn't be a numbers game.
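A rough back-of-the-envelope, with a made-up per-image false-match rate rather than Apple's published figure, shows how much a threshold changes the false-report picture:

    # Illustrative only: how a per-account match threshold suppresses false reports.
    from math import comb

    def prob_at_least(k, n, p):
        # P(X >= k) for X ~ Binomial(n, p), computed as 1 - P(X <= k-1)
        return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

    photos = 100_000   # photos in one account's library (assumption)
    fp_rate = 1e-6     # per-image false-match rate (assumption)

    print(prob_at_least(1, photos, fp_rate))   # chance of any single false match (~0.1)
    print(prob_at_least(30, photos, fp_rate))  # chance of crossing a 30-match threshold (~0)

With those made-up numbers, an account has roughly a one-in-ten chance of a single stray match but an effectively zero chance of tripping a 30-match threshold by accident, so the threshold trades per-image reporting for far fewer, presumably more actionable, reports.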
The article touched on the actionable parts. "They say it is “highly unlikely” that a search warrant could be made from a tip that does not include content “and instead only includes IP data and an indication from an algorithm that there is a high probability of suspicious or anomalous activity”."
It sounds like simply being able to say "this user was messaging with 4 known offenders, and sending something the size of images" is unlikely to be helpful to law enforcement.
I have no idea what the right answer is; however, "solving" implies that the only possible states are "huge problem" and "no problem".
I might say that it's possible there are actions that can reduce, but not eliminate, the problem. What if another 20 million reports saved one child from exploitation? 10 children? 1000 children?
At some level, this boils down to a society question. We all agree CSAM is evil. How does it stack up against other things we think are evil? I fear that, in the end, there is no middle ground that includes eliminating CSAM and keeping 100% privacy.
I personally think one of the best things you can do is turn off iCloud and automatic updates. No doubt they collect that telemetry. That sends a bigger "vote" than complaining on HN, although both are welcome. I turned it all off and set up iDrive as my backup for now. Not optimal, but the best I could do on short notice.
“Child safety features” because only a monster would be against such a thing.
I seem to recall gun trigger locks being pushed with the exact same phrase (because “self-defense obstacles” would be too accurate)
“For the children” is the last refuge of politicians and conmen.
So what’s the bottom line?
I'm guessing that scanning (for many things) is mandated by multiple governments, and on-device scanning is orders of magnitude cheaper than server-side scanning.
I hope this PR nightmare for Apple continues until the full extent of mandated surveillance is exposed
But it probably won’t - Apple’s mistake was in thinking that people would blindly accept this as a desirable feature, instead of just quietly implementing it.
Or, maybe Apple did it this way on purpose to expose the issue to public scrutiny.
It never ceases to astonish me how people keep reacting to this while apparently not really caring to understand how the thing actually works and how it fits into the existing workflows and thus what are the alternatives.
I mean, I'm not surprised this whole thing backfired and that there was a strong uncanny valley reaction to the prospect of having parts of the workflow happening in your device ("a policeman in my pocket" I read somewhere).
I am surprised, though, that it seems impossible to resolve the issue (or at least make progress in the framing of it) with an honest and nuanced conversation.
I don't want any functionality capable of scanning photos on my device for "illegal" content; I simply don't want that code living on my device no matter how restricted its (initially planned) use is.
Fair enough. Framing it like that it seems like a small step from a dystopia.
But let's face it, we're feet deep in it already: unless you control what code runs on your device you're never safe from code that scans data on your device.
I'm not sure I really buy the slippery-slope argument. If in the future Apple wanted to be more intrusive, they would just be more intrusive and scan your photos on your device for real, not with a convoluted and probabilistic fingerprint method.
What is the weak point? To get people accustomed to being scanned? Aren't people already? Your content is already scanned the moment it reaches to the cloud.
What does this extra client-side scanning add to the dystopia that we're not already experiencing?
It's the opening of Pandora's box. Scanning your device is a huge paradigm shift; we all know the cloud is "someone else's computer" and untrustworthy but thus far one's device was sacred and no one dared touch it.
The floodgates are open now; politicians and other "interested parties" have watched this unfold very carefully and gleefully noted the majority didn't care as much as everyone expected, so they'll definitely be pushing for it now.
Imagine Windows Defender (an antimalware / antivirus distributed with all versions of Windows and enabled by default) starts scanning one's hard drive (and attached external drives) for image files (it already scans documents and executables and even uploads samples of malicious binaries to Microsoft for analysis): how would you / the world react?
But that's not what they want to do. They want to perform a client side fingerprint of a subset of images before they get uploaded to the cloud.
You can argue that they could in the future turn this into a scan of everything on the device. But you can also argue that if that's what they want to do in the future they'll do it in the future. It's all about trust. If you don't trust Apple to not push nefarious code on your devices, stay away from Apple, and that's true even before all this.
It's really not a compromise. They found a way to scan your photos for illegal material during upload without compromising encryption. Meanwhile, all the major competitors just openly inspect all your photos.
This is equivalent to getting upset with Signal because it spell-checks your text as you write it: “you are scanning what I write!!!!”
Yes, they currently do. The goal of this new system is to allow them to no longer have such keys while at the same time not allowing just any content to be uploaded to their cloud systems.
If you want to upload anything to their cloud storage, you need to pre-encrypt it with some other application and then upload it as a raw file.
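As a minimal sketch of what "pre-encrypt it with some other application" can look like (Python with the cryptography package; the file names are placeholders):

    # Encrypt a file locally before handing it to any cloud storage client, so the
    # provider only ever stores an opaque blob it cannot scan or fingerprint.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # keep this somewhere the provider can't reach
    with open("photo.jpg", "rb") as f:
        blob = Fernet(key).encrypt(f.read())
    with open("photo.jpg.enc", "wb") as f:
        f.write(blob)                # upload this raw file instead of the original

As long as the key never leaves your own machines, neither server-side nor upload-time scanning has anything meaningful to match against.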
> The goal of this new system is to allow them to no longer have such keys while at the same time not allowing just any content to be uploaded to their cloud systems.
As do all the competitors… but they can also access your photos for random data mining. No one has shown Apple doing that; we know 100% that Google does…
Apple already tried the "critics don't understand our system!" deflection a while ago.
It's a deflection because critics' objections don't necessarily hinge on the technical implementation details of the system, they object to it based on its value proposition to the user, and on principle.
Once we move past the core of those objections, then yes, some critics also object to the system's technical implementation, and some of them are correct in their analysis and some are less so.
> they object to it based on its value proposition to the user, and on principle.
Not sure everybody is on the same page about what the value proposition to the user is. That's intimately tied to how the system works, which is not merely an implementation detail.
I'd like to talk about that. Most of the threads about this topic I've found are full of flames without much content. Hopefully I've found somebody who can help me clarify what it is that bothers people so much.
I absolutely do not want this on any devices I own.
I'm preparing to destroy any Apple devices in my possession (thankfully old and due for replacement) when my pine64 arrives. Don't care about any details. How's that for nuanced?