For context, I deeply hate the abuse of children and I've worked on a contract before that landed 12 human traffickers in custody that were smuggling sex slaves across borders. I didn't need to know details about the victims in question, but it's understood that they're often teenagers or children.
So my initial reaction when reading this Twitter thread was "let's get these bastards" but on serious reflection I think that impulse is wrong. Unshared data shouldn't be subject to search. Once it's shared, I can make several cases for an automated scan, but a cloud backup of personal media should be kept private. Our control of our own privacy matters. Not for the slippery slope argument or for the false positive argument, but for its own sake. We shouldn't be assuming the worst of people without cause or warrant.
That said, even though I feel this way a not-small-enough part of me will be pleased if it is deployed because I want these people arrested. It's the same way I feel when terrorists get captured even if intelligence services bent or broke the rules. I can be happy at the outcome without being happy at the methods, and I can feel queasy about my own internal, conflicted feelings throughout it all.
Having known many victims of sexual violence and trafficking, I feel for the folks that honestly want that particular kind of crime to stop. Humans can be complete scum. Most folks in this community may think they know how low we can go, but you are likely being optimistic.
That said, law enforcement has a nasty habit of having a rather "binary" worldview. People are either cops, or uncaught criminals... and they wonder why they have so much trouble making non-cop friends (DISCLAIMER: I know a number of cops).
With that worldview, it can be quite easy to "blur the line" between child sex traffickers and parking ticket violators. I remember reading an article in The Register about how anti-terrorism statutes were being abused by local town councils to do things like finding zoning violations (for example, pools with no CO).
Misapplied laws can be much worse than letting some criminals go. This could easily become a nightmare, if we cede too much to AI.
And that isn't even talking about totalitarian regimes, run by people of the same ilk as child sex traffickers (only wearing Gucci, and living in palaces).
”Any proposal must be viewed as follows. Do not pay overly much attention to the benefits that might be delivered were the law in question to be properly enforced, rather one needs to consider the harm done by the improper enforcement of this particular piece of legislation, whatever it might be.”
> is like saying you know black people and that somehow it affords you some privilege others do not possess.
Of course it does. Interacting with black people (or any race) affords you insight into their life experiences, struggles, worldview etc...
Of course sociological discourse is highly subjective but this attitude on HN that anecdotal data has no value whatsoever is silly. Do you seriously expect every fact of every people to be published in some infallible academic journal?
Based on a fairly cursory examination, the "N-word" ban is largely enforced by white people and using it is likely to get them more riled up than black folk.
So simultaneously I can imagine how black people wouldn't care and why that doesn't matter. The word isn't banned because blacks have delicate eardrums, it is banned because white people are showing respect.
I'm not. I am very unambiguously against this and I think if word gets out Apple could have a real problem.
I would like to think I am against child porn as much as any well-adjusted adult. That does not mean I wish for all my files to be scanned, without my consent or even knowledge, for compliance: submitted to who knows where, matched against who knows what, and reported to, well, who knows whom.
That's crossing a line. You are now reading my private files, interpreting them, and doing something based on that interpretation. That is surveillance.
If you truly want to "protect the children," you should have no issue with the police visiting and inspecting your house, and all of your neighbors' houses. Every few days. Unannounced, of course. And if you were to resist, you MUST be a pedophile who is actively abusing children in their basement.
I'm actually more OK with unannounced inspections of my basement (within reason) than with some government agents reading through my files all the time.
“If you want a vision of the future, imagine a boot stamping on a human face - forever.” I always think about this Orwell quote and think it’s up to us to try to fight for what is good, but we were too busy doom-scrolling on Twitter to do anything about it.
The NCMEC database that Apple is likely using to match hashes contains countless non-CSAM pictures that are entirely legal, not only in the U.S. but globally.
This should be reason enough for you to not support the idea.
From day 1, it's matching legal images and phoning home about them. Increasing the scope of scanning is barely a slippery slope, they're already beyond the stated scope of the database.
The database seems legally murky. First of all, who would want to actually manually verify that there aren't any images in it that shouldn't be? If the public can even request to see it, which I doubt, would you be added to a watch list of potentially dangerous people or destroy your own reputation? Who adds images to it and where do they get those images from?
My point is that we have no way to verify the database wouldn't be abused or mistaken and a lot of that rests on the fact that CSAM is not something people want to have to encounter, ever.
It’s a database of hashes, not images, though, right? I would argue the hashes absolutely should be public, just as any law should be public (and yes, I am aware of some outrageously brazen exceptions to even that).
Anyone should be able to scan their own library against the database for false positives. “But predators could do this too and then delete anything that matches!” some might say, but in a society founded on the presumption of innocence, that risk is a conscious trade-off we make.
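A minimal sketch of what that self-audit could look like, assuming (hypothetically) the list were published as plain file digests; the real databases use secret perceptual hashes, so treat this as illustration only, with invented file names:

    import hashlib
    from pathlib import Path

    # Hypothetical published list: one hex digest per line.
    published = set(Path("published_hashes.txt").read_text().split())

    # Check your own photo library for anything that would match.
    for photo in Path.home().glob("Pictures/**/*.jpg"):
        digest = hashlib.sha256(photo.read_bytes()).hexdigest()
        if digest in published:
            print(f"Would be flagged: {photo}")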
Yes, it is a database of hashes, but I don't know if the hashes are public information per se, although I am sure copies are floating around. But I am referring to the images the hashes are generated from. There is no verification of these that I know of. No one would want to do that, and if you did, you might be breaking the law.
The law requires companies like Google and Apple to report when they find CSAM, and AFAICT they would generate hashes and add them to this database if new material is found.
I don't know if there is any oversight in this. It's all done behind closed doors, so you just have to trust that the people creating the hashes aren't doing anything nefarious or mistaken. And that's a separate point from what others have said on here: you should be able to trust the devices you own not to be informants against you.
Wait, so two American companies, Apple and NCMEC, are working together to install spyware on all Apple devices world-wide, with no government involvement?
Not quite, they're an NGO and the CyberTipline which it operates (the database Apple will use) was established by S.1738 [PROTECT our Children Act of 2008], and they get an appropriation from Congress to run that and work with law enforcement.
It's kind of like the PCAOB... a private 501(c)(3) with congressional oversight and funding.
I think the strategy is that the organization is able to do more for helping children internationally if they're not seen as part of the Justice department and the executive, which after the debacle with CBP and "kids in cages", was probably the right call.
So are we talking about images which are not actually CSAM but a reasonable person would consider to be CSAM if they encountered them? Or is it just the README.txt for whatever popular bittorrent client?
A reasonable person would call it a normal picture if they encountered it in isolation. Like I said, content which is not even borderline is included.
By both legal and moral standards, it's not CSAM. Not even nearly.
Of course, this is a minority of the content in the database. But even 1 such image is gross neglect of stated purpose in my book.
But why am I downloading such things to begin with? Not only do these sound like very boring photos; given their provenance, I don't see a realistic pathway for them to get onto my phone.
Law enforcement cares because it wants to get pinged whenever someone shares imagery of interest to an investigation, irrespective of its legality.
A person who creates CSAM likely doesn't just create CSAM all the time, right? Those innocuous pictures get lumped together with illegal content and make it into the database.
The database is a mess, basically. Of course it is. It's gigantic beyond your wildest estimates.
I am reminded of an infamous couch and pool that are notorious for appearing in many adult productions... Possibly stock footage of a production room, or a recurring prop, gets hashed so that multiple or repeated works by the same person or group can be flagged. I recall a person of interest who was arrested after, of all things, posting a completely benign YouTube tutorial video. My thought at the time was that it was likely a prop match to the environment or some such within the video. The method is definitely doable. Partitioned out to every consumer device with unflinching acceptance? Yeahhhh.
Remember, these databases are essentially signature DBs, and there is no guarantee that all hashes are just doing a naive match on the entire file, or that all scans performed are fundamentally the same.
This is why I reject outright the legitimacy of any Client-based CSAM scanners. In a closed source environment, it's yet another blob, therefore an arbitrary code execution vector.
I'm sorry, but in my calculus, I'm not willing to buy into that, even for CSAM. It won't stay just filesystems. It won't stay just hash matching. The fact there's so much secrecy around ways and means implies there's likely dynamicity in what they are looking for, and with the permissions and sensors on a phone that many apps already ask for, my not one inch instincts are sadly firmly engaged with no signs of letting up.
I'm totally behind the fight. I'm not an idiot, though, and I know what the road to hell is paved with. Law enforcement and anti-CSAM agencies are cut a lot of slack and enjoy a lot of unquestioning acceptance by the populace. In my book, this warrants more scrutiny and caution, not less. The rash of inconvenient people being called out in the media as having CSAM found on their hard drives, with no additional context, indicates the CSAM definition is being wielded in a manner that produces a great deal of political convenience.
Since this is using a DB of known images, I doubt that would be an issue. I believe the idea here is that once police raid an illegal site, they collect all of the images into a DB and then want to know a list of every person who had these images saved.
But it said they use a "perceptual hash" - so it's not just looking for 1:1, byte-for-byte copies of specific photos, it's doing some kind of fuzzy matching.
This has me pretty worried - once someone has been tarred with this particular brush, it sticks.
You can’t do a byte-for-byte hash on images because a slight resize or minor edit will dramatically change the hash, without really modifying the image in a meaningful way.
But image hashes are “perceptual” in the sense that the hash changes proportionally with the image. This is how reverse image searching works, and why it works so well.
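To make that concrete, here is a toy "difference hash" (dHash) next to a cryptographic hash. This is only a simple stand-in for whatever Apple actually uses (NeuralHash is far more sophisticated), but it shows why a resize barely moves a perceptual hash while completely changing a cryptographic one:

    import hashlib
    from PIL import Image

    def dhash(image, hash_size=8):
        # Shrink, grayscale, then record whether each pixel is brighter
        # than its right-hand neighbor: a 64-bit gradient fingerprint.
        img = image.convert("L").resize((hash_size + 1, hash_size))
        px = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                i = row * (hash_size + 1) + col
                bits = (bits << 1) | (px[i] < px[i + 1])
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    original = Image.open("photo.jpg")  # any test image of your own
    resized = original.resize((original.width // 2, original.height // 2))

    # Cryptographic hashes of the pixel data: completely different.
    print(hashlib.sha256(original.tobytes()).hexdigest()[:16])
    print(hashlib.sha256(resized.tobytes()).hexdigest()[:16])

    # Perceptual hashes: differ by only a few bits, if any.
    print(hamming(dhash(original), dhash(resized)))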
Sure, I get how it works, but I feel like false positives are inevitable with this approach. That wouldn't necessarily be an issue under normal police circumstances where they have a warrant and a real person reviews things, but it feels really dangerous here. As I mentioned, any accusations along these lines have a habit of sticking, regardless of reality - indeed, irrational FUD around the Big Three (terrorism, paedophilia and organised crime) is the only reason Apple are getting a pass for this.
There is also a threshold number of flagged pictures to reach before an individual is actually classified as a "positive" match.
It is claimed that the chance of a positive match being a false positive is one in a trillion.
> Apple says this process is more privacy mindful than scanning files in the cloud as NeuralHash only searches for known and not new child abuse imagery. Apple said that there is a one in one trillion chance of a false positive.
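Apple hasn't published the math behind that figure, but a match threshold makes numbers like it plausible: requiring several independent matches collapses even a mediocre per-image false-positive rate. A back-of-envelope Poisson estimate, with all numbers invented:

    from math import exp, factorial

    def poisson_tail(lam, threshold, terms=50):
        # P(X >= threshold) for X ~ Poisson(lam), summed directly
        # (1 - CDF underflows in floating point for values this small).
        return sum(exp(-lam) * lam ** k / factorial(k)
                   for k in range(threshold, threshold + terms))

    # Illustrative only: 10,000 photos, a one-in-a-million per-image
    # false-positive rate, and a threshold of 10 matches per account.
    print(poisson_tail(1e-6 * 10_000, 10))  # ~2.7e-27, far past 1-in-a-trillion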
This isn't CSAM or illegal, nor would it ever end up in a database. Speaking generally, content has to be sexualized or have a sexual purpose to be illegal. Simple nudity does not count inherently.
That’s not entirely true. If a police officer finds you in possession of a quantity of CP, especially of multiple different children, you’ll at least be brought in for questioning if not arrested/tried/convicted, whether the images were sexualized or not.
> nor would it ever end up in a database
That’s a bold blanket statement coming from someone who correctly argued that NCMEC’s database has issues (as in I know your previous claim is true because I’ve seen false positives for completely innocent images, both legally and morally). That said, with the amount of photos accidentally shared online (or hacked), to say that GP’s scenario can not ever end up in a database seems a bit off the mark. It’s very unlikely as sibling commenter said, but still possible.
That's why I said it's not inherently illegal. Of course, if you have a folder called "porn" that is full of naked children it modifies the context and therefore the classification. But, if it's in a folder called "Beach Holiday 2019", it's not illegal nor really morally a problem. I'm dramatically over-simplifying of course. "It depends" all the way down.
>That’s a bold blanket statement
You're right, I shouldn't have been so broad. It's possible but unlikely, especially if it's not shared on social media.
It reinforces my original point, however, because I can easily see a case where a totally voluntary nudist family who posts to social media gets caught up in a damaging investigation because of this. If their pictures end up in the possession of unsavory people and get lumped into NCMEC's database, then it's entirely possible they get flagged dozens or hundreds of times and get referred to police. Edge case, but a family is still destroyed over it. Some wrongfully accused people have their names tarnished permanently.
This kind of policy will lead to innocent people getting dragged through the mud. For that reason alone, this is a bad idea.
> But, if it's in a folder called "Beach Holiday 2019", it's not illegal nor really morally a problem.
With all due respect, please please stop making broad blanket statements like this. I'm far from a LEO/lawyer, yet I can think of at least a dozen ways a folder named that could be illegal and/or immoral.
> This kind of policy will lead to innocent people getting dragged through the mud. For that reason alone, this is a bad idea.
I believe both you and the other poster, but I still haven't seen anyone give an example of a false positive match they've observed. Was it an actual image of a person? Were they clothed? etc.
It's very concerning if the fuzzy hash is too fuzzy, but I'm curious to know just how fuzzy it is.
> Was it an actual image of a person? Were they clothed?
Some of the false positives were of people, others weren’t. It’s not that the hashing function itself was problematic, but that the database of hashes had hashes which weren’t of CP content, as the chance of a collision was way lower than the false positive rate (my guess is it was “data entry” type mistakes by NCMEC, but I have no proof to back up that theory). I made it a point to never personally see any content which matched against NCMEC’s database until it was deemed “safe” as I didn’t want anything to do with it (both from a disgusted perspective and also from a legal risk perspective), but I had coworkers who had to investigate every match and I felt so bad for them.
In the case of PhotoDNA, the hash is conceptually similar to an MD5 or a SHA1 hash of the file. The difference between PhotoDNA and your normal hash functions is that it’s not an exact hash of the raw bytes, but rather more like the “visual representation” of the image. When we were doing the initial implementation / rollout (I think late 2013ish), I did a bunch of testing to see how much I could vary a test image and have the hash be the same as I was curious. Resizes or crops (unless drastic) would almost always come back within the fuzziness window we were using. Overlaying some text or a basic shape (like a frame) would also often match. I then used photoshop to tweak color/contrast/white balance/brightness/etc and that’s where it started getting hit or miss.
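In code, that "fuzziness window" presumably reduces to a Hamming-distance threshold on the hash bits; a sketch, with invented names and threshold value:

    # Two perceptual hashes are treated as the same image when their
    # Hamming distance falls inside the tuned window.
    FUZZINESS_WINDOW = 10  # hypothetical; tuned per deployment

    def is_match(candidate, known, window=FUZZINESS_WINDOW):
        return bin(candidate ^ known).count("1") <= window

    def scan(candidate, database):
        # Linear scan for clarity; real systems index the hashes
        # (e.g. with BK-trees) rather than compare against every entry.
        return any(is_match(candidate, h) for h in database)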
Unless I'm missing something, those are just theoretical examples of how one could potentially deliberately try to find hash collisions, using a different, simpler perceptual hash function: https://twitter.com/matthew_d_green/status/14230842449522892...
So, it's theoretical, it's a different algorithm, and it's a case where someone is specifically trying to find collisions via machine learning. (Perhaps by "reversing" the hash back to something similar to the original content.)
The two above posters claim that they saw cases where there was a false positive match from the actual official CSAM hash algorithm on some benign files that happened to be on a hard drive; not something deliberately crafted to collide with any hashes.
You're not missing something, but you're not likely to get real examples, because as I understand it the algorithm and database are private. The posters above are just guardedly commenting with (claimed) insider knowledge; they're not likely to want to leak examples. And it's not just that it's private, but given the supposed contents... would you really want to be the one saying "but it isn't, look"? Would you trust someone who did, and follow such a link to see for yourself?
To be clear, I definitely didn't want examples in terms of links to the actual content. Just a general description. Like, was a beach ball misclassified as a heinous crime, or was it perfectly legal consensual porn with adults that was misclassified, or was it something that even a human could potentially mistake for CSAM. Or something else entirely.
I understand it seems like they don't want to give examples, perhaps due to professional or legal reasons, and I can respect that. But I also think that information is very important if they're trying to argue a side of the debate.
> I understand it seems like they don't want to give examples, perhaps due to professional or legal reasons, and I can respect that.
In my case, it’s been 7 years, so I’m not confident enough in my memory to give a detailed description of each false positive. All I can say is that the false-positive photos that included people were either very obviously fully clothed and doing something normal, or the photo was of something completely innocuous altogether (I seem to remember an example of the latter being the Windows XP green field stock desktop wallpaper, but I’m not positive on that).
NCMEC is a private organization created by the U.S. Government, funded by the U.S. Government, operating with no constitutional scrutiny and no oversight or accountability; it could be prodded by the U.S. Government, and they tell you to "trust them".
To be fair the Twitter thread says (emphasis mine) "These tools will allow Apple to scan your iPhone photos for photos that match a specific perceptual hash, and report them to Apple servers if too many appear."
I don't know what the cutoff is, but it doesn't sound like they believe that possession of a single photo in the database is inherently illegal. That doesn't mean this is overall a good idea. It simply weakens your specific argument about occasional false positives.
Since you worked on an actual contract catching these sorts of people, you are perhaps in a unique position to answer the question: will this sort of blanket surveillance technique, in general but also in iOS specifically, actually work to help catch them?
I have direct knowledge of examples of where individuals were arrested and convicted of sharing CP online and they were identified because a previous employer I worked for used PhotoDNA analysis on all user uploaded images. So yeah, this type of thing can catch bad people. I’m still not convinced Apple doing this is a good thing, especially on private media content without a warrant, even though the technology can help catch criminals.
Now I'm afraid; I have two young children under 5 years old.
I have occasionally taken pictures of them naked, with some bumps on the skin or a mosquito bite, and sent them to my wife over WhatsApp to look at and decide whether we need to take them to a doctor. Do I have to fear now that I will be marked as distributing CP?
It’s not just you. I have pictures of my kids playing in the bath. No genitals are in shot and it’s just kids innocently playing with bubbles. The photos aren’t even shared but they’d still get scanned by this tool.
This kind of thing isn’t even unusual either. I know my parents have pictures of myself and my siblings playing in the bath (obviously taken on film rather than digital photography) and I know friends have pictures of their kids too.
While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.
That you even have to consider sexual interpretations of your BABY'S GENITALS is an affront to me. I have pictures of my baby completely naked, because it is, and I stress this, A BABY. They play naked all the time, it's completely normal.
Yeah that’s a fair point. The only reason I was careful was just in case those photos got leaked and taken out of context. Which is a bloody depressing thing to consider when innocently taking pictures of your own family :(
> no court is going to indict you because you have baby pictures on your phone
Maybe, maybe not. Bad luck is possible with anything involving police, prosecutors, judges, and juries. Need justification for that point of view? Just look at the number of people who were convicted and spent time in jail who truly were innocent. That doesn't even touch on the possible repercussions that can happen from just being questioned/arrested and later let go.
Don't immediately take affront, take the best possible interpretation of the parent comment. This is about automatic scanning of people's photo libraries in the context of searching for child pornography, presumably through some kind of ML. It seems to me that the concern of the commenter is that if there are photos of their child's genitals that they'll be questioned about creating child pornography, not that they're squeamish about photographing their child's genitals. This happened in 1995 in the UK: https://www.independent.co.uk/news/julia-somerville-defends-...
> While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.
In this case it’s not AI that’s understanding the nuance, it’s authorities that identify the exact pictures they want to track and then this tool lets them identify what phones/accounts have that photo (or presumably took it). If ‘AI’ is used here it is to detect if one photo contains all/part of another photo, rather than to determine if the photo is abusive or not.
Although there is a legitimate slippery slope argument to be had here.
Is there some way of verifying that the fingerprints in this database will never match sensitive documents on their way from a whistleblower to journalists, or anything else that isn't strictly illegal? How will this tech be repurposed over time once it's in place?
You seem to be suggesting that the AI will go directly from scanning your photos for incriminating fingerprints to reporting you to journalists.
I have to assume humans are involved at some point before journalists are notified. The false-positive will be cleared up and no reputations sullied (except perhaps the reputation of using AI to scan for digital fingerprints).
>The false-positive will be cleared up and no reputations sullied...
This is dangerously naive. The US justice system alone will hound people on goosed-up charges and try to get them to accept a plea deal and write a bogus confession. Parallel construction. Additionally, if you can't audit the database (I'd bet very few people can, including your senator), how do you know a hash of something that isn't CP wasn't inserted into it? This entire system screams ready for government overreach. It's worse than normal, since there'll be no public evidence when it's abused.
The other way around. If the database of fingerprints is unauditable, and especially if the database varies from country to country, then it would be very easy to add fingerprints for classified documents, or photos documenting known war crimes, or even just copyrighted stuff to close the so-called analog hole.
Documents could also be engineered to trigger false positives, making it difficult or impossible for a corporate whistleblower to photograph incriminating evidence to deliver to the authorities.
So, if the rumors are true and every iPhone will check every photo against an opaque database of perceptual fingerprints, what safeguards exist (beyond "trust us" from the database keepers) to prevent abuse of the feature to suppress evidence and control the flow of information, and which organizations or governments will have control over the contents of the database? As always, who watches the watchers?
> While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.
I recall a story from several years ago where someone was getting film developed at a local drugstore, and the employee reported them for CP because of bath photos. This was definitely a thing with normal everyday humans before computers.
I don't have knowledge of how Apple is using this, but based on what I know about how it's used at Google this would be flagging previously reviewed images. That wouldn't include your family photos, but are generally hash-type matches of images circulating online. The images would need to depict actual abuse of a child to be CSAM.
You would only be flagged if the photos (or rather their hashes) were added as part of some investigation, right? So you would only have to fear for your criminal record in the event that an actual criminal got hold of your (indecent, in their hands!) photographs. In which eventuality you might be glad they'd been discovered and arrested (relative to them not being caught, though your photos were still leaked), assuming your good name could be cleared.
Just playing devil's advocate, my gut (and I think even considered) reaction is in alignment with surely just about the whole tech industry: it's over-reach (if they're not public images).
Look at all the recent findings that have come to light regarding ShotSpotter law enforcement abuse. [1] These systems, along with other image and object recognition projects, are rife with false positives, bias, and garbage-in-garbage-out problems. They should in no way be considered trustworthy for criminal accusations, let alone arrests.
As mentioned in the twitter thread, how does image hashing & recognition tools such as PhotoDNA handle adversarial attacks?[2][3]
Just as being banned from one social media platform for bad behavior pushes people to a different social media platform, this might very well push the exactly wrong sort of people from iOS to Android.
If Android then implements something similar, they have the option to simply run different software, as Android lets you run whatever you want so long as you sign the waiver.
"You're using Android?! What do you have to hide?"
-- Apple ad in 2030, possibly
I'm the person you're responding to, and I think so? My contract was on data that wasn't surveilled, it was willingly supplied in bad faith. Fake names, etc. And there was cause / outside evidence to look into it. I can't really go into more details than that, but it wasn't for an intelligence agency. It was for another party that wanted to hand something over to the police after they found out what was happening.
I see. I was responding to you, yes. And in this case I was more curious about your opinion - based on your previous knowledge - on the viability of Apple’s technology here, rather than the specific details of your work.
In my (uninformed) opinion, this looks like more of a bad faith move on Apple's part that will maybe catch some bad actors but will be a net harmful result for Apple's users and society, as expressed in the Twitter thread.
Others who responded here though also seem to think it’ll be a viable technique.
This scanning doesn't prevent the actual abuse, and all this surveillance doesn't get to the root of the problem, but it can be misused by authoritarian governments.
It's a Pandora's box.
You wouldn't allow the regular search of your home in real life.
I'm not at all conflicted about this. Obviously CSAM is bad and should be stopped, but it is inevitable this new feature will become a means for governments to attack material of far more debatable 'harm'.
Well, it's a lot like everything. No one wants abusers, murderers, and others out and about. But then, we can't search everyone's homes all of the time for dead bodies, or other crimes.
We would all be better off without these things happening, and anyone would want less of it to happen.
Since they are only searching for _known_ abusive content, by definition they can only detect data that has been shared, which I think is the important point here.
> Unshared data shouldn't be subject to search. Once it's shared, I can make several cases for an automated scan, but a cloud backup of personal media should be kept private. Our control of our own privacy matters. Not for the slippery slope argument or for the false positive argument, but for its own sake. We shouldn't be assuming the worst of people without cause or warrant.
I have a much simpler rule: Your device should never willingly* betray you.
*With a warrant, police can attempt to plant a bug, but your device should not help them do so.
I don't think this rule makes any sense, because it just abstracts all the argument into the word "betray".
The vast majority of iPhone users won't consider it a betrayal that they can't send images of child abuse, any more than they consider it a betrayal that it doesn't come jailbroken.
The victims of child abuse depicted in these images may well have considered it a betrayal by Apple that they allowed their privacy to be so flagrantly violated on their devices up until now.
I don't think you read your ancestor post carefully enough. I at least don't see any room for ambiguity.
The rule is that your (note the emphasis) device won't ever willingly betray you. There's nothing here that implicates the majority in any way. Simply, your own device should never work against you.
This actually sounds like a great rule to prevent this kind of authoritarian scope creep.
> The vast majority of iPhone users won't consider it a betrayal that they can't send images of child abuse
Probably neither would child abusers, since as soon as they send an image of child abuse, they're much more likely to be caught than if it had stayed on their phone.
> a betrayal by Apple that they allowed their privacy to be so flagrantly violated on their devices up until now.
"Their" devices? Once Apple sells an iPhone, it no longer belongs to Apple. Taking "betrayal" to mean "didn't plant backdoors on other people's computers to catch your abusers" is stretching that word far beyond reason.
So if I understand correctly, they want to scan all your photos, stored on your private phone, that you paid for, and they want to check if any of the hashes are the same as hashes of child porn?
So... all your hashes will be uploaded to the cloud? How do you prevent them from scanning other stuff (memes, leaked documents, trump-fights-cnn-gif,... to profile the users)?
Or will a huge hash database of child porn hashes be downloaded to the phone?
Honestly, I think it's one more abuse of terrorism/child porn to take away people's privacy and mark everyone opposing the law as terrorists/pedos.
...also, as in the thread at the original URL, generating false positives and spreading them around (think 4chan mass e-mailing stuff) might cause a lot of problems too.
> and they want to check if any of the hashes are the same as hashes of child porn?
... without any technical guarantee or auditability that any of the hashes they're alerting on are actually of child porn.
How much would you bet against law enforcement abusing their ability to use this, adding hashes to find out who's got anti-government memes or images of police committing murder on their phones?
And that's just in "the land of the free"; how much worse will the abuse of this be in countries that, say, bonesaw journalists to pieces while they are alive?
I remember the story where some large gaming company permanently banned someone because they had a file with a hash that matched a "hacking tool". Turns out the hash was for an empty file.
Malware will definitely be created, almost immediately, that downloads files intentionally made to match CP, either for the purposes of extortion or just watching the world burn.
I'm usually sticking my neck out in defence of more government access to private media than most on HN because of the need to stop CP, but this plan is so naive, and so incredibly irresponsible, that I can't see how anyone with any idea of how easy it would be to manipulate would ever stand behind it.
Signal famously implemented, or at least claimed to implement, a rather similar-sounding feature as a countermeasure against the Cellebrite forensics tool:
If they said which file it is, people would (try to) remove it. The whole point is that you can't know which (if any) of the hundreds of thousands of files on your device it is. So they aren't telling. It could be that they have (or at least claim to have) written their system so that it chooses files at random; I think that's what I would do (or claim to have done).
If you can recreate a file so its hash matches known CP, then that file is CP, my dude. The probability of just two hashes accidentally colliding is approximately 4.3×10^-60.
Even if you do a content-aware hash, where you break the file into chunks and hash each chunk, you still wouldn’t be able to magically recreate the hash of a CP file without also producing part of the CP.
The Twitter thread this whole HN thread is about shows just how to make collisions on that hash. So any image can be manipulated to trigger a match, even if that image isn’t CP.
It's the activations from the middle of a neural network that they're calling a "hash", because they encode an image the network has classified as bad. Experts have trouble reasoning about what such values mean in a neural network. This is going to end badly.
If this was a hash then it would be as the parent describes, this is at best a very fuzzy match on an image to take into account blurring/flipping/colour shifting.
It's vastly more likely that innocent people will be implicated for fuzzy matches on innocuous photos of their own children in shorts/swimming clothes than it is to catch abusers.
The other thing is, when you have nothing to hide, you won't make any effort to hide it, meaning you'll upload all of your (completely normal) photos to iCloud without thinking about it again.
The monsters making these images know what they're doing is wrong, so they'll likely take steps to scramble or further encrypt the data before uploading.
tl;dr: it's far likelier that this dragnet will only ever apply to innocent people than that it catches predators.
All this said, I'm still in support of Apple taking steps in this direction, but it needs far more protections put in place to prevent false positives than this solution allows. A single false accusation by this system, even if retracted later and rectified, would destroy an entire family's lives (and could well cause suicides).
Look what happened in the Post Office case in the UK as an example of how these things can go wrong - scores of people went to prison for years for crimes they didn't commit because of a simple software bug.
> The monsters making these images know what they're doing is wrong, so they'll likely take efforts to scramble or further encrypt the data before uploading.
The ones that make national news from big busts do, because the ones that don't get caught much sooner and only make local news; Google and other parties already have automatic CSAM identification (server side, not client side, AFAIK) and are sending hits to Homeland Security.
That document you downloaded that is critical of the party will land you and your family in jail. Enjoy your iPhone.
Seriously, folks, we shouldn't celebrate Apple's death grip over their platform. It's dangerous for all of us. The more of you that use it, the more it creates a sort of "anti-herd immunity" towards totalitarian control.
Apple talks "privacy", but jfc they're nothing of the sort. Apple gives zero shits about your privacy. They're staking more ground against Facebook and Google, trying to take their beachheads. You're just a pawn in the game for long term control.
Apple cares just as much for your privacy as they do your "freedom" to run your own (un-taxed) software or repair your devices (for cheaper).
And after Tim Cook is replaced with a new regime, you'll be powerless to stop the further erosion of your liberties. It'll be too late.
But is there a realistically better alternative? Pinephone with a personally audited Linux distro? A jailbroken Android device with a non-stock firmware that you built yourself? A homebuilt RaspberryPi based device? A paper notepad and a film camera and an out of print street map?
The best bet is probably a Pixel phone with GrapheneOS. (Do note that CopperheadOS is a scam and is not to be used.)
GNU/Linux phones have nonexistent security, other than being niche (so security through obscurity at most). And they are not yet usable as a daily driver, for me personally at least.
Whether or not I am allowed to check that my entrance has no locks whatsoever doesn't make it any harder to open. And the reverse: even if I don't know the details of the lock in my door, it will still not let others pass through.
What if I have a locksmith verify it for me? Apple and Android have been checked by several security researchers, and while they absolutely have holes, there are at least gates in place. Sandboxing is the bare minimum an OS should do if it wants to have third-party applications installed.
Basically, Micay is a legitimate security researcher who created the project, and it was later hijacked by the company funding some of it. That company has since tried to badmouth Micay anywhere they can, and is doing shady things on top of the still open-source code base. Micay was professional enough to destroy the verification key at the time of the fork.
> Pinephone with a personally audited Linux distro?
Even if you don't personally audit it, you still benefit from other people doing it. Especially if the software is reproducible (and many packages are).
An Android device running non-stock is a realistically better scenario. The big problem there is that the state of Android drivers means your hardware options are severely cut down (in practice, to a selection about the size of Apple's - the Pixel line and some assorted others).
With non-stock (assuming not jailbroken but just a totally different operating system) I think (I might be wrong... I should know for sure but I awkwardly don't) you aren't even allowed to use Google Play Services at all?
You are allowed to use Google services. There is even an alternative called microG which is compatible with apps requiring Google services but sends "fake" data to Google.
Viable alternatives are long gone. I really miss the days of Symbian and MeeGo, phones that were hackable yet intuitive to use (e.g. the Nokia N900 and N9).
Realistically, now we have Tizen and Jolla OS, which had backing from Samsung, but nobody gave two damns about them.
I bet that even if any of these vanilla mobile OSes gets big enough, they'll get bought by the 3 giants and suffocated to death, just like how Microsoft sniped Nokia.
Not the parent commenter, but for me: Samsung are just as morally vacuous as Google, but are way less competent, at least on the software side (their component manufacturing seems to be world class in at least some areas).
They'll happily do evil shit, and execute it poorly. Samsung are _way_ more likely to leak the unnecessarily and possibly illegally collected personal data they hoover up than Google are.
Not really, and I'm not going to sway anyone deeply into the ecosystem.
My hope is that those of you that share my viewpoint will call your legislators and demand regulations or a break up. There are forces of good within the DOJ that are putting together an antitrust case against Apple, and the more of us that lend our voices, the louder and more compelling the argument.
The DOJ is really the last lever we have, and that's pretty good measure for the power Apple wields.
It always starts with child porn, and in a few years the offline Notes app will be phoning home if you write speech criticising the government in China.
This technology inevitably leads to the surveillance, suppression and murder of activists and journalists. It always starts with protecting the kids or terrorism.
Perceptual hashes like what Apple is using are already used in WeChat to detect memes that critique the CCP.
What happens on local end user devices must be off limits. It is unacceptable that Apple is actively implementing machine learning systems that surveil and snitch on local content.
> in a few years the offline Notes app will be phoning home if you write speech criticising the government in China.
A totalitarian autocracy like China does not need this technology to search for wrongspeech, sadly. You are of course aware that all Chinese iCloud users get their data stored in a special set of datacenters that Apple actually doesn't control.
The problem is that this will be done in other, (currently) freer countries. E.g. Reddit has removed a lot of anti-China posts in the last few years. And of course, local leaders will use this to find anti-local-leader stuff on their citizens' phones too.
This seems to be done voluntarily by NVIDIA, at least partially. While in Seattle I set up geo-blocking on my LAN as an experiment. Later when I tried to create an NVIDIA account I couldn't because it was attempting to store my PII at nvidia.cn. When I changed the url to nvidia.com everything worked just fine. I've always wondered what non-evil reasons one could use to explain that choice by NVIDIA. Ping was at least 2x longer to .cn.
Which'd be fine if we had one global government with world wide jurisdiction. Or technology choices from companies which couldn't be pressured by governments outside your personal regulation jurisdiction.
I wouldn't hold your breath waiting for those regulations to become law in, say, China or Turkey or Saudi Arabia. I'd bet even Israel won't pass them; surely NSO have enough political lobbying swing (and probably also suitable blackmail material on sitting politicians).
>Which'd be fine if we had one global government with world wide jurisdiction.
I wholeheartedly disagree. A world government would be catastrophic for whistleblowing. At least as a whistleblower you can live somewhat safely in a country opposing your own. With a one-world government you would have nowhere to run.
And I wouldn't expect it to protect citizens' interests any better than current governments do. On the contrary, I think this lack of balance in the world would embolden it further.
They sell software designed to break into people's phones to oppressive regimes all over the planet. They definitely have the capability and requisite lack of ethics to compromise their own politicians; there's no probably about it.
But we don't have a global government, so the next best thing is for individual countries to pass such regulation, which would prevent products violating privacy like this from being offered and sold in those countries.
Think of GDPR, which is essentially each member of the EU saying in unison, "your product/service must comply with these data protection laws, or you can't legally do business with any of our citizens".
Come to think of it, I wonder if this Apple thing would even fly under GDPR?
> Come to think of it, I wonder if this Apple thing would even fly under GDPR?
Possibly? I’m not a lawyer, but if this is about compliance with a legal obligation, and they’re under that category of pressure? I think GDPR would allow that?
Certainly seems more likely allowed than the stuff Facebook complained Apple was preventing them from doing.
I agree, and I would add that people have generated legal images that match the hashes.
So I want to ask what happens if you have a photo that is falsely identified as one in question and then an automated mechanism flags you and reports you to the FBI without you even knowing. Can they access your phone at that point to investigate? Would they come to your office and ask about it? Would that be enough evidence to request a wiretap or warrant? Would they alert your neighbors? How do you clear your name after that happens?
edits: yes, the hash database is downloaded to the phone and matches are checked on your phone.
Another point is that the photos used to generate the fingerprints are really a legal black hole that the public is not allowed to inspect, I assume. No one wants to be involved in looking at them; no one wants to be known as someone who looks at them. It could even be legally dangerous to request to find out what has been put into the image database, I assume.
>I would add that people have generated legal images that match the hashes.
That seems like a realistic attack. Since the hash list is public (it has to be, for client-side scanning), you could likely set your computer to grind out an image with a matching hash, but of some meme, which you then distribute.
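To show the shape of that attack, here is a deliberately naive hill climb against the toy dHash from earlier in the thread (reusing its dhash and hamming helpers). The collisions in the linked Twitter thread were produced with gradient methods, which are far more effective; this is only a sketch:

    import random
    from PIL import Image

    def perturb(img, max_delta=3):
        # Nudge a small random block by a few intensity levels --
        # still essentially invisible to a human viewer.
        out = img.copy()
        x = random.randrange(out.width - 8)
        y = random.randrange(out.height - 8)
        for dx in range(8):
            for dy in range(8):
                r, g, b = out.getpixel((x + dx, y + dy))
                d = random.randint(-max_delta, max_delta)
                out.putpixel((x + dx, y + dy), (max(0, min(255, r + d)), g, b))
        return out

    def grind(meme, target_hash, window=4, budget=1_000_000):
        # Greedy search: keep perturbations that move the perceptual hash
        # toward the target. Slow and crude; purely illustrative.
        best = meme.convert("RGB")
        for _ in range(budget):
            cand = perturb(best)
            if hamming(dhash(cand), target_hash) <= hamming(dhash(best), target_hash):
                best = cand
            if hamming(dhash(best), target_hash) <= window:
                return best  # meme now "matches" the target image
        return None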
At least one of these two things must be true: either Apple is going to upload hashes of every image on your device to someone else's server, or the database of hashes will be available somehow to your device.
Replying to my own question since I can’t edit anymore: it turns out “perceptual hashing,” which I didn’t know much about, has exactly this property, that small changes in the input result in small changes in the output.
One thing to note is these are not typical cryptographic hashes because they have to be able to find recompressed/cropped/edited versions as well. Perhaps a hash is not an accurate way to describe it.
There have been a number of cases where people have found ways to trick CV programs into seeing something that no human would ever see. If you were sufficiently malicious, I imagine it would be possible to do so with this system as well.
No need to upload every hash or download a huge database with every hash. If I were building this system, I'd make a Bloom filter of hashes. This gives constant-time checking of a hash match in a compact, fixed-size filter, with a risk of false positives. I'd only send hashes on to be checked against a full database.
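For what it's worth, a minimal version of that Bloom-filter idea looks like this; sizes and hash counts are illustrative, and whether Apple does anything of the sort is unknown:

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1 << 24, num_hashes=7):
            self.size = size_bits
            self.k = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: bytes):
            # Derive k bit positions from k salted hashes of the item.
            for i in range(self.k):
                h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, item: bytes):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item: bytes):
            # Can return True for items never added (false positive), but
            # never False for added items -- hence the server-side recheck.
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))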
No, your hashes are not uploaded to the cloud, yes, hashes are downloaded to your phone. Yes, it will be interesting to see if it gets spammed with false positives, although it seems as though that can easily be identified silently to the user.
If the details of the "hashing" scheme used are publicized, I imagine it will be near trivial. It's a long-standing problem in computer vision to find a digital description of an image such that two similar images compare equal, or at least similar.
State-of-the-art for this field is deep learning, and a /huge/ problem with the DL approach is that you can generate adversarial examples: for example, a picture of a teacup that is identified by /most/ networks as a dog. It's particularly damning because it seems you don't have to craft these for a particular deep network; they all get tricked the same way, so to speak.
Indeed, at which point we’ll know if Apple has implemented an obviously broken solution which opens us up to egregious government surveillance, or whether that is all just speculation without a factual basis.
This isn't cryptographic though. That would make the entire database absolutely trivial to bypass with tiny imperceptible random changes to the images.
It cannot be a cryptographically secure hash, simply because avoiding detection would then be trivial: change one channel in one pixel by one. Imperceptible change, different cryptographic hash.
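That avalanche property is easy to demonstrate: flip a single low bit of the input and the digest is unrelated.

    import hashlib

    data = bytearray(b"...raw image bytes...")
    before = hashlib.sha256(bytes(data)).hexdigest()
    data[0] ^= 1  # one channel of one pixel, changed by one
    after = hashlib.sha256(bytes(data)).hexdigest()
    print(before)
    print(after)   # shares nothing recognizable with `before`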
The probability of Apple and all its devices winking out of existence due to quantum fluctuations is not zero. ‘Not zero’ is effectively zero if the number is small enough.
128-bit hashes are “expect your first collision when each human buys 2305 iPhones, all with one terabyte of storage, and then fills them up with photos that average 1MB in file size”.
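That figure checks out as a standard birthday-bound estimate, where a collision becomes likely around sqrt(2^128) = 2^64 hashes:

    import math

    likely_collision = math.sqrt(2 ** 128)   # ~1.8e19 photos before a likely collision
    humans = 8e9
    photos_per_phone = 1e12 / 1e6             # 1 TB of 1 MB photos = one million each
    print(likely_collision / (humans * photos_per_phone))  # ~2305 iPhones per person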
That's the stated purpose, but keep in mind that these databases (NCMEC's in particular, which is used by FB and very likely Apple) contain legal images that are NOT child porn.
Think of it this way, take a regular, legal set of adult pornographic pictures. While legal, we'd still classify this set of pictures as known porn if we were tracking it.
Now the first few images might be of the model completely clothed and not even be porn; maybe there's a picture of her lounging around a pool, then another picture of the pool itself. Still, it's part of a set of pictures that is known porn.
Heck most porn starts off with actors being clothed (so I hear lol).
No-no-no. It's not your phone. If it were your phone, you would have root access to it. It's their phone. And it's their photos. They just don't like it when there's something illegal in their photos, so they will scan them, just in case.
It's funny to see that anyone here could find this acceptable. I wonder what the comments would be once Apple starts scanning phones for anti-censorship or anti-CCP materials in China, or for some gay porn in Saudi Arabia.
Because you know in some countries there are materials that local government find more offensive than mere child abuse. And once surveillance tech is deployed it's certainly gonna be used to oppress people.
> Because you know in some countries there are materials that local government find more offensive than mere child abuse. And once surveillance tech is deployed it's certainly gonna be used to oppress people.
In Saudi, Bahrain, and Iran there is no minimum age of consent – just a requirement for marriage. In Yemen, the age of consent for women is 9 (but they must be married first). In Macau, East Timor, and UAE, it's 14. [1]
I would allege that in all of those states they would probably find the perceptual hash of government criticism far more important to include on the "evil material" database than anything else.
Won't anyone think of the children! And Tim Cook personally promised to not look at anything in my unencrypted iCloud backup, they really care about privacy!
And you can be sure that there's no way for the PRC, that already runs its own iCloud, to use this. America's favorite company wouldn't allow that.
> I wonder what's comments would be after Apple start to scan phones for anti-censorship or anti-CCP materials in China.
I'm cynical enough to wonder whether this isn't their actual commercial reason for developing this, with CSAM being a PR fig leaf. Apple is substantially more dependent on China than its major competitors.
Exactly, now Apple has this tech, shady governments know they can mandate Apple to use it for their own databases and Apple will have to do this if they want to keep operating within a territory.
It's quite easy to extrapolate this and in a few steps end up in a boring dystopia.
First it's iPhone photos, then it's all iCloud files; that spills into Macs using iCloud, then it's client-side reporting of local Mac files, and somewhere along the way all the other Apple hardware I've filled my home with has received equivalent updates and is phoning home to verify that I don't have files, or whatever data they can see or hear, that some unknown authority has decided should be reported.
What is the utopian perspective of this which counterbalances the risks for this to be a path worth taking?
> What is the utopian perspective of this which counterbalances the risks for this to be a path worth taking?
Apple takes care of everything for you, and they have your best interests at heart. You will be safe, secure, private and seamlessly integrated with your beautiful devices, so you can more efficiently consume.
What's not to like about a world where child crime, terrorism, abuse, radical/harmful content and misinformation can be spotted in inception and at the source and effectively quarantined?
No one here has a problem with the worst criminals being taken out. The problem is the scope creep that always comes after.
In 2020 and 2021 we saw people being arrested for planning/promoting anti-lockdown protests. Not for actually participating, but for simply posting about it. The scope of what counts as "harmful content" is infinite. You might agree that police do need to take action against these people, but surely you can see how the scope crept from literal terrorists and pedophiles to edgy Facebook mums, and how it could move even further to simple criticism of the government or religion.
It's difficult to say how we draw the line to make sure horrible crimes get punished while still protecting reasonable privacy and freedom. I'm guessing Apple's justification here is that they are not sending your photos to police but simply checking them against known bad hashes, and if you are not a pedophile, there will be no matches and none of your data will have been exposed.
In Germany police went around and even looked at contact tracing lists (on paper) in restaurants [1]. Even while politicians still stated publicly that these lists were only used (or to be used) for contact tracing.
Also, the partly state-sponsored Luca app (check-in at locations, festivals, restaurants, concerts), which is privately developed (and riddled with security holes), is already being discussed as a way to use the data on people to better target them for concert tickets and the like [2].
So we see this data is already being abused by the state and by state-sponsored private entities.
I believe that this data, once collected, will only be (ab)used further in the future. In my experience it will be as with all data caches: somebody wants to create additional value from it.
(Note that one mostly-united political party controls 89.2% of Singapore's legislature seats, and can pass any laws or amend its constitution to their liking.)
Thank you, it clearly shows that the German government cannot be trusted to do the right thing.
And the underlying desire for having this information will no doubt prolong the Corona restrictions longer than necessary, which is certainly not in the interest of German citizens.
We also saw HN shadow-banning entire IP CIDR blocks because they didn't like arguments put forth against fleeting CDC guidance, or the Chinese lab-origin theory, in 2020.
You can't register from these CIDR blocks. If you had an account before, the comments would just end up in a black hole.
Dang can explain.
I mean, Apple isn't too far from the Mac thing you mention. Since Catalina, running an executable on macOS phones home and checks for valid signatures on their servers.
> "What is the utopian perspective of this which counterbalances the risks for this to be a path worth taking?"
Basically victims of rape don't want imagery of their rape freely distributed as pornography. They consider that a violation of their rights.
It's interesting how many users in this thread are instinctively siding with the offenders in this, and not the victims. Presumably because they made it through their own childhoods without having imagery of their own abuse shared online.
You are actually creating a false dichotomy here. There are more sides to this, and you are painting a (as said, false) black-and-white picture.
I strongly believe that nobody wants to further victimize people by publicly showing images of their abuse.
And I believe very strongly that putting hundreds of millions of people under blanket general suspicion is a dangerous first step.
Imagine if every bank had to search all documents in safe deposit boxes to see if people had committed tax evasion (or stored other illegal things like blood diamonds obtained with child labor). That would be an equivalent in the physical world.
Now add to this, as discussed elsewhere here, that the database in question contains not only images of victims, but also perfectly legal images. This can lead to people "winning" a house search because they have perfectly legal data stored in their cloud.
Furthermore, this means that a single country's understanding of the law is applied to a global user community. From a purely legal point of view, this is an interesting problem.
And yes: I would like to see effective measures to make the dissemination of such material more difficult. At the same time, however, I see it as difficult to use a tool for this purpose that is not subject to any control by the rule of law and cannot be checked if the worst comes to the worst.
Using your bank analogy for a second: banks already do report on activity to authorities who can then identify people to investigate based on patterns. I've heard that large transactions (>10k) or near-sized ones are flagged.
A great deal of skepticism is being directed at the NCMEC database in these comments, which surprises me, as from what information I have, I think it is being exaggerated. At the same time, we have no idea whether Apple would even be using that database or another one they may have created themselves.
> I've heard that large transactions (>10k) or near-sized ones are flagged.
This is transmission of funds, and there are laws regulating the monitoring of those.
I used bank vaults because you put things into a vault without the bank, most of the time, knowing what is in there. If they knew, they would need to report it to the authorities.
So Apple doing this scan would be the bank opening all the vaults, scanning the contents and reporting things to the IRS (I think that's the tax agency in the US, if I am not mistaken; in Germany it would be the Finanzamt).
This is a situation where different people's privacy is in conflict. What's infantile is claiming sole ownership of privacy advocacy while so-whating the worst privacy violation imaginable, from the victims' perspective.
That's an interesting point. However, I'm not sure victim privacy is the reason for CSAM regulations. Rather, it's reducing the creation of CSAM by discouraging its exchange. For example, suppose instead of deleting/reporting the images, Apple would detect and modify the images with Deepfake so the victim is no longer identifiable. That would protect the victim's privacy but wouldn't reduce the creation or exchange. The fact that such a proposal is ridiculous suggests that privacy isn't the reason for regulation and that reducing creation and exchange is.
There is an utterly perverse incentive to consider as well.
If the median shelf-life of abuse evidence is shortened, in that the item in question can no longer be forwarded/viewed/stored/..., what does that imply in a world where the demand remains relatively stable?
I despise the abusers for what they do, and the ecosystem they enable. But I also remember first having this argument more than ten years ago. If you, as a member of law enforcement or a child wellbeing charity, only flag the awful content but do not do anything else about it, you are - in my mind - guilty of criminal neglect. The ability to add an entry to a database is nothing more than going, "at least nobody else will see that in the future". That does NOTHING to prevent the creation of more such material, and thus implicitly endorses the ongoing abuse and crimes against children.
Every one of these images and videos is a piece of evidence. Of a horrifying crime committed against a child or children.
The two sides proposed by your argument are only logically valid opposites if you can logically/mathematically guarantee that this technology will only ever be used for detecting photos depicting blatant and obvious sex abuse. Since you cannot, the entire argument is void. I'm not siding with abusers, I simply want arbitrary spies staying the hell away from my computers.
I feel it's a little disingenuous to describe millions of innocent people being surveilled as "the offenders" because there are a handful of actual offenders among them.
There's a small number of victims, a small number of offenders (but much more than "a handful"), and hundreds of millions of other users. This change is in the direct interest of victims, direct opposition to offenders.
Most normal people probably support the measures in solidarity with group 1, HN generally doesn't.
...And direct opposition to those hundreds of millions of other users. Trying to fit this to a victims vs. offenders model is a deliberate attempt to turn those hundreds of millions of other users into uninvolved bystanders. They have been pushed out by the lack of space in the model for them and their right to not have their door kicked down based on the results of an algorithm and database they can't audit, which are susceptible to targeted adversarial attacks and authoritarian interference respectively.
If drunk driving laws were enforced by mandating a breathalyzer in every car and nobody really knew how the breathalyzer worked and also it maybe doubled as an instrument for the government to catch you doing fifteen other things then I might consider that a fair comparison.
But yes, there's a lot of drive-by engagement in this thread, thank you for at least engaging with it directly.
Having private devices randomly snooped for forbidden materials is fine, okay. So why limit this to phones?
There are kidnapped children being locked inside homes. If you don't open your doors and accept weekly full home inspections, I think it's safe to say you support offenders and hate victims. I mean, we're all against people kidnapping and abusing children.
There's a small number of victims, a small number of offenders (but much more than "a handful"), and hundreds of millions of other home owners. This change is in the direct interest of victims, direct opposition to offenders.
A new aspect of this is that because this is self-reported, and the end goal is to involve the criminal justice system, there is now (essentially) an API call that causes law enforcement to raid your home.
What would be the result of 'curl'ing back a few random hashes as positives from the database? Do I expect to be handcuffed and searched until it's sorted out? What if my app decides to do this to users? A malicious CSRF request even?
A report to the cybertips line does not equal a police raid. Unfortunately the scale of the problem and the pace of growth is such that only the worst of the worst content is likely to be prosecuted.
If a phone calls the API with "hello, I found some porn here", the phone (and/or its owner) becomes a "person of interest" very quickly.
I'll wager the majority of these calls will be false positives. Now a load of resources gets deployed to keep an eye on the device's owner, wasting staff time and compute, wasting (tax-funded) government budget that could have gone towards proper investigation.
Yeah and sadly many of those who are consumers of illicit content get away with it because it's much more important to target the creators. The unfortunate reality of finite resources.
Also, if they send perceptual hashes to your device - it's possible images could be generated back from those hashes. These aren't cryptographic hashes, so I doubt they are very good one-way functions.
Another thought - notice that they say "if too many appear". This may mean that the hashes don't store many bits of information (and would not be reversible) and that false positives are likely - i.e., one image is not enough to decide you have a bad actor; you need more.
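To make the "not many bits" point concrete, here is a classic perceptual hash (average hash) as a stand-in - Apple's actual algorithm is unpublished, so this is purely illustrative. It reduces an image to 64 bits, and matching uses Hamming distance rather than equality:

    from PIL import Image  # Pillow

    def average_hash(path: str, size: int = 8) -> int:
        # Downscale to an 8x8 grayscale grid, then set one bit per pixel
        # depending on whether it is brighter than the mean: 64 bits total.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def distance(a: int, b: int) -> int:
        # Hamming distance between two hashes
        return bin(a ^ b).count("1")

    # A "match" is distance(a, b) <= threshold. The threshold trades false
    # negatives against false positives, hence counting multiple matches.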
But at Apple's scale, statistically, some law-abiding users would likely get snagged with totally innocent images.
It's also just plain absurd. Hundreds of pictures of my own children at the beach in their bathing suits? No problem. Hundreds of photos of other peoples' children in bathing suits? Big problem. Of course, the algorithm is powerless to tell the difference.
Ah, I guess unsurprisingly the twitter thread is light on details. I saw "perceptual hash" which I usually interpret to mean some kind of feature-based semantic hash that is not as sensitive to small edits. Even if it isn't currently used, the door is open for it to be implemented in the future.
In cryptography, creating a one-way function is not a problem. The only thing required for that is losing information, which is trivial. For example, taking the first n bytes of a file is a one-way hash function (for most files). So reversing the hashes is most definitely not a problem.
Creating collisions could be, though: e.g., brute-forcing a normal picture into matching an illegal image's hash by slightly modifying random pixels is a possibility.
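Both halves of that are easy to demonstrate in a few lines - a sketch of mine, not anyone's production code:

    import os

    def truncating_hash(data: bytes, n: int = 8) -> bytes:
        # Keeping only the first n bytes discards information, which is
        # all that one-wayness requires: the tail cannot be recovered.
        return data[:n]

    def find_collision(target: bytes, n: int = 2) -> bytes:
        # Brute force: try random inputs until the tiny digest matches.
        # Aim the same loop at a perceptual hash, flipping pixels instead
        # of bytes, and you have the attack described above.
        while True:
            candidate = os.urandom(16)
            if truncating_hash(candidate, n) == target:
                return candidate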
1) You willingly delegated the decision of what code is allowed to run on your devices to the manufacturer (2009). Smart voices warned you of today's present even then.
2) You willingly got yourself irrevocably vendor-locked by participating in their closed social networks, so that it's almost impossible to leave (2006).
3) You willingly switched over essentially all human communication to said social networks, despite the obvious warning signs. (2006-2021)
4) Finally you showed no resistance to these private companies when they started deciding what content should be allowed or banned, even when it got purely political (2020).
Now they're getting more brazen. And why shouldn't they? You'll obey.
Great, so what's the solution? What are you doing to fix it? Do you roll your own silicon? Do you grow your own food (we have no idea what someone could be putting in it)? Are you completely off-grid? Or are you as completely dependent on society writ large as everyone else?
Making holier-than-thou comments about everyone else being sheep isn't helpful or thought-provoking. Offer an alternative, even if it is a bad one (looking at you, Mastodon). So here's mine: we need to change the power of digital advertising. Most of the most rent-seeking companies generate revenue primarily by selling ads to get more people to buy more crap. I want a VAT on all revenue passing through the digital advertising pipeline. My hope is that if these things are less profitable, it will reduce the outsized impact these companies (social, infotainment [there is no news anymore], search, etc.) have on our economy and life. People are addicted to FOMO and outrage (faux?); I don't think that will ever change, but we can try to make it less profitable.
Seriously? Perhaps heed the warnings? Whenever Apple tightened the reins, thousands of apologists came to their defense. I wouldn't even have minded if they had kept their obedience to their own personal decisions. But they extended their enlightenment to others.
And then take what actions, exactly? “Guys this is trouble” is …fine, but without “and we should therefore do”, it’s just kind of spitting into the wind.
Google phone? This is not an alternative if you care about privacy.
The analogy presented (growing your own food) sounds insane, but honestly, if you're not rolling your own flashed ROM on a 'droid, then your privacy is dead already.
I understand throwing all your eggs in one basket is a terrible idea, but Apple's walled garden and heavy sandboxing were at least somewhat protective.
One option is getting a phone that has an unlocked bootloader so one can load their own ROM without Google Play Services. I have a Pixel 3a with LineageOS, no Google Play. I'm pretty happy with it.
While the Pinephone isn't perfect yet, I would argue it will be able to replace Android/iOS in a few months.
> so one can load their own ROM without Google Play Services
Yeah this is the mobile phone equivalent of growing your own vegetables.
I get it, I'm a techy, but until the day is here where rooted phones aren't treated like a special case: for most people if they buy a droid then it's actually significantly worse for their personal privacy than an Apple device.
I don't want that to be the case, but until the pine phone is here and starts being a reasonable alternative; that's how it is.
> Yeah this is the mobile phone equivalent of growing your own vegetables.
Sadly...I know. The unfortunate thing is my recommendation for my family/friends for a phone is an iPhone, because it really is the best for security/privacy today, and they at least support their phones for much longer than an Android.
> I don't want that to be the case, but until the pine phone is here and starts being a reasonable alternative; that's how it is.
Fortunately, I think that time is coming sooner than later.
Indeed it is still possible to get a Pinephone, Librem 5, or an Android phone with an unlockable bootloader such as Fairphone (in Europe), Teracube, and lots of Samsung Galaxy phones and put LineageOS on it.
The issue is that "the masses" just "go with the flow" and act like if everybody else is jumping off a cliff then it's ok to jump off a cliff too, and it is these mobs that move the markets and determine which products and services become the most popular.
However, in relatively free countries it is still possible to live one's life on one's own terms and not be in the same herd with the sheeple. It is definitely possible to live life without an iPhone and to have an Android phone with F-Droid rather than the Google Play Store as one's primary phone.
Buying a different phone or even forgoing a feature phone entirely are both valid alternatives. Just because the alternatives are unappealing doesn't make "There are no alternatives" or "The warning wasn't actionable" into true statements.
The first (and only) way to solve any problem is to first admit that it is a problem.
It doesn't even need to be an "action"; you just have to be conscious of what is happening.
That is not what is happening. Right now (and before this) many are defending Apple, or taking a "not my problem" attitude, writing off any warning as either pessimism or conspiracy theory. A healthy dose of skepticism is an unpopular stance in modern society, especially in tech and in America.
An action could and will be taken once enough people are conscious of what is happening. Right now the scale and critical mass just isn't there (yet).
Yeah I think a lot of us are conscious of the problem but you're right, the scale/critical mass isn't quite there yet hmm.
BTW I think one of the roots of our problem is that we seem to end up in these really weird "winner take all" distributions with only 2-3 main winners and rarely any serious alternatives. It happens with operating systems, browsers, it also happens within programming language communities, like Javascript, etc. Everybody piles up into one of the 2-3 most popular framework options (React, etc). Same thing for Linux distributions. etc.
It would be nice if we could figure out some sort of nudge or hack that would reduce this tendency and encourage more distribution of mass.
Well, that's because the more people use the thing, the easier it is to find solutions for issues with said thing. Looking up a way to do routing in React is way easier than looking it up for a framework with 100 users.
Issues are also being found way faster simply due to a bigger user base.
Unrealistic. My non-tech family and friends don’t care about this, and I can’t make them care. They don’t understand why it’s a problem; they agree with the motive and don’t understand that it’s not actually a solvable problem. It’s not totally dissimilar to the crypto backdoor problem. Normies think it’s great for only the feds to break encryption. Doesn’t work that way, but you can’t explain why to someone who doesn’t want to know.
The only progress for things like this comes at the political level, through pressure on politicians or corporations. That’s where you should be spending effort if this sort of thing matters to you.
A few Linux nerds buying some fantasy spook-free glibertarian phone will not make a dent in the problem.
I have a different experience. My normie family and friends don't want a private company to access their private media at all. Tech illiteracy is not a question of education or intelligence. This is a topic comparable to home searches.
Still, it's good to keep up the pressure on public officials. We don't have to excuse the fact that any interior minister would be very happy about this. The rest of us should be more like the French.
Perhaps as I stated it, yeah it seems unrealistic.
However I do think there’s value in gentle, consistent evangelism of privacy in ways that don’t make people feel bad. Most folks actually don’t want their deeply private stuff to be accessible. I’ve found that there are good analogies and ways to think about it that folks can get on board with and start caring to some extent.
I absolutely agree that political progress is the ultimate solution. I think the only way that will happen is if enough people demand it.
The solution is to go back to the original spirit of the Internet, when it was a set of open standards connecting people and organizations. Somehow it got forgotten and now we have a bunch of commercial companies giving you the same stuff in exchange for your privacy and who increasingly control everything you do.
The spirit of the internet won’t generate secure hardware or a transparent software stack.
Also, that spirit existed only in an adversary free environment. You may as well say the solution is for everyone to be nice to each other.
The solution is to build new technologies that are privacy preserving, transparent, don’t place trust in a central authority but are resistant to attack.
> Problem is you end up with a vastly inferior hardware device that costs nearly as much as an iPhone.
The Pinephone is $150/$200.
> Would love to hear if anyone is actually using a Linux phone and enjoying it.
I have one, and I thoroughly enjoy it! One of the neatest things about it, and I still have to wrap my head around it, is that it can do anything you can do on a desktop. SSH? No problem. Dev environment? "apt install build-essential". Want to install XFCE? Knock yourself out!
It is still missing MMS, which is why I don't use it daily, but that should be changing much sooner than later.
Exactly! Speaking from the Sailfish OS perspective, but that's essentially also a normal Linux distro. It's just so refreshing that stuff behaves in a logical and introspectable manner - have an issue? Check the journal, there will most likely be some hints! Need to transfer data? Mount the device via sshfs!
In comparison, my Galaxy Tab S6 is just so much more of a black box, and some stuff simply does not work, with no apparent way to debug. Like, I tried to set up a Samba or SSH server on it for easy data transfer, with zero success after trying all the related apps in F-Droid and elsewhere. It just does not work at all! Most likely some brain-dead "security" option one can't override without rooting the device, which is impossible to track down in the mess that is Android.
Honestly after using Android for so long too, even just AOSP has gotten much more user hostile.
Running root used to be very simple and semi-sanctioned. Now you have to essentially rootkit your phone (Magisk) for it.
Most of the AOSP programs are abandoned (thankfully ROM maintainers update them!), and you have to deal with a lot of things not working with Google Play.
> One of the neatest things about it, and I still have to wrap my head around it, is that it can do anything you can do on a desktop. SSH? No problem. Dev environment? "apt install build-essential". Want to install XFCE? Knock yourself out!
You can do all these things with a $10 Raspberry Pi Zero W. It's not obvious why it helps with having security and privacy.
Respectfully, I think you missed the point of my reply.
The person asked: "Would love to hear if anyone is actually using a Linux phone and enjoying it."
...so I responded about what I enjoy about using a "Linux Phone". A "$10 Raspberry Pi Zero W" is not a Linux Phone.
The question didn't address "security and privacy", so I didn't add it. Though, since you brought it up, I will answer how the Pinephone "helps with having security and privacy".
- I have hardware (cut physical power) switches to kill the Microphone, cameras (independent for front and back), and Wifi/Bluetooth.
- I am running a full GNU/Linux distro whose maintainers I have far more confidence in when it comes to wanting to protect users' security/privacy.
- I can run software of my choosing on it (the Pinephone actually defaults to booting from an SD card, so it encourages you to experiment with OSes).
- There is serious effort to run the mainline Linux kernel on it, so it will not be artificially obsolete in 3-5 years (or be stuck running some ancient Swiss-cheese kernel in 5 years like my old Pixel/Nexus devices).
- I don't have to install/link to some opaque binaries to even boot the system.
I'm sure I can think of more, but that's all I have for now.
Fair enough - I fully agree it would be enjoyable for a Linux enthusiast. I have had one on my shopping list for some time, but I want to use it as a daily driver and since most people say it isn’t ready for that, slightly lower level projects keep winning out.
What I have encountered is that the requirements for it being a daily driver are user-specific. I have seen a fair number of users say "I need $FOO app to work", where $FOO is an app only developed for Android/iOS (a common one is a banking app; there is almost no way that company will support a Linux phone). So I would ask what your requirements are before I can say whether it is daily-driver ready or not.
My threshold for it being a "daily driver" is if I can fully replace the "phone" features of my Android phone, which for me, is: Calling, SMS, MMS, Voicemails. MMS is not yet UI functional, but will be sooner than later.
The good news is I am seeing the Linux phone movement become a positive feedback loop: the more features are added, the more users arrive, and the more folks help add features.
Sure - for me (and likely >99.9% of people), from what I have read, it is clearly not ready. I’m a heavy phone user in most respects. For me it would have to start as a hobby project.
I have been using a Linux smartphone since 2010 and I certainly do enjoy it - first the Nokia N900 & N9, followed by a series of Sailfish OS[0][1] devices, with the Xperia 10 II running Sailfish X[2] being the latest one.
This is clearly a good thing to do and supporting those projects is great.
So far though, they do nothing to solve the problems we are talking about. The software is not anywhere near audited, and even if it were, you are still interacting with people and services who are using unaudited software.
Many of the projects leverage a well known OS as their base (e.g. pmOS uses Alpine Linux, Mobian uses Debian), and actively ensure that anything that can be upstreamed is upstreamed. So it's not like you're downloading some random ROM off of XDA.
That doesn’t make the system audited. The obvious reason we don’t hear more about the weaknesses is that no high value targets are using these systems, so it’s not worth exploiting them.
Um, these distros are normally used to run like half the Internet, they are very valuable targets today and I don't think putting them on a phone changes the threat environment so much.
> The solution is to build new technologies that are privacy preserving, transparent, don’t place trust in a central authority but are resistant to attack.
I actually completely agree with this.
> This is possible but nobody has built it yet.
Doesn't mean we should stop trying. Here's my $0.02 - I've been building an email solution based on Self-Sovereign Identity ideas, still in progress, but check it out: https://ubikom.cc or https://github.com/regnull/ubikom
Dumping Facebook and its products, as I and others have done, is one strong step forward, but people can't even manage this. It's deeply disappointing. Techno has its roots in rebellion, yet everybody throwing techno parties is coordinating on what is essentially an Orwellian state. Punks too: they're cozied up to this framework of oppression and can't see it for what it is.
I think people have a hard time seeing the ethics in the technology they choose to use. Maybe the next wave of net-natives will be able to rediscover that common thread of rebellion and resist. It's insidious, I'll give you that. It's not obvious what is being surrendered with every participation on these platforms but it doesn't take a genius to see clearly.
> Dumping Facebook and its products as I and others have done is one strong step forward
A strong step forward that gets nullified when Facebook buys the alternative app you're using, or when the app you're using does things as Facebook does.
You can propose all individual options you want, this is a collective issue that won't get fixed just by calls to individual action.
Facebook is not going to buy my fediverse instance or my XMPP server. "They will just buy whatever gets used" gets thrown around too much as a sort of impotent defeatism. I don't see them buying instances on federated networks running open source software, and these networks are growing. We have to stop having this 2007 mindset that it is still okay to dump millions of people into a large centralized microblogging site.
What we have been doing would have worked if the majority had followed.
Choose open standards, use and contribute to FOSS, avoid social networks, get involved in your local community, etc.
No need for extremes or complicated plans; corporations follow the customers.
But nobody listened. Quite the opposite. I never had a Facebook account, and today people boast when they leave FB. But 10 years ago? Oh, we were the paranoid extremists.
Even today my friends regularly pressure me to get WhatsApp.
The solution won't be technological; it will be in the realm of laws and regulations. We are weak peasants with no power over big tech, but we can change the legal environment for them.
IANAL, but we (via our elected representatives) can push a law that prohibits restrictions on the execution of users' own code on their own devices. Or we can split app stores from vendors and obligate them to provide access to third-party stores, like we did with IE and Windows.
Also, it's completely doable to stop NSA/Prism totalitarian nonsense.
What can we do as tech people?
- raise awareness
- help people switch away from big tech vendor lock-in
- help people hurt big tech by installing ad blockers, Pi-hole, etc.
- participate in open source (with donations or work)
This. You can't change mass behavior by individual pleas. Especially when the behavior generates outsized profits that can be used to advertise and lobby in its support.
The most pressing things that should be supported, to have the world I think we want:
1. Mandate open app stores. Your device, your choice. *
2. Mandate open browsers. Your device, your choice. The internet is fundamentally an extension of the OS at this point, so a free (as in speech) connection choice is a requirement for an open OS. *
3. Mandate open apps. Your device, your choice. Installing unsigned apps can be warned, but not prohibited (outside of enterprise devices).
4. Mandate configurable tracking. Your device, your choice. There must be a clear option to disable all tracking, along with an API / payment ecosystem for apps to detect this and request alternative payment. I.e. "free if advertising on + $5.99 if advertising off".
5. Mandate right to repair. The manufacturer must provide necessary technical specifications (hard or soft) for a base level of modification and repair. If the manufacturer no longer supports the device, everything must be released to the public.
* Selection must be offered at time of device setup. Installing alternatives can be warned, but not prohibited (outside of enterprise devices).
I agree, but we can elect different lawmakers. Let's push Louis Rossmann to Congress, for example. I checked: his congressional district is represented by someone named Jerry Nadler, who's been sitting there since 1992 and (I'm pretty sure) has been out of touch with his constituents since the late 1990s.
The problem is you have to convince a large crowd for that. Spreading information about free software and GNU/Linux phones is the same kind of work as explaining to people why we need to vote differently.
I'd argue: avoid using proprietary networks, avoid vendor lock-in with software and hardware, and use hardware that one is allowed to use their own software on. Champion using open and federated protocols for social tools.
I think solutions exist, but honestly, it isn't easy.
> Making holier than thou comments about everyone else being sheep isn't helpful
I would offer the GP comment isn't necessarily a holier than thou comment, it's a comment of frustration. Frankly, I feel the same frustration.
It's tiring to hear snide remarks of "ohh yeah, we can't include you in something because you don't have an iPhone". Hell, I have openly heard, even on this forum, that people don't include folks in social conversations with EVEN THEIR OWN FAMILY because of the dreaded "green bubble". (FYI, MMS is entirely done over HTTP and SMS! How is Apple's MMS client so bad that it can't handle HTTP and SMS?)
Or there is the "why don't you have WhatsApp/Facebook/Instagram/etc." and people think you're some sort of weirdo because you don't want to use those networks.
So to be honest, when I see something like that, I think "Well I'm not surprised, this is what happens when you are locked out of your own hardware".
> What are you doing to fix it?
While GP may not be doing anything, others are helping and actively working on alternatives. For example, I have been working to get the Pinephone to have MMS and Visual Voicemail support so I can use it daily. I am very fortunate to work with a lot of very talented and motivated folks who want to see it succeed.
It's incredible how people are going to blame absolutely everything on ads. We're talking about a company for which ads are only a small part of their revenue doing something following government pressure, and somehow ads are the problem.
I'm dependent, just as you say, and have no illusions about that.
Getting into this situation wasn't my decision (it was a collective "decision" of our society), and getting out of this won't be due to anything I'll personally do either.
The only difference between me and the average joe is having understood that we have a problem earlier than most.
Have people already forgotten that Microsoft implemented the tech to routinely scan your cloud storage a decade ago?
>The system that scans cloud drives for illegal images was created by Microsoft and Dartmouth College and donated to NCMEC. The organization creates signatures of the worst known images of child pornography, approximately 16,000 files at present. These file signatures are given to service providers who then try to match them to user files in order to prevent further distribution of the images themselves, a Microsoft spokesperson told NBC News. (Microsoft implemented image-matching technology in its own services, such as Bing and SkyDrive.)
Have you ever had the impression that special interests are much more interested in copyright violations than in child pornography? Or that it might extend to memes that sabotage carefully crafted propaganda? I get that impression a lot.
I don't wander into the lowest parts of internet hell, but I am also not very picky about where I go. I have never encountered child pornography in 10 years of almost pathological internet usage.
I didn't know what CSAM stood for, so I first thought this would be the reaction to the security issues iPhones faced. Oh, silly me.
Additionally, by calculating hashes of media on peoples devices, you can quickly determine networks. A private image you shared with your friends? Unique hash and everyone that has it is probably in your network. That is aside from the issue that they would also just read your contacts.
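A rough sketch of why that inference is cheap - the telemetry and device IDs here are entirely hypothetical, but the grouping logic is only a few lines:

    from collections import defaultdict
    from itertools import combinations

    def infer_links(device_hashes: dict) -> set:
        # device_hashes: {device_id: set_of_media_hashes} - hypothetical
        # telemetry a scanner could accumulate as a side effect.
        holders = defaultdict(set)
        for device, hashes in device_hashes.items():
            for h in hashes:
                holders[h].add(device)
        links = set()
        for devices in holders.values():
            # Any image held by several devices ties those devices together.
            links.update(combinations(sorted(devices), 2))
        return links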
You should read the rest of the linked Twitter thread, because the issue is that if the hash algorithm has a collision vulnerability, any image could be manipulated to show up as "child porn" to the scanner.
Microsoft created and hosts the PhotoDNA service which all providers use, and PhotoDNA has false positives. All reports are supposed to be manually reviewed before being sent to Cyber Tip.
1. there's no such thing as "windows TPMs", whatever that means.
2. TPMs basically have zero access to the rest of the system. They're connected via an LPC bus, so there are no fancy DMA attacks to pull off. Over that bus the system firmware sends various hashes of the system state (e.g. a hash of your bootloader), but that's about it.
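To illustrate the measurement flow just described (a simplified sketch of mine - real TPMs maintain multiple hash banks and registers): each measurement extends a Platform Configuration Register as a hash chain.

    import hashlib

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        # A PCR can only be extended, never written directly:
        #   new_pcr = SHA-256(old_pcr || digest_of_measured_component)
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    pcr0 = bytes(32)                             # PCRs reset to zeros at boot
    pcr0 = pcr_extend(pcr0, b"bootloader image") # firmware measures the bootloader
    pcr0 = pcr_extend(pcr0, b"kernel image")     # then the next stage, and so on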
> 2. TPMs basically have zero access to the rest of the system. They're connected via an LPC bus, so there are no fancy DMA attacks to pull off. Over that bus the system firmware sends various hashes of the system state (e.g. a hash of your bootloader), but that's about it.
That's the specification. Have you actually monitored the bus using probes? Did you check that the TPM is only connected to LPC?
FWIW, from the outside Apple's incentive seems to be hardware more than software, imo. For a lot of people they sell the hardware by means of the software, but given that they're so concerned with keeping their software on their hardware, I suspect they don't have much reason to push their software onto hardware that isn't theirs.
What would concern me is if we saw a big revenue stream from their software. Then I'd question their not wanting Linux on their machines.
But imo you've already given them what they want when you buy an M1. I don't see a reason why they'd care beyond that.
“Apple Arcade is a video game subscription service offered by Apple Inc. It is available through a dedicated tab of the App Store on devices running iOS 13, tvOS 13, iPadOS 13, and macOS Catalina or later. The service launched on September 19, 2019 after being announced in March 2019”
“The easiest way to upgrade to the latest iPhone.
Get a new iPhone every year
AppleCare+ coverage included
Works with your carrier
Starting from $35.33/month”
I'm not. Because for years Apple didn't want its OS anywhere but on its hardware.
The day Apple wants its OS anywhere it can get it, because it brings in money independently of the hardware, is the day I'll actually believe Apple cares about software revenue.
Right now Apple's software is hardware-locked. Google, by comparison, cares much more about the software side than Apple, imo.
This is a 2017 article[1] about how Tim Cook pushed Apple to get more revenue from software services. Recent articles this year add to say that Apple has been successful in this area. So perhaps the image of Apple being mainly a hardware company is outdated.
Even more reason to avoid them, I guess, if their hardware is a closed ecosystem and useless to those outside it. At least before, they shipped good hardware for multiple contexts, but perhaps those days are gone.
Then once the manufacturer makes that decision, switch. Honestly, this comment is just nonsense. Apple has always allowed other OSes on the Mac. When that changes, buy a new computer.
The iPhone and the Mac are different platforms, developed at different times, for different purposes and different markets, with different design philosophies. I don't see a logical connection between the iPhone being bootlocked and the Mac necessarily following suit.
The selling point of Macs is the Apple ecosystem and UX. If you avoid Apple, what's the point of M1? Linux on equivalent AMDs is cheaper and more compatible.
- (Potentially) programmable top notch security chip with no overhead encryption
- Fanless (Air), cool running, fast processor with very low power consumption
- All metal body
- Top notch HiDPI screen with color accuracy.
- Top notch sensors
- Excellent, illuminated keyboard
- Big trackpad with pressure sensitivity and taptic engine
- Excellent battery life
- Excellent battery endurance
- Very high sound quality, good speakers, good mics.
- High quality webcam
- Light for its class
- WiFi with plenty of antennas, MIMO support and all modern standards, incl. forward facing ones.
> But you can find decent hardware elsewhere, even at lower cost and with better configurability.
The problem is, you pay the same price for a Thinkpad/Elitebook and you get ridiculous gotchas like missing wireless antennas (at most you can find 2x2, if you're lucky). Or the configuration you like doesn't come to your country, or the vendor doesn't allow custom configuration for your country unless you buy 10+, etc.
OTOH, I pay the premium, I get what I configured, with top notch small specs (antennas, wireless chips, etc.). You can't find a spare 40mm SSD for your Elitebook after three years, but Apple will service your device happily.
If you get said SSD from the vendor, it's quadruple the price, so it's Apple's price territory again.
At that price range, there's no advantage in hardware prices between Apple, HP, Lenovo and Dell. They're equally cheap/expensive. So, IMHO, you pay less in the long run for an Apple laptop, which you can use for 7-8 years without problems.
I'm not sure this is true? The $1000 version has absolutely ridiculous performance for its price class. To the point that it's nearly as good as my desktop system.
My desktop system is a 24-core Zen 2. The M1 shouldn't be faster, and I'm certain the difference is almost purely a matter of software, but in reality the M1 certainly feels a lot faster.
Yes, the desktop has higher throughput. Of course it does. But that doesn't mean I don't feel a fraction of a second's lag whenever I do basically anything, and on the M1 that just... doesn't exist.
FWIW in a benchmark of a lua parser my friend is writing my M1 beat my 5950X Ryzen by a factor of 2x. According to him the L1/L2 cache makes the difference.
On the other hand, my i7 16-inch MacBook Pro consists purely of lag. Half of the time it'll be the 'kernel process', so I don't even have a clue what's causing it.
Is the M1 keyboard significantly better than the MBP's (with the Touch Bar)? Because that keyboard was crap. To be absolutely fair, good keyboards are hard to come by these days, and that's also why I'm using a Logitech from 2006 via a PS/2 adapter.
> Is the M1 keyboard significantly better than the MBP's (with the Touch Bar)?
Yes. It's slightly deeper, much more resistant to dust and one can write with very low effort, but with crisp feedback. It allows me to write at least 10% faster with no effort. I similarly use an old Microsoft Sculpt Comfort at office and Logitech G710+ at home.
Nearly all the HW items have equivalent or superior replacements, and a fair number are dependent on the Apple ecosystem. Without Apple's ecosystem it's a good system but overpriced, and that's before you run into Linux compatibility problems.
For example, take the monitor - there are monitors with higher refresh rates or more resolution (Retina does 'only' 6K). Also, Mac OS colour processing has no good equivalent elsewhere, so you won't see as much benefit from Apple's monitor without Mac OS.
M1 is nearly a year old now, and the replacement (M1X, M2?) is rumored to be released in a couple of months. So far no other SoC has been able to come close to even the current-gen M1 in terms of performance + efficiency, with a remarkable x86 emulation on top.
On the software side, Apple has the unique position to force every developer who cares about their user base to rebuild their stuff for a completely different CPU architecture.
M1-based products are anything but overpriced. The Air’s price is comparable to modern flagship smartphones.
The context of the thread is the idea of "avoiding Apple ecosystem while still buying M1" (See paulcarroty's post and my reply). If you install Linux, you don't get their Rosetta x86 emulation or any advantage from 'forcing developers', etc. etc.
I was more focused on the "equivalent or superior replacement" part.
At this point in history, you can't avoid Apple's ecosystem if you want to use their ARM-based laptops/desktops unless it's a hobby of yours. And it's sad, because this is the best hardware I've ever owned.
For me (and many out there) DPI is more important than the slightly higher battery consumption, which becomes less of an issue when other components, such as the SoC, mitigate it with more optimized power usage.
The Retina display, when it first arrived, was literally THE reason I started switching to the Apple ecosystem, as it was exponentially better to look at that screen, and I'd even pay more for a 4K display on a MacBook if they offered it.
My comment was for laptops primarily. On the desktop side, I'd not buy an Apple branded system unless I need macOS specifically, but on the portable space, their systems are superior to most of the offerings.
They're not EliteBook/Thinkpad tough, but they're more than enough unless you're going to handle it rough.
I can calibrate a screen regardless of the OS I use with the help of a good calibration device, and having a good panel is a good start for that. On desktop, there are possibly better panels for pure color accuracy, I won't argue that.
OTOH, on the portable space, their hardware formula is pretty robust.
I did think more about desktops, but I think same applies to laptops. I think that if you tilt the field by not allowing Mac OS (that's the context of the thread) the top PC laptops are better, since you lose many Apple advantages given by integration.
Worse, it would take some time for me to even trust Linux on M1 laptops to not fail exactly when I need it - that's the nature of reverse engineering. Since they don't have the specs, the only real test is lots of people running it for some time and reliability is more important to me than specs.
As for monitors, Apple's colour advantage is more than good calibration - the entire ecosystem can handle 10bit HDR. That's something AFAIK you won't get elsewhere.
You don't need to verify everything yourself. You can verify any small part and rely on the community to verify the rest. Or pay someone to verify. However, for all that you need verifiability, which Apple lacks.
> You can verify any small part and rely on the community to verify the rest. Or pay someone to verify.
The only difference in this is who you trust. Be it Apple, the community or someone you pay, you're still trusting that someone else's interests align with yours and they did things correctly.
In other words, this is not a technical problem. It's a problem that needs to be solved through regulation, because 99% of the people can't verify by themselves that their devices are actually private and secure.
Not really. There is a huge difference between trusting a single for-profit entity (which provides a backdoor to iCloud in China) and a huge number of independent people (each of whom would like to get famous/rich for finding bugs).
Yes, because the "huge number of independent people" have never missed any serious bugs or backdoor, and they also verify every piece of equipment you use.
> Apple spends a hell of a lot more time and money verifying that my iPhone is secure than say… the developers of any number of the mobile Linux ports.
Secure against entities they don't like. But intentionally insecure against entities they do like.
My comment was directed at the people who would say “don’t use apple” as if Android or any number of FOSS phone alternatives with 5 people maintaining them are more secure than IOS.
No insult. You actually don't know that the Linux kernel has fewer security issues than Apple's kernel.
But the kernel is only a tiny fraction of the system. There simply is no Linux system that even attempts to solve the problems Apple solves. There could be, but there isn’t - this is what we mean by the term ‘wishful thinking’.
That just shifts who controls the monopoly on verification. Not trusting anyone isn't a reasonable goal. Open verifiability allows you to choose which entities to put trust in and how much trust you can afford to eliminate by doing things yourself.
I don't think that's his point. The point is Apple made a laptop that did away with the technologies that allow the PC ecosystem/choices we see today. The M1 MacBook feels like an iPhone, but sized as a laptop.
I recently bought an M1 (my first and only Apple product so far). Anecdotally, I only bought it knowing that it can execute arbitrary code without restrictions, unlike iOS. If they decide to change that then you can be sure I won't be purchasing any future models.
I bought the M1 Air and yes, it feels like an iPhone, but sized as a retro-ish laptop and able to run a dev stack which Apple still won't let me do on my plenty powerful enough iPad.
AFAIK, the M1 is an ARM laptop and the Linux kernel already supports it. I also assume that it won't be too long until we see Bootcamp for M1 Macs. So I don't think that M1 or Intel changes anything in that regard.
Not at all. Here we have a manufacturer that thinks it should be allowed to scan the contents of your machine. If you can scan a machine, you can read everything on it.
You know, there was a time when people stopped buying cheap shirts made in English factories that employed kids. They were good shirts, too.
Is it worth it, though? Is being "better for mobile computing purposes" worth all of this, and whatever else will come next - which we know is just a matter of time, because no one cares enough to stop buying from them?
It’s not clear that governments would give the open social networks an easier ride either. It could be argued that distributed FOSS developers are easier to pressurise into adding back doors, unless we officially make EFF our HR/Legal department.
The other problem is workers have a right to be paid. The alternatives are FOSS and/or distributed social media. Who in good conscience would ask a tech worker to give away their labour for free, in the name of everyone else’s freedom?
In a world of $4k rent, who amongst us will do UX, frontend, backend, DevOps, UO, and Security for 7 billion people, for anything but the top market rate?
The real alternative is to attack the actual problem: state overreach. Don’t attack people for using SnapChat — get them to upend the government’s subservience to intrusive law enforcement.
imho, we have everything in the foss world working tightly except great UX/UI. in my experience in the open source world – which is not insignificant – great UX is the only thing stopping us from a paradigm shift to actual tech liberation.
even outside of corporate funded work/commits, we see an astounding number of people donating incredible amounts of their time towards great quality code. but we still thoroughly lack great UX/UI.
i’m not talking about “good”, we have some projects with “good” UX, but very very few with great.
there are many reasons and I’d be happy to share what some of them are, but in my mind great UX is unquestionably one of two primary things holding us back from actual truly viable software liberation.
There are tons of OSS projects with great UX... just not for "normies". That's the issue: Most OSS contributors write software primarily for themselves, and if their needs don't align with those of the general population, the end product will not be very attractive to the masses.
> It could be argued that distributed FOSS developers are easier to pressurise into adding back doors, unless we officially make EFF our HR/Legal department.
You'd only need a few important ones, and all you'd have to do is compromise them in one way or another. This can be done via coercion, via money, or by physically or virtually breaking into their system(s).
For example, if money can be an incentive, you can stimulate a FOSS dev to add a NOBUS vulnerability in code. Also, since all the code is public, organizations like NSA can do in-house fuzzing, keeping the findings to themselves.
>Who in good conscience would ask a tech worker to give away their labour for free, in the name of everyone else’s freedom?
Here's the hope: the tech workers doing it for 'free' because they're scratching their own itch. So it would not be an act of onerous charity. The techies make some free open source decentralised clone of Reddit, say, then some folks among knitting communities, origami enthusiasts, parents groups, etc. copy it for free and pay to run it on their own hardware.
If it seems like this scanning is working as advertised, this will be a great marketing stunt for Apple. Actual predators will stop using Apple products out of fear of getting caught and they will be forced to use Android.
Now any person who owns an Android is a potential predator. Also, if you are trying to jailbreak your iPhone, you are a potential predator.
Some 'predators' are dumb. They'll keep using iPhones, get caught, and have their mugshots posted in the press. Great PR for the policy makers who decided this. Such stories will be shoved in the faces of the privacy advocates who were against it, to the detriment of their credibility.
Yeah, and a lot of shortsighted people (even here) will be happy, because they are not 'predators'.
Of course, they won't be shown photos of victims of brutal dictatorships like Russia, Belarus, China, etc. They love that the phone manufacturer keeps their phone free of malware.
I'd argue that this update is spyware and malware. Policy-makers are counting on normies being accepting of law-enforcement malware if the crimes targeted are serious enough.
The twitter comments also mentioned scanning for political propaganda etc. This could work against Apple if normal folks don't want all their stuff scanned on behalf of unnamed agencies.
Or they will just go one step deeper into the dark, using a designated device for the dirty stuff, potentially only used over Tor/VPN with no connection to "normal" life.
Congratz, investigations got a bit harder, but now everyone has to live with a tool that will be used against them when needed. No sane person can believe that this won't be used for other "crimes" (however those are defined) tomorrow.
I doubt that having a manufacturer able to read the contents of your device at any point is good marketing. Although, I know some Apple users who would certainly buy that excuse.
This sort of scanning has existed for well over a decade, and was originally developed by Microsoft (search PhotoDNA).
The only thing that's changed here is that there is more encryption around, and so legal guidelines are being written to facilitate this, which has been happening for a long, long time.
(I don't disagree with your overall point, and child porn is definitely the thin edge of the wedge, but this isn't new and presumably shouldn't be too surprising for any current/former megacorp as they all have systems like this).
First they came for kiddie porn, and I did not speak out -because I had no kiddie porn. Then they came for Pepe the frog memes. I did not speak out -because I was held in solitary confinement pending trial and successful completion of the re-education camp.
Nothing is ever new. You can always find some vague prototype of an idea that failed to become ubiquitous ten years ago.
When I read that this shouldn't be surprising, it has an aftertaste of "Dropbox is not interesting/surprising because ftpfs+CVS have existed for well over a decade"
> Nothing is ever new. You can always find some vague prototype of an idea that failed to become ubiquitous ten years ago.
This has been standard practice for well over a decade amongst all big internet platforms.
Like, one can argue that regardless, people's messages should not be readable for any reason, but that's gonna be a tough one to get through a court of law.
The obvious difference here is that backdooring encryption is an all or nothing affair, which may require new thinking (it definitely does).
But the ship around this particular form of backdooring has most definitely sailed.
Like, the only reason Apple is new to this game is because they haven't been in the storage/media sharing business for as long as their competitors.
> 1) You willingly delegated the decision of what code is allowed to run on your devices to the manufacturer (2009). Smart voices warned you of today's present even then.
99% of the population will delegate the decision of what code is allowed to run to someone, be it the manufacturer, the government, some guy on the Internet or whatever. For that 99% of the population, by the way, it's actually more beneficial to have restrictions on what software can be installed to avoid malware.
> 2) You willingly got yourself irrevocably vendor-locked by participating in their closed social networks, so that it's almost impossible to leave (2006).
"Impossible to leave" is not a matter of closed or open, but it's a matter of social networks in general. You could make Facebook free software and its problems wouldn't disappear.
Not to mention that, again, 99% of people will get vendor-locked because in the end nobody wants to run their own instance of a federated social network.
> You willingly switched over essentially all human communication to said social networks, despite the obvious warning signs. (2006-2021)
Yes, it's been years since I talked someone face to face or on the phone and I cannot send letters anymore.
> 4) Finally you showed no resistance to these private companies when they started deciding what content should be allowed or banned, even when it got purely political (2020).
No resistance? I mean, it's been quite a lot of discussion and pushback on social networks for their decisions on content. Things move slow, but "no resistance" is quite the understatement.
> Now they're getting more brazen. And why shouldn't they? You'll obey.
Is this Mr. Robot talking now?
But now more seriously, in December the European Electronic Communications Code comes into effect, and while it's true that there's a temporary derogation that allows these CSAM scanners, there's quite a big debate around it and things will change.
The main problem with privacy and computer control is a collective one that must be solved through laws. Thinking that individual action and free software will solve it is completely utopian. A majority of people will delegate control over their computing devices to another entity, because most people don't have both the knowledge and the time to do it themselves, and that entity will always have the option to go rogue. And, unfortunately, regulation takes time.
Anyway, one should wonder why, after all these years of these kinds of smug messages, we're in this situation. Maybe the solutions, and the way of communicating the problems, are wrong, you know.
>99% of the population will delegate the decision of what code is allowed to run to someone, be it the manufacturer, the government, some guy on the Internet or whatever. For that 99% of the population, by the way, it's actually more beneficial to have restrictions on what software can be installed to avoid malware
I do not agree with this. You are saying people are too stupid to make decisions, and that is immoral in my opinion.
>"Impossible to leave" is not a matter of closed or open, but it's a matter of social networks in general. You could make Facebook free software and its problems wouldn't disappear.
Data portability is a thing. This was the original problem with FB, and that's how we got 'takeout'.
>Yes, it's been years since I talked someone face to face or on the phone and I cannot send letters anymore.
>Is this Mr. Robot talking now?
Using extremes in arguments is dishonest. We are talking on HN, which is a selective group of like-minded people (a bubble). How does your delivery driver communicate with their social circles? Or anyone that services you? You will find different technical solutions are used as you move up and down the social hierarchy.
>The main problem with privacy and computer control is a collective one that must be solved through laws.
Technology moves faster than any lawmaker can legislate. We do not need more laws as technology advances, but rather enforcement of personal rights and protections, enabling users to be aware of what is happening. It appears you are stating "people aren't smart enough to control their devices" and "we need laws to govern people", versus my argument that "people should be given the freedom to choose" and "existing laws should be enforced, and policy makers should protect citizens with informed consent".
> >99% of the population will delegate the decision of what code is allowed to run to someone, be it the manufacturer, the government, some guy on the Internet or whatever. For that 99% of the population, by the way, it's actually more beneficial to have restrictions on what software can be installed to avoid malware
> I do not agree with this. You are saying people are too stupid to make decisions, and that is immoral in my opinion.
How much of the code running on your data do you personally inspect? (Don’t forget device firmware) When your browser ships an update, do you reverse-engineer the binary? Do you review all of the open source code you use looking for back doors?
Would it be accurate to say that you don’t do that because you’re stupid? I don’t think that’s reasonable, any more than it would be to say you should carry around a test kit for any food you are planning to buy at the supermarket.
> Technology moves faster than what any lawmaker can create.
This is a common claim but it’s too simplistic. Laws do get passed relatively quickly when there’s a clear need - think about how things like section 230 arrived relatively soon after the rise of the web - but in most cases it’s more a clarification of existing laws. For example, cryptocurrency wasn’t mentioned in previous laws by name but the IRS had no trouble taxing it under existing laws.
Privacy shows why the “just let people choose” approach doesn’t work: you the individual have no negotiating clout with Facebook or Google, and there are many cases like revenge porn where the problem is only visible after the decision has been made.
Laws are how societies agree to function. If you don’t like the laws, you need to get involved because there simply isn’t a way to get good results by demanding that the system accommodate people who don’t show up.
So you've reversed your earlier position and now admit that it is normal and reasonable for most people to delegate this work? You can call it “the community” but it's still delegation.
> > 99% of the population will delegate the decision of what code is allowed to run to someone
> I do not agree with this. You are saying people are too stupid to make decisions, and that is immoral in my opinion.
No, it's just saying that most people have other priorities. If you want to make the world a better place, educate more people so that their priorities change towards caring more about the software that runs on their devices, instead of attacking people with weird non-sequiturs.
> I do not agree with this. You are saying people are too stupid to make decisions, and that is immoral in my opinion.
I'm not saying that at all. I'm saying that it's impossible for all people to make informed decisions on all the issues that surround them, for reasons of both knowledge and time. And it doesn't just happen with computers and privacy; take food, for example. Do you make all the decisions about what's allowed or not in your food chain? It's impossible! Unless you dedicate quite a lot of time to it, you can't know whether certain foods contain certain ingredients, or whether those are harmful. That's why we have regulation on food. We trust that regulation because we need to do more things than worry constantly about our food.
In the same way, most people delegate control over what can run on their device because they don't have the time or knowledge to constantly inspect what is running on it.
> Data portability is a thing. This was the original problem with FB, and that's how we got 'takeout'.
And did takeout solve any problems? No, because it's not a technical issue.
> Using the extreme in arguments is dishonest. We are talking on HN, which is a selective group of like-minded people (a bubble). How does your delivery driver communicate with their social circles? Or anyone who services you?
The GP used the extreme by saying that "essentially all human communication" has been moved to social networks.
But yes, we do agree that HN is not the real world. So I'd love to know what the warning signs were for people like a delivery driver, or basically anyone who wasn't active in computer circles. Not to mention that, before social networks, most communication was done through channels controlled by third parties (phone, letters, television). From a non-technical standpoint, things didn't change that much.
> We do not need more laws as technology advances, but rather enforcement of personal rights and protections enabling users to be aware of what is happening.
"Enforcement" is done through laws and regulations.
> It appears you are stating "people aren't smart enough to control their devices" and "we need laws to govern people"
I'm not saying that at all. I'm saying that people shouldn't need to invest a significant amount of time constantly verifying that their devices and networks do what they claim. Laws and regulations should instead be applied to the corporations, so that people can reasonably trust that the ones offering those devices and networks are doing things somewhat correctly.
And again, this has already been done with quite a lot of things. There are regulations for cars, food, furniture, clothes... Not because people aren't smart enough to control what they use, but because it's impossible for any one person to have the time and knowledge to vet everything they use.
Imagine applying your argument to, say, carcinogenic substances in food. You could argue that the best way to fight them is for people to grow their own food and check that it doesn't contain those substances, or to trust that some company selling them food does it for them. Or, you could push for regulation and oversight bodies that ensure those substances don't make their way into the food chain.
Well, this is the same. Most people have other things to do than learn to verify that their devices are secure and private and then check that for everything they get their hands on. You need regulations so that there's a consensus on what you can expect, and then enforcement so that the products you get actually comply with those regulations.
But who will formulate these laws that are supposed to give people freedom when we already have malicious state actors pushing for the complete opposite? Who will put pressure on governments to formulate just laws if people are so hopelessly misinformed and cannot possibly take the time to understand the issue of being parted with their freedom?
The suggestion is not that people need to continually invest large amounts of time to manually verify their freedom is being upheld. It is that they should inform themselves once in order to understand the issue and be able to ensure their governments are not secretly becoming totalitarian states.
> But who will formulate these laws that are supposed to give people freedom
I'd bet that most people on HN live in democracies.
> Who will put pressure on governments to formulate just laws if people are so hopelessly misinformed and cannot possibly take the time to understand the issue of being parted with their freedom?
You don't need to know biochemistry and medicine to know that you don't want carcinogenic elements in your food, right? In the same sense, you don't need to know how to check that a channel is private and secure in order to push for regulations that make your communication channels secure.
>It is that they should inform themselves once in order to understand the issue and be able to ensure their governments are not secretly becoming totalitarian states.
Which is exactly the same thing I'm proposing: push for regulations that align with their interests.
> You don't need to know biochemistry and medicine to know that you don't want carcinogenic elements in your food, right? In the same sense, you don't need to know how to check that a channel is private and secure in order to push for regulations that make your communication channels secure.
Yes, but you need to be able to conceive of what cancer is and (roughly) how and why it appears in order to understand and fight carcinogens. Most people today still haven't recognized the cancer, or the carcinogens, in the privacy story. This despite the fact that the danger was widely known in a prior era (see the Stasi).
> Which is exactly the same thing I'm proposing: push for regulations that align with their interests.
Yes, except it's not happening because people are complacent, which is why people are fuming when they see stuff like this here.
> Most people have other things to do than learn to verify that their devices are secure and private and then check that for everything they get their hands on. You need regulations so that there's a consensus on what you can expect, and then enforcement so that the products you get actually comply with those regulations.
I pretty much agree with you. That said, the position that this particular issue is one in which regulations can save us is naive. The basic political reality is that the very people and organizations pushing for further encroachments on our rights and the destruction of our privacy are the same people and organizations responsible for regulating the companies which offer us these services, software, and devices. When you live in a world with blatantly malicious state actors (the US Government) pressuring and demanding these encroachments as an end-run around existing protections (the Fourth Amendment of the US Constitution), who exactly is supposed to create and enforce these privacy regulations?
> "Impossible to leave" is not a matter of closed or open, but it's a matter of social networks in general. You could make Facebook free software and its problems wouldn't disappear.
Not true. If you have interoperability between different networks, you can leave. This is how ActivityPub (e.g. Mastodon, PeerTube, PixelFed) works.
> Not to mention that, again, 99% of people will get vendor-locked because in the end nobody wants to run their own instance of a federated social network.
You just switch to any other instance, because Mastodon doesn't prevent you from doing that.
> The main problem with privacy and computer control is a collective one that must be solved through laws. Thinking that individual action and free software will solve it is completely utopian.
We need both. You cannot force Facebook to allow interoperability when there is no other social network.
> If you have interoperability between different networks, you can leave
If all your friends are in a Mastodon instance and you think that instance is scanning your messages, you'll find it hard to leave because leaving the instance for another that doesn't share messages with that one means stopping communication with your friends.
> You just switch to any other instance, because Mastodon doesn't prevent you from doing that.
Controlled by another third party. Not to mention that, with enough users, there will be feature divergence so "switching" won't be that easy.
Want a real-world example? See email. An open protocol with multiple client and server implementations. However, most people use one of the major providers (Google, Microsoft...), there are incompatibilities between clients, and even if you "can switch", it's not that easy, nor is it done often. Yes, you can switch to ProtonMail or something more secure if you want, but that won't solve the problems of the 99% of people that will use general providers and won't even know they can't switch.
> We need both. You cannot force Facebook to allow interoperability when there is no other social network.
Right now you could force Facebook to be interoperable and be open source and still 99% of the people would be on the original Facebook instance. Again, it's not a technical issue.
Everything is controlled by a third party except self-hosting. Mastodon allows that too. Closed networks don't.
> Yes, you can switch to ProtonMail or something more secure if you want
So you answered your own question.
> but that won't solve the problems of the 99% of people that will use general providers and won't even know they can't switch.
My point is that they are able to switch due to the openness of the platform.
> Right now you could force Facebook to be interoperable and be open source and still 99% of the people would be on the original Facebook instance. Again, it's not a technical issue.
Yes. It's not just a technical problem. But there is a technical side to it. Millions will immediately switch given a possibility. What happens next, who knows.
> My point is that they are able to switch due to the openness of the platform.
And my point is that most won't, and the ones who do will just go to another platform that's controlled by another third party, and they'll still need to trust that the platform is not doing things they don't like.
> Millions will immediately switch given a possibility.
Switch to where? To another company that could do weird things away from users' eyes? Do you think all of those millions are going to run their own self-hosted Facebook?
My point is that privacy and security is not something that will be solved by federation or open source. For open source and federation to be useful in that regard, you need most people to actively research and check that the tools that they use are private and secure. If they don't, they're just trusting someone the same way they trust Facebook now. And most people (that includes most people here on HN) don't have both the time and knowledge to do those checks.
In other words, this is a collective issue. Trying to solve collective issues by individual choices is not the best path.
> My point is that privacy and security is not something that will be solved by federation or open source.
I disagree. Here's why:
> For open source and federation to be useful in that regard, you need most people to actively research and check that the tools that they use are private and secure.
This is the key point. You do not need most people. You need some people. And you can always find some people who verify everything and self-host for you. This is how Signal and Matrix appeared and became (relatively) famous.
> This is how Signal and Matrix appeared and became (relatively) famous.
And what happens when another app comes and says that "it's secure" and people start using it instead of Signal or Matrix? What happens if Signal starts requiring some payments (running servers is not free) and people move to other apps? Maybe those other apps are open source and federated, but the federation protocol is found later to have a backdoor, or some instances run data mining on the messages, or something like that. Who will be faster: the users flocking to those apps, or the small number of verifiers getting to work and detecting those issues?
If you want most apps to be like Signal or Matrix, the solution is easy: push for legislation and certifications that ensure that, no matter the app, a certain level of security and privacy is enforced. It's not perfect, but it's far better than just trusting that some people will invest a lot of time in that research.
> And what happens when another app comes and says that "it's secure" and people start using it instead of Signal or Matrix?
First, early adopters come and verify it. They bring their friends. If it's really secure and they find no serious bugs, more people join. Then, a bridge is created between the services.
> What happens if Signal starts requiring some payments (running servers is not free) and people move to other apps?
This is a problem with a non-federated protocol actively fighting against third-party apps and servers. It will definitely happen with Signal in this way, which is why I'm not using it and not recommending it.
> Maybe those other apps are open source and federated, but the federation protocol is found later to have a backdoor, or some instances run data mining on the messages, or something like that.
Such a backdoor would be quick and easy to fix, and to verify as fixed. Unlike Pegasus on Apple devices. No system is ever 100% secure.
> Who will be faster: the users flocking to those apps, or the small number of verifiers getting to work and detecting those issues?
Users are typically very slow to move. See Whatsapp & Facebook. But what's your point?
> If you want most apps to be like Signal or Matrix, the solution is easy: push for legislation and certifications that ensure that, no matter the app, a certain level of security and privacy is enforced.
This is definitely an important thing to do, but it's not enough. There is such legislation already in Europe: GDPR. Unfortunately it cannot dramatically change the industry quickly, because of the monopolies and network effects.
> First, early adopters come and verify it. They bring their friends. If it's really secure and they find no serious bugs, more people join. Then, a bridge is created between the services.
That's quite the optimistic path. What if the app starts being used by teenagers, for example? Or by people with less technical abilities?
> This is a problem with a non-federated protocol actively fighting against third-party apps and servers.
Federated services still need to pay for their servers.
> Such a backdoor would be quick and easy to fix
Again, pretty optimistic on that.
> and to verify as fixed. Unlike Pegasus on Apple devices. No system is ever 100% secure.
Pegasus was external malware. What makes you think a Pegasus for federated servers or open source phones can't exist?
> Users are typically very slow to move. See Whatsapp & Facebook. But what's your point?
Security research takes time, probably more time than users need to move from apps.
> There is such legislation already in Europe: GDPR.
And GDPR has accomplished way more in way less time than technical solutions. I wonder why.
> Unfortunately it cannot dramatically change the industry quickly, because of the monopolies and network effects.
Don't those monopolies and network effects affect the technical solutions you propose too?
My point is that of course you need good technical solutions, but by themselves they are useless: most people don't have the time and knowledge to reliably distinguish the good ones from the bad ones (and "good" and "bad" are relative too), and other differentiating features (price, capabilities, ease of use) that are easier to notice will weigh more in their decisions.
This is not a problem unique to tech and privacy. Food safety, climate, building safety... almost everything you buy has had a similar issue of how to get "things done right" when deciding whether it's done right is hard for most people. Almost everything has been solved (or nearly solved) with regulation; "better products" alone haven't been enough.
You could go back even further if you wanted. Possibly to the first handwritten letter delivered by a third party. That's where all the potential for censorship and tampering started.
Truth is, even as our tools evolve, our chains evolve faster.
Signals intelligence intercept and analysis centres have been called Black Chambers for a long time, including the first such group in the US, the predecessor to the NSA.
I don't think this is a fair characterization; it should not be most people's life goal to fight for their privacy against big companies. Some people make it theirs, and that's fine, but it's definitely not something to expect from most people, in the same way that you don't expect everyone to be actively fighting for clean tap water or drivable roads.
Instead, we as a collective decided to offload these tasks to the government and make broad decisions through voting. This allows us to focus on other things (at work on our actual jobs, at home on whatever matters to us).
For instance, I tried avoiding Facebook for a while and it was working well; I just missed a few acquaintances but could keep in touch with the people who matter to me. Then suddenly they acquired WhatsApp. What am I to do? Ask my grandmother and everyone in between to switch to Telegram? Instead, I'm quite happy, as a European, about the GDPR and how the EU is regulating companies in these regards. It's definitely not there yet, but IMHO we are going in the right direction.
The first digital phones ran on 56-bit (symmetric?) encryption. They certainly weren't powerful enough to run public-key cryptography at safe key sizes, which is needed for secure E2E.
Probably because most people estimate the risk to be low enough (correctly or not). If I were a politically sensitive person in China, for instance, I'd definitely be more wary.
(1) happened with the first multitasking OS, or possibly when CPUs got microcode; Android and iOS are big increases in freedom in comparison to the first phones.
(2) and (3) are bad, but tangentially bad to this: it's no good having an untainted chat layer if it's running on an imperfect OS, anywhere from hostile to merely lowest-bidder. (And most of the problems we find in software have been closer to the latter than the former.)
(4) for all their problems, the American ones held off doing that until there was an attempted coup, having previously resisted blocking Trump despite him repeatedly and demonstrably violating their terms.
Re (1): That's technically true, but misses the point when viewed holistically. Those first feature phones were mostly just used to make quick calls to arrange an appointment or discuss one or two things. They were not a platform mediating a majority chunk of our social lives like today's phones are.
That also seems to miss the point, as for most of the stuff you’re describing the phone is a thin client and the computation is on a server, and that would still be true even if the phones themselves ran only GPL-licenced code and came with pre-installed compilers and IDEs.
Will this work differently depending on what country you are in?
For instance, back in 2010 there was that thing about Australia ruling that naked cartoon children count as actual child porn. [1]
It's perfectly legal elsewhere (if a bit weird) to have some Simpsons/whatever mash-up of sexualised images, but if I flew on a plane to the land down under, would I then be flagged?
edit:
If this is scanning content on your phone automatically, and you have WhatsApp or some other messenger set to save media automatically, then mass-texting an image that is considered 'normal' in the sender's country but 'bad' in the recipient's could get a lot of people flagged just by sending a message.
Sorry to say that, but stuff like this has to happen at some point when people don't own their devices. Currently, nearly no one owns their phone and at least EU legislation is underway to ensure that it stays this way. The next step will be to reduce popular services (public administration, banking, medicine) to access through such controlled devices. Then we are locked in.
And you know what? Most people deserve to be locked in and subject to automatic surveillance. They will wake up when their phone automatically generates a China-style social score, but by then it will be far too late. It's a shame for the people who fought this development for years, though. But the "I have nothing to hide" crowd deserves to wake up in a world of cyber fascism.
When you describe that segment of the population as deserving the consequences, it does not seem far to me from "people who want trains instead of cars deserve trains". Is that relevant? The big problem is that people buy controversial services, hence finance, endorse, and strengthen them, and in some cases these services drive the acceptable alternatives extinct: the big problem is that people do not refuse what is not sensible, and sensible people have to pay.
Already here where I live, I cannot get essential services¹ because that practice made them extinct!!!
¹(exactly: public administration, tick; banking, very big tick; medicine, not yet. And you did not mention ___the cars___, and more...)
Other note: you wrote
> nearly no one owns their phone
and some of us are stuck with more reliable older devices, which may soon need some kind of replacement. If you know of exceptions to the untrustable devices, kindly share brand/model/OS/tweak.
This is beautiful, and news (to me) that makes me breathe again, as if among alpine pines. Thank you!
(It makes me dream of a version which also is ruggedized and uses a high resolution OLED display... But this can easily already be the new palm companion.)
I hope that the current difficulties in (global) manufacturing will soon be over.
Edit: having mentioned "pines" was not meant to be a pun - I just realized the odd potential reference to the PinePhone.
Why do they DESERVE to be so? Despite what you say there was and is no mechanism to really change or affect the course of these affairs.
Apple? How? Your other option is Android; who do you choose when they start to do it?
Or when governments decide to mandate that ALL phones need to legally have "scanning all the files on it and report them back to the police database" mechanisms?
The EU? How, particularly? An organization that has been deliberately structured to supersede the legitimacy of nation states and export its power to all of its members at the whim -- sometimes it seems -- of some aging, out-of-touch bureaucrats.
I'm not even a #brexiter, btw.
Should Scotland become an independent nation? There was a public debate and people had opinions -- and there were mechanisms in place to act on and make a change, as an example.
There has been no public debate on this in a national sense (anywhere), and also no mechanisms by which people could decide to change it. I'm not sure people deserve it.
What do you mean, authorized by whom? The software applications I run have not been greenlit by any third party. Think about a model in which you code, then wait for authorization before deployment... The worst pulp novel.
Well this really debunks my common phrase “Apple is a Privacy company, not a Security company”
I can’t say I’m surprised they are implementing this (if true) under the radar. I can’t imagine a correct way or platform for Apple to share this rollout publicly. I’m sure nothing will come of this; the press will ignore the story, and we’ll all go back to our iPhones.
Apple only claims to care about privacy because they couldn't manage to compete with Google at running an ad-serving cloud service. Since they couldn't sell ads in meaningful numbers, they figured they might as well brag about it.
But Apple's iCloud, for example, doesn't intrude on your privacy any more or less than Google Photos does.
This is really a pointless comment - yes, all companies are ultimately there to make money, but that does not mean always blindly doing whatever makes the most money in the short term. Clearly Apple sees value in marketing themselves as privacy-friendly.
The terrifying part about this is potential abuse. We have seen people arrested for having child porn in their web cache just from clicking on a bad link. I could inject your cache with any image I want using JS.
Presumably the same could apply to your phone. Most messengers save images automatically. I presume the images are immediately scanned against hashes once saved. And the report is immediately made if it passes the reported threshold. There’s no defence against this. Your phone number is basically public information and probably in a database somewhere. You have no protection here from abuse, if you’re a normal citizen. I bet most people don’t even turn the auto save setting off on WhatsApp.
This has worrying privacy implications. I hope Apple makes a public announcement about this but wouldn’t be surprised if they don’t. I also would expect EFF will get on this shortly.
> Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content.
> That’s the message they’re sending to governments, competing services, China, you.
Is it? That’s just something the tweets have read into it.
The message could equally well be ‘We won’t become an easy political target by ignoring a problem most people care about, like child porn, but we are going to build a point solution to that problem, so the public doesn’t force us to bow to government surveillance requests.’
It’s easy to nod along with an anti-Apple slogan, but we need to consider what would happen if they didn’t do this.
If Apple thought this kind of dragnet was a losing political fight that tells me they've become too weak to stand up to unreasonable government demands. Where is the company that won the public opinion battle over unlocking a mass shooter's phone?
This isn’t a government demand. This is something the public cares deeply about, and Apple is solving it their own way.
Public opinion is not in favor of giving safe harbor to pedophiles and child pornographers, and I can’t see why anyone would even want Apple to fight that battle.
Not sure where you got that information. I haven't seen any official announcements, so I assumed based on it being US-only and rolled out with no fanfare (except critical press stories citing unnamed sources) that it's something the FBI asked for.
Not sure where you got that - seems like you’ve just made up an explanation that it’s an FBI demand out of whole cloth.
What we do know is that it is a proprietary solution using a proprietary hash which only applies to Apple products and that Apple has always presented themselves as a family friendly company that doesn’t support criminal use cases.
Everything points to this being something Apple thinks needs to be solved before the public asks why they haven’t.
If it was a government demand, we’d presumably see Google responding to it too.
You still haven’t explained why not working to deter pedophiles and child pornographers is an important battle for them to fight.
They might think it’s a problem they want to solve on their own terms, and that would seem to be what they have done here.
Not solving a problem people care about just because the government also cares about it seems illogical. I think a lot of people like the idea of corporations taking responsibility for the social problems they cause without needing to be forced to do so by the government.
False positives, what if someone can poison the set of hashes, engineered collisions, etc. And what happens when you come up positive - does the local sheriff just get a warrant and SWAT you at that point? Is the detection of a hash prosecutable? Is it enough to get your teeth kicked in, or get you informally labeled a pedo by your local police? On the flip side, since it's running on the client, could actual pedophiles use it to mutate their images until they can evade the hashing algorithm?
False positives are clearly astronomically unlikely. Not a real issue.
Engineered collisions seem unlikely too. Not impossible. Unless there is a straight up cryptographic defect in the hash algorithm, it seems hard to see how engineered collisions could be made to happen at any scale.
At Apple scale, a once-in-a-million issue is going to ruin the lives of roughly 2,000 people (on the order of two billion devices times a one-in-a-million failure rate). A false positive here is not a mild inconvenience. It means police raiding their house, potentially damaging it, seizing all of their technology for months while it is analyzed, and leaving these people highly stressed while they try to put their lives back together.
This isn't some web tech startup where a mistake means someone's t-shirt got sent to the wrong address. People's lives will quite literally be ruined by mistakes here.
Is it a once in a million issue? The collision rate matters. It could easily be much higher and then it wouldn’t matter that it was being used at Apple’s scale.
If this were the kind of hash where flipping one bit of the input completely scrambles the output, the bad guys would just flip one bit of the input to evade it. Obviously a PhotoDNA-type hash is going to be easier to cause a collision with, because it averages out a ton of the input data. According to Wikipedia, the classic way to do it is to convert the image to monochrome, divide it into a grid, and average the shade of each cell. If they're doing that, you could probably just pass in that intermediate grid and it would "hash" to the same result as the original picture, with no porn present.
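To make that concrete, here is a minimal sketch of that grid-averaging idea (an "average hash"), in Python with Pillow. It illustrates the general class of perceptual hash only; PhotoDNA's actual algorithm is not public, so treat every detail here as an assumption:

    # Toy "average hash": grayscale, shrink to an 8x8 grid, then one bit
    # per cell depending on whether it is brighter than the mean.
    # Illustrative only -- real systems are far more elaborate.
    from PIL import Image

    def average_hash(path, grid=8):
        img = Image.open(path).convert("L").resize((grid, grid), Image.LANCZOS)
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits  # 64-bit fingerprint for the default 8x8 grid

    def hamming(a, b):
        # Small distance => perceptually similar images.
        return bin(a ^ b).count("1")

The point of comparing fingerprints by Hamming distance against a threshold, rather than by exact equality, is that a recompressed or slightly edited copy still matches.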
Why do you think that? There are plenty of whitepapers on fooling NNs by changing random pixels by a bit, so that the picture is not meaningfully changed for a person, but the computer will label it very differently. Do note that these are not cryptographic hashes because they have to recognize the picture even when compressed differently, cropped a bit, etc.
We know perceptual hashing and cryptography have incompatible requirements. Think of an image (1), the same image with one pixel changed (2), and a very different image (3). A perceptual hash should say 1 and 2 are related and 3 is not. For a cryptographic hash, that similarity-preserving behavior is a defect; cryptographers would call it failing a chosen-plaintext attack.
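For contrast, here is the avalanche behaviour a cryptographic hash must have, and a perceptual hash must not, sketched with Python's standard hashlib:

    # Flip a single input bit and a cryptographic digest changes beyond
    # recognition (the avalanche effect). A perceptual hash needs the
    # opposite: near-identical inputs, near-identical outputs.
    import hashlib

    data = bytearray(b"the same image bytes")
    before = hashlib.sha256(bytes(data)).hexdigest()
    data[0] ^= 0x01  # flip one bit
    after = hashlib.sha256(bytes(data)).hexdigest()
    print(before)
    print(after)  # roughly half the output bits differ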
The hashes will of course be provided by local governments, who have the ultimate authority (because they can forbid Apple to sell there, and Tim Cook never says no to money).
Ok. I can say this since I don't have anything to hide (edit: 1. that I am aware of and 2. yet).
I switched to the Apple ecosystem 2 years ago and have been extremely happy.
I couldn't see a single reason to switch back.
Today that reason came. What goes on on my phone is my business.
I guess fairphone next.
Again, I think I have nothing to hide now, so I can say this loud and clear now. Given what recent elections have shown us, we cannot know whether I'll have something to hide in a few years (political, religious? Something else? Not that I plan to change, but things have already changed enormously since I was a kid 30 years ago.)
At the end of the day laws are relative so to say. The thought behind such a system is noble indeed, but as we've seen, anything any government gets their hands on, they will abuse it. Classic example being PRISM et al. In theory it's great to be able to catch the bad guys, but it was clearly abused. This is from countries that are meant to be free, forward thinking etc, not any authoritarian regimes.
People in this thread are asking what Saudi Arabia, China etc will do with such power that Apple is adding, you bet your ass that they'll use it for their own gain.
I want to believe in such systems for the good. I want child abusers caught. But a system that equally can be abused by the wrong people (and I guarantee you that will be western countries too) ain't it.
It's not even hypothetical, it's already known that Apple has to use servers operated by China for their operations there [1] so this capability will be fully within their hands now too to arbitrarily censor and report iPhone users for any material they want to disallow.
how the fuck am i supposed to know if that image i downloaded from some random subreddit is of a girl who is 17.98 years old? how long until we just use a NN to identify images of children automatically? she looks pretty young so i guess you will get disemboweled alive in prison? what is stopping someone from planting an image on your phone or a physical picture somewhere on your property? im so tired of this fucking dogma around child porn. you can always identify the presence of dogma by the accompanying vacuum of logic that follows in its wake. a teenage girl can go to jail for distributing pictures that she took of herself. do i even need to say more?
And with this, the fear politics are in effect. Just from reading the comments, it seems one can no longer be 100% sure their phone is clean. So people will live in constant fear that on some random Tuesday the cops will come knocking, their reputation will be destroyed, and in the end, when they're cleared, they will have incurred incredible financial and mental costs. And this is aside from the fact that your phone should be your phone, and no one else should be allowed in.
You demo this tech working on child porn, it maybe shows its worth on some ISIS training videos, but before long China will be demanding access on their terms as a condition of accessing their markets.
And at that point the well meaning privacy advocate who worked hard to get some nice policies to protect users is booted off the project because you can hardly tell the shareholders and investors who own the company that you're going to ignore $billions in revenue or let your rival get ahead because of some irrelevant political movement on the other side of the world.
It's happened plenty of times before and it'll happen again.
What I find disturbing is that almost all commenters here took this rumour as fact. There's nothing to substantiate it, there's no evidence of the scanning actually happening, and there's no historical precedent of Apple doing anything similar. And yet people working in tech, with supposedly developed critical thinking, took the bait.
Why? Is it simply because it fits their world view?
You’re right of course but I think in this case it was due to the reputation of the poster on Twitter. At least, that’s the only reason I would take this rumor seriously. But yeah, a rumor is a rumor still.
I found another source (https://www.macobserver.com/analysis/apple-scans-uploaded-co...) saying apple was already running these scans on iCloud using homomorphic encryption… in 2019. It doesn’t really make sense for them to run it on device. Apple has the keys to unlock iCloud backups on their server and a sizable portion of users have those enabled, so why bother to run these on device?
I’m not sure if it’s a rumor or not but there was a thread on HN the other day about Facebook exploring homomorphic encryption for running ads on WhatsApp and I wonder if wires got crossed?
This matches up with how I view Apple's corporate thinking: "we know what's best", "the consumer is not to be trusted".
Apple limits access to hardware and system settings; they block apps that don't meet moral standards, are "unsafe", or just might cause Apple to not make as much money. They do not significantly care what people say they want; after all, they know best.
A lot of people love not having options and having these decisions made for them.
I would never want a device like that, or one with something that scans my device, but I think the vast majority of their customers, if they even hear about it, will think "I trust Apple, they know what's best, it won't affect me".
I'm OK with Apple doing it because I think most Apple users will be OK with it. I would not be OK with it if all Android devices started doing it, though.
That’s sort of the genius of Apple though, isn’t it? They make products that hold your hand tighter than any designer but their marketing department obscures that fact as much as possible with their “think different” campaigns.
It’s less “I trust Apple” and more “if I really cared, I’d have bought from another designer”.
This isn't exclusive to Apple - Microsoft recently decided that starting in August, Windows Defender will have the option for blocking PUAs enabled by default for users who don't have other third-party security software [1]. This, I believe, also falls under "we know what's best" and "the customer is not to be trusted", or "is too stupid to run things on their own".
This does look good on paper - caring for customers, their security and peace of mind - but tomorrow it might be total vendor lock-in, with no way of installing any software other than what the corporate entities approve.
I don’t see why a circumventable default block of random executables is bad. People (even technically adept ones, who are a small minority) are very easy to fool. Depending on how easy it is to allow execution of such a program (which is a UX problem) it can indeed be what prevents a botnet on a significant number of computers.
I'm a little bit confused here and hope maybe some of you can clear this up.
My parents took lots of photos of me as a baby/small child. Say lying naked on a blanket or a naked 2yr old me in a kiddie pool in the summer in our backyard. Those are private photos and because it was the 1970s those were just taken with a normal non-digital camera. They were OBVIOUSLY never shared with others, especially outside immediate family.
Transform that into the 2020s and today these type of pictures would be taken with your iPhone. Would they now be classified as child pornography even though they weren't meant to be shared with anyone nor were they ever shared with anyone? Just your typical proud parent photo of your toddler.
Sounds a bit like a slippery slope, but maybe I am misunderstanding the gravity here. I'm specifically highlighting private "consumption" (parent taking picture of their child who happens to be naked as 1yr olds tend to be sometimes) vs "distribution" (parent or even a nefarious actor taking picture of a child and sharing it with third parties). I 100% want to eliminate child pornography. No discussion. But how do we prevent "false positives" with this?
As with all horribly-ill-defined laws, it depends how the judge is feeling that day and their interpretation of the accused's intent. If the case can be made that the images arouse inappropriate gratification, they can be deemed illegal.
If that sounds absurd - most laws are like that. For better or worse, there's a human who interprets the law, not a computer. It's unfortunate Apple is choosing to elect a computer as the judge here, for exactly concerns like yours.
I believe there is a large database of known child pornography.
Unless someone has been distributing photos of your kids as child porn (which would probably be good to know) it's unlikely any of your photos will match the hashes of the photos in that database.
I'm not sure that's how it works, but that's what I've gathered from the other comments on this post.
So far. Many websites already use NN trained to detect any nudity. It is only a matter of time before it lands on all consumer computing devices. The noose will keep on tightening because people keep debating instead of protesting.
Nudity detectors are decades old. People use them when they want to block any nudity. Making iCloud servers scan for nudity would have been simpler than making this new system. And pornography is easier to access than ever for most people.
Well, this is very problematic for a privacy-conscious company. Under no circumstances do I want Apple to scan my private files/photos, especially if a flag means that someone gets to look at them to determine whether it is a true positive or a false positive.
Also, this functionality isn't something they should be able to implement without telling their end users.
It is also problematic because it will just make cyber criminals more technically aware of the countermeasures they must take to protect their illegal data.
The consequence is very bad for the regular consumer: the cyber criminals will be able to hide, while the government gains the ability to scan your files. The end consumer loses, again.
Every so often I feel a wave of revulsion that the computer I use the most — my iPhone — is an almost completely closed system controlled by someone else.
Contrast this with my desktop where, in the press of a few buttons, I am presented with the source code for the CPU frequency scaling code.
This will be used for anti-piracy, government censorship, and targeted attacks, as always. There's no such thing as "we're only scanning for CP". By creating the tool, the company can be compelled to use it in other ways by the U.S. or foreign governments. Apple already complies with anti-LGBT countries and will change their App Store to suit each one of them. What happens when they're required to also scan for LGBT materials? They'll comply, because Apple doesn't actually have morals.
On top of this, it gives Apple far too much power. What happens when someone they don't like owns an iPhone? They can pull an FBI, put the content onto the device, and then have it "automatically detected".
Since Snowden, I use my phone in a minimalistic way. Phone calls. Minimal texting. No games. Banking apps only if necessary.
Treat your phones as an enemy. Use real computers with VPN and software like Little Snitch when online. Use cameras for photography and video.
The benefits of this approach are immense. I have long attention span. I don't have fear of missing out.
If governments want the future to be painted by tracing and surveillance mediated through big tech - let's make it mandatory by law. And since big tech will reap the benefits of the big data, they must provide the phones for free.
:)
>Treat your phones as an enemy. Use real computers with VPN and software like Little Snitch when online.
I'm assuming your "real computer" is a mac (since little snitch is mac only). What makes you think apple won't do the same for macos? Also, while you have greater control with a "real computer", you also have less privacy from the apps themselves, since they're unsandboxed and have full access to your system.
I get that, but on the other hand if someone says "if you care about privacy, you should use an e2e messenger like whatsapp", then I'll have serious doubts about whether you're actually knowledgeable or just spouting buzzwords.
Your advice might be sound with the proper operating system choice, but the fact that you made such a glaring error in your initial comment makes it hard to take you seriously. It also brings into question whether you actually have a good understanding of privacy/security, or are just LARPing.
Error? Seriously? Did you make a wrong assumption that I care? About karma?
I share my experience and my point of view. Nothing more, nothing less. And the most important point is not technical.
The most important point is personal habits. To overcome smartphone addiction. We put our lives, thoughts and photos on closed technology and have false expectations that someone will care for our security or wellbeing.
Yes, I am a dumb person obviously. Thanks for your invaluable input. Dude.
But my use case is not to hide or remove digital exhaust.
Creating a habit of limited usage is more important and realistic. The funny part is that, as a side effect, I don't carry my smartphone around so much. I have a separate GPS system in my cars and a dumb phone for emergencies.
If you're treating your phone as hostile why would you skip gaming apps but use banking ones? That seems backwards if you're assuming your mobile is the weak point.
In the EU, the PSD2 directive obliged banks to provide strong authentication for the customer login process and various operations on the account, incl. payments of course. As a result, mobile applications are used most of the time - either to confirm logins or as software OTP generators (biometric verification is also supported); printed code lists are rather obsolete now, and some banks may actually charge you extra for sending text messages with such codes. I know there are hardware security tokens, but in all these years I haven't seen anyone using them here.
So, it's rather hard to avoid banking apps.
Also, the PSD2 directive implements the duty of providing API infrastructure for third-parties. [1]
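As an aside, the "software OTP generators" mentioned above are typically TOTP implementations (RFC 6238). A minimal Python sketch follows; the base32 secret is an invented example, and any given bank's actual scheme may differ:

    # Minimal TOTP (RFC 6238) sketch of a "software OTP generator".
    # The base32 secret below is an invented example, not a real credential.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # six digits, changes every 30 seconds

A bank's server, knowing the same secret, computes the same code for the current 30-second window and compares.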
There still exist banks that provide you with an RSA token. If a bank does not give you that option, how can one (sorry) "of the right segment" do business with it? You look at a service provider, you see all kinds of bad signals, and you hire it anyway: this is a big part of what is destroying us!
Restraining myself from writing something very strong about phone security and general user expectations (duly low) - let me stress the legal side again: how do you prove to a bank that, in a case of theft from the account, your device was safe? People who see their money stolen then have disputes with the bank about responsibility.
BTW: PSD2 has been, in many parts, a huge nightmare. Furthermore, healthy parts of it for some reason have not been implemented.
I don't agree with some (most?) of the parent poster's comments in this thread.
But I feel there's a valid argument to be made that, if your adversary is the sort of people who'd be feeding Apple image hashes to find people, you'd probably be wise to carry a regular phone on which you do boring, norm-core sorts of things.
A phone you use to take pictures of cats and pay your rent using banking apps and call your parents - while not using it to communicate with your dealer or your anarchist collective or your friendly investigative journalist.
Sorry, I misunderstood your question. I just don't like mobile games, and mobile banking is an acceptable use case for me at the moment. But PC banking is obviously a better choice. The general idea of treating your phone as a problem has deep personal benefits. It started for me with the realization (years ago) that I am an addict for "dopamine" hits and this "thing" in my pocket has a direct influence on my mental performance.
>It started for me with the realization (years ago) that I am an addict for "dopamine" hits and this "thing" in my pocket has a direct influence on my mental performance.
Sounds like this is more about "checking your phone less" than "improving security/privacy". This is evident elsewhere in your advice, e.g. "phones as an enemy", but no advice about kill switches for the microphone? Or some sort of mitigation against GPS/mobile-network tracking?
I was under the impression that one of the reasons why these tools aren’t available for public download is because the hashes and system can be used to design defeat mechanisms? Doesn’t this mean that someone who has an image and a jail broken device can just watch the system, identify how the photo is detected, and modify it so that it doesn’t trip the filter?
PhotoDNA and systems like it are really interesting, but it seems like clientside scanning is a dangerous decision, not just from the privacy perspective. It seems like giving a CSAM detector and hashes to people is a really risky idea, even if it’s perfect and it does what it says it does without violating privacy.
If the algorithm and the blocklists leaked, then not only would it be possible to develop tools that reliably modify CSAM to avoid detection, but also to generate new innocent-looking images that are caught by the filter. That could be used to overwhelm law enforcement with false positives, and it could also be weaponized for SWAT-ing.
Fortunately, it seems that matching is split between client-side and server-side, so extraction of the database from the device will not easily enable generation of matching images.
I just assumed the entire FAANG group scanned user content for CP already. I mean, the EU recently passed a vote that extended permission for companies to scan and report user content for this purpose without it being considered a privacy violation [1] (first introduced in 2020). And I recall MS[2]/Google[3] being open about this practice in the past.
Personally, I somehow doubt that MS/Google weren't scanning private (i.e. not shared) content for this type of material. But you can't have transparency with these behemoths.
This might be an unpopular opinion, but catching people sharing CP images is like catching the end users of drugs. Yes, it's illegal, but the real criminals are the ones producing the drugs. Since it's very difficult to get to them, you just arrest the end users.
Another side note concerns the near future: when someone comes up with synthetic CP images, will those also be criminalised?
It's not just unpopular, it's also wrong: when you use drugs, you are almost entirely harming yourself (leaving aside funding all sorts of illegal activities, just focusing on the act itself). When you propagate CSAM, you are causing psychological harm to the victims, and can cause them to physically harm themselves or be harmed by others. So you are a criminal, harming a victim as well.
How would a victim of CSA ever find out that I downloaded a particular file? Surely the harm there is caused by the distributor, not the consumer.
Conversely, when I use drugs, I'm paying someone, so I'm actually directly funding criminals. Depending on the country and the drugs, this is often putting cash in the hands of a very violent cartel.
You are correct; however, vincnetas made the comparison between distributing CSAM and buying drugs, so it is not aemreunal's fault - you are replying to the wrong person. A comparison that would make better sense would be between those in possession of CSAM and those who buy drugs.
I have so many questions about the implementation details.
1) Does this work only on iPhones, or on iPads as well?
2) Is this part of a system software update? I wonder if that will show up in the notes and how it would be spun. "In order to better protect our users ..."
3) If it is part of the system software update, will they be trying to make it run on older iDevices?
4) Is it just photos in your photo bin, iCloud, or does it start grabbing at network drives it attaches to? I could see the latter being prone to blowing up in their proverbial faces.
Even when this reaches its logical conclusion - policing copyrighted and political content - people will still be content to use their i-spy devices. The future is grim; it's now.
How do they determine if an image is child porn? My wife has an iPhone and we take pictures of our baby daughter on it, sometimes in diapers and sometimes naked. Our intentions are not pornographic but now I am worried about apple's algorithm flagging them as such.
It just gathers hashes without judging or interpreting; that is the first phase. When a child porn picture is discovered and inquired about, they compare its hash with what they have in the database and see who else had that picture on their phone, allowing them to build a nicely timelined trace of it and even discover the original source.
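In spirit, that first phase is a set-membership check. A hypothetical Python sketch, where the database contents, the function names, and the use of an exact SHA-256 file hash are all my assumptions (reports suggest a perceptual hash with threshold matching would be used, not an exact one):

    # Hypothetical hash-list matching, for illustration only.
    import hashlib

    known_bad = set()  # would be filled from a database such as NCMEC's list

    def file_hash(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def flagged(path):
        return file_hash(path) in known_bad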
What happens when a theocracy demands that Apple check for hashes of images that disrespect their prophet? To me this sounds potentially more scary and dystopian than surveillance in China. But if I'm honest, I don't know that China isn't scanning citizens' devices for illegal hashes.
It is lovely when your "own" device works against you to catch you in possession of illegal numbers: https://en.wikipedia.org/wiki/Illegal_number.
And surely we can trust Apple that it will only be used for this kind of content instead of for example government leaks.
I would like to hear the strongest case for the privacy trade-off. How many more children will be physically recovered versus existing methods? What is the reduction in money flow to abduction activities?
This might be naive, but I would guess that the best way to fight this kind of thing is to let people know more of the case details. People would protect themselves, find the crimes, and stop unwittingly supporting them. For instance, if it can be shown that cryptocurrency or encrypted messengers are used to a significant extent, the community will either find a technical solution, or stop using it.
This is terrifying. The possibilities for extraordinary abuse are endless. What's surprising to me is the complete lack of media focus on this topic. Why isn't this being hotly debated on TV? Scanning people's photos is just OK now?
Back to an Android phone, once I confirm this story is true.
If you like this, I have some other innovations that you may be interested in:
* A car that automatically pulls over when a police cruiser attempts to intercept you
* A front door that unlocks when a cop knocks
* A camera that uses AI to detect and prevent the photography of minors, police, and critical infrastructure
* A Smart TV that counts the number of people in your living room to ensure you aren't performing an unauthorized public broadcast of copyrighted content
Surely, at least one of those sounds ridiculous to you. As well-intentioned as this scanning may be, it violates a core principle of privacy and human autonomy. Your own device should not betray you. As technologists, just because we can do something doesn't mean we should.
> The problem with allowing this is that you’re paving the way for future tyrants to use it against us.
It's funny how everybody talks about the future. This is happening now. Remember how a certain German guy took power some 90 years ago? He was elected.
Not your point, but: technically he was Austrian by birth, stateless since 1925, and tried at least 7 times to get German citizenship before being elected in 1932.
I trust we live in a different world today, but it is concerning to read about mandated vaccines to get a job and mandated contact-tracing in countries such as Germany.
Apps weren't mandated; restaurants and the like were required to ask you to check in, but people didn't always fill in the forms, or filled them with junk (especially after the data was abused).
People nowadays voluntarily carry tracking devices. This will not stop getting worse until that behavior is denormalized.
The power to be gained from abusing it is beyond irresistible. Expecting those in power not to abuse it is like expecting a heroin junkie to be a good pharmacist.
> People nowadays voluntarily carry tracking devices
on the strict premise that tracking is an exceptional, fair use by a law enforcement agency. Do not mix up the voluntary acolytes of Zuck with people who simply want tools. (The use of 'tools' in that sentence is an originally unintended pun. I will keep the term 'acolytes', though I'm tempted to replace it for the pun.)
The technology is there, now we only need the motivation.
If politicians decide that they want it now, they can simply orchestrate a media campaign and have it. The next time an "outrageous" crime is committed, they can make sure it stays in the media spotlight and is portrayed as "if we don't act now, very bad things will happen", then slide in their solution.
* Cars can automatically pull over: install a cheap fuel cut-off switch that can be activated by short-range radio. In many places people are used to adding devices for toll collection anyway. People are also used to paying for regulatory inspections on their vehicles.
* For older cars, simply connect an NFC reader that unlocks the car's central locking system with a master key. For new cars, simply make manufacturers add a police master key.
* Commercial drones already stop their users from flying over forbidden areas; simply extend that to smartphones. Smartphones have enough power and sensors to identify forbidden locations and persons. Add an NFC kill switch, so the police can send a signal that locks down cameras.
* There were reports of Smart TVs that record all the time; simply mandate that for all manufacturers and enforce automated inspection of the recordings.
Uneven application of the law seems crucial to keep the system functioning and technology can erode that. Many simple laws, if enforced thoroughly and without prejudice, would become absolutely draconian. It is not even possible for a human to know all the laws we are meant to follow at all times, yet computers can.
Apple devices already betray their "owners", and they've been doing it for a long time.
You can't repair them.
You can't run your own software.
You can't use a better, more compliant web browser.
Businesses have to pay a 30% tax.
Businesses are forced to offer Sign in with Apple and forfeit the customer relationship.
Businesses have to dance to appease Apple. Their software gets banned, randomly flagged, or unapproved for deployment, sometimes completely on a whim.
Soon, more iDevices and Apple Pay will lead to further entrenchment. Just like in the movie Demolition Man, everything will eventually be Apple. Your car, your movies, your music, your elected officials.
I live in a major city and I’m a fan of none of these things. I’m only one data point, but yours is a sweeping and inaccurate generalization that “cities are frighteningly unsafe”.
Maybe one trusts Apple more than <insert politician>, but one cannot so easily elect Apple away.
Apple has generally locked things down successfully.
There was concern about phones being grabbed in street robberies. Regardless of whether you believe there were sweeping generalizations, Apple ended up creating more power for themselves with their Activation Lock system. If they don't want to let you sell your phone to someone, they can block use of your phone. But folks trust them to operate the system reasonably, and so far so good there.
They have locked down their app store very tightly for a variety of reasons including supposedly for security. Users have accepted that.
Apple's competitors (Google Photos etc.) are generally directly scannable in the cloud by Google et al. Facebook and others routinely scan users' photos. YouTube scans uploaded videos, etc. My guess is Apple will explain why they are doing it and users are going to be happy.
And yes, users are linking things like ring doorbells and home security video cameras together or registering them so that the police state can use them.
By enforcing the copyright laws that exist already to protect content creators from being counterfeited and ripped off by illegal scammers and bootleggers. Are you saying you don't agree with the concept of copyrights?
How does showing Lion King at your kids birthday party qualify for any of that?
I know, I know: Disney's inability to collect money for every pair of eyes watching their movies during the VHS era literally made it one of the poorest companies in the world. /s
Lots of people don't support copyright. It's a concept that should be abolished in its current form. The right to attribution is OK. The ability to restrict other forms of use is not.
Does that mean if someone spends time and effort writing a book or painting a picture that I can resell it for less than them without their permission?
Why would any fan of someone's work buy a knock-off copy? We can safely assume they are a fan; otherwise, why would they even buy it?
Also, a painting is a physical object. It's one of a kind.
EDIT: your objection and many many other questions like that are nicely argued against here: https://www.youtube.com/watch?v=mhBpI13dxkI
(The Surprising History of Copyright talk by Karl Fogel)
Yes. Definitely. The absolutely staggering majority of people writing books don't sell even one copy. And my bet is they know the odds.
EDIT1:
More than that: people don't live off royalties. Publishers do. Can people create without being paid upfront by publishers? Obviously yes. They can be paid upfront in a Kickstarter-like arrangement. They can be paid via donations later. They can even be paid via "retroactive funding of public goods" [0]. Or they can never be paid - just as most are today.
Also, imagine how much good could come of the long-standing IP properties owned by Disney if not for them sitting on it all. See the disaster that is the current management of Star Wars.
Do you know how many stories cannot be published because people sit on the IP? There are unpublished "movies" that exist only to maintain exclusive licensing deals - deals that outright prohibit studios from sitting on the IP - while the original author's work rots until it is forgotten.
Or we can talk about the mess you get when too many groups are involved. Do you know who has the rights to Westwood's Dune? Westwood itself had a limited-time license for games based on the movie Dune, which in turn was licensed from the Dune books. So you would have to deal with three license owners to make a game involving the Ordos faction. Have fun convincing EA, whoever owns the movie rights, and the original author's family that you can make a new game worth their signature on a new licensing agreement (you could probably manage if you have a small country to sell).
I've already been itching to de-cloud and de-tech my life.
If we're already getting to this stage of surveillance, I guess that's just another sign I should be getting on top of it.
Today it's CSAM. Tomorrow, "misleading information". Etc.
I'm looking to do the same - this pandemic has made me feel quite claustrophobic about the encroachment of tech and work into my personal life. I'm planning on getting a dumb-ish Nokia phone and leaving my smartphone at home to try and wean myself off it. What are your plans?
So many questions make this tweet look odd. It's a "client-side tool" - so what? An app you install? That law enforcement can install? That Apple can silently install?
It lets "Apple scan"? So Apple is going to proactively scan your photos using a tool they install?
This is just horrible… the people who actually abuse children and download such photos will now stop using Apple devices, and the rest of us are left vulnerable to misuse/abuse/corruption.
Instead of specifically targeting suspects, everyone is surveilled by default. Welcome to a world of mass surveillance.