And you can't even do that unless you've already managed to remove the image from the internet:
> This tool will not remove URLs from Clearview which are currently active and public. If there is a public image or web page that you want excluded, then take it down yourself (or ask the webmaster or publisher to take it down). After it is down, submit the link here.
If they do, they are in "deep shit" (pardon my French) legally. I actually hope they do this, and that somebody catches them in the act. I believe they would be gone soon after.
Also, I would advise anyone under GDPR legislation to request exactly with whom the data was shared, and to go on to request deletion and usage information from those parties as well. It is a pain that one has to jump through all these hoops. I would love for the GDPR to have a way of forcing such a company to make the information and deletion requests on your behalf and prove it to you.
Sadly, I believe this was not included.
So, I get why they ask for ID, even though I also get the reluctance to give them your ID since it could help tune their system.
Learning about this company (and I imagine other unknown entities are doing the same) has encouraged me to get more aggressive.
I think I will start trying some shenanigans I learned from a friend. I plan on replacing my online profile pics with random grabs from https://www.thispersondoesnotexist.com/. It might not make much of a difference for the old stuff; maybe I can successfully request a data deletion as the article suggests. At least it will introduce a little bit of noise for the AI overlords :)
5-10 years later, Facebook came up with its real-name policy and started asking people to snitch on their friends if they used a fake name. Google, and mainly YouTube, came up with a real-name policy as well, on the one hand for Google+, but on the other to try to fight comment abuse, the theory being that people are more hesitant to be a dick on the internet if they use their real name.
But people got used to that real fast, and since there were few consequences anyway, it didn't work.
People have valid reasons to use a fake name on the internet; government and business surveillance is a big one. Abusive or stalking exes are another. So is having an alternate persona (e.g. entertainers, authors) that people are trying to hide from an unsympathetic or abusive family, or from society at large.
Here is a more exhaustive list: Who is harmed by a "Real Names" policy?
This includes my avid support of corporations which make good faith efforts to defend natural rights and freedoms, and my vehement opposition to corporate/political nonsense that does not represent, in good faith, the interests of humanity and nature. “A reasonable amount” of surveillance is an essential aspect of society, but it SHOULD be considered invasive, and should never be invisible. I suspect that there will be BS metrics to evaluate how “consenting” a given individual is to NSA/Clearview type behavior, and I would hope that I am casting a shield of protection. I feel bad about the fact that an element of bitterness is necessary to be resilient. I know no other way than truth.
I truly act as if AI is learning from me, and believe that there are long reaching and metaphysical effects to all actions.
Obviously, I'm using a nym, because I've wanted my life to be faceted. I've lived very publicly before (ran for office). But in geekdom, I didn't want people to casually correlate my politics (activism) with my open source efforts.
Again, thank you.
You won't be giving them any info on you, but you won't be confounding them either.
You run facial recognition against a computer-generated face and get no matches. Surely that is the expected and intended result for both parties?
Or is it expected to match against a different face?
By returning no accounts, it demonstrates their AI isn't using those faces for identification.
Managing your privacy is a lot like CPU side channel attacks. It forces you to re-evaluate your fundamental assumptions about what information can be exploited.
As you have figured out on your own, the public should have listened to the people warning about this for more than a decade, instead of making fun of them (tin foil hats, ...).
And if anyone thinks that Google and Facebook don't have their own versions of Clearview, think again. Any form of online presence under your real name has to be minimized. It is doable, but it means that all the narcissistic self-promotion has to stop (or should I say, be cured), that you refrain from handing out any personal information on the internet (no, you won't secure my account by taking my phone number; provide me TOTP if security is really the reason), and that you keep it from being stolen by apps (application firewalls, sending back fake data, not using any Google applications, and removing their preinstalled spyware by rooting the phone).
I can guarantee you that you won't be missing anything relevant (I have been doing this for more than a decade). But. Will you do that? Can you do that? Do you want to do that? Most people would rather just take the blue pill.
This is exactly my fear. If they were more legitimate, I wouldn't be worried about sending an ID card. I'm wondering whether it makes sense to fake an ID with my real picture on it, so they can give me the data and 'delete' it, but under a fake name, to make sure I'm at least not feeding the troll.
Dumb question: how can anyone be sure that companies actually delete the data? What about the backups; what happens to the data there? How do these government enforcers verify this? Also, what about Clearview's employees abusing the data? What stops them from snooping on someone they are interested in?
This isn't a technical issue. It's a political issue.
That's why I don't think there is much of a point in trying to prevent people (e.g. by law) from crawling and using data in this fashion.
I feel like the only reasonable solution here would be to force these companies to rebuild their databases by legally limiting the lifetime of such data.
That way people have a chance to remove themselves from the database by changing/deleting their online profiles without having to use legal measures like GDPR requests. People wouldn't even have to be aware of any individual database they might be part of; they would be removed from it automatically at some point.
Another benefit of this would be that the pure cost of constantly re-crawling a giant dataset could act as a limiting factor and therefore prevent abuse.
It is worrisome, but a Facebook could produce a lot more privacy-related connections from private photos no one knows existed. I guess I was expecting that... perhaps in the future Facebook will offer this service.
I'd be way more worried if it was finding stuff like me in the background of someone else's photo in a crowded city or something like that.
The concept of being opted-in by default and being forced to authenticate yourself to opt out is getting more and more ridiculous by the day.
These systems need to be opt-in, and that requirement needs to be enforced by some kind of powerful government agency with the power to arrest and jail non-compliant operators. Anything less feels like it would end up being a complete surrender to companies like Clearview AI.
To really flip the table, we need to make it mandatory to pay people for use of their data. If a company is exploiting[ß] someone else's property, the owner must be compensated for the privilege. Take a page from SaaS companies' nickel-and-diming ("pay per use") billing strategy, too. Just turn it around: forbid blanket permissions and consents.
The idea is to ensure that other people's data needs to be universally treated as a toxic liability, not an asset.
ß: in the economic sense, although other meanings apply equally well.
So you're saying we should just surrender to companies like Clearview AI? I don't agree with your fatalism.
I think you're making a mistake in assuming that governments are unaccountable and cannot be prevented from pursuing interests that run against those of the people. That's clearly not the case. If you were right, we'd already have unchecked police surveillance (isn't the government the main consumer of surveillance products?), but we don't. That's only the case in non-democracies like China. In democracies, society (and the government itself) is capable of putting significant constraints on government action.
Privacy starts with us not revealing our private information. That's why we have curtains at home; it is we who put the curtains on the windows.
His name is about as generic as you could imagine for a white person. But they returned a bunch of images of HIM, and one Alexey Something-or-other, which could be his troll account.
Edit: the Alexey part is a joke, I'm sorry but I thought it was funny.
I always wondered how the Chinese authorities figured that out considering the massive name reuse in China. I can’t watch a Chinese movie or TV show without at least one character sharing the surname (the first part in China) of a Chinese person I know.
Smith would be our classic version in the west.
I mean, it's implicitly assumed that anything you post on the internet is public property, but legally that is not the case: portrait law (at least in my country). You can't just take someone's portrait and use it for your own gain.
Couldn't this same argument be made for Google reverse image search? Reverse image search someone's face and you're likely to get links to places that photo is located; potentially including links to different photos of their face, as well.
They're likely mirroring local copies of the photos as well when they show you the link (in case a link later 404s or changes), but so is Google reverse image search, I believe.
If your position is that both things are wrong, that's fine, but would Google then basically have to remove the entire feature? Or prevent searches for any photo containing any faces?
Imagine the story when people find out that this or another company has started scraping porn sites with facial recognition, gained access to surveillance footage from Nest or Ring, or maybe even to state and federal DOT cameras and real-time feeds from body cameras!
Facial recognition is going to need some regulation, ASAP.
> The agreement gives the company real-time access to state traffic cameras, CCTV and public safety cameras, 911 emergency systems, location data for state-owned vehicles and more. In exchange, Banjo promises to alert law enforcement to "anomalies," aka crimes, but the arrangement raises all kinds of red flags.
> Banjo relies on info scraped from social media, satellite imaging data and the real-time info from law enforcement. Banjo claims its "Live Time Intelligence" AI can identify crimes -- everything from kidnappings to shootings and "opioid events" -- as they happen.
But governments change, and the data is still around to be abused.
This is what disturbs me: how data can be abused in the future.
Years down the track they ran out of runway (the ugly side of "unicorn or bust" venture capital but that's another rant) and were bought out by Fitbit. Meh, Fitbit seemed pretty good with privacy too so that's alright, I guess?
Now Google's bought Fitbit and potentially has a bunch of very personal, private data on everyone who originally trusted Pebble.
Does the end justify the means? A pandemic like this is the ideal chance for a government to set up emergency measures like martial law, while the people themselves are too busy trying to look out for themselves and their family to be able to protest it.
Of course, anyone with half a brain already knew that unlimited data gathering, including location or personal information, was a bad thing.
The world has always been anonymous, because the capability to track large amounts of data was lacking - until recently.
Anonymity gives you safety from anyone who seeks to prey on you. I think that safety needs to be maintained. People stupidly put photos of themselves online, then face-tag their friends. This allows third parties to identify your friends and circles, and that's dangerous. All relationships should be reciprocal.
The first data privacy law ("loi informatique et libertés") was introduced in France around 1980, after a controversial government project to create a massive database of people generated a huge scandal.
So it's been possible for quite a while, it's just that it was reserved to state actors.
In a world without the printing press, anti-libel laws didn't exist. In a world without photography, rights to one's personal image and freedom from invasive shutterbugs didn't exist. Anti-wiretapping and phone-recording restrictions were necessitated by the telephone. The Bork bill protecting the sanctity of ... video store rental records ... was necessitated by videocassette technology, a video rental market, Supreme Court nomination hearings, chatty store clerks, and newspapers interested in publishing such details.
As technologies tear down and penetrate the long-standing barriers to snooping, recording, transmitting, analyzing, and acting on what had always been personal and private behaviours, societies turn to law to reinstitute those protections.
Privacy is an emergent phenomenon and a direct response to intrusions.
The people who left or were forced out often died. The world is a scary and dangerous place without a support network, which civilization basically is.
> So if you were kicked from your group for a misdeed, sure you could continue your bad deeds in the new group or you could turn over a new leaf without the weight of your past mistakes holding you back.
Outsiders were often viewed with distrust. Why wouldn't they be, when most people can only associate their leaving the safety of the community with at best a foreign way of thinking, but more likely them being forced out for past misdeeds.
It's sort of like interviewing a 35-year-old for a job who has no work experience to show for the last decade and no very convincing story as to why (you don't even really know whether their story is feasible). Why take the risk?
It's an awful dangerous place when you're alone out in the Wild West.
The general 1984-style dystopian vision is that there's a government change for the worse and you get SWAT'd out of the blue.
The most probable one is that this kind of tool would be used in far less obvious, if at all visible, ways.
In that situation some kind of honeypot/canary strategy would be nice to reveal shady use but I can’t seem to come up with a realistic one.
Most non technical people don’t understand how powerful technology is.
Personal data may soon become the same as a credit score. No data, high risk.
"We detected suspicious blah blah blah"
I wonder what would happen if everyone did.
See also the Equifax settlement. I'd say Equifax is the best / most recent example about something like this.
Equifax’s credit-data provenance is enshrined in law. Their mistake was in improperly distributing legally-owned data.
Clearview does not have legal claim to individuals’ or Facebook‘s copyrights. Its mistake is more fundamental than Equifax’s.
I wonder if this can be combined with adding a “terms of service” to your FB profile.
Corporations have perverted contract law to the point where their terms of service are binding even if you never read them, or are even aware of them.
Turnabout is fair play, right?
Remember: there are two sets of laws in the US: ones that apply to you, and another set for large corporations that cooperate with the police and military.
YouTube is so horrible now for content creation, I don't understand how content creators are able to post anything anymore without it being smashed by the copyright automation.
tl;dr ContentID is a workaround for a broken copyright system. It's not perfect, but it's better for everyone than falling back to the default of copyright through court.
Source? The US asked for his extradition and the NZ authorities raided his home, as far as I know.
I am surprised they haven't been fined out of existence yet.
In Germany, not only the GDPR but also the so-called "right to one's own picture" ("Recht am eigenen Bild") applies here. This means that no one may use or sell pictures of a person without explicit consent. Therefore, photographers must also have an explicit release from the person for the respective context of use.
The article claims you could ask for deletion of your data without uploading your ID.
So if they have data on you that is older than a month, and they did not contact you to inform you of it, you can file a complaint!
https://github.com/LINCnil/Guide-RGPD-du-developpeur/blob/ma... (in French, sorry)
Edit: also, France did fine Google just fine.
The irony is that companies like Facebook, Twitter, etc are really bothered when another business scrapes profiles to mine data uploaded to those sites.
I wouldn't be surprised to find out that one of these companies gets sued by them for "stealing" content they host and violating the licenses for user content they grant themselves via their ToS.
Copyright applies there too, and if you sued them for it, it's not inconceivable you might win.
Am I being naive, or is this being overly generous? What about this cannot be recreated with an off-the-shelf web scraper and a pretrained facial recognizer?
So, we need strong legislation around the use of this technology especially when it comes to law enforcement as opposed to trying to kill the idea itself because that's unrealistic. Just as you said, you could start it from your laptop.
Pretty much anyone with Python familiarity can do this.
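To make that concrete, the core matching step really is small. Here's a hedged sketch in Python, assuming face embeddings have already been produced by some pretrained model (dlib, FaceNet, etc.); the vectors, URLs, and function names below are invented for illustration, not Clearview's actual pipeline:

```python
# Toy sketch of "Clearview-style" face matching over scraped images.
# Assumes an upstream model has turned each face into an embedding vector;
# the embeddings here are made-up 3-d toys standing in for ~128-d real ones.
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(query, database):
    # database maps a scraped source URL to that image's face embedding;
    # returns the (url, embedding) pair most similar to the query face.
    return max(database.items(), key=lambda item: cosine_similarity(query, item[1]))

db = {
    "https://example.com/profile1.jpg": [0.9, 0.1, 0.2],
    "https://example.com/profile2.jpg": [0.1, 0.8, 0.3],
}
url, _ = best_match([0.88, 0.12, 0.25], db)
print(url)  # the closest stored face wins: .../profile1.jpg
```

A production system differs mainly in scale: billions of embeddings call for an approximate nearest-neighbor index rather than a linear scan, but nothing about the core idea is exotic.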
Which then sends another location of:
Which just ends up bouncing you back and forth, unless JS is allowed to percolate through. However, there is some useragent sniffing happening, so the exact set of headers changes.
<meta http-equiv="refresh" content="0; url=http://example.com/">
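For what it's worth, chasing these bounces doesn't require a JS engine: the refresh target can be pulled straight out of the markup. A rough sketch (the regex and helper name are my own; a real client would also cap the number of hops and vary its headers to match the user-agent sniffing noted above):

```python
# Extract the target URL from a <meta http-equiv="refresh"> redirect,
# so a scraper can follow the bounce without executing any JavaScript.
import re

REFRESH_RE = re.compile(
    r'<meta[^>]*http-equiv=["\']?refresh["\']?[^>]*'
    r'content=["\']?\s*\d+\s*;\s*url=([^"\'>\s]+)',
    re.IGNORECASE,
)

def meta_refresh_target(html):
    # Returns the redirect URL, or None if the page has no meta refresh.
    m = REFRESH_RE.search(html)
    return m.group(1) if m else None

page = '<meta http-equiv="refresh" content="0; url=http://example.com/">'
print(meta_refresh_target(page))  # http://example.com/
```

This is deliberately loose (regexes over HTML always are); an HTML parser would be more robust against attribute reordering and odd quoting.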
The more I think about it, the more I lean toward allowing both, but I can see why people would not like it.
Because celebrity by definition requires trading privacy for fame, and is almost always a decision.
We need a new legal classification for "public, but not accessible by everyone in the world for the rest of time" information, which is what most regular people assume or desire for themselves.
Based on what and how people share online, the "common" sense is an expectation of decency in not vacuuming data just because you can. You and I know that's silly, but that's not common sense.
Well, public images are public, and I don't think banning such a service would prevent governments from implementing something like this... the technological barriers are getting lower and lower.
Meanwhile this company has been nothing but privacy abuses and lies to the public. If it isn't broken up by law, it will be interesting to see what the people do.
Yikes, this is like Minority Report - level weirdness.
Also countries should start issuing arrest warrants and sanctions against them.
You don't know why they're stalking you.
On the face of it, it's for law enforcement, in case you decide to commit a crime sometime in the future.
...or it could be so that shops can raise their prices when you walk in and the facial recognition picks you up.
...or it could be for a future employer to decide that the kind of bars you visit means that you're not the right "social fit" for a job.
...or it could be for... anything.
You have no control and that's the scary thing.
There is a difference, or at least there conceptually was, between posting your life story and all of your thoughts on your central LinkedIn profile, versus having two dozen different "blogs" of sorts over the years, Steam accounts, Facebook, MySpace, Flickr, usenet groups that come and go and we think of as ethereal. When you see all of that stuff pulled together it could be deeply unsettling.
Of course that was foolish -- eventually networking, storage, and computation would allow for everything to be ingested, and facial identification would greatly assist in pulling it together -- but it seemed dystopian at the time.
They are using PII of hundreds of millions of Europeans without written consent.
That alone should mean billions of euros in fines.
It would require some serious education before the public wakes up to the dangers of private companies running amok with their data. Sad thing is, it is already too late. It is going to be very difficult to put a lid on this. This is a company that we (now) know about - how many are there silently working in the shadows that we don't know about?
Exploitation is the best education there is. Everything is fine until it isn't. One day, companies and governments are going to start doing things with personal information that are unacceptable to even the average person. By then, it will probably be too late.
Just like your friends, I personally don't particularly care either, but from time to time I have tried to understand the privacy crowd's obsession with this issue and the rationale behind laws like the GDPR and CCPA, as well as the desire for even more restrictive laws and I truly don't get it.
Is there a manifesto somewhere I'm missing? Some essay or thinkpiece that lays out in detail the case against collection of user data?
Take any century in the last 2000 years of human history, and there are files about people. For long it was sculpted or hand written. Then it was printed. Now it's digital. But it's the same thing, only the scale and speed change.
And at any point during those centuries, somewhere in the world, some entity (it doesn't have to be the gov) does bad things with those files. It's different every time. Excluding, killing, tracking, stealing, controlling... The form changes every time, but it's the same thing: abuse of those files.
It would, of course, be less of a problem if the information access was perfectly symmetrical. If anybody could access anybody's data, society would probably have a hard time for a few years, then adjust. And maybe become more fair.
But that's not what's happening. Here it just reinforces power asymmetry. And it creates incentives with huge bias that affect everybody's life.
There are three reasons why people give your answer.
1 - We had a nice run for a few decades in North America and Western Europe. It's been a sweet life, and the human mind sets it as the new baseline. Now people see this as normal, and anything else as the exception.
However, those decades ARE the exception. An exception we need to maintain and preserve as best we can.
2 - We are already pretty bad at making the connection between our misery today and the consequences of our past lifestyle, but today's information system is making it extra hard.
There are several factors for this: those in power getting really good at PR, information overload, more levels of indirection between causes and consequences, and the whole system complexity that never ceases to increase.
3 - The convenience is huge, and the price is delayed.
We don't get tracked for free; we get huge convenience in exchange. Plus we don't pay the price immediately, nor individually. We pay it as a society, and since it's cumulative, it's not obvious how much it costs us. It will only become painful in... well, nobody knows when.
In fact, not only would doing things right strip us of convenience, we would individually pay a steep price on top, right now, while watching everybody around us not doing it.
It's the exact same problem as with global warming.
Not accepting tracking is a deep and important political decision that shapes the future of our entire society. It is as important as separating church and state or defending freedom of speech.
And that's also why it's not a popular view: it requires thinking about what society we want to build, and not just what life we hope to have individually.
Yes. The fundamental issue is the total lack of respect for the user's consent.
People usually have no problem with volunteering personal information that is relevant to whatever activity they're trying to accomplish. For example, a company will need people's addresses in order to ship products to them. This is a voluntary, explicit and respectful process: consumers willingly and knowingly give the company copies of the information they need to perform the service and the company uses that information only for its intended purpose and absolutely nothing else.
The problem we face today is that businesses are collecting massive amounts of personal information indiscriminately, invisibly and without true consent. Web browsers hemorrhage personal information without them even asking and there's no way to stop it. Companies make apps that mine people's phones for every last bit of data they can get their hands on. Web sites put up annoying little banners saying they collect data and call it informed consent even though there's no way to say no. They bury some clauses in a terms of service nobody reads and say the user agreed to it by continuing to use site even though cookies were set and fingerprinting was performed on the very first visit before the user could possibly have known about much less read the contract.
Not only that, the information is being abused to do things people don't actually want. When people give a company their email addresses, they assume they will receive messages that are actually important. What happens is the company thinks it has every right to spam people with marketing and advertising emails or sell their data to other very interested parties. People give a company their phone numbers and next thing they know they're getting marketing calls they never asked for and can't opt out of.
And then there's the security issues. If someone has information about people in a database, there's always a chance it can be leaked even if every precaution is taken. The potential for harm is significant. Data should be considered a liability for companies. Knowing things about people should cost them money. They should have ample incentive to collect as little data as possible, limit the scope and frequency of the use of whatever data they collect as much as possible and delete that information as soon as it is no longer needed. What's happening today is the complete opposite of that: companies are collecting as much data as possible, keeping it forever and using it for whatever makes them the most money regardless of people's wishes.
These examples are relatively benign but the potential for harm is always there. What if companies start buying up personal information and using the data to profile and exclude candidates? No doubt information such as browsing history would condemn a huge number of people. What if companies find a way to deanonymize that data and link it to candidates?
If you were unable to convince multiple smart people whom you care about and respect, then perhaps it is time to at least consider the possibility that your position is incorrect?
Source (in german): https://www.heise.de/newsticker/meldung/Gesichtserkennung-Ha...
Europeans can find their national privacy boards here, and file complaints about Clearview through them.
For instance, part of the pictures come from old, failed social networks which sold them and whose AGB (terms of service) _might_ state that they can do so. Now it's questionable whether such AGB are valid at all, but as such, the following needs to be determined:
- whether European citizens are affected (it's hard for that not to be the case);
- the exact legal status, as pictures might have been obtained legally before the GDPR;
- also note that Clearview stores biometric data (derived from the images; otherwise fast search/lookup would not be implementable).
So I would not be surprised if Clearview is required to delete all data of which it cannot be sure that it does not belong to EU citizens, which I think would mean all data, given what they do and don't store. Obviously they won't comply, and an EU-wide arrest warrant might follow, which is fairly useless if the person doesn't enter the EU. I highly doubt they will try an international warrant.
So practically it's unlikely that anything will change, except the operators of Clearview being officially listed as "potential" criminals (no arrest => no court => innocent until convicted).
Granted, the GDPR doesn't say anything about arresting offenders, but the companies should at least be investigated and fined, which isn't happening either.
The GDPR is a joke.
> Google LLC
> Art. 5 GDPR, Art. 6 GDPR, Art. 17 GDPR
PII may only be processed if you have explicitly consent for the exact purpose you want to use it for.
There is no "open" personal data you may just use for anything.
Not exactly. Consent is one of six allowed bases for processing PII. It's just that for advertising/tracking use, it's probably the only one you can use.
Here is an article I could quickly come up with in English
I find it extremely surprising that they would be responding to GDPR subject access requests, given that they appear to be ignoring the rest of it.
Has anyone tried it?
Sure, they're obligated to give you your profile per GDPR. But if you're within the GDPR's jurisdiction, they're obligated to get your consent BEFORE they collect personal information about you. If they haven't, they're liable for at least millions of euros of fines.
As for suing them for breaking copyright: not sure about the US, but in Germany it wouldn't be that easy to actually sue them for such a high sum. You could argue that the company makes money by offering the search service, but there would have to be evidence that Clearview made that specific sum with your photo(s) alone, which is very unlikely.
Just one of those pictures would be enough for a lawsuit.
Of course, if someone were to sue, they would be heavily pushed to settle out of court. I suspect there's a lot more legal activity going on about things like this, but everyone that starts to make some noise is quickly silenced with a lump sum and a binding contract to shut up about it. If they don't, they're threatened with spending the next 5-10 years in court. Because if there's anything corporate lawyers are good at, it's stalling and making sure the suing party, especially if it's a random consumer, spends years and hundreds of thousands in court fees.
What we need is more rich people suing businesses. Or a massive public defense fund supporting the average joe's case.
But right now it's in the rich people's interests to support shady businesses, and if they don't they'll be offered massive financial incentives on the golf course.
> And remember that once you receive your data, you have the option to demand that Clearview delete it or amend it if you’d like them to do so.
"What does a Clearview profile contain? Up until recently, it would have been almost impossible to find out. Companies like Clearview were not required to share their data, and could easily build massive databases of personal information in secret.
Thanks to two landmark pieces of legislation, though, that is changing. In 2018, the European Union began enforcing the General Data Protection Regulation (GDPR). And on January 1, 2020, an equivalent piece of legislation, the California Consumer Privacy Act (CCPA), went into effect in my home state.
Both GDPR and CCPA give consumers unprecedented access to the personal data that companies like Clearview gather about them. If a consumer submits a valid request, companies are required to provide their data to them. The penalties for noncompliance stretch into the tens of millions of dollars. Several other U.S. states are considering similar legislation, and a federal privacy law is expected in the next five years."
Would you volunteer your time to tag images with your friends and acquaintances, to help slow down the virus? To do otherwise would be immoral and lead to the death of thousands, right?
To those worried about that, it's just temporary. It would just last a few months and then you don't need to worry about it any more. This is a global war and we have to make sacrifices and take important actions.
Bruce Schneier, "Emergency Surveillance During COVID-19 Crisis"
I'd have very grave misgivings.
I was in a startup accelerator near the end of last year and many of the business ideas that some of my teams came up with were data oriented and how it could be better gathered or used for good purposes. Or for profit and control. For example measuring people's location, driving speed and brake force to give discounts for car insurance. We found out that some companies do this already, and others are asking for it. And that was not even in an emergency context.
From another viewpoint, the more data we get, the more sensitive we will become to "crises", or we can simply define ever more minor things as "crises". And that will demand more data gathering. If we allow using data to stop the coronavirus, for example, we can then declare the flu something to tackle, if only we had more information about people. And why not the common cold, too, after the flu?
I feel that it is inevitable.
I heard about this before...
This isn’t even a hard question.
Edit: and what if your friends and family tag you and add your details instead? That way you don't need to actively support it.
No one who knows me will be surprised to find out that I think it creates larger long term dangers or that safety isn’t my highest value in any case.
Response to the edit: I don’t know how I would react, because they didn’t have malicious intent, but they still wronged me. I would be angry with them, and it would definitely have a negative effect on our relationships.
I wouldn’t look down on anyone who wanted to tag their own photos.
And I would be willing to do geo tracking with something that I believed was actually temporary. Give me a dongle I can throw away when this is all over and I’ll carry it around.
There is, of course, one slight problem with Simon's argument: The Nazis did make heavy use of mechanised data processing, provided and supported by IBM. Edwin Black documents this meticulously in his book IBM and the Holocaust:
Whether or not it's possible to transact genocide at similar scale without computerised data records, it's quite clearly far easier to do so with them. Worse, with comprehensive records and rapid identification of any particular meddlesome priest, activist artist, or woman who was warned but nevertheless persisted, it's possible for such regimes, state or non-state, to dip in and retaliate with pinpoint effectiveness. Even the mere suggestion that this is possible can be extraordinarily chilling.