Setting aside who wants to do this, in order to avoid the constant 'X is worse than Y' arguments.
I don't think it's unreasonable to assume that in a near future we will have cameras that can do much more than identification on a massive scale. It may be easy to track things like how stressed, healthy, sober etc. people are without their consent or without them even noticing. And this would be precise, not speculative like the current online tracking.
Would you live comfortably in this world? Would you build this?
This kind of world gets built by some people who are excited to work towards this goal, but also by a much larger group of people for whom meeting their daily needs outweighs the abstract, distant drawbacks that they can pretend to rationalize away.
It will be built through economic coercion appealing to their livelihoods, and the impacts will be perceived as externalities. Incidentally, this is the same way most of the modern world got built over the last several centuries: those who have to work to earn their keep take on various risks to enrich themselves, ostensibly to provide for their families, whether through mining, military service, or data mining others' personal information to sell ads. Their participation has far-reaching impacts on the world, but the impact gets distributed over a large population and avoids hitting close to home.
Some people can't rationalize this away, so (in a low-violence society) they quit and do something else. Given enough time, the workers who are uneasy will be replaced by people who have no such qualms. They, and their bosses, will make good money until an outraged public prompts government regulations to catch up. By that point, they will likely have a seat at the table. In an authoritarian society, or when the state itself is the actor, there's even less chance of this corrective balance.
The problem is that the same applies to police, mental health, and child protection services. In those organizations, it does incredible amounts of damage.
In fact, part of the attraction of working in, say, child protection services is the damage you can selectively inflict on others. Same for the police, but much more direct. Fifteen minutes of research will tell you that they're not doing their duty for the money.
This is WHY you would want the power of government artificially limited by an independent and powerful judiciary.
When you hypothesize a world where AI uses cameras everywhere to know everything it can about everybody,
you haven't characterized that world at all.
First, IS IT REALLY EVERYBODY? (But let's assume for now that it's really everybody.)
The question that will characterize the kind of world is what is done with that information.
You can choose to use it to oppress and repress people, or you could use it to help them. If you detect a person who is overly drunk, you can send somebody to fetch her, and either send her to the police station or to her home. Here you can ask what kind of world you would live more comfortably in. But it has nothing to do with cameras, AI, and databases.
If the AI detects that some person is becoming stressed and losing her health, you can use that information to fire her (bad health leads to bad productivity, right?) or to deny her insurance, or you can use that information to give her some holidays and schedule a consultation with health services. Here you can ask what kind of world you would live more comfortably in. But it has nothing to do with cameras, AI, and databases.
And if the AI detects that some crime is being committed or is about to be, you can send the police after or before the damage is done to the victim, and you can put the criminal in jail, send him back to his country of origin, or leave him on the streets to attempt another crime. Here you can ask what kind of world you would live more comfortably in. But it has nothing to do with cameras, AI, and databases.
You can extrapolate based on what's being done with the information that people are gathering now. Targeted advertisements, intrusive employee surveillance in places like Amazon warehouses or the TSA, to name some instances I've personally read about. How about buying data to target political ads?
Almost every tracking method or tool that has been created is used by businesses to exploit me as much as possible, so why should I believe that further developments would be used any differently?
I understand the point you are trying to make, and maybe I am too pessimistic when thinking that the negative outcomes are more likely to happen on large scale than the positive ones.
I think 'who holds the power?' is a good lens to try to judge this through. Unless there is a major shift in how humans relate to the World, I think unleashing this power is dangerous.
>I think 'who holds the power?' is a good lens to try to judge this through.
Eventually someone with bad intentions will end up in power, even if it started with a pretty good guy (example: the US presidency, start and end). It's better to assume the worst and put the checks in place at the earliest point (like the constitution).
Yup. Think about how you can recognize your friends without even seeing their face, by the way they walk, what they wear, how tall they are, and other contextual clues. Computers will be able to do all of that. Also think about other things that only computers with massive amounts of data will be able to do. Computers will be able to enumerate your wardrobe, your possessions. They'll be able to figure out most of the books you've read or own. They'll know where you work, what your schedule is, where you shop, what you do for fun, when you take vacations and where. They'll be able to learn all of this even if you don't freely give all this information up through social media posts and what-have-you by following you in the background of other stuff or through other "feeds". Imagine how much easier it is to identify someone in a picture or a video if you can pull in "big data" context which narrows down the possible/likely people who could be in that location at that time (wearing those clothes, doing that activity, within a given weight/height range, etc, etc, etc.)
We don't really know what the feasible computational and data limits are here, the potential (and likelihood, I think) is that they are much, much lower than where we would like them to be. In the near future something very close to ubiquitous surveillance is very likely to be the norm. And that has all sorts of chilling implications. If systems can keep track of where you are and what you're doing most hours of most days then that opens up a lot of questions in terms of free expression, political organizing, civil disobedience, etc.
I know people like to imagine all the Orwellian possibilities, but it could have some useful medical cases too. It would be great to put some kind of camera in my grandparents' home to automatically monitor their health. My grandfather fell in the middle of the night, passed out, and was not discovered until the next day. He was still alive, though injured, but it could have been serious.
>Think about how you can recognize your friends without even seeing their face, by the way they walk, what they wear, how tall they are, and other contextual clues.
Yep, but also think about how many times you've approached a friend from behind and patted him/her on the back, only to realize that you just patted a perfect stranger.
I have no particular problem believing that one's gait is "distinctive", but quite a bit of trouble believing that any particular stroll is "unique".
If the goal is to police and/or suppress, using this system means that your targets don't have to carry a phone (and that you don't need to have access to the phone's data).
If you do that, you can just simplify it down to an automatic Bluetooth beacon that identifies you to the government, and if you don't carry one you're not allowed out in public.
I think they just don't want to be quite so blatant and obvious about it.
And, in the not so distant future after that, it's safe to assume that people will be forced to have chip implants. Why use a camera and ML when you can just tag someone?
Forced? That sounds a bit too 1984-ish. I believe in a more Brave New World future, where people will pay good money to get chipped and the only "natties" (unchipped people) will be people who either cannot afford it, or "wackos".
Even if it gets built, I can count on thousands of my countrymen to install it improperly, lose the documents, mail the cheques late, accidentally lose equipment, and fail to plug it in.
Don’t worry, systems like this are incredibly fragile. You could spray paint the cameras and force the government into an attrition war (I can spray paint them faster than you can replace them, and paint is cheap).
I think fighting this will be much more like trying to stop Facebook shadow profiling you. You won't know where the surveillance devices will be, or rather, everything will be one.
When the price of getting caught spray painting cameras is you and your family getting executed, or being sent to a 're-education camp', would you still do it? No matter how 'incredibly fragile' you perceive the system to be?
This is already a daily reality for certain minorities in some countries.
If that's the price for spray painting cameras, the marginal additional punishment for getting caught murdering those who are arresting the spray-painters is no longer sufficient to deter me from attempting it.
Once you go full family execution, that's pretty much the limit of escalation against an individual. If you're willing and able to destroy everything and everyone I care about, there is no longer any bound you can control on what I might be willing to do to prevent that, or avenge it, and you are now relying entirely on my own ethical reticence and my resource limitations to avoid me engaging in acts of terrorism against your nation-state and its loyalists.
And that's the practical argument for proportional punishments, if your government is so dysfunctional that there is no moral argument for them. If you lock up all the Uighurs just for existing, for example, you may find that the ones who escape your pogrom are more willing to fight, and to fight dirty.
Realistically, we should probably be talking about vandalism of surveillance equipment in terms of how many cameras or camera-equivalents you could be compelled by force to replace whenever you are caught destroying one. And that is balanced by how likely you are to be caught when destroying one. Which depends somewhat on how many cameras/snitches in the area are fully operational.
There's probably an optimal camera-destroying algorithm to be discovered in there somewhere.
There are plenty of camera designs that are essentially invisible. There can be plenty of those cameras that are installed in an overlapping fashion. If you want to go vandalizing the visible cameras, you might get entered into the arrest queue somewhere but indeed the police likely have more pressing issues.
Sure, if the powers-that-be get nervous, any sweep would miss a lot of subversives while picking up plenty of random people - just like purges in the many repressive regimes around the world. The main win for repression here is that they no longer need a system like, say, East Germany's, where 25% of the population spied on their neighbors.
I suppose you might count it a win that in this coming future, your car isn't going to start when you act staggering drunk, rather than only when you objectively are staggering drunk. But I can't see the win here.
I'm always so surprised by this community. In discussions about self driving cars and AI, everyone says "the tech isn't there yet", "it's weak because of X scenario", etc.
But talk about cameras and surveillance? The tech is everywhere and can do everything with perfect accuracy and will come for you and your babies if you even sneeze wrong! Come on.
Having worked with security camera systems (I used to install them) let me tell you something: the cameras are shit, the quality they produce is shit, clients are cheap as fuck and don't want to repair them, they break constantly, the software is terrible to use... I could go on.
We are a very long way away from the dystopia you are describing. People will gladly, if unintentionally, fuck this up long before it becomes what you're fearing it will be.
An authoritarian government that has declared as a matter of policy that the system on which it has spent billions is flawless doesn't need to worry that it isn't flawless. You'll say that wasn't you entering the subversive bookshop - but sorry, citizen, the system is infallible, so why are you lying to us?
Yeah, the consensus "here" is that deep learning, self-driving cars, all that, is "half-there". Presently, deep learning works, with a fairly high accuracy rate, in some percentage of cases. Sure, you can implement it right now in all sorts of places. The question is what the costs of the small but significant failure rate are going to be, and who will pay them. If some nobody pays the cost, well, a high failure rate may be fine with the institution that implements this.
With cars, the idea of a 0.1% chance of dying each time you step into a vehicle is terrifying - that's much higher than what you get now; humans are actually amazingly good at driving. I.e., fortunately, the system probably won't stomach the costs of bad self-driving cars.
But with the justice system spotting potential criminals, yeah, pulling in some innocents isn't going to bother a system with N degrees of repression. Most people in the justice system have no choice but to plea bargain now. Most people in the justice system are also guilty, but the innocent-but-not-wealthy minority is generally SOL. And that is in our ostensible democracy; it's known by most sophisticated observers of the system, and only a handful make an effort to change it. So a half-assed evidence gatherer of whatever sort will make lots of people happy, put a lot of guilty and innocent people in jail, and hardly needs great reliability. Indeed, with AI parole-granting systems, we're already there.
The second thing you mention seems largely unrelated to the first. Deepfakes (especially live deepfakes) require extensive learning on extensive data sets. Sure, you could do facial recognition that way, but useful facial recognition would learn from a single still photo.
Ubiquitous surveillance would presumably try to have every camera within view of at least one other camera, so sabotaging them is a good way to get tracked down and arrested.
> Ubiquitous surveillance would presumably try to have every camera within view of at least one other camera, so sabotaging them is a good way to get tracked down and arrested.
You don't have to attack the camera directly to sabotage it. Cutting power or communications cables and jamming transmissions works too.
I think it would be harder to keep every inch of cable in view of a camera than keep every camera in view of another.
Jammers are trackable, but if you use them judiciously, I think they could be used to cover other attacks: activate jammer, attack camera infrastructure, deactivate jammer, and flee.
I made a couple trips to the South Pole in grad school. Down there you were either a scientist in a red parka, black insulated pants, and white boots, or support staff in brown Carhartts. No part of the body could be seen at a distance and even the gloves were standard issue.
What constantly amazed me was how you could see someone walking a quarter mile away and instantly know exactly who it was.
The South Pole is a smaller population than China so I'd be worried about false positives (probably mostly within families as kids learn to stand and walk like their parents and have similar dimensions when fully grown) but I suspect it's actually a pretty robust identification technique.
> The South Pole is a smaller population than China so I'd be worried about false positives
Even if the false positive rate is really low, it's still a problem. All of these dragnet-style things mess with your priors. In a traditional investigation, the police have a handful of suspects and then gather evidence. Circumstantial evidence with only a 1-in-1000 chance of wrongly implicating an innocent person is pretty convincing when it points at one of five suspects, and probably enough to convict.
Cast a dragnet over a city of 1 million, and suddenly that same 1-in-1000 error rate gives you 1000 innocent people who, under the old system, are 'guilty beyond reasonable doubt'. They've been pre-selected to look pretty frikkin' bad regardless of whether they're guilty, which means any one of them will have a hell of a time proving their innocence.
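To put rough numbers on that, here's a minimal back-of-the-envelope sketch; the 1-in-1000 error rate and the population sizes are just the figures from this thread, not real-world statistics:

    # Base-rate check for a dragnet with a 1-in-1000 false-positive rate.
    # All figures are the hypothetical ones from the comment above.
    false_positive_rate = 1 / 1000

    # Traditional investigation: five suspects, one of whom is the culprit.
    expected_innocent_hits_small = (5 - 1) * false_positive_rate
    print(expected_innocent_hits_small)           # ~0.004, so a hit is damning

    # Dragnet: scan a city of 1,000,000, still with one actual culprit.
    population = 1_000_000
    expected_innocent_hits_city = (population - 1) * false_positive_rate
    print(round(expected_innocent_hits_city))     # ~1000 innocent people flagged

    # If the system always flags the real culprit, the chance that any one
    # flagged person is actually guilty collapses to roughly:
    print(1 / (1 + expected_innocent_hits_city))  # ~0.001, i.e. about 0.1%

So the exact same evidence that was nearly conclusive for one of five suspects becomes close to meaningless once it's applied to everyone.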
When I hear a roommate or family member walking down the stairs somewhere else in the house, I can immediately tell who it is based on the sound of the pattern of how fast and how hard they are walking.
Former Polie (10x; IceCube) - I also noticed this there. I wouldn't trust it 100%, but it was pretty reliable.
I expect that if you knew that the cameras were watching, you could trick their AI's pretty easily (e.g., by faking a limp), but I'd prefer to confirm that before relying on it :-).
Maybe. I think it's possible to develop a couple of different gaits, though it requires a lot of self-control to restrict specific gaits to specific situations.
I live in a block of flats with a flat mate. I can tell when she is home before she even gets to the front door as you can hear people walking in the corridor and she has a particularly heavy, almost clumsy walk that is completely and easily distinguishable.
Former CIA Chief: "Changing the way that you walk requires some amendment to what you're doing. You're going to need a physical apparatus to make you walk differently. You can't possibly say, okay, now I'm going to limp. To change my walk, I put a little piece of gravel in my shoe, just a little thing that pinches my toe, and it changes my walk whether I want to or not. You can put a bandage around a knee, something that restricts you, will change your walk."
Arguably, there are enough people walking around with stones in their shoes that altering your movement with that method would be masking enough - until they're able to simulate how your normal gait would change with an x-sized stone at whatever location in your shoe...
Alternatively, I've heard (don't have the source handy) that they're looking at other stuff that the stone in shoe might not change such as overall height and knee to leg to hip to foot stature and angular momentum ratios. Who knows?
I was gonna suggest putting a coin in my shoe to get away with it, but then I read #3. Damn, China is screwed. I hope they don't plan to bring this worldwide, because although this will be useful in catching criminals, it just might spread paranoia and fear. Then again, maybe it's already happening and we don't even know it yet?
I think the sheer noise from people who sprain their ankles, people who really did get a pebble in their shoe, old people who decided not to put on their knee brace for that quick errand, and the like would result in way too many false positives to mark them all as persons of interest.
You'd think the same logic applied with profiling metrics used in areas that implement stop-and-frisk, but it doesn't stop them.
A false positive is an excuse to target someone police don't like, or suspect is guilty, so the dismantlement of a system that provides those convenient false positives will be fought.
This will cause people to start walking on the street in the same way as Fremen in Frank Herbert's Dune, by varying frequency and step length in order not to attract a (government) worm.
Which will make the staggering oddballs much easier to detect by human review of the footage, or by police/informants in person.
It's like that anti-face-ID makeup that covers your face with dazzle patterns. It may work as a visible act of protest, but it won't actually protect your privacy.
So, my understanding is that the machine vision technologies discussed in this article don't really work that well, at least not once deployed in the real world, outside of "the lab" and away from the controlled conditions of a scientific experiment reported in a research paper.
I wonder, then - do the authorities of the various countries (by far not just China) who make use of this kind of technology know how little it adds to their ability to spy on their citizens? If they know - do they use it anyway just so that lucrative contracts can be signed and money can change hands? Do they (also?) figure that the fear of being identified by all-seeing, all-knowing cameras will keep the population on the straight and narrow, even if the cameras are blind and dumb as bricks in practice?
My Bachelor's (of engineering) thesis was in biometrics, and if there's one thing I remember from the seminars, it's that gait, along with typing rhythm and voice, is one of those modalities that pops up every now and then only to disappear after a while, once its proponents figure out that it's no use.
These modalities simply don't carry enough information to reliably identify anyone.
Facial recognition works best when you need the subject to be unaware that they are being identified.
You don't necessarily need to identify one person amongst 1.3 billion (which would require about 30 bits of information from the gait!).
The context is a crowd of about 100 (10 - 1000), in which you are tracking a few percent, and you need to keep "eyes" on an individual, discriminating amongst a few others that have crossed their path.
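A quick sketch of why those numbers differ so much; the populations are just the ones quoted in this thread, and the log2 framing is an assumption about how to measure the matching problem:

    import math

    # Bits of information needed to single one person out of a population
    # of n, i.e. log2(n). Figures are the ones quoted above.
    for n in (1_300_000_000, 1000, 100, 10):
        print(f"population {n:>13,}: ~{math.log2(n):.1f} bits")

    # population 1,300,000,000: ~30.3 bits
    # population         1,000: ~10.0 bits
    # population           100: ~6.6 bits
    # population            10: ~3.3 bits

Re-acquiring one person in a crowd of ten needs only a few bits of gait information, which is a much lower bar than unique nationwide identification.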
That said, I'd agree that it looks difficult to help distinguish individuals in a trained team.
This is not new, and is not unique to China. Any country with intelligence units, like the US, is using data mining to identify and control their citizens. It has become fashionable lately to imply that China is the only country doing these things, or somehow doing it at a larger scale. Has everyone forgotten what Snowden revealed?
I think it is safe to assume that every country deploying networks of surveillance cameras and infrastructures for tracking communications won't limit its use to counter terrorism or other "good" purposes.
Why? Because they can, and because the first thing any powers that be do once they get the captain's chair is secure their position, and surveillance is an extremely powerful tool for achieving that goal.
It may seem excessive and paranoid, but again I would be surprised if they didn't.
Of course, Snowden was only able to review what he saw 5 years ago or more. Why would you have any doubt that the NSA, with ample resources to track and control, would use techniques similar to what the Chinese do? Do we really need another Snowden to realize this obvious fact?
Because while sure, they may be working on any sinister form of control possible, the NSA has to do it in secret, which makes it a bit harder to trample on rights. In China it isn't secret; it's just the way things are.
> Of course, Snowden was only able to review what he saw 5 years ago or more. Why would you have any doubt that the NSA, with ample resources to track and control, would use techniques similar to what the Chinese do? Do we really need another Snowden to realize this obvious fact?
Why do you have so much certainty? What evidence can you provide that they use or currently plan to use these techniques domestically for widespread tracking?
It’s controversial because China does not have an independent judiciary or any real civil rights protections, so the potential for and likelihood of abuse is much higher. See also the concentration camps containing over 1 million Muslims in Xinjiang province [0].
China is doing it at a MUCH larger scale and in a much more intrusive way. Everything on the internet is tracked and filtered, cameras everywhere. The Chinese government can pinpoint the location of any of its 1.3 billion citizens.
The parent article starts with "China is home to the world’s largest network of CCTV cameras — more than 170 million...", so apparently they really are conducting surveillance on the largest scale.
Sorry, but it looks like China is just playing catch-up on this. US intelligence already has information on pretty much every American: their social networks, phone calls, consumer habits, etc. And through Facebook, Google, and others, they also have information about most people around the world. Even if China is able to track everyone inside China, they're still at a disadvantage compared to US covert operations.
The whataboutism is strong with this one. Is there a service that lets you associate your own tags with HN accounts? Could be useful. Or abused. But useful.
The Chinese government can kill any citizen anytime it wants. If Snowden had been born in China, his parents and relatives would have been used as hostages or sent to jail.
I think it's important to remember that the Snowden leaks occurred shortly before the "Big Data" craze went into full swing. Even Snowden said at the time that there was more data than could be effectively analyzed.
While I agree with what you're saying, it makes sense that we'd be getting this information only just now - after there has been enough time for 1) data scientists and the like to become prevalent/cheap enough for government contracting and 2) enough time to build working applications on top of all that data.
Yea, China is in a different league. You are talking about 400M video cameras. The only thing close are the self-driving car companies, who will be selling info to the feds :)
Also the specific concept of gait analysis[0] has been around for decades. Changing one's gait has been an important piece of the tradecraft tool box for a long time now[1].
Don't even pretend that the US and China use this tech the same way. The US might have a few bad apples in intelligence using data mining for bad things, and there has been some questionable data being mined, but suppression of people isn't happening from it. China is using this tracking to actively suppress its people and views that do not align with the state.
The real punchline from that article "no peer reviewed scientific studies have ever been done to prove the basic assumption that every person’s fingerprint is unique."
I'm about to walk out the door so I can't look up papers, but I'm going to guess that it's based as much on dynamics as actual proportions of limbs. I would estimate the number of features to be on the scale of 2000-2010 era facial recognition. Dynamics would increase the number of dimensions, making it possible to separate two people with nearly identical features.
However, I do agree that inferring the proportions from video imagery at various angles and with various items of clothing is going to be challenging due to systemic distortion, e.g. inferring incorrect angle due to clothing.
If they blanket an area with cameras, as China does, they can use multiple vantage points to get more accurate measurements and, when a positive ID is made, they can persistently track that person crossing from one field of view to another.
I think the main advantage of gait analysis (or limb proportion measurement) is that it will encourage a more efficient, decentralized computation that first distills the observed data into a fingerprint that can be more easily transmitted over a wire to a massive database for matching, rather than the alternatives of either replicating that database or transmitting raw, high-bandwidth data.
Besides limb measurements, they could also use last known location, residence, and vehicle ownership information to increase the certainty of loose matches.
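A minimal sketch of that decentralized pipeline, purely for illustration: the particular features (a few limb-length ratios plus stride dynamics), the vector size, and the nearest-neighbour matching are my assumptions, not a description of any deployed system:

    import numpy as np

    def gait_fingerprint(limb_ratios, stride_dynamics):
        # Distill camera-side observations into a small fixed-size vector.
        # limb_ratios: e.g. estimated thigh/shin and arm/torso ratios.
        # stride_dynamics: e.g. cadence, stride length, arm-swing amplitude.
        # Only this compact vector needs to go over the wire, not raw video.
        return np.asarray(list(limb_ratios) + list(stride_dynamics),
                          dtype=np.float32)

    def loose_matches(observed, database, threshold=0.5):
        # Nearest-neighbour search against a central database of known
        # fingerprints; returns every candidate within the threshold.
        # Context (last known location, residence, vehicle) would then be
        # used to narrow these loose matches, as described above.
        return sorted(
            (float(np.linalg.norm(observed - ref)), person_id)
            for person_id, ref in database.items()
            if np.linalg.norm(observed - ref) <= threshold
        )

    # Hypothetical usage with made-up numbers:
    db = {
        "person_A": gait_fingerprint([1.02, 0.71], [1.80, 0.75, 0.30]),
        "person_B": gait_fingerprint([0.95, 0.78], [2.10, 0.68, 0.22]),
    }
    observed = gait_fingerprint([1.00, 0.72], [1.85, 0.74, 0.28])
    print(loose_matches(observed, db))   # person_A is the closer candidate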
One of the books I read when I was younger that got me into programming is Little Brother by Cory Doctorow. In the book, the high school they attend has cameras that identify students by measuring gait. The characters in the book put gravel in their shoes to throw the tech off when cutting class.
If this is all out in the open in China and many Chinese people even approve of it, wouldn't it be more efficient to have everyone wear ID tags of some kind?
I'm skeptical, especially of the surveillance claims. If you already know who you're looking for and you have lots of footage of that person, you might be able to train your system to pick up that person's gait from new footage. But that's a far cry from being able to "identify anyone" by their gait, i.e. set up lots of cameras around the city and track individuals coming and going.
There are already amazing airborne camera systems that track the motion of every license plate in a city, and even that isn't the same as tracking every car's movement because you still have to know which license plate to follow -- you just can't care about all movement of all cars. I suppose someone will say that you can store all movement in a database, but I'll wager that's just not worth anywhere near the cost. Your computing power would be much better spent on computing large primes.
Anyway, it's a cool sci-fi movie premise. You can see the murderer shaving off his mustache, burning off his own fingerprints and then knee-capping himself to avoid detection.
> you just can't care about all movement of all cars
Of course we can. For each license plate encountered by CCTV or cop cars, store a tuple of (timestamp, coordinates). This won't require much disk space, so that's not a concern. Later, it is easy to answer questions like "what places was this car seen and when?" and "which cars were in this vicinity at this time?"
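A minimal sketch of that store and the two queries, using SQLite purely for concreteness; the schema, table names, and coordinates are illustrative assumptions:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE sightings (
            plate TEXT,   -- license plate as read by the camera
            ts    REAL,   -- unix timestamp
            lat   REAL,
            lon   REAL
        )""")
    conn.execute("CREATE INDEX by_plate ON sightings(plate, ts)")
    conn.execute("CREATE INDEX by_place ON sightings(lat, lon, ts)")

    # Each camera or cop car appends one small row per plate it sees.
    conn.execute("INSERT INTO sightings VALUES (?, ?, ?, ?)",
                 ("ABC1234", 1700000000.0, 40.7128, -74.0060))

    # "What places was this car seen, and when?"
    history = conn.execute(
        "SELECT ts, lat, lon FROM sightings WHERE plate = ? ORDER BY ts",
        ("ABC1234",)).fetchall()

    # "Which cars were in this vicinity at this time?" (crude bounding box)
    nearby = conn.execute(
        """SELECT DISTINCT plate FROM sightings
           WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?
             AND ts BETWEEN ? AND ?""",
        (40.70, 40.72, -74.02, -74.00, 1699999000.0, 1700001000.0)).fetchall()

Each sighting is a few dozen bytes, so even billions of rows per year is unremarkable storage by today's standards.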
Well, let's test your claim. Say there are 1 million cars in a city. Say your camera system covers 300 square miles (to handle NYC). The question is how long it would take to scan the images from a single moment and capture all license plates. At 5 seconds per plate, that's 57 days. Plenty of time to commit a crime and escape without detection. And again, you could've been doing something useful like capturing some large prime numbers, in order to break the encryption of someone who is actually trying to hide something.
Why would it take 5 seconds a plate? They could as well station a licence plate reader at each intersection, and update a key-value store with a given plate's last seen location whenever one is spotted. O(1).
Parallelization, you say? 5 seconds is constant time, and a pretty optimistic one. How many license plates are visible from any given intersection in my town, NYC? Somewhere between 5 and 50. There is always motion, so what should the frame rate be? You probably can't commit much of a crime in 15 seconds, but you can drive an eighth of a mile (at 30 mph), about two intersections. So let's settle on a frame rate of 5 seconds. You still won't have much precision with cameras only at each intersection, so let's put them 10 feet apart, and let's have 4 at each point. Honestly, that's a minimum. At 10 feet apart, for NYC's 6074 miles of road, that's 12,828,288 computerized cameras. That's about $12B. And I still bet that the error rate will be too high to be useful.
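For what it's worth, the camera-count arithmetic above checks out; here it is as a sketch (the ~$1,000 per installed camera is my assumption to land near the quoted "about $12B", the comment doesn't give a unit cost):

    # Reproducing the back-of-the-envelope estimate from the comment above.
    FEET_PER_MILE = 5280
    road_miles = 6074            # NYC road mileage, as quoted
    spacing_ft = 10              # one mounting point every 10 feet
    cameras_per_point = 4

    points = road_miles * FEET_PER_MILE // spacing_ft     # 3,207,072
    cameras = points * cameras_per_point                  # 12,828,288
    print(cameras)

    # Assumed ~$1,000 per installed, networked camera (my guess, not from
    # the comment) lands in the "$12B" ballpark:
    print(f"${cameras * 1000 / 1e9:.1f}B")                # ~$12.8B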
I'm not arguing that it will never be possible, I'm arguing that the original claim, and the mass surveillance that make people nervous, is not happening today.
There're already drones flying over American cities that record the city from the air nonstop. When a crime occurs, they review the recording in reverse and see where the bank robber etc drove from (their house), then go to their house and arrest them. No need for anything automatic.
Mitigation strategies discussed in an unrelated (but relevant) conversation between WIRED and Jonna Mendez - https://www.youtube.com/watch?v=JASUsVY5YJ8&t=278 (if you're not auto-jumped, find 4:38 in the video)
Anyone who's heavily near-sighted knows this technique. In college, my dorm bathroom was quite a distance from my room. At night, I would remove my contact lenses, and upon returning I would always identify certain people by their style of walking, or their posture.
I was really into the idea of card counting when I was in high school (over 10 years ago) and remember coming across multiple sources that claimed casinos did this as part of their arsenal to detect people who've been blackballed.
Surely gait analysis is part of the US arsenal as well. It's unique like a fingerprint and is used extensively by equestrian teams, etc., to detect shoe and health issues early to protect the investment in the horse. Many relevant health problems are visible early on as a change in the horse's gait.
It can be done with an accelerometer or a camera, and probably other kinds of sensors.
It could also be used for early diagnosis in humans, but is cost prohibitive for human preventative care use.
Remember a decade or so ago when the U.S. military was icing Middle Eastern "terr'ist" targets based upon gait analysis?
Well, now it's you. And also, once again, brought to "the world" by China. (Maybe they'll sell it along side of or as an add-on to their Great Firewall turnkey systems.)
I remember viewing a few of those old video segments (drone- and plane-based platforms) and feeling pretty nervous.
The US government has had this forever. This is why they train you at the farm to put a pebble in your shoe if you’re trying to evade detection so your gait changes. It is most certainly not impossible.
> the kinds of questions that began to present themselves were just.. the questions themselves were ugly and I didn't want to know the answers. It's like banging on a door, you knock on a door, and the door opens slightly and behind that door it's dark and there's loud noises coming like there's wild animals in there or something. And you peer into the darkness and you can't see what's there but you can hear all this ugly stuff.. do you want to step into that room? No way, you just sorta back out quietly, pull the door shut behind you, and walk away from it.
He was my history teacher in high school. He was (and still is) an inspiring orator and an incredibly talented poet. His perspective is invaluable and has already proven to be timeless.
Amazing that nobody has so far mentioned "Mission Impossible Rogue Nation"! There's a scene where Benji has to pass a corridor where people are identified by the way they walk!
I remember this being a research topic while I was in grad school more than 10 years ago. Like face recognition, it is nothing unique and is being researched by plenty of people
Little Brother, Cory Doctorow.
Of course then you just train a network to map between pebbled and unpebbled gaits and run that on any failed matches. So...
It seems likely other governments are using the same techniques, and that our (I'm assuming) Western view of China as overbearingly authoritarian is tainted by a lot of cultural baggage: that the hostility of the Chinese security services towards their own population isn't as markedly different as we imagine it to be, because I - for example - have been carefully inculcated into the idea that Western governments are beneficent in their spying whilst other governments are malicious.
Decades of careful propagandised hatred for the Commies, or the Reds, or whatever slur, is hard to shake off too.
Or maybe you're right. But it's impossible to trust what our own governments say to us now.
> it's impossible to trust what our own governments say to us now.
Sure, but throwing up our hands and assuming that all states are basically the same probably takes us further from the truth.
We've got to make our own judgments somehow, and it's pretty clear that despite the many flaws of our Western countries, they are meaningfully more open and tolerant of dissent, investigative journalism, and the like than some other countries, including China.
Every piece of information should be taken with a grain of salt, and it makes sense to add a few extra grains when it comes to countries that we have been trained to be suspicious of. But that doesn't mean we should bury it all in a salt mine and ignore it.
>they are meaningfully more open and tolerant of dissent, investigative journalism and the like than some other countries, including China. //
I'd agree with all that. I just don't have any evidence to suggest they're watching us less or have greater ethics when it comes to invading citizens/subjects privacy in secret.
That's fair, but I also think absence of evidence is meaningful in proportion to the likelihood that, if evidence existed, it would be gathered and disseminated. In other words, the more closed the society, the more we should assume the worst when the facts are unclear.
In absolute terms whistleblowing is a dangerous game in the west, so it is likely that there's plenty being successfully hidden from us -- and the evidence we do have suggests that surveillance is very intense and intrusive. But I still lean toward the belief that it is probably relatively more restrained, and less likely to be used in completely dystopian ways, than in more secretive and repressive states less constrained by the rule of law.
The US is still actually pretty good to its citizens by global standards. For example, as an American I don't have any worries that the government will come after me if I speak out against it, and a politician can't have me thrown in jail without a trial.
But of course, as an American, I care more about what the US does because it affects me more. The war on drugs especially is an authoritarian nightmare (and in practice has ended up racist), and the war on terror has eroded due process, but China has a type of authoritarianism that just isn't the same as the type we have in the US.
The USA's drug war was always meant to be racist, and it always has been. See the statements of, e.g., Anslinger and Nixon. Seeing mere contingency where intent is clear is a tawdry sort of privilege.
I was just making a weaker claim because honestly I didn't have enough time to properly defend a stronger one on the internet and in my experience ascribing racism explicitly to things takes a lot of effort to defend (though in this specific case that isn't true). Apologies if it was insensitive.