Is China’s corruption-busting AI being turned off for being too efficient? (scmp.com)
202 points by rakkhi 19 days ago | 96 comments



I would say that it's far more likely that the system simply never worked properly. The number of non-functional so-called "AI" features you find in China is astounding. A friend's apartment building got the latest facial-recognition door lock - which will open the door if you hold up his Facebook photo on your phone. Just the other day, my wife found out by accident that her friend's Huawei phone will unlock for her using its version of Face ID. They have similar facial structure but really look nothing alike. China is amazingly good at many things, but _genuine_ technological innovation is not something I've seen in my time here.

No doubt this corruption-busting "AI" was developed by companies and people with deep connections to the relevant Party members, but little of the necessary experience or resources to actually carry out the project successfully. When it became clear that it didn't work and would never work, this story was a way to allow the responsible people to save face.


"the latest facial-recognition door lock - which will open the door if you hold up his Facebook photo on your phone"

Meanwhile, Baidu's main campus has had facial recognition on entry gates for years (and the gates cannot be fooled by 2D photos). And Face++ (a Chinese company) provides SDKs to allow you to detect liveness (i.e. not a photo) via their APIs. This is used by many apps when a user registers for the first time.

"her friend's Huawei phone will unlock for her using its version of Face ID"

There was a well-reported story about someone who got a refund on their iPhone for the same reason. How is this a Chinese phenomenon?

"but _genuine_ technological innovation is not something I've seen"

How do you define this? There was a white paper linked here recently about how WeChat scales its backend systems to cope with load. Are none of those scaling techniques innovative? (And probably similar for taobao.)


True innovation means to develop or do something not done elsewhere before. The products you listed sound good, but they are certainly not innovative.


That would be a difficult bar to clear. Almost all major product innovations have been iterations on existing products. By that definition, I cannot think of anything I would call "Truly Innovative".

China, IMO, has both innovative products and services and lazy copycats. Because the copycats are so numerous, we often fail to notice the innovative stuff coming from there as well.


Anecdotally,

I bought a Huawei Matebook laptop this past year. It was very cheap but looked cool - like a MacBook. Funny thing though: the Windows install was bootlegged, and I had to use my institutional registration to get legitimate Windows software.

I used to use Amazon's market place and have now stopped after accidentally purchasing Chinese fraudulent goods (I still use for digital content).

My exposure to Chinese tech and goods, while limited, has now totally predisposed me negatively towards them. I haven't seen any innovative product personally - just inconvenient knock-offs which have effectively stolen my money. If all Chinese tech goods were taken off the market, innovative or not, I would be happier knowing that I could start using a more trustworthy Amazon marketplace.


Did the Huawei matebook come in factory-sealed packaging, or had the wholesaler/retailer tampered with it? (A friend in China experienced a similar issue with another brand. It came with Chinese Windows from the factory, but the retailer had installed a bootleg version of English Windows.)

"If all Chinese tech goods were taken off the market" then which tech goods would be left? Are there any tech goods that include no Chinese components? What % of Chinese components is acceptable to you (assuming non-zero)?


Taiwan is effectively China that went down the capitalist path a few decades before China itself. Taiwan's engineers now lead the pack in what must be one of the most punishing technology areas - semiconductor manufacturing. In less punishing areas, mainland China leads the world in manufacturing. Yes, part of that is to do with having cheap labour, but you don't get to be the best in the world at anything by sitting on your hands. Every time some business figures out how to build something cheaper than anyone else on the planet, it has been innovating, and China has done this many, many times.

What is different about China is an attitude. It was crystallised for me by an Australian documentary titled "Two Men in China". One of those men was formerly Australia's chief scientist. They walked into a typical Chinese apartment. Very small, but it had all the latest gadgets. Then they showed the doors had fallen off some cupboards. In fact, a whole pile of things had broken a few weeks after the couple had moved in. This was considered perfectly normal - they even had a word for it (which I forget).

In extreme cases it leads to things like baby formula suppliers adding melamine (a poison) to get the protein readings up to spec. (Australia's baby formula industry has done very well out of that little episode - so well that fights in the baby food aisles appear on the nightly TV news as Chinese expats empty the shelves to make a quick buck. Turns out that _really_ pisses off local mothers with a hungry baby in their arms.) If you read stories on HN you come to the conclusion this attitude must pervade the entire business culture. Buildings falling down because of sub-par concrete, and stories of "we sent them a spec, they built it well for a while, then someone noticed part X had been replaced by a cheaper version" are a very common thread. Anybody who does business in China has to spend an inordinate amount of money verifying that all their suppliers are not cutting corners.

Americans are coming face to face with this now that Amazon has allowed Chinese suppliers unrestricted access. Fake products, fake reviews, fake suppliers, fake purchases on other products leading to critical reviews, bribes for fake reviews - any scam that might work, even only for the short term, is being tried. Now imagine living in a country where that is how all business is done. Is it at all surprising someone bought a system with face recognition, only to discover later it could be fooled by a photo?

If you are thinking China can't innovate, you are just plain wrong. If you are pinning your hopes on their being perpetually held back by corruption, you might be right (things look bleak under the current leadership) - but it's also possible they could turn into another Taiwan or South Korea, albeit with 1.3 billion people. If that happens we will see innovation on a scale we have never seen before.


When will the people implementing biometrics learn... BIOMETRICS ARE ONLY DONE SECURELY ON PERSONAL DEVICES UNDER YOUR CONTROL!

Otherwise replay attacks are trivial.

China has “smile to pay” which is ridiculous. Hopefully it’s for small amounts.

Look, biometrics can be used as an id (public key) but not a password (private key)

And as for USA ... same goes triple for social security numbers people!!!


Aren't social security numbers explicitly non-public data?

They are supposed to be that around here, but the government has done the nice thing of including them in some people's (one-man) business registration numbers, which need to be published...

For everything else: True.


In theory, (a) social security numbers should not be used for identity verification and (b) social security numbers should be kept private.

In practice, (a) social security numbers are gathered, stored and used for identity verification by thousands of private organisations and (b) those records are frequently leaked.


They can't even be used as an ID. They can be used to "verify" an id (weaker than a password, but maybe good enough to eliminate an impersonator), but not to SELECT $person FROM all_users


The phone thing isn't unique to Huawei. My wife has two sisters and they can unlock each other's phones using Face ID. They were surprised when they realised this, so I don't think they added each other's face or anything (I don't know if you can do that though..).


AI smells ever more bubbly as the months go by. The average revenue from AI companies cannot justify their current market value, which suggests something financially unpleasant will befall them.


These days, AI means an inefficient chatbot standing in for the function of a simple webform. It's not even about working or not working.


“AI may quickly point out a corrupt official, but it is not very good at explaining the process it has gone through to reach such a conclusion,” the researcher said. “Although it gets it right in most cases, you need a human to work closely with it.”

And that's the crux of the issue with AI being used in any law enforcement situation.

If we allow decisions and conclusions being drawn by the AI without a clear explanation of how it got there, we're just creating a monster that will advise -maybe replace- the judgment of law enforcement professionals who won't have the means to question these decisions.

Catching corrupt officials is a laudable goal anywhere and I'd like to see it applied to highlight potential irregularities that may require a second look but the danger here is that an unprovable AI be used to make claims or start being used as sufficient evidence to ruin people's lives.


This reminds me of Klaatu's speech in "The Day The Earth Stood Still." They created a race of robots to prevent aggression.

"The test of any such higher authority is, of course, the police force that supports it. For our policemen, we created a race of robots. Their function is to patrol the planets—in space ships like this one—and preserve the peace. In matters of aggression, we have given them absolute power over us; this power can not be revoked. At the first sign of violence, they act automatically against the aggressor. The penalty for provoking their action is too terrible to risk. The result is that we live in peace, without arms or armies, secure in the knowledge that we are free from aggression and war—free to pursue more profitable enterprises. Now, we do not pretend to have achieved perfection, but we do have a system, and it works."[1]

[1] https://www.rottentomatoes.com/m/1005371_day_the_earth_stood...


Fabulous movie. The 1950s one anyway. We'll not speak of the 00's awfulness.

If we continue to blindly stumble towards that future, we are likely to set 21st-century (USA) biases in stone. Racial, national, sexual, corporate and other biases will be baked in. We've seen they usually are in the systems we have already. Yet it will be done mostly innocently and unawares, from accidental bias in training data.

Makes for an interesting future.


I doubt such a scheme would work. People would just engage in forms of aggression the robots would not recognize.

It's like today's popular notion that one can control the thoughts of others by banning certain words.


There's a rather obvious gap between calling someone names and throwing nuclear weapons at them.

As for controlling thoughts - that's the easiest thing in the world today. There are entire industries devoted to making sure people think as they're told to, and act as they're told to.

Of course they don't work on everyone. But if you can fool more than half of the people when you need to, you can do more or less anything you want.


My point is there are effective ways to destroy people without using overt violence.

> There are entire industries devoted to making sure people think as they're told to, and act as they're told to.

I'm talking about punishing people for the use of certain words with the idea that it will reshape their thoughts, which is not the same thing as persuasion techniques.


They're solving the wrong problem. Instead of writing "an AI that determines guilt" (e.g. a classifier) they should be trying to write "an AI that compiles a list of evidence against the accused".

This list of evidence would then be subjected to the same scrutiny that a list of evidence from a department of human police officers would be subjected to. If the AI cannot make the case against somebody in a way that's persuasive to a human prosecutor, then the case goes nowhere.

If the AI cannot provide a rational, clear and understandable justification for its assertion, then it's not ready to replace humans, who can be expected to provide justifications for their beliefs. Police officers are expected to write police reports, and so should any AI that's meant to do their job. Right now they've just got the AI equivalent of a cop who says "I feel in my gut that he's guilty, and I'm not going to explain why." But because it's "AI" I guess some politicians thought that was good enough.

Of course doing it right is probably harder.


> Instead of writing "an AI that determines guilt" (e.g. a classifier) they should be trying to write "an AI that compiles a list of evidence against the accused".

The problem there is selection bias. If you have a list of fifteen things, three make it slightly more likely they've committed a crime. Seven make it slightly less likely. Two make it significantly more likely and three make it significantly less likely. If you consider them all, the probability they've committed a crime is in the "probably not" range. But if you take the list and prune everything that makes it less likely, leaving only the factors that make it more likely, you get a distorted result. This is what prosecutors do for juries in general, and it's problematic.

Now suppose instead of a list of fifteen you start with a list of fifteen million and do the same thing.
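The pruning effect can be sketched with a toy simulation (all numbers are invented for illustration): treat each factor as an independent log-likelihood ratio, then compare the posterior from the full list against the posterior computed from only the incriminating entries.

```python
import math

# Hypothetical log-likelihood ratios for the fifteen factors:
# three slightly incriminating, seven slightly exculpatory,
# two strongly incriminating, three strongly exculpatory.
factors = [0.3] * 3 + [-0.2] * 7 + [1.5] * 2 + [-2.0] * 3

def posterior(prior_log_odds, evidence):
    """Combine a prior with independent log-likelihood ratios."""
    log_odds = prior_log_odds + sum(evidence)
    return 1 / (1 + math.exp(-log_odds))

prior = -2.0  # prior belief: guilt unlikely

full = posterior(prior, factors)                          # all the evidence
pruned = posterior(prior, [f for f in factors if f > 0])  # incriminating only

print(f"all 15 factors:     P(guilt) = {full:.2f}")
print(f"incriminating only: P(guilt) = {pruned:.2f}")
```

With everything counted, the posterior stays in the "probably not" range; prune the exculpatory entries and the same prior jumps to near-certain guilt.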


Legal systems avoid this problem by requiring a high degree of confidence from prosecutors, just like some areas of physics require 5 sigma on their p-values and thus can ignore all the problems with p-value hacking and repeating experiments.

Of course, criminal law works at a much lower confidence level than physics, so it needs a trial where the defense is in charge of gathering the exonerating evidence when the prosecutors get a false positive. It could work better, but this one problem is already accounted for.


"Beyond a reasonable doubt" accounts for only disclosing the five most incriminating factors out of fifteen.

If you instead got the thousand most incriminating factors out of millions, it looks like a damning mountain of evidence when it's really just the first page of the list of millions of factors sorted by how bad they look.

It completely destroys anyone's credibility scale because seeing a dozen one-in-a-million matches intuitively feels impossible to be a coincidence, but that's less than the result you get by random chance when you attempt to match fifteen million factors.


I don't think you'll find a prosecutor willing to take a laundry list of low-probability evidence and go after it, whatever the size of the list. Almost certainly, the longer you make the list, the fewer people you'll find willing to use it.


It's not about the length of the resulting list -- just the opposite. It's that you can take a list of a million and prune it to a thousand and suddenly everything bad happens with 1000 times the expected probability because you excluded the 999 things that didn't for every one that did.

If you take the list of a million and prune it to ten it's even worse. You can find ten 1:100,000 probability events that occur randomly in a list of a million. It's this:

https://xkcd.com/882/

Except a million tests instead of twenty.
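A quick simulation in the spirit of that strip (the factor count and match probability are made up): check a completely innocent person against a million independent one-in-a-hundred-thousand factors and count how many "impossible coincidences" show up purely by chance.

```python
import random

random.seed(42)

N_FACTORS = 1_000_000   # independent factors checked per person
P_MATCH = 1 / 100_000   # chance an innocent person matches any one factor

# One innocent person: count the "one-in-a-hundred-thousand" matches
# accumulated purely by chance. The expected count is 10.
matches = sum(1 for _ in range(N_FACTORS) if random.random() < P_MATCH)
print(f"chance matches against an innocent person: {matches}")
```

Present only those matches and they look like a damning dossier, even though the expected number for anyone is around ten.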


That's no different from how regular law enforcement works. It's the standard to which any autonomous law enforcement should be implemented.

If you want to build better prosecutors that understand statistics, that should be considered a separate project. One AI that assembles the best case against the accused, and another AI that critically examines that case and discards it if it fails to meet some threshold for statistical relevance, or passes it on to human prosecutors if it does.


There are at least two differences.

One is that the selection process is still opaque. If you have a corrupt prosecutor you're screwed either way. But if you have an honest prosecutor and you give them a collection of evidence selected for a bias in favor of guilt, they have no independent way of knowing that. So now you not only need an honest prosecutor, you need an honest AI, and we're back to the opaqueness issues.

The second problem is that in principle right now if the prosecutor and the defense attorney spend an equal amount of resources, they each get proportional results. If you create this database which the prosecutor can use, can the defense also use it? Do they get access to the data to run their own queries and algorithms against? If so it seems a rather large privacy problem, since the database would contain sensitive information about everyone, but if not then you're handing an asymmetric advantage to the prosecution.


I believe the data being compiled in the first place is the privacy problem - especially when it is in the hands of government officials with the power to jail, seize, and execute.

Transparency at least allows everyone to go through it and undermine it and the system thoroughly. "The judge has more corruption data points than the accused he sentenced, and this newborn with a Han name has a far higher rating than this infamous connected embezzler!"


I wonder if it gets it right in most cases because officials are corrupt in most cases.

It's like building an AI that guesses when it's going to be a rainy day in Seattle.


My Seattle rain-forecasting algorithm is very good. It forecasts rain tomorrow if it is raining today, and not-rain if it is not raining today.


>I wonder if it gets it right in most cases because officials are corrupt in most cases.

This with a large helping of "the laws were intentionally written to be broad" on the side.

With a suitably inclusive definition of "corrupt" every official is corrupt.
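That base-rate point is easy to make concrete (the 90% figure is invented): a "classifier" that simply flags everyone achieves accuracy equal to the base rate while providing zero insight.

```python
# Hypothetical pool: 90 of 100 audited officials are actually corrupt.
officials = [{"corrupt": i < 90} for i in range(100)]

def always_flag(official):
    """A zero-insight classifier that calls everyone corrupt."""
    return True

correct = sum(1 for o in officials if always_flag(o) == o["corrupt"])
print(f"accuracy of flagging everyone: {correct}%")
```

So "it gets it right in most cases" tells you nothing unless its accuracy beats the base rate.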


> If we allow decisions and conclusions being drawn by the AI without a clear explanation of how it got there

The funny thing is expert systems were doing that 30 years ago. Seems like this would be a perfect use of them but it's not a buzzword any more.

It's crazy how many wheels get poorly reinvented in the software world.


Explainable isn't the same as justifiable. An expert system may be able to readily explain that it determined the chance of recidivism is high due to factors x, y, and z. However, there is no guarantee that those factors correlate with recidivism at all since expert systems were simply built to emulate (human) expert decisions.

While it's very easy to do machine learning _incorrectly_, it is also possible to reasonably attribute factors to outcomes based on large quantities of data. You can also look at LIME/SHAP and other factor contribution metrics. This seems like a significant improvement over expert systems.


> Explainable isn't the same as justifiable. An expert system may be able to readily explain that it determined the chance of recidivism is high due to factors x, y, and z. However, there is no guarantee that those factors correlate with recidivism at all since expert systems were simply built to emulate (human) expert decisions.

Re-reading the article now, it may very well be an expert system. The "big data" mentioned isn't training data, but pre-existing databases like bank account info, salaries, contracts, property ownership, etc. FTA:

> Disciplinary officials need to help scientists train the machine with their experience and knowledge accumulated from previous cases. For instance, disciplinary officials spent many hours manually tagging unusual phenomenon in various types of data sets to teach the machine what to look for.

As far as explaining the results, it's hard to draw any conclusion without knowing exactly what "not very good at explaining the process it has gone through " means.


"If we allow decisions and conclusions being drawn by the AI"

I'm doubtful that anyone is getting charged without specifics.

Moreover, with this amount of corruption, it seems clear that just some basic oversight is all that is necessary.


> Although it gets it right in most cases, you need a human to work closely with it

Is it just me or does that simply show that there is a lot of casual corruption in China?


This reminds me of a detail from the plot of the original Deus Ex. Part way through the game, there is an optional dialogue with a benevolent AI. If you question it enough, it reveals that it is the rejected prototype for an anti-corruption/terrorism AI which was turned off for being too effective: eventually classifying its own masters as a criminal organisation.


Hmm, wow... I played original Deus Ex a few times and do not recall it. Is it part of the recently released "Revision" uplift? :-\


Deus Ex had multiple storylines and endings, depending on your actions.


man this thread makes me want to replay the game again.

It's in the original game and it was the MJ12 Daedalus AI, although I'm not quite sure where the player picks that piece of backstory up. It might be in the dialogue with the prototype in the Everett house.


Yeah it's Daedalus. I Googled it and I'm factually correct but I must have the conversations mixed up in my head... It might be one of the human characters who tells the player about it.

Definitely time to give it another play through.


I have played that game many many times and never came across this conversation.

Time to play again I guess.


I think I got the conversation wrong... But it's definitely revealed at some point. My memory has gotten fuzzy! Definitely time to give it another play through.


"it is not very good at explaining the process it has gone through to reach such a conclusion".

The above statement is just too convenient and practically superstitious.

AI is data hungry. That implies there are so many past instances of corruption that you can generate large training data sets. So perhaps even a random guess would be too efficient, since it would be right more often than not.


"it is not very good at explaining the process it has gone through to reach such a conclusion".

To be fair, generating human-understandable explanations of predictions in a complicated nonlinear model is difficult in general. You typically need to come up with some kind of simplification of the model (like LIME [0]), and it's far from perfect or generally-applicable.

[0]: https://homes.cs.washington.edu/~marcotcr/blog/lime/


SHAP is more recent and does work with (just about) any ML model: https://github.com/slundberg/shap

There's still more work to do in interpretability but models are rarely opaque black boxes with no way to see inside.
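As a rough illustration of the idea behind such tools (this is hand-rolled permutation importance, not LIME or SHAP themselves, and the model and weights are invented): shuffle one feature at a time and measure how much a black-box model's output moves.

```python
import random

random.seed(0)

# A toy "model": a fixed linear scorer over three features,
# standing in for any black box. Weights are made up.
WEIGHTS = [2.0, 0.0, -1.0]

def model(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

# Toy dataset of 500 rows with three features each.
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(500)]

def permutation_importance(model, X, feature):
    """Mean absolute change in the model's output when one
    feature's values are shuffled across the dataset."""
    shuffled = [row[feature] for row in X]
    random.shuffle(shuffled)
    deltas = []
    for row, v in zip(X, shuffled):
        perturbed = list(row)
        perturbed[feature] = v
        deltas.append(abs(model(perturbed) - model(row)))
    return sum(deltas) / len(deltas)

for f in range(3):
    print(f"feature {f}: importance {permutation_importance(model, X, f):.3f}")
```

Here the ignored feature (weight 0.0) correctly scores zero importance, and the heavier weight dominates: an attribution you can check against the model without opening it up.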


import random

def is_corrupt(data): return random.random() < 0.9


So it's fine to use systems like this on the populace but when used on the government, they make people 'uncomfortable'?


As O'Brien passed the telescreen a thought seemed to strike him. He stopped, turned aside and pressed a switch on the wall. There was a sharp snap. The voice had stopped. Julia uttered a tiny sound, a sort of squeak of surprise. Even in the midst of his panic, Winston was too much taken aback to be able to hold his tongue.

'You can turn it off!' he said. 'Yes,' said O'Brien, 'we can turn it off. We have that privilege.'


Just finished a few of his books and it was well worth it. I'm left with the impression that he seemed to do things just to see what they were like. His descriptions of hunting critters in his squalid Paris bedsit and of going over the top in trench warfare are memorable.


This is exactly what I was thinking, and it's a little scary. The officials are smart enough to know why this kind of system is dangerous and refuse to be subject to it, yet they'll forge full steam ahead whilst applying it to the general populace. I hope they recognize the danger of this.


This attitude is not unique to government officials. People are incentivized to care more about themselves and those close to them than about the entirety of humanity. Malignant self-interest is mainstream; cooperation is taboo.


Reminds me of how most of Congress is against Single Payer, yet they are all comfy with their great health insurance.


It's not too dissimilar from what we do at home, in secret. Listen to William Binney, former Director for Global Communications Intelligence at NSA. He might know a thing or two. It's too bad we collectively sit in silence as we build our future prison and bicker about things that aren't nearly as important. It's not the Chinese you really have to worry about.

https://www.youtube.com/watch?v=3owk7vEEOvs

Or listen to Tom Drake, Process Portfolio Director/NSA, Technical Director for Software Engineering Implementation/NSA. He also knows a thing or two.

https://www.youtube.com/watch?v=mjwW1JlGG4o

Those two guys are only talking about declassified info. Imagine what they could tell you about the real classified stuff. Also take note of what the government tried to do to them, even though they actually broke no laws.

Look at how cities like Chicago, New Orleans and Tampa have gotten in bed with Peter Thiel's Palantir systems (with funding from the NSA). Sometimes the public isn't even made aware, as in New Orleans, where they (the Mayor and Palantir) figured out a way to keep it from the public.

> the program escaped public notice, partly because Palantir established it as a philanthropic relationship with the city through Mayor Mitch Landrieu’s signature NOLA For Life program. Thanks to its philanthropic status, as well as New Orleans’ “strong mayor” model of government, the agreement never passed through a public procurement process.

> In fact, key city council members and attorneys contacted by The Verge had no idea that the city had any sort of relationship with Palantir, nor were they aware that Palantir used its program in New Orleans to market its services to another law enforcement agency for a multimillion-dollar contract.

https://www.theverge.com/2018/2/27/17054740/palantir-predict...

Look at how some cities and counties are contracting with Amazon Rekognition systems and piping the results back to police. Washington County, OR (a heavily populated county in the Portland metro area) has it. Cameras mounted on poles everywhere.

https://www.kgw.com/article/money/aclu-calls-out-amazon-wash...

This stuff is happening right here at home. Your 1st and 4th amendment constitutional protections have gone away and you're living under an illusion. Your attention is diverted. Wake up people. It's not just "the other guy". We are complicit. Don't trust they will only use it for "good" purposes. That's just how they sell it, but that's not how it ever ends up. The problem is there is no oversight to be able to tell they are doing what they say they are doing. We've gotten rid of that pesky oversight thing. It was getting in the way.

They have no right to collect any data on you without a search warrant, signed off by a judge, once probable cause has been established. Read about the history of China, Russia and Germany to know what these kinds of files produce, and why we have the 4th amendment to begin with - The Revolutionary War - British soldiers entering anywhere and everywhere they wanted without notice using a "general writ", aka blank search warrant.

I think we all need a history lesson, as we seem to be very hellbent on repeating it. 100 million deaths under Mao, Stalin and Hitler. One Hundred Million. Let that sink in. Left extremists, right extremists - it doesn't matter, they both lead to the same end result - a totalitarian police state and many, many millions of deaths. They're still discovering mass graves.


"Everyone has committed a crime, it's about who we decide to prosecute" --KGB https://www.wsj.com/articles/SB10001424052748704471504574438...


I could do the same thing with a heuristic. Just enumerate every government official with a high net worth and investigate them.


The 'problem' with widespread corruption is not identifying it, it's always been an issue of the political will to do something about it.

When it's rampant, it's easy to find.

So this has nothing to do with AI or even technology. Granted, tech might make it easier to find some bad guys, even then, the fact that stuff is 'online' and in a 'searchable DB' makes this possible, the AI is not necessary.


This sounds like a really typical implementation of an AML system. My guess is they actually shut it off because it generated too many false positives. In authoritarian countries, corruption charges are generally trumped up when an official becomes inconvenient for those in power. This means that there are a lot of things officials do that are tolerated and will be used against them once they need to be removed. Therefore the definition of corruption is loose as it's kind of an accepted operating model. If you implement a system like this, suddenly all that acceptable behavior generates lots of positives because your feature selection and classification is based on a threshold no one meets.
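A minimal sketch of that false-positive dynamic (the threshold, distribution, and "gift income" framing are all invented): when tolerated behavior clusters well above the statutory line, a rule-based AML check flags nearly everyone.

```python
import random

random.seed(1)

# Invented numbers: a statutory "gift income" threshold, and a culture
# where tolerated gift-taking clusters well above that line.
THRESHOLD = 1_000
officials = [max(0.0, random.gauss(5_000, 2_000)) for _ in range(1_000)]

flagged = sum(1 for g in officials if g > THRESHOLD)
print(f"flagged: {flagged} of {len(officials)} officials")
```

A system like that is useless as a detector precisely because the formal rule and the operating norm have diverged.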


Here in Kazakhstan a couple of wise guys suggested using a blockchain to fight crimes within the police itself, and these plans were quickly ditched after they realised the blockchain's pure brutality.


Will blockchain be a force as impactful as writing itself? I remember a short story (possibly fictitious) involving Native American tribes and a missionary who introduced writing. The permanence of records changed the way they lived and how standing between the tribes was contested. What was once a negotiation tactic (by "remembering" that their ancestors fought together, or who married whom) became mere bookkeeping. A lesson was that sometimes what matters is not what happened in the past. What will change when useful, ubiquitous and user-friendly blockchain applications are plentiful?


FWIW, that sounds like The Truth of Fact, the Truth of Feeling by Ted Chiang.

https://en.wikipedia.org/wiki/The_Truth_of_Fact%2C_the_Truth...


Yes! The details were really fuzzy to me. Thank you for finding this story!



"AI" might be as simple as running a calculation on his expenses, real estate holdings, $$ deposited, wired out, salary etc. Maybe run for his close family, driver's family too.
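A crude version of that calculation might look like this (the field names, the ten-year window, and the 3x multiplier are all invented for illustration):

```python
def flag_for_review(official, years=10, multiplier=3):
    """Flag if observable wealth exceeds what the declared salary
    could plausibly have accumulated. Fields and thresholds are
    hypothetical."""
    plausible = official["annual_salary"] * years * multiplier
    wealth = (official["real_estate_value"]
              + official["bank_deposits"]
              + official["wired_out"])
    return wealth > plausible

clerk = {"annual_salary": 15_000, "real_estate_value": 120_000,
         "bank_deposits": 30_000, "wired_out": 0}
boss = {"annual_salary": 20_000, "real_estate_value": 2_500_000,
        "bank_deposits": 800_000, "wired_out": 400_000}

print(flag_for_review(clerk))  # wealth within plausible range
print(flag_for_review(boss))   # wealth far exceeds salary
```

Nothing about this requires machine learning; it's an arithmetic cross-check over existing databases, extended to close family as the comment suggests.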


Are officials objecting to it because they know it will expose their dirty deeds?


High level issue
================

Given how corruption charges are being used and how low the pay of the top guy is, I think the AI needs HI very much.

Put it the other way: everyone must be corrupt somehow, otherwise how can a US$100k top communist send her daughter to Harvard to study psychology? And the other guy, who died on a corruption charge, ran a high-speed railway worth trillions of dollars for a petty salary.

They have to reform the compensation package ... communists have a hard time here. Hence they have to lie ... and if a system can spot liars ...

Muslim jailing issue
====================

I heard this is bleak ... but neither should be done, and given there is not even nominal press independence or an independent judicial system, it is strange to compare.


> Beijing has been developing a nationwide facial recognition system using surveillance cameras capable of identifying any person, anywhere, around the clock within seconds.

So before all the privacy activists are up in arms, this is pretty incredible and it looks like they're getting pretty close to eliminating all violent crime. I think that's an incredible achievement if they can pull it off. In an even broader historical context, individualism and capitalism have had their run for 100+ years, maybe this is the rise of a new ideological movement.


There are many theoretical ways to eliminate violent crime that would be to most people's eyes vastly worse than letting crime continue, so it's dangerous to elevate one metric above all others.

Here are some theoretical ways to eliminate all crime which would likely be worse for everyone:

- Eliminate all people

- Keep all people physically separated from each other.

- Remove all freedom of expression or free will from people.

Those are, obviously, some of the most extreme possible ways to accomplish that goal, but it does illustrate that there's an obvious trade off being made, and that even in cases where it might not be as obvious as these, we should identify and think about the consequences of that trade.


> So before all the privacy activists are up in arms, this is pretty incredible and it looks like they're getting pretty close to eliminating all violent crime.

To be fair, Chinese people anywhere in the world have exceptionally low rates of committing violent crime, and this has been true since long before anyone started a facial recognition program.


Since nothing a government does is a crime, by definition, then by consolidating all crimes, violent and otherwise, to the government then we have eliminated all evil! Brilliant!


> getting pretty close to eliminating all violent crime

Clearly you haven't been on YouTube lately, China must be the world capital of getting stabbed on CCTV.


> So before all the privacy activists are up in arms, this is pretty incredible and it looks like they're getting pretty close to eliminating all violent crime.

Even if it's successful, it will only be used to eliminate the crimes committed by people without connections. Powerful and connected people will be allowed to get away with crimes.


"There is no murder in paradise."


The violent crime rate is very low for another reason: security cameras everywhere. Criminals are either already in jail because of more efficient policing, or they stop doing stupid things after watching real stories on TV about how criminals were caught.


> eliminating all violent crime

You mean, if it’s real and even if it does work that well, besides the state-sponsored kind?

China has millions of people in internment and retraining camps...

Regarding your “new ideological movement” comment. It’s not really... we saw the same ideology (to a less effective level) in all dictatorships and communist countries. Secret police, thought police, etc.


We have millions of black people incarcerated, the result of systematic discrimination over generations.

I'd also like to point out that your statement doesn't really add to the conversation, since anyone in prison, or considered an enemy of the state, can be spun politically.


> We have millions of black people incarcerated

Given how important this subject is, we should not invent facts.

Your core statement is false, off by at least a factor of four. There are roughly half a million black people in prison at all levels, not millions. That destroys your setup by debunking the extreme exaggeration. However, given that it is an important subject, we shouldn't stop there; let's examine the situation.

There are around 1.2m-1.3m people in state prisons, 500k-600k in local jails and 200k in federal prisons. Roughly 1.9-2.1m total across all jails and prisons at all levels. Several hundred thousand of those people are processing through the system, awaiting trial, etc. at any given time.

Roughly one million of those people are there due to rape / sexual assault, murder, assault or robbery.

As of 2016, 339,000 hispanics were sentenced to either federal or state prison; 439,000 whites; 486,000 blacks. That's from the Bureau of Justice Statistics (no figures for local jails). Over the last decade black people saw a decline of roughly 120k people in prison at those levels, white people saw a decline of about 60-70k, hispanic figures were steady.

Can you explain what part of society forces people to rape and murder - whether hispanic, white, or black? I grew up poor, most of the people I knew growing up were relatively poor or very poor, many with broken homes and absent parents, I'd like an education on this. I never saw anything related to social oppression forcing people to rape and murder.

There are a lot of societies with enormous oppression, corruption and poverty, which lack high murder rates and high violent crime rates. China is one example of that and there are several others in Asia. Eastern Europe has also had many examples of that over the last several decades.

Russia has a murder rate about ~8-12x worse than China along with far higher violent crime rates, despite the two countries being at similar levels economically per capita and both having oppressive political systems. What's the difference? Russia has a much worse culture of tolerating and encouraging violence and murder.

I've known a lot of poor people, the only difference I've ever seen between poor people that were violent and poor people that were not, is culture and it was always a choice.


>China is one example of that and there are several others in Asia.

Keep in mind that China has a heavy incentive to lie about their prison population and/or play games with the definition of "prison population"; after all, it's important that a harmonious society has few people who break laws (implying a low prison population). Eastern Europe under Communism faced a similar cost/benefit structure.

Therefore, as they have an incentive to cover that up, it's safe to make the assumption that the real numbers are likely significantly higher.


https://www.naacp.org/criminal-justice-fact-sheet/

If you grow up in the ghetto, you don't have time to think about college prep, violin lessons, or the other things traditionally needed to get into college.

Maybe your parents were incarcerated, and maybe your parents were incarcerated because of a lack of opportunity. These things stack up over generations until the point where it becomes normal for that part of the population.

Yes, many may be incarcerated for good reason, but you need to understand why. Either you believe that black people are simply more likely to be murderers and rapists, or there's another reason.


I’d assume it’s because he’s also replying to an exaggeration. The GP claimed “millions of people,” but the US State department says it’s around 800,000 that are in these de-radicalization centers.


You need to stop spreading incorrect information. The State Department estimates that there are between 800,000 and 2,000,000 prisoners in these camps.

Also, the conditions inside these camps have been reported to be far worse than those you may see inside US prisons.


My point is criminal behavior is defined by the ruling class. So this pervasive 24/7 monitoring doesn’t necessarily reduce the overall harm to a society - in fact it may make it worse.

Also:

> We have millions of black people incarcerated, the result of systematic discrimination over generations.

Not even close to the extent China is persecuting - today. I won’t speak for generations past, clearly not ideal. But I think that’s kind of the point, we agree it’s not ideal.


> Not even close to the extent China is persecuting today

You might want to look that up. America has the highest incarceration rate in the world, even after you factor in the worst estimates for the Uyghur reeducation camps.


Incarcerations are not persecutions. Arguably a negligible number of persecutions occur in the U.S., as most of the incarcerations are due to violent crimes. Most of the Uyghur re-education camp detentions (which are just one of many examples, known and unknown) are not due to violent crimes, but to ethnic or political background (i.e. persecution).


There is a strong legacy component to African Americans and violent crime. You either believe that they are simply more inclined to violent crimes, or there's societal factors out of their control that push them towards that, such as denial of opportunities or institutionalized discrimination.

A great example is the fact that Nixon started the drug war with specifically designed laws to target black people: https://www.huffingtonpost.com/entry/nixon-drug-war-racist_u...

These types of things have long-lasting effects to this day on the African American population. Saying that because they commit violent crimes now the incarceration is justified is a cop-out that ignores the factors that led to it.

For example prestigious universities have strong bias towards legacy applicants. If your forefathers were incarcerated by discriminatory laws in the past, you're unlikely to be a legacy admission, or benefit from any legacy policies in society.


In America, activists can potentially sue or appeal to free those wrongly imprisoned, and the incarceration rate is about 0.75%. The Uyghurs, by contrast, are currently imprisoned at a rate of at minimum 6% (calculated from the lower estimate of camp size and the upper estimate of the Uyghur population in China), another 6% of their population is matched with live-in Han Chinese family monitors, and no one is even pretending that they broke any laws.


Yes but African Americans make up 34% of the 6.8 million that are incarcerated. Maybe the rules causing this disparity are not written down anywhere like in China, but it's certainly ingrained into American society in practice.

https://www.naacp.org/criminal-justice-fact-sheet/


I think you might be a little less than objective when comparing your home team against the rest.


The US State department says at most 800,000 are in these de-radicalization centers. Where are you getting the “millions of people” from?


Watch "Person of Interest" show


One good reason for having many countries is we can run lots of different experiments on how to govern.

Totally ignoring all the humans rights fears, this sure is fascinating.


I mean that's the thing, China is going to implement it, we get to see what happens, and I don't think there's much we can do to stop it. I'm interested in seeing how historians look at this period of human history since it will theoretically be so objectively (? maybe? At least plentifully.) documented.


If there is one thing the Chinese government is known for, it is objectively documenting everything it does and making such records available to all.


Either it's documented and the opposition can make educated criticisms, or it's not documented and the opposition is politically motivated. You can't have both.


Uh, by records do you mean terra cotta soldiers?



