No doubt this corruption-busting "AI" was developed by companies and people with deep connections to the relevant Party members, but little of the necessary experience or resources to actually carry out the project successfully. When it became clear that it didn't work and would never work, this story was a way to allow the responsible people to save face.
Meanwhile, Baidu's main campus has had facial recognition on entry gates for years (and the gates cannot be fooled by 2D photos). And Face++ (a Chinese company) provides SDKs to allow you to detect liveness (i.e. not a photo) via their APIs. This is used by many apps when a user registers for the first time.
"her friend's Huawei phone will unlock for her using its version of Face ID"
There was a well-reported story about someone who got a refund on their iPhone for the same reason. How is this a Chinese phenomenon?
"but _genuine_ technological innovation is not something I've seen"
How do you define this? There was a white paper linked here recently about how WeChat scales its backend systems to cope with load. Are none of those scaling techniques innovative? (And probably similar for taobao.)
China, IMO, has both innovative products and services and lazy copycats. Because the copycats are so numerous, we often fail to notice the innovative work coming from there as well.
I bought a Huawei MateBook laptop this past year. It was very cheap but looked cool - like a MacBook. Funny thing though: the Windows install was bootlegged, and I had to use my institutional license to get legitimate Windows software.
I used to use Amazon's marketplace and have now stopped after accidentally purchasing fraudulent Chinese goods (I still use it for digital content).
My exposure to Chinese tech and goods, while limited, has now totally predisposed me negatively towards them. I haven't personally seen any innovative product. Just inconvenient knock-offs which have effectively stolen my money. If all Chinese tech goods were taken off the market, innovative or not, I would be happier knowing that I could start using a more trustworthy Amazon marketplace.
"If all Chinese tech goods were taken off the market" then which tech goods would be left? Are there any tech goods that include no Chinese components? What % of Chinese components is acceptable to you (assuming non-zero)?
What is different about China is an attitude. It was crystallised for me by an Australian documentary titled "Two Men in China". One of those men was formerly Australia's chief scientist. They walked into a typical Chinese apartment. Very small, but it had all the latest gadgets. Then they showed that the doors had fallen off some cupboards. In fact, a whole pile of things had broken a few weeks after the couple had moved in. This was considered perfectly normal - they even had a word for it (which I forget).
In extreme cases it leads to things like baby formula suppliers adding melamine (a poison) to get the protein readings up to spec. (Australia's baby formula industry has done very well out of that little episode - so well that fights in the baby food aisles appear on the nightly TV news as Chinese expats empty the shelves to make a quick buck. Turns out that _really_ pisses off local mothers with a hungry baby in their arms.) If you read stories on HN you come to the conclusion that this attitude must pervade the entire business culture. Buildings falling down because of sub-par concrete, and stories of "we sent them a spec, they built it well for a while, then someone noticed part X had been replaced by a cheaper version" are a very common thread. Anybody who does business in China has to spend an inordinate amount of money verifying that all their suppliers are not cutting corners.
Americans are coming face to face with this now that Amazon has allowed Chinese suppliers unrestricted access. Fake products, fake reviews, fake suppliers, fake purchases on competing products leading to critical reviews, bribes for fake reviews - any scam that might work, even only for the short term, is being tried. Now imagine living in a country where that is how all business is done. Is it at all surprising that someone bought a system with face recognition, only to discover later that it could be fooled by a photo?
If you think China can't innovate, you are just plain wrong. If you are pinning your hopes on their being perpetually held back by corruption, you might be right (things look bleak under the current leadership) - but it's also possible they could turn into another Taiwan or South Korea, albeit with 1.3 billion people. If that happens, we will see innovation on a scale we have never seen before.
Otherwise replay attacks are trivial.
China has “smile to pay” which is ridiculous. Hopefully it’s for small amounts.
Look, biometrics can be used as an id (public key) but not a password (private key)
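A minimal sketch of that split, with hypothetical names and an idealized exact-match biometric (real matching is fuzzy, which is precisely why a template can't serve as a secret): the biometric only selects the account, like a username, while a separate revocable secret does the actual authentication.

```python
import hashlib
import hmac
import os

users = {}  # biometric_id -> {"salt": ..., "secret_hash": ...}

def enroll(biometric_template: bytes, secret: bytes) -> None:
    # The biometric acts as a public identifier: a plain hash is fine for
    # lookup, because it is not treated as confidential.
    biometric_id = hashlib.sha256(biometric_template).hexdigest()
    salt = os.urandom(16)
    users[biometric_id] = {
        "salt": salt,
        # The secret is the revocable credential; if leaked, issue a new one.
        # (You can't issue someone a new face.)
        "secret_hash": hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000),
    }

def authenticate(biometric_template: bytes, secret: bytes) -> bool:
    biometric_id = hashlib.sha256(biometric_template).hexdigest()
    record = users.get(biometric_id)
    if record is None:
        return False  # unknown identifier
    candidate = hashlib.pbkdf2_hmac("sha256", secret, record["salt"], 100_000)
    # Presenting the biometric alone is never enough: the secret must match.
    return hmac.compare_digest(candidate, record["secret_hash"])

enroll(b"face-template-alice", b"alice-pin-1234")
print(authenticate(b"face-template-alice", b"alice-pin-1234"))  # True
print(authenticate(b"face-template-alice", b"wrong-pin"))       # False: the public part alone is useless
```

The point of the sketch: leaking the biometric database here costs you nothing secret, exactly as leaking a list of usernames would.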
And as for the USA ... the same goes triple for Social Security numbers, people!
They are supposed to be that around here, and the government has done the nice thing of including them in some people's (one-man) business registration numbers, which need to be published...
For everything else: True.
In practice, (a) social security numbers are gathered, stored and used for identity verification by thousands of private organisations and (b) those records are frequently leaked.
And that's the crux of the issue with AI being used in any law enforcement situation.
If we allow decisions and conclusions to be drawn by the AI without a clear explanation of how it got there, we're just creating a monster that will advise - and maybe replace - the judgment of law enforcement professionals who won't have the means to question these decisions.
Catching corrupt officials is a laudable goal anywhere, and I'd like to see this applied to highlight potential irregularities that may require a second look. But the danger here is that an AI whose reasoning can't be examined starts being used to make claims, or even as sufficient evidence, to ruin people's lives.
"The test of any such higher authority is, of course, the police force that supports it. For our policemen, we created a race of robots. Their function is to patrol the planets—in space ships like this one—and preserve the peace. In matters of aggression, we have given them absolute power over us; this power can not be revoked. At the ?rst sign of violence, they act automatically against the aggressor. The penalty for provoking their action is too terrible to risk. The result is that we live in peace, without arms or armies, secure in the knowledge that we are free from aggression and war—free to pursue more pro?table enterprises. Now, we do not pretend to have achieved perfection, but we do have a system, and it works."
If we continue to blindly stumble towards that future, we are likely to set 21st-century (USA) biases in stone. Racial, national, sexual, and corporate biases etc. will be baked in. We've seen they usually are in the systems we already have. Yet it will be done mostly innocently and unawares, from accidental bias in training data.
Makes for an interesting future.
It's like today's popular notion that one can control the thoughts of others by banning certain words.
As for controlling thoughts - that's the easiest thing in the world today. There are entire industries devoted to making sure people think as they're told to, and act as they're told to.
Of course they don't work on everyone. But if you can fool more than half of the people when you need to, you can do more or less anything you want.
> There are entire industries devoted to making sure people think as they're told to, and act as they're told to.
I'm talking about punishing people for the use of certain words with the idea that it will reshape their thoughts, which is not the same thing as persuasion techniques.
This list of evidence would then be subjected to the same scrutiny that a list of evidence from a department of human police officers would be subjected to. If the AI cannot make the case against somebody in a way that's persuasive to a human prosecutor, then the case goes nowhere.
If the AI cannot provide a rational, clear, and understandable justification for its assertion, then it's not ready to replace humans, who can be expected to provide justifications for their beliefs. Police officers are expected to write police reports, and so should any AI that's meant to do their job. Right now they've just got the AI equivalent of a cop who says "I feel in my gut that he's guilty, and I'm not going to explain why." But because it's "AI", I guess some politicians thought that was good enough.
Of course doing it right is probably harder.
The problem there is selection bias. If you have a list of fifteen things, three make it slightly more likely they've committed a crime. Seven make it slightly less likely. Two make it significantly more likely and three make it significantly less likely. If you consider them all, the probability they've committed a crime is in the "probably not" range. But if you take the list and prune everything that makes it less likely, leaving only the factors that make it more likely, you get a distorted result. This is what prosecutors do for juries in general, and it's problematic.
Now suppose instead of a list of fifteen you start with a list of fifteen million and do the same thing.
Of course, criminal law works at a much lower confidence level than physics, so it needs a trial where the defense is in charge of gathering that exonerating evidence when the prosecutors get a false positive. It could work better, but this one problem is already accounted for.
If you instead got the thousand most incriminating factors out of millions, it looks like a damning mountain of evidence when it's really just the first page of the list of millions of factors sorted by how bad they look.
It completely destroys anyone's credibility scale because seeing a dozen one-in-a-million matches intuitively feels impossible to be a coincidence, but that's less than the result you get by random chance when you attempt to match fifteen million factors.
If you take the list of a million and prune it to ten it's even worse. You can find ten 1:100,000 probability events that occur randomly in a list of a million. It's this:
Except a million tests instead of twenty.
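The arithmetic is easy to check with a quick simulation, using the numbers from the example above (one million factors checked, each "matching" an innocent person with 1:100,000 probability):

```python
import random

random.seed(0)

n_factors = 1_000_000   # factors checked against one completely innocent person
p_match = 1 / 100_000   # chance of a coincidental "match" on any single factor

# Expected number of purely coincidental rare-looking matches:
expected = n_factors * p_match
print(expected)  # 10.0 -- ten "one in a hundred thousand" hits, by chance alone

# Simulate it once to see the effect directly.
hits = sum(random.random() < p_match for _ in range(n_factors))
print(hits)  # typically around 10
```

Each of those ten hits, presented in isolation, looks like a one-in-a-hundred-thousand coincidence; together they look like an impossible pile of evidence, yet they are exactly what chance predicts.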
If you want to build better prosecutors that understand statistics, that should be considered a separate project. One AI that assembles the best case against the accused, and another AI that critically examines that case and discards it if it fails to meet some threshold for statistical relevance, or passes it on to human prosecutors if it does.
One is that the selection process is still opaque. If you have a corrupt prosecutor you're screwed either way. But if you have an honest prosecutor and you give them a collection of evidence selected for a bias in favor of guilt, they have no independent way of knowing that. So now you not only need an honest prosecutor, you need an honest AI, and we're back to the opaqueness issues.
The second problem is that in principle right now if the prosecutor and the defense attorney spend an equal amount of resources, they each get proportional results. If you create this database which the prosecutor can use, can the defense also use it? Do they get access to the data to run their own queries and algorithms against? If so it seems a rather large privacy problem, since the database would contain sensitive information about everyone, but if not then you're handing an asymmetric advantage to the prosecution.
Transparency at least allows everyone to go through it and undermine it and the system thoroughly. "The judge has more corruption data points than the accused he sentenced, and this newborn with a Han name has a far higher rating than this infamous, connected embezzler!"
It's like building an AI that guesses when it's going to be a rainy day in Seattle.
This with a large helping of "the laws were intentionally written to be broad" on the side.
With a suitably inclusive definition of "corrupt" every official is corrupt.
The funny thing is expert systems were doing that 30 years ago. Seems like this would be a perfect use of them but it's not a buzzword any more.
It's crazy how many wheels get poorly reinvented in the software world.
While it's very easy to do machine learning _incorrectly_, it is also possible to reasonably attribute factors to outcomes based on large quantities of data. You can also look at LIME/SHAP and other factor-contribution metrics. This seems like a significant improvement over expert systems.
Re-reading the article now, it may very well be an expert system. The "big data" mentioned isn't training data, but pre-existing databases like bank account info, salaries, contracts, property ownership, etc. FTA:
> Disciplinary officials need to help scientists train the machine with their experience and knowledge accumulated from previous cases. For instance, disciplinary officials spent many hours manually tagging unusual phenomenon in various types of data sets to teach the machine what to look for.
As far as explaining the results, it's hard to draw any conclusion without knowing exactly what "not very good at explaining the process it has gone through" means.
I'm doubtful that anyone is getting charged without specifics.
Moreover, with this amount of corruption, it seems clear that just some basic oversight is all that is necessary.
Is it just me or does that simply show that there is a lot of casual corruption in China?
It's in the original game and it was the MJ12 Daedalus AI, although I'm not quite sure where the player picks that piece of backstory up. It might be in the dialogue with the prototype in the Everett house.
Definitely time to give it another play through.
Time to play again I guess.
The above statement is just too convenient and practically superstitious.
AI is data hungry. That implies there are so many past incidences of corruption that you can generate large training data sets. So perhaps even a random guess would look efficient, since it would be right more often than not.
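To make the base-rate point concrete - with entirely hypothetical numbers - suppose 60% of the officials flagged for audit really are corrupt. Then a "model" that needs no data at all is already right more often than not, and a claimed accuracy figure means little without comparing it to that baseline:

```python
# Hypothetical numbers for illustration only.
base_rate = 0.60  # assumed fraction of audited officials who are actually corrupt

# A "classifier" that always predicts "corrupt" needs no intelligence at all:
always_guilty_accuracy = base_rate
print(always_guilty_accuracy)  # 0.6 -- right more often than not, by construction

# Accuracy alone therefore says little; lift over the base rate is the
# number to ask for.
model_accuracy = 0.75  # a claimed model accuracy, also hypothetical
lift = model_accuracy / base_rate
print(round(lift, 2))  # 1.25 -- only 25% better than guessing "guilty" every time
```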
To be fair, generating human-understandable explanations of predictions in a complicated nonlinear model is difficult in general. You typically need to come up with some kind of simplification of the model (like LIME), and it's far from perfect or generally applicable.
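The core idea behind LIME can be sketched in a few lines (this is a toy local-surrogate sketch with made-up functions, not the actual LIME library): perturb inputs around the point of interest, weight samples by proximity, and fit a weighted linear model whose coefficients act as the local explanation of the black box.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "black box": a nonlinear model of two features.
def black_box(X):
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

def explain_locally(f, x0, n_samples=5000, width=0.1):
    # Sample perturbations around the point we want explained.
    X = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = f(X)
    # Proximity weights: nearby perturbations matter more.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X - x0])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # local slope per feature = the "explanation"

x0 = np.array([0.0, 2.0])
slopes = explain_locally(black_box, x0)
print(np.round(slopes, 1))  # close to the true local gradient (3, 4)
```

The surrogate is faithful only near x0, and the answer depends on the sampling width and weighting kernel - which is exactly the "far from perfect" caveat above.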
There's still more work to do in interpretability but models are rarely opaque black boxes with no way to see inside.
'You can turn it off!' he said. 'Yes,' said O'Brien, 'we can turn it off. We have that privilege.'
Or listen to Tom Drake, Process Portfolio Director/NSA, Technical Director for Software Engineering Implementation/NSA. He also knows a thing or two.
Those two guys are only talking about declassified info. Imagine what they would tell you about the real classified stuff. Also take note of what the government tried to do to them, and they actually broke no laws.
Look at how cities like Chicago, New Orleans and Tampa have gotten in bed with Peter Thiel's Palantir systems (with funding from the NSA). Sometimes the public isn't even made aware, like in New Orleans, where they (the Mayor and Palantir) figured out a way to keep it from the public.
> the program escaped public notice, partly because Palantir established it as a philanthropic relationship with the city through Mayor Mitch Landrieu’s signature NOLA For Life program. Thanks to its philanthropic status, as well as New Orleans’ “strong mayor” model of government, the agreement never passed through a public procurement process.
> In fact, key city council members and attorneys contacted by The Verge had no idea that the city had any sort of relationship with Palantir, nor were they aware that Palantir used its program in New Orleans to market its services to another law enforcement agency for a multimillion-dollar contract.
Look at how some cities and counties are contracting with Amazon Rekognition systems and piping the results back to police. Washington County, OR (one of the most populous counties in the Portland area) has it. Cameras mounted on poles everywhere.
This stuff is happening right here at home. Your 1st and 4th amendment constitutional protections have gone away and you're living under an illusion. Your attention is diverted. Wake up people. It's not just "the other guy". We are complicit. Don't trust they will only use it for "good" purposes. That's just how they sell it, but that's not how it ever ends up. The problem is there is no oversight to be able to tell they are doing what they say they are doing. We've gotten rid of that pesky oversight thing. It was getting in the way.
They have no right to collect any data on you without a search warrant, signed off by a judge, once probable cause has been established. Read about the history of China, Russia and Germany to know what these kinds of files produce, and why we have the 4th amendment to begin with - The Revolutionary War - British soldiers entering anywhere and everywhere they wanted without notice using a "general writ", aka blank search warrant.
I think we all need a history lesson, as we seem to be very hellbent on repeating it. 100 million deaths under Mao, Stalin and Hitler. One Hundred Million. Let that sink in. Left extremists, right extremists - it doesn't matter, they both lead to the same end result - a totalitarian police state and many, many millions of deaths. They're still discovering mass graves.
When it's rampant, it's easy to find.
So this has nothing to do with AI or even technology. Granted, tech might make it easier to find some bad guys, but even then, it's the fact that the data is 'online' and in a 'searchable DB' that makes this possible; the AI is not necessary.
Given how rampant corruption is and how little the top officials are paid, I think the AI very much needs HI (human intelligence).
Put it the other way: everyone must be corrupt somehow. Otherwise, how could a top Communist official on a US$100k salary send her daughter to Harvard to study psychology? And the other guy, brought down by corruption charges, ran a high-speed railway worth trillions of dollars on a petty salary.
They have to reform the compensation package... the Communists have a hard time here. Hence everyone has to lie... and if a system can spot liars...
The Muslim jailing issue.
I've heard this compared to black incarceration in the US... but neither should be done, and given that there isn't even press independence or an independent judicial system, it's strange to compare.
So before all the privacy activists are up in arms, this is pretty incredible and it looks like they're getting pretty close to eliminating all violent crime. I think that's an incredible achievement if they can pull it off. In an even broader historical context, individualism and capitalism have had their run for 100+ years, maybe this is the rise of a new ideological movement.
Here are some theoretical ways to eliminate all crime which would likely be worse for everyone:
- Eliminate all people
- Keep all people physically separated from each other.
- Remove all freedom of expression or free will from people.
Those are, obviously, some of the most extreme possible ways to accomplish that goal, but it does illustrate that there's an obvious trade off being made, and that even in cases where it might not be as obvious as these, we should identify and think about the consequences of that trade.
To be fair, Chinese people anywhere in the world have exceptionally low rates of committing violent crime, and this has been true since long before anyone started a facial recognition program.
Clearly you haven't been on YouTube lately, China must be the world capital of getting stabbed on CCTV.
Even if it's successful, it will only be used to eliminate the crimes committed by people without connections. Powerful and connected people will be allowed to get away with crimes.
You mean, if it’s real and even if it does work that well, besides the state sponsored kind?
China has millions of people in internment and retraining camps...
Regarding your “new ideological movement” comment. It’s not really... we saw the same ideology (to a less effective level) in all dictatorships and communist countries. Secret police, thought police, etc.
I'd also like to point out your statement doesn't really add to the conversation as any people in prison, or considered an enemy of the state can be spun politically.
Given how important this subject is, it's important not to invent facts.
Your core statement is false and off by at least fourfold. There are roughly half a million black people in prison at all levels, not millions. That destroys your setup by debunking the extreme exaggeration. However, given that it is an important subject, we shouldn't stop there; let's examine the situation.
There are around 1.2m-1.3m people in state prisons, 500k-600k in local jails and 200k in federal prisons. Roughly 1.9-2.1m total across all jails and prisons at all levels. Several hundred thousand of those people are processing through the system, awaiting trial, etc. at any given time.
Roughly one million of those people are there due to rape / sexual assault, murder, assault or robbery.
As of 2016, 339,000 hispanics were sentenced to either federal or state prison; 439,000 whites; 486,000 blacks. That's from the Bureau of Justice Statistics (no figures for local jails). Over the last decade black people saw a decline of roughly 120k people in prison at those levels, white people saw a decline of about 60-70k, hispanic figures were steady.
Can you explain what part of society forces people to rape and murder - whether hispanic, white, or black? I grew up poor, most of the people I knew growing up were relatively poor or very poor, many with broken homes and absent parents, I'd like an education on this. I never saw anything related to social oppression forcing people to rape and murder.
There are a lot of societies with enormous oppression, corruption and poverty, which lack high murder rates and high violent crime rates. China is one example of that and there are several others in Asia. Eastern Europe has also had many examples of that over the last several decades.
Russia has a murder rate about ~8-12x worse than China along with far higher violent crime rates, despite the two countries being at similar levels economically per capita and both having oppressive political systems. What's the difference? Russia has a much worse culture of tolerating and encouraging violence and murder.
I've known a lot of poor people, the only difference I've ever seen between poor people that were violent and poor people that were not, is culture and it was always a choice.
Keep in mind that China has a heavy incentive to lie about their prison population and/or play games with the definition of "prison population"; after all, it's important that a harmonious society has few people who break laws (implying a low prison population). Eastern Europe under Communism faced a similar cost/benefit structure.
Therefore, as they have an incentive to cover that up, it's safe to make the assumption that the real numbers are likely significantly higher.
If you grow up in the ghetto you don't have the time to think about college prep, violin lessons, or things like that traditionally needed to get to college.
Maybe your parents were incarcerated, and maybe your parents were incarcerated because of a lack of opportunity. These things stack up over generations until the point where it becomes normal for that part of the population.
Yes, many may be incarcerated for good reason, but you need to understand why. For this you'll either believe that black people are simply more likely to be murderers and rapists or there's another reason why.
Also, the conditions inside these camps have been reported to be far worse than those you may see inside US prisons.
> We have millions of black people incarcerated, the result of systematic discrimination over generations.
Not even close to the extent China is persecuting - today. I won’t speak for generations past, clearly not ideal. But I think that’s kind of the point, we agree it’s not ideal.
You might want to look that up. America has the highest incarceration rate in the world, even after you factor in the worst estimates for the Uyghur reeducation camps.
A great example is the fact that Nixon started the drug war with specifically designed laws to target black people: https://www.huffingtonpost.com/entry/nixon-drug-war-racist_u...
These types of things have long-lasting effects to this day that affect the African American population. Saying that they are now committing violent crimes, so the incarceration is justified, is a cop-out that ignores the factors that led to it.
For example prestigious universities have strong bias towards legacy applicants. If your forefathers were incarcerated by discriminatory laws in the past, you're unlikely to be a legacy admission, or benefit from any legacy policies in society.
Totally ignoring all the humans rights fears, this sure is fascinating.