
oh wow. like a real-time Street View monitoring society?



Seems like it. The captured photos (360 degrees) are compared within milliseconds against the list of wanted criminals in the central database, even when the car is moving at 120 km/h.
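The article doesn't say how the matching works, but presumably it's the usual pipeline: compute an embedding for each detected face and run a nearest-neighbour search against precomputed embeddings of everyone on the watchlist. A rough sketch of that idea (the dimensions, threshold, and random data are placeholders, not anything from the article):

    import numpy as np

    # Hypothetical watchlist of 10,000 face embeddings. Random placeholders here;
    # a real system would get these from a face recognition model.
    rng = np.random.default_rng(0)
    watchlist = rng.normal(size=(10_000, 128))
    watchlist /= np.linalg.norm(watchlist, axis=1, keepdims=True)

    # Embedding of one face captured by the car, also a placeholder.
    query = rng.normal(size=128)
    query /= np.linalg.norm(query)

    # Cosine similarity against every watchlist entry, then threshold the best hit.
    scores = watchlist @ query
    best = int(np.argmax(scores))
    THRESHOLD = 0.6  # made-up decision threshold
    if scores[best] > THRESHOLD:
        print(f"possible match: watchlist entry {best} (score {scores[best]:.2f})")
    else:
        print("no match")

Even a brute-force dot product over a few hundred thousand entries takes well under a millisecond on modern hardware, which would be consistent with the "within milliseconds" claim.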


What's the false positive rate? I have a feeling that the base rate fallacy is going to make this technology far too interesting.
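To make the base rate issue concrete, a back-of-the-envelope calculation (all numbers invented for illustration): suppose 1 in 100,000 passers-by is actually on the wanted list, and the system has a 99% true positive rate and a 0.1% false positive rate.

    # Bayes' theorem with made-up numbers, just to illustrate the base rate fallacy.
    prevalence = 1 / 100_000   # fraction of passers-by actually on the wanted list
    tpr = 0.99                 # true positive rate (sensitivity)
    fpr = 0.001                # false positive rate

    p_flag = tpr * prevalence + fpr * (1 - prevalence)
    p_wanted_given_flag = tpr * prevalence / p_flag
    print(f"{p_wanted_given_flag:.2%}")   # ~0.98%

Under those assumptions, roughly 99% of the system's "hits" would be false alarms, which is why the false positive rate matters far more than the headline accuracy.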


Not exactly a technical answer...

Does the Chinese government really care? I ask because this is the type of thing that only enters the conversation when 'effectiveness'/'accuracy' becomes a discussion down the line. I doubt that China gives a s* frankly, which makes this even scarier... the data used to show it works will likely be bad by design.


It's a quantum leap in surveillance tech -- the latency for nation-wide tracking of individuals will be phenomenally reduced. HUMINT resources can be more effectively allocated and target acquisition times can be reduced.

That said, the false positive rate is guaranteed to be high at first, but the tech will improve; furthermore, the data will be correlated with other sources rather than sole-sourced.

Sure they won't mind if there's collateral damage, but it's useless if it wastes time and effort by leading to the wrong person too often.


The Chinese state can throw basically unlimited resources at dealing with false positives. Doesn't seem like a problem for them.


Any place that uses the death penalty as quickly and frequently as China isn't too concerned about false positives.


Really important question, but unfortunately the article doesn't mention it. What it does mention is that a successful identification only needs three-quarters of the face.


Would this work just as well on Western faces as on Asian faces?


I don't know, are computers racist, or is that just a human thing?


We published a paper on this a few years ago! Short answer: algorithms exhibit different accuracies on different races, genders, and ages.

B. F. Klare, M. J. Burge, J. C. Klontz, R. W. Vorder Bruegge and A. K. Jain, "Face Recognition Performance: Role of Demographic Information," in IEEE Transactions on Information Forensics and Security, vol. 7, no. 6, pp. 1789-1801, Dec. 2012.

http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA556941


Computers certainly can be racist when it comes to visual recognition tasks. Different races do look different, after all. For example, there have been several cases of face tracking software failing to detect black people.


In the case I think you're referencing, they were detected, just not as people.

It's a bit of a cop-out just to say, "shucks, computers do the darnedest things!" Humans coded, tested, and deployed that service. They let something fall through the cracks, the awful implications of which detract quite a bit from all the good the service might ever accomplish.


There are a bunch of different examples out there, just search for "face detection black people." In one example, Google Photos tagged a pair of unfortunate folks as gorillas. In another, HP had some webcams that tracked faces but couldn't track black people. Kinect had a similar problem. Nikon had a "did someone blink?" message that produced a lot of false positives with Asian people.

My point isn't to excuse it, and indeed I'm not even trying to address the moral dimensions of it. My point is just that if this is a Chinese product developed for use in China, it's entirely possible that it won't work well on non-Asian faces without additional work.


Sorry, you're right. I must have been aggressively misunderstanding in this case.


That's alright. I didn't know about the gorilla case until I went looking for it there, so that's something I learned.


Don't worry, anyone not looking Asian enough is automatically suspicious.


Someone has to program them.



