In order to ensure the right to e.g. scratch your face in public, how many false negatives are acceptable? I'll take false negatives over false positives. Moreover, it's incredibly important that false positives be easily rectifiable.
I'm stunned to see someone arguing that ticketing someone for scratching their face is acceptable. Just because a machine lets us attempt it at scale doesn't make it any more acceptable.
My wife was driving in the carpool lane with our infant daughter asleep in a car seat covered by a blanket to keep the sun off of her.
A cop pulled her over for illegal use of the carpool lane. It's a completely understandable error since the car appeared from the outside to have only one occupant.
The point is that errors happen with all methods of enforcement. But when we talk about automated systems, somehow the standard is set much higher, so even one driver falsely accused is too much.
Since the tradeoff for minimizing false positives is greater injury and death, it matters that we get it right.
The bar isn't higher for machines. If anything it seems lower. If the cop had given your wife a ticket after seeing that your daughter was there, that would be crazy.
Is this true? Couldn't we run a process on top of a lot of humans which, assuming they are weak learners, guarantees strong learning (i.e. boosting)?
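For what it's worth, here's a minimal simulation of that intuition. It's closer to independent majority voting than true boosting (which reweights training examples), and the per-reviewer accuracy of 0.6 and the reviewer counts are made up purely for illustration, but it shows how individually weak, better-than-chance judgments can compound into a much stronger decision:

    import random

    def majority_vote_accuracy(p_correct=0.6, n_reviewers=15, trials=20000):
        """Estimate how often a majority vote of independent 'weak' reviewers
        (each correct with probability p_correct > 0.5) reaches the right answer."""
        wins = 0
        for _ in range(trials):
            # Count reviewers who judge the case correctly in this trial.
            votes = sum(random.random() < p_correct for _ in range(n_reviewers))
            if votes > n_reviewers / 2:
                wins += 1
        return wins / trials

    for n in (1, 5, 15, 51):
        print(n, round(majority_vote_accuracy(n_reviewers=n), 3))

The catch is that this assumes reviewers err independently, which humans all looking at the same ambiguous photo generally don't.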
Vision intelligence techniques are heavily used in China in relatively critical scenarios such as the one in this article, door security systems, or bank check-in systems.
This is so stupid, because none of those techniques are reliable enough to be used in that kind of scenario.
Not to mention that software developers working for the government or banks in China are usually worse than developers at tech companies.
> Not to mention that software developers working for the government or banks in China are usually worse than developers at tech companies.
It's true.
But in recent years, leading tech corporations like Alibaba have become the major backers of the government's big data and smart city plans. Are you sure the Chinese government won't start outsourcing its mass surveillance system to the private sector?
I think the I in AI is going to come to mean Idiot after a while, due to the lack of nuanced perception and the tendency to go off broad brush strokes. It's akin to jumping to conclusions; we've just created computers that can do that now.
The media manufacturing consent for yet greater incursions into every detail of our lives in the inexorable search for more government revenue... I love it.
I'll take the opposite view.
Distracted driving causes 1.6 million traffic accidents a year in the US alone, causing more than 3,000 deaths [0].
I don't have statistics for China, but scaling by roughly 4x the population suggests maybe 6.4 million accidents and 12,000 deaths a year.
No system is completely error free. Even posting a cop on every corner to cite distracted drivers will make mistakes.
So given the carnage, agony and grief that could be prevented, how many false positives are acceptable?
[0] https://www.nhtsa.gov/risky-driving/distracted-driving