I think a fun thought exercise is finding the fine line between tech and guns/alcohol/cars. You cannot sue a gun/car/alcohol manufacturer if their product is used to injure someone because it functioned as designed but was used maliciously. How does that legal precedent work when extrapolated to tech and something like facial recognition? If it worked exactly as designed and we know it has a margin of error (or can be used improperly and have disastrous results, like a car or gun), could Amazon or a tech administering it be liable for someone falsely imprisoned?
It's still ethically questionable. In fact I'm struggling to come up with a better example than facial recognition tech (except other mass surveillance). Maybe cutting corners while developing driverless cars that results in the death of a pedestrian.
Almost every engineering discipline has a code of ethics [0][1][2][3]. It's time software "engineering" grew up and did the same.
I rarely see ethics mentioned on HN, and granted, people's views differ. But it's weird we're not having that conversation at all.
You're right! This is a critically important conversation that we absolutely need to have within our profession. It's very often ignored and there's no support system for people who take ethical stands.
So. Let's talk about ethics. I, personally, subscribe to the ACM code of ethics.
I think Rekognition, as built and presented, falls fully within that strict ethical code. It can be put to uses that are unethical, but that does not fall upon the people who made it. Certainly, an engineer creating a system such as the one the ACLU created would be acting unethically.
> could Amazon or a tech administering it be liable for someone falsely imprisoned?
I would wager "absolutely not," for the exact same reason as firearms or cars. The person who misapplied the technology, or a middleman vendor, though? Unless they fall under qualified immunity, there's your fall guy.
I think a better analogy would be a bridge or building: they are regulated by a rigorous code that will exonerate the engineers behind the implementation only if it was followed to the letter.
Will that slow things down and make them more expensive? Absolutely, but that’s the cost of safety.
In this case, though, what is probably most lacking is public knowledge about the shortcomings of such systems. The public needs to understand that 90% accuracy, while it sounds high, means the system is wrong a lot, and the chances of it being wrong at least once when it's run continuously against many faces are actually quite high.
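To make that concrete, here's a minimal sketch of the base-rate problem, using Bayes' rule. The specific numbers (90% sensitivity, 10% false-positive rate, 1-in-10,000 prevalence) are illustrative assumptions, not Rekognition's actual figures:

```python
# Hypothetical illustration of why a "90% accurate" face matcher is
# mostly wrong when true matches are rare: false positives from the
# huge non-matching population swamp the few true matches.

def false_alarm_fraction(sensitivity, false_positive_rate, prevalence):
    """Probability that a reported match is wrong (1 - precision),
    computed via Bayes' rule. All inputs are assumed example figures."""
    true_alarms = sensitivity * prevalence
    false_alarms = false_positive_rate * (1 - prevalence)
    return false_alarms / (true_alarms + false_alarms)

# Suppose 1 in 10,000 scanned faces is actually on a watchlist, and the
# matcher catches 90% of them with a 10% false-positive rate.
frac = false_alarm_fraction(0.90, 0.10, 1 / 10_000)
print(f"{frac:.1%} of reported matches are false")  # → 99.9% of reported matches are false
```

Under these assumed numbers, more than 99% of the system's "matches" would be misidentifications, which is exactly why headline accuracy figures mislead the public.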
If firearms manufacturers produced guns that fired straight over ninety percent of the time, with no guarantee of bullet direction beyond that, they'd probably have a lot of lawsuits on their hands.
Would a camera manufacturer be liable if the camera's image processor happened to cause one person's photograph to look like another, which then caused a human looking at that photograph to misidentify someone?
I would be shocked if LE started using this tech, imprisoned a bunch of people, and used the excuse "Well, we followed the FAQs." I would imagine they would work very closely with Amazon and run a ton of calibration. For the ACLU to say "we used the default settings and it doesn't work correctly" is disingenuous.
I would not be shocked at all if some LE somewhere used the default settings. It seems absurd to assume otherwise. It's not as though LE has a great track record.
Assuming that law enforcement is going to spend any amount of time being careful about enforcing correct use of forensic tools is hard to reconcile with the history of the American judicial system. For example, there is essentially no scientific evidence that a polygraph test measures anything at all, but polygraph evidence is considered admissible in many courts and used to be admitted in many more. Moreover, rigorous studies of fingerprint evidence have found them to have false positive rates as high as 1 in 18, but they are still often treated as close to infallible in court. I am not sure why you think facial recognition is going to be different.
LE doesn't have the best track record following best practices. For example, they still rely heavily on notoriously unreliable eye-witness accounts and polygraph tests, two things science has largely debunked.
In my opinion, this is where the crux of the misinformed-population epidemic resides. A sensational headline that sows mass doubt, panic, uncertainty, and outrage will get thousands and thousands of retweets and shares, yet the very valid response to it from the accused will not get any traction.
I concur. I really enjoy reading the comments section on some websites because there is some very insightful discourse happening that offers counter-arguments to the points in the article. But I am thoroughly unimpressed with the HN community when I read the comments on any topic that can tangentially be related to politics or currently politicized topics (mostly US topics, like immigration, Repub vs Demo, Trump, etc.). Ones that come to mind recently are MSFT and ICE, Facebook and censorship, TSA, Amazon and AI. There is an immense amount of conversation that could happen, and great insight could be shared from a community of extremely knowledgeable people, but within 2 hours of a story making it to the front page, the comments section is mostly light gray, double-digit dead threads, and not much more comprehensible than a Buzzfeed comment section about the Kardashians.
I believe there is a huge difference between you working on something that has the main function of facilitating an illegal enterprise and you working on something that has a tiny minority mis-using your product for illegal purposes. Silk Road vs Craigslist is an example that comes to mind.
I wonder how much of Tor usage is for illegal activities, including black markets, compared to legal activities? And doesn't it seem arbitrary that the same action may or may not be legal depending upon how many people use Tor for legal vs illegal activities? What happens if last year it was primarily used for legal purposes, but this year some new black-market site took off and people are primarily using it for that?
Previous discussion was all about whether what he did was illegal or whether he was complicit.
I'm actually most saddened by this part.
When he was 8 years old, Alfred Anaya destroyed his mother’s vacuum cleaner in the pursuit of knowledge. “I took it apart because I wanted to find the motor inside,” he recalls. “I was so young, I thought the motor would work all by itself even after I took it out. I didn’t realize it needed to be plugged in to go.” His mother was upset but hardly surprised to discover her ruined vacuum, for she knew all about her youngest son’s rabid curiosity. Alfred was forever disassembling Sony Walkmans or clock radios so he could fill his favorite junk drawer with circuit boards, which thrilled him with their intricacy.
His childhood doesn't seem much different from many of us hackers. I wonder how different his life would have been if he had been noticed by a technically savvy and connected benefactor who could have sponsored him. I could easily see someone like him being discovered at an early age going on to do breakthrough work with the benefit of a better education and circumstances. His poverty probably prevented the potential opportunities he could have had in life.
This feels like one of the big challenges that needs to get solved. We need to find ways to bring people like this up out of poverty so they don't end up making bad choices and landing on the wrong side of the law. Feels like such a waste.
I wonder if it's an education problem. I would wager there's quite a few of us who would be attracted to legally questionable work for big payoffs. Even FBI higher-ups have sold secrets to foreign countries for money.
And according to the article, he was discovered after all: the stereo shop that hired him was impressed by his installation work.
+1 I hadn't seen it back then; it came up yesterday in someone's comment and I found it shocking!
The DA's reasoning, that he made the illegal business's blooming possible, shocked me. By that logic we should put Zuckerberg in jail, because thanks to Facebook ex-cons are finding each other and their conspiracies are blooming. We should put Page behind bars, because Google makes it possible to find DIY "how to make bombs" guides. We should put AT&T officials behind bars, because thanks to the internet, online crime is booming! Also Apple officials, because I'm sure criminals use iPhones.
It's really shocking and saddening, this particular DA's drive to destroy his life. The man didn't do anything illegal, and he didn't want to become a snitch. So they sent a jury after him. And because he is covered with tattoos and likes strip clubs, I guess his place is 25 years behind bars. /s