
There is a case to be made about how good the AI behind emotion detection really is. When you take the test, it will be accurate for some and blissfully wrong for others. Wrong for most, in fact. I took the test (or unknowingly took it) and it was correct for the most part. And it got some things wrong. I love dogs.

The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

When the facial recognition software combines your facial expression and your name while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black, your terrorist score is at 52%. A police car is dispatched.

In 2017, my contract was not transferred to the new system. The automated system saw an ex-employee scanning his key card multiple times. Security was dispatched to catch the rogue employee. A simple conversation should have cleared things up, but the computer had already flagged me as troublesome. Long story short, I was fired.

When the machine calculates your emotions, the results are unquestionable. Or rather, we don't know how it got to the answer, so we trust that it is right. It is a computer, after all.

What scares me is not how fast machine learning is being deployed into every aspect of our lives. What scares me is our reaction to it.




The problem with all current "AI"-driven systems, be it facial recognition, voice recognition, translation, fraud detection, navigation, whatever, is that they are not 100% right, and when they're wrong, they're hilariously, devastatingly super-wrong in a way that humans are not.

But since the success modes are good and human-like, we assume that the failures are going to be human-like as well. In reality, the failure modes of these systems are usually bizarre and alien. Take self-driving accidents, for example. Pretty much all of them happen in situations where no human would fail, and that much is obvious to most people. But then we forget about all the other mistakes similar "AI" systems make and don't realize that those are also failures no human would make.


> Take self-driving accidents, for example. Pretty much all of them happen in situations where no human would fail, and that much is obvious to most people. But then we forget about all the other mistakes similar "AI" systems make and don't realize that those are also failures no human would make.

Amen. I've tried to explain precisely this to many brilliant AI/ML folks in lots of variations, with little success. They look at me funny, as if I'm a crank for believing that probabilities don't capture the whole story. It seems that, to far too many of them, a computer that has half the accident rate of a human is a strictly better replacement, end of story. The notion of just how spectacular the failure mode might be, or the degree of control that a human might have in the process, or any other human factor you can think of besides the accident rate, just seems completely nonsensical to many of them. For the life of me, I have yet to find a way to convey this thought in a compelling manner.


To me, one of the best uses of this generation of AI would be as an aid to decision making. Your personal Skynet learns representations of data that help you make decisions, and watches out for worrying signs that you may be missing (perhaps based on your known mistake patterns, adapting as those change).

Meanwhile, human domain experts still look at the details and are able to add their own understanding of broader context, human nature, etc.

In this world, AI/ML primarily increases quality by augmenting human abilities rather than decreasing cost by automating humans out of the system. It's a smaller market maybe, but there are areas where better work can fetch a premium.

I think part of this is the lack of well-understood patterns for the plumbing and UI of a system like this, patterns that make it easy, useful, and non-threatening to users. It's nothing that someone couldn't figure out, but it's not as well-paved a road as integrating a caching, search, or image-processing subsystem.


So I guess we aim for augmentation first before shooting for a complete replacement.


It sounds reasonable to me! Although I think the complete-replacement scenario has proven a little harder than it looked for some tasks (driving could be an example).

Yet it seems like there are a lot more all-or-nothing efforts (or ones that treat human workers as a level of escalation). I see fewer projects aimed at being helpful in a user-directed way without taking control.


Statistics is incredibly hard and unintuitive, and if you understand statistics, it might be hard to understand that the majority of people absolutely don't.

I get that if all cars were self-driving, and the error rate of the self-driving system was half of the average human error rate, it would be better for humanity.

But if the errors of those self-driving cars were obviously avoidable for a human, the vast majority of people would never, ever, ever in a million years trust that system. It's a complete fucking no-go. Most people would much rather take shittier odds where the failures are human than better odds where the failures are alien and bizarre. Most people would automatically think that risk is worse.

I probably would too, even though I understand the statistics.

It's the same reason people are afraid of flying, even though it's much, much safer than driving a car.


> in a way that humans are not.

I dunno... from the parent post:

> you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black, your terrorist score is at 52%

That sounds like a pretty human failure mode to me.


Yes, it sounds like that because it was a human making up a failure mode, so of course it was human. Now suppose you bought three 12-packs of Coke Zero last week, you sneezed on the frame the computer used to identify your face, the moon was waxing gibbous (which nobody even realized the AI was actually using as part of its calculations), your name is a 3-letter first name followed by a 15-letter last name, and there is a coincidental confluence of 25 other equally pointless things that, technically, the AI weights at non-zero. Now you're a terrorist.


That's a way better example. Neural nets are really good at picking up obscure correlations. The last letter of your first name is 'j', there are limestone blocks in the picture and you're wearing a yellow t-shirt? Terrorist for sure.
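For what it's worth, this is easy to demonstrate without even reaching for a neural net. Here is a minimal sketch (pure noise, made-up setup, assuming numpy and scikit-learn are available): give even a plain logistic regression more meaningless features than examples and it will happily "find" the pattern in random data.

    # 50 people, 200 pointless features, labels assigned by coin flip.
    # With weak regularization the model chases every coincidence, fits the
    # training labels almost perfectly, and does no better than chance on
    # anyone it hasn't seen.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(50, 200))
    y_train = rng.integers(0, 2, size=50)
    X_test = rng.normal(size=(1000, 200))
    y_test = rng.integers(0, 2, size=1000)

    model = LogisticRegression(C=1e6, max_iter=5000).fit(X_train, y_train)
    print("train accuracy:", model.score(X_train, y_train))  # ~1.0: memorized the noise
    print("test accuracy:", model.score(X_test, y_test))     # ~0.5: coin flip on new people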


Every educated AI knows that the best way to spot a criminal is to look for people staring directly at the camera, with a white rectangle below their face.


To play devil's advocate though, sometimes the ML training process is picking up a signal that humans simply haven't noticed.

The moon waxing gibbous is a fun example: what if moon phase were to affect our biology in subtle ways?

There are plenty of things that people feel or know to be true but have trouble putting into words. That guy you know who just gives you a creepy vibe, but you can't put your finger on exactly why?


To some extent, I tend to agree. Sometimes we humans are a bit too attached to our stories, and when a computer shows us they may not be true, our instinct is to simply disregard the computer, rightly or wrongly. After all, it isn't as if we haven't all had computers be wrong several thousand times for ourselves, right?

But there is also an extent to which, if my example did occur, it would still be flawed, at least with modern AI. Even if all these signals are, in some abstract sense, truly signals that you are a terrorist, in the real world it is still not correct to simply add them together, or whatever other simple operation the AI is doing. For all that deep learning may be cool, it's still got some giant, gaping holes in it compared to however human cognition works. Humans can look at that and say, "Yes, even if those are all 1% signals, it really isn't sensible to add them all together." The current state of AI is not able to come to that conclusion, or at least not well enough.
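To put a rough number on that, here's a back-of-the-envelope sketch with entirely made-up figures of what "adding up" 25 tiny signals does when a model treats them as independent evidence (naive-Bayes style, which is exactly adding them in log-odds):

    prior = 1e-6            # assumed base rate of actually being a terrorist
    likelihood_ratio = 1.5  # assumed tiny lift per pointless feature (Coke Zero, moon phase, ...)
    n_signals = 25

    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio ** n_signals  # "adding" in log-odds
    posterior = posterior_odds / (1 + posterior_odds)

    print(f"score after {n_signals} weak signals: {posterior:.2%}")
    # ~2.5%, i.e. a ~25,000x jump over the base rate, produced entirely by the
    # assumption that 25 trivial, correlated features count as independent evidence.

The individual numbers are harmless; it's the independence assumption doing all the damage.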

If that changes in the future, well, I'll update my beliefs as appropriate then. It's hard to guess the "cognitive shape" of the biases the next breakthrough in AI may have.


The problem is that correlation does not necessarily imply causation. So if an ML process finds a counter-intuitive correlation, we should seek to understand why rather than assume the results are actually meaningful. Sadly, I think this is the step that is being gratuitously skipped, to the extent that "working" models are hardly interrogated.


Seems like the main game changer here is the feasibility of massive surveillance databases, not the human/"AI" decision-making. Of course non-invasive keys for those databases, like face recognition, are convenient, but if they don't work well enough I expect something like machine-readable license plates for pedestrians.


This is an excellent point. I can foresee a world where ML becomes a sort of "bias laundering."

"Nobody knows why the machine learning module denied black people a mortgage 43% more frequently than whites, but it's 'AI' shrug"

One of the biggest priorities as we shape this future needs to be not only getting the algorithms to make correct decisions, but also giving ourselves the ability to interrogate the decision-making process, so we can be proactive about the kind of future we want this technology to give us.

Because it is coming, without a doubt.


"Foresee"? "Coming"? The example you mentioned is already with us. Insurance companies and lenders already use computerized models with problematic outcomes, which are completely opaque: https://scholarworks.law.ubalt.edu/cgi/viewcontent.cgi?artic...

"AI" is just speeding up processes that started in the 1970s.


> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

And this is why mathematical statistics and probability theory should be taught in middle school (maybe instead of some of the trigonometry and stereometry). Not only researchers, but any decision maker, and the general public too, need to understand what confidence intervals are and how the normal distribution works, on an intuitive level.
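For the kind of intuition being asked for here, a toy sketch with made-up numbers: a model that gets 20 of 1,000 test cases wrong does not simply "have a 2% error rate", and the uncertainty matters as soon as you compare it against, say, a 2.5% human baseline.

    import math

    errors, n = 20, 1000
    p_hat = errors / n                                # point estimate: 2.0%
    se = math.sqrt(p_hat * (1 - p_hat) / n)           # standard error (normal approximation)
    low, high = p_hat - 1.96 * se, p_hat + 1.96 * se  # 95% confidence interval

    print(f"error rate: {p_hat:.1%}, 95% CI: [{low:.1%}, {high:.1%}]")
    # roughly [1.1%, 2.9%]: this test alone can't tell you whether the model
    # actually beats a 2.5% human error rate.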


Won't help. Most practicing scientists who have been trained in statistical methods don't understand them, and just publish whatever garbage they squeeze out of SAS and then add a layer of misinterpretation on top for good measure. Statistics is fundamentally beyond the comprehension of 98% of people.


Not if you gamify it and educate them when they're about 12 years old. I don't have any sources, just a very strong gut feeling.


So your boss is an uneducated moron who doesn't understand current AI. Did you try to explain it to the boss? You should have shamed the company online. It's always a good thing to know which companies are moronic.


And you should've found him a convenient and well-paid job. Publicly shaming the company online, besides being ineffective, is a good way to get fired.


GP was already fired for accidentally triggering disclosure of the company's incompetence. Warning people not to work for a company is one of the best ways to justifiably kill it by starving it of talent.


> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

It's not clear to me how this is different from the current standard.


> it was correct for the most part. And it got some things wrong. I love dogs.

Seems an order of magnitude easier and more accurate to just track how long you linger on each post while scrolling down a newsfeed and which ones you engage with.

MacBooks, Chrome, etc. already warn you when your webcam is on anyway, so if social sites started adding webcam tracking while you're only viewing the newsfeed, I can't see it lasting for long.


The fact that I lingered longer on a post doesn't indicate the polarity of my interest, just the magnitude at best.


Knowing that you linger longer on that topic is good enough to know that the topic is likely to get further engagement from you, though: people like to spend time arguing about things that annoy them as well, so you don't only want to show things that make people smile.

If someone interacts with a post in any way, you can also run sentiment analysis on the text they write to figure out their thoughts on the topic.

I just don't see facial analysis being that much more powerful than the above. You're already giving Facebook and Twitter a ton of information about what you enjoy via likes, shares, and follows, without having to get into the subtleties of what it means if you half-smile or frown when you see a post.
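As a rough illustration of how cheap those signals are compared to pointing a camera at someone's face, here's a toy sketch (hypothetical threshold, made-up word lists, not any platform's actual method): dwell time gives the magnitude of interest, and even crude sentiment analysis of what they write gives the polarity.

    POSITIVE = {"love", "great", "awesome", "agree"}
    NEGATIVE = {"hate", "awful", "wrong", "annoying"}

    def engagement_signal(dwell_seconds: float, comment_text: str) -> dict:
        words = comment_text.lower().split()
        sentiment = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return {
            "likely_interested": dwell_seconds > 5.0,  # lingered, so the topic holds attention
            "sentiment": sentiment,                    # > 0 positive, < 0 negative
        }

    print(engagement_signal(12.3, "I hate how wrong this take is"))
    # {'likely_interested': True, 'sentiment': -2} -- engaged, but annoyed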


Seriously, why is the discussion anything but "delete all tech immediately and bomb Silicon Valley"?

Too late, the ML just flagged me as being anti-ML. I'll probably be assassinated soon.


The problem is not the tech or the ML. It is the people using and creating it.



