
Systems Smart Enough to Know When They're Not Smart Enough - mpweiher
https://bigmedium.com/ideas/systems-smart-enough-to-know-theyre-not-smart-enough.html
======
darkerside
Interesting that many of these wrong answers seem to be a result of begging
the question in the traditional sense: when you ask "are Jews evil?", you
are only going to find answers from sources that are willing to pose the
question in the first place.

------
johnhenry
I'd like to propose HTTP response code 250: https://en.wikipedia.org/wiki/250_(number)

------
liberte82
Well, I certainly wouldn't include humans in this category.

~~~
mdinstuhl
Dunning-Kruger syndrome is, like, just your opinion, man.

------
dilemma
So you want a system that is not only knowledgeable but self-reflective.
That's far beyond what machine learning can do, and it's why AI is a pipe dream.

~~~
threepipeproblm
While I also tend to be an AI skeptic, did you actually read this? For
example, the author suggests using confidence data that are already
available, and discusses techniques for getting these across to the user.

He then suggests a measure of "controversialness" that also does _not_ require
pipe-dream AI.

I thought this was a very insightful piece, cataloging a slew of potentially
reasonable approaches aimed at a growing problem.

------
rafiki6
Do humans fall under those "systems"...I definitely know some people who need
to.

Disclaimer: I haven't read the article so this comment is purely internet
sarcasm

------
jtraffic
I skimmed. My synopsis: "ML models should report confidence levels."

~~~
pimmen
Or, like Watson when it played Jeopardy, not respond at all if the confidence
is under a certain threshold.
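That abstain-below-threshold behavior is easy to sketch. Below is a minimal, hypothetical illustration (the function and names are mine, not Watson's actual logic): given class probabilities from some model, return the top answer only when its confidence clears a cutoff, and stay silent otherwise.

```python
# Minimal sketch of abstaining below a confidence threshold.
# All names here are hypothetical illustrations, not Watson's real API.

def answer_or_abstain(probabilities, labels, threshold=0.8):
    """Return (label, confidence) for the top answer, or None to abstain."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    confidence = probabilities[best]
    if confidence < threshold:
        return None  # not confident enough: say nothing rather than guess
    return labels[best], confidence

# The model is only 55% sure here, so it stays silent.
print(answer_or_abstain([0.55, 0.30, 0.15], ["A", "B", "C"]))  # None
# Here it is 92% sure, so it answers.
print(answer_or_abstain([0.92, 0.05, 0.03], ["A", "B", "C"]))  # ('A', 0.92)
```

The threshold itself is the interesting design knob: set it from the cost of a wrong answer versus the cost of silence, which is exactly the trade-off Jeopardy wagering makes explicit.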

