
Well-designed ML techniques are always correct 99% of the time. It's the 1% that is the problem.



99% of the time it works all of the time.

Like humans.

How do we deal with problematic humans?

Retraining or replacement.


No, we deal with 99% accuracy in humans by designing all our human-based algorithms to be resilient against mistakes (and mistakes in the end result still happen very frequently). This is completely different from the way we have designed our computer systems, because it's way easier to do this with flexible agents like humans than with inflexible agents like computers.


So, the mistakes algorithms make are different to the mistakes humans make?

That’s... equivocation.

Both humans and ML algorithms are flexible. That is the point of “learning”.

Adaptation.


But the problems where we apply human labour are vastly different from the ones where we apply machine labour. In (most) tasks where we apply human labour, a few errors are tolerated.


This seems irrelevant.

Neither humans nor ML make zero errors.

Ceteris paribus, if an ML algorithm makes fewer errors at a task with low error tolerance, you would use the algorithm instead of the human, no?


I would expect that might depend on what sort of errors each make, no?


That is not practical when you expect the system to be correct 100% of the time and make decisions based on that. There are many situations where this is critical. You would never want your car's safety system to be correct only 99% of the time.


If you expect 100% correctness, you are not a very practical man.

Perfect is the enemy of good enough.


And good enough is the enemy of perfection.

I suppose it depends on your end goals.


Over-achievement of my goals is not my goal.


With humans, we can ask what went wrong and fix the problem.

With black box algorithms, we throw in some new training data and just hope it's enough.


> With humans, we can ask what went wrong and just hope we've fixed the problem.

> With black box algorithms, we throw in some new training data and just hope it's enough.

A small wording change and they're equivalent again.

To some extent human beings are also a black box - with some very peculiar failure conditions and side-channel weaknesses.


No, we don't do anything about the humans who are 99% correct. 99% is good enough in most real-world cases.


What's the "promote" option in this analogy?


ML took your job.



