
So you train a bot to show 'human-like' patterns.

The problem with relying on ML on one side is that it can be gamed by ML on the other.




If it were easy to train a bot to show human-like patterns, we'd have driverless cars by now.


Hmm... I think training a program to fool another program is a much, much, much, much easier problem than driving a car.

As an example, you only have to learn to emulate a single 'human-like' pattern to fool the ML, and your target function is a simple binary: fool the ML or don't. Once you've fooled it, you're done.

Whereas to drive a car you need to handle the near-infinite variation in inputs you might encounter on the road, with complex, competing objectives, e.g. safety versus speed.
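
To make that asymmetry concrete, here's a minimal sketch of the 'fool or not fool' objective (PyTorch; the detector is a hypothetical stand-in with random weights, not any real system, and the feature layout is made up for illustration):

    # Attacker trains a "behavior generator" against a FIXED bot detector.
    # The detector here is a hypothetical stand-in; in practice it would be
    # the defender's model, or a local surrogate of it.
    import torch
    import torch.nn as nn

    FEATURES = 16  # e.g. click timings, dwell times, mouse-path stats

    # Defender's model: emits P(human) for a feature vector. Frozen.
    detector = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(),
                             nn.Linear(32, 1), nn.Sigmoid())
    for p in detector.parameters():
        p.requires_grad_(False)

    # Attacker's model: maps random noise to a synthetic behavior vector.
    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(),
                              nn.Linear(32, FEATURES))
    opt = torch.optim.Adam(generator.parameters(), lr=1e-2)

    for step in range(500):
        noise = torch.randn(64, 8)
        fake_behavior = generator(noise)
        p_human = detector(fake_behavior)
        # Binary objective: maximize P(human), i.e. minimize -log P(human).
        loss = -torch.log(p_human + 1e-8).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        if p_human.mean().item() > 0.99:  # "fooled" -- the attacker is done
            break

Note the attacker never has to model anything like the driving problem's complexity; it only needs feedback (gradients here, but black-box trial and error also works) on a single scalar score, and it stops the moment that score says "human".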



