
As a researcher in AI, I accept that a lot of currently unsolved challenges are thought of as AI. But lately, I feel that AI has become the problem description for all currently unsolved problems. And then some...

This surprises me, because most AI technologies have been around for a long time. With blockchain a couple of years ago, I could at least rationalize the excitement as people throwing a new technology at old problems. But with AI, I am continually surprised by the reasons people give for why 'an AI' would be able to solve them.




As a researcher in AI, what are you really spending most of your time on? What problems are you solving?


I am currently interested in infusing reinforcement learners with symbolic knowledge, with safety constraints as a special case.

I hope this helps in cases where learners could come up with better solutions if it were not for pathological failures that we already know how to avoid.
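
One common way to inject symbolic safety knowledge is to mask unsafe actions before the learner's value estimates are consulted. Here is a minimal sketch, assuming a discrete action space and a toy grid world; the hazard table and names (HAZARDS, is_safe, constrained_greedy) are purely illustrative, not a real system:

    import numpy as np

    # Symbolic safety knowledge, written as explicit rules rather than learned:
    # grid cells (row, col) the agent must never enter.
    HAZARDS = {(2, 3), (4, 1)}
    MOVES = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}  # right, left, down, up

    def is_safe(state, action):
        """Check the proposed move against the symbolic constraint."""
        nxt = (state[0] + MOVES[action][0], state[1] + MOVES[action][1])
        return nxt not in HAZARDS

    def constrained_greedy(q_values, state):
        """Pick the highest-value action among those the rules allow."""
        allowed = [a for a in MOVES if is_safe(state, a)]
        if not allowed:            # if every action is ruled out, fall back to all of them
            allowed = list(MOVES)
        return max(allowed, key=lambda a: q_values[a])

    q = np.array([0.9, 0.1, 0.7, 0.2])          # learned Q-values for the current state
    print(constrained_greedy(q, state=(2, 2)))  # prints 2: action 0 would step into hazard (2, 3)

The learner is still free to optimize within the safe set; the symbolic layer only rules out the failures we can state up front.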

Also, I try to keep expectations around AI reasonable.


I'm not OP, but I also do research in ML. My research focus is identifying and preventing critical system failures so people don't die. Most of my time is spent developing new techniques and then testing them against data we collect from the field.


Where could one read more about your (or similar) research?

This kind of thing is quite big at the moment in mobile work machinery circles: everyone's looking for a certifiably safe solution for enabling mixed-fleet operation (i.e. humans, human-controlled machines, and autonomous machines all working in the same area). Current safety certifications don't view the nondeterminism of ML models too kindly.



