You're right! I package publicly available pre-trained models for natural language queries (https://github.com/paulfitz/mlsql/), and the models are bad at saying they don't know. I'm optimistic, though. There's significant year-on-year improvement (driven by real progress in NLP), and the training datasets are getting more interesting. There are now conversational datasets (e.g. https://yale-lily.github.io/cosql) where the model is trained to ask follow-up questions, with an explicit goal of "system responses to clarify ambiguous questions, verify returned results, and notify users of unanswerable or unrelated questions". That could be a big win.
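
To make the "saying they don't know" part concrete, here's a rough sketch of the abstention behavior I'd want, assuming a hypothetical model object with a translate() method that returns a confidence score (these names are made up, not mlsql's actual API):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Prediction:
        sql: str
        confidence: float  # assumed: model exposes a score in [0, 1]

    def answer(question: str, model, threshold: float = 0.8) -> Optional[str]:
        # `model.translate` is a made-up method, standing in for
        # whatever text-to-SQL model is loaded.
        pred = model.translate(question)
        if pred.confidence < threshold:
            # The behavior the CoSQL goal describes: flag unanswerable
            # or ambiguous questions instead of guessing, so the caller
            # can ask a clarifying follow-up.
            return None
        return pred.sql

A UI layer could then treat None as a cue to ask a follow-up question, which is exactly the kind of interaction CoSQL trains for.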



Great job compiling the datasets! I like the goal of the Yale group's research: if the system can only answer 70% of questions, but it's pretty sure it knows the answer when it claims it does, that can still be very useful in many domains. And the conversational approach is a good one for clarifying ambiguity. Will continue to monitor their progress. Thanks!



