
The State of Voice Assistants: Where's the AI? - neuronDen
https://rajat503.github.io/blog/post/virtual-assistants/
======
streetcat1
The issues have been there since the '80s (NLU). The only advances made in this
field are speech recognition (which has reached human level, and thus helps
reduce error rates at the input to the NLU component), and maybe deep RL for
finding the optimal conversational response; however, the machine still does
not have a semantic understanding of the user's utterance, and likely never
will. I.e., if you solve the NLU issues, you are likely very close to AGI.

~~~
neuronDen
You don't need NLU to be fully solved for today's assistants, which work on a
narrow range of tasks. For example, modern deep learning techniques can easily
mark "Send a message to <person> - What are you up to?" and "Check with
<person> what he is up to" as similar. Assistants today respond correctly to
the first input but not the second.
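(To make the idea concrete, here is a toy sketch of utterance similarity.
It uses a plain bag-of-words cosine score in place of the learned sentence
embeddings a real deep-learning system would produce, and the example
sentences are illustrative, not from any actual assistant.)

```python
from collections import Counter
import math

def embed(sentence):
    # Toy "embedding": bag-of-words counts. A real assistant would use
    # a learned sentence encoder, which also catches paraphrases that
    # share no surface words.
    return Counter(sentence.lower().split())

def cosine(a, b):
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

s1 = "send a message to alice what are you up to"
s2 = "check with alice what she is up to"
s3 = "turn off the living room lights"

# The two paraphrased commands score much closer to each other
# than either does to an unrelated command.
print(cosine(embed(s1), embed(s2)))
print(cosine(embed(s1), embed(s3)))
```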

~~~
streetcat1
So I am talking about a spoken dialogue system, i.e., a system that can
converse (at least 15-20 turns between the user and the system). For example,
a system that can book a multi-leg flight.

~~~
neuronDen
Semantic Machines, acquired by Microsoft, has the capability to do so.
[http://www.semanticmachines.com/](http://www.semanticmachines.com/)

~~~
streetcat1
As I said. This is a problem from the 80s. Why would it would be solved now?
and by a specific startup? what changed?. I think that in order to solve this
issue we would need some breakthrough in symbolic AI and not just statistical
AI. However, most of the research today is done on DNN and their derivative.

------
neuronDen
Reposting this post from 2018. Most of the issues remain unsolved in 2019.
Looking forward to comments from the HN community.

