Ask HN: Are there any open source alternatives to Alexa, Siri, etc? - nocoder
======
hiisukun
I must confess I haven't used it yet, but snips.ai [1] is on my list of
potential Raspberry Pi projects and claims it will be open sourced soon. On
their FAQ page [2] you can find the reason I have it on my list:

"Snips, on the other hand, runs completely on your device, with nothing being
sent to the cloud. This means it guarantees your Privacy, works offline and
doesn't have variable costs!"

Unfortunately this leads me to wonder whether the service will remain free,
but for the moment it is "free for makers and for building prototypes".
Commercial costs apply.

[1] [https://snips.ai/](https://snips.ai/)

[2] [https://github.com/snipsco/snips-platform-documentation/wiki/FAQ](https://github.com/snipsco/snips-platform-documentation/wiki/FAQ)

~~~
nocoder
>Snips, on the other hand, runs completely on your device, with nothing being
sent to the cloud. This means it guarantees your Privacy, works offline and
doesn't have variable costs!

Does this mean the language model will not learn as it is used more? I feel
the main reason for sending and storing data in the cloud is to train the
algorithm to get better at recognizing and answering questions.

~~~
Eridrus
Cloud or not, these products don't magically get better because of data; they
have teams of engineers using your data to improve their systems.

------
unignorant
Our group at Stanford is building an open source conversational agent for data
science:

Github: [https://github.com/Ejhfast/iris-agent](https://github.com/Ejhfast/iris-agent)

ArXiv: [https://arxiv.org/abs/1707.05015](https://arxiv.org/abs/1707.05015)

~~~
unignorant
Oddly enough, just got some coverage today:
[https://www.newscientist.com/article/2142908-siri-rival-
can-...](https://www.newscientist.com/article/2142908-siri-rival-can-
understand-the-messy-nature-of-our-conversations/)

------
ubik_
Mycroft [https://github.com/MycroftAI](https://github.com/MycroftAI)

Jarvis
[https://github.com/alexylem/jarvis](https://github.com/alexylem/jarvis)

------
throwaway2016a
There is a lot to these services:

1. Waking

2. Voice to text

3. Natural language understanding

4. Skills (search, weather, reminders, etc.)

None of these is a trivial problem, but #4 is notable because it is often
cited as one of the ways Google Assistant beats Alexa. It boils down to: if a
human simply asks for what they want, what are the odds the assistant will
have an answer?

That to me is the part least likely to have a good open source alternative.
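To make the coverage point concrete, here is a minimal sketch of the skill-dispatch step (#4). The skill names, intents, and slot fields are hypothetical, not taken from any real assistant; the fallback branch is the "odds the assistant has an answer" problem described above.

```python
# Toy sketch of step 4 (skills): route a parsed intent to a handler.
# All skill/intent names here are made up for illustration.

def weather_skill(slots):
    return f"Forecast for {slots.get('city', 'your area')}: sunny."

def reminder_skill(slots):
    return f"Reminder set: {slots.get('text', '(empty)')}"

SKILLS = {
    "get_weather": weather_skill,
    "set_reminder": reminder_skill,
}

def dispatch(intent, slots):
    """Return a skill's answer, or a fallback when no skill matches --
    the coverage gap the comment above describes."""
    handler = SKILLS.get(intent)
    if handler is None:
        return "Sorry, I don't know how to help with that."
    return handler(slots)

print(dispatch("get_weather", {"city": "Berlin"}))
print(dispatch("order_pizza", {}))  # no matching skill -> fallback
```

The more intents the catalog covers, the less often users hit the fallback; the commercial assistants differ mostly in how large and well-triggered that catalog is.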

------
tinus_hn
Never used it, but the new voice recognition in Firefox uses an open source
speech recognition toolkit called Kaldi:

[http://kaldi-asr.org/](http://kaldi-asr.org/)

------
Jedd
Jasper[1] is one such project that popped up on HN a while back. I spent a wet
weekend getting it going on a Raspberry Pi -- it was quite the effort to get
all the moving pieces working together.

At the time you had the option of using AWS or Google to handle the voice, or
possibly (if you had the time and knowledge) training it to use a local
service -- this was gently discouraged by the documentation (it was
referenced, but what it involved was not well explained).

I believe you can now use Watson to offload the voice to text, too.

But in all those cases you're sending data off-site, which may be a concern.
And each of those services has some usage constraints that _should_ be enough
to cover household use, but I'm not sure what happens when you start to hit
those limits.

[1] [https://jasperproject.github.io/](https://jasperproject.github.io/)

~~~
digikata
Inside the Jasper documentation there is some discussion of configuring the
speech-to-text engine. They list five engines, two of which do not rely on
off-site services.

[http://jasperproject.github.io/documentation/configuration/](http://jasperproject.github.io/documentation/configuration/)

I don't know whether other portions of the code use off-site data processing.
I wish more projects would provide a system block diagram of their software.

------
IshKebab
As far as I know there are no open source solutions for wake word detection.
Most 'open source Alexa' projects require you to press a button to make it
listen.

There is a free offline wake word detector here: [https://github.com/Kitt-AI/snowboy](https://github.com/Kitt-AI/snowboy)
... ah, it was closed source when I last looked! Looks like you still need to
use their website for training, too.
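For intuition, wake-word detectors generally score short audio frames against a keyword model and fire when the score crosses a threshold. The sketch below is a toy illustration of that trigger logic only, with fabricated scores standing in for what a real acoustic model would emit; the consecutive-frame requirement is a common smoothing trick to cut false triggers.

```python
# Toy threshold-based wake word trigger. Frame scores are made-up
# stand-ins for real acoustic-model output; no audio is processed.

def detect_wake_word(frame_scores, threshold=0.8, min_frames=2):
    """Fire only after `min_frames` consecutive frames score at or
    above `threshold`; return the index of the frame that completed
    the run, or None if the keyword never appears."""
    run = 0
    for i, score in enumerate(frame_scores):
        run = run + 1 if score >= threshold else 0
        if run >= min_frames:
            return i
    return None

# A single noisy spike (frame 1) is ignored; the real run at
# frames 3-4 triggers detection.
scores = [0.1, 0.9, 0.2, 0.85, 0.93, 0.4]
print(detect_wake_word(scores))
```

The hard part in practice is not this trigger logic but producing good per-frame scores cheaply enough to run continuously on-device, which is what projects like Snowboy sell.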

~~~
jmg_
FWIW, CMUSphinx supports wake words in the pocketsphinx project:
[https://cmusphinx.github.io/wiki/faq/#q-how-to-implement-hot-word-listening](https://cmusphinx.github.io/wiki/faq/#q-how-to-implement-hot-word-listening)

~~~
matthewmacleod
Having used this one too, I can say with fair confidence that it is not good,
unfortunately :/

------
sabatesduran
Mycroft is open source and works on a Raspberry Pi:
[https://mycroft.ai/](https://mycroft.ai/)

------
Davidbrcz
Stéphanie [https://slapbot.github.io/](https://slapbot.github.io/)

