Hacker News

One thing I don't like about Echo, IFTTT, and others like them is that everything has to go through "the cloud." If I'm sitting at home and want to dim my lights, why do I need to send my voice to Amazon's servers, have them call out to IFTTT, which then calls back into my home? Especially since I have a data cap. All of that could just as easily go through a locally installed system, and you don't need expensive hardware to do it.
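To make the "no cloud round trip" point concrete, here's a minimal sketch of local-only control, assuming a hypothetical light that accepts a JSON command over the LAN. The device ID, port, and message shape are all my own invented examples, not any real product's API:

```python
import json

def dim_command(device_id: str, level: int) -> bytes:
    """Encode a dim command for a hypothetical LAN device; level is 0-100."""
    if not 0 <= level <= 100:
        raise ValueError("level must be 0-100")
    return json.dumps({"device": device_id, "cmd": "dim", "level": level}).encode()

# Sending it needs only a socket on the local network -- no voice upload,
# no third-party callback, no metered data:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(dim_command("living_room", 30), ("192.168.1.42", 9999))
print(dim_command("living_room", 30))
```

The round trip stays entirely inside the home network, so latency and data usage are bounded by the LAN, not the ISP.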

That's what's driven me to write my own home automation code. (Though my code's terrible, not HN-worthy.) Anything that insists on a cloud API is out in my book. A local server is potentially worth serious money, especially if it's easy to use.

I've been toying with something similar (mostly on paper so far) involving custom GStreamer elements for voice recognition/synthesis, a Wit.ai-like intent resolver for command and control, and ChatScript-like pattern matching for conversational dialogue. My goal is to put it all behind a WebRTC and SIP gateway and have a low-latency personal assistant that I can access from virtually any device (even an old landline telephone) and that runs on my own private server. That's the dream, anyway. I'm stuck on the voice synthesizer at the moment...
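For the middle layer, a Wit.ai-like intent resolver can be surprisingly small if you only need command and control. Here's a toy sketch: regex patterns map an utterance to an intent name plus extracted slots. The intent names and patterns are hypothetical examples of mine, not from Wit.ai or any real system:

```python
import re

# Each entry: (compiled pattern with named slot groups, intent name).
PATTERNS = [
    (re.compile(r"(?:dim|set) (?:the )?(?P<room>\w+) lights? to (?P<level>\d+)"),
     "set_light_level"),
    (re.compile(r"turn (?P<state>on|off) (?:the )?(?P<room>\w+) lights?"),
     "switch_lights"),
]

def resolve(utterance: str):
    """Return (intent, slots) for the first matching pattern, else (None, {})."""
    text = utterance.lower().strip()
    for pattern, intent in PATTERNS:
        m = pattern.search(text)
        if m:
            return intent, m.groupdict()
    return None, {}

print(resolve("Dim the bedroom lights to 30"))
```

Anything the command grammar doesn't recognize can then fall through to the ChatScript-style conversational layer.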

I started with device control. Once I get device control where I want it on my home server, I'll expand out to being able to reach it from elsewhere, and then add voice.

Would that really be a significant source of data usage?

Depending on how often I use it and the audio format, it could be. Maybe not for someone with a 300 GB/mo cap, but I'm stuck with 15 GB, so every megabyte counts. Either way, my point is that there's no reason for it to use any data at all to control devices on my own local network.
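Rough arithmetic backs this up. The numbers below are my own assumptions (uncompressed 16 kHz / 16-bit mono PCM, 5-second utterances, 50 commands a day), not figures from the thread:

```python
# Back-of-the-envelope data usage for cloud-routed voice commands.
SAMPLE_RATE_HZ = 16_000       # assumed speech-recognition sample rate
BYTES_PER_SAMPLE = 2          # 16-bit mono PCM
UTTERANCE_SECONDS = 5         # assumed length of one spoken command
COMMANDS_PER_DAY = 50         # assumed heavy-ish usage
DAYS_PER_MONTH = 30

bytes_per_utterance = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * UTTERANCE_SECONDS
mb_per_month = bytes_per_utterance * COMMANDS_PER_DAY * DAYS_PER_MONTH / 1_000_000

print(f"{bytes_per_utterance / 1_000:.0f} kB per command")     # 160 kB
print(f"{mb_per_month:.0f} MB per month, uncompressed")        # 240 MB
print(f"{100 * mb_per_month / 15_000:.1f}% of a 15 GB cap")    # 1.6%
```

Compression shrinks this a lot, but even a few hundred megabytes is noticeable on a 15 GB cap, and it's exactly zero if the audio never leaves the LAN.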

