
Thanks to the sacrifice of these clueless users (or at least a good part of them), the era of offline assistants is near, judging by what Google has shown recently.



> Thanks to the sacrifice of these clueless users (or at least a good part of them), the era of offline assistants is near, judging by what Google has shown recently.

I think I'm being clueless, but I can't figure out what this sentence means. Is there a typo in it?


I think gvand means that these users are training the AI models to eventually provide a similar level of service offline. The "cluelessness" is just a value judgment of the users.


I was referring, in a convoluted way, to the fact that all the collected data has been/will be used to train the models that will allow offline voice recognition (like the one Google showed at I/O last week).


I might be mistaken, but the reason we don't see offline recognition (amongst other things) is hardware limitations, not the lack of training data. The small onboard chip doesn't have that much compute power, so they offload to more powerful Amazon/Google servers that can run the inference.
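To make the trade-off concrete, here is a minimal sketch (in Python) of the two approaches: shipping audio to a cloud endpoint for recognition versus running a small quantized model locally with TensorFlow Lite. The endpoint URL, response schema, and model file are hypothetical placeholders, not any real service.

    # Sketch of the trade-off discussed above: offloading recognition to
    # a server vs. running a quantized model on-device. The URL, JSON
    # schema, and model file are hypothetical placeholders.
    import numpy as np
    import requests
    import tensorflow as tf

    def recognize_in_cloud(audio_bytes: bytes) -> str:
        # Ship raw audio to a powerful backend that runs the big model.
        # Requires connectivity; the provider also gets to keep the audio.
        resp = requests.post("https://asr.example.com/v1/recognize",
                             data=audio_bytes,
                             headers={"Content-Type": "audio/wav"})
        resp.raise_for_status()
        return resp.json()["transcript"]

    def recognize_on_device(audio: np.ndarray) -> np.ndarray:
        # Run a small quantized model locally with TensorFlow Lite,
        # trading model size/accuracy for privacy and offline operation.
        interpreter = tf.lite.Interpreter(model_path="asr_quantized.tflite")
        interpreter.allocate_tensors()
        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()
        interpreter.set_tensor(input_details[0]["index"],
                               audio.astype(input_details[0]["dtype"]))
        interpreter.invoke()
        return interpreter.get_tensor(output_details[0]["index"])

The second path only became plausible once models could be shrunk (quantized, pruned) enough to fit the device's memory and compute budget, which is why the hardware-limitation point above matters.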


> amongst other things

I think that this is an important point. Obviously there's more computing power available in Apple/Google/whoever's data centres than on my device, and I'm sure that is, or at least was, a concern; but I also don't believe that they are indifferent to the utility of sitting on such a huge volume of user-submitted, real-world data.





