Hacker News

I have a basic understanding of machine learning and absolutely no understanding of TensorFlow.

Can someone help me understand what is going on here?

Are we just doing prediction for a model on a mobile device instead of in the cloud? If so, for what kinds of scenarios is this useful?






Sure! They have a neural net that they pre-trained for image recognition as a demo. They ran it on the mobile device both times -- no cloud involved -- but the one on the left is running on the CPU, while the one on the right is running on a DSP located on the same chip. The DSP is specialized for workloads that have very regular control flow and involve a lot of fixed-point arithmetic. Running the neural network is such a workload, so they get impressive speed and power improvements by using the DSP instead of the CPU.
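To make the fixed-point point concrete, here's a rough sketch in plain NumPy (not the actual TensorFlow or DSP code) of a simplified symmetric 8-bit quantization scheme: the bulk of the work, the multiply-accumulates, happens entirely in integer arithmetic, which is exactly what the DSP is good at.

```python
import numpy as np

def quantize(x, num_bits=8):
    # Symmetric linear quantization: floats -> int8 plus one float scale factor.
    qmax = 2 ** (num_bits - 1) - 1            # 127 for 8 bits
    scale = float(np.max(np.abs(x))) / qmax
    q = np.round(x / scale).astype(np.int8)
    return q, scale

# A toy "layer": float weights and a float activation vector.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
a = rng.normal(size=8).astype(np.float32)

qw, sw = quantize(w)
qa, sa = quantize(a)

# The heavy lifting (multiply-accumulate) is pure integer arithmetic...
acc = qw.astype(np.int32) @ qa.astype(np.int32)

# ...and only the final rescale back to float involves the scales.
y_quant = acc * (sw * sa)
y_float = w @ a  # reference: full float computation
```

For a small layer like this, `y_quant` tracks `y_float` closely despite storing weights in a quarter of the memory, which is the trade-off that makes 8-bit DSP inference attractive.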

How does training the brain compare to running a trained brain?

Can the pre-trained brain (the one in the phone) flip to training mode? Can you teach it something and upload that new training result to the original?

Or for things it doesn't recognise, do you need to add the images and classification to the training data and create a 'new brain' and download it to the phone?

Is there one super organism (cloud-based learning) that gives birth to millions of mini-minds, each mini-mind asking its parent to help it with things it doesn't understand? In 20 years' time what will this say about consciousness? Where would it live? Is this a new way to think about minds, those that are distributed across many physical devices?

And the precision of the hardware changing thought processes in subtle ways is very interesting. Upgrading a neural net to a new hardware platform would change how it works, how it thinks and makes decisions.


> How does training the brain compare to running a trained brain?

Training uses harder operations, and you need to do a lot more of them. It's far better suited to having a single massive training system that then sends out the trained model just for inference.

Another thing that can be done is to train a large neural net then figure out which bits you can cut out without sacrificing much accuracy. The newer, smaller net is then faster to run and more likely to actually fit neatly into the RAM on your phone.
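A minimal NumPy sketch of one such trick, magnitude pruning (the names here are made up for illustration; real pipelines usually retrain after pruning to recover accuracy, and the RAM saving only materializes if the zeroed weights are stored in a sparse or compressed format):

```python
import numpy as np

def prune_by_magnitude(w, keep_fraction=0.3):
    # Zero out the smallest-magnitude weights, keeping only the
    # top `keep_fraction` of them by absolute value.
    k = int(w.size * keep_fraction)
    threshold = np.sort(np.abs(w), axis=None)[-k]
    return np.where(np.abs(w) >= threshold, w, 0.0)

rng = np.random.default_rng(1)
w = rng.normal(size=(100, 100))      # stand-in for a trained weight matrix
w_pruned = prune_by_magnitude(w, keep_fraction=0.3)

sparsity = np.mean(w_pruned == 0)    # fraction of weights removed, ~70%
```

The intuition is that the small-magnitude weights contribute least to each layer's output, so dropping them tends to cost the least accuracy per weight removed.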

> Can the pre-trained brain (the one in the phone) flip to training mode? Can you teach it something and upload that new training result to the original?

Technically you probably could, but practically the answer is no for the types of nets used in this kind of thing. You'd want to be training the net on millions of images, and even if it were as fast as the inference on the phones that'd still take way too long.

[edit - interestingly, this is not only technically possible but pretty much what is often done, just on more powerful machines. You can start with a pre-trained network or model and then "fine tune" it with your own data: http://cs231n.github.io/transfer-learning/]
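A toy NumPy sketch of that fine-tuning idea (everything here is synthetic and purely illustrative): a random matrix stands in for the frozen pre-trained layers, and only a small new "head" is trained on the new task's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen, pre-trained layers: this matrix is never updated.
w_frozen = rng.normal(size=(16, 4))

def features(x):
    # The "transferred" representation: frozen ReLU features.
    return np.maximum(x @ w_frozen, 0.0)

# Small task-specific dataset (synthetic, separable in feature space).
x = rng.normal(size=(200, 16))
true_head = rng.normal(size=4)
logits_true = features(x) @ true_head
y = (logits_true > np.median(logits_true)).astype(float)

# "Fine-tuning" here = training only the new head, via logistic regression.
w_head, b, lr = np.zeros(4), 0.0, 0.05
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(features(x) @ w_head + b)))
    w_head -= lr * features(x).T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(features(x) @ w_head + b)))
acc = np.mean((p > 0.5) == y)   # training accuracy of the new head
```

Because only the small head is trained, this needs far less data and compute than training the whole network from scratch, which is what makes fine-tuning practical.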

> Or for things it doesn't recognise, do you need to add the images and classification to the training data and create a 'new brain' and download it to the phone?

This is generally the approach, yes. It has other advantages too: the performance can be checked and compared once, then the same net re-used lots of times.

> Is there one super organism (cloud-based learning) that gives birth to millions of mini-minds, each mini-mind asking its parent to help it with things it doesn't understand? In 20 years' time what will this say about consciousness? Where would it live? Is this a new way to think about minds, those that are distributed across many physical devices?

In many ways, it sounds similar to delegating work to more junior, less well-trained staff.


Lower-power and real-time machine learning, and it could also be used for stuff like computational photography. Doing computational photography through the cloud while you're taking a picture would be pretty crazy.

It can also be useful for basic "AI assistants" that process the data locally, so you get some extra privacy. For instance, you could get better image search on the device, without ever putting the photos in the cloud.

I also don't think any of those AI assistants that Google and Facebook are pushing with their messengers actually need to exist in the cloud. But of course Google and Facebook will continue to prefer doing it in the cloud, because they actually want that data for themselves.

I think Huawei is also pushing for "smart notification management" to save battery life using such AI, although so far Huawei's solution has been pretty dumb. But I can see how this could improve in the future.

There should be at least a few more use cases where this is useful, and I think we'll see more smartphone makers take advantage of this.


> It can also be useful for basic "AI assistants" that process the data locally, so you get some extra privacy. For instance, you could get better image search on the device, without ever putting the photos in the cloud.

Until we are surrounded by recording devices with autoencoder-based speaker fingerprinting and audio transcription, combined with enough NLP that if you say "Hello, I'm Tom Walker", it'll remember that and fill it in in the transcriptions. Instead of vague videos and maybe some confusing sounds that the police can decipher if there's enough reason to put in the effort and personnel, we'll have direct audio transcriptions of everything we say and do, everywhere, available to a number of companies.

And the worst part of it is: this is useful. For security, for remembering things, for an automated secretary, for ... People will want this and the features it can bring, so it'll happen, and privacy will be eroded until it's entirely gone.


Check out this article, it might help: https://www.oreilly.com/learning/hello-tensorflow

On-device inference has two important properties: lower latency and lower power. A radio is expensive compared to a DSP (or even a CPU).

Image recognition at high FPS is tough for cloud-based solutions, and it's expected to work over a poor internet connection as well.


