Google Coral Edge TPU (withgoogle.com)
136 points by kilovoltaire 46 days ago | 47 comments



It is a small turnoff that you need to use their cloud model ‘compiler’, but I still think I might get the USB dev device.

I am retiring in a couple of weeks from my job managing a machine learning team, and I intend to be a ‘gentleman scientist’ studying things of interest, without worrying about immediate practicality. Of most interest is local ML using tensorflow.js and devices like the Edge TPU, and also hybrid symbolic AI and deep neural net systems.

Anyway, good to see competition for edge devices.


> Upload your model

> It should take about one minute for compilation to complete.

...also, it should take about six months for Google to lose interest in this product, at which point the product you built around the Edge TPU is stuck without updates.


This kind of comment is getting really tired.

Can you show me statistically that Google is any more likely to discontinue something than any other startup? Or than Apple or Amazon?

A few people got upset about Google discontinuing Reader, but that was a looong time ago. And they've certainly discontinued other things too... but so does every other company.


https://en.wikipedia.org/wiki/List_of_Google_products#Discon...

They seem to discontinue a lot of products, including ones with fairly large user bases. It seems like a valid concern if you're going to try to build something on top of their stuff.


They seem to ship a lot of products too: https://en.wikipedia.org/wiki/List_of_Google_products

Disclaimer: I work at Google.


I think the reasoning is that it would be much nicer to have a compiler that runs locally so that you aren't dependent on Google to run the hardware even if they do EoL it.

It's a major issue for actual deployments of hardware in e.g. medical, education, or research settings, where a machine may end up supporting a piece of machinery for a couple of decades with no support, just some spare duplicate parts that can be swapped in.

I once used a fiber optic splicer at MIT that was 2.5 decades old and ran DOS. Nobody gave a crap that it was DOS. We just needed fibers spliced and a new shiny touch screen splicer would cost $30K.


I'm usually the first one to say this type of comment is tired, but that's because people say it about Google Cloud, where it's patently untrue.

This, however, seems to be a product with no SLA and no guarantees, outside of the cloud offerings etc. I kinda agree with OP; Google's track record is bad when it comes to this kind of product.

And yes, I think they are worse than other companies. Google isn't a hardware company, so they're worse than Apple in that regard. And Amazon would do it through AWS, which would also make this fall inside their core competency.


There are at least two websites [1][2] dedicated to listing past google products.

[1]: https://killedbygoogle.com

[2]: https://gcemetery.co


That's whataboutism and not an argument...


The HW is still there and if there is enough interest people can keep on hacking on it. There are many alternatives though. The spec for the dev board is interesting, I am curious about the ML accelerator coprocessor & cryptographic coprocessor. Interesting choice of operating system. If they were also releasing their new OS for these that would make this project infinitely more interesting to me.


Previous discussion: https://news.ycombinator.com/item?id=19130896

They mentioned previously that you had to compile your models on the cloud, and not locally on your computer. Not sure if they've changed this policy.


> They mentioned previously that you had to compile your models on the cloud

Wow, I was interested in this, right up until I read that. Talk about "weak sauce".

Sorry Google, but no, I will not use your proprietary compiler, especially when it's only available in the cloud, and become beholden to hardware which could instantly become a very expensive paperweight when you shut down the compiler service. No f'in way.

Release an open source compiler and I'm on-board. Otherwise, stuff it.


Doesn't look like it: https://coral.withgoogle.com/web-compiler/

The proprietary compiler thing sucks, but it is where a lot of the secret sauce is, unfortunately. But a binary wouldn't be too much to ask for...


Well then, it's DOA. Not sure why any company would agree to these terms.


Agree it might be a dealbreaker for some. But right now there is not that much competition in the TPU for embedded space. NVidia Jetson and Intel Nervana are the only ones shipping? So if the TPU allows some company to do something not possible / much better than with Jetson, they will probably be willing to play that game.


It's starting to heat up. K210s are supposed to be pretty cool if you can get your hands on one.


The baseboard and SOM module split looks very well done. The module includes CPU+RAM+EMMC in addition to the TPU, so a custom baseboard can be quite simple. A lot of audio input, ready for microphone arrays. Curious to see what role the M4F microcontroller will play, hopefully that is for some sleep/low-power usage where it can wake up the beefy CPU (and TPU).


I wish Google created a development version of the TPU for inference, so that it's possible to debug models locally and then send them to GCP for training.


Ya, I feel uneasy about this business model of creating hardware that you have to connect to a cloud service to actually use. Instead of vendor lock-in or proprietary drivers or whatever, it's a new form of locality-based lock-in.

Meaning, if I have an application that needs a big hot PCI-E card attached to a physical server I own somewhere, comparable to GPUs now, the TPU is not for me. But meanwhile, a bunch of NN research and frameworks on top of TensorFlow will treat these proprietary things as a first class citizen.


The lock-in, while bad, only affects the development of new models. These devices exist so that you can avoid the cloud for inference.


Well, you could get one of the development boards? Not sure what the use case you have in mind is, but these Edge TPUs are not for use inside of their GCP AI solutions, but rather on an 'edge' device. For anything else your 'local TPU' should be a hefty desktop with a couple of NVidia cards.


Google Colab provides free TPU/GPU accelerated runtimes https://colab.research.google.com/


TPU in Colab is not the same device as edge TPU.


The USB accelerator is designed for that - so you can target models for a (small-scale) TPU and then scale up on the cloud.

This would specifically let you make sure that the TensorFlow ops your algorithms use are supported on a TPU.

https://coral.withgoogle.com/products/accelerator/


USB has too little bandwidth to do real training. Modern GPUs use 16 PCIe lanes, which is ~126 Gbps for PCIe 3.0 and incomparable to USB.
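For rough numbers on that gap, here's a back-of-the-envelope sketch (assuming PCIe 3.0's 8 GT/s per lane with 128b/130b encoding, and USB 3.0's nominal 5 Gb/s signaling rate, ignoring protocol overhead on both sides):

```python
# Back-of-the-envelope host-interface bandwidth comparison.
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding.
pcie3_lane_gbps = 8 * 128 / 130          # ~7.88 Gb/s usable per lane
pcie3_x16_gbps = 16 * pcie3_lane_gbps    # a 16-lane GPU slot

usb3_gbps = 5                            # USB 3.0 nominal signaling rate

print(f"PCIe 3.0 x16: {pcie3_x16_gbps:.0f} Gb/s")   # ~126 Gb/s
print(f"USB 3.0:      {usb3_gbps} Gb/s")
print(f"Ratio:        ~{pcie3_x16_gbps / usb3_gbps:.0f}x")  # ~25x
```

So even before protocol overhead, a USB 3.0 link carries roughly 1/25th of what a GPU's slot does, which is why these sticks target inference rather than training.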


Hence the part "scale up on the cloud" - USB units aren't supposed to replace the GPUs, c'mon.


The ask here was inference, not training.


You could buy your own, but I don't know how different the Edge TPU version is.

http://linuxgizmos.com/google-launches-i-mx8m-dev-board-with...


It's very different. It's optimized for inference, not for training.


An emulator?


It's too slow for this.


Is it?

The edge TPU can do MobileNet V2 at 100 FPS.

An iPhone 7 can do it at 145 FPS (source https://machinethink.net/blog/mobilenet-v2/)

The deal is though that the Edge TPU is able to do it at much lower power.
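Converting those quoted throughput figures to per-frame latency makes the comparison concrete (simple arithmetic only; the FPS numbers are the ones quoted above and in the linked benchmark, not measured by me):

```python
# Per-frame latency implied by the quoted MobileNet V2 throughput figures.
def ms_per_frame(fps: float) -> float:
    return 1000.0 / fps

edge_tpu_fps = 100  # Edge TPU figure quoted above
iphone7_fps = 145   # iPhone 7 figure from machinethink.net

print(f"Edge TPU: {ms_per_frame(edge_tpu_fps):.1f} ms/frame")  # 10.0 ms
print(f"iPhone 7: {ms_per_frame(iphone7_fps):.1f} ms/frame")   # ~6.9 ms
```

A ~3 ms/frame difference is negligible for most camera pipelines, so the power envelope is the deciding factor, not the raw FPS.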


The Edge TPU devices that Google has been promising since last year are now available under a new company called Coral. Would love to get one to compare to my Jetson TX2. The downside is that the unit can only use TensorFlow Lite.

E: Hah, seems like my topic got merged with this one. Interesting how I was short of OP's post by like a minute or two. Such a coincidence!


> a new company called Coral.

On that website, each page has the Google logo at the bottom and "Copyright 2019 Google LLC. All rights reserved.". Also, at [1], Google LLC is mentioned as manufacturer of the devices. At this point, Coral still seems to be a brand only, not a company. Maybe they just didn't want to harm/affect their "main" trademark with this. Or they actually do want to create a separate company and this is the first step.

[1]: https://coral.withgoogle.com/legal/


Software Eng. with The Coral Project[0] here. It feels a little odd seeing the same color (even the logo, a bit) + name combo used for their TPU that we've used for The Coral Project for years now.

[0]: https://coralproject.net/


I'm sorry but isn't that the Coral color..?


Kinda surprised they went with the internal code name for this.


The datasheet says it features a "Cortex M4 with 16 KB of instruction cache and 16 KB of data cache". As far as I know, the M4 doesn't have an L1 cache. Maybe they're using an M7? Or there's just simply no cache?

https://coral.withgoogle.com/tutorials/devboard-datasheet/


I don't think the M4 is a typo. Apparently it's based on the NXP i.MX 8M, whose block diagram definitely states Cortex-M4 w/ 16K L1 cache: https://www.nxp.com/products/processors-and-microcontrollers....

The M4 application notes (http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc....) say the M0..M4 don't have any internal cache, but that one can be provided by the SoC. Presumably that's what's happening here, although it seems odd to call that an L1 cache (I'm by no means an expert on this, so can't really comment!).


Maybe they're not talking about RAM but about a flash I/D cache. For example, the STM32F4 comes with a flash cache (they call it the ART Accelerator) to prefetch instructions from flash and enable "zero" wait states.


The NXP i.MX 8M SoC has a Cortex-A53 and an M4F.


Interesting that it's Debian Linux support only for the peripherals. I'd be interested to see if that support grows to other OSes, especially if it's a restriction on adoption.

I'm not in the space per se, but what are the predominant OS choices for ML/AI devs?


I think they just want to get things out quickly. Plenty of people will be willing to deal with limitations in an early phase. I'm sure that for the USB stick other Linux systems will follow, and probably Mac/Windows also. For the SOM they might stick with just Debian I guess. It is normal in embedded to just have one platform provided by the vendor, and everything else be "at your own risk".


They were handing the USB ones out today to attendees at the TensorFlow Dev Summit. I'll test mine later.

However I really wish they would make something beefier, to compete with e.g. Nvidia's Xavier.


Any details on how much this board costs? Also, how many TOPS does the Edge TPU have?


The Dev Board costs $149.99, it says on the website.


Thanks. Somehow I missed that, even though it's in large print at the top. :)

How about the Edge TPU specifications? Did I overlook those too?



