
The BeagleBone AI is Available - FrankSansC
https://beagleboard.org/blog/2019-09-19-beaglebone-ai-available
======
Abishek_Muthian
Although it is priced competitively, at about the same as the Nvidia Jetson Nano ($125), it
seems underpowered compared to the Nano. The Nano has 4GB RAM and 128 CUDA cores, can
encode/decode 4K at 30/60 FPS, and can handle multiple streams, compared to 15/15? for the
BeagleBone.

Perhaps the Vision Engine is better for computer vision tasks, but having to
use the TIDL suite, compared to the Jetson Nano's JetPack with the tools we use
regularly on bigger GPUs, is going to be a hard compromise to make.

JetPack includes CUDA 10, TensorRT, OpenCV 3.3.1, etc. by default, and PyTorch
is available separately for the Jetson Nano. Besides, the community is very active.

~~~
Isamu
Thanks for pointing out the Nano, seems they are pricing it at $99:

[https://www.nvidia.com/en-us/autonomous-machines/embedded-
sy...](https://www.nvidia.com/en-us/autonomous-machines/embedded-
systems/jetson-nano/)

~~~
Abishek_Muthian
I bought the Nvidia Jetson Nano Dev Kit in India as soon as it became available,
for ~$125; in the USA it's a bit cheaper at ~$110 (incl. shipping).

Please note that there is also the Nano Module, a SOM (System on Module)
with 16GB eMMC storage, sold for $150. I think this is intended for clusters.

------
Quarrelsome
Is it just me or is the ® symbol being everywhere a bit off-putting (in terms
of parsing the text)?

~~~
Stratoscope
It's not just you. Any trademark expert will tell you that you only have to
use the symbol once, in the first or most prominent place where you use the
trademark.

Also when you use other companies' trademarks, you should have a notice of who
the trademark owner is, which this press release does for Sitara but not for
the other trademarks used in the piece.

~~~
msla
> Also when you use other companies' trademarks, you should have a notice of
> who the trademark owner is

Do you have a cite for this? Thanks.

~~~
Stratoscope
Sure, here is one:

[http://www.bpmlegal.com/tmdodont.html](http://www.bpmlegal.com/tmdodont.html)

Search the page for "disclaimer", or just read the whole article - it has a
lot of good trademark advice. It also somewhat disagrees with what I said
about only using the trademark symbol once - it says to use it the first time
and "occasionally thereafter".

So here is an interview with an IP attorney who suggests just using it once:

[https://www.forbes.com/sites/work-in-
progress/2014/03/12/whe...](https://www.forbes.com/sites/work-in-
progress/2014/03/12/when-and-how-do-i-have-to-use-trademark-symbols/)

(Disclaimer: I'm not a trademark expert! Just passing along what I've read
from those who claim to be...)

~~~
msla
> (Search the page for "disclaimer".)

This is confusing because that isn't how the USPTO seems to use the term
"disclaimer" as regards trademarks:

[https://www.uspto.gov/trademark/laws-regulations/how-
satisfy...](https://www.uspto.gov/trademark/laws-regulations/how-satisfy-
disclaimer-requirement)

> What Is a Disclaimer?

> A disclaimer is a statement that you include in your application to indicate
> that you do not claim exclusive rights to an unregistrable portion of your
> mark. For example, if you sell shirts and your mark includes the generic
> word "SHIRTS," you could not object to someone else also using the word
> “SHIRTS” as part of his/her mark. The word is still part of both marks, but
> no one is claiming exclusive rights in that word, because it is an
> 'unregistrable' component of an overall mark. (See below for typical
> examples of unregistrable matter that must be disclaimed.)

> A disclaimer does not physically remove the unregistrable portion from your
> mark or affect the appearance of your mark or the way you use it. It is
> merely a statement that the disclaimed words or designs need to be freely
> available for other businesses to use in marketing comparable goods or
> services.

~~~
Stratoscope
I don't think the bpmlegal article I linked is using the word "disclaimer" in
the same sense as USPTO here. One of the examples they list is:

 _Teflon is a registered trademark of DuPont._

They're just saying you should add a statement like that, not saying this is a
"disclaimer" in the specific meaning USPTO uses above.

~~~
msla
> They're just saying you should add a statement like that

With absolutely no evidence that it's required.

------
penagwin
Is it just me or does 1GB of RAM seem a little low for a $100+ board? I can't
seem to find what speed it is either.

I'm not expecting anything crazy like 8GB, but given how many boards sell at
~$50 with 4GB of RAM this just seems kinda limited.

~~~
mumblemumble
I think that it probably doesn't make sense to compare this to a general-
purpose single board computer. It's apples and oranges.

The BeagleBone AI is aimed at prototyping industrial automation
applications. I've never worked in that area, but I wouldn't be at all
surprised if large amounts of RAM aren't a priority for industrial
controllers. The software probably tends to be frugal with memory, because a
bigger heap means more cache misses, and more cache misses mean worse latency.

A Raspberry Pi, by contrast, is mostly targeted at running a GUI and memory-
hungry user applications up to and including Minecraft. It's meant for
teaching kids to program and hobby stuff. It doesn't have built-in DSPs and
programmable real-time units, because those are for supporting applications
that fall far outside its intended purpose of having fun with Python.

~~~
codesushi42
That reasoning is just plain wrong. The memory usage has almost nothing to do
with the field where it will be used. It will depend on the size of the model
being run on the board.

The size of that model is determined by the number of weights used. Since
industrial automation will likely use CV, that means the potential for a lot
of weights.

Same goes for any other non-trivial use case.

So yeah, 1GB is paltry.
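As a rough back-of-envelope sketch of that "weights determine memory" point (the parameter count below is illustrative, not from the article): weight memory scales linearly with weight count and bytes per weight.

```python
# Rough back-of-envelope: RAM needed just to hold a model's weights.
def model_weight_bytes(num_weights, bytes_per_weight=4):
    """fp32 weights by default; int8 quantization would use 1 byte each."""
    return num_weights * bytes_per_weight

# A mid-sized CV model, e.g. ~25M weights (ResNet-50-ish):
fp32_mb = model_weight_bytes(25_000_000) / 1e6
print(f"~{fp32_mb:.0f} MB of fp32 weights")  # ~100 MB: fits in 1 GB, but
# activations, framework overhead, and the OS also need RAM.
```

Whether 1GB is paltry therefore depends heavily on how large a model the application actually needs.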

~~~
mumblemumble
It's been a long time since I've done anything in machine vision, but, at
least back in the day, what I was seeing was that, compared to other uses for
machine vision, industrial applications tended to stay a lot simpler: Lower-
resolution images, black-and-white imaging, support vector machines instead of
neural nets (let alone deep learning), all that good stuff. They could get
away with it because they are able to much more carefully control the input
domain - controlled lighting conditions, consistent orientation of the thing
being imaged, all that good stuff. So they don't need 10^9 or more weights'
worth of ocean-boiling complexity the way you would in something like self-
driving cars or impressing your friends with your Imagenet performance.

And if you can get away with running an SVM on a 1-megapixel black and white
image, then your weights will fit in 1GB with an order of magnitude to spare.

~~~
codesushi42
Ok, what you said about lower-res images makes sense. Lower variation between
images maybe means you could get away with fewer weights/more quantization -
you could afford to lose more information in the model. Maybe 1GB can be
sufficient then.

There's no reason to use an SVM over a (C)NN nowadays though.

~~~
mumblemumble
Sure there is. With an SVM, you can pick different kernels to more carefully
engineer specific behaviors, what kinds of errors your model is likely to
make, etc. You can get a good, stable model on less training data, which is
great when your training data is expensive to produce. (A situation that I'm
guessing is not at all uncommon in industrial automation.) You get all that
crispy crunchy large margin goodness. Stuff like that.

I'd absolutely focus on ANNs if I were an academic researcher, because that's
the hot flavor of the month that's going to get your career the attention it
needs to bring in funding, jobs, etc. I'd also pick it for Kaggle-type stuff,
where there's effectively no real penalty for generalizing poorly. Bonus
points if you consume more energy to train your model than Calgary does to
stay warm in the winter.

In a business setting, though, I would only default to ANNs if it were
holistically the best option for the problem domain. By "holistically" I mean,
"there's more to it than chasing F1 scores at all costs." The business
considerations that caused Netflix to never try productionizing the prize-
winning recommendation engine, for example, are always worth thinking about.
Personally, I'm disinclined to look past linear models - not even as far as
kernel methods - without strong reason to believe that I'm dealing with a
curve that can't be straightened with a simple feature transformation.
Complexity is expensive, and needless complexity is a form of technical debt.
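A minimal sketch of the "curve that can be straightened with a simple feature transformation" point (the quadratic here is an invented example, not from the thread): a model that is nonlinear in x can still be fit with ordinary least squares once you transform the feature.

```python
import numpy as np

# y = 2*x^2 + 1 is nonlinear in x but linear in the feature z = x^2,
# so plain least squares on z recovers the coefficients.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = 2.0 * x**2 + 1.0 + rng.normal(0, 0.1, 200)

z = x**2                                   # the feature transformation
A = np.column_stack([z, np.ones_like(z)])  # design matrix [z, 1]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # close to [2.0, 1.0]
```

No kernel machinery or neural net needed when the nonlinearity is this well understood, which is the point about complexity being a form of technical debt.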

~~~
codesushi42
_You can get a good, stable model on less training data, which is great when
your training data is expensive to produce_

Huh? SVMs don't perform better than NNs on less training data.

I'm sorry, but the rest of what you said is out of date and wrong. CNNs work
better than SVMs for CV tasks. There's no reason to use SVMs anymore for CV,
and no one in their right mind does.

------
ausjke
How does this compare to Nvidia's Jetson
Nano ([https://developer.nvidia.com/embedded/jetson-nano-
developer-...](https://developer.nvidia.com/embedded/jetson-nano-developer-
kit) for $99), which is cheaper and appears to be more powerful?

I used the BB in previous projects; one thing that definitely stands out for
the BB is that it could be used as a product directly, with a case and some
certification (EMC, etc.). Nvidia's Nano is more of a development platform.

Beagleboard actually predates the RPi; after Arduino, the BB is arguably the
very first board running a 32-bit ARM that is also open source, cheap, and small.
However, it's been overshadowed by the RPi in recent years.

~~~
castratikron
Yeah, the Beagleboard, and more generally the TI AM335x, is getting old. Single
core, DDR3-800, no PCIe, no secure boot, and other bizarre limitations. It's
nice that they're putting out something newer.

------
kbumsik
Dual Cortex-A15, 2 DSPs, 4 Vision Engines, 4 Real-time controllers (PRUs), 2
Cortex-M4s, 2D accelerators, dual 3D GPUs...

It's impressive, but being a pretty domain-specific chip, can anyone make
use of its capabilities at the hobbyist level Beaglebone is targeting?

~~~
ksangeelee
Having the PRUs as supporting microcontrollers can be great where timing is
critical. For example, it's possible to decode signals from an AM receiver and
pass them to the host, or monitor sensors and have immediate responses (such
as triggering a shut-off via GPIO).

There's nothing particularly difficult about wiring a microcontroller up to a
single board computer to do these jobs - I've done exactly that for reading my
weather station and heating oil tank level. But it's messy, and I think the
cohesion of being able to do it all directly on one board is a worthwhile
advantage.

~~~
tyingq
The PRUs share a section of main memory with the CPU, which is pretty unique.
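A sketch of what that shared memory looks like from the ARM side. The addresses below are for the PRU-ICSS on the AM335x (classic BeagleBone); the AM5729's memory map differs, so treat them as illustrative, and note the actual read requires root and real hardware.

```python
import mmap
import os
import struct

# On the AM335x the PRU-ICSS sits at 0x4A300000 in the ARM's physical
# address space, with 12 KB of shared RAM at offset 0x10000. Both the
# PRUs and the ARM core can read/write this region, so it works as a
# mailbox between the real-time firmware and Linux userspace.
PRU_ICSS_BASE = 0x4A300000
SHARED_RAM_OFFSET = 0x10000
SHARED_RAM_ADDR = PRU_ICSS_BASE + SHARED_RAM_OFFSET

PAGE = os.sysconf("SC_PAGE_SIZE")
page_base = SHARED_RAM_ADDR & ~(PAGE - 1)  # mmap offsets must be page-aligned
page_off = SHARED_RAM_ADDR - page_base

def read_pru_word(index=0):
    """Read one 32-bit word from PRU shared RAM via /dev/mem (root only)."""
    with open("/dev/mem", "r+b") as f:
        mem = mmap.mmap(f.fileno(), PAGE, offset=page_base)
        try:
            value, = struct.unpack_from("<I", mem, page_off + 4 * index)
        finally:
            mem.close()
        return value

print(hex(SHARED_RAM_ADDR))  # 0x4a310000
```

Because the region is ordinary physical RAM to both sides, the PRU firmware can just store sensor readings at a fixed offset and the host polls them with no driver in between.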

------
oceanghost
>low cost development board yet, and it’s driving a whole new AI revolution.

This press release is a disaster as far as grammar is concerned. I am
legitimately unable to tell if it has any special properties regarding AI.

And NO, it came out yesterday, it's not driving any revolutions.

------
rcarmo
I'm not overly familiar with TI's SOCs post-2010. Anyone out there with a good
overview of what the Sitara AM5729 includes besides the bullet points in that
piece?

And what about TIDL adoption? I've been working on the Intel/NVIDIA-grade part
of the ML scale and have a few ESP32 boards to fiddle with OV2640 cameras, but
very little in between except what Broadcom has been doing.

~~~
Zenst
Not a summary (which would be from one perspective or another), but the PDFs
cover a lot, and you can word-search down into the details that pique your interest:
[http://www.ti.com/product/AM5729/technicaldocuments](http://www.ti.com/product/AM5729/technicaldocuments)

As for tooling, I could not say, but as a previous comment stated, the NVIDIA
offering for the same money makes it hard for this to stick out for many. Though
I'm sure it has a niche, as most boards do - what that niche is, beyond those
already invested in the Beagleboard environment and comfortable with it, is hard
to see at first glance.

------
missosoup
Any experiences with using tensorflow models with TI Deep Learning (TIDL)?

~~~
jjoonathan
I'd be interested in hearing anecdotes as well.

In my experience, software from hardware companies has been so reliably
abysmal that it makes the "enterprise software" us SWEs like to complain about
look decent by comparison.

Hopefully it's different this time :)

------
m0zg
What's so "AI" about it? It doesn't even have a TPU. Kendryte K210 has a fixed
point TPU, 400MHz dual core RISC V with FPU, 8 channel audio DSP, FFT and
crypto acceleration, and costs $8.90 with wifi and $7.90 without. And the
module is the size of a half of a postage stamp. Runs TensorFlow Lite (a
subset of ops, but good enough to do practical things).

~~~
mkl
That has only 8MB RAM, and the company seems so fly-by-night they can't be
bothered setting up Let's Encrypt.

~~~
m0zg
8MB RAM is more than enough to run a quantized MobileNet, which they
demonstrate by preloading object detection on it, out of the box. And the chip
is real. I have a couple of boards with it, it works. I guess people just have
a hard time believing all the stuff in the spec can be done for less than 10
bucks. 28nm by the way, not a joke. The company got its start in crypto
mining, so this is a side gig for them.
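A quick back-of-envelope check on why a quantized MobileNet fits in that footprint (the parameter count is approximate, not from the comment):

```python
# MobileNet v1 (1.0 width multiplier) has roughly 4.2M weights.
# int8 quantization stores each weight in one byte; fp32 needs four.
MOBILENET_V1_PARAMS = 4_200_000  # approximate

int8_mb = MOBILENET_V1_PARAMS * 1 / 1e6
fp32_mb = MOBILENET_V1_PARAMS * 4 / 1e6
print(f"int8: ~{int8_mb:.1f} MB, fp32: ~{fp32_mb:.1f} MB")
# ~4.2 MB quantized leaves room for activations within 8 MB of RAM;
# the fp32 version (~16.8 MB) would not fit at all.
```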

------
kumarvvr
Can someone tell me if there are easy to use libraries that can speed up
existing ML code, say written in Python, on this?

Or do we have to write custom C/C++ code to make best use of available
hardware?

------
SerJaime
It looks pretty cool and I think about getting one. Does anyone know if it has
a dedicated neural network accelerator?

------
gapo
Can any of the better minds out here compare this with the Jetson/Jetson Nano
?

~~~
kbumsik
It's much more focused on real-time control (4 PRUs and 2 Cortex-M4s) than
GPGPU processing. It's really for industrial vision applications rather than
AI, actually.

------
ycombonator
What are the typical use cases for this type of board ?

------
trollian
DSP and vision does not "AI" make.

~~~
monocasa
FWIW, the SGX GPU in it was one of the first to go hog wild on f16.

And if they'd document their ISA, it's pretty amenable to being used for
neural networks, way more than the other mobile GPUs at least. It'd be a cold
day in hell before they did that, though, unfortunately.

~~~
UncleOxidant
Their ISA isn't even documented?

~~~
monocasa
No. I actually looked at reverse engineering it a few years back. If you pull
apart their drivers you can figure it out pretty quickly (there are actually two
different ISAs: the main shader cores, and a tiny little RISC-esque core that
marshals work for the shader cores).

But their driver/software complexity is super high to even get a triangle in a
buffer or run a compute job. They have an RTOS-looking microkernel running on
the shader cores, and there's a ton of caching and MMU setup you have to do
from the GPU side (not the main app processor). And there are a lot of caching
hints and hacks that are hard to work around if you don't know the context (a
lot of tables for bug reference numbers and special-cased code depending on
those).

If anyone from Imagination is listening, the open source community would still
love your help in supporting these chips. : ) They're really pretty inside,
and the world should know about the good work y'all did!

------
tiborsaas
The press release could have used a real-life example, like "Training on the MNIST
dataset takes .5 seconds" or something.

Where can I find info about how these edge computing boards speed up
training time? Or how they compare to a 1080 Ti?

~~~
chunsj
As with most of the so-called machine learning boards, this one is not for
training, just for inference at best.

