

Probabilistic processors possibly pack potent punch - coderdude
http://arstechnica.com/hardware/news/2010/08/probabilistic-processors-possibly-potent.ars

======
ChuckMcM
When I saw the pitch deck for these guys I must admit I didn't get it. From
what I could see it was "gee, you can do division in analog instantly!", which
is to say that they had re-discovered analog computers. Now, don't get me
wrong, an HDL for designing analog computers and a way to plug them into a
digital stored-program computer could be hella cool, but it read too much like
someone simply not remembering the past.

That being said, and given that most 'analog' stuff is done in a DSP these
days anyway, it would be fascinating if someone came up with the analog
equivalent of a CLB in an FPGA, plus an HDL for describing non-linear
computation (something the original analog computers could not do).

It's not clear at all, however, that these are the droids I was thinking of.
There is a lot of unused 130nm fab capacity, though, so for a leading-edge
idea that could be implemented on trailing-edge technology there should be a
window.

~~~
jwm
> it would be fascinating if someone came up with the analog equivalent to a
> CLB in an FPGA and an HDL for describing non-linear computation

What you have described does indeed exist, in the form of Field Programmable
Analog Arrays (FPAAs).

The Wikipedia article (<http://en.wikipedia.org/wiki/FPAA>) gives a good
overview of them, but basically they consist of multiple configurable analog
blocks (CABs), which can be configured to perform some analog operation on a
signal, and multiple CABs can be connected together to form a signal path. All
the signal paths and CAB parameters are defined by the digitally loaded
configuration. Very interesting!
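
To give a flavour of the block-and-path idea in code, here is a toy digital
model (it bears no resemblance to the actual Anadigm tools, and a real CAB of
course works on continuous voltages rather than numbers):

    # Toy model of the FPAA idea: configurable blocks chained into a signal
    # path. Purely illustrative -- real CABs operate on continuous signals.
    def gain(k):      return lambda v: k * v   # amplify by a factor k
    def rectify():    return lambda v: abs(v)  # full-wave rectification
    def offset(c):    return lambda v: v + c   # add a constant offset

    def signal_path(*blocks):
        """Chain 'CABs' so each one transforms the output of the previous."""
        def run(v):
            for block in blocks:
                v = block(v)
            return v
        return run

    path = signal_path(gain(2.0), rectify(), offset(0.5))
    print(path(-1.2))  # -1.2 -> -2.4 -> 2.4 -> 2.9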

The ones I briefly used (many years ago) were the Anadigm FPAAs. Examples of
the operations the CABs (or "CAMs") can do are signal filtering,
rectification, addition, multiplication, comparison and
integration/differentiation over time
(<http://www.anadigm.com/_doc/PR021100-0024.pdf>). To me these devices are
really targeted at analog signal processing applications (audio,
instrumentation etc). If I were an EE (which I am not) I might be able to
provide a decent evaluation of them, but they seemed like they would be pretty
useful for this kind of thing.

My project using them was to get the FPAAs to perform some simple physics
calculations, based on previous work by another student. One example was
modelling the height of a moving projectile under gravity. I can't remember
the specifics (sorry!) but input voltages were used to specify the equation
parameters (mass, initial force), the CABs transformed these voltages
according to the given equation of motion, and the output voltage was the
height at time t. And it worked OK! A computer used digital-to-analog
converters to hold the input voltages steady for the FPAA circuits, and
sampled the output using analog-to-digital converters.
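
In spirit it was something like the following hypothetical digital stand-in
(the real thing was parameter voltages flowing through CABs, and I've long
since lost the exact equation and values), just to show the kind of
calculation being mapped onto the hardware:

    # Hypothetical stand-in for the analog projectile model: parameters that
    # were held as input voltages become plain numbers, and the "output
    # voltage" becomes the returned height.
    G = 9.81  # gravitational acceleration, m/s^2

    def projectile_height(v0, t, h0=0.0):
        """Height at time t for launch speed v0 straight up, ignoring drag."""
        return h0 + v0 * t - 0.5 * G * t * t

    for t in (0.0, 0.5, 1.0, 1.5, 2.0):
        print(t, projectile_height(v0=10.0, t=t))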

While this worked for simple examples, the whole system was limited to 8-bit
effective values by the signal-to-noise ratio of the equation's signal path.
The FPAA device itself has a high enough signal-to-noise ratio to give roughly
20-bit effective resolution (SNR ~= 120 dB), but the extra wiring and
components in the rest of my circuit added enough noise to bring this down to
about 8 bits (SNR ~= 48 dB). (I'll happily admit I am no great circuitsmith:).
There were also some caveats, like hard limits on the parameters of the CABs
(restricted internally by the physical limits of the variable capacitors and
resistors), which constrain the quantities you can use in the analog equations
and restrict their general-purpose usefulness.
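
(For reference, the rule of thumb I used above to turn SNR into effective bits
is the ideal-quantizer relation SNR ~= 6.02*N + 1.76 dB, i.e. roughly 6 dB per
bit:)

    def snr_db_to_effective_bits(snr_db):
        """Effective bits for an ideal quantizer: SNR = 6.02*N + 1.76 dB."""
        return (snr_db - 1.76) / 6.02

    print(snr_db_to_effective_bits(120))  # ~19.6 -> the "20 bit" figure
    print(snr_db_to_effective_bits(48))   # ~7.7  -> the "8 bit" figure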

Some of the things analog computers can do are interesting, like "instant"
multiplication and division. However, the difficulty of keeping thermally and
electromagnetically induced noise low enough to get a high-resolution signal
(e.g. pushing past 120 dB SNR) is one of the reasons we don't really see
analog systems used for computation any more. It is also one of the reasons
digital circuits are much, much easier to work with: you no longer (really)
have to worry about noise affecting accuracy, from the transistor level up to
the PCB or long-cable level.

This kind of thing explains the low 8-bit resolution of the device in the
article.

> The technology allows a resolution of about 8 bits; that is, they can
> discriminate between about 2^8 = 256 different values (different
> probabilities) between 0 and VDD

High resolution analog seems to be just hard to do in practice.

(Aside: if you take the width of the known universe and discretize that
distance into Planck-length intervals, you only need ~220 bits to represent
the position of an object along that line. So only ~3*220 bits for (x, y, z).
Binary rules!)
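
(A quick sanity check on that count; the inputs are only rough estimates, so
the exact bit count shifts a little depending on which figure you take for the
size of the universe:)

    import math

    universe_diameter_m = 8.8e26  # observable universe diameter, rough estimate
    planck_length_m = 1.6e-35     # Planck length, rough estimate

    bits_per_axis = math.log2(universe_diameter_m / planck_length_m)
    print(bits_per_axis)      # ~205 bits per axis with these particular figures
    print(3 * bits_per_axis)  # ~615 bits for a full (x, y, z) position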

So, I'm more of a digital fan for general-purpose computing myself:) However,
analog computers are hella cool all the same.

EDIT: This post is long. You'll have to excuse the fact that I was finally
happy for some information from my old undergrad thesis to see the light of
day. TLDR: Analog computers are cool, but they seem limited in resolution and
general configurability. Use FPGAs or GPUs or even poor old CPUs instead.

EDIT2: Rereading your post, maybe you were thinking about some kind of
FPGA+FPAA hybrid? An FPGA, but with some analog blocks, plus ADCs and DACs,
inside?

~~~
ChuckMcM
A mix of FPAAs and FPGAs, rather than all FPGA or all FPAA. The DSP elements
in newish FPGAs are a nod in this direction.

 _High resolution analog seems to be just hard to do in practice._

That is exactly the challenging bit, and the thing I was trying to figure out:
whether Lyric had actually made an improvement there. If you could do
computation in an analog matrix and still pull 32 bits reliably out of it,
that would be something new.

------
RiderOfGiraffes
Here are the submissions from seven months ago:

<http://news.ycombinator.com/item?id=1619515>

<http://news.ycombinator.com/item?id=1616800>

I provide this purely for reference because, as so often happens, it sank
without a trace then, but it's getting noticed and attracting some commentary
now.

------
aidenn0
I can see it as potentially useful in mobile devices, since it sounds like
it's a trade-off of engineering time (analog circuits are _much_ harder to get
right) vs. power consumption, and mobile has such large volume that
engineering costs can become a rounding error.

I'm slightly concerned about per-unit cost, since I'm assuming yield testing
is going to be much harder. Digital behaves in well-understood ways across
temperature and voltage ranges; analog is not as well understood.

[edit] I see they are claiming a 30x smaller footprint, which will help, since
they can be very aggressive about rejecting chips, but I still wonder whether
fabs that mainly do digital are set up to test something like this.

------
jbri
Analog electronics have been used for simulation for a long time, mainly for
simulating analog components.

But trying to use an analog processor for discrete computation? While it could
work, part of the point of digital electronics is that you can build in error-
correction at every single gate to mitigate the effects of not using ideal
zero-resistance electrical components. In order to do that with an analog
machine you'd either need to sample-then-recreate (essentially making it a
digital-logic circuit) or have accurate knowledge of how much voltage is being
dropped at each point (which is influenced by environmental factors such as
ambient temperature).

~~~
arethuza
Analog computers used to come in various forms, including hydraulic ones. I
once worked for an engineering professor who had spent a lot of time using
hydraulic analog computers to "solve" systems of equations modeling various
kinds of real-world systems.

One thing that I thought was particularly neat is that he had spent time using
a hydraulic analog computer to model the flow of fluid through building
walls....

------
aidenn0
My biggest reason for thinking this won't make inroads into HPC is that they
talk about an "up to 1000x improvement over x86 calculations". That means they
need to get it to market and adopted before we have 256 or 512 x86 cores in a
single socket, which isn't _that_ far away and would reduce their advantage to
"up to 4-8x", which is not nearly as impressive.

~~~
pygy_
These chips are also very energy-efficient, and I can't see why Moore's law
wouldn't apply to them too. You could build a 256-core probabilistic processor
as well.

------
onan_barbarian
This may well go the same way as the last outing of this idea, which can be
seen, no joke, in fuzzy-logic rice cookers and toasters and not many other
places. Maybe.

~~~
psyklic
I wouldn't be so dismissive of this technology. The key is that it uses
significantly less power than conventional circuit elements.

For instance, this concept has been demonstrated to be great at decoding lossy
video -- perfect for mobile devices. And it has already been demonstrated to
work as part of an error-correction circuit for flash memory.

According to Lyric, their flash error-correction chip offers "30X lower cost
to manufacture and 12X lower power consumption" -- pretty significant savings.

~~~
modeless
Do you have a citation for the lossy video thing? Seems like it could be good
for rendering graphics too. For a 12x power savings maybe I can live with my
pixels coming out #00FE00 instead of #00FF00 (for example).

~~~
psyklic
Sure -- <http://www.technologyreview.com/energy/20246/>. Specifically, they
have "implemented filter primitives using PCMOS technology, used to realize
the H.264 decoding algorithm." (from
<http://www.cs.rice.edu/~lc6/visen/2006vlsisoc.pdf>). That paper does not go
into much detail about the video implementation, but it seems they discussed
it more in a conference presentation.

The relevant part of the technology review article: "That changed in 2006.
[Palem] and his students simulated a PCMOS circuit that would be part of a
chip for processing video, such as streaming video in a cell phone, and
compared it with the performance of existing chips. They presented the work at
a technical conference, and in a show of hands, much of the audience couldn't
discern any difference in picture quality."

------
waratuman
Wow, someone went for alliteration!

~~~
jacques_chester
Always Aim to Alliterate, Says Stan.

~~~
Eliezer
I always avoid excessive alliteration as it often annoys an otherwise
attentive audience.

~~~
jacques_chester
Downvotes denotate demographic homogeneity.

------
caf
If it lives up to the hype, it would be useful for some kinds of
cryptanalysis.

------
markkat
I really expected to find some mention of quantum computing in this article.
Could the probability of spin states (or some other state) be used in this
paradigm?

------
jacques_chester
I wish Jon Stokes had written this article.

