

Darpa Has Seen the Future of Computing ... And It's Analog - webwanderings
http://www.wired.com/wiredenterprise/2012/08/upside/

======
Dn_Ab
_Note_ \- The article is about probabilistic chips. The use of _analog_ is
incredibly misleading, as can be seen in the interesting but nonetheless
unrelated discussions here on various types of analog computers.

This type of chip will save a lot of energy and still be "correct" in the
sense that exact answers are not needed, e.g. for non-deterministic
computations involving sampling. This particular chip is better suited to many
Bayesian and _generative_ machine learning algorithms, not the multi-layer
perceptrons most posts are referring to.
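
A minimal sketch (mine, not from the article) of why sampling-based
computations tolerate occasional hardware errors: estimate pi by Monte Carlo,
then corrupt a small fraction of the per-sample results to mimic an unreliable
chip, and the estimate barely moves.

    # Estimate pi by sampling; flip a small fraction of results to simulate
    # an error-prone chip. The aggregate answer degrades gracefully.
    import random

    def estimate_pi(n, error_rate=0.0):
        hits = 0
        for _ in range(n):
            x, y = random.random(), random.random()
            inside = x * x + y * y < 1.0
            if random.random() < error_rate:  # occasional hardware error
                inside = not inside
            hits += inside
        return 4.0 * hits / n

    print(estimate_pi(100_000))                   # ~3.14
    print(estimate_pi(100_000, error_rate=0.01))  # ~3.12, only slightly biased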

~~~
billswift
Interestingly, if you look at it the other way around, "probabilistic" is a
good way of describing analog output. Something I read many years ago on
computer history claimed that one of the reasons digital technology displaced
analog was the inherent lack of "exactness" in analog.

------
FrojoS
> By definition, a computer is a machine that processes and stores data as
> ones and zeroes.

No, of course that's only true for digital computers. Otherwise, what would be
the point of the headline? Not a great article, but interesting news.

I could imagine specialized computers for artificial neural networks (ANNs)
being commercially successful in the future. Not sure if this is what the
people in this DARPA project are working on. As far as I can tell, there are a
lot of breakthroughs in ANNs at the moment, especially in the realm of
pattern (e.g. image) recognition: Ng:
<http://www.youtube.com/watch?v=ZmNOAtZIgIk> Hinton:
<http://news.ycombinator.com/item?id=4403662>

~~~
ralfd
> No, of course that's only true for digital computers.

Not even that. Computers with ternary instead of binary logic are possible:

<http://en.wikipedia.org/wiki/Ternary_computer>

Quote:

"The only modern, electronic ternary computer Setun was built in the late
1950s in the Soviet Union at the Moscow State University by Nikolay Brusentsov
[…] IBM also reports infrequently on ternary computing topics (in its papers),
but it is not actively engaged in it."
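
As an aside, Setun used balanced ternary (digits -1, 0, 1). A quick sketch of
converting an integer to that representation, purely as my own illustration
(nothing Setun-specific about it):

    def to_balanced_ternary(n):
        # Digits are -1, 0, 1; least significant first.
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:
                r = -1
                n += 1  # carry into the next trit
            digits.append(r)
            n //= 3
        return digits or [0]

    print(to_balanced_ternary(5))   # [-1, -1, 1] i.e. 9 - 3 - 1
    print(to_balanced_ternary(-5))  # [1, 1, -1]  i.e. -9 + 3 + 1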

~~~
derekp7
Which brings up a question I've always had. If a Quantum computer uses bits
that are on, off, and both, can't this be simulated with a Ternary computer?

~~~
cgranade
You need more classical resources to model a qubit in generality. This is
because a qubit can be in the "0" state, the "1" state, or in _any_
superposition between them. For a single qubit, you can describe an arbitrary
superposition by two angles [1], each of which is a continuous real number.
While you can't extract both of these parameters from a single measurement, a
classical computer has to know both angles to simulate that qubit. Thus, it's
more like you need two floats to simulate a single qubit.

When you have more than one qubit, the requirements get much worse, as you can
have superpositions over all of the possible classical states. When you work
through the math, you find that you need a vector of 2^n - 1 complex floats to
model a register of n qubits in full generality. Going from a classical bit to
a classical trit does nothing to help with that exponential scaling.

[1]: <http://en.wikipedia.org/wiki/Bloch_sphere>
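
A back-of-the-envelope sketch of that scaling (my own illustration; the 16
bytes per amplitude assumes double-precision complex numbers):

    import numpy as np

    # A single qubit, parameterized by the two Bloch-sphere angles:
    # |psi> = cos(theta/2)|0> + e^{i*phi} sin(theta/2)|1>
    theta, phi = 1.0, 0.5
    psi = np.array([np.cos(theta / 2),
                    np.exp(1j * phi) * np.sin(theta / 2)])

    # An n-qubit register's state vector has 2^n complex amplitudes, so the
    # memory needed to store it grows exponentially.
    for n in (1, 10, 20, 30):
        amps = 2 ** n
        print(f"{n:2d} qubits -> {amps} amplitudes "
              f"({amps * 16 / 2**20:.1f} MiB)")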

------
co_pl_te
When I was younger and less of a technophile, I often wished analog computing
would somehow overtake digital computing. I suppose from my naïve perspective
I found it unsettling that our model of computing is discrete while the space-
time we exist in is continuous.

Being much more into technology today than I was 10 years ago (but equally
naïve :-), my perspective seems to have flipped. I question whether reality is
really continuous or whether it is just our limited perception of it that
makes it seem that way. Obviously, my growing interest in digital technology
has had a profound impact on how I view the world in which we live.

In any case, I'm glad DARPA is looking more seriously into analog computing. I
think there is a lot to be learned from revisiting the issue in a field that
is still very young.

~~~
toolslive
Funny, I came to the exact opposite conclusion: it's unsettling that our models
of physics are based on continuous functions while the space-time we exist in
is obviously discrete. Treating space-time as continuous is a simplification
that makes the math easier, but it isn't correct.

~~~
ralfd
"Obviously discrete"?

~~~
vdondeti
From my understanding, the Planck length is the smallest length there can be,
and every other length has to be an exact integral multiple of the Planck
length [1]. I believe the same applies to the other Planck units [2] - they are
fundamental units. Similarly, every charge has to be an exact integral
multiple of the charge of the electron. These fundamental units seem to
indicate that our universe is not continuous, but quantized/discrete. With
that said, there are some theories stating that these fundamental units only
apply to our universe (and even in our own universe, the fundamental units may
change over long periods of time), and that other universes may have different
fundamental units. So depending on how you define the universe, the answer to
whether the universe is continuous or discrete could change.

[1] <http://en.wikipedia.org/wiki/Planck_length> [2]
<http://en.wikipedia.org/wiki/Planck_units>

~~~
joeyo
Quarks carry fractional charge (e/3) [1]. And quasiparticles seem to be able to
carry any fractional charge with an odd denominator (e/3, e/5, e/7, ...) [2,3],
which would seem to indicate that charge is quantized yet has no lower bound.

1\. <http://en.wikipedia.org/wiki/Quark#Electric_charge>

2\. <http://en.wikipedia.org/wiki/Fractional_quantum_Hall_effect>

3\.
[http://physicsworld.com/cws/article/news/1997/oct/24/fractio...](http://physicsworld.com/cws/article/news/1997/oct/24/fractional-
charge-carriers-discovered)

~~~
vdondeti
Fair point. Depending on how you define it, all charges are an integral
multiple of e or e/3 (see below). Either way, the relevant point to this
discussion is that charge is quantized/discrete.

As for the fractional quantum Hall effect, based on a quick glance, it seems
that thus far only quasiparticles with charge e/3 have been discovered. So,
while the theory could be correct, we will have to wait and see if the other
predicted fractional charges are detected.

\----------------

From the Quark wikipedia page you cited: What is the quantum of charge? All
known elementary particles, including quarks, have charges that are integer
multiples of 1⁄3 e. Therefore, one can say that the "quantum of charge" is 1⁄3
e. In this case, one says that the "elementary charge" is three times as large
as the "quantum of charge". On the other hand, all isolatable particles have
charges that are integer multiples of e. (Quarks cannot be isolated, except in
combinations like protons that have total charges that are integer multiples
of e.) Therefore, one can say that the "quantum of charge" is e, with the
proviso that quarks are not to be included. In this case, "elementary charge"
would be synonymous with the "quantum of charge".

------
ajb
This comes up every few years. Please, guys, before you conclude that
probabilistic computers are the future, think about what it would be like
debugging a program running on one. Speaking as someone who has implemented a
probabilistic version of 'git bisect'[1], I think it would be hard.

That's not to say that the idea is completely dumb. A probabilistic GPU would
be useful, although it would go against the trend of being able to use them
for general computation.

[1] That is, one that looks for probabilistic bugs, not one that runs on a
probabilistic CPU. See <https://github.com/Ealdwulf/bbchop>

~~~
ThaddeusQuay2
"Please, guys, before you conclude that probabilistic computers are the
future, think about what it would be like debugging a program running on one."

Most of today's software is already probabilistic. Most bugs occur because the
software is run on the wrong hardware. Therefore, once we have probabilistic
hardware, most existing software can easily be ported, and most bugs will
instantly disappear.

~~~
bad_user
Err, what?

Most bugs occur because the software has bugs, as in broken logic or edge-
cases that aren't handled. To quote from "No Silver Bullet" [1] ...

    
    
        I believe the hard part of building software to be the 
        specification, design, and testing of this conceptual 
        construct, not the labor of representing it and testing
        the fidelity of the representation
    

Hardware has nothing to do with it.

[1]
[http://www.cs.nott.ac.uk/~cah/G51ISS/Documents/NoSilverBulle...](http://www.cs.nott.ac.uk/~cah/G51ISS/Documents/NoSilverBullet.html)

------
scarmig
Check out [http://pruned.blogspot.com/2012/01/gardens-as-crypto-
water-c...](http://pruned.blogspot.com/2012/01/gardens-as-crypto-water-
computers.html) to see a Soviet water-based computer. I don't know the details
of it, but it appears to be able to solve PDEs by mapping the PDE you're
solving to a hydraulic system. Since the flow of water is governed by PDEs,
once you've mapped the problem to the computer you can just make some analog
measurements of water flow to get the answer you're looking for. (Is this
right?)

I have to wonder, though, whether this can really be called a computer. If
you do, don't you also have to consider a simple integrator circuit one?

Error correction must also be somewhere between a total PITA and impossible,
right?

~~~
SimHacker
Instant error correction and a solution for memory leaks: Just add water!

~~~
ktizo
Buffer overflows could get messy.

------
bornhuetter
> “One of the things that’s happened in the last 10 to 15 years is that power-
> scaling has stopped,” he says. Moore’s law — the maxim that processing power
> will double every 18 months or so — continues, but battery lives just
> haven’t kept up.

Has everyone just given up now on the original Moore's Law about transistor
count, and just decided that the law is about computing power (per the David
House quote)?

~~~
yaantc
Moore's law is now commonly understood as referring to the best transistor
density we can achieve. But in the original paper, Moore was talking about the
density of the least-cost process. With the original meaning, Moore's law
doesn't necessarily stop when density stops increasing; it can also stop
because the new, finer process remains forever more expensive than the
previous one. With the old definition, some expect Moore's law to stop at
28nm. Some even say it already stopped at 40nm (which is still less expensive
than 28nm, so we'll have to see).

The difference matters because of economics. New fabs are always more
expensive, but up to now the resulting chips were both better and cheaper, so
everybody moved to the newer process eventually and the addressable market
also increased. If the original Moore's law stops due to increasing chip cost,
you can still get better performance, but now it's more expensive. As a
result, all the applications where performance is good enough and price
matters more (embedded applications, ...) will stay on the now cheaper, older
processes. So the new fabs will still be more expensive but will address a
shrinking high-performance market. This will have consequences.
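
A toy illustration of that economic argument (all numbers below are made up
for the example, not real foundry prices): if wafer cost rises faster than
density, cost per transistor stops falling and the incentive to shrink
disappears.

    # Hypothetical wafer cost and density for an older and a newer node,
    # to show when shrinking stops paying off.
    nodes = {
        "40nm": {"wafer_cost": 3000.0, "transistors_per_wafer": 4.0e12},
        "28nm": {"wafer_cost": 6500.0, "transistors_per_wafer": 8.0e12},
    }

    for name, n in nodes.items():
        cost = n["wafer_cost"] / n["transistors_per_wafer"] * 1e9
        print(f"{name}: ${cost:.2f} per billion transistors")
    # 40nm: $0.75, 28nm: $0.81 -- if the wafer more than doubles in price
    # while density only doubles, cost per transistor goes *up*, and
    # price-sensitive designs stay on the older process.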

~~~
Dylan16807
Was it actually about _density_? I always thought it was about transistors per
dollar. Sure, density is a very effective way of lowering costs, but the
maturation of technology at a specific process size will work just fine too.

------
Geee
Probabilistic chips will be the next generation of computing; it's great to
see initiatives on this front. PCMOS (I assume this is the same thing) itself
is patented, so I'm not sure how much that affects progress. A more
comprehensive article on the PCMOS technology (2009):
<http://phys.org/news153398964.html>

------
jfaucett
Interesting. I would have to say, though, that I can't really imagine how you
would begin programming an analog CPU. I'm assuming it is some type of neural
network, right (because of the mention that transistor states are not pure
on/off)? Any help here? It sounds fascinating, but I have never read anything
on the topic or worked on an analog CPU.

Also, how does analog use less energy (i.e. how can anything compute at a
reasonable speed without some sort of power draw)? Obviously, my PC has a
lot of power dissipation compared to any microcontroller on the market, which
according to the article is the main reason they're going analog. In the
standard case, I would go multi-core (for less dissipation) and need to
program multi-threaded. Are there any paradigms for analog? Or is this
something completely new (at least since the 50's)?

~~~
andreasvc
I don't have a picture of general analog programming, but a concrete example
is devising a strategy to hit some balls in a game of pool. Like in a neural
net, this can take the form of turning a few knobs until you get the right
result.
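
A toy sketch of that "turn knobs until it works" idea (purely illustrative,
with stand-in physics; the shot model is made up):

    # Tune two knobs (angle, power) by random perturbation until the shot
    # lands close enough to a target.
    import random

    def shot_error(angle, power, target=(3.0, 4.0)):
        # Stand-in physics: where the ball ends up for given knob settings.
        x = power * angle / 10.0
        y = power * (1.0 - angle / 10.0)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2

    angle, power = 5.0, 5.0
    best = shot_error(angle, power)
    for _ in range(10_000):
        a = angle + random.uniform(-0.1, 0.1)
        p = power + random.uniform(-0.1, 0.1)
        err = shot_error(a, p)
        if err < best:               # keep knob settings that improved things
            angle, power, best = a, p, err

    print(angle, power, best)        # converges toward a near-zero error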

The reasons it can require less energy include error tolerance, as mentioned
in the article. The foundation of digital computing is error correction, so
this trade-off could be the fundamental difference between the two.

------
option_greek
I can't believe that the power required for computing is significant when
compared with the power required for flight.

~~~
jws
I find myself unable to offer an opinion, let's find out.

NOTICE: maxs points out I missed a x1000 on the barrel count, the following is
off by three orders of magnitude:

In 2008 the world consumed 5269 barrels of jet fuel per day. 42 gallons in a
barrel, 6.8 pounds per gallon, 0.45kg/pound, kerosene has 43MJ/kg… that is 29
terajoules (or I made a mistake). [1] [2]

Spread 29TJ out over the day and that is 29TJ /24/60/60 and you get 337
megawatts (or I made a mistake, this feels low, but I'll go with the
calculation). [3]

Google's data centers drew 260 megawatts in 2011. [4]

So, there you go. Google's data centers alone use three quarters as much
energy as all jet airplanes combined.

EOM

[1] [http://www.indexmundi.com/energy.aspx?product=jet-
fuel&g...](http://www.indexmundi.com/energy.aspx?product=jet-
fuel&graph=consumption)

[2] <http://large.stanford.edu/courses/2010/ph240/glover2/>

[3] <http://en.wikipedia.org/wiki/Watt>

[4] [http://www.nytimes.com/2011/09/09/technology/google-
details-...](http://www.nytimes.com/2011/09/09/technology/google-details-and-
defends-its-use-of-electricity.html)

~~~
maxs
You're off by 3 orders of magnitude. 5269 barrels a day did sound too low to
me. In the link you cited the quantity is given in thousands of barrels, so
the daily use is 5,269,000 barrels.

~~~
jws
Thanks for finding that. The result seemed odd.

So 337 gigawatts for jet fuel instead of megawatts. That dwarfs Google, so
we'll need to compare to something bigger.

Let's convert to annual kilowatt-hours: 337 GW × 24 h × 365 = 2950 billion kWh.

In 2007 it was estimated that the total power consumption of the internet was
868 billion kWh. [1]

So now we have three times the energy in jet air transport as in powering the
internet. I have to say this feels better.

EOM

[1] <http://uclue.com/index.php?xq=724>
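
For anyone who wants to redo the arithmetic, here is the corrected calculation
as a small script (same figures quoted above in the thread, nothing new):

    # Corrected back-of-the-envelope: world jet fuel burn as average power.
    barrels_per_day = 5_269_000      # thousands-of-barrels correction applied
    gallons = barrels_per_day * 42
    kilograms = gallons * 6.8 * 0.45          # lb/gal, then kg/lb
    joules_per_day = kilograms * 43e6         # kerosene, ~43 MJ/kg
    watts = joules_per_day / (24 * 60 * 60)   # spread over a day
    kwh_per_year = watts * 24 * 365 / 1000
    print(f"{watts / 1e9:.0f} GW average")            # ~337 GW
    print(f"{kwh_per_year / 1e9:.0f} billion kWh/yr") # ~2950 billion kWh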

------
sroussey
When I was an EE major I distinctly remember designing an ALU and thinking
what a convoluted cluster f*** the whole thing was -- taking electron flows,
pouring them over specific transistor arrangements (transistors are not
binary, they are amplifiers) that take time to flow through for each binary
digit, and designing it so it had the right answer at the exact time of the
next clock pulse. I wanted to just take two signals, let them superimpose, and
let nature do the instant addition. Of course, the interface between that
analog signal and a clocked "digital" electron stream just wasn't possible on
the process of the day. Or today. Maybe someday in all-optical computing. Or
something else...

------
gizmo686
This is not what the article was talking about, but back in the 1800s, Charles
Babbage designed a completely mechanical computer. There are currently plans
to actually build it: <http://plan28.org/>

As a side note, this machine also doesn't work in binary, it works in decimal.
I think this is to reduce vertical space requirements.

~~~
andreasvc
To be clear, that means it's a digital computer.

------
logn
Not really described as the future of computing. It's targeted at applications
where a few incorrect bits are OK, like image processing on a drone. What if
they could add some redundancy or parity though so we could achieve 100%
accuracy?
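
For what it's worth, a quick sketch of how far simple redundancy gets you
(nothing from the article, just triple modular redundancy with majority voting
over an unreliable bit):

    # Compute the same bit three times on unreliable hardware and take the
    # majority vote.
    def majority(a, b, c):
        return (a + b + c) >= 2

    print(majority(1, 0, 1))                   # True: one flipped copy is outvoted

    p = 0.01                                   # per-computation error rate
    p_voted = 3 * p**2 * (1 - p) + p**3        # majority wrong needs >= 2 errors
    print(p_voted)                             # ~0.0003, much better than 0.01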

~~~
lurker14
100% accuracy does not exist in real-world systems. 99.99999% accuracy might
be good enough, though.

------
crististm
"brand-new way of doing computing without the digital " - Analog computers
have been around for 70 years+. They are based on OP-AMPS and, at that time,
were made out of tubes.

