
Optical Computer Prototype - fenrissan
http://optalysys.com/technology/achievements-date/
======
mechagodzilla
This appears to basically be an analog optical computer - it inputs some 2D
data using an LCD, puts it through a bunch of optical transforms, and then
captures the output with a camera. This probably does have some kind of
obscure use cases where it makes sense, but it's very hard to compete with
modern silicon for raw compute using some kind of hybrid method - moving data
in and out of the optical part is just painful compared to keeping it all
digital/electrical. The main problem is that computers today _aren't_
actually slow =)

~~~
imaginenore
Today's computers are very slow. Try modeling something complex, let's say 1
billion water molecules interacting. Then realize that 1 billion molecules is
orders of orders of magnitude far from modeling a cup of water.
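A rough back-of-the-envelope check of that gap (a sketch, assuming a 250 ml cup of water; Avogadro's number and the molar mass of water are the only inputs):

```python
# Back-of-the-envelope: how many water molecules are in a cup,
# and how far short of that is a 1-billion-molecule simulation?
AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS_WATER = 18.015  # grams per mole
cup_grams = 250.0          # ~250 ml of water weighs ~250 g

molecules_in_cup = cup_grams / MOLAR_MASS_WATER * AVOGADRO
simulated = 1e9            # 1 billion molecules

print(f"molecules in a cup: {molecules_in_cup:.2e}")              # ~8.4e24
print(f"shortfall factor:   {molecules_in_cup / simulated:.2e}")  # ~8.4e15
```

So a billion-molecule run is short of a cup of water by roughly sixteen orders of magnitude.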

~~~
jjoonathan
> 1 billion molecules is orders of orders of magnitude far

Understatement. There are 50 trillion atoms in a cell, 50 trillion cells in a
human body (give or take an order of magnitude or two for definitions and
caveats). Furthermore, big swaths of chemistry/biochemistry are inherently
quantum mechanical (classical mech + E&M doesn't explain why molecules snap
into little geometric shapes, let alone how those shapes interact) which has
god-awful asymptotic complexity on account of the "present state" of the
system (wavefunction) being a probability for each possible configuration of
the system rather than a description of a single configuration.

A purpose-built silicon supercomputer will struggle to simulate a single small
protein using classical-mechanics approximations for a millisecond (and there
are millions of those per cell and trillions of cells per body). There's a
_lot_ of room for improvement.
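To put that "god-awful asymptotic complexity" in numbers: the wavefunction of n two-level quantum systems has 2^n complex amplitudes, so classical memory requirements blow up exponentially (a minimal sketch, assuming idealized qubit-like two-state sites and 16 bytes per complex amplitude):

```python
# State-space size for n two-level quantum systems: 2**n complex amplitudes.
# At 16 bytes per complex number, the memory needed grows exponentially.
for n in (10, 50, 300):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * 16
    print(f"n={n:>3}: {amplitudes:.3e} amplitudes, ~{bytes_needed:.3e} bytes")
```

Already at n=50 the state vector needs ~18 petabytes; at n=300 it exceeds the number of atoms in the observable universe, which is why classical-mechanics approximations are used at all.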

~~~
mechagodzilla
Right, but they didn't build an optical computer that is even close to
competitive with something like this: [http://www.hotchips.org/wp-content/uploads/hc_archives/hc26/...](http://www.hotchips.org/wp-content/uploads/hc_archives/hc26/HC26-11-day1-epub/HC26.11-1-High-Performance-epub/HC26.11.130-Anton-2-Butts-Shaw-Shaw-Res-Search.pdf)

------
sounds
I think the submission here on HN has overdone it on the title.

It doesn't compute "flops" like a traditional computer. The relevant text from
the article:

"The prototype achieves a processing speed equivalent to 320 Gflops and it is
incredibly energy efficient as it uses low-powered, cost effective
components."

I am as interested as anybody in switching out for photons instead of
electrons/holes. But please use the original title, unless it is misleading or
linkbait.

~~~
dang
Ok, we took "320 Gflop" out of the title. If anyone suggests a better title we
can change it again.

------
mentos
If photons and electrons travel at the same speed am I right in thinking that
the benefits of optical computing would be limited to parallel processing?

~~~
dogma1138
Electrons have mass, so they can't move at the speed of light. Photons also
have other nice properties, such as wavelength, which open up a whole suite of
possibilities, e.g. a logic gate that can operate in different modes based on
the wavelength and polarization of the light.

While this is technically possible with electronics as well, by setting
different voltage levels, it's much more effective with photonic computing and
doesn't increase the complexity of your base components as much.

The current designs for a photonic computer are also much more parallel: most
of them are basically layers of LEDs and detectors with a very fast LCD matrix
which serves as a mask between them. If you have a 256x256 pixel screen you
can perform an operation on 65536 bits in a single clock, and if you stack
them up you basically gain an order of magnitude with each layer; this isn't
something you could ever achieve with current solid-state electronics.

~~~
mentos
What fraction of the speed of light are electrons moving at in the most
advanced silicon chip we have? I'm trying to understand how much speed is left
on the table for us to pick up in serial processing.

~~~
dogma1138
It's not that easy to define, because we're talking about semiconductors after
all ;)

They don't move at a constant speed like, say, in a conductor (which isn't
really the case either, because the electric field causes resistance, e.g.
eddy currents, so let's say a superconductor).

You have quite a few concepts with semiconductors, primarily saturation
velocity (which in most cases is the peak velocity, but not necessarily
attained in actual operation), which is also affected by the drift velocity
due to any electric fields in your components.

[https://en.wikipedia.org/wiki/Saturation_velocity](https://en.wikipedia.org/wiki/Saturation_velocity)

Light speed in vacuum is 29979245800 cm/s. Saturation velocity in Si-based
semiconductors is 10000000 cm/s.

So there's quite a bit of difference there ;)
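The ratio works out as follows (both figures are the ones from the comment above):

```python
# Ratio of vacuum light speed to electron saturation velocity in silicon.
c_vacuum = 2.9979245800e10  # cm/s
v_sat_si = 1.0e7            # cm/s (order-of-magnitude figure for Si)
print(f"{c_vacuum / v_sat_si:.1f}")  # 2997.9
```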

~~~
mentos
So does this imply an opportunity to improve computing speed by a factor of ~2997?

~~~
dogma1138
No, because the speed of the components with the current designs is still
limited by the switching speed of the masking matrix. And in any case you will
still need to convert photons back to electrons twice. The speed of electrons
(or EM wave propagation) vs. photons isn't the reason photonic computing might
be better; the ability to encode more data into the photons, and a much more
power-efficient and inherently parallel design, are.

Also, because light has a much shorter wavelength than the electromagnetic
waves "electricity" forms in conductors (nanometers vs. anywhere from
centimeters up to kilometers, depending on frequency), you can get much higher
throughput with light even if you only use it to replace the carrier wires in
electronics (which is what Intel is working on: they want to keep electronic
switching but make all the carriers photonic).

[https://en.wikipedia.org/wiki/Speed_of_electricity](https://en.wikipedia.org/wiki/Speed_of_electricity)
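The wavelength comparison can be made concrete with λ = c/f (the specific frequencies below are illustrative assumptions, not from the comment):

```python
# Wavelength lambda = c / f, in meters, for representative signals.
C = 2.99792458e8  # speed of light in vacuum, m/s

examples = {
    "50 Hz mains power": 50.0,          # ~6000 km wavelength
    "1 GHz digital clock": 1.0e9,       # ~30 cm wavelength
    "193 THz telecom laser": 1.93e14,   # ~1550 nm wavelength
}
for name, freq in examples.items():
    print(f"{name}: wavelength = {C / freq:.3e} m")
```

Shorter carrier wavelengths allow proportionally more signal bandwidth, which is the throughput argument being made here.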

------
reilly3000
The better we understand the effects of EM radiation on all kinds of life, the
more important this technology will become.

------
programmer_dude
Isn't 320 GFLOPs kinda slow? An NVIDIA GeForce GTX Titan Z can crank out 8122
SP GFLOPs for comparison.

~~~
baobabaobab
That's the proof of concept prototype. If they achieve their stated targets,
it would be a leap in computing capability.

[http://www.hpcwire.com/2014/08/06/exascale-breakthrough-weve...](http://www.hpcwire.com/2014/08/06/exascale-breakthrough-weve-waiting/)

"The analysis unit works in tandem with a traditional supercomputer. Initial
models will start at 1.32 petaflops and will ramp up to 300 petaflops by 2020.

The Optalysys Optical Solver Supercomputer will initially offer 9 petaflops of
compute power, increasing to 17.1 exaflops by 2020."

------
mcnamaratw
Oh no, not this stuff again.

