
A scalable photonic computer solving the subset sum problem - fortran77
https://advances.sciencemag.org/content/6/5/eaay5853
======
anderskaseorg
The problem with using a classical computer to solve an NP-hard problem is
that as the problem size increases, in the worst case, as far as we know, the
running time increases exponentially.

The problem with using a photonic computer to solve an NP-hard problem is that
as the problem size increases, the amount of light at the output node drops
exponentially. Even under the assumption that you can get rid of all
background noise, that means you need to run your detector for longer so as to
be able to detect the output photons, and…the running time increases
exponentially.

Perhaps there are interesting quantum effects that could be exploited, but the
authors specifically note that they aren’t doing that.

So this is a neat engineering project but an almost certainly irrelevant
computational project.

~~~
scythe
Figure 3 does indeed show an exponential time complexity, but the base of the
exponential is much smaller than for a classical computer. That is an
asymptotic improvement at least.

~~~
anderskaseorg
That figure is extrapolated based on the longest photon path length through
the device. It isn't based on real experimental results (the device was
demonstrated with n = 4), and it does not take into account the exponential
quantity of photons you'd need to pump through the device before getting a
measurable result.

Fundamentally, there's nothing this device is doing that a classical computer
couldn't, so there's no reason the exponential base should be any different
under physically realizable conditions.
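To make that concrete: enumerating every achievable subset sum is exactly what a classical brute-force search does, and (per the paper) what the waveguide network does in parallel by splitting light down every path. A minimal sketch in Python, with the function name and sample numbers my own:

```python
from itertools import combinations

def all_subset_sums(nums):
    """Enumerate every achievable subset sum by checking all 2^n
    subsets -- the classical analogue of splitting light down
    every path through the waveguide network."""
    sums = set()
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            sums.add(sum(combo))
    return sums

# A 4-element instance, matching the n = 4 scale demonstrated
# in the paper (the specific numbers here are made up):
print(sorted(all_subset_sums([2, 5, 7, 9])))
```

Either way, the work grows as 2^n: the classical machine spends exponential time, the photonic one spreads a fixed amount of light over exponentially many paths.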

------
praptak
Subset sum is only _weakly_ NP-hard. It has a polynomial solution not in the
size of the input but rather in the value of the numbers (dynamic
programming). Since numbers are encoded logarithmically, this algorithm is
exponential in the size of the input.

So, solving subset sum is only impressive if you can solve it for very large
numbers.
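The dynamic programming approach mentioned above can be sketched in a few lines of Python; it runs in O(n * target) time, which is polynomial in the value of the target but exponential in its bit length:

```python
def subset_sum(nums, target):
    """Pseudo-polynomial dynamic programming for subset sum.
    Tracks the set of reachable sums <= target; O(n * target)
    time, polynomial in the *value* of target but exponential
    in the number of bits used to encode it."""
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5
```

So for instances with small numbers this is fast on an ordinary CPU, which is why only large-number instances are a meaningful benchmark.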

------
shireboy
I actually have this problem in a work project: matching 1-n point of sale
transactions, credit card payments, etc. to 1-n bank transactions for large
companies. I.e., “these three credit card payments totaling 300 match this bank
transfer for 290 2 days later.” Can confirm it’s difficult and would love any
creative solutions.

~~~
anderskaseorg
You're looking for a mixed integer programming solver. Although the problem is
NP-hard, in practice, state-of-the-art MIP solvers run much faster than
brute-force search in most non-pathological cases.
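The 0/1 formulation behind that suggestion can be illustrated with a toy brute-force version: a binary variable per payment, minimizing the mismatch against the bank amount within a tolerance. A real MIP solver (CBC, Gurobi, etc.) replaces the 2^n loop with branch-and-bound; the function name and figures below are hypothetical, loosely echoing the 300-vs-290 example in the parent comment:

```python
from itertools import product

def best_match(payments, bank_amount, tolerance=0):
    """Toy 0/1 formulation of the reconciliation problem:
    pick x_i in {0, 1} for each payment so that
    sum(x_i * payments[i]) lands within `tolerance` of the
    bank transfer, minimizing the absolute mismatch.
    Returns (mismatch, chosen x vector) or None."""
    best = None
    for choice in product([0, 1], repeat=len(payments)):
        total = sum(x * p for x, p in zip(choice, payments))
        gap = abs(total - bank_amount)
        if gap <= tolerance and (best is None or gap < best[0]):
            best = (gap, choice)
    return best

# Three card payments totaling 300 against a 290 transfer:
print(best_match([120, 80, 100], 290, tolerance=10))
```

The same objective and constraints translate directly into any MIP modeling layer; the solver then handles the combinatorics.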

~~~
ampdepolymerase
Are SMT solvers insufficient?

~~~
Hercuros
Mixed integer programming is a more specialized task than SMT solving,
therefore an MIP implementation is likely to be more optimized for the task,
even if you might be able to use an SMT solver to solve the same problem.
Moreover, SMT solvers are focused on finding _some_ solution to some
constraints, whereas MIP solvers are focused on finding the _best_ solution to
the constraints according to some linear objective. Finally, an SMT solver is
basically a SAT solver combined with a theory solver for reasoning about
integers, etc. If the problem can be expressed just as a conjunction of
constraints (i.e., it has no interesting propositional structure), then you
don’t really need the SAT solver part and can just use a specialized theory
solver (optimizer, in this case) directly.

------
grej
Very cool. The problem reminds me of Kadane's algorithm, a dynamic programming
approach that solves the maximum subarray sum problem in linear time.
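For reference, Kadane's algorithm in a few lines of Python. Note it finds the best *contiguous* subarray, which is what makes linear time possible; subset sum, where any combination of elements is allowed, doesn't admit this trick:

```python
def kadane(nums):
    """Kadane's algorithm: maximum sum over all contiguous
    subarrays, in O(n) time and O(1) extra space."""
    best = cur = nums[0]
    for x in nums[1:]:
        # Either extend the running subarray or start fresh at x.
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

print(kadane([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```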

------
peter_d_sherman
Excerpt:

Photonic chip fabrication

"Waveguide networks in three-dimensional architecture were written by the
femtosecond laser with a repetition rate of 1 MHz, a central wavelength of 513
nm, a pulse duration of 290 fs, and a pulse energy of 190 nJ. Before radiating
into the borosilicate substrate at a depth of 170 μm, the laser beam was
shaped by a cylindrical lens and then focused by a 100× objective with a
numerical aperture of 0.7. During the fabrication, the translational stage
moved in x, y, and z directions according to the user-defined program at a
constant speed of 15 mm/s. The careful measurements and characterization on
the geometric parameter dependence of the three types of junction, such as
coupling length, coupling distance, decoupling distance, and curvature, were
taken to optimize the performance to form the standard elements."

------
Gatsky
I have to say, this stuff is really cool. Maybe one day we’ll get good enough
at shifting atoms around so that custom computing substrates can be built much
faster and cheaper.

------
deepnotderp
This is basically the same problem as with the DNA computing and memcomputing
proposals: it's kicking the exponential cost down the road.

