Photonic technology is new and not as readily available - for now.
If it turns out to be a way of minting money, demand will rise, and the competitive process of PoW will surely result in every photonic-miner owner trying to build a bigger, faster farm, just like all the others, and then we're right back to energy.
For photonics the capex-to-opex ratio changes significantly due to fundamental physical properties of analog computation, photons vs. electrons, and so on.
Anyhow, the process of Si photonics (and memristors/other analog approaches to AI) maturing to the same degree as digital ASICs will likely be long and gradual, so this isn't a practical issue for the next decade.
It will take many years for 1GB of on-chip cache to be common, but when that happens, hopefully commodity computing devices can support somewhat efficient Cuckoo mining...
> This leads to the selection of a hybrid design that composes digital hashing with low precision vector-matrix multiplication (intended for photonic acceleration) to produce HeavyHash. HeavyHash is an iterated composition of an existing hash function, i.e. SHA256, and a weighting function such that the cost of evaluation of HeavyHash is dominated by the computing of the weighting function.
What is the weighting function? How do we verify that the result is valid? There have been other attempts to make proof of work more capex-sensitive (especially memory-hard variants, like birthday-paradox schemes), but they all end up suffering from the fact that being able to verify the result means you can also brute-force the outcome, and often that tradeoff works out.
Without knowing the specifics it's very hard to say whether this particular proof-of-work algorithm rules out an energy-inefficient brute-force approach that would end up making the energy problem just as bad. My intuition is that of course this won't work, as a matter of "no free lunch": the cost to secure a coin will be equal to the value of keeping it secure.
I guess we'll have to wait:
> Beyond these intuitions, the specifics of the algorithm and a detailed proof of its security will be published in a separate manuscript. 
>  Michael Dubrovsky and Marshall Ball. Towards optical proof of work; oPoW. Unpublished Manuscript, 2019.
No; you cannot brute-force the outcome in any realistic sense. For example, a Cuckoo Cycle proof consists of 42 n-bit indices of edges that together form a cycle in a random bipartite graph on 2^n+2^n nodes, with typically n >= 29. Brute-forcing over all possible size-42 subsets of 2^n indices will take well beyond the heat death of the universe. It's way easier to brute-force the 256-bit private keys of all bitcoin balances...
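A quick back-of-the-envelope check of that claim, as a minimal Python sketch assuming the typical parameters above (n = 29, 42-edge cycles):

```python
# Rough count of candidate 42-edge subsets for Cuckoo Cycle with n = 29.
from math import comb, log2

n = 29          # edge-index width in bits (typical parameter)
cycle_len = 42  # proof size: 42 edge indices

subsets = comb(2**n, cycle_len)
print(f"~2^{log2(subsets):.0f} candidate subsets")  # roughly 2^1048, dwarfing the 2^256 keyspace
```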
However, it's easier to make it work on top of a more radical shift in hardware. At the basic level, we are just using simple random matrix-vector mults. Of course, the photonics or other analog low-energy approaches have to win in the market for this operation, and that will be tested empirically (though there has been a ton of investment into this kind of processing going analog as we discuss in the paper).
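To make the "digital hashing composed with low-precision matrix-vector multiplication" idea concrete, here is a toy sketch in Python. To be clear, this is not the actual HeavyHash construction (the spec and its security argument are unpublished as of this thread); the nibble unpacking, the 64x64 matrix, and the seeding are all arbitrary choices made only for illustration.

```python
# Toy "hash -> low-precision matrix-vector multiply -> hash" pipeline.
# Everything below (nibble encoding, matrix size, RNG seed) is illustrative only.
import hashlib
import numpy as np

def toy_heavyhash(header: bytes, seed: int = 0) -> bytes:
    digest = hashlib.sha256(header).digest()

    # Unpack the 32-byte digest into 64 low-precision (4-bit) values.
    v = np.array([(b >> 4, b & 0xF) for b in digest], dtype=np.uint16).flatten()

    # Pseudorandom low-precision weighting matrix; on a photonic/analog
    # accelerator this matrix-vector product would be the low-energy,
    # hardware-heavy step that dominates the cost.
    rng = np.random.default_rng(seed)
    M = rng.integers(0, 16, size=(64, 64), dtype=np.uint16)

    w = M @ v  # the "weighting" step

    return hashlib.sha256(digest + w.astype('<u2').tobytes()).digest()

print(toy_heavyhash(b"example block header").hex())
```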
Right now the barrier to entry is relatively low. Miners just need cheap electricity and off-the-shelf GPUs to get started. It's also well documented how to get started building your own mining cluster.
The only way to fix it is to introduce graduated electricity rates that increase as more electricity is used.
Have you looked into mining Bitcoin with GPUs lately? You would need free GPUs and free electricity to justify the effort to set it up...
BTC is only profitable with (a) ASICs and (b) a very cheap energy source.
Same is increasingly true for other coins.
" In this paper, we propose a solution to the double-spending problem using a peer-to-peer distributed timestamp server to generate computational proof of the chronological order of transactions."
Everything else in Bitcoin is just turning that timestamp server into a practical(ish) system.
In fact, if node clocks are not synchronized, it can cause significant problems and vulnerabilities. If timestamps run too fast, the difficulty adjustment algorithm will think blocks are being mined more slowly than they really are and decrease the difficulty.
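A sketch of a Bitcoin-style retarget to illustrate the effect (the window length and 4x clamp are Bitcoin's; other chains use different rules):

```python
# If inflated timestamps make a retarget window look longer than it really
# was, difficulty goes down. Bitcoin clamps the observed/expected ratio to 4x.
def retarget_difficulty(old_difficulty: float,
                        expected_timespan_s: int,
                        observed_timespan_s: int) -> float:
    ratio = max(0.25, min(4.0, observed_timespan_s / expected_timespan_s))
    return old_difficulty / ratio

# 2016 blocks * 600 s = 1,209,600 s expected; clocks running ~10% fast
# over the window cut difficulty by ~10%.
print(retarget_difficulty(100.0, 1_209_600, 1_330_560))  # ~90.9
```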
Really, the timestamp field in most PoW systems’ “block” structs (Bitcoin’s, Ethereum’s, etc.) is just defined as “a number that is higher than the one in the parent block, and not so high that, interpreted as a POSIX timestamp, it would land 30+ seconds in the future relative to the local node’s clock.” So you just need >50% of the nodes to have a ±30s clock sync in order to agree on which blocks are valid for consideration; and even if you don’t have that level of sync, those blocks will still become valid eventually, once they’re old enough that all the nodes consider them to be in the past. (And most PoW systems keep near-“future” blocks around until they’re valid, for just such a case.)
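The acceptance rule described above, as a minimal sketch (the 30-second window is the figure from the comment; real chains use their own bounds, e.g. Bitcoin tolerates roughly 2 hours of future drift and compares against a median of recent block times rather than just the parent):

```python
import time

MAX_FUTURE_DRIFT = 30  # seconds; the figure used in the comment above

def timestamp_acceptable(block_ts: int, parent_ts: int, now=None) -> bool:
    """Higher than the parent's timestamp, and not too far into our local future."""
    now = time.time() if now is None else now
    return parent_ts < block_ts <= now + MAX_FUTURE_DRIFT

# A "too far in the future" block isn't invalid forever: a node can hold on
# to it and re-check once its own clock has caught up.
```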
Building the hardware could maybe be optimized, but it is unlikely to be energy efficient or environmentally friendly. At least with energy you can address the pollution at its point source, and choose sources that are renewable or environmentally friendly.
This scheme seems to be robbing Peter to pay Paul.
There's much less embodied energy in $1 of chip (especially cutting edge HW where you are covering R&D) than in $1 of energy.
Also, access to capital is much better distributed than access to huge quantities of discounted power, and once hardware is purchased it's portable and compact. Much better for decentralization.
But while they've demonstrated a low-energy way of computing an equivalent hash, presumably this is in no way currently competitive. Therefore this proof-of-concept itself is not an example of an algorithm with capex-dominated costs.
Given that capex versus opex is primarily a matter of accounting (i.e. do I buy a PC, or do I rent a VM from AWS?), I don't understand how that algorithmic distinction can even be achieved. If the ongoing cost of running the device becomes negligible, then you just incentivise the miners to "spend the saving" by buying more mining devices up-front.
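That incentive argument in toy-model form, assuming rational miners keep adding spend until total cost roughly matches the block reward (all numbers here are made up for illustration):

```python
# Miners add spend (amortized capex + opex) until cost ~= reward, so cutting
# opex per device mostly shifts the same total spend into more devices.
def equilibrium_devices(reward_per_day: float,
                        capex_per_device_per_day: float,
                        opex_per_device_per_day: float) -> float:
    return reward_per_day / (capex_per_device_per_day + opex_per_device_per_day)

print(equilibrium_devices(1_000_000, 8.0, 2.0))  # 100000.0 devices
print(equilibrium_devices(1_000_000, 8.0, 0.2))  # ~121951 devices: same total spend, more hardware
```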
Also, the degree of mining centralization is widely believed to have a substantial impact on the price. Miners may believe that if they were to purchase enough mining power to control a majority, the price they could sell the tokens for would go down, actually reducing their profit. Whether this would actually reduce their profit, I don’t know. Also, conceivably, if they were to buy more mining power, their main competitors might respond in kind, resulting in the same income but higher costs.
By a similar argument to “avoid there being a majority”, one might also want to avoid the case where, if just one large miner dropped out, there would be a majority, assuming people think there is a non-trivial chance of such a miner dropping out.
There is, as I understand it, a relatively small collection of large miners that together would comprise a majority. Perhaps this is around the smallest number of independent entities that people expect is large enough that they will not collude to do bad stuff, and therefore no miner will buy enough additional mining power to cause this number to shrink, out of fear of making the price go down?
I don’t know, these are just some ideas.