
Building a Completely Reversible Computer - lainon
https://arxiv.org/abs/1702.08715
======
ajb
OT: when I click on this, researchgate claims "People who read this
publication also read 'Is minimal hepatic encephalopathy completely reversible
following liver transplantation?'"

I don't believe them.

~~~
yorwba
Maybe someone was searching for "completely reversible" and got intrigued? But
I guess a recommender system is the more likely explanation.

~~~
mahmud
Not a recommender system or anything that intelligent. This is basically a
keyword search of the paper titles. Even something as simple as term-frequency
analysis would place the two papers far apart.

------
ballenf
A couple questions:

1. How does "reversible" here apply to encryption algorithms? In a sense the
encryption key is forgotten, but with enough computing power it can be derived
from the output (at least for certain encryption methods). So, in a sense no
information is lost.

2. If two separate computers perform the same reversible operation, but one
in reverse, are the two computers (if considered as one system) consuming no
energy? Or must the operations be performed on the same computer sequentially?

If the system of two computers is valid, then does that mean that somehow
certain operations / calculations _produce_ energy and others _consume_ it?

I realize both these questions are somewhat nonsensical, so I clearly am
missing something fundamental about this theory.

~~~
johncolanduoni
Reversible (in the paper's sense) is more about the paths information takes
than whether it's recoverable in principle. Lossless compression is
reversible, but a reversible computer performing it would still need to have
storage equal to the size of the entire original data.

It's also important to note that the fact that you are doing calculations
isn't the _direct_ cause of energy consumption. Landauer's principle is a
consequence of the second law of thermodynamics, which mandates certain heat
losses in certain situations but doesn't tell you anything about how they
happen. The question of what actually causes the second law of thermodynamics
to hold at the level of fundamental interactions is called Loschmidt's
paradox, and there's not really a consensus as to whether we have an answer.

~~~
scottlocklin
" it would still need to have storage equal to the size of the entire original
data."

I think it's worse than this. You need the intermediate steps as well. So, for
every AND/OR-type calculation, which maps multiple inputs to one output, you
need to keep all the input bits.

Feynman has an interesting chapter on this in his lectures on computation,
including a cool billiard ball computer (a la Ed Fredkin I think).

------
cmrx64
See also a type system and programming language for reversible computations:
[http://gradworks.umi.com/35/87/3587675.html](http://gradworks.umi.com/35/87/3587675.html)

------
ArchReaper
What kind of applications would this actually be useful for? I don't think I
really understand the benefit(s) here.

~~~
Zaak
The benefit of a reversible computer is that there is no minimum amount of
energy it must dissipate as heat in order to perform its computations.

Every time a bit of information is erased, at least kT ln 2 of energy must be
dissipated as heat (Landauer's principle). A reversible computer doesn't (have
to) erase information.

Reversible computing is energy-aware computing taken to its logical extreme.
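
For a sense of scale, here's a rough back-of-the-envelope calculation of that
minimum (a sketch; the only assumption is room temperature, T ≈ 300 K):

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 300.0           # assumed room temperature, K

    # Landauer's principle: erasing one bit costs at least k_B * T * ln(2)
    e_bit = k_B * T * math.log(2)
    print(f"{e_bit:.3e} J per erased bit")  # ~2.9e-21 J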

~~~
ArchReaper
I get that. The concept of saving energy is what I do understand.

What I don't understand is what kind of applications this could actually be
used for. Given the (by-design) limitations of such a system, how would it be
actually usable? What real-world scenarios would want this and be able to
utilize it?

~~~
comicjk
Physics simulations. I work in drug design, and my field would benefit
enormously from this. Quantum mechanics calculations are naturally reversible.

~~~
johncolanduoni
The interior of the computation is reversible, but the measurement performed
at the end most definitely is not. If it were, quantum computers would be able
to solve NP-complete problems trivially, and complexity theorists would get
pissed that physicists were holding out on them for decades.

~~~
Entalpi
Can you expand on this?

~~~
johncolanduoni
Sure. I'll use the quantum circuit model of quantum computation since it
removes a lot of non-essential details.

A quantum circuit is analogous to an idealized digital one, in that it is
composed of gates with (possibly multiple) inputs and outputs, hooked together
by "wires" that "carry" quantum states. However the gates aren't able to
perform arbitrary operations; they can only multiply the inputs by unitary
matrices to produce the outputs. These kind of matrices are always invertible,
so none of the gates can "lose" any information the way some digital gates do.
You also can't hook one output to multiple inputs, though you can ignore an
output if you want.
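
As a concrete sketch (numpy, using the single-qubit Hadamard gate as an
illustrative example): a unitary matrix U satisfies U†U = I, so the input state
can always be recovered by applying the conjugate transpose:

    import numpy as np

    # Hadamard gate: a common single-qubit unitary
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    # Unitarity: H† H = I, so no information is lost by the gate
    assert np.allclose(H.conj().T @ H, np.eye(2))

    psi = np.array([1, 0])        # input state |0>
    out = H @ psi                 # apply the gate
    recovered = H.conj().T @ out  # invert by applying H†
    assert np.allclose(recovered, psi)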

However if you want to do some sort of computation whose output makes it into
your brain (or a classical computer) you need to measure one or more of the
output states. With digital circuits this is trivial; with quantum circuits
you have to decide how you want to make the measurement (essentially along
what "axis" between zero and one), and the output is in general probabilistic
based on the angle between the measurement axis and the actual value. You also
only get to do one measurement per output per instance of computation; the
original output state is replaced by the measured value (which will usually
differ from it), and all the intermediate states will already have been
consumed by the computation itself.
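
Roughly, in code (numpy again, purely illustrative): measuring a state
a|0> + b|1> in the computational basis gives 0 with probability |a|^2, and the
state collapses to whichever outcome you got:

    import numpy as np

    rng = np.random.default_rng()

    theta = np.pi / 6   # angle between the state and the measurement axis
    psi = np.array([np.cos(theta), np.sin(theta)])  # a|0> + b|1>

    p0 = abs(psi[0]) ** 2                   # probability of measuring 0
    outcome = 0 if rng.random() < p0 else 1
    psi = np.eye(2)[outcome]                # collapse; original amplitudes gone
    print(outcome, p0)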

This is why most quantum algorithms are probabilistic. Shor's algorithm (the
RSA breaking one) can actually fail even with an ideal quantum computer.
However the likelihood of failure is small enough that running the computation
a few times is enough to get the right answer with high probability (and it's
easy to check if the output is right).

You could take any computable function that takes in N bits and apply it to a
superposition of all 2^N possible inputs (encoded as N qubits) with about the
same level of effort it takes to apply it to a single classical input. However
there's no measurement which can recover the 2^N different outputs in one
shot, or find the input associated with a specific output value. Most of the
ingenuity in designing quantum algorithms comes in smuggling data out of
quantum states by cleverly causing interference to push the possible output
values into alignment with your measurement axis.

------
dkarapetyan
So reversible computing requires less energy, not more, right? Last I heard, it
is damn hard to forget information.

~~~
murbard2
In theory yes; the catch is that you're not allowed to forget information. For
instance, you cannot use an "AND" gate, because a result of 0 would be
ambiguous, but you can use the Toffoli gate [1], which takes three inputs and
produces three outputs, mapping (a,b,c) to (a,b,c^(a&b)).
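
A toy sketch of that mapping in Python; applying the gate twice gets you back
to the original inputs, which is what makes it reversible:

    def toffoli(a, b, c):
        # (a, b, c) -> (a, b, c XOR (a AND b)); a and b pass through unchanged
        return a, b, c ^ (a & b)

    # The gate is its own inverse: applying it twice restores the inputs
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                assert toffoli(*toffoli(a, b, c)) == (a, b, c)

    # With c fixed to 0, the third output is just a AND b
    print(toffoli(1, 1, 0))  # (1, 1, 1)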

But what happens to the outputs you don't need? They do need to go somewhere. A
reversible circuit ends up having a lot of unused outputs which need to be
erased if the circuit is to be reused. In essence, we have traded a heatsink
for a bitsink, but they are essentially the same thing.

I don't see reversible computing as categorically different from regular
computing. It's regular computing, with careful and detailed heat management.

[1]
[https://en.wikipedia.org/wiki/Toffoli_gate](https://en.wikipedia.org/wiki/Toffoli_gate)

~~~
_0ffh
I wonder if the "bitsink instead of heatsink" problem might be somewhat
mitigable.

So, we are just pushing our 0s and 1s around, instead of permanently
destroying and re-creating them. As reversible logic is as universal as, say,
NAND logic - wouldn't it be possible to sort the unused output bits of any
gates into two pools for 0s and 1s [1] and use these pools for any constant
inputs that are needed in the logic? So that we would only have to "create"
any new 0s and 1s when one of these two pools runs dry?

[1] E.g. a sorting buffer with 0s at one end and 1s at the other. Or does
sorting necessarily mean we are sorting unknown bits at the cost of mixing
known bits (constant gate inputs)?

~~~
hcs
Sorting is not reversible, at least if your output is no larger than your
input. [0,1] and [1,0] both sort to [0,1], so you'd already need 1 extra bit
to unsort 2 bits.

I don't see how it would help anyway, but maybe I'm misunderstanding your
suggestion. The state of these unwanted outputs needs to be kept around, if
you know they will be 0 or 1 then the state is already implicit in the machine
and the output isn't actually necessary.
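
For what it's worth, here's a toy sketch (Python, purely illustrative) of a
reversible 2-bit sort: the extra "swapped" bit is exactly the garbage output
that has to be kept around.

    def sort2_reversible(a, b):
        # Output (min, max, swapped); the extra bit records whether a swap happened
        swapped = 1 if a > b else 0
        return min(a, b), max(a, b), swapped

    def unsort2(lo, hi, swapped):
        # The extra bit lets us recover the original order
        return (hi, lo) if swapped else (lo, hi)

    for a in (0, 1):
        for b in (0, 1):
            assert unsort2(*sort2_reversible(a, b)) == (a, b)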

------
mrfusion
Are there limitations for a reversible computer? Could this run Firefox
someday?

~~~
marcosdumay
It is only capable of reversible operations. A completely reversible computer
could not interact with you.

This paper goes into detail about how you can irreversibly initialize such a
computer, run it, and then irreversibly read its results, and how much power
you can save doing that.

In practice, the computers people could build were always slow and inefficient,
nothing comparable to the computers we are used to (not even to typical
microcontrollers).

~~~
johncolanduoni
It's also worth noting that even low-power chip designers would laugh at
worrying about nkT losses from throwing bits away. Our technology is very far
away from the thermodynamic limit for irreversible computers.
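
Rough numbers, just to show the gap (the per-operation CMOS figure below is an
assumed ballpark, not a measured value):

    import math

    k_B = 1.380649e-23                 # Boltzmann constant, J/K
    T = 300.0                          # room temperature, K
    landauer = k_B * T * math.log(2)   # ~2.9e-21 J per erased bit

    cmos_per_op = 1e-15                # assumed ~1 fJ per logic operation (ballpark)
    print(f"ratio: {cmos_per_op / landauer:.0f}x above the Landauer limit")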

------
adrianratnapala
Here [https://arxiv.org/abs/1702.08715](https://arxiv.org/abs/1702.08715) is a
link to the same article that might be a little easier to use.

------
stevemk14ebr
what is meant by reversible?

~~~
SilasX
Logically (and therefore thermodynamically) reversible. By Landauer's
principle, the only reason a computer _must_ emit heat (and therefore use
energy) is to perform irreversible operations. Therefore, if you want a
computer that (potentially) uses the minimum energy, it needs to be reversible
in this sense.

An irreversible operation is anything that maps many (possible) states to the
same state. Setting a bit to zero is irreversible (because it destroys the
information about what bit was there previously). An AND gate is irreversible
(because both TF and FT map to F). A NOT gate is reversible (because you can
infer the input from the output.)
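
A quick way to see it (toy Python sketch): a gate is reversible exactly when no
two distinct inputs map to the same output.

    from itertools import product

    def is_reversible(gate, n_inputs):
        # Reversible iff no two distinct inputs produce the same output
        outputs = [gate(*bits) for bits in product((0, 1), repeat=n_inputs)]
        return len(outputs) == len(set(outputs))

    AND = lambda a, b: (a & b,)   # 2 inputs -> 1 output
    NOT = lambda a: (1 - a,)      # 1 input  -> 1 output

    print(is_reversible(AND, 2))  # False: (1,0) and (0,1) both give (0,)
    print(is_reversible(NOT, 1))  # True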

There are universal reversible gates (Toffoli and Fredkin gates are examples,
but there are many others), so you can (theoretically) build a (usable)
computer whose logical operations are completely reversible.

~~~
sadgit
Does this imply that physical entropy and informational entropy are the same?

Does it imply the computer would not give off heat? Or that a significant
amount of the heat generated by a computer is due to information loss?

~~~
SilasX
>Does this imply that physical entropy and informational entropy are the same?

It does, and there is a very strong (but not universally accepted) case that
they are the same thing:
[https://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_...](https://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_information_theory)

Great lay article about it:
[http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_...](http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/)

(Side note: I've long thought you can derive Carnot efficiency based on
information-theoretic arguments about the information contained in knowing
only the temperature difference between a heat source and a sink.)

>Does it imply the computer would not give off heat? Or that a significant
amount of the heat generated by a computer is due to information loss?

Basically, yes. You can't avoid the heat emission from setting a 0 to a 1 on
that computer's storage medium, but you would avoid the loss from any of the
intermediate computations that typically make CPUs so hot.

~~~
socmag
I'm not the person who was asking, but thanks for the insightful comments.
Makes sense.

Wouldn't a reversible computer need to take up an exponentially large amount of
physical space as it executes?

That might work for small problems but it doesn't seem like it would scale. Or
rather, it would scale to fill the known universe.

Sorry I haven't really dug into the attached articles yet, but I will.

~~~
SilasX
See the thread about the asymptotic penalties:
[https://news.ycombinator.com/item?id=14151390](https://news.ycombinator.com/item?id=14151390)

------
openasocket
What are the limitations to a completely reversible computer? I'm guessing
it's still Turing complete, but do you pay a cost in terms of asymptotic
complexity? I'm guessing you can make a program reversible trivially by doing
something like making everything immutable and only ever appending new data,
but of course that's extremely inefficient in terms of memory. Of course, I
might be missing the core concept here.

~~~
SilasX
AIUI, it's just a linear penalty because, at worst, you have to run each
computation backwards to "uncompute" it.

That's under the (necessary) assumption that you don't even try to avoid
erasure operations that are needed for their own sake rather than as an
intermediate step.
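
A toy sketch of the structure (Python, purely illustrative): run the
computation forward in reversible steps, copy the answer out by XOR-ing it into
a fresh zero bit, then run the same steps backwards so the scratch space
returns to zero instead of having to be erased.

    def cnot(control, target):
        # Reversible: target ^= control; applying it again undoes it
        return control, target ^ control

    def parity_with_uncompute(bits):
        scratch = 0
        # Forward pass: accumulate the parity into a scratch bit (reversible steps)
        for b in bits:
            _, scratch = cnot(b, scratch)

        # Copy the answer out reversibly (XOR into a fresh zeroed bit)
        result = 0 ^ scratch

        # Backward pass: undo the forward steps, returning scratch to 0
        for b in reversed(bits):
            _, scratch = cnot(b, scratch)
        assert scratch == 0   # no leftover garbage to erase

        return result

    print(parity_with_uncompute([1, 0, 1, 1]))  # 1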

~~~
aparent
It is a bit more complicated than that. The only penalty to time is a constant
factor if you are willing to use space proportional to the computation time. I
did a bit of work on how to do time-space trade-offs in reversible circuits in
my master's thesis:
[http://hdl.handle.net/10012/10949](http://hdl.handle.net/10012/10949).

------
karmicthreat
So does this mean you can move the entropy cost around? For stacked or 3D
computing, moving the heat cost to directly heatsinked surfaces would be
useful.

------
brudgers
Direct link to paper at arXiv,
[https://arxiv.org/pdf/1702.08715.pdf](https://arxiv.org/pdf/1702.08715.pdf)

~~~
tzs
Link to the abstract:
[https://arxiv.org/abs/1702.08715](https://arxiv.org/abs/1702.08715)

Generally, it is better to link to the abstract than to the PDF. If the paper
gets revised, the abstract page's PDF link will point to the latest version of
the paper (with links available to prior revisions). The direct PDF link will
remain stuck with the old version.

~~~
brudgers
Because I find the abstracts of readable, clear, and interesting papers are
often indistinguishable from those of unreadable and dull papers, I prefer the
paper itself to its abstract...at least in terms of Hacker News...it has been a
long time since I read scholarly papers for academic purposes. If someone wants
to read the abstract, it's right there at the beginning of the paper anyway.

~~~
seanmcdirmid
Abstracts are short enough that they can be inlined into a hackernews comment
directly:

> A critical analysis of the feasibility of reversible computing is performed.
> The key question is: Is it possible to build a completely reversible
> computer? A closer look into the internal aspects of the reversible
> computing as well as the external constraints such as the second law of
> thermodynamics has demonstrated that several difficulties would have to be
> solved before reversible computer is being built. It is shown that a
> conventional reversible computer would require energy for setting up the
> reversible inputs from irreversible signals, for the reading out of the
> reversible outputs, for the transport of the information between logic
> elements and finally for the control signals that will require more energy
> dissipating into the environment. A loose bound on the minimum amount of
> energy required to be dissipated during the physical implementation of a
> reversible computer is obtained and a generalization of the principles for
> reversible computing is provided.

It can save readers a click in deciding if they want to go for the whole paper
or not. Think of the abstract as an upper bound on whether the paper is
interesting or not. A paper could possibly be less interesting than its
abstract; it is much less likely to be more interesting than its abstract
unless the author really messed up.

~~~
brudgers
Personally, I usually click on links based on the title more often than based
on the comments. But I know other people have different habits. With a direct
link to the PDF, the abstract is at the top of the PDF for anyone who wants to
read it.

For me personally, if I have the paper, I skip the abstract and read the
introduction because it is more interesting and I don't feel a duty to bore
myself reading the abstract. Here, I thought the title was interesting enough
to click on the link and the paper interesting enough to provide a link to the
less noisy original.

