Hacker News
DNA Computing (wikipedia.org)
11 points by WMCRUN on Dec 17, 2018 | 15 comments

Very interesting topic. Does anyone have any personal experience with it? If so, please share.

I work in the field (DNA nanotechnology), and a question I often get asked is, "When are DNA computers going to replace silicon-based computers?" The answer is that this is highly unlikely to happen. The two have their own strengths, drawbacks, and domains. For instance, DNA computing will probably never match the computation speed of silicon-based computing, since for DNA to compute, chemical reactions such as DNA hybridization or dissociation with complementary counterparts must occur (which is very slow compared to manipulating electron flow). The error rate using DNA is also pretty high: for DNA computing using double-crossover tiles (which are mathematically equivalent to Turing-universal Wang tiles) implementing an XOR logic cellular automaton, the best error rate is currently roughly on the order of ~0.1%. Two of the greatest strengths of DNA computation are its energy efficiency and massive parallelism: a microtube containing just 100 µl of DNA solution can have roughly 10^17 or 10^18 strands of DNA working in parallel. Lastly, it may be easier to get computing DNA nanomachines to work /in vivo/, or inside cells, than silicon-based nanomachines.
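(Aside: the XOR cellular automaton those DX tiles implement is, abstractly, elementary rule 90, where each new cell is the XOR of its two neighbors. This is just a toy software sketch of the logic the tiles compute, not the chemistry; iterating from a single seed cell grows the Sierpinski pattern mentioned further down the thread.)

```python
# Rule-90 / XOR cellular automaton: each new cell is the XOR of its two
# neighbors. This is the abstract logic a DX-tile assembly computes;
# iterating it from a single 1 grows a Sierpinski triangle.

def xor_step(row):
    # Cells outside the row are treated as 0.
    padded = [0] + row + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

def sierpinski(rows):
    row = [0] * rows + [1] + [0] * rows   # single seed cell in the middle
    lines = []
    for _ in range(rows):
        lines.append("".join("#" if c else "." for c in row))
        row = xor_step(row)
    return "\n".join(lines)

print(sierpinski(8))
```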

Can you expound on energy efficiency? How much energy does it take to chemically synthesize the dNTPs required to do the equivalent of moving a bit, and how does that compare to, say, moving a bit's electrons from SSD to CPU and back?

Good question. The second law of thermodynamics dictates a theoretical maximum of 3.4 x 10^20 (irreversible) operations per joule at 300 K (room temperature). There is an exemplary work that answers your question: in 1994, Len Adleman (the A in RSA) wrote a Science paper[0] in which he used DNA to solve a directed Hamiltonian path problem. In that work, he calculated that, in principle, 1 joule is sufficient for ~2 x 10^19 operations using DNA. This number is remarkable in that it is extremely close to the theoretical maximum. Supercomputers at the time the paper was written executed at most 10^9 operations per joule. He goes on to say that "the energy consumed during other parts of the molecular computation, such as oligonucleotide synthesis and PCR, should also be small in comparison to that consumed by current supercomputers".
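(That theoretical maximum is just the Landauer bound, kT ln 2 joules per irreversible bit operation; a quick back-of-the-envelope check:)

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer limit: minimum energy dissipated per irreversible bit operation
e_per_op = k_B * T * math.log(2)   # ~2.87e-21 J
ops_per_joule = 1.0 / e_per_op     # close to the ~3.4e20 ops/J quoted above

print(f"{ops_per_joule:.2e} ops/J")
```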

[0] L. Adleman, "Molecular Computation of Solutions to Combinatorial Problems", Science 266, 1022 (1994)
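(Adleman's trick was brute force made feasible by parallelism: vertices and edges are encoded as oligos, hybridization/ligation generates random paths massively in parallel, and wet-lab steps filter out non-solutions. Here's a toy sequential sketch of that generate-and-filter logic only; the graph and vertex labels are made up, not Adleman's.)

```python
from itertools import permutations

# Toy generate-and-filter version of Adleman's strategy. In the test tube,
# candidate paths are generated in parallel by ligation; here we simply
# enumerate orderings. The graph below is illustrative, not from the paper.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)}   # directed edges
n, start, end = 4, 0, 3

def hamiltonian_paths(edges, n, start, end):
    # Mirror the wet-lab filters: keep paths with the right start/end,
    # the right length (gel electrophoresis), and every vertex present
    # (affinity purification with per-vertex probes).
    paths = []
    for perm in permutations(range(n)):
        if perm[0] == start and perm[-1] == end and \
           all((a, b) in edges for a, b in zip(perm, perm[1:])):
            paths.append(perm)
    return paths

print(hamiltonian_paths(edges, n, start, end))
```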

Nowhere did I see you address the dNTPs.

Nucleic acid computation vs electronic computation is rather like traditional rocketry vs SpaceX: in traditional rocketry, you have to build your entire infrastructure and medium from raw materials each time you want to do a new launch.

Adleman is right that the process of oligo synthesis is energy-cheap and amortizable, but he's not a chemist: the dNTPs themselves are not. A nucleotide triphosphate is not an easy molecule to make, by virtue of its instability, and its usefulness derives from that instability.

I think you're confusing the energy efficiency of computation with the energy needed to create the elements which perform the computation. The initial comment refers to the former, whereas you're asking about the latter. Your question, "How much energy does it take to chemically synthesize the dNTPs required to do the equivalent of moving a bit..." is not an appropriate analogy, because synthesizing dNTPs (energy needed to create computing elements) is not analogous to "moving a bit" (energy needed to perform a computation). It's like comparing how much energy is needed to manufacture a hard disk with how much energy is needed to read and write to and from the hard disk. It's a meaningless comparison.

That's exactly correct. You only need to make a hard drive once, and it's good for tons of computation. Not so with DNA.

You only have to make DNA once. If you actually wanted to make a sensible comparison, then the question you should be asking is, "How much energy does it take to manufacture a silicon-based computing element (such as a transistor or CPU) as opposed to an analogous DNA computing element?" But again, this has no relevance to the computational energy efficiency of the element.

You only have to make DNA once per task, that's correct. But you don't need to build a new hard drive or GPU each time you want to do, say, a gigaflop's worth of an ML experiment; the hardware is reusable across tasks.

Also really fun would be writing unit tests to make sure the DNA algorithm you've programmed is actually correct.

Currently, DNA may not be a general-purpose computing element like the examples you have given, but again this is beside the point you're trying to make, namely the comparison between the energy efficiency of computation and the energy needed to create the elements which perform the computation. They are not comparable in any sensible way.

Tell that to United Launch Alliance.

You're welcome to do that yourself, if you're so inclined. I doubt they would take your argument seriously though since you're not making any sensible comparisons.

Probably not exactly what you asked for, but one of the few times I fell out of my chair in a colloquium was when Erik Winfree presented his Sierpinski gaskets grown from DNA.


This was the first paper to experimentally demonstrate the use of double-crossover (DX) DNA tiles to implement a 1D cellular automaton, producing Sierpinski patterns. Hence a new type of crystal, called an algorithmic crystal, was born. Here the error rate was between 1% and 10%. In subsequent work in collaboration with Erik Winfree, we're trying to reduce the error rate of these types of DNA computation models.
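(A per-tile error rate of 1–10% compounds quickly over a growing assembly, which is why error reduction matters so much for algorithmic self-assembly. A quick illustration; the tile counts below are made up for the sake of the arithmetic, not from the paper:)

```python
# Probability that an assembly of n tile-attachment steps is entirely
# error-free, given a per-step error rate p: (1 - p) ** n.
# Tile counts here are illustrative only.
def error_free(p, n):
    return (1 - p) ** n

for p in (0.10, 0.01, 0.001):
    print(f"p={p:<6} 100 tiles: {error_free(p, 100):.3e}   "
          f"1000 tiles: {error_free(p, 1000):.3e}")
```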
