If we want bio computing to take off, somehow we need an IDE for this stuff. I thought following install instructions for typical software was hard; check this out:
> BioCoder, a C++ library that enables biologists to express the exact steps needed to execute a protocol. In addition to being suitable for automation, BioCoder converts the code into a readable, English-language description for use by biologists.
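The core idea is "protocols as code". Here's a minimal sketch in Python of the general style - the class and method names below are invented for illustration and are not BioCoder's actual C++ API:

    # Illustrative sketch of the "protocol as code" idea.  This is NOT
    # BioCoder's real interface; the names here are invented for the example.

    class Protocol:
        def __init__(self, name):
            self.name = name
            self.steps = []  # each step is stored as a human-readable sentence

        def mix(self, a, b, into):
            self.steps.append(f"Mix {a} and {b} in {into}.")

        def incubate(self, sample, temp_c, minutes):
            self.steps.append(f"Incubate {sample} at {temp_c} C for {minutes} min.")

        def centrifuge(self, sample, rpm, minutes):
            self.steps.append(f"Centrifuge {sample} at {rpm} rpm for {minutes} min.")

        def render(self):
            # The same step list could just as easily be emitted as
            # machine instructions for a liquid-handling robot.
            lines = [f"Protocol: {self.name}"]
            lines += [f"{i}. {s}" for i, s in enumerate(self.steps, 1)]
            return "\n".join(lines)

    p = Protocol("Toy lysis protocol (illustrative only)")
    p.mix("resuspended cells", "lysis buffer", "a 1.5 mL tube")
    p.incubate("the lysate", temp_c=25, minutes=5)
    p.centrifuge("the lysate", rpm=13000, minutes=10)
    print(p.render())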
While I definitely agree that the software around synthetic biology is weak, I think this is not actually the problem bottlenecking the field. Ultimately, most efforts in synthetic biology are gated foremost by a lack of underlying biological knowledge. Automation may aid discovery, but the number of components and the complexity of biological systems leave many unanswered questions.
Yes, there have been demonstrations of biological computing and circuits, but from a functional perspective they are extremely limited, especially compared to in vivo systems. Put another way: even if we can do complex "computation" in a cell, our understanding of the "I/O" is too superficial to design systems that treat diseases or act as devices. The failure of rational drug design over the last 20 years has taught the pharmaceutical community this lesson quite harshly.
Understanding the interfaces and mechanisms that our "biological computers" will work with will require more research in fields like structural and molecular biology, and this work will take time.
With that said, there are some useful systems we can engineer in the meantime; for example, biosynthesis pathway work is well underway. Just realize that these systems don't require substantial engineered computation.
> While I definitely agree that the software around synthetic biology is weak, I think this is not actually the problem bottlenecking the field. Ultimately, most efforts in synthetic biology are gated foremost by a lack of underlying biological knowledge.
This is spot-on correct. In the lab I worked in, we were working on a cluster of 11 genes that results in the assembly of an enzyme that produces hydrogen. The PI's project was to play around with "promoters" that tell the bacterium to express the genes. Think of this as 'script kiddie' work. I came in and immediately told him that this was the wrong approach; we had to alter the structure and chemistry of the enzymes themselves (think of this as 'assembler-level hacking'). Thankfully he was sympathetic to my argument and let me play around; I achieved a 3x improvement in enzyme activity. This summer, we tested the original idea, and all of the variants we tested were worse than what we started with.
Also - and this is my highly opinionated position, informed by my experience as the sole chemist in a synthetic biology lab - part of the problem is that a lot of biologists tracked themselves into biology because they didn't do so hot at chemistry. Many synthetic biology problems are chemistry problems, and even problems that don't look like it at first blush, like those Drew Endy is trying to solve, really become easy to grok if you're used to thinking qualitatively in terms of statistical thermodynamics and can mentally estimate collisions per nanosecond from concentrations and kinetic parameters. These are quintessentially chemistry skills.
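To make that concrete, here's the kind of back-of-the-envelope estimate I mean, sketched in Python. The only number in it is the textbook diffusion-limited on-rate (~1e9 per molar per second); nothing here comes from a specific system:

    # Back-of-envelope: how often does one enzyme molecule encounter its
    # substrate by diffusion?  Order-of-magnitude numbers only.

    K_DIFFUSION_LIMIT = 1e9  # M^-1 s^-1, typical diffusion-limited on-rate

    def encounters_per_second(substrate_molar):
        """Pseudo-first-order encounter rate for a single enzyme molecule."""
        return K_DIFFUSION_LIMIT * substrate_molar

    for conc in (1e-3, 1e-6, 1e-9):  # 1 mM, 1 uM, 1 nM substrate
        rate = encounters_per_second(conc)
        print(f"[S] = {conc:.0e} M -> ~{rate:.0e} encounters/s per enzyme "
              f"(about 1 every {1/rate:.0e} s)")

    # At 1 mM that's ~1e6 encounters per second, roughly one per microsecond,
    # which is the sort of intuition that tells you whether binding or
    # chemistry is likely to be rate-limiting.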
Coming from the world of biophysics and structural bio, I wholeheartedly agree. Multidisciplinary approaches answer many more questions, but can be quite challenging to wrap your head around.
Thinking like this is awesome when it happens: a lab I worked in would combine structures of motor proteins with single molecule studies, targeted mutation work, and even in-vivo studies. The result? You could take specific loops and helices within the motor and not only understand where and how they interact with a microtubule, but also how that affected the motor's mechanical action and cellular behavior.
No. It's not about thinking multidisciplinarily. Oftentimes "multidisciplinary" translates to "half-assed everything". What I am saying is that you have to be able to physically do the basics in everything, and modestly well. There is nothing, nothing that tempers the brain like experiential knowledge. For chemistry, the basic skills include: reading an NMR, looking at a mechanism and saying "that makes no sense", running a silica gel column, shooting something down a mass spec... For biology, that's designing an analytical PCR from scratch, cloning (knowing the difference between a K12 and a B strain, for example), and doing western blots. And you had better be able to tell me the chemical, protein structural, and enzyme mechanism differences between arginine and lysine without looking it up. I've known biophysicists who couldn't do that, and PhDs who got their training in ostensibly multidisciplinary labs who couldn't do a single thing on that list. Hell, one of them is a professor at UW Seattle; when he went off to do his postdoc he was scared out of his pants because he went into a yeast lab without knowing how to clone anything - the tech had done all of it for him in grad school.
Another real problem is that we don't teach students anything anymore, on the premise that there's too much to know and that as long as you can look it up, it will be fine. Or, if you are working interdisciplinarily, your colleague will be able to tell you (of course they're relying on you too, so there's a bootstrapping problem). This is terrible. Sometimes you need to call someone on their bullshit, say during a lecture, or even more importantly during a meeting where actionable decisions are being made and critical insight can avoid wasting time and effort. If you don't have a significant depth of knowledge - which multidisciplinarity typically doesn't encourage, because it's hard - you are just a body making warmth in that room. You need instant recall of as much related information as possible at all times when doing science.
You have the right idea. I'm currently building a Bio-CAD/CAM system for engineering biology - like an AutoCAD for genetic engineering that's easy to use and even provides tools for downstream assembly automation in the lab.
It's built in the cloud with NodeJS & MongoDB and runs right in the browser.
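To give a flavor of the data model (this is a simplified illustration, not the actual schema - the collection and field names below are made up), a construct ends up as a document along these lines:

    # Simplified illustration of a document-per-construct model in MongoDB.
    # Collection and field names are invented for this sketch.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["biocad_demo"]

    construct = {
        "name": "demo_reporter",
        "parts": [
            {"type": "promoter",   "id": "BBa_J23100", "strand": "+"},
            {"type": "rbs",        "id": "BBa_B0034",  "strand": "+"},
            {"type": "cds",        "id": "GFP",        "strand": "+"},
            {"type": "terminator", "id": "BBa_B0015",  "strand": "+"},
        ],
        "assembly_method": "Gibson",
        "notes": "toy example only",
    }

    db.constructs.insert_one(construct)
    print(db.constructs.find_one({"name": "demo_reporter"})["parts"])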
Agreed, the current state of software tools for biology is sad: the tools are written by scientists, for scientists, and tend to have messy source code and incomplete or difficult-to-read documentation.
I don't mean to insult the people who work on the tools currently (they're great!), but we need more software people writing tools for the industry.
Fortunately, people are starting to do just that. TeselaGen and Genome Compiler are both good examples. (Disclaimer: I'm a TeselaGen engineer.)
Genome Compiler is not a good example. The fundamental premise is incorrect. Biology should not be done as a drag-and-drop exercise... Saving a few minutes of your time is not worth having blinders that increase the likelihood of huge errors in your design. Having an intimate and comprehensive knowledge of your sequence is critical, and having only a casual knowledge can lead to disaster. This is not for just anyone, either; you also have to have instant, library-level recall of as much as possible.
To give an example, I once witnessed an algorithmic redesign effort completely miss two extragenic components in an essential gene that would likely not have been missed by an attentive human (or better yet, two or three attentive humans). Luckily, the cells evolved their way around it; the researchers tracked down both the problem and how the bugs solved it, and the situation is interesting enough that it may result in a publication.
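Even a dumb mechanical check helps here: before committing a redesign, list every annotated feature that overlaps the edited region and make a human sign off on each hit. A sketch of that check (the feature names and coordinates below are made up):

    # Flag annotated features that overlap a region an algorithm wants to
    # recode, so a human reviews them before the design is committed.
    # Feature names and coordinates are invented for illustration.

    features = [
        {"name": "essential_gene_X",        "start": 1200, "end": 2600},
        {"name": "internal_promoter_for_Y", "start": 2400, "end": 2450},
        {"name": "small_RNA_Z",             "start": 5000, "end": 5120},
    ]

    def overlapping_features(edit_start, edit_end, annotations):
        """Return every annotated feature intersecting [edit_start, edit_end)."""
        return [f for f in annotations
                if f["start"] < edit_end and edit_start < f["end"]]

    edit_start, edit_end = 2300, 2550  # region the algorithm wants to recode
    for f in overlapping_features(edit_start, edit_end, features):
        print(f"WARNING: edit {edit_start}-{edit_end} overlaps {f['name']} "
              f"({f['start']}-{f['end']}); needs manual review")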
It's not necessarily a people problem as much as an incentives problem - the incentives around Biology are, at present, entirely at odds with writing good software.
Clean, well-documented source code won't get you grants. It won't yield citations. It won't get you tenure. Beyond making sure you can run the same code again, and that it still works if the postdoc who wrote it leaves, everything else falls under the "for the good of humanity" incentive structure. And with grant paylines in the middling single digits, it's really hard not to triage good code in favor of making sure the lights stay on.
That's interesting. I think if programmers in the biology realm open-sourced all their code, that might be an incentive in itself to write good code. Once multiple people start maintaining a project, there's an inherent incentive to have nice code. In addition, there's a certain level of bragging rights in putting an awesome project on your CV and getting future jobs because of that codebase.
But it took years for your (now typical) OS, server, and Internet open source projects to reach maturity and figure out how they can be monetized.
People in the sciences should start blogging more. People like me find all of these subjects very interesting but very foreign. And I think many of us have grown a bit bored with where most programming efforts are directed (backoffice, ecommerce, and social apps).
1. Keep in mind that for most projects and papers, not very many people are ever going to use the source code. For most projects, there's almost no chance that you're going to get a lively, multiple-contributor project going. Odds are it's just going to be on your shoulders.
2. If you're going to stay in academia, there's no level of bragging rights to an awesome project, and it won't particularly help your job prospects - indeed, from an opportunity cost perspective, most of the time it will hurt them. Once the code is good enough for a paper to be written, the incentive to do more work on the code vanishes.
3. Science blogging is actually a pretty active field. But the software side doesn't get talked about as much because it's just a tool. There are some blogs on software for science drifting around out there, though.
Oh wow, that's awesome. One wonders if there's something to be cribbed from the recipe world? A voice-activated, step-by-step app with built-in timers? Cookpad for wetlab? labgenius?
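Even a toy version of that seems within reach: walk through the protocol step by step and start a timer for anything with a duration. A rough sketch, assuming a made-up step format:

    # Toy "Cookpad for wetlab": step through a protocol and run a timer for
    # any step that has a duration.  The step format here is made up.
    import time

    protocol = [
        {"text": "Add 500 uL lysis buffer to the pellet and vortex."},
        {"text": "Incubate on ice.", "minutes": 5},
        {"text": "Spin at 13,000 rpm.", "minutes": 10},
    ]

    def run(steps, seconds_per_minute=60):
        for i, step in enumerate(steps, 1):
            input(f"Step {i}: {step['text']}  (press Enter when started)")
            if "minutes" in step:
                print(f"  timer: {step['minutes']} min")
                time.sleep(step["minutes"] * seconds_per_minute)
                print("  time's up, move to the next step")

    if __name__ == "__main__":
        run(protocol, seconds_per_minute=1)  # sped up for demonstration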
This feeds into one of my pet questions: will Moore's Law really die?
Doubling of transistor density on silicon will end around 2020/22, when 7 or 5nm etching arrives - beyond that, chip designers are working at scales far below the wavelength of the light used for lithography and running into quantum tunnelling effects.
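The rough arithmetic behind that date, assuming density doubles every two years from a 14nm node around 2014 (both approximations, and node names are only loosely tied to real feature sizes these days):

    # If transistor density doubles every two years, the linear feature size
    # shrinks by about 1/sqrt(2) per generation.  Starting from ~14 nm in 2014
    # (an approximation) and stopping below ~4 nm:
    import math

    node_nm, year = 14.0, 2014
    while node_nm > 4:
        year += 2
        node_nm /= math.sqrt(2)
        print(f"{year}: ~{node_nm:.1f} nm")
    # Prints roughly 10 nm (2016), 7 nm (2018), 5 nm (2020), 3.5 nm (2022),
    # in line with the "about 2020/22" estimate above.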
But ...
1. Maybe the number of CPU cycles available for a given price will keep doubling every couple of years anyway? More efficient systems-on-a-chip and cooling in huge data centers mean Siri can keep doubling its ability to run voice analysis on my behalf?
2. Is that true?
3. Is there genuinely any chance that things like bio-computing could pick up where silicon leaves off?
Hurray, Dr. Endy, Monica, Jerome, and Pakpoom! :) I interned in this lab and it's really cool seeing their amazing work (especially using M13 for high-bandwidth communication) get the press it deserves.
Refactoring in bio carries the typical CS connotation of cleaning up messy code. The natural PhiX174 has some overlapping genes that cannot be edited in isolation; the refactored version has no gene overlaps, so the genes can be edited individually in a more rational manner.
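To make "overlapping genes" concrete, here's a small sketch that finds overlaps from gene coordinates; the coordinates below are invented for illustration and are not the real PhiX174 annotation:

    # Find pairs of genes whose coordinates overlap, i.e. regions where one
    # gene cannot be recoded without also changing another.  Coordinates are
    # invented for illustration, not the real PhiX174 annotation.
    from itertools import combinations

    genes = {
        "A": (100, 1600),
        "B": (500, 900),    # entirely inside A, in a different reading frame
        "C": (1590, 1850),  # overlaps the end of A
        "D": (2000, 2450),  # no overlap
    }

    def overlaps(g1, g2):
        (s1, e1), (s2, e2) = g1, g2
        return s1 < e2 and s2 < e1

    for (n1, c1), (n2, c2) in combinations(genes.items(), 2):
        if overlaps(c1, c2):
            print(f"{n1} and {n2} overlap: editing one changes the other")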
http://openwetware.org/wiki/ChIP-Chip_E._coli
edit:
found this: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2989930/
> BioCoder, a C++ library that enables biologists to express the exact steps needed to execute a protocol. In addition to being suitable for automation, BioCoder converts the code into a readable, English-language description for use by biologists.