When Biology Becomes Software
All of life is based on the coordinated action of genetic parts (genes and their controlling sequences) found in an organism's genome (its complete DNA sequence).
Genes and genomes are based on code, just like the digital language of computers. But instead of zeros and ones, four DNA letters — A, C, T, G — encode all of life. (Life is messy, and there are actually all sorts of edge cases, but ignore that for now.) If you have the sequence that encodes an organism, in theory you could recreate it. And if you can write new working code, you can alter an existing organism or create a novel one.
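To make the analogy concrete, here is a minimal Python sketch of how a cell "reads" DNA: three letters at a time, with each triplet (a codon) mapping to one amino acid of a protein. The codon table shown is a tiny excerpt of the real 64-entry genetic code, chosen purely for illustration.

```python
# Toy illustration of "DNA as code": reading a DNA string one
# codon (three letters) at a time and mapping each codon to an
# amino acid. The table below is a small excerpt of the real
# 64-entry genetic code.
CODON_TABLE = {
    "ATG": "Met",   # also the "start" codon
    "TGG": "Trp",
    "GAA": "Glu",
    "TAA": "STOP",  # one of three "stop" codons
}

def translate(dna: str) -> list[str]:
    """Translate a DNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTGGGAATAA"))  # ['Met', 'Trp', 'Glu']
```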
If this sounds to you a lot like software coding, you’re right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we’re dealing with molecules — and sometimes actual forms of life — the risks can be much greater.
Imagine a biological engineer trying to increase the expression of a gene that maintains normal function in blood cells. Even though it's a relatively simple operation by today's standards, it will almost certainly take multiple tries to get right. Were this computer code, the only damage those failed tries would do is crash the computer they're running on. With a biological system, the failed code could instead increase the likelihood of multiple types of leukemia and wipe out cells important to the patient's immune system.
We have known the mechanics of DNA for some 60-plus years. The field of modern biotechnology began in 1972, when Paul Berg joined DNA from one virus to that of another and produced the first "recombinant" DNA molecules. Synthetic biology arose in the early 2000s, when biologists adopted the mindset of engineers; instead of moving single genes around, they designed complex genetic circuits.
In 2010, Craig Venter and his colleagues recreated the genome of a simple bacterium. More recently, researchers at the Medical Research Council Laboratory of Molecular Biology in Britain synthesized a new version of E. coli with a streamlined, recoded genome. In both cases, the researchers created what could arguably be called new forms of life.
This is the new bioengineering, and it will only get more powerful. Today you can write DNA code in much the same way a programmer writes computer code. You can then produce that DNA on a synthesizer or order it from a commercial vendor, and use precision editing tools such as CRISPR to "run" it in an already existing organism, from a virus to a wheat plant to a person.
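As a rough sketch of what "writing DNA code" can look like in practice, here is how a designed sequence might be assembled and sanity-checked in Python with the Biopython library before being sent to a synthesis vendor. The sequences and the checks are illustrative placeholders, not a real design workflow.

```python
# Illustrative only: composing and sanity-checking a designed DNA
# construct with Biopython before ordering it from a synthesis vendor.
# The sequences below are placeholders, not a real design.
from Bio.Seq import Seq

promoter = Seq("TTGACAATTAATCATCGGCTCGTATAATGT")  # example promoter-like sequence
coding = Seq("ATGGTGAGCAAGGGCGAGGAG")              # start of an example coding region

# "Compile" the design by concatenating its parts.
construct = promoter + coding

# Basic checks a designer might run before sending the file out.
assert str(coding).startswith("ATG"), "coding region must begin with a start codon"
print("construct length:", len(construct))
print("protein so far:", coding.translate())
```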
In the future, it may be possible to build an entire complex organism such as a dog or cat, or to recreate an extinct species such as the mammoth (a project currently underway). Today, biotech companies are developing new gene therapies, and international consortia are addressing the feasibility and ethics of making changes to human genomes that could be passed down to succeeding generations.
Within the biological science community, urgent conversations are occurring about "cyberbiosecurity," an admittedly contested term for the space where biological and information systems meet and where vulnerabilities in one can affect the other. The risks include the security of DNA databanks, the fidelity with which those data are transmitted, and information hazards associated with specific DNA sequences that could encode novel pathogens for which no cures exist.
These risks have occupied not only learned bodies — the National Academies of Sciences, Engineering, and Medicine have published at least a half-dozen reports on biosecurity risks and how to address them proactively — but have also made it to mainstream media: genome editing was a major plot element in Season 3 of Netflix's "Designated Survivor."
Our worries are more prosaic. As synthetic biology “programming” reaches the complexity of traditional computer programming, the risks of computer systems will transfer to biological systems. The difference is that biological systems have the potential to cause much greater, and far more lasting, damage than computer systems.
Programmers write software through trial and error. Because computer systems are so complex, and because there is no real theory of software, programmers repeatedly test the code they write until it works properly. This makes sense, because the cost of getting it wrong is low and trying again is easy. There are even jokes about this: a programmer would diagnose a car crash by putting another car in the same situation and seeing if it crashes again.
Even finished code still has problems. Again because of the complexity of modern software systems, "works properly" doesn't mean perfectly correct. Modern software is full of bugs: thousands of flaws that occasionally affect performance or security. That's why any piece of software you use is regularly updated; the developers are still fixing bugs, even after the software is released.
Bioengineering will be largely the same: writing biological code will suffer from the same reliability problems. Unfortunately, the software solution of making lots of mistakes and fixing them as you go doesn't work in biology.
In nature, a similar kind of trial and error is handled by "survival of the fittest," and it occurs slowly, over many generations. But code written from scratch by humans doesn't have that kind of correction mechanism. Inadvertent or intentional release of these newly coded "programs" may result in pathogens with an expanded host range (think swine flu) or organisms that wreck delicate ecological balances.
Unlike computer software, there is so far no way to "patch" biological systems once they are released into the wild, although researchers are trying to develop one. Nor is there a way to "patch" the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.
Opportunities for mischief and malfeasance often arise when expertise is siloed, when fields intersect only at the margins, and when the gathered knowledge of small, expert groups doesn't make its way into the larger body of practitioners who have important contributions to make.
Good starts have been made by biologists, security agencies, and governance experts. But these efforts have tended to be siloed: confined to either the biological or the digital sphere of influence, classified and kept solely within the military, or exchanged only among a very small set of investigators.
What we need are more opportunities for integration between the two disciplines. We need to share information and experiences, classified and unclassified. Our digital and biological communities already have tools to identify and mitigate biological risks, and tools to write and deploy secure computer systems.
Those opportunities will not occur without effort or financial support. Let's find those resources, public, private, philanthropic, or any combination. And then let's use them to set up novel opportunities for digital geeks and bionerds — as well as ethicists and policymakers — to share experiences and concerns, and to come up with creative, constructive solutions to these problems that are more than just patches.
These are overarching problems; let’s not let siloed thinking or funding get in the way of breaking down barriers between communities. And let’s not let technology of any kind get in the way of the public good.
This essay previously appeared on CNN.com.