With all the wonders of modern computing, it's easy to forget that an
advanced, almost alien form of computer architecture was the original
designer. To this day, humanity's understanding of how these "biological
computer architectures" function is still in its infancy. But as time
passes, so does human progress, and recently, large breakthroughs have
been made in advancing the frontier.
In our attempts to program biology the way we program computers, we have
been slowly chipping away at techniques to alter the "source code" of
biological life itself: DNA. With RNA-guided nucleases (e.g.,
CRISPR–Cas9) in 2012, we unlocked inserting or removing a full line of
code. With first-generation base editing (e.g., BE1), we unlocked the
ability to swap out single characters in the code. With Prime Editing,
we unlocked the ability to rewrite entire functions and modules at a
time. And most recently, with Bridge systems, we have vastly increased
the size of rewrites, to roughly 0.25 Mb (megabases) at once!
But the interesting thing about the so-called "source code" of human
life is that the analogy does not travel very far. DNA itself does not
tell you how to build human anatomy; it tells you how to build the
proteins that make it up. DNA is essentially a library of low-level
molecular machine components. How proteins combine into a human is
dictated by the environment those proteins are in. To draw a further
loose analogy to classical computing models like Turing machines: DNA
encodes the construction rules for the machine itself, the set of
expressible proteins and their 3D spatial configuration define the
machine's transition table, and the cell's instantaneous configuration
of proteins, ions, and metabolites represents its current state.
If we had to draw the analogy in the reverse direction, from a classical
computer to a "biological computer", we might say something like: DNA is
the set of possible circuits producible by TSMC and the other digital
computer parts manufacturers. This metaphor distills these systems
enough to highlight one phenomenal aspect of biological machines: they
are able to alter the design and production of their own constituent
parts! And what's even more interesting is that nowhere in the
definition of the molecular machine components of a cell does there
exist a definition of how cells combine into larger structures like
tissues, organs, and large-scale anatomical structures.
The "goal" of an individual cell looks much more like first-order
cybernetic homeostatic regulation, that is, continuous sensing and
correction of internal variables to maintain viability in the face of
environmental fluctuation, rather than the pursuit of any externally
defined objective. So how exactly are cells able to coordinate with each
other into collective action? The answer, at least according to Michael
Levin, is that biological organisms are multiscale competency
architectures that exhibit collective intelligence. The fact that an
axolotl can regrow its tail, head, eyes, heart, and more tells us
that at the highest coordination scale of this organism, there is some
notion of anatomical homeostasis. The organism is then able to send
goals down through its layers of organs, tissues, and cells to
coordinate each layer. It is this paradigm that the rest of this post
will focus on. Levin's view is that this multiscale competency
architecture is made up of layers of "agential materials", where intent
must be negotiated at each layer in order to achieve a desired outcome,
the highest-level outcome being one in anatomical space. And the way we
might negotiate these outcomes with agential materials is through a
communication device he refers to as an "anatomical compiler".
Agential Material
From the lens of "agential material", almost all of our existing
construction and development techniques can be framed as operating on
"passive material". Agential material, by contrast, is defined as having
some level of autonomy to execute on goals of its own.
Take a piece of wood, for example. Before it was cut down, it was
situated as a component in a biological system with goals of its own.
Reverse-engineering the goals of a tree from its observable behavior, we
get something like: maintain structural integrity while capturing light,
exchanging gases, and allocating resources to ensure growth and
reproduction. There's more to be said about what a tree's local goals
mean inside a broader environment or biosphere that feeds it molecules,
but nonetheless, its goals are largely homeostatic and reproductive.
Branches navigating toward open areas of light can be seen as expressing
a strategy in anatomical space. But once the tree has been chopped down
and dried out? It simply has no more goals. If the environment a piece
of wood exists in is too hot or humid, it has no capability to regulate.
This material is passive.
Humans have become quite capable of "programming", or modifying, passive
materials toward our goals. At first we used stones and dead plants and
animals to make tools, shelters, and food. Now we can rearrange atoms at
the molecular scale and nanoscale using precisely directed beams of
particles.
But in terms of programming "agential materials", we are in the stone
age. Nonetheless, there are numerous promising and exciting early
examples that showcase our growing capabilities in this regard.
Early Examples of Programming Agential Materials
Mycelial Structures as Living Architecture
Fungal mycelia are a powerful example of semi-autonomous material
systems. Mycelial networks grow by exploring gradients of nutrients,
moisture, and light, effectively computing optimal paths through their
environment. Researchers are able to direct this agency through
environmental and chemical cues to grow mycelium-based composites that
form self-healing, biodegradable structures.
Manan, S. et al. (2021). "Synthesis and applications of fungal
mycelium-based advanced functional materials." Journal of Bioresources
and Bioproducts, 6(4), 293–310.
Immunotherapy as Cellular Goal Rewriting
Through cytokine signaling, mRNA delivery, and checkpoint modulation,
immune cells can be taught to reinterpret which molecular patterns count
as self or enemy. The immune system already senses, models, and acts; we
intervene at the level of interpretation rather than direct control. By
doing so, we reshape its internal goal hierarchy to attack tumors or
tolerate implants, rather than directly commanding it to attack.
Netea, M. G. et al. (2020). "Defining trained immunity and its role in
health and disease." Nature Reviews Immunology, 20(6), 375–388.
Bioelectric Organogenesis and Morphological Reprogramming
Michael Levin's lab has shown that altering bioelectric voltage patterns
can trigger the formation of complex organs in new locations. In Xenopus
laevis, manipulating membrane potentials induced complete, functional
eyes to form on tails and guts without any genetic editing. This
demonstrates that by addressing the goal architecture of tissues rather
than their molecular machinery, we can redirect morphogenetic
intelligence to construct new anatomical structures.
Pai, V. P., Aw, S., Shomrat, T., Lemire, J. M., & Levin, M. (2012).
"Transmembrane voltage potential controls embryonic eye patterning in
Xenopus laevis." Development, 139(2), 313–323.
Although these examples are not described in the literature as examples
of agential material engineering, to me they signal proofs of concept
that illuminate the design space. Roughly paraphrasing Michael Levin, in
the context of a goal to build a tower, the switch from passive to
agential materials is the switch from building with Legos to building
with dogs. It's much harder to build a pyramid of dogs, but done right,
the pyramid would be able to self-heal and reconfigure in response to
changing environmental factors in a way Legos never could.
Multiscale Competency Architectures
In Levin's
words, "Living matter is a multi-scale agential material, with
learning capacity and goal-seeking competencies at every scale, which
has massive implications for evolution, bioengineering, and
biomedicine."
Therefore cells coordinate not because they are mechanically forced
to, but because they share information and negotiate goals across many
layers of organization. A single cell regulates its internal chemistry
to stay alive. A tissue regulates local morphology to maintain
integrity. An organ regulates physiological parameters to keep the whole
system in balance. Together they form a multiscale competency
architecture, where each layer possesses its own sense of error
and success, and where collective behavior emerges from the alignment of
many smaller homeostatic loops.
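This nesting of homeostatic loops can be sketched numerically. The following is a toy illustration, not a biological model: the "tissue" layer never manipulates the "cell" variable directly, it regulates only by nudging the setpoint that the cell's own loop defends.

```python
# Two nested homeostatic loops: a lower loop defends a setpoint, and a
# higher loop expresses its goal only by re-aiming that setpoint.

def homeostat(value, setpoint, gain=0.5):
    """One corrective step: move a sensed value toward its setpoint."""
    return value + gain * (setpoint - value)

cell_value, cell_setpoint = 0.0, 1.0
tissue_target = 2.0  # the higher layer's goal

for _ in range(50):
    # The cell corrects its own internal variable...
    cell_value = homeostat(cell_value, cell_setpoint)
    # ...while the tissue layer slowly re-aims the cell's setpoint.
    cell_setpoint = homeostat(cell_setpoint, tissue_target, gain=0.1)
```

After enough iterations both loops align: the cell tracks a setpoint that has itself converged on the tissue-level goal, with neither loop micromanaging the other.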
In Levin's framework, each of these layers has a kind of "self" that
defines a boundary of control and perception, constraining what problems
it can sense and solve. He writes:
"Each living system is defined by a spatiotemporal boundary of
control: the size of the goal space it can actively regulate. This
defines a scale-free architecture of selves, where larger organisms
emerge from the cooperation of smaller agents, each expanding its
cognitive light cone through shared information and joint action."
(Levin, M. 2019. "The
Computational Boundary of a ‘Self': Developmental Bioelectricity Drives
Multicellularity and Scale-Free Cognition." Frontiers in Psychology,
10:2688.)
To work effectively with such systems, we must learn to communicate
goals with each layer according to its own internal language. This
requires setting conditions that each scale can interpret in pursuit of
its own goals. Cells interpret voltage gradients, mechanical tensions,
and biochemical cues as part of their problem-solving landscape. Tissues
and organs interpret higher-order bioelectric patterns as signals about
anatomical setpoints. The conclusion is that "programming biology" is
not achieved through micromanagement, but through shaping goals into
meaningful information for each level of the system's distributed
intelligence.
This architecture, like any system of autonomous agents, inevitably
introduces principal–agent problems: local goals do not always align
with global ones. Therefore understanding and negotiation will likely be
central to the future of agential material engineering.
A Note on Cybernetics
As a small aside, cybernetics offers an alternative way to view the
multiscale competency architecture of biological organisms. Cybernetics,
as first defined by Norbert Wiener in 1948, is "the scientific study of
control and communication in the animal and the machine."
At its core, cybernetics studies how systems sense, act, and correct
deviations from desired states through feedback. This framework
elegantly parallels Levin's description of living matter as a network of
goal-seeking agents, each maintaining homeostasis through loops of
perception and adjustment.
In the mid-20th century, thinkers such as Heinz von Foerster and Gordon
Pask expanded the field into what became known as second-order
cybernetics by applying cybernetics to itself. This moved the role of
the observer from outside the system to inside it, in what von Foerster
described as "the cybernetics of observing systems". In this reflexive
process, a system of control is able to alter itself to achieve a goal
that the initial underlying system of control was unable to achieve. A
useful example is temperature regulation in humans. If a human is too
hot, first-order cybernetic control initiates internal processes, such
as sweating, to cool down. Second-order control uses our legs to move us
to an air-conditioned room. This example is second-order because it is
not a strategy inherent to our body's control architecture.
(von Foerster, H. 1979. Cybernetics of Cybernetics.
Biological Computer Laboratory, University of Illinois.)
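The distinction can be caricatured in code. This is a toy sketch, not a physiological model: sweating is the fixed first-order loop, and the second-order move changes the system the loop runs in once it stalls.

```python
def first_order_cool(temp, ambient):
    """Built-in response: sweating drops temp 1 degree/step, floored at ambient."""
    return max(temp - 1.0, ambient)

def regulate(temp, ambient, target=37.0, patience=3):
    actions = []
    stuck = 0
    while temp > target:
        new_temp = first_order_cool(temp, ambient)
        stuck = stuck + 1 if new_temp == temp else 0
        temp = new_temp
        if stuck >= patience:
            # Second-order control: alter the environment the first-order
            # loop operates in, something sweating alone can never do.
            ambient = 21.0
            actions.append("moved to AC room")
            stuck = 0
    return temp, actions
```

Calling `regulate(40.0, 39.0)` stalls at the hot room's ambient temperature until the second-order move fires, after which the unchanged first-order loop succeeds.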
For the last twenty thousand years, Homo sapiens have engaged in ever
more complex forms of higher-order cybernetic feedback. First by
regulating our physical environment, through shelter, agriculture, and
climate control, then, by regulating our informational environment,
through language, printing, and the internet.
In this light, agential material engineering represents the next step
in this trajectory: the ability to extend cybernetic feedback loops into
the biological substrate of life itself. By learning to communicate with
living matter's intrinsic control systems rather than imposing external
ones, we move toward a future where humans can collaboratively rewrite
the architecture of our own bodies. In many ways, this is the dream of
many transhumanists.
Anatomical Compiler
Once the capability to direct seas of electrons to perform useful
computation began to emerge, a never-ending series of tools arose to
make the process easier and more efficient. In the early days,
programming was done by manually reconfiguring physical circuits. Then
punch cards let operators encode instructions symbolically, assembly
language abstracted raw binary into human-readable mnemonics, and
high-level languages like Fortran and COBOL, and eventually modern
languages like C and Python, allowed programmers to describe intent in
near-natural language. Most recently, LLMs have enabled the translation
of natural language into high-level languages, as well as the automation
of system commands.
This progression of engineering innovation has steadily made computers
easier for humans to control. Today, the path from intent to digital
computer execution, and its feedback loop, looks something like the
following diagram. A human has an intent and writes it in a high-level
programming language; a compiler translates this into assembly and then
machine code (depending on the language there may be intermediate
steps), which is then executed by the flow of electricity through the
circuits of the machine.
flowchart TD
A["🧠 Homo Sapiens <br>Mind"] -->|"writes in"| B["⌨️ Programming <br> Language"]
B -->|"compiles into"| D["💾 Machine <br> Code"]
D -->|"executes on"| E["🔋 Digital <br> Computer"]
E -->|"produces output for"| A
style A fill:#fff2cc,stroke:#b58900,stroke-width:2px
style B fill:#e0f7fa,stroke:#00796b,stroke-width:2px
style D fill:#fce4ec,stroke:#ad1457,stroke-width:2px
style E fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
So if our goal is to make programming biology as easy as programming a
computer, we will inevitably need what Michael Levin calls an
"Anatomical Compiler". In his words,
the anatomical compiler is, "a future system representing the long-term
endgame of the science of morphogenesis, that reminds us how far away
from true understanding we are. Someday, we will be able to sit in front
of this combination of software and robot scientist platform, specify
the shape of an animal, plant, organ, or biobot that we want, and it
will convert that shape specification into a set of stimuli that will be
given to cells to get them to build exactly that shape (no matter how
novel - total control over growth and form)."
Using the anatomical compiler, one could describe the high-level goal of
"regrow my arm", feed it to the compiler, and get out a series of
stimuli to apply to the body in order to signal the regrowth of the arm.
One could then take these instructions to a machine able to apply them
to the body in the specified format and process. In this way,
programming biology could very closely mirror programming a computer!
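No such system exists today, but a sketch can make the interface shape concrete. Everything below (the `Stimulus` record, `compile_shape`, the channel names) is invented for illustration; the only point is the signature: a high-level anatomical goal in, layer-addressed stimuli out, mirroring how a digital compiler turns source code into machine code.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    channel: str   # e.g. "bioelectric", "biochemical", "mechanical"
    target: str    # which tissue layer the signal addresses
    pattern: str   # encoded signal that layer can interpret

def compile_shape(goal: str) -> list[Stimulus]:
    """Translate an anatomical goal into layer-addressed stimuli (stub)."""
    # A real compiler would search morphospace; this stub only shows the
    # shape of the interface.
    return [
        Stimulus("bioelectric", "limb bud tissue", f"setpoint({goal})"),
        Stimulus("biochemical", "resident cells", "growth-permissive cue"),
    ]

plan = compile_shape("regrow left forearm")
```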
Unlike digital compilers, an anatomical compiler would not enforce
exact execution. It would work with an intelligent substrate that can
self-correct and adapt. It would propose morphogenetic goals, then allow
the living material to decide how to meet them. It would not control
life but collaborate with it. Such a system would unite
intention with regeneration and turn the plasticity of biology into a
programmable medium.
Biological IDE
From first principles, the invention of an anatomical compiler, or
something closely resembling it, feels quite certain; on what timescale,
I have very little certainty. But allowing ourselves to imagine such a
future, and following the existing paradigms of programming, we can
extrapolate which other technologies will also be necessary. Luckily, in
one of their papers, Levin et al. give us a proof of concept for "A
Scalable Pipeline for Designing Reconfigurable Organisms". Living in
this future, I believe all of these technologies will be miniaturized
and perfected to the point of becoming a "Biological IDE" that would let
anyone develop their own "biobots" at home for a variety of purposes.
A Scalable Pipeline for Designing Reconfigurable Organisms
In their 2020 paper,
Sam Kriegman, Douglas Blackiston, Michael Levin, and Josh Bongard
revealed a complete digital-to-biological design loop that produced
functional multicellular organisms from scratch. The team called them
reconfigurable organisms, though they are now widely known as xenobots.
With this pipeline they were able to develop multiple morphological
capabilities in living tissue:
* Locomotion: Some xenobots were evolved to move directionally across a
dish by contracting and expanding like microscopic heart-driven paddles.
* Object Manipulation: Others gathered particles into clusters,
spontaneously clearing their environment.
* Object Transport: Certain body plans evolved hollow centers that
functioned as pouches to carry loads.
* Collective Behavior: Groups of xenobots exhibited coordinated motion,
temporarily bonding and orbiting each other.
As the authors write in the paper's Significance
section:
"Computers automatically design new machines in simulation, and the
best designs are then built by combining together different biological
tissues. This suggests others may use this approach to design a variety
of living machines... or further broaden our understanding of the diverse
forms and functions life may adopt."
Their experiment marked, to my knowledge, the first time an
algorithmically generated morphology was physically realized in a
biological organism.
The Design–Build Pipeline
The xenobot workflow follows a continuous feedback loop between
computation and biology. It begins with a set of simulated cell building
blocks and proceeds through several iterative stages that give us a
glimpse at what a biological IDE might one day generalize.
1. Design Generation
"A behavioral goal (e.g., maximize displacement), along with
structural building blocks... are supplied to an evolutionary algorithm.
The algorithm evolves an initially random population and returns the
best design that was found." (Kriegman et al., Fig. 1
description)
A biological IDE will one day be able to take your natural language
description of your intended behavior and deploy it to a soft-body
physics simulator to search for the best designs.
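A minimal version of this step can be sketched with a toy evolutionary algorithm. Here the "design" is a bit string and the fitness function is a stand-in for simulated displacement; the paper's actual pipeline evolves voxel-level body plans scored inside a soft-body physics simulator.

```python
import random
random.seed(0)  # deterministic for illustration

def fitness(design):
    # Stand-in for simulated displacement: reward active (1) voxels.
    return sum(design)

def evolve(pop_size=20, length=16, generations=30, mut_rate=0.1):
    # Evolve an initially random population and return the best design found.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = [
            [1 - g if random.random() < mut_rate else g for g in parent]
            for parent in survivors                # mutation only, no crossover
        ]
        pop = survivors + children                 # elitist: survivors kept
    return max(pop, key=fitness)

best = evolve()
```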
2. Robustness Filtering
"Performant designs are then filtered by their robustness to random
phase modulation of their contractile cells... noise resistance in
simulation is a simple and effective predictor of whether a design will
maintain its behavior when instantiated physically."
Integration tests with real world physics will automatically be run
on the designs from the prior steps to filter out designs that would not
withstand initial forms of physical instantiation. "Biological fuzzing"
will then be performed to filter out weaker designs that cannot
withstand basic perturbations.
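The filtering step can be sketched the same way: re-score each candidate under random perturbation and keep only those that hold up. The designs and numbers below are invented; in the paper, the perturbation is random phase modulation of the contractile cells rather than additive noise on a score.

```python
import random
random.seed(1)

# Each hypothetical design: (clean performance, sensitivity to perturbation).
designs = {
    "design_a": (10.0, 0.5),    # strong and stable
    "design_b": (11.0, 16.0),   # strong in silico, collapses under noise
    "design_c": (6.5, 0.2),     # modest but steady
}

def perturbed_score(clean, sensitivity, trials=50):
    """Average performance across randomly perturbed re-evaluations."""
    return sum(clean - sensitivity * random.random() for _ in range(trials)) / trials

survivors = [
    name for name, (clean, sens) in designs.items()
    if perturbed_score(clean, sens) >= 6.0
]
```

The best raw performer fails the filter: exactly the point of the step, since noise resistance in simulation predicts survival in physical instantiation.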
3. Build Filtering and Biological Realization
"The surviving noise-resistant designs are then passed through a
build filter... constructed in vivo using developing
Xenopus cardiomyocyte and epidermal cell progenitors."
From these filtered designs, the anatomical compiler will generate
stimuli to apply to initial living tissue inside an automated lab
environment, either at home or "in the cloud".
4. Empirical Feedback and Model Update
"Discrepancies between in silico and in vivo
behavior are returned to the evolutionary algorithm... increasing the
success likelihood of subsequent design-to-deployment attempts."
Real-world behavior (locomotion efficiency, stability, group dynamics)
is recorded and fed back into the simulator, refining its
constraints.
Through this loop, the pipeline gradually learns how to express
digital intent through biological matter.
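The model-update step can be sketched as a simple running correction. This is not the paper's actual update rule, and the parameter names are invented; it only illustrates folding the sim-vs-reality gap back into the simulator.

```python
# Hypothetical simulator parameters, invented for illustration.
sim_params = {"tissue_stiffness": 1.0, "contraction_force": 1.0}

def update_model(params, predicted, observed, lr=0.2):
    """Scale parameters toward observed behavior to shrink the sim-real gap."""
    correction = 1 + lr * (observed - predicted) / predicted
    return {name: value * correction for name, value in params.items()}

# A xenobot the simulator predicted would travel 10 units traveled only 8:
sim_params = update_model(sim_params, predicted=10.0, observed=8.0)
```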
Figure 1 in their paper shows us a sample of this in silico
simulation to in vivo measurement feedback loop.
The End
The work at the Levin Lab by Michael Levin et al., which I have
attempted to summarize here, needs no conclusion. Instead, for me
personally, it is just the beginning of entirely new directions in human
evolution.
The Anatomical Compiler
2025 Oct 26
In attempts to program biology similarly to a computer, we have been slowly chipping away at techniques to alter the "source code" of biological life itself, DNA. With RNA-guided nucleases (e.g., CRISPR–Cas9) in 2012, we unlocked inserting or removing a full line of code. With first-generation base editing (e.g. BE1), we unlocked the ability to swap out single characters in the code. With Prime Editing, we unlocked the ability to rewrite entire functions and modules at a time. And most recently, with Bridge systems, we have vastly increased the size of rewrites to roughly 0.25 MB at once!
But the interesting thing about the so called "source code" of human life is that the analogy does not travel very far. DNA itself does not tell you how to build the human anatomy, it tells you how to build the proteins that make it up. Therefore DNA is essentially a library of low-level molecular machine components. How proteins combine into a human is dictated by the environment these proteins are in. To further draw loose analogies to classical human computing models like Turing machines: DNA encodes the construction rules for the machine itself, the set of proteins expressable and their 3D spatial configuration define the machine's transition table, and the cell's instantaneous configuration of proteins, ions, and metabolites represents its current state.
If we had to draw an analogy on the reverse side, from a classical computer to a "biological computer", we might say something like: DNA is the set of possible circuits producible by TSMC, as well as the other digital computer parts manufacturers. This metaphor distills these systems enough to highlight one phenomenal aspect of biological machines, they are able to alter the design and production of their own constituent parts! And what's even more interesting is that no where in the definition of the molecular machine components of a cell does there exist a definiton for how cells combine into larger structures like tissues, organs, and large scale anatomical structures.
The "goal" of an individual cell looks much more like first-order cybernetic homeostatic regulation, that is continuous sensing and correction of internal variables to maintain viability in the face of environmental fluctuation, rather than the pursuit of any externally defined objective. So how exactly are they able to coordinate with each other into collective action? The answer, atleast according to Michael Levin, is that biological organisms are multiscale competency archtitectures that exhibit collective intelligence. The fact that an Axolotyl is able to regrow it's tail, head, eyes, heart, etc tells us that at the highest coordination scale of this organism, there is some notion of anatomical homeostatis. This organism is then able to send goals throughout the layers of organs, tissues, and cells within to coordinate each layer. It is this paradigm that the rest of this post will focus on. Levin's view is that this multiscale competency architecture is made up of layers of "agential materials" where an intent must be negotiated with at each layer in order to achieve a desired outcome, the highest level being an outcome in anatomical space. And how we might be able to negotiate these outcomes with agential materials is through a communication device he refers to as an "anatomical compiler".
Agential Material
From the lens of "agential material", almost all of our existing construction and development techniques can be framed as operating on "passive material". Agential material is defined as having some level of autonomy to execute on goals of it's own.
If we look at a piece of wood for example, before it was cut down it was situated as a component in a biolgoical system with goals of its own. Reverse engineering the goals of a tree from it's observable behavior, it's something like maintain structural integrity while capturing light, exchanging gases, and allocating resources to ensure growth and reproduction. There's more to be said about what a tree's local goals mean inside of a broader environment or biosphere which feed it molecules, but nonetheless, it's goals are largely homeostatic and reproductive. Branches navigating towards open areas of light can be seen as expressing a strategy in anatomical space. But once the tree has been chopped down and dried out? Well it simply has no more goals. If the environment a piece of wood exists in is too hot or humid, it has not capability to regulate. Therefore this material is passive.
Humans have become quite capable of "programming", or modifying passive materials to our goals. At first we used stones and dead plants and animals to construct tools, shelters, and food. And now we can rearrange atoms on the molecular and nanoscale through the use of precisely directed beams of particles.
But in terms of programming "agential materials", we are in the stone age. Nonetheless, there are numerous promising and exciting early examples which showcase our growing capabilities in these regards.
Early Examples of Programming Agential Materials
Mycelial Structures as Living Architecture Fungal mycelium are a powerful example of semi-autonomous material systems. Mycelial networks grow by exploring gradients of nutrients, moisture, and light, effectively computing optimal paths through their environment. Researchers are able to direct this agency through environmental and chemical cues to grow mycelium-based composites that form self-healing, biodegradable structures. > Manan, S. et al. (2021). "Synthesis and applications of fungal mycelium-based advanced functional materials." Journal of Bioresources and Bioproducts, 6(4), 293–310. ScienceDirect
Immunotherapy as Cellular Goal Rewriting Through cytokine signaling, mRNA delivery, and checkpoint modulation, immune cells can be taught to reinterpret which molecular patterns count as self or enemy. The immune system already senses, models, and acts, and we intervene at the level of interpretation rather than direct control. By doing so, we reshape its internal goal hierarchy to attack tumors or tolerate implants as opposed to directly telling it to attack like.
Bioelectric Organogenesis and Morphological Reprogramming Michael Levin's lab has shown that altering bioelectric voltage patterns can trigger the formation of complex organs in new locations. In Xenopus laevis, manipulating membrane potentials induced complete, functional eyes to form on tails and guts without any genetic editing. This demonstrates that by addressing the goal architecture of tissues rather than their molecular machinery, we can redirect morphogenetic intelligence to construct new anatomical structures.
Although these examples are not referenced in any literature as examples of agential material engineering, to me they signal research engineering proof of concepts that illuminate the design space. Roughly paraphrased from Michael Levin, in the context of having a goal to build a tower, the switch from passive to agential materials is the switch from building with legos to building with dogs. It's much harder to build a pyramid of dogs, but if done right, they would be able to self-heal and reconfigure to changes in environmental factors in a way legos never could.
Multiscale Competency Architectures
In Levin's words, "Living matter is a multi-scale agential material, with learning capacity and goal-seeking competencies at every scale, which has massive implications for evolution, bioengineering, and biomedicine."
Therefore cells coordinate not because they are mechanically forced to, but because they share information and negotiate goals across many layers of organization. A single cell regulates its internal chemistry to stay alive. A tissue regulates local morphology to maintain integrity. An organ regulates physiological parameters to keep the whole system in balance. Together they form a multiscale competency architecture, where each layer possesses its own sense of error and success, and where collective behavior emerges from the alignment of many smaller homeostatic loops.
In Levin's framework, each of these layers has a kind of "self" that defines a boundary of control and perception, constraining what problems it can sense and solve. He writes:
To work effectively with such systems, we must learn to communicate goals with each layer according to its own internal language. This requires setting conditions that each scale can interpret in pursuit of its own goals. Cells interpret voltage gradients, mechanical tensions, and biochemical cues as part of their problem-solving landscape. Tissues and organs interpret higher-order bioelectric patterns as signals about anatomical setpoints. The conclusion is that "programming biology" is not achieved through micromanagement, but through shaping goals into meaningful information for each level of the system's distributed intelligence.
This architecture, like any system of autonomous agents, inevitably introduces principal–agent problems: local goals do not always align with global ones. Therefore understanding and negotiation will likely be central to the future of agential material engineering.
A Note on Cybernetics
As a small aside, cybernetics has an alternative way to view the multiscale competency architecture of biological organisms. Cybernetics, as first defined by Norbert Wiener in 1948, is "the scientific study of control and communication in the animal and the machine."
At its core, cybernetics studies how systems sense, act, and correct deviations from desired states through feedback. This framework elegantly parallels Levin's description of living matter as a network of goal-seeking agents, each maintaining homeostasis through loops of perception and adjustment.
In the mid-20th century, thinkers such as Heinz von Foerster and Gordon Pask expanded the field into what became known as second-order cybernetics through the application of cybernetics onto itself. This moved the role of the observer from outside the system to the inside in what Von Foerster described as "the cybernetics of observing systems,". In this reflexive process, a system of control is able to alter itself to achieve a goal that the underlying initial system of control was unable to acheive. A useful example is temperature regulation in humans. If a human is too hot, first-order cybernetic control would initiate internal processes to begin sweating to cool down. Second-order control would use our legs to move to an air conditioned room. This example is second-order because it is not a strategy that is inherent to our body's archtiecture.
For the last twenty thousand years, Homo sapiens have engaged in ever more complex forms of higher-order cybernetic feedback. First by regulating our physical environment, through shelter, agriculture, and climate control, then, by regulating our informational environment, through language, printing, and the internet.
In this light, agential material engineering represents the next step in this trajectory: the ability to extend cybernetic feedback loops into the biological substrate of life itself. By learning to communicate with living matter's intrinsic control systems rather than imposing external ones, we move toward a future where humans can collaboratively rewrite the architecture of our own bodies. In many ways, this is the dream of many transhumanists.
Anatomical Compiler
Once the capability to direct seas of electrons to perform useful computation began to emerge, a never-ending series of tools followed to make the process easier and more efficient. In the early days, programming was done by manually reconfiguring physical circuits. Punch cards then let operators encode instructions symbolically, assembly language abstracted raw binary into human-readable mnemonics, and high-level languages like Fortran, COBOL, and eventually modern languages like C and Python allowed programmers to describe intent in near-natural language. Most recently, LLMs have enabled the translation of natural language into high-level languages, as well as the automation of system commands.
This progression of engineering innovation has steadily made computers easier for humans to control. Today, the path from intent to digital computer execution, and its feedback loop, looks something like the following diagram. A human has an intent and writes it in a high-level programming language; a compiler translates this into assembly and then machine code (depending on the language there may be intermediate steps), which is then executed by the flow of electricity through the circuits of the machine.
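As a toy illustration of that chain, here is a minimal sketch, with an invented three-opcode instruction set, that lowers a high-level arithmetic expression to assembly-style mnemonics, assembles those into machine-code bytes, and executes them on a tiny stack machine:

```python
# Toy compilation pipeline: high-level expression -> assembly
# mnemonics -> machine-code bytes -> execution. The instruction
# set is invented for the example.

OPCODES = {"PUSH": 0x01, "ADD": 0x02, "MUL": 0x03}

def compile_expr(expr):
    """'Compiler': lower e.g. '2 + 3' into stack-machine assembly."""
    a, op, b = expr.split()
    mnemonic = {"+": "ADD", "*": "MUL"}[op]
    return [("PUSH", int(a)), ("PUSH", int(b)), (mnemonic, None)]

def assemble(asm):
    """'Assembler': turn mnemonics into raw machine-code bytes."""
    code = []
    for mnemonic, operand in asm:
        code.append(OPCODES[mnemonic])
        if operand is not None:
            code.append(operand)
    return bytes(code)

def execute(code):
    """'CPU': run the machine code on a little stack machine."""
    stack, i = [], 0
    while i < len(code):
        op = code[i]
        if op == OPCODES["PUSH"]:
            stack.append(code[i + 1]); i += 2
        elif op == OPCODES["ADD"]:
            b, a = stack.pop(), stack.pop(); stack.append(a + b); i += 1
        elif op == OPCODES["MUL"]:
            b, a = stack.pop(), stack.pop(); stack.append(a * b); i += 1
    return stack.pop()

print(execute(assemble(compile_expr("2 + 3"))))  # 5
```

Each function corresponds to one arrow in the diagram: human intent enters as text at the top, and only opaque numeric opcodes reach the "hardware" at the bottom.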
So if our goal is to make programming biology as easy as programming a computer, we will inevitably need what Michael Levin calls an "Anatomical Compiler". In his words, the anatomical compiler is "a future system representing the long-term endgame of the science of morphogenesis, that reminds us how far away from true understanding we are. Someday, we will be able to sit in front of this combination of software and robot scientist platform, specify the shape of an animal, plant, organ, or biobot that we want, and it will convert that shape specification into a set of stimuli that will be given to cells to get them to build exactly that shape (no matter how novel - total control over growth and form)."
Using the anatomical compiler, you could describe the high-level goal of "regrow my arm", feed it to the compiler, and get out a series of stimuli to apply to your body to signal the regrowth of your arm. You could then hand these instructions to a machine able to apply them to your body in the specified format and process. In this way, programming biology could very closely mirror programming a computer!
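To make that contract concrete, here is a purely hypothetical sketch of what such a compiler's interface might look like. None of these types, signal names, or stimulus plans exist today; the lookup table is a stand-in for the morphogenetic models a real system would need:

```python
# Hypothetical interface sketch for an anatomical compiler:
# a shape specification goes in, a stimulus plan comes out.
# All names and values here are invented for illustration.

from dataclasses import dataclass

@dataclass
class ShapeSpec:
    """High-level anatomical goal, e.g. 'regrow left forearm'."""
    target: str
    reference_anatomy: str  # e.g. a scan of the intact limb

@dataclass
class Stimulus:
    """One low-level intervention the delivery machine applies."""
    kind: str       # e.g. "bioelectric", "biochemical", "mechanical"
    site: str       # where on the body to apply it
    schedule: str   # when and for how long

def anatomical_compile(spec: ShapeSpec) -> list[Stimulus]:
    """Stand-in for the compiler: map a goal to a stimulus plan.
    A real system would derive the plan from models of morphogenesis,
    not a hard-coded table."""
    plan = {
        "regrow left forearm": [
            Stimulus("bioelectric", "wound site", "continuous, 24h"),
            Stimulus("biochemical", "wound site", "daily, 2 weeks"),
        ],
    }
    return plan.get(spec.target, [])

for s in anatomical_compile(ShapeSpec("regrow left forearm", "scan")):
    print(s.kind, "->", s.site)
```

The essential point is the type signature: like a digital compiler, it translates a declarative specification into imperative low-level instructions, except the instructions are stimuli rather than opcodes.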
Unlike digital compilers, an anatomical compiler would not enforce exact execution. It would work with an intelligent substrate that can self-correct and adapt. It would propose morphogenetic goals, then allow the living material to decide how to meet them. It would not control life but collaborate with it. Such a system would unite intention with regeneration and turn the plasticity of biology into a programmable medium.
Biological IDE
From first principles, the invention of an anatomical compiler, or something closely resembling it, feels quite certain; on what timescale, I have very little certainty. But allowing ourselves to exist in such a future, and following the existing paradigms of programming, we can extrapolate what other technologies will also be necessary. Luckily, in one of their papers, Levin et al. give us a proof of concept for "A Scalable Pipeline for Designing Reconfigurable Organisms". Living in this future, I believe that all of these technologies will be miniaturized and perfected to the point of becoming a "Biological IDE" that would let anyone develop their own "Biobots" at home for a variety of purposes.
A Scalable Pipeline for Designing Reconfigurable Organisms
In their 2020 paper, Sam Kriegman, Douglas Blackiston, Michael Levin, and Josh Bongard revealed a complete digital-to-biological design loop that produced functional multicellular organisms from scratch. The team called them reconfigurable organisms, though they are now widely known as xenobots. With this pipeline they were able to develop multiple morphological capabilities in living tissue:

* Locomotion: Some xenobots were evolved to move directionally across a dish by contracting and expanding like microscopic heart-driven paddles.
* Object Manipulation: Others gathered particles into clusters, spontaneously clearing their environment.
* Object Transport: Certain body plans evolved hollow centers that functioned as pouches to carry loads.
* Collective Behavior: Groups of xenobots exhibited coordinated motion, temporarily bonding and orbiting each other.
As the authors write in the paper's Significance section:
Their experiment marked, to my knowledge, the first time an algorithmically generated morphology was physically realized in a biological organism.
The Design–Build Pipeline Requirements
The xenobot workflow follows a continuous feedback loop between computation and biology. It begins with a set of simulated cell building blocks and proceeds through several iterative stages that give us a glimpse of what a biological IDE might one day generalize.
1. Design Generation
A biological IDE will one day be able to take a natural-language description of your intended behavior and deploy it to a soft-body physics simulator to search for the best designs.
2. Robustness Filtering
Integration tests against real-world physics will automatically be run on the designs from the prior step, filtering out those that would not withstand initial physical instantiation. "Biological fuzzing" will then be performed to filter out weaker designs that cannot withstand basic perturbations.
3. Build Filtering and Biological Realization
From these filtered designs, the anatomical compiler will generate stimuli to apply to initial living tissue inside an automated lab environment, either at home or "in the cloud".
4. Empirical Feedback and Model Update
Real-world behavior (locomotion efficiency, stability, group dynamics) is recorded and fed back into the simulator, refining its constraints.
Through this loop, the pipeline gradually learns how to express digital intent through biological matter.
Figure 1 in their paper shows us a sample of this in silico simulation to in vivo measurement feedback loop.
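The four stages above can be caricatured in a few dozen lines. To be clear, this is not the authors' pipeline: the "simulator" below is a made-up scoring function and the fuzzing is just Gaussian noise, but the loop structure, generate designs, filter for robustness, select, and feed measurements back into the next generation, mirrors the one described:

```python
# Toy version of the design loop: evolve candidate "designs" in a
# stand-in simulator, discard fragile ones via random perturbation
# ("biological fuzzing"), and feed behavior back into the next
# generation. The fitness function and noise model are invented;
# a real pipeline uses a soft-body physics simulator.

import random

random.seed(0)

def simulate(design):
    """Stand-in simulator: score how far a design 'walks'."""
    return sum(design) - 0.5 * max(design)

def robust(design, trials=20, noise=0.3):
    """Fuzzing filter: the design must keep most of its score
    under random perturbation of its parameters."""
    base = simulate(design)
    for _ in range(trials):
        perturbed = [g + random.gauss(0, noise) for g in design]
        if simulate(perturbed) < 0.7 * base:
            return False
    return True

def evolve(pop_size=30, genes=5, generations=20):
    pop = [[random.random() for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        survivors = [d for d in pop if robust(d)] or pop
        survivors.sort(key=simulate, reverse=True)
        elite = survivors[: max(2, len(survivors) // 4)]
        # "Empirical feedback": new designs mutate the measured best.
        pop = [[g + random.gauss(0, 0.1) for g in random.choice(elite)]
               for _ in range(pop_size)]
    return max(pop, key=simulate)

best = evolve()
print("best simulated score:", round(simulate(best), 2))
```

Even in this caricature, the key property of the real pipeline shows through: designs that only score well in the noiseless simulator are culled before "building", so what survives is behavior that transfers.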
The End
The work by Michael Levin and colleagues at the Levin Lab, which I have attempted to summarize here, needs no conclusion. Instead, for me personally, it is just the beginning of entirely new directions in human evolution.