Frequently Asked Questions

If your question is not in this list, you can also try asking our LevinBot (a chatbot trained on the contents of the Levin Lab website). Furthermore, our views are constantly evolving, as can be seen when you compare our current answers to our 2012 ones.

General Lab Overview

  1. What does your lab work on?
  2. What ties it all together?
  3. Why did this all change since the 2012 version of this page?
  4. Which model systems do you use and why?
  5. Why are all the projects so unusual and the emphasis different from most of the mainstream work in the field?
  6. Why all the emphasis on synthetic forms?
  7. Is your motto "Shut up and engineer"?

Bioelectricity

  1. What's all this bioelectric stuff? What is bioelectricity and why not stick to well-known biochemical signals?
  2. But where is the bioelectric code? We know where the DNA code is.
  3. What is bioelectricity in the great scheme of things?
  4. You talk about interfacing to cells and tissues at a higher level (communication, not micromanagement) but the techniques you use target ion channels and such. Isn't this still bottom-up control?

Cognition and Philosophy

  1. For the planaria that keep their memories after they regenerate their heads, where is the memory stored?
  2. What are some of these unusual terms you use?
  3. What is memory, agency, decision-making, cognition, goal-directedness, etc.?
  4. Isn't it a category error to attribute cognitive properties outside of brainy animals?
  5. Can we decide what degree of intelligence can be exhibited by cells, tissues, and other unconventional agents as a matter of philosophical commitment? Can we simply define intelligence as something that belongs to a certain kind of brain for example?
  6. All the examples of developmental plasticity you talk about - isn't that just complexity and emergence - why bring in concepts from behavioral science?
  7. How does "just physics" become cognition, decision-making, memory, agency, etc.?
  8. Under the view of agency and intelligence described in these 2 papers, how can we tell what level of cognition a given system has?
  9. By placing living things on the same spectrum as machines, and explaining biological phenomena via algorithms, are you hoping to advance a mechanistic, reductionist agenda?
  10. So is life/brain/body like a computer?
  11. Why is it so hard for us to recognize intelligence in unconventional embodiments?
  12. What kind of pushback does this set of views receive?
  13. What does this imply for machine learning?
  14. Given the argument that chimeric technologies and gradual evolutionary chains between living forms dissolve binary, crisp categories, what really is a human - what's a useful definition of a human, when brain structure and sensory/motor components can change radically?
  15. How does learning relate to direct cellular control?
  16. Isn't everything eventually supposed to be explained by the lowest level possible?
  17. What is it like to be part of the multi-scale Selves architecture described in your 2019 article The Computational Boundary of a "Self": Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition?
  18. What about consciousness?

Practical Implications and Ethics

  1. In what sense are Xenobots "bioengineered"? They have a wild-type genome and no inorganic components. How is it "robotics"?
  2. What are the practical implications of your work?
  3. So on your view of collective intelligence and homeodynamic setpoints, what could be a definition of "health"?
  4. What are the implications of these views for evolution or bioengineering?
  5. What are the long-term implications of chimeric and biobot technologies?
  6. What is the ethical status of enhancement - why should we want to improve our bodies and IQs with new extended mind, prosthetic, and hybrid technologies?
  7. Isn't { Xenobots, bioelectric repair, etc. } going too far?
  8. Shouldn't we be outraged over making living machines out of skin cells? (Xenobots etc.)
  9. Why did you specifically choose ciliated cells as a material - are they the only cells that can make biobots?
  10. { Xenobots, regeneration, enhancement, etc. } is not natural!
  11. What keeps me up at night? What am I worried about, with these technologies?

Miscellaneous

  1. What might be a relevant piece of art for this whole "multi-scale Selves" topic?

General Lab Overview

1. What does your lab work on?


2. What ties it all together?

The central question at the heart of our work in developmental physiology, AI, and cognitive science/philosophy is: how do embodied minds arise in the physical world, and what determines the capabilities and properties of those minds? We are interested in decision-making, memory, and optimal control in a wide range of evolved, designed, and synthetic hybrid systems. We use different model systems (from gene-regulatory networks to cell groups to collectives of behaving animals) to understand multi-scale dynamics that process information.


3. Why did this all change since the 2012 version of this page?

It didn't really. This is what I had in mind from day 1, but the website reflects the current work and what could be said at a given time, without going too far beyond our actual publications. It's a step-wise unrolling, with many surprises in the details but a stable core direction.


4. Which model systems do you use and why?


5. Why are all the projects so unusual and the emphasis different from most of the mainstream work in the field?

I am, fundamentally (and by training), a computer engineer with a deep interest in the philosophy of mind, and I suppose that's why my perspective on these questions may be different.


6. Why all the emphasis on synthetic forms?

Barring exo-biology, all we have access to is the N=1 example of life on Earth - the evolved phylogenetic tree, full of frozen accidents of the meandering path of evolution on this planet. Making general conclusions from this dataset is like testing your theory on the same data that generated it. Normal development is very robust and reliable, which obscures the power of biology for novel problem-solving. We need to expose cells and tissues to new environments and novel configurations, to really probe their competencies.


7. Is your motto "Shut up and engineer"?

Yes and no. On the one hand, I think the humanities, and questions of philosophy, are very important. So I do not believe that we should exclusively favor engineering at the expense of the bigger questions of life and meaning. On the other hand, engineering is a critical (perhaps the only available) method for deciding between competing worldviews and frameworks: the best ones are the ones that enable the most fruitful relationships with the world and its diverse levels of agency (from simple matter to other humans). We can decide between ways of thinking about the world by how much new engineering (discoveries, novel capabilities) they give rise to. Not just pre-diction (of existing systems) but "pre-invention" (how much do they facilitate novel research programs).

I view engineering in a broader sense of having a relationship with the physical world - of taking actions in physical, social, and other spaces. The cycle I like is: philosophize, engineer, and then turn that crank again and again as you modify both aspects to work together better and facilitate new discoveries and a more meaningful experience. Moreover, the "engineer" part isn't just 3rd-person engineering of an external system. I'm also talking about 1st-person engineering of *yourself* as engineer (change your perspectives/frames, augment, commit to enlarging your cognitive light cone of compassion and care, etc.) - the ultimate expression of freedom is to modify how you respond and act in the future by exerting deliberate, consistent effort to change yourself. I also include 2nd-person engineering - communicating (signaling, behavior-shaping) and relating to agential materials and other beings.


Bioelectricity

8. What's all this bioelectric stuff? What is bioelectricity and why not stick to well-known biochemical signals?

Bioelectricity refers to signals carried by the voltage gradients, ion flows, and electric fields that all cells receive and emit. It has been known for over 100 years that all cells, not just excitable nerve and muscle, exhibit steady-state long-term bioelectrical activity, and that this activity appears to be instructive for cell behavior during morphogenesis. While bioelectricity functions alongside biochemical and biomechanical events, it has a unique aspect. Much like in neuroscience, bioelectricity is the computational medium with which cellular collectives make decisions (about growth and form). Evolution discovered that electrical networks are a great way to compute, long before brains and muscle came on the scene. Bioelectricity is an ancient modality that serves as the proto-cognitive medium of the cellular collective intelligence that navigates morphospace (the space of possible anatomies). As such, it is a powerful interface that cells and tissues expose to us (and to each other), one that enables reprogramming for biomedical purposes (and for understanding evolutionary change).


9. But where is the bioelectric code? We know where the DNA code is.

  1. simplest/shortest: it's stored in very much the same way as information in the brain: in the electric states of cells (just like in neurons) and downstream modifications (long-term storage in cytoskeletal and transcriptional states).
  2. better: "it's stored in the stable bioelectric states maintained by cell networks." Just like in the brain, groups of cells make electrical networks that can stably store information. This is routinely modeled in neuroscience and is the basis of much of our technology; like memory circuits in volatile RAM, it's easy to store encodings in the electrical states of a medium that holds patterns over long timescales (see the sketch below this list). All tissues - not just brains - do that. So, the excitable medium which can store information is the voltage state of groups of cells (another, more familiar medium is DNA in groups of cells, and there are others such as cytoskeletal structures, etc.). The notion that "body pattern is stored in the DNA" is not that simple, depending on what you're asking. What is stored in the DNA is protein sequences - single-cell-level hardware information. Bioelectric patterns emerge from the complex dynamical interactions of ion channels and gap junctions opening and closing, and it's that physiological software that stores and processes patterning information.
  3. deeper still: There needs to be agreement on what "storing a code" really means. It's not simple, and there's a lot of work on this. Things are only codes with respect to what is reading or interpreting the code. So what we really need to do is talk about how bioelectric properties are interpreted by the tissues. There are 3 basic modes we've found: a) 1:1 prepatterns (like the electric face or your brain pattern), b) non-1:1 prepatterns encoding specific organs, like planarian head-tail info (which can be mapped onto heads or tails, though it is not visually obvious the way the electric face is) or eye spots, or c) binary triggers that say "build whatever goes here", like the tail/leg signals (which carry almost none of the detailed info of how to build it). This is the state of the art now - interpretation - which we still poorly understand but are working on. And lest we get too comfortable with how well the "DNA code" has been decoded, let's remember that we have no ability to predict anatomy from genome (other than by cheating - comparison with known genomes), and we can't tell in advance if a frogolotl (mixed embryo of frog and axolotl cells) will have legs, even with both frog and axolotl genomes in hand.
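
To make the RAM analogy above concrete, here is a toy simulation (a minimal sketch; the parameters are invented for illustration and do not model any particular cell type): a membrane voltage whose dynamics have two stable fixed points can hold a "written" state long after the writing stimulus ends, just like a latch holds a bit.

```python
# Toy model: a membrane voltage with two stable states acts as a 1-bit memory.
# All parameters are illustrative, not measurements of any real cell type.
import math

V_REST, AMPLITUDE, V_THRESH, SLOPE, TAU = -70.0, 50.0, -40.0, 5.0, 10.0  # mV, ms

def dVdt(v, i_inj=0.0):
    # Leak pulls V toward V_REST; a sigmoidal depolarizing term (mimicking
    # voltage-gated channel feedback) creates a second stable state.
    activation = 1.0 / (1.0 + math.exp(-(v - V_THRESH) / SLOPE))
    return (-(v - V_REST - AMPLITUDE * activation) + i_inj) / TAU

def simulate(v0, pulse_mv, dt=0.1, t_end=200.0):
    v, trace = v0, []
    for step in range(int(t_end / dt)):
        t = step * dt
        i = pulse_mv if 20.0 <= t < 40.0 else 0.0   # brief "write" pulse
        v += dt * dVdt(v, i)
        trace.append(v)
    return trace

trace = simulate(v0=-70.0, pulse_mv=60.0)
print(f"before pulse: {trace[100]:.1f} mV, long after pulse: {trace[-1]:.1f} mV")
# The voltage stays in the depolarized state after the stimulus is gone:
# the "bit" persists in the physiological state, not in any structural change.
```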


10. What is bioelectricity in the great scheme of things?

Bioelectric dynamics in the body are, like in the brain, the computational medium of the collective intelligence of cell groups. Evolution discovered ion channels and gap junctions as ways to implement powerful laws of physics and computation for memory, scaling of goals, and integration of information. It uses the bioelectric layer to achieve evolvability and robust plasticity from the indirect encodings of form and function via genetic specification of hardware. Ion channels on the cell surfaces are the interface - the programming interface, if you will, to this physiological software.

Developmental bioelectricity shows us that deep neuroscience isn't just about neurons any more than computer science is about current laptops. And indeed, it's not even clear what neurons really are: the powerful bench techniques of neuroscience do not distinguish neurons from non-neural cells. The distinction is a partition that we have invented, and it's useful in some cases (like studying neuroanatomy), but obscures important biology in many cases. Neuroscience provides us with frameworks for seeing how individual competent cells scale up to emergent, large Selves. This is a far more general phenomenon than just "neurons make a brain". Isn't it interesting that Alan Turing was interested in both intelligence and morphogenesis? It's no accident, because morphogenesis is an excellent example of an unconventional intelligence (which uses the same medium - bioelectricity - as evolution chose for our brains).


11. You talk about interfacing to cells and tissues at a higher level (communication, not micromanagement) but the techniques you use target ion channels and such. Isn't this still bottom-up control?

Of course, there is always a physical story to tell about a process if you want to zoom in to the lowest level of description - it's never going to be magic, it's always physics underneath. But consider, for example, someone programming a computer, or explaining their reasoning to another person: an observer could focus on the mechanics of the keyboard buttons being pressed, or on the specific air waves and molecules being generated, respectively. In both cases, that observer would be missing everything that is important about the interaction, and more specifically, that frame of analysis would not facilitate either programming or effective communication. The molecular details simply don't capture the understanding and control that is inherent in the interaction between systems that have higher agency than molecules. So, when we target ion channels, we only know which ion channels to target because we understand what the other cells are tracking - not the identity of the channel protein, but the voltage pattern (and we've shown that in fact you can get the same effect by using many different channels and types of ions, as long as you get the voltage right - it's a coarse-grained master variable, and the molecular details can differ considerably). While you can describe the intervention as a simple reductionist physical chain of events after we've shown an example of voltage control, the ability to infer novel interventions (i.e., discovery) requires that you understand the higher-level dynamic that's going on, which cannot be captured by a story about the chemistry of that specific channel and ion. The most potent control here is gained (just like in many aspects of neuroscience and the behavioral sciences) at a higher level that abstracts away the details of the many different ways there are to send a given message.
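
A small worked example of the "voltage as coarse-grained master variable" point, using the standard Goldman-Hodgkin-Katz voltage equation (the ion concentrations are textbook-typical values; the two permeability profiles are invented for illustration, not taken from any of our experiments): two cells with quite different channel repertoires can rest at nearly the same membrane voltage, which is the variable the neighboring tissue actually reads.

```python
# Goldman-Hodgkin-Katz voltage equation: V_m depends on relative ion
# permeabilities (P) and concentrations, not on which channel proteins
# happen to produce those permeabilities.
import math

RT_OVER_F = 26.7  # mV at ~37 degrees C

# Textbook-typical ion concentrations in mM: (inside, outside)
K, NA, CL = (140.0, 5.0), (10.0, 145.0), (10.0, 110.0)

def ghk_vm(p_k, p_na, p_cl):
    # Note Cl- is an anion, so its inside/outside roles are swapped.
    num = p_k * K[1] + p_na * NA[1] + p_cl * CL[0]
    den = p_k * K[0] + p_na * NA[0] + p_cl * CL[1]
    return RT_OVER_F * math.log(num / den)

# Two invented permeability profiles, i.e., different channel repertoires:
profile_a = dict(p_k=1.0, p_na=0.05, p_cl=0.45)   # K+-dominated
profile_b = dict(p_k=0.6, p_na=0.02, p_cl=1.0)    # Cl--dominated
print(f"profile A: {ghk_vm(**profile_a):.1f} mV")  # ~ -65 mV
print(f"profile B: {ghk_vm(**profile_b):.1f} mV")  # ~ -67 mV
# Different molecular hardware, nearly the same voltage "message".
```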


Cognition and Philosophy

12. For the planaria that keep their memories after they regenerate their heads, where is the memory stored?

We don't know yet. But even more critical than the question of where it is stored is the question of how it is imprinted onto a nascent regenerating brain and then interpreted. This gets to a core philosophical issue about personal identity that is relevant to all of us. We don't have real access to the past - at every moment, we have to actively reconstruct a model of the past from the evidence that the past has left in our brain and body - engrams for us to interpret as memories in order to maintain a coherent life story. So even without our head being actively cut off and regenerating, time itself is making sure that we are all like planaria, and also a bit like anterograde amnesia patients, who have to leave themselves notes every day about what's going on (it's just that for most humans, that scratchpad happens to be inside our skulls). We have to constantly interpret and reconstruct our memory engrams just like the planaria.


13. What are some of these unusual terms you use?

I've introduced a few, to transmit important new concepts in this developing field.


14. What is memory, agency, decision-making, cognition, goal-directedness, etc.?

Each of these is a metaphor, like all scientific concepts - a package of methods and relationships between ideas that offers ways to think. It is a lens through which we can choose to view a particular context. All of these terms are observer-dependent and relative to a reference frame (a problem space), and each one has advantages and limitations. The quality of each lens is determined by how much prediction, control, and insight (ability to drive novel questions and research progress) it enables in a specific context (not by philosophical pre-commitments). Thus, proposing a precise definition of each of these is essential when embarking on a specific discussion about some system.

Empirical utility (facilitating the making of testable predictions, and more importantly, driving new experiments and discoveries), not philosophical (armchair) commitments, should be the criterion by which such definitions are evaluated. Moreover, a scale of analysis should be made explicit in all definitions. For example, many attempts to define decision-making break down because of an implicit focus on the molecular event itself. But this is not the only level of analysis and may be sub-optimal in many circumstances. Rather, the degree to which an event is a decision has to be judged with respect to how much the optimal understanding and control of that event by an observer (e.g., scientist, or other biological system) will require knowledge of the large-scale goals, adaptive cycles, reward functions, and context - it is only defined within a context that may include evolutionary or engineering cycles. Specifically, the degree to which a process is a decision is proportional to the size of the informational light cone of spatiotemporal events that need to be considered for optimal understanding and management of that event. Very mechanical behaviors can be captured by the immediate, local pushes and pulls occurring to an object. In contrast, properly understanding the events happening in a complex agent requires considering events in the distant past (due to memory), the distant future (due to predictive capacity), and in other locations (due to integrated information across space). Similarly, the closely-related concept of free will cannot be sought in molecular events (where only mechanical necessity and quantum randomness can be found) but in the large-scale behavioral functions that are best understood as a cognitive system curating its own structure and future possibilities by rich chains of action that take place over time.

Some definitions of relevant words as I use them:


15. Isn't it a category error to attribute cognitive properties outside of brainy animals?

There really are no frameworks for guessing in advance how many "beings" exist inside a chunk of neural tissue like a brain, or for knowing how to evaluate an alien species with a radically-different architecture, with respect to cognitive properties. We have very few examples to work with (and for consciousness, just 1 - yourself); we need to be open to novel embodiments of mind.

It's easier if we give up binary dichotomies. In the pre-scientific past, there were just 2 options: "mind like a human's" or "mere physics, like everything else". If those are your only 2 options, then of course scientists might want to say that evolution is completely blind, robots and cyborgs are completely machines with 0 cognition, etc. etc. Acknowledging a continuum view frees us from having to draw arbitrary distinctions and lets us get on with the more fruitful research program of importing powerful tools from behavioral science (beyond neurons) to make testable hypotheses about what kind, and how much, cognitive capacity we can usefully show in any system.

Specifically relevant to our work is the fact that until now, it's been a standard assumption of the mainstream paradigm that the concepts of chemistry and physics are the right set of tools for developmental and regenerative biology. I think this is a testable assumption (not an unimpeachable philosophical stance), and that in fact the tools of behavioral and cognitive science are also apropos. Living tissues are not the kinds of simple machines that are effectively tamed by low-agency tools.


16. Can we decide what degree of intelligence can be exhibited by cells, tissues, and other unconventional agents as a matter of philosophical commitment? Can we simply define intelligence as something that belongs to a certain kind of brain for example?

People often do, but it's not a good idea. The position of any system on the Spectrum of Persuadability is a matter of experiment, not armchair preconceptions. On the left side of that spectrum are things like bowling balls - to predict and control them, you focus on their landscape. But to control and predict complex systems (like living beings, and some kinds of robots and autonomous vehicles), the real landscape is not nearly as important as the agent's *perception* and internal beliefs about the landscape.

So, are cells and tissues more like a bowling ball on a landscape or like a mouse on a landscape? The reason it matters is that it determines which kinds of tools (practical and conceptual) you are empowered to use. The standard assumption of biology is that concepts from chemistry and physics are exclusively the right tools for developmental and regenerative biology (bottom-up approaches). But this is a limiting assumption; treating this as an empirical question instead of a philosophical commitment facilitates research and new discoveries. When we do experiments to probe systems using techniques from other fields (e.g., behavioral science), we often get surprises - intelligent behavior in unexpected places. There needs to be a kind of impedance match between tools and what they are supposed to study. The tools of chemistry and physics are low-agency apparatus, and thus they only see mechanisms and not mind. It requires a mind to be able to detect agency and interface with it.

My lab has been pursuing the hypothesis that the tools of computer science, cybernetics, and behavioral/cognitive sciences are even more apropos, for some purposes in the biological sciences, than those of chemistry and physics, because living tissues are not the kinds of simple machines that are appropriate for those low-agency approaches. By borrowing concepts from fields that focus on information and cognition, we discover novel competencies that we can exploit in biomedical and engineering settings.


17. All the examples of developmental plasticity you talk about - isn't that just complexity and emergence - why bring in concepts from behavioral science?

I claim 2 key things.
A) The amazing capabilities of morphogenesis are *not* simply the fact that by following simple rules, complexity reliably emerges. This open-loop emergence indeed does not require any of the cognitive approaches. Instead, what we see morphogenetic systems doing is not just rolling forward toward emergent outcomes but doing novel actions in order to reach the same goal despite perturbations. These homeostatic and allostatic competencies are not found in simple emergent systems and, by their very nature, begin to necessitate tools from the domain that best deals with agents with goals and problem-solving competencies: behavioral and cognitive science.
B) If it were optimal to predict, control, and engineer such systems using standard tools of emergence and complexity science, that would indeed not require my approaches. The claim is that treating morphogenesis as a collective intelligence operating in morphospace provides additional control and discovery capabilities over competing traditional approaches, which are reviewed in many of our papers, such as these:


18. How does "just physics" become cognition, decision-making, memory, agency, etc.?

Assuming that these terms have binary, sharp definitions leads to thorny pseudoproblems in which it's very hard to see how these mentalistic features arise in a physical world. Instead, we should think of them as different lenses through which we see events - sometimes the physical lens is most useful, sometimes the agential. To make it clear that there is no significant gulf between physics and cognition, always ask "what would a truly minimal, evolutionarily-ancient version of this capacity look like?". This is why issues of basal cognition don't ever depend on the details of what specific forms (paramecia, Physarum, etc.) can or can't do. Regardless of whether a given simple creature has or doesn't have a certain type of learning, for example, we know that there must be some simple form, produced by a gradual process of biological reproduction, that is a troublesome case sitting between obvious cognition and simple responses. If we didn't yet have data on basal cognition, we could still be assured that we simply hadn't looked at the right problem space in the right way, for some microbe or other creature.

Unavoidably, if you go back far enough through evolution, the most minimal version of "decision", "memory", etc. will look like physics. The journey toward advanced cognition is gradual - there is no bright line; and difficult-to-classify cases are guaranteed to exist because of the evolutionary continuum (and the ability to make a chimeric system between one that has the property and one that doesn't). The difficulties disappear if we learn to ask not "Whether this system is... " but "How much ... and what kind of ... does it have?".

We must also avoid the tendency to continuously move goalposts. This happens all the time in AI research, where people say that whatever is doable by machine now, that must not be really AI - true Intelligence is whatever we can't engineer yet. When research uncovers the mechanisms underlying any example of basal cognition, people have the tendency to say "ah, I see how that works, so then that's just physics, that's not real memory/decision/cognition". We have to get over the idea that seeing a causal mechanistic chain automatically evaporates cognition or agency. Many people are (implicitly) still expecting some sort of magic underneath that is dispelled by clear explanations and mechanisms. Of course it's physics underneath - what else could it be? The problem is that we shouldn't be looking for cognition at the lower levels - it's apparent when looking at the system top-down (in cybernetic descriptions of agents' teleonomy), not bottom up.

Binary "real cognition" (to be contrasted with the "metaphorical cognition" to be found in cells, tissues, etc.) is a pseudo-scientific folk notion that doesn't take evolution or bioengineering seriously. Thinking of agential models of unconventional agents as "just metaphors" ignores the reality that all scientific concepts are metaphors; the question is not whether something is a metaphor, but what practical advantages any given metaphor enables. No definitions in this field which posit sharp lines are likely to survive the next few decades of bioengineering advances.

The same is true of anthropomorphism - there is no such thing; humans have no magic that can mistakenly be bestowed on others. We have to get over our teleophobia and realize that human minds have no monopoly on decision-making, intelligence, and goals. If the expectations for those features are scaled appropriately to other systems of study, it is reasonable and essential to look for them in other implementations. All of these advanced human capacities evolved from much simpler roots during the evolution of life. The key is to formulate models that use the optimum degree and kind of cognition to model any system most efficiently.

To help think about these things, work backwards. Don't start by asking if amoebae are conscious; start by acknowledging that you are, and then ask yourself: on your journey backwards to a quiescent oocyte (or, evolutionarily, to a lower primate and back to a microbe), when does this property wink out? Nowhere - you will not find any clean line where a certain stage has it and the stage just before it doesn't. A gradualist (continuum) view is the only defensible position, I think.


19. Under the view of agency and intelligence described in these 2 papers, how can we tell what level of cognition a given system has?

Cognitive claims are just engineering protocol claims. When you say that system X is at some specific level of cognition, what you are really offering is a list of engineering protocols that are good for managing it, including how much autonomous functionality can be expected from it. The level of cognition of a system can be defined as the highest level of cognition that it is helpful to attribute to it when attempting to predict, control, or communicate with it. It is the cognitive level of the most efficient model on the persuadability continuum that you can apply to the system. This means it is observer-dependent, not objective/unique. Under this pragmatic stance, a level of cognitive sophistication applies not to a system but to the interactions an observer can have with that system - it's in the eye of the beholder. Thus, when you estimate the intelligence or cognition of a system, you are in effect taking an IQ test yourself, because it requires a certain degree of intelligence to recognize it in others, and it's easy to miss in unconventional agents. If you don't know what problem space the system is operating in and can't recognize how well it navigates that space, you will under-estimate its cognition, often at great opportunity cost. Turing saw this clearly, framing his classic test "in the eye of the beholder".


20. By placing living things on the same spectrum as machines, and explaining biological phenomena via algorithms, are you hoping to advance a mechanistic, reductionist agenda?

No, just the opposite. My work is fundamentally rooted in the organicist tradition. But I reject the simple dichotomy, binary thinking, and zero-sum-game approach that says that in order for conventional living beings to inspire the necessary amount of awe and respect, the rest of the universe has to provide a strong contrast and be entirely mindless. My framework does not reduce the importance, magic, or moral worth of living beings; rather, it hopes to give insight into their essential nature that goes far beyond familiar implementations. I think that the holistic, organicist community does not take its own views seriously enough and stops short of where these ideas really need to go. It's not that living things are less amazing than we thought. It's that we did not properly appreciate what "mere matter", algorithms, and the laws of cybernetics were actually capable of. Compared to popular approaches to this deep question, this view sees more life and mind, not less.

So what is a machine? A "machine" is any system that is understandable to some extent from a 3rd-person perspective - it has features that allow other observers to predict and rationally manipulate its behavior. On this account, human beings (and other animals) have machine-like aspects because bioengineers, parasites, etc. are able to hack our bodies (and cognitive systems) to make certain things happen. But of course, those machine-like aspects do not tell the whole story, because there is also a lot of richness to be had from engaging with the 1st-person perspective - benefiting from the high levels of agency, autonomy, and wisdom that living systems can offer (complementing the control lens of the engineer with a relationship lens where you learn from, not just manipulate, the other side of the interaction). Is a human being a machine? That depends on you, the observer, and what frame you choose to take for a given interaction with that human. Various contexts favor different degrees of deploying toolkits appropriate to dealing with machines or high-agency beings.


21. So is life/brain/body like a computer?

Yes and no. It's certainly not like the computer architecture most of us use today - a linear, deterministic, centrally-controlled process. But it does have some features which concepts in computer science really help to understand. These key similarities include the ability of multiple subsystems to encode and process symbols, to be reprogrammable (new behavior patterns from the exact same hardware), multi-scale causality (yes, chemistry at the bottom; but also algorithms/cognition at the top, which have causal power), and perhaps the most powerful concept of all: abstraction layers, which hide the complexity of the micro-scale details underneath to allow efficient control by hacking the system at higher levels, using targets that do not exist at lower levels. This allows evolution to work over a highly competent material, and cells and tissues to behavior-shape each other in complex and adaptive ways.
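
A toy software sketch of the abstraction-layer idea (purely illustrative; the numbers and names are invented, and nothing here models real biology): a controller that reads and writes only an aggregate variable - a "target" that does not exist at the level of any individual unit - can still reliably steer the collective, without micromanaging any micro-detail.

```python
# Toy illustration of control via an abstraction layer: the controller only
# sees and acts on an aggregate variable (the mean), which does not exist
# at the level of any individual unit. All numbers are arbitrary.
import random

random.seed(0)
units = [random.uniform(-80.0, -60.0) for _ in range(100)]  # micro-states

def macro(units):
    return sum(units) / len(units)  # the only variable the controller reads

def control_step(units, setpoint, gain=0.2, noise=0.5):
    error = setpoint - macro(units)  # high-level error signal
    # The correction is broadcast; no unit is individually micromanaged.
    return [u + gain * error + random.gauss(0.0, noise) for u in units]

setpoint = -30.0
for _ in range(40):
    units = control_step(units, setpoint)
print(f"collective state: {macro(units):.1f} (setpoint {setpoint})")
# Individual units still fluctuate, but the collective variable is held
# at the goal - efficient control without tracking any micro-detail.
```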


22. Why is it so hard for us to recognize intelligence in unconventional embodiments?

Because our sensory organs evolved to look outward, calibrated for medium-sized objects moving at medium speeds - our training set has been observing behavior as motion in the 3D world, and that is the kind of intelligent problem-solving we are reasonably good at recognizing and managing. If we had senses (like bio-feedback) that routinely let us directly feel how well our inner organs were navigating physiological space every day, we would have no problem recognizing the function of the pancreas (for example) as intelligent behavior. Embodiment is critical for intelligence, but it doesn't necessarily mean embodiment and motion in 3D space - embodiment can occur in many different problem spaces. In the end, the relevant factor is not whether we, as external observers, are smart enough to notice if the system has a body in some unconventional problem space: it's whether the system itself believes it has a body and a perception-action loop in a space it models.


23. What kind of pushback does this set of views receive?

A) One claim is that we can just do the experiments without needing all the philosophy. I think there is no such thing as experiments without a philosophy, explicit or unexamined, that constrains some approaches and facilitates others. My claim is that over the last 30 years, we've done experiments that were novel (not previously done) because other competing views did not suggest those experiments. Thus I believe this framework generates novel discoveries and empirical progress, as frameworks should.
B) Organicists tend to like the part where I extend mind to unconventional aspects of the biosphere (the holistic aspects), but they do not like the fact that in my framework, this merges smoothly and continuously into "machines" on the left side of the spectrum - they prefer a sharp separation, and worry about an engineering approach that they think will diminish the moral worth and majesty of life.
C) Molecular biologists go the other direction - they tend to like the engineering and computational metaphors, but don't approve of the claim that this merges smoothly into cognitive science on the right side of the spectrum. They worry about profligately painting mental terms onto things which are well-described by mechanistic approaches (which my framework explicitly does not do).
D) There is also lots of resistance of the form "That's just chemistry and physics, it's not cognitive", which rests on philosophical commitments about what should and shouldn't be cognitive (and unspoken, quasi-religious assumptions that humans are somehow magical and that something should be underneath the cognition that is something other than chemistry and physics). I claim that such views should be empirical claims, not a priori feelings, and need to be tested for their utility in driving research; i.e., it's then on the skeptic to specify what they think the necessary and sufficient conditions should be.


24. What does this imply for machine learning?

One implication is that a crucial test of general AI should be the capability of detecting agency in others. Machine learning systems should not only exhibit cognition, but one of their skills must be to recognize and characterize cognition in the other systems with which they interact. Synthetic cognitive agents should not only be able to pass Turing Tests (or mini versions thereof, in other spaces) but should also be able to administer them.


25. Given the argument that chimeric technologies and gradual evolutionary chains between living forms dissolve binary, crisp categories, what really is a human - what's a useful definition of a human, when brain structure and sensory/motor components can change radically?

Useful definitions of human need to be developed for future discussions of how upset we should be when our bodies and brains are replaced with novel architectures or evolved modifications (or the natural species is supplanted entirely), how to estimate the capacity of cyborgs etc. for agency and moral judgement in legal settings, and for ethical considerations of responsibility toward beings whose composition and origin are very different from our own. I don't know the right answer, but I suggest that one useful direction is to define a human as a being that can harness its IQ and goal-directed behavior at a specific level of compassion - humans have a larger cognitive boundary than other beings known to date, and we can define as human a being with a minimal level of capacity to pursue goals that are aimed outwards (rather than at its own goals) - at increasing the well-being of others. "Human" should be a term that indicates achievement of a level of ethical sophistication, not directly derivable from genotype, composition, or origin story (evolved, engineered, or a mix of the two).

Indeed, the "proof of humanity certificates" which are being developed in this time of advances of AI put the problem most clearly. What do you really want to know, as proof of "humanity"? Is it having a natural human genome, or a standard evolved set of anatomical structures? I don't believe those are useful criteria. What we really want to know, when making sure we're dealing with a human, is that they have a minimal level of competency for compassion, the right size of cognitive light cone to be able to care about the things we care about, and face the same existential battles that we do (autopoietic self-construction, an impermanent fluid self, limitations that drive and constrain actions, etc.) - beings who meet those criteria are the ones with whom we can have human relationships. The rest (what combination of evolved/engineered materials they are made of etc.) are as irrelevant as other details of origin and appearance which society has fought hard to dethrone as metrics for how we should treat each other.


26. How does learning relate to direct cellular control?

Learning is a more effective version of what we attempt when we micromanage the function of a brain from the outside; a learning system is doing the same thing to its own brain - writing into the memory medium and controlling effectors, but doing it better than our clumsy external interventions. Evolution provides several physiological software layers on top of the lowest-level molecular modules because that's the most efficient way to control them (it's using the higher-level interfaces), and we should do it too - transformative regenerative medicine by taking advantage of the intelligence of the tissues, which enables us to work in simpler reward spaces, not gene expression spaces - using stimuli, not rewiring. See Top-down models in biology: explanation and control of complex living systems above the molecular level.


27. Isn't everything eventually supposed to be explained by the lowest level possible?

No, see the recent advances in information theory:

Consider the Game of Life cellular automaton. This world has a deterministic, simple physics, and you can predict all microstates going forward into infinity. And yet, if our visual system wasn't tuned to find "persistent moving objects", we would have no concept of a "glider", and then we wouldn't think of making Turing machines out of glider streams: our engineering capacities in this space are directly potentiated (i.e., objectively made better) by having the ability to conceive of macro-scale entities.
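
Here is a minimal sketch of that example (standard Game of Life rules; the grid size and step count are arbitrary choices): the micro-physics is pure neighbor counting, yet after 4 steps the "glider" macro-object has coherently moved one cell diagonally - a regularity that is invisible if you insist on describing individual cells.

```python
# Conway's Game of Life: the micro-rule is just neighbor counting, yet a
# "glider" (a macro-level object) translates one cell diagonally every
# 4 generations - a pattern visible only at the coarse-grained level.
import numpy as np

def step(grid):
    # Count the 8 neighbors of every cell via array shifts (toroidal edges).
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
for (y, x) in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # the glider
    grid[y, x] = 1

before = np.argwhere(grid)
for _ in range(4):
    grid = step(grid)
after = np.argwhere(grid)
print("displacement of every live cell:", set(map(tuple, after - before)))
# -> {(1, 1)}: the same shape, shifted down-right. "Glider" is a concept
#    that exists only at the macro level, yet it is objectively useful.
```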

Also, imagine what would have happened if, at the turn of the 20th century, physicists had thought that they could track each molecule of a gas. We might have missed out on the very deep discoveries of thermodynamics, resulting from coarse-graining and taking seriously higher levels of description. Modern biologists have the feeling that soon we will track every molecule (through big data and omics approaches), and this is keeping us from finding things like a Boyle's Law for biology. Always looking at the most detailed level can obscure great truths.
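
A tiny numerical version of that thermodynamics point (an ideal-gas sketch; the particle count, temperature, and mass are arbitrary illustrative choices): the macro-law relating pressure and volume emerges from averaging over molecular details, and tracking individual molecules adds nothing to it.

```python
# Coarse-graining illustration: ideal-gas pressure emerges from averaging
# molecular velocities (P * V = N * m * <vx^2> = N * k * T), so the macro
# law holds without tracking any individual molecule.
import random

random.seed(1)
K_B, TEMP, MASS, N = 1.38e-23, 300.0, 4.65e-26, 100_000  # SI units; N2-like mass

# Microstate: one velocity component per particle, Maxwell-Boltzmann distributed.
sigma = (K_B * TEMP / MASS) ** 0.5
vx = [random.gauss(0.0, sigma) for _ in range(N)]

mean_vx2 = sum(v * v for v in vx) / N
for volume in (1.0, 0.5, 0.25):  # m^3, isothermal compression
    pressure = N * MASS * mean_vx2 / volume   # kinetic-theory pressure
    print(f"V={volume:5.2f}  P*V={pressure * volume:.3e}  (N*k*T={N * K_B * TEMP:.3e})")
# P*V is the same at every volume: Boyle's Law, visible only after
# coarse-graining away the individual molecular trajectories.
```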

The bottom line is that we need to outgrow our teleophobia, and realize that under-estimating agency is as bad as overestimating it.


28. What is it like to be part of the multi-scale Selves architecture described in your 2019 article The Computational Boundary of a "Self": Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition?

Each subunit has its own experience (including us). To a cell, competently going about its business of maintaining physiological homeostasis and planar polarity with its neighbors, getting even a glimpse of the immensely huge and alien goals of the person in whose body it lives would be a horror that even Lovecraft could not have imagined. As of yet, we have no inkling of how to detect what supersystem we may be part of, or what problem space it is navigating.


29. What about consciousness?

My work is mostly about objective, external phenomena such as cognitive behavior. But I can say a few things. First, I think that consciousness cannot really be studied in the 3rd person. The only way to do it is to become part of the experiment; and, as mystics have long said, you can't stay the same while doing that (unlike normal, objective, 3rd-person science). It can only be studied in the 1st person, for example by modifying your own conscious experience by merging with your subject (in a stronger way than seeing data about their brain come from your visual system looking at instruments). See the last figure of Technological Approach to Mind Everywhere (TAME): an experimentally-grounded framework for understanding diverse bodies and minds.

The reason it is a Hard Problem (in Chalmers' sense) is that there is no obvious format for a prediction in this theory. Imagine we had a correct (or even good) theory of consciousness; now we ask it to predict the conscious experience of a creature in a specific scenario. What form will the predictions of a theory of consciousness take? Not predictions about its behavioral dispositions or physiology - predictions about its experience. What would that prediction even look like? We have no idea; unlike other scientific subjects, where the predictions have obvious forms in quantifiable terms that can be communicated between scientists, we don't know how to move information from a theory of consciousness to scientific observers other than to connect them to the system and let them experience it for themselves (as a new creature emergent from this mind-meld).

A second thing I can say (and I've argued it in this short talk) is that for the same reasons we associate consciousness with brains - behavior and physiological mechanisms - we should take seriously (nonverbal) consciousness in other components of bodies. Much of what neurons in the brain are doing is happening everywhere in the body (and of course "you" aren't aware of it, any more than you are directly aware of the consciousness of other humans). But in any case, I don't pretend to have any solutions to the Hard Problem - for now, I'm sticking to observable cognitive questions.


Practical Implications and Ethics

30. In what sense are Xenobots "bioengineered"? They have a wild-type genome and no inorganic components. How is it "robotics"?

Engineering is not just adding new circuits or components, it's more general - the rational modulation of natural systems to change their functionality or behavior. In the case of the Xenobots, we did something interesting: removing influences and constraints. By liberating the skin cells from the rest of the embryo, we unlocked a bunch of potential (of these competent subunits) which was being kept suppressed by developmental signals from other tissues. These skin cells were being told to have a quiet, boring 2-dimensional life as a barrier layer of an active system (the tadpole). On their own however, we see their default geodesic in problem space: what they would rather do, when left to their own devices, is to have a more exciting 3-dimensional life as a Xenobot. This is control by releasing constraints to reveal the native problem-solving capabilities of cell collectives that were not apparent in their default context.

Similarly, robotics is not about micromanaging every functionality and programming every capacity directly. That is how robotics started, but it is just an early phase of the field, where the engineer works with passive, dumb parts. The more advanced phase, which we unlock by working with biological components, is that we can work with competent parts that do things we don't always have to micromanage. Learning to create autonomous machines with emergent functions (robotics) involves guided self-assembly, where we provide signals and conditions but rely on multiple levels of competency and spontaneous behavior from our materials. It is an outdated view to think of robots as necessarily being highly predictable, metallic, and precisely-engineered at all levels. Xenobots are an ideal example of robotics as a collaborative process between the human designer and materials that have competency at multiple scales.


31. What are the practical implications of your work?

Our projects are basic research aimed at understanding fundamental mechanisms and dynamics. However, once uncovered, these mechanisms suggest control points for biomedical intervention. Thus, our work suggests novel approaches to the detection, prevention, and repair of birth defects (especially involving the laterality of the heart and various internal organs, and brain/craniofacial disorders), new diagnostic and treatment modalities for some types of cancers, approaches to induce regenerative repair of limbs, eyes, spinal cords, and faces, and the discovery of new nootropic drugs (compounds that increase intelligence or improve memory, for example). Specifically, our strategy is to find the highest-level signals we can use to communicate with cell collectives to build specific shapes, avoiding bottom-up micromanagement of pathways.


32. So on your view of collective intelligence and homeodynamic setpoints, what could be a definition of "health"?

This term can usefully mean different things in different contexts, but how about this as a definition focused on the multiscale competency architecture: health is a descriptor of the degree to which the flow of control successfully spans levels of organization. That is, higher levels (e.g., the social mind and advanced cognition) successfully deform the energy landscape for the lower levels (organs, cells, and molecular pathways), while those lower levels competently solve problems to allow the higher levels to communicate, delegate, and incentivize instead of micromanaging details. The levels of competition are kept just high enough to enable coordination, stress is low because each subsystem at its own scale and in its own space is close to its homeodynamic setpoint, and the boundaries of each agent at its own scale are crisp and obvious to all (avoiding dissociative identity defections, whether psychological or cancerous). Adaptive control and communication relationships between Selves at all levels of organization within the body, not just lateral homeostatic states, are what is crucial for optimal health, from molecular pathways to societies and ecosystems.


33. What are the implications of these views for evolution or bioengineering?

I think that evolution is not just about "how to make feature X". The parts are very competent, and they will do things on their own, by default (as our Xenobot and other experiments show). The real trick is to bend their action space so that the system's subunits do (or don't do) what's good for the large organism - it's not just about evolving mechanisms to build organs. The default has action and goal-seeking at every level - the parts do things in their local problem spaces if left to their own devices. So, for evolution to adapt structure and function, it's not all micromanagement - it's "guided self-assembly" and behavior shaping, the same way that bioengineers work not with passive materials but rather with agential matter. We have to modulate what the cells do - we don't micro-specify features, we try to guide them toward outcomes, if we can, but the cell collectives do all the heavy lifting. This is seen in cancer too. "Why is there cancer?" is the wrong question, because the default for cells is to replicate and migrate. The real question is: why is there ever anything but cancer - how does this normal behavior of cells get suppressed in vivo? The key effort is to achieve a mature science of collective intelligence, to learn to predict and control what the default geodesics are for cell collectives in morphological, physiological, transcriptional, and behavioral spaces.

There are implications for the intellectual property system, for example. With classical, passive materials, where everything is in what the craftsman did, patenting the craftsman's recipe makes sense. That model is not yet suitable for work with active materials, where the inventor is a collaborator with the material - the outcome is partly the method, but it's partly what you've discovered about the competency of the agential material. It's different than trying to patent natural laws, because those are (probably?) passive and constant. Whereas agential matter (biological components, and someday multi-scale engineered materials) is helping the inventor get it done - it's doing a lot of the heavy lifting, and we have to figure out how to handle patents in cases like that. There will be more and more of that as tech evolves. It's probably the same as with inventions by AI agents - it's a collaboration.


34. What are the long-term implications of chimeric and biobot technologies?

Over and above useful synthetic living machines, sandbox systems like Xenobots are extremely safe ways to begin to hone the science of decreasing radical surprise. We work on creating tools to predict or manage what novel collective agents (from Internet of Things to robotic swarms to groups of cells in a petri dish to bacterial colonies) will want to do. We are surrounded by highly impactful technologies whose drives we do not understand; it is imperative to use model systems like Xenobots (swarms made of intelligent components) to begin to develop frameworks for understanding where complex systems' goals come from and how they can be guided toward life-positive outcomes.


35. What is the ethical status of enhancement - why should we want to improve our bodies and IQs with new extended mind, prosthetic, and hybrid technologies?

One key aspect is morphological freedom (a.k.a., radical freedom of embodiment). We were all born into physical and mental limitations that were set at arbitrary levels by chance and genetics. Even those who have "perfect" standard human health and capabilities are limited by anatomical decisions that were not made with anyone's well-being or fulfillment in mind. I consider it to be a core right of sentient beings to (if they wish) move beyond the involuntary vagaries of their birth and alter their form and function in whatever way suits their personal goals and potential. We spend a lot of time talking about freedom of speech and behavior, but all of those are derived from the fundamental bodies and minds we have - disease, aging, birth defects, and the vagaries of the random evolutionary process have embodied us in ways that fundamentally limit the kinds of thoughts we can have and what we can achieve. It is everyone's right to improve as they will, and our duty as scientists (and supporters of progress) to enable methods for liberation from arbitrary constraints of the evolutionary process as it happened to occur on this planet.

A second aspect is compassion. Each of us has a cognitive light cone which determines the size of the goal states we can actively care about. By increasing our cognitive capacity, and enlarging that light cone, we become capable of greater compassion - we become able to functionally care about the well-being of more sentient beings. This is not about feeling emotions (love) toward others, but about having the cognitive depth to actively work towards the improvement of the lived experience of all creatures. It is also not about raising IQ just for the sake of newer tech; the technology is just a tool, and the more fundamental goal of increasing intelligence is to increase the facility of practical care. If all this talk of inauspicious births, compassion, and liberation of sentient beings from suffering sounds familiar, it should - there are links here to ancient ways of thinking about the world.


36. Isn't { Xenobots, bioelectric repair, etc. } going too far?

There is no magical line that separates life-improving techniques from ones that are "too much change". When early hominids went into a cave to get out of the cold rain and avoid pneumonia, they were already on a continuous journey to setting bones, brain surgery, and marrow transplants. We always use our intelligence to improve our lot and fight the vagaries of a dangerous world; there is no principled way to draw a line between improvements that are allowable and ones that should be prevented. Each technology can be debated on its own pros and cons, but there is no sense in which one can go "too far" along the path of improving life for all. Moreover, we have a moral responsibility to use our intelligence to improve life for every being.

The sense of advance is relative; I imagine that our concerns over today's technologies will sound like this to our future descendants: "Og make wheel? Og go too far!! Wear fur, make fire to cook, ok, those good; but what next - plant seeds? Set bones? Those go too far - it's playing gods. Must make taboo!!" If that sounds too outlandish, consider the reaction of crowds to the audacity of the first umbrella. This is how our current wrangling over these technologies will seem to our descendants.


37. Shouldn't we be outraged over making living machines out of skin cells? (Xenobots etc.)

No longer being able to rely on what something looks like (composition - biological vs. metallic) or where it came from (evolved or engineered) to determine how you should relate to it does call for new ethics that go beyond "how much does it look like a human brain" (see Synthetic Living Organisms: Heralds of a Revolution in Technology & Ethics). However, our outrage should be proportionally calibrated. Before anyone worries about autonomous pieces of skin, we have to deal with the millennia-long history of shaping living things toward others' purposes: ancient practices such as modifying pig snouts so the animals cannot root (and thus remain dependent on humans), and, more recently, factory farming. The abhorrent conditions for complex animals in factory farms are by far a bigger problem world-wide than anything that is happening with skin cells allowed to reboot their multicellularity.


38. Why did you specifically choose ciliated cells as a material - are they the only cells that can make biobots?

It is likely that most cells will be able to exhibit self-assembly into novel forms of life and behavior. The problem is that, currently, interesting functionality and problem-solving in spaces other than the familiar 3D space of motile behavior are hard to detect. Thus, we started with cells that could produce movement and morphogenesis - something easy to recognize and study. It is quite possible that many of the passive organoids or other ex-vivo bioengineered constructs are doing fascinating things in transcriptional, physiological, metabolic, and other spaces, but no one knows this because people tend to equate intelligence with movement. We are working to develop formalisms and tools to detect other kinds of problem-solving and exploratory behavior in unconventional embodiments, and synthetic living forms are an excellent tool for the field of Diverse Intelligence to extend our own IQ in recognizing novel functionality in unfamiliar guises.


39. { Xenobots, regeneration, enhancement, etc. } is not natural!

Don't confuse "this is how it's always been, and how it is now" with "this is how it should be" or "this is how it has to be". In the pre-scientific era, it was possible to hold a worldview in which the status quo was set up by God and thus was the way things should be, because it was set up to be the best possible way. We now know that the state of the biosphere, our own anatomies, capacities, and behavioral proclivities are all outcomes of an evolutionary process that rewards prevalence, not quality. Evolution is a meandering search that does not seek to optimize our happiness, quality of life, intelligence, ability to see truth, or any of the other things we value; it basically just optimizes for adaptive biomass - whatever forms of life happen to persist and proliferate. Surely we can do better than the vagaries of chance and necessity have done so far.

Consistent with this, the status quo is pretty terrible - disease and arbitrary limits on potential and quality of life abound. Those limitations (the ones whose removal some people call going "too far") are not set by a wise creator who knows what's good for us - they are purely accidental, driven by the meanderings of the evolutionary process through the space of possibilities. There is nothing sacred or beneficial about the limits we face in our baseline state. Once we realize that there is no one setting a beneficial agenda, we have a moral responsibility to do it ourselves. There is no one else to do it for us, and failing to pursue scientific ways to improve life is a cowardly moral abdication of responsibility. It is our duty to improve ourselves and our world, which fortunately we can do because our cognitive capacities enable working toward specific goals, not just blind local search. If we set our goal space to be inclusive and very large, the combination of intellect and compassion offers much opportunity to do better than "natural".

There's even an interesting component of this in the Judeo-Christian origin story. Why did God have Adam name the animals in the Garden of Eden - why didn't he tell Adam what they were, or have the angels pre-name them? Because it is up to us to understand the world around us - to name things (discover their true nature), and create new things that didn't exist before, naming them (in the scientific sense of understanding their essence) as we go. This doesn't mean we shouldn't be constantly humble about the many deep areas of ignorance or the unexpected consequences of any action. But the solution to those limitations is more science, not less, a mind open to life-as-it-can-be, and striving for improvements for all, not artificial self-imposed limitations of a pre-scientific worldview in which we hope that someone else will do what needs to be done.


40. What keeps me up at night? What am I worried about, with these technologies?

What keeps me up at night is the risk of committing the ethical lapse of not moving these discoveries to their full positive impact for humanity and other life forms, current and future. I worry about limitations of drive, vision, intellect, and commitment that would prevent us from implementing the moral imperative to use our minds to improve life for all, and from living up to our full potential as living beings. Fear and lack of clarity lead to the opportunity cost of failing to address the enormous biomedical suffering in the world. These technologies can help us implement effective compassion and correct the unjust disparities resulting from an evolutionary and genetic lottery that distributes a range of bodily damage across the population.

The ethics component here is not just about what could go wrong. Often people focus on the potential problems, because "don't make things worse" hides the implicit assumption that everything is fine now, and that we should just make sure we don't ruin things. This is of course something to keep an eye on, but it neglects a huge part of the equation. Things are absolutely not fine now, as is obvious from the state of the world and the phone calls I receive daily from people with horrendous medical issues (there is an almost perfect pattern: the people who call saying "stop this scary research" are young and healthy, while those who call asking "what's taking you so long to find solutions" have severe problems themselves or in their children). The moral calculus of what to do must take into account the negative balance of failing to help those whose physical embodiments are impairing quality of life.

We now know that we have not been placed, with great care for our happiness and well-being, at some carefully curated optimum of capabilities. There is nothing special, optimal, or "right" about our current levels of IQ, susceptibility to aging and disease, and various other limitations - these are just where the meandering process of evolution happened to bring us. It is up to us to rise to the challenge, move beyond the accidents of our history through genotype space and random external influences, and improve the embodied experience for all sentient beings.


Miscellaneous

41. What might be a relevant piece of art for this whole "multi-scale Selves" topic?

My favorite is this passage from Stephen King's story "The Little Sisters of Eluria":

"Jenna?

Nothing. Only the wind and the smell of the sage.

Without thinking about what he was doing (like play-acting, reasoned thought was not his strong suit), he bent, picked up the wimple, and shook it. The Dark Bells rang.

For a moment there was nothing. Then a thousand small dark creatures came scurrying out of the sage, gathering on the broken earth. Roland thought of the battalion marching down the side of the freighter and took a step back. Then he held his position. As, he saw, the bugs held theirs.

He believed he understood. Some of this understanding came from his memory of how Sister Mary's flesh had felt under his hands... how it had felt various, not one thing but many. Part of it was what she had said: I have supped with them. Such as them might never die, but they might change.

The insects trembled, a dark cloud of them blotting out the white powdery earth.

Roland shook the bells again.

A shiver ran through them in a subtle wave, and then they began to form a shape. They hesitated as if unsure of how to go on, regrouped, began again. What they eventually made on the whiteness of the sand there, between the blowing fluffs of lilac-coloured sage, was one of the Great Letters: the letter C.

"Except it wasn't really a letter, the gunslinger saw; it was a curl.

They began to sing, and to Roland it sounded as if they were singing his name.

The bells fell from his unnerved hand, and when they struck ground and chimed there, the mass of bugs broke apart, running in every direction. He thought of calling them back - ringing the bells again might do that - but to what purpose? To what end?

Ask me not, Roland. 'Tis done, the bridge burned.

Yet she had come to him one last time, imposing her will over a thousand various parts that should have lost the ability to think when the whole lost its cohesion . . . and yet she had thought, somehow, enough to make that shape. How much effort might that have taken?"
