Written by Aaron Cardon
Close your eyes and picture that first moment of direct extraterrestrial contact, but add a less familiar twist: imagine that we are the arriving extraterrestrials. Far into our future (or perhaps nearer than we think), with ever-advancing astronomy, unmanned extrasolar probes, and perhaps manned missions to nearby planetary systems, the depths of the vast interstellar ocean finally yield that Holy Grail of astrobiology: dozens of light years away, we spot another planet stamped with an unmistakable signature of intelligent life. After our species-long history of wondering, and with the technology available, are we not compelled to go to them?
If your answer is yes, read on…but prepare yourself for another imaginative leap. I’ve told you that we have the technology, and asked you to vaguely imagine us arriving in their solar system to present ourselves in that incredible moment of first contact. But I didn’t describe what we would look like descending from the skies; I left your brain to fill in what this interstellar technology looks like. My mind normally goes to a habitation-module lander (think Apollo), cruising to the softest landing possible under intensive thrusters, with the deck then slowly unfolding to reveal a biped silhouette. Others, perhaps, think first of Kirk and crew appearing suddenly, transported from their orbiting ship directly in front of an alien welcoming committee. Did you imagine, however, that when we arrive, we might fall directly to their planet’s surface, with no gear except our “selves”? We may walk on two legs, or perhaps four, or eight; we might not walk at all. Most noticeably, however, we are a fraction of our current size and weight, almost certainly don’t look very hominid at all, and are composed entirely (or at least mostly) of circuit boards, memory sticks, and processor chips.
When discussing deep-space exploration, brain emulation – the use of advanced computers to digitally recreate a complete (or essentially complete) human neural network – comes up with increasing frequency. As our computers become progressively faster, more intricate, and more compact, it becomes easier and easier to envision such a “digital crew” and the benefits it might offer to mission design. I will introduce here, to stimulate discussion, what is known about the feasibility of whole-brain emulation technology, the advantages it may offer in its application to manned interstellar exploration, and the mission designs implied by those advantages. We may then proceed to discuss the state of the evidence and develop each of these sections in further detail.
Technology-readiness and Feasibility
If emulation turns out to be feasible, it is important to distinguish clearly among its many independent research goals and purposes. Most basically, we can conceptualize brain emulation as taking three forms. Whole brain emulation (WBE) can be defined as a successful digital representation of the general components of a whole brain, operating in real-time (or close to it) in sufficient detail to reliably reproduce its most important outputs (i.e. behaviors). The more specific use of brain emulation to model the “standard” output of a human brain (most generally validated as successful by some sort of Turing test) can be labeled mind emulation. It is at this level, where its use becomes comparable to that of other AI models, that I think most of us begin to envision practical brain emulation and the fantastic uses it could provide. Person emulation, finally, is the digital reproduction of specific minds (i.e. individuals). It is difficult, at our current level of knowledge, to predict the relative difficulty of achieving each of these forms, although it seems likely that they require increasing levels of technological sophistication. Person emulation, for example, would appear to require the ability to specifically image and capture (either destructively or not) all relevant parameters of a particular brain, while mind emulation could be demonstrated merely by gathering sufficient data, from as many sources as necessary, to construct a model of each system relevant to whole-brain function.
Although the feasibility of even basic WBE has yet to be demonstrated, there are good reasons to expect that it may be possible, even in the relatively near-term future. Only a few unproven assumptions must be true to accept its theoretical plausibility – most notably, scale separation. Other assumptions or speculations depend on the particular form of emulation being discussed.
Ghost in the shell: Brain emulation is a common topic in science fiction
The feasibility of any sort of whole brain emulation is dependent on scale separation – the principle that at some sufficiently high level of system function, brain activity can be described, accurately predicted, and reproduced. Brains, as most modern students will remember, are made up of neurons (usually estimated at 100 billion of them), connected to one another by, on average, 1000 synapses (100 trillion synapses, for those keeping count). Neurons communicate with one another across these synapses by chemical stimulation: an action potential is “fired” in a neuron, stimulating the release of neurotransmitters, which float across the synapse and modify the properties of the receiving neuron (usually by increasing or decreasing the receiving neuron’s probability of firing). Neurons, like all cells, are incredibly complex individual units whose function relies on tightly-regulated protein interactions. Protein interactions are governed by quantum and classical chemical behaviors which we have yet to completely model. So at which level of this complex system is the information contained?
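The chain from synaptic input to an all-or-none action potential can be caricatured with a “leaky integrate-and-fire” neuron, the simplest kind of compartment model. The sketch below is purely illustrative – the threshold, leak, and input values are arbitrary, not physiological measurements:

```python
# A minimal leaky integrate-and-fire neuron: synaptic input charges the
# membrane potential, which leaks back toward rest each step; crossing
# threshold "fires" an all-or-none spike and resets. Parameters are
# illustrative assumptions, not physiological values.

def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike train (0/1 per time step) for a list of input currents."""
    v = reset                        # membrane potential starts at rest
    spikes = []
    for current in inputs:
        v = leak * v + current       # leak toward rest, then add input
        if v >= threshold:           # all-or-none action potential
            spikes.append(1)
            v = reset                # reset after firing
        else:
            spikes.append(0)
    return spikes

# Sustained input integrates until the neuron fires, then the cycle repeats:
print(simulate_lif([0.2] * 10))   # → [0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
```

Note that the spike itself carries no analog detail: only its timing matters, which is exactly the scale-separation bet described above.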
It has been suggested, though never widely accepted, that information contained either at the quantum molecular level, or within the continuous variables of the analog signals, may be used by the brain to enable “hypercomputation” that would not be amenable to emulation. These two scenarios would essentially compel us to accept that the physical structure of a brain is necessary to produce a mind, or at least make it unlikely that the necessary computation is a tractable problem. Both remain valid hypotheses (they will be best tested by progressive improvements in the neural network models discussed below); fortunately for our future emulators, however, both are difficult to reconcile with current neuroscience.
So returning to our levels of scale, the action potential (the “all-or-none” activation of an individual neuron) – or, perhaps, the repeated action potentials in a spike train – is the fundamental unit of information in contemporary neuroscience. Neurons are, after all, binary on/off units, just like bits; their information-processing capacity lies in their extensive interconnectedness. Wait, you say: the connections of neurons are not random or average at all; rather, there are different specialized neuron types with specific, unique functions and connections. In other words, even if we had complete, functional electronic models of 100 billion “typical” neurons with their thousand connections each, we obviously could not simply connect them to each other randomly and expect a mind to emerge (more to the point, we should probably expect an epileptic meltdown). Thus, to emulate the brain, we may “merely” have to digitize the relevant properties of two things: the thresholds and excitability of individual neurons (the compartment model) and the map of their connections (the connectome) in the human brain.
Indeed, that combination is very close to the form of most modern neural network models. Individual nodes, representing neurons, act to process their input according to their compartment model, and based on their result, either propagate a signal to the next node or not. Usually, depending upon the application, most models simplify either the compartment model or the connectome, even while focusing on a small population of neurons (often, though not always, modeled off of part of the brain). Even such limited models, however, have been successful in replicating some functions and outputs of neural networks. For example, auto-associative networks modeled from the hippocampus (one of the most extensively studied regions of the brain, known to be important to memory) have been shown to be capable of pattern recognition akin to prompted memory recall: input of a small portion of a previously “learned” pattern will reliably reproduce the pattern. Another group, working on prosthetic replacement of the hippocampus, has developed an integrated circuit that, when used to replace parts of the hippocampus circuit, can reliably replicate the spike timing and output of the network.
Although our models remain limited in size and scope, advanced simulations running on standard computers may still take hours to produce mere seconds worth of real-time output. There are, however, good reasons to believe such limitations are temporary and that whole brain, real-time emulation can still develop from extrapolations of the models above. These reasons are closely related to those which would offer reduced payloads, so we will discuss them together later.
Successes such as those mentioned above indicate that a faithful emulation could emerge from models containing some sufficient detail of compartments and connectedness. At the groundbreaking conference on whole brain emulation, held by Oxford’s Future of Humanity Institute in 2008, an informal poll of attendees produced a consensus on the expected range of complexity for mature emulation technology (Sandberg and Bostrom, 14). It is possible (and was suggested by a few at that conference) that non-reducible information may be contained at the level of neurotransmitter concentrations, quaternary protein structures, or ephaptic (local electrical fluctuation) effects. While the evidence for such information-processing capacity is preliminary at best, the addition of this information could increase the computational and imaging requirements by a few orders of magnitude; it should not, however, generally be considered a major impediment to development.
So while it is certainly possible that some of the unanswered questions described above could, when answered, demonstrate that digital technology cannot emulate biological brains to a degree sufficient to replace (or equivalently complement) human crew members, the future of whole brain emulation overall looks quite bright. Introducing their report on the Oxford conference, Sandberg and Bostrom summarize the expert consensus that although “WBE [whole brain emulation] represents a formidable engineering and research problem…[it] could, it would seem, be achieved by extrapolations of current technology.” (5). Research is being funded across the globe with the explicit intent of further defining and completing the reverse-engineering challenges. When we return to discuss the advantages of WBE for crew design, we will review the successes of the Blue Brain Project and its goals for the Human Brain Project, as well as HHMI’s Janelia Farm projects making strides toward maturing the necessary technologies, and extrapolate from current successes to predict their usefulness to future missions.
Design Advantages of Digital Crews
Whereas traditional biological crews require extensive life-support systems, a digital crew would theoretically allow massively reduced payloads: after all, the digital crew reduces to simple computer hardware powered directly by electricity, achieving the same ends while eliminating those pesky biological nuisances like food, oxygen, and waste disposal. Once we get our minds out of these weary cranial vaults and into their sturdy emulation modules, might we not be better pilots without biological fatigue, or require a smaller crew without the need for sleep shifts?
Indeed, I framed my scenario above to contain a pair of essential design challenges which would support such a crew choice: a large distance (at the very edge of our foreseeable feasibility based on other mission design considerations) with a consequently long mission, along with a mission objective requiring a crew (establishing first contact). In comparison, my favorite description of brain emulation in science fiction happens to be from Stephen Baxter’s Manifold: Space. In that case, a journey for two human astronauts (and their alien and Neandertal companions) to the center of the Milky Way is enabled by reduction of the entire payload into a “ship” presumably consisting of a very miniaturized (not to mention durable) supercomputer the weight and size of a relay baton, allowing an exotic propulsion system to accelerate this small payload to near the speed of light. I will avoid any further spoilers to Baxter’s plot, but the common elements across most proposals to use digital crews appear to be reduced payload and a stability and self-sufficiency extending beyond the human lifespan. So it is reasonable to consider, by doing our best to envision what such a far-off digital crew would look like, what kind of payload reduction we may achieve.
No matter the propulsion system, payload is a costly consideration in mission design. Although we must assume that early brain emulation would run on very large distributed computing networks, relatively conservative applications of Moore’s Law, together with the relative energy cost of silicon- versus carbon-based information processing, predict that once demonstrated feasible, brain emulation – particularly in a design appropriate for use as a crew member – should be reducible to the point of being more cost-effective than a biological crew in payload terms.
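A back-of-envelope extrapolation shows the flavor of this argument. Every figure below is an assumption chosen for illustration – the exaflop-scale emulation requirement, today’s compute-per-kilogram, the payload budget, and the two-year doubling time are all placeholders, not settled estimates:

```python
import math

# Toy Moore's-law extrapolation: years until a fixed emulation workload
# fits in a small payload, assuming compute-per-kilogram doubles on a
# fixed schedule. All input figures are illustrative assumptions.

def years_until_portable(required_flops, flops_per_kg_today,
                         payload_kg, doubling_years=2.0):
    """Years of doubling needed before `payload_kg` of hardware can host
    a workload of `required_flops` (FLOP/s)."""
    needed_per_kg = required_flops / payload_kg
    if flops_per_kg_today >= needed_per_kg:
        return 0.0                   # already portable today
    doublings = math.log2(needed_per_kg / flops_per_kg_today)
    return doublings * doubling_years

# Assume 1e18 FLOP/s for a real-time WBE, 1e13 FLOP/s per kg of hardware
# today, and a 100 kg payload budget:
print(years_until_portable(1e18, 1e13, 100))   # ≈ 20 years
```

The point is not the specific answer but its shape: because the gap closes exponentially, even a thousand-fold shortfall in compute density translates into only a couple of decades of hardware progress under these assumptions.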
Self-sufficiency and stability of such crew members, unfortunately, remain entirely uncertain. While I am loath to invoke any “mad-at-the-edge-of-space” stereotypes, one of the first research goals of mind emulation is massive improvement – through the availability of model systems for hypothesis testing – in the standard models of psychology, neurology, and particularly cognitive neuroscience. There is little clinical evidence to support the speculation that a mature mind emulation may be amenable to “saving and storing,” turned on and off at will. Even for this simple design question, however, we do not, and cannot, know until we find it possible (and, perhaps more importantly, ethical) to do just that. As we move progressively through the steps of “building a mind” with a brain emulator, the questions mount further and faster regarding psychological stability and, by extension, what support structures must be put in place to achieve it.
Though certainly still one of the most fanciful propositions in deep-space mission design, the promise of digital crews offered by brain emulation and brain-like computers remains exciting and alluring. Brain emulation sits, at best, at technology-readiness level 1, with active investigation of basic principles and ever-increasing hopes of near-term proof-of-concept experiments. It is an exciting possible technology which could offer such significant benefits to a deep-space mission as to become indispensable, even if replacement of biological crews does not turn out to be its primary function. Should it prove feasible, it may well serve best as an adjunct to traditional crews, as an intermediary between a human crew and ship control systems – an integral crew member with particular mission duties, but not a self-sufficient crew entirely separate from its human counterparts.