In this post, I will be highlighting a new paper by Fernando, Szathmary, and Husbands [1] called "Selectionist and evolutionary approaches to brain function: a critical appraisal" [2]. The paper was published in the most recent edition of Frontiers in Computational Neuroscience, an open-access journal. This post is a mix of a review of the featured paper, a more general discussion of neuroevolutionary modeling, and my own thoughts and insights.
This paper reviews so-called Darwinian models of brain dynamics and function (as part of a field referred to as Darwinian Neurodynamics), and then proposes a new method for building an evolvable, artificial brain. This can be used as an analogical model for studying brain evolution, or as a template for simulating brain evolution or building intelligent machines. While all of the models presented are selectionist (i.e. they utilize a selection mechanism), many of them propose no explicit mechanism for heredity, nor do they envision neurons as replicator vehicles. Thus, they are quasi-evolutionary without being explicitly Darwinian.
The authors also show that all of the models included in the review (there are six major Darwinian theories out there) can be restated using the Price equation. While the Price equation [3] allows natural selection to be understood in terms of the covariance between fitness and trait values, it does not have an explicitly algorithmic basis. This means that tools such as fitness landscapes cannot be extended to such theories. Fortunately, the authors consider how existing Darwinian theories might be extended to algorithmic search, culminating in a model system that can be used to test specific hypotheses.
The authors contrast their model with Gerald Edelman's Neuronal Group Selection (NGS) model [4]. The idea behind NGS is simple: the evolution of intelligent behavior is shaped by the selection of some neuronal groups (e.g. neurons, brain regions) over others. While this is loosely inspired by group selection theories in evolutionary biology, it is in many ways more akin to Hebbian [5] principles (e.g. cells that fire together wire together). In NGS, it is the initial connectivity and number of cells that determine the success of this brain architecture with respect to a given set of stimuli. This only serves as a loose analogy to natural selection, which can be defined as differential reproduction that changes allele frequencies over generations. In the paper, the authors spend a significant amount of time criticizing NGS on these grounds, which leads to a discussion of the Price equation.
Figure 1. A model of the Price equation extended to neuronal groups using several different scenarios. COURTESY: Figure 1 in [2].
The Price equation, a generalized model for quantifying natural selection, can be extended to neuronal groups as shown in Figure 1. In fact, the Price equation is well suited to modeling selection in neural systems, because it mathematically formalizes how selective pressures "weight" traits over multiple units of time. In Figure 1, we can see how the weakening and strengthening of some trait in a population (illustrated on the left) can affect the connectivity of a neuronal network (illustrated on the right) in a Hebbian manner. The network utilizes statistical information to become active and learn, but does not engage in formalized statistical learning. The Price equation is also robust to the addition of many potential sources of selection, which is broadly compatible with the integrative capacity of neuronal networks (e.g. any given synaptic connection is the product of many sources of downstream activation).
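To make the covariance formulation concrete, here is a minimal numerical sketch of the Price equation applied to a toy population of synapses. The trait values, fitnesses, and transmission noise are illustrative assumptions of mine, not parameters from the paper:

```python
import random

# Toy "population" of synapses: each has a trait z (say, strength) and a
# fitness w (how strongly its activity is reinforced). All numbers are
# illustrative assumptions, not values from the reviewed paper.
random.seed(0)
n = 100
z = [random.uniform(0.0, 1.0) for _ in range(n)]   # parent trait values
w = [0.5 + zi for zi in z]                          # fitness correlated with trait
dz = [random.gauss(0.0, 0.05) for _ in range(n)]    # transmission change ("mutation")

w_bar = sum(w) / n
z_bar = sum(z) / n

# Price equation: w_bar * delta_z_bar = Cov(w, z) + E(w * dz)
cov_wz = sum((wi - w_bar) * (zi - z_bar) for wi, zi in zip(w, z)) / n
e_wdz = sum(wi * dzi for wi, dzi in zip(w, dz)) / n
delta_z_bar_price = (cov_wz + e_wdz) / w_bar

# Direct computation: fitness-weighted offspring trait mean minus parent mean
z_offspring_bar = sum(wi * (zi + dzi) for wi, zi, dzi in zip(w, z, dz)) / (w_bar * n)
delta_z_bar_direct = z_offspring_bar - z_bar

# The two routes agree: the Price equation is an identity, not a model.
assert abs(delta_z_bar_price - delta_z_bar_direct) < 1e-9
```

The point of the exercise is that the equation bookkeeps *any* selective weighting of traits across one time step, which is what makes it so easy to restate diverse neuronal theories in its terms, and also why it says nothing about the search algorithm producing those weights.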
The theory of synaptic selection (TSS) [6], the Darwinian synapse model [7], and the neuronal replicator hypothesis [8] are all important take-off points for what the authors of [2] consider to be algorithmically-robust Darwinian neurodynamics. With his Darwinian synapse model, Sebastian Seung [7] suggests that the stochasticity inherent in neural activity is directly analogous to the random components of genetic mutation and recombination. Meanwhile, Dehaene and Changeux's TSS theory [6] suggests that objects are first encoded in redundant fashion by the stochastic activity of a large number of neurons, then later stabilized by learning, which is a form of selection. This is known as selection by amplification [9], which refers to the consequences of this process on a neuronal network topology (see Figure 2 for a general example). Despite some accounting of the stochastic nature of both population dynamics and brain activity, evolutionary neutrality is not accounted for in these models (i.e. they are strictly selectionist).
Figure 2. An example of how selection can amplify activity in a (simulated) neuronal network. COURTESY: Figure A2 in [2].
What does it mean to be "algorithmically robust"? In this case, the implication is that since no heritable genotypes exist, the individuals that constitute a population (neurons and/or synaptic connections) cannot explore all possible genotypic and/or phenotypic states. In the language of physics, none of these models assume ergodicity as an initial condition.
According to Fernando, Szathmary, and Husbands, one way to make the Price equation-based models algorithmic is to add units of evolution to their structure. This involves applying the Maynard Smith model of evolutionary units to a neuronal context, which requires modeled neuronal units to exhibit autocatalytic growth, inheritance, and variability [10]. This is not incompatible with a Hebbian view of the brain, and it gives a candidate model of neural dynamics a set of "lifelike" attributes. This approach can be further validated using mathematical logic: a modified Hebb rule is compatible with the well-known Quasispecies (Eigen) equation [11], which, like the biological brain, incorporates all three elements of Maynard Smith's model.
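As a rough illustration of how those three elements interact, here is a toy replicator-mutator (quasispecies-style) iteration for two competing "activity patterns". The fitness values and copying fidelity are assumptions chosen for illustration, not parameters from the paper:

```python
# Toy quasispecies dynamics: a fitter "master" pattern competes with a
# mutant pattern. Growth rates f and copying fidelity q are illustrative.
f = [2.0, 1.0]        # replication rates (autocatalytic growth)
q = 0.9               # probability a pattern is copied without error (inheritance)
Q = [[q, 1 - q],      # Q[i][j]: probability that copying type j yields type i
     [1 - q, q]]      # off-diagonal terms supply variability

x = [0.5, 0.5]        # initial frequencies
for _ in range(200):
    growth = [f[j] * x[j] for j in range(2)]
    new = [sum(Q[i][j] * growth[j] for j in range(2)) for i in range(2)]
    total = sum(new)
    x = [xi / total for xi in new]   # normalization implements selection

# At mutation-selection balance the master type dominates but never fixes:
# for these parameters it settles near a frequency of 0.82.
```

All three Maynard Smith criteria appear explicitly: `f` supplies growth, the diagonal of `Q` supplies inheritance, and its off-diagonal entries supply variability. Remove any one and the dynamics stop being evolutionary in the required sense.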
The authors of [2] then propose that an evolutionary algorithm could be used to bridge the gap between existing neuronal theories and the gradient descent (or continuous exploratory) behavior more indicative of fitness landscapes. While this is not a novel idea, the assumptions regarding the encoding scheme [12] used by the authors are rooted in previous, non-Darwinian approaches. This is what makes for a unique approach. It is fitting then that the rest of the paper moves in this direction. One way to bridge the gap between brain function and Darwinian dynamics is to use a Bayesian model to approximate how a population learns from information and moves towards an adaptive state. A second is to view evolution by natural selection as an optimum-seeking process in which the global optimum is not always guaranteed. A third is to incorporate population structure into a model of neural evolution so as to account for explicitly dynamic phenomena [13].
Figure 3. The Darwin series of brain-based devices. FROM LEFT: Darwin VII (models the somatosensory loop in mammals), Darwin VIII (models re-entrant connections within the visual pathways of mammals), Darwin IX (texture discrimination using artificial whiskers). COURTESY: Neurosciences Institute website.
Figure 4: Images of the gantry robot (left), which uses the GasNet evolutionary encoding (right). COURTESY: Figures 6 and A1, [2].
Like Edelman before them (see Figure 3), the authors test this model in a robot. Edelman is famous for his Darwin series of robots built specifically to test NGS. In [2], a gantry robot is used to test the GasNet encoding of a neuronal network. Neural "gas" networks (e.g. GasNet) are built around nodes (neurons) and arcs (synapses), but also utilize a gas (series of concentric circles -- see Figure 4) emitted from each neuron to simulate the release of neurotransmitter. In this way, neurons can be indirectly affected by local activity, and by extension use the "gas" as a signal for selection. These features enable the evolution and selection of certain pathways in the model that exhibit many of the adaptive properties (e.g. evolvability, robustness) common to formal evolutionary systems with complex genotypes and phenotypes.
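For readers unfamiliar with the idea, here is a heavily simplified sketch of a GasNet-style update. The spatial layout, the linear decay law, and the gain rule are stand-ins of my own for illustration, not the published GasNet equations:

```python
import math

# Toy GasNet-style network: neurons sit in a 2D plane, connect via weighted
# synapses, and also emit a diffusing "gas" that modulates the gain of
# nearby neurons. All parameters here are illustrative assumptions.
positions = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]   # neuron coordinates
weights = [[0.0, 0.8, -0.4],
           [0.5, 0.0, 0.9],
           [0.3, -0.6, 0.0]]                        # weights[i][j]: j -> i synapse
emission_radius = 1.2                                # gas reaches this far

def gas_at(i, activations):
    """Gas concentration at neuron i: emissions from active nearby neurons."""
    total = 0.0
    for j in range(len(positions)):
        if j == i:
            continue
        d = math.dist(positions[i], positions[j])
        if d < emission_radius:
            total += activations[j] * (1.0 - d / emission_radius)  # linear falloff
    return total

def step(activations):
    new = []
    for i in range(len(positions)):
        net = sum(weights[i][j] * activations[j] for j in range(len(positions)))
        gain = 1.0 + gas_at(i, activations)   # gas modulates the transfer function
        new.append(math.tanh(gain * net))
    return new

a = [0.1, 0.2, 0.3]
for _ in range(5):
    a = step(a)
```

The key design point this captures is that the gas couples neurons by *proximity* rather than by wiring, so an evolutionary search over positions and radii can shape network behavior through a channel independent of the synaptic weights.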
There are many other innovations featured in this model that I will let the reader explore on their own. A number of other papers involving one or more of this paper's authors elaborate on particular details of the evolutionary computation and modeling. Overall, this paper is a comprehensive foray into the world of brain evolution, modeling, and evolutionary computation that synthesizes a lot of previous work on evolution and brain function. By preparing this post, I learned a lot about both contemporary thinking in this area and what the future directions might be for applying these models to designing artificial systems and understanding in vivo systems alike.
References:
[1] This is an interesting collaboration. Eors Szathmary is an evolutionary geneticist, and Phil Husbands is an artificial life/evolutionary robotics researcher. Did they team up through interactions with the first author?
[2] Fernando, C., Szathmary, E., and Husbands, P. (2012). Selectionist and evolutionary approaches to brain function: a critical appraisal. Frontiers in Computational Neuroscience, 8(24), 1-28.
[3] Price, G.R. (1995). The nature of selection. Journal of Theoretical Biology, 175, 389-396. Home of the Price equation.
[4] Edelman, G. (1987). Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, New York.
[5] Hebb, D.O. (1949). The Organization of Behavior. John Wiley and Sons, New York.
[6] Changeux, J.P. and Dehaene, S. (1989). Neuronal models of cognitive function. Cognition, 33, 63-109.
[7] Seung, S.H. (2003). Learning in spiking neural networks by reinforcement of stochastic synaptic transmission. Neuron, 40, 1063-1073. Sebastian Seung just released a new book for a popular audience called "Connectome" that focuses on the implications of his work on connectomics (or the things that characterize and determine the interconnectivity of neuronal cells in the brain).
[8] Fernando, C., Karishma, K.K., and Szathmary, E. (2008). Copying and evolution of neuronal topology. PLoS One, 3, e3775.
[9] Lieberman, E., Hauert, C., and Nowak, M. (2005). Evolutionary dynamics on graphs. Nature, 433, 312-316. This is a really interesting (but purely theoretical) paper. Highly recommended.
[10] Maynard Smith, J. (1998). Evolutionary Genetics. Oxford University Press, Oxford, UK.
[11] Fernando, C., Vasas, V., Szathmary, E., and Husbands, P. (2011). Evolvable paths: a novel basis for information and search in the brain. PLoS One, 6, e23534. This is another interesting paper, and serves as a precursor to the current work.
[12] An evolutionary encoding, for those not familiar, is the basic structure and set of assumptions that go into representing evolution as a simulation. For example, how does one represent a simple phenotype (usually a simple geometry) as a genotype (a string of binary digits)? There are many possible ways to do this, which is why the encoding scheme is often the most critical determinant of the success or failure of a particular evolutionary model.
[13] Izhikevich, E.M. (2007). Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral Cortex, 17, 2443-2452 AND Izhikevich, E.M. (2006). Polychronization: computation with spikes. Neural Computation, 18, 245-282.
In this model, population heterogeneities (so-called polychronous groups) are reinforced by artificial dopaminergic signals incorporated into the neuronal model.
INTERESTING FACT: Izhikevich is the founder of Scholarpedia, which is a peer-reviewed version of Wikipedia.