July 28, 2013

Argument from Non-Optimality: what does it mean to be optimal?

One of my theoretical interests, which has been previously featured on this blog, is something called non-optimality [1]. Non-optimality, as you might surmise, is the tendency for systems not to behave optimally or produce optimal outcomes. This can be challenging to wrap one's head around, given that entire library shelves are devoted to optimization methods. Nevertheless, there have been several attempts to focus on the nature of non-optimal outcomes in biology and human behavior [2].

Example of the perfect mousetrap, or example of the least optimal biological system: the recurrent laryngeal nerve in Giraffa camelopardalis (giraffe). COURTESY: NatGeo YouTube video.

Optimization, whether through approximation or through the use of multiple criteria, is the standard world-view in fields ranging from economics and physics to engineering and computer science. However, as one moves towards the social and natural sciences, an interesting phenomenon emerges. While optimality criteria can be applied to specific outcomes using theoretical models, they may only sporadically describe general trends [3].

This is an example of ant colony optimization (ACO), an engineering technique derived from insect ethology (e.g. collective behavior in ants, sometimes referred to as stigmergy). In nature, ants find an optimal (e.g. the shortest) path to a food source by integrating multiple environmental signals (e.g. a network of pheromones). The engineering endeavor is shown in A) an optimal path-finding demonstration, B) an algorithm to discover shortest paths, and C) the amount of time it takes to discover a series of shortest paths (e.g. potential optima). COURTESY [3] -- A) Figure 1.7, B) Box 2.4, C) Figure 1.10.
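The pheromone-mediated path choice described above can be sketched in a few lines of Python. This is a toy version of the classic double-bridge experiment, not Dorigo and Stutzle's actual algorithm from [3]; the path lengths, the deposit rule (inversely proportional to path length), and all parameter values are assumptions made for illustration.

```python
import random

def run_colony(lengths, n_ants=100, n_rounds=50, evaporation=0.1, seed=0):
    """Ants repeatedly choose between paths in proportion to pheromone;
    shorter paths accumulate more pheromone because ants traverse them
    faster (modeled as a deposit of 1/length per crossing)."""
    rng = random.Random(seed)
    pheromone = [1.0 for _ in lengths]   # start with equal pheromone
    for _ in range(n_rounds):
        deposits = [0.0 for _ in lengths]
        for _ in range(n_ants):
            # roulette-wheel choice weighted by current pheromone
            r = rng.uniform(0, sum(pheromone))
            acc, choice = 0.0, 0
            for i, p in enumerate(pheromone):
                acc += p
                if r <= acc:
                    choice = i
                    break
            deposits[choice] += 1.0 / lengths[choice]
        # evaporation plus this round's deposits
        pheromone = [(1 - evaporation) * p + d
                     for p, d in zip(pheromone, deposits)]
    return pheromone

# Two bridges: the shorter one (length 1) ends up with most of the pheromone.
trail = run_colony([1.0, 2.0])
print(trail.index(max(trail)))  # prints 0, the index of the shorter path
```

The positive feedback here (more pheromone attracts more ants, which deposit more pheromone) is the stigmergic mechanism the caption refers to; it converges on the shortest path without any individual ant comparing path lengths.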

So why is this state of affairs the case? Are social and natural systems too complex to be optimized, or are the prevailing models simply wrong? To get at this issue in a systematic fashion, I will conduct a point-by-point critique of optimality in evolutionary biology to demonstrate where optimality may truly exist and where the hypothesis may fall short.

We can begin with the classics. In 1990, Parker and Maynard Smith [4] reviewed the use and usefulness of optimality models in evolutionary biology. While the application of optimality criteria to evolutionary systems is highly diverse, such models boil down to five basic components:

1) A model of adaptation must be constructed. Adaptation (through natural selection) is assumed to occur in an optimal fashion. In this sense, optimization as an outcome of evolution is implicitly adaptationist [5]. This adaptationist model is also implicitly probabilistic. For example, which fitness values for a trait x are most likely to result from natural selection? If those values are maximized in a systematic fashion, then it is assumed that natural selection is responsible.

2) Potential strategies related to obtaining an outcome must be defined, such as discrete behaviors or phenotypic variants. This component is biased towards behavioral ecology, but "strategy" can be thought of as a general tendency rather than an intentional behavior. In either case, there is an assumed causal relationship between the employed strategy and the outcome.

3) There must be a maximization (e.g. fitness) or minimization (e.g. energetic expenditure) criterion. In both cases, there is an expectation of a directional process. This process (the route to optimization) is adaptive by definition. Less clear is what constitutes asymptotic convergence of the optimization process. In other words, while a system might tend towards optimization, it does not follow that this automatically results in a given evolutionary system settling into an optimal equilibrium.

A graphical representation of the Prisoner's Dilemma (PD) game-theoretic model of cooperation.

4) Payoffs for pursuing various strategies must be defined (units of max/min criterion). For game-theoretic applications (e.g. PD), the payoff structure is intuitive. However, any direct consequence of the strategies discussed in point 2 has a payoff. This in turn drives adaptation -- the assumption being that equilibrium behavior is the natural outcome of long-term interaction [6].

5) This optimum can either be frequency-independent (individualistic) or frequency-dependent (population context). This means that either individual performance can become increasingly better over time, or that differential reproduction occurs in a population based on the trait in question.
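The payoff structure in point 4 can be made concrete with the PD itself. The sketch below uses the conventional textbook payoff ordering (T > R > P > S, with the values 5, 3, 1, 0), which is an assumption here rather than something taken from the figure above.

```python
# Prisoner's Dilemma payoffs: the units in which the max/min criterion
# is expressed. (my move, opponent's move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3,  # reward R for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff S
    ("D", "C"): 5,  # temptation T to defect
    ("D", "D"): 1,  # punishment P for mutual defection
}

def best_response(opponent_move):
    """Return the strategy maximizing my payoff against a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFF[(m, opponent_move)])

# Defection is the best response to either move, so mutual defection is
# the equilibrium outcome referenced in point 4 -- even though mutual
# cooperation pays both players more.
print(best_response("C"), best_response("D"))   # D D
print(PAYOFF[("D", "D")] < PAYOFF[("C", "C")])  # True: equilibrium is not optimal
```

This tiny example already illustrates the tension the post keeps returning to: the long-run equilibrium of the interaction (mutual defection) is not the optimum of the system as a whole.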

Now that we have reduced optimality models to their basic components, I will conduct a point-by-point critique of optimality approaches. Hopefully, this will bring us closer to a theory of non-optimality. For now, I will present off-the-cuff critical observations.

a) Not all evolutionary change is adaptive. For example, models of exaptation [7] and neutrality have been proposed that account for non-adaptive evolutionary changes. Traits that arise by these mechanisms are not likely to be optimized. In fact, much like cultural traits [8], they may often be maladaptive. Alternatively, highly adaptive traits may be built upon latent abilities that would be lost through strict optimization [7].

In [7], in silico metabolic reaction networks that evolved to metabolize glucose also allowed for viability on other carbon sources. The evolution of this environmental generalization undercuts the argument that the system evolved towards a single optimum. COURTESY: Figure 1 in [7].

b) What if the "strategies" that result in optimal behavior occur at different levels of organization and conflict [9] with each other? For example, different strategies may be taken within the same organism in terms of behavioral ecology and gene expression. This might be understood in terms of the relatively clear mapping between gene expression and behavior in insects [10] versus the unyielding brain-to-behavior complexity observed in mammals.

c) What if the strength of selection is weak, or if selection fluctuates over the course of evolution? In cases of uniformly strong selection, we might expect strong optimization. In cases of sporadic or weak selection, however, the outcome may be highly nonlinear (e.g. small selective advantages can lead to large changes in fitness). This may or may not lead to optimal phenotypes. Such a claim assumes that we even know what the effects of optimization look like at the phenotypic level. It might be useful to review the concept of optimizing selection [11] and its effects on phenotype.
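One way to see point c) concretely is a toy simulation of fixation under different selection strengths. The Wright-Fisher model used below, and all parameter values, are assumptions of this illustration (not anything from the post or [11]); the point it makes is standard, though: even beneficial alleles are usually lost when selection is weak.

```python
import random

def fixation_rate(s, pop_size=100, trials=2000, seed=1):
    """Fraction of runs in which a single mutant copy with selective
    advantage s reaches fixation in a Wright-Fisher population."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        count = 1  # one mutant copy to start
        while 0 < count < pop_size:
            # selection-weighted probability of sampling the mutant
            p = count * (1 + s) / (count * (1 + s) + (pop_size - count))
            # binomial sampling of the next generation
            count = sum(rng.random() < p for _ in range(pop_size))
        fixed += count == pop_size
    return fixed / trials

weak, strong = fixation_rate(0.01), fixation_rate(0.2)
print(weak < strong)  # True: stronger selection fixes the allele far more often
```

The nonlinearity mentioned above shows up here: a twentyfold increase in s produces a far-more-than-twentyfold difference in fixation rate, and under weak selection drift dominates, so the "optimizing" direction of selection is frequently not realized at all.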

See this HHMI video on pocket mouse evolution for more information on the relationship between the strength of selection and resulting fitness dynamics.

d) Some strategies may have conditional or compound payoffs which may not translate into a global optimum. For example, there may be clear benefits to niche specialization or functional partitioning. Whether or not this constitutes a globally (e.g. whole-organism) optimal outcome is an open issue. In the case of niche specialization, niche construction [12] involves the modification of the environment, which leads to cultural and natural selective feedbacks. Achievement of the optimal outcome may depend on whether this feedback leads to environmental stability or further fluctuations. In addition, recent research suggests that theoretical predictions of the PD model do not match the situational behavior of humans and other animals [13].

e) While the model of Parker and Maynard-Smith is a heuristic model for biological optimality [14], it is worth noting that in engineering, multiobjective criteria are often used to approximate the optimal properties of a complex system. Since determining biological optimality is also an exercise in approximation, we need to incorporate better ways of finding dynamic equilibrium using multiple attributes. One example of this involves the application of maximization principles to ecological simulations [15]. In this case, even though a system might evolve to maximize one thing, this may not translate into global optimization or even a maximization of fitness.
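The multiobjective idea in point e) can be sketched as a Pareto-front computation: when a system is scored on several attributes at once, there is usually a whole set of non-dominated candidates rather than a single winner, and maximizing one criterion alone picks out just one corner of that set. The candidate "phenotypes" and their objective values below are made up purely for illustration.

```python
# name -> (fitness proxy, energetic efficiency); values are invented.
candidates = {
    "a": (0.9, 0.2),
    "b": (0.6, 0.7),
    "c": (0.3, 0.9),
    "d": (0.5, 0.5),  # dominated by "b" on both objectives
}

def pareto_front(scores):
    """Keep candidates that no other candidate beats on both objectives."""
    front = []
    for name, (f, e) in scores.items():
        dominated = any(f2 >= f and e2 >= e and (f2, e2) != (f, e)
                        for f2, e2 in scores.values())
        if not dominated:
            front.append(name)
    return sorted(front)

print(pareto_front(candidates))                         # ['a', 'b', 'c']
print(max(candidates, key=lambda n: candidates[n][0]))  # 'a', the single-criterion winner
```

Note that the single-criterion "optimum" ('a') is only one of three defensible outcomes on the Pareto front, echoing the point that maximizing one thing need not translate into global optimization or even a maximization of fitness.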

Or perhaps we can take a lesson from some quarters of economics [16]: Grossman and Stiglitz found that a competitive economy cannot always be in equilibrium, but rather sustains an equilibrium degree of disequilibrium. This has been used as evidence against the efficient-markets hypothesis, which relies (sans natural selection) on the concept of market optimization over time.

While I have not focused on more traditional critiques of optimization in the evolutionary biology literature, I hope that this exercise actually leads to a series of non-optimal mathematical models that can go toe-to-toe with traditional optimization models. But I'll leave that development to future posts.


[1] see the Synthetic Daisies #non-optimality tag for more information. Not all are relevant to biology, but all feature various takes on this concept.

[2] Boyd, R.   Cultural Adaptation and Maladaptation: of kayaks and commissars. In The Evolution of the Mind: fundamental questions and controversies. S.W. Gangestad and J.A. Simpson eds. Guilford Press (2007). In this reference, maladaptation is defined in contrast to adaptation.

Crespi, B.   The evolution of maladaptation. Heredity, 84, 623–629 (2000). In this reference, maladaptation is defined as a deviation from adaptive peaks.

[3] Dorigo, M. and Stutzle, T.   Ant Colony Optimization. MIT Press, Cambridge, MA (2004).

[4] Parker, G.A. and Maynard Smith, J.   Optimality theory in evolutionary biology. Nature, 348, 27-33 (1990).

For a different take (with perspective from the evolutionary computation community and the cognition-as-computation debate), please see: Harvey, I.   Cognition is not Computation: evolution is not optimisation. ICANN97, 685-690 (1997).

Another perspective (from a discrete mathematician) can be found here: Kelk, S.   What mathematical optimization can, and cannot, do for biologists. Lorentz Center presentations.

[5] For more reading on this idea, please see: Orzack, S.H. and Sober, E.   Adaptationism and optimality. Cambridge University Press, Cambridge, UK (2001).

[6] For a take on this idea using human societies as an example, please see: Cremer, H., Marchand, M., Pestieau, P.   Investment in local public services: Nash equilibrium and social optimum. Journal of Public Economics, 65(1), 23–35 (1997).

[7] Barve, A. and Wagner, A.   A latent capacity for evolutionary innovation through exaptation in metabolic systems. Nature, doi:10.1038/nature12301 (2013).

[8] For an adaptationist perspective on this, please see: Logan, M.H. and Qirko, H.N.   An evolutionary perspective on maladaptive traits and cultural conformity. American Journal of Human Biology, 8(5), 615–629 (1996).

[9] For the role of genomic conflict and how it may potentially undercut optimization, please see: Werren, J.H.   Selfish genetic elements, genetic conflict, and evolutionary innovation. PNAS, 108(Supplement 2), 10863-10870 (2011).

[10] For two takes on this, please see: Robinson, G.E., Grozinger, C.M., and Whitfield,C.W. Sociogenomics: social life in molecular terms. Nature Reviews Genetics, 6(4), 257-270 (2005) AND Boguski, M.S. and Jones, A.R. Neurogenomics: at the intersection of neurobiology and genome sciences. Nature Neuroscience, 7(5), 429-433 (2004).

[11] For more information on optimizing selection (which is similar to but distinct from normalizing or stabilizing selection), please see: Travis, J.   The Role of Optimizing Selection in Natural Populations. Annual Review of Ecology and Systematics, 20(1), 279-296 (1989).

[12] Odling-Smee, F.J., Laland, K.N., and Feldman, M.W.   Niche construction: the neglected process in evolution. Princeton University Press, Princeton, NJ (2003).

[13] Khadjavi, M. and Lange, A.   Prisoners and their dilemma. Journal of Economic Behavior and Organization, 92, 163-175 (2013).

[14] Sometimes, the application of heuristics does not translate into optimal behavior or tradeoffs. For an example from information-seeking behavior in an in silico model (ACT-R cognitive architecture), please see: Fu, W-T., Gray, W.D.   Suboptimal tradeoffs in information seeking. Cognitive Psychology, 52, 195-242 (2006).

[15] Ackland, G.   Maximization principles and daisyworld. Journal of Theoretical Biology, 227(1), 121-128 (2004).

[16] Grossman, S.J. and Stiglitz, J.E.   On the impossibility of informationally efficient markets. American Economic Review, 70(3), 393-408 (1980).
